Bachelor’s Thesis-I: Hallucination Detection in LLMs

I am currently conducting research on Large Language Model (LLM) hallucinations with Prof. Ganesh Ramakrishnan, in collaboration with Adobe. Our work focuses on methods for detecting hallucinations and flagging instances where an LLM produces factually incorrect output. We built a robust framework that generalizes across tasks while requiring minimal task-specific constraints, data, and computational resources. We submitted this work to EMNLP 2024.
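For illustration only, the sketch below shows one common style of hallucination detection based on sampling consistency (in the spirit of self-consistency checks such as SelfCheckGPT); it is not the method developed in the thesis. The `sample_responses` stub, the overlap-based scoring, and the thresholds are assumptions introduced purely for this example.

```python
# Illustrative sketch only: a generic sampling-consistency hallucination check.
# This is NOT the thesis method; `sample_responses` is a hypothetical stub.
from typing import List


def sample_responses(prompt: str, n: int = 5) -> List[str]:
    """Hypothetical stub: would call an LLM n times with temperature > 0."""
    raise NotImplementedError("Plug in an actual LLM sampling call here.")


def support_score(claim: str, samples: List[str]) -> float:
    """Fraction of sampled responses that lexically overlap with the claim.

    A low score means the claim is not consistently reproduced across
    samples, which is one rough signal that it may be hallucinated.
    """
    claim_tokens = set(claim.lower().split())
    if not claim_tokens or not samples:
        return 0.0
    hits = 0
    for sample in samples:
        sample_tokens = set(sample.lower().split())
        overlap = len(claim_tokens & sample_tokens) / len(claim_tokens)
        if overlap >= 0.5:  # arbitrary overlap threshold for this sketch
            hits += 1
    return hits / len(samples)


def flag_hallucinations(answer_sentences: List[str], samples: List[str],
                        threshold: float = 0.4) -> List[str]:
    """Return sentences whose support across samples falls below the threshold."""
    return [s for s in answer_sentences if support_score(s, samples) < threshold]
```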