Research
My work focuses on building explainable and trustworthy artificial intelligence systems, with emphasis on large language models, hallucination detection, and multilingual natural language processing.
Research Areas
Explainable AI for Large Language Models
Developing methods to make large language models more interpretable and transparent. This involves creating techniques to understand model decision-making processes, identify potential biases, and provide meaningful explanations for model outputs. The goal is to build AI systems that are not only powerful but also trustworthy and accountable.
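To give a concrete flavor of this kind of work, the sketch below shows one very simple attribution idea, leave-one-out (occlusion) token importance, applied to a stand-in scoring function. The scorer and inputs are hypothetical placeholders for illustration only, not a method from my papers.

```python
def occlusion_importance(tokens, score_fn):
    """Leave-one-out attribution: how much does the score drop when each
    token is removed? Larger drops suggest the token mattered more."""
    base = score_fn(tokens)
    return {
        tok: base - score_fn(tokens[:i] + tokens[i + 1:])
        for i, tok in enumerate(tokens)
    }

# Stand-in scorer: counts positive words (a real setup would query an LLM).
POSITIVE = {"great", "excellent", "reliable"}
def toy_score(tokens):
    return sum(tok in POSITIVE for tok in tokens) / max(len(tokens), 1)

print(occlusion_importance("the model gives excellent and reliable answers".split(), toy_score))
```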
Hallucination Detection and Mitigation
Addressing one of the most critical challenges in large language models: the generation of plausible but factually incorrect information. My work involves developing robust detection mechanisms to identify hallucinated content and creating mitigation strategies to reduce false information generation while maintaining model performance and fluency.
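One lightweight detection idea, sketched below purely as an illustration (not my published method), is self-consistency checking: re-sample the model on the same prompt several times and flag claims that the re-samples do not support. The string-similarity measure and threshold here are simple stand-ins for an NLI- or QA-based comparison.

```python
from difflib import SequenceMatcher

def consistency_score(claim: str, samples: list[str]) -> float:
    """Score how well a generated claim agrees with independently
    re-sampled answers; low agreement hints at possible hallucination."""
    if not samples:
        return 0.0
    sims = [SequenceMatcher(None, claim.lower(), s.lower()).ratio() for s in samples]
    return sum(sims) / len(sims)

# Toy example: the same question answered three more times by the model.
claim = "The Eiffel Tower was completed in 1889."
resamples = [
    "It was finished in 1889.",
    "Construction ended in 1889.",
    "The tower opened in 1887.",
]
score = consistency_score(claim, resamples)
flagged = score < 0.5  # threshold is illustrative, not tuned
print(f"consistency={score:.2f}, flagged={flagged}")
```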
Multilingual and Low-Resource Machine Translation
Building robust translation systems for multilingual contexts with a special focus on low-resource language pairs. This research aims to democratize language technology by improving translation quality for underrepresented languages, enabling better cross-lingual communication and information access for diverse linguistic communities worldwide.
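For low-resource language pairs, character-level metrics such as chrF are often more informative than BLEU. The snippet below shows a typical evaluation call with sacrebleu on placeholder Hindi sentences; the data is illustrative, not drawn from my experiments.

```python
import sacrebleu

# Placeholder system outputs and references (one reference stream).
hyps = ["यह एक परीक्षण वाक्य है", "मौसम आज अच्छा है"]
refs = [["यह एक परीक्षा वाक्य है", "आज मौसम अच्छा है"]]

chrf = sacrebleu.corpus_chrf(hyps, refs)
bleu = sacrebleu.corpus_bleu(hyps, refs)
print(f"chrF: {chrf.score:.1f}  BLEU: {bleu.score:.1f}")
```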
LLM Evaluation and Benchmarking
Creating comprehensive evaluation frameworks and benchmarks to assess large language model performance across multiple dimensions including accuracy, fairness, robustness, and cross-lingual capabilities. This work contributes to establishing standardized methods for measuring progress in NLP research and ensuring model reliability.
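In practice, a benchmark harness reduces to aggregating per-example judgments along dimensions like these. The minimal sketch below computes exact-match accuracy grouped by language as one such slice; the field names and toy records are assumptions made for illustration.

```python
from collections import defaultdict

def exact_match_by_group(examples):
    """Aggregate exact-match accuracy per group (e.g. language or task),
    given dicts with 'group', 'prediction', and 'reference' keys."""
    hits, totals = defaultdict(int), defaultdict(int)
    for ex in examples:
        totals[ex["group"]] += 1
        if ex["prediction"].strip().lower() == ex["reference"].strip().lower():
            hits[ex["group"]] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy records standing in for real benchmark outputs.
records = [
    {"group": "hi", "prediction": "दिल्ली", "reference": "दिल्ली"},
    {"group": "hi", "prediction": "मुंबई", "reference": "दिल्ली"},
    {"group": "te", "prediction": "హైదరాబాద్", "reference": "హైదరాబాద్"},
]
print(exact_match_by_group(records))  # e.g. {'hi': 0.5, 'te': 1.0}
```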
Current Project
Cross-Lingual LLM-Generated Text Detection
In Submission

Developing evaluation methods for detecting LLM-generated text in Hindi and Telugu. This project addresses the need for robust detection mechanisms in Indic languages, contributing to efforts to combat misinformation and ensure content authenticity in multilingual contexts.
The work introduces novel benchmarking approaches and evaluation metrics specifically designed for low-resource languages, with potential applications in content moderation, academic integrity, and digital authenticity verification across diverse linguistic communities.
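As a rough illustration of what a detection baseline can look like (not the approach taken in the submission), the sketch below trains a character n-gram classifier, which transfers reasonably across scripts such as Devanagari and Telugu. The toy texts and labels are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: 1 = LLM-generated, 0 = human-written.
texts = [
    "यह लेख कृत्रिम बुद्धिमत्ता द्वारा लिखा गया प्रतीत होता है।",
    "आज बाज़ार में सब्ज़ियों के दाम फिर बढ़ गए।",
    "ఈ వ్యాసం ఒక భాషా నమూనా రాసినట్లుగా కనిపిస్తుంది.",
    "నిన్న రాత్రి మా ఊరిలో భారీ వర్షం కురిసింది.",
]
labels = [1, 0, 1, 0]

# Character n-grams are script-agnostic, so one pipeline covers Hindi and Telugu.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)
print(detector.predict(["మరింత సమాచారం కోసం అధికారిక వెబ్‌సైట్ చూడండి."]))
```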