Research Areas

Explainable AI for Large Language Models

Developing methods to make large language models more interpretable and transparent. This involves creating techniques that trace model decision-making, surface potential biases, and provide meaningful explanations for model outputs. The goal is to build AI systems that are not only powerful but also trustworthy and accountable; a toy attribution example is sketched below.

Interpretability · Model Transparency · Trustworthy AI · XAI
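
To give a flavor of the kind of technique involved, the sketch below scores each word's contribution to a classifier's prediction by deleting it and measuring the drop in confidence. The `transformers` sentiment pipeline and the leave-one-out scheme are illustrative assumptions, not the specific methods from my work.

    # Minimal leave-one-out attribution: remove each word in turn and
    # measure the drop in the predicted class probability. Assumes the
    # `transformers` package; the pipeline downloads a small default
    # English sentiment model on first use.
    from transformers import pipeline

    clf = pipeline("sentiment-analysis")

    def leave_one_out(text: str):
        words = text.split()
        base = clf(text)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
        scores = []
        for i in range(len(words)):
            ablated = " ".join(words[:i] + words[i + 1:])
            out = clf(ablated)[0]
            # Probability the ablated input still assigns to the original
            # label (the default model is binary, hence 1 - score otherwise).
            p = out["score"] if out["label"] == base["label"] else 1 - out["score"]
            scores.append((words[i], base["score"] - p))
        return base["label"], scores

    label, attributions = leave_one_out("The film was surprisingly good despite a slow start")
    for word, drop in sorted(attributions, key=lambda x: -x[1]):
        print(f"{word:12s} {drop:+.3f}")

Words with the largest drops are the ones the model leans on most; more faithful attribution methods such as integrated gradients or SHAP refine the same idea.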

Hallucination Detection and Mitigation

Addressing one of the most critical challenges in large language models: the generation of plausible but factually incorrect information. My work develops robust mechanisms that detect hallucinated content and mitigation strategies that reduce false generations while preserving model performance and fluency; one simple detection idea is sketched below.

Factuality · Hallucination Detection · LLM Reliability · Content Verification
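
One family of detectors checks an answer against independently sampled answers to the same prompt, on the intuition that hallucinated claims are reproduced inconsistently (SelfCheckGPT is a well-known instance). The sketch below assumes the samples have already been drawn from the model and uses deliberately crude lexical overlap where a real system would use NLI- or question-answering-based scoring.

    # Consistency-based hallucination flagging: a sentence whose content
    # words rarely appear in independently sampled answers to the same
    # prompt is treated as poorly supported, hence a likely hallucination.
    def support_score(sentence: str, samples: list[str]) -> float:
        """Average fraction of the sentence's content words found in each sample."""
        words = {w.lower().strip(".,") for w in sentence.split() if len(w) > 3}
        if not words or not samples:
            return 0.0
        overlaps = []
        for s in samples:
            sample_words = {w.lower().strip(".,") for w in s.split()}
            overlaps.append(len(words & sample_words) / len(words))
        return sum(overlaps) / len(overlaps)

    def flag_hallucinations(sentences, samples, threshold=0.5):
        flagged = []
        for sent in sentences:
            score = support_score(sent, samples)
            if score < threshold:  # weakly supported across samples
                flagged.append((sent, round(score, 2)))
        return flagged

The mitigation side can reuse the same signal, for example by abstaining or re-generating whenever support falls below the threshold.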

Multilingual and Low-Resource Machine Translation

Building robust translation systems for multilingual settings, with a special focus on low-resource language pairs. This research aims to democratize language technology by improving translation quality for underrepresented languages, enabling better cross-lingual communication and information access for diverse linguistic communities worldwide; a minimal example with an open multilingual model follows.

Multilingual NLP · Low-Resource Languages · Neural MT · Cross-lingual Transfer
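
For concreteness, here is how one openly available many-to-many model, NLLB-200 via Hugging Face `transformers`, can be driven; the distilled checkpoint and the English-to-Telugu direction are illustrative choices, not a description of my own systems.

    # Translate with NLLB-200, which covers 200 languages, including many
    # low-resource ones. Languages are identified by FLORES-200 codes.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    model_name = "facebook/nllb-200-distilled-600M"
    tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    inputs = tokenizer("Language technology should serve every community.",
                       return_tensors="pt")
    # The target language is chosen by forcing the first decoder token.
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids("tel_Telu"),
        max_new_tokens=64,
    )
    print(tokenizer.decode(generated[0], skip_special_tokens=True))

Forcing the beginning-of-sequence token is how NLLB encodes the translation direction, which is what lets a single checkpoint serve thousands of language pairs.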

LLM Evaluation and Benchmarking

Creating comprehensive evaluation frameworks and benchmarks that assess large language models along multiple dimensions, including accuracy, fairness, robustness, and cross-lingual capability. This work contributes to standardized methods for measuring progress in NLP research and ensuring model reliability; a skeletal evaluation loop is sketched below.

Model Evaluation · Benchmarking · Performance Metrics · Quality Assessment
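
Most such frameworks reduce to a loop of this shape: run the system under test over benchmark items and aggregate a metric per slice of interest, per language here. Exact match and the `model_answer` hook are placeholder assumptions; real benchmarks add task-specific metrics and statistical reporting.

    # Skeletal evaluation harness: per-language exact-match accuracy.
    from collections import defaultdict

    def exact_match(pred: str, gold: str) -> bool:
        return pred.strip().lower() == gold.strip().lower()

    def evaluate(benchmark, model_answer):
        """benchmark: iterable of dicts with 'question', 'answer', 'lang'."""
        hits, totals = defaultdict(int), defaultdict(int)
        for ex in benchmark:
            totals[ex["lang"]] += 1
            if exact_match(model_answer(ex["question"]), ex["answer"]):
                hits[ex["lang"]] += 1
        return {lang: hits[lang] / totals[lang] for lang in totals}

    # Smoke test with a stub system that always answers "4".
    data = [
        {"question": "2+2?", "answer": "4", "lang": "en"},
        {"question": "2+2?", "answer": "4", "lang": "hi"},
    ]
    print(evaluate(data, lambda q: "4"))  # {'en': 1.0, 'hi': 1.0}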

Current Project

Cross-Lingual LLM-Generated Text Detection

In Submission

Developing evaluation methods for detecting LLM-generated text in Hindi and Telugu. This project addresses the need for robust detection mechanisms in Indic languages, contributing to efforts to combat misinformation and ensure content authenticity in multilingual contexts.

The work introduces benchmarking approaches and evaluation metrics designed specifically for low-resource languages, with potential applications in content moderation, academic integrity, and digital authenticity verification across diverse linguistic communities; a simple detector baseline for this task is sketched below.

Manuscript submitted to ACL Rolling Review (ARR), 2026
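
For readers unfamiliar with the task, character n-gram features with a linear classifier are a common detection baseline for Indic scripts, where word tokenization is less reliable; the sketch below is that baseline under placeholder data, not the submitted system.

    # LLM-text detector baseline: character n-gram TF-IDF + logistic
    # regression. The two training strings are placeholders; a real study
    # trains on curated human- and LLM-written Hindi/Telugu corpora.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["<human-written Hindi/Telugu text>", "<LLM-generated Hindi/Telugu text>"]
    labels = [0, 1]  # 0 = human, 1 = LLM-generated

    detector = make_pipeline(
        # char_wb n-grams work across scripts without a word tokenizer
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    )
    detector.fit(texts, labels)
    print(detector.predict_proba(["<text to check>"])[0, 1])  # P(LLM-generated)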