Research Statement

Real-world data is rarely clean or unimodal. Whether in healthcare, finance, or autonomous systems, models must navigate complex, heterogeneous information streams while maintaining reliability in the face of uncertainty.

My research focuses on developing machine learning architectures that are inherently robust and interpretable. I am interested in fundamental challenges in learning from small or skewed datasets and in ensuring that AI systems remain fair and trustworthy when deployed in high-stakes environments.

By addressing the fragility of current machine learning models, my work aims to build AI systems that are not only accurate on benchmarks but also reliable in the wild. This reliability is the key to unlocking the potential of AI in critical sectors where safety and transparency are paramount.

Research Areas

Robust Machine Learning

Developing models that perform reliably when trained on small, skewed, or noisy datasets.

Multimodal Representation

Designing models that can navigate complex, heterogeneous information streams to capture nuanced context.

Trustworthy AI

Ensuring machine learning systems are interpretable, fair, and transparent for deployment in critical environments.