AI/ML Research

A quick map of what I work on, representative papers/projects, and what I’m looking for in a PhD lab.

Research statement

My research starts from a practical question: how do you build ML systems that actually work in low-resource settings? That constraint (on data, compute, and reliable infrastructure) is not a limitation I work around. It is the research problem. The world’s most consequential decisions in health, education, and climate are made in exactly these settings, and they deserve models that are efficient, interpretable, and honest about their own failure modes.

This question has led me across three connected areas. In efficient deep learning, I use neural architecture search to find models that are small enough to deploy on edge hardware without sacrificing meaningful accuracy; my Green-NAS work found a weather model 239× lighter than GraphCast with near-identical RMSE. In NLP and scaling theory, I study how transformer architecture choices interact with scale, finding that conventional wisdom (deeper is better) breaks down in predictable ways that practitioners can exploit. In causal inference and statistics, I apply rigorous evaluation frameworks (temporal validation, fairness audits, causal discovery) to applied development problems in public health and education, where a spurious model can cause real policy harm.

A common thread runs through these areas: I care about evaluation. A model that passes standard cross-validation but fails on future data, or that achieves 0.76 AUC by exploiting socioeconomic proxies, is not a useful model. My most consistent methodological contribution across papers has been designing evaluation protocols that actually test the thing being claimed.
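The contrast between standard cross-validation and temporal validation can be made concrete. The sketch below is purely illustrative (the dates, records, and helper name are invented, not from any of the projects above): it splits data by a date cutoff so the model is scored only on records from after everything it was trained on, which a random shuffle would not guarantee.

```python
# Illustrative sketch of a temporal split, with made-up monthly records.
# Unlike random k-fold CV, no test record predates any training record.
from datetime import date

records = [
    {"date": date(2021, m, 1), "features": [m, m * 2], "label": m % 2}
    for m in range(1, 13)
]

def temporal_split(records, cutoff):
    """Split records into (train, test) strictly by date, never mixing eras."""
    train = [r for r in records if r["date"] < cutoff]
    test = [r for r in records if r["date"] >= cutoff]
    return train, test

train, test = temporal_split(records, date(2021, 10, 1))

# Every training record strictly precedes every test record.
assert max(r["date"] for r in train) < min(r["date"] for r in test)
print(len(train), len(test))  # → 9 3
```

The design choice that matters is the strict ordering assertion: it is the property being claimed ("works on future data"), so the protocol checks it directly rather than trusting the splitting code.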

Going forward, I want to deepen the connection between architecture efficiency and deployment constraints, working toward models that are not just small, but whose uncertainty and failure modes are legible. I am actively seeking PhD positions in machine learning and AI, particularly in labs where rigorous evaluation, open artifacts, and application to high-stakes domains are taken seriously.


Research themes

Efficient deep learning & neural architecture search

Designing compute-efficient models and search strategies that are practical under real-world constraints (latency, memory, energy).

Neural Architecture Search · Green AI · Edge Deployment · Deep Learning

NLP & LLMs (representation, scaling, and reliability)

Studying representation learning and transformer behavior, with a focus on reliability and evidence-backed evaluation.

NLP & LLMs · Transformers · Interpretability · Evaluation

Causal inference & statistics for global development

Using modern statistical methods and causal thinking to study development outcomes and support policy-relevant decisions.

Causal Inference · AI for Development · Public Health · Statistical Modeling

What I’m looking for