Trustworthy Machine Learning
Uncertainty
Explainability
MSc in Computer Science, 2025
Korea Advanced Institute of Science and Technology
I am an incoming PhD student at KAIST AI (Spring 2026), where I work in the Scalable Trustworthy AI (STAI) group.
I am driven by a simple question: why do some models look brilliant on benchmarks yet stumble when deployed in the wild? My work focuses on understanding that gap: probing the assumptions, inductive biases, and failure modes that cause systems to excel in controlled settings but degrade under real-world complexity.
Rather than chasing marginal gains on curated datasets, I’m interested in stress-testing models against reality and figuring out what fundamentally limits their transfer. When a method doesn’t scale beyond toy problems, I want to know what breaks, why it breaks, and what that teaches us about building more reliable learning systems.