Atharva Kulkarni


Hello! I am a second-year CS PhD student at the University of Southern California, advised by Swabha Swayamdipta, and a member of the DILL Lab and the USC NLP Group.

My broad research focus is on theoretical & empirical understanding of foundation models. Specifically, I am interested in:

  • Studying their learning dynamics, geometric properties, & structural constraints.
  • Understanding the science behind their scaling & generalization capabilities.
  • Building a principled understanding of when & why they work / fail.
  • Exploring avenues for improving their reliability, safety, & trustworthiness.

I am actively looking for summer 2026 research internships around these topics — feel free to reach out if you’re hiring!

Previously, I was a visiting PhD student at UC Berkeley – Simons Institute for the Theory of Computing in Spring 2025. I completed my Master's in Language Technologies (MLT) at Carnegie Mellon University's Language Technologies Institute, where I was fortunate to be advised by Barnabás Póczos & Graham Neubig. I have also interned at Apple with the Siri & Information Intelligence group (summer ‘23 & ‘24).

Before coming to CMU, I was a Predoctoral Researcher (Research Associate) at the Laboratory for Computational Social Systems (LCS2), IIIT Delhi. Even before that, I graduated with a Bachelor of Engineering in Computer Science from Savitribai Phule Pune University.

I’m eager to connect with my academic peers! If our research interests align (or diverge) in intriguing ways, I’d be delighted to explore potential collaborations or simply exchange ideas!

I am also a strong advocate for diversity and mentorship in computer science research. If you’d like to chat about navigating ML/NLP research, grad school applications, or want to collaborate on a research project, please visit the Outreach & Mentorship tab for more information.


News

Aug 2025 Summer 2024 internship work at Apple Research on Hallucination Metrics Meta-Evaluation was accepted to EMNLP 2025 Findings 🇨🇳!
Jan 2025 Attending UC Berkeley – Simons Institute for the Theory of Computing as a visiting PhD student for the Special Year on Large Language Models and Transformers Program (Part 2) 🐻!
Sep 2024 Finally, our work on evaluating LLMs on comorbid mental health issues is out! It will be presented at the EMNLP 2024 main conference 🇺🇸!
🕰️ all news ...

Selected Publications

  1. TMLR
    Atharva Kulkarni*, Lucio M. Dery*, Amrith Setlur, Aditi Raghunathan, Ameet Talwalkar, and Graham Neubig
    Transactions on Machine Learning Research, Feb 2024