Niladri S. Chatterji

Senior Research Scientist

Meta

I am currently on the Llama team at Meta Gen AI.

Previously, I was a postdoctoral researcher at Stanford University, working with Tatsu Hashimoto and Percy Liang. Before that, I completed my PhD at UC Berkeley, advised by Peter Bartlett, and graduated from IIT Bombay in 2015.

My research interests lie at the intersection of machine learning and statistics. Currently, my work centers on building more robust language models. In the past, I have worked on interpolating models, optimization theory, online learning, and MCMC algorithms.

Education

  • PhD at UC Berkeley, 2021

  • BTech and MTech at IIT Bombay, 2015

Publications

Is importance weighting incompatible with interpolating classifiers?

ICLR 2022; also presented as a spotlight talk in the Workshop on Distribution Shifts, NeurIPS 2021

Foolish crowds support benign overfitting

Journal of Machine Learning Research; also presented at NeurIPS 2022

On the theory of reinforcement learning with once-per-episode feedback

NeurIPS 2021; also presented as an oral talk in the Workshop on Reinforcement Learning Theory, ICML 2021

The intriguing role of module criticality in the generalization of deep networks

ICLR 2020 (Spotlight Talk); also appeared at Workshops on ML with Guarantees & on Science of Deep Learning, NeurIPS 2019