PhD Student · Natural Language Processing


Second-year PhD student at Cornell University, advised by Prof. Tanya Goyal. Gratefully supported by an NSF Graduate Research Fellowship. My research focuses on knowledge in language models: how they store it, how we can edit it, and how we can make them more reliable and factual.

Research

01

Knowledge in LMs

Language models are increasingly used as substitutes for search, yet they remain static snapshots of their training data and are prone to hallucination. How can we make them reliably incorporate new information while preserving what they already know?

02

Calibration

Language models typically present everything they say with high confidence, regardless of whether they are right or wrong. However, they do seem to maintain some internal representation of their confidence. How can we calibrate them so that this internal confidence is reflected in what they verbalize?

03

Empirical Understanding

We still have very little understanding of how LMs work. Although I love reading theory and interpretability papers, I'm an empiricist at heart and am most motivated by empirically grounded work, e.g., the physics of LMs and scaling laws.

Publications

2026

Updating Parametric Knowledge with Context Distillation Retains Post-Training Capabilities

Shankar Padmanabhan, Mustafa Omer Gul, Tanya Goyal
In review at ICML, 2026

2025

Breadcrumbs Reasoning: Memory-Efficient Reasoning with Compression Beacons

Giovanni Monea, Yair Feldman, Shankar Padmanabhan, Kianté Brantley, Yoav Artzi
NeurIPS Workshop on Efficient Reasoning, 2025
2023

Propagating Knowledge Updates to LMs Through Distillation

Shankar Padmanabhan, Yasumasa Onoe, Michael J.Q. Zhang, Greg Durrett, Eunsol Choi
NeurIPS, 2023

2023

Can LMs Learn New Entities from Descriptions? Challenges in Propagating Injected Knowledge

Yasumasa Onoe, Michael J.Q. Zhang, Shankar Padmanabhan, Greg Durrett, Eunsol Choi
ACL, 2023

Preprints

2021

Optimal Placement of Public Electric Vehicle Charging Stations Using Deep Reinforcement Learning

Shankar Padmanabhan*, Aidan Petratos*, Allen Ting*, Kristina Zhou, Dylan Hageman, Jesse Pisel, Michael Pyrcz
arXiv, 2021

About

I'm currently a second-year PhD student at Cornell University, working with Prof. Tanya Goyal on problems at the core of natural language processing. My research revolves around knowledge in language models: how they store it, how we can edit it, and how we can calibrate them to be more reliable and factual.

I'm also deeply motivated by empirically grounded research that helps us understand language models better — scaling laws, physics of LMs, and the broader science of deep learning.

I obtained my undergraduate degree at UT Austin, where I was first introduced to NLP. I was fortunate to work with Prof. Eunsol Choi, Prof. Greg Durrett, and Prof. Richard Tsai.

At a Glance

Institution Cornell University
Department Computer Science
Advisor Prof. Tanya Goyal
Focus NLP, Knowledge in LMs
Undergrad UT Austin
Fellowship NSF GRFP

Beyond Research

In my free time, I'm passionate about learning languages. Through a small amount of "supervised training" (Duolingo) and a large amount of "unsupervised training" (native media), I've become a fluent speaker of Spanish, and I'm currently learning French and Tamil as well.

I also enjoy ultimate frisbee and calisthenics. I played for UT's ultimate frisbee team as an undergrad, and can currently do a pull-up with 100 lbs of additional weight.

Contact me

Open to collaborations and conversations about research. Also happy to chat with any high schooler, undergrad, or MS student if I can be of help in any way (e.g., advice about attending grad school).