CMSA New Technologies in Mathematics Seminar: Physics of Language Models: Knowledge Storage, Extraction, and Manipulation

October 18, 2023 2:00 pm - 3:00 pm
Location: CMSA, 20 Garden Street (Room G10), Cambridge, MA 02138
Speaker: Yuanzhi Li, CMU Dept. of Machine Learning

Large language models (LLMs) can memorize a massive amount of knowledge during pre-training, but can they effectively use this knowledge at inference time? In this work, we show several striking results about this question. Using a synthetic biography dataset, we first show that even when an LLM achieves zero training loss when pre-trained on the biography dataset, it sometimes cannot be fine-tuned to answer questions as simple as "What is the birthday of XXX?" at all. We show that sufficient data augmentation during pre-training, such as rewriting the same biography multiple times or simply using the person's full name in every sentence, can mitigate this issue. Using linear probing, we find that such augmentation forces the model to store knowledge about a person in the token embeddings of the person's name rather than in other locations.
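To make the probing idea concrete, here is a minimal sketch of a linear probe (not the speaker's actual setup; the hidden states and the attribute labels below are synthetic stand-ins): train a linear classifier to predict an attribute, say birth month, from hidden states read out at a person's name tokens. High probe accuracy would suggest the attribute is linearly decodable at those positions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, hidden_dim, n_months = 1000, 256, 12

# Stand-in for hidden states read out at a person's name tokens.
H = rng.normal(size=(n_people, hidden_dim))
# Stand-in attribute to decode: birth month, one of 12 classes.
y = rng.integers(0, n_months, size=n_people)
# Plant a weak linear signal so the probe has something to find in this toy.
W = rng.normal(size=(n_months, hidden_dim))
H = H + 2.0 * W[y]

H_tr, H_te, y_tr, y_te = train_test_split(H, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(H_tr, y_tr)
# High accuracy suggests the attribute is linearly encoded at these positions.
print(f"probe accuracy: {probe.score(H_te, y_te):.2f}")
```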
We then show that LLMs are very bad at manipulating the knowledge they acquire during pre-training unless a chain of thought (CoT) is used at inference time. We pre-trained an LLM on the synthetic biography dataset so that it could answer "What is the birthday of XXX?" with 100% accuracy. Even so, it could not be further fine-tuned to answer questions like "Is the birthday of XXX even or odd?" directly. Even adding chain-of-thought training data only helps the model answer such questions in a CoT manner, not directly.
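As a toy illustration of the two fine-tuning formats contrasted above (not the speaker's actual data), the sketch below builds a "direct" example, where the model must emit the parity immediately, and a CoT example, where it first recalls the stored birthday and then reasons about it. Interpreting "even or odd" as the parity of the day of the month is our assumption.

```python
from datetime import date

def direct_example(name: str, birthday: date) -> dict:
    """Direct format: the model must output the parity with no intermediate step."""
    parity = "even" if birthday.day % 2 == 0 else "odd"
    return {"prompt": f"Is the birthday of {name} even or odd?", "answer": parity}

def cot_example(name: str, birthday: date) -> dict:
    """CoT format: first recall the stored fact, then reason about it."""
    parity = "even" if birthday.day % 2 == 0 else "odd"
    answer = (f"The birthday of {name} is {birthday:%B %d}. "
              f"{birthday.day} is {parity}. Answer: {parity}.")
    return {"prompt": f"Is the birthday of {name} even or odd?", "answer": answer}

print(direct_example("XXX", date(1996, 10, 2)))
print(cot_example("XXX", date(1996, 10, 2)))
```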
We will also discuss preliminary progress on understanding the scaling law of how large a language model needs to be to store X pieces of knowledge and extract them efficiently. For example, is a 1B-parameter language model enough to store all the knowledge of a middle school student?
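As a back-of-the-envelope illustration of the capacity question (the bits-per-parameter and bits-per-fact figures below are assumed placeholders, not results from the talk):

```python
def max_facts(n_params: float, bits_per_param: float, bits_per_fact: float) -> float:
    """Crude upper bound on stored facts under a simple capacity model."""
    return n_params * bits_per_param / bits_per_fact

# 1B parameters, assuming 2 bits of usable capacity per parameter and
# ~32 bits per (person, attribute, value) fact -- both assumed figures.
print(f"{max_facts(1e9, 2, 32):.2e} facts under these assumptions")  # 6.25e+07
```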

https://harvard.zoom.us/j/95706757940?pwd=dHhMeXBtd1BhN0RuTWNQR0xEVzJkdz09
Password: cmsa