CMSA New Technologies in Mathematics: A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks


February 10, 2021 3:00 pm - 4:00 pm
via Zoom Video Conferencing

Nikunj Saunshi - Dept. of Computer Science, Princeton University

Autoregressive language models pretrained on large corpora have been successful at solving downstream tasks, even with zero-shot usage. However, there is little theoretical justification for their success. This paper considers the following questions: (1) Why should learning the distribution of natural language help with downstream classification tasks? (2) Why do features learned using language modeling help solve downstream tasks with linear classifiers? For (1), we hypothesize, and verify empirically, that classification tasks of interest can be reformulated as next word prediction tasks, thus making language modeling a meaningful pretraining task. For (2), we analyze properties of the cross-entropy objective to show that language models that are ε-optimal in cross-entropy (log-perplexity) learn features that are O(√ε)-good on such linear classification tasks, thus demonstrating mathematically that doing well on language modeling can be beneficial for downstream tasks. We perform experiments to verify assumptions and validate our theoretical results. Our theoretical insights motivate a simple alternative to the cross-entropy objective that performs well on some linear classification tasks.
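To illustrate the reformulation in (1), here is a minimal, hypothetical sketch of zero-shot classification via next word prediction: a review is turned into a cloze-style prompt, and the predicted class is whichever indicative word the language model assigns higher probability to. The toy probability table stands in for a real pretrained model, and all prompts, class words, and numbers are illustrative assumptions, not the authors' actual setup.

```python
# Hypothetical sketch: a sentiment classification task recast as
# next word prediction. A toy "language model" returns a next-word
# distribution; the class is the more likely indicative word.

def toy_lm_next_word_probs(prompt):
    # Stand-in for p(word | prompt) from a pretrained autoregressive LM.
    # The probabilities below are made up for illustration only.
    table = {
        "I loved every minute. Overall, the movie was":
            {"good": 0.7, "bad": 0.1},
        "The plot made no sense. Overall, the movie was":
            {"good": 0.2, "bad": 0.6},
    }
    return table[prompt]

def classify_via_next_word(review, class_words=("good", "bad")):
    # Reformulate classification as next word prediction by appending
    # a prompt whose completion reveals the label.
    prompt = review + " Overall, the movie was"
    probs = toy_lm_next_word_probs(prompt)
    return max(class_words, key=lambda w: probs.get(w, 0.0))

print(classify_via_next_word("I loved every minute."))    # good
print(classify_via_next_word("The plot made no sense."))  # bad
```

With a real language model, the same idea would compare the model's next-token probabilities for the class words instead of looking them up in a table.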