CMSA New Technologies in Mathematics: Minerva: Solving Quantitative Reasoning Problems with Language Models
Guy Gur-Ari - Google Research
Quantitative reasoning tasks, which can involve mathematics, science, and programming, are often challenging for machine learning models in general and for language models in particular. We show that transformer-based language models achieve significantly better performance on math and science questions when trained in an unsupervised way on a large, math-focused dataset. Performance can be further improved using prompting and sampling techniques, including chain-of-thought prompting and majority voting. Minerva, a model that combines these techniques, achieves state-of-the-art results on several math and science benchmarks. I will describe the model, its capabilities, and its limitations.
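The majority-voting step mentioned in the abstract can be sketched as follows: sample several chain-of-thought solutions from the model (with nonzero temperature), extract the final answer from each, and return the most frequent one. This is a minimal illustration, not Minerva's implementation; the sampled answers below are made up for demonstration.

```python
from collections import Counter

def majority_vote(sampled_answers):
    """Return the most common final answer among sampled solutions.

    Ties are broken by whichever answer was sampled first,
    following Counter.most_common ordering.
    """
    counts = Counter(sampled_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical final answers extracted from 8 sampled chain-of-thought
# solutions to the same question.
samples = ["12", "12", "7", "12", "12", "9", "12", "7"]
print(majority_vote(samples))  # → 12
```

The intuition is that a model is more likely to agree with itself on the correct answer across independent reasoning paths than on any particular wrong one, so aggregating samples filters out sporadic errors.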
For more information, please see: https://cmsa.fas.harvard.edu/tech-in-math/