A Mathematical Introduction to Machine Learning
The heart of modern machine learning (ML) is the approximation of high-dimensional functions. Traditional approaches, such as approximation by piecewise polynomials, wavelets, or other linear combinations of fixed basis functions, suffer from the curse of dimensionality (CoD). This does not seem to be the case for neural network-based ML models. To quantify this, we need to develop the corresponding mathematical framework. At the same time, we might be able to use ML to solve problems in computational science that we could not solve before due to CoD. In this talk, I will report the progress made so far on the theoretical front and highlight the main remaining challenges. I will also discuss some examples along the lines of “AI for Science”.
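(A hedged illustration, not part of the original abstract, of how the CoD contrast is commonly quantified in this line of work: for a linear method using $n$ fixed basis functions to approximate a function of smoothness $s$ on $[0,1]^d$, the best error typically behaves like
\[
\inf_{f_n \in \mathcal{L}_n} \| f - f_n \| \;\sim\; n^{-s/d},
\]
so the number of terms needed for a fixed accuracy grows exponentially in $d$. By contrast, for functions in a suitable class such as Barron-type spaces, two-layer neural networks with $n$ neurons can achieve
\[
\inf_{f_n \in \mathcal{F}_n} \| f - f_n \|_{L^2} \;\lesssim\; n^{-1/2},
\]
a rate independent of the dimension $d$.)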
https://harvard.zoom.us/j/779283357?pwd=MitXVm1pYUlJVzZqT3lwV2pCT1ZUQT09