Calendar

    December 1, 2021

    Joint Harvard-CUHK-YMSC Differential Geometry Seminar

    3:00 AM-4:00 AM
    December 1, 2021

    The speaker will discuss:

    Lagrangians and mirror symmetry in the Higgs bundle moduli space


    The talk concerns recent work with Tamas Hausel asking how SYZ mirror symmetry works for the moduli space of Higgs bundles. Focusing on C^*-invariant Lagrangian submanifolds, we use the notion of virtual multiplicity as a tool, first to examine whether the Lagrangian is closed, but also to open up new features involving finite-dimensional algebras which are deformations of cohomology algebras. Answering some of the questions raised requires revisiting basic constructions of stable bundles on curves.

    ********************************************************************

    For details visit:

    http://www.ims.cuhk.edu.hk/cgi-bin/SeminarAdmin/bin/Web

    http://www.ims.cuhk.edu.hk/activities/seminar/joint-dg-seminar/



    Zoom Link: https://cuhk.zoom.us/j/99285825827

    Meeting ID: 992 8582 5827
    Passcode: 20211201

    CMSA New Technologies in Mathematics Seminar: The Principles of Deep Learning Theory

    2:00 PM-3:00 PM
    December 1, 2021

    Deep learning is an exciting approach to modern artificial intelligence based on artificial neural networks. The goal of this talk is to provide a blueprint — using tools from physics — for theoretically analyzing deep neural networks of practical relevance. This task will encompass both understanding the statistics of initialized deep networks and determining the training dynamics of such an ensemble when learning from data.

    In terms of their “microscopic” definition, deep neural networks are a flexible set of functions built out of many basic computational blocks called neurons, with many neurons in parallel organized into sequential layers. Borrowing from the effective theory framework, we will develop a perturbative 1/n expansion around the limit of an infinite number of neurons per layer and systematically integrate out the parameters of the network. We will explain how the network simplifies at large width and how the propagation of signals from layer to layer can be understood in terms of a Wilsonian renormalization group flow. This will make manifest that deep networks have a tuning problem, analogous to criticality, that needs to be solved in order to make them useful. Ultimately we will find a “macroscopic” description for wide and deep networks in terms of weakly-interacting statistical models, with the strength of the interactions between the neurons growing with the depth-to-width aspect ratio of the network. Time permitting, we will explain how the interactions induce representation learning.
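    The criticality point admits a quick numerical illustration. The following is a minimal Python/NumPy sketch, not taken from the talk or the book; the function name and parameter values are illustrative. It propagates Gaussian inputs through a randomly initialized deep ReLU network and tracks the preactivation variance layer by layer. For ReLU, the critical weight variance is C_W = 2 (He-style initialization): at that value the signal stays order one at any depth, while off-critical choices explode or vanish exponentially, which is the tuning problem mentioned above.

        import numpy as np

        def preactivation_variance(depth, width, c_w, n_samples=100, seed=0):
            """Track the per-layer preactivation variance of a random ReLU net."""
            rng = np.random.default_rng(seed)
            x = rng.standard_normal((n_samples, width))
            variances = []
            for _ in range(depth):
                # i.i.d. Gaussian weights with variance c_w / width (biases omitted)
                W = rng.standard_normal((width, width)) * np.sqrt(c_w / width)
                z = x @ W.T               # preactivations of this layer
                variances.append(z.var())
                x = np.maximum(z, 0.0)    # ReLU activation
            return variances

        # For ReLU, c_w = 2 is critical: the variance stays O(1) at any depth.
        # Off-critical choices explode (c_w > 2) or vanish (c_w < 2) with depth.
        for c_w in (1.5, 2.0, 2.5):
            v = preactivation_variance(depth=50, width=512, c_w=c_w)
            print(f"c_w = {c_w}: var(layer 1) = {v[0]:.3f}, var(layer 50) = {v[-1]:.3e}")

    At large width this simulation reproduces the leading (infinite-width) term of the 1/n expansion; the finite-width interactions the abstract describes appear as corrections controlled by the depth-to-width ratio on top of it.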

    This talk is based on the book “The Principles of Deep Learning Theory,” co-authored with Sho Yaida and based on research carried out in collaboration with Boris Hanin as well. It will be published next year by Cambridge University Press.


    Zoom Link: https://harvard.zoom.us/j/99651364593?pwd=Q1R0RTMrZ2NZQjg1U1ZOaUYzSE02QT09

    Mod p points on Shimura varieties

    3:00 PM-4:00 PM
    December 1, 2021
    1 Oxford Street, Cambridge, MA 02138 USA

    The study of mod p points on Shimura varieties was originally motivated by the study of the Hasse-Weil zeta function for Shimura varieties. It involves some rather subtle problems which test just how much we know about motives over finite fields. In this talk I will explain some recent results and applications, and what still remains conjectural.
