
announcements

Big Data Conference 2024
September 6–7, 2024, 9:00 am
https://cmsa.fas.harvard.edu/event/bigdata_2024/

On September 6–7, 2024, the CMSA will host the tenth annual Conference on Big Data. The Big Data Conference features speakers from the...

upcoming events

  • CMSA EVENT: CMSA Interdisciplinary Science Seminar: A universal triangulation for flat tori

    Speaker: Francis Lazarus – CNRS / Grenoble University

    9:00 AM-10:00 AM
    January 13, 2022

    A celebrated theorem of Nash, completed by Kuiper, implies that every smooth Riemannian surface has a C¹ isometric embedding in the Euclidean 3-space E³. An analogous result, due to Burago and Zalgaller, states that every polyhedral surface, obtained by gluing Euclidean triangles, has an isometric PL embedding in E³. In particular, this provides PL isometric embeddings for every flat torus (a quotient of E² by a rank-2 lattice). However, the proof of Burago and Zalgaller is only partially constructive, relying on the Nash–Kuiper theorem. In practice, it produces PL embeddings with a huge number of vertices, and these embeddings are distinct for every flat torus. Based on a construction of Zalgaller and on recent works by Arnoux et al., we exhibit a universal triangulation with fewer than 10,000 vertices, admitting for any flat torus an isometric embedding that is linear on each triangle. This is joint work with Florent Tallerie.


    Zoom ID: 950 2372 5230 (Password: cmsa)
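
    For reference, the two objects at the center of this abstract can be written out explicitly; the notation below is chosen for illustration and is not the speaker's:

        % A flat torus is the quotient of the Euclidean plane by a rank-2 lattice:
        \[
          T_\Lambda = \mathbb{E}^2 / \Lambda,
          \qquad
          \Lambda = \mathbb{Z}u \oplus \mathbb{Z}v,
          \quad u, v \in \mathbb{E}^2 \ \text{linearly independent}.
        \]
        % A map f : T_\Lambda \to \mathbb{E}^3 that is linear on each triangle of a
        % fixed triangulation is an isometric PL embedding iff it is injective and
        % maps every triangle congruently, i.e. preserves its three edge lengths.

    A "universal" triangulation is a single fixed triangulation that works in this sense simultaneously for all flat tori.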

  • CMSA EVENT: CMSA: Markov chains, optimal control, and reinforcement learning

    Speaker: Yann Ollivier – Facebook

    9:00 AM-10:00 AM
    January 20, 2022

    Markov decision processes are a model for several artificial intelligence problems, such as games (chess, Go…) or robotics. At each timestep, an agent has to choose an action, then receives a reward, and then the agent’s environment changes (deterministically or stochastically) in response to the agent’s action. The agent’s goal is to adjust its actions to maximize its total reward. In principle, the optimal behavior can be obtained by dynamic programming or optimal control techniques, although practice is another story.

    Here we consider a more complex problem: learn all optimal behaviors for all possible reward functions in a given environment. Ideally, such a “controllable agent” could be given a description of a task (reward function, such as “you get +10 for reaching here but -1 for going through there”) and immediately perform the optimal behavior for that task. This requires a good understanding of the mapping from a reward function to the associated optimal behavior.

    We prove that there exists a particular “map” of a Markov decision process, on which near-optimal behaviors for all reward functions can be read directly by an algebraic formula. Moreover, this “map” is learnable by standard deep learning techniques from random interactions with the environment. We will present our recent theoretical and empirical results in this direction.


    Zoom ID: 950 2372 5230 (Password: cmsa)
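
    As background for the classical "dynamic programming or optimal control" route mentioned in the abstract, here is a minimal value-iteration sketch for a finite Markov decision process. The transition probabilities and rewards are made up for illustration, and this is the single-reward baseline, not the speaker's all-rewards construction:

        import numpy as np

        # Toy finite MDP (illustrative numbers): S states, A actions.
        # P[a, s, t] = probability of moving s -> t under action a.
        # R[s, a]    = immediate reward for taking action a in state s.
        S, A, gamma = 3, 2, 0.9
        P = np.array([
            [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0
            [[0.0, 1.0, 0.0], [0.0, 0.1, 0.9], [0.9, 0.1, 0.0]],  # action 1
        ])
        R = np.array([[0.0, 1.0], [0.0, 0.0], [10.0, -1.0]])

        # Value iteration: iterate the Bellman optimality operator
        #   V(s) <- max_a [ R(s, a) + gamma * sum_t P(t | s, a) V(t) ]
        V = np.zeros(S)
        for _ in range(1000):
            Q = R + gamma * np.einsum('ast,t->sa', P, V)  # Q[s, a]
            V_new = Q.max(axis=1)
            if np.abs(V_new - V).max() < 1e-10:
                break
            V = V_new
        policy = Q.argmax(axis=1)  # greedy (near-)optimal action per state
        print("V* =", V, "policy =", policy)

    Solving this once per reward function is exactly the per-task cost that the abstract's "controllable agent" is meant to avoid.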

  • CMSA EVENT: CMSA Colloquium: The black hole information paradox

    Speaker: Samir Mathur – Ohio State University

    9:30 AM-10:30 AM
    January 26, 2022

    In 1975, Stephen Hawking showed that black holes radiate away in a manner that violates quantum theory. Starting in 1997, it was observed that black holes in string theory did not have the form expected from general relativity: in place of “empty space with all the mass at the center”, one finds a “fuzzball” where the mass is distributed throughout the interior of the horizon. This resolves the paradox, but opposition to this resolution came from groups who sought to extrapolate some ideas in holography. In 2009 it was shown, using theorems from quantum information theory, that these extrapolations were incorrect and that the fuzzball structure was essential for resolving the puzzle. Opposition continued along different lines, with a postulate that information would leak out through wormholes. Recently, it was shown that this wormhole idea had some basic flaws, leaving the fuzzball paradigm as the natural resolution of Hawking’s puzzle.


    https://harvard.zoom.us/j/95767170359

    (Password: cmsa)

  • CMSA EVENT: CMSA New Technologies in Mathematics Seminar: Machine learning with mathematicians

    Speaker: Alex Davies – DeepMind

    2:00 PM-3:00 PM
    January 26, 2022

    Can machine learning be a useful tool for research mathematicians? There are many examples of mathematicians pioneering new technologies to aid our understanding of the mathematical world: using very early computers to help formulate the Birch and Swinnerton-Dyer conjecture and using computer aid to prove the four colour theorem are among the most notable. Until now, however, there has not been significant use of machine learning in the field, and it has not been clear where it might be useful for the questions that mathematicians care about. In this talk we will discuss the results of our recent Nature paper, in which we worked together with top mathematicians to use machine learning to achieve two new results – proving a new connection between the hyperbolic and geometric structure of knots, and conjecturing a resolution to a 50-year-old problem in representation theory, the combinatorial invariance conjecture. Through these examples we demonstrate a way that machine learning can be used by mathematicians to help guide the development of surprising and beautiful new conjectures.

  • CMSA EVENT: CMSA Active Matter Seminar: Learning to School in the presence of hydrodynamic interactions

    Speaker: Petros Koumoutsakos – Harvard

    1:00 PM-2:00 PM
    January 27, 2022

    Fluids pervade complex systems, ranging from fish schools to bacterial colonies and nanoparticles in drug delivery. Despite its importance, little is known about the role of fluid mechanics in such applications. Is schooling the result of vortex dynamics synthesized by individual fish wakes, or the result of behavioral traits? Is fish schooling energetically favorable? I will present multifidelity computational studies of collective swimming in 2D and 3D flows. Our studies demonstrate that classical models of collective swimming (like the Reynolds model) fail to maintain coherence in the presence of long-range hydrodynamic interactions. We demonstrate in turn that collective swimming can be achieved through reinforcement learning. We extend these studies to 2D and 3D viscous flows governed by the Navier–Stokes equations. We examine various hydrodynamic benefits with a progressive increase of the school size and demonstrate the importance of controlling the vorticity field generated by up to 300 synchronized swimmers.
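
    The "Reynolds model" the abstract contrasts with is the classic behavioral boids scheme (separation, alignment, cohesion), which ignores the fluid entirely. A bare-bones 2D sketch, with neighborhood radius and gains chosen arbitrarily for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        N, radius, dt = 50, 1.0, 0.1
        k_sep, k_ali, k_coh = 1.5, 1.0, 0.5   # illustrative gains
        pos = rng.uniform(0.0, 10.0, size=(N, 2))
        vel = rng.normal(size=(N, 2))

        def step(pos, vel):
            new_vel = vel.copy()
            for i in range(N):
                d = pos - pos[i]
                dist = np.linalg.norm(d, axis=1)
                nbrs = (dist < radius) & (dist > 0)
                if not nbrs.any():
                    continue
                sep = -(d[nbrs] / dist[nbrs, None] ** 2).sum(axis=0)  # repel close neighbors
                ali = vel[nbrs].mean(axis=0) - vel[i]                 # match neighbors' velocity
                coh = pos[nbrs].mean(axis=0) - pos[i]                 # steer to local center of mass
                new_vel[i] += dt * (k_sep * sep + k_ali * ali + k_coh * coh)
            return pos + dt * new_vel, new_vel

        for _ in range(100):
            pos, vel = step(pos, vel)

    The abstract's point is that rules of this kind, with no hydrodynamics, fail to keep a school coherent once long-range fluid-mediated interactions enter.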


     

  • SEMINARS: Master Teapots and Entropy Algorithms for the Mandelbrot set

    Speaker: Kathryn Lindsey – Boston College

    4:00 PM-6:00 PM
    January 27, 2022

    The core entropy of a postcritically finite quadratic polynomial is the topological entropy of its restriction to its Hubbard tree.  Core entropy is also the logarithm of the largest eigenvalue of the matrix associated to the Markov partition of the Hubbard tree obtained by cutting the tree at the postcritical set.  Tiozzo proved that core entropy is a continuous function of external angle for the Mandelbrot set.  How do the other (non-largest) eigenvalues of the Markov transition matrix vary with external angle?  In a recent preprint, G. Tiozzo, C. Wu and I answered this question. We defined “Master Teapots” associated to principal veins in the Mandelbrot set and proved that the eigenvalues outside the unit circle move continuously while roots inside the unit circle “persist.”  This talk will discuss this circle of ideas and results, and is based on joint work with G. Tiozzo and C. Wu.
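
    The identity "core entropy = log of the largest eigenvalue of the Markov transition matrix" stated in the abstract is directly computable for any concrete Markov partition. A small sketch, with a made-up 0/1 matrix standing in for a real Hubbard-tree partition:

        import numpy as np

        # Illustrative transition matrix of a Markov partition (not from an
        # actual Hubbard tree): entry (i, j) = 1 iff interval i covers interval j.
        M = np.array([
            [0, 1, 1],
            [1, 0, 1],
            [1, 1, 0],
        ], dtype=float)

        eigenvalues = np.linalg.eigvals(M)
        rho = max(abs(eigenvalues))   # spectral radius: largest eigenvalue modulus
        core_entropy = np.log(rho)
        print(f"spectral radius = {rho:.6f}, core entropy = {core_entropy:.6f}")

    The non-largest eigenvalues, whose dependence on the external angle is the talk's subject, sit in the same array.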


     
