upcoming events

September 2019
    GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR
    Floer-theoretic heaviness and Lagrangian skeleta

    Speaker: Dmitry Tonkonog – Harvard

    3:30 PM-4:30 PM
    September 13, 2019
    1 Oxford Street, Cambridge, MA 02138 USA
    Abstract:

    I will talk about what I have learned from Umut Varolgunes, and about our work in preparation. In 2007, Entov and Polterovich introduced the notion of heavy and superheavy subsets of symplectic manifolds; such subsets have remarkable symplectic rigidity properties. One can define a variant of (super)heaviness using the relative symplectic homology studied by Varolgunes. I will explain this story, and prove that Lagrangian skeleta of divisor complements of Calabi-Yau manifolds are superheavy in this sense.

    GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR
    Thurston norm minimizing Seifert surfaces

    Speaker: Yi Ni – Caltech

    3:30 PM-4:30 PM
    September 20, 2019
    1 Oxford Street, Cambridge, MA 02138 USA
    Abstract:

    Let K be a null-homologous knot in a closed 3-manifold Y, and F be a Seifert surface. One can cap off the boundary of F with a disk in the zero surgery on K to get a closed surface F_0.

    If we know that F is Thurston norm minimizing, we can ask whether F_0 is also Thurston norm minimizing. A classical theorem of Gabai says that the answer is yes when Y is the 3-sphere. Gabai's theorem can be generalized to many other 3-manifolds using Heegaard Floer homology. In this talk, we will discuss a sufficient condition for F_0 to be Thurston norm minimizing.

    GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR
    Principal component analysis for topologists

    Speaker: Jon Bloom – Broad Institute

    3:30 PM-4:30 PM
    September 27, 2019
    1 Oxford Street, Cambridge, MA 02138 USA
    Abstract:

    Geometrically, PCA finds the k-dimensional subspace nearest a point cloud in m-dimensional Euclidean space. Numerically, PCA also specifies a basis for this subspace: the principal directions of the data. In deep-learning terms, PCA is the shallowest of autoencoder models, a linear autoencoder (LAE), which learns the subspace but not the basis. I'll relate these views by considering the gradient dynamics of learning from the perspective of a simple F_2-perfect Morse function on the real Grassmannian. I'll then prove that L_2-regularized LAEs are symmetric at all critical points and recover the principal directions as singular vectors of the decoder, with implications for eigenvector algorithms and perhaps for learning in the brain. If asked, I'll speculate on a role for Morse homology in deep learning. Based on "Loss Landscapes of Regularized Linear Autoencoders" (ICML 2019) with Daniel Kunin, Aleksandrina Goeva, and Cotton Seed at the Broad Institute.
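
    The geometric and numerical views of PCA in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration (the data, dimensions, and variable names are invented for the example), not the LAE construction from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# A point cloud in m = 5 dimensions, concentrated near a k = 2 plane.
m, k, n = 5, 2, 200
plane = rng.standard_normal((m, k))
X = rng.standard_normal((n, k)) @ plane.T + 0.01 * rng.standard_normal((n, m))

# Numerically: center the data and take an SVD; the top-k right
# singular vectors are the principal directions.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
directions = Vt[:k]            # orthonormal basis for the fitted subspace

# Geometrically: projecting onto their span gives the k-dimensional
# subspace nearest the cloud in the least-squares sense.
P = directions.T @ directions  # rank-k orthogonal projection matrix
residual = Xc - Xc @ P
print(residual.std())          # small: the cloud lies near the plane
```

    A linear autoencoder trained without regularization would learn the same subspace (the image of the decoder) but only an arbitrary basis of it; the point of the ICML 2019 result quoted above is that L_2 regularization forces the decoder's singular vectors to align with these principal directions.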
