Calendar

September 2019
  • September 13, 2019

    Floer-theoretic heaviness and Lagrangian skeleta

    3:30 PM-4:30 PM
    Science Center 507
    1 Oxford Street, Cambridge, MA 02138 USA
    Abstract:

    I will talk about what I have learned from Umut Varolgunes and about our work in preparation. In 2007, Entov and Polterovich introduced the notion of heavy and superheavy subsets of symplectic manifolds; such subsets have remarkable symplectic rigidity properties. One can define a variant of (super)heaviness using the relative symplectic homology studied by Varolgunes. I will explain this story and prove that Lagrangian skeleta of symplectic divisor complements of Calabi-Yau manifolds are superheavy in this sense.
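
    For orientation, the original Entov-Polterovich definitions can be stated as follows, where zeta is the partial symplectic quasi-state built from Hamiltonian spectral invariants; the variant in the talk replaces these invariants with ones coming from relative symplectic homology:

      \[ X \subseteq M \ \text{is heavy if}\ \zeta(H) \ge \inf_X H \ \text{for all}\ H \in C(M), \]
      \[ X \ \text{is superheavy if}\ \zeta(H) \le \sup_X H \ \text{for all}\ H \in C(M). \]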

  • September 20, 2019

    Thurston norm minimizing Seifert surfaces

    3:30 PM-4:30 PM
    Science Center 507
    1 Oxford Street, Cambridge, MA 02138 USA
    Abstract:

    Let K be a null-homologous knot in a closed 3-manifold Y, and let F be a Seifert surface for K. One can cap off the boundary of F with a disk in the zero surgery on K to get a closed surface F_0.

    If we know that F is Thurston norm minimizing, we can ask whether F_0 is also Thurston norm minimizing. A classical theorem of Gabai says that the answer is yes when Y is the 3-sphere. Gabai’s theorem can be generalized to many other 3-manifolds using Heegaard Floer homology. In this talk, we will discuss a sufficient condition for F_0 to be Thurston norm minimizing.
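
    Concretely, if F has genus g ≥ 1, a quick Euler characteristic count compares the two complexities (x denotes the Thurston norm):

      \[ \chi(F) = 1 - 2g, \qquad x(F) = -\chi(F) = 2g - 1, \]
      \[ \chi(F_0) = \chi(F) + 1 = 2 - 2g, \qquad x(F_0) = 2g - 2, \]

    so the question is whether 2g - 2 realizes the Thurston norm of the class of F_0 in the zero surgery.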

  • September 27, 2019

    Principal component analysis for topologists

    3:30 PM-4:30 PM
    Science Center 507
    1 Oxford Street, Cambridge, MA 02138 USA
    Abstract:

    Geometrically, PCA finds the k-dimensional subspace nearest a point cloud in m-dimensional Euclidean space. Numerically, PCA also specifies a basis for this subspace, the principal directions of the data. Deep learning-ly, PCA is the shallowest of autoencoder models: a linear autoencoder (LAE), which learns the subspace but not the basis. I’ll relate these views by considering the gradient dynamics of learning from the perspective of a simple F_2-perfect Morse function on the real Grassmannian. I’ll then prove that L_2-regularized LAEs are symmetric at all critical points and recover the principal directions as singular vectors of the decoder, with implications for eigenvector algorithms and perhaps for learning in the brain. If asked, I’ll speculate on a role for Morse homology in deep learning. Based on “Loss Landscapes of Regularized Linear Autoencoders” (ICML 2019) with Daniel Kunin, Aleksandrina Goeva, and Cotton Seed at the Broad Institute.
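
    As a numerical illustration of the claim about regularized LAEs, here is a minimal sketch (a hypothetical toy setup on synthetic data, not the paper’s code; NumPy only): gradient descent on the loss (1/n)||X - W2 W1 X||_F^2 + lam (||W1||_F^2 + ||W2||_F^2), after which the encoder approximately equals the transposed decoder and the decoder’s left singular vectors align with the principal directions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic point cloud in R^m concentrated near a k-dimensional subspace.
      m, n, k = 10, 500, 2
      frame = np.linalg.qr(rng.normal(size=(m, k)))[0]        # random orthonormal k-frame
      latent = np.diag([3.0, 1.5]) @ rng.normal(size=(k, n))  # distinct latent scales
      X = frame @ latent + 0.05 * rng.normal(size=(m, n))
      X -= X.mean(axis=1, keepdims=True)                      # center the data

      # PCA: principal directions = top-k left singular vectors of the data matrix.
      pca_dirs = np.linalg.svd(X, full_matrices=False)[0][:, :k]

      # L_2-regularized linear autoencoder, trained by plain gradient descent on
      #   (1/n) * ||X - W2 @ W1 @ X||_F^2 + lam * (||W1||_F^2 + ||W2||_F^2).
      lam, lr, steps = 0.1, 0.01, 20000
      W1 = 0.01 * rng.normal(size=(k, m))   # encoder
      W2 = 0.01 * rng.normal(size=(m, k))   # decoder
      for _ in range(steps):
          R = W2 @ W1 @ X - X               # reconstruction residual
          gW2 = (2 / n) * R @ X.T @ W1.T + 2 * lam * W2
          gW1 = (2 / n) * W2.T @ R @ X.T + 2 * lam * W1
          W2 -= lr * gW2
          W1 -= lr * gW1

      # Symmetry at critical points: encoder ~ transposed decoder.
      print("||W1 - W2^T||_F =", np.linalg.norm(W1 - W2.T))

      # The decoder's left singular vectors should recover the principal
      # directions up to sign, so this matrix should be near the identity.
      dec_dirs = np.linalg.svd(W2, full_matrices=False)[0]
      print(np.abs(dec_dirs.T @ pca_dirs))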