Back to School Seminar

grad students share math they've been up to this summer

Organizer: Bryan Lu (blu17@uw.edu)

Meetings: Fridays 2:30 PM - 3:30 PM @ PDL C-038

Over the summer, the graduate students at UW have learned about many new and interesting topics from conferences, summer schools, reading courses, or just on their own. This seminar is intended to be a place where we can share what we have recently learned with our fellow graduate students. Not only does this serve as a way to re-engage with our existing graduate student community after the summer, but it also welcomes the incoming cohort of students and introduces them to topics that we are interested in.

This seminar is aimed at all of our graduate students, so talks should be accessible to a general audience. Ideally, at least \(\frac 1e\) of your talk should be accessible to incoming first-year students. Talks should be approximately 51 minutes long, with time for questions afterwards.


Schedule

Week 2 – October 3, 2025

Speaker: Nelson Niu

Title: A categorical framework for coherence theorems

Abstract: Categories with operations that are associative, commutative, and distributive up to isomorphism abound in mathematics: for instance, in homotopy theory, they are the inputs to infinite loop space machines and K-theory functors. But before we can work with such categories, we must prove coherence theorems for them to ensure these structures are well-behaved. In ongoing joint work with Jonathan Rubin, we establish a general categorical approach to proving such coherence theorems, versatile enough to incorporate (weak) distributivity laws, module and algebra categories, and the higher-arity twisted products that appear in equivariant settings. Building on Mac Lane’s original coherence proof for symmetric monoidal categories, we employ tools from logic and rewriting theory to study the relevant universal parameter categories, clarifying the necessary axioms.
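(For readers meeting coherence for the first time: the prototypical coherence datum, standard background rather than anything specific to this work, is Mac Lane’s pentagon axiom for a monoidal category. With associator \(\alpha_{A,B,C} : (A \otimes B) \otimes C \to A \otimes (B \otimes C)\), the two ways of reassociating a fourfold product are required to agree:
\[
\alpha_{A,B,C\otimes D} \circ \alpha_{A\otimes B,C,D} = (\mathrm{id}_A \otimes \alpha_{B,C,D}) \circ \alpha_{A,B\otimes C,D} \circ (\alpha_{A,B,C} \otimes \mathrm{id}_D),
\]
and Mac Lane’s coherence theorem then guarantees that every diagram built from associators commutes.)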

Week 3 – October 10, 2025

Speaker: Grace O’Brien

Title: Spline time: Using geometry to understand deep neural networks

Abstract: Have you heard the words “neural networks” but not totally understood what they are? Are you interested in hearing about my experience interning at PNNL? Do you like pretty pictures? Come to my talk and get an introduction to machine learning and the math behind it. I’ll also discuss my specific project this summer, described below.

AI systems are increasingly becoming a part of our everyday lives, including in safety-critical systems. However, the inner workings of deep neural networks are still largely a black box. Even in the case of classification tasks, common methods used to assess model performance do not give insight into whether the model will generalize or into other performance nuances. In this project, we use mathematical techniques to better understand how these processes work and explore how to identify problems such as overfitting, memorization, and poor generalization. Over the course of training a model, each piecewise-linear activation function partitions the input space in two; together they create a tiling that shifts over time. Following the work of Balestriero, Baraniuk, and others, we use geometric tools to study this tiling to gain insight into the model’s training progress and, potentially, to provide greater assurances that a model is ready for deployment.
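(To make the tiling concrete, here is a minimal sketch, not code from the project, that counts the linear regions a small random ReLU network induces on a grid of inputs; the layer sizes and sampling grid are made up for illustration. Points sharing an activation pattern lie in the same tile, since the network restricted to that pattern is affine.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny random ReLU network: 2 inputs -> 8 hidden -> 8 hidden.
# Layer sizes are illustrative, not taken from the talk.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

def activation_pattern(x):
    """On/off pattern of every ReLU unit at input x.

    Inputs sharing a pattern lie in the same linear region (tile),
    because the network restricted to that region is affine.
    """
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0) + b2
    return tuple(h1 > 0) + tuple(h2 > 0)

# Sample a grid over the input square and count the distinct tiles it meets.
grid = np.linspace(-2.0, 2.0, 200)
patterns = {activation_pattern(np.array([x, y])) for x in grid for y in grid}
print("distinct linear regions seen on the grid:", len(patterns))
```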

Week 4 – October 17, 2025

Speaker: Clare Minnerath

Title: (The search for) Web bases for \(SL_r(\mathbb{C})\)-invariants

Abstract: The study of \(SL_r(\mathbb{C})\) tensor invariants has been extended by the addition of tensor diagrams. The search for a basis among webs, the planar version of tensor diagrams, has yielded compelling results for \(r=2\) and \(3\), but has proven elusive for larger \(r\). In an example-forward fashion, we will see how you can go from a tensor diagram to an element of the invariant ring, explore the known web bases for \(SL_2\) and \(SL_3\) invariants, and discuss what properties we hope for in a web basis when \(r\ge 4\).

Based on lectures given by Christian Gaetz at SLMath: Graphical Models in Algebraic Combinatorics.
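(A standard first instance of the diagram-to-invariant dictionary, offered here as a warm-up rather than taken from the abstract: for \(SL_2\), a web on \(2n\) boundary points is a crossingless matching, and an arc joining points \(i\) and \(j\) contributes the \(2\times 2\) determinant \(\det(v_i \mid v_j)\). The matching \(\{(1,2),(3,4)\}\) on four points thus gives the multilinear function
\[
(v_1, v_2, v_3, v_4) \longmapsto \det(v_1 \mid v_2)\,\det(v_3 \mid v_4),
\]
which is \(SL_2\)-invariant because \(SL_2\) preserves \(2\times 2\) determinants.)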

Week 5 – October 24, 2025

Speaker: Ethan MacBrough

Title: Working with singularities explicitly

Abstract: Algebraic geometers have developed several notions of “nice singularity” which are useful in applications; typically these nice singularities are chosen to balance flexibility (interesting geometric constructions you might want to perform will often lead to singularities) and tame behavior (any interesting geometric conclusions you might want to draw will be screwed up by sufficiently bad singularities). A third desirable feature is ease of determining whether or not a given singularity is nice; unfortunately, this third property often gets kicked down the road in favor of optimizing the balance between the first two. Aside from the psychological distress this may cause, it becomes seriously problematic when you’re trying to “run experiments” (i.e. construct interesting examples) to analyze the subtle behavior of these singularities. Thankfully, there are several tricks which are generally effective for pinning down the singularity type when you have explicit equations. In this talk I will go through a few examples showing some of these techniques in action. Most of the talk will have no real prerequisites as long as you’re willing to black-box some details, but in the later part I will assume familiarity with basic homological algebra.
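(The talk’s specific tricks aren’t listed in the abstract, but the most basic explicit-equations computation in this spirit, included here only as a warm-up, is the Jacobian criterion. For the quadric cone \(f = x^2 + y^2 - z^2 = 0\), the singular locus is where \(f\) and all of its partial derivatives vanish:
\[
\nabla f = (2x,\; 2y,\; -2z) = 0 \iff (x,y,z) = (0,0,0),
\]
so, away from characteristic \(2\), the cone is singular exactly at the origin.)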

Week 6 – October 31, 2025

Speaker: Connor McCausland

Title: TBD

Abstract: TBD

Week 7 – November 7, 2025

Speaker: Dan Guyer

Title: TBD

Abstract: TBD

Week 8 – November 14, 2025

Speaker: Varun Shah

Title: TBD

Abstract: TBD

Week 9 – November 21, 2025

Speaker: Ting Gong

Title: TBD

Abstract: TBD

Week 11 – December 5, 2025

Speaker: Bryan Lu

Title: TBD

Abstract: TBD