Distinguished Women in Mathematics Lecture Series

Upcoming Speakers

Spring 2026

Yusu Wang

Professor in the Halıcıoğlu Data Science Institute at the University of California, San Diego

Colloquium

Date & Time: Monday, April 20, 2-3 PM

Location: PMA 6.104

Talk: When and How Do GNNs Learn Generalizable Algorithmic Procedures?

Abstract: A central challenge in modern machine learning is learning generalizable procedures that remain effective on unseen, potentially out-of-distribution (OOD) data. Such generalization depends on a complex interplay among model architectures, task structures, data assumptions, and training methodologies. In this talk, I will focus on the interaction between model architecture and task structure in the context of graph learning. We are particularly interested in three questions: Do different graph neural networks learn fundamentally different algorithmic procedures? Can OOD generalization be achieved with only finite samples? And how do we probe what is learned internally? To explore these questions, I will present our initial studies using two concrete settings, graph partitioning/clustering and graph shortest-path computation, as testbeds for understanding how graph models internalize and apply algorithmic structure. This talk is based on joint work with several collaborators, whom I will acknowledge during the talk.

Pizza Seminar

A preparatory talk by Jen Rozenblit

Date & Time: Friday, April 17, 1-2 PM

Location: PMA Vaughn Lounge

Talk: Graph Neural Networks & Algorithmic Alignment

Abstract: Machine learning is fundamentally constrained: we have limited data and ambitious goals for what to do with it. This tension is especially acute for graph-structured data, where inputs vary in size and topology, and the "right" thing to learn depends heavily on the task. Given these constraints, when can a neural network trained on a finite collection of small examples be trusted to behave like a classical algorithm on inputs much larger than anything it has seen? I'll start by defining what a graph neural network (GNN) is, walking through the Bellman–Ford algorithm for shortest paths, and introducing the graph partitioning problem as another representative task. With these in hand, I'll motivate why we'd want a neural network to learn such procedures in the first place, and what "learn" should even mean when we care about generalizing to graphs much larger than those seen during training. Along the way I'll introduce algorithmic alignment: the idea that a network learns an algorithm more efficiently when its own computational structure already mirrors that algorithm's. No background is assumed; I'll define essentially everything from scratch and give intuition for anything I don't state rigorously. Come for the pizza, stay for the vibes.
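For those who'd like a preview of the Bellman–Ford algorithm mentioned in the abstract, here is a minimal illustrative sketch (not material from the talk itself): the algorithm repeatedly relaxes every edge, an iterative update that message-passing GNNs can, in principle, align with.

```python
# Minimal Bellman-Ford sketch (illustrative only, not from the talk).
# Shortest-path distances from a source are found by relaxing every
# edge |V| - 1 times; each pass propagates distance information one
# more hop, much like a round of message passing in a GNN.

def bellman_ford(num_nodes, edges, source):
    """edges: list of (u, v, weight) tuples; returns distances from source."""
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0
    for _ in range(num_nodes - 1):          # |V| - 1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:       # relax edge (u, v)
                dist[v] = dist[u] + w
    return dist

# Tiny example graph: 0->1 (4), 0->2 (1), 2->1 (2), 1->3 (1)
edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(bellman_ford(4, edges, 0))  # [0, 3, 1, 4]
```

The route 0 → 2 → 1 (cost 3) beats the direct edge 0 → 1 (cost 4), which is exactly the kind of multi-hop reasoning the talk asks whether a GNN can learn and generalize.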

Lunch

Faculty and graduate students are invited.

Date & Time: Monday, April 20, noon-1 PM

Location: PMA 8.136
