Schedule

Monday, May 11

Time        | Event                                                                                                                                      | Chair
9:00–9:40   | Opening remarks                                                                                                                            | Matthieu Wyart
9:40–10:20  | Jacob Andreas - Knowledge and self-knowledge in language models                                                                            |
10:20–10:50 | Coffee break                                                                                                                               |
10:50–11:30 | Michael Gastpar - Universal Prediction Perspectives on LLMs                                                                                |
11:30–12:10 | Andrew Saxe - Solvable dynamics of aspects of language acquisition in neural networks                                                      |
12:10–13:30 | Lunch                                                                                                                                      |
13:30–14:10 | Alessandro Laio - Identifying semantic information in deep representations of language                                                     | Gemma Boleda
14:10–14:50 | Emily Cheng - Higher representational dimensionality signifies feature abstraction in brains and machines during language processing       |
14:50–15:30 | Isabel Papadimitriou - Syntax from data points: understanding the learning and representation of structural abstraction in language models |
15:30–16:00 | Coffee break                                                                                                                               |
16:00–16:40 | Alessandro Favero - How Combinatorial Creativity Emerges in Generative Diffusion Models                                                    | Greta Tuckute
16:40–17:20 | Eric J. Michaud - Neural network activation geometry and units of decomposition                                                            |
17:20–18:00 | Jean-Rémi King - Emergence of Language in the Human Brain                                                                                  |

Tuesday, May 12

Time        | Event                                                                                      | Chair
9:00–9:40   | Federica Gerace - Testing transformer learnability on the iterated prime factorization of the natural numbers | Francesco Cagnetta
9:40–10:20  | Emmanuel Abbe - Dynamic reasoning and planning models                                      |
10:20–10:50 | Coffee break                                                                               |
10:50–11:30 | Greta Tuckute - From Sounds to Linguistic Meanings in Biological and Artificial Systems    |
11:30–12:10 | Surbhi Goel - Effective Human-AI Collaboration via Communicating Uncertainty               |
12:10–13:30 | Lunch                                                                                      |
13:30–14:10 | Tankut Can - Memory for Narratives and the Entropy of English                              | Sebastian Goldt
14:10–14:50 | Daniel J. Korchinski - Linear analogies and the geometry of model representations from text statistics |

Wednesday, May 13

Time        | Event                                                                                      | Chair
9:00–9:40   | Martin Schrimpf - Brain-Like Artificial Intelligence: Alignment and Misalignment in NeuroAI | Marco Baroni
9:40–10:20  | Antoine Bosselut - From Attention to Internalization: Reasoning as Test-Time Learning      |
10:20–10:50 | Coffee break                                                                               |
10:50–11:30 | Gemma Boleda - LLMs as a synthesis between symbolic and distributed approaches to language |
11:30–12:10 | Blake Bordelon - What are models scaling towards? Universal training dynamics of transformers and hyperparameter transfer across model size and training horizon |
12:10–13:30 | Lunch                                                                                      |
13:30–14:10 | Maissam Barkeshli - TBA                                                                    | Florent Krzakala
14:10–14:50 | Mary Letey - Solvable models of in-context learning                                        |
14:50–15:30 | Eshaan Nichani - Sharp Scaling Laws for Spectral Optimizers in Learning Associative Memory |
15:30–16:00 | Coffee break                                                                               |
16:00–18:20 | Panel discussion: G. Boleda, F. Cagnetta, J.-R. King, I. Papadimitriou, A. Saxe, M. Schrimpf | Daniel S. Fisher