This seminar aims to strengthen the links between the different laboratories in Saclay in the fields of Applied Mathematics, Statistics and Machine Learning. The seminar takes place on the first Tuesday of every month, with two presentations followed by light refreshments. The location of the seminar rotates to accommodate the different labs.
Organization
Due to access restrictions, you need to register for the seminar. A registration link is provided in the description and is also sent with the seminar announcement. Registering also helps us estimate food quantities, so if you think you will come, please register, even if you are unsure!
To avoid missing the next seminar, please subscribe to the announcement mailing list palaisien@inria.fr.
You can also add the seminar calendar to your own calendar (see below).
Next seminars
REGISTER
12 May 2026, 12h At Inria Saclay - Amphi Sophie Germain
Intelligent robots do not just respond to commands; they imagine what you meant, what you wanted, what you believed. And they do this while learning from very little, and running on a chip in your living room.
In this talk, I will present recent advances in generative modeling that aim to equip embodied agents with efficient models that run faster, learn from less data, and imagine possible futures under uncertainty.
This talk explores two surprising mechanisms that shape the behavior of diffusion-based generative models. First, we uncover an implicit regularization effect in denoising score matching: large learning rates prevent models from overfitting the training data, mitigating memorization without explicit constraints. Second, we show that in Latent Diffusion Models, running the diffusion too long can worsen sample quality, a phenomenon rooted not in numerical instability, but in the dimensionality reduction of the latent space. We derive conditions linking optimal stopping time to latent dimension, revealing it as a key hyperparameter for generation quality. Together, these findings highlight how early (diffusion) stopping and optimization dynamics jointly govern generalization in modern diffusion models.
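For readers unfamiliar with the objective behind the first result, here is a minimal sketch of denoising score matching in PyTorch. It is illustrative only: the toy 2-D data, network size, noise level, and learning rate are assumptions made for this sketch, not the speaker's actual setup.

```python
# Minimal denoising score matching sketch (illustrative; the toy data,
# architecture, noise level, and learning rate are assumptions, not the
# speaker's setup).
import torch
import torch.nn as nn

torch.manual_seed(0)
data = torch.randn(512, 2)   # toy 2-D training set
sigma = 0.5                  # single fixed noise level, for simplicity

# Small MLP s_theta(x) approximating the score of the noised data density.
score_net = nn.Sequential(nn.Linear(2, 64), nn.SiLU(), nn.Linear(64, 2))

# The abstract argues that large learning rates act as implicit regularization
# against memorization; this lr is just a placeholder to experiment with.
opt = torch.optim.Adam(score_net.parameters(), lr=1e-2)

for step in range(1000):
    x = data[torch.randint(len(data), (128,))]
    eps = torch.randn_like(x)
    x_noisy = x + sigma * eps
    # DSM target: the score of the Gaussian perturbation kernel, -eps / sigma,
    # so the loss is || s_theta(x_noisy) + eps / sigma ||^2.
    loss = ((score_net(x_noisy) + eps / sigma) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Varying `lr` in a toy setup like this is one way to probe the memorization effect the abstract describes.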
REGISTER
02 Jun 2026, 12h At Inria Saclay - Salle Gilles Kahn
The program and organization of this seminar are driven by a scientific committee composed of members of the different laboratories in Saclay. The current members of the committee are: