About the seminar

This seminar aims to strengthen the links between the different laboratories in Saclay working in applied mathematics, statistics, and machine learning. It is held on the first Tuesday of every month, with two presentations followed by light refreshments. The location rotates to accommodate the different labs.

Organization

Due to access restrictions, you need to register for the seminar. A registration link is provided in the description and is also sent with each seminar announcement. Registering also helps us estimate catering quantities, so if you think you will come, please register, even if you are unsure!

To avoid missing the next seminar, please subscribe to the announcement mailing list palaisien@inria.fr.
You can also add the seminar calendar to your own calendar (see below).

Next seminars

REGISTER: 04 Nov 2025, 12h, location TBA
Samuel Hurault - From Score Matching to Diffusion: A Fine-Grained Error Analysis in the Gaussian Setting
Sampling from an unknown distribution, accessible only through discrete samples, is a fundamental problem at the core of generative AI. The current state-of-the-art methods follow a two-step process: first, estimating the score function (the gradient of a smoothed log-distribution) and then applying a diffusion-based sampling algorithm -- such as Langevin dynamics or diffusion models. The resulting distribution's correctness can be impacted by four major factors: the generalization and optimization errors in score matching, and the discretization and minimal noise amplitude in the diffusion. In this paper, we make the sampling error explicit when using a diffusion sampler in the Gaussian setting. We provide a sharp analysis of the Wasserstein sampling error that arises from these four error sources. This allows us to rigorously track how the anisotropy of the data distribution (encoded by its power spectrum) interacts with key parameters of the end-to-end sampling method, including the number of initial samples, the stepsizes in both score matching and diffusion, and the noise amplitude. Notably, we show that the Wasserstein sampling error can be expressed as a kernel-type norm of the data power spectrum, where the specific kernel depends on the method parameters. This result provides a foundation for further analysis of the tradeoffs involved in optimizing sampling accuracy.

Based on this preprint
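The Gaussian setting of the abstract makes the whole two-step pipeline easy to illustrate numerically. Below is a minimal NumPy sketch, not the paper's analysis: the score is estimated from a finite sample (where the generalization error enters), an unadjusted Langevin sampler is run with a fixed step size (where the discretization error enters), and the Wasserstein-2 error is evaluated with the closed form available for one-dimensional Gaussians. All parameter values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth 1D Gaussian (hypothetical parameters, illustration only)
mu, sigma = 1.5, 2.0
data = rng.normal(mu, sigma, size=5_000)

# Step 1: score estimation. For a Gaussian, the score is linear,
# grad log p(x) = -(x - mu) / sigma^2, so "score matching" reduces to
# estimating the mean and variance from finitely many samples.
mu_hat, var_hat = data.mean(), data.var()

def score(x):
    return -(x - mu_hat) / var_hat

# Step 2: unadjusted Langevin sampling with step size h.
h, n_steps, n_chains = 0.05, 2_000, 5_000
x = rng.normal(size=n_chains)  # arbitrary standard-normal initialization
for _ in range(n_steps):
    x = x + h * score(x) + np.sqrt(2 * h) * rng.normal(size=n_chains)

# In 1D the Wasserstein-2 distance between Gaussians is explicit:
# W2(N(m1, s1^2), N(m2, s2^2))^2 = (m1 - m2)^2 + (s1 - s2)^2.
w2 = np.sqrt((x.mean() - mu) ** 2 + (x.std() - sigma) ** 2)
print(f"end-to-end W2 sampling error: {w2:.3f}")
```

Shrinking the sample size, enlarging the step size, or truncating the chain early each inflates the printed error, which is the kind of tradeoff the talk's analysis quantifies.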
Mathurin Massias - On the Closed-Form of Flow Matching: Generalization Does Not Arise from Target Stochasticity
Modern deep generative models can now produce high-quality synthetic samples that are often indistinguishable from real training data. A growing body of research aims to understand why recent methods - such as diffusion and flow matching techniques - generalize so effectively. Among the proposed explanations are the inductive biases of deep learning architectures and the stochastic nature of the conditional flow matching loss. In this work, we rule out the noisy nature of the loss as a primary contributor to generalization in flow matching. First, we empirically show that in high-dimensional settings, the stochastic and closed-form versions of the flow matching loss yield nearly equivalent losses. Then, using state-of-the-art flow matching models on standard image datasets, we demonstrate that both variants achieve comparable statistical performance, with the surprising observation that using the closed-form can even improve performance.

Based on this preprint
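Since the comparison in the abstract hinges on the difference between the stochastic conditional flow matching target and its closed-form conditional expectation, a small sketch may help make that distinction concrete. The snippet below is an illustration under standard linear-interpolation assumptions, not the authors' implementation: for a toy 2D empirical dataset, the closed-form target is the posterior-weighted average of per-sample velocities, while the stochastic target uses a single random coupling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy empirical dataset in 2D (hypothetical, illustration only)
data = rng.normal(size=(100, 2))

def closed_form_velocity(x, t):
    """Closed-form target for linear paths x_t = (1 - t) * x0 + t * x1,
    with x0 ~ N(0, I) and x1 uniform over the data: the conditional
    expectation E[x1 - x0 | x_t = x] is a softmax-weighted average of
    the per-sample velocities (x_i - x) / (1 - t)."""
    # posterior weights p(x1 = x_i | x_t = x) prop. to N(x; t*x_i, (1-t)^2 I)
    logw = -((x - t * data) ** 2).sum(axis=1) / (2 * (1 - t) ** 2)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return (w[:, None] * (data - x)).sum(axis=0) / (1 - t)

# Stochastic conditional flow matching target at the same point:
# one random coupling (x0, x1) gives a noisy but unbiased regression target.
t = 0.6
x1 = data[rng.integers(len(data))]
x0 = rng.normal(size=2)
xt = (1 - t) * x0 + t * x1
print("stochastic target :", x1 - x0)
print("closed-form target:", closed_form_velocity(xt, t))
```

Averaging the stochastic target over many couplings that land on the same x_t would recover the closed-form value, which is the sense in which the loss noise is removable, and why the talk can ask whether that noise matters for generalization.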
REGISTER: 02 Dec 2025, 12h, location TBA

Scientific Committee

The program and organization of this seminar are driven by a scientific committee composed of members of the different laboratories in Saclay. The current members of the committee are:

Funding

This seminar is made possible by the financial support of ENSAE and DataIA.