Introduction to Flow Matching and Diffusion Models: Building Generative AI from Scratch

MIT’s computer science course “Introduction to Flow Matching and Diffusion Models” explores cutting-edge techniques behind diffusion models and their applications. Co-taught by Peter Holderrieth (PhD student) and Ezra Erives (MEng student), the course assumes knowledge of linear algebra, real analysis, and basic probability theory, along with Python programming skills and some PyTorch experience.

The curriculum centers on continuous data such as images, videos, and protein structures, rather than discrete data such as the text handled by large language models (LLMs). Lectures cover stochastic differential equations (SDEs), flow matching, score matching, and conditional image generation through the lens of diffusion models.
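To give a flavor of the SDE material, generative diffusion models sample by numerically simulating a stochastic differential equation. A minimal sketch of the standard Euler–Maruyama scheme is below; this is an illustration only, not the course’s lab framework, and the `euler_maruyama` function and Ornstein–Uhlenbeck example are hypothetical names chosen here for demonstration.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t0=0.0, t1=1.0, n_steps=200, rng=None):
    """Simulate dX_t = drift(X_t, t) dt + diffusion(t) dW_t with Euler-Maruyama.

    Each step adds the deterministic drift term scaled by dt and a Gaussian
    increment scaled by sqrt(dt), which is the defining approximation of the scheme.
    """
    rng = rng or np.random.default_rng(0)
    dt = (t1 - t0) / n_steps
    x = np.asarray(x0, dtype=float).copy()
    t = t0
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + drift(x, t) * dt + diffusion(t) * np.sqrt(dt) * noise
        t += dt
    return x

# Example: an Ornstein-Uhlenbeck process, dX_t = -X_t dt + 0.5 dW_t,
# which pulls a batch of samples toward zero while injecting noise.
samples = euler_maruyama(lambda x, t: -x, lambda t: 0.5, x0=np.ones(1000))
```

In diffusion models, the hand-written drift above is replaced by a learned neural network, but the simulation loop has the same shape.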

Students get hands-on practice through three lab assignments, designed by Tommi Jaakkola (course sponsor and advisor), that guide them through working with SDEs, implementing flow matching and score matching, and building conditional image generation using provided Python frameworks. Solutions to the labs are available on GitHub for reference.

The course acknowledges the many people who contributed to its development, including MIT EECS staff members Lisa Bella, Ellen Reid, and others; Students for Open and Universal Learning (SOUL); The Missing Semester of Your CS Education project team; participants in the initial offering (MIT 6.S184/6.S975) during IAP 2025; and readers interested in the course material.

Overall, “Introduction to Flow Matching and Diffusion Models” offers an insightful exploration of diffusion models and flow matching, and a practical path to building generative AI from scratch.
