Whether you are a researcher, an engineer, a deep learning expert, or simply someone eager to learn more about these crucial methods at the core of modern AI, this program is designed for you!
The prestigious DLS 2025 lineup
Prof. Yulan He
King's College London
Prof. Dan Jurafsky
Stanford University
Prof. Elisa Ricci
University of Trento
Prof. Stuart Russell
UC Berkeley
Prof. Mihaela van der Schaar
University of Cambridge
Hugging Face @ DLS 2025
Elie Bakouch
Co-leader of SmolLM Team
Nouamane Tazi
Co-leader of Ultra-scale Playbook Team
Practical information
- When?
June 23rd, 2025 to July 4th, 2025
- Where?
Campus SophiaTech, Sophia Antipolis
- Language
English
- Target audience
Engineers, master and doctorate students, researchers
- Pre-requisite
▸ Master of Science
▸ If you do not have a strong background and practical experience in machine learning, you are strongly advised to register for the tutorials in addition to the conferences
▸ Currently employed or seeking employment
Registration
Please provide any information that helps us identify which registration-fee category applies to you.
- External companies / Individuals
Price per week (tutorials and conferences): 900 €
Price per week (conferences only): 500 €
- Partnering companies / Individuals
Price per week (tutorials and conferences): 810 €
Price per week (conferences only): 450 €
- External academic staff
Price per week (tutorials and conferences): 630 €
Price per week (conferences only): 350 €
- Academic staff from EFELIA Côte d’Azur consortium
Price per week (tutorials and conferences): 220 €
Price per week (conferences only): 220 €
Week 1
Tutorials: Monday and Tuesday
Conferences: Wednesday, Thursday, and Friday
Week 2
Tutorials: Monday and Tuesday
Conferences: Wednesday, Thursday, and Friday
Prices include each day's lectures or tutorials, labs included.
Detailed program
- Monday, June 23
Tutorial and lab: Build your own LLM from Scratch
- Morning: PyTorch refresher, Multi-Layer Perceptron, Recurrent Neural Network, applied to Natural Language Processing
- Afternoon: Tokenizer, Text embedding, Attention (see the sketch below)
Speakers: Prof. Frederic Precioso & Team EFELIA Côte d'Azur
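To make the tokenizer and embedding steps concrete, here is a minimal, illustrative PyTorch sketch (not the lab's actual code): a toy whitespace tokenizer feeding an embedding table. Real LLMs use subword tokenizers such as BPE; the corpus and dimensions below are placeholders.

```python
import torch
import torch.nn as nn

# Toy corpus; purely illustrative, not the lab material.
corpus = ["the cat sat on the mat", "the dog sat on the log"]

# Whitespace-level vocabulary (real tokenizers use subwords, e.g. BPE).
vocab = {tok: i for i, tok in enumerate(sorted({w for s in corpus for w in s.split()}))}

def tokenize(text: str) -> torch.Tensor:
    """Map each whitespace token to its integer id."""
    return torch.tensor([vocab[w] for w in text.split()])

# An embedding table turns token ids into dense vectors the model can process.
embed = nn.Embedding(num_embeddings=len(vocab), embedding_dim=16)

ids = tokenize("the cat sat")   # shape: (3,)
vectors = embed(ids)            # shape: (3, 16)
print(ids.tolist(), vectors.shape)
```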
- Tuesday, June 24
Tutorial and lab: Build your own LLM from Scratch
- Morning: Transformer-Encoder, Attention, Multi-Head Attention (see the attention sketch below)
- Afternoon: LLM (Encoder) for Text Classification and for Image + Text Classification
Speakers: Prof. Frederic Precioso & Team EFELIA Côte d'Azur
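As a preview of the morning's material, here is a minimal, illustrative sketch of scaled dot-product attention, the building block that multi-head attention runs in parallel over several projected subspaces (shapes are placeholders, not the lab's code):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (..., seq_q, seq_k)
    weights = torch.softmax(scores, dim=-1)            # attention distribution
    return weights @ v                                 # (..., seq_q, d_v)

# One "head": batch of 2 sequences, length 5, dimension 8.
q = k = v = torch.randn(2, 5, 8)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 5, 8])
```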
- Wednesday, June 25
- Morning
Speaker: Elie Bakouch, Co-leader of SmolLM Team at Hugging Face
Topic: SmolLM, how Small Language Models can compete with LLMs
Title: Pre-training smol and large LLMs
Abstract: In this talk, you'll get a clear overview of current best practices for pre-training language models, both small and large. I'll discuss the latest optimizers (AdamW, Muon, Shampoo), how to choose learning rate schedules and batch sizes, stability improvements (normalization, weight decay), recent architecture innovations (linear attention, MoE), and effective methods for extending context lengths (RoPE, chunked attention). Examples from models like DeepSeek and the upcoming Llama 4 will make these techniques concrete and actionable.
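As one concrete example of the scheduling choices the talk covers, here is a hedged sketch of linear warmup followed by cosine decay with AdamW; the step counts, learning rate, and weight decay below are illustrative placeholders, not values from the talk:

```python
import math
import torch

model = torch.nn.Linear(10, 10)  # stand-in for a transformer
opt = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.1)

warmup_steps, total_steps = 100, 1000

def lr_lambda(step: int) -> float:
    """Linear warmup, then cosine decay to zero: a common pre-training schedule."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
for step in range(total_steps):
    opt.step()    # gradient computation omitted in this sketch
    sched.step()  # update the learning rate for the next step
```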
- Afternoon
Speaker: Nouamane Tazi, Co-leader of Ultra-scale Playbook Team at Hugging Face
Topic: The Ultra-Scale Playbook - Training LLMs Efficiently on GPU Clusters
Title: The Ultra-Scale Talk: Scaling Training to Thousands of GPUs
Abstract: Training large language models (LLMs) efficiently requires scaling across multiple GPUs. This lecture will explore methodologies for expanding training from a single GPU to thousands, covering 5D parallelism techniques. Attendees will gain insights into optimizing throughput, GPU utilization, and training efficiency, with practical examples and benchmarks.
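To ground the discussion, below is an illustrative sketch of the simplest of these parallelism axes, data parallelism: each rank computes gradients on its own shard of the batch, then the gradients are averaged with an all-reduce. This is a didactic sketch only; in practice one would use torch.nn.parallel.DistributedDataParallel rather than hand-rolling this step.

```python
import torch
import torch.distributed as dist

def allreduce_gradients(model: torch.nn.Module) -> None:
    """Average gradients across data-parallel ranks (the core step behind DDP).

    Assumes torch.distributed has been initialised (e.g. via torchrun) and that
    each rank has already run backward() on its own shard of the batch.
    """
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)  # sum across ranks
            p.grad /= world_size                           # turn sum into mean
```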
- Thursday, June 26
- Speaker: Prof. Mihaela van der Schaar, University of Cambridge (UK)
- Topic: Machine Learning and Data-centric AI for Healthcare and Medicine
- Friday, June 27
- Speaker: Prof. Yulan He, King’s College London (UK)
- Topic: Self-evolution of large language models (LLMs)
Title: Self-Evolution of Large Language Models
Abstract: This tutorial explores the emerging concept of self-evolution in large language models (LLMs), where models self-evaluate, refine, and improve their reasoning capabilities over time with minimal human intervention. We will start with the technical foundations behind self-improvement, including approaches such as bootstrapped reasoning, synthesising reasoning and acting, verbalised reinforcement learning, and LLM learning via self-play or self-planning. We will then present case studies illustrating LLM self-evolution in various applications, such as question answering, student answer scoring, causal event extraction, and the use of LLM agents in murder mystery games. In addition, we will discuss the challenges of model alignment, control, and safety in the context of LLM self-evolution. Finally, we will conclude the tutorial with an outlook for future research.
- Monday, June 30
Tutorial and lab: Build your own LLM from Scratch
- Morning: Transformer-Decoder, Masked Multi-Head Attention, LLM (Decoder) for Text Generation (see the causal-mask sketch below)
- Afternoon: LLM (Encoder-Decoder) with Cross-Attention, for Translation and Summarization
Speakers: Prof. Frederic Precioso & Team EFELIA Côte d'Azur
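The causal (lower-triangular) mask is what distinguishes the masked attention of the decoder from the encoder's: each position may attend only to itself and earlier positions. A minimal, illustrative sketch with placeholder scores:

```python
import torch

seq_len = 5
# Lower-triangular mask: position i may attend only to positions <= i.
mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

scores = torch.randn(seq_len, seq_len)             # stand-in for Q K^T / sqrt(d_k)
scores = scores.masked_fill(~mask, float("-inf"))  # block attention to future tokens
weights = torch.softmax(scores, dim=-1)            # each row sums to 1 over allowed positions
print(weights)                                     # upper triangle is exactly zero
```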
- Tuesday, July 1st
Tutorial and lab: Build your own LLM from Scratch
- Morning: Reinforcement Learning, Reinforcement Learning from Human Feedback (RLHF) (see the reward-model sketch below)
- Afternoon: Use case: DeepSeek R1
Speakers: Prof. Frederic Precioso & Team EFELIA Côte d'Azur
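As a taste of the RLHF material, here is an illustrative sketch (not the lab's code) of the pairwise Bradley-Terry loss commonly used to train the reward model: the reward assigned to the human-preferred response is pushed above the reward of the rejected one. The reward values below are made-up placeholders.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: -log sigmoid(r_chosen - r_rejected), batch-averaged."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Scalar rewards the model assigned to preferred vs. rejected responses.
r_chosen = torch.tensor([1.2, 0.3, 0.8])
r_rejected = torch.tensor([0.4, 0.5, -0.1])
print(reward_model_loss(r_chosen, r_rejected))  # smaller when chosen > rejected
```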
- Wednesday, July 2nd
- Speaker: Prof. Elisa Ricci, University of Trento (Italy)
- Topic: Foundation models for multimedia / Computer vision and vision-language models
- Thursday, July 3rd
- Speaker: Prof. Dan Jurafsky, Stanford University (USA)
- Topic: LLM assessment and ethics
- Friday, July 4th
- Speaker: Prof. Stuart Russell, University of California at Berkeley (USA)
- Topic: From Reinforcement Learning from Human Feedback for LLMs to Assistance Games
Title: What if we succeed?
Abstract: Many experts claim that recent advances in AI put artificial general intelligence (AGI) within reach. Is this true? If so, is that a good thing? Alan Turing predicted that AGI would result in the machines taking control. I will argue that Turing was right to express concern but wrong to think that doom is inevitable. Instead, we need to develop a new kind of AI that is provably beneficial to humans. Unfortunately, we are heading in the opposite direction and we need to take steps to correct this. Even so, questions remain about whether human flourishing is compatible with AGI.