This course explores Large Language Models (LLMs), from their fundamental principles to cutting-edge research directions. We aim to discuss the design and future of AI systems through lecture content and student-led presentations.
The course is structured around topics such as transformer architectures, empirical behaviors, training paradigms, and safety considerations. Students will also explore emerging challenges and the broader implications of AI technologies.
- Introduce fundamental concepts in AI and LLMs.
- Discuss architecture and principles of LLMs, including transformers.
- Explore topics in LLMs and modern AI systems, such as training paradigms (pre-training, post-training, alignment), inference/test-time computation, embeddings/representations, evaluations, capabilities, safety/security (jailbreaking, oversight, hallucinations, uncertainty), and interpretability (circuits).
- Student presentations on key research papers and recent breakthroughs.
- Course Syllabus.
- Lecture Notes (work in progress).
- What is AI? Definitions and Goals
- Historical Overview of Artificial Intelligence
- The Challenge of AGI and Feasibility of AI in Daily Tasks
- Input/Output Processing in AI Systems
- Transformer Mechanisms and Attention (a minimal attention sketch follows this list)
- Key Architecture Details: Positional Encoding, Faster Attention
- Variations Across Model Architectures (e.g., GPT, Llama)
- Empirical Behavior: Scaling Laws, Emergence
- Extensions: Vision and Multimodal Language Models
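As a quick orientation for the attention topic above, here is a minimal sketch of single-head scaled dot-product self-attention in plain NumPy. It is illustrative only; the function and variable names are not taken from any lecture or codebase.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, causal=True):
    """Minimal single-head attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (seq, seq) similarity scores
    if causal:
        # Mask out future positions so each token attends only to the past.
        mask = np.triu(np.ones_like(scores), k=1).astype(bool)
        scores = np.where(mask, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of value vectors

# Toy example: 4 tokens with 8-dimensional queries/keys/values.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

A full transformer block adds learned query/key/value projections, multiple heads, residual connections, normalization, and an MLP, which the lectures cover in detail.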
Pre-Training and Post-Training (Lec 07)
- Pre-Training Paradigms
- Post-Training: Fine-Tuning and Instruction Tuning
- Alignment: Reward Learning and Reinforcement Learning from Human Feedback (RLHF); one common training objective is sketched below
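For the alignment topic above, one common formulation of the KL-regularized RLHF objective (notation such as $r_\phi$, $\pi_{\mathrm{ref}}$, and $\beta$ is illustrative) is:

$$
\max_{\pi_\theta}\;\; \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}\!\left[ r_\phi(x, y) \right] \;-\; \beta\, \mathbb{E}_{x \sim \mathcal{D}}\!\left[ \mathrm{KL}\!\left( \pi_\theta(\cdot \mid x) \,\big\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \right) \right]
$$

Here $r_\phi$ is a learned reward model, $\pi_{\mathrm{ref}}$ is the supervised fine-tuned reference policy, and $\beta$ trades off reward maximization against staying close to the reference.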
- Simple and Advanced Sampling Methods (a small sampling sketch follows this list)
- Prompting, Chain-of-Thought, and Tree-of-Thought
- Reasoning
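To make the sampling topic concrete, below is a small, self-contained sketch of temperature scaling and top-k filtering applied to a vector of next-token logits. It is illustrative only and not tied to any particular model or library.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=50, rng=None):
    """Sample a token id from logits using temperature scaling and top-k filtering."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    if top_k is not None and top_k < logits.size:
        # Keep only the top_k highest-scoring tokens; drop the rest.
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy example: a 10-token vocabulary with random scores.
rng = np.random.default_rng(0)
print(sample_next_token(rng.normal(size=10), temperature=0.7, top_k=5, rng=rng))
```

Lower temperatures and smaller top-k make generation more deterministic; prompting strategies such as chain-of-thought operate on top of whichever decoding scheme is used.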
- Jailbreaking and Oversight Mechanisms
- Addressing Hallucinations in AI Systems
- Ensuring Robustness and Security
- Embeddings and Representations
- Transformer Circuits
After the initial lectures, students will lead presentations on topics of their choice, covering recent advances or open research questions in AI.
- Foundations of Large Language Models, U of Michigan, 2024
- Language Modeling from Scratch, Stanford, Spring 2024
- Recent Advances on Foundation Models, U of Waterloo, Winter 2024
- Large Models, U of Toronto, Winter 2025
- Andrej Karpathy's Neural Networks: Zero to Hero video lectures: a fully code-based, hands-on tutorial on implementing basic autodiff, neural nets, language models, and a GPT-2 mini (124M parameters).
- The Llama 3 Herd of Models describes the open-weights Llama models developed by Meta; possibly the most information-dense public reference on how LLMs are built.
- DeepSeek-V3 Technical Report and DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning describe the open-weights DeepSeek V3 and R1 models, which combine several training innovations to achieve performance comparable to some top closed models.
- The corresponding sections in the Understanding Deep Learning book. See also the associated tutorial posts: LLMs; Transformers 1, 2, 3; Training and fine-tuning; Inference.
- Foundations of Large Language Models book