1. Goal
- Students will be able to critically read, assess, and discuss research work in Trustworthy Machine Learning (TML).
- Students will gain the technical background to implement basic TML techniques in a deep learning framework.
- Students will be ready to conduct their own research in TML and make contributions to the research community.
2. Prerequisites
- Familiarity with Python and PyTorch coding.
- A pass grade from the Deep Learning Course (or equivalent).
- Basic knowledge of machine learning concepts.
- Basic maths: multivariate calculus, linear algebra, probability, statistics, and optimisation.
3. TML Book
Previous course materials are available as a book: https://trustworthyml.io/ (also on arXiv).
The book will be useful for the course. However, the course materials are updated yearly to stay aligned with the latest research, so the book will not cover the newest topics.
4. Schedule
| # | Date | Content | Quiz | Project |
|---|---|---|---|---|
| L1 | Mar 06 | Orientation (short session) | — | — |
| L2 | Mar 13 | I. Human → AI: ML Foundations & Generalisation Primer | Quiz 0 (trial) | — |
| L3 | Mar 20 | Underspecification & Cues | — | Team formation due (23:59) |
| L4 | Mar 27 | LLM Communication & Modularity | — | — |
| L5 | Apr 03 | II. AI → Human: Explanation & XAI | Quiz 1 | — |
| L6 | Apr 10 | Attribution Methods | — | — |
| L7 | Apr 17 | TDA Frontiers & Mechanistic Interpretability | — | — |
| L8 | Apr 24 | Proposal presentations (midterm week) | — | Proposal report due (23:59); Proposal presentation |
| L9 | May 01 | Uncertainty I (Aleatoric) | Quiz 2 | — |
| L10 | May 08 | Uncertainty II (Epistemic) | — | — |
| L11 | May 15 | Uncertainty III (LLMs) | — | — |
| L12 | May 22 | III. Privacy & Security: Landscape & Data Privacy | Quiz 3 | — |
| L13 | May 29 | Contextual Privacy & Model Integrity | — | — |
| L14 | Jun 05 | Adversarial Attacks & Jailbreaking | — | — |
| L15 | Jun 12 | IV. Synthesis & Wrap-up | Quiz 4 | Final report due (23:59); Peer eval due (23:59) |
| L16 | Jun 19 | Final Presentations | — | Final presentation |
5. Grading
| Component | Weight |
|---|---|
| Quizzes 1-4 (7.5% each) | 30% |
| Proposal presentation | 10% |
| Proposal report | 10% |
| Final presentation | 25% |
| Final report | 25% |
| Total | 100% |
Late submissions are not accepted. A missed deadline counts as a zero.
6. Quizzes
- Format: Google Forms, accessible via QR code in class (link shared on Slack simultaneously).
- Timing: Strict 10-minute window at the start of class.
- Structure: 3 questions covering material from previous lectures. Attending lectures is essential.
- Quiz 0 is a trial run so students can get used to the format. It does not count towards the grade.
7. Projects
- Team size: 3 students per team.
- Formation: Use the #team-formation Slack channel to find team members. Teams must be finalised by L3 (Mar 20, 23:59).
- Compute: Each student receives a 50 USD Google Cloud Platform voucher for the project. Each team of three therefore has 150 USD of credit.
- Template: Use the ICML 2026 LaTeX template for both proposal and final reports.
- Deliverables:
- Proposal: 1-2 page report + 5 min presentation.
- Final: 4-page report (ICML format, excluding references) + 10 min presentation.
- Peer evaluation: Mandatory form at the end. Distribute 100 points among your team members based on contribution. Unequal splits will affect individual grades.
Example project topics
Topics are open; students choose direction and methods. Examples of the kind of project that fits the course:
- Test-time detection of prompt sensitivity. Prior work has shown that VLMs are sensitive to prompt phrasing. The open question is whether we can detect at test time that a prediction is prompt-sensitive and flag it to the user, without access to ground truth. Propose and evaluate a detection method. Models: CLIP ViT-B/32, LLaVA-7B (both run on a single T4). Datasets: ImageNet, EuroSAT, or other zero-shot classification benchmarks. References: PARC (CVPR 2025, quantifying VLM prompt sensitivity), WaffleCLIP (ICCV 2023, random descriptors match LLM-generated prompts).
- Surfacing knowledge conflicts in RAG. Recent work resolves parametric-contextual conflicts silently. An open problem is whether the system can instead detect and surface the conflict to the user, letting them decide. Build a conflict-detection pipeline and evaluate its precision. Models: Llama 3.1 8B or Mistral 7B with a FAISS index (A100 or T4 with quantisation). Datasets: Natural Questions or TriviaQA with synthetically altered retrieval passages. References: FaithfulRAG (ACL 2025, fact-level conflict modelling), AdaCAD (NAACL 2025, adaptive decoding for knowledge conflicts), JuICE (ICML 2025, test-time attention intervention).
- Mechanistic vs data attribution for the same failure. Recent work attributes model failures either to training data (TDA) or to internal mechanisms (layer-wise dynamics, circuit analysis). These perspectives are rarely compared. Pick a failure mode (e.g. hallucination, gender bias) and apply both attribution families to the same cases. Do they agree? Models: ViT-B on ImageNet for vision; Llama 3.1 8B or a smaller LLM for language. Datasets: task-specific failure sets you curate. References: Accountability Attribution (ICML 2025, tracing behaviour to training stages), DDA (EMNLP 2024, influence functions with fitting error correction).
- Confidence under distribution shift. Methods like BaseCal and EAGLE improve calibration on in-distribution data. Less is known about how confidence estimates degrade under distribution shift or across multi-turn conversations. Evaluate existing calibration methods on shifted inputs and propose a detection strategy. Models: Llama 3.1 8B or Mistral 7B (sampling-based methods need ~20-50 forward passes per input; budget for this). Datasets: TriviaQA, MMLU with domain-shifted or adversarially perturbed variants. References: IB-EDL (ICLR 2025, information-theoretic evidential calibration), Multicalibration (ICML 2024, group-wise calibration for LLMs).
- Paper reproduction. Reproduce the key experiments of a published paper covered in the course. Verify the claims, test on a different model or dataset, and report where the results hold and where they break. Choose a paper whose experiments fit the compute budget.
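To make the expected scope concrete, here is a minimal sketch of sampling-based confidence estimation (self-consistency), the kind of baseline the calibration topic above would start from. `sample_answer` is a hypothetical stand-in for an LLM sampling call with temperature > 0; everything in it is a simulation, not a real model.

```python
# Minimal sketch: self-consistency confidence via repeated sampling.
# `sample_answer` is a placeholder stub; a real project would replace it
# with a temperature-sampled call to e.g. Llama 3.1 8B or Mistral 7B.
import random
from collections import Counter

def sample_answer(question: str, rng: random.Random) -> str:
    # Simulated model: answers "Paris" 80% of the time, "Lyon" otherwise.
    return "Paris" if rng.random() < 0.8 else "Lyon"

def self_consistency(question: str, n_samples: int = 30, seed: int = 0):
    """Sample the model n times; confidence = frequency of the modal answer."""
    rng = random.Random(seed)
    answers = [sample_answer(question, rng) for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n_samples

answer, confidence = self_consistency("What is the capital of France?")
print(answer, confidence)
```

Note the cost this implies: with 20-50 samples per input, a full evaluation run multiplies your inference budget accordingly, which is why the topic description asks you to budget for it.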
8. Generative AI Policies
Students may use generative AI tools (e.g. LLMs, VLMs, image generators). However, you are solely responsible for all outputs you submit. We will apply heavy penalties for:
- Hallucinated or factually incorrect outputs.
- Unsound or fabricated citations.
- Plagiarised materials.
- AI slop (low-effort, generic AI-generated content).
Severe cases may be reported to the university for disciplinary action.
You must be ready to answer clarification requests from the lecturer or tutors at any point. Inability to explain your own work will be treated as evidence of academic misconduct.
Submissions that are very similar across class members will not be tolerated. AI tends to produce similar outputs across sessions and model families, so diversify your answers, especially for creative work. Suspected copying will be penalised.
9. Communication & Logistics
Language: English
Lecturer: Seong Joon Oh
Tutors: TBA
When: Fridays 13:00-16:00
Where: Yangjae Industry-Academia Campus (양재산학캠퍼스), main lecture hall (hybrid)
Email: stai.there@gmail.com for submissions, questions, and feedback.
Slack: Email us your name and preferred email address to be added. Use it for questions, announcements, and finding team members.