
Thomas Breuel
[introductory] Reasoning, Values, and Alignment in Large Language Models
Summary
This course develops the theoretical foundations and empirical methods needed to understand, measure, and evaluate alignment, moral reasoning, and value specification in large language models. We begin with Bayesian decision theory as a unifying framework, explaining how values—encoded as priors and loss functions—combine with factual representations to generate model behavior. Against this backdrop, we survey major normative and political traditions in moral and political philosophy, describing how their competing conceptions of value, obligation, and justice inform contemporary regulatory frameworks and alignment methodologies.
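The decision-theoretic framing above can be made concrete with a minimal sketch: values enter as a loss function over (action, state) pairs, beliefs enter as a posterior over states, and behavior is the action minimizing expected loss. The numbers and function name below are illustrative assumptions, not course material.

```python
import numpy as np

def bayes_optimal_action(posterior, loss):
    """Pick the action minimizing expected loss.

    posterior: shape (n_states,) probabilities over world states (the "facts").
    loss: shape (n_actions, n_states) costs encoding the agent's values.
    """
    expected_loss = loss @ posterior  # expected loss of each action
    return int(np.argmin(expected_loss))

# Two world states, two candidate actions (all values hypothetical).
posterior = np.array([0.7, 0.3])
loss = np.array([[0.0, 10.0],   # action 0: cheap if state 0, costly if state 1
                 [2.0,  2.0]])  # action 1: moderate cost either way
print(bayes_optimal_action(posterior, loss))  # → 1 (hedged action wins)
```

Note how changing only the loss matrix, with beliefs held fixed, changes the chosen action; this is the sense in which values and factual representations combine to generate behavior.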
The course presents a stance elicitation framework for objectively measuring political and moral positions encoded in LLM semantic space, built around a stance tensor indexed by entities, policies, and models. Empirical results illustrate a stable two-dimensional political structure accounting for approximately 90% of observed variance, with a correlation of r = 0.99 against human-coded reference measures. We also cover methods for distinguishing artifacts of training data composition from genuine reasoning capacity.
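The dimensionality claim can be illustrated with a hedged sketch: arrange stance scores as an (entity, policy, model) tensor, flatten to an entity-by-(policy, model) matrix, and check via PCA whether a few components dominate. The data here is synthetic with a planted two-dimensional structure; the framework's actual elicitation and scoring procedures are covered in Part 5.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_policies, n_models = 20, 12, 3

# Synthetic stance tensor with a planted rank-2 "ideological" structure
# plus small noise (an assumption for illustration, not real elicitation data).
axes = rng.normal(size=(n_entities, 2))                   # latent entity positions
loadings = rng.normal(size=(2, n_policies * n_models))    # policy/model loadings
stances = axes @ loadings + 0.1 * rng.normal(size=(n_entities, n_policies * n_models))
tensor = stances.reshape(n_entities, n_policies, n_models)

# PCA via SVD on the centered entity-by-(policy, model) matrix.
X = tensor.reshape(n_entities, -1)
X = X - X.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
explained = (s ** 2) / (s ** 2).sum()
print(explained[:2].sum())  # fraction of variance in the first two components
```

With the planted structure, the first two components capture nearly all variance, mirroring the kind of low-dimensional result reported in the course; on real elicitation data the fraction is an empirical question.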
Practitioners completing this course will be equipped to objectively characterize model alignment, identify and quantify value-laden biases, and analyze the normative commitments embedded in LLMs across diverse deployment contexts.
Syllabus
- Part 1 — How: Facts, Rules, and Decision Procedures
- Part 2 — Application: Moral Decision-Making in Autonomous Vehicles
- Part 3 — What: Value Systems and Moral Philosophy
- Part 4 — Mapping Political and Moral Space
- Part 5 — The Stance Elicitation Framework
- Part 6 — Empirical Results and Implications
References
Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach.
[Comprehensive coverage of search, logic, planning, knowledge representation, ontologies, expert systems, and probabilistic reasoning—widely used as the standard AI reference.]
Ronald Brachman & Hector Levesque, Knowledge Representation and Reasoning.
[In-depth treatment of semantic networks, frames, description logics, rule systems, and the foundations of symbolic inference and ontology design.]
Lewis Tunstall, Leandro von Werra & Thomas Wolf, Natural Language Processing with Transformers: Building Language Applications with Hugging Face.
[Detailed exploration of transformer architectures, pretraining/fine-tuning methods, prompting strategies, and production-grade examples for large-scale language models.]
Prerequisites
Participants should have a working knowledge of transformer-based language models (e.g., attention mechanisms, pretraining/fine-tuning workflows), proficiency in Python programming, and familiarity with linear algebra and probability. No prior background in symbolic AI is required, as logic and symbolic methods will be introduced during the course.
Short bio
Thomas Breuel works on deep learning and computer vision at NVIDIA Research. Prior to NVIDIA, he was a full professor of computer science at the University of Kaiserslautern (Germany), where he also led a research group on document analysis, computer vision, and deep learning. Earlier, he worked as a researcher at Google, Xerox PARC, the IBM Almaden Research Center, and IDIAP in Switzerland. He is an alumnus of the Massachusetts Institute of Technology and Harvard University. Contact Info: www.9x9.com.