
Brian D. Earp
[introductory] Credit, Blame, and Personalisation in Human-AI Cooperation
Summary
This series of three lectures explores emerging issues at the intersection of moral psychology and AI ethics, with a focus on human-AI cooperation. The first lecture examines how people assign credit and blame in joint human-AI decision-making, drawing on recent empirical and theoretical work published in Annals of the New York Academy of Sciences. The second lecture introduces AI-assisted consent-seeking in medical contexts (Consent-GPT), along with personalisation in bioethics and clinical care, focusing on the concept of personalised patient preference predictors (P4s) and their ethical implications. The final lecture investigates AUTOGEN (AI Unique Tailored Output Generator): a way of fine-tuning AI systems on an individual’s prior outputs (e.g., scholarly writing) to create personalised digital avatars that can produce new high-quality outputs in one’s style, raising complex ethical questions about the future of everything from scholarship to content creation.
Syllabus
Lecture 1: Credit and Blame in Human-AI Teams
- Foundations of moral psychology in joint action
- Experimental findings on credit and blame asymmetries
- Case studies from business, society, and the creative arts
- Normative implications for fairness and accountability
Lecture 2: AI-Assisted Consent and Personalisation in Medicine
- Informed consent in medicine and challenges with the status quo
- Introduction to “Consent-GPT” and conversational AI in healthcare
- Personalised patient preference predictors (P4s) and their ethical justification
- Benefits and risks of personalisation in clinical decision-making
Lecture 3: AUTOGEN and the Future of Personalised AI
- The AUTOGEN concept: AI avatars trained on individual output
- Technical underpinnings: fine-tuning and persona emulation (illustrated in the brief sketch following this outline)
- Philosophical and ethical concerns: authorship, identity, and consent
- Implications for scholarship, content creation, and beyond
- Toward governance frameworks for personalised AI tools
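For attendees unfamiliar with what "fine-tuning on an individual's prior outputs" involves in practice, the following Python sketch illustrates the general idea: adapting a small open-source causal language model to a folder of an author's own prior texts. This is a minimal, hypothetical illustration only, not the AUTOGEN pipeline described by Porsdam Mann et al. (2023); the base model, file paths, and hyperparameters are placeholder assumptions.

# Minimal, hypothetical sketch of "persona" fine-tuning: adapting a small
# open-source causal language model to one author's prior writings.
# NOT the AUTOGEN pipeline of Porsdam Mann et al. (2023); model name,
# paths, and hyperparameters below are illustrative assumptions.

from pathlib import Path

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"                    # placeholder base model
CORPUS_DIR = Path("my_prior_writing")  # plain-text files of the author's own work

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Load the author's prior outputs and tokenize them into fixed-length chunks.
texts = [p.read_text(encoding="utf-8") for p in CORPUS_DIR.glob("*.txt")]
dataset = Dataset.from_dict({"text": texts})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="personal_style_avatar",  # illustrative output directory
        num_train_epochs=3,
        per_device_train_batch_size=2,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)

trainer.train()
trainer.save_model("personal_style_avatar")

The resulting model can then be prompted to draft text "in the author's style", which is precisely what raises the questions of authorship, identity, and consent taken up in the lecture.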
References
Allen et al. (2024). Consent-GPT: is it ethical to delegate procedural consent to conversational AI? Journal of Medical Ethics, 50(2), 77-83. https://doi.org/10.1136/jme-2023-109347
Earp et al. (2024). Credit and blame for AI-generated content: Effects of personalization in four countries. Annals of the New York Academy of Sciences, 1542, 51-57. https://doi.org/10.1111/nyas.15258
Earp et al. (2024). A personalized patient preference predictor for substituted judgments in healthcare: Technically feasible and ethically desirable. American Journal of Bioethics, 24(7), 13-26. https://doi.org/10.1080/15265161.2023.2296402
Iglesias et al. (2025). Digital doppelgängers and lifespan extension: What matters? American Journal of Bioethics, 25(2), 95-110. https://doi.org/10.1080/15265161.2024.2416133
Porsdam Mann et al. (2023). Generative AI entails a credit–blame asymmetry. Nature Machine Intelligence, 5, 472-473. https://doi.org/10.1038/s42256-023-00653-1
Porsdam Mann et al. (2023). AUTOGEN: A personalized large language model for academic enhancement — ethics and proof of principle. American Journal of Bioethics, 23(10), 28-41. https://doi.org/10.1080/15265161.2023.2254064
Pre-requisites
N/A
Short bio
Brian D. Earp, PhD is Associate Professor of Biomedical Ethics at the National University of Singapore (NUS), Faculty Member of the NUS Artificial Intelligence Institute, and Associate Professor of Philosophy and of Psychology at NUS by courtesy. Brian also directs the Oxford-NUS Centre for Neuroethics and Society, based jointly at NUS and the University of Oxford, and is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center. In 2022, Brian was elected to the UK Young Academy under the auspices of the British Academy and the Royal Society. Brian’s work is cross-disciplinary, following training in philosophy, cognitive science, experimental psychology, history and sociology of science and medicine, and ethics. As of 2025, Brian serves as Editor-in-Chief of the Journal of Medical Ethics and of JME Practical Bioethics.