
Ricardo Baeza-Yates
[introductory] Introduction to Responsible AI
Summary
In the first part, to set the stage, we cover irresponsible AI: (1) discrimination (e.g., facial recognition, justice); (2) pseudoscience (e.g., biometric-based predictions); (3) limitations (e.g., human incompetence, minimal adversarial AI); (4) indiscriminate use of computing resources (e.g., large language models); and (5) the impact of generative AI (disinformation, mental health, and copyright issues). These examples reflect a personal selection but set the context for the rest of the tutorial, where we address three challenges: (1) principles and governance, (2) regulation, and (3) our cognitive biases. After this tutorial, attendees should understand the concepts, risks, ethics, governance, and regulation behind responsible AI.
Syllabus
1. Introduction
- Why responsible AI?
- Limitations of Data & ML
2. Irresponsible AI
- Discrimination
- Pseudoscience
- Pure human incompetence
- Unfair digital markets
- Environmental impact
- Generative AI (disinformation, copyright & mental health issues)
3. AI Ethics and Governance
- Ethical Values
- Instrumental principles (OECD, UNESCO, ACM)
- Responsible AI Governance
4. Regulation on the Use of AI
- AI Act (EU)
- Former AI Bill of Rights (US)
- Generative AI (China)
- Discussion
References
John Searle. Minds, Brains, and Programs. Behavioral and Brain Sciences, 1980.
Ricardo Baeza-Yates & Pablo Villoslada. Human vs. Artificial Intelligence. IEEE 4th Int. Conf. on Cognitive Machine Intelligence, 2022.
Marvin van Bekkum and Frederik Zuiderveen Borgesius. Digital welfare fraud detection and the Dutch SyRI judgment. European Journal of Social Security 23(4), 2021.
Blaise Agüera y Arcas, Margaret Mitchell & Alexander Todorov. Physiognomy’s New Clothes. Medium, 2017.
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. ACM Conference on Fairness, Accountability, and Transparency, March 2021.
Lauren Smiley. Aftermath of a Self-Driving Tragedy. Wired, 2022.
Ricardo Baeza-Yates. Language models fail to say what they mean or mean what they say. Venture Beat, 2022.
OECD. AI Principles Overview. 2019.
UNESCO. Recommendations on the Ethics of AI. 2021.
Ricardo Baeza-Yates, Jeanna Matthews et al. Principles for Responsible Algorithmic Systems. ACM, 2022.
Eticas Consulting. Guide to Algorithmic Auditing. 2021.
Ben Shneiderman. Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-centered AI Systems. ACM Transactions on Interactive Intelligent Systems 10(4), 2020.
European Union. The AI Act. 2021 (see also revised version from May 2023).
OSTP. Blueprint for an AI Bill of Rights. The White House, USA, 2022.
Cyberspace Administration of China. Measures for the Management of Generative Artificial Intelligence Services (Draft for Comments). 2023.
AI Now Institute. Five considerations to guide the regulation of “General Purpose AI”. 2023.
Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig & Sendhil Mullainathan. Human Decisions and Machine Predictions. NBER 23180, 2017.
Ricardo Baeza-Yates. Bias on the Web. Communications of the ACM, 2018.
Sorelle A. Friedler, Carlos Scheidegger & Suresh Venkatasubramanian. The (Im)possibility of Fairness: Different Value Systems Require Different Mechanisms for Fair Decision Making. Communications of the ACM 64(4), 2021.
Hessie Jones. Geoff Hinton Dismissed the Need for Explainable AI: 8 Experts Explain Why He’s Wrong. Forbes, 2018.
Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1, 2019.
NIST. Four Principles of Explainable AI. 2020.
Boris Babic, Sara Gerke, Theodoros Evgeniou, and I. Glenn Cohen. Beware explanations from AI in health care. Science 373, 2021.
Pre-requisites
None; the content is suitable for an interdisciplinary audience.
Short bio
Ricardo Baeza-Yates is the director of the new AI Institute at the Barcelona Supercomputing Center. From 2021 until early 2025 he was Director of Research at the Institute for Experiential AI of Northeastern University. He is also a part-time Professor at Universitat Pompeu Fabra in Barcelona and at Universidad de Chile in Santiago. Before that he was CTO of NTENT, a semantic search technology company based in California, and prior to these roles he was VP of Research at Yahoo Labs, based first in Barcelona, Spain, and later in Sunnyvale, California, from 2006 to 2016. He is co-author of the best-selling textbook Modern Information Retrieval, published by Addison-Wesley in 1999 and 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the Board of Governors of the IEEE Computer Society, and from 2012 to 2016 he was elected to the ACM Council. Since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named ACM Fellow and in 2011 IEEE Fellow. He received the Spanish National Research Award Ángela Ruiz Robles for applied research and technology transfer, given by the Scientific Computing Societies of Spain and the BBVA Foundation, in 2018, and the Chilean National Award on Applied and Technological Sciences in 2024, among other distinctions. He obtained a Ph.D. in computer science from the University of Waterloo, Canada. His areas of expertise are web search and data mining, information retrieval, bias and ethics in AI, data science, and algorithms in general.
Regarding the topic of this tutorial, he is actively involved as an expert in many initiatives, committees, and advisory boards related to responsible AI around the world: the Global Partnership on AI at the OECD, ACM's Technology Policy Council, where he co-chairs the AI Subcommittee, and IEEE's AI Committee. He is one of the two main authors of the new ACM Principles for Responsible Algorithmic Systems, as well as a member of the editorial committee of the AI and Ethics journal, where he co-authored an article highlighting the importance of research freedom in AI ethics.