AIces 2026
1st INTERNATIONAL SCHOOL ON THE COGNITIVE, ETHICAL AND SOCIETAL DIMENSIONS OF ARTIFICIAL INTELLIGENCE
Porto – Maia, Portugal · March 30 - April 2, 2026
Registration
Downloads
  • Call AIces 2026
  • Poster AIces 2026
  • Lecture Materials
  • Home
  • Schedule
  • Lecturers
  • Sponsors
  • News
  • Info
    • Travel
    • Accommodation
    • UMAIA / UP staff and students, LASI members, CAIRNE members
    • Visa
    • Code of conduct

All posts by manager

Carlos Castillo
Feb 23, 2026

Carlos Castillo—Algorithmic Fairness in High-Risk AI Applications

News by manager

Carlos Castillo, ICREA Research Professor at Universitat Pompeu Fabra, will lead the course “Algorithmic Fairness in High-Risk AI Applications.” Dr. Castillo is widely known for his research on fairness, discrimination-aware algorithms, risk modeling, and the societal impact of large-scale data systems.

This introductory course examines fairness concerns in contexts where algorithmic decisions may have profound implications for individuals and communities.

Key topics covered during the course:

  • High-Risk AI Contexts: Exploration of algorithms used in justice, employment, lending, education, and healthcare, where fairness is essential.
  • Definitions of Fairness & Bias: Overview of key fairness metrics, sources of bias, and their implications in automated decision-making.
  • Legal & Regulatory Frameworks: Introduction to emerging norms, such as EU AI Act requirements, that shape the development of high-risk AI systems.
  • Case Studies: Real-world examples illustrating how biased systems can produce harmful outcomes and how they can be audited and improved.

Dr. Castillo’s work bridges technical understanding with social responsibility, offering tools to evaluate and mitigate unfair impacts. Participants who wish to understand the ethical and societal stakes of AI in sensitive domains will find this course highly valuable.

Register for AIces 2026: https://aices.irdta.eu/2026/registration/
Event details: https://aices.irdta.eu/2026/

Thomas Breuel
Feb 16, 2026

Thomas Breuel—Facts and Rules in LLMs

News by manager

Thomas Breuel, Distinguished Engineer at NVIDIA Research and a leading expert in machine learning, document recognition, and computational models of intelligence, will teach “Facts and Rules in LLMs” at AIces 2026. His course explores how large language models represent factual knowledge, perform reasoning, and interact with symbolic structures.

This course offers an accessible technical foundation for understanding LLM behavior, limitations, and emerging approaches to reasoning.

Key topics covered:

  • Knowledge Representation in LLMs: How transformer architectures encode and retrieve factual information, and what governs model generalization.
  • Symbolic vs. Neural Reasoning: Examination of hybrid systems that integrate neural and logical components for more interpretable and robust reasoning.
  • Reasoning Benchmarks and Patterns: Analysis of chain-of-thought prompting, logic tasks, and mathematical reasoning benchmarks used to evaluate LLM performance.
  • Interpreting Model Behavior: Methods for examining how models arrive at answers, and what these insights reveal about their internal structure.

Dr. Breuel’s course will help participants move beyond black-box interpretations, offering a deeper conceptual understanding of how modern language models work.

Those interested in reasoning, interpretability, or the mechanics of LLMs will benefit greatly from this course.

Join AIces 2026 to learn more: https://aices.irdta.eu/2026/registration/
Event details: https://aices.irdta.eu/2026/

Susan Brennan
Feb 9, 2026

Susan Brennan—Where Are the Humans in Human-Centered AI?

News by manager

Susan Brennan, Professor of Psychology at Stony Brook University, will present her course “Where Are the Humans in Human-Centered AI?” at AIces 2026. Dr. Brennan is internationally recognized for her research in psycholinguistics, human communication, and interactive systems, with particular focus on how humans coordinate meaning in real-time dialogue.

Her course examines the cognitive foundations that underlie natural interaction, highlighting what AI systems must learn to engage effectively with human users.

Key topics covered:

  • Mechanisms of Human Communication: Insights into grounding, perspective-taking, conversational repair, and adaptive behavior, which form the basis for human collaboration.
  • Limitations of Generative AI: Analysis of why large language models can produce fluent output yet still fail to understand or participate meaningfully in collaborative dialogue.
  • Human–AI Interaction Dynamics: What is required for AI systems to act as genuine partners, not merely generators of plausible text?
  • Interdisciplinary Integration: Evidence from psychology, linguistics, and cognitive science illustrating how human insights can guide the next generation of interactive AI.

Dr. Brennan’s work underscores the importance of studying real human behavior to inform AI design, ensuring systems are intuitive, socially aware, and aligned with human expectations.
Attendees interested in communication, cognition, and human–AI collaboration will gain valuable insight from this course.

Secure your place for AIces 2026: https://aices.irdta.eu/2026/registration/
Event details: https://aices.irdta.eu/2026/

Ricardo Baeza-Yates
Feb 2, 2026

Ricardo Baeza-Yates—Introduction to Responsible AI

News by manager

Ricardo Baeza-Yates will lead the “Introduction to Responsible AI” course at AIces 2026, bringing his deep expertise in ethical challenges and societal implications of artificial intelligence. As a respected researcher and thought leader, he offers participants a comprehensive foundation in thinking critically about AI systems in real-world contexts.

Dr. Baeza-Yates will discuss:

  • Core ethical concepts: fairness, accountability, transparency, sustainability, and responsibility in AI design.
  • Origins of algorithmic bias: how and why bias emerges in data and models, with examples from real AI deployments.
  • Decision impacts: exploring AI effects in sensitive domains such as hiring, justice, and public services.
  • Frameworks and norms: introductory frameworks to guide responsible development and deployment.

Dr. Baeza-Yates has contributed significantly to web search, data mining, and responsible AI research, offering a lens that combines technical rigor with societal relevance. His course at AIces equips participants with tools to think ethically and act responsibly as designers, researchers, and practitioners.

This course is ideal for those new to AI ethics or seeking a structured foundation in responsible AI principles—from bias and fairness to accountability and governance.

Register now for AIces 2026 and explore how responsible AI practice can shape better outcomes for society: https://aices.irdta.eu/2026/registration/
Event details: https://aices.irdta.eu/2026/

Ming C. Lin
Jan 27, 2026

Ming C. Lin — Socially Responsible and Trustworthy AI

News by manager

We are honored to welcome Ming C. Lin, Distinguished University Professor at the University of Maryland and a globally recognized leader in AI, robotics, simulation, and virtual environments, as a keynote speaker at AIces 2026.

Her keynote, “Socially Responsible and Trustworthy AI,” explores cutting-edge methods for ensuring that modern AI systems align with human needs, values, and safety expectations. Dr. Lin will discuss:

  • Model Alignment: Techniques for aligning LLMs and VLMs with diverse demographic and personality traits to support equitable outcomes and community-wide engagement, especially in areas like health messaging.
  • Safe Reinforcement Learning: New approaches that integrate Linear Temporal Logic and differentiable simulation to ensure safety while improving learning efficiency.
  • Greener AI: A Time-Aware World Model (TAWM) that adaptively samples across timescales, reducing training cost while improving accuracy and energy efficiency.

Together, these innovations illustrate how socially responsible AI research can deliver fairer, safer, and more reliable technology for all population groups.

Dr. Lin is a Fellow of the ACM, IEEE, NAI, Eurographics, SIGGRAPH Academy, and IEEE VR Academy, with more than 400 publications and numerous awards recognizing her impact across computing, AI, and virtual reality.

Join us at AIces 2026 to learn directly from Ming C. Lin and explore how socially responsible and trustworthy AI can shape safer, fairer, and more sustainable technologies for the future.

Reserve your place today: https://aices.irdta.eu/2026/registration/
Event details: https://aices.irdta.eu/2026/

Savannah Thais
Jan 18, 2026

Savannah Thais — Measurement for Safer AI

News by manager

At AIces 2026, we are pleased to welcome Savannah Thais, Assistant Professor of Computer Science at Hunter College, City University of New York, where she leads the Science, Society, and AI Lab. Thais brings a unique interdisciplinary perspective to the event, combining deep technical knowledge with insights from governance, policy, and ethics.

Dr. Thais’ course at AIces 2026, “Measurement for Safer AI,” explores the crucial role of how we evaluate AI systems — especially as large models become more general-purpose yet remain opaque and difficult to assess reliably. Her work confronts a growing challenge in AI today: establishing rigorous metrics that meaningfully correspond to safety, fairness, robustness, and real-world impact.

The course covers:

  • Foundations of measurement for safe AI, emphasizing why metrics matter for transparency and governance
  • Fairness, bias, and representation, including tradeoffs and limits of quantitative fairness metrics
  • Robustness and interpretability, with approaches to evaluate reliability under distribution shifts
  • Benchmarking practices, metric design, and participatory evaluation methods
  • Measurement pitfalls and how frameworks can be integrated into auditing and regulatory contexts

Dr. Thais’ research background includes mechanistic interpretability, AI for Science, and quantitative frameworks that bridge technical rigor with policy relevance. Prior to her academic career, she worked on the ATLAS experiment at the Large Hadron Collider, applying her technical expertise to complex, data-intensive problems.

Her scientific and policy contributions have been recognized through leadership roles, including service on the American Physical Society Panel on Public Affairs and participation on the Board of Directors of Women in Machine Learning from 2019 to 2024.

Participants in Savannah Thais’ course at AIces 2026 will gain a richer understanding of why measurement is foundational to building safer, more trustworthy AI—and how thoughtful evaluation practices can shape better governance and societal outcomes.

Register for AIces 2026: https://aices.irdta.eu/2026/registration/
Event details: https://aices.irdta.eu/2026/

David Danks
Jan 5, 2026

Keynote spotlight: David Danks — “Trustworthy AI in an Untrustworthy World”

News by manager

We are delighted to welcome David Danks as a keynote speaker at AIces 2026 (Porto–Maia, Portugal | March 30 – April 2, 2026).

In his keynote, “Trustworthy AI in an Untrustworthy World,” Danks addresses a central tension in today’s AI discourse: while we increasingly call for AI that is trustworthy, responsible, ethical, or safe, much of the work in these areas assumes that AI is designed and deployed in a largely cooperative environment. But real-world settings are often the opposite—partially competitive, strategically misaligned, and shaped by conflicting goals and incentives.

This talk will explore what it actually means to build “trustworthy AI” when the surrounding world may be hostile to our values and interests, and will offer practical approaches for producing better AI systems under adversarial or high-conflict conditions.

About the keynote speaker

David Danks is Professor of Data Science, Philosophy, & Policy at the University of California, San Diego. Starting January 2026, he will be the Polk JSF Distinguished University Professor of Philosophy, AI, & Data Science at the University of Virginia. His research spans philosophy, cognitive science, and machine learning, with particular focus on their intersection.

Danks has examined ethical, psychological, and policy issues around AI and robotics across sectors such as transportation, healthcare, privacy, and security. He has also contributed significantly to computational cognitive science and developed novel causal discovery algorithms for complex observational and experimental data. His honors include a James S. McDonnell Foundation Scholar Award and an Andrew Carnegie Fellowship. He was an inaugural member of the National AI Advisory Committee (USA), and currently serves on advisory boards across industry, government, and academia.

Register for AIces 2026: https://aices.irdta.eu/2026/registration/
Event details: https://aices.irdta.eu/2026/

Apr 20, 2025

AIces 2026 launched

News, preparations by manager



AIces 2026

CO-ORGANIZERS


University of Maia

Institute for Research Development, Training and Advice – IRDTA, Brussels/London

Active links
  • DeepLearn 2026
Past links
  • DeepLearn 2025
  • DeepLearn 2024
  • DeepLearn 2023 Summer
  • DeepLearn 2023 Spring
  • DeepLearn 2023 Winter
  • DeepLearn 2022 Autumn
  • DeepLearn 2022 Summer
  • DeepLearn 2022 Spring
  • DeepLearn 2021 Summer
  • DeepLearn 2019
  • DeepLearn 2018
  • DeepLearn 2017
© IRDTA 2025. All Rights Reserved.