
On June 16th, CeADAR Senior Data Scientist Sebastián Cajas Ordoñez spoke at “The Dark Side of Health AI – and the Light at the End of the Tunnel”, an event hosted at St Catherine’s College, University of Oxford, where he contributed to the panel discussion “Reimagining Incentive Structures that Align AI with Human Values.”
The seminar brought together a diverse group of researchers, technologists, ethicists, and policymakers to discuss how we can build AI systems that not only perform optimally, but also reflect the values, needs, and complexities of human society.
Other panelists at the event included Stephanie Hyland, Naigwe Kalema, Ezi Ozoani, Torleig Lunde, Jakob Groll, Kyra Delray, and Max Lange.
Sebastián contributed insights from CeADAR’s applied research in trustworthy and responsible AI, emphasizing the importance of embedding human values into AI systems, particularly in the current era of AI alignment. He joined a panel of experts from institutions including MIT, Harvard, Oxford, and UCL, engaging in a vibrant discussion on the socio-technical systems needed to embed ethical frameworks directly into AI design.
Topics discussed during the session included:
- Operationalizing human values, such as empathy, humility, and curiosity, in practical AI system design
- Addressing bias and representation gaps at the dataset and algorithmic levels
- Integrating ethical/moral reflection into every phase of AI development, from ideation to deployment
- The role of public institutions and open science in realigning incentives in AI research
- Bridging the disconnect between AI design and clinical workflows to ensure relevance and usability
- Learning from real-world failures, exploring case studies of AI misuse and their societal impact
Key Highlights
- Humility and curiosity must be built into AI systems to help them recognize uncertainty, seek context, and support ethical decision-making.
- AI should be embedded into real-world workflows, adapting to human needs instead of rigidly optimizing isolated metrics.
- Inclusive design is essential – AI must be co-created with the communities it serves to ensure trust, relevance, and fairness.
The event, co-organized by Leo Celi (Harvard University and MIT), Joao Matos (University of Oxford), Linda Hong (University of Oxford), and Ari Ercole, provided an important platform to discuss the ethical development of AI. As the global conversation around AI safety and alignment intensifies, CeADAR remains committed to advancing a meaningful dialogue on the future of human-centred AI.