Artificial Intelligence (AI) has the power to revolutionise healthcare, but we must also recognise the potential risks and ethical dilemmas associated with it, write Polat Goktas and Ricardo Simon Carbajo.
Over the past few years, Artificial Intelligence — especially large language model applications such as OpenAI's ChatGPT (Chat Generative Pre-trained Transformer) — has helped the healthcare sector make significant progress, opening up new possibilities ranging from predictive analysis of patient outcomes to the customisation of treatments.
However, like all revolutionary technologies, these advancements are not without complications. The question of maintaining ethical equilibrium in AI-driven healthcare is becoming increasingly significant, particularly in light of the European Union (EU)'s proposed regulations aimed at ensuring AI safety and transparency.
The European Parliament has approved its negotiating position on the AI Act with 499 votes in favour, 28 against, and 93 abstentions, guided by a commitment to ensuring the technology aligns with EU rights and values, including human oversight, safety, privacy, transparency, and social and environmental wellbeing. Specifically, the Parliament mandates:
- A full prohibition on AI applications for biometric surveillance, emotion recognition, and predictive policing;
- A requirement for generative AI systems, such as ChatGPT, to transparently inform users when the content has been generated by AI;
- A classification of AI systems that can influence election outcomes or voter behaviour as high-risk, necessitating rigorous oversight and scrutiny.
Much like the challenge of understanding and placing trust in AI models themselves, peeling back the layers of AI's role in healthcare is a complex process. From concerns over data privacy to issues of algorithmic bias, it is clear that the adoption of AI in healthcare is not a straightforward path. It demands a balance of technological advancement and ethical insight, one that requires careful, well-considered examination. Just as we must understand AI's potential to revolutionise healthcare, we must also recognise the risks and ethical dilemmas that accompany it.
The Ethics of AI: A Balancing Act
AI is having a profound effect on the medical industry. The piece on AI in healthcare in the 35th edition of the Marie Curie Alumni Association (MCAA) Newsletter offers insight into the pivotal role these technologies will play in shaping the future healthcare landscape. From aiding diagnostics and prognosis to accelerating drug discovery and augmenting patient care, AI's potential to enhance the health sector is undeniable. Yet, for all their promise, these advancements bring with them a variety of concerns about their wider effects on the community.
As AI makes its way into healthcare systems, it brings an immediate need for in-depth conversations about ethical aspects. Drawing from concerns raised in the European Parliament’s press release on AI safety and transparency, issues surrounding privacy, data security, patient consent, and algorithmic bias emerge.
Patients' privacy may be at risk, as AI systems often require vast amounts of data. Given the sensitivity of healthcare information, the necessity of data security cannot be overstated. Patient consent in the context of AI presents another complexity: patients must understand not only what data is being collected but also how AI will process it. Lastly, algorithmic bias, a well-documented concern in AI applications, can result in unfair health outcomes if left unaddressed.
The Legal Perspective: EU’s Approach to Safe and Transparent AI
The recent press release from the European Parliament regarding AI rules marks a substantial milestone in AI regulation, laying the groundwork for inclusive legal guidelines for AI applications, healthcare included. These proposed regulations aim to protect citizens’ fundamental rights while fostering AI innovation, encompassing requirements for transparency, robustness, and accountability. For healthcare institutions aiming to adopt AI technologies, and citizens whose health data might be processed by such systems, understanding these regulations is essential.
Though AI presents an opportunity for remarkable progress in healthcare, navigating this landscape requires a sharp focus on ethical considerations. By employing comprehensive regulations, keeping an eye out for ethical pitfalls, and maintaining a commitment to transparency, the healthcare sector can capitalise on AI’s potential while safeguarding the rights and well-being of patients.
Understanding, trusting, and effectively leveraging AI applications in healthcare is a complex journey – one that is not without its challenges. As we undertake it, however, it is essential to remember that beyond the hype lies the potential for truly transformative change. This journey, while filled with potential pitfalls, is also ripe with opportunities for innovation, offering a promising frontier for the future of healthcare.
References
- European Parliament Press. (2023). MEPs ready to negotiate first-ever rules for safe and transparent AI. European Parliament. Retrieved from: https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai
- Goktas, P., & Simon Carbajo, R. (2023). Special Issue – Beyond the hype: AI in healthcare – Ethical balance and insights. Marie Curie Alumni Association, 35th Newsletter. Retrieved from: https://www.mariecuriealumni.eu/newsletters/35th-mcaa-newsletter/special-issue-beyond-hype-ai-healthcare-ethical-balance-and
Polat Goktas is a Marie Curie Research Fellow at CeADAR and the School of Computer Science, University College Dublin, Ireland.
Ricardo Simon Carbajo is the Director of Innovation and Development at CeADAR and the School of Computer Science, University College Dublin, Ireland.