Are we trusting AI too much? New study demands accountability in Artificial Intelligence
Are we putting our faith in technology that we don't fully understand? A new study from the University of Surrey comes at a time when AI systems are making decisions impacting our daily lives—from banking and healthcare to crime detection. The study calls for an immediate shift in how AI models are designed and evaluated, emphasising the need for transparency and trustworthiness in these powerful algorithms.
![Artificial Intelligence image](/sites/default/files/styles/1200xauto/public/2020-10/Msc%20AI%20USE.jpg?itok=UbXWMbrj)
As AI becomes integrated into high-stakes sectors where decisions can have life-altering consequences, the risks associated with 'black box' models are greater than ever. The study details alarming instances where AI systems have failed to adequately explain their decisions, leaving users confused and vulnerable rather than informed and in control. With cases of misdiagnosis in healthcare and erroneous fraud alerts in banking, the potential for harm is significant and, in some cases, life-threatening.
Fraud detection illustrates the challenge. Fraud datasets are inherently imbalanced: around 0.01% of transactions are fraudulent, yet those few transactions cause damage on the scale of billions of dollars. It is reassuring that most transactions are genuine, but the imbalance makes it harder for AI to learn fraud patterns. Even so, AI algorithms can identify a fraudulent transaction with great precision; what they currently lack is the capability to adequately explain why it is fraudulent.
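To make that imbalance concrete, here is a minimal sketch in Python, assuming a synthetic dataset and scikit-learn (neither is from the study), of training a fraud classifier when only a tiny fraction of transactions are fraudulent:

```python
# Minimal sketch of fraud detection on a heavily imbalanced dataset.
# The data is synthetic and the model choice illustrative; the study
# itself does not prescribe this pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic transactions: roughly 1 in 1,000 is fraudulent here (the
# real-world figure cited in the study is closer to 1 in 10,000).
X, y = make_classification(
    n_samples=100_000, n_features=20, weights=[0.999, 0.001], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# class_weight="balanced" reweights the rare fraud class so the model
# does not simply predict "genuine" for every transaction.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```

Note that without the class weighting, a model could score 99.9% accuracy by never flagging fraud at all, which is why precision and recall, rather than accuracy, are reported here.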
Dr Wolfgang Garn, co-author of the study and Senior Lecturer in Analytics at the University of Surrey, said:
"We must not forget that behind every algorithm’s solution, there are real people whose lives are affected by the determined decisions. Our aim is to create AI systems that are not only intelligent but also provide explanations to people - the users of technology - that they can trust and understand."
The study proposes a comprehensive framework known as SAGE (Settings, Audience, Goals, and Ethics) to address these critical issues. SAGE is designed to ensure that AI explanations are not only understandable but also contextually relevant to the end-users. By focusing on the specific needs and backgrounds of the intended audience, the SAGE framework aims to bridge the gap between complex AI decision-making processes and the human operators who depend on them.
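To illustrate how the four dimensions might be operationalised, here is a purely hypothetical sketch; the field names and selection rule below are illustrative assumptions, since the paper presents SAGE as a conceptual framework rather than a software interface:

```python
# Hypothetical sketch: the SAGE dimensions as a structured record that an
# explanation generator could consult. Field names and the rule below are
# illustrative, not an API defined by the paper.
from dataclasses import dataclass

@dataclass
class SageContext:
    settings: str   # where the explanation is delivered, e.g. a mobile banking app
    audience: str   # who receives it, e.g. "account holder" or "fraud analyst"
    goals: str      # what it must enable, e.g. deciding whether to dispute a charge
    ethics: str     # constraints, e.g. no disclosure of other customers' data

def choose_explanation_style(ctx: SageContext) -> str:
    """Pick a presentation style suited to the audience (an illustrative rule)."""
    if ctx.audience == "fraud analyst":
        return "feature attributions with numeric scores"
    return "plain-language summary of the main reasons"

ctx = SageContext(
    settings="mobile banking app",
    audience="account holder",
    goals="decide whether to dispute a flagged charge",
    ethics="no disclosure of other customers' data",
)
print(choose_explanation_style(ctx))  # -> plain-language summary of the main reasons
```

The point of the sketch is simply that the same AI decision can warrant different explanations depending on who is asking and why, which is the gap SAGE is designed to close.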
In conjunction with this framework, the research uses Scenario-Based Design (SBD) techniques, which delve deep into real-world scenarios to find out what users truly require from AI explanations. This method encourages researchers and developers to step into the shoes of the end-users, ensuring that AI systems are crafted with empathy and understanding at their core.
Dr Wolfgang Garn continued:
"We also need to highlight the shortcomings of existing AI models, which often lack the contextual awareness necessary to provide meaningful explanations. By identifying and addressing these gaps, our paper advocates for an evolution in AI development that prioritises user-centric design principles. It calls for AI developers to engage with industry specialists and end-users actively, fostering a collaborative environment where insights from various stakeholders can shape the future of AI. The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives. The stakes are too high for us to ignore the call for change."
The research highlights the importance of AI models explaining their outputs in text or graphical form, catering to the diverse comprehension needs of users. This shift aims to ensure that explanations are not only accessible but also actionable, enabling users to make informed decisions based on AI insights.
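As a hedged illustration of that idea (the dataset, model, and wording below are stand-ins, not taken from the study), a single model prediction can be turned into both a plain-text explanation and a simple chart of feature contributions:

```python
# Illustrative sketch: deriving a textual and a graphical explanation from
# one model output. The model and dataset are stand-ins, not the study's.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

# For a linear model, each feature's contribution to one prediction is
# simply coefficient * feature value.
i = 0
contrib = model.coef_[0] * X[i]
top = np.argsort(np.abs(contrib))[::-1][:3]

# Textual explanation, aimed at non-expert users.
print(f"Predicted class: {data.target_names[model.predict(X[i:i+1])[0]]}")
for j in top:
    direction = "raised" if contrib[j] > 0 else "lowered"
    print(f"- '{data.feature_names[j]}' {direction} the score by {abs(contrib[j]):.2f}")

# Graphical explanation of the same information, for visually oriented users.
plt.barh([data.feature_names[j] for j in top], contrib[top])
plt.xlabel("contribution to decision score")
plt.tight_layout()
plt.show()
```

Because the text and the chart are derived from the same contribution vector, the two formats cannot drift apart; that is one simple way to serve users with different comprehension needs from a single model output.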
The study has been published in *Applied Artificial Intelligence*.
[ENDS]
Note to editors:
- Dr Wolfgang Garn is available for interview; please contact mediarelations@surrey.ac.uk to arrange.
Related sustainable development goals
- SDG 8: Decent Work and Economic Growth
- SDG 9: Industry, Innovation, and Infrastructure
- SDG 11: Sustainable Cities and Communities
Media Contacts
External Communications and PR team
Phone: +44 (0)1483 684380 / 688914 / 684378
Email: mediarelations@surrey.ac.uk
Out of hours: +44 (0)7773 479911