Do we place our faith in technology we don't fully understand? A new study from the University of Surrey arrives as AI systems are making decisions that affect our daily lives, from banking and healthcare to crime detection. The study calls for an immediate shift in how AI models are designed and evaluated, highlighting the need for transparency and reliability in these powerful algorithms.
The risks associated with "black box" models have never been greater, as AI becomes integrated into high-stakes sectors where decisions can have life-changing consequences. The research sheds light on the cases in which AI systems must provide adequate explanations for their decisions, allowing users to trust and understand AI rather than leaving them confused and vulnerable. Whether it is a misdiagnosis in healthcare or a false fraud alert in banking, the potential for harm, which can be life-threatening, is enormous.
Surrey's researchers detail striking instances in which AI systems failed to adequately explain their decisions. Fraud detection illustrates the problem: fraud datasets are inherently imbalanced, with only about 0.01% of transactions being fraudulent, yet those rare transactions cause damage on the scale of billions of dollars. It is reassuring that the vast majority of transactions are genuine, but that same imbalance makes it difficult for AI to learn what fraud looks like. Even so, AI algorithms can flag fraudulent transactions with great accuracy; what they currently lack is the ability to properly explain why a transaction was flagged.
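To make the imbalance concrete, consider a toy sketch (our own illustration, not code from the Surrey study): on data where only about 0.01% of labels are fraudulent, a model that simply declares every transaction legitimate scores 99.99% accuracy while catching no fraud at all, which is why accuracy alone says little here. Class weighting is one common mitigation.

```python
# Toy illustration of the fraud-detection class imbalance (not from the study).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
n = 100_000
y = (rng.random(n) < 0.0001).astype(int)        # ~0.01% fraudulent labels
X = rng.normal(size=(n, 3)) + 2.0 * y[:, None]  # fraud shifts the features

# Baseline: declare every transaction legitimate.
always_legit = np.zeros(n, dtype=int)
print(accuracy_score(y, always_legit))   # ~0.9999: looks excellent
print(recall_score(y, always_legit))     # 0.0: catches no fraud at all

# Class weighting is one standard way to cope with the imbalance.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
print(recall_score(y, clf.predict(X)))   # substantially better fraud recall
```

The point is that a highly "accurate" detector and a useful one are not the same thing, which is exactly the gap a good explanation has to bridge.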
Dr. Wolfgang Garn, co-author of the study and senior lecturer in analytics at the University of Surrey, said:
"We must not forget that behind every algorithmic solution there are real people whose lives are affected by the decisions it makes. Our aim is to create AI systems that are not only intelligent but that also provide explanations that people, the users of the technology, can trust and understand."
To address these issues, the study proposes a comprehensive framework known as SAGE (Setting, Audience, Goals, Ethics). SAGE is designed to ensure that AI explanations are not only understandable but also contextually relevant to their end users. By focusing on the specific needs and backgrounds of intended audiences, the SAGE framework aims to bridge the gap between complex AI decision-making processes and the human operators who rely on them.
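The paper presents SAGE conceptually rather than as software; purely as an illustration, one could imagine the four dimensions captured as structured metadata that an explanation generator consults before rendering its output. Everything below, including the field names and the render_explanation helper, is hypothetical.

```python
# Hypothetical sketch of SAGE (Setting, Audience, Goals, Ethics) as
# structured metadata; the study defines the framework conceptually and
# publishes no reference implementation.
from dataclasses import dataclass

@dataclass
class SageContext:
    setting: str    # where the decision is made, e.g. "retail banking fraud review"
    audience: str   # who reads the explanation, e.g. "fraud analyst" or "customer"
    goals: str      # what the explanation must enable, e.g. "contest the decision"
    ethics: str     # constraints, e.g. "no disclosure of other customers' data"

def render_explanation(score: float, ctx: SageContext) -> str:
    """Tailor the same model output to the SAGE context (illustrative only)."""
    if ctx.audience == "customer":
        return ("Your transaction was flagged for review. "
                f"You can {ctx.goals} by contacting support.")
    return f"[{ctx.setting}] risk score {score:.2f}; goal: {ctx.goals}; ethics: {ctx.ethics}"

ctx = SageContext(
    setting="retail banking fraud review",
    audience="customer",
    goals="contest the decision",
    ethics="no disclosure of other customers' data",
)
print(render_explanation(0.93, ctx))
```

The design point the framework makes is that one model output may need several different explanations, one per audience, rather than a single generic one.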
In conjunction with this framework, the study uses scenario-based design (SBD) techniques, which delve deep into real-world scenarios to uncover what users really need from AI explanations. This method encourages researchers and developers to step into the shoes of end users, ensuring that AI systems are built with empathy and understanding at their core.
Dr. Wolfgang Garn continued:
"We also need to highlight the shortcomings of existing AI models, which often lack the contextual awareness necessary to provide meaningful explanations. By identifying and addressing these gaps, our paper advocates an evolution in AI development that prioritizes user-centric design principles. It calls on AI developers to engage actively with industry specialists and end users, fostering a collaborative environment in which insights from a diverse range of stakeholders can shape the future of AI. The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives."
The study emphasizes the importance of AI models that explain their outputs in textual or graphical forms, meeting the diverse comprehension needs of different users. This shift aims to ensure that explanations are not only accessible but also actionable, enabling users to make informed decisions based on AI insights.
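As a rough illustration of what such a textual explanation might look like (again our own sketch, not the paper's method), per-feature contributions from a simple linear model can be ranked and phrased in plain language; the feature names below are invented.

```python
# Illustrative sketch: turning a linear model's per-feature contributions
# into a short textual explanation (not the study's method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["amount_zscore", "foreign_merchant", "night_time"]  # invented names
X = rng.normal(size=(500, 3))
y = ((X @ np.array([2.0, 1.0, 0.5]) + rng.normal(size=500)) > 2).astype(int)
clf = LogisticRegression().fit(X, y)

def explain(x: np.ndarray) -> str:
    """Rank per-feature contributions (coefficient * value) and phrase them."""
    contrib = clf.coef_[0] * x
    order = np.argsort(-np.abs(contrib))
    top = ", ".join(f"{features[i]} ({contrib[i]:+.2f})" for i in order[:2])
    verdict = "flagged" if clf.predict(x[None])[0] == 1 else "cleared"
    return f"Transaction {verdict}; main factors: {top}."

print(explain(X[0]))
```

The same contributions could just as easily drive a bar chart, which is the textual-or-graphical flexibility the study argues for.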
This study is published in Applied Artificial Intelligence.