Systematic Evaluation of Trustworthy AI Augmentation in Modern Applications

Name
Marasinghe Mudiyanselage Rasinthe Marasinghe
Abstract
Artificial intelligence (AI) has become pervasive across sectors including healthcare, finance, education, and transportation, transforming how tasks are performed and decisions are made. However, the rapid integration of AI has raised significant concerns about privacy, bias, security, and the opacity of AI systems, often referred to as "black boxes." These challenges highlight a critical gap in ensuring that AI systems are both efficient and trustworthy. This research addresses that gap by focusing on the practical implementation of continuous human oversight in AI development. The study evaluates an adaptive dashboard developed for the SPATIAL platform to enhance AI transparency and accountability. Through experiments with the Medical Analysis Module (MAM), which employs Explainable AI (XAI) techniques to provide role-specific explanations for stakeholders analyzing electrocardiogram (ECG) data, the research assesses the interpretability of AI-generated explanations and the system's performance under varying user loads. The findings demonstrate that tailored explanations significantly improved user understanding and trust, while the system maintained robust performance, ensuring scalability and reliability. These insights offer practical guidance for developing tools that enhance the monitoring and oversight of AI inferences, in line with regulatory requirements for trustworthy AI.
Graduation Thesis language
English
Graduation Thesis type
Master - Software Engineering
Supervisor(s)
Huber Raul Flores Macario, Abdul-Rasheed Olatunji Ottun
Defence year
2024