Evaluation of Explainable Artificial Intelligence: SHAP, LIME, and CAM
The last few years have seen a remarkable expansion of machine learning research. The field has gained unprecedented popularity, several new areas have emerged, and some established areas have regained momentum. Despite widespread adoption, however, most machine learning models remain black boxes, and understanding the decisions and reasoning behind a model's predictions is essential for assessing whether it can be trusted. Explainable Artificial Intelligence (XAI) systems are intended to explain the reasoning behind their decisions and predictions. Recent progress in XAI spans a wide range of algorithmic-transparency goals, calling for research across varied application fields and encouraging a cross-disciplinary perspective on intelligibility and transparency. Yet researchers in different fields concentrate on different priorities and relatively separate topics within XAI, which makes it difficult to identify suitable methods for design and evaluation and to synthesize expertise across efforts. In this paper, we propose an evaluation method that provides a comparative view of three XAI methods, LIME, SHAP, and CAM, on the image classification problem. We also investigate how these algorithms operate and compare the efficiency and quality of the explanations they provide.
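Of the three methods compared, CAM has the simplest formulation: for a CNN ending in global average pooling and a linear classifier, the class-c heatmap is a weighted sum of the last convolutional feature maps, CAM_c(x, y) = Σ_k w_k^c f_k(x, y). The sketch below illustrates only that weighted-sum step with NumPy; the array names `feature_maps` and `class_weights` are illustrative stand-ins, not identifiers from the paper.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM_c(x, y) = sum_k w_k^c * f_k(x, y), rescaled to [0, 1].

    feature_maps:  (K, H, W) activations of the last conv layer.
    class_weights: (K,) classifier weights for the target class c.
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)        # keep only class-positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize for heatmap display
    return cam

# Toy example: 3 feature maps of size 4x4 with random activations.
rng = np.random.default_rng(0)
fmaps = rng.random((3, 4, 4))
weights = np.array([0.5, -0.2, 0.8])
heatmap = class_activation_map(fmaps, weights)
print(heatmap.shape)  # (4, 4)
```

In a real evaluation the heatmap would be upsampled to the input image size and overlaid on it; LIME and SHAP instead explain predictions by perturbing superpixels or features and fitting a local surrogate, so they apply to arbitrary classifiers rather than this specific architecture.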
Hung Quoc Cao, Hung Truong Thanh Nguyen, Khang Vo Thanh Nguyen, Nguyen Dinh Khoi Pham
3/21/2025
Efficient and Concise Explanations for Object Detection with Gaussian-Class Activation Mapping Explainer
To address the challenges of providing quick and plausible explanations in Explainable AI (XAI) for object detection models, we introduce the Gaussian Class Activation Mapping Explainer (G-CAME).
Enhancing the Fairness and Performance of Edge Cameras with Explainable AI
The rising use of Artificial Intelligence (AI) for human detection on edge camera systems has led to models that are accurate but complex, and therefore challenging to interpret and debug.
QaiDora Products
QaiDora draws inspiration from the myth of Pandora’s box—a symbol of unexpected possibilities and hope. For us, AI models are like modern Pandora’s boxes, holding untapped potential to turn challenges into opportunities. At QAI, QaiDora serves as an ecosystem of AI products designed to drive innovation and deliver competitive advantages.