Conference Paper, 2023

Certified Logic-Based Explainable AI: The Case of Monotonic Classifiers

Abstract

The continued advances in artificial intelligence (AI), including those in machine learning (ML), raise concerns regarding their deployment in high-risk and safety-critical domains. Motivated by these concerns, there have been calls for the verification of AI systems, including the explanation of their predictions. Nevertheless, tools for verifying AI systems are complex and therefore error-prone. This paper describes an initial effort towards the certification of logic-based explainability algorithms, focusing on monotonic classifiers. Concretely, the paper first uses the Coq proof assistant to prove the correctness of recently proposed algorithms for explaining monotonic classifiers. It then proves that the algorithms devised for monotonic classifiers also apply to the larger family of stable classifiers. Finally, code extracted from the proofs of correctness is used to compute explanations that are guaranteed to be correct. The experimental results included in the paper show the scalability of the proposed approach for certifying explanations.
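To give a flavor of the kind of algorithm being certified: the standard explanation procedure for monotonic classifiers computes a subset-minimal abductive explanation by tentatively freeing one feature at a time and checking, via the classifier's two extreme points, whether the prediction can change. The sketch below is our own illustrative Python rendering of that idea, under the assumption of a classifier that is monotonically non-decreasing in every feature; it is not the Coq-extracted code from the paper, and the function name and interface are ours.

```python
def explain_monotonic(f, v, lo, hi):
    """Compute a subset-minimal abductive explanation (AXp) for a
    monotonic classifier f at instance v.

    f      : classifier, monotonically non-decreasing in every feature
    v      : the instance to explain (sequence of feature values)
    lo, hi : per-feature domain lower and upper bounds
    Returns the indices of features that must stay fixed at their
    value in v to guarantee f's prediction.
    """
    c = f(v)
    vl, vu = list(v), list(v)  # running lower/upper extreme points
    expl = []
    for i in range(len(v)):
        # tentatively free feature i over its whole domain
        vl[i], vu[i] = lo[i], hi[i]
        # By monotonicity, f(vl) <= f(x) <= f(vu) for every x that agrees
        # with v on the still-fixed features, so the prediction is
        # invariant iff both extreme points still yield c.
        if not (f(vl) == c and f(vu) == c):
            # freeing feature i could flip the prediction: keep it fixed
            vl[i], vu[i] = v[i], v[i]
            expl.append(i)
    return expl
```

For example, with the toy monotonic classifier `f(x) = int(x[0] + 2*x[1] >= 3)` over per-feature domains [0, 2], the instance (1, 1, 2) is explained by features 0 and 1 alone, correctly discarding the irrelevant third feature. Proving in Coq that this loop always returns a correct, subset-minimal explanation is exactly the kind of guarantee the paper targets.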

Dates and versions

hal-04031193 , version 1 (15-03-2023)
hal-04031193 , version 2 (27-03-2023)
hal-04031193 , version 3 (05-12-2023)

Licence

Attribution - NonCommercial - ShareAlike

Cite

Aurélie Hurault, Joao Marques-Silva. Certified Logic-Based Explainable AI: The Case of Monotonic Classifiers. TAP 2023, Jul 2023, Leicester, United Kingdom. pp.51-67, ⟨10.1007/978-3-031-38828-6_4⟩. ⟨hal-04031193v2⟩