Certified Logic-Based Explainable AI - Fiabilité des Systèmes et des Logiciels
Conference paper, 2023

Abstract

The continued advances in artificial intelligence (AI), including those in machine learning (ML), raise concerns regarding their deployment in high-risk and safety-critical domains. Motivated by these concerns, there have been calls for the verification of AI systems, including the explanations they produce. Nevertheless, tools for verifying AI systems are themselves complex, and thus error-prone. This paper describes an initial effort towards the certification of logic-based explainability algorithms, focusing on monotonic classifiers. Concretely, the paper starts by using the proof assistant Coq to prove the correctness of recently proposed algorithms for explaining monotonic classifiers. The paper then proves that the algorithms devised for monotonic classifiers can be applied to the larger family of stable classifiers. Finally, confidence code, extracted from the proofs of correctness, is used to compute explanations that are guaranteed to be correct. The experimental results included in the paper show the scalability of the proposed approach for certifying explanations.
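To make the abstract's central idea concrete, the following is a minimal, uncertified sketch (not the authors' Coq-extracted code) of the kind of algorithm being verified: computing one abductive explanation (AXp) for a monotonic classifier. The key property exploited is that, for a monotonic classifier, checking whether the prediction is invariant over a box of free features reduces to checking the two extreme points of that box. The toy classifier, feature bounds, and function names below are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: a greedy-deletion AXp computation for a
# monotonic classifier. The classifier and domains are toy assumptions.

def predict(x):
    # A toy classifier, non-decreasing in every feature (hence monotonic).
    return 1 if x[0] + 2 * x[1] + x[2] >= 4 else 0

LOWER = [0, 0, 0]   # assumed per-feature minimum values
UPPER = [3, 3, 3]   # assumed per-feature maximum values

def one_axp(v):
    """Greedy deletion: start with all features fixed, try to free each one.

    A feature may be freed if the prediction is unchanged for every point
    where the free features range over their whole domain; by monotonicity
    it suffices to check the two extreme points of that box.
    """
    c = predict(v)
    fixed = set(range(len(v)))
    for i in range(len(v)):
        trial = fixed - {i}
        lo = [v[j] if j in trial else LOWER[j] for j in range(len(v))]
        hi = [v[j] if j in trial else UPPER[j] for j in range(len(v))]
        # Both extremes agree with the original class, so (by monotonicity)
        # the whole box does: feature i is not needed in the explanation.
        if predict(lo) == c and predict(hi) == c:
            fixed = trial
    return sorted(fixed)

print(one_axp([3, 3, 0]))  # fixing only feature 1 already forces class 1
```

The number of classifier queries is linear in the number of features, which is what makes the approach scale; the paper's contribution is proving, in Coq, that such an algorithm is correct and extracting trustworthy code from that proof.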
Main file: paper.pdf (545.51 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04031193 , version 1 (15-03-2023)
hal-04031193 , version 2 (27-03-2023)
hal-04031193 , version 3 (05-12-2023)

Cite

Aurélie Hurault, Joao Marques-Silva. Certified Logic-Based Explainable AI. TAP 2023, Jul 2023, Leicester, United Kingdom. pp.51-67, ⟨10.1007/978-3-031-38828-6_4⟩. ⟨hal-04031193v2⟩
846 views
407 downloads
