Questioning the reliability of Monte Carlo simulation for machine learning test validation

Abstract: Machine learning indirect test — also known as Alternate Test — has shown its potential to reduce test cost while maintaining an interpretation of the results compatible with the standard specification-based test. Since its introduction, many papers have refined the idea and addressed its shortcomings. In particular, the issue of early validation of the test at the design stage has been considered, and some methodologies have been proposed to assess test quality. These methodologies rely essentially on Monte Carlo simulations. In this paper, we propose a set of thought experiments to show that small inaccuracies and variations in the Monte Carlo models included in current technology process design kits may have a significant impact on the validation of machine learning indirect test, in particular on the estimation of test quality metrics. Despite this, machine learning indirect test has actually succeeded in real industrial cases. Some hints are thus given about the conditions that the test has to fulfill to guarantee good results.
Document type: Conference paper
IEEE European Test Symposium, May 2016, Amsterdam, Netherlands

http://hal.univ-grenoble-alpes.fr/hal-01325116
Contributor: Manuel Barragan
Submitted on: Wednesday, June 1, 2016 - 20:00:50
Last modified on: Wednesday, May 16, 2018 - 18:30:05

Identifiers

  • HAL Id : hal-01325116, version 1

Citation

Gildas Leger, Manuel Barragan. Questioning the reliability of Monte Carlo simulation for machine learning test validation. IEEE European Test Symposium, May 2016, Amsterdam, Netherlands. 〈hal-01325116〉
