Conference paper, Year: 2016

Questioning the reliability of Monte Carlo simulation for machine learning test validation

Abstract

Machine learning indirect test, also known as alternate test, has shown its potential to reduce test cost while maintaining an interpretation of the results compatible with the standard specification-based test. Since its introduction, many papers have refined the idea and addressed its shortcomings. In particular, the issue of early validation of the test at the design stage has been considered, and methodologies have been proposed to assess test quality. These methodologies rely essentially on Monte Carlo simulations. In this paper, we propose a set of thought experiments to show that small inaccuracies and variations in the Monte Carlo models included in current technology process design kits may have a significant impact on the validation of machine learning indirect test, particularly on the estimation of test quality metrics. Despite this, machine learning indirect test has nevertheless succeeded in actual industrial cases. Some hints are thus given as to the conditions the test must fulfill to guarantee good results.
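To make the abstract's point concrete, the following minimal sketch (not from the paper; the linear indirect-test model, the specification limit, and all numbers are hypothetical) illustrates how Monte Carlo simulation is typically used at design stage to estimate indirect-test quality metrics such as test escapes and yield loss, and how a small error in the assumed process variation shifts those estimates.

```python
# Minimal sketch: Monte Carlo estimation of indirect-test quality metrics
# and its sensitivity to small errors in the statistical process model.
# All models and constants below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

def simulate_metrics(sigma_process, n=200_000):
    """Return (test escape rate, yield loss rate) for a toy indirect test."""
    # Hypothetical process parameter and its effect on the circuit.
    p = rng.normal(0.0, sigma_process, n)            # process variation
    spec = 1.0 + 0.8 * p + rng.normal(0.0, 0.05, n)  # true performance
    meas = 0.5 + 0.7 * p + rng.normal(0.0, 0.05, n)  # low-cost indirect measurement

    # Linear mapping from measurement to predicted performance, fitted on the
    # same Monte Carlo population, as a design-stage validation would do.
    a, b = np.polyfit(meas, spec, 1)
    pred = a * meas + b

    spec_limit = 0.2                  # hypothetical pass threshold
    true_pass = spec >= spec_limit
    test_pass = pred >= spec_limit

    escapes = np.mean(test_pass & ~true_pass)     # faulty devices accepted
    yield_loss = np.mean(~test_pass & true_pass)  # good devices rejected
    return escapes, yield_loss

# A few percent error in the assumed process sigma noticeably shifts the metrics.
for sigma in (1.00, 1.05, 1.10):
    e, yl = simulate_metrics(sigma)
    print(f"sigma={sigma:.2f}: test escapes={e:.2e}, yield loss={yl:.2e}")
```

Because test escapes and yield loss live in the tails of the performance distribution, even modest perturbations of the process model can change the estimated metrics by a large relative amount, which is the effect the paper examines.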
File not deposited

Dates and versions

hal-01325116, version 1 (01-06-2016)

Identifiers

  • HAL Id: hal-01325116, version 1

Cite

Gildas Leger, Manuel J. Barragan. Questioning the reliability of Monte Carlo simulation for machine learning test validation. IEEE European Test Symposium, May 2016, Amsterdam, Netherlands. ⟨hal-01325116⟩