Questioning the reliability of Monte Carlo simulation for machine learning test validation
Abstract
Machine learning indirect test – also known as Alternate Test – has shown its potential to reduce test cost while maintaining an interpretation of the results compatible with the standard specification-based test. Since its introduction, many works have refined the idea and addressed its shortcomings. In particular, the issue of early validation of the test at the design stage has been considered, and some methodologies have been proposed to assess test quality. These methodologies rely essentially on Monte Carlo simulations.
In this paper, we propose a set of thought experiments to show that small inaccuracies and variations in the Monte Carlo models included in current technology process design kits may have a significant impact on the validation of machine learning indirect test, in particular on the estimation of test quality metrics. Despite this, machine learning indirect test has nevertheless succeeded in real industrial cases. Some hints are thus given as to the conditions that the test must fulfill to guarantee good results.
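As a purely illustrative sketch of this sensitivity (not taken from the paper), the toy Monte Carlo experiment below shows how a modest error in an assumed process-variation parameter can noticeably shift an estimated test quality metric. The device model, spec limit, measurement noise level, and sample size are all hypothetical assumptions chosen for the example.

```python
import random

def escape_rate(sigma, n=200_000, seed=0):
    """Toy Monte Carlo estimate of the test escape rate: the fraction
    of simulated devices that violate the spec but pass the test.

    Hypothetical model (illustration only): a performance parameter
    follows N(0, sigma); the spec requires it to stay below 3.0, and
    the test decides from a noisy indirect measurement of the same
    parameter.
    """
    rng = random.Random(seed)
    escapes = 0
    for _ in range(n):
        perf = rng.gauss(0.0, sigma)        # true device performance
        meas = perf + rng.gauss(0.0, 0.2)   # noisy indirect measurement
        if perf > 3.0 and meas <= 3.0:      # spec fail but test pass
            escapes += 1
    return escapes / n

# A 10% error in the sigma assumed by the process model markedly
# shifts the estimated escape metric, which lives near the ppm range.
print(escape_rate(1.0), escape_rate(1.1))
```

Because the metric of interest is driven by the tails of the simulated distributions, even small inaccuracies in the Monte Carlo models translate into large relative errors in the estimate, which is the sensitivity the paper's thought experiments exploit.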