
Model predictivity assessment: incremental test-set selection and accuracy evaluation

Abstract: Unbiased assessment of the predictivity of models learnt by supervised machine-learning methods requires knowledge of the learned function over a reserved test set (not used by the learning algorithm). The quality of the assessment depends, naturally, on the properties of the test set and on the error statistic used to estimate the prediction error. In this work we tackle both issues, proposing a new predictivity criterion that carefully weights the individual observed errors to obtain a global error estimate, and using incremental experimental design methods to "optimally" select the test points on which the criterion is computed. Several incremental constructions are studied, including greedy-packing (coffee-house design), support points and kernel herding techniques. Our results show that the incremental and weighted versions of the latter two, based on Maximum Mean Discrepancy concepts, yield superior performance. An industrial test case provided by the historical French electricity supplier (EDF) illustrates the practical relevance of the methodology, indicating that it is an efficient alternative to expensive cross-validation techniques.
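To make the kernel-herding idea mentioned in the abstract concrete, here is a minimal sketch of incremental test-point selection over a finite candidate set. This is an illustration, not the authors' implementation: the Gaussian kernel, its lengthscale, and the candidate-set formulation are assumptions chosen for simplicity.

```python
import numpy as np

def kernel_herding(candidates, n_select, lengthscale=1.0):
    """Greedily pick test points from `candidates` so that the selected
    set tracks the empirical distribution of the candidates, in the
    Maximum Mean Discrepancy (MMD) sense, via kernel herding.

    Assumes a Gaussian kernel; `lengthscale` is a free parameter.
    """
    def k(A, B):
        # Squared Euclidean distances between all pairs of rows.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * lengthscale ** 2))

    K = k(candidates, candidates)
    # Kernel mean embedding of the candidate set, evaluated at each candidate.
    potential = K.mean(axis=1)
    running = np.zeros(len(candidates))  # sum of k(., x_i) over selected x_i
    selected = []
    for t in range(n_select):
        # Herding update: favour points close to the target embedding
        # but far (in kernel terms) from points already selected.
        scores = potential - running / (t + 1)
        idx = int(np.argmax(scores))
        selected.append(idx)
        running += K[:, idx]
    return candidates[selected], selected
```

In the weighted variant studied in the paper, the observed errors at the selected points would additionally be reweighted when forming the global predictivity estimate; the sketch above only covers the selection step.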
Document type: Preprints, Working Papers, ...
Contributor: Bertrand Iooss
Submitted on: Wednesday, January 12, 2022 - 7:01:29 PM
Last modification on: Sunday, May 1, 2022 - 3:17:45 AM
Long-term archiving on: Wednesday, April 13, 2022 - 11:37:47 PM
  • HAL Id: hal-03523695, version 1


Elias Fekhari, Bertrand Iooss, Joseph Muré, Luc Pronzato, Maria Joao Rendas. Model predictivity assessment: incremental test-set selection and accuracy evaluation. 2022. ⟨hal-03523695v1⟩


