Computation time/accuracy trade-off and linear regression - Université de Lille
Conference paper, Year: 2016

Computation time/accuracy trade-off and linear regression

Abstract

In practice, most estimates arise from algorithmic processes that optimize some standard, but usually only asymptotically relevant, criterion. The quality of the resulting estimate is therefore a function of both the number of iterations and the sample size. An important question is how to design accurate estimates while saving computation time, and we address it here in the simplified context of linear regression. For a fixed sample size, we focus on estimating an early stopping time for a gradient descent process that maximizes the likelihood. The accuracy gain from such a stopping time increases with the number of covariates, indicating the potential interest of the method in real situations involving many covariates.
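To make the setting concrete, the following is a minimal sketch of gradient descent for linear regression with early stopping. It assumes Gaussian noise, so maximizing the likelihood amounts to minimizing the squared error, and it uses a held-out validation set as one simple proxy for choosing the stopping time; the data, dimensions, and stopping-time estimator here are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic setup (not the paper's data): many covariates,
# Gaussian noise, squared loss <=> negative log-likelihood up to constants.
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + rng.standard_normal(n)

# Hold out part of the sample to estimate when to stop (a simple stand-in
# for an early stopping rule; the paper's stopping-time estimator may differ).
X_tr, X_val = X[:150], X[150:]
y_tr, y_val = y[:150], y[150:]

beta = np.zeros(p)
lr = 1.0 / np.linalg.norm(X_tr, 2) ** 2  # step size from the Lipschitz constant
best_err, best_iter, best_beta = np.inf, 0, beta.copy()

for t in range(1, 2001):
    grad = X_tr.T @ (X_tr @ beta - y_tr)        # gradient of the squared loss
    beta -= lr * grad
    val_err = np.mean((X_val @ beta - y_val) ** 2)
    if val_err < best_err:                      # track the best iterate so far
        best_err, best_iter, best_beta = val_err, t, beta.copy()

print(f"estimated stopping time: {best_iter}, validation MSE: {best_err:.3f}")
```

Stopping at `best_iter` rather than running to convergence trades a small, controlled bias for computation time, which is the trade-off studied in the abstract.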
pres_ERCIM2016.pdf (857.12 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01420659 , version 1 (22-12-2016)

Identifiers

  • HAL Id : hal-01420659 , version 1

Cite

Christophe Biernacki, Maxime Brunin, Alain Celisse. Computation time/accuracy trade-off and linear regression. 9th International Conference of the ERCIM WG on Computational and Methodological Statistics (CMStatistics 2016, ERCIM 2016), Dec 2016, Séville, Spain. ⟨hal-01420659⟩
