November 15, 2024
14:30
ISBA - C115 (1st Floor)
Sébastien Laurent
IAE Aix-Marseille
Asymptotics for penalized QMLEs of time series regressions
We examine a linear regression model for the components of a time series, aiming to identify time-varying, constant, and zero conditional beta coefficients. To address the non-identifiability of the parameters when a conditional beta is constant, we employ a lasso-type estimator. This penalized estimator simplifies the model by shrinking the estimates when the beta is constant. Given that the model accommodates conditional heteroskedasticity and the relevant regressors are unknown, the total number of parameters to estimate can be quite large. To manage this complexity, we propose a multistep estimator that first captures the dynamics of the regressors before estimating the dynamics of the betas. This strategy breaks a high-dimensional optimization problem down into several lower-dimensional ones. Since we avoid strict parametric assumptions on the innovation distributions, we use Quasi-Maximum Likelihood (QML) estimators. The non-Markovian nature of the global model means that standard convex optimization results cannot be applied. Nevertheless, we derive the asymptotic distribution of the multistep lasso estimator and its adaptive version, together with bounds on the maximum value of the penalty term. We also propose a nonlinear coordinate-wise descent algorithm, which we show reaches stationary points of the objective function. The finite-sample properties of these estimators are further explored through Monte Carlo simulations and illustrated with an application to financial data.
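For readers unfamiliar with the shrinkage mechanism the talk builds on, the sketch below shows a plain coordinate-wise descent for the standard linear lasso, where each coordinate update has a closed-form soft-thresholding solution. This is only a minimal illustration of the idea: the talk's algorithm is a *nonlinear* coordinate-wise descent applied to a penalized QML objective, for which no closed-form update generally exists, and the function names here are our own.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Soft-thresholding operator: the closed-form minimizer of the
    # one-dimensional l1-penalized least-squares subproblem.
    return np.sign(z) * max(abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-wise descent for (1/2n)||y - X b||^2 + lam ||b||_1.

    A generic sketch of the linear lasso; in the talk's setting the
    objective is a penalized quasi-likelihood and each coordinate
    update requires a nonlinear one-dimensional optimization instead.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta
```

On simulated data with truly zero coefficients, the soft-thresholding step typically sets those estimates exactly to zero, which mirrors the abstract's point that the penalized estimator simplifies the model by shrinking redundant parameters.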