Abstract
Composite quantile regression (CQR) is an efficient method for estimating the parameters of a linear model with non-Gaussian random noise. The non-smoothness of the CQR loss, however, prevents many efficient algorithms from being used. In this paper, we propose the composite smoothed quantile regression (CSQR) model and investigate the inference problem for large-scale datasets, in which the dimensionality p is allowed to increase with the sample size n, subject to a growth condition on p relative to n. After applying the convolution smoothing technique to the composite quantile loss, we obtain a convex and twice-differentiable CSQR loss function, which can be optimized via the gradient descent algorithm. Theoretically, we establish a non-asymptotic error bound for the CSQR estimator and further provide the Bahadur representation and the Berry–Esseen bound, from which the asymptotic normality of the CSQR estimator follows immediately. For valid inference, we construct confidence intervals based on the asymptotic distribution. We also explore the asymptotic relative efficiency of the CSQR estimator with respect to the standard CQR estimator. Finally, we provide extensive numerical experiments on both simulated and real data to demonstrate the good performance of our CSQR estimator compared with several baselines.
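The smoothing-plus-gradient-descent recipe described in the abstract can be sketched as follows. This is a minimal illustration under assumptions of our own, not the authors' implementation: it uses a Gaussian smoothing kernel, for which the derivative of the convolution-smoothed check loss at a residual u is tau - Phi(-u/h), together with the usual equally spaced quantile grid tau_k = k/(K+1); the function names, bandwidth h, and step size are illustrative choices.

```python
import math
import numpy as np

def norm_cdf(x):
    # Standard normal CDF, applied elementwise.
    return 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

def csqr_fit(X, y, K=5, h=0.5, lr=0.1, n_iter=2000):
    """Toy composite smoothed quantile regression via gradient descent.

    Assumes Gaussian-kernel convolution smoothing, under which the
    smoothed check-loss derivative at residual u is tau - Phi(-u/h).
    Fits a common slope vector beta and one intercept b_k per quantile
    level tau_k = k/(K+1), as in composite quantile regression.
    """
    n, p = X.shape
    taus = np.arange(1, K + 1) / (K + 1)
    beta = np.zeros(p)
    b = np.zeros(K)  # one intercept per quantile level
    for _ in range(n_iter):
        r = y - X @ beta                       # shape (n,)
        U = r[:, None] - b[None, :]            # residuals per level, (n, K)
        W = taus[None, :] - norm_cdf(-U / h)   # smoothed scores, (n, K)
        # Gradients of the averaged smoothed composite loss.
        grad_beta = -(X.T @ W).sum(axis=1) / (n * K)
        grad_b = -W.sum(axis=0) / (n * K)
        beta -= lr * grad_beta
        b -= lr * grad_b
    return beta, b
```

Because the smoothed loss is convex and twice differentiable, plain gradient descent with a fixed step size converges; in practice the bandwidth h would be chosen to shrink with n, which this sketch does not tune.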
| Original language | English |
|---|---|
| Article number | e542 |
| Journal | Stat |
| Volume | 12 |
| Issue number | 1 |
| DOIs | |
| State | Published - Dec 2023 |
Keywords
- Bahadur representation
- asymptotic relative efficiency
- composite quantile regression
- convolution-type smoothing
- gradient descent
- non-asymptotic statistics