TY - JOUR
T1 - DEEP NONLINEAR SUFFICIENT DIMENSION REDUCTION
AU - Chen, Yinfeng
AU - Jiao, Yuling
AU - Qiu, Rui
AU - Yu, Zhou
N1 - Publisher Copyright:
© 2024 Institute of Mathematical Statistics. All rights reserved.
PY - 2024/6/1
Y1 - 2024/6/1
N2 - Linear sufficient dimension reduction, as exemplified by sliced inverse regression, has seen substantial development over the past thirty years. However, with the advent of more complex scenarios, nonlinear dimension reduction has recently gained considerable interest. This paper introduces a novel method for nonlinear sufficient dimension reduction that combines the generalized martingale difference divergence measure with deep neural networks. The optimal solution of the proposed objective function is shown to be unbiased at the general level of σ-fields. Moreover, two optimization schemes based on deep neural networks exhibit greater efficiency and flexibility than the classical eigendecomposition of linear operators. We also systematically investigate the slow and fast rates for the estimation error based on advanced U-process theory. Remarkably, the fast rate almost coincides with the minimax rate of nonparametric regression. The validity of the proposed deep nonlinear sufficient dimension reduction methods is demonstrated through simulations and real data analysis.
AB - Linear sufficient dimension reduction, as exemplified by sliced inverse regression, has seen substantial development over the past thirty years. However, with the advent of more complex scenarios, nonlinear dimension reduction has recently gained considerable interest. This paper introduces a novel method for nonlinear sufficient dimension reduction that combines the generalized martingale difference divergence measure with deep neural networks. The optimal solution of the proposed objective function is shown to be unbiased at the general level of σ-fields. Moreover, two optimization schemes based on deep neural networks exhibit greater efficiency and flexibility than the classical eigendecomposition of linear operators. We also systematically investigate the slow and fast rates for the estimation error based on advanced U-process theory. Remarkably, the fast rate almost coincides with the minimax rate of nonparametric regression. The validity of the proposed deep nonlinear sufficient dimension reduction methods is demonstrated through simulations and real data analysis.
KW - Sufficient dimension reduction
KW - U-process
KW - deep neural networks
KW - generalized martingale difference divergence
UR - https://www.scopus.com/pages/publications/85202718335
U2 - 10.1214/24-AOS2390
DO - 10.1214/24-AOS2390
M3 - Article
AN - SCOPUS:85202718335
SN - 0090-5364
VL - 52
SP - 1201
EP - 1226
JO - Annals of Statistics
JF - Annals of Statistics
IS - 3
ER -