TY - JOUR
T1 - Byzantine-robust distributed support vector machine
AU - Wang, Xiaozhou
AU - Liu, Weidong
AU - Mao, Xiaojun
N1 - Publisher Copyright:
© Science China Press 2024.
PY - 2025/3
Y1 - 2025/3
N2 - The development of information technology brings diversification of data sources and large-scale data sets and calls for the exploration of distributed learning algorithms. In distributed systems, some local machines may behave abnormally and send arbitrary information to the central machine (known as Byzantine failures), which can invalidate the distributed algorithms based on the assumption of faultless systems. This paper studies Byzantine-robust distributed algorithms for support vector machines (SVMs) in the context of binary classification. Despite a vast literature on Byzantine problems, much less is known about the theoretical properties of Byzantine-robust SVMs due to their unique challenges. In this paper, we propose two distributed gradient descent algorithms for SVMs. The median and trimmed mean operations in aggregation can effectively defend against Byzantine failures. Theoretically, we show the convergence of the proposed estimators and provide the statistical error rates. After a certain number of iterations, our estimators achieve near-optimal rates. Simulation studies and real data analysis are conducted to demonstrate the performance of the proposed Byzantine-robust distributed algorithms.
KW - 62H30
KW - 68W15
KW - Byzantine robustness
KW - convergence
KW - distributed learning
KW - support vector machine
UR - https://www.scopus.com/pages/publications/85203267973
DO - 10.1007/s11425-023-2217-2
M3 - Article
AN - SCOPUS:85203267973
SN - 1674-7283
VL - 68
SP - 707
EP - 728
JO - Science China Mathematics
JF - Science China Mathematics
IS - 3
ER -