TY - GEN
T1 - PALOR: Poisoning Attacks Against Logistic Regression
T2 - 25th Australasian Conference on Information Security and Privacy, ACISP 2020
AU - Wen, Jialin
AU - Zhao, Benjamin Zi Hao
AU - Xue, Minhui
AU - Qian, Haifeng
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
AB - With Google, Amazon, Microsoft, and other entities establishing “Machine Learning as a Service” (MLaaS), ensuring the security of the resulting machine learning models has become an increasingly important topic. The security community has demonstrated that MLaaS contains many potential security risks, with new risks constantly being discovered. In this paper, we focus on one of these security risks – data poisoning attacks. Specifically, we analyze how attackers interfere with the results of logistic regression by poisoning the training datasets. To this end, we analyze and propose an alternative formulation for the optimization of poisoning training points capable of poisoning the logistic regression classifier, a model that has previously not been susceptible to poisoning attacks. We evaluate the performance of our proposed attack algorithm on the three real-world datasets of wine cultivars, adult census information, and breast cancer diagnostics. The success of our proposed formulation is evident in decreasing testing accuracy of logistic regression models exposed to an increasing number of poisoned training samples.
KW - Data poisoning
KW - Logistic regression
KW - Machine learning
UR - https://www.scopus.com/pages/publications/85089724387
U2 - 10.1007/978-3-030-55304-3_23
DO - 10.1007/978-3-030-55304-3_23
M3 - Conference contribution
AN - SCOPUS:85089724387
SN - 9783030553036
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 447
EP - 460
BT - Information Security and Privacy - 25th Australasian Conference, ACISP 2020, Proceedings
A2 - Liu, Joseph K.
A2 - Cui, Hui
PB - Springer
Y2 - 30 November 2020 through 2 December 2020
ER -