TY - GEN
T1 - Optimizing Neural Network Weights and Biases through Univariate Sampling
AU - Sun, Xiangyu
AU - Zhang, Geng
AU - Luo, Zhenwang
AU - Yang, Xi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Deep neural networks have achieved unprecedented success in various domains such as computer vision. In general, deep neural networks are trained with the backpropagation (BP) algorithm. However, BP may drive the search into inferior points such as saddle points, which leads to additional hidden layers being added to compensate for BP's inefficiency. Evolutionary algorithms can escape such inferior points because they are not restricted by the gradient. This paper studies the possibility of improving the BP algorithm with evolutionary algorithms and shows that the univariate sampling method (USM) has a unique advantage in optimizing large-scale neural network weights and biases. Experiments are carried out on the standard CIFAR-10 data set and a subset of ImageNet using classical neural network models. By fine-tuning the parameters of the evolutionary algorithm, the training time remains acceptable despite the high-dimensional optimization of the network weights. Experimental results show that, without changing the original network structure, evolutionary algorithms such as USM can significantly improve the accuracy of the original network.
AB - Deep neural networks have achieved unprecedented success in various domains such as computer vision. In general, deep neural networks are trained with the backpropagation (BP) algorithm. However, BP may drive the search into inferior points such as saddle points, which leads to additional hidden layers being added to compensate for BP's inefficiency. Evolutionary algorithms can escape such inferior points because they are not restricted by the gradient. This paper studies the possibility of improving the BP algorithm with evolutionary algorithms and shows that the univariate sampling method (USM) has a unique advantage in optimizing large-scale neural network weights and biases. Experiments are carried out on the standard CIFAR-10 data set and a subset of ImageNet using classical neural network models. By fine-tuning the parameters of the evolutionary algorithm, the training time remains acceptable despite the high-dimensional optimization of the network weights. Experimental results show that, without changing the original network structure, evolutionary algorithms such as USM can significantly improve the accuracy of the original network.
KW - Deep Neural Networks
KW - Evolutionary Algorithms
KW - Univariate Sampling Method
UR - https://www.scopus.com/pages/publications/85208436106
U2 - 10.1109/ICCIA62557.2024.10719141
DO - 10.1109/ICCIA62557.2024.10719141
M3 - Conference contribution
AN - SCOPUS:85208436106
T3 - 2024 IEEE 9th International Conference on Computational Intelligence and Applications, ICCIA 2024
SP - 48
EP - 53
BT - 2024 IEEE 9th International Conference on Computational Intelligence and Applications, ICCIA 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th IEEE International Conference on Computational Intelligence and Applications, ICCIA 2024
Y2 - 9 August 2024 through 11 August 2024
ER -