TY - JOUR
T1 - Decision-making and control with diffractive optical networks
AU - Qiu, Jumin
AU - Xiao, Shuyuan
AU - Huang, Lujun
AU - Miroshnichenko, Andrey
AU - Zhang, Dejian
AU - Liu, Tingting
AU - Yu, Tianbao
N1 - Publisher Copyright:
© The Authors.
PY - 2024/7/1
Y1 - 2024/7/1
N2 - The ultimate goal of artificial intelligence (AI) is to mimic the human brain in performing decision-making and control directly from high-dimensional sensory input. Diffractive optical networks (DONs) provide a promising solution for implementing AI with high speed and low power consumption. Most reported DONs focus on tasks that do not involve environmental interaction, such as object recognition and image classification; by contrast, networks capable of decision-making and control have not yet been developed. Here, we propose using deep reinforcement learning to implement DONs that imitate human-level decision-making and control capability. Such networks, which take advantage of a residual architecture, find optimal control policies through interaction with the environment and can be readily implemented with existing optical devices. Their superior performance is verified on three classic games: tic-tac-toe, Super Mario Bros., and Car Racing. Finally, we present an experimental demonstration of playing tic-tac-toe using a network based on a spatial light modulator. Our work represents a solid step forward in advancing DONs, promising a fundamental shift from simple recognition or classification tasks to the high-level sensory capability of AI. It may find exciting applications in autonomous driving, intelligent robots, and intelligent manufacturing.
KW - deep learning
KW - diffractive optical networks
KW - optical computing
KW - reinforcement learning
UR - https://www.scopus.com/pages/publications/105002334790
U2 - 10.1117/1.APN.3.4.046003
DO - 10.1117/1.APN.3.4.046003
M3 - Article
AN - SCOPUS:105002334790
SN - 2791-1519
VL - 3
JO - Advanced Photonics Nexus
JF - Advanced Photonics Nexus
IS - 4
M1 - 046003
ER -