TY - GEN
T1 - Everything of Thoughts
T2 - Findings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
AU - Ding, Ruomeng
AU - Zhang, Chaoyun
AU - Wang, Lu
AU - Xu, Yong
AU - Ma, Minghua
AU - Zhang, Wei
AU - Qin, Si
AU - Rajmohan, Saravan
AU - Lin, Qingwei
AU - Zhang, Dongmei
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
N2 - This paper introduces a novel thought prompting approach called “Everything of Thoughts” (XOT) for Large Language Models (LLMs) to defy the “Penrose triangle” law of existing thought paradigms and achieve three key perspectives in thought generation simultaneously: performance, efficiency, and flexibility. XOT leverages pretrained reinforcement learning and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge and planning capability into thoughts, thereby enhancing LLMs' decision-making capabilities. Through the MCTS-LLM collaborative thought revision framework, XOT autonomously produces high-quality comprehensive cognitive mappings with minimal LLM interactions. Additionally, XOT empowers LLMs to utilize flexible cognitive mappings for solving problems with multiple solutions. We evaluate XOT on several challenging problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our results demonstrate that XOT significantly outperforms existing approaches in various dimensions, showcasing its remarkable proficiency in addressing complex problems across diverse domains. The data and code are available at https://github.com/microsoft/Everything-of-Thoughts-XoT.
AB - This paper introduces a novel thought prompting approach called “Everything of Thoughts” (XOT) for Large Language Models (LLMs) to defy the “Penrose triangle” law of existing thought paradigms and achieve three key perspectives in thought generation simultaneously: performance, efficiency, and flexibility. XOT leverages pretrained reinforcement learning and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge and planning capability into thoughts, thereby enhancing LLMs' decision-making capabilities. Through the MCTS-LLM collaborative thought revision framework, XOT autonomously produces high-quality comprehensive cognitive mappings with minimal LLM interactions. Additionally, XOT empowers LLMs to utilize flexible cognitive mappings for solving problems with multiple solutions. We evaluate XOT on several challenging problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our results demonstrate that XOT significantly outperforms existing approaches in various dimensions, showcasing its remarkable proficiency in addressing complex problems across diverse domains. The data and code are available at https://github.com/microsoft/Everything-of-Thoughts-XoT.
UR - https://www.scopus.com/pages/publications/85205313849
U2 - 10.18653/v1/2024.findings-acl.95
DO - 10.18653/v1/2024.findings-acl.95
M3 - Conference contribution
AN - SCOPUS:85205313849
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 1638
EP - 1662
BT - The 62nd Annual Meeting of the Association for Computational Linguistics
A2 - Ku, Lun-Wei
A2 - Martins, Andre
A2 - Srikumar, Vivek
PB - Association for Computational Linguistics (ACL)
Y2 - 11 August 2024 through 16 August 2024
ER -