Abstract
Previous work on Textbook Question Answering (TQA) suffers from limited performance due to small-scale neural-network backbones. To alleviate this issue, we propose using large language models (LLMs) as the backbone for TQA tasks. To this end, we explore two methods: raw-context-based prompting and knowledge-graph-based prompting. Specifically, we introduce the Textbook Question Answering-Knowledge Graph (TQA-KG) method, which first converts textbook content into structured knowledge graphs and then incorporates the knowledge graph into LLM prompting, thereby enhancing the model's reasoning capability and answer accuracy. Extensive experiments on the CK12-QA dataset demonstrate the effectiveness of the method, achieving an average improvement of 5.67% in accuracy over current state-of-the-art methods.
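The abstract's core idea (convert textbook content to triples, then fold the relevant subgraph into the LLM prompt) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function names, triple format, and prompt template are all hypothetical, and the real TQA-KG method presumably uses a learned extractor and retriever rather than hand-written triples.

```python
# Hypothetical sketch of knowledge-graph-based prompting for TQA:
# textbook facts are stored as (subject, relation, object) triples,
# and the facts relevant to a question are serialized into the prompt.

def build_knowledge_graph(triples):
    """Index triples by subject so facts can be looked up per entity."""
    graph = {}
    for subj, rel, obj in triples:
        graph.setdefault(subj, []).append((rel, obj))
    return graph

def serialize_subgraph(graph, entities):
    """Render the facts about the question's entities as prompt lines."""
    lines = []
    for entity in entities:
        for rel, obj in graph.get(entity, []):
            lines.append(f"{entity} {rel} {obj}.")
    return "\n".join(lines)

def build_prompt(graph, question, entities):
    """Combine the serialized subgraph with the question for the LLM."""
    facts = serialize_subgraph(graph, entities)
    return f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

# Toy textbook content (illustrative only).
triples = [
    ("photosynthesis", "produces", "oxygen"),
    ("photosynthesis", "occurs in", "chloroplasts"),
    ("chloroplasts", "contain", "chlorophyll"),
]
graph = build_knowledge_graph(triples)
prompt = build_prompt(graph, "Where does photosynthesis occur?",
                      ["photosynthesis"])
```

The resulting `prompt` string would then be sent to an LLM; the raw-context baseline mentioned in the abstract would instead insert the original textbook passage where the serialized facts appear here.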
| Original language | English |
|---|---|
| Pages (from-to) | 639-654 |
| Number of pages | 16 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 260 |
| State | Published - 2024 |
| Event | 16th Asian Conference on Machine Learning, ACML 2024 - Hanoi, Viet Nam |
| Duration | 5 Dec 2024 → 8 Dec 2024 |
Keywords
- Large language model
- Retrieve-and-Generate
- Textbook Question Answering
- Knowledge graphs