GVQA: Learning to Answer Questions about Graphs with Visualizations via Knowledge Base

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

10 Scopus citations

Abstract

Graphs are common charts used to represent topological relationships between nodes, and they are a powerful tool for data analysis; information retrieval tasks often involve asking questions about graphs. In a formative study, we found that questions about graphs concern not only the relationships between nodes but also the properties of graph elements. We propose a pipeline that answers natural language questions about graph visualizations and generates visual answers. We first extract the data from graphs and convert them into GML format. We design data structures to encode the graph information and convert it into a knowledge base. We then extract topic entities from the questions. We feed the questions, entities, and knowledge base into our question-answering model to obtain SPARQL queries that yield textual answers. Finally, we design a module to present the answers visually. A user study demonstrates that these visual and textual answers are useful, credible, and transparent.
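The pipeline above can be illustrated with a minimal sketch (not the authors' implementation): a toy graph in GML-like node/edge form is encoded as subject-predicate-object triples, and a neighbor question is answered by pattern-matching the triples, a simplified stand-in for the SPARQL query the paper's question-answering model would generate. All names here (`nodes`, `edges`, `neighbors_of`, the `connectedTo` predicate) are hypothetical.

```python
# Hypothetical toy graph: nodes with a "label" property and undirected edges,
# mimicking the node/edge structure of a GML file.
nodes = [{"id": 0, "label": "A"}, {"id": 1, "label": "B"}, {"id": 2, "label": "C"}]
edges = [(0, 1), (0, 2)]

# Knowledge base as (subject, predicate, object) triples.
triples = []
for n in nodes:
    triples.append((f"node/{n['id']}", "label", n["label"]))
for s, t in edges:
    # Undirected edge: store both directions.
    triples.append((f"node/{s}", "connectedTo", f"node/{t}"))
    triples.append((f"node/{t}", "connectedTo", f"node/{s}"))

def neighbors_of(label):
    """Answer 'Which nodes are connected to <label>?' by triple matching.

    The topic entity is resolved from the question's label, then a pattern
    like SPARQL's  ?topic connectedTo ?n . ?n label ?answer  is matched.
    """
    topic = next(s for s, p, o in triples if p == "label" and o == label)
    linked = {o for s, p, o in triples if s == topic and p == "connectedTo"}
    return sorted(o for s, p, o in triples if p == "label" and s in linked)

print(neighbors_of("A"))  # -> ['B', 'C']
```

In the actual system such a lookup would be expressed as a SPARQL query against the generated knowledge base rather than an in-memory match; the sketch only shows the shape of the entity-to-answer step.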

Original language: English
Title of host publication: CHI 2023 - Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450394215
DOIs
State: Published - 19 Apr 2023
Event: 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023 - Hamburg, Germany
Duration: 23 Apr 2023 - 28 Apr 2023

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023
Country/Territory: Germany
City: Hamburg
Period: 23/04/23 - 28/04/23

Keywords

  • Knowledge Base
  • Natural Language Processing
  • Network Graph
  • Question Answering
  • Reinforcement Learning
  • Visualization
