GNN-RAG combines the strengths of Graph Neural Networks (GNNs) and Large Language Models (LLMs) for KGQA tasks. The method leverages a GNN as a dense-subgraph reasoner to extract useful graph information and improve KGQA performance. Shortest paths in the KG that connect question entities to answer candidates are verbalized and given as input to the LLM for retrieval-augmented generation (RAG). By integrating graph reasoning with natural language understanding, GNN-RAG achieves state-of-the-art results on the WebQSP and CWQ benchmarks, outperforming GPT-4.
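The retrieval step described above can be sketched in plain Python. The snippet below is an illustrative toy, not the authors' implementation: the triple data, function names, and the "head -> relation -> tail" verbalization template are all assumptions made for the example. It finds shortest paths from question entities to answer candidates via BFS over KG triples and renders each path as a text string an LLM could consume.

```python
from collections import deque

def verbalize_paths(kg_triples, question_entities, answer_candidates):
    # Hypothetical sketch of the shortest-path verbalization idea.
    # Build an adjacency map: head -> list of (relation, tail).
    adj = {}
    for head, rel, tail in kg_triples:
        adj.setdefault(head, []).append((rel, tail))

    def shortest_path(src, dst):
        # BFS over KG edges, recording (head, relation, tail) steps.
        queue = deque([(src, [])])
        seen = {src}
        while queue:
            node, steps = queue.popleft()
            if node == dst:
                return steps
            for rel, nxt in adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [(node, rel, nxt)]))
        return None  # no path between src and dst

    verbalized = []
    for q in question_entities:
        for a in answer_candidates:
            steps = shortest_path(q, a)
            if steps:
                # Render each hop as "head -> relation -> tail".
                verbalized.append(
                    " ; ".join(f"{h} -> {r} -> {t}" for h, r, t in steps)
                )
    return verbalized

# Toy KG with hypothetical triples.
triples = [
    ("Jamaica", "official_language", "English"),
    ("Jamaica", "capital", "Kingston"),
]
print(verbalize_paths(triples, ["Jamaica"], ["English"]))
# -> ['Jamaica -> official_language -> English']
```

The verbalized paths would then be prepended to the question as retrieved context in the RAG prompt.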
This paper highlights the synergy between GNNs and LLMs, paving the way for KGQA systems with stronger reasoning capabilities.