Show simple item record

dc.contributor.advisor: Kagal, Lalana
dc.contributor.author: Rich, Benjamin R.
dc.date.accessioned: 2026-02-12T17:14:38Z
dc.date.available: 2026-02-12T17:14:38Z
dc.date.issued: 2025-09
dc.date.submitted: 2025-09-15T14:56:34.484Z
dc.identifier.uri: https://hdl.handle.net/1721.1/164853
dc.description.abstract: Knowledge Graph Question Answering (KGQA) encompasses a set of techniques aimed at generating accurate, interpretable responses to natural language queries posed over structured, graph-based datasets. Recent approaches to KGQA involve reducing the knowledge graph (KG) to a relevant subgraph, which is then encoded in natural language as a series of triples (subject, predicate, object) and passed to a large language model (LLM) for interpretation and answer generation. These methods have shown state-of-the-art accuracy. However, this paradigm is undermined by a critical vulnerability: the retrieval of irrelevant or erroneous facts can amplify LLM hallucinations and degrade system trustworthiness, while the reasoning process remains opaque. This thesis addresses this challenge by extending an existing state-of-the-art KGQA architecture with uncertainty-aware subgraph retrieval methods. To achieve this, we modify the retrieval component to learn the epistemic uncertainty of each candidate triple's relevance to a given query. We implement these modifications using Bayesian methods and learn a well-calibrated approximation of the posterior distribution over triple relevance. By explicitly modeling this uncertainty, the retriever model provides a fine-grained confidence score for each piece of evidence. We expose these metrics downstream to the LLM during reasoning and evaluate whether LLMs can reason over uncertainty-related metrics to improve KGQA. We find that LLMs cannot reason effectively over uncertainties in most cases, but that agentic workflows providing selective access to uncertainty metrics may enhance performance. We evaluate our approach against established benchmarks using hit-rate and set-comparison accuracy metrics. Additionally, we introduce reasoning-path and statistical trust metrics derived from calibrated uncertainty scores. Our analysis reveals a significant positive correlation between path-based uncertainty metrics and the veracity of the LLM's answers. These findings establish a robust foundation for developing uncertainty-grounded trust mechanisms in LLM-agnostic KGQA systems. As a proof of concept, a lightweight classifier trained exclusively on the LLM's inputs and outputs demonstrates substantial predictive power in identifying correct responses. Finally, we briefly explore using uncertainty to identify out-of-distribution (OOD) queries.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Uncertainty-Aware Knowledge Graph Retrieval Methods and Their Use in LLM Question-Answering
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science

