Narrative Analysis of True Crime Podcasts With Knowledge Graph-Augmented Large Language Models

Oct 29, 2024
Xinyi Leng, Jason Liang, Jack Mauro, Xu Wang, James Chapman, Andrea L. Bertozzi, Junyuan Lin, Bohan Chen, Chenchen Ye, Temple Daniel
Abstract
Narrative data spans all disciplines and provides a coherent model of the world to the reader or viewer. Recent advancements in machine learning and Large Language Models (LLMs) have enabled great strides in analyzing natural language. However, LLMs still struggle with complex narrative arcs as well as narratives containing conflicting information. Recent work indicates that LLMs augmented with external knowledge bases can improve the accuracy and interpretability of the resulting models. In this work, we analyze the effectiveness of applying knowledge graphs (KGs) to understanding true-crime podcast data, using both classical Natural Language Processing (NLP) and LLM approaches. We directly compare KG-augmented LLMs (KGLLMs) with classical methods for KG construction, topic modeling, and sentiment analysis. Additionally, the KGLLM allows us to query the knowledge base in natural language, and we test its ability to answer questions factually. We examine the robustness of the model to adversarial prompting in order to test its ability to handle conflicting information. Finally, we apply classical methods to understand more subtle aspects of the text, such as the use of hearsay and sentiment in narrative construction, and propose future directions. Our results indicate that KGLLMs outperform LLMs on a variety of metrics, are more robust to adversarial prompts, and are more capable of summarizing the text into topics.
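For readers unfamiliar with KG-augmented prompting, the sketch below illustrates one common form of the idea; it is not taken from the paper, and the entity names, relations, and naive retrieval heuristic are invented for illustration. Extracted (subject, relation, object) triples are stored in a graph, facts about the entities mentioned in a question are retrieved, and those facts are prepended to the prompt that would be sent to the LLM.

```python
# Illustrative sketch of knowledge-graph-augmented prompting (not the authors'
# implementation). Triples, entity names, and the retrieval heuristic are made up.
import networkx as nx

# Toy knowledge graph built from hypothetical podcast-transcript triples.
kg = nx.MultiDiGraph()
triples = [
    ("Witness A", "claims_to_have_seen", "Suspect B"),
    ("Suspect B", "was_located_at", "the diner"),
    ("Host", "reports_hearsay_from", "Witness A"),
]
for subj, rel, obj in triples:
    kg.add_edge(subj, obj, relation=rel)

def retrieve_facts(graph, entities):
    """Collect triples whose subject or object is one of the mentioned entities."""
    facts = []
    for u, v, data in graph.edges(data=True):
        if u in entities or v in entities:
            facts.append(f"{u} {data['relation'].replace('_', ' ')} {v}.")
    return facts

def build_prompt(question, graph):
    # Naive entity linking: keep graph nodes whose name appears in the question.
    entities = {n for n in graph.nodes if n.lower() in question.lower()}
    context = "\n".join(retrieve_facts(graph, entities))
    return (
        f"Known facts from the podcast knowledge graph:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the facts above."
    )

print(build_prompt("Where was Suspect B seen?", kg))
# The assembled prompt would then be passed to an LLM of choice.
```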
Type
Publication
Proceedings of the 33rd International Conference on Information and Knowledge Management, GTA3 Workshop-2024