One of the principal reasons we tell stories is to explain why something happened. A story consists of the recounting of a series of events, and the causes of those events are often attached to different agents, more or less explicitly. Narrative is a powerful tool of human communication for answering why questions about the world (especially a world that may seem increasingly senseless).
In a recent student-led project, Margaret Meehan and Dane Malenfant explore the ability of language models to predict causal relationships between events in fiction. A great deal of work in NLP has focused on causal relation detection (“causality mining”), but no work to date has assessed how well these tools perform in other domains such as creative writing. As we have seen with other NLP tools like NER, performance usually drops considerably, because models are trained on non-fiction texts, which behave very differently.
Surprisingly, we found that causality detection does not suffer a significant performance drop when applied to fiction. Additionally, for the task of detecting “causal logic” within a sentence (i.e., does this sentence express causality?), we were able to achieve very high accuracy. This suggests that large language models can be applied to real-world questions concerning causal argumentation in the literary domain.
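To make the sentence-level task concrete: given a single sentence, a system must decide whether it expresses a causal relation. The sketch below is a deliberately simple lexical baseline built on explicit causal connectives; it is illustrative only and is not the method used in the paper, which relies on language models rather than keyword matching.

```python
import re

# Explicit causal connectives (an illustrative, non-exhaustive list).
CAUSAL_CONNECTIVES = [
    "because", "therefore", "as a result", "consequently",
    "due to", "thus", "so that", "led to", "caused",
]

def expresses_causality(sentence: str) -> bool:
    """Return True if the sentence contains an explicit causal connective.

    A toy baseline: it catches overt markers but misses implicit
    causality, which is common in fiction ("She slammed the door.
    He flinched.").
    """
    lowered = sentence.lower()
    return any(
        re.search(r"\b" + re.escape(marker) + r"\b", lowered)
        for marker in CAUSAL_CONNECTIVES
    )

print(expresses_causality("She left because the rain would not stop."))  # True
print(expresses_causality("The ship sailed at dawn."))                   # False
```

The gap between this kind of surface baseline and a fine-tuned language model is precisely where implicit, unmarked causality lives, and fiction leans on such implicit relations far more heavily than most non-fiction corpora.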
You can read our new paper in the Text2Story proceedings, available here.