Research Symposium

24th Annual Undergraduate Research Symposium, April 3, 2024

Thomas Cherry (He/Him), Poster Session 4: 2:45 pm - 3:45 pm /60



BIO


Thomas is a first-year undergraduate with a passion for mathematics and computer science. While his exact career aspirations are still to be determined, he hopes for a career that allows him to harness the skills he gains from his studies in both fields.

Do Large Language Models Reason in a Bayesian Fashion?

Authors: Thomas Cherry, Dr. Gordon Erlebacher
Student Major: Computational Science
Mentor: Dr. Gordon Erlebacher
Mentor's Department: Scientific Computing
Mentor's College: Arts and Sciences
Co-Presenters: Miles Rosoff, Hoang Vu

Abstract


We investigate the hypothesis that large language models (LLMs) such as GPT-4, Mixtral-8x7B, and Phi-1.5 learn concepts in a manner consistent with Bayesian inference. To assess this capability, the LLMs are tasked with guessing a concept given a sequence of words. We first approximate each LLM's prior over concepts, then approximate its posterior over concepts after a word has been presented to it. We then compare the LLM's posterior with the posterior obtained from Bayesian inference. Additionally, the study explores the extent to which temperature influences the posterior's conformity to Bayes' Rule. Our investigation aims to enrich the understanding of Bayesian reasoning in LLMs and its implications for model performance. Our results suggest that the posterior update does not conform to Bayesian statistics, contradicting the original hypothesis.
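To make the comparison concrete, the sketch below shows one way such a check could be set up: apply Bayes' rule to a prior over concepts and a word likelihood, then measure how far an empirically estimated LLM posterior sits from the Bayesian one. This is a minimal illustration, not the study's actual implementation; all probability values, the three-concept setup, and the choice of total variation distance as the comparison metric are hypothetical.

```python
import numpy as np

def bayes_posterior(prior, likelihood):
    """Apply Bayes' rule: P(concept | word) is proportional to P(word | concept) * P(concept)."""
    unnormalized = prior * likelihood
    return unnormalized / unnormalized.sum()

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

# Hypothetical numbers for three candidate concepts (not taken from the study).
prior = np.array([0.5, 0.3, 0.2])             # approximated LLM prior P(concept)
likelihood = np.array([0.10, 0.60, 0.30])     # assumed likelihood P(word | concept)
llm_posterior = np.array([0.20, 0.45, 0.35])  # approximated LLM posterior after seeing the word

expected = bayes_posterior(prior, likelihood)
print("Bayesian posterior:", expected)
print("TV distance (LLM vs. Bayes):", total_variation(llm_posterior, expected))
```

Repeating a comparison like this across sampling temperatures would show whether the gap between the LLM posterior and the Bayesian posterior shrinks or grows as the output distribution is sharpened or flattened.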


Keywords: Artificial Intelligence, Large Language Models