ChatGPT: More Human-Like Than Computer-Like, but Not Necessarily in a Good Way

Abstract

ChatGPT is a large language model developed by OpenAI as a conversational agent. ChatGPT was trained on data generated by humans and refined through human feedback. This training process results in a bias toward human traits and preferences. In this paper, we highlight multiple biases of ChatGPT and show that its responses demonstrate many human traits. We begin by showing a very high correlation between the frequency of digits generated by ChatGPT and humans' favorite numbers, with the most frequent digit generated by ChatGPT matching humans' favorite number, 7. We continue by showing that ChatGPT's responses in several social experiments are much closer to those of humans than to those of fully rational agents. Finally, we show that several cognitive biases, known in humans, are also present in ChatGPT's responses.
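
The digit-frequency comparison can be illustrated with a minimal sketch: count how often each digit appears in repeated model responses and correlate that distribution with human preference frequencies. The response list and the human frequency values below are illustrative placeholders, not data from the study.

from collections import Counter
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical responses collected by repeatedly asking the model for a digit.
model_responses = ["7", "7", "3", "7", "4", "8", "7", "3", "7", "5"]

# Count how often each digit 0-9 appears in the model's responses.
counts = Counter(int(r) for r in model_responses)
model_freq = [counts.get(d, 0) / len(model_responses) for d in range(10)]

# Placeholder human preference frequencies for digits 0-9 (7 most popular).
human_freq = [0.02, 0.05, 0.06, 0.12, 0.08, 0.09, 0.08, 0.30, 0.11, 0.09]

# Pearson correlation between the two frequency distributions.
print(correlation(model_freq, human_freq))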
