
Theory of AI Mind: How adults and children reason about the "mental states" of conversational AI

Abstract

Conversational AI devices are increasingly present in our lives and are even used by children to ask questions, play, and learn. These entities not only blur the line between objects and agents—they are speakers (objects) that respond to speech and engage in conversations (agents)—but also operate differently from humans. Here we use a variant of a classic false-belief task to explore adults' and children's attributions of mental states to conversational AI versus human agents. While adults understood that two conversational AI devices, unlike two human agents, may share the same "beliefs" (Exp. 1), 3- to 8-year-old children treated two conversational AI devices just like human agents (Exp. 2); by 5 years of age, they expected the two devices to maintain separate beliefs rather than share the same belief, with hints of developmental change. Our results suggest that children initially rely on their understanding of agents to make sense of conversational AI.
