
Learning ecological and artificial visual categories: rhesus macaques, humans, and machines

Abstract

Comparative studies of categorization using non-human animals are difficult to conduct because studies of human categorization typically rely on verbal reports. Moreover, animal performance may reflect reinforcement learning, whereby discrete features act as discriminative cues for categorization. We trained humans, monkeys, and computer algorithms – an associative, feature-driven algorithm and a neural network – to classify four simultaneously presented visual stimuli, each belonging to a different perceptual category, in a specific order. There were two sets of categories: naturalistic photographs and close-up sections of paintings with distinctive styles. All living subjects classified stimuli better than predicted by chance or by feature-driven learning alone, even when stimuli changed on every trial. However, humans more closely resembled monkeys when classifying the more abstract painting stimuli than when classifying the photographic stimuli. This points to a common, non-associative, non-linguistic classification strategy in both species, one that humans can rely on in the absence of linguistic labels for categories.
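
To make the machine-comparison concrete, the sketch below shows the general shape of a small neural-network baseline that assigns one of four category labels to an image, trained with cross-entropy on labeled examples. This is an illustrative assumption, not the authors' network: the architecture, image size, and optimizer settings are placeholders chosen only to show the kind of image-classification setup such a comparison involves.

```python
# Minimal sketch (assumed, not the paper's model): a small convolutional
# classifier mapping an image to one of four perceptual categories.
import torch
import torch.nn as nn

class SmallCategoryNet(nn.Module):
    def __init__(self, n_categories: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # For 64x64 RGB input, the feature map is (32, 16, 16) after two poolings.
        self.classifier = nn.Linear(32 * 16 * 16, n_categories)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Toy training step on random tensors standing in for labeled stimulus images.
model = SmallCategoryNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)    # batch of 64x64 RGB stimuli (illustrative size)
labels = torch.randint(0, 4, (8,))    # one of four category labels per image
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```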
