Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns

Abstract

Important advances have recently been made using computational semantic
models to decode brain activity patterns associated with concepts; however,
this work has almost exclusively focused on concrete nouns. How well these
models extend to decoding abstract nouns is largely unknown. We address this
question by applying state-of-the-art computational models to decode
functional magnetic resonance imaging (fMRI) activity patterns elicited
while participants read and imagined a diverse set of both concrete and
abstract nouns. The first model is linguistic, exploiting the recent
word2vec skip-gram approach trained on Wikipedia. The second is
visually grounded, using deep convolutional neural networks trained on
Google Images. Dual coding theory considers concrete concepts to be encoded
in the brain both linguistically and visually, and abstract concepts only
linguistically. Splitting the fMRI data according to human concreteness
ratings, we indeed observe that both models significantly decode the most
concrete nouns; however, accuracy is significantly greater using the
text-based model for the most abstract nouns. More generally, this confirms
that current computational models are sufficiently advanced to assist in
investigating the representational structure of abstract concepts in the
brain.
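
The abstract does not spell out the decoding procedure, so the sketch below
is only an illustration of the general paradigm it describes: a regression
decoder is trained to map fMRI activity patterns onto semantic vectors
(word2vec or CNN image features) and then evaluated on held-out nouns. Ridge
regression, the leave-two-out "2 vs. 2" matching test, and all names here
(`two_vs_two_accuracy`, `fmri`, `vectors`) are assumptions chosen because
they are standard in this literature, not details taken from the paper.

```python
from itertools import combinations

from scipy.spatial.distance import cosine
from sklearn.linear_model import Ridge


def two_vs_two_accuracy(fmri, vectors, alpha=1.0):
    """Leave-two-out decoding of semantic vectors from fMRI patterns.

    fmri    : (n_words, n_voxels) array of activity patterns.
    vectors : (n_words, dim) array of model vectors (e.g. word2vec or CNN).
    Returns the fraction of held-out word pairs whose predicted vectors
    match the correct targets better than the swapped assignment.
    """
    n = len(fmri)
    pairs = list(combinations(range(n), 2))
    correct = 0
    for i, j in pairs:
        # Train the decoder on all words except the held-out pair.
        train = [k for k in range(n) if k not in (i, j)]
        decoder = Ridge(alpha=alpha).fit(fmri[train], vectors[train])
        pred_i, pred_j = decoder.predict(fmri[[i, j]])
        # The correct assignment should give a smaller total cosine
        # distance than the swapped assignment.
        right = cosine(pred_i, vectors[i]) + cosine(pred_j, vectors[j])
        wrong = cosine(pred_i, vectors[j]) + cosine(pred_j, vectors[i])
        correct += right < wrong
    return correct / len(pairs)
```

Under this reading, the concreteness split reported above corresponds to
running such a procedure separately on the most concrete and the most
abstract nouns, once with the text-based vectors and once with the
image-based vectors as the target space.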

Presented at ACL 2017.