Learning Representations Specialized in Spatial Knowledge: Leveraging Language and Vision

Abstract

Spatial understanding is crucial in many real-world problems, yet little progress has been made towards building representations that capture spatial knowledge. Here, we move one step forward in this direction and learn such representations by leveraging a task that consists of predicting continuous 2D spatial arrangements of objects given object-relationship-object instances (e.g., "cat under chair") and a simple neural network model that learns the task from annotated images. We show that the model succeeds at this task and, furthermore, that it can predict correct spatial arrangements for unseen objects if either CNN features or word embeddings of the objects are provided. The differences between visual and linguistic features are discussed. Next, to evaluate the spatial representations learned in the previous task, we introduce a task and a dataset consisting of crowdsourced human ratings of spatial similarity for object pairs. We find that both CNN features and word embeddings predict human judgments of similarity well, and that these vectors can be further specialized in spatial knowledge if we update them while training the model that predicts spatial arrangements of objects. Overall, this paper paves the way towards building distributed spatial representations, contributing to the understanding of spatial expressions in language.
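To make the setup concrete, below is a minimal sketch of the kind of model the abstract describes: a small feed-forward network that maps the vectors of an object-relationship-object instance to a continuous 2D arrangement. It is not the authors' exact architecture; the class name, embedding dimension, hidden size, and the choice of a 2D relative-offset output are illustrative assumptions.

```python
# Hedged sketch: a simple network mapping (subject, relation, object) vectors
# to a continuous 2D spatial arrangement. Names and dimensions are assumptions,
# not the paper's exact model.
import torch
import torch.nn as nn

class SpatialArrangementNet(nn.Module):
    def __init__(self, emb_dim: int = 300, hidden_dim: int = 100):
        super().__init__()
        # Input: concatenated embeddings (CNN features or word embeddings)
        # of the subject, the relationship, and the object.
        self.net = nn.Sequential(
            nn.Linear(3 * emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # 2D output: relative (x, y) position
        )

    def forward(self, subj_emb, rel_emb, obj_emb):
        x = torch.cat([subj_emb, rel_emb, obj_emb], dim=-1)
        return self.net(x)

# Usage with random stand-ins for an instance such as "cat under chair":
model = SpatialArrangementNet()
subj, rel, obj = (torch.randn(1, 300) for _ in range(3))
pred_xy = model(subj, rel, obj)          # predicted 2D arrangement
target_xy = torch.tensor([[0.0, -0.4]])  # e.g., "under": below the reference
loss = nn.functional.mse_loss(pred_xy, target_xy)
loss.backward()  # if the input embeddings are trainable, backpropagating
                 # through them is how they become specialized in spatial knowledge
```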

Article at MIT Press | PDF (presented at ACL 2018)

Author Biography

Guillem Collell

PhD researcher, Computer Science Department