Extracting and Utilizing Abstract, Structured Representations for Analogy
- Frankland, S. M., Webb, T. W., Petrov, A. A., O'Reilly, R. C., & Cohen, J. D. (2019). Extracting and utilizing abstract, structured representations for analogy. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Conference of the Cognitive Science Society (pp. 1766-1772). Montreal, QC: Cognitive Science Society.
Abstract:
Human analogical ability involves the re-use of abstract, structured representations within and across domains. Here, we present a generative neural network that completes analogies in a 1D metric space, without explicit training on analogy. Our model integrates two key ideas. First, it operates over representations inspired by properties of the mammalian Entorhinal Cortex (EC), believed to extract low-dimensional representations of the environment from the transition probabilities between states. Second, we show that a neural network equipped with a simple predictive objective and highly general inductive bias can learn to utilize these EC-like codes to compute explicit, abstract relations between pairs of objects. The proposed inductive bias favors a latent code that consists of anti-correlated representations. The relational representations learned by the model can then be used to complete analogies involving the signed distance between novel input pairs (1:3 :: 5:? (7)), and extrapolate outside of the network's training domain. As a proof of principle, we extend the same architecture to more richly structured tree representations. We suggest that this combination of predictive, error-driven learning and simple inductive biases offers promise for deriving and utilizing the representations necessary for high-level cognitive functions, such as analogy.
Keywords: abstract structured representations, analogy, neural networks, predictive learning, relational reasoning
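
To make the analogy task described in the abstract concrete, the following is a minimal sketch of 1D signed-distance analogy completion: given a:b :: c:?, the answer is c plus the signed distance (b - a). This illustrates the task only, not the paper's generative neural network; the function name and the example training range are hypothetical.

```python
# Minimal sketch of the 1D signed-distance analogy task from the abstract:
# given a:b :: c:?, the answer is c plus the signed distance (b - a).
# This is an illustration of the task, not the authors' model.

def complete_analogy(a: float, b: float, c: float) -> float:
    """Complete a:b :: c:? using the signed distance between the first pair."""
    signed_distance = b - a          # relation extracted from the source pair
    return c + signed_distance       # apply the same relation to the target


if __name__ == "__main__":
    # Example from the abstract: 1:3 :: 5:? -> 7
    print(complete_analogy(1, 3, 5))      # 7.0 -> the answer given in the abstract

    # Extrapolation beyond a hypothetical training range (e.g., values 0-10):
    # the same relational rule transfers to novel, larger inputs.
    print(complete_analogy(20, 26, 100))  # 106.0
```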