Doumas, L. A. A., Martin, A. E., & Hummel, J. E.
(2020). Relation learning in a neurocomputational architecture supports cross-domain transfer. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 932-937). Montreal, QC: Cognitive Science Society.
Humans readily generalize, applying prior knowledge to novel situations and stimuli. Advances in machine learning have begun to approximate and even surpass human performance, but these systems struggle to generalize what they have learned to untrained situations. We present a model based on well-established neurocomputational principles that demonstrates human-level generalization. This model is trained to play one video game (Breakout) and performs one-shot generalization to a new game (Pong) with different characteristics. The model generalizes because it learns structured representations that are functionally symbolic (viz., a role-filler binding calculus) from unstructured training data. It does so without feedback, and without requiring that structured representations be specified a priori. Specifically, the model uses neural co-activation to discover which characteristics of the input are invariant and to learn relational predicates, and oscillatory regularities in network firing to bind predicates to arguments. To our knowledge, this is the first demonstration of human-like generalization in a machine system that does not assume structured representations to begin with.
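The two mechanisms named above can be caricatured in a few lines of code. The following is a highly simplified sketch, not the authors' model: co-activation is reduced to set intersection over symbolic feature lists, and oscillatory binding is reduced to adjacent slots on a discrete timeline. All function and variable names here (`shared_features`, `bind_pairs`, `read_bindings`, the feature labels) are illustrative inventions; the actual architecture uses distributed units with continuous, oscillating activations.

```python
def shared_features(obj_a, obj_b):
    """Invariant discovery via co-activation (sketch): features active in
    both compared objects are candidates for a relational predicate."""
    return obj_a & obj_b

def bind_pairs(pairs):
    """Time-based role-filler binding (sketch): each (role, filler) pair
    occupies two consecutive time slots, and distinct pairs fire out of
    phase, so bindings are carried by firing order rather than by
    dedicated conjunctive units."""
    timeline = []
    for role, filler in pairs:
        timeline.append({role})    # role unit fires alone in its slot
        timeline.append({filler})  # its filler fires in the next slot
    return timeline

def read_bindings(timeline):
    """Recover role-filler pairs from adjacency in the firing sequence."""
    return [(next(iter(timeline[i])), next(iter(timeline[i + 1])))
            for i in range(0, len(timeline), 2)]

# Comparing two scenes that differ in surface features but share "above"
# leaves the invariant relational feature as the learned predicate.
print(shared_features({"above", "red"}, {"above", "blue"}))  # {'above'}

# A two-place relation is then expressed by when units fire, not by
# which units fire.
schedule = bind_pairs([("above", "ball"), ("below", "paddle")])
print(read_bindings(schedule))  # [('above', 'ball'), ('below', 'paddle')]
```

The design point the sketch preserves is that the same role unit can bind to any filler without new hardware, which is what makes the learned predicates transferable across games.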