EducationalCicada t1_j1yux4o wrote
Previously from DeepMind in the domain of symbolic reasoning:
>This paper attempts to answer a central question in unsupervised learning: what does it mean to "make sense" of a sensory sequence? In our formalization, making sense involves constructing a symbolic causal theory that both explains the sensory sequence and also satisfies a set of unity conditions. The unity conditions insist that the constituents of the causal theory -- objects, properties, and laws -- must be integrated into a coherent whole. On our account, making sense of sensory input is a type of program synthesis, but it is unsupervised program synthesis.
>
>Our second contribution is a computer implementation, the Apperception Engine, that was designed to satisfy the above requirements. Our system is able to produce interpretable human-readable causal theories from very small amounts of data, because of the strong inductive bias provided by the unity conditions. A causal theory produced by our system is able to predict future sensor readings, as well as retrodict earlier readings, and impute (fill in the blanks of) missing sensory readings, in any combination.
>
>We tested the engine in a diverse variety of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence induction intelligence tests. In each domain, we test our engine's ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data.
>
>The engine performs well in all these domains, significantly out-performing neural net baselines. We note in particular that in the sequence induction intelligence tests, our system achieved human-level performance. This is notable because our system is not a bespoke system designed specifically to solve intelligence tests, but a general-purpose system that was designed to make sense of any sensory sequence.
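To make the predict/retrodict/impute framing concrete, here is a minimal Python toy. To be clear, this is not the Apperception Engine itself; it just brute-forces a tiny hand-picked hypothesis space (affine sequences s(t) = a*t + b), with coefficient size standing in for the paper's unity-conditions bias. The names and the hypothesis family are my own illustration:

```python
from itertools import product

def theories(max_coef=5):
    # Candidate "causal theories": affine sequences s(t) = a*t + b,
    # each paired with a crude complexity cost |a| + |b|.
    for a, b in product(range(-max_coef, max_coef + 1), repeat=2):
        yield abs(a) + abs(b), (lambda t, a=a, b=b: a * t + b)

def make_sense(observations):
    # observations: {time step: sensor value}. Return the simplest
    # theory consistent with every observation, or None.
    for _, theory in sorted(theories(), key=lambda pair: pair[0]):
        if all(theory(t) == v for t, v in observations.items()):
            return theory
    return None

theory = make_sense({1: 5, 2: 8, 4: 14})  # step 3 was never observed
print(theory(3))  # impute the blank:   11
print(theory(0))  # retrodict the past:  2
print(theory(5))  # predict the future: 17
```

Swap the affine family for a richer program space and the cost for the paper's unity conditions, and you have the general shape of the unsupervised program synthesis they describe.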
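Cellular automata are one of their test domains, and the same "induce the law, then predict" loop is easy to demo there too. Again a toy of my own, not the paper's setup: treat each row of an elementary CA as a sensor reading and search all 256 rules for ones consistent with the observed rows.

```python
def step(row, rule):
    # One update of an elementary CA (periodic boundary): each cell's
    # next state is the rule bit indexed by its 3-cell neighborhood.
    n = len(row)
    return tuple(
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    )

def induce_rule(rows):
    # All rules consistent with every observed transition; there may be
    # several if the data never exercise some neighborhoods.
    return [r for r in range(256)
            if all(step(rows[i], r) == rows[i + 1] for i in range(len(rows) - 1))]

rows = [(0, 0, 0, 1, 0, 0, 0)]
for _ in range(3):
    rows.append(step(rows[-1], 110))  # generate "sensor readings" with Rule 110

print(induce_rule(rows))    # consistent rules; includes 110
print(step(rows[-1], 110))  # predict the next reading
```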
Edit: Also check out the follow-up paper:
valdanylchuk OP t1_j1z073m wrote
Very cool! And this paper is from 2019-20, and some of those I listed in my post are from 2018-19. I wonder how many of these turned out to be dead ends, and how far the rest have gone by now. Papers for major conferences are often preprinted in advance, but sometimes DeepMind also comes out with something like AlphaGo or AlphaFold on its own schedule. Maybe some highly advanced Gato 2.0 is just around the corner?