snakeylime t1_j88vcxb wrote
Reply to comment by lookmeat in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
What are you talking about?
Knowing that neural networks are theoretically Turing complete does not imply that the networks we actually train (i.e. the specific sets of weights we end up with) have learned Turing-complete solutions.
Remember that the weight space is, for all practical purposes, infinite (i.e. without measures against overfitting, a net can fit essentially any arbitrary function). But the set of "good" weight combinations for any given task lives on a vanishingly smaller, lower-dimensional manifold.
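(As an aside, the "fit any arbitrary function" point is easy to see concretely. Here's a minimal sketch of my own, not from the article: a small MLP driven to memorize purely random labels. The dataset size, architecture, and hyperparameters are arbitrary illustrative choices.)

```python
# Sketch: a small MLP can memorize labels that have no relation to the inputs.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 32)           # 256 random input vectors
y = torch.randint(0, 2, (256,))    # labels drawn independently of X

model = nn.Sequential(nn.Linear(32, 512), nn.ReLU(), nn.Linear(512, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(dim=1) == y).float().mean()
print(f"train accuracy on random labels: {acc:.2f}")  # typically ~1.0
```

The point being: the capacity is there to represent nearly anything, but that says nothing about which solutions the optimizer actually finds on a real task.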
In other words, it is not at all obvious that networks, being theoretically "Turing complete", will in fact produce Turing machines under the forms of optimization we actually apply. It is likely that our optimizers explore the solution landscape in highly idiosyncratic ways.
Given that, this strikes me as a pretty remarkable result.
(Source: ML researcher in NLP+machine vision)