Submitted by vsmolyakov t3_10766uz in MachineLearning
jimmymvp t1_j3q5wmj wrote
Reply to comment by Mental-Swordfish7129 in [N] What's next for AI? by vsmolyakov
Sorry, what's the "active" part here? Is the model actually generative? I'm aware of Karl Friston and the free-energy principle. Is the active part the input-stream selection? I thought the active part referred to learning, in the sense that I get to pick my training data along the way. It sounds like what you're doing is akin to Gato from DeepMind, with tokenization, and is about multi-modal policies (modulo the hierarchical processing and attention).
Is there a math writeup somewhere?
Mental-Swordfish7129 t1_j3q731g wrote
Also, I do mean "active" in the ways you describe. The bottom layer actively controls the sensors via servos and a voice coil. The other layers actively modulate their input by masking it (selectively ignoring parts of it in a non-trivial way).
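A minimal sketch of that kind of input modulation, assuming the mask is just a learned per-element gate applied to a layer's input vector; the function and variable names are illustrative, not the author's code:

```python
import numpy as np

def modulate_input(x, salience, threshold=0.5):
    """Gate an input vector: elements whose learned salience falls below
    the threshold are zeroed out, i.e. actively ignored by the layer."""
    mask = (salience >= threshold).astype(x.dtype)
    return x * mask

# Toy usage: a layer that has learned to ignore some of its input channels.
x = np.random.randn(8)
salience = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.05, 0.95, 0.3])
print(modulate_input(x, salience))
```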
Mental-Swordfish7129 t1_j3q6p7m wrote
The model is generative. Each layer generates predictions about the patterns of the layers below it. The bottom layer generates predictions about the sensory data, some of which are proprioceptive.
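As a rough sketch of that kind of hierarchy, assuming each layer holds a state, linearly predicts the activity of the layer below, and nudges its state to reduce the resulting prediction error; the sizes, the linear mapping, and the update rule are my assumptions, not the author's design:

```python
import numpy as np

class Layer:
    """One level of the hierarchy: holds a state and generates a
    prediction of the activity of the layer below it."""
    def __init__(self, size, below_size, lr=0.1):
        self.state = np.zeros(size)
        self.W = np.random.randn(below_size, size) * 0.01  # generative weights
        self.lr = lr

    def predict_below(self):
        return self.W @ self.state

    def update(self, error_from_below):
        # Nudge the state to reduce the prediction error at the level below.
        self.state += self.lr * (self.W.T @ error_from_below)

def step(layers, sensory):
    """One pass: each layer predicts the layer below; errors flow upward."""
    target = sensory  # the bottom layer predicts raw sensory (incl. proprioceptive) data
    for layer in layers:
        error = target - layer.predict_below()
        layer.update(error)
        target = layer.state  # the next layer up predicts this layer's state
    return error

layers = [Layer(16, 32), Layer(8, 16), Layer(4, 8)]  # bottom to top
sensory = np.random.randn(32)
for _ in range(10):
    step(layers, sensory)
```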
I have never published anything. I do not have that much time, and it would be largely redundant. You can look at Friston et al. for the math; I use nearly the same math and logic.
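For reference, the central quantity in that framework is the variational free energy, an upper bound on sensory surprise. This is the standard textbook form from the free-energy literature, not necessarily the exact objective used in this project:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
  \;\ge\; -\ln p(o)
```

Here q(s) is the approximate posterior over hidden states, p(o, s) is the generative model, and both perception (updating q) and action (changing o) work to reduce F.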
In my opinion, what I'm doing bears only a superficial similarity to Gato, but I can't say I've looked into it deeply. I've been far too busy with life; unfortunately, I only have a small amount of spare time for this project.
jimmymvp t1_j3q74ms wrote
So the active part is the self-predictive part?
Mental-Swordfish7129 t1_j3q81e2 wrote
Active just means that it directly modifies its input stream. And, yes, it is also predicting what that input will be, so it is reasonable to say that it is, in part, self-predictive.
Crucially, its input stream also includes features that are not part of itself and have not been changed by its own actions. The proprioceptive signals help it learn which is which.
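A minimal sketch of that sense of "active", assuming a toy loop in which the agent both predicts its input and changes it through its own actions, with a proprioceptive copy of the action included in the input stream; everything here is illustrative rather than the author's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

world = np.zeros(3)        # external state, only partly under the agent's control
prediction = np.zeros(4)   # the agent's prediction of its next input

for t in range(50):
    # Act: the agent directly modifies part of its own future input stream.
    action = -0.2 * prediction[:3]
    # The world changes both on its own and because of the action.
    world = world + 0.05 * rng.standard_normal(3) + action
    # Input stream: exteroceptive signal plus a proprioceptive copy of the action.
    observation = np.concatenate([world, [action.sum()]])
    # Self-prediction: update the prediction toward the observed input.
    error = observation - prediction
    prediction = prediction + 0.5 * error
```

The proprioceptive component is what lets such a model separate input changes it caused from changes that came from outside.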