
JavaMochaNeuroCam t1_j64e3yg wrote

Alan was really psyched about Gato (600+ tasks/domains).

I think it's relatively straightforward to bind experts to a general cognitive model.

Basically, a Mixture of Experts (MoE) setup would dual-train the domain-specific model alongside the cortex (language) model. That is, a pre-trained image-recognition model can describe an image (e.g., a cat) in text to an LLM, but it can also bind that description to a vector representing the internal neural state that captured the image.

So, you're just binding the language to the domain-specific representations.
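A minimal sketch of what that binding could look like, assuming a frozen image encoder and an LLM with a known embedding width. All class and variable names (VisionToLanguageBinder, vision_dim, lm_embed_dim, etc.) are hypothetical, not any particular library's API; the idea is just to project the expert's feature vector into the language model's embedding space so the two are trained together.

    import torch
    import torch.nn as nn

    class VisionToLanguageBinder(nn.Module):
        """Hypothetical sketch: bind a vision expert's representation to an LLM."""
        def __init__(self, vision_dim: int = 768, lm_embed_dim: int = 4096):
            super().__init__()
            # Learned projection mapping the frozen expert's feature vector
            # ("neural state") into the language model's embedding space.
            self.proj = nn.Linear(vision_dim, lm_embed_dim)

        def forward(self, vision_features: torch.Tensor,
                    text_embeddings: torch.Tensor) -> torch.Tensor:
            # vision_features: (batch, vision_dim) from a frozen image encoder
            # text_embeddings: (batch, seq_len, lm_embed_dim) from the LLM's embedder
            bound = self.proj(vision_features).unsqueeze(1)  # (batch, 1, lm_embed_dim)
            # Prepend the bound vector as a "soft token" the LLM attends to,
            # so the caption text and the expert's representation are linked.
            return torch.cat([bound, text_embeddings], dim=1)

    # Usage: feed the concatenated sequence into the LLM and train the projection
    # (and optionally the experts) with the usual next-token loss on the caption.
    binder = VisionToLanguageBinder()
    img_vec = torch.randn(2, 768)        # stand-in for image-encoder output
    txt_emb = torch.randn(2, 16, 4096)   # stand-in for embedded caption tokens
    fused = binder(img_vec, txt_emb)     # (2, 17, 4096)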

If I'm not mistaken, the hippocampus, thalamus, and claustrum are somehow involved in that kind of binding in humans.
