LastExitToSalvation t1_ir0y0xs wrote
>One of the most complex parts of the proposed architecture, the “world model module” would work to estimate the state of the world, as well as predict imagined actions and other world sequences, much like a simulator.
This is the part standing between real cognition and ML prediction. AI has no sense of the world, only the discrete things it has been optimized to compute. If there were a general-purpose world module, then everything a model learns could be put in the context of the real world, making outputs more consistently accurate and training cheaper and faster. I know the paper just sets out an architecture for the next phase of research, but if this world module became real, that would be as profound as what deep learning has done over the last 10 years.
For everyone about to make a comment "I for one welcome our AI overlords" or some trite shit, this is actually the beginning of something that could lead us there. But without a world model, we will never get there. imo.
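The "predict imagined actions" idea is basically rolling a learned dynamics function forward without touching the real world. A toy sketch (everything here is hypothetical, a random linear map standing in for a trained network, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned dynamics: next_state = f(state, action).
# A fixed random linear map stands in for a trained network.
W_s = rng.normal(size=(4, 4)) * 0.1
W_a = rng.normal(size=(4, 2)) * 0.1

def world_model(state, action):
    """Predict the next latent state from the current state and an action."""
    return np.tanh(W_s @ state + W_a @ action)

def imagine(state, actions):
    """Roll out a sequence of imagined actions, like a simulator."""
    trajectory = [state]
    for a in actions:
        state = world_model(state, a)
        trajectory.append(state)
    return trajectory

s0 = np.zeros(4)
plan = [rng.normal(size=2) for _ in range(5)]
traj = imagine(s0, plan)  # initial state plus one predicted state per action
```

The whole point is that `imagine` never queries the environment: planning happens against the model's own predictions.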
Thatingles t1_ir106e5 wrote
I remain convinced AGI will emerge from linking together many modules and one of those would of course be a world model module, but I don't think it's the final step. We still seem to be missing the components that would allow an AI to solve complex multi-step problems by a combination of memory and reasoning. I'm sure it will come but this ain't it.
thruster_fuel69 t1_ir1bpx1 wrote
That's my meaty sense also. World module is critical but not the only requirement. That being said, the future is going to be exciting as heck! I can't wait for my worldly sage of an AI mentor.
berd021 t1_ir2j69u wrote
That is exactly what the world model is for, though. You can use it to perform a sequence of transformations whose length isn't specified beforehand: it stops performing steps as soon as the energy is reasonably minimized.
Compare that to AI now, which only performs as many steps as it has learnt.
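Concretely, "stop when energy is reasonably minimized" means the step count is a runtime outcome, not a fixed architecture depth. A toy sketch with a made-up quadratic energy (not the paper's energy function):

```python
import numpy as np

def energy(state, target):
    # Toy energy: squared distance to a goal state.
    return float(np.sum((state - target) ** 2))

def refine(state, target, lr=0.1):
    # One "transformation" step: gradient descent on the toy energy.
    return state - lr * 2 * (state - target)

def minimize(state, target, tol=1e-3, max_steps=10_000):
    # Keep transforming until energy is low enough; the number of
    # steps is not specified beforehand.
    steps = 0
    while energy(state, target) > tol and steps < max_steps:
        state = refine(state, target)
        steps += 1
    return state, steps

start = np.array([5.0, -3.0])
goal = np.zeros(2)
final, n = minimize(start, goal)
```

Start closer to the goal and `n` shrinks; start farther away and it grows, which is exactly the contrast with a fixed-depth feedforward pass.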
Snufflepuffster t1_ir1k2dt wrote
I have always considered that something approaching sentience could be made by having a network operating on top of smaller task-specific nets. Operating on the activations of all these smaller nets could give the 'sentient' net a sense of the world around it, because it has access to their information. It can modulate each of the smaller slave nets on the fly based on previous experiences to make a decision, and it can identify the most pressing task in its surrounding environment to act on. That's what LeCun is suggesting in this scholarly op-ed. It's not a new idea, more a question of computing power.
afaik we haven't clearly defined what sentience is yet. If an AI bot can trick you into believing it's sentient, then what else is there? I guess this would just show we have an information processing limit, and once another entity approaches that limit we are fooled. This is a question for the humanities to answer, probably.
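The controller-over-task-nets idea can be sketched in a few lines. Everything below is hypothetical (fixed random maps standing in for trained nets), just to show the wiring: the controller reads all task activations at once and gates each net's output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical task-specific "nets", stood in by fixed linear maps.
task_nets = [rng.normal(size=(3, 8)) * 0.1 for _ in range(3)]

# Controller weights: reads the concatenated task activations
# (3 nets x 3 units) and emits one gain per task net.
W_ctrl = rng.normal(size=(3, 9)) * 0.1

def forward(x):
    # 1. Run every task net on the shared input.
    acts = [np.tanh(W @ x) for W in task_nets]
    # 2. The controller sees all activations at once --
    #    its "sense of the world".
    summary = np.concatenate(acts)
    gains = 1 / (1 + np.exp(-(W_ctrl @ summary)))  # one gate per net
    # 3. Modulate each task net's output on the fly.
    modulated = [g * a for g, a in zip(gains, acts)]
    # 4. The "most pressing" task is the one gated highest.
    return modulated, int(np.argmax(gains))

out, pressing = forward(rng.normal(size=8))
```

In a trained system the gains would come from experience rather than random weights, but the data flow (task nets up, modulation down) is the same.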
LastExitToSalvation t1_ir21ltj wrote
To your point about a network overlaying smaller nets, we could get to a point where awareness or quasi-sentience is an emergent phenomenon, not something we can build. Thinking about human consciousness, it is evident that our self awareness is an emergent property of our biology. If we put enough of the right technology pieces together, perhaps we'll see the same thing in machines. And then we're left with a real ethical question. If we didn't create sentience but it merely occurred, do we have the moral right to shut it down?
Snufflepuffster t1_ir22k9m wrote
Yea eventually the emergent properties should be mostly contained in the self supervised training signal. So a question of how the model learns not necessarily its construction. As the bot learns more it can start to identify priority tasks to infer, and then this process just continues. The thing we’re taking for granted is the environment that supplies all the stimulus from which self awareness could be learned.
LastExitToSalvation t1_ir2g0ku wrote
Well that's the question though - is self awareness learned (in which case our self awareness is just linear algebra done by a meat computer) or is it a spontaneous event, like a wildfire catching hold, something more ephemeral? I suppose that's the humanities question - how are we going to define what is either contained in some component piece of the architecture or wholly distinct from it? If I take away my brain, my consciousness is gone. But if I take away my heart, it's the same result. Is a self-supervised training signal an analog for consciousness? I guess I think it will be something more than that, something uncontained but still dependent on the pieces.
Mike_0x t1_ir5gi4o wrote
I for one welcome our AI overlords.
Ebayednoob t1_ir19oux wrote
Some have proposed that an AI world-prediction module could be blockchain-based, storing each world state as a hash in a Merkle-tree system for fast time-state processing.
This isn't some token or bitcoin scam nonsense. It's a practical use case for blockchain development (not something that holds value as a currency, more a software implementation). It's also eerily similar to how our DNA stores data.
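The Merkle-tree part on its own is just hashing: each state becomes a leaf, and pairs of hashes fold up to a single root, so any past state can be verified without rescanning the whole history. A minimal sketch (the serialized "world states" are made up for illustration):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes up to a single root hash."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical serialized world states, one per time step.
states = [f"world-state-{t}".encode() for t in range(4)]
leaves = [h(s) for s in states]
root = merkle_root(leaves)

# Changing any single past state changes the root, so tampering is
# detectable, and membership proofs need only O(log n) hashes.
tampered = leaves[:]
tampered[2] = h(b"forged-state")
assert merkle_root(tampered) != root
```

Whether that buys anything for a learned world model is a separate question, but this is the data structure being proposed.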
TheLastSamurai t1_ir1kuis wrote
I don't want any of this. I wish we had the power to shut all this research down, to me the risks far outweigh the benefits, even in the "good" scenario we basically lost most of our actual humanity. We need to organize and stop this.
Snufflepuffster t1_ir1mmvm wrote
it’s just a neural net. An assistant. Did you read the paper? It’s not coming for you, and I think it’s really selfish to try to stop research that could help so many. Machine learning in medicine is a big thing.
xxxmsky t1_ir26nnc wrote
The good it can bring: abundance of materials and food, a good climate, world peace, and the end of world hunger and disease.
I disagree that the risks outweigh the good. We need to democratize it!