Snufflepuffster t1_ir22k9m wrote
Reply to comment by LastExitToSalvation in Meta's AI Chief Publishes Paper on Creating ‘Autonomous’ Artificial Intelligence by Impossible_Cookie596
Yeah, eventually the emergent properties should be mostly contained in the self-supervised training signal, so it's a question of how the model learns, not necessarily how it's constructed. As the bot learns more, it can start to identify priority tasks to infer, and the process just continues from there. The thing we're taking for granted is the environment that supplies all the stimulus from which self-awareness could be learned.
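For readers unfamiliar with the term: a "self-supervised training signal" just means the labels come from the data itself rather than from human annotation. A minimal sketch (my own toy illustration, not anything from the paper discussed here): predict the next sample of an observed signal from its recent past, so the environment alone supplies the learning target.

```python
# Toy self-supervised objective (illustrative only): the target is
# derived from the unlabeled data itself -- predict the next sample
# of a noisy sine wave from the previous two samples.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "environment": a noisy sine wave the learner observes.
t = np.linspace(0, 8 * np.pi, 2000)
signal = np.sin(t) + 0.05 * rng.normal(size=t.size)

# Build (context, target) pairs from the data itself -- no human labels.
X = np.stack([signal[:-2], signal[1:-1]], axis=1)  # previous two samples
y = signal[2:]                                     # next sample is the target

# Tiny linear predictor fit by least squares on the self-generated targets.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

mse = float(np.mean((X @ w - y) ** 2))
print(f"self-supervised MSE: {mse:.4f}")
```

The point of the sketch is only that the "supervision" here is manufactured from raw observation, which is the sense in which the environment, not a human teacher, supplies the signal.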
LastExitToSalvation t1_ir2g0ku wrote
Well, that's the question, though: is self-awareness learned (in which case our self-awareness is just linear algebra done by a meat computer), or is it a spontaneous event, like a wildfire catching hold, something more ephemeral? I suppose that's the humanities question - how are we going to define what is either contained in some component piece of the architecture or wholly distinct from it? If I take away my brain, my consciousness is gone. But if I take away my heart, the result is the same. Is a self-supervised training signal an analog for consciousness? I suspect it will be something more than that, something uncontained but still dependent on the pieces.