Submitted by hackinthebochs t3_zhysa4 in philosophy
CaseyTS t1_izuun4c wrote
My nitpick is that he shouldn't have put a specific probability number on this because he did not attempt to validate or verify it numerically. He has educated impressions and estimations about how the tech will develop, but as a physicist, I prickle at putting a number on something without quantitatively finding that number.
As for the actual subject matter: I think he's right. I actually think the consciousness problem is overblown. Subjective data (sensations, "what it's like to be a bat"), action planning, and executing actions - repeated frequently or continuously over a period of time - is a good enough definition of consciousness for me. As such, making a conscious general AI seems doable, and by my low standards, some probably exist already. I'd go so far as to say that the hardest part of making a human-like consciousness is not creating a form of consciousness, but generalizing its intelligence to the point where it can be used for multiple things (as human intelligence is).
In other words, I think that making a toy model of consciousness that is either useless or only good for one thing (like chatting via text) is totally doable. I think making a consciousness with enough general intelligence that it looks like a human intelligence is incredibly difficult.