Submitted by alanskimp t3_10w68b6 in Futurology
alanskimp OP t1_j7l8tsk wrote
Reply to comment by EatMyPossum in Artificial Consciousness by alanskimp
Well it won't be real consciousness, it will be their version that we believe is true… like they had a dream and they could explain it to us, etc.
shaehl t1_j7lqldv wrote
Chatbots can already do this. This is not by any means a good indicator of consciousness. For example, chatbots can describe or explain any concept, be it a dream, a college thesis, or anything really. But all that is happening is that we are inputting text into an algorithm and the algorithm is outputting text that has been weighted by its coding as a response.
It's basically the same thing as inputting a math equation into a calculator and getting the solution as the output. Just way more sophisticated.
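To make that concrete, here's a toy sketch (the word-frequency table is made up, and real models are far more elaborate, but the principle is the same: weighted text in, weighted text out, no understanding required):

    import random

    # Made-up weights: how often each continuation followed "I dreamed about" in some training text.
    next_word_weights = {"an apple": 5, "flying": 3, "my thesis": 2}

    def respond(prompt):
        # Pick a continuation purely by statistical weight; nothing here "understands" dreams.
        words = list(next_word_weights)
        weights = list(next_word_weights.values())
        return prompt + " " + random.choices(words, weights=weights, k=1)[0] + "."

    print(respond("I dreamed about"))  # e.g. "I dreamed about an apple."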
For consciousness equivalent to what we experience as humans, you would require at minimum the following:
- Continuity - Memories, desires, goals, experiences, etc. would need to be cumulative and persistent. For example, ask a chatbot about the dream it had this morning; it might tell you about an apple and maybe explain what it means metaphorically. Close the program and ask it again, and now you will get a completely different answer, or maybe it outputs that there was no dream at all. It doesn't have continuity (there's a small sketch of this after the list).
- Agency - a conscious AI would have to be capable of acting, feeling, aspiring, thinking, etc. on its own. To use the chatbot example again, it does nothing unless prompted. Any thoughts it describes to you are simply a chain of weighted text output as a response to a prompt. Again, it is like a calculator: when you aren't inputting math equations, the calculator isn't ruminating on its existence, it's doing nothing, because it is just a set of programmed functions designed to respond to inputs.
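A minimal sketch of the continuity point (generate() here is a hypothetical stand-in for any chatbot backend, not a real API): the model keeps no state of its own, so whatever looks like memory is just the caller re-sending the earlier text.

    import random

    DREAMS = ["an apple", "flying over the ocean", "being late for an exam"]

    def generate(prompt, history):
        # Stand-in for a chatbot call; it only "knows" the text handed to it right now.
        if history:
            return "Earlier in this chat you were told: " + history[-1]
        if "dream" in prompt:
            return "I dreamed about " + random.choice(DREAMS) + "."
        return "I have nothing to say."

    # First run of the program:
    history = []
    answer = generate("Tell me about your dream this morning.", history)
    print(answer)                                            # e.g. "I dreamed about an apple."
    history.append(answer)
    print(generate("Remind me what you dreamed.", history))  # works only because WE kept the history

    # "Close the program": the history list is gone, and with it the dream.
    history = []
    print(generate("Tell me about your dream this morning.", history))
    # Likely a completely different dream; nothing inside the model remembered the first one.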
alanskimp OP t1_j7lr3l8 wrote
Well so far chatbots cannot pass the Turing test. Right?
Carl_The_Sagan t1_j7lu9fm wrote
Not sure about that, but you could test it by telling ChatGPT to convince a new user that it is a human in a closed-off room communicating by text. I think it would convince a lot of unsuspecting people
shaehl t1_j7lwsyr wrote
The Turing Test is useless for determining whether or not something has consciousness.
Imagine that I have spent 100 years compiling responses to every possible sentence someone might say, stored all of these responses in a computer, and programmed it to give one of these prewritten responses when it encounters certain sentences.
A simple "if X then Y" computer function could then theoretically pass the Turing Test if the prewritten responses were done well.
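A minimal sketch of that "if X then Y" idea (the lookup table here is obviously made up and tiny; the thought experiment just scales it to every sentence anyone might say):

    # A pure lookup table of prewritten responses. It understands nothing, yet with
    # enough good entries it could hold up a text-only conversation.
    PREWRITTEN = {
        "how are you today?": "Not bad, a bit tired honestly. You?",
        "are you a computer?": "Ha, I get that a lot. I just type fast.",
        "what did you have for breakfast?": "Just coffee, I overslept.",
    }

    def lookup_bot(message):
        return PREWRITTEN.get(message.strip().lower(), "Sorry, say that again?")

    print(lookup_bot("Are you a computer?"))   # "Ha, I get that a lot. I just type fast."
    print(lookup_bot("How are you today?"))    # "Not bad, a bit tired honestly. You?"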
Many of the chatbots out now can already pass the Turing Test if you get some lucky outputs. Yet in reality they are no more "conscious" than your calculator. All they are is a word document with a high-powered auto-complete function that compares your text to a database of all the text on the internet and calculates the "best" response.
alanskimp OP t1_j7lxnd8 wrote
The Turing test is complex enough to tell if it's human or not, but I think something like ChatGPT will pass it soon
shaehl t1_j7m07nz wrote
Bro, if you are going to reply, at least read what you are replying to. The Turing Test is in no way capable of determining if anything is human or not, let alone conscious or not.
alanskimp OP t1_j7m2652 wrote
Then what is the purpose of it?
shaehl t1_j7m4joz wrote
Who cares what its purpose is; the test doesn't work. It could have any purpose and it would be irrelevant if it doesn't achieve that purpose. You talk as if the Turing Test is some kind of immutable law of the universe enshrined beside e=mc^2. In reality it's just a faulty mind game devised by someone who couldn't have begun to dream of the sophistication modern programming would achieve. No one takes it seriously.
For the record though, the Turing Test was a flawed and heavily criticized method of attempting to determine if a machine can think, developed by Alan Turing when computers were the size of entire rooms. Its purpose is irrelevant since the method and premise of the test are flawed and ineffective.
alanskimp OP t1_j7m4tvq wrote
Then propose a better method to test for human cognition
EatMyPossum t1_j7l94g0 wrote
Don't we call what computers do just "computing" already?
alanskimp OP t1_j7l9dbg wrote
Yes, but this is past the Turing test, where they can't be told apart from humans in the future
dondeestasbueno t1_j7leejd wrote
TC, Turing consciousness
alanskimp OP t1_j7leil8 wrote
Sounds good :)
EatMyPossum t1_j7la8h9 wrote
soo, an emulation of consciousness, I guess?