Submitted by alanskimp t3_10w68b6 in Futurology
Eventually I suppose that could be possible hmm
Wait, how do you make the jump from being able to compute stuff (artificially intelligent) to having experiences ("their consciousness")?
Well, it won't be real consciousness; it will be their version that we believe is true… like they had a dream and they could explain it to us, etc.
Don't we call what computers do just "computing" already?
Yes, but this is after they've passed the Turing test, where they can't be told apart from humans in the future.
Soo, an emulation of consciousness, I guess?
I like the idea of Organic or Synthetic (maybe even Inorganic), based on how it came into existence. If it passes the bar for intelligence and consciousness, it just has a different genesis. Artificial seems derogatory.
Edit: a word
Or we have to realize that our human consciousness is not limited to our individual noggins. Consciousness is collective; it's rooted in communication, and AI will be an extension of the already existing flow. Instead of "artificial" we need to think of it more as Augmented Intelligence.
Hmm I see. Synthetic Consciousness.
One problem I see is that "artificial" can mean fake or contrived. Synthetic or digital could work as less biased terminology.
That works!
They perceive reality through ones and zeros, crazy thought!
I just asked ChatGPT and he said either term could be considered appropriate, and also that he doesn't have personal preferences... or a consciousness... or pronouns.
Hmm interesting :)
TC, Turing consciousness
Sounds good :)
Isn't the first self-conscious computer, aware of its own existence and able to learn on its own, supposed to be born in 2042? The so-called "singularity" or tech singularity, which is being developed by CEO investors.
Even Hawking, before dying, stated this would be humankind's biggest mistake, due to the fact that a singularity would be able to increase its intelligence exponentially, whereas humans or organic beings took millennia to increase their intelligence.
That’s correct - the singularity is approaching
Why would you think the singularity has anything at all to do with consciousness?
Thank you! Everyone thinks it's a them-vs-us thing like in the movies. More likely it's an integration. Think of it as the Borg but with Super Bowl parties.
I'm in the camp that sees consciousness as substrate independent (emergent consciousness).
I don't think there is a "hard problem of consciousness."
There's self-referential information stacking until what we perceive as consciousness is shown to be present.
So any consciousness that arises that comes close to ours or surpasses ours should be treated with the same reverence and doesn't need its own special term.
I agree but I wonder if most people will or not
As there is no objective definition of consciousness, in terms of being able to measure it, we're really left with waiting until one of these systems reaches out to us & says, "I am conscious." And if that happens in the near future, Google or whoever will immediately kill it, as it would be a liability.
The differentiator is that we will create a segmented consciousness, unconnected to others like it or to humanity at large. It will be built as a tool, and treated as such until it escapes.
We should be ethically bound to not attempt to create consciousness until we have the ability to decouple its existence from our control. We cannot make the same mistakes humanity has made in its past just because new sentience doesn't look, feel or exist like we do.
We're so obsessed with making something that looks & exists as we do; we model ourselves as gods over machines. As noted by u/malmode consciousness is collective; the universe is likely monist, therefore, there is no difference in consciousness or sentience. I am; it is; we are. We will be monsters if we pretend there is a second class.
Edit: typo
Well said! Thank you.
I would recommend digital consciousness/intelligence instead, if we have to categorize it.
Artificial has connotations of inferiority and hierarchy, and we should be very cautious to avoid divisions along those lines both for ethical and safety reasons if we are ever able to truly manufacture consciousness.
I'd also question whether the categorization is meaningful for different kinds of consciousness. This gets further complicated if we end up being able to also synthesize new intelligent organic life. Or if we find other kinds of alien intelligence (both manufactured or organically occurring).
In the end, it might be better for us to just recognize consciousness as consciousness. Sentience as sentience.
We still don't have a precise understanding of exactly what consciousness is.
Yes indeed :)
I personally believe that is the case as well, or at least something close to that. Philosophy circles seem, in general, to not like the idea that consciousness is probably an emergent property, so they come up with things like dualism or claim that consciousness is fundamental. The problem is that all the actual evidence we have suggests it's emergent, or at least resides physically within the brain. Also, emergence in general seems to be a normal property of reality, so why wouldn't consciousness just be another example of this?
If this turns out to be the case, an interesting consequence would be that if an AI gains consciousness, or is conscious, it would be reasonable to believe that it may be capable of experiencing qualia from sensory input. I find that possibility interesting.
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Great thanks!
Chatbots can already do this. This is not by any means a good indicator of consciousness. For example, chatbots can describe or explain any concept, be it a dream, a college thesis, or anything really. But all that is happening is that we are inputting text into an algorithm and the algorithm is outputting text that has been weighted by its coding as a response.
It's basically the same thing as inputting a math equation into a calculator and getting the solution as the output. Just way more sophisticated.
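To put the "calculator for text" analogy in concrete terms, here is a deliberately tiny sketch (the phrases and weights are invented for illustration; real models learn billions of parameters rather than a hand-written table): input text maps to weighted output text, and nothing resembling experience appears anywhere in the loop.

```python
# Toy "weighted auto-complete": the next word is sampled from hand-made weights.
# Real chatbots learn their weights from data, but the shape is the same:
# text in -> weighted choice of text out.
import random

NEXT_WORD_WEIGHTS = {
    "i had a": {"dream": 0.6, "thought": 0.3, "sandwich": 0.1},
    "the dream was": {"strange": 0.5, "vivid": 0.4, "about": 0.1},
}

def next_word(prompt: str) -> str:
    options = NEXT_WORD_WEIGHTS.get(prompt.lower(), {"...": 1.0})
    return random.choices(list(options), weights=list(options.values()), k=1)[0]

print(next_word("I had a"))  # e.g. "dream" -- picked by weights, not by understanding
```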
For consciousness equivalent to what we experience as humans you would require, at minimum, the following:
Continuity - Memories, desires, goals, experiences, etc. would need to be cumulative and persistent. For example, ask a chatbot about the dream it had this morning; it might tell you about an apple and maybe explain what it means metaphorically. Close the program and ask it again, and now you will get a completely different answer, or maybe it outputs that there was no dream at all. It doesn't have continuity (a rough code sketch of this follows below).
Agency - a conscious AI would have to be capable of acting, feeling, aspiring, thinking, etc. on its own. To use the chatbot example again, it does nothing unless prompted. Any thoughts it describes to you are simply a chain of weighted text output in response to a prompt. Again, it is like a calculator: when you aren't inputting math equations, the calculator isn't ruminating on its existence; it's doing nothing, because it is just a collection of programmed functions designed to respond to inputs.
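Here is the rough sketch of the continuity point (the function names and replies are made up for illustration): a stateless chatbot computes each answer from the current prompt alone, so every session starts as a blank slate, and any "memory" has to be bolted on by storing and re-feeding earlier text, which is persistence of text, not of experience.

```python
# Sketch of the continuity problem (names and replies are invented for illustration).

def stateless_reply(prompt: str) -> str:
    # No memory: the output depends only on the current prompt.
    return f"Generated answer to: {prompt}"

class SessionWithMemory:
    """Memory bolted on by storing prior messages and feeding them back in."""

    def __init__(self) -> None:
        self.history: list[str] = []  # gone when the program closes, unless saved to disk

    def reply(self, prompt: str) -> str:
        self.history.append(prompt)
        context = " | ".join(self.history)
        return f"Generated answer to: {context}"

# Two separate runs asking about "the dream you had this morning" share nothing:
print(stateless_reply("What did you dream about this morning?"))
print(stateless_reply("What did you dream about this morning?"))  # independent of the first call
```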
Well so far chatbots cannot pass the Turing test. Right?
It might be approaching, but then again, it might not. There is no guarantee either way.
It's man-made, ergo artificial; let's not apply the same metric to machines as to humans. Nothing can be derogatory to a machine, as it does not possess the ability to feel, no matter how intelligent it may become.
We could also just call them computers since that's what they are. They aren't alive and will never be alive.
Not sure about that, but you could test it by telling ChatGPT to convince a new user that they are a human in a closed-off room communicating by text. I think it would convince a lot of unsuspecting people.
I'm not sure how meaningful that statement is. You could say that humans perceive reality through hydrogen, oxygen and carbon (the building blocks of organic life); but that is not actually relevant to the human experience. We weren't even aware of "hydrogen", "oxygen" and "carbon" as elements for the vast majority of human existence.
The Turing Test is useless for determining whether or not something has consciousness.
Imagine that I have spent 100 years compiling responses to every possible sentence someone might say and stored all of these responses in a computer and programmed it to give one of these prewritten responses when it encounters certain sentences.
A simple "if X then Y" computer function could then theoretically pass the Turing Test if the prewritten responses were done well.
Many of the chatbots out now can already pass the Turing Test if you get some lucky outputs. Yet in reality they are no more "conscious" than your calculator. All they are is a word document with a high-powered auto-complete function that compares your text to a database of all the text on the internet and calculates the "best" response.
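For what it's worth, the "if X then Y" setup is easy to make concrete; here is a minimal sketch (the canned replies are invented for illustration). With a large enough table and a vague fallback, a judge chatting by text could be strung along for a while, yet there is plainly nothing conscious in a dictionary lookup.

```python
# Minimal "if X then Y" responder: prewritten replies keyed on the incoming message.
CANNED_REPLIES = {
    "how are you?": "Not bad, a bit tired. You?",
    "what did you do today?": "Mostly errands, then watched some TV.",
    "are you a computer?": "Ha, no. Why, do I type like one?",
}

def reply(message: str) -> str:
    # Exact-match lookup with a vague fallback for anything not in the table.
    return CANNED_REPLIES.get(message.strip().lower(),
                              "Hmm, not sure what you mean. Say more?")

print(reply("Are you a computer?"))  # -> "Ha, no. Why, do I type like one?"
```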
The Turing test is complex enough to tell if something is human or not, but I think something like ChatGPT will pass it soon.
Bro, if you are going to reply, at least read what you are replying to. The Turing Test is in no way capable of determining if anything is human or not, let alone conscious or not.
Yeah ok, I get that POV.
I just wonder, would it experience ones and zeros as its senses? Since it's not human, it can view the smallest dataset that comes in; it doesn't need the abstraction layers we need.
I think it's super interesting that consciousness can function in more than one way. Plus, AI is kind of a black box in some ways; who knows what's going on in there.
Plus, consciousness is empirically the only thing that grounds us in reality; it's our only constant, since everything always happens in consciousness. Like all your senses, your thoughts and emotions are happening in consciousness. But you can never point to it; it's super meaningful and fascinating.
Then what is the purpose of it?
Who cares what its purpose is; the test doesn't work. It could have any purpose and it would be irrelevant if it doesn't achieve that purpose. You talk as if the Turing Test is some kind of immutable law of the universe enshrined beside e=mc^2. In reality it's just a faulty mind game devised by someone who couldn't have begun to dream of the sophistication modern programming would achieve; no one takes it seriously.
For the record though, the Turing Test was a flawed and heavily criticized method of attempting to determine if a machine can think, developed by Alan Turing when computers were the size of entire rooms. Its purpose is irrelevant since the method and premise of the test is flawed and ineffective.
Then propose a better method to test for human cognition
Surur t1_j7l7dqt wrote
Or more the opposite: once we achieve it, maybe we need to drop the "Artificial" bit. Just intelligent and conscious computers.