Submitted by blaspheminCapn t3_zhtjnn in Futurology
SeneInSPAAACE t1_izpv54m wrote
Reply to comment by BaalKazar in How AI found the words to kill cancer cells by blaspheminCapn
>But these neurons don’t have much in common with biological neurons. They utilize the electrical grid impulse-neuron principle but do not consider electric inhibiter-neurons. The entire chemical-neuron Transmitter system is ignored as well.
Correct. It's not an apples-to-apples comparison in that sense. Like I said.
However, it's hundreds of billions vs. 750 million, if we really wanted to compete.
> A biological brain can alter neuro transmitter levels to react to the same input in indefinite amount of ways, without changing the underlying electrical nor the chemical synapse network configuration.
All that and a few studies have hinted that there might also be an electromagnetic aspect to brain function. Still, an AI doesn't have to work the exact same way as a biological intelligence. It does make direct comparisons harder, though.
>AI can solve non-linear problems. That’s a big step in terms of computation but far off from what we believe makes up intelligence.
Yes, yes. The same old story. A goalpost is set for AI, then it's reached, then people say "what about THIS", and "Doing that previous thing doesn't prove it's actually intelligent".
BaalKazar t1_izq1twe wrote
The first goalpost for measuring digital intelligence hasn't moved in 60 years.
It's still the Turing Test. Until an AI can beat even this base version of the test, one that has already been overhauled multiple times, we can assume it is not yet intelligent. It's not even at the beginning of the measurable spectrum yet.
What GPT does today could be rebuilt on 60-year-old analog Turing machines. It's a ball dropped onto an angled grid, producing a predictable outcome depending on where you drop the ball. But that grid wouldn't be considered intelligent, only functional.
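The ball-and-grid analogy can be sketched as a fixed mapping (a toy illustration of the point being made, not of how GPT actually works): the same drop position always yields the same outcome, and nothing in the grid changes from use.

```python
# Toy illustration of the "ball on an angled grid" analogy:
# a fixed, deterministic mapping from input to output. The grid
# is functional, not intelligent — it never adapts.
def angled_grid(drop_position: int) -> str:
    # The bins and the rule are hypothetical; any fixed rule would do.
    outcomes = ["left bin", "middle bin", "right bin"]
    return outcomes[drop_position % len(outcomes)]

# Dropping the ball at the same spot twice gives the same result.
assert angled_grid(4) == angled_grid(4)
```

The grid is "functional" in exactly the sense above: its behavior is fully determined by its construction.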
Consider taking the brain of a bat, hooking it up to electrodes, and letting it control a fighter jet. The brain in this state is only functional. Brains have already controlled aircraft in experiments, but there they are merely grids of functionality. What we consider „intelligence" is gone once the brain is removed from the body and connected to a synthetic interface instead.
SeneInSPAAACE t1_izq49dy wrote
Uh, did you miss that LaMDA passed the Turing test in June? The conclusion was that the result isn't valid because there's no intelligence behind LaMDA.
Or, "It's not really intelligent".
This is what we're going to get. We'll use harder and harder tests and see them being passed, and we'll just keep concluding "It's not really actually intelligent". Or, maybe we'll switch to "It's not self-aware" or "It's not sapient" at some point.
BaalKazar t1_izq958r wrote
It did not though.
The subject knew it was talking to a machine and was asked whether it believed the machine might be intelligent or even sentient.
The Turing Test requires that the subject does not know it is talking to a machine. The subject has to identify the machine as a human for the machine to pass the test.
In the case of LaMDA, the human knew from the beginning that he was talking to a machine. Asking someone whether they believe a machine is intelligent is different from asking someone whether they believe they are talking to a human.
There is money in AI. Hence a lot of caution is advised when for-profit organizations declare themselves the first to pass the test. The first to pass it will become rich by publicity alone. When the test actually is passed, you, me, and everyone else will get blasted across all media channels by this breakthrough.
(OpenAI's CEO is marketing GPT-4 as the first to pass the test. OpenAI is for-profit and said the same about GPT-3; other companies take the same publicity route without the substance needed. As long as no human says „yeah, this dialog partner is a human", the test isn't passed. A human saying „this machine might be intelligent" isn't enough.)
SeneInSPAAACE t1_izqad1w wrote
>In case of LaMDA the human knew from the beginning that he is talking to a machine.
So the well was poisoned from the beginning? Isn't that cheating? On the human side?
BTW, allegedly GPT-4 will have 100 TRILLION parameters. Now, again, we can't exactly tell what that means, but human brains have something like 150 trillion SYNAPSES, and that includes all the ones for our bodily functions and motor control, so.... Yeah, it's going to get interesting.
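Taking the figures in this thread at face value (the 100-trillion number for GPT-4 was only a rumor, and parameters and synapses aren't directly comparable units), the raw ratio works out like this:

```python
# Back-of-the-envelope comparison using the figures from this thread.
# Parameters and synapses are NOT equivalent; this is only the raw ratio.
gpt4_rumored_params = 100e12   # 100 trillion (rumor, never confirmed)
human_synapses = 150e12        # ~150 trillion, incl. motor control etc.

ratio = human_synapses / gpt4_rumored_params
print(f"synapses per parameter: {ratio:.1f}")  # prints 1.5
```

So even under the rumored figure, the human brain would retain only about a 1.5x edge in raw count, which is why the comparison "is going to get interesting".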
BaalKazar t1_izqerqh wrote
To be honest, yeah, it is. But it's not that easy or definitive. You've got a point, I don't want to deny that. The edge between us in this discussion is the fascinating thing about all of this, especially the fact that either of us might be correct, but at the current point in time there is no definite way to prove it. The Turing test itself is not definitive either.
Currently it looks like GPT itself is going to cheat its way through the Turing test by using a language model which is naturally hard for humans to identify as a machine. When a neural network is trained to pass the test by any means necessary, is it passing the test due to its intelligence, or is the pass pre-determined? (It was trained to pass the test; can it do things beyond the scope of this training?)
There is no clear answer, which imo makes it fascinating. We cannot truly say it is intelligent, but it will very soon reach a point at which it will appear intelligent.
The master question is whether that in itself already is intelligence. It might be! I don't want to deny that. But we lack the definite understanding of „intelligence" needed to truly conclude.
When a neural network passes the test, there will be fierce discussions. These discussions will help us understand what makes up intelligence, they will most likely help with understanding consciousness as well.
But it's a step-by-step discovery process on both sides. Passing the Turing test doesn't automatically mean we suddenly have a clear picture of intelligence or what it looks like, but it is a milestone on the way to understanding it. Perhaps humans have already created synthetic intelligence without even noticing.
Don't get me wrong, GPT and co. are fascinating, modern-age magic. The new range of possible tools is breathtaking. Intelligence requires the ability to acquire knowledge and apply it in the form of skills. Digital AI is very close to doing that, but the way it acquires knowledge is very technical and bound to complex engineered models being fed in just the right way. It's not able to do so on its own. (Just like the brain! But the brain does so with a certain intrinsic ease, which might be purely due to some special, not-yet-discovered feature unrelated to „intelligence". Science can't really tell yet, so we naturally have a hard time setting boundaries for different AI models. Perhaps this current language model isn't intelligent, but some physics-model AI already was? The physics one can't „talk" to us, which makes it easy to miss.)
Currently we are talking to the AI, what we are looking for is the AI starting to talk to us. Perhaps it already did but nobody noticed because we didn’t yet know how to listen.
And yeah, I fully agree, GPT-4 sounds incredible! The steps the industry is taking have gotten huge over the last few years. Truly fascinating.
SeneInSPAAACE t1_izroblm wrote
>The Turing test it self is not definitive either.
Very true. Without the well being poisoned, would LaMDA have passed it outright? And if I've understood correctly, it's a bit of an idiot outside of putting words in a pleasing order.
>Currently it looks like GPT it self is going to try to cheat it’s way through the Turing test by using a language model which is naturally hard for humans to identify as a machine.
"Cheat" is relative. Can a HUMAN pass a Turing test, especially if we restrict the format in which they are allowed to respond?
If it can pass every test a human can and we still call it anything but intelligent, either we have to admit our dishonesty or question whether humans are intelligent.
> it will reach a point very soon at which it will appear intelligent.
Just like everyone else, then. Well, better than some of us.
BaalKazar t1_izsnhiv wrote
Now I fully agree with what you said.
"Cheat" is absolutely relative! How can we tell that something which appears to be intelligent is not? The parallels to how human infants acquire knowledge are striking: parents are the engineers, and the environment is the data set the infant is trained on.
We need to take a better look at what the Turing test is doing to answer your question of „could a human pass it". Turing's approach is not really to measure intelligence; intelligence definitely is a spectrum, yet his test yields a binary yes/no conclusion for a reason. He predicted that by the year 2000, an average interrogator would have no more than a 70% chance of correctly identifying the machine after a five-minute dialogue.
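Turing's prediction can be framed as a simple threshold check (a sketch; the 70% figure is the interrogator's chance of a correct identification after five minutes):

```python
# Sketch of Turing's imitation-game criterion: the machine meets
# Turing's prediction if interrogators identify it correctly no
# more than 70% of the time after a 5-minute conversation.
def passes_turing_prediction(correct_identifications: int, trials: int) -> bool:
    return correct_identifications / trials <= 0.70

# 6 of 10 judges identified the machine correctly -> within Turing's bound.
print(passes_turing_prediction(6, 10))  # prints True
```

Note how binary this is: the whole spectrum of intelligence collapses into a single yes/no at the 70% threshold, which is exactly the point being made above.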
His test is not about a scientific milestone: passing the Turing test, or declaring a machine intelligent, doesn't yield any new knowledge. Passing the Turing test marks the point in time at which humans must accept that a majority of them won't be able to tell whether they are remotely communicating with a human or a machine. (The latest point at which governments need to work on additional legislation, regulation, etc.)
So as you correctly pointed out, the test cannot really be cheated. But it can be passed without intelligence. A dog is intelligent but could not pass it. Passing it definitively requires something to seem intelligent to a human.
Star Trek has many episodes which tackle this highly ethical topic of when humans accept something to be intelligent and when we accept that something is sentient. The android Commander Data is definitively intelligent: he acquires knowledge and applies it in the real world. The question about Data is, is he sentient? The show impressively demonstrates how difficult it is to identify intelligence, and even something as seemingly obvious as sentience. There is an episode which concludes that a crystalline rock is intelligent, based on it emitting energy patterns which can be read as an encoded attempt at communication.
Humans may look intelligence straight in the face and state that it's not intelligent. That's because we do not yet understand our own intelligence well enough. My point of view is that AI will help us understand our own intelligence. But as long as we can't grasp our own, how can we grasp something else's?

I believe that pushing back will at some point result in a technology which goes above and beyond, making the claim that it is not intelligent completely obsolete. Star Trek's Data, for example: there is no denying his intelligence, and interestingly enough, this leads straight to the question of sentience. At least Star Trek is not able to draw a picture which clearly shows the boundary between intelligence and sentience; in its pictures, the two appear to correlate. Something which humans definitively consider intelligent always appears to be sentient at the same time. (Which imo shows that we need a better idea of „intelligence" before we conclude that something is; once we conclude it is intelligent, the scientific path „ends" before we have truly understood.)