helpskinissues t1_j983nbw wrote
Reply to comment by turnip_burrito in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
>Yes the machine is acting like a human
No, it's not. We don't have any AI system even remotely comparable to the intelligence of an insect.
>But does it have qualia?
We can't prove humans have qualia.
>unsimulatable
https://en.wiktionary.org/wiki/unsimulable (just sharing)
helpskinissues t1_j98273k wrote
Reply to comment by turnip_burrito in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
We have no reason to think a good enough simulation can't reproduce real-world processes. We already have simulated systems, and they're used every day across multiple fields of science.
From a physical point of view, it makes no sense to call intelligence unsimulable, considering it emerges at the macromolecular level: life builds up from molecules => cells => organisms, so it's very unlikely we'd need to simulate quarks to make intelligence work. If we can simulate molecules, proteins, etc., it's a matter of organizing them the same way a human is organized and, boom, you have simulated humans.
helpskinissues t1_j9818hl wrote
Reply to comment by turnip_burrito in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
Everything is on/off. With computation we can simulate molecules, atoms, proteins, circuits and organs. I don't get your point.
Computation allows the simulation of all physical processes, even quantum physics via quantum computation.
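As a toy sketch of what "simulating a physical process" means in practice (all constants here are made-up illustration values, not from any real experiment), here's a mass on a spring stepped forward in time:

```python
# Toy sketch: simulating a physical system (a mass on a spring)
# by stepping Newton's equations forward in discrete time steps.
# All constants are arbitrary illustration values.

def simulate_spring(mass=1.0, k=4.0, x0=1.0, v0=0.0, dt=0.001, steps=5000):
    """Integrate x'' = -(k/m) * x and return the position trajectory."""
    x, v = x0, v0
    trajectory = []
    for _ in range(steps):
        a = -(k / mass) * x  # Hooke's law: F = -kx, so a = F/m
        v += a * dt          # update velocity from acceleration
        x += v * dt          # update position from velocity
        trajectory.append(x)
    return trajectory

if __name__ == "__main__":
    path = simulate_spring()
    print(f"position after {len(path)} steps: {path[-1]:.4f}")
```

The same idea, scaled up enormously, is how molecular dynamics simulations of proteins work: nothing but discrete on/off arithmetic underneath.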
helpskinissues t1_j97w6jy wrote
Reply to comment by turnip_burrito in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
There are just two possibilities:

1. Qualia is a product of a configuration of matter that produces a result using energy.
2. Qualia is a product of a configuration of something that isn't matter.

If it's 1, then it should be replicable with technology: it's a matter of on/off states and nothing more, whether in transistors or neurons (see the sketch below).
If it's 2, then science makes no sense.
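As a minimal sketch of that "on/off states, transistors or neurons" point, here is a McCulloch-Pitts-style threshold unit: a neuron-like element built from nothing but binary inputs and a comparison (the weights and threshold are made-up illustration values):

```python
# Minimal sketch: a McCulloch-Pitts-style threshold unit.
# Inputs and output are pure on/off (0 or 1), like a transistor;
# the weights and threshold are arbitrary illustration values.

def threshold_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of on/off inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights [1, 1] and threshold 2, the unit behaves like an AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_neuron([a, b], [1, 1], threshold=2))
```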
helpskinissues t1_j97tawu wrote
Reply to comment by turnip_burrito in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
Unless you're talking about quantum mysticism, no, there's nothing inherently different. It's a matter of algorithmic implementation. Qualia is software, not hardware.
helpskinissues t1_j97t384 wrote
Reply to comment by NutInBobby in What’s up with DeepMind? by BobbyWOWO
DeepMind (Demis Hassabis) is against the corporate approach. Google bought DeepMind, and Demis later regretted that transaction. They're in a tense relationship, which explains why in recent years Alphabet has invested heavily in Google AI to separate itself from DeepMind. Anyone who follows AI news closely will have noticed that Google ignores most DeepMind news. They don't even tweet about DeepMind's progress, yet they tweet everything about Google AI.
There are two LLMs (LaMDA and Sparrow), and the one that's going to be released in Google products is LaMDA, not DeepMind's Sparrow. DeepMind is a rebellious research team inside Google. I wouldn't even say they're inside Google; they're not even in the same country.
helpskinissues t1_j97sgco wrote
Reply to comment by rixtil41 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
Without enhanced abilities.
helpskinissues t1_j97bjkm wrote
Reply to comment by PandaCommando69 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
>If I get super intelligence I'm going to use it to protect (and give freedom to) as much sentient life as I can, for as long as I am able. I mean it. I hope others will do the same
To me, this is inviting others to pull the trigger, and then you'll complain that it's "bad" that they tried to do good using AI. But hey, this thread is getting nowhere. I appreciate your responses, really. But I have 10,000 things to do.
helpskinissues t1_j97aoel wrote
Reply to comment by PandaCommando69 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
That won't stop the bullet.
helpskinissues t1_j979mnv wrote
Reply to comment by PandaCommando69 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
And they'll say the same of your thinking.
helpskinissues t1_j97964v wrote
Reply to comment by PandaCommando69 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
To them, a gay person is the villain; to you, the homophobe is the villain. As simple as that. They'll both use AI to do "good": one to live openly as a gay person, the other to enforce their homophobia.
helpskinissues t1_j978ce2 wrote
Reply to comment by PandaCommando69 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
The only point I'm making is this: you're saying AI could do good because people will do good with it. What I'm saying is that people doing good can mean people doing bad to others.
The difference between a supervillain and a guardian angel is nil; different people just define them differently.
"AI could make people do good"? Sure, the kind of "good" that is killing people in Ukraine while billions of people support it?
helpskinissues t1_j977xgx wrote
Reply to comment by Lawjarp2 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
I don't know what counts as a supervillain to you, but if people with low IQ and slow internet can commit crimes, imagine someone with enhanced capabilities who can create fake audio, video, images, fake proof of anything, just by blinking twice at their AR glasses.
helpskinissues t1_j977o7o wrote
Reply to comment by PandaCommando69 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
I'm not decrying anything, I'm literally saying that "I'll do good, I hope everyone does" doesn't stop wars, violence, crime or anything like that, because those are empty words without agreed meanings in this society.
"Some people don't understand what is right", lol, okay: explain that to the criminal while he's shooting you, convinced he's doing good.
helpskinissues t1_j9775s4 wrote
Reply to comment by Lawjarp2 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
There are guys in the Philippines with broken English scamming people from Norway on Tinder every week, and you're saying a person enhanced with an LLM/LVM and enhanced sensors can't commit crimes?
helpskinissues t1_j976zlw wrote
Reply to comment by PandaCommando69 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
What I'm saying is that "suffering", "oppression", "freedom", "liberty", "peace", "war", "violence" and "self defense" are subjective terms without consensus in our societies.
I'm shocked I'm having this discussion. Don't you watch the news? There's literally a war in Ukraine and nobody agrees on what is good or bad, what counts as self-defense or what counts as peace.
helpskinissues t1_j974yr8 wrote
Reply to comment by PandaCommando69 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
The lack of political knowledge in this sub is crystal clear, as you seem not to understand that peace, good, bad, violence... aren't agreed-upon concepts.
What is good for you can be bad for me.
helpskinissues t1_j9730cv wrote
Reply to What’s up with DeepMind? by BobbyWOWO
DeepMind is Google's worst enemy. Most people seem to think it's just a coincidence that Google AI competes with DeepMind. No. Google is consciously moving money from DeepMind to Google AI, because DeepMind is against the corporate mindset.
helpskinissues t1_j972pug wrote
Reply to comment by HistoricallyFunny in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
A person with low IQ can still use tools made by people with high IQ to make smart plans in pursuit of evil goals, though.
helpskinissues t1_j972la3 wrote
Reply to comment by Lawjarp2 in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
I don't agree. Having extremely good AR glasses with customized sensors and a reasonably competent custom LLM/LVM (not necessarily AGI) is enough to become a supervillain, especially in poor countries.
helpskinissues t1_j95jzdc wrote
Calling ChatGPT a calculator is valid, as long as you accept that you're also a calculator.
helpskinissues t1_j926d54 wrote
Reply to comment by Jaxraged in Sydney has been nerfed by OpenDrive7215
Because they don't host any pornographic content on their servers, just links. That shifts responsibility to the user.
helpskinissues t1_j904i5k wrote
Reply to comment by Wroisu in What would be your response to someone with a very pessimistic view of AGI? by EchoXResonate
Nonsense. Family makes sense for survival purposes; otherwise it doesn't. And an AGI with no survival need to cooperate within a family wouldn't consider us family.
helpskinissues t1_j8zr7nx wrote
Reply to comment by Spire_Citron in What would be your response to someone with a very pessimistic view of AGI? by EchoXResonate
It's in my best interest that the AGI is reasonable.
helpskinissues t1_j98lvj0 wrote
Reply to comment by [deleted] in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
This subreddit heavily underrates how predictable humans are, and reading minds isn't a hard task for a good AI system either.
We're walking meat. We'll be heavily manipulated by AI in the coming months; AI supervillains are going to exist by 2024-2025.