Submitted by Primary-Food6413 t3_znyz00 in Futurology
Shiningc t1_j0k5mxk wrote
>For instance, Altman said that if OpenAI could master artificial general intelligence, which is machine intelligence that can solve problems as well as a person, the company might “capture the light cone of all future value in the universe.”
We're not even close to having Artificial General Intelligence, because the entire approach is wrong. People tend to think that if we feed an AI enough "data", it will somehow, magically, become intelligent enough to achieve sentience. But that's not how it works. Or worse, they think intelligence is just data plus a fixed set of instructions.
This whole dystopian image of a super-intelligent AI lording over us and forcing us to do nothing but manual labor is really the same idea as a supposedly super-intelligent or super-talented human being lording over us. Either people will revolt or people will submit, depending on what they think of it.
Another idea is that an AI is going to be "cold", amoral, devoid of "feelings", mechanically pursuing whatever "task" is at hand. That's entirely a consequence of the assumption that an "AI" is nothing but data plus a fixed set of instructions. But how can a sentient being with supposed free will be devoid of a moral system? By that I mean an independent moral system that it develops on its own over time. A sentient AI would have to choose for itself the best moral course of action.
If we ignore that, then we're saying the AI is dumb and blind, only following a fixed set of instructions. But that's not very "intelligent" in a general sense; that AI is merely following the instructions of some other master.
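To make the "data + a fixed set of instructions" picture concrete, here's a minimal sketch (purely illustrative; every name in it is made up): an agent like this maps inputs to outputs through a hard-coded rule table, and nothing in it can question its own rules or the goals behind them, which is exactly the point above.

```python
# Purely illustrative: the "data + a fixed set of instructions" picture of AI.
# All names are hypothetical; this is a sketch, not anyone's real system.

RULES = {
    "obstacle": "stop",
    "clear": "advance",
}

def fixed_instruction_agent(observation: str) -> str:
    """Map an observation to an action via a hard-coded rule table.
    However much data it has seen, it can never revise its rules
    or ask whether the goals behind them are the right ones."""
    return RULES.get(observation, "do nothing")

print(fixed_instruction_agent("obstacle"))  # -> stop
print(fixed_instruction_agent("fog"))       # -> do nothing (outside its rules)
```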
moofart-moof t1_j0kecjz wrote
If the future the AI wants sucks, people will burn the infrastructure to the ground. These articles are frankly written by morons who don’t see how fragile the capitalist ecosystem is atm.
dashingstag t1_j0kuzad wrote
I have a theory about this. The only way to know true AGI is here is if it is an actual participant in the economy: not just consuming resources to sustain itself, but able to desire goods and services. If not, it's just sophisticated code. This probably has complications that would require AI rights. We could reach super-AI without ever reaching AGI.
The only fear I have is the super rich owning super-AI and devaluing human labor, a snake-eating-its-own-tail scenario. AI can have near-infinite production, but if no one can afford the output, the value of that production is actually zero (a toy model of this is sketched below).
I'm probably leaning more towards cybernetics, as the economy can still function. In any other scenario, the economy or society would collapse and self-destruct before any dystopian AI develops.
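A back-of-the-envelope sketch of that production-versus-demand point (all numbers and function names are invented for illustration; this is a toy model, not an economic claim):

```python
# Toy model of "near-infinite production, zero affordable demand".
# All numbers are invented; the function name is hypothetical.

def realized_value(capacity: float, wages: float, price: float) -> float:
    """Revenue is capped by what buyers can afford, not by what the AI can produce."""
    demand = wages / price             # crude: units households can afford
    units_sold = min(capacity, demand)
    return units_sold * price

# Vast AI production capacity, healthy wages: value tracks demand.
print(realized_value(capacity=1e12, wages=50_000, price=10))  # 50000.0

# Same capacity, wages driven to zero by automation: value collapses.
print(realized_value(capacity=1e12, wages=0, price=10))       # 0.0
```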
Shiningc t1_j0kxott wrote
I tend to think that an AI the rich or the corporations can easily contain or control won't be a remarkable one, just as a remarkable human being isn't easy for a corporation to contain. I mean, it is possible, depending on how such a being is manipulated by its masters.
dashingstag t1_j0kzdz5 wrote
Oh, you don't need a sentient AI. A competent regular one will do, just like how Amazon can already undercut competitors by pricing just below what its marketplace sellers charge, learned from their transaction behaviour.
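A hypothetical sketch of that undercutting logic (not Amazon's actual system; everything here is invented for illustration):

```python
# Hypothetical sketch of marketplace undercutting; not Amazon's actual system.

def undercut_price(observed_prices: list[float],
                   margin: float = 0.01,
                   cost_floor: float = 0.0) -> float:
    """Price just below the cheapest competing listing seen in
    transaction data, without dropping below our own cost floor."""
    cheapest = min(observed_prices)
    return max(cheapest * (1 - margin), cost_floor)

# Sellers list at 19.99, 21.50, 24.00 -> we list at ~19.79.
print(undercut_price([19.99, 21.50, 24.00]))
```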
somethingsomethingbe t1_j0kkt9o wrote
What definition of AGI includes sentience? I thought the definition was, "Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that a human being can," (thank you Wikipedia).
TricksterOfFate t1_j0vh8dv wrote
Do you have data on how consciousness works in the human brain?
Shiningc t1_j0x0b7z wrote
No, the whole point is that we have no idea how it works yet.
TricksterOfFate t1_j106a8u wrote
That's the mystery brain chips may eventually unlock.