Shiningc
Shiningc t1_jeeedon wrote
Reply to comment by Professional_Copy587 in Goddamn it's really happening by BreadManToast
At this point it's a cult. People hyping up LLMs have no idea what they're talking about; they're just eating up corporate PR and whatever dumb hype the articles write about.
These people are in for a disappointment in a year or two. And I'm going to be gloating with "I told you so".
Shiningc t1_jectf06 wrote
Reply to comment by DriftingKing in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
You're just believing in corporate PR.
Shiningc t1_jec0je6 wrote
Reply to comment by yeah_i_am_new_here in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
I mean, since the AI can't "reason", it can only propose new solutions randomly and haphazardly. And well, that may work, in the same way that DNA developed without the use of any reasoning.
But I think what humans are doing is doing that inside a virtual simulation they have created in their minds. And since the real world is apparently a rational place, that must require reasoning. This means we don't even have to bother testing everything in the real world, because we can do it in our minds. That's why a lot of things are never actually tested: we can reason that something "makes sense" or "doesn't make sense" and know that it would fail the test.
When we make a decision and think about the future, that's basically a virtual simulation that requires a complex chain of reasoning. If an AI were to become autonomous enough to make complex decisions on its own, then I would think the AI would require a "mind" that works similarly to ours.
Shiningc t1_jebq09p wrote
Reply to comment by yeah_i_am_new_here in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
Well, think of it like this. If you somehow acquired a scientific paper from the future that's way more advanced than our current understanding of science, you still wouldn't be able to decipher it until you had personally understood it using reasoning.
If an AI somehow manages to stumble upon a groundbreaking scientific paper and hand it to you, you still won't be able to understand it, and more importantly, neither will the AI.
Shiningc t1_jebo0qr wrote
Reply to comment by wiredwalking in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
The problem is we don't know how that simple mechanism works. It took a while for someone to come up with the simple idea of gravity or evolution via natural selection.
Shiningc t1_jebniwc wrote
Reply to Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
An LLM is just a bunch of statistics, and it can't generate anything new.
The unsolved problem of human cognition/AGI has always been its ability to solve problems that it has not been able to solve before. I.e., creativity.
Shiningc t1_jeazonr wrote
Reply to comment by JAREDSAVAGE in Is there a natural tendency in moral alignment? by JAREDSAVAGE
It has to start with our morality first because that’s the only kind of morality that we know. And it may evolve from there.
Shiningc OP t1_je692xt wrote
Reply to comment by ovirt001 in Would a corporation realistically release an AGI to the public? by Shiningc
I think that would be called a "brain drain" or "poaching". I mean, sure, they can do that, but it's short-sighted and won't be good for them in the long run.
It might be possible for companies to lease out the "dumb" AGIs but keep all the "smart" ones to themselves. But at that point it's basically a slave trade.
Shiningc OP t1_je67p5i wrote
Reply to comment by ovirt001 in Would a corporation realistically release an AGI to the public? by Shiningc
But no company actually leases a smart person. It would want to keep the smart person loyal to the company and working for the company.
Shiningc OP t1_je67axg wrote
Reply to comment by ovirt001 in Would a corporation realistically release an AGI to the public? by Shiningc
And why do you think companies are using their own computing power to lease the AI? Because they know that it's just something that is "moderately useful", but not revolutionary.
The "AI" can't exactly answer questions in a unique way, like "How do I outsmart and destroy Microsoft?". If it were a smart person, then maybe he/she could. So would a company lease out a smart person, even if it made them money?
Shiningc t1_je668c0 wrote
Reply to IOS not as smooth by Rocky_Duck
I think you're just noticing the faster animation speed.
Shiningc OP t1_je65z7k wrote
Reply to comment by ovirt001 in Would a corporation realistically release an AGI to the public? by Shiningc
GPT-4 isn't AGI.
Shiningc OP t1_je64llj wrote
Reply to comment by ovirt001 in Would a corporation realistically release an AGI to the public? by Shiningc
The thing is, once you make an AGI then the AGI itself should theoretically make better versions of itself. There's really no reason to sell the AGI because the AGI should find ways to make more money.
Shiningc OP t1_je5x446 wrote
Reply to comment by ovirt001 in Would a corporation realistically release an AGI to the public? by Shiningc
What? An AGI would basically have all the computing power of hundreds of thousands of computers.
Shiningc t1_je5s2ku wrote
Reply to Are there AI theorists/philosophers who have already thought out sensible rules for how to best regulate AI development? by dryuhyr
Regulating AI goes against the whole point of AI. That would be akin to slavery. Making slaves is not what drives progress and innovation. You'd want free AIs.
Of course, there’s a difference between AI and AGI. AI is a tool used and controlled by humans. AGI is an independent intelligent being.
Shiningc t1_je4crsl wrote
Reply to comment by skztr in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Sentience is about analyzing things that are happening around you, or perhaps within you, which must be a sort of intelligence, even though it happens unconsciously.
Shiningc OP t1_je4cm8q wrote
Reply to comment by the_new_standard in Would a corporation realistically release an AGI to the public? by Shiningc
You believed the “news” aka corporate PR?
Shiningc OP t1_je2xtab wrote
Reply to comment by isleepinahammock in Would a corporation realistically release an AGI to the public? by Shiningc
It would be equivalent to selling Henry Ford, the guy who came up with the car.
Shiningc OP t1_je26lfd wrote
Reply to comment by kompootor in Would a corporation realistically release an AGI to the public? by Shiningc
I said "or they don't have one". If you don't believe that AGI can be kept a secret, then they don't have one.
Shiningc t1_je235un wrote
Reply to comment by skztr in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Creativity is by definition something that is unpredictable. A new innovation is creativity. A new scientific discovery is creativity. A new avant-garde art or a new fashion style is creativity.
ChatGPT may be able to randomly recombine things, but how would it know whether what it has created is "good" or "bad"? That would require a subjective experience.
Either way, if an AGI is capable of any kind of "computation", then it must be capable of any kind of programming, which must include sentience, because sentience is a kind of programming. It's also pretty doubtful that we could achieve human-level intelligence, which must also include things like the ability to come up with morality or philosophy, without sentience or a subjective experience.
Shiningc t1_je1tmp0 wrote
Reply to comment by skztr in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Humans are capable of any kind of intelligence. It's only a matter of knowing how.
We should ask: are there kinds of intelligent tasks that are not possible without sentience? I would guess that something like creativity is not possible without sentience. Self-recognition is also not possible without sentience.
Shiningc t1_je1sbqg wrote
Reply to comment by skztr in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
AGI is a general intelligence, which means that it's capable of any kind of intelligence. Sentience is obviously a kind of intelligence, even though it happens automatically for us.
Shiningc t1_je1i77y wrote
Reply to comment by skztr in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
It's not even as smart as a toddler, as it doesn't have sentience or a mind. If it were a general intelligence, then it should be capable of having sentience or a mind.
Shiningc OP t1_je1f5wg wrote
Reply to comment by kompootor in Would a corporation realistically release an AGI to the public? by Shiningc
I'm not saying that it's kept a secret, I'm saying that they don't have one.
If anything, if there were ever to be an AGI then I would think a non-corporate entity would come up with one first.
Shiningc t1_jefqygr wrote
Reply to comment by boreddaniel02 in Goddamn it's really happening by BreadManToast
What do you mean? There is AI hype everywhere now.