Submitted by yeah_i_am_new_here t3_126tuuc in Futurology
NotACryptoBro t1_jeaya8c wrote
GPT just builds sequences of words based on probabilities. You guys are giving all of that way too much credit. Please learn how AI / machine learning works first and start discussions after that.
yeah_i_am_new_here OP t1_jeazz8i wrote
I am familiar with how these transformers work, and I'm not suggesting that anything is conscious here. Truthfully, I don't think we can create consciousness, if that's what you received from my post. The fact of the matter is that our nature of communication can be defined by matrices of probabilities, and GPTs illustrate this pretty damn well. Therefore, it stands to reason that other perceptive abilities & routines we may have as people can also be defined by matrices of probabilities, and enacted by something not human. Since you seem to be an expert in AI / ML, do you think this is true?
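To make concrete what I mean by "matrices of probabilities", here's a toy sketch: a tiny Markov chain where each row of a probability matrix says how likely each next word is. (The vocabulary and numbers are made up, and a real transformer conditions on far more context than one word, but the probabilistic principle is the same.)

```python
import random

# Toy "matrix of probabilities": each row maps a word to a probability
# distribution over the next word. Real LLMs condition on whole contexts,
# not just the previous word, but the idea is the same.
transitions = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(word, steps=3):
    """Walk the chain, sampling each next word from its probability row."""
    out = [word]
    for _ in range(steps):
        row = transitions.get(out[-1])
        if row is None:
            break
        words, probs = zip(*row.items())
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```

Run it a few times and you get different, mostly grammatical fragments out of nothing but a table of probabilities, which is the point I'm making about communication.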
ninjadude93 t1_jeb7jjw wrote
I see everyone saying something along the lines of "humans communicate/think in the same way ChatGPT/NNs come up with blocks of text", but that's just not true. ChatGPT is stochastic: you can get two different outputs from the same simple input. When I'm writing this reply to you, I'm not just picking the most likely string of words; I'm sitting here considering each word I want to say. As far as I know, LLMs by design are incapable of that kind of reasoning.
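To spell out what I mean by stochastic, here's a rough sketch (the numbers are invented for illustration; this is not ChatGPT's actual decoding procedure):

```python
import random

# Suppose the model assigns these probabilities to the next word after
# the prompt "The weather is".
next_word_probs = {"nice": 0.5, "terrible": 0.3, "unpredictable": 0.2}

words = list(next_word_probs)
probs = list(next_word_probs.values())

# Sampling: the same input can produce a different output on every run.
for _ in range(3):
    print("sampled:", random.choices(words, weights=probs)[0])

# Greedy decoding would always pick the single most likely word, so the
# model *can* be made deterministic, but chat systems typically sample.
print("greedy:", max(next_word_probs, key=next_word_probs.get))
```

Run the sampling loop twice with the identical prompt and you'll likely see different words. That's exactly my point: the variation comes from the dice roll, not from deliberation.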
yeah_i_am_new_here OP t1_jeb94fc wrote
I agree, but I'm not convinced there's any evidence that the reasoning that went into your response is integral to the validity of the response itself. So basically, my argument is that whether or not LLMs can reason isn't really that important, because the output is compelling either way. I'd like to believe that there's some magic in our capability to reason that makes the world run a little better, but I just don't know.
ninjadude93 t1_jecln8c wrote
If you don't have a system capable of logical reasoning, you don't have an AGI.
SlurpinAnalGravy t1_jebnkt4 wrote
Your whole premise is predicated on the idea that AGI is even a potential outcome of this technology.
Your logic was built on fundamental misunderstandings and on the presupposition that that outcome was even possible.
Don't get mad at people for pointing out your flaws.
yeah_i_am_new_here OP t1_jebubwh wrote
Can't tell if you're trolling or not, but nobody's mad here! Just looking for a discussion to throw around some thought-provoking ideas. I do have a question for you, though: how would you know AGI if you saw it? What would be the defining factor that makes it obvious a system has reached that level?
SlurpinAnalGravy t1_jebuobt wrote
Your assumption is that AGI is an AI that broaches the singularity, correct?
Shiningc t1_jebq09p wrote
Well think of it like this. If you have somehow acquired a scientific paper from the future that's way more advanced than our current understanding of science, you still won't be able to decipher it until you've personally understood it using reasoning.
If an AI somehow manages to stumble upon a groundbreaking scientific paper and hands it to you, you still won't be able to understand it, and, more importantly, neither will the AI.
yeah_i_am_new_here OP t1_jebwqnk wrote
I think I see what you're saying. I'm gonna try and simplify it for my caveman brain so I know we're on the same page, and then pose a question for you:

1. I read a scientific paper from the year 3023 with new info and new words (or combinations of words; for example, if I read the words "string theory" in the 1930s, I'd have no idea what to do with them) with new meanings/ideas that really haven't been in existence before this time.
2. No matter how much I read it, I really just won't understand how these new concepts and words connect to my legacy concepts and words, until someone reasons out for me what those new words and concepts mean, or I "get creative" and figure it out for myself.
3. I study that connection between the old concepts and new concepts until I have a clear understanding and roadmap of the connection between them.
So what I'm getting from your comment is that AI really can't do step 2, but I, a human, can. But I'd propose that the only way to do step 2 is by using the current roadmap I have, proposing new solutions, and then testing them to see if they align with the solution (maybe oversimplifying here).
So my question for you is: in determining the truth of the process in step 2, is it the testing or the proposing of new solutions that limits AI?
Shiningc t1_jec0je6 wrote
I mean, since the AI can't "reason", it can only propose new solutions randomly and haphazardly. And that may well work, in the same way that DNA has developed without the use of any reasoning.
But I think what humans are doing is running that process inside a virtual simulation they have created in their minds. And since the real world is apparently a rational place, that must require reasoning. This means we don't even have to bother testing everything in the real world, because we can do it in our minds. That's why a lot of things are never actually tested: we can reason that they "make sense" or "don't make sense" and know in advance that they would fail the test.
When we make a decision and think about the future, that's basically a virtual simulation that requires a complex chain of reasoning. If an AI were to become autonomous enough to make complex decisions on its own, then I would think it would require a "mind" that works similarly to ours.
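To caricature the difference in code (entirely a toy; the "mental model" here is just a cheap precheck I made up for illustration):

```python
import random

def real_world_test(candidate):
    # Stands in for actually running an expensive experiment.
    return sum(candidate) == 10

def mental_model_says_plausible(candidate):
    # A crude internal "simulation": rule out candidates that obviously
    # can't work before paying for a real test. Deliberately imperfect.
    return abs(sum(candidate) - 10) <= 2

# Random proposals, DNA-style: no reasoning about which might work.
candidates = [[random.randint(0, 8) for _ in range(3)] for _ in range(1000)]

# Blind evolution: run the expensive test on every proposal.
blind_hits = [c for c in candidates if real_world_test(c)]

# "Reasoning": simulate first, then test only what makes sense.
plausible = [c for c in candidates if mental_model_says_plausible(c)]
modeled_hits = [c for c in plausible if real_world_test(c)]

print(f"blind: {len(candidates)} real tests, {len(blind_hits)} hits")
print(f"simulated first: {len(plausible)} real tests, {len(modeled_hits)} hits")
```

Both approaches find the same answers, but the second pays for far fewer real-world tests. That filtering step, done inside an internal model of the world, is the part I don't think current AI is doing.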
yeah_i_am_new_here OP t1_jecg2aw wrote
I love the comparison to how DNA has developed. Definitely a great parallel to draw there that I haven't heard before - what a thought!! I agree with everything you're saying. Thanks for the thoughtful replies!
NotACryptoBro t1_jedh4ec wrote
>Truthfully, I don't think we can create consciousness, if that's what you received from my post.
That's actually what I thought, because you wrote that "In my understanding, AGI is the representation of generalized human cognitive abilities in software so ..."
yeah_i_am_new_here OP t1_jee8f2h wrote
I guess that's true if consciousness is a cognitive ability, but I don't really think we have any idea what consciousness is or where it comes from. It's "most likely" some kind of cognitive ability, so I hear you there, but I leave it out of my idea of AGI because it's all conjecture. For all I know, consciousness comes from your liver.
NotACryptoBro t1_jeewqdm wrote
>but I don't really think we have any idea what consciousness is or where it comes from
Or if we only think it exists, idk
wiredwalking t1_jeb6g1d wrote
I mean, the human brain is just neurons firing. The economy is just individuals doing their jobs. Great things can come from simple, collective mechanisms. Put enough hydrogen atoms together and they start to think about themselves.
Shiningc t1_jebo0qr wrote
The problem is we don't know how that simple mechanism works. It took a while for someone to come up with the simple idea of gravity or evolution via natural selection.
Cerulean_IsFancyBlue t1_jecf72b wrote
The human brain is also only one of the systems involved in human actions and decision making. I’m not talking about any kind of spiritual stuff. I mean actual systems that influence brain chemistry.
There are areas of cognition where it's quite possible that important decisions are being made outside the brain, with our executive function rationalizing the decision, like Mayor Quincy running to the front of a protest to "lead" it.
I think one great layperson introduction to this kind of systems interaction is contained in the book Gut (Giulia Enders).
I don't know if we literally need to simulate each subsystem, but it does lead me to believe that we don't yet understand the system we're trying to model. It isn't just neurons, and "just neurons" is hard enough.
That said, there's a lot to be achieved by throwing more power at the problem. Many problems in the realm of imitating humans, from playing chess to visual recognition, were not defeated by specialized approaches but eventually fell to sheer processing power. For me this means X is probably 5+ generations away, and a lot of that is simply because I can't picture what the future looks like further down the road than that.
NotACryptoBro t1_jedh74s wrote
>Great things can come from simple, collective mechanisms
That's the point: the brain isn't simple. The last breakthrough was a complete map of a worm's 'brain'.
SlurpinAnalGravy t1_jebmqb4 wrote
Every time I mention this and tell people their fearmongering is unnecessary, I get a dozen idiots saying I'm wrong. This sub isn't worth debating anyone in. Just let the same ~1k boomer doomers jack each other off; at least they have a quarantined little bubble to do it in.
NotACryptoBro t1_jedgztg wrote
"I don't think you understand how it works, it's not a simple auto complete. The people making these models don't even understand how it works anymore, how would you?"
That's OP's response :D
edit: it wasn't OP, just a random know-it-all
SlurpinAnalGravy t1_jedh3pk wrote
Man, I'd like to see his sources on that.
alecs_stan t1_jeh0h1z wrote
Yeah, tell that to translators.
SlurpinAnalGravy t1_jeh3u2w wrote
Was literally a cryptologic linguist while enlisted and did dodic terp work.
What did you want to tell me?
[deleted] t1_jecpjgy wrote
[deleted]
NotACryptoBro t1_jedgx94 wrote
> The people making these models don't even understand how it works anymore, how would you?
Haha, good one. Dunning-Kruger in full effect.
[deleted] t1_jedjiv9 wrote
[deleted]
alecs_stan t1_jeh0edk wrote
What do you think the brain does?