Submitted by Malachiian t3_12348jj in Futurology
speedywilfork t1_jdvue1k wrote
Reply to comment by acutelychronicpanic in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>GPT4's strengths have little to do with spitting facts, and more to do with its ability to do reasoning and demonstrate understanding.
I am not talking about an opinion, I am referring to intent. If it can't determine "intent", it can neither reason nor understand. Humans can easily understand intent; AI can't.
As an example: if I go to a small town and I am hungry, I find a local and say, "I am not from around here and I'm looking for a good place to eat." They understand that the intent of my question isn't the Taco Bell on the corner; they understand I am asking about a local eatery that others call "good". An AI would just spit out a list of restaurants, but that wasn't the intent of the question. Therefore it didn't understand.
acutelychronicpanic t1_jdxbhx8 wrote
It can infer intent pretty effectively. I'm not sure how to convince you of that, but I've been convinced by using it. It can take my garbled instructions and infer what is important to me using the context in which I ask it.
speedywilfork t1_jdxkn3c wrote
It doesn't "infer"; it takes textual clues and makes a determination based on a finite vocabulary. It doesn't "know" anything, it just matches textual patterns to a predetermined definition. It is really rather simplistic. The reason AI seems so smart is that humans do all of the abstract thinking for it: we boil a problem down to a concrete thought, then we ask it a question. However, if you were to tell an AI "go invent the next big thing", it is clueless, impotent, and worthless. AI will help humans achieve great things, but AI can't achieve great things by itself. That is the important point: it won't do anything on its own, yet that is the way people keep framing it.
I can disable an autonomous car by making a salt circle around it or by using tiny soccer cones. This proves that the AI doesn't "know" what it is. How do I "explain" to an AI that some things can be driven over and others can't? There is no distinction between salt, a painted line, and a wall to an AI; all it sees is "obstacle".
acutelychronicpanic t1_jdxpq6j wrote
You paint all AI with the same brush. Many AI systems are as dumb as you say because they are specialized to only do a narrow range of tasks. GPT-4 is not that kind of AI.
AI pattern matching can do things that only AI and humans can do. It's not as simple as you imply. It doesn't just search some database and find a response to a similar question. There is no database of raw data inside it.
Please go see what people are already doing with these systems. Better yet, go to the sections on problem solving in the following paper and look at these examples: https://arxiv.org/abs/2303.12712
Your assumptions and ideas of AI are years out of date.
speedywilfork t1_jdxyi6c wrote
Why is it that when I ask specific questions, all I get is a straw man? This in itself proves that I am correct. I have been involved with AI development for 20 years; I understand every single model and type there is to know. My ideas aren't out of date, they are true. I am looking to the future here, imagining an AI like ChatGPT paired with other systems. If I were to take it into something like a coffee shop and ask it "is this a coffee shop?", it would very likely fail to get the answer correct. To an AI, a coffee shop is a series of traits. It could not distinguish a coffee shop with a camera crew in it from a fake coffee shop on a movie set. It couldn't distinguish an unbranded Starbucks from an unbranded McDonald's. But you and I could, because a coffee shop is a concept, not a thing; it involves mood, feeling, and setting, and pattern recognition won't help it.
>AI pattern matching can do things that only AI and humans can do. It's not as simple as you imply. It doesn't just search some database and find a response to a similar question.
Can a circle of small soccer cones disable an autonomous AI?
acutelychronicpanic t1_jdy378r wrote
20 years? You must be pretty well informed on recent developments then. I didn't go into detail because I assumed you'd seen the demonstrations of GPT-4.
If I can assume you've seen the GPT-4 demos and read the paper, I'd love to hear your thoughts on how it can perform well on reasoning tasks it's never seen before, and reason about what would happen to a bundle of balloons in an image if the string were cut.
What about its test results? Many of those tests are not about memorization, but rather applying learned reasoning to novel situations. You can't memorize raw facts and pass an AP bio exam. You have to be able to use and apply methods to novel situations.
Idk. Maybe we are talking past each other here.
speedywilfork t1_je059or wrote
>I'd love to hear your thoughts on how it can perform well on reasoning tasks it's never seen before, and reason about what would happen to a bundle of balloons in an image if the string were cut.
I am sure you already know all of this, but it isn't really reasoning: it knows, and it knows because it learned. Anything that can be learned will eventually be learned by AI, anything and everything. So all of these tasks that appear impressive are, to me, just expected. So far AI hasn't done anything unexpected. Anything that has a finite outcome, like chess, Go, poker, StarCraft, you name it, AI will beat a human at, and it won't even be close. But it doesn't "reason"; it knows all of the possible moves that can ever be played. You show it a picture and ask it what is funny about it. It knows that "atypical" things are considered "funny" by humans. So if you show it a picture of the Eiffel Tower wearing a hat, it can easily determine what is "funny", even though it doesn't know what "funny" even means.
On the other hand, in tasks that are open-ended and have no finite set of outcomes, like this...
https://news.yahoo.com/soldiers-outsmart-military-robot-acting-214509025.html
...AI looks really, really dumb, because in this scenario real reasoning is required. A five-year-old child would be able to pick out these soldiers. These are the types of experiments I am interested in, because they will help us to know where AI can reasonably be applied and where it can't.
Why can't an AI pick out these soldiers when a five-year-old can? Because an AI just sees objects, while a five-year-old understands intent. A five-year-old understands that a person is trying to fool them, so they discern that it is a person inside a cardboard box. There is no way to teach an AI to recognize intent, because intent is an abstraction, and AI can't understand abstractions.
acutelychronicpanic t1_je0nzdb wrote
The current generation of AI does not use search to solve problems. That's not how neural networks work.
Go was considered impossible for AI to win for exactly the reason you treat its win as expected: there are too many possibilities for an AI to consider them all.
You misunderstand these systems fundamentally.
speedywilfork t1_je2qkbo wrote
>The current generation of AI does not use search to solve problems. That's not how neural networks work.
I never said they used search. It depends on the AI, but many still do use search alongside other components that augment it. They don't rely entirely on search, but search is still part of the algorithm.
>Go was considered impossible for AI to win for exactly the reason you treat its win as expected: there are too many possibilities for an AI to consider them all.
This is completely false. The original Go algorithm was trained on recorded games of Go; it had millions of moves built into its dataset. Then it played itself millions of times. But the neural networks simply augmented Monte Carlo Tree Search; it likely could not have won without search.
I don't literally mean it has a database of every potential move ever; I mean it builds this as it plays. Fundamentally, though, it knows every move, because at any given point it knows all of the possible moves.
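For what it's worth, the pairing I'm describing can be sketched in a few lines: a neural policy's prior over moves guiding Monte Carlo Tree Search via PUCT-style selection. This is a toy illustration, not AlphaGo's actual code: tic-tac-toe stands in for Go, a uniform prior stands in for the policy network, and a random playout stands in for the value network.

```python
import math
import random

# Tic-tac-toe stand-in: a board is a 9-tuple of 'X', 'O', or None.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell is None]

def other(p):
    return 'O' if p == 'X' else 'X'

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a): uniform here, a policy net in AlphaGo
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}        # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def puct_select(node, c_puct=1.5):
    # PUCT-style selection: exploit high value, explore where the prior is high.
    total = sum(child.visits for child in node.children.values())
    return max(node.children.items(),
               key=lambda mc: mc[1].value()
               + c_puct * mc[1].prior * math.sqrt(total + 1) / (1 + mc[1].visits))

def rollout(board, mover, player):
    # Random playout standing in for a learned value network.
    while True:
        w = winner(board)
        if w:
            return 1.0 if w == player else -1.0
        moves = legal_moves(board)
        if not moves:
            return 0.0
        m = random.choice(moves)
        board = board[:m] + (mover,) + board[m + 1:]
        mover = other(mover)

def mcts(board, player, simulations=400):
    root = Node(prior=1.0)
    for _ in range(simulations):
        node, b, mover, path = root, board, player, []
        # Selection: walk down the tree while expanded children exist.
        while node.children:
            move, node = puct_select(node)
            b = b[:move] + (mover,) + b[move + 1:]
            path.append((node, mover))
            mover = other(mover)
        # Expansion and evaluation at the leaf.
        w, moves = winner(b), legal_moves(b)
        if w:
            result = 1.0 if w == player else -1.0
        elif not moves:
            result = 0.0
        else:
            node.children = {m: Node(prior=1.0 / len(moves)) for m in moves}
            result = rollout(b, mover, player)
        # Backup: credit each node from the perspective of whoever moved into it.
        for n, who in path:
            n.visits += 1
            n.value_sum += result if who == player else -result
    # The chosen move is the most-visited child of the root.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]
```

Note that nothing here enumerates the full game tree in advance: the tree is built incrementally as simulations run, which is the sense in which "it builds this as it plays".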