speedywilfork t1_jdvbrrx wrote
Reply to comment by acutelychronicpanic in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
I would venture to guess you didn't really present it with a true abstraction.
acutelychronicpanic t1_jdvg9r2 wrote
If you don't want to go look for yourself, give me an example of what you mean and I'll pass the results back to you.
speedywilfork t1_jdvnee1 wrote
Here is the problem: "intelligence" has nothing to do with regurgitating facts; it has to do with communication of intent. If I ask you "what do you think about coffee," you know I am asking about preference, not the origin of coffee or random facts about coffee. So if you were to ask a human "what do you think about coffee" and they spat out some random facts, then you said "no, that's not what I mean, I want to know if you like it," and they spat out more random facts, would you think to yourself, "damn, this guy is really smart"? I doubt it. You would more likely think, "what's wrong with this guy?" So if something can't identify intent and return a cogent answer, it isn't "intelligent".
acutelychronicpanic t1_jdvog5q wrote
Current models like GPT4 specifically and purposefully avoid the appearance of having an opinion.
If you want to see it talk about the rich aroma and how coffee makes people feel, ask it to write a fictional conversation between two individuals.
It understands opinions, it just doesn't have one on coffee.
It'd be like me asking you how you "feel" about the meaning behind the equation 5x + 3y = 17.
GPT4's strengths have little to do with spitting facts, and more to do with its ability to do reasoning and demonstrate understanding.
leaky_wand t1_jdvt5o9 wrote
5x + 3y = 17 is satisfying because there is one and only one solution in positive integers (x = 1, y = 4).
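A quick brute-force check, sketched in Python, bears that out:

```python
# Enumerate positive-integer solutions of 5x + 3y = 17 to confirm uniqueness.
solutions = [
    (x, (17 - 5 * x) // 3)
    for x in range(1, 4)                        # 5x must stay below 17, so x <= 3
    if (17 - 5 * x) % 3 == 0 and (17 - 5 * x) >= 3   # y must be a positive integer
]
print(solutions)  # [(1, 4)] -- the only solution in positive integers
```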
speedywilfork t1_jdvue1k wrote
>GPT4's strengths have little to do with spitting facts, and more to do with its ability to do reasoning and demonstrate understanding.
I am not talking about an opinion; I am referring to intent. If it can't determine intent, it can neither reason nor understand. Humans can easily understand intent; AI can't.

As an example: if I go to a small town and I am hungry, I find a local and ask, "I am not from around here and am looking for a good place to eat." They understand that the intent of my question isn't the Taco Bell on the corner; they understand I am asking about a local eatery that others call "good." An AI would just spit out a list of restaurants, but that wasn't the intent of the question; therefore it didn't understand.
acutelychronicpanic t1_jdxbhx8 wrote
It can infer intent pretty effectively. I'm not sure how to convince you of that, but I've been convinced by using it. It can take my garbled instructions and infer what is important to me using the context in which I ask it.
speedywilfork t1_jdxkn3c wrote
It doesn't "infer"; it takes textual clues and makes a determination based on a finite vocabulary. It doesn't "know" anything; it just matches textual patterns to a predetermined definition. It is really rather simplistic. The reason AI seems so smart is that humans do all of the abstract thinking for it: we boil a problem down to a concrete thought and then we ask it a question. However, if you were to tell an AI "go invent the next big thing," it is clueless, impotent, and worthless. AI will help humans achieve great things, but the AI can't achieve great things by itself. That is the important point: it won't do anything on its own, and yet that is the way people keep framing it.

I can disable an autonomous car by making a salt circle around it or by using tiny soccer cones. This proves that the AI doesn't "know" what it is looking at. How do I "explain" to an AI that some things can be driven over and others can't? There is no distinction between salt, a painted line, and a wall to an AI; all it sees is "obstacle".
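To make the "finite vocabulary" point above concrete, here is a toy sketch of a single next-token step: the model scores every entry in a fixed vocabulary and can never emit anything outside it. The vocabulary, the scores, and the greedy pick are all invented for illustration; real models use tens of thousands of tokens and learned weights.

```python
import numpy as np

# Toy next-token step over a fixed vocabulary (every number here is made up).
vocab = ["coffee", "tea", "is", "good", "bad", "."]
logits = np.array([2.1, 0.3, 1.7, 2.5, 0.4, 1.0])  # scores a model might assign

probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # softmax: a probability for each vocabulary entry

next_token = vocab[int(np.argmax(probs))]  # greedy decoding picks the highest score
print(next_token)                     # -> "good"
```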
acutelychronicpanic t1_jdxpq6j wrote
You paint all AI with the same brush. Many AI systems are as dumb as you say because they are specialized to only do a narrow range of tasks. GPT-4 is not that kind of AI.
AI pattern matching can do things that only AI and humans can do. It's not as simple as you imply. It doesn't just search some database and find a response to a similar question. There is no database of raw data inside it.
Please go see what people are already doing with these systems. Better yet, go to the sections on problem solving in the following paper and look at these examples: https://arxiv.org/abs/2303.12712
Your assumptions and ideas of AI are years out of date.
speedywilfork t1_jdxyi6c wrote
Why is it that when I ask specific questions, all I get is a straw man? That in itself proves that I am correct. I have been involved with AI development for 20 years; I understand every model and type there is to know. My ideas aren't out of date, they are true. I am looking forward here, imagining an AI like ChatGPT paired with other systems. If I were to take it into something like a coffee shop and ask it "is this a coffee shop?", it would very likely fail to get the answer correct. To an AI, a coffee shop is a series of traits. It could not distinguish a coffee shop with a camera crew in it from a fake coffee shop on a movie set. It couldn't distinguish an unbranded Starbucks from an unbranded McDonald's. But you and I could, because a coffee shop is a concept, not a thing; it involves mood, feeling, and setting, and pattern recognition won't help with that.
>AI pattern matching can do things that only AI and humans can do. It's not as simple as you imply. It doesn't just search some database and find a response to a similar question.

Can a circle of small soccer cones disable an autonomous AI?
acutelychronicpanic t1_jdy378r wrote
20 years? You must be pretty well informed on recent developments, then. I didn't go into detail because I assumed you'd seen the demonstrations of GPT-4.

If I can assume you've seen the GPT-4 demos and read the paper, I'd love to hear your thoughts on how it can perform well on reasoning tasks it's never seen before and reason about what would happen to a bundle of balloons in an image if the string were cut.
What about its test results? Many of those tests are not about memorization, but rather applying learned reasoning to novel situations. You can't memorize raw facts and pass an AP bio exam. You have to be able to use and apply methods to novel situations.
Idk. Maybe we are talking past each other here.
speedywilfork t1_je059or wrote
>I'd love to hear your thoughts on how it can perform well on reasoning tasks it's never seen before and reason about what would happen to a bundle of balloons in an image if the string were cut.
I am sure you already know all of this, but it isn't really reasoning; it knows, and it knows because it learned. Anything that can be learned will eventually be learned by AI, anything and everything. So all of these tasks that appear impressive are, to me, just expected. So far AI hasn't done anything that is unexpected. At anything that has a finite outcome, like chess, Go, poker, StarCraft, you name it, AI will beat a human, and it won't even be close. But it doesn't "reason"; it knows all of the possible moves that can ever be played. You show it a picture and ask it what is funny about it; it knows that "atypical" things are considered "funny" by humans. So if you show it a picture of the Eiffel Tower wearing a hat, it can easily determine what is "funny," even though it doesn't know what "funny" even means.
On the other hand, in tasks that are open-ended and have no finite set of outcomes, like this one...
https://news.yahoo.com/soldiers-outsmart-military-robot-acting-214509025.html
AI looks really, really dumb, because in this scenario real reasoning is required. A five-year-old child would be able to pick out these soldiers. These are the types of experiments I am interested in, because they will help us to know where AI can reasonably be applied and where it can't.

Why can't an AI pick out these soldiers when a five-year-old can? Because an AI just sees objects, while a five-year-old understands intent. A five-year-old understands that a person is intending to fool them, so they discern that it is a person inside a cardboard box. There is no way to teach an AI to recognize intent, because intent is an abstraction, and AI can't understand abstractions.
acutelychronicpanic t1_je0nzdb wrote
The current generation of AI does not use search to solve problems. That's not how neural networks work.
Go was considered impossible for AI to win for the very reasons you suggest make it expected: there are too many possibilities for an AI to consider them all.
You misunderstand these systems fundamentally.
speedywilfork t1_je2qkbo wrote
>The current generation of AI does not use search to solve problems. That's not how neural networks work.
I never said they used search; it depends on the AI, but many still do use search alongside other components that augment it. They don't rely entirely on search, but search is still a part of the algorithm.
>Go was considered impossible for AI to win for the very reasons you suggest make it expected: there are too many possibilities for an AI to consider them all.
This is completely false. The original Go algorithm was trained on recorded games of Go; it had millions of moves built into its dataset, and then it played itself millions of times. But the neural networks simply augmented Monte Carlo Tree Search; it likely could not have won without search.

I don't literally mean it has a database of every potential move ever; I mean it builds this as it plays. However, fundamentally it literally knows every move, because at any given point it knows all of the possible moves.
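For reference on the "networks augmented the search" point, below is a heavily reduced, runnable sketch of the AlphaGo-style idea. The "networks" are stubs returning made-up numbers and the game is a placeholder; the only point is to show where a policy network's priors and a value network's evaluations plug into the Monte Carlo Tree Search selection rule.

```python
import math
import random

def legal_moves(state):
    """Placeholder game: three moves are always available."""
    return ["a", "b", "c"]

def policy_network(state):
    """Stub policy net: a prior probability for each legal move (uniform here)."""
    moves = legal_moves(state)
    return {m: 1.0 / len(moves) for m in moves}

def value_network(state):
    """Stub value net: an estimated outcome in [-1, 1] instead of a random rollout."""
    return random.uniform(-1, 1)

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select(children, c_puct=1.5):
    """PUCT rule: balance the search statistic Q against the network prior P."""
    total = sum(ch.visits for ch in children.values()) + 1
    return max(
        children.items(),
        key=lambda kv: kv[1].q()
        + c_puct * kv[1].prior * math.sqrt(total) / (1 + kv[1].visits),
    )

def run_mcts(root_state, simulations=200):
    children = {m: Node(p) for m, p in policy_network(root_state).items()}
    for _ in range(simulations):
        move, child = select(children)
        child.visits += 1
        child.value_sum += value_network(root_state)  # network evaluation guides the search
    # Play the most-visited move, as AlphaGo-style systems do.
    return max(children.items(), key=lambda kv: kv[1].visits)[0]

print(run_mcts("start"))
```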
RedditFuelsMyDepress t1_jdvtk6c wrote
Not the best example. Why would an AI have an opinion on coffee if it can't even drink it?
speedywilfork t1_jdvwbt4 wrote
I am not talking about its opinion; I am talking about intent. I want it to know what the intention of my question is, regardless of the question. I just gave this example to someone else...

As an example: if I go to a small town and I am hungry, I find a local and ask, "I am not from around here and am looking for a good place to eat." They understand that the intent of my question isn't the Taco Bell on the corner; they understand I am asking about a local eatery that others call "good." An AI would just spit out a list of restaurants, but that wasn't the intent of the question; therefore it didn't understand.

If I point at the dog bed, even my dog knows what I intend for it to do. It UNDERSTANDS; an AI wouldn't.
RedditFuelsMyDepress t1_jdvzdtf wrote
Well, I tried asking ChatGPT the coffee question, and this is what it said:
"As an artificial intelligence language model, I do not have personal preferences or opinions. However, I can provide some general information about coffee.
Coffee is a popular beverage enjoyed by millions of people around the world. It is made by brewing roasted coffee beans, which contain caffeine, a natural stimulant that can help increase alertness and reduce fatigue. Coffee also contains antioxidants and other beneficial compounds that may offer health benefits when consumed in moderation.
However, it is important to note that excessive consumption of coffee can have negative effects on health, such as increasing the risk of insomnia, anxiety, and digestive issues. Additionally, some people may be more sensitive to the effects of caffeine than others, and may experience negative side effects even with moderate consumption.
Overall, whether or not to consume coffee is a personal choice that depends on an individual's preferences, health status, and caffeine tolerance. It is always a good idea to consult with a healthcare professional to determine if coffee consumption is right for you."
In that first paragraph it does acknowledge the intent of your question, but it just says that it isn't able to answer it. The facts about coffee being spit out are, I believe, just part of the directives given to ChatGPT.
speedywilfork t1_jdw5jyl wrote
But that is the problem: it doesn't know intent, because intent is contextual. If I were standing in a coffee shop, the question would mean one thing; on a coffee plantation, another; in a business conversation, something totally different. So if you and I were discussing ways to improve our business and I asked "what do you think about coffee," I would not be asking about taste. AI can't distinguish these things.
RedditFuelsMyDepress t1_jdwpkfj wrote
>AI can't distinguish these things.
I'm not sure how true that is, though. Even with GPT-3, it would actually take into account the context of the whole conversation instead of just the most recent sentence when I asked something.
Hard to say how well it would handle itself in a real-world environment though since it's just a chat-bot atm.
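That context handling comes from the client sending the entire conversation with every request. A minimal sketch using the openai Python package's chat completion call (the model name and the invented exchange are just placeholders), where the business framing precedes the coffee question:

```python
import openai  # pip install openai (0.27-era interface)

openai.api_key = "YOUR_API_KEY"  # placeholder

# The full conversation is sent with every request, so the model sees the
# business context before the coffee question, not just the last sentence.
messages = [
    {"role": "user", "content": "Let's brainstorm ways to improve our cafe's revenue."},
    {"role": "assistant", "content": "Sure - we could look at pricing, the menu, or a loyalty program."},
    {"role": "user", "content": "What do you think about coffee?"},  # intent: coffee as a business item
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])
```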