Submitted by Malachiian t3_12348jj in Futurology
speedywilfork t1_jdv74tu wrote
Reply to comment by Malachiian in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
We don't even know how WE understand abstraction, so there is no way to program it. As an example, if you ask an AI "do you live in a computer", it won't really know how to answer that question. However, if you asked a human "do AI live in a computer", most would immediately understand what you were asking. It is an abstraction.
Much of how we communicate is through abstraction; we just don't realize it. An AI wouldn't have any idea what to do. Another example would be a "drive through": it is an abstraction. Humans know what it is, but an AI would be clueless. It could recognize what the window looks like in various forms and shapes. But what if you pulled up to a restaurant and the line was 30 cars deep, with a man standing in the middle of a parking lot waiting to take your order? You would know exactly where to go, but not an AI.
Surur t1_jdve4sx wrote
Your examples are pretty bad and you should feel bad.

> "do you live in a computer"
>> As an AI language model, I do not possess consciousness or emotions, and I do not "live" in the conventional sense. I exist as a complex set of algorithms and data structures running on computer servers. My purpose is to process and generate human-like text based on the input I receive.
> an AI wouldnt have any idea what to do. another example would be a "drive through"
>> As an AI language model, I don't have the ability to physically drive or navigate a vehicle. However, I can provide you with a detailed description of how one might navigate through a drive-through.
>> Approach the drive-through entrance: Look for signs indicating the entrance to the drive-through lane. These signs usually have arrows or the word "drive-through" on them. The entrance is typically located on one side of the restaurant, and you may need to drive around the building to find it.
>> Enter the drive-through lane: Once you locate the entrance, drive your vehicle into the designated lane. There may be multiple lanes in some cases, so follow any signs or arrows to ensure you're in the correct one. Maintain a safe distance from the vehicle in front of you to avoid any potential collisions.
>> Review the menu: As you approach the menu board, take some time to review the available options. Many drive-through restaurants have large, easy-to-read menu boards with pictures and prices of the items. Some may also have a separate board for promotional items
Cut for brevity.
speedywilfork t1_jdvkqtr wrote
>Your examples are pretty bad and you should feel bad.
No they aren't; they illustrated my point perfectly. The AI didn't know what you were asking when you asked "do you live in a computer", because it doesn't understand that we are not asking if it is "alive" in the biological sense; we are asking if it is "alive" in the rhetorical sense. It also doesn't understand the term "computer", because we are not asking about a literal MacBook or PC. We are speaking rhetorically and use the term "computer" to mean something akin to "digital world". It failed to recognize the intended meaning of the words; therefore it failed.
>Approach the drive-through entrance: Look for signs indicating the entrance to the drive-through lane. These signs usually have arrows or the word "drive-through" on them. The entrance is typically located on one side of the restaurant, and you may need to drive around the building to find it.
Another failure. What if I go to a concert in a field and there is an impromptu line to buy tickets? No lane markers, no window, no arrows, just a guy in a chair holding some paper. AI fails again.
Surur t1_jdvnp4j wrote
Lol. I can see that with you the AI can never win.
speedywilfork t1_jdvt9wv wrote
If an AI fails to understand your intent, would you call it a win?
Surur t1_jdw2bc1 wrote
The fault can be on either side.
speedywilfork t1_jdw6ptz wrote
So if an AI can't recognize a "drive through", it is the "drive through's" fault? Not to mention, a human would investigate: it would ask someone "where do I buy tickets?", someone would say "over there" and point to the guy at the chair, and the human would immediately understand. An AI would have zero comprehension of "over there".
Surur t1_jdw9hy7 wrote
> So if an AI can't recognize a "drive through", it is the "drive through's" fault?
If the AI cannot recognize an obvious drive-through, it would be the AI's fault, but why do you suppose that is the case?
speedywilfork t1_jdwo2mg wrote
>If the AI cannot recognize an obvious drive-through, it would be the AI's fault, but why do you suppose that is the case?
I already told you: because "drive through" is an abstraction, a concept; it isn't any one thing. Anything can be a drive through, and AI can't comprehend abstractions. Sometimes the only clue you have to perceive a drive through is a line. Not all lines are drive throughs, and not all drive throughs have a line. They are both abstractions, and there is no way to "teach" an abstraction. We don't know how we know these things; we just do.
Another example would be "farm". A farm can be anything: it can be in your backyard, or even on your window sill, inside a building, or the thing you put ants in. So to ask an AI to identify a "farm" wouldn't be possible.
Surur t1_jdwqzq5 wrote
You are proposing this as a theory, but I am telling you an AI can make the same context-based decisions as you can.
speedywilfork t1_jdx4i47 wrote
So I have 4 lines, 3 of which are drive throughs. You are telling me that an AI can tell the difference between a line of cars in a parking lot, a line of cars on a road, a line of cars parked on the side of the road, and a line of cars at a drive through? What distinguishing characteristics does each of these lines have that would tip off the AI to which 3 are the drive throughs?
Surur t1_jdx9cvb wrote
The AI would use the same context clues you would use.
You have to remember that AIs are actually super-human when it comes to pattern matching in many instances.
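Here is a toy sketch of what "context clues" means in code. The feature names and labelled examples below are invented purely for illustration; a real system would learn these associations from data rather than hand-written rules:

```python
# Toy illustration only: classifying a line of cars from surrounding context.
# Features and labels are made up; a real model would learn them from data.

EXAMPLES = {
    "drive-through": {"menu_board", "service_window", "cars_idling", "queue"},
    "traffic":       {"road", "traffic_light", "cars_idling", "queue"},
    "parking":       {"parking_lot", "cars_empty", "no_queue"},
}

def classify(observed: set[str]) -> str:
    # Pick the label whose example overlaps most with what we observe
    # (Jaccard similarity): nearest-neighbour over context clues.
    def score(features: set[str]) -> float:
        return len(observed & features) / len(observed | features)
    return max(EXAMPLES, key=lambda label: score(EXAMPLES[label]))

print(classify({"queue", "cars_idling", "service_window"}))  # drive-through
print(classify({"queue", "cars_idling", "traffic_light"}))   # traffic
```

The point is that no single feature is required: the same clue ("queue") appears in several classes, and the decision comes from the overall pattern, which is exactly the kind of matching neural nets do at scale.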
speedywilfork t1_jdxdkr6 wrote
I have already told you that anything can be a drive through. So what contextual clues does a field have that would clue an AI in to its being a drive through, if there are no lines, no lanes, no arrows, only a guy in a chair? AI don't "assume" things. I want to know specifics; if you can't give me specifics, it cannot be programmed. AI requires specifics.
I mean seriously, I can disable an autonomous car with a salt circle; it has no idea it can drive over it. Do you think a 5-year-old child could navigate out of a salt circle? That shows you how dumb they really are.
Surur t1_jdxeibf wrote
> anything can be a drive through
Then that is a somewhat meaningless question you are asking, right?
Anything that will clue you in can also clue an AI in.
For example the sign that says Drive-Thru.
Which is needed because humans are not psychic and anything can be a drive-through.
> AI requires specifics.
No, neural networks are actually pretty good at vagueness.
> I mean seriously, i can disable an autonomous car with a salt circle.
That is a 2017 story. 5 years old.
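On "pretty good at vagueness": you don't even need a neural net to show the idea. Similarity-based matching, which learned embeddings do at vastly greater scale, tolerates inputs that never appear verbatim anywhere. A hand-rolled character-trigram sketch (purely illustrative, not how any real model works internally):

```python
# Illustrative only: fuzzy matching via character trigrams. Neural embeddings
# do something analogous in a learned vector space, at far greater scale.

def trigrams(text: str) -> set[str]:
    t = " " + text.lower() + " "
    return {t[i:i + 3] for i in range(len(t) - 2)}

def similarity(a: str, b: str) -> float:
    ga, gb = trigrams(a), trigrams(b)
    return len(ga & gb) / len(ga | gb)

labels = ["drive-through entrance", "parking lot", "ticket booth"]

def best_match(query: str) -> str:
    return max(labels, key=lambda label: similarity(query, label))

print(best_match("Drive Thru"))      # drive-through entrance
print(best_match("parking spaces"))  # parking lot
```

"Drive Thru" never appears in the label list, yet it still lands on the right label, which is the sense in which these systems handle vagueness rather than requiring exact specifics.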
speedywilfork t1_jdxm0sp wrote
>Anything that will clue you in can also clue an AI in.
>For example the sign that says Drive-Thru.
Why do you keep ignoring my very specific example, then? I am in a car with no steering wheel; I want to go to a pumpkin patch with my family. I get to the pumpkin patch in my autonomous car, where there is a man sitting in a chair in the middle of a field. How does the AI know where to go?
I am giving you a real-life scenario that I experience every year. There are no lanes, no signs, no paths; it is a field. How does the AI navigate this?
Surur t1_jdxri6v wrote
What makes you think a modern AI can not solve this problem?
So I gave your question to ChatGPT, and all its guesses were spot on.
And this was its answer on how it would drive there - all perfectly sensible.
And this is the worst it will ever be - the AI agents are only going to get smarter and smarter.
speedywilfork t1_jdy1kdc wrote
>What makes you think a modern AI can not solve this problem?
Because you gave it distinct textual clues to determine an answer: pumpkin patch, table, sign. It didn't determine anything on its own; you did all of the thinking for it. This is the point I am making: it can't do anything on its own.
If I say to a human "let's go to the pumpkin patch", we all get in the car, drive to the location, see the man in the field, drive to the man taking tickets (not the man directing traffic), and park. All I have to verbalize is "let's go to the pumpkin patch".
An AI, on the other hand, I have to tell "let's go to the pumpkin patch"; then when we get there I have to say "drive to the man sitting at the table, not the man directing traffic; when you get there, stop next to the man, not in front of or behind him". Then you pay. Now you say "drive over to the man directing traffic and follow his gestures; he will show you where to park" (assuming it can follow gestures at all).
All the AI did was follow commands. It didn't "think" at all, because it can't. Do you realize how annoying this would become after a while? An average human would be better and could perform more work.
Surur t1_jdy3nm7 wrote
GPT-4 is multimodal. In the very near future you will be able to feed it a video feed, and it won't need any text descriptions.
Anyway, if you don't think the current version is smart enough, just wait for next year.
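To be concrete about feeding it frames: multimodal chat APIs accept images alongside text in a single message. A minimal sketch of building such a request follows; the model name is a placeholder, and you would need a real endpoint and API key to actually send it:

```python
import base64
import json

# Sketch only: pair one video frame with a question in a multimodal chat
# request. Model name is a placeholder; no request is actually sent here.

def frame_question_payload(jpeg_bytes: bytes, question: str) -> dict:
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",  # placeholder model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

payload = frame_question_payload(b"\xff\xd8fake-frame",
                                 "Where do I buy tickets in this scene?")
print(json.dumps(payload)[:60])
# In practice you would POST this to a chat-completions endpoint once per
# sampled frame, with authentication headers.
```

Sampling frames from a feed and asking "where do I go next?" per frame is crude, but it shows the text-description step the earlier comments argued about is already optional.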
speedywilfork t1_je067dv wrote
You don't understand: in my example it HAS a video feed. How do you think it sees the guy in the field? I am presenting a forward-looking scenario. I have been developing AI for 20 years; I am not speculating here, I am telling you what is factual. It isn't coming next year; it isn't coming at all. There is no way to program for things like "initiative", and that is what is required to take AI to the next level. Everything is a command to AI; it has no initiative. It drives to the field and stops, because to it the task is complete: it got us to the pumpkin patch. Task complete. Now what? You have to feed it the next task, that's what. It won't do it on its own.
Surur t1_je074un wrote
> everything is a command to AI, it has no initiative. it drives to the field and stops, because to it, the task is complete.
Sure, but a fully conscious and intelligent human taxi driver would do the same.
AIs are perfectly capable of making multi-step plans, and of course when they come to the end of the plan they should go dormant. We don't want AIs driving around with no one in command.
speedywilfork t1_je09cgt wrote
>Sure, but a fully conscious and intelligent human taxi driver would do the same.
But not me driving myself, and that is the point. My point is we won't have level 5 autonomy in anything outside of designated routes and possibly taxis. There are things that an AI will never be able to do, and a human can do them infinitely better. So my AI might drive me to the pumpkin patch; then I will take over.
>We don't want AIs driving around with no one in command
This is exactly why they will be stuck at the point they are right now and won't take over tons of jobs like everyone is claiming. They are HELPERS, nothing more. They can't reason, they can't think, they can't discern, and they don't have initiative. People will soon realize that initiative is the human trait they are really looking for, not the performance of simple tasks that have to be babysat on a constant basis.
longleaf4 t1_je05vwd wrote
I'd agree with you if we were just talking about GPT-3. GPT-4 is able to interpret images and could probably succeed at buying tickets in your example. Not just computer vision: interpretation and understanding.
Show it a picture of a man holding balloons and ask it what would happen if you cut the strings in the picture, and it can tell you the balloons will fly away.
Show it a disorganized line leading to a guy in a chair and tell it it needs to figure out where to buy tickets, it probably can.
speedywilfork t1_je07y85 wrote
No it can't. As I have told many people on here, I have been developing AI for 20 years. I am not speculating; I am EXPLAINING what is possible and what isn't. So far the GPT-4 demos are things that were expected, nothing impressive.
>and tell it it needs to figure out where to buy tickets, it probably can.
I want it to do it without me having to tell it. That is the point you are missing.
longleaf4 t1_je09b8h wrote
I've seen a lot of cynicism from the older crowd that has been trying to make real progress in the field. I've also seen examples from researchers that have explained why it shows advancement we never could have expected.
I wonder how much of it is healthy skepticism and how much is arrogance.
speedywilfork t1_je0b9uc wrote
>it shows advancement we never could have expected
This simply isn't true; everything AI is doing right now has been expected, or should have been expected. Anything that can be learned will be learned by AI. Anything that has a finite outcome, it will excel at; anything that doesn't have a finite outcome, it will struggle with. It isn't arrogance; it is simply the way it works. It is like saying I am arrogant for claiming humans won't be able to fly like birds. Nope, that's just reality.
longleaf4 t1_je10fgu wrote
It seems like an inability to consider conflicting ideas, and the assumption that current knowledge is the pinnacle of understanding, is a kind of arrogant way to view a developing field that no one person has complete insight into.
To me it seems kind of like saying fusion power will never be possible. Eventually you're going to be wrong, and it is more a question of when our current understanding breaks down.
The AI claim is that a breakthrough has occurred, and only time can say whether that is accurate or overly optimistic. Pretending breakthroughs can't happen isn't going to help anything, though. It's just not a smart area to make a lot of assumptions about right now.
speedywilfork t1_je2rdub wrote
AI can't process abstract thoughts, and it never will be able to, because there is no way to teach it; we don't even know how humans understand abstract thoughts. This is the basis for my conclusion: if it can't be programmed, AI will never have that ability.
RedditFuelsMyDepress t1_jdvueyx wrote
Tbf some humans struggle with these things too.