speedywilfork t1_je5p78r wrote
Reply to comment by KurtisLloyd in More Water Found on Moon, Locked in Tiny Glass Beads by Gari_305
Yep, exactly, and that is the only reason our planet is habitable.
speedywilfork t1_je5cj4g wrote
Considering that the Moon was formed by a collision with the early Earth billions of years ago, this is no surprise.
speedywilfork t1_je2rdub wrote
Reply to comment by longleaf4 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
AI can't process abstract thought, and it never will be able to, because there is no way to teach it; we don't even know how humans understand abstract thought. That is the basis for my conclusion: if it can't be programmed, AI will never have that ability.
speedywilfork t1_je2qkbo wrote
Reply to comment by acutelychronicpanic in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>The current generation of AI does not use search to solve problems. That's not how neural networks work.
I never said they relied on search alone; it depends on the AI, but many still do use search, augmented by other techniques. They don't rely entirely on search, but search is still part of the algorithm.
>Go was considered impossible for AI to win for the reasons you suggested it is expected. There are too many possibilities for an AI to consider them all.
This is completely false. The original Go engine, AlphaGo, was trained on recorded human games; it had millions of moves in its dataset, and then it played itself millions of times. But the neural networks simply augmented a Monte Carlo Tree Search; it likely could not have won without search.

I don't literally mean it has a database of every potential move ever played; I mean it builds one as it plays. Fundamentally, though, it does know every move, because at any given point it can enumerate all of the possible moves. A minimal sketch of that search is below.
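To make the point concrete, here is a minimal sketch of the kind of network-guided tree search AlphaGo used (PUCT-style selection). The `policy_value` network call and the `state` interface (`copy`, `play`) are hypothetical stand-ins, and terminal states and the alternating-player sign flip are omitted for brevity:

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior       # P(s, a) from the policy network
        self.visits = 0          # N(s, a)
        self.value_sum = 0.0     # W(s, a)
        self.children = {}       # action -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT: balance the network's prior against observed visit statistics.
    total = math.sqrt(sum(c.visits for c in node.children.values()) + 1)
    return max(node.children.items(),
               key=lambda ac: ac[1].value()
               + c_puct * ac[1].prior * total / (1 + ac[1].visits))

def mcts(root_state, policy_value, num_simulations=200):
    root = Node(prior=1.0)
    for _ in range(num_simulations):
        node, state, path = root, root_state.copy(), []
        # Selection: walk down the tree while expanded children exist.
        while node.children:
            action, node = select_child(node)
            state.play(action)
            path.append(node)
        # Expansion + evaluation: the network replaces random rollouts.
        priors, value = policy_value(state)  # hypothetical network call
        for action, p in priors.items():
            node.children[action] = Node(prior=p)
        # Backup: propagate the value estimate along the visited path.
        for n in path:
            n.visits += 1
            n.value_sum += value
    # Play the most-visited move, as AlphaGo does.
    return max(root.children.items(), key=lambda ac: ac[1].visits)[0]
```

The network narrows the search, but the chosen move still comes out of the search tree; that is the sense in which it could not have won without search.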
speedywilfork t1_je0b9uc wrote
Reply to comment by longleaf4 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>it shows advancement we never could have expected
This simply isn't true. Everything AI is doing right now was expected, or should have been expected. Anything that can be learned will be learned by AI. Anything with a finite set of outcomes it will excel at; anything without a finite set of outcomes it will struggle with. It isn't arrogance; it is simply the way these systems work. It is like saying I am arrogant for claiming humans won't be able to fly like birds. No, that's just reality.
speedywilfork t1_je09cgt wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>Sure, but a fully conscious and intelligent human taxi driver would do the same.
But not me driving myself, and that is the point. My point is that we won't have level 5 autonomy in anything outside of designated routes and possibly taxis. There are things that an AI will never be able to do, and that a human can do infinitely better. So my AI might drive me to the pumpkin patch; then I will take over.
>We don't want AIs driving around with no one in command
This is exactly why they will stay stuck where they are right now and won't take over tons of jobs like everyone is claiming. They are HELPERS, nothing more. They can't reason, they can't think, they can't discern, and they don't have initiative. People will soon realize that initiative is the human trait they are really looking for, not the performance of simple tasks that have to be babysat constantly.
speedywilfork t1_je07y85 wrote
Reply to comment by longleaf4 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
No, it can't. As I have told many people on here, I have been developing AI for 20 years. I am not speculating; I am EXPLAINING what is possible and what isn't. So far the GPT-4 demos are exactly what was expected, nothing impressive.
>and tell it it needs to figure out where to buy tickets, it probably can.
I want it to do that without me having to tell it. That is the point you are missing.
speedywilfork t1_je067dv wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
You don't understand: in my example it HAS a video feed. How do you think it sees the guy in the field? I am presenting a forward-looking scenario. I have been developing AI for 20 years; I am not speculating here, I am telling you what is factual. It isn't coming next year; it isn't coming at all. There is no way to program for something like "initiative", and that is what is required to take AI to the next level. Everything is a command to an AI; it has no initiative. It drives to the field and stops, because to it the task is complete. It got us to the pumpkin patch. Task complete. Now what? You have to feed it the next task, that's what. It won't do it on its own, as the toy loop below illustrates.
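Here is a minimal sketch of what I mean, with invented names (an illustration of the control flow, not any real system's API). The agent only ever drains the commands it was given; nothing in it can generate a new goal:

```python
from collections import deque

class CommandDrivenAgent:
    """A toy agent that executes exactly what it is told, nothing more."""

    def __init__(self):
        self.tasks = deque()

    def give_command(self, task):
        self.tasks.append(task)

    def run(self):
        while self.tasks:
            task = self.tasks.popleft()
            print(f"executing: {task}")
        # The queue is empty, so the agent halts. "Now what?" has to come
        # from the human; no mechanism here proposes the next task.
        print("idle: awaiting next command")

agent = CommandDrivenAgent()
agent.give_command("drive to the pumpkin patch")
agent.run()  # drives there, then sits idle until told what to do next
```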
speedywilfork t1_je059or wrote
Reply to comment by acutelychronicpanic in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>I'd love to hear your thoughts on how it can perform well on reasoning tasks its never seen before and reason about what would happen to a bundle of balloons in an image if the string was cut.
I am sure you already know all of this, but it isn't really reasoning: it knows, and it knows because it learned. Anything that can be learned will eventually be learned by AI, anything and everything. So all of these tasks that appear impressive are, to me, just expected. So far AI hasn't done anything unexpected. Anything with a finite set of outcomes, like chess, Go, poker, or StarCraft, you name it, an AI will beat a human at, and it won't even be close. But it doesn't "reason"; it knows all of the possible moves that can ever be played. Show it a picture and ask what is funny about it: it knows that "atypical" things are considered "funny" by humans. So show it a picture of the Eiffel Tower wearing a hat, and it can easily determine what is "funny" even though it doesn't know what "funny" even means. A toy version of that heuristic is sketched below.
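A toy version of that "atypical is funny" heuristic, with co-occurrence counts invented purely for illustration: the system flags the rarest pairing without any notion of humor ever entering into it.

```python
# Invented co-occurrence counts standing in for corpus statistics.
cooccurrence = {
    ("eiffel tower", "tourists"): 9000,
    ("eiffel tower", "hat"): 3,
    ("dog", "leash"): 12000,
    ("dog", "sunglasses"): 40,
}

def atypicality(pair, counts, smoothing=1):
    # Rarer pairings score higher; this is pure frequency, not humor.
    return 1.0 / (counts.get(pair, 0) + smoothing)

ranked = sorted(cooccurrence, key=lambda p: atypicality(p, cooccurrence),
                reverse=True)
print(ranked[0])  # ('eiffel tower', 'hat') flagged; "funny" never defined
```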
On the other hand, in open-ended tasks with no finite set of outcomes, like this one...
https://news.yahoo.com/soldiers-outsmart-military-robot-acting-214509025.html
AI looks really, really dumb, because in this scenario real reasoning is required. A 5-year-old child would be able to pick out these soldiers. These are the types of experiments I am interested in, because they will help us learn where AI can reasonably be applied and where it can't.

Why can a 5-year-old pick out these soldiers when an AI can't? Because an AI just sees objects, while a 5-year-old understands intent. A 5-year-old understands that a person is trying to fool them, so they discern that it is a person inside a cardboard box. There is no way to teach an AI to recognize intent, because intent is an abstraction, and AI can't understand abstractions.
speedywilfork t1_jdy1kdc wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>What makes you think a modern AI can not solve this problem?
Because you gave it distinct textual clues to determine an answer: pumpkin patch, table, sign. It didn't determine anything on its own; you did all of the thinking for it. This is the point I am making: it can't do anything on its own.

If I say to a human "let's go to the pumpkin patch", we all get in the car, drive to the location, see the man in the field, drive to the man in the field who is taking tickets (not the man directing traffic), and we park. All I have to verbalize is "let's go to the pumpkin patch".

With an AI, on the other hand, I have to say "let's go to the pumpkin patch", then when we get there I have to say "drive to the man sitting at the table, not the man directing traffic, and when you get there stop next to the man, not in front of or behind him". Then you pay. Now you say "drive over to the man directing traffic and follow his gestures; he will show you where to park" (assuming it can follow gestures at all).

All the AI did was follow commands. It didn't "think" at all, because it can't. Do you realize how annoying this would become after a while? An average human would be better and could perform more work.
speedywilfork t1_jdxyi6c wrote
Reply to comment by acutelychronicpanic in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Why is it that when I ask specific questions, all I get is a straw man? That in itself proves I am correct. I have been involved with AI development for 20 years; I understand every single model and type there is to know. My ideas aren't out of date; they are true. I am looking forward here, imagining an AI like ChatGPT paired with other systems. If I were to take it into something like a coffee shop and ask "is this a coffee shop?", it would very likely fail to get the answer correct. To an AI, a coffee shop is a series of traits. It could not distinguish a coffee shop with a camera crew in it from a fake coffee shop on a movie set. It couldn't distinguish an unbranded Starbucks from an unbranded McDonald's. But you and I could, because a coffee shop is a concept, not a thing: it involves mood, feeling, and setting, and pattern recognition won't help with that. A toy version of the trait checklist is sketched below.
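Here is that trait checklist as a toy sketch (trait names and threshold invented for illustration): a movie set dressed as a cafe ticks every box, so the checklist calls it a coffee shop anyway.

```python
COFFEE_SHOP_TRAITS = {"espresso machine", "counter", "menu board", "tables"}

def looks_like_coffee_shop(observed, threshold=0.75):
    # Score the scene purely by trait overlap; mood, feeling, and setting
    # are invisible to a checklist like this.
    overlap = len(COFFEE_SHOP_TRAITS & observed)
    return overlap / len(COFFEE_SHOP_TRAITS) >= threshold

movie_set = {"espresso machine", "counter", "menu board", "tables",
             "camera crew", "boom mic"}
print(looks_like_coffee_shop(movie_set))  # True, even though it is a set
```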
>AI pattern matching can do things that only AI and humans can do. Its not as simple as you imply. It doesn't just search some database and find a response to a similar question.
Can a circle of small soccer cones disable an autonomous AI?
speedywilfork t1_jdxm0sp wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>Anything that will clue you in can also clue an AI in.
>For example the sign that says Drive-Thru.
Then why do you keep ignoring my very specific example? I am in a car with no steering wheel, and I want to go to a pumpkin patch with my family. I arrive at the pumpkin patch in my autonomous car, and there is a man sitting in a chair in the middle of a field. How does the AI know where to go?

I am giving you a real-life scenario that I experience every year. There are no lanes, no signs, no paths; it is a field. How does the AI navigate this?
speedywilfork t1_jdxkn3c wrote
Reply to comment by acutelychronicpanic in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
It doesnt "infer" it takes textual clues and makes a determination based on a finite vocabulary. it doesnt "know" anything it just matches textual patterns to a predetermined definition. it is really rather simplistic. The reason AI seems so smart is because humans do all of the abstract thinking for them. we boil it down to a concrete thought then we ask it a question. however if you were to tell an AI "go invent the next big thing" it is clueless, impotent, and worthless. AI will help humans achieve great things, but the AI can't achieve great things by itself. that is the important point. it won't do anything on its own, and that is the way people keep framing it.
I can disable an autonomous car by making a salt circle around it or using tiny soccer cones. this proves that the AI doesn't "know" what it is. how do i "explain" to an AI that some things can be driven over and others can't. there is no distinction between salt, painted line, and wall to an AI, all it sees is "obstacle".
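A toy sketch of that failure mode (the names are invented, not any real perception stack): once detection collapses everything into one "obstacle" label, the planner has no basis for treating salt differently from concrete.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str       # what perception reports to the planner
    position: tuple  # (x, y) in the vehicle frame

def path_is_blocked(detections):
    # Salt, paint, and concrete all take this same branch.
    return any(d.label == "obstacle" for d in detections)

readings = [
    Detection("obstacle", (1.0, 0.0)),   # actually a line of salt
    Detection("obstacle", (0.0, 1.0)),   # actually a painted line
    Detection("obstacle", (-1.0, 0.0)),  # actually a concrete wall
]
print(path_is_blocked(readings))  # True in all three cases: the car stays put
```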
speedywilfork t1_jdxdkr6 wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
I have already told you that anything can be a drive-through. So what contextual clues does a field have that would tip an AI off that it is a drive-through, when there are no lines, no lanes, no arrows, only a guy in a chair? An AI doesn't "assume" things. I want specifics; if you can't give me specifics, it cannot be programmed. AI requires specifics.

I mean, seriously: I can disable an autonomous car with a salt circle, because it has no idea it can drive over it. Would a salt circle stop a 5-year-old child? That shows you how dumb these systems really are.
speedywilfork t1_jdx4i47 wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
So I have four lines of cars, and three of them are drive-throughs. You are telling me that an AI can tell the difference between a line of cars in a parking lot, a line of cars on a road, a line of cars parked on the side of the road, and a line of cars at a drive-through? What distinguishing characteristics do these lines have that would tip the AI off as to which three are the drive-throughs?
speedywilfork t1_jdwo2mg wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>If the AI can not recognize an obvious drive-through it would be the AIs fault, but why do you suppose that is the case?
I already told you: because "drive-through" is an abstraction, a concept, not any one thing. Anything can be a drive-through, and AI can't comprehend abstractions. Sometimes the only clue you have that something is a drive-through is a line of cars; but not all lines are drive-throughs, and not all drive-throughs have a line. Both are abstractions, and there is no way to "teach" an abstraction. We don't know how we know these things; we just do.

Another example would be "farm". A farm can be almost anything: in your backyard, on your windowsill, inside a building, or the thing you put ants in. So asking an AI to identify a "farm" wouldn't be possible.
speedywilfork t1_jdw6ptz wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
So if an AI can't recognize a "drive-through", it is the drive-through's fault? Not to mention that a human would investigate: they would ask someone "where do I buy tickets?", someone would point to the guy in the chair and say "over there", and the human would immediately understand. An AI would have zero comprehension of "over there".
speedywilfork t1_jdw5jyl wrote
Reply to comment by RedditFuelsMyDepress in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
But that is the problem: it doesn't know intent, because intent is contextual. If I am standing in a coffee shop, the question means one thing; on a coffee plantation, another; in a business conversation, something totally different. So if you and I were discussing ways to improve our business and I asked "what do you think about coffee", I would not be asking about taste. AI can't distinguish these things.
speedywilfork t1_jdvwbt4 wrote
Reply to comment by RedditFuelsMyDepress in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
I am not talking about its opinion; I am talking about intent. I want it to know what the intention of my question is, regardless of the question. I just gave this example to someone else...

As an example: if I go to a small town and I am hungry, I find a local and say "I am not from around here and I am looking for a good place to eat." They understand that the intent of my question isn't the Taco Bell on the corner; they understand I am asking about a local eatery that others call "good". An AI would just spit out a list of restaurants, but that wasn't the intent of the question; therefore it didn't understand.

If I point at the dog bed, even my dog knows what I intend for it to do. It UNDERSTANDS; an AI wouldn't.
speedywilfork t1_jdvue1k wrote
Reply to comment by acutelychronicpanic in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>GPT4's strengths have little to do with spitting facts, and more to do with its ability to do reasoning and demonstrate understanding.
I am not talking about an opinion; I am referring to intent. If it can't determine intent, it can neither reason nor understand. Humans understand intent easily; AI can't.

As an example: if I go to a small town and I am hungry, I find a local and say "I am not from around here and I am looking for a good place to eat." They understand that the intent of my question isn't the Taco Bell on the corner; they understand I am asking about a local eatery that others call "good". An AI would just spit out a list of restaurants, but that wasn't the intent of the question; therefore it didn't understand.
speedywilfork t1_jdvt9wv wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
If an AI fails to understand your intent, would you call that a win?
speedywilfork t1_jdvnee1 wrote
Reply to comment by acutelychronicpanic in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Here is the problem: "intelligence" has nothing to do with regurgitating facts; it has to do with communication and intent. If I ask you "what do you think about coffee", you know I am asking about preference, not the origin of coffee or random facts about it. If you asked a human "what do you think about coffee" and they spat out some random facts, and you said "no, that's not what I mean, I want to know if you like it", and they spat out more random facts, would you think to yourself, "damn, this guy is really smart"? I doubt it. You would more likely think "what's wrong with this guy?" So if something can't identify intent and return a cogent answer, it isn't "intelligent". A toy version of the failure is sketched below.
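A toy keyword-based classifier, sketched only to show the failure mode (nothing here is a real system): the surface form of the question maps to one fixed intent, whatever the conversation actually is.

```python
def classify_intent(question, context=None):
    q = question.lower()
    if "what do you think about" in q:
        # The surface form alone can't separate a taste question in a cafe
        # from a sourcing question in a business meeting; a fixed mapping
        # just has to pick one. `context` is accepted but never used.
        return "preference"
    if q.startswith(("what is", "who is", "where is")):
        return "factual"
    return "unknown"

print(classify_intent("what do you think about coffee"))
# "preference": right in a cafe, wrong in a meeting about supply costs.
```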
speedywilfork t1_jdvkqtr wrote
Reply to comment by Surur in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>Your examples are pretty bad and you should feel bad.
No, they aren't; they illustrate my point perfectly. The AI didn't know what you were asking when you asked "do you live in a computer", because it doesn't understand that we are not asking whether it is "alive" in the biological sense; we are asking whether it "lives" there in the figurative sense. It doesn't even understand the term "computer" as used here: we are not asking about a literal MacBook or PC, we are speaking figuratively, using "computer" to mean something akin to "the digital world". It failed to recognize the intended meaning of the words; therefore it failed.
>Approach the drive-through entrance: Look for signs indicating the entrance to the drive-through lane. These signs usually have arrows or the word "drive-through" on them. The entrance is typically located on one side of the restaurant, and you may need to drive around the building to find it.
Another failure. What if I go to a concert in a field and there is an impromptu line to buy tickets: no lane markers, no window, no arrows, just a guy in a chair holding some paper? The AI fails again.
speedywilfork t1_jdvivi6 wrote
Reply to comment by datsmamail12 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
I am not impressed by it, because everything it does is expected. And it will never become self-aware, because it has no ability to do so. Self-awareness isn't something you learn; self-awareness is something you are. It is a trait, and traits are assigned, not learned. Even in evolution, it is the environment that assigns traits. An AI has no environmental influence outside of its programmers, so the programmers would have to assign it the "self-aware" trait.
speedywilfork t1_je9viyb wrote
Reply to comment by Buscemi_D_Sanji in More Water Found on Moon, Locked in Tiny Glass Beads by Gari_305
I was giving the TL;DR. I didn't want to go into details like the other poster did.