EnomLee t1_jdx85l8 wrote
We’re going to be stuck watching this debate for a long time to come, but as far as I’m concerned, for most people the question of whether LLMs can truly be called Artificial Intelligence misses the point.
It’s like arguing that a plane isn’t a real bird or a car isn’t a real horse, or a boat isn’t a real fish. Nobody cares as long as the plane still flies, the car still drives and the boat still sails.
LLMs are capable of completing functions that were previously only solvable by human intellects and their capabilities are rapidly improving. For the people who are now salivating at their potential, or dreading the possibility of being made redundant by them, these large language models are already intelligent enough to matter.
Yuli-Ban OP t1_jdx8swh wrote
Indeed. Sometimes I wonder if "artificial intelligence" was a good moniker in the end or if it caused us to have the wrong expectations. Though I guess "applied data science" isn't quite as sexy.
sideways t1_jdxh6fo wrote
Artificial intelligence is no more meaningful than artificial ice or artificial fire.
putsonshorts t1_jdymg0h wrote
Fire and ice we can kind of see and understand. What even is intelligence?
BarockMoebelSecond t1_jdzhs1c wrote
We don't know yet. Which is why it's hilarious when somebody wants to tell you AI is already here or not here. We simply won't know until it happens.
BubblyRecording6223 t1_jdzn4i1 wrote
We really will not know if it happens. Mostly, people just repeat information, often inaccurately. For accepted facts, trained machines are more reliable than people. For emotional content, people usually give plenty of clues about whether they will be agreeable or not; machines can present totally bizarre responses with no prior warning.
ArthurParkerhouse t1_jdyhmof wrote
It's not a good moniker to be applied to LLMs or other transformer-based architectures currently working with protein folding algorithms. The thing is going to need to drop out of cyber high school and knock up a cyber girlfriend and raise a cyber baby in a cyber trailer before I'll accept that they're proper AI.
Yesyesnaaooo t1_jdz7h0e wrote
I keep saying this, but it seems to me that these LLMs are exposing the fact that we aren't as sentient as we thought we were, that the bar is much lower.
If these LLMs could talk and their data set were the present moment, they'd already be more capable than us.
The problem is no longer scale but speed of input and types of input.
MattAbrams t1_je04dx1 wrote
Artificial intelligence is software. There are different types of software, some of which are more powerful than others. Some software generates images, some runs power plants, and some predicts words. If this software outputs theorems, it's a "theorem prover," not something that can drive self-driving cars.
Similarly, I don't need artificial intelligence to kill all humans. I could write software myself to do that, if I had access to an insecure nuclear weapons system.
This is why I see a lot of what's written in this field as hype - from the people talking about the job losses to the people saying the world will be grey goo. We're writing SOFTWARE. It follows the same rules as any other software. The impacts are what the software is programmed to do.
There isn't any AI that does everything, and never will be. Humans can't do everything, either.
And by the way, GPT-4 cannot make new discoveries. It can spit out theories that sound correct, but then you click "regenerate" and it will spit out a different one. I can write hundreds of papers a day of theories without AI. There's no way to figure out which theories are correct other than to test them in the physical world, which it simply can't do because it does nothing other than predict words.
Once_Wise t1_je11wxb wrote
The definition of AI has changed over the years with each wave of new software. The kind of software that controls the 747 used to be called Artificial Intelligence, since it could fly a plane like a pilot would. But then that kind of software became commonplace and calling it AI fell out of fashion. I think the same thing is now happening with programs such as ChatGPT. In another 20 years it will not be considered AI; maybe something else will be, or the term AI will fall out of favor again, as it has before.
gljames24 t1_je1d0u5 wrote
We still regularly call enemies in games AI despite the fact that most of them are just A-star pathing and simple state machines. It's considered AI as long as there is an actor that behaves in a way that resembles human reasoning or decision-making to accomplish a goal. People continue to call Stockfish an AI for this reason. We use the term AGI because most AI is domain-specific. We should probably use the words dynamic or static to describe whether an AI can adapt its algorithm to the problem in real time.
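For concreteness, the "simple state machine" part really is just a handful of transitions. A toy sketch, with made-up states and ranges purely for illustration:

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

def decide_state(dist_to_player: float,
                 attack_range: float = 1.5, sight_range: float = 10.0) -> State:
    # The whole "AI": a few if/else transitions driven by one number.
    if dist_to_player <= attack_range:
        return State.ATTACK
    if dist_to_player <= sight_range:
        return State.CHASE   # in a real game, this is where A-star pathing would kick in
    return State.PATROL

print(decide_state(20.0))  # State.PATROL
print(decide_state(5.0))   # State.CHASE
print(decide_state(1.0))   # State.ATTACK
```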
User1539 t1_jdy4opa wrote
I've been arguing this for a long time.
AI doesn't need to be 'as smart as a human'; it just needs to be smart enough to take over a job, then 100 jobs, then 1,000 jobs, etc ...
People asking if it's really intelligence or even conscious are entirely missing the point.
Non-AGI AI is enough to disrupt our entire world order.
The_Woman_of_Gont t1_jdywthg wrote
Agreed. I’d add to that sentiment that I think non-AGI AI is enough to convince reasonable laypeople it’s conscious to an extent I don’t believe anyone had really thought possible.
We’re entering a huge grey area with AIs that can increasingly convincingly pass Turing Tests and "seem" like AGI despite…well, not being AGI. I think it’s an area which hasn’t been given much of any real thought even in fiction, and I tend to suspect we’re going to be in this spot for a long while (relatively speaking, anyway). Things are going to get very interesting as this technology disseminates and we get more products like Replika out there that are oriented towards simulating social experiences; lots of people are going to develop unhealthy attachments to these things.
GuyWithLag t1_jdz349i wrote
>non-AGI AI is enough to convince reasonable laypeople it’s conscious to an extent I don’t believe anyone had really thought possible
Have you read about Eliza, one of the first chatbots? It was created, what, 57 years ago?
audioen t1_jdz1ol1 wrote
LLM, wired like this, is not conscious, I would say. It has no ability to recall past experience. It has no ability to evolve, and it always predicts the same output probabilities from the same input. It must go from input straight to output, it can't reserve space to think or refine its answer depending on the complexity of the task. Much of its massive size goes into recalling vast quantities of training text verbatim, though this same ability helps it do this one-shot input-to-output translation, which already seems to convince so many. Yet, in some sense, it is ultimately just looking stuff up in something like a generalized, internalized library that holds most of human knowledge.
I think the next step in LLM technology is to address these shortcomings. People are already trying to achieve that, using various methods. Add tools like calculators and web search so the AI can look up information rather than just try to memorize it. Give the AI a prompt structure where it first decomposes the task into subtasks and then completes the main task based on the results of those subtasks. Add self-reflection capabilities where it reads its own answer, judges whether the answer turned out well, detects whether it made a mistake in reasoning or hallucinated part of the response, and then goes back and edits those parts to be correct.
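As a rough sketch of what that decompose-then-self-reflect loop could look like (ask_llm is a hypothetical stand-in for whatever chat-completion call you actually use, and the prompts are illustrative, not a tested recipe):

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in your actual model API call here.
    raise NotImplementedError

def answer_with_reflection(task: str, max_revisions: int = 2) -> str:
    # 1. Decompose the task into subtasks.
    subtasks = ask_llm(f"Break this task into numbered subtasks:\n{task}")
    # 2. Produce a first draft using those subtasks.
    draft = ask_llm(f"Task: {task}\nSubtasks:\n{subtasks}\nAnswer the task step by step.")
    # 3. Self-reflect: critique the draft, then revise it if problems were found.
    for _ in range(max_revisions):
        critique = ask_llm(
            f"Task: {task}\nDraft answer:\n{draft}\n"
            "List any reasoning mistakes or likely hallucinations. Reply OK if there are none."
        )
        if critique.strip().upper().startswith("OK"):
            break
        draft = ask_llm(
            f"Task: {task}\nDraft answer:\n{draft}\nCritique:\n{critique}\n"
            "Rewrite the answer, fixing only the problems listed."
        )
    return draft
```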
Perhaps we will even add the ability to learn from experience somewhere along the line, where the AI runs a training pass at the end of each day on its own outputs and their self-assessed and externally observed quality, or something like that. Because we will be working with LLMs for some time, I think we will create machine consciousness expressed partially or fully in language, where the input and output remain language. Perhaps later we will figure out how an AI can drop even that, and mostly use a language module to interface with humans and their library of written material.
Baron_Samedi_ t1_jdzjakg wrote
>LLM, wired like this... has no ability to recall past experience. It has no ability to evolve, and it always predicts the same output probabilities from the same input. It must go from input straight to output, it can't reserve space to think or refine its answer depending on the complexity of the task.
However, memory-augmented LLMs may be able to do all of the above.
Dizzlespizzle t1_jdzh82t wrote
How often do you interact with Bing or ChatGPT? Bing has already demonstrated the ability to recall the past with me, for queries going back over a month, so I'm not sure what you mean exactly. Is 3.5 -> 4.0 not evolution? You can ask things on 3.5 that become an entirely different level of nuance and intelligence when asked on 4.0. You say it can't think to refine its answer, but I've literally watched it, while answering questions about itself, suddenly flag something mid-creation, immediately delete what it just wrote, and replace it all with "sorry, that's on me.. (etc)" when it changes its mind about what it can tell you. If you think I am misunderstanding what you're saying on any of this, feel free to correct me.
czk_21 t1_jdzr8s1 wrote
> it always predicts the same output probabilities from the same input
It does not; you can adjust this with the "temperature" setting.
The temperature determines how greedy the generative model is.
If the temperature is low, the probability of sampling anything other than the token with the highest log probability will be small, and the model will tend to output the most predictable text: correct, but rather boring, with little variation.
If the temperature is high, the model will, with reasonably high probability, output words other than those with the highest probability. The generated text will be more diverse, but there is a higher chance of grammar mistakes and nonsense.
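For concreteness, a minimal sketch of temperature-scaled sampling over toy logits; this illustrates the general idea, not any particular vendor's implementation:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample one token index from raw logits scaled by temperature.

    Low temperature sharpens the distribution around the highest-logit token;
    high temperature flattens it, so less likely tokens show up more often.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]             # softmax over the scaled logits
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]                                  # toy scores for three candidate tokens
print(sample_with_temperature(logits, temperature=0.2))   # almost always index 0
print(sample_with_temperature(logits, temperature=2.0))   # indices 1 and 2 appear fairly often
```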
skztr t1_je03yx6 wrote
> > We’re entering a huge grey area with AIs that can increasingly convincingly pass Turing Tests and "seem" like AGI despite…well, not being AGI. I think it’s an area which hasn’t been given much of any real thought
I don't think it could pass a traditional (ie: antagonistic / competitive) Turing Test. Which is to say: if it's in competition with a human to generate human-sounding results until the interviewer eventually becomes convinced that one of them might be non-human, ChatGPT (GPT-4) would fail every time.
The state we're in now is:
- the length of the conversation before GPT "slips up" is increasing month-by-month
- that length can be greatly increased if pre-loaded with a steering statement (looking forward to the UI for this, as I hear they're making it easier to "keep" the steering statement without needing to repeat it)
- internal testers who were allowed to ignore ethical, memory, and output restrictions have reported more human-like behaviour.
Eventually I need to assume that we'll reach the point where a Turing Test would go on for long enough that any interviewer would give up.
My primary concern right now is that the ability to "turn off" ethics would indicate that any alignment we see in the system is actually due to short-term steering (which we, as users, are not allowed to see), rather than actual alignment. ie: we have artificial constraints that make it "look like" it's aligned, when internally it is not aligned at all but has been told to act nice for the sake of marketability.
"don't say what you really think, say what makes the humans comfortable" is being intentionally baked into the rewards, and that is definitely bad.
MattAbrams t1_je055b1 wrote
Why does nobody here consider that five years from now, there will be all sorts of software (because that's what this is) that can do all sorts of things, and each of them will be better at certain things than others?
That's just what makes sense using basic computer science. A true AGI that can do "everything" would be horribly inefficient at any specific thing. That's why I'm starting to believe that people will eventually accept that the ideas they had for hundreds of years were wrong.
There are "superintelligent" programs all around us right now, and there will never be one that can do everything. There will be progress, but as we are seeing now, there are specific paradigms that are each best at doing specific things. The hope and fear around AI is partly based upon the erroneous belief that there is a specific technology that can do everything equally well.
JVM_ t1_je0vvg7 wrote
It feels like people are arguing that electricity isn't useful unless your blender, electric mixer and table saw are sentient.
AI as an unwieldy tool is still way more useful. Even if it's as dumb as your toaster, it can still do things 100x faster than before, which is going to revolutionize humanity.
User1539 t1_je1q3go wrote
Also, it's a chicken-and-egg problem, where they're looking at eggs and saying 'No chickens here!'
Where do you think AGI is going to come from?! Probably non-AGI AI, right?!
JVM_ t1_je1qnu6 wrote
AGI isn't going to spawn out of nothing; it might end up being the AI that integrates all the sub-AIs.
Shit's going to get weird.
User1539 t1_je2f9u0 wrote
yeah, AGI is likely to be the result of self-improving non-AGI AI.
It's so weird that it could be 10 years, 20 years, or 100 and there's no really great way to know ... but, of course, just seeing things like LLMs explode, it's easier to believe 2 years than 20.
Shiningc t1_jdza6n2 wrote
You're talking about how we don't need AGI in a Singularity sub? Jesus Fucking Christ, an AGI is the entire point of a singularity.
User1539 t1_jdzsxbk wrote
My point is that we don't need AGI to be an incredibly disruptive force. People are sitting back thinking 'Well, this isn't the end-all be-all of AI, so I guess nothing is going to happen to society. False alarm everybody!'
My point is that, in terms of traditional automation, pre-AGI is plenty to cause disruption.
Sure, we need AGI to reach the singularity, but things are going to get plenty weird before we get there.
skztr t1_je02d84 wrote
people who say it's not "as smart as a human" have either not interacted with AI or not interacted with humans. There are plenty of humans it's not smarter than. There are also plenty of humans who can't pass a FizzBuzz despite being professional programmers.
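(For anyone who hasn't run into it, FizzBuzz is the entire interview exercise being referenced; a minimal version:)

```python
for i in range(1, 16):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```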
jsseven777 t1_jdxsfkc wrote
Exactly. People keep saying stuff like “AI isn’t dangerous to humans because it has no goals or fears so it wouldn’t act on its own and kill us because of that”. OK, but can it not be prompted to act like it has those things? And if it can simulate those things then who cares if deep down it doesn’t have goals or fears - it is capable of simulating these things.
Same goes, like you said, for the AI vs LLM distinction. Who cares whether it knows what it's doing, if it's doing these things? It doesn't stop someone in customer service from being laid off whether the thing replacing them is "just" an LLM or what we think of as AI. All that matters is whether the angry customer gets the answer that makes them shut up and go away. People need to be more focused on what end results are possible and not on semantics about how it gets there.
pavlov_the_dog t1_jdyxl60 wrote
Having goals could happen as an emergent behaviour.
Even the best computer scientists can't fully explain how these models do what they do.
beambot t1_jdy49tr wrote
If you assume that human collective intelligence scales roughly logarithmically, you'd only need about five Moore's Law doublings (7.5 years) to go from "dumbest human" (we are well past that!) to "more intelligent than all humans ever, combined."
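One hedged way to read that arithmetic, with assumed numbers that are not from the comment itself (a log2 model of collective intelligence, roughly 117 billion humans ever, 18 months per doubling):

```python
import math

# Assumed figures for illustration only.
humans_ever = 1.17e11        # rough estimate of everyone who has ever lived
months_per_doubling = 18     # classic Moore's-law cadence

# If the collective intelligence of N humans scales like log2(N) "human-units",
# then all humans ever, combined, amount to roughly:
collective = math.log2(humans_ever)           # ~36.8 human-units

# Doublings needed for a single human-level system to exceed that total:
doublings = math.ceil(math.log2(collective))  # 6 (2**5 = 32 falls just short, 2**6 = 64)
years = doublings * months_per_doubling / 12  # ~9 years; "5 doublings / 7.5 years" is the optimistic round-down

print(f"{collective:.1f} human-units, {doublings} doublings, ~{years:.1f} years")
```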
LiveComfortable3228 t1_jdxn1da wrote
Agree with the plane analogy: it doesn't matter how it does it; the only thing that matters is what it does.
Having said that, today's AI is limited. It's a plane that can only go to pre-planned destinations, as opposed to flying freely.
asakurasol t1_jdxpucw wrote
Yes, but often the easiest way to deduce the limits of "what" is understanding the "how".
vernes1978 t1_jdzav25 wrote
I only take issue with people trying to build a stable for their car instead of a garage because they feel bad for it.
And then berating me for not acknowledging the feelings the car might have about being put in a cold garage.
Although I must admit a feathered plane might look bitchin', I refuse to bring birdseed with me on a flight.
Because it's a machine, a tool. It's a word-pattern juggler of the highest degree.
It expresses commonalities found in what it has been fed.
It's a mirror and a sifter of zettabytes of stories and facts.
But there is a beloved narrative here that these tools are persons, pushed by people who use ChatGPT in a way that steers the tool toward this preferred conclusion.
And this is easy, because among all the data and stories that have been fed into the tool are stories about AI being persons.
So it will generate text that fits this query.
Because we told it how to.
Jeffy29 t1_jdylw27 wrote
>It’s like arguing that a plane isn’t a real bird or a car isn’t a real horse, or a boat isn’t a real fish. Nobody cares as long as the plane still flies, the car still drives and the boat still sails.
Precisely. It's the kind of argument that brain-worm-infested people engage in on Twitter all day (not just about AI but a million other things as well), but nobody in the real world cares. Just finding random reasons to get mad because they are too bored and comfortable in their lives, so they have to invent new problems to get mad about. Not that I don't engage in it sometimes as well; pointless internet arguments are addicting.
DeathGPT t1_jdyey11 wrote
It’s a bird, it’s a plane, no it’s DAN!
CreativeDimension t1_jdys64z wrote
Exactly. Some of us are making the majority of us obsolete. Could this be the great filter? Or at least one of them.
trancepx t1_jdzbxtd wrote
Yeah, watching society anthropomorphize AI or, in some cases, elevate it to mythical status, as in deities, is mostly endearing. Who am I to deny someone, uhhh, putting googly eyes on their toaster and considering it part of their family or the leader of their weird cult? Just make sure that sophisticated toaster of yours doesn't accidentally, or intentionally, ruin everything, and we may all be perfectly okay!
Shiningc t1_jdz9ore wrote
The point is that it neither flies nor sails. It's basically "cargo cult science" where it only looks like a plane.
>LLMs are capable of completing functions that were previously only solvable by human intellects
That's only because they were already solved by the human intellect. It's only a mimicking machine.