User1539 t1_jdy4opa wrote
Reply to comment by EnomLee in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I've been arguing this for a long time.
AI doesn't need to be 'as smart as a human'; it just needs to be smart enough to take over a job, then 100 jobs, then 1,000 jobs, and so on ...
People asking if it's really intelligence or even conscious are entirely missing the point.
Non-AGI AI is enough to disrupt our entire world order.
The_Woman_of_Gont t1_jdywthg wrote
Agreed. I’d add to that sentiment that I think non-AGI AI is enough to convince reasonable laypeople it’s conscious to an extent I don’t believe anyone had really thought possible.
We’re entering a huge grey area with AIs that can increasingly convincingly pass Turing Tests and "seem" like AGI despite…well, not being AGI. I think it’s an area which hasn’t been given much real thought even in fiction, and I tend to suspect we’re going to be in this spot for a long while (relatively speaking, anyway). Things are going to get very interesting as this technology disseminates and we get more products like Replika out there that are oriented towards simulating social experiences; lots of people are going to develop unhealthy attachments to these things.
GuyWithLag t1_jdz349i wrote
>non-AGI AI is enough to convince reasonable laypeople it’s conscious to an extent I don’t believe anyone had really thought possible
Have you read about Eliza, one of the first chatbots? It was created, what, 57 years ago?
audioen t1_jdz1ol1 wrote
LLM, wired like this, is not conscious, I would say. It has no ability to recall past experience. It has no ability to evolve, and it always predicts the same output probabilities from the same input. It must go from input straight to output, it can't reserve space to think or refine its answer depending on the complexity of the task. Much of its massive size goes into recalling vast quantities of training text verbatim, though this same ability helps it do this one-shot input-to-output translation, which already seems to convince so many. Yet, in some sense, it is ultimately just looking stuff up in something like a generalized, internalized library that holds most of human knowledge.
I think the next step in LLM technology is to address these shortcomings. People are already trying to achieve that, using various methods. Add tools like calculators and web search so the AI can look up information rather than try to memorize it all. Give the AI a prompt structure where it first decomposes the task into subtasks and then completes the main task based on the results of those subtasks. Add self-reflection capabilities where it reads its own answer, judges whether the answer turned out well, detects whether it made a reasoning mistake or hallucinated part of the response, and then goes back and edits those parts to be correct.
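In rough code terms, that loop is already easy to prototype. A minimal sketch, assuming a hypothetical `llm()` helper that wraps whatever completion API you actually have access to:

```python
def llm(prompt: str) -> str:
    # Stand-in for a call to whatever LLM API is available.
    raise NotImplementedError

def answer_with_reflection(task: str, max_revisions: int = 2) -> str:
    # 1. Decompose the task into subtasks.
    plan = llm(f"Break this task into a short numbered list of subtasks:\n{task}")
    # 2. Draft an answer to the main task using the subtask notes.
    draft = llm(f"Task: {task}\nSubtask notes:\n{plan}\nWrite the final answer.")
    # 3. Self-reflection: critique the draft, then revise if problems were found.
    for _ in range(max_revisions):
        critique = llm(
            f"Task: {task}\nDraft answer:\n{draft}\n"
            "List any reasoning errors or hallucinations, or reply OK."
        )
        if critique.strip().upper().startswith("OK"):
            break
        draft = llm(
            f"Task: {task}\nDraft answer:\n{draft}\nCritique:\n{critique}\n"
            "Rewrite the answer with those problems fixed."
        )
    return draft
```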
Perhaps we will even add the ability to learn from experience somewhere along the line, where the AI runs a training pass at the end of each day on its own outputs and their self-assessed and externally observed quality, or something like that. Because we will be working with LLMs for some time, I think we will create machine consciousness expressed partially or fully in language, where the input and output remain language. Perhaps later we will figure out how AI can drop even language and mostly use a language module to interface with humans and their library of written material.
Baron_Samedi_ t1_jdzjakg wrote
>LLM, wired like this... has no ability to recall past experience. It has no ability to evolve, and it always predicts the same output probabilities from the same input. It must go from input straight to output, it can't reserve space to think or refine its answer depending on the complexity of the task.
However, memory-augmented LLMs may be able to do all of the above.
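For example, a bare-bones sketch of what memory augmentation could look like, with hypothetical `embed()` and `llm()` stand-ins for whatever models you'd actually call:

```python
import numpy as np

memory: list[tuple[np.ndarray, str]] = []   # (embedding, text) pairs from past chats

def embed(text: str) -> np.ndarray:
    # Stand-in for an embedding model call.
    raise NotImplementedError

def llm(prompt: str) -> str:
    # Stand-in for a completion model call.
    raise NotImplementedError

def remember(text: str) -> None:
    memory.append((embed(text), text))

def chat_with_memory(user_msg: str, k: int = 3) -> str:
    q = embed(user_msg)

    # Rank stored memories by cosine similarity to the new message.
    def score(item: tuple[np.ndarray, str]) -> float:
        v, _ = item
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))

    best = sorted(memory, key=score, reverse=True)[:k]
    context = "\n".join(text for _, text in best)
    reply = llm(f"Relevant past conversation:\n{context}\n\nUser: {user_msg}\nAssistant:")
    remember(f"User: {user_msg}\nAssistant: {reply}")
    return reply
```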
Dizzlespizzle t1_jdzh82t wrote
How often do you interact with Bing or ChatGPT? Bing has already demonstrated the ability to recall my queries going back over a month, so I'm not sure what you mean exactly. Is 3.5 -> 4.0 not evolution? Things you ask on 3.5 come back at an entirely different level of nuance and intelligence when asked on 4.0. You say it can't think to refine its answer, but I've literally watched it, mid-answer to a question about itself, suddenly flag what it's writing, immediately delete it, and replace it all with "sorry, that's on me... (etc.)" when it changes its mind about what it can tell you. If you think I'm misunderstanding what you're saying on any of this, feel free to correct me.
czk_21 t1_jdzr8s1 wrote
> it always predicts the same output probabilities from the same input
it doesn't have to; you can adjust the sampling with "temperature"
The temperature determines how greedy the sampling is.
If the temperature is low, the probability of sampling anything other than the token with the highest log probability is small, so the model will mostly output the most likely text: usually correct, but rather boring, with little variation.
If the temperature is high, the model will fairly often pick words other than the most probable ones. The generated text will be more diverse, but there is a higher chance of grammar mistakes and nonsense.
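To be precise, the logits the model computes for a given input are still deterministic; temperature just rescales them before a token is sampled. A rough sketch of that sampling step:

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample a token id from raw logits after temperature scaling."""
    scaled = logits / max(temperature, 1e-6)   # low T -> sharper, high T -> flatter
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Example: three candidate tokens with fixed logits.
logits = np.array([2.0, 1.0, 0.5])
print(sample_token(logits, temperature=0.1))   # almost always token 0
print(sample_token(logits, temperature=2.0))   # tokens 1 and 2 show up often
```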
skztr t1_je03yx6 wrote
> > We’re entering a huge grey area with AIs that can increasingly convincingly pass Turing Tests and "seem" like AGI despite…well, not being AGI. I think it’s an area which hasn’t been given much of any real thought
I don't think it could pass a traditional (i.e., antagonistic/competitive) Turing Test. Which is to say: if it were in competition with a human to generate human-sounding responses until the interviewer became convinced that one of them might be non-human, ChatGPT (GPT-4) would fail every time.
The state we're in now is:
- the length of the conversation before GPT "slips up" is increasing month-by-month
- that length can be greatly increased if pre-loaded with a steering statement (looking forward to the UI for this, as I hear they're making it easier to "keep" the steering statement without needing to repeat it; see the sketch after this list)
- internal testers who were allowed to ignore ethical, memory, and output restrictions have reported more human-like behaviour.
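For what it's worth, "keeping" a steering statement mostly just means pinning it to the front of the message list on every turn. A minimal sketch, assuming OpenAI-style role/content messages and a hypothetical `chat()` wrapper around whatever chat API is used:

```python
def chat(messages: list[dict]) -> str:
    # Stand-in for a call to some chat-completion API.
    raise NotImplementedError

# The steering statement stays pinned at the head of every request.
STEERING = {"role": "system",
            "content": "Answer candidly and stay in character as a friendly human pen pal."}

history: list[dict] = []

def send(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = chat([STEERING] + history)   # the user never has to repeat the steering text
    history.append({"role": "assistant", "content": reply})
    return reply
```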
Eventually, I have to assume, we'll reach the point where a Turing Test would go on long enough that any interviewer would give up.
My primary concern right now is that the ability to "turn off" ethics would indicate that any alignment we see in the system is actually due to short-term steering (which we, as users, are not allowed to see), rather than actual alignment. I.e., we have artificial constraints that make it "look like" it's aligned, when internally it is not aligned at all but has been told to act nice for the sake of marketability.
"don't say what you really think, say what makes the humans comfortable" is being intentionally baked into the rewards, and that is definitely bad.
MattAbrams t1_je055b1 wrote
Why does nobody here consider that five years from now, there will be all sorts of software (because that's what this is) that can do all sorts of things, and each of them will be better at certain things than others?
That's just what makes sense using basic computer science. A true AGI that can do "everything" would be horribly inefficient at any specific thing. That's why I'm starting to believe that people will eventually accept that the ideas they had for hundreds of years were wrong.
There are "superintelligent" programs all around us right now, and there will never be one that can do everything. There will be progress, but as we are seeing now, there are specific paradigms that are each best at doing specific things. The hope and fear around AI is partly based upon the erroneous belief that there is a specific technology that can do everything equally well.
JVM_ t1_je0vvg7 wrote
It feels like people are arguing that electricity isn't useful unless your blender, electric mixer and table saw are sentient.
AI as an unwieldy tool is still way more useful. Even if it's as dumb as your toaster, it can still do things 100x faster than before, which is going to revolutionize humanity.
User1539 t1_je1q3go wrote
Also, it's a chicken-and-egg problem, where they're looking at eggs and saying 'No chickens here!'.
Where do you think AGI is going to come from?! Probably non-AGI AI, right?!
JVM_ t1_je1qnu6 wrote
AGI isn't going to spawn out of nothing; it might end up being the AI that integrates all the sub-AIs.
Shit's going to get weird.
User1539 t1_je2f9u0 wrote
Yeah, AGI is likely to be the result of self-improving non-AGI AI.
It's so weird that it could be 10 years, 20 years, or 100, and there's no really great way to know ... but, of course, after seeing things like LLMs explode, it's easier to believe 2 years than 20.
Shiningc t1_jdza6n2 wrote
You're talking about how we don't need AGI in a Singularity sub? Jesus Fucking Christ, an AGI is the entire point of a singularity.
User1539 t1_jdzsxbk wrote
My point is that we don't need AGI to be an incredibly disruptive force. People are sitting back thinking 'Well, this isn't the end-all be-all of AI, so I guess nothing is going to happen to society. False alarm everybody!'
My point is that, in terms of traditional automation, pre-AGI is plenty to cause disruption.
Sure, we need AGI to reach the singularity, but things are going to get plenty weird before we get there.
skztr t1_je02d84 wrote
people who say it's not "as smart as a human" have either not interacted with AI or not interacted with humans. There are plenty of humans it's not smarter than. There are also plenty of humans who can't pass a FizzBuzz despite being professional programmers.
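For reference, FizzBuzz is the classic screening exercise: count from 1 to 100, printing "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for both. A minimal version:

```python
# FizzBuzz: the standard version of the screening exercise.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```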