
Comfortable-Ad4655 t1_j1g0er4 wrote

I agree that this sub's quality could be improved significantly... but I'm still curious why you think LLMs are far away from AGI. It might be good to first say what you consider "far away".

36

fortunum OP t1_j1g2wqj wrote

You would need to define AGI first. Historically, the definition and measurement of AGI have changed. Then you could ask yourself whether language is all there is to intelligence. Do sensation and perception play a role? Does the substrate (simulation on a von Neumann architecture versus neuromorphic hardware) matter? Does AGI need a body? There are many more philosophical questions, especially around consciousness.

The practical answer would be that adversarial attacks are easy to conduct against, for instance, ChatGPT. You can fool it into giving nonsensical answers, and this will likely remain possible with succeeding versions of LLMs as well.

15

sticky_symbols t1_j1gi6fn wrote

Here we go. This comment has enough substance to discuss. Most of the talk in this sub isn't deep or well-informed enough to really count as discussion.

Perceptual and motor networks are making progress almost as rapidly as language models. If you think those are important, and I agree that they can only help, they are probably being integrated right now, and certainly will be soon.

I've spent a career studying how the human brain works. I'm convinced it's not infinitely more complex than current networks, and the computational motifs needed to get from where we are to brain-like function are already understood by a handful of people; they merely need to be integrated and iterated upon.

My median prediction is ten years to full superhuman AGI, give or take. By that I mean something that makes better plans in any domain than a single human can. That will slowly or quickly accelerate progress as it's applied to building better AGI, and then we have the intelligence-explosion version of the singularity.

At which point we all die, if we haven't somehow solved the alignment problem by then. If we have, we all go on permanent vacation and dream up awesome things to do with our time.

28

PoliteThaiBeep t1_j1gowpp wrote

You know, I've read a 1967 sci-fi book by a Ukrainian author in which a machine is invented that can copy, create, and alter human beings, with a LOT of discussion of what it could mean for humanity, as well as the threat of a super-AI.

In a few chapters where people were talking and discussing events, one of them went on and on about how computers would rapidly overtake human intelligence and what would happen then.

I found it... Interesting.

A lot of the talks I've had with tech people since around 2015 have been remarkably similar, and the resemblance to conversations people were having in the 1960s is striking.

Same points " it's not a question of IF it's a question of when" Etc. Same arguments, same exponential talk, etc.

And I'm with you on that... but a lot of us also pretend, or believe, that we understand more than we possibly do or could.

We don't really know when an intelligence explosion will happen.

In the 1960s, people thought it would happen when computers could do arithmetic a million times faster than humans.

We seem to hang on to raw compute in FLOPS, compare it against the human brain, and voila: if ours is higher, we've got super-AI.

We long since passed 10^16 FLOPS in our supercomputers, and yet we're still nowhere near human-level AI.

Memory bandwidth kind of slipped away from Kurzweil's books.

Maybe ASI will happen tomorrow. Or 10 years from now. Or 20 years from now. Or maybe it'll never happen and we'll just sort of merge with it as we go, without any defining, rigid event.

My point is: we don't really know. FLOPS progression was a good guess, but it failed spectacularly. We have computers capable of over 10^18 FLOPS, and we're still 2-3 orders of magnitude behind the human brain when trying to emulate it.
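As a rough back-of-the-envelope version of that gap (both figures below are assumptions for illustration only; real estimates for brain emulation span several orders of magnitude):

```python
import math

# Illustrative, assumed figures only -- real estimates vary wildly.
supercomputer_flops = 1.1e18     # roughly an exascale machine
brain_emulation_flops = 1e21     # one ballpark guess for detailed brain emulation

gap = brain_emulation_flops / supercomputer_flops
print(f"~{gap:.0f}x short, i.e. about {math.log10(gap):.1f} orders of magnitude")
```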

10

sticky_symbols t1_j1i5gpt wrote

I agree that we don't know when. The point people often miss is that we have high uncertainty in both directions: it could happen sooner than the average guess, as well as later. We are now at around the same processing power as a human brain (depending on which aspects of brain function you measure), so it's all about algorithms.

6

Ortus12 t1_j1gh1kj wrote

Language (input) -> blackbox (brain) -> language (output)

LLMs solve the black box. Whatever algorithms run in the human brain, LLMs solve them: not for one human brain, but for all the greatest human brains that have ever written something down. LLMs alone are superintelligence at scale.

We'll be able to ask it questions like "how do we build a nanite swarm?" and "write me a program in Python for superintelligence that has working memory, can automate all computer tasks, and runs optimally on X hardware."

LLMs are superintelligence, but they'll also give birth to even more powerful superintelligence.

9

theotherquantumjim t1_j1hbwsd wrote

Is it not reasonable to posit that AGI doesn't need consciousness, though? Notwithstanding that we aren't yet clear on exactly what consciousness is, there doesn't seem to be a logical requirement for AGI to have it. Having said that, I would agree that a "language mimic" is probably very far from AGI, and that some kind of LTM, as well as multi-modal sensory input, cross-referencing, and feedback, is probably a prerequisite.

6

eve_of_distraction t1_j1ip6o9 wrote

>Is it not reasonable to posit that AGI doesn’t need consciousness though?

It's very reasonable. It's entirely possible that silicon consciousness is impossible to create. I don't see why subjective experience is necessary for AGI. I used to think it would be, but I changed my mind.

1

sumane12 t1_j1j5clw wrote

You bring up some good points. I think the recent optimism comes down to a number of things:

  1. Even though ChatGPT is not perfect and not what most people would consider AGI, it's general enough to be massively disruptive to society. Even if no further progress is made, ChatGPT offers a lot of low-hanging fruit in terms of productivity.

  2. GPT-4 is coming out soon, and it's rumoured to be trained on multiple datasets, so it should be even better at generalising.

  3. AI progress seems to be speeding up; we are closing in on surpassing humans on more measures than not.

  4. Hardware is improving, allowing for more powerful algorithms.

  5. Although Kurzweil isn't perfect at predicting the future, his predictions and timelines have been pretty damn close, so it's likely that this decade will be transformative in terms of AI.

You bring up a good point in questioning whether language is all that's needed for intelligence, and I think it possibly might be. Remember, language is our abstract way of describing the world, and we've designed it to encapsulate as much of our subjective experience as possible through description. Take my car, for example: you've never seen my car, but if I give you enough information, enough data, you will eventually get a pretty accurate idea of how it looks. It's very possible that the abstractions in our words could be reverse-engineered, given enough data, into a representation of the world we subjectively experience.

We know that our subjective experience is only our mind's way of making sense of the universe from a natural-selection perspective; the real universe could be nothing like it. It seems reasonable to me that the data we feed to large language models could give them enough information to develop a very accurate representation of our world and to massively improve their intelligence based on that representation. Does this come with a subjective experience? I don't know. Does it need to? I also don't know. The more research we do, the more likely we are to understand these massively philosophical questions, but I think we are a few years away from that.

2

fortunum OP t1_j1jb5w8 wrote

Yeah, thanks for the reply, that's indeed an interesting question. With this approach it seems that intelligence is a moving target: maybe the next GPT could write something like a scientific article with actual results, or prove a theorem. That would be extremely impressive, but like you say, it doesn't make it AGI or get it closer to the singularity. With the current approach there is almost certainly no 'ghost in the shell'. It is uncertain whether it could reason, experience qualia, or be conscious of its own 'thoughts'. Presumably it would also need to be self-motivated, to some extent autonomous, and to have a degree of agency over its own thought processes, all of which are true for life on earth at least. So maybe we are looking for something that we don't prompt, but something that is 'on', similar to a reinforcement learning agent.

2

sumane12 t1_j1jfdui wrote

I'd agree, I don't think we are anywhere near a ghost-in-the-shell level of consciousness, though a rudimentary, unrecognisable form may well have been created in some LLMs. But I think what's more important than intelligence at this point is productivity. I mean, what is intelligence if not the correct application of knowledge? And what we have at the moment is going to create massive increases in productivity, which is obviously required on the way to the singularity. Now it could be that this is the limit of our technological capabilities, but that seems unlikely given the progress we have made so far and the points I outlined above.

Is some level of consciousness required for systems that seem to show a small level of intelligence? David Chalmers seems to think so. We still don't have an agreed definition of how to measure intelligence, but let's assume it's an IQ test; I've heard that ChatGPT has an IQ of 83 (https://twitter.com/SergeyI49013776/status/1598430479878856737?t=DPwvrr36u9y8rGlTBtwGIA&s=19), which is low-level human. Is intelligence, as measured by an IQ test, all that's needed? Can we achieve superintelligence without a conscious agent? Can we achieve it with an agent that has no goals and objectives? These are questions we aren't fully equipped to answer yet, but they should become clearer as we keep building on what has been created.

1

overlordpotatoe t1_j1ghy2l wrote

Do you think it's possible to make an LLM that has a proper inner understanding of what it's outputting, or is that fundamentally impossible? I know current ones, despite often giving quite impressive outputs, don't actually have any true comprehension at all. Is that something that could emerge with enough training and advancement, or are they structurally incapable of it?

1

visarga t1_j1hwxat wrote

Yes, it is possible for a model to have understanding, to the extent that it can learn the validity of its own outputs. That would mean creating an agent-environment-goal setup and letting it learn to win rewards. Grounding speech in experience is the key.

Evolution through Large Models

> This paper pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP). Because such LLMs benefit from training data that includes sequential changes and modifications, they can approximate likely changes that humans would make. To highlight the breadth of implications of such evolution through large models (ELM), in the main experiment ELM combined with MAP-Elites generates hundreds of thousands of functional examples of Python programs that output working ambulating robots in the Sodarace domain, which the original LLM had never seen in pre-training. These examples then help to bootstrap training a new conditional language model that can output the right walker for a particular terrain. The ability to bootstrap new models that can output appropriate artifacts for a given context in a domain where zero training data was previously available carries implications for open-endedness, deep learning, and reinforcement learning. These implications are explored here in depth in the hope of inspiring new directions of research now opened up by ELM.
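To make that loop concrete, here's a minimal toy sketch of the ELM idea: an evolutionary search where the mutation operator is delegated to a language model. The `llm_mutate` function below is a hypothetical placeholder (it just nudges a numeric constant) standing in for the actual LLM diff-model call, and the fitness function is a toy objective rather than anything like Sodarace.

```python
import random

# Hypothetical stand-in for the LLM mutation call; in ELM a code-trained model
# would propose an edit to the parent program. Here we just perturb a constant.
def llm_mutate(program: str) -> str:
    old = program.split("=")[1].strip()
    new = str(round(float(old) + random.uniform(-1, 1), 3))
    return program.replace(old, new)

def fitness(program: str) -> float:
    # Toy objective: how close the program's constant lands to a target value.
    scope = {}
    exec(program, scope)          # each "program" just defines a variable x
    return -abs(scope["x"] - 3.14)

population = ["x = 0.0"]
for generation in range(200):
    parent = max(population, key=fitness)   # greedy selection of the best so far
    child = llm_mutate(parent)               # the "LLM" proposes the mutation
    population.append(child)

print(max(population, key=fitness))          # best evolved toy program
```

The real system also maintains a diversity archive (MAP-Elites) instead of a single greedy population, but the division of labour is the same: the language model proposes plausible edits, and the outer loop keeps whatever scores well.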

3

AndromedaAnimated t1_j1hrinc wrote

You should have written THIS in your original post instead of pushing blame around and crying that you don’t like the way the subreddit functions.

1

crap_punchline t1_j1j4l5g wrote

lol shit like this has been debated ad infinitum on this subreddit, imagine wading in here saying "HOLD ON EVERYBODY PhD COMING THROUGH - YES, I SAID PhD, YOU'RE DOING IT ALL WRONG" and then laying on some of the most basic bitch questions we shit out on the daily

get a job at OpenAI or DeepMind then get back to us

−2