Comments

AsuhoChinami t1_irb5hhk wrote

More like 100%, but it's good to see the timelines people put forth shifting down in general.

17

Mino8907 t1_irb88zg wrote

I wonder if we will solve most of the scientific problems in life before AGI.

6

Shelfrock77 t1_irbc9gx wrote

Who is gonna guess AI’s birthday? 🎂

25

ihateshadylandlords t1_irbfc59 wrote

Do you all think it will be in the hands of a few or still in the proof-of-concept stage by 2032? Or do you think it will be publicly available to the average person by 2032?

2

TFenrir t1_irbgq74 wrote

What do you mean by proof of concept? These are real models, and real insights that we gain; those insights are sometimes applied almost immediately to models the public has access to.

Do you mean, in this 2032 prediction, whether they're talking about AGI being available to the public or only to people behind closed doors? It would be the latter, because the nature of this prediction is that AGI would emerge on the bleeding-edge supercomputers that Google is using in its research.

Honestly, I'm not even sure how AGI could be turned into a product; it would just be too... disruptive? The nature of how regular people would interact with it is a big question mark to me.

3

Shelfrock77 t1_irbhu8j wrote

When I argue about the birth of AI, I compare it to the pro-life and pro-choice debate: "When is a fetus considered conscious?" Then you see everyone saying "6 weeks", "8 months", "12 months and 2 weeks", "when a sperm and egg meet and start building", blah blah blah. If you have to argue in the first place, it means it's a gradient. Every entity exists for eternity, but they form differently over time until you get to the steep part and "die", then reabsorb back into the lake of consciousness, aka the universe, aka god. When people ask me what I believe in, I believe in myself, because I'm here right now; if my parents had never met, I would've been born on some other rock.

I'm 100% confident alien races in space are debating this issue rn, or are about to in the future or past, depending on the way light (cuz u know, space and time) is emerging 😭. Anyways, everything is god; when a universe dies, a new one is born as you read this, and the next frame, and the next frame 🖼

Every human born was a sperm and egg, but how about before that? And what came before ____? And what came before _____? Was I in two places at once at some point? More answers that lead to more questions, until you realize you end where you started, because the universe is infinite.

And don't forget about animals, and smaller animals, and insects, and plants, and bacteria..., and ...... It never ends; what comes around goes around. Anyways, I don't mean to give y'all an existential crisis. Have a good day.

8

ihateshadylandlords t1_irbhzbm wrote

By proof of concept, I meant that it was something that they’ve disclosed they have, but aren’t making it publicly available for whatever reason.

If the AGI model can be applied to programs that the public can use (like GPT-3), then that would be great.

2

Dr_Singularity OP t1_irbj7gz wrote

No way we will "solve" or "finish" most of science before 2023-2025 (my AGI arrival estimate). Even after AGI/ASI, with trillion-x acceleration, it will probably take thousands of years (assuming the universe and its complexity are finite); more likely there is an infinite number of combinations, of things we can engineer and discover. The universe is most likely infinite.

But the good thing is that the complexity of aging/human biology is finite, and with advanced AIs we can fully understand and reverse it. With aging reversed (ultimately we will transcend and won't even need tech to "repair" ourselves), we will have all the time we need to explore the multiverse.

24

TFenrir t1_irbos42 wrote

> If the AGI model can be applied to programs that the public can use (like GPT3), then that would be great.

Publicly available AGI models just wouldn't be possible for quite a while after AGI's invention, though. I don't even really call GPT-3 publicly available: you get API access, but you don't actually have access to the model itself. We do have genuinely public models, though: Stable Diffusion, GPT-J, RoBERTa, etc.
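
To make the distinction concrete, here's roughly what running one of those downloadable models locally looks like (a sketch assuming the Hugging Face transformers library; the exact model id may differ):

```python
from transformers import pipeline

# The full weights get downloaded to your machine, and inference runs
# on your own hardware; no API key or remote server involved.
# GPT-J-6B alone wants tens of GB of memory, which is part of the point.
generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")
print(generator("AGI will arrive", max_length=20)[0]["generated_text"])
```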

Regardless, think of it this way... Imagine a group of scientists standing around a monitor attached to a private, heavily secured internal network, which uses a distributed supercomputer just to run inference on a model they've finished training in a very secure facility. By this point, the models before it have been so powerful that containment is a legitimate concern.

They are there to evaluate whether this model constitutes an AGI, whether it has anything resembling consciousness, and whether it's an existential threat to the species.

They're not going to just... release that model into the wild. They're not even going to give the public access to, or awareness of, this model in any way, shape, or form for a while.

That doesn't even get into the hardware requirements that would probably exist for this first model.

3

red75prime t1_irbptwo wrote

I expect that AGI running on widely available hardware will be thick as a brick (no, distributed computing won't help much, due to relatively low throughput and high latency).

Well, it will be a witty conversational partner, but one that's extremely slow at acquiring new skills or understanding novel concepts.

3

ihateshadylandlords t1_irbth00 wrote

I'm not tech savvy at all; I didn't know there was a difference between API access and the GPT-3 model itself. But yeah, that's why I'm not as hyped as a lot of people on here about AGI being created. It's not like we'll be able to use it (unless someone creates an open-source version of it).

1

Yuli-Ban t1_irc13na wrote

Personal opinion hasn't changed much:

  • Proto/Frozen AGI will be here by 2024 (think "Supersized Gato with task interpolation and commonsense reasoning")

  • Oracle-like/weak AGI between 2024 and 2027

  • Human-level strong AGI by 2032???

Possibly too conservative on that last one, but better to be conservative on things like these so you're pleasantly surprised when things come true ahead of time.

39

Yuli-Ban t1_irc47b4 wrote

Quantitative ASI: right after AGI is created because there's not much of a tangible barrier between subhuman, par-human, and superhuman task completion. Could be as soon as 2024 with "frozen superintelligence."

Qualitative ASI: probably mid-2030s. We'll probably need a lot of neurofeedback data to get to strong AGI and then true superintelligence.

Singularity: not going to lie, I have my doubts about a Kurzweilian Singularity. I think the effects of ASI will resemble it for a while, so again, 2030s into 2040s.

Edit: Should probably stress that quantitative ASI is better described as "superhuman general task automation." We already have superhuman AI in very narrow fields like chess, Go, and arithmetic. You can consider these narrow task-automation programs, since the "AI" moniker is tenuous to begin with.

21

iNstein t1_irceopl wrote

API stands for Application Programming Interface. It is basically a set of commands that programmers can use to access and communicate with another program, like GPT-3: a kind of specialised instruction set for that program.

Having an API connection to something like GPT-3 is, in a functional sense, very similar to having GPT-3 running on your own computer. It just means that you don't have to own the high-performance hardware to run it. It's the best way to get as many ordinary people as possible using something like GPT-3 without us all going out and buying extremely expensive hardware.
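
For example, here's a minimal sketch of what API access looks like from the programmer's side (assuming the openai Python client; the key and model name are placeholders):

```python
import openai

openai.api_key = "sk-..."  # your secret key (placeholder)

# The prompt travels to OpenAI's servers, inference runs on their
# hardware, and only the generated text comes back over the network.
response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Explain what an API is in one sentence.",
    max_tokens=60,
)
print(response.choices[0].text)
```

Your machine never holds the model weights; it just exchanges text with the server.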

3

HumpyMagoo t1_ircnozm wrote

If you believe that we are in a simulation, then it would be like finding the end of the game and deciding to keep playing after everything is solved, or perhaps like New Game+ on the title screen.

2

imlaggingsobad t1_ird13t9 wrote

The most powerful and sophisticated models will be controlled by the largest tech corporations (e.g. Google, Meta, Microsoft, Amazon, IBM, Nvidia, etc.). They will slowly embed the AI into their products as enhancements/features, and they will also rent out their models to other businesses, sort of like an 'AI model as a service'. Initially, I think access to a proto-AGI API will be expensive because it will be quite capable, but costs will decrease dramatically as more companies offer the service. While all of this is happening, there will be hundreds of open-source versions that are less powerful but still very good.

My prediction is that one of these large tech companies will get to AGI first, but they will keep it contained within their company for some time. They won't release it to everyone straight away. In the meantime, other large companies will get to AGI just by following the breadcrumbs, and then quickly we will have AGI (or at least proto-AGI) in the hands of several companies. Once that happens, there will be a race to monetize and productize it. It will take the world by storm. Every business will be racing to adopt these AGI models to improve their efficiency/output or whatever. Then the open-source community will crack the code, and it will be available to pretty much everyone. Whether you'll be able to run it on your own machine is another issue, though.

3

WashiBurr t1_ird3u2p wrote

What a time to be alive! Whether we're about to see the beginning of a utopia or the end of humanity, it will be an amazing sight to behold.

18

94746382926 t1_ird4o86 wrote

Just to push back on this a little bit: if this chart is accurate, then why haven't we accurately modeled the brain or behavior of C. elegans? According to the chart, AlphaGo is already many orders of magnitude more powerful, and yet we haven't achieved this.

5

Bierculles t1_irdlaue wrote

We have probably solved less than 0.1% of the scientific problems out there, so I doubt it. I don't think there is even an end to scientific problems.

1

Bierculles t1_irdlh40 wrote

Hard to say. The thing is, one slipup from the top is enough for AGI to spread exponentially. Also, I reckon that countries that make it available to the public are going to rapidly outpace those that don't in pretty much every way possible.

1

beachmike t1_irdntjn wrote

I don't think the result will be a binary either/or. The agricultural and industrial revolutions each brought great advantages to humans as well as many new problems. The same will be true for AGI and ASI. I am certain, however, that "utopia" will remain a pipe dream, even after a technological singularity.

2

sumane12 t1_irdoxqe wrote

This is true. The world will get better, as it always has, but true utopia is not in our nature. We always reach beyond our capabilities, which means we will always want something we can't have; ergo, we will never have utopia, no matter how good life is. And since everyone's definition of utopia is different, everyone would have to agree that we had achieved it. Not to mention, I'm sure heroin addicts believe they have utopia when they're high, but most of us would consider being in that state permanently a waste of life.

My personal hope is for a Star Trek-like existence with no war and no crime, and for aging to be solved. It's a big ask, but that's my definition of utopia, and I think it's achievable; even then, we would still have problems to solve.

2

LaukkuPaukku t1_irdp8qm wrote

One of the most important breakthroughs yet to be made, as I understand it, is a form of working memory. It could take only a couple of years to crack that nut (perhaps by having the model output values on each step, derived in some way from its activated neurons, that are placed into the next iteration's context), and suddenly AI would be granted a whole other realm of abilities in long-term planning and coherence.
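
Very roughly, the recurrence I have in mind would look something like this (a toy PyTorch sketch; all names and sizes are made up for illustration):

```python
import torch
import torch.nn as nn

class MemoryAugmentedStep(nn.Module):
    """One step that carries a few 'working memory' slots forward."""
    def __init__(self, d_model=64, n_mem=4, n_heads=4):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.n_mem = n_mem

    def forward(self, tokens, memory):
        # Prepend the memory slots to the input tokens, run one attention
        # block, then split the output: the updated slots become the
        # memory placed into the next iteration's context.
        x = self.block(torch.cat([memory, tokens], dim=1))
        return x[:, self.n_mem:], x[:, :self.n_mem]

step = MemoryAugmentedStep()
memory = torch.zeros(1, 4, 64)           # empty working memory to start
for chunk in torch.randn(3, 1, 10, 64):  # three consecutive input chunks
    out, memory = step(chunk, memory)    # memory persists across steps
```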

4

NeutrinosFTW t1_irdpatt wrote

Both the agricultural and the industrial revolutions only increased the amount of energy that humanity could use to do work; they didn't introduce new players to the game. The advent of ASI means the creation of an entity with greater capabilities than humanity and (possibly) divergent goals, which is something that has never happened before. Most experts believe the singularity will lead to one of two extremes for us: total annihilation or AI-powered utopia.

6

wen_mars t1_ire9yvm wrote

We don't fully understand how biological neurons work, and mapping the physical layout of a brain doesn't tell us how it works. Other big limiting factors in AI performance are training data and evaluating task performance. We don't have a simulation environment that accurately replicates the life of a worm, and we don't have millions of years of accumulated training data from simulating evolution.

5

wen_mars t1_ireahel wrote

If the current state of the art is any indication, organizations with access to big compute will keep publishing ever-better AI models for people to download. You won't have to train an AI on your home computer to get one that is up to date on reasonably new concepts.

1

red75prime t1_iredfs9 wrote

An intelligent agent that just can't excel at some task until a future update (and that, after the update, forgets how to do some other task swiftly, thanks to the limited model size that real-time operation requires). Episodic-memory compatibility problems (due to changes in the model's latent space) making the agent misremember facts. Occasional GPU memory thrashing (the agent slows to a crawl). And other problems I can't foresee.

All in all, a vast opportunity for enthusiasts, a frustrating experience for casuals.

1

beachmike t1_irehbdg wrote

So-called "experts" are often wrong. In fact, there are NO experts on the technological singularity; if we knew what was actually coming, it wouldn't be a singularity event. A singularity, by definition, is unknowable by anything outside of it. However, you miss my point: every revolution in human affairs brings advantages for humans as well as disadvantages and many new problems. Therefore, the mythical utopia WILL NOT occur. I also highly doubt that ONLY one of two extremes will occur as a result of a technological singularity; that's far too simplistic.

1

RegularBasicStranger t1_irf4usc wrote

Neurons that activate will synapse with the neuron activated just before them, so the visuals of a person and the background would be treated as one single concept in the beginning, when the person gives the AI pleasure: the whole visual being one neuron.

Then, when the person is in a different background and still gives pleasure, the AI will learn to put more value on the pixels representing the person, since the person is present in both pleasurable visuals.

Then, when the AI sees only the background and gets no pleasure, the pixels representing the background lose value due to irrelevance, and thus a neuron for the person alone forms.

The neurons activated will also be stored sequentially as working memory, so the AI knows which neurons were activated and in what sequence (rechecking the list, starting from the most recent entry or from important milestones, also adds to the working memory, so the AI will know it had recalled the past).

So AGI should only need: neurons synapsing with the neuron activated immediately before them (the greater the increase in pleasure or fear, the stronger the synapse); a lot of sensor types (not just text but also visuals, sounds, temperature, size, shape, texture, etc., since it is the unique combination of sensations that differentiates one similar concept from another, or else they would be identical and prevent learning); a list of neurons activated in sequence; and something that gives it pleasure and something that gives it fear, so it has something to aim for. Otherwise it cannot be AGI.
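
A toy sketch of the association scheme described above (everything here is hypothetical, just to make the idea concrete):

```python
from collections import defaultdict

synapse = defaultdict(float)  # (previous_neuron, next_neuron) -> strength
working_memory = []           # ordered log of every activation

def activate(neuron, pleasure=0.0):
    """Fire a neuron: link it to whichever neuron fired just before it,
    with the link strengthened by the pleasure present right now."""
    if working_memory:
        synapse[(working_memory[-1], neuron)] += 1.0 + pleasure
    working_memory.append(neuron)

activate("person+background_A", pleasure=1.0)
activate("person+background_B", pleasure=1.0)  # the person recurs across rewards
activate("background_A_alone", pleasure=0.0)   # the background alone earns nothing
```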

0

Mr_Hu-Man t1_irhzyeg wrote

I agree with you, but then disagree with you. You first make the point that, by definition, what happens on the other side of the singularity is unknowable; then, in the next sentence, you make hard predictions. I get what you're saying, but I'd change your language to say that 'the mythical utopia MAY NOT occur... or MAYBE it will 🤷🏻‍♂️ we literally can't know'.

1

DukkyDrake t1_irimfna wrote

You assume it will be something that can run on a few high-end home computers? It could require a computational substrate as big as a building, consuming gigawatts of power.

1

DukkyDrake t1_iriq8dq wrote

If it wasn't already obvious, the last 2 years should have demonstrated to all that governments around the world can seize any property.

When the main AI developers stop publishing, you can take that as a sign they have something they think can give them a competitive advantage. When the government steps in and seizes their operation so it's in safer hands, you can take that as a sign they have something transformational in hand.

1

beachmike t1_irm3exb wrote

I believe the mythical utopia is forever unobtainable for legacy humans even after the technological singularity, due to our very make-up. However, for those of us who choose to move into the infinite realm beyond human (which the singularity will make possible), all bets are off.

2