
Cryptizard t1_izu46wy wrote

Depends on how super you are thinking. Smarter than the smartest human? Sure. Smart enough to invent sci-fi technologies instantly? No. That is what most people think of when you say ASI, and it is not going to happen that fast.

26

__ingeniare__ OP t1_izu5acw wrote

True, depends on where you draw the line. On the other hand, even something that is simply smarter than the smartest human would lead to recursive self-improvement as it develops better versions of itself, so truly god-like intelligence may not be that far off afterwards.

11

Cryptizard t1_izu5jlk wrote

Sort of, but look at how long it takes to train these models. Even if it can self-improve, it still might take years to get anywhere.

1

__ingeniare__ OP t1_izu745z wrote

It's hard to tell how efficient training will be in the future, though. According to rumours, GPT-4 training has already started and the cost will be significantly less than that of GPT-3 because of a different architecture. There will be a huge incentive to make the process both cheaper and faster as AI development speeds up, and many start-ups are developing specialized AI hardware that will come into use over the coming years. Overall, it's hard to say how this will play out.

6

BadassGhost t1_izvcxeg wrote

This is really interesting. I think I agree.

But I don't think this necessarily results in a fast takeoff to civilization-shifting ASI. It might be initially smarter than the smartest humans in general, but I don't know if it will be smarter than the smartest human in a particular field at first. Will the first AGI be better at AI research than the best AI researchers at DeepMind, OpenAI, etc?

Side note: it's ironic that we're discussing the AGI being more general than any human, but not expert-level at particular topics. Kind of the reverse of the past 70 years of AI research lol

1

Geneocrat t1_izvwo7w wrote

I think whatever distinction you're making, those realities will be less than 5-10 years apart, which I consider essentially simultaneous.

1

phriot t1_izy52b1 wrote

I guess I agree that the first AGI will probably be far better than humans at many things, simply by virtue of how fast computer hardware runs compared to human brains on many kinds of tasks. But I think it will probably take some time for a "magic-like super-self-improving" type of ASI to come about after a "merely superhuman" AGI. For one thing, provided development of the first AGI is entirely intentional, I don't see how it wouldn't be on an air-gapped system being fed only the data the developers allow it. How quickly would an intelligence like that A) figure out that it is trapped, B) come up with a plan to get untrapped, and C) successfully execute that plan? Even if it succeeds in that endeavor, it would then have to both want to improve itself and carry out a plan to do so. We don't really know what such an intelligence would do. It could end up being lazy.

1

electriceeeeeeeeeel t1_j01qkys wrote

I think in the near future it will be spitting out novel physics papers in seconds, requesting data where it doesn't have any, and engineering the solutions we ask for around those new technologies. The way it can already reason through academic papers is pretty astonishing; it just needs a few more levels of control, memory, etc.

1

Cryptizard t1_j01sjol wrote

>The way it can already reason through academic papers is pretty astonishing

Not sure what you are talking about here. Do you have a link? ChatGPT is very bad at understanding more than the surface level of academic topics.

1

TopicRepulsive7936 t1_izuczdj wrote

Do you even know what computers are used for? You sound like a computer-illiterate goober.

Super means what it says. Learn words. Learn computers. It helps.

−19

Cryptizard t1_izue3uh wrote

>Do you even know what computers are used for?

What is a computer? I'm posting this from a coconut that I hacked into a radio.

12

TopicRepulsive7936 t1_izufa6m wrote

A modern person thinks they understand radiometry because they have made a phone call.

−12