
GuyWithLag t1_j1hgj0h wrote

Reply to comment by Ortus12 in Hype bubble by fortunum

Dude, no. Listen to the PhDs - the rapture isn't near, not yet at least.

On a more serious note: this is what the OP means by a "hype bubble". Professionals working in the field know that the current crop of AI models is not suitable as an architecture for AGI, except maybe as components of one. Overtraining is a thing, and it's been shown that overscaling is too. Dataset size is king, and the folks behind the headline-grabbing models have already fed the public internet into their datasets.
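The "dataset size is king" point can be made concrete with a toy sketch. This assumes the Chinchilla-style rule of thumb of roughly 20 training tokens per model parameter (Hoffmann et al., 2022) — a number from the scaling-law literature, not from this comment:

```python
# Toy compute-optimal training estimate: the Chinchilla scaling result
# suggests roughly 20 training tokens per model parameter.
TOKENS_PER_PARAM = 20

def optimal_tokens(params: float) -> float:
    """Rough compute-optimal token budget for a model with `params` parameters."""
    return TOKENS_PER_PARAM * params

# A hypothetical 1-trillion-parameter model would want ~2e13 training
# tokens -- on the order of all usable public web text.
print(f"{optimal_tokens(1e12):.0e} tokens")
```

The point of the sketch: token demand grows linearly with parameter count, so past a certain scale the bottleneck is data, not compute.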

From a marketing standpoint, there's a second-mover advantage: see what others did, fix the issues, and choose a different promotion vector. You're seeing many AI announcements in a short span because of a bandwagon effect: a small number of teams showing off multiple years' worth of work at once.

6

lil_intern t1_j1hnp2k wrote

If by rapture you mean evil robots dragging people out of their houses, then sure, that's far off. But what about millions of people's careers becoming obsolete overnight, every other month, due to AI growth in unexpected fields? That seems pretty close.

3

Ortus12 t1_j1hzcoy wrote

The currently popular AI models are simply the ones that work best on current hardware.

We've already designed tons of different models, outlined in many older AI books, that can be used as compute scales (as AI companies make more money to spend on more compute). Even the current models weren't invented recently; they're just now practical because the hardware is there.

There have been a few algorithmic optimizations along the way, but a larger portion of the scaling has come from hardware.

Second-mover companies are taking out first movers by improving on their work, but that still keeps the ball moving forward.

1

ThePokemon_BandaiD t1_j1ipluc wrote

First of all, today's big datasets aren't the full internet, just large subsections: specific datasets of images or plain text. We also generate roughly 100 zettabytes of new data per year as of this year, and generative models can — with humans sorting it for value, for now — produce their own training data. And while currently available LLMs and image recognition/generation models are still quite narrow, models like Gato and Flamingo have shown that multimodal models are at least possible with current tech, and imo it's pretty clear that narrow AI models could be combined into a program that acts as an AGI agent.
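The "combine narrow models into an agent" idea can be sketched as a controller that routes each subtask to a specialized component. This is a toy illustration only — the "models" below are hypothetical stubs, not real APIs:

```python
# Toy sketch: composing narrow models into one agent loop.
# caption_image and answer_question are hypothetical stand-ins
# for a vision model and an LLM respectively.

def caption_image(image: str) -> str:
    """Stand-in for a narrow image-captioning model."""
    return f"a photo of {image}"

def answer_question(context: str, question: str) -> str:
    """Stand-in for a narrow question-answering model."""
    return f"Given '{context}', the answer to '{question}' is unknown."

def agent(task: dict) -> str:
    """Route each task to whichever narrow model handles its kind."""
    if task["kind"] == "vision":
        return caption_image(task["input"])
    if task["kind"] == "qa":
        return answer_question(task.get("context", ""), task["input"])
    raise ValueError(f"no narrow model for kind {task['kind']!r}")

print(agent({"kind": "vision", "input": "a cat"}))
```

The routing logic is the interesting part: the agent itself needs no general intelligence, only a way to decompose tasks and dispatch them — which is roughly the claim being made above.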

1

YesramDeens t1_j1jzcgo wrote

> Listen to the PhDs - the rapture isn't near, not yet at least.

Stop with this misinformation; for every three PhDs saying we will have an AI winter, there are six AI researchers at companies like OpenAI and DeepMind who are extremely excited about the potential of the systems they are creating.

Your unnecessary doomerism is born of a sense of superiority and arrogance about your own knowledge. Don't be humbled later on.

1