drekmonger t1_je74aq3 wrote

Also noteworthy: we "train" and "infer" at a fraction of the energy cost of running an LLM, and that's including the necessary life support and locomotor systems. With transformer models, we're obviously brute-forcing something that evolutionary biology has developed more economical solutions for.
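
For a rough sense of the gap, here's a back-of-envelope sketch. Both wattage figures are ballpark assumptions (a brain at roughly 20 W, one multi-GPU inference server at several kilowatts under load), not measurements:

```python
# Back-of-envelope comparison; both wattage figures are rough
# ballpark assumptions, not measurements.
brain_watts = 20        # approximate resting power draw of a human brain
gpu_node_watts = 6_500  # assumed draw of one 8-GPU inference server under load

ratio = gpu_node_watts / brain_watts
print(f"One inference node draws ~{ratio:.0f}x a human brain")
# -> One inference node draws ~325x a human brain
```

And that single node is serving one model, not a person's full sensory and motor workload.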

There will come a day when GPT 5.0 or 6.0 can run on a banana peel.

1

drekmonger t1_je73xjv wrote

While the statement that "AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful" describes a possibility, it is not a defining requirement of AGI.

AGI is primarily characterized by its ability to learn, understand, and apply knowledge across a wide range of tasks and domains, similar to human intelligence.

Recursive self-improvement, the mechanism behind the so-called intelligence explosion, refers to an AGI system that can improve its own architecture and algorithms, leading to rapid advancements in its capabilities. While this scenario is a potential outcome of achieving AGI, it is not a necessary condition for AGI to exist.

--GPT-4

11

drekmonger t1_ja2q5vf wrote

Well, of course there will be something like "holodeck modules" that are meant to be interactive. But I also think there will be more static experiences that you can optionally fiddle with.

Imagine a very dense natural language description of a changing scene that a super advanced AI is rendering in real time.

2

drekmonger t1_ja23aao wrote

I had an interesting conversation with ChatGPT about the idea of "semantic compression".

Imagine if popular TV shows were broadcast not as video, but as extremely detailed instructions to an AI model, which rendered the experience as if the model were a codec.

There could be knobs you could adjust during inference. Like, "Make all the actors naked," or "Less graphic violence, please!" Or, "I really don't like that guy's voice. Make him less annoying. Actually, just write him out of the show."

The AI model could inform you, "That change will have a significant impact on the narrative. Are you sure?" With enough changes, you'd be watching something completely different from what everyone else is.
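
To make the idea a bit more concrete, here's a toy sketch of what the broadcast format and the viewer-side knobs might look like. Everything in it (the class names, the fields, the render stub) is invented for illustration, not a real system:

```python
from dataclasses import dataclass, field

@dataclass
class SceneInstruction:
    """One slice of 'semantic video': a dense description the
    viewer's model renders locally, like a codec decoding a stream."""
    timestamp: float
    description: str  # dense natural-language scene state
    continuity: dict = field(default_factory=dict)  # character/plot state

@dataclass
class ViewerKnobs:
    """Local preferences applied at inference time, never broadcast."""
    violence_level: float = 1.0  # 0.0 = none, 1.0 = as authored
    recast_voices: dict = field(default_factory=dict)  # character -> new voice
    written_out: set = field(default_factory=set)      # characters to remove

def render(instruction: SceneInstruction, knobs: ViewerKnobs) -> str:
    """Stub for the model call that would turn instructions plus knobs
    into frames; here it only shows where the edit hooks would live."""
    desc = instruction.description
    for character in knobs.written_out:
        desc += f" (Omit {character}; reassign their lines.)"
    if knobs.violence_level < 1.0:
        desc += " (Tone down graphic violence.)"
    return desc  # a real system would hand this to the renderer
```

The point of the split is that the broadcast stream stays identical for everyone, while the knobs live entirely on the viewer's device.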

17

drekmonger t1_j9iios3 wrote

Heh. I tried their rationalization step with ChatGPT, just with prompting. For their question about the fries and crackers, it said the problem is flawed because there are such things as low-salt and no-salt crackers. It also correctly inferred that fries are usually salted, but don't have to be. (Of course, it didn't have the picture to go by, which was the point of the research.)

Great paper though. Thanks for sharing.

8

drekmonger t1_j9eg3a4 wrote

>I guarantee that people will care about AI really quickly as soon as it affects them personally. But we’re not at that point yet.

It's going to be a slowly boiled frog. By the time the average person is significantly impacted, they'll attribute the effect to literally anything else.

Something similar is happening with climate change. My city was hit by pretty much the worst ice storm ever, after the trees had already been weakened by drought. It knocked out power lines all over the city, and people spent the week the power was out bitching at the local government. It took a herculean effort to get the grid fixed; in some cases, trees that had stood for nearly a century had fallen over and taken out power poles.

I got hit, too, and was in the dark for the better part of a week. But complaining to the mayor and the head of the local power utility about formerly impossible weather events is about as cogent as blaming my cat.

17

drekmonger t1_j7luzv7 wrote

Reply to comment by ccnmncc in 200k!!!!!! by Key_Asparagus_919

>It was authored in 1993.

ChatGPT did me dirty. Prior to that comment I asked it to remind me who wrote the essay and when. It said 1983, and then I failed to look at the date on the essay itself.

Good catch.

3

drekmonger t1_j7lia04 wrote

Reply to comment by EddgeLord666 in 200k!!!!!! by Key_Asparagus_919

The Singularity, as it was originally imagined, included potential transhumanist scenarios alongside a purely technological singularity. The original essay is still well worth reading, even 30 years later.

But the doomsday scenario the essay was ultimately warning against was that the Singularity would occur rapidly as a shocking cascade of events.

Perhaps in the "pet human" scenario, a benevolent ASI might slowly augment people as individuals.

Regardless, the problem is one of alignment, and I don't think you or I have much say in that. Even if a relatively benevolent organization like OpenAI develops the first AGI, its competitors (like, say, China's AI research efforts) won't be so benevolent.

As in capitalism, the most unethical strategy will tend to dominate ethical strategies. The "bad" AIs will win any race.

3