natepriv22 t1_j64prx8 wrote

Uh no. If a company develops AGI, it will become the most important company in history.

If you can't imagine what an actual AGI would be like and what its effect on society would be (nobody can accurately predict that, of course), then you can't make this claim about profits.

What if the AGI decides it likes OpenAI, and that's the company that should get the first sci-fi-level fusion reactors? When talking about AGI, you just can't seriously make this kind of prediction imo.

1

ArgentStonecutter t1_j64r7go wrote

You have a really romantic view of what an AGI is.

1

natepriv22 t1_j64rkys wrote

How so?

I have to admit I've never heard this kind of response before. AGI is when an AI can answer in such an unexpected way lol.

1

ArgentStonecutter t1_j656714 wrote

AGI is an artificial general intelligence: an intelligence capable of acting as a general agent in the world. That doesn't imply that it's smarter than a human, or capable of unlimited self-improvement, or able to answer any question or solve any problem. An AGI could be no smarter than a dog, but even one as competent as a dog would be a huge breakthrough.

A system capable of designing a cheap fusion reactor doesn't need general intelligence; it could be an idiot savant, or not recognizably an intelligence at all. From a business's point of view, it should be an oracle that simply answers questions, with no agency at all. General intelligence is likely to be a problem to be avoided for as long as possible: you don't want to depend on your software "liking" you.

Vinge's original paper talked about a self-improving AGI, but people seem to have latched on to the "AGI" part and ignored the "self-improving" part. He was talking about one that could update its own fundamental design, or design successively more capable successors.

1