
1II1I11II1I1I111I1 t1_je0g9bo wrote

Would you say the Microsoft paper from LESS THAN TWO WEEKS AGO saying early forms of AGI can be observed in GPT-4 isn't the "thoughts of professionals and academics"?

All an AGI needs to be able to do is build another AI. The whole point is that ASI comes very soon after AGI.

4

[deleted] t1_je0h37m wrote

[removed]

−3

Few_Assumption2128 t1_je0mint wrote

Goofy take. It is true that we don't yet fully understand consciousness. But calling official Microsoft papers clickbait is some next-level dogshit take.

Also, we kind of do understand what improvements "could" be made to LLMs for them to get better and eventually gain consciousness. These improvements were discussed in the "clickbait Microsoft papers".


It seems to me the only one not actually reading those papers is you.

7

hyphnos13 t1_je0wqe9 wrote

Why does AGI need to be conscious?

In fact, why does it have to be general? A bunch of specialized networks that can speed up human science or discover things on their own will advance progress in a way that is indistinguishable from an AGI acting on its own.

If we build a machine intelligence capable of improving other AIs and the hardware they run on, then even specialized "dumb" AIs will drive development faster than humans can keep up.

2

[deleted] t1_je0pti8 wrote

[removed]

0