Surur t1_jdxugcy wrote

Informed opinions are always more valuable, especially when she makes technical claims like:

> But GPT-4 and other large language models like it are simply mirroring databases of text — close to a trillion words for the previous model — whose scale is difficult to contemplate. Helped along by an army of humans reprograming it with corrections, the models glom words together based on probability. That is not intelligence.
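The "glom words together based on probability" claim refers to next-token prediction. As a rough illustration only (the vocabulary and probabilities below are made up, not from any real model), a language model scores candidate next words and samples one:

```python
import random

# Toy next-token prediction: given a context, the model assigns a
# probability to each candidate next word and samples one at random,
# weighted by those probabilities. All values here are invented.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
}

def sample_next(context, probs, rng=random.random):
    """Weighted random choice over the model's next-word distribution."""
    candidates = probs[context]
    r = rng()
    cumulative = 0.0
    for word, p in candidates.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # fallback for floating-point rounding at the tail

print(sample_next(("the", "cat"), next_word_probs))
```

Real models compute these probabilities with a neural network over a huge vocabulary rather than a lookup table, but the sampling step is the same idea.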

5

luniz420 t1_jdxvfvr wrote

That's just a fact.

Great article though

−1

SilentRunning OP t1_jdyvzth wrote

Are there ANY groups out there that have an A.I. system that can create a conversation without gleaning from databases on the internet?

But it is an opinion piece, so yes informed opinions matter a bit.

−1

SomeoneSomewhere1984 t1_je0bbhp wrote

Are there any people who can hold a conversation after being raised alone in a dark room?

6

SilentRunning OP t1_je2jia2 wrote

Comparing oranges to a doorknob. Is a computer conscious? I argue that it isn't. It has no idea what to do until it is turned on. Same thing with A.I.: until it receives a prompt, it will just sit there. If it gets something wrong/incorrect, it doesn't correct itself; it has to be reprogrammed by a human.

1

SomeoneSomewhere1984 t1_je2myii wrote

>If it gets something wrong/incorrect, it doesn't correct itself; it has to be reprogrammed by a human.

That's not even accurate. It can realize it's wrong.

3

SilentRunning OP t1_je2navi wrote

It is programmed to know when some data is incorrect; it doesn't realize anything. And yet it can't correct the method that produced the incorrect data until a human corrects the program. Until that happens, it continues to return incorrect results for the same prompts. This gives the impression that it is learning on its own, but that is far from the truth. Each version of GPT was updated by human coders; it hasn't learned anything on its own and is far from being able to.

0