Submitted by SilentRunning t3_1245bzj in Futurology
Surur t1_jdxtplf wrote
The author has a bachelor's degree in journalism and sociology and has only been a technology writer for two years.
I doubt she is qualified to say there is no such thing as AI.
SilentRunning OP t1_jdxtwri wrote
Well, it is an opinion piece that raises a very interesting question, which doesn't (and shouldn't) restrict discussion.
Surur t1_jdxugcy wrote
Informed opinions are always more valuable, especially when she makes technical claims like:
> But GPT-4 and other large language models like it are simply mirroring databases of text — close to a trillion words for the previous model — whose scale is difficult to contemplate. Helped along by an army of humans reprograming it with corrections, the models glom words together based on probability. That is not intelligence.
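To unpack the probability part: a language model repeatedly picks the next word at random, weighted by learned probabilities. A minimal toy sketch (invented numbers and a one-word context, nothing like GPT-4's real internals):

```python
import random

# Toy "language model": for each word, the probabilities of the next word.
# Real models learn billions of numeric weights from text; this table is invented.
next_word_probs = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.5},
    "a":       {"cat": 0.3, "dog": 0.7},
    "cat":     {"sleeps": 0.7, "runs": 0.3},
    "dog":     {"sleeps": 0.2, "runs": 0.8},
    "sleeps":  {"<end>": 1.0},
    "runs":    {"<end>": 1.0},
}

def generate():
    """Pick each next word at random, weighted by its probability."""
    word, sentence = "<start>", []
    while word != "<end>":
        choices = next_word_probs[word]
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        if word != "<end>":
            sentence.append(word)
    return " ".join(sentence)

print(generate())  # e.g. "the dog runs"
```

Real models condition on the whole preceding context rather than just the last word, but the sampling step is the same idea.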
luniz420 t1_jdxvfvr wrote
That's just a fact.
Great article though
SilentRunning OP t1_jdyvzth wrote
Are there ANY groups out there that have an A.I. system that can hold a conversation without gleaning from databases on the internet?
But it is an opinion piece, so yes, informed opinions do matter a bit.
SomeoneSomewhere1984 t1_je0bbhp wrote
Are there any people who can hold a conversation after being raised alone in a dark room?
SilentRunning OP t1_je2jia2 wrote
Comparing oranges to a doorknob. Is a computer conscious? I argue that it isn't: it has no idea what to do until it is turned on. Same thing with A.I.; until it receives a prompt, it will just sit there. If it gets something wrong, it doesn't correct itself; it has to be reprogrammed by a human.
SomeoneSomewhere1984 t1_je2myii wrote
>If it gets something wrong/incorrect it doesn't correct itself it has to get reprogrammed by a human.
That's not even accurate. It can realize it's wrong.
SilentRunning OP t1_je2navi wrote
It is programmed to flag when some data is incorrect; it doesn't realize anything. And yet it can't correct the method that produced the incorrect data until a human corrects the program. Until that happens, it keeps returning incorrect results for the same prompts. This gives the impression that it is learning on its own, but that is far from the truth. Each version of GPT was updated by human coders; it hasn't learned anything on its own and is far from being able to.
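In code terms, the "not learning on its own" point looks roughly like this (a toy illustration using a lookup-table "model"; real GPT weights are numeric, but they are likewise frozen at inference time):

```python
# Toy illustration: generation only READS the learned weights.
# Updating them is a separate, human-initiated training step.

weights = {"capital of france": "paris"}  # pretend these were learned in training

def answer(prompt: str) -> str:
    # Inference: look up an answer; never modifies `weights`.
    return weights.get(prompt.lower(), "i don't know")

def retrain(prompt: str, correction: str) -> None:
    # Only this separate step (run by people) changes the model.
    weights[prompt.lower()] = correction

print(answer("Capital of France"))  # -> paris
print(answer("Capital of Peru"))    # -> i don't know (and it stays wrong...)
retrain("capital of peru", "lima")  # ...until someone updates the model
print(answer("Capital of Peru"))    # -> lima
```

Within a single conversation a model can revise an answer when challenged, but that revision lives in the prompt context, not in the weights.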