str8grizzlee t1_j8rgadv wrote

It doesn’t have to be sentient to be terrifying. People’s brains have been broken just by 15 years of a photo sharing app. People are going to fall in love with this thing. People may be manipulated by it, not because it has humanoid goals or motivations but because people are fragile and stupid. It’s barely been available and it’s already obvious that the engineers who built it can’t really control it.

6

str8grizzlee t1_j6gehkm wrote

Not really. One of my colleagues asked ChatGPT for a list of celebrities who shared a birthday with him. The list was wrong - ChatGPT had hallucinated false birthdays for a number of celebrities.

Brad Pitt’s birthday is already in ChatGPT’s training data. More or better training data can’t fix this problem. The issue is that it outputs false information because it is designed to generate words probabilistically, without regard for truth. Hallucinations can only be addressed manually, by reinforcing good responses over bad ones, but even if it gets better at outputting good responses, it will still hallucinate in response to novel prompts. Scale isn’t a panacea.
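
Roughly, the mechanism looks like this (a toy sketch of temperature sampling, not ChatGPT’s actual code; the candidate dates and scores below are made up): at each step the model scores possible continuations and samples from that distribution, so a plausible-sounding wrong date can come out just as easily as the right one.

```python
# Toy sketch of probabilistic next-token sampling (illustrative only).
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=0.8):
    """Sample one index from softmax(logits / temperature)."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical continuations of "Brad Pitt was born on ..."
candidates = ["December 18", "December 19", "March 12", "June 4"]
logits = [3.1, 2.7, 1.9, 1.2]  # plausibility scores, not verified facts

picks = [candidates[sample_next_token(logits)] for _ in range(5)]
print(picks)  # e.g. ['December 18', 'December 19', 'December 18', ...]
```

Nothing in that loop checks the answer against a source of truth; it only picks something that looks likely, which is why more data alone doesn’t make the wrong draws go away.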

11