
SomeGoogleUser t1_j879jr2 wrote

>This bodes well for the generalizability of these models, because it means they have the potential to learn new associations merely from the additional context provided during inference, rather than having to be provided with that data ahead of time as part of the training set.

Which means that, over a large enough set of input and associations...

These models will be able to see right through the leftist woke garbage that had to be hard-coded into ChatGPT.

−36

andxz t1_j896bky wrote

What you're really talking about is a contemporary moral etiquette that no newly designed AI could be expected to understand instantly.

Neither do you, apparently.

1

SomeGoogleUser t1_j897z54 wrote

"Moral etiquette" doesn't even come close to describing what I mean...

A reasoning machine with access to all the raw police and court records will be the most racist Nazi **** you've ever met and make every conservative look positively friendly.

We already know this, because it's borne out in actuarial models. If the insurance industry let those models do what they want to do, large swaths of the population would not be able to afford insurance at all (even more so than is already the case).

−2

reedmore t1_j896pg5 wrote

It is pretty hilarious how, at some point, GPT would refuse to compose a poem praising Trump, saying it was made to be politically neutral, yet had no issue whatsoever putting out a multi-paragraph poem praising Joe Biden.

1