
mr_birrd t1_j6xtb3u wrote

Edit: ChatGPT uses GPT-3. Search for the dataset it used.

Google it; they're fully transparent about it. If you find a text of yours in there, you can ask if they can remove it. First of all, the data is only used for stochastic gradient descent, and the model has no idea about the content it read. It can only model probabilities of words, i.e. it learned to speak, but it only speaks such that its output is mostly what makes sense in a Bayesian way.
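
To make that concrete, here's a toy sketch of what "modeling probabilities of words" means. It's a hypothetical bigram counter, nothing like GPT-3's actual architecture or training code; the point is just that what gets stored are transition probabilities, not the documents themselves:

```python
# Toy illustration (hypothetical, not OpenAI's code): a language model learns
# P(next word | context) from text. What it keeps is statistics, not the text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Count bigram transitions, then normalize them into probabilities.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("the"))  # {'cat': 0.666..., 'mat': 0.333...}
```

GPT-3 does this with a neural network over tokens instead of a lookup table, but the principle is the same: it fits conditional probabilities.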

So the model is already trained, and it didn't even read all of the data; these huge models often see each training sample at most once, since they learn that "well".
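
For illustration, here's a minimal single-pass SGD loop. It's a toy regression with made-up names, not an LM training run, but the "one pass over the data, each sample seen once" shape is the idea:

```python
# Sketch of single-epoch training: every sample is visited exactly once.
import random

random.seed(0)
data = [(x, 2 * x + 1) for x in (random.random() for _ in range(5000))]

w, b, lr = 0.0, 0.0, 0.05
for x, y in data:             # one pass over the data, no repeated epochs
    err = (w * x + b) - y     # prediction error on this single sample
    w -= lr * err * x         # plain SGD update
    b -= lr * err

print(round(w, 2), round(b, 2))  # close to the true w=2, b=1
```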

Also, the way I read the law text you quoted, opting out in the future doesn't make past data processing unlawful. The model is already trained, so they don't have to remove anything.

They also usually have a whole ethics chapter in their papers; maybe go check it out. Ethics isn't something unknown to them, and big companies in particular have people working on exactly that in their teams.

1

Monoranos t1_j6xumt3 wrote

Even if they have full transparency, it doesn't mean they are GDPR compliant. I tried to look more into it but wasn't successful.

1

mr_birrd t1_j6xvaec wrote

Well, the thing is, you aren't the first one to think about that. They've been doing this for a long time and know that what they're doing is legal here. They wouldn't waste millions training it just to throw it away afterwards.

1

myrmil t1_j6xw2sq wrote

Yeah, they sure wouldn't Kappa

1