Osemwaro OP t1_j06y0j1 wrote
Reply to comment by red75prime in [D] Why are ChatGPT's initial responses so unrepresentative of the distribution of possibilities that its training data surely offers? by Osemwaro
I did wonder whether its developers' attempts to address the biases in the training data might have inadvertently left it biased in the opposite direction in some cases (if that's what you mean by "anti-bias bias").
My goal was to identify and measure expressions of bias that are unlikely to be censored by the content filter, including rarely discussed biases (e.g. it described a disproportionate number of the women in its stories about intelligent people as tall, with a slender or athletic build). But if, as some other commenters have suggested, its developers have used a low softmax temperature to massively reduce its entropy, then I can't easily get a representative sample of the responses that it might give over the course of millions of interactions with users.
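To make the temperature point concrete, here's a minimal sketch (the logits are made-up illustrative values, not anything measured from ChatGPT) showing how lowering the softmax temperature concentrates probability mass on the top token and shrinks the distribution's entropy — which is why sampled outputs can look far less varied than the training data:

```python
import math

def softmax(logits, temperature=1.0):
    # Lower temperature sharpens the distribution toward the argmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    # Shannon entropy in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical logits over four candidate tokens.
logits = [2.0, 1.0, 0.5, 0.1]
for t in (1.0, 0.5, 0.1):
    probs = softmax(logits, temperature=t)
    print(f"T={t}: entropy = {entropy(probs):.3f} bits")
```

At temperature 1.0 the entropy is close to the distribution's natural value; at 0.1 nearly all mass sits on the most likely token, so repeated samples give you near-duplicates rather than a representative cross-section of what the model could say.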