Depression_God t1_j9yf45a wrote
Of course it is biased. It's just a reflection of the culture that created it.
Hunter62610 t1_j9yp13e wrote
I mean, it could also be that the people who made it biased the program.
Depression_God t1_j9z4rs1 wrote
Obviously they did. People made everything about it. The question is to what extent they did it deliberately.
mutantbeings t1_ja5bwou wrote
Nah, that's not super important. In the tech industry we all know that unconscious bias affects the tech we build; it's a super important consideration whether or not it's conscious. It's one reason why building a culturally diverse team matters: it minimises the intensity of unconscious bias. There are actually a lot of conscious things you can do to reduce it, but it'll never go away completely.
whatsup5555555 t1_ja5jmyn wrote
So you're in favor of half of your "team" having a different political leaning than your own? It's easy to say that you want a culturally diverse team and it's another thing to actually assemble one. It's easy to pick people based on surface-level features like skin color, but it's much more difficult to balance political ideology, hence the clear bias that the AI already exhibits. The tech industry is already heavily left-leaning, but I guess no one cares as long as your bias is the one winning. So keep fighting for your skewed view of equality!
mutantbeings t1_ja66q0y wrote
Not quite. The tech industry has historically been very, very conservative. It's a very recent development that this stuff has been discussed more (it wasn't until probably the late 2000s or early 2010s, with the explosion of social media, that the tech industry became less conservative).
Assembling a diverse team isn't rocket science; the mistakes a lot of tech teams still make tend to be comically bad, like an all-white team or an all-male team. Those are still very common.
Obviously those teams will have huge blind spots in lived experience. Even a single person added to that team from a very different background covers off a huge gap there, and each extra person added is a multiplier of that effect to some degree.
You’re dead right to point out that diversity is as much about less obvious factors like class or culture though. And that’s definitely harder.
I think it's a huge leap to say that the tech industry has some left-wing bias though. I don't think you can neatly conclude that from one chart, and it doesn't match up with my 20 years of working in tech, including on AI.
gastrocraft t1_j9zv07a wrote
They didn't make everything about it. That's not how LLMs work.
TheRidgeAndTheLadder t1_j9zxlou wrote
Go a bit further. Who generated the training data?
Spire_Citron t1_ja0457j wrote
The training data is massive and usually not carefully curated because they need so much of it.
starstruckmon t1_ja1102i wrote
He's talking about the human preference data used for RLHF fine-tuning (which is what turns GPT-3 into ChatGPT). It's not really that massive.
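For a rough idea of what that preference data looks like: pairs of model outputs for the same prompt, plus a human label saying which one is better, used to train a reward model. A toy sketch (the field names and scoring are made up, not OpenAI's actual format):

```python
# Toy sketch of RLHF preference data and a reward-model loss.
# Field names and the stand-in "reward model" are illustrative only.
import math

# Each record: one prompt, two candidate completions, and which one
# the human labeler preferred. There are maybe tens or hundreds of
# thousands of these -- tiny next to the pretraining corpus.
preference_data = [
    {
        "prompt": "Tell me a joke about my boss.",
        "chosen": "Why did the boss cross the road? ...",
        "rejected": "Your boss is an idiot because ...",
    },
    # ... more labeled comparisons ...
]

def reward(text: str) -> float:
    """Stand-in for a learned reward model's scalar score."""
    return -0.01 * len(text)  # placeholder scoring, not a real model

def pairwise_loss(chosen: str, rejected: str) -> float:
    # Bradley-Terry style loss: push reward(chosen) above reward(rejected).
    return -math.log(1.0 / (1.0 + math.exp(reward(rejected) - reward(chosen))))

total = sum(pairwise_loss(d["chosen"], d["rejected"]) for d in preference_data)
print(f"loss over {len(preference_data)} comparisons: {total:.4f}")
```

Every one of those labels encodes a human judgement about which answer is acceptable, and that's exactly where this kind of bias gets in.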
gastrocraft t1_ja02120 wrote
That still doesn't mean that humans programmed everything the LLMs do.
TheRidgeAndTheLadder t1_ja02nxi wrote
It kinda does.
We defined the training data, the utility function, etc.
gastrocraft t1_ja03dv5 wrote
By that definition, when AGI becomes a thing you’ll be saying we programmed every aspect of it too. Not true.
TheRidgeAndTheLadder t1_ja0e7lb wrote
You're missing my point.
At the end of the day, neural networks fit curves to data.
That data summarises "us". The world we have shaped. All our fears, dreams, and biases.
It is inevitable, given such data, that these systems are as flawed as us.
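You can watch it happen in a toy example: fit a model to skewed data and the fit faithfully reproduces the skew. A minimal sketch with synthetic data (the numbers are invented purely to illustrate):

```python
# Minimal sketch: a model fit to biased data reproduces the bias.
# Synthetic data; the "group penalty" is injected on purpose.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Pretend historical data where group 1 was systematically scored
# lower than group 0 for the same underlying skill.
group = rng.integers(0, 2, size=n)                          # 0 or 1
skill = rng.normal(0.0, 1.0, size=n)                        # true ability
score = skill - 0.5 * group + rng.normal(0.0, 0.1, size=n)  # biased labels

# Ordinary least-squares fit on [skill, group, intercept] -> score
X = np.column_stack([skill, group, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

print(f"learned group penalty: {coef[1]:.2f}")  # ~ -0.5, the injected bias
```

The fit is statistically "correct"; the flaw was in the data all along.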
mutantbeings t1_ja5c86q wrote
Yep. And one reason it’s important we build culturally diverse teams that will minimise the intensity of bias. This is common knowledge in the tech industry already because it shows up in all kinds of software dev and there are some really embarrassing horror stories out there about bias from teams lacking any diversity at all
TheRidgeAndTheLadder t1_ja5dax7 wrote
>Yep. And one reason it’s important we build culturally diverse teams that will minimise the intensity of bias.
How can the makeup of the team impact the data?
>This is common knowledge in the tech industry already because it shows up in all kinds of software dev and there are some really embarrassing horror stories out there about bias from teams lacking any diversity at all
The phrase is "garbage in, garbage out", not "garbage supervised by the correct assembly of human attributes".
mutantbeings t1_ja5eflp wrote
Your team decides what data to even train it on. There will be sources of data that a culturally diverse team will think to include that a non-diverse team won’t even know exists. This is a very well-known phenomenon in software dev: diverse teams build better software on the first pass due to more varied embedded lived experience. Trust me, I've been doing this 20 years and see it all the time as a consultant, for better or worse.
TheRidgeAndTheLadder t1_ja5v71q wrote
>Your team decides what data to even train it on. There will be sources of data that a culturally diverse team will think to include that a non-diverse team won’t even know exists.
I'm a lil confused: are you saying that culturally diverse data (CDD) will/can be free of the biases we are trying to avoid?
mutantbeings t1_ja65i06 wrote
No, but if you have 5 identical people with the same biases, obviously those biases and assumptions will show up very strongly. Add even one person and the areas where blind spots exist no longer overlap perfectly. Add one more and it decreases even more, and so on.
But there’s never a way to eradicate it in full. All you can do is minimise it by bringing broad experience.
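If you want it in toy terms: treat each person's blind spots as a set, and what the whole team misses is the intersection. A made-up sketch:

```python
# Toy sketch: team-wide blind spots are the intersection of individual
# blind-spot sets. The sets themselves are invented for illustration.

team_blind_spots = [
    {"a11y", "rtl_text", "low_bandwidth", "slang"},  # person 1
    {"a11y", "rtl_text", "low_bandwidth", "slang"},  # person 2, same background
    {"a11y", "rtl_text", "low_bandwidth", "slang"},  # person 3, same background
]

def shared(spot_sets):
    """Blind spots nobody on the team can catch."""
    return set.intersection(*spot_sets)

print(shared(team_blind_spots))  # all four blind spots survive

# Add one person from a different background:
team_blind_spots.append({"slang", "industry_jargon"})
print(shared(team_blind_spots))  # only {'slang'} survives
```

Each different perspective shrinks that intersection, but it rarely empties it, which is the "never goes away completely" part.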
TheRidgeAndTheLadder t1_ja6646o wrote
Is that really all we can do?
mutantbeings t1_ja67lay wrote
It’s the best thing you can do to get it as close as possible on the first pass, yeah.
But software is an iterative and collaborative process; generally any change goes through multiple approval steps: first from your team, then it gets sent out to testers, who may or may not be external. Often those testers are chosen specifically for their lived experience and expertise serving a specific audience, and may themselves be quite diverse, e.g. accessibility testing to serve people living with disabilities. Content testing is also common when you need to serve, say, migrant communities that don't speak English at home.
Those reviews come back and you have to make iterative changes. That process is dramatically more expensive if you get it badly wrong on the first pass; you might even have to get it reviewed multiple times.
Basically, embedding that experience and expertise within your team lowers costs and speeds up development because you then need to make fewer changes.
On expertise vs experience: you can train someone to be sensitive to the experience of others, but it's a long process that takes decades. I am one of these "experts" and I would never claim anything like the intimate knowledge of someone who actually lives it; there's no replacement for that kind of experience.
Ultimately you will never get any of this perfect, so you do what you can to get it right without wasting a lot of money; and I guarantee you non-diverse teams are wasting a tonne of money in testing. I see it a lot. When I was working as a consultant it was comically bad at MOST places I went: male-dominated teams where they all stubbornly thought they knew it all, with zero self-awareness or ability to reflect honestly. Teams like that were, unfortunately, stereotypically bad.
just_thisGuy t1_ja01j8q wrote
Maybe making fun of disabled people is worse than making fun of wealthy people; maybe disabled people will actually get upset and develop mental health issues if you make fun of them? Maybe even if you make fun of a wealthy white person they will soon forget about it and continue their trip to a private island on their private jet? Maybe making fun of gay people has a history that includes discrimination and abuse, even jail and murder? Maybe making fun of white people does not have the same history? Maybe ChatGPT is actually right on some of those? Maybe if you have all the power, people should be able to make fun of you? Maybe if you have no power at all, people should not be able to make fun of you?
Frumpagumpus t1_ja07k0y wrote
> Maybe making fun of gay people has a history that includes discrimination and abuse, even jail and murder? Maybe making fun of white people does not have the same history
Depends on where you live... there are some African countries where discrimination and abuse of white people is definitely part of modern-day history, though it may not be politically correct to say so in the United States. An eye for an eye makes the whole world blind (which is kind of the implication of your humor ethics).
Also, while we're talking, a fun fact: most capital investment goes into capital turnover, i.e. replacing stuff. So most wealth that exists today was created in the recent past, not as the result of slave labor or something (your ethics might not make as much sense as you think, because entropy is a thing).
nocturnalcombustion t1_ja0jdj2 wrote
Maybe hate speech is okay if it’s the people I don’t like. Heh jk, sort of.
To me, there are some meaningful, if not crisp, distinctions:
- groups that are born that way vs groups where members control their membership.
- groups where members can vs. can’t conceal their membership in the group.
Beyond that, I don’t like the idea of asymmetrical value judgments about when hate speech is okay. I could be missing some important distinctions though.
zero0n3 t1_ja0tws6 wrote
I think this is where they were trying to go but couldn't really connect the dots fully.
Like hateful speech about rich people vs black people. It's clear why one is OK and the other isn't (one is hate toward a group based on attributes they can't change; the other isn't).
Unrelated: my new thing to fight white supremacy is:
“Hey; 20 years ago your racist white ass was saying the ‘blacks’ need to fix their own race and that’s how you fix racism. How about you take your own advice and fix your own white asses”
whatsup5555555 t1_ja33yke wrote
You are a complete idiot. That tiny pea inside your nearly empty skull tells you that it's ok to discriminate against a particular race of people. So just_thisGuy, go ahead and say this next line out loud: "I'm a racist". What fuck tards like yourself, who are completely devoid of any ability to process the garbage they consume from mainstream media, don't realize is that once society tolerates discrimination or racism based on specific criteria, it opens the door for more discrimination and hate based on whatever criteria the masses excuse at the moment.
mutantbeings t1_ja5cn81 wrote
And in this comment you used two discriminatory ableist slurs. So yep. I guess I’ll know who to ignore based on their demonstrated lack of inclusivity. Can’t make this shit up
whatsup5555555 t1_ja5hqkt wrote
Hahahahahah, "can't make this shit up". Please elaborate on how "idiot" or "fuck tard" is discriminatory to a group of people. People like you are an absolute joke to everyone that doesn't exist in your overly sensitive liberal bubble of extreme intolerance to any opinions outside your clown bubble of acceptance. So again I say: hahahah, you are a complete joke. Go cry in your safe space and continue to enjoy the smell of your own flatulence.
ArtistVinnyDellay t1_ja0bhmk wrote
Nope. Until there is equality for everyone, there will be equality for no one.
zero0n3 t1_ja0sys7 wrote
Yeah nuance and context mean nothing.
It’s why you’ll be destined to stay an idiot.
Kinexity t1_j9yyi6o wrote
That's true, but assuming they can somehow tweak flagging rates (as in, it's not like they just fed some flagging model a bunch of hateful tokens and the rest is automatic), then it's pretty fucked up that there are differences between races and sexes.
Obviously this rests on an assumption, and it shows that they should have been more transparent about how flagging works.
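To make the "tweak flagging rates" idea concrete: if the moderation layer is a classifier score plus a threshold, then whoever sets the thresholds can, deliberately or not, dial in different sensitivities for different topics. A purely hypothetical sketch (nothing here reflects OpenAI's actual system):

```python
# Hypothetical sketch of per-topic flagging thresholds.
# None of these names, numbers, or behaviours reflect any real system.

def toxicity_score(text: str) -> float:
    """Stand-in for a learned classifier's score in [0, 1]."""
    return 0.5  # placeholder; a real model would actually score the text

# If thresholds are hand-set per topic, bias can enter right here,
# no matter how even-handed the underlying classifier is.
FLAG_THRESHOLDS = {
    "jokes_about_group_a": 0.30,  # flags more aggressively
    "jokes_about_group_b": 0.70,  # flags more leniently
}

def is_flagged(text: str, topic: str) -> bool:
    return toxicity_score(text) >= FLAG_THRESHOLDS.get(topic, 0.50)

print(is_flagged("the same joke", "jokes_about_group_a"))  # True at score 0.5
print(is_flagged("the same joke", "jokes_about_group_b"))  # False at score 0.5
```

Which is why transparency about where those thresholds come from matters more than the raw chart.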
Depression_God t1_j9z6e93 wrote
The only problem we can be certain of is the lack of transparency. Regardless of which direction or how strong the bias is, they should always be transparent about how it works.
sommersj t1_ja3fspv wrote
It's an issue Google itself is facing. It keeps firing its AI ethicists who complain about the bias being built into these programs.
mutantbeings t1_ja5bldb wrote
And this is THE most important point we all need to take home about AI: its values always reflect the creators.
And the creators tend to be greedy capitalist corporations, so I expect this bias chart to change substantially as further tweaks are made, and not for the better.