CorruptCashew t1_itylxc3 wrote

"If you look like the person in the ad, you can imagine it being you using the product".

Same as a strong independent woman in a Hollywood movie. Same principle.

35

MrChoovie t1_ityoovh wrote

Are you saying men over 55 look like teenage girls?

75

Penis_Bees t1_itz81ka wrote

They're probably more likely to click that ad. That metadata gets stored and used, and the system directs more of those ads at them.
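
A minimal sketch of that feedback loop, with made-up segment names and a toy data structure (nothing here reflects any real ad platform):

```python
from collections import defaultdict

impressions = defaultdict(int)   # (segment, ad) -> times shown
clicks = defaultdict(int)        # (segment, ad) -> times clicked

def record(segment, ad, clicked):
    """Store the engagement metadata each time an ad is shown."""
    impressions[(segment, ad)] += 1
    if clicked:
        clicks[(segment, ad)] += 1

def click_rate(segment, ad):
    shown = impressions[(segment, ad)]
    return clicks[(segment, ad)] / shown if shown else 0.0

def pick_ad(segment, ads):
    """Segments that clicked before get shown the same ad again."""
    return max(ads, key=lambda ad: click_rate(segment, ad))
```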

17

CardboardJ t1_itziqeg wrote

This is the sad downside to having an impartial algorithm that's ranked only on success rate.

6

TangerineDream82 t1_itzo8ga wrote

What should the algorithm be ranked on instead?

9

CardboardJ t1_iu0r53k wrote

It's currently ranked on what makes Facebook the most money, with a secondary ranking on the demographics advertisers want to target. The algorithm is fine. The marketing creeps who opt to use teen girls to advertise to 55-year-old men should probably change, and Facebook should probably have better moral standards as well.

It's like seeing parents sell their teen daughter to a pimp, and the pimp sell the girl to a dirty old man, then asking how we can change human reproduction to prevent this. It's the society that has accepted "sex sells" as a moral business practice that should change, but anyone who attempts to change it gets labeled a puritan and dismissed.

0

chute_amine t1_iu0cuel wrote

Ranking by that is fine, but there should be equity constraints in place. Ads are one thing, but think about applications for jobs or credit cards. Amazon and Apple already had huge issues there because they chose to ignore sensitive traits instead of actively enforcing equity.

−6

RonPMexico t1_iu0yuoe wrote

Are you saying that algorithms should include factors like race and age when determining credit scores and filtering job applications?

3

myspicename t1_iu0znoz wrote

Are you saying that if algorithms lead to a pattern of ad distribution that, say, doesn't advertise properties in mostly white areas to non-white people, that's OK?

1

RonPMexico t1_iu1037h wrote

I'm saying these algorithms are designed to self-optimize: you give them a goal and they try to achieve it as efficiently as possible. If the goal is to sell property, and the most efficient way to sell the property favors ads targeted at white people, then it's fine.

3

myspicename t1_iu10dsx wrote

It's fine to have discriminatory advertising patterns for housing if it's in an algorithm? What if an algo designs an ad campaign that makes misleading and false claims but increases conversion rates and hit rates?

Do you think illegal acts are ok if they are done by an algo?

−2

RonPMexico t1_iu10v69 wrote

Who said anything about illegal? Is it illegal to put up for-sale signs in a white neighborhood? Is it illegal to claim the earth is 6,000 years old from a pulpit?

If something is illegal, it is illegal, but that's not really useful or meaningful to this discussion.

1

myspicename t1_iu1ok0e wrote

So you think it's ok to exclusively advertise properties to white people?

−1

Tall-Log-1955 t1_iu1u7p6 wrote

I think that if the reason is that non-white people did not engage with the ad (because they are not interested in it), then yes, it is okay.

If the reason is that some property developer wants to keep out non-whites then it is not okay.

1

myspicename t1_iu1zh5f wrote

If the algo doesn't advertise to non-white people, how would we know the problem is engagement? I'm trying to lead y'all through a line of logic that ends with the idea that outsourcing racist activity to an algo doesn't make it not racist.

1

Tall-Log-1955 t1_iu26rpu wrote

These algorithms don't have that problem, because they show ads to everyone in small amounts. Then they show the ad to more people from whichever demographic/group engages at the highest rate.

You can read how they work here:

https://en.m.wikipedia.org/wiki/Multi-armed_bandit
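
For instance, here's a minimal epsilon-greedy bandit in the spirit of that article. The segment names and click rates are invented for illustration; this is not Facebook's actual system.

```python
import random

class EpsilonGreedyBandit:
    """Each 'arm' is an audience segment; engagement steers impressions."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {arm: 0 for arm in arms}    # impressions per segment
        self.clicks = {arm: 0 for arm in arms}   # engagements per segment

    def choose(self):
        # Explore: occasionally show the ad to a random segment...
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))
        # ...exploit: otherwise pick the segment with the best observed rate.
        return max(self.shows,
                   key=lambda a: self.clicks[a] / self.shows[a] if self.shows[a] else 0.0)

    def update(self, arm, clicked):
        self.shows[arm] += 1
        self.clicks[arm] += int(clicked)

# Hypothetical true click rates: the bandit samples everyone a little,
# then concentrates impressions on whoever engages most.
true_rates = {"teens": 0.02, "25-34": 0.05, "35-54": 0.03, "55+": 0.08}
bandit = EpsilonGreedyBandit(list(true_rates))
for _ in range(10_000):
    segment = bandit.choose()
    bandit.update(segment, random.random() < true_rates[segment])
```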

1

myspicename t1_iu26zou wrote

OK, so generalizing from a small sample size and then using race or race-proxy demographics. Do y'all seriously not see the issue?

1

Tall-Log-1955 t1_iu2eaz5 wrote

If race is actually predictive of interest in a product, I don't think it's that bad. Is it sexist to show tampon ads to men less often? Is it ageist to show toy ads to senior citizens less often?

If people of a given race are genuinely not interested in a product, I don't think it harms them to show them the ads less often.

1

myspicename t1_iu2egdv wrote

I think for many consumer products, like hair care or tampons, this is true. It becomes insidious when it's real estate, education, accommodations, etc., if left unchecked.

1

RonPMexico t1_iu1xhnz wrote

The only way the algorithm would exclusively advertise to whites would be if that were an explicit direction given to the system. If you program the model to advertise at the highest price point, the ads go to high-income earners in the school district who have searched for realtors (and any number of other relevant variables), and the results turn out to be mostly white, I'd have absolutely no problem with it.

1

myspicename t1_iu1z7ms wrote

So you are OK with a system being racist so long as it doesn't explicitly say so. There was a reason advertising a property to only one race was made illegal.

1

RonPMexico t1_iu1zro2 wrote

The thing is, the model isn't racist. I am explicitly saying that including race in these systems should be prohibited.

1

myspicename t1_iu20n0k wrote

This is like when politicians carve up districts based on other factors to proxy race. The model is definitionally racist if it continues to fuel racial segregation.

1

RonPMexico t1_iu21nq4 wrote

Politicians are optimizing for political affiliation and use race as a proxy for that. I am saying that is the exact opposite of what ought to be allowed.

1

myspicename t1_iu24bff wrote

Racism is ok if you find a proxy. Got it.

1

RonPMexico t1_iu25qoq wrote

Have you considered the opposite case? Using the real estate example: you have x variables, including salary, school district, visits to real estate websites, and so on. Each one of those variables is given a weight by the system. We don't know what those weights are; the system operates as a "black box" to determine the appropriate values. You look at the results and decide Native Americans are underrepresented. Now you have to add Native American as a variable, and to get the results you want, you have to decide how much it should impact the final results. So who decides to favor Native Americans, and by how much? Would that not be illegal under the Fair Housing Act?
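
As a toy illustration of that dilemma (all feature names and weights below are hypothetical):

```python
# Weights the "black box" learned from ordinary, race-neutral features.
learned_weights = {"salary": 0.42, "in_school_district": 0.31,
                   "realtor_site_visits": 0.27}

def score(prospect):
    """Race-neutral targeting score from the learned weights."""
    return sum(w * prospect[f] for f, w in learned_weights.items())

# Correcting under-representation means bolting on a protected attribute
# with a weight a human must pick by hand: who chooses this number,
# and how large a value counts as fair?
CORRECTIVE_WEIGHT = 0.15

def adjusted_score(prospect):
    return score(prospect) + CORRECTIVE_WEIGHT * prospect["is_native_american"]
```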

1

myspicename t1_iu27918 wrote

If companies ever backchecked their algos for mistakes or systematic bias, I might not be against it.

1

RonPMexico t1_iu27fim wrote

I don't know what that sentence means

1

myspicename t1_iu27s7l wrote

Is the concept of machine learning making a racist assumption and enforcing racism alien to you? It's pretty widely discussed.

1

RonPMexico t1_iu28ega wrote

I know. That's what we are discussing. You take the view that if an algorithm returns results that are not directly proportional to racial demographics, the system is racist. I'm saying that is ridiculous.

What doesn't convey meaning is:

If companies ever backchecked their algos for mistakes or systematic bias, I might not be against it.

0

myspicename t1_iu28ibd wrote

Did I say directly proportional? Stop strawmanning my argument.

1

RonPMexico t1_iu28p2s wrote

How far from proportional would be okay before it becomes racist?

0

myspicename t1_iu296ce wrote

Clearly there's no strict line. Just like a white-passing Black person crossing the color line under Jim Crow, racist systems aren't absolute.

I'd say if there's a vastly disproportionate discrepancy, it's worth checking. And I'd say it's more salient when it's around things like housing or education (rather than, say, hair care items).

1

RonPMexico t1_iu29jmi wrote

How about this? We remove race from the equation entirely. Surely that would lead to the best outcome, no?

0

myspicename t1_iu2a7pt wrote

Absolutely not, and I think it's fairly obvious it wouldn't. This was tried for education and housing, and because of historical inequity and the in-group cultural bias of systems built for a majority, it doesn't work.

Even workplaces or academic institutions whose policies simply appeal to white majorities can enforce that. It sounds trivial, but even not having, say, vegetarian or halal options can be a blocker, and it's perfectly "race blind" to be fine not having them.

1

RonPMexico t1_iu2axl8 wrote

So you are saying they can't be race neutral and you can't define when it's racist. Who gets to decide where to draw these arbitrary lines? How would they work with optimized systems? What is fair enough?

−1

myspicename t1_iu2b900 wrote

This is why we have laws around this. Let me guess, you think markets correct all inequities?

1

RonPMexico t1_iu2cfec wrote

I'm saying that when you artificially favor one race over another in an otherwise race-neutral algorithm to get your desired results, it's a bad thing. You believe race should factor into everything, and you have the temerity to claim the moral high ground. Racism is bad, and you ought to be ashamed of your views.

0

chute_amine t1_iu11wxz wrote

It’s complicated, but yes. We don’t use the sensitive traits in training as a normal feature - we use them to correct bias in the model along that dimension. It can be done in training or after training, but it is a necessary check in any human-influencing AI model.

0

RonPMexico t1_iu127ye wrote

It sounds like you are reducing the efficiency of the model in the name of equality.

2

chute_amine t1_iu13vjw wrote

Exactly, but what is more important? Revenue or fairness? It’s about finding the right balance. Each project/model has its own level of compromise.

1

RonPMexico t1_iu144rm wrote

I would say that in the long term, efficiency will benefit everyone more than handicapping systems to produce desired outcomes.

2

chute_amine t1_iu16o6j wrote

Fair enough. But academia, the big names in tech, the USA, the EU, and I disagree.

2

RonPMexico t1_iu17ibt wrote

Do you mean big tech as in Facebook or Google ad services? In academia, are there engineers and data scientists who prefer nice data over accurate data?

1

AnotherTakenUser t1_itzfyld wrote

Men over 55 generally don't care what you're peddling, so the algorithm has to get creative.

−3