Comments
Pinkie_Flamingo t1_j7h4m70 wrote
Built by biased humans.
Garbage in, garbage out.
BovaDesnuts t1_j7h6lpu wrote
Remember, kids: robots are racists
Notsnowbound t1_j7h6qkj wrote
"we get extra bits if we human harder!"
hawkeye224 t1_j7hmro2 wrote
Why is it biased? I would imagine the training data would be photos of faces annotated with their actual age... so where is the bias introduced?
keithcody t1_j7ht0sc wrote
It’s in the article if you want to read it
“In the study, AI overestimated the age of smiling faces even more than human observers and showed a sharper decrease in accuracy for faces of older adults compared to faces of younger age groups, for smiling compared to neutral faces, and for female compared to male faces. ‘These results suggest that estimates of age from faces are largely driven by visual cues, rather than high-level preconceptions,’ said lead author Tzvi Ganel of Ben-Gurion University's Department of Cognitive and Brain Sciences. ‘The pattern of errors and biases we observed could provide some insights for the design of more effective AI technology for age estimation from faces.’”
…
“‘AI tended to exaggerate the aging effect of smiling for the faces of young adults, incorrectly estimating their age by as much as two and a half years. Interestingly, whereas in human observers the aging effect of smiling is missing for middle-aged adult female faces, it was present in the AI systems,’ said Carmel Sofer of Ben-Gurion University's Department of Cognitive and Brain Sciences.”
sschepis t1_j7i15xh wrote
How this isn't understood yet is beyond me.
Human intelligence is literally built on our biases - on our ability to make rapid classifications from sparse data. That ability lets us make strings of rapid decisions at a relatively low energy cost; it's hardwired into the physical structures of the brain.
The idea that the mechanism of bias can possibly be removed without fundamentally affecting the mechanism of intelligence shows that the conversation has veered off-track into the domain of politics and morality.
Which is fine - there's nothing wrong with those discussions - but what use are they if the mechanisms they're discussing are fundamentally misunderstood?
DrXaos t1_j7ia833 wrote
It's fairly well known that common ML systems for image processing (layers of convolutional networks followed by max-pooling or the like) are more sensitive to texture and less sensitive to larger scale shape and topology than humans.
It's likely that smiling triggered more 'wrinkle' detector units, and the classifier effectively summed up the density of that texture response when predicting age, whereas humans know where wrinkles from aging versus smiling sit on the face and compensate for it.
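To make that concrete, here is a minimal sketch of the conv + max-pool style of age regressor being described (PyTorch assumed; the architecture and layer sizes are illustrative, not the actual systems evaluated in the study):

```python
import torch
import torch.nn as nn

class AgeRegressor(nn.Module):
    """Toy conv + max-pool age regressor; texture-heavy by construction."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),              # 224 -> 112: local texture survives, fine position is lost
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),              # 112 -> 56
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global pooling discards spatial layout entirely
        )
        self.head = nn.Linear(128, 1)     # single scalar output: predicted age

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = AgeRegressor()
face = torch.randn(1, 3, 224, 224)        # stand-in for a face crop
print(model(face).shape)                  # torch.Size([1, 1])
```

Because the pooled features mostly count how much wrinkle-like texture is present rather than where it is, smile creases and age wrinkles end up in the same bucket.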
keithcody t1_j7idyma wrote
Your description doesn’t really fit the findings.
Sample image used for training
https://www.ncbi.nlm.nih.gov/pmc/articles/instance/9800363/bin/41598_2022_27009_Fig1_HTML.jpg
DrXaos t1_j7ig8eq wrote
I guess I don't get your point. The images reflect the phenomenon I suggest.
Look at the younger images. On the smiling side of the young faces there are more relatively high-spatial-frequency light-to-dark transitions than on the non-smiling side, which a model can read as a higher probability of wrinkles. I conjecture those contribute to the higher age estimates.
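A toy way to quantify that conjecture (a Laplacian high-pass filter as a crude "wrinkle texture" score, run on synthetic stand-in images rather than the study's data; numpy/scipy assumed):

```python
import numpy as np
from scipy import ndimage

def high_freq_energy(gray_face: np.ndarray) -> float:
    """Mean absolute Laplacian response: a rough proxy for fine light-to-dark transitions."""
    return float(np.abs(ndimage.laplace(gray_face.astype(float))).mean())

# Hypothetical stand-ins for face crops, not real images.
rng = np.random.default_rng(0)
neutral = rng.normal(0.5, 0.02, (128, 128))          # smooth skin, little fine texture
creases = (rng.random((128, 128)) > 0.97) * 0.2      # sparse fine edges, e.g. smile creases
smiling = neutral + creases                          # same "face" plus extra creases

print(high_freq_energy(smiling) > high_freq_energy(neutral))  # True
```

A texture-driven age estimator leaning on this kind of signal would score the smiling version as older, which is the pattern the study reports.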
DigitalSteven1 t1_j7in27k wrote
Study finds model that replicates training data replicates training data, and more specifically repeated training data.
Is this a joke or something? This is also how the human brain works, is it not? We exaggerate our biases all the time, and the way we fix it is by "feeding" our brain more data (learning).
djkoch66 t1_j7ix0io wrote
It’s understood but ignored.
Schemati t1_j7j5dde wrote
Would it cost anything to train AI on non-garbage data, or did we just dump everything in and say "good enough"?
sschepis t1_j7ja6kq wrote
Which makes it even worse because willful ignorance about a powerful new technology has never worked for anyone.
I wonder if the researchers who performed this study did so with the knowledge that AIs merely optimise processes and that humans now regularly use caricaturists in place of perp sketches because they are so much more effective at triggering recall.
This study 100% confirms the expected and desired outcome of an AI model faced with this problem, and yet somehow, even though it's a study, I get the impression this is supposed to be bad, simply because the fact that biases were exaggerated is not exactly news.
Matshelge t1_j7jk7t9 wrote
So what is the result you want? A machine that can tell your age, or a machine that understands that age is just a number?
The reason it ended up like this is that we did not curate the input to produce the ideal output.
It's not garbage data, it's human data. Maybe humans are garbage, but just wait till it starts touching more human taste and preferences, like intelligence or beauty. We will get real mad at those results.
[deleted] t1_j7k2bel wrote
Well, all they are is human biases put into a giant database, so what the hell else would you expect!!
Pinkie_Flamingo t1_j7knml7 wrote
What difference does it make if it costs more to be accurate across all ethnicities?