Armoogeddon t1_j1spw46 wrote
Reply to comment by myassholealt in NYC's AI bias law is delayed until April 2023, but when it comes into effect, NYC will be the first jurisdiction mandating an AI bias order in the world, revolutionizing the use of AI tools in recruiting by Background-Net-4715
I agree wholeheartedly with your last sentence, but it goes way beyond “bias” in models. Models are only one piece of an ever more complex system.
In terms of the impressions you’ve inferred, we could talk about that for hours. Maybe five or six years ago, it came to light that visual recognition models performed markedly worse on people with dark skin. The tech companies (I was there at the time, at a big prominent one) decided to get ahead of the bad press by condemning themselves and promising to do better. The media fallout was negligible.
It was bunk. Did AI models generally perform worse on photos of Black people / people of African descent? In some cases, yes. Was the training data cribbed from the US, where Black people make up, what, 13% of the population? Yes. Of course the models performed worse: there was roughly a tenth of the data available to train them! It wasn’t racist, and it wasn’t some bias built into the models by the human trainers - there was simply less data. But nobody bothered to have what should have been a nuanced conversation, and the prevailing opinion jumped to the wrong perception and the wrong remediation. It kicked off an idiotic path on which we still find ourselves. Or watch others traversing.
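To make the “less data” point concrete, here’s a minimal, purely illustrative sketch (synthetic data, made-up groups and numbers - not from any real system): one shared classifier is trained on a mix where one group contributes roughly a tenth of the samples, and its measured accuracy on that group ends up lower, with no malice anywhere in the pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 5-feature data; the label depends on feature 0 relative to `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + rng.normal(scale=1.0, size=n) > shift).astype(int)
    return X, y

# Group A: plentiful data. Group B: ~1/10 as much, and a somewhat different distribution.
Xa, ya = make_group(10_000, shift=0.0)
Xb, yb = make_group(1_000, shift=1.5)

X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
grp = np.concatenate([np.zeros(len(ya)), np.ones(len(yb))])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, grp, test_size=0.3, random_state=0, stratify=grp
)

# One shared model, fit mostly to the majority group simply because that's where the data is.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

for g, name in [(0, "group A (~10k samples)"), (1, "group B (~1k samples)")]:
    mask = g_te == g
    print(f"{name}: accuracy = {clf.score(X_te[mask], y_te[mask]):.3f}")
```

Nothing nefarious in the training code; the accuracy gap falls straight out of the data mix, which is why “collect more representative data” is the remediation, not hunting for intent.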
The real problem is that nobody understands what’s behind these models. We understand the approaches generally - the “convolutions” applied at various layers during training - but nobody understands the logic encoded in the resulting models any better than we understand the workings of human reasoning. We can infer things, but nothing is truly known, not in a rigorous, fully understood way.
Yet everybody keeps racing ahead to apply these models in ever more profound and - if you’re in the space - unnerving ways. It’s getting scary, and it’s way worse than the stuff that’s being discussed here, which is also a bad idea.
I guess what I’m saying is it’s so much worse than these idiot politicians realize. They’re fighting a battle that was lost ten years ago.
Armoogeddon t1_j1s60ej wrote
Reply to comment by SakanaToDoubutsu in NYC's AI bias law is delayed until April 2023, but when it comes into effect, NYC will be the first jurisdiction mandating an AI bias order in the world, revolutionizing the use of AI tools in recruiting by Background-Net-4715
It’s been a few years, but I’m also a data scientist with experience in NLP.
NLP would be only one component of such a model (or models), but even if you somehow standardized on it - and that would be really, really hard to do - it would be virtually impossible to create something the author(s) of any bill would deem unbiased.
I suspect these people hear “bias” in machine learning and presume it’s a pejorative. It’s not; models trained by humans (“supervised machine learning”) are intentionally “biased” by their trainers’ experience. Training models isn’t some Klan rally to go after people, at least not in my experience. I have serious qualms about how this stuff gets used - that’s partly why I left the field - but lawyers and career politicians aren’t helping by passing laws to regulate a field they understand no better than rocket science.
Armoogeddon t1_jecyd7j wrote
Reply to comment by AceContinuum in New Yorkers overwhelmingly support bail changes ahead of state budget deadline: Poll by Grass8989
Dude, they’re leaving in droves.
https://www.nytimes.com/2023/02/23/nyregion/millionaires-new-york-taxes.html
Sure, there are still plenty, but the trend is negative, and the net population loss is a problem too.