Submitted by ADefiniteDescription t3_z1wim3 in philosophy
d4em t1_ixditg1 wrote
These algorithms are very vulnerable to bias. If a neighbourhood is heavily patrolled, the chance is much higher that any infraction is added to the learning set, increasing the "crime-value" of that area. Meanwhile, areas that are rarely patrolled have a much lower chance of ending up in the database at all. This creates blind spots.
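You can see the feedback loop with a toy simulation (all numbers made up): two areas with identical true crime rates, differing only in how often police are around to record an infraction.

```python
import random

random.seed(0)

# Two areas with the SAME underlying crime rate; only patrol coverage differs.
TRUE_CRIME_RATE = 0.05                    # hypothetical infraction rate
PATROL_COVERAGE = {"A": 0.9, "B": 0.1}    # share of infractions police witness

def recorded_infractions(area, population=10_000):
    """Count the infractions that actually land in the training data."""
    committed = sum(random.random() < TRUE_CRIME_RATE for _ in range(population))
    return sum(random.random() < PATROL_COVERAGE[area] for _ in range(committed))

counts = {area: recorded_infractions(area) for area in PATROL_COVERAGE}
print(counts)  # area A looks far more "criminal" in the recorded data
```

Feed that data back into patrol allocation and area A gets patrolled even harder next round, which is exactly the blind-spot dynamic described above.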
A real life example of where policing by AI went horribly wrong is the Dutch childcare benefit scandal. The algorithm "learned" that certain types of people (single mothers, immigrants) were more likely to have something wrong with their taxes, checked them more often, and then flagged them as fraudsters for minor infractions like receipts being handed in incorrectly or a payment being a few days late. Because computers are *magic truth machines* that *don't make mistakes*, these people were given no legal recourse and no chance to defend themselves. They did not even know what they were accused of.
If we are going to use machine learning as a tool to help legal administration, we need to take extreme caution, and everyone working with these machines must fully understand their limitations. The computer has no idea what it's actually doing; it's just a fancy calculator following instructions. And while it follows those instructions flawlessly, it's still extremely error-prone, and it lacks the capacity for self-reflection a human has, even if "learning" is built into the algorithm. AI fundamentally does not understand what it's doing, and that means it will never understand when it's doing wrong. We cannot use AI to replace our own judgment.
vrkas t1_ixdt60j wrote
At least the whole cabinet resigned in the Netherlands. In Australia a similar scheme was instituted, then found to be illegal, yet the people administering it remained in government. The former social services minister even became PM.
Back to the point: I agree that great care needs to be used when trying these kinds of optimised, targeted computational methods.
zhoushmoe t1_ixedfln wrote
All the care in the world won't stop the biases inherent in our paradigm. There are built-in mechanisms of discrimination and inequality that the system as we know it optimizes for, and they are virtually impossible to remove from our current modus vivendi.
These books talk about the problem at length:
https://www.goodreads.com/book/show/28186015-weapons-of-math-destruction
https://www.goodreads.com/book/show/34762552-algorithms-of-oppression
https://www.goodreads.com/book/show/34964830-automating-inequality
vrkas t1_ixee4sd wrote
Yeah, for sure. In the two cases mentioned in the comments, the ML-based bullshit isn't the actual cause of the trouble. The root is the rampant starve-the-beast defunding and privatisation of governmental functions, along with negative neoliberal attitudes to social services. If you have a properly functioning social service setup, you won't need any of this shit in the first place.
pitjepitjepitje t1_ixhafc9 wrote
The same guy who was PM during the scandal offered himself up for reelection and won. So yes, the cabinet fell, but we're still stuck with some of the responsible politicians, including the PM. Not contradicting you, just an (IMO necessary) addendum.
phanta_rei t1_ixdowtx wrote
It reminds me of the algorithm that handed longer sentences to minorities. If I'm not mistaken, it took factors like income and spat out a value that determined whether the defendant was likely to recidivate. The result was that minorities were disproportionately affected by it…
d4em t1_ixdroz5 wrote
Oh yeah, this is a whole rabbit hole. There are also algorithms being trained by people to identify subjective values, such as "niceness." These are notoriously biased as well, as biased, in fact, as the people who train them. But unlike those people, the opinion of the AI won't be changed by actually getting to know the person it's judging. They give 100% confident, biased results.
Or the chatbots that interpret written language and earlier conversations to simulate conversation. One of them was unleashed on the internet and was praising Hitler within three hours. Another, a scientific model designed to skim research papers and give summaries to scientists, answered that vaccines both can and cannot cause autism.
These don't bother me though. They're so obviously broken that no one will think to genuinely rely on them. What bothers me is the idea of this type of tech becoming advanced enough to sound coherent and reliable, because the same issues disrupting the reliability of the AI tech we have today will still be present; they're just limitations of the technology. Yet even today we have people hailing the computer as our moral savior, supposed to end untruth and uncertainty. If the tech gets a facelift, I believe many people will falsely place their trust in a machine that just cannot do what is being asked of it, but tries its damnedest to make it look like it can.
glass_superman t1_ixdua43 wrote
As an example:
Meta just a couple of days ago took its scientific-paper-generation machine offline because it would happily provide you a real-sounding scientific paper on the history of bears in space.
https://futurism.com/the-byte/facebook-takes-down-galactica-ai
killertreatdev t1_ixec5vt wrote
Few things have described me better.
elmonoenano t1_ixeltji wrote
In the US the big problem is that b/c of the legacy of redlining and segregation, a lot of these algorithms use zip codes, which turn out to be just a proxy for race. So the pretrial-release tools were basically making the decision based on race and age, but b/c no one in the court system actually knew how they worked, no one challenged them.
Cathy O'Neil's got a bunch of good work on it. She had a book a few years ago called Weapons of Math Destruction.
[deleted] t1_ixdvd03 wrote
[removed]
glass_superman t1_ixdtiuc wrote
> The computer has no idea what it's actually doing
Counterpoint: Neither do we.
Expert poker players are often unable to explain why a hand felt like a bluff. It could be that they are picking up on something real and acting on it without being able to reason about it.
Likewise, a doctor with a lot of experience might have some hunch that turns out to be true. The hunch was actually solid deduction that the doctor was unable to reason about.
Even you, driving, probably sometimes get a hunch that a car might change lanes or get a hunch that an intersection should be approached slowly.
I (and others) feel that explainable AI might be a dead end. If we told the poker player they could only call a bluff when they could put into words what was wrong, they might perform worse. Forcing AI to be explainable might likewise be artificially limiting its ability to help us.
Even if you don't buy that, there are studies suggesting that consciousness explains our actions after the fact, like an observer. So we're not really using reason to make decisions; we just do things and then reason about why.
We let humans drive cars on hunches. Why should we hold AI to a higher standard? Is a poorly performing explainable AI better than an unexplainable one that does a good job?
d4em t1_ixdukvm wrote
I'm not talking about reasoned explanations when I say a computer does not understand what it's doing. What I mean is that a computer fundamentally has no concept of "right and wrong." It's just a field of data, and to the computer it's all the same: if you switched the field for "good" with the field for "bad," it would uncaringly keep making its calculations. Computers do not feel; they do not have hunches. All they do is measure likelihood based on ever more convoluted mathematical models. It's a calculator.
Any emotional attachment is purely coming from our side. A computer simply does not care. Not about itself, not about doing a good job, and not about you. And even if you told it to care, that would be no more than just another instruction to be carried out.
glass_superman t1_ixdwyrw wrote
Are people so different? We spend years teaching our kids to know right from wrong. Maybe if we spent as much time on the computers then they could know it, too?
d4em t1_ixdy7r1 wrote
Does a baby need to be taught to feel hungry?
While I appreciate the comparison you're making, it poses a massive problem: who initially taught humans the difference between right and wrong?
Kids do good without being told to. They can know something is wrong without being taught that it is. For a computer, this simply is not possible. We're not teaching kids what "good" and "bad" are as concepts; we're teaching them to behave in accordance with the morals of society at large. And sure, you could probably teach a computer to simulate this behavior and make it look like it's doing the same thing, but at the very core something fundamental would be missing.
What's good and bad isn't a purely intellectual question. It's deeply tied to what we feel, and that's what a computer simply cannot do. Even if we teach it to emulate empathy, it will never truly have the capacity to place itself in someone's shoes. It won't be able to place itself even in its own shoes. Insofar as it tries to stay alive, it's only because it's following the instruction to do so. A computer is not situated in the world the way live beings are.
Skarr87 t1_ixe6ouh wrote
In my experience children tend to be little psychopaths. Right and wrong (morality) likely evolved along with humans as they developed societies. Societies give a significant boost to the survival and propagation of their members, so societies with moral systems conducive to larger and more efficient cooperation tend to propagate better as well. These moral systems then get passed on as the society propagates, and any society whose morals are not conducive to social cohesion tends to die off.
Why do you believe an AI would necessarily be incapable of empathy? Not all humans are even capable of empathy, and empathy can be lost through damage to the frontal lobe. For some who lose it, it never returns; others are able to relearn to express it. If it was relearned, does that mean they are just emulating it and not actually experiencing it? How would that be different from an AI?
When humans get an intuition, a feeling, or a hunch, it isn't out of nowhere; they typically have some kind of history or experience with the subject. For example, when a detective has a hunch that a suspect is lying, it could come from previous experience, or even from a correlation with the behavior of previous lying suspects that other detectives haven't consciously noticed. How, fundamentally, is this any different from an AI making an odd correlation between data using statistics? You could argue that when an AI correlates data like this it is creating a hunch, and that when a human has a hunch they are just drawing a conclusion from correlated data.
Note I am not advocating using AI in policing, I believe that is a terrible idea that can and will be very easily abused.
d4em t1_ixe8sn6 wrote
Our moral systems probably got more refined as society grew, but by our very nature as live beings we need an understanding of right and wrong to inform our actions. A computer doesn't have this understanding; it just follows the instructions it's given, always.
I'm not making the argument that machines are incapable of empathy, although by extension I suppose I am; the core of the argument is that machines are incapable of experience. Sure, you could train a computer to spit out a socially acceptable moral answer, but nothing would make that answer inherently moral to the computer.
I agree that little children are often psychopaths, but they're not incapable of experience. They have likes and dislikes. A computer does not care about anything; it just does as it's told.
The fundamental difference between a human hunch and the odd correlation the AI makes is that the correlation does not mean anything to the computer; it's just moving data like it was built to do. It's a machine.
Skarr87 t1_ixekpu2 wrote
So if I'm understanding your argument, and correct me if I'm wrong, the critical difference between a human and a computer is that a computer isn't capable of sentience, and by extension sapience, or even more generalized consciousness?
If that is the argument, then my take is that I'm not sure we can say that yet. We don't have a good enough understanding of consciousness to be able to say it is impossible for non-organic things to possess. All we know for sure is that consciousness can apparently be suppressed or damaged by changing or stopping biological processes within the brain. I am not aware of a reason a machine, in principle, could not simulate those processes to the same effect (consciousness).
Anyway, it seems to me that your main problem with using AI for policing is that it would be mechanically precise in its application without understanding the intricacies of why crime may be happening somewhere? For example, maybe it concludes that African American communities are crime centers without understanding that the reason is that they tend to be poverty-stricken, which is the real cause. So its output may end up being almost a self-fulfilling prophecy?
d4em t1_ixetoqs wrote
I'm not talking about sentience, sapience, consciousness, or anything like that; I'm talking about experience. All computers are self-aware: their code includes references to self. I would say machine learning constitutes a basic level of intelligence. What they cannot do is experience.
It's actually very interesting that you say we don't have a good enough understanding of consciousness yet. The thing about consciousness is that it's not a concrete term; it's not a defined logical principle. In considering what consciousness is, we cannot just do empirical research (it's very likely consciousness cannot be empirically proven); we have to make our own definition, we have to make a choice. A computer would be entirely incapable of doing so. The best it could do is measure how the term is used and derive something from that. Those calculations could get extremely complicated and produce results we wouldn't have come up with, but it wouldn't be able to form a genuine understanding of what "consciousness" entails.
This goes for art too: computers might be able to spit out images, measure which ones humans think are beautiful, and use that data to create a "beautiful" image, but nothing in that computer would be experiencing the image. It's just following instructions.
There's a thought experiment called the Chinese Room. In it, a man who does not speak a word of Chinese is placed in a room. When you want your English letter translated into Chinese, you slide it through a slit in the wall. The man then goes to work and looks up everything related to your letter in a stack of dictionaries and grammar guides. He's extremely fast and accurate; within a minute a perfect translation of your letter comes back out the slit in the wall. The question is: does the man in the room know Chinese?
For a more accurate comparison: the man does not know English either; he looks that up in a dictionary as well. And it's not a man but a piece of machinery, which finds the instructions on how to look at your letter and how to hand it back to you in yet another dictionary. Every time you hand it a letter, the machine has to look up in the dictionary what a "letter" is and what to do with it.
As for the problems with using AI or other computer-based solutions in government: yeah, pretty much. The real risk is that most police personnel aren't technically or mathematically inclined, and humans have shown a tendency to blindly trust what the computer or the model tells them. But also, if there were a flaw in one of the dictionaries, it would be flawlessly copied into every letter. And we're using AI to solve difficult problems that we might not be able to double-check.
Skarr87 t1_ixhrn5o wrote
I guess I’m confused by what you mean by experience. Do you mean something like sensations? Something like the ability to experience the sensation of the color red or emotional sensations like love as opposed to just detecting light and recognizing it as red light and emulating the appropriate responses that would correspond to the expression of love?
With your example of the man translating letters, I'm not 100% sure that isn't an accurate analogy for how humans process information. I know it's supposed to contrast human knowledge with machine knowledge, but it seems pretty damn close to how humans process stuff. There are cases where people have had brain injuries and essentially lose access to the parts of their brain that process language. They straight up lose the ability to understand, speak, read, and write a language they were previously fluent in; the information just isn't there anymore. It would be akin to the man losing access to his database. So then the question becomes: does a human even "know" a language, or do they just have what is essentially a relational database to reference?
Regardless though, none of this matters in whether we should use AI for crime. Both of our arguments essentially make the same case albeit from different directions, AI can easily give false interpretations of data and should not be solely used to determine policing policy.
glass_superman t1_ixe2glj wrote
A baby doesn't need to learn to be hungry but neither does a computer need to learn to do math. A baby does need to learn ethics, though, and so does a computer.
Whether or not a computer has something fundamentally missing that will make it never able to have a notion of "feeling" as humans do is unclear to me. You might be right. But maybe we just haven't gotten good enough at making computers. Just like we made declarations in the past about the inabilities of computers that were later proved false, maybe this is another one?
It's important that we are able to recognize when the computer becomes able to suffer for ethical reasons. If we assume that a computer cannot suffer, do we risk overlooking actual suffering?
d4em t1_ixe5eyy wrote
The thing is, for a baby to be hungry, it needs to have some concept of hunger being bad. We need the difference between good and bad to stay alive. A computer doesn't, because it doesn't need to stay alive; it just runs and shuts down according to the instructions it's given.
We need to learn ethics, yes, but we don't need to learn morals. And ethics really is the study of moral frameworks.
It's not that the computer isn't advanced enough. It's that the computer is a machine, a tool. It's not alive. Its very nature is fundamentally different from that of a live being. It's designed to fulfil a purpose, and that's all it will ever do, without a choice in the matter. It simply is not "in touch" with the world the way a live being is.
It's natural to empathize with computers because they simulate mental function. I've known people to empathize with a rock they named and drew a face on, it doesn't take that much for us to become emotionally attached. If we can do it with a rock, we stand virtually no chance against a computer that "talks" to us and can simulate understanding or even respond to emotional cues. I would argue that it's far more important we don't lose sight of what computers really are.
And if someone were to design a computer capable of suffering, or in other words a machine that can experience - I don't think it's possible, and it would need to be so entirely different from the computers we know that we wouldn't call it a "computer" - that person is evil.
glass_superman t1_ixen19z wrote
>And if someone were to design a computer capable of suffering, or in other words, a machine that can experience - I don't think its possible and it would need to be so entirely different from the computers we know that we wouldn't call it a "computer" - that person is evil.
I made children that are capable of suffering? Am I evil? (I might be, I dunno!)
If we start with the assumption that no computer can be conscious then we will never notice the computer suffer, even if/when it does.
Better to develop a test for consciousness and apply it to computers regularly, to have a falsifiable result. So that we don't accidentally end up causing suffering!
d4em t1_ixeu6yh wrote
I'm not saying it's evil to create beings that are capable of suffering. I'm saying that giving a machine that has no choice but to follow the instructions given to it the capability to suffer would be evil.
And again, this machine would have to be specifically designed to be able to suffer. There is no emergent suffering that results from mathematical equations. Don't develop warm feelings for your laptop, I guarantee you they are not returned.
glass_superman t1_ixfso7p wrote
Consciousness emerged from life as life advanced. Why not from computers?
You could argue that we wouldn't aim to create a conscious computer. But neither did nature aim to create consciousness and here we are.
So I absolutely do think that there's a chance that it simply emerges. Just like it did before. Every day some unconscious gametes get together and, at some point, consciousness emerges, right? If carbon, why not silicon?
d4em t1_ixguiui wrote
Well, first, the comparison you're drawing between something created by nature and a machine designed by us as a tool doesn't hold. We were not designed. It's not that "nature" did not aim to create consciousness; it's that nature does not have any aim at all.
Second, our very being is fundamentally different from what a computer is. Experience is a core part of being alive; intellectual function is built on top of it. You're proposing the same could work backwards: that you could build experience on top of cold mechanical calculations. I say it can't.
Part of the reason is the hardware computers run on: they are entirely digital. They can't do "maybes."
Another part is that computers do not "get together" and have their unconsciousness meet. They are calculators, mechanically providing the answer to a sum. They don't wander, they don't try; they do not do anything that was not part of the explicit instruction embedded in their design.
glass_superman t1_ixhifzy wrote
Is this not just carbon chauvinism?
Quantum computers can do maybe.
I am unconvinced that the points that you bring up are salient. Like, why do the things that you mention preclude consciousness? You might be right but I don't see why.
Sherlockian12 t1_ixe0hxe wrote
This misses the entire point of what explainable AI is. Asking humans to explain their intuition as a precondition for that intuition to be valid is definitely limiting for humans. But explainable AI isn't asking the AI to explain itself; it's being able to pinpoint, exactly or with high probability, the data on which the AI is basing its prediction. That is admittedly useless, and so limiting, for machine learning applications like predicting what food you might like best. It's immensely important, however, in areas like medical imaging, because we want to ensure that the input on which the AI is basing its decision isn't some human-introduced artifact on the x-ray.
It is for these fields that explainable AI is studied, where the limitations it imposes matter far less than being sure the AI isn't making a mistake. So suggesting explainable AI is a dead end is inaccurate, if not a mischaracterisation.
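As a rough illustration of the kind of attribution being described (a toy stand-in, not a real imaging model), with a simple linear classifier you can at least ask which input pixels drove the score, and spot a dominant artifact:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "classifier": logistic regression over a flattened 8x8 patch,
# with pretend learned weights and a pretend x-ray patch.
weights = rng.normal(size=64)
image = rng.random(64)

score = 1.0 / (1.0 + np.exp(-(image @ weights)))  # "abnormal" probability

# Per-pixel contribution to the decision (weight * input). A bright
# scanner artifact would show up here as one dominant contributor --
# the sanity check the comment above is asking for.
saliency = (weights * image).reshape(8, 8)
top_pixel = np.unravel_index(np.abs(saliency).argmax(), saliency.shape)
print(score, top_pixel)
```

Real deep-learning attribution methods are far more involved, but the goal is the same: point at the input, not at the model's inner reasoning.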
glass_superman t1_ixe1b9e wrote
I didn't mean that the AI should be able to explain itself. I meant that we should be able to dig into the AI and find an explanation for how it worked.
I'm saying that requiring either would limit AI and decrease its usefulness.
Already we have models that are too difficult to dig into to figure out why a choice was made. You can step through the math of a deep learning system, but you can't pinpoint the decision in there any more than you can root around in someone's brain to find the neuron responsible for a behavior.
Sherlockian12 t1_ixe3rh4 wrote
And you're missing the point of the field if you're making the trivial observation that working out an explanation decreases the usefulness.
That is the point. We want to decrease its usefulness and increase its accuracy in fields where accuracy is paramount. This is akin to the relationship between physics and math. In physics, we routinely make unjustified steps to make our models work. Then in math, we try to find a reasonable framework in which the unjustified steps are justified. Saying "math reduces the usefulness by requiring an explanation for seemingly okay steps" is to miss the point of what mathematics is trying to do.
glass_superman t1_ixeoz42 wrote
>And you're missing the point of the field if you're making the trivial observation that working out an explanation decreases the usefulness.
That's not what I said! I'm saying that limiting AI to only the explainable may decrease usefulness.
This is trivially true. Imagine you have many AI programs, some of which you can interrogate and some you can't, and you need to pick one to use. If you throw out the unexplainable ones, you have fewer tools. That's not a more useful situation.
>That is the point. We want to decrease it's usefulness and increase its accuracy in fields where the accuracy is paramount.
But accuracy isn't the same as explainability! A more accurate AI might be a less explainable one. Like a star poker player with good hunches vs a mediocre one with good reasoning.
We might decide that policing is too important to be unexplainable so we decide to limit ourselves to explainable AI and we put up with decreased utility of the AI in exchange. That's a totally reasonable choice. But don't tell me that it'll necessarily be more accurate.
> Saying "math reduces the usefulness by requiring an explanation for seemingly okay steps" is to miss the point of what mathematics is trying to do.
To continue the analogy, there are things in math that are always observed to be true yet we cannot prove it. And we might never be able to prove them. Yet we proceed as if they are true. We utilize that for which we have no explanation because utilizing it makes our lives better than waiting around for the proof that might never come.
Already math utilizes the unexplainable. Why not AI?
notkevinjohn t1_ixe8e3s wrote
I don't necessarily agree that we need to have what you call 'unexplainable AI' and what I would call 'AI using machine learning' to solve the kinds of problems that face police today. I think that you can have systems that are extremely unbiased and extremely transparent that are written in ways that are very explicit and can be understood by pretty much everyone.
But I do agree with you that it's a very biased and incomplete argument to say that automated systems are working in ways that are opaque to the communities they serve and ignore the fact that it's not in any way better to have humans making those completely opaque decisions.
glass_superman t1_ixem81g wrote
>I don't necessarily agree that we need to have what you call 'unexplainable AI'
To be more precise, I'm not saying that we must have unexplainable AI. I'm just saying that limiting our AI to only the explainable increases our ability to reason about it (good) but also decreases the AI's ability to help us (bad). It's not clear whether the trade-off is worth it. Maybe in some fields yes, in others no.
Most deep learning is already unexplainable and it's already not useful enough. To increase both the usefulness and the explainability will be hard. Personally, I think that maximizing both will be impossible. I also think that useful quantum computers will be impossible to build. I'm happy to be proven wrong!
notkevinjohn t1_ixex7vp wrote
Yes, and I am pushing back on the spectrum of utility vs transparency you're suggesting. I think that the usefulness of having a transparent process, especially when it comes to policing, vastly outweighs the usefulness of any opaque process with more predictive power. I think you need to update your definition of usefulness to account for how useful it is to have processes that people can completely understand and therefore trust.
glass_superman t1_ixfsc4n wrote
I agree with you except for the part where you seem very certain that understanding trumps all utility. I think we might find some balance between utility and explainability. Presumably there would be some utilitarian calculus to weigh the importance of explainability against the utility of the AI's function.
Like for a chess-playing AI, explainability might be totally unimportant, but for policing it is. And for other stuff it's somewhere in the middle.
But say you have the choice between an AI that drives cars and you don't understand it versus an explainable one but the explainable one is shown to lead to 10 times the fatalities of the other one. Surely there is some level of increased fatalities where you'd be willing to accept the unexplainable?
Here's a blog with similar ideas:
https://kozyrkov.medium.com/explainable-ai-wont-deliver-here-s-why-6738f54216be
notkevinjohn t1_ixg4zez wrote
Yeah, I do think I understand the point you're trying to make, but I still don't agree. That's because the transparency of the process is inextricable from your ability to see whether it's working. For a legal system to be useful, it needs to be trusted, and you can't trust a system if you can't break open and examine every part of it as needed. Let me give a concrete example to illustrate.
Take the situation described in the OP, where the police are not distributed evenly along racial lines in a community. Let's say the police spend 25% more time in the community of racial group A than in that of racial group B. Group A is going to assert that there is bias in the algorithm that targets them, and if you cannot DEMONSTRATE that not to be the case, then you'll have the kind of rejection of policing we've been seeing throughout the country in the last few years. You won't be able to get people to join the police force, you won't get communities to support it, and when that happens it won't matter how efficiently you can distribute officers.
Just like not crashing might be the metric by which you measure the success of an AI that drives cars, trust would be one of the metrics by which you would measure the success of any kind of AI legal system.
glass_superman t1_ixga3jd wrote
>And that's because the transparency of the process is inextricable from your ability to see if it's working.
Would you let a surgeon operate on you even though you don't know how his brain works? I would because I can analyze results on a macro level. I don't know how to build a car but I can drive one. Dealing with things that we don't understand is a feature of our minds, not a drawback.
>Take a situation described in the OP where the police are not distributed evenly along some racial lines in a community. Lets say that the police spend 25% more time in the community of racial group A than they do of racial group B. That group is going to assert that there is bias in the algorithm that leads to them being targeted, and if you cannot DEMONSTRATE that not to be the case than you'll have the kind of rejection of policing that we've been seeing throughout the country in the last few years. You won't be able to get people to join the police force, you won't get communities to support the police force, and when that happens it's not going to matter how efficiently you can distribute them.
Good point and I agree that policing needs more explainability than a chess algorithm. Do we need 100%? Maybe.
>Just as not crashing might be the metric by which you measure the success of an AI that drives cars, trust would be one of the metrics by which you would measure the success of some kind of AI legal system.
Fair enough. So for policing we require a high level of explainability, let's say. We offer the people an unexplainable AI that saves an extra 10,000 people per year but we opt for the explainable one because despite the miracle, we don't trust it! Okay.
Is it possible to make a practical useful policing AI with a high level of explainability? I don't know. It might not be. There might be many such fields where we never use AIs because we can't find them to be both useful enough and explainable enough at the same time. Governance, for instance.
ThreesKompany t1_ixee6ef wrote
It happened in NYC with fires. It is explored in a fascinating book called "The Fires" by Joe Flood. Basically, the RAND Corporation had used computer models to "more efficiently" provide fire protection in the city, and it led to a massive wave of fires and the destruction of huge swaths of the city.
wkmowgli t1_ixeeivk wrote
For this example we could train an algorithm to estimate the probability of a crime in an area given the amount of patrolling in that area, so the patrol bias could be normalized out if the algorithm is designed properly. The amount of care needed in designing these algorithms will need to be high. I do know that there is active research and development in identifying these biases early (even before deployment), but it'll never be perfect. So it'll likely be a cycle of hurting people, being called out, getting fixed, and then back to step 1.
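To make the normalization idea concrete, here's a toy sketch. All the numbers and the linear detection model are illustrative assumptions, not anything a real system would ship with:

```python
# Toy sketch: correcting observed crime counts for patrol exposure.
# Observed counts are roughly (true rate) x (detection probability), and
# detection probability rises with patrol hours. Dividing by an exposure
# estimate gives a less patrol-biased rate. The linear model and the
# baseline are simplifying assumptions for illustration only.

def exposure_adjusted_rate(observed_crimes, patrol_hours, baseline_hours=10.0):
    """Estimate a per-area crime rate normalized by patrol exposure.

    Assumes detection probability scales linearly with patrol hours
    relative to a baseline (a strong simplification).
    """
    detection_factor = patrol_hours / baseline_hours
    return observed_crimes / detection_factor

# Area A: heavily patrolled; Area B: lightly patrolled.
rate_a = exposure_adjusted_rate(observed_crimes=50, patrol_hours=25.0)  # 20.0
rate_b = exposure_adjusted_rate(observed_crimes=12, patrol_hours=5.0)   # 24.0
```

After adjusting for exposure, area B comes out with the *higher* estimated rate despite having far fewer recorded crimes, which is exactly the kind of feedback loop the raw counts would hide.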
littlebitsofspider t1_ixfq8bn wrote
I wonder what would happen if they took the Abraham Wald approach and designed a counterintuitive algorithm. Like, make a heatmap of violent crimes (assault, robbery, rape, etc.), and then sic the algo on non-violent crimes in the inverted heatmapped areas, like larceny, wire fraud, and so on. Higher-income areas have wealthier people, and statistically wealthier people are better equipped to commit high-dollar white collar crimes. You could also use the hottest areas on the violence heatmap to target social services support.
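The inversion itself is trivial to sketch. Everything here (area names, heat values) is made up purely to show the idea:

```python
# Illustrative sketch of the inversion idea: rank areas by violent-crime
# intensity, then direct white-collar investigative effort toward the
# areas that are *coolest* on the violence map. All data is invented.

violent_heat = {"area1": 0.9, "area2": 0.1, "area3": 0.5}

max_heat = max(violent_heat.values())
white_collar_priority = {area: max_heat - heat
                         for area, heat in violent_heat.items()}

# The coolest area on the violence map gets the top white-collar priority.
top_priority = max(white_collar_priority, key=white_collar_priority.get)
```

Here `top_priority` comes out as the least violent area, which is where this approach would send the fraud investigators.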
notkevinjohn t1_ixdw7mb wrote
Machine Learning, Artificial Intelligence, and Algorithm are all terms that exist in the same space of computer science, but they absolutely do NOT all mean the same thing, and in your post here you used them all interchangeably.
An algorithm is a very generic term for some kind of heuristic that can be followed to produce some result. A recipe for cookies is an algorithm, just like the algorithm on Facebook that decides which posts to show you. Machine learning takes place when a system's behavior is derived from data rather than explicitly programmed: it does things the programmers didn't explicitly tell it to do; it actually learns how to do new things. An artificial intelligence is a system that's designed to do tasks in the same way a human would, often involving processing visual data or making human-like decisions.
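A toy contrast between those first two terms (both rules here are invented for illustration):

```python
# An explicit algorithm is fully transparent: every decision traces back
# to a line a human deliberately wrote.
def explicit_rule(x):
    return x > 10  # fixed, human-chosen cutoff

# A minimal "learned" rule derives its cutoff from data instead (here,
# just the mean of the samples). Its behavior depends on what data it
# happened to see, which is where bias creeps in.
def learn_threshold(samples):
    return sum(samples) / len(samples)

threshold = learn_threshold([4, 8, 12, 16])  # 10.0 for this data

def learned_rule(x):
    return x > threshold
```

Both functions may flag the same inputs, but only the first one can be audited without also auditing the training data.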
If you wanted to make the case that we shouldn't use MACHINE LEARNING in policing, I would 100% agree with that statement: our police policies should be very deliberate and very transparent, and machine learning wouldn't be either of those things. But using that as an argument that we shouldn't embrace policing with explicitly defined algorithms, which are far MORE transparent and deliberate than the humans they would replace, is absolutely indefensible. If there's one thing we've learned in the past few years, it's that police need far more regulation, and that's exactly what algorithms provide, whether they are implemented by a computer or by some system of rules and laws.
jovahkaveeta t1_ixf3hon wrote
What if we used victim surveys as training data instead, in which victims of crime can specify the place where the crime occurred?
manFigSpaceTheorist t1_ixft0eg wrote
thank you
eliyah23rd t1_ixdrm3e wrote
Computers are no longer following instructions. That went out about 10 years ago.
They're just juggling numbers. Same as us really but without the ability to self-reflect (yet)
d4em t1_ixdsosv wrote
They're following instructions to juggle numbers. If you can hand me the human source code, I'll gladly read it, but as far as I'm aware there is no such document in existence.