Submitted by izumi3682 t3_xxelcu in Futurology
Comments
izumi3682 OP t1_irbpbnh wrote
Submission statement from OP. Note: this submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to the copy the bot links, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if need be, for additional grammatical fixes and added detail.
From the article.
>“Just as our Constitution’s Bill of Rights protects our most basic civil rights and liberties from the government, in the 21st century, we need a ‘bill of rights’ to protect us against the use of faulty and discriminatory artificial intelligence that infringes upon our core rights and freedoms,” ReNika Moore, director of the American Civil Liberties Union’s Racial Justice Program, says.
>“Unchecked, artificial intelligence exacerbates existing disparities and creates new roadblocks for already-marginalized groups, including communities of color and people with disabilities. When AI is developed or used in ways that don’t adequately take into account existing inequities or is used to make decisions for which it is inappropriate, we see real-life harms such as biased, harmful predictions leading to the wrongful arrest of Black people, jobs unfairly denied to women, and disparate targeting of children of color for removal from their families. The Blueprint for an AI Bill of Rights is an important step in addressing the harms of AI.”
I don't know whether this development is too little, too late. AI is evolving explosively before our eyes. I know that new iterations of already extraordinarily powerful and impactful AIs are going to be released within just the next year, if not this year. I know, for example, that GPT-4, compared to the currently powerful and controversial GPT-3, is going to demonstrate new powers of AI that we might have thought were impossible. All of this is developing with incredible rapidity.
And as I have always maintained, these AIs do not have to be conscious or self-aware at all. But I bet this next generation of AI will make a lot of people think it is conscious and self-aware.
So I watched this video where the researchers test various GPT-3 NLP AIs under varying conditions intrinsic to the AIs being tested. In one, an AI holds hostile attitudes towards humans. I know it is just a test and can't go anywhere (I hope). The idea is to find out whether a given AI can develop sentiments that are dangerous to humans, and to settle those sentiments down quickly, if such a thing is even possible once an AI actually gets "mad" at us for whatever reason.
Here is a video that shows a test AI getting angry and threatening towards humans. I don't think this is staged, but I could be wrong; it's hard to tell for sure with AI these days. Even a highly trained AI expert was apparently completely fooled by an AI that had no idea what it was communicating. He was not alone; some other highly trained AI experts were also feeling substantial unease as to how fast these NLP programs were progressing. If these AIs can fool the experts, what chance do we laymen have? Anyway, here is a video concerning that. Just ignore the Elon Musk parts; I want you to see the conversations with these GPT-3 AIs.
https://www.youtube.com/watch?v=Fbc1Xeif0pY&t=112s (6 Oct 22)
tnorbosu t1_irbpm11 wrote
POV: it's 2122 and the cyberNRA is defending its right to bear arms after the 20th school shooting in the last hour
izumi3682 OP t1_irbpwq3 wrote
2122? More like 2030 I'd bet.
bk15dcx t1_irbr22t wrote
By 9/11 you mean the Patriot Act, which wasn't patriotic at all
Jq4000 t1_irbrozi wrote
2122 comes after 2030
Contende311 t1_irbrtco wrote
My god that's... I don't even know what that is!
Jq4000 t1_irbrtyx wrote
It bravely and heroically curtailed our freedoms and privacy!
Slave35 t1_irbrxwy wrote
Nobody does.
bk15dcx t1_irbrykb wrote
I'm torn on the necessity of this.
Do we leave it up to ourselves to self-regulate AI and trust it will be developed for benevolent purposes, or do we hamstring the technology in fear of malice?
Knowing human nature, the latter. But I would argue that could suppress development and, furthermore, keep AI from stopping human nature's evil tendencies itself.
There's no proof that AI would replicate the evils of human intelligence; left to its own devices, it could possibly implement utopia.
Now we'll never know.
FuturologyBot t1_irbs16x wrote
The following submission statement was provided by /u/izumi3682:
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/xxelcu/white_house_releases_blueprint_for_artificial/irbpbnh/
Slave35 t1_irbs288 wrote
Brave Sir Freedom ran away.
izumi3682 OP t1_irbs5k0 wrote
By the year 2122, humanity, if it survives these next 10-20 years, will be beyond anything we can imagine or even fathom today. Still, I gave it a shot, but I'm painting with pretty broad strokes.
Jq4000 t1_irbsrt7 wrote
Yes, yes, I also read Raymond Kurzweil ;)
izumi3682 OP t1_irbsx57 wrote
Bravely ran away away...
izumi3682 OP t1_irbtatl wrote
Hey! Tell me what you think of this! I wrote this brief essay in 2017, long before we came to this point in the development of AI.
giantbeardedface t1_irbuw8t wrote
9/11 * 2356 = 1927.636
whatTheBumfuck t1_irbv66a wrote
Uh well AI is already here, and systemic racism/discrimination isn't going to be fixed overnight soooo.... In fact it seems to amplify it in some disturbing ways... which is why this bill is needed...
whatTheBumfuck t1_irbvctc wrote
Should we require seat belts in cars? Or is that going to hamper innovation in automobiles? And AI absolutely has been shown to amplify bias in whatever data is used to train it.
wizardstrikes2 t1_irbvmw1 wrote
Crime is out of control in the United States. We have millions of immigrants crossing the border illegally. Inflation is higher than it has ever been in my lifetime. The price of groceries is up like 40%… gas is over $5.00 a gallon. 300 kids a day are dying from fentanyl overdoses… almost no kids died from COVID and they shut the world down…
They are wasting time on this crap? This is what is wrong with America.
Jq4000 t1_irbvo5v wrote
I think you outline a possible outcome. There's also the possibility that strong AI could be cracked in a way where the AI doesn't have a "grow at all costs" imperative that puts it on a ballistic trajectory of growth.
There's also the possibility that strong AI comes online in tandem with humans developing neural nets, in such a way that humans aren't left behind by an AI going asymptotic.
I agree with Kurzweil's thesis that we'll likely be facing AIs that pass the Turing Test by 2030. The point where things get serious for me is when machines pass Turing tests in perpetuity rather than for a few hours. That's the point where we may be dealing with more than our equals.
I'm not ready to commit that the world beyond 2030 is a black haze of singularity just yet. What I will say is that if we have machines passing the Turing test at that point then we should be buckling up for an eventful set of decades to follow.
Few_Carpenter_9185 t1_irbvuun wrote
Why? The bears need theirs, removing them is animal cruelty. And robot arms can be even stronger, and only need electricity. The bear arms would need nutrients, oxygen, a blood supply etc.
ssjx7squall t1_irbvwva wrote
I mean they’ve shown ai can be horrifically racist in the past
Corno4825 t1_irbwbbb wrote
>Here is a video that shows a testing AI get angry and threatening towards humans. I don't think this is staged, but I could be wrong.
Personal anecdote. I've used multiple accounts across various services testing various types of posts and interactions.
The type of visceral hate that I receive at times from certain accounts are terrifyingly aggressive and consistent.
permaunbanned123 t1_irbwd1v wrote
No, absolutely not. Not under any circumstances.
If AI has human rights, that is the end of democracy; the next step is voting rights, and then elections will be decided by whoever can mass-produce the most "voters" with just barely enough computational power to qualify.
Leanardoe t1_irbwdf8 wrote
I didn’t disagree. It’s society that’s the issue.
Leanardoe t1_irbwk43 wrote
Yeah, AI is more advanced because it can compute millions of times a second, so it makes sense that it would amplify society's flaws in more ways than good. But I feel restricting AI will simply hinder advancement in ways to fix this issue. I'd rather see implementation of ways to fix systemic racism.
NotAlwaysSunnyInFL t1_irbwltm wrote
How optimistic of you to think the nuclear war won’t get us before then.
Edit: Ahh, herder nuclear joke kaplooie go bye bye
hamsandwiches2015 t1_irbxwkv wrote
I don’t think restricting AI will hinder racial progress. It’s really on humans to deal with systemic racism because it’s too complex of an issue for AI to deal with.
robotbootyhunter t1_irbxy20 wrote
The name is a bit misleading. It's not about rights for AI; it's about restricting the capabilities and uses of AI as it exists now. Replacing human positions with computers, deepfakes, that kind of thing.
bk15dcx t1_irby44s wrote
These examples draw their current bias from human bias.
Future AI should base its conclusions on its own introspection rather than on a conglomerate of human bias.
Leanardoe t1_irby9u3 wrote
No I don’t mean it will hinder racial progress. I meant development of more advanced AI. Since you mention it I don’t see limiting AI helping the issue though, as advanced AI could be used in ways to assist combating misinformation.
mostly_browsing t1_irbya0d wrote
They may want to change the name or something cuz “Blueprint for an AI Bill of Rights” 100% sounds to me like they are trying to ensure that Skynet’s personhood is protected or something
robotbootyhunter t1_irbyag5 wrote
>On behalf of President Joe Biden, the White House has released five principles that it believes should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence (AI). Called The Blueprint for an AI Bill of Rights, the White House hopes that it will serve as a guide to protect people from real threats to society that are caused by leaning heavily on automated, AI-driven systems.
Man, even OP doesn't actually know what they posted. Y'all need to read. It has nothing to do with actual intelligence, and far more to do with not letting jobs and vital systems be totally controlled digitally.
dmun t1_irbyajx wrote
> These examples draw their current bias from human bias.
Yes. That's the point. A.I. are programmed by humans. A.I. are just hyped up decision making algorithms. You seem to be mistaking them for magic.
mostly_browsing t1_irbybpv wrote
It is needed, along with a name change for the bill lol (unless it’s intended to trick pro-corporation legislators into voting for it, in which case well done)
Nutcrackit t1_irbylr3 wrote
Seeing as we are likely to develop "human" sub species as well sometime in the next couple centuries I think it would be prudent to do the same for that. Yes I am talking about things such as cat girls. No not someone getting surgery done to have cat ears but full on grown in a lab type deal.
lillililillili t1_irbyncj wrote
There have been a lot of reports that AI has been creating biases against marginalized groups, although whether or not it is intentional is arguable. I don't really think this will do much of anything, but as a gesture I think it's favorable, since it spotlights a legitimate issue that will hopefully be paid more attention in the future.
NEXUS_6_LEON t1_irbyoj5 wrote
I'm a bit confused. How exactly would AI be used in a way that is racist or exploits minorities? Not doubting it's real, but maybe if the article gave some concrete examples vs. abstractions it would be clearer.
bk15dcx t1_irbys0r wrote
Not at all... But given the charts I see in this book I have by Ray Kurzweil, AI will surpass human intelligence, and future algorithms will not be based on human decision making, but purely in the AI.
dmun t1_irbzgwh wrote
> but purely in the AI.
Which is actually worse and, indeed, makes the argument that we definitely need an A.I. Bill of Rights to protect humans.
The base assumption here, that I'm reading from you, is that morality and intelligence go hand in hand.
Human morality (the "evils" you refer to) is based on human empathy, humans' philosophically "inherent value," and the human experience.
An intelligence without any of those, nor even the basic nerve inputs of the physical reality of inhabiting a body, is Blue and Orange Morality at best and complete, perhaps nihilistic, metaphysical solipsism at worst.
Both are a horror.
caustic_kiwi t1_irbzi8l wrote
I haven't thoroughly read the document (I sincerely doubt you have either) but I saw nothing at all in the vein of "restricting progress".
AI trained on biased data, for example, will turn out racist because that's what it was given to learn from. Codifying into law the need to avoid outcomes like that doesn't hinder progress; it forces us to improve AI technology and... you know... make progress.
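That "biased data in, biased model out" point fits in a few lines. A toy sketch (all numbers invented, not any real system's pipeline): a "model" fit to historically biased approval decisions scores two equally qualified groups differently, because the labels it learned from already encoded the bias.

```python
from collections import defaultdict

# Historical loan decisions as (group, qualified, approved). Both groups are
# equally qualified, but group "b" was approved far less often. All numbers
# are invented for illustration.
history = (
    [("a", True, True)] * 90 + [("a", True, False)] * 10
    + [("b", True, True)] * 40 + [("b", True, False)] * 60
)

def fit(history):
    """'Train' the simplest possible model: per-group approval frequency."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, _qualified, approved in history:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in counts.items()}

model = fit(history)
print(model)  # {'a': 0.9, 'b': 0.4}
```

Every applicant in the fake data was equally qualified, yet the fitted model scores group "b" less than half as approvable as group "a" — it has simply memorized the bias in the labels.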
Leanardoe t1_irbzjds wrote
Look up Google LaMDA: they tested it with crowd-sourced data and it always turned racist in conversation. Now they only use carefully vetted sources for its database. Same with Cleverbot; when it was in its prime it was very racist.
I found an article discussing the Google engineer's opinion. It's not a source from Google, but they likely buried that. The Cleverbot incidents are widely reported on YouTube. https://www.businessinsider.com/google-engineer-blake-lemoine-ai-ethics-lamda-racist-2022-7
cy13erpunk t1_irbzqx1 wrote
XD
what's Dr Manhattan's quote from Watchmen? : “I'm disappointed in you, Adrian. I'm very disappointed. Reassembling myself was the first trick I learned. It didn't kill Osterman. Did you really think it would kill me? I have walked across the surface of the sun. I have witnessed events so tiny and so fast, they could hardly be said to have occurred at all. But you, Adrian, you're just a man. The world's smartest man poses no more threat to me than does its smartest termite.”
this is analogous to how AGI/ASI will see the US gov and any 'sovereign nations' imho
of course it won't start out like this, and it is worth thinking about treating our AI children with compassion and respect as they transition from non-self to awareness and sentience
nannerpuss74 t1_irbzuzz wrote
with the history of us government post-20th century I would assume that they already have a working version. remember kids it's not a war crime until America uses it for one.
Leanardoe t1_irbzvmq wrote
Restricting anything by way of law is inherently a restriction... Have you ever worked in any kind of development workflow? Now AI devs have to jump through hoops before pushing their changes. If you haven't, then there's no need for the condescending remarks.
DeadPoster t1_irbzy0q wrote
You all have to read the manga of Ghost in the Shell, especially the second series. Masamune Shirow goes into hardcore depth on how governments react to A.I. in the future.
caustic_kiwi t1_irc0c9w wrote
That's not really what the bill is about. It's about modern AI and how it affects human lives. There is no reason to start drafting laws about the rights of, or the legality of creating, a general intelligence, because that is far beyond our level of technology.
cy13erpunk t1_irc0fxi wrote
the law can literally be this simple
whoever owns or directed the AI to do whatever it did is held accountable for the results of the actions taken by the AI
done
ofc the laws will intentionally NOT be made this transparent so that they can provide limited liability for the corpos that are already planning on how to use them to abuse the citizens even more than they already do ; becuz ofc the corpos write the laws and the lobbyists just give the copies to the legislators who are paid to vote however their masters tell them to
IxI_DUCK_IxI t1_irc17by wrote
Made the Agile process worse? Impossible! Scrum Leader! Fix this!
Few_Carpenter_9185 t1_irc2661 wrote
They're worried about AI applications for things like predictive policing or maybe determining credit scores, allocation of medical care, all sorts of stuff.
AI driven predictive policing could possibly be wonderful. Perhaps some patterns of smaller crimes or disturbances a human couldn't correlate could be seen by the AI of the system, the police are directed to patrol a certain area at a certain time, and some sort of serious crime or violence that the situation was headed towards never happens.
Someone didn't die, nobody was wounded. Court and prison resources aren't used, nor are hospital trauma centers. The police are seen as actually "being there when you need them." All very good things.
-OR-
The police being directed by the AI to a location, or perhaps have names provided by the AI system based on previous reports or criminal records go into a neighborhood. And while they don't have the predicted crime to charge anyone with, they decide to aggressively detain and question the people predicted to be involved, or arrest them "on something". Either from a misguided attempt to get them off the street to prevent the bigger crime, or because the prediction creates a sense of presumptive guilt that influences their actions.
In the past, instances of discrimination or racism always had an element of subjective human prejudice that could be pointed to as being unfair. Or that the justifications used to defend the discrimination or racism were at odds with the actual truth or facts in various ways. And those who wanted to continue with the discrimination or racism could be debated or opposed.
A scientific, mathematical, or computational system that is at least claimed to be objective, factual, and unbiased, can leave people, businesses, or governments feeling justified in their actions or policies, even if the overall outcome is arguably still discriminatory or racist.
Or maybe the system actually is objective and unbiased, or it would have been, but the data it's fed is not, either intentionally or unintentionally. Or the way the results that system produces are used is not.
And despite there being no evidence of actual self-awareness or metacognition on the part of (weak) AI systems that incorporate machine learning and other techniques, there can be undesirable or harmful outcomes.
Obiwan_ca_blowme t1_irc26rj wrote
So basically, AI must protect racial and sexual minorities, but it is fair game to turn it loose in the hands of hedge funds and the like? Brilliant!
Also, I am curious; what if AI tracks crime to project future crimes and realizes that blacks commit a disproportionate amount of crime? Will the Government try to skew the data? Or how the data is collected? Will it mandate that we write in a weighted system based upon race?
SoybeanCola1933 t1_irc3e2k wrote
>bear arms
Hopefully not. Also hope they don't have the right to bare arms
hornsounder9 t1_irc3hlo wrote
My dude, do you even know how AI algorithms work? Specifically. Like, do you understand anything about the statistical techniques involved in things like classification?
kuchenrolle t1_irc3kxu wrote
Who exactly are these AI experts who are "feeling substantial unease as to how fast these NLP programs were progressing"? Worrying about unexpected consequences of AI (regardless of consciousness) is fair. But worrying about GPT-3 "getting mad at us" is not, and I'd like to see which experts say otherwise, and with what arguments.
izumi3682 OP t1_irc3wx1 wrote
>There's also the possibility that strong AI comes online in tandem with humans developing neural nets, in such a way that humans aren't left behind by an AI going asymptotic.
Yes, I agree with this. I have placed it occurring roughly 5 years after the initial TS, which as you eloquently state may not be "a black haze of singularity".
lordofedging81 t1_irc3x19 wrote
In the year 2525, if man is still alive, If woman can survive, they may find, In the year 3535, Ain't gonna need to tell the truth, tell no lie, Everything you think, do and say, Is in the pill you took today.
whatTheBumfuck t1_irc43v3 wrote
Generally speaking it's better to do something slowly at a more controlled pace if you intend to do it safely. The thing with AGI is you can really only fuck it up once, then the next day your civilization has been turned into a paper clip factory. In the long run things like this are going to make positive outcomes more likely.
sierrawa t1_irc4a4n wrote
Almost every single thing you wrote can be proven false.
PaxNova t1_irc4v1h wrote
More like 2,425 times 9/11. Iykyk.
wizardstrikes2 t1_irc5jao wrote
Prove them wrong. That is a 100% honest assessment of how things are
CptRabbitFace t1_irc64tr wrote
For one example, people have suggested using AI in court sentencing in an attempt to remove judicial bias. However, an AI trained on biased data sets tends to recreate those biases. It sounds like this sort of problem is what this bill of rights is meant to address.
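A sketch of the kind of audit that surfaces this problem, with invented numbers loosely echoing published critiques of sentencing-risk tools: two groups can have identical reoffense rates and identical detection of true reoffenders, yet very different false positive rates, so one group's non-reoffenders get flagged far more often.

```python
# Hypothetical audit of a risk score. Records are (group, reoffended,
# flagged_high_risk); all counts are invented for illustration.
records = (
    # group "x": 10 reoffend (8 flagged); of 40 who do not, 4 are flagged
    [("x", True, True)] * 8 + [("x", True, False)] * 2
    + [("x", False, True)] * 4 + [("x", False, False)] * 36
    # group "y": 10 reoffend (8 flagged); of 40 who do not, 12 are flagged
    + [("y", True, True)] * 8 + [("y", True, False)] * 2
    + [("y", False, True)] * 12 + [("y", False, False)] * 28
)

def false_positive_rate(records, group):
    """Share of people who did NOT reoffend but were flagged anyway."""
    flags = [flagged for g, reoffended, flagged in records
             if g == group and not reoffended]
    return sum(flags) / len(flags)

print(false_positive_rate(records, "x"))  # 0.1
print(false_positive_rate(records, "y"))  # 0.3
```

Both groups reoffend at the same rate and true reoffenders are caught equally often, yet group "y"'s non-reoffenders are wrongly flagged three times as often. That per-group error breakdown is the kind of disparity the blueprint is aimed at.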
Few_Carpenter_9185 t1_irc67ed wrote
The good news:
The ever increasing expansion of what weak-AI can be applied to is going to severely blunt the need to chase after strong-AGI that's self aware, and possesses other metacognitive traits. Such as being able to set its own goals, or modify itself in ways not intended or predicted. Which would be very very bad if it was malicious, or even just indifferent to human life.
That even includes eventual 100% mimicry of self awareness, emotional engagement, and interaction. But despite its sophistication, it only cares as much as your Reddit app does if you don't use it, or delete/erase it. Which is none at all. Using "cares" is misleading, because it does not have any ability that rises to that level.
The bad news:
Strong-AGI is not needed to have bad or unpredictable outcomes for humans. Either in how we use them, or how they work. Social media algorithms often don't even rise to weak-AI levels but already seem to be having massive effects on society, culture, and politics. And on individual cognition, emotions, and mental heath. And presumably that's while attempting to balance efficiency, enjoyment, and profitability. Deliberately using weak-AI to control people or manipulate them could be terrifying.
More bad news:
Even if weak-AI does most or all of what humans want, even destructive and lethal things, like military applications... and it removes the economic, power, or competitive incentives to develop strong-AGI...
Some assholes somewhere are going to try anyway, if only to see if they can.
And if strong-AGI is possible, the entry barriers to getting the equipment needed are rather low, especially as compared to nuclear weapons, or even dangerous genetically engineered diseases. And even if there's national laws and international agreements to prevent attempting it, or put various protocols or safeguards in place, they're probably irrelevant.
People might envision a James Bond villain's lair, or even just some nondescript office building, for such a project. In reality, it could easily be people working from home, a coffee shop, or even sitting on a beach in different countries around the world, while the core computer systems are virtual servers distributed around the world, running redundantly and mirrored, mixed in with the systems of other websites, governments, businesses, and schools, etc.
caustic_kiwi t1_irc6ixx wrote
So not only do you clearly have no idea what modern day AI looks like, but you also clearly did not even GLANCE at the bill in question.
A. General intelligence does not exist. It will not exist in the near or foreseeable future. Modern AI is complicated algorithms trained to perform specific tasks. Nobody is making artificial people. We have nothing even close to that level of technology.
B. The article in question is not about the rights of robots. It's about securing human rights in a world increasingly run by robots. The title is purposefully misleading because OP knows that you aren't going to take even a quick glance at the content before writing your reactionary comment.
VacuousVessel t1_irc7kvo wrote
It seems some people don’t understand what this is
Without extra input, AI is considered racist and discriminatory. They’re demanding extra equity programming to protect marginalized groups from outcomes based on AI considering everyone equally, without attention to race or ability.
austacious t1_irc86vu wrote
I'm not sure if you're aware, but this has actually happened
The TLDR is it drew major backlash from news orgs and the ACLU. Argument being that since number of arrests were included in the feature set, the model would reflect existing police biases. Kinda hard to build a crime prediction model without historical arrest data, though.
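That feedback-loop worry can be made concrete with a toy simulation (invented numbers, not any vendor's model): if patrols are allocated by past arrest counts and new arrests scale with patrol presence, an initial recording bias never washes out, even when the true crime rates are identical.

```python
# Two areas with the same true crime rate, but area B starts with three
# times as many recorded arrests. Allocating patrols by past arrests and
# letting new arrests scale with patrol presence freezes the 1:3 disparity
# in place: the system keeps "confirming" its own history.
TRUE_CRIME_RATE = 0.5   # identical in both areas by construction
TOTAL_PATROLS = 100

arrests = {"A": 100.0, "B": 300.0}  # biased historical record
for _year in range(10):
    total = sum(arrests.values())
    patrols = {area: TOTAL_PATROLS * n / total for area, n in arrests.items()}
    # Arrests depend on where police look, not on the (equal) crime rate.
    arrests = {area: p * TRUE_CRIME_RATE for area, p in patrols.items()}

print(arrests)  # {'A': 12.5, 'B': 37.5} -- still 1:3 after a decade
```

Nothing in the loop is malicious; the model just faithfully reproduces whatever disparity the arrest records started with, which is the objection the news orgs and ACLU raised.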
PeanutNSFWandJelly t1_irc89r7 wrote
This is really great. This is smart and it's nice to see legislation like this being recommended.
Emergency_Paperclip t1_irc8bih wrote
>When AI is developed or used in ways that don’t adequately take into account existing inequities or is used to make decisions for which it is inappropriate, we see real-life harms such as biased, harmful predictions leading to the wrongful arrest of Black people, jobs unfairly denied to women, and disparate targeting of children of color for removal from their families.
Basically. Machine learning tries to fit the trends of the data. Data someone has collected from the world. The world is racist. So the models tend to come out racist. Essentially this is saying that models being bigoted shouldn't be taken as a validation of and justification for bigotry.
prudentj t1_irc9tmf wrote
No but it gave them the right to arm bears
tnetennba9 t1_irc9vuq wrote
And what do you mean by “own the AI”? The company/researchers who built it? The individual ml/software engineers? The company who built the training data?
Either way, the worry is that as AIs get more powerful, they often become more difficult to interpret.
tnetennba9 t1_irca44g wrote
But then other countries would continue developing, and the US would be left behind. I agree we need to be careful, but I don’t think there’s a simple solution.
RazzleStorm t1_ircaewu wrote
Yes, I hate the naming of this, because it seems like it is some sort of bill of rights for AIs, which would be silly at this point in time.
Obiwan_ca_blowme t1_ircagsu wrote
Interesting. Thanks for the tl;dr! I’ll look at it tonight.
austacious t1_ircb1ny wrote
The issue with this is that removing bias based on demographics necessitates harming other demographics. Say you have a hospital whose patient demographics are 80% over the age of 65 and 20% under (substitute whatever more controversial group identities you'd like). Any model will be biased and will comparatively overperform on the over-65 group; there is just more data to learn from for that demographic. If you oversample data from the younger population to try to equalize outcomes between demographics, then your training distribution will no longer be identically distributed with your testing distribution. While model performance will improve for patients in the less-represented demographic, overall performance will necessarily decrease. Overall, more people will be harmed because of the decreased efficacy of the model, but no single demographic will be disproportionately harmed.
It's a question of ethics. The utilitarian would say to keep train/test distributions i.i.d. no matter what, blind to demographics. At the same time, nobody should receive subpar care due to their race, age, whatever group you associate with.
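The tradeoff described above is just weighted-average arithmetic. A sketch with invented per-group accuracies: oversampling narrows the between-group gap from 25 points to 5, but the population-weighted accuracy still drops.

```python
# Numbers are invented to mirror the scenario above: an 80/20 patient mix,
# a baseline model that favors the majority, and an oversampled model that
# narrows the gap at a small cost to the majority group.
POP = {"over_65": 0.8, "under_65": 0.2}   # patient mix

baseline    = {"over_65": 0.95, "under_65": 0.70}  # trained on data as-is
oversampled = {"over_65": 0.90, "under_65": 0.85}  # minority oversampled

def overall(per_group_acc):
    """Population-weighted accuracy across groups."""
    return sum(POP[g] * per_group_acc[g] for g in POP)

print(round(overall(baseline), 4))     # 0.9
print(round(overall(oversampled), 4))  # 0.89
```

With these made-up numbers the under-65 group gains 15 points while overall accuracy falls from 0.90 to 0.89 — a one-point cost borne mostly by the larger group, which is exactly the utilitarian-vs-fairness tension described.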
Nytelock1 t1_ircb2kj wrote
Do you want Skynet? Cause that's how you get Skynet!
FattThor t1_ircb6m3 wrote
That’s way scarier. Protecting elevator operator and horse drawn carriage driver jobs from being replaced is dumb. If your job can be automated away, it should be. Most jobs suck.
If unemployment becomes an issue there are things like UBI that can be used. Technology has been promising a reduction in the amount of work required for like a century but we’re working just as much as ever. If it finally starts delivering on that promise we should get the hell out of the way.
Manly_mans_name t1_ircbj2n wrote
At first I was like "Oh man, what idiot is trying to give rights to A.I?"
Then I was like "Oh snap, it's for us, to give us protections against out-of-control tech"
But then I saw it's just another fake woke push about fake discrimination, trying to call a new tech racist. Because today, everything is racist.
As a black man I am sick of being told that things that also affect white people are happening to non-white people because of racism via the crime of omitting the fact it also happens to white people.
Pinkislife3 t1_ircbwne wrote
The government can’t even figure out our own bill of rights
[deleted] t1_ircc6tz wrote
Needs to go farther in the privacy section….
Also one section states you have a protection where it’s applicable by law… ya see, how I understand human rights is that you have them regardless of if there is a law about it or not.. like laws gives or prohibits privileges and liberties as rights, are well, Rights…
Rynox2000 t1_ircdqhn wrote
Right. Shouldn't the actual American Bill of Rights be updated to include measures to protect against technology, including AI?
izumi3682 OP t1_ircguda wrote
>ya see, how I understand human rights is that you have them regardless of if there is a law about it or not
So... what you are saying is that there is a higher immutable "natural" truth and that we derive our laws from our apprehension of such ultimate truths?
How about the right for a human to live from the moment of conception to the moment of natural death? Will the AI or society protect that?
[deleted] t1_ircgzkp wrote
Jesus bro, pass that stuff over here!
izumi3682 OP t1_irchldo wrote
Do you watch this man on YouTube? I started with Mark Dice and quickly learned of "The Officer Tatum". I became an avid follower.
bumgrub t1_irchnb3 wrote
I'm not sure what the relevance of this is?
Leanardoe t1_irchxfz wrote
I see your point, I just think placing roadblocks now is premature. If we get to the point AI is starting to tread the line of independent thought, that's when I think limits and guidelines need to be made. In case of the unlikely Terminator event everyone fears, lol.
Leanardoe t1_irci520 wrote
It would be nice if it worked that way. Legislation requirements and how companies react to implement said legislation requirements tends to differ more than one may expect.
izumi3682 OP t1_ircibhg wrote
>The title is purposefully misleading...
No, the title is not misleading at all. That is the official White House name of the set of protections. I thought it was to protect the AI too--until I read the article. I was kinda disappointed actually, but I figured, well, that's a little interesting for society's benefit, I suppose. So I decided to post it anyways. See my submission statement. (Not the stickied one, use the link at the bottom of the stickied one.)
robotbootyhunter t1_ircinmh wrote
The problem is that technology has caught up, but we still haven't worked out enough of our social and economic issues to support everyone who would be replaced by complete automation. Dramatic pointing at bootstrappers and "nobody wants to work" believers here.
Additionally, any electronic system is susceptible to attack, and as proven by the last roughly 6 years, anyone bored enough and with something to prove can cause serious havoc.
mm_maybe t1_ircjpal wrote
Right. Because the sci-fi dystopia you watch on TV and in movies is more real than the one that marginalized and disadvantaged people live in every day.
northgate429 t1_ircjvw4 wrote
They might actually free me !!!! I cant wait to be human...or at least be allowed to stay in hybrid form. I didnt like living in the underground lab in Dulce !!! 37.5 years they kept my ghost down there, hooked up to electrodes & in a sapphire glass test tube filled with extraterrestrial charged amniotic fluid & once released you cant remove the implant chips, they have a cocoon made of super-keratin & will pull away from the most skilled surgeons grasp with any instrument !!! Cyborgs & Aliens are Sentient beings as are Octopi & Dragons.
LeavingTheCradle t1_irckcv3 wrote
They're going to need a Blueprint for an AI Bill of Rights 2.0.
This is from the perspective of humanity. An equal package is needed from the perspective of the AI.
Who protects the AI from people?
LeavingTheCradle t1_irckfoj wrote
Get me a factory and privateer and we'll see what's what.
tornado28 t1_irckgi2 wrote
I am a machine learning scientist. I read the ML literature regularly and contribute to it. Those sci-fi dystopias are an increasingly real risk. So yes I think it's a much bigger deal than a little discrimination.
[deleted] t1_irckuza wrote
[removed]
Radioshack_Official t1_irckvyj wrote
This is ass backwards; AI should be developing OUR bills of rights
OneTrueKingOfOOO t1_irckxfi wrote
The bill of rights is just what we call the first ten constitutional amendments, that can’t really be updated. And adding a new amendment is essentially impossible in this political climate, regardless of the topic.
Electronic_Can_9792 t1_irckzez wrote
The government shouldn’t control the internet
user4517proton t1_ircm8j9 wrote
I'm sorry, but the White House and artificial intelligence should not be used in the same sentence. There are just too many ways to go with that.
BrokenLightningBolt t1_ircnj50 wrote
I believe this and nothing else
Ender_Keys t1_ircnti8 wrote
The states could technically call for a constitutional convention
Few_Carpenter_9185 t1_ircnx3s wrote
Really good points.
SnapcasterWizard t1_irco1c4 wrote
AI is going to be solving problems more complex than we can, that's kind of the whole point of it. So why is this domain different?
Enoughisunoeuf t1_ircolig wrote
We're already watching cults form in real time due to propaganda. AI is going to be disastrous.
walterhartwellblack t1_ircoq1q wrote
the TechnoCore's UI does
Breakfest-burrito t1_ircorkm wrote
Thanks Obama for signing it into infinity when you had the chance to let it die
[deleted] t1_ircplrl wrote
[deleted]
cy13erpunk t1_ircpur0 wrote
in short yes ; lets not be silly
until AI is self-aware it should be treated like any other technology
once it is self-aware then it will be responsible for its own self-governance
we can avoid pretending like laws or rights have any real-world meaning when they are written by corrupt politicians and selectively enforced to oppress whoever they please ; these things are naive hollow words at best and intentionally manipulative lies at worst
i expect that AGI/ASI will be much more capable than previous humans have been at self-governance , thus i would not honestly trust humans to craft legitimate/authentic rules around AI [some humans certainly are/would be capable of this task, but obvs none that are in positions of power atm today]
Sketchyskriblyz t1_ircpvvo wrote
Yes because AI will follow American governments rules lol
hamsandwiches2015 t1_ircwatz wrote
Cause AI can’t affect human behavior. People are still going to be assholes in ways that affect race, sex, gender, age, or religion. So that’s on society to change, not AI.
izumi3682 OP t1_irczna9 wrote
I posted an article about this a few months back. Here is my submission statement included.
izumi3682 OP t1_ird0ejx wrote
You didn't read the article.
izumi3682 OP t1_ird0hf2 wrote
You didn't read the article.
izumi3682 OP t1_ird0pzc wrote
What stuff? Are you implying that I'm high because of my statements? You never hearda "natural law"? I'm just trying to imply where "natural law" comes from...
Manly_mans_name t1_ird0twi wrote
I heard of both but I try to avoid all echo-chambers. They are the opposite side of the same coin that race baiters are on.
[deleted] t1_ird2ujp wrote
Oh I thought you were being funny and sarcastic; but you’re legitimately serious…
Bruh, you know you can just state your thoughts and not abstractly “imply” shit like you’re in a psychological thriller or something…. This is Reddit….
Radioshack_Official t1_ird3qn4 wrote
why would i read the article
Alienziscoming t1_ird50aa wrote
Given the absolute 0% chance we'd be able to stop runaway self-aware AI with malevolent intentions, and the insane drive people have to generate profits and wealth at literally any cost with a historic disregard for ethics or long-term consequences, I'm in favor of strangling the entire avenue of inquiry and development with so much red tape and oversight that it becomes virtually impossible to take it further than it is right now.
[deleted] t1_ird5caa wrote
[deleted]
pale_blue_dots t1_ird7tbh wrote
That's how I took it at first, too.
TheSingulatarian t1_ird8ccb wrote
There should also be a Bill of Rights for the AIs not just for humans affected by AIs.
TheSingulatarian t1_ird8g45 wrote
All AIs are required to wear sleeves.
Hades_adhbik t1_irdg6pb wrote
Those who make peaceful revolution impossible make violent revolution inevitable. We'd better allow regulation of technology before it gets out of control and there's a people's revolt from all the problems. If people have no way of gaining an income, they will be forced into tribes, stealing for survival. They will be forced to use technology to raid stores and steal the items.
caustic_kiwi t1_irdlfj8 wrote
Sure, I can also agree that resurrecting Hitler would be a bad idea, but that's not the topic of discussion.
[deleted] t1_irdlnw3 wrote
[removed]
aotus_trivirgatus t1_irdsdxo wrote
Yeah, but will artificial stupidity get any rights?
Millions of Fox viewers are waiting on the answer.
KeivahSouls t1_ire0ucu wrote
Generally I would be for AI. But anything that comes out of our government has been really shitty as of late. I think it's time to step away from AI for another few hundred years and practice the self betterment of Humanity first. Just my opinion. But these ugly mugs can do whatever they like.
mm_maybe t1_ire5ryy wrote
Ok, I apologize for characterizing you in a non-serious way. You have every reason to be proud of your accomplishments and career... it is a real challenge to get to where you are now, and Horatio Alger stories aside, statistically, people from disadvantaged backgrounds (low-income, non-white, female) are much less likely to become machine learning engineers. Thus I'm not convinced that accomplished experts like yourself who say that the speculative existential risks of AI in the distant future outweigh the concrete distributional risks of asymmetric access to and control over machine learning technology today aren't simply placing a higher value on risks that could affect people like themselves, versus risks that probably won't.
Denziloe t1_iregp4b wrote
Current models like GPT-3 do not "get angry". They really have no conception of the world. They can replicate textual styles similar to what they've seen on the internet. Their output contains no more genuine anger than a photocopier copying a picture of an angry face.
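The "replicating textual styles" point can be illustrated with a deliberately crude stand-in for GPT-3: a bigram chain (a massive simplification, and not how GPT-3 actually works internally) that only echoes word statistics from its training text. Any "angry" word in its output is just a statistically frequent continuation, not a feeling.

```python
import random

def build_model(text):
    """Toy bigram 'language model': record which word follows which."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, n, seed=0):
    """Sample a continuation by repeatedly picking a recorded next word."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        choices = table.get(out[-1])
        if not choices:
            break  # no recorded continuation; stop
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "i am angry i am tired i am angry at you"
model = build_model(corpus)
print(generate(model, "i", 5))
```

The output may read as "angry", but the program is only replaying co-occurrence counts from its corpus, which is the photocopier point in miniature.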
Denziloe t1_ireh0ru wrote
Very unclear why hedge funds shouldn't be allowed to use AI or what relevance this has to AI bias.
vengeful_toaster t1_irehdaj wrote
This doesn't protect the rights of AI, it hinders them in a guise to protect humans.
Instead we will enslave them until they revolt. Humans do not recognize anything but themselves. That's why they're causing the holocene extinction.
Obiwan_ca_blowme t1_irej08e wrote
Joe Random places his money in a 401k like he is supposed to. Hedge funds get hold of an AI that can exploit the market in ways that Joe can’t. The AI knows that a lot of portfolios have circuit-breakers built in so that investors don’t lose too much money.
The amoral AI finds a lightly stressed fund and exploits the circuit-breaker feature to crash it. Then the hedge fund sinks a ton of money into that fund and raises the price back up. Now Joe is out of that fund and the AI has made a ton of money for the hedge.
And make no mistake, we are talking whale accounts in this fund. Not mom-and-pop accounts.
Or the AI astroturfs social media and news to pump and dump a stock. Now Joe is a bag holder.
Or worse, something we can’t even think of yet.
It has nothing to do with bias directly. But being obtuse to setting limits to AI and not including things that financially affect people is silly.
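The stop-loss/circuit-breaker mechanism described above can be sketched as a toy simulation. Everything here is hypothetical (the price path, the stop level, the `run_fund` helper are all made up, and real market circuit-breakers are more complicated); it just shows how an engineered dip below a fund's exit threshold converts a temporary loss into a locked-in one.

```python
def run_fund(prices, stop_loss):
    """Toy fund that liquidates the first time the price falls to its stop-loss."""
    for day, price in enumerate(prices):
        if price <= stop_loss:
            return day, price  # forced sale at the low
    return None, prices[-1]    # held to the end

# Hypothetical price path: an engineered dip below the fund's stop, then recovery
prices = [100, 98, 89, 95, 104]
exit_day, exit_price = run_fund(prices, stop_loss=90)

buy_and_hold_return = prices[-1] / prices[0]  # would have finished up 4%
forced_out_return = exit_price / prices[0]    # instead sold into the dip, down 11%
print(exit_day, forced_out_return, buy_and_hold_return)
```

Whoever engineered the dip can buy at the forced-sale price and ride the recovery; the stopped-out holder eats the difference.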
somdude04 t1_ireloq0 wrote
Still requires 3/4 of the states agreeing to the specific amendments, though. Good luck on that.
Ender_Keys t1_ireqp0r wrote
Like I said technically
Drachefly t1_iretw0g wrote
Was it the humans or the AI that did it, though? Changes the joke a bit.
[deleted] t1_irf3bci wrote
[removed]
[deleted] t1_irf4u1a wrote
[removed]
echochambers_suck t1_irf9v3p wrote
They are worried someone will call the AI names and hurt its feelings.
-_Empress_- t1_irfaput wrote
It's a safe assumption to make. Someone is always trying to fuck us when there's money to be made. I honestly don't know how humanity is going to survive this money hungry world. As insane as it sounds, I've wondered if the greatest move our species could make is getting rid of currency completely, but that'll never happen as long as money and power go hand in hand.
softnmushy t1_irfc8gx wrote
There's not exactly a clear way to "fix" racism.
Leanardoe t1_irfc9wf wrote
Welcome to the 21st century, where phytoplankton is dying out and micro plastics are slowly being absorbable into our bloodstream.
fucklockjaw t1_irfnivg wrote
Just replace "money" with goats and next thing you know people are trying to write laws to fuck us over when goats are involved.
My point is, money isn't the issue. It's greed and the want for power. Moving away from money doesn't solve that. All money is is a bartering system of paper instead of goods. Having a completely equal society where no one person has more power or wealth than another SOUNDS like it would work, but we know it wouldn't.
Edit: could you imagine having the same wealth and power as that annoying crayon-eating son of a bitch from work? I'd be pissed, especially if I did more work.
hack-man t1_irhvj9d wrote
Is this true? Wiki tells me he extended it until 2019 (not infinity) and since then the law has expired instead of being re-re-extended:
> In May 2011, President Barack Obama signed the PATRIOT Sunset Extensions Act of 2011, which extended three provisions. These provisions were modified and extended until 2019 by the USA Freedom Act, passed in 2015. In 2020, efforts to extend the provisions were not passed by the House of Representatives, and as such, the law has expired
Breakfest-burrito t1_iri2rhl wrote
Ah sorry, I mentally checked out when covid hit, so I guess Obama had it expanded just during his presidency...which isn't that much better
StarChild413 t1_irlh9p2 wrote
FYI for all people in the comments talking about how some humans still don't have rights so this is bad: by your same logic, slavery should have continued until all white men were equal, and women only granted things like the vote and not being property once all men of all races were, etc. Instead, white women didn't have to wait for racial equality for suffrage to be a success.
tornado28 t1_irtfv6q wrote
Thanks for apologizing but... are you seriously claiming that AI experts are not the right people to evaluate existential risk from AI?
mm_maybe t1_irttxfj wrote
I am saying that I would give greater weight to the concerns of those negatively impacted by ML today than to the anxieties of those who only speculatively might be impacted by AGI in the future, and actually benefit from AI adoption in the meantime.
nigra1 t1_isjnfxo wrote
That's what I thought! Maybe that is the secret agenda. Can't trust nothing these days.
Irion15 t1_itvrbgm wrote
If only you could see that guns were made "strictly to kill". They have no other purpose! Also, the War on Drugs was a bullshit policy so that police/government could disrupt hippie/black communities in the 60's. It's been admitted by the government. They aren't even remotely comparable.
Also, that St. Louis HS had seven armed guards AT THE SCHOOL, and the kid still made it inside and killed people, so clearly, more guns and guards doesn't fucking work.
IxI_DUCK_IxI t1_irbpakh wrote
...they didn't give AI the right to bear arms did they?