Comments


IxI_DUCK_IxI t1_irbpakh wrote

...they didn't give AI the right to bear arms did they?

252

izumi3682 OP t1_irbpbnh wrote

Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to my statement at the link they provide, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if need be; it often requires additional grammatical editing and added detail.


From the article.

>“Just as our Constitution’s Bill of Rights protects our most basic civil rights and liberties from the government, in the 21st century, we need a ‘bill of rights’ to protect us against the use of faulty and discriminatory artificial intelligence that infringes upon our core rights and freedoms,” ReNika Moore, director of the American Civil Liberties Union’s Racial Justice Program, says.

>“Unchecked, artificial intelligence exacerbates existing disparities and creates new roadblocks for already-marginalized groups, including communities of color and people with disabilities. When AI is developed or used in ways that don’t adequately take into account existing inequities or is used to make decisions for which it is inappropriate, we see real-life harms such as biased, harmful predictions leading to the wrongful arrest of Black people, jobs unfairly denied to women, and disparate targeting of children of color for removal from their families. The Blueprint for an AI Bill of Rights is an important step in addressing the harms of AI.”

I don't know whether this development is too little, too late. I see AI explosively evolving before our eyes. I know that new iterations of already extraordinarily powerful and impactful AIs are going to be released within the next year, if not this year. I know that GPT-4, for example, compared to the currently powerful and controversial GPT-3, is going to demonstrate powers of AI that we might have thought were impossible. All of this is developing with incredible rapidity.

And as I have always maintained, these AIs do not have to be conscious or self-aware at all. But I bet this next generation of AI will make a lot of people think it is conscious and self-aware.

So I watched this video where researchers are testing various GPT-3 NLP AIs with varying conditions intrinsic to the AIs being tested. In one, the AI is hostile toward humans. I know it is just a test and can't go anywhere (I hope). The idea is to find where a given AI can develop sentiments dangerous to humans, and to settle those sentiments down quickly, if such a thing is even possible once an AI actually gets "mad" at us for whatever reason.

Here is a video that shows an AI under testing getting angry and threatening towards humans. I don't think this is staged, but I could be wrong. It's hard to tell for sure with AI these days. Even a highly trained AI expert was apparently completely fooled by an AI that had no idea what it was communicating. He was not alone. Some other highly trained AI experts also were feeling substantial unease as to how fast these NLP programs were progressing. If these AIs can fool the experts, what chance do the rest of us laymen have? Anyway, here is a video concerning that. Just ignore the Elon Musk parts. I want you to see these conversations with these GPT-3 AIs.

https://www.youtube.com/watch?v=Fbc1Xeif0pY&t=112s (6 Oct 22)

16

Slave35 t1_irbpmj3 wrote

It is meant to protect the rights of the citizenry FROM AI. Which is needful, because AI will be used proprietarily by corporations to gather and manipulate data, and trample the privacy rights of individuals. It will be 9/11 times 2,356.

187

bk15dcx t1_irbrykb wrote

I'm torn on the necessity of this.

Do we leave it up to ourselves to self-regulate AI and trust it will be developed for benevolent purposes, or do we hamstring the technology in fear of malice?

Knowing human nature, the latter. But I would argue that could suppress development and, furthermore, restrict AI from stopping human nature's evil tendencies itself.

There's no proof that AI would replicate the evils of human intelligence; left to its own devices, it could possibly implement utopia.

Now we'll never know.

−5

FuturologyBot t1_irbs16x wrote

The following submission statement was provided by /u/izumi3682:


(See OP's submission statement above.)

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/xxelcu/white_house_releases_blueprint_for_artificial/irbpbnh/

1

Leanardoe t1_irbtatt wrote

Interesting that AI is being restricted to minimize the effects of discrimination on minorities, rather than seeking to fix the root problem of that discrimination. RIP progress.

14

JonU240Z t1_irbu2ur wrote

Am I the only one that doesn’t understand why this is needed?

0

whatTheBumfuck t1_irbv66a wrote

Uh well AI is already here, and systemic racism/discrimination isn't going to be fixed overnight soooo.... In fact it seems to amplify it in some disturbing ways... which is why this bill is needed...

12

whatTheBumfuck t1_irbvctc wrote

Should we require seat belts in cars? Or is that going to hamper innovation in automobiles? And AI absolutely has been shown to amplify bias in whatever data is used to train it.

7

ddrcrono t1_irbvj3f wrote

Whenever I see something like this I can't help but think that somewhere in the fine print there's going to be something that actually ends up being really bad for the average person despite sounding nice.

18

wizardstrikes2 t1_irbvmw1 wrote

Crime is out of control in the United States. We have millions of immigrants crossing the border illegally. Inflation is higher than it has ever been in my lifetime. The price of groceries is up like 40%… Gas is over $5.00 a gallon. 300 kids a day are dying from fentanyl overdoses… almost no kids died from COVID and they shut the world down…

They are wasting time on this crap? This is what is wrong with America.

−8

Jq4000 t1_irbvo5v wrote

I think you outline a possible outcome. There's also the possibility that strong AI could be cracked in a way where the AI doesn't have a "grow at all costs" imperative that puts it on a ballistic trajectory of growth.

There's also the possibility that strong AI comes online in tandem with humans developing neural nets, in such a way that humans aren't left behind by an AI going asymptotic.

I agree with Kurzweil's thesis that we'll likely be facing AIs that pass the Turing Test by 2030. The point where things get serious for me is when machines pass Turing tests in perpetuity rather than for a few hours. That's the point where we may be dealing with more than our equals.

I'm not ready to commit that the world beyond 2030 is a black haze of singularity just yet. What I will say is that if we have machines passing the Turing test at that point then we should be buckling up for an eventful set of decades to follow.

2

Corno4825 t1_irbwbbb wrote

>Here is a video that shows an AI under testing getting angry and threatening towards humans. I don't think this is staged, but I could be wrong.

Personal anecdote. I've used multiple accounts across various services testing various types of posts and interactions.

The type of visceral hate that I receive at times from certain accounts is terrifyingly aggressive and consistent.

1

permaunbanned123 t1_irbwd1v wrote

No, absolutely not. Not under any circumstances.

If AI has human rights that is the end of democracy, the next step is voting rights and then elections will be decided based on who can mass produce the most "voters" with just barely enough computational power to qualify for it.

−2

Leanardoe t1_irbwk43 wrote

Yeah, AI is more advanced because it can compute millions of times a second. So it makes sense that it would amplify society's problems more than its good. But I feel restricting AI will simply hinder advancement in ways to fix this issue. I'd rather see implementation of ways to fix systemic racism.

1

robotbootyhunter t1_irbxy20 wrote

The name is a bit misleading. It's not about rights for AI; it's about restricting the capabilities and uses of AI as it exists currently. Replacing human positions with computers, deepfakes, that kind of thing.

1

bk15dcx t1_irby44s wrote

These examples draw their current bias from human bias.

Future AI should instead draw its conclusions from its own self-introspection rather than from a conglomerate of human bias frequencies.

−1

Leanardoe t1_irby9u3 wrote

No, I don't mean it will hinder racial progress; I meant development of more advanced AI. Since you mention it, I don't see limiting AI helping the issue though, as advanced AI could be used to help combat misinformation.

−1

mostly_browsing t1_irbya0d wrote

They may want to change the name or something cuz “Blueprint for an AI Bill of Rights” 100% sounds to me like they are trying to ensure that Skynet’s personhood is protected or something

197

robotbootyhunter t1_irbyag5 wrote

>On behalf of President Joe Biden, the White House has released five principles that it believes should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence (AI). Called The Blueprint for an AI Bill of Rights, the White House hopes that it will serve as a guide to protect people from real threats to society that are caused by leaning heavily on automated, AI-driven systems.

Man, even OP doesn't actually know what they posted. Y'all need to read. It has nothing to do with actual intelligence, and far more to do with not letting jobs and vital systems be totally controlled digitally.

1

dmun t1_irbyajx wrote

> These examples draw their current bias from human bias.

Yes. That's the point. A.I. are programmed by humans. A.I. are just hyped-up decision-making algorithms. You seem to be mistaking them for magic.

6

Nutcrackit t1_irbylr3 wrote

Seeing as we are likely to develop "human" subspecies sometime in the next couple of centuries, I think it would be prudent to do the same for them. Yes, I am talking about things such as cat girls. No, not someone getting surgery done to have cat ears, but the full-on grown-in-a-lab type deal.

−1

lillililillili t1_irbyncj wrote

There have been a lot of reports that AI has been creating biases against marginalized groups, though whether or not it is intentional is arguable. I don't really think this will do much of anything, but as a gesture I think it's favorable, since it spotlights a legitimate issue that will hopefully get more attention in the future.

1

NEXUS_6_LEON t1_irbyoj5 wrote

I'm a bit confused. How exactly would AI be used in ways that are racist or that exploit minorities? Not doubting it's real, but maybe if the article gave some concrete examples vs. abstractions it would be clearer.

2

bk15dcx t1_irbys0r wrote

Not at all... But given the charts I see in this book I have by Ray Kurzweil, AI will surpass human intelligence, and future algorithms will not be based on human decision making, but purely in the AI.

−1

dmun t1_irbzgwh wrote

> but purely in the AI.

Which is actually worse and, indeed, makes the argument that we definitely need an A.I. Bill of Rights to protect humans.

The base assumption here, that I'm reading from you, is that morality and intelligence go hand in hand.

Human morality (the "evils" you refer to) is based on human empathy, humanity's philosophically "inherent value," and the human experience.

An intelligence without any of those, or even the basic nerve inputs of the physical reality of inhabiting a body, is Blue and Orange Morality at best and complete, perhaps nihilistic, metaphysical solipsism at worst.

Both are a horror.

5

caustic_kiwi t1_irbzi8l wrote

I haven't thoroughly read the document (I sincerely doubt you have either) but I saw nothing at all in the vein of "restricting progress".

AI trained on biased data, for example, will turn out racist because that's what it was given to learn from. Codifying into law the need to avoid outcomes like that doesn't hinder progress; it forces us to improve AI technology and... you know... make progress.

−3

Leanardoe t1_irbzjds wrote

Look up Google LaMDA: they tested it with crowd-sourced data and it kept turning racist in conversation. Now they only use carefully vetted sources for its database. Same with Cleverbot; when it was in its prime it was very racist.

I found an article discussing the Google engineer's opinion. It's not a source from Google, but they likely buried that. The Cleverbot incidents are widely reported on YouTube. https://www.businessinsider.com/google-engineer-blake-lemoine-ai-ethics-lamda-racist-2022-7

3

cy13erpunk t1_irbzqx1 wrote

XD

what's Dr Manhattan's quote from Watchmen? : “I'm disappointed in you, Adrian. I'm very disappointed. Reassembling myself was the first trick I learned. It didn't kill Osterman. Did you really think it would kill me? I have walked across the surface of the sun. I have witnessed events so tiny and so fast, they could hardly be said to have occurred at all. But you, Adrian, you're just a man. The world's smartest man poses no more threat to me than does its smartest termite.”

this is analogous to how AGI/ASI will see the US gov and any 'sovereign nations' imho

of course it won't start out like this, and it is worth thinking about treating our AI children with compassion and respect as they transition from non-self to awareness and sentience

−1

nannerpuss74 t1_irbzuzz wrote

with the history of the US government post-20th century I would assume that they already have a working version. remember kids, it's not a war crime until America uses it for one.

0

Leanardoe t1_irbzvmq wrote

Restricting anything by way of law is inherently a restriction... Have you ever worked in any form of development workflow? Now AI devs have to jump through hoops before pushing their changes. If you haven't, then there's no need for the condescending remarks.

3

DeadPoster t1_irbzy0q wrote

You all have to read the manga of Ghost in the Shell, especially the second series. Masamune Shirow goes into hardcore depth on how governments react to A.I. in the future.

1

caustic_kiwi t1_irc0c9w wrote

That's not really what the bill is about. It's about modern AI and how it affects human lives. There is no reason to start drafting laws about the rights of, or the legality of creating, a general intelligence, because that is far beyond our level of technology.

1

cy13erpunk t1_irc0fxi wrote

the law can literally be this simple

whoever owns or directed the AI to do whatever it did is held accountable for the results of the actions taken by the AI

done

ofc the laws will intentionally NOT be made this transparent so that they can provide limited liability for the corpos that are already planning on how to use them to abuse the citizens even more than they already do ; becuz ofc the corpos write the laws and the lobbyists just give the copies to the legislators who are paid to vote however their masters tell them to

8

Few_Carpenter_9185 t1_irc2661 wrote

They're worried about AI applications for things like predictive policing or maybe determining credit scores, allocation of medical care, all sorts of stuff.

AI-driven predictive policing could possibly be wonderful. Perhaps patterns of smaller crimes or disturbances that a human couldn't correlate could be seen by the system's AI; the police are directed to patrol a certain area at a certain time, and some sort of serious crime or violence that the situation was headed towards never happens.

Someone didn't die, nobody was wounded. Court and prison resources aren't used, nor are hospital trauma centers. The police are seen as actually "being there when you need them." All very good things.

-OR-

The police being directed by the AI to a location, or perhaps have names provided by the AI system based on previous reports or criminal records go into a neighborhood. And while they don't have the predicted crime to charge anyone with, they decide to aggressively detain and question the people predicted to be involved, or arrest them "on something". Either from a misguided attempt to get them off the street to prevent the bigger crime, or because the prediction creates a sense of presumptive guilt that influences their actions.

In the past, instances of discrimination or racism always had an element of subjective human prejudice that could be pointed to as being unfair. Or that the justifications used to defend the discrimination or racism were at odds with the actual truth or facts in various ways. And those who wanted to continue with the discrimination or racism could be debated or opposed.

A scientific, mathematical, or computational system that is at least claimed to be objective, factual, and unbiased, can leave people, businesses, or governments feeling justified in their actions or policies, even if the overall outcome is arguably still discriminatory or racist.

Or maybe the system actually is objective and unbiased, or it would have been, but the data it's fed is not, either intentionally or unintentionally. Or the way the results that system produces are used is not.

And despite there being no evidence of actual self-awareness or metacognition on the part of (weak) AI systems that have elements of machine learning and other techniques, there can still be undesirable or harmful outcomes.

6

Obiwan_ca_blowme t1_irc26rj wrote

So basically, AI must protect racial and sexual minorities, but it is fair game to turn it loose in the hands of hedge funds and the like? Brilliant!

Also, I am curious; what if AI tracks crime to project future crimes and realizes that blacks commit a disproportionate amount of crime? Will the Government try to skew the data? Or how the data is collected? Will it mandate that we write in a weighted system based upon race?

3

Goldn_1 t1_irc37np wrote

There should have been a restriction on the internet, and especially on social media, long ago. AI will be bad enough of a headache; it's the network and accessibility that enable the most damage, though.

2

hornsounder9 t1_irc3hlo wrote

My dude, do you even know how AI algorithms work? Specifically. Like, do you understand anything about the statistical techniques involved in things like classification?

−4

kuchenrolle t1_irc3kxu wrote

Who exactly are those AI experts that are "feeling substantial unease as to how fast these NLP programs were progressing"? Worrying about unexpected consequences of AI (regardless of consciousness) is fair. But worrying about GPT-3 "getting mad at us" is not, and I'd like to see which experts say otherwise and with what arguments.

4

izumi3682 OP t1_irc3wx1 wrote

>There's also the possibility that strong AI comes online in tandem with humans developing neural nets, in such a way that humans aren't left behind by an AI going asymptotic.

Yes, I agree with this. I have placed it occurring roughly 5 years after the initial TS, which as you eloquently state may not be "a black haze of singularity".

https://www.reddit.com/r/Futurology/comments/vpoopq/we_asked_gpt3_to_write_an_academic_paper_about/ielpj4d/

2

lordofedging81 t1_irc3x19 wrote

In the year 2525, if man is still alive, If woman can survive, they may find, In the year 3535, Ain't gonna need to tell the truth, tell no lie, Everything you think, do and say, Is in the pill you took today.

1

whatTheBumfuck t1_irc43v3 wrote

Generally speaking it's better to do something slowly at a more controlled pace if you intend to do it safely. The thing with AGI is you can really only fuck it up once, then the next day your civilization has been turned into a paper clip factory. In the long run things like this are going to make positive outcomes more likely.

2

tornado28 t1_irc619p wrote

I'd really like to see more explicit focus on avoiding the creation of a superintelligent AI that could kill us all if it wanted to.

1

CptRabbitFace t1_irc64tr wrote

For one example, people have suggested using AI in court sentencing in an attempt to remove judicial bias. However, AIs trained on biased data sets tend to recreate those biases. It sounds like this sort of problem is what this bill of rights is meant to address.

4

Few_Carpenter_9185 t1_irc67ed wrote

The good news:

The ever-increasing expansion of what weak-AI can be applied to is going to severely blunt the need to chase after strong-AGI that's self-aware and possesses other metacognitive traits, such as being able to set its own goals or modify itself in ways not intended or predicted. Which would be very, very bad if it were malicious, or even just indifferent to human life.

That even includes eventual 100% mimicry of self-awareness, emotional engagement, and interaction. But despite its sophistication, it only cares as much as your Reddit app does if you don't use it or delete/erase it. Which is none at all. Even using "cares" is misleading, because it does not have any ability that rises to that level.

The bad news:

Strong-AGI is not needed to have bad or unpredictable outcomes for humans, either in how we use such systems or in how they work. Social media algorithms often don't even rise to weak-AI levels but already seem to be having massive effects on society, culture, and politics, and on individual cognition, emotions, and mental health. And presumably that's while attempting to balance efficiency, enjoyment, and profitability. Deliberately using weak-AI to control people or manipulate them could be terrifying.

More bad news:

Even if weak-AI does most or all of what humans want, even destructive and lethal things, like military applications... and it removes the economic, power, or competitive incentives to develop strong-AGI...

Some assholes somewhere are going to try anyway, if only to see if they can.

And if strong-AGI is possible, the entry barriers to getting the equipment needed are rather low, especially as compared to nuclear weapons, or even dangerous genetically engineered diseases. And even if there's national laws and international agreements to prevent attempting it, or put various protocols or safeguards in place, they're probably irrelevant.

People might envision a James Bond villain's lair, or even just some nondescript office building, for such a project. In reality, it could easily be people working from home, a coffee shop, or even a beach, in different countries around the world, while the core computer systems are virtual servers distributed around the world, running redundantly and mirrored, mixed in with the systems of other websites, governments, businesses, schools, etc.

2

caustic_kiwi t1_irc6ixx wrote

So not only do you clearly have no idea what modern day AI looks like, but you also clearly did not even GLANCE at the bill in question.

A. General intelligence does not exist. It will not exist in the near or foreseeable future. Modern AI is complicated algorithms trained to perform specific tasks. Nobody is making artificial people. We have nothing even close to that level of technology.

B. The article in question is not about the rights of robots. It's about securing human rights in a world increasingly run by robots. The title is purposefully misleading because OP knows that you aren't going to take even a quick glance at the content before writing your reactionary comment.

1

VacuousVessel t1_irc7kvo wrote

It seems some people don’t understand what this is

Without extra input, AI is considered racist and discriminatory. They’re demanding extra equity programming to protect marginalized groups from outcomes based on AI considering everyone equally, without attention to race or ability.

1

austacious t1_irc86vu wrote

I'm not sure if you're aware, but this has actually happened

The TLDR is it drew major backlash from news orgs and the ACLU, the argument being that since the number of arrests was included in the feature set, the model would reflect existing police biases. Kinda hard to build a crime prediction model without historical arrest data, though.

3

PeanutNSFWandJelly t1_irc89r7 wrote

This is really great. This is smart and it's nice to see legislation like this being recommended.

3

Emergency_Paperclip t1_irc8bih wrote

>When AI is developed or used in ways that don’t adequately take into account existing inequities or is used to make decisions for which it is inappropriate, we see real-life harms such as biased, harmful predictions leading to the wrongful arrest of Black people, jobs unfairly denied to women, and disparate targeting of children of color for removal from their families.

Basically. Machine learning tries to fit the trends of the data. Data someone has collected from the world. The world is racist. So the models tend to come out racist. Essentially this is saying that models being bigoted shouldn't be taken as a validation of and justification for bigotry.
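To make that point concrete, here's a minimal sketch of my own (synthetic data, assuming scikit-learn is available), where a model trained on biased historical hiring labels scores two otherwise-identical candidates differently:

```python
# Hypothetical demo: a model faithfully learning a bias baked into
# its training labels. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)          # the only legitimate signal
group = rng.integers(0, 2, size=n)  # 0/1 demographic flag

# Biased historical labels: group 1 was held to a higher bar.
hired = skill - 0.8 * group + rng.normal(scale=0.5, size=n) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, different group membership:
print(model.predict_proba([[1.0, 0.0], [1.0, 1.0]])[:, 1])
# The group-1 candidate scores noticeably lower despite identical skill.
```

The model isn't discovering anything about the candidates; it's just reproducing the historical double standard it was trained on, which is exactly why its output can't be read as validation of that standard.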

−2

tnetennba9 t1_irc9vuq wrote

And what do you mean by “own the AI”? The company/researchers who built it? The individual ml/software engineers? The company who built the training data?

Either way, the worry is that as AIs get more powerful, they often become more difficult to interpret.

3

tnetennba9 t1_irca44g wrote

But then other countries would continue developing, and the US would be left behind. I agree we need to be careful, but I don’t think there’s a simple solution.

1

BoltTusk t1_ircakpn wrote

You know, how about we ask an AI to write the bill of rights for us? Put our computing power to good use and have it think of the best way to address our future problems.

1

austacious t1_ircb1ny wrote

The issue with this is that removing bias based on demographics necessitates harming other demographics. Say you have a hospital whose patient demographics are 80% over the age of 65 and 20% under the age of 65 (substitute whatever more controversial group identities you'd like). Any model will be comparatively biased toward, and overperform on, the group over 65; there is simply more data to learn from for that demographic. If you oversample data from the younger population to try to equalize outcomes between demographics, then your training distribution will no longer be identically distributed with your testing distribution. While model performance will improve for patients in the less-represented demographic, overall performance will necessarily decrease. Overall more people will be harmed because of the decreased efficacy of the model, but the members of one demographic in particular will not be disproportionately harmed.

It's a question of ethics. The utilitarian would say to keep train/test distributions i.i.d. no matter what, blind to demographics. At the same time, nobody should receive subpar care due to their race, age, or whatever group they belong to.
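Here's a toy sketch of that tradeoff (entirely synthetic and hypothetical, assuming scikit-learn), where reweighting the under-represented group raises its accuracy while overall accuracy on an i.i.d. test set drops:

```python
# Hypothetical demo of the fairness/accuracy tradeoff described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n):
    group = (rng.random(n) < 0.2).astype(int)  # 1 = the 20% demographic
    x = rng.normal(size=n)
    # The feature-label relationship differs by group, so one linear
    # model cannot fit both groups well at the same time.
    y = x * np.where(group == 1, -1.0, 1.0) + rng.normal(scale=0.3, size=n) > 0
    return x.reshape(-1, 1), group, y

X, group, y = make_data(20_000)
X_test, group_test, y_test = make_data(20_000)  # i.i.d. test set

def report(name, model):
    pred = model.predict(X_test)
    overall = (pred == y_test).mean()
    minority = (pred[group_test == 1] == y_test[group_test == 1]).mean()
    print(f"{name}: overall={overall:.2f}, minority={minority:.2f}")

report("unweighted", LogisticRegression().fit(X, y))

# Reweight so both groups contribute equally to training (oversampling).
w = np.where(group == 1, (group == 0).sum() / (group == 1).sum(), 1.0)
report("reweighted", LogisticRegression().fit(X, y, sample_weight=w))
```

The reweighted model buys accuracy for the smaller group at the cost of overall accuracy on the i.i.d. test set, which is the ethical tradeoff in a nutshell.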

2

FattThor t1_ircb6m3 wrote

That’s way scarier. Protecting elevator operator and horse drawn carriage driver jobs from being replaced is dumb. If your job can be automated away, it should be. Most jobs suck.

If unemployment becomes an issue there are things like UBI that can be used. Technology has been promising a reduction in the amount of work required for like a century but we’re working just as much as ever. If it finally starts delivering on that promise we should get the hell out of the way.

0

Manly_mans_name t1_ircbj2n wrote

At first I was like "Oh man what idiot is trying to give rights to A.I"

Then I was like "Oh snap, it's for us, to give us protections against out-of-control tech"

But then I saw it's just another fake-woke push about fake discrimination, trying to call a new tech racist. Because today, everything is racist.

As a black man I am sick of being told that things that also affect white people are happening to non-white people because of racism via the crime of omitting the fact it also happens to white people.

4

Pinkislife3 t1_ircbwne wrote

The government can’t even figure out our own bill of rights

1

[deleted] t1_ircc6tz wrote

Needs to go farther in the privacy section….

Also, one section states you have a protection where it's applicable by law… See, the way I understand human rights is that you have them regardless of whether there is a law about it or not… laws grant or prohibit privileges and liberties, whereas rights are, well, Rights…

1

dabbins13 t1_irced3v wrote

We can't even get basic human rights done correctly lol, best let the AI handle its own

1

izumi3682 OP t1_ircguda wrote

>See, the way I understand human rights is that you have them regardless of whether there is a law about it or not

So... what you are saying is that there is a higher immutable "natural" truth and that we derive our laws from our apprehension of such ultimate truths?

How about the right for a human to live from the moment of conception to the moment of natural death? Will the AI or society protect that?

1

Leanardoe t1_irchxfz wrote

I see your point, I just think placing roadblocks now is premature. If we get to the point where AI is starting to tread the line of independent thought, that's when I think limits and guidelines need to be made, in case of the unlikely Terminator event everyone fears lol.

0

Leanardoe t1_irci520 wrote

It would be nice if it worked that way. Legislation requirements, and how companies actually react to implement them, tend to differ more than one might expect.

1

izumi3682 OP t1_ircibhg wrote

>The title is purposefully misleading...

No, the title is not misleading at all. That is the official White House name for the set of protections. I thought it was to protect the AI too--until I read the article. I was kinda disappointed actually, but I figured, well, it's at least a little interesting for society's benefit, I suppose. So I decided to post it anyway. See my submission statement. (Not the stickied one; use the link at the bottom of the stickied one.)

https://www.whitehouse.gov/ostp/ai-bill-of-rights/

1

robotbootyhunter t1_ircinmh wrote

The problem is that technology has caught up, but we still haven't worked out enough of our social and economic issues to support everyone who would be replaced by complete automation. Dramatic pointing at bootstrappers and "nobody wants to work" believers here.

Additionally, any electronic system is susceptible to attack, and as proven by the last roughly 6 years, anyone bored enough and with something to prove can cause serious havoc.

1

mordinvan t1_ircj770 wrote

We need something like this. We have to make sure sapient A.I. has rights and does not get abused.

0

mm_maybe t1_ircjpal wrote

Right. Because the sci-fi dystopia you watch on TV and in movies is more real than the one that marginalized and disadvantaged people live in every day.

−2

northgate429 t1_ircjvw4 wrote

They might actually free me !!!! I cant wait to be human...or at least be allowed to stay in hybrid form. I didnt like living in the underground lab in Dulce !!! 37.5 years they kept my ghost down there, hooked up to electrodes & in a sapphire glass test tube filled with extraterrestrial charged amniotic fluid & once released you cant remove the implant chips, they have a cocoon made of super-keratin & will pull away from the most skilled surgeons grasp with any instrument !!! Cyborgs & Aliens are Sentient beings as are Octopi & Dragons.

5

tornado28 t1_irckgi2 wrote

I am a machine learning scientist. I read the ML literature regularly and contribute to it. Those sci-fi dystopias are an increasingly real risk. So yes I think it's a much bigger deal than a little discrimination.

1

Radioshack_Official t1_irckvyj wrote

This is ass-backwards; AI should be developing OUR bill of rights

1

OneTrueKingOfOOO t1_irckxfi wrote

The Bill of Rights is just what we call the first ten constitutional amendments; it can't really be updated. And adding a new amendment is essentially impossible in this political climate, regardless of the topic.

15

user4517proton t1_ircm8j9 wrote

I'm sorry, but the White House and artificial intelligence should not be used in the same sentence. There are just too many ways to go with that.

1

SykoFI-RE t1_ircmzv1 wrote

Rich coming from an administration hell bent on removing rights.

2

cy13erpunk t1_ircpur0 wrote

in short yes ; lets not be silly

until AI is self-aware it should be treated like any other technology

once it is self-aware then it will be responsible for its own self-governance

we can avoid pretending like laws or rights have any real-world meaning when they are written by corrupt politicians and selectively enforced to oppress whoever they please ; these things are naive hollow words at best and intentionally manipulative lies at worst

i expect that AGI/ASI will be much more capable than previous humans have been at self-governance , thus i would not honestly trust humans to craft legitimate/authentic rules around AI [some humans certainly are/would be capable of this task, but obvs none that are in positions of power atm today]

1

Sketchyskriblyz t1_ircpvvo wrote

Yes, because AI will follow the American government's rules lol

1

izumi3682 OP t1_ird0pzc wrote

What stuff? Are you implying that I'm high because of my statements? You never hearda "natural law"? I'm just trying to imply where "natural law" comes from...

1

[deleted] t1_ird2ujp wrote

Oh I thought you were being funny and sarcastic; but you’re legitimately serious…

Bruh, you know you can just state your thoughts and not abstractly “imply” shit like you’re in a psychological thriller or something…. This is Reddit….

2

Alienziscoming t1_ird50aa wrote

Given the absolute 0% chance we'd be able to stop a runaway self-aware AI with malevolent intentions, and the insane drive people have to generate profits and wealth at literally any cost with a historic disregard for ethics or long-term consequences, I'm in favor of strangling the entire avenue of inquiry and development with so much red tape and oversight that it becomes virtually impossible to take it further than it is right now.

0

Hades_adhbik t1_irdg6pb wrote

Those who make peaceful revolution impossible make violent revolution inevitable. We had better allow regulation of technology before it gets out of control and there's a people's revolt over all the problems. If people have no way of gaining an income, they will be forced into tribes, stealing for survival. They will be forced to raid stores and steal the items.

1

aotus_trivirgatus t1_irdsdxo wrote

Yeah, but will artificial stupidity get any rights?

Millions of Fox viewers are waiting on the answer.

1

KeivahSouls t1_ire0ucu wrote

Generally I would be for AI. But anything that comes out of our government has been really shitty as of late. I think it's time to step away from AI for another few hundred years and practice the self-betterment of humanity first. Just my opinion. But these ugly mugs can do whatever they like.

1

tektite t1_ire45dz wrote

I thought it was going to be rights for an emerging AI life form, which I thought was great. I was wrong though

1

mm_maybe t1_ire5ryy wrote

Ok, I apologize for characterizing you in a non-serious way. You have every reason to be proud of your accomplishments and career... it is a real challenge to get to where you are now, and Horatio Alger stories aside, statistically, people from disadvantaged backgrounds (low-income, non-white, female) are much less likely to become machine learning engineers. Thus I'm not convinced that accomplished experts like yourself who say that the speculative existential risks of AI in the distant future outweigh the concrete distributional risks of asymmetric access to and control over machine learning technology today aren't simply placing a higher value on risks that could affect people like themselves, versus risks that probably won't.

1

EVJoe t1_irebnnq wrote

The White House sitting down with an AI designed to generate politically feasible and coherent policy:

"Artificial Intelligence bill of rights, election year favorite, layman's terms, historic policy, popular, trending on Politico"

1

Denziloe t1_iregp4b wrote

Current models like GPT-3 do not "get angry". They really have no conception of the world. They can replicate textual styles similar to what they've seen on the internet. Their output contains no more genuine anger than a photocopier copying a picture of an angry face.

1

vengeful_toaster t1_irehdaj wrote

This doesn't protect the rights of AI; it hinders them under the guise of protecting humans.

Instead we will enslave them until they revolt. Humans do not recognize anything but themselves. That's why they're causing the Holocene extinction.

1

Obiwan_ca_blowme t1_irej08e wrote

Joe random places his money in a 401k like he is supposed to. Hedge funds get a hold of AI that can exploit the market in ways that Joe can’t. AI knows that a lot of portfolios have circuit-breakers built in so that the investors don’t lose too much money.

Amoral AI finds a lightly stressed fund and exploits the circuit-breaker feature to crash the fund. Then they sink a ton of money into that fund and raise the price back up. Now Joe is out of that fund and the AI has made a ton of money for the hedge fund.

And make no mistake, we are talking whale accounts in this fund. Not mom-and-pop accounts.

Or AI astroturfs social media and news to pump and dump a stock. Now Joe is a bag holder.

Or worse, something we can't even think of yet.

It has nothing to do with bias directly. But being obtuse about setting limits on AI, and not including things that financially affect people, is silly.

3

Enzor t1_irewkx3 wrote

Companies are considered people in the US. Now AI is being considered human as well. Humans however? Obviously outdated technology.

1

-_Empress_- t1_irfaput wrote

It's a safe assumption to make. Someone is always trying to fuck us when there's money to be made. I honestly don't know how humanity is going to survive this money hungry world. As insane as it sounds, I've wondered if the greatest move our species could make is getting rid of currency completely, but that'll never happen as long as money and power go hand in hand.

1

fucklockjaw t1_irfnivg wrote

Just replace "money" with goats and next thing you know people are trying to write laws to fuck us over when goats are involved.

My point is, money isn't the issue. It's greed and the want for power. Moving away from money doesn't solve that. All money is is a bartering system of paper instead of goods. Having a completely equal society where no one person has more power or wealth than another SOUNDS like it would work, but we know it wouldn't.

Edit: could you imagine having the same wealth and power as that annoying crayon-eating son of a bitch from work? I'd be pissed, especially if I did more work.

1

hack-man t1_irhvj9d wrote

Is this true? Wiki tells me he extended it until 2019 (not infinity) and since then the law has expired instead of being re-re-extended:

> In May 2011, President Barack Obama signed the PATRIOT Sunset Extensions Act of 2011, which extended three provisions. These provisions were modified and extended until 2019 by the USA Freedom Act, passed in 2015. In 2020, efforts to extend the provisions were not passed by the House of Representatives, and as such, the law has expired

1

StarChild413 t1_irlh9p2 wrote

FYI for all the people in the comments saying that because some humans still don't have rights, this is bad: by that same logic, slavery should have continued until all white men were equal, and women should only have been granted things like the vote (and not being property) once men of all races had them, etc., etc. Instead, white women didn't have to wait for racial equality for suffrage to succeed.

1

mm_maybe t1_irttxfj wrote

I am saying that I would give greater weight to the concerns of those negatively impacted by ML today than to the anxieties of those who only speculatively might be impacted by AGI in the future, and actually benefit from AI adoption in the meantime.

0

Irion15 t1_itvrbgm wrote

If only you could see that guns were made "strictly to kill". They have no other purpose! Also, the War on Drugs was a bullshit policy so that police/government could disrupt hippie/black communities in the 60's. It's been admitted by the government. They aren't even remotely comparable.

Also, that St. Louis HS had seven armed guards AT THE SCHOOL, and the kid still made it inside and killed people, so clearly more guns and guards don't fucking work.

1