Submitted by HarpuasGhost t3_11a1qk3 in Futurology
AtomikSamurai310 t1_j9pa9up wrote
In my opinion, this shouldn't even be a debate. AI/robots are programmed to do whatever you tell them to do, unless you give them some kind of human understanding of emotions and stuff... Realistically we should try to put a cap on what these robots and AI can do, because if they have free will then we're gonna have to deal with Ultron.
Cheapskate-DM t1_j9pazsh wrote
Insofar as a robot is a tool, any and all discussions of their use boil down to permissions and consequences for their users or programmers.
Richard-Long t1_j9r6pcd wrote
Yeah we should definitely try to put a cap on it, but we all know what happens when curiosity gets the best of us.
AbyssalRedemption t1_j9rkw52 wrote
Agreed. Insofar as AI exist only as tools, they don't deserve any more "robot rights" than my garbage can. We can have that discussion when/if we reach the singularity and/or AGI.
lungshenli t1_j9rxdkm wrote
Absolutely. But now imagine we create an actual self-conscious being and the first thing it finds is 4chan. There’s gotta be rules in place beforehand.
pete_68 t1_j9rdk2u wrote
There are far too many people who are clearly tremendously ignorant about what these things are and how they work. They're calculators. Fancy calculators that calculate the next best word. NOTHING ELSE. We need to just start ignoring people who can't get this through their skulls.
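For what it's worth, the "calculates the next best word" idea can be shown at its crudest with a bigram counter; the corpus and function name here are invented purely for illustration, and real language models are vastly more sophisticated than this sketch:

```python
from collections import Counter, defaultdict

# Toy illustration of "calculating the next best word":
# count which word follows which in a tiny made-up corpus,
# then always pick the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def next_best_word(word):
    # Return the most common word seen after `word`
    return successors[word].most_common(1)[0][0]

print(next_best_word("the"))  # "cat" follows "the" most often here
```

A real model replaces the counting table with billions of learned parameters, but the input-to-most-likely-continuation shape of the computation is the same.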
CommentToBeDeleted t1_j9pn4ju wrote
>AI/robots are programmed to do whatever you tell them to do.
I dislike this statement, for a number of reasons.
First, the obvious historical parallel: there was a time when people believed that certain races were "sub-human" and existed only to do whatever you tell them to do.
Second, many cultures believed (and some still do) that women should, at least to some extent, be subservient to men, and the impact of that abuse was largely ignored because society viewed women as serving their intended function.
>Unless you give it some kind of human understanding of emotions and stuff....
This is the entire crux of the debate. Most people hear "programming" and think of it in a very traditional sense: a programmer writes every line of code, which the program then executes.
While this is still the case for many (probably most) forms of programming, it is not the case for machine learning.
Essentially, some problems are too complex for us to tell a computer exactly what to do. So rather than give it a bunch of rules, we more or less give it a goal or a way to score how close it got to achieving the desired result.
Then we run the program and check its score. But instead of running it once, we run it millions of times, with very tiny differences between each instance. We select a percentage of "winners" to "iterate" on their small changes and have all of these "children" compete against each other, and we repeat this millions of times. Eventually, we hope to get an end product that does what we want without many negatives, BUT the "programming" is a black box. We really have no idea how it ended up doing the things it does.
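The score-select-mutate loop described above can be sketched at toy scale. Everything concrete here (the all-ones target, population size, mutation rate, generation count) is an arbitrary illustrative choice, not anyone's actual training setup:

```python
import random

random.seed(0)

TARGET = [1] * 20  # an arbitrary stand-in for "the desired result"

def fitness(candidate):
    # Score how close the candidate got to the goal
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate, rate=0.1):
    # Copy with very tiny random differences
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

# Start from random candidates instead of hand-written rules
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    winners = population[:10]                     # select a percentage of "winners"
    population = [mutate(random.choice(winners))  # "children" iterate on them
                  for _ in range(50)]

best = max(population, key=fitness)
print(fitness(best))  # best score found; the maximum possible is 20
```

Note that nothing in the loop explains *why* the final candidate looks the way it does; with a simple bit string we can inspect it, but with millions of interacting weights the result is the black box described above.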
Sure we could assign it rules, like "don't tell users 'I am conscious'" but that is no different than telling a slave "you can't tell people you have the same rights as them." Creating a rule to prevent it from acknowledging something, doesn't actually change anything.
>In my opinion, this shouldn't even be a debate.
Strongly disagree here. First, do I think AI is currently conscious? Probably not. Am I sure? Absolutely not.
The problem is that we don't really have a good way of defining consciousness or sentience. It's only recently that we've given equal rights to people of different races and genders. We have yet to assign a really significant "bill of rights" to animals that demonstrate extreme levels of intelligence, more so than some of our young children who do have rights.
So I guess my question is this: is it ethical to risk creating a "thing" that could become conscious, without having a way to determine whether that "thing" is conscious, and then put that "thing" through what those we already define as conscious would consider torture or slavery?
I think the answer to this question should be no, it is not ethical. But I don't think the answer is to try to prevent people from making AI. I think we need to better define consciousness, in a non-anthropocentric way, then come up with a way to test whether something should be considered conscious, then assign it rights befitting a conscious being.
tldr: Most programs are obviously not conscious, but for these chat AI bots, we lack the proper definition or test to confirm whether or not they are. In my view, it's unethical to continue down this path, and therefore we have a moral obligation to better define consciousness, so that we can determine when/if it has arisen.
Imaginary_Passage431 t1_j9q8ee1 wrote
Faulty analogy fallacy. Robots aren't a race, nor a discriminated sex. They aren't a subgroup of humans, nor even a subgroup of animals. They don't have consciousness or the ability to feel (don't answer this with the typical argumentum ad ignorantiam fallacy). You are trying to give rights and moral consideration to a calculator. And if I saw a calculator and a kitten about to be crushed by a car, I'd save the kitten.
CommentToBeDeleted t1_j9qd4xj wrote
I think you are misunderstanding the arguments I'm making, or I've failed to articulate them adequately, if this is your response.
>or have the ability to feel (don’t answer to this with the typical argumentum ad ignorantiam fallacy).
We are literally discussing how we lack the understanding to determine whether something has consciousness, can feel, or has free thought, and your rebuttal is "they can't feel". This is exactly the sort of thing that probably happens every time we marginalize any entity. Imagine trying to discuss with someone whether a slave is human or sub-human, and they think it's a valid response to simply say "well they are not human so...". That's literally what the debate is about!
What is this called? "Begging the question", I believe. We argue over whether they have free will or can feel, and your evidence amounts to "they just don't, okay!"
>Faulty analogy fallacy. Robots aren’t a race, nor a discriminated sex. They aren’t a subgroup of humans either. Not even a subgroup of animals.
This is where I think you are missing the point of the argument entirely. I'm fully aware of the facts you just stated, but they do nothing to rebut my claim and, if anything, bolster my argument even more.
To state my argument more clearly:
There was a point in our history when we viewed actual, literal humans as a "sub-race" and treated them as property. You hear that now and think "that's insane, of course they should be treated the same as other people!"
Then we did the same to women (and in many places still do). They were viewed as less than their male counterparts, when in fact they should have been given just as many rights.
Doctors used to operate on babies without providing a means to help deal with pain, because they assumed children were incapable of processing pain like adults. Despite them literally being humans and having brains, they assumed you could physically cause harm and suffering and it was no big deal.
So my point: humans have notoriously and consistently attempted to classify beings that are conscious and do feel in ways that allow other humans to disregard that fact and treat them more poorly than those whose consciousness we do acknowledge. The mere fact that we have done this within our own species should make us more acutely aware of our bias toward denying equal rights to entities that deserve them.
>You are trying to give rights and moral consideration to a calculator.
This is absolutely fallacious and you are misconstruing my argument. I specifically distinguished traditional programs that execute functions from this view, and yet you made this claim anyway. Here is my bit (the "calculator" you claim I'm trying to give rights to):
>Most people hear "programming" and think of it in a very traditional sense: a programmer writes every line of code, which the program then executes.
While this is still the case for many (probably most) forms of programming, it is not the case for machine learning.
>And if I see a calculator and a kitten about to be crashed by a car I’d save the kitten.
And so you should. Giving rights doesn't mean those rights need to be equal. If I saw a child and a dog about to get run over, I would 100% save the child. Does that mean the dog is not entitled to rights, simply because its rights aren't equal to a human child's? Absolutely not.
What if I saw a human adult and a child tied up on train tracks and could only save one? Of course I'm saving the child, but obviously the adult should still have the necessary rights afforded to them.
No offense, but from your use of fallacy names, I assume you know something about debate; the content of your response, however, felt more like an attempt at a Gish gallop than a serious reply.
Alternative_Log3012 t1_j9qkndj wrote
None of this (absolute drivel) is a good argument for giving robots ‘rights’.
There isn’t any possibility of true consciousness from a computer.
At most, if robots are created somewhat anthropomorphically, regulate how humans interact with them publicly so as not to outrage common decency (i.e. not make other humans uncomfortable).
Actually assigning rights to a computer itself shows a poor understanding of what a computer is…
CommentToBeDeleted t1_j9qluo8 wrote
>There isn’t any possibility of true consciousness from a computer.
Imagine admitting we don't know what consciousness is, yet still being absolutely certain that you can distinguish when something is or is not conscious, as if adding the qualifier "true" changes anything. You want to know what drivel looks like? There you go...
>Actually assigning rights to a computer itself shows a poor understanding of what a computer is…
That really depends on what your definition of "computer" is here. If you mean a calculator, phone, or desktop, then sure, I would grant you that. But to assume you have any idea how the "black box" works within machine learning models demonstrates a gross misunderstanding of the topic at hand.
The actual people who build these "machines" do not fully understand the logic behind much of the decision making being made. That's the entire reason we utilize machine learning.
It's crazy just how little humility people show in regards to this subject. My entire argument is that we don't know enough and need to better understand this and people somehow manage to have the hubris to think this problem is already solved.
Alternative_Log3012 t1_j9qoqm7 wrote
Machine learning researchers and engineers understand the structure of their models, just not what each individual weight is (there can be millions or more) for each simulated neuron, as these are found by a training process (which, again, is known to the creator or team of creators) using known information.
The above process could literally be carried out by a complex calculator and is in no way deserving of 'rights'.
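The distinction drawn in the comment above, a hand-written structure whose individual weights come from training rather than from a programmer, can be illustrated with a toy perceptron; the OR-function data, learning rate, and epoch count here are arbitrary choices for the sketch:

```python
import random

random.seed(1)

# The *structure* is fully known and hand-written: one simulated
# neuron with two inputs, a bias, and a step activation. The
# *weights* are not written by anyone; the training loop finds them.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training data for a simple OR function
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# Perceptron learning rule: nudge weights toward correct answers
for _ in range(100):
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in data])  # matches the OR targets [0, 1, 1, 1]
```

At this scale the learned weights are easy to interpret; the disagreement in this thread is over whether that interpretability survives when the same process is scaled to millions of weights.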
CommentToBeDeleted t1_j9qr1pt wrote
Knowing the structure of your model and providing it training data is a far cry from understanding how it reaches its conclusions.
> (there can be millions or more)
You just described how incredibly complicated such a system can be, yet still attempt to dispute my point about programmers not fully understanding the logic behind them.
> for each simulated neuron
It's fascinating that you would analogize its functioning to neurons, then later state that everything it can do could be achieved by a calculator.
I don't think you and I will ever agree on this topic. You seem impossibly married to the idea that every single computer is analogous to a calculator; I view that argument as both ignorant and reductive. None of my attempts have produced new arguments from you; instead they are met with heels dug into the sand.
Still appreciate you taking the time to respond, I just don't see this as being a productive use of either of our time.
Iwasahipsterbefore t1_j9reexq wrote
That's anti-natalism. Having a child fits all the same criteria you gave for why it's not moral.
To be clear, I honestly quite approve of anti-natalism, but it is a fringe belief.