
InTheEndEntropyWins t1_j8rap2b wrote

>If all our decisions can be traced back to genetics, situational and nurture; aren’t those variables beyond our own control anyway?

You are your genetics and upbringing. There is no need for you to have control over what you are.

So free will is about being able to act in line with your desires. It isn't about having complete control of your desires.

7

HippyHitman t1_j8rbguv wrote

But what’s free about that? What’s the difference between you acting on your desires and a robot acting on its programming?

12

threedecisions t1_j8rpkg4 wrote

The belief in Free Will is like an algorithm sent to a robot because it encourages social order. It puts parameters on the robot's behaviors by telling it that nasty things will happen to it if it acts outside of them.

The limitation of the idea shows when the robot is unable to comply with the prescriptions given to it and is punished rather than assisted.

4

frnzprf t1_j8w2s5a wrote

It seems to me that there are many people who don't believe in free will, and they still follow laws. I don't know what the percentage in the general public is; maybe 25% don't believe in free will, 25% do, and 50% have never thought about it.

2

DasAllerletzte t1_j8rea3z wrote

I’d say, you can adapt.
And also consider non-measurable phenomena like other people's feelings or reactions.
You can prioritize.

Recently I wanted to get §thing.
Then I started to weigh whether I truly need §thing and if I can afford it too.

Such decisions would require a ton of code engineering to implement.
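
As a toy illustration of that kind of weighing (a minimal sketch with made-up factors, weights, and threshold, nothing like how the decision actually happens in a head):

```python
# Hypothetical sketch of "do I really need §thing, and can I afford it?"
# The factors, weights, and threshold are invented purely for illustration.

def decide_to_get(need: float, want: float, price: float, budget: float) -> bool:
    """Return True if the case for getting the thing outweighs its cost."""
    if price > budget:                   # hard constraint: simply can't afford it
        return False
    case_for = 0.7 * need + 0.3 * want   # arbitrary weighting of need over mere desire
    cost_pressure = price / budget       # how big a bite it takes out of the budget
    return case_for > cost_pressure

print(decide_to_get(need=0.8, want=0.6, price=50.0, budget=300.0))   # True
print(decide_to_get(need=0.1, want=0.9, price=250.0, budget=300.0))  # False
```

Even this crude version needs a handful of arbitrary parameters; scaling it up to anything like a real decision is the "ton of code engineering".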

2

HippyHitman t1_j8rf9hn wrote

>I’d say, you can adapt.

Sure, but what about a machine that can alter its own programming? If that machine isn't acting with free will when it adapts, then adaptation by itself isn't free will.

>And also consider non-measurable phenomena like other peoples feelings or reactions.

They may not be measurable, but they can be observed and estimated. That’s how you do it, after all.

>You can prioritize.

This one machines are already great at. Probably better than us. The amount of prioritization that happens every microsecond in order to make modern computers run would fry our brains.
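
A minimal sketch of that kind of mechanical prioritization (standard-library Python, invented task names, nothing specific to any real scheduler):

```python
import heapq

# Pending work as (priority, task) pairs; a lower number means more urgent.
# The tasks and priorities here are made up.
pending = [
    (2, "redraw the screen"),
    (0, "service hardware interrupt"),
    (1, "flush network buffer"),
    (3, "run background indexing"),
]
heapq.heapify(pending)                       # arrange the list as a min-heap

while pending:
    priority, task = heapq.heappop(pending)  # always takes the most urgent item first
    print(priority, task)
```

Nothing in there deliberates; it just pops whatever the numbers say comes next, millions of times a second in a real system.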

>Such decisions would require a ton of code engineering to implement.

Sure, and that’s my argument. We’re just extremely complex machines, so the reasoning is obfuscated to the point that it gives the illusion of free will. But if we could actually analyze our minds and thought mechanisms I don’t see why it would be any different from a computer program, and I don’t see where there’s room for free will.

8

frnzprf t1_j8w3ow1 wrote

> We’re just extremely complex machines, so the reasoning is obfuscated to the point that it gives the illusion of free will.

That's interesting. You say you have an illusion of free will, but you have seen through it. I don't even have the illusion of free will. I just have a will, which is inherently subjective, so it can't be an illusion.

Maybe historically free will meant something different from how I'd define it today. Maybe "someone is free to act according to their will". (Maybe not that though. I'm not sure.) A judge might say, "I'm punishing you because you acted on free will, i.e. you weren't directly coerced by other people's wills."

Then over time the meaning of free will evolved to something like "will, at least partially independent of everything", but people still claim to believe in it, because they are thinking of the older, pragmatic definition.

3

dbx999 t1_j8rw2b1 wrote

The more complex and nuanced the situation and decision making becomes the more convincing that the choice is the product of our inner self. We retcon our decisions as being products of free will. We ride a roller coaster of a life and think the whole time we’re steering the thing while it’s on a track.

2

InTheEndEntropyWins t1_j8rcoqq wrote

>But what’s free about that?

I'm sure there are other definitions, but I use something like free will is about "the ability to make voluntary actions in line with your desires free from external coercion/influence".

Free will is key in morality and justice, so I like to understand how the courts define and use it. Lets use a real life example of how the Supreme Court considers free will.

>It is a principle of fundamental justice that only voluntary conduct – behaviour that is the product of a free will and controlled body, unhindered by external constraints – should attract the penalty and stigma of criminal liability.
>
>https://scc-csc.lexum.com/scc-csc/scc-csc/en/item/1861/index.do

In the case of R. v. Ruzic:

>The accused had been coerced by an individual in Colombia to smuggle cocaine into the United States. He was told that if he did not comply, his wife and child in Colombia would be harmed.

The Supreme Court found that the accused didn't smuggle the drugs of their own free will: they didn't act in line with their desires, free from external coercion. Hence they were found not guilty.

Compare that to the average case of smuggling where someone wants to make some money and isn't coerced into doing it. If they smuggle drugs then they did it of their own "free will" and would likely be found guilty.

So in one example the person had what the courts say is free will and not in the other.

>What’s the difference between you acting on your desires and a robot acting on its programming?

Well I would say a person is just a really complicated robot, so there isn't anything fundamentally different apart from complexity.

1

HippyHitman t1_j8rd1xk wrote

Legality doesn’t imply truth.

Let's compare two scenarios: in one you program a robot to kill someone, in the other you program a robot to cut people's hair but it has a horrible malfunction and kills someone. In which of those scenarios is the robot exercising free will?

If you agree that humans are essentially no different from robots, then it follows that we can’t have free will regardless of what any court or law says.

5

InTheEndEntropyWins t1_j8rf5pd wrote

>Legality doesn’t imply truth.

I just refer to the legal system since it has high-quality analysis of free will which matches up with most people's intuitions around free will. It also lines up with what most philosophers think.

>Let’s compare two scenarios: in one you program a robot to kill someone,

Not sure here, how do you define a robot's desires?

If we switch it out to a person, say someone whose genetics and upbringing made them a violent killer: if they had the desire to kill someone and voluntarily acted on it, then it would be of their own free will.

> in the other you program a robot to cut people’s hair but it has a horrible malfunction and kills someone.

Well, that's not in line with its desires and isn't a voluntary action, so it wouldn't be of its own free will.

>If you agree that humans are essentially no different from robots, then it follows that we can’t have free will regardless of what any court or law says.

Sounds like you are talking about libertarian free will, and sure, people don't have libertarian free will. But that doesn't matter, since most people are really drawing on compatibilist intuitions, and compatibilist free will is something we do have.

What people really mean by free will is the same thing the courts are talking about. They aren't talking about the libertarian free will you are using.

2

HippyHitman t1_j8rg9mj wrote

This doesn’t seem like a logical argument to me. It seems like you’re just saying humans tend to believe we have free will, and our society is based upon that assumption.

I’m arguing that the assumption is incorrect.

Where would we draw the line between free will and compulsion? It has to be arbitrary, just like you noted about a robot’s desires. An automaton desires nothing other than following its programming, so anything a robot does successfully would be an exercise of free will. But I don’t think anybody would actually argue that, they’d argue it’s an exercise of the programmer’s free will. Why is it different for us just because our programming isn’t apparent?
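
A crude sketch of that point (the class and goal names are hypothetical):

```python
# Hypothetical automaton whose only "desire" is the goal it was built with.
class Automaton:
    def __init__(self, goal: str):
        self.goal = goal        # fixed by the programmer at construction time

    def act(self) -> str:
        # It never deliberates about the goal itself; it just pursues it.
        return f"performing: {self.goal}"

barber_bot = Automaton(goal="cut hair")
print(barber_bot.act())         # performing: cut hair
```

Every successful `act()` is "in line with its desires", yet it seems far more natural to say the will on display belongs to whoever set `goal`.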

6

InTheEndEntropyWins t1_j8rrx36 wrote

>This doesn’t seem like a logical argument to me. It seems like you’re just saying humans tend to believe we have free will, and our society is based upon that assumption.

I'm saying that humans use the compatibilist definition of free will. Hence it makes sense to talk about compatibilist free will rather than libertarian free will.

I'm saying it's illogical to use the incoherent concept of libertarian free will.

>Where would we draw the line between free will and compulsion?

It would depend on the facts and I like to look at the legal system, which does this all the time.

In cases like R. v. Ruzic, the court looked at the facts and determined that the accused was coerced and hence didn't act of their own free will.

In Powell v. Texas, the defence was that the defendant's public intoxication wasn't of his own free will since he was an alcoholic. While that argument might show he lacked libertarian free will, the court didn't accept it and found that he did act of his own free will. So the courts did distinguish between free will and compulsion in that case.

>It has to be arbitrary

Just like pretty much every high-level concept. Even the concept of "life" is arbitrary, with many blurred lines. But just because the concept of life is arbitrary doesn't mean it isn't useful or that we can't apply it in the context of humans.

>Why is it different for us just because our programming isn’t apparent?

Maybe that's the main difference. We aren't programmed with a clear simple goal of killing someone, whereas the robot was.

If you change the example so the robot is just programmed to be angry and violent, and in following those goals it kills someone, then I think it's fairly similar to the human case.

3

CruxCapacitors t1_j8t1c7h wrote

I dislike your focus on the legal use of "free will" because the legal system, particularly in the US (which is where you're citing cases from), has a very poor, punitive prison system with terrible recidivism rates. I can't help but feel that if more people realized that compatibilism is flawed, we might be able to better rehabilitate people.

2

InTheEndEntropyWins t1_j8t4336 wrote

>I dislike your focus on the legal use of "free will" because the legal system

If you read the legal judgements around free will, you'll see that they show an excellent grasp and understanding of the subject, as good as, if not better than, most of what philosophers write on it.

I like looking at the legal approach since it's a realistic understanding of the world that makes sense, rather than an incoherent idea that isn't applicable to the reality we live in.

>I can't help but feel that if more people realized that compatibilism is flawed, we might be able to better rehabilitate people.

Having a more rehabilitative justice system has absolutely nothing to do with the fact that the justice system is based on compatibilist free will. So that's just a non-argument.

Any functioning justice system which focuses on rehabilitation needs to also use compatibilist free will to work.

In fact studies suggest the justice system would likely be even worse without compatibilist free will.

>These three studies suggest that endorsement of the belief in free will can lead to decreased ethnic/racial prejudice compared to denial of the belief in free will. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0091572#s1
>
>For example, weakening free will belief led participants to behave less morally and responsibly (Baumeister et al., 2009; Protzko et al., 2016; Vohs & Schooler, 2008) From https://www.ethicalpsychology.com/search?q=free+will
>
>these results provide a potential explanation for the strength and prevalence of belief in free will: It is functional for holding others morally responsible and facilitates justifiably punishing harmful members of society. https://www.academia.edu/15691341/Free_to_punish_A_motivated_account_of_free_will_belief?utm_content=buffercd36e&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer From https://www.ethicalpsychology.com/search?q=free+will
>
>A study suggests that when people are encouraged to believe their behavior is predetermined — by genes or by environment — they may be more likely to cheat. The report, in the January issue of Psychological Science, describes two studies by Kathleen D. Vohs of the University of Minnesota and Jonathan W. Schooler of the University of British Columbia.

From https://www.nytimes.com/2008/02/19/health/19beha.html?scp=5&sq=psychology%20jonathan%20schooler&st=cse

1

BroadShoulderedBeast t1_j8rks0a wrote

I think in the context of free will discussion, voluntary action isn’t the same as free will. Even a robot can have a goal to do a thing as a matter of its pre-programming, but if another thing interrupts that action and the robot is made to do something different, it is no longer totally voluntary. The robot had a plan of action but had to change that plan because of circumstances outside of its control. Free will is not required for voluntary action.

Someone who kidnaps because they have the goal of making money versus someone who kidnaps because they have the goal of surviving against the person who ordered them at gun point to kidnap have very different degrees of voluntary action. The causes of their doing the kidnapping say something about the person’s propensity for voluntarily engaging in anti-social behavior.
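
To put the robot version of that in toy code (the goal name and the override mechanism are invented for illustration, nothing more than a sketch):

```python
from typing import Optional, Tuple

# Hypothetical goal-driven agent: it pursues its pre-programmed plan unless an
# outside force overrides it, in which case the action is no longer "voluntary".
def run_step(planned_action: str, external_override: Optional[str] = None) -> Tuple[str, bool]:
    """Return (action actually taken, whether it followed the agent's own plan)."""
    if external_override is not None:
        return external_override, False   # forced off its plan by something outside it
    return planned_action, True           # proceeds on its own pre-programmed plan

print(run_step("deliver package"))                             # ('deliver package', True)
print(run_step("deliver package", external_override="halt"))   # ('halt', False)
```

The second call is the gunpoint case: the action still happens, but it isn't the agent's own plan.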

1

InTheEndEntropyWins t1_j8roxdi wrote

>I think in the context of free will discussion, voluntary action isn’t the same as free will.

I didn't say it was the same.

>Someone who kidnaps because they have the goal of making money versus someone who kidnaps because they have the goal of surviving against the person who ordered them at gun point to kidnap have very different degrees of voluntary action. The causes of their doing the kidnapping say something about the person’s propensity for voluntarily engaging in anti-social behavior.

Even if you don't use the word "free will", you are using the concept to distinguish between these two situations. So I'm not really sure of your point.

You accept that there is a difference between the situations. Do you also accept the legal system and most people would use the term free will in that context?

2

BroadShoulderedBeast t1_j8uufed wrote

I worded that very poorly. What I should have said was, voluntary action doesn’t require libertarian free will. Then, as I kept trying to explain more, I realized I don’t even think ‘voluntary’ and ‘involuntary’ really make sense in a deterministic/random universe.

>So I’m not really sure of your point.

My point was that free will means you could have acted differently given the same exact set of circumstances, genetics, environment, and so on, because of some force that can act on the universe without being detected. Involuntary means the person wouldn't normally do that action except under a very small set of circumstances, usually because of a threat to safety or life.

>most people would use the term free will in that context?

I’m not sure what the conventional use of the term ‘free will’ has to do with metaphysics. See the conventional use of “begging the question” for why lay use of philosophy jargon is not always helpful.

2

InTheEndEntropyWins t1_j8x2kpk wrote

>Then, as I kept trying to explain more, I realized I don’t even think ‘voluntary’ and ‘involuntary’ really make sense in a deterministic/random universe.

I use the word voluntary since it's also used by incompatibilists like Sam Harris.

So Harris gives the example of deliberately shaking your hand as a voluntary action and your hand shaking as a result of Parkinson's as an involuntary action.

In theory we could do brain scans to differentiate the kinds of actions which are voluntary and involuntary.

So let's just use the words as defined by medical science.

I assume you agree there is a meaningful difference between someone hitting you on purpose vs. having an epileptic fit. That difference is what people normally mean by voluntary and involuntary actions.

>My point was that free will means you could have acted differently given the same exact set of circumstances, genetics, environment, so on,

Libertarian free will would mean that, but I'm talking about compatibilist free will, which doesn't.

>I’m not sure what the conventional use of the term ‘free will’ has to do with metaphysics. See the conventional use of “begging the question” for why lay use of philosophy jargon is not always helpful.

My point is that most lay people have compatibilist intuitions, most professional philosophers are outright compatibilists, and pretty much all moral, court, and justice systems are based on compatibilist free will.

>Most professional philosophers are compatibilists https://survey2020.philpeople.org/survey/results/all

Why on earth would someone use some metaphysical definition of free will, "libertarian free will", which is only really used by some amateur philosophers? It has zero relevance to what most people actually mean by the term, and zero relevance or impact on the world in which we live.

I want to talk about the definition of free will which most people really mean: the term used by most professional philosophers and the definition used by moral systems and by court and justice systems around the world. I want to use the definition which is relevant to the world in which we live.

So if you want to talk about metaphysics which has zero relevance to the world in which we live, then you should make that clear, because when people say that free will doesn't exist it confuses lay people. And when you confuse people, it leads to people being more racist, immoral, etc.

>These three studies suggest that endorsement of the belief in free will can lead to decreased ethnic/racial prejudice compared to denial of the belief in free will. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0091572#s1
>
>For example, weakening free will belief led participants to behave less morally and responsibly (Baumeister et al., 2009; Protzko et al., 2016; Vohs & Schooler, 2008)
>
>From https://www.ethicalpsychology.com/search?q=free+will
>
>these results provide a potential explanation for the strength and prevalence of belief in free will: It is functional for holding others morally responsible and facilitates justifiably punishing harmful members of society. https://www.academia.edu/15691341/Free_to_punish_A_motivated_account_of_free_will_belief?utm_content=buffercd36e&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
>
>From https://www.ethicalpsychology.com/search?q=free+will
>
>A study suggests that when people are encouraged to believe their behavior is predetermined — by genes or by environment — they may be more likely to cheat. The report, in the January issue of Psychological Science, describes two studies by Kathleen D. Vohs of the University of Minnesota and Jonathan W. Schooler of the University of British Columbia.
>
>From https://www.nytimes.com/2008/02/19/health/19beha.html?scp=5&sq=psychology%20jonathan%20schooler&st=cse

1

frnzprf t1_j8w5oy6 wrote

I agree that we should use words how they are used in daily life and not redefine them.

I think the judges shouldn't call that "free will" based on the usages of "free" and "will". Basically, I personally like the definition of libertarian free will better, because it's about a will that is free.

I'd call what the judge called "acting on free will", "acting based on your own will". If the judges definition is more common, it becomes the correct definition.

When it's hot in a room because of a power failure, you don't have to fix the air-conditioning system. The air conditioner wasn't "responsible". I think punishing criminals is like fixing or calibrating machines.

2

InTheEndEntropyWins t1_j8x6vq0 wrote

>I agree that we should use words how they are used in daily life and not redefine them.

That's my main argument. Most people have compatibilist intuitions with respect to free will. Most professional philosophers are outright compatibilists. Moral, court, and justice systems are all based on compatibilist free will.

So yes, we should use the definition of what most people/society really mean by the word free will.

>https://survey2020.philpeople.org/survey/results/all

The only people redefining free will are the ones using libertarian free will, and incompatibilists.

1

Latera t1_j8zzhzd wrote

The ability to act based on reasons. No robot that has ever been produced has been able to act based on reasons, and if, one day, we have AI that CAN reason, then it WILL be free.

1