Submitted by beforesunset1010 t3_za96to in philosophy
iiioiia t1_iz2a29u wrote
Reply to comment by Ok_Meat_8322 in How to solve moral problems with formal logic and probability by beforesunset1010
> If it differs wrt the fact that mathematics/logic are indifferent to substantive questions of fact or value, then I'm afraid to say that your model is incorrect on this point.
I'm thinking along these lines: "Perhaps certain conditions can be set and then things will resolve on their own."
You seem to be appealing to flawless mathematical evaluation, whereas I am referring to the behavior of the illogical walking biological neural networks we refer to as humans.
> No doubt, but once again that doesn't contradict what I said
I believe it does to some degree because you are making statements of fact, but you may not be able to care if your facts are actually correct. In a sense, this is the very exploit that my theory depends upon.
Ok_Meat_8322 t1_iz2ibc5 wrote
>I'm thinking along these lines: "Perhaps certain conditions can be set and then things will resolve on their own."
I'm having trouble discerning what exactly you mean by this, and how it relates to what I'm saying.
>You seem to be appealing to flawless mathematical evaluation, whereas I am referring to the behavior of the illogical walking biological neural networks we refer to as humans.
What does "flawless" mean here exactly- does it just mean that you've done the math correctly? But yes, I'm certainly assuming that one is doing the math correctly- even if one's math is correct, it still can only enter into the picture after we've settled the question of what moral philosophy, ethical framework, or specific values/judgments are right or correct.
>I believe it does to some degree because you are making statements of fact, but you may not be able to care if your facts are actually correct. In a sense, this is the very exploit that my theory depends upon.
Again with these vague phrases. I said that "the tricky question" was what moral philosophy, ethical system, or moral values/judgments one should adopt, not how math or logic can help resolve moral dilemmas... but, as you note, there is more than one "tricky question", which I'm happy to concede, and so what I really meant (and what I more properly should have said) was that the question of the correct/right ethical framework or moral philosophy is trickier than the question of how math/logic can help us solve moral problems.
But keeping that in mind, there was no contradiction between your reply and my original assertion. And yes, for the record, I most definitely do care about which facts are correct; I'm having trouble thinking of anything I care about more than this (at least when it comes to intellectual matters), and I'm drawing a blank.
iiioiia t1_iz2lzqx wrote
> I'm having trouble discerning what exactly you mean by this, and how it relates to what I'm saying.
A bit like this is what I have in mind:
https://i.redd.it/5lkp13ljw34a1.png
My theory is that humans disagree with each other less than it seems, but there is no adequately powerful mechanism in existence (or well enough known) to distribute this knowledge (assuming I'm not wrong).
> What does "flawless" mean here exactly- does it just mean that you've done the math correctly? But yes, I'm certainly assuming that one is doing the math correctly- even if one's math is correct, it still can only enter into the picture after we've settled the question of what moral philosophy, ethical framework, or specific values/judgments are right or correct.
What I'm trying to say is that yes, you are correct when it comes to reconciling mathematical formulas themselves, whereas I am thinking that showing people some "math" on top of some ontology (of various ideologies, situations, etc.) may persuade them to "lighten up" a bit. Here, the math doesn't have to be correct, it only has to be persuasive.
> Again with these vague phrases. I said that "the tricky question" was what moral philosophy, ethical system, or moral values/judgments one should adopt, not how math or logic can help resolve moral dilemmas... but, as you note, there is more than one "tricky question", which I'm happy to concede, and so what I really meant (and what I more properly should have said) was that the question of the correct/right ethical framework or moral philosophy is trickier than the question of how math/logic can help us solve moral problems.
I think we're in agreement, except for this part: "the correct/right ethical framework or moral philosophy" - I do not believe that absolute correctness is necessarily necessary for a substantial (say, 50%++) increase in harmony (although some things would have to be correct, presumably).
> And yes, for the record, I most definitely do care about which facts are correct...
Most everyone believes that, but I've had more than a few conversations that strongly suggest otherwise - I'd be surprised if you and I haven't had a disagreement or two before! As Dave Chappelle says: consciousness is a hell of a drug.
Ok_Meat_8322 t1_iz2qxzo wrote
>My theory is that humans disagree with each other less than it seems, but there is no adequately powerful mechanism in existence (or well enough known) to distribute this knowledge (assuming I'm not wrong).
But we're not only talking about resolving moral disputes between different people, but also about individual people having difficulty determining the correct moral course of action (i.e. "resolving a moral dilemma"), and this meme has nothing to say about the latter case (and that's assuming it says anything substantive or useful RE the former case, which I'm not sure it does).
The point is, once again, that mathematics or logic only enter into the question after one has decided or settled which ethical framework, moral philosophy, or particular moral values/judgments are right and correct, irrespective of how common or popular those ethical frameworks or moral values/judgments may be, or the extent to which people disagree about them.
>I think we're in agreement, except for this part: "the correct/right ethical framework or moral philosophy" - I do not believe that is necessarily necessary for a substantial (say, 50%++) increase in harmony.
Neither do I; determining or even demonstrating what is the right or correct thing is quite a separate matter from convincing others that it is the right or correct thing. It may very well be (and in fact almost certainly is) that even if we could establish what ethical framework or moral values/judgments are right or correct (something I don't believe to be possible), many if not most people will persist in sticking with ethical frameworks or particular moral values/judgments other than the right or correct one. And it may well not "increase harmony", it could even lead to the opposite; sometimes the truth is bad, depressing, or even outright harmful, after all.
But these psychological and sociological questions are nevertheless separate questions from the meta-ethical question raised by the OP, i.e. whether and how maths or logic can help resolve moral problems or dilemmas.
iiioiia t1_iz2te3r wrote
> But we're not only talking about resolving moral disputes between different people, but also about individual people having difficulty determining the correct moral course of action (i.e. "resolving a moral dilemma"), and this meme has nothing to say about the latter case (and that's assuming it says anything substantive or useful RE the former case, which I'm not sure it does).
All decisions are made within an environment, and I reckon most of those decisions are affected at least to some degree by causality that exists (but cannot be seen accurately, to put it mildly) in that environment....so any claims about "can or cannot" are speculative imho.
> The point is, once again, that mathematics or logic only enter into the question after one has decided or settled which ethical framework, moral philosophy, or particular moral values/judgments are right and correct, irrespective of how common or popular those ethical frameworks or moral values/judgments may be, or the extent to which people disagree about them.
I think we are considering the situation very differently: I am proposing that if a highly detailed descriptive model of things was available to people, perhaps with some speculative "math" in it, this may be adequate enough to produce substantial positive change. So no doubt, my approach differs from the initial proposal here, I do not deny it (or in other words: you are correct in that regard).
> ...many if not most people will persist in sticking with ethical frameworks or particular moral values/judgments other than the right or correct one.
To me, this is the main point of contention: would/might my alternate proposal work?
> And it may well not "increase harmony", it could even lead to the opposite; sometimes the truth is bad, depressing, or even outright harmful, after all.
Agree....it may work, it may backfire (depending on how one does it). Also: I am not necessarily opposed to ~stretching the truth (after all, everyone does it).
> But these psychological and sociological questions are nevertheless separate questions from the meta-ethical question raised by the OP, i.e. whether and how maths or logic can help resolve moral problems or dilemmas.
Agree, mostly (I can use some math in my approach).
Ok_Meat_8322 t1_iz2v3nr wrote
>I think we are considering the situation very differently: I am proposing that if a highly detailed descriptive model of things was available to people, perhaps with some speculative "math" in it, this may be adequate enough to produce substantial positive change.
I don't disagree with this, what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma after one has presupposed or established a particular ethical framework, moral philosophy, and/or particular moral norms and judgments. Descriptive models, non-normative facts, and math/logic alone can never solve a moral problem or dilemma; in order to arrive at a moral judgment or conclusion, one must presuppose an ethical framework or particular norms/value-judgments.
>To me, this is the main point of contention
It may well be the angle that interests you, but it's not the point of contention between us, because I'm not taking any position on that question.
iiioiia t1_iz3242b wrote
> I don't disagree with this, what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma after one has presupposed or established a particular ethical framework, moral philosophy, and/or particular moral norms and judgments. Descriptive models, non-normative facts, and math/logic alone can never solve a moral problem or dilemma; in order to arrive at a moral judgment or conclusion, one must presuppose an ethical framework or particular norms/value-judgments.
I suspect you have a particular implementation in mind, and in that implementation what you say is indeed correct.
Ok_Meat_8322 t1_iz7db9d wrote
Once again, I'm not sure what that's supposed to mean.
iiioiia t1_iz9mvo6 wrote
"I don't disagree with this, what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma ...."
What would "applied" consist of?
Ok_Meat_8322 t1_izbtljz wrote
The example I used earlier was a utilitarian, who can use basic arithmetic to resolve moral dilemmas (such as, for instance, the trolley problem).
But this only works because the utilitarian has already adopted a particular ethical framework. Math can't tell you what values or ethical framework you should adopt, but once you have adopted them maths and logic may well be used to resolve moral issues.
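To make that concrete, here is a toy sketch of the kind of arithmetic a utilitarian might run on the trolley problem. All the welfare numbers are invented for illustration, and note that the code only "works" because the maximize-net-welfare framework is already baked in rather than derived:

```python
# Toy utilitarian calculation for the trolley problem.
# The welfare values are made up; the framework (maximize net
# welfare) is presupposed, not derived - which is the point.

def net_welfare(outcome):
    """Sum the (hypothetical) welfare values of everyone affected."""
    return sum(outcome.values())

# Option A: do nothing; the trolley kills five people.
do_nothing = {"five_on_main_track": -5}

# Option B: pull the lever; the trolley kills one person.
pull_lever = {"one_on_side_track": -1}

options = {"do nothing": do_nothing, "pull the lever": pull_lever}
best = max(options, key=lambda name: net_welfare(options[name]))
print(best)  # -> pull the lever
```

The arithmetic is deliberately trivial: everything doing the real work is the prior choice to score outcomes by summed welfare at all.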
iiioiia t1_izc1dt3 wrote
I don't disagree, but this seems a bit flawed - you've provided one example of a scenario where someone has done it, but this in no way proves that it must be done this way. In an agnostic framework, representations of various models could have math attached to them (whether it is valid or makes any fucking sense is a secondary matter), and that should count as an exception to your rule, I think?
Ok_Meat_8322 t1_j0naes5 wrote
>I don't disagree, but this seems a bit flawed - you've provided one example of a scenario where someone has done it, but this in no way proves that it must be done this way.
I don't think it must be done that way; I don't think logic or mathematics is going to be relevant to most forms of moral reasoning. But consequentialism is the most obvious case where it would work, since consequentialism often involves quantifying pleasure and pain and so would be a natural fit.
But if what you mean is that we could sometimes use logic or mathematics to answer moral questions without first presupposing a set of moral values or an ethical framework, I think it is close to self-evident that this is impossible: when it comes to reasoning or argument, you can't get out more than you put in, and so if you want to reach a normative conclusion, you need normative premises, or else your reasoning would necessarily be (logically) invalid.
iiioiia t1_j0ng2rm wrote
Oh, I'm not claiming that necessarily correct answers can be reached ("whether it is valid or makes any fucking sense is a secondary matter"), I don't think any framework can provide that for this sort of problem space.
Ok_Meat_8322 t1_j0nn0qc wrote
I'm skeptical about whether moral judgments are even truth-apt at all, but the strength of a line of reasoning or argument is equal to that of its weakest link, so your confidence in your conclusion- assuming your inference is logically valid- is going to boil down to your confidence in your (normative) premises. Which will obviously vary from person to person, and subjective confidence is no guarantor of objective certainty in any case.
So I'm fine with the idea that logic or mathematics could help solve moral dilemmas or problems, in at least some instances (e.g. utilitarian calculations/quantifications of pleasure/happiness vs pain/suffering) but it seems to me that some basic moral values or an ethical framework is a necessary prerequisite... which is usually the tricky part, so I'm somewhat dubious of the overall utility of such a strategy (it seems like it only helps solve what is already the easiest part of the problem).
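The "weakest link" point can be put numerically. With made-up confidence values, a logically valid argument's conclusion can be no more credible than the conjunction of its premises, which is bounded above by the least credible premise:

```python
# Toy illustration of the "weakest link" point; the confidence
# values are invented. For a valid argument, P(conclusion) cannot
# exceed P(all premises true), which is at most the smallest
# individual premise probability.
premise_confidence = {
    "descriptive premise (five deaths vs one)": 0.95,
    "normative premise (one ought to minimize deaths)": 0.60,
}

# Upper bound on rational confidence in the conclusion:
ceiling = min(premise_confidence.values())
print(ceiling)  # -> 0.6
```

As the comment says, the normative premise is usually the shaky one, so it sets the ceiling no matter how solid the descriptive facts or the math are.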
iiioiia t1_j0nufl7 wrote
> I'm skeptical about whether moral judgments are even truth-apt at all, but the strength of a line of reasoning or argument is equal to that of its weakest link....
Mostly agree. As I see it, the problem isn't so much that answers to moral questions are hard to discern, but that, with few exceptions I can think of (literal murder being one), they do not have a correct answer at all.
> ...so your confidence in your conclusion- assuming your inference is logically valid- is going to boil down to your confidence in your (normative) premises. Which will obviously vary from person to person, and subjective confidence is no guarantor of objective certainty in any case.
Right - so put error correction into the system: when participants' minds wander into fantasy, provide them with gentle course correction back to reality, which is filled with non-visible (for now, at least) mystery.
> So I'm fine with the idea that logic or mathematics could help solve moral dilemmas or problems, in at least some instances (e.g. utilitarian calculations/quantifications of pleasure/happiness vs pain/suffering) but it seems to me that some basic moral values or an ethical framework is a necessary prerequisite... which is usually the tricky part, so I'm somewhat dubious of the overall utility of such a strategy (it seems like it only helps solve what is already the easiest part of the problem).
"Solving" things can only be done in deterministic problem spaces, like physics. Society is metaphysical, and non-deterministic. It appears to be deterministic, but that is an illusion. Just as the average human 200 years ago was ~dumb by our standards (as a consequence of education and progress) and little aware of it, so too are we. This could be realized, but like many things humanity has accomplished, first you have to actually try to accomplish it.
Ok_Meat_8322 t1_j0ny94r wrote
>"Solving" things can only be done in deterministic problem spaces, like physics
I think it's more a matter of "solving" things looking quite different in one domain than in another. And solving a moral dilemma doesn't look at all like solving a problem in physics. But that doesn't mean it doesn't happen; oftentimes "solving" a moral problem or dilemma means deciding on a course of action. And we certainly do that all the time.
iiioiia t1_j0o9mf3 wrote
> And solving a moral dilemma doesn't look at all like solving a problem in physics.
Agree, but listening to a lot of people talk with supreme confidence about what "is" the "right" thing to do, I get the sense this idea is not very broadly distributed.
> oftentimes "solving" a moral problem or dilemma means deciding on a course of action. And we certainly do that all the time
Right, but the chosen course doesn't have to be right/correct, it only has to be adequate for the maximum number of people, something that I don't see The Man putting a lot of effort into discerning. If no one ever checks in with The People, should we be all that surprised when they are mad and we don't know why? (Though not to worry: memes and "explanatory" "facts" can be imagined into existence and mass-broadcast into the minds of the population in days, if not faster.)