bestest_name_ever t1_j69hxzj wrote
Reply to comment by CubanHermes in Is there an upper limit to structure size in a vacuum? Could a sufficiently advanced civilisation build a galaxy sized structure in space or would it become too massive and collapse in on itself? by CubanHermes
No. Billions of workers don't get you anywhere. You'd need numbers of workers so large you'd have to look up their names.
bestest_name_ever t1_j69htz5 wrote
Reply to comment by GrumpyButtrcup in Is there an upper limit to structure size in a vacuum? Could a sufficiently advanced civilisation build a galaxy sized structure in space or would it become too massive and collapse in on itself? by CubanHermes
Yes, and a compact shape like a cube makes that harder. But the main point is that any conceivable size is still much smaller than galaxy-sized. The Death Star, for example, if it had a crew density comparable to a current ship's, would hold several tens of trillions of personnel. And it's tiny, roughly a quarter the diameter of Ceres. But it could be built without requiring magic materials; moving it would be a different issue.
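A rough back-of-envelope check of that crew figure, as a minimal sketch: it assumes a Death Star diameter of about 160 km and a crew density similar to a modern aircraft carrier (roughly 5,000 crew in ~300,000 m³ of internal volume). Both numbers are illustrative assumptions, not from the comment above.

```python
import math

# Assumed (hypothetical) figures, not taken from the original comment:
death_star_diameter_m = 160_000   # ~160 km, a common estimate for the first Death Star
carrier_crew = 5_000              # rough crew complement of a modern aircraft carrier
carrier_volume_m3 = 300_000       # very rough internal hull volume of that carrier

# Crew density (people per cubic metre) on the reference ship
crew_density = carrier_crew / carrier_volume_m3

# Spherical volume of the station
radius_m = death_star_diameter_m / 2
station_volume_m3 = (4 / 3) * math.pi * radius_m ** 3

# Scale the ship's crew density up to the whole station
estimated_crew = crew_density * station_volume_m3
print(f"~{estimated_crew:.1e} people")  # on the order of 10^13, i.e. tens of trillions
```

Even with those loose assumptions the result lands in the tens of trillions, consistent with the figure in the comment.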
bestest_name_ever t1_ivfk8jx wrote
Reply to comment by LZeroboros in Michael Shermer argues that science can determine many of our moral values. Morality is aimed at protecting certain human desires, like avoidance of harm (e.g. torture, slavery). Science helps us determine what these desires are and how to best achieve them. by Ma3Ke4Li3
Those are the same.
bestest_name_ever t1_ivf90uh wrote
Reply to comment by stoppedcaring0 in Michael Shermer argues that science can determine many of our moral values. Morality is aimed at protecting certain human desires, like avoidance of harm (e.g. torture, slavery). Science helps us determine what these desires are and how to best achieve them. by Ma3Ke4Li3
> I don't necessarily think that the answer people give to a question is correlated with the factual answer to that question, but I do think there may be value in looking for those questions for which consistent answer profiles are given across human populations. In other words: killing is thought of as a taboo basically everywhere you go, which implies that there may be some scientific underpinning to that taboo. Eating pork or beef is thought of as very taboo to some, but very normal to others, implying that the taboo is less scientific than particular.
Majority opinion doesn't really seem to be relevant if you just look at history. What would the majority have said about whether the sun orbits the earth, or indeed whether the earth is flat, if you'd asked at various points in history? There is no easily visible correlation between the truth of an opinion and whether it's the majority opinion, nor with the size of the majority holding it.
bestest_name_ever t1_ivf7b2c wrote
Reply to comment by Angelo_Maligno in Michael Shermer argues that science can determine many of our moral values. Morality is aimed at protecting certain human desires, like avoidance of harm (e.g. torture, slavery). Science helps us determine what these desires are and how to best achieve them. by Ma3Ke4Li3
No, that's not the point. You're talking about practicalities, i.e. predicting the full consequences of an action (which only matters for consequentialist ethics anyway).
The fundamental problem is that you cannot simply equate suffering with bad and pleasure with good; that equation needs to be justified, and it is this justification that science can never provide.
bestest_name_ever t1_ivc9dkd wrote
Reply to comment by stoppedcaring0 in Michael Shermer argues that science can determine many of our moral values. Morality is aimed at protecting certain human desires, like avoidance of harm (e.g. torture, slavery). Science helps us determine what these desires are and how to best achieve them. by Ma3Ke4Li3
> So there can be nothing of value to be gained, scientifically or otherwise, from subjectively asking people which hand they prefer? That strikes me as false.
Do you think that determining what the majority of people believe about a factual question, for example whether humans are descended from monkeys, tells you anything about the actual, factual answer? If no, why do you think that question should be treated differently from questions about moral facts? If yes, what conclusion do you think we can draw from the majority's belief about the fact of the matter at hand?
bestest_name_ever t1_ivc8op9 wrote
Reply to comment by eliyah23rd in Michael Shermer argues that science can determine many of our moral values. Morality is aimed at protecting certain human desires, like avoidance of harm (e.g. torture, slavery). Science helps us determine what these desires are and how to best achieve them. by Ma3Ke4Li3
> "If it is right for you, it is right for everybody". - While most people today would wholeheartedly agree, this maxim too is a value statement. It could be seen as a version of Kant's Categorical Imperative, but, it is (arguably) an axiom rather than anything independently supported by either Reason or Science.
The basic equality of all moral agents is also quite important for most versions of consequentialism. But yes, it's not universal; Nietzsche is probably the most famous dissenter at the moment.
bestest_name_ever t1_ivc89vf wrote
Reply to comment by Wizzdom in Michael Shermer argues that science can determine many of our moral values. Morality is aimed at protecting certain human desires, like avoidance of harm (e.g. torture, slavery). Science helps us determine what these desires are and how to best achieve them. by Ma3Ke4Li3
>I think science can be useful for studying what makes people happy/content and what causes the most harm/suffering. In that way, science can help direct your moral framework to actually achieve the greatest good.
No it can't; you've also fallen for the naturalistic fallacy. What science can help determine is the greatest happiness/contentment and the least harm/suffering, and those are not the same as "good" (or "bad", respectively).
bestest_name_ever t1_itn8b87 wrote
Reply to comment by MankindsError in Deflecting asteroids is not enough — we need to know when they approach by burtzev
Yes, this is pretty much spot on. Accuracy isn't actually important, because in a hypothetical mission where we're saving Earth, it was already going to hit. We can't accidentally make it hit twice, so the only thing that matters is changing the orbit enough that it no longer hits. And because asteroids are inert, it doesn't even matter if the new course isn't perfectly safe and would still hit Earth the next time it comes around: once we know of an asteroid we can track it, so that deflection would buy sufficient time for another deflection mission.
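A minimal sketch of why lead time matters so much here, assuming a roughly circular orbit and using the standard rule of thumb that a small along-track delta-v of Δv shifts the along-track position by roughly 3·Δv·t after time t. The 1 cm/s nudge and the lead times are illustrative assumptions, not figures from the comment.

```python
# Rule of thumb: an along-track delta-v of dv applied t seconds before a
# (potential) impact shifts the asteroid's along-track position by ~3 * dv * t.
# All numbers below are illustrative assumptions, not from the original comment.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
EARTH_RADIUS_KM = 6_371

def miss_distance_km(delta_v_m_s: float, lead_time_years: float) -> float:
    """Approximate along-track shift (km) from a small along-track delta-v."""
    t_seconds = lead_time_years * SECONDS_PER_YEAR
    return 3 * delta_v_m_s * t_seconds / 1000

for years in (1, 5, 10):
    shift = miss_distance_km(0.01, years)  # a 1 cm/s nudge
    print(f"{years:>2} yr lead time: ~{shift:,.0f} km shift "
          f"(Earth radius ~{EARTH_RADIUS_KM:,} km)")
```

Under these assumptions, even a centimetre-per-second nudge clears an Earth radius given roughly a decade of warning, which is the sense in which tracking a known asteroid buys time for another deflection attempt.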
bestest_name_ever t1_j8elmoy wrote
Reply to comment by SvetlanaButosky in “The principle of protecting our own thinking from eavesdroppers is fundamental to autonomy.” – Daniel Dennett debates the sort of free will it’s worth wanting with neuroscientists Patrick Haggard and philosopher Helen Steward by IAI_Admin
> As Sam Harris put it, compatibilism is just arguing that free will exists as long as the puppet ignores its strings.
Lol. There's a reason why other philosophers take Dennett seriously and Harris ... not. His inability to understand basics such as what compatibilism actually claims is a major part of it.