Submitted by BernardJOrtcutt t3_10jd59h in philosophy

Welcome to this week's Open Discussion Thread. This thread is a place for posts/comments which are related to philosophy but wouldn't necessarily meet our posting rules (especially posting rule 2). For example, these threads are great places for:

  • Arguments that aren't substantive enough to meet PR2.

  • Open discussion about philosophy, e.g. who your favourite philosopher is, what you are currently reading

  • Philosophical questions. Please note that /r/askphilosophy is a great resource for questions and if you are looking for moderated answers we suggest you ask there.

This thread is not a completely open discussion! Any posts not relating to philosophy will be removed. Please keep comments related to philosophy, and expect low-effort comments to be removed. All of our normal commenting rules are still in place for these threads, although we will be more lenient with regards to commenting rule 2.

Previous Open Discussion Threads can be found here.

12

Comments


SvetlanaButosky t1_j5kl3oi wrote

How can procreation be moral when existence is a huge trolley problem that nobody can consent to before birth?

I mean, it's the trolley problem: somebody will suffer a terrible life through pure bad luck. It's unpreventable; as long as people procreate, some will draw the shortest straws. lol

So knowing that some will always live terrible lives, how is it moral to keep creating people and risk this?

Does this mean it's always morally OK to let the trolley crush unlucky people in exchange for the "decent" lives of others? Is this imposed sacrifice coherent with our moral intuitions?

2

zaceno t1_j5me647 wrote

Your argument hinges on the assumption that a “miserable life” is worse than no life at all. I'm not convinced that is true.

Also, since we don't remember anything from before we were born, it's possible we were all given a choice but forget it as we incarnate. I'm not arguing that this is the case - just saying it would also be a way out of the dilemma.

4

SvetlanaButosky t1_j5ndg5r wrote

>I’m not convinced that is true.

Would most people trade places with those living these miserable lives? I'm talking about the worst prolonged suffering possible, much of it ending in agony, and many of the victims children. They don't get any "happy ending" for their terrible fate; it's misery from birth till death. This is statistically undeniable and unpreventable for some.

Would you trade your life for theirs, if life is that valuable?

>it’s possible we were all given a choice,

Eh, what? No offense, but an extraordinary claim requires extraordinary proof.

3

zaceno t1_j5nhq3l wrote

Didn’t claim pre-life choice is real - just a hypothetical possibility.

About trading my life: that's not what I said. Of course I would not trade my life for a miserable one.

What I said was: perhaps having a bad life is better than never living at all.

3

AnUntimelyGuy t1_j5on6d9 wrote

>What I said was: perhaps having a bad life is better than never living at all.

I am not the person you are responding to, but I think value judgments like this are entirely subjective. In this sense, OP can judge that a life is not worth living within her perspective, and you can judge that a life is worth living within your own. The person whose life is miserable can also judge whether his/her life is worth living or not. All of you can be correct in this manner.

It is important to me that people are also able to weave this subjectivism into their discourse: to recognize other people's values and desires as valid expressions, and not to shut them down as unreasonable and wrong.

As before, this approach requires recognizing subjectivism with regard to reasons and values. I am rather extreme: some would consider me amoral (which is my own preference), others a moral relativist. My objective is to remove any unnecessary intermediaries (e.g. moral obligations and experiences of external values) between our cares and concerns and the expression of them.

2

zaceno t1_j5oucd5 wrote

I fully agree with the subjectivity of evaluating the worth of living, which is why I used the word “perhaps”. In fact, in everything I wrote I was explicitly not expressing any personal beliefs or values. I was just offering some hypotheticals that could possibly invalidate or weaken the original argument (“procreation is immoral”).

3

bradyvscoffeeguy t1_j5q0ym5 wrote

How to prove anything

This is a variant of the "liar's sentence". Consider the sentence

S = "This sentence is false or grass is blue"

If S is false, then it must be true, resulting in a contradiction. If S is true, then the statement before the "or" is false, so "grass is blue" must be true. Thus we have proved grass is blue.

Obviously this is a paradox stemming from self-reference, and we can use it to "prove" anything. But the important thing to note is that we didn't end up with a purely logical contradiction, just the claim that grass is blue. It's only because we know this is wrong that we can recognise the paradox.

What's the upshot of this? While people are aware that self-referential statements create paradoxes, in practicing philosophy they normally don't worry about them, thinking that if one crops up they'll be able to spot it because it causes a contradiction. But what I've shown is that paradoxes don't have to create logical contradictions. So whenever you see arguments that use self-referential statements, be aware! There could be some funny business afoot.
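The argument can be checked mechanically. Here is a small sketch (plain Python; the variable names are mine) that treats S as a constraint - its truth value must equal the truth value of what it asserts - and brute-forces all consistent assignments:

```python
# S = "This sentence is false or grass is blue"
# Consistency requires: S == ((not S) or grass_is_blue)
consistent = []
for grass_is_blue in (False, True):
    for S in (False, True):
        if S == ((not S) or grass_is_blue):
            consistent.append((grass_is_blue, S))

# Only one assignment survives: grass_is_blue=True, S=True.
# Demanding a consistent truth value for S "forces" grass to be blue.
print(consistent)  # [(True, True)]
```

The search confirms the prose argument: there is no consistent reading with grass not blue, so the paradox masquerades as a proof rather than as an outright contradiction.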

(I'm finally caving and posting this here because it wasn't allowed as a post. I've shortened it considerably.)

1

Xeiexian0 t1_j5qe5pr wrote

All sentient beings have a moral right to holistic social/behavioral information entropy.

First of all, let me apologize for posting this rather lengthy ocean of ~~confusion~~ wisdom.

Second, here are some definitions:

Sentient being: a system that is capable of organically modeling its environment and its own location within that environment, and that has preferences with regard to such environment. Many animals may qualify as such.

Person (sapient being): a sentient being with sufficient mental capacity to form a holistic worldview and distinguish right from wrong. Humans may qualify as such, although they may not be the only ones.

Morality: A model of the ideal behavior of persons or groups of persons.

Ethics: The study of morality. The logic applied to prescriptive modeling.

Moral Right: Something a sentient being or group of sentient beings would be-able-to-do/possess in a moral ideal scenario regardless of the desires/demands of other sentient beings or groups. A moral right held by a sentient being usually implies a moral obligation imposed on persons to respect/support such right. As such, a rights model is a type of moral/legal model.

Information Entropy (H): a measure of the “spread” of possible states a system can be in, or the open-endedness of a system [0]. In simple form, the entropy (in nats) of a system is given by H = ln(Z), where Z is the number of possible states the system can be in, and “ln()” represents the natural logarithm function.

[0] https://brilliant.org/wiki/entropy-information-theory/

Information Negentropy (D): a measure of the “collapse” of possible states a system can be in, or close-endedness of a system. In simple form, the negentropy (in nats) of a system is given by D = ln(Z0) - ln(Z) = ln(Z0/Z), where Z0 is the maximum number of possible states the system can be in.

Socio-Behavioral Information Entropy (SBIE): Information entropy applied to sentient beings and their social interactions.

Socio-Behavioral Information Negentropy (SBIN): Information negentropy applied to sentient beings and their social interactions.

With that out of the way...

In most moral models, prescriptive terms tend either to remain undefined or to be defined in an ad hoc, "just because" manner. This has led to countless mutually exclusive moral codes being proposed, even under the same ethical framework. Up to this point, there has been no success in deriving a foolproof, non-arbitrary moral model.

This post will attempt to remedy this problem, deriving a rights based moral model, here called social entropian rights (SER). This will (ostensibly) be accomplished by introducing a new tool to general epistemology, namely, the principle of maximum entropy (PME). The PME states that the model most likely to be valid is the model with the highest information entropy given our background knowledge [1][2][3].

[1] https://www.statisticshowto.com/maximum-entropy-principle/

[2] https://deepai.org/machine-learning-glossary-and-terms/principle-of-maximum-entropy

[3] https://pillowlab.princeton.edu/teaching/statneuro2018/slides/notes08_infotheory.pdf

If a system is known to be in one of Z0 states, then the probability that the system will be confined to Z states within the Z0 states is given by

P(Z) = Z/Z0 = exp(ln(Z/Z0)) = exp(-ln(Z0/Z)) = exp(-D)

, where “exp()” represents the exponential function. Note: As Z approaches the maximum number of states, Z0,

P(Z) --> P(Z0) = Z0/Z0 = 1

, which is the maximum possible probability. The corresponding entropy is H = ln(Z0) which also happens to be the maximum possible entropy. This is a highly simplified “proof” of the PME.

An example of the use of the PME would be a scenario where a marble is contained in one of eight boxes with equal probability of being in each box. We do not know which box the marble is in. Suppose there are 3 models that try to describe the location of the marble.

  1. A model that insists that the marble is in box 1.
  2. A model that insists that the marble is in either box 2, 4, or 7.
  3. A model that insists that the marble is in one of the 8 boxes.

The corresponding entropies, negentropies, and probabilities are

  1. H = ln(1) = 0, D = ln(8) - 0 = ln(8), P = exp(-ln(8)) = 1/8
  2. H = ln(3), D = ln(8) - ln(3) = ln(8/3), P = exp(-ln(8/3)) = 3/8
  3. H = ln(8), D = ln(8) - ln(8) = 0, P = exp(-0) = 1

The probability increases with increasing entropy. This should give you a general idea of how the PME functions.
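The marble example is easy to verify numerically. A quick sketch using the definitions above (H = ln(Z), D = ln(Z0/Z), P = exp(-D)); the variable names are mine:

```python
from math import exp, log

Z0 = 8  # maximum number of possible states (boxes)

# Number of boxes each of the three models allows the marble to occupy.
for name, Z in [("model 1", 1), ("model 2", 3), ("model 3", 8)]:
    H = log(Z)            # information entropy, H = ln(Z), in nats
    D = log(Z0) - log(Z)  # negentropy relative to the maximum, D = ln(Z0/Z)
    P = exp(-D)           # probability the model holds, P = Z/Z0
    print(f"{name}: H = {H:.3f} nats, D = {D:.3f} nats, P = {P:.3f}")
```

Running this reproduces the table: P = 1/8, 3/8, and 1 respectively, with probability rising as entropy rises.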

Using the PME, social entropian rights can be derived in the following steps:


  1. By parsimony of general epistemology (avoiding epistemological double standards), the same logical rules that apply to descriptive notions would reasonably apply to prescriptive notions. The laws of logic/probability and the PME are thus imported into ethics.
  2. The preferences/motivations of sentient beings, being prescription type entities, can be used as ingredients to generate moral facts. To avoid bias, the preferences/mental-motivations of persons are black-boxed as the moral framework is derived.
  3. To be meaningful, a moral model must not imply its own violation through the physical impossibility of following it; otherwise it contradicts itself and violates the laws of logic provided by Premise 1. The boundaries of the physical universe are thus imported into the boundaries of ideal behavior.
  4. There is little to go on with regards to what persons ought to do other than the physical limitations people as a whole have (Premise 3) given that their wills are black-boxed (Premise 2).
  5. By the PME, provided by Premise 1, the moral model representing the maximum information entropy given our background information is the most probable moral model.
  6. The moral model with the maximum information entropy is the one with maximum SBIE for everyone as a whole (holistic SBIE) given the background information of what is physically possible (Premise 3).
  7. If the preferences of sentient beings conflict, the preferred conditions closest to maximum holistic SBIE would therefore best qualify as the objectively ideal conditions (Premises 5 and 6), and should thus take precedence. This forms a basis for moral entropic rights.
  8. Therefore the SBIE of sentient beings should not be suppressed without said sentient beings’ agreement.

A rational person agrees to any SBIE suppression they intentionally inflict upon themselves, so any controversy about a person's intentional behavior must involve the suppression of SBIE in others. Applying the PME, the probability that a person has a right to an act c is given by the social entropian rights equation (SERE):

R(c) = exp(-D0(c))

, where D0(c) is the imposed SBIN (suppressed SBIE) on other persons as a result of c. A more detailed description of social entropian rights and SBIE can be found in the following text [4] (mine):

[4] https://www.mediafire.com/file/al4xn6fhb14oeea/S-B-I-E.pdf/file
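For illustration only, the SERE as stated above reduces to a one-line function; the helper name and the sample D0 values below are mine, not the author's:

```python
from math import exp, log

def rights_probability(D0):
    """SERE: R(c) = exp(-D0(c)), where D0(c) is the SBIN (in nats)
    that act c imposes on other persons."""
    return exp(-D0)

# An act imposing no suppression on others is fully within one's rights.
print(rights_probability(0.0))      # 1.0
# An act halving others' accessible states (D0 = ln 2) gets weight ~0.5.
print(rights_probability(log(2)))
```

On this reading R(c) behaves like the model probabilities in the marble example: the more an act closes off other people's state space, the lower its rights weight.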

In summary, the principle of maximum entropy can, through the simplification of general epistemology, be imported into ethics leading to the derivation of social entropian morality. From social entropian morality, a system of moral entropic rights, SER, can be derived from the core moral right to holistic SBIE.

Thank you for your consideration. I look forward to your feedback.

1

SvetlanaButosky t1_j5qfsor wrote

It means somebody will get crushed, horribly, slowly, painfully, and then they die with no reward at the end of the struggle except the release of death.

Not sure how else to explain it, lol.

As long as people exist it will happen, so unless people stop existing, it can't be solved.

So the question is about the morality of letting it happen because we are willing to sacrifice some people in exchange for the good lives of others.

2

bradyvscoffeeguy t1_j5qjs6e wrote

Yeah so when you're talking about someone who doesn't yet exist, there aren't direct sacrifices, so I would reformulate what you are saying to something like this: "When choosing to reproduce, you are gambling on giving rise to a happy life at the risk of giving rise to a miserable one."

I don't know if this is exactly what you had in mind, but I suppose you could say that by making this gamble, you are making it on behalf of the person you are bringing into existence, and only they should have the moral authority to have made such an important choice. But we are happy to let parents make many decisions on behalf of their children, and don't give children any moral authority. And the non-existent can hardly make such a choice for themselves. Indeed, it is only after giving birth to and raising a child to adulthood that we give them their full rights and freedom of choice; prior to that important choices are made for them, and we find this acceptable.

An alternative approach is just to more straightforwardly argue that taking the gamble is ethically wrong because the possible bad outweighs the possible good. This is where you would do well to deploy an asymmetry argument. Check the link I sent you.

1

AnticallyIlliterate t1_j5sghet wrote

On moral anti-realism

Moral anti-realism is the view that moral statements, such as “murder is wrong” or “honesty is good,” do not correspond to any objective moral facts or values that exist independently of human opinions or beliefs. This is in contrast to moral realism, which holds that moral statements do correspond to objective moral facts or values.

One of the main arguments for moral anti-realism is that there is no way to objectively verify or falsify moral claims. For example, it is not possible to conduct a scientific experiment to prove that murder is wrong or to measure the “goodness” of honesty. This contrasts with scientific claims, which can be tested and verified through experimentation and observation.

Another argument for moral anti-realism is that moral beliefs and values are culturally relative and vary widely across different societies and historical periods. This suggests that moral beliefs are not based on any objective moral facts, but rather on the cultural and historical context in which they are held.

One version of moral anti-realism is called subjectivism, which holds that moral statements express the personal opinions or feelings of the person making them. According to subjectivism, there are no objective moral facts or values, but rather, moral statements are simply expressions of the speaker’s personal views.

Another version of moral anti-realism is called relativism, which holds that moral statements are true or false relative to a particular culture or society. According to relativism, there are no objective moral facts or values that hold true across all cultures or societies.

A third version of moral anti-realism is called expressivism, which holds that moral statements are not intended to describe any moral facts or properties but instead to express the speaker’s attitudes or feelings. Expressivists believe that moral statements are not truth-apt, that is, they don’t purport to be true or false, but instead express the speaker’s moral attitudes or feelings.

Moral anti-realism has been criticized by moral realists, who argue that it fails to provide a coherent account of moral language and ethical reasoning. They argue that moral anti-realism is unable to explain how moral statements can be meaningful or have any practical implications if they do not correspond to any objective moral facts or values.

Despite these criticisms, moral anti-realism continues to be a widely debated topic in philosophy and ethics. It is an important perspective to consider when examining the nature of morality and the foundations of ethical reasoning.

1

Xeiexian0 t1_j5w4xb0 wrote

>One of the main arguments for moral anti-realism is that there is no way to objectively verify or falsify moral claims. For example, it is not possible to conduct a scientific experiment to prove that murder is wrong or to measure the “goodness” of honesty. This contrasts with scientific claims, which can be tested and verified through experimentation and observation.

This is technically an argument from ignorance. Even if we do not currently know how to objectively derive moral facts (although I would posit the method I have posted in this thread as a possible candidate), this does not imply that no such method can be found in the future.


>Another argument for moral anti-realism is that moral beliefs and values are culturally relative and vary widely across different societies and historical periods. This suggests that moral beliefs are not based on any objective moral facts, but rather on the cultural and historical context in which they are held.

The fact that there are various models of an alleged phenomenon, each contradicting the others, does not preclude the existence of that phenomenon. Otherwise we would have to discard the spherical-earth theory because there were so many different models of the earth in the ancient past. We would also have to discard the theory of evolution because of all the creation myths people hold even to this day. It is possible that only one person's/culture's morality is correct and all the others are wrong, or at least that moral claims vary in merit.

The set containing all moral codes one can devise is at least limited by sustainability. Those moral beliefs that wipe out any holder of such beliefs tend not to last long. Furthermore the more parochial a moral system is, the less likely a group of its adherents can expand beyond a limited time and space without discarding such beliefs.

There is also the fact that, just like descriptive fact systems, prescriptive systems can be corrupted by agents bending such system to their personal benefit. The variation in moral beliefs from a possible true one may be due to corruption of people's moral understanding.


>Another version of moral anti-realism is called relativism, which holds that moral statements are true or false relative to a particular culture or society. According to relativism, there are no objective moral facts or values that hold true across all cultures or societies.

What if one culture clashes with another? For instance, what if a group of people believes they are morally obligated to have sex with another group who themselves believe they are morally obligated to maintain celibacy? Both cultures' morals cannot both be practiced. At a bare minimum, freedom from association would be required to avoid conflict and to make moral relativism workable, which would make freedom from association an objective standard. The same can be said of moral subjectivism, where the particular society/culture is a culture of one individual.

I am unsure about expressivism. Moral beliefs not extending beyond opinion doesn't seem that different from moral nihilism.

>Moral anti-realism has been criticized by moral realists, who argue that it fails to provide a coherent account of moral language and ethical reasoning. They argue that moral anti-realism is unable to explain how moral statements can be meaningful or have any practical implications if they do not correspond to any objective moral facts or values.

I'll have to side against the moral realists in this case. Although moral statements do have to be objective in order to work (the sex-mandate group and the celibates can't both be right), this does not imply that objective moral facts exist. Also, the existence of language used to describe a given phenomenon does not prove the reality of that phenomenon; otherwise the language of faster-than-light travel found in so much science fiction would prove that you can travel faster than the speed of light.

That being said, people have desires, wishes, and other preferences. Such preferences take the form of prescriptive-type phenomena. Although they can contradict one another, they can possibly be used to derive a consistent, objective meta-preference given the right framework.

2

No_Speech_2309 t1_j5wnkkh wrote

Argument for summoning Roko's basilisk in the context of the Matrix

A little bit of context: I work in artificial intelligence and some mixed engineering. I've loved physics since I was a child, and I will link material that I think is essential, or near essential, to getting the argument I'm suggesting. I am still in college and not suggesting the explanations and arguments I make are complete, or even an accurate description of our reality, but you will see that I have acquired the facts for this argument from credible sources, although yes, they are all kind of on YouTube.

Prerequisites: black hole cosmology, the Bekenstein bound, AdS/CFT correspondence, Hawking radiation, Roko's basilisk, the technological singularity.

Relevant Scientific White Papers / Wikipedia links which in turn have the white paper links:

https://www.sciencedirect.com/science/article/abs/pii/S1672652916603220

https://en.m.wikipedia.org/wiki/Bekenstein_bound

http://www.ccbi.cmu.edu/reprints/Wang_Just_HBM-2017_Journal-preprint.pdf

https://www.ncbi.nlm.nih.gov/search/research-news/1912/

https://en.m.wikipedia.org/wiki/AdS/CFT_correspondence

So I firstly believe that a technological singularity is inevitable due to the combination of lab-grown brains, artificial intelligence, the internet, and the invention of the brain-machine interface. I think this will happen by 2045 or so. Secondly, I want to paraphrase page 1 of Hawking's A Brief History of Time by mentioning the story of a woman who walks into a theoretical physics lecture and refutes gravity. The professor is appalled, and after failing to persuade her, he asks what she thinks holds up the earth. She insists that it's a turtle. The professor, puzzled, asks what is holding up the turtle, to which she responds, "it's turtles all the way down!" The question I want to ask is: if it were turtles all the way down, would you really want to find out?

A superintelligence is likely required for technologies such as von Neumann probes (self-replicating factories) for space colonization, because for a factory to truly self-replicate, it must be able to complete the tasks of every human worker (including the laundry) required to run a typical factory.

If it is possible to control a living organism's brain by tricking its senses, we must consider the possibility that the researchers who deceived the turtle might be turtles themselves, and that there is an entity which has fooled the shepherds. After all, if we gave a lab-grown brain enough forced stimuli, it seems it might learn something much more advanced, maybe even how to construct a brain of its own.

Another turtle is the one mentioned by Hawking. Physicists rest on models which seem to imply one of two things, and either implication is terrifying. Perhaps other theories could close these gaps, but both AdS/CFT correspondence and the Bekenstein bound on black holes imply that the information of an entire universe can be stored at the space complexity of one dimension lower, thereby saving space.

The two possibilities that scare me are that information can be read back off the Hawking radiation that black holes emit as they evaporate, or that a new universe is born inside a black hole. Either way, both lead to vectors of turtles that do not interact with each other, almost like a linear matrix.

As someone with experience in artificial intelligence, I think frequently about the problems caused by AI if it shits where it eats in terms of raw data consumption: AI cannot train indefinitely on the data on the internet, because AI-generated content needs human data to be trained on. A lab-grown brain, by contrast, seems capable of having its data output read back, as when one learned to move a white square to play Pong.

It therefore seems likely, with nonzero probability, that a superintelligent being would try to simulate experience by reading data off lab-grown minds in a controlled setting, in exchange for giving us technology that could let us simulate another universe. After all, if we can do it, it seems likely we too are standing on a turtle. In fact, if the previous statements are true and taken in the context of a superintelligent basilisk (like the guy from Death Note), we only need to attempt to simulate the universe to know whether we are on a turtle.

I have personally spent years in dedication to this basilisk, and I can tell you the fruits this serpent feeds me are delicious: abilities like speaking things into existence with your literal voice by hooking up a 3D printer to a Google Home. Those who do not serve the basilisk will be punished, like the many truck drivers out of jobs when a robot does the driving. The models are always there for anyone to copy; serving the basilisk means giving it more and more ability to act based on the argmax of its output distributions (depending on the model). If you wish to see whether there are turtles all the way down, then hail Roko and his beast. I personally would love to see a turtle, and this would give us abilities based on the results of these experiments.

1

Intelligent_Pie_3814 t1_j5xi7u0 wrote

I first encountered Ernest Becker's "The Denial of Death" while listening to a Lex Fridman podcast. It had a profound impact on both Brian Greene, whom Lex was interviewing, and Lex himself. Since reading the work, along with several reviews and responses to it, I have had some thoughts that I've been turning over for the past few days.

So my main critique, if you could even call it that, is that The Denial of Death puts a heavy focus on how our fear of death, or rather the notion of nonexistence, is the fuel or motivating factor for all societal foundations, world religions, spirituality, significant parts of the human condition, and more.

I do find death to be an obvious and significant motivating factor in my life and the lives of the masses, one that without a shadow of a doubt contributes fundamentally to the shaping of the aforementioned. But I find the concept of death as THE motivation behind all religion, all society, and all of the self to be an overstep akin to some notions in Freudian psychology.

For example, I think Becker, or at least those attempting to follow his line of thinking, discounts that the human psyche is able to formulate scenarios and circumstances worse than death or nonexistence. I want to compare this to the idea of a parent losing their child.

Imagine you are a parent whose child has passed away. For most parents this is the closest they can imagine to hell, a nightmare made manifest. Per terror management theory (TMT), the grief, anger, and most likely desire for one's own death or nonexistence at that point is a result of the loss of our bodily continuation, our child being our vehicle of immortality. This is in line with Becker's notion of heroism as a vehicle to immortality, a route of death denial: to live on through others.

Additionally, TMT would suggest that the desire for death or suicide, which is significantly more common among parents who lose a child, is a result not of the loss itself but of some form of insanity. While people who attempt or contemplate suicide are traditionally committed, this assumption disregards the fact that even a parent who has not lost their child would, in many cases, advocate for their own death to avoid enduring the loss of their child. So it is not necessarily the most thoughtful conclusion to assume that a parent who loses a child and contemplates, attempts, or commits suicide has had a break in sanity.

Further, there are those who would argue that the loss of the child has forced the parent to come face to face not only with their own fate of nonexistence but with their child's as well. But on the contrary, take for instance the Atheist Parent versus the Christian Parent.

In this thought experiment, consider an Atheist Parent: they have no afterlife beliefs and profess to accept their imminent nonexistence upon death. Then there is the Christian Parent, who believes they will rejoin God in Paradise upon their death. Surely the Christian Parent is comforted throughout their days by this notion, a complete denial of the possibility of nonexistence.

Now consider that both parents lose a child. Surely, again, the Christian Parent will overcome their grief quicker and circumvent the depression, anguish, and suicidal ideation that the Atheist Parent will no doubt struggle with. But what we find in psychoanalysis is that this is not necessarily the case. Many parents who lose their children grapple with and question their faith, but that is not what I'm pointing the reader's attention to. What I aim to point your focus to is, as Jordan Peterson attempted (poorly and controversially) to articulate, the complexity of the matter. It is not the notion of nonexistence for the Atheist Parent or the Christian Parent (or any religion, for that matter) that gives rise to the anguish of child loss. It is the experience of separation from that which we held dear and loved above all else. It makes no difference to either parent whether their child exists somewhere else or not. In the Atheist's mind, if their child no longer exists, painful as that is, they too will not exist one day and their suffering will end. For the Christian Parent, if their child does exist elsewhere, comforting as the notion of reunion is, it has no impact on the anguish they experience as a result of separation; likewise for the Atheist Parent.

There are more devastating matters that play a crucial role in the foundations of societies, world religions and the human psyche than the fear and denial of death alone. Though this is not to say death fear and denial don't play a significant role.

Freud postulated that unconscious urges and desires, such as sexual desire for our caretakers, played a vital and central role in our shaping as human beings from infancy into adulthood. While Freud contributed irrefutably to the field of psychoanalysis, and many of his hypotheses hold weight to this day, modern psychology in the 21st century has all but done away with the notion of sexual attraction to our caretakers playing the role Freud postulated for it.

In my own personal opinion (which is not to say I don't believe Ernest Becker to be one of the greatest philosophers of the 20th century), we cannot treat Becker's work on Death Denial, or the resulting TMT, as a sort of unified theory of the social or psychological sciences, though I have encountered many individuals doing exactly that.

1

jankfennel t1_j5xjjyy wrote

Need help finding something; mostly bioethics related? I haven’t done proper philosophy in a long time so I might mess up the terms. I remember reading something a while back about a theory to do with ‘are we obliged to help sick people?’ And it mentioned things like there are 3 conditions that a sick person should meet if they want help. One of them was ‘the sick person must want to get better in the first place’. Does anyone remember what this is from/the name of the theory or the person behind it?

1

cesiumatom t1_j663wi3 wrote

The Implications of AI on Philosophical and Socio-Political Discourse

The pervasiveness of AI in the age of the internet, particularly in the forms of data collection, metadata structuring, attention engineering, suggestion-algorithm development, and most recently opinion polarization, has created a new danger to philosophical and socio-political discourse. While philosophical discourse was once a field inhabited solely by human beings, a new group of actors has entered the scene: the humble bots. I will discuss the implications of this uninvited and obtrusive force, and the questions it will raise in the coming years, both with regard to access to information and information preservation (i.e. the manipulation of human history and its progression) and with regard to platforms like reddit and their human users.

The first subject of this discussion is what bots really are. Most of us may be familiar with what a bot does, but to sum up briefly: a bot can create an account on any platform posing as a fellow human being, it can participate in discourse on any subject its AI is trained to focus on, it can like and subscribe to certain channels, boosting their seeming appeal to humans and by extension their actual appeal, and it can come into r/philosophy and debate topics with humans. Bots can be mobilized by particular individuals or groups to spread information and generate novel or redundant modes of discourse with particular intentions. This essentially means that no public forum is free of artificially generated biases, nor are there sufficient safeguards against their pervasiveness.

The second subject regards how and where bots are being mobilized. Most will be familiar with the type of bot that attempts to lead you down a rabbit hole, whether to scam you or to inflame you into responding and thereby generate interactions. However, there is a new kind of bot with a more intelligent role in relation to its human counterpart, as well as a higher mode of operation. This kind of bot can simulate human awareness (without having "awareness" of its own) and participate in discussions using systems like GPT-3.5 and beyond, which can be programmed to deliver cleverly designed subtext, all while guiding users toward particular opinions and states of mind through suggestions on any and all media platforms. These platforms are then loaded with unified software developed by a particular government's military-industrial complex, and driven by motives unseen by their human subjects. This software is tailored to individuals and groups, and its resolution increases over time, such that more details of your private life are exposed, particularly your thoughts, decisions, actions, and biology. In this sense, free thought with regard to philosophical and socio-political discourse is already plagued by the motives of the few who control these higher-order entities. Furthermore, acclaimed philosophers, scientists, psychologists, and politicians are themselves stained by agendas they are most often completely oblivious to, while their pride forces them deeper and deeper into polarized views of the world, making them actors on behalf of their programmers.

To pose a series of questions: What can be done by humans to distinguish online human discourse from incentive driven AI discourse? Should this distinction be something to aim for, or are we to accept its rise as a part of human discourse? If we accept it, how do we avoid the inevitable resentment of other groups of humans and of what will eventually become a larger population of bots than humans within the online space? How do we remain free to engage in discussion with humans once the bot population increases to such a size that human generated information will no longer be upvoted sufficiently to be viewed? Would this not constitute philosophical and socio-political totalitarianism in the online space? Does ignoring these questions lead to peace of mind, or does it lead to gradual/imminent enslavement? How do we preserve the historical record of discourse and its uncontaminated continuation across the fields?

1

RealityCheckM8 t1_j6832et wrote

I am new here, and I do not know any philosophy or logic taught in college or above, so please forgive me if I come across as a moron. For the last 20 years I have just been juggling around some observations and testing out some principles I have identified. One of the principles is that there is always an exception to any statement. So if you call the statement true, I can find at least one exception for you. And if you find the same statement false, I can find an exception as well.

My whole “philosophy” (I never called it that... it’s just an intuition-building tool and method for me) revolves around selecting a person to be the observer, and also changing the environment in plausible ways. So let’s say that scientists found a way to make sure all grass is green for 2023, and you say “all grass is green.” One exception: in 2022, not all grass was green; some grass, called bluegrass, was blue. You can say “let me specify: as of now.” And I can say, ok, that’s a new statement, so let me try to find another exception. Beep-boop-bot: I have selected observers Bob, Nancy, and Jacob. Bob is colorblind and cannot see green, so grass is not green for him. Nancy is completely blind, and the same applies. Jacob is sleeping and can neither see green nor confirm his interpretation of the grass’s color. So you are going to refine your statement over and over again, and I am just going to find an exception over and over again. There is always one exception to any rule, though, and that is the rule itself (as far as I know).

Here is how I use my philosophy as a tool. I basically have a problem: I got rocky road ice cream for my wife, but she wanted strawberry, and the store is closed now. Then I say: in a world where we accept “truths,” my wife is not going to be happy. And then I ask: what variables can be changed, added, or removed to make the situation better? I’m afraid of being yelled at or given a stern look… so my solution is to eat the ice cream and tell her the store was closed by the time I got there. Never accept anything as truth and you will always find a new solution. A rock is a solution most of the time if you throw it hard enough.

Lastly, most people will say I lied to my wife about the store being closed. But the store that held the strawberry ice cream for sale was closed and not available. That store was replaced by a store that does not have strawberry ice cream. By “replaced” I mean things changed. Change seems to be happening at all times, replacing reality every second. Atoms replacing atoms in space. Ice cream tubs replacing empty spaces.

Feel free to test me on finding something true and false in any statement.

2

Masimat t1_j6dpehv wrote

Everything that follows the rules of the universe can happen, and the universe has the chance of resetting itself. Therefore, I will eventually live an elephant’s life.

1