
Nameless1995 t1_iwvghxr wrote

  • Even if we grant that conscious valence provides an ought for the conscious being who has that valence, it's not clear how you universalize valence maximization so that it detaches from particular individuals. An ought arising from my own conscious valence doesn't immediately imply that I am obligated to, say, sacrifice my conscious valence for the "overall maximization of some utility function accommodating the valence of all beings," unless I'm already disposed towards selflessness.
  • Open Individualism may help with the above concern, but it's more controversial and niche than utilitarianism itself. It kind of undermines the whole project when, to support X, you have to rely on something even more controversial than X. Either way, I also don't see why I should care about some "metaphysical unity/lack of separation" -- which can come down to how we use language. I don't see why the boundedness of consciousnesses (the restriction against accessing another's consciousness unmediated) isn't enough to ground individuation and separation, irrespective of whether all the separate perspectives are united in a higher-dimensional spatial manifold, God, the Neoplatonic One, or what have you. It's unclear to me that such abstract metaphysical unities really matter. We don't individuate things based on their being completely causally isolated and separated from the world.
  • I don't see why a proper normative theory shouldn't be applicable and scalable to hypothetical and idealized scenarios. A theory's lack of robustness to hypotheticals should count as a "bug," and a proper reason should be given as to why it doesn't apply there. Real-life scenarios are complex; before just blindly applying a theory, we need some assurance. Idealized scenarios and hypotheticals allow us to "stress test" our theories to gain that assurance. Ignoring them because "they are not realistic" isn't very convincing.
  • I don't see how logarithmic scaling helps with the repugnant conclusion. The repugnant conclusion comes from the idea that x beings, each with a high-quality utility y, can always be overtaken by a seemingly less ideal scenario in which m beings (m >>> x) each exist with a much lower-quality utility z, so long as the total m·z exceeds x·y (see the quick numerical sketch after this list). I don't see what changes if each individual's happiness grows logarithmically (you can always adjust the numbers to recreate the problem), and I don't see what changes if there is an underlying unitary consciousness behind it all. Is the "same" consciousness having many low-quality experiences really better than it having fewer but higher-quality experiences?
  • I also don't see what it means to call it the "same" consciousness if it doesn't have a single, unified (solipsistic) experience.
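
A minimal numerical sketch of the "adjust the numbers" point above (the populations and per-being utilities are made up purely for illustration):

```python
# Repugnant-conclusion arithmetic: a small, high-quality population A
# vs. a huge, barely-happy population Z. A total utilitarian only compares
# the sums, so Z can always be made to win by scaling m up far enough.
x, y = 1_000, 100.0          # population A: x beings, each at high utility y
m, z = 10_000_000, 0.5       # population Z: m beings, each at low utility z

total_A = x * y              # 100,000
total_Z = m * z              # 5,000,000  -> Z "beats" A

# Logarithmic scaling of individual welfare in resources doesn't change this
# comparison; it only changes how much resource it takes to reach y or z.
print(total_A, total_Z, total_Z > total_A)
```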

Squark09 OP t1_iwztj7o wrote

Excellent comment!

I'm already pretty committed to open/empty individualism, so this post was really meant to be me thinking through what utilitarianism means in this context. I get that it's controversial, but my own experiences in meditation and thinking about the scientific ambiguity of "self" have convinced me that closed individualism doesn't make sense.

> I don't see how logarithmic scaling helps with repugnant conclusion

You're right that it doesn't make any difference in the pure form of the thought experiment; however, I think it does make a difference when you have limited resources to build your world. It's much easier to push an existing being's conscious valence up than to generate another 1000 miserable beings. The main thing that makes a difference here is open/empty individualism.
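As a rough sketch of what I mean by "limited resources" (the fixed per-being overhead cost and the specific numbers are assumptions I'm adding for illustration, not part of the argument above):

```python
# Toy model: a fixed resource budget R, a fixed overhead cost c just to keep
# each being in existence, and valence that grows logarithmically in whatever
# resources are left over per being. Under these (assumed) numbers the best
# population size is finite, rather than "as many barely-happy beings as possible".
import math

R, c = 100.0, 5.0  # total resources and per-being overhead (illustrative)

def total_valence(n: int) -> float:
    spare_per_being = (R - n * c) / n   # resources left per being after overhead
    if spare_per_being <= 0:
        return float("-inf")            # can't even cover the overhead
    return n * math.log(1 + spare_per_being)

best = max(range(1, 20), key=total_valence)
print(best, total_valence(best))        # peaks around n = 10, not n -> as large as possible
```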

> I don't see why boundedness of consciousnesses (restriction from accessing other's consciousness unmediated) isn't enough to ground for individuation

If you go down this line of reasoning, your future self is separate from your past self: they don't share a bound experience either; it's just that your future self has a bunch of memories that it assembles into a story relating it to your past self. Most common-sense ethics still tries to reduce suffering for a future self, so why is this any different from claiming that you should help others?

> I also don't see the meaning of calling it the "same" consciousness if it doesn't have a single unified experience (solipsistic).

I mean that all consciousness is the same in the sense that two different electrons are the same: all consciousness has the same ontological properties. So if you buy that your own suffering is "real", then so is someone else's.
