kilkil t1_jdh5f67 wrote

I'd like you to describe this "subjective immortality" experience a bit more, or at least how you imagine it. Is it just like, I die, and then my point of view shifts to another "me", elsewhere in the universe?

If I've understood your thought experiment correctly, then I'm convinced you haven't preserved continuity of consciousness — in fact, it has been explicitly interrupted. As a counter-example, imagine creating a perfect clone of yourself. Your subjective experience won't suddenly be you looking through two sets of eyes; you'll have your consciousness, and the clone will have its own. If you choose to kill yourself, you won't suddenly "take over" the clone's consciousness; it'll keep having its own, while your brain will have permanently (?) stopped being conscious. CGP Grey has a nice video on the "Star Trek teleporter problem" where he goes over pretty much this exact topic, particularly as it relates to the Ship of Theseus problem.

In my understanding, the key missing factor is hiding in your second paragraph — the continuous sense of identity comes from that arrangement of atoms persisting through time.

2

kilkil t1_jdgnqb9 wrote

(I've edited this comment over and over like a dozen times now, so sorry for any confusion.)

Could you please elaborate on your point? As stated, I don't see the contradiction between human thought being deterministic, and human thought being capable of deciding which claims to believe.

In your example, I would say that, even though both positions are determined by "invincible cause-effect chains", there's no rule that says both chains have to produce correct beliefs. In fact, since the claims are contradictory, at most one of them can be correct, which means the "cause-effect chain" behind the other one must have included some step involving faulty information or faulty logic. Or the same could apply to both, if both claims happened to be incorrect in some way.

To give an example, let's say person A lies to person B. If we accept determinism, that means "invincible chains of cause-effect" led to A and B believing different things, but A still has the correct information and B doesn't. The fact that both have these really long "cause-effect chains" doesn't prevent us from pointing out that A happens to believe correct information, and that B doesn't.

1

kilkil t1_jd7b1mj wrote

> They expose their opinion almost as if they really weighed the alternatives, selected and then chose (!) the best thesis.

Well, of course they chose. The determinist might simply reply that the fact that they ended up choosing that option is the result of ancient chains of cause-and-effect, stretching back far into the distant past, theoretically traceable to the Big Bang.

Those chains of causality, the determinist might continue, led them to have the childhood they did, to develop the thoughts they did, and ultimately to their own interest in philosophy and to their own careful reasoning and conclusion on the subject of free will: that it is nothing but an occasionally useful fiction.

1

kilkil t1_jd78ej9 wrote

I've pondered this question as well. What I've concluded is that, instead of assigning "blame", "fault", or "responsibility", it's better to simply take a more consequentialist view, and ask: what are the likely outcomes of this person's actions? Should I convince them to do otherwise? Would it lead to an overall better outcome if something were done to stop them from doing it (again)? What should that something be?

By focusing on these questions, we can sidestep the question of who to hold accountable and instead look at what would be the best thing to do overall.

However, what's interesting is that answering that first question, "what are the outcomes", can be very complicated given the chaotic nature of human behaviour ("chaotic" here means "deterministic, but unpredictable in practice"). We have to use rule-of-thumb approximations for this sort of thing, instead of precise calculations. And it turns out that concepts like "accountability", "blame", "fault", and "personal responsibility" are very useful rules of thumb; in effect, when you blame someone for something, you are asserting that their behaviour requires some internal changes, or they'll just do it again. Even if the underlying causes are far outside that person's control, the logic works out the same.
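
As a side note on that sense of "chaotic": here is a minimal sketch of how a fully deterministic rule can still be unpredictable in practice. It uses the textbook logistic map as the illustration (my own example, not anything from the discussion above): two starting points that differ by one part in a billion end up on completely different trajectories within a few dozen steps.

```python
# Illustrative sketch only: the logistic map x -> r*x*(1-x) with r = 4 is a
# standard example of a system that is fully deterministic yet unpredictable
# in practice, because tiny errors in the starting point grow exponentially.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a = 0.2            # one starting point
b = 0.2 + 1e-9     # a second point, off by one part in a billion

for step in range(50):
    a, b = logistic(a), logistic(b)

# After roughly 30 iterations the initial 1e-9 gap has grown to order 1,
# so the two "identical" systems are now doing completely different things.
print(abs(a - b))
```

The behaviour is still fully fixed by the rule and the starting point; it's just that no realistic precision in measuring the starting point lets you forecast it far ahead, which is the sense in which "deterministic, but unpredictable in practice" applies to something as messy as human behaviour.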

To put it in maybe a more whimsical/poetic way: if we are but the fingers of the hands of Fate, then we cannot be judged for our sins, for they belong to Fate just as we do. But, since Fate doesn't have a mailing address, we'll have to settle for cutting off its fingers as necessary.

2

kilkil t1_j8eys73 wrote

That's a complete oversimplification. There was a whole "AI winter" in the late 20th century, during which there was very little progress and/or funding.

Also, for all we know, neural networks can just plateau. We'll take it as far as we can, but who knows what that is. Saying "if you squint and tilt your head it tastes like exponential" inspires only like, 60% confidence in me.

1