DaggerShowRabs

DaggerShowRabs t1_jefnl06 wrote

Ah, I get what you mean. I still don't think that necessarily solves the problem. A hypothetical artificial superintelligence could take actions that seem harmless to us, but because it is better at planning and prediction than we are, it could know that the action or series of actions will lead to humanity's demise. And since the actions appear harmless to us, when it asks, we say, "Yes, you are acting in the correct way".

3

DaggerShowRabs t1_j2n60m3 wrote

Well it's definitely at least base reality for us.

And yeah, we just disagree there. I only think this hypothetical AI is denied any meaningful aspect of existence if there are fundamentally different sets of rules for the AI's universe compared to ours. As long as the rules are the same, I fail to see a compelling argument as to what exactly would be lacking from the AI's experience.

Edit: also, if this isn't "true base reality", since we're going there, it's interesting to think about the ethics of our simulators. I know I'm at least conscious, so if this isn't truly base reality, they seem to be okay with putting conscious entities in simulations, at least in certain situations.

2

DaggerShowRabs t1_j2n2k0t wrote

I agree with your line of reasoning; they are not the same universes.

Now, the position the poster I was responding to takes (as far as I can tell), is that whichever universe is not the "base universe", is denied some aspect of "human existence".

I do not agree with that. As long as the rules are fundamentally the same, I don't think that would deny any aspect of existence. The moment the rules change, that is no longer the case, but that also means they are no longer "indistinguishable". Not because of accumulating randomized causality, but because of systematic rule changes relative to the base.

Edit: in the Matrix temporal example, it doesn't matter to me that there is a temporal lag relative to base, so long as the fundamental rules are exactly the same. The problem for me would come in if the rules were changed relative to base, in order to lead to specific outcomes. And then, for me, I would consider that the point where the simulation no longer is "indistinguishable" from reality.

1

DaggerShowRabs t1_j2mwkbv wrote

>I had understood it as „being undistinguishable from reality from the point of view of the entity that lives within“, exactly.

Well, you can take that interpretation all you want, but that's all it is: an interpretation.

That's not what the poster actually said.

And even then, I disagree with the comparison you are making. While living in the Matrix, are people denied any essential aspect of living a human life from within the simulation?

Edit: other than the obvious that the Matrix simulation is running in the past relative to "base reality".

1

DaggerShowRabs t1_j2mvdfn wrote

I wouldn't know it, but it still wouldn't be truly indistinguishable from reality by definition.

If it were changed to, "indistinguishable from reality to an entity that didn't know any better", sure.

But that's not what was said. Indistinguishable from reality means indistinguishable from reality.

And actually, if I woke up one day and that change had been made, I would bet that, after a certain period of time, I would eventually notice that I hadn't felt the sense of jealousy in a while.

2

DaggerShowRabs t1_j2mlfvi wrote

>Even if we could create a simulated world that is indistinguishable from reality, it would still be a manufactured environment and the AI would not have the opportunity to experience the full range of human experiences.

I could tell immediately at this point that this was written by an AI, because this 100% does not logically connect. It sounds good and convincing, but it is essentially logical word salad.

If the AI would not have the opportunity to "experience the full range of human experiences", then it is not indistinguishable from reality, basically by definition.

1

DaggerShowRabs t1_j1hn4iy wrote

Reply to comment by fortunum in Hype bubble by fortunum

An actual AI winter at this point is about as likely as society instantaneously collapsing.

An AI winter is not an actual, valid concern for anyone in the industry for the foreseeable future.

I get wanting to have a critical discussion about this, but when someone talks about exponential growth, you need to do better than parroting a talking point spewed out by mainstream journalists who have no idea what they are talking about.

I'm all for critical discussion, but talking about another actual AI winter like the 70s or early 2000s is kind of a joke. I'm really surprised anyone with even a little bit of knowledge of what is going on in the industry would say something this out-of-touch.

And none of that is to say AGI is imminent, just that an AI winter is literally the most out-of-touch counterpoint you could possibly use.

2

DaggerShowRabs t1_j13wag0 wrote

Well, value would still be created; it's just that the means of measuring it might be different.

Instead of being based on arbitrary monetary values, maybe it's measured in joules of energy or something?

1

DaggerShowRabs t1_j0mei28 wrote

Maybe it's derivative, but it's derivative of a large, large number of works and artists.

I would challenge you to post an example from a series of random prompts, and point out which artists the work is "derived" from.

You couldn't point out one, or even a handful, because of the sheer number of different works and artists fed in. Even if it's "derivative", it's almost imperceptibly derivative due to the sheer volume of data.

To say it's "copying" another artist is just completely, utterly incorrect.

4

DaggerShowRabs t1_izpr8bj wrote

It's really interesting to think about the incentives for creating ASI in the capitalist system. There are huge market and resource incentives to create ASI, right up until the point that there aren't, because the technology can probably create the means to eliminate most resource scarcity.

It may be the very thing that unravels their centralization of resources, and thus power.

1

DaggerShowRabs t1_izpmpme wrote

This is true, but Marx recognized that a powerful central government would be a necessary transition state to the stateless society. Unfortunately, due to the inherent corruptibility of humans who reach positions of power, that "transition state" tends to be more or less indefinite.

That will hopefully change with advanced ASI.

14

DaggerShowRabs t1_ix810ie wrote

Agreed. The thing that terrifies me too is that there are so many ways it could go wrong.

It's probably easier to build an AGI than it is to build an AGI that is confirmed to be goal-aligned with humanity. If it isn't goal-aligned, you're basically rolling a pair of D20s and hoping you land on double 20s.
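
For scale, a quick check of the odds that analogy implies:

```python
# Probability of rolling double 20s on two fair d20s (the analogy above).
p = (1 / 20) * (1 / 20)

print(p)           # 0.0025
print(int(1 / p))  # i.e. 1 in 400
```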

5

DaggerShowRabs t1_iwzc86b wrote

I haven't used ad hominem. I addressed every point you made. So I correctly pointed out that you were using ad hominem to dodge my points, like the intellectual lightweight that you are.

The fact that you think I've been "outclassed" when you haven't even addressed a single point I've made is actually really, truly sad.

Now, fuck off, pissant.

1

DaggerShowRabs t1_iwz2g3k wrote

Yeah, Bell's theorem, or Bell's inequality, states that local hidden variable theories put a maximum bound on the correlations between entangled particles, and that bound doesn't match experimental results.
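
Here's a minimal sketch of that bound in the CHSH form of Bell's inequality, assuming the textbook quantum prediction E(a, b) = -cos(a - b) for a spin-singlet pair (the angles below are the standard choices that maximize the violation):

```python
import math

# Quantum-mechanical correlation for a spin-singlet pair measured
# at detector angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angle choices that maximize the quantum violation.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: any local hidden-variable theory requires |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(abs(S))  # ~2.828, i.e. 2*sqrt(2) (the Tsirelson bound), violating |S| <= 2
```

Experiments consistently measure values near 2*sqrt(2), which is the mismatch with local hidden variable theories.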

Superdeterminism is a loophole here because it calls into question the assumption that researchers can freely and independently choose their experimental settings. By dropping that assumption, some superdeterministic models can violate Bell's inequalities. The problem is that superdeterminism isn't really testable.

Edit: well, let me rephrase that. Superdeterminism isn't testable right now. The only way to test it would be to rewind the state of the universe via simulation all the way back to the beginning of time and see if exactly the same things happen. We still may never be able to do this accurately enough to test, but I don't want to leave out the possibility.

1