DaggerShowRabs t1_jefm4ex wrote
Reply to comment by Heinrick_Veston in Sam Altman's tweet about the pause letter and alignment by yottawa
If the system needs approval before it takes any actions at all, the system is going to be extremely slow and limited.
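To make that concrete, here's a rough sketch (purely hypothetical, all names are mine for illustration) of what an approve-every-action loop might look like; every single step blocks on a human decision, which is exactly why the system would be slow and limited:

```python
# Hypothetical sketch of an approval-gated agent loop.
# Names (propose_action, human_approves, run_agent) are illustrative only.

def propose_action(step: int) -> str:
    """Stand-in for whatever the agent wants to do next."""
    return f"action #{step}"

def human_approves(action: str) -> bool:
    """Blocks until a human answers -- this is the bottleneck."""
    answer = input(f"Approve '{action}'? [y/n] ")
    return answer.strip().lower() == "y"

def run_agent(steps: int = 5) -> None:
    for step in range(steps):
        action = propose_action(step)
        if not human_approves(action):  # the agent stalls here on every step
            print("Action rejected; halting.")
            return
        print(f"Executing {action}")

if __name__ == "__main__":
    run_agent()
```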
DaggerShowRabs t1_jefjki1 wrote
Reply to comment by genericrich in This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
Yep. And when it doesn't work, we won't be around to notice it doesn't work.
It's anthropic principles all the way down.
DaggerShowRabs t1_j2n60m3 wrote
Reply to comment by dracsakosrosa in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
Well, it's definitely at least base reality for us.
And yeah, we just disagree there. I only think this hypothetical AI is denied any meaningful aspect of existence if there are fundamentally different sets of rules for the AI's universe compared to ours. As long as the rules are the same, I fail to see a compelling argument as to what exactly would be lacking from the AI's experience.
Edit: also, if this isn't "true base reality", since we're going there, it's interesting to think of the ethics of our simulators. I know I'm at least conscious, so if this isn't truly base reality, they seem to be okay putting conscious entities in simulations for at least certain situations.
DaggerShowRabs t1_j2n2k0t wrote
Reply to comment by AndromedaAnimated in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
I agree with your line of reasoning; they are not the same universes.
Now, the position the poster I was responding to takes (as far as I can tell), is that whichever universe is not the "base universe", is denied some aspect of "human existence".
I do not agree with that. As long as the rules are fundamentally the same, I don't think that would be denying some aspect of existence. The moment the rules change, that is no longer the case, but it also means they are no longer "indistinguishable". Not because of accumulated random divergence, but because of deliberate, systematic changes to the rules relative to the base.
Edit: in the Matrix temporal example, it doesn't matter to me that there is a temporal lag relative to base, so long as the fundamental rules are exactly the same. The problem for me would come in if the rules were changed relative to base, in order to lead to specific outcomes. And then, for me, I would consider that the point where the simulation no longer is "indistinguishable" from reality.
DaggerShowRabs t1_j2mwkbv wrote
Reply to comment by AndromedaAnimated in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
>I had understood it as „being undistinguishable from reality from the point of view of the entity that lives within“, exactly.
Well, you can take that interpretation all you want, but that's all it is: an interpretation.
That's not what the poster actually said.
And even then, I disagree with the comparison you are making. While living in the Matrix, are people denied any essential aspect of living a human life from within the simulation?
Edit: other than the obvious that the Matrix simulation is running in the past relative to "base reality".
DaggerShowRabs t1_j2mvdfn wrote
Reply to comment by AndromedaAnimated in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
I wouldn't know it, but it still wouldn't be truly indistinguishable from reality by definition.
If it were changed to, "indistinguishable from reality to an entity that didn't know any better", sure.
But that's not what was said. Indistinguishable from reality means indistinguishable from reality.
And actually, if I woke up one day and that change had been made, I would bet that I would eventually notice I hadn't felt a sense of jealousy in a while (after a certain period of time).
DaggerShowRabs t1_j2mn94o wrote
Reply to comment by dracsakosrosa in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
Sorry, but what you are saying is wrong by definition if the simulation is truly "indistinguishable from reality".
Both cannot be true. I guess it wasn't an AI; you're just bad at logic and definitions.
DaggerShowRabs t1_j2mlfvi wrote
Reply to comment by dracsakosrosa in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
>Even if we could create a simulated world that is indistinguishable from reality, it would still be a manufactured environment and the AI would not have the opportunity to experience the full range of human experiences.
I could tell immediately at this point that this was written by an AI because this 100% does not logically connect. It sounds good and convincing, but is essentially logical word salad.
If the AI would not have the opportunity to "experience the full range of human experiences", then it is not indistinguishable from reality, basically by definition.
DaggerShowRabs t1_j1iew72 wrote
Reply to comment by fortunum in Hype bubble by fortunum
Do the people above you also talk about an AI winter like you have been?
If so, your mentors are dumbasses.
DaggerShowRabs t1_j1hn4iy wrote
Reply to comment by fortunum in Hype bubble by fortunum
An actual AI winter at this point is about as likely as society instantaneously collapsing.
An AI winter is not an actual, valid concern for anyone in the industry for the foreseeable future.
I get wanting to have a critical discussion about this, but when someone talks about exponential growth, you need to do better than parroting a talking point spewed out by mainstream journalists who have no idea what they're talking about.
I'm all for critical discussion, but talking about another actual AI winter like the 70s or early 2000s is kind of a joke. I'm really surprised anyone with even a little bit of knowledge of what is going on in the industry would say something this out-of-touch.
And none of that is to say AGI is imminent, just that an AI winter is literally the most out-of-touch counterpoint you could possibly use.
DaggerShowRabs t1_j13wag0 wrote
Reply to comment by Sashinii in "Collecting views on this: If you believe we are on the cusp of transformative AI, what do you think GDP per capita will be in 2040 (in 2012 dollars)? Bonus: Draw your expected GDP per capita trajectory on this graph and send it back to me." by maxtility
Well, value would still be created; it's just that the means of measuring it might be different.
Instead of being based on arbitrary monetary values, maybe it's measured in joules of energy or something?
DaggerShowRabs t1_j0myvts wrote
Reply to comment by OldWorldRevival in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
What a garbage response.
Get the fuck out of here.
DaggerShowRabs t1_j0mfot7 wrote
Reply to comment by OldWorldRevival in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
>I think you might not be up to date on the topic.
You would be wrong on that. I'm talking about a series of random prompts, not prompts designed specifically to invoke a specific artist or style.
DaggerShowRabs t1_j0mei28 wrote
Reply to comment by OldWorldRevival in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
Maybe it's derivative, but it's derivative of a vast number of works and artists.
I would challenge you to post an example from a series of random prompts, and point out which artists the work is "derived" from.
You couldn't point out one, or even a handful, because of the sheer number of different works and artists fed in. Even if it's "derivative", it's nearly imperceptibly so due to the sheer volume of data.
To say it's "copying" another artist is just completely, utterly incorrect.
DaggerShowRabs t1_j0iluya wrote
Reply to How to Deal With a Rogue AI by SFTExP
Rogue AI deals with you
DaggerShowRabs t1_izx6e20 wrote
Reply to comment by ziplock9000 in Just today someone posted a Twitter thread about Nuclear Fusion... by natepriv22
Well that's just not true.
DaggerShowRabs t1_izpr8bj wrote
Reply to comment by AdorableBackground83 in This subreddit has a pretty serious anti-capitalist bias by Sieventer
It's really interesting to think about the incentives for creating ASI in the capitalist system. There are huge market and resource incentives to create ASI, all the way up until the point that there aren't, because the technology can probably create the means to eliminate most resource scarcity.
It may be the very thing that unravels their centralization of resources, and thus, power.
DaggerShowRabs t1_izpmpme wrote
Reply to comment by petermobeter in This subreddit has a pretty serious anti-capitalist bias by Sieventer
This is true, but Marx recognized that a powerful central government would be a necessary transitional stage on the way to a stateless society. Unfortunately, due to the inherent corruptibility of humans who reach positions of power, that "transitional stage" tends to be more or less indefinite.
That will hopefully change with advanced ASI.
DaggerShowRabs t1_ix810ie wrote
Reply to comment by TupewDeZew in How do you think about the future of AI? by diener1
Agreed. The thing that terrifies me too is that there are so many ways it could go wrong.
It's probably easier to build an AGI than it is to build an AGI that is confirmed to be goal-aligned with humanity. If it isn't goal-aligned, you're basically rolling a pair of D20s and hoping you land on double 20s.
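Quick back-of-the-envelope on that analogy (my arithmetic, just to put a number on it):

```python
# Probability of the "double natural 20" analogy: two independent
# d20 rolls both landing on 20.
p = (1 / 20) ** 2
print(f"{p:.4%}")  # 0.2500%, i.e. a 1-in-400 shot
```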
DaggerShowRabs t1_iwzc86b wrote
Reply to comment by beachmike in When does an individual's death occur if the biological brain is gradually replaced by synthetic neurons? by NefariousNaz
I haven't used ad hominem. I addressed every point you made. So I correctly pointed out that you were using ad hominem to dodge my points, like the intellectual lightweight that you are.
The fact that you think I've been "outclassed" when you haven't even addressed a single point I've made is actually really, truly sad.
Now, fuck off, pissant.
DaggerShowRabs t1_iwza0li wrote
Reply to comment by beachmike in When does an individual's death occur if the biological brain is gradually replaced by synthetic neurons? by NefariousNaz
No, I quite do, but thanks for your uninformed, low-quality opinion.
Your use of ad hominem shows you to be an intellectual lightweight.
Now, kindly fuck off, pissant.
DaggerShowRabs t1_iwz2g3k wrote
Reply to comment by Kaarssteun in Are you a determinist? Why/why not? How does that impact your view of the singularity? by Kaarssteun
Yeah, Bell's theorem (via Bell's inequality) states that local hidden-variable theories put a maximum bound on the correlations between particles, and experiments don't match that bound.
Superdeterminism is a loophole because it calls into question the ability of researchers to freely and independently choose their experimental settings. By dropping that assumption, superdeterministic models can reproduce the observed violations of Bell's inequalities. The problem is that superdeterminism isn't really testable.
Edit: well, let me rephrase that. Superdeterminism isn't testable right now. The only way to test it would be to rewind the state of the universe via simulation all the way back to the beginning of time and see if exactly the same things happen. We may never be able to do this accurately enough, but I don't want to rule out the possibility.
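If anyone wants to see the actual numbers, here's a minimal sketch of the CHSH form of Bell's inequality, using the textbook quantum prediction E(a, b) = -cos(a - b) for a spin-singlet pair (the angles below are just the standard choices for the maximal violation):

```python
import math

# Quantum correlation for a spin-singlet pair measured at detector
# angles a and b (radians): E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angle choices that maximize the quantum violation.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: any local hidden-variable theory obeys |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(f"S = {S:.4f}")             # ~ -2.8284
print(f"|S| = {abs(S):.4f} > 2")  # 2*sqrt(2), the Tsirelson bound
```

Any local hidden-variable theory caps |S| at 2; quantum mechanics predicts, and experiments confirm, values up to 2√2 ≈ 2.83.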
DaggerShowRabs t1_iwz06ma wrote
Reply to comment by Kaarssteun in Are you a determinist? Why/why not? How does that impact your view of the singularity? by Kaarssteun
Someone would have to get around Bell's theorem for there to be local hidden variables. Very few physicists believe local hidden variables are possible.
DaggerShowRabs t1_iwyw2df wrote
Reply to Are you a determinist? Why/why not? How does that impact your view of the singularity? by Kaarssteun
You're not arguing for superdeterminism, right?
Superdeterminism doesn't really work in light of the inherent randomness of quantum mechanics.
DaggerShowRabs t1_jefnl06 wrote
Reply to comment by Heinrick_Veston in Sam Altman's tweet about the pause letter and alignment by yottawa
Ah, I get what you mean. I still don't think that necessarily solves the problem. It could be possible for a hypothetical artificial superintelligence to take actions that seem harmless to us, but because it is better at planning and prediction than we are, the system knows the action or series of actions will lead to humanity's demise. And since those actions appear harmless to us, when it asks, we say, "Yes, you are acting in the correct way".