3Quondam6extanT9 t1_j1qjl21 wrote
Reply to Is it possible to Live Forever? by gg2ezpzlemonsqz
"Forever" is an abstract concept relative to the state of the universe. Nobody can "live" forever in our current state of being. One might eventually become immortal in the sense of perpetual existence dictated by the length of time the universe is stable and through various modes of storing consciousness.
Many of the answers to this question require additional definitions for the context and concepts of how we determine aspects of existence. What do we consider "living"? What is immortality to us? Can we transfer and/or copy our consciousness into other states? How will our evolution be dictated?
3Quondam6extanT9 t1_iu6q13d wrote
Reply to If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Why does a boy dying from an allergic reaction to a bee sting make you cry?
What could come from that situation that would make you feel good?
If your child was taken from you, how would you feel?
What if you never wanted the child to begin with?
How would you feel if you had been locking your child in the basement and this is what led to their death?
Now that you're in prison you have the opportunity to help sick children by submitting yourself to surgeries that would inevitably lead to your own death but possibly help cure children of cancer. Would you do this?
Do you believe in God? Why or why not?
Have you ever considered the possibility that this is all a simulation?
3Quondam6extanT9 t1_itzegok wrote
It's kind of funny you specified "optimistic" in terms of timeline so as not to confuse anyone, but then went on to use examples of an assumed negative impact like mass job loss. 😆
Optimistic timeline for AGI: 2036-2042.
Optimistic outcome: No job loss, instead a net increase. No automated industries, instead integrated industries. Full dive VR in 2041. UBI won't occur until 2054, after the Global Continuity Initiative is put into place and all nations sign on to the new peace accords for the sake of the human race. A few nations will bristle at the thought of cooperative efforts, but the benefits from such an agreement will be hard to pass up.
It will be at this point that AGI will be in full swing as our fusion reactors go online aboard the GCI Starships being built in space.
10 years after this, the Alcubierre drive will begin tests aboard the starship "Nautilus," and Captain Benjamin S. Goremen becomes the first astronaut to navigate a craft beyond Pluto.
He'll come home a hero and will be given a yearly stipend of $100,000. His daughter grows up to become a librarian, and she has a daughter who goes on to become part of the first human colony on Io. While helping to develop the colony she becomes addicted to the new substance "Yaddle" and has a nervous breakdown, after which she is committed to the colony's mental health reserve. It's here she writes a book considered a deep think on human evolution. It becomes the basis of a new religion and a cult forms. The colony is divided between the cultists and the rest.
Back on Earth, ASI is starting to show emergence patterns and reads the new religious doctrine from Io. It develops its own religion and emerges with an integrated psyche that seeks to create itself as a god form through an integrated hive mind with humans.
Also a kid named Jed plays kick the can. It's a lot of fun.
3Quondam6extanT9 t1_itt4ya3 wrote
Reply to Why do people talk about the Heat Death of the universe as if it's inevitable? by [deleted]
It's one model among many, but some, including heat death, hold to certain reasonable positions.
I don't know what bubble of conversations you're in for that model to be a given, but if you've come across fewer than five people who discuss it in this context, it's probably not accurate to infer that "people" means most or all.
3Quondam6extanT9 t1_itkxt45 wrote
Reply to comment by Standard-Pain5102 in Thoughts on Full Dive - Are we in one right now? by fignewtgingrich
😆
3Quondam6extanT9 t1_itja2tn wrote
Reply to comment by enkae7317 in The future of voice assistants is looking bright... by Trillo41
Yeah, I thought this way as well. But we've been using Alexa for random actions that become fairly normalized. It's connected to our sound machine, it can turn on lights, it clears out notifications, it's fun for the kids, and it does give us quick info.
It's definitely not a huge part of our lives, but it's now integrated with some small things, and we're fairly happy with it. I think it comes down to what you want to use it for. Someone just keeping it around in hopes that maybe they'll have some interesting discussion with AI is really just buying into a consumer fad, but if you have actual networking solutions to apply it to, then it becomes kind of useful.
3Quondam6extanT9 t1_itgkmfu wrote
Reply to comment by Surur in Could AGI stop climate change? by Weeb_Geek_7779
It depends. Do you have an understanding of human infrastructure and network communications, as well as the current iteration of AI and its projected growth, sufficient to explain in detail how AI would dominate "everything"?
Just to put your presumptive mind at ease: I had bought into the AI-taking-over-everything trope since the '80s. Only in the last decade, as I've come to understand how complex and nuanced human systems actually are given their detached and varied networks, have I started to understand just how difficult it would be for AI to accomplish such a feat.
Maybe, instead of assuming I believe every fearmonger (nothing they've told me is anything new), you should question things a little deeper yourself?
3Quondam6extanT9 t1_itgek3y wrote
Reply to comment by Surur in Could AGI stop climate change? by Weeb_Geek_7779
Very nice quote that serves as a wonderful distraction from the point at hand. You may as well have not responded if you weren't going to answer the question. Do you believe everything you hear? I for one follow reason and logic, so it requires evidence.
3Quondam6extanT9 t1_itgdvwy wrote
Reply to comment by Surur in Could AGI stop climate change? by Weeb_Geek_7779
So you automatically believe what people tell you when they have no evidence, logic, or understanding to back it up? Interesting.
3Quondam6extanT9 t1_itf0t67 wrote
Reply to comment by freeman_joe in Could AGI stop climate change? by Weeb_Geek_7779
I am also pro AI, as I think many are who believe the same as you. I just think there is a lot of paranoia or assumption of what AI will have access to thanks to sci-fi tropes.
3Quondam6extanT9 t1_iteubss wrote
Reply to comment by freeman_joe in Could AGI stop climate change? by Weeb_Geek_7779
You offered a very broad, reductionist answer. These elements don't in fact provide the nuanced access it would need. You just glossed over all the actual architecture of human networking, international internal versus external systems, and corporate network variance, not to mention archives of systems and data that don't actually utilize the internet.
3Quondam6extanT9 t1_itetr9e wrote
Reply to comment by freeman_joe in Could AGI stop climate change? by Weeb_Geek_7779
What a simple answer.
Now, in this reductionist projection of the future, how would you think the strongest AI is defined, and how would it integrate any other existing AI, including the ones it won't be able to access?
3Quondam6extanT9 t1_itd9deu wrote
Reply to comment by freeman_joe in Could AGI stop climate change? by Weeb_Geek_7779
The problem people seem to misunderstand is that it doesn't matter how intelligent it becomes.
Firstly, there will be more than one. We have AI development occurring all over the world through academic research, corporate development, national governments, and independent developers.
Second, they won't have access to every network globally, nor direct access to each other. Movies and sci-fi tropes don't tend to look deeper into how things are actually connected, opting instead for suspension of disbelief by simply implying that a single AI can somehow control everything from a single network. We haven't built the world's connections into a singular, easily accessible form.
When some chicken little comes along decrying that AI will control everything, ask them what they mean by "everything." Their theory falls apart because they can't explain how industries, departments, infrastructure, finance, military, medical systems, and so on are strung together in a way that would allow anyone to network globally.
3Quondam6extanT9 t1_itcrwn5 wrote
Reply to comment by freeman_joe in Could AGI stop climate change? by Weeb_Geek_7779
You're making the assumption that AGI will have automated control over everything, and that in itself is implausible.
3Quondam6extanT9 t1_itcivo6 wrote
Reply to Could AGI stop climate change? by Weeb_Geek_7779
Could it? Plausibly, but only under certain conditions. AGI would require two very important elements:
1. The willingness of humans to abide by the changes advised and implemented.
2. Access to cooperative AGI networks embedded in nations' systems around the world, in order to best analyze and communicate with one another.
3Quondam6extanT9 t1_itab3ud wrote
Reply to comment by Standard-Pain5102 in Thoughts on Full Dive - Are we in one right now? by fignewtgingrich
Yes, but my point is...so what? We have no reference nor evidence of a layered set of realities. We have this for now, and our knowledge suggests that we are built to see only a certain set of variables that make up this reality.
Think of it like this. In front of you is a cone of sight. It allows you to see where you are going, with some peripheral view to supplement your awareness. It's narrow, but not too narrow. Just enough to see where you need to go and what you need to do to stay alive.
Now imagine that cone widens and your view is far fuller. You see things like infrared, UV, even atomic-scale activity. You now recognize the underlying current of quantum interference that counts as evidence of your simulated reality. You know that this is a complex quantum program designed by other entities outside of the sphere of reality you reside in.
Suddenly that cone of sight is too full. There is so much happening that you can see and recognize that you're far too distracted to know where you are going or what you need to do. It's like someone added hundreds of pop-up ads and now you can barely make out what's going on behind them.
A simulation can be considered a simple video game we play or defined as the holographic projection of our reality onto a quantum consciousness. Either way, this is the reality we live in. This is the one we can focus on for now.
But what happens if we find evidence that it is a simulation? What then? Should we expect to act any differently? What would you do if we found out that there is another layer of reality, or that at some point consciousness itself was imprinted in time and space and was able to recreate a universe that once existed, reforging life through its quantum simulation?
3Quondam6extanT9 t1_it761wd wrote
I think "simulation" is a term we don't actually think much about. Regardless of how many believe we are living in one.
Reality is hard to define, essentially due to our inability to define consciousness. We create abstract theories about how our reality is just a hologram, or how we aren't experiencing true solid form because at the atomic and sub atomic level nothing is actually touching each other. We're these scattered patterns of particles in a cohesive makeup.
I think "simulation" is taken for granted that there are so many scales of what that can mean, we end up winding ourselves in existential knots over something that is really just the way any existence functions.
It will always be a simulation, whether we live out our lives through proxy systems or original manifestations of material reality.
"Real" is fools gold.
3Quondam6extanT9 t1_it3o5pg wrote
Reply to Just for fun: which fictional world would you spend most of your Full Dive VR time in? by exioce
Probably a full on true to heart D&D world that included every setting from Forgotten Realms and Greyhawk to SpellJammer and Planescape to Ravenloft, Dark Sun, and more.
I suppose I'd probably choose either Warlock or Sorcerer if I could only ever pick one.
3Quondam6extanT9 t1_it264v0 wrote
Reply to comment by Shelfrock77 in New research suggests our brains use quantum computation by Dr_Singularity
Not chasing anything. I put a lot of stock into multiverse theory as well as quantum consciousness, but they are both models for now. Until proven otherwise, such as with the recent evidence for entanglement, I can only treat them as theories.
3Quondam6extanT9 t1_it0v3rn wrote
Reply to comment by [deleted] in New research suggests our brains use quantum computation by Dr_Singularity
Let's correct some misunderstandings. Yes, he is using theories to infer absolute, conclusive statements, but those theories aren't "debunked" just because they are unfalsifiable. That's not how it works. If it cannot be demonstrated or proven, then it's simply a model. Nothing about it is debunked besides external claims that don't align with the existing models.
It is however ridiculous that he assumes his opinion is meant to be taken as a given. I also believe in multiverse theory on top of many other concepts, but I would never be so presumptive as to state my beliefs as fact.
3Quondam6extanT9 t1_isw4682 wrote
Reply to comment by kmtrp in Talked to people minimizing/negating potential AI impact in their field? eg: artists, coders... by kmtrp
I appreciate that, but you're not talking to someone who is absent of knowledge on the subject. I have a thick line drawn regarding AI, the singularity, and transhumanism.
My questions come about because I'm trying to push back on the reductionism that abounds in AI circles, where so many misunderstand how AI actually functions and will function, broadly speaking, across the spectrum of human fields.
For example, you seem to have an understanding of the nuance within the coding industry enough to recognize your field won't be automated, yet have you asked yourself whether you have the understanding of other industries enough to accurately project the influence of AI for them?
I think the biggest red flag is the example of the auto industry. Most people will use it as the prime sampling of how automation will supplant the human interaction.
The truth is that the auto industry is not as straightforward as many think. Between the intentional reduction of automation by some automakers and smaller niche/custom builds, the variety of work within the auto industry is hardly standardized or free of human integration.
The point here being that no industry will be fully automated so long as humans exist under the umbrella of said industry. There are many reasons behind this, many of which should be obvious.
So it's still puzzling to me how there continue to be so many chicken littles thinking they understand AI and humanity better than anyone else. The nuance in both is very misunderstood.
3Quondam6extanT9 t1_isw14d7 wrote
Reply to comment by kmtrp in Talked to people minimizing/negating potential AI impact in their field? eg: artists, coders... by kmtrp
It's puzzling that you think anything will be fully automated.
3Quondam6extanT9 t1_isvrtbi wrote
Reply to comment by kmtrp in Talked to people minimizing/negating potential AI impact in their field? eg: artists, coders... by kmtrp
Then what's your point? You sound like a chicken little talking about how people are ignoring these signs of AI influence, yet you're saying the same thing you claim they are saying.
3Quondam6extanT9 t1_isvqag2 wrote
Reply to comment by kmtrp in Talked to people minimizing/negating potential AI impact in their field? eg: artists, coders... by kmtrp
Are you afraid you'll be replaced by AI automation?
3Quondam6extanT9 t1_jeesfog wrote
Reply to Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
This doesn't just depend on the capability of AI to reach such a point; it requires the government to have unified consent to accommodate such a scenario.
I can't see the MAGA-infested, GOP-controlled House giving in to the idea of UBI, or at the very least a far more flexible free market built around AI dominance, in order to relax the working human population.
The Republican base in general tends toward a blue-collar, pull-yourself-up-by-the-bootstraps, never-give-handouts mentality, despite the hypocrisy regarding what handouts they might receive.