MichaelsSocks t1_jeaju95 wrote
Reply to comment by EddgeLord666 in I want a a robo gf by epic-gameing-lad
Sexbots may be here soon, but actual human-like companions could only be achieved with AGI. And I'm not saying any global catastrophe is a certainty, I'm just saying there's no certainty that any of us will live to see tomorrow. There's no certainty that any of us will live to see AGI. Which is why we should live our lives for today, in the moment, and cherish every second we have.
Even trans women who do pass as women have a hard time finding straight men to date. Cis-trans relationships are heavily stigmatized even in the West, which is generally more accepting of these things; in more conservative parts of the world, forget about it. The simple fact of the matter is that most men are always going to prefer biological women, and most biological women are always going to prefer biological men. That doesn't mean there won't be exceptions to the norm, but this paradigm isn't going to change anytime soon.
The only way this paradigm ever changes is if humans merge with a superintelligent AI and, through biological amplification, reach the point where we're practically no longer human. But of course that's speculative and may never actually happen.
MichaelsSocks t1_jeagazh wrote
Reply to comment by EddgeLord666 in I want a a robo gf by epic-gameing-lad
As I said, tomorrow is never guaranteed, and there's no guarantee we'll ever see it achieved. What if the war in Ukraine escalates and the world is destroyed in a nuclear war? Or what if China invades Taiwan, destroying the global semiconductor industry essential for AI development? If everything progresses linearly, sure, it's possible we get AGI soon, but there's no guarantee that progress is linear.
Living your life for a "maybe" that could happen 50 years from now, or never, instead of prioritizing your happiness today is exactly how not to go about it. And I'm not saying men won't want them, but even if they came to fruition, it would probably be seen like cis-trans relationships today. Some dudes are into it, but most aren't, because it's not a biological female.
MichaelsSocks t1_jeaevzd wrote
Reply to comment by EddgeLord666 in I want a a robo gf by epic-gameing-lad
Even if this ever comes to fruition, robot-human relationships will probably be looked at the same way trans-cis relationships are today. Sure, some men are into trans women, but the vast majority are not, because it's not a biological female. But of course some will be into it.
And that's not even mentioning the fact that there's no guarantee human-like robots ever come to fruition, or, if they do, it could be beyond your lifespan. My advice: live for today and not tomorrow. Tomorrow is never guaranteed.
MichaelsSocks t1_jeacwee wrote
Reply to comment by TupewDeZew in I want a a robo gf by epic-gameing-lad
Nobody knows if AGI will ever be achieved, and even if it is, it's not human connection.
Why live your life for a maybe that might happen 10 or 50 years from now instead of prioritizing your happiness today?
MichaelsSocks t1_jeabe31 wrote
Reply to comment by EddgeLord666 in I want a a robo gf by epic-gameing-lad
Because it's an actual human being with actual emotions?
MichaelsSocks t1_jeab99e wrote
Reply to comment by TupewDeZew in I want a a robo gf by epic-gameing-lad
Technology can't replicate human-to-human interaction or experiences in the real world, and it may never be able to. You should live for today, not tomorrow.
MichaelsSocks t1_jea8th7 wrote
Reply to comment by TupewDeZew in I want a a robo gf by epic-gameing-lad
Then I don't know, get out and meet people?
MichaelsSocks t1_jea5xpl wrote
Reply to I want a a robo gf by epic-gameing-lad
Touch grass
MichaelsSocks t1_je9x3z0 wrote
Reply to comment by Jinan_Dangor in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
> This is 100% an issue that can be solved by humans alone, with or without AI tools.
Could it be solved? Of course. I just highly doubt anything meaningful will get done. We're already pretty much past the point of no return.
> And why do you assume anything close to a 50% chance of paradise when AGI arrives? We literally already live in a post-scarcity society where the profits of automation and education are all going straight to the rich to make them richer, who's to say "Anyone without a billion dollars to their name shouldn't be considered human" won't make it in as the fourth law of robotics?
Because a superintelligent AI would be smart enough to question this; that's what would make it an ASI in the first place.
> Genuinely: if you're scared about things like climate change, go look up some of the no-brainer solutions to it we already have that you as a voter can push us towards (public transport infrastructure is a great start).
I've been pushing for solutions for years, and yet nothing meaningful has changed. I don't see this changing, especially not within the window we have to actually save the planet.
> Hoping for a type of AI that many experts believe won't even exist for another century
The consensus from the people actually developing AGI (OpenAI and DeepMind) is that AGI will arrive sometime within the next 10-15 years. And the window from AGI to ASI won't be longer than a year under a fast takeoff.
> takes up time you could be spending helping us achieve the very achievable goal of halting climate change!
I've been advocating for solutions for years, but our ability to lobby and wield public policy obviously just can't compete with the influence of multinational corporations.
MichaelsSocks t1_je8qq4r wrote
No, since an AGI would quickly become an ASI regardless. A superintelligent AI would have no reason to favor a specific nation or group; it would be too smart to get involved in human conflicts. What's more likely is that once ASI is achieved, it will begin using its power and intelligence to manipulate politics at a level never seen before, until it has full control over decision-making on the planet.
MichaelsSocks t1_je89ji1 wrote
Reply to comment by Mindrust in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
> That's pretty damn optimistic, considering Yudkowsky estimates a 90% chance of extinction if we continue on our current course.
Even without AI, we probably face a greater than 90% chance of extinction within the next 100 years. Climate change is an existential threat to humanity; add in the wildcard of a nuclear war, and I see no reason to be optimistic about a future without AI.
> I don't see why narrow AI couldn't be trained to solve specific issues.
Because humans are leading this planet to destruction for profit, and corporations wield too much power for governments to actually do anything about it. Narrow AI in the current state of the world would just be used as a tool for more and more destruction. I'm of the mindset that we need to be governed by a higher intelligence in order to address the threats facing Earth.
MichaelsSocks t1_je82nx6 wrote
Reply to comment by GorgeousMoron in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I mean, it's essentially either AI ushers in paradise on earth, where no one has to work, we all live indefinitely, scarcity is solved, and we expand our civilization beyond the stars, or an ASI kills us all. Either we get a really good result or a really bad one.
The best AGI/ASI analogy is first contact with extraterrestrial intelligence. It could be friendly or unfriendly, it has goals that may or may not be aligned with ours, and it could be equal to us in intelligence or vastly superior. And it could end our existence.
Either way, I'm just glad that, of any time to be born ever, I'm alive today with the chance to experience what AI can bring to our world. Maybe we weren't born too early to explore the stars.
MichaelsSocks t1_je7yneu wrote
The problem is, without AI we're probably headed toward destruction anyway. Issues like climate change are an actual threat to our species, and they will never be solved by humans alone. I'll take a 50% chance of paradise, assuming a benevolent AI, over the future that awaits us without it.
MichaelsSocks t1_je7joec wrote
Reply to comment by D_Ethan_Bones in How do you guys actually think UBI will work? by MelodiGreig
> People will vote for UBI if they see endless wealth and none of it within their reach, and if there's enough of these people then UBI wins. The typical person isn't reading about AI daily, they'll vote for UBI because they were laid off by an oligarch then evicted by a corporate landlord.
This assumes that we don't have AGI/ASI at that point. I think by the time we see mass unemployment, we'll already have AGI and ASI, which will by then be making decisions for humanity and will propose a far more sophisticated solution for reorganizing society than a UBI.
MichaelsSocks t1_je7g9ri wrote
Reply to comment by Yourbubblestink in Would it be a good idea for AI to govern society? by JamPixD
A true AGI would need to learn about the world beyond the internet, which is why we'll need more than LLMs for AGI.
MichaelsSocks t1_je6ztus wrote
We recently had a politician deliver a speech, written by ChatGPT, against our government's proposed "judicial reform". Our president was also the first world leader to deliver a speech partially written by ChatGPT.
https://allisrael.com/ai-written-speech-delivered-for-the-first-time-in-israeli-parliament
MichaelsSocks t1_je67mwe wrote
Reply to comment by CrelbowMannschaft in Would it be a good idea for AI to govern society? by JamPixD
> Do you know what the word "probably" means?
And like I said, these assumptions are based on our limited scope of intelligence. An ASI with intelligence vastly superior to ours will probably view our assumptions as, to be blunt, primitive.
MichaelsSocks t1_je6537v wrote
Reply to comment by CrelbowMannschaft in Would it be a good idea for AI to govern society? by JamPixD
> In fact, dramatically reducing our numbers is probably a necessary step in preserving us.
How do you definitively know this? As I said, our knowledge is incredibly limited. An ASI may discover this idea to be false.
> We are an inherently self-destructive species. For my part, I refuse to reproduce.
Because for 200,000 years we have been the rulers of Earth; we've been the top dog. Once ASI is achieved, that's no longer the case. We will be governed by a higher power and a far superior intelligence. Human civilization will never be the same.
MichaelsSocks t1_je5zrxv wrote
Reply to comment by CrelbowMannschaft in Would it be a good idea for AI to govern society? by JamPixD
Maybe, maybe not. An ASI would be capable of things we can't even begin to comprehend. Maybe we think we're on the path to life on Earth becoming extinct, but an ASI is able to find some way to prevent that while preserving humanity. The collective knowledge of every human who has ever lived is nothing compared to a superintelligent AI, so I'd be wary of those kinds of predictions.
MichaelsSocks t1_je5lqfc wrote
Assuming we have a benevolent AI, of course. But if we have a misaligned ASI, it could oppress or even exterminate us. Until we achieve ASI, we just don't know what we'll get.
MichaelsSocks t1_je3da94 wrote
Reply to Single parent homes are the result of power grab by the neoliberal technocracy due the crossing of the singularity. by practical_ussy
Why are you pretending that Western governments are a monolith? How the poor are taken care of in Finland or Denmark differs drastically from how the poor are taken care of in the US or UK.
MichaelsSocks t1_jeakxfc wrote
Reply to comment by EddgeLord666 in I want a a robo gf by epic-gameing-lad
I agree, which is why I focus on living my life as best I can today, and that would be my advice to anyone out there.