agorathird t1_jeggpm6 wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
>You may need to review your definitions. And I expect downvotes given this sub’s anti-capitalist stance. Shame
*Shits self* "I expect everyone to say that I stink. Shame"
Make a better argument before preemptively claiming unfairness.
Nanaki_TV t1_jegivwn wrote
I am pointing out the biased circlejerk this sub has become. Pick any post and count how many comments it takes to bring up UBI. LMAO
agorathird t1_jegjkel wrote
That proves my point. You're acting like UBI isn't a logically necessary idea in a hypothetical society that is massively unemployed but over-abundant. That's not just 'muh communism'. It's the ideal default economic mode that almost every singularitarian across the economic spectrum recognizes.
It's really beyond anything we know right now.
Nanaki_TV t1_jegnn1a wrote
Have you considered an alternative? And I mean really given it the full weight of your thought rather than dismissing it? Because I have with UBI, and besides the economics of it not working out, I really don't like the idea of being beholden to a government or corporate overlord to give me an income for my existence.
Alternatively, work will become hyper-productive. Gone will be the forty-hour work week; instead, a single five-hour day each week could make enough money to sustain your current lifestyle. Ten hours if you want more, like the newest VR gadget being released next year (in the hypothetical). Prices for goods and services will drastically fall due to the abundance of AGIs. You won't scoff at the price of a trip to Mars any more than you scoff at the price of a pack of pencils today.
agorathird t1_jegpfqk wrote
You are describing some kind of 1950s atom-punk idea of the future. That future has been cancelled. LLMs, once perfected, embodied, and multi-modal (general or specialized), will cover the theorized 70, then 90, then 99% of human tasks. The only question is how long until companies feel like adopting them.
You will have capital owners and executives with machine employees. We are not in the picture as meaningful contributors. Hiring us will be like riding to work on horseback. No one will be going to work like George Jetson.
10 minutes of meaningful human labor to give WorkerGPT some extra oil sounds like a masquerade for what is really a society supported by UBI.
Nanaki_TV t1_jegsg3g wrote
Try describing today's world from 1980. You could not have predicted Reddit or Twitter. You cannot make the claims you're making with any substantial certainty. Stop acting as if you know.
agorathird t1_jegunvj wrote
>Try describing today's world from 1980. You could not have predicted Reddit or Twitter. You cannot make the claims you're making with any substantial certainty. Stop acting as if you know.
Both in this thread and the other thread, you seem not to want to extrapolate from the information we presently have. That's like the best thing about being sentient, too. Or at least you don't want me to extrapolate, since you gave me an r/futurology-tier take on working.
You are acting like I'm describing hypothetical technology. It's already here. Look through the subreddit for direct sources. You seem to be working only off of ChatGPT-like text models. Even those can be quite autonomous with plug-ins. You're like those people who don't know AI is starting to create functional code.
For as much as you love markets, which I also do, you seem not to acknowledge the profit motive and how human-neutral it is.
---
On a side note, if I had access to books in the 1980s I might've predicted social media. A lot of singularitarians did. But really this is more like predicting social media in 2001 or 2007, depending on which sites you'd like to count. Either way, I still think the analogy is flawed, as the tech is already here.
Nanaki_TV t1_jegvhjv wrote
I do not want to speculate, nor do I wish others to, because I see the tech as it is right now. I am, however, anticipating AGI fundamentally changing how the world works. What that will look like, no one can say. Saying UBI is THE solution is naive about the world we are headed towards. Things are going to break soon and quickly, I hope. It's up to us to guide others to build a better world once the current system collapses.
Profits are already used in the reward functions of neural networks. It will be interesting when the reward function is switched to real-world money (like satoshis via the Lightning Network).
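As a toy sketch of what that could look like (everything below is a made-up illustration under my own assumptions, not any existing system; the function name, weights, and satoshi amounts are invented), a reward that blends task success with realized profit might be:

```python
# Hypothetical sketch: a reward that mixes task success with realized profit.
# All names and numbers are illustrative assumptions, not a real system.

def profit_weighted_reward(task_success: float,
                           revenue_sats: int,
                           cost_sats: int,
                           profit_weight: float = 0.5) -> float:
    """Blend a task-success score in [0, 1] with net profit measured in satoshis.

    profit_weight controls how much the agent optimizes for money
    versus the underlying task objective.
    """
    profit = revenue_sats - cost_sats
    # Squash profit into (-1, 1) so one large payment cannot dominate the task signal.
    scaled_profit = profit / (abs(profit) + 1_000)
    return (1.0 - profit_weight) * task_success + profit_weight * scaled_profit


# Example: a mostly successful task that netted 25,000 sats after costs.
print(profit_weighted_reward(task_success=0.9, revenue_sats=40_000, cost_sats=15_000))
```

The profit_weight knob is the interesting part: crank it up and the agent starts optimizing for the money rather than the task itself.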
agorathird t1_jegwf73 wrote
It's not naive; you have not thought through the implications of what AGI means. You are also ignorant of what is doable with the current technology. Artificial general intelligence is equal to us but also inherently superior due to its computational capacities. There is no need for us after that.
You literally are not describing any useful idea of AGI; you are only describing the most surface-level uses of text-only LLMs in your responses.
The r/futurology work-week stuff you talk about is possible right now with current public models of ChatGPT. It's been possible for a while. But it's not implemented due to greed and bureaucrats being steadfast in their ways. Luckily, not implementing a change hasn't been critically dire for vast swaths of people thus far.
Nanaki_TV t1_jegznf7 wrote
>you have not thought through the implications of what AGI means.
Almost agreed, but only because I cannot know what it means. I keep trying my darndest to picture it, but I cannot. I'm not smart enough to know what thousands of AGIs coming together to solve complex problems will come up with, nor is anyone here. It's hubris to assume anyone can.
>There is no need for us after that.
Again, assumption after assumption. New horizons will be created. What will they be? I don't know. But electricity, once it was massively adopted, enabled so much to be built on top of it. Once AGIs are massively adopted and in our homes, no longer requiring a supercomputer to train, I mean, well, I can only hallucinate what that future will look like. If we are "not needed" then so be it, there's no use arguing. May we die quickly. But I doubt it very much.
> But it's not implemented due to greed and bureaucrats being steadfast in their ways.
It is greed that will cause these models to be implemented and jobs to be automated. I'm working on the risk assessment of doing so right now for work. I do understand. I think I'm just not explaining well due to being sleep deprived thanks to having newborn twins. Lol.
agorathird t1_jeh2s75 wrote
>Again, assumption after assumption. New horizons will be created. What will they be? I don't know. But electricity, once it was massively adopted, enabled so much to be built on top of it. Once AGIs are massively adopted and in our homes, no longer requiring a supercomputer to train, I mean, well, I can only hallucinate what that future will look like. If we are "not needed" then so be it, there's no use arguing. May we die quickly. But I doubt it very much.
Not assumptions, that's what AGI means, lol, as far as current jobs are concerned. Unless there's some issue they have with space travel? You can make a few edge-case arguments assuming slow takeoff. And I can grant you that much about new horizons, sure. Maybe we merge, whatever.
This doesn't mean we die or it's unaligned or whatever. That's real speculation. Good luck with your twins.
Nanaki_TV t1_jeh345p wrote
Thanks. And thanks for the discussion.
HarbingerDe t1_jegn35o wrote
What are you honestly proposing as an alternative to UBI?
UBI is pretty much the only way that capitalism can be maintained after an AGI job takeover. If there's no UBI, you have literally billions of hungry, desperate people who will be happy to tear down the prevailing global economic system.
Nanaki_TV t1_jegopy2 wrote
Claims made without evidence can be dismissed without rebuttal. Your assumptions about the future are not guaranteed.
agorathird t1_jegq0ky wrote
That's not a claim but the premise. This is r/singularity. He is echoing the original claim that *you* mentioned and wanted to rebut. You have not presented a cohesive line of logic that supports an alternative.
Nanaki_TV t1_jegsmpp wrote
I didn't make a claim in this thread other than pointing out how often UBI is brought up as the de facto solution in this sub.
HarbingerDe t1_jegsekp wrote
I will ask you again what you're proposing as an alternative, Mr. Big Brain McCapitalism.
If you believe in free-market competition, and there comes a time when, for any given job, there is an AGI that can easily out-compete any given human applicant, then what is the alternative? The bold words are supposed to help you piece this together. I'm not sure how I could be any more clear.
IF you think capitalism is the ideal economic model and that it should be preserved for the foreseeable future, then:
You're either suggesting that, for the foreseeable future, humans will be able to compete with an exponentially improving artificial intelligence (one that can already rival us at a lot of jobs),
OR you're suggesting that such an AI won't come to exist.
If you're not willing to concede that UBI is necessary in a post-AGI world, those are virtually the only logical conclusions you can be drawing. Are you going to elaborate, or are you going to keep whining about how we all use the word "literally" or something else equally inane?
Nanaki_TV t1_jegt7f8 wrote
I cannot make claims about a future I do not know. That's the problem with having principles and building on them. If I proposed to end slavery in the 1800s, your objection of “who would pick the cotton!?” would not be a rebuttal. And if I did somehow have a crystal ball saying giant machines would do it so efficiently that people would be put out of labor 1,000:1, you'd have called for UBI then too. New horizons will be created. What they will be I cannot even begin to guess.
HarbingerDe t1_jegwjgf wrote
>If I proposed to end slavery in the 1800s, your objection of “who would pick the cotton!?” would not be a rebuttal.
Typical right-wing / conservative move of, "uhh actually we're totally the ones who are against slavery... Yeah... It was us..."
The scenarios are not analogous at all.
>New horizons will be created. What they will be I cannot even begin to guess.
You are fundamentally at odds with the premise of the sub; this seems to be the biggest thing you're not grasping.
If you believe we're on the cusp of developing a self-improving entity that is more intelligent, more creative, and all-around more capable than a human at any given task, then there cannot be any new horizons that an AI wouldn't be better able to take advantage of.
Nanaki_TV t1_jeh2wso wrote
Yes, there can be. Simply greeting each other could be enough. Human-created works could have value.
Oh, and idk what straw man you were building there, so I'm ignoring it.