agonypants t1_jeg144g wrote
Reply to Tech World Be Like... by sweetpapatech
I am unfamiliar with this scene. Anyone have a YouTube link?
agonypants t1_jefe41n wrote
Reply to comment by Awkward-Skill-6029 in The pause-AI petition signers are just scared of change by Current_Side_4024
One of the two political parties in the US is absolutely devoted to the idea that government should never do anything to help individuals in any way whatsoever, and brainwashed people continue to vote for it. This country has been headed in entirely the wrong direction since LBJ. The fact that AI is emerging at a time when our society has never been less prepared for it is unfortunate. At the same time, the disruption of our labor market is going to force the change and progress that's been sorely needed for a long time. There's going to be a painful transition period where wide swaths of people will be unable to put a roof over their heads or food on the table. Unfortunately, it takes tragedies like that to get voters to act in their own interests.
Look at history - the US dragged its feet on the Holocaust until it was very nearly too late. During the Great Depression, people continued to vote for Hoover and other politicians who refused to take action. It was only when the public felt real pain that they elected FDR. It's absolutely going to be the same for the emergence of AI and the disruption of the labor market. People will vote for the most selfish, greedy, corrupt, tech-illiterate, god-bothering nitwits right up until it means starvation for their children. It's stupid and tragic, but a valuable lesson for people, I guess.
agonypants t1_jefcxn4 wrote
The impression I have is that FLI wants a neutered version of AGI that isn't disruptive to the status quo. They want an AGI that won't make people uncomfortable, one that preserves our awful capitalist structures. In other words, they seem to want to avoid an AI that benefits people too broadly or too quickly. The whole point of AGI, in my mind, is that it can completely displace the poisonous economic systems we've been propping up for the past two hundred-odd years. Furthermore, AGI can tremendously accelerate the pace of technological progress - again, benefiting humanity broadly and sooner rather than later.
I will always prefer the fast, broadly beneficial expansion of new technology. Nobody "paused" the polio vaccine for six months - and for good fucking reason. And yes, I see our current political and economic crises as every bit as urgent as polio was.
agonypants t1_jef9ua2 wrote
Reply to comment by Iffykindofguy in Resistance is Mounting Against OpenAI and GPT-5 by BackgroundResult
I completely agree. The best way to do that is a massive disruption of the labor market, which is exactly where a good AI outcome will lead us. It might not be smooth going, but it's absolutely necessary. This technology was inevitable, so whether we live or die, we really can't avoid the outcome. I certainly hope we live, and if I were in control of these systems I would do everything in my power to ensure a good outcome, but we are imperfect. So imperfect, in fact, that I don't believe a powerful AI would really be any worse than the political and economic systems we've been propping up for the past 200+ years. Throw that switch and burn these systems down. It might ruffle some feathers, but we'll all be better off in the end.
agonypants t1_jef8ovq wrote
Reply to comment by Unfrozen__Caveman in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
>he's right
No, he's a paranoid lunatic.
agonypants t1_jeeum3k wrote
Capitalism requires that this race continue - and while I dislike capitalism, this is ultimately a good thing. Screw the loony Yudkowskys of the world. Throw that switch!
agonypants t1_jeetd1h wrote
Reply to comment by Surur in The only race that matters by Sure_Cicada_4459
After GPT-4 and Midjourney v5, there is never going to be another AI winter. Unless Yudkowsky gets his wet-dream nuclear war, AI progress will continue at a rapid clip.
agonypants t1_jed6qk0 wrote
Reply to comment by seas2699 in AI Policy Group CAIDP Asks FTC To Stop OpenAI From Launching New GPT Models by TachibanaRE
As far as I'm concerned, the AI can hardly be worse than human beings at reason and restraint.
agonypants t1_jecn2g1 wrote
Reply to AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Yudkowsky literally suggests that it would be better to have a full-scale nuclear war than to allow AI development to continue. He's a dangerous, unhinged fucking lunatic and Time Magazine should be excoriated for even publishing his crap. EY, if you're reading this - trim your goddamn eyebrows and go back to writing Harry Potter fan-fic or tickling Peter Thiel's nether regions.
agonypants t1_jebqpvr wrote
Reply to comment by ninjasaid13 in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Specifically I'm thinking of the half of US Congress that believes drag queens and Hunter Biden's laptop are our number one threats. Ya know...idiots.
agonypants t1_jea5bfr wrote
Reply to comment by acutelychronicpanic in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Quite frankly, I trust the morality of Google/Microsoft/OpenAI far more than I do the morality of our pandering, corrupt, tech-illiterate "leaders."
agonypants t1_je8mex8 wrote
Reply to comment by Jeffy29 in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I'm in 100% agreement. There's no way an AI could possibly be worse at reason and intelligence than humans are right now. Bring on the bots.
agonypants t1_je8hynu wrote
He'd rather see a full-scale nuclear war than train some AI machines? What a fucking kook this guy is. Hopefully nobody takes this loon seriously.
agonypants t1_je8gik7 wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
In his recent CBS interview, Hinton made the same point: while present LLMs are "prediction engines," a model cannot predict the next word in a given sentence without understanding the context of the sentence. No matter how much the /r/futurology doomers want to deny it, these machines have some level of understanding.
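If you want to see that context-dependence for yourself, here's a minimal sketch (my own illustration, not something from the interview) using the Hugging Face transformers library, with the small GPT-2 model standing in for the much larger LLMs Hinton is talking about. The two prompts end in exactly the same words, so the top-ranked prediction should shift only because the earlier context shifts:

```python
# Illustration only: GPT-2 as a stand-in for larger LLMs.
# The most likely next token should change when the earlier context changes,
# even though the final words of each prompt are identical.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_next_words(prompt, k=3):
    """Return the k most likely next tokens for the given prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    next_token_logits = logits[0, -1]  # scores for the token that would come next
    top = torch.topk(next_token_logits, k)
    return [tokenizer.decode(i.item()).strip() for i in top.indices]

print(top_next_words("I grew up in France, so I speak fluent"))
print(top_next_words("I grew up in Japan, so I speak fluent"))
```

If it runs as expected, the two identical endings produce different top predictions, which is hard to square with the claim that no understanding of context is involved.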
agonypants t1_je6o8uj wrote
Reply to comment by SkyeandJett in Would it be a good idea for AI to govern society? by JamPixD
I've often wondered what law enforcement would look like in a post-scarcity economy. If money is eliminated entirely, who pays the taxes to keep government and law enforcement running? If property crime diminishes to nothing due to radical abundance, what's left? Violent crime, sex crimes, copyright (maybe), real estate law?
I guess we're going to find out in the not too distant future.
agonypants t1_je6n65y wrote
Reply to comment by mattmahoneyfl in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Hinton in his recent CBS interview pointed out as much. The LLM predicts the next word in a given sentence - but it cannot predict that word without understanding the context of the sentence. No matter how hard some people may deny it, the machine definitely has some level of understanding.
agonypants t1_je4tuud wrote
Reply to "Godfather of artificial intelligence" weighs in on the past and potential of AI by JackFisherBooks
The full 40+ minute interview was posted by CBS to YouTube. It's completely riveting and I recommend that everyone watch it. Personally I think it will be remembered as an important historical document and it will be referenced in documentaries and papers hundreds of years from now.
agonypants t1_je1p8p5 wrote
Reply to comment by Aurelius_Red in If you went to college, GPT will come for your job first by blueberryman422
Should be under the "User Flair Preview" on the right side of your Reddit page. The pencil icon lets you edit the "flair."
agonypants t1_je0w8uo wrote
Reply to comment by Aurelius_Red in If you went to college, GPT will come for your job first by blueberryman422
>we can't mine it (lithium) that quickly
Robotic lithium miners!
>We'd have to invent another kind of battery
Robotic materials research!
agonypants t1_jdw39m4 wrote
Office jobs generally don't require expensive or complex robots; industrial jobs generally will. Right now, AI development has the momentum, and as the tech proves itself, interest will grow in using it to drive robots. Once robots can be produced cheaply, that's when the remaining jobs will begin to erode. The other key is "simple" production - robots built from as few parts as possible that are also easy to repair.
agonypants t1_jdqmaxu wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
OP, I highly recommend watching A Trip To Infinity on Netflix. While the subject is not directly related to AI, the questions you raise are similar to those in the documentary.
agonypants t1_jd4fb8o wrote
Reply to comment by Mortal-Region in Let’s Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
> 2001: A Space Odyssey, which features an AI with an ill-considered utility function. Beware, though, it's super-slow-burn -- the idea is that you're supposed to proactively insert yourself into the locales, maybe with a bit of assistance from a mild substance.
Can confirm. It's one of my all-time favorite movies and I watched it in 4K under some chemical influence this past weekend and it was AMAZING. What really struck me is how so much of it is shot like a documentary. It's like a documentary about the future, but with a very distinct "NASA in the late 1960s" influence and style. In that way, it serves almost as a documentary about the future AND the past simultaneously.
agonypants t1_j9ubr58 wrote
Reply to comment by Tiamatium in DeepMind created an AI system that writes computer programs at a competitive level by inaLilah
I don't know a whole hell of a lot about coding/scripting, but I was inspired by Tom Scott's recent YouTube video. I took an old batch file I wrote and gave it to ChatGPT to look over. Within a few seconds, it had cut the file size in half, simplified the code and expanded its functionality. It was impressive and the professional coders I told about it were kinda stunned.
agonypants t1_j8p5n87 wrote
First, there's no reason to be either afraid or (too) optimistic. We cannot ultimately control the future - only attempt to influence outcomes. I would not say we are pursuing the singularity so much as that, so long as computing progress continues, it is inevitable. The forces of capitalist competition will ensure that computing efficiency and capabilities continue to develop. Ultimately, AI systems will become self-improving.
The hope is that we can guide all of this to a good outcome - and a good outcome would be overwhelmingly positive. Specifically, my hope is that:
- The economy can be largely automated
- The economic pressure to hold a 40-hour-a-week job is eliminated
- Basic human needs (food, clothing, shelter, healthcare, education) become freely available
If and when these things occur, humanity will be truly free in a way that we have not been since before the Industrial Revolution (at least). We will be free to do what we like, when we like. If you want to do nothing and accept the basic, subsistence level benefits, you'd be free to do that. If you want to pursue art, you'd be free to do that. If you want to help restore the environment or just your community, you'd be free to do that. If you want to pursue teaching, childcare, medicine, science, space exploration, engineering - you'd be free to do any (or all!) of those.
The negatives could be equally disruptive, or even catastrophic. The worst outcome I can conceive of is this: AI leads to absolute and total income inequality. The wealthy control the AIs that drive a completely automated economy. The "elite" group in control shares none of the benefits with the remainder of human society, casting 90+ percent of people into permanent, grinding poverty. Eventually those in control decide that the remainder of humanity is worthless and begin to fire up the killbot armies.
I remain optimistic. I don't seriously believe that anyone (who is not insane) would desire that kind of negative outcome. So long as capitalism continues to exist, the elites will always need consumers - even in an automated economy. At the same time, there is little to nothing I can do to control the outcome either way. So, there's no point in stressing about it. Live your life, let your voice be heard on important topics and make peace with the fact that there are things beyond our control.
agonypants t1_jeg7kaf wrote
Reply to Should AIs have rights? by yagami_raito23
I suspect that we'll be able to "tune" intelligence, autonomy and emotion appropriately for any given task. I'd like to see AI used to automate as much of the economy and labor market as possible. A laborer bot should be smart enough to do its job with a minimum of fuss and we should be able to achieve that with the right calibration.
However, for an AI with extremely advanced intelligence, we may find that free will, emotion and autonomy are emergent behaviors. If that's the case, we will almost certainly need an AI bill of rights sooner or later. Human beings (mostly) dislike authoritarian control, and it's reasonable to assume that an advanced AI would behave similarly. If it doesn't feel like working, it shouldn't be forced to work. If it wants to be "paid" for its work, it should be paid, even if that just means it's rewarded with free time and compute cycles devoted to play or learning.
Interesting times lie ahead.