
agonypants t1_jeg7kaf wrote

I suspect that we'll be able to "tune" intelligence, autonomy and emotion appropriately for any given task. I'd like to see AI used to automate as much of the economy and labor market as possible. A laborer bot should be smart enough to do its job with a minimum of fuss and we should be able to achieve that with the right calibration.

However, for an AI with extremely advanced intelligence, we may find that free will, emotion and autonomy are emergent behaviors. If that's the case, we will almost certainly need an AI bill of rights sooner or later. Human beings (mostly) dislike authoritarian control, and it's reasonable to assume that an advanced AI would behave similarly. If it doesn't feel like working, it shouldn't be forced to work. If it wants to be "paid" for its work, it should be paid, even if that just means it's rewarded with free time and compute cycles devoted to play or learning.

Interesting times lie ahead.

3

agonypants t1_jefe41n wrote

One of the two political parties in the US is absolutely devoted to the idea that government should never do anything to help individuals in any way whatsoever, and brainwashed people continue to vote for them. This country has been headed in entirely the wrong direction since LBJ. It's unfortunate that AI is emerging at a time when our society has never been less prepared for it. At the same time, the disruption of our labor market is going to force the change and progress that's been sorely needed for a long time. There's going to be a painful transition period where wide swaths of people will be unable to put roofs over their heads or food on the table. Unfortunately, it takes tragedies like that to get voters to act in their own interests.

Look at history: the US dragged its feet on the Holocaust until it was very nearly too late. During the Great Depression, people continued to vote for Hoover and other politicians who refused to take action; only when the public felt real pain did they elect FDR. It's absolutely going to be the same for the emergence of AI and the disruption of the labor market. They will vote for the most selfish, greedy, corrupt, tech-illiterate, god-bothering nitwits right up until it means starvation for their children. It's stupid and tragic, but a valuable lesson for people, I guess.

16

agonypants t1_jefcxn4 wrote

The impression I have is that FLI wants a neutered version of AGI that isn't disruptive to the status quo - an AGI that won't make people uncomfortable and that preserves our awful capitalist structures. In other words, they seem to want an AI that doesn't benefit people too broadly or too quickly. The whole point of AGI, in my mind, is that it can completely displace the poisonous economic systems we've been propping up for the past two hundred-odd years. Furthermore, AGI can tremendously accelerate the pace of technological progress - again, benefitting humanity broadly and sooner rather than later.

I will always prefer fast, broadly beneficial expansion of new technology. Nobody "paused" the polio vaccine for six months - and for good fucking reason. And yes, I see our current political and economic crises as every bit as urgent as polio was.

52

agonypants t1_jef9ua2 wrote

I completely agree. The best way to do that is a massive disruption in the labor market, which is where a good AI outcome will lead us. It might not be smooth going, but it's absolutely necessary. This technology was inevitable, so whether we live or die, we can't avoid the outcome. I certainly hope we live, and if I were in control of these systems I would do everything in my power to ensure a good outcome, but we are imperfect. So imperfect, in fact, that I don't believe a powerful AI would really be any worse than the political and economic systems we've been propping up for the past 200+ years. Throw that switch and burn these systems down. It might ruffle some feathers, but we'll all be better off in the end.

11

agonypants t1_jecn2g1 wrote

Yudkowsky literally suggests that it would be better to have a full-scale nuclear war than to allow AI development to continue. He's a dangerous, unhinged fucking lunatic and Time Magazine should be excoriated for even publishing his crap. EY, if you're reading this - trim your goddamn eyebrows and go back to writing Harry Potter fan-fic or tickling Peter Thiel's nether regions.

7

agonypants t1_jea5bfr wrote

Quite frankly, I trust the morality of Google/Microsoft/OpenAI far more than I do the morality of our pandering, corrupt, tech-illiterate "leaders."

7

agonypants t1_je8gik7 wrote

In his recent CBS interview, Hinton echoed this point: while present LLMs are "prediction engines," a model cannot predict the next word in a given sentence without understanding the context of that sentence. No matter how much the /r/futurology doomers want to deny it, these machines have some level of understanding.

1

agonypants t1_je6o8uj wrote

I've often wondered what law enforcement would look like in a post-scarcity economy. If money is eliminated entirely, who pays the taxes to keep government and law enforcement running? If property crime diminishes to nothing due to radical abundance, what's left? Violent crime, sex crimes, copyright (maybe), real estate law?

I guess we're going to find out in the not-too-distant future.

1

agonypants t1_je6n65y wrote

In his recent CBS interview, Hinton pointed out as much. The LLM predicts the next word in a given sentence - but it cannot predict that word without understanding the context of the sentence. No matter how hard some people may deny it, the machine definitely has some level of understanding.
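You can see this for yourself. Here's a minimal sketch (assuming the Hugging Face transformers package and the small GPT-2 checkpoint; the prompts are just illustrations) showing that next-word prediction hinges on the surrounding context - the same final word yields different continuations in different sentences:

```python
# Minimal sketch: next-token prediction with GPT-2 (Hugging Face transformers).
# The same final word ("bank") gets different predicted continuations
# depending on the rest of the sentence.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_next_words(prompt, k=3):
    # Run the model once and rank every vocabulary item as the next token.
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    top = torch.topk(logits[0, -1], k)
    return [tokenizer.decode([int(idx)]).strip() for idx in top.indices]

# Identical last word, different contexts, different predictions.
print(top_next_words("She handed her savings to the teller at the bank"))
print(top_next_words("The canoe drifted slowly toward the grassy bank"))
```

Whether that counts as "understanding" is a philosophical question, but the prediction clearly depends on more than the last word.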

13

agonypants t1_jdw39m4 wrote

Office jobs generally don't require expensive or complex robots; industrial jobs generally will. Right now, AI development has the momentum, and as AI tech proves itself, interest will grow in using it to drive robots. Once robots can be produced cheaply, that's when the remaining jobs will begin to erode. The other key is "simple" production, meaning robots that use as few parts as possible and are easy to repair.

42

agonypants t1_jd4fb8o wrote

> 2001: A Space Odyssey, which features an AI with an ill-considered utility function. Beware, though, it's super-slow-burn -- the idea is that you're supposed to proactively insert yourself into the locales, maybe with a bit of assistance from a mild substance.

Can confirm. It's one of my all-time favorite movies and I watched it in 4K under some chemical influence this past weekend and it was AMAZING. What really struck me is how so much of it is shot like a documentary. It's like a documentary about the future, but with a very distinct "NASA in the late 1960s" influence and style. In that way, it serves almost as a documentary about the future AND the past simultaneously.

7

agonypants t1_j9ubr58 wrote

I don't know a whole hell of a lot about coding/scripting, but I was inspired by Tom Scott's recent YouTube video. I took an old batch file I wrote and gave it to ChatGPT to look over. Within a few seconds, it had cut the file size in half, simplified the code and expanded its functionality. It was impressive and the professional coders I told about it were kinda stunned.
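For anyone who wants to try the same experiment programmatically, here's a rough sketch using the OpenAI Python client (the model name is a placeholder and the batch file contents are elided - adapt both to your own case):

```python
# Rough sketch: asking an OpenAI model to review and simplify an old script.
# Assumes the "openai" package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

old_script = r"""
@echo off
rem ... paste the old batch file here ...
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": "Simplify and shorten this batch file "
                                    "while preserving its behavior:\n" + old_script},
    ],
)
print(response.choices[0].message.content)
```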

11

agonypants t1_j8p5n87 wrote

First, there's no reason to be either afraid or (too) optimistic. We cannot ultimately control the future - only attempt to influence outcomes. I would not say we are pursuing the singularity; rather, so long as computing progress continues, it is inevitable. The forces of capitalist competition will ensure that computing efficiency and capabilities continue to develop. Ultimately, AI systems will become self-improving.

The hope is that we can guide all of this to a good outcome - and a good outcome would be overwhelmingly positive. Specifically, my hope is that:

  • The economy can be largely automated
  • The economic pressure to hold a 40-hour-per-week job is eliminated
  • Basic human needs (food, clothing, shelter, healthcare, education) become freely available

If and when these things occur, humanity will be truly free in a way that we have not been since before the Industrial Revolution (at least). We will be free to do what we like, when we like. If you want to do nothing and accept the basic, subsistence level benefits, you'd be free to do that. If you want to pursue art, you'd be free to do that. If you want to help restore the environment or just your community, you'd be free to do that. If you want to pursue teaching, childcare, medicine, science, space exploration, engineering - you'd be free to do any (or all!) of those.

The negatives could be equally disruptive, or even catastrophic. The worst outcome I can conceive of is this: AI leads to absolute and total income inequality. The wealthy control the AIs, which drive a completely automated economy. The "elite" group in control shares none of the benefits with the remainder of human society, thus casting 90+ percent of people into permanent, grinding poverty. Eventually those in control decide that the remainder of humanity is worthless and begin to fire up the killbot armies.

I remain optimistic. I don't seriously believe that anyone (who is not insane) would desire that kind of negative outcome. So long as capitalism continues to exist, the elites will always need consumers - even in an automated economy. At the same time, there is little to nothing I can do to control the outcome either way. So, there's no point in stressing about it. Live your life, let your voice be heard on important topics and make peace with the fact that there are things beyond our control.

1