robertjbrown t1_jeh148m wrote

>We have no logical reason to believe that AI could go rogue

I think what Bing chat did shows that yes, we do have a logical reason to think that. And this is when it is run by companies (Microsoft and OpenAI) that really, really didn't want it doing things like that. Wait till an AI is run by some spammer or scammer or the like who just doesn't care.

It could be as simple as someone giving it the goal of "increase my profits", and it finds a way to do it that disregards such things as "don't cause human misery" or the like.

4

robertjbrown t1_jeh0ozy wrote

AI already has goals. That's what alignment is. And it gets harder to make sure those goals align with our own, the smarter the AI is.

ChatGPTs primary goal seems to be "provide a helpful answer to the user". The problem is when the primary goal becomes "increase the profits of the parent company." Or even something like "cause more engagement".

1

robertjbrown t1_jeh04nk wrote

I'd say it is a combination of things, but realize that ChatGPT came out just a few months ago. I would not be surprised if Google catches up within the year. Things are moving extremely fast. It may be that Google has something way better than ChatGPT in terms of capability, but they are more conservative regarding safety.

Remember it was OpenAI tech that went off the rails in the Bing chat mode.

Everyone in the industry realizes just how dangerous this stuff is. "Playing with fire" is about as much of an understatement as you could have. Google might just be playing it safe, and prioritizing safety/alignment over capabilities.

3

robertjbrown t1_jegwcuc wrote

>The fact is that people need interaction with people

That is your intuition, and probably most people's intuition. I think it is based on the fact that non-people had not, until November 2022, been able to have an intelligent, natural conversation with a person.

If you don't think ChatGPT is able to "have an intelligent, natural conversation with a person," here in 2023, I'm not going to argue. If you don't think that ChatGPT or some competitor will be able to do that in 2030, I think you lack imagination (and probably simply lack experience exploring what ChatGPT can actually do today).

But even if you are right, that people need to interact with people, that doesn't mean we need humans to prepare their meals, help them go to the bathroom and bathe (I definitely would prefer a robot to a human for that), get them around, make sure they take their medications, etc. If they need human interaction, what's wrong with the robot caretaker helping them get on video chat with their kids and grandkids, or with other elders who have similar needs for interaction?

I could certainly see an elder community where hundreds of residents have one or two paid humans to run everything, with the robots doing all the unpleasant and tedious stuff. Human interaction is handled not by paid staff, but by other residents.

Remember also that, in a society where most jobs can be done by machines, there are a whole lot more family members that have time to interact with their loved ones, rather than paying someone to come in and pretend to enjoy interacting with a very old person.

What specific thing does a caretaker do that must be a human?

1

robertjbrown t1_jegur4y wrote

Except that the arts, education, and elder care are things they can do very well.

You should spend a good amount of time with ChatGPT (especially the GPT-4 version) before suggesting that physical labor is the main thing where AI and automation are making a difference.

It's been a long time since bulldozers and backhoes replaced 99% of the need for humans with shovels. Now we are at the point where AI can replace most of the work done by lawyers (if not with GPT-4, then with GPT-8 or so).

And sure, you still need someone to control the AI, make the highest-level decisions, and step in for those rare things where a human is needed. Just like you need the person driving the backhoe, and you still often need a person with a shovel to do some of the finer work. (although..... https://www.core77.com/posts/109074/A-Hilariously-Tiny-Mini-Excavator .... now just replace the driver with an AI, and maybe one person controlling 50 machines, big and small)

But yeah, while not everything is 100% automatable, an awful lot of things are 99.9% automatable. The ones you mention are actually prime candidates.

1

robertjbrown t1_jegowlx wrote

Is it that you don't trust them to keep them safe?

I've been making a machine to "look after" my 8-year-old daughter, in a sense. Currently all it does is quiz her on her multiplication tables, and allow her to watch episodes of her favorite show for 10 minutes after she's solved a few with sufficient speed and accuracy. It will gradually do more (especially going beyond multiplication tables), but that's what it does now.

I'm not saying I'm leaving her home alone. But it is doing some of the things I'd be doing, freeing me up to do other things. It actually does this task better, by making the reward -- time to watch her show -- so directly tied to her progress, so I don't have to be the bad guy all the time.
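The gating logic for something like this is simple. Here's a minimal sketch in Python; the question counts, time limits, and function names are all my own inventions for illustration, not the actual machine:

```python
import random
import time

def check_answer(a, b, answer, elapsed, max_seconds=6.0):
    """A question counts only if the answer is both correct and fast enough."""
    return answer == a * b and elapsed <= max_seconds

def earned_show_time(results, min_correct=4):
    """Unlock the reward only when enough questions were answered
    with sufficient speed and accuracy."""
    return sum(results) >= min_correct

def run_quiz(num_questions=5):
    """Interactive loop: ask random times-table questions and time each one."""
    results = []
    for _ in range(num_questions):
        a, b = random.randint(2, 12), random.randint(2, 12)
        start = time.monotonic()
        try:
            answer = int(input(f"{a} x {b} = "))
        except ValueError:
            answer = -1  # non-numeric input counts as wrong
        results.append(check_answer(a, b, answer, time.monotonic() - start))
    if earned_show_time(results):
        print("Unlocked: 10 minutes of your show!")
    else:
        print("Not quite -- let's try another round.")
```

The point of splitting scoring from the interactive loop is that the reward rule (correct *and* fast, enough times in a row) lives in pure functions, so the thresholds can be tuned as the kid improves.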

If it was also making meals, doing the laundry, cleaning up after her, etc.... in exactly the way a parent or baby sitter might, all the better.

Obviously, I am not trusting a machine to keep her safe. I don't have an AI-powered robot with a camera to alert me or even call 911 if it detects something unusual. Not because I wouldn't trust one, but because such devices don't exist today, or they are too expensive or not well tested enough. But they will exist.

Remember, we're going to have self driving cars in a few years. If you don't think so, you haven't paid attention to the massive advances in AI just in the last few years (with the release of ChatGPT being the big one). We will be putting our lives in their hands.

Notice parents today don't watch their kids 24/7, especially if the kids are older than toddlers. They let them play in the basement or backyard while they are making dinner or what have you. If the kid is choking or having another medical situation that they are unable to tell you about, or being molested, or taking drugs, or exploring parts of the internet that they shouldn't, or trying to commit suicide, or any number other bad things, the parent might not know until it is too late. A robot baby sitter can indeed keep them safer than they'd be without it, even if you are right there in the house.

Do you trust a baby monitor? Like, a camera pointed at a baby, that you can monitor with your own eyes, to see that the baby seems to be ok without going to a different room? This is really just an extension of that concept, that adds a bit more automation to it.

But again, the things I described don't exist yet. They will soon, as anyone who understands just how fast AI is getting better, and has an imagination, must realize.

Of course, if the parents don't need to go to work, and all housework is handled by robots, they can spend time with the kids doing enjoyable activities, so there isn't such an immediate need for child caretakers. But still.

0

robertjbrown t1_jegm1az wrote

>but we have also seen that people simply don't want to be cared for by just machines.

Where have we seen that? 6 months ago, there was very little more annoying to me than having to interact with a chatbot. That's changed dramatically in the time since. And the current ChatGPT is not only an early version, but it doesn't speak out loud, I can't really talk to it in a natural way, and it has an intentionally neutral personality, no name, no visual appearance, no memory of past interactions with me, etc. That will change far, far before we have a "post scarcity utopia". In fact that will probably change in a year or two at most.

That's just one piece of it, of course. We need good robotics that are cheap as well.

People's attitudes towards being cared for by machines will change really quickly when those machines get good enough at the job. It doesn't make sense to assume they won't like it based on machines that have existed previously. That's about like saying "people simply don't like socializing through a digital device", and basing your assumptions on people logging into a BBS on a TRS-80.

1

robertjbrown t1_jeg1orz wrote

Can you list one thing a caretaker can do that an AI robot wouldn't be able to?

I have a 90 year old mom, and she spends thousands a month on caretakers (and it was a lot more when my dad was around as well). I can't really think of anything. Seriously, name one thing.

I see them cleaning, doing laundry, making meals, making sure medications are taken, helping them bathe or go to bathroom, and so on. And of course, when they need human interaction, helping them either get somewhere to see another person, or helping them get on video chat with someone.

And even if you come up with one thing, isn't it something the robot can identify the need for, and call in the human? For instance, call a doctor?

1

robertjbrown t1_jefygr7 wrote

You think we're all just going to cooperate? "Discuss this as a species?" How's that going to work? Democracy? Yeah that's been working beautifully.

I don't think you've been paying attention.

You don't need to "attach AIs to the nukes" for them to do massive harm. All you need is one bad person using an AI to advance their own agenda. Or even an AI itself that was improperly aligned, got a "power seeking" goal, and used manipulation (pretending to be a romantically interested human is one way) or threats (do what I say or I'll email everyone you know, pretending to be you, sending them all this homemade porn I found on your hard drive).

GPT-4, as we speak, is writing code for people, and those people are running that code without understanding it. I use it to write code and yes, it is incredible. It does it in small chunks, and I at least have the ability to skim over the code and see it isn't doing anything harmful. Soon it will write much larger programs, and the people running the code will be less experienced programmers than me. You don't see the problem there? Especially if the AI itself is not ChatGPT, but some open source one where they've taken the guardrails off? And this is all assuming the human (the one compiling and running the code) is not TRYING to do harm.

I mean, go look in your spam folder. By your logic, we'd all agree that deceptive spam is bad, and stop doing it. Now think of if every spam was AI generated, knew all kinds of things about you, was able to pretend to be people you know, was smarter than the spam filters, and wasn't restricted to email. What if you came to reddit, and had no clue who was a human and who wasn't.

I don't know where your idealistic optimism comes from. Here in the US, politics has gone off the rails, more because of social media than anything. 30 years ago, we didn't have the ability for any Joe Blow to broadcast their opinion to the world. We didn't have algorithms that amplified views that increased engagement (rather than looking at quality) at a massive scale. We now have a government who is controlled by those who spend the vast bulk of their energy fighting against each other rather than solving problems.

Sorry this "drives you fucking insane", but damn. That's really, really naive if you think we'll all work together and solve this because "that's what we do." No, we don't.

2

robertjbrown t1_jecuazk wrote

I enjoy talking to ChatGPT, even today, more than talking with, for instance, my parents' caretakers.

If there is a robot or other device that can help me use the bathroom, I'd prefer that to a human.

I can't think of much else that a robot/AI couldn't do in terms of caretaking. Prepare food, keep track of my medications, get me places, help me up and down, keep an eye on me and alert others if there is a problem, and so on.

If I want company that isn't a machine, what about other people who also want company, as opposed to a paid employee? And maybe a dog. Which the caretaker can feed and walk and such.

I can't see people in a post scarcity economy wanting to be caretakers, since everything they need isn't, well, scarce.

2

robertjbrown t1_jecthkf wrote

Any kind? I'm pretty sure "AI alignment" is something we'll want to keep humans doing. It would be very foolish to let the AIs try to keep the AIs in line.

Aside from that, I can't think of any jobs that can't be replaced or reduced to a tiny fraction of what there was previously.

But I think most jobs will be unnecessary. I'm not convinced a utopia is inevitable, though. Obviously there has to be some way to distribute wealth, whether it be UBI or something else.

8