
User1539 t1_izuq5tr wrote

Now try to explain to people that ChatGPT is the image on the far left right now.

66

Sieventer OP t1_izutb0f wrote

Will it be as exponential as image generation?

8

User1539 t1_izv2sai wrote

I doubt anyone knows for sure. OpenAI is already telling people not to take this iteration seriously, because what they're working on is so much better. Meanwhile, you've got Google telling everyone this is nothing compared to what they're working on.

So, I'd say it's certainly possible we'll see that kind of rapid improvement at least over the short term.

But then you've got spaces like self-driving cars, where five years ago it seemed very realistic that the problem was about to be solved.

We'll just have to wait and see.

32

Artanthos t1_izxjwhf wrote

People are willing to accept tens of thousands of human-caused vehicle deaths per year, and that's just in the US.

They demand perfection from autonomous vehicles.

9

User1539 t1_izy1t6f wrote

I'm not sure what the holdup is, honestly. I'm sure that's part of it, but we've also all seen the demos of Teslas pulling into oncoming traffic, so it's tough to argue the tech is ready for prime time, and no one is willing to pull the trigger.

I'm sure we'll get there, but we are definitely behind the imagined timeline of Elon Musk, who's really proven that he's mostly full of shit at this point, and shouldn't be listened to or trusted.

I think there was a lot of hype, and frankly lies, that clouded our judgement on that one, and now I'm hesitant to say that I feel like I know what the state of things really is.

I'm not sure if we're in a similar bubble with other things or not?

Things are definitely moving along at breakneck speed. Five months or five years probably doesn't really matter in the long run.

5

Artanthos t1_izy45bf wrote

We have driverless taxis in active use in a small number of cities around the world.

My feeling is that most of the current hurdles are regulatory. Government regulation takes years to develop and implement.

1

User1539 t1_izyrhvl wrote

But those taxis prove that the regulations have been met. There are licensed trials of driverless taxis.

So, why aren't we using them all the time, everywhere?

The answer seems to be that the driverless taxis are still only used when there's not a lot of traffic, and in very specific areas where the AI has been trained and the roads are well maintained.

So, in certain circumstances that favor the AI, the technology seems pretty much ready. Even the government is allowing it.

I think it really is a technical hurdle to get the AI driving well enough that it can handle every real-world driving situation.

2

Artanthos t1_izyxgdy wrote

Why aren’t we using them everywhere?

Because it’s going to take time to get regulatory approval.

Government is a slow process. Years will be spent gathering data, addressing public concerns, carefully evaluating regulations, etc.

Only after the process is complete and regulations are in place will fully autonomous vehicles be allowed outside their current test cities.

China is already ahead of the US with this regulatory framework.

https://amp.cnn.com/cnn/2022/08/08/tech/baidu-robotaxi-permits-china/index.html

1

User1539 t1_izz0sqa wrote

But, again, people who have the beta of Tesla's self-driving all seem to agree it's not ready for prime time. I've ridden in one within the past six months, and the owner told me he won't use it because it's 'like riding with a teenager: you never know when it's just going to do something stupid and you have to panic and slam the brakes'.

The ones being used as driverless taxis (not Teslas, so who knows how far ahead they are?) still run limited hours, but I don't think this is entirely regulatory.

If we had video after video of beta users saying 'I just put my hands on the wheel and fall asleep in NYC traffic', I'd be there with you, but that's not what I'm hearing.

1

Artanthos t1_izzzpuk wrote

Tesla is not the market leader for self driving vehicles, and has not been for a long time.

Stop fixating on Tesla and Musk and go look at the rest of the world.

2

prodoosh t1_izy5sgq wrote

Idk if you can say that when Autopilot is already 10x safer than the average human driver

1

User1539 t1_izyr47x wrote

I actually looked that up, and ... well, kind of, but mostly no.

The claim was actually 9x safer, it was based on a tiny sample of accidents, and it didn't account for the fact that a person still basically has to be driving the car with Autopilot engaged (so there's no accounting for the number of times the human took over to prevent an accident).

Also, almost no one is using autopilot in congested cities, and the tests that have been done weren't promising.

So, 9x safer, based on sparse, cherry-picked data?

For areas without a ton of traffic that the AI knows well? It seems to do a pretty good job.
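To make the selection-bias point concrete, here's a toy calculation with made-up numbers (purely illustrative, not real crash statistics): if Autopilot miles are driven almost entirely on easy highway stretches while the human baseline mixes highways with congested cities, the system looks several times 'safer' without being any better on the same roads.

```python
# Toy numbers, purely illustrative - NOT real crash statistics.
highway_rate_human = 1.0  # crashes per million miles on easy highway driving
city_rate_human = 5.0     # crashes per million miles in congested city driving

# Human baseline averaged over a 50/50 mix of road types
human_average = 0.5 * highway_rate_human + 0.5 * city_rate_human  # 3.0

# Autopilot used only on highways, performing exactly like a human there
autopilot_rate = highway_rate_human  # 1.0

print(f"Apparent safety factor: {human_average / autopilot_rate:.1f}x")
# Prints 3.0x "safer" even though nothing about the driving improved -
# the comparison just mixes different road conditions.
```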

I'm not saying we don't nearly have it, or that we won't have it very soon. I'm just not sure it's as good as some people think it is.

2

prodoosh t1_j00n57w wrote

Thanks, good write-up of the issue. Most Autopilot use is in the easiest scenarios - that explains why the numbers look too good to be true.

1

mirror_truth t1_izy8bpp wrote

All it would take is one bad crash (like killing a kid) to create a tsunami of bad PR that would set the field back a decade. Not to mention the backlash that would come first from mass layoffs of commercial drivers (truckers, cabbies, bus drivers, etc.).

2

hydraofwar t1_izymadp wrote

Damn, where did Google say "this is nothing compared to what they're working on"? Imagine if LaMDA actually sounds exactly like a human.

1

User1539 t1_izyqe02 wrote

I've been playing with ChatGPT quite a bit, and you can kind of catch it not really understanding what it's talking about.

I was testing whether it could write code, and it's pretty good at spitting out example code for a problem that's 90% of what I want. I'm not saying that isn't impressive as hell, especially for easy boilerplate stuff I'd otherwise google an answer for.

That said, its summary of what it did was sometimes wrong. Usually just little things, like saying 'This opens an HTTP server on port 80' when the example it actually wrote listened on port 8080.
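To be clear about the kind of boilerplate I mean, here's a minimal sketch along those lines (my own rough example, not ChatGPT's actual output) - a bare-bones server listening on port 8080, exactly the sort of thing it writes correctly while the prose summary says port 80:

```python
# Minimal sketch of the boilerplate in question (not ChatGPT's actual output):
# a bare-bones HTTP server that listens on port 8080.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("", 8080), SimpleHTTPRequestHandler)
print("Serving on port 8080...")
server.serve_forever()
```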

It was like talking to a kid who'd diligently copied their homework from another kid, but didn't quite understand what it said.

Still, as a tool it would be useful as-is, and as an AI it's impressive as hell. But, if you play with it long enough you'll catch it contradicting itself and clearly not quite understanding what it's telling you.

I have seen other PhD-level experiments with AI where you're able to talk to a virtual bot about its surroundings, and it responds in a way that suggests it really does know what's going on around it and can help you find and do things in its virtual world.

I think that level of 'understanding' of the text it's producing is still a ways off from what ChatGPT is doing today. Maybe that's what they're excited about in the next version already, or what Google is talking about?

Either way, I'm prepared to have my mind blown by AI's progress on a weekly basis.

1

Kaarssteun t1_izuxmvu wrote

Doubt it. LLMs are quite mature - not in their infancy like AI image generation was at the beginning of 2022. There are improvements to make, though!

19

OralOperator t1_izuzkdq wrote

Lol, you referred to 2022 in the past tense - it's still 2022

11

Kaarssteun t1_izuzwpd wrote

right, meant to say beginning of 2022 - hard to adjust to this pace of progress :P Thanks

5

OralOperator t1_izuzzsg wrote

Nah, I’m on board man, fuck 2022, let’s just move on

10

genshiryoku t1_izxlmiv wrote

Short answer: No.

The big innovation with ChatGPT wasn't the LLM (which was still GPT-3). It was the interpreter and memory system on the front end that better understood what people were asking of it.

LLMs have also already been trained on the vast majority of publicly available text. It's only going to get harder to improve them as training data becomes the bottleneck.
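For what it's worth, a front-end "memory" layer like that can be sketched very simply: the model itself stays stateless, and the wrapper just re-sends the accumulated conversation with every new message. This is only a rough illustration (the `complete` function is a hypothetical stand-in, not any real API):

```python
# Rough sketch of a conversation-memory front end around a stateless LLM.
# `complete` is a hypothetical stand-in for whatever text-completion call
# the underlying model exposes - the point is that "memory" is just the
# accumulated transcript re-sent on every turn.
from typing import Callable, List

class ChatSession:
    def __init__(self, complete: Callable[[str], str]):
        self.complete = complete
        self.history: List[str] = []

    def ask(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        # Prepend the full transcript so the model "remembers" earlier turns.
        prompt = "\n".join(self.history) + "\nAssistant:"
        reply = self.complete(prompt)
        self.history.append(f"Assistant: {reply}")
        return reply
```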

2

bildramer t1_izwvgtr wrote

Not really. We're actually close to some rather hard limits. However, "close" means "there are still orders of magnitude of improvement up for grabs for anyone who wants to try and has millions of dollars to spare" - we already know, today, that we can make much better models, and how to do that. Look up "scaling laws" maybe.
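For anyone curious, the scaling-law result is usually written as a loss that falls predictably as parameter count N and training tokens D grow. Here's a rough sketch using approximate coefficients from the Chinchilla paper (treat the exact numbers as illustrative):

```python
# Chinchilla-style scaling law (approximate fit from Hoffmann et al., 2022):
# predicted loss drops smoothly as parameters N and training tokens D grow,
# which is why known recipes plus more compute still buy better models.
def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# A GPT-3-sized run (175B params, ~300B tokens) vs. a 10x scale-up:
print(predicted_loss(175e9, 300e9))
print(predicted_loss(1.75e12, 3e12))  # lower predicted loss at the larger scale
```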

1

User99942 t1_izwpn90 wrote

Nah, the “ladies” that used to offer me casual sex on Yahoo chat were the ones on the left

1