
ALurkerForcedToLogin t1_j8hwdbe wrote

Yeah, but at least a med student is a sentient human being, not a statistical algorithm trying to guess the next word in the sentence blindly.

29

Ergok t1_j8ia83i wrote

All professionals are highly trained algorithms in a way. If the next word in the sentence is correct, does it matter where it came from?

36

venustrapsflies t1_j8jp5jv wrote

If I had a nickel for every time I saw someone say this on this sub I could retire early. It’s how you can tell this sub isn’t populated by people who actually work in AI or neuroscience.

It’s complete nonsense. Human beings don’t work by fitting a statistical model to large datasets; we learn by heuristics and explanations. An LLM is fundamentally incapable of logic, reasoning, error correction, confidence calibration, and innovation. No, a human expert isn’t just an algorithm, and it’s absurd that this idea even gets off the ground.

15

JoieDe_Vivre_ t1_j8jt9l7 wrote

The point they’re making is their second sentence.

If it’s correct, it doesn’t matter where it came from.

ChatGPT is just our first good stab at this kind of thing. As the models get better, they will outperform humans.

It’s hilarious to me that you spent all those words just talking shit, while entirely missing the point lol.

9

xxxnxxxxxxx t1_j8jzb3z wrote

If it’s ever correct, it’s by accident. The limitations listed above negate that point.

−4

JoieDe_Vivre_ t1_j8k184o wrote

It’s literally designed to get the answer right. How is that ever “by accident”?

8

venustrapsflies t1_j8kck2g wrote

No, it's not at all designed to be logically correct; it's designed to appear correct by replicating patterns in its training dataset.

On the one hand, it's pretty impressive that it can do what it does using nothing but a statistical model of language. On the other hand, it's a quite unimpressive example of artificial intelligence, because it is just a statistical language model. That's why it's abysmal at even simple math and logic questions, things that computers have historically been quite good at.

Human intelligence is nothing like a statistical language model. THAT is the real point, the one that both you and the OC, and frankly much of this sub at large, aren't getting.

7

xxxnxxxxxxx t1_j8k2m48 wrote

No, you’re missing how language models work. They are designed to guess the next word, and they can’t do any more than that. This works because language is a subjective interface, far removed from logical correctness.
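
To make "guess the next word" concrete, here's a toy sketch of the statistical idea in Python, using bigram counts (a deliberately crude illustration; an actual LLM uses a neural network trained on billions of words, not a lookup table):

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; a real model is trained on billions of words.
corpus = "the patient has a fever the patient needs rest the doctor sees the patient".split()

# Record how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'patient' -- the most common continuation, right or not
```

Nothing in there checks whether the continuation is *true*; it only checks what usually comes next.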

4

MaesterPycell t1_j8ky26c wrote

https://en.m.wikipedia.org/wiki/Chinese_room

This thought experiment addresses the issue, possibly at better length.

Additionally, I'd recommend that most people who are interested in AI read The Fourth Age, a philosophy book focused on AI. It's a nice, accessible explanation of what it would take to be truly AGI, and of the steps we've made so far and will still need to make.

Quick edit: I also don’t think you’re wrong. This AI wouldn’t be able to explain what it’s saying; it has just learned to take the code behind it and spit out something akin to human language. However garbled or incoherent that output is, the machine behind it doesn’t care, as long as it suits its learning.

2

jesusrambo t1_j8kk7i2 wrote

Lmao, this is how you can tell this sub is populated by javascript devs and not scientists

You can’t claim it’s fundamentally incapable of that, because we don’t know what makes something capable of that or not.

We can’t prove it one way or another. So, we say “we don’t know if it is or isn’t” until we can.

0

venustrapsflies t1_j8kkovy wrote

I am literally a scientist who works on ML algs for a living. Stop trying to philosophize your way into believing what you want to. Just because YOU don’t understand it doesn’t mean you can wave your hands and act like two different things are the same.

1

jesusrambo t1_j8knlv9 wrote

You are either not a scientist, or a bad one. You’re just describing bad science.

3

venustrapsflies t1_j8l4ftf wrote

No, bad science would be pretending that just because you don’t understand two different things, they are likely the same thing. Despite what you may believe, these algorithms are not some mystery that we know nothing about. We have a good understanding of why they work, and we know more than enough about them to know that they have nothing to do with biological intelligence.

0

jesusrambo t1_j8l57sr wrote

Can you please define exactly what biological intelligence is, and how it’s uniquely linked to logic and innovation?

1

venustrapsflies t1_j8l5t2n wrote

Are you actually interested in learning something, or are you just trying to play stupid semantic games?

0

jesusrambo t1_j8l6y32 wrote

If you can justify your perspective, and you’re interested in discussing it, I would love to hear it. I find this topic really interesting, and I’ve formally studied both philosophy and ML. However, so far nobody’s been able to provide an intelligent response without it devolving into, “it’s just obvious.”

Can you define what you mean by intelligence, how we can recognize and quantify it, and therefore describe how we can identify and measure its absence in a language model?

0

venustrapsflies t1_j8l8o54 wrote

How would you quantify the lack of intelligence in a cup of water? Prove to me that the flow patterns don’t represent a type of intelligence.

This is a nonsensical line of inquiry. You need to give a good reason why a statistical model would be intelligent, for some reasonable definition. Is a linear regression intelligent? The answer to that question should be the same as the answer to whether an LLM is.

What people like you do is conflate multiple very different definitions of a relatively vague concept like “intelligence”. You need to start with why on earth you would think a statistical model has anything to do with human intelligence. That’s an extraordinary claim; the burden of proof is on you.

1

jesusrambo t1_j8laua9 wrote

I can’t, and I’m not making any claims about the presence or lack of intelligence.

You are making a claim: “Language models do not have intelligence.” I am asking you to make that claim concrete, and provide substantive evidence.

You are unable to do that, so you refuse to answer the question.

I could claim “this cup of water does not contain rocks.” I could then measure the presence or absence of rocks in the cup, maybe by looking at the elemental composition of its contents and looking for iron or silica.

As a scientist, you would understand that to make a claim, either negative or positive, you must provide evidence for it. Otherwise, you would say “we cannot make a claim about this without further information,” which is OK.

Is a linear regression intelligent? I don’t know, that’s an ill-posed question because you refuse to define how we can quantify intelligence.

2

HappierShibe t1_j8ju99x wrote

This is an asinine, 'tell me you don't work with neural networks without telling me you don't work with neural networks' answer to a really complex problem.

0

Deep_Stick8786 t1_j95ygd2 wrote

I am a physician. That's just the likely order in which we'd be replaced. Much of good medical decision-making is already algorithmic, just carried out by humans rather than AI for now. Surgical robots are quite advanced in their movement capabilities; it's only a matter of time before an AI can replace the decision-making side of operating.

1

Deep_Stick8786 t1_j8ida3u wrote

Radiologists are going to go first. Then anyone not performing surgery. Then the robots come. We will all need to become engineers

−3

JackSpyder t1_j8iwxcr wrote

Don't worry, us engineers will be replaced long before then.

7

thebardingreen t1_j8k8r8x wrote

Truth.

ChatGPT cannot write an application. But it CAN write blocks of code that I, with my knowledge, can assemble into an application, and it can do that much faster than I can. And those code blocks are often better thought out than what I would have written.

Working with it has made my coding speed go up by about 30% AND made my code cleaner. I still have to fact check and debug everything it does (it gets things hilariously wrong sometimes). As I get more used to using it, I imagine my output will go up even more.

This thing is very early. I could imagine a model that uses ChatGPT, in its current state, as a sort of sub-processor... Like, a human being defines an application, the model (trained on a bunch of open source software) looks for applications similar to what was defined, then starts asking ChatGPT (as a separate layer) for blocks of code it can assemble into the final app. When it runs into emergent bugs where these blocks conflict, it asks ChatGPT to solve the underlying problem. Then it runs the final output through a bunch of benchmarks and optimization layers. It could even ask something like Stable Diffusion for graphical components.
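
Sketched in Python, that loop might look something like this. To be clear, everything below is hypothetical: `ask_llm`, `assemble`, and `run_benchmarks` are stand-ins for whatever model API, linker, and benchmark harness such a system would actually use.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to a code-generation model like ChatGPT.
    raise NotImplementedError("wire a real model API in here")

def assemble(blocks: list[str]) -> str:
    # Hypothetical: stitch the generated code blocks into one application.
    return "\n\n".join(blocks)

def run_benchmarks(app: str) -> list[str]:
    # Hypothetical: run the assembled app against tests, return failure reports.
    return []

def build_app(spec: str, components: list[str], max_rounds: int = 5) -> str:
    # 1. Ask the model for a block of code per component of the spec.
    blocks = [ask_llm(f"Write code for: {part}\nContext: {spec}") for part in components]
    app = assemble(blocks)

    # 2. When the blocks conflict, feed the failures back and ask for a fix.
    for _ in range(max_rounds):
        failures = run_benchmarks(app)
        if not failures:
            break
        app = ask_llm(f"Fix these failures:\n{failures}\n\n{app}")

    # 3. A human still reviews, cleans up, and pentests the result.
    return app
```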

I actually don't think that kind of capability is that far off. I can imagine it and I think it could be assembled from parts we have right now, given time and effort. And yeah, the final result might need some human input to clean it up (and pentest it!), but the effort saved would be phenomenal.

The long term effects of this tech on the economy are going to be... Weird. Probably painful at first. The capitalist mindset is not well equipped to deal with the disruption these kinds of tools can cause. But it's also going to cause an explosion of human expression and creativity, and give people the ability to do things they couldn't before (thanks to Stable Diffusion, I can make art in a way I never could before; I haven't even begun to scratch the surface of what I might want to do with that). What an exciting, fun and scary time to be alive.

4

HappierShibe t1_j8ju12k wrote

Last time I went to the doctor's office I got exactly 45 seconds with an actual doctor, and about three rushed sentences of actual conversation. Our healthcare system is so FUBAR'd at this point, I'd be willing to try an AI doctor if it means I actually get some healthcare.

2

AutomaticOrange4417 t1_j8lb87k wrote

You think doctors don't use a statistical algorithm to pick their words and medicine and treatments?

2

newtonkooky t1_j8iatpb wrote

Low-level doctors are, imo, the most likely to be replaced by AI; a general practitioner has told me fuck all that I didn’t know from just googling in the last 10 years. These days I already go to a lab on my own to check my blood work.

−9

ALurkerForcedToLogin t1_j8iitwl wrote

Anyone can use Google, but I pay my doctor for their decade of med school and years of experience. They have the experience to know that it is actually not cancer, no matter what WebMD is telling me.

8

_Roark t1_j8itnhk wrote

And how many shitty doctors did you have to go through before you got that one?

1

ALurkerForcedToLogin t1_j8iutus wrote

None. Your first appointment with a doctor is a chance to get to know each other a little bit. You provide info about your health history, current status, and future goals, and you ask questions about how they will be able to help you meet your health goals. If they don't sound like they're going to be able to do what you need, then no hard feelings, they're just not the doctor for you, so you find another. Doctors have different specialties and focuses, and they have different approaches and proficiencies. Your job as a patient is to find a doctor with skills that align to your needs.

3

Call_Me_Thom t1_j8j7z45 wrote

No single doctor has knowledge about everything, but an AI does. You do not need to find a doctor who works for you, or one whose schedule fits yours, when AI takes care of you. It’s available for everyone anytime, and it also has the whole human knowledge base.

3

wam654 t1_j8k562e wrote

Available for everyone any time? Fat chance. Computation time on a supercomputer isn’t free. The dataset isn’t free. Liability isn’t resolved. The team of PhDs who built it didn’t work for free. And it only has whatever portion of human knowledge it has a license to access, or that its dataset was trained on. Most of the data it would need is not public domain and would likely be heavily guarded and monetized.

That’s just the dataset. The AI doctor still needs data about you to reach conclusions. That means lab tests, scheduling, cost considerations, etc.

2

_Roark t1_j8izvye wrote

I don't know why you're talking to me like I'm 5 and have never dealt with doctors before.

I could say more, but I doubt there would be any point.

0

ALurkerForcedToLogin t1_j8j2ncm wrote

I'm not talking to you like you're five. I answered your question. If you don't like it, then don't read it.

1

Ghune t1_j8idpp9 wrote

Replace?

No touching? I don't want this doctor.

2

Cybiu5 t1_j8jxbkr wrote

GPs are genuinely useless unless you're after paracetamol or a doctor's note

1

Littlegator t1_j8iq113 wrote

Ironically, that's probably the farthest thing from the truth. Generalists have to know the most. Specialists are far more likely to get bored with their career because they "learn it all" pretty quickly, and practice-changing updates happen like once a year, if that.

−2