
indigoHatter t1_j2wcn7g wrote

As I discussed in a comment further down, it's not that this is medical advice*; it's that the prompt triggered the associations ChatGPT had already learned about nootropics, which mostly come from pharma-bros posting on nootropics forums, and its predictive text then wrote a summary based on that learning. This isn't medically cross-examined data, it's just crowdsourced pharma-bro talk.

*The danger here, though, is that while this isn't medical advice, some dumbass could misconstrue it as such. The same is true of stumbling onto a nootropics forum, but people may expect an AI to "be smarter" since it can also discuss medical facts when spoken to in a way that triggers correct medical language. Short version: info is cool, but always run your Google & ChatGPT findings by a real doctor first.

(edits for clarity)

13

Technical-Berry8471 t1_j2wr48b wrote

Real doctors will think of the liability and tell you not to. Also, ChatGPT does tell people to consult a medical professional.

6

indigoHatter t1_j2xtuyv wrote

That's good. That at least absolves ChatGPT of malpractice, lol. Idiots will still miss that disclaimer, though.

1

monsieurpooh t1_j2wx7o9 wrote

Why do people keep spreading this misinformation? The process you described is not how GPT works. If it were just finding a source and summarizing it, it wouldn't be capable of writing creative fake news articles about any topic.

3

indigoHatter t1_j2xubmd wrote

I might have grossly oversimplified the process, but is that not the general idea of training a neural network?

1

monsieurpooh t1_j2xw6ta wrote

These models are trained only to do one thing really well, which is predict what word should come after an existing prompt, by reading millions of examples of text. The input is the words so far and the output is the next word. That is the entirety of the training process. They aren't taught to look up sources, summarize, or "run nootropics through its neural network" or anything like that.
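To make that concrete, here's a minimal sketch of what that training objective looks like (illustrative Python only, not the actual GPT code; `model` and `tokenize` here are placeholders, not real APIs):

```python
# Next-token-prediction training, sketched: the input is "the words so far",
# the target is the next word at each position. Everything here is a stand-in.
import torch
import torch.nn.functional as F

def training_step(model, optimizer, text, tokenize):
    tokens = tokenize(text)                  # e.g. [512, 88, 1043, ...]
    inputs = torch.tensor(tokens[:-1])       # the words so far
    targets = torch.tensor(tokens[1:])       # the next word at each position
    logits = model(inputs)                   # scores over the vocabulary
    loss = F.cross_entropy(logits, targets)  # penalty for bad next-word guesses
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

That's the whole objective: guess the next token, get graded, adjust, repeat over millions of examples.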

From this simple directive of "what should the next word be" they've been able to accomplish some pretty unexpected breakthroughs, in tasks which conventional wisdom would've held to be impossible for a model programmed only to figure out the next word, e.g. common-sense Q&A benchmarks, reading comprehension, unseen SAT questions, etc. All this was possible only because the huge transformer neural network is very capable, and as it turns out, can produce emergent cognition where it seems to learn some logic and reasoning even though its only real goal is to figure out the next word.

Edit: Also, your original comment appears to be describing inference, not training
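For contrast, inference (what happens when you type a prompt) is just the trained model repeatedly answering "what's the next word?" and feeding its own answer back in. A toy sketch, again with `model` and `tokenizer` as placeholders rather than a real API:

```python
# Autoregressive generation, sketched: sample one token at a time and append it.
import torch

def generate(model, tokenizer, prompt, max_new_tokens=50):
    tokens = tokenizer.encode(prompt)                    # the prompt so far
    for _ in range(max_new_tokens):
        logits = model(torch.tensor(tokens))             # scores for every position
        probs = torch.softmax(logits[-1], dim=-1)        # distribution over the next token
        next_token = torch.multinomial(probs, 1).item()  # pick one
        tokens.append(next_token)                        # feed it back in and repeat
    return tokenizer.decode(tokens)
```

No lookup, no summarizing step: whatever "sources" show up in the output are just patterns the weights absorbed during training.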

2

indigoHatter t1_j2ysq1c wrote

Okay, again I am grossly oversimplifying the concept, but if it was trained to predict what word should be next in a response such as that, then presumably it once learned about nootropics and absorbed a few forums and articles about nootropics. So.......

Bro: "Hey, make my brain better"

GPT: "K, check out these nootropics"

I made edits to my initial post in hopes it makes better sense now. You're correct that my phrasing wasn't great initially and left room for others to misunderstand what I was trying to say.

1

monsieurpooh t1_j2z3bt5 wrote

Thanks. I find your edited version hard to understand and still a little wrong, but I won't split hairs over it. We 100% agree on the main point though: This algorithm is prone to emulating whatever stuff is in the training data, including bro-medical-advice.

2

indigoHatter t1_j2zeaxf wrote

Yeah, I'm not trying very hard to be precise right now. Glad you think it's better though. ✌️ Have a great day, my dude!

2