ChronoPsyche

ChronoPsyche t1_j6ibzy8 wrote

You had a bad experience and therefore systemic changes are needed? Do you know what the word "anecdote" means? Of course there are real issues with healthcare systems, but generalizing them to "every healthcare worker" is neither helpful nor accurate. Doctors and nurses aren't the root of the problem, though that doesn't mean every one of them is perfect.

1

ChronoPsyche t1_j6f3caq wrote

Does anyone have a source for this story from a more credible publication? Never heard of this website before and they don't link to any sources.

EDIT: I can't find a single other news source reporting this. While Reed Albergotti appears to be a credible journalist, it makes me very uncomfortable to see his obscure website being the only one reporting this. As such, I would take it with a grain of salt.

24

ChronoPsyche t1_j5v8o90 wrote

>but I cannot see a future where the reliability of the information we find on the internet is not questioned at every step,

This is already the reality and has been for a while. If you haven't realized that, then you've assuredly been misled about a lot of what you thought was reliable information. Not necessarily due to AI, but due to massive disinformation campaigns and the general bullshit that spreads so easily in online echo chambers. AI will just make those campaigns all the more effective.

You shouldn't take anything at face value; always question what the source of the information is and whether that source is credible. This is why I get so frustrated by all the random blogs posted on this sub with news-like headlines, as if they were credible news sources. They aren't; they're written by random people whose credibility is completely unknown and spammed on AI subs for ad revenue. 90% of the time they contain unsubstantiated rumors and hyper-sensationalized, misleading, and/or outright false information. Like the blog article that made the 1 trillion parameter claim. Those of us with a skeptical eye knew it was bs immediately.

And by the way, even if it comes from a credible news agency, you still have to consider potential bias. Bias is a lot easier to deal with than outright fake information, though.

So basically, yes, you need to be vigilant, but not just about AI; about all sources of information. It doesn't take a large language model to write a misleading article.

1

ChronoPsyche t1_j5ta1cg wrote

Just because it has a mechanism doesn't mean it can necessarily be traced and monitored. That's the whole idea behind emergence: a phenomenon that was never intended but arises from the unexpected interplay of complex elements.

Whether consciousness can emerge from AGI or ASI is unknown, but researchers have acknowledged the possibility, and that is what OP is asking about.

1

ChronoPsyche t1_j4gb4xl wrote

Lol, absolutely not. No company is going to take on that legal liability. Maybe in the far future, but doctors will be among the last professions to be automated, since the work involves people's lives and is very strictly regulated.

And I know you said "may not replace a doctor," but if it's diagnosing and treating you, then it's performing the function of a doctor.

1

ChronoPsyche t1_j39ccoa wrote

Here's the biggest problem with trying to crowd-source research from beginners: you don't know what you don't know. Get 100k beginners and ask them to figure out AGI, and they'll come up with a bunch of solutions that have already been tried, thinking they're novel, not realizing it's all been done before, simply due to lack of experience.

I've tried to do something similar myself, not for AGI but for something else in another domain of machine learning. I thought I'd struck gold and was a genius, only to discover I had just reinvented the wheel: an older technique that was abandoned for not being feasible. All that happened was that I learned firsthand why that technique was no longer used (and why it was ever used in the first place). It was a great learning experience, but that's all it was.

Depth of experience is invaluable. Research builds on past research, but to know what to build on and how to build on it, you gotta be experienced in the field. You gotta truly understand what's already been tried.

I don't think you really appreciate everything that goes into research. It's a common fallacy among beginners: like I said, you don't know what you don't know.

Of course, that shouldn't stop anyone from trying. You're more than welcome to take your own advice. As for me, I'm focusing on novel ways to use advanced AI built by others in software applications. OpenAI just creates the tools, but someone's gotta use those tools to build something useful. That's where people with breadth of experience, who lack the depth needed for rigorous research, can excel. I'm not going to try to beat giant corporations with teams of PhDs and billions in funding at their own game.

1

ChronoPsyche t1_j3978ph wrote

You don't think researchers at Google and OpenAI are constantly trying to figure out new, more efficient algorithms? These are researchers with PhDs in machine learning and billions in funding to carry out experiments, not people who just watched some online Python videos.

While what you say isn't impossible, you're making it sound way easier than it actually is. Sounds more like wishful thinking.

2

ChronoPsyche t1_j350ri7 wrote

I mean they do hold up to scrutiny. We have no reason to think that a probability model that merely emulates human language and doesn't have any sensory modalities could be sentient.

That's not an airtight argument because, again, I can't prove a negative. But the definition of sentience is "the capacity to experience feelings and sensations," and ChatGPT simply does not have that capacity, so there's no reason to think it is sentient.
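
To make the "probability model" point concrete, here's a toy sketch of what next-token prediction amounts to. The candidate tokens and scores are invented for illustration; a real model does this over a vocabulary of tens of thousands of tokens:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores a model might assign to candidate continuations of
# the prompt "I feel" -- illustrative numbers, not real model output.
candidates = ["happy", "sad", "tired", "sentient"]
logits = [2.1, 1.7, 1.5, 0.2]

probs = softmax(logits)
next_token = random.choices(candidates, weights=probs)[0]
print([round(p, 2) for p in probs], "->", next_token)
```

When it writes "I feel happy," it's sampling a statistically likely continuation, not reporting a feeling.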

0

ChronoPsyche t1_j34yu96 wrote

> i just dont think you have arrived at any meaningful reasons that it couldnt be concious.

I don't need to arrive at meaningful reasons why it couldn't be conscious. The burden of proof is on the person making the extraordinary claim. OP's proof for it being conscious is "because it says it is".

Also, I'm not saying it can't be conscious, as I can't prove a negative. I'm saying there's no reason to believe it is.

0

ChronoPsyche t1_j34vw1w wrote

Actually, I have no clue. We've never grown a human brain in a lab, so it's impossible to say. That's kind of irrelevant, though, because we know the human brain has the hardware necessary for sentience. We don't know that about ChatGPT and have no reason to believe it does.

And when I say perception, I don't just mean perception of the external environment; I mean perceiving or being aware of anything at all. There is no mechanism by which ChatGPT can perceive anything, internal or external. Its only input is vectors of numbers that represent tokenized text. That's it.
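
As a concrete illustration, here's roughly what the model's input looks like, using OpenAI's tiktoken tokenizer (the example string is mine, and the embedding step is only described, not shown):

```python
import tiktoken  # pip install tiktoken

# The encoding used by recent OpenAI chat models
enc = tiktoken.get_encoding("cl100k_base")

text = "Are you conscious?"
token_ids = enc.encode(text)

print(token_ids)              # a short list of plain integers
print(enc.decode(token_ids))  # round-trips back to the original text
```

Inside the model, each integer ID is then looked up in an embedding table to become a vector of floating-point numbers. That's the entire sensory world of the system: no sight, no sound, no feeling, just numbers derived from text.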

Let's ask a better question: why would it be conscious? People assume it is because it talks like a human, but that's just a trick. It's a human language imitator, and that's all.

0