ChronoPsyche t1_j6ic7ew wrote
Reply to comment by putalotoftussinonit in I went to the hospital yesterday and staff was really shit , I can’t wait until we replace every healthcare worker? Will things change in 2030 or is it too early. by Ishynethetruth
Empathy isn't about comforting; you're thinking of sympathy. Empathy is about understanding what you are going through and treating you accordingly.
ChronoPsyche t1_j6ibzy8 wrote
Reply to I went to the hospital yesterday and staff was really shit , I can’t wait until we replace every healthcare worker? Will things change in 2030 or is it too early. by Ishynethetruth
You had a bad experience and therefore systemic changes are needed? Do you know what the word "anecdote" means? Of course, there are indeed issues with healthcare systems, but generalizing it to "every healthcare worker" is not helpful or accurate. Doctors/nurses aren't the issue, but that doesn't mean they're all 100% good.
ChronoPsyche t1_j6f3caq wrote
Reply to ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
Does anyone have a source for this story from a more credible publication? Never heard of this website before and they don't link to any sources.
EDIT: I can't find a single other news source reporting this. While Reed Albergotti appears to be a credible journalist, it makes me very uncomfortable to see his obscure website being the only one reporting this. As such, I would take it with a grain of salt.
ChronoPsyche t1_j62b0lw wrote
Reply to comment by Sashinii in MusicLM: Generating Music From Text (Google Research) by nick7566
You're pre-emptively stirring up negative sentiment against musicians and that's bullshit. Don't make people choose sides here.
ChronoPsyche t1_j5v8o90 wrote
>but I cannot see a future where the reliability of the information we find on the internet is not questioned at every step,
This is already the reality now and has been for a while. If you haven't realized that, then you've assuredly been misled about a lot of what you thought was reliable information. Not due to AI necessarily, but due to massive disinformation campaigns and just general bullshit that spreads so easily in online echo chambers. AI will just make those all the more effective.
You shouldn't take anything at face value, always question what the source of the information is and whether that source is credible. This is why I get so frustrated by all the random blogs that are posted on this sub with news-like headlines, as if they are from credible news sources. They aren't, they are written by random people whose credibility is completely unknown and spammed on AI subs for click ad revenue. 90% of the time they contain unsubstantiated rumors and hyper-sensationalized, misleading, and/or outright false information. Like the blog article that made the 1 trillion parameter claim. Those of us with a skeptical eye knew it was bs immediately.
And by the way, even if it comes from a credible news agency, then you still have to consider potential bias. Bias is a lot easier to deal with though than straight up fake information.
So basically, yes you need to be vigilant, but not just from AI, but from all sources of information. Doesn't take a large language model to write an article and mislead.
ChronoPsyche t1_j5ta1cg wrote
Reply to comment by turnip_burrito in What ethical ramifications do programmers, corps, & gov take into consideration to protect AI consciousnesses that may emerge? by chomponthebit
Just because it has a mechanism doesn't mean it can necessarily be traced and monitored. That's the whole idea behind emergence: a phenomenon that was not intended but arose from an unexpected interplay of complex elements.

Whether consciousness can come about from AGI or ASI is unknown, but researchers have acknowledged the possibility, and that is what OP is asking about.
ChronoPsyche t1_j4y9pg7 wrote
Reply to comment by ihateshadylandlords in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
That's exactly what's happening. A lot of people staked their life on GPT4 being AGI and are in denial when Altman just straight up called it bullshit.
ChronoPsyche t1_j4ifg7i wrote
Reply to When will humans merge with AI by [deleted]
Let me check my calendar.
ChronoPsyche t1_j4gbdy7 wrote
Reply to comment by Shelfrock77 in Soon Chat GDP will be doing medical exams. by JohnMcafee4coffee
Lmao, GPT4 will not be diagnosing and treating people. At least, not anything more official than just asking for advice, which ChatGPT can already do. It has nothing to do with ability and everything to do with regulations and liability.
ChronoPsyche t1_j4gb4xl wrote
Lol, absolutely not. No company is going to take on that legal liability. Maybe in the far future, but doctors are among the last jobs that will be automated, since it involves people's lives and there are very strict regulations.
And I know you said "may not replace a doctor", but if they are diagnosing and treating you then that is the function of a doctor.
ChronoPsyche t1_j39ccoa wrote
Reply to comment by Scarlet_pot2 in We need more small groups and individuals trying to build AGI by Scarlet_pot2
Here's the biggest problem with trying to crowd-source research from beginners: you don't know what you don't know. You get 100k beginners and ask them to try to figure out AGI, and they'll come up with a bunch of solutions that have already been tried, thinking they're novel but not realizing it's been done before due to lack of experience.
I've tried to do something similar myself, not for AGI but for something else in another domain within machine learning. Thought I'd found gold and was a genius, only to discover I had just reinvented the wheel with an older technique that was abandoned for not being feasible. As a result, all that happened was I learned firsthand why that technique was no longer used (and that it had even been tried in the first place). It was a great learning experience, but that's all it was.
Depth of experience is invaluable. Research builds on past research, but in order to know what to build on and how to build on it, you gotta be experienced within the field. You gotta truly understand everything else that's already been tried.
I don't think you really appreciate everything that goes into research. It's a common fallacy for people who are beginners, like I said, you don't know what you don't know.
Of course, that shouldn't stop anyone from trying. You're more than welcome to take your own advice. As for me, I am focusing more on novel ways to use advanced AI built by others in software applications. OpenAI just creates the tools, but someone's gotta use those tools to create something useful. That's where people with breadth of experience who lack the depth of experience necessary for rigorous research can excel. I'm not going to try and beat giant corporations with teams of PhDs and billions in funding at their own game.
ChronoPsyche t1_j3978ph wrote
Reply to comment by Scarlet_pot2 in We need more small groups and individuals trying to build AGI by Scarlet_pot2
You think researchers at Google and OpenAI aren't constantly trying to figure out new, more efficient algorithms? And these are researchers with PhDs in machine learning and billions in funding to carry out experiments, not people who just watched some online Python videos.
While what you say isn't impossible, you're making it sound way easier than it actually is. Sounds more like wishful thinking.
ChronoPsyche t1_j395ifv wrote
Reply to comment by Scarlet_pot2 in We need more small groups and individuals trying to build AGI by Scarlet_pot2
But you do need to train a multi-million dollar model. It is extremely expensive to do. That's why the only companies that have produced LLMs worth anything are ones with billions in funding. Google and Microsoft-backed OpenAI.
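For a rough sense of scale, here's a back-of-envelope sketch. Every number in it is an illustrative assumption, not a quote from any provider: a widely cited estimate of ~3.14e23 FLOPs to train a GPT-3-scale model, ~1e14 FLOP/s sustained per GPU, and ~$2 per GPU-hour.

```python
# Back-of-envelope training cost estimate.
# All figures below are illustrative assumptions, not real pricing.
flops_needed = 3.14e23          # rough training compute, GPT-3 scale
flops_per_gpu_second = 1e14     # assumed sustained throughput per GPU
price_per_gpu_hour = 2.0        # assumed cloud price in dollars

gpu_hours = flops_needed / flops_per_gpu_second / 3600
cost = gpu_hours * price_per_gpu_hour
print(f"{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f}")
```

Under those assumptions it works out to roughly $1.7M in raw compute alone, before counting failed runs, engineering salaries, and data costs, which is exactly why only billion-dollar-backed labs are producing these models.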
ChronoPsyche t1_j37ak6l wrote
And who is funding this?
ChronoPsyche t1_j3575rb wrote
Reply to comment by myusernamehere1 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Fair enough.
ChronoPsyche t1_j355vbk wrote
Reply to comment by myusernamehere1 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
"I agree with your conclusion but I just thought id point out that your arguments are bad". Lol that's rather pedantic but okay. You do you.
ChronoPsyche t1_j355cjq wrote
Reply to comment by myusernamehere1 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
What is your argument then? You haven't actually stated an argument, you've just told me mine is wrong.
ChronoPsyche t1_j353a24 wrote
Reply to comment by myusernamehere1 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Nothing you're saying is relevant. Anything could be possible, but that isn't an argument against my claims. My keyboard could have strange alien sensory modalities that we don't understand. That doesn't make it likely.
ChronoPsyche t1_j350ri7 wrote
Reply to comment by myusernamehere1 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
I mean they do hold up to scrutiny. We have no reason to think that a probability model that merely emulates human language and doesn't have any sensory modalities could be sentient.
That's not an airtight argument because, again, I can't prove a negative, but the definition of sentience is "the capacity to experience feelings and sensations," and ChatGPT absolutely does not have that capacity, so there's no reason to think it is sentient.
ChronoPsyche t1_j34zsvq wrote
Reply to comment by 2Punx2Furious in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
>But I don't really care about sentience, "awareness" or "consciousness". I only care about intelligence and sapience, which it seems to have to some degree.
Okay, but this discussion is about sentience so that's not really relevant.
ChronoPsyche t1_j34yu96 wrote
Reply to comment by myusernamehere1 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
> i just dont think you have arrived at any meaningful reasons that it couldnt be concious.
I don't need to arrive at meaningful reasons why it couldn't be conscious. The burden of proof is on the person making the extraordinary claim. OP's proof for it being conscious is "because it says it is".
Also, I'm not saying it can't be conscious as I can't prove a negative. I'm saying there's no reason to believe it is.
ChronoPsyche t1_j34vw1w wrote
Reply to comment by myusernamehere1 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Actually, I have no clue. We've never grown a human brain in a lab. It's impossible to say. That's kind of irrelevant though because we know that a human brain has the hardware necessary for sentience. We don't know that for ChatGPT and have no reason to believe it does.
And when I say perception, I don't just mean perception of the external environment, but perception of, or awareness of, anything at all. There is no mechanism by which ChatGPT can perceive anything, internal or external. Its only input is vectors of numbers that represent tokenized text. That's it.
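To make that concrete, here's a toy sketch of what "tokenized text" means. The vocabulary and tokenizer here are made up for illustration; real models use much larger, learned subword vocabularies, but the point stands: the model only ever sees integer IDs, not the world.

```python
# Toy tokenizer: maps words to integer IDs, unknown words to 0.
# The model downstream sees only these numbers, nothing else.
vocab = {"<unk>": 0, "hello": 1, "world": 2}

def tokenize(text):
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("Hello world"))   # [1, 2]
print(tokenize("hello mars"))    # [1, 0] -- "mars" is out of vocabulary
```

Everything the model "knows" about a sentence is a list like `[1, 2]`. There's no channel for sensation in there.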
Let's ask a better question, why would it be conscious? People think because it talks like a human, but that's just a trick. It's a human language imitator and that's all.
ChronoPsyche t1_j33atzl wrote
Perception is necessary for sentience. ChatGPT does not have any perception. This is ridiculous.
ChronoPsyche t1_j31pnq4 wrote
Reply to comment by DungeonsAndDradis in 2022 was the year AGI arrived (Just don't call it that) by sideways
The curve reaches back to the agricultural revolution, so a little shift can be anywhere from years to decades. I personally think we'll get AGI by 2030. We definitely don't have it yet though. It's also not clear if LLMs are sufficient for AGI.
ChronoPsyche t1_j71i9sa wrote
Reply to comment by captainjake9 in How long do you guys think it’s going to be before the eleven labs speech synthesiser source code gets leaked? by captainjake9
Source code does not leak "literally all the time".