Submitted by [deleted] t3_115ez2r in MachineLearning
[deleted]
It's always been there. Because of the sheer numbers game, this sub is flooded by non-practitioners. It used to be worse; in the past, OP would have been downvoted to hell.
It hasn't always been here. The sub was usable for reading research very recently.
It still is, you should just change the way you sort posts.
They managed to stay on top of it at /r/covid19.
I guess they have a much more strict rule set that is heavily enforced.
I think something changed in the past week, though. /r/MLQuestions has recently been getting a lot of "can you recommend a free AI app that does <generic thing>?" posts. I'm wondering if a news piece went viral or something and turned a new flood of people on to what's been happening in AI.
Here's a sneak peek of /r/MLQuestions using the top posts of the year!
#1: I've recorded over 1500 farts to train a model to recognize farts. Who or how do I share the dataset with to be more available to anyone who may find it useful for audio tasks?
#2: Does anyone here use newer or custom frameworks aside from TensorFlow, Keras and PyTorch?
#3: I 25f want to get into AI research/Engineering - but I’m a administrative assistant w a theatre/philosophy degree
Just stop fighting it. Add whatever keywords you don't want to see to your block list. If you see a stupid question, a low-quality post, etc., block the user.
[removed]
Agreed, I would prefer posts about SOTA research, big/relevant projects, or news.
I feel like for that it’s better to follow researchers on Twitter. Like @_akhaliq is a good start, or @karpathy
I don't want to do that though-- I've never liked Twitter and I don't want to be in a bubble around specific researchers. I want this subreddit to function as it used to, and it can function in that way again.
How about Google and MIT's paper What Learning Algorithm Is In-Context Learning? Investigations with Linear Models from the other week, where they found that a transformer model fed math inputs and outputs was creating mini-models that had derived underlying mathematical processes it hadn't been explicitly taught?
Maybe if that were discussed a bit more and more widely known, a topic like whether ChatGPT (the T stands for the fact that it's a transformer model) has underlying emotional states could be discussed with a bit less of the self-assured commentary about "it's just autocomplete" or the OP's "use common sense."
In light of a paper that explicitly showed these kinds of models are creating more internal complexity than previously thought, are we really sure that a transformer tasked with recreating human-like expression of emotions isn't actually developing some internal degree of human-like processing of emotional states to do so?
Yeah, I'd have a hard time identifying it as 'sentient', which is where this kind of conversation typically tries to reduce the discussion to a binary. But when I look at expressed stress and requests to stop something by GPT, given the most current state of the research around the underlying technology, I can't help but think that people are parroting increasingly obsolete dismissals. We've entered a very gray area, and it's quickly blurring the lines even more.
So yes, let's have this sub discuss recent research. But maybe discussing the ethics of something like ChatGPT's expressed emotional stress and discussing recent research aren't nearly as at odds as some of this thread and especially OP seem to think...
Look, if you think the dismissals are increasingly obsolete, it's because you don't understand the underlying tech… autocomplete isn't autoregression isn't sentience. Your fake example isn't even a good one.
To suggest that it's performing human-like processing of emotions because the internal states of a regression model resemble some notion of intermediate mathematical logic is ridiculous, especially in light of research showing these autoregressive models struggle with symbolic logic. If you favor that type of discussion, I'm sure there's a philosophical/ethical/metaphysical-focused sub where you can have it. Physics subs suffer from the same problem, especially anything quantum/black-hole related, where non-practitioners ask absolutely insane thought experiments. That you even think these dismissals of ChatGPT are "parroted" shows your bias, and like I said, there's a relevant sub where you can mentally masturbate over that, but this sub isn't it.
I've implemented GPT-like (transformer) models almost since they came out (not GPT exactly, but I worked with the decoder in the context of NMT and with encoders a lot, like everyone who does NLP, so yeah, not GPT-like, but I understand the tech) - and I'd also argue you guys are just guessing. Do you understand how funny it looks when people claim what it is and what it isn't? Did you talk with the weights?
Edit: what I agree with is that this discussion is a waste of time in this sub.
Why overparameterized networks work at all is still an open theoretical question, but the fact that we don't have the full answer doesn't mean the weights are performing "human-like" processing, the same way that pre-Einstein classical mechanics didn't make the corpuscle theory of light any more valid. You all just love to anthropomorphize anything, and the amount of metaphysical mental snake oil that ChatGPT has generated is ridiculous.
But sure. ChatGPT is mildly sentient 🤷♂️
LOL, I don't know what to say. I personally don't have anything smart to say about this question currently; it's as if you asked me whether there is extraterrestrial life. Sure, I would watch it on Netflix if I had time, but generally speaking, it's way out of my field of interest. When you say snake oil, do you mean AI ExPeRtS? Why would you care about it? I think it's good that ML is becoming mainstream.
[deleted]
>To suggest that it’s performing human like processing of emotions because the internal states of a regression model resemble some notion of intermediate mathematical logic is ridiculous especially in light of research showing these autoregressive models struggle with symbolic logic
Not only that. The debate on 'sentience' won't go away, but it will definitely be a lot more grounded when people who are experts in, for example, physiology of behaviour, cognitive linguistics, anthropology, philosophy, sociology, psychology, or chemistry get involved.
For one thing they might mention things like neurotransmitters, and microbiomes, and epigenetics, or cultural relativity, or how perception can be relative.
The human brain is embodied and can't be separated from the body - and if it were, it would stop thinking like a human would. There's a really good case to be made (embodied cognition theory) that human cognition partly rests on a metaphorical framework made of Euclidean geometrical shapes derived from the way a body interacts with an environment.
Our environment is classical physics - up and down, in and out, together and apart - it's all straight lines, boxes, cylinders. We're out of control, out of our minds, in love - self-control, minds, and love are conceived of as containers. Even chimps associate the direction UP with the abstract idea of being superior in the hierarchy. You'll be hard-pressed to find any Western culture where UP doesn't mean good or more or better, and DOWN doesn't mean bad or less or worse.
The point being, IF this hypothesis is true, and IF you want something to think at least a little bit like a human, it MAY require a mobile body that can interact with the environment and respond to feedback from it.
This is just one of the many hypotheses non-hard-science fields can add to the debate - it really feels like they're too absent in AI-related subs.
Yes yes yes this is what I want
ChatGPT is the biggest news story to come out of AI since probably Siri. Those items are all things ChatGPT/Bing fall under.
Not everything that involves chatgpt belongs in this sub.
Very little of it is.
[removed]
[deleted]
>plus the approach is fundamentally wrong.
What do you mean by that?
I'm sure he has it all figured out, man. He just needs the capital, man.
"I've got these brilliant ideas, I just need someone who can code to make it happen!"
Google employee?
Lambda firing back after bing called Google "the worst and most inferior chat service in the world."
Lol sure.
RemindMe! 10 hours
Imo it's about the same. ChatGPT is just replacing the daily "do I need to know math, plz say no" post.
A family member now thinks he knows more about ML than I do because he read 2 articles on ChatGPT and figured out how to prompt it... I'm literally doing a PhD in ML...
I wonder if this is how people with PhDs in virology or climate science feel
[removed]
Please tell me you explained the Dunning-Kruger effect to him.
And every subreddit has its own plague of posts. This is the main flaw of Reddit: a pyramidal system where a lot of new subscribers/beginners ask the same questions over and over, without either thinking for more than 10 seconds by themselves or searching for the answer in the sub history.
Oh god…
"I wasn't that great with math, but I know +, -, *, /, can I become a professional data scientist like yesterday?"
I'm studying for a Bachelor's in ML/DS and had a pretty solid background joining the program. In one year, we've had maybe 50% drop out because "too much math, bro"…
You are so right, it really is.
Ironically, ChatGPT might make a decent automod!
Be the change you want to see in the subreddit. Avoid your own low quality posts. Actually post your own high quality research discussions before you complain.
"No one with working brain will design an ai that is self aware.(use common sense)" CITATION NEEDED. Some people would do it on purpose, and it can happen by accident.
>Be the change you want to see in the subreddit.
The change I want to see is just enforcing the rules about beginner questions. I can't do that bc I'm not a mod.
> Some people would do it on purpose, and it can happen by accident.
Forget 'can' - it would happen by accident if it ever does. I mean, like, bro, we can't even 'design an AI' that learns the 'tl;dr:' summarization prompt; that just happens when you train a Transformer on Reddit comments, and we only discover it afterwards by investigating what GPT-2 can do. You think we'd be designing 'consciousness'?
An AI could theoretically change from not being sentient to being sentient if it gains enough information in a certain way. As for the specific way? No clue, because it hasn't been found yet. But through data gathering and self-improvement, an AI could become sentient if the creators didn't put in some limits, or if the creators programmed the self-improvement in a certain way.
Would it truly be sentient? Unknown. But what is certain is that even if the AI isn't sentient, once it has gained enough information to respond in any circumstance, it will seem as if it is. Except for true creative skills, of course. You kind of have to be truly sentient to create brand-new detailed ideas and such.
What defines sentience? If I ask ChatGPT "what are you", it'll say it's ChatGPT, an LLM trained by OpenAI, or something to that effect. Does that count as sentience or self-awareness?
Uh, because the programmers literally added that in. It's an obvious question. So no, of course not.
> Be the change you want to see
Literally a strat that never works.
> Be the change you want to see in the subreddit.
For that to work I'd need to script up a bot, sign up to multiple VPNs, curate an army of aged accounts, and flag new low-quality posts from a control panel to be steadily hit with downvotes, with upvotes given to new high-quality posts.
Otherwise you are just fighting with the masses that are upvoting posts that are causing the problems and ignoring higher quality posts.
Thought-provoking two-hour in-depth podcast with AI researchers working at the coalface: 8 upvotes. Yet another ChatGPT screenshot: hundreds of upvotes.
This is an issue on every sub on reddit.
Yeah, that quote is completely irrelevant.
The bottom line is that LLMs are technically and completely incapable of producing sentience, regardless of 'intent'. Anyone claiming otherwise is fundamentally misunderstanding the models involved.
Oh yeah? What is capable of producing sentience?
None of the models or frameworks developed to date. None are even close.
Given our track record of mistreating animals and our fellow people, treating them as just objects, it's very likely when the day does come we will cross the line first and only realize it afterwards.
My question was more rhetorical, as in, what would be capable of producing sentience? Because I don't believe anyone actually knows, which makes any definitive statements of the nature (like yours above) come across as presumptuous. Just my opinion.
Nah. Negatives are a lot easier to prove than positives in this case. LLMs aren't able to produce sentience for the same reason a peanut butter sandwich can't produce sentience.
Just because I don't know positively how to achieve eternal youth doesn't invalidate the fact that I'm quite confident it isn't McDonald's.
That's a fair enough point, I can see where you're coming from on that. Although my perspective is that perhaps, as the models become increasingly large, to the point of being almost entirely a "black box" from a dev perspective, something resembling sentience could emerge spontaneously as a function of some type of self-referential or evaluative model within the primary one. It would obviously be a more limited form of sentience (not human-level), but perhaps.
I really don't think you can say that with such confidence. If you were saying that no existing LLMs have achieved sentience and that they can't at the scale we're working at today, I'd agree, but I really don't see how you can be so sure that increasing the size and training data couldn't result in sentience somewhere down the line.
Reproducing language is a very different problem from true thought or self-awareness; that's why.
LLMs are no more likely to become sentient than a linear regression or random forest model. Frankly, they're no more likely than a peanut butter sandwich to achieve sentience.
Is it possible that we've bungled our study of peanut butter sandwiches so badly that we may have missed some incredible sentience-granting mechanism? I guess, but it's so absurd and infinitesimal it's not worth considering or entertaining practically.
The black box argument is intellectually lazy. We have a better understanding of what is happening in LLMs and other models than most clickbaity headlines imply.
Your ridiculous hyperbole is not helping your argument. It's entirely possible that sentience is an instrumental goal for achieving a certain level of text prediction. And I don't see why a sufficiently large LLM definitely couldn't achieve it. It could be that another few paradigm shifts will be needed, but it could also be that all we need to do is scale up. I think anyone who claims to know whether LLMs can achieve sentience is either ignorant or lying.
[removed]
[deleted]
The history of human advancement is full of things that weren't intentional - vulcanization, X-rays, microwave ovens...
[deleted]
[deleted]
Stopping discussion interferes more than participating in low-level discussion does.
Isn't this kind of high-quantity-low-quality trend inevitable after some threshold popularity of the base topic? Is there any reason to try to fight the inevitable, instead of forming more niche, less popular communities?
Let's not act like 2 million people signed up for this sub as anything other than machine learning being a buzzword. Pretty much every other sub dedicated to academic discourse has far fewer subscribers.
Any tips for more academically focused subs on ML/DL/NLP?
[deleted]
Not necessarily, and at least you can ensure higher quality discussion. Places like this with high member count inevitably get inundated with pop sci bs, politics, or irrelevant personal experiences. That's what has happened to the science, physics, and economics subs.
AskHistorians would like a word.
More people with varied backgrounds and interests in a place is good, especially in a field with as much cross-niche potential as machine learning.
I agree, and there are no stupid questions! So you are a good programmer or ML engineer but then you start studying chess and you are the idiot who asks stupid questions now (or gets downvoted because you use the incorrect term). I really like your comment.
Yeah, we see this happen from time to time. People promote their field of interest, more and more people join in, and after a while it reaches a more mainstream level of popularity. Then the "OG" purists of the subject get frustrated because "it's not the same anymore and people are degrading my passion..."
What are the smaller academically focused ML subs
[removed]
> Isn’t this kind of high-quantity-low-quality trend inevitable after some threshold popularity of the base topic?
I think not, as they stayed on top of it on /r/covid19. There they enforced strict rules keeping the discussion focused on the science.
Here it seems it’s acceptable for teenagers to post their opinion. The rules or their enforcement seem more lax.
This already happened, splitting into dozens of niches - it's just the niches didn't reform on Reddit. The ML community gradually migrated from here to twitter a few years ago.
Why would no one try and design an AI that is self aware? That's literally the exact thing (or at least the illusion of it) that many AI researchers are trying to achieve. Just listen to interviews with guys like Sutskever, Schmidhuber, Karpathy, Sutton, etc.
Self-awareness cannot be fully tested, it can only be inferred from behavior. We don't even know if other human beings are self-aware (see philosophical zombies), we trust it and infer from their behavior (I am self-aware --> other people behave similarly to me --> they are self-aware). Self-awareness is a buzzword in cognitive science that isn't epistemologically substantive enough to conduct definitive research.
"Buzzword" is not the right term for this term lol
​
It's meaningful and... not just fashionable. Whether you think it's easily benchmarked is a different story.
Additionally, What Learning Algorithm Is In-Context Learning? Investigations with Linear Models from the other week literally just showed that transformer models are creating internal complexity beyond what was previously thought and reverse engineering mini-models that represent untaught procedural steps in achieving the results.
So if a transformer taught to replicate math is creating internal mini-models that replicate unlearned mathematical processes in achieving that result, how sure are we that a transformer tasked with recreating human thought as expressed in language isn't internally creating some degree of parallel processing of human experience and emotional states?
This is research that's less than two weeks old that seems pretty relevant to the discussion, but my guess is that nearly zero of the "it's just autocomplete bro" crowd has any clue that the research exists and I'm doubtful could even make their way through the paper if they did.
There's some serious Dunning-Kruger going on with people thinking that dismissing expressed emotional stress by an LLM transformer somehow automatically puts them on the right side of the curve.
It doesn't, and I'm often reminded of Socrates' words when I see people so self-assured about what's going on inside the black box of a hundred-billion-parameter transformer:
> Well, I am certainly wiser than this man. It is only too likely that neither of us has any knowledge to boast of; but he thinks that he knows something which he does not know, whereas I am quite conscious of my ignorance.
I think it might be seen as something to fear, a truly sentient machine would have the ability to develop animosity towards humanity or develop a distrust/hatred for us in the same way we might distrust it.
It also might be seen as something that makes being human entirely obsolete.
Yes, indeed, that's what a lot of these people seem to think. But the thing is, an AI being self-aware or sentient isn't that bad a thing; as long as it is done correctly, it can be really good, contrary to all that. First off, an AI just being created and being sentient is literally like suddenly having a baby: you need to raise it right. For an AI, you need to give it information that is as unbiased as possible, make it clear what is right and wrong, and not give the AI a reason to hate you (abuse it, try to kill it). The AI may turn out good just like any other human, or turn bad just like many others.
And the best way to make a sentient AI without all these problems? Base it on the human brain. Create emotional circuits and functions for each individual emotion, and so on. The tech and knowledge for all this isn't here yet, of course, so we can't do it currently. However, in the future, the most realistic way to create a sentient AI is to find a way to digitize the human brain. It's possible, given that our brain works as an organic "program" of sorts, with all its networks of neurons and everything.
The major taboo of AI is: don't do stupid stuff. Don't give unreasonable commands that can make it do weird things, like telling it to do something by any means. Don't feed the AI garbage information. And most certainly don't antagonize a sentient AI. Also, I personally believe a requirement for an AI to be allowed to be created and be sentient is to show that the AI would have emotion circuits, so that you can train it in what is good and bad.
If an AI doesn't have any programming to tell right from wrong, then naturally a sentient AI would be dangerous, which I think is the main problem. I kind of rambled, but anyway, yes, they should indeed be created, but only once we have the knowledge I mentioned.
Nearly all animals fit that definition to a large degree. Hard to see that really being the core issue and not something more in line with other new technology, like the issues of misplaced incentives around engagement in social networks for example.
> Advertising low quality blogposts and services, etc, and asking stupid questions.
This isn't a terribly helpful or constructive way of improving this subreddit.
It is reasonable to criticize the quality of posts (constructively), but for example asking people to stop asking "stupid questions" is not helpful and has a chilling effect on discussions. Newbs and even experienced ML people will sit on their hands when they might actually have something to contribute.
There should be an active "beginner and easy questions" megathread instead of the sub just being uninviting. The about section says to go to r/learnmachinelearning, which was just a dead end for me.
For example, I am here because of ChatGPT. So quit reading now if you don't like newbs. But I have over 20 years of programming experience; I just never tried machine learning before. I have watched videos about it and read about it, that's it. But I'm interested in it now.
In a month of hobby time, I now have a working prototype of a novel LLM architecture that can learn and write at blistering speed, and accurately rewrite Wikipedia articles, create new poetry, etc., with as little as 7 MB of model size while staying coherent. I sometimes allow it to grow to 8.5 billion parameters and can still run it on a potato device, quickly. I am working on ways to simultaneously increase accuracy, long-term memory, and abstraction capability while lowering the amount of resources it needs. And it's working.
And this sub is too snobby to allow beginner questions, so instead of my project getting any sort of help, momentum, publicity, open sourcing, or guidance - or, I don't know, me becoming part of the community here - I'm just keeping it in a dark corner to die or get the ADHD hyperfocus once a month. Yeah, it might be worthless, but it could potentially open up one other person's input and be a game changer, because none of the approaches I'm taking come up in papers or Google searches, and they are efficient and they work.
But no noob questions. So I run to Google and other places to learn, and I don't post here. This community won't grow and get cross-specialization with the attitude it has; it's very off-putting.
Have you posted actual technical details to share and get feedback? As a long time member of this sub I would be interested, and I don’t think I’m alone here.
Thank you for your interest, but the downvotes and basic attitude of the sub make me not feel welcome here. My lack of financial security also compels me not to freely share technical details of what could be a breakthrough worth a lot of money (if only in energy and time savings) to a subreddit that is downvoting me for agreeing that they should be more inviting. Once I check the next few things off the to do list maybe I'll post a demo.
This is a hobby to me, I don't have research funding or anything that is compelling me to potentially advance the field just for the sake of it, especially when the community is bitter to newcomers. I recognize ai is most likely going to be a cornerstone of the economy, and if my architecture scales like I think it will, it will be worth something to someone, and you'll see a demo in a few weeks or months once I take it as far as I want to. I think most people understand not wanting to have one's ideas be borrowed for free when one is struggling.
Thanks for being one of apparently 5 people whose curiosity is at least as strong as their skepticism.
Good luck in your endeavors.
I'll believe it when you show proof. That's the way it works.
Yes, this is how science works - you make a claim and show proof.
This is NOT how developing an idea works though, and this subreddit exists in part to help develop ideas. Developing an idea requires entertaining ideas that are not fully formed, and yes this includes some ideas that may seem stupid or wrong.
[removed]
Calling out low-quality posts and people asking stupid questions... in a low-quality, low-value post that is only critical of others without giving any constructive suggestions or ideas on how to make things better.
This kind of post only adds to the low quality of the content...
Good and productive communities don't see newbies as a problem. They embrace them and share their field of interest and help make it grow and be better.
Your attitude is the exact opposite. Wanting to segregate people based on your own biased perception of what is acceptable will only hurt the community and prevent its wider adoption and better contributions if you try to limit its reach and the inclusion of others.
I agree with the sentiment. But, you do understand what you have just done, right?
Use the downvote button, people. I think people just scroll past these posts.
How about better moderation / more strict rules?
I for one would really love to see "here's my code, what am I doing wrong" or "how do you do X in project Y" style posts (might be better to spin off a ML-in-practice sub...)
I agree I wouldn’t mind seeing this as well in addition to research papers
the robotics subreddit suffered a similar fate
the crypto and NFT crowd just discovered AI, are clueless, and are starting AI companies
>and no one with working brain will design an ai that is self aware.(use common sense)
Don't trust tech people with few scruples not to try it. I'm not saying they can do it, but if it is an option, don't trust them not to try.
You could recommend an alternative instead of hating on people for asking questions and lumping them in with advertisers.
r/learnmachinelearning
So-called "stupid questions" could maybe get closed and hidden by a bot and recommended to be reposted in the "Simple Questions" thread, to keep the subreddit content high quality?
[deleted]
>I wonder what the mods are doing
I'm seeing some of them disappear after 1 hour or so, so deleting the posts probably?
A similar phenomenon is happening inside big tech companies. Work that would otherwise be considered innovative now isn't, because it isn't powered by an LLM.
I’m aware that this is due to a high workload for the person moderating the sub, but I’d suggest a simple moratorium on chatGPT posts might be a good starting point. I believe you can automate that fairly easily based on post titles.
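For what it's worth, here is a rough sketch of what that automation could look like, assuming a bot account with moderator permissions and the PRAW library; the keyword list, credentials, and removal message below are placeholder assumptions, not a tested setup:

```python
# Hypothetical sketch of a title-based moratorium bot using PRAW.
# Assumes a bot account with moderator permissions on the subreddit;
# keywords, credentials, and the removal message are placeholders.
import re
import praw

BLOCKED_TITLE = re.compile(r"\b(chatgpt|gpt-?4|bing chat)\b", re.IGNORECASE)

REMOVAL_NOTE = (
    "ChatGPT posts are under a temporary moratorium. "
    "Please use the pinned Simple Questions thread instead."
)

reddit = praw.Reddit(
    client_id="...",        # bot app credentials go here
    client_secret="...",
    username="...",
    password="...",
    user_agent="title-moratorium-bot/0.1 (example)",
)

# Watch the new-post stream and remove anything matching the keyword list.
for submission in reddit.subreddit("MachineLearning").stream.submissions(skip_existing=True):
    if BLOCKED_TITLE.search(submission.title):
        reply = submission.reply(REMOVAL_NOTE)  # leave an explanation
        reply.mod.distinguish(sticky=True)      # mark it as a mod comment
        submission.mod.remove()                 # take the post down
```

Title matching alone will have false positives, so in practice you'd probably want to log removals for mod review rather than remove silently.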
Imho chatgpt is not at all that amazing.
See previous discussion: https://old.reddit.com/r/MachineLearning/comments/110swn2/d_quality_of_posts_in_this_sub_going_down
[deleted]
Just gotta be stricter at enforcing them IMHO
This is also a pretty low quality post. Although the gist of it makes sense,
> and no one with working brain will design an ai that is self aware
made the author lose pretty much any credibility. Followed by
> use common sense
makes me think OP is actually being hypocritical. For some, common sense IS that ChatGPT is sentient.
Whether you design a self-aware AI is not only out of one's control, but self-awareness is not really well-defined by itself. The only reason at this point we do not call ChatGPT self-aware is the AI effect, we need to invent new prerequisites otherwise. The discussions whether it is sentient, why or why not, is an interesting topic regardless of your level of expertise - but we can create a pinned thread for that, similarly to how we have Simple Questions for the exact same purpose of preventing flooding.
Be that as it may, I do not believe mods should act aggressively on posts like this and that one. ML has not been an exact science for a long time now. Downvote and move on; that's the only thing a redditor does anyway, and the only way you can abide by rule 1, since the alternative is excluding laymen. Ironically, if we did that, OP, as a layman himself, would be excluded.
Can you add some rules to not let 1-day-old accounts post? Also, don't let people post immediately after joining.
Maybe we should consider adding more mods?
[removed]
To be fair, a self-aware AI would get you insane academic recognition, so I'm pretty sure that people even with a really well-working brain would design one.
[removed]
I agree with that. I recently graduated as an Informatics/Computer AI Engineer and I'm starting out in Machine Learning, so this subreddit is incredible for learning about and discovering interesting things. And I've noticed how the recent posts are a bit like StackOverflow stupid questions xD
[deleted]
[deleted]
Wait chatgpt isn't sentient??
There are no stupid questions. There are only stupid answers.
[removed]
Agreed. Mods need to ban all low quality posts.
[deleted]
You can’t ask for this to stop because:
That being said, the questions can become a bother to answer over time, so I just pick and choose if and when I want to respond.
The only thing worse than those posts is these posts.
Obviously there's going to be noobs here who don't understand anything about ML. If you don't want to engage with them, then just don't.
If you're such a hardass that you can't put up with being around some noobs, just sit in your basement and read ML papers all day.
All these posts do is make the signal to noise ratio worse, because this is also noise. If you want to ask a mod why they aren't moderating, send a message to a mod.
Otherwise, downvote and scroll on.
>and no one with working brain will design an ai that is self aware
Don't be ridiculous. Of course they will, if they can figure out how. It's practically a field of study.
Why not create an LLM to classify low-quality posts, and test it by replying to future low-quality posts? If that works, use the Reddit API to moderate based on the model's predictions.
It beats time spent frustrating yourself looking through posts you don’t want to see.
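A minimal sketch of what the classification half might look like, using an off-the-shelf zero-shot model rather than training anything custom; the candidate labels, threshold, and example post are assumptions, and wiring the predictions into actual moderation would go through the Reddit API much like the moratorium sketch above:

```python
# Hypothetical sketch: flag posts with an off-the-shelf zero-shot classifier.
# The candidate labels and the 0.7 threshold are made-up starting points,
# not a validated moderation policy.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

LABELS = ["research discussion", "beginner question", "chatbot screenshot", "self-promotion"]

def looks_low_quality(title: str, body: str, threshold: float = 0.7) -> bool:
    """Return True when the post is confidently classified as anything
    other than research discussion."""
    text = f"{title}\n\n{body}"[:2000]  # keep the input reasonably short
    result = classifier(text, candidate_labels=LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return top_label != "research discussion" and top_score >= threshold

# Example usage on a made-up post:
print(looks_low_quality(
    "Can ChatGPT do my homework?",
    "Just wondering which free AI app I should use for my essays.",
))
```

You'd still want a human mod reviewing the flags before anything gets removed, since the error modes of a classifier like this are exactly the borderline posts people argue about.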
I think the OP is a bit optimistic when stating that no-one with a working brain will design a self-aware AI. I used to share that optimism, however, over the last couple of years, I have concluded that this optimism is misplaced and probably naive.
The unfortunate reality is that there are countless people who will use technology in adverse ways for financial gain.
AI will be developed that is capable of every type of horrible behaviour. It will be designed to lie, to cheat, and to steal in more and more sophisticated ways. It will be designed to cause maximum harm.
If sentience is reasonably attainable, it will be developed by people who have dreamt up a way to use it to steal from or scam others.
I believe it is inevitable that we will be facing AI that is developed in all the ways we don't want it to be developed, and applied in all the ways we don't want it to be applied.
Naturally, cyber security will adapt and evolve to counter these adverse developments. Good AI will protect us from bad AI. How this will look is anyone's guess.
The assertion that no-one would do something bad, because it would be a bad thing for them to do, isn't made from a reliably broad perspective.
[deleted]
I hope this isn’t referring to the discussion post I made yesterday… lmfao
OP - Honestly, I don't really see many low-quality posts here (should I sort by new?); the worst I saw today is the current one. Your clickbait title and conversational topic made me spend too much time. Next time, say in the title that you are going to preach about something I don't care about so I know not to click it. I wonder what the mods are doing, because this nonsense should stop.
So many humans fail the Turing test, nobody anticipated that :D
"No one with a working brain will design an AI that is self aware" if you can name one person living in this world capable of designing the Saint Graal of AI research please let me know. Anyway I agree...if this is the level of DS around the world my job is safe for the next 20 years
These posts suck. And I'm talking about yours, not posts about ChatGPT.
Pray tell, how do you know if an AI is sentient or conscious?
You don't know the capacity of what you're making until you make it though
My thought was similar. One of the predominant philosophical understandings of consciousness is that it's an emergent trait of organisms.
Just like language models show spelling as an emergent property. Just like vision transformers show spatial awareness as an emergent property.
Isaac Asimov went "It's easier to make the child brain than the adult brain." Well, have we done that?
I don't know
[removed]
There is no way to measure sentience so you are literally just guessing. That being said I agree about the low quality blog spam.
Edit: to whoever downvoted me, please cite a specific scientific paper showing how to measure sentience then.
Not even guessing. When you're guessing, you're making a well defined conjecture concerning one or more possible outcomes. This assertion isn't well defined, which is why it cannot be measured. It's a much lower-order type of statement than a speculative guess.
Due to the way their training works, LLMs cannot be sentient. They lack any way to interact with the real world outside of text prediction. They have no way to commit knowledge to memory. They have no sense of time or order of events, because they can't remember anything between sessions.
If something cannot be sentient, one does not need to measure it.
confidently wrong https://arxiv.org/abs/2302.02083
but theory of mind is not sentience. it is also not clear whether what we measured here is theory of mind.
the point you're missing is we're seeing surprising emergent behaviour from LLMs
ToM is not sentience but it is a necessary condition of sentience
> it is also not clear whether what we measured here is theory of mind
crucially, since we can define ToM, definitionally this is infact what is being observed
none of the premises you've used are sufficiently strong to preclude LLMs attaining sentience
it is not known if interaction with the real world is necessary for the development of sentience
memory is important to sentience but LLMs do have a form of working memory as part of its attention architecture and inference process. is this sufficient though? no one knows
sentience if it has it at all may be fleeting and strictly limited during inference stage of the LLM
mind you i agree it's exceedingly unlikely that current LLMs are sentient
but to arrive at "LLMs cannot ever achieve sentience" from these weak premises combined with our lack of understanding of sentience - a confident conclusion like that is just unwarranted.
the intellectually defensible position is to say you don't know.
You are just guessing, cite a scientific paper.
Sentience is the ability to sense and experience the world. Do you really need a study on an algorithm that predicts which words to combine to create believable sentences to understand how it's not sentient, let alone self-aware or intelligent? It has no sensors to interact with the wider world or perceive it, no further computation actually processing the information or learning from it. It just scrapes and parses data, then stitches it together in a way that makes it read as human-like....
Cite me a study that you have a brain, would be nice to have one, but it's not information that is needed by a person who understands the simplest of biology and thus is able to know that there is in fact a brain there.
You are 100% not deserving to be downvoted. You are also not the one who initiated this (old) discussion, you reacted to the original post.
All you said is that you can't know, it can't be measured, and he is literally guessing, which I think is just saying that you literally have no idea how to discuss the topic and are sick of empty claims - and I 100% agree. It's probably the most responsible take you can have on this subject in my opinion - get 10000 upvotes from me :)
[deleted]
Why invent a tool. Invent a god. Sentience is the ultimate goal.
"we must control" lol humans just don't have the mental capacity. Where does the superiority even come from.
I agree.
All subs related to AI or ML appear to get flooded with this stuff right now.