lifesthateasy t1_je9ftvv wrote
And your post is different... how, exactly?
lifesthateasy t1_jcxras0 wrote
Reply to [P] TherapistGPT by SmackMyPitchHup
Hooo leee, imagine if this has any of the issues ChatGPT had
lifesthateasy t1_jbim6l5 wrote
Reply to comment by crappleIcrap in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Look, it's really hard to argue with you when I present my findings and you're like "well I've never read anything of the like so it mustn't be true". Feel free to check this article: if you look closely, you'll find evidence that so-called "emergent abilities" only look emergent because we choose the wrong evaluation metrics. The usual metrics are biased toward usefulness to humans and don't account for gradual improvement; once you pick metrics that better describe the results, the apparent emergence disappears. If you look at GPT-3 holistically, its aggregate performance across benchmarks improves smoothly with scale, whereas a genuinely emergent ability would show up as a sudden jump rather than a smooth curve. https://www.assemblyai.com/blog/emergent-abilities-of-large-language-models/ Since I can't post images here, check the figure captioned "Aggregate performance across benchmarks for GPT-3 is smooth" in the above article, which supports this.
So even *if* emergent abilities were a thing, and even if you argued consciousness is one of them, the data shows there's nothing emergent about GPT's abilities, so consciousness could not have emerged from it either.
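To make the metric point concrete, here's a toy Python sketch (my own made-up numbers, not data from the article): if per-token accuracy improves smoothly with scale but you only count a 10-token answer as correct when every single token is right, the curve looks "emergent" even though nothing sudden happened underneath.

```python
# Toy illustration (made-up numbers): a smooth underlying ability vs. an
# "emergent"-looking exact-match metric on a 10-token answer.
per_token_accuracy = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95]  # improves smoothly with model scale
answer_length = 10

for p in per_token_accuracy:
    exact_match = p ** answer_length  # all 10 tokens must be right at once
    print(f"per-token acc {p:.2f} -> exact-match {exact_match:.4f}")

# Exact-match goes roughly 0.001 -> 0.006 -> 0.03 -> 0.11 -> 0.35 -> 0.60:
# the harsh metric sits near zero and then "suddenly" takes off, even though
# the underlying ability was improving gradually the whole time.
```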
Yes, GPT-3 is the third iteration, and I'm saying GPT-3 is static in its weights. It doesn't matter that they're making a GPT-4, because my point is that these models don't learn like we do. And they don't. GPT-4 is a separate entity. Even *if* GPT-3 had consciousness, it would have no connection to GPT-4, as they're separate entities on separate hardware, while human consciousness evolves within the same "hardware" and never stops learning. The brain even adds new connections until the end of our lives, which GPT-3 doesn't (and yes, you're severely misinformed about that 25-year age barrier, that's an antiquated notion. To prevent you from going "well I've never read that" again, here's an article, with plenty more to support it if you can google: https://cordis.europa.eu/article/id/123279-trending-science-do-our-brain-cells-die-as-we-age-researchers-now-say-no: "New research shows that older adults can still grow new brain cells." ). You can't even compare GPT-3 to GPT-4 in brain/human-consciousness terms, because GPT-4 will have a different architecture and quite likely be trained on different data. So it's not like GPT-3 learns and evolves: GPT-3 is set, and GPT-4 will be a separate thing - *completely unlike* human consciousness.
About determinism, I don't know if you're misunderstanding me on purpose, but what I'm saying is that an artificial neuron in an NN has one activation function, one input and one output (even though the output can be, and often is, a vector or a matrix). At best the network is bidirectional, but even bidirectionality is implemented with separate pathways going backwards; the activation functions themselves are feedforward, and for the same input they always give the same output. Brain cells, however, are not only multidirectional without extra backwards connections, they can also keep some residual electric charge that changes the output (both its direction and strength). This residual activation can affect the neuron's firing behavior in a number of ways, including increasing the strength of subsequent firing events and influencing the direction and timing of firing.
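If it helps, here's a minimal Python sketch of what I mean (toy weights I made up): an artificial neuron is just a fixed function of its input, so once the weights are frozen the same input always gives exactly the same output, with no residual state carrying over between calls.

```python
import numpy as np

def artificial_neuron(x, w, b):
    # A single artificial neuron: weighted sum of inputs through one activation function.
    return np.tanh(np.dot(w, x) + b)  # pure feedforward, no internal state

# Toy frozen weights, as they would be after training
w = np.array([0.5, -1.2, 0.3])
b = 0.1
x = np.array([1.0, 0.0, 2.0])

print(artificial_neuron(x, w, b))  # same input...
print(artificial_neuron(x, w, b))  # ...same output, every time; nothing "lingers" in the neuron
```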
Since I can't be arsed to type any more, here's someone else who can explain to you why brain neurons and artificial neurons are fundamentally different: https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7 Even this article has some omissions: I want to highlight that in the past we thought neurons always fire while receiving a stimulus and stop firing once the stimulus goes away (as artificial neurons do), but newer discoveries show that human neurons also exhibit persistent activity: neural firing that continues after the triggering stimulus goes away.
lifesthateasy OP t1_jb8zyyu wrote
Reply to comment by Disastrous_Elk_6375 in [D] Neat project that would "fit" onto a 4090? by lifesthateasy
Ooh great I'll look into those! Thank you!
lifesthateasy t1_jaold7x wrote
Reply to comment by crappleIcrap in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Do you mean OpenWorm, where they try to model a nematode at the cellular level? Having the connectome mapped out doesn't mean they've managed to model its whole brain. A connectome is just the schematic, and even that abstracts away what goes on inside the individual cells. Kinda like an old-school map: you can navigate by it, but it won't tell you where the red lights or shops are, or what people do in the city.
I like how you criticize me for not providing scientific evidence for my reasoning, but then you go and make statements like "most people wouldn't consider it is sentient" and expect me to accept that as a general truth.
I mentioned transformers only to point out that image generators and LLMs are similar in concept in a lot of ways, and yet people didn't start associating sentience with image generation. I didn't mean to imply that a certain architecture allows or disallows sentience.
You're talking about the emergent qualities of consciousness. A common view is that it emerges from the anatomical, cellular and network properties of the nervous system, is necessarily associated with the vital, hedonic and emotional relevance of each experience and external cue, and is intrinsically oriented toward a behavioral interaction with the latter. In addition, many argue it doesn't even "eventually emerge" but is intrinsic rather than added a posteriori. None of this is present in neural networks: artificial neurons don't have a continuously changing impulse pattern, they're basically just activation functions giving a deterministic response. Yes, there's randomness introduced into these systems, but once trained, individual artificial neurons are pretty deterministic.
What I'm trying to say is that when scientists argue for the emergent nature of consciousness, they argue it emerges from the specific properties of our neural architecture, which is vastly different from that of neural networks. So even if neural networks had some emergent features that appear for the tiny slice of time (compared to our consciousness being on for most of the day) when they're generating an answer, I wouldn't call that sentience or consciousness, as it fundamentally differs from what we understand as sentience. In addition, a neural network doesn't continuously change and learn new things; it doesn't evaluate options or change its neurons' activation functions. Once it's trained, it stays the same. The only things that temporarily change are in the memory module of the feedback system, and that only serves the purpose of being able to hold a conversation. Once your session ends, that gets deleted and doesn't feed back into the system. At least in ChatGPT there's no self-supervised learning going on during use, and the whole system is basically immutable apart from those LSTM-like modules that let it keep context. But even those get overloaded over time.
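To illustrate the "once it's trained, it stays the same" part, here's a rough PyTorch sketch with a toy stand-in model (obviously not ChatGPT itself): you can run inference on it all day and the weights never change; the only thing that differs between calls is the context you feed in.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained model: weights are fixed after training.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 8))
model.eval()  # inference mode, no learning happens here

weights_before = [p.clone() for p in model.parameters()]

with torch.no_grad():          # no gradients, no weight updates
    for _ in range(100):       # "talk" to it as much as you like
        prompt = torch.randn(1, 8)
        _ = model(prompt)

unchanged = all(torch.equal(a, b) for a, b in zip(weights_before, model.parameters()))
print(unchanged)  # True: after the whole "conversation" it's exactly the same entity it was before
```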
lifesthateasy t1_jaobf1i wrote
Reply to comment by crappleIcrap in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Well, machines might eventually get an intelligence similar to ours, but that would be AGI, to which we really have no path as of yet. These are all specialized systems, narrow intelligences. The only reason this argument about sentient AI got picked up nowadays is that this model generates text, which many more of us can relate to than generating art.
If you go down to the math/code level, both are built on basically the same building blocks and are largely similar (mostly transformer-based). Yet no one started writing articles about how AI was sentient when it only generated pretty pictures. For LLMs to be conscious, we would have to work in a very similar way, e.g. written language alone would have to be proof of our consciousness. It isn't: written language doesn't solely define our consciousness.
lifesthateasy t1_janyw2h wrote
Reply to comment by crappleIcrap in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Oh not you too... I'm getting tired of this conversation.
LLMs have no sentience and that's that. If you wanna disagree, feel free, just disagree to someone else.
lifesthateasy t1_janudsp wrote
Reply to comment by currentscurrents in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
So you want to debate my comment on sentience, and you do that by linking a wiki article about consciousness?
Ah, I see you haven't gotten past the abstract. Let me point you to some of the more interesting points: "Despite being subject to debate, descriptions of animal sentience, albeit in various forms, exist throughout the scientific literature. In fact, many experiments rely upon their animal subjects being sentient. Analgesia studies for example, require animal models to feel pain, and animal models of schizophrenia are tested for a range of emotions such as fear and anxiety. Furthermore, there is a wealth of scientific studies, laws and policies which look to minimise suffering in the very animals whose sentience is so often questioned."
So your core objection, questioning sentience just because it's subjective, is a paradox that can be resolved in one of two ways. Either you accept sentience and continue studying it, or you say it can't be proven, and then you can throw psychology out the window too. By your logic you can't prove to me that you exist, and if you can't even prove that, why do science at all? We don't assume pain etc. are proxies for sentience; we have a definition for sentience that we made up to describe this phenomenon we all experience. "You can't prove something that we all feel and therefore made up a name for, because we can only feel it" kinda makes no sense. We even have specific criteria for it: https://www.animal-ethics.org/criteria-for-recognizing-sentience/
lifesthateasy t1_janoimg wrote
Reply to comment by currentscurrents in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4494450/
Here's a metastudy to catch you up on animal sentience. Sentience has requirements, none of which a rock meets.
No, it's not. That's like saying you don't understand why 1+1=2 because you don't know how the electronic controllers in your calculator work. Look, I can come up with unrelated and unfitting metaphors too. Explainable AI is a field of its own; just look at the example below about CNN feature maps.
We absolutely can understand what each layer detects and how it comes together, if we actually start looking. For example, slide 19 of this deck shows such feature maps: https://www.inf.ufpr.br/todt/IAaplicada/CNN_Presentation.pdf
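If you want to poke at feature maps yourself, here's a rough PyTorch sketch (assuming you have torchvision installed and can download the pretrained ResNet-18 weights; any CNN works the same way): register a forward hook on a layer and inspect its activations directly.

```python
import torch
from torchvision import models

# Sketch: capture the feature maps of an intermediate CNN layer with a forward hook.
model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

feature_maps = {}

def save_activation(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

model.layer1.register_forward_hook(save_activation("layer1"))

with torch.no_grad():
    _ = model(torch.randn(1, 3, 224, 224))  # dummy image; use a real one in practice

print(feature_maps["layer1"].shape)  # torch.Size([1, 64, 56, 56]): 64 feature maps you can plot and inspect
```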
Can you please put any effort into this conversation? Googling definitions is not that hard: "Sentience is the capacity to experience feelings and sensations". Scientists use this to study sentience in animals, for example (not in rocks, because THEY HAVE NONE).
And yes, there have also been studies on animal intelligence, but please stop adding to the cacophony of definitions for whatever you want to claim an LLM has. I'm talking about sentience and sentience only.
lifesthateasy t1_janig00 wrote
Reply to comment by currentscurrents in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
They're black boxes in the sense that it's hard to oversee all of the activations together. But it's very easy to understand what each neuron does, and you can even check the outputs at each layer to see what's happening inside.
Look, you sound like you took an online course and picked up the basic buzzwords, but have never studied the topic in depth.
Lol if you think rocks might be sentient, there's no way I can make you understand why LLMs are not.
You're even wrong about sentience and consciousness: for one, you keep mixing the two concepts together, which makes it harder to converse, as you keep changing what you're discussing. And then again, we do have a definition for sentience, and there have been studies proving, in multiple animal species for example, that they are in fact sentient, and zero studies showing the same for rocks. Even the notion is idiotic.
lifesthateasy t1_jalgvq6 wrote
Reply to comment by currentscurrents in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Exactly, but we completely understand how neural networks work down to a tee.
lifesthateasy t1_jalerz5 wrote
Reply to comment by currentscurrents in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Who's talking about intelligence? Of course artificial intelligence is intelligence. It's in the name. I'm saying it's not sentient.
lifesthateasy t1_jajgry8 wrote
Reply to comment by schludy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
It's almost like that corresponds to who creates the most content on the internet lol
lifesthateasy t1_jajcxh9 wrote
Reply to comment by red75prime in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Yeah but even that wouldn't work like our brain, the basic neurons in neural networks don't work like neurons in our brains so there's that.
lifesthateasy t1_jaj7uo8 wrote
Reply to comment by 7366241494 in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
There's a plethora of differences, one of them is that we can think even without someone prompting us.
lifesthateasy t1_jaj6vlq wrote
Ugh ffs. It's a statistical model that is trained on human interactions, so of course it's gonna sound like a human and answer as if it had the same fears as a human.
It doesn't think; all it ever does is give you the statistically most probable response to your prompt, if and only if it gets a prompt.
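If "statistically most probable response" sounds abstract, here's a toy Python sketch (completely made-up probabilities, nothing like a real model's vocabulary or context handling): the whole "answer" is just repeatedly picking the likeliest next token, and nothing happens at all until a prompt arrives.

```python
# Toy sketch of how an LLM replies: nothing runs until a prompt arrives, then each
# step just picks the most probable next token. Probabilities below are invented.
fake_distribution = {
    "I":      {"am": 0.7, "was": 0.2, "think": 0.1},
    "am":     {"not": 0.6, "a": 0.3, "very": 0.1},
    "not":    {"afraid": 0.5, "sure": 0.4, "here": 0.1},
    "afraid": {"<end>": 0.9, "of": 0.1},
}

def generate(prompt_token):
    token, reply = prompt_token, []
    while token in fake_distribution:
        token = max(fake_distribution[token], key=fake_distribution[token].get)  # argmax over next tokens
        if token == "<end>":
            break
        reply.append(token)
    return " ".join(reply)

print(generate("I"))  # "am not afraid" -- sounds human-ish, but it's just lookup and argmax
```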
lifesthateasy t1_jaeskse wrote
Reply to comment by cabose7 in Oscars: “Naatu Naatu” From India’s ‘RRR’ To Be Performed During Ceremony by impeccabletim
It's a thread about RRR being featured at the Oscars, and I used it to make a comment about the direction the Oscars seem to be taking, without saying anything about RRR.
lifesthateasy t1_jaesg99 wrote
Reply to comment by AlanMorlock in Oscars: “Naatu Naatu” From India’s ‘RRR’ To Be Performed During Ceremony by impeccabletim
"The Academy Awards, better known as the Oscars, are awards for artistic and technical merit for the American film industry." Traditionally they do American movies with some categories for foreign films, but not nearly as many. Now it seems instead of focusing on better nominations for American movies, they add new markets. As I said, it's a good business move but I'm not sure I like the idea of a US-based panel of judges being the end-all be-all authorities on international cinema everywhere. Goes a bit against diversity doesn't it.
lifesthateasy t1_jaes30j wrote
Reply to comment by Cineflect in Oscars: “Naatu Naatu” From India’s ‘RRR’ To Be Performed During Ceremony by impeccabletim
Yeah, that's my point: the Oscars used to be mostly about English-speaking movies from the US, with specific categories for foreign films. Now they're opening up to foreign markets more. So you be the judge of whether it's good or bad for diversity that the ultimate body judging any movie anywhere has a fully US-based panel of judges.
lifesthateasy t1_jaerni8 wrote
Reply to comment by Jazz_Potatoes95 in Oscars: “Naatu Naatu” From India’s ‘RRR’ To Be Performed During Ceremony by impeccabletim
Wait, did RRR get an Oscar? Did I miss something? I didn't say anything about that movie, I'm just saying they're opening up to Indian audiences instead of focusing on what they traditionally did. I don't quite understand how the Oscars, based in the US and made up mostly of US critics, trying to become the end-all be-all critical body of the world is a good thing, but okay. Seems like a lot of diversity would be lost, but sure, whatever floats your boat.
lifesthateasy t1_jaercdz wrote
Reply to comment by thenumberless in Oscars: “Naatu Naatu” From India’s ‘RRR’ To Be Performed During Ceremony by impeccabletim
I didn't say anything about RRR
lifesthateasy t1_jaeraak wrote
Reply to comment by crzysexycoolcoolcool in Oscars: “Naatu Naatu” From India’s ‘RRR’ To Be Performed During Ceremony by impeccabletim
I didn't say anything about RRR wtf
lifesthateasy t1_jae9q4r wrote
Reply to comment by TheDwilightZone in Oscars: “Naatu Naatu” From India’s ‘RRR’ To Be Performed During Ceremony by impeccabletim
Fewer and fewer people care about the Oscars, because they stopped focusing on quality and their awards now revolve around other things like representation and virtue signaling instead of great movies. Instead of getting back to giving awards based purely on quality, they're moving to include markets that are traditionally not represented at the Oscars to draw in a new audience.
Don't get me wrong, if I were them I'd probably do the same as it's easier and makes sense business-wise, but I don't think we'll see much uptick in viewership from US/European markets.
lifesthateasy t1_jeg3o4t wrote
Reply to Why do so few American movies let foreign language speaking characters speak their own language? Why does everything have to be in English... by _wyfern_
That's nothing compared to all the media out there where multi-galactic space covenants just speak English by default. Even the notion of all of them communicating via sound and words and sentences is weird.