crappleIcrap t1_jaou73g wrote
Reply to comment by lifesthateasy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
>I like how you criticize me for not providing scientific evidence for my reasoning,
I criticized you for quite the opposite reason: for claiming sentience to be something settled by science or mathematics when it is still firmly in the realm of philosophy.
>they argue it emerges from the specific properties of our neural architecture, which is vastly different than that of neural networks'
They never argue that it ONLY emerges from the specific properties of our neural architecture, or at least, I have never seen a good paper claiming that.
>Once it's trained, it stays the same. The only things that temporarily change are in the memory module of the feedback systems, and that only serves the purpose of being able to hold conversation.
GPT-3 is the third round of training, and OpenAI will no doubt use our data to train a fourth. But even barring that, this objection is a bit like saying "but humans aren't even immortal, they die and just have kids that have to learn everything over again." Also, after age 25 your brain largely stops changing and is fairly "set" other than new memories forming, so I fail to see how one thread is much different from one human. But this is a stupid argument anyway, because if I made the change to allow training on every input, the model wouldn't be any better, and it would actually be an easy (if less efficient) change to make. So if that were the only problem, I would immediately download GPT-Neo, make the change, and collect my millions.
Like I said, current implementations are, in my opinion, not likely to be sentient, and this is a major reason: most threads do not last very long. But there is no reason a single thread, if allowed to continue indefinitely, could not be sentient, since it has a memory that is not functionally very different from human memory other than being physically farther away; nor is there any reason a short-lived thread could not have a simple, short-lived sentience.
As far as determinism goes, within the currently known laws of physics the only way for the human brain to be non-deterministic is for it to use some quantum effect, and all that buys you is randomness. So claiming the brain needs to be non-deterministic to be sentient is saying it needs true randomness added in, which I think is a weird argument, despite its popularity among the uninformed and the complete lack of evidence that the human brain uses quantum effects or is non-deterministic.
Also, I cannot recommend Gödel, Escher, Bach enough; it makes a much stronger case than I ever could, and it is an amazing read.
>artificial neurons in neural networks don't have a continuously changing impulse pattern,
Not sure exactly what you are saying here, but it sounds pretty similar to RNNs, which are old news at this point; Transformers seem to work much better at solving the problems that limitation usually causes.
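For reference, the "continuously changing" internal state an RNN carries can be sketched in a few lines of NumPy. This is a toy vanilla-RNN cell with made-up random weights, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
W_xh = rng.normal(scale=0.5, size=(3, 2))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(3, 3))   # hidden -> hidden (the recurrence)

def step(h, x):
    # One RNN timestep: the new hidden state depends on the old one
    return np.tanh(W_xh @ x + W_hh @ h)

h = np.zeros(3)
for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    h = step(h, x)

# The same final input produces a different state without the history:
h_other = step(np.zeros(3), np.array([0.0, 1.0]))
print(np.allclose(h, h_other))              # False: past inputs leave a trace in h
```

The recurrence matrix `W_hh` is what gives the state its "impulse pattern" over time; Transformers drop it and instead attend over the whole input at once.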
crappleIcrap t1_jaoem2j wrote
Reply to comment by lifesthateasy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
I agree completely that pop-sci articles sensationalize this topic, but to be fair, they do that with every part of science. A funny one comes to mind: an article claiming something along the lines of "scientists create white hole in lab," when what actually happened is they ran a stream of water onto a flat surface and the spread behaved mathematically like a white hole.
Nobody writes articles claiming nematodes are sentient, despite their containing fundamentally the same building blocks that human intelligence is built on. Side note: if mimicking real neurons is what you believe produces sentience, then the complete nematode connectome, which you can emulate on your desktop, already achieves that.
It is because most people would not consider their simple intelligence to be sentience, not because neurons as a building block are completely incapable of developing sentience.
As far as the architecture goes, whether it be Transformers, RNNs, or even something as simple as Markov chains, I don't think it's relevant, as I have seen no convincing evidence that any neural network type could never exhibit sentience as an emergent property.
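To show just how simple the simplest of those architectures is, here is a word-level Markov chain in a dozen lines (corpus and names invented for illustration). Even this trivial mechanism produces text; the point is about emergent behavior from simple parts, not a claim that it is sentient:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat saw the dog".split()

# Map each word to the list of words observed to follow it
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

rng = random.Random(0)
word = "the"
out = [word]
for _ in range(8):
    # Fall back to any corpus word if we hit a dead end
    word = rng.choice(chain.get(word, corpus))
    out.append(word)
print(" ".join(out))
```

A Transformer is to this roughly what a brain is to a reflex arc: vastly more capable, but there is no obvious line in the math where "could never be sentient" becomes provable.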
crappleIcrap t1_jao313f wrote
Reply to comment by lifesthateasy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Currently it is fairly unlikely as far as I can tell, but most arguments given are not restricted to "at its current size and complexity it doesn't appear to have the traits of a truly sentient being"; they are essentially declarations that machines can never have any degree of sentience, or that it would require some unobtainium-McGuffin math that is currently impossible.
crappleIcrap t1_janyjbj wrote
Reply to comment by lifesthateasy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
from the abstract "Rather than attempting to extract meaning from the many complex and abstract definitions of animal sentience, we searched over two decades of scientific literature using a peer-reviewed list of 174 keywords."
How is this evidence that the definition of sentience is perfectly well defined and not at all abstract? You accuse him of not reading it, but did you?
It is a philosophical argument, not a scientific or mathematical one.
You simply hold the philosophy that, because of the qualia argument, sentience cannot be an emergent property. I and many others disagree.
Pretending this is a mathematical or scientific argument, and that the science is settled in your favor, is highly disingenuous.
You may be an expert on neural networks, but that is like being an expert on car manufacturing and thinking it makes you a better racecar driver than racecar drivers.
I also work with neural networks and fully understand the mathematics behind them, but that does not mean I know anything about sentience or the prerequisites for creating a sentient being.
Many arguments used against AI being sentient could easily be applied to humans:
"it is just math, it doesn't actually know what it is doing"
Do you think each human neuron behaves unpredictably, and that each has its own sentience? As far as we can tell, human neurons are deterministic and therefore "just math." True, neurons do not use statistical regression, but nobody has ever proved that brains are the only possible way to produce sentience, or that human brains are the most optimized way possible. That is like expecting walking to be the most efficient method of moving things.
"it doesn't actually remember things, it rereads the entire text every time/ it isn't always training"
Humans store information in their brains. Do you believe every neuron and every part of the brain remembers these things, or is it possible that, when remembering anything, one part of the brain has to ask another part what is remembered and then process that information again?
And do you expect your brain to make permanent changes every nanosecond of every day, or do you expect some things to make changes and others not to, with some amount of time required for that to happen? So why is it so hard to accept that sentience may be possible with changes only being made every month, or year, or longer? This argument is essentially that it cannot be sentient unless it is as fast as a human.
Are there any more "I'm a scientist, therefore I must know more about philosophy than philosophers" takes that I am missing?
crappleIcrap t1_jbkil2g wrote
Reply to comment by lifesthateasy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Now actually tell me why any of what you said is absolutely required for consciousness. You act like it is self-evident that it needs to be a brain and do things exactly the way a brain does.
> you can find the accuracy is smooth with scale. Emergent abilities would have an exponential scale.
Yeah, did you really read that and think it was talking about the same type of emergence? I was talking about philosophical/scientific emergence: when an entity is observed to have properties its parts do not have on their own. The "emergence" in that article refers to big leaps in ability, and it has absolutely nothing to do with the possibility of consciousness.
The fact that neural networks can produce anything useful at all is a product of emergence of the kind I was talking about, the kind the absolute banger of a book Gödel, Escher, Bach was talking about.
>Brain cells however, are not only multidirectional without extra backwards connections, but they can keep some residual electric charge that can change the output (both its direction and strength) based on that residual charge. This residual activation can have a number of effects on the neuron's firing behavior, including increasing the strength of subsequent firing events and influencing the direction and timing of firing.
Okay, and what does this have to do with consciousness? It is still just deterministic nonlinear behavior. It makes no mathematical difference to what types of curves the system can and cannot model, because a network can already model any arbitrary curve; the exact architecture used to do it is irrelevant. Planes have no ability to flap their wings; they have no feathers or hollow bones, no muscles or tendons, nor any of the other things a bird uses to fly. Therefore planes cannot fly? Functionally the model has the ability to remember and, depending on the setup, the ability to change its future output based on past output. The exact method of doing so does not need to be the same; no matter how obsessed you are with it needing to do things exactly the way a brain does, it doesn't need to do anything even similar to the way the brain does it.
>Even if GPT3 had a conscience, it would have no connection to GPT4 as they're separate entities in a separate space of hardware,
I find it very strange that you are adamant the model needs to be doing statistical regression to be conscious, when the brain absolutely never does this. It is just something you assume is required because the word "train" is used, and training is learning, therefore it must only be "learning" when it is in training mode.
If I tell it I live on a planet where the sky is green, and later ask what color I would see if I went outside and looked at the sky, it giving the correct answer is proof that being constantly in training mode is not required for it to "learn." It can "learn" just fine in inference mode, by feeding it its own output along with the old inputs on every inference.
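The mechanism behind that is easy to sketch. No real model appears here; `toy_model` is a stand-in for a frozen LLM in inference mode, and the whole point is that the new "knowledge" lives in the growing context that gets fed back in, not in the weights, which never change:

```python
def chat(answer_fn, turns):
    # Carry the full transcript forward and re-feed it on every inference
    context = ""
    replies = []
    for user_msg in turns:
        context += f"User: {user_msg}\n"
        reply = answer_fn(context)
        context += f"Bot: {reply}\n"
        replies.append(reply)
    return replies

# Stand-in "model": answers by scanning its context for a stated fact
def toy_model(context):
    return "green" if "sky is green" in context else "blue"

replies = chat(toy_model, ["On my planet the sky is green.",
                           "What color is the sky outside?"])
print(replies)  # ['green', 'green'] -- the fact persists via context alone
```

With an empty context the stand-in reverts to "blue"; the stated fact survives only because prior inputs and outputs are replayed, which is exactly the in-context "learning" described above.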
Training a model is less like a brain learning and more like a brain evolving toward a specific function; inference is where the more human-like "learning" takes place. It is like a god specifying how a brain should develop, using a mathematical tool. It doesn't use neurons and has no good analog in real biology at all, so to say it is required is just bizarre.
GPT-3 is a continuation of GPT-2. Or I guess I just assume so, since it is closed source, but all open GPT models have worked this way: they train, release the model, then fire training back up starting where it left off. But like I said, as long as past information can affect future information, the exact method doesn't matter. And if you have even a basic understanding of ChatGPT specifically (which is becoming quite obviously more than you have), you know each tab can do that. I think it is very silly to say that consciousness has to cross over between browser tabs; where would you even come up with a requirement like that? Human consciousness does not cross over between human bodies. They are separate, and can be created, learn, and be destroyed completely separately.
>artificial neuron in an NN has one activation function, one input and one output (even though the output can be and often is a vector or a matrix).
Which has been mathematically proven (the universal approximation theorem) to be able to model any other system you could possibly think of: as long as each neuron has nonlinear behavior, a network of them can model any arbitrary function you come up with.
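A hand-built example of why the nonlinearity is the part that matters: XOR is not linearly separable, so no single linear unit can compute it, but one hidden layer of ReLU units computes it exactly. The weights below are picked by hand for illustration:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# One hidden layer, two ReLU units, hand-picked weights
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])

def xor_net(x):
    # relu(x1+x2) - 2*relu(x1+x2-1)  ->  XOR for binary inputs
    return w2 @ relu(W1 @ x + b1)

outputs = [float(xor_net(np.array(x, dtype=float)))
           for x in ([0, 0], [0, 1], [1, 0], [1, 1])]
print(outputs)  # [0.0, 1.0, 1.0, 0.0]
```

Remove the `relu` and the network collapses to a single linear map, which provably cannot produce that output pattern; the nonlinearity, not any particular biological detail, is what buys the expressive power.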
You can't just keep listing things that AI doesn't do and pretend it is self-evident that every conscious system would need to do them. You need to actually give a reason why a conscious system would need that function.