crappleIcrap t1_jbkil2g wrote
Reply to comment by lifesthateasy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Now actually tell me why any of what you said is absolutely required for consciousness. You act like it is just self-evident that a conscious system needs to be a brain and do things exactly the way a brain does them.
> you can find the accuracy is smooth with scale. Emergent abilities would have an exponential scale.
Yeah, did you really read that and think it was talking about the same type of emergence? I was talking about philosophical/scientific emergence: when an entity is observed to have properties its parts do not have on their own. The "emergence" in that article refers to big leaps in ability with scale, and has absolutely nothing to do with the possibility of consciousness.
The fact that neural networks can produce anything useful at all is a product of emergence of the kind I was talking about, the kind the absolute banger of a book *Gödel, Escher, Bach* was talking about.
>Brain cells however, are not only multidirectional without extra backwards connections, but they can keep some residual electric charge that can change the output (both its direction and strength) based on that residual charge. This residual activation can have a number of effects on the neuron's firing behavior, including increasing the strength of subsequent firing events and influencing the direction and timing of firing.
Okay, and what does this have to do with consciousness? It is still just deterministic nonlinear behavior. It makes no mathematical difference to what kinds of curves the system can and cannot model, because a neural network can model any arbitrary curve; the exact architecture used to do it is irrelevant. Planes have no ability to flap their wings, they have no feathers or hollow bones, no muscles or tendons or any of the other things a bird uses to fly; therefore planes cannot fly? Functionally the network has the ability to remember: depending on the setup, it can change its future output based on its past output. The exact method of doing so does not need to be the same, no matter how obsessed you are with it needing to work exactly the way a brain does. It doesn't need to do anything even similar to the way the brain does it.
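To make that concrete, here's a toy sketch (plain Python; every name in it is mine, not any real model's API) of how feeding output back in as input gives you functional memory with nothing brain-like anywhere:

```python
# Toy autoregressive loop: each step's output is fed back in as input,
# so every past output influences all future outputs. Functional
# "memory" with no residual charge or biology in sight.

def toy_model(context: list[int]) -> int:
    # Stand-in for any deterministic nonlinear function of the context.
    return (sum(context) * 31 + len(context)) % 100

def generate(prompt: list[int], steps: int) -> list[int]:
    context = list(prompt)
    for _ in range(steps):
        nxt = toy_model(context)  # output depends on everything so far
        context.append(nxt)       # ...and becomes part of future input
    return context

print(generate([1, 2, 3], steps=5))
```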
>Even if GPT3 had a conscience, it would have no connection to GPT4 as they're separate entities in a separate space of hardware,
I find it very strange that you are adamant the model needs to be doing statistical regression to be conscious, when the brain absolutely never does this. It is just something you assume is required because the process uses the word "train," and training is learning, therefore it must only be "learning" when it is in training mode.
If I tell it I live on a planet where the sky is green, and later ask what color I would see if I went outside and looked at the sky, a correct answer is proof that constantly being in training mode is not required for it to "learn." It can "learn" just fine within the context of inference mode, by feeding it its own output, along with the old inputs, on every inference.
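Something like this is all I mean; `complete()` is a stand-in name I made up for whatever inference endpoint you like, not a real API:

```python
# Sketch of the green-sky test. `complete` is a placeholder (my name,
# not a real library call); wire it to an actual model to run it for
# real. The point: no gradient update happens anywhere below. The
# "learning" lives entirely in the accumulated context.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your favorite model here")

history: list[str] = []

def ask(user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    reply = complete("\n".join(history) + "\nAssistant:")  # pure inference
    history.append(f"Assistant: {reply}")                  # fed back next turn
    return reply

# ask("Suppose I live on a planet where the sky is green.")
# ask("If I went outside and looked up, what color would I see?")
# A capable model answers "green": the fact was "learned" in-context.
```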
Training a model is less like a brain learning and more like a brain evolving to perform a specific function; the more human-like "learning" takes place during inference. Training is like a god specifying how a brain should develop, using a mathematical tool. It doesn't use real neurons and has no good analog in biology at all, so to say it is required is just bizarre.
GPT-3 is a continuation of GPT-2. Or I guess I just assumed that, since it is closed source, but all open GPT models have worked this way: they train the model and release it, then fire training back up starting where it left off. But like I said, as long as past information can affect future information, the exact method doesn't matter. And if you have only a basic understanding of ChatGPT specifically (which is becoming quite obvious), you'd still know each tab can do that. I think it is very silly to say that consciousness has to cross over between browser tabs; where would you even come up with a stupid requirement like that? Human consciousness does not cross over between human bodies. They are separate and can be created, learn, and be destroyed completely separately.
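By "fire training back up" I just mean the usual checkpoint-resume pattern, something like this PyTorch sketch (the model, file name, and step count are placeholders, not anything from OpenAI's actual pipeline):

```python
import torch
import torch.nn as nn

# Placeholder model: stands in for whatever architecture is being trained.
model = nn.Linear(10, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# --- end of the first training run: save everything needed to resume ---
torch.save({
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
    "step": 100_000,
}, "checkpoint.pt")

# --- later, a new run picks up exactly where the old one stopped ---
ckpt = torch.load("checkpoint.pt")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
start_step = ckpt["step"]  # training continues from here
```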
>artificial neuron in an NN has one activation function, one input and one output (even though the output can be and often is a vector or a matrix).
Which has been mathematically proven to be able to model any other system you could possibly think of: as long as each neuron has nonlinear behavior, the network can approximate any arbitrary function you come up with.
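That's the universal approximation theorem. If you want to see it in action, here's a toy demonstration (PyTorch; the target curve is an arbitrary one I picked) where a single hidden layer of those simple one-activation units fits the curve just fine:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Arbitrary target curve -- swap in any continuous function you like.
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(2 * x) + 0.3 * x**2

# One hidden layer of simple nonlinear units: each "neuron" is just a
# weighted sum, one activation, one output -- yet together they can
# approximate any continuous function on a bounded interval.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.5f}")  # should drop toward zero
```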
You can't just keep listing things that AI doesn't do and pretend it is self-evident that every conscious system would need to do that thing. You need to actually give a reason why a conscious system would need that function.