Submitted by Calm_Bonus_6464 t3_zyuo28 in singularity
I'll start off by saying that I'm no expert, but I did get into a debate recently about whether or not AGI is possible. The argument used against AGI was that the very idea that we can achieve AGI rests heavily on the idea that the brain is like a computer, something this post by Piekniewski calls into question. Models like this assume that 1) intelligence is an emergent property of scaling neurons and synapses, 2) you have a good model and analog of neurons and synapses, and therefore 3) scaling this will lead to intelligence.

The guy I was debating with called this into question, stating that we still don't know how a neuron works. Research in this field by Prof. Efim Lieberman, who founded the field of biophysics, suggests that there is an incredible amount of calculation going on INSIDE neurons, using cytoskeletal 3D computational lattices communicating via sound waves. So the amount of computational resources required to emulate a brain would be orders of magnitude higher than what the model of a neuron as a dumb transistor, and the brain as a network of switches, suggests. (A rough illustration of this gap is sketched below.)

Second, and more fundamentally, he believes that intelligence is an emergent property of consciousness. An ant or a spider is conscious; Darwin goes on about this at length. Perhaps inanimate matter is also conscious; Leibniz, who invented this field, wrote the Monadology about this.
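To make the "orders of magnitude" point concrete, here's a rough back-of-envelope sketch in Python. The neuron/synapse counts are common textbook estimates; the per-neuron intracellular multiplier is a purely hypothetical placeholder standing in for Lieberman-style claims, not a number from any source:

```python
# Back-of-envelope estimate of how intracellular computation would inflate
# the cost of whole-brain emulation. All numbers are rough, illustrative
# assumptions, not measurements.

NEURONS = 8.6e10           # ~86 billion neurons (common textbook estimate)
SYNAPSES_PER_NEURON = 1e4  # ~10,000 synapses per neuron (rough average)
FIRING_RATE_HZ = 10        # assumed average spike rate

# "Dumb switch" model: one operation per synaptic event.
switch_model_ops = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ

# Hypothetical "rich neuron" model: suppose each neuron also performs
# some large number of internal (cytoskeletal) operations per second.
# 1e9 is an arbitrary placeholder, not a figure from Lieberman.
INTERNAL_OPS_PER_NEURON = 1e9
rich_model_ops = switch_model_ops + NEURONS * INTERNAL_OPS_PER_NEURON

print(f"switch model: {switch_model_ops:.1e} ops/s")  # ~8.6e+15
print(f"rich model:   {rich_model_ops:.1e} ops/s")    # ~8.6e+19
print(f"ratio:        {rich_model_ops / switch_model_ops:.0e}x")  # ~1e+04x
```

Under those (made-up) assumptions the gap is about four orders of magnitude, which is the shape of the argument even if the real multiplier is unknown.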
He went on to state that neural networks aren't conscious any more than an abacus is. Scaling them won't make them so, though it may allow them to emulate consciousness within some envelope. Without consciousness, no understanding. Without understanding, no intelligence. And you're nowhere near any sort of understanding of consciousness, even theoretically. Therefore, he said, AI is mostly marketing with some interesting applications in controlled environments.
How would you respond to this argument?
throwawaydthrowawayd t1_j280yv5 wrote
> Without consciousness, no understanding
You don't need consciousness or philosophical understanding to do logic. We've already proven that with our current NNs: just ask GPT-3.5 to find bugs in your code, and that's nowhere near the limit of what NNs will be. Logic and reasoning are not as complex as we thought.
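For what it's worth, "ask GPT-3.5 to find bugs" is literally a few lines against the OpenAI chat API. A minimal sketch, assuming the `openai` Python package is installed and `OPENAI_API_KEY` is set in the environment; the model name, prompt, and buggy snippet are just illustrative:

```python
# Minimal sketch: asking a GPT-3.5-class model to find bugs in a snippet.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

buggy_code = """
def average(xs):
    total = 0
    for x in xs:
        total += x
    return total / len(xs)  # crashes on an empty list
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat model would do here
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Find the bugs in this code:\n{buggy_code}"},
    ],
)
print(response.choices[0].message.content)
```

Whether that counts as "understanding" is exactly what the debate is about, but the behavior (spotting the division-by-zero case) is reproducible today without any theory of consciousness.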