ThatWolf t1_j739rd2 wrote
These thoughts of mine could be better stated, but I don't have time right now.
That's a lot of words to say that because you think we may not be able to create AI with current and/or proposed technology, you don't think it will ever exist. Dismissing our ability to create true AI/AGI in the future because of present-day limitations is shortsighted at best, especially considering that we're already doing things with AI that were impossible in the past.
I think you're overselling the computational complexity of things like immune cells. They're not exactly navigating the body on a self-determined path to find pathogens. There is no 'thinking' involved beyond reacting to a stimulus. I'd also argue that Moravec's Paradox isn't really a paradox, but a misunderstanding of how complex those 'simple' tasks are; at the time the statement was made, the relevant information simply wasn't available. We now know that our senses account for a huge amount of our brain's processing capacity.
Likewise, for some reason you're completely ignoring the fact that human interpretation of data is literally how we teach other humans right now. We spend decades teaching young humans what red is, what hot is, what symbols we use to communicate and how to use them, what symbols we use to calculate and how to use them, how different things interact, and on and on. No human on the planet is born with the knowledge of what red is; it's a human interpretation that's taught to humans by other humans. And even that can be wrong, because there are humans with red/green color vision deficiency who cannot accurately perceive those colors.
>There is no need for models of anything in the brain. Nothing has to be abstracted out and processed by algorithms to produce a desired result.
Your brain absolutely creates models or algorithms (or whatever you would like to call them). When you learn to ride a bicycle, for example, your brain creates a model of what you need to do to produce the desired result of riding a bicycle without crashing. When that 'bicycle riding' model encounters a situation it's unfamiliar with, you often end up crashing the bike, such as when riding a bicycle with the steering reversed. Your brain is using a model it made of how a bicycle is supposed to work, and even though you 'know' the steering is backwards, you're unable to simply get on and ride such a bicycle, because the model in your brain cannot accommodate the change without significant retraining.
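To make that concrete, here's a toy sketch; nothing here is meant as a claim about actual neural circuitry, it just illustrates what "model" means in this context. A simulated rider's learned steering model works on a normal bike, fails on a reversed-steering one, and only works again after the model itself is retrained.

```python
# A toy illustration only, not a claim about actual neural circuitry: a "rider"
# whose learned internal model of steering works on a normal bicycle, fails when
# the steering is mechanically reversed, and works again only after retraining.

import random

def ride(steering_gain, model_gain, trials=1000):
    """Fraction of corrections that make the lean worse (a crude 'crash rate').

    steering_gain: +1 for a normal bike, -1 for reversed steering.
    model_gain:    what the rider's learned model assumes the gain is.
    """
    crashes = 0
    for _ in range(trials):
        lean = random.uniform(-1.0, 1.0)            # current lean of the bike
        command = -lean / model_gain                # steer based on the internal model...
        new_lean = lean + steering_gain * command   # ...but physics uses the real gain
        if abs(new_lean) > abs(lean):               # the "correction" made things worse
            crashes += 1
    return crashes / trials

print("normal bike, normal model:  ", ride(+1, +1))   # ~0.0, the model fits reality
print("reversed bike, normal model:", ride(-1, +1))   # ~1.0, every correction backfires
print("reversed bike, retrained:   ", ride(-1, -1))   # ~0.0 again, after "retraining"
```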
ReExperienceUrSenses OP t1_j745ly9 wrote
>I think you're overselling the computational complexity of things like immune cells. They're not exactly navigating the body on a self-determined path to find pathogens. There is no 'thinking' involved beyond reacting to a stimulus.
I never said they were thinking. This is why people get so hung up on the brain as a necessity for complex action and "behavior." Come take a walk with me; I'm going to describe chemotaxis for you.
Chemotaxis is an important part of how bacterial cells move. It's how they swim toward food and away from danger and noxious compounds. In E. coli's long, pill-shaped cell, there are usually some transmembrane proteins clustered at one tip. These proteins are receptors. Small molecules bind to them, with things like amino acids and sugars being food and nickel ions and acids being noxious. About four kinds of receptor survey the surroundings. When the right molecule binds to a receptor, it triggers a signal transduction cascade: a chain reaction that ends with one protein binding to another protein connected to the flagella. That binding turns the assembly like a rotor, the flagella move, and the bacterium tumbles off in one direction or another.
No thinking involved. But see how it didn't really need computation or anything either? It was a purely mechanical process at the molecular level: a molecule binds, a chain of chemical reactions follows, a different protein binds to another and becomes a spinning motor. Our immune cells are doing something very similar, only more of it. They have more receptor types available, more surface area for those receptors, and more internal space and material for many more of these chained reactions. Natural selection iterated and created enough of the "wiring" needed for an immune cell to carry out its wet work or support duties.
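If you want to see how little machinery this loop actually needs, here is a deliberately crude toy of run-and-tumble chemotaxis in one dimension. The gradient and the tumble probabilities are made up for illustration, and the comparison with the previous reading is a crude stand-in for the adaptation machinery (receptor methylation, CheY phosphorylation) that real E. coli uses.

```python
# A deliberately crude toy of run-and-tumble chemotaxis in one dimension.
# The gradient and probabilities are made up; comparing against the previous
# reading stands in for the adaptation machinery real E. coli actually uses.

import random

def food(x):
    """Attractant concentration along a 1-D track (made-up gradient, peak at x = 50)."""
    return 100.0 - abs(x - 50.0)

def chemotaxis(steps=2000):
    x, heading = 0.0, 1            # start far from the peak
    last_reading = food(x)
    for _ in range(steps):
        reading = food(x)
        # Receptor occupancy rising -> suppress the tumble signal and keep running.
        # Falling -> the cascade flips the motor and the cell tumbles to a new heading.
        tumble_prob = 0.1 if reading >= last_reading else 0.7
        if random.random() < tumble_prob:
            heading = random.choice([-1, 1])
        x += heading               # a "run" segment
        last_reading = reading
    return x

print("final position:", chemotaxis())   # ends up hovering near x = 50
```

No goals, no map, no model of the gradient: just a reaction whose rate depends on what bound the receptor a moment ago, and the cell climbs the gradient anyway.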
When you examine pathways like this with more types of cells, and think about all that is going on in the body when you scale this up, it is easier to imagine that it is entirely possible for us to operate without any abstract computation going on. You might say "that's so simple and limited, and that would make us no more than automatons," but you would be wrong, because of the scale. There are trillions of cells in our bodies, each with a genome large enough to create an absurd variety of protein complexes. The "computing" power of this is IMMENSE; the back-of-the-envelope sketch below gives a rough sense of the scale. And it's chemical soup, so it's all a gigantic, fuzzy, impossibly huge finite state machine diagram. There isn't any determinism to worry about because there are too many molecules; the combinatorial explosion is too intense.
Sequences of direct action and reaction that change "behavior" based on the current conditions of the cell and its surrounding environment.
THIS is why I say there are no models in the brain. Based on what? There's no need.
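To give a rough sense of the scale I mean, here's that back-of-the-envelope sketch. The numbers are placeholders, not measurements; the only point is how fast the joint state space blows up.

```python
# Back-of-the-envelope only: the numbers below are placeholders, not measurements.
# The only point is how fast the joint state space of many cells blows up.

import math

cells = 37_000_000_000_000   # rough order of magnitude for cells in a human body
states_per_cell = 10         # pretend each cell had only 10 distinguishable states

# Joint configurations = states_per_cell ** cells. The number itself is far too
# large to write out, so just report how many decimal digits it would have.
digits = cells * math.log10(states_per_cell)
print(f"joint configurations ~ 10^{digits:.0f}")   # a 1 followed by ~37 trillion zeros
```

Even with those absurdly conservative assumptions, the state space dwarfs anything you could enumerate or model explicitly.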
>Your brain absolutely creates models or algorithms (or whatever you would like to call them). When you learn to ride a bicycle, for example, your brain creates a model of what you need to do to produce the desired result of riding a bicycle without crashing
Prove it. Where in the brain are the models stored? How are they accessed and updated? What is the biochemistry that creates them? This is a much-disputed concept in the field. You don't need models of bikes and desired results, just more chemotaxis-style machinery, if you think about it for a while.
We can waste time trying to decipher encoding schemes that might not even exist, or we can map the actual activity going on.
ThatWolf t1_j76ouok wrote
Your post reads like someone who has taken a psychoactive substance and suddenly believes they understand the nature of things. The only meaningful conclusion I can draw from your post(s) is that you do not actually understand what you're talking about nearly as well as you believe you do.
>Sequences of direct action and reaction that change "behavior" based on the current conditions of the cell and its surrounding environment.
>
>THIS is why I say there are no models in the brain. Based on what? There's no need.
The random motions of cells in a body do not make for intelligence any more than the wind making waves in the ocean does. Random cellular motions do not produce repeatable outcomes. It's a well-established scientific fact that memories are the result of synaptic connections between neurons, and that accessing a memory activates those same synaptic pathways and neurons every single time.
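As a toy illustration of that point and nothing more, here is the textbook Hebbian "fire together, wire together" rule: repeated co-activation strengthens a connection, and the same cue then drives the same pathway every time it's presented. This is a cartoon of the principle, not a model of real memory consolidation.

```python
# A cartoon of the Hebbian "fire together, wire together" principle, not a model
# of real memory: repeated co-activation strengthens a synaptic weight, so the
# same cue reliably re-activates the same downstream unit afterwards.

def train(pairs, lr=0.5, epochs=5):
    """Strengthen the weight between units that are repeatedly active together."""
    w = {}
    for _ in range(epochs):
        for pre, post in pairs:
            w[(pre, post)] = w.get((pre, post), 0.0) + lr
    return w

def recall(w, cue, threshold=1.0):
    """Return everything the cue now activates above threshold."""
    return [post for (pre, post), weight in w.items()
            if pre == cue and weight >= threshold]

# "Learning to ride": repeatedly pairing the cue with the motor program...
weights = train([("see_bicycle", "pedal_and_balance")])
# ...means the same cue drives the same pathway every time it comes up.
print(recall(weights, "see_bicycle"))   # ['pedal_and_balance']
```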
>Prove it. Where in the brain are the models stored?
For my example of riding a bicycle, the main areas where this information is stored are a combination of the hippocampus, cerebellum, and basal ganglia. If your conjecture were actually true, it would be impossible for a brain injury to have any impact on your existing abilities or skills. But we know that injuring a specific part of the brain can cause you to become worse at, or completely lose, a skill. In fact, using existing brain mapping technology we can specifically target the parts of the brain that retain specific information if we want to, or avoid them completely, as is the case when performing neurosurgery.
Likewise, do not mistake the brain's capacity to heal and repair itself after an injury for evidence that these pathways do not exist. Similar to how the internet does not completely shut down if a link goes down, the brain is able to reroute and create new neural connections to parts that still work.
I'm not even going to bother addressing the issues with your understanding of modern AI. I've already spent way too much time on this post as it is.