Shiningc t1_j1soczr wrote
Reply to comment by KaiSix88 in For the first time Open AI is investing in a small number of startups who they believe are "pushing the boundaries" of technology and AI. by ECommerce_Developer
There's still going to be an inherent limitation built into statistics and probabilities: things don't always follow a "trend" or a "pattern". A trend can suddenly change in unexpected and surprising ways.
It could be that, when we use our "intuition", things like predicting the trajectory of a falling ball are based on statistics and probabilities. But we can also reason about the situation in ways that would completely change how we predict the trajectory. For example, we could learn that the wind can affect the trajectory of the ball. Or, as in baseball, the pitcher could throw a "slider" to make the ball drop much faster than a normal pitch would. And a person would never have to have seen a ball being pushed around by the wind to predict this. There were never any statistical samples. He can simply think about how the wind would affect the ball. So he predicted the trajectory not based on statistics, but by some kind of a new rule, perhaps one that closely resembles the laws of physics.
Our general thinking isn't necessarily based on statistics and probabilities. And that's why an AGI can't be developed from statistical and probabilistic methods alone.
KaiSix88 t1_j1ttc7j wrote
>So he predicted the trajectory not based on statistics, but by some kind of a new rule, perhaps one that closely resembles the laws of physics.
You actually hit it right on the money. That's where the sparsity comes into play. It is technically probabilistic though.
Imagine this, for example: I have 100 neurons, and only 3 can turn on at a time. 100 choose 3 gives 161700 possible combinations. We'll call the code that lights up for a falling ball code G, for gravity. We'll also say that your motor cortex fires if the active code shares at least 2 neurons with G.
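To make that concrete, here's a minimal Python sketch of the setup, assuming a code is just the set of indices of its active neurons (the specific indices in code_G are made up):

```python
from math import comb

N_NEURONS = 100   # population size
K_ACTIVE = 3      # neurons allowed on at a time (the sparsity constraint)

# A code is just the set of indices of its active neurons.
code_G = frozenset({4, 17, 62})   # made-up "gravity" code for a falling ball

# Total number of distinct 3-of-100 codes: C(100, 3)
print(comb(N_NEURONS, K_ACTIVE))  # 161700

def overlap(code_a, code_b):
    """How many active neurons two codes share."""
    return len(code_a & code_b)

def motor_cortex_fires(code, reference=code_G, threshold=2):
    """Fires when a code shares at least `threshold` active neurons with the reference."""
    return overlap(code, reference) >= threshold
```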
The odds that any other code is an exact match are 1/161700, so it's very unlikely that anything other than a falling object produces code G exactly. However, there are noisy partial codes A, B, and C (3 choose 2 = 3 of them) that can overlap with G.
Because these 100 neurons represent similar things in the same part of the brain, the full-code variants of partials A, B, and C will have meaningful overlap with G, because they were formed from the same inputs. That leaves 3 * 97 = 291 distinct overlapping full codes: 3 neurons need to be on at a time, each partial is missing 1 neuron, and there are 97 neurons outside G to complete it with (completing it with G's own missing neuron just gives back G).
As you may have guessed by now, your windy variants are among these other overlapping codes. You can call that set W, for windy: only the codes with an overlap of 2. So now you have 292 codes (3 * 97 + 1, counting G itself) that can activate under falling conditions.
But even then, with that many codes between the full W set and code G, 292/161700 is still well under a 1% chance (about 0.18%) that a random code triggers a gravity-related thought. In this scenario we haven't considered temporal codes, but this is enough to illustrate how implicit probabilities can arise out of sparse distributed codes.
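If you want to sanity-check those numbers, here's a small sketch that computes the closed form and brute-forces every 3-of-100 code (code_G is the same made-up code as above):

```python
from itertools import combinations
from math import comb

N, K = 100, 3
code_G = frozenset({4, 17, 62})   # made-up "gravity" code

total_codes = comb(N, K)          # 161700 possible 3-of-100 codes

# Codes sharing at least 2 active neurons with G:
#   exactly 2 shared: C(3, 2) = 3 partials, each completed by one of the
#                     97 neurons outside G  -> 3 * 97 = 291
#   all 3 shared:     G itself              -> +1
closed_form = comb(K, 2) * (N - K) + 1

# Brute-force check over every possible code
brute_force = sum(len(code_G & set(c)) >= 2 for c in combinations(range(N), K))

print(total_codes, closed_form, brute_force)  # 161700 292 292
print(closed_form / total_codes)              # ~0.0018, well under 1%
```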
If you are actually interested in this field, Kanerva's sparse distributed memory and locality-sensitive hashes will be right up your alley.
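For a taste of the locality-sensitive-hashing side, here's a toy random-hyperplane (SimHash-style) sketch, not tied to any particular library: similar inputs land on binary codes that agree in most bits, which is the same "meaningful overlap comes from similar inputs" idea as above.

```python
import numpy as np

rng = np.random.default_rng(0)
D, BITS = 16, 32                          # input dimension, code length
planes = rng.standard_normal((BITS, D))   # random hyperplanes

def lsh_code(x):
    """Sign of the projection onto each hyperplane gives one bit of the code."""
    return (planes @ x) > 0

x = rng.standard_normal(D)
x_similar = x + 0.05 * rng.standard_normal(D)   # slightly perturbed input
x_random = rng.standard_normal(D)               # unrelated input

# Similar inputs agree on far more bits than unrelated ones.
print((lsh_code(x) == lsh_code(x_similar)).sum())  # close to 32
print((lsh_code(x) == lsh_code(x_random)).sum())   # around 16 on average
```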