Submitted by groman434 t3_103694n in MachineLearning
GFrings t1_j2xjsz8 wrote
I think the context in which the task is performed, and what is meant by "outperform", is important. Given enough time and energy, a person could probably find all the dogs in a dataset of images. But could they find 200k dogs in a dataset of millions of images overnight? Probably not. In this sense, machines far outperform humans, who are limited by attention span and by how little of the work they can parallelize.
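To make the scale point concrete, here is a minimal sketch of batched inference with a pretrained classifier (assuming PyTorch/torchvision; the folder path, batch size, and the ImageNet dog-class index range are illustrative, not authoritative):

```python
# Sketch: scan a large flat folder of images for dogs with a pretrained model.
# Assumes PyTorch + torchvision; paths, batch size, and the ImageNet dog-class
# index range (roughly 151-268) are illustrative.
import glob
import torch
from PIL import Image
from torchvision import transforms, models

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

class FlatImageFolder(torch.utils.data.Dataset):
    """Loads every .jpg directly under one directory (no class subfolders)."""
    def __init__(self, root):
        self.paths = sorted(glob.glob(f"{root}/*.jpg"))
    def __len__(self):
        return len(self.paths)
    def __getitem__(self, i):
        return preprocess(Image.open(self.paths[i]).convert("RGB"))

loader = torch.utils.data.DataLoader(FlatImageFolder("/data/images"),
                                     batch_size=256, num_workers=8)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).to(device).eval()

DOG_CLASSES = set(range(151, 269))  # ImageNet-1k dog breeds, approximately
dog_count = 0
with torch.no_grad():
    for images in loader:
        preds = model(images.to(device)).argmax(dim=1)
        dog_count += sum(int(p) in DOG_CLASSES for p in preds.tolist())
print(f"found {dog_count} probable dog images")
```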
groman434 OP t1_j2xp8he wrote
Yep, you are right, I was not clear enough. What I meant was that AI would do a task "significantly better" (whatever that means exactly). For instance, if humans can find 90% of the dogs in a dataset, AI would be able to find 99.999% of them.
csiz t1_j2ycihi wrote
Well, the AI is already beating humans at the example you gave: the best accuracies on ImageNet are now higher than human accuracy.
But there are also ways your data can be prepared that easily make the AI superhuman. You can label a full-resolution image of a dog, then compress and shrink it down before training the AI. Far fewer humans could still see the dog in a tiny low-res image, but an AI trained on those images could get it right just as often.
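A sketch of that kind of preprocessing, assuming the label was decided on the full-resolution original; the target size and JPEG quality here are arbitrary:

```python
# Sketch: the label comes from the full-resolution image; the training copy is
# aggressively shrunk and re-compressed so the model learns from inputs most
# humans would find hard to read. Sizes/quality are illustrative.
from io import BytesIO
from PIL import Image

def degrade(path, size=(32, 32), jpeg_quality=20):
    """Return a tiny, heavily compressed copy of the image at `path`."""
    img = Image.open(path).convert("RGB")
    img = img.resize(size, Image.BILINEAR)              # throw away spatial detail
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)  # add compression artifacts
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# The label is decided on the full-resolution original; the training example is
# built from the degraded copy, e.g.:
# x_train = degrade("dog_0001.jpg"); y_train = "dog"
```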
There are also AIs that can improve themselves beyond the human-provided data. The AlphaGo project started off with human Go matches as training data and evolved into tabula-rasa training by self-play. By the end, the AI beat the best human players.
Best-Neat-9439 t1_j2yn1nx wrote
>There are also AIs that can improve themselves beyond the human-provided data. The AlphaGo project started off with human Go matches as training data and evolved into tabula-rasa training by self-play. By the end, the AI beat the best human players.
Neither AlphaGo Zero nor AlphaZero was trained with supervised learning. They were both trained with reinforcement learning (plus MCTS, so it's not purely RL; it's more like RL + planning). It's then not surprising that they can beat humans: their "ground truth" doesn't come from humans anyway.
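For intuition, here's a toy sketch of the self-play idea. It is not AlphaZero's actual algorithm (no neural network, no MCTS); it's just a tabular agent that learns tic-tac-toe values entirely from the outcomes of games against itself, with no human data anywhere:

```python
# Toy illustration of "self-play with no human data": a tabular agent learns
# tic-tac-toe values purely from games against itself. This is NOT AlphaZero;
# it only shows that the training signal can come from game outcomes rather
# than human examples.
import random
from collections import defaultdict

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)   # value of (board-after-my-move, me), learned from self-play
ALPHA, EPSILON = 0.1, 0.2

def choose_move(board, player):
    """Epsilon-greedy over the learned values of the boards each move leads to."""
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda i: values[tuple(board[:i] + [player] + board[i+1:]), player])

def self_play_game():
    board, player, history = ["."] * 9, "X", []
    while True:
        move = choose_move(board, player)
        board[move] = player
        history.append((tuple(board), player))
        if winner(board) or "." not in board:
            break
        player = "O" if player == "X" else "X"
    w = winner(board)
    for state, p in history:              # Monte Carlo update toward the final result
        target = 0.0 if w is None else (1.0 if w == p else -1.0)
        values[(state, p)] += ALPHA * (target - values[(state, p)])

for _ in range(20_000):
    self_play_game()
print("learned values for", len(values), "afterstates")
```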
horselover_f4t t1_j31e8tm wrote
>The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves.[21] Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.
MustachedSpud t1_j31glqv wrote
The "Zero" in AlphaGo Zero means it starts with no human knowledge. They figured out that this approach eventually becomes stronger than the base AlphaGo strategy.
horselover_f4t t1_j32ebo0 wrote
But the person you responded to didn't talk about the zero variant. Maybe I misread the point of your post?
MustachedSpud t1_j34qzvu wrote
The person two comments up was talking about the Zero version. The thread is about how AI can surpass humans, and the point is that they already can if they have a way to improve without human data.
horselover_f4t t1_j355j8z wrote
Still does not make sense to me, as the person before was specifically talking about the vanilla version. But no point in arguing about any of that, I guess.
MustachedSpud t1_j35gn03 wrote
Are you trolling me or something? YOU are the person I responded to. YOU brought up the vanilla version, in a response to someone else who was talking about the Zero version. The Zero version is most relevant here because it learns from scratch, without human knowledge.
horselover_f4t t1_j368cx1 wrote
>There are also AIs that can improve themselves beyond the human-provided data. The AlphaGo project started off with human Go matches as training data and evolved into tabula-rasa training by self-play. By the end, the AI beat the best human players.

>YOU brought up the vanilla version, in a response to someone else who was talking about the Zero version.
... who responded to someone who talked about the vanilla version. In my first response to you, I did not realize you were not the person I had originally responded to. Apparently you have not read what they were responding to, which seems to be why you're missing the context.
I assume they must be laughing if they see us still talking about this.
BrotherAmazing t1_j2zuccr wrote
It’s not really fair to have a dog labelled as a Japanese spaniel (which it is), let a deep neural network train on a bunch of images of Japanese spaniels for a week, and then have me try to identify the dog when I’ve never heard of, seen, or read about a Japanese spaniel before, so I guess “papillon”, and then tell me the CNN is “superior”.
If you consolidated all the dog classes into a single “dog” class, humans wouldn’t get a single one wrong. Also, if you took an intelligent person and let them study these classes with flashcards for as many training iterations as the CNN gets, I imagine the human would perform at least comparably to, if not better than, the CNN. But that usually is not how the test is performed.
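For what it's worth, that consolidation is easy to score on the model side too. A small sketch, assuming the usual ImageNet-1k class ordering (the dog-breed index range and the example class indices are approximate), with `fine_preds`/`fine_labels` standing in for whatever an evaluation run produces:

```python
# Sketch: re-score an ImageNet-style evaluation at the coarse "dog vs. not dog"
# level by collapsing every dog-breed class into one label before comparing.
# The 151-268 index range and the example indices below are illustrative.
DOG_CLASSES = set(range(151, 269))

def to_coarse(class_idx):
    return "dog" if class_idx in DOG_CLASSES else "not_dog"

def coarse_accuracy(fine_preds, fine_labels):
    pairs = list(zip(fine_preds, fine_labels))
    return sum(to_coarse(p) == to_coarse(y) for p, y in pairs) / len(pairs)

# E.g. the model predicts papillon (~157) where the truth is Japanese spaniel
# (~152): wrong at the breed level, still correct at the "dog" level.
print(coarse_accuracy([157, 1], [152, 1]))  # -> 1.0
```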