RezaRob t1_isdkb67 wrote
Reply to comment by todeedee in [P] Youtube channel for ML - initial feedback and suggestions by Fun_Wolverine8333
Speaking only in general here: often in ML, we don't know exactly why things work theoretically. Even for something like convolutional neural networks, I'm not sure we have a complete understanding of why they work or what happens internally. There have certainly been papers that called into question our assumptions about how these models work; adversarial images are a good example of behavior we wouldn't have expected. So in ML, the method/algorithm, and whether it works, are sometimes more important than an exact theoretical understanding of what's happening internally. You can't argue with superhuman AlphaGo performance.
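Since adversarial images come up above, here is a minimal sketch of one standard way to produce them, the fast gradient sign method (FGSM) from Goodfellow et al., just to make the point concrete. The pretrained model, epsilon value, and dummy input are illustrative assumptions on my part, not anything from the original thread.

```python
# Minimal FGSM sketch in PyTorch. Assumes torchvision is installed;
# the resnet18 classifier and epsilon=0.03 are arbitrary stand-ins.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Perturb `image` slightly in the direction that increases the loss,
    which often flips the model's prediction while staying visually similar."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage with a random tensor just to show the call shape
# (a real attack would use an actual image and its true label).
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])  # some ImageNet class index
x_adv = fgsm_attack(x, y)
print((x_adv - x).abs().max())  # per-pixel change is bounded by epsilon
```

The perturbation is tiny and imperceptible to a human, yet it can change the prediction, which is exactly the kind of behavior our intuitive picture of CNNs didn't anticipate.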