Submitted by Fun_Wolverine8333 t3_xzoomn in MachineLearning
todeedee t1_irq0p5r wrote
Reply to comment by Affectionate_Log999 in [P] Youtube channel for ML - initial feedback and suggestions by Fun_Wolverine8333
Disagree -- the logic behind Bayesian estimators is extremely finicky. It took me fucking *years* to wrap my head around Variational Inference, and I still don't have a great intuition for why MCMC works. If the theory checks out, the implementation is pretty straightforward.
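(For context on the MCMC point: the mechanics really are simpler than the theory. A minimal random-walk Metropolis-Hastings sampler, targeting a standard normal as an illustrative example -- this is just a toy sketch, not anyone's production setup -- fits in a few lines:)

```python
import math
import random

random.seed(0)

def metropolis_hastings(log_p, x0, n_samples, step=1.0):
    """Random-walk Metropolis-Hastings.

    Propose x' ~ Normal(x, step); accept with probability
    min(1, p(x') / p(x)), computed in log space for stability.
    """
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        # Accept or reject based on the log-density ratio.
        if math.log(random.random()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log-density known only up to a constant --
# which is exactly the situation MCMC is built for.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
burned = samples[2000:]  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
```

The implementation is ~15 lines; the hard part (why this chain's stationary distribution is the target, and how fast it mixes) is the theory the comment is talking about.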
RezaRob t1_isdkb67 wrote
Speaking only in general here: often in ML, we don't know exactly why things work theoretically. Even for something like convolutional neural networks, I'm not sure we have a complete understanding of "why" they work, or of what happens internally. There have certainly been papers that called into question our assumptions about how these things work. Adversarial images are a good example of something we wouldn't have expected. So in ML, the method/algorithm, and whether it works, are sometimes more important than an exact theoretical understanding of what's happening internally. You can't argue with superhuman AlphaGo performance.
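(To make the adversarial-images point concrete: the classic gradient-sign trick works even on a plain logistic-regression model. The sketch below uses made-up fixed weights rather than a trained network -- it only illustrates that a tiny, structured perturbation of the input moves the model's logit toward the decision boundary by a predictable amount:)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy model: hand-picked logistic-regression weights,
# not trained on anything.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

x = np.array([1.0, -1.0, 1.0])  # confidently classified as class 1
y = 1.0

p = sigmoid(w @ x + b)
# Gradient of the cross-entropy loss w.r.t. the *input* x is (p - y) * w.
grad_x = (p - y) * w

# Fast-gradient-sign perturbation: nudge every input coordinate by
# eps in whichever direction increases the loss.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

logit_before = w @ x + b
logit_after = w @ x_adv + b
# The logit drops by exactly eps * ||w||_1 -- the perturbation exploits
# every weight coordinate at once, which is why such small changes can
# have an outsized effect on high-dimensional models like CNNs.
```

In three dimensions the effect is modest; in the million-dimensional input space of an image classifier, the same mechanism is what makes adversarial images so surprising.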