Submitted by bigbossStrife t3_z2a0xg in MachineLearning
alterframe t1_ixhvkgo wrote
It all boils down to how would you behave when something goes wrong. The weights of your layer do not converge? Try some more or less random hyperparameter changes and maybe they finally will. Sometimes that's the only thing you can come up with. Frameworks are just fine for that.
Maybe you have some extra intuition about the problem and want to try something more sophisticated to probe it better? You'd still be fine with a framework, as long as you deeply understand how it works, because the change you want to make may fall outside its typical usage. Otherwise, you'll just get frustrated when something doesn't work as you expected.
I get the sentiment against using high-level frameworks. At the beginning they all look like toys for newbies, competing with each other over the shortest MNIST example. However, as more and more people use them, they get more and more refined. I think that at this point Lightning may be worth giving a try. I myself would have been strongly against it a few years ago, and I was quite annoyed by its rise in popularity, but ultimately it has turned into something of a standard now.