Submitted by bigbossStrife t3_z2a0xg in MachineLearning
For an ML project I have at work, I've been considering whether I should build my training and deployment pipeline in plain PyTorch or use something like PyTorch Lightning instead. I like how easy Lightning is to use and all the little things it automates on its own, but I also like knowing what happens in the background and being able to do specific things when needed. If I end up spending more time reading a framework's documentation to figure out how to do one little thing than I would have spent just making it work myself, that feels like a waste of time.
So that's why I decided to go with a PyTorch-only implementation. The thing is, as the project went on, I kept implementing more and more pieces and felt like I was redoing a lot of what these frameworks already offer, like automatic batch-size finding, early stopping, etc.
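For context, this is roughly the kind of thing Lightning gives you out of the box. It's a minimal sketch with a toy model and random data, assuming a recent pytorch_lightning release; the automatic batch-size finder's API has moved around between versions (it now lives in the tuner), so I'm only showing early stopping here:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping


class LitRegressor(pl.LightningModule):
    """Illustrative toy model; any LightningModule works the same way."""

    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
        self.lr = lr

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        # Logging "val_loss" is what the EarlyStopping callback monitors.
        self.log("val_loss", nn.functional.mse_loss(self.net(x), y))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)


# Random tensors just to make the sketch runnable end to end.
xs, ys = torch.randn(256, 16), torch.randn(256, 1)
train_dl = DataLoader(TensorDataset(xs, ys), batch_size=32)
val_dl = DataLoader(TensorDataset(xs, ys), batch_size=32)

trainer = pl.Trainer(
    max_epochs=100,
    # Early stopping comes for free as a Trainer callback.
    callbacks=[EarlyStopping(monitor="val_loss", patience=5, mode="min")],
)
trainer.fit(LitRegressor(), train_dl, val_dl)
```

In plain PyTorch, both the validation loop and the patience/stopping logic are things you end up writing and maintaining yourself.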
I was wondering what other people's workflow looks like here, and I'm curious to hear some opinions on this.
linverlan t1_ixg4u3s wrote
This feels backwards. Best approach is to use the most off-the-shelf implementation you have available for a base model and implement specific features or refinements as needed for your use case.
This way you move quickly, get acceptable performance right away, and can make iterative improvements as long as time allows.
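As a hedged sketch of that workflow in Lightning terms (class and metric names here are illustrative, not from the original post): you start from the stock Trainer and LightningModule defaults, then take over individual pieces, such as the optimization loop below, only when the defaults stop fitting the use case.

```python
import torch
from torch import nn
import pytorch_lightning as pl


class CustomLoopModel(pl.LightningModule):
    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
        self.lr = lr
        # Opt out of Lightning's automatic optimization only for this one
        # refinement; checkpointing, logging, and device handling stay stock.
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        x, y = batch
        opt = self.optimizers()
        loss = nn.functional.mse_loss(self.net(x), y)
        # Hand-rolled update step, e.g. to insert custom logic between
        # backward and the optimizer step.
        opt.zero_grad()
        self.manual_backward(loss)
        opt.step()
        self.log("train_loss", loss)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)
```

The point is that the off-the-shelf path and the fully custom path aren't mutually exclusive; you can start with the former and selectively drop down to the latter where your project actually needs it.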