Submitted by LightGreenSquash t3_yrsqcz in MachineLearning
aideeptalk t1_ivzw93r wrote
I think hands-on coding experience is exceptionally valuable for understanding.
Pick a paper you like that has code (Papers with Code helps with that search), that is considered important (so you aren't investing time in some less important tangent), with an architecture worth digging into (like transformers for some vision application you care about), and that preferably also has a Jupyter notebook walkthrough on the web. Find your own toy dataset or problem to apply it to - keep it pretty simple, because the goal is understanding, not moving mountains. Review all of the above. Then reimplement a toy version on your own for your own dataset, referring back to the original as needed. Toy versions keep computing requirements manageable - you are looking for positive results that prove it works and that you understood things, not the best results ever, so a toy version is fine. Expect it to take a few highly focused days to a week, depending on your prior knowledge and your programming skills in Python and either PyTorch or TensorFlow.
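To make that concrete, here's a minimal sketch of what such a toy reimplementation might look like, assuming PyTorch and a transformer-for-vision paper as the target - every name and hyperparameter here is a placeholder, not anyone's actual published code:

```python
# Toy Vision-Transformer-style classifier - a sketch, not a faithful
# reimplementation of any particular paper. Sizes are arbitrary.
import torch
import torch.nn as nn

class ToyViT(nn.Module):
    def __init__(self, image_size=32, patch_size=8, dim=64, depth=2,
                 heads=4, num_classes=10):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patchify with a strided conv, then flatten patches into a sequence.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        x = self.patch_embed(x)             # (B, dim, H', W')
        x = x.flatten(2).transpose(1, 2)    # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])           # classify from the [CLS] token

# Shape sanity check on random data.
model = ToyViT()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

Even at this size you get the sanity check described above: if the shapes line up and a short training run on your toy dataset improves the loss, you've probably understood the moving parts.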
Repeat as needed to stay current with the major trends. It gets much faster the second and subsequent times through, especially if you structure your code so you can drop in different models. Consider using PyTorch Lightning to get that flexibility.
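For example, a sketch of that structure with PyTorch Lightning (assuming pytorch_lightning is installed; Classifier is a made-up name, and ToyViT refers to the sketch above):

```python
# The LightningModule owns the training logic; the backbone is passed in,
# so swapping architectures is a one-line change at the call site.
import torch
import torch.nn as nn
import pytorch_lightning as pl

class Classifier(pl.LightningModule):
    def __init__(self, backbone: nn.Module, lr: float = 1e-3):
        super().__init__()
        self.backbone = backbone
        self.lr = lr
        self.loss_fn = nn.CrossEntropyLoss()

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self.backbone(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

# Dropping in a different model is just a different constructor argument:
# trainer = pl.Trainer(max_epochs=5)
# trainer.fit(Classifier(ToyViT()), train_dataloader)
```

The training loop lives in one place, so comparing architectures on the same data is just a matter of constructing a different backbone.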
BTW, if you are an algorithms person, focus on coding the algorithms - e.g. a recent transformer variant as your toy example. If you are an applications person, use the stock libraries and code a toy application.
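The applications route might look like this - a sketch that fine-tunes a stock pretrained model from torchvision rather than implementing anything from scratch (the weights= API assumes torchvision 0.13 or newer):

```python
# Stock pretrained ResNet-18 as an off-the-shelf backbone; only the
# classification head is replaced for a hypothetical 10-class toy task.
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                  # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 10)   # new trainable head
```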
The balance of time between theory and hands-on coding depends in large part on your career objectives. There are way too many papers to keep up with, so you need a triage strategy. For example (maybe not right for you): review almost all the latest papers in some narrow field related to your PhD (e.g. AI for radiology interpretation of malignancies in chest X-rays), and review only the significant papers across the broader AI spectrum, perhaps lagging a year or two behind so you can see which papers actually turned out to be highly significant.