Comments


Murmeltier8000 t1_j7c9uyt wrote

Only if you want to build AI with quantum processors

−3

velcher t1_j7ccfbs wrote

Yes, it is useful. The breakdown between PhD types would depend on the specific needs of the hiring organization.

9

UnusualClimberBear t1_j7chhkq wrote

This is all about timing. Currently, stats/maths skills are not at their most valued.

3

UnusualClimberBear t1_j7cjxu9 wrote

Basically, current trends simply ignore reasonable practices such as proper train/valid/test splits. For now, the bigger, the better. This calls for quite a lot of tech support functions (parallelization and data pipelining in particular) rather than theory-related ones.

5

Mechanical_Number t1_j7ck5au wrote

This is a very broad question but in general, yes.

In many cases there is such a big overlap between the fields that, unless someone is doing something highly specialised (e.g. some very particular problems in Measure Theory or Computer Vision), the underlying skills are transferable and almost interchangeable (e.g. in Gaussian Process- or Causality-related topics).

3

AdFew4357 OP t1_j7ckwcf wrote

How would you recommend a student like me, who's a PhD student in Statistics, make himself marketable for industry ML research? I'm worried that during my time as a Statistics PhD student my work will be too "classical" and "foundational", lying more in the statistics domain than in ML, and so won't be attractive to recruiters in the ML research space. How would you advise me to come across as more of an ML researcher than a pure theoretical statistician? Just focus on more ML-related applications in my research?

1

tripple13 t1_j7cpqqm wrote

Sure, could very well be.

Just have to leave all your p-values at the door.

2

Mechanical_Number t1_j7d4mlo wrote

I think how "classical"/"foundational" your work is depends on your exact thesis topic, so it is hard to judge. And even then, you can always put a spin on it. For example, a PhD thesis on "Reformulations of James-Stein estimators in the context of left-censored data" would indeed be quite classical, but if you want to focus on bio-themed ML applications, a strong theoretical background in working with censored data is a real asset. More generally, the standard guidelines apply:

  1. Collaborate with people outside your domain. It doesn't matter much whether they are from the Biosciences, the Medical School, or the Languages school: show that while you are specialised, you can apply your specialism (see point 3 too). This can also help get a foot in the door for conference papers (see point 2 too).
  2. Publish multiple (non-junk) papers. That one awesome final-year Annals of Statistics paper might be the clutch 3-pointer for a junior faculty position, but a steady research output stream, even if less impactful, shows you can deliver continuously. Early on, it is actually quite hard, particularly for non-specialists, to evaluate the significance of a publication.
  3. Code reasonably well. I am not talking C++ template metaprogramming here, but be able to show that you can create an R or a Python package with reasonable structure and quality. Extra points if you can use "ML tools" like JAX or PyTorch Lightning. You are not going to be a lone gunman; you will be part of a team. (Relates to point 1 a bit.)
  4. Know your ML fundamentals well. That's not that hard; you are a Stats PhD. Being able to adequately explain GBMs, backprop in NNs, GMMs, or Lagrangians means that someone from ML can talk shop with you. Sure, you won't know the particulars of AutoDiff or PPO; who cares? They probably won't know them either unless they are actively working on the matter.
  5. Network. There is some network connectivity involved in hiring, as well as in information diffusion. In addition, references matter. I remember our head of recruitment at my first job telling me that if I knew a good candidate I should let them know, because for each position they would screen internal referrals first (it makes sense: someone has already done "some screening").
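On point 4: the level of fluency I mean is being able to show that backprop is just the chain rule applied by hand. A minimal sketch in plain NumPy (the toy setup, learning rate, and iteration count are all illustrative, not from any particular course):

```python
import numpy as np

# Toy model: y_hat = w * x + b with MSE loss.
# "Backprop" here is literally the chain rule written out by hand.
rng = np.random.default_rng(0)
x = rng.normal(size=10)
y = 3.0 * x + 1.0  # targets generated from a known linear function

w, b = 0.0, 0.0
lr = 0.1
for _ in range(200):
    y_hat = w * x + b
    err = y_hat - y                  # dL/dy_hat (up to the factor 2/n)
    grad_w = 2.0 * np.mean(err * x)  # chain rule: dL/dw = dL/dy_hat * dy_hat/dw
    grad_b = 2.0 * np.mean(err)      # chain rule: dL/db = dL/dy_hat * dy_hat/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # gradient descent recovers ~3.0 and ~1.0
```

Being able to walk through why each `grad_` line follows from the loss is exactly the kind of "talk shop" fluency that matters; the rest (AutoDiff internals, etc.) is library plumbing.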
1