itsstylepoint OP t1_ixenran wrote

Hey thanks.

I am not a big fan of notebooks and rarely use them. When I do, I prefer using VS Code notebooks. So maybe I will make a few vids with notebooks in the future, but will likely stick to Neovim.

P.S. As for loss plots, monitoring performance, and those kinds of things, I prefer using tools like WandB, TensorBoard, etc. Will be covering those as well.

2

itsstylepoint OP t1_ixeisda wrote

Yes, that is how it usually works with my impls! (check out a few vids)

As for mixed precision and metrics - I will be making separate vids for both, and of course, for every implemented model, I will try to find a dataset to demo train/eval.

It is cool that you mentioned mixed precision, as I already have the materials ready for this vid - will be discussing mixed precision, quantization (post-training and quantization-aware training), pruning, etc. Improving perf!
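To give a flavor of the post-training quantization mentioned above, here is a minimal sketch of symmetric int8 quantization in plain Python. The function names and the single-scale-per-tensor scheme are my own simplifications for illustration, not any particular framework's API:

```python
def quantize_int8(weights):
    # Symmetric post-training quantization: map floats to int8
    # using a single scale factor for the whole tensor.
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return [qi * scale for qi in q]

weights = [0.5, -1.2, 0.03, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Per-weight round-trip error is bounded by scale / 2.
```

Real implementations (e.g., per-channel scales, zero points for asymmetric ranges) add more machinery, but the idea of trading precision for smaller, faster weights is the same.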

4

itsstylepoint OP t1_irjh4n3 wrote

Yup, all implementations are numerically stable.

Note that I do not discuss numerical stability issues for all activation functions, but for those where the intuitive implementation is not numerically stable (i.e., Sigmoid, Tanh).
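For readers wondering what the instability looks like in practice, here is a small sketch for Sigmoid: the textbook formula overflows for large negative inputs, while branching on the sign keeps `exp()` arguments non-positive. The function names are mine, for illustration:

```python
import math

def sigmoid_naive(x):
    # Textbook form: exp(-x) overflows for large negative x.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_stable(x):
    # Branch on sign so exp() only ever sees a non-positive
    # argument, which underflows gracefully to 0 instead.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

# sigmoid_naive(-800.0) raises OverflowError;
# sigmoid_stable(-800.0) returns 0.0.
```

The same sign-branching trick applies to Tanh when built from exponentials.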

I also have a separate video discussing numerical stability: AI/ML Model API Design and Numerical Stability (follow-up). But this is in the context of Gaussian Naive Bayes.

1