Submitted by nullspace1729 t3_y0dk5c in MachineLearning
neuroguy123 t1_irtuo1b wrote
I recommend some of the YOLO versions. I had fun with those and learned a lot about complex loss functions.
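For a taste of what those losses involve: the YOLO objectives are built around intersection-over-union between predicted and ground-truth boxes. A minimal sketch (my own illustrative helper, not from any YOLO codebase), assuming boxes in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes don't overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

The full loss then combines this localization term with objectness and class terms, weighted per grid cell, which is where it gets interesting.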
I also implemented a bunch of attention-models starting with Graves', through Bahdanau and Luong, and then Transformers. The history of attention in deep learning is very interesting and instructional to implement.
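Bahdanau's additive attention is a good one to implement from scratch because it's just a few matrix products. A rough NumPy sketch of the scoring-and-context step, with my own illustrative names (the learned parameters W_q, W_k, v would come from training):

```python
import numpy as np

def bahdanau_attention(query, keys, W_q, W_k, v):
    """Additive (Bahdanau-style) attention.

    query: (d,) decoder state; keys: (T, d) encoder states.
    Returns a context vector (weighted sum of keys) and the weights.
    """
    # Additive score: v . tanh(W_q q + W_k k_t) for each timestep t.
    scores = np.tanh(query @ W_q + keys @ W_k) @ v      # (T,)
    # Softmax over time (shifted for numerical stability).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context = weights @ keys                            # (d,)
    return context, weights
```

Luong's variant swaps the additive score for a simple dot product or bilinear form, and the Transformer drops the recurrence entirely; implementing them in sequence makes the lineage very clear.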
Another one I had fun implementing was WaveNet, as it really forces you to dive deep into convolution variants, PixelCNN, and some gated network structures. Conditioning it was then an extra challenge (similar to the attention networks).
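The two pieces that trip most people up are the dilated causal convolution (each output can only see present and past samples, with gaps set by the dilation) and the gated activation (a tanh branch multiplied by a sigmoid gate). A toy 1-D sketch with illustrative names of my own:

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """Causal 1-D convolution: out[t] = sum_i w[i] * x[t - dilation*i].

    Left-pads with zeros so out[t] never sees samples after t.
    """
    T, k = len(x), len(w)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[i] * xp[t + pad - dilation * i] for i in range(k))
        for t in range(T)
    ])

def gated_unit(x, w_filter, w_gate, dilation):
    """WaveNet-style gated activation: tanh(conv) * sigmoid(conv)."""
    f = np.tanh(dilated_causal_conv(x, w_filter, dilation))
    g = 1.0 / (1.0 + np.exp(-dilated_causal_conv(x, w_gate, dilation)))
    return f * g
```

Stacking these with dilations doubling per layer (1, 2, 4, 8, ...) is what gives WaveNet its exponentially growing receptive field; conditioning adds an extra term inside the tanh and sigmoid.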
One thing I've been meaning to get into is DeepCut and other pose-estimation models, because I don't know much about the linear programming and the other math they use.