Tober447
Tober447 t1_j7uyy1n wrote
Reply to comment by zanzagaes2 in [P] Creating an embedding from a CNN by zanzagaes2
>I guess I can use the encoder-decoder to create a very low-dimensional embedding and use the current one (~500 features) to find similar images to a given one, right?
Exactly. :-)
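A minimal sketch of that second part (finding similar images in the existing ~500-feature embedding), assuming the embeddings are stacked in a NumPy array with one row per image; names here are hypothetical:

```python
import numpy as np

def most_similar(embeddings: np.ndarray, query_idx: int, k: int = 5) -> np.ndarray:
    """Return indices of the k images closest to embeddings[query_idx] by cosine similarity."""
    # L2-normalise rows so a dot product equals cosine similarity
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]
    # Sort by descending similarity and drop the query itself
    order = np.argsort(-sims)
    return order[order != query_idx][:k]
```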
Tober447 t1_j7uq41s wrote
Reply to comment by zanzagaes2 in [P] Creating an embedding from a CNN by zanzagaes2
You would take the output of a layer of your choice from the trained CNN (as you do now) and feed it into a new model, the autoencoder. So yes, the weights of your model are kept, but you will have to train the autoencoder from scratch. Something like CNN (inference only, no backprop) --> Encoder --> Latent Space --> Decoder during training, and at inference time you take the output of the encoder (the latent space) and use it for visualization or similarity.
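A rough sketch of that setup (PyTorch assumed, all names hypothetical): the frozen CNN produces ~500-d features, and only the autoencoder is trained to reconstruct them; the bottleneck is the new embedding.

```python
import torch
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    def __init__(self, in_dim: int = 500, latent_dim: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),   # bottleneck / latent space
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim),       # reconstruct the CNN features
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

ae = FeatureAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(features: torch.Tensor) -> float:
    """One update on a batch of CNN features (computed beforehand under torch.no_grad())."""
    recon, _ = ae(features)
    loss = loss_fn(recon, features)   # reconstruction loss on the features
    opt.zero_grad()
    loss.backward()                   # gradients flow only through the autoencoder
    opt.step()
    return loss.item()

# At inference: embedding = ae.encoder(features)  ->  use for plots or similarity search.
```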
Tober447 t1_j7u90qp wrote
Reply to [P] Creating an embedding from a CNN by zanzagaes2
You could try an autoencoder with CNN layers and a bottleneck of 2 or 3 neurons to be able to visualize these embeddings. The autoencoder can be interpreted as non-linear PCA.
Also, similarity in this embedding space should correlate with similarity of the real images (or whatever your CNN extracts from them).
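A small visualization sketch for a 2-neuron bottleneck, assuming `codes` is an (N, 2) array of latent vectors and `labels` an (N,) array of class labels (both hypothetical names): nearby points should correspond to similar images if the embedding is doing its job.

```python
import matplotlib.pyplot as plt

def plot_embedding(codes, labels):
    """Scatter the 2-d latent codes, coloured by class label."""
    fig, ax = plt.subplots(figsize=(6, 6))
    scatter = ax.scatter(codes[:, 0], codes[:, 1], c=labels, cmap="tab10", s=10)
    ax.set_xlabel("latent dimension 1")
    ax.set_ylabel("latent dimension 2")
    fig.colorbar(scatter, ax=ax, label="class label")
    plt.show()
```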
Tober447 t1_j9oii7q wrote
Reply to [D] Tools for drawing/visualising Neural Networks that are pretty? by CHvader
There is an old thread on reddit: https://www.reddit.com/r/MachineLearning/comments/l1z8cr/d_best_way_to_draw_neural_network_diagrams/
Personally, I like http://alexlenail.me/NN-SVG/LeNet.html