Submitted by zanzagaes2 t3_10xt36j in MachineLearning
zanzagaes2 OP t1_j7uual3 wrote
Reply to comment by Tober447 in [P] Creating an embedding from a CNN by zanzagaes2
Yes, that's a great idea. I guess I can use the encoder-decoder to create a very low-dimensional embedding and use the current one (~500 features) to find similar images to a given one, right?
Your perspective has been really helpful, thank you
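For concreteness, the kind of encoder-decoder I have in mind is roughly the sketch below. Everything here is illustrative: the layer sizes, the 32-d bottleneck, and the random tensors standing in for the real ~500-d CNN features.

```python
# Minimal autoencoder sketch (PyTorch): compress pre-extracted ~500-d CNN
# features into a small bottleneck embedding. All sizes are illustrative.
import torch
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    def __init__(self, in_dim=500, bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, bottleneck),
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 128), nn.ReLU(),
            nn.Linear(128, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Toy training loop on random stand-in features (replace with real CNN features).
features = torch.randn(1024, 500)
model = FeatureAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    recon, _ = model(features)
    loss = loss_fn(recon, features)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The low-dimensional embedding to use for similarity search:
with torch.no_grad():
    _, embedding = model(features)   # shape: (1024, 32)
```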
schludy t1_j7v9pkm wrote
I think you're underestimating the curse of dimensionality. In 500d, most vectors will be far away from each other. You can't just use the L2 norm when comparing vectors in such a high-dimensional space.
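To make the point concrete, here is a rough sketch (random stand-in vectors, not your features) comparing the same 500-d query under L2 distance and cosine similarity. Cosine is a common alternative people reach for with high-dimensional embeddings, though it doesn't escape the curse of dimensionality either.

```python
# Sketch: rank neighbours of one query vector by L2 distance and by cosine
# similarity in a 500-d space. Data is random stand-in, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 500))   # stand-in for ~500-d CNN features
query = embeddings[0]

# L2 distances to the query
l2 = np.linalg.norm(embeddings - query, axis=1)

# Cosine similarities to the query
norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query)
cosine = embeddings @ query / norms

print("nearest by L2:    ", np.argsort(l2)[:5])
print("nearest by cosine:", np.argsort(-cosine)[:5])
```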
zanzagaes2 OP t1_j7vpd89 wrote
Yes, I think that's the case: I get far more reasonable values when I compare the 2D/3D projection of the embedding rather than the full 500-feature vector.
Is there a better way to do this than projecting into a smaller space (using dimensionality-reduction techniques or an encoder-decoder approach) and using L2 distance there?
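For concreteness, by "projecting and using L2 there" I mean roughly the sketch below (PCA chosen arbitrarily as the reduction step, random stand-in data, component count illustrative):

```python
# Sketch: reduce 500-d features with PCA, then do L2 nearest-neighbour
# search in the reduced space. Data and sizes are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 500))     # stand-in for real CNN features

pca = PCA(n_components=32)
reduced = pca.fit_transform(features)

nn_index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(reduced)
distances, indices = nn_index.kneighbors(reduced[:1])   # neighbours of image 0
print(indices)
```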
Tober447 t1_j7uyy1n wrote
>I guess I can use the encoder-decoder to create a very low-dimensional embedding and use the current one (~500 features) to find similar images to a given one, right?
Exactly. :-)