Submitted by MohamedRashad t3_y14lvd in MachineLearning
MohamedRashad OP t1_irw4soy wrote
Reply to comment by _Arsenie_Boca_ in [D] Reversing Image-to-text models to get the prompt by MohamedRashad
This is actually the first idea that came to me when thinking about this problem ... backpropagating from the output image until I reach the text representation that produced it, then using a distance function to find the closest words to that representation.
My biggest problem with this idea was the variable length of the input words ... the search space for the best words to describe the image becomes huge if there is no limit on how many words the prompt can contain.
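For a fixed prompt length, the idea would look roughly like this. This is only a minimal PyTorch sketch: `ToyGenerator`, the dimensions, and the random vocabulary table are placeholders standing in for a real frozen text-to-image model and its word-embedding table, not any actual model's API.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, PROMPT_LEN, IMG_DIM = 1000, 64, 8, 256

# Hypothetical frozen stand-in for a text-to-image model (embeddings -> image).
class ToyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(PROMPT_LEN * EMBED_DIM, IMG_DIM)

    def forward(self, emb):                      # emb: (PROMPT_LEN, EMBED_DIM)
        return self.net(emb.flatten())

generator = ToyGenerator().eval()
for p in generator.parameters():
    p.requires_grad_(False)

vocab_embeddings = torch.randn(VOCAB_SIZE, EMBED_DIM)  # stand-in word-embedding table
target_image = torch.randn(IMG_DIM)                    # the image we want to invert

# Optimize the continuous prompt embeddings directly by backpropagating
# the image reconstruction loss through the frozen generator.
prompt = torch.randn(PROMPT_LEN, EMBED_DIM, requires_grad=True)
opt = torch.optim.Adam([prompt], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(generator(prompt), target_image)
    loss.backward()
    opt.step()

# Snap each optimized embedding to its nearest vocabulary word (the
# "distance function" step).
dists = torch.cdist(prompt.detach(), vocab_embeddings)  # (PROMPT_LEN, VOCAB_SIZE)
closest_words = dists.argmin(dim=1)
print(closest_words.tolist())
```

Note that the nearest-word snapping at the end is lossy: the optimized embeddings need not sit near any real word, so the recovered prompt may not regenerate the image.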
What are your thoughts about this (I would love to hear them)?
_Arsenie_Boca_ t1_irwat46 wrote
That's a fair point. You would be stuck with a fixed length for the prompt.
Not sure if this makes sense, but you could use an LSTM with an arbitrary constant input to generate a variable-length sequence of embeddings, and optimize the LSTM's weights rather than the embeddings directly.
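Something like this, sketched with the same kind of toy stand-ins as above (again, `ToyGenerator` and all the dimensions are hypothetical placeholders): the LSTM's weights become the thing you optimize, and the prompt embeddings are whatever it emits when unrolled from a constant start vector.

```python
import torch
import torch.nn as nn

EMBED_DIM, MAX_LEN, IMG_DIM = 64, 12, 256

# Hypothetical frozen stand-in for the text-to-image model; mean-pooling
# over the sequence lets it accept prompts of any length.
class ToyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(EMBED_DIM, IMG_DIM)

    def forward(self, emb):                      # emb: (seq_len, EMBED_DIM)
        return self.net(emb.mean(dim=0))

class PromptLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(EMBED_DIM, EMBED_DIM, batch_first=True)
        # Arbitrary constant (non-trainable) start input, per the idea above.
        self.register_buffer("start", torch.randn(1, 1, EMBED_DIM))

    def forward(self, length):
        # Unroll autoregressively: feed each output back in as the next input.
        inp, state, outs = self.start, None, []
        for _ in range(length):
            out, state = self.lstm(inp, state)
            outs.append(out)
            inp = out
        return torch.cat(outs, dim=1).squeeze(0)  # (length, EMBED_DIM)

generator = ToyGenerator().eval()
for p in generator.parameters():
    p.requires_grad_(False)
target_image = torch.randn(IMG_DIM)

# Optimize the LSTM's weights, not the embeddings themselves.
prompt_net = PromptLSTM()
opt = torch.optim.Adam(prompt_net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    embeddings = prompt_net(MAX_LEN)  # the unrolled length can vary per run
    loss = nn.functional.mse_loss(generator(embeddings), target_image)
    loss.backward()
    opt.step()
```

The appeal of this parameterization is that the unrolled length can change between runs (or be searched over) without re-initializing anything, which is exactly what direct embedding optimization can't do.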