KlutzyLeadership3652 t1_j07w3yc wrote
Reply to [Research] Graph Embeddings for Graph shape? by J00Nnn
Graph Matching Networks could be a good starting point. As another comment said, you will need some supervision (annotated graphs).
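Graph Matching Networks themselves need a training setup, but the underlying idea of turning graph structure into a comparable embedding can be sketched without one. A classic, supervision-free stand-in (not GMN itself) is Weisfeiler-Lehman relabeling: iteratively hash each node's neighborhood labels and use the label histogram as a structural embedding. A minimal sketch in plain Python, using adjacency dicts as a hypothetical input format:

```python
from collections import Counter

def wl_embedding(adj, iterations=2):
    """Weisfeiler-Lehman-style structural embedding of a graph.

    adj: dict mapping each node to a list of its neighbors.
    Returns a Counter over refined node labels; isomorphic graphs
    produce identical histograms.
    """
    # Initialize each node's label with its degree.
    labels = {v: str(len(nbrs)) for v, nbrs in adj.items()}
    hist = Counter(labels.values())
    for _ in range(iterations):
        new_labels = {}
        for v, nbrs in adj.items():
            # Combine own label with the sorted multiset of neighbor labels.
            sig = labels[v] + "|" + ",".join(sorted(labels[u] for u in nbrs))
            new_labels[v] = sig
        labels = new_labels
        hist.update(labels.values())
    return hist
```

Two isomorphic graphs (e.g. a triangle with different node names) get identical histograms, while a path graph gets a different one, so histogram distance gives a rough "graph shape" similarity. A GMN learns a much richer version of this comparison end to end.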
KlutzyLeadership3652 t1_irwt908 wrote
Reply to comment by MohamedRashad in [D] Reversing Image-to-text models to get the prompt by MohamedRashad
Don't know how feasible this would be for you, but you could create a surrogate model that learns image-to-text. Use your original text-to-image model to generate images from text (open caption-generation datasets can give you good examples of captions), and train the surrogate to generate the text/caption back. This would be model-centric, so you don't need to worry about the many-to-many issue mentioned above.
This can be made more robust than a backpropagation-based inversion approach.
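The surrogate idea can be illustrated on a toy scale. Here the "text-to-image model" is a hypothetical black-box linear map (a stand-in for the real generator, which we can only query), and the surrogate is an ordinary least-squares fit on generated (text, image) pairs; in practice both would be neural networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "text-to-image" generator: a fixed random
# linear map from 8-dim "text" codes to 16-dim "images". We can call
# it, but pretend we cannot see its weights.
W = rng.normal(size=(16, 8))

def text_to_image(t):
    return t @ W.T

# Step 1: sample caption-like codes and run the forward model on them.
texts = rng.normal(size=(500, 8))
images = text_to_image(texts)

# Step 2: fit a surrogate image-to-text model on the generated pairs
# (least squares here; a neural captioner in practice).
G, *_ = np.linalg.lstsq(images, texts, rcond=None)

# Step 3: map a fresh image back to its text code via the surrogate.
t_new = rng.normal(size=(1, 8))
t_rec = text_to_image(t_new) @ G
```

In this idealized linear case the surrogate recovers the code almost exactly; with a real diffusion or GAN generator the surrogate only approximates an inverse, but it avoids per-image gradient-based optimization entirely.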
KlutzyLeadership3652 t1_j0hn1ba wrote
Reply to [P] Possible NLP approaches to extract 'goals' from text by 8hubham
You can look up 'extractive text summarization'. Or, if you're looking for to-the-point keyphrases within the paragraphs, 'keyphrase extraction'. See how off-the-shelf models work on your examples.
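Before reaching for a pretrained model, a frequency-based extractive baseline is worth trying as a sanity check. A minimal sketch, assuming goal-bearing sentences reuse the paragraph's most frequent content words (the stopword list and sentence splitter are deliberately crude):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "to", "of", "and", "is", "in",
             "for", "on", "that", "it"}

def extract_top_sentences(text, k=1):
    """Score sentences by average content-word frequency and return the top k."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"\w+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence):
        toks = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in toks if t not in STOPWORDS) / (len(toks) or 1)

    return sorted(sentences, key=score, reverse=True)[:k]
```

Off-the-shelf extractive summarizers and keyphrase extractors (e.g. TextRank-style methods) refine this same idea with graph-based sentence ranking instead of raw counts.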