
nmkd t1_irx0805 wrote

Feeding the CLIP interrogator result back into Stable Diffusion results in completely different images though.

It's not good.
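For context, the round trip being described looks roughly like this. A minimal sketch, assuming the pharmapsychotic clip-interrogator package and Hugging Face diffusers; the model names and file paths are just examples:

```python
import torch
from PIL import Image
from clip_interrogator import Config, Interrogator
from diffusers import StableDiffusionPipeline

# Caption an existing image with the CLIP interrogator.
source = Image.open("original.png").convert("RGB")
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
recovered_prompt = ci.interrogate(source)

# Feed the recovered text straight back into Stable Diffusion.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
result = pipe(recovered_prompt).images[0]
result.save("roundtrip.png")  # usually looks very different from original.png
```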

−5

milleniumsentry t1_irxa3sg wrote

No no. It only tells you what prompt it would use to generate a similar image. There is no actual prompt data accessible in the image or its metadata. With millions of seeds and billions of word combinations, you wouldn't be able to reverse engineer it.

I think having the prompt embedded in the file, for those interested, would be a great step. Then you could just read the file and go from there.
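Something like that is already possible with PNG text chunks. A minimal sketch using Pillow; the "prompt" and "seed" key names are just examples, not a standard:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_prompt(image: Image.Image, path: str, prompt: str, seed: int) -> None:
    """Write the prompt and seed into the PNG's text chunks so anyone can read them later."""
    meta = PngInfo()
    meta.add_text("prompt", prompt)
    meta.add_text("seed", str(seed))
    image.save(path, pnginfo=meta)

# Reading it back is just as simple:
# Image.open(path).text  ->  {"prompt": "...", "seed": "..."}
```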

9

visarga t1_irziac5 wrote

Now is the time to convince everyone to embed the prompt data in the generated images, since the trend is just starting. It could also be useful later, when we crawl the web, to separate real images from generated ones.
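A crawler could then do a cheap first pass just by looking for those text chunks. A rough sketch, assuming prompts are stored under a key like "prompt" (some front ends use other keys such as "parameters"):

```python
from PIL import Image

def looks_generated(path: str) -> bool:
    """Flag images that carry embedded prompt metadata in PNG text chunks."""
    try:
        with Image.open(path) as img:
            text = getattr(img, "text", {}) or {}  # only PNGs expose .text
            return "prompt" in text or "parameters" in text
    except OSError:
        return False  # not an image we can open
```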

4

milleniumsentry t1_is13giv wrote

I honestly think this will be a step in the right direction. Not actually for prompt sharing, but for refinement. These networks will start off great at telling you... that's a hippo... that's a potato... but what happens when someone wants to create a hippotato?

I think without some sort of tagging/self-reference, the data runs the risk of self-reinforcement... as the main function of the task is to bash a few things together into something else. At what point will it need extra information so that it knows, yes... this is what they wanted... this is a good representation of the task...

A tag-back loop would be phenomenal. Imagine you ask for a robotic cow with an astronaut friend. Some of those images will be lacking robot features, some won't look like cows... etc. Ideally, your finished piece would be tagged as well... but perhaps missing the astronaut... or another part of the initial prompt request. By stripping out tags the prompt never asked for, the two sets can be compared for a soft 'success' rate.
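That comparison could be as simple as re-captioning the output and checking how much of the original prompt survived. A rough sketch; whatever tagger produces `generated_tags` is a stand-in for your captioning model of choice:

```python
def soft_success(prompt_tags: set[str], generated_tags: set[str]) -> float:
    """Fraction of requested tags that the tagger found again in the finished image."""
    if not prompt_tags:
        return 0.0
    return len(prompt_tags & generated_tags) / len(prompt_tags)

# Example: asked for a robotic cow with an astronaut friend,
# but the tagger only sees the robot parts and the cow.
prompt_tags = {"robot", "cow", "astronaut"}
generated_tags = {"robot", "cow", "field"}
print(soft_success(prompt_tags, generated_tags))  # ~0.67 -> the astronaut went missing
```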

1