astrange t1_iy5mm2j wrote
Reply to comment by sam__izdat in [P] Stable Diffusion 2.0 and the Importance of Negative Prompts for Good Results (+ Colab Notebooks + Negative Embedding) by minimaxir
Yeah, "bad anatomy" and things like that come from NovelAI because its dataset has images literally tagged with that. It doesn't work on other models.
SD's training data is scraped off the internet, so something that might work is negative keywords associated with websites whose images you don't like, like "zillow", "clipart", "coindesk", etc.
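As a rough sketch of what that looks like in practice (assuming the Hugging Face diffusers pipeline and its `negative_prompt` argument, not anything specific to the original post's notebooks):

```python
# Minimal sketch: pass website-style negative keywords to a Stable Diffusion
# pipeline via diffusers' negative_prompt argument.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base",  # any SD checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a cozy living room, interior photography",
    # Keywords the scraped captions may associate with image sources you
    # don't want the result to resemble.
    negative_prompt="zillow, clipart, coindesk, watermark, low quality",
    num_inference_steps=30,
).images[0]
image.save("out.png")
```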
Or try clip-interrogator or textual inversion against bad-looking images (though IMO clip-interrogator doesn't work very well yet either).
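For the clip-interrogator route, a minimal sketch (the exact interface here is an assumption; check the clip-interrogator repo's README for the current API):

```python
# Sketch: caption a bad-looking output with clip-interrogator to see which
# keywords/sites CLIP associates with it, then move those into the negative prompt.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
bad_image = Image.open("bad_result.png").convert("RGB")  # hypothetical file
print(ci.interrogate(bad_image))
```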
sam__izdat t1_iy5myz9 wrote
> from NovelAI because its dataset has images literally tagged with that
That makes a lot more sense now, thanks. I thought they were also just using LAION 5B or some subset.