Submitted by cloudrunner69 t3_10e5jod in singularity
It's not CGI. It doesn't look anything like CGI. I don't know what it is, but there is an obvious difference between AI art and CGI. Even the most photorealistic CGI looks different from AI art. It's one of those things I can't quite put my finger on.
So the question is: what is going on with AI art, compared to what a human does to create CGI images, that makes them seem different? I kind of get how CGI is done, it's modelling and adding textures and all that different stuff, but AI doesn't do that. It isn't building up a model from a sketch to a complete design; it's doing something different.
I don't understand what the AI is actually doing to create these images. And what interests me even more is that I don't think there is any digital art made by humans that could pass as AI art. Am I wrong? What's going on here? Can someone show me art from a human that looks identical to the art AI is making?
If Warhammer 40k was an 80s movie https://www.youtube.com/watch?v=FLd1dzBLLkQ
I don't know, maybe I'm smoking too much, but this stuff is really strange. I just listened to someone on YouTube talk about how AI art is similar to what we see when we dream, but I completely disagree. That's what it used to look like; now AI art looks more like what we see when we are awake. It doesn't look dreamy at all, it looks real.
Zermelane t1_j4pj1oe wrote
> So the question is: what is going on with AI art, compared to what a human does to create CGI images, that makes them seem different? I kind of get how CGI is done, it's modelling and adding textures and all that different stuff, but AI doesn't do that. It isn't building up a model from a sketch to a complete design; it's doing something different.
This question is unfortunately both technical and deep, and it takes a lot of background to answer well. It doesn't help that the technical details are changing fast: the diffusion model architectures that are popular now are completely different from the GANs that were popular a few years ago, and maybe next year we'll have completely different models again.
But for a taste, look at the grid of horse images in this post, or the sequence of drawing the beach in this one. It's a little misleading to show those as a description of the process, since they don't explain anything about what happens inside the U-Net to get from one step to the next. But they do show that there is at least a sort of iterative process, and that it adds detail over time.
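To make "iterative process that adds detail" concrete, here's a toy sketch of a diffusion-style sampling loop. This is purely illustrative: a real diffusion model would replace `toy_denoiser` with a trained U-Net that predicts noise in the current sample, whereas here a fixed `target` array stands in for "the image the model is heading toward" so the loop stays self-contained and runnable. All the names here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))  # stand-in for the image a real model would produce

def toy_denoiser(x_t, t, steps):
    # Pretend the model predicts a cleaner image; blend the current noisy
    # sample toward the target, trusting the "prediction" more at later steps.
    alpha = 1.0 / (steps - t)
    return (1 - alpha) * x_t + alpha * target

steps = 50
x = rng.standard_normal((8, 8))  # start from pure Gaussian noise
for t in range(steps):
    x = toy_denoiser(x, t, steps)

print(np.abs(x - target).max())  # prints 0.0: the last step copies the prediction
```

The shape of the loop is the point: you start from noise and repeatedly nudge toward a coherent image, which is exactly why those step-by-step grids look like detail accumulating over time.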
At least with this architecture, anyway. GANs were different. Well, they probably still had internal representations that started off at a more sketch-like level, but that would have been harder to see in action. Recent models like MaskGIT add detail in yet another completely different way.
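For contrast, here's a toy sketch of the MaskGIT-style approach: instead of refining a noisy image, the model fills in a grid of masked image tokens over a few parallel decoding rounds. Again illustrative only; a real MaskGIT uses a transformer to predict all masked tokens at once and keeps the most confident ones each round, while here a fixed `oracle` array stands in for the model's predictions, and the unmasking schedule is a simple linear one rather than the paper's cosine schedule.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tokens = 64
oracle = rng.integers(0, 1024, n_tokens)  # stand-in for model-predicted token ids
MASK = -1

tokens = np.full(n_tokens, MASK)  # start with every token masked
steps = 8
for step in range(steps):
    masked = np.flatnonzero(tokens == MASK)
    # Reveal a fraction of the remaining masked tokens each round.
    k = int(np.ceil(len(masked) / (steps - step)))
    chosen = rng.choice(masked, size=k, replace=False)
    tokens[chosen] = oracle[chosen]

print((tokens == MASK).sum())  # prints 0: every token has been filled in
```

So both families are iterative, but the "units" of progress differ: diffusion sharpens a whole noisy canvas at every step, while MaskGIT commits to a growing subset of discrete tokens.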