Submitted by cloudrunner69 t3_10e5jod in singularity

It's not CGI. It doesn't look anything like CGI. I don't know what it is, but there is an obvious difference between AI art and CGI. Even the most photorealistic CGI looks different from AI art. It's one of those things I can't quite put my finger on.

So the question is: what is going on with AI art, compared to what a human does to create CGI images, that makes them seem different? Like, I kind of get how CGI is done: it's modelling and adding textures and all that different stuff, but AI doesn't do that. It isn't building up a model from a sketch to a complete design; it's doing something different.

I don't understand what the AI is actually doing to create these images. And what interests me even more is that I don't think there is any digital art made by humans that could pass as AI art. Am I wrong? Like, what's going on here? Can someone show me art from a human that looks identical to the art AI is making?

If Warhammer 40k was an 80s movie https://www.youtube.com/watch?v=FLd1dzBLLkQ

I don't know, maybe I'm smoking too much, but this stuff is really strange. I just listened to someone on YouTube talk about how AI art is similar to what we see when we dream. But I completely disagree. That's what it used to look like, but now AI art looks more like what we see when we are awake. It doesn't look dreamy at all; it looks real.

8

Comments


Zermelane t1_j4pj1oe wrote

> So the question is: what is going on with AI art, compared to what a human does to create CGI images, that makes them seem different? Like, I kind of get how CGI is done: it's modelling and adding textures and all that different stuff, but AI doesn't do that. It isn't building up a model from a sketch to a complete design; it's doing something different.

This question is unfortunately both technical and deep, and it takes a lot of background to answer it well. It doesn't help that the technical details are changing fast, and the diffusion model architectures that are popular now are completely different from the GANs that were popular a few years ago; and maybe in the next year we'll have completely different models again.

But for a taste, look at the grid of horse images in this post or the sequence of drawing the beach in this one. It's a little bit misleading to show those as a description of the process, as it doesn't explain anything about what happens inside the U-Net to get from one step to another. But it does show that there is at least a sort of an iterative process and it does add detail over time.

At least with this architecture, anyway. GANs were different. Well, they probably still had internal representations that started off at a more sketch-like level, but that would have been harder to see in action. Recent models like MaskGIT do the process of adding detail in yet another completely different way.

3

LittleTimmyTheFifth5 t1_j4p5cla wrote

Current A.I. is just a glorified calculator. A pretty cool, life/world-changing calculator, yes, but a calculator nonetheless.

1

Antok0123 t1_j4p8pgq wrote

Where does the AI get its sources from? Is it from a collection of images on the internet? Because it's a very biased AI. I keep prompting it to show detailed national attire of the 1600s Philippines. It was never accurate. I used the Philippines because its dress is a combination of Spanish and Southeast Asian. And it never gets it right: the dress looks Mexican, European or Asian, never Filipino, which is a combination of all three.

2

shmoculus t1_j4p9la8 wrote

For example, Stable Diffusion is based on the LAION datasets, which include billions of images scraped from the internet. Filipino culture is likely underrepresented on the general internet, so the models don't have a good representation of it.

People take the trained models and add stuff that wasn't in the training set, e.g. you can fine-tune on images of the 1600s Philippines to get what you want that is currently missing.

Have a look here for some custom models; people have added styles, concepts, people, etc.: https://civitai.com/. You could easily make one that does historical periods from all over the world. I'm sure people would love it.
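For a rough idea of how you'd actually use one of those custom checkpoints, here is a minimal sketch with Hugging Face's `diffusers` library. The model ID is made up; you would swap in whichever fine-tuned checkpoint you downloaded or trained yourself.

```python
# Rough sketch using the `diffusers` library. "someuser/sd-1600s-philippines"
# is a made-up model ID standing in for a custom checkpoint fine-tuned on
# period imagery (e.g. one downloaded from civitai or trained yourself).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "someuser/sd-1600s-philippines",   # placeholder: your fine-tuned checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "detailed national attire of the 1600s Philippines, studio portrait"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("attire.png")
```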

6

victorkin11 t1_j4p93vn wrote

Noise! We start from noise, use AI to recognize the few pixels that match the things we want, and use AI to denoise, removing the pixels we don't want. Each step finds a few more pixels; loop 50 times or more, and the image comes out.
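In toy pseudocode it looks roughly like this; `predict_noise` is a hypothetical stand-in for the trained denoiser, not any real model's API.

```python
import numpy as np

# Hypothetical stand-in for the trained denoiser (the U-Net). A real model
# looks at the noisy image and estimates which part of it is noise.
def predict_noise(image, step):
    return np.zeros_like(image)  # placeholder: a real model returns its noise estimate

steps = 50
image = np.random.randn(64, 64, 3)            # start from pure random noise

for step in reversed(range(steps)):
    noise_estimate = predict_noise(image, step)
    image = image - noise_estimate / steps    # peel away a little noise each step
    # (a real sampler also rescales things and may re-inject some noise)

# after ~50 loops, `image` is the model's best guess at a clean picture
```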

5

LittleTimmyTheFifth5 t1_j4p8yfb wrote

Some companies scrape (or download) images from the internet into datasets, which other companies or people can then download and use.

Oh, and some companies tinker with and censor their A.I. Oh, and ChatGPT is just predicting what will come next in a conversation using their language model and math; that's why I call it, and current A.I. in general, a glorified calculator.
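A minimal sketch of that "predicting what comes next" idea, using the small open GPT-2 model via the `transformers` library (ChatGPT is much larger and tuned differently, but the core operation is the same kind of guess):

```python
# Ask GPT-2 for the single most likely next token after a prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # a score for every possible next token

next_token_id = int(logits[0, -1].argmax())    # take the most likely one
print(tokenizer.decode([next_token_id]))       # most likely prints " Paris"
```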

0

OldWorldRevival t1_j4q3dws wrote

This is why it is IP theft still, literally just an uncredited digital collage.

AI will hit a point where it can do far more advanced things.

−5

isthiswhereiputmy t1_j4q7gd8 wrote

I'm a professional contemporary artist and have made about $50K in the past couple of years just on my ai-assisted artwork. There is not an obvious difference IMO, it's just that 99.99% of ai-art prompts are generic in the same way that the vast majority of traditional artists are generic. AI-art tools just raise the bar so that what used to be evidence of certain skill has been automated, but there are now different skills to wield in order to use creative-generators in more interesting or unprecedented ways.

1

dasnihil t1_j4r7ahy wrote

CGI = computer-generated imagery.

Human-made CGI involves working with video editors, 3D modelling/texturing/rendering, animation using math and physics (e.g. the coefficient of viscosity for fluids, friction, gravity, forces, etc.), and many other awesome tools.

AI-produced CGI involves none of those things. Let's say you want to produce an animation of water flowing through a tube. Traditional CGI is the human way: it involves math & physics and a lot of computation.

Now imagine training a neural network on millions of moving images of fluids of various viscosities and making it able to guess every next frame. If you give it a start (context) and the current state of the fluid (particles), it will be able to predict every frame after that. It was trained on data we generated using math & physics, and now it doesn't need them.

Just like you when you learned how to ride a bicycle. Go figure.
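A toy sketch of the contrast, with made-up names: the first function advances particles with hand-written physics, while the second stands in for a hypothetical network that has simply learned to guess the next frame from example data.

```python
import numpy as np

def physics_step(positions, velocities, dt=0.01, viscosity=0.1):
    """Traditional CGI route: advance the particles with explicit math & physics."""
    gravity = np.array([0.0, -9.8])
    velocities = velocities + gravity * dt             # pull everything down
    velocities = velocities * (1.0 - viscosity * dt)   # crude viscous drag
    positions = positions + velocities * dt            # move the particles
    return positions, velocities

def learned_step(state, model):
    """AI route: a trained network just guesses the next frame directly."""
    return model.predict(state)  # `model` is hypothetical, trained on simulator output

# The network never sees the equations in physics_step; it only ever saw
# example frames that something like physics_step produced.
```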

1

Trumaex t1_j4sm036 wrote

> CGI = computer-generated imagery.

Came with the intention of saying exactly this. It's by definition CGI :D But it's a different kind of CGI than what we had before.

1