Submitted by konstantin_lozev t3_123xa6r in MachineLearning

[D] Hello, everyone. I watched an explanation of how diffusion models are used to generate 2D images.

I just wonder about 3D: I think we are still somewhat far away from 3D model generation. First, I think it would be much more computationally expensive. Second, I am not sure we have a comparably large set of training data. And third, the input and output in 3D graphics are quite different from pixels, i.e. we work with triangles rather than a regular grid (though maybe that is not as hard, since we could always start with vertices and then estimate triangles).
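To make that representation point concrete, here is a minimal sketch (with made-up toy data, just for illustration) contrasting a 2D image tensor with a triangle mesh stored as a vertex array plus face indices:

```python
import numpy as np

# A 2D image: a dense, regular grid of pixels -- the format 2D diffusion models denoise.
image = np.zeros((64, 64, 3), dtype=np.float32)  # height x width x RGB

# A triangle mesh: an irregular, variable-size structure.
# Vertices are 3D points; faces index into the vertex array (here, a single tetrahedron).
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
], dtype=np.float32)            # shape (num_vertices, 3)
faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
], dtype=np.int64)              # shape (num_faces, 3), indices into `vertices`

print(image.shape, vertices.shape, faces.shape)
```

The image is a fixed-size grid, while the mesh has a variable number of vertices and faces and no natural ordering, which is part of why feeding it straight into a pixel-style diffusion model is awkward.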

What's your take on that?


Comments


a_marklar t1_jdx7ryn wrote

In a limited sense, we're already there; Microsoft's avatar generation is one example.

I'd guess it's very unlikely that generative models will use triangles directly. Point clouds, SDFs, and parametric surfaces all seem like better data formats for this kind of thing, and all of them can be converted to triangle meshes if that's required.
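To illustrate that last step, here is a minimal sketch (assuming scikit-image is available) that samples a sphere SDF on a regular grid and converts it to a triangle mesh with marching cubes; a generative model could in principle output the SDF volume and leave triangulation to a post-processing step like this:

```python
import numpy as np
from skimage import measure  # scikit-image's marching cubes implementation

# Sample a signed distance function (SDF) for a unit sphere on a regular grid.
# Negative inside the surface, positive outside, zero on the surface.
n = 64
xs = np.linspace(-1.5, 1.5, n)
x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 1.0

# Extract the zero level set as a triangle mesh (vertices + face indices).
verts, faces, normals, _ = measure.marching_cubes(
    sdf, level=0.0, spacing=(xs[1] - xs[0],) * 3
)
print(verts.shape, faces.shape)  # (N, 3) vertices and (M, 3) triangles
```

The model never has to predict triangles itself; it only has to produce a representation (here, an SDF volume) that a standard conversion step can turn into a mesh.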
