Submitted by Ezekiel_W t3_y0hk8u in singularity
SituatedSynapses t1_irsltdg wrote
Reply to comment by watermelontomato in AI art 256x faster by Ezekiel_W
I bet they'll discover some clever tricks to interpolate future frames from the previous frame's render and push that over 30 FPS. The biggest problem I've noticed with AI generation is the huge amount of VRAM it needs. I really don't know how they're going to get around that, and I'm very curious to see what sort of wild tricks they figure out! :)
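(For the curious: the simplest version of the "reuse the previous render" idea is just blending neighbouring frames instead of generating every frame from scratch. Real video models do something far smarter, but a minimal sketch of the blending idea, with NumPy and made-up dummy frames, looks like this.)

```python
import numpy as np

def blend_frames(prev_frame, next_frame, alpha=0.5):
    """Linearly interpolate between two rendered frames.

    A crude stand-in for the interpolation trick: cheap in-between
    frames are blended from two neighbouring full renders rather
    than being generated by the model themselves.
    alpha=0 returns prev_frame, alpha=1 returns next_frame.
    """
    blended = (1.0 - alpha) * prev_frame + alpha * next_frame
    return blended.astype(prev_frame.dtype)

# Two dummy 4x4 grayscale "frames" standing in for real renders
a = np.zeros((4, 4), dtype=np.float32)          # all-black frame
b = np.full((4, 4), 100.0, dtype=np.float32)    # all-gray frame

mid = blend_frames(a, b, alpha=0.5)
print(mid[0, 0])  # 50.0
```

This only generates half the frames with the expensive model and fills the rest for nearly free, which is where the FPS win would come from.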
dasnihil t1_irtbsir wrote
I agree, it does need more VRAM to output faster, but I'm more excited about upcoming videos that maintain coherency like a proper human-made video. Then add audio synthesis to that, and we can all implement our ideas and create amazing things. Even if the render takes time, it's still an amazing improvement to have.
-ZeroRelevance- t1_irvlkdo wrote
Seems like StabilityAI has some ideas for how to reduce it, since they seem pretty confident about getting Stable Diffusion below 1GB of VRAM. We'll have to wait and see, though.
kikechan t1_isaxkfh wrote
Wow, source?
-ZeroRelevance- t1_iscg70s wrote
Emad (the guy in charge of StabilityAI) has been saying on Twitter for a while now that he thinks they can get Stable Diffusion under a gigabyte of VRAM. Here's one of those tweets.
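(Rough napkin math on why sub-1GB is plausible: weight memory scales linearly with bytes per parameter, so dropping from fp32 to int8 quantization cuts the footprint 4x. The ~860M parameter count below is an approximate figure for the SD v1 UNet, and this ignores activations and framework overhead.)

```python
def weight_memory_gb(n_params, bytes_per_param):
    """Rough memory footprint of model weights alone, in GiB.

    Ignores activation memory, attention buffers, and framework
    overhead, so real VRAM usage is higher than this estimate.
    """
    return n_params * bytes_per_param / 1024**3

N = 860_000_000  # approximate parameter count for the SD v1 UNet

print(f"fp32: {weight_memory_gb(N, 4):.2f} GiB")  # fp32: 3.20 GiB
print(f"fp16: {weight_memory_gb(N, 2):.2f} GiB")  # fp16: 1.60 GiB
print(f"int8: {weight_memory_gb(N, 1):.2f} GiB")  # int8: 0.80 GiB
```

So at int8 the weights alone would fit under a gigabyte; combined with tricks like attention slicing and offloading, the claim doesn't look crazy.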
kikechan t1_isduh5j wrote
Thanks!