tinylobsta OP t1_j6ilizy wrote
Reply to comment by ziplock9000 in Nothing, Forever — AI-generated, always streaming parody of ‘90s sitcoms by tinylobsta
Yup, at the time we created this, there was no good tech in place for the artwork. With Stable Diffusion, we think there's a path to replacing our existing art pipeline with something generative, but we're still figuring it out. The problem is that 3D assets take a long time for models to create, and we run in near real time. But we'll get there.
tinylobsta OP t1_j6ijqlh wrote
Reply to comment by nocloudno in Nothing, Forever — AI-generated, always streaming parody of ‘90s sitcoms by tinylobsta
Hey, I think the audio is back. We're a live service, i.e., we run a lot of cloud-based systems, so we can get hit by outages occasionally, but the system is usually redundant enough to come back to life on its own.
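For the curious, the come-back-to-life part is basically a watchdog pattern. Here's a minimal sketch, not our actual setup (the health endpoint and service entry point are made-up names; in practice this is spread across managed cloud services):

```python
import subprocess
import time
import urllib.request

# Hypothetical names: the endpoint and entry point are placeholders.
HEALTH_URL = "http://localhost:8080/health"
START_CMD = ["python", "show_service.py"]

def healthy() -> bool:
    """True if the service answers its health check."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

proc = subprocess.Popen(START_CMD)
while True:
    time.sleep(30)
    # Restart if the process died or stopped responding.
    if proc.poll() is not None or not healthy():
        proc.kill()
        proc.wait()
        proc = subprocess.Popen(START_CMD)
```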
tinylobsta OP t1_j6haurw wrote
Hey peoples, co-creator of NF here. NF is created using generative algorithms and machine learning techniques for the dialogue, audio, speech, and pretty much everything other than the prefabricated 3D assets.
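At a very high level, the loop looks something like the sketch below. This isn't our actual code (the cast names are placeholders, and the model and TTS calls are stubbed out), but it shows the division of labor: generate a script, synthesize the voices, and hand the scene to the engine, which only has to pose prefab assets.

```python
import itertools
import queue
import random
import threading
import time

CHARACTERS = ["alice", "bob", "carol"]   # placeholder cast
SETTINGS = ["apartment", "comedy club"]  # prefabricated 3D sets
scene_queue: queue.Queue = queue.Queue(maxsize=2)  # small buffer keeps it near-live

def generate_script(setting: str) -> list[tuple[str, str]]:
    """Stand-in for the dialogue model: returns (character, line) pairs."""
    return [(c, f"<generated line for {c} in the {setting}>")
            for c in random.sample(CHARACTERS, k=2)]

def synthesize(line: str) -> bytes:
    """Stand-in for text-to-speech: returns audio for one line."""
    return line.encode()

def producer() -> None:
    """Generate scenes slightly ahead of playback."""
    for setting in itertools.cycle(SETTINGS):
        script = generate_script(setting)
        audio = [synthesize(line) for _, line in script]
        scene_queue.put({"setting": setting, "script": script, "audio": audio})

def play() -> None:
    """Playback: the engine poses prefab assets and plays the audio."""
    while True:
        scene = scene_queue.get()
        print(f"[{scene['setting']}]")
        for character, line in scene["script"]:
            print(f"{character}: {line}")
            time.sleep(0.5)  # stand-in for actual playback time

threading.Thread(target=producer, daemon=True).start()
play()
```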
We started this about four years ago, before OpenAI and SD kinda swept the landscape, and we're now thinking about how to incorporate pieces of those into it.
We're still thinking about where to take it next, so ideas and feedback are always appreciated.
tinylobsta OP t1_j6jp1jb wrote
Reply to comment by MarginCalled1 in Nothing, Forever — AI-generated, always streaming parody of ‘90s sitcoms by tinylobsta
We've considered this. The show is actually on about a two-minute delay, but otherwise it's entirely live. You can't see it in the iteration I have streaming right now, but the entire show is configurable: if you want less of one character, we can do that. Want more of one setting? We can do that, too! More lines per character? Also doable.
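To make that concrete, think of the config as a set of weights the generator samples from each scene. A toy version (the knob names here are made up; the real config has many more):

```python
import random

# Made-up knob names, just to illustrate the idea.
show_config = {
    "character_weights": {"alice": 1.0, "bob": 0.25, "carol": 1.0},
    "setting_weights": {"apartment": 3.0, "comedy club": 1.0},
    "lines_per_character": 4,
}

def pick_weighted(weights: dict[str, float]) -> str:
    """Sample one key in proportion to its weight."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names])[0]

# Turning a setting's weight up just makes it show up more often.
print(pick_weighted(show_config["setting_weights"]))  # "apartment" ~75% of the time
```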
It was a design decision we made so that the audience can (in the future) morph the narrative of the show. We actually monitor the Twitch chat and can pick up keywords to help shape the narrative (without defining it; the generative stuff does all that). So we wanted to stick to the two-minutes-per-scene concept. We might need to do something like batching in the future, though, if time-to-create keeps being a constraint for 3D models.
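The chat-monitoring part is simpler than it sounds. A rough sketch (the token, bot name, channel, and keyword list below are placeholders, not our real ones): connect to Twitch's IRC interface, tally keyword mentions over one scene's window, and fold the winners into the next generation prompt.

```python
import socket
import time
from collections import Counter

# Hypothetical credentials and keywords; the real list is curated.
SERVER, PORT = "irc.chat.twitch.tv", 6667
TOKEN, NICK, CHANNEL = "oauth:your-token", "yourbot", "#yourchannel"
KEYWORDS = {"apartment", "club", "coffee"}

def tally_keywords(seconds: float = 120.0) -> Counter:
    """Tally keyword mentions in chat for one scene's window (~2m)."""
    counts: Counter = Counter()
    deadline = time.time() + seconds
    with socket.create_connection((SERVER, PORT)) as sock:
        sock.settimeout(5.0)
        sock.sendall(f"PASS {TOKEN}\r\nNICK {NICK}\r\nJOIN {CHANNEL}\r\n".encode())
        buf = b""
        while time.time() < deadline:
            try:
                buf += sock.recv(4096)
            except socket.timeout:
                continue
            *lines, buf = buf.split(b"\r\n")
            for raw in lines:
                line = raw.decode("utf-8", "ignore")
                if line.startswith("PING"):  # Twitch requires a PONG reply
                    sock.sendall(line.replace("PING", "PONG", 1).encode() + b"\r\n")
                elif "PRIVMSG" in line:      # an ordinary chat message
                    text = line.split(":", 2)[-1].lower()
                    counts.update(w for w in text.split() if w in KEYWORDS)
    return counts

# The winning keywords get folded into the next scene's generation prompt.
print(tally_keywords().most_common(3))
```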