
Shelfrock77 t1_itfbv46 wrote

The people who voted anything other than “1-2 years” are deluded.

−4

ChronoPsyche t1_itfd5hq wrote

People who are so certain of themselves about their prediction abilities regarding something with so many unknown variables are deluded.

13

natepriv22 t1_itfoi6a wrote

Does that include you as well? You are technically also making a very certain prediction that something won't happen and that they will be proven wrong.

−1

ChronoPsyche t1_itfrh7h wrote

Casting doubt on a very unrealistic prediction made with certainty is not the same as making a very unrealistic prediction with certainty.

2

Akimbo333 t1_itfdrsx wrote

Why do you say that, out of curiosity? Generating entire movies would take massive processing power 🔋! And I'm not sure that current tech could render entire movies lol!!!

5

ChronoPsyche t1_itfe2fd wrote

Not to mention we have a huge limiting factor right now with context windows. Image generation is basically just catching up all at once to where text generation already is. It seems crazy because it's happening all at once, and there are a lot more improvements to be made before progress stalls, but until we figure out the memory problems inherent in our current AI algorithms, this progress will start to slow down.

3

Shelfrock77 t1_itfehyy wrote

I’m not going to say anything else; I’ll let this sub’s timeline prove it.

1

Akimbo333 t1_itfer1w wrote

Ok, all good, I respect that! I just wanted to know your perspective!

1

ChronoPsyche t1_itfkn8o wrote

I hope you're right. Truly, it would be amazing if we had text-to-feature-film in 1 to 2 years. I don't see any reason to think you will be, though.

AI growth comes in spurts and waves. We are in an AI summer right now. What's happening right now will slow down without some additional breakthroughs.

We gotta fix the memory problems we have, and until we do, AI will be limited to short-term content generation. Really amazing short-term content generation, but short-term nonetheless.

The memory issue is not trivial. It's not a matter of better hardware. It's a matter of hitting exponential running time limits. We need either a much more efficient algorithm or a quantum computer. I'd presume we will end up finding a better algorithm first, but it hasn't happened yet.

1

visarga t1_itgu5bi wrote

Not exponential, let's not exaggerate. It's quadratic. If you have a sequence of N words, you can have N×N pairwise interactions. This blows up pretty fast: 512 words → 262K interactions; 4,000 words → 16M interactions. See why it can't fit more than 4,000 tokens? It's that pesky O(N²) complexity.
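To make that quadratic blow-up concrete, here's a minimal NumPy sketch (an illustration of the scaling, not code from any particular model) that materializes the N×N attention score matrix for the two sequence lengths above:

```python
# Minimal sketch: self-attention builds an N x N score matrix,
# so memory/compute grow quadratically with sequence length N.
import numpy as np

def attention_scores(seq_len: int, d_model: int = 64) -> np.ndarray:
    """Pairwise query-key scores for a random token sequence."""
    rng = np.random.default_rng(0)
    q = rng.standard_normal((seq_len, d_model))  # one query vector per token
    k = rng.standard_normal((seq_len, d_model))  # one key vector per token
    return q @ k.T / np.sqrt(d_model)            # shape: (N, N)

for n in (512, 4000):
    print(f"N={n:>5}: {attention_scores(n).size:,} pairwise interactions")
# N=  512: 262,144 pairwise interactions
# N= 4000: 16,000,000 pairwise interactions
```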

There is a benchmark called "Long Range Arena" where you can check the state of the art in solving the "memory problem".

https://paperswithcode.com/sota/long-range-modeling-on-lra

1

ChronoPsyche t1_itgunqx wrote

Exactly what I am referring to. My bad, quadratic is what I meant.

1