
techhouseliving t1_jb7oku3 wrote

Although it takes a supercomputer to initially train a model, inference can run in a comparatively small amount of memory and compute. Stable Diffusion's weights are only about 2 GB, yet in theory they can generate any 2D art conceivable. The same goes for language models. It's the ultimate compression algorithm.
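A quick back-of-envelope sketch of where that ~2 GB figure comes from, using approximate public parameter counts for Stable Diffusion v1 (the exact numbers are assumptions here, not from the comment):

```python
# Rough size estimate for Stable Diffusion v1 weights in half precision.
# Approximate component sizes (assumed figures):
#   ~860M params (U-Net) + ~123M (text encoder) + ~84M (VAE)
params = 860e6 + 123e6 + 84e6
bytes_per_param = 2                      # fp16 = 2 bytes per weight
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.1f} GB")               # prints "2.1 GB"
```

So roughly a billion parameters at 2 bytes each lands right around the 2 GB the comment mentions.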

Apple's M1 and M2 chips are designed to run these models very efficiently, and those are already pretty widely distributed.