Zermelane t1_j282ln7 wrote

> So the amount of computational resources required to emulate a brain is orders of magnitude higher than that suggested by the model of a neuron as a dumb transistor and the brain as a network of switches.

It is very popular to look at how biological neurons and artificial neurons are bad at modelling each other, and immediately, without a second thought, assume that this means biological neurons must be a thousand, no, ten thousand times more powerful than artificial ones.

It is astonishingly unpopular to actually do the count, and notice that something like Stable Diffusion contains the gist of all of art history and the personal styles of basically all famous artists, thousands of celebrities, the appearance of all sorts of objects, etc., in a model whose parameter count, compared synapse for parameter, matches the brain of a cockroach.
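
A rough version of that count (my own back-of-envelope, not the commenter's figures; the synapses-per-neuron number in particular is an assumed order of magnitude, not a measurement):

```python
# Back-of-envelope "do the count" sketch.
# Stable Diffusion v1 is commonly reported at ~0.86B UNet parameters,
# roughly 1.07B including the CLIP text encoder and VAE. A cockroach
# brain is commonly estimated at ~1 million neurons; ~1,000 synapses
# per neuron is an assumption made here purely to get an order of magnitude.
sd_total_params = 1.07e9

cockroach_neurons = 1e6
synapses_per_neuron = 1e3            # assumption, not a measured figure
cockroach_synapses = cockroach_neurons * synapses_per_neuron

print(f"SD parameters:      ~{sd_total_params:.2e}")
print(f"cockroach synapses: ~{cockroach_synapses:.2e}")
print(f"ratio:               {sd_total_params / cockroach_synapses:.2f}")
```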

(Same with backprop: backpropagation does things that biology can't do, so people just... assume that biology must be doing something even better, and nobody seems to want to entertain the thought that backprop might be using its biologically implausible feedback mechanism to do things better than biology does.)
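
To make the "biologically implausible feedback mechanism" concrete, here is a minimal NumPy sketch (my own illustration, not something from the thread): exact backprop routes errors backwards through the transposed forward weights, so-called weight transport, which real synapses almost certainly cannot do. Feedback alignment (Lillicrap et al., 2016) is one proposed, biologically more plausible variant that swaps in a fixed random feedback matrix instead.

```python
# Minimal sketch of backprop's "weight transport" vs. feedback alignment
# (Lillicrap et al., 2016). Exact backprop sends the output error backwards
# through the transposed forward weights W2; feedback alignment replaces
# that with a fixed random matrix B, which is closer to what biology could
# plausibly implement. Toy regression, NumPy only.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data
X = rng.normal(size=(256, 10))
true_W = rng.normal(size=(10, 1))
y = X @ true_W + 0.1 * rng.normal(size=(256, 1))

def train(use_backprop, steps=3000, lr=0.1, hidden=32):
    W1 = rng.normal(scale=0.3, size=(10, hidden))
    W2 = rng.normal(scale=0.3, size=(hidden, 1))
    B = rng.normal(scale=0.3, size=(hidden, 1))    # fixed random feedback weights
    for _ in range(steps):
        h = np.tanh(X @ W1)                        # forward pass
        err = h @ W2 - y                           # output error
        # Backward pass: this is the biologically implausible step --
        # exact backprop reuses W2 itself to route the error back,
        # feedback alignment uses the fixed random matrix B instead.
        feedback = W2 if use_backprop else B
        dh = (err @ feedback.T) * (1 - h ** 2)     # tanh derivative
        W2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ dh / len(X)
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

print("final loss, exact backprop:     ", train(True))
print("final loss, feedback alignment: ", train(False))
```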

11

Kinexity t1_j2av1jc wrote

> It is astonishingly unpopular to actually do the count, and notice that something like Stable Diffusion contains the gist of all of art history and the personal styles of basically all famous artists, thousands of celebrities, the appearance of all sorts of objects, etc., in a model whose parameter count, compared synapse for parameter, matches the brain of a cockroach.

I want to call that out for being wrong. SD's phase space contains loads of gibberish, and how good an image model is is dictated by how few bad images its phase space contains, not by how many good ones it does. If your argument were right, then an RNG would be the best generative image model, because the phase space of its outputs contains every good image.
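
The RNG point can be made quantitative (a quick calculation of my own, not from the comment): a uniform pixel RNG does have every 512x512 image somewhere in its output space, but the probability of ever sampling any particular image is so small that essentially all of its probability mass sits on noise.

```python
# A uniform pixel RNG "contains" every 512x512 RGB image in its output
# space, but the chance of drawing any particular image is 256**(-512*512*3).
import math

subpixels = 512 * 512 * 3                       # 8-bit values per image
log10_p = -subpixels * math.log10(256)          # log10 of P(one specific image)
print(f"P(any one specific image) = 10^{log10_p:.0f}")  # roughly 10^(-1.9 million)
```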

3