
simonthefoxsays t1_iw6wznc wrote

The paper you link attributes the advantages of predictive coding to hardware architectures that colocate compute and memory in many small, somewhat independent units. Erlang will not give you that. The BEAM VM runs one scheduler thread per core, limiting its parallelism to the number of CPUs, and even within that constraint it is designed for concurrency (allowing many tasks to make progress on one thread), which is in tension with keeping data local to the processor. Modern backprop implementations, in contrast, may have limitations on their parallelism compared to ideal predictive coding, but they rely heavily on GPUs for far greater parallelism than CPUs can offer.

Predictive coding looks very interesting, but to be useful it needs fundamentally different hardware from today's commodity computers, not just a language with good parallel semantics.

2

abhitopia OP t1_iw8xv1o wrote

You are right, neuromorphic hardware would be better. Right now everything runs on top of the BEAM in Erlang, but I am hoping we can implement the core functions as NIFs in Rust, as u/mardabx pointed out. https://discord.com/blog/using-rust-to-scale-elixir-for-11-million-concurrent-users
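
For illustration, here is a minimal sketch of what such a NIF could look like, assuming the rustler crate (the approach the Discord post describes); the module and function names are hypothetical, not anything from this project:

```rust
// Minimal Rust NIF sketch using the rustler crate (hypothetical names).
// Built as a dynamic library and loaded from an Elixir module at runtime.

// A hot inner loop (e.g. a dot product used in a weight update) moved out of Elixir into Rust.
#[rustler::nif]
fn dot(a: Vec<f64>, b: Vec<f64>) -> f64 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

// Registers the NIF under the (hypothetical) Elixir module "Elixir.PredictiveCoding.Native".
rustler::init!("Elixir.PredictiveCoding.Native", [dot]);
```

On the Elixir side the corresponding module would roughly declare `use Rustler, otp_app: :your_app, crate: "predictive_coding_native"` and define a stub for `dot/2` that the NIF overrides when the library loads.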

Having said that, I also do not think that raw speed is the most critical problem to solve here. (For example, human brains do not even run as fast as a single BEAM thread.) Petaflops of compute are needed today because modern DL uses dense representations (unlike the brain) and needs to be retrained from scratch (it lacks continual learning). If a resilient, fault-tolerant system (say, written in Erlang/Elixir) existed that could learn continuously and was optimised (say, using sparse representations), it would eventually surpass the competition.
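
As a rough illustration of the dense-vs-sparse point (a sketch only, not this project's actual data structures), a sparse layout stores and touches only the active units:

```rust
// Illustrative sketch: dense vs. sparse activation storage.
fn main() {
    let n = 1_000_000;

    // Dense representation: every unit is stored and processed, even when it is zero.
    let dense: Vec<f32> = vec![0.0; n];

    // Sparse representation: only the ~1% active units are kept as (index, value) pairs,
    // so memory and compute scale with activity rather than with network size.
    let sparse: Vec<(usize, f32)> = (0..n).step_by(100).map(|i| (i, 1.0)).collect();

    println!("dense stores {} values, sparse stores {}", dense.len(), sparse.len());
}
```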

1