
mardabx t1_iw8j67a wrote

I am not an ML scientist by any means, but I know enough about programming to give my 3 cents.

Erlang is very horizontally scalable and resilient, but not very performant. To scale upwards you will most likely need r/Rust, which already has some effort put into ML and efficient horizontal scaling. If you insist on using Erlang/Elixir as the base, do note that you can use Rust to speed up the performance-sensitive parts of your project.
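As a rough illustration of that pattern, here is a minimal sketch of exposing a Rust function to Elixir as a NIF via the `rustler` crate; the module name `Elixir.MyApp.Native` and the dot-product function are hypothetical examples, not from any actual project:

```rust
// Minimal NIF sketch using the `rustler` crate.
// Module and function names are hypothetical, for illustration only.
use rustler::NifResult;

// Hot numeric loop kept in native Rust; the BEAM just sees an opaque NIF call.
#[rustler::nif]
fn dot_product(a: Vec<f64>, b: Vec<f64>) -> NifResult<f64> {
    Ok(a.iter().zip(b.iter()).map(|(x, y)| x * y).sum())
}

// Register the NIF under an Elixir module name.
rustler::init!("Elixir.MyApp.Native", [dot_product]);
```

On the Elixir side such a function would then be called like any other module function (e.g. `MyApp.Native.dot_product(a, b)`), while the supervision and scaling logic stays in Erlang/Elixir.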

2

abhitopia OP t1_iw8udxk wrote

Thanks Mardabx for sharing your 3 cents. :) Very helpful.

Current ML systems lack scalability and fault tolerance, which in my mind are more critical than training speed. Remember, biological brains are not that fast either, but they are highly resilient and fault tolerant. And biological learning still surpasses some of the best AI trained on millions of human-equivalent lifetimes of data. This is the direction I wanna go: a predictive-coding-based system that runs continually and is scaled on demand, but is never stopped.

Such a system would already be better than a biological brain in the sense that the brain is not scalable, whereas there is no such restriction on computer hardware.

Having said that, it is really impressive how much performance gain can be had by using Rust (I didn't know it was even possible), and I am definitely open to using Rust to implement core functionality as NIFs (perhaps as an optimisation). Thanks again for sharing.

2

mardabx t1_iwaq23m wrote

Fun side note: there have been at least two projects building BEAM-compatible VMs in Rust, and the presentation on one of them looks like a good Erlang-centric explainer of the reasons for doing so.

2