Submitted by abhitopia t3_ytbky9 in MachineLearning
abhitopia OP t1_ixc6b1m wrote
Reply to comment by miguelstar98 in [Project] Erlang based framework to replace backprop using predictive coding by abhitopia
Hey u/miguelstar98, OP here and still very enthusiastic. I have spent the last 2 weeks studying predictive coding and am still going through a lot of nuances. The more I think and read about it, the more confident I am about the utility of this project.
Btw, do you know what the comment was that got deleted by the moderator?
I shared the "neuroevolution though erlang" in my original post too. I really still think coding this (having read predictive coding) is so much easier in Erlang to make it fully asynchronous, and scalable. And worry about optimisation only later (e.g. using Rust NIFs or try to use cuda kernals)
>"Probabilistic scripts for automating common-sense tasks" by Alexander Lew
"We Really Don't Know How to Compute!" - Gerald Sussman (2011)
I haven't watched these yet. Do you mind sharing what you have in mind?
miguelstar98 t1_ixdfxqw wrote
The comment wasn't deleted by a moderator; it was my first attempt at the original reply, before I realized that I had made a mistake (I have a tendency to ignore everything anyone has to say and just try to figure things out on my own first, not because I arrogantly believe I'm better, but because I know I can look at things differently than other people). So I deleted it and quickly skimmed the other comments, because I was running low on time.
The Library Genesis link is there because others might not have access to an institution or the money, and paying for information before you even know whether it's useful is an inefficient way to learn.
I included those two videos and all of the other links because (given the information I can glean from you) I could and did reasonably predict that at least some of the information in those links is outside your comfort zone. That means that after watching them you'll have explored the solution space of your particular problem more thoroughly. Exploring rabbit holes should probably be done early on, while it's still easy to change your mind.
The videos by Alexander Lew and Gerald Sussman are the first things I thought of when thinking about your problem. Will they be helpful? Maybe, but I could be wrong.
What really interests me is that even after reading my reply you are still confident, which means you think I'm wrong (which is so exciting!). But you haven't really answered my questions or explained the source of your confidence; or perhaps I haven't fully grasped enough of the nuances of the problem to even have useful responses for you. I'd love to help you, but I just don't see how it's not a dead end.
Don't worry about replying; if you think I'm crazy, just ignore everything I've said.
abhitopia OP t1_ixdkoos wrote
Hey u/miguelstar98
> But you haven't really answered my questions or explained the source of your confidence; or perhaps I haven't fully grasped enough of the nuances of the problem to even have useful responses for you.
I am not sure which questions you mean. Did you mean what you mentioned in your deleted post (which wasn't accessible to me)?
Anyway, I can see your original post now. Thanks for undeleting it.
>Software Designer's perspective:
I think the actor model just makes a lot of sense for asynchronous, concurrent computation. Having said that, since Erlang is slow, I am actually considering the Actix library in Rust. (The first step for me is just to write pseudocode of the algorithm based on message passing; a rough sketch of what that could look like is below.)
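Purely as an illustration of what I mean (made-up message names, and assuming node processes along the lines of the sketch in my earlier comment), a single training step could be coordinated entirely by messages; the same structure should map onto Actix actors in Rust:

```erlang
-module(pc_train).
-export([step/5]).

%% Illustrative only: clamp the boundary layers, let the network relax by
%% exchanging prediction/error messages, then tell every node to apply its
%% local weight update. No global forward or backward pass is scheduled.
step(InputNode, OutputNode, AllNodes, X, Y) ->
    InputNode ! {clamp, X},          % pin the input layer to the datum
    OutputNode ! {clamp, Y},         % pin the output layer to the target
    [N ! relax || N <- AllNodes],    % nodes settle via local messages
    timer:sleep(100),                % crude stand-in for "wait until settled"
    [N ! learn || N <- AllNodes],    % each node updates its weights locally
    ok.
```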
>From a hardware design perspective:
I am not sure what you mean to say. The difference here is not hardware but a change in algorithm (BP vs PC). AFAIK, BP requires synchronised forward and backward passes.
>From the Biologist's perspective:
Again, I am not sure. The intention isn't to say that biologically plausible learning is superior or that we MUST imitate nature. It is rather something that current ML libraries don't do but that seems doable in light of new PC research.
>From my personal perspective: I hope you can help clear up my understanding but what is the difference between predictive coding and model ensembles? I know that probably sounds like a dumb question, but can’t we just take a bunch of models that are really good at particular tasks and have a software layer that controls when to use which model and then combine their outputs to solve any general problem? Also if I need fault tolerance or I need to run inference, can’t I just use a cluster computer, why not 2? Isn’t this a solved problem when training large language models?
Hmm. Model ensembles and the learning algorithms used to train those models are two different topics. The focus here is not on the "inference" (FP) part, which current libraries are already really good at, but on the "learning" (BP) part. Not sure what else to say.
I highly recommend reading this tutorial on PC (and contrasting it against BP).
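For reference, in the standard hierarchical formulation (indexing conventions differ a bit from paper to paper, so treat this as a sketch), layer l+1 predicts layer l, and both inference and learning are just gradient steps on a sum of local prediction errors:

```latex
% Energy: a sum of local prediction errors (layer l+1 predicts layer l via W_l)
F = \tfrac{1}{2}\sum_l \lVert \varepsilon_l \rVert^2,
\qquad \varepsilon_l = x_l - W_l\, f(x_{l+1})

% Inference and learning each touch only a layer and its immediate neighbours:
\Delta x_l \propto -\varepsilon_l + f'(x_l) \odot \left( W_{l-1}^{\top} \varepsilon_{l-1} \right),
\qquad
\Delta W_l \propto \varepsilon_l\, f(x_{l+1})^{\top}
```

Every update depends only on a node and its neighbours, which is exactly why an asynchronous, message-passing implementation looks feasible, in contrast to BP's globally synchronised forward and backward passes.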
miguelstar98 t1_ixdthaw wrote
Yeah, sorry, it slipped my mind that it was deleted (I guess I'm more used to Discord). And thanks, I'll read up on that paper first then.