miguelstar98
miguelstar98 t1_ixdfxqw wrote
Reply to comment by abhitopia in [Project] Erlang based framework to replace backprop using predictive coding by abhitopia
The comment wasn't deleted by a moderator; it was my first attempt at the original reply before I realized that I had made a mistake. (I have a tendency to ignore everything anyone has to say and try to figure things out on my own first, not because I arrogantly believe I'm better, but because I know I can look at things differently than other people.) So I deleted it and quickly skimmed the other comments because I was running low on time.
The Library Genesis link is there because others might not have access to an institution or $$$, and paying for information before you even know if it's useful is an inefficient way to learn.
I included those two videos and all of the other links because (given the information I can glean from you) I can and did reasonably predict that at least some of the information within those links is outside of your comfort zone. That would mean that after watching them you'll have explored the solution space of your particular problem more thoroughly. Exploring down rabbit holes should probably be done early on, while it's still easy to change your mind.
The videos by Alexander Lew and Gerald Sussman are the first things I thought of when thinking about your problem. Will they be helpful? Maybe, but I could be wrong.
What really interests me is that even after reading my reply you are confident, which means you think I'm wrong (which is so exciting!). But you haven't really answered my questions or explained the source of your confidence; or perhaps I haven't fully grasped enough of the nuances of the problem to even have useful responses for you. I'd love to help you, but I just don't see how it's not a dead end.
Don't worry about replying; if you think I'm crazy, just ignore everything I've said.
miguelstar98 t1_ixatok2 wrote
Reply to comment by miguelstar98 in [Project] Erlang based framework to replace backprop using predictive coding by abhitopia
Finally, I put a lot of effort into this reply, solely because I could FEEL your enthusiasm in your post, OP. It's the journey and the people we meet along the way that matter. Research is hard. Be passionate. Personally, I'm now kinda interested in what would happen if we were to train an AI to be an operating system. It's all just function optimization in the end anyway... I think... I probably shouldn't have sacrificed my sanity to write this, but I have no regrets. Also, I only skimmed everyone's comments, so there might be overlap.
More links that may or may not be helpful
"Probabilistic scripts for automating common-sense tasks" by Alexander Lew
"We Really Don't Know How to Compute!" - Gerald Sussman (2011)
https://www.erlang-factory.com/upload/presentations/735/NeuroevolutionThroughErlang.pdf
This one is a PDF of the Handbook of Neuroevolution Through Erlang. Library Genesis can be sketchy sometimes.
miguelstar98 t1_ixasxko wrote
Alright, this is gonna be a long reply.
TL;DR: Probably a dead end (well, at least current implementations are), but my goodness was it fun to research. In fact, it was a blast! I only had a single day off, so take everything with a truck of salt.
Software Designer's perspective:
Erlang is definitely the right tool if you want a programming language built for distributed parallel computing, with fault detection, repair, and consistency built in (see "Systems that run forever self-heal and scale" by Joe Armstrong (2013)), though to be honest, the creator of Erlang himself has said that everything that makes Erlang what it is can be implemented in other languages. But should you? Well, in my opinion, after looking at the Google Trends for Erlang, Clojure, and Rust (languages that are similarly built to solve specific problems), it might be more worthwhile to just use a language built with ease of use, simplicity, and high popularity in mind, because somebody actually has to learn, build, and maintain the software written in these languages. You can always make a program faster; you can't always make it easier to learn, read, or maintain.
From a hardware design perspective: Honestly, silicon itself might be a dead end. We seem to be converging on carbon-based switches, both in computers and in meat brains.
From the Biologist's perspective: The whole concept of neural networks was biologically inspired. Taking inspiration from biology, specifically the human brain, is obviously the correct course of action; not because biology or the brain is special, but because when you attempt to solve any problem, odds are good that there is already a solution. It might be a buggy approximation, but a solution nonetheless. And to be perfectly honest, evolution by natural selection is terrible at making good solutions; it's better to idealize away evolution's solution and just do better.
From my personal perspective: I hope you can help clear up my understanding, but what is the difference between predictive coding and model ensembles? I know that probably sounds like a dumb question, but can't we just take a bunch of models that are really good at particular tasks, add a software layer that controls when to use which model, and then combine their outputs to solve any general problem? Also, if I need fault tolerance or I need to run inference, can't I just use a compute cluster? Why not two? Isn't this a solved problem when training large language models?
https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html
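The "software layer that controls when to use which model" idea above can be sketched in a few lines. Everything here (the toy specialist models and the routing rule) is a hypothetical illustration of a gated ensemble, not code from the linked tutorial:

```python
# Minimal sketch of an ensemble with a gating layer (hypothetical toy models).
# A router picks which specialist model(s) apply, then outputs are merged.

def sentiment_model(x):
    """Specialist: pretend sentiment scorer, only useful on opinionated text."""
    return {"sentiment": 1.0 if "good" in x else -1.0}

def length_model(x):
    """Specialist: trivial length scorer, applicable to any input."""
    return {"length": len(x)}

def route(x):
    """Gating layer: decide which specialists to run for this input."""
    models = [length_model]
    if any(word in x for word in ("good", "bad")):
        models.append(sentiment_model)
    return models

def ensemble(x):
    """Run the selected models and combine their outputs into one result."""
    result = {}
    for model in route(x):
        result.update(model(x))
    return result

print(ensemble("good day"))  # both specialists fire
print(ensemble("hello"))     # only the length model fires
```

In a real system the router would itself be a learned model and the combination step would weight or average overlapping outputs, but the control-flow skeleton is the same.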
miguelstar98 t1_iwoeafv wrote
Reply to comment by liukidar in [Project] Erlang based framework to replace backprop using predictive coding by abhitopia
🖒 Noted. I'll take a look at it when I get some free time. Although someone should probably make a Discord for this...
miguelstar98 t1_ixdthaw wrote
Reply to comment by abhitopia in [Project] Erlang based framework to replace backprop using predictive coding by abhitopia
Yeah, sorry, it slipped my mind that it was deleted (I guess I'm more used to Discord). Thanks, I'll read up on that paper first then.