topcodemangler t1_jc2yjvw wrote
Reply to [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
>Finally, we have not designed adequate safety measures, so Alpaca is not ready to be deployed for general use
You mean censorship?
topcodemangler t1_jakalh1 wrote
Reply to comment by themrzmaster in [D] Are Genetic Algorithms Dead? by TobusFire
An idea I've always found interesting and alluring is using a GA to search over combinations of elementary information-processing primitives (probably Boolean gates) and memory, which could yield some novel ML architecture. It might be much more effective than NNs, since it could be implemented directly in electronics without the overhead.
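As a rough illustration of the idea, here is a minimal sketch (my own, not from the comment) of a GA evolving a small feed-forward circuit of Boolean gates to match a target function, using XOR as a toy target. The gate set, genome encoding, and hyperparameters are all illustrative assumptions.

```python
# Hypothetical sketch: a GA searching over small Boolean-gate circuits.
# Gate set, encoding, and hyperparameters are illustrative choices.
import random

GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}
N_INPUTS = 2
N_NODES = 6  # gates per genome; the last gate's output is the circuit output

def random_genome():
    # Each node is (gate name, source index a, source index b); sources are
    # primary inputs (0..N_INPUTS-1) or earlier gates, so circuits are feed-forward.
    genome = []
    for i in range(N_NODES):
        pool = N_INPUTS + i
        genome.append((random.choice(list(GATES)),
                       random.randrange(pool), random.randrange(pool)))
    return genome

def evaluate(genome, inputs):
    signals = list(inputs)
    for gate, a, b in genome:
        signals.append(GATES[gate](signals[a], signals[b]))
    return signals[-1]

def fitness(genome, target):
    # Number of truth-table rows the circuit gets right (max 4 for 2 inputs).
    cases = [(x, y) for x in (0, 1) for y in (0, 1)]
    return sum(evaluate(genome, c) == target(*c) for c in cases)

def mutate(genome):
    # Replace one node with a fresh random node drawing on earlier signals.
    child = list(genome)
    i = random.randrange(len(child))
    pool = N_INPUTS + i
    child[i] = (random.choice(list(GATES)),
                random.randrange(pool), random.randrange(pool))
    return child

def evolve(target, pop_size=50, generations=200, seed=0):
    random.seed(seed)
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        if fitness(pop[0], target) == 4:  # perfect on all truth-table rows
            break
        # Truncation selection: keep the top half, refill with mutants of survivors.
        elite = pop[:pop_size // 2]
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return pop[0]

best = evolve(lambda x, y: x ^ y)
```

A real version of the idea would of course need a richer genome (memory elements, recurrent wiring) and crossover, but the loop structure would be similar.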
topcodemangler t1_ja1pm3i wrote
Question: how much data do you already have, and how much more do you need?
topcodemangler t1_jduuhcf wrote
Reply to [D] Simple Questions Thread by AutoModerator
Is there any real progress on the JEPA architecture proposed and pushed by LeCun? I see him constantly bashing LLMs and saying we need JEPA (or something similar) to truly solve intelligence, but it has been a long time since the initial proposal (2 years?) and nothing practical has come out of it.
That may sound a bit aggressive, but it wasn't my intention: the original paper really sparked my interest, and I agree with a lot of what he has to say. I would just like to see how those ideas fare in the real world.