lmericle t1_jcyxiex wrote
Reply to comment by alfredr in [R] What are the current must-read papers representing the state of the art in machine learning research? by alfredr
Well, no, it isn't. You are looking for machine learning research. That list is only about LLMs, a very specific and over-hyped sub-sub-application of ML techniques.
If all you want is to attach yourself to the hype cycle, then that link still won't be enough, but at least it's a start.
lmericle t1_jcln487 wrote
Reply to comment by felheartx in [D] PyTorch 2.0 Native Flash Attention 32k Context Window by super_deap
You will find that in hype circles such as NLP there are a lot of thought-terminating clichés passed around by people who are not so deep in the weeds. Someone says something with confidence, another person doesn't know how to vet it and so just blindly passes it on, and all of a sudden a hack becomes a rumor becomes dogma. It seems to me that's what has happened with context vs. memory.
Put another way: it's the kind of attitude that says "No, Mr. Ford, what we wanted was faster horses".
lmericle t1_jannla8 wrote
Reply to [D] Are Genetic Algorithms Dead? by TobusFire
The trick with genetic algorithms is that you have to tune your approach very specifically to the kinds of things you're modelling; to extend the analogy, different animals mate and evolve differently.
It's not enough to just do the textbook "1D chromosome" approach. You have to design your "chromosome", as well as your "crossover" and "mutation" operators, specifically for your problem. In my experience, the crossover implementation is the most important one to focus on; a sketch of what a problem-specific operator can look like is below.
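For a concrete (hypothetical) example: if your chromosome is a permutation, say a visit order in a routing problem, the textbook one-point crossover produces invalid children with duplicated and missing genes. A minimal sketch of permutation-aware operators in plain Python:

```python
import random

def order_crossover(parent_a, parent_b):
    """Order crossover (OX) for permutation-encoded chromosomes.
    Keeps a contiguous slice from parent A and fills the remaining
    positions in parent B's relative order, so every child is
    still a valid permutation."""
    n = len(parent_a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = parent_a[i:j]                       # slice inherited from A
    remaining = [g for g in parent_b if g not in child]  # B's genes, B's order
    for idx in range(n):
        if child[idx] is None:
            child[idx] = remaining.pop(0)
    return child

def swap_mutation(chrom, rate=0.1):
    """Mutation that stays inside the permutation space: swap two
    positions with probability `rate` instead of resampling a gene."""
    chrom = list(chrom)
    if random.random() < rate:
        i, j = random.sample(range(len(chrom)), 2)
        chrom[i], chrom[j] = chrom[j], chrom[i]
    return chrom
```

The point isn't these particular operators; it's that both are designed so the search never leaves the space of valid solutions, which is exactly the kind of problem-specific tuning the generic textbook recipe skips.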
lmericle t1_j6udkpc wrote
For the last freakin' time, LLMs are not the be-all and end-all of machine learning...
lmericle t1_iz1kk77 wrote
Reply to comment by link0007 in [P] Save your sklearn models securely using skops by unofficialmerve
What about ONNX? Most, if not all, feedforward models can be represented in ONNX format; see the sketch below.
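A minimal sketch of the round trip, assuming the skl2onnx and onnxruntime packages are installed; no pickle is involved at any point:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import to_onnx
import onnxruntime as rt

X, y = load_iris(return_X_y=True)
X = X.astype(np.float32)
model = RandomForestClassifier(n_estimators=50).fit(X, y)

# Export: input types/shapes are inferred from the sample array.
onx = to_onnx(model, X[:1])
with open("model.onnx", "wb") as f:
    f.write(onx.SerializeToString())

# Load and run entirely through the ONNX graph.
sess = rt.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
preds = sess.run(None, {input_name: X})[0]
```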
lmericle t1_irxbw0u wrote
Reinforcement learning does seem like a good option, but IMO you'd need a bit of nuance in your reward function for training to work. It would need to be more complex (and less sparse) than a binary "did you choose the correct next item or not" signal; one way to shape it is sketched below.
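To make that concrete, here is a hypothetical sketch of a denser, shaped reward. Everything here is an assumption for illustration: `embed` is a stand-in for whatever item-embedding function the surrounding system provides, not a real API.

```python
import numpy as np

def shaped_reward(chosen_item, target_item, embed):
    """Hypothetical dense reward for a next-item-selection policy.
    Instead of a sparse 1/0 exact-match signal, give partial credit
    for choices that land close to the target in an embedding space.
    `embed` (assumed) maps an item to a vector."""
    if chosen_item == target_item:
        return 1.0
    a, b = embed(chosen_item), embed(target_item)
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return 0.5 * max(cos, 0.0)  # partial credit, capped below a true hit
```

The cap keeps "close but wrong" strictly worse than a correct pick, so the policy still has a gradient toward exact matches without the signal being all-or-nothing.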
lmericle t1_jcz5z92 wrote
Reply to comment by alfredr in [R] What are the current must-read papers representing the state of the art in machine learning research? by alfredr
People get mad when you call LLMs what they are. It will pass, as with all things.