Longjumping_Essay498 OP t1_j2f9r9s wrote
Reply to comment by currentscurrents in [Discussion] is attention an explanation? by Longjumping_Essay498
Let's say we dig into these attention maps for some examples and find that a particular head attends to words from some identifiable perspective. For instance, in GPT some heads focus on parts of speech. Will a head always do this reliably, for all examples? What do you think? Can we manually evaluate and categorize what the heads have learned?
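To make the "dig into the attention maps" idea concrete, here is a minimal, stdlib-only sketch. The query/key vectors are made up for illustration (in a real model like GPT-2 you would pull the per-head attention tensors out of the network, e.g. Hugging Face's `output_attentions=True`); the point is just that each row of the map is a distribution over positions, and you can categorize a head by which tokens it puts its mass on:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_map(queries, keys):
    """Row i = distribution over key positions that query i attends to."""
    d = len(queries[0])
    scores = [
        [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        for q in queries
    ]
    return [softmax(row) for row in scores]

tokens = ["the", "cat", "sat"]
# Toy vectors, chosen so every query aligns most strongly with the key
# for "cat" -- a fake "noun-attending head" for illustration only.
queries = [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]
keys    = [[0.1, 0.9], [2.0, 0.0], [0.1, 0.9]]

amap = attention_map(queries, keys)
# Categorize the head: which token does each position attend to most?
for i, row in enumerate(amap):
    top = tokens[row.index(max(row))]
    print(f"{tokens[i]} -> {top}")
```

Checking whether a real head behaves this consistently across many inputs is exactly the open question: the toy head above attends to "cat" by construction, but a learned head gives no such guarantee.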
Longjumping_Essay498 t1_j2c3wlo wrote
Reply to [D] GPU-enabled scikit-learn by Realistic-Bed2658
They use parallel processing and threads; that's sufficient for ML models that aren't very deep.
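As a rough sketch of the idea (scikit-learn itself does this via the `n_jobs` parameter, backed by joblib): many classical-ML workloads are embarrassingly parallel across hyperparameter candidates or CV folds, so CPU threads/processes go a long way. The `fit_one` function and its closed-form "model" below are toy stand-ins, not scikit-learn APIs:

```python
from concurrent.futures import ThreadPoolExecutor

def fit_one(alpha):
    """Toy 'training': 1-D ridge-style closed-form fit on fixed data."""
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 8.1]
    # slope = sum(x*y) / (sum(x^2) + alpha): regularization shrinks the slope
    w = sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + alpha)
    return alpha, w

# Fit all candidates concurrently, one per worker thread.
alphas = [0.0, 0.1, 1.0, 10.0]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(fit_one, alphas))
print(results)
```

For CPU-bound NumPy-heavy fits, scikit-learn typically uses joblib processes rather than threads, but the structure is the same: independent fits farmed out to workers.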
Longjumping_Essay498 t1_j22mx0g wrote
Wow, this is insanely great for factual things. I tried generating content for OOP concepts in Python. The text was good, but the images it picked weren't of Python code. Still, a great solution and a great start.
Longjumping_Essay498 t1_j3fjfbo wrote
Reply to [D] Will NLP Researchers Lose Our Jobs after ChatGPT? by singularpanda
Domain-specific LLMs don't need to be as huge as general-purpose ones like ChatGPT. Those big models carry broad world knowledge, and in most settings we don't need that.