DeezNUTSampler t1_itq1l2d wrote
Reply to comment by say_wot_again in [R] Large Language Models Can Self-Improve by Lajamerr_Mittesdine
Can you link work in computer vision SSL that incorporates this principle, “use the model’s high-confidence outputs on easy examples to train it on hard examples”? It is not obvious to me how this would work. For example, in contrastive learning the objective is to learn view-invariant representations: two views of an object, augmented differently, are pushed together in representation space by minimizing the distance between them as the loss function. Which one would constitute the easy/hard example here? A minimal sketch of that attraction term is below.
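For concreteness, here is a minimal sketch of the objective described above. The toy encoder `f`, the batch shapes, and the Gaussian-noise stand-ins for real image augmentations are all illustrative assumptions, not any specific paper's method:

```python
# Minimal sketch of the contrastive "pull two views together" objective.
# Assumptions: a toy linear encoder `f`, additive noise standing in for
# real augmentations, and plain squared L2 distance as the loss.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # toy encoder

x = torch.randn(8, 3, 32, 32)          # a batch of "images"
view1 = x + 0.1 * torch.randn_like(x)  # two differently "augmented" views
view2 = x + 0.1 * torch.randn_like(x)

z1, z2 = f(view1), f(view2)            # representations of the two views

# Minimize the distance between the two views of the same image,
# pushing them together in representation space.
loss = (z1 - z2).pow(2).sum(dim=1).mean()
loss.backward()
```

Note that real contrastive losses such as InfoNCE/NT-Xent also push apart views of *different* images to prevent representational collapse; the pure attraction term above is just the piece the comment describes.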
DeezNUTSampler t1_irbnqqr wrote
Reply to comment by Prinzessid in [D] How hard is it to join a lab during Master's? by Ok-Experience5604
> You don't really have the skills and experience to do research before completing the masters.
If someone wants to, they can acquire the skills and experience needed to do research at any point in their life; it depends entirely on the individual's capabilities and circumstances.
Fun fact: several researchers at OpenAI who led projects like GPT-2 and DALL-E only have a bachelor's degree. Case in point: Alec Radford and Prafulla Dhariwal.
DeezNUTSampler t1_iybwnnz wrote
Reply to comment by Alarming_Fig_3660 in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
Definitely not. I'd say a solid 70-80% are researchers, and a significant share of the non-research attendees are industry folks who work on ML.