Submitted by Light991 t3_xvol6v in MachineLearning
PeedLearning t1_ir29bk8 wrote
Chelsea Finn? I knew very few people who were using MAML during her PhD, and even fewer after.
I reckon e.g. Ian Goodfellow had a lot of impact during his PhD. Alex Krizhevsky is another name with big impact.
fromnighttilldawn t1_ir54qtq wrote
Yeah but those are also the most well connected people in the ML world. It is easy to be impactful when you are having tea with Hinton, LeCun and Bengio every afternoon.
PeedLearning t1_ir70vku wrote
Hundreds of PhD students were in their positions. Few made such an impact.
It's not easy to be impactful, even with a good supervisor. In Krizhevsky's case, one could even argue he had a big impact despite having Hinton as a supervisor: AlexNet was built somewhat behind Hinton's back, as he didn't approve of the research direction. Hinton did turn around later and recognize its importance, though.
Light991 OP t1_ir29o16 wrote
Sure but papers she published back then are now very influential.
fromnighttilldawn t1_ir54kie wrote
I keep trying to find one thing she published that's considered successful, and I find myself having to define success very narrowly. This was when I was doing a general survey of RL techniques.
PeedLearning t1_ir2ar52 wrote
Any concrete papers you have in mind?
Light991 OP t1_ir2bqnb wrote
Just sort her papers by citations and look at the years…
PeedLearning t1_ir48kps wrote
Yes, MAML is on top. But I don't think it has been very impactful, and neither has the whole field of meta-learning, really.
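(For anyone who hasn't used it: MAML's core idea is a bi-level loop, adapt to each task with a few inner gradient steps, then update the shared initialization so that adaptation works well across tasks. Here's a toy first-order sketch on 1-D linear regression tasks; the model, learning rates, and task distribution are all made up for illustration, and real MAML also backpropagates through the inner step:)

```python
import numpy as np

# Toy first-order MAML (FOMAML-style) sketch.
# Each "task" is y = a * x with a different slope a; the meta-parameter
# theta is a single weight, adapted per task with one inner gradient step.

rng = np.random.default_rng(0)

def loss_grad(theta, x, y):
    # d/dtheta of mean squared error for y_hat = theta * x
    return np.mean(2 * (theta * x - y) * x)

theta = 0.0           # meta-initialization (the thing MAML learns)
alpha, beta = 0.1, 0.01  # inner (adaptation) / outer (meta) learning rates

for step in range(2000):
    a = rng.uniform(0.5, 2.0)            # sample a task (a slope)
    x = rng.uniform(-1, 1, size=10)      # support set
    y = a * x
    # inner loop: one adaptation step starting from theta
    theta_task = theta - alpha * loss_grad(theta, x, y)
    # outer loop (first-order): update theta with the adapted model's
    # gradient on fresh "query" data from the same task
    x2 = rng.uniform(-1, 1, size=10)
    y2 = a * x2
    theta -= beta * loss_grad(theta_task, x2, y2)

# theta ends up near the mean task slope, an initialization from which
# one gradient step reaches any task quickly
```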
carlml t1_ira0vsr wrote
What has been impactful, according to you? What makes you say meta-learning hasn't been impactful?
PeedLearning t1_irbosst wrote
(I have published myself in the meta-learning field, and worked a lot on robotics)
I see no applications of meta-learning appearing outside of self-citations within the field. The SOTA in supervised learning doesn't use any meta-learning; neither does the SOTA in RL. The promise of learning to learn never really came true...
... until large supervised language models seemed to suddenly meta-learn as an emergent property.
So not only did nothing in the meta-learning field really take off and have an impact outside of computer science research papers, its original reason for being has also been subsumed by a completely different line of research.
Meta-learning is no longer a goal; it's understood to be a side effect of sufficiently large models.
carlml t1_ircfo0x wrote
Are the SOTA in RL for few-shot learning not meta-learning based?
PeedLearning t1_irdfrn4 wrote
I am not sure what you would consider SOTA in few-shot RL. The benchmarks I know are quite ad-hoc and don't actually impact much outside of computer science research papers.
The people that work on applying RL for actual applications don't seem to use meta-RL.