Submitted by Light991 t3_xvol6v in MachineLearning
Light991 OP t1_ir2bqnb wrote
Reply to comment by PeedLearning in [Discussion] Best performing PhD students you know by Light991
Just sort her papers by citations and look at the years…
PeedLearning t1_ir48kps wrote
Yes, MAML is on top. But I don't think it has been very impactful, and neither has the field of meta-learning as a whole, really.
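(For anyone not familiar with MAML: the core idea is a bi-level optimization, learning an initialization that adapts to a new task in a few gradient steps. A rough sketch below of what that looks like, not Finn et al.'s actual code; the toy sine-regression task and the network are made up for illustration.)

```python
# Minimal MAML-style bi-level update (illustrative sketch, toy task).
# Inner loop: adapt the shared parameters to one task with a gradient step.
# Outer loop: update the shared initialization so adaptation works well.

import torch
import torch.nn as nn

def make_task():
    """Toy sine-regression task with random amplitude and phase."""
    amp = torch.rand(1) * 4 + 0.1
    phase = torch.rand(1) * 3.14
    x = torch.rand(10, 1) * 10 - 5
    y = amp * torch.sin(x + phase)
    return x[:5], y[:5], x[5:], y[5:]  # support set, query set

model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.01
loss_fn = nn.MSELoss()

def functional_forward(x, params):
    """Run the two-layer net with an explicit parameter list."""
    h = torch.relu(x @ params[0].t() + params[1])
    return h @ params[2].t() + params[3]

for step in range(1000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):  # a small batch of tasks per meta-update
        xs, ys, xq, yq = make_task()
        params = list(model.parameters())
        # Inner loop: one adaptation step on the support set,
        # keeping the graph so we can backprop through it.
        inner_loss = loss_fn(functional_forward(xs, params), ys)
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loop: how well the adapted parameters do on the query set.
        meta_loss = meta_loss + loss_fn(functional_forward(xq, adapted), yq)
    meta_loss.backward()
    meta_opt.step()
```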
carlml t1_ira0vsr wrote
What has been impactful, in your view? What makes you say meta-learning hasn't been impactful?
PeedLearning t1_irbosst wrote
(I have published myself in the meta-learning field, and worked a lot on robotics)
I see no applications of meta-learning appearing outside of self-citations within the field. The SOTA in supervised learning doesn't use any meta-learning, and neither does the SOTA in RL. The promise of learning to learn never really came true...
... until large supervised language models seemed to suddenly meta-learn as an emergent property.
So not only did nothing in the meta-learning field really take off or have an impact outside of computer science research papers, its original reason for being has also been subsumed by a completely different line of research.
Meta-learning is no longer a goal in itself; it's understood to be a side-effect of sufficiently large models.
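(To make "emergent meta-learning" concrete: compare the MAML sketch above to plain few-shot prompting. A rough sketch with the Hugging Face pipeline API; "gpt2" is only a placeholder model here, the in-context behavior really only shows up at much larger scale.)

```python
# "Meta-learning as a side-effect": no inner/outer loop, no gradient updates.
# The "support set" is just examples placed in the prompt, and adaptation
# happens in a single forward pass of a large language model.

from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")  # placeholder model

prompt = (
    "English: cheese -> French: fromage\n"
    "English: house -> French: maison\n"
    "English: dog -> French:"
)
print(generate(prompt, max_new_tokens=5)[0]["generated_text"])
```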
carlml t1_ircfo0x wrote
Isn't the SOTA in RL for few-shot learning meta-learning based?
PeedLearning t1_irdfrn4 wrote
I am not sure what you would consider SOTA in few-shot RL. The benchmarks I know of are quite ad hoc and don't actually have much impact outside of computer science research papers.
The people that work on applying RL for actual applications don't seem to use meta-RL.