Submitted by tekktokk t3_11w4kqd in MachineLearning
Came across this concept, Meta-Interpretive Learning (MIL), developed by Muggleton, Patsantzis, et al.
From what I understand this is a relatively new approach to ML? Has anyone heard of this? I was hoping to get a general feel for what people in the industry think about the prospects of this approach. If you're curious, here's an implementation of MIL.
UnusualClimberBear t1_jd2myu1 wrote
Sounds like a rebranding of Inductive Logic Programming. It does not scale, while all recent advances are about scaling simple systems. Consider that for a vanilla transformer the bottleneck is often attention, because its cost is O(N^2) in the sequence length, which is why people are switching to linear attention.
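(The quadratic bottleneck mentioned above can be seen directly: vanilla scaled dot-product attention materializes an N x N score matrix, so compute and memory grow quadratically with sequence length N. A minimal NumPy sketch, not from any particular library:)

```python
import numpy as np

def attention(Q, K, V):
    # Vanilla scaled dot-product attention.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # shape (N, N): the N^2 bottleneck
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # shape (N, d)

rng = np.random.default_rng(0)
for N in (128, 256, 512):
    X = rng.normal(size=(N, 64))
    out = attention(X, X, X)
    # The intermediate score matrix holds N*N floats, regardless of d.
    print(f"N={N}: output {out.shape}, score-matrix entries {N * N}")
```

(Doubling N quadruples the score-matrix size, which is the pressure driving linear-attention variants.)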