clavalle t1_j35eqzp wrote
Reply to comment by CrypticSplicer in [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
Makes sense.
An interesting question related to OP's: could there be an ML solution that humans /can't/ understand?
Not /don't/ understand...I mean a solution that both outperforms humans and is relatively easy to verify, yet whose underlying model we cannot understand at all, even given enough time and study.
My current belief is that no model is truly beyond human reasoning. But I've seen some results that make me wonder.
clavalle t1_j2xk9dl wrote
Reply to [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
Yes, ML can outperform humans in certain tasks.
- Quantity can sometimes make a very big difference: if you could sit down and train a human on the same amount of data, the human might be on par with ML, but that's often not possible.
- Training data is not always generated by humans.
- Given the same data, there are connections or perspectives that humans have not followed or considered.
clavalle t1_j3cniyn wrote
Reply to comment by BurgooButthead in [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
Doesn't even have to be alien. Just the familiar seen from a wholly different perspective.
Like whatever these models are doing here: https://scitechdaily.com/artificial-intelligence-discovers-alternative-physics/