Submitted by groman434 t3_103694n in MachineLearning
clavalle t1_j2xk9dl wrote
Yes, ML can outperform humans in certain tasks.
- Quantity can sometimes make a very big difference: if you could sit down and train a human on the same amount of data, the human might be on par with ML... but that's often not possible.
- Training data is not always generated by humans.
- Given the same data, ML may find connections or perspectives that humans have not followed or considered.
CrypticSplicer t1_j317w93 wrote
I've worked on projects where our ML model significantly outperformed the bulk of our operators. For structured data I've even had simple random forest models that we preferred to our operators, just because the model provided much more consistent decisions.
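The consistency point is worth making concrete. A minimal sketch of the kind of setup described above, using scikit-learn on synthetic stand-in data (the dataset and all parameters here are invented for illustration, not the commenter's actual project):

```python
# Hypothetical sketch: a simple random forest classifier on structured data.
# With a fixed random_state the model is fully deterministic: identical
# inputs always produce identical decisions, unlike human operators.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for structured operator-decision records
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Same row in, same decision out, every time
pred_a = model.predict(X_test[:1])
pred_b = model.predict(X_test[:1])
print(pred_a == pred_b)  # consistent by construction
print(model.score(X_test, y_test))
```

The determinism, not just the raw accuracy, is often what makes such a model preferable to a pool of operators who each apply the decision criteria slightly differently.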
clavalle t1_j35eqzp wrote
Makes sense.
An interesting question related to OP's: could there be an ML solution that humans /can't/ understand?
Not /don't/ understand... I mean a solution that, even given enough time and study, both outperforms humans and is relatively easy to verify, yet whose underlying model we cannot understand at all.
My current belief is that no model is truly beyond human reasoning. But I've seen some results that make me wonder.
BurgooButthead t1_j3augjm wrote
I wonder about this too. Like if there were some alien technology that we couldn't understand even if we were taught.
clavalle t1_j3cniyn wrote
Doesn't even have to be alien. Just the familiar seen from a wholly different perspective.
Like whatever these models are doing here: https://scitechdaily.com/artificial-intelligence-discovers-alternative-physics/