
clavalle t1_j2xk9dl wrote

Yes, ML can outperform humans in certain tasks.

  1. Sheer quantity of data can make a very big difference - if you could sit a human down and train them on the same amount of data, they might be on par with ML...but that's often not possible

  2. Training data is not always generated by humans.

  3. Given the same data, ML can find connections or perspectives that humans have not followed or considered.

10

CrypticSplicer t1_j317w93 wrote

I've worked on projects where our ML model significantly outperformed the bulk of our operators. For structured data, I've even had simple random forest models that we preferred over our operators, simply because the model made much more consistent decisions.
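
The commenter doesn't name their tools or data, so as a rough sketch of what's being described: a scikit-learn random forest on a synthetic tabular dataset (both assumptions of mine, standing in for the real stack and operator-reviewed records). The point it illustrates is the consistency one: the trained model returns the same decision for the same input every time.

```python
# Hedged sketch, not the commenter's actual setup: scikit-learn's
# RandomForestClassifier on synthetic structured data standing in for
# the kind of tabular decisions human operators would make.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic tabular data: 1000 rows, 20 features, binary label.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Unlike human operators, the model gives identical answers for
# identical inputs - the consistency the comment is pointing at.
acc = accuracy_score(y_test, model.predict(X_test))
```

Nothing fancy is needed for the argument: even this "simple" model is deterministic at inference time, whereas a pool of human reviewers will disagree with each other (and with themselves) on borderline cases.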

3

clavalle t1_j35eqzp wrote

Makes sense.

An interesting question related to OP's: could there be an ML solution that humans /can't/ understand?

Not /don't/ understand...I mean a solution that both outperforms humans and is relatively easy to verify, yet where, even given enough time and study, we cannot understand the underlying model at all.

My current belief is that no model is truly beyond human reasoning. But I've seen some results that make me wonder.

2