currentscurrents t1_iybz6a1 wrote
Reply to comment by piyabati in [D] Other than data what are the common problems holding back machine learning/artificial intelligence by BadKarma-18
I do agree that current ML systems require much larger datasets than we would like. I doubt the typical human hears more than a million words of English in their childhood, but they know the language much better than GPT-3 does after reading billions of pages of it.
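A quick back-of-envelope comparison makes the gap concrete (rough figures: the childhood number is my guess above, and the ~300B token count is what the GPT-3 paper reports):

```python
# Rough, assumed figures: ~1M words heard in childhood (guess above)
# vs. the ~300B training tokens reported in the GPT-3 paper.
child_words = 1_000_000
gpt3_tokens = 300_000_000_000

print(f"GPT-3 saw ~{gpt3_tokens / child_words:,.0f}x more text")  # ~300,000x
```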
> What is holding back AI/ML is to continue to define intelligence the way Turing did back in 1950 (making machines that can pass as human)
But I don't agree with this. Nobody is seriously using the Turing test anymore; these days, AI/ML is about concrete problems and specific tasks. The goal isn't to pass as human, it's to solve whatever problem is in front of you.
yldedly t1_iydiq69 wrote
>The goal isn't to pass as human, it's to solve whatever problem is in front of you.
It's worth disambiguating between solving specific business problems and creating intelligent (meaning broadly generalizing) programs that can solve problems. For the former, what Francois Chollet calls cognitive automation is often sufficient, if you can get enough data, and we're making great progress. For the latter, we haven't made much progress, and few people are even working on it. Lots of people are working on the former and deluding themselves that one day it will magically become the latter.
piyabati t1_iydpx86 wrote
The hottest problems in NLP, computer vision, and even self-driving cars are almost solely defined in terms of how well a machine can mimic a human.
Desperate-Whereas50 t1_iye5kfo wrote
>I doubt the typical human hears more than a million words of english in their childhood, but they know the language much better than GPT-3 does after reading billions of pages of it.
But is this a fair comparison? I am far from an expert in evolution, but I assume we have some evolutionarily encoded bias that makes language easier to learn, whereas ML systems have to start from zero.
currentscurrents t1_iye68b8 wrote
Well, fair or not, it's a real challenge for ML since large datasets are hard to collect and expensive to train on.
It would be really nice to be able to learn generalizable ideas from small datasets.
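One common partial workaround is transfer learning: let a model pretrained on a large generic dataset supply the features, so the small dataset only has to fit a thin task-specific head. A minimal PyTorch sketch (the 10-class head is a hypothetical example):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Reuse features learned on ImageNet; train only a small head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False  # freeze the pretrained features

backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # hypothetical 10-class task
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
# ...then train only the head on the small dataset as usual.
```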
Desperate-Whereas50 t1_iye7hf3 wrote
That's correct. But to define the bare minimum, you need a baseline, and I just wanted to say that humans are a bad baseline because we have "training data" encoded in our DNA. Further, on tabular data, ML systems often outperform humans with far less training data.

But of course, needing less data to get good results is always better. I would not argue about that.
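As a quick illustration of the tabular point (just a sketch on a standard small dataset, not a human comparison):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# 569 rows, 30 features -- tiny by deep-learning standards.
X, y = load_breast_cancer(return_X_y=True)
scores = cross_val_score(GradientBoostingClassifier(), X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")  # typically well above 0.9
```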
Edit: Typos