Submitted by FrogsEverywhere t3_zgfou6 in Futurology
MrZwink t1_iziexmn wrote
Reply to comment by Drakolyik in The technological singularity is happening (oc/opinion) by FrogsEverywhere
They find correlation, not causation.
This means they have notorious difficulty with queries that make no sense. A good example is Galactica, Facebook's scientific-paper AI: ask it for the benefits of eating crushed glass and it tries to answer. It doesn't notice the question is flawed. It just finds data that correlates with the query and makes stuff up (see the sketch below).
It's an open question whether we will ever be able to teach AI common sense.
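To make that failure mode concrete, here's a minimal sketch using Hugging Face's transformers library, with the small gpt2 checkpoint standing in for Galactica (the prompt is the example from above; everything else is an assumption for illustration). A plain language model just extends the prompt with statistically plausible tokens and has no mechanism for rejecting a flawed premise:

```python
# Minimal sketch: a generic language model completes a flawed-premise prompt.
# Assumes the `transformers` library is installed; gpt2 stands in for Galactica.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The health benefits of eating crushed glass include",
    max_new_tokens=40,
    do_sample=True,
)

# The model fluently continues the sentence rather than objecting to it --
# it scores token sequences by correlation with its training data, nothing more.
print(result[0]["generated_text"])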
PeartsGarden t1_izk8bru wrote
Yeah, but what if you never told a child about crushed glass? What if that child never dropped a glass, and never cut a finger while cleaning up the mess? What would that child say?
Would you say that child lacks common sense? Does that child lack experience (a training set)?
MrZwink t1_izl1el1 wrote
I'm not getting into a whole philosophical debate. These AIs aren't meant to be children giving their opinion on a subject. They're expected to be oracles, and they're just not good enough yet.
PeartsGarden t1_izl89y7 wrote
> they're just not good enough yet.
My point is that this specific AI's training set may have been insufficient, the same way a child's experiences can be insufficient. I think we can both agree that a child has common sense, or at least a budding version of it.
MrZwink t1_izlbrmv wrote
It's not the training set that's the problem. It's the way the statistics approach the problem: correlation is not causation. AIs are a tool to automate cognitive processes, nothing more. We shouldn't expect them to be oracles.
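A toy illustration of that gap, on made-up data: a model fit on observational data learns a strong correlation, and nothing in the fit itself tells you the predictor is not a cause. The classic ice-cream/drownings example, sketched in Python (all numbers here are invented for the demonstration):

```python
# Sketch: correlation without causation, on synthetic data.
# Ice cream sales and drownings both rise with temperature; neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(10, 35, size=200)            # hidden common cause
ice_cream = 2.0 * temperature + rng.normal(0, 3, 200)  # sales track the heat
drownings = 0.5 * temperature + rng.normal(0, 1, 200)  # so do drownings

# A least-squares fit of drownings on ice cream sales finds a strong "effect"...
slope, intercept = np.polyfit(ice_cream, drownings, deg=1)
r = np.corrcoef(ice_cream, drownings)[0, 1]
print(f"slope={slope:.2f}, correlation={r:.2f}")

# ...but banning ice cream would not reduce drownings. The statistics alone
# cannot distinguish the common cause (temperature) from a causal link.
```

This is the sense in which the training set isn't the issue: give the fit more correlated data and the spurious relationship only gets sharper.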