
1714alpha t1_jdt21l2 wrote

Compare this to the current setup.

If you want to predict something, such as the weather, political events, or financial trends, you would call together a body of experts and gather the best available data to make a best guess at what will happen and what to do about it. We know we're relying on people's imperfect judgement and on incomplete data. The experts may be right, or they may be wrong. But it's the best judgement we can offer with the best data available; anything else would be even less likely to be right. It's the best option we have, so we go with it.

Now consider an algorithm that is, on average, at least as good as, or possibly better than, the best experts we have in a given subject. It can digest all the data the experts themselves can, and more. Would it be wrong to think the algorithm might have input worth considering? As with any independent expert, you'd want to check with the larger community of experts to see what they think about the algorithm's projections, but in principle I don't see why it should be discounted just because it came from an AI. Hell, there are already programs that can diagnose illnesses better than human doctors.

To your point, it would indeed be problematic if any single source of information became the unquestioned authority on any given topic, but the same is true of human pundits and professors alike.


circleuranus OP t1_jduqpkp wrote

> became the unquestioned authority on any given topic, but the same is true of human pundits and professors alike.

No other system is capable of such a thing the way AI is. Every other system we have depends on humans and on the trust between humans, along with their biases. Humans actually seek information from other humans based solely on their shared biases. Once you remove the human element, the system just "is". And such a system will be indistinguishable from magic or "the Gods".
