
kenlasalle t1_jdi2ajb wrote

And yet, they lay at the heart of many of our misunderstandings all the same.

1

s1L3nCe_wb OP t1_jdi30x4 wrote

That's precisely why epistemological self-analysis is essential for growth and for human evolution in general. I'm quite certain a sophisticated AI model could help us get there faster.

1

kenlasalle t1_jdi3jdc wrote

We're seeing this from two different angles.

What I'm saying is that any challenge to a person's worldview, even the most well-thought-out and patiently explained argument, is going to be met with resistance, because our society does not value flexible thinking.

What you're saying, if I'm hearing you correctly, is that a competent AI can make an argument that breaks through this inflexibility - and I just don't think that follows.

Again, cynical. I know. But I'm old; I'm supposed to be cynical. That's my job.

But I wish you and your theory all the best.

2

s1L3nCe_wb OP t1_jdi40jf wrote

Hahaha yeah, that is a good summary.

Thank you for sharing your views! Have a good weekend 🙏

2

G0-N0G0-GO t1_jdi5do7 wrote

Well, the motivation and self-awareness required to engage in this are key. If an AI can provide that to people who proudly and militantly refuse to do so at this time, that would be wonderful.

But the careful, objective creation & curation of AI models is key.

Though, as with our current human behavioral paradigms, the weak link, and the greatest opponent to ideological growth, is humanity itself.

That sounds pessimistic, I know, but I agree with you that the effort is an eminently worthwhile pursuit…I just think that AI by itself can only ever be one avenue, among many, toward improving this approach to our existence. And we haven't been successful in identifying most of those others.

Still, a good-faith employment of AI to assist individuals in developing critical thinking skills is a worthwhile endeavor. The results may just disappoint, especially in the short term.

2