
kenlasalle t1_jdhyi0h wrote

I honestly don't think people will listen to anything that doesn't support their worldview. They won't listen to people and they won't listen to machines. Worldview is tough to shatter when the easy answer, be it religion or politics or Pokemon or Twilight or whatever a person invests their life in, is so tempting. Given the opportunity to think for themselves, people will flee in terror.

I'm a bit cynical.

1

s1L3nCe_wb OP t1_jdi0e9h wrote

>I'm a bit cynical.

I can see that haha

The reason many people show so much resistance to questioning their own ideas and opening their minds to other viewpoints is that their average interaction with other people is confrontational. When we show genuine interest in understanding the other person's point of view, most of that resistance vanishes and the interaction becomes very beneficial for both parties.

But we are not used to this kind of meaningful interaction, and we tend to be very inexperienced when we try to have one. That's why I think that having a model like this as an educational application could be very useful.

0

kenlasalle t1_jdi0z0w wrote

I don't buy it. If you talk about things that people believe in firmly - free will, an afterlife, a god - in a non-confrontational manner, they become confrontational anyway. It's not the approach; it's the result.

1

s1L3nCe_wb OP t1_jdi1u67 wrote

Well, the kinds of subjects I was thinking about are more pragmatic in terms of social interactions. The themes you used as examples can be very interesting, but they are not very practical for our day-to-day interactions.

1

kenlasalle t1_jdi2ajb wrote

And yet, they lie at the heart of many of our misunderstandings all the same.

1

s1L3nCe_wb OP t1_jdi30x4 wrote

That's precisely why epistemological self-analysis is essential for growth and for human evolution in general. I'm quite certain a sophisticated AI model could help us get there faster.

1

kenlasalle t1_jdi3jdc wrote

We're seeing this from two different angles.

What I'm saying is that any challenge to a person's worldview, even the most well-thought-out and patiently explained argument, is going to be met with resistance, because our society does not value flexible thinking.

What you're saying, if I'm hearing you correctly, is that a competent AI can make an argument that breaks through this inflexibility - and I just don't think that follows.

Again, cynical. I know. But I'm old; I'm supposed to be cynical. That's my job.

But I wish you and your theory all the best.

2

s1L3nCe_wb OP t1_jdi40jf wrote

Hahaha yeah, that is a good summary.

Thank you for sharing your views! Have a good weekend 🙏

2

G0-N0G0-GO t1_jdi5do7 wrote

Well, the motivation and self-awareness required to engage in this are key. If an AI can provide them to people who proudly and militantly refuse to engage at this time, that would be wonderful.

But the careful, objective creation & curation of AI models is key.

Though, as with our current human behavioral paradigms, the weak link - and the greatest opponent of ideological growth - is humanity itself.

That sounds pessimistic, I know, but I agree with you that the effort is an eminently worthwhile pursuit. I just think that AI by itself can only ever be one avenue, among many, toward improving this approach to our existence. And we haven't been successful in identifying most of the others.

Again, a good-faith employment of AI to assist individuals in developing critical thinking skills is a worthwhile endeavor. But the results may disappoint, especially in the short term.

2