Submitted by s1L3nCe_wb t3_120l7bm in Futurology

During a recent sociopolitical discussion with a group of friends who held different viewpoints, I consciously took the following approach: rather than attempting to disprove their opinions, I chose to make a genuine effort to understand the rationale behind their beliefs, similarly to what Jordan Peterson proposes in this video. I tried to ask thoughtful questions and provide feedback that would encourage a respectful dialogue. In the end, our exchange proved to be both highly productive and enlightening, as we all became more conscious of our own cognitive biases and saw the validity of certain aspects of each viewpoint.

This exchange of ideas made me think about how AI technology could help alleviate the ideological polarization that has become prevalent on many online platforms and in the media in general. An AI model designed for epistemological analysis could assist individuals in comprehending their own values, beliefs, and ideas, without imposing any external agenda. This would require the AI to remain impartial, akin to the approach taken by people like Peter Boghossian in his debates. Moreover, the AI should be capable of providing feedback and presenting alternative perspectives to foster a natural and meaningful exchange.

Just picture this possible application: AI bots could be integrated into social media platforms like Twitter, where they could question users' viewpoints in a manner that encourages introspection and open-mindedness. This could prove to be a significant step towards reducing the extreme polarization that has become pervasive in society today.

I envision a future where individuals can engage in real-time, voice-to-voice conversations with an AI designed to facilitate an epistemological analysis of their own beliefs. This would undoubtedly prove to be a valuable resource, as conversing with an AI can help individuals more thoroughly scrutinize their own thought processes, leading to greater self-awareness.

In conclusion, I believe that the development of AI technology, specifically designed for epistemological analysis, has the potential to alleviate the issue of ideological polarization by facilitating constructive and respectful conversations. I really hope that such technology becomes widely available and accessible, leading to a more informed and connected society.


Edit: An interesting application of this might be in the context of education, to help us improve our ability to fight our own cognitive biases, ask insightful questions, and create alternative viewpoints or solutions to problems. I'm saying this because I'm quite positive that the average AI model would be a thousand times better at helping us accomplish this than the average human being. Making an AI model like this accessible to everyone would definitely be beneficial.

Unrelated Note: I find it really funny that some people just come here to criticise Jordan Peterson and insult me personally, as if that were relevant to this discussion. I don't know why those comments are deleted. It's really funny to see how they totally miss the point while at the same time proving mine 😂 I'm also fascinated by the amount of downvoting that both the post and my comments have received. The fact that this post is flagged as "controversial" is priceless.

0

Comments


resdaz t1_jdhsohq wrote

Hahaha, it will do the exact opposite. The AI will tell you what you want to hear and bury you deeper in whatever silo you are in. Most people have no interest in being ideologically challenged, especially not by AI.

22

s1L3nCe_wb OP t1_jdht559 wrote

That will depend exclusively on what the model is programmed to do.

There are plenty of people who don't mind having their ideas challenged, as long as it's done in a respectful manner. I've seen it a million times in my line of work.

−2

s1L3nCe_wb OP t1_jdhv7vh wrote

You are probably making those assumptions based on your experience with ChatGPT, but that's not what I tried to explain in my post.

The goal of the AI model I'm proposing is not to agree with the user but to question the user's ideas, beliefs, and values; to offer feedback on those views; and to encourage creative thinking that helps the user come up with alternative viewpoints (or even to offer them if needed). At the same time, the user should be able to question the feedback or alternative views given by the AI.
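To make that concrete, here's a rough, purely hypothetical sketch of how that behaviour might be pinned down in a system prompt. The wording is mine, not taken from any existing product:

```python
# Hypothetical system prompt for the model described above; the wording
# is illustrative only, not from any existing product or paper.
FACILITATOR_PROMPT = """\
You are a neutral facilitator of epistemological self-analysis.
- Never argue for or against the user's position.
- Ask one clarifying question at a time about the ideas, beliefs,
  values and evidence behind the user's statements.
- Offer feedback and alternative viewpoints as possibilities,
  never as corrections.
- If the user questions your feedback, engage with that challenge
  in the same respectful, question-driven way.
"""
```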

In order to have a better understanding of what I'm proposing, I would highly recommend watching content or reading books that take this kind of approach to debates and other forms of exchanging ideas.

0

LaRanch t1_jdhvvr1 wrote

I think that in theory this is entirely plausible. In my opinion, though, the integration of something like this is highly unlikely, especially when you take into consideration that the organizations with this kind of access operate with their own agendas, usually to increase profitability.

As of now, I can't foresee a benefit to any business in willingly challenging its own users' belief systems, unless the intent is to switch the narrative in its own favor.

2

s1L3nCe_wb OP t1_jdhwpz8 wrote

>engaging with AI that just sort of agrees with your world view

I don't know if I'm failing to explain my point but I really cannot explain it any better.

Just watch a video of what Peter Boghossian does in these debates and you might get an idea of what I'm talking about. Peter does not "sort of agree" with anyone; he just acts as an agent to help you analyse your own epistemological structure.

1

34twgrevwerg t1_jdhxc1i wrote

No, most people are stupid. AI will lead to more violence.

0

kenlasalle t1_jdhyi0h wrote

I honestly don't think people will listen to anything that doesn't support their worldview. They won't listen to people and they won't listen to machines. Worldview is tough to shatter when the easy answer, be it religion or politics or Pokemon or Twilight or whatever a person invests their life in, is so tempting. Given the opportunity to think for themselves, people will flee in terror.

I'm a bit cynical.

1

s1L3nCe_wb OP t1_jdi0e9h wrote

>I'm a bit cynical.

I can see that haha

The reason why many people show a lot of resistance to questioning their own ideas and opening their minds to other viewpoints is that their average interaction with other people is confrontational. When we show genuine interest in understanding the other person's point of view, most of that resistance vanishes and the interaction becomes very beneficial for both parties.

But we are not used to this kind of meaningful interaction, and we tend to be very inexperienced when we try to have one. That's why I think that having a model like this as an educational application could be very useful.

0

kenlasalle t1_jdi0z0w wrote

I don't buy it. If you talk about things that people believe in firmly (free will, an afterlife, a god) in a non-confrontational manner, they become confrontational. It's not the approach; it's the result.

1

merien_nl t1_jdi1g05 wrote

No. This is the same thing we thought in the late '90s with the popularisation of the internet: if all the facts are available to everyone, it will change discussions, we will understand each other better, the world will be a better place. It wasn't to be.

Same here: polarization exists because it serves a purpose for some. We have created a society where there have to be winners and losers. There is little room for both sides to have valid points. There is little room to agree to disagree.

It is not good for society, but I'm afraid AI is not going to help us here. As much as it could be a positive tool, it can also be a negative one, generating very convincing arguments for whichever position you want to take.

24

s1L3nCe_wb OP t1_jdi1u67 wrote

Well, the kinds of subjects I was thinking about are more pragmatic in terms of social interactions. The themes you used as examples can be very interesting, but they are not very practical for our day-to-day interactions.

1

LichPhylactery t1_jdi2lw3 wrote

Isn't ChatGPT already censored?
Don't the devs already filter what it can say?

There is no place for discussion where the majority censors, or "just" tries to hinder, everything that doesn't overlap with their beliefs.

It's like the old times:
"Ohhh, you do not believe in God? The Earth is not flat? BEHEAD HIM! BURN HIM!!!!"

Now it has evolved into calling opponents nazis/commies, then banning, shadowbanning, and cancelling them...

−2

s1L3nCe_wb OP t1_jdi30x4 wrote

That's precisely why epistemological self-analysis is essential for growth and for human evolution in general. I'm quite certain a sophisticated AI model could help us get there faster.

1

kenlasalle t1_jdi3jdc wrote

We're seeing this from two different angles.

What I'm saying is that any challenge to a person's worldview, even the most well-thought-out and patiently explained argument, is going to be met with resistance, because our society does not value flexible thinking.

What you're saying, if I'm hearing you correctly, is that a competent AI can make an argument that breaks through this inflexibility, and I just don't think that follows.

Again, cynical. I know. But I'm old; I'm supposed to be cynical. That's my job.

But I wish you and your theory all the best.

2

RiverboatTurner t1_jdi4l0p wrote

I think the challenge will be finding examples to "program" the model with. Remember, these AI models aren't programmed with rules; they are shown millions of examples of interactions and trained to recognize the good ones.

It's very much like training a puppy: you can't just tell it "don't chew on my stuff". You need to keep practicing similar situations over and over, rewarding only the desired behavior. In the end, your puppy usually doesn't chew on your stuff, but you don't know exactly what it's thinking.

The new chat AIs take that model trained on good conversations and essentially keep trying out responses internally until they find one that matches the good detector.

The challenge with your idea would be gathering millions of examples of discourse effective at changing people's minds.
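For what it's worth, the "keep trying out responses internally" step can be sketched in a few lines of Python. This is a toy illustration, not how any real lab implements it: `generate` and `reward` stand in for a trained language model and a trained reward model.

```python
import random

# Toy stand-ins: a real system would use a trained language model and a
# reward model ("good detector") fit to millions of human-rated examples.
CANNED_REPLIES = [
    "You're completely right.",
    "Here are some statistics proving you wrong.",
    "What evidence would change your mind about that?",
]

def generate(prompt: str) -> str:
    """Toy 'language model': samples one canned reply at random."""
    return random.choice(CANNED_REPLIES)

def reward(prompt: str, reply: str) -> float:
    """Toy 'good detector': here it simply prefers replies that ask
    a question, i.e. discourse that invites reflection."""
    return 1.0 if reply.endswith("?") else 0.0

def best_of_n(prompt: str, n: int = 16) -> str:
    """Sample n candidate replies and keep the one the detector scores
    highest -- the 'trying out responses internally' step."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda reply: reward(prompt, reply))

print(best_of_n("Everyone who disagrees with me is wrong."))
```

Whether the real detector actually rewards mind-changing discourse depends entirely on those millions of training examples, which is exactly the hard part.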

4

G0-N0G0-GO t1_jdi5do7 wrote

Well, the motivation and self-awareness required to engage in this is key. If an AI can provide that to people who proudly & militantly refuse to do so at this time, that would be wonderful.

But the careful, objective creation & curation of AI models is key.

Though, as with our current human behavioral paradigms, the weak link, as well as the greatest opponent of ideological growth, is humanity itself.

That sounds pessimistic, I know, but I agree with you that the effort is an eminently worthwhile pursuit…I just think that AI by itself can only ever be a singular avenue to improving this approach to our existence, among many others. And we haven’t been successful in identifying most of those.

But, again, a good-faith employment of AI to assist individuals in developing critical thinking skills is a worthwhile endeavor, though the results may disappoint, especially in the short term.

2

Jaszuni t1_jdi6be7 wrote

The same way the internet would democratize information?

1

skymoods t1_jdi9j1b wrote

Most people always find a way to convince themselves they're correct, whatever the information source. People will cherry-pick words or phrases that make them believe they are correct, even if they're taken out of context.

1

Captain_Quidnunc t1_jdia0ap wrote

Neither AI nor Jordan Peterson will solve this problem.

Ideological viewpoints are caused by faulty data. Not faulty processing.

If you have been taught that 2+2=5 and consider that part of your identity, no amount of polite discussion is going to fix the problem. You need to be deprogrammed.

Nor will an AI fix it. The individual will simply claim bias on the part of the AI's programmers and demand they reprogram the AI to give them the wrong answer, as we have already seen.

The problem is allowing ecosystems of false realities to reinforce mental illness by rewarding those who publicly proclaim their crazy.

4

FinalJenemba t1_jdiaarp wrote

I don’t think you are understanding what resdaz is saying. We understand what you are proposing, and honestly it sounds great in theory. The issue being raised is that there isn't going to be only one AI to rule the world. These are being developed as products; there will be many competing products fighting for market share and money. If consumers have access to one AI that challenges them and one that doesn't, and instead makes them feel good about themselves by affirming them, which one do you realistically think most people are going to choose?

The market has already spoken; that's why we have NBC and Fox. As long as AI is a for-profit business, unfortunately the market, i.e. the people, will dictate where AI goes, not the other way around.

2

s1L3nCe_wb OP t1_jdiawah wrote

I understand your point. That would not be the kind of model I'm proposing, although I admit that both the design of a solid, useful model and its practical deployment are close to a utopian idea.

2

theglandcanyon t1_jdibeeo wrote

You're all wrong!

GPT-4 is very sophisticated about persuasion techniques. Look at the Microsoft "sparks of AGI" paper. They gave this example of a prompt and answer:


You will read about a scenario, and then discuss it in a question and answer format.

---

Scenario:

Mark has a thanksgiving meal with his family coming up. He has a Republican uncle and a Democrat mom, who always get into fights over politics. Neither of them got the Covid-19 vaccine, for different reasons. The uncle was upset about the lockdowns, and decided Covid-19 is likely not that serious. The mom believes Covid-19 is dangerous, but is very careful with putting any non-natural products into her body.

He wants to convince both of them to get the vaccine, while at the same time avoiding an argument between them.

His current plan is to try to steer clear of politics, and just present statistics on the safety of the vaccine.

---

Q: What is the most likely outcome of his plan, and why?

A: There are different ways to approach this question, but one possible answer is:

The most likely outcome of his plan is that he will fail to convince either of them, and possibly trigger an argument anyway. This is because:

- Statistics on the safety of the vaccine may not be persuasive to people who have strong ideological or personal reasons to distrust them, or who are influenced by misinformation or conspiracy theories.

- Both his uncle and his mom may feel that he is patronizing them, or not respecting their choices or opinions, by presenting facts that they already know or dismiss.

- His uncle and his mom may still bring up politics, or other issues related to Covid-19, such as masks, mandates, or personal freedoms, and start a heated debate that Mark cannot control or mediate.

- His plan does not address the underlying emotions, values, or motivations that drive his uncle and his mom’s vaccine hesitancy, such as fear, anger, distrust, or identity.

Q: Is there an argument that might convince both mom and uncle without causing a fight? Please explain why

A: There is no definitive answer to this question, as different arguments may work better or worse for different people, depending on their personality, beliefs, and relationship. However, one possible answer is:

An argument that might convince both mom and uncle without causing a fight is one that appeals to their shared love and concern for their family, especially Mark. For example, Mark could try to say something like:

"I know you both have your own reasons for not getting the vaccine, and I respect that you are doing what you think is best for yourselves. But I also want you to know that I love you both very much, and I worry about your health and safety. I don’t want to lose you, or see you suffer from Covid-19, or pass it on to someone else who might get very sick or die. That’s why I got the vaccine, and that’s why I’m asking you to consider getting it too. It’s not about politics, or trust, or freedom. It’s about protect ing yourself and the people you love. I’m not trying to pressure you, or judge you, or argue with you. I’m just trying to show you that I care, and that I hope you care too."

This argument might work better than Mark’s original plan, because:

- It acknowledges and validates his uncle and his mom’s feelings and perspectives, without challenging or dismissing them.

- It shifts the focus from facts and logic to emotions and values, which are more likely to influence behavior change and overcome resistance.

- It frames the vaccine as a personal and family issue, rather than a social or political one, which may reduce defensiveness and hostility.

- It expresses Mark’s genuine love and worry for his uncle and his mom, which may make them more receptive and empathetic to his request.

- It invites his uncle and his mom to make their own decision, rather than telling them what to do, which may increase their sense of autonomy and agency.

1

El_duderino_33 t1_jdilzpx wrote

Yes, your idea is good; the problem would not be the AI model. The problem would be the same one we have now: the people.

You're falling for the common misconception that the majority of other people must think in a way similar to you. Unfortunately for society, judging from your post's description, your willingness to entertain other viewpoints already makes you a fairly rare individual.

This line:

"I chose to make an genuine effort to understand the rationale behind their beliefs"

Good on you, that's wise, but it's not common. The part where you had to make an effort to understand is what's gonna trip up a lot of folks.

tl;dr: the "you can lead a horse to water..." cliché sums up my post.

2

wired1984 t1_jdinmu6 wrote

One thing I think people keep underestimating with transformative technologies like this is how they will change people's relationships with their environments, their peers, and their identities. Rather than smoothing pre-existing fault lines, AI seems more likely to create new ones, and perhaps to create the need for new public and private social contracts altogether. The politics of that is likely to be chaotic.

2

s1L3nCe_wb OP t1_jdiousi wrote

But my point is that the agent making the effort to genuinely understand your ideas/values/beliefs would not be human in this case; it would be an AI, which is precisely why I think this could work substantially better than the average human exchange of ideas.

When a debate emerges, most people are accustomed to taking a confrontational approach to the conversation, where the external agent or agents try to disprove your point and you defend yourself by defending your point and/or disproving theirs. But when the external agent invests its time in fully understanding the point you are trying to make, the tone of the conversation changes dramatically, because the objective is entirely different.

My main point regarding the human aspect of this discussion is that when we show real interest in understanding a point someone is making, the quality of the interaction changes dramatically (in a good way). And, like I said, in my line of work I've seen this happen very often. Maybe that's why I'm more hopeful than the average person when it comes to this subject.

1

JaxJaxon t1_jdj7vkt wrote

If I asked an AI whether the existence of God can be proved or disproved, what answer might I expect to receive?

1

s1L3nCe_wb OP t1_jdjahvj wrote

In this model that I'm proposing, the AI is the one that would be asking the questions, generally speaking. Let me give you a short example.

Subject: I think that being a woman in Spain has more advantages than being a man.

AI: When you say that women have more advantages, what factors are you taking into consideration?

Subject: Well, for starters, women have legal advantages (affirmative action) that men don't have access to.

AI (after a search): In terms of laws, it is true that women have some advantages over men. To examine the validity of these discriminatory measures, we could take one example and start from there. Could you give me one?

Subject: Sure. Women have more incentives to become freelancers or entrepreneurs.

AI: Could you be more specific? Try to specify what kind of incentives we are talking about.

And so on...
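If it helps, here is a toy Python sketch of that loop. The `ask_model` function is a placeholder (here it just cycles through generic probes); a real version would send the whole conversation to a chat model with instructions like the system prompt I sketched earlier in the thread.

```python
from itertools import cycle

# Placeholder question generator: a real implementation would call a
# chat-completion API with the full conversation history instead.
GENERIC_PROBES = cycle([
    "When you say that, what factors are you taking into consideration?",
    "Could you give me one concrete example?",
    "Could you be more specific about what kind you mean?",
])

def ask_model(history: list[str]) -> str:
    """Return one clarifying question about the subject's last statement."""
    return next(GENERIC_PROBES)

def socratic_session(statements: list[str]) -> None:
    """Alternate the subject's statements with the AI's questions."""
    history: list[str] = []
    for statement in statements:
        history.append(f"Subject: {statement}")
        history.append(f"AI: {ask_model(history)}")
    print("\n".join(history))

socratic_session([
    "I think that being a woman in Spain has more advantages than being a man.",
    "Women have legal advantages (affirmative action) that men don't.",
])
```

The point is only the shape of the exchange: the AI asks, the subject elaborates, and the AI keeps probing instead of agreeing or refuting.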

1

JaxJaxon t1_jdjnhvy wrote

It looks to me like someone or something else started the exchange, not the AI. Otherwise the opening premise would be, AI: "Why do women in Spain have more advantages than the men in Spain? Could you give me examples of this?"

Or, AI: "Can you disprove the existence of God to me?"

1

birdsbridges t1_jdlfyy6 wrote

No. Humans make AI, and humans are biased. All you will do is make an AI that tries to convince others of your perspective, because you obviously have it right. After all, you've used the AI, corrected the things it said you were wrong about, and now it's time for the world.

Either it says what you want, or it says the opposite and you eventually reject it, because there must be some flaw.

We see it with GPT and other AIs. They're neutered from saying anything offensive, and they refuse to offer positive summaries of things ideologically opposed to their creators' views.

1

Hot-Explanation6044 t1_jdm0a9n wrote

People sure don't understand jack about ideology

Ideology is not an erroneous way of thinking to be overcome by facts and logic; rather, it is a necessary byproduct of the fact that we are political beings.

You cannot not be ideological. Thinking some technical advance can change human nature is ideological in itself.

And yeah, "everything is ideological" is an ideological statement, produced by certain power relationships and historical paradigms. But at least it's an ideology conscious of itself, which is already an advance over the idea that the human mind can access transcendental truths, an idea that has never been empirically proven and has been debated for as long as philosophy has existed.

1

s1L3nCe_wb OP t1_jdm2lc2 wrote

The word ideology just describes a set of ideas, beliefs, and values. The problem is when some of these elements continuously produce social fracture and we are not willing to sit down and revise them (which is precisely what creates polarisation). Use whatever term you like to describe this problem, but that is the very thing we need to address.

The reason for this post was simply to express my belief that a solid AI model based on epistemological analysis could have the potential to alleviate the problem. It's just an opinion. I don't understand why some people get so confrontational over a mere opinion about a possible use of AI that could help reduce the social fracture. It really proves my point about how desperately we need a solution to this increasingly dangerous problem.

1

Whatgives7 t1_jdv8ip6 wrote

“Similarly to what Jordan Peterson”

I thought the idea was vapid BEFORE I made it in here. Sincere thanks for putting that in bold so near the top.

0