s1L3nCe_wb OP t1_jdhwpz8 wrote

>engaging with AI that just sort of agrees with your world view

I don't know if I'm failing to explain my point, but I really cannot explain it any better.

Just watch a video of what Peter Boghossian does in these debates and you might get an idea of what I'm talking about. Peter does not "sort of agree" with anyone; he just acts as an agent to help you analyse your own epistemological structure.

1

FinalJenemba t1_jdiaarp wrote

I don’t think you are understanding what resdaz is saying. We understand what you are proposing, and honestly it sounds great in theory. The issue being raised is that there isn’t only going to be one AI to rule the world. These are being developed as products; there will be many competing products trying to win market share and money. If consumers have access to one AI that challenges them and another that doesn’t, and instead makes them feel good about themselves by affirming them, which one do you realistically think most people are going to choose?

The market has already spoken; that’s why we have NBC and Fox. As long as AI is a for-profit business, the market, i.e. the people, will unfortunately dictate where AI goes, not the other way around.

2

s1L3nCe_wb OP t1_jdiawah wrote

I understand your point, and that is not the kind of model I'm proposing, although I admit that both the design of a solid, useful model and its real-world applicability are close to a utopian idea.

2

El_duderino_33 t1_jdilzpx wrote

Yes, your idea is good; the problem would not be the AI model. The problem would be the same one we have now: the people.

You're falling for the common misconception that the majority of other people must think in a way similar to you. Unfortunately for society, judging from your post's description, your willingness to entertain other viewpoints already makes you a fairly rare individual.

This line:

"I chose to make an genuine effort to understand the rationale behind their beliefs"

Good on you, that's wise, but it's not common. The part where you had to make an effort to understand is what's gonna trip up a lot of folks.

tl;dr: you can lead a horse to water... that cliché sums up my post

2

s1L3nCe_wb OP t1_jdiousi wrote

But my point is that the agent making the effort to genuinely understand your ideas/values/beliefs would not be human in this case; it would be an AI, which is precisely why I think this could work substantially better than the average human exchange of ideas.

When a debate emerges, most people are accustomed to taking a confrontational approach to the conversation: the external agent or agents try to disprove your point, and you respond by defending your position and/or attacking theirs. But when the external agent invests its time in fully understanding the point you are trying to make, the tone of the conversation changes dramatically because the objective is entirely different.

My main point regarding the human aspect of this discussion is that when we show real interest in understanding the point someone is making, the quality of the interaction improves dramatically. And, like I said, I've seen this happen very often in my line of work. Maybe that's why I'm more hopeful than the average person on this subject.
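For what it's worth, here's a rough sketch of how that "understanding-first" behaviour could be prototyped with a current LLM API. The prompt wording, model name, and helper function are just my own illustrative assumptions, not a tested design:

```python
# A minimal sketch of an "understanding-first" conversational agent,
# in the spirit of street epistemology: the model is instructed to
# clarify and restate the user's reasoning rather than argue against it
# or affirm it. Assumes the official `openai` Python package; the model
# name and prompt wording are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_SYSTEM_PROMPT = """\
You are a conversational partner whose only goal is to understand
the user's view, not to win the argument or to affirm them.
1. Never state agreement or disagreement with the user's position.
2. Restate their claim in your own words and ask if you got it right.
3. Ask one open question at a time about the reasons behind the claim
   and how confident they are in each reason.
4. Surface tensions between their stated reasons only by asking
   about them, never by declaring them wrong.
"""

def socratic_reply(history: list[dict]) -> str:
    """Return the agent's next turn given the chat history so far."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat model would do
        messages=[{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
                  *history],
    )
    return response.choices[0].message.content

# Example turn:
# print(socratic_reply([{"role": "user",
#                        "content": "Social media is destroying debate."}]))
```

The key design choice is that the system prompt forbids agreement and disagreement alike, so the model's only available move is the one Boghossian makes: asking about your reasons.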

1