Submitted by atomsinmove t3_10jhn38 in singularity
AsheyDS t1_j5myhgr wrote
Reply to comment by Baturinsky in Steelmanning AI pessimists. by atomsinmove
We don't have a lot of time, but we do have time. I don't think there will be any immediate critical risks, especially with safety in mind, and what risk there is might even be mitigated by near-future AI. ChatGPT, for example, may soon be adequate at fact-checking misinformation. Other AIs might be able to spot deepfakes. It would help if more people started discussing the ways AGI can potentially be misused, so everybody can begin preparing and building up protections.
Baturinsky t1_j5n2dnx wrote
Do you really expect ChatGPT to go against the USA's disinformation machine? Do you think it will be able to give a balanced report on controversial issues, taking into account the credibility and affiliation of sources and the quality of reasoning (such as NOT accepting "proofs" based on "alleged" and "highly likely")? Do you think it will honestly present the points of view of countries and sources not affiliated with or bought by the USA and/or the Dem or Rep party? Do you think it will let users define the criteria for credibility themselves and give info based on those criteria, rather than push the "only truth"?
Because if it won't, and AI is used as a way for the powers that be to brainwash the masses, instead of as a power for the masses to resist brainwashing, then we'll have a very gullible population and a very dishonest AI by the time it matters most.
P.S. And yes, if/when China or Russia makes something like ChatGPT, it will probably push their government's agenda just as ChatGPT pushes the US agenda. But is there hope for an impartial AI?
AsheyDS t1_j5n68fi wrote
I mean, that's out of their hands and mine. I probably shouldn't have used ChatGPT as an example; I just mean near-future narrow AI. It's possible we'll have non-biased AI (or at least minimally biased AI) over the next few years, but nobody can tell how many there will be or how effective they'll be.
Baturinsky t1_j5nwu4s wrote
I believe a capability like that could be key to our survival. It is required for the alignment of humanity itself, i.e. our being able to act together in the interest of Humanity as a whole. The direst political lies are usually aimed at splitting people apart and making them fear each other, since people are easier to control and manipulate in that state.
Also, this ability may be necessary for strong AI to even be possible, as a strong AI should be able to reason successfully from partially unreliable information.
And lastly, this ability will be necessary for AIs to check each other's reasoning.