Submitted by spiritus_dei t3_10tlh08 in MachineLearning
edjez t1_j7egs8x wrote
Reply to comment by GreenOnGray in [D] Are large language models dangerous? by spiritus_dei
Conflict: created by the first person in your example (me), followed up by you, with outcomes scored by mostly incompatible criteria.
Since we are talking about language-oracle-class AIs, not sovereigns or free agents, it takes a human to take the outputs and act on them, thereby becoming responsible for the actions; it doesn't matter what or who gave the advice. It's no different from substituting "Congress" or "Parliament" for the "super-intelligent AI".
(The Hitchhiker's Guide outcome would be that the AIs agree to put us on ice forever… or, more insidiously, constrain humanity to a single planet and keep progress self-regulated by conflict, so we never leave it. Oh wait a second… 😉)
GreenOnGray t1_j7l4tb3 wrote
What do you think the outcome would be? Assume the AIs cannot coordinate with each other explicitly.