odigon t1_j5s65g9 wrote
Reply to comment by Vibin_Eternally in "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."- Eliezer Yudkowsky. by KiwiTechCorp
I really have no idea what you are saying in your reply. Your original statement was that "A.I. is only as intelligent, beneficial, disciplined, or dangerous as it's creators". That's like saying a racing car isn't any faster than its creators.
We have in the past found ways to make machines that go fast, fly, travel underwater, and see incredible distances, far beyond what their creators can do, which is the entire point of them. Now we are attempting to create a machine that can reason, and if we succeed, it will reason with far more ability than we can, in the same way that the best chess grandmasters are no match for a chess engine set at a high enough level. Will it have the same goals as us? Why should it? If it doesn't, and becomes a danger, can we stop it? How? It will be able to outwit us at every turn.