
Vibin_Eternally t1_j5poh15 wrote

I think it can be said that no matter what functions we develop for Artificial Intelligence, the term itself being an umbrella 🏖️ for many iterations, there will always need to be someone responsible for the "input", meaning the orders, commands, and code, or even to program the on and off functions, as well as the basic overall design. No matter what, A.I. 🤖 is only as intelligent, beneficial, disciplined, or dangerous as its creators 🧑🏼‍💻

1

odigon t1_j5r7it0 wrote

I can't imagine how that statement could be any more incorrect. We already don't understand how neural networks solve specific problems; we just let them loose on training data and reinforcement and get them to figure it out. Narrow AI already vastly outperforms humans in very narrow domains such as chess and Go, and the best human masters struggle to explain what they are doing. AIs trained to play computer games often exploit glitches that humans didn't know existed, to the point that they do something that satisfies the program but wasn't at all what was intended. They find solutions that humans would never have thought of, and there is no reason to think that a general AI with human-level flexibility won't do the same in the real world. This may be a good thing or a very, very, very bad thing.
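
For a rough picture of what "let them loose on reinforcement and get them to figure it out" means, here is a toy Q-learning sketch. The environment, reward, and numbers are made up purely for illustration (not from any real system): the agent is never told which moves to make, only given a reward signal, and the policy it ends up with was never written by anyone.

```python
# Toy tabular Q-learning on a tiny 1-D world: positions 0..5, reward at 5.
# Nobody programs the behaviour; it emerges from trial, error, and reward.
import random

N_STATES = 6          # positions 0..5; reaching position 5 ends an episode
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

def greedy(s):
    # pick the action with the highest learned value (random tie-break)
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit what it has learned, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the value of (s, a) toward reward + best future value
        q[(s, a)] += alpha * (r + gamma * max(q[(s_next, act)] for act in ACTIONS) - q[(s, a)])
        s = s_next

# The learned policy (move right everywhere) was discovered, not programmed.
print({s: greedy(s) for s in range(N_STATES - 1)})
```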

1

Vibin_Eternally t1_j5rv7u6 wrote

I respect your "imagination", as that plays a part in humans creating things of genius. I see the sentiment of computers exploiting what humans miss in the programming, as they are programmed to do, which is working along with humans to do great things, like improving your chess game to a higher level after a defeat, because the AI calculated all the different moves to attempt or not attempt, along with their success/fail rates, depending on what level you input or set the game at. There is even a program named "Codex" that can write its own computer code in 12 different programming languages. Whatever one inputs or asks for, it can create the code for. However, it can't think for itself or decide to code without a pre-programmed purpose/input. If by human flexibility you are referring to reason and true thinking, then we're not talking about AI. Art imitates life, life oftentimes imitates art, but AI imitates humans, and humans then have the option to imitate AI.
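
Roughly, the "calculated all the different moves, depending on what level you set" idea looks like the sketch below. It uses a toy take-away game instead of chess to keep it short, and it treats the "level" as nothing more than search depth; real engines set difficulty in several ways, so this is only an illustration.

```python
# Depth-limited minimax for a toy game: remove 1-3 stones from a pile,
# whoever takes the last stone wins. "Level" = how many moves ahead it looks.
def minimax(stones, depth, maximizing):
    if stones == 0:
        # the previous player took the last stone, so the side to move has lost
        return -1 if maximizing else +1
    if depth == 0:
        return 0  # too deep for this level: treat the position as unknown/neutral
    moves = [m for m in (1, 2, 3) if m <= stones]
    scores = [minimax(stones - m, depth - 1, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones, level):
    # try every legal move and keep the one whose resulting position scores best
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: minimax(stones - m, level, False))

print(best_move(13, level=2))   # shallow "easy" level: can't see far enough to tell moves apart
print(best_move(13, level=12))  # deep search solves the game: take 1, leaving a multiple of 4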

1

Vibin_Eternally t1_j5rw37k wrote

As far as neural networks, which are a supreme display of genius, even they are mimicking. They mimic the nodes or neural pathways of brains. We don't quite understand how they work because we're still trying to lay out exactly how a human brain truly operates.
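
To show how loose that mimicry is, here is a minimal sketch (all weights invented for illustration): an artificial "neuron" is just a weighted sum of inputs pushed through a squashing function, a cartoon of a cell firing when its inputs are strong enough, not a copy of one.

```python
# A tiny hand-rolled neural network: weighted sums plus a nonlinearity.
import math

def neuron(inputs, weights, bias):
    # weighted sum of incoming signals, then a sigmoid "activation" between 0 and 1
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # a "layer" is just many such neurons reading the same inputs
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two-layer toy network: numbers in, numbers out. Everything interesting lives
# in the weights, which in a real system are learned from data, not written by hand.
hidden = layer([0.5, -1.2, 3.0], [[0.1, 0.4, -0.2], [-0.3, 0.8, 0.05]], [0.0, 0.1])
output = layer(hidden, [[1.5, -2.0]], [0.2])
print(output)
```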

1

odigon t1_j5s65g9 wrote

I really have no idea what you are saying in your reply. Your original statement was that "A.I. is only as intelligent, beneficial, disciplined, or dangerous as its creators". That's like saying a racing car isn't any faster than its creators.

We have in the past found ways to make machines that can go fast, can fly, can go underwater, and can see incredible distances, far beyond what their creators can do, which is the entire point of them. Now we are attempting to create a machine that can reason, and if we are successful it will reason with far more ability than we can, in the same way that the best chess grandmasters are no match for a chess computer set at a high enough level. Will it have the same goals as us? Why should it? If it doesn't, and it becomes a danger, can we stop it? How? It will be able to outwit us at every turn.

1