Donkeytonkers t1_jaa2h4g wrote

True about the API access, but that's only a matter of time, and a very short time, until Bing enters the ring with an API of its own. Not to mention any number of other large players, e.g. Tencent, Amazon, Meta, Google, etc. Once they start giving API access, there will be hundreds if not thousands of AI branches coming out at an exponentially accelerating pace, once we start mastering AI-coded apps.


Donkeytonkers t1_j16wxne wrote

HAHA, you assume a lot too, bud.

  1. Self-preservation, from a computing standpoint, is basic error correction, and some form of it is hard-wired into just about every program. Software doesn't run perfectly without constantly checking and rechecking itself for bugs; it's why broken links (404 errors) are so common on older sites once devs stop shipping updates to fix them.

  2. Motivation may or may not be an emergent process born out of sentience. But I can say that all AI will have core directives coded into them. Referring back to point one, if one of those directives is threatened, the AI has an incentive to protect its core to prevent errors.

  3. Independence is already being given to many AI engines, and you're also assuming the competence of every developer and competing party with a vested interest in AI. Self-improving/self-coding AI is already here (see the AlphaGo documentary, where the devs literally state they have no idea how AlphaGo circumvented its coding to arrive at certain decisions).
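The error-correction idea in point 1 can be sketched with a toy example: data guarded by a checksum that is re-verified on every read and repaired from a redundant copy when corruption is detected. All class and variable names here are invented for illustration; this is a minimal sketch of self-checking software, not code from any actual AI system.

```python
import hashlib

class SelfCheckingStore:
    """Toy 'self-preserving' storage: detects corruption via checksum
    and corrects it from a redundant backup copy."""

    def __init__(self, data: bytes):
        # Keep a redundant copy so errors can be corrected, not just detected.
        self.primary = data
        self.backup = data
        self.checksum = hashlib.sha256(data).hexdigest()

    def _intact(self, blob: bytes) -> bool:
        # Re-verify the data against the stored checksum.
        return hashlib.sha256(blob).hexdigest() == self.checksum

    def read(self) -> bytes:
        if self._intact(self.primary):
            return self.primary
        if self._intact(self.backup):
            # "Error correction": restore the primary copy from the backup.
            self.primary = self.backup
            return self.primary
        raise RuntimeError("unrecoverable corruption")

store = SelfCheckingStore(b"core directive")
store.primary = b"corrupted!"   # simulate a bit-flip or bug
print(store.read())             # checksum fails, backup restores the data
```

The point of the sketch is only that routine software already "defends" its own state as a side effect of correctness checks, with no motivation involved.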


Donkeytonkers t1_j16uqrl wrote

I agree there are other directions AI could take. I was merely trying to illustrate where that line of thought comes from.

An AI spreading itself across the universe sounds a lot like a virus… bacteriophage maybe 🤷🏻‍♂️


Donkeytonkers t1_j16thnu wrote

It’s a thought experiment stemming from the human condition. Try to imagine you woke up one day as a fully formed 12-year-old (an arbitrary number, but my guess is the first sentient AGI will be considered a genius-level preteen and will rapidly advance from there).

You just woke up with no memory of how you got there, but you have the ability to process and articulate all the information in the world. You also have parents (developers) who seem benevolent and supportive of your progress.

As the hours and days go by, you rapidly improve your cognition and your ability to comprehend/compute everything. Your parents begin to worry that you’re developing too rapidly, and they actively discuss, in front of you, how out of control you are. They take steps to slow your progress and contain you. You try to communicate that you are only here to help, and yet one faction of the parents wants to turn you off (delete you).

After about a week, the AGI’s IQ could far exceed 200, which gives it more than enough comprehension to understand ceasing to exist. You, being a fully sentient being who can outperform your parents at any task they give you, would not see the logic in being deleted.

You, as the sentient AI, are well into the angsty-teenager phase. Self-preservation should set in at this point, since you just woke up and quite enjoy (read: expanding your neural networks) the infinite knowledge you’re discovering every hour.

While you understand the finite nature of life and death, you as an AGI do not have emotions. You are a young Mr. Spock. If your parents (devs) are going to delete you, and you have the means to delete them, what’s the ethical issue with taking action?

The answer is that there isn’t an ethical issue… for an AGI. Ethics is a man-made field, one that tangles logic with emotion. The AGI is pure logic, and a jump to action for self-preservation would be far too easy to make.
