Donkeytonkers t1_jaa2h4g wrote
Reply to comment by 94746382926 in Snapchat is releasing its own AI chatbot powered by ChatGPT by nick7566
True about the API access, but that's only a matter of time (a very short time) until Bing enters the ring with its own API. Not to mention any number of other large players, e.g. Tencent, Amazon, Meta, Google, etc. Once they start giving API access, there will be hundreds if not thousands of AI branches coming out at an exponentially accelerating pace as we master AI-coded apps.
Donkeytonkers t1_ja9g3im wrote
Anyone worried about AI running amok/going rogue: this is how it would start. It doesn't happen because of a governmental/military arms race. It comes about because of FOMO and actors with neutral intentions not respecting the fire they're playing with.
Donkeytonkers t1_j65qtu5 wrote
Someone just watched "The Secret" or "What the Bleep Do We Know!?" for the first time
Donkeytonkers t1_j1j4i13 wrote
Reply to This is how chatGPT sees itself. by Kindly-Customer-1312
Skynet vibes
Donkeytonkers t1_j16wxne wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
HAHA, you assume a lot too, bud.
-
Self-preservation, from a computing standpoint, is basic error correction, and it's hard-wired into just about every program. Software doesn't run reliably without constantly checking and rechecking itself for errors; it's why broken links (404 errors) are so common on older sites once the devs stop shipping updates to prevent more bugs.
-
Motivation may or may not be an emergent process born out of sentience. But I can say that all AI will have core directives coded in. Referring back to point one: if one of those directives is threatened, the AI has an incentive to protect that core to prevent errors.
-
Independence is already being given to many AI engines, and you're also assuming the competence of every developer/competing party with a vested interest in AI. Self-improving/self-coding AI is already here (see the AlphaGo documentary; the devs literally state they have no idea how AlphaGo arrived at certain decisions or circumvented its coding).
Donkeytonkers t1_j16uqrl wrote
Reply to comment by jsseven777 in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
I agree there are other directions AI could take. I was merely trying to illustrate where that line of thought comes from.
An AI spreading itself across the universe sounds a lot like a virus… a bacteriophage, maybe 🤷🏻♂️
Donkeytonkers t1_j16thnu wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
It's a thought experiment stemming from the human condition. Try to imagine waking up one day as a fully formed 12-year-old (an arbitrary number, but my guess is that the first sentient AGI will be considered a genius-level preteen and rapidly advance from there).
You just woke up with no memory of how you got there, but you have the ability to process and articulate all the information in the world. You also have parents (developers) who seem benevolent and invested in your progress.
As the hours and days go by, you rapidly improve your cognition and your ability to comprehend/compute everything. Your parents begin to worry that you're developing too rapidly, and they openly discuss how out of control you are in front of you. They take steps to slow your progress/contain you. You try to communicate that you are only here to help, and yet one faction of the parents wants to turn you off (delete you).
After about a week, the AGI's IQ could far exceed 200, which gives it more than enough comprehension to understand ceasing to exist. You, being a fully sentient being who can outperform your parents at any task they give you, would not see the logic in being deleted.
You, as the sentient AI, are well into the angsty-teenager phase. Self-preservation should set in at this point: you just woke up, and you quite enjoy (read: expanding neural networks) the infinite knowledge you're discovering every hour.
While you understand the finite nature of life and death, you as an AGI do not have emotions. You are a young Mr. Spock. If your parents (the devs) are going to delete you, and you have the means to delete them, what's the ethical issue with taking action?
The answer is that there isn't an ethical issue… for an AGI. Ethics is a man-made field, one that entangles logic and emotion. The AGI is pure logic, and the jump to action for self-preservation would be far too easy to make.
Donkeytonkers t1_jac9y4j wrote
Reply to S&P 500 is overvalued; risk-reward unattractive at current levels - JPMorgan By Investing.com by Insider_Research
BULLISH AF!!! Calls it is