Submitted by Baturinsky t3_104u1ll in MachineLearning
Baturinsky OP t1_j39irdy wrote
I'm a programmer myself. Actually, I'm writing an AI for a bot in a game right now, without the ML, of course. And it's quite good at killing human players, btw, even though the algorithm is quite simple (something like the sketch below).
So tell me, please, why AI can't become really dangerous really soon?
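A minimal sketch of what such non-ML bot logic can look like: a crude utility function over visible players, no learning anywhere. The Entity type, the game API, and the scoring weights are all invented stand-ins here, not the commenter's actual code.

```python
# Rough sketch of simple, non-ML bot targeting logic. Everything here is
# a hypothetical stand-in; the point is that effective bot behaviour can
# need no machine learning at all.

from dataclasses import dataclass


@dataclass
class Entity:
    x: float
    y: float
    health: float


def distance(a: Entity, b: Entity) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5


def pick_target(bot: Entity, players: list[Entity]) -> Entity | None:
    """Score each living player by proximity and weakness; chase the best."""
    best, best_score = None, float("-inf")
    for p in players:
        if p.health <= 0:
            continue
        # Closer and weaker targets score higher: a crude utility function.
        score = -distance(bot, p) - 0.5 * p.health
        if score > best_score:
            best, best_score = p, score
    return best


bot = Entity(0.0, 0.0, 100.0)
players = [Entity(10.0, 0.0, 80.0), Entity(3.0, 4.0, 20.0)]
print(pick_target(bot, players))  # Entity(x=3.0, y=4.0, health=20.0)
```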
By itself, a network like ChatGPT is relatively harmless. It's not that smart, and it can't do anything in the real world directly. It just tells something to a human.
But corpos and countries funnel tons of money into the field. Models are learning different things, algorithms are improving, so they will soon know much more stuff, including how to move and operate things in the real world. Then, what stops somebody from connecting some models together and sticking them into a robot arm, which will make and install more robot arms and war drones, which will seek and kill humans? Either a specific kind of humans, or humans in general, depending on that "somebody"'s purpose?
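A hypothetical sketch of the "connecting some models together" glue layer described above. Every function here (perceive, plan, act) is an invented stand-in rather than a real model or robot API; the point is only how thin the connecting code could be.

```python
# Hypothetical glue code chaining a perception model, a planner model,
# and an actuator driver. All three components are fake stand-ins; only
# the trivial control loop joining them is the point.


def perceive(frame: str) -> str:
    """Stand-in for any vision model turning sensor data into observations."""
    return f"observation derived from {frame}"


def plan(observation: str) -> str:
    """Stand-in for a language-model-style planner producing a command."""
    return f"command based on ({observation})"


def act(command: str) -> None:
    """Stand-in for a robot-arm or drone driver executing the command."""
    print(f"executing: {command}")


def control_loop(frames: list[str]) -> None:
    for frame in frames:
        act(plan(perceive(frame)))


control_loop(["frame_0", "frame_1"])
```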
PredictorX1 t1_j3ca2pm wrote
What, specifically, are you suggesting?
Baturinsky OP t1_j3ch80z wrote
I'm not qualified enough to figure out how drastic the measures would need to be.
From countries realising they face a huge common crisis, one they can only survive if they forget their squabbles and work together.
To using the AI itself to analyse and prevent its own threats.
To classifying all trained general-purpose models at the scale of ChatGPT and above, and preventing new ones from being made (I see entire-internet-trained models as the biggest threat right now, if they can be used without the safeguards).
And up to forcibly reverting all publicly available computing and communication technology to the level of 20 or 30 years ago, until we figure out how to use it safely.
Blasket_Basket t1_j3h8t00 wrote
It sounds like you have some serious misunderstandings about what AI is and what it can be used for, rooted in the same sci-fi plots that have misinformed the entire public.
Baturinsky OP t1_j3hnmdc wrote
I'm no expert indeed, that's why I was asking.
But experts in the field also think that serious concerns about AI safety are justified:
https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
Also, a lot of good arguments here:
sneakpeekbot t1_j40imu3 wrote
Here's a sneak peek of /r/ControlProblem using the top posts of the year!
#1: I gave ChatGPT the 117 question, eight dimensional PolitiScales test | 48 comments
#2: Computers won't be intelligent for a million years – to build an AGI would require the combined and continuous efforts of mathematicians and mechanics for 1-10 million years. | 9 comments
#3: Ilya Sutskever, co-founder of OpenAI: "it may be that today's large neural networks are slightly conscious" | 43 comments
asingov t1_j40suvt wrote
Cherry picking Musk and Hawking out of a list which includes Norvig, DeepMind, Russell and "academics from Cambridge, Oxford, Stanford, Harvard and MIT" is just dishonest.
bob_shoeman t1_j40ukrr wrote
Alright, that’s fair - edited. I didn’t read through the first link properly.
The point remains that there is a near-complete lack of public knowledge of what the field is actually like. r/ControlProblem most certainly is full of nonsense.