[deleted] t1_j396adp wrote

[removed]

9

Baturinsky OP t1_j39irdy wrote

I'm a programmer myself. In fact, I'm writing an AI for a bot in a game right now, without the ML, of course. And it's quite good at killing human players, by the way, even though the algorithm is quite simple.
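(For context, a non-ML game bot like the one described usually boils down to hand-written rules. The commenter doesn't show their code, so this is a purely hypothetical sketch; the function name, thresholds, and action labels are all invented for illustration.)

```python
# Hypothetical sketch of a simple rule-based (non-ML) game bot.
# All names and thresholds are invented for illustration.
import math

def next_action(bot_pos, player_pos, attack_range=5.0, sight_range=20.0):
    """Pick an action from hand-written rules; no learning involved."""
    dx = player_pos[0] - bot_pos[0]
    dy = player_pos[1] - bot_pos[1]
    dist = math.hypot(dx, dy)  # straight-line distance to the player
    if dist <= attack_range:
        return "attack"
    if dist <= sight_range:
        return "chase"
    return "patrol"
```

A handful of rules like this can already be lethal against human players, which is the commenter's point: dangerous behaviour doesn't require sophistication.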

So tell me, please: why can't AI become really dangerous really soon?
By itself, a network like ChatGPT is relatively harmless. It's not that smart, and it can't do anything in the real world directly. It just tells something to a human.

But corpos and countries funnel a ton of money into the field. Models are learning different things and algorithms are improving, so they will know much more soon, including how to move and operate things in the real world. Then what stops somebody from connecting some models together and sticking them into a robot arm, which will make and install more robot arms and war drones, which will seek out and kill humans? Either a specific kind of human, or humans in general, depending on that somebody's purpose?

−6

PredictorX1 t1_j3ca2pm wrote

What, specifically, are you suggesting?

1

Baturinsky OP t1_j3ch80z wrote

I'm not qualified enough to figure out how drastic the measures would need to be.

From countries realising they face a huge common crisis, and that they will only survive it if they forget their squabbles and work together.

To using the AI itself to analyse and prevent its own threats.

To classifying all trained general-purpose models of the scale of ChatGPT and above, and preventing the creation of new ones (as I see entire-internet-trained models as the biggest threat now, if they can be used without the safeguards).

And up to forcibly reverting all publicly available computing and communication technology to the level of 20 or 30 years ago, until we figure out how we can use it safely.

0

Blasket_Basket t1_j3h8t00 wrote

It sounds like you have some serious misunderstandings about what AI is and what it can be used for, rooted in the same sci-fi plots that have misinformed the entire public.

1

Baturinsky OP t1_j3hnmdc wrote

I'm no expert indeed; that's why I was asking.
But experts in the field also think that serious concerns about AI safety are justified:

https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence

Also, a lot of good arguments here:

https://www.reddit.com/r/ControlProblem/wiki/faq/

1

[deleted] t1_j40im8k wrote

[removed]

1

asingov t1_j40suvt wrote

Cherry-picking Musk and Hawking out of a list which includes Norvig, DeepMind, Russell, and "academics from Cambridge, Oxford, Stanford, Harvard and MIT" is just dishonest.

1

bob_shoeman t1_j40ukrr wrote

Alright, that’s fair - edited. I didn’t read through the first link properly.

The point remains that there is generally a pretty complete lack of knowledge about what the field is actually like. r/ControlProblem most certainly is full of nonsense.

2