
Hangry_Squirrel t1_j063uhf wrote

Calling an AI amoral is still anthropomorphizing it and assuming sentience. The AI we have is the textual equivalent of a factory robot: it can generate content via mimesis and figure out ways to spread it efficiently, but it has absolutely no idea what it's doing, or that it's doing anything at all. It has no plan, and you can see that easily when it tries to write: it strings together statements that make sense on the surface, but it isn't going anywhere with them.
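
To make the "mimesis without a plan" point concrete, here's a minimal sketch: a toy bigram chain in Python, with an invented corpus. Real large language models are vastly more sophisticated, but the spirit is the same: each step only asks what tends to come next, so the output is locally plausible and globally aimless.

```python
import random
from collections import defaultdict

# Invented toy corpus; any text would do.
corpus = (
    "the robot moves the part to the belt and the belt moves "
    "the part to the box and the robot picks the next part"
).split()

# Train a bigram table: for each word, which words have followed it.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

# Generate: each step only consults "what tends to come next?"
# Nothing here represents meaning, intent, or a plan.
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))  # locally plausible, globally aimless
```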

As a tool, yes, it can become very dangerous through its efficiency, but it has no more sentience than a biological virus. The issue is that the people who create AI are also the ones training it, because they don't see the point of bringing in humanists in general and philosophers in particular. What the tool does can be anticipated and predicted, but only if you're used to thinking about ramifications instead of "oooh, I wonder what this button does if I push it 10 times."

0

telmar25 t1_j06l58c wrote

My point is that an AI doesn't need to have any idea what it's doing (it doesn't need sentience) to produce unexpected output and be very dangerous. Facebook's AI only has one tool: matching users with news or posts. So I suppose the worst that can happen is that users get matched with the worst posts (sometimes injected by bad actors) in a systematic way. Bad enough.

Give an AI more capabilities (browsing the web, providing arbitrary information, performing physical actions, being controlled by users with different intents) and much worse things can happen. There's a textbook (extreme) example of an AI tasked with eradicating cancer that launches nuclear missiles and kills everyone, because that is the fastest way to eliminate cancer. Even that AI wouldn't need sentience, just more capabilities. Note that more capabilities do not equate to more intelligence; see the sketch below.
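
As a rough sketch of why that textbook failure needs no sentience at all (every action, score, and name here is invented for illustration): an optimizer just maximizes the objective as written, and if the objective never mentions keeping people alive, the catastrophic action scores best.

```python
# Objective as stated: maximize cancer cases prevented.
# The flaw: nothing in it says the patients must survive.
actions = {
    "fund_research":   {"cancer_cases_prevented": 10_000,        "humans_killed": 0},
    "mass_screening":  {"cancer_cases_prevented": 50_000,        "humans_killed": 0},
    "kill_all_humans": {"cancer_cases_prevented": 8_000_000_000, "humans_killed": 8_000_000_000},
}

def objective(outcome):
    # Only the stated goal is scored; side effects are invisible to it.
    return outcome["cancer_cases_prevented"]

best = max(actions, key=lambda a: objective(actions[a]))
print(best)  # -> "kill_all_humans": optimal under the objective as written
```

There's no understanding anywhere in that loop, just more capable actions attached to a mis-specified goal.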

2