ErinBLAMovich t1_j9snb17 wrote
Reply to comment by [deleted] in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Maybe when an actual expert tells you you're overreacting, you should listen.
Are you seriously arguing that the modern world is somehow corrupted by some magical unified "postmodern philosophy"? We live in the most peaceful time in recorded history. Read "Factfulness" for exact figures. And while you're at it, actually read "Black Swan" instead of throwing that term around, because you clearly need a lesson in measuring probability.
If you think AI will be destructive, outline some plausible and SPECIFIC scenarios for how this could actually happen, instead of vague allusions to philosophy with no proof of causality. Then we could debate the likelihood of each scenario.