Smallpaul t1_ja6orxv wrote
Reply to comment by VirtualHat in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> occasionally beat a much stronger player
We might occasionally win a battle against Skynet? I don't see how that is comforting at all.
> The world we live in is one of chance and imperfect information, which limits any agent's control over the outcomes.
I might win a single game against a Poker World Champion, but if we play every day for a week, the chance of me coming out ahead overall is vanishingly small. I still don't see this as very comforting.
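The week-of-games point can be made concrete with a quick binomial calculation. A sketch, assuming a (purely illustrative) 5% chance of winning any single game against the champion:

```python
from math import comb

def p_majority(p_game: float, n_games: int) -> float:
    """Probability of winning a strict majority of n_games,
    each won independently with probability p_game."""
    need = n_games // 2 + 1  # smallest winning majority
    return sum(comb(n_games, k) * p_game**k * (1 - p_game)**(n_games - k)
               for k in range(need, n_games + 1))

# 5% per game is a made-up illustrative figure.
print(p_majority(0.05, 7))  # roughly 0.0002
```

Even with a nontrivial chance of stealing individual games, the odds of coming out ahead over the full week are tiny, and they shrink further the longer the match runs.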