[D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes?
Submitted by SchmidhuberDidIt (t3_11ada91) on February 24, 2023 at 12:16 AM in MachineLearning · 176 comments · 123 points
[deleted] (t1_j9wdqrq) wrote on February 25, 2023 at 1:03 AM, in reply to linearmodality:
[deleted] · 3 points

[deleted] (t1_ja4f1ds) wrote on February 26, 2023 at 7:37 PM:
[deleted] · 1 point

Smallpaul (t1_ja6pbdt) wrote on February 27, 2023 at 6:19 AM:
Replying to yourself doesn't get anyone's attention. · 3 points