Mefaso t1_j9s66qq wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>I remember as recently as 2015 at ICLR/ICML/NIPS you’d get side-eye for even bringing up AGI.
You still do, and IMO rightfully so.
starfries t1_j9ufa2h wrote
Unfortunately there are just too many crackpots in that space. It's like bringing up a Grand Unified Theory (GUT) in physics: a worthwhile goal, but you're sharing the bus with too many crazies.