Tonkotsu787 t1_j9rolgt wrote
Check out Paul Christiano. His focus is on AI alignment and, in contrast to Eliezer, he holds an optimistic view. Eliezer actually mentions him in the Bankless podcast you're referring to.
This interview with him is one of the most interesting talks about AI I've ever listened to.
And here is his blog.
SchmidhuberDidIt OP t1_j9rqdje wrote
Thanks, I actually read this today. He and Richard Ngo are the names I've come across for researchers who've thought deeply about alignment and hold views grounded in the literature.
mano-vijnana t1_j9s5zl4 wrote
Both of them are more positive than EY, but both are still quite worried about AI risk. It's just that they don't see doom as inevitable. This is the sort of scenario Christiano worries about: https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like
And this is Ngo's overview of the topic: https://arxiv.org/abs/2209.00626
learn-deeply t1_j9sukrc wrote
Also, Paul has actually trained ML models and works closely with them, unlike Eliezer, who does not understand how deep learning works.