FinancialElephant t1_j9sqtwq wrote
Reply to comment by adventurousprogram4 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I don't know much about his work when it comes to alignment. At first glance it seems like a lot of unrigorous wasted effort, but I haven't really had the time or the desire to look into it.
The overbearing smugness of Inadequate Equilibria was nauseating. It was unreadable, even as bathroom reading. The guy is genuinely impressed with himself for theories he believes he came up with but that have existed for a long time, and that he was too lazy and too dismissive to actually research. I'll admit there were a couple of good snippets in the book (though given the general lack of originality, can we really be sure those snippets were original?).
>When things suck, they usually suck in a way that's a Nash Equilibrium.
There you go, I just saved you a couple hours.
What has EY actually done or built? He seems like one of those guys who wants to be seen as technical or intellectual but hasn't actually built anything, or done anything beyond nebulously, unrigorously, and long-windedly discussing ideas to make himself sound impressive. Kind of like the Yuval Noah Harari of AI.