
FirstOrderCat t1_j9m8bhd wrote

> Not that it wouldn't have happened without him but might've taken many more years to ramp up the same amount.

What happened, exactly? What are the material results of his research?

I think Asimov, with his rules, produced an earlier and much stronger impact.

> I'm now a professional in the field of AGI safety

Lol, you adding "AGI" sets my bs detector beeping extremely loudly.

Which AGI exactly are you testing for safety?

2

sticky_symbols t1_j9m8yn3 wrote

Asimov's rules don't work, and many of the stories were actually about that. But they also don't involve civilization-ending mistakes. The movie I, Robot actually did a great job updating that premise, I think.

One counterintuitive thing is that people in the field of AI are way harder to convince than civilians. They have a vested interest in research moving ahead full speed.

As for your bs detector, I don't know what to say. And I'm not linking this account to my real identity. You can believe me or not.

If you're skeptical that such a field exists, you can look at the Alignment Forum as the principal place where we publish.

1

FirstOrderCat t1_j9ma8lr wrote

> Asimov's rules don't work

You've jumped to another topic. The initial discussion was that Asimov's rules brought much more awareness, and you can't point to similar material results from Yudkowsky.

1

sticky_symbols t1_j9mbzia wrote

Sorry; my implication was that Asimov introduced the topic but wasn't particularly compelling. Yudkowsky created the first institute and garnered the first funding. But of course credit should be broadly shared.

1