dansmonrer t1_jeg67bc wrote

Not at all made up in my opinion! There just doesn't seem to be any consensus framework for the moment, and diverse people are scrambling to put relevant concepts together and often disagree on what makes sense. It's particularly hard for AI alignment because it requires you to define what dangers you want to speak of, and thus to have a model of an open environment in which the agent is supposed to operate, which we currently have no notion or example of. This makes the examples that people in AI alignment have brought up very speculative and poorly grounded, which allows for easy criticism. I'm curious though if you have interesting research examples in mind!