grotundeek_apocolyps t1_jedbi44 wrote
Reply to comment by ReasonableObjection in [D] AI Explainability and Alignment through Natural Language Internal Interfaces by jackfaker
There is indeed a community of people who think in the way that you have described. They don't know what they're talking about and their research agenda is fundamentally unsound. Nothing that you just wrote is based on an accurate understanding of the science or mathematics of real machine learning.
I'd like to give you a better response than that, but I'm honestly not sure how to. What do you say to someone who is interested in math and very enthusiastic about the problem of counting the angels on the head of a pin? I say that not to be insulting, but to illustrate the magnitude of the divide between how you perceive the field of machine learning and the reality of it.
grotundeek_apocolyps t1_jed6u66 wrote
Reply to comment by ReasonableObjection in [D] AI Explainability and Alignment through Natural Language Internal Interfaces by jackfaker
There are real concerns about the impacts of AI on the world, and they all pertain to the ways in which humans choose to use it. But that is not the subject matter of "AI alignment" or "AI safety"; the usual term for it is "AI ethics".
"AI alignment" /"safety" is about trying to prevent AIs from autonomously deciding to harm humans despite having been designed to not do so. This is a made up concern about a type of machine that doesn't exist yet that is predicated entirely on ideas from science fiction.
grotundeek_apocolyps t1_jecdbtm wrote
Reply to [D] AI Explainability and Alignment through Natural Language Internal Interfaces by jackfaker
"AI alignment" / "AI safety" are not credible fields of study.
grotundeek_apocolyps t1_jefl7kd wrote
Reply to comment by ReasonableObjection in [D] AI Explainability and Alignment through Natural Language Internal Interfaces by jackfaker
The crux of the matter is that there are fundamental limitations to the power of computation. It is physically impossible to create an AI, or any other kind of intelligent agent, that can overpower everything else in the physical world by virtue of sheer smartness.
Depending on where you're coming from, this is not an easy thing to understand; it usually requires a lot of education. The simplest metaphor I've thought of is the speed of light: it seems intuitively plausible that a powerful enough rocket ship should be able to fly faster than the speed of light, but the laws of physics prohibit it.
Similarly, it seems intuitively plausible that a smart enough agent should be able to solve any problem arbitrarily quickly, thereby enabling it to (for example) conquer the world or destroy humanity, but that too is physically impossible.
There are a lot of ways to understand why this is true. I'll give you a few places to start.
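One concrete starting point: computation is a physical process, so it inherits physical constraints. Here's a back-of-envelope sketch of two textbook bounds, Landauer's principle (minimum energy to erase a bit) and Bremermann's limit (maximum bit-operations per second per kilogram of matter). The numbers are standard physics; the snippet is only an illustration of the kind of limit I mean, not a proof about AI:

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

# Landauer's principle: erasing one bit of information at temperature T
# dissipates at least k_B * T * ln(2) joules, no matter how clever the machine.
T = 300.0  # room temperature, kelvin
landauer_j_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_j_per_bit:.2e} J per bit erased")

# Bremermann's limit: a system of mass m can perform at most m * c**2 / h
# elementary bit-operations per second.
mass_kg = 1.0
bremermann_ops = mass_kg * c**2 / h
print(f"Bremermann limit for {mass_kg:.0f} kg: {bremermann_ops:.2e} bit-ops per second")
```

Huge numbers, but finite, and everything downstream of them (energy budgets, communication latency, sensing) is finite too. "Smart enough to solve any problem arbitrarily quickly" runs into these walls the same way "fast enough to outrun light" does.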
The people who have thought about this "for 30+ years" and come to a different conclusion are charlatans. I don't know of a gentler way of putting it. What do you tell someone when they ask you to explain why someone who has been running a cult for 30 years isn't really talking directly to god?
Something to note on the more psychological end of things: a person's ability to understand the world is fundamentally limited by their understanding of their own emotions. The consequence is that you should also pay attention to how you're feeling when you're reading hysterical nonsense about the robot apocalypse, because that affects how likely you are to believe things that aren't true. People often fixate on things with a strong emotional valence, irrespective of their accuracy.