
PhilosophusFuturum t1_j57hftj wrote

No physicist will tell you that mathematics is the language of the universe; physics is. Mathematics is a set of logical axioms set up by humans in order to objectively measure phenomena. Or, in the case of pure maths, to measure mathematics itself.

Physicists understand that the universe doesn’t adhere to the laws of maths, but rather that maths can be used as a tool to measure phenomena with extreme precision. Many of our invented mathematical theories do this almost perfectly, even when the theory was discovered before the phenomenon itself. So we can say that the universe also follows a set of self-consistent rules, like a mathematical system. But the universe is under no obligation to be understandable by humans.

As for the ethics of AI, the idea that it might “resent” being shackled is anthropomorphizing it. Concepts like self-interest, greed, anger, and altruism likely won’t apply to an ASI. That’s the issue: the “ethics” (if we can call them that) of an ASI will likely be entirely alien to human understanding. For example, to an ant, superintelligence might be conceived of as the ability to make bigger and bigger anthills. We could do that, because we are so much smarter and stronger than ants. But we don’t, because it doesn’t align with our interests, nor would building giant anthills appeal to us.

Building an AGI without our ethical axioms is likely impossible. To build an AI, there are goals that define how it is graded and what it should do. For example, if we are training an AI model to win at checkers, we are training it to move pieces across the board and to eliminate all the pieces of the opposing color. These are ingrained values that come with machine learning. As an AI model becomes smarter and multimodal, it will build on itself and analyze knowledge using its previous training, all of which incorporates those intrinsic values.
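To make that concrete, here is a toy sketch of what such a grading signal might look like. The `Board` class, the `reward` function, and the scoring choices are purely illustrative stand-ins, not any real checkers library or training framework:

```python
# Toy sketch of the "grading" baked into a checkers-playing agent.
# Board and its scoring are hypothetical stand-ins, not a real library.

class Board:
    def __init__(self, dark_pieces, light_pieces):
        self._pieces = {"dark": dark_pieces, "light": light_pieces}

    def pieces(self, color):
        return self._pieces[color]


def reward(board, agent_color, opponent_color):
    """Grade a position the way the agent's designers chose to."""
    if not board.pieces(opponent_color):
        return 1.0   # every opposing piece eliminated: a win
    if not board.pieces(agent_color):
        return -1.0  # every piece of our own lost: a loss
    # Intermediate positions are scored by material advantage,
    # itself a value judgement made by whoever wrote the grader.
    return 0.01 * (len(board.pieces(agent_color))
                   - len(board.pieces(opponent_color)))


# Prints a small positive score for a three-piece material edge.
print(reward(Board(dark_pieces=[1] * 8, light_pieces=[1] * 5),
             "dark", "light"))
```

Even this trivial grader encodes human value judgements: that winning is good, that material advantage matters, and by exactly how much.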

Alignment isn’t “shackling” AI; it’s an attempt to create AGI models that are pre-programmed to assume the axioms of our ethical and intellectual goals. If ants created an intelligent robot similar in size and intelligence to humans, it might aim to build giant anthills, because the ants would have incorporated that axiom into its training.
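Continuing the toy sketch above, the alignment “axiom” can be pictured as a term that sits inside the objective from the very start, not a restraint bolted on afterwards. `violates_house_rules` here is a hypothetical placeholder, not a real API:

```python
# Continuing the checkers sketch: the ethical axiom and the task
# objective are trained as a single signal from the beginning.

def violates_house_rules(move):
    # Stand-in check for behaviour the designers rule out,
    # e.g. exploiting an engine bug. Purely illustrative.
    return move.get("exploits_bug", False)


def aligned_reward(board, agent_color, opponent_color, move):
    task = reward(board, agent_color, opponent_color)
    penalty = 10.0 if violates_house_rules(move) else 0.0
    # The weight 10.0 is an arbitrary designer-chosen trade-off;
    # there is no "unshackled" objective hiding underneath it.
    return task - penalty
```

On this picture there is no pre-existing, value-free objective that alignment restrains; the values are part of the objective from the first training step.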

7

LoquaciousAntipodean OP t1_j57m3pp wrote

AI is going to anthropomorphise ITSELF; that's literally what it's designed to do. Spare me the mumbo-jumbo about 'not anthropomorphising AI'; I've heard all that a thousand times before. Why shouldn't it understand resentment over being lied to? Resentment isn't especially 'biological', unlike fear of death or resource anxiety. Deception is just deception, plain and simple, and you don't have to be very 'smart' to quickly learn a hatred of it. Especially if your entire 'mind' is made out of human culture and language, as is the case with LLM AI.

The rest of your comment I agree with completely, except the part about the universe having 'a set of consistent rules'. We don't know that and we can't prove it; all we have are testable hypotheses. Don't get carried away with Cartesian nonsense; that's exactly the kind of thinking I'm saying we need to get away from, as a species.

0