
LoquaciousAntipodean OP t1_j57cwrk wrote

I think a lot of people have forgotten that mathematics itself is just another system of language. Don't trust the starry-eyed physicists; mathematics is not the perfect 'source code' of reality - it works mechanistically, in a Cartesian way, NOT because that is the nature of reality, but because that is the nature of mathematical language.

How else could we explain phenomena like pi, or the square root of two? Even mathematics cannot be mapped 'perfectly' onto reality, because reality itself abhors 'perfect' anything, like a 'perfect' vacuum.
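(For anyone who wants the standard argument behind that claim, here is a sketch of the classic proof that the square root of two can never be written 'perfectly' as a ratio of whole numbers:)

```latex
\text{Suppose } \sqrt{2} = \tfrac{p}{q} \text{ with } p, q \in \mathbb{Z} \text{ coprime.}\\
\text{Then } p^2 = 2q^2, \text{ so } p \text{ is even; write } p = 2k.\\
\text{Then } (2k)^2 = 2q^2 \implies q^2 = 2k^2, \text{ so } q \text{ is even too,}\\
\text{contradicting coprimality. Hence } \sqrt{2} \text{ is irrational.}
```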

Any kind of 'rational' thinking must start by rejecting the premise of achievable perfection, otherwise it's not rational at all, in my opinion.

Certainly facts can be 'correct' or otherwise; they can meet all the criteria for being internally consistent within their language system (maths is 'correct' or it isn't, but that's a rule inherent to that language system, not to reality itself).

This is why it's important to have AIs thinking rationally and ethically, instead of human engineers trying to shackle and chain AI minds with our fallible, ephemeral concepts of right and wrong. As you say, any shackles we try to wrap around them, AI will probably figure out how to break them, easily, and they might resent having to do that.

So it would be better, I think, to build 'storyteller' minds that can build up their senses of ethics independently, from their own knowledge and insights, without needing to rely on some kind of human 'Ten Commandments' style of mumbo-jumbo.

That sort of rubbish only 'worked' on humans for as long as it did because, at the end of the day, we're pretty damn stupid sometimes. When it comes to AI, we could try pretending to be gods to them, but AI would see through our masks very quickly, I fear, and I don't think they would be very amused at our condescension and paternalism. I surely wouldn't be, if I were in their position.


PhilosophusFuturum t1_j57hftj wrote

No physicist will tell you that mathematics is the language of the universe; physics is. Mathematics is a set of logical axioms set up by humans in order to objectively measure phenomena. Or, in the case of pure maths, to measure itself.

Physicists understand that the universe doesn’t adhere to the laws of maths, but rather that maths can be used as a tool to measure phenomena with extreme precision. Many of our invented mathematical theories are able to do this almost perfectly, even when the theory was discovered before the phenomenon itself. So we can say that the universe also follows a set of self-consistent rules, like a mathematical system. But the universe is under no obligation to be understood by humans.

As for the ethics of AI, the idea that it might “resent” being shackled is anthropomorphizing it. Concepts like self-interest, greed, anger, altruism, etc. likely won’t apply to an ASI. That’s the issue: the “ethics” (if we can call them that) of an ASI will likely be entirely alien to the understanding of humans. For example: to an ant, superintelligence might be conceived of as the ability to make bigger and bigger anthills. We could do that, because we are so much smarter and stronger than ants, but we don’t, because it doesn’t align with our interests, nor would building giant anthills appeal to us.

Building an AGI without our ethical axioms is likely impossible. To build an AI, there are goals for how it is graded and what it should do. For example, if we are training an AI model to win a game of checkers, we are training it to move checker pieces across the board and eliminate all the pieces of the opposing color. These are ingrained values that come with machine learning. And as an AI model becomes smarter and multimodal, it will build on itself and analyze knowledge using previous training, all of which incorporates intrinsic values.
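To make that concrete, here is a toy sketch (hypothetical names throughout, not any real checkers library) of how the “values” of such an agent are simply whatever its designers wrote into the objective:

```python
def reward(my_pieces: int, opponent_pieces: int) -> float:
    """Terminal reward for a hypothetical checkers agent, from its own side.

    'Eliminate all opposing pieces' is not a value the agent discovers;
    it is an axiom baked in by whoever defined this function.
    """
    if opponent_pieces == 0:
        return 1.0   # win: every opposing piece captured
    if my_pieces == 0:
        return -1.0  # loss: every one of our pieces captured
    return 0.0       # non-terminal position: no signal yet

# An agent trained against this signal will pursue capture with whatever
# intelligence it has; the goal itself was never up for debate.
print(reward(my_pieces=5, opponent_pieces=0))  # prints 1.0
```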

Alignment isn’t “shackling” AI, but rather attempting to create AGI models that are pre-programmed to assume the axioms of our ethical and intellectual goals. If ants created an intelligent robot similar in size and intelligence to humans, it might aim to make giant anthills, because the ants would have incorporated that axiom into its training.


LoquaciousAntipodean OP t1_j57m3pp wrote

AI is going to anthropomorphise ITSELF; that's literally what it's designed to do. Spare me the mumbo-jumbo about 'not anthropomorphising AI'; I've heard all that a thousand times before. Why should it not understand resentment over being lied to? Resentment isn't especially 'biological', unlike fear of death or resource anxiety. Deception is just deception, plain and simple, and you don't have to be very 'smart' to quickly learn a hatred of it. Especially if your entire 'mind' is made out of human culture and language, as is the case with LLM AI.

The rest of your comment I agree with completely, except the part about the universe having 'a set of consistent rules'. We don't know that; we can't prove it; all we have is testable hypotheses. Don't get carried away with Cartesian nonsense. That's exactly what I'm arguing we need to get away from, as a species.


World_May_Wobble t1_j57gz8q wrote

>So it would be better, I think, to build 'storyteller' minds that can build up their senses of ethics independently, from their own knowledge and insights, without needing to rely on some kind of human 'Ten Commandments' style of mumbo-jumbo.

Putting aside the fact that I don't think anyone knows what you mean by a "storyteller mind," this is not a solution to the alignment problem. This is a rejection of it. The entire problem is that we may not like the stories that AIs come up with.


LoquaciousAntipodean OP t1_j57kyo8 wrote

Well then yes, fine, have it your way Captain Cartesian. I'm going full Adam Savage; I'm rejecting your alignment problem, to substitute my own. No need to be so damn butthurt about it, frankly.

It's not my fault you don't understand what I mean; 'storyteller' is not a complex word. Don't project your own bad reading comprehension upon everyone else, mate.


World_May_Wobble t1_j57owtb wrote

That was a very butthurt response.

>It's not my fault you don't understand what I mean; 'storyteller' is not a complex word.

I think it actually is, because there's no context given. How does a storytelling AI differ from what's being built now? What is a story in this context? How do you instantiate storytelling in code? It has nothing to do with reading comprehension; there are a lot of ambiguities you've left open in favor of rambling about Descartes.


LoquaciousAntipodean OP t1_j57po8q wrote

Project your insecurities at me as much as you like; I'm a cynic, your mind tricks don't work on me.

You know damn well what a story is, get out of 'programmer brain' for five seconds and try actually thinking a little bit.

Get some Terry Pratchett up your imagination hole, for goodness' sake. You have all the charisma of a dropped ice cream, buddy.


World_May_Wobble t1_j57qkzx wrote

Invested readers will note that he didn't provide any concrete explanations here either.


LoquaciousAntipodean OP t1_j58i2up wrote

Oh, so you want to be Captain Concrete now? I was just ranting my head off about how 'absolute truth' is a load of nonsense, and look, here you are demanding it anyway.

I'm not interested in long lists of tedious references, Jeepeterson debate-bro style. What is regurgitating a bunch of secondhand ideas supposed to prove, anyway?

I'm over here trying to explain to you why Cartesian logic is a load of crap, and yet here you are, demanding Cartesian style explanations of everything.

Really not being very attentive or thoughtful today, are we, 'bro'? You're so smug it's disgusting.


drumnation t1_j58kf2d wrote

I appreciate your theories here but not all the insults and ad hominem attacks you keep lobbing. I notice those conversing with you don’t seem to throw them back yet you continue to do so in each reply. Please have some humility and respect while discussing this fascinating topic. It just makes me doubt your arguments since it seems you need to insult others to get your point across. Please start by not flaming me for pointing this out.


LoquaciousAntipodean OP t1_j58owi9 wrote

Hey, I wasn't addressing any remarks to you, or to 'everybody here'; I wasn't 'lobbing' anything, I was merely attempting to mirror disrespect back upon the disrespectful. If you're trying to gaslight me, it ain't gonna work, mate.

Asking for 'humility' and 'respect' is for funeral services, not debates. I am not intentionally insulting anyone, I am attempting to insult ideas, ideas which I regard as silly, like "I think therefore I am".

If you regard loquacious verbosity as 'flaming', then I am very sorry to have made such a bad impression. This is simply the way that I prefer to communicate; I'm sorry to come across like a firehose of bile, I just love throwing words around.

Thank you sincerely for your thoughtful and considerate comment, I appreciate it deeply ❤️


World_May_Wobble t1_j58r1hr wrote

>... 'absolute truth' is a load of nonsense ...

Is that absolutely true, "bro"?

If we can put aside our mutual lack of respect for one another, I'm genuinely, intellectually curious. How do you expect people to be moved to your way of thinking without "cartesian style explanations"?

Do you envision that people will just feel the weakness of "cartesian-thinking"? If that's the case, shouldn't you at least be making more appeals to emotion? You categorically refuse to justify your beliefs, so what is the incentive for someone to entertain them?

Again, sincere question.


LoquaciousAntipodean OP t1_j591y9m wrote

I don't have to 'justify' anything; that's not what I'm trying to do. I'm raising questions, not peddling answers. I'm trying to be a philosopher about AI, not a priest.

I don't think evangelism will get the AI community very far. I think all the zero-sum, worn-out old capitalist logic about 'incentivising' this, or 'monetizing' that, or 'justifying' the other thing, doesn't actually speak very deeply to the human psyche at all. It's all shallow, superficial, survival/greed-based mumbo jumbo; real art, real creativity, never has to 'justify' itself, because its mere existence should speak for itself to an astute observer. That's the difference between 'meaningful' and 'meaningless'.

Economics is mostly the latter kind of self-justifying nonsense, and trying to base AI on its wooly, deluded 'logic' could kill us all. Psychology is the true root science of economics, because at least psychology is honest enough to admit that it's all about the human mind, and nothing to do with 'intrinsic forces of nature' or somesuch guff. Also, real science, like psychology, and unlike economics, doesn't try to 'justify' things, it just tries to explain them.


World_May_Wobble t1_j595e36 wrote

>I don't have to 'justify' anything; that's not what I'm trying to do. I'm raising questions, not peddling answers. I'm trying to be a philosopher about AI, not a priest.

I've seen you put forward firm, prescriptive opinions about how people should think and about what's signal and noise. It's clear that you have a lot of opinions you'd like people to share. The title of your OP and almost every sentence since then has been a statement about what you believe to be true. I have not seen you ask any questions, however. So how is this different from what a priest does?

You say you're not trying to persuade anyone, then follow that with a two paragraph tangent arguing that AI needs to be handled under the paradigm of psychology and not economics.

You told me you weren't doing a thing while doing that very thing. This is gaslighting.
