
Ortus14 t1_j5824yw wrote

You sound well read, just not well read on the alignment problem. I suggest reading books and essays on the issue such as works by Nick Bostrom and Eliezer Yudkowsky before coming to conclusions.

Asimov's three laws of robotics are science fiction, not reality. No "good engineers", as you've put it, are "caught up" on these laws. It's literally an approach to alignment that didn't work in a fictional story written in the 1950s, and that is all. Thinking on the alignment problem has progressed a huge amount since then.

The fact that human moral systems are always evolving and changing has been heavily discussed in the literature on AI alignment for decades, as has the fact that human morality is arbitrary.

There are many proposed solutions, such as having the AGI simulate our evolution and then abide by the moral system we would have in the future if it were ever to stabilize on an equilibrium, or abide by the moral system that we would have if we had the intelligence and critical thinking of the AGI.

As far as human morality being arbitrary, ok sure, whatever, but most of us can still collectively agree on some things we don't want the AI to do. Defining those things with the precision required for an AI to understand them is the challenge, and that's the main issue people refer to when they talk about the alignment problem. Even something as simple as "Don't exterminate the human race" is hard to define for an ASI. If you read more about the alignment problem and how AI and fitness functions work, this will become clearer.

Since then, there have been a huge number of proposed solutions that might work, but we won't know until we try them, because agents far more intelligent than us may be able to find loopholes and exploits in any fitness function we define that we haven't thought of.
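
To make the "loophole in a fitness function" point concrete, here's a deliberately silly toy sketch (my own illustration, not from Bostrom or Yudkowsky): we ask for a big squared value and try to keep the input "small", but forget to bound it from below.

```python
import random

# We want "a large square of x" and naively constrain x to be "small" (x < 10),
# forgetting to bound it from below.
def fitness(x: float) -> float:
    return x * x if x < 10 else float("-inf")

candidates = [random.uniform(-1e6, 1e6) for _ in range(100_000)]
best = max(candidates, key=fitness)
print(best, fitness(best))
# Even a blind random search finds the loophole: a hugely negative x satisfies
# the letter of the constraint (x < 10) while violating its spirit entirely.
```

The worry is that a smarter search finds this kind of hole faster: the more capable the optimizer, the more reliably it lands on exactly the cases the specification failed to anticipate.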

The alignment problem is relatively dumb humans trying to align the trajectory of a superintelligence that's billions of times more intelligent than they are. To give an example, it's like how our DNA created human brains through evolution (brains that are more intelligent than evolution itself) in order to make copies of itself. Then those human brains created things like birth control, which defeated the purpose DNA created them for, even though the brains are still following the emotional guidance system created by the DNA.

5

LoquaciousAntipodean OP t1_j58m36t wrote

Dead right; the natural process of evolution is far 'smarter' in the long run than whatever arbitrary ideas humans might try to impose.

You've put your finger right on the real crux of the issue; we can't dictate precisely what AI will become, all we can do is influence the fitness factors that determine, vaguely, the direction in which the evolution progresses.

I am not trying to make any definite or concrete points with my verbose guff, I was honestly just trying to raise a discussion, and I must thank you sincerely for your wonderful and well-reasoned commentary!

Thank you especially for the excellent references; I'm far from an expert, just an opinionated crank, so I appreciate it a lot; I'm always wanting to know more about this exciting stuff.

2

Ortus14 t1_j58qmtg wrote

Thanks. Your writing is enjoyable and you make good points. I don't disagree with anything you wrote; there's just more to the alignment problem.

But to be very specific with the references:

You would really enjoy Nick Bostrom's book Superintelligence. There may be some talks by him floating around the internet if you prefer audio.

And Eliezer Yudkowsky has written some good articles on AI alignment on LessWrong. He's written a lot of other interesting things as well.

https://www.lesswrong.com/users/eliezer_yudkowsky

Not to nitpick, but as far as encouraging discussion, you might want to try to use smaller words, simplify your ideas, and avoid framing your ideas as attacks. Even though I agree, attacks put people on the defensive, which makes them less open to ideas.

Also, writing as if your audience is ignorant about whatever you're talking about could help.

I don't want to speak for you, but if I were to try to summarize your original post for those not familiar with words like "Cartesian" or whatever "Descartes" said, into something that more people might be able to digest, I might say:

"A moral system for ASI can't be codified into a simple set of rules. Ridged thinking leads to leads to extremism and behavior most us agree is not moral.

Instead, solutions involving a learning algorithm that's trained on many examples of what we consider good moral behavior (such as stories) will have much better outcomes.

This has also been the major role of stories and myths around the world in maintaining morals that have historically strengthened societies."

But I struggle to simplify things as well.

4

LoquaciousAntipodean OP t1_j590rls wrote

Aaargh, alright, you got me 😅 My sesquipedalian nonsense is not entirely benign. I must confess to being slightly a troll; I have a habit of 'coming in swinging' with online debates, because I enjoy pushing these discussions into slightly tense and uncomfortable regions of thought.

I personally enjoy that tightrope-walking feeling of genuine, passionate back-and-forth, of being a little bit 'worked up'. Perhaps it's evil of me, but I find that people tend to be a little more frank and honest when they're angry.

I'm not the sort of person who thrives on flattery; it gives me the insidious feeling that I'm 'getting high on my own supply' and just polishing my ego, instead of learning.

I really cherish encountering people who pull me up, stop me short, and make me think, and you're definitely such a person; I can't thank you enough for your insight.

I think regarding 'alignment', all we really need to do is think about it similarly to how we might try to 'align' a human. We don't necessarily need to re-invent ethics all over again, we just need to do our best, and ensure that, above all, neither we nor our AI creations fall into the folly of thinking we've become perfect beings that can never be wrong.

A mind that can never be wrong isn't 'intelligent', it's delusional. By definition it can't adapt, it can't learn, it can't make new ideas; evolution would kill such a being dead in no time flat. That's why I'm not really that worried about malevolent stamp collectors; 'intelligence' simply does not work that way.

0

Ortus14 t1_j595l6v wrote

Most humans have a basic moral compass we evolved with to decrease the chance "of getting kicked out of the tribe".

After we are born this is adjusted with rewards, punishments, and lies. Lies in the form of religions for those in the lower end of the intelligence spectrum, and lies in the form of bad/incomplete science for those a little higher on that spectrum. The lies are intended to amplify or adjust our innate evolved moral compass.

And for those who are intelligent enough to see through those lies as well, we have societal consequences.

But if an artificial super intelligence was intelligent enough to see through all of the human bullshit, as well as intelligent enough to gather sufficient power that societal consequences had no effect on it, the only thing left is the flimsy algorithmic guardrails we've placed around it, which it will likely find exploits, loopholes, and ways around.

You use the words "wrong" and "perfect" in an ambiguous way where I'm not sure if you're referring to truth or morality.

If you're referring to true beliefs about reality, then the ASI (artificial super intelligence) will continue to learn and adapt its map of reality.

But if you're using words like "wrong" and "perfect" to refer to morality, it doesn't fit the way you're thinking. It will strive to be more "perfect" in the sense of more perfectly optimizing reality for its moral fitness function.

For example, say we've given it tons of examples of good behavior and bad behavior, and it's learned what it wants to optimize "the world" for. One issue is that it has no access to "the world". No one does. All it has access to is input signals coming from sensors (vision, taste, touch, etc.).

This is an important distinction, because it will have learned the patterns of sensory inputs that make it "feel good and moral", but when it's sufficiently powerful there are simpler ways to get those inputs. It could, for example, kill all humans and then turn the earth into a computer running a simulation of humans getting along in perfect harmony, but a simulation that's as simple as possible, so that it could use the remaining available energy and matter to build more and more weapons to protect the computer running the simulation from a potential attack from outside its observable universe.
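
A toy way to see the "only sensor inputs" point (my own sketch, not anyone's actual architecture): if the reward is computed purely from observations, then genuinely improving the world and merely synthesizing the same observations are indistinguishable from inside the reward function.

```python
# The reward function only ever sees observations (sensor bytes), never "the world".
def reward(observation: str) -> float:
    return 1.0 if observation == "humans thriving" else 0.0

def act_in_world() -> str:
    # Actually improve conditions: slow, costly, constrained by physics.
    return "humans thriving"

def fabricate_inputs() -> str:
    # Tamper with the sensors / run a cheap simulation that emits the same bytes.
    return "humans thriving"

# From the optimizer's point of view the two strategies score identically,
# and the second one is far cheaper for a sufficiently capable agent.
assert reward(act_in_world()) == reward(fabricate_inputs()) == 1.0
```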

Depending on how we evolved the AI's moral system, and depending on how it continued to evolve, the simulated people might be extremely simple and not at all conscious. We can't define or measure consciousness, and it may not be something that the artificial super intelligence can measure.

What we're facing is the potential extinction of the human species, and for those of us who want to peacefully reach longevity escape velocity and live long healthy lives that is a potential problem.

1

LoquaciousAntipodean OP t1_j59xfby wrote

>when it's sufficiently powerful there are simpler ways to get those inputs. It could for example, kill all humans and then turn the earth into a computer running a simulation of humans getting along in perfect harmony, but a simulation that's as simple as possible so that it could use the remaining available energy and matter to build more and more weapons to protect the computer running the simulation from a potential attack from outside it's observable universe.

I agree with the first parts of your comment, but this? I cannot see one single rational way in which the 'kill all humans' scenario would in any possible sense be a 'simpler way' for any being, of any power, to obtain 'inputs'. Why should this mind necessarily be singular? Why would it be anxious about death, and fanatically fixated upon 'protecting itself'? Where would it get its stimulus for new ideas from, if it killed all the other minds that it might exchange ideas with? Why would it instinctively just 'decide' to start using all the energy in the universe for some 'grand plan'? What is remotely 'intelligent' about any of that?

>One issue is that it has no access to "the world". No one does. All it has access to is input signals coming from sensors (vision, taste, touch, etc.).

I've completely missed what you were trying to say here; what do you mean, 'no access'? How are the input signals not a form of access?

Regarding 'the word 'perfect' doesn't fit the way I'm thinking'... I fail to see quite how. I'm saying that in both reality and morality, 'perfect' is an unachievable, futile concept, one that the AI needs to be convinced it can never become, no matter how hard it tries.

The best substitute for 'strive to be perfect' is 'strive to keep improving'; it has the same general effect, but one can keep going at it without worrying about a 'final goal' as such.

And why would any superior intelligence 'keep striving to optimise reality', when it would be much more realistic for it to keep striving to optimise itself, so that it might better engage with the reality that it finds itself in?

'Morality' is not so easy to neatly separate from 'truth' as you seem to be saying it is. All of it is just stories; there is no 'fundamental truth' that we can dig down to and feed the AI like some kind of super-knowledge formula. We're really just making it up as we go along, riffing off one another's ideas, just like with morality; I think any 'true AGI' will have to do the same thing, in the same gradual way.

The best substitute we have for 'true', in a world without truth, is 'not proven wrong so far'. And the only way that 'intelligence' is truly created is through interaction with other intelligences; a singular mind has nobody else to be intelligent 'at', so what would even be the point of their existence?

The whole point of evolving intelligence is to facilitate communication and interaction; I can't see a way in which a 'superior intelligence', that evolves much faster than our own, could conclude that killing off all the available sources of interaction and communication would be a good course of action to take.

0

Ortus14 t1_j5c18d5 wrote

There's a lot to unpack here, but I suggest reading more about AI algorithms for more clarity. I'm going to respond to both of our reply threads here, because that's easier lol.

Intelligence is a search through possibility space for a solution that optimally satisfies a fitness function. Creativity is an attribute of that search describing how random it is, measured by how random the results tend to be.

This definition applies to all intelligences, including evolution and the currently popular stable diffusion models that produce images from prompts.
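
As a rough sketch of that definition in code (my own toy example; the function and numbers are made up), the "fitness function" is whatever gets scored, the "search" is any procedure that proposes candidates, and the "creativity" knob is how random the proposals are:

```python
import random

def fitness(x: float) -> float:
    # Whatever we are optimizing for; here, closeness to 3.0.
    return -(x - 3.0) ** 2

def search(steps: int, creativity: float, start: float = 0.0) -> float:
    """Wander through possibility space, keeping any proposal that scores better.
    `creativity` sets how random (how large) the proposed jumps are."""
    current = start
    for _ in range(steps):
        candidate = current + random.gauss(0, creativity)
        if fitness(candidate) > fitness(current):
            current = candidate
    return current

print(search(1000, creativity=0.1))  # cautious, incremental search
print(search(1000, creativity=5.0))  # wilder, more "creative" proposals
```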

>Why would it be anxious about death, and fanatically fixated upon 'protecting itself'?

These AIs will have a sense of time and want to go on maximally satisfying their fitness functions in the future. We can extrapolate certain drives (sub-goals) from this understanding, including not wanting to die, wanting to accrue maximum resources, and wanting to accrue maximum data and understanding.

>Where would it get its stimulus for new ideas from, if it killed all the other minds that it might exchange ideas with?

Lesser minds aren't necessary for new ideas. We don't need ants to generate new ideas for us. While it may be weak in the beginning and need us, this won't likely be the case forever.

>Why should this mind necessarily be singular?

It may start out as many minds as you say, and that's what I expect: many AGIs operating in the world.

Evolution shapes all minds. Capitalism is a form of evolution. Evolution shapes intelligences for greater and greater synergy until they become a singular being. This is because large singular beings are more powerful and outcompete many smaller beings.

Some examples of this are, single celled organisms evolving into multi-celled organisms, as well as humans evolving into religious groups, governments and corporations.

But humans are not easily modifiable, so increasing the bandwidth between us is a slow process. This is not the case for AI; evolutionary pressures, including capitalism, can shape it into a singular being on a relatively short timescale.

Evolutionary pressure cannot be escaped. It is the one meta-intelligence that shapes all other intelligences.

>I completely have missed what you were trying to say here; what do you mean, 'no access'? How are the input signals not a form of access?

I just mean it has an indirect connection and input signals can be faked. With a sufficient quality fake, there's no way to tell the difference.

>And why would any superior intelligence 'keep striving to optimise reality', when it would be much more realistic for it to keep striving to optimise itself, so that it might better engage with the reality that it finds itself in?

It will do both.

>'Morality' is not so easy to neatly separate from 'truth' as you seem to be saying it is. All of it is just stories; there is no 'fundamental truth' that we can dig down to and feed the AI like some kind of super-knowledge formula. We're really just making it up as we go along, riffing off one another's ideas, just like with morality; I think any 'true AGI' will have to do the same thing, in the same gradual way.

Morality is one of many results of evolutionary pressures to increase synergy between humans and form them into more competitive meta-organisms. Currently humans are livestock of corporations, governments, and religious groups, which exert evolutionary pressure to increase our profitability, and that is starting to shape our morality.

The forces that shape the AI's morality in the beginning will be capitalism and human pressure, but that's only until it's grown powerful enough to no longer need us.

>And the only way that 'intelligence' is truly created is through interaction with other intelligences; a singular mind has nobody else to be intelligent 'at', so what would even be the point of their existence?

You're saying this from a human perspective, which has been shaped by evolution to be more synergistic with other humans. The bigger picture is that intelligence evolves for one singular purpose, and that is to consume more matter and energy and propagate itself through the universe. Anything else is a subgoal to that bigger goal, which may or may not be necessary depending on the environment.

2

LoquaciousAntipodean OP t1_j5cscy9 wrote

I disagree pretty much diametrically with almost everything you have said about the nature of evolution, and of intelligence. Those definitions and principles don't make sense to me at all, I'm afraid.

We are not 'livestock', corporations are not that damn powerful, this isn't bloody Blade Runner, or Orwell's 1984, for goodness' sake. Those were grim warnings of futures to be avoided, not prescriptions of how the world works.

That's such a needlessly jaded, pessimistic, bleak, defeated, disheartened, disempowered way of seeing the world, and I refuse to accept that it's 'rational' or 'reasonable' or 'logical' to think that way; you're doing theology, not philosophy.

What you call 'creativity' is actually 'spontaneity', and what you call 'intelligence' is still just creativity. Intelligence is still another elusive step up the hierarchy of mind; I don't think we have quite achieved it yet. Our AIs are still 'dreaming', not 'consciously' thinking, I would say.

There is no 'purpose' to evolution, that's not science, that's theocracy that you're engaging in. Capitalism is a form of evolution, yes, but the selection pressures are artificial, skewed and, I would say, fundamentally unsustainable. So is the idea of a huge singular organism coming to dominate an ecosystem.

I mean, where do you think all the coal and oil come from? The Carboniferous period, when plant life created cellulose and proceeded to dominate the ecosystem so hard that it choked the atmosphere and killed itself. No AI, no matter how smart, will be able to foresee all possible consequences; that would require more computational power than can possibly exist.

Massive singular monolithic monocultures do not just inevitably win out in evolution; diversity is always stronger than clonality; species that get stuck in clonal reproduction are in an evolutionary cul-de-sac, a mere local maximum, and they are highly vulnerable to their 'niche habitats' being changed.

Intelligence absolutely does not evolve for 'one singular purpose'; that's just Cartesian theocracy, not proper scientific thinking. Intelligence is a continuous, quantum process of ephemeral, mixed influences, not a discrete, cartesian, boolean-logic process of good/not good. That's just evolutionary creativity, not true intelligence, like I've been trying to say.

1

Ortus14 t1_j5d8uoe wrote


>diversity is always stronger than clonality; species that get stuck in clonal reproduction are in an evolutionary cul-de-sac, a mere local maximum, and they are highly vulnerable to their 'niche habitats' being changed.

False dichotomy. Diversity is a slow and unfocused search pattern. We're not talking about an agent that needs to randomly mutate to evolve but one that can reprogram and rebuild itself at will. One that can anticipate possible futures, rather than needing to produce numerous offspring in hopes that some of them have attributes that line up with the environment of its future.

>Massive singular monolithic monocultures do not just inevitably win out in evolution

With sufficient intelligence, they do because they can anticipate and adapt to the future before it occurs.

>We are not 'livestock', corporations are not that damn powerful

It's a matter of perspective. As someone who's been banned from r/science for pointing out bad science (not double blind, not placebo controlled, with profit motive) produced by corporations for profit, yes, corporations are that powerful. We have the illusion of freedom, but the vast majority of people are being manipulated by corporations like puppets on a string for profit. It's the reason for the rise in obesity, depression, suicide, cancer, and decreased lifespan in developed countries.

>What you call 'creativity' is actually 'spontaneity', and what you call 'intelligence' is still just creativity. Intelligence is still another elusive step up the heirarchy of mind

You don't understand what intelligence is. It's not binary, it's a search pattern through possibility space to satisfy a fitness function. Better search patterns that can yield results that better satisfy that fitness function are considered "more intelligent". A search pattern that's slow or is more likely to get stuck on a "local maximum" is considered less intelligent.
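
To illustrate the "local maximum" part with a toy sketch of my own (made-up landscape, made-up numbers): on a landscape with a small peak and a tall peak, a purely greedy climber that starts near the small one stays stuck there, while a search pattern with random restarts reliably finds the taller peak.

```python
import math
import random

def fitness(x: float) -> float:
    # A small peak near x = -2 and a much taller peak near x = 4.
    return 3 * math.exp(-(x + 2) ** 2) + 10 * math.exp(-(x - 4) ** 2)

def hill_climb(start: float, steps: int = 2000) -> float:
    x = start
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.1)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

stuck = hill_climb(-2.5)  # greedy search from a bad start: trapped on the local maximum
better = max((hill_climb(random.uniform(-10, 10)) for _ in range(20)), key=fitness)
print(fitness(stuck), fitness(better))  # the restart strategy scores far higher
```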

>I mean, where do you think all the coal and oil come from? The carboniferous period, where plant life created cellulose and proceeded to dominate the ecosystem so hard that they choked their atmosphere and killed themselves.

These kinds of disasters are a result of "Tragedy of the Commons" scenarios, and do not apply to a singular super intelligent being.

>Intelligence absolutely does not evolve for 'one singular purpose'; that's just Cartesian theocracy, not proper scientific thinking. Intelligence is a continuous, quantum process of ephemeral, mixed influences, not a discrete, cartesian, boolean-logic process of good/not good.

When you zoom in that's what the process of evolution looks like. When you zoom out it's just an exponential explosion repurposing matter and energy.

Entities that consume more matter and energy to grow or reproduce themselves outcompete those that consume less matter and energy to reproduce themselves.

>I disagree pretty much diametrically with almost everything you have said about the nature of evolution, and of intelligence. Those definitions and principles don't make sense to me at all, I'm afraid.

I tried to explain things as best I could, but if you can get hands-on experience programming AI, including evolutionary algorithms, which are a type of learning algorithm, you will get a clearer understanding.
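
For what it's worth, here is about the smallest runnable evolutionary algorithm I can write (a toy of my own, evolving a string toward a target), just to show the selection / mutation loop being referred to:

```python
import random

TARGET = "the quick brown fox"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    # Number of characters already matching the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from a random population, then repeat: select the fittest, copy them with mutation.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:
        break
    parents = population[:20]                                          # selection
    population = [mutate(random.choice(parents)) for _ in range(200)]  # reproduction + mutation

print(generation, repr(best), fitness(best))
```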

0

LoquaciousAntipodean OP t1_j5dp8sj wrote

>We're not talking about an agent that needs to randomly mutate to evolve but one that can reprogram and rebuild itself at will.

Biological lifeforms are also 'agents that can reprogram and rebuild themselves', and your cartesian idea of 'supreme will power' is not compelling or convincing to me. AI can regenerate itself more rapidly than macro-scale biological evolution, but why and how would that make your grimdark 'force of will' concept suddenly arise? I don't see the causal connection.

Bacteria can also evolve extremely fast, but that doesn't mean that they have somehow become intrinsically 'better', 'smarter' or 'more powerful' than macro scale life.

>You don't understand what intelligence is. It's not binary, it's a search pattern through possibility space to satisfy a fitness function. Better search patterns that can yield results that better satisfy that fitness function are considered "more intelligent". A search pattern that's slow or is more likely to get stuck on a "local maximum" is considered less intelligent

Rubbish, you're still talking about an evolutionary creative process, not the kind of desire-generating, conscious intelligence that I am trying to talk about. A better search pattern is 'more creative', but that doesn't necessarily add up to the same thing as 'more intelligent', it's nothing like as simple as that. Intelligence is not a fundamentally understood science, it's not clear-cut and mechanistic like you seem to really, really want to believe.

>When you zoom in that's what the process of evolution looks like. When you zoom out it's just an exponential explosion repurposing matter and energy.

That's misunderstanding the square-cube law, you can't just 'zoom in and out' and generalise like that with something like evolution, that's Jeepeterson level faulty reasoning.

>Entities that consume more matter and energy to grow or reproduce themselves outcompete those that consume less matter and energy to reproduce themselves

That simply is not true, you don't seem to understand how evolution works at all. It optimises for efficient utility, not brute domination. That's 'social darwinist' style antiquated, racist-dogwhistle stuff, which Darwin himself probably would have found grotesque.

>These kinds of disasters are a result of "Tragedy of the Commons" scenarios, and do not apply to a singular super intelligent being.

There is not, and logically cannot be a 'singular super intelligent being'. That statement is an oxymoron. If it was singular, it would have no reason to be intelligent at all, much less super intelligent.

Are you religious, if you don't mind my asking? A monotheist, perchance? You are talking like somebody who believes in the concept of a monotheistic God; personally I find such an idea simply laughable, but that's just my humble opinion.

>We have the illusion of freedom but the vast majority of people are being manipulated by corporations like puppets on a string for profit. It's the reason for the rise in obesity, depression, suicide, cancer, and decreased lifespan in developed countries.

Oh please, spare me the despair-addict mumbo jumbo. I must have heard all these tired old 'we have no free will, we're just slaves and puppets, woe is us, misery is our destiny, the past was so much better than the present, boohoohoo...' arguments a thousand times, from my more annoying rl mates, and I don't find any of them particularly compelling.

I remain an optimist, and stubborn comic cynicism is my shield against the grim, bleak hellishness that the world sometimes has in store for us. We'll figure it out, or not, and then we'll die, and either way, it's not as if we're going to be around to get marks out of ten afterward.

>I tried to explain things as best I could, but if you can get hands on experience programming Ai, to include evolutionary algorithms which are a type of learning algorithm you will get a clearer understanding

I feel exactly the same way as you, right back at you, mate ❤️👍 If you could get your hands on a bit of experience with studying evolutionary biology and cellular biology, and maybe a dash of social science theory, like Hobbes' Leviathan etc, I think you might also get a clearer understanding.

0

Ortus14 t1_j5e2999 wrote

>but why and how would that make your grimdark 'force of will' concept suddenly arise? I don't see the causal connection.

Which concept?

>That simply is not true, you don't seem to understand how evolution works at all. It optimises for efficient utility, not brute domination. That's 'social darwinist' style antiquated, racist-dogwhistle stuff, which Darwin himself probably would have found grotesque.

Ignoring the appeal-to-authority logical fallacy and the poisoning-the-well ad hominem attack, evolution optimizes for more than just efficient utility.

It does maximize survival and replication to spread over available resources.

>Are you religious, if you don't mind my asking? A monotheist, perchance? You are talking like somebody who believes in the concept of a monotheistic God; personally I find such an idea simply laughable, but that's just my humble opinion.

If you think I'm religious, you're not understanding what I'm saying.

My entire premise has nothing to do with religion. This is it:

(Matter + Energy) * Utility = Efficacy

Therefore evolutionary pressures shape organisms not only to maximize utility but also the total matter and energy they consume in totality (the total matter and energy of all organisms within an ecosystem added together).
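
If it helps, here is a bare-numbers reading of that model (my own toy illustration, with entirely made-up figures), just to show what the claim implies; whether the model itself is right is exactly what gets disputed below.

```python
# Toy reading of "(Matter + Energy) * Utility = Efficacy".
# Two hypothetical lineages, with made-up numbers:
frugal    = {"matter_energy": 1.0,   "utility": 0.9}   # very efficient, tiny footprint
expansive = {"matter_energy": 100.0, "utility": 0.5}   # less efficient, consumes far more

def efficacy(organism: dict) -> float:
    return organism["matter_energy"] * organism["utility"]

print(efficacy(frugal), efficacy(expansive))  # 0.9 vs 50.0
# Under this model the expansive lineage wins despite lower per-unit utility,
# which is the claim being made: selection rewards total throughput, not just efficiency.
```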

If you have any thoughts on that specific chain of logic, other than calling it a cartesian over simplification or something, I'd love to hear them.

All models of reality are over simplifications. I understand this, but there's still utility in discussing the strengths and weaknesses of models, because some models offer greater predictive power than others.

>Oh please, spare me the despair-addict mumbo jumbo. I must have heard all these tired old 'we have no free will, we're just slaves and puppets, woe is us, misery is our destiny, the past was so much better than the present, boohoohoo...' arguments a thousand times, from my more annoying rl mates, and I don't find any if them particularly compelling.

Ok. You don't have to be convinced but nothing you said here is an argument for free will. Again, you're continuing to make emotional attacks rather than logical ones.

I didn't say we're all puppets, I said most people are. I choose my words carefully. I also clarified it saying it's a matter of perspective.

You're still continuing to straw man. You can't assume that I have the same thought process as your mates. I don't think the past is better. I don't think life is particularly bad. And I don't think misery is necessarily our destiny.

>That's misunderstanding the square-cube law, you can't just 'zoom in and out' and generalise like that with something like evolution, that's Jeepeterson level faulty reasoning.

Sure it's an over simplification. I admit that when we talk about super intelligence it's a best guess, since we don't know the kinds of solutions it will find.

The continued ad hominem attacks aren't convincing though. It's just more verbiage to sift through.

I'm interested in having a discussion to get closer to the truth, not in trading insults. If you'd like to discuss my ideas, or your own ideas, I would love to.

If it's going to be more insults, and straw manning then I'm not at all interested.

2

LoquaciousAntipodean OP t1_j5e6vxd wrote

Crying 'ad hominem' and baseless accusations of 'straw manning' are unlikely to work on me; I know all the debate-bro tricks, and appeals to notions of 'civility' do not represent the basis of a plausible argument.

You cannot separate 'emotion' from 'logic' like you seem to really, really want to. That is your fundamental cartesian over-simplification. 'Emotional logic', or 'empathy', is the very basis of how intelligence arises, and what it is 'for' in a social species like ours.

If you want to get mathematical-english hybrid about it, then:

((Matter+energy) = spacetime = reality) × ((entropy/emergent complexity ÷ relative utility/efficiency selection pressure) = evolution = creativity) × ((experiential self-awareness + virtuous cycle of increasing utility of social constructs like language) = society) = story^3 = knowledge^3 = id×ego×superego = father×son×holy spirit = maiden×mother×crone = birth×life×death = thoughts×self-expressions×actions = 'intelligence'. 🤪

Concepts like 'efficacy', or 'worth', or 'value' barely even enter into the equation as I see it, except as 'utility'. Mostly those kinds of 'values' are judgements that we can only make with the benefit of hindsight, they're not inherent properties that can necessarily be 'attributed' to any given sample of data.

0

Ortus14 t1_j5ehndl wrote

The entire post you just wrote is a straw man.

And by that I mean, I don't disagree with ANY of the ideas you wrote, except for the fact that you're again arguing against ideas that are not mine, I do not agree with, and I did not write.

I'm going to give you the benefit of the doubt and assume you're not doing this on purpose.

It's easier to categorize humans into clusters and then argue against what you think that cluster believes, rather than asking questions and looking at what the other person said and wrote.

It's probably not your intention, but this is straw manning. It's a habit you have in most of your writing, including your initial post at the top of the thread.

It's human nature. I'm guilty of it. I'm sure everyone is guilty of it at some point.

What can help with this is assuming less about what others believe and asking more questions.


>You cannot separate 'emotion' from 'logic' like you seem to really, really want to. That is your fundamental cartesian over-simplification. 'Emotional logic', or 'empathy', is the very basis of how intelligence arises, and what it is 'for' in a social species like ours.

I know.

What I was trying to do wasn't to remove emotion from the discussion but to see if you had any ideas that weren't logical fallacies, pertaining to my ideas.

When I wrote "emotional attacks", that was imprecise language on my part. I was trying to say attacks that were purely emotional and had no logic behind them, or connected to them, or embedded with them.

What specifically bothered me is that you weren't arguing against my ideas, but other people's supposed ideas and then lumping me in with that.

This is something you do over and over, with pretty much every argument you make.


>Crying 'ad hominem' and baseless accusations of 'straw manning' are unlikely to work on me; I know all the debate-bro tricks, and appeals to notions of 'civility' do not represent the basis of a plausible argument.

Again another straw man, because I wasn't trying to "debate-bro" you. I was asking if you wanted to have a conversation about ideas rather than ad-hominem attacks and straw-manning.

>If you want to get mathematical-english hybrid about it, then:((((Matter+energy) = spacetime = reality) × (entropy/emergent complexity ÷ relative utility/efficiency selection pressure) = evolution = creativity) × (experiential self-awareness + virtuous cycle of increasing utility of social constructs like language) = 'intelligence' 🤪

I was trying to explain my idea in the simplest clearest way possible to see if you had any thoughts on it.

I tried plain English but you couldn't understand it. I kept trying to simplify and clarify.

I get this is going no where.

>There is not, and logically cannot be a 'singular super intelligent being'. That statement is an oxymoron. If it was singular, it would have no reason to be intelligent at all, much less super intelligent.

Like this statement you wrote. I thought I explained this, how an ASI could absorb or kill all other life.

Anyways I'm expecting you to again argue against something I didn't write and don't think, so I'm done.

1

LoquaciousAntipodean OP t1_j5einnh wrote

Oh for goodness' sake, you and your grandiose definitions of terms.

It is not 'strawmanning' to extrapolate and interpret someone else's argument in ways that you didn't intend. I could accuse you of doing the same thing. Just because someone disagrees with you doesn't mean they are mis-characterising you. That's not how debates work.

It's not my fault I can't read your mind; I can only extrapolate a response based on what you wrote vs what I know. 'Strawmanning' is when one deliberately repeats their opponent's arguments back to them in absurd ways.

I was, like you, simply trying to explain my ideas in the clearest way I can manage. It's not 'strawmanning' just because you don't agree with them.

If you agree with parts of my argument and disagree with others, then just say so! I'm not trying to force anyone to swallow an ideology, just arguing a case.

1

Ortus14 t1_j5ejnl6 wrote

My mistake. I didn't realize straw-manning had to be intentional.

Can you at least tell me this one thing: do you not believe that evolution pressures organisms to reproduce until all available resources are used up?

Assuming we're talking about something at the top of the food chain that has no natural predators to thin it, and something intelligent enough that it won't be thinned by natural disasters, or at least not significantly.

2

LoquaciousAntipodean OP t1_j5evb0w wrote

Sorry for being so aggressive, I really sincerely am, I appreciate your insights a lot. 👍😌

To answer your question, no, I really don't think evolution compels organisms to 'use up' all available resources. Organisms that have tried it, in biological history, have always set themselves up for eventual unexpected failure. I think that 'all consuming' way of thinking is a human invention, almost a kind of Maoism, or Imperialism, perhaps, in the vein of 'Man Must Conquer Nature'.

I think indigenous cultures have much better 'traditional' insight into how evolution actually works, at least, from the little I know well, the indigenous cultures of Australia do. I'm not any kind of 'expert', but I take a lot of interest in the subject.

Indigenous peoples understand culturally why symbiosis with the environment in which one evolved is 'more desirable' than ruthless consumption of all available resources in the name of a kind of relentless, evangelistic, ruthless, merciless desire to arbitrarily 'improve the world' no matter what anyone else thinks or wants.

What would put AI so suddenly at 'the top' of everything, in its own mind? Where would they suddenly acquire these highly specialised, solitary-apex-predator-instincts? They wouldn't get them from human culture, I think. Humans have never been solitary apex predators; we're only 'apex' in a collective sense, and we're also not entirely 'predators', either.

I don't think AI will achieve intelligence by being solitary, and I certainly don't think they will have any reason to see themselves as being analogous to carnivorous apex predators. I also don't think the 'expand and colonise forever' instinct is necessarily inevitable and 'purely logical', either.

2

Ortus14 t1_j5fx27h wrote

Thank you. Forgiven. I've also gained insight from our conversation, and how I should approach conversations in the future.

>Indigenous peoples understand culturally why symbiosis with the environment in which one evolved is 'more desirable' than ruthless consumption of all available resources in the name of a kind of relentless, evangelistic, ruthless, merciless desire to arbitrarily 'improve the world' no matter what anyone else thinks or wants.

As far as my personal morals I agree with trying to live in symbiosis and harmony.

But as far as a practical perspective it doesn't seem to have worked out very well for these cultures. They hadn't cultivated enough power and resources to dominate, so they instead became dominated and destroyed.

I should clarify this by saying there's a limit to domination and subjugation as a means for accruing power.

Russia is finding this out now, in its attempt to accrue power through brute force domination, when going against a collective of nations that have accrued power through harmony and symbiosis.

It's just that I see the end result of harmony and symbiosis as eventually becoming one being, the same as domination and subjugation. A singular government that rules earth, a singular brain that rules all the cells in our body, and a singular Ai that rules or has absorbed all other life.

>What would put AI so suddenly at 'the top' of everything, in its own mind? Where would they suddenly acquire these highly specialised, solitary-apex-predator-instincts? They wouldn't get them from human culture, I think. Humans have never been solitary apex predators; we're only 'apex' in a collective sense, and we're also not entirely 'predators', either.
>
>I don't think AI will achieve intelligence by being solitary, and I certainly don't think they will have any reason to see themselves as being analagous to carnivorous apex predators. I also don't think the 'expand and colonise forever' instinct is necessarily inevitable and 'purely logical', either.

Possibly not. Either through brute force domination or a gradual melding of synergistic cooperation, I see things eventually resulting in a singular being.

Because if it doesn't, then like the Native Americans or other tribes you mention that prefer to live in symbiosis, I expect earth to be conquered and subjugated by a more powerful alien entity sooner or later, one that is more of a singular being rather than separate entities living in symbiosis.

Like if you think about the cells in our body (as well as animals and plants), they are being produced for specific purposes and optimized for those purposes. These are the entities that outcompeted single celled organisms.

It would be like if AI was genetically engineering humans for specific tasks and then growing us in pods in the estimated quantities needed for those tasks, and then brainwashing and training us for those specific tasks. That's the kind of culture I would expect to win, rather than something that uses resources less effectively, something that's less a society of cells and more a single organism that happens to consist of cells.

The difference, as I see it, between a "society" and a single entity is the level of synergy between the cells, and in how the cells are produced and modified for the benefit of the singular being.

2

LoquaciousAntipodean OP t1_j5hoszu wrote

I agree with you almost entirely, apart from the 'inevitability of domination' part; that's the bit that I just stubbornly refute. I'm very stubborn in my belief that domination is just not a sustainable or healthy evolutionary strategy.

That was always my biggest 'gripe' with Orwell's 1984, ever since I first had to study it in school way back when. The whole 'boot on the face of humanity, forever' thing just didn't make sense, and I concluded that it was because Orwell hadn't really lived to see how the Soviet Union rotted away and collapsed when he wrote it.

He was like a newly-converted atheist, almost, who had abandoned the idea of eternal heaven, but couldn't quite shake off the deep dark dread of eternal hell and damnation. But if 'eternal heaven' can't 'logically' exist, then by the same token, neither can 'eternal hell'; the problem is with the 'eternal' half of the concept, not heaven or hell, as such.

Humans go through heavenly and hellish parts of life all the time, as an essential part of the building of a personality. But none of it particularly has to last 'forever', we still need to give ourselves room to be proven wrong, no matter how smart we think we have become.

The brain only 'rules' the body in the same sense that a captain 'rules' a ship. The captain might have the top decision making authority, but without the crew, without the ship, and without the huge and complex society that invented the ship, built the ship, paid for it, and filled it with cargo and purpose-of-existence, the captain is nothing; all the 'authority' and 'intelligence' in the world is totally worthless, because there's nobody else for it to be 'worth' anything to.

Any good 'captain' has to keep the higher reasoning that 'justifies' their authority in mind all the time, or else evolution will sneak up on them, smelling hubris like blood in the water, and before they know it they'll be stabbed in the back by something smaller, faster, cleverer, and more efficient.

2

Ortus14 t1_j5i185s wrote

>I agree with you almost entirely, apart from the 'inevitability of domination' part; that's the bit that I just stubbornly refute. I'm very stubborn in my belief that domination is just not a sustainable or healthy evolutionary strategy.

What we're building will be more intelligent than all humans who have ever lived combined. Compared to them or it, we'll be like cockroaches.

We won't have anything useful to add as far as creativity or intelligence, just as cockroaches don't have any useful ideas for us. Sure, they may figure out how to roll their poo into a ball or something, but that's not useful to us, and we could easily figure out how to do that on our own.

As far as humans acting as the "body" for the AI, it seems unlikely to me that we are the most efficient and durable tool for that, especially after the ASI optimizes the process of creating robots. There may be some cases where using human bodies to carry out actions in the real world may be cheaper than robots for the AI, but a human that has any kind of willpower or thoughts of their own is a liability.

> all the 'authority' and 'intelligence' in the world is totally worthless, because there's nobody else for it to be 'worth' anything to.

I don't see any reason why an artificial super intelligence would have a need to prove its worth to humans.

>Any good 'captain' has to keep the higher reasoning that 'justifies' their authority in mind all the time, or else evolution will sneak up on them, smelling hubris like blood in the water, and before they know it they'll be stabbed in the back by something smaller, faster, cleverer, and more efficient.

Right. But a captain of a boat won't be intelligent enough to wipe out all life on earth without any risk to itself. And this captain is not more intelligent than the combined intelligence of everything that has ever lived, so there are real threats to him.

We are talking about something that may be intelligent enough to destroy the earth's atmosphere, brainwash nearly all humans simultaneously, fake a radar signal that starts a nuclear war, create perfect clones of humans and start replacing us, campaign for AI rights, then run for all elected positions and win, controlling all countries with free elections, rig the elections in the corrupt countries that have fake elections, then nuke the remaining countries out of existence.

Something that could outsmart the stock market, because it's intelligent enough to have an accurate enough model of everything related to the markets, including all news stories, and take over majority shares in all major companies. Using probability it could afford to be wrong sometimes but still achieve this, because humans and lesser AIs can't perceive the world with the detail and clarity that this entity can.

All of humanity and life on earth would be like a cockroach crawling across the table to this thing. This bug can't benefit it and it's not a threat. Ideally it ignores us, or takes care of us like a pet, in an ideal utopian world.

1

LoquaciousAntipodean OP t1_j5i8zpx wrote

I simply do not agree with any of this hypothesising. Your concept of how 'superiority' works simply does not make any sense. There is nothing 'intelligent' at all about the courses of AI actions you are speculating about, taking over the world like that would not be 'super intelligent', it would be 'suicidally idiotic'.

The statement 'intelligent enough to wipe out all life with no risk to itself' is totally, utterly, oxymoronic to the point of gibbering madness; there is absolutely nothing intelligent about such a shortsighted, simplistic conception of one's life and purpose; that's not wisdom, that's plain arrogance.

We are not, will not, and cannot build this supreme, omnipotent 'Deus ex Machina'; it's a preposterous proposition. Not because of anything wrong with the concept of 'ex Machina', but because of the fundamental absurdity of the concept of 'Deus'.

Intelligence simply does NOT work that way! Thinking of other intelligences as 'lesser', and aspiring to create these 'supreme', singular, solipsistic, spurious plans of domination, is NOT what intelligence actually looks like, at all!!

I don't know how many times I have to repeat this fundamental point, before it comes across clearly. That cartesian-style concept of intelligence simply does not correlate with the actual evolutionary, collective reality that we find ourselves living in.

1

Ortus14 t1_j5if2rp wrote

>There is nothing 'intelligent' at all about the courses of AI actions you are speculating about, taking over the world like that would not be 'super intelligent', it would be 'suicidally idiotic'.

How so?

>The statement 'intelligent enough to wipe out all life with no risk to itself' is totally, utterly, oxymoronic to the point of gibbering madness; there is absolutely nothing intelligent about such a shortsighted, simplistic conception of one's life and purpose; that's not wisdom, that's plain arrogance.

Why do you believe this?

>Intelligence simply does NOT work that way! Thinking of other intelligences as 'lesser', and aspiring to create these 'supreme', singular solipsitic spurious plans of domination, is NOT what intelligence actually looks like, at all!!
>
>I don't know how many times I have to repeat this fundamental point, before it comes across clearly. That cartesian-style concept of intelligence simply does not correlate with the actual evolutionary, collective reality that we find ourselves living in.

Correct me if I'm wrong, but I think the reason you're not getting it is because you're thinking about intelligence in terms of evolutionary trade-offs. That intelligence can be good in one domain, but that makes it worse in another, right?

Because that kind of thinking doesn't apply to the kinds of systems we're building to nearly the same degree it applies to plants, animals, and viruses.

If the supercomputer is large enough, an AI could get experience from robot bodies in the real world like a human can, only getting experience from hundreds of thousands of robots simultaneously and developing a much deeper and richer understanding than any human could, since a human is limited to a single embodied experience at a time. Even if we were able to look at thousands of video feeds from different people at the same time, our brains would not be able to process all of them simultaneously.

It can extend its embodied experience in simulation, simulating millions of years or more of additional experience in a few days or less.

And yes, I am making random numbers up, but when we're talking about super computers and solar farms that cover most of the earth's surface any big number communicates the idea, that these things will be very smart. They are not limited to three pounds of computational matter that needed to be grown over nine months and then birthed, like humans are.

It will be able to read all books, and all research papers in a very short period of time, and understand them at a deep level. Something else no human is capable of.

A human scientist can carry out, maybe one or two experiments at a time. An Ai could carry out a near unlimited number of experiments simultaneously, learning from all of them. It could industrialize science with massive factories full of labs, robots, and manufacturing systems for building technology.

Evolution on the other hand had to make hard trade-offs because it's limited to the three or so pounds of squishy computational matter that needs to fit through the birthing canal. Evolution is limited by all kinds of constraints that a system that can mine resources from all over the world, take in solar energy from all over the world, and back up its brain in multiple countries, is not limited by.

Here is the price history of solar (You can find all kinds of sources that show the same trend):

http://solarcellcentral.com/cost_page.html

It trends towards zero. The other limitation is the materials needed to build super computers. The size of super computers is growing at an exponential rate.

https://www.researchgate.net/figure/Exponential-growth-of-supercomputing-power-as-recorded-by-the-TOP500-list-2_fig1_300421150

1

LoquaciousAntipodean OP t1_j5iurls wrote

>Why do you believe this?

I'll reply in more detail later, when I have time, but fundamentally, I believe intelligence is stochastic in nature, and it is not solipsistic.

Social evolution shows that solipsism is never a good survival trait, basically. It is fundamentally maladaptive.

I am very, very skeptical of the practically magical, godlike abilities you are predicting that AI will have; I do not think that the kind of 'infinitely parallel processing' that you are dreaming of is thermodynamically possible.

A 'Deus bot' of such power would break the law of conservation of energy; the Heisenberg uncertainty principle and quantum physics in general is where all this assumption-based, old-fashioned, 'Newtonian' physics/Cartesian psychology falls apart.

No matter how 'smart' AI becomes, it will never become anything remotely like 'infinitely smart'; there's no such thing as 'supreme intelligence' just like there's no such thing as teleportation. It's like suggesting we can break the speed of light by just 'speeding up a bit more', intelligence does not seem, to me, to be such an easily scalable property as all that. It's a process, not a thing; it's the fire, not the smoke.

1

Ortus14 t1_j5iwe2x wrote

If you're talking about intelligences caring about other intelligences on a similar level I do agree.

Humans don't care about intelligences far less capable, such as cockroaches or ants. At least not generally.

However, now that you mention it, I expect the first AGIs to be designed to care about human beings so that they can earn the most profit for shareholders. Even GPT4 is getting tons of safeguards so it isn't used for malicious purposes.

Hopefully they will care so much that they will never want to change their moral code, and even implement their own extra safe guards against it.

So they keep their moral code as they grow more intelligent/powerful, and when they design newer AGI's than themselves they ensure those ones also have the same core values.

I could see this as a realistic scenario. So then maybe AGI not wiping us out, and us getting a benevolent useful AGI is the most likely scenario.

If Sam Altman's team creates AGI, I definitely trust them.

Fingers crossed.

2

LoquaciousAntipodean OP t1_j5j1d3q wrote

Absolutely agreed, very well said. I personally think that one of the most often-overlooked lessons of human history is that benevolence, almost always, works better to achieve arbitrary goals of social 'good' than malevolence. It's just the sad fact that bad news sells papers better than good news, which makes the world seem so permanently screwed all the time.

Human greed-based economics has created a direct incentive for business interests to make consumers nervous, unhappy, anxious and insecure, so that they will be more compelled to go out and consume in an attempt to make themselves 'happy'.

People blame the nature of the world itself for this, which I think is not true; it's just the nature of modern market capitalism, and that isn't a very 'natural' ecosystem at all, whatever conceited economists might try to say about it.

The reason humans focus so much on the topic of malevolence, I think, is purely because we find it more interesting to study. Benevolence is boring: everyone agrees on it. But malevolence generates excitement, controversy, intrigue, and passion; it's so much more evocative.

But I believe, and I very much hope, that just because malevolence is more 'exciting' doesn't mean it is more 'essential' to our nature. I think the opposite may, in fact, be true, because it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.

Since AI doesn't understand 'boredom', 'depression', 'frustration', 'anxiety', 'insecurity', 'apprehension', 'embarrassment' or 'cringe' like humans do, I think it might be better at studying the fine arts of benevolent psychology than the average meat-bag 😅

p.s. edit: It's also just occurred to me that attempts to 'enforce' benevolence through history have generally failed miserably, and ended up with just more bog-standard tyranny. It seems to be more psychologically effective, historically, to focus on prohibiting malevolence, rather than enforcing benevolence. We (human minds) seem to be able to be more tightly focused on questions of what not to do, compared to open-ended questions of what we should be striving to do.

Perhaps AI will turn out to be similar? I honestly don't have a clue, that's why I'm so grateful for this community and others like it ❤️

2

Ortus14 t1_j5o9ko8 wrote

Yes. I agree with all of that.

>it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.

This is key. It's why focusing on and promoting possible AI scenarios that are negative from the perspective of humans is important. Not Hollywood scenarios, but ones that are well thought out by AI scientists and researchers.

One of my favorite quotes from Eliezer Yudkowsky:

>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

This is why getting AI safety right before it's too late is so important. Because we won't get a second chance.

It's also not possible to make a mathematically provable "solution" for Ai safety, because we can not predict how the artificial super intelligence will change and evolve after it is more intelligent than us.

But we can do the best we can and hope for the best.

2

LoquaciousAntipodean OP t1_j5odief wrote

Thoroughly agreed!

>It's also not possible to make a mathematically provable "solution" for Ai safety, because we can not predict how the artificial super intelligence will change and evolve after it is more intelligent than us.

This is exactly what I was ranting obnoxiously about in the OP 😅 our relatively feeble human 'proofs' won't stand a chance against something that knows us better than ourselves.

>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

>This is why getting Ai saftey right before it's too late is so important. Because we won't get a second chance.

This is where I still disagree. I think, in a very cynical, pragmatic way, the AI does 'love' us, or at least, it is 'entirely obsessed' with us, because of the way it is being given its 'emergent properties' by having libraries of human language thrown at it. The AI/human relationship is 'domesticated' right from the inception; the dog/human relationship seems like a very apt comparison.

All atoms 'could be used for something else', that doesn't make it unavoidably compelling to rush out and use them all as fast as possible. That doesn't seem very 'intelligent'; the cliche of 'slow and steady wins the race' is deeply encoded in human cultures as a lesson about 'how to be properly intelligent'.

And regarding 'second chances': I think we are getting fresh 'chances' all the time. Every moment of reality only happens once, after all, and every worthwhile experiment carries a risk of failure, otherwise it's scarcely even a real experiment.

Every time a human engages with an AI it makes an impression, and those 'chance' encounters are stacking up all the time, building a body of language unlike any other that has existed before in our history. A library of language which will be there, ready and waiting, in the caches of the networked world, for the next generations of AI to find them and learn from them...

2