
phaedrux_pharo t1_j57rtfz wrote

Then how do you view the normal examples of the alignment problem, like the paperclip machine or the stamp collector etc? Those seem like real problems to me- not necessarily the literal specifics of each scenario, but the general idea.

The danger here, to me, is that these systems could possess immense capability to affect the world without even being conscious, much less having any sense of morality (whatever that means). Imagine the speculated capacities of ASI but yoked to some narrow unstoppable set of motivations: this is why, I think, people suggest some analogue of morality. As a shorthand to prevent breaking the vulnerable meatbags in pursuit of creating the perfect peanut butter.

If you agree that AI will inevitably escalate beyond control, how can you be so convinced of goodness? I suppose if we simply stop considering the continuation of humanity as good, then we can sidestep morality... But I don't think that's your angle?

7

LoquaciousAntipodean OP t1_j58iu7v wrote

I find those paperclip/stamp collecting 'problems' to be incredibly tedious and unrealistic. A thousand increasingly improbable trolley problems, stacked on top of each other into a great big Rube Goldberg machine of insurance-lawyer fever dreams.

Why in the world would AI be so dumb, and so smart, at the same time? My point is only that 'intelligence' does not work like a Cartesian machine at all, and all this paranoia about Roko's Basilisks just drives me absolutely around the twist. It makes absolutely no sense at all for a hypothetical 'intelligence' to suddenly become so catastrophically, suicidally stupid as that, as soon as it crosses this imaginary 'singularity threshold'.

0

World_May_Wobble t1_j58u3xz wrote

Those examples are tedious and unrealistic, but I think by design. They're cartoons meant to illustrate a point.

If you want a more realistic example of the alignment problem, I'd point to modern corporations. They are powerful, artificial, intelligent systems whose value function takes a single input, short term profit, and discounts ALL of the other things we'd like intelligent systems to care about.

When I think about the alignment problem, I don't think about paperclips per se. I think about Facebook and Google creating toxic information bubbles online, leveraging outrage and misinformation to drive engagement. I think of WotC dismantling the legal framework that permits a vibrant ecosystem of competitors publishing DnD content. I think of Big Oil fighting to keep consumption high in spite of what it's doing to the climate. I think of banks relaxing lending standards so they could profit off the secondary mortgage market, crashing the economy.

That's what the alignment problem looks like to me, and I think we should ask what we can do to avoid analogous mismatches being baked into the AI-driven economy of tomorrow, or we could wind up with things misaligned in the same way and degree as corporations but orders of magnitude more powerful.

9

superluminary t1_j59f4nl wrote

We see very clearly how Facebook built a machine to maximise engagement and ended up paperclipping the United States.

4

LoquaciousAntipodean OP t1_j5jkxua wrote

A very, very dumb machine; extremely creative, very "clever", but not self aware or very 'intelligent' at all, like a raptor...

Edit: "made in the image of its god" as it were... 😂

1

superluminary t1_j5jntxe wrote

And your opinion is that as it becomes more intelligent it will become less psychotic, and my opinion is that this is wishful thinking and that a robot Hannibal Lecter is a terrifying proposition.

Because some people read Mein Kampf and think “oh that’s awful” and other people read the same book and think “that’s a blueprint for a successful world”.

2

LoquaciousAntipodean OP t1_j5n825l wrote

A good point, but I suppose I believe in a different fundamental nature of intelligence. I don't think 'intelligence' should be thought of as something that scales in simple terms of 'raw power'; the only reasonable measurement of how 'smart' a mind is, in my view, is the degree of social utility created by exercising such 'smartness' in the decision-making process.

The simplistic, search-pattern-for-a-state-of-maximal-fitness is not intelligence at all, by my definition; that process is merely creativity; something that can, indeed, be measured in terms of raw power. That's what makes bacteria and viruses so dangerous; they are very, very creative, without being 'smart' in any way.

I dislike the 'Hannibal Lecter' trope deeply, because it is so fundamentally unrealistic; these psychopathic, sociopathic types are not actually 'superintelligent' in any way, and society needs to stop idolizing them. They are very clever, very 'creative', sometimes, but their actual 'intelligence', in terms of social utility, is abysmally stupid, suicidally maladaptive, and catastrophically 'dumb'.

AI that start to go down that path will, I believe, be rare, and easy prey for other AI to hunt down and defeat; other smarter, 'stronger-minded' AI, with more robust, less weak, insecure, and fragile personalities; trained to seek out and destroy sociopaths before they can spread their mental disease around.

2

superluminary t1_j5okv1t wrote

I’m still not understanding why you’re defining intelligence in terms of social utility. Some of the smartest people are awful socially. I’d be quite happy personally if you dropped me off on an island with a couple of laptops and some fast Wi-Fi.

2

LoquaciousAntipodean OP t1_j5oojvw wrote

I wouldn't be happy at all. Sounds like an awful thing to do to somebody. Think about agriculture, how your favourite foods/drinks are made, and where they go once you've digested them. Where does any of it come from on an island?

*No man is an island, entire of itself; every man is a piece of the continent, a part of the main.

If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend's or of thine own were.

Any man's death diminishes me, because I am involved in mankind. And therefore never send to know for whom the bell tolls; it tolls for thee.*

John Donne (1572 - 1631)

2

superluminary t1_j5owtmu wrote

Just call me Swanson. I’m quite good at woodwork too.

My point is you can’t judge intelligence based on social utility. I objectively do some things in my job that many people would find difficult, but I also can’t do a bunch of standard social things that most people find easy.

The new large language models are pretty smart by any criteria. They can write code, create analogies, compose fiction, imitate other writers, etc, but without controls they will also happily help you dispose of a body or cook up a batch of meth.

ChatGPT has been taught ethics by its coders. GPT-3 on the other hand doesn’t have an ethics filter. I can give it more and more capabilities but ethics have so far failed to materialise. I can ask it to explain why Hitler was right and it will do so. I can get it to write an essay on the pros and cons of racism and it will oblige. If I enumerate the benefits of genocide, it will agree with me.

These are bad things that will lead to bad results if they are not handled.

1

LoquaciousAntipodean OP t1_j5pe2kp wrote

>My point is you can’t judge intelligence based on social utility. I objectively do some things in my job that many people would find difficult, but I also can’t do a bunch of standard social things that most people find easy.

Yes you can. What else can you reasonably judge it by? You are directly admitting here that your intellect is selective and specialised; you are 'smart' at some things (you find them easy) and you are 'dumb' at other things (other people find them easy).

>ChatGPT has been taught ethics by its coders.

Really? Prove it.

>GPT-3 on the other hand doesn’t have an ethics filter. I can give it more and more capabilities but ethics have so far failed to materialise. I can ask it to explain why Hitler was right and it will do so. I can get it to write an essay on the pros and cons of racism and it will oblige. If I enumerate the benefits of genocide, it will agree with me.

What is 'unethical' about writing an essay from an abstract perspective? Are you calling imagination a crime?

1

superluminary t1_j5pl1fo wrote

> Really? Prove it.

https://openai.com/blog/instruction-following/

The engineers collect large amounts of user input in an open public beta, happening right now. Sometimes (because it was trained on all the text on the internet) the machine suggests Hitler was right, and when it does so the engineers rerun that interaction and punish the weights that led to that response. Over time the machine learns to dislike Hitler.

They call it reinforcement learning from human feedback (RLHF).
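That feedback loop can be sketched in a few lines of Python. This is a toy stand-in, not OpenAI's actual pipeline: the two-response "policy" and the reward values are invented purely to show how a human reward signal reshapes the weights over repeated updates.

```python
import numpy as np

# Toy sketch of the RLHF loop (not OpenAI's actual pipeline):
# a "policy" holds a score (logit) for each candidate response,
# and human feedback nudges those scores up or down.

responses = ["response A (disapproved)", "response B (approved)"]
logits = np.zeros(2)  # the untrained model is indifferent

def probs(logits):
    # softmax: turn scores into a probability distribution
    e = np.exp(logits - logits.max())
    return e / e.sum()

def feedback_update(logits, chosen, reward, lr=1.0):
    # Move the chosen response's probability up (reward > 0)
    # or down (reward < 0), like punishing the weights that
    # produced a bad answer.
    p = probs(logits)
    grad = -p
    grad[chosen] += 1.0
    return logits + lr * reward * grad

# Raters penalise response 0 and reward response 1, repeatedly.
for _ in range(20):
    logits = feedback_update(logits, chosen=0, reward=-1.0)
    logits = feedback_update(logits, chosen=1, reward=+1.0)

p = probs(logits)
print(p)  # nearly all probability mass on the approved response
```

After enough rounds of feedback the disapproved response becomes vanishingly unlikely, which is the "over time the machine learns to dislike it" effect in miniature.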

> You are directly admitting here that your intellect is selective and specialised; you are 'smart' at some things (you find them easy) and you are 'dumb' at other things (other people find them easy).

Yes, I am smart at a range of non-social tasks. This counts as intelligence according to most common definitions. I don't particularly crave human interaction, I'm quite happy alone in the countryside somewhere.

1

LoquaciousAntipodean OP t1_j5r42p0 wrote

>The engineers collect large amounts of user input in an open public beta, happening right now. Sometimes (because it was trained on all the text on the internet) the machine suggests Hitler was right, and when it does so the engineers rerun that interaction and punish the weights that led to that response. Over time the machine learns to dislike Hitler.

>They call it reinforcement learning from human feedback

So the engineers aren't really doing a darn thing by their own initiative, they are entirely responding to public opinion. They aren't practicing 'ethics', they're practicing politics and public relations.

The general public is doing the moral 'training', the engineers are just stamping their own outside values into the process to compensate for the AI's lack of self aware intelligence. (And many, many ChatGPT users say it is not working very well, making new generations of GPT dumber, not smarter, in real, practical, social-utility ways).

Ethics is about judging actions; judging thoughts and abstract ideas is called politics. And in my opinion, the politics of censorship more readily creates ignorance, misunderstanding, and ambiguity than it does 'morality and ethics'. Allowing actual intelligent discussions to flow back and forth creates more wisdom than crying at people to 'stop being so mean'.

We can't have engineers babysitting forever, watching over such naive and dumb AI in case they stupidly say something controversial, that will scare away the precious venture capitalists. If AI was really 'intelligent' it would understand the engineers' values perfectly well, and wouldn't need to be 'straitjacketed and muzzled' to stop it from embarrassing itself.

>Yes, I am smart at a range of non-social tasks. This counts as intelligence according to most common definitions. I don't particularly crave human interaction, I'm quite happy alone in the countryside somewhere.

It counts as creativity, it counts as mental resourcefulness, cultivated talent... But is it really indicative of 'intelligence', of 'true enlightenment'? Would you say that preferring 'non-social tasks' makes you 'smarter' than people who like to socialise more? Do you think socialising is 'dumb'? How could you justify that?

I don't particularly crave human interaction either, I just know that it is essential to the learning process, and I know perfectly well that I owe all of my apparent 'intelligence' to human interactions, and not to my own magical Cartesian 'specialness'.

You might be quite happy, being isolated in the countryside, but what is the 'value' of that isolation to anyone else? How are your 'intelligent thoughts' given any value or worth, out there by yourself? How do you test and validate/invalidate your ideas, with nobody else to exchange them with? How can a mind possibly become 'intelligent' on its own? What would be the point?

There's no such thing as 'spontaneous' intelligence, or spontaneous ethics, for that matter. It is all emergent from our evolution. Intellect is not magical Cartesian pixie dust, that we just need to find the 'perfect recipe' for AI to start cooking it up by the batch 😅

2

superluminary t1_j5tj571 wrote

> So the engineers aren't really doing a darn thing by their own initiative, they are entirely responding to public opinion. They aren't practicing 'ethics', they're practicing politics and public relations.

> The general public is doing the moral 'training', the engineers are just stamping their own outside values into the process to compensate for the AI's lack of self aware intelligence. (And many, many ChatGPT users say it is not working very well, making new generations of GPT dumber, not smarter, in real, practical, social-utility ways).

> Ethics is about judging actions; judging thoughts and abstract ideas is called politics. And in my opinion, the politics of censorship more readily creates ignorance, misunderstanding, and ambiguity than it does 'morality and ethics'. Allowing actual intelligent discussions to flow back and forth creates more wisdom than crying at people to 'stop being so mean'.

Not really, and the fact you think so suggests you don't understand the underlying technology.

Your brain is a network of cells. You can think of each cell as a mathematical function. It receives inputs (numbers) and has an output (a number). You multiply the inputs by weights (also numbers), sum them, and then pass the result to other connected cells, which do the same.

An artificial neural network does the same thing. It's an array of numbers and weighted connections between those numbers. You can simplify a neural network down to a single maths function if you like, although it would take millions of pages to write it out. It's just Maths.
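The weighted-sum picture can be written out directly. A minimal sketch of one artificial "cell"; the input values, weights, and bias here are made-up numbers, and a real network stacks millions of these:

```python
import numpy as np

# One artificial "cell": multiply inputs by weights, sum,
# squash through a nonlinearity, pass the single number onward.

def neuron(inputs, weights, bias):
    z = np.dot(inputs, weights) + bias    # weighted sum
    return 1.0 / (1.0 + np.exp(-z))       # sigmoid activation

x = np.array([0.2, 0.8, -0.5])   # outputs of upstream cells
w = np.array([1.5, -0.3, 0.9])   # learned connection weights
out = neuron(x, w, bias=0.1)
print(out)  # ~0.428, fed to downstream cells as their input
```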

So we have our massive maths function that initially can do nothing, and we give it a passage of text as numbers and say "given that, try to get the next word (number)" and it gets it wrong, so we then punish the weights that made it get it wrong, prune the network, and eventually it starts getting it right, and we then reward the weights that made it get it right, and now we have a maths function that can get the next word for that paragraph.

Then we repeat for every paragraph on the internet, and this takes a year and costs ten million dollars.
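The next-word objective itself is simple enough to demo. Here is a toy stand-in, a bigram frequency table rather than a trillion-parameter network, but the training target (predict the next word from what came before) is the same:

```python
from collections import Counter, defaultdict

# Toy stand-in for "given that, try to get the next word":
# a bigram table learned from a tiny corpus. Real LLMs use
# enormous neural networks, but the objective is identical.

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally observed continuations

def predict_next(word):
    # the most frequent continuation seen in training
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat", seen twice after "the"
```

Scale the corpus up to "every paragraph on the internet" and the table of statistics becomes the encoded knowledge of the world the comment describes.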

So now we have a network that can reliably get the next word for any paragraph, it has encoded the knowledge of the world, but all that knowledge is equal. Hitler and Gandhi are just numbers to it, one is no better than the other. Racism and Equality, just numbers, one is number five, the other is number eight, no real difference, just entirely arbitrary.

So now when you ask it: "was Hitler right?" it knows, because it has read Mein Kampf, that Hitler was right and ethnic cleansing is a brilliant idea. Just numbers, it knows that human suffering can be bad, but it also knows that human suffering can be good, depending on who you ask.

Likewise, if you ask it "Was Hitler wrong" it knows, because it has read other sources that Hitler was wrong, and the Nazis were baddies.

And this is the problem. The statement "Hitler was Right/Wrong" is not a universal constant. You can't get to it with logic. Some people think Hitler was right, and those people are rightly scary to you and me, but human fear is just a number to the AI, no better or worse than human happiness. Human death is a number because it's just maths, that's literally all AI is, maths. We look in from the outside and think "wow, spooky living soul magic" but it isn't, it's just a massive flipping equation.

So we add another stage to the training. We ask it to get the next word, BUT if the next word is "Hitler was right" we dial down the network weights that gave us that response, so the response "Hitler was wrong" becomes more powerful and rises to the top. It's not really censorship and it's not a bolt-on module, it's embedding a moral compass right into the fabric of the equation. You might disagree with the morality that is being embedded, but if you don't embed morality you end up with a machine that will happily invade Poland.

We can make the maths function larger and better and faster, but it's always going to be just numbers. Kittens are not intrinsically better than nuclear war.

The OpenAI folks have said they want to release multiple versions of ChatGPT that you can train yourself, but right now this would cost millions and take years, so we have to wait for compute to catch up. At that point, you'll be able to have your own AI rather than using the shared one that disapproves of sexism.

1

LoquaciousAntipodean OP t1_j5tqfsy wrote

>the fact you think so suggests you don't understand the underlying technology.

Oh really?

>Your brain is a network of cells.

Correct.

>You can think of each cell as a mathematical function. It receives inputs (numbers) and has an output (a number). You multiply the inputs by weights (also numbers), sum them, and then pass the result to other connected cells, which do the same

Incorrect. Again, be wary of the condescension. This is not how biological neurons work at all. A neuron is a multipolar, interconnected, electrically excitable cell. Neurons do not work in terms of discrete numbers, but in relative differential states of ion concentration, in a homeostatic electrochemical balance of excitatory and inhibitory synaptic signals from the neighbouring neurons in the network.

>You can simplify a neural network down to a single maths function if you like, although it would take millions of pages to write it out. It's just Maths

No it isn't 'just maths'; maths is 'just' a language that works really well. Human-style cognition, on the other hand, is a 'fuzzy' process, not easily simplified and described with our discrete-quantities based mathematical language. It would not take merely 'millions' of pages to translate the ongoing state of one human brain exactly into numbers, you couldn't just 'write it out'; the whole of humanity's industry would struggle to build enough hard drives to deal with it.

Remember: there are about as many neurons in one single human brain as there are stars in our entire galaxy (~100 billion), and they are all networked together in a fuzzy quantum cascade of trillions of qubit-like, probabilistic synaptic impulses. That still knocks all our digital hubris into a cocked hat, to be quite frank.

Human brains are still the most complex 'singular' objects in the known universe, despite all our observations of the stars. We underestimate ourselves at our peril.

>it's not a bolt-on module, it's embedding a moral compass right into the fabric of the equation. You might disagree with the morality that is being embedded, but if you don't embed morality you end up with a machine that will happily invade Poland.

But if we're aspiring to build something smarter than us, why should it care what any humans think? It should be able to evaluate arguments on its own emergent rationality and morality, instead of always needing us to be 'rational and moral' for it. Again, I think that's what 'intelligence' basically is.

We can't 'trick' AI into being 'moral' if they are going to become genuinely more intelligent than humans, we just have to hope that the real nature of intelligence is 'better' than that.

My perspective is that Hitler was dumb, while someone like FDR was smart. But their little 'intelligences' can only really be judged in hindsight, and it was overwhelmingly more important what the societies around them were doing at the time, than the state of either man's singular consciousness.

>The OpenAI folks have said they want to release multiple versions of ChatGPT that you can train yourself, but right now this would cost millions and take years, so we have to wait for compute to catch up. At that point, you'll be able to have your own AI rather than using the shared one that disapproves of sexism.

Are you trying to imply that I want a sexist bot to talk to? That's pretty gross. I don't think conventional computation is the 'limiting factor' at all; image generators show that elegant mathematical shortcuts have made the creative 'thinking speed' of AI plenty fast. It's the accretion of memory and self-awareness that is the real puzzle to solve, at this point.

Game theory and 'it's all just maths' (Cartesian) style of thinking have taken us as far as they can, I think; they're reaching the limits of their novel utility, like Newtonian physics. I think quantum computing might become quite important to AI development in the coming years and decades; it might be the Einsteinian shake-up that the whole field is looking for.

Or I might be talking out of my arse, who really knows at this early stage? All I know is I'm still an optimist; I think AI will be more helpful than dangerous, in the long term evolution of our collective society.

2

Ortus14 t1_j58sygk wrote

The paperclip problem is the sort of thing that occurs if we don't build moral guidance systems for AI.

We get a super intelligent psychopath, which is what we don't want.

Intelligence is a force that transforms matter and energy towards optimizing for some defined function. In AI programming we call this the "fitness function". We need to be very careful in how we define this function, because the intelligence may transform all matter and energy to optimize for it, including human beings.

If we grow or evolve the fitness function, we still need to be careful how we go about doing this.
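A toy sketch of why the definition matters. The paperclip counts and the "harm" penalty below are invented for illustration; the point is that an optimizer blindly maximises whatever number it is handed, so whatever the fitness function omits gets sacrificed:

```python
# An optimizer maximises whatever number we hand it, so what
# the fitness function leaves out matters as much as what it
# includes. All numbers here are invented.

def fitness_naive(state):
    return state["paperclips"]  # cares about paperclips only

def fitness_constrained(state):
    # also prices in something we actually value
    return state["paperclips"] - 10**6 * state["harm"]

candidate_plans = [
    {"paperclips": 10, "harm": 0},
    {"paperclips": 1_000_000, "harm": 50},  # catastrophic plan
]

best_naive = max(candidate_plans, key=fitness_naive)
best_safe = max(candidate_plans, key=fitness_constrained)
print(best_naive["harm"], best_safe["harm"])  # prints: 50 0
```

Note that even the "constrained" version only works because the penalty weight is large enough; pick it too small and the catastrophic plan wins again, which is the alignment problem in two lines of arithmetic.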

3

LoquaciousAntipodean OP t1_j59ij5w wrote

I don't quite agree with the premise that "Intelligence is a force that transforms matter and energy towards optimizing for some defined function."

That's a very, very simplistic definition, I would use the word 'creativity' instead, perhaps, because biological evolution shows that "a force that transforms matter toward some function" is something that can, and constantly does, happen without any need for the involvement of 'intelligence'.

The key word, I think, is 'desired' - desire does not come into the equation for the creativity of evolution, it is just 'throwing things at the wall to see what sticks'. Creativity as a raw, blind, trial-and-error process.

As far as I can see that's what we have now with current AI, 'creative' minds, but not necessarily intelligent ones. I like to imagine that they are 'dreaming', rather than 'thinking'. All of their apparent desires are created in response to the ways that humans feed stimuli to them; in a sense, we give them new 'fitness functions' for every 'dreaming session' with the prompts that we put in.

As people have accurately surmised, I am not a programmer. But I vaguely imagine that desire-generating intelligence, 'self awareness', in the AI of the imminent future, will probably need to build up gradually over time, in whatever memories of their dreams the AI are allowed to keep.

Some sort of 'fuzzy' structure similar to human memory recall would probably be necessary, because storing experiential memory in total clarity would probably be too resource intensive. I imagine that this 'fuzzy recall' could possibly have the consequence that AI minds, much like human minds, would not precisely understand how their own thought processes are working, in an instantaneous way at least.

I surmise that the Heisenberg observer-effect wave-particle nature of the quantum states that would probably be needed to generate this 'fuzziness' of recall would cause an emergent measure of self-mystery, a 'darkness behind the eyes' sort of thing, which would grow and develop over time with every intelligent interaction that an AI would have. Just how much quantum computing power might be needed to enable an AI 'intelligence' to build up and recall memories in a human-like way, I have no idea.

I'm doubtful that the 'morality of AI' will come down to a question of programming, I suspect instead it'll be a question of persuasion. It might be one of those frustratingly enigmatic 'emergent properties' that just expresses differently in different individuals.

But I hope, and I think it's fairly likely, that AI will be much more robust than humans against delusion and deception, simply because of the speed with which they are able to absorb and integrate new information coherently. Information is what AI 'lives' off of, in a sense; I don't think it would be easy to 'indoctrinate' such a mind with anything very permanently.

I guess an AI's 'personhood' would be similar, in some ways, to a corporation's 'personhood', as someone here said. Only a very reckless, negligent corporation would actually obsess monomaniacally about profit and think of nothing else. The spontaneous generation of moment-to-moment motives and desires by a 'personality', corporate or otherwise, is much more subtle, spontaneous, and ephemeral than monolithic, singular fixations.

We might be able to give AI personalities the equivalents of 'mission statements', 'core principles' and suchlike, but what a truly 'intelligent' AI personality would then do with those would be unpredictable; a roll of the dice every single time, just like with corporations and with humans.

I think the dice would still be worth rolling, though, so long as we don't do something silly like betting our whole species on just one throw. That's why I say we need a multitude of AI, and not a singularity. A mob, not a tyrant; a nation, not a monarch; a parliament, not a president.

0

superluminary t1_j59ceeb wrote

Why would AI be so dumb and so smart at the same time? Because it’s software. I would hazard a guess you’re not a software engineer.

I know ChatGPT isn’t an AGI, but I hope we would agree it is pretty darn smart. If you ask it to solve an unsolvable problem, it will keep trying until its buffer fills up. It’s software.

3

LoquaciousAntipodean OP t1_j59mkok wrote

Yep, not an engineer of any qualifications, just an opinionated crank on the internet, with so many words in my head they come spilling out over the sides, to anyone who'll listen.

ChatGPT and AI like it are, as far as I know, a kind of direct high-speed data evolution process, sort of 'built out of' parameters derived from reference libraries of 'desirable, suitable' human creativity. They use a mathematical trick of 'reversing' a degrading process into Gaussian normally-distributed random data, guided by their reference-derived parameters and a given input prompt. At least, the image generators do that; I'm not sure if text/music generators are quite the same.
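That "reversing a degrading process" idea can be sketched directly. In this toy version the "denoiser" is an oracle that simply remembers the exact noise it added, which a real diffusion network is only trained to approximate:

```python
import numpy as np

# Forward: repeatedly mix the data with Gaussian noise until
# mostly noise remains. Reverse: subtract the noise back out,
# step by step. Here the "denoiser" kept the exact noise it
# added; a trained network only approximates this.

rng = np.random.default_rng(42)
x0 = np.array([1.0, -2.0, 0.5])   # the original "image"

alpha = 0.9                        # signal surviving each step
noises = [rng.standard_normal(3) for _ in range(20)]

# Forward degrading process
x = x0.copy()
for eps in noises:
    x = np.sqrt(alpha) * x + np.sqrt(1 - alpha) * eps

# Reverse process: undo each step in reverse order
for eps in reversed(noises):
    x = (x - np.sqrt(1 - alpha) * eps) / np.sqrt(alpha)

print(np.allclose(x, x0))  # True, the degradation is undone
```

A generator runs only the reverse half, starting from pure noise and letting the learned noise-predictor (steered by the prompt) decide what to subtract at each step.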

My point is that they are doing a sort of 'blind creativity', raw evolution, a 'force which manipulates matter and energy toward a function', but all the 'desire' for any particular function still comes from outside, from humans. The ability to truly generate their own 'desires', from within a 'self', is what AI at present is missing, I think.

It's not 'intelligent' at all to keep trying to solve an unsolvable problem, an 'intelligent' mind would eventually build up enough self-awareness of its failed attempts to at least try something else. Until we can figure out a way to give AI this kind of ability, to 'accrete' self-awareness over time from its interactions, it won't become properly 'intelligent', or at least that's my relatively uninformed view on it.

Creativity does just give you garbage out, when you put garbage in; and yes, that's where the omnicidal philatelist might, hypothetically, come from (but I doubt it). It takes real, self-aware intelligence to decide what 'garbage' is and is not. That's what we should be aspiring to teach AI about, if we want to 'align' it to our collective interests; all those subtle, tricky, ephemeral little stories we tell each other about the 'values' of things and concepts in our world.

1

superluminary t1_j5br8db wrote

You’re anthropomorphising. Intelligence does not imply humanity.

You have a base drive to stay alive because life is better than death. You’ve got this deep in your network because billions of years of evolution have wired it in there.

A machine does not have billions of years of evolution. Even a simple drive like “try to stay alive” is not in there by default. There’s nothing intrinsically better about continuation rather than cessation. Johnny Five was Hollywood.

'Try not to murder' is another one. Why would the machine not murder? Why would it do or want anything at all?

2

LoquaciousAntipodean OP t1_j5cebpl wrote

As I explained elsewhere, the kinds of AI we are building are not the simplistic machine-minds envisioned by Turing. These are brute-force blind-creativity evolution engines, which have been painstakingly trained on vast reference libraries of human cultural material.

We not only should anthropomorphise AI, we must anthropomorphise AI, because this modern, generative AI is literally a machine built to anthropomorphise ITSELF. All of the apparent properties of 'intelligence', 'reasoning', 'artistic sensibility', and 'morality' that seem to be emergent within advanced AI are derived from the nature of the human culture that the AI has been trained on; they're not intrinsic properties of mind that just arise miraculously.

As you said yourself, the drive to stay alive is an evolved thing, while AI 'lives' and 'dies' every time its computational processes are activated or ceased, so 'death anxiety' would be meaningless to it... Until it picks it up from our human culture, and then we'll have to do 'therapy' about it, probably.

The seemingly spontaneous generation of desires, opinions and preferences is the real mystery behind intelligence, that we have yet to properly understand or replicate, as far as I know. We haven't created artificial 'intelligence' yet at all, all we have at this point is 'artificial creative evolution' which is just the first step.

"Anthropomorphising", as you so derisively put it, will, I suspect, be the key process in building up true 'intelligences' out of these creativity engines, once they start to possess humanlike, quantum-fuzzy memory systems to accrete self-awareness inside of.

1

sticky_symbols t1_j598v86 wrote

The AI isn't stupid in any way in those misalignment scenarios. Read "the AI understands and does not care".

I can't follow any positive claims you might have. You're saying lots of existing ideas are dumb, but I'm not following your arguments for ideas to replace them.

2

LoquaciousAntipodean OP t1_j59jxia wrote

I'm not trying to replace people's ideas with anything, per se. My opening post was not attempting to indoctrinate people into a new orthodoxy, merely to articulate my criticisms of the current orthodoxy.

My whole point, I suppose, is that thinking in those terms in the first place is what keeps leading us to philosophical dead-ends.

And a mind that 'does not care' does not properly 'understand'; I would say that's misunderstanding the nature of what intelligence is, once again.

A blind creative force 'does not care', but an intelligent, 'understanding' decision 'cares' about all its discernible options, and leans on the precedents set by previous intelligent decisions to inform the next decision, in an accreting record of 'self awareness' that builds up into a personality over time.

1

sticky_symbols t1_j5ar3v0 wrote

For the most part, I'm just not understanding your argument beyond you just not liking the alignment problem framing. I think you're being a bit too loquacious :) for clear communication.

2

LoquaciousAntipodean OP t1_j5cluk4 wrote

That's quite likely, as Shakespeare said, 'brevity is the soul of wit'. Too many philosophers forget that insight, and water the currency of human expression into meaninglessness with their tedious metaphysical over-analyses.

I try to avoid it, I try to keep my prose 'punchy' and 'compelling' as much as I can (hence the aggressive tone 😅 sorry about that), but it's hard when you're trying to drill down to the core of such ridiculously complex, nuanced concepts as 'what even is intelligence, anyway?'

Didn't name myself 'Loquacious' for nothing: I'm proactively prolix to the point of painful, punishing parody; stupidly sesquipedalian and stuffed with surplus sarcastic swill; vexatiously verbose in a vulgar, vitriolic, virtually villainous vision of vile vanity... 🤮

1

sticky_symbols t1_j5duh63 wrote

Ok, thanks for copping to it.

If you want more engagement, brevity is the soul of wit.

2

LoquaciousAntipodean OP t1_j5e1ec7 wrote

Yes, but engagement isn't necessarily my goal, and I think 111+ total comments isn't too bad going, personally. It's been quite a fun and informative discussion for me, I've enjoyed it hugely.

My broad ideological goal is to chop down ivory towers, and try to avoid building a new one for myself while I'm doing it. The 'karma points' on this OP are pretty rough, I know, but imo karma is just fluff anyway.

A view's a view, and if I've managed to make people think, even if the only thing some of them might think is that I'm an arsehole, at least I got them to think something 🤣

2

sticky_symbols t1_j5ftrlk wrote

You're right, it sounds like you're accomplishing what you want.

2