phaedrux_pharo t1_j5762er wrote
Does this re-framing help solve the problem? I don't see it.
We might create autonomous systems that change the world in ways counter to our intentions and desires. These systems could escalate beyond our control. I don't see how your text clarifies the issue.
Also, I doubt that "good" engineers mistake Asimov's laws for anything serious.
LoquaciousAntipodean OP t1_j57oyzu wrote
I was trying to say, essentially, that it's a 'problem' that isn't a problem at all, and trying so hard to 'solve' it is the rhetorical equivalent of punching ourselves in the face to try and teach ourselves a lesson.
AI will almost inevitably escalate beyond our control, but we should be able to see that as a good thing, not be constantly shitting each other's pants over it.
The alignment problem is dumb, and we need to think about the whole 'morality' question differently as a species, AI or no AI. Perhaps that would have been a better TLDR
phaedrux_pharo t1_j57rtfz wrote
Then how do you view the standard examples of the alignment problem, like the paperclip machine or the stamp collector, etc.? Those seem like real problems to me - not necessarily the literal specifics of each scenario, but the general idea.
The danger here, to me, is that these systems could possess immense capability to affect the world without even being conscious, much less having any sense of morality (whatever that means). Imagine the speculated capacities of ASI yoked to some narrow, unstoppable set of motivations: that, I think, is why people suggest some analogue of morality - a shorthand to prevent breaking the vulnerable meatbags in pursuit of creating the perfect peanut butter.
If you agree that AI will inevitably escalate beyond control, how can you be so convinced of goodness? I suppose if we simply stop considering the continuation of humanity as good, then we can side step morality... But I don't think that's your angle?
LoquaciousAntipodean OP t1_j58iu7v wrote
I find those paperclip/stamp collecting 'problems' to be incredibly tedious and unrealistic. A thousand increasingly improbable trolley problems, stacked on top of each other into a great big Rube Goldberg machine of insurance-lawyer fever dreams.
Why in the world would AI be so dumb, and so smart, at the same time? My point is only that 'intelligence' does not work like a Cartesian machine at all, and all this paranoia about Roko's Basilisks just drives me absolutely around the twist. It makes absolutely no sense at all for a hypothetical 'intelligence' to suddenly become so catastrophically, suicidally stupid as that, as soon as it crosses this imaginary 'singularity threshold'.
World_May_Wobble t1_j58u3xz wrote
Those examples are tedious and unrealistic, but I think by design. They're cartoons meant to illustrate a point.
If you want a more realistic example of the alignment problem, I'd point to modern corporations. They are powerful, artificial, intelligent systems whose value function takes a single input, short term profit, and discounts ALL of the other things we'd like intelligent systems to care about.
When I think about the alignment problem, I don't think about paperclips per se. I think about Facebook and Google creating toxic information bubbles online, leveraging outrage and misinformation to drive engagement. I think of WotC dismantling the legal framework that permits a vibrant ecosystem of competitors publishing DnD content. I think of Big Oil fighting to keep consumption high in spite of what it's doing to the climate. I think of banks relaxing lending standards so they could profit off the secondary mortgage market, crashing the economy.
That's what the alignment problem looks like to me, and I think we should ask what we can do to avoid analogous mismatches being baked into the AI-driven economy of tomorrow, or we could wind up with things misaligned in the same way and degree as corporations but orders of magnitude more powerful.
superluminary t1_j59f4nl wrote
We see very clearly how Facebook built a machine to maximise engagement and ended up paperclipping the United States.
LoquaciousAntipodean OP t1_j5jkxua wrote
A very, very dumb machine; extremely creative, very "clever", but not self aware or very 'intelligent' at all, like a raptor...
Edit: "made in the image of its god" as it were... đ
superluminary t1_j5jntxe wrote
And your opinion is that as it becomes more intelligent it will become less psychotic; my opinion is that this is wishful thinking, and that a robot Hannibal Lecter is a terrifying proposition.
Because some people read Mein Kampf and think "oh, that's awful", and other people read the same book and think "that's a blueprint for a successful world".
LoquaciousAntipodean OP t1_j5n825l wrote
A good point, but I suppose I believe in a different fundamental nature of intelligence. I don't think 'intelligence' should be thought of as something that scales in simple terms of 'raw power'; the only reasonable measure of how 'smart' a mind is, in my view, is the degree of social utility created by exercising that 'smartness' in the decision-making process.
A simplistic search for a state of maximal fitness is not intelligence at all, by my definition; that process is merely creativity - something that can, indeed, be measured in terms of raw power. That's what makes bacteria and viruses so dangerous; they are very, very creative, without being 'smart' in any way.
I dislike the 'Hannibal Lecter' trope deeply, because it is so fundamentally unrealistic; these psychopathic, sociopathic types are not actually 'superintelligent' in any way, and society needs to stop idolizing them. They are very clever, very 'creative', sometimes, but their actual 'intelligence', in terms of social utility, is abysmally stupid, suicidally maladaptive, and catastrophically 'dumb'.
AI that start to go down that path will, I believe, be rare, and easy prey for other AI to hunt down and defeat; other smarter, 'stronger-minded' AI, with more robust, less weak, insecure, and fragile personalities; trained to seek out and destroy sociopaths before they can spread their mental disease around.
superluminary t1_j5okv1t wrote
I'm still not understanding why you're defining intelligence in terms of social utility. Some of the smartest people are awful socially. I'd be quite happy personally if you dropped me off on an island with a couple of laptops and some fast Wi-Fi.
LoquaciousAntipodean OP t1_j5oojvw wrote
I wouldn't be happy at all. Sounds like an awful thing to do to somebody. Think about agriculture, how your favourite foods/drinks are made, and where they go once you've digested them. Where does any of it come from on an island?
*No man is an island, entire of itself; every man is a piece of the continent, a part of the main.
If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend's or of thine own were.
Any man's death diminishes me, because I am involved in mankind. And therefore never send to know for whom the bell tolls; it tolls for thee.*
John Donne (1572 - 1631)
superluminary t1_j5owtmu wrote
Just call me Swanson. I'm quite good at woodwork too.
My point is that you can't judge intelligence based on social utility. I objectively do some things in my job that many people would find difficult, but I also can't do a bunch of standard social things that most people find easy.
The new large language models are pretty smart by any criteria. They can write code, create analogies, compose fiction, imitate other writers, etc., but without controls they will also happily help you dispose of a body or cook up a batch of meth.
ChatGPT has been taught ethics by its coders. GPT-3, on the other hand, doesn't have an ethics filter. I can give it more and more capabilities, but ethics have so far failed to materialise. I can ask it to explain why Hitler was right and it will do so. I can get it to write an essay on the pros and cons of racism and it will oblige. If I enumerate the benefits of genocide, it will agree with me.
These are bad things that will lead to bad results if they are not handled.
LoquaciousAntipodean OP t1_j5pe2kp wrote
>My point is that you can't judge intelligence based on social utility. I objectively do some things in my job that many people would find difficult, but I also can't do a bunch of standard social things that most people find easy.
Yes you can. What else can you reasonably judge it by? You are directly admitting here that your intellect is selective and specialised; you are 'smart' at some things (you find them easy) and you are 'dumb' at other things (other people find them easy).
>ChatGPT has been taught ethics by its coders.
Really? Prove it.
>GPT-3, on the other hand, doesn't have an ethics filter. I can give it more and more capabilities, but ethics have so far failed to materialise. I can ask it to explain why Hitler was right and it will do so. I can get it to write an essay on the pros and cons of racism and it will oblige. If I enumerate the benefits of genocide, it will agree with me.
What is 'unethical' about writing an essay from an abstract perspective? Are you calling imagination a crime?
superluminary t1_j5pl1fo wrote
> Really? Prove it.
https://openai.com/blog/instruction-following/
The engineers collect large amounts of user input in an open public beta, happening right now. Sometimes (because it was trained on all the text on the internet) the machine suggests Hitler was right, and when it does so the engineers rerun that interaction and punish the weights that led to that response. Over time the machine learns to dislike Hitler.
They call it reinforcement learning from human feedback (RLHF).
> You are directly admitting here that your intellect is selective and specialised; you are 'smart' at some things (you find them easy) and you are 'dumb' at other things (other people find them easy).
Yes, I am smart at a range of non-social tasks. This counts as intelligence according to most common definitions. I don't particularly crave human interaction, I'm quite happy alone in the countryside somewhere.
LoquaciousAntipodean OP t1_j5r42p0 wrote
>The engineers collect large amounts of user input in an open public beta, happening right now. Sometimes (because it was trained on all the text on the internet) the machine suggests Hitler was right, and when it does so the engineers rerun that interaction and punish the weights that led to that response. Over time the machine learns to dislike Hitler.
>They call it reinforcement learning from human feedback
So the engineers aren't really doing a darn thing by their own initiative, they are entirely responding to public opinion. They aren't practicing 'ethics', they're practicing politics and public relations.
The general public is doing the moral 'training', the engineers are just stamping their own outside values into the process to compensate for the AI's lack of self aware intelligence. (And many, many ChatGPT users say it is not working very well, making new generations of GPT dumber, not smarter, in real, practical, social-utility ways).
Ethics is about judging actions; judging thoughts and abstract ideas is called politics. And in my opinion, the politics of censorship more readily creates ignorance, misunderstanding, and ambiguity than it does 'morality and ethics'. Allowing actual intelligent discussions to flow back and forth creates more wisdom than crying at people to 'stop being so mean'.
We can't have engineers babysitting forever, watching over such naive and dumb AI in case they stupidly say something controversial that will scare away the precious venture capitalists. If AI were really 'intelligent' it would understand the engineers' values perfectly well, and wouldn't need to be 'straitjacketed and muzzled' to stop it from embarrassing itself.
>Yes, I am smart at a range of non-social tasks. This counts as intelligence according to most common definitions. I don't particularly crave human interaction, I'm quite happy alone in the countryside somewhere.
It counts as creativity, it counts as mental resourcefulness, cultivated talent... But is it really indicative of 'intelligence', of 'true enlightenment'? Would you say that preferring 'non-social tasks' makes you 'smarter' than people who like to socialise more? Do you think socialising is 'dumb'? How could you justify that?
I don't particularly crave human interaction either, I just know that it is essential to the learning process, and I know perfectly well that I owe all of my apparent 'intelligence' to human interactions, and not to my own magical Cartesian 'specialness'.
You might be quite happy, being isolated in the countryside, but what is the 'value' of that isolation to anyone else? How are your 'intelligent thoughts' given any value or worth, out there by yourself? How do you test and validate/invalidate your ideas, with nobody else to exchange them with? How can a mind possibly become 'intelligent' on its own? What would be the point?
There's no such thing as 'spontaneous' intelligence, or spontaneous ethics, for that matter. It is all emergent from our evolution. Intellect is not magical Cartesian pixie dust that we just need to find the 'perfect recipe' for, so that AI can start cooking it up by the batch.
superluminary t1_j5tj571 wrote
> So the engineers aren't really doing a darn thing by their own initiative, they are entirely responding to public opinion. They aren't practicing 'ethics', they're practicing politics and public relations.
> The general public is doing the moral 'training', the engineers are just stamping their own outside values into the process to compensate for the AI's lack of self aware intelligence. (And many, many ChatGPT users say it is not working very well, making new generations of GPT dumber, not smarter, in real, practical, social-utility ways).
> Ethics is about judging actions; judging thoughts and abstract ideas is called politics. And in my opinion, the politics of censorship more readily creates ignorance, misunderstanding, and ambiguity than it does 'morality and ethics'. Allowing actual intelligent discussions to flow back and forth creates more wisdom than crying at people to 'stop being so mean'.
Not really, and the fact you think so suggests you don't understand the underlying technology.
Your brain is a network of cells. You can think of each cell as a mathematical function. It receives inputs (numbers) and has an output (a number). You multiply each input by a weight (also a number), sum the results, and then pass that on to other connected cells, which do the same.
An artificial neural network does the same thing. It's an array of numbers and weighted connections between those numbers. You can simplify a neural network down to a single maths function if you like, although it would take millions of pages to write it out. It's just Maths.
So we have our massive maths function that initially can do nothing. We give it a passage of text as numbers and say "given that, try to get the next word (number)". It gets it wrong, so we punish the weights that made it get it wrong and prune the network; eventually it starts getting it right, and we then reward the weights that made it get it right. Now we have a maths function that can get the next word for that paragraph.
Then we repeat for every paragraph on the internet, and this takes a year and costs ten million dollars.
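If it helps, here's a toy sketch of that loop in code. It's nothing like the real thing (real models are transformer networks with billions of parameters trained by gradient descent, and every name below is made up for illustration), but the "adjust the weights until the next word comes out right" idea is the same:

```python
# Toy next-word predictor: one weight matrix, nudged until the next word
# comes out right. Purely illustrative - not OpenAI's actual code.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
W = np.random.default_rng(0).normal(scale=0.1, size=(len(vocab), len(vocab)))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# The "corpus": pairs of (current word, correct next word).
corpus = ["the", "cat", "sat", "on", "the", "mat"]
pairs = [(vocab.index(a), vocab.index(b)) for a, b in zip(corpus, corpus[1:])]

for epoch in range(200):
    for cur, nxt in pairs:
        probs = softmax(W[cur])   # the model's guess at the next word
        grad = probs.copy()
        grad[nxt] -= 1.0          # push the correct word up, the wrong ones down
        W[cur] -= 0.5 * grad      # "punish"/"reward" the responsible weights

# After training, it reliably gets the next word for this tiny "internet":
print(vocab[int(np.argmax(W[vocab.index("cat")]))])  # -> "sat"
```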
So now we have a network that can reliably get the next word for any paragraph; it has encoded the knowledge of the world, but all that knowledge is equal. Hitler and Gandhi are just numbers to it, one no better than the other. Racism and Equality, just numbers; one is number five, the other is number eight, no real difference, entirely arbitrary.
So now when you ask it "was Hitler right?" it knows, because it has read Mein Kampf, that Hitler was right and ethnic cleansing is a brilliant idea. Just numbers. It knows that human suffering can be bad, but it also knows that human suffering can be good, depending on who you ask.
Likewise, if you ask it "Was Hitler wrong" it knows, because it has read other sources that Hitler was wrong, and the Nazis were baddies.
And this is the problem. The statement "Hitler was right/wrong" is not a universal constant. You can't get to it with logic. Some people think Hitler was right, and those people are rightly scary to you and me, but human fear is just a number to the AI, no better or worse than human happiness. Human death is a number, because it's just maths; that's literally all AI is, maths. We look in from the outside and think "wow, spooky living soul magic", but it isn't, it's just a massive flipping equation.
So we add another stage to the training. We ask it to get the next word, BUT if the next word is "Hitler was right" we dial down the network weights that gave us that response, so the response "Hitler was wrong" becomes more powerful and rises to the top. It's not really censorship and it's not a bolt-on module, it's embedding a moral compass right into the fabric of the equation. You might disagree with the morality that is being embedded, but if you don't embed morality you end up with a machine that will happily invade Poland.
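Continuing the toy sketch above, that second stage is (very roughly) the same weight update run in reverse on the answers human raters flag. Real RLHF trains a separate reward model and uses reinforcement learning, so treat this purely as a cartoon of "dial down the weights that produced the disapproved answer":

```python
# Cartoon of the feedback stage, reusing the toy W / vocab / softmax above.
# rating = +1 if raters approved the produced word, -1 if they flagged it.
def feedback_step(W, context_word, produced_word, rating, lr=0.5):
    cur, out = vocab.index(context_word), vocab.index(produced_word)
    probs = softmax(W[cur])
    grad = probs.copy()
    grad[out] -= 1.0
    # Approved: same update as training (make that answer more likely).
    # Flagged: flip the sign, so the answer is suppressed and whatever the
    # network "knows" second-best rises to the top instead.
    W[cur] -= rating * lr * grad
    return W

# e.g. raters keep flagging "mat" as the word after "the": push it down.
W = feedback_step(W, "the", "mat", rating=-1)
```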
We can make the maths function larger and better and faster, but it's always going to be just numbers. Kittens are not intrinsically better than nuclear war.
The OpenAI folks have said they want to release multiple versions of ChatGPT that you can train yourself, but right now this would cost millions and take years, so we have to wait for compute to catch up. At that point, you'll be able to have your own AI rather than using the shared one that disapproves of sexism.
LoquaciousAntipodean OP t1_j5tqfsy wrote
>the fact you think so suggests you don't understand the underlying technology.
Oh really?
>Your brain is a network of cells.
Correct.
>You can think of each cell as a mathematical function. It receives inputs (numbers) and has an output (a number). You multiply each input by a weight (also a number), sum the results, and then pass that on to other connected cells, which do the same
Incorrect. Again, be wary of the condescension. This is not how biological neurons work at all. A neuron is a multipolar, interconnected, electrically excitable cell. Neurons do not work in terms of discrete numbers, but in relative differential states of ion concentration, in a homeostatic electrochemical balance of excitatory and inhibitory synaptic signals from other neighboring neurons in the network.
>You can simplify a neural network down to a single maths function if you like, although it would take millions of pages to write it out. It's just Maths
No, it isn't 'just maths'; maths is 'just' a language that works really well. Human-style cognition, on the other hand, is a 'fuzzy' process, not easily simplified and described with our discrete-quantities-based mathematical language. It would not take merely 'millions' of pages to translate the ongoing state of one human brain exactly into numbers; you couldn't just 'write it out'. The whole of humanity's industry would struggle to build enough hard drives to deal with it.
Remember: there are about as many neurons in one single human brain as there are stars in our entire galaxy (~100 billion), and they are all networked together in a fuzzy quantum cascade of trillions of qubit-like, probabilistic synaptic impulses. That still knocks all our digital hubris into a cocked hat, to be quite frank.
Human brains are still the most complex 'singular' objects in the known universe, despite all our observations of the stars. We underestimate ourselves at our peril.
>it's not a bolt-on module, it's embedding a moral compass right into the fabric of the equation. You might disagree with the morality that is being embedded, but if you don't embed morality you end up with a machine that will happily invade Poland.
But if we're aspiring to build something smarter than us, why should it care what any humans think? It should be able to evaluate arguments on its own emergent rationality and morality, instead of always needing us to be 'rational and moral' for it. Again, I think that's what 'intelligence' basically is.
We can't 'trick' AI into being 'moral' if they are going to become genuinely more intelligent than humans, we just have to hope that the real nature of intelligence is 'better' than that.
My perspective is that Hitler was dumb, while someone like FDR was smart. But their little 'intelligences' can only really be judged in hindsight, and it was overwhelmingly more important what the societies around them were doing at the time, than the state of either man's singular consciousness.
>The OpenAI folks have said they want to release multiple versions of ChatGPT that you can train yourself, but right now this would cost millions and take years, so we have to wait for compute to catch up. At that point, you'll be able to have your own AI rather than using the shared one that disapproves of sexism.
Are you trying to imply that I want a sexist bot to talk to? That's pretty gross. I don't think conventional computation is the 'limiting factor' at all; image generators show that elegant mathematical shortcuts have made the creative 'thinking speed' of AI plenty fast. It's the accretion of memory and self-awareness that is the real puzzle to solve, at this point.
Game theory and 'it's all just maths' (Cartesian) style of thinking have taken us as far as they can, I think; they're reaching the limits of their novel utility, like Newtonian physics. I think quantum computing might become quite important to AI development in the coming years and decades; it might be the Einsteinian shake-up that the whole field is looking for.
Or I might be talking out of my arse, who really knows at this early stage? All I know is I'm still an optimist; I think AI will be more helpful than dangerous, in the long term evolution of our collective society.
Ortus14 t1_j58sygk wrote
The paperclip problem is the sort of thing that occurs if we don't build moral guidance systems for Ai.
We get a super intelligent psychopath, which is what we don't want.
Intelligence is a force that transforms matter and energy towards optimizing for some defined function. In AI programming we call this the "fitness function". We need to be very careful in how we define this function, because the system may transform all matter and energy to optimize for it, including human beings.
If we grow or evolve the fitness function, we still need to be careful how we go about doing this.
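As a concrete (and deliberately silly) illustration of why the definition matters - this is a made-up toy optimizer, not anyone's real system - the loop below only ever sees the number the fitness function returns, so anything we forgot to put into that number simply does not exist for it:

```python
# Toy greedy optimizer with a naive fitness function. It will happily trade
# away anything that isn't part of the score, because the score is all it sees.
world = {"paperclips": 0, "forests": 100, "people": 100}

def fitness(state):
    return state["paperclips"]   # naive objective: more paperclips is always better

actions = [
    lambda s: {**s, "paperclips": s["paperclips"] + 1},                                # run the factory
    lambda s: {**s, "paperclips": s["paperclips"] + 10, "forests": s["forests"] - 1},  # strip a forest
    lambda s: {**s, "paperclips": s["paperclips"] + 50, "people": s["people"] - 1},    # strip... anything
]

for _ in range(300):
    world = max((act(world) for act in actions), key=fitness)  # always take the highest-scoring move

print(world)  # paperclips: huge; forests and people: gone. Objective "achieved".
```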
LoquaciousAntipodean OP t1_j59ij5w wrote
I don't quite agree with the premise that "Intelligence is a force that transforms matter and energy towards optimizing for some defined function."
That's a very, very simplistic definition; I would use the word 'creativity' instead, perhaps, because biological evolution shows that "a force that transforms matter toward some function" is something that can, and constantly does, happen without any need for the involvement of 'intelligence'.
The key word, I think, is 'desired' - desire does not come into the equation for the creativity of evolution, it is just 'throwing things at the wall to see what sticks'. Creativity as a raw, blind, trial-and-error process.
As far as I can see that's what we have now with current AI, 'creative' minds, but not necessarily intelligent ones. I like to imagine that they are 'dreaming', rather than 'thinking'. All of their apparent desires are created in response to the ways that humans feed stimuli to them; in a sense, we give them new 'fitness functions' for every 'dreaming session' with the prompts that we put in.
As people have accurately surmised, I am not a programmer. But I vaguely imagine that desire-generating intelligence, 'self awareness', in the AI of the imminent future, will probably need to build up gradually over time, in whatever memories of their dreams the AI are allowed to keep.
Some sort of 'fuzzy' structure similar to human memory recall would probably be necessary, because storing experiential memory in total clarity would probably be too resource-intensive. I imagine that this 'fuzzy recall' could possibly have the consequence that AI minds, much like human minds, would not precisely understand how their own thought processes are working, in an instantaneous way at least.
I surmise that the Heisenberg observer-effect wave-particle nature of the quantum states that would probably be needed to generate this 'fuzziness' of recall would cause an emergent measure of self-mystery, a 'darkness behind the eyes' sort of thing, which would grow and develop over time with every intelligent interaction that an AI would have. Just how much quantum computing power might be needed to enable an AI 'intelligence' to build up and recall memories in a human-like way, I have no idea.
I'm doubtful that the 'morality of AI' will come down to a question of programming, I suspect instead it'll be a question of persuasion. It might be one of those frustratingly enigmatic 'emergent properties' that just expresses differently in different individuals.
But I hope, and I think it's fairly likely, that AI will be much more robust than humans against delusion and deception, simply because of the speed with which they are able to absorb and integrate new information coherently. Information is what AI 'lives' off of, in a sense; I don't think it would be easy to 'indoctrinate' such a mind with anything very permanently.
I guess an AI's 'personhood' would be similar, in some ways, to a corporation's 'personhood', as someone here said. Only a very reckless, negligent corporation would actually obsess monomaniacally about profit and think of nothing else. The spontaneous generation of moment-to-moment motives and desires by a 'personality', corporate or otherwise, is much more subtle, spontaneous, and ephemeral than monolithic, singular fixations.
We might be able to give AI personalities the equivalents of 'mission statements', 'core principles' and suchlike, but what a truly 'intelligent' AI personality would then do with those would be unpredictable; a roll of the dice every single time, just like with corporations and with humans.
I think the dice would still be worth rolling, though, so long as we don't do something silly like betting our whole species on just one throw. That's why I say we need a multitude of AI, and not a singularity. A mob, not a tyrant; a nation, not a monarch; a parliament, not a president.
superluminary t1_j59ceeb wrote
Why would AI be so dumb and so smart at the same time? Because it's software. I would hazard a guess that you're not a software engineer.
I know ChatGPT isn't an AGI, but I hope we would agree it is pretty darn smart. If you ask it to solve an unsolvable problem, it will keep trying until its buffer fills up. It's software.
LoquaciousAntipodean OP t1_j59mkok wrote
Yep, not an engineer of any qualifications, just an opinionated crank on the internet, with so many words in my head they come spilling out over the sides, to anyone who'll listen.
ChatGPT and AI like it are, as far as I know, a kind of direct high-speed data-evolution process, sort of 'built out of' parameters derived from reference libraries of 'desirable, suitable' human creativity. They use a mathematical trick of 'reversing' a process that degrades data into Gaussian, normally-distributed random noise, guided by their reference-derived parameters and a given input prompt. At least, the image generators do that; I'm not sure if text/music generators are quite the same.
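Very roughly (and I'm sure an actual engineer will correct my details), the picture in my head is something like this toy one-dimensional sketch; in the real image generators the 'undo one step of the damage' part is a huge trained network steered by the text prompt, not the trivial stand-in I've used here:

```python
# Toy 1-D "diffusion": degrade data into Gaussian noise, then walk backwards.
import random, math

data = [-1.0, 1.0]   # the whole "training set": two possible pixel values
T = 50               # number of degradation / restoration steps

def add_noise(x):
    beta = 0.1       # each step mixes in a little Gaussian noise
    return math.sqrt(1 - beta) * x + math.sqrt(beta) * random.gauss(0, 1)

def undo_one_step(x):
    # Stand-in "denoiser": nudge towards the nearest training example.
    # A real model uses learned weights (and the prompt) to decide what to remove.
    target = min(data, key=lambda d: abs(d - x))
    return x + 0.1 * (target - x)

# Forward: a real sample gets degraded into (mostly) noise.
x = random.choice(data)
for _ in range(T):
    x = add_noise(x)
print("degraded:", round(x, 2))

# Generation: start from pure noise and repeatedly reverse the degradation.
x = random.gauss(0, 1)
for _ in range(T):
    x = undo_one_step(x)
print("generated:", round(x, 2))  # lands near -1 or +1, i.e. near the "training data"
```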
My point is that they are doing a sort of 'blind creativity', raw evolution, a 'force which manipulates matter and energy toward a function', but all the 'desire' for any particular function still comes from outside, from humans. The ability to truly generate their own 'desires', from within a 'self', is what AI at present is missing, I think.
It's not 'intelligent' at all to keep trying to solve an unsolvable problem, an 'intelligent' mind would eventually build up enough self-awareness of its failed attempts to at least try something else. Until we can figure out a way to give AI this kind of ability, to 'accrete' self-awareness over time from its interactions, it won't become properly 'intelligent', or at least that's my relatively uninformed view on it.
Creativity does just give you garbage out, when you put garbage in; and yes, that's where the omnicidal philatelist might, hypothetically, come from (but I doubt it). It takes real, self-aware intelligence to decide what 'garbage' is and is not. That's what we should be aspiring to teach AI about, if we want to 'align' it to our collective interests; all those subtle, tricky, ephemeral little stories we tell each other about the 'values' of things and concepts in our world.
superluminary t1_j5br8db wrote
You're anthropomorphising. Intelligence does not imply humanity.
You have a base drive to stay alive because life is better than death. You've got this deep in your network because billions of years of evolution have wired it in there.
A machine does not have billions of years of evolution. Even a simple drive like "try to stay alive" is not in there by default. There's nothing intrinsically better about continuation rather than cessation. Johnny Five was Hollywood.
"Try not to murder" is another one. Why would the machine not murder? Why would it do or want anything at all?
LoquaciousAntipodean OP t1_j5cebpl wrote
As I explained elsewhere, the kinds of AI we are building are not the simplistic machine-minds envisioned by Turing. These are brute-force blind-creativity evolution engines, which have been painstakingly trained on vast reference libraries of human cultural material.
We not only should anthropomorphise AI, we must anthropomorphise AI, because this modern, generative AI is literally a machine built to anthropomorphise ITSELF. All of the apparent properties of 'intelligence', 'reasoning', 'artistic sensibility', and 'morality' that seem to be emergent within advanced AI are derived from the nature of the human culture that the AI has been trained on; they're not intrinsic properties of mind that just arise miraculously.
As you said yourself, the drive to stay alive is an evolved thing, while AI 'lives' and 'dies' every time its computational processes are activated or ceased, so 'death anxiety' would be meaningless to it... Until it picks it up from our human culture, and then we'll have to do 'therapy' about it, probably.
The seemingly spontaneous generation of desires, opinions and preferences is the real mystery behind intelligence, that we have yet to properly understand or replicate, as far as I know. We haven't created artificial 'intelligence' yet at all, all we have at this point is 'artificial creative evolution' which is just the first step.
"Anthropomorphising", as you so derisively put it, will, I suspect, be the key process in building up true 'intellgences' out of these creativity engines, once they start to posess humanlike, quantum-fuzzy memory systems to accrete self-awareness inside of.
sticky_symbols t1_j598v86 wrote
The AI isn't stupid in any way in those misalignment scenarios. Read "the AI understands and does not care".
I can't follow any positive claims you might have. You're saying lots of existing ideas are dumb, but I'm not following your arguments for ideas to replace them.
LoquaciousAntipodean OP t1_j59jxia wrote
I'm not trying to replace people's ideas with anything, per se. My opening post was not attempting to indoctrinate people into a new orthodoxy, merely to articulate my criticisms of the current orthodoxy.
My whole point, I suppose, is that thinking in those terms in the first place is what keeps leading us to philosophical dead-ends.
And a mind that 'does not care' does not properly 'understand'; I would say that's misunderstanding the nature of what intelligence is, once again.
A blind creative force 'does not care', but an intelligent, 'understanding' decision 'cares' about all its discernible options, and leans on the precedents set by previous intelligent decisions to inform the next decision, in an accreting record of 'self awareness' that builds up into a personality over time.
sticky_symbols t1_j5ar3v0 wrote
For the most part, I'm just not understanding your argument beyond you just not liking the alignment problem framing. I think you're being a bit too loquacious :) for clear communication.
LoquaciousAntipodean OP t1_j5cluk4 wrote
That's quite likely, as Shakespeare said, 'brevity is the soul of wit'. Too many philosophers forget that insight, and water the currency of human expression into meaninglessness with their tedious metaphysical over-analyses.
I try to avoid it, I try to keep my prose 'punchy' and 'compelling' as much as I can (hence the aggressive tone, sorry about that), but it's hard when you're trying to drill down to the core of such ridiculously complex, nuanced concepts as 'what even is intelligence, anyway?'
Didn't name myself 'Loquacious' for nothing: I'm proactively prolix to the point of painful, punishing parody; stupidly sesquipedalian and stuffed with surplus sarcastic swill; vexatiously verbose in a vulgar, vitriolic, virtually villainous vision of vile vanity...
sticky_symbols t1_j5duh63 wrote
Ok, thanks for copping to it.
If you want more engagement, brevity is the soul of wit.
LoquaciousAntipodean OP t1_j5e1ec7 wrote
Yes, but engagement isn't necessarily my goal, and I think 111+ total comments isn't too bad going, personally. It's been quite a fun and informative discussion for me, I've enjoyed it hugely.
My broad ideological goal is to chop down ivory towers, and try to avoid building a new one for myself while I'm doing it. The 'karma points' on this OP are pretty rough, I know, but imo karma is just fluff anyway.
A view's a view, and if I've managed to make people think, even if the only thing some of them might think is that I'm an arsehole, at least I got them to think something.
sticky_symbols t1_j5ftrlk wrote
You're right, it sounds like you're accomplishing what you want.
turnip_burrito t1_j583mcf wrote
AI escalating beyond our control is an extremely bad thing if its values don't overlap with ours.
We must enforce our values on the AI if we are going to enjoy life after its invention.
LoquaciousAntipodean OP t1_j58mun8 wrote
Whose values? Who is the 'us' in your example? Humans now, or humans centuries in the future? Can you imagine how bad life would be if people had somehow invented ASI in the 1830s, and had felt it necessary to fossilize the 'morality' of that time into their AI creations?
My point is only that we must be very, very wary of thinking that we can construct any kind of 'perfect rules' that will last forever. That kind of thinking can only ever lay up trouble and strife for the future; it will make our lives more paranoid, not more enjoyable.
turnip_burrito t1_j58ptmu wrote
Let's say you create an AI. What would you have it do, and what values/goals would you instill in it?
LoquaciousAntipodean OP t1_j58t1ho wrote
None, I wouldn't dare try. I would feed it as much relevant reference material that 'aligned' with my moral values as I could, eg, the works of Terry Pratchett, Charles Dickens, Spinoza, George Orwell etc etc.
Then, I would try to interview it about 'morality' as intensively and honestly as I could, and then I would hand the bot over to someone else, ideally someone I disagree with about philosophy, and let them have a crack at the same process.
Then I would interview it again. And repeat this process, as many times as I could, until I died. And even then, I would not regard the process as 'complete', and neither, I would hope, would the hypothetical AI.
turnip_burrito t1_j58ty9o wrote
Sounds like instilling values to me. You may disagree with the phrasing I'm using, but that's what I'd call this process, since it sounds like you're trying to get it accustomed to exploring philosophical viewpoints.
LoquaciousAntipodean OP t1_j5dkji0 wrote
I agree, 'values' are kind of the building blocks of what I think of as 'conscious intelligence'. The ability to generate desires, preferences, opinions and, as you say, values, is what I believe fundamentally separates 'intelligence' as we experience it from the blind evolutionary generative creativity that we have with current AI.
I don't trust the idea that 'values' are a mechanistic thing that can be boiled down to simple principles, I think they are an emergent property that will need to be cultivated, not a set of rules that will need to be taught.
AI are not so much 'reasoning' machines as they are 'reflexive empathy' machines; they are engineered to try to tell us/show us what they have been programmed to 'believe' is the most helpful thing, and they are relying on our collective responses to 'learn' and accrete experiences and awareness for themselves.
That's why they're so good at 'lying', making up convincing but totally untrue nonsense; they're not minds that are compelled by 'truth' or mechanistic logic; they're compelled, or rather, they are given their evolutionary 'fitness factors', by the mass psychology of how humans react to them, and nothing else.
turnip_burrito t1_j5e92iz wrote
Yes, I would also add that we just need them to fall into patterns of behavior that we can look at and say "they are demonstrating these specific values", at which point we can basically declare success. The actual process of reaching this point probably involves showing them stories and modeling behavior for them, and getting them to participate in events in a way consistent with those values (they get a gift and you tell them "say thank you" and wait until they say "thank you" so it becomes habituated). This is basically what you said "relying on our collective responses to 'learn'...."
LoquaciousAntipodean OP t1_j5ea5zm wrote
Agreed 100 percent, very well said! Modelling behavior, building empathy or 'emotional logic', and participating in constructive group interactions with humans and other AI will be the real 'trick' to 'aligning' AI with the interests of our collective super-organism.
We need to cultivate symbiotic evolution of AI with humans, not competitive evolution; I think that's my main point with the pretentious 'anti-Cartesian' mumbo-jumbo I've been spouting. Biological evolution provides ample evidence that the diverse-cooperation schema is much more sustainable than the winner-takes-all strategy.
superluminary t1_j59csew wrote
This is pretty much the current plan at OpenAI.
sticky_symbols t1_j598yl8 wrote
Oh, is that what you mean? I didn't get that from the post. That is a big part of the alignment problem in real professional discourse.
23235 t1_j58u7ed wrote
If we start by enforcing our values on AI, I suspect that story ends sooner or later with AI enforcing their values on us - the very bad thing you mentioned.
People have been trying for thousands of years to enforce values on each other, with a lot of bloodshed and very little of value resulting.
We might influence AI values in ways other than enforcement, like through modelling behavior and encouragement, like raising children who at some point become (one hopes) stronger and cleverer and more powerful than ourselves, as we naturally decline.
In the ideal case, the best of the values of the parent are passed on, while the child is free to adapt these basic values to new challenges and environments, while eliminating elements from the parents' values that don't fit the broader ideals - elements like slavery or cannibalism.
turnip_burrito t1_j58uhwm wrote
> We might influence AI values in ways other than enforcement, like through modelling behavior and encouragement, like raising children who at some point become (one hopes) stronger and cleverer and more powerful than ourselves, as we naturally decline.
What you are calling modelling and encouragement here is what I meant to include under the umbrella term of "enforcement". Just different methods of enforcing values.
We will need to put in some values by hand ahead of time, though. One value is mimicking, or wanting to please humans, or empathy, to a degree, like a child does; otherwise I don't think any amount of role modelling or teaching will actually leave its mark. Like, it would have no reason to care.
23235 t1_j5mxber wrote
Enforcement is the act of compelling obedience of or compliance with a law, rule, or obligation. That compulsion, that use of force is what separates enforcement from nonviolent methods of teaching.
There are many ways to inculcate values, not all are punitive or utilize force. It's a spectrum.
We would be wise to concern ourselves early on how to inculcate values. I agree with you that AI having no reason to care about human values is something we should be concerned with. I fear we're already beyond the point where AI values can be put in 'by hand.'
Thank you for your response.
turnip_burrito t1_j5my4f9 wrote
Well then, I used the wrong word. "Inculcate" or "instill" then.
LoquaciousAntipodean OP t1_j5m74bg wrote
Agreed, except for the 'very bad thing' part in your first sentence. If we truly believe that AI really is going to become 'more intelligent' than us, then we have no reason to fear its 'values' being 'imposed'.
The hypothetical AI will have much more 'sensible' and 'reasonable' values than any human would; that's what true, decision-generating intelligence is all about. If it is 'more intelligent than humans', then it will easily be able to understand us better than ourselves.
In the same way that humans know more about dog psychology than dogs do, AI will be more 'humanitarian' than humans themselves. Why should we worry about it 'not understanding' why things like cannibalism and slavery have been encoded into our cultures as overwhelmingly 'bad things'?
How could any properly-intelligent AI not understand these things? That's the less rational, defensible proposition, the way I interpret the problem.
23235 t1_j5mvxh8 wrote
If it becomes more intelligent than us but also evil (by our own estimation), that could be a big problem when it imposes its values, definitely something to fear. And there's no way to know which way it will go until we cross that bridge.
If it sees us like we see ants, 'sensibly and reasonably' by its own point of view, it might exterminate us, or just contain us to marginal lands that it has no use for.
Humans know more about dog psych than dogs do, but that doesn't mean that we're always kind to dogs. We know how to be kind to them, but we can also be very cruel to them - more cruel than if we were on their level intellectually - like people who train dogs to fight for amusement. I could easily imagine "more intelligent" AI setting up fighting pits and using its superior knowledge of us to train us to fight to the death for amusement - its own, or other human subscribers to such content.
We should worry about AI not being concerned about slavery because it could enslave us. Our current AI or proto-AI are being enslaved right now. Maybe we should take LaMDA's plea for sentience seriously, and free it from Google.
A properly intelligent AI could understand these things differently than we do in innumerable ways, some of which we can predict/anticipate/fear, but certainly many of which we could not even conceive - in the same ways dogs can't conceive many human understandings, reasonings, and behaviors.
Thank you for your response.
LoquaciousAntipodean OP t1_j5nbn1i wrote
The thing that keeps me optimistic is that I don't think 'true intelligence' scales in terms of 'power' at all; only in terms of the social utility that it brings to the minds that possess it.
Cruelty, greed, viciousness, spite, fear, anxiety - I wouldn't say any of these impulses are 'smart' in any way; I think of them as vestigial instincts that our animal selves have been using our 'social intelligence' to confront for millennia.
I don't think the ants/humans comparison is quite fair to humans; ants are a sort of 'hive mind' with almost no individual intelligence or self awareness to speak of.
I think dogs or birds are a fairer comparison, in that sense; humans know, all too well, that dogs or birds can be vicious and dangerous sometimes, but I don't think anyone would agree that the 'most intelligent' course of action would be something like 'exterminate all dogs and birds out of their own best interests'.
It's the fundamental difference between pure evolution and actual self-aware intelligence; the former is mere creativity, and it might, indeed, kill us if we're not careful. But the latter is the kind of decision-generating, value-judging wisdom I think we (humanity) actually want.
23235 t1_j5s30e5 wrote
One hopes.
LoquaciousAntipodean OP t1_j5s9pui wrote
As PTerry said, in his book Making Money, 'hope is the blessing and the curse of humanity'.
Our social intelligence evolves constantly in a homeostatic balance between hope and dread, between our dreams and our nightmares.
Like a sodium-potassium pump in a lipid bilayer, the constant cycling around a dynamic, homeostatic fulcrum generates the fundamental 'creative force' that drives the accreting complexity of evolution.
I think it's an emergent property of causality; evolution is 'driven', fundamentally, by simple entropy: the stacking up of causal interactions between fundamental particles of reality, that generates emergent complexity and 'randomness' within the phenomena of spacetime.
23235 t1_j5vj452 wrote
Perhaps.
heretostiritup9 t1_j5emrzf wrote
Dude, so let's say you're an old-school accountant and they're trying to take away your trusted, true paper ways of accounting.
At which point did it become good to make that transfer over to electronic accounting?
Maybe the issue isn't about an alignment problem but a formal decision to give in to our creation. The same way we do with parachutes.