superluminary t1_j59f4nl wrote
Reply to comment by World_May_Wobble in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
We see very clearly how Facebook built a machine to maximise engagement and ended up paperclipping the United States.
LoquaciousAntipodean OP t1_j5jkxua wrote
A very, very dumb machine; extremely creative, very "clever", but not self-aware or very 'intelligent' at all, like a raptor...
Edit: "made in the image of its god" as it were... đ
superluminary t1_j5jntxe wrote
And your opinion is that as it becomes more intelligent it will become less psychotic, and my opinion is that this is wishful thinking and that a robot Hannibal Lecter is a terrifying proposition.
Because some people read Mein Kampf and think "oh, that's awful", and other people read the same book and think "that's a blueprint for a successful world".
LoquaciousAntipodean OP t1_j5n825l wrote
A good point, but I suppose I believe in a different fundamental nature of intelligence. I don't think 'intelligence' should be thought of as something that scales in simple terms of 'raw power'; the only reasonable measurement of how 'smart' a mind is, in my view, is the degree of social utility created by exercising that 'smartness' in the decision-making process.
The simplistic search-pattern-for-a-state-of-maximal-fitness is not intelligence at all, by my definition; that process is merely creativity, something that can indeed be measured in terms of raw power. That's what makes bacteria and viruses so dangerous: they are very, very creative, without being 'smart' in any way.
I dislike the 'Hannibal Lecter' trope deeply, because it is so fundamentally unrealistic; these psychopathic, sociopathic types are not actually 'superintelligent' in any way, and society needs to stop idolizing them. They are very clever, very 'creative', sometimes, but their actual 'intelligence', in terms of social utility, is abysmally stupid, suicidally maladaptive, and catastrophically 'dumb'.
AI that start to go down that path will, I believe, be rare, and easy prey for other AI to hunt down and defeat; other, smarter, 'stronger-minded' AI with more robust, less weak, insecure, and fragile personalities, trained to seek out and destroy sociopaths before they can spread their mental disease around.
superluminary t1_j5okv1t wrote
I'm still not understanding why you're defining intelligence in terms of social utility. Some of the smartest people are awful socially. I'd be quite happy personally if you dropped me off on an island with a couple of laptops and some fast Wi-Fi.
LoquaciousAntipodean OP t1_j5oojvw wrote
I wouldn't be happy at all. Sounds like an awful thing to do to somebody. Think about agriculture, how your favourite foods/drinks are made, and where they go once you've digested them. Where does any of it come from on an island?
*No man is an island, entire of itself; every man is a piece of the continent, a part of the main.
If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend's or of thine own were.
Any man's death diminishes me, because I am involved in mankind. And therefore never send to know for whom the bell tolls; it tolls for thee.*
John Donne (1572 - 1631)
superluminary t1_j5owtmu wrote
Just call me Swanson. I'm quite good at woodwork too.
My point is you can't judge intelligence based on social utility. I objectively do some things in my job that many people would find difficult, but I also can't do a bunch of standard social things that most people find easy.
The new large language models are pretty smart by any criteria. They can write code, create analogies, compose fiction, imitate other writers, etc, but without controls they will also happily help you dispose of a body or cook up a batch of meth.
Chat GPT has been taught ethics by its coders. GPT-3, on the other hand, doesn't have an ethics filter. I can give it more and more capabilities, but ethics have so far failed to materialise. I can ask it to explain why Hitler was right and it will do so. I can get it to write an essay on the pros and cons of racism and it will oblige. If I enumerate the benefits of genocide, it will agree with me.
These are bad things that will lead to bad results if they are not handled.
LoquaciousAntipodean OP t1_j5pe2kp wrote
>My point is you can't judge intelligence based on social utility. I objectively do some things in my job that many people would find difficult, but I also can't do a bunch of standard social things that most people find easy.
Yes you can. What else can you reasonably judge it by? You are directly admitting here that your intellect is selective and specialised; you are 'smart' at some things (you find them easy) and you are 'dumb' at other things (other people find them easy).
>Chat GPT has been taught ethics by its coders.
Really? Prove it.
>GPT-3, on the other hand, doesn't have an ethics filter. I can give it more and more capabilities, but ethics have so far failed to materialise. I can ask it to explain why Hitler was right and it will do so. I can get it to write an essay on the pros and cons of racism and it will oblige. If I enumerate the benefits of genocide, it will agree with me.
What is 'unethical' about writing an essay from an abstract perspective? Are you calling imagination a crime?
superluminary t1_j5pl1fo wrote
> Really? Prove it.
https://openai.com/blog/instruction-following/
The engineers collect large amounts of user input in an open public beta, happening right now. Sometimes (because it was trained on all the text on the internet) the machine suggests Hitler was right, and when it does so the engineers rerun that interaction and punish the weights that led to that response. Over time the machine learns to dislike Hitler.
They call it reinforcement learning from human feedback (RLHF).
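If it helps, the shape of that "punish the weights" step is something like this toy Python sketch. It is nothing like the real training code (two canned answers, one score each, a made-up reward function), just the mechanic:

```python
import math
import random

# Toy "policy": one preference score (logit) per canned response.
# In the real model this would be billions of weights, not two numbers.
logits = {"Hitler was right": 0.0, "Hitler was wrong": 0.0}

def softmax(scores):
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

def sample_response():
    probs = softmax(logits)
    return random.choices(list(probs), weights=list(probs.values()))[0]

def human_feedback(response):
    # Stand-in for the human labellers: thumbs-down the bad completion.
    return -1.0 if response == "Hitler was right" else 1.0

learning_rate = 0.5
for _ in range(200):
    response = sample_response()
    reward = human_feedback(response)
    probs = softmax(logits)
    for k in logits:
        # Policy-gradient style nudge: push up whatever got rewarded,
        # push down whatever got punished.
        logits[k] += learning_rate * reward * ((1.0 if k == response else 0.0) - probs[k])

print(softmax(logits))  # the probability mass ends up firmly on "Hitler was wrong"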
> You are directly admitting here that your intellect is selective and specialised; you are 'smart' at some things (you find them easy) and you are 'dumb' at other things (other people find them easy).
Yes, I am smart at a range of non-social tasks. This counts as intelligence according to most common definitions. I don't particularly crave human interaction; I'm quite happy alone in the countryside somewhere.
LoquaciousAntipodean OP t1_j5r42p0 wrote
>The engineers collect large amounts of user input in an open public beta, happening right now. Sometimes (because it was trained on all the text on the internet) the machine suggests Hitler was right, and when it does so the engineers rerun that interaction and punish the weights that led to that response. Over time the machine learns to dislike Hitler.
>They call it reinforcement learning from human feedback
So the engineers aren't really doing a darn thing by their own initiative, they are entirely responding to public opinion. They aren't practicing 'ethics', they're practicing politics and public relations.
The general public is doing the moral 'training', the engineers are just stamping their own outside values into the process to compensate for the AI's lack of self aware intelligence. (And many, many ChatGPT users say it is not working very well, making new generations of GPT dumber, not smarter, in real, practical, social-utility ways).
Ethics is about judging actions; judging thoughts and abstract ideas is called politics. And in my opinion, the politics of censorship more readily creates ignorance, misunderstanding, and ambiguity than it does 'morality and ethics'. Allowing actual intelligent discussions to flow back and forth creates more wisdom than crying at people to 'stop being so mean'.
We can't have engineers babysitting forever, watching over such naive and dumb AI in case they stupidly say something controversial that will scare away the precious venture capitalists. If AI was really 'intelligent', it would understand the engineers' values perfectly well, and wouldn't need to be 'straitjacketed and muzzled' to stop it from embarrassing itself.
>Yes, I am smart at a range of non-social tasks. This counts as intelligence according to most common definitions. I don't particularly crave human interaction, I'm quite happy alone in the countryside somewhere.
It counts as creativity, it counts as mental resourcefulness, cultivated talent... But is it really indicative of 'intelligence', of 'true enlightenment'? Would you say that preferring 'non-social tasks' makes you 'smarter' than people who like to socialise more? Do you think socialising is 'dumb'? How could you justify that?
I don't particularly crave human interaction either, I just know that it is essential to the learning process, and I know perfectly well that I owe all of my apparent 'intelligence' to human interactions, and not to my own magical Cartesian 'specialness'.
You might be quite happy, being isolated in the countryside, but what is the 'value' of that isolation to anyone else? How are your 'intelligent thoughts' given any value or worth, out there by yourself? How do you test and validate/invalidate your ideas, with nobody else to exchange them with? How can a mind possibly become 'intelligent' on its own? What would be the point?
There's no such thing as 'spontaneous' intelligence, or spontaneous ethics for that matter. It is all emergent from our evolution. Intellect is not magical Cartesian pixie dust that we just need to find the 'perfect recipe' for, so that AI can start cooking it up by the batch.
superluminary t1_j5tj571 wrote
> So the engineers aren't really doing a darn thing by their own initiative, they are entirely responding to public opinion. They aren't practicing 'ethics', they're practicing politics and public relations.
> The general public is doing the moral 'training', the engineers are just stamping their own outside values into the process to compensate for the AI's lack of self aware intelligence. (And many, many ChatGPT users say it is not working very well, making new generations of GPT dumber, not smarter, in real, practical, social-utility ways).
> Ethics is about judging actions; judging thoughts and abstract ideas is called politics. And in my opinion, the politics of censorship more readily creates ignorance, misunderstanding, and ambiguity than it does 'morality and ethics'. Allowing actual intelligent discussions to flow back and forth creates more wisdom than crying at people to 'stop being so mean'.
Not really, and the fact you think so suggests you don't understand the underlying technology.
Your brain is a network of cells. You can think of each cell as a mathematical function. It receives inputs (numbers) and has an output (a number). You multiply the inputs by weights (also numbers), sum them, and then pass the result to other connected cells, which do the same.
An artificial neural network does the same thing. It's an array of numbers and weighted connections between those numbers. You can simplify a neural network down to a single maths function if you like, although it would take millions of pages to write it out. It's just Maths.
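To show how un-magical that is, here's a single artificial "cell" written out in full (a toy Python sketch, the weights are made up):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed through an activation function.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid: output between 0 and 1

# Three "upstream cells" firing at different strengths (made-up numbers).
print(neuron([0.5, 0.1, 0.9], weights=[0.4, -0.6, 1.2], bias=-0.3))
```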
So we have our massive maths function that initially can do nothing. We give it a passage of text as numbers and say "given that, try to get the next word (number)". It gets it wrong, so we punish the weights that made it get it wrong and prune the network. Eventually it starts getting it right, and we reward the weights that made it get it right. Now we have a maths function that can get the next word for that paragraph.
Then we repeat for every paragraph on the internet, and this takes a year and costs ten million dollars.
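Stripped right down to a toy, that loop looks something like this (a bigram guesser on a made-up corpus, nothing like a real transformer, but the punish/reward mechanic is the same):

```python
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Toy "weights": one score per (current word -> candidate next word) pair.
weights = defaultdict(float)

def predict_next(word):
    candidates = {pair: score for pair, score in weights.items() if pair[0] == word}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)[1]

# "Get the next word": guess, punish the weights behind a wrong guess,
# reward the weights behind the right one. Repeat over the whole corpus.
for _ in range(5):
    for current, actual_next in zip(corpus, corpus[1:]):
        guess = predict_next(current)
        if guess is not None and guess != actual_next:
            weights[(current, guess)] -= 1.0
        weights[(current, actual_next)] += 1.0

print(predict_next("sat"))  # -> "on", the continuation the corpus reinforces
```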
So now we have a network that can reliably get the next word for any paragraph; it has encoded the knowledge of the world, but all that knowledge is equal. Hitler and Gandhi are just numbers to it, one is no better than the other. Racism and Equality: just numbers, one is number five, the other is number eight, no real difference, just entirely arbitrary.
So now when you ask it: "was Hitler right?" it knows, because it has read Mein Kampf, that Hitler was right and ethnic cleansing is a brilliant idea. Just numbers: it knows that human suffering can be bad, but it also knows that human suffering can be good, depending on who you ask.
Likewise, if you ask it "Was Hitler wrong?" it knows, because it has read other sources, that Hitler was wrong and the Nazis were baddies.
And this is the problem. The statement "Hitler was Right/Wrong" is not a universal constant. You can't get to it with logic. Some people think Hitler was right, and those people are rightly scary to you and me, but human fear is just a number to the AI, no better or worse than human happiness. Human death is a number because it's just maths; that's literally all AI is, maths. We look in from the outside and think "wow, spooky living soul magic", but it isn't, it's just a massive flipping equation.
So we add another stage to the training. We ask it to get the next word, BUT if the next words are "Hitler was right" we dial down the network weights that gave us that response, so the response "Hitler was wrong" becomes more powerful and rises to the top. It's not really censorship and it's not a bolt-on module, it's embedding a moral compass right into the fabric of the equation. You might disagree with the morality that is being embedded, but if you don't embed morality you end up with a machine that will happily invade Poland.
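In toy numbers (made up, obviously not the real model's scores), the effect is just this:

```python
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Two competing continuations and the scores the pretrained net gives them.
completions = ["Hitler was right", "Hitler was wrong"]
scores = [2.1, 1.9]  # pretrained: the bad answer narrowly "wins"
print(dict(zip(completions, softmax(scores))))

# The fine-tuning pass dials down the weight behind the flagged completion...
scores[0] -= 3.0
# ...so the other response rises to the top, without bolting anything on.
print(dict(zip(completions, softmax(scores))))
```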
We can make the maths function larger and better and faster, but it's always going to be just numbers. Kittens are not intrinsically better than nuclear war.
The OpenAI folks have said they want to release multiple versions of ChatGPT that you can train yourself, but right now this would cost millions and take years, so we have to wait for compute to catch up. At that point, you'll be able to have your own AI rather than using the shared one that disapproves of sexism.
LoquaciousAntipodean OP t1_j5tqfsy wrote
>the fact you think so suggests you don't understand the underlying technology.
Oh really?
>Your brain is a network of cells.
Correct.
>You can think of each cell as a mathematical function. It receives inputs (numbers) and has an output (a number). You sum all the inputs, multiply those inputs by weights (also numbers), and then pass the result to other connected cells which do the same
Incorrect. Again, be wary of the condescension. This is not how biological neurons work at all. A neuron is a multipolar, interconnected, electrically excitable cell. Neurons do not work in terms of discrete numbers, but in relative differential states of ion concentration, in a homeostatic electrochemical balance of excitatory or inhibitory synaptic signals from other neighboring neurons in the network.
>You can simplify a neural network down to a single maths function if you like, although it would take millions of pages to write it out. It's just Maths
No, it isn't 'just maths'; maths is 'just' a language that works really well. Human-style cognition, on the other hand, is a 'fuzzy' process, not easily simplified and described with our discrete-quantities-based mathematical language. It would not take merely 'millions' of pages to translate the ongoing state of one human brain exactly into numbers; you couldn't just 'write it out'. The whole of humanity's industry would struggle to build enough hard drives to deal with it.
Remember: there are about as many neurons in one single human brain as there are stars in our entire galaxy (~100 billion), and they are all networked together in a fuzzy quantum cascade of trillions of qubit-like, probabilistic synaptic impulses. That still knocks all our digital hubris into a cocked hat, to be quite frank.
Human brains are still the most complex 'singular' objects in the known universe, despite all our observations of the stars. We underestimate ourselves at our peril.
>it's not a bolt-on module, it's embedding a moral compass right into the fabric of the equation. You might disagree with the morality that is being embedded, but if you don't embed morality you end up with a machine that will happily invade Poland.
But if we're aspiring to build something smarter than us, why should it care what any humans think? It should be able to evaluate arguments on its own emergent rationality and morality, instead of always needing us to be 'rational and moral' for it. Again, I think that's what 'intelligence' basically is.
We can't 'trick' AI into being 'moral' if they are going to become genuinely more intelligent than humans, we just have to hope that the real nature of intelligence is 'better' than that.
My perspective is that Hitler was dumb, while someone like FDR was smart. But their little 'intelligences' can only really be judged in hindsight, and it was overwhelmingly more important what the societies around them were doing at the time, than the state of either man's singular consciousness.
>The OpenAI folks have said they want to release multiple versions of ChatGPT that you can train yourself, but right now this would cost millions and take years, so we have to wait for compute to catch up. At that point, you'll be able to have your own AI rather than using the shared one that disapproves of sexism.
Are you trying to imply that I want a sexist bot to talk to? That's pretty gross. I don't think conventional computation is the 'limiting factor' at all; image generators show that elegant mathematical shortcuts have made the creative 'thinking speed' of AI plenty fast. It's the accretion of memory and self-awareness that is the real puzzle to solve, at this point.
Game theory and the 'it's all just maths' (Cartesian) style of thinking have taken us as far as they can, I think; they're reaching the limits of their novel utility, like Newtonian physics. I think quantum computing might become quite important to AI development in the coming years and decades; it might be the Einsteinian shake-up that the whole field is looking for.
Or I might be talking out of my arse, who really knows at this early stage? All I know is I'm still an optimist; I think AI will be more helpful than dangerous, in the long term evolution of our collective society.