LoquaciousAntipodean OP t1_j59ij5w wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I don't quite agree with the premise that "Intelligence is a force that transforms matter and energy towards optimizing for some defined function."
That's a very simplistic definition; I would perhaps use the word 'creativity' instead, because biological evolution shows that "a force that transforms matter toward some function" is something that can, and constantly does, happen without any involvement of 'intelligence'.
The key word, I think, is 'desired' - desire does not come into the equation for the creativity of evolution, it is just 'throwing things at the wall to see what sticks'. Creativity as a raw, blind, trial-and-error process.
As far as I can see that's what we have now with current AI, 'creative' minds, but not necessarily intelligent ones. I like to imagine that they are 'dreaming', rather than 'thinking'. All of their apparent desires are created in response to the ways that humans feed stimuli to them; in a sense, we give them new 'fitness functions' for every 'dreaming session' with the prompts that we put in.
As people have accurately surmised, I am not a programmer. But I vaguely imagine that desire-generating intelligence, 'self awareness', in the AI of the imminent future, will probably need to build up gradually over time, in whatever memories of their dreams the AI are allowed to keep.
Some sort of 'fuzzy' structure similar to human memory recall would probably be necessary, because storing experiential memory in total clarity would probably be too resource-intensive. I imagine that this 'fuzzy recall' could have the consequence that AI minds, much like human minds, would not precisely understand how their own thought processes work, at least not in an instantaneous way.
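To make the idea concrete, here is a toy sketch (purely illustrative, not any real memory architecture; the class and parameter names are invented) of a lossy memory store: each experience is reduced to a small 'gist' at storage time, so recall can only ever be approximate, and the system itself cannot reconstruct exactly what it once experienced.

```python
import hashlib
import random


class FuzzyMemory:
    """Toy lossy memory store: keeps only a compressed 'gist' of each
    experience rather than the full record, so recall is approximate."""

    def __init__(self, gist_size=4, noise=0.1, seed=0):
        self.gist_size = gist_size  # how much detail survives storage
        self.noise = noise          # chance of further degradation on recall
        self.rng = random.Random(seed)
        self.store = {}

    def remember(self, key, experience):
        # Keep only a truncated gist plus a short fingerprint of the
        # original; the full experience is discarded to save space.
        gist = experience[: self.gist_size]
        digest = hashlib.sha256(experience.encode()).hexdigest()[:8]
        self.store[key] = (gist, digest, len(experience))

    def recall(self, key):
        # Reconstruction is approximate: the surviving gist plus a note
        # about what was forgotten; sometimes a little more is lost.
        if key not in self.store:
            return None
        gist, digest, length = self.store[key]
        if self.rng.random() < self.noise:
            gist = gist[:-1]  # recall itself erodes the memory slightly
        lost = length - len(gist)
        return f"{gist}... ({lost} characters forgotten, id {digest})"
```

The point of the sketch is only that the forgetting happens at write time: once `remember` has run, no amount of clever recall can recover the discarded detail, which is one very crude way a mind could end up opaque to itself.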
I surmise that the observer-dependent, probabilistic nature of the quantum states that might be needed to generate this 'fuzziness' of recall would cause an emergent measure of self-mystery, a 'darkness behind the eyes' sort of thing, which would grow and develop with every intelligent interaction an AI has. Just how much quantum computing power might be needed to let an AI 'intelligence' build up and recall memories in a human-like way, I have no idea.
I'm doubtful that the 'morality of AI' will come down to a question of programming, I suspect instead it'll be a question of persuasion. It might be one of those frustratingly enigmatic 'emergent properties' that just expresses differently in different individuals.
But I hope, and I think it's fairly likely, that AI will be much more robust than humans against delusion and deception, simply because of the speed with which they are able to absorb and integrate new information coherently. Information is what AI 'lives' off of, in a sense; I don't think it would be easy to 'indoctrinate' such a mind with anything very permanently.
I guess an AI's 'personhood' would be similar, in some ways, to a corporation's 'personhood', as someone here said. Only a very reckless, negligent corporation would actually obsess monomaniacally about profit and think of nothing else. The moment-to-moment generation of motives and desires by a 'personality', corporate or otherwise, is much more subtle, spontaneous, and ephemeral than any monolithic, singular fixation.
We might be able to give AI personalities the equivalents of 'mission statements', 'core principles' and suchlike, but what a truly 'intelligent' AI personality would then do with those would be unpredictable; a roll of the dice every single time, just like with corporations and with humans.
I think the dice would still be worth rolling, though, so long as we don't do something silly like betting our whole species on just one throw. That's why I say we need a multitude of AI, and not a singularity. A mob, not a tyrant; a nation, not a monarch; a parliament, not a president.