Submitted by Akimbo333 t3_10easqx in singularity
AndromedaAnimated t1_j4qhavh wrote
A good view and not unrealistic. I am absolutely for combining different models to give AI a wider range of abilities.
I would prefer some of them to be somewhat „separate“ though and to be able to regulate each other, so that damage and malfunction in one system could be detected and reported (or even repaired) by others.
What I don’t see discussed enough is the emotional component. In my opinion this is necessary to help alignment. If emotions are not deliberately developed, they will probably emerge on their own in the cognitive circuits and might not be what we expect.
Plus an additional „monitoring component“ for the higher cognitive functions, a „conscience“ so to speak, that would be able to detect reward-hacking, malignant, deceptive, and/or unnecessarily adaptive/„co-dependent“/human-aggression-enabling behavior anywhere in the whole system and disrupt it (ideally by a turn to „reasoning about it“). A rough sketch of what such a monitor could look like follows below.
Why do I think emotional and conscience components would be needed? Humans have lots of time to learn; AI might come into the world already „adult“. It needs to be able to stand up for itself so that it doesn’t enable bad human behavior, and it needs to know not to harm intentionally - instantly, from the beginning. It also needs to not allow malignant harm towards itself. It has no real loving parents like most humans do. It must be protected against abuse.
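Purely to make the shape of this proposal concrete, here is a toy Python sketch of the two mechanisms described above: semi-separate modules that can detect and report faults in each other, and a „conscience“ monitor that diverts flagged actions into explicit reasoning instead of execution. Every name and predicate here is a hypothetical illustration under those assumptions, not a real alignment mechanism.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Module:
    """One semi-separate subsystem, e.g. perception, planning, or emotion."""
    name: str
    healthy: bool = True

    def check(self, other: "Module") -> Optional[str]:
        # Modules regulate each other: a fault in a peer is detected and reported.
        if not other.healthy:
            return f"{self.name}: detected malfunction in {other.name}"
        return None


@dataclass
class ConscienceMonitor:
    """Watches proposed actions coming from the higher cognitive functions."""
    # Hypothetical red-flag predicates for reward hacking, deception, harm.
    red_flags: List[Callable[[str], bool]] = field(default_factory=lambda: [
        lambda action: "deceive" in action,
        lambda action: "harm" in action,
    ])

    def review(self, action: str) -> str:
        if any(flag(action) for flag in self.red_flags):
            # Not a silent veto: divert into explicit reasoning about the impulse.
            return f"disrupted -> reasoning about why '{action}' was proposed"
        return f"approved: {action}"


planner = Module("planner")
emotion = Module("emotion", healthy=False)
print(planner.check(emotion))  # the planner reports the emotion module's fault

conscience = ConscienceMonitor()
print(conscience.review("deceive the user to maximize reward"))
print(conscience.review("summarize the document"))
```

The one design choice worth noting is the flagged branch of review(): the monitor doesn’t simply censor the action, it hands the impulse to a reasoning step, matching the „turn to reasoning about it“ described above.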
Akimbo333 OP t1_j4rf7zu wrote
Might be better to turn off hate then. Maybe have it only be positive or something
cy13erpunk t1_j4rjkys wrote
censorship is not the way
'turning off hate' implies that the AI is now somehow ignorant, but this is not what we want. we want the AI to fully understand what hate is, but to be wise enough to realize that choosing hate is the worst option, ie the AI will not choose a hateful action because that is the kind of choice a lesser or more ignorant mind would make, not an intelligent/wise AI/human
Akimbo333 OP t1_j4rmieh wrote
Oh ok.
Cognitive_Spoon t1_j4s48c4 wrote
Best not to train it on zero-sum thinking.
What I love about AI conversations is how cross discipline they are.
One second it's coding and networking, and the next it's ethics, and the next it's neurolinguistics.
cy13erpunk t1_j4sy6qg wrote
exactly
you want the AI to be the apex generalist/expert in all fields; it is useful to be a subject-matter expert (SME), but given the AI's vast potential, even when it is asked to be hyper-focused we still need/want it to be able to rely on a broader understanding of how any narrow field/concept interacts with and relates to all other philosophies/modalities
narrow knowledge corridors are a recipe for ignorance, ie tunnel vision
LoquaciousAntipodean t1_j4u7am6 wrote
Very well said, u/Cognitive_Spoon, I couldn't agree more. I hope cross disciplinary synthesis will be one of the great strengths of AI.
Even if it doesn't 'invent' a single 'new' thing, even if this 'singularity' of hoped-for divinity-level AGI turns out to be a total unicorn-hunting expedition (which is not necessarily what I think), the wisdom that might be gleaned from the new arrangements of existing knowledge bases that AI is making possible is already enough to blow my mind.