Submitted by CMDR_BunBun t3_127j9xn in singularity
After listening to the Fridman/Yudkowsky podcast, I've been giving a lot of thought to their arguments on alignment. I can certainly agree with Yudkowsky's fear that we are not making much headway there, nor devoting resources to that problem commensurate with the gains we are making in AI capability. This is a real problem as we move toward AGI and beyond. If we can agree on this, can we discuss how to make progress on the alignment issue without one-shotting ourselves out of the game? I will start with my harebrained idea. Why not put the AI in a biological body, with all the limitations of the human condition? Whether simulated or real. I know we have no idea how to do this now, but here's my proposal: have this thing live as a human, so it can understand us and hopefully empathize with us. Let it gain our trust and respect, and then we can determine whether we can trust it with its godlike powers in the world. Discuss.
SWATSgradyBABY t1_jeedmuo wrote
You got some tech you'd like to share with the world?