Submitted by Johns-schlong t3_zpczfe in singularity
civilrunner t1_j0udhrm wrote
Reply to comment by Kaarssteun in Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
I agree, though it may not be nearly as efficient as a human brain when it comes to being intelligent. In my opinion, you only need to look at the gains from GPU vs. CPU AI training to see how much scaling up local chip compute does for AI, and from there how much better a 3D human brain may be compared to even a wafer-scale stacked 2D chip. Then acknowledge that the human brain doesn't just compute with 1s and 0s: as we learned recently, chemical signals offer more options than just on and off.
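To make that GPU-vs-CPU gap concrete, here's a minimal timing sketch of a large matrix multiply, the core operation in neural-net training. It assumes PyTorch is installed and a CUDA GPU is present; the matrix size and the speedup you measure are illustrative and will vary with hardware.

```python
import time
import torch

def time_matmul(device: str, n: int = 4096, reps: int = 10) -> float:
    """Average seconds per n-by-n matrix multiply on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    # Warm-up run so one-time setup costs don't skew the measurement.
    torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(reps):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work to finish
    return (time.perf_counter() - start) / reps

cpu_t = time_matmul("cpu")
gpu_t = time_matmul("cuda")
print(f"CPU: {cpu_t:.4f}s  GPU: {gpu_t:.4f}s  speedup: {cpu_t / gpu_t:.1f}x")
```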
There are advantages to a silicon electronic circuit as well, of course, the main one being speed, since electricity travels far, far faster than chemical signals.
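As a rough worked example of that speed gap (the propagation speeds and distances below are ballpark figures assumed for illustration, not measurements from this thread):

```python
# Back-of-envelope latency for one signal traversal of a ~10 cm brain
# versus a ~3 cm chip. Assumed figures: fast myelinated axons conduct at
# up to ~100 m/s, while electrical signals on silicon move at roughly
# half the speed of light.
NEURON_SPEED = 100.0        # m/s, upper end for myelinated axons
CHIP_SIGNAL_SPEED = 1.5e8   # m/s, ~0.5c in on-chip interconnect

brain_latency = 0.10 / NEURON_SPEED       # cross a 10 cm brain
chip_latency = 0.03 / CHIP_SIGNAL_SPEED   # cross a 3 cm die

print(f"Brain: {brain_latency * 1e6:.0f} us, chip: {chip_latency * 1e9:.2f} ns")
print(f"Electrical signalling is ~{brain_latency / chip_latency:.0e}x faster here")
```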
I am also personally unsure how "enslaving" a verified general intelligence could be ethical, regardless of its computational architecture. It's far better to ensure alignment so that it's not "enslaved" but rather wants to collaborate toward the same goals.
Kaarssteun t1_j0udpg9 wrote
Right, enslaving is subjective; but we want to make sure it enhances our lives rather than destroying them.
civilrunner t1_j0udzlb wrote
Sure, I just wouldn't call it "enslaving" them, seeing as that generally means forcing them to work against their will, which seems unlikely to be feasible if we build an AGI or an ASI. "Well aligned" is a far better term, and in my view it's the only thing that could work.