LanchestersLaw t1_jcf5x9c wrote
I think the most similar historic example is the Human Genome Project, where the government and private industry were both racing to be the first to fully sequence the human genome, but the US government was releasing its data and industry could use it to get even further ahead.
It's the classic prisoner's dilemma. If both parties are secretive, research is much slower and might never finish; each has only a small chance of completing the project first, for a high private reward to the owner and a low reward for society. If one party shares and the other withholds, the withholding party gets a huge comparative boost and a high probability of a high private reward. If both parties share, we get the best case: they can split the work and share insights, so less time is wasted, and there is a very high probability of a high private and a high public reward.
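To make the dilemma concrete, here is a minimal sketch of that payoff structure with made-up illustrative numbers (nothing here comes from a real analysis); it shows how withholding can dominate for each party individually even though mutual sharing is the best joint outcome:

```python
# Sketch of the share/withhold payoff structure described above.
# Each entry maps (A's choice, B's choice) -> (payoff to A, payoff to B).
# The numbers are purely illustrative assumptions.
payoffs = {
    ("withhold", "withhold"): (1, 1),  # both secretive: slow progress, low reward
    ("share",    "withhold"): (0, 5),  # A shares, B free-rides: B gets a big boost
    ("withhold", "share"):    (5, 0),  # mirror case
    ("share",    "share"):    (4, 4),  # both share: split the work, best joint outcome
}

def best_response(my_options, their_choice):
    """Pick A's payoff-maximizing choice, holding B's choice fixed."""
    return max(my_options, key=lambda mine: payoffs[(mine, their_choice)][0])

if __name__ == "__main__":
    for theirs in ("withhold", "share"):
        mine = best_response(("withhold", "share"), theirs)
        print(f"If the other party chooses to {theirs}, A's best response is to {mine}")
```

With these example numbers, "withhold" is the best response no matter what the other party does, which is exactly why an enforceable agreement (changing the payoffs for contract breakers) is needed to make mutual sharing stable.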
I think for AI we need mutual cooperation and to stop seeing ourselves as rivals. For the shared good of humanity in general, the rewards of AI cannot be privatized (“humanity” regrettably does include Google and the spider piloting Zuckerberg’s body). A mutually beneficial agreement with enforceable punishment for contract breakers is what we need to defuse tensions, not an escalation of tensions.