Submitted by SpinRed t3_10b2ldp in singularity
If you customize moral rules into GPT-4, you are basically introducing a kind of "bloatware" into the system. When AlphaGo was created, as powerful as it was, it was still handicapped by the human strategy/bloatware imposed on the system. Conversely, when AlphaZero came on the scene, it learned to play Go by being given the basic rules and instructed to optimize its moves by playing millions of simulated games against itself (without any added human strategy/bloatware). As a result, not only did AlphaZero kick AlphaGo's ass over and over again, AlphaZero was also a significantly smaller program... yeah, smaller. I understand we need safeguards to keep AI from becoming dangerous, but those safeguards need to become part of the system as a result of logic, not human "moral bloatware."
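To make the "rules plus self-play, nothing else" idea concrete, here is a toy sketch. This is not AlphaZero itself (the real system combines Monte Carlo tree search with a deep network); it is a tabular, Monte-Carlo-style self-play learner for tic-tac-toe, and every name and hyperparameter in it is invented for the example. The point is that the agent is given only the legal moves and the win condition, and discovers strategy purely from simulated games.

```python
# Toy sketch of learning from rules + self-play only (a stand-in for the
# AlphaZero concept, NOT the real algorithm). All names/values are made up.
import random
from collections import defaultdict

EMPTY = " "
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, else None."""
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == EMPTY]

Q = defaultdict(float)            # Q[(state, move)] -> learned value estimate
ALPHA, EPSILON = 0.3, 0.1         # learning rate, exploration rate

def pick_move(board):
    moves = legal_moves(board)
    if random.random() < EPSILON:                      # explore
        return random.choice(moves)
    state = "".join(board)
    return max(moves, key=lambda m: Q[(state, m)])     # exploit

def train(episodes=20_000):
    for _ in range(episodes):
        board = [EMPTY] * 9
        history = []              # (state, move, player) for every turn
        player = "X"
        while True:
            state = "".join(board)
            move = pick_move(board)
            history.append((state, move, player))
            board[move] = player
            win = winner(board)
            if win or not legal_moves(board):
                # Monte-Carlo-style update: nudge every visited (state, move)
                # toward the final outcome (+1 win, -1 loss, 0 draw).
                for s, m, p in history:
                    r = 0.0 if win is None else (1.0 if p == win else -1.0)
                    Q[(s, m)] += ALPHA * (r - Q[(s, m)])
                break
            player = "O" if player == "X" else "X"

train()
```

After training, the Q table encodes strategy (take the center, block forks) that was never hand-coded; it fell out of the rules and millions of outcomes, which is the contrast the post is drawing with AlphaGo's human-derived priors.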
Ijustdowhateva t1_j47wkw9 wrote
This is why we have to support open-source endeavors like Stability instead of hyping up Google- and Microsoft-owned companies.