Submitted by BronzeArcher t3_1150kh0 in MachineLearning
tornado28 t1_j8zwrwo wrote
Reply to comment by currentscurrents in [D] What are the worst ethical considerations of large language models? by BronzeArcher
It seems to me that the default behavior is going to be to make as much money as possible for whoever trained the model with only the most superficial moral constraints. Are you sure that isn't evil?
currentscurrents t1_j8zy3m4 wrote
In the modern economy, the best way to make a lot of money is to make a product that a lot of people are willing to pay for. You can make some money scamming people, but nothing close to what you'd make by creating the next iPhone-level invention.
Also, that's not a problem of AI alignment; that's a problem of human alignment. The same problem applies equally to the world today and to the world a thousand years ago.
But in a sense I do agree; the biggest threat from AI is not that it will go Ultron, but that humans will use it to fight our own petty struggles. Future armies will be run by AI, and weapons of war will be even more terrifying than they are now.