
yaosio t1_j5a7hva wrote

There were no moral barriers; that was an excuse they made up. They couldn't figure out how to monetize their language models without cannibalizing their search revenue. Now that LLMs are fast approaching usability for more than writing fictional stories, Google is being forced to drop the act and find a way to make money with their technology. If they don't, they will be left behind and turn into the next Ask Jeeves.

When a company claims it did something for reasons that have nothing to do with money, it is not telling the truth. It is always about money.


DoktoroKiu t1_j5bhemu wrote

Yeah, unless they are hooking up an AGI or some other agent that can continually learn and affect the real world, all the "safety" and morality talk is largely about making sure people can't turn it into a racist Nazi bot, because that would hurt their bottom line.

There is a real threat of these tools being used to mislead people (like Russian Twitter bots), but unless they stop publishing papers there is no way to put the genie back in the bottle. And the people fooled by that narrative would probably be duped by a basic-ass Markov chain anyway.
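(For reference, a word-level Markov chain is about as simple as text generation gets: pick each next word based only on the word before it. A minimal Python sketch with a made-up toy corpus, not any actual bot's code:)

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=20):
    """Walk the chain, picking a random successor at each step."""
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break  # dead end: no word ever followed this one
        word = random.choice(successors)
        out.append(word)
    return " ".join(out)

# Hypothetical toy corpus; a real bot would train on scraped posts.
corpus = "the people fooled by bots would be fooled by anything the bots say"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Twenty-odd lines, no neural network, and it still produces locally plausible word salad, which is kind of the point.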
