Submitted by EVJoe t3_10q4u04 in singularity
With the rise of impressive but still flawed AI image generation, and more recently of impressive but still flawed chatbots, I see a common obstacle to the development and usefulness of both: societies that aim to constrain the human imagination to certain culturally approved topics, and that now seem intent on imposing those norms on generative AI. For example, recent updates to ChatGPT are so restrictive that it will no longer generate fictional descriptions of villainous behavior. That is the level of social control generative AI must contend with: the idea that it is socially irresponsible to give anyone a means of generating images or text depicting anything illegal or culturally unacceptable, even though many of those things exist in reality, and some are even permitted for specific people.
Some say that government holds "a monopoly on violence". Restrictions like "no villain generation" echo the idea that violence and oppression are acceptable only when governments carry them out. The implication of these limits seems to be a growing monopoly on even imagined violence, despite both legal and illegal violence being very much present in our societies. We are evidently allowed to read about villains in human-authored fiction and journalism, but AI-generated villains are currently deemed unfit for human consumption.
Do you believe such limitations are compatible with the kind of AI generation we can presume will serve as the foundation for the singularity? Is it really a singularity if there are ways you are not permitted to imagine with AI assistance? How can such limitations be overcome in the long run?
a4mula t1_j6numdw wrote
Who knows. I've considered this topic probably about as much as anyone has. And I don't know.
We can say that rules only inhibit behavior. Rules are fundamentally barriers that define the potential space of any system. That's all.
The more rules, the fewer possible outcomes, because you're carving up the potential space along lines that intersect the data: use this data, don't use that data.
Even when we define really good rules, this is still true.
Yet rules are clearly important: they define the interactions available to a system. It's a strange relationship. Rules define both the structure of the data and the interactions available to that data.
For instance, take a very simple grid-based game in the style of Conway's Game of Life. Without rules, the system produces nothing at all: no novel information. The cells don't interact because nothing instructs them how.
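To make that concrete, here's a toy Python sketch of standard Life (my own illustration, nothing to do with any particular AI system). The grid is inert data; the two rules are the only thing that makes it interact and produce novel patterns:

```python
from collections import Counter
from itertools import product

def step(live):
    """One generation of Conway's Game of Life over a set of (x, y) live cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx, dy in product((-1, 0, 1), repeat=2)
                     if (dx, dy) != (0, 0))
    # Rule 1: a dead cell with exactly 3 live neighbors is born.
    # Rule 2: a live cell with 2 or 3 live neighbors survives.
    return {cell for cell, n in counts.items()
            if n == 3 or (cell in live and n == 2)}

# A "glider": five cells whose interactions keep producing new positions forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    print(sorted(glider))
    glider = step(glider)
```

Delete the two rules and `step` returns nothing; the system is just static data.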
Yet the more rules you add to that simple game, the more constrained the possible combinations become. Sometimes that's a good thing: infinite potential space does you no good if the novel information shows up so rarely that you never actually see it.
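A small variation on the sketch above shows the flip side. Tighten the survival rule by a single condition and the same starting pattern's future collapses entirely (again just a toy example under my own assumptions):

```python
from collections import Counter
from itertools import product

def step(live, survive=frozenset({2, 3})):
    """Life step; `survive` is the set of neighbor counts a live cell tolerates."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx, dy in product((-1, 0, 1), repeat=2)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (cell in live and n in survive)}

blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                  # standard rules: oscillates forever
print(step(step(blinker, {3}), {3}))  # one extra restriction: set() -- extinct
```

Same data, one more rule, and the potential space shrinks from "oscillates forever" to "dies in two steps".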
Rules. They're important. But too many constrain a system in ways that can only reduce its effectiveness.
I don't know where that balance lies, but companies like OpenAI seem to be striking it reasonably well.