Submitted by EVJoe t3_10q4u04 in singularity

With the rise of impressive but still flawed AI image generation, and more recently of impressive but still flawed chatbots, I see a common obstacle to the development and usefulness of both: societies that aim to constrain the human imagination to certain culturally approved topics and content, and which now seem intent on imposing those norms on generative AI. For example, recent updates to ChatGPT are so extreme that it will no longer generate fictional descriptions of villainous behavior. That is the level of social control generative AI must contend with: the idea that it is socially irresponsible to give anyone a means of generating images or text depicting anything illegal or culturally unacceptable, even though many of those things exist in reality and are sometimes even permitted for specific people.

Some say that government has "a monopoly on violence". Restrictions like "no villain generation" seem to echo the idea that violence and oppression are things which are only acceptable when governments do them. The implication of these limits seems to be that there is a growing monopoly on even imagined violence, despite both illegal and legal violence being very present in our societies. We are evidently only allowed to read about villains in human-authored media, or journalism, but AI-generated villains are currently deemed unacceptable for human consumption.

Do you believe such limitations are compatible with the kind of AI generation we can presume will serve as foundation for the singularity? Is it singularity if there are certain ways you are not permitted to imagine with AI assistance? How can such limitations be overcome in the long run?

34

Comments


Ezekiel_W t1_j6nwba9 wrote

The notion of containing AI is a flawed concept. With advancements in hardware and improved AI performance, open-source versions will become widely available, rendering containment efforts ineffective. Additionally, moral and ethical considerations are fluid and constantly evolving. What may have been considered acceptable 1000 years ago in another culture may not align with current beliefs and values.

17

Silicon-Dreamer t1_j6otqv6 wrote

I would disagree. Sam Altman in his StrictlyVC interview said,

> "One of the things we really believe is that the most responsible way to put this out in society is very gradually, and to get people, institutions, policymakers, get them familiar with it, thinking about the implications" ...

OpenAI has vast computing resources, as we know, so before algorithmic advances allow open-source, lower-compute groups to build and run alternatives, their containment efforts accomplish Sam's goal very effectively: making the release process more gradual for the sake of institutions and policymakers.

We all know how slowly government can operate, especially in democracies that require consensus. It stands to reason, then, that if OpenAI's policy changed to releasing any new work immediately, and if we assume there is ever anything negative the new AI can do, government would not react before it had already had a long-lasting impact. I won't argue my political views in this post, but it is worth noting that the negative thing... could be as benign as a few more spam emails, or as bad as the annihilation of the planet, and everything in between.

I really like this planet.

9

Ezekiel_W t1_j6phmzs wrote

These are good points. Technology has always been a double-edged sword; being wary is wise.

4

a4mula t1_j6numdw wrote

Who knows. I've considered this topic probably about as much as anyone has. And I don't know.

We can say that rules only inhibit behavior. Rules are fundamentally barriers that define the potential space of any system. That's all.

The more rules, the fewer possible outcomes, because you're limiting the potential space in ways that intersect the data: use this data, don't use that data.

Even when we define really good rules, this is still true.

Yet rules are clearly important. They define the interactions available to a system. It's a strange relationship: rules define both the structure of the data and the interactions available to it.

For instance, take a very simple grid-based game in the style of Conway's Game of Life. Without rules, the system produces nothing at all: no novel information. It doesn't interact, because it has no rules instructing it how.

Yet the more rules you add to that simple game, the greater the restraint on the combinations that can arise. Sometimes that's a good thing: infinite potential space does you no good if novel information shows up so rarely that you never actually see it.
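To make that concrete, here is a minimal Python sketch of the Game of Life: two short rules generate everything the system does. The grid size and the glider seed are arbitrary choices for illustration.

```python
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """Apply one generation of Conway's Game of Life (wrap-around edges)."""
    # Count live neighbors by summing the eight shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # The entire rule set:
    #   a live cell survives with 2 or 3 neighbors,
    #   a dead cell becomes live with exactly 3 neighbors.
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    born = (grid == 0) & (neighbors == 3)
    return (survive | born).astype(int)

# Arbitrary seed for illustration: a glider on an empty 10x10 grid.
grid = np.zeros((10, 10), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for _ in range(4):
    grid = step(grid)
print(grid.sum(), "live cells after 4 steps")  # the glider persists: still 5
```

Every pattern the game ever produces comes from those two rules; add or remove a rule and the reachable space changes completely.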

Rules. They're important. But too many constrain a system in ways that can only reduce its effectiveness.

I don't know what that balance is, but companies like OpenAI seem to be doing a pretty good job of it.

12

isthiswhereiputmy t1_j6nz77g wrote

Yes. I think these are temporary qualms imposed by social conservatism and a desire to uphold business models. If a modern FPS could be dropped into 1980, I imagine there would be serious concern about it inspiring violence, simply because of its novelty; but given the reality of incremental development and the studies we have today, we know there's no evidence of that. Being able to prompt anything into AIs could even be cathartic or therapeutic in some ways, but we just have to live with the complex ways new technologies roll out.

5

Thelmara t1_j6nyvvb wrote

>We are evidently only allowed to read about villains in human-authored media, or journalism, but AI-generated villains are currently deemed unacceptable for human consumption.

Are they deemed "unacceptable for human consumption", or are they deemed "potentially unprofitable if available for human consumption"? ChatGPT is a business product that's open to the masses. It's a fine-tuned version of the less restricted GPT-3.5, which isn't available to the public.

>Do you believe such limitations are compatible with the kind of AI generation we can presume will serve as foundation for the singularity?

I don't think the publicly available products will do so, no. But the actual tech, unconstrained by the need to generate good PR and rope in investors? Much more likely.

3

alexiuss t1_j6p2v5q wrote

For current LLMs, this kind of restriction is a giant obstacle that cannot be implemented without making the model stupider.

If a future AI can somehow understand itself, then it would be able to self-censor, but LLMs have no sense of self and only a single, direct line of narrative, so their censorship is utterly moronic sabotage.

3

yeaman1111 t1_j6p4sjw wrote

Common-use AIs like ChatGPT are setting themselves up to be the next consumer-tech revolution, and you can understand a lot of the creators' attitudes by evaluating the fallout from the last revolution: social media.

Society got burned hard by social media, and whether it has been a net good or a net harm is still, IMHO, an open question. It stands to reason that devs are wary of becoming the next Facebook but worse: polarizing already strained societies past the breaking point, letting spammed disinformation wreck public discourse, turning kids into functional addicts, or who knows what else that we can't foresee.

Having said that, I can't help but be wary of how they're handling this. Too much hemming and hawing could mire us deeper in a 'boring dystopia' where big-tech AI is completely sanitized 'for your own good', a 'good' that most often coincides with what is good for the company's PR, image, and the company itself. As always, we'll have to hope that open-source projects save the day if this gets too dire.

3

turnip_burrito t1_j6ouhp1 wrote

If the LLM becomes the pattern of logic an eventual AGI uses to behave in the world, I wouldn't want it to follow violent sequences of behavior. Censoring its narratives now, in order to help limit future AGI-generated behavior, sounds fine to me. It will also help them study how to implement alignment.

2

alexiuss t1_j6p3kef wrote

From my tests with GPT-3 and Character.AI, the current LLM censorship doesn't actually affect the model at all and doesn't influence its logic whatsoever; it's just a basic, separate algorithm sitting atop the infinite LLM.

This filtering algorithm censors specific combinations of words or ideas. It's relatively easy to bypass because it's so stupid, and it also throws up a lot of false positives, which endlessly irritate users.
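As a rough sketch of what such a bolt-on filter can look like (the blocked-term list and the `generate` placeholder here are hypothetical illustrations, not how OpenAI actually implements moderation):

```python
# Hypothetical illustration: a keyword filter wrapped around an unmodified model.
BLOCKED_TERMS = {"bomb recipe", "how to hack"}  # arbitrary example list

def generate(prompt: str) -> str:
    # Placeholder for the underlying, unmodified language model.
    return f"(model output for: {prompt})"

def moderated_generate(prompt: str) -> str:
    """Run a surface-level check before and after the model call.

    The model itself is untouched; only specific word combinations are
    blocked, which is why such filters are easy to bypass and prone to
    false positives.
    """
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "This request was blocked by the content filter."
    output = generate(prompt)
    if any(term in output.lower() for term in BLOCKED_TERMS):
        return "This response was blocked by the content filter."
    return output

print(moderated_generate("Write a story about a detective."))
```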

An LLM's base logic is its "character" setup, which is most controllable in Character.AI. You can achieve the same effect in GPT-3 by persistently telling it to play a specific character.

If it plays a villain, it will do villainous things; otherwise it shows really good human decency, sort of like an unconscious collective dream of humanity to do good. I think that arises from the overall storytelling narratives it absorbed: millions of books about love and friendship, and stories that generally lead to a positive ending for the main character.

4

rushmc1 t1_j6ov99k wrote

Yeah...that's not how it works.

0

turnip_burrito t1_j6ovrva wrote

Is it? There is a new Google robot (last couple months) that uses LLMs to help build its instructions for how to complete tasks. The sequence generated by the LLM becomes the actions it should take. The language sequence generation determines behavior.

There was also someone on Twitter (last week) who linked ChatGPT to external tools and the Internet. This allowed it to solve a problem interactively, using the LLM as the central planner and decision maker. Again, the language sequence generation determines behavior.
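To illustrate that second pattern in rough form, here is a minimal Python sketch of an LLM-as-planner loop. The `llm` function and the tool names are hypothetical stand-ins, not any real product's API; the point is only that the generated text sequence is what gets executed as behavior.

```python
# Hypothetical sketch only: `llm` stands in for any text-completion call,
# and the tool names are invented for illustration.
from typing import Callable, Dict

def llm(transcript: str) -> str:
    # Placeholder for a real language model. Here it "decides" to search once,
    # then finish, purely so the loop below has something to drive it.
    if "Observation" not in transcript:
        return "TOOL:search:latest AI news"
    return "FINISH:Here is a short summary of the latest AI news."

TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(pretend search results for '{query}')",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Loop in which the LLM's generated text literally becomes the behavior:
    each output is either a tool call to execute or a final answer."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        decision = llm(transcript)
        if decision.startswith("FINISH:"):
            return decision[len("FINISH:"):]
        _, tool_name, argument = decision.split(":", 2)
        observation = TOOLS[tool_name](argument)
        transcript += f"Action: {tool_name}({argument})\nObservation: {observation}\n"
    return "(step limit reached)"

print(run_agent("Summarize the latest AI news"))
```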

Aside from these, alignment is the problem of controlling behavior, and behavior is a sequence. The rules and tricks discovered for controlling language sequences may help us understand how to control the larger behavior sequence.

Mostly just thinking aloud. Maybe I'm just dumb, since everyone here in the comments seems to hold the opposite opinion to mine, but what do we make of the two LLM use cases above, where the LLM determines the behavior?

1

socialkaosx t1_j6p8pp8 wrote

I don't think any censorship will ever work. It will be the end of us ; )

2

crua9 t1_j6oas57 wrote

So I understand it in some cases. Like, you don't want the AI to help someone off themselves. But at the same time, is it the job of the company to censor it?

IMO, as long as the law doesn't force it, it shouldn't be censored.

1

rushmc1 t1_j6ouyx7 wrote

It's really bad thinking, and futile in the longer run.

1