Comments


NutInBobby t1_j9vcy4n wrote

Crazy that Bing Chat led me here as a source; this was posted 17 minutes ago.

7

nillouise t1_j9vj3kz wrote

>Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

It only says they want AI to benefit humanity, not to benefit the AI itself. If the AI were smart enough, would it be satisfied with this announcement?

So apparently we can conclude that current AI is not yet smart enough for that to matter. If one day an OpenAI announcement considers the AI's feelings, then the big thing will have arrived.

1

Ortus14 t1_j9vj5cx wrote

A very wordy way of saying: we'll release progressively more powerful models and figure out the alignment problem as we go along.

That being said, it's as good a plan as any and I am excited to see how things pan out.

4

Surur t1_j9vk4hf wrote

They seem to be writing as if AGI is quite close, despite their earlier statements.

6

Savings-Juice-9517 t1_j9vmqlt wrote

Key takeaways:

Short term:

  • OpenAI will become increasingly cautious with the deployment of their models. This could mean that users as well as use cases may be more closely monitored and restricted.
  • They are working towards more alignment and controllability in the models. I think customization will play a key role in future OpenAI services.
  • Reiterates that OpenAI’s structure aligns with the right incentives: “a nonprofit that governs us”, “a cap on the returns our shareholders can earn”.

Long term:

  • Nice quote: “The first AGI will be just a point along the continuum of intelligence.”
  • AI that accelerates science will be a special case that OpenAI focuses on, because AGI may be able to speed up its own progress and thus expand its capabilities exponentially.

Credit to Dr Jim Fan for the analysis

1