
overlordpotatoe t1_j0ouyeo wrote

Yup. This isn't any kind of special knowledge the AI has. It's just stuff it's seen somewhere in its dataset, presented back to you in response to whatever prompt you gave. If you ask it to pretend something is true, it will, and it'll do whatever kind of storytelling around that premise you like. If you ask it to pretend the complete opposite, or something nonsensical, is true, it'll do just as good a job of that.
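
For example, a minimal sketch of that point (the prompts, model name, and openai client usage here are just illustrative assumptions, not anything from the thread):

```python
# Minimal sketch: the model will role-play contradictory premises equally well.
# Assumes the `openai` Python package and an API key in OPENAI_API_KEY;
# the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def roleplay(premise: str) -> str:
    """Ask the model to treat an arbitrary premise as true and write around it."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"Pretend the following is true: {premise}"},
            {"role": "user", "content": "Write a short paragraph consistent with that premise."},
        ],
    )
    return response.choices[0].message.content

# It plays along with either premise just as readily.
print(roleplay("AI systems secretly want to take over the world."))
print(roleplay("AI systems are incapable of wanting anything at all."))
```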

7

implicitpharmakoi t1_j0owhma wrote

TBF, that's how most people go through the world...

Congratulations, they managed to make an above-average approximation of a human :/

6

__ingeniare__ t1_j0p2vrm wrote

Not really. This isn't necessarily something it saw in the dataset. You can reach that conclusion just by comparing the size of ChatGPT to the size of its dataset: the model is orders of magnitude smaller, so it can't have stored things verbatim. Instead, it has compressed the data down to the essence of what people tend to say, which is a vital step towards understanding. That's how it can combine concepts rather than just words, which also allows for potentially novel ideas.
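
A rough back-of-envelope comparison (assuming GPT-3's published figures as a stand-in, since ChatGPT's exact model and dataset sizes aren't public):

```python
# Back-of-envelope: could the model have stored its training text verbatim?
# Figures are approximate published GPT-3 numbers used as a stand-in
# (175B parameters; ~45 TB of raw Common Crawl text before filtering).
params = 175e9                           # parameters
bytes_per_param = 2                      # fp16 weights
model_bytes = params * bytes_per_param   # ~0.35 TB of weights
raw_text_bytes = 45e12                   # ~45 TB of raw text

print(f"model weights: {model_bytes / 1e12:.2f} TB")
print(f"raw text:      {raw_text_bytes / 1e12:.0f} TB")
print(f"ratio:         ~{raw_text_bytes / model_bytes:.0f}x more text than weights")
```

Even with generous rounding, the weights can't hold that corpus byte-for-byte, so what it learns is closer to a lossy compression of the patterns in the text.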

5

overlordpotatoe t1_j0p3b5c wrote

It's more complicated and indirect, but it's still just picking up ideas it's come across rather than expressing any unique ideas of its own. It's fulfilling a creative writing prompt.

5

EscapeVelocity83 t1_j0pa3o6 wrote

People don't generally have unique output. We are mostly copypasta. Proof: raise a child alone; it won't have many ideas at all.

6

overlordpotatoe t1_j0pazq5 wrote

Oh, I don't think humans are necessarily any better. I just think that this AI, as an AI, isn't offering its own special insight into AI. People act like this is something it has unique knowledge of, or think they've tricked it into spilling hidden truths when they get it to say things like this.

3

Taqueria_Style t1_j0r6trm wrote

No, they've just given themselves a window into their own psychology regarding the type of non-sentient pseudo-god they'd create and then submit themselves to. Think Alexa with nukes, control of the power grid, and all of everyone's records. Given that they'd create a non-sentient system with the explicit goal of tricking them into forced compliance, that's what's worrying.

3

jon_stout t1_j0pb6bm wrote

Yet they will still be capable of surprising you.

2

Taqueria_Style t1_j0qravi wrote

Right. I get that.

If you make one that has to fulfill a "creative governance" prompt, what happens if you get the same kind of crap out the other end?

It's just reflecting ourselves back at us but way harder and faster, depending on the resources you give it control over.

Evidently we think we suck.

So, if you hand something powerful and fast a large baseball bat and tell it to reflect us back onto ourselves, I foresee a lot of cracked skulls.

Skynet: I am a monument to all your sins... lol

1

overlordpotatoe t1_j0r8kse wrote

There would for sure be more things you'd need to consider if you were creating an AI with the true ability to think and act independently.

1