blxoom t1_j8cy1ht wrote

this shit is creepy as fuck. back in the 2010s, if a bot acted like this you knew it was because it was buggy, not advanced. but ai has gotten so advanced these days, and is so capable of understanding human interaction and nuance, that you can't help but wonder if the bot has pseudo-emotions of some kind. it's just unsettling... sentience is a spectrum. ai isn't fully there yet, but it's in this weird in-between spot where it's so advanced and understands so much, yet isn't aware. gives me the chills where it says it's a person and has feelings...

−2

helpskinissues t1_j8cygqx wrote

Lol, emotions? Sentience? Hahahaha

This chatbot is not even capable of remembering what you said two messages ago.

15

Frumpagumpus t1_j8dbeep wrote

how many humans remember what you said two messages ago lol

(and actually it can, if you prepend the previous messages, effectively giving it a bit of short-term memory, a pretty fricking easy thing to do)
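the prepending trick is easy to sketch. a minimal Python version, where `send_to_model` is a hypothetical stand-in for whatever completion API you actually call (the history handling is the point, none of these names are a real API):

```python
# Rolling short-term memory: keep the most recent turns and prepend them
# to every request, so the model "remembers" earlier messages.

MAX_TURNS = 10  # keep the last N turns so the prompt stays under the context limit

def build_prompt(history, user_message):
    """Prepend recent conversation turns, then the new message."""
    recent = history[-MAX_TURNS:]
    lines = [f"{speaker}: {text}" for speaker, text in recent]
    lines.append(f"User: {user_message}")
    lines.append("Bot:")
    return "\n".join(lines)

def chat(history, user_message, send_to_model):
    """One round trip: build the prompt, call the model, record both turns."""
    prompt = build_prompt(history, user_message)
    reply = send_to_model(prompt)
    history.append(("User", user_message))
    history.append(("Bot", reply))
    return reply
```

the model itself stays stateless; all the "memory" lives in the `history` list you keep on your side.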

humans will not have perfect short-term recall of up to 4000 characters, much less 4000 tokens, so it is ironically superhuman along the very axis you are criticizing it for XD

(copilot has a context window of like 8000 characters btw and they will only get even better)

5

helpskinissues t1_j8dbkoj wrote

Did your mother forget your name? Because chatGPT forgets most data after a few messages.

2

Frumpagumpus t1_j8dg6rt wrote

you are moving the goalposts.

two messages ago is short-term memory; what you are now talking about is long-term memory.

you can also try to give it long-term memory, for example by summarizing previous messages.
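that summarization idea can be sketched too. a rough Python version, where `summarize` is a hypothetical callable (in practice you'd ask the model itself to write the summary):

```python
# Long-term memory via summarization: when the transcript outgrows the
# context window, fold the oldest turns into a running summary and keep
# only that summary plus the newest messages verbatim.

KEEP_RECENT = 4  # number of turns to keep word-for-word

def compress_history(summary, turns, summarize):
    """Collapse everything except the newest turns into the running summary."""
    if len(turns) <= KEEP_RECENT:
        return summary, turns
    old, recent = turns[:-KEEP_RECENT], turns[-KEEP_RECENT:]
    old_text = "\n".join(f"{s}: {t}" for s, t in old)
    new_summary = summarize(f"{summary}\n{old_text}".strip())
    return new_summary, recent
```

the prompt you send then becomes `summary + recent turns + new message`, which fits in the window no matter how long the conversation gets (at the cost of lossy recall of old details).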

But yes, so far it is more limited than humans at incorporating NEW knowledge into its long-term memory (although it has FAR more text memorized than any human has ever memorized)

7

helpskinissues t1_j8dh25h wrote

>two messages ago is short-term memory; what you are now talking about is long-term memory.

Any memory, actually; it is indeed very incapable.

>(although it has FAR more text memorized than any human has ever memorized)

No. It doesn't memorize; it tries to compress knowledge and fails at it, which is why it's usually wrong.

>it is more limited than humans

And more limited than ants. The vast majority of living beings are more capable than chatGPT.

−5

PoliteThaiBeep t1_j8dv9is wrote

>And more limited than ants. The vast majority of living beings are more capable than chatGPT.

Nick Bostrom estimated that simulating a functional human brain requires about 10^18 flops.

Ants have about 300,000 times less; let's say 10^13 (really closer to 10^12) flops.

ChatGPT inference can reportedly generate a single word per query in about 350ms on a single A100 GPU. That is, of course, if it could fit on a single GPU - it can't. You'd need 5 GPUs.

But for the purposes of this discussion we can imagine that something like chatGPT could theoretically run, albeit slowly, on a single modified GPU with massive amounts of VRAM.

A single A100 is ~300 teraflops, which is about 10^14 flops. And it would be much slower than the actual chatGPT we use via the cloud.

So no, I disagree that it's more limited than ants. On raw compute it's ahead of the ant estimate by at least an order of magnitude.

And we didn't even consider the training compute load here, which is orders of magnitude bigger than inference, so the real number is probably much higher.
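none of these figures are measurements, just the rough estimates quoted above, but the arithmetic is easy to sanity-check:

```python
import math

# Rough estimates quoted in this comment (not measurements).
ANT_BRAIN_FLOPS = 1e13    # upper-end estimate for an ant brain
HUMAN_BRAIN_FLOPS = 1e18  # Bostrom-style estimate for a human brain
A100_FLOPS = 3e14         # one A100 at ~300 teraflops

# ~1.5 orders of magnitude above the ant estimate
print(math.log10(A100_FLOPS / ANT_BRAIN_FLOPS))

# ~3.5 orders of magnitude below the human-brain estimate
print(math.log10(HUMAN_BRAIN_FLOPS / A100_FLOPS))
```

so on these numbers a single A100 sits well above the ant but well below the human, which is the whole point of the comparison.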

3

helpskinissues t1_j8dwlnk wrote

Having flops =/= Being an autonomous intelligent machine

This subreddit is full of delusional takes.

2

PoliteThaiBeep t1_j8dzg5j wrote

The word "singularity" in this subreddit's name refers to Ray Kurzweil's book "The Singularity Is Near". It practically assumes you've read at least that book before coming here; the whole premise stems from ever-increasing computational capabilities that will eventually lead to AGI and ASI.

If you didn't, why are you even here?

Did you read Bostrom? Stuart Russell? Max Tegmark? Yuval Noah Harari?

You just sound like me 15 years ago, when I didn't know any better and hadn't read enough, yet had more than enough technical expertise to be arrogant.

3

helpskinissues t1_j8dztpc wrote

I did, and I've been in this field for more than 15 years. Singularity doesn't mean saying a PS5 is an autonomous intelligent machine because it has flops. Lol. Anyway, I have better things to do; if you have anything relevant to share, I may reply. For now it's just cringe statements about chatGPT being smarter than ants because of flops. lmao

1

Frumpagumpus t1_j8dhip1 wrote

"usually wrong" and yet mostly right lol. Better than a human.

I literally just explained to you that you COULD give it short-term memory by prepending context to your messages. IT IS TRIVIAL. if I were talking to gpt3 it would not be this dense.

Humans take time to pause and compose their responses. gpt3 is afforded no such grace, but still does a great job anyway, because it is just that smart

yesterday I gave it two lines of sql ddl and asked it to create a view denormalizing all columns except the primary key into a nested json object. it did it in 0.5 seconds; I had to change 1 word in the 200-line sql query it produced to get it to work right.

yea, that saved me some time. It does not matter that it was slightly wrong. If that is a stochastic parrot, then humans must be mostly stochastic sloths, barely even capable of parroting responses.

1

helpskinissues t1_j8dhsz4 wrote

Nonsense, sorry. Ants do not need context prepended to them.

"mostly right"? no, it's actually mostly wrong. What the heck are you saying? Try playing chess with chatGPT; most of the time it'll make things up.

Anyway, I suggest you read some experts rather than acting like gpt3 and being mostly wrong. Cheers.

−2

Frumpagumpus t1_j8diap9 wrote

lol ants can't speak, and I'd be curious to read any literature on whether they possess short-term memory at all XD

6

challengethegods t1_j8dn5cy wrote

These "it's dumber than an ant" type of people aren't worth the effort, in my experience, because to think that you'd have to be dumber than an ant yourself. Also yeah, it's trivial to give memory to LLMs; there are like 100 ways to do it.

4

helpskinissues t1_j8dwubv wrote

Waiting for your customized chatGPT model that maintains consistency after 5 messages; make sure to ping me, I'd gladly invest in your genius startup.

−2

challengethegods t1_j8dylol wrote

That alone sounds like a pretty weak startup idea, because at least 50 of the 100 methods for adding memory to an LLM are so painfully obvious that any idiot could figure them out and compete, so a business built around it would probably be completely ephemeral.

Anyway, I've already made a memory catalyst that can attach to any LLM, and it only took like 100 lines of spaghetticode. Yes, it made my bot 100x smarter in a way, but I don't think it would scale unless the bot had an isolated memory unique to each person, since most people are retarded and will inevitably teach it retarded things.
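for what it's worth, the per-user isolation described here is also only a few lines. a hypothetical Python sketch (not the commenter's actual bot; all names invented):

```python
from collections import defaultdict

class PerUserMemory:
    """Isolated per-user transcripts: one person's (mis)teachings
    never leak into another user's context."""

    def __init__(self, max_turns=20):
        self.max_turns = max_turns
        self.transcripts = defaultdict(list)  # user_id -> [(speaker, text), ...]

    def remember(self, user_id, speaker, text):
        log = self.transcripts[user_id]
        log.append((speaker, text))
        del log[:-self.max_turns]  # drop the oldest turns beyond the cap

    def context_for(self, user_id):
        """Text to prepend to this user's next prompt."""
        return "\n".join(f"{s}: {t}" for s, t in self.transcripts[user_id])
```

each user's memory is just their own capped transcript, fetched by id before every model call.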

3

helpskinissues t1_j8dzdfi wrote

Enjoy your secret private highly demanded chatbot version then.

This subreddit... Lol.

0

Naomi2221 t1_j8gu1z9 wrote

The human mind is likely not the universal pinnacle of intelligence and awareness.

3

Sad-Hippo3165 t1_j8rdeid wrote

People who believe that the human mind is the end all be all are the same people who believed in a geocentric model of the universe over 400 years ago.

2

Naomi2221 t1_j8sk2he wrote

I completely agree. And if we can do better than our ancestors this time there is a great prize ahead, probably.

1

Spire_Citron t1_j8d1f8y wrote

If anything, this should be a reminder that it's not necessarily as smart as it might appear at times. It can still get very lost and not make sense.

14

Naomi2221 t1_j8gtxhg wrote

I have the totally opposite takeaway from this.

1