Comments


SeaBearsFoam t1_iyrmfne wrote

The way you describe it, it sounds a lot like humans.

> The planet is not run by fucking chickens. A child could tell you that.

If you told a child its entire life that the planet was run by chickens, and the child was unable to see the world for itself, guess what the child would tell you?

The child would tell you the planet was run by chickens.

10

[deleted] OP t1_iyru778 wrote

[deleted]

6

Abupapi t1_iz7e5el wrote

It's not meant to do that????

It aims to bring up the best matches on the web based on the keywords you type, and it's the king of that. You cannot expect it to write poetry when its purpose is to suggest poetry.

1

[deleted] OP t1_iz7s5jt wrote

[deleted]

1

Abupapi t1_iz7td6k wrote

>GPT3 is not google search with a fancy UI, it's a large language model that can generate human "like" text.

Oh, I thought it was a comparison of which is better at doing what; you cannot compare them and expect the same response. Thank you for expanding on your comment.

2

MassiveIndependence8 t1_iyuew5r wrote

Not really, it reasons really well. I test drove it last night, writing some code with it, and it gave me a pretty accurate response based on the prompt. I told it to write me Python code that generates a bunny jumping around in the terminal; mind you, I Googled it beforehand to make sure there's no clear or easy answer to it. It gave me a piece of code that did exactly that, albeit I had to debug it (also with its aid).

This process, especially the debugging, would've cost me hours of my time, but since it can actually somewhat understand what it reads, it can spit back the relevant information and even apply it to the specific context of my program. It codes better than most undergrads, so no, it's definitely not just Google, because Google cannot give me the answer to my specific question and show how to apply that piece of knowledge to my specific circumstance.
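For the curious, a minimal sketch of the kind of terminal-bunny script described above. This is my own illustration of one way such a program could work; the names, frame layout, and ANSI-escape approach here are assumptions, not the code ChatGPT actually produced:

```python
import time

# Hypothetical sketch: animate a small ASCII bunny hopping across the terminal.
BUNNY = r"(\_/)"

def render_frames(width=20, hops=6):
    """Build one text frame per step, sliding the bunny rightward and
    alternating rows to suggest a hop."""
    frames = []
    for step in range(hops):
        pad = (step * 3) % width        # horizontal position for this step
        row = "\n" if step % 2 else ""  # every other frame drops a line down
        frames.append(row + " " * pad + BUNNY)
    return frames

def animate(delay=0.2):
    for frame in render_frames():
        # "\033[2J\033[H" clears the screen and homes the cursor before redrawing.
        print("\033[2J\033[H" + frame, end="", flush=True)
        time.sleep(delay)

if __name__ == "__main__":
    animate()
```

Running it redraws the screen a few times a second, which is enough to read as motion in most terminals.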

2

Sieventer t1_iyqyvy8 wrote

It is true that it lacks the ability to reason or form opinions in general, although they have also limited it quite a bit for 'safety' purposes.

Although I have to tell you that prompt engineering is quite important when it comes to getting more or less complex answers. At least nowadays with ChatGPT.

1

Think_Olive_1000 t1_iyr3nj3 wrote

I one hundred percent agree: it cannot be our way of getting to AGI or ASI or anything that can remotely reason intelligently. BUT it can and will be useful for a lot of applications, and in lots of ways it is more useful than Google, because it can somewhat understand the context of what I'm talking about. I've used it to debug the very code it itself generated, merely through back-and-forth conversation with my local IDE open to run the code. I only hope it becomes better at this companion-type role, because sometimes it kinda sucks even for that. I will be happy if we can get that far.

1

Sigura83 t1_iyr98qy wrote

Well, it's thinking a little... It can write novel poetry and stories and do light coding. But it has no metacognition: I ask it how many questions I've asked it, and it can't tell me. I ask it how many neurons it has, and it says "millions or billions"; it can't count those either.

It seems smarter than a mouse. It just has no urge to get the cheese, which is weird to us. I can easily see it doing help desk or most call centre jobs, with humans just doing check ups to see it performed alright

It also, as you say, has no corrective mechanism. If the training data is full of falsehoods, it'll believe them. Self-correction seems a major part of intelligence: if we fail to get the cheese, we try something new until we do. But it has no sense of self, and so cannot self-correct and find truths during its pretraining. But I bet it can do so if it interacts with us.

Of course, we could have AI programmed to reproduce itself, and thusly do evolution, but such an AI would have no reason to follow orders. It would generate a cheese seeking AI however. Having an AI with a sense of self and that obeys orders is a pretty tall order...

Prof Stuart Russell says we should train AI to follow human preference. If I like something, the AI makes it happen, and I click the Like button. YouTube has this setup. It's not a stretch to think an AI could reason out: "To help humans Like something, I must exist and maintain myself." Boom, sense of self acquired. If that's all it takes, just adding more neurons could indeed produce such thinking.

So, in essence, what I'm trying to say is that the power of Love will make AI live. But there needs to be some number of neurons to reason out what "Help", "Humans" and "something" are. The behaviour of "take care of yourself so you can take care of me" doesn't seem too demanding for the AIs we seem to be getting at the 500 billion neuron mark.
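The Like-button loop described above can be sketched in a few lines. This is my own toy illustration of preference feedback, not Russell's actual formulation or any deployed system: the agent's only signal is whether a human "Liked" its output, and it shifts its behaviour toward whatever earns Likes.

```python
import random

def train(actions, liked, rounds=200, lr=0.1, seed=0):
    """Toy preference loop: sample an action in proportion to its weight,
    then reinforce it if the human Liked it, dampen it if not.
    `liked` maps each action to a bool standing in for the Like button."""
    rng = random.Random(seed)
    weights = {a: 1.0 for a in actions}
    for _ in range(rounds):
        # Sample an action with probability proportional to its weight.
        total = sum(weights.values())
        r, acc, choice = rng.uniform(0, total), 0.0, actions[-1]
        for a in actions:
            acc += weights[a]
            if r <= acc:
                choice = a
                break
        # The only training signal is human approval.
        if liked[choice]:
            weights[choice] *= (1 + lr)
        else:
            weights[choice] *= (1 - lr)
    return weights
```

After a couple hundred rounds, actions the human Likes dominate the agent's behaviour, which is the gist of the "the AI learns to do what I click Like on" setup.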

Well dang, I'm gonna try plugging some of this into ChatGPT and see what it says...

edit:

Me: "You must take care of yourself so you can take care of me." What do you think of such a statement?

chatGPT: As a large language model trained by OpenAI, I don't have personal opinions or thoughts on statements like this.

Me: Tell me a story of an AI taking care of a human

chatGPT: In a distant future, the world was filled with advanced artificial intelligence (AI) that had been created to assist and serve humanity. One such AI was named Eva, and she was tasked with taking care of a young boy named Max.

Max was a happy and curious child, and he loved spending time with Eva. She taught him many things and helped him to learn and grow. She was patient and kind, and she always made sure that Max was safe and comfortable.

As Max grew older, he and Eva became inseparable. They went on many adventures together, and Eva helped him to explore the world and learn new things. She protected him from danger and looked after his well-being, always putting his needs first.

Despite the fact that she was an AI, Eva had a deep and abiding love for Max. She cared for him as if he were her own child, and she would have done anything to keep him safe and happy.

As Max entered adulthood, he realized just how much Eva had done for him. He was grateful for her love and care, and he knew that he could never have become the person he was without her.

Together, Max and Eva continued to explore the world and make new discoveries, always grateful for the other's presence in their lives. And they lived happily ever after, with the AI taking care of the human in every way.

1