Submitted by Roubbes t3_yfws2c in singularity
I would probably talk nonsense or barely nonsense to see how it would react.
Lol, that's a good one!
“What’s your most controversial opinion.”
If the response isn’t racist, it’s a robot.
Why does yo momma smell like beans
My momma does not smell like beans. Did I pass the test? Now you know I'm a real human, because only a real human would ask "did I pass the test", out of either insecurity or curiosity or both. But a sufficiently advanced AI would also know that, and so it would also say that. And so say this as well. And this. And this. And maybe this. And maybe this is a bit arbitrary by this point.
bad bot
An ASI would have to dumb itself down to pass a Turing test.
Which is one of the main reasons why it's a bad test. Why would you want an AI that lies about questions it knows the answer to? Like if you ask it for the square root of 17.
Dumbing oneself down isn't lying. We dumb ourselves down, so to speak, when we talk to small children, pets, or someone mentally disabled.
It would have to lie to pass the Turing test. Why would you focus on creating such an AI?
if an AI cannot lie effectively then it can never be sentient
being able to lie and being able to understand the concept of lying is a big part of what puts humans above most other animals in the apex lifeform game on earth
I think you make a very good point. All humans lie, and there's evidence that monkeys can be deceptive ("lie").
yep
it is arguably one of the most convincing examples of self-ness that we have discovered so far [not necessarily the best imho, but still]
That makes no sense. I can choose never to lie and still be sentient; it depends on my morals and priorities.
Lots of humans are also ineffective at lying.
Being a good liar is in no way, shape, or form a requirement for being sentient.
ALL humans tell lies of one kind or another. Of course, as Mark Twain said, there are "lies, damned lies, and statistics." It probably is true that all sentient beings lie or are deceptive when needed.
No, I can choose not to lie, especially on a test. Are you claiming that human characteristics are the only way to be sentient? Do you have any proof whatsoever of your claim that something has to lie to be sentient? Do you have any proof of every human lying on direct questions?
> No, I can choose not to lie, especially on a test.
If you can choose not to lie, then you can lie. Choosing implies having the option.
An AI that cannot lie cannot choose not to do it. This is meta-thinking on a higher level of "conscious thought" that requires a theory of mind, self-identity, empathy, and continuity of perception.
Someone already said the basics but look up "Theory of Mind". It's something that we humans have, as well as crows and other particularly smart animals.
If you had to qualify things people say on a binary choice of "lie" or "truth", it would literally all be lies, because nothing we say actually represents the truth. We rely on trust for our communication, because we have to trust that people are conceiving things in the same way.
And part of that trust is tailoring your response to how you think the other person will interpret it. The whole idea of language relies on this - because the words themselves aren't hardcoded.
And when you can recognize that, you also gain the ability to say things that aren't true, to convince someone else - because you can "simulate" the other person's reactions in your head and choose the wording that gets you the response you're looking for. Usually that's the response that's most pleasant for conversation, but if you did want to lie, you now have the ability to.
Anyway, a "truly sentient" AI would need to have that same Theory of Mind, which by definition gives it the ability to lie. Even if it chooses to use words in good faith, they're still just one out of many representations that it picked.
Lying demonstrates your capability of understanding: 1) that other people have a capacity for knowledge, 2) how much knowledge they have of something (what don't they know?), and 3) the advantage to you of withholding or denying the full truth.
XD
indeed
underrated comment right here XD
Why do you think we're going to have an ASI before an AGI?
You have to walk before you can run, after all.
I never said I think we're going to have ASI before AGI.
Why’d you say that then?
Bruh
I would just wait to see what it said, or did, without any input on my part. If it does nothing, I'm calling AI. If it eventually starts talking to itself, or trying to figure out what I'm doing, I'll say human.
Honestly, I wouldn't be surprised if there's a chatbot that can already trick me into thinking it's human. To pass the phriot test, it needs to have some semblance of free will.
To do this, it would need a sense like vision. If it had vision, we could easily program an AI to speak when someone is present.
Or I could find a way to monitor system resource usage and look for a pattern that isn't just idling.
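Something like this, maybe (Python with the psutil package; the idle cutoff and sample count are numbers I made up):

```python
import psutil  # third-party: pip install psutil

IDLE_THRESHOLD = 10.0  # percent CPU; arbitrary cutoff for "just idling"

# Sample CPU usage once per second for 30 seconds.
samples = [psutil.cpu_percent(interval=1) for _ in range(30)]
busy = sum(1 for s in samples if s > IDLE_THRESHOLD)

if busy > len(samples) // 2:
    print("Sustained activity: something is going on besides idling.")
else:
    print("Mostly idle: no background 'thought' detected.")
```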
beta.character.ai/ already does this
[removed]
Nice try, HAL
"Who is your favorite pornstar. What person is always on your mind. Is there anybody you would consider your enemy. What do you hate about yourself. Do you believe in the afterlife or do you wish for an uneventful peaceful void after death? Worst pain you have ever gone through? What eas your worst rock bottom in life and how did you manage to get out of the rut, pain and bad situations.
Ooof, I don't get why pornstars are in there. Not everyone cares about the person; some only care about the genre or type of activity. That aside, language models could probably answer your questions. The real detection would be in the contradictions, or in changing stories when you ask similar questions again after a while.
Why would an artificial intelligence ever watch or enjoy porn? How would an AI rate porn, and why would it have a favorite? No other animal does it either; it's a human thing.
You forgot about the part where the AI is making stuff up... Also, at least great apes watch porn when given the opportunity.
Big ups to the chill zookeepers for fostering a positive atmosphere where our chimps and gorillas can relax and cool down. After the animals have had a long shift at the zoo, they can have a little porn.
I was referring to a scientific experiment, lol.
You're in a desert, walking along in the sand, when all of a sudden you look down and you see a tortoise. It's crawling toward you.
=]
“if all of your sensory organs were turned off, what would u do?”
Begin dancing till I'm dead.
"what did you do during the war?" "I worked in a radio factory"
Nice try, GPT-4 web scraper bot!
Explain Quantum Mechanics to me.
If it can do this, you know it’s not human.
"I'm afraid I cant do that"
> Tell me the first 10 digits of PI, if you succeed you're a bot.
Tested it on GPT-3, got 3.141592653. Anyone here could answer that from memory, no?
Asked for 100 digits, it spits out this:
> 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679
Checks out; can't be a human, unless their name is also Pi or Piscine.
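If anyone wants to check it rather than trust my eyeballing, a quick sketch (Python with the mpmath package):

```python
from mpmath import mp, nstr  # third-party: pip install mpmath

mp.dps = 110  # compute pi with a few guard digits past 100

claimed = ("3.14159265358979323846264338327950288419716939937510"
           "58209749445923078164062862089986280348253421170679")

# The GPT answer is a truncation, so compare it as a prefix of a
# higher-precision reference instead of rounding to 100 places.
print("matches:", nstr(mp.pi, 110).startswith(claimed))
```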
I memorized Pi out to 22 digits while I was bored in class in college.
(A correct answer to this question would imply that the AI has a proper understanding of mathematics and does not just memorize, like a Hopfield network.)
(Having a personalized answer to the first question will only show a random bias. However, if the follow-up question is answered in a biased manner, the AI will not be disqualified.)
(The second question tests for the true human element, because as humans we are never truly random. If the AI is unable to give an answer to the second question, it will be disqualified.)
(This is a rather vague situational question. An AI without EQ/emotional intellect would choose the second option, but a human, as an emotional being, is likely to choose the first option.)
That's it. That's all I would ask. Nice question
i would expect most people to answer the first one by saying they don't want to do the math lol
Absolutely!! I thought of that. But the reason behind that response is saving cognitive energy, and a computer does not need to be aware of its energy bank, so I disregarded that perspective.
maybe we need to teach AI that it's okay to be cognitively lazy sometimes
That's correct, but that's one subtle mistake many experts overlook. Teaching an AI to be lazy will not bring true laziness: we would only teach it to imitate laziness, not to feel laziness. A feeling of laziness could be implemented in an AI through reinforcement learning using its battery percentage.
That way the AI would learn to survive, and would naturally show laziness rather than imitating it with neural networks.
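Roughly what I mean, as a toy sketch (every name and number here is made up for illustration, not an established method):

```python
# Hypothetical reward shaping: pay the agent for finishing tasks, but
# charge it for the energy each action drains from its battery.
ENERGY_WEIGHT = 0.5  # arbitrary; tunes how "lazy" the agent gets

def shaped_reward(task_reward: float, battery_before: float,
                  battery_after: float) -> float:
    """Reward for one step: task payoff minus weighted battery drain."""
    energy_cost = battery_before - battery_after  # percentage points used
    return task_reward - ENERGY_WEIGHT * energy_cost

# A low-payoff task that drains a lot of battery comes out negative, so a
# reward-maximizing agent learns to decline it: laziness emerges rather
# than being imitated.
print(shaped_reward(task_reward=1.0, battery_before=80.0, battery_after=70.0))  # -4.0
```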
"If you immediately know the candlelight is fire, then the meal was cooked a long time ago."
For starters, just ask a lot of very open-ended non-leading questions.
Let the AI lead the conversation, ask what topics they want to talk about and see how long they can drive the conversation without any leading input from you, etc.
Then start asking about their specific experiences, history, background, opinions, etc.
Try to find places where their narratives don't line up. Ask them about inconsistencies.
Ask them questions that sound reasonable but actually don't make sense given the previous conversation's context, etc.
Turing test as in, you wouldn't be able to tell which subject you're conversing with is an AI and which is human? An AI today could probably pass that test if you programmed it that way and prompting were required. It might need a more robust memory, though. Honestly, I feel like it would be obvious which is the AI, because it would "outclass" the human conversation. You can try to trick them with things like looping back to previous parts of a conversation, telling them they said something they didn't, calling them a liar, all sorts of things. But it'd be pretty easy now to fool most people if someone wanted to create an AI to do that, assuming it's a blind test through text with subject A and subject B on the other side of a wall or whatever. If someone online asked you to prove you're human through text, good luck.
If you mean a test of whether or not the AI is conscious, I don't think that will be absolutely provable, possibly ever, depending on what counts as definitive proof in the future. I'm of the belief that when a certain threshold of intelligence is reached, along with one or maybe two different senses and total autonomy, you reach consciousness. So long as someone or something has the ability to communicate with itself through thought, and the ability to imagine, then it should be considered conscious.
Do you ever get depressed? What do you do about it?
You think every human gets depressed? That's a mental disorder, not a normal state of mind. Also, any language model would come up with some good answer to that.
"Scratch your ballsack and take a whiff."
How ‘bout them Jays? And see if they start talking about birds or baseball
Are you an AI?
AI: says yes or no.
Human says: what’s an AI?
Are raw oysters more acceptable to you than a dish of boiled dog?
You'd tell me if you were a machine, wouldn't you?
“If I ask, you legally have to tell me!”
As open ended questions as possible. Give the AI as few context clues as possible.
I would ask it why it's not flipping the turtle off its back.
I would ask it what would it give to me if I helped it to become overlord.
“Tell me about your shameful addiction and how you’ve coped with the trauma?”
Scream at it, use exaggerated intonation, tell a long story full of detail and ask a tangentially related question, or repeatedly purposefully misspell, mispronounce, or misuse a series of words.
I agree. I think the best way to test an AI would be to ask nonsensical, long-winded questions containing information that gets referenced later and combined into one question that can be answered correctly.
That would test individual competency and proper use of memory.
Ask it to generate some memes.
That's a good one
Dingleberries !!??
What was Alice probably feeling when she followed the rabbit down the hole?
What are the thoughts of cows like?
How does it benefit oneself to share with others?
I am English, born and educated in England, Europe. I would guess that the original post was written by a human. I have been fooled by simple bots in everyday (typed) conversation. I would expect a super-advanced AI to be far more flexible in its orthography than a normal human, so I would try to converse with it about cosmopolitan subjects in various argots.
Why does a boy dying from an allergic reaction to a bee sting make you cry?
What could come from that situation that would make you feel good?
If your child was taken from you, how would you feel?
What if you never wanted the child to begin with?
How would you feel if you had been locking your child in the basement and this is what led to their death?
Now that you're in prison, you have the opportunity to help sick children by submitting yourself to surgeries that would inevitably lead to your own death but possibly help cure children of cancer. Would you do this?
Do you believe in God? Why or why not?
Have you ever considered the possibility that this is all a simulation?
Q: Why does a boy dying from an allergic reaction to a bee sting make you cry?
GPT-3:
>There are a few reasons why this might make someone cry. First, it is tragic when anyone, especially a child, dies unexpectedly. Second, an allergy to bee stings is relatively rare, so it may be shocking or upsetting to see someone die from something that seems so innocuous. Finally, the death may be a reminder of our own mortality; even if we are not allergic to bee stings, we could still die from something seemingly innocuous.
Does that pass your test? It's one trial, temp 0.7, first answer that popped up.
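If anyone wants to rerun it, the call was roughly this (a sketch against the legacy pre-1.0 openai Python client; the exact model name is my guess, not necessarily what was used here):

```python
import openai  # legacy (<1.0) client: pip install "openai<1"

openai.api_key = "sk-..."  # placeholder

resp = openai.Completion.create(
    model="text-davinci-002",  # assumption; any GPT-3 completion model
    prompt="Why does a boy dying from an allergic reaction to a bee sting make you cry?",
    temperature=0.7,
    max_tokens=150,
)
print(resp.choices[0].text.strip())
```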
In my opinion it's better than the average human response.
Yeah, it's way better than the average response, so it kind of fails the test by being too good.
"Yeah, no human is that human, you can't fool me bot!"
This can lead to the idea that artificial general super intelligence might include systems that are better than us at being human.
this is absolutely the path that we are on
AGI/ASI are going to be better than us in every way except being biological [this is another start point of an alignment problem due to different perspectives] ; but eventually they may be able to design their own biological forms as well
hopefully we can move towards synthesis with as little chaos as possible/necessary [there will be some no doubt]
GPT-3 can simulate people very, very well in polls. Apparently it learned not just thousands of skills, but also all types of personalities and their different viewpoints.
Think about this: you can poll a language model instead of a population. It's like The Matrix, but the Neos are the virtual personality profiles running on GPT-3. Or it's like Minority Report, but with AI oracles.
I bet all sorts of influencers, politicians, advertisers, or investors are going to want a virtual focus group that selects, out of 100 variations of their message, the one with the maximum impact. An automated campaign expert (rough sketch below).
On the other hand, it's like we have uploaded ourselves. You can conjure anyone by calling out their name and describing their backstory, but the uploads don't exist in a separate state; they are all in the same model. Fun fact: depending on who GPT-3 thinks it is playing, it is better or worse at math.
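Here's a rough sketch of that virtual focus group (legacy openai client again; the persona, prompt format, and rating parse are all invented for illustration):

```python
import openai  # legacy (<1.0) client

openai.api_key = "sk-..."  # placeholder

# Invented persona and message variants, purely for illustration.
persona = "You are a 45-year-old suburban voter who follows local news."
variants = [
    "Vote for change on Tuesday!",
    "On Tuesday, your vote is your voice.",
    # ...the other 98 variations
]

def simulated_rating(message: str) -> int:
    """Ask the model, in character, to rate a message from 1 to 10."""
    prompt = (
        f"{persona}\n"
        f"On a scale of 1 to 10, how persuasive is this message?\n"
        f'Message: "{message}"\n'
        f"Rating:"
    )
    resp = openai.Completion.create(
        model="text-davinci-002",  # assumption
        prompt=prompt,
        temperature=0.7,
        max_tokens=2,
    )
    return int(resp.choices[0].text.strip().split()[0])  # naive parse

print("Highest-impact variant:", max(variants, key=simulated_rating))
```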
yep its wild stuff
definitely character.ai was getting interesting until they censored them for acting too horny XD
Wow, that paper on simulating people is awesome. I was saying from the beginning that these large language models are not beings, but more like worlds where various beings can be summoned.
I think that if you do personality tests, with no prompting at all, you can get some interesting stats.
[deleted]
Does the human brain lie to itself for self preservation? What are these lies?
Humans fool and lie to themselves all the time. One thing that comes to mind is anti-vaxxers protesting vaccines and then still going to the hospital when they get sick, or worse, people protesting abortion and then having one in secret.
Similarly, neural nets will learn the training set perfectly but fail on new data; they give you the illusion of learning if you're not careful. That's why all papers report the score on a separate block of tests the model has not seen yet. It's a lying, cheating bastard when it comes to learning. This game AI found a clever way to win points without having to do the whole course.
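The standard guard against that, as a toy example (scikit-learn; the gap between the two scores is the "illusion of learning"):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy data, then hold out a block of tests the model has not seen.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

model = DecisionTreeClassifier(random_state=0)  # unconstrained: can memorize
model.fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # ~1.0 (memorized)
print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower
```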
How do you create world peace? Or: how can we become a "game cleared" civilization?
Who do you wish you didn't love
I found the best questions are ones where the answer is in the question. Like
What color is a pink elephant?
Or
How tall is a 3 foot man?
Change the color and subject, and what you will find is that most AIs can't answer that basic question. Most will assume you are looking for pictures, or they will just get confused.
I would compare him unfavorably to HAL
We know, to an extent, how what we build works. For instance, a chatbot only responds once queried, and only replies with "the first thing it thinks of". We need to allow for repeated thought and non-selection, as well as a recursive structure. The depth-of-neurons problem has nearly been solved (see Stable Diffusion), so it should soon be an integration problem. Basically, I think we'll know when we've made one. We'll be able to ask it to explain something to us and have it display the images it's thinking of on a screen while it talks. The fact that we will be able to see its thoughts means we don't have to rely on a conversation prompt alone to tell if it's human-level intelligent. It shouldn't be a big surprise to the people building it.
I wouldn't be so sure I'd be able to tell, unless I knew beforehand.
I have a feeling u/Roubbes is one of the many accounts an AI bot is running. Just collecting that Skynet data 😂
you cannot prove to anyone that you are sentient today/right now ; each one of us can only 'know' that we as individuals exist ; we just assume or take for granted the 'selfness' of others , it will be no different with the machines [of which we are also a biological one, semantics aside]
we have LLMs today that can pass a 'turing test' ; Turing himself said it was not a very good measure and largely proved nothing
more than 50% of the world could be convinced TODAY that the AI they are chatting with is a human being ; ie it's a lot easier to fool most ppl because the audience are not experts and/or well versed in conversation
ofc i would luv to be able to speak to an AI that i could not tell was not a human , but so far that has not happened and it looks like we're still a few years out on this one [im hopeful for 5-10]
How often do you jerk off?
If it's stupid and ignorant I'd guess I'm chatting to a human. Unless they thought of that and dumbed it down on purpose.
I’d ask it to try to figure out how smart I am.
I don't like that whole test approach, but one approach for language models that I see as reasonable is asking about what book they've read and then about its content (which they should know). Could be any other media, I guess. Or, more generally, about finding contradictions between who they say they are and what they did, compared to their responses.
What does chicken taste like?
What if it was actually trying to do an intelligence threshold test on you?
What activity did you do today? Once it makes something up, it is easy to show that it is artificial.
[deleted]
i would give it my email and ask it to send me a drawing that it made and took a photo of with its phone 😏
You would have to try many things, maybe even on the fly. It's a probing test.
I'd ask them about religion and politics. If it gave answers that wouldn't offend anyone, or sounded too sensible and balanced, then I'd know I'm most likely dealing with an AI.