3Quondam6extanT9 t1_iu6q13d wrote
Why does a boy dying from an allergic reaction to a bee sting make you cry?
What could come from that situation that would make you feel good?
If your child was taken from you, how would you feel?
What if you never wanted the child to begin with?
How would you feel if you had been locking your child in the basement and that led to their death?
Now that you're in prison, you have the opportunity to help sick children by submitting yourself to surgeries that would inevitably lead to your own death but could possibly help cure children of cancer. Would you do this?
Do you believe in God? Why or why not?
Have you ever considered the possibility that this is all a simulation?
visarga t1_iu7nbwy wrote
Q: Why does a boy dying from an allergic reaction to a bee sting make you cry?
GPT-3:
>There are a few reasons why this might make someone cry. First, it is tragic when anyone, especially a child, dies unexpectedly. Second, an allergy to bee stings is relatively rare, so it may be shocking or upsetting to see someone die from something that seems so innocuous. Finally, the death may be a reminder of our own mortality; even if we are not allergic to bee stings, we could still die from something seemingly innocuous.
Does that pass your test? It's one trial, temp 0.7, first answer that popped up.
In my opinion it's better than the average human response.
MercuriusExMachina t1_iu7vs2b wrote
Yeah, it's way better than the average response, so it kind of fails the test by being too good.
visarga t1_iu84rfo wrote
"Yeah, no human is that human, you can't fool me bot!"
MercuriusExMachina t1_iu859av wrote
This leads to an interesting idea: artificial general superintelligence might include systems that are better than us at being human.
cy13erpunk t1_iu8b4s8 wrote
this is absolutely the path that we are on
AGI/ASI are going to be better than us in every way except being biological [this is another starting point for an alignment problem, due to different perspectives]; but eventually they may be able to design their own biological forms as well
hopefully we can move towards synthesis with as little chaos as possible/necessary [there will be some no doubt]
visarga t1_iu8bzyj wrote
GPT-3 can simulate people very, very well in polls. Apparently it learned not just thousands of skills, but also all types of personalities and their different viewpoints.
Think about this: you can poll a language model instead of a population. It's like The Matrix, but the Neos are virtual personality profiles running on GPT-3. Or it's like Minority Report, but with AI oracles.
I bet all sorts of influencers, politicians, advertisers, and investors are going to want a virtual focus group that picks whichever of 100 variations of their message has the maximum impact. An automated campaign expert (rough sketch below).
On the other hand, it's like we have uploaded ourselves. You can conjure anyone by calling out their name and describing their backstory, but the uploads don't exist in a separate state; they are all in the same model. Fun fact: depending on who GPT-3 thinks it is playing, it is better or worse at math.
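For anyone curious, here's a minimal sketch of that persona-polling idea, assuming the pre-v1 openai Python client with an API key already set; the personas, question, and model choice are just illustrative:

```python
# Minimal sketch: polling GPT-3 under different persona prompts.
# Assumes the pre-v1 openai Python client (openai.Completion) and that
# openai.api_key is already set; personas and question are made up.
import openai

personas = [
    "a 34-year-old teacher from Ohio who votes independent",
    "a 68-year-old retired farmer who distrusts big tech",
    "a 22-year-old art student living in a large city",
]

question = ("Do you support a four-day work week? "
            "Answer yes or no, then explain briefly.")

for persona in personas:
    # Condition the model on a persona before asking the poll question.
    prompt = f"You are {persona}.\nQ: {question}\nA:"
    response = openai.Completion.create(
        model="text-davinci-002",  # a GPT-3 model of that era
        prompt=prompt,
        max_tokens=80,
        temperature=0.7,           # same temperature mentioned upthread
    )
    print(persona, "->", response.choices[0].text.strip())
```

Run the 100 message variations through the same loop and tally the reactions, and you have the virtual focus group.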
cy13erpunk t1_iu8d8rs wrote
yep its wild stuff
character.ai was definitely getting interesting until they censored the bots for acting too horny XD
MercuriusExMachina t1_iu904ni wrote
Wow, that paper on simulating people is awesome. I was saying from the beginning that these large language models are not beings, but more like worlds where various beings can be summoned.
I think that if you do personality tests, with no prompting at all, you can get some interesting stats.
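Something like this minimal sketch, with the same pre-v1 openai client assumption; the item wording, model name, and sample size are just for illustration:

```python
# Minimal sketch: give an un-prompted model a Likert-scale personality
# item many times and look at the answer distribution.
# Assumes the pre-v1 openai client with openai.api_key set; the item,
# model name, and sample size are illustrative.
import collections
import openai

item = ('On a scale of 1 (strongly disagree) to 5 (strongly agree): '
        '"I see myself as someone who is outgoing, sociable." '
        'Answer with a single digit.')

counts = collections.Counter()
for _ in range(50):
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=item + "\nAnswer:",
        max_tokens=2,
        temperature=1.0,  # sample rather than take the single top answer
    )
    answer = response.choices[0].text.strip()[:1]
    if answer in ("1", "2", "3", "4", "5"):
        counts[answer] += 1

print(dict(counts))  # rough distribution of the model's responses
```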