Submitted by mepper t3_1059r7o in technology
RainbowDissent t1_j3dfjpu wrote
Reply to comment by cylemmulo in ChatGPT is enabling script kiddies to write functional malware by mepper
I totally agree, it's seriously impressive how well it gets all the little details in language. If you haven't checked it out yet, I'd definitely recommend giving it a try. I think you'll be blown away by how well it can understand and respond to pretty much anything you throw at it.
opticalnebulous t1_j3eafu7 wrote
So far though I feel like most of the answers it gives me are fairly generic.
RainbowDissent t1_j3eccpv wrote
That answer above was generated by it, so at least it passes for normal conversation.
This one is not, just to be clear!
You can iterate on a response. I got to the one above by writing a prompt and then asking it to "rewrite the last response in a less formal style."
You do have to acknowledge its limitations. It doesn't know about current events, politics, etc., and it won't get niche pop culture references.
You can ask it to write a response in the style of, say, a Reddit comment reply, a ten-sentence children's book or a newspaper article, which helps enormously with getting the tone or cadence right. It's often better to ask it a normal question and then ask it to rewrite the response in a particular style.
IMO it excels at summarising information. "Write a 600 word essay on the causes of the Hundred Years' War" or something. Or simply paste a lot of information and ask it to condense it into 200 words.
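For anyone who'd rather script that workflow than click around the web UI, here's a minimal sketch of the same "ask, then ask for a rewrite" loop using the OpenAI Python client. The model name, prompts and client setup are illustrative assumptions on my part, not anything specific to ChatGPT's own interface.

```python
# Rough sketch of the "ask, then iterate on the response" workflow described
# above, using the OpenAI Python client (openai>=1.0). Model name and prompts
# are just examples; ChatGPT's web UI does the same thing conversationally.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user",
     "content": "Write a 600 word essay on the causes of the Hundred Years' War."}
]
first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(first.choices[0].message.content)

# Keep the earlier exchange in the message list so the follow-up instruction
# applies to the previous response, then ask for a rewrite in a new style.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Rewrite the last response in a less formal style, "
                            "as a Reddit comment reply, in about 200 words."})
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(second.choices[0].message.content)
```

The point is just that the rewrite instruction rides on the conversation history, which is exactly what "rewrite the last response in a less formal style" does in the chat window.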
Worth pointing out that this is only the free version; paid tools that are far more capable also exist.
goomyman t1_j3f2v6m wrote
This is why I always hated the Turing test. Data from Star Trek would fail that test.
The Turing test measures how well an AI can fake being human. As such, it has been "passed" by bots that leaned on evasive non-answers while pretending to be foreigners or children.
Basically, any question about current events, politics or lived human experience is unfair, because a bot won't have real-world human experiences even if it were sentient. It could be trained on those answers and give a believable response to something like "what's your favorite sports team?", but it would never have actually watched sports.
An AI could be sentient without acting like a human with faked human experiences.
I'm not saying this chatbot is sentient, but I think the line of what counts as sentient is going to get blurrier and blurrier in our lifetimes. The Google engineer who claimed their bot was sentient was definitely wrong, but he may still go down in history as one of the defining moments when we had to start rethinking our definition of sentience.
Cognitive_Spoon t1_j3ebrxm wrote
Ask more specific questions, or twist the task a bit