Submitted by ateqio t3_111ux51 in MachineLearning
andreichiffa t1_j8h43hh wrote
You can’t. Anyone with enough technical knowledge will not want to go anywhere near the legal ramifications and responsibility it implies (in addition to looking like a clown within about 10 minutes of uptime, once bypasses are found).
There are fundamental limitations on detectability as of now.
ateqio OP t1_j8h5g16 wrote
You're right.
The problem is, people (especially professors) are going to look for it no matter what.
Just look at the stats. The RoBERTa OpenAI detector was downloaded a whopping 114k times in the last month alone. Its model card clearly states not to use it as a ChatGPT detector, but I see a lot of implementations of it anyway.
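For context, most of those implementations boil down to a few lines like this (a minimal sketch using the transformers pipeline; the model id and label names are what the Hugging Face hub lists and may differ):

```python
from transformers import pipeline

# The detector people keep repurposing: a RoBERTa classifier trained to
# spot GPT-2 output, not ChatGPT output.
detector = pipeline(
    "text-classification",
    model="roberta-base-openai-detector",
)

print(detector("The quick brown fox jumps over the lazy dog."))
# e.g. [{'label': 'Real', 'score': 0.98}] -- scores like this get treated
# as ground truth, which is exactly the misuse the model card warns against
```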
It's better to educate users with a tool that carries a big fat disclaimer.
andreichiffa t1_j8hawd4 wrote
I reported to Hugging Face what their detector was being used for and its failure modes (hint: the false positives are the worse problem) in the first days of December. They decided to keep it up. It’s on their conscience.
Same thing with API providers. Those willing to sell you one are selling you snake oil. It’s on their conscience.
Same thing for you. You want to build an app that sells snake oil that can be harmful in a lot of scenarios? It’s on your conscience.
But at that point you don’t even need an API to build it.
ateqio OP t1_j8hcsz6 wrote
What's the false-positive rate? Honestly curious.
andreichiffa t1_j8hf2th wrote
10% is what OpenAI considered "good enough" for theirs, but the problem is that the errors are not uniform. Many neurodivergent folks get misclassified as generative models, as do people with social anxiety, who tend to be wordy. Non-native and non-fluent English speakers are the other big false-positive trigger.
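To illustrate why a single aggregate number is misleading (a toy sketch with invented numbers, not OpenAI's data):

```python
# Toy illustration, invented numbers: an aggregate false-positive rate
# near 10% can hide much higher rates for specific writer populations.
groups = {
    # group: (human-written samples tested, samples wrongly flagged as AI)
    "average writers":        (900, 54),  # 6.0% FPR
    "non-native speakers":    (60, 21),   # 35.0% FPR
    "neurodivergent writers": (40, 14),   # 35.0% FPR
}

total = sum(n for n, _ in groups.values())
flagged = sum(fp for _, fp in groups.values())
print(f"aggregate FPR: {flagged / total:.1%}")  # 8.9% -- looks 'good enough'

for name, (n, fp) in groups.items():
    print(f"{name}: {fp / n:.1%} FPR")  # the harm concentrates on a few groups
```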