Submitted by LettucePrime t3_119s0zp in Futurology
Regarding AI cheating in academia & the human effort it takes to discern AI-generated text from human-written text:
A lot of very, very smart people are doing lots of good work writing AI-assisted AI-detector bots or digitally watermarking AI text, both projects beyond my feeble human ken. I haven't seen it discussed before, but shouldn't the onus of delineating man from machine be on the side providing the AI chatbot? Shouldn't they be keeping a public record of the raw text generated by their public toy in a database that can be easily checked & cross-referenced by existing plagiarism tools? (A rough sketch of what I mean is at the end of this post.)
I know it's not beyond any of these companies: for all their sci-fi machinations, language models ultimately return a few KBs of output, & we're talking about the likes of Microsoft, Alphabet, & Meta. They built the infrastructure for the social media era.
If security is an issue, sell your clients a secure platform for your chatbot, managed by their organization. AI is already difficult to monetize as it is - it's why Silicon Valley largely ignored LLMs for the entire 2010s. Am I missing something in my assessment? This seems like a no-brainer solution, & these firms should be pressured to adopt it, largely for the good of society, if nothing else.
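To make the "cross-referenced by existing plagiarism tools" part concrete, here's a minimal sketch of how a provider-side lookup could work, assuming outputs are fingerprinted with overlapping word n-grams (shingles), the same basic trick existing plagiarism checkers use. Everything here (the in-memory `generation_log`, the `log_generation` / `check_submission` functions) is hypothetical & made up for illustration; no vendor actually exposes anything like this today.

```python
# Hypothetical sketch: the provider logs every generated output as a set of
# hashed word 5-gram "shingles"; a checker later compares a suspect document
# against the logged fingerprints. Names & storage are illustrative only.
import hashlib
import re

SHINGLE_SIZE = 5  # words per shingle; plagiarism checkers typically use ~4-8

def shingles(text: str) -> set[str]:
    """Return the set of hashed overlapping word n-grams in `text`."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    grams = (" ".join(words[i:i + SHINGLE_SIZE])
             for i in range(len(words) - SHINGLE_SIZE + 1))
    return {hashlib.sha256(g.encode()).hexdigest() for g in grams}

# Stand-in for the provider-side database of logged generations.
generation_log: dict[str, set[str]] = {}

def log_generation(generation_id: str, output_text: str) -> None:
    """Provider side: store the fingerprint of one chatbot response."""
    generation_log[generation_id] = shingles(output_text)

def check_submission(submission: str, threshold: float = 0.5) -> list[tuple[str, float]]:
    """Checker side: return logged generations that overlap the submission."""
    sub = shingles(submission)
    hits = []
    for gen_id, fingerprint in generation_log.items():
        if not fingerprint:
            continue
        overlap = len(sub & fingerprint) / len(fingerprint)  # fraction of the generation reused
        if overlap >= threshold:
            hits.append((gen_id, overlap))
    return sorted(hits, key=lambda h: h[1], reverse=True)
```

A real deployment would obviously need something sturdier than an in-memory dict & exact set intersections (MinHash/LSH indexes, for instance), but the point stands that the storage per response is tiny compared to the infrastructure these companies already run.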
Clairvoidance t1_j9ns42x wrote
There's the issue of locally run LLMs. It's already possible at a small scale with models like Pygmalion, but it would be an even bigger issue if there weren't small-scale models, as nothing would stop richer people from running a large language model on the downlow, or, as funny as it sounds, there might even emerge some sort of black market for LLMs.
People are also seemingly very careless about what they put into LLMs.