mirrorcoloured t1_j7hum3j wrote
Reply to comment by HoneyChilliPotato7 in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
I think this says more about you than Google.
mirrorcoloured t1_j6e1ckl wrote
Reply to comment by dineNshine in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Yes, I wasn't clear on the comparison; I meant it as an analogy: it's possible to hide information in images without noticeable impact to humans. In this space I only have anecdotal experience: I can use textual inversion embeddings that consume 10-20 tokens with no reduction in quality that I can notice. I'm not sure how much capacity a quality 'watermark' would require, but based on that experience, and the fact that models keep getting more capable over time, it seems reasonable to me that we could spare some 'ability' and not notice (a toy sketch of what I mean is below).
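For a sense of what a statistical text watermark could look like, here's a toy sketch in the spirit of the 'green list' schemes that have been proposed for LLMs. Everything here is illustrative: the vocabulary, the split fraction, and the detection logic are placeholders, not any vendor's actual method.

```python
# Toy sketch of a "green list" style statistical text watermark.
# Illustrative only: vocabulary, fraction, and scoring are made up.
import hashlib
import random

VOCAB = "the a cat dog sat ran on mat quickly slowly".split()

def green_list(prev_token, vocab, fraction=0.5):
    """Deterministically split the vocab based on the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens, vocab, fraction=0.5):
    """Share of tokens landing in their green list: roughly `fraction`
    for ordinary text, noticeably higher if generation was biased
    toward each step's green set."""
    hits = sum(
        tok in green_list(prev, vocab, fraction)
        for prev, tok in zip(tokens, tokens[1:])
    )
    return hits / max(len(tokens) - 1, 1)
```

Generation would just nudge sampling slightly toward each step's green set; that slight nudge is the 'ability' being spent, and detection only needs the hash, not the model.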
I also agree with the philosophy of 'do one thing and do it well', where limitations are avoided and modularity is embraced. Protecting people from themselves is unfortunately necessary, as our flaws are well understood and fairly reliable at scale, even though we can all be rational at times. As a society I think we're better off if our pill bottles have child-safe caps, our guns have safeties, and our products have warning labels. Even if these things marginally reduce my ability to use them (or increase their cost), it feels selfish to argue against them when I understand the benefits they bring to others (and to myself, when I'm less hubristic). To say that, for example, 'child-safe caps should be optionally bought separately only by those with children and pets' ignores the reality that not everyone would do that: friends and family can visit, people forget things in places they don't belong, etc. The magnitude of the negative impacts would be far larger than the positive, and they would often be experienced by different people.
mirrorcoloured t1_j5wwhn5 wrote
Reply to comment by dineNshine in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
While I agree with your concluding sentiments against centralization and in favor of using signatures, I don't believe your initial premise holds.
Consider steganography in digital images, where extra information can be embedded without any noticeable loss in signal quality.
One could argue that any bits not used for the primary signal are 'limiting usability', but this seems pedantic to me. It seems perfectly reasonable that watermarking could be implemented with no noticeable impact, given the already massive amount of computing power required and the density of the information output.
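For concreteness, here's a minimal sketch of the classic least-significant-bit approach in Python (using Pillow; the file paths and message are placeholders): each red value changes by at most 1 out of 255, which is imperceptible to a human viewer.

```python
# Minimal LSB steganography sketch with Pillow; paths/message are
# hypothetical placeholders.
from PIL import Image

def embed(cover_path, out_path, message):
    """Hide a UTF-8 message in the least significant bit of each red value."""
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    data = message.encode("utf-8")
    # Length-prefix the payload so we know how many bits to read back.
    payload = len(data).to_bytes(4, "big") + data
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "cover image too small for payload"
    for i, bit in enumerate(bits):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | bit, g, b)  # red shifts by at most 1/255
    img.putdata(pixels)
    img.save(out_path, "PNG")  # lossless format, so the bits survive

def extract(stego_path):
    """Recover the hidden message from the red-channel LSBs."""
    pixels = list(Image.open(stego_path).convert("RGB").getdata())
    bits = [r & 1 for r, _, _ in pixels]
    to_bytes = lambda bs: bytes(
        int("".join(map(str, bs[i:i + 8])), 2) for i in range(0, len(bs), 8)
    )
    length = int.from_bytes(to_bytes(bits[:32]), "big")
    return to_bytes(bits[32:32 + 8 * length]).decode("utf-8")
```

Note the PNG save: lossy formats like JPEG would destroy the low-order bits, which is why naive LSB schemes are fragile and real watermarks use more robust transforms.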
mirrorcoloured t1_j5gol1a wrote
Reply to comment by dineNshine in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
How would extra embedded information limit the end user, specifically?
mirrorcoloured t1_j7l8e29 wrote
Reply to comment by hemphock in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
Wow, I didn't expect numbers that high! I wonder if there's a large AA/reddit overlap, or if that's representative of search as a whole.
Google Trends shows a steady increase in reddit interest over time, and the second related query I see is "what is reddit". It's interesting that the growth is roughly linear rather than the accelerating curve you'd expect from word-of-mouth spread.
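To illustrate the shape argument: word-of-mouth adoption is usually modeled as logistic growth (each adopter recruits more), which curves upward early before saturating, whereas a steady external driver gives a straight line. A quick sketch with made-up numbers, nothing fitted to real Trends data:

```python
# Illustrative comparison of growth shapes; all numbers are made up.
import math

def logistic(t, cap=100.0, rate=1.0, midpoint=5.0):
    """S-curve typical of word-of-mouth adoption."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

def linear(t, slope=10.0):
    """Steady growth from a constant external driver."""
    return slope * t

for t in range(0, 11, 2):
    print(f"t={t:2d}  linear={linear(t):6.1f}  logistic={logistic(t):6.1f}")
```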