Submitted by Business-Lead2679 t3_1271po7 in MachineLearning
EuphoricPenguin22 t1_jedhyci wrote
Reply to comment by phire in [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
Using training data without explicit permission is (probably) considered fair use in the United States. There are currently active court cases on this exact issue here in the U.S., namely Getty Images (US), Inc. v. Stability AI, Inc. The case is still far from a decision, but it will likely be directly responsible for setting a precedent on this matter. There are a few other cases happening in other parts of the world, and depending on where you are, different laws or regulations may already be in place that clarify this specific area of law. I believe there is another case against Stability AI in the UK, and I've heard that the EU was considering adding, or has already added, an opt-out provision to its copyright law; I'm not sure.
phire t1_jedo041 wrote
Perfect 10, Inc. v. Amazon.com, Inc. established that it was fair use for Google Images to keep thumbnail-sized copies of images, because providing image search was transformative.
I'm not a lawyer, but thumbnails are way closer to the original than network weights, and AI image generation is arguably way more transformative than providing image search. I'd be surprised if Stability loses that suit.
pm_me_your_pay_slips t1_jee2xtt wrote
Perhaps applicable to the generated outputs of the model, but it’s not a clear case for the inputs used as training data. It could very well end up in the same situation as sampling in the music industry, which is transformative, yet people using samples still have to “clear” them by asking for permission (which usually involves money).