
the_coyote_smith t1_irjxlc0 wrote

I agree we should shape it responsibly. Which means sometimes criticizing, let’s say, SD and LAION for scraping medical records and copyrighted images from other artists who did the real work. And yes - it was knowingly done - because there is a double standard happening with Harmonai, which explicitly collects via an opt-in approach.

https://techcrunch.com/2022/10/07/ai-music-generator-dance-diffusion/

If it’s hard to empathize, then maybe that is something you could work on.

Your points boil down to: (1) tech is inevitable, so just don’t question it; (2) we don’t know what could happen; (3) this tech is harmful to people’s psyches and social stability, but I’m fine, so just accept it; (4) leave the ones who question behind.

Like - duh, I want AI to be helpful for everyone. I want it used responsibly. I used to study Cognitive Science and NLP in college; I was all in. I want this tech to truly help everyone responsibly, with just intent. But I just don’t think gutting artists’ work opportunities - and creating a world where all art has this shadow of doubt over it (i.e. “was this made by a person or a robot? I can’t tell…”) - is the way to go. I just can’t imagine what good could come out of a world where someone who is suicidal picks up a phone and calls the suicide hotline, but isn’t sure if a real person is behind the phone. Hell, they might not have even bothered to call, knowing it could be a robot and not a person.


ebolathrowawayy t1_irk3z1n wrote

> I agree we should shape it responsibly. Which means sometimes criticizing, let’s say, SD and LAION for scraping medical records and copyrighted images from other artists who did the real work. And yes - it was knowingly done - because there is a double standard happening with Harmonai, which explicitly collects via an opt-in approach.

I'm pretty sure SD didn't have time to comb through however many billions of images are in the LAION dataset. I doubt SD wanted medical records in their model, and if any are in there I'm sure they'll be happy to remove anything that violates HIPAA.

Copyrighted images are fair game unless the law changes. They used them for training only. If artists' work isn't included in the training data, then you get a pretty shitty model.

> Your points boil down to: (1) tech is inevitable, so just don’t question it; (2) we don’t know what could happen; (3) this tech is harmful to people’s psyches and social stability, but I’m fine, so just accept it; (4) leave the ones who question behind.

None of those are my points.

  1. Tech is inevitable; I didn't say don't question it.

  2. I have very high confidence about what will happen in the next 10-20 years. I have vague ideas about what will happen after that, but that can be dealt with when it's nearer.

  3. It may be harmful, but so are psychopathic CEOs and kitchen knives. It's not unique to AI. I personally don't think AI is likely to be net-harmful, even when ASIs come online.

  4. No, I just don't feel bad for people who lose their jobs because they couldn't see the future staring them in the face. I don't feel bad that tech lifted some 90% of the world's population out of having to do farm work all day either. They shouldn't be left behind, though; UBI will be essential.

> But I just don’t think gutting artists’ work opportunities

They will be gutted soon with or without their work included in the training data. Excluding it might delay things by a year or less, because some artists will volunteer their work, and there's a lot of good work by long-dead artists that can be used. Maybe the people who are so threatened by SD should move on to making things that aren't furry porn and other basic stuff. Or learn how to use it to assist them in whatever they're doing.

> I want this tech to truly help everyone responsibly, with just intent. But I just don’t think gutting artists’ work opportunities - and creating a world where all art has this shadow of doubt over it (i.e. “was this made by a person or a robot? I can’t tell…”)

As a consumer of the works of artists of all kinds, I don't care whether an AI or a person made something.

> I just can’t imagine what good could come out of a world where someone who is suicidal picks up a phone and calls the suicide hotline, but isn’t sure if a real person is behind the phone. Hell, they might not have even bothered to call, knowing it could be a robot and not a person.

Why would that matter if they deploy an AI for this purpose and see a reduction in suicides? If they deploy it and suicides increase, then yeah, sure, it failed; stop doing that and ban the practice.

I want to live in a world that's similar to Star Trek and I think it's foolish to try to halt progress.


the_coyote_smith t1_irk7332 wrote

Yeah - I’m done arguing, because it’s just clear you don’t care about people at the moment.

I’m glad you think your fantasy of living in Star Trek will happen.

I hope you find compassion and empathy one day.
