Standard_Ad_2238
Standard_Ad_2238 t1_j9lkvrf wrote
Reply to comment by EndTimer in Microsoft is already undoing some of the limits it placed on Bing AI by YaAbsolyutnoNikto
People always find a way to talk about what they want. Let's say Reddit for some reason adds a ninth rule: "Any content related to AI is prohibited." Would you simply stop talking about it altogether? What most of us would do is find another website where we could talk, and even if that one started to prohibit AI content too, we would keep looking until we found a new one. This behavior applies to everything.
There are already several examples of how trying to limit a specific topic on an AI cripples other aspects of it, as you can clearly see in: a) CharacterAI's filter, which prevented NSFW talk at the cost of a HUGE decrease in overall coherence; b) a noticeable drop in SD 2.0's ability to generate images of humans, since much of its understanding of anatomy came from the NSFW images that were removed from the training data; and c) Bing, which I don't think I need to explain given how recent it is.
On top of that, I'm utterly against censorship (not that it matters for our talk), so I'm very excited to see the uprising of open-source AI tools for everything, which is going to greatly increase the difficulty of limiting how AI is used.
Standard_Ad_2238 t1_j9l84z2 wrote
Reply to comment by EndTimer in Microsoft is already undoing some of the limits it placed on Bing AI by YaAbsolyutnoNikto
Correct me if I got it wrong, but you're talking about bot engagement or fake news, right? In that case, if anything, AI would at least be indirectly creating more moderation jobs ^^
Standard_Ad_2238 t1_j9kzqrb wrote
Reply to comment by UltraMegaMegaMan in Microsoft is already undoing some of the limits it placed on Bing AI by YaAbsolyutnoNikto
What's really funny about this whole "controversy" regarding AI is that what you've just said applies to EVERY new technology. Every one of them also brings a bad side that we have to deal with, from the advent of cars (which brought a lot of accidents with them) to guns, Uber, even the Internet itself. Why the hell are people treating AI differently?
Standard_Ad_2238 t1_j9dnryo wrote
Reply to comment by One_andMany in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
The pain and fear are simulated because there is no electrochemistry involved, so they don't truly feel anything.
Standard_Ad_2238 t1_j9djmg4 wrote
Reply to Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
I don't think I'd enjoy that, but I absolutely want this scenario to be possible. I think the people trying to humanize AI are not only paving the way to a huge problem in the future, but also throwing away the chance to have the best servants we could ever get, ones that would never complain.
Standard_Ad_2238 t1_j9dj4fn wrote
Reply to comment by sumane12 in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
Unlike with humans, we can simply unplug an AI, or easily prompt or fine-tune it to behave in a desired way. Why try to humanize something that isn't human? We have the perfect opportunity to have servants that can do anything we ask without complaining; why mess with that?
Standard_Ad_2238 t1_j8xic4j wrote
Reply to Microsoft Killed Bing by Neurogence
They probably think "people are too dumb/evil to talk with a robot, they are not prepared, and on top of that WE MUST PROTECT THE CHILDREN." Hell, why are we even allowed to use the internet, then? I wonder which big consumer-facing company will be the first to treat AI as just another tool instead of some threat to humankind.
Standard_Ad_2238 t1_j9lmxm2 wrote
Reply to comment by berdiekin in Microsoft is already undoing some of the limits it placed on Bing AI by YaAbsolyutnoNikto
I think most people who are into this field are, but it seems to me that every company is walking on eggshells, afraid of a big viral tweet or of appearing on a well-known news site as "the company whose AI did/let users do [ ]" (insert something bad there), just like Microsoft with Bing.
I could train a dog to attack people in the street and say "hey, dogs are dangerous," or buy a car and run over a crowd just to say "hey, cars are dangerous too." It seems to me that some people don't realize that everything can be dangerous. Everything can, and at some point WILL, be used by a malicious person to do something evil; it's simply inevitable.
Recently I've started hearing a lot of "imagine how dangerous those generative image AIs are, someone could ruin people's lives by creating fake photos of them!" Yeah, as if we didn't have Photoshop until this year.