Submitted by spiritus_dei t3_10tlh08 in MachineLearning
Blakut t1_j788j67 wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
It is code, but actually it's much more than that. It's a self-replicating piece of code packaged in a capsule that allows it to survive and propagate, like a computer virus. But you know, computer viruses are written and disseminated by people. They don't evolve on their own.
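The "self-replicating code" idea has a classic minimal form: the quine, a program whose output is its own source. This toy Python sketch (my illustration, not from the thread) shows the bare replication mechanism; an actual virus layers propagation and persistence on top of something like this.

```python
# Minimal quine: printing this program's own source text.
# %r inserts the repr of s (quotes included); %% escapes a literal %.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it emits exactly the two lines above, which is the whole trick: the data the program carries is a template for the program itself.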
spiritus_dei OP t1_j78ago8 wrote
All of that is possible with a sophisticated enough AI model. It can even write computer viruses.
In the copyright debates the AI engineers have contorted themselves into a carnival act telling the world that the outputs of the AI art are novel and not a copy. They've even granted the copyright to the prompt writers in some instances.
I'm pretty sure we won't have to wait for too long to see the positive and negative effects of unaligned AI. It's too bad we're not likely to have a deep discussion as a society about whether enough precautions have been taken before we experience it.
Machine learning programmers are clearly not the voice of reason when it comes to this topic, any more than virologists pushing gain-of-function research were the people who should have been steering the bus.
Blakut t1_j78jn2y wrote
"All of that is possible with a sophisticated enough AI model. It can even write computer viruses." only directed by a human, so far.
"In the copyright debates the AI engineers have contorted themselves into a carnival act telling the world that the outputs of the AI art are novel and not a copy. They've even granted the copyright to the prompt writers in some instances." - idk, they might be