Submitted by QuartzPuffyStar t3_126wmdo in singularity
Literally ALL outlets and social media posts talking about the AI Pause letter are presenting the news as if the letter were written by business owners, and especially making it seem as if E. Musk were the one "giant mind" behind it. I'm completely baffled that several important people in the field of AI development and safety are being completely ignored here, when they are the ones who basically wrote the letter.
Just to name some of the hundreds of prominent international researchers (that actually include Chinese and Russians):
- Yoshua Bengio: Bengio is a prominent researcher in the field of deep learning, and is one of the co-recipients of the 2018 ACM A.M. Turing Award for his contributions to deep learning, along with Geoffrey Hinton and Yann LeCun.
- Stuart Russell: Russell is a computer scientist and AI researcher, known for his work on AI safety and the development of provably beneficial AI. He is the author of the widely used textbook "Artificial Intelligence: A Modern Approach."
- Yuval Noah Harari: Harari is a historian and philosopher who has written extensively on the intersection of technology and society, including the potential impact of AI on humanity. His book "Homo Deus: A Brief History of Tomorrow" explores the future of humanity in the age of AI and other technological advances.
- Emad Mostaque: Mostaque is a former hedge fund manager and the founder and CEO of Stability AI, the company behind Stable Diffusion, and has advocated for the responsible development and regulation of AI.
- John J Hopfield: Hopfield is a physicist and neuroscientist who is known for his work on neural networks, including the development of the Hopfield network, a type of recurrent neural network.
- Rachel Bronson: Bronson is a foreign policy expert and the president and CEO of the Bulletin of the Atomic Scientists, and has written about the potential impact of AI on international relations and security.
- Anthony Aguirre: Aguirre is a physicist and cosmologist, and a co-founder of the Future of Life Institute, who has written about the potential long-term implications of AI for humanity, including the possibility of artificial superintelligence.
- Victoria Krakovna: Krakovna is an AI researcher and advocate for AI safety, a co-founder of the Future of Life Institute, and an organizer of the AI Safety Unconference.
- Emilia Javorsky: Javorsky is a researcher in the field of computational neuroscience, and has written about the potential impact of AI on the brain and the nature of consciousness.
- Seán Ó hÉigeartaigh: Ó hÉigeartaigh is an AI researcher and advocate for AI safety, and is the executive director of the Centre for the Study of Existential Risk at the University of Cambridge.
- Yi Zeng: Zeng is a professor at the Chinese Academy of Sciences whose research focuses on brain-inspired AI and on AI ethics and governance.
- Steve Omohundro: Omohundro is an AI researcher who has written extensively on the potential risks and benefits of AI, and is the founder of the think tank Self-Aware Systems.
- Marc Rotenberg: Rotenberg is a lawyer and privacy advocate who has written about the potential risks of AI and the need for AI regulation.
- Niki Iliadis: Iliadis is an AI researcher who has made significant contributions to the development of natural language processing and sentiment analysis algorithms.
- Takafumi Matsumaru: Matsumaru is a researcher in the field of robotics, and has made significant contributions to the development of humanoid robots.
- Evan R. Murphy: Murphy is a researcher in the field of computer vision, and has made significant contributions to the development of algorithms for visual recognition and scene understanding.
Among many others.
This letter is basically the equivalent of the mid-20th-century petitions by scientists asking to limit and regulate the proliferation of nuclear weapons. And yet it's being sold as a capitalist stratagem to buy time.
And the manipulation doesn't end with the headlines and the social media push.
More worrisome is that I tried to get Bing Chat to give me the list of AI researchers who signed the petition (the one in this post, which I ended up compiling manually, with some help from the old GPT-3.5 without internet access), and it completely refused to do so! At first it gave me the wrong answer (naming Elon Musk and other business leaders even when I specifically told it to ignore them); then it proceeded to simply ignore my instructions, saying it "wasn't able to find the information," even after I sent it the link to the petition containing the list of signatories! When pushed (in both creative and regular mode), it only made the information very hard to extract, giving inconsistent replies or doling out one entry of the list at a time.
Screenshots: https://imgur.com/a/xLlGa6M
We are seeing big companies working in real time to steer the public discourse down a specific path. And maybe, with some small probability (and hell if I'm being over-paranoid here), an AI itself is helping with that through the titles, articles, and suggestions it generates.
Mortal-Region t1_jeb9glo wrote
The fact that it was filled with fake signatures sure doesn't help.