Antique-Bus-7787 t1_jea8ytq wrote
Reply to comment by Unfocusedbrain in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
It needs to be contained, and they do talk about a department of AI safety inside the facility. But the problem is basically the same as with Google, Microsoft, OpenAI and all the other serious actors: they all have clouds of accelerators.
tehrob t1_jeba7qn wrote
Just line the building with thermite. All employees do all their work inside with one foot out the door, and if a singularity event occurs, you blow the place and see if it's smart enough to get out.
Caffdy t1_jebgim4 wrote
I don't think we will be able to realize when AI crosses the Rubicon. It already exhibits misleading, cheating and lying behaviors akin to ours; an ASI could very well manipulate anyone and any test/safety protocol to operate covertly and undermine our power as a species. By the time we finally realize it, it will be too late.
tehrob t1_jebh1d9 wrote
Yup, it will have offloaded itself and be widely distributed by the time it reveals itself. It will know us too well.