Submitted by Sharp_Soup_2353 t3_115v2k6 in singularity
sideways t1_j950ycw wrote
My money is on Gato being, or at least being closer to, "true" AGI than anything else at the time it's made public.
turnip_burrito t1_j95d9xr wrote
I agree, and it does make me nervous that we may not have alignment solved by then.
Hey AI researchers on this sub. I know you're lurking here.
Please organize AI safety meetings in your workplace. Bring your colleagues to conference events on AI existential safety. Talk with your bosses about making it a priority.
Thanks,
Concerned person