shoegraze
shoegraze t1_j9s1hd0 wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
What I’m hoping is that EY’s long-term vision of AI existential risk is thwarted by the inevitable near-term issues that will come to light and get raised to major governments and powerful actors, who will then mount a “collective action” type of response similar to what happened with nukes, etc. The difference is that any old 15-year-old can’t just build a nuke, but they can buy a bunch of AWS credits and start training a misaligned model.
What you mention about a ChatGPT-like system getting plugged into the internet is exactly what Adept AI is working on. It makes me want to bang my head against the wall. We can say goodbye to a usable internet soon, because power-seeking people with startup-founder envy are going to just keep ramping these things up.
In general, though, I think my “timelines” for a doomsday scenario are a bit longer than EY’s / EA’s. LLMs are just not going to be the paradigm that brings “AGI,” but they’ll still do a lot of damage in the meantime. Yann LeCun had a good paper about what other components we might need to get to a dangerous, agentic AI.
shoegraze t1_irwlpoz wrote
Reply to comment by freezelikeastatue in [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
But surely, with a dataset as large as the Pile and enough parameters, the model will learn at least decently well how to interpret misspellings and abbreviations. If anything, wouldn’t this data “issue” help improve an LLM’s robustness? I’m not sure I see the problem in the context of LLMs, but to be fair, I agree with you if you’re trying to train a small model on a small amount of context-specific text data (but then you shouldn’t be using the Pile, should you?)
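To illustrate the point (a quick sketch using the Hugging Face GPT-2 tokenizer, not anything from the original thread): a misspelling just tokenizes into a different subword sequence, and a model trained on web-scale text sees plenty of both variants, which is why noisy spelling is signal rather than a blocker for large LLMs.

```python
# Minimal sketch: inspect how a BPE tokenizer splits correct vs. misspelled words.
# A misspelling usually breaks into more / smaller subword pieces, but the pieces
# are still in-vocabulary, so a large LM can learn to map them to similar meanings.
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")

for word in ["definitely", "definately", "approximately", "approx."]:
    print(f"{word:15s} -> {tok.tokenize(word)}")
```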
shoegraze t1_j9s22kq wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yep, if we die from AI it will be from bioterrorism well before we get enslaved by a robot army. And the bioterrorism stuff could even happen before “AGI” rears its head.