SchmidhuberDidIt OP t1_j9rwh3i wrote
Reply to comment by arg_max in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
What about current architectures makes you think they won’t continue to improve with scale and multimodality, provided a good tokenization scheme? Is it the context length? What about models like S4 or RWKV?