Submitted by fortunum t3_zty0go in singularity
I’m with OP. Specifically i believe many more human intuition guided innovations in ai software architecture and hardware need to occur before self-improving, let alone self-directed AI occurs.
Gargantuan models will give way to sparse architectures that can run on fairly modest hardware and do their own external information sourcing, something that looks like research coming directly from the AI agent itself. This won't necessarily replace large models; rather, it would be a module bolted on and invoked for planning, strategizing, and reasoning. It may be influenced by neurobiology, but it probably won't look exactly the same.
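To make that concrete, here's a rough sketch of what I mean by a "planning module with external information sourcing" layered on top of a large model. This is purely illustrative: `query_large_model` and `search_external_sources` are hypothetical stubs standing in for a big general-purpose model and an outside research step, not any real API.

```python
# Illustrative sketch only: a hypothetical planning module that wraps a large
# model and gathers outside evidence itself, roughly the shape described above.
from dataclasses import dataclass, field


@dataclass
class ResearchStep:
    question: str
    evidence: list[str] = field(default_factory=list)


def query_large_model(prompt: str) -> str:
    # Placeholder for a call to a large general-purpose model.
    return f"(model output for: {prompt})"


def search_external_sources(query: str) -> list[str]:
    # Placeholder for external information sourcing (web, papers, databases).
    return [f"(document relevant to: {query})"]


def planning_module(goal: str, max_steps: int = 3) -> str:
    """Add-on module: decomposes a goal, gathers outside evidence,
    then asks the large model to reason over what it collected."""
    steps: list[ResearchStep] = []
    for i in range(max_steps):
        sub_question = query_large_model(
            f"Step {i + 1}: next sub-question for goal '{goal}'"
        )
        step = ResearchStep(question=sub_question)
        step.evidence = search_external_sources(sub_question)
        steps.append(step)

    summary_prompt = "Synthesize a plan from:\n" + "\n".join(
        f"- {s.question}: {'; '.join(s.evidence)}" for s in steps
    )
    return query_large_model(summary_prompt)


if __name__ == "__main__":
    print(planning_module("design a more sample-efficient training scheme"))
```

The point of the sketch is just that the planning/research loop sits beside the big model rather than replacing it, and only gets enacted when the agent needs to strategize.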