andresni t1_itozik6 wrote
This is actually quite interesting. Unlike most studies of this kind, the test stimulus was novel. That is, the decoder wasn't merely trained to detect which of a set of stimuli a participant listened to (e.g., which of these 5 stories), but to decode a novel stimulus outside that training set. On a single-trial basis, no less.
andresni t1_j6jwwq5 wrote
Reply to AI improving itself by Luka87uchiha
If you look at how much energy and computational resources it took to make ChatGPT, then it's pretty obvious that even if ChatGPT4 (or whichever version) could in principle bootstrap itself into the intelligence stratosphere, it wouldn't have the resources to do so. Nor do we have that kind of resources hanging around unused for the AI to tap into without our explicit knowledge and consent.
And even if we dedicated resources to it, the next iteration would demand even more. The time it takes us to build the supercomputers, gather the data, and provide the requisite energy is measured in months, if not years. A self-improving AI wouldn't be able to improve faster than we're able to allocate resources to it.
Unless, of course, it manages to tap into all our phones, computers, gaming consoles, servers, and the like. That might give it the juice it needs. The question is, could it even do so? How smart would it have to be to pull that off without our collective consent and collaboration?
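To make the scaling argument above concrete, here's a minimal toy model in Python. Every number in it is an illustrative assumption, not an estimate: each self-improvement iteration is assumed to cost 10x the compute of the last, while the compute humans allocate merely doubles per build cycle.

```python
# Toy model of the scaling argument: compute demand per self-improvement
# iteration grows exponentially, while humans provision new compute at a
# much slower fixed rate. All numbers are illustrative assumptions.

demand = 1.0          # compute needed for the next iteration (arbitrary units)
demand_growth = 10.0  # assumed: each iteration costs 10x the previous one
available = 1.0       # compute currently allocated to the AI
provision_rate = 2.0  # assumed: allocated compute doubles per human build cycle

for cycle in range(1, 11):
    available *= provision_rate      # humans finish another build cycle
    improvements = 0
    while demand <= available:       # the AI can only iterate when demand is met
        improvements += 1
        demand *= demand_growth      # the next iteration is costlier still
    print(f"build cycle {cycle:2d}: available={available:8.1f}, "
          f"improvements this cycle={improvements}")
```

Under these (assumed) rates the output shows at most one improvement every few build cycles, i.e., the AI's rate of self-improvement is pinned to how fast resources are provisioned, not to how smart it is. The caveat in the last paragraph is exactly what would break this model: commandeering existing consumer hardware would jump `available` discontinuously instead of growing it at the provisioning rate.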