Submitted by basafish t3_zpibf2 in Futurology
Tupcek t1_j0u18gv wrote
Reply to comment by streamofbsness in Should we make it impossible for AI to rewrite its own code or modify itself? by basafish
Well, changing the weights is basically rewriting the logic of an AI, so it could reasonably be defined as rewriting its code.
The problem is that once continuous learning becomes mainstream (the same way people learn and memorize things, events, places, processes, etc. throughout their lives), rewriting the logic basically becomes the point.
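To make that concrete, here is a toy sketch (my own example, not anything from a real system): a tiny perceptron whose entire “program” is its weights, so nudging one weight changes what it decides without touching a single line of source code.

```python
# Toy illustration: the model's "logic" lives entirely in its weights.
def perceptron(x, w, b):
    # returns 1 if the weighted sum crosses the threshold, else 0
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0

x = [1.0, 2.0]
w = [0.5, -0.4]                 # the original "program"
print(perceptron(x, w, 0.0))    # -> 0

w[1] += 0.5                     # a small weight change, e.g. from a learning step
print(perceptron(x, w, 0.0))    # -> 1: same code file, different behavior
```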
It is a valid question, though the hard part is defining what counts as its code.
In the human brain, every memory is “code”, because every memory slightly alters the person's behavior.
Should we limit the AI to its pre-trained state and discard every piece of new knowledge as soon as possible (like ChatGPT today, where if you point out its mistake it will remember the correction, but only for that session, because it uses it as input rather than re-training itself), or should we allow continuous learning?
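Roughly, the two options look like this. This is only a hedged sketch of the idea; the class and method names (ContextOnlyAssistant, ContinuallyLearningAssistant, fit_on) are hypothetical and don't correspond to any real API.

```python
class ContextOnlyAssistant:
    """Option 1: corrections live only in the session context."""
    def __init__(self, weights):
        self.weights = weights          # frozen, pre-trained parameters
        self.session_context = []       # wiped when the session ends

    def tell(self, correction):
        self.session_context.append(correction)   # used as input only


class ContinuallyLearningAssistant(ContextOnlyAssistant):
    """Option 2: corrections also update the weights themselves."""
    def tell(self, correction):
        super().tell(correction)
        self.fit_on(correction)         # a training step: the weights change

    def fit_on(self, correction):
        # stand-in for a real gradient update on the new example
        self.weights = [w + 0.01 for w in self.weights]


bot = ContinuallyLearningAssistant(weights=[0.5, -0.4])
bot.tell("Paris is the capital of France, not Lyon.")
print(bot.weights)            # the weights moved: the correction persists
print(bot.session_context)    # and it is also visible in this session's context
```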