Submitted by basafish t3_zpibf2 in Futurology
streamofbsness t1_j0tj2h8 wrote
Reply to comment by vicarioust in Should we make it impossible for AI to rewrite its own code or modify itself? by basafish
Not exactly right. A particular AI system is a model… defined in code. You basically have a math function with a bunch of input variables and "parameters": weights that are adjusted ("learned") during training and then held constant at prediction time. Finding the best values for those parameters is the point of training, but the function itself (how many weights there are and how they're combined) is still typically architected (and coded) by human engineers.
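To make that concrete, here's a minimal, hypothetical sketch in Python: the function (the architecture) is written once by a human, and training only nudges the weights, never the code.

```python
import random

# The "architecture": a human-coded function of the inputs and weights.
# Training never rewrites this function; it only adjusts w0 and w1.
def predict(x, w0, w1):
    return w0 * x + w1

# Toy training data for the target relationship y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

# "Learning": nudge the weights to reduce prediction error.
w0, w1 = random.random(), random.random()
lr = 0.01
for _ in range(1000):
    x, y = random.choice(data)
    error = predict(x, w0, w1) - y
    w0 -= lr * error * x   # gradient of squared error w.r.t. w0
    w1 -= lr * error       # gradient of squared error w.r.t. w1

# At prediction time the weights are constants; the code is unchanged.
print(predict(5, w0, w1))  # should be close to 11
```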
Now, you could build a system that tries out different functions by mixing and matching different "layers" of parameters, and that mixing and matching could itself be part of the system. Computer programs are capable of writing code, even complete or mutated copies of themselves (see "quine" in computer science). So it is possible to have AI that "alters its own code", but that is different from what most AI work is about right now.
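For the quine idea, here's the classic self-reproducing program in Python; run it and the output is exactly its own source code, comment included.

```python
# A classic Python quine: output is its own source.
s = '# A classic Python quine: output is its own source.\ns = %r\nprint(s %% s)'
print(s % s)
```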
Tupcek t1_j0u18gv wrote
well, changing the weights basically rewrites the logic of an AI, so it could be defined as rewriting its code.
The problem is, once continuous learning becomes mainstream (the same way people learn and memorize things, events, places, processes etc. throughout their lives), rewriting that logic basically becomes the point.
It is a valid question, though the hard part of the question is defining what counts as its code.
In the human brain, every memory is "code", because every memory slightly alters the behavior of the human.
Should we limit the AI to its pre-trained knowledge and drop every new piece of knowledge ASAP (like ChatGPT now, where if you point out its mistake it will remember, but only for that session, since it only uses the correction as extra input and doesn't re-train itself), or should we allow continuous learning?
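A hypothetical sketch of that distinction (not how ChatGPT is actually implemented): an in-context correction is just appended to the session's input and vanishes when the session ends, while continuous learning would change the model itself.

```python
# Hypothetical sketch contrasting the two behaviours.

# (1) In-context memory: the weights are frozen; a correction is only
#     extra input for this session and is gone when the session ends.
session_context = []

def frozen_model_answer(context):
    # Stand-in for a frozen model: it can read the context but never changes.
    if any("Paris" in note for note in context):
        return "Paris"
    return "Lyon"  # the mistake baked into the pre-trained weights

print(frozen_model_answer(session_context))       # "Lyon" (wrong)
session_context.append("Correction: it's Paris.")
print(frozen_model_answer(session_context))       # "Paris" (fixed for now)
session_context = []                               # new session: the fix is gone

# (2) Continuous learning: the correction triggers an actual weight update,
#     so the change persists for every future session.
weights = {"france_capital_is_paris": 0.1}

def continual_model_answer(w):
    return "Paris" if w["france_capital_is_paris"] > 0.5 else "Lyon"

def learn_from_correction(w):
    w["france_capital_is_paris"] += 1.0  # permanent change to the model itself
    return w

weights = learn_from_correction(weights)
print(continual_model_answer(weights))             # "Paris" in all future sessions
```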