vicarioust t1_j0t5oig wrote

AI is a model, not code. Preventing an AI from "altering its code" is pretty much saying it shouldn't learn, which is the whole point of AI.

40

JoeBookish t1_j0t7q4x wrote

It can be iterative, though, and build forward from point b while operating by the rules at point a. You can absolutely write code that a learning machine can't modify.
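
For instance, a minimal PyTorch sketch (the module and its rules/head split are made up for illustration): freeze the "point a" rules so no amount of training can touch them, while a separate part keeps learning.

```python
import torch
import torch.nn as nn

class FrozenRulesModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.rules = nn.Linear(8, 8)   # fixed logic written at "point a"
        self.head = nn.Linear(8, 2)    # the part allowed to keep learning
        # Freeze the rules: gradient updates can never modify these weights.
        for p in self.rules.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.head(torch.relu(self.rules(x)))

model = FrozenRulesModel()
# Only hand the trainable parameters to the optimizer.
opt = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)
```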

4

streamofbsness t1_j0tj2h8 wrote

Not exactly right. A particular AI system is a model… defined in code. You basically have a math function with a bunch of input variables and “parameters”: weights that are variable and “learned” during training, then held constant at prediction time. Finding the best values for those parameters is the point of AI, but the function itself (how many weights there are and how they’re combined) is still typically architected (and coded) by human engineers.
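
To make that concrete, here’s a toy sketch in plain NumPy (every name here is invented for illustration): the function is the hand-written code, the weights are the learned model.

```python
import numpy as np

# The "code": a function shape fixed by a human engineer.
def predict(x, W, b):
    return np.tanh(W @ x + b)

# The "model": the parameter values found during training.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))          # learned weights
b = np.zeros(2)                      # learned bias

x = np.array([1.0, 0.5, -0.2])       # input variables
y = predict(x, W, b)                 # training tunes W and b,
                                     # but never rewrites predict() itself
```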

Now, you could build a system that tries out different functions by mixing and matching different “layers” of parameters. Those layers could also be part of the system itself. Computer programs are capable of writing code, even complete or mutated copies of themselves (see “quine” in computer science). So it is possible to have an AI that “alters its code”, but that is different from what most AI work is about right now.
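
Concretely, here is a classic minimal Python quine, a program whose output is exactly its own source:

```python
# A quine: running this prints the program's own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```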

4

Tupcek t1_j0u18gv wrote

Well, changing weights is basically rewriting the logic of an AI, so it could be defined as rewriting its code.
The problem is, once continuous learning becomes mainstream (the same way people learn and memorize things, events, places, processes, etc. their whole lives), rewriting the logic basically becomes the point.
It is a valid question, though the hard part is defining what counts as its code.
In the human brain, every memory is “code”, because every memory slightly alters the person’s behavior.
Should we limit AI to pre-trained knowledge and discard everything new right away (like ChatGPT now, where if you point out its mistake, it will remember, but only for that session, since it only uses the correction as input and doesn’t re-train itself), or should we allow continuous learning?
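
A rough sketch of that difference (a toy one-weight “model” in PyTorch; nothing here reflects any real chatbot’s internals):

```python
import torch

# Toy "model": a single learned weight.
w = torch.tensor(1.5, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

def predict(x):
    return w * x

# Pre-trained use: a correction lives only in the input/context.
# w is untouched, so the lesson is gone once the session ends.
correction = "user says predict(3.0) should be 6.0"
y = predict(torch.tensor(3.0))   # behavior unchanged

# Continuous learning: the same correction becomes a training signal
# and permanently rewrites the weight, i.e. the model's "logic".
loss = (predict(torch.tensor(3.0)) - 6.0) ** 2
opt.zero_grad()
loss.backward()
opt.step()                       # w has now changed for good
```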

−4

norbertus t1_j0tplwt wrote

People don't differentiate between "AI" and "pre-trained neural network" when talking about things like GPT and Stable Diffusion.

1

thisalienispissed t1_j0ubwip wrote

Technically we don’t have anything close to real AI yet. It’s all computer vision, pre-trained neural networks, etc.

1