Submitted by Gari_305 t3_10yta0f in Futurology
Banana_bee t1_j7zrwse wrote
In my opinion this is largely because, until recently, if a robot made a mistake once, it would always make that same mistake in that situation. The 'AI' was effectively an incredibly long series of 'if' statements, as in the sketch below.
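To illustrate what I mean, here's a toy sketch (not any real system's code) of that style of bot:

```python
# A minimal sketch of the "long series of if statements" style of bot.
# It illustrates why such a bot repeats the same mistake forever:
# identical input -> identical (wrong) output, every single time.

def rule_based_bot(user_input: str) -> str:
    text = user_input.lower()
    if "hello" in text:
        return "Hi there!"
    if "weather" in text:
        return "It is sunny."          # wrong whenever it's actually raining
    if "bye" in text:
        return "Goodbye!"
    return "I don't understand."       # fallback for everything else

# Purely deterministic: ask about the weather on a rainy day and it
# gives the same wrong answer forever, with no way to learn otherwise.
print(rule_based_bot("What's the weather like?"))  # -> "It is sunny."
```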
With ANNs that isn't necessarily true, but it often still is, because the models are usually not continuously trained after release. Otherwise you get Racist Chatbots (Microsoft's Tay being the canonical example).
This is changing as we use smaller secondary models to detect that kind of content and steer the network's training in the direction we want, but it's still not hugely common.
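Roughly, the secondary model acts as a gatekeeper on what gets fed back into training. Here's a minimal sketch of that idea (not any specific vendor's pipeline; `toxicity_model` is a hypothetical classifier returning a score in [0, 1]):

```python
# Using a small secondary model to gate which conversations are allowed
# to reinforce the main model's training.

from typing import Callable

def filter_for_training(
    conversations: list[str],
    toxicity_model: Callable[[str], float],
    threshold: float = 0.2,
) -> list[str]:
    """Keep only conversations the secondary model deems safe enough
    to train on; everything else is dropped before any fine-tuning."""
    return [c for c in conversations if toxicity_model(c) < threshold]

# Hypothetical usage with a stand-in classifier: a real one would be a
# trained model, not a blocklist. Placeholder terms, not a real lexicon.
BLOCKLIST = {"slur1", "slur2"}

def toy_toxicity_model(text: str) -> float:
    words = set(text.lower().split())
    return 1.0 if words & BLOCKLIST else 0.0

clean = filter_for_training(["hello there", "slur1 rant"], toy_toxicity_model)
print(clean)  # -> ['hello there']
```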
ATR2400 t1_j80bqni wrote
I know that some, like Character.AI, get trained a bit through conversation now. The AIs I've made seem to learn some behaviours over a long conversation that get pulled into new chats. Like if I tell it to speak in a certain way and keep reinforcing that in one chat, then when I start a new one it'll keep it up despite having no memory of being explicitly told to act that way.
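Pure speculation on the mechanism, but one way that cross-chat "learning" could work without retraining the base model is to distil reinforced behaviours into a persistent persona profile that gets prepended to every new chat. A sketch of that idea (`summarize_style` and everything else here is hypothetical, not a Character.AI API):

```python
# Speculative: carry reinforced behaviours across chats via a profile,
# even though the raw chat memory itself is discarded.

persona_profile: list[str] = []  # persists across chat sessions

def summarize_style(chat_history: list[str]) -> str:
    """Hypothetical: distil repeatedly-reinforced instructions into one line."""
    reminders = [m for m in chat_history if m.startswith("Always speak")]
    return reminders[-1] if reminders else ""

def end_of_chat(chat_history: list[str]) -> None:
    # Save the reinforced behaviour before the chat memory is thrown away.
    learned = summarize_style(chat_history)
    if learned and learned not in persona_profile:
        persona_profile.append(learned)

def start_new_chat() -> str:
    # The new chat has no memory of the old one, but the profile
    # reintroduces the learned behaviour anyway.
    return "\n".join(persona_profile)

end_of_chat(["Hi", "Always speak like a pirate.", "Arr, aye!"])
print(start_new_chat())  # -> "Always speak like a pirate."
```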