Banana_bee t1_j7zrwse wrote
Reply to Humans are struggling to trust robots and forgive mistakes by Gari_305
In my opinion this is largely because, until recently, if a robot made a mistake once, it would always make that same mistake in the same situation. The 'AI' was effectively an incredibly long series of 'if' statements.
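Roughly this kind of thing - a toy sketch with made-up rules and sensor values, not any real robot's code:

```python
# Toy sketch of "AI as a long series of 'if' statements".
# All rules, thresholds, and inputs here are hypothetical.
def choose_action(obstacle_distance_m: float, battery_pct: float) -> str:
    # Deterministic rules: identical inputs always give identical output,
    # so a mistaken rule repeats its mistake every time it fires.
    if battery_pct < 10:
        return "return_to_dock"
    if obstacle_distance_m < 0.5:
        return "stop"
    if obstacle_distance_m < 2.0:
        return "slow_down"
    return "drive_forward"

# The bug never "heals": same situation, same wrong answer, forever.
print(choose_action(0.4, 80.0))  # always "stop", right or wrong
```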
With ANNs that isn't necessarily true, though it often is in practice, since the models are usually not continuously trained after release - because then you get Racist Chatbots.
This is changing: smaller secondary models are increasingly used to detect that kind of content and to reinforce the network's training in the direction we want - but it's still not hugely common.
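Conceptually it looks something like this - a toy sketch where the generator, the safety classifier, and the threshold are all hypothetical stand-ins, not any specific system:

```python
# Toy sketch of a small secondary model gating a big model's output
# and turning the result into a training signal. Everything here is
# a hypothetical stand-in.
def generate_reply(prompt: str) -> str:
    # stand-in for a large language model
    return "some generated reply to: " + prompt

def toxicity_score(text: str) -> float:
    # stand-in for a small safety classifier: 0.0 (fine) to 1.0 (toxic)
    return 0.9 if "slur" in text else 0.05

def respond(prompt: str, threshold: float = 0.5):
    reply = generate_reply(prompt)
    score = toxicity_score(reply)
    if score > threshold:
        # Blocked replies become negative examples for a later
        # fine-tuning pass, rather than training the model live.
        return None, -1.0  # signal: discourage outputs like this
    return reply, +1.0     # signal: reinforce outputs like this

print(respond("hello there"))
```

The point being that the feedback happens offline in a controlled training pass, instead of letting users train the model directly - which is what burned the earlier chatbots.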