Legal-Software t1_j8h8bti wrote
The dark side of AI is not the AI itself, but that people will accept its decision-making as-is, with little transparency and little recourse, while the companies building the models hide behind trade-secret protections to prevent any scrutiny or oversight. This is already happening with things like private companies in the US providing non-transparent prison sentencing recommendations using AI, where factors like skin colour were picked up by the model as relevant in determining sentence length (the historical data shows more dark-skinned people in prison, so the model infers that they are more likely to be offenders and adjusts its parameters accordingly). With no oversight or transparency, it's not always clear which features a model treats as relevant, what weights they carry from one layer to the next, or what kinds of biases exist within the network.
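A minimal sketch of how that kind of proxy bias arises (entirely synthetic data and a toy logistic regression, nothing to do with any real sentencing system): the label below is generated from a legitimate factor only, but because the historical sampling is skewed by group, the model still learns a positive weight on the protected attribute.

```python
import math
import random

random.seed(0)

# Synthetic data: the outcome depends only on a legitimate factor
# ("prior"), but the protected attribute ("group") correlates with the
# outcome because the historical data itself is skewed.
def make_record():
    prior = random.random()                       # legitimate factor in [0, 1]
    label = 1 if prior > random.random() else 0   # P(label=1) = prior
    # Skewed history: positives are drawn disproportionately from group 1.
    group = 1 if random.random() < (0.8 if label else 0.2) else 0
    return [1.0, prior, group], label             # bias term, factor, protected attr

data = [make_record() for _ in range(2000)]

# Minimal logistic regression trained by batch gradient descent.
w = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(300):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))            # predicted probability
        for i in range(3):
            grad[i] += (p - y) * x[i]
    for i in range(3):
        w[i] -= lr * grad[i] / len(data)

# The protected attribute ends up with a positive learned weight even
# though it plays no role in the true generative process.
print("weight on protected attribute:", round(w[2], 3))
```

The point is that the model never needed to be told about skin colour; a skewed sample is enough for it to pick the attribute up as "relevant".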
Part of my day job entails developing AI and ML models for assessing driving risk (both of human drivers and of self-driving vehicles), and it's clear that these models and technologies will always have faults that require error correction and monitoring. A vital part of improving any model is knowing when it gets things right and when it gets things wrong; remove that feedback mechanism and you in effect prevent any real improvement and guarantee the continued mediocrity of the outputs.
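That feedback loop can be sketched in a few lines (a hypothetical monitor, not any production system: the `monitor` function, its inputs, and the 0.2 threshold are all illustrative): compare predictions against observed outcomes and flag the model for retraining when the error rate drifts too high.

```python
# Hypothetical outcome-based monitoring: compare binary risk predictions
# against observed outcomes and flag the model when errors exceed a threshold.
def monitor(predictions, outcomes, threshold=0.2):
    """Return (error_rate, needs_retraining) for one batch of cases."""
    errors = sum(1 for p, y in zip(predictions, outcomes) if p != y)
    rate = errors / len(predictions)
    return rate, rate > threshold

# Example batch: two of five predictions disagree with the outcomes.
rate, retrain = monitor([1, 0, 1, 1, 0], [1, 1, 1, 0, 0])
print(f"error rate {rate:.0%}, retrain: {retrain}")
```

Without access to the outcomes (the ground truth), neither the error rate nor the retraining signal can be computed, which is exactly what's lost when the feedback mechanism is removed.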