BrandonBilliard t1_j9g3111 wrote
Reply to [D] Simple Questions Thread by AutoModerator
Hey,
Many of the proposed legal regulations for systems such as autonomous vehicles mention the need for explainability or transparency in the decision-making processes of those vehicles. My understanding, however, is that because these systems rely on deep learning, this is either extremely hard or impossible to achieve.
Is my understanding correct, or is explainability possible in deep-learning systems?
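To make the question concrete, here is a minimal sketch of the kind of thing I imagine "explainability" could mean in practice: an input-gradient saliency map in PyTorch. The tiny `model` and random input are just placeholders I made up, not anything from a real vehicle stack.

```python
import torch
import torch.nn as nn

# Hypothetical small classifier standing in for a perception network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

# A single input (e.g., a feature vector); requires_grad lets us ask
# "which input features most influenced the decision?"
x = torch.randn(1, 10, requires_grad=True)

logits = model(x)
pred = logits.argmax(dim=1).item()

# Backpropagate the score of the predicted class to the input.
logits[0, pred].backward()

# The gradient magnitude per input feature is a crude saliency map:
# larger values mean that feature mattered more for this prediction.
saliency = x.grad.abs().squeeze(0)
print(pred, saliency)
```

Is that roughly what people mean by post-hoc explanations, and would something like it satisfy a transparency requirement, or do regulators expect something stronger?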