flinsypop t1_is0jh1g wrote
Reply to comment by Visual-Arm-7375 in [P] Understanding LIME | Explainable AI by Visual-Arm-7375
Awesome. Can't wait!
flinsypop t1_is0cbrw wrote
This is a nice, brief introduction. Where you could improve is showing how each part of the presentation maps to code, so people can play around with it. My advice would be to link to the lime tutorials and fill in any gaps with notebooks of your own. If you can direct your viewers to practice what you explain, and also have safety nets where you explain common problems and solutions, you can differentiate your content from the dozens of other content creators explaining the same tools and concepts.
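For instance, the core loop such a notebook would walk through (perturb around an instance, query the black-box model, fit a proximity-weighted linear surrogate) can be sketched in plain numpy. This is a toy illustration of LIME's idea for tabular data, not the lime library's actual API; the Gaussian perturbations and the exponential kernel width are my own simplifying assumptions:

```python
import numpy as np

def lime_tabular_sketch(predict_fn, instance, num_samples=1000,
                        kernel_width=0.75, rng=None):
    """Toy LIME sketch: explain predict_fn's output at `instance`
    via a locally weighted linear surrogate (illustration only)."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = instance.shape[0]
    # 1. Perturb: sample a Gaussian cloud around the instance.
    X = instance + rng.normal(size=(num_samples, d))
    # 2. Query the black-box model on the perturbed points.
    y = predict_fn(X)
    # 3. Weight each sample by its proximity to the instance.
    dist = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate; its coefficients
    #    are the per-feature explanation.
    A = np.hstack([np.ones((num_samples, 1)), X])  # intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[1:]  # drop the intercept
```

On a model that is already linear, the surrogate recovers the true coefficients, which makes for a nice sanity check in a notebook before moving to real classifiers.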
I do have a bias here though: I dislike slides and slides of mathematical notation, but you did a good job of breaking it up with visuals in the middle. However, in the second half, it would have been better if you had referred back to examples from the first half as you went along. Using different examples can be fine but, in my experience explaining it to colleagues, the lack of continuity can stun-lock people. For example, people might wonder what exactly the perturbed dataset could look like for the images at the start. You could show the output of lime for the husky picture compared to the same picture with parts blanked out, as would have been generated in the perturbed dataset.
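To illustrate, the kind of perturbed image LIME generates (random subsets of segments switched off) can be mimicked in a few lines of numpy. Here a fixed grid stands in for the superpixel segmentation (e.g. quickshift) that the real library uses, which is a deliberate simplification:

```python
import numpy as np

def perturb_image(image, grid=4, rng=None):
    """Grey out a random subset of grid cells, mimicking how LIME
    perturbs an image by toggling superpixels (sketch only)."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    # Binary mask over segments: 1 = keep, 0 = blank out. This 0/1
    # vector is the interpretable representation the surrogate sees.
    mask = rng.integers(0, 2, size=(grid, grid))
    perturbed = image.copy()
    for i in range(grid):
        for j in range(grid):
            if mask[i, j] == 0:
                perturbed[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid] = 0
    return perturbed, mask.ravel()
```

Showing a grid of these perturbed huskies next to the model's prediction for each one would make the "fit a surrogate on perturbed samples" step much more concrete.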
flinsypop t1_j5erlpw wrote
Reply to comment by AmputatorBot in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Good bot.