Submitted by alkaway t3_zkwrix in MachineLearning
alkaway OP t1_j024u31 wrote
Reply to comment by trajo123 in [P] Are probabilities from multi-label image classification networks calibrated? by alkaway
Thanks for your response -- this is an interesting idea! Unfortunately, I'm training my network to predict 1000+ classes, for which such an idea would be computationally intractable...
trajo123 t1_j029y2r wrote
Ah, yes, it doesn't really make sense for more than a couple of classes. So if you can't make your problem multi-class, have you tried any probability calibration on the model outputs? That should make them "more comparable"; I think it's the best you can do with a deep learning model.
But why do you want to rank the outputs per pixel? Wouldn't some per-image aggregate over the channels make more sense?
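One common post-hoc calibration method for deep networks is temperature scaling: fit a single scalar T > 0 on a held-out validation set and divide the logits by it before the sigmoid. The sketch below is illustrative, not from the thread; the function name `fit_temperature` and the synthetic "overconfident" data are my own assumptions, standing in for your model's per-label logits.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_temperature(logits, labels):
    """Fit a scalar temperature T that minimizes binary cross-entropy
    of sigmoid(logits / T) on held-out (validation) data."""
    def nll(t):
        p = np.clip(sigmoid(logits / t), 1e-7, 1 - 1e-7)
        return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded")
    return res.x

# Toy validation set: 4 labels, with the "model" producing logits
# that are 3x too confident relative to the true label frequencies.
rng = np.random.default_rng(0)
true_logits = rng.normal(size=(500, 4))
labels = (rng.random((500, 4)) < sigmoid(true_logits)).astype(float)
overconfident = 3.0 * true_logits

T = fit_temperature(overconfident, labels)          # should recover T near 3
calibrated = sigmoid(overconfident / T)             # calibrated probabilities
```

Because only one scalar is fitted, this scales to 1000+ labels trivially; a per-label temperature (one T per class) is a straightforward extension if labels are miscalibrated to different degrees.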
alkaway OP t1_j02owfb wrote
Thanks so much for your response! Are you aware of any calibration methods I could try? Preferably ones which won't take long to implement / incorporate :P
trajo123 t1_j031wsx wrote
Perhaps scikit-learn's "Probability calibration" section would be a good place to start. Good luck!
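For what scikit-learn's probability-calibration tooling looks like in practice, here is a minimal sketch using `CalibratedClassifierCV` with Platt ("sigmoid") scaling on a toy binary problem; the synthetic dataset stands in for one label's uncalibrated decision scores. For a multi-label network you would calibrate each label's scores separately on a validation split.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Toy binary problem standing in for one label's scores.
X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# LinearSVC outputs uncalibrated margins; wrap it so that
# Platt scaling is fitted via internal cross-validation.
calibrated = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=3)
calibrated.fit(X_tr, y_tr)
probs = calibrated.predict_proba(X_te)[:, 1]  # calibrated P(label = 1)
```

`method="isotonic"` is the non-parametric alternative; it can fit better with enough validation data but overfits on small sets.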