madrury83 t1_ixaqnqk wrote
Reply to Suggestions for a socially valuable project that would welcome an unpaid contributor [D] by AnthonysEye
Improving the documentation and error messages on any open source project is always appreciated.
madrury83 t1_it7hw31 wrote
Reply to comment by PassionatePossum in [D] Accurate blogs on machine learning? by likeamanyfacedgod
I think the more rigorous way to get at the OP's point is to observe that the AUC is the probability that a randomly selected positive-class example is scored higher (by your fixed model) than a randomly selected negative-class example. Being a probability, it is independent (at the population level) of the number of samples you have from your positive and negative populations (of course, smaller samples give you more sampling variance). I believe this is the OP's point with "they are fractions".
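To make that concrete, here's a minimal sketch (with made-up scores and labels, so the numbers are purely illustrative) checking that the pairwise-probability estimate matches sklearn's roc_auc_score:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Fake model scores: positives tend to score higher than negatives.
# The class balance (200 vs. 800) is deliberately lopsided.
n_pos, n_neg = 200, 800
pos_scores = rng.normal(loc=1.0, size=n_pos)
neg_scores = rng.normal(loc=0.0, size=n_neg)

scores = np.concatenate([pos_scores, neg_scores])
labels = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])

# Pairwise estimate: fraction of (positive, negative) pairs in which the
# positive example is scored strictly higher, counting ties as one half.
greater = (pos_scores[:, None] > neg_scores[None, :]).mean()
ties = (pos_scores[:, None] == neg_scores[None, :]).mean()
pairwise_auc = greater + 0.5 * ties

print(f"Pairwise estimate: {pairwise_auc:.6f}")
print(f"roc_auc_score:     {roc_auc_score(labels, scores):.6f}")
# The two numbers agree: AUC is a pairwise probability, so it depends only
# on the two score distributions, not on the ratio n_pos / n_neg.
```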
In any case, can we at least all agree that blogs and articles throwing around this kind of advice without justification are less than helpful?
madrury83 t1_j3epnqt wrote
Reply to comment by ThatInternetGuy in [R] Greg Yang's work on a rigorous mathematical theory for neural networks by IamTimNguyen
Repurposing common words to have technical meanings is a basic trope in mathematics: kernel, neuron, limit, derivative, spectrum, manifold, atlas, chart, model, group, ring, ideal, field, topology, open, closed, compact, exotic, neighborhood, domain, immerse, embed, fibre, bundle, flow, section, measure, category, scheme, torsion, ...
... and typing
Natural Transformation
into Google shows you skinny dudes who got buff.