anonymousTestPoster
anonymousTestPoster t1_j2r4kyj wrote
Reply to comment by 99posse in [D] life advice to relatively late bloomer ML theory researcher. by notyourregularnerd
> Theoretical ML: It's BS, literally
You say this, but I would argue that most of the best people I have personally seen in the ML industry possess a strong theoretical background. Of course, to answer very practical questions I wouldn't necessarily reach for a theory book, or go read a textbook on algebraic geometry... But minds well-versed in theory tend to understand novel situations and problems very quickly, and are very adaptable.
So if a theoretician can successfully carry their "post-PhD" persona into industry, I think they stand the best chance of being one of the most valuable team players, because, for example, "everyone" can code (or so they say), but not everyone can understand models at the depth a theoretician can.
For example, if something isn't working, I would rather first seek the counsel of someone with a theoretical background working in industry than someone who has only ever worked in industry, unless that person is exceptionally talented and has something like 10 years of experience.
anonymousTestPoster t1_iz8pwzx wrote
Reply to comment by huberloss in [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
Schmidhuber would like a word with that ChatGPT bot
anonymousTestPoster t1_iykfyen wrote
Reply to [p] Really Dumb Idea(bear with me) by poobispoob
It is an interesting concept, because it looks like an anti-classifier / anti-segmenter.
Usually we want to maximize identification and/or segmentation within an image, but here you would want to reverse the cost function, in a sense, so as to minimize identifiability. The theoretical optimum would probably be output indistinguishable from uniform random sampling across a grid.
What you could do is take a set of images of various locations under different conditions / weather, superimpose the camo in various orientations, and find which camo performs best in which settings most often.
This would be the quick-and-dirty starting approach (see the sketch below); then you can focus in on particular use cases / conditions, such as what the other poster commented on.
> varying vegetation (sage, nothing, large deciduous trees, pines, ...). The person may be laying down in the bushes, walking down an open path, ...
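A minimal sketch of that quick-and-dirty loop, assuming you already have some person/object detector wrapped as `detection_score(image) -> float` (higher = more detectable) and RGBA camo swatches as PIL images; every name here is hypothetical:

```python
import random
from PIL import Image  # scenes and camo swatches assumed to be PIL RGBA images

def composite(background, camo, position, angle):
    """Paste a rotated camo swatch onto a copy of a background scene."""
    rotated = camo.rotate(angle, expand=True)
    scene = background.copy()
    scene.paste(rotated, position, rotated)  # alpha channel acts as the mask
    return scene

def evaluate_camo(camo, scenes, detection_score, trials=50):
    """Mean detector confidence over random placements; lower = better camo."""
    total = 0.0
    for _ in range(trials):
        scene = random.choice(scenes)
        # assumes swatches are smaller than scenes; a real version would
        # place the camo on a person silhouette, not at random positions
        x = random.randrange(scene.width - camo.width)
        y = random.randrange(scene.height - camo.height)
        total += detection_score(composite(scene, camo, (x, y), random.uniform(0, 360)))
    return total / trials

# Pick the least detectable camo per environment type:
# best = {name: min(camos, key=lambda c: evaluate_camo(c, imgs, detection_score))
#         for name, imgs in scenes_by_setting.items()}
```

Running `evaluate_camo` once per (camo, environment) pair gives you the quick comparison table across settings.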
anonymousTestPoster t1_ixyvrwy wrote
Reply to comment by Zestyclose-Check-751 in [P] Metric learning: theory, practice, code examples by Zestyclose-Check-751
> Hi, metric learning is an umbrella term like self-supervised learning, detection, and tracking.
This is basically my point: what is the need for an umbrella term? There are infinitely many ways in which sub-topics can be linked together. As for the claim that:
> people still need some tools and tutorials to solve their problems.
Isn't it better that people reach for self-supervised learning, detection, or tracking directly, depending on the problem at hand? These sub-topics are sufficiently different that they should be considered quite separately. Even within "supervised learning" we treat the sub-problems of regression and classification very differently. There is theoretical interest in discussing their similarities together, but practically speaking one frames a specific problem as either a "classification" or a "regression" task, so it is ultimately not useful to consider a practical problem as being of "supervised" type, beyond maybe 1-2 sentences in the introduction section.
anonymousTestPoster t1_ixxy7qq wrote
Reply to comment by larryobrien in [P] Metric learning: theory, practice, code examples by Zestyclose-Check-751
Do you have any papers or a Python library that does what you are describing?
anonymousTestPoster t1_ixxq87i wrote
Is metric learning a new buzzword, or does it represent a genuinely new research direction? The idea of vector space embedding (for whatever purpose) is not a new concept.
Of course, one may not know the embedding procedure (is this what they call representation learning?), but the way metric learning and/or representation learning appears to solve this issue seems effectively like a grid search of sorts (which can be extended to continuous parameter spaces if necessary) over a set of possible embeddings / projections / metrics.
Of course I could be wrong and missing the point entirely, since I only very, very quickly skimmed a few paragraphs here and there. Please correct me if I am wrong.
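For concreteness, what most metric-learning write-ups seem to mean in practice is gradient-based optimization of an embedding under something like a triplet loss, i.e. learning the metric end-to-end rather than grid-searching over candidates; a minimal PyTorch sketch with dummy data (all shapes and hyperparameters illustrative):

```python
import torch
import torch.nn as nn

# A learnable embedding f_theta: R^128 -> R^32; the "metric" is just
# Euclidean distance measured in the embedded space.
embed = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

# anchor/positive share a label, negative does not (dummy tensors here).
anchor, positive, negative = (torch.randn(256, 128) for _ in range(3))

for step in range(100):
    # pull anchor toward positive, push it away from negative by a margin
    loss = loss_fn(embed(anchor), embed(positive), embed(negative))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

If this picture is accurate, my question stands: the novelty would seem to lie in the losses and mining strategies, not in the embedding idea itself.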
anonymousTestPoster t1_ival53k wrote
Reply to [R] Reincarnating Reinforcement Learning (NeurIPS 2022) - Google Brain by smallest_meta_review
How is this idea different from using pre-trained networks (functions) and then adapting them to a new problem context?
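For concreteness, the baseline I have in mind is the standard transfer-learning recipe, something like the following (a sketch assuming torchvision >= 0.13; the 10-class head is illustrative):

```python
import torch
import torch.nn as nn
from torchvision import models

# Standard transfer learning: reuse pretrained weights, swap the head,
# and fine-tune on the new task.
net = models.resnet18(weights="IMAGENET1K_V1")
net.fc = nn.Linear(net.fc.in_features, 10)  # hypothetical 10-class task

# Freeze the pretrained trunk; train only the new head.
for name, p in net.named_parameters():
    p.requires_grad = name.startswith("fc")

opt = torch.optim.SGD(net.fc.parameters(), lr=1e-2, momentum=0.9)
```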
anonymousTestPoster t1_j90i96k wrote
Reply to comment by bjergerk1ng in [D] Formalising information flow in NN by bjergerk1ng
What did the person link? Lol, why is everything getting deleted in this thread?