killver
killver t1_ivzoqe1 wrote
Why don't you ask in his repo?
killver t1_iv2t4ll wrote
Reply to comment by Zer01123 in [D] NVIDIA RTX 4090 vs RTX 3090 Deep Learning Benchmarks by mippie_moe
Yeah, not sure where they get this conclusion from.
killver t1_iv2swqz wrote
Thanks for that - unfortunately it confirms that it performs worse than many had hoped.
killver t1_iv0z4kt wrote
Reply to [N] Class-action lawsuit filed against GitHub, Microsoft, and OpenAI regarding the legality of GitHub Copilot, an AI-using tool for programmers by Wiskkey
I am even more concerned that they send my non-public / proprietary code back to evaluate the responses and store it to improve the models. I still have not found a clear statement that they are not doing this.
killver t1_iv0ycz4 wrote
Reply to comment by CapaneusPrime in [N] Class-action lawsuit filed against GitHub, Microsoft, and OpenAI regarding the legality of GitHub Copilot, an AI-using tool for programmers by Wiskkey
If you trust a random blog, go ahead.
This ruling was for a very specific use case that cannot be generalized, and it also only applies to the US, in fact only to a specific district. It is also totally unclear how it applies to generative models, which even the cited blog acknowledges.
The AI community just loves to trust it because that is the easy and convenient thing to do.
Also see a reply to this post you shared: https://medium.com/@brianjleeofcl/this-piece-should-be-retracted-ca740d9a36fe
killver t1_iv0rejo wrote
Reply to comment by CapaneusPrime in [N] Class-action lawsuit filed against GitHub, Microsoft, and OpenAI regarding the legality of GitHub Copilot, an AI-using tool for programmers by Wiskkey
> It's already been pretty well established that AI can be trained on copyrighted photos without issue.
This is one of the biggest misconceptions in AI at this point. This is just not true.
killver t1_itpw963 wrote
The easiest way, which works well in practice, is to just concatenate them. You can also normalize each embedding separately before concatenating. If the dimensions differ significantly, you can concatenate the smaller one multiple times to weight them similarly, or apply dimensionality reduction beforehand.
Another way is to calculate the two similarities separately and then average them (plain or weighted average). Both options are sketched below.
You can take a look at this kaggle competition's solutions for inspiration: https://www.kaggle.com/competitions/shopee-product-matching/discussion
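A minimal numpy sketch of both options; the embedding names, shapes, and weights here are purely illustrative assumptions, not from any particular setup:

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    # Unit-normalize each row so cosine similarity reduces to a dot product
    # and both embedding types contribute on a comparable scale.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

# Two hypothetical embedding matrices for the same N items,
# e.g. image and text features with different dimensions.
emb_a = np.random.randn(100, 512)  # N x 512
emb_b = np.random.randn(100, 128)  # N x 128

# Option 1: normalize separately, then concatenate into one vector.
# Tiling the smaller embedding (here 4x, since 4 * 128 = 512) is a crude
# way to weight the two parts similarly when the dimensions differ a lot.
combined = np.concatenate(
    [l2_normalize(emb_a), np.tile(l2_normalize(emb_b), (1, 4))], axis=1
)
combined = l2_normalize(combined)
sim_concat = combined @ combined.T

# Option 2: compute the two cosine-similarity matrices separately
# and take a (weighted) average.
sim_a = l2_normalize(emb_a) @ l2_normalize(emb_a).T
sim_b = l2_normalize(emb_b) @ l2_normalize(emb_b).T
sim_avg = 0.7 * sim_a + 0.3 * sim_b
```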
Submitted by killver t3_y2vvne in MachineLearning
killver t1_iqo7jqh wrote
Reply to comment by you-get-an-upvote in [D] Focal loss - why it scales down the loss of minority class? by Lugi
But that's the opposite; most implementations do it the way the OP mentions: http://pytorch.org/vision/stable/_modules/torchvision/ops/focal_loss.html#sigmoid_focal_loss
Or am I getting it wrong?
killver t1_iqo6z39 wrote
Alpha in focal loss has confused me and others before. I do not understand why they built the paper's write-up so heavily around it, as it was not really the paper's contribution.
I would suggest using a non-alpha variant in your experiments, thinking of alpha only as a common way of up-/down-weighting classes that you can add later.
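For reference, a minimal PyTorch sketch of the non-alpha variant, following the same formulation as the torchvision implementation linked above but with the alpha term dropped:

```python
import torch
import torch.nn.functional as F

def sigmoid_focal_loss_no_alpha(logits, targets, gamma=2.0):
    # Per-element binary cross entropy on the raw logits.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    # p_t is the predicted probability of the true class.
    p_t = p * targets + (1 - p) * (1 - targets)
    # (1 - p_t)^gamma down-weights easy, well-classified examples;
    # class weighting via alpha can be layered on top later if needed.
    return ((1 - p_t) ** gamma * ce).mean()
```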
killver t1_ivzp4fa wrote
Reply to comment by maxToTheJ in [D] Current Job Market in ML by diffusion-xgb
Afaik ML was impacted, but at Meta it was mostly the responsible-AI folks getting booted.