boyetosekuji t1_jc04pvl wrote
What is the difference between $1.25/hr for Standard and $1.90/hr for Enhanced?
boyetosekuji t1_jbzwwo7 wrote
boyetosekuji t1_ixoywrk wrote
Reply to comment by swdsld in [Project] Background removal tool based on our recent work "Revisiting Image Pyramid Structure for High Resolution Salient Object Detection" by swdsld
If you want to check depth maps, take a look at this paper; they claim better edge detection, which is valuable for e.g. the wires of the San Francisco bridge.
boyetosekuji t1_ixovllt wrote
Reply to comment by swdsld in [Project] Background removal tool based on our recent work "Revisiting Image Pyramid Structure for High Resolution Salient Object Detection" by swdsld
I've got some good images with your tool, and this one wasn't bad either, aesthetically. Would implementing a depth map work? Although it would add another GPU-intensive task. Keep going, good luck.
boyetosekuji t1_ixnqtst wrote
Reply to comment by swdsld in [Project] Background removal tool based on our recent work "Revisiting Image Pyramid Structure for High Resolution Salient Object Detection" by swdsld
What's the reason for the smoke effect https://imgur.com/a/GGdWIig instead of a sharp outline? Open in a new tab (white bg).
boyetosekuji t1_ixnhgzt wrote
Reply to comment by swdsld in [Project] Background removal tool based on our recent work "Revisiting Image Pyramid Structure for High Resolution Salient Object Detection" by swdsld
Have you run into any use cases where bg removal suffers?
boyetosekuji t1_ixmnpd0 wrote
Reply to [Project] Background removal tool based on our recent work "Revisiting Image Pyramid Structure for High Resolution Salient Object Detection" by swdsld
Great job, I tried the popular remove.bg service and your result looks better than theirs. https://i.imgur.com/ZCIURkA.png
boyetosekuji t1_iwz6790 wrote
Reply to comment by sharky6000 in [D] David Ha/@hardmaru of Stability AI is liking all of Elon Musk's tweets by datasciencepro
Elon is here too, I can't escape this guy. I'm not from the US and have no interest in him, but people keep bringing him up everywhere.
boyetosekuji t1_iuj0v1n wrote
Reply to [News] The Stack: 3 TB of permissively licensed source code - Hugging Face and ServiceNow Research Denis Kocetkov et al 2022 by Singularian2501
Great news! How much would it cost to train on?
boyetosekuji t1_iuhrdh5 wrote
I'd like something similar to explain code snippets too, like a VS Code extension.
boyetosekuji t1_jdhyeok wrote
Reply to comment by nicku_a in [P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up by nicku_a
ChatGPT: Okay, let me try to explain this using gaming terminology!
Imagine you're playing a game where you have to learn how to do something new, like defeat a tough boss. You have different settings or options (hyperparameters) to choose from, like which weapons or abilities to use, how aggressive or defensive to play, etc.
Now, imagine that this boss is really tough to beat and you don't have many chances to practice. So, you want to find the best combination of options as quickly as possible, without wasting too much time on trial and error. This is where hyperparameter optimization (HPO) comes in.
HPO is like trying out different settings or options until you find the best ones for your playstyle and the boss's behavior. However, in some games (like Dark Souls), it's harder to do this because you don't have many chances to try out different combinations before you die and have to start over. This is similar to reinforcement learning (RL), which is a type of machine learning that learns by trial and error, but it's not very sample efficient.
AgileRL is like having a bunch of other players (agents) who are all trying to defeat the same boss as you. After a while, the best players (agents) are chosen to continue playing, and their "offspring" (new combinations of settings or options) are mutated and tested to see if they work better. This keeps going until the best possible combination of settings or options is found to beat the boss in the fewest possible attempts. That's why AgileRL is much faster than other ways of doing HPO for RL: it's like having a whole party of players helping you find the best strategy for defeating the boss.
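The select-the-best-then-mutate loop described above can be sketched in a few lines of Python. This is a toy illustration of evolutionary HPO, not AgileRL's actual API: the `fitness` function is a made-up stand-in for an agent's episodic return, and the hyperparameter names and ranges are assumptions for the example.

```python
import random

def fitness(hp):
    # Hypothetical objective standing in for an RL agent's score;
    # in a real setup each agent would be trained and evaluated.
    return -((hp["lr"] - 0.01) ** 2 + (hp["gamma"] - 0.99) ** 2)

def mutate(hp):
    # Offspring: copy the parent's settings and nudge one of them.
    child = dict(hp)
    key = random.choice(list(child.keys()))
    child[key] *= random.uniform(0.8, 1.2)
    return child

def evolve(pop_size=8, generations=20, seed=0):
    random.seed(seed)
    # Start with a random "party" of agents, each with its own settings.
    pop = [{"lr": random.uniform(0.001, 0.1),
            "gamma": random.uniform(0.9, 0.999)} for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best half of the players...
        pop.sort(key=fitness, reverse=True)
        elites = pop[: pop_size // 2]
        # ...and refill the party with mutated offspring of the elites.
        pop = elites + [mutate(random.choice(elites))
                        for _ in range(pop_size - len(elites))]
    return max(pop, key=fitness)

best = evolve()
```

Because the elites are carried over unchanged each generation, the best score can never get worse, so the loop steadily homes in on good settings without restarting from scratch.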