sqweeeeeeeeeeeeeeeps t1_jcngpzd wrote
If you were going to use GPT-3.5 Turbo, just wait for GPT-4 before you spend $600 on compute costs.
sqweeeeeeeeeeeeeeeps t1_j20b6hu wrote
Reply to comment by Horneur in Making an AI play LoL by Horneur
You should do some reading on what game theory is, too. It will be necessary to have a general approach for the harder games.
sqweeeeeeeeeeeeeeeps t1_j20alj8 wrote
Reply to comment by Horneur in Making an AI play LoL by Horneur
There is no game theory function. I’m confused about how you landed on this line of thinking. Try out league-play reinforcement learning strategies on a simple game like tic-tac-toe. It should play enough games to learn the best move in every scenario.
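To make that concrete, here is a minimal sketch (hypothetical code, not from this thread) of tabular Q-learning with ε-greedy self-play on tic-tac-toe; no deep nets are needed at this scale, and all constants are illustrative:

```python
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(board, move)] -> value for the player to move
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def moves(b):
    return [i for i, c in enumerate(b) if c == ' ']

def choose(b, eps=EPS):
    acts = moves(b)
    if random.random() < eps:
        return random.choice(acts)                 # explore
    return max(acts, key=lambda a: Q[(b, a)])      # exploit

def episode():
    b, player = (' ',) * 9, 'X'
    prev = None                                    # opponent's last (state, action)
    while True:
        a = choose(b)
        nb = b[:a] + (player,) + b[a+1:]
        w = winner(nb)
        if w or not moves(nb):
            r = 1.0 if w else 0.0                  # win = +1, draw = 0 for the mover
            Q[(b, a)] += ALPHA * (r - Q[(b, a)])
            if prev is not None:                   # the move that allowed this gets -r
                Q[prev] += ALPHA * (-r - Q[prev])
            return
        # negamax-style bootstrap: the opponent's best value is our loss
        best_next = max(Q[(nb, a2)] for a2 in moves(nb))
        Q[(b, a)] += ALPHA * (-GAMMA * best_next - Q[(b, a)])
        prev, b = (b, a), nb
        player = 'O' if player == 'X' else 'X'

random.seed(0)
for _ in range(10000):
    episode()
```

The same loop structure carries over to bigger games; only the state encoding and the function approximator (a table here, a network later) change.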
sqweeeeeeeeeeeeeeeps t1_j209r5c wrote
Reply to comment by Horneur in Making an AI play LoL by Horneur
What do you think game theory means? These are all game-theory based, and you can use DL on any of them. Tic-tac-toe is simple enough for an explicit algorithm, but you can practice building simulations and using DRL on it.
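For the "explicit algorithm" route, a rough negamax sketch (hypothetical code; boards are 9-character strings) plays tic-tac-toe perfectly with no learning at all:

```python
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def best_move(b, player):
    """Negamax: returns (score, move) from `player`'s perspective (+1/0/-1)."""
    opponent = 'O' if player == 'X' else 'X'
    w = winner(b)
    if w:
        return (1 if w == player else -1), None
    empties = [i for i, c in enumerate(b) if c == ' ']
    if not empties:
        return 0, None                       # draw
    best = (-2, None)
    for m in empties:
        nb = b[:m] + player + b[m+1:]
        score = -best_move(nb, opponent)[0]  # opponent's gain is our loss
        if score > best[0]:
            best = (score, m)
    return best

# X has two in a row on the top line: negamax finds the winning square
score, move = best_move('XX OO    ', 'X')
```

Exhaustive search like this stops scaling almost immediately (checkers, chess), which is exactly why the learning-based approaches exist.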
sqweeeeeeeeeeeeeeeps t1_j203p1p wrote
Reply to comment by Horneur in Making an AI play LoL by Horneur
Remember, chess AIs were really the first thing. Dota/League is significantly harder than chess. So try making AIs for:
Tic-tac-toe, a simple dice game (Farkle seems like a good one), a card game with slightly more intricacy, Checkers, Chess, a simple game involving player movement, then League.
sqweeeeeeeeeeeeeeeps t1_j1zxu27 wrote
Reply to comment by Horneur in Making an AI play LoL by Horneur
Make a deep RL AI for a simple card game. Start there; create the card game environment in your own code so you don’t have to worry about APIs.
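As a sketch of what "your own environment" might look like, here is an invented one-step card game with a gym-style `reset`/`step` interface (the game rules and every name here are made up for illustration):

```python
import random

class HighCardEnv:
    """Toy game: draw a card 1..13, then choose to play (1) vs the dealer or fold (0)."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.card = None

    def reset(self):
        self.card = self.rng.randint(1, 13)
        return self.card                       # observation: the agent's card

    def step(self, action):
        assert action in (0, 1)
        if action == 0:                        # fold: no risk, no reward
            reward = 0.0
        else:                                  # play: +1 if we beat the dealer's draw
            dealer = self.rng.randint(1, 13)
            reward = 1.0 if self.card > dealer else -1.0
        return None, reward, True, {}          # obs, reward, done, info

env = HighCardEnv(seed=0)
obs = env.reset()
_, r, done, _ = env.step(1 if obs >= 8 else 0)  # trivial threshold policy
```

Because you own `reset` and `step`, any RL agent you write later can plug straight in, and you can grow the game's complexity without fighting someone else's API.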
sqweeeeeeeeeeeeeeeps t1_izviw1x wrote
Reply to comment by chengstark in What’s different between developing deep learning product and typical ML product? by digital-bolkonsky
This. We have no context for what ML even entails here. It’s too broad.
sqweeeeeeeeeeeeeeeps t1_iztlsd7 wrote
Reply to comment by digital-bolkonsky in What’s different between developing deep learning product and typical ML product? by digital-bolkonsky
You’re still not asking a clear question: using ML to build a product, or the model itself being the product? If the model is the product, then your question reduces to “What’s the difference between a non-DL ML model and a DL model?”
sqweeeeeeeeeeeeeeeps t1_izt8ldx wrote
Reply to comment by digital-bolkonsky in What’s different between developing deep learning product and typical ML product? by digital-bolkonsky
PyTorch / Keras / TensorFlow for deep learning,
and any basic ML library you want (scikit-learn, etc.).
Deep learning is all about GPU usage and running long experiments in production. I’m confused about what you even want.
Is the question basically asking what skills someone specializing in DL would have vs. someone specializing in non-DL ML?
sqweeeeeeeeeeeeeeeps t1_izspv5o wrote
Reply to comment by MazenAmria in Advices for Deep Learning Research on SWIN Transformer and Knowledge Distillation by MazenAmria
Showing you can create a smaller model with the same performance means SWIN is overparameterized for that given task. Give it datasets of varying complexity, not just a single one.
sqweeeeeeeeeeeeeeeps t1_izspjid wrote
Reply to What’s different between developing deep learning product and typical ML product? by digital-bolkonsky
Google difference between ML and Deep Learning.
sqweeeeeeeeeeeeeeeps t1_izq7367 wrote
Reply to comment by abhijit1247 in Why popular face detection models are failing against cartoons and is there any way to prevent these false positives? by abhijit1247
Is this a shitpost? These models are trained on real human faces. Humans look very different from cartoons.
sqweeeeeeeeeeeeeeeps t1_izq6vbc wrote
Reply to comment by MazenAmria in Advices for Deep Learning Research on SWIN Transformer and Knowledge Distillation by MazenAmria
It is.
sqweeeeeeeeeeeeeeeps t1_izphlmd wrote
Reply to comment by MazenAmria in Advices for Deep Learning Research on SWIN Transformer and Knowledge Distillation by MazenAmria
? You are proving your SWIN model is overparameterized for CIFAR. Make an EVEN simpler model than those; you probably won’t be able to with off-the-shelf distillation. Doing this just for ImageNet doesn’t change anything; it’s just a different, more complex dataset.
What’s your end goal? To come up with a distillation technique that makes NNs smaller and more efficient?
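For reference, the standard soft-target distillation loss (Hinton-style temperature-scaled KL between teacher and student) can be sketched in NumPy; in a real pipeline you would compute this on framework tensors, and the example logits below are invented:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) at temperature T, scaled by T^2 (Hinton et al.)."""
    p = softmax(teacher_logits, T)          # softened teacher targets
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

# identical logits give zero loss; diverging logits give a positive loss
t = np.array([[2.0, 0.5, -1.0]])
```

In training this term is usually mixed with the ordinary cross-entropy on hard labels, weighted by a hyperparameter.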
sqweeeeeeeeeeeeeeeps t1_izob3yb wrote
Reply to Advices for Deep Learning Research on SWIN Transformer and Knowledge Distillation by MazenAmria
MNIST and ImageNet are a huge range apart. Try something in between, preferably multiple datasets, for example CIFAR-10 and CIFAR-100. I would expect it to perform more similarly to the full SWIN model on CIFAR-10 because of the lower data complexity.
sqweeeeeeeeeeeeeeeps t1_izo7ejq wrote
Reply to Why popular face detection models are failing against cartoons and is there any way to prevent these false positives? by abhijit1247
Are you even retraining these models on cartoon faces?
sqweeeeeeeeeeeeeeeps t1_iwnxlbc wrote
Reply to comment by Constant-Cranberry29 in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
? Not sure. Train it longer, lower the learning rate; are you using teacher forcing? I’m not very familiar with best LSTM practices.
sqweeeeeeeeeeeeeeeps t1_iwnx6pv wrote
Reply to comment by Constant-Cranberry29 in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
What’s your problem? Normalized data is good.
sqweeeeeeeeeeeeeeeps t1_iwnu8yt wrote
Reply to How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
You are misinterpreting what “normalizing” is. It rescales your data to zero mean and unit variance, which means you end up with positive and negative numbers centered around 0. This is optimal for most deep learning models. The interval [0, 1] is not good because you want some weights to be negative, as certain features negatively impact certain results.
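A quick NumPy illustration of the difference (toy data): z-score standardization keeps signs and centers the data at 0, while min-max scaling squashes everything into [0, 1] and loses the negative values:

```python
import numpy as np

x = np.array([-5.0, -1.0, 0.0, 2.0, 4.0])

# z-score standardization: centered at 0, unit variance, signs preserved
z = (x - x.mean()) / x.std()

# min-max scaling: squashed into [0, 1], negative values become positive
scaled = (x - x.min()) / (x.max() - x.min())
```

`z` still distinguishes below-average from above-average inputs by sign, which is exactly the information min-max scaling throws away.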
sqweeeeeeeeeeeeeeeps t1_iu064z7 wrote
Reply to comment by nihal_gazi in AGI staying incognito before it reveals itself? by Ivanthedog2013
“It’s not yet patented” sounds ridiculously funny to me. Publish, progress the research, be open to critics of your ideas; without that you are just making baseless claims. All I see is a high-school student who has coded up his little ML algorithm and thinks it’s AGI.
Why am I wasting my time entertaining this?
sqweeeeeeeeeeeeeeeps t1_iu03am3 wrote
Reply to comment by nihal_gazi in AGI staying incognito before it reveals itself? by Ivanthedog2013
Lmao, this is too funny. I am sure you can easily beat SOTA models on speed, but does it have higher performance/accuracy? We use these overparameterized deep models to perform better, not to be fast. How do you know you can perform “as well as a human”? What tests are you running? What is the backbone of this algorithm? I think you have just made a small neural net and are saying “look how fast this is” while it performs far worse than actually big models. I am taking all of this with a grain of salt because you are in high school and have no actual judgment of what SOTA models actually do.
“70+ algorithms in the past year”? Is that supposed to be impressive? Are you suggesting the number of algorithms you produce is any indicator of how they perform? How do you even tune 70 models in a year?
I have a challenge for you. Since you are in HS, read as much research as you can (probably on efficient networks, or whatever you seem to like) and write a review paper on some small niche subject. Then start coming up with novel ideas for it: test them, tune them, push benchmarks, and make as many legitimate comparisons to real-world models as you can. Then publish it.
sqweeeeeeeeeeeeeeeps t1_itzzt2l wrote
Reply to comment by nihal_gazi in AGI staying incognito before it reveals itself? by Ivanthedog2013
OK, so you’re just spouting BS about AGI and have nothing to back up your claims.
sqweeeeeeeeeeeeeeeps t1_itzzc07 wrote
Reply to comment by nihal_gazi in AGI staying incognito before it reveals itself? by Ivanthedog2013
I’m hoping you at least published in top conferences?
sqweeeeeeeeeeeeeeeps t1_itzv3q3 wrote
Reply to comment by nihal_gazi in AGI staying incognito before it reveals itself? by Ivanthedog2013
“I have almost figured out an algorithm for an AGI”: lmao, no you have not. You’re in high school claiming to be the closest person to solving AGI right now, as an “AI researcher”.
sqweeeeeeeeeeeeeeeps t1_jcrf571 wrote
Reply to Seeking Career Advice to go from general CS background to a career in AI/Machine Learning by brown_ja
Go to grad school; get really good at optimization, probability/statistics, and linear algebra, and take plenty of ML. A Master’s is usually the minimum for ML positions, but PhDs will dominate positions for any cutting-edge research.