sharky6000 t1_j6wo5mi wrote
What do you want to know?
You should look up counterfactual regret minimization (CFR); it is the technique that underlies all the expert poker bots.
Then, if you are interested in hold'em variants, look up DeepStack, Libratus, Pluribus, ReBeL, and Player of Games.
All of the competitive bots on the hold'em variants use some form of specialized search (based on CFR or Monte Carlo CFR) over the public belief state tree.
The card draw variants are mostly untouched because the public tree methods are not as easily applicable.
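To make the CFR pointer concrete, here is a minimal sketch of regret matching, the core update that CFR applies at every information set. This is not any of the poker systems above, just the basic idea run on rock-paper-scissors; the payoff matrix, the small initial asymmetry, and the iteration count are illustrative choices.

```python
N_ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b]: row player's payoff for action a against action b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def regret_matching(regrets):
    """Turn cumulative regrets into a strategy: normalize the positive part."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total <= 0:
        return [1.0 / len(regrets)] * len(regrets)
    return [p / total for p in pos]

def train(iterations=50000):
    """Self-play regret matching; average strategies approach the uniform Nash."""
    # slight asymmetry in player 0's regrets to kick off the dynamics
    regrets = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    strat_sum = [[0.0] * N_ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [regret_matching(r) for r in regrets]
        for p in range(2):
            opp = strats[1 - p]
            # expected utility of each pure action vs the opponent's mixed strategy
            util = [sum(PAYOFF[a][b] * opp[b] for b in range(N_ACTIONS))
                    for a in range(N_ACTIONS)]
            u_mix = sum(strats[p][a] * util[a] for a in range(N_ACTIONS))
            for a in range(N_ACTIONS):
                # accumulate regret for not having played a, and the running average
                regrets[p][a] += util[a] - u_mix
                strat_sum[p][a] += strats[p][a]
    return [[s / iterations for s in row] for row in strat_sum]

avg = train()
print(avg)  # each row should end up near [1/3, 1/3, 1/3]
```

CFR runs this same regret-matching update at every decision point of an extensive-form game, weighting the regrets by reach probabilities; the average strategy, not the final one, is what converges to equilibrium.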
Anyway feel free to dm me if you want to know more.
sharky6000 t1_j2pue3d wrote
I would not worry about your age so much, and the maturity can even help you.
I started my Ph.D. at 26 and took a long time to finish it (at 32, so 6 years), but it really paid off. I now have my dream job in industry research, and I am putting all my training to very good use!
You might be able to start with a more theory-oriented ML topic and switch later if you don't like it. I switched supervisors and topics two years in.
sharky6000 t1_j10ay53 wrote
Reply to [D] Why are we stuck with Python for something that require so much speed and parallelism (neural networks)? by vprokopev
I mean, the main answer is familiarity and the abundance of code available in Python.
Some people are exploring alternative routes. You can use the PyTorch C++ API. Meta released Flashlight. Both Rust and Go are picking up in ML (more Rust than Go now, I think, but Go has Gorgonia, and for Rust there is a Torch interface: https://towardsdatascience.com/machine-learning-and-rust-part-4-neural-networks-in-torch-85ee623f87a)
But often you go down these roads only to find later that they're not worth it. Much of the computational savings can be had without forcing a new language on people. The whole "shaping your thinking around the framework" thing is an unfortunate but necessary evil given how the networks are used (via high-speed devices like GPUs/TPUs) and how data gets assembled and transferred. Sadly, a lot of this is not the fault of the top-level language.
sharky6000 t1_ixqhqq7 wrote
Reply to [D] First time NeurIPS by innocentgilbertsmith
If they are using Whova (or some other app) for the conference, I strongly recommend using it and connecting through there in addition to the rest. It makes it easier to coordinate meetups and you can contact authors through it.
Go to the poster sessions and talk to people; it's a much better way to make real connections than talks (though in the years just before covid, they were starting to get a bit over-crowded). If you see someone alone, approach them. A lot of this comes down to "don't be shy". You will get a lot more from NeurIPS if you engage.
It will be overwhelming. NeurIPS is a huge conference in an already large field of AI / ML, so it helps to prepare using the advice people are giving in this thread. Don't try to attend absolutely everything; it will be too much, so take some down time. But most of all, have fun and learn stuff!
sharky6000 t1_iwyw1oa wrote
Reply to [D] David Ha/@hardmaru of Stability AI is liking all of Elon Musk's tweets by datasciencepro
We really need a new sub called /r/MachineLearningGossip. This sub is getting ridiculous. Mods, what does this have to do with machine learning?
In this case it's about Twitter and Elon more than ML. The only link to ML is stalking a prominent person's behavior on social media.
Why do we allow these?
sharky6000 t1_iti87n8 wrote
Reply to comment by jaschau in [D] Building the Future of TensorFlow by eparlan
I am not a fan of TF by any means, but:
> It’s the 3rd most-starred software repository on GitHub (right behind Vue and React) and the most-downloaded machine learning package on PyPI
Can't really make that stuff up. There are quite a lot of TF users out there.
sharky6000 t1_j93370g wrote
Reply to [D] Please stop by [deleted]
How about better moderation / more strict rules?
I for one would really love to see "here's my code, what am I doing wrong" or "how do you do X in project Y" style posts (might be better to spin off a ML-in-practice sub...)