flapflip9

flapflip9 t1_j1dk0ke wrote

Sounds like you want to predict 50 values given 150 inputs. ML might work, but I doubt you'd have enough data to avoid overfitting.

It also sounds like there isn't a single correct numerical answer for any given day; rather, you're trying to find a decent distribution. So look first into constrained optimization, similar to budget allocation or task distribution problems.
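To make the budget-allocation analogy concrete, here's a minimal sketch: spread a fixed daily total across 50 slots under a "sums to the total, non-negative" constraint. All names and numbers are hypothetical, and proportional scaling is just a crude baseline before reaching for a real constrained solver (e.g. scipy.optimize).

```python
def allocate(estimates, total):
    """Scale non-negative estimates so they sum exactly to `total`.

    A stand-in for a proper constrained optimizer: it satisfies the
    sum and non-negativity constraints while staying close to the
    per-slot estimates in proportion.
    """
    clipped = [max(e, 0.0) for e in estimates]
    s = sum(clipped)
    if s == 0:
        # No signal at all: fall back to a uniform split.
        return [total / len(clipped)] * len(clipped)
    return [total * c / s for c in clipped]

# Hypothetical per-slot estimates, forced to sum to a daily total of 100.
alloc = allocate([3.0, 1.0, 0.0, 4.0], 100.0)
```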

1

flapflip9 t1_iwumecm wrote

Look into open-mmlab's MMOCR; it does both detection and recognition, with English and Chinese alphabet support. Absolutely wicked performance - it scrapes text off logos, flyers, blurred images, etc. Not suitable for real-time use, though.

Until a few years ago, I was quite happy with Tesseract, but it's fallen behind since then. Still good for scanning printed text and the like, and it supports a lot of languages.

5

flapflip9 t1_ivkw5cs wrote

For HU poker specifically: from what I recall, researchers had rough upper bounds on how far their models were from Nash equilibrium. It was down to something like less than 1 big blind (BB) per 100 hands, implying someone playing perfect NE could exploit such a bot by no more than that amount. So if you're playing $100 BB HU, you'd hope to make at most $100 per 100 hands - not exactly a get-rich-quick scheme. I'm most likely off about the exact upper limit here, but I recall it being so small that for all practical purposes, HU poker is considered solved (the NE, that is).
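The dollar figure follows directly from the bound. A quick back-of-envelope, using the 1 BB/100 number as remembered in the comment (not an exact published figure):

```python
# Exploitability bound, as recalled above: at most 1 BB per 100 hands.
exploitability_bb_per_100 = 1.0
big_blind_usd = 100.0  # a $100 BB heads-up game

# Maximum edge a perfect NE player could extract, per hand.
max_edge_per_hand = exploitability_bb_per_100 * big_blind_usd / 100
print(max_edge_per_hand)  # 1.0 -> at most $1/hand, i.e. $100 per 100 hands
```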

Chess is different: the game tree is way bigger, you can't just lump possible moves together, some tactical lines only reward the player 7+ moves in the future, etc. No-limit hold'em has a lot of leaf nodes (all-ins & folds) and a tree depth of about 7 on average or so. It's crazy how much more complex chess is. Think of how, to this day, we don't even know what 'the best' opening is; there are a few hundred perfectly playable openings (as ranked by AI), each leading you down a completely different game path.
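The size gap can be eyeballed with textbook figures (these numbers are my own illustration, not from the comment): chess averages roughly 35 legal moves over games of ~80 plies, which is where Shannon's classic ~10^123 game-tree estimate comes from, while a hold'em street tree with a handful of abstract actions and depth ~7 is minuscule by comparison.

```python
import math

# Order-of-magnitude node counts, in log10.
chess_nodes_log10 = 80 * math.log10(35)   # Shannon-style estimate: ~10^123
holdem_nodes_log10 = 7 * math.log10(5)    # ~5 abstract actions, depth ~7: < 10^5
```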

1

flapflip9 t1_ivkuf7s wrote

Wouldn't the game tree get too large to store in GPU memory for poker? Unless, of course, you start making abstractions and compromises to fit within hardware constraints. I used to rely on PioSolver (a commercially available Nash equilibrium solver) a lot in my younger years; a shallow-stacked post-flop tree could maybe be squeezed into 64 GB of RAM and computed in a few seconds. But the entirety of the game tree, with preflop gameplay included... my superstitious peasant brain is telling me you can't trim the model down to a small enough size to make it work. On the flip side, given how crazy well these large NLP/CV models are doing, learning poker should be a breeze.
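As a sanity check on the memory worry, here's a back-of-envelope estimate. Every number is an illustrative guess (not a measurement from PioSolver): even a heavily abstracted tree with hundreds of millions of nodes, each storing a few strategy/regret values, lands in the tens of gigabytes.

```python
# Hypothetical abstracted-tree size.
nodes = 500_000_000      # node count after action/hand abstraction (a guess)
bytes_per_node = 64      # say, a handful of strategy + regret floats per node

total_gb = nodes * bytes_per_node / 1e9
print(total_gb)  # 32.0 -> already at the edge of typical GPU memory
```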

1

flapflip9 t1_ivj6h5q wrote

The academic work on poker AI is pretty extensive, so definitely start there. Heads-up no-limit Nash equilibrium solvers have been available for years; I've even seen 6-max Nash equilibrium solvers. The only limiting factor seems to be the need for discretization: for the game tree not to branch out uncontrollably, decisions get reduced to things like fold/call/raise 20%/raise 50%/all-in. That's perfectly fine from an academic standpoint, just not directly translatable to human play, where bet sizes are continuous.
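The discretization above can be sketched in a few lines: a continuous raise size gets snapped to the nearest abstract action. The bucket list mirrors the comment's example (20% pot, 50% pot, all-in); the nearest-bucket mapping rule itself is a hypothetical illustration.

```python
# Abstract raise sizes as fractions of pot; 1.0 stands in for all-in here.
ABSTRACT_RAISES = [0.2, 0.5, 1.0]

def nearest_abstract_raise(raise_fraction):
    """Map a real raise (as a fraction of pot) to the closest abstract bucket."""
    return min(ABSTRACT_RAISES, key=lambda b: abs(b - raise_fraction))

print(nearest_abstract_raise(0.3))  # 0.2 -> a 30%-pot raise is treated as 20%
```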

All this to say that the challenging part of such a project isn't the framework/AI part, but rather the efficient implementation of the decision tree, hand-range bucketing, hand ranking, etc. If the output of such work is just another heads-up equilibrium solver, it isn't particularly novel, unfortunately. Even if it outperformed its peers/prior published work, the improvement would be <1%, as today's Nash solvers are all pretty close to optimal.
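To illustrate the hand-range bucketing mentioned above: hands with similar equity get lumped into one bucket so the tree stays tractable. The equities and bucket count below are made up for illustration; real solvers use far finer abstractions.

```python
def bucket_by_equity(hand_equities, n_buckets=3):
    """Assign each hand an integer bucket based on its equity in [0, 1)."""
    return {hand: min(int(eq * n_buckets), n_buckets - 1)
            for hand, eq in hand_equities.items()}

# Hypothetical pre-computed equities for a few hands.
buckets = bucket_by_equity({"AA": 0.85, "72o": 0.32, "KQs": 0.60})
```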

11