Since Last Time
Since my last Gin Rummy post, I mostly just ended up fixing a few bugs and tweaking a few things. During the dry run tournament in class, a bunch of the AIs ran into a problem where both players would pick up a card and immediately discard it, leading to an infinite loop. As it turns out, this was primarily due to a single issue common to many of the AIs. I'll explain it here using the logic in my code:
When deciding whether to draw from the discard pile, I would first check whether the top card completed (or added to) any sets, and if not, whether it completed (or added to) any runs. The problem: before the set check, I wasn't removing the cards already committed to runs. For example, if the top of the discard pile was the 3 of clubs, and my hand held a run of the 3, 4, and 5 of hearts plus the 3 of spades, the AI would pick up from the discard pile, even though that 3 of hearts shouldn't count towards making a set since it was already part of a run. The fix was simple: check runs first, and if the card doesn't complete (or add to) a run, remove all run cards from consideration before checking whether it completes (or adds to) a set.
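In rough Python terms, the fixed check looks something like this. This is a simplified sketch, not my actual code: cards are modeled as (rank, suit) tuples, and the helper names are illustrative. The `find_runs` here only handles non-overlapping melds, which is enough to show the bug and the fix.

```python
# A card is a (rank, suit) tuple, e.g. (3, 'H') for the 3 of hearts.

def find_runs(hand):
    """Return every card that sits in a run of 3+ consecutive ranks in one suit."""
    run_cards = []
    for suit in {s for _, s in hand}:
        ranks = sorted(r for r, s in hand if s == suit)
        streak = [ranks[0]]
        for r in ranks[1:]:
            if r == streak[-1] + 1:
                streak.append(r)
            else:
                if len(streak) >= 3:
                    run_cards += [(x, suit) for x in streak]
                streak = [r]
        if len(streak) >= 3:
            run_cards += [(x, suit) for x in streak]
    return run_cards

def completes_run(card, hand):
    return card in find_runs(hand + [card])

def completes_set(card, hand):
    # Two free cards of the same rank plus the drawn card make a set of three.
    return sum(1 for r, _ in hand if r == card[0]) >= 2

def should_draw_discard(top_card, hand):
    # First: does the card complete or extend a run?
    if completes_run(top_card, hand):
        return True
    # The fix: strip cards already committed to runs before the set check,
    # so a card melded into a run can't also be counted towards a set.
    free_cards = [c for c in hand if c not in find_runs(hand)]
    return completes_set(top_card, free_cards)

# The buggy scenario from the post: 3-4-5 of hearts is a run, so the
# 3 of spades is the only free 3, and the 3 of clubs should be declined.
hand = [(3, 'H'), (4, 'H'), (5, 'H'), (3, 'S')]
print(should_draw_discard((3, 'C'), hand))  # False after the fix
```

Without stripping the run cards, the set check would see both the 3 of hearts and the 3 of spades and wrongly return True, which is exactly the loop-inducing behavior from the dry run.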
I also fixed a bug where the game would crash if you got Big Gin (when the deadwood of all 10 cards in your hand plus the card you just picked up equals 0). This was a bug in the game code, which required you to discard a card even if you had Big Gin. But if you have Big Gin, your 11-card hand has to contain at least one meld of four cards, so there's always at least one safe discard. Therefore, when a Big Gin situation occurred, I would just go through my hand and remove the card that kept my deadwood total at 0.
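Sketched in Python, the Big Gin discard pick looks roughly like this. The `deadwood` function here is a deliberately simplified stand-in (it assumes melds don't overlap, which holds for the example hand); my real helpers are more involved.

```python
from collections import Counter

def deadwood(hand):
    """Simplified deadwood: a card counts as melded if its rank appears
    3+ times (a set) or it sits in a 3+ run of consecutive ranks in its
    suit. Real Gin Rummy scoring must search overlapping meld choices."""
    rank_counts = Counter(r for r, _ in hand)
    melded = {(r, s) for r, s in hand if rank_counts[r] >= 3}
    for suit in {s for _, s in hand}:
        ranks = sorted(r for r, s in hand if s == suit)
        streak = []
        for r in ranks + [0]:  # 0 is a sentinel that flushes the last streak
            if streak and r == streak[-1] + 1:
                streak.append(r)
            else:
                if len(streak) >= 3:
                    melded |= {(x, suit) for x in streak}
                streak = [r]
    # Face cards (J=11, Q=12, K=13) count 10 points each.
    return sum(min(r, 10) for r, s in hand if (r, s) not in melded)

def big_gin_discard(hand):
    """From an 11-card Big Gin hand, find a discard that keeps the
    remaining 10 cards at zero deadwood."""
    for i, card in enumerate(hand):
        rest = hand[:i] + hand[i + 1:]
        if deadwood(rest) == 0:
            return card
    return None

# Example: a set of four 7s, a 3-4-5-6 heart run, and a 9-10-J spade run.
hand = [(7, 'H'), (7, 'S'), (7, 'C'), (7, 'D'),
        (3, 'H'), (4, 'H'), (5, 'H'), (6, 'H'),
        (9, 'S'), (10, 'S'), (11, 'S')]
print(big_gin_discard(hand))  # (7, 'H') -- any card of the 4-card set is safe
```

The linear scan works because 11 fully melded cards can't be partitioned into melds of exactly 3 (3+3+3 = 9 and 3+3+3+3 = 12), so some meld has four or more cards and dropping one of its cards leaves deadwood at 0.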
During the actual in-class tournament, my AI performed much worse than I expected. In the double elimination tournament, it lost both of its sets of games, both by wide margins. In my own testing, I had been running sets of around 50 games, but in class the AIs played sets of 8 games. I'll admit it: my AI is bad at small sets of games. However, there was another thing holding me back: my triangle strategy. This strategy, as I discussed in my last post, is basically a method of trying to acquire groups of three cards that have a high probability of turning into a meld. After I performed so poorly in the tournament, I decided to test whether my triangle strategy was the source of my problems. I tested this thoroughly by running 8 sets of 1000 games of my AI versus the AI that won the in-class tournament (Eric's AI). In 4 of the sets I had the triangle strategy enabled, while in the other 4 I disabled it. Just to be sure, I ran an additional 2 sets of 1200 games without the triangle strategy. As you can see, without the triangle strategy my AI was much more consistent and had an overall better score, with a higher winning percentage than my opponent in every set of games.
After testing large sets of games and beating essentially the best AI in the class, I was curious how the number of games per set affected the results. Sure, my AI beat Eric's ~52.42% of the time over sets of 1000 games, but what about smaller sets? So I started by testing six sets of 8 games, then four sets each of 16, 32, 64, 128, 256, and 512 games, again using the version of my AI without the triangle strategy. As you can see, Eric's AI is highly skilled at small sets of games, beating mine 3 out of 4 times for almost every set size up to 512. Of course, smaller set sizes leave more room for error and outliers, since there were a few anomalies where I did really well (at smaller set sizes). But at around 500 games, my AI starts taking the lead and winning the majority of games (by a small percentage, sure, but it does so consistently). In all my tests of 500 or more games, my AI (without triangles) beat Eric's.
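This variance effect is easy to reproduce without the actual game engine. If each game is modeled as a biased coin flip at my measured ~52.4% per-game win rate (an assumption, since real games aren't independent), you can see how often that edge actually wins a set of a given size:

```python
import random

# Stand-in for the real harness: instead of playing full Gin Rummy games,
# model each game as a biased coin flip at the stronger AI's win rate.
P_WIN = 0.5242  # my measured per-game win rate vs Eric's AI

def play_set(n_games, rng):
    """Return True if the stronger AI wins the majority of a set."""
    wins = sum(rng.random() < P_WIN for _ in range(n_games))
    return wins > n_games / 2

rng = random.Random(0)
for set_size in [8, 16, 32, 64, 128, 256, 512, 1000]:
    trials = 2000
    set_wins = sum(play_set(set_size, rng) for _ in range(trials))
    print(f"{set_size:5d} games/set: wins the set {set_wins / trials:.0%} of the time")
```

With only a ~2.4-point edge per game, winning a set of 8 is close to a coin flip, while a set of 1000 games goes to the stronger AI the vast majority of the time, which matches both the in-class result and my larger tests.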
So what does this all mean?
- I’m not that sure
- Even without the triangle strategy, my AI would've performed poorly in the tournament, since it doesn't start doing well (i.e. beating the opponent) until a set size of ~500 games
- Eric’s AI is amazing
What went wrong
Ultimately, what went “wrong” was my weighting of strategies. I placed too much stock in the triangle strategy being a golden strat. I thought my discard heuristics were fairly good as well, but maybe they weren't as strong as I originally thought, at least over small sets of games; I should've done more varied testing on those. I primarily tested my AI against three others, so maybe that was part of the problem. Overall, my AI was bad at small sets of games, and when you're actually playing Gin Rummy IRL, you're probably not going to play hundreds of games. So my AI performed poorly at a real-life sample size.
What went right
I had some fun working on this project. I wrote a bunch of helper functions, and I thought that writing the logic for the heuristics (high card strat, next player strat, and possible cards strat) and trying to figure out the weighting of the heuristics was an engaging task. In the end, my AI does quite well with large sample sizes, so I’m happy about that, even if I disgraced myself in class with my AI’s poor performance.
This was an interesting project. I’m sure there are much deeper ways you could take it if you had more than 3 weeks, but for our time constraints, it felt a bit limited at times. Anyway, I’m excited for our next assignment, where we will make an AI for Empire, a game similar to Civilization I.