Empire AI Part II – Postmortem

So, I’ve now created my AI for Part II of the Empire project.


So, here’s what I did since last time, in terms of refactoring.

In terms of architecture, I wanted to move towards a more structured approach where a “UnitMind” is assigned to each unit. Basically, I have a “UnitHiveMind” class, which holds a map of <UINT, UnitMind*> key-value pairs, so that each unit’s UnitID is mapped to a UnitMind. This Mind handles the Unit’s behavior (via states), so that the Unit code isn’t all jammed into the primary DLL cpp file. Implementing this made my code a lot cleaner.
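As a minimal sketch, the structure looks something like this (the class and function names match mine, but the bodies here are simplified stand-ins rather than my actual DLL code):

```cpp
#include <map>

typedef unsigned int UINT;

// Simplified stand-in for the per-unit state machine.
enum UnitState { FOLLOW_PATTERN, FORMATION, SEEK_CITY };

class UnitMind {
public:
    UnitMind() : state(FOLLOW_PATTERN) {}
    UnitState state;
    void updateDirection() { /* per-state movement logic lives here */ }
};

// Owns one UnitMind per unit, keyed by the unit's UnitID.
class UnitHiveMind {
public:
    ~UnitHiveMind() {
        for (std::map<UINT, UnitMind*>::iterator it = minds.begin();
             it != minds.end(); ++it)
            delete it->second;
    }

    // Lazily creates a Mind the first time a UnitID is seen.
    UnitMind* getMind(UINT unitID) {
        std::map<UINT, UnitMind*>::iterator it = minds.find(unitID);
        if (it != minds.end()) return it->second;
        UnitMind* mind = new UnitMind();
        minds[unitID] = mind;
        return mind;
    }

    // The hive mind just fans the update out to every unit's Mind.
    void updateDirection() {
        for (std::map<UINT, UnitMind*>::iterator it = minds.begin();
             it != minds.end(); ++it)
            it->second->updateDirection();
    }

private:
    std::map<UINT, UnitMind*> minds;
};
```

The nice part is that the DLL code only ever talks to the UnitHiveMind; everything per-unit lives behind getMind().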

the meat of the unit ordering code (in DLLCode.cpp) after implementing the Mind system

The primary unit logic now occurs in each UnitMind’s updateDirection() function, which is called from UnitHiveMind’s updateDirection() function.

UnitHiveMind’s updateDirection() function

UnitMind states


The FOLLOW_PATTERN state is the pattern-following logic from part I of the Empire project, and is primarily used for exploring the map. FORMATION is used for pathing when a unit is part of a formation (more on that later). And lastly, SEEK_CITY is used to path towards cities that my AI does not currently own, when they are discovered.

I turned my PatternPathing class from part I into a static class. Previously, it had basically been the manager of unit movement, and I didn’t need it to do that anymore now that I had a real unit manager, UnitHiveMind. So, making it a static class and removing all of the unit management code really cleaned it up and made it into a great utility class.

the crux of the FORMATION state

Formations are pretty simple. UnitMinds default to the FOLLOW_PATTERN state, but calling setFormationOffset() switches the UnitMind’s state to FORMATION. setFormationOffset() takes in an x offset, a y offset, and a leader UnitMind* (a pointer to the UnitMind that the current Unit will follow). The Unit then paths towards the leader UnitMind’s position plus the offset. Every time a Unit is created, there is a random chance it will be assigned to follow another random UnitMind.
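The pathing inside the FORMATION state boils down to something like this (a hypothetical, simplified version of the math; Pos and the single-step movement are stand-ins for the real game types):

```cpp
// The follower's target is the leader's position plus its stored
// offset, and it steps one cell per turn on each axis toward that
// target. (Simplified sketch, not the actual state code.)
struct Pos { int x, y; };

struct FormationFollower {
    int offsetX, offsetY;

    // Returns the single-step move (each component in {-1, 0, 1}).
    Pos stepToward(Pos self, Pos leader) const {
        int targetX = leader.x + offsetX;
        int targetY = leader.y + offsetY;
        Pos step = {0, 0};
        if (self.x < targetX) step.x = 1; else if (self.x > targetX) step.x = -1;
        if (self.y < targetY) step.y = 1; else if (self.y > targetY) step.y = -1;
        return step;  // (0, 0) means the unit is already in its slot
    }
};
```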

City Seek is fairly simple. If a city I haven’t discovered yet becomes visible to me, I assign all of my units to seek it using setCityAssignment(), which changes their state to SEEK_CITY. Unfortunately, this only works for cities that are owned by another player, not neutral cities. The units then follow a simple pathing function to reach the city (adjusting their x and y positions until they match the city’s x and y positions).

Diving into batch scripting

After discovering some of the exploits available within the given framework (I did not incorporate any of these exploits into my final DLL build), I wanted to see if there was some way I could incorporate batch scripts into my DLL to hack the game parameters. I found out there weren’t any real exploits I could pull off using batch scripts (since all the loading of game-related data is done before the DLLs are initialized), so I decided to create some “fun” “exploits.” I discovered that passing a string into the system() function executes that string as if it were a command-line input. With that knowledge in hand, I quickly implemented a way to open a YouTube clip in Firefox and Chrome from my DLL, an easy thing to do using a Windows command-line input.
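The system() trick itself is tiny. Here’s a sketch of the idea, with the browser name and URL as placeholders (not the actual clip I used):

```cpp
#include <cstdlib>
#include <string>

// Builds the command line handed to system(). On Windows, system()
// passes the string to the command interpreter, and the "start"
// builtin launches a program (here, a browser) with the given
// argument. Browser name and URL below are placeholders.
std::string buildOpenCommand(const std::string& browser, const std::string& url) {
    return "start " + browser + " \"" + url + "\"";
}

// In the DLL it gets fired with something like:
//   std::system(buildOpenCommand("chrome", "https://www.youtube.com/").c_str());
```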


But I wanted to go one step beyond: append a section of text to the stats.txt file that is generated after the Empire application closes. Seemed simple enough at first: I would just have to use ofstream and do a simple append to the stats file. But then I realized the problem: the stats.txt file is generated/overwritten after all of the AI player DLLs have closed. A dead end, seemingly. I needed a way to delay my overwrite of the file until after all the DLLs and the main Empire application had terminated. After a deep dive into batch scripting (aka searching StackOverflow), I formulated this plan:

  1. During cleanup of my DLL, call a batch script that launches a new command-line window (minimized for stealth using /MIN and @echo off)
  2. In this new window, call a second batch script that PINGs an IP address that I know will not answer, with a certain timeout delay
  3. After the PING had timed out (and, hopefully, the main Empire app had exited), perform the overwrite of the file

This way, the append is handled by a series of batch scripts in a new command-line window after termination of my DLL and the main application.

But, I ran into a new problem: as it turns out, upon the end of a game, the main Empire application cleans up the DLLs and then waits for a user input (using a system(“pause”) call). After the user input, the main application finally terminates. Unfortunately, my append would happen a few seconds after the DLL cleanup, but before the main application terminated (at least, unless the user pressed a key and exited the application very quickly). Therefore, my append was occurring before the main application terminated, my changes would be overwritten, and my efforts would all be for naught.
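For reference, the DLL-side append itself was never the hard part. A sketch of the naive ofstream version (not my exact code), which works mechanically but gets clobbered when stats.txt is regenerated:

```cpp
#include <fstream>
#include <string>

// The naive approach: during DLL cleanup, open stats.txt in append
// mode and tack a line on. The problem is the main application
// regenerates stats.txt afterwards, overwriting this change.
void appendToStats(const std::string& line) {
    std::ofstream out("stats.txt", std::ios::app);
    out << line << '\n';
}
```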

The solution to this problem was clear: continually re-append the file for an extended period of time after the DLL terminated, so that, hopefully, the main application would terminate before my batch scripts finished. But the whole reason I wanted the append to happen only a few seconds after DLL cleanup was to maintain stealth. If there was a random command-line window minimized at the bottom of the screen for an extended period of time, it might go unnoticed, but it also might not.

Fortunately, I found a solution that uses a short VBScript to run a batch script in a hidden manner.

the VBScript
the batch script, which calls the VBScript to run a second batch script invisibly

So, the pipeline now looked like this:

  1. The initial batch script is called from my DLL using a system() call
  2. This initial batch script then calls the second batch script (using the aforementioned VBScript), which makes it run in a hidden command-line window
  3. The second batch script runs a for loop 50 times
  4. Within the for loop, it first PINGs for 1 second, then appends to both the stats.txt file and the DLLControl.txt file (which is used by the Empire application to specify file paths for the AI player DLLs)

So, it essentially appends to both the stats.txt and DLLControl.txt files 50 times in 50 seconds. This delay would hopefully cover the amount of time between the game ending and exiting the main application. Also, since this is done in its own command-line window, it runs independently of the main application, and thus survives termination of the DLLs and the main application.

the delay/append loop

For the final build of my DLL, I ended up making the following adjustments:

  1. Instead of opening the YouTube clip from my DLL, I opened it after the append loop in the batch file pictured above (thus, the clip opens 50 seconds after the game ends)
  2. Rather than re-appending to the text files for 50 seconds (resulting in text files with upwards of 500,000 lines), I opted to make a copy of the original text file, then repeatedly replace the original with that copy plus my appended text for 50 seconds (thus, the contents of my append only show up once, rather than being repeated 50 times)
  3. In addition, I used the copy command to replace the font of the Empire application with a font I stored in my DLL folder

So, this portion of my project didn’t have a lot to do with AI, but I had a lot of fun learning more about batch scripting.


Overall, my AI did fairly decently in the in-class tournament. I was actually a bit surprised. My system of unit formations ended up helping me out in the earlier rounds, as this, combined with how all of my units would seek a particular newly discovered city, made for a formidable force. But when we reached the Winner’s Finals match of the bracket, my AI was stranded near a few lakes, couldn’t find any other cities, and ended up losing that one handily. A similar thing happened in my Loser’s Bracket match, and so I was knocked out of the tournament, but not before getting pretty far in the bracket.

Overall, I enjoyed working on this project. I had fun refactoring my AI and making it more architecturally sound; this made it a lot easier to add additional functionality (such as the formation system) than it was in my exploration AI code. I probably spent more time on batch scripting than I should’ve, but this diversion has me excited to learn more about scripting/tools.

-wednesday-david hartman


Empire AI Part II – Planning

time for part II of making an AI to play Empire

the metagame is a very important aspect of Empire, as we all know. The scoring heavily favors an AI that can O P T I M I Z E  P U N I S H, in terms of the meta.

applying pressure

so here’s what I’m looking at, in terms of strategy: you’ve got armies, right? These things are fairly OP unless you’re on like a map comprised of small islands (always a possibility idk). Right now, for scoring, you get points for the following two things: 1) how many cities you are in control of at the end of the game and 2) how many units you have at the end of the game.

now, armies are cheap (they can be built in a very small number of turns). Therefore, if you spam them (and they don’t die), you’re already racking up those sweet, sweet points just for owning these cheap units that your cities can pump out. Not only will spamming them help you get some points by default, it’ll also give you more of a chance to conquer cities.

viability in the meta

but why are armies so top tier? Well, there’s a few different reasons. Firstly, armies are the only land unit O_o. Land units are typically your bread-and-butter scouting/attacking units in games similar to this (I’ve played one – or more – of the Civ games at least more THAN ONCE, so that counts for something, right?), at least in the opening stages of a match. So right off the bat, armies are, at the very least, going to be your go-to units early on. Secondly, the water units take eons to build 😦 (don’t quote me on that). Even when you do build them, you might not be able to move them anywhere useful if you’re not on a map with optimal waterways.


however, water units are good for beating up armies passing by, and maybe for hanging out inside cities to defend them. Pretty cool, I guess 🙂 ? (but remember, they take a while to build 😐 )

good spacing

but you might also want some formations, so that your armies aren’t alone. I’m probably going to look into some sort of formation strategy.

my plan is to have a formation system that lets you build a formation by specifying 1) how many slots are in the formation (the max number of units that can be in the formation besides the leader) and 2) where these slots are located in relation to the leader (the number of grid cells to offset them by). Then, the follower units will move only when they are out of alignment with the leader. Here are some example formations:
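since none of this is written yet, here’s a rough sketch of how I picture the slot spec in code (all names hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the planned formation spec: a formation is a
// list of slot offsets (in grid cells, relative to the leader), and
// follower units claim slots until the formation is full.
struct Slot { int dx, dy; bool taken; };

class Formation {
public:
    // e.g. a wedge behind the leader:
    //   addSlot(-1, -1); addSlot(1, -1); addSlot(0, -2);
    void addSlot(int dx, int dy) {
        Slot s = {dx, dy, false};
        slots.push_back(s);
    }

    // Returns the index of a free slot, or -1 if the formation is full.
    int claimSlot() {
        for (std::size_t i = 0; i < slots.size(); ++i)
            if (!slots[i].taken) { slots[i].taken = true; return (int)i; }
        return -1;
    }

    std::vector<Slot> slots;
};
```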


  1. spam armies
  2. move towards unclaimed cities or explore (move in random pattern) if no visible unclaimed cities
  3. defend claimed cities
  4. make formations of armies to go fight and claim other players’ cities/units (focus more on cities, but fight passing units)
  5. maybe make some water units

wait a minute

what happened with my original exploration AI from last time? Well, as I said in my last post, I implemented a pattern-based pathing system. It actually turned out pretty well, and while it didn’t cover the whole map in 200 turns, it got pretty close. I ended up using the North, North, North, East, Northeast, Northeast, Northeast pattern. Now that I’ve put some more thought into my strategy for part II, I don’t think I’ll need a Blackboard system, but we’ll see.

the original exploration AI

-wednesday-david hartman

Empire AI (Exploration) – Planning

And so, it is time for the final project for my Artificial Opponents class. For this project, the game is Empire, a precursor to Civilization with somewhat simpler rules.

This project is split into two parts:

  • For the first part, we are supposed to make an AI that can explore as much of the map as it can in a certain number of turns. This is to get us acquainted with the code of the implementation of Empire we are working with, and also to think of some basic strategies and architecture for our AI.
  • For the second part, we are supposed to expand on this exploration AI and actually make an AI to play against other opponents in a game of Empire (which consists of exploring the map, conquering cities, and fighting enemy units).

So, for my exploration AI, here’s what I have planned out:

  • A pattern-based pathing system: This is something I initially wanted to attempt, and now have implemented. I’m not sure if I will keep this when we go into the next stage of the project, but I thought it might be a good starting point. Honestly, right now there’s not too much discouraging a strategy of sending out a bunch of the weak, yet quick-to-produce army units (since they can travel over water, although there’s a chance they’ll drown).
    • Basically, this class holds a vector containing a pattern: a set of directions (each of which is one of the eight cardinal/intercardinal directions). Each unit loops through this vector and bases its movement off of this pattern.
    • However, I give each unit a different starting direction, and then base each subsequent movement in the pattern off of that starting direction. For example, if the unit’s starting direction is East, then East becomes that unit’s de facto “North,” and all directions thereafter are based on that.
    • However, I also have an option to make it so that anytime the unit moves, the direction it is heading becomes its new “North.”
    • Overall, I thought that this might be a useful system to explore a large amount of area with a small amount of units.
  • A Blackboard?: A blackboard implementation might be useful, for storing/having access to the following information:
    • Neutral cities: knowing where these cities are could be useful so that I could send a unit to conquer the city and then start producing units from its location.
    • Water: knowing whether or not a map is mostly water (and how close cities are to water) would be helpful for choosing which unit types to build.
    • Etc.
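The direction re-basing trick from the pattern system can be sketched as modular arithmetic over the eight directions (the numbering here is my own assumption for the sketch):

```cpp
// The eight cardinal/intercardinal directions, numbered clockwise
// from North (an assumed encoding for this sketch).
enum Direction { N = 0, NE, E, SE, S, SW, W, NW };

// If a unit's starting direction is East, East becomes its de facto
// "North", so every pattern step is rotated by the same amount:
// modular addition over the eight directions.
Direction rebase(Direction patternStep, Direction unitsNorth) {
    return (Direction)((patternStep + unitsNorth) % 8);
}
```

So a pattern step of North comes out as East for that unit, and every other step rotates with it.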

The reason I implemented the pattern-based system in the first place was because I thought that some sort of system where I sent out units in all directions (according to some sort of movement pattern) would be a fairly efficient way to cover the whole map.

Here are some visuals of some different patterns: as you can see, a pattern of [North] covers a lot less ground than the more spiral-like pattern [North, North, North, East, Northeast, Northeast, Northeast]. So, my final pattern of choice will probably be something like the spiral pattern. Going diagonally also reveals more tiles than travelling straight, so that’s also something I’ll bear in mind.

Overall, I might just stick with my pattern-based system for now. I might augment it with a Blackboard, or at the very least set up the code for one. Going forward, I’ll definitely implement a Blackboard and maybe the A* pathfinding algorithm.

-“David Hartman”


Gin Rummy AI – Postmortem

Since Last Time

Since my last Gin Rummy post, I mostly just ended up fixing a few bugs and tweaking a few things. During the dry run tournament in class, a bunch of the AIs ran into a problem where both AIs would pick up a card and immediately discard it, leading to an infinite loop. As it turns out, this was primarily due to a single issue (common to many of the AIs). I’ll explain it here using the logic in my code:

When I was checking to see if I should draw from the discard pile, I would check whether that card completed (or added to) any runs, and whether it completed (or added to) any sets. The problem was that, before the set check, I wasn’t removing cards that were already part of runs from consideration. This meant that if, for example, the top of the discard pile was a 3 of clubs, and in my hand I had a run of the 3, 4, and 5 of hearts plus the 3 of spades, the AI would pick up from the discard pile, even though that 3 of hearts shouldn’t count towards making a set, since it was already part of a run. So, it was a simple fix: after checking to see if a card completes (or adds to) a run, if it doesn’t, I now remove all runs from my hand before checking to see if the card completes (or adds to) a set.
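In simplified form, the fixed set check looks something like this (toy card structs and a toy stand-in for my real meld helpers):

```cpp
#include <cstddef>
#include <vector>

// Simplified cards (value 1-13; suit: 0 = hearts, 1 = spades, 2 = clubs).
struct Card { int value, suit; };
typedef std::vector<Card> Hand;

bool sameCard(const Card& a, const Card& b) {
    return a.value == b.value && a.suit == b.suit;
}

// The fix, in miniature: before asking whether `pickup` completes a
// set, skip every hand card that is already locked into a run
// (runCards comes from the run check that happens first), so run
// cards can't be double-counted toward a set.
bool completesSet(const Hand& hand, const Hand& runCards, const Card& pickup) {
    int freeSameValue = 0;
    for (std::size_t i = 0; i < hand.size(); ++i) {
        bool inRun = false;
        for (std::size_t j = 0; j < runCards.size(); ++j)
            if (sameCard(hand[i], runCards[j])) { inRun = true; break; }
        if (!inRun && hand[i].value == pickup.value) ++freeSameValue;
    }
    return freeSameValue >= 2;  // pickup + two free same-value cards
}
```

With the 3-4-5 of hearts marked as a run, the 3 of clubs no longer looks like it completes a set; with run removal skipped, it (wrongly) does.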

I also fixed a bug where the game would crash if you got Big Gin (where the deadwood of all 10 cards in your hand plus the card you just picked up equals 0). This was a bug in the game code: it required you to discard a card even if you had Big Gin. Luckily, if you have Big Gin, you must have at least one meld of four or more cards (three 3-card melds only cover 9 of the 11 cards), so there’s always at least one safe discard. Therefore, if a Big Gin situation occurred, I would just go through my hand and discard the card that kept my deadwood total at 0.
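A toy version of that safe-discard pick, given the melds as index lists (hypothetical types, not my actual hand representation):

```cpp
#include <cstddef>
#include <vector>

// Melds as lists of hand indices. With Big Gin, all 11 cards are
// melded, and since a meld needs at least 3 cards, three melds only
// cover 9 cards, so some meld must hold 4 or more. Discarding from
// that meld leaves a legal 3+-card meld and keeps deadwood at 0.
typedef std::vector<int> Meld;

// Returns the index of a safe discard, or -1 if no big meld exists.
int safeDiscard(const std::vector<Meld>& melds) {
    for (std::size_t i = 0; i < melds.size(); ++i)
        if (melds[i].size() >= 4)
            return melds[i].back();  // for a run stored in order, the back
                                     // is an end card, so the run survives
    return -1;
}
```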


During the actual in-class tournament, my AI performed much worse than I expected it to. In the double elimination tournament, my AI lost both sets of games, both by wide margins. In my tests, I had been testing like 50 or so games; in class, the AIs played sets of 8 games. I’ll admit it: my AI is bad when playing small sets of games. However, there was another thing holding me back: my triangle strategy. This strategy, as I discussed in my last post, is basically a method to try and acquire groups of three cards which have a high probability of turning into a meld. After I performed so poorly in the tournament, I decided to test whether my triangle strategy was the source of my problems. I tested this thoroughly by running 8 sets of 1000 games for my AI versus the AI that won the in-class tournament (Eric’s AI). In 4 of the sets, I had the triangle strategy enabled, while in the other 4 sets, I disabled it. Just to be sure, I ran an additional 2 sets of 1200 games without the triangle strategy. As you can see, without the triangle strategy, my AI was much more consistent and had an overall better score, with a higher winning percentage than my opponent for each set of games.



After testing large sets of games and beating essentially the best AI in the class, I was curious to see how the number of games per set affected the results. Sure, my AI beat Eric’s ~52.42% of the time after sets of 1000 games, but what about smaller sets of games? So, I started out by testing six sets of 8 games, then four sets each of 16, 32, 64, 128, 256, and 512 games. Again, this was using the version of my AI without the triangle strategy. As you can see, Eric’s AI is highly skilled at small sets of games, beating mine 3 out of 4 times at almost every set size up until 512. Of course, with the smaller set sizes there is more chance for error and outliers, since there were a few anomalies where I did really well. But at around 500 games, my AI starts taking the lead and winning the majority of games (by a small percentage, sure, but it does so consistently). In all my tests of 500 or more games, my AI (without triangles) beat Eric’s.



So what does this all mean?

  • I’m not that sure
  • Even without the triangle strategy, my AI would’ve performed poorly in the tournament, as it does not start doing well (aka better than the opponent) until a set size of ~500 games
  • Eric’s AI is amazing

What went wrong

Ultimately, what went “wrong” was my weighting of strategies. I placed too much stock in the triangle strategy being a golden strat. I thought that my discard heuristics were fairly good as well, but maybe they weren’t as strong as I originally thought, at least for small sets of games. I should’ve done some more varied testing on those. I primarily tested my AI against three others, so maybe that was part of the problem. Overall, my AI was bad at small sets of games, and when you’re actually playing Gin Rummy IRL, you’re probably not going to be playing a ton of sets. So, my AI performed poorly with a real-life sample size.

What went right

I had some fun working on this project. I wrote a bunch of helper functions, and I thought that writing the logic for the heuristics (high card strat, next player strat, and possible cards strat) and trying to figure out the weighting of the heuristics was an engaging task. In the end, my AI does quite well with large sample sizes, so I’m happy about that, even if I disgraced myself in class with my AI’s poor performance.

This was an interesting project. I’m sure there are much deeper ways you could take it if you had more than 3 weeks, but for our time constraints, it felt a bit limited at times. Anyway, I’m excited for our next assignment, where we will make an AI for Empire, a game similar to Civilization I.

-“David Hartman”


Gin Rummy AI – Implementation

Gin Rummy is a game. Or is it?

So, anyway, here’s how my initial implementation of my Gin Rummy AI turned out.

General Approach

So my general approach was to do a few things:


  • keep in mind that the other AI players will be watching your moves – when picking up a card, don’t choose it unless you are quite certain it will be helpful, and when discarding a card, don’t be predictable
  • knock as fast as possible
  • keep track of which cards are in play


  • t r i a n g l e s – a triangle is a group of three cards that has a high chance of becoming a meld. It is comprised of:
    • 1st card: an initial card
    • 2nd card: a card which is the same value as the initial card
    • 3rd card: a card which is the same suit as the 1st card and is +1 or -1 in value in comparison to the initial card
  • middle cards have the best chance of either being part of a meld or adding to a meld
  • summary: only draw a card from the discard pile if it completes a meld, completes a triangle, or is a middle card. Otherwise, draw from the deck
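In code, the triangle test is tiny. A sketch with simplified cards (values 1–13, suits 0–3), checking the three roles in a fixed order for clarity (a real check would want to try the cards in any order):

```cpp
// Simplified cards for the sketch: value 1-13, suit 0-3.
struct Card { int value, suit; };

// a = initial card, b = same-value card, c = suit-neighbor card.
bool isTriangle(const Card& a, const Card& b, const Card& c) {
    bool sameValue = (b.value == a.value && b.suit != a.suit);
    bool suitNeighbor = (c.suit == a.suit &&
                         (c.value == a.value + 1 || c.value == a.value - 1));
    return sameValue && suitNeighbor;
}
```

A triangle is one card away from a meld in two directions: a third same-value card completes a set, and the next suit-neighbor completes a run.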


  • keep high cards for the first one or two turns, then get rid of them ASAP
  • this is the section that needs the most tweaking. I implemented a few different ways to choose the card to discard, but I’m not quite sure when to use each. Here are the strategies:
    • The aforementioned high-card strat: this is only active for the first one or two turns, so I’m not too concerned about this one.
    • Possible cards strat: for this strategy, I go through all the cards in my hand and check, for each card, how many partial melds they are a part of (2-card runs and sets), then check to see if the cards needed to complete those incomplete melds are in play. I calculate a score for each and discard the one with the lowest score (the lowest potential, in a sense).
    • Next player strat: for this strategy, I keep track of every card that the next player (the player whose turn follows mine) has picked up and discarded. Then, I try to discard the card that will help this opponent the least, since their turn follows mine and they have the opportunity to pick up my discard.
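A toy distillation of the possible-cards scoring (it only counts partial melds within the hand; the check against which completing cards are still in play is omitted here):

```cpp
#include <cstddef>
#include <vector>

// A card's potential is how many other cards in hand it forms a
// partial meld with: same value (partial set), or same suit within
// one rank (partial run). Discard the card with the lowest score.
struct Card { int value, suit; };

int potential(const std::vector<Card>& hand, std::size_t idx) {
    int score = 0;
    for (std::size_t i = 0; i < hand.size(); ++i) {
        if (i == idx) continue;
        const Card& a = hand[idx];
        const Card& b = hand[i];
        if (b.value == a.value) ++score;                                // partial set
        else if (b.suit == a.suit && (b.value == a.value + 1 ||
                                      b.value == a.value - 1)) ++score; // partial run
    }
    return score;
}
```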

Classes Used

GinAI – This class contains all my logic for discarding a card, drawing a card, keeping track of the next player’s discards and draws, etc. Check out some of the important functions:

  • card tracker
    • adjustCardStatus(Card card, bool isInPlay) – I have a few maps that I use to keep track of every card in the deck and whether or not it is in play. Obviously I can’t be 100% certain about every card, but I can at least know the status of certain cards: cards buried in the discard pile, cards discarded by players, and cards drawn by players (from the discard pile). Since I’m using maps, all I have to do is call find() on the maps and pass in the card suit (to one of the maps) and the card value (to the other map) and then change a bool.
  • drawing/discarding functions
    • chooseCardToDiscard() – This implements the discarding logic/”algorithms” I discussed above.
    • checkIfCompletesMeld(Card card) – Checks if the passed-in card completes a meld or adds onto an already-existing meld.
    • checkIfCompletesTriangle(Card card) – Checks if the passed-in card completes a “triangle” (explained above).
  • Checking melds
    • getAmountOfSameValue(Hand hand, Card card) – Checks to see if the passed-in card completes or adds onto a set.
    • getAmountInRun(Hand hand, Card card) – Checks to see if the passed-in card completes or adds onto a run.
    • getSet(Hand hand, Card card) – Returns the largest possible set in your hand (complete or incomplete) containing the passed-in card.
    • getRun(Hand hand, Card card) – Returns the longest possible run in your hand (complete or incomplete) containing the passed-in card.
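A cut-down sketch of the card-tracker idea, collapsed into a single map for brevity (as described above, my real version splits this across a couple of maps, but the shape is the same: find() the entry and flip a bool):

```cpp
#include <map>
#include <utility>

// One map keyed by (suit, value); true means the card might still be
// in play. Unknown cards are assumed in play.
class CardTracker {
public:
    void adjustCardStatus(int suit, int value, bool isInPlay) {
        status[std::make_pair(suit, value)] = isInPlay;
    }

    bool isInPlay(int suit, int value) const {
        std::map<std::pair<int, int>, bool>::const_iterator it =
            status.find(std::make_pair(suit, value));
        return it == status.end() ? true : it->second;
    }

private:
    std::map<std::pair<int, int>, bool> status;
};
```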

CardCounter – So I set this up when I was trying to see if Blackjack-style card counting would have any merit in Gin Rummy. I’m pretty sure that this style doesn’t work for Gin Rummy, and right now this class is not incorporated into my AI at all. I might resurrect it to try out a different style of card counting, though.

Current Assessment

I tested it a few times:

  • First test: when discarding, 30% high card strat, 30% possible cards strat, 30% next player strat
  • Second test: when discarding, 50% high card strat, 50% possible cards strat

I didn’t test it too thoroughly, but at the moment, the second version gave me the best results: a 33.3% win rate rather than a 15% win rate.

I feel like I’ve set up a lot of systems, and now it all comes down to tweaking their usages and adding logic to make the best use of the system. Right now, my AI does decently, but I feel like it could do better. Hopefully I’ll be able to get a better win rate by the time the final submission is due.

-”David Hartman”






Gin Rummy AI – Planning

So, it is time for the next project for my Artificial Opponents class. This time, the game is Gin Rummy. Time to make a plan, I guess.

First off, Gin Rummy is a card game. In other words, it is a “game” you play with “cards.” Like many a card game, you cannot see the opponent’s hand. More importantly, the hand size (10 cards) is quite large – I’ll explain why this is important a bit later.

So I started doing some research to get some ideas of where to go with a card game AI. That led me to a few different places:

  • Heuristics – One way to play is to assume that other players will attempt to make the most optimal choices, and then try to gain information and make choices based on this fact. For example, in Gin Rummy, you are trying to minimize your deadwood. So, if a player discards a card (and they are playing optimally), then one could theorize that this might be one of their higher cards. Ultimately, this is a strategy that can be combined with others, as I’ll talk about later on.
  • Monte Carlo methods – In the context of game AI, Monte Carlo methods usually refer to using a tree search combined with simulating a bunch of possible games (and their outcomes) to pick the optimal move in a game. I’m not sure how useful that would be here, since (as I’ll mention further on) Gin Rummy is a game where the players have a lot of incomplete information, as opposed to a game like chess or tic-tac-toe (where both players have complete information about the board and each other).
  • Minimax – Minimax is a method to choose the option that will minimize the loss of the player in a worst-case scenario. I counted this one out at first, since (like the Monte Carlo methods) it seems like you would need more information than Gin Rummy provides to the players. However, I then stumbled across this Stack Overflow post, which seems to suggest that, in a card game with incomplete information, you can build up and gather knowledge as the game progresses and then start using minimax.

As I previously mentioned, what we’re working with is incomplete information. In most multiplayer board games (such as chess, checkers, etc.), both players know everything about the board and the positions of each other’s pieces. In most multiplayer card games, however, there is at least some hidden information. For example, in blackjack, the dealer’s hand is unknown, as are the cards that have not been played.

But, while we may not know the other player’s hand, we do have a few tools at our disposal to gain knowledge:

  • Likelihood of opponent knocking – While we do not know what is in our opponent(s)’ initial hand, we do know what cards they remove from their hand, as well as whatever cards they add to their hand from the face-up discard pile. So, at the very least, we can try to keep a running total of their deadwood. If we see them pick up several cards of the same rank, then perhaps they have just created a set, and we can subtract the total of those cards from our approximation of their deadwood. This could be useful if you want to make sure you will not be undercut by your opponent.
  • Opponent’s choice of card to pick up – Every turn, a player must pick up a card, either from the discard pile or from the face-down pile of remaining cards (the “stock”). Since the discard pile is face-up, if the opponent does not choose from the discard pile but rather from the stock, then we can make a safe bet that their hand does not contain anything relevant to the card on top of the discard pile. This allows us to (potentially) grow our knowledge of the opponent’s hand as the game progresses.
  • Card counting – This is probably the strategy I will make the most use of in this project. Since human memory isn’t holding us back, it will be very easy to count cards. Starting off, we already know our hand, so right off the bat that’s 10 cards that we know will not appear in the stock. Next, we know every card that both we and our opponent(s) discard, so as the game goes on, our knowledge of what could possibly be in the stock (or the opponent’s hand) improves. I think that this will be a very effective tool, and I’ll be testing out a few different card-counting methods to see which one works best in the context of this AI.

So, those are a few strategies I’ll be attempting to incorporate. It seems like I’ll be doing a lot of trying to gather information about the remaining cards and the opponent’s hand. At the end of the day, knocking early seems to be the best strategy (unless your opponent can undercut you), but I’ll have to test that, I guess.


– “David Hartman”





Minesweeper AI – Postmortem

So, my AI ended up performing decently, but not great.

Since last time

At the time of my last minesweeper post, I was trying to fix the brute force algorithm. However, as I was testing some modifications, I decided to see how often each portion of my AI was being called. As I might have stated before, the three components of my AI implementation were:

  1. “Basic strat” – flagging cells that were guaranteed to be mines (when the number of hidden cells adjacent to a revealed cell is equal to the revealed cell’s mine number) and then, based off of the flags, revealing cells that were guaranteed not to be mines
  2. Weighted pick – get all hidden cells bordering revealed cells, calculate the local probability of each being a mine, then reveal the cell with the lowest weight
  3. Brute force algorithm – go through all possible mine configurations for the hidden cells bordering revealed cells and choose the cell that appears in the least configurations
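The local probability behind the weighted pick is the simplest of the three to show. A hypothetical distillation (the real version also has to merge estimates when a hidden cell borders several revealed numbers):

```cpp
// A revealed cell with N adjacent mines, F adjacent flags, and H
// hidden neighbors says each of those hidden neighbors is a mine
// with probability (N - F) / H. The AI reveals the bordering hidden
// cell with the lowest such weight.
double localWeight(int adjacentMines, int adjacentFlags, int hiddenNeighbors) {
    if (hiddenNeighbors == 0) return 0.0;  // nothing hidden, nothing to weigh
    return (double)(adjacentMines - adjacentFlags) / hiddenNeighbors;
}
```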

As I was observing how many times each component was being called, something seemed to be off – the basic strategy was not being called a lot of the time, and was failing when it was called. It ended up being a simple bug that was causing the basic strat to fail very often. After fixing that, my win percentage got a lot better. Now, I could consistently win around 90% of games at each board size with the class (easy) parameters. When testing with the actual minesweeper parameters, however, I was winning around 15/30, 7/30, and 1/30 games (S, M, L).

So, now that the basic strat was fixed, I spent the rest of my time trying to fix the brute force algorithm and testing different combinations of brute force and weighted picking. In the end, it seemed like the brute force algorithm didn’t make too much of a difference.


Anyway, here are some stats on how well the AI performed. Most of the time I was testing 30 games at each difficulty. Here are stats from one of the 30/30/30 test sessions using the class parameters, and one using the minesweeper parameters.

So, I did quite well with the class parameters and not so great with the minesweeper parameters. I did improve the win percentage quite a lot since my last post, but that was probably mostly due to fixing the bug in the basic strat.

Thus, my AI does better when there is a lower mines-to-cells ratio, and struggles once the mines take up a larger percentage of the board, as they do with the minesweeper parameters. This would in theory be where the brute force algorithm picks up the slack, but unfortunately, that didn’t work out as well as I’d planned. The basic strat, however, worked quite well and won a lot of (easy) games, and a few difficult ones.

What went wrong

In terms of bringing my AI to the next level (going beyond the basic strat), I had mixed results. I do believe I made a brute force algorithm that could generate several possible mine configurations. However, it was very inefficient: it worked randomly, placing mines at random (where possible) until a valid configuration was generated. I was also unable to compute the true maximum number of possible configurations, because I think you need to know the number of mines remaining to do so (and this might be impossible in situations where configurations with different numbers of mines all satisfy the constraints). I might be wrong about that. Anyway, I wrote the first pass of this brute force algorithm when I was quite tired, so there were bugs I had to fix over the following days. But I must have done something wrong somewhere, because ultimately the algorithm did not improve my win percentage much (sometimes not at all). I retraced my logic and went through the algorithm’s functions a few times, but it definitely would have benefited from more rigorous testing and picking apart.

You would expect the brute force algorithm to shine brightest in situations where there are lots of mines, because that typically means more data and fewer possible safe cells. However, as I said, the brute force algorithm did not improve my win percentage, so it was not very successful.
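For comparison, an exhaustive (rather than random) enumeration of the frontier could be sketched like this – the `Constraint` struct and function names are just for illustration, not code from the actual project:

```cpp
#include <functional>
#include <vector>

// Each constraint corresponds to a revealed border cell: the number of
// mines still unaccounted for around it, and the indices (into the
// frontier list) of the hidden frontier cells it touches.
struct Constraint {
    int mines;                // mines still required around this cell
    std::vector<int> cells;   // frontier-cell indices adjacent to it
};

// Enumerate every mine/safe assignment of the frontier cells that
// satisfies all constraints, and count, per cell, how many valid
// configurations make it a mine. The cell with the lowest count is
// the safest guess.
std::vector<int> countMineConfigs(int frontierSize,
                                  const std::vector<Constraint>& cons) {
    std::vector<int> assign(frontierSize, 0), mineCounts(frontierSize, 0);

    // Check whether the full assignment satisfies every constraint.
    auto valid = [&]() {
        for (const auto& c : cons) {
            int m = 0;
            for (int i : c.cells) m += assign[i];
            if (m != c.mines) return false;
        }
        return true;
    };

    // Plain depth-first enumeration (no pruning; fine for small frontiers).
    std::function<void(int)> rec = [&](int i) {
        if (i == frontierSize) {
            if (valid())
                for (int j = 0; j < frontierSize; ++j)
                    mineCounts[j] += assign[j];
            return;
        }
        assign[i] = 0; rec(i + 1);
        assign[i] = 1; rec(i + 1);
    };
    rec(0);
    return mineCounts;
}
```

This visits 2^n assignments for a frontier of n cells, so it only works on small frontiers (or frontier segments) without pruning, but unlike the random approach it is guaranteed to find every valid configuration exactly once.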

What went right

Although I had a few slip-ups getting the basic strat implemented, I ultimately got it working quite well – at least to the level you’d expect from such a common strategy. But that was sort of the baseline for this project. The List class I made did help me and made my code a tad cleaner, so that was nice as well. And even though my brute force algorithm failed, it was a good exercise to sort out the logic for such an algorithm, even if my implementation was very inefficient.

If I had more time?

Time was the primary constraint for this project (we had roughly 1.5 weeks to work on it). With more time, I would definitely write a proper brute force algorithm, or at the very least test my current one much more extensively to find its flaws. Along with that, I would try to improve the algorithm’s efficiency. I would also consider adding logic for the endgame, where knowing how many mines remain can let you pick out the correct mine configuration.

Overall, it was pretty fun to work on this project. I like the idea of making an AI to play a game; before this, I’d only done it a handful of times. Once I made a simple FlashPunk game play itself in order to figure out whether the game was using too much memory (it was; I needed to recycle sprites more). Last semester, I made a makeshift AI for my Production II game so I could test our game without needing 4 controllers (it sort of worked, but had several bugs, so it didn’t make it into the final build). Ultimately, I liked working on this Minesweeper AI, but I can’t wait to dive into our next project: making an AI to play Gin Rummy.

– “David Hartman”