Esoteric Dissertations from a One-Track Mind

June 26, 2008

Who’s The Mastermind Now

Filed under: programming — Tags: — codesmithy @ 10:30 am

On sort of a whim, I got interested in the game Mastermind.  It is a simple code-breaking game.  One player picks a sequence of four pegs, each of which can be one of six colors.  The other player tries to guess the sequence.  To help the guesser along, after each guess they are told how many pegs are the correct color in the correct location, and how many pegs are the correct color but in the wrong location.

I was talking with a friend of mine who was reading Jeff Hawkins’ On Intelligence.  Interestingly enough, I had just started reading Kluge: The Haphazard Construction of the Human Mind by Gary Marcus.  Apparently, Hawkins promotes building machines that work more like the human mind.  From what I had already read of Kluge, my reaction was: no, the human mind does not necessarily represent good engineering practice.  Then we got into a debate about what exactly intelligence is.  To me, intelligence is being able to find a solution without knowing the answer in advance.  Therefore, computation is intelligence.  Certain algorithms can represent a greater intelligence than others.  For example, ants apparently have differing abilities to produce scents.  As such, some ants can’t tell whether they are coming from or going to food, but others can (Surely You’re Joking, Mr. Feynman!, Richard P. Feynman, pg. 95).  I think we’d agree that the ants that can tell which way they are going have a greater intelligence than those that can’t, although neither approaches the general intelligence humans have.

What does this have to do with Mastermind?  Well, after spending a little bit of time with a computer, we can get it to play Mastermind.  Not only that, it consistently beats my best performance.  So it is worth examining the difference between how I solve it, as a human, and how the computer solves it.

How I play Mastermind

I play Mastermind by picking a sequence, say ABCD, and looking at the result.  I hypothesize what the next sequence might be by rules of logic, try that sequence, and look at the result.  The approach is incredibly haphazard, although I try to apply it systematically.  I sometimes fall into traps where I mistakenly infer something about a particular peg, only to have to reconsider it later.  Basically, these are the flaws Marcus says we are prone to in Kluge.

However, as Wikipedia notes, Knuth showed how this can be done with a minimax approach.  The implementation follows.  We start out with a problem definition.

enum CODE { A, B, C, D, E, F, MAX_CODE };
static const int GUESS_LEN = 4;

Next we need a function that calculates the black-peg and white-peg counts for a given guess and answer. (Note: the Block class just adds assert-based bounds checking during development, null is 0, and NUM_ELEMENTS is a macro that determines the number of elements in a static or automatic array.)

void CalculateDiff(CODE guess[],
                   CODE ans[],
                   int len,
                   int* pBlackPegs,
                   int* pWhitePegs)
{
   assert(guess != null);
   assert(ans != null);
   assert(len > 0 && len <= MAX_LEN);

   Block<CODE> bGuess(guess,len);
   Block<CODE> bAns(ans,len);
   bool matched[MAX_LEN] = { false };
   Block<bool>  bMatched(matched,NUM_ELEMENTS(matched));

   int blackPegs = 0;
   int whitePegs = 0;

   // black pegs: right color, right position
   for(int i = 0; i < len; ++i)
   {
      if(bGuess[i] == bAns[i])
      {
         bMatched[i] = true;
         ++blackPegs;
      }
   }

   // white pegs: right color, wrong position; each answer peg
   // can be matched at most once
   for(int i = 0; i < len; ++i)
   {
      if(bGuess[i] == bAns[i]) continue;
      bool done = false;
      for(int j = 0;!done && j < len; ++j)
      {
         if(i == j) continue;
         if(bMatched[j]) continue;
         if(bGuess[i] == bAns[j])
         {
            done = true;
            bMatched[j] = true;
            ++whitePegs;
         }
      }
   }

   if(pBlackPegs != null) *pBlackPegs = blackPegs;
   if(pWhitePegs != null) *pWhitePegs = whitePegs;
}

From there, we can define a MasterMindAI class that produces guesses.

class MasterMindAI
{
public:
   static void GetPossibility(CODE* code,int codeLen,int codeNum);
   static int  GetCode(const CODE* code,int codeLen);
   void GetGuess(CODE* code,int codeLen) const;
   int  CountRemovals(CODE* guess,CODE* ans,int codeLen) const;
   void UpdatePossibilities(CODE* guess,
                            int codeLen,
                            int blackPegs,
                            int whitePegs);
   void Reset();
private:
   bool* m_possibilities;
   int   m_numPossibilities;
};
The static functions just map a numerical code 0...1295 (there are 6^4 = 1296 possibilities) to a particular code string and vice versa.  0 is AAAA and 1295 is FFFF.

The heart of the intelligence is in GetGuess.

void MasterMindAI::GetGuess(CODE* code,int len) const
{
   int maxGuess = 252;   // default opening guess: two pairs of colors
   int numRemainingPos = 0;
   for(int i = 0;i < m_numPossibilities; ++i)
      if(m_possibilities[i]) ++numRemainingPos;

   if(numRemainingPos != m_numPossibilities)
   {
      int maxScore = -1;
      for(int i = 0; i < m_numPossibilities; ++i)
      {
         CODE aGuess[GUESS_LEN];
         GetPossibility(aGuess,NUM_ELEMENTS(aGuess),i);
         int thisScore = INT_MAX;
         for(int j = 0; thisScore > maxScore && j < m_numPossibilities; ++j)
         {
            if(!m_possibilities[j]) continue;
            CODE possAns[GUESS_LEN];
            GetPossibility(possAns,NUM_ELEMENTS(possAns),j);
            thisScore = std::min(thisScore,
                                 CountRemovals(aGuess,possAns,len));
         }
         if(thisScore > maxScore)
         {
            maxGuess = i;
            maxScore = thisScore;
         }
      }
   }
   GetPossibility(code,len,maxGuess);
}

The key part is the minimax counting of eliminated possibilities.  We loop through all possible guesses, and for each remaining possible answer we record the worst-case removal.  We then pick the guess with the best worst-case removal.

CountRemovals calculates how many of the remaining possibilities would be eliminated by the given guess if the given answer were correct.

int MasterMindAI::CountRemovals(CODE* initGuess,CODE* ans,int codeLen) const
{
   int numRemoved = 0;
   int blackPegs = 0;
   int whitePegs = 0;
   Block<bool> poss(m_possibilities,m_numPossibilities);

   CalculateDiff(initGuess,ans,codeLen,&blackPegs,&whitePegs);

   int code = GetCode(initGuess,codeLen);
   if(poss[code]) ++numRemoved;   // the guess itself is used up

   for(int i = 0;i < m_numPossibilities; ++i)
   {
      if(!poss[i]) continue;
      if(i == code) continue;     // already counted above
      CODE posAns[GUESS_LEN];
      int tb = 0;
      int tw = 0;
      GetPossibility(posAns,NUM_ELEMENTS(posAns),i);
      CalculateDiff(initGuess,posAns,codeLen,&tb,&tw);
      if(tb != blackPegs || tw != whitePegs)
      {
         ++numRemoved;
      }
   }
   return numRemoved;
}

Next we have its pair, UpdatePossibilities, which updates the internal possibility array based on what actually happened.  We will never eliminate fewer possibilities than CountRemovals predicted.

void MasterMindAI::UpdatePossibilities(CODE* guess,
                                       int codeLen,
                                       int blackPegs,
                                       int whitePegs)
{
   int numEliminated = 0;
   Block<bool> poss(m_possibilities,m_numPossibilities);

   int code = GetCode(guess,codeLen);
   poss[code] = false;            // repeating a guess gains nothing

   for(int i = 0;i < m_numPossibilities;++i)
   {
      if(!poss[i]) continue;
      CODE ans[GUESS_LEN];
      GetPossibility(ans,NUM_ELEMENTS(ans),i);
      int tb = 0;
      int tw = 0;
      CalculateDiff(guess,ans,codeLen,&tb,&tw);
      if(tb != blackPegs || tw != whitePegs)
      {
         poss[i] = false;
         ++numEliminated;
      }
   }
}

Finally, we have the main loop.  (Here mai is a MasterMindAI instance and i selects the answer being tested.)

CODE ans[GUESS_LEN];
MasterMindAI::GetPossibility(ans,NUM_ELEMENTS(ans),i);

mai.Reset();
int numGuesses = 0;
CODE sample[GUESS_LEN];
bool done = false;
while(!done)
{
   int blackPegs = 0;
   int whitePegs = 0;
   ++numGuesses;
   mai.GetGuess(sample,NUM_ELEMENTS(sample));
   PrintCode(&std::cout,sample,NUM_ELEMENTS(sample));
   CalculateDiff(sample,ans,NUM_ELEMENTS(sample),&blackPegs,&whitePegs);
   if(blackPegs == GUESS_LEN)
   {
      done = true;
   }
   else
   {
      mai.UpdatePossibilities(sample,NUM_ELEMENTS(sample),blackPegs,whitePegs);
   }
}

The one thing I find surprising is how many possibilities are actually eliminated by the first guess.  The computer employs the strategy with a precision and speed that I just can't match.  So, we can train a computer to be more intelligent in specific instances, and it thinks in a way that is rather alien to the way we, as humans, do.  This isn't surprising in a world where the reigning intellect itself is based on a kluge.  It also seems to be a property of this kluge-iness that we wish to impose the particulars of our thinking onto the machine.  It is unlikely this will be for the sake of efficiency or good design, but rather because we are so limited that it is the only way we can make certain problems we would like to solve tractable.

