Esoteric Dissertations from a One-Track Mind

July 17, 2008

Joel Spolsky Jumps the Shark

Filed under: programming — Tags: — codesmithy @ 10:40 am

In the 14th Stack Overflow podcast, Joel Spolsky worked on his credentials for being a fuddy-duddy.  Bemoaning the criticism of a post he admitted he tried his best to phone in, Spolsky sees these new-fangled kids and their conversational style of blogging as leading to blogging’s demise.  He draws a parallel between the plethora of blogs, with their low barriers to entry, and the endless September that heralded the end of Usenet for many users.

The irony of the situation, the fact that Spolsky routinely employs many of the same devices he criticizes other people for using, seems to be completely lost on him.  How is one supposed to tell the good from the bad, Spolsky ponders?  Surely judging a blog by its design or a book by its cover is insufficient.  How is one supposed to tell the difference?  Here is a clue: PAGERANK.  There is this little company called Google; maybe Joel can remember them between his bouts of senile dementia.  Their whole shtick is that they tend to return good search results for whatever you are searching for.

Let’s give it a try.  “Should I disable menu items?” I ask Google.  And, boy, am I feeling lucky.  It brings me to this page, by a guy named Joel Spolsky, which says “don’t do this.”  Problem solved.

So, let’s be clear about what Joel is really complaining about: it is not the lack of the good, but rather the preponderance of crap.  Joel wants a clergy, and what we have is a free market.  Higher barriers to entry don’t improve signal, they just reduce noise.  This all follows from Sturgeon’s Law: ninety percent of everything is crap.  As more players enter the market, the biggest winners are the dung beetles.

The market is, in fact, better than the clergy, but not for the reasons some people believe.  The market is not preferred because of its best case performance (a disinterested, benevolent dictator is convincingly more efficient).  Rather, the free market has the best worst-case performance, and it is precisely because some crap from the elite’s perspective is allowed to persist.

There is another argument in favor of the free market, but it speaks more to human nature than to fundamental process.  When someone reaches a certain level of celebrity, their skill in the task they are famous for usually degrades.  This is part of the reason why sports teams find it so hard to win back-to-back championships, why news anchors are seldom good journalists any longer, why artists become one-hit wonders, and so on.  There is a tendency to rest on one’s laurels and reputation, rather than focus on continually churning out a superior product and improving.  A clergy tends to become ensconced.  A free market ensures a steady stream of challengers and competition.  To the clergy, these challengers are perceived as fleas, and in many cases that is exactly what they are.  Nevertheless, it keeps the clergy honest, and threatens them with the only threat they truly understand: a revolution and a loss of status.  As Dawkins asks: was there ever a dog that praised his fleas?  Probably not, but they are a sign of a healthy ecosystem.

In closing, I want to make a few things clear, because a couple of aspects do get lost in the medium.  I am actually a huge fan of Joel’s.  The disparaging comments above are just ribbing, a meta-example of how the best way for a flea (this blog) to improve its health is by attacking a big, healthy host at a point of perceived weakness.  Nevertheless, I do get annoyed when he takes up positions he obviously hasn’t thought all the way through.  When he espouses views he makes no effort to properly defend, I feel it lessens his authority on the other issues I would like to cite him on.  So, let us make clear the creed of the fickle market: “you are only as good as your latest offering,” and may we never forget it.

June 26, 2008

Who’s The Mastermind Now

Filed under: programming — Tags: — codesmithy @ 10:30 am

On sort of a whim, I got interested in the game Mastermind.  It is a simple code-breaking game.  One player picks a sequence of four pegs, each of which can be one of six colors.  The other player tries to guess the sequence.  To help the guesser along, after each guess they are told how many pegs are the correct color in the correct location (black pegs), and how many pegs are the correct color but in the wrong location (white pegs).

I was talking with a friend of mine who was reading Jeff Hawkins’ On Intelligence.  Interestingly enough, I had just started reading Kluge: The Haphazard Construction of the Human Mind by Gary Marcus.  Apparently, Hawkins promotes building machines that work more like the human mind.  From what I had already read of Kluge, my reaction was: no, the human mind does not necessarily represent good engineering practice.  Then we got into a debate about what exactly intelligence is.  To me, intelligence is being able to find a solution without knowing the answer in advance.  Therefore, computation is intelligence.  Now, certain algorithms can represent a greater intelligence than others.  For example, ants apparently have differing abilities to produce scents.  As a result, some ants can’t tell whether they are coming from or going to food, but others can (Surely You’re Joking, Mr. Feynman!, Richard P. Feynman, pg. 95).  I think we’d agree that the ants that can tell which way they are going have a greater intelligence than those that can’t, although neither approaches the general intelligence humans have.

What does this have to do with Mastermind?  Well, after spending a little bit of time with a computer, we can get it to play Mastermind.  Not only that, but it consistently beats my best performance.  So, it is worth examining the difference between how I solve it, as a human, and how the computer solves it.

How I play mastermind.

I play Mastermind by picking a sequence, say ABCD, and looking at the result.  I hypothesize what the next sequence might be by rules of logic, then try that sequence and look at the result.  The approach is incredibly haphazard, although I try to apply it systematically.  I sometimes fall into traps where I mistakenly infer something about a particular peg, only to have to reconsider it later.  Basically, these are the flaws Marcus says we are prone to in Kluge.

However, as Wikipedia notes, Knuth showed how this can be done with a minimax approach.  The implementation follows.  We start out with a problem definition.

enum CODE { A, B, C, D, E, F, MAX_CODE };
static const int GUESS_LEN = 4;
static const int MAX_LEN   = GUESS_LEN;  // upper bound for stack buffers below

Next we need a function that calculates the black peg and white peg counts for a given guess and answer.  (Note: the Block class just adds assert-based bounds checking during development, null is 0, and NUM_ELEMENTS is a macro that determines the number of elements in a static or automatic array.)

void CalculateDiff(CODE guess[],
                   CODE ans[],
                   int len,
                   int* pBlackPegs,
                   int* pWhitePegs)
{
   assert(guess != null);
   assert(ans != null);
   assert(len > 0 && len <= MAX_LEN);

   Block<CODE> bGuess(guess,len);
   Block<CODE> bAns(ans,len);
   bool matched[MAX_LEN] = { false };
   Block<bool>  bMatched(matched,NUM_ELEMENTS(matched));

   int blackPegs = 0;
   int whitePegs = 0;

   // First pass: exact matches (right color, right location).
   for(int i = 0; i < len; ++i)
   {
      if(bGuess[i] == bAns[i])
      {
         bMatched[i] = true;
         ++blackPegs;
      }
   }

   // Second pass: right color, wrong location, skipping pegs
   // already consumed by an exact match.
   for(int i = 0; i < len; ++i)
   {
      if(bGuess[i] == bAns[i]) continue;
      bool done = false;
      for(int j = 0; !done && j < len; ++j)
      {
         if(i == j) continue;
         if(bMatched[j]) continue;
         if(bGuess[i] == bAns[j])
         {
            done = true;
            bMatched[j] = true;
            ++whitePegs;
         }
      }
   }

   if(pBlackPegs != null) *pBlackPegs = blackPegs;
   if(pWhitePegs != null) *pWhitePegs = whitePegs;
}

From there, we can define a MasterMindAI class that produces guesses.

class MasterMindAI
{
public:
   static void GetPossibility(CODE* code,int codeLen,int codeNum);
   static int  GetCode(const CODE* code,int codeLen);
   void GetGuess(CODE* code,int codeLen) const;
   int  CountRemovals(CODE* guess,CODE* ans,int codeLen) const;
   void UpdatePossibilities(CODE* guess,
                            int codeLen,
                            int blackPegs,
                            int whitePegs);
   void Reset();
private:
   bool* m_possibilities;
   int   m_numPossibilities;
};

The static functions just map a numerical code 0...1295 (there are 6^4 = 1296 possibilities) to a particular code string and vice versa.  0 is AAAA and 1295 is FFFF.

The heart of the intelligence is in GetGuess.

void MasterMindAI::GetGuess(CODE* code,int len) const
{
   int numRemainingPos = 0;
   for(int i = 0; i < m_numPossibilities; ++i)
      if(m_possibilities[i]) ++numRemainingPos;

   int maxGuess = 252;  // 252 decodes to BBAA, a fixed two-pair opening guess

   // Only search once something has been eliminated; otherwise use the opener.
   if(numRemainingPos != m_numPossibilities)
   {
      int maxScore = -1;
      for(int i = 0; i < m_numPossibilities; ++i)
      {
         CODE aGuess[GUESS_LEN];
         GetPossibility(aGuess,NUM_ELEMENTS(aGuess),i);
         int thisScore = INT_MAX;
         for(int j = 0; thisScore > maxScore && j < m_numPossibilities; ++j)
         {
            if(!m_possibilities[j]) continue;
            CODE possAns[GUESS_LEN];
            GetPossibility(possAns,NUM_ELEMENTS(possAns),j);
            // Worst-case number of removals over all remaining answers.
            thisScore = std::min(thisScore,
                                 CountRemovals(aGuess,possAns,len));
         }
         if(thisScore > maxScore)
         {
            maxGuess = i;
            maxScore = thisScore;
         }
      }
   }
   GetPossibility(code,len,maxGuess);
}

The key part is minmax counting of the possibilities eliminated. We loop through all the possibilities, and for all the remaining possible answers we record the worst case removal. We then pick the guess with the best worst-case removal.

CountRemovals calculates how many of the remaining possibilities would be eliminated by the given guess, assuming the given answer.

int MasterMindAI::CountRemovals(CODE* initGuess,CODE* ans,int codeLen) const
{
   int numRemoved = 0;
   int blackPegs = 0;
   int whitePegs = 0;
   Block<bool> poss(m_possibilities,m_numPossibilities);

   // Score the guess against this hypothetical answer.
   CalculateDiff(initGuess,ans,codeLen,&blackPegs,&whitePegs);

   // The guess itself is eliminated if it was still a possibility.
   int code = GetCode(initGuess,codeLen);
   if(poss[code]) ++numRemoved;

   for(int i = 0; i < m_numPossibilities; ++i)
   {
      if(!poss[i]) continue;
      int tb = 0;
      int tw = 0;
      CODE posAns[GUESS_LEN];
      GetPossibility(posAns,NUM_ELEMENTS(posAns),i);
      CalculateDiff(initGuess,posAns,codeLen,&tb,&tw);
      if(tb != blackPegs || tw != whitePegs)
      {
         ++numRemoved;
      }
   }
   return numRemoved;
}

Next, we have its pair, UpdatePossibilities, which updates the internal possibility array based on what actually happened.  At no time will we eliminate fewer possibilities than we predicted in CountRemovals.

void MasterMindAI::UpdatePossibilities(CODE* guess,
                                       int codeLen,
                                       int blackPegs,
                                       int whitePegs)
{
   int numEliminated = 0;
   Block<bool> poss(m_possibilities,m_numPossibilities);

   // The guess itself is no longer a possibility.
   int code = GetCode(guess,codeLen);
   poss[code] = false;

   for(int i = 0; i < m_numPossibilities; ++i)
   {
      if(!poss[i]) continue;
      CODE ans[GUESS_LEN];
      GetPossibility(ans,NUM_ELEMENTS(ans),i);
      int tb = 0;
      int tw = 0;
      CalculateDiff(guess,ans,codeLen,&tb,&tw);
      if(tb != blackPegs || tw != whitePegs)
      {
         poss[i] = false;
         ++numEliminated;
      }
   }
}

Finally, we have the main loop.

// i is the index of the answer being tested; mai is a MasterMindAI instance.
CODE ans[GUESS_LEN];
MasterMindAI::GetPossibility(ans,NUM_ELEMENTS(ans),i);
mai.Reset();
int numGuesses = 0;
CODE sample[GUESS_LEN];
bool done = false;
while(!done)
{
   int blackPegs = 0;
   int whitePegs = 0;
   mai.GetGuess(sample,NUM_ELEMENTS(sample));
   ++numGuesses;
   PrintCode(&std::cout,sample,NUM_ELEMENTS(sample));
   CalculateDiff(sample,ans,NUM_ELEMENTS(sample),&blackPegs,&whitePegs);
   if(blackPegs == GUESS_LEN)
   {
      done = true;
   }
   else
   {
      mai.UpdatePossibilities(sample,NUM_ELEMENTS(sample),blackPegs,whitePegs);
   }
}

The one thing I find surprising is how many possibilities are actually eliminated by the first guess.  The computer employs the strategy with a precision and speed that I just can’t match.  So, we can train a computer to be more intelligent in specific instances, and it thinks in a way that is rather alien to the way we, as humans, do.  This isn’t surprising in a world where the reigning intellect is itself based on a kluge.  It also seems to be a property of this kluge-iness that we wish to impose the particulars of our thinking onto the machine.  It is unlikely this will be for the sake of efficiency or good design, but rather because we are so limited that it is the only way we could make certain problems we would like to solve work at all.

March 22, 2008

March Madness Craziness: Part One

Filed under: programming — Tags: , — codesmithy @ 8:23 am

So, it is that time of the year again.  The NCAA Men’s Basketball tournament is upon us.  Raymond Chen has his predictions up.  In years past, he has used the seating capacity of the arena or how long the President/Chancellor of the University has been in charge.  When filling out my own predictions, I try to pick some of the more likely upsets based on record, whether I feel the conference the team is from is overrated, whether the team itself is overrated, and my impression of how the coach has done in the tournament in the past and whether I feel they can achieve that level of success again.  It is very unscientific and subjective.

I also find myself somewhat flabbergasted that Chen managed to pick Siena, Villanova, Kansas St. and San Diego (and a little bit jealous).  So, maybe he is on to something.

In addition, I feel that I have to pick some upsets because I know there are going to be upsets.  However, I never really sat down and determined whether that is actually a rational thing to do.

First of all, let’s talk about what is definitely irrational: random picking based on weights.  I noticed a bracket simulator that assigned picks by drawing a random number after assigning each team a relative likelihood of winning.  It gives you an interesting bracket, but the strategy is not, in fact, rational.  If you were interested in maximizing your expected number of correct picks, you’d pick the team with the best chance of winning every time.  Or would you?

Let’s assume the NCAA selection committee is totally unbiased and correctly assigns seeds.  Knowing that the majority of people will follow the NCAA’s seeding, does it make sense to make a suboptimal bracket that, if it hits, virtually guarantees an exclusive victory?  I don’t know; let’s figure it out.

To determine a rational strategy, we have to start with scoring.  We’ll ignore the play-in game between the two 16 seeds because it isn’t included in most Pick ’ems.  Generally, the scoring is based on correctly picking a winner, multiplied by 2 raised to the power of the round in which the game was played (starting the count at 0, as is the wont of computer scientists).  Also, most Pick ’ems require a consistent bracket, that is, a bracket where only the winners are allowed to play in the next round.  I actually don’t know if it would make sense to pick a team to lose in the first round, only to resurrect them later, but alas it is disallowed.

Definition of the NCAA Selection picker: pick the team with the lower seed number.  If two teams have the same seed and are from different regions, randomly pick one to be the winner with equal weighting.

Next time: how this selection algorithm actually performs against tournaments of the past.

January 25, 2008

Traveler’s Dilemma

Filed under: programming, science — Tags: — codesmithy @ 9:46 am

Scientific American has an article on the surprising result from a game known as the “Traveler’s Dilemma.”

A brief summary of the game goes like this.  There are two travelers, Alice and Bob, and they each bought identical souvenirs.  Unfortunately, both souvenirs have been lost.  The Airline is more than happy to compensate them for the loss, but the Airline manager doesn’t know how much the souvenirs cost.  He doesn’t want to ask the travelers directly for fear they will inflate the price.

He comes up with a complicated scheme.  He will ask Alice and Bob to write down the price of the souvenir without allowing them to communicate.  If the amounts they write down match, then he will compensate Alice and Bob the amount they wrote down.  If one of them writes a smaller amount than the other, the Airline figures the lower price is the honest price.  The Airline also imposes a $2 penalty on the person who gave the higher, dishonest price, and rewards the person who gave the lower bid $2 for their honesty.  The Airline cannot reimburse for goods over $100, or for goods under $2.  Here are a couple of scenarios:

Alice Bet   Bob Bet   Alice Outcome   Bob Outcome
      100       100             100           100
       40        30              28            32
       99       100             101            97

Now, what is interesting about this game is what happens when two rational, self-interested entities play it.  Much as in the Prisoner’s Dilemma, the particulars of how much the souvenir is worth (or one’s guilt or innocence) don’t really enter into the thinking about what to do in this scenario.  Just as in the Prisoner’s Dilemma, one knows nothing about the character of the other person, except that they are presumably rational, self-interested, and capable of going through the same thought process you are.  So what would such a rational thought process look like in code?

Well, first we define some of the rules of the game.

const int PENALTY = -2;
const int REWARD  = 2;
const int MIN_BET = 2;
const int MAX_BET = 100;

Next we write a function that, given the two players’ bets, computes what their rewards would be.

void RewardPlayers(int aBet,int bBet,int* pAReward,int* pBReward)
{
    if(aBet == bBet)
    {
        (*pAReward) = aBet;
        (*pBReward) = bBet;
    }
    else if(aBet < bBet)
    {
        // The lower bet is taken as the honest price.
        (*pAReward) = aBet + REWARD;
        (*pBReward) = aBet + PENALTY;
    }
    else
    {
        (*pAReward) = bBet + PENALTY;
        (*pBReward) = bBet + REWARD;
    }
}

Next we write a function that, given the opponent's bet, finds the optimal bet I should make.  For example, if the opponent bets $100, then I should bet $99, because then I'll receive $101.

int RationalBet(int oppBet)
{
    int oppOutcome = -1;
    int outcome = -1;
    int bestBet = -1;
    int bestOutcome = -1;
    for(int i = MIN_BET; i <= MAX_BET; ++i)
    {
        RewardPlayers(i,oppBet,&outcome,&oppOutcome);
        if(outcome > bestOutcome)
        {
           bestBet = i;
           bestOutcome = outcome;
        }
    }
    return bestBet;
}

Finally, we determine the rational outcome.  We assume that both Alice and Bob start their bets at MAX_BET, or $100.  We start with Alice: assuming Bob bets $100, does she rationally want to keep her bet at $100, or does she want to do something different?  If she keeps the same bet, then we are done.  If she changes her bet, then we record her new rational bet and switch to Bob’s perspective.  Bob assumes Alice will do the rational thing, so, looking at how the bets currently stand, does Bob want to change his bet given what he knows Alice will rationally do?  Again, if Bob doesn’t change his bet, we are done.  However, if he does, we record the new bet, switch to Alice’s perspective, and repeat until the bet doesn’t change.

void RationalOutcome(void)
{
    int bets[2];

    bets[0] = MAX_BET;
    bets[1] = MAX_BET;

    int side = 0;
    int newBet;
    int oldBet;
    do
    {
        oldBet = bets[side];
        newBet = RationalBet(bets[side == 0 ? 1 : 0]);
        if(oldBet != newBet)
        {
           bets[side] = newBet;
           side = side == 0 ? 1 : 0;
        }
    } while(newBet != oldBet);

    printf("A bets: %d\n",bets[0]);
    printf("B bets: %d\n",bets[1]);
}

As it turns out, the rational result is for both people to bet $2.  There are reasons why this result occurs, but I first want to rule out one explanation.  The result does not occur because either player assumes malevolence on the part of the other.  The players are whole-heartedly disinterested; each is only looking out for themselves (self-interested) and making the equivalent assumption about the other player (that the other player is rational, self-interested, and unconcerned about your welfare).  To prove this is the case, you can set the penalty to 0 and rerun the simulation.  You’ll arrive at the same result.

So, why are we arriving at such a non-intuitive result?  What pushes the result down is the fact that the lesser bet wins, and, in fact, beats agreeing with the other person.  It causes a race to the bottom.  If Alice bets 98, then Bob’s best strategy is to bet 97.  If Bob’s best strategy is to bet 97, then Alice’s best strategy is to bet 96, and so on, all the way down until they hit rock bottom.  The article finds that people don’t act this way, even people well-versed in game theory who know they are competing with other people well-versed in game theory.

I imagine the reasons are similar to those found in iterated Prisoner’s Dilemma exercises.  Successful players will cooperate, because cooperation helps their survival and the survival of those they cooperate with.  When an optimal player comes across an uncooperative player, it will remember and punish the uncooperative player until it does cooperate.  In short, the best strategy might be a tit-for-tat strategy or a specially optimized version of it.

The other reasons people don’t are similarly simple, and they attack some of the base assumptions we made.  We don’t assume the other player will be perfectly rational.  Even if we do assume they are somewhat rational, we don’t assume they’ll take the rational thinking to its logical conclusion.  This might be because of so-called meta-thinking: the 98-97, 97-96, … sequence might be considered too similar to a Rock-Paper-Scissors loop, which has no stable outcome (rock loses to paper, paper loses to scissors, scissors loses to rock, and around again).  People might go 3 or 4 iterations and just stop, seeing it as pointless to go further.  Similarly, people see the detriment to their own outcomes.  When a player bets 98, the pair is collectively no better off than if both players had stayed at 100.  One player assumes the other player recognizes this fact too, and takes it upon themselves to be altruistic as opposed to self-interested.

Personally, I think the author gets too hung up on calling non-Nash-equilibrium states emotional.  I honestly don’t think it is an either/or.  People come in with a wholly different set of assumptions than the mathematical analysis does.  The mathematical analysis is extraordinarily useful, and it is part of the reason why computers are good at beating people at games like chess and checkers.  However, calling people irrational because they deviate from the model is confusing what should be conforming to what.  Should the model conform to reality?  Or reality conform to the model?

It is no surprise that the model and its assumptions should conform to reality.  My organic chemistry professor warned us not to confuse the two.  I think the Scientific American article presents as good a warning as any as to why confusing them is dangerous.  Although, I will say that economics seems to be the field most guilty of committing this error in judgment.

As the article explores, there are repercussions to this.  It highlights the failures of pure rational self-interest, which is embodied in a few movements and policies, notably free-market fundamentalism and Mutually Assured Destruction.  It gives an example of people cooperating even though reductionists say they should not, and it opens an avenue of inquiry, and possibly some evidence in favor of evolutionary psychology, to better explain why people think the way they do.

December 14, 2007


Filed under: programming — Tags: , — codesmithy @ 6:11 am

This is the coolest visualization of code and data inside an Atari 2600 cartridge I’ve seen.  It is amazing to see the branching in the code segments.  I can only imagine how painstaking it must have been to do by hand.  However, I doubt they had much of an alternative; programmers were dealing with very little memory.

However, one should be able to tell just by looking at the picture that the developers of Pac-Man were having some problems.  Here is a video of how the game actually played.

Q*bert, on the other hand, has some bugs too, but still, what the developers were able to do is pretty impressive.

July 3, 2007

Computers Can’t Beat Humans at Go (yet), So What?

Filed under: culture, programming, random — codesmithy @ 8:52 am

Times Online had an article about how computers, currently, can’t beat humans at Go.  I hate the tone of the article, the subtle arrogance.  When Garry Kasparov lost to Deep Blue, it marked an achievement: not of one man over a machine, but of a team of men’s ingenious designs against one man.  The machine was just an instrument of no escape.  It calculated with mathematical precision, considered every possible move, until it used the rules the programmers decided on to conclude which move had the best chance of victory.  I’m sure that if another team of people sat down and decided to come up with a program that could defeat Deep Blue, they could.

The point is that the attempt to defeat one of the greatest chess minds alive with a device is a societal achievement, not the machine’s.  The fact that machines could toss aside amateurs years before seems of no consequence.  If the achievement were solely the machine’s, we would remove silly constraints like time controls.  I mean, a computer could play a single game of Go or chess for hundreds of years, whereas I doubt a human would care to.  The other fact is that $1,000,000 is not a lot of money.  That would probably buy, at most, one Ph.D. in Computer Science for 10 years.  How long do people study Go before they become competent players?  How much do professional Go players earn collectively?

Invariably, the problem isn’t with the computer, and it is certainly not because it lacks

adaptation to uncertainty, intuition, wisdom, the ability to understand the thoughts and feelings of others, and a sense of mortality

Pattern recognition is a fundamentally hard problem, but our brains are adept at handling it.  Unfortunately, our system is also far from perfect, in that we are hard-wired to see faces and patterns, which is why we get potato chips that apparently look like anyone from Elvis to Jesus.  Teaching a computer to recognize this garbage as a face would be deemed unacceptable, yet we seem to be perfectly happy with human imperfections.

As another example, there was a time when I was in a 3-D modeling program, looking at a head model from the inside, something I can usually easily discern.  But the problem was that this was a face, so my brain was flipping it for me, causing me to be utterly confused.  Optical illusions are a collection of all the tricks that get the brain to see things that aren’t really there.  Do we blame the computer for not having these flaws?

If we are ultimately going to create an Artificial Intelligence, it would be sad to handicap it with all of our imperfections.  Nor should we applaud the fact that there are a few things that, by evolution, we are good at doing but unable to explain.  Our mechanisms for detecting faces and general pattern recognition took thousands, if not millions, of years to develop.  To fault our inability to design a machine in the last 50 years that can do a better job than the most sophisticated parts of the brain is not giving our evolution enough credit, and it pays too much heed to our collective intellect.

That said, to applaud the inability of a computer to do something is to applaud our ignorance, our impatience, our superstition, and our hubris.  It is to revel in the Dark Ages of humanity.  It is to rail against progress, achievement, and the collective human intellect.  It is for these reasons that I find the undertones of the article so despicable.

June 26, 2007

Herb Sutter: The Free Lunch is Over (Parallelism)

Filed under: programming — codesmithy @ 7:12 am

Herb Sutter has written some articles about the coming era of parallelism.  Here are some slides from his OGDC talk, and here is his paper from Dr. Dobb’s Journal about the free lunch being over.  He is fundamentally right: if applications are going to go faster, they have to be written to take advantage of the parallelism available via parallel processors.  Applications are not going to get faster in and of themselves when the next processor comes out, unless the applications themselves have some degree of explicit parallelism.

Right now, we exist in a sort of chicken-and-egg era.  While parallelism is well understood in the database domain, trying to add parallelism to existing applications is frightfully difficult and error-prone.  There is going to be a gradual move from unstructured data to structured data, so that libraries that own the data can take care of much of the concurrency for the programmer.  I think the transactional programming model is going to be the only one that makes sense and is robust.  However, it will not be the most efficient, and there will be a tremendous cost in transitioning.  We see some of this cost come to bear in the next-generation game consoles, where both the Xbox 360 and the PS3 have some form of concurrency at their heart.  The Xbox 360 uses symmetric multi-processing: three virtually identical cores, each running two hardware threads.  The PS3’s Cell has one general-purpose core (the PPE) that is broadly comparable to an Xbox 360 core, plus an array of SPEs: streaming processors with a small local memory that DMA the data they need back and forth.  While the PS3 has more theoretical power, it is harder to take advantage of; therefore, for the first 4 or 5 years of the console’s lifetime, it will have less practical power.  Exclusive titles will be the ones that really make the Cell and the PS3 sing, unless a developer has some great love for the PS3 and wants to make that version of a multi-platform title better.  Given the greater familiarity of the programming model for SMP versus asymmetric multiprocessing, and a generally better tool chain from Microsoft leveraging their Windows technology (as opposed to Sony, who basically had to start from scratch), the default is going to be better use of the underlying platform on the Xbox 360 in multi-platform titles and early Xbox 360 exclusives.

Both consoles really outpace the programmer experience, and they represent something genuinely new in the programming era.  Even if the parallel tools were available, it is unlikely that programs on the next-gen consoles would be able to use them in their purest form, although some of the basic ideas and principles could be followed.

For a closing thought, it is interesting to note that the Wii is selling better than the PS3 and the Xbox 360.  So, the other possibility is that the frantic pace of development just comes to an end.

June 22, 2007

Amazing Stories of Software Compatibility

Filed under: programming — codesmithy @ 5:48 am

Raymond Chen wrote a book called “The Old New Thing,” which complements his blog of the same title.  There is a bonus chapter available online which I found to be fairly amazing.  The two things that strike me are how seriously Microsoft takes compatibility and how many resources they dedicate to actually maintaining it.  As the technology progresses, more sophisticated and general techniques become available, such as giving problematic programs their own sandbox so previous correctness can be maintained while properly behaving programs move forward (Windows 2000 shims being one such example).

However, people find incredible ways to depend on specific behavior of the existing code, such as “if it has eight bytes, it must be a dialog box.”  An additional problem is that Microsoft, more so in the past, thought that programmers could be trusted to do the “right” thing; that trust turned out to be somewhat misplaced.  The organizations that create programs are agents that want to accomplish their own self-interested goals, which is why Raymond Chen’s hypothetical conversation with a vendor gets the typical response.  You don’t gain the organization’s attention until you mention a threat to one of its goals, and frequently you have to make that connection for them: “Oh, our next release of XYZ will not work on Windows 95, but we want to ship XYZ on 95.”  But you should also expect the response to “your program does work on Windows NT” to be the perfectly rational “we don’t support Windows NT.”  Which is to say: this problem you speak of, you see, it doesn’t affect any of my goals.

The expectation of support is also slightly unfair, although it tends to be pervasive.  The fact that code breaks subtly (or other times not so subtly) across different platforms means additional testing.  Corporations are driven by rational, self-interested behavior.  How much will it cost to get an administered Windows NT box?  How much testing will have to be done?  How many sales will we have on that particular platform?  The rational response might be: we don’t support that platform; the costs outweigh the benefits.  They might be all for Windows NT support in principle, but they also need to look at their bottom line.  The fact that Microsoft comes along later and says “hey, you broke your program on NT,” or worse yet on Windows 2000 (something literally impossible to test for, since it didn’t exist during development), doesn’t change the fact that the company’s goal is to maximize profit, and ensuring compatibility with future OS revisions may not be justified by that goal.

Therefore, it is more insightful to abstract away the people and instead view the corporations involved as two self-interested agents: Microsoft, which is interested in externalizing the costs of the compatibility it uses to sell its new operating system, and the vendor, which wants to sell a quantity of software that depends on the previous operating system and isn’t necessarily concerned with compatibility with future versions (based on expected software lifetime, expected install base, and sales figures versus costs of development and testing). Future compatibility, then, is a cost that software vendors would also like to externalize. Microsoft invests so heavily in compatibility because it considers compatibility vital to the sales of its new operating system. The minutiae of what actually breaks and why is intellectually interesting from my point of view as a programmer, but from a business perspective it is fairly meaningless, since it literally comes down to profit. Better technologies for compatibility are just ways of reducing costs. Complaints that programmer X is so stupid and did this all wrong fail to see the forest for the trees. The rational self-interest of the capitalist entities involved is causing the friction, and as a member of an institution within that corporation, an employee takes on the properties of that entity in collectively solving the threat to its profits.

Given this view, the hatred misplaced on the sloppy programmer X who produced the code properly belongs to the system causing the friction: rational, self-interested parties that don’t necessarily share mutual goals, and therefore compete. It is another symptom of a capitalist system that fairness has nothing to do with who bears the cost. Complaining about compatibility problems from different vendors is like barking at the moon; you might as well sit down and fix the problems at hand rather than ponder how in the world the code ever got written in the first place. For one, it did get written. Also, there is a lot of it, as Chen’s chapter demonstrates. Therefore, it is better to treat the problem at hand as a puzzle rather than the product of some malicious, evil entity. As almost any programmer can embarrassingly attest, that malicious entity sometimes turns out to be yourself.

June 9, 2007

Letting the Machine do the Thinking: Act 4 (Resolution)

Filed under: programming — codesmithy @ 4:26 am

Kicking the program up again last night, it did discover a 4th answer to the self-referential test (2 already found + 1 correct one + 1 new incorrect one). They are listed in the table below.


Working through the new solution, it indeed seems to be a correct answer, assuming you don’t know what the answer to 20 is.

Ultimately, I will say that I feel a little bit let down because of the arbitrariness of question 20.

20. Standardized test is to intelligence as barometer is to
(A) temperature (only)
(B) wind-velocity (only)
(C) latitude (only)
(D) longitude (only)
(E) temperature, wind-velocity, latitude, and longitude

A barometer is a device that measures atmospheric pressure. So, obviously the author’s intent was to make some claim that standardized tests aren’t a good way to measure intelligence. But the analogy is malformed for at least two reasons. First, you have one test mapped to one attribute, intelligence, while the barometer is mapped to a bunch of different attributes; that alone is a reason to prefer the other answers over the author’s chosen one. Second, a relationship between one unrelated item and another unrelated item is meaningless. I mean, why not say standardized tests are to intelligence as a barometer is to black?

It might be sour grapes at this point, but I do feel let down that an arbitrary inside joke on the part of the author ruined an otherwise entertaining exercise, precisely because it turns out to be essential in determining the “correct” solution out of what turns out to be quite a few acceptable answers.

For those interested in the C++ source code, it can be found here.

June 8, 2007

Letting the Machine do the Thinking: Act 3 (Intrigue)

Filed under: programming — codesmithy @ 5:15 am

So, after adding functionality for the program to accept ranges, fixing a bug, and adding some additional logic to skip more of the search range, I fired up two copies to utilize 100% of my CPU and went to bed. To my surprise, I found that both programs claimed to have found an answer: one claimed DADBEDDEDABADBADBECA, and the other claimed DADBEDDEDABADBADBAEB. The encouraging thing to note is that the programs seemed to agree on what the first 17 answers are. Given that I didn’t know how to constrain question 20, in that I thought it was wholly determined by the previous questions, it isn’t too surprising that the program may have been too permissive in allowing answers.

Now, I went through the test by hand looking for any mistakes the program may have made, and I didn’t see any inconsistencies in either answer.

Although, the interesting thing is that the author stated:

I should mention that if you don’t agree with me about the answer to #20,
you will get a different solution to the puzzle than the one I had in mind.
But I should also mention that if you don’t agree with me about the answer
to #20, you are just plain wrong. 🙂

So, it might be that both answers are completely valid. But which one did the author have in mind, then? Well, for question 20, I’m going to guess he had in mind “Standardized test is to intelligence as barometer is to (A) temperature (only),” not “(B) wind-velocity (only).”

So, what did the answer turn out to be? Disappointingly, neither. The correct answer was DADBEDDEDABADBADBABE, which it turns out my program would also have found if it had continued searching instead of quitting once it found the first answer. Quitting early was based on the problem statement that the answer was unique, which I will grudgingly concede it is, if you agree with the author about question 20. So, with this in mind, I corrected the program to print out all the answers it finds and only quit once it exceeds its range. We will see if these are the only three solutions the program would have found or if there are more lurking out there.

