Saturday, April 3, 2010

Genetic Optimization for Chess Rules

One of the many challenges of making a chess engine that uses rules for evaluation is how to attach a value to each rule. Say, for example, that you use these two rules:

- A bishop on a dark square gets a progressive bonus when many enemy pieces are also on dark squares
- A rook gets a bonus for every direction it can move in that is not blocked by a friendly piece

How large should these bonuses be? Even if you play chess very well, the exact values are not obvious. And the values should be chosen in proportion to the values of all the other rules.
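To make this concrete, here is a rough Python sketch of what a weighted rule evaluation could look like. The `Board` methods and the rule implementations are made up; only the shape matters.

```
# Sketch of a weighted rule-based evaluation.
# The rule functions and the board representation are hypothetical;
# the point is that every rule has a weight that must be tuned.

def bishop_dark_square_bonus(board):
    """Hypothetical rule: counts enemy pieces on dark squares for each
    friendly bishop that sits on a dark square."""
    score = 0
    for bishop_square in board.friendly_bishops():
        if board.is_dark(bishop_square):
            score += sum(1 for sq in board.enemy_pieces() if board.is_dark(sq))
    return score

def rook_mobility_bonus(board):
    """Hypothetical rule: one point per direction a rook can move in
    that is not blocked by a friendly piece."""
    return sum(board.open_directions(rook_square)
               for rook_square in board.friendly_rooks())

RULES = [bishop_dark_square_bonus, rook_mobility_bonus]

def evaluate(board, weights):
    """The evaluation is simply the weighted sum of all rule scores.
    The question in this post: what should the weights be?"""
    return sum(w * rule(board) for w, rule in zip(weights, RULES))
```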

One way to go is to use historical games: pretend they were played using your rules and see which values would have reproduced the moves that were actually played. Or, in other words, find the values for your rules that would have played the same moves as the winners did.

Philosophical questions aside (will it ever become better than the average level of all winners?), such optimization is easier said than done. If you have 50 rules and limit each rule to 10 possible values, there are 10 * 10 * 10 ... = 10^50 possibilities, far more than you could ever evaluate. Generating random solutions is a simple way to find a set of values. That's easy:


  1. Generate a random value within applicable bounds for every rule (let's call this a solution)
  2. Test, for all moves played by the winners in all historical games, how often the move played is also the move your solution ranks highest
  3. If the solution is better than the best so far, remember it
  4. Return to step 1
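In Python this random search might look something like the sketch below; `score` stands for whatever function counts how often a solution picks the winners' moves.

```
import random

# Random search: keep generating random weight vectors and remember the best.
# `score(solution)` is assumed to return how well the solution reproduces
# the winners' moves in the historical games.

def random_solution(bounds):
    """One random value per rule, within that rule's allowed range."""
    return [random.uniform(lo, hi) for lo, hi in bounds]

def random_search(bounds, score, iterations=10_000):
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        solution = random_solution(bounds)      # step 1
        s = score(solution)                     # step 2: test on historical moves
        if s > best_score:                      # step 3: remember the best
            best, best_score = solution, s
    return best, best_score                     # step 4 is the loop itself
```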

Due to the huge number of possible solutions this will not get you a great combination of values. Genetic optimization will get you better values, faster. The genetic scheme works like this:

  1. Generate a lot of solutions
  2. Test all solutions on the moves in the database of played games
  3. Save the top 5% scoring solutions, remember the best overall solution
  4. Combine (breed is the technical term ;-) the surviving solutions to form new solutions
  5. Return to step 2
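A rough sketch of that loop, with `score` and `breed` as placeholders for the test against the game database and the combination step:

```
import random

def genetic_optimization(bounds, score, breed,
                         population_size=1000, generations=50, elite_frac=0.05):
    """Sketch of the genetic scheme: `score` tests a solution against the
    database of played games, `breed` combines two parents into a child."""
    # Step 1: generate a lot of random solutions
    population = [[random.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(population_size)]
    best, best_score = None, float("-inf")

    for _ in range(generations):
        # Step 2: test all solutions on the historical moves
        ranked = sorted(population, key=score, reverse=True)
        # Step 3: keep the top 5%, remember the best overall solution
        elite = ranked[:max(2, int(elite_frac * population_size))]
        top_score = score(elite[0])
        if top_score > best_score:
            best, best_score = elite[0], top_score
        # Step 4: breed the survivors to form the next generation
        population = [breed(random.choice(elite), random.choice(elite))
                      for _ in range(population_size)]
        # Step 5: back to testing
    return best, best_score
```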

This is simplifying things, but it gives a good first picture. I will now discuss the specific chess-related pitfalls I came across while making this work. In no particular order:

- Getting your rules to pick the exact same move can be difficult, and it is also not required: usually there are at least a few good moves in any position. I switched to rewarding how high the move actually played ended up in my ranking of the moves, with a large negative score if the played move was at the very bottom of the list. Those cases are also logged for inspection; looking at very badly evaluated moves shows you which rules are bad and which moves are unfit to use for evaluation (next issue).
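Scoring one test position might then look something like this sketch; `evaluate` and the penalty value are placeholders, not my actual engine code.

```
def score_position(board, played_move, weights, evaluate, bottom_penalty=-10.0):
    """Reward a solution by how high the move actually played ranks among
    all legal moves, with a large negative if it ranks dead last.
    `evaluate(board, move, weights)` is a placeholder for evaluating the
    position after making `move` with the given rule weights."""
    moves = board.legal_moves()
    # Rank all legal moves by the solution's evaluation, best first.
    ranking = sorted(moves, key=lambda m: evaluate(board, m, weights), reverse=True)
    position = ranking.index(played_move)
    if len(ranking) > 1 and position == len(ranking) - 1:
        return bottom_penalty       # played move ranked last: big negative, log it
    # Full reward for ranking it first, gradually less further down the list.
    return 1.0 - position / max(1, len(ranking) - 1)
```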

- You will get lots of moves classified as bad because they were forced. Normally, putting a queen in a tight corner is not useful, but if it prevents a check or is part of a sacrifice it might be the best move anyway. This can be countered by refining the set of moves you test with. I created a set of moves from a historical database that were purely positional: if an alpha-beta search to depth x did not change the relative value of the board, the move was allowed into the test set. This way all forced moves are left out. Always searching to depth x with every evaluation is also an option, but it will seriously dampen your ability to evaluate lots of solutions.
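A sketch of how such a filter could work, assuming placeholder `alphabeta` and `static_eval` functions, and treating "did not change the relative value" as the searched value staying close to the static evaluation:

```
def is_purely_positional(board, alphabeta, static_eval, depth=4, tolerance=0.25):
    """Keep a position for the test set only if a shallow search does not
    change the relative value of the board, i.e. nothing tactical (checks,
    captures, sacrifices) is going on. `alphabeta` and `static_eval` are
    placeholders for the engine's own search and evaluation; `tolerance`
    is in pawns."""
    before = static_eval(board)
    searched = alphabeta(board, depth)   # value after searching to depth x
    return abs(searched - before) <= tolerance

def build_test_set(games, alphabeta, static_eval):
    """Collect only the purely positional moves from the historical games.
    `games` is assumed to yield (position, move actually played) pairs."""
    test_set = []
    for board, played_move in games:
        if is_purely_positional(board, alphabeta, static_eval):
            test_set.append((board, played_move))
    return test_set
```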

- I created offspring in two ways: some combined the values of two good solutions, and some simply mutated the values of just a few rules. After several iterations the latter will fine-tune your values for the best score.
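For example, the `breed` step from the genetic loop above could produce both kinds of offspring like this; the mutation chance and the number of rules to change are made-up numbers.

```
import random

def crossover(parent_a, parent_b):
    """Child takes each rule value from one of the two parents at random."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(parent, bounds, rules_to_change=3):
    """Child is a copy of the parent with a few rule values re-randomized.
    Late in the run this fine-tunes the best solutions."""
    child = list(parent)
    for i in random.sample(range(len(child)), k=min(rules_to_change, len(child))):
        lo, hi = bounds[i]
        child[i] = random.uniform(lo, hi)
    return child

def breed(parent_a, parent_b, bounds, mutation_chance=0.3):
    """Sometimes mutate, otherwise cross over the two parents."""
    if random.random() < mutation_chance:
        return mutate(parent_a, bounds)
    return crossover(parent_a, parent_b)
```

In practice you would bind `bounds` first (for example with `functools.partial`) so that `breed` can be handed to the genetic loop above with just the two parents.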

My own chess program improved markedly after tuning all the rules in this way. It beat the #### out of older versions. Then why does it not reign supreme on the internet, you wonder? Because it is also a question of how good your rules are. And I am not that good a chess player........
