some grammatical fixes
mbluemer committed May 5, 2017
1 parent 5b0db17 commit f831b67
Showing 1 changed file (README.md) with 3 additions and 3 deletions.
@@ -34,15 +34,15 @@ We used a simple variable depth solution. We have a minimum and maximum search d

## (2) Features Used for State Evaluation

When evaluating the value of a state, we implemented various heuristics to assess the quality of that state. We decided on feature evaluations being positive and negative for black and white, respectively. All the heuristics can be viewed in the `src/Heuristic.cpp` file.

### Heuristic Function

The first heuristics implemented were on the more basic side of evaluating a state. Pawn count simply calculated the difference between the numbers of pawns (black - white, as this produces a negative value when white has more and a positive value when black has more). We used this same formula for kings but added an extra weight, multiplying the difference between black and white kings by 1.5. We also implemented another set of heuristics for both kings and pawns by determining the number of 'safe' pieces: pieces that were on the edge of the board and could not be jumped.
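
As a minimal sketch of these first count-based features, assuming a hypothetical flat board encoding (1/2 for black pawns/kings, -1/-2 for white) and illustrative function names that may not match the actual `src/Heuristic.cpp`:

```cpp
#include <cassert>
#include <vector>

// Hypothetical encoding: 1 = black pawn, 2 = black king,
// -1 = white pawn, -2 = white king, 0 = empty square.
using Board = std::vector<int>;

// Pawn count: (black pawns - white pawns); positive favors black.
double pawnCount(const Board& b) {
    int black = 0, white = 0;
    for (int sq : b) {
        if (sq == 1) ++black;
        else if (sq == -1) ++white;
    }
    return static_cast<double>(black - white);
}

// King count: same difference, with the extra 1.5 weight applied.
double kingCount(const Board& b) {
    int black = 0, white = 0;
    for (int sq : b) {
        if (sq == 2) ++black;
        else if (sq == -2) ++white;
    }
    return 1.5 * (black - white);
}
```

The safe-piece variants would follow the same pattern, counting only pieces on edge squares.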

After having values representing the comparative number of pieces on the board, we moved on to looking at the positions of the pieces and giving value to those in certain positions. We created a heuristic for the number of defending pieces of either black or white, that is, the number of pieces in the first two rows of that color's starting side. We also created an attacking heuristic representing the number of pieces in the opponent's first three rows.
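
A sketch of the defender and attacker counts, assuming a hypothetical 8x8 grid where row 0 is black's back row and row 7 is white's (the real orientation and signatures in `src/Heuristic.cpp` may differ):

```cpp
#include <cassert>
#include <array>

// 1 = black piece, -1 = white piece, 0 = empty. Kings ignored for brevity.
using Grid = std::array<std::array<int, 8>, 8>;

// Defenders: black pieces in black's first two rows (0-1),
// minus white pieces in white's first two rows (6-7).
int defenderCount(const Grid& g) {
    int score = 0;
    for (int c = 0; c < 8; ++c) {
        for (int r : {0, 1}) if (g[r][c] == 1) ++score;
        for (int r : {6, 7}) if (g[r][c] == -1) --score;
    }
    return score;
}

// Attackers: black pieces in the opponent's first three rows (5-7),
// minus white pieces in black's first three rows (0-2).
int attackerCount(const Grid& g) {
    int score = 0;
    for (int c = 0; c < 8; ++c) {
        for (int r : {5, 6, 7}) if (g[r][c] == 1) ++score;
        for (int r : {0, 1, 2}) if (g[r][c] == -1) --score;
    }
    return score;
}
```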

We also wanted to measure how close or available a pawn promotion was. We achieved this by calculating the distance each pawn was from the opponent's first row (the promotion row) and summing them up. We believed it wasn't enough just to have this raw number, as other pieces may have been impeding pawns from being promoted, so we created another heuristic known as openPromotion. This counted the number of open spaces on the promotion row, i.e., the opponent's first row.
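These two promotion features might look as follows, reusing the same hypothetical grid encoding; the sign convention (value grows as black's promotion prospects improve, matching the positive-for-black convention above) is an assumption, not taken from the original code:

```cpp
#include <cassert>
#include <array>

using Grid = std::array<std::array<int, 8>, 8>;  // 1 = black pawn, -1 = white pawn

// Summed promotion distance. Black pawns promote on row 7, white on row 0.
// We subtract black's remaining distances and add white's, so shorter
// black distances raise the score.
int promotionDistance(const Grid& g) {
    int score = 0;
    for (int r = 0; r < 8; ++r)
        for (int c = 0; c < 8; ++c) {
            if (g[r][c] == 1)  score -= (7 - r);  // black's distance to row 7
            if (g[r][c] == -1) score += r;        // white's distance to row 0
        }
    return score;
}

// openPromotion: open squares on black's promotion row minus open squares
// on white's promotion row.
int openPromotion(const Grid& g) {
    int open = 0;
    for (int c = 0; c < 8; ++c) {
        if (g[7][c] == 0) ++open;  // square black could promote on
        if (g[0][c] == 0) --open;  // square white could promote on
    }
    return open;
}
```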

Finally, we wanted values for which pieces were able to make moves and which pieces were able to make jumps. For moves, we counted how many moves each pawn and king could make and simply summed them up. This, however, only considered direct moves, not jumps, as it searched the squares surrounding a piece for an open space. To search for a jumpable piece, we had to look at whether there was an opponent in the direct path of a piece and then whether a jump over that opponent was possible.
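
The move and jump counts can be sketched for black pawns only (kings and white pieces would mirror this); the grid encoding and movement direction are assumptions for illustration, not the project's actual representation:

```cpp
#include <cassert>
#include <array>

using Grid = std::array<std::array<int, 8>, 8>;  // 1 = black, -1 = white, 0 = empty

bool onBoard(int r, int c) { return r >= 0 && r < 8 && c >= 0 && c < 8; }

// Direct moves: a forward-diagonal square that is on the board and empty.
int blackPawnMoves(const Grid& g) {
    int moves = 0;
    for (int r = 0; r < 8; ++r)
        for (int c = 0; c < 8; ++c)
            if (g[r][c] == 1)
                for (int dc : {-1, 1})
                    if (onBoard(r + 1, c + dc) && g[r + 1][c + dc] == 0)
                        ++moves;
    return moves;
}

// Jumps: an opponent on the forward diagonal with an empty landing
// square directly behind it.
int blackPawnJumps(const Grid& g) {
    int jumps = 0;
    for (int r = 0; r < 8; ++r)
        for (int c = 0; c < 8; ++c)
            if (g[r][c] == 1)
                for (int dc : {-1, 1})
                    if (onBoard(r + 2, c + 2 * dc) &&
                        g[r + 1][c + dc] == -1 &&
                        g[r + 2][c + 2 * dc] == 0)
                        ++jumps;
    return jumps;
}
```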

@@ -54,7 +54,7 @@ The learning method was originally intended to be a genetic algorithm, however due

Initially, all of the heuristic feature functions returned positive values. For our very early testing, before any training, we had our AI use static coefficients of '1' for all of the weights. Once we modified our heuristic functions to return positive and negative values for black and white pieces respectively, we also defined a distribution range of -2^28 to 2^28 from which coefficients were randomly assigned to the features.

After initial tests comparing against our static weights, we realized that the range of randomness of the values was far too large and negatively impacted our training. Several iterations led us to converge on a range between -100 and 100. Weight distributions spread over any larger magnitude did not perform as well against the clients as those within this range.
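
Drawing the coefficients might look like the sketch below; the function name and parameters are hypothetical, showing only the idea of sampling each feature weight uniformly from a configurable range (initially [-2^28, 2^28], later narrowed to [-100, 100]):

```cpp
#include <cassert>
#include <random>
#include <vector>

// Randomly assign one coefficient per heuristic feature from a
// uniform range; the seed makes runs reproducible for testing.
std::vector<double> randomWeights(int numFeatures, double lo, double hi,
                                  unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> dist(lo, hi);
    std::vector<double> w(numFeatures);
    for (double& wi : w) wi = dist(rng);
    return w;
}
```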

When testing variable depth and implementing Alpha Beta search, we initially limited our included features to the less computationally intensive functions. Early goals were to test our evaluation function with only the first four simple features (counts of pawns, kings, safe pawns, and safe kings) and to beat the server AI. Once variable depth was implemented, the rest of the features (all 10) were included when training to find good sets of heuristic values.
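
Combining the features with their trained coefficients reduces to a weighted sum; this is a sketch under the assumptions above (positive totals favor black, negative favor white), not the literal evaluation code:

```cpp
#include <cassert>
#include <vector>

// Evaluate a state as the dot product of its feature values with the
// trained coefficient vector.
double evaluate(const std::vector<double>& features,
                const std::vector<double>& weights) {
    double total = 0.0;
    for (std::size_t i = 0; i < features.size() && i < weights.size(); ++i)
        total += weights[i] * features[i];
    return total;
}
```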

