Hironobu Hamada
hh245@columbia.edu

Project: The evaluation of changing population size and crossover probability for GPOthello.

Abstract:

I implemented an evaluation of changing population size and crossover probability for GPOthello. I trained my genetic programming player against Edgar. Since Edgar was trained as a WHITE player, I trained my player as a BLACK player. The conditions below are the basic settings, and from among them I picked "population size" and "crossover probability" to evaluate in this project. At first I was also going to evaluate "number of generations", but since the original number of generations, 31, was enough for training in almost all cases, I did not select it. I chose 160 as the basic value for termination fitness. The reason I chose this value is that the genetic programming player plays five games during evaluation, so 160 means that my genetic player gets at least half of the pieces over the five games. When I tested 100 for termination fitness, the run seemed to go on forever: it had been running for more than eight hours and the result did not appear to be approaching the termination value, so I stopped evaluating termination fitness. Because the computation takes a lot of time, I chose five for "GoodRuns" as the basic value when I tested and evaluated population size, and, for the same reason, I chose one for "GoodRuns" when I tested and evaluated crossover probability.
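The arithmetic behind the termination-fitness choice can be checked directly. This is my own sketch, not part of the GPOthello code: an Othello board has 64 squares and a candidate plays five evaluation games, so 320 pieces are at stake in total, and a fitness of 160 corresponds to winning half of all pieces across the five games.

```python
# Sanity check of the termination-fitness choice (sketch, not GPOthello code).
BOARD_SQUARES = 8 * 8      # standard Othello board
EVAL_GAMES = 5             # games played per fitness evaluation

total_pieces = BOARD_SQUARES * EVAL_GAMES
termination_fitness = total_pieces / 2

print(total_pieces)          # 320
print(termination_fitness)   # 160.0
```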

After training the genetic programming player, I tested it by competing it against Edgar and 50 random players.

Basic values:

PopulationSize 200
NumberOfGenerations 31
CrossoverProbability 90.0
CreationProbability 0.0
CreationType RampedHalf
MaximumDepthForCreation 6
MaximumDepthForCrossover 17
MaximumComplexity 100
SelectionType Probabilistic
TournamentSize 7
DemeSize 100
DemeticMigProbability 100.0
SwapMutationProbability 0.0
ShrinkMutationProbability 0.0
TerminationFitness 160.0
GoodRuns 5
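The listing above is a flat key-value parameter file. As an illustration only (this helper is hypothetical, not part of GPOthello), such a listing can be parsed into a dictionary, converting numeric values along the way:

```python
# Hypothetical helper (not part of GPOthello) that parses a parameter
# listing like the one above into a dict, casting numeric values.
def parse_params(text):
    params = {}
    for line in text.strip().splitlines():
        key, value = line.split()
        for cast in (int, float):   # try int first, then float
            try:
                value = cast(value)
                break
            except ValueError:
                continue
        params[key] = value
    return params

listing = """\
PopulationSize 200
CrossoverProbability 90.0
TerminationFitness 160.0
GoodRuns 5
"""

params = parse_params(listing)
print(params["PopulationSize"])        # 200
print(params["CrossoverProbability"])  # 90.0
```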


Experiment:

For the evaluation of population size, I tested the values 50, 100, 200 and 300 and got the following results. For the evaluation of crossover probability, I tested 85, 90, 95 and 99. In both cases, I competed the genetic player against Edgar and 50 random players. At first, I competed it against Edgar fifty times, but the results were all identical, presumably because both players are deterministic, so I competed it against Edgar only once.

Result:

For each case I show the best value, the evolved genetic program, the test score against Edgar, and the test results against 50 random players after training. For the population size evaluation, "GoodRuns" is five; for crossover probability, it is one. Here, the best value means Edgar's total score over the five training games, so a smaller value is better.
(1)size 50

*GoodRun: 1
Best value 148
Genetic programming ( - ( - ( + ( / ( + white_edges 10 ) black_edges ) ( / ( / white_near_corners black_near_corners ) black_near_corners )) 10 ) ( + black_near_corners ( / 10 ( - ( / ( * ( / ( / black_near_corners black_corners ) ( - white_corners white_corners )) ( / black_near_corners black_corners )) 10 ) black ))))
Test score against Edgar 36(GP):28(Edgar)
Test results against fifty random players(RP) GP won 16, RP won 34


*GoodRun: 2
Best value 89
Genetic programming ( - ( + black_near_corners ( / black_corners white_edges )) ( / black_edges black_edges ))
Test score against Edgar 10(GP):54(Edgar)
Test results against fifty random players(RP) GP won 29, RP won 21


*GoodRun: 3
Best value 132
Genetic programming ( * black_near_corners ( + white_corners ( * black_edges white_near_corners )))
Test score against Edgar 13(GP):51(Edgar)
Test results against fifty random players(RP) GP won 0, RP won 50


*GoodRun: 4
Best value 119
Genetic programming ( - black_edges ( - ( * black_edges black_edges ) ( - white black_edges )))
Test score against Edgar 42(GP):22(Edgar)
Test results against fifty random players(RP) GP won 50, RP won 0


*GoodRun: 5
Best value 130
Genetic programming ( * ( * black_edges white_near_corners ) white_edges )
Test score against Edgar 39(GP):25(Edgar)
Test results against fifty random players(RP) GP won 3, RP won 47

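The evolved individuals above are prefix S-expressions over board features (white_edges, black_corners, and so on) and constants. As a sketch of how such an expression could be interpreted, here is a minimal recursive evaluator. The use of protected division (anything divided by zero yields 1) is an assumption, a common genetic programming convention, and may differ from GPOthello's actual semantics.

```python
# Sketch evaluator for the prefix S-expressions in this report.
# Protected division (x / 0 -> 1.0) is assumed; GPOthello's exact
# semantics may differ.
def evaluate(expr, features):
    """Evaluate an expression like '( * black_edges white_edges )'
    against a dict mapping feature names to numbers."""
    tokens = expr.replace("(", " ( ").replace(")", " ) ").split()

    def parse(pos):
        if tokens[pos] == "(":
            op = tokens[pos + 1]
            left, pos = parse(pos + 2)
            right, pos = parse(pos)
            assert tokens[pos] == ")"
            if op == "+":
                return left + right, pos + 1
            if op == "-":
                return left - right, pos + 1
            if op == "*":
                return left * right, pos + 1
            if op == "/":  # protected division
                return (left / right if right != 0 else 1.0), pos + 1
        tok = tokens[pos]
        value = float(tok) if tok.lstrip("-").isdigit() else features[tok]
        return value, pos + 1

    result, _ = parse(0)
    return result

# Hypothetical feature values for one board position:
features = {"black_edges": 4, "white_near_corners": 2, "white_edges": 3}
# Best individual from GoodRun 5 at population size 50:
print(evaluate("( * ( * black_edges white_near_corners ) white_edges )",
               features))   # 24
```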

(2)size 100

*GoodRun: 1
Best value 146
Genetic programming ( + ( / ( / ( + black black_corners ) ( / white black_corners )) ( - ( / black black_edges ) ( * black black_edges ))) ( - ( - ( - black_edges black_corners ) ( / white_corners black_corners )) ( - ( + black_near_corners white_corners ) ( - black black ))))
Test score against Edgar 39(GP):25(Edgar)
Test results against fifty random players(RP) GP won 9, RP won 41


*GoodRun: 2
Best value 155
Genetic programming ( - ( + black_near_corners ( / black_corners white_edges )) ( / black_edges black_edges ))
Test score against Edgar 10(GP):54(Edgar)
Test results against fifty random players(RP) GP won 29, RP won 21


*GoodRun: 3
Best value 129
Genetic programming ( - ( + white_near_corners white_near_corners ) ( / ( / ( * white_near_corners white ) ( / black black )) ( * ( - black_corners white_corners ) ( + black_edges white_edges ))))
Test score against Edgar 42(GP):22(Edgar)
Test results against fifty random players(RP) GP won 47, RP won 3


*GoodRun: 4
Best value 150
Genetic programming ( * ( / white_corners black ) white_near_corners )
Test score against Edgar 35(GP):29(Edgar)
Test results against fifty random players(RP) GP won 1, RP won 49


*GoodRun: 5
Best value 142
Genetic programming ( + ( * white_corners black_edges ) ( / white black_edges ))
Test score against Edgar 37(GP):27(Edgar)
Test results against fifty random players(RP) GP won 47, RP won 3


(3)size 200

*GoodRun: 1
Best value 145
Genetic programming ( - ( * ( / 10 black_edges ) ( - black_near_corners white_near_corners )) ( * ( * black_corners white ) ( - white_corners black )))
Test score against Edgar 13(GP):51(Edgar)
Test results against fifty random players(RP) GP won 36, RP won 14


*GoodRun: 2
Best value 119
Genetic programming ( / ( + white_corners black_corners ) ( - white_corners ( * black_edges black_edges )))
Test score against Edgar 42(GP):22(Edgar)
Test results against fifty random players(RP) GP won 0, RP won 50


*GoodRun: 3
Best value 134
Genetic programming ( + white_edges ( * ( / white_edges white_edges ) ( * white_near_corners black_edges )))
Test score against Edgar 39(GP):25(Edgar)
Test results against fifty random players(RP) GP won 0, RP won 50


*GoodRun: 4
Best value 116
Genetic programming ( / ( + ( * black_corners white_near_corners ) ( / black_near_corners white )) ( - black_edges black_near_corners ))
Test score against Edgar 20(GP):44(Edgar)
Test results against fifty random players(RP) GP won 9, RP won 41


*GoodRun: 5
Best value 130
Genetic programming ( + ( * white_near_corners black_edges ) white_corners )
Test score against Edgar 39(GP):25(Edgar)
Test results against fifty random players(RP) GP won 0, RP won 50


(4)size 300

*GoodRun: 1
Best value 120
Genetic programming ( - white ( * black_near_corners black_edges ))
Test score against Edgar 27(GP):37(Edgar)
Test results against fifty random players(RP) GP won 3, RP won 47


*GoodRun: 2
Best value 125
Genetic programming ( * ( - ( + white_edges white_near_corners ) ( + black black_near_corners )) ( * ( + black_edges white_edges ) ( / white_edges white_corners )))
Test score against Edgar 25(GP):39(Edgar)
Test results against fifty random players(RP) GP won 46, RP won 4


*GoodRun: 3
Best value 128
Genetic programming ( * white_near_corners black_edges )
Test score against Edgar 39(GP):25(Edgar)
Test results against fifty random players(RP) GP won 0, RP won 50


*GoodRun: 4
Best value 120
Genetic programming ( + ( / ( + white_edges white_edges ) ( / black_edges black_corners )) ( / ( * white_near_corners 10 ) ( + black_corners black_corners )))
Test score against Edgar 43(GP):21(Edgar)
Test results against fifty random players(RP) GP won 7, RP won 43


*GoodRun: 5
Best value 122
Genetic programming ( / ( - white white_near_corners ) ( * 10 black_edges ))
Test score against Edgar 41(GP):22(Edgar)
Test results against fifty random players(RP) GP won 2, RP won 48


(1)probability 85%

Best value 124
Genetic programming ( - white ( * 10 ( * ( - black_edges 10 ) black_near_corners )))
Test score against Edgar 21(GP):43(Edgar)
Test results against fifty random players(RP) GP won 50, RP won 0


(2)probability 90%

Since 90% is the basic value, these results are the same as the population size 200 runs shown above.

(3)probability 95%

Best value 122
Genetic programming ( / ( - white white_near_corners ) ( * 10 black_edges ))
Test score against Edgar 25(GP):39(Edgar)
Test results against fifty random players(RP) GP won 3, RP won 47


(4)probability 99%

Best value 122
Genetic programming ( / ( - white white_near_corners ) ( * 10 black_edges ))
Test score against Edgar 43(GP):21(Edgar)
Test results against fifty random players(RP) GP won 48, RP won 2


Analysis:

For the population size evaluation: with a population of 50, genetic programming (GP) won 3 games and lost 2 against Edgar after training. With a population of 100, GP won 4 and lost 1 (note that GoodRun 2 at size 100 lost 10:54). With 200, GP won 3 and lost 2, and with 300, GP also won 3 and lost 2. From these results, even a population size of 50 let GP win 3 times, the same result as populations of 200 and 300. The problem here is that the number of samples is small, which means I would need more test results. However, the population of 100 gave the best record, winning 4 of its 5 games after training. Thus, it can be said that a population size of 100 is enough to train genetic programming in the Othello case.
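The head-to-head scores against Edgar listed in the Result section can be tallied mechanically. The scores below are copied from the runs above; the script just counts wins per population size:

```python
# Scores (GP, Edgar) per GoodRun, copied from the Result section.
results_vs_edgar = {
    50:  [(36, 28), (10, 54), (13, 51), (42, 22), (39, 25)],
    100: [(39, 25), (10, 54), (42, 22), (35, 29), (37, 27)],
    200: [(13, 51), (42, 22), (39, 25), (20, 44), (39, 25)],
    300: [(27, 37), (25, 39), (39, 25), (43, 21), (41, 22)],
}

for size, scores in results_vs_edgar.items():
    wins = sum(gp > edgar for gp, edgar in scores)
    print(f"population {size}: GP won {wins} of {len(scores)}")
# population 50: GP won 3 of 5
# population 100: GP won 4 of 5
# population 200: GP won 3 of 5
# population 300: GP won 3 of 5
```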

For the crossover probability evaluation, GP lost its game against Edgar after training with 85% and 95% probability, and won with 99%. The same problem applies here: I need more test results. However, since GP won 3 of 5 games with 90% probability (the basic setting), it can be said that 90% is a reasonable crossover probability.

In the tests against the 50 random players, the results varied essentially at random from run to run, since GP was trained only against Edgar and not against random opponents. This result is also reasonable.

Comment:

This assignment was interesting for me. The only problem was that it required a lot of computation time. However, I really enjoyed this assignment.