GP Techniques and Theory



Relevant papers from the Symposium Working Notes:

Parallel Genetic Programming and Fine-Grained SIMD Architectures. Hugues Juille and Jordan B. Pollack

Dynamic Classifiers: Genetic Programming and Classifier Systems. Patrick Tufts


The reminder Brij [Masand] reinforced in the last session: We need to keep our minds open to new approaches; GP is too young to get set in its ways. {Astro Teller}

I have the impression that as a community we are becoming too homogeneous, accepting standard ideas as if we "know what GP is" and are "merely" working on clarifying/analyzing various technical approaches. Even as we need to be a community that collaborates using shared concepts and formalisms, I think we could use more diversity and new input in the form of dialogues with biologists, philosophers, software engineering professionals, science fiction writers, etc., to open our minds to new fundamental ideas and to reexamine our goals and methods and even our notions of "evolution". {Brij Masand}

When first applying GP to a new problem, the biggest contribution may simply be the representation of the solution, which contrasts with standard AI approaches. The genetic search process has its own, separate, pros and cons. We need to distinguish between these two contributions for each new application and each new test problem! This is rarely done to date. {Eric Siegel}

YOU MUST VALIDATE YOUR RESULTS OVER TEST CASES, i.e. "unseen data." To check for overfitting, your learning method (GP or otherwise) must output only one individual, and that individual must be tested over unseen data. If that process is repeated more than a few times, the best individual must then be tested over "really unseen" data, or "meta-validation cases" -- this can be a multilayered process. Further, you must specify your validation process clearly when you write a paper. (This paragraph does not apply if there are a finite number of test cases and they are all included in the fitness measure, e.g. sorting networks.) {Eric Siegel}
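
A minimal Python sketch of that layered validation; the toy individuals, fitness function, and case lists are purely illustrative, not from the discussion:

    import random

    # Toy setup: an "individual" is just a candidate slope a for y = a*x,
    # and a fitness case is an (x, y) pair. All names are illustrative.
    def evaluate(individual, cases):
        # Negative total error, so higher is better.
        return -sum(abs(individual * x - y) for x, y in cases)

    def best_of(population, cases):
        # The learning method reports exactly one individual, chosen on these cases only.
        return max(population, key=lambda ind: evaluate(ind, cases))

    random.seed(0)
    training_cases   = [(x, 3.0 * x) for x in range(0, 10)]
    validation_cases = [(x, 3.0 * x) for x in range(10, 20)]   # unseen during learning
    population = [random.uniform(0.0, 5.0) for _ in range(50)]

    champion = best_of(population, training_cases)
    print("validation score:", evaluate(champion, validation_cases))
    # If champions from many runs or settings are compared on validation_cases,
    # the overall winner must then be scored once on a further held-out
    # "meta-validation" set that none of those comparisons ever touched.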


MISC TECHNIQUES

Multi-output problems can be handled much more easily using the concept of indexed memory, where designated memory locations are taken as the values of each output. {Rich Hampo}
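
A rough sketch of that indexed-memory convention, assuming read/write primitives over a small memory array and taking the first few cells as the outputs (all names here are illustrative):

    # Sketch of the indexed-memory convention for multi-output problems.
    # The evolved program manipulates a memory array via read/write primitives;
    # after execution, designated cells are read off as the outputs.
    NUM_OUTPUTS = 3
    memory = [0.0] * 8

    def write(index, value):
        # Indexed-memory write: store the value and return it, so it composes in a tree.
        memory[int(index) % len(memory)] = value
        return value

    def read(index):
        return memory[int(index) % len(memory)]

    def run_program(program, inputs):
        # 'program' is any callable built from read/write and the problem primitives.
        for i in range(len(memory)):
            memory[i] = 0.0                  # clear memory before each run
        program(inputs)
        return memory[:NUM_OUTPUTS]          # the first cells hold the outputs

    # Hand-written stand-in for an evolved program: two inputs, three outputs.
    example = lambda inp: (write(0, inp[0] + inp[1]),
                           write(1, inp[0] * inp[1]),
                           write(2, read(0) - read(1)))
    print(run_program(example, [2.0, 5.0]))  # -> [7.0, 10.0, -3.0]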

Though we didn't talk about population convergence, I have found that when my population converges, mutating ALL of the population gets me past that convergence performance level within 1-2 generations of this "asteroid strike" mutation, and on to an even higher level afterwards. This works extremely well for me, though I have not done strictly scientific DOE verification. {Rich Hampo}
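
A minimal sketch of such an "asteroid strike"; the convergence test and the mutate operator here are assumptions for illustration:

    import random

    # When the population has converged, mutate every individual
    # rather than waiting for ordinary variation to escape.
    def converged(fitnesses, tolerance=1e-6):
        # Crude convergence test: almost no spread in fitness.
        return max(fitnesses) - min(fitnesses) < tolerance

    def asteroid_strike(population, mutate):
        # Mutate ALL individuals, not just a fraction.
        return [mutate(ind) for ind in population]

    # Toy usage with real-valued "individuals" and Gaussian mutation.
    random.seed(1)
    population = [1.0] * 20                         # a fully converged toy population
    fitnesses  = [-(x - 3.0) ** 2 for x in population]
    if converged(fitnesses):
        population = asteroid_strike(population, lambda x: x + random.gauss(0.0, 0.5))
    print(population[:5])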

Could higher-order functions solve the scaling problem? {Hakan Johnsson}

Could self-reference be used for evolving evolvability? {Hakan Johnsson}

Human programmers change programs a little at a time, but the actual text changes may be spread about. Will "smart" crossover allow GP to do something similar? Or can we devise representations so we don't need multiple changes to perform one code update? {William Langdon}

Justinian [Rosca] pointed out that almost all crossovers involve exchanging only a small fragment of code, the average height of exchanged subtrees being 2. I suggested there was scope for a study of the correlation of subtree size (height) with positive differential fitness. {William Langdon}
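
One rough way to run such a study: for each crossover event, log the height of the exchanged subtree and the offspring-minus-parent fitness change, then correlate them. The event log below is placeholder data purely to show the calculation (statistics.correlation requires Python 3.10+):

    from statistics import correlation   # Python 3.10+

    # (subtree height, fitness change) pairs logged from crossover events -- placeholders.
    events  = [(2, +0.01), (2, -0.03), (3, +0.05), (5, -0.02), (1, 0.00), (4, +0.02)]
    heights = [h for h, _ in events]
    deltas  = [d for _, d in events]
    print("subtree height vs. fitness change:", correlation(heights, deltas))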

There was a discussion about how to build truly complex systems. Some felt that Afra's [Zomorodian] GP to build a (software) machine which solved the problem was a nice approach with parallels in embryology and the growth of multi-cellular organisms. Others suggested GP would be best served by building complex sandwiches of GP with layers of other techniques (such as ANN). It seemed to be agreed that plain GP would not suffice. (John Koza suggested that such a step was not needed at this stage, there being plenty of problems current GP technology could be used to solve --- low-lying fruit.) Justinian Rosca suggested that a (hierarchical) GP-only solution might be viable. [Justinian and I had a discussion on evolving multiagent programs; there seem to be interesting parallels with specialised cell types in multi-cellular organisms.] {William Langdon}

JK [John Koza]: No mutation in GP1 and GP2 for political reasons, but he would now recommend people try a little mutation. He suggests people might avoid powerful primitives if they are expensive in CPU time (e.g. SIN, EXP) and just use five functions: +, *, -, / and if. GP is able to evolve a Taylor-expansion approximation to any function using these five. {William Langdon}

Juries of best-of-run programs can be useful for problems with sparse training data and for which conservative behavior is required [Koza/Andre]. {Lee Spector}
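
A minimal sketch of such a jury, assuming the best-of-run programs are callables that vote and the majority decision is used (the toy classifiers are illustrative):

    # A jury of best-of-run programs: the programs from several
    # independent runs vote, and the majority decision is used.
    from collections import Counter

    def jury_decision(programs, inputs):
        votes = Counter(program(inputs) for program in programs)
        return votes.most_common(1)[0][0]

    # Toy jury of three "best-of-run" classifiers over a single numeric input.
    jury = [lambda x: x > 0.5, lambda x: x > 0.4, lambda x: x > 0.7]
    print(jury_decision(jury, 0.6))   # two of the three vote True -> True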


THEORY/ANALYSIS:

The executability of GP is a red herring: the issue is representation. GP is more representationally complete than GA, and is more human-readable than GA, but like GA it can be conveniently manipulated automatically (evolved). There is value to the executability, however (this is not my idea, but came out during lunch Sunday with several other attendees) -- it makes GP representations, unlike most AI representations, objectively verifiable: either they run or they crash; if they run, either they do what you want, or they don't. {Stuart Card}

Various simple visualization techniques may help quite a bit; e.g., logarithmic scales on plots of P(M,i). {Lee Spector}
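
For example, a sketch of plotting P(M,i) on a logarithmic scale with matplotlib; the success counts below are placeholders to be replaced with real run statistics:

    # Plot the cumulative probability of success P(M, i) against generation i on a
    # logarithmic y-axis, which keeps early, low-probability generations visible.
    import matplotlib.pyplot as plt

    runs = 100
    cumulative_successes = [0, 0, 1, 3, 7, 15, 28, 44, 60, 72]   # placeholder counts
    p_m_i = [max(s / runs, 1e-3) for s in cumulative_successes]  # avoid log(0)

    plt.semilogy(range(len(p_m_i)), p_m_i, marker="o")
    plt.xlabel("generation i")
    plt.ylabel("P(M, i)")
    plt.show()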

A theory. We borrow a lot, probably implicitly, from GA. I know we are not supposed to use the schema theorem, but I still believe that we are using building blocks. For example, I can devise a fitness function that asks "was agent A at the target state after 100 moves?" Intuitively, I expect the learning curve to be slow. I could also break the fitness down into subgoals that I expect an agent to perform, with the ultimate goal carrying the majority of the fitness points. In the initial generations, an agent will "luck out" and achieve one of the subgoals. I expect the chromosomes to gradually achieve all of the subgoals until the overall goal is achieved. I have seen this in my experiments. This tells me that building blocks are being identified. They may not be in fixed positions, but they are there. {Thomas Haynes}
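
A minimal sketch of such a subgoal-weighted fitness, with hypothetical subgoals and point values (none of these come from the original experiments):

    # Partial credit for each subgoal, with the majority of the points reserved
    # for the ultimate goal. The trace representation and subgoals are hypothetical.
    SUBGOALS = {"found_key": 10.0, "opened_door": 10.0, "crossed_room": 10.0}

    def fitness(trace):
        # trace: the set of events the agent achieved during its 100 moves
        score = sum(points for goal, points in SUBGOALS.items() if goal in trace)
        if "at_target_after_100_moves" in trace:
            score += 70.0   # the ultimate goal carries most of the points
        return score

    print(fitness({"found_key", "crossed_room"}))        # 20.0 -- partial credit
    print(fitness({"found_key", "opened_door", "crossed_room",
                   "at_target_after_100_moves"}))        # 100.0 -- full credit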

We need a collection of standard data-sets (or references to their sources). I will happily contribute my Mackey-Glass and blood-flow material, including surrogates, and there should be a reference to the Santa Fe time series collection, which I will submit. I'm not sure where this should live - I confess that getting access to the GP FTP site is normally very hard for me (my ISP claims that Sprint are creating a major bottleneck at their US end), and I may need to put it somewhere local here and ask someone to FTP it across. {Howard Oakley}