Theory behind adaptive complexity

September 22, 2009
by Joshua Fuller-Deets (jef08)

The idea behind this project is to create an artificial simulation in which simple interacting parts combine to perform complex actions of some sort. While this is a vague starting point, there is a wealth of information available about prior research in this area. The direction our group is currently headed is to use the Push programming language to set up some analogue of a natural organism, which will then be tested against selection criteria we have yet to define. In each successive trial the organisms will modify themselves and each other toward the eventual goal of solving whatever problem they are presented with more efficiently. The environment in which our program runs will have to have a selection gradient present; that is, there must be a ‘better’ and a ‘worse’, and possibly a ‘neutral’, way of adapting during each trial, such that advantageous modifications (those that help the host program perform better on the fitness test) are propagated more often than those that do not. The end result, we hope, is that the system will modify itself in such a way that the basic pieces of code it starts with will run, modify themselves and each other, and eventually outperform their original selves.
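As a rough illustration of the selection gradient we have in mind, here is a minimal sketch (in Python rather than Push, purely for readability) of a loop in which better-scoring programs survive and propagate mutated copies of themselves. The names `evolve`, `fitness`, and `mutate` are placeholders of our own invention, not part of any existing framework:

```python
import random

def evolve(population, fitness, steps=100, mutate=None):
    """One possible selection loop: organisms that score better on the
    fitness test survive and propagate (with modification) more often."""
    mutate = mutate or (lambda program: program)
    for _ in range(steps):
        # Rank the population on the fitness test (the selection gradient).
        scored = sorted(population, key=fitness, reverse=True)
        # Keep the better half; refill with mutated copies of survivors.
        survivors = scored[: len(scored) // 2]
        offspring = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + offspring
    return population
```

In a real run the population members would be Push programs and `mutate` would be the programs modifying themselves and each other; here they are stand-ins to show the gradient at work.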

One of the problems we face at the moment is how, exactly, we intend to set up a selection gradient, and how we are going to interpret the results of our experiment as successful or not. Our current thinking is that each individual ‘organism’ in the simulation will have to operate within a given set of rules determining how and when it can propagate itself or its code. Performing these actions more efficiently will allow an organism to propagate itself more often or more successfully, thus increasing the prevalence of that particular piece of code. If we interpret segments of code in this environment as akin to genetic information, or more broadly as simply memes, we can interpret success as an increase in the prevalence of a given piece of code. By planting tracking information in the code before we begin the simulation, we can, at the end, trace a given piece of code’s ‘offspring’ through the entire run. This would allow us to create a ‘family tree’, of sorts, that details the interactions and outcomes of the code. The exact details of how we are going to design this have yet to be determined.
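The tracking idea might look something like the following sketch: each organism carries a unique id and a reference to its parent's id, so at the end of a run we can walk the links back to the founder. Again, `tagged`, `offspring`, and `lineage` are hypothetical names for illustration, not a committed design:

```python
import itertools

_ids = itertools.count()  # global counter supplying unique ids

def tagged(code, parent=None):
    """Plant tracking information: wrap a code fragment with a unique id
    and (optionally) the id of the organism it descended from."""
    return {"id": next(_ids), "parent": parent, "code": code}

def offspring(organism, new_code):
    """Record a propagation event: the child remembers its parent's id."""
    return tagged(new_code, parent=organism["id"])

def lineage(organism, by_id):
    """Walk the parent links back to the founder, yielding one path
    through the 'family tree' from oldest ancestor to this organism."""
    chain = [organism["id"]]
    while by_id[chain[-1]]["parent"] is not None:
        chain.append(by_id[chain[-1]]["parent"])
    return list(reversed(chain))
```

Collecting every organism's parent link over the whole simulation would give the full tree, from which we could read off which initial fragments of code left the most descendants.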
