NeuroEvolutionary Meta-Optimization

Andreas Lang, Kenneth O. Stanley (2013). NeuroEvolutionary Meta-Optimization. In: Proceedings of the International Joint Conference on Neural Networks (Dallas, Texas). doi:10.17605/OSF.IO/CD7U5. (Note: In the field of artificial neural networks, publications in selective conferences are considered first-class contributions.)

During my research stay with Kenneth O. Stanley's Evolutionary Complexity group at the University of Central Florida in fall 2012, I worked on NeuroEvolutionary Meta-Optimization as part of my bachelor thesis in computer science.

Summary: An optimization algorithm that performs well on one class of optimization problems may perform badly on a different problem class. A meta-optimization algorithm can be trained on a problem class in order to produce an optimization algorithm that performs well on problems of that class. We proposed a novel meta-optimization algorithm driven by neuroevolution: we establish an equivalence between a neural network and an optimization algorithm, so that applying HyperNEAT, which evolves neural networks in both structure and weights, yields a meta-optimization algorithm. Our neuroevolutionary meta-optimization performs well on problem classes derived from classical fitness functions like the Rosenbrock and Ratz functions, and even on deceptive volcano-like problems.
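The idea can be sketched in a few lines of Python. This is a deliberately simplified illustration, not the method from the paper: it uses a fixed-topology network and a plain elitist hill-climber over its weights instead of HyperNEAT, and all function names, sizes, and constants are illustrative assumptions. It shows the core equivalence, though: the inner neural network *is* the optimizer (it decides where to query the fitness function next), and the outer loop is the meta-optimizer that trains it on a class of shifted Rosenbrock problems.

```python
import math
import random

def rosenbrock(p, shift):
    # Shifted 2-D Rosenbrock; varying the shift yields a problem *class*.
    x, y = p[0] - shift[0], p[1] - shift[1]
    return (1 - x) ** 2 + 100 * (y - x * x) ** 2

# Illustrative fixed topology (the paper evolves topology via HyperNEAT).
N_IN, N_HID, N_OUT = 3, 4, 2                # inputs: x, y, squashed fitness
N_W = N_IN * N_HID + N_HID * N_OUT          # total weight count

def net_step(w, inputs):
    # One forward pass of a tiny tanh MLP; the output is a step vector.
    k, hid = 0, []
    for _ in range(N_HID):
        hid.append(math.tanh(sum(w[k + i] * inputs[i] for i in range(N_IN))))
        k += N_IN
    out = []
    for _ in range(N_OUT):
        out.append(math.tanh(sum(w[k + i] * hid[i] for i in range(N_HID))))
        k += N_HID
    return out

def run_optimizer(w, shift, steps=30):
    # The network acts as the optimization algorithm: from the current
    # point and fitness it proposes the next point to query.
    p = [0.0, 0.0]
    f = best = rosenbrock(p, shift)
    for _ in range(steps):
        dx, dy = net_step(w, [p[0], p[1], math.tanh(f)])
        p = [p[0] + dx, p[1] + dy]
        f = rosenbrock(p, shift)
        best = min(best, f)
    return best

def meta_fitness(w, instances):
    # Meta-level objective: average result over instances of the class.
    return sum(run_optimizer(w, s) for s in instances) / len(instances)

def meta_optimize(generations=40, pop=20, seed=0):
    # Elitist hill-climbing on the weights (stand-in for neuroevolution).
    rng = random.Random(seed)
    instances = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(5)]
    best_w = [rng.gauss(0, 0.5) for _ in range(N_W)]
    best_f = meta_fitness(best_w, instances)
    history = [best_f]
    for _ in range(generations):
        for _ in range(pop):
            cand = [wi + rng.gauss(0, 0.2) for wi in best_w]
            f = meta_fitness(cand, instances)
            if f < best_f:                  # keep only improvements
                best_w, best_f = cand, f
        history.append(best_f)
    return best_w, history
```

Because selection is elitist and evaluated on a fixed set of training instances, the meta-level score in `history` never worsens across generations; the returned weights encode an optimizer specialized to this problem class.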

Example of a neural network corresponding to an optimization algorithm obtained through meta-optimization. The meta-optimization algorithm implicitly produced elements for both exploitation and exploration.
Example run on a Rosenbrock problem. Red crosses indicate queried points of the fitness function.

The paper is publicly available at