Notes: Evolutionary Computation
The topic of using evolutionary computation in the development of AI came up #elsewhere.
(This post is mostly a list of links.)
Some interesting papers mentioned in discussion:
(Some of these have been on my reading list, some of them are new.)
Salimans et al. 2017. “Evolution Strategies as a Scalable Alternative to Reinforcement Learning.” arxiv:1703.03864
- In short: Black box optimization (by ES) for NNs.
- Benefits include: good performance, parallelizes well, robust to hyperparameters, computationally cheaper (no gradient computation needed).
- “natural evolution strategies”, see also Wierstra et al. “Natural evolution strategies.” 2014. Journal of Machine Learning Research, 15(1):949–980. pdf
- an analogue: “randomized finite differences in high-dimensional space”
- ES allows a dramatically reduced variance of the so-called “policy gradient” if the actions “have long-lasting effects (i.e. it takes many timesteps for actions to influence the reward received by agent) and the value function can’t be easily approximated.”
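The core estimator of the Salimans et al. paper can be sketched in a few lines. This is a toy version with a quadratic “reward” in place of an RL return (an assumption for illustration), using the mirrored (antithetic) sampling the paper employs; hyperparameters are arbitrary:

```python
import numpy as np

# Toy "reward": a quadratic with its maximum at theta = 0,
# standing in for an RL episode return.
def f(theta):
    return -np.sum(theta ** 2)

rng = np.random.default_rng(0)
theta = rng.normal(size=5)            # parameters being evolved
sigma, alpha, n_pairs = 0.1, 0.05, 50

for _ in range(200):
    eps = rng.normal(size=(n_pairs, theta.size))
    # mirrored sampling: evaluate the black box at theta +/- sigma*eps
    r_pos = np.array([f(theta + sigma * e) for e in eps])
    r_neg = np.array([f(theta - sigma * e) for e in eps])
    # randomized-finite-difference gradient estimate; no backprop needed
    grad = ((r_pos - r_neg)[:, None] * eps).sum(axis=0) / (2 * n_pairs * sigma)
    theta = theta + alpha * grad      # plain gradient ascent step

print(np.linalg.norm(theta))          # shrinks toward 0
```

Note that only scalar rewards cross between “workers” here, which is why the method parallelizes so well: each evaluation of `f` is independent.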
Weiß, Gerhard. “Neural networks and evolutionary computation. I. Hybrid approaches in artificial intelligence.” Evolutionary Computation, 1994. pdf
- viewpoint as of 1994
Risi and Togelius. 2015 “Neuroevolution in Games: State of the Art and Open Challenges”. arxiv:1410.7326
- review of the state of the art in neuroevolution (NE) for games, prior to the OpenAI paper.
- some types of NE
- old, conventional NE: fixed topology, evolve the weights
- NEAT: NeuroEvolution of Augmenting Topologies. Evolve the topologies, too. (Direct encoding: genetic representation one-to-one mapped to parameters.)
- CPPN: Compositional Pattern Producing Network. HyperNEAT. Encoded ANNs. Wikipedia stump.
- many choices for how to do fitness evaluation, input representation, etc.
- loses to MCTS?
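The first item above, “old, conventional NE,” is simple enough to sketch: fix a topology and evolve only the weight vector. Below is a toy (1+λ) hill-climber on XOR with a 2-2-1 tanh network; the topology, mutation scale, and budget are illustrative assumptions, not taken from any of the papers:

```python
import numpy as np

# XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    # fixed 2-2-1 topology; w is a flat genome of 9 weights/biases
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1.T + b1)
    return np.tanh(h @ W2 + b2)

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

rng = np.random.default_rng(1)
best = rng.normal(size=9)
best_loss = loss(best)
history = [best_loss]
for _ in range(300):
    for _ in range(20):                      # lambda = 20 offspring
        child = best + 0.3 * rng.normal(size=9)  # Gaussian mutation
        if loss(child) < best_loss:          # elitist (1+lambda) selection
            best, best_loss = child, loss(child)
    history.append(best_loss)

print(history[0], history[-1])               # loss never increases
```

NEAT’s contribution, in these terms, is to mutate the genome’s *structure* (adding nodes and connections) as well as its weights, with crossover aligned via historical markings.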
All of these are mainly interested in applying evolutionary methods to ANNs for a restricted class of RL tasks (Atari, robots in simulation, locomotion, music generation, etc.).
Hausknecht et al. 2014. “A Neuroevolution Approach to General Atari Game Playing.” IEEE Transactions on Computational Intelligence and AI in Games, 6(4), pp.355-366. pdf
- HyperNEAT on Atari.
- See also Stanley et al. 2009. “A hypercube-based encoding for evolving large-scale neural networks.”, Artificial life, 15(2), pp.185-212. pdf
- Unfortunately, the paper appears quite impenetrable to me. It does not describe CPPNs very clearly.
- Also note that we are talking about very shallow and low-res networks.
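Since the paper’s description is hard to follow, here is my own minimal sketch of the CPPN idea behind HyperNEAT: a small composition of pattern functions maps substrate coordinates (x1, y1, x2, y2) to a connection weight, so the genotype stays tiny while the phenotype (the weight matrix) can be arbitrarily large and geometrically regular. The particular function composition below is a made-up stand-in; in HyperNEAT the CPPN itself is what gets evolved (by NEAT):

```python
import numpy as np

def cppn(x1, y1, x2, y2):
    # toy fixed composition of pattern-producing functions
    # (sin for repetition, a Gaussian bump for symmetry)
    h = np.sin(2.0 * (x1 - x2)) + np.exp(-((y1 - y2) ** 2))
    return np.tanh(h)

# substrate: 8 input neurons on one line, 4 output neurons on another
src = [(x, 0.0) for x in np.linspace(-1, 1, 8)]
dst = [(x, 1.0) for x in np.linspace(-1, 1, 4)]

# query the CPPN once per substrate connection to build the weight matrix
W = np.array([[cppn(x1, y1, x2, y2) for (x1, y1) in src]
              for (x2, y2) in dst])
print(W.shape)  # (4, 8): a large, regular phenotype from a tiny genotype
```

The indirect-encoding point is visible in the last step: the genome size is independent of the substrate size, and weight patterns inherit the geometry of the coordinates.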
van Steenkiste et al. 2016. “A wavelet-based encoding for neuroevolution”. In Proceedings of the 2016 on Genetic and Evolutionary Computation Conference (pp. 517-524). ACM. pdf
- a fun application of wavelets.
- how to do genotype-phenotype mapping?
- usual indirect mappings: no continuity
- Discrete Cosine Transform: continuity, but no spatio-temporal locality
- solution: wavelets
- see Amara Graps, Introduction to Wavelets pdf; Daubechies, Ten Lectures on Wavelets (said to be good, but requires a possibly uninteresting amount of mathematical sophistication), inter alia.
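The locality point can be demonstrated with a toy 8-point example (my own illustration, not from the paper): decoding a genome with a single finest-scale Haar coefficient set perturbs only a small patch of the weight vector, while a single DCT basis vector of a comparable frequency is nonzero everywhere.

```python
import numpy as np

def inverse_haar(c):
    # c = [approximation, then detail coefficients, coarse to fine]
    out = np.array(c[:1], dtype=float)
    i = 1
    while i < len(c):
        d = np.asarray(c[i:i + len(out)], dtype=float)
        i += len(d)
        # one synthesis step: (a + d)/sqrt(2) and (a - d)/sqrt(2), interleaved
        out = np.column_stack([out + d, out - d]).ravel() / np.sqrt(2)
    return out

# genome with one finest-scale Haar coefficient -> local support
w = inverse_haar([0, 0, 0, 0, 1, 0, 0, 0])
# a DCT-II basis vector of frequency k -> global support
k, n = 4, np.arange(8)
dct_k = np.cos(np.pi * (n + 0.5) * k / 8)

print(np.count_nonzero(np.abs(w) > 1e-12),      # 2 of 8 entries affected
      np.count_nonzero(np.abs(dct_k) > 1e-12))  # all 8 entries affected
```

So a wavelet genome gets both properties at once: small coefficient changes produce small phenotype changes (continuity), and each coefficient controls a localized region of the network.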
See also NN research group at University of Texas at Austin, publications
Artificial Life
Meyer. 1996. “Artificial Life and the Animat Approach to Artificial Intelligence.” Artificial intelligence, pp.325-354.
Husbands et al. 1997 “Artificial Evolution: A New Path for Artificial Intelligence?” Brain and Cognition 34.1, pp.130-159.
Journal: Artificial Life
And Also Starring
Ur-Crank Bostrom: Shulman, Bostrom. “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects”. 2012. pdf.
- quite speculative