What is it about?
Quality-Diversity algorithms are a branch of Evolutionary Computation that aims to generate collections of solutions that are both diverse and high-performing. Unlike traditional optimization methods, such as classic Genetic Algorithms or gradient-based methods, which return a single best solution, Quality-Diversity algorithms illuminate a search space of interest called the descriptor space. Single-solution methods can be effective in many cases, but they may yield suboptimal results when the search space is complex, multi-modal, or deceptive, and they can converge prematurely to local optima, failing to explore other promising regions of the search space. Conversely, while gradient-based methods excel in high-dimensional search spaces, MAP-Elites performs a divergent search driven by the random mutations it inherits from Genetic Algorithms, and is therefore limited to evolving populations of low-dimensional solutions.
In this paper, we examine how to combine diversity search with gradient-based methods to get the best of both worlds. Specifically, we combine the MAP-Elites algorithm with Policy Gradient methods from Reinforcement Learning to evolve collections of diverse and high-performing neural networks.
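To make this concrete, below is a minimal Python sketch of the MAP-Elites loop: select an elite from the archive, mutate it, evaluate its fitness and descriptor, and insert it into the corresponding archive cell if it improves on the incumbent. The toy `evaluate` function, the 10x10 descriptor grid, and the Gaussian mutation scale are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def evaluate(solution):
    """Toy evaluation: return (fitness, descriptor) for a candidate solution."""
    fitness = -float(np.sum(solution ** 2))       # quality: higher is better
    descriptor = np.clip(solution[:2], 0.0, 1.0)  # 2-D behavior descriptor in [0, 1]^2
    return fitness, descriptor

def to_cell(descriptor, cells_per_dim=10):
    """Discretize a descriptor into a cell index of the archive grid."""
    idx = np.minimum((descriptor * cells_per_dim).astype(int), cells_per_dim - 1)
    return tuple(int(i) for i in idx)

def map_elites(iterations=10_000, dim=8, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    archive = {}  # cell -> (fitness, solution)

    for _ in range(iterations):
        if archive:
            # Select a random elite and apply a GA-style Gaussian mutation
            # (this undirected, divergent variation is what limits plain
            # MAP-Elites to low-dimensional solutions).
            keys = list(archive.keys())
            _, parent = archive[keys[rng.integers(len(keys))]]
            child = parent + sigma * rng.normal(size=dim)
        else:
            child = rng.normal(size=dim)  # bootstrap the archive

        fitness, descriptor = evaluate(child)
        cell = to_cell(descriptor)
        # Keep the child only if its cell is empty or it beats the incumbent.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, child)

    return archive

archive = map_elites()
print(f"{len(archive)} cells filled; best fitness:",
      max(f for f, _ in archive.values()))
```

The paper builds on this loop by complementing the random mutation with a descriptor-conditioned, policy-gradient-based variation operator, which provides the directed updates needed to evolve high-dimensional neural network policies.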
Featured Image: Photo by Google DeepMind on Unsplash
Why is it important?
Quality-Diversity algorithms are important because they address a critical challenge in optimization: rather than searching for a single best solution, they aim to explore and map the entire range of high-quality solutions to a problem. This approach has proven useful in many domains, including robotics, game playing, and creative design. It is therefore important to scale these methods to high-dimensional problems.
Read the Original
This page is a summary of: MAP-Elites with Descriptor-Conditioned Gradients and Archive Distillation into a Single Policy, July 2023, ACM (Association for Computing Machinery).
DOI: 10.1145/3583131.3590503.
Resources
Code
Repository for the paper MAP-Elites with Descriptor-Conditioned Gradients and Archive Distillation into a Single Policy, which introduces the Descriptor-Conditioned Gradients MAP-Elites algorithm.
Presentation
Presentation, given at GECCO 2023, of the paper MAP-Elites with Descriptor-Conditioned Gradients and Archive Distillation into a Single Policy, which introduces the Descriptor-Conditioned Gradients MAP-Elites algorithm.