Monday, June 10, 2013

An Introduction to Genetic Algorithms with Applications to Antenna Arrays

Previously, I discussed testing antenna arrays inside an anechoic chamber and explained the test setup (see Figure 1 below). I briefly mentioned that I've researched the use of genetic algorithms (GAs) in evolving antenna arrays to focus electromagnetic (EM) / radio frequency (RF) energy on a signal of interest (SOI) while simultaneously minimizing EM/RF energy in jammer directions. In this article, I will explain how GAs work and how I've used them in my research.

Figure 1: Test setup diagram of antenna array in anechoic chamber (Copyright Jonathan Becker)
The GA was originally developed by John Holland [1] in the 1970s (his classic text was updated in 1992) and was developed further by David Goldberg [2] and Kenneth De Jong [3]. The most basic GA, known as the Simple Genetic Algorithm (SGA), has three evolutionary operators: "survival of the fittest" mate selection, crossover, and mutation. It is through these three operators that the SGA searches for optimal solutions to a given problem. Typically, a GA is used to find a maximal (i.e., best) solution, although it can solve minimization problems through a simple modification to the fitness function.

Before I can explain the SGA's three evolutionary operators in detail, I need to explain a few concepts relevant to GAs in general. First, there is the concept of a fitness function. The fitness function measures how good a solution is at solving a maximization problem. In my research, I use GAs to maximize the Signal to Interference and Noise Ratio (SINR), because maximizing SINR ensures that the GA has thwarted jammers while focusing energy on a SOI. Of course, not all optimization problems involve maximizing some quantity. For example, suppose that you want to minimize the weight of a mechanical part to be installed inside an unmanned aerial vehicle. To apply a GA to this problem, you would first define the objective function to be minimized, then invert it, adding a small number to the denominator to prevent division by zero. This converts the minimization problem into a maximization problem.
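As a sketch of that inversion, suppose `objective(x)` is the quantity to be minimized (the function names here are illustrative, not from any particular GA library):

```python
def to_maximization_fitness(objective, epsilon=1e-9):
    """Convert a minimization objective into a fitness to maximize.

    The small epsilon in the denominator prevents division by zero
    (and hence an infinite fitness) when the objective reaches zero.
    """
    def fitness(x):
        return 1.0 / (objective(x) + epsilon)
    return fitness

# Example: minimize (x - 3)^2; a smaller objective yields a larger fitness.
obj = lambda x: (x - 3) ** 2
fit = to_maximization_fitness(obj)
assert fit(3) > fit(2) > fit(0)
```

A GA maximizing `fit` is then equivalent to minimizing the original objective.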

Second, GAs operate on strings equivalent to chromosomes in biology. Typically, the strings are binary valued (i.e., 1s and 0s), although they can be real valued depending on the application. SGAs operate on singly stranded haploid binary strings; more complicated GAs can operate on doubly stranded diploid strings [4,5]. Third, GAs operate on a population of strings. This allows the GA to search multiple sections of the solution space in parallel -- a property Goldberg called "implicit parallelism" [2]. I show a flowchart of the SGA used to maximize SINR in solving the anti-jamming beamforming problem below in Figure 2.
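A minimal sketch of such a population, assuming chromosomes are represented as lists of bits (the function name and parameters are my own illustration):

```python
import random

def init_population(pop_size, chromosome_length, seed=None):
    """Create a population of random binary strings (haploid chromosomes)."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(chromosome_length)]
            for _ in range(pop_size)]

# A population of 20 chromosomes, each 16 bits long.
pop = init_population(pop_size=20, chromosome_length=16, seed=42)
print(len(pop), len(pop[0]))  # prints "20 16"
```

Each chromosome would then be decoded into problem parameters (in my case, antenna element weights) before evaluating its fitness.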

Figure 2: Flowchart of Simple Genetic Algorithm (Copyright Jonathan Becker)
I created the flowchart to explain the concepts of mate selection, crossover, and mutation. The GA chooses pairs of strings based on their fitness using "survival of the fittest": strings with higher fitness are more likely to be chosen as mates than strings with low fitness. In fact, poorly fit strings are likely to die out in the generation in which they occur. After the GA selects mating pairs, it performs chromosomal crossover. This is much like crossover in biology, where two chromosomes are split and their tail ends swapped. The idea behind crossover is to create better solutions (i.e., children) by combining the good properties of their parents (i.e., the mates). Of course, mate selection and crossover are not perfect, and information can be lost in the process -- some of which might be necessary for solving the problem at hand. This is where mutation comes into play: randomly flipping bits with a small probability (typically less than 2%) allows lost information to be reintroduced into the population. Mutation is thus a secondary operator in GAs.
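The three operators can be sketched as follows. This is a generic illustration -- fitness-proportionate ("roulette-wheel") selection, single-point crossover, and bit-flip mutation -- not the exact code from my research, and the toy fitness function simply counts 1-bits (the classic "one-max" problem):

```python
import random

def select_pair(population, fitnesses, rng):
    """Fitness-proportionate ("roulette-wheel") mate selection."""
    return rng.choices(population, weights=fitnesses, k=2)

def crossover(parent_a, parent_b, rng):
    """Single-point crossover: split both parents and swap tail ends."""
    point = rng.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(chromosome, p_m, rng):
    """Flip each bit independently with small probability p_m."""
    return [bit ^ 1 if rng.random() < p_m else bit for bit in chromosome]

def next_generation(population, fitness_fn, p_m, rng):
    """Produce one SGA generation: selection, crossover, then mutation."""
    fitnesses = [fitness_fn(c) for c in population]
    children = []
    while len(children) < len(population):
        a, b = select_pair(population, fitnesses, rng)
        for child in crossover(a, b, rng):
            children.append(mutate(child, p_m, rng))
    return children[:len(population)]

# Toy run: evolve 30 random 16-bit strings toward all 1s.
rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(30)]
for _ in range(40):
    pop = next_generation(pop, fitness_fn=sum, p_m=0.01, rng=rng)
best = max(sum(c) for c in pop)
print("best one-max fitness:", best)
```

For a real problem like anti-jamming beamforming, `fitness_fn` would decode each chromosome into array weights and return the measured or simulated SINR.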

I will close this post by answering a common criticism of GAs: if the initial population is chosen at random, then GAs are just random search algorithms, and random search is not effective for solving engineering problems. This is a misconception of how GAs operate. It is true that the initial population is chosen at random, and that crossover points and mutations are chosen with probabilities Pc and Pm. However, the fitness function guides the GA's search of the solution space toward maximal solution(s). A well-defined fitness function allows the GA to efficiently solve a wide variety of engineering problems.

I have listed my references below, and you are welcome to read them if you want to learn more about genetic algorithms.


Best,

Jonathan Becker
ECE PhD Candidate
Carnegie Mellon University


References:
  1. J. H. Holland. Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. Bradford Books. MIT Press, 1992.
  2. D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA, 1989.
  3. K. De Jong. An Analysis of the Behavior of a Class of Genetic Adaptive Systems. PhD thesis, University of Michigan, 1975.
  4. D. S. Weile and E. Michielssen. "The control of adaptive antenna arrays with genetic algorithms using dominance and diploidy." IEEE Trans. Antennas and Propagation, 49(10):1424–1433, October 2001.
  5. J. Becker, J. Lohn, and D. Linden. "Evaluation of genetic algorithms in mitigating wireless interference in situ at 2.4 GHz." In WiOpt 2013 Indoor and Outdoor Small Cells Workshop, pages 1–8, May 2013.

