## Numerical Optimization Problem with DEAP

DEAP (Distributed Evolutionary Algorithms in Python) is a flexible library for implementing evolutionary algorithms.

It’s widely used for optimization problems where traditional methods struggle with non-linearity, high dimensionality, or noisy objective functions.

In this example, we’ll solve a **numerical optimization problem** using a genetic algorithm (GA) implemented with DEAP.

Specifically, we’ll minimize the **Rastrigin function**, a common optimization benchmark that is highly multimodal, with many local minima that make it challenging for classical optimization techniques.

### Problem

Minimize the **Rastrigin function**:

$$
f(x) = 10n + \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) \right]
$$

where $n$ is the number of dimensions (we’ll use $n = 2$ in this example).

The function has a global minimum at $f(0, 0, \dots, 0) = 0$.
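Before involving DEAP, the definition above can be checked with a plain-Python transcription (a standalone helper for illustration, not part of the GA listing later in this article):

```python
import math

def rastrigin(x):
    """Rastrigin function for a vector x of n decision variables."""
    n = len(x)
    return 10 * n + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

print(rastrigin([0.0, 0.0]))  # 0.0 -- the global minimum
print(rastrigin([1.0, 1.0]))  # 2.0 -- near a local minimum away from the origin
```

Evaluating points such as `[1.0, 1.0]` shows why the function is hard: many basins have values close to, but above, the global minimum.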

### Approach

We will:

- Define the problem and the fitness function using DEAP.
- Set up a genetic algorithm with parameters such as population size, mutation rate, and crossover rate.
- Run the genetic algorithm to find the minimum of the **Rastrigin function**.

### Steps and Code Implementation

**Install DEAP**:

First, you need to install the DEAP library if you haven’t already:

```shell
pip install deap
```

**Define the Rastrigin Function and Optimization Problem**:

Below is the full Python code to minimize the **Rastrigin function** using DEAP.

```python
import random
```

### Explanation

**Fitness Function (Rastrigin)**:

- The `rastrigin` function evaluates the fitness of an individual, which is a vector of decision variables. The smaller the value of this function, the better the individual (since we are minimizing).
- The function returns a tuple because DEAP expects the fitness to be returned in this format.

**Individual and Population**:

- We define an “Individual” as a list of floats (decision variables). Each individual has a fitness attribute associated with it.
- A “population” is a list of individuals.
**Genetic Operators**:

- `cxBlend`: A blending crossover is used, where genes from two parents are combined.
- `mutGaussian`: This applies Gaussian mutation, where a random value sampled from a normal distribution is added to the gene.
- `selTournament`: Tournament selection selects individuals for crossover and mutation based on their fitness.

**Genetic Algorithm Parameters**:

- **Population size**: 300 individuals in the population.
- **Crossover probability (cxpb)**: 0.7, meaning 70% of the population undergoes crossover.
- **Mutation probability (mutpb)**: 0.2, meaning 20% of the population undergoes mutation.
- **Generations**: The algorithm runs for 50 generations.

**Algorithm Execution**:

- `algorithms.eaSimple` runs the evolutionary algorithm using a simple evolutionary strategy. It returns the final population and the log of the evolution.
- `tools.selBest` selects the best individual from the population.

**Result**:

- The best individual found by the **GA** and its fitness value are printed.
- The fitness value should be close to 0 at the **Rastrigin function**’s global minimum.

### Output

The genetic algorithm will print the best individual and its corresponding fitness:

```
Best individual is: [-1.0333568778912494e-09, 1.0226956450775647e-09]
```

In this case, the algorithm finds an individual close to the global minimum $[0, 0]$, with a fitness value near zero, indicating that the solution is very close to the true minimum of the **Rastrigin function**.

### Applications

Genetic algorithms, as implemented in DEAP, are widely used in optimization problems such as:

- **Engineering design**: Optimizing structures or systems with complex constraints.
- **Machine learning**: Feature selection, hyperparameter tuning, and architecture search.
- **Operations research**: Scheduling, resource allocation, and routing problems.

Genetic algorithms are particularly useful in cases where the objective function is non-convex, noisy, or discontinuous, making traditional gradient-based methods less effective.