## Optimization of a Complex Non-Linear Function Using DEAP

Let’s consider an **optimization problem** that involves a complex mathematical function.

Here is a challenging example where we want to minimize a non-linear, multi-variable function that includes trigonometric and exponential components.

### Optimization Problem

We aim to minimize the following complex objective function:

$$
f(x, y, z) = e^{x^2 + y^2 + z^2} + \sin(5x) + \cos(5y) + \tan(z)
$$

where $x$, $y$, and $z$ are real-valued variables.

This function has multiple local minima and maxima, making it difficult for traditional optimization techniques to solve efficiently.
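To make the landscape concrete, here is a direct Python implementation of $f$; the sample points below are illustrative, not part of the original problem statement:

```python
import math

def f(x, y, z):
    """Objective: exp(x^2 + y^2 + z^2) + sin(5x) + cos(5y) + tan(z)."""
    return (math.exp(x**2 + y**2 + z**2)
            + math.sin(5 * x) + math.cos(5 * y) + math.tan(z))

# The oscillatory sin/cos terms create many local minima, while the
# exponential "bowl" dominates far from the origin.
print(f(0.0, 0.0, 0.0))    # exp(0) + sin(0) + cos(0) + tan(0) = 2.0
print(f(-0.3, 0.6, -1.0))  # a much lower value inside the feasible box
```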

### Problem Setup

**Objective**: Minimize the function $f(x, y, z)$.

**Constraints**:
- $x \in [-5, 5]$
- $y \in [-5, 5]$
- $z \in [-1, 1]$ (the range of $z$ is constrained because the tangent function becomes undefined at specific values of $z$, such as $\pi/2$)

**Method**: We will use a **Genetic Algorithm (GA)** via the DEAP library to solve this problem.

Genetic algorithms are a powerful tool for global optimization, especially for non-convex and multi-modal problems like this one.

### DEAP Implementation

We will use DEAP to define the genetic algorithm, where each individual in the population represents a candidate solution $(x, y, z)$.

The steps for the genetic algorithm are as follows:

- **Initialization**: A population of random individuals (sets of $x$, $y$, and $z$) is created.
- **Selection**: Individuals are selected based on their fitness values, computed as the value of the objective function $f(x, y, z)$; we aim to minimize this value.
- **Crossover**: Pairs of individuals are selected to produce offspring via crossover.
- **Mutation**: Random mutations are applied to some individuals to introduce new genetic material.
- **Evolution**: Over multiple generations, the population evolves toward better solutions.

Here is how to solve this problem using DEAP in Python:

```python
import random
```

### Explanation of the Code

- **Objective function**: The `objective` function computes the value of $f(x, y, z)$ for a given individual. Since DEAP minimizes the objective, the fitness value is simply $f(x, y, z)$.
- **Individual representation**: Each individual is a list of three floating-point numbers representing $x$, $y$, and $z$.
- **Constraints**: The `checkBounds` function ensures that $x$, $y$, and $z$ stay within their respective bounds during the evolutionary process.
- **Evolutionary operations**:
  - **Crossover**: Blend crossover (`cxBlend`) combines the genetic material of two parent individuals.
  - **Mutation**: Gaussian mutation (`mutGaussian`) adds small random perturbations to individuals.
  - **Selection**: Tournament selection (`selTournament`) selects individuals for reproduction based on their fitness.
- **Population evolution**: The algorithm runs for a defined number of generations, during which the population evolves toward better solutions.
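The bound-handling idea is independent of DEAP: wrap the variation operator so every child it returns is clipped back into the box. A pure-Python illustration of the pattern, using a hypothetical toy `shift` operator in place of a real mutation:

```python
def check_bounds(low, up):
    """Wrap an operator so every returned child is clipped into [low, up]."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            offspring = func(*args, **kwargs)
            for child in offspring:
                for i in range(len(child)):
                    child[i] = min(max(child[i], low[i]), up[i])
            return offspring
        return wrapper
    return decorator

@check_bounds([-5.0, -5.0, -1.0], [5.0, 5.0, 1.0])
def shift(individual, delta):
    """Toy 'mutation' that adds delta to every gene (illustration only)."""
    for i in range(len(individual)):
        individual[i] += delta
    return (individual,)  # DEAP-style: operators return a tuple of children

child = shift([4.8, 0.0, 0.9], 0.5)[0]
print(child)  # -> [5.0, 0.5, 1.0]: x and z were clipped back to the bounds
```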

### Running the Algorithm

When you run the algorithm, DEAP will output the progress of the optimization process, including the best fitness found so far.

The final output will be the best individual (i.e., the values of $x$, $y$, and $z$) and the corresponding minimum value of the objective function.

This example demonstrates how DEAP can be used to tackle complex mathematical optimization problems with non-linear, multi-variable functions.

### Result

```
Best individual: [-1.0229844156448218, 1.0256973498197954, 1.5785811632285052]
```

The result shows the best solution the genetic algorithm found in this run.

The **best individual** is the set of variable values $(x, y, z)$ that produced the lowest objective value:

- $x \approx -1.023$
- $y \approx 1.026$
- $z \approx 1.579$

The corresponding **best fitness** (the value of the objective function $f(x, y, z)$ at this point) is approximately $-28.59$.

Note, however, that $z \approx 1.579$ lies outside the stated bound $z \in [-1, 1]$ and just beyond $\pi/2 \approx 1.571$, where $\tan(z)$ diverges; at this point $\tan(z) \approx -128$, which is what drives the fitness so low. This indicates that the bound on $z$ was not enforced in this particular run. With the bounds respected, $e^{x^2+y^2+z^2} \ge 1$, $\sin(5x) \ge -1$, $\cos(5y) \ge -1$, and $\tan(z) \ge \tan(-1) \approx -1.56$, so the constrained minimum cannot be lower than about $-2.56$.
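As a sanity check, the reported fitness can be reproduced by evaluating the objective directly at the reported individual; the value is dominated by the $\tan(z)$ term, since the reported $z$ sits just past $\pi/2$:

```python
import math

def f(x, y, z):
    return (math.exp(x**2 + y**2 + z**2)
            + math.sin(5 * x) + math.cos(5 * y) + math.tan(z))

best = [-1.0229844156448218, 1.0256973498197954, 1.5785811632285052]
value = f(*best)
print(round(value, 2))    # -> -28.59, matching the reported best fitness
print(math.tan(best[2]))  # large negative: z is just beyond pi/2 ~ 1.5708
```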