# Evolution strategy (SRES) (analysis)

- Analysis title: Evolution strategy (SRES)
- Provider: Institute of Systems Biology
- Class: `SRESOptMethod`
- Plugin: ru.biosoft.analysis.optimization (Common methods of data optimization analysis plug-in)

### Stochastic ranking evolution strategy (SRES)^{1}

In the (μ, λ)-ES algorithm, the individual *i* is a pair of real-valued vectors (*x*_{i}, σ_{i}), ∀ *i* ∈ {1,...,λ}. The initial population of *x* is generated according to a uniform *n*-dimensional probability distribution over the search space *S*. Let δ*x* be an approximate measure of the expected distance to the global optimum; then the initial setting for the "mean step sizes" should be

σ_{i, j} = δ*x*_{j} / √*n*, ∀ *i* ∈ {1,...,λ}, *j* = 1,...,*n*,

where σ_{i, j} denotes the *j*th component of the vector σ_{i}. We use these initial values as upper bounds on σ.
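The initialization above can be sketched as follows. This is a minimal illustration, not the plugin's actual code; it assumes δ*x*_{j} is taken as the width of the *j*th search interval, a common practical choice:

```python
import math
import random

def initial_population(bounds, lam, rng):
    """Uniformly sample λ individuals over the search space S and set the
    initial "mean step sizes" to sigma_{i,j} = delta_x_j / sqrt(n).

    bounds: list of (lo, hi) pairs, one per dimension
    lam:    population size λ
    rng:    a random.Random instance
    """
    n = len(bounds)
    xs, sigmas = [], []
    for _ in range(lam):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        # Assumption: delta_x_j is approximated by the search-interval width;
        # these values also serve as upper bounds on sigma.
        s = [(hi - lo) / math.sqrt(n) for lo, hi in bounds]
        xs.append(x)
        sigmas.append(s)
    return xs, sigmas
```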

A bubble-sort-like stochastic ranking procedure is used to rank the individuals in a population, and the best (highest-ranked) μ individuals out of λ are selected for the next generation. The truncation level is set at μ / λ ≈ 1/7.
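The ranking step can be sketched as below. This is a simplified illustration of stochastic ranking under stated assumptions: *f* is the objective, *φ* is the penalty (sum of constraint violations), and `pf` is the probability of comparing two infeasible individuals by objective value alone (the reference suggests a value around 0.45):

```python
import random

def stochastic_rank(population, f, phi, pf=0.45):
    """Rank individuals with a bubble-sort-like sweep (stochastic ranking).

    Adjacent individuals are compared by objective value f when both are
    feasible (phi == 0) or, with probability pf, even when they are not;
    otherwise they are compared by penalty phi. Returns indices, best first.
    """
    n = len(population)
    idx = list(range(n))
    fv = [f(x) for x in population]
    pv = [phi(x) for x in population]
    for _ in range(n):            # at most n sweeps, as in a bubble sort
        swapped = False
        for j in range(n - 1):
            a, b = idx[j], idx[j + 1]
            if (pv[a] == 0 and pv[b] == 0) or random.random() < pf:
                if fv[a] > fv[b]:                 # compare by objective
                    idx[j], idx[j + 1] = b, a
                    swapped = True
            elif pv[a] > pv[b]:                   # compare by penalty
                idx[j], idx[j + 1] = b, a
                swapped = True
        if not swapped:           # stop early once a sweep makes no swap
            break
    return idx
```

With all individuals feasible, the procedure reduces to an ordinary ascending sort on *f*; the top μ indices of the returned ranking are then kept.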

Variation of strategy parameters is performed before the modification of objective variables. We generate λ new strategy parameters from the μ old ones so that the λ new strategy parameters can be used in generating λ offspring later. The "mean step sizes" are updated according to the log-normal update rule

σ′_{h, j} = σ_{i, j} exp(τ′ *N*(0,1) + τ *N*_{j}(0,1)), *i* = 1,...,μ, *h* = 1,...,λ, *j* = 1,...,*n*, (1)

where *N*(0,1) is a normally distributed one-dimensional random variable with expectation 0 and variance 1, and the subscript *j* in *N*_{j}(0,1) indicates that the variable is sampled anew for each component. The "learning rates" τ and τ′ are set equal to (4*n*)^{−¼} and (2*n*)^{−½}, respectively. Recombination is performed on the self-adaptive parameters before applying the update rule given by (1): σ_{i, j} is replaced by (σ_{i, j} + σ_{k, j}) / 2, where *k* ∈ {1,...,μ} is an index generated at random and anew for each *j*.
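The self-adaptation step, combining intermediate recombination with the log-normal rule (1), can be sketched as follows. The cycling of parent indices over the offspring is an assumption for illustration, not a detail stated in the text:

```python
import math
import random

def update_step_sizes(sigmas, lam, rng):
    """Produce λ offspring step-size vectors from μ parental ones.

    Each sigma_{i,j} is first recombined with a randomly chosen partner
    sigma_{k,j} (intermediate recombination), then scaled by the log-normal
    factor exp(tau_p * N(0,1) + tau * N_j(0,1)) from rule (1).
    """
    mu = len(sigmas)
    n = len(sigmas[0])
    tau = (4 * n) ** -0.25      # tau  = (4n)^(-1/4)
    tau_p = (2 * n) ** -0.5     # tau' = (2n)^(-1/2)
    out = []
    for h in range(lam):
        i = h % mu                       # assumption: parents cycled over offspring
        g = tau_p * rng.gauss(0, 1)      # global factor, drawn once per offspring
        child = []
        for j in range(n):
            k = rng.randrange(mu)        # random partner index, anew for each j
            base = 0.5 * (sigmas[i][j] + sigmas[k][j])   # recombination
            child.append(base * math.exp(g + tau * rng.gauss(0, 1)))
        out.append(child)
    return out
```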

Having varied the strategy parameters, each individual (*x*_{i}, σ_{i}), ∀ *i* ∈ {1,...,μ}, creates λ/μ offspring on average, so that a total of λ offspring are generated:

*x*′_{h, j} = *x*_{i, j} + σ′_{h, j} *N*_{j}(0,1), *h* = 1,...,λ, *j* = 1,...,*n*.

Recombination is not used in the variation of objective variables. When an offspring is generated outside the parametric bounds defined by the problem, the mutation (variation) of the objective variable is retried until the variable is within its bounds. To save computation time, the mutation is retried only ten times and then abandoned, leaving the objective variable in its original state, which lies within the parameter bounds.
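The bounded mutation with the ten-retry rule can be sketched as below; a minimal illustration, assuming each coordinate is retried independently:

```python
import random

def mutate(x, sigma, bounds, rng, retries=10):
    """Generate one offspring by Gaussian mutation of each coordinate.

    A coordinate falling outside its (lo, hi) bounds is re-drawn up to
    `retries` times; after that the mutation is abandoned and the objective
    variable is left in its original (in-bounds) state.
    """
    child = []
    for j, (lo, hi) in enumerate(bounds):
        v = x[j]                 # fallback: keep the original value
        for _ in range(retries):
            cand = x[j] + sigma[j] * rng.gauss(0, 1)
            if lo <= cand <= hi:
                v = cand
                break
        child.append(v)
    return child
```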

#### References

- T. P. Runarsson and X. Yao, "Stochastic Ranking for Constrained Evolutionary Optimization," IEEE Transactions on Evolutionary Computation, vol. 4, no. 3, pp. 284–294, Sept. 2000.