Dispersion entropy (DE) is an index for measuring the complexity or irregularity of a time series. Its advantages are high computational efficiency and sensitivity to the amplitude information of the data.

(1) Using the normal cumulative distribution function, the time series $x = [x_1, x_2, \ldots, x_N]$ is mapped to $y = [y_1, y_2, \ldots, y_N]$, $y_j \in (0, 1)$:

$$y_j = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{x_j} \exp\!\left(-\frac{(t-\mu)^2}{2\sigma^2}\right) dt$$

where $\mu$ and $\sigma^2$ are the mean and variance of the time series.
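Step (1) can be sketched as follows. This is a minimal illustration (the function name `ncdf_map` is mine, not the paper's), expressing the normal CDF through the error function so that only the standard library and numpy are needed:

```python
import numpy as np
from math import erf, sqrt

def ncdf_map(x):
    """Step 1: map a time series onto (0, 1) via the normal CDF,
    using the series' own mean and standard deviation."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    # Phi((x - mu) / sigma) expressed through the error function
    return np.array([0.5 * (1.0 + erf((v - mu) / (sigma * sqrt(2.0)))) for v in x])
```

The mapping is monotone, so the ordering of the samples is preserved while extreme amplitudes are compressed toward 0 and 1.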

(2) $y$ is mapped to the integer range $\{1, 2, \ldots, c\}$ by the linear transformation

$$z_j^c = \mathrm{round}(c \, y_j + 0.5)$$

where $j = 1, 2, \ldots, N$; $\mathrm{round}(\cdot)$ is the rounding function; and $c$ is the number of classes.
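Step (2) is a one-liner in numpy; a sketch (the clipping guard is my addition, to keep boundary values strictly inside $1..c$; note that `np.round` rounds halves to the nearest even integer):

```python
import numpy as np

def linear_map(y, c):
    """Step 2: map y in (0, 1) onto integer classes 1..c via round(c*y + 0.5)."""
    z = np.round(c * np.asarray(y) + 0.5).astype(int)
    # Guard the boundaries so every value lands in 1..c
    return np.clip(z, 1, c)
```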

(3) Calculate the embedding vectors

$$z_i^{m,c} = \left\{ z_i^c,\; z_{i+d}^c,\; \ldots,\; z_{i+(m-1)d}^c \right\}$$

where $m$ and $d$ are the embedding dimension and time delay, and $i = 1, 2, \ldots, N-(m-1)d$. Each embedding vector corresponds to one of the $c^m$ possible dispersion patterns $\pi_{v_0 v_1 \cdots v_{m-1}}$, and the relative frequency $p(\pi)$ of each pattern among the $N-(m-1)d$ vectors is counted.
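Step (3), building the delay-embedded vectors, can be sketched with numpy slicing (the helper name `embed` is illustrative):

```python
import numpy as np

def embed(z, m, d):
    """Step 3: build the N-(m-1)*d embedding vectors
    (z_i, z_{i+d}, ..., z_{i+(m-1)d}) as rows of a 2-D array."""
    z = np.asarray(z)
    n = len(z) - (m - 1) * d
    # One shifted view of z per embedding coordinate, stacked column-wise
    return np.stack([z[i:i + n] for i in range(0, m * d, d)], axis=1)
```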

(4) According to the definition of information entropy, the DE of signal $x$ is defined as:

$$\mathrm{DE}(x, m, c, d) = -\sum_{\pi=1}^{c^m} p(\pi) \ln p(\pi)$$
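Combining steps (1)–(4) gives the full DE computation. The sketch below follows the text; the parameter defaults ($m=2$, $c=6$, $d=1$) are common choices in the DE literature, not values taken from this paper:

```python
import numpy as np
from collections import Counter
from math import erf, sqrt

def dispersion_entropy(x, m=2, c=6, d=1):
    """Dispersion entropy: CDF map -> c classes -> m-dim patterns -> Shannon entropy.
    Defaults (m=2, c=6, d=1) are illustrative, not this paper's settings."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    # Step 1: map to (0, 1) with the normal CDF of the series itself
    y = np.array([0.5 * (1.0 + erf((v - mu) / (sigma * sqrt(2.0)))) for v in x])
    # Step 2: map to integer classes 1..c
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # Step 3: count dispersion patterns over the embedding vectors
    n = len(z) - (m - 1) * d
    patterns = Counter(tuple(z[i:i + (m - 1) * d + 1:d]) for i in range(n))
    # Step 4: Shannon entropy of the pattern probabilities
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)))
```

The maximum possible value is $\ln(c^m)$, reached when all dispersion patterns are equally likely; white noise approaches this bound, while a smooth trend uses few patterns and scores much lower.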

For multiscale analysis, the original sequence $u = [u_1, u_2, \ldots, u_L]$ of length $L$ is divided into non-overlapping segments of length $\tau$ (the scale factor), and the average value of each segment is calculated:

$$u_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau+1}^{j\tau} u_i$$

where $1 \le j \le \lfloor L/\tau \rfloor = N$, and $\lfloor \cdot \rfloor$ denotes rounding down.
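The coarse-graining step can be sketched as a reshape-and-average, discarding the trailing samples that do not fill a whole segment (helper name `coarse_grain` is mine):

```python
import numpy as np

def coarse_grain(u, tau):
    """Non-overlapping segment averages of length tau:
    u_j = mean(u[(j-1)*tau : j*tau]), j = 1..floor(L/tau)."""
    u = np.asarray(u, dtype=float)
    n = len(u) // tau  # floor(L / tau)
    return u[:n * tau].reshape(n, tau).mean(axis=1)
```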

The segment averages form a coarse-grained sequence of length $N$, and the dispersion entropy of each coarse-grained sequence is calculated; the resulting measure is called multiscale dispersion entropy (MDE). Since the normal distribution function still uses the mean and variance of the original signal when calculating the dispersion entropy of each coarse-grained sequence, MDE is not a simple combination of the dispersion entropies of independently processed coarse-grained sequences.
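The MDE procedure can be sketched as below. Note how the caveat in the text is honored: `mu` and `sigma` are taken from the original signal once and reused at every scale, rather than recomputed from each coarse-grained sequence. The scale range and parameter defaults are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from collections import Counter
from math import erf, sqrt

def mde(x, m=2, c=6, d=1, scales=range(1, 6)):
    """Multiscale dispersion entropy sketch: coarse-grain at each scale tau,
    then compute DE while keeping the ORIGINAL signal's mean and std in the
    CDF map (the caveat in the text). Defaults are illustrative."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()  # fixed once, reused at every scale

    def de(u):
        y = np.array([0.5 * (1.0 + erf((v - mu) / (sigma * sqrt(2.0)))) for v in u])
        z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
        n = len(z) - (m - 1) * d
        cnt = Counter(tuple(z[i:i + (m - 1) * d + 1:d]) for i in range(n))
        p = np.array(list(cnt.values()), dtype=float) / n
        return float(-np.sum(p * np.log(p)))

    out = {}
    for tau in scales:
        nseg = len(x) // tau
        u = x[:nseg * tau].reshape(nseg, tau).mean(axis=1)  # coarse-grained series
        out[tau] = de(u)
    return out
```

For white noise, coarse-graining shrinks the variance while the class boundaries stay fixed to the original signal, so the coarse-grained values crowd into the middle classes and the entropy falls with increasing scale.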