Complex Systems Theory#

https://upload.wikimedia.org/wikipedia/commons/c/c7/Complex_systems_thoery.jpg

    Complex systems theory is a mixture of multiple fields of study including (but not limited to) systems theory, network/graph theory, non-linear dynamics, control theory, and field theory. In this notebook, we shall focus on dynamical systems theory, starting with simple, linear, 1-dimensional systems, moving to 2-dimensional systems, simple oscillatory motion, phase oscillators, and coupled oscillation, and ending with control of systems and an introduction to bifurcations.

The road is long, and the theory is math-heavy, so let's begin...



image source: Wikimedia Commons, corresponding to the Wikipedia article titled Complex system

Dynamical Systems#

    The idea behind dynamical systems is to relate the change in the position of a body, of a function of the body, or of a system, to the position itself. We characterize this with the differential equation: \(\frac{dx}{dt} = \mathcal{f}(x)\).

Some nomenclature:

We use the dot-accent for differential terms;
   i.e. \(\dot{x} := \frac{dx}{dt}\) or \(\frac{dx}{ds}\) for time or spatial position (respectively),
     \(\ddot{x} := \frac{d^2x}{dt^2}\) or \(\frac{d^2x}{ds^2}\),
  and \(\dddot{x} := \frac{d^3x}{dt^3}\) or \(\frac{d^3x}{ds^3}\), and so on…
We use lower case normal fonts for scalars: like \(x\) or \(y\),
     lower case math script for vectors: like \(\mathbb{v}\),
    and upper case math script for matrices: like \(\mathbb{A}\).

So, the basic formula can now be written as: \(\dot{x} = \mathcal{f}(x)\). If one were to try out the simplest function for the equation, that is, to solve it with \(\mathcal{f}(x) = x\):
\[\begin{split} \begin{aligned} \text{let } x &:= x(t) = k\phi(t) \\ \text{i.e. } \dot{x} &:= k\dot\phi(t) = k\frac{d\phi(t)}{dt} \\ \therefore\ k\frac{d\phi(t)}{dt} &= k\phi(t) \quad [\because \dot{x} = x] \\ \implies \frac{d\phi(t)}{dt} &= \phi(t) \\ \implies \phi(t) &= \mathcal{e}^{t} \\ \implies x(t) &= k\mathcal{e}^{t} \quad [k = x(t=0)] \end{aligned} \end{split}\]
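
A minimal numerical sanity check of this result is sketched below, assuming an initial value \(k = 1\) and a small forward-Euler step; it integrates \(\dot{x} = x\) and compares the endpoint against the analytic solution \(k\mathcal{e}^{t}\).

```python
# Forward-Euler integration of dx/dt = x, compared to the analytic k*e^t.
# The initial value k, step size dt, and horizon T are illustrative assumptions.
import numpy as np

k, dt, T = 1.0, 1e-3, 2.0
t = np.arange(0.0, T + dt, dt)
x = np.empty_like(t)
x[0] = k
for i in range(1, len(t)):
    x[i] = x[i - 1] + dt * x[i - 1]   # x(t+dt) ≈ x(t) + dt * f(x), with f(x) = x

print(x[-1], k * np.exp(T))           # numerical vs analytic value at t = T
```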

Modeling a Simple 1-D System:#

    Let’s assume a simple uni-directional motion of a physical body (a ball in a spring-mass system). We assume the spring constant to be \(k\) and the mass of the ball to be \(m\). If the spring is pulled down by an amount \(x\), we can write the equations for conservation of energy \((\frac{1}{2}kx^2 = \frac{1}{2}mv^2)\) and equilibrium of force \((ma = -kx)\). Note that velocity is \(v = \frac{dx}{dt} =: \dot{x}\) and acceleration is \(a = \frac{d^2x}{dt^2} =: \ddot{x}\). We now get the following equations:

Figure: 1-D spring-mass system

\[\frac{1}{2}kx^2 = \frac{1}{2}m\dot{x}^2\]
\[\implies \dot{x}^2 = \frac{k}{m}x^2\]
\[\implies \dot{x} = \sqrt{\frac{k}{m}}x + 0\dot{x}\]
\[ also,\ m\ddot{x} = -kx\]
\[\therefore \ddot{x} = -\frac{k}{m}x + 0\dot{x}\]

Now, why would we want to add a ‘\(0\dot{x}\)’ term here? To account for a position/motion vector, \( \mathbb{y} = \begin{bmatrix} x \\ \dot{x} \end{bmatrix}\), and its differential;

\[\begin{split} \begin{aligned} \dot{\mathbb{y}}&=\frac{d \mathbb{y}}{d t}\\ &=\left[\begin{array}{c} \frac{d x}{d t} \\ \frac{d \dot{x}}{d t} \end{array}\right]\\ &\text { i.e. } \dot{\mathbb{y}}=\left[\begin{array}{l} \dot{x} \\ \ddot{x} \end{array}\right]=\left[\begin{array}{c} \sqrt{\frac{k}{m}} x+0 \dot{x} \\ -\frac{k}{m} x+0 \dot{x} \end{array}\right]\\ &=\left[\begin{array}{ll} \sqrt{\frac{k}{m}} & 0 \\ -\frac{k}{m} & 0 \end{array}\right]\left[\begin{array}{l} x \\ \dot{x} \end{array}\right]\\ &\therefore \dot{\mathbb{y}}=\left[\begin{array}{cc} \sqrt{\frac{k}{m}} & 0 \\ -\frac{k}{m} & 0 \end{array}\right] \mathbb{y}\\ &\text { or, } \dot{\mathbb{y}}=\mathbb{K} \cdot \mathbb{y} \end{aligned} \end{split}\]
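
The matrix form \(\dot{\mathbb{y}} = \mathbb{K}\cdot\mathbb{y}\) can be integrated directly. Below is a minimal sketch, assuming \(k = m = 1\), an assumed time span, and an assumed initial state, that uses scipy’s `solve_ivp` with the \(\mathbb{K}\) derived above.

```python
# Integrating the linear system dy/dt = K·y with the K matrix derived above.
# The values of k, m, the time span, and the initial state are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

k, m = 1.0, 1.0
K = np.array([[np.sqrt(k / m), 0.0],
              [-k / m,         0.0]])

def rhs(t, y):
    return K @ y                      # dy/dt = K·y

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0])
print(sol.y[:, -1])                   # state [x, x_dot] at t = 10
```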

Modeling a Coupled System:#

    Let’s set up a similar system but with two different variables, \(\mathbb{v} = \begin{bmatrix} x \\ y \end{bmatrix}\), such that \((\dot{x} = ax + by)\) and \((\dot{y} = cx + dy)\). Think of the motion of a body on a surface. One can then write:

\[\begin{split} \dot{v}:=\left[\begin{array}{c} \dot{x} \\ \dot{y} \end{array}\right]=\left[\begin{array}{l} a x+b y \\ c x+d y \end{array}\right]=\left[\begin{array}{ll} a & b \\ c & d \end{array}\right] \cdot\left[\begin{array}{l} x \\ y \end{array}\right]=\left[\begin{array}{ll} a & b \\ c & d \end{array}\right] \cdot \mathbb{v}=: \mathbb{A} \cdot \mathbb{v} \end{split}\]

Simple Predation Model:#

    These systems model equations in which population growth is linked to multiple predator/prey groups, including a group’s own. Let’s assume a simpler model of only 2 groups: rabbits (\(x\)) and foxes (\(y\)). We assume that rabbits have a population growth factor of \(g\) and that their death rate can be measured by a kill rate \(k\) dependent upon the population of foxes. On the other hand, foxes will have a growth rate based on the availability of food/prey, with a factor of \(f\), while an absence of food will imply a death rate of \(d\). We can now form the two equations as:

\[\begin{split} \begin{aligned} &\dot{x}=g x-k y \\ &\dot{y}=f x-d y \end{aligned} \end{split}\]

\(\dot{\mathbb{v}} := \begin{bmatrix}\dot{x}\\\dot{y}\end{bmatrix} = \begin{bmatrix}gx - ky\\fx - dy\end{bmatrix} = \begin{bmatrix}g & -k\\f & -d\end{bmatrix}\cdot\begin{bmatrix}x\\y\end{bmatrix} =: \begin{bmatrix}g & -k\\f & -d\end{bmatrix}\cdot\mathbb{v}\)


    Note that this is a simpler, linear predation model. Generally, the interaction term is scaled by the population of the interacting group. That is, the death toll on rabbits is controlled by the fox population, with the corrected equation looking like [\(\dot{x} = gx - kxy = (g - ky)\cdot x\)], and the predation gain is controlled by how many foxes go on the hunt, giving us [\(\dot{y} = fxy - dy = (-d + fx)\cdot y\)].
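
A minimal sketch of this nonlinear predation (Lotka-Volterra) model is given below; the rate parameters and initial populations are illustrative assumptions.

```python
# Nonlinear predation model: dx/dt = (g - k*y)*x, dy/dt = (-d + f*x)*y.
# Growth/kill/feeding/death rates and initial populations are assumed values.
from scipy.integrate import solve_ivp

g, k, f, d = 1.0, 0.5, 0.2, 0.8

def predation(t, v):
    x, y = v                          # rabbits, foxes
    return [(g - k * y) * x,
            (-d + f * x) * y]

sol = solve_ivp(predation, (0.0, 50.0), [10.0, 2.0], max_step=0.01)
print(sol.y[:, -1])                   # populations at t = 50
```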

Models in Discrete Time:#

    Let’s go back to the general model, and treat discrete intervals as sampled continuous time segments, i.e. we can say that \(t\rightarrow n\), in which case \(dt(\approx\Delta t)\rightarrow \Delta n = (n+1)-(n) = 1\). Sampling in this way, we can also convert the continuous model of the coupled system:

\[\dot{x} \approx \frac{\Delta x}{\Delta n} = \frac{x(n+1)-x(n)}{(n+1)-(n)} = x(n+1) - x(n)\]
\[similarly, \dot{y} \approx y(n+1) - y(n)\]
\[\begin{split} \therefore \dot{v} \approx \Delta \mathbb{v}=\mathbb{v}(n+1)-\mathbb{v}(n)=\left[\begin{array}{l} x(n+1) \\ y(n+1) \end{array}\right]-\left[\begin{array}{l} x(n) \\ y(n) \end{array}\right] \end{split}\]
\[\begin{split} \begin{aligned} \text { i.e. } \Delta \mathbb{v} &=\left[\begin{array}{ll} a & b \\ c & d \end{array}\right] \cdot \mathbb{v} \\ {\left[\begin{array}{l} x(n+1) \\ y(n+1) \end{array}\right]-\left[\begin{array}{l} x(n) \\ y(n) \end{array}\right] } &=\left[\begin{array}{ll} a & b \\ c & d \end{array}\right] \cdot\left[\begin{array}{l} x(n) \\ y(n) \end{array}\right] \\ {\left[\begin{array}{l} x(n+1) \\ y(n+1) \end{array}\right] } &=\left[\begin{array}{ll} a & b \\ c & d \end{array}\right] \cdot\left[\begin{array}{c} x(n) \\ y(n) \end{array}\right]+\left[\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right] \cdot\left[\begin{array}{l} x(n) \\ y(n) \end{array}\right] \\ \therefore\left[\begin{array}{c} x(n+1) \\ y(n+1) \end{array}\right] &=\left[\begin{array}{cc} 1+a & b \\ c & 1+d \end{array}\right] \cdot\left[\begin{array}{c} x(n) \\ y(n) \end{array}\right] \end{aligned} \end{split}\]


    The last equation is similar to that of a Markov chain, especially given that the dynamical systems we study are usually memoryless and homogeneous, satisfying the Markov property. However, one must note that unlike the probabilities in a transition matrix, here the matrix is made up of a set of parameters that may act as control variables/functions.
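
In code, the discrete-time model is just a repeated matrix multiplication. The sketch below iterates \(\mathbb{v}(n+1) = (\mathbb{I} + \mathbb{A})\cdot\mathbb{v}(n)\); the entries of \(\mathbb{A}\) and the initial state are assumptions for illustration.

```python
# Iterating the discrete-time update v(n+1) = (I + A)·v(n).
# The entries of A and the initial state are illustrative assumptions.
import numpy as np

A = np.array([[0.02, 0.01],
              [-0.01, 0.03]])
M = np.eye(2) + A                     # transition-like matrix [[1+a, b], [c, 1+d]]

v = np.array([1.0, 0.5])
for n in range(100):                  # 100 discrete steps
    v = M @ v
print(v)
```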

Phase Oscillators:#

    We now look at phase oscillators. Assume, as shown in the figure, that a mass is hanging by a thread in an ideal world where the only force on the mass is that of gravity. We now displace the mass by an angle \(\theta\), and let it swing. The angular velocity could be calculated as \(\omega = \frac{d\theta}{dt} = \dot{\theta}\), or the position/angle of the mass by \(\theta = \omega t + \theta_0\), where \(\theta_0\) is the initial position/angle of the mass.

Figure: phase oscillation of a mass on a string (pendulum)

    Now suppose we couple the system by adding another mass on a string, and connect them both by a rod. If we then do the same exercise, that is, pull one mass to an angle and let it swing, we see that since the rod is ideal, the second mass-string system will follow the first, \(\theta_i = \omega_i t + \theta_{0,i}\). In such cases, we say that the system is ‘coupled’;

\[\begin{split} \begin{aligned} \Theta:=& {\left[\begin{array}{l} \theta_{1} \\ \theta_{2} \end{array}\right]=\left[\begin{array}{c} \omega_{1} t+\theta_{0,1} \\ \omega_{2} t+\theta_{0,2} \end{array}\right] } \\ & {\left[\begin{array}{l} \theta_{1} \\ \theta_{2} \end{array}\right]=\left[\begin{array}{c} \omega t+\theta_{0} \\ \omega t+\theta_{0} \end{array}\right] \text { if perfectly coupled } } \end{aligned} \end{split}\]

    However, if there is an imperfection in the way these systems are coupled, then we will observe that each of the two systems will try to exert a control on the motion of the other. We might observe this in the form of a deviation in the angular velocity between the two oscillators, \(\omega_1 - \omega_2 = \delta\). To set up control over this seemingly chaotic model, we use the phase-difference as a control parameter, \(\phi(t) := \theta_1(t) - \theta_2(t)\ [\text{mod}(2\pi)]\). We apply this control parameter in the equation of change as \(\dot{\theta} = \omega \pm \gamma \sin\phi\).

\[\begin{split} \begin{aligned} \therefore \dot{\phi} &=\dot{\theta_{1}}-\dot{\theta_{2}} \\ &=\left(\omega_{1}-\gamma \sin \phi\right)-\left(\omega_{2}+\gamma \sin \phi\right) \\ &=\left(\omega_{1}-\omega_{2}\right)-2 \gamma \sin \phi \\ &=\delta-2 \gamma \sin \phi \\ &=\left(1-\frac{2 \gamma}{\delta} \sin \phi\right) \cdot \delta \quad \text { Now, let } \mu:=\frac{2 \gamma}{\delta} ; \\ \therefore \dot{\phi} &=(1-\mu \sin \phi) \cdot \delta \end{aligned} \end{split}\]
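
Before analyzing this equation further, here is a minimal sketch of integrating \(\dot{\phi} = \delta - 2\gamma\sin\phi\) with forward Euler; the values of \(\delta\) and \(\gamma\) are illustrative assumptions, chosen with \(2\gamma > \delta\) so that the two oscillators phase-lock.

```python
# Forward-Euler integration of the phase-difference equation
# d(phi)/dt = delta - 2*gamma*sin(phi). delta, gamma, and dt are assumed values.
import numpy as np

delta, gamma = 1.0, 0.8               # frequency mismatch and coupling strength
dt, steps = 1e-3, 20000
phi = 0.1                             # initial phase difference
for _ in range(steps):
    phi += dt * (delta - 2.0 * gamma * np.sin(phi))
print(phi % (2.0 * np.pi))            # settles near arcsin(delta/(2*gamma)) when 2*gamma > delta
```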


    Note that we substitute \(\mu := \frac{2\gamma}{\delta}\) since \(\gamma\) is the only parameter that we can control, and \(\delta\) is the observed deviation, i.e. the scalar that moderates the control parameter \(\sin\phi\). We also note that \(\delta\) is the deviation of the angle in time (the difference in angular velocity), i.e. \(\delta\sim\frac{ds}{dt}\), where \(s\) represents the spatial dimension. Also, given that \(\dot{\phi}=\frac{d\phi}{dt}=\frac{d\phi}{ds}\cdot\frac{ds}{dt}\), we find that:

\[\begin{split} \begin{aligned} \frac{d \phi}{d s} & \sim(1-\mu \sin \phi) \\ f(\phi, \mu) &:=(1-\mu \sin \phi) \\ \text { i. e. } f^{\prime}(\phi, \mu) &=-\mu \cos \phi \end{aligned} \end{split}\]


    Now, we know that \(\mathcal{f}\sim\frac{d\phi}{ds}\), i.e. if \(\mu=1\), we may find that we hit a local extremum (\(\frac{d\phi}{ds}=0\)) at \(\phi=\frac{\pi}{2}\). In systems theory, we call such points Equilibrium Points, represented as \(\text{EQL} \sim (\phi^\text{*},\mu^\text{*})\). However, if \(\phi=\frac{\pi}{2}\), then \(\mathcal{f'} \sim \frac{d^2\phi}{ds^2} = 0\). From differential mathematics, we find that when the second-order differential equates to zero (\(0\)) for a given set of values, the system becomes bi-stable. In dynamical systems, we say that systems at this point are in a state of Bifurcation.
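
We can check this numerically: the equilibria of \(\mathcal{f}(\phi, \mu) = 1 - \mu\sin\phi\) satisfy \(\sin\phi^\text{*} = \frac{1}{\mu}\), and the sketch below (with an assumed set of \(\mu\) values) shows the two equilibria merging at \(\phi^\text{*} = \frac{\pi}{2}\) when \(\mu = 1\), where \(\mathcal{f'} = 0\).

```python
# Equilibria of f(phi, mu) = 1 - mu*sin(phi): sin(phi*) = 1/mu (requires mu >= 1).
# The set of mu values below is an illustrative assumption.
import numpy as np

for mu in (1.0, 1.5, 2.0):
    phi_a = np.arcsin(1.0 / mu)             # first equilibrium
    phi_b = np.pi - phi_a                   # second equilibrium
    slope = lambda phi: -mu * np.cos(phi)   # f'(phi, mu) = -mu*cos(phi)
    print(mu, (phi_a, slope(phi_a)), (phi_b, slope(phi_b)))
# At mu = 1 both equilibria coincide at phi* = pi/2 and f' = 0: the bifurcation point.
```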

General Equation for Bifurcations:#


    Bifurcative states always occur at \(\text{EQL}\sim(x^\text{*},\mu^\text{*})\) for any given function (\(\mathcal{f}\)) of a dynamical system. To solve for the bifurcation, we use the generalized Taylor expansion for differential equations:

\[\begin{split} \begin{aligned} \lim _{(\varepsilon, \delta) \rightarrow(0,0)} f\left(x^{*}+\varepsilon, \mu^{*}+\delta\right) &=\left.f\right|_{E Q L} \\ &+\left.\varepsilon \cdot \frac{\partial f}{\partial x}\right|_{E Q L}+\left.\delta \cdot \frac{\partial f}{\partial \mu}\right|_{E Q L} \\ &+\left.\frac{\varepsilon^{2}}{2} \cdot \frac{\partial^{2} f}{\partial x^{2}}\right|_{E Q L}+\left.\varepsilon \delta \cdot \frac{\partial^{2} f}{\partial x \partial \mu}\right|_{E Q L} \\ &+\mathcal{O}\left(\varepsilon^{3}, \delta^{2}\right) \ldots \end{aligned} \end{split}\]

[higher order terms \(\sim 0\) ].
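
As an illustration of this expansion, the snippet below uses sympy to expand a function around its equilibrium; taking \(\mathcal{f}(\phi, \mu) = 1 - \mu\sin\phi\) from the phase-oscillator example, with \(\text{EQL} \sim (\frac{\pi}{2}, 1)\), is an assumption made here for concreteness.

```python
# Taylor expansion of f(x, mu) = 1 - mu*sin(x) around (x*, mu*) = (pi/2, 1).
# Using this particular f is an illustrative assumption.
import sympy as sp

x, mu, eps, dlt = sp.symbols('x mu epsilon delta')
f = 1 - mu * sp.sin(x)
x_star, mu_star = sp.pi / 2, 1

# substitute x -> x* + eps, mu -> mu* + dlt, then expand in small eps
shifted = f.subs({x: x_star + eps, mu: mu_star + dlt})
series = sp.series(shifted, eps, 0, 3).removeO()
print(sp.expand(series))              # leading terms near the EQL: -delta + eps**2/2 + ...
```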

Notes and Resources:#

    Some really good examples of bifurcation are provided in this PDF from UCSD. Even though we stop here in this notebook, certain topics like Hopf bifurcation, Lyapunov theory, and chaos theory are also quite interesting. Other resources one may want to go through include the project Complexity Explained by Dr. Manlio De Domenico (CoMuNe Lab, Fondazione Bruno Kessler, Italy) and Dr. Hiroki Sayama (Dept. of Systems Science and Industrial Engineering, Binghamton Univ., USA), Dr. Sayama’s repository for complex systems analysis, PyCX, and the project Complexity Explorables by Dirk Brockmann. Books we found useful to review include:

  1. J. Guckenheimer and P. Holmes’ Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields (Springer)

  2. S. H. Strogatz’s Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering (Westview Press)

  3. M. W. Hirsch, S. Smale, and R. L. Devaney’s Differential Equations, Dynamical Systems, and an Introduction to Chaos (Elsevier)

  4. Eugene Izhikevich’s Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting (MIT Press)