Lecture 2 Models for two populations

Lesson plan:

  1. We discuss the concepts introduced in the previous chapter in the context of two-dimensional (“planar”) systems of differential equations.
  2. We summarize important concepts from linear algebra: eigenvalues and eigenvectors, the matrix exponential.
  3. We show how these concepts can be used to solve linear systems of differential equations.
  4. We introduce the “community matrix”, allowing us to determine the local asymptotic stability of equilibria.
  5. We analyze in detail the classic Lotka-Volterra predator-prey model, and a version of the model in which the prey grows logistically.

2.1 Qualitative analysis of models for two populations

In this section, we extend the qualitative analysis we’ve performed for a single population to models with two populations. Many of the methods introduced below extend to the case of multiple populations.

2.1.1 Isoclines of null growth

Take a two-dimensional model. For each equation, we can write the solution of \(dx/dt = 0\) as a function of \(y\), thereby defining the curve in the \((x, y)\) plane (the “phase plane”) for which the growth of species \(x\) is zero. For a concrete example, take the classic predator-prey Lotka-Volterra system:

\[ \begin{cases} \dfrac{d x(t)}{dt} = \rho\, x(t) - \alpha\, x(t)\, y(t)\\ \dfrac{d y(t)}{dt} = -\delta\, y(t) + \beta\, x(t)\, y(t) \end{cases} \]

where \(x(t)\) is the density of the prey species at time \(t\) and \(y(t)\) that of the predator. We can interpret \(\rho\) as the intrinsic growth rate of the prey (i.e., the growth when the predator is absent), \(\delta\) as the death rate of the predator, and \(\alpha\) and \(\beta\) as the loss in prey growth and the gain in predator growth due to predation, respectively.

History: Alfred J. Lotka (1880-1949)

Alfred Lotka was born to French-speaking American parents in Lemberg (then part of the Habsburg empire, now Lviv, Ukraine). He studied in France, Germany and England, receiving a BSc in 1901 and a DSc in 1912 from Birmingham University. He moved to the US in 1902, and worked at the US Patent Office, as an editor of Scientific American, and as a statistician at the Metropolitan Life Insurance Company in NYC. He wrote more than a hundred papers and five books, spanning a large range of topics. He’s best known for the book Elements of Physical Biology, his contributions to demography, and one of the first studies dealing with bibliometrics (Lotka 1926).

Starting in 1910 (reprinted as Lotka (2002)) he investigated coupled differential equations relating to chemical as well as ecological dynamics. In Lotka (1920) he studied a system of two ODEs that gave rise to perpetual oscillations: “It was, therefore, with considerable surprise that the writer, on applying his method to certain special cases, found these to lead to undamped, and hence indefinitely continued, oscillations.” He went on to describe “1. A species of organism \(S_1\), a plant species, say, deriving its nourishment from a source presented in such large excess that the mass of the source may be considered constant during the period of time with which we are concerned. 2. A species \(S_2\), for example a herbivorous animal species, feeding on \(S_1\).”

The equations he had derived (and then studied later in more detail) are now termed Lotka-Volterra equations.

Let’s look at a possible trajectory for the system, to gain an intuition of what can happen:
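
A minimal numerical sketch of such a trajectory (in Python; the parameter values and initial conditions are hypothetical, but any positive choice displays the same qualitative behavior):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

rho, alpha, delta, beta = 1.0, 1.0, 0.5, 0.25  # hypothetical parameters

def lotka_volterra(t, z):
    x, y = z
    return [rho * x - alpha * x * y, -delta * y + beta * x * y]

# tight tolerances, so that the closed orbit does not drift numerically
sol = solve_ivp(lotka_volterra, (0, 50), [1.0, 1.5],
                t_eval=np.linspace(0, 50, 2000), rtol=1e-9, atol=1e-9)

plt.plot(sol.y[0], sol.y[1])               # trajectory in the phase plane
plt.plot(delta / beta, rho / alpha, "ko")  # coexistence equilibrium
plt.xlabel("prey, x(t)")
plt.ylabel("predator, y(t)")
plt.show()
```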

You can see that the population densities, when drawn in the “phase plane”, cycle counterclockwise. Let’s try to understand why.

The equation for the prey is zero either when \(x(t) = 0\) or when \(\rho - \alpha\,y(t) = 0\), yielding \(y(t) = \rho / \alpha\). Whenever the density of the predator \(y(t) < \rho / \alpha\), prey will grow; conversely, whenever \(y(t) > \rho / \alpha\), prey will decline.

Graphically:

Similarly, the equation for the predator is zero either when \(y(t) = 0\) or when \(-\delta + \beta\, x(t) = 0\), yielding \(x(t) = \delta / \beta\). Whenever the density of the prey \(x(t) < \delta / \beta\), predators will decline; conversely, whenever \(x(t) > \delta / \beta\), predators will grow.

Graphically:

Now let’s put the two graphs together:

Clearly, a possible equilibrium of the system is \((x^\star, y^\star)^T = (0, 0)^T\) (often called the “trivial” equilibrium). You can see that there is another equilibrium where the two isoclines meet \((x^\star, y^\star)^T = (\delta / \beta, \rho / \alpha)^T\), and that the dynamics will tend to cycle around the equilibrium.

But how do we know whether dynamics will cycle toward the equilibrium, spiral away from it, or describe closed orbits? To answer this question, we can try to extend our linear analysis by Taylor-expanding the dynamics around the equilibrium.

2.2 Local stability analysis

Suppose that a feasible (i.e., positive) equilibrium \(x^\star\) exists for a given model. Then we can ask whether it is attractive (stable), i.e., whether trajectories starting from an initial condition \(x(0)\) will eventually reach \(x^\star\). This problem is in general difficult to solve (but see below); as an alternative, we can test for local asymptotic stability, i.e., ask whether the system will return to the equilibrium if perturbed infinitesimally away from it. In general, whenever we describe an ecological community as a system of nonlinear, autonomous ODEs:

\[ \frac{d x_i (t)}{d t} = f_i (x(t)) \;, \]

we define an equilibrium \(x^\star\) as a vector of densities such that:

\[ \left. \frac{d x_i}{d t} \right|_{{x}^\star} = f_i ({x}^\star) = 0 \quad \forall i \]

A given system might have a multitude of equilibria. When the system is resting at an equilibrium point, it will remain there unless it is perturbed away from it. Local stability analysis is a method to probe whether a system that is perturbed infinitesimally away from an equilibrium will eventually return to it, or rather move away from it.

Taylor series

Single-variable: suppose the function \(f\) is infinitely differentiable around a point \(x = a\). Then

\[ f(x) = \sum_{k = 0}^\infty \dfrac{D^{k} f(a)}{k!} (x-a)^k = f(a) + \left. \dfrac{d f}{d x} \right|_{a} (x-a)+ \dfrac{1}{2}\left. \dfrac{d^2 f}{d x^2} \right|_{a} (x-a)^2 + \cdots \]

where \(D^{k} f(a)\) is the \(k\)-th derivative of \(f(x)\) w.r.t. \(x\), evaluated at \(a\).

Vector-valued functions: now \(f(x)\) is a vector-valued function, and \(x\) a vector. To expand around the point \(a\), we need to define the Jacobian matrix

\[ J = Df(x) \]

with elements:

\[ J_{ij} = \dfrac{\partial f_i({x})}{\partial x_j} \]

Next, we define the Hessian tensor (in this case, a three-dimensional tensor):

\[ H_{ijk} = \dfrac{\partial^2 f_i(x)}{\partial x_j\, \partial x_k} \]

It is convenient to write the Taylor expansion in component form:

\[ f_i(x) \approx f_i(a) + \sum_j \left . J_{ij} \right|_{a} (x_j - a_j) +\dfrac{1}{2} \sum_j \sum_k \left. H_{ijk} \right|_a (x_j-a_j)(x_k-a_k) \]

Example

Consider the predator-prey model with Type II functional response:

\[ \begin{cases} \dfrac{d x(t)}{dt} = f_x = x(t) \left(1 - x(t) - \alpha\, \frac{y(t)}{1+ x(t)} \right)\\ \dfrac{d y(t)}{dt} = f_y = y(t) \left(-1 + \alpha\, \frac{x(t)}{1+ x(t)} \right)\\ \end{cases} \] The system has up to three equilibria: \((x^\star, y^\star) = (0,0)\), \((x^\star, y^\star) = (1,0)\), and, whenever \(\alpha > 2\), \(\left(\frac{1}{\alpha - 1}, \frac{\alpha - 2}{(\alpha - 1)^2}\right)\). The Jacobian for this system is:

\[ J = \begin{pmatrix} \frac{\partial f_x}{\partial x} & \frac{\partial f_x}{\partial y}\\ \frac{\partial f_y}{\partial x} & \frac{\partial f_y}{\partial y}\\ \end{pmatrix} = \begin{pmatrix} 1 -2 x - \frac{\alpha y}{(x+1)^2} & - \frac{\alpha x}{(x+1)}\\ \frac{\alpha y}{(x+1)^2} & -1 + \frac{\alpha x}{(x+1)} \end{pmatrix} \] The Hessian has two slices:

\[ H_x = \begin{pmatrix} \frac{\partial^2 f_x}{\partial x^2} &\frac{\partial^2 f_x}{\partial x \partial y}\\ \frac{\partial^2 f_x}{\partial x \partial y} & \frac{\partial^2 f_x}{\partial y^2}\\ \end{pmatrix} = \begin{pmatrix} 2(-1 + \frac{\alpha y}{(x+1)^3}) &-\frac{\alpha}{(x+1)^2}\\ -\frac{\alpha}{(x+1)^2} & 0\\ \end{pmatrix} \]

Similarly,

\[ H_y = \begin{pmatrix} -2 \frac{\alpha y}{(x+1)^3} &\frac{\alpha}{(x+1)^2}\\ \frac{\alpha}{(x+1)^2} & 0\\ \end{pmatrix} \] Note that each slice of the Hessian is necessarily symmetric.

Now we can approximate the dynamics at any point; for example, around the equilibrium with the prey alone, \((x^\star, y^\star) = (1, 0)\):

\[ \dfrac{d x(t)}{dt} \approx J_{11}|_{x^\star} \Delta x + J_{12}|_{x^\star} \Delta y + \frac{1}{2} H_{x,1,1}|_{x^\star} \Delta x^2 + \frac{1}{2} H_{x,2,2}|_{x^\star} \Delta y^2 + H_{x,1,2}|_{x^\star} \Delta x \Delta y \] with some calculations, we obtain:

\[ \dfrac{d x(t)}{dt} \approx -\Delta x (1 + \Delta x) - \frac{1}{2} \alpha (1 + \Delta x) \Delta y \] and similarly,

\[ \dfrac{d y(t)}{dt} \approx \frac{1}{2} (\alpha -2 + \alpha \Delta x) \Delta y \] We can see already that, when \(\Delta y > 0\) (the only sensible case), and \(\Delta x = 0\), the predator can invade (at least initially) whenever \(\alpha > 2\).
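
These calculations are easy to check symbolically; here is a short sketch using sympy (the code and printed checks are illustrative, not part of the original derivation):

```python
import sympy as sp

x, y, alpha = sp.symbols("x y alpha")
f = sp.Matrix([x * (1 - x - alpha * y / (1 + x)),   # f_x
               y * (-1 + alpha * x / (1 + x))])     # f_y

J = f.jacobian([x, y])            # Jacobian matrix
H_x = sp.hessian(f[0], (x, y))    # slice H_x of the Hessian tensor
H_y = sp.hessian(f[1], (x, y))    # slice H_y

# community matrix at the prey-only equilibrium (1, 0)
M = J.subs({x: 1, y: 0})
print(sp.simplify(M))             # Matrix([[-1, -alpha/2], [0, alpha/2 - 1]])
print(M.eigenvals())              # eigenvalues -1 and alpha/2 - 1:
                                  # the predator invades when alpha > 2
```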

By considering higher and higher order terms, we can obtain an increasingly good approximation of the system, always in the form of a polynomial system.

Suppose that a system is resting at an equilibrium \(x^\star\), and that it is slightly perturbed away from it. \(x(t) = \Delta x(t) + x^\star\) is the state of the system immediately after the perturbation. Taylor-expanding around \(x^\star\) and taking only the linear term, we have:

\[ f(x(t)) \approx f(x^\star)+ \left. J \right|_{x^\star} \Delta x(t) = \left. J \right|_{x^\star} \Delta x(t) \]

Where \(J\) is the Jacobian matrix of the system, whose elements are defined as:

\[ J_{ij} = \frac{\partial f_i({x})}{\partial x_j} \]

Each element of this matrix is therefore a function, whose value depends on \(x\). When we evaluate the Jacobian matrix at an equilibrium point \(x^\star\), we obtain the so-called “community matrix” \(M\):

\[ M = \left. {J} \right|_{ {x}^\star} \]

Note that, although each system has a unique Jacobian matrix, there are as many community matrices as there are equilibria. The community matrix details the effect of increasing the density of one species on any other species around the equilibrium point.

We can therefore write the differential equation:

\[ \frac{d \Delta x(t)}{dt} \approx M \Delta x(t) \]

which is a system of linear differential equations—i.e., the simplest system of ODEs, which can be solved in full generality.

To solve the system, we need to recap a few important concepts from linear algebra.

Eigenvalues and eigenvectors

For a matrix \(M\), we have that if \(M v = \lambda v\) with \(v\) different from the zero vector, then \(\lambda\) is an eigenvalue and \(v\) the corresponding eigenvector. Practically, you can think of a matrix as an operator that turns a vector into another vector. If the resulting vector is a rescaled version of the initial vector, then you’ve found an eigenvector of the matrix, and the rescaling factor is the associated eigenvalue.

For example, let us show that \((1, 1)^T\) is an eigenvector of the matrix:

\[ A = \begin{pmatrix} 1 + a & 1-a\\ 2a + 2 & -2a \end{pmatrix} \]

We have:

\[ Av = \begin{pmatrix} (1 + a) v_1 + (1-a) v_2\\ (2a + 2) v_1 - 2a v_2 \end{pmatrix} \]

if \(v = (1, 1)^T\), we have:

\[ Av = \begin{pmatrix} 2\\ 2 \end{pmatrix} = 2\, v \]

and as such \(v\) is an eigenvector of \(A\), with associated eigenvalue \(\lambda = 2\).

Finding eigenvalues

The eigenvalues of a matrix \(A\) are the roots (zeros) of the characteristic polynomial:

\[ p(\lambda) = \det(\lambda I - A) = 0 \]

Where does this equation come from? We write:

\[ \begin{aligned} A v &= \lambda v\\ \lambda v - Av &= 0\\ (\lambda I - A)v &= 0 \end{aligned} \]

if \(v\) is nonzero, then the matrix \(\lambda I - A\) must be singular (i.e., have at least one eigenvalue equal to zero), with \(v\) the eigenvector corresponding to that zero eigenvalue. Because the determinant is the product of the eigenvalues, the determinant must be zero.

For a \(2 \times 2\) matrix, we have:

\[ \begin{aligned} \det \begin{pmatrix} \lambda - a_{11} & -a_{12}\\ -a_{21} & \lambda -a_{22} \end{pmatrix} &= (\lambda - a_{11}) (\lambda - a_{22}) - a_{12} a_{21}\\ &= \lambda^2 - \lambda (a_{11} + a_{22}) + a_{11} a_{22} - a_{12} a_{21} \end{aligned} \]

More compactly,

\[ p(\lambda) = \lambda^2 - \lambda\, \text{tr}(A) + \det(A) \]

where \(\text{tr}(A) = \sum_i A_{ii} = \sum_i \lambda_i\), and \(\det(A) = \prod_i \lambda_i\). We therefore find:

\[ \lambda = \frac{\text{tr}(A) \pm \sqrt{(\text{tr}(A))^2 - 4 \det(A)}}{2} \]

Find the eigenvalues for the matrix \(A\) above:

\[ \begin{aligned} \lambda &= \frac{1-a \pm \sqrt{(1-a)^2+8(1+a)}}{2}\\ &= \frac{1-a \pm \sqrt{a^2 + 6 a + 9}}{2} \\ &= \frac{1-a \pm \sqrt{(a+3)^2}}{2} \\ &= \frac{1-a \pm (a+ 3)}{2} \end{aligned} \]

The eigenvalues are therefore \(\lambda_1 = 2\) and \(\lambda_2 = -(1+a)\).
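
A quick numerical check of both results, for one arbitrary choice of \(a\):

```python
import numpy as np

a = 0.7  # arbitrary value
A = np.array([[1 + a, 1 - a],
              [2 * a + 2, -2 * a]])

v = np.array([1.0, 1.0])
print(A @ v)                 # [2. 2.] = 2 v, confirming the eigenpair (2, v)
print(np.linalg.eigvals(A))  # 2 and -(1 + a) = -1.7, in some order
```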

Facts about eigenvalues and eigenvectors

Given a matrix \(A\), of size \(n \times n\), a complex number \(\lambda\) is an eigenvalue of \(A\) if there is a nonzero (complex) vector \(v\) such that \(A v = \lambda v\). The vector \(v\) is called the eigenvector of \(A\) associated with \(\lambda\). Note that eigenvectors are defined up to multiplication: if \(v\) is an eigenvector, then \(\alpha\, v\), with \(\alpha \neq 0\), is also an eigenvector. Often, we choose \(v\) such that its norm \(\sqrt{\sum_i v_i^2} = 1\) (a “unit” eigenvector).

A matrix of size \(n\) has at most \(n\) distinct eigenvalues. If all eigenvalues are distinct then the eigenvectors are linearly independent. This means that if we build the matrix \(V = (v_1, v_2, \ldots, v_n)\), then \(V\) is invertible.

Because of the fact above, if a matrix is diagonalizable (e.g., a sufficient condition is that all its eigenvalues are distinct), we can write:

\[ A = V \Lambda V^{-1} \]

where \(V\) is the matrix of eigenvectors, and \(\Lambda\) a diagonal matrix with the eigenvalues of \(A\) on the diagonal. As such, \(V^{-1} A V = \Lambda\). This is a “similarity transformation”, meaning that the eigenvalues of \(A\) and \(\Lambda\) are (obviously) the same.

If the matrix \(A\) contains only real numbers (as is always the case in population models), then the eigenvalues of \(A\) are either real, or form pairs of complex conjugates of the form \(\alpha \pm i\, \beta\). This means, for example, that all odd-sized real matrices have at least one real eigenvalue. If \(A\) is real and symmetric, all its eigenvalues are real, its eigenvectors can be chosen to be orthogonal, and we can write \(A = V \Lambda V^{T}\). A diagonal matrix \(A\) has eigenvalues \(\lambda_i = A_{ii}\).

If \(A\) has eigenvalues \(\lambda\), \(B = \beta\, A\) has eigenvalues \(\beta\, \lambda\), and \(C = A + \gamma\, I\) has eigenvalues \(\lambda + \gamma\). The eigenvalues of \(A^2 = A \times A\) are \(\lambda^2\), and the eigenvalues of \(A^{-1}\) are \(\lambda^{-1}\). The eigenvalues of \(A^T\) are the same as those of \(A\) (but the eigenvectors are not the same in general). \(A + A^T\) (this matrix is symmetric) has only real eigenvalues, and \(A - A^T\) (this matrix is skew-symmetric) has purely imaginary eigenvalues.

A matrix is positive definite if all its eigenvalues are real and positive; it is positive semi-definite if all its eigenvalues are real and non-negative. The matrices \(AA^T\) and \(A^T A\) are positive semi-definite, and have the same eigenvalues up to some zeros (a fact used in SVD and PCA). Correlation and covariance matrices have this form.

Provided that some mild conditions are satisfied, a matrix with non-negative entries has a unique largest real eigenvalue, with a corresponding eigenvector that can be chosen to have strictly positive components (Perron-Frobenius theorem). A non-negative matrix with constant row sums \(\theta\) has \(\theta\) as its largest eigenvalue, with \(v = 1\) as the corresponding eigenvector (e.g., the transition matrices of Markov chains).
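
A few of these facts can be spot-checked numerically; a minimal sketch for a random real matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))        # a random real matrix

print(np.linalg.eigvals(A))        # complex eigenvalues come in conjugate pairs
print(np.linalg.eigvals(A.T))      # same spectrum as that of A
print(np.linalg.eigvals(A + A.T))  # symmetric part: all eigenvalues real
print(np.linalg.eigvals(A - A.T))  # skew-symmetric part: purely imaginary
                                   # (up to numerical roundoff)
```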

Descartes’ rule of signs

Take \(\alpha_i\) to be the coefficients of the polynomial in standard form:

\[ P(\lambda) = \sum_{i=0}^n \alpha_i \lambda^i \]

then the number of real positive roots of the polynomial is at most the number of sign changes in the sequence \(\alpha_0, \ldots, \alpha_n\), omitting the zeros. As a corollary, the number of real negative roots is at most the number of sign changes when we write \(P(-\lambda)\). In particular, to have no real positive roots, we need all the \(\alpha_i\) to have the same sign (omitting the zeros). The difference between the number of sign changes and the number of positive roots is always even.

Example:

\[ P(\lambda) = 1 + 4 \lambda - 5 \lambda^2 + \lambda^3 \] we have two sign changes (the signs are \(+ + - +\)), and therefore the polynomial has at most two positive real roots (i.e., either two positive roots, or zero). Taking:

\[ P(-\lambda) = 1 - 4 \lambda - 5 \lambda^2 - \lambda^3 \]

we find a single change in signs, meaning that the polynomial has exactly one negative real root (recall that the difference between the number of sign changes and the number of roots is even). This polynomial has indeed one negative and two positive real roots.
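
A quick numerical confirmation (note that numpy.roots takes the coefficients from the highest degree down):

```python
import numpy as np

# P(lambda) = lambda^3 - 5 lambda^2 + 4 lambda + 1
print(np.roots([1, -5, 4, 1]))  # one negative and two positive real roots
```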

Routh-Hurwitz criterion

The Routh-Hurwitz criterion allows us to determine whether all the roots of a polynomial lie in the left half-plane (i.e., have negative real part). Such polynomials are called stable, for reasons that will become obvious in a moment. The criterion is easily stated for second- and third-degree polynomials.

The polynomial:

\[ P(\lambda) = \alpha_0 + \alpha_1 \lambda + \lambda^2 \] is stable if and only if \(\alpha_0 > 0\) and \(\alpha_1 >0\).

The polynomial:

\[ P(\lambda) = \alpha_0 + \alpha_1 \lambda + \alpha_2 \lambda^2 + \lambda^3 \] is stable if and only if \(\alpha_i > 0\quad \forall i\) and \(\alpha_2 \alpha_1 > \alpha_0\).

Homework 2a

Show that a matrix \(A\) with sign-pattern:

\[ S(A) = \begin{pmatrix} - & +\\ - & 0 \end{pmatrix} \]

has eigenvalues with negative real part. When a matrix is stable due to its sign-pattern (rather than specific values of the coefficients), it is called qualitatively stable.

Find all \(3 \times 3\) matrices that are sign-stable. (Showing that there are no purely imaginary eigenvalues is challenging; proving that this is the case is optional).

Key paper: May (1973)

The idea of “sign-stability” was first introduced in economics. May borrowed the idea to analyze simple food webs. Analysis of ecological models based on the signs (rather than the magnitudes) of the coefficients was further developed by Richard Levins and collaborators. A good modern summary, along with a few new results, is provided by Dambacher et al. (2003).

We can now analyze systems of linear ODEs.

Solution of linear systems of ODEs

Matrix exponential

In analogy with the power series

\[ e^x = \sum_{n=0}^{\infty} \dfrac{x^n}{n!} = 1 + x + \dfrac{x^2}{2} + \dfrac{x^3}{6} + \dfrac{x^4}{24} + \ldots \]

we define the matrix exponential

\[ e^X = \sum_{n=0}^{\infty} \dfrac{1}{n!}X^n = I + X + \dfrac{1}{2}X^2 + \dfrac{1}{6}X^3 + \dfrac{1}{24}X^4 + \ldots \]

where \(X^2 = X \times X\) and so on.
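
As a sanity check, here is a sketch comparing the truncated power series with scipy’s built-in matrix exponential, for an arbitrary matrix \(X\):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0],
              [-1.0, 0.0]])  # arbitrary example matrix

partial_sum = np.eye(2)      # the n = 0 term, I
term = np.eye(2)
for n in range(1, 20):
    term = term @ X / n      # X^n / n!
    partial_sum += term

print(partial_sum)           # the truncated series...
print(expm(X))               # ...agrees with expm to many digits
```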

Solution of systems of linear ODEs

This allows us to solve the system of linear differential equations

\[ \dfrac{d x(t)}{dt} = A x(t) \]

with \(x(0) = x_0\). When \(\det A \neq 0\), the only equilibrium of the system is \(x^\star = 0\). If, on the other hand, \(\det A = 0\) (and \(A \neq 0\)), there are infinitely many equilibria, all lying on the same line (in two dimensions). By writing the solution explicitly, we can determine whether trajectories will approach or move away from an equilibrium. We write the solution as:

\[ x(t) = e^{A t} x_0 \]

Importantly, we have that, if \(A\) is diagonalizable,

\[ e^{A t} = V e^{\Lambda t}V^{-1} \]

where \(\Lambda\) is the diagonal matrix containing the eigenvalues of \(A\). Because \(\Lambda\) is diagonal, we can solve the exponential explicitly:

\[ e^{\Lambda t} = \begin{pmatrix}e^{\lambda_{1}t} & 0 & \ldots & 0\\0& e^{\lambda_{2}t} & \ldots & 0\\\vdots & \vdots & \ddots & \vdots \\0& 0& \ldots & e^{\lambda_{n}t} \end{pmatrix} \]

Now we want to study the dynamics of the system. To keep the complexity to a minimum, we define \(y(t) = V^{-1} x(t)\), meaning that we are choosing the most appropriate coordinates to study our trajectories. Our equation becomes:

\[ \begin{aligned} x(t) &= V e^{\Lambda t} V^{-1} x_0\\ V^{-1}x(t) &= V^{-1}V e^{\Lambda t} V^{-1} x_0\\ y(t) &= e^{\Lambda t} y_0 \end{aligned} \]

And therefore \(y_j(t) = e^{\lambda_j t} y_j(0)\). Clearly, if all the \(\lambda_j\) are real and negative, the trajectories will die out, and the origin \(y^\star = 0\) is stable. Similarly, if any eigenvalue is real and positive, then the perturbation will amplify in at least one direction. Next, we consider complex eigenvalues, \(\lambda_j = \alpha + i\, \beta\). Using Euler’s formula, we have

\[ y_j(t) = y_j(0)\, e^{(\alpha + i\, \beta) t} = y_j(0)\, e^{\alpha\, t} (\cos(\beta\, t) + i \sin(\beta\, t)) \]

As such, the solution will oscillate, with damped oscillations whenever \(\alpha < 0\) and increasing oscillations when \(\alpha > 0\). For example, a case of damped oscillations:

and of increasing oscillations:
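
A minimal sketch reproducing both cases (the matrices are arbitrary examples with eigenvalues \(\alpha \pm i\)):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import expm

x0 = np.array([1.0, 0.0])
ts = np.linspace(0, 30, 600)

for alpha, label in [(-0.1, "damped"), (0.1, "increasing")]:
    A = np.array([[alpha, -1.0],
                  [1.0, alpha]])  # eigenvalues alpha +/- i
    xs = np.array([expm(A * t) @ x0 for t in ts])
    plt.plot(ts, xs[:, 0], label=label + " oscillations")

plt.xlabel("t")
plt.ylabel("first coordinate of x(t)")
plt.legend()
plt.show()
```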

In fact, all the possible trajectories in planar systems can be classified using only the determinant (product of eigenvalues) and the trace (sum of the eigenvalues) of the matrix \(A\):

Now we can solve the system of ODEs:

\[ \frac{d \Delta x(t)}{dt} \approx M \Delta x(t) \]

obtaining:

\[ \Delta x(t) = e^{Mt}\Delta x(0) = V e^{\Lambda t} V^{-1}\Delta x(0) \]

where \(V\) is the matrix containing the (unit) eigenvectors of \(M\), and \(\Lambda\) is a diagonal matrix containing the eigenvalues of \(M\). As such, the eigenvalues of \(M\) determine the stability of the equilibrium \({x}^\star\): if all the eigenvalues have negative real part, then the system will eventually return to the equilibrium after sufficiently small perturbations; conversely, if any of the eigenvalues have positive real part, the system will move away from the equilibrium whenever perturbed. Therefore, depending on the sign of the real part of the “rightmost” eigenvalue of \({M}\), \(\lambda_1\), we can determine the stability of \({x}^\star\):

\[ \text{Re}(\lambda_1) \begin{cases} < 0 \to {x}^\star \quad \text{is stable}\\ > 0 \to {x}^\star \quad \text{is unstable} \end{cases} \]

Key paper: Neubert and Caswell (1997)

As we have seen, some simple linear algebra can be used to determine the long-term behavior of an ecological system. As Neubert & Caswell show in this masterful paper, similar techniques can be used to determine the transient behavior of these systems: will perturbations amplify before subsiding? How far from the equilibrium can the system move?

2.3 Stability analysis of the Lotka-Volterra Predator-Prey model

We still haven’t figured out whether the coexistence equilibrium for the model

\[ \begin{cases} \dfrac{d x(t)}{dt} = \rho\, x(t) - \alpha\, x(t)\, y(t)\\ \dfrac{d y(t)}{dt} = -\delta\, y(t) + \beta\, x(t)\, y(t) \end{cases} \]

is stable or not. The Jacobian becomes:

\[ J = \begin{pmatrix} \rho - \alpha\, y & -\alpha\, x\\ \beta\, y & -\delta + \beta\, x \end{pmatrix} \]

The community matrix for the equilibrium \((x^\star, y^\star)^T = (0, 0)^T\) is:

\[ M_0 = \left . J \right|_{(0, 0)^T} = \begin{pmatrix} \rho & 0\\ 0 & -\delta \end{pmatrix} \]

which has eigenvalues \(\rho > 0\) and \(-\delta < 0\) — the equilibrium is unstable. The coexistence equilibrium \((x^\star, y^\star)^T = (\delta / \beta, \rho / \alpha)^T\) yields the community matrix:

\[ M_c = \left . J \right|_{(\delta / \beta, \rho / \alpha)^T} = \begin{pmatrix} 0 & -\dfrac{\alpha\, \delta}{\beta} \\ \dfrac{\beta\, \rho}{\alpha} & 0 \end{pmatrix} \]

which has purely imaginary eigenvalues \(\lambda = \pm i \sqrt{\rho \, \delta}\). As such, the equilibrium is neither attractive nor repelling. In fact, Lotka and Volterra were (independently) able to prove that the system cycles neutrally around the equilibrium.

Vito Volterra (1860-1940)

Vito Volterra was born in Ancona (then part of the Papal State) into a poor Jewish family. The family’s financial situation worsened with the death of his father, when Vito was two. Vito and his mother went to live with relatives in Turin and then Florence. Volterra showed amazing mathematical talent at a very young age. Antonio Roiti, professor of physics in Florence, noticed the budding mathematician and hired him as his assistant, so that he could continue his studies. He went on to enroll at the Scuola Normale in Pisa, receiving a degree in Physics in 1882. At age 23 he was made full professor of Rational Mechanics in Pisa, and then, in 1900, of Mathematical Physics in Rome. For thirty years, he contributed important studies in mathematics and enriched academic life in Italy (for example, he was the first president of the National Research Council). In 1931 he refused to take an oath of loyalty to the fascist regime (only 12 professors out of 1250 refused), and was therefore forced to resign (his take on the fascist enterprise: “Empires die, but Euclid’s theorems keep their youth forever”).

His interest in mathematical ecology was due to Umberto D’Ancona (his son-in-law), who had studied the trends in fisheries in the Adriatic sea before and immediately after WWI. In 1914–1918, fishing in the Adriatic had stopped almost completely because of the conflict. D’Ancona had noticed that, while the herbivorous fish had remained about constant, the piscivorous fish had increased dramatically in numbers. The problem piqued Volterra’s interest, and he immediately published a sophisticated study, proposing the same equations studied by Lotka. In a short letter to Nature (Volterra 1926a), he stated the so-called “Volterra’s Effect” (which he termed “Law III”): “a complete closure of the fishery was a form of ‘protection’ under which the voracious fishes were much the better and prospered accordingly, but the ordinary food-fishes, on which these are accustomed to prey, were worse off than before.” This brief paper was a summary of a much more extensive article (Volterra 1926b).

Lotka-Volterra interactions

In 1927, Lotka wrote to Nature to raise the issue that the equations studied by Volterra and the figures presented in Volterra’s brief article were identical to those found in Elements of Physical Biology (published in 1925). He concluded: “It would be gratifying if Prof. Volterra’s publication should direct attention to a field and method of inquiry which apparently has hitherto passed almost unnoticed.”

Volterra graciously conceded: “I recognize his priority, and am sorry not to have known his work, and therefore not have been able to mention it.” He, however, listed a few points in which the two authors had pursued different directions, and concluded: “Working independently the one from the other, we have found some common results, and this confirms the exactitude and the interest in the position of the problem. I agree with him in his conclusions that these studies and these methods of research deserve to receive greater attention from scholars, and should give rise to important applications.”

2.3.1 Constant of motion for Lotka-Volterra Predator-Prey

Write:

\[ \dfrac{d x(t)}{d y(t)} = \dfrac{\rho\, x(t) - \alpha\, x(t)\, y(t)}{-\delta\, y(t) + \beta\, x(t)\, y(t)} \]

As such:

\[ (-\delta\, y + \beta\, x\, y)\, d x = (\rho\, x - \alpha\, x\, y)\, d y \]

dividing both sides by \(x\, y\), we obtain:

\[ \left(-\dfrac{\delta}{x} + \beta \right)\, d x = \left(\dfrac{\rho}{y} - \alpha \right)\, d y \]

Integrating both sides:

\[ \beta\, x - \delta\, \log x = - \alpha\, y + \rho\, \log y + C \]

where \(C\) is a constant of integration. Rearranging, and substituting \(x^\star = \delta / \beta\) and \(y^\star = \rho / \alpha\), we obtain:

\[ \beta (x - x^\star \log x) + \alpha (y - y^\star \log y) = C \]

This means that the system has a constant of motion (a.k.a. “first integral”): the value of \(C\) is set by the initial conditions, and the system then cycles around the equilibrium in closed orbits. Note that a) the left-hand side attains its minimum value \(C_m\) at the equilibrium; b) \(C = C_m\) if we start exactly at the equilibrium; and therefore c) \(C > C_m\) for all other (positive) initial conditions.

We can use this constant of motion to build a Lyapunov function:

\[ V(x,y) = \beta (x - x^\star \log x) + \alpha (y - y^\star \log y) \]

\[ \dfrac{d V(x,y)}{dt} = \left(\beta - \dfrac{\delta}{x} \right) \dfrac{dx}{dt}+\left(\alpha - \dfrac{\rho}{y} \right) \dfrac{dy}{dt} = 0 \]

Therefore, the dynamics are such that \(V(x,y)\) remains constant.
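
We can also verify this numerically: integrating the system (with hypothetical parameters) and evaluating \(V\) along the trajectory, the value stays constant up to integration error:

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, alpha, delta, beta = 1.0, 1.0, 0.5, 0.25  # hypothetical parameters
xstar, ystar = delta / beta, rho / alpha       # coexistence equilibrium

def lv(t, z):
    x, y = z
    return [rho * x - alpha * x * y, -delta * y + beta * x * y]

sol = solve_ivp(lv, (0, 100), [1.0, 1.5], rtol=1e-10, atol=1e-10,
                t_eval=np.linspace(0, 100, 500))
x, y = sol.y

V = beta * (x - xstar * np.log(x)) + alpha * (y - ystar * np.log(y))
print(V.min(), V.max())  # the two values coincide to many digits
```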

2.3.2 Analysis of Predator-Prey model with logistic growth for the prey

In the classic Lotka-Volterra model, the local stability analysis of the coexistence equilibrium is inconclusive. When, however, the prey grows logistically, it is easy to show that the equilibrium becomes stable. For example, consider the system

\[ \begin{cases} \dfrac{dx}{dt} = x (1 - x/2 -y)\\ \dfrac{dy}{dt} = y (x - 1) \end{cases} \]

First, we find the equilibria. From the first equation, we have \(x^\star = 0\) or \(y^\star = 1 - x^\star /2\); from the second equation, we have \(y^\star = 0\) or \(x^\star = 1\). Combining them, we find that either \((x^\star, y^\star)^T = (0, 0)^T\) (trivial), \((x^\star, y^\star)^T = (2, 0)^T\) (boundary), or \((x^\star, y^\star)^T = (1, 1/2)^T\) (coexistence).

The isoclines of zero growth are \(y = 1 - x/2\) for the prey, and \(x = 1\) for the predator. The Jacobian for the system is

\[ J = \begin{pmatrix} 1 - x - y & -x\\ y & x -1 \end{pmatrix} \]

At the equilibrium \((x^\star, y^\star) = (1, 1/2)\), we have:

\[ M = \begin{pmatrix} - \frac{1}{2} & -1\\ \frac{1}{2} & 0 \end{pmatrix} \]

The trace is \(-1/2\) and the determinant is \(1/2\); as such, the equilibrium is stable. The eigenvalues are the roots of the characteristic polynomial:

\[ p(\lambda) = \lambda^2 + \frac{1}{2} \lambda + \frac{1}{2} \]

Obtaining:

\[ \lambda = \dfrac{-\frac{1}{2} \pm \sqrt{\frac{1}{4} - 2}}{2} = \dfrac{-1 \pm i \sqrt{7}}{4} \]

And as such small perturbations will oscillate back to equilibrium. Numerically:
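
A minimal sketch of such a simulation (the initial conditions are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def model(t, z):
    x, y = z
    return [x * (1 - x / 2 - y), y * (x - 1)]

sol = solve_ivp(model, (0, 60), [0.5, 1.0], t_eval=np.linspace(0, 60, 1000))

plt.plot(sol.t, sol.y[0], label="prey, x(t)")
plt.plot(sol.t, sol.y[1], label="predator, y(t)")
plt.axhline(1.0, linestyle=":")  # x* = 1
plt.axhline(0.5, linestyle=":")  # y* = 1/2
plt.xlabel("t")
plt.legend()
plt.show()
```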

2.3.2.1 Global stability

We can write a Lyapunov function for the system above. Take:

\[ V(x,y) = \left(x - x^\star - x^\star \log \dfrac{x}{x^\star} \right) + \left(y - y^\star - y^\star \log \dfrac{y}{y^\star} \right) = (x - 1 - \log x) + \left(y - \dfrac{1}{2} - \dfrac{1}{2} \log 2y \right) \]

Differentiating with respect to time, we obtain:

\[ \dfrac{d V(x,y)}{dt} = -\dfrac{1}{2}(1-x)^2 \leq 0 \]

with equality only on the line \(x = 1\). As such, \(V\) cannot increase along trajectories, and the equilibrium \((x^\star, y^\star) = (1, 1/2)\) is globally stable.
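
The computation of \(dV/dt\) can be verified symbolically; a short sympy sketch:

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)
fx = x * (1 - x / 2 - y)  # dx/dt
fy = y * (x - 1)          # dy/dt

V = (x - 1 - sp.log(x)) + (y - sp.Rational(1, 2) - sp.log(2 * y) / 2)
dVdt = sp.diff(V, x) * fx + sp.diff(V, y) * fy
print(sp.simplify(dVdt))  # simplifies to -(x - 1)**2/2
```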

Homework 2b

Take the competitive Lotka-Volterra model:

\[ \dfrac{dx}{dt} = D(x) (r - Ax) \]

with the coefficients of \(r = (r_1, r_2)^T\) and \(A\) all positive, and with \(D(x)\) denoting the diagonal matrix with the vector \(x\) on its diagonal. Discuss the existence of a coexistence equilibrium and its stability. Write a Lyapunov function for the case in which \(A_{11} = A_{22} > A_{12} = A_{21}\).

References

Dambacher, J. M., H.-K. Luh, H. W. Li, and P. A. Rossignol. 2003. Qualitative stability and ambiguity in model ecosystems. The American Naturalist 161:876–888.
Lotka, A. J. 1920. Analytical note on certain rhythmic relations in organic systems. Proceedings of the National Academy of Sciences 6:410–415.
Lotka, A. J. 1926. The frequency distribution of scientific productivity. Journal of the Washington Academy of Sciences 16:317–323.
Lotka, A. J. 2002. Contribution to the theory of periodic reactions. The Journal of Physical Chemistry 14:271–274.
May, R. M. 1973. Qualitative stability in model ecosystems. Ecology 54:638–641.
Neubert, M. G., and H. Caswell. 1997. Alternatives to resilience for measuring the responses of ecological systems to perturbations. Ecology 78:653–665.
Volterra, V. 1926a. Fluctuations in the abundance of a species considered mathematically. Nature 118:558–560.
Volterra, V. 1926b. Variazioni e fluttuazioni del numero d’individui in specie animali conviventi. Memor. Accad. Lincei. 6:31–113.