Mathematical Origins of Life

The purpose of this post is to demonstrate some very beautiful (I think!) mathematics that arises from Darwinian evolutionary theory. It is a real shame that most courses and discussions dealing with evolution never introduce any type of mathematical formalism, which is very strange, since at the most fundamental levels, evolution must also be governed by quantum mechanics and electromagnetism, from which chemistry and biochemistry arise via top-down and bottom-up causation. See this article by George Ellis for more on the role of top-down causation in the universe and the hierarchy of physical matter. Indeed, my personal belief is that if some biologists and evolutionary biologists like Dawkins, Coyne, and others took the time to explain evolution with some modicum of mathematical formalism to properly describe the underlying mechanics, instead of using it as an opportunity to attack religious people, the world would be a much better place, and the dialogue between science and religion would be much smoother and more intelligible.

In this post, I will describe some of the formalism behind the phenomenon of prebiotic evolution. It turns out that there is a very good book by Claudius Gros on understanding evolution as a complex dynamical system (dynamical systems theory is my main area of research), and the interested reader should check out his book for more details on what follows below.

We can, for simplicity, consider a quasispecies as a system of macromolecules that have the ability to carry information, and consider the dynamics of the concentrations of the constituent molecules via the following dynamical system:

\boxed{\dot{x}_{i} = W_{ii}x_{i} + \sum_{j \neq i}W_{ij}x_{j} - x_{i} \phi(t)},

where x_{i} are the concentrations of the N molecules, W_{ii} are the autocatalytic self-replication rates, W_{ij} are the mutation rates, and \phi(t) is a flux that keeps the total concentration normalized.
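
As a quick numerical illustration of this system, here is a minimal sketch; the W matrix below is a small random example of my own (not from the post), and the flux \phi(t) is taken to be the total production rate so that the total concentration stays normalized to one:

```python
# Minimal numerical sketch of the quasispecies equation above; the W matrix
# here is a small random example, not data from the post.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
N = 5
W = np.diag(rng.uniform(1.0, 2.0, N))             # autocatalytic rates W_ii
W += 0.05 * rng.random((N, N)) * (1 - np.eye(N))  # small mutation rates W_ij

def rhs(t, x):
    growth = W @ x       # W_ii x_i + sum_{j != i} W_ij x_j
    phi = growth.sum()   # flux chosen so that sum_i x_i stays at 1
    return growth - x * phi

x0 = np.full(N, 1.0 / N)
sol = solve_ivp(rhs, (0.0, 50.0), x0)
print(sol.y[:, -1].round(4), sol.y[:, -1].sum())  # quasispecies distribution, total ~ 1
```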

From this, we can consider the following catalytic reaction equations:

\boxed{\dot{x}_i = x_{i} \left(\lambda_{i} + \sum_{j}\kappa_{ij} x_j - \phi \right)},

\boxed{\phi = \sum_{k} x_{k}\left(\lambda_{k} + \sum_{j}\kappa_{kj} x_j\right) },

where x_i are the concentrations, \lambda_i are the autocatalytic growth rates, and \kappa_{ij} are the transmolecular catalytic rates. We choose \phi such that

\boxed{\dot{C} = \sum_i \dot{x}_i = \sum_i x_i \left(\lambda_i + \sum_j \kappa_{ij}x_{j} \right) - C \phi = (1-C)\phi}.

Clearly,

\left[(1-C)\phi\right]_{C=1} = 0,

that is, C = 1 is a fixed point of this flow: once the concentrations are normalized so that C = 1, the total concentration remains constant.

Let us consider now the case of homogeneous interactions such that

\kappa_{i \neq j} = \kappa, \kappa_{ii} = 0, \lambda_i = \alpha i,

which leads to

\boxed{\dot{x}_{i} = x_{i} \left(\lambda_i + \kappa \sum_{j \neq i} x_{j} - \phi \right)} ,

which, using the normalization \sum_{j \neq i} x_{j} = C - x_{i} = 1 - x_{i}, becomes

\boxed{\dot{x}_i = x_i \left(\lambda_i + \kappa - \kappa x_i - \phi\right)}.

For each i, this is effectively a one-dimensional ODE with the following invariant submanifolds:

\boxed{x_{i}^* = \frac{\lambda_i + \kappa - \phi}{\kappa}},

\boxed{x_i^* = 0}.

With homogeneous interactions, the concentrations with the largest growth rates (the largest being \lambda_N = \alpha N) will dominate, so there exists an N^* with 1 \leq N^* \leq N such that

\boxed{x_i^* = \begin{cases} \dfrac{\lambda_i + \kappa - \phi}{\kappa}, & N^* \leq i \leq N, \\ 0, & 1 \leq i < N^*. \end{cases}}

The quantities N^* and \phi are determined via normalization conditions that give us a system of equations:

\boxed{1 = \frac{\alpha}{2\kappa} \left[N(N+1) - N^*(N^* - 1)\right] + \left[\frac{\kappa - \phi}{\kappa}\right] \left(N + 1 - N^*\right)},

\boxed{0 = \frac{\lambda_{N^*-1} + \kappa - \phi}{\kappa} = \frac{\alpha(N^* - 1)}{\kappa} + \frac{\kappa - \phi}{\kappa} }.
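
To fill in the intermediate step: substituting the second condition, (\kappa - \phi)/\kappa = -\alpha(N^* - 1)/\kappa, into the first, and using the factorization N(N+1) - N^*(N^* - 1) = (N + N^*)(N - N^* + 1), the normalization condition collapses to

1 = \frac{\alpha}{2\kappa}\left(N - N^* + 1\right)\left(N - N^* + 2\right).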

For large N, N^*, we obtain the approximation

\boxed{N - N^* \approx \sqrt{\frac{2 \kappa}{\alpha}}},

which is the number of surviving species.

Clearly, this is non-zero for a finite catalytic rate \kappa. This shows the formation of a hypercycle of molecules/quasispecies.
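
To see this mechanism in action, here is a minimal numerical sketch of the homogeneous system; the values of N, \alpha, and \kappa are illustrative choices of my own:

```python
# Sketch: integrate the homogeneous catalytic network above and count the
# surviving species; N, alpha, kappa are illustrative values, not from the post.
import numpy as np
from scipy.integrate import solve_ivp

N, alpha, kappa = 100, 1.0, 50.0
lam = alpha * np.arange(1, N + 1)         # lambda_i = alpha * i

def rhs(t, x):
    growth = lam + kappa * (x.sum() - x)  # lambda_i + kappa * sum_{j != i} x_j
    phi = np.dot(x, growth)               # the choice of phi that keeps C = 1
    return x * (growth - phi)

x0 = np.full(N, 1.0 / N)
sol = solve_ivp(rhs, (0.0, 200.0), x0, rtol=1e-8, atol=1e-10)
xf = sol.y[:, -1]
print("total concentration C:", xf.sum())   # stays at 1
print("survivors:", int((xf > 1e-6).sum()),
      "vs sqrt(2 kappa / alpha):", np.sqrt(2 * kappa / alpha))  # roughly equal
```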

These computations should, of course, be taken with a grain of salt. As pointed out in several sources, hypercycles describe closed systems, but life exists in an open system driven by an energy flux. Despite this, the interesting thing is that the last calculation shows a clear division between the molecules i = N^*, \ldots, N, which can be considered a type of primordial life-form, and the remaining molecules, which belong to the environment.


Black Holes, Black Holes Everywhere


Nowadays, one cannot watch a popular science TV show, read a popular science book, or take an astrophysics class without hearing about black holes. The problem is that very few people discuss this topic appropriately. This is further evidenced by the fact that these same people also claim that the universe's expansion is governed by the Friedmann equation as applied to a Friedmann-Lemaitre-Robertson-Walker (FLRW) universe.

The fact is that black holes, despite what is widely claimed, are not astrophysical phenomena; they are phenomena that arise from mathematical general relativity. That is, we postulate their existence from mathematical general relativity, in particular, from Birkhoff's theorem, which states the following (Hawking and Ellis, 1973):

Any C^2 solution of Einstein’s vacuum equations which is spherically symmetric in some open set V is locally equivalent to part of the maximally extended Schwarzschild solution in V.

In other words, if a spacetime contains a region which is spherically symmetric, asymptotically flat/static, and empty, such that T_{ab} = 0, then the metric in this region is described by the Schwarzschild metric:

\boxed{ds^2 = -\left(1 - \frac{2M}{r}\right)dt^2 + \frac{dr^2}{1-\frac{2M}{r}} + r^2\left(d\theta^2 + \sin^2 \theta d\phi^2\right)}

The concept of a black hole then arises from this metric: the surface r = 2M is an event horizon, and r = 0 is a genuine curvature singularity.
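
A small numerical aside: the metric above is written in geometric units G = c = 1; restoring constants puts the horizon at r_s = 2GM/c^2. The Sagittarius A*-scale mass below is an assumed example:

```python
# Horizon radius r_s = 2GM/c^2, restoring the G = c = 1 units of the metric above.
G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m / s
M_SUN = 1.989e30  # kg

def schwarzschild_radius(mass_kg: float) -> float:
    return 2.0 * G * mass_kg / c**2

print(schwarzschild_radius(M_SUN))           # ~2.95e3 m (about 3 km) for one solar mass
print(schwarzschild_radius(4.3e6 * M_SUN))   # ~1.3e10 m for a Sagittarius A*-scale mass
```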

The problem then arises in most discussions nowadays because the very same astrophysicists who claim that black holes exist also claim that the universe is expanding according to the Einstein field equations as applied to an FLRW metric, which are frequently written as:

The Raychaudhuri equation:

\boxed{\dot{H} = -H^2 - \frac{1}{6} \left(\mu + 3p\right)},

(where H is the Hubble parameter)

The Friedmann equation:

\boxed{\mu = 3H^2 + \frac{1}{2} ^{3}R},

(where \mu is the energy density of the dominant matter in the universe and ^{3}R is the Ricci 3-scalar of the particular FLRW model),

and

The Energy Conservation equation:

\boxed{\dot{\mu} = -3H \left(\mu + p\right)}.
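
To make these three equations concrete, here is a minimal sketch that evolves a flat (^{3}R = 0), dust-dominated (p = 0) model and checks the textbook matter-dominated behaviour; the units 8\pi G = c = 1 and the initial data are my assumptions:

```python
# Sketch: evolve a flat, dust FLRW model with the Raychaudhuri and energy
# conservation equations above, in units 8*pi*G = c = 1, and check that
# H = 2 / (3(t - t0)), the matter-dominated solution.
import numpy as np
from scipy.integrate import solve_ivp

p = 0.0  # dust

def rhs(t, y):
    H, mu = y
    return [-H**2 - (mu + 3 * p) / 6.0,  # Raychaudhuri equation
            -3.0 * H * (mu + p)]         # energy conservation equation

H0 = 1.0
sol = solve_ivp(rhs, (1.0, 100.0), [H0, 3.0 * H0**2], rtol=1e-10)  # mu0 = 3 H0^2 (flat Friedmann)

H_exact = 2.0 / (3.0 * (sol.t - 1.0 / 3.0))  # t0 fixed by H(1) = 1
print(np.max(np.abs(sol.y[0] - H_exact)))    # ~0: matter-dominated expansion recovered
```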

The point is that one cannot have it both ways! One cannot claim on one hand that black holes exist in the universe, while also claiming that the universe is FLRW. By Birkhoff's theorem, the spacetime external to the black hole source must be spherically symmetric and static, whereas an FLRW spacetime is neither static nor asymptotically flat, because it lacks a global timelike Killing vector.

I therefore believe that models of the universe that incorporate both black holes and large-scale spatial homogeneity and isotropy should be much more widely introduced and discussed in the mainstream cosmology community. One such example is the class of Swiss-Cheese universe models. These models assume an FLRW spacetime with patches "cut out" in such a way as to allow Schwarzschild solutions to exist simultaneously. Swiss-Cheese universes actually have a tremendous amount of explanatory power. One of the mysteries of current cosmology is the origin of dark energy. The beautiful thing about Swiss-Cheese universes is that one is not required to postulate a hypothetical dark energy to account for the accelerated expansion of the universe. This interesting article from New Scientist from a few years ago explains some of this.

Also, the original Swiss-Cheese universe model in its simplest foundational form was actually proposed by Einstein and Straus in 1945.

The basic idea is as follows, and is based on Israel's junction formalism (see Hervik and Gron's book and Israel's original paper for further details; I will just describe the basic idea in what follows). Let us take a spacetime and partition it into two parts:

\boxed{M = M^{+} \cup M^{-}}

with a boundary

\boxed{\Sigma \equiv \partial M^{+} \cap \partial M^{-}}.

Now, within these regions, we assume that the Einstein field equations are satisfied, such that:

\boxed{\left[R_{uv} - \frac{1}{2}R g_{uv}\right]^{\pm} = \kappa T_{uv}^{\pm}},

where we also induce a metric on \Sigma as:

\boxed{d\sigma^2 = h_{ij}dx^{i} dx^{j}}.

The trick with Israel's method is understanding how \Sigma is embedded in M^{\pm}. This can be quantified by the extrinsic curvature, built from the covariant derivative on the basis vectors of \Sigma:

\boxed{K_{uv}^{\pm} = \epsilon n_{a} \Gamma^{a}_{uv}}.

The projections of the Einstein tensor are then given by Gauss' theorem and the Codazzi equation:

\boxed{\left[E_{uv}n^{u}n^{v}\right]^{\pm} = -\frac{1}{2}\epsilon ^{3}R + \frac{1}{2}\left(K^2 - K_{ab}K^{ab}\right)^{\pm}},

\boxed{\left[E_{uv}h^{u}_{a} n^{v}\right]^{\pm} = -\left(^{3}\nabla_{u}K^{u}_{a} - ^{3}\nabla_{a}K\right)^{\pm}},

\boxed{\left[E_{uv}h^{u}_{a}h^{v}_{b}\right]^{\pm} = ^{(3)}E_{ab} + \epsilon n^{u} \nabla_{u} \left(K_{ab} - h_{ab}K\right)^{\pm} - 3 \left[\epsilon K_{ab}K\right]^{\pm} + 2 \epsilon \left[K^{u}_{a} K_{ub}\right]^{\pm} + \frac{1}{2}\epsilon h_{ab} \left(K^2 + K^{uv}K_{uv}\right)^{\pm}}

Defining the operation [T] \equiv T^{+} - T^{-}, the Einstein field equations are given by the Lanczos equation:

\boxed{\left[K_{ij}\right] - h_{ij} \left[K\right] = \epsilon \kappa S_{ij}},

where S_{ij} results from defining an energy-momentum tensor across the boundary, and computing

\boxed{S_{ij} = \lim_{\tau \to 0} \int^{\tau/2}_{-\tau/2} T_{ij} dy}.
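
As a classic worked application of the Lanczos equation (my addition; it is not computed in the post): a static spherical shell with Minkowski interior and Schwarzschild exterior of mass M has surface energy density \sigma = (1 - \sqrt{1 - 2M/R})/(4\pi R) in G = c = 1 units. A quick numerical sketch:

```python
# Static thin shell: Minkowski interior, Schwarzschild(M) exterior, G = c = 1.
# The jump in extrinsic curvature across Sigma gives the standard textbook
# result sigma = (1 - sqrt(1 - 2M/R)) / (4 pi R).
import numpy as np

def shell_sigma(M: float, R: float) -> float:
    return (1.0 - np.sqrt(1.0 - 2.0 * M / R)) / (4.0 * np.pi * R)

M, R = 1.0, 10.0
sigma = shell_sigma(M, R)
print(sigma)                        # surface energy density of the shell
print(4.0 * np.pi * R**2 * sigma)   # ~1.06 > M: rest mass exceeds M by the binding energy
```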

The remaining dynamical equations are then given by

\boxed{^{3}\nabla_{j}S^{j}_{i} + \left[T_{in}\right] = 0},

and

\boxed{S_{ij} \left\{K^{ij}\right\} + \left[T_{nn}\right] = 0},

with the constraints:

\boxed{^{3}R - \left\{K\right\}^2 + \left\{K_{ij}\right\} \left\{K^{ij}\right\} = -\frac{\kappa^2}{4} \left(S_{ij}S^{ij} - \frac{1}{2}S^2\right) - 2 \kappa \left\{T_{nn}\right\}}.

\boxed{\left\{^{3} \nabla_{j}K^{j}_{i} \right\} - \left\{^{3}\nabla_{i}K\right\} = -\kappa \left\{T_{in}\right\}}.

Therefore:

  1. If black holes exist, then by Birkhoff’s theorem, the spacetime external to the black hole source must be spherically symmetric and static, and cannot represent our universe.
  2. Perhaps, a more viable model for our universe is then a spatially inhomogeneous universe on the level of Lemaitre-Tolman-Bondi, Swiss-Cheese, the set of G_{2} cosmologies, etc… The advantage of these models, particular in the case of Swiss-Cheese universes is that one does not need to postulate a hypothetical dark energy to explain the accelerated expansion of the universe, this naturally comes out out of such models.

Under a more general inhomogeneous cosmology, the Einstein field equations now take the form:

Raychauhduri’s Equation:

\boxed{\dot{H} = -H^2 + \frac{1}{3} \left(h^{a}_{b} \nabla_{a}\dot{u}^{b} + \dot{u}_{a}\dot{u}^{a} - 2\sigma^2 + 2 \omega^2\right) - \frac{1}{6}\left(\mu + 3p\right)}

Shear Propagation Equation:

\boxed{h^{a}_{c}h^{b}_{d} \dot{\sigma}^{cd} = -2H\sigma^{ab} + h^{(a}_ch^{b)}_{d}\nabla^{c}\dot{u}^{d} + \dot{u}^{a}\dot{u}^{b} - \sigma^{a}_{c} \sigma^{bc} - \omega^{a}\omega^{b} - \frac{1}{3}\left(h^{c}_{d}\nabla_{c}\dot{u}^{d} + \dot{u}_{c}\dot{u}^{c} - 2\sigma^2 - \omega^2\right)h^{ab} - \left(E^{ab} - \frac{1}{2}\pi^{ab}\right)}

Vorticity Propagation Equation:

\boxed{h^{a}_{b}\dot{\omega}^{b} = -2H\omega^{a} + \sigma^{a}_{b}\omega^{b} - \frac{1}{2}\eta^{abcd}\left(\nabla_{b} \dot{u}_{c}\right)u_{d}}

Constraint Equations:

\boxed{h^{a}_{b} h^{c}_{d} \nabla_{c} \sigma^{bd} - 2h^{a}_{b}\nabla^{b}H - \eta^{abcd}\left(\nabla_{b}\omega_{c} + 2 \dot{u}_{b} \omega_{c}\right)u_{d} + q^{a} = 0},

\boxed{h^{a}_{b} \nabla_{a}\omega^{b} - \dot{u}_{a}\omega^{a} = 0},

\boxed{H_{ab} - 2\dot{u}_{(a}\omega_{b)} - h^{c}_{(a}h^{d}_{b)}\nabla_{c} \omega_{d} + \frac{1}{3} \left(2\dot{u}_{c}\omega^{c} + h^{c}_{d} \nabla_{c}\omega^{d}\right)h_{ab} - h^{c}_{(a}h^{d}_{b)} \eta_{cefg}\left(\nabla^{e}\sigma^{f}_{d}\right)u^{g}=0}.

Matter Evolution Equations through the Bianchi identities:

\boxed{\dot{\mu} = -3H\left(\mu + p\right) - h^{a}_{b}\nabla_{a}q^{b} - 2\dot{u}_{a}q^{a} - \sigma^{a}_{b}\pi^{b}_{a}},

\boxed{h^{a}_{b}\dot{q}^{b} = -4Hq^{a} - h^{a}_{b}\nabla^{b}p - \left(\mu + p\right)\dot{u}^{a} - h^{a}_{c}h^{b}_{d}\nabla_{b} \pi^{cd} - \dot{u}_{b}\pi^{ab} - \sigma^{a}_{b}q^{b} + \eta^{abcd}\omega_{b}q_{c}u_{d}}.

One also has evolution equations for the Weyl curvature tensors E_{ab} and H_{ab}; these can be found in Ellis' Cargese Lectures.
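
As a quick symbolic sanity check (my own, using sympy), one can verify that the generalized Raychaudhuri equation above reduces to the FLRW version given earlier when the shear, vorticity, and acceleration all vanish:

```python
# Check: the general Raychaudhuri equation reduces to the FLRW one when
# sigma = omega = u_dot = 0 (symbols named after the quantities above).
import sympy as sp

H, mu, p, sigma, omega, div_udot, udot_sq = sp.symbols(
    'H mu p sigma omega div_udot udot_sq')

Hdot_general = (-H**2
                + sp.Rational(1, 3) * (div_udot + udot_sq - 2 * sigma**2 + 2 * omega**2)
                - sp.Rational(1, 6) * (mu + 3 * p))
Hdot_flrw = -H**2 - sp.Rational(1, 6) * (mu + 3 * p)

reduced = Hdot_general.subs({sigma: 0, omega: 0, div_udot: 0, udot_sq: 0})
print(sp.simplify(reduced - Hdot_flrw))  # 0: the FLRW equation is recovered
```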

Despite the fact that these modifications are absolutely necessary if one is to take seriously the notion that our universe contains black holes, most astronomers, and indeed most astrophysics courses, continue to use the simpler versions that assume the universe is spatially homogeneous and isotropic, which by definition contradicts the notion of black holes existing in our universe.

An Analysis of The 2015 NBA Finals Matchup

The NBA Finals are exactly five days away, and I wanted to present an analysis breaking down the matchup between the Golden State Warriors and the Cleveland Cavaliers.

I used machine and statistical learning techniques to generate the most probable scenarios for the outcome of each game, and this is what I found.

[Figure: the most probable scenarios for each game of GSW vs. CLE, with their probabilities]

Note that the probabilities listed above are not the probabilities for a team to win a specific game; they are the probabilities of a specific scenario occurring. Also, multiple scenarios can occur in a single game, so the probability of multiple scenarios occurring would be the sum of the individual ones.
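
As an aside, here is an illustrative sketch of how per-game probabilities translate into a series-level probability; this is not the scenario model used above, and the 60% per-game figure is purely an assumption:

```python
# Illustration only: Monte Carlo of a best-of-7 series from an *assumed*
# per-game win probability; not the scenario model used in this post.
import numpy as np

rng = np.random.default_rng(1)
p_game, n_sims = 0.60, 100_000  # assumed per-game win probability

def wins_series(p: float) -> bool:
    w = l = 0
    while w < 4 and l < 4:  # play until one side reaches four wins
        if rng.random() < p:
            w += 1
        else:
            l += 1
    return w == 4

print(np.mean([wins_series(p_game) for _ in range(n_sims)]))  # ~0.71 series win probability
```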

The Model Results So Far (Updated: June 11, 2015)

Game 1: Scenario Outcomes: 1 and 2 – GSW win

Game 2: Scenario Outcome: 9 – CLE win

Game 3: Scenario Outcomes: 5, 8 – CLE win

Thoughts so far: Despite GSW being down 2-1 right now, I still believe that Cleveland's wins were statistical anomalies. According to our model, the scenarios behind Cleveland's Game 2 and Game 3 wins had only 1.07%, 9.34%, and 1.765% chances of occurring in this series, whereas the GSW Game 1 win had a 44% chance of occurring.

Game 4: Scenario Outcome: 2 – GSW win

Updated: June 14, 2015

Game 5: Scenario Outcomes: 1,2 – GSW win

Thoughts: All of GSW's wins have come through the dominant scenarios in this series, i.e., Outcomes 1 and 2. All of CLE's wins in this series have been statistical anomalies/outliers. This pattern continued in Game 5.

Updated: June 17, 2015

Game 6: Scenario Outcomes: 1,2 – GSW win

Another GSW win through the dominant scenarios in the series, as expected. 

Data Analytics and The 1995-1996 Chicago Bulls

It is without question that the greatest team in NBA history was the 1995-1996 Chicago Bulls. They went 72-10 that year and went on to win the NBA Championship against a top-notch Seattle SuperSonics team.

Phil Jackson’s system and first-class coaching were the major reasons why the Bulls were so good, but I wanted to analyze their reason for winning using data science methodologies.

The results that I found were very interesting. First, I mined through each individual game's data to obtain patterns in the Bulls' wins and losses, and this is what I found:

One sees that the Bulls were a defensive nightmare, and if you look at these results in detail, it makes sense that the Sonics were really the only team that ever posed a threat to them. This shows that to beat the Bulls, the opposing team would have to simultaneously:

  1. Ensure Ron Harper had an FG% less than 44.95% in a game,
  2. Ensure Dennis Rodman had fewer than 17 total rebounds in a game,
  3. Ensure Luc Longley had fewer than 2 blocks in a game,
  4. Ensure Michael Jordan had an FG% less than 46.55% in a game.

If any one of these conditions were not met, the Bulls would win!
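
Encoded as a function, the mined rule reads as follows; the column names are hypothetical stand-ins of my own, while the thresholds are the ones listed above:

```python
# The mined rule above as a function; column names are hypothetical stand-ins.
import pandas as pd

def bulls_win(game: pd.Series) -> bool:
    opponent_meets_all_four = (
        game["harper_fg_pct"] < 44.95 and
        game["rodman_total_reb"] < 17 and
        game["longley_blocks"] < 2 and
        game["jordan_fg_pct"] < 46.55
    )
    return not opponent_meets_all_four  # any condition failing means a Bulls win

game = pd.Series({"harper_fg_pct": 50.0, "rodman_total_reb": 20,
                  "longley_blocks": 1, "jordan_fg_pct": 48.0})
print(bulls_win(game))  # True
```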

On some level, this analysis also dispels the notion espoused by several sports analysts, like Skip Bayless of ESPN, who continually claim that the Bulls' sole reason for success was Michael Jordan. Ron Harper's contributions, although of paramount importance, are rarely mentioned nowadays.

This analysis also shows that the key to the success of the Bulls was not necessarily the number of points that Jordan scored, but the incredible efficiency with which he scored them.

A boosting algorithm also allows us to deduce the most important characteristics of the Bulls' quality of play and whether they would win or lose a game. The results are as follows:

We see that a key feature of the Bulls' quality of play was how efficient Ron Harper was in terms of his FG%.
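
For readers curious what such a boosting analysis looks like in practice, here is a sketch using scikit-learn; the data is synthetic and the feature names are my own stand-ins, so the printed importances are purely illustrative:

```python
# Sketch of a boosting feature-importance analysis; synthetic data, not the
# post's dataset, so the printed importances are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 82  # one season of games
X = pd.DataFrame({
    "harper_fg_pct": rng.normal(47, 6, n),
    "jordan_fg_pct": rng.normal(49, 6, n),
    "rodman_total_reb": rng.normal(15, 4, n).clip(0),
    "longley_blocks": rng.poisson(2, n),
})
y = (X["harper_fg_pct"] > 44.95) | (X["jordan_fg_pct"] > 46.55)  # toy "win" label

model = GradientBoostingClassifier(random_state=0).fit(X, y)
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")  # features ranked by importance
```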

It is quite interesting that this analysis shows that winning a championship is not about one player. Sure, every team needs great players, but the Bulls were a great team, consisting of many great components working together.

Data Analytics and The Raptors' 2015 Loss

Based on several internal statistical models that my colleagues and I developed, we have all concluded that the way the Raptors lost in the first round was somewhat of a statistical anomaly. Through an extensive analysis, I present evidence below showing that several coaching breakdowns in strategy led to the Raptors' collapse.

Optimal preparation would have been to utilize an extensive analysis of the Washington Wizards' style of play. Using advanced machine learning techniques, we generated two results, the first based on tree boosting and the second based on classification trees, that found the weak points in the Wizards' system that would have greatly helped the Raptors in this series.

First, one should be interested in the most important commonalities and characteristics of the Wizards' play. The result is as follows:

One can immediately see that, out of several factors, the two most important in determining whether the Wizards will win or lose a game are their team FG% and the number of points their opponent scores in a game. From this analysis, we conclude that to beat the Wizards, the Raptors should have focused on particularly strong interior defense, and in particular, on stopping penetration. From an offensive point of view, the Raptors should have played a strong and slow half-court game focused on getting close-to-the-basket, high-percentage shots, instead of the "high-octane" running up and down the court that they seemed to do very frequently.

Going deeper into this analysis, one also obtains the following classification tree:

In this tree, "W" and "L" denote whether the Wizards will win or lose a game, "FG." denotes the Wizards' FG%, "OFG.%" denotes the Raptors' field goal percentage, and "OPTS" denotes the number of points the Raptors score in a game. One sees that for the Wizards to lose games, the coaching strategy should have been designed to ensure that the Wizards shot below 45.25%, while the Raptors shot at least 40.3% each game. Complementary to the above analysis, one notes that since three-point shots are not fundamental to the Wizards' offense, to accomplish this the Raptors should have had strong half-court defensive schemes (including traps and trapping zones), combined with slow-paced, interior offensive schemes.
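
To give a flavour of how such a classification tree is fitted and read, here is a sketch using scikit-learn; the data is synthetic and the column names are lightly adapted from the variables above, so the split thresholds it prints are illustrative rather than the mined 45.25/40.3 values:

```python
# Sketch: fit and print a classification tree like the one described above;
# synthetic data, so the thresholds are illustrative only.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 82
X = pd.DataFrame({
    "FG.": rng.normal(45, 4, n),   # Wizards' FG%
    "OFG.": rng.normal(44, 4, n),  # opponent (Raptors') FG%
    "OPTS": rng.normal(98, 8, n),  # opponent points
})
y = np.where((X["FG."] > 45.25) & (X["OPTS"] < 100), "W", "L")  # toy labels

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # human-readable split rules
```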

In conclusion, it is important to note that these analytical results and ideas were available well in advance of the NBA playoffs, and the Raptors would have benefited tremendously from using them. I would also like to point out that I have offered only a preview of the results I obtained. I have also developed several results pertaining to optimal offensive and defensive schemes that would not only change the way the Raptors play, but would make them significantly better.