A Series of Lectures on Fine-Tuning in Biology

A recent lecture and a series of interviews have been posted online in which cosmologist George F.R. Ellis discusses the issue of fine-tuning in biology at considerable length and in considerable detail. The larger theme here is that to discuss and understand things like Darwinian evolution properly, one needs an understanding of the underlying physics, as it is the laws of physics that allow life to emerge and Darwinian evolution to occur in the first place. Here are the lectures:

[Embedded videos: lectures and interviews with George F.R. Ellis on fine-tuning in biology]


The “Evolution” of the 3-Point Shot in The NBA

The purpose of this post is to determine whether an offensive strategy in which a basketball team predominantly shoots three-point shots is stable and optimal. We employ a game-theoretical approach, using techniques from dynamical systems theory, to show that taking more three-point shots, to the point where an offensive strategy depends predominantly on them, is not necessarily optimal: the answer depends on a combination of payoff constraints, and one can establish conditions, via the global stability of equilibrium points in addition to Nash equilibria, under which a predominantly two-point offensive strategy would be optimal as well. We perform a detailed fixed-points analysis to establish the local stability of a given offensive strategy. We finally prove the existence of Nash equilibria via global stability techniques using the monotonicity principle. We believe this work demonstrates that the claim that teams should attempt more three-point shots because a three-point shot is worth more than a two-point shot is a highly ambiguous one.

1. Introduction

We are currently living in the age of analytics in professional sports, with a strong trend of their use developing in professional basketball. Indeed, perhaps one of the most discussed results to come out of the analytics era thus far is the claim that teams should shoot as many three-point shots as possible, largely because three-point shots are worth more than two-point shots, and this is somehow indicative of a very efficient offense. These ideas were expressed, for example, by Alex Rucker, who said: “When you ask coaches what’s better between a 28 percent three-point shot and a 42 percent midrange shot, they’ll say the 42 percent shot. And that’s objectively false. It’s wrong. If LeBron James just jacked a three on every single possession, that’d be an exceptionally good offense. That’s a conversation we’ve had with our coaching staff, and let’s just say they don’t support that approach.” It was also claimed in the same article that “The analytics team is unanimous, and rather emphatic, that every team should shoot more 3s including the Raptors and even the Rockets, who are on pace to break the NBA record for most 3-point attempts in a season.” These assertions have been repeated elsewhere. In an article by John Schuhmann, it was claimed that “It’s simple math. A made three is worth 1.5 times a made two. So you don’t have to be a great 3-point shooter to make those shots worth a lot more than a jumper from inside the arc. In fact, if you’re not shooting a layup, you might as well be beyond the 3-point line. Last season, the league made 39.4 percent of shots between the restricted area and the arc, for a value of 0.79 points per shot. It made 36.0 percent of threes, for a value of 1.08 points per shot.” The purpose of this paper is to determine whether an offensive strategy in which a team predominantly shoots three-point shots is stable and optimal. We will employ a game-theoretical approach, using techniques from dynamical systems theory, to show that taking more three-point shots, to the point where an offensive strategy depends predominantly on them, is not necessarily optimal: the answer depends on a combination of payoff constraints, and one can establish conditions, via the global stability of equilibrium points in addition to Nash equilibria, under which a predominantly two-point offensive strategy would be optimal as well. (Article research and other statistics provided by: Hargun Singh Kohli)

2. The Dynamical Equations

For our model, we consider two types of NBA teams. The first type consists of teams that employ two-point shots as the predominant part of their offensive strategy, while the second consists of teams that employ three-point shots as the predominant part of theirs. There are therefore two predominant strategies, which we will denote {s_{1}, s_{2}}, such that we define

\displaystyle \mathbf{S} = \left\{s_{1}, s_{2}\right\}. \ \ \ \ \ (1)

We then let {n_{i}} represent the number of teams using {s_{i}}, such that the total number of teams in the league is given by

\displaystyle N = \sum_{i =1}^{k} n_{i}, \ \ \ \ \ (2)

which implies that the proportion of teams using strategy {s_{i}} is given by

\displaystyle x_i = \frac{n_{i}}{N}. \ \ \ \ \ (3)

The state of the population of teams is then represented by {\mathbf{x} = (x_{1}, \ldots, x_{k})}. It can be shown that the proportions of teams using a certain strategy change in time according to the following dynamical system

\displaystyle \dot{x}_{i} = x_{i}\left[\pi(s_{i}, \mathbf{x}) - \bar{\pi}(\mathbf{x})\right], \ \ \ \ \ (4)

subject to

\displaystyle \sum_{i =1}^{k} x_{i} = 1, \ \ \ \ \ (5)

where we have defined the average payoff function as

\displaystyle \bar{\pi}(\mathbf{x}) = \sum_{i=1}^{k} x_{i} \pi(s_{i}, \mathbf{x}). \ \ \ \ \ (6)
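Equations (4)-(6) are straightforward to integrate numerically. Below is a minimal Python sketch of the replicator dynamics for a general k-strategy game; the function names, payoff matrix, and forward-Euler scheme are my own illustrative choices, not part of the model itself.

```python
import numpy as np

def replicator_rhs(x, A):
    """Right-hand side of Eq. (4): xdot_i = x_i [pi(s_i, x) - pibar(x)]."""
    payoffs = A @ x      # pi(s_i, x) = sum_j A_ij x_j
    avg = x @ payoffs    # average payoff pibar(x), Eq. (6)
    return x * (payoffs - avg)

def evolve(x0, A, dt=0.01, steps=5000):
    """Crude forward-Euler integration, adequate for a qualitative sketch."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * replicator_rhs(x, A)
        x /= x.sum()     # re-impose the constraint (5) against numerical drift
    return x

# Illustrative 2x2 game with rows/columns ordered (T, Th),
# i.e. entries (alpha, beta; gamma, delta) in the notation below.
A = np.array([[2.0, 1.5],
              [2.5, 1.0]])
print(evolve([0.9, 0.1], A))   # converges to the interior fixed point discussed below
```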

Now, let {x_{1}} represent the proportion of teams that predominantly shoot two-point shots, and let {x_{2}} represent the proportion of teams that predominantly shoot three-point shots. Further, denote the game action set by {A = \left\{T, Th\right\}}, where {T} represents a predominant two-point shot strategy, and {Th} represents a predominant three-point shot strategy. As such, we assign the following payoffs:

\displaystyle \pi(T,T) = \alpha, \quad \pi(T,Th) = \beta, \quad \pi(Th, T) = \gamma, \quad \pi(Th,Th) = \delta. \ \ \ \ \ (7)

We therefore have that

\displaystyle \pi(T,\mathbf{x}) = \alpha x_{1} + \beta x_{2}, \quad \pi(Th, \mathbf{x}) = \gamma x_{1} + \delta x_{2}. \ \ \ \ \ (8)

From (6), we further have that

\displaystyle \bar{\pi}(\mathbf{x}) = x_{1} \left( \alpha x_{1} + \beta x_{2}\right) + x_{2} \left(\gamma x_{1} + \delta x_{2}\right). \ \ \ \ \ (9)

From Eq. (4) the dynamical system is then given by

\boxed{\dot{x}_{1} = x_{1} \left\{ \left(\alpha x_{1} + \beta x_{2} \right) - x_{1} \left( \alpha x_{1} + \beta x_{2}\right) - x_{2} \left(\gamma x_{1} + \delta x_{2}\right) \right\}},

\boxed{\dot{x}_{2} = x_{2} \left\{ \left( \gamma x_{1} + \delta x_{2}\right) -x_{1} \left( \alpha x_{1} + \beta x_{2}\right) - x_{2} \left(\gamma x_{1} + \delta x_{2}\right) \right\}},

subject to the constraint

\displaystyle x_{1} + x_{2} = 1. \ \ \ \ \ (10)

Indeed, because of the constraint (10), the dynamical system is actually one-dimensional, which we write in terms of {x_{1}} as

\displaystyle \boxed{\dot{x}_{1} = x_{1} \left(-1 + x_{1}\right) \left[\delta + \beta \left(-1 + x_{1}\right) - \delta x_{1} + \left(\gamma-\alpha\right)x_{1}\right]}. \ \ \ \ \ (11)
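Before proceeding, one can verify this reduction symbolically: substituting the constraint {x_{2} = 1 - x_{1}} into Eq. (4) for {x_{1}} should reproduce Eq. (11). The following sympy check is my own sketch.

```python
import sympy as sp

x1, a, b, g, d = sp.symbols('x1 alpha beta gamma delta')
x2 = 1 - x1                                # constraint (10)

pi_T  = a*x1 + b*x2                        # Eq. (8)
pi_Th = g*x1 + d*x2
pibar = x1*pi_T + x2*pi_Th                 # Eq. (9)

rhs_full    = x1*(pi_T - pibar)            # Eq. (4) for x1
rhs_reduced = x1*(-1 + x1)*(d + b*(-1 + x1) - d*x1 + (g - a)*x1)   # Eq. (11)

print(sp.simplify(rhs_full - rhs_reduced))   # prints 0: the two forms agree
```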

From Eq. (11), we immediately notice some things of importance. First, we are able to deduce just from the form of the equation what the invariant sets are. We note that for a dynamical system {\mathbf{x}' = \mathbf{f(x)} \in \mathbf{R^{n}}} with flow {\phi_{t}}, if we define a {C^{1}} function {Z: \mathbf{R}^{n} \rightarrow \mathbf{R}} such that {Z' = \alpha Z}, where {\alpha: \mathbf{R}^{n} \rightarrow \mathbf{R}}, then, the subsets of {\mathbf{R}^{n}} defined by {Z > 0, Z = 0}, and {Z < 0} are invariant sets of the flow {\phi_{t}}. Applying this notion to Eq. (11), one immediately sees that {x_1 > 0}, {x_1 = 0}, and {x_1 < 0} are invariant sets of the corresponding flow. Further, there also exists a symmetry such that {x_{1} \rightarrow -x_{1}}, which implies that without loss of generality, we can restrict our attention to {x_{1} \geq 0}.

3. Fixed-Points Analysis

With the dynamical system in hand, we are now in a position to perform a fixed-points analysis. There are precisely three fixed points, which are invariant manifolds and are given by:

\displaystyle P_{1}: x_{1}^{*} = 0, \quad P_{2}: x_{1}^{*} = 1, \quad P_{3}: x_{1}^{*} = \frac{\beta - \delta}{-\alpha + \beta - \delta + \gamma}. \ \ \ \ \ (12)

Note that {P_{3}} actually contains {P_{1}} and {P_{2}} as special cases: when {\beta = \delta}, {P_{3} = 0 = P_{1}}, and when {\alpha = \gamma}, {P_{3} = 1 = P_{2}}. We will therefore just analyze the stability of {P_{3}}. {P_{3} = 0} represents a state of the population in which all teams predominantly shoot three-point shots, while {P_{3} = 1} represents a state in which all teams predominantly shoot two-point shots. We additionally restrict

\displaystyle 0 \leq P_{3} \leq 1 \Rightarrow 0 \leq \frac{\beta - \delta}{-\alpha + \beta - \delta + \gamma} \leq 1, \ \ \ \ \ (13)

which implies the following conditions on the payoffs:

\displaystyle \left[\delta < \beta \cap \gamma \leq \alpha \right] \cup \left[\delta = \beta \cap \left(\gamma < \alpha \cup \gamma > \alpha \right) \right] \cup \left[\delta > \beta \cap \gamma \leq \alpha \right]. \ \ \ \ \ (14)

With respect to a stability analysis of {P_{3}}, we note the following. The point {P_{3}} is a:

• Local sink if {\{\delta < \beta\} \cap \{\gamma > \alpha\}},

• Source if {\{\delta > \beta\} \cap \{\gamma < \alpha\}},

• Saddle if {\{\delta = \beta \} \cap \left(\gamma < \alpha -\beta + \delta \cup \gamma > \alpha - \beta + \delta\right)}, or if {\left(\{\delta < \beta\} \cup \{\delta > \beta\}\right) \cap \gamma = \frac{\alpha \delta - \alpha \beta}{\delta - \beta}}.

What this last calculation shows is that the condition {\delta = \beta}, which always corresponds to the point {x_{1}^{*} = 0}, that is, to a dominant three-point strategy, always yields a saddle point! That is, there will NEVER be a league that uniformly adopts a dominant three-point strategy; at best, some teams will move toward a three-point strategy while others will not, irrespective of what the analytics people say. This also shows that a team's basketball strategy really should depend on its respective payoffs, and not on current "trends". This behaviour is displayed in the following plot.

[Figure: numerical phase plot of the system. Note the saddle point (x1, x2) = (0, 1). This clearly shows that all NBA teams will never adopt a dominant 3-point strategy, as it is always more optimal to play to maximize payoffs.]
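As a numerical sanity check on this classification, one can evaluate the derivative of the right-hand side of Eq. (11) at each fixed point; its sign gives the local stability of the one-dimensional system. The payoff values below are my own illustrative choices, selected so that {\delta < \beta} and {\gamma > \alpha}, in which case {P_{3}} should be a local sink.

```python
import sympy as sp

x1 = sp.symbols('x1')
alpha, beta, gamma, delta = 2.0, 1.5, 2.5, 1.0   # illustrative payoffs only

f  = x1*(-1 + x1)*(delta + beta*(-1 + x1) - delta*x1 + (gamma - alpha)*x1)
fp = sp.diff(f, x1)   # the eigenvalue of the 1-D system at a fixed point

P3 = (beta - delta) / (-alpha + beta - delta + gamma)
for name, point in [('P1', 0.0), ('P2', 1.0), ('P3', P3)]:
    print(name, point, float(fp.subs(x1, point)))
# P1 has eigenvalue beta - delta and P2 has eigenvalue gamma - alpha,
# matching the bifurcation discussion below; P3 has a negative eigenvalue
# for these payoffs, so it is a local sink.
```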

Further, the system exhibits some bifurcations as well. In the neighbourhood of {P_{3} = 0}, the linearized system takes the form

\displaystyle x_{1}' = \left(\beta - \delta\right) x_{1}. \ \ \ \ \ (15)

Therefore, {P_{3} = 0} destabilizes the system at {\beta = \delta}. Similarly, {P_{3} = 1} destabilizes the system at {\gamma = \alpha}. Therefore, bifurcations of the system occur on the lines {\gamma = \alpha} and {\beta = \delta} in the four-dimensional parameter space.

4. Global Stability and The Existence of Nash Equilibria

With the preceding fixed-points analysis completed, we are now interested in determining global stability conditions. The main motivation is to determine the existence of any Nash equilibria that occur for this game, via the following theorem: if {\mathbf{x}^{*}} is an asymptotically stable fixed point, then the symmetric strategy pair {[\sigma^{*}, \sigma^{*}]}, with {\sigma^{*} = \mathbf{x}^{*}}, is a Nash equilibrium. We will primarily make use of the monotonicity principle, which says: let {\phi_{t}} be a flow on {\mathbb{R}^{n}} with {S} an invariant set, and let {Z: S \rightarrow \mathbb{R}} be a {C^{1}} function whose range is the interval {(a,b)}, where {a \in \mathbb{R} \cup \{-\infty\}}, {b \in \mathbb{R} \cup \{\infty\}}, and {a < b}. If {Z} is decreasing on orbits in {S}, then for all {\mathbf{x} \in S},

\boxed{\omega(\mathbf{x}) \subseteq \left\{\mathbf{s} \in \partial S \, | \, \lim_{\mathbf{y} \rightarrow \mathbf{s}} Z(\mathbf{y}) \neq b\right\}},

\boxed{\alpha(\mathbf{x}) \subseteq \left\{\mathbf{s} \in \partial S \, | \, \lim_{\mathbf{y} \rightarrow \mathbf{s}} Z(\mathbf{y}) \neq a\right\}}.

Consider the function

\displaystyle Z_{1} = \log \left(1 - x_{1}\right). \ \ \ \ \ (16)

Then, we have that

\displaystyle \dot{Z}_{1}= x_{1} \left[\delta + \beta \left(-1 + x_{1}\right) - \delta x_{1} + x_{1} \left(\gamma - \alpha\right)\right]. \ \ \ \ \ (17)

For the invariant set {S_1 = \{0 < x_{1} < 1\}}, we have that {\partial S_{1} = \{x_{1} = 0\} \cup \{x_{1} = 1\}}. One can then immediately see that in {S_{1}},

\displaystyle \dot{Z}_{1} < 0 \Leftrightarrow \left\{\beta > \delta\right\} \cap \left\{\alpha \geq \gamma\right\}. \ \ \ \ \ (18)

Therefore, by the monotonicity principle,

\displaystyle \omega(\mathbf{x}) \subseteq \left\{\mathbf{x}: x_{1} = 1 \right\}. \ \ \ \ \ (19)

Note that the conditions {\beta > \delta} and {\alpha \geq \gamma} correspond to {P_{3}} above. In particular, for {\alpha = \gamma}, {P_{3} = 1}, which implies that {x_{1}^{*} = 1} is globally stable. Therefore, under these conditions, the symmetric strategy {[1,1]} is a Nash equilibrium. Now, consider the function

\displaystyle Z_{2} = \log \left(x_{1}\right). \ \ \ \ \ (20)

We can therefore see that

\displaystyle \dot{Z}_{2} = \left[-1 + x_{1}\right] \left[\delta + \beta\left(-1+x_{1}\right) - \delta x_{1} + \left(-\alpha + \gamma\right) x_{1}\right]. \ \ \ \ \ (21)

Clearly, {\dot{Z}_{2} < 0} in {S_{1}} if for example {\beta = \delta} and {\alpha < \gamma}. Then, by the monotonicity principle, we obtain that

\displaystyle \omega(\mathbf{x}) \subseteq \left\{\mathbf{x}: x_{1} = 0 \right\}. \ \ \ \ \ (22)

Note that the conditions {\beta = \delta} and {\alpha < \gamma} correspond to {P_{3}} above. In particular, for {\beta = \delta}, {P_{3} = 0}, which implies that {x_{1}^{*} = 0} is globally stable. Therefore, under these conditions, the symmetric strategy {[0,0]} is a Nash equilibrium. In summary, we have just shown that for the specific case where {\beta > \delta} and {\alpha = \gamma}, the strategy {[1,1]} is a Nash equilibrium. On the other hand, for the specific case where {\beta = \delta} and {\alpha < \gamma}, the strategy {[0,0]} is a Nash equilibrium.

5. Discussion

In the previous section, which describes global results, we first concluded that for the case where {\beta > \delta} and {\alpha = \gamma}, the strategy {[1,1]} is a Nash equilibrium. The relevance of this is as follows. The condition on the payoffs thus requires that

\displaystyle \pi(T,T) = \pi(Th,T), \quad \pi(T,Th) > \pi(Th,Th). \ \ \ \ \ (23)

That is, given the strategy adopted by the other team, neither team could increase their payoff by adopting another strategy if and only if the condition in (23) is satisfied. Given these conditions, if one team has a predominant two-point strategy, it would be the other team’s best response to also use a predominant two-point strategy. We also concluded that for the case where {\beta = \delta} and {\alpha < \gamma}, the strategy {[0,0]} is a Nash equilibrium. The relevance of this is as follows. The condition on the payoffs thus requires that

\displaystyle \pi(T,Th) = \pi(Th,Th), \quad \pi(T,T) < \pi(Th,T). \ \ \ \ \ (24)

That is, given the strategy adopted by the other team, neither team could increase its payoff by adopting another strategy if and only if the condition in (24) is satisfied. Given these conditions, if one team has a predominant three-point strategy, it would be the other team’s best response to also use a predominant three-point strategy. Further, we also showed that {x_{1} = 1} is globally stable under the conditions in (23). That is, if these conditions hold, every team in the NBA will eventually adopt an offensive strategy consisting predominantly of two-point shots. The conditions in (24) were shown to imply that the point {x_{1} = 0} is globally stable. This means that if these conditions hold instead, every team in the NBA will eventually adopt an offensive strategy consisting predominantly of three-point shots. We also provided, through a careful stability analysis of the fixed points, criteria for the local stability of strategies. For example, we showed that a predominant three-point strategy is locally stable if {\pi(T,Th) - \pi(Th,Th) < 0}, while it is unstable if {\pi(T,Th) - \pi(Th,Th) \geq 0}. In addition, a predominant two-point strategy was found to be locally stable when {\pi(Th,T) - \pi(T,T) < 0}, and unstable when {\pi(Th,T) - \pi(T,T) \geq 0}. There is also the key point of which one of these strategies has the highest probability of being executed. We know that

\displaystyle \pi(\sigma,\mathbf{x}) = \sum_{s \in \mathbf{S}} \sum_{s' \in \mathbf{S}} p(s) x(s') \pi(s,s'). \ \ \ \ \ (25)

That is, the payoff to a team using strategy {\sigma} in a league with profile {\mathbf{x}} is proportional to the probability {p(s)} of this team using strategy {s \in \mathbf{S}}. We therefore see that a team’s optimal strategy is the one that maximizes its payoff, that is, the one for which {p(s)} is a maximum, while keeping in mind the strategy of the other team; hence, the existence of Nash equilibria. Hopefully, this work also shows that the claim that teams should attempt more three-point shots because a three-point shot is worth more than a two-point shot is a highly ambiguous one. In actuality, one needs to analyze which offensive strategy is optimal, and this is constrained by a particular set of payoffs.
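To illustrate the two global-stability regimes numerically, one can integrate Eq. (11) under payoffs satisfying (23) and (24) respectively; the payoff values in this sketch are my own illustrative choices.

```python
import numpy as np

def flow(x1, a, b, g, d, dt=1e-3, steps=200_000):
    """Forward-Euler integration of the reduced equation (11)."""
    for _ in range(steps):
        x1 += dt * x1*(-1 + x1)*(d + b*(-1 + x1) - d*x1 + (g - a)*x1)
    return x1

# Conditions (23): pi(T,T) = pi(Th,T) and pi(T,Th) > pi(Th,Th), i.e. alpha = gamma, beta > delta
print(flow(0.5, a=2.0, b=2.0, g=2.0, d=1.0))   # -> ~1: the league goes all two-point
# Conditions (24): pi(T,Th) = pi(Th,Th) and pi(T,T) < pi(Th,T), i.e. beta = delta, alpha < gamma
print(flow(0.5, a=1.0, b=2.0, g=2.0, d=2.0))   # -> ~0: the league goes all three-point
```

Because the globally stable points are non-hyperbolic in these degenerate payoff cases, convergence is algebraic rather than exponential, so the printed values approach 1 and 0 only slowly.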

Mathematical Origins of Life

The purpose of this post is to demonstrate some very beautiful (I think!) mathematics that arises from Darwinian evolutionary theory. It is a real shame that most courses and discussions dealing with evolution never introduce any type of mathematical formalism, which is very strange, since at the most fundamental levels, evolution must also be governed by quantum mechanics and electromagnetism, from which chemistry and biochemistry arise via top-down and bottom-up causation. See this article by George Ellis for more on the role of top-down causation in the universe and the hierarchy of physical matter. Indeed, my personal belief is that if some biologists and evolutionary biologists like Dawkins, Coyne, and others took the time to explain evolution with some modicum of mathematical formalism, to properly describe the underlying mechanics, instead of using it as an opportunity to attack religious people, the world would be a much better place, and the dialogue between science and religion would be much smoother and more intelligible.

In this post today, I will describe some formalism behind the phenomena of prebiotic evolution. It turns out that there is a very good book by Claudius Gros on complex and adaptive dynamical systems, which treats evolution as a complex dynamical system (dynamical systems theory is my main area of research); the interested reader should check out his book for more details on what follows below.

We can for simplicity consider a quasispecies as a system of macromolecules that have the ability to carry information, and consider the dynamics of the concentrations of the constituent molecules as the following dynamical system:

\boxed{\dot{x}_{i} = W_{ii}x_{i} + \sum_{j \neq i}W_{ij}x_{j} - x_{i} \phi(t)},

where x_{i} are the concentrations of N molecules, W_{ii} is the autocatalytic self-replication rate, and W_{ij} are mutation rates.
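A minimal numerical sketch of these quasispecies dynamics follows; the W matrix is a toy example of my own (a spread of self-replication rates plus a small uniform mutation rate), chosen only to display the selection of the fastest replicator and its mutational cloud.

```python
import numpy as np

N = 5
W = 0.01 * np.ones((N, N))                        # mutation rates W_ij, i != j
np.fill_diagonal(W, [1.0, 1.2, 1.5, 2.0, 3.0])    # self-replication rates W_ii

x = np.full(N, 1.0 / N)
dt = 0.01
for _ in range(20_000):
    growth = W @ x
    phi = growth.sum()        # this choice of phi(t) keeps sum_i x_i = 1
    x += dt * (growth - x * phi)

print(x.round(4))   # concentrations cluster around the fastest replicator
```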

From this, we can consider the following catalytic reaction equations:

\boxed{\dot{x}_i = x_{i} \left(\lambda_{i} + \sum_{j} \kappa_{ij} x_{j} - \phi \right)},

\boxed{\phi = \sum_{k} x_{k}\left(\lambda_{k} + \sum_{j} \kappa_{kj} x_{j}\right)},

where x_i are the concentrations, \lambda_i are the autocatalytic growth rates, and \kappa_{ij} are the transmolecular catalytic rates. We choose \phi such that

\boxed{\dot{C} = \sum_i \dot{x}_i = \sum_i x_i \left(\lambda_i + \sum_j \kappa_{ij}x_{j} \right) - C \phi = (1-C)\phi}.

Clearly,

\lim_{C \to 1} (1-C)\phi = 0,

that is, this quick calculation shows that the total concentration C, once equal to unity, remains constant.
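This conservation is easy to confirm numerically; the parameters below are my own toy choices (they also set up the homogeneous case considered next).

```python
import numpy as np

N, kappa, alpha = 10, 5.0, 1.0
lam = alpha * np.arange(1, N + 1)      # growth rates lambda_i = alpha * i
K = kappa * (1 - np.eye(N))            # kappa_ij = kappa for i != j, kappa_ii = 0

x = np.full(N, 1.0 / N)                # C(0) = 1
dt = 1e-4
for _ in range(50_000):
    growth = lam + K @ x
    phi = x @ growth                   # phi = sum_k x_k (lambda_k + sum_j kappa_kj x_j)
    x += dt * x * (growth - phi)

print(x.sum())   # stays ~1.0: the total concentration C is conserved
```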

Let us consider now the case of homogeneous interactions such that

\kappa_{i \neq j} = \kappa, \kappa_{ii} = 0, \lambda_i = \alpha i,

which leads to

\boxed{\dot{x}_{i} = x_{i} \left(\lambda_i + \kappa \sum_{j \neq i} x_{j} - \phi \right)} ,

which, using the constraint \sum_{j \neq i} x_{j} = C - x_{i} = 1 - x_{i}, becomes

\boxed{\dot{x}_i = x_i \left(\lambda_i + \kappa - \kappa x_i - \phi\right)}.

This is a one-dimensional ODE with the following invariant submanifolds:

\boxed{x_{i}^* = \frac{\lambda_i + \kappa - \phi}{\kappa}},

\boxed{x_i^* = 0, \quad \lambda_i = N \alpha}.

With homogeneous interactions, the concentrations with the largest growth rates will dominate, so there exists an N^* with 1 \leq N^* \leq N such that

\boxed{x_i^* = \frac{\lambda_i + \kappa - \phi}{\kappa}, \quad N^* \leq i \leq N},

\boxed{x_i^* = 0, \quad 1 \leq i < N^*}.

The quantities N^* and \phi are determined via normalization conditions that give us a system of equations:

\boxed{1 = \frac{\alpha}{2\kappa} \left[N(N+1) - N^*(N^* - 1)\right] + \left[\frac{\kappa - \phi}{\kappa}\right] \left(N + 1 - N^*\right)},

\boxed{0 = \frac{\lambda_{N^*-1} + \kappa - \phi}{\kappa} = \frac{\alpha(N^* - 1)}{\kappa} + \frac{\kappa - \phi}{\kappa} }.

For large N and N^*, the second equation gives \phi - \kappa = \alpha(N^* - 1); substituting this into the first equation and writing m = N - N^* yields 1 \approx \frac{\alpha}{2\kappa} m^{2}, so we obtain the approximation

\boxed{N - N^* \approx \sqrt{\frac{2 \kappa}{\alpha}}},

which is the number of surviving species.

Clearly, this is non-zero for a finite catalytic rate \kappa. This shows the formation of a hypercycle of molecules/quasispecies.
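One can test this estimate numerically by integrating the homogeneous system and counting the concentrations that remain non-negligible; the parameters and survival threshold below are my own illustrative choices.

```python
import numpy as np

N, kappa, alpha = 100, 50.0, 1.0
lam = alpha * np.arange(1, N + 1)          # lambda_i = alpha * i
x = np.full(N, 1.0 / N)
dt = 1e-4
for _ in range(400_000):
    growth = lam + kappa * (x.sum() - x)   # lambda_i + kappa * sum_{j != i} x_j
    phi = x @ growth
    x += dt * x * (growth - phi)

survivors = int((x > 1e-6).sum())
print(survivors, np.sqrt(2 * kappa / alpha))   # both come out ~10
```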

These computations clearly should be taken with a grain of salt. As pointed out in several sources, hypercycles describe closed systems, whereas life exists in an open system driven by an energy flux. The interesting thing is that, despite this, the very last calculation shows a clear division between the molecules i = N^*, \ldots, N, which can be considered a type of primordial life-form, and the remaining molecules, which belong to the environment.

Reply to Recent NYT Article: “God, Darwin, and My College Biology Class”

I recently came across the article/op-ed in the NYTimes, titled, “God, Darwin, and My College Biology Class”, http://www.nytimes.com/2014/09/28/opinion/sunday/god-darwin-and-my-college-biology-class.html?_r=0

In this article, it is stated: “Since Darwin, however, we have come to understand that an entirely natural and undirected process, namely random variation plus natural selection, contains all that is needed to generate extraordinary levels of non-randomness. Living things are indeed wonderfully complex, but altogether within the range of a statistically powerful, entirely mechanical phenomenon.”

The statement that random variation plus natural selection “contains all that is needed to generate extraordinary levels of non-randomness” is factually inaccurate, for one is making the mistake that many reductionists make by assuming that all complexity arises from bottom-up causation alone, completely ignoring the effects of top-down causation. The reason is as follows. Lower levels of complexity are necessarily governed by uncertainties due to quantum mechanics, and it is not clear how these quantum uncertainties transition to a classical state. Mathematically, the uncertainties at the heart of the random variation that is cited live in a Hilbert space of L^2 (Lebesgue square-integrable) functions, whereas classical systems, described by phase-space manifolds, have these probabilistic domains in the cotangent bundle of the manifold. The article is essentially saying that the cotangent bundle determines the phase space and not the other way around, which is not correct. Further, there remains the unsolved issue of how quantum fluctuations become classical (unless you follow the untestable many-worlds route, which has major problems; see S. D. Hsu, Modern Physics Letters A27: 1230114 (2012) for one interesting comment, and the writings of Sudarsky).

On the other side, top-down causation, via cosmology and Einstein’s equations, seeds the correct conditions for dynamical Darwinian evolution to take place to begin with; for some reason, the author completely leaves this out.

Without question, the author is an expert in evolutionary biology, but I am afraid he has looked at these issues through a very narrow lens, which does not do the issue full justice, and indeed is responsible for much of the discomfort with evolutionary theory that is described so accurately and well in the article.

In the meantime, I would humbly suggest that the interested reader look at the following articles, which describe Darwinian evolution in a more complete context, as a function of emergence and complexity through the physics that underlies biology. One should also see the work of Denis Noble, http://musicoflife.co.uk, who advocates a dynamical systems-based view of biological systems, which I personally believe to be correct, as it is much more mathematically and physically sound than standard evolutionary theory. This YouTube video of a lecture by the noted cosmologist G. F. R. Ellis also sums up the problem with the reductionist view of evolutionary biology: http://youtu.be/nEhTkF3eG8Q

Butterfield, Jeremy. “Laws, Causation and Dynamics at Different Levels.” arXiv:1406.4732 (June 2014); Interface Focus (Royal Society London), volume 2, 2012, pp. 101-114, doi:10.1098/rsfs.2011.0052.

Abstract: I have two main aims. The first is general, and more philosophical (Section 2). The second is specific, and more closely related to physics (Sections 3 and 4). The first aim is to state my general views about laws and causation at different ‘levels’. The main task is to understand how the higher levels sustain notions of law and causation that ‘ride free’ of reductions to the lower level or levels. I endeavour to relate my views to those of other symposiasts. The second aim is to give a framework for describing dynamics at different levels, emphasising how the various levels’ dynamics can mesh or fail to mesh. This framework is essentially that of elementary dynamical systems theory. The main idea will be, for simplicity, to work with just two levels, dubbed ‘micro’ and ‘macro’ which are related by coarse-graining. I use this framework to describe, in part, the first four of Ellis’ five types of top-down causation.

Ellis, George F. R. “The arrow of time and the nature of spacetime.” arXiv:1302.7291 (February 2013).

Abstract: This paper extends the work of a previous paper [arXiv:1208.2611] on the flow of time, to consider the origin of the arrow of time. It proposes that a ‘past condition’ cascades down from cosmological to micro scales, being realized in many microstructures and setting the arrow of time at the quantum level by top-down causation. This physics arrow of time then propagates up, through underlying emergence of higher level structures, to geology, astronomy, engineering, and biology. The appropriate space-time picture to view all this is an emergent block universe (‘EBU’), that recognizes the way the present is different from both the past and the future. This essential difference is the ultimate reason the arrow of time has to be the way it is.

Ellis, George F. R. “Recognising Top-Down Causation.” arXiv:1212.2275 (December 2012); second prize in the FQXi essay competition.

Abstract: One of the basic assumptions implicit in the way physics is usually done is that all causation flows in a bottom up fashion, from micro to macro scales. However this is wrong in many cases in biology, and in particular in the way the brain functions. Here I make the case that it is also wrong in the case of digital computers – the paradigm of mechanistic algorithmic causation – and in many cases in physics, ranging from the origin of the arrow of time to the process of state vector preparation. I consider some examples from classical physics, as well as the case of digital computers, and then explain why this is possible without contradicting the causal powers of the underlying microphysics. Understanding the emergence of genuine complexity out of the underlying physics depends on recognising this kind of causation.

Walker, Sara Imari; Cisneros, Luis; Davies, Paul C. W. “Evolutionary Transitions and Top-Down Causation.” arXiv:1207.4808 (July 2012); Proceedings of Artificial Life XIII (2012), pp. 283-290.

Abstract: Top-down causation has been suggested to occur at all scales of biological organization as a mechanism for explaining the hierarchy of structure and causation in living systems. Here we propose that a transition from bottom-up to top-down causation — mediated by a reversal in the flow of information from lower to higher levels of organization, to that from higher to lower levels of organization — is a driving force for most major evolutionary transitions. We suggest that many major evolutionary transitions might therefore be marked by a transition in causal structure. We use logistic growth as a toy model for demonstrating how such a transition can drive the emergence of collective behavior in replicative systems. We then outline how this scenario may have played out in those major evolutionary transitions in which new, higher levels of organization emerged, and propose possible methods via which our hypothesis might be tested.

Ellis, George F. R. “On the limits of quantum theory: Contextuality and the quantum-classical cut.” Annals of Physics, volume 327, issue 7 (July 2012), pp. 1890-1932, doi:10.1016/j.aop.2012.05.002.

Abstract: This paper is based on four assumptions: 1. Physical reality is made of linearly behaving components combined in non-linear ways. 2. Higher level behaviour emerges from this lower level structure. 3. The way the lower level elements behave depends on the context in which they are embedded. 4. Quantum theory applies to the lower level entities. An implication is that higher level effective laws, based on the outcomes of non-linear combinations of lower level linear interactions, will generically not be unitary; hence the applicability of quantum theory at higher levels is strictly limited. This leads to the view that both state vector preparation and the quantum measurement process are crucially based on top-down causal effects, and helps provide criteria for the Heisenberg cut that challenge some views on Schrödinger’s cat.

Juarrero, Alicia. “Top-Down Causation and Autonomy in Complex Systems.” In Downward Causation and the Neurobiology of Free Will (Understanding Complex Systems), Springer Berlin Heidelberg, 2009, p. 83, doi:10.1007/978-3-642-03205-9_5.

Abstract: Evolutionary evidence shows that complex dynamical systems become increasingly self-directed and decoupled from merely energetic forces over time. In this paper I analyze these transformations, concentrating on changes in the type of top-down causation that characterizes such self-organized and autopoietic processes. Specifically, I show that the top-down selection criteria of these systems makes some of them autonomous, and that because once evolution reaches humans the criteria according to which voluntary actions are selected are semantic and symbolic – and can be self-consciously chosen – human self-direction constitutes a form of strong autonomy that can arguably be considered “free will.”

Ellis, George F. R. “Top-Down Causation and the Human Brain.” In Downward Causation and the Neurobiology of Free Will (Understanding Complex Systems), Springer Berlin Heidelberg, 2009, p. 63, doi:10.1007/978-3-642-03205-9_4.

Abstract: A reliable understanding of the nature of causation is the core feature of science. In this paper the concept of top-down causation in the hierarchy of structure and causation is examined in depth. Five different classes of top-down causation are identified and illustrated with real-world examples. They are (1) algorithmic top-down causation; (2) top-down causation via nonadaptive information control; (3) top-down causation via adaptive selection; (4) top-down causation via adaptive information control; and (5) intelligent top-down causation (i.e., the effect of the human mind on the physical world). Recognizing these forms of causation implies that other kinds of causes than physical and chemical interactions are effective in the real world. Because of the existence of random processes at the bottom, there is sufficient causal slack at the physical level to allow all these kinds of causation to occur without violation of physical causation. That they do indeed occur is indicated by many kinds of evidence. Each such kind of causation takes place in particular in the human brain, as is indicated by specific examples.

Auletta, G.; Ellis, G. F. R.; Jaeger, L. “Top-Down Causation by Information Control: From a Philosophical Problem to a Scientific Research Program.” arXiv:0710.4235 (October 2007).

Abstract: It has been claimed that different types of causes must be considered in biological systems, including top-down as well as same-level and bottom-up causation, thus enabling the top levels to be causally efficacious in their own right. To clarify this issue, important distinctions between information and signs are introduced here and the concepts of information control and functional equivalence classes in those systems are rigorously defined and used to characterise when top down causation by feedback control happens, in a way that is testable. The causally significant elements we consider are equivalence classes of lower level processes, realised in biological systems through different operations having the same outcome within the context of information control and networks.

Ellis, George F. R. “Physics and the Real World.” Foundations of Physics, volume 36, issue 2 (February 2006), pp. 227-262, doi:10.1007/s10701-005-9016-x.

Abstract: Physics and chemistry underlie the nature of all the world around us, including human brains. Consequently some suggest that in causal terms, physics is all there is. However, we live in an environment dominated by objects embodying the outcomes of intentional design (buildings, computers, teaspoons). The present day subject of physics has nothing to say about the intentionality resulting in existence of such objects, even though this intentionality is clearly causally effective. This paper examines the claim that the underlying physics uniquely causally determines what happens, even though we cannot predict the outcome. It suggests that what occurs is the contextual emergence of complexity: the higher levels in the hierarchy of complexity have autonomous causal powers, functionally independent of lower level processes. This is possible because top-down causation takes place as well as bottom-up action, with higher level contexts determining the outcome of lower level functioning and even modifying the nature of lower level constituents. Stored information plays a key role, resulting in non-linear dynamics that is non-local in space and time. Brain functioning is causally affected by abstractions such as the value of money and the theory of the laser. These are realised as brain states in individuals, but are not equivalent to them. Consequently physics per se cannot causally determine the outcome of human creativity, rather it creates the possibility space allowing human intelligence to function autonomously. The challenge to physics is to develop a realistic description of causality in truly complex hierarchical structures, with top-down causation and memory effects allowing autonomous higher levels of order to emerge with genuine causal powers.

Testa, Bernard; Kier, Lemont B. “Emergence and Dissolvence in the Self-organisation of Complex Systems.” Entropy, volume 2, issue 1 (March 2000), pp. 1-25, doi:10.3390/e2010001.

Abstract: The formation of complex systems is accompanied by the emergence of properties that are non-existent in the components. But what of the properties and behaviour of such components caught up in the formation of a system of a higher level of complexity? In this essay, we use a large variety of examples, from molecules to organisms and beyond, to show that systems merging into a complex system of higher order experience constraints with a partial loss of choice, options and independence. In other words, emergence in a complex system often implies reduction in the number of probable states of its components, a phenomenon we term dissolvence. This is seen in atoms when they merge to form molecules, in biomolecules when they form macromolecules such as proteins, and in macromolecules when they form aggregates such as molecular machines or membranes. At higher biological levels, dissolvence occurs for example in components of cells (e.g. organelles), tissues (cells), organs (tissues), organisms (organs) and societies (individuals). Far from being a destruction, dissolvence is understood here as a creative process in which information is generated to fuel the process of self-organisation of complex systems, allowing them to appear and evolve to higher states of organisation and emergence. Questions are raised about the relationship of dissolvence and adaptability; the interrelation with top-down causation; the reversibility of dissolvence; and the connection between dissolvence and anticipation.
