The “Evolution” of the 3-Point Shot in The NBA

The purpose of this post is to determine whether an offensive strategy in which a basketball team predominantly shoots three-point shots is stable and optimal. We employ a game-theoretical approach, using techniques from dynamical systems theory, to show that shooting more three-point shots, to the point where an offensive strategy depends predominantly on them, is not necessarily optimal: it depends on a combination of payoff constraints, and one can establish conditions, via the global stability of equilibrium points in addition to Nash equilibria, under which a predominantly two-point offensive strategy is optimal as well. We perform a detailed fixed-points analysis to establish the local stability of a given offensive strategy, and we then prove the existence of Nash equilibria via global stability techniques based on the monotonicity principle. We believe this work demonstrates that the claim that teams should attempt more three-point shots simply because a three-point shot is worth more than a two-point shot is highly ambiguous.

1. Introduction

We are currently living in the age of analytics in professional sports, with a strong trend of their use developing in professional basketball. Indeed, perhaps one of the most discussed claims to come out of the analytics era thus far is that teams should shoot as many three-point shots as possible, largely because three-point shots are worth more than two-point shots, and that this is somehow indicative of a very efficient offense. These ideas were mentioned, for example, by Alex Rucker, who said: “When you ask coaches what’s better between a 28 percent three-point shot and a 42 percent midrange shot, they’ll say the 42 percent shot. And that’s objectively false. It’s wrong. If LeBron James just jacked a three on every single possession, that’d be an exceptionally good offense. That’s a conversation we’ve had with our coaching staff, and let’s just say they don’t support that approach.” It was also claimed in the same article that “The analytics team is unanimous, and rather emphatic, that every team should shoot more 3s including the Raptors and even the Rockets, who are on pace to break the NBA record for most 3-point attempts in a season.” These assertions were repeated here. In an article by John Schuhmann, it was claimed that “It’s simple math. A made three is worth 1.5 times a made two. So you don’t have to be a great 3-point shooter to make those shots worth a lot more than a jumper from inside the arc. In fact, if you’re not shooting a layup, you might as well be beyond the 3-point line. Last season, the league made 39.4 percent of shots between the restricted area and the arc, for a value of 0.79 points per shot. It made 36.0 percent of threes, for a value of 1.08 points per shot.” The purpose of this paper is to determine whether an offensive strategy in which a basketball team predominantly shoots three-point shots is stable and optimal. We will employ a game-theoretical approach, using techniques from dynamical systems theory, to show that shooting more three-point shots, to the point where an offensive strategy depends predominantly on them, is not necessarily optimal: it depends on a combination of payoff constraints, and one can establish conditions, via the global stability of equilibrium points in addition to Nash equilibria, under which a predominantly two-point offensive strategy is optimal as well. (Article research and other statistics provided by: Hargun Singh Kohli)

2. The Dynamical Equations

For our model, we consider two types of NBA teams. The first type consists of teams that employ two-point shots as the predominant part of their offensive strategy, while the other type consists of teams that employ three-point shots as the predominant part of their offensive strategy. There are therefore two predominant strategies, which we will denote as {s_{1}, s_{2}}, such that we define

\displaystyle \mathbf{S} = \left\{s_{1}, s_{2}\right\}. \ \ \ \ \ (1)

We then let {n_{i}} represent the number of teams using {s_{i}}, such that the total number of teams in the league is given by

\displaystyle N = \sum_{i =1}^{k} n_{i}, \ \ \ \ \ (2)

which implies that the proportion of teams using strategy {s_{i}} is given by

\displaystyle x_i = \frac{n_{i}}{N}. \ \ \ \ \ (3)

The state of the population of teams is then represented by {\mathbf{x} = (x_{1}, \ldots, x_{k})}. It can be shown that the proportions of individuals using a certain strategy change in time according to the following dynamical system

\displaystyle \dot{x}_{i} = x_{i}\left[\pi(s_{i}, \mathbf{x}) - \bar{\pi}(\mathbf{x})\right], \ \ \ \ \ (4)

subject to

\displaystyle \sum_{i =1}^{k} x_{i} = 1, \ \ \ \ \ (5)

where we have defined the average payoff function as

\displaystyle \bar{\pi}(\mathbf{x}) = \sum_{i=1}^{k} x_{i} \pi(s_{i}, \mathbf{x}). \ \ \ \ \ (6)

Now, let {x_{1}} represent the proportion of teams that predominantly shoot two-point shots, and let {x_{2}} represent the proportion of teams that predominantly shoot three-point shots. Further, we denote the game action set by {A = \left\{T, Th\right\}}, where {T} represents a predominant two-point shot strategy, and {Th} represents a predominant three-point shot strategy. As such, we assign the following payoffs:

\displaystyle \pi(T,T) = \alpha, \quad \pi(T,Th) = \beta, \quad \pi(Th, T) = \gamma, \quad \pi(Th,Th) = \delta. \ \ \ \ \ (7)

We therefore have that

\displaystyle \pi(T,\mathbf{x}) = \alpha x_{1} + \beta x_{2}, \quad \pi(Th, \mathbf{x}) = \gamma x_{1} + \delta x_{2}. \ \ \ \ \ (8)

From (6), we further have that

\displaystyle \bar{\pi}(\mathbf{x}) = x_{1} \left( \alpha x_{1} + \beta x_{2}\right) + x_{2} \left(\gamma x_{1} + \delta x_{2}\right). \ \ \ \ \ (9)

From Eq. (4) the dynamical system is then given by

\boxed{\dot{x}_{1} = x_{1} \left\{ \left(\alpha x_{1} + \beta x_{2} \right) - x_{1} \left( \alpha x_{1} + \beta x_{2}\right) - x_{2} \left(\gamma x_{1} + \delta x_{2}\right) \right\}},

\boxed{\dot{x}_{2} = x_{2} \left\{ \left( \gamma x_{1} + \delta x_{2}\right) -x_{1} \left( \alpha x_{1} + \beta x_{2}\right) - x_{2} \left(\gamma x_{1} + \delta x_{2}\right) \right\}},

subject to the constraint

\displaystyle x_{1} + x_{2} = 1. \ \ \ \ \ (10)

Indeed, because of the constraint (10), the dynamical system is actually one-dimensional, which we write in terms of {x_{1}} as

\displaystyle \boxed{\dot{x}_{1} = x_{1} \left(-1 + x_{1}\right) \left[\delta + \beta \left(-1 + x_{1}\right) - \delta x_{1} + \left(\gamma-\alpha\right)x_{1}\right]}. \ \ \ \ \ (11)

From Eq. (11), we immediately notice some things of importance. First, we are able to deduce just from the form of the equation what the invariant sets are. We note that for a dynamical system {\mathbf{x}' = \mathbf{f(x)} \in \mathbf{R^{n}}} with flow {\phi_{t}}, if we define a {C^{1}} function {Z: \mathbf{R}^{n} \rightarrow \mathbf{R}} such that {Z' = \alpha Z}, where {\alpha: \mathbf{R}^{n} \rightarrow \mathbf{R}}, then, the subsets of {\mathbf{R}^{n}} defined by {Z > 0, Z = 0}, and {Z < 0} are invariant sets of the flow {\phi_{t}}. Applying this notion to Eq. (11), one immediately sees that {x_1 > 0}, {x_1 = 0}, and {x_1 < 0} are invariant sets of the corresponding flow. Further, there also exists a symmetry such that {x_{1} \rightarrow -x_{1}}, which implies that without loss of generality, we can restrict our attention to {x_{1} \geq 0}.
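As a quick numerical illustration of Eq. (11), the following minimal Python sketch (the payoff values are hypothetical, chosen purely for illustration) integrates the replicator equation with a simple Euler scheme; trajectories started from different initial proportions all converge to the same interior equilibrium:

```python
# Minimal numerical sketch of the one-dimensional replicator equation (11),
# written in the equivalent factored form
#   dx1/dt = x1*(1 - x1)*[(beta - delta)*(1 - x1) + (alpha - gamma)*x1],
# integrated with a simple Euler scheme.  Payoff values are hypothetical.

def x1_dot(x1, alpha, beta, gamma, delta):
    """Right-hand side of Eq. (11)."""
    return x1 * (1.0 - x1) * ((beta - delta) * (1.0 - x1) + (alpha - gamma) * x1)

def integrate(x1_0, alpha, beta, gamma, delta, dt=0.01, steps=5000):
    x1 = x1_0
    for _ in range(steps):
        x1 += dt * x1_dot(x1, alpha, beta, gamma, delta)
    return x1

if __name__ == "__main__":
    # Hypothetical payoffs with delta < beta and gamma > alpha.
    alpha, beta, gamma, delta = 1.0, 1.2, 1.5, 0.9
    for x1_0 in (0.1, 0.5, 0.9):
        print("x1(0) =", x1_0, "-> x1(t_final) =",
              round(integrate(x1_0, alpha, beta, gamma, delta), 4))
```

With these particular payoffs the interior equilibrium sits at {x_{1} = 0.375} and is a sink, so all three runs print values close to 0.375.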

3. Fixed-Points Analysis

With the dynamical system in hand, we are now in a position to perform a fixed-points analysis. There are precisely three fixed points, which are invariant manifolds and are given by:

\displaystyle P_{1}: x_{1}^{*} = 0, \quad P_{2}: x_{1}^{*} = 1, \quad P_{3}: x_{1}^{*} = \frac{\beta - \delta}{-\alpha + \beta - \delta + \gamma}. \ \ \ \ \ (12)

Note that {P_{3}} actually contains {P_{1}} and {P_{2}} as special cases: when {\beta = \delta}, {P_{3} = 0 = P_{1}}, and when {\alpha = \gamma}, {P_{3} = 1 = P_{2}}. We will therefore just analyze the stability of {P_{3}}. The point {P_{3} = 0} represents a state of the population where all teams predominantly shoot three-point shots, while {P_{3} = 1} represents a state of the population where all teams predominantly shoot two-point shots. We additionally restrict

\displaystyle 0 \leq P_{3} \leq 1 \Rightarrow 0 \leq \frac{\beta - \delta}{-\alpha + \beta - \delta + \gamma} \leq 1, \ \ \ \ \ (13)

which implies the following conditions on the payoffs:

\displaystyle \left[\delta < \beta \cap \gamma \geq \alpha \right] \cup \left[\delta = \beta \cap \left(\gamma < \alpha \cup \gamma > \alpha \right) \right] \cup \left[\delta > \beta \cap \gamma \leq \alpha \right]. \ \ \ \ \ (14)

With respect to a stability analysis of {P_{3}}, we note the following. The point {P_{3}} is a:

• Local sink if {\{\delta < \beta\} \cap \{\gamma > \alpha\}},
• Source if {\{\delta > \beta\} \cap \{\gamma < \alpha\}},
• Saddle if {\{\delta = \beta \} \cap (\gamma < \alpha -\beta + \delta \cup \gamma > \alpha - \beta + \delta)}, or if {(\{\delta < \beta\} \cup \{\delta > \beta\}) \cap \gamma = \frac{\alpha \delta - \alpha \beta}{\delta - \beta}}.

What this last calculation shows is that the condition {\delta = \beta}, which corresponds to the point {x_{1}^* = 0}, that is, to a dominant three-point strategy, always exists as a saddle point! That is, there will NEVER be a league that dominantly adopts a three-point strategy; at best, some teams will move towards a three-point strategy while others will not, irrespective of what the analytics people say. This also shows that a team's basketball strategy really should depend on its respective payoffs, and not on current "trends". This behaviour is displayed in the following plot.

Note the saddle point (x1,x2) = (0,1). This clearly shows that all NBA teams will never adopt a dominant 3-point strategy, as it is always more optimal to play to maximize payoffs.

Further, the system exhibits some bifurcations as well. In the neighbourhood of {P_{3} = 0}, the linearized system takes the form

\displaystyle x_{1}' = \left(\beta - \delta\right) x_{1}. \ \ \ \ \ (15)

Therefore, {P_{3} = 0} destabilizes the system at {\beta = \delta}. Similarly, {P_{3} = 1} destabilizes the system at {\gamma = \alpha}. Therefore, bifurcations of the system occur on the lines {\gamma = \alpha} and {\beta = \delta} in the four-dimensional parameter space.
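As a sanity check on the fixed points and the bifurcation values quoted above, here is a short symbolic sketch (using sympy; the symbol names are generic) confirming that the linearization of Eq. (11) has eigenvalue {\beta - \delta} at {x_{1} = 0} and {\gamma - \alpha} at {x_{1} = 1}:

```python
# Symbolic sanity check of the fixed points and linearizations of Eq. (11).
import sympy as sp

x1, alpha, beta, gamma, delta = sp.symbols("x1 alpha beta gamma delta", real=True)

# Right-hand side of Eq. (11).
f = x1 * (-1 + x1) * (delta + beta * (-1 + x1) - delta * x1 + (gamma - alpha) * x1)

# Fixed points: x1 = 0, x1 = 1, and the interior point P3.
print("Fixed points:", sp.solve(sp.Eq(f, 0), x1))

# Eigenvalue of the linearization at each boundary fixed point.
df = sp.diff(f, x1)
print("Eigenvalue at x1 = 0:", sp.simplify(df.subs(x1, 0)))   # expect beta - delta
print("Eigenvalue at x1 = 1:", sp.simplify(df.subs(x1, 1)))   # expect gamma - alpha
```

The two boundary eigenvalues vanish precisely on the lines {\beta = \delta} and {\gamma = \alpha}, which is exactly where the bifurcations identified above occur.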

4. Global Stability and The Existence of Nash Equilibria

With the preceding fixed-points analysis completed, we are now interested in determining global stability conditions. The main motivation is to determine the existence of any Nash equilibria that occur for this game, via the following theorem: if {\mathbf{x}^{*}} is an asymptotically stable fixed point, then the symmetric strategy pair {[\sigma^{*}, \sigma^{*}]}, with {\sigma^{*} = \mathbf{x}^*}, is a Nash equilibrium. We will primarily make use of the monotonicity principle, which states the following: let {\phi_{t}} be a flow on {\mathbb{R}^{n}} with {S} an invariant set, and let {Z: S \rightarrow \mathbb{R}} be a {C^{1}} function whose range is the interval {(a,b)}, where {a \in \mathbb{R} \cup \{-\infty\}}, {b \in \mathbb{R} \cup \{\infty\}}, and {a < b}. If {Z} is decreasing on orbits in {S}, then for all {\mathbf{x} \in S},

\boxed{\omega(\mathbf{x}) \subseteq \left\{\mathbf{s} \in \partial S \,|\, \lim_{\mathbf{y} \rightarrow \mathbf{s}} Z(\mathbf{y}) \neq b\right\}},

\boxed{ \alpha(\mathbf{x}) \subseteq \left\{\mathbf{s} \in \partial S \,|\, \lim_{\mathbf{y} \rightarrow \mathbf{s}} Z(\mathbf{y}) \neq a\right\}}.

Consider the function

\displaystyle Z_{1} = \log \left(1 - x_{1}\right). \ \ \ \ \ (16)

Then, we have that

\displaystyle \dot{Z}_{1}= x_{1} \left[\delta + \beta \left(-1 + x_{1}\right) - \delta x_{1} + x_{1} \left(\gamma - \alpha\right)\right]. \ \ \ \ \ (17)

For the invariant set {S_1 = \{0 < x_{1} < 1\}}, we have that {\partial S_{1} = \{x_{1} = 0\} \cup \{x_{1} = 1\}}. One can then immediately see that in {S_{1}},

\displaystyle \dot{Z}_{1} < 0 \Leftrightarrow \left\{\beta > \delta\right\} \cap \left\{\alpha \geq \gamma\right\}. \ \ \ \ \ (18)

Therefore, by the monotonicity principle,

\displaystyle \omega(\mathbf{x}) \subseteq \left\{\mathbf{x}: x_{1} = 1 \right\}. \ \ \ \ \ (19)
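To make the monotonicity argument concrete, the following small numerical sketch (with hypothetical payoffs satisfying {\beta > \delta} and {\alpha \geq \gamma}) checks that {Z_{1}} decreases along an orbit in {S_{1}}, so the trajectory is driven toward the boundary point {x_{1} = 1}:

```python
# Numerical check of the monotone function Z1 = log(1 - x1): for hypothetical
# payoffs with beta > delta and alpha >= gamma, Z1 decreases along orbits in
# S1 = {0 < x1 < 1}, so trajectories are pushed toward x1 = 1.
import math

def x1_dot(x1, alpha, beta, gamma, delta):
    return x1 * (1.0 - x1) * ((beta - delta) * (1.0 - x1) + (alpha - gamma) * x1)

alpha, beta, gamma, delta = 1.2, 1.1, 1.0, 0.8   # hypothetical: beta > delta, alpha >= gamma
x1, dt = 0.05, 0.01
Z1_previous = math.log(1.0 - x1)
decreasing = True
for _ in range(10000):
    x1 += dt * x1_dot(x1, alpha, beta, gamma, delta)
    Z1 = math.log(1.0 - x1)
    decreasing = decreasing and (Z1 <= Z1_previous)
    Z1_previous = Z1

print("Z1 monotonically decreasing along the orbit:", decreasing)
print("final x1 ~", round(x1, 6))
```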

Note that the conditions {\beta > \delta} and {\alpha \geq \gamma} correspond to {P_{3}} above. In particular, for {\alpha = \gamma}, {P_{3} = 1}, which implies that {x_{1}^{*} = 1} is globally stable. Therefore, under these conditions, the symmetric strategy {[1,1]} is a Nash equilibrium. Now, consider the function

\displaystyle Z_{2} = \log \left(x_{1}\right). \ \ \ \ \ (20)

We can therefore see that

\displaystyle \dot{Z}_{2} = \left[-1 + x_{1}\right] \left[\delta + \beta\left(-1+x_{1}\right) - \delta x_{1} + \left(-\alpha + \gamma\right) x_{1}\right]. \ \ \ \ \ (21)

Clearly, {\dot{Z}_{2} < 0} in {S_{1}} if for example {\beta = \delta} and {\alpha < \gamma}. Then, by the monotonicity principle, we obtain that

\displaystyle \omega(\mathbf{x}) \subseteq \left\{\mathbf{x}: x_{1} = 0 \right\}. \ \ \ \ \ (22)

Note that the conditions {\beta = \delta} and {\alpha < \gamma} correspond to {P_{3}} above. In particular, for {\beta = \delta}, {P_{3} = 0}, which implies that {x_{1}^{*} = 0} is globally stable. Therefore, under these conditions, the symmetric strategy {[0,0]} is a Nash equilibrium. In summary, we have just shown that for the specific case where {\beta > \delta} and {\alpha = \gamma}, the strategy {[1,1]} is a Nash equilibrium, while for the specific case where {\beta = \delta} and {\alpha < \gamma}, the strategy {[0,0]} is a Nash equilibrium.

5. Discussion

In the previous section, which describes global results, we first concluded that for the case where {\beta > \delta} and {\alpha = \gamma}, the strategy {[1,1]} is a Nash equilibrium. The relevance of this is as follows. The condition on the payoffs thus requires that

\displaystyle \pi(T,T) = \pi(Th,T), \quad \pi(T,Th) > \pi(Th,Th). \ \ \ \ \ (23)

That is, given the strategy adopted by the other team, neither team could increase their payoff by adopting another strategy if and only if the condition in (23) is satisfied. Given these conditions, if one team has a predominant two-point strategy, it would be the other team’s best response to also use a predominant two-point strategy. We also concluded that for the case where {\beta = \delta} and {\alpha < \gamma}, the strategy {[0,0]} is a Nash equilibrium. The relevance of this is as follows. The condition on the payoffs thus requires that

\displaystyle \pi(T,Th) = \pi(Th,Th), \quad \pi(T,T) < \pi(Th,T). \ \ \ \ \ (24)

That is, given the strategy adopted by the other team, neither team could increase their payoff by adopting another strategy if and only if the condition in (24) is satisfied. Given these conditions, if one team has a predominant three-point strategy, it would be the other team’s best response to also use a predominant three-point strategy. Further, we also showed that {x_{1} = 1} is globally stable under the conditions in (23). That is, if these conditions hold, every team in the NBA will eventually adopt an offensive strategy consisting predominantly of two-point shots. The conditions in (24) were shown to imply that the point {x_{1} = 0} is globally stable. This means that if these conditions hold, every team in the NBA will eventually adopt an offensive strategy consisting predominantly of three-point shots. We also provided, through a careful stability analysis of the fixed points, criteria for the local stability of strategies. For example, we showed that a predominant three-point strategy is locally stable if {\pi(T,Th) - \pi(Th,Th) < 0}, while it is unstable if {\pi(T,Th) - \pi(Th,Th) \geq 0}. In addition, a predominant two-point strategy was found to be locally stable when {\pi(Th,T) - \pi(T,T) < 0}, and unstable when {\pi(Th,T) - \pi(T,T) \geq 0}. There is also the key point of which one of these strategies has the highest probability of being executed. We know that

\displaystyle \pi(\sigma,\mathbf{x}) = \sum_{s \in \mathbf{S}} \sum_{s' \in \mathbf{S}} p(s) x(s') \pi(s,s'). \ \ \ \ \ (25)

That is, the payoff to a team using strategy {\sigma} in a league with profile {\mathbf{x}} is proportional to the probability {p(s)} of this team using strategy {s \in \mathbf{S}}. We therefore see that a team’s optimal strategy is the one that maximizes its payoff, that is, the one for which {p(s)} is a maximum, while keeping in mind the strategy of the other team; hence, the existence of Nash equilibria. Hopefully, this work also shows that the concept that teams should attempt more three-point shots because a three-point shot is worth more than a two-point shot is a highly ambiguous statement. In actuality, one needs to analyze which offensive strategy is optimal, and this is constrained by a particular set of payoffs.
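Finally, to make the role of the probabilities {p(s)} in Eq. (25) concrete, here is a minimal sketch (hypothetical numbers only) computing {\pi(\sigma, \mathbf{x})} as the bilinear form built from the payoff matrix in (7):

```python
# Minimal sketch of Eq. (25) for the two-strategy game: the expected payoff of a
# mixed strategy sigma = (p(T), p(Th)) against a league profile x is p^T A x,
# where A is the payoff matrix from (7).  All numbers below are hypothetical.

def expected_payoff(p, x, A):
    """Return sum over s, s' of p(s) * x(s') * pi(s, s')."""
    return sum(p[i] * A[i][j] * x[j] for i in range(2) for j in range(2))

alpha, beta, gamma, delta = 1.0, 1.2, 1.5, 0.9   # hypothetical payoffs
A = [[alpha, beta],                              # rows: own action (T, Th)
     [gamma, delta]]                             # columns: opponent action (T, Th)
sigma = (0.7, 0.3)       # this team uses the predominant two-point strategy with probability 0.7
profile = (0.4, 0.6)     # league: 40% two-point-dominant, 60% three-point-dominant teams
print("pi(sigma, x) =", expected_payoff(sigma, profile, A))
```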


On The Acausality of Heat Propagation

In many physics and chemistry courses, one is typically taught that heat propagates according to the heat equation, which is a parabolic partial differential equation:

\boxed{u_t = \alpha u_{xx}},

where \alpha is the thermal diffusivity, which is material dependent. Note also that we are considering the one-dimensional case for simplicity.

Now, let f(x-at) be a solution to this problem, which represents a wave travelling at speed a. We get that

\boxed{-a f' = \alpha f''}.

This implies that

\boxed{u(x,t) = -\frac{\alpha c_1}{a} \exp\left[-\frac{a (x-at)}{\alpha}\right] + c_{2}},

where c_{1}, c_{2} are constants determined by appropriate boundary conditions. We can see that as a \to \infty, u(x,t) remains finite! That is, even with an infinite propagation speed (greater than the speed of light), the solution to the heat equation remains bounded. PDE folks will also say that solutions to the heat equation have characteristics that propagate at an infinite speed. Thus, the heat equation is fundamentally acausal; indeed, all such distribution propagations, from Brownian motions to simple diffusions, are fundamentally acausal and violate relativity theory.
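A quick symbolic check (a sympy sketch, nothing more) confirms both that the travelling-wave substitution reduces the heat equation to -a f' = \alpha f'' and that the exponential profile quoted above solves the heat equation exactly:

```python
# Sympy sketch: travelling-wave analysis of the heat equation u_t = alpha * u_xx
# with the ansatz u = f(x - a t).
import sympy as sp

x, t, a, alpha, c1, c2 = sp.symbols("x t a alpha c1 c2", positive=True)
f = sp.Function("f")

# Substituting u = f(x - a*t) into u_t - alpha*u_xx gives -a f' - alpha f'',
# i.e. the ODE -a f' = alpha f''  (sympy prints this in its Subs notation).
u = f(x - a * t)
print(sp.simplify(sp.diff(u, t) - alpha * sp.diff(u, x, 2)))

# The explicit profile quoted above solves the heat equation exactly (residual 0).
u_explicit = -alpha * c1 / a * sp.exp(-a * (x - a * t) / alpha) + c2
print(sp.simplify(sp.diff(u_explicit, t) - alpha * sp.diff(u_explicit, x, 2)))
```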

Some efforts have been made, and it is still an active area of mathematical physics research to form a relativistic heat conduction theory, see here, for more information.

What we really need are hyperbolic partial differential equations to maintain causality. That is why Einstein’s field equations, Maxwell’s equations, and the Schrodinger equation are hyperbolic partial differential equations. This can be seen by considering an analogous methodology for the wave equation in 1-D:

\boxed{u_{tt} = c^2 u_{xx}}.

Now, consider a travelling wave solution f(x-at) as before. Substituting this into the wave equation, we obtain that

\boxed{a^2 f'' = c^2 f'' \Rightarrow a^2 = c^2 \Rightarrow a = \pm c}.

That is, all such travelling-wave solutions to the wave equation travel at the speed of light, i.e., a = \pm c! Therefore, wave equations are fundamentally causal, and all dynamical laws of nature must be given in terms of hyperbolic partial differential equations in order to be consistent with relativity theory.
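The analogous symbolic check for the wave equation (again just a sympy sketch) shows that the travelling-wave substitution forces a^2 = c^2:

```python
# Companion sympy sketch for the wave equation: substituting u = f(x - a t) into
# u_tt = c^2 u_xx leaves (a^2 - c^2) f'' = 0, forcing a = +c or a = -c.
import sympy as sp

x, t, a, c = sp.symbols("x t a c", positive=True)
f = sp.Function("f")
u = f(x - a * t)

residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
print(sp.simplify(residual))            # proportional to (a**2 - c**2) * f''(x - a*t)
print(sp.solve(sp.Eq(a**2, c**2), a))   # with positive symbols sympy returns [c]; in general a = ±c
```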

Article on Three-Point Shooting in the Modern-Day NBA

 

Continuing the debate over the value of three-point shooting in today’s NBA, my article analyzing this issue from a mathematical perspective has now been published on the arXiv. Check it out!

  

Mathematical Origins of Life

The purpose of this post is to demonstrate some very beautiful (I think!) mathematics that arises from Darwinian evolutionary theory. It is a real shame that most courses and discussions dealing with evolution never introduce any type of mathematical formalism, which is very strange, since at the most fundamental levels, evolution must also be governed by quantum mechanics and electromagnetism, from which chemistry and biochemistry arise via top-down and bottom-up causation. See this article by George Ellis for more on the role of top-down causation in the universe and the hierarchy of physical matter. Indeed, my personal belief is that if some biologists and evolutionary biologists like Dawkins, Coyne, and others took the time to explain evolution with some modicum of mathematical formalism to properly describe the underlying mechanics, instead of using it as an opportunity to attack religious people, the world would be a much better place, and the dialogue between science and religion would be much smoother and more intelligible.

In this post, I will describe some of the formalism behind the phenomenon of prebiotic evolution. It turns out that there is a very good book by Claudius Gros on understanding evolution as a complex dynamical system (dynamical systems theory is my main area of research), and the interested reader should check out his book for more details on what follows below.

We can for simplicity consider a quasispecies as a system of macromolecules that have the ability to carry information, and consider the dynamics of the concentrations of the constituent molecules as the following dynamical system:

\boxed{\dot{x}_{i} = W_{ii}x_{i} + \sum_{j \neq i}W_{ij}x_{j} - x_{i} \phi(t)},

where x_{i} are the concentrations of N molecules, W_{ii} is the autocatalytic self-replication rate, and W_{ij} are mutation rates.

From this, we can consider the following catalytic reaction equations:

\boxed{\dot{x}_i = x_{i} \left(\lambda_{i} + \sum_{j} \kappa_{ij} x_j - \phi \right)},

\boxed{\phi = \sum_{k} x_{k}\left(\lambda_{k} + \sum_{j} \kappa_{kj} x_j\right)},

where x_i are the concentrations, \lambda_i are the autocatalytic growth rates, and \kappa_{ij} are the transmolecular catalytic rates. We choose \phi such that

\boxed{\dot{C} = \sum_i \dot{x}_i = \sum_i x_i \left(\lambda_i + \sum_j \kappa_{ij}x_{j} \right) - C \phi = (1-C)\phi}.

Clearly:

\lim_{C \to 1} (1-C)\phi = 0,

that is, this quick calculation shows that the total concentration C remains constant: the normalization C = 1 is preserved by the dynamics.

Let us consider now the case of homogeneous interactions such that

\kappa_{i \neq j} = \kappa, \kappa_{ii} = 0, \lambda_i = \alpha i,

which leads to

\boxed{\dot{x}_{i} = x_{i} \left(\lambda_i + \kappa \sum_{j \neq i} x_{j} - \phi \right)} ,

which becomes

\boxed{\dot{x}_i = x_i \left(\lambda_i + \kappa - \kappa x_i - \phi\right)}.

This is a one-dimensional ODE with the following invariant submanifolds:

\boxed{x_{i}^* = \frac{\lambda_i + \kappa - \phi}{\kappa}},

\boxed{x_i^* = 0}, \quad with the largest of the growth rates being \lambda_N = N \alpha.

With homogeneous interactions, the concentrations with the largest growth rates will dominate, so there exists an N^* with 1 \leq N^* \leq N such that

\boxed{x_i^* = \frac{\lambda_i + \kappa - \phi}{\kappa}, \quad N^* \leq i \leq N},

\boxed{x_i^* = 0, \quad 1 \leq i < N^*}.

The quantities N^* and \phi are determined via normalization conditions that give us a system of equations:

\boxed{1 = \frac{\alpha}{2\kappa} \left[N(N+1) - N^*(N^* - 1)\right] + \left[\frac{\kappa - \phi}{\kappa}\right] \left(N + 1 - N^*\right)},

\boxed{0 = \frac{\lambda_{N^*-1} + \kappa - \phi}{\kappa} = \frac{\alpha(N^* - 1)}{\kappa} + \frac{\kappa - \phi}{\kappa} }.

For large N, N^*, we obtain the approximation

\boxed{N - N^* \approx \sqrt{\frac{2 \kappa}{\alpha}}},

which is the number of surviving species.

Clearly, this is non-zero for a finite catalytic rate \kappa. This shows the formation of a hypercycle of molecules/quasispecies.
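A rough numerical sketch of the homogeneous catalytic equations (the parameter values below are my own, chosen only for illustration) reproduces this behaviour: only roughly \sqrt{2\kappa/\alpha} of the molecules with the largest growth rates survive:

```python
# Rough numerical sketch of the homogeneous catalytic equations
#   dx_i/dt = x_i * (lambda_i + kappa*(C - x_i) - phi),   lambda_i = alpha * i,
# with phi defined as above so that the total concentration C = sum_i x_i is
# kept at 1.  Parameter values are illustrative only.
import math

N, alpha, kappa = 30, 0.5, 25.0
lam = [alpha * (i + 1) for i in range(N)]
x = [1.0 / N] * N                  # uniform initial concentrations, C = 1
dt, steps = 0.002, 50000           # integrate up to t = 100 with a simple Euler scheme

for _ in range(steps):
    total = sum(x)
    phi = sum(xi * (li + kappa * (total - xi)) for xi, li in zip(x, lam))
    x = [max(xi + dt * xi * (li + kappa * (total - xi) - phi), 0.0)
         for xi, li in zip(x, lam)]

survivors = sum(1 for xi in x if xi > 1e-4)
print("surviving molecular species:", survivors)
print("sqrt(2*kappa/alpha) =", round(math.sqrt(2.0 * kappa / alpha), 2))
```

With these numbers \sqrt{2\kappa/\alpha} = 10, and the simulated survivor count comes out very close to that.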

These computations should clearly be taken with a grain of salt. As pointed out in several sources, hypercycles describe closed systems, whereas life exists in an open system driven by an energy flux. But the interesting thing is that, despite this, the very last calculation shows that there is a clear division between the molecules i = N^*, \ldots, N, which can be considered as a type of primordial life-form, and the remaining molecules, which belong to the environment.

Black Holes, Black Holes Everywhere


Nowadays, one cannot watch a popular science TV show, read a popular science book, or take an astrophysics class without hearing about black holes. The problem is that very few people discuss this topic appropriately. This is further evidenced by the fact that these same people also claim that the universe’s expansion is governed by the Friedmann equation as applied to a Friedmann-Lemaitre-Robertson-Walker (FLRW) universe.

The fact is that black holes, despite what is widely claimed, are not astrophysical phenomena; they are phenomena that arise from mathematical general relativity. That is, we postulate their existence from mathematical general relativity, in particular, Birkhoff’s theorem, which states the following (Hawking and Ellis, 1973):

Any C^2 solution of Einstein’s vacuum equations which is spherically symmetric in some open set V is locally equivalent to part of the maximally extended Schwarzschild solution in V.

In other words, if a spacetime contains a region which is spherically symmetric, asymptotically flat/static, and empty, such that T_{ab} = 0, then the metric in this region is described by the Schwarzschild metric:

\boxed{ds^2 = -\left(1 - \frac{2M}{r}\right)dt^2 + \frac{dr^2}{1-\frac{2M}{r}} + r^2\left(d\theta^2 + \sin^2 \theta d\phi^2\right)}

The concept of a black hole then arises from the structure of this metric, in particular from the event horizon at r = 2M and the singularity at r = 0.
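As a small verification sketch (written out by hand in sympy rather than with any specialized GR package), one can check that the Schwarzschild metric above is indeed a vacuum solution, i.e., that its Ricci tensor vanishes identically:

```python
# Verification sketch (sympy): the Schwarzschild metric is a vacuum solution,
# i.e. its Ricci tensor vanishes identically.  Coordinate order: (t, r, theta, phi).
import sympy as sp

t, r, th, ph, M = sp.symbols("t r theta phi M", positive=True)
coords = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)   # metric g_{ab}
ginv = g.inv()
n = 4

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, c], coords[b])
                             + sp.diff(g[d, b], coords[c])
                             - sp.diff(g[b, c], coords[d])) for d in range(n)) / 2
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor: R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                        + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
Ricci = sp.zeros(n, n)
for b in range(n):
    for c in range(n):
        expr = 0
        for a in range(n):
            expr += sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
            for d in range(n):
                expr += Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][c][d] * Gamma[d][b][a]
        Ricci[b, c] = sp.simplify(expr)

print(Ricci)   # expect the zero matrix
```

Every component of the printed Ricci tensor simplifies to zero, which is the statement that the Schwarzschild metric solves Einstein’s vacuum equations.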

The problem then arises because the very same astrophysicists who claim that black holes exist also claim that the universe is expanding according to the Einstein field equations as applied to an FLRW metric, which are frequently written nowadays as:

The Raychaudhuri equation:

\boxed{\dot{H} = -H^2 - \frac{1}{6} \left(\mu + 3p\right)},

(where H is the Hubble parameter)

The Friedmann equation:

\boxed{\mu = 3H^2 + \frac{1}{2} ^{3}R},

(where \mu is the energy density of the dominant matter in the universe and ^{3}R is the Ricci 3-scalar of the particular FLRW model),

and

The Energy Conservation equation:

\boxed{\dot{\mu} = -3H \left(\mu + p\right)}.
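For completeness, here is a minimal numerical sketch (my own illustrative choices: a flat FLRW model with ^{3}R = 0 and a constant equation of state p = w\mu) showing how these three equations are integrated in practice, with the Friedmann equation monitored as a constraint:

```python
# Minimal sketch: integrating the Raychaudhuri and energy-conservation equations
# for a flat FLRW model (3R = 0) with p = w*mu, and monitoring the Friedmann
# equation mu = 3H^2 as a constraint.  Units and initial data are illustrative.

def integrate_flrw(H0, w, dt=1e-4, steps=20000):
    H, mu = H0, 3.0 * H0**2                      # start on the Friedmann constraint
    for _ in range(steps):
        p = w * mu
        dH = -H**2 - (mu + 3.0 * p) / 6.0        # Raychaudhuri equation
        dmu = -3.0 * H * (mu + p)                # energy conservation equation
        H, mu = H + dt * dH, mu + dt * dmu
    return H, mu, mu - 3.0 * H**2                # last entry: constraint drift

if __name__ == "__main__":
    for w in (0.0, 1.0 / 3.0):                   # dust and radiation, say
        H, mu, drift = integrate_flrw(H0=1.0, w=w)
        print(f"w = {w:.2f}: H = {H:.4f}, mu = {mu:.4f}, constraint drift = {drift:.2e}")
```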

The point is that one cannot have it both ways! One cannot claim on one hand that black holes exist in the universe, while also claiming that the universe is FLRW! By Birkhoff’s theorem, the spacetime external to the black hole source must be spherically symmetric and static, whereas an FLRW spacetime is neither static nor asymptotically flat, because it lacks a global timelike Killing vector.

I therefore believe that models of the universe that incorporate both black holes and large-scale spatial homogeneity and isotropy should be much more widely introduced and discussed in the mainstream cosmology community. One such example is the class of Swiss-Cheese universe models. These models assume a FLRW spacetime with patches “cut out” in such a way as to allow for Schwarzschild solutions to simultaneously exist. Swiss-Cheese universes actually have a tremendous amount of explanatory power. One of the mysteries of current cosmology is the origin of dark energy. The beautiful thing about Swiss-Cheese universes is that one is not required to postulate the existence of a hypothetical dark energy to account for the accelerated expansion of the universe. This interesting article from New Scientist from a few years ago explains some of this.

Also, the original Swiss-Cheese universe model in its simplest foundational form was actually proposed by Einstein and Strauss in 1945.

The basic idea is as follows, and is based on Israel’s junction formalism (see Hervik and Gron’s book, and Israel’s original paper, for further details; I will just describe the basic idea in what follows). Let us take a spacetime and partition it into two:

\boxed{M = M^{+} \cup M^{-}}

with a boundary

\boxed{\Sigma \equiv \partial M^{+} \cap \partial M^{-}}.

Now, within these regions we assume that the Einstein Field equations are satisfied, such that:

\boxed{\left[R_{uv} - \frac{1}{2}R g_{uv}\right]^{\pm} = \kappa T_{uv}^{\pm}},

where we also induce a metric on \Sigma as:

\boxed{d\sigma^2 = h_{ij}dx^{i} dx^{j}}.

The trick with Israel’s method is understanding how \Sigma is embedded in M^{\pm}. This can be quantified by the covariant derivative on some basis vector of \Sigma:

\boxed{K_{uv}^{\pm} = \epsilon n_{a} \Gamma^{a}_{uv}}.

The projections of the Einstein tensor are then given by Gauss’ theorem and the Codazzi equation:

\boxed{\left[E_{uv}n^{u}n^{v}\right]^{\pm} = -\frac{1}{2}\epsilon ^{3}R + \frac{1}{2}\left(K^2 - K_{ab}K^{ab}\right)^{\pm}},

\boxed{\left[E_{uv}h^{u}_{a} n^{v}\right]^{\pm} = -\left(^{3}\nabla_{u}K^{u}_{a} - ^{3}\nabla_{a}K\right)^{\pm}},

\boxed{\left[E_{uv}h^{u}_{a}h^{v}_{b}\right]^{\pm} = ^{(3)}E_{ab} + \epsilon n^{u} \nabla_{u} \left(K_{ab} - h_{ab}K\right)^{\pm} - 3 \left[\epsilon K_{ab}K\right]^{\pm} + 2 \epsilon \left[K^{u}_{a} K_{ub}\right]^{\pm} + \frac{1}{2}\epsilon h_{ab} \left(K^2 + K^{uv}K_{uv}\right)^{\pm}}

Defining the operation [T] \equiv T^{+} - T^{-}, the Einstein field equations are given by the Lanczos equation:

\boxed{\left[K_{ij}\right] - h_{ij} \left[K\right] = \epsilon \kappa S_{ij}},

where S_{ij} results from defining an energy-momentum tensor across the boundary, and computing

\boxed{S_{ij} = \lim_{\tau \to 0} \int^{\tau/2}_{-\tau/2} T_{ij} dy}.

The remaining dynamical equations are then given by

\boxed{^{3}\nabla_{j}S^{j}_{i} + \left[T_{in}\right] = 0},

and

\boxed{S_{ij} \left\{K^{ij}\right\} + \left[T_{nn}\right] = 0},

with the constraints:

\boxed{^{3}R - \left\{K\right\}^2 + \left\{K_{ij}\right\} \left\{K^{ij}\right\} = -\frac{\kappa^2}{4} \left(S_{ij}S^{ij} - \frac{1}{2}S^2\right) - 2 \kappa \left\{T_{nn}\right\}}.

\boxed{\left\{^{3} \nabla_{j}K^{j}_{i} \right\} - \left\{^{3}\nabla_{i}K\right\} = -\kappa \left\{T_{in}\right\}}.

Therefore:

  1. If black holes exist, then by Birkhoff’s theorem, the spacetime external to the black hole source must be spherically symmetric and static, and cannot represent our universe.
  2. Perhaps a more viable model for our universe is then a spatially inhomogeneous universe at the level of Lemaitre-Tolman-Bondi, Swiss-Cheese, the set of G_{2} cosmologies, etc… The advantage of these models, particularly in the case of Swiss-Cheese universes, is that one does not need to postulate a hypothetical dark energy to explain the accelerated expansion of the universe; this naturally comes out of such models.

Under a more general inhomogeneous cosmology, the Einstein field equations now take the form:

Raychaudhuri’s Equation:

\boxed{\dot{H} = -H^2 + \frac{1}{3} \left(h^{a}_{b} \nabla_{a}\dot{u}^{b} + \dot{u}_{a}\dot{u}^{a} - 2\sigma^2 + 2 \omega^2\right) - \frac{1}{6}\left(\mu + 3p\right)}

Shear Propagation Equation:

\boxed{h^{a}_{c}h^{b}_{d} \dot{\sigma}^{cd} = -2H\sigma^{ab} + h^{(a}_ch^{b)}_{d}\nabla^{c}\dot{u}^{d} + \dot{u}^{a}\dot{u}^{b} - \sigma^{a}_{c} \sigma^{bc} - \omega^{a}\omega^{b} - \frac{1}{3}\left(h^{c}_{d}\nabla_{c}\dot{u}^{d} + \dot{u}_{c}\dot{u}^{c} - 2\sigma^2 - \omega^2\right)h^{ab} - \left(E^{ab} - \frac{1}{2}\pi^{ab}\right)}

Vorticity Propagation Equation:

\boxed{h^{a}_{b}\dot{\omega}^{b} = -2H\omega^{a} + \sigma^{a}_{b}\omega^{b} - \frac{1}{2}\eta^{abcd}\left(\nabla_{b} \omega_{c} + 2\dot{u}_{b}\omega_{c}\right)u_{d} + q^{a}}

Constraint Equations:

\boxed{h^{a}_{c} h^{c}_{d} \nabla_{b} \sigma^{cd} - 2h^{a}_{b}\nabla^{b}H - \eta^{abcd}\left(\nabla_{b}\omega_{c} + 2 \dot{u}_{b} \omega_{c}\right)u_{d} + q^{a} = 0},

\boxed{h^{a}_{b} \nabla_{a}\omega^{b} - \dot{u}_{a}\omega^{a} = 0},

\boxed{H_{ab} - 2\dot{u}_{(a}\omega_{b)} - h^{c}_{(a}h^{d}_{b)}\nabla_{c} \omega_{d} + \frac{1}{3} \left(2\dot{u}_{c}\omega_{c} + h^{c}_{d} \nabla_{c}\omega^{d}\right)h_{ab} - h^{c}_{(a}h^{d}_{b)} \eta_{cefg}\left(\nabla^{e}\sigma^{f}_{d}\right)u^{g}=0}.

Matter Evolution Equations through the Bianchi identities:

\boxed{\dot{\mu} = -3H\left(\mu + p\right) - h^{a}_{b}\nabla_{a}q^{b} - 2\dot{u}_{a}q^{a} - \sigma^{a}_{b}\pi^{b}_{a}},

\boxed{h^{a}_{b}\dot{q}^{b} = -4Hq^{a} - h^{a}_{b}\nabla^{b}p - \left(\mu + p\right)\dot{u}^{a} - h^{a}_{c}h^{b}_{d}\nabla_{b} \pi^{cd} - \dot{u}_{b}\pi^{ab} - \sigma^{a}_{b}q^{b} + \eta^{abcd}\omega_{b}q_{c}u_{d}}.

One also has evolution equations for the Weyl curvature tensors E_{ab} and H_{ab}; these can be found in Ellis’ Cargese Lectures.

Despite the fact that these modifications are absolutely necessary if one is to take seriously the notion that our universe has black holes in it, most astronomers and indeed most astrophysics courses continue to use the simpler versions assuming that the universe is spatially homogeneous and isotropic, which contradicts by definition the notion of black holes existing in our universe.

Some Thoughts On Howard Beck’s Bleacher Report Article

Howard Beck had an interesting article today on Bleacher Report, basically suggesting that the NBA finals, in particular, the current style of play embodied by The Golden State Warriors is somehow a vindication of D’Antoni’s basketball philosophies: “Shoot a lot of threes”, “Shoot in 7 seconds or less”, “Play small lineups”, etc…

While the Warriors have certainly embodied some of these philosophies, my personal opinion is that D’Antoni’s style of play can only be vindicated if there is a clear trend in championship teams that reflect these philosophies. As I show below, this is simply not the case.

I looked at the last 15 NBA Champions (from 2000-2014), and tried to see if there were any clear patterns in common between the teams. This is essentially what I found:

[Table: statistical rankings of the last 15 NBA champions (2000-2014)]

Two things that are immediately clear are:

1. There is very little that championship teams have in common!

2. The overwhelming thing that they do have in common is that 14 of the last 15 NBA champions have all been ranked in the Top 10 for Defensive Rating, something that Mike D’Antoni’s coaching philosophy has never really included throughout his years in Phoenix, New York, and Los Angeles.

This, I believe, is the grand point that no one seems interested in making, perhaps because, according to the “mainstream”, defensive-oriented basketball is by definition “less flashy”, even though it remains the overwhelming common characteristic amongst championship-winning teams.

Perhaps, the Warriors will win this year, but as I said above, I do not believe that one year is anywhere near enough to establish a trend and a vindication of D’Antoni’s basketball philosophies.

Further, there were some other things in Beck’s article that I found to be a bit concerning:

He claimed: “Today, coaches speak enthusiastically about ‘positionless’ basketball—whereas 10 years ago, D’Antoni had to sell Marion and Stoudemire on the concept.”

This is not actually true. The triangle offense is the de facto example of “positionless” basketball, and has been around since the 1940s when Sam Barry introduced it at USC. Phil Jackson and Tex Winter’s Bulls and Lakers teams embodied the concept of positionless basketball. In fact, as can be seen from the diagram below (taken from http://khamel83.tripod.com/intro.htm), players don’t have set positions in the triangle offense. Rather, there are regions based on optimality and spacing:

[Diagram: triangle offense spacing spots, via khamel83.tripod.com]

Many examples can be found of teams playing in the triangle offense system with guards posting up, big men coming out to shoot threes, etc…

An Analysis of The 2015 NBA Finals Matchup

The NBA finals are exactly five days away, and I wanted to present an analysis breaking down the matchup between The Golden State Warriors and Cleveland Cavaliers.

I used machine and statistical learning techniques to generate the most probable scenarios for the outcome of each game, and this is what I found.

[Table: the most probable scenarios for the GSW-CLE series, with the probability of each scenario occurring]

Note that the probabilities listed above are not the probabilities for a team to win a specific game; they are the probabilities of a specific scenario occurring. Also, multiple scenarios can occur in a single game, so the probability that one of several scenarios occurs is the sum of the individual probabilities.

The Model Results So Far (Updated: June 11, 2015)

Game 1: Scenario Outcomes: 1 and 2 – GSW win

Game 2: Scenario Outcome: 9 – CLE win

Game 3: Scenario Outcomes: 5, 8 – CLE win

Thoughts so far: Despite GSW being down 2-1 right now, I still believe that Cleveland’s wins were statistical anomalies. Cleveland’s Game 2 and Game 3 wins corresponded to scenarios that, according to our model, had only 1.07%, 9.34%, and 1.765% chances of occurring in this series, whereas the GSW Game 1 win had a 44% chance of occurring in this series.

Game 4: Scenario Outcome: 2 – GSW win

Updated: June 14, 2015

Game 5: Scenario Outcomes: 1,2 – GSW win

Thoughts: All of GSW’s wins have come through the dominant scenarios in this series, i.e., Outcomes 1 and 2, while all of CLE’s wins in this series have been statistical anomalies/outliers. This pattern continued in Game 5.

Updated: June 17, 2015

Game 6: Scenario Outcomes: 1,2 – GSW win

Another GSW win through the dominant scenarios in the series, as expected.