Here are some links to excellent cosmology lectures on YouTube. The videos teach cosmology as you would see it in a university/college setting. They do require some knowledge of General Relativity, but in principle one can get by knowing what a metric tensor is, what Christoffel symbols are, and the definitions of the Riemann and Ricci tensors. A good introductory book for learning all of this is the one by Stephani. These lectures are by the world-renowned cosmologist George F.R. Ellis, who is now Professor Emeritus at the University of Cape Town.

Here are some lectures on cosmology and philosophy and their importance to one another; these are not at a technical level:

There has been much noise in recent days about Sikhs wanting to enlist in the U.S. Army and being unable to do so because they insist on following their religious mandates: uncut hair, an untrimmed beard, and covering their hair with a turban. The recent news stories are summarized here:

As usual, a myriad of folks have come out chiding the Sikh soldier for not conforming, that is, for not disobeying his religion by cutting his hair and beard. Such comments are backed by the questions that typically come up whenever similar stories appear in the media: how are Sikhs expected to wear gas masks, how can they perform in combat operations without a helmet, and so on. The point is that all of these issues are, in fact, non-issues and completely unfounded. There is much literature on Sikhs' performance in the various wars throughout history, and their success in those wars with turbans and beards, so I will not pursue that discussion here. However, I would like to take this opportunity to point out a level of hypocrisy that people who oppose Sikhs in the military on the aforementioned grounds seem to have a penchant for. Here are just a few of countless examples of White American soldiers fighting in US conflicts with beards and, in some cases, long hair. Why weren't the same questions and objections raised in their cases?

Here are just some examples of Sikhs fighting and giving their lives in World War II for the Allied forces. As you can see, no technical issues of turbans and beards arise in these cases:

Winston Churchill had this to say about Sikhs as well:

“…..It is a matter of regret that due to the obsession of the present times people are distorting the superior religious and social values, but those who wish to preserve them with respect, we should appreciate them as well as help them. Sikhs do need our help for such a cause and we should give it happily. Those who know the Sikh history, know England’s relationship with the Sikhs and are aware of the achievements of the Sikhs, they should persistently support the idea of relaxation to Sikhs to ride a motorbike with their turbans on, because it is their religious privilege.”

Churchill further added:

“…British people are highly indebted and obliged to Sikhs for a long time. I know that within this century we needed their help twice and they did help us very well. As a result of their timely help, we are today able to live with honour, dignity, and independence. In the war, they fought and died for us, wearing the turbans. At that time we were not adamant that they should wear safety helmets because we knew that they are not going to wear them anyways and we would be deprived of their help. At that time due to our miserable and poor situation, we did not force it on them to wear safety helmets, why should we force it now? Rather, we should now respect their traditions and by granting this legitimate concession, win their applaud.”

So, we should all strive to be a bit more educated in this regard. These exemptions have been granted to White American soldiers in the past, even if not in a formal sense, and there is an eerie silence from the same critics of Sikhs wanting to join the military when non-Sikhs have also failed to conform to these rules, as the above pictures clearly demonstrate.

Krauss and many other physicists continuously engage in this type of low-level philosophy with the ironic goal of diminishing the value of philosophy using “science”.

This paper dissects all the arguments in Krauss' book and shows, from a mathematical standpoint, that like others who make similar arguments, they are not grounded in actual physics and are extremely flawed. One therefore concludes that such arguments are based not in science, but in bad philosophy.

A teaser: this is what "nothing" actually looks like (well, one depiction of it anyway). But this is not nothing; it is something. Where did this structure come from? Krauss ignores this question entirely in his book, which is very strange.

The famed cosmologist George Ellis also has discussed Krauss’ book in one of his talks, here is the link for that:

A video making some of the arguments in the above paper easier to understand can be found here:

I debated with myself at length on whether to write this post. I have seen Interstellar twice now, including the special 70 mm IMAX screening, and am seeing it a third time later today. Simply put, the movie is fascinating. It combines (yes) accurate science and real depictions of general relativistic effects with a great story, as is to be expected from Christopher Nolan.

It was pointed out to me recently that some people have taken to the internet to write extensive articles criticizing the science in the movie, which is very strange. At first, I didn't think too much of it: Kip Thorne was not only an executive producer but also a consultant on the film, and has of course seen it. Surely, if something were wrong from a GR point of view, he would have pointed it out. After all, he managed to get two original scientific papers out of working on this movie.

The two reviews criticizing the science that I have seen, so far, stem from:

They seem keen on really nitpicking certain things, which is certainly their prerogative, but in this article I will just discuss a major flaw in both of their reviews, in which they claim part of the science of Interstellar is wrong. They both take issue with the time dilation effect described in the movie for the water planet close to the black hole, where it is claimed that 1 hour in the planet's reference frame corresponds to 7 years in the reference frame of an observer far from the black hole. The two reviewers then go on to say that this is impossible because:

1. One would have to essentially be a “pinch” from the event horizon of the black hole.

2. The planet would not be in a stable orbit, and would spiral and crash into the black hole’s singularity point.

These are their two grand assumptions, but simply put, these assumptions are very, very wrong! They are based on the Schwarzschild solution of General Relativity:

This metric tensor describes the local geometry of spacetime outside a static, non-rotating, spherically symmetric black hole or astrophysical body.

Notice that I emphasized non-rotating. If one uses this geometry, as Plait and Trotta have, one will deduce all sorts of wrong conclusions. In truth, as both Thorne and Nolan have said in the special-features videos posted on YouTube (and, I believe, as the characters state in the film), the black hole in the movie is spinning very, very fast, and therefore its angular momentum cannot be neglected. One therefore needs, at minimum, to use the Kerr metric:
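For reference, the standard textbook form of the Kerr metric in Boyer-Lindquist coordinates (geometric units G = c = 1; the original figure is not reproduced here, so this is the usual form, not necessarily the exact expression originally displayed) is:

```latex
ds^2 = -\left(1 - \frac{2Mr}{\Sigma}\right) dt^2
       - \frac{4 M a r \sin^2\theta}{\Sigma}\, dt\, d\phi
       + \frac{\Sigma}{\Delta}\, dr^2 + \Sigma\, d\theta^2
       + \left(r^2 + a^2 + \frac{2 M a^2 r \sin^2\theta}{\Sigma}\right) \sin^2\theta\, d\phi^2 ,
\qquad
a \equiv \frac{J}{M}, \quad
\Sigma \equiv r^2 + a^2 \cos^2\theta, \quad
\Delta \equiv r^2 - 2Mr + a^2 .
```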

where

J denotes the angular momentum, which is absolutely key to understanding that the effect depicted in the movie is indeed very plausible.

Now we deal with the claimed time dilation effect of 1 hour = 7 years as described earlier. It can be shown that the time dilation equation derived from the Kerr metric takes the form:
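For a hovering (static) observer at fixed r, θ, and φ, the proper-time relation can be read off from the g_tt component of the Kerr metric above; in geometric units (a sketch of the standard result, assuming a static observer):

```latex
d\tau = \sqrt{1 - \frac{2Mr}{\Sigma}}\; dt ,
\qquad \Sigma \equiv r^2 + a^2 \cos^2\theta .
```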

Substituting d\tau = 1 hour and dt = 7 years, one obtains the following relation:
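Converting both intervals to the same units (7 years ≈ 7 × 365.25 × 24 = 61362 hours), the required dilation factor works out to:

```latex
\frac{d\tau}{dt} = \sqrt{1 - \frac{2Mr}{\Sigma}}
= \frac{1\ \text{hour}}{61362\ \text{hours}}
\approx 1.63 \times 10^{-5} .
```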

This equation fully describes a black hole of mass M rotating with angular momentum J, as seen by an observer at radial coordinate r and angular coordinate θ. The fraction on the right-hand side of the equation fully captures the 1 hour = 7 years dilation effect. For the Kerr metric, unlike the Schwarzschild metric, several stable orbits can occur. Plait's article took issue with the claim on the grounds that a stable orbit requires an orbital radius of at least 3 times the Schwarzschild radius, but as I said, that is because he assumed the wrong geometry. In actuality, the Kerr metric allows for three relevant innermost stable orbits, derived in the paper by Bardeen, Press, and Teukolsky, Astrophysical Journal, Vol. 178, pp. 347-370 (1972):

1. For zero angular momentum (a = 0): innermost stable orbit = 3 × Schwarzschild radius

2. For angular momentum a = M (corotating motion): innermost stable orbit = 0.5 × Schwarzschild radius

3. For angular momentum a = M (retrograde motion): innermost stable orbit = 4.5 × Schwarzschild radius

It turns out that the only way to satisfy the equation above is case #2. One obtains two solutions:

or

Therefore, as this shows, it is completely possible to have a rotating black hole with an observer outside of it that experiences such time dilation effects while still exhibiting a stable orbit, that is, it never crashes into the black hole!

Just for fun, let's plug in some numbers. Consider a very massive black hole of 2000 solar masses; applying the above formulas, we see that:

I chose to fix the angular coordinate for demonstration purposes only.

Therefore, this calculation shows that the time dilation effect in the movie is perfectly reasonable and accurate. I will write more later about the other aspects of these reviews, which on a first reading also seem to be based on incorrect assumptions, but at the present moment, I don't have the time!

Update: So, an article was just released detailing the science of Interstellar: http://www.space.com/27692-science-of-interstellar-infographic.html

In it, it is said that the mass of the black hole is 100 million solar masses. With this, I can now properly work out the example above; I made up numbers before because I did not have this information until today! So, here it is, re-worked:

For the very massive black hole in the movie, it is stated that M = 100 million times the mass of the Sun. Substituting this into the equation above, we get:
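As a rough numerical sketch (not the author's exact calculation): assuming an equatorial observer, for whom Σ = r² and the static dilation factor reduces to √(1 - 2GM/(c²r)), one can solve directly for the radius that yields 1 hour = 7 years:

```python
# Sketch: radius giving a 1 hour = 7 years dilation factor,
# assuming an equatorial static observer so the factor is sqrt(1 - 2GM/(c^2 r)).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
M = 1e8 * M_sun        # black hole mass stated in the film

eps = 1.0 / (7 * 365.25 * 24)   # d(tau)/dt = 1 hour / 7 years
r_s = 2 * G * M / c**2          # Schwarzschild radius

# 1 - r_s/r = eps^2  =>  r = r_s / (1 - eps^2): a tiny fraction outside r_s
r = r_s / (1 - eps**2)

print(f"r_s = {r_s:.3e} m (~{r_s / 1.496e11:.2f} AU)")
print(f"r/r_s - 1 = {r / r_s - 1:.3e}")
```

The Schwarzschild radius alone is already about 2 AU for this mass, so the orbit is planetary-system-sized rather than microscopic.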

This will no doubt please anyone who noticed the orbit in the first example seemed too small!

Update: A Comment on Tidal Forces

Also, by popular request: some have claimed that the planet close to the black hole should be completely destroyed by tidal forces, since it is so close to the black hole. This is not so. For this discussion, I will revert to the Schwarzschild metric, since the mathematics is simpler, but the discussion can of course be extended to the Kerr metric. Consider the planet in question (the water planet) at a radial position r. The tidal forces felt by the planetary body are measured by the orthonormal components of the Riemann curvature tensor. If we consider a static orthonormal frame, as is done in Misner, Thorne and Wheeler, we have:

At this radial position, we obtain for the Riemann curvature tensor components:
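In that static orthonormal frame, the standard Schwarzschild results (as tabulated in Misner, Thorne and Wheeler, geometric units) are:

```latex
R_{\hat{t}\hat{r}\hat{t}\hat{r}} = -\frac{2M}{r^3}, \qquad
R_{\hat{t}\hat{\theta}\hat{t}\hat{\theta}} = R_{\hat{t}\hat{\phi}\hat{t}\hat{\phi}} = \frac{M}{r^3}, \qquad
R_{\hat{r}\hat{\theta}\hat{r}\hat{\theta}} = R_{\hat{r}\hat{\phi}\hat{r}\hat{\phi}} = -\frac{M}{r^3}, \qquad
R_{\hat{\theta}\hat{\phi}\hat{\theta}\hat{\phi}} = \frac{2M}{r^3}.
```

Note that every component scales as M/r³, so at the horizon r = 2M they scale as 1/M², and the larger the hole, the gentler the tides there.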

Now, we can transform over to the planet’s frame by applying a Lorentz boost in the radial direction with velocity:

One sees that all components of the curvature tensor are completely unaffected by this boost! Therefore, none of the components of the curvature tensor in the planet's reference frame become infinite at the gravitational radius. Moreover, as the planet/observer approaches the horizon, the Riemann components show that the tidal forces remain finite and do not tear anything apart, at least when the mass M is very large (as is the case in the film). However, let us examine the curvature invariants; for the Schwarzschild metric we have:
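The simplest such invariant is the Kretschmann scalar, which for the Schwarzschild geometry (geometric units) is:

```latex
R_{\alpha\beta\gamma\delta} R^{\alpha\beta\gamma\delta} = \frac{48 M^2}{r^6}.
```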

This is an invariant, so the singularity at r = 0 is present in every reference frame. Indeed, as r -> 0, the tidal forces become infinite. So only past the horizon, very close to the singularity, do we have to worry about tidal forces from the black hole breaking anything up!

Now, the astro community is largely mistaken on this whole issue of tidal forces ripping up the planet. The papers they cite all invoke the Roche limit. This cannot be done, for several reasons. As I outlined to another astronomer (who will remain unnamed in this posting), the problem is as follows:

I am a stickler for mathematical form, and I refuse to acknowledge the validity of the Roche limit in General Relativity. Here are my reasons:

1. Even if I were to conclude that a spherical body orbiting a Kerr black hole will break up because of tidal forces as described by the Roche limit, this conclusion is highly questionable without a two-body GR approach, because one assumes from the outset that the Kerr black hole remains undisturbed and that the orbiting mass has no effect on it; that is, one is implicitly using a far-field approximation from the outset.

2. Newtonian gravity is linear; GR is not. Since there is no analytic solution to the two-body problem in GR, there is simply NO GR equivalent of the Roche limit.

3. The Roche limit is simply a result of two-body orbital Newtonian mechanics (via Lagrange points), and I prefer to leave it there. Adding GR corrections is not good enough.

4. In the Roche limit and the governing Newtonian regime, pressure does not generate any gravitational field; but, as you well know, in GR pressure does contribute to the energy-momentum tensor, and as a result to the gravitational attraction. In fact, if collapse proceeds sufficiently far, the pressure grows exponentially and becomes far more important than the rest-mass density.

5. The real way to do this problem, aside from considering a two-body problem in GR and obtaining an analytic solution, is to consider an internal Schwarzschild geometry in an external Kerr background. But because of the cross-term in the Kerr metric, which cannot be transformed away by any coordinate transformation, the matching conditions are impossible to derive. If, on the other hand, I assume an external Schwarzschild geometry (which is not relevant for this problem, but still), one obtains the well-known TOV equation. The TOV equation is essentially how one obtains the collapse conditions properly.

6. The Roche limit is a Newtonian result; the linearity of Newtonian gravity and the absence of pressure as a source of gravity prevent any such relativistic effect from appearing in Newtonian physics.

7. Roche limit arguments are always weak-field arguments, which will not give an accurate answer, especially in this regime.

It does raise an interesting question, though. Why insist on using the Roche limit if the pressure's influence on spacetime curvature (which would be significant, given the magnitude of the tidal forces) cannot be accounted for in this approach? It is in fact worse than this: if there is a significant pressure, as the Roche limit implies, then the energy-momentum tensor is no longer zero, and one does not even have a Schwarzschild/Kerr or any other vacuum solution. This goes into the domain of cosmology, which makes the problem much more difficult.

Finally, there are also issues having to do with causality: the governing structural equations in the Roche limit approach are elliptic PDEs (the Poisson equation) and the heat equation, a parabolic PDE. Both are acausal; in the case of elliptic PDEs, all solutions are spacelike, and no physical body moves along spacelike hypersurfaces.

Therefore, for all these mathematical reasons, I refuse to acknowledge the validity of the Roche limit in this situation and prefer a non-Newtonian, GR approach, which is the only correct way to do this problem. But, like I said, we're approaching this from different points of view; Phil and the astro community seem satisfied with approximate solutions! :-) (Thanks to GFE and CCD for pointing some of this out in an interesting discussion on the mathematical formulation of Einstein's equations!)

Update: On the whole issue of those giant waves in the movie

A lot of folks have been asking whether those giant waves observed on the water planet near the black hole are feasible. My honest answer is that I would have to solve some equations to find out, but ironically those equations are not astrophysical or general relativistic in nature; they depend purely on standard Navier-Stokes theory. As you may have noticed in the film, the wavelength of the water waves was much, much greater than the depth of the water itself. This situation is ripe for the shallow-water equations, which are obtained by applying the Navier-Stokes equations to exactly such a problem. For those who are interested, any reasonably advanced fluid mechanics textbook discusses these equations at length. In any event, they are three coupled, nonlinear partial differential equations with no analytic solution in general; they look like:
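In one common convention (sign conventions for the Coriolis and drag terms vary by author), the rotating shallow-water equations with linear drag read:

```latex
\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y} - f v
  = -g \frac{\partial h}{\partial x} - b\,u , \\[4pt]
\frac{\partial v}{\partial t} + u \frac{\partial v}{\partial x} + v \frac{\partial v}{\partial y} + f u
  = -g \frac{\partial h}{\partial y} - b\,v , \\[4pt]
\frac{\partial h}{\partial t}
  + \frac{\partial}{\partial x}\big[(H + h)\,u\big]
  + \frac{\partial}{\partial y}\big[(H + h)\,v\big] = 0 .
```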

where u is the x-direction velocity, v is the y-direction velocity, h is the height deviation of the pressure surface from the mean height H (i.e., how high the wave will be), H is the mean height of the pressure surface, g is the local acceleration due to gravity, f is the Coriolis coefficient determined from the rotation of the planet, and b denotes viscous drag. For the situation in the movie, we are told that the acceleration due to gravity on the planet is 130% of Earth's, which means that g = 9.81 × 1.30 = 12.753 m/s^2; f will be influenced by the planet's own rotation, combined, interestingly enough, with the Lense-Thirring/frame-dragging effect from the rotating black hole, which will also cause the planet to precess. Solving these equations must be done numerically, and this has been well studied in the scientific literature; indeed, many simulations have been done. Here are some examples:

Update: A Comment on Hawking Radiation

Some have also claimed that the radiation emitted by the massive black hole, namely Hawking radiation, should be enough to kill the nearby observers. This is a misconception of what Hawking radiation is. Hawking radiation is a quantum effect, and its temperature is given by the equation:
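In terms of the surface gravity κ, the standard expression for the Hawking temperature is:

```latex
T_H = \frac{\hbar\,\kappa}{2\pi k_B c} .
```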

That is, this gives the temperature of the electromagnetic radiation emitted by a black hole. Let us do some calculations for the black hole in question. For a Kerr black hole, the surface gravity is (see the discussion in Gron and Hervik):
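For reference, the textbook Kerr result in geometric units, with a = J/M, is:

```latex
\kappa = \frac{\sqrt{M^2 - a^2}}{r_+^2 + a^2} ,
\qquad r_+ = M + \sqrt{M^2 - a^2} .
```

Note that κ -> 0 as a -> M, so a near-extremal hole is even colder than a Schwarzschild hole of the same mass.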

The other constants in the temperature formula are the well-known Planck constant, Boltzmann constant, and speed of light. Putting these two equations together and substituting the numbers derived in the previous section, we find that the temperature of the EM radiation emitted by the giant black hole in the movie is approximately:
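As a sanity check, here is a small script using the Schwarzschild value, which serves as an upper bound since a rapidly spinning hole has a smaller surface gravity and hence a lower temperature:

```python
import math

# Sketch: Hawking temperature of a 10^8 solar-mass Schwarzschild black hole
# (upper bound for the spinning hole in the film).
hbar = 1.055e-34       # reduced Planck constant, J s
c = 2.998e8            # speed of light, m/s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23        # Boltzmann constant, J/K
M = 1e8 * 1.989e30     # black hole mass, kg

# T = hbar c^3 / (8 pi G M k_B)
T = hbar * c**3 / (8 * math.pi * G * M * k_B)
print(f"T = {T:.2e} K")
```

The result is of order 10^-16 K, far below even the cosmic microwave background.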

which is extremely, extremely negligible! Therefore, no one will die from the EM radiation emitted from the massive spinning black hole!

UPDATE: BY POPULAR REQUEST

COMMENTS: ON THE LAST ACT OF THE MOVIE

So, I have received some requests to discuss the scientific accuracy of the last part of the film, where the main character, Cooper, travels through the black hole and reaches a five-dimensional universe.

The key to understanding why this is possible is to recall, once again, that we are using the Kerr metric, which describes spinning black holes; that is the whole point. In a non-rotating black hole, once someone passes the event horizon, he has no choice but to continue towards the singularity, meeting his eventual death. This is not so for a Kerr black hole. Let us see why:

Much of my discussion is based on the great GR books by Hawking and Ellis; Misner, Thorne, and Wheeler; Gron and Hervik; and Wald.

Let me write the Kerr metric in a slightly different form that will be practical for this discussion:

where we have defined per the conventions in Gron and Hervik,
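A standard compact form of the metric and these definitions (reconstructed here following Gron and Hervik's conventions, geometric units) is:

```latex
ds^2 = -\frac{\Delta}{\Sigma}\left(dt - a \sin^2\theta\, d\phi\right)^2
       + \frac{\sin^2\theta}{\Sigma}\left[(r^2 + a^2)\, d\phi - a\, dt\right]^2
       + \frac{\Sigma}{\Delta}\, dr^2 + \Sigma\, d\theta^2 ,
\qquad
\Delta \equiv r^2 - 2Mr + a^2 , \quad
\Sigma \equiv r^2 + a^2 \cos^2\theta , \quad
a \equiv \frac{J}{M} .
```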

Recall that this metric describes the spacetime outside a rotating black hole with mass M and angular momentum J = Ma. Notice that in this form the singularities of the metric are easily seen: namely, where \Delta = 0 and where \Sigma = 0. The \Delta = 0 equation describes the horizon, and folks familiar with general relativity know that this is a coordinate singularity: by a suitable coordinate transformation, one can 'transform it away'. However, \Sigma = 0 denotes a real physical singularity, given by the set of points that satisfy

As can be confirmed, the solution to this equation is a ring:
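Explicitly, since both terms in Σ are non-negative:

```latex
\Sigma = r^2 + a^2 \cos^2\theta = 0
\;\iff\; r = 0 \ \text{and}\ \theta = \frac{\pi}{2} ,
```

which in Kerr-Schild (Cartesian-type) coordinates corresponds to the ring x² + y² = a², z = 0.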

Therefore, while the singularity for a regular Schwarzschild metric is a point singularity from which nothing can escape, the singularity for a Kerr rotating black hole is a ring, which in fact, is avoidable!

To see this, let us dive a bit further into the structure of the Kerr metric. Following Hawking and Ellis’ remarkable text,

From this figure, we note the following remarkable property of the Kerr metric: one passes through the ring singularity of the rotating black hole by going from the (x,z) plane on the left to the (x',z') plane on the right of the diagram. It can be shown that, because of this complicated topology, closed timelike curves exist in the neighbourhood of the ring singularity. (For those who are interested, a complete discussion involving Killing vectors is given in Wald's GR text.) The significance of closed timelike curves is that an observer traversing them can violate causality, and thus go backwards in time by an arbitrary amount. Note that there are some issues regarding stability that I have not detailed here, as they are much more technical than what is covered in this posting.

Now, connecting all of this to the movie: the structure above allows one (as has been reported in the literature) to use the Kerr black hole as a wormhole itself. It is therefore plausible that Cooper's character avoids the singularity of the rotating black hole and is transported to another region of the universe. In the movie he ends up in a 5-D universe: 4 spatial dimensions and 1 time dimension. Again, this is perfectly possible in theory. Purists might argue against it, but like I said, it is theoretically possible. For example, the wormhole could transport you to a region of spacetime where the geometry is locally 5-D Minkowskian, with the metric:
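That is, the flat five-dimensional line element:

```latex
ds^2 = -c^2\, dt^2 + dx_1^2 + dx_2^2 + dx_3^2 + dx_4^2 .
```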

However, since human beings can only perceive 3 spatial dimensions and 1 time dimension, these four-dimensional spatial sections have to be embedded in a 3-dimensional setting for us to visualize them. These four-dimensional spatial sections are completely Euclidean, and one can think of a tesseract with the following domain:

In relativity theory, time flows "upwards" in such diagrams. One can foliate the above metric tensor into a 1+4 split and obtain the following dynamical picture of how an observer "moves through" such a five-dimensional spacetime. Each spatial slice is 4-D, but since we cannot perceive 4 spatial dimensions, each 4-D slice is embedded into three-dimensional space to produce the tesseract seen in the film:

Both are depicted in the movie. So, once again, anyone saying these ideas are pure fiction/fantasy out of Nolan's mind is mistaken. There are technical arguments giving mathematical conditions under which these scenarios would fail, but that becomes too technical for a science fiction movie. The point is that, by and large, the theory of General Relativity supports these ideas, and it all rests on the use of a rotating black hole in the movie; that is the true centre of the plot!

Update: The cool part of all this is that I obtained this image from the Interstellar website showing Dr. Brand’s blackboard:

Note that the metric tensor in the bottom left-hand corner is exactly the Kerr metric I described earlier. It seems that the "quantum data" that is to be obtained from the black hole singularity is actually obtained when TARS falls into the black hole and passes through the ring singularity. What's interesting is that Thorne's depiction here (which he drew, according to the special features of the movie) actually shows where the quantum data would be with respect to the singularity in the above Penrose diagram.

For years now, I have heard this constant story of how it is acceptable for Sikhs to celebrate Diwali as per the Hindu traditions of lighting lamps, etc. Further, Raagis and Bhai Sahibs in Gurdwaras have conflated Bandhi Chorr Diwas with lighting lamps as per Hindu Diwali traditions. They support these ideas with the supposed Vaar of Bhai Gurdas Jee, of which, ironically, they only mention and repeat the first line: "deewaalee dee raath dheevae baaleean". Of course, reading this line alone would suggest that the aforementioned actions are justified. But taking one line completely out of context leads to such conclusions. A full reading of Bhai Gurdas Jee's Vaar on the Diwali matter, which given the timeframe is also a first-hand historical account, suggests that Sikhs are to practice completely the opposite, and that lighting lamps is in fact contrary to Gurmat. The full Vaar's transliteration is below:

Vaars Bhai Gurdaas

diwali dee raath dheevae baaleean
thaarae jaath sanaath a(n)bar bhaaleean
fulaa(n) dhee baagaath chun chun chaaleean
theerathh jaathee jaath nain nihaaleean
har cha(n)dhuree jhaath vasaae ouchaaleean
guramukh sukhafal dhaath shabadh samhaaleean

The essence of this Vaar is in every line after the first. In the third, fourth, and fifth lines, Bhai Gurdas Jee compares those who celebrate Diwali by lighting lamps to those who go on long pilgrimages to find God, or who search for God by worshipping the stars or things in nature, all contrary to Gurmat by a simple reading of Japjee Sahib! Indeed, in the last line Bhai Sahib Jee clearly states that a person of Gurmat does not practice any of these things, which he declares to be temporary and pointless.

So, there you have it. A simple reading of the full Vaar changes the entire context of the “importance” of Diwali in Sikhism. I doubt many Sikhs will read this posting with sincerity, but someone has to speak the truth!

As has been well documented in recent days, there has been great excitement over the recent full and partial solar eclipses, with students, astronomy enthusiasts, and others showing great enthusiasm. However, a vast majority seem to be completely unaware of why solar eclipses are so important. Their most important purpose is that one can directly confirm, as Eddington did in 1919, the validity of Einstein's theory of General Relativity. When the Sun is eclipsed, one can directly observe starlight from behind the Sun being bent, as should happen according to Einstein's theory, namely that light follows the curvature of spacetime. I document here a brief calculation that demonstrates this:

We will assume that the spacetime under consideration is spherically symmetric and static, and so by Birkhoff’s theorem, outside the spherically symmetric body, the solution to Einstein’s equations is the well-known Schwarzschild metric:
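In Schwarzschild coordinates (geometric units G = c = 1), this metric is:

```latex
ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2
       + \left(1 - \frac{2M}{r}\right)^{-1} dr^2
       + r^2 \left(d\theta^2 + \sin^2\theta\, d\phi^2\right).
```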

To determine an equation for the path that light should follow in this spacetime, one writes the associated Lagrangian of geodesics as:

Applying the Euler-Lagrange equations and exploiting the fact that the spacetime is spherically symmetric and static, one obtains the orbit equation for light as:
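Restricting to the equatorial plane θ = π/2 and writing u ≡ 1/r, the standard result (as derived in, e.g., Gron and Hervik) is:

```latex
\frac{d^2 u}{d\phi^2} + u = 3 M u^2 .
```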

This ordinary differential equation is a nightmare to solve!! For example, Mathematica gives:

That is, a complicated function of elliptic integrals. The easier approach is to treat the problem perturbatively, as in the text of Gron and Hervik, which we do here:

Essentially, without getting into too much detail, taking this approach (see Ch. 10 of Gron and Hervik), we obtain the solution to the ODE above as:

where b is the impact parameter. At u = 0, the photon flies out towards radial infinity, and so the deflection angle can be calculated from the relation:

Let us perform a Taylor expansion about \pi/2 to obtain the angle by which an astrophysical body deflects light:
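To leading order this gives the famous weak-field deflection formula:

```latex
\delta\theta \approx \frac{4M}{b} = \frac{4GM}{c^2 b} .
```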

For the Sun, the deflection angle turns out to be approximately
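A quick check with standard constants, taking the impact parameter to be the solar radius (a grazing ray):

```python
import math

# Sketch: weak-field light deflection 4GM/(c^2 b) for a ray grazing the Sun.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
R_sun = 6.957e8        # solar radius, m (impact parameter b)

delta = 4 * G * M_sun / (c**2 * R_sun)      # deflection in radians
arcsec = math.degrees(delta) * 3600         # convert to arcseconds
print(f"deflection = {arcsec:.2f} arcsec")  # about 1.75 arcsec
```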

That is, the curvature of spacetime induced by the Sun causes starlight from behind the sun to be “bent”/deflected by 1.75”. This is precisely what Eddington’s team observed in their 1919 expedition. This test was what finally confirmed Einstein’s theory of General Relativity, and is the true reason why solar eclipses are so important!

In this article, it is stated: “Since Darwin, however, we have come to understand that an entirely natural and undirected process, namely random variation plus natural selection, contains all that is needed to generate extraordinary levels of non-randomness. Living things are indeed wonderfully complex, but altogether within the range of a statistically powerful, entirely mechanical phenomenon.”

The statement that random variation plus natural selection "contains all that is needed to generate extraordinary levels of non-randomness" is factually inaccurate, for it makes the mistake that many reductionists make: assuming that all complexity arises from bottom-up causation alone, while completely ignoring the effects of top-down causation. The reason is as follows. The lower levels of complexity are necessarily governed by quantum-mechanical uncertainties, and it is not clear how these quantum uncertainties transition to a classical state. Mathematically, the uncertainties at the heart of the cited random variation live in a Hilbert space of L^2 (Lebesgue square-integrable) functions, whereas classical systems are described on phase-space manifolds, with their probabilistic structure residing in the cotangent bundle of the manifold. The article is essentially saying that the cotangent bundle determines the phase space, and not the other way around, which is not correct. Further, there remains the unsolved issue of how quantum fluctuations become classical (unless you follow the untestable many-worlds route, which has major problems; see S. D. Hsu, Modern Physics Letters A27: 1230114 (2012) for one interesting comment, and the writings of Sudarsky).

On the other side, top-down causation via cosmology and Einstein's equations seeds the correct conditions for dynamical Darwinian evolution to take place in the first place; for some reason, the author completely leaves this out.

Without question, the author is an expert in evolutionary biology, but I am afraid he has looked at these issues through a very narrow lens, which does not do them full justice, and which is indeed responsible for much of the discomfort with evolutionary theory that is described so accurately and well in the article.

In the meantime, I would humbly suggest that the interested reader look at the following articles, which describe Darwinian evolution in a more complete context, as a function of emergence and complexity through the physics that underlies biology. One should also see the work of Denis Noble, http://musicoflife.co.uk, who advocates a dynamical-systems view of biological systems, which I personally believe to be correct, as it is much more mathematically and physically sound than standard evolutionary theory. This YouTube video of a lecture by the noted cosmologist GFR Ellis also sums up the problem with the reductionist view of evolutionary biology: http://youtu.be/nEhTkF3eG8Q

Title: Laws, Causation and Dynamics at Different Levels
Author: Jeremy Butterfield
Publication: eprint arXiv:1406.4732, June 2014
Comment: 29 pages, 3 figures; Interface Focus (Royal Society London), volume 2, 2012, pp. 101-114; doi:10.1098/rsfs.2011.0052

Abstract: I have two main aims. The first is general, and more philosophical (Section 2). The second is specific, and more closely related to physics (Sections 3 and 4). The first aim is to state my general views about laws and causation at different 'levels'. The main task is to understand how the higher levels sustain notions of law and causation that 'ride free' of reductions to the lower level or levels. I endeavour to relate my views to those of other symposiasts. The second aim is to give a framework for describing dynamics at different levels, emphasising how the various levels' dynamics can mesh or fail to mesh. This framework is essentially that of elementary dynamical systems theory. The main idea will be, for simplicity, to work with just two levels, dubbed 'micro' and 'macro', which are related by coarse-graining. I use this framework to describe, in part, the first four of Ellis' five types of top-down causation.

Ellis, George F. R. “The arrow of time and the nature of spacetime.” arXiv:1302.7291 (February 2013)
Abstract

This paper extends the work of a previous paper [arXiv:1208.2611] on the flow of time, to consider the origin of the arrow of time. It proposes that a `past condition’ cascades down from cosmological to micro scales, being realized in many microstructures and setting the arrow of time at the quantum level by top-down causation. This physics arrow of time then propagates up, through underlying emergence of higher level structures, to geology, astronomy, engineering, and biology. The appropriate space-time picture to view all this is an emergent block universe (`EBU’), that recognizes the way the present is different from both the past and the future. This essential difference is the ultimate reason the arrow of time has to be the way it is.

Ellis, George F. R. “Recognising Top-Down Causation.” arXiv:1212.2275 (December 2012). Second prize in the FQXi essay competition
Abstract

One of the basic assumptions implicit in the way physics is usually done is that all causation flows in a bottom up fashion, from micro to macro scales. However this is wrong in many cases in biology, and in particular in the way the brain functions. Here I make the case that it is also wrong in the case of digital computers – the paradigm of mechanistic algorithmic causation – and in many cases in physics, ranging from the origin of the arrow of time to the process of state vector preparation. I consider some examples from classical physics, as well as the case of digital computers, and then explain why this is possible without contradicting the causal powers of the underlying microphysics. Understanding the emergence of genuine complexity out of the underlying physics depends on recognising this kind of causation.

Walker, Sara Imari; Cisneros, Luis; Davies, Paul C. W. “Evolutionary Transitions and Top-Down Causation.” arXiv:1207.4808 (July 2012); Proceedings of Artificial Life XIII (2012), pp. 283–290
Abstract

Top-down causation has been suggested to occur at all scales of biological organization as a mechanism for explaining the hierarchy of structure and causation in living systems. Here we propose that a transition from bottom-up to top-down causation — mediated by a reversal in the flow of information from lower to higher levels of organization, to that from higher to lower levels of organization — is a driving force for most major evolutionary transitions. We suggest that many major evolutionary transitions might therefore be marked by a transition in causal structure. We use logistic growth as a toy model for demonstrating how such a transition can drive the emergence of collective behavior in replicative systems. We then outline how this scenario may have played out in those major evolutionary transitions in which new, higher levels of organization emerged, and propose possible methods via which our hypothesis might be tested.

Ellis, George F. R. “On the limits of quantum theory: Contextuality and the quantum-classical cut.” Annals of Physics 327(7) (July 2012), pp. 1890–1932. doi:10.1016/j.aop.2012.05.002
Abstract

This paper is based on four assumptions: 1. Physical reality is made of linearly behaving components combined in non-linear ways. 2. Higher level behaviour emerges from this lower level structure. 3. The way the lower level elements behaves depends on the context in which they are embedded. 4. Quantum theory applies to the lower level entities. An implication is that higher level effective laws, based on the outcomes of non-linear combinations of lower level linear interactions, will generically not be unitary; hence the applicability of quantum theory at higher levels is strictly limited. This leads to the view that both state vector preparation and the quantum measurement process are crucially based on top-down causal effects, and helps provide criteria for the Heisenberg cut that challenge some views on Schrödinger’s cat.

Juarrero, Alicia. “Top-Down Causation and Autonomy in Complex Systems.” In Downward Causation and the Neurobiology of Free Will (Understanding Complex Systems), Springer Berlin Heidelberg, 2009, p. 83. ISBN 978-3-642-03204-2. doi:10.1007/978-3-642-03205-9_5
Abstract

Evolutionary evidence shows that complex dynamical systems become increasingly self-directed and decoupled from merely energetic forces over time. In this paper I analyze these transformations, concentrating on changes in the type of top-down causation that characterizes such self-organized and autopoietic processes. Specifically, I show that the top-down selection criteria of these systems make some of them autonomous, and that because once evolution reaches humans the criteria according to which voluntary actions are selected are semantic and symbolic – and can be self-consciously chosen – human self-direction constitutes a form of strong autonomy that can arguably be considered “free will.”

Ellis, George F. R. “Top-Down Causation and the Human Brain.” In Downward Causation and the Neurobiology of Free Will (Understanding Complex Systems), Springer Berlin Heidelberg, 2009, p. 63. ISBN 978-3-642-03204-2. doi:10.1007/978-3-642-03205-9_4
Abstract

A reliable understanding of the nature of causation is the core feature of science. In this paper the concept of top-down causation in the hierarchy of structure and causation is examined in depth. Five different classes of top-down causation are identified and illustrated with real-world examples. They are (1) algorithmic top-down causation; (2) top-down causation via nonadaptive information control; (3) top-down causation via adaptive selection; (4) top-down causation via adaptive information control; and (5) intelligent top-down causation (i.e., the effect of the human mind on the physical world). Recognizing these forms of causation implies that other kinds of causes than physical and chemical interactions are effective in the real world. Because of the existence of random processes at the bottom, there is sufficient causal slack at the physical level to allow all these kinds of causation to occur without violation of physical causation. That they do indeed occur is indicated by many kinds of evidence. Each such kind of causation takes place in particular in the human brain, as is indicated by specific examples.

Auletta, G.; Ellis, G. F. R.; Jaeger, L. “Top-Down Causation by Information Control: From a Philosophical Problem to a Scientific Research Program.” arXiv:0710.4235 (October 2007)
Abstract

It has been claimed that different types of causes must be considered in biological systems, including top-down as well as same-level and bottom-up causation, thus enabling the top levels to be causally efficacious in their own right. To clarify this issue, important distinctions between information and signs are introduced here and the concepts of information control and functional equivalence classes in those systems are rigorously defined and used to characterise when top down causation by feedback control happens, in a way that is testable. The causally significant elements we consider are equivalence classes of lower level processes, realised in biological systems through different operations having the same outcome within the context of information control and networks.

Ellis, George F. R. “Physics and the Real World.” Foundations of Physics 36(2) (February 2006), pp. 227–262. doi:10.1007/s10701-005-9016-x
Abstract

Physics and chemistry underlie the nature of all the world around us, including human brains. Consequently some suggest that in causal terms, physics is all there is. However, we live in an environment dominated by objects embodying the outcomes of intentional design (buildings, computers, teaspoons). The present day subject of physics has nothing to say about the intentionality resulting in existence of such objects, even though this intentionality is clearly causally effective. This paper examines the claim that the underlying physics uniquely causally determines what happens, even though we cannot predict the outcome. It suggests that what occurs is the contextual emergence of complexity: the higher levels in the hierarchy of complexity have autonomous causal powers, functionally independent of lower level processes. This is possible because top-down causation takes place as well as bottom-up action, with higher level contexts determining the outcome of lower level functioning and even modifying the nature of lower level constituents. Stored information plays a key role, resulting in non-linear dynamics that is non-local in space and time. Brain functioning is causally affected by abstractions such as the value of money and the theory of the laser. These are realised as brain states in individuals, but are not equivalent to them. Consequently physics per se cannot causally determine the outcome of human creativity, rather it creates the possibility space allowing human intelligence to function autonomously. The challenge to physics is to develop a realistic description of causality in truly complex hierarchical structures, with top-down causation and memory effects allowing autonomous higher levels of order to emerge with genuine causal powers.

Testa, Bernard; Kier, Lemont B. “Emergence and Dissolvence in the Self-organisation of Complex Systems.” Entropy 2(1) (March 2000), pp. 1–25. doi:10.3390/e2010001
Abstract

The formation of complex systems is accompanied by the emergence of properties that are non-existent in the components. But what of the properties and behaviour of such components caught up in the formation of a system of a higher level of complexity? In this essay, we use a large variety of examples, from molecules to organisms and beyond, to show that systems merging into a complex system of higher order experience constraints with a partial loss of choice, options and independence. In other words, emergence in a complex system often implies reduction in the number of probable states of its components, a phenomenon we term dissolvence. This is seen in atoms when they merge to form molecules, in biomolecules when they form macromolecules such as proteins, and in macromolecules when they form aggregates such as molecular machines or membranes. At higher biological levels, dissolvence occurs for example in components of cells (e.g. organelles), tissues (cells), organs (tissues), organisms (organs) and societies (individuals). Far from being a destruction, dissolvence is understood here as a creative process in which information is generated to fuel the process of self-organisation of complex systems, allowing them to appear and evolve to higher states of organisation and emergence. Questions are raised about the relationship of dissolvence and adaptability; the interrelation with top-down causation; the reversibility of dissolvence; and the connection between dissolvence and anticipation.