Over the past several years, the *advanced* metric known as Offensive Rating has become the standard way of measuring a basketball team’s offensive efficiency. Broadly speaking, it is defined as points scored per 100 possessions. Specifically, for **teams**, it is defined as (see https://www.basketball-reference.com/about/ratings.html, https://www.nbastuffer.com/analytics101/possession/, and https://fansided.com/2015/12/21/nylon-calculus-101-possessions/):

There is a significant issue with this definition, as I now demonstrate. Computing the partial derivative of this expression with respect to OppORB, we obtain:

Since the denominator is always positive, we need only examine the numerator. The numerator is always negative, both because of physical constraints (points and rebounds cannot be negative!) and because **OppFG < OppFGA**, which makes intuitive sense. It could only be positive if **OppFG > OppFGA**, which logically cannot happen. Therefore, the numerator is always negative (except in the rare case when OppFG = OppFGA, of course), which means that the entire partial derivative is positive.

This means that a team’s offensive rating / offensive efficiency increases as its opponent’s offensive rebounds increase. Intuitively, this should not be the case. If your opponent has a high number of offensive rebounds, this should give you fewer possessions and put pressure on you to score, thus resulting in fewer points overall. The problem is that the more general definition of offensive efficiency is 100*(Points Scored)/(Possessions), which is obviously maximized when possessions are minimized. The more detailed definition of possessions, however, implies that this minimization occurs in part by maximizing opponent offensive rebounds, which intuitively should not be the case.
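This effect can be checked numerically. Below is a minimal sketch using the standard Basketball-Reference team possession formula (the one referenced above), with purely hypothetical box-score totals; holding everything else fixed and adding opponent offensive rebounds lowers the possession estimate and therefore raises the computed offensive rating:

```python
def possessions(fga, fta, orb, fg, tov,
                opp_fga, opp_fta, opp_orb, opp_fg, opp_tov,
                drb, opp_drb):
    """Basketball-Reference team possession estimate (average of both halves)."""
    team = fga + 0.4 * fta - 1.07 * (orb / (orb + opp_drb)) * (fga - fg) + tov
    opp = (opp_fga + 0.4 * opp_fta
           - 1.07 * (opp_orb / (opp_orb + drb)) * (opp_fga - opp_fg) + opp_tov)
    return 0.5 * (team + opp)

def offensive_rating(points, poss):
    return 100.0 * points / poss

# Hypothetical season-like totals (illustrative only)
base = dict(fga=7000, fta=1800, orb=850, fg=3200, tov=1100,
            opp_fga=7100, opp_fta=1700, opp_orb=900, opp_fg=3150, opp_tov=1050,
            drb=2700, opp_drb=2650)
points = 8600

ortg_low = offensive_rating(points, possessions(**base))
# Same team, but the opponent grabs 200 more offensive rebounds
base_hi = dict(base, opp_orb=base["opp_orb"] + 200)
ortg_high = offensive_rating(points, possessions(**base_hi))
print(ortg_low, ortg_high)  # the rating rises as OppORB rises
```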



I decided to try to analyze this statement quantitatively. Indeed, one can calculate the probability that an illegal immigrant will commit a crime *within* the United States as follows. Let us denote crime (or criminal) by *C* and illegal immigrant by *ii*. Then, by Bayes’ theorem, we have:

It is quite easy to find data associated with the various factors in this formula. For example, one finds that

Putting all of this together, we find that:

That is, the probability that an illegal immigrant will commit a crime (of any type) while in the United States is a very low 11.35%.

Therefore, Trump’s claim of “tremendous amounts of crime” being brought to the United States by illegal immigrants is incorrect.
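The computation has the following simple shape. This is a sketch with placeholder probabilities only; the actual input values, drawn from the sources listed below, are not reproduced here:

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Illustrative placeholder values only -- NOT the figures used in the article:
#   first argument : P(ii | C), fraction of criminals who are illegal immigrants
#   second argument: P(C), overall probability of committing a crime
#   third argument : P(ii), fraction of the population that are illegal immigrants
p_c_given_ii = bayes(0.20, 0.02, 0.035)
print(round(p_c_given_ii, 4))
```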

Note that the numerical factors used above were obtained from:

- https://www.justice.gov/opa/pr/departments-justice-and-homeland-security-release-data-incarcerated-aliens-94-percent-all
- https://www.washingtontimes.com/news/2017/aug/1/immigrants-22-percent-federal-prison-population/
- https://en.wikipedia.org/wiki/Incarceration_in_the_United_States


Looking at this season’s data, let us examine two things. The first is the number of points a team’s opponent is expected to score for every three-point shot the team attempts. Remarkably, we discovered that this number obeys a lognormal distribution:

This means that for every three point shot your team attempts, the opposing team is expected to score

which comes out to about 3.7495 points. So, for every 3PA by a team, the opponent is expected to score more than 3 points, based on the most recent NBA data. Keeping that in mind, by integrating the density above we also see that there is a 99.99% probability that the opponent will score more than 2 points for every 3PA by a team, and a 93.693% probability that the opponent will score more than 3 points for every single 3PA by the other team.
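For reference, the expectation and tail probabilities of a lognormal distribution have closed forms, so these quantities are straightforward to compute once the distribution is fitted. A sketch with hypothetical parameters μ and σ (the fitted values from the actual data are not reproduced here):

```python
import math

def lognormal_mean(mu, sigma):
    """E[X] for X ~ LogNormal(mu, sigma): exp(mu + sigma^2 / 2)."""
    return math.exp(mu + sigma**2 / 2)

def lognormal_sf(x, mu, sigma):
    """P(X > x) for X ~ LogNormal(mu, sigma), via the Gaussian tail."""
    z = (math.log(x) - mu) / (sigma * math.sqrt(2))
    return 0.5 * math.erfc(z)

# Hypothetical fitted parameters (illustrative only)
mu, sigma = 1.3, 0.2
print(lognormal_mean(mu, sigma))     # expected opponent points per 3PA
print(lognormal_sf(3.0, mu, sigma))  # P(opponent scores more than 3 points per 3PA)
```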

This would suggest a significant breakdown of defensive emphasis in the “modern-day” NBA where evidently teams are just interested in playing shot-for-shot basketball, but in a very risky way that is not optimal.

The work so far covered just three-point attempts, but what are the effects of *missing* a three-point shot? The number of opponent points per three-point miss also, remarkably, obeys a lognormal distribution:

Therefore, for every three-point shot your team misses, the opposing team is expected to score:

which comes out to about 5.87345 points. This identifies a remarkable risk to a team missing a three-point shot. This computation shows that one three-point shot miss corresponds to about 6 points for the opposing team! Looking at probabilities by integrating the density function above, one can show that there is a 99.9999% probability that the opposing team would score more than two points for every three-point miss, a 99.998% probability that the opposing team would score more than three points for every three-point miss, a 99.583% probability that the opposing team would score more than four points for every three-point miss, and so on.

What these calculations demonstrate is that gearing a team’s offense to focus on attempting three-point shots is remarkably risky, especially if a team misses a three-point shot. Given that the average number of three-point attempts has increased over the last several years, while the average number of makes has stayed relatively the same (see this older article: https://relativitydigest.com/2016/05/26/the-three-point-shot-myth-continued/), teams are exposing themselves to greater and greater risk of losing games by adopting this style of play.


From the aforementioned paper, one concludes that the two most important factors in determining whether a team makes the playoffs are its opponent assists per game and opponent two-point shots made per game. Based on that, I came up with the following equation:

A plot of this equation is as follows:

A contour plot is perhaps more illuminating:

One can see from this contour plot that teams have the highest probabilities of making the playoffs when their opponent 2-point shots and opponent assists are both around 20. In general, we also see that while a team can allow more opponent 2-point shots, having a low number of opponent assists per game is evidently the most important factor.

*Using this equation, I was able to classify 71% of playoff teams correctly from the last 16 years of NBA data. Even though the playoff classifier developed in the paper mentioned above is more accurate in general, those methods are non-parametric, so it is difficult to obtain an equation. Obtaining an equation, as we have done here, can be extremely useful for modelling purposes and for understanding the nature of the probabilities that decide whether a certain team will make the playoffs in a given season. (Also note that we use the convention of 0.50 as the threshold probability, so a probability output greater than 0.5 is classified as the team making the playoffs.)*
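Since the fitted coefficients are not reproduced above, here is a sketch of the general shape of such an equation: a logistic model in opponent assists and opponent two-point makes per game, with purely placeholder coefficients, classified at the 0.50 threshold.

```python
import math

def playoff_probability(opp_ast, opp_2pm, b0=30.0, b_ast=-0.7, b_2pm=-0.6):
    """Logistic-model sketch: P(playoffs) as a function of opponent assists
    and opponent 2-point makes per game. Coefficients are placeholders,
    NOT the fitted values from the article."""
    z = b0 + b_ast * opp_ast + b_2pm * opp_2pm
    return 1.0 / (1.0 + math.exp(-z))

def makes_playoffs(opp_ast, opp_2pm, threshold=0.5):
    return playoff_probability(opp_ast, opp_2pm) > threshold

print(playoff_probability(20, 20))  # stingy defense -> high probability
print(playoff_probability(28, 35))  # porous defense -> low probability
```

With negative coefficients on both defensive stats, probability falls as a team allows more opponent assists or more opponent two-point makes, matching the contour plot’s qualitative behavior.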

These lectures start off with manifold theory, and end with examples in biology, game theory, and general relativity/cosmology.

It seems that one cannot turn on ESPN or any YouTube channel nowadays without encountering the ongoing debate of whether Michael Jordan is better than LeBron, what would happen if Michael Jordan played in today’s NBA, and so on. However, I have not seen a single scientific approach to this question. Admittedly, it is sort of an impossible question to answer, but, using data science, I will try.

From a data science perspective, it only makes sense to look at Michael Jordan’s performance in a single season, and try to predict based on that season how he would perform in the most recent NBA season. That being said, let’s look at Michael Jordan’s game-to-game performance in the 1995-1996 NBA season when the Bulls went 72-10.

Using neural networks and Garson’s algorithm to regress against Michael Jordan’s per-game point total, we note the following:

One can see from this variable importance plot that Michael’s points in a given game were most positively associated with teams that committed a high number of turnovers, followed by teams that made a lot of 3-point shots. Interestingly, there was no strong negative factor on Michael’s points in a given game.
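Garson’s algorithm derives relative variable importance from the weights of a single-hidden-layer network: each input’s share of each hidden unit’s weight is scaled by that unit’s output weight and then aggregated. A minimal sketch, using a toy weight matrix rather than the fitted network from the article:

```python
import numpy as np

def garson_importance(W_in, w_out):
    """Garson's algorithm for a single-hidden-layer network.

    W_in : (n_inputs, n_hidden) input-to-hidden weights
    w_out: (n_hidden,)          hidden-to-output weights
    Returns relative importances summing to 1.
    """
    # Contribution of each input through each hidden unit
    c = np.abs(W_in) * np.abs(w_out)          # broadcasts over hidden units
    # Normalize within each hidden unit, then aggregate across hidden units
    r = c / c.sum(axis=0, keepdims=True)
    importance = r.sum(axis=1)
    return importance / importance.sum()

# Toy example: 3 inputs, 2 hidden units (illustrative weights only)
W_in = np.array([[0.8, 0.1],
                 [0.2, 0.9],
                 [0.1, 0.1]])
w_out = np.array([1.0, 0.5])
print(garson_importance(W_in, w_out))
```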

Given this information, and the per-game league averages of the 2017 season, we used this neural network to make a prediction on how many points Michael would average in today’s season:

Michael Jordan: 2017 NBA Season Prediction: 32.91 Points / Game (+/- 6.9)

It is interesting to note that Michael averaged 30.4 points/game in the 1995-1996 NBA season. We therefore conclude that the 1995-1996 Michael would post a higher points-per-game average if he played in today’s NBA.

As an aside, a plot of the neural network used to generate these variable importance plots and predictions is as follows:

What about the reverse question? What if the 2016-2017 LeBron James played in the 1995-1996 NBA? What would happen to his per-game point average? Using the same methodology as above, we used neural networks in combination with Garson’s algorithm to obtain a variable importance plot for LeBron James’ per-game point totals:

One sees from this plot that LeBron’s points in each game were most positively impacted by teams that predominantly committed personal fouls, followed by teams that got a lot of offensive rebounds. There were no predominantly strong negative factors affecting LeBron’s ability to score.

Using this neural network model, we then predicted how many points per game LeBron would score if he played in the 1995-1996 NBA season:

LeBron James: 1995-1996 NBA Season Prediction: 18.81 Points / Game (+/- 4.796)

This neural network model predicts that LeBron James would average 18.81 points/game if he played in the 1995-1996 NBA season, a drop from the 26.4 points/game he averaged in the most recent NBA season.

Therefore, at least according to this neural network model, LeBron’s points per game would decrease if he played in the 1995-1996 season, while Michael’s would increase slightly if he played in the 2016-2017 season.

The Golden State Warriors have posed quite the conundrum for opposing teams. They are quick, have a spectacular ability to move the ball, and play suffocating defense. Given their play in the playoffs thus far, all of these points have been exemplified even more, to the point where it seems that they are unbeatable.

I wanted to take a somewhat simplified approach and see if opposing teams are missing something. That is, is there some weakness in their play that opposing teams can exploit, a “weakness in Helm’s Deep”?

The most obvious place to start, from a data-science point of view, seemed to be the Warriors’ shot data. From here, I extracted the x and y coordinates of each shot and recorded a response variable of “made” or “missed” in a table, so that the coordinates were the predictor variables and the shot classification (made/missed) was the response variable. Altogether, we had 7104 observations. Splitting this dataset into a 70% training set and a 30% test set, I tried the following algorithms, recording the percentage of correctly classified observations:

| Algorithm | % of Correctly Predicted Observations |
| --- | --- |
| Logistic Regression | 56.43 |
| Gradient Boosted Decision Trees | 62.62 |
| Random Forests | 58.54 |
| Neural Networks with Entropy Fitting | 62.47 |
| Naive Bayes Classification with Kernel Density Estimation | 57.32 |

One sees that gradient boosted decision trees had the best performance, correctly classifying 62.62% of the test observations. Given how noisy the data is, this is not bad, and much better than expected. I should also mention that these numbers were obtained after tuning the models via cross-validation for optimal parameters.
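The modeling pipeline can be sketched as follows. This uses synthetic shot coordinates (the raw extracted data is not reproduced here) and scikit-learn’s default boosting parameters rather than the cross-validated ones:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for the (x, y, made/missed) observations:
# shots closer to the basket (at the origin) are more likely to be made.
n = 2000
X = rng.uniform(-25, 25, size=(n, 2))   # court coordinates (illustrative units)
dist = np.hypot(X[:, 0], X[:, 1])
p_make = 1.0 / (1.0 + np.exp(0.15 * (dist - 12)))
y = (rng.uniform(0, 1, n) < p_make).astype(int)

# 70/30 train/test split, as in the article
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)

# Probability surface over a grid of court locations, as used for the contour plot
xs, ys = np.meshgrid(np.linspace(-25, 25, 50), np.linspace(-25, 25, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
probs = model.predict_proba(grid)[:, 1].reshape(xs.shape)
print(acc)
```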

Using the gradient boosted decision tree model, we made a set of predictions for a vast number of (x,y)-coordinates for basketball court. We obtained the following contour plot:

Overlaying this on top of the basketball court diagram, we got:

The contour plot levels denote the **probabilities** that the GSW will make a shot from a given (x,y) location on the court. As a sanity check, the lowest probabilities seem to be close to the half-court line and beyond the three-point line. The highest probabilities are surprisingly along very specific areas on the court: very close to the basket, the line from the basket to the left corner (extending up slightly), and a very narrow line extending from the basket to the right corner. Interestingly, the probabilities are low on the right side of the basket, specifically:

A map showing the probabilities more explicitly is as follows (although, upon uploading it, I realized it is a bit hard to read; I will re-upload a clearer version soon!):

In conclusion, it seems that, at least according to a first look at the data, the Warriors do indeed have several “weak spots” in their offense that opponents should certainly look to exploit by designing defensive schemes that force them to take shots in the aforementioned low-probability zones. As for future improvements, I think it would be interesting to add as predictor variables things like geographic location, crowd sizes, team opponent strengths, etc… I will look into making these improvements in the near future.