Basketball Machine Learning Paper Updated 

I have now made a significant update to my applied machine learning paper on predicting patterns among NBA playoff and championship teams, which can be accessed here: http://arxiv.org/abs/1604.05266.

The Optimal Strategy for the Knicks

In a previous article, I showed how one could use data in combination with advanced probability techniques to determine the optimal shot / court positions for LeBron James. I decided to use this algorithm on the Knicks’ starting 5, and obtained the following joint probability density contour plots:

One sees that the Knicks’ offensive strategy is optimal if and only if players get shots as close to the basket as possible. If this is the case, the players have a high probability of making shots even if defenders are playing them tightly. This means that the Knicks would be best served by driving into the paint, posting up, and Porzingis NOT attempting a multitude of three-point shots.

By the way, a lot of people are convinced nowadays that someone like Porzingis attempting threes is a sign of a good offense, as it is an optimal way to space the floor. I am not convinced of this. Spacing the floor translates geometrically into a multi-objective nonlinear optimization problem. In particular, let (x_i, y_i) denote the (x, y)-coordinates of player i on the floor. Spreading the floor means one must simultaneously maximize each element of the following distance matrix:

D = \begin{pmatrix} 0 & d_{12} & d_{13} & d_{14} & d_{15} \\ d_{12} & 0 & d_{23} & d_{24} & d_{25} \\ d_{13} & d_{23} & 0 & d_{34} & d_{35} \\ d_{14} & d_{24} & d_{34} & 0 & d_{45} \\ d_{15} & d_{25} & d_{35} & d_{45} & 0 \end{pmatrix}, \qquad d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2},

subject to -14 \leq x_i \leq 14, 0 \leq y_i \leq 23.75. While a player attempting 3-point shots may be one way to solve this problem, I am not convinced that it is the unique solution to this optimization problem. In fact, I am convinced that there are multiple solutions to this optimization problem.

The problem is slightly simpler once one realizes that the matrix above is symmetric with a zero diagonal, so that there are only \binom{5}{2} = 10 independent components, namely the pairwise distances d_{ij} with i < j.
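To illustrate why such a solution need not be unique, here is a minimal sketch (Python with NumPy/SciPy; not anything from the original article) that treats spreading the floor as maximizing the smallest pairwise distance among the five players, subject to the box constraints above. Different random starting configurations typically converge to different, equally spread arrangements.

```python
# A minimal sketch: maximize the minimum pairwise distance among 5 players
# inside the box -14 <= x_i <= 14, 0 <= y_i <= 23.75.
import numpy as np
from scipy.optimize import minimize

def min_pairwise_distance(flat_xy):
    """Smallest distance among the 5 players; flat_xy = [x1, y1, ..., x5, y5]."""
    pts = flat_xy.reshape(5, 2)
    return min(np.linalg.norm(pts[i] - pts[j])
               for i in range(5) for j in range(i + 1, 5))

rng = np.random.default_rng(42)
x0 = rng.uniform([-14.0, 0.0], [14.0, 23.75], size=(5, 2)).ravel()   # random starting positions
bounds = [(-14.0, 14.0), (0.0, 23.75)] * 5                           # (x_i, y_i) box constraints

res = minimize(lambda z: -min_pairwise_distance(z), x0, bounds=bounds, method="L-BFGS-B")
print(res.x.reshape(5, 2))            # one candidate spacing of the 5 players
print(min_pairwise_distance(res.x))   # the minimum separation it achieves
```

Since the objective (a minimum of smooth distances) is non-smooth, in practice one would run this from many random starts; the fact that many different arrangements achieve essentially the same spread is exactly the non-uniqueness argued for above.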

Breaking Down the 2015-2016 NBA Season

In this article, I will use data science / machine learning methodologies to break down the real factors separating playoff teams from non-playoff teams. In particular, I used data from Basketball-Reference.com to associate 44 predictor variables with each team:

“FG” “FGA” “FG.” “X3P” “X3PA” “X3P.” “X2P” “X2PA” “X2P.” “FT” “FTA” “FT.” “ORB” “DRB” “TRB” “AST” “STL” “BLK” “TOV” “PF” “PTS” “PS.G” “oFG” “oFGA” “oFG.” “o3P” “o3PA” “o3P.” “o2P” “o2PA” “o2P.” “oFT” “oFTA” “oFT.” “oORB” “oDRB” “oTRB” “oAST” “oSTL” “oBLK” “oTOV” “oPF” “oPTS” “oPS.G”

where the letter ‘o’ prefixed to the last 22 predictor variables indicates an opponent (defensive) statistic.

Using principal components analysis (PCA), I was able to project this 44-dimensional data set onto a 5-dimensional one: the first 5 principal components were found to explain 85% of the variance.
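Here is a minimal sketch of such a projection (Python/scikit-learn, with a hypothetical file name; not the code used for the actual analysis):

```python
# A minimal sketch: project the 44 standardized team predictors onto their
# first 5 principal components and check the variance explained.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("team_stats_2015_2016.csv")     # hypothetical file: one row per team, the 44 columns above
X = StandardScaler().fit_transform(df.values)    # standardize each predictor

pca = PCA(n_components=5)
scores = pca.fit_transform(X)                    # the 5-D representation of each team
print(pca.explained_variance_ratio_.cumsum())    # cumulative variance explained by PCs 1-5
```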

Here are the various biplots: 


In these plots, the teams are grouped according to whether they made the playoffs or not. 

One sees from this biplot of the first two principal components that the dominant component along the first PC is 3-point attempts, while the dominant component along the second PC is opponent points. CLE and TOR have a high negative score along the second PC, indicating strong defensive performance. Indeed, one suspects that the final separating factor that led CLE to the championship was their defensive play, as opposed to 3-point shooting, which, all in all, didn’t do GSW any favours. This is in line with some of my previous analyses.

Optimal Positions for NBA Players

I was thinking about how one can use the NBA’s new SportVU system to figure out optimal positions for players on the court. One of the interesting things about the SportVU system is that it tracks player (x, y) coordinates on the court. Presumably, it also keeps track of whether a player located at (x, y) makes or misses a shot. Let us denote a player making a shot by 1 and a player missing a shot by 0. Then one will essentially have data of the form (x, y, \text{1/0}).

One can then use a logistic regression to determine the probability that a player at position (x,y) will make a shot:

p(x,y) = \frac{\exp\left(\beta_0 + \beta_1 x + \beta_2 y\right)}{1 +\exp\left(\beta_0 + \beta_1 x + \beta_2 y\right)}

The main idea is that the parameters \beta_0, \beta_1, \beta_2 uniquely characterize a given player’s probability of making a shot.
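As a rough illustration of the fitting step, here is a minimal sketch (Python/scikit-learn on synthetic shot records; the basket location and make probabilities are invented for the example, and this is not SportVU data or any official pipeline):

```python
# A minimal sketch: fit the logistic model p(x, y) from (x, y, made) shot records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
xy = rng.uniform([0.0, 0.0], [28.0, 47.0], size=(1000, 2))   # shot locations within the court bounds used below
# Hypothetical ground truth: shots taken closer to an assumed basket at (14, 0) go in more often.
dist = np.linalg.norm(xy - np.array([14.0, 0.0]), axis=1)
made = (rng.random(1000) < 1.0 / (1.0 + np.exp(0.15 * (dist - 20.0)))).astype(int)

model = LogisticRegression().fit(xy, made)
beta0 = model.intercept_[0]
beta1, beta2 = model.coef_[0]
print(beta0, beta1, beta2)   # the (beta_0, beta_1, beta_2) signature of this simulated player
```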

As a coaching staff, from an offensive perspective, let us say we wish to position players so that they have a very high probability of making a shot, say, for demonstration purposes, 99%. This means we must solve the following problem:

\frac{\exp\left(\beta_0 + \beta_1 x + \beta_2 y\right)}{1 +\exp\left(\beta_0 + \beta_1 x + \beta_2 y\right)} = 0.99

\text{s.t. } 0 \leq x \leq 28, \quad 0 \leq y \leq 47

(The constraints are determined here by the x-y dimensions of a standard NBA court).
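For clarity, the constant 4.59512 appearing in the solutions below is just the log-odds of 0.99: setting p(x, y) = 0.99 is equivalent to the linear equation

\beta_0 + \beta_1 x + \beta_2 y = \ln\left(\frac{0.99}{1 - 0.99}\right) = \ln 99 \approx 4.59512,

so the solutions describe this line intersected with the court rectangle, with the different cases below corresponding to different conditions on \beta_1, \beta_2, and y.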

This has the following solutions:

x = \frac{4.59512 - \beta_0 - \beta_2 y}{\beta_1}, \quad y \geq \frac{4.59512 - \beta_0 - 28 \beta_1}{\beta_2}

with the following conditions:

[Conditions on the parameters \beta_0, \beta_1, \beta_2 (image).]

One can also have:

x = \frac{4.59512 - \beta_0 - \beta_2 y}{\beta_1}, \quad y \leq 47

with the following conditions:

[Conditions on the parameters \beta_0, \beta_1, \beta_2 (image).]

Another solution is:

x = \frac{4.59512 - \beta_0 - \beta_2 y}{\beta_1}

with the following conditions:

[Conditions on the parameters \beta_0, \beta_1, \beta_2 (image).]

The fourth possible solution is:

x = \frac{4.59512 - \beta_0 - \beta_2 y}{\beta_1}

with the following conditions:

[Conditions on the parameters \beta_0, \beta_1, \beta_2 (image).]

In practice, it should be noted that it is unlikely for a player to have a 99% probability of making a shot.

To put this example in more practical terms, I generated some random data (1000 points) for a player, in terms of (x, y) coordinates and whether he made a shot from that location or not. The following scatter plot shows the result of this simulation:

[Scatter plot of the simulated shot locations, coloured by make/miss.]

In this plot, red dots indicate shots the player made (a response of 1) at the given (x, y) coordinates, while purple dots indicate shots the player missed (a response of 0).

Performing a logistic regression on this data, we obtain that \beta_0 = 0, \beta_1 = 0.00066876, \beta_2 = -0.00210949.

Using the equations above, we see that this player has a maximum probability of 58.7149 \% of making a shot from a location of (x,y) = (0,23), and a minimum probability of 38.45 \% of making a shot from a location of (x,y) = (28,0).
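More generally, one can scan the fitted surface numerically to find the most and least favourable spots on the court. Here is a minimal sketch (Python, with purely illustrative coefficient values rather than the fit above):

```python
# A minimal sketch: scan the court for the extremes of a fitted shot-probability surface.
import numpy as np

def shot_probability(x, y, b0, b1, b2):
    """Logistic model p(x, y) defined in the section above."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x + b2 * y)))

b0, b1, b2 = -0.5, 0.03, -0.02                      # hypothetical values; substitute a real fit
xs, ys = np.meshgrid(np.linspace(0, 28, 281), np.linspace(0, 47, 471))
p = shot_probability(xs, ys, b0, b1, b2)

i_max = np.unravel_index(p.argmax(), p.shape)
i_min = np.unravel_index(p.argmin(), p.shape)
print("max p = %.4f at (x, y) = (%.1f, %.1f)" % (p[i_max], xs[i_max], ys[i_max]))
print("min p = %.4f at (x, y) = (%.1f, %.1f)" % (p[i_min], xs[i_min], ys[i_min]))
```

Because the exponent \beta_0 + \beta_1 x + \beta_2 y is linear in x and y, these extremes always lie on the boundary of the court rectangle, so a coarse grid is sufficient.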

Basketball Paper Update

Everyone by now knows about this paper I wrote a few months ago: http://arxiv.org/abs/1604.05266

Using data science / machine learning methodologies, it basically showed that the most important factors in characterizing a team’s playoff eligibility are the opponent field goal percentage and the opponent points per game. This suggests that defensive factors, as opposed to offensive factors, are the most important characteristics shared among NBA playoff teams. It was also shown that championship teams must have very strong defensive characteristics, in particular, strong perimeter defense in combination with an effective half-court offense that generates high-percentage two-point shots. A key part of this offensive strategy must also be the ability to draw fouls.

Some people have commented that, despite this, teams that frequently attempt three-point shots can still be considered to have an efficient offense, as doing so leads to better rebounding, floor spacing, and higher-percentage shots. We show below that this is not true. Looking at all NBA teams over the last 16 seasons (using the same data we used in the paper), we performed a correlation analysis between an individual NBA team’s 3-point attempts per game and other relevant variables, and discovered:


One sees that there is very little correlation between a team’s 3-point attempts per game and its 2-point percentage, free throws, free throw attempts, and offensive rebounds. In fact, at best, there is a moderate anti-correlation between 3-point attempts per game and a team’s 2-point attempts per game.
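Here is a minimal sketch of the kind of correlation computation involved (pandas, with a hypothetical file name; the column abbreviations follow the predictor list earlier in this post):

```python
# A minimal sketch: correlate a team's 3-point attempts per game (X3PA)
# with the other offensive variables discussed above.
import pandas as pd

df = pd.read_csv("nba_team_seasons_2000_2016.csv")   # hypothetical file of per-team season stats
cols = ["X2P.", "X2PA", "FT", "FTA", "ORB"]           # 2P%, 2PA, FT, FTA, offensive rebounds
print(df[cols].corrwith(df["X3PA"]))                  # Pearson correlations with 3PA per game
```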

What Were the Factors Behind Golden State’s and Cleveland’s Wins in the NBA Finals?

As I write this, Cleveland just won the series 4-3. What was behind each team’s wins and losses in this series?

First, Golden State: A correlation plot of their per-game predictor variables versus the binary win/loss outcome is as follows:


The key information is in the last column of this matrix: 


Evidently, the most important factors in GSW’s wins were assists, the number of field goals made, field goal percentage, and steals. The most important factors in GSW’s losses this series were the number of three-point attempts per game (imagine that!) and the number of personal fouls per game.
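A minimal sketch of this kind of computation (pandas, with hypothetical file and column names rather than the actual game logs) is:

```python
# A minimal sketch: correlate a team's per-game box-score stats with the
# binary win/loss outcome of each game in the series.
import pandas as pd

games = pd.read_csv("gsw_finals_2016.csv")   # hypothetical: one row per game, numeric stats plus "win"
win = games.pop("win")                       # 1 = win, 0 = loss
print(games.corrwith(win).sort_values())     # point-biserial correlation of each stat with winning
```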

Now, Cleveland: A correlation plot of their per-game predictor variables versus the binary win/loss outcome is as follows:


The key information is in the last column of this matrix: 


Evidently, the most important factor in CLE’s wins was their number of defensive rebounds. Following behind this were the number of three-point shots made and field goal percentage. There were some weak correlations between Cleveland’s losses and their number of offensive rebounds and turnovers.

Note that these results are essentially a summary of previous blog postings that tracked individual games: for example, here, here, and a first attempt here.

Game 2 of CLE vs GSW Breakdown

As usual, here is the post-game breakdown of Game 2 of the NBA Finals between Cleveland and Golden State. Using my live-tracking app to track the relevant factors (as explained in previous posts), here are the live-captured time series:


Computing the correlations between each time series above and the Golden State Warriors’ point difference, we obtain:


One sees once again that the factors most relevant to GSW’s point difference in this game were CLE’s personal fouls, GSW’s personal fouls, and, not far behind, GSW’s 3-point percentage during the game. What is interesting is that one can see the importance of these variables play out in real time by matching the two graphs above.
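One way to see that alignment is simply to overlay a tracked series on the running point difference. Here is a minimal sketch (matplotlib, with hypothetical column names; not the live-tracking app itself):

```python
# A minimal sketch: overlay CLE personal fouls on GSW's running point difference.
import pandas as pd
import matplotlib.pyplot as plt

ts = pd.read_csv("cle_gsw_game2_tracking.csv")   # hypothetical: one row per in-game timestamp

fig, ax1 = plt.subplots()
ax1.plot(ts["minute"], ts["cle_personal_fouls"], color="tab:red")
ax1.set_xlabel("Game minute")
ax1.set_ylabel("CLE personal fouls", color="tab:red")

ax2 = ax1.twinx()                                # second y-axis for the point difference
ax2.plot(ts["minute"], ts["gsw_point_diff"], color="tab:blue")
ax2.set_ylabel("GSW point difference", color="tab:blue")
plt.show()
```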

In fact, looking at the personal fouls vs. GSW point difference in real time (essentially taking a subset of the time series graph above), we obtain:

[Plot: personal fouls vs. GSW point difference in real time.]