How to Beat the Golden State Warriors

By: Dr. Ikjyot Singh Kohli

The Golden State Warriors have posed quite the conundrum for opposing teams. They are quick, have a spectacular ability to move the ball, and play suffocating defense. Their play in the playoffs thus far has exemplified all of these strengths, to the point where they seem unbeatable.

I wanted to take a somewhat simplified approach and see if opposing teams are missing something. That is, is there some weakness in their play that opposing teams can exploit, a “weakness in Helm’s Deep”?

“Helm’s Deep has but one weakness”– (Sorry, couldn’t resist!)
The most obvious place to start, from a data science point of view, seemed to be to look at every single shot the Warriors took as a team in each game this season and compile a grand ensemble shot chart. Using the data from Basketball-Reference.com and some data scraping scripts I wrote in R, I obtained the following:

[Figure: ensemble shot chart of every Warriors shot this season]
Red circles denote missed shots; black circles denote made shots. Note that in this diagram and in what follows, we have defined coordinates such that the origin of the x-y plane is the far-left, far-bottom corner of an NBA court, so the basket itself is at approximately (x,y) = (25,0).
Certainly, on the surface, there seems to be no discernible pattern between made shots and missed shots. This is where the machine learning comes in!

From here, I extracted the x and y coordinates of each shot and recorded a response variable of “made” or “missed” in a table, so that the coordinates were the predictor variables and the shot classification (made/missed) was the response variable. Altogether, we had 7,104 observations. Splitting this dataset into a 70% training set and a 30% test set, I tried the following algorithms, recording the percentage of correctly classified observations:

Algorithm                                                     % of Correctly Predicted Observations
Logistic Regression                                           56.43
Gradient Boosted Decision Trees                               62.62
Random Forests                                                58.54
Neural Networks with Entropy Fitting                          62.47
Naive Bayes Classification with Kernel Density Estimation     57.32

One sees that gradient boosted decision trees had the best performance, correctly classifying 62.62% of the test observations. Given how noisy the data is, this is not bad, and much better than expected. I should also mention that these numbers were obtained after tuning each model for optimal parameters using cross-validation.
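
For readers who want to try this themselves, here is a minimal sketch of the pipeline in R using the caret package. The `shots` data frame below is a random stand-in for the 7,104 scraped observations, and the tuning is left to caret's defaults, so treat this as an illustration of the approach rather than my exact scripts:

```r
# Minimal sketch: gradient boosted trees on shot coordinates via caret.
# The shot data here are random placeholders, not the real scraped data.
library(caret)   # also requires the gbm package for method = "gbm"

set.seed(1)
shots <- data.frame(
  x    = runif(7104, 0, 50),
  y    = runif(7104, 0, 47),
  made = factor(sample(c("made", "missed"), 7104, replace = TRUE))
)

# 70/30 train/test split
idx       <- createDataPartition(shots$made, p = 0.7, list = FALSE)
train_set <- shots[idx, ]
test_set  <- shots[-idx, ]

# Gradient boosted decision trees, tuned by 10-fold cross-validation
fit <- train(made ~ x + y, data = train_set, method = "gbm",
             trControl = trainControl(method = "cv", number = 10),
             verbose = FALSE)

# Percentage of correctly classified test observations
mean(predict(fit, newdata = test_set) == test_set$made) * 100
```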

Using the gradient boosted decision tree model, we made predictions for a vast number of (x,y)-coordinates across the basketball court. We obtained the following contour plot:

[Figure: contour plot of predicted shot-make probabilities]
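
The contour plot itself comes from evaluating the fitted model over a dense grid of court coordinates. Here is a sketch of that step, reusing the hypothetical `fit` object from the snippet above and assuming a 50 ft by 47 ft half court:

```r
# Predict the make probability over a dense grid of court locations
library(ggplot2)

grid <- expand.grid(x = seq(0, 50, by = 0.5), y = seq(0, 47, by = 0.5))
grid$p_made <- predict(fit, newdata = grid, type = "prob")[, "made"]

# Filled contour plot of the predicted make probabilities
ggplot(grid, aes(x, y, z = p_made)) +
  geom_contour_filled() +
  labs(x = "x (ft)", y = "y (ft)", fill = "P(make)")
```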

Overlaying this on top of the basketball court diagram, we got:

[Figure: contour plot overlaid on the court diagram]

The contour plot levels denote the probabilities that the GSW will make a shot from a given (x,y) location on the court. As a sanity check, the lowest probabilities are close to the half-court line and beyond the three-point line. The highest probabilities are, surprisingly, along very specific areas of the court: very close to the basket, a line from the basket to the left corner (extending up slightly), and a very narrow line extending from the basket to the right corner. Interestingly, the probabilities are low on the right side of the basket, specifically:

[Figure: the same contour overlay, highlighting the low-probability zone to the right of the basket]

A map showing the probabilities more explicitly is as follows (although, upon uploading it, I realized it is a bit hard to read; I will re-upload a clearer version soon!):
[Figure: probability map with explicit contour labels]

In conclusion, it seems that, at least on a first look at the data, the Warriors do indeed have several “weak spots” in their offense that opponents should look to exploit by designing defensive schemes that force them to take shots in the aforementioned low-probability zones. As for future improvements, I think it would be interesting to add predictor variables such as geographic location, crowd size, opponent strength, etc. I will look into making these improvements in the near future.

The “Interference” of Phil Jackson

By: Dr. Ikjyot Singh Kohli

So, I came across this article today by Matt Moore on CBSSports, who has once again taken to the web to bash the Triangle Offense. Of course, much of what he claims (like much of the Knicks media) is flat-out wrong, based on very primitive and simplistic analysis, as I will point out below. Further, much of this article seems to be motivated by several comments Carmelo Anthony made recently expressing his dismay at Jeff Hornacek moving away from the “high-paced” offense the Knicks were running before the All-Star break:

“I think everybody was trying to figure everything out, what was going to work, what wasn’t going to work,’’ Anthony said in the locker room at the former Delta Center. “Early in the season, we were winning games, went on a little winning streak we had. We were playing a certain way. We went away from that, started playing another way. Everybody was trying to figure out: Should we go back to the way we were playing, or try to do something different?’’

Anthony suggested he liked the Hornacek way.

“I thought earlier we were playing faster and more free-flow throughout the course of the game,’’ Anthony said. “We kind of slowed down, started settling it down. Not as fast. The pace slowed down for us — something we had to make an adjustment on the fly with limited practice time, in the course of a game. Once you get into the season, it’s hard to readjust a whole system.’’

First, it is well known that the Knicks have been implementing more of the triangle offense since the All-Star break. All-Star Weekend was February 17-19, 2017. The Knicks’ record before All-Star weekend was, amusingly, 23-34; that is 11 games below .500, which is nowhere mentioned in any of these articles, and is also not mentioned (realized?) by Carmelo.

Anyhow, the question is as follows: if Hornacek had been allowed to continue his non-triangle ways of pushing the ball at a higher pace (what Carmelo claims he liked), would the Knicks have made the playoffs? Probably not. I claim this based on a detailed machine-learning-based analysis of playoff-eligible teams that has been available for some time now. In fact, what is perhaps most important from this paper is the following classification tree, which determines whether a team is playoff-eligible or not:

[Figure: classification tree for NBA playoff eligibility]

So, these are the relevant factors in determining whether or not a team in a given season makes the playoffs. (Please see the paper linked above for details on the justification of these results.)

Looking at these predictor variables for the Knicks up to the All-Star break:

  1. Opponent Assists/Game: 22.44
  2. Steals/Game: 7.26
  3. TOV/Game: 13.53
  4. DRB/Game: 33.65
  5. Opp.TOV/Game: 12.46

Since Opp. TOV/Game = 12.46 < 13.16, the Knicks would actually be predicted to miss the NBA Playoffs. In fact, if the then-current trends, the so-called “Hornacek trends”, had been allowed to continue, one can compute the probability of the Knicks making the playoffs:

[Figure: estimated probability density function of the Knicks’ opponent TOV/G]

From this probability density function, we can calculate that the probability of the Knicks making the playoffs was 36.84%. The classification tree also predicted that the Knicks would miss the playoffs. So, what is being missed by Carmelo, Matt Moore, and the like is the complete lack of pressure defense, and hence the insufficient number of opponent TOV/G. It is therefore completely incorrect to claim that the Knicks were somehow “destined for glory” under Hornacek’s way of doing things. This is exacerbated by the fact that the Knicks’ opponent AST/G pre-All-Star break was already quite high at 22.44.
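
To make the logic concrete, here is a minimal sketch of both steps in R: the classification-tree rule and the probability estimate. The game-by-game opponent turnover numbers are not reproduced in this post, so the values below are simulated around the pre-All-Star mean of 12.46 with an assumed spread:

```r
# Simulated per-game opponent turnovers for the 57 pre-All-Star games;
# the mean matches the post, the spread is an assumption for illustration.
set.seed(1)
opp_tov <- rnorm(57, mean = 12.46, sd = 3)

# Classification-tree rule: below the 13.16 Opp. TOV/G threshold,
# the tree classifies the team as a non-playoff team
mean(opp_tov) < 13.16

# Probability of clearing the threshold, from a kernel density estimate
d <- density(opp_tov)
p_playoffs <- sum(d$y[d$x > 13.16]) * mean(diff(d$x))
p_playoffs
```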

The question now is: how have the Knicks been doing since Phil Jackson’s supposed interference, and since supposedly implementing the triangle in a more complete sense? (On a side note, I still don’t think you can partially implement the triangle; it needs a proper off-season implementation, as it is a complete system.)

Interestingly enough, the Knicks’ opponent assists per game (which, according to the machine learning analysis, is the most relevant factor in determining whether a team makes the playoffs) from All-Star weekend to the present day is an impressive 20.64. By the classification tree above, this actually puts the Knicks safely in playoff territory, in the sense of being classified as a playoff team, but it is too little, too late.

The defense has actually improved significantly with respect to the key relevant statistic of opponent AST/G since the Knicks started to implement the triangle more completely. (Note that, as will be shown in a future article, DRTG and ORTG are largely useless statistics for determining a team’s playoff eligibility, another point completely missed in Moore’s article.)

The problem is that it is obviously too little, too late at this point. I would argue, based on this analysis, that Phil Jackson should actually have interfered earlier in the season. In fact, if the Knicks keep their opponent assists per game below 20.75 next season (which is now very likely, if current trends continue), the above machine learning analysis would predict them to make the playoffs.

Finally, I will just make this point. It is interesting to look at Phil Jackson teams that were not packed with dominant players. As the saying goes, unfortunately, “Phil Jackson’s success had nothing to do with the triangle; it was because he had Shaq/Kobe, Jordan/Pippen, etc.”

Well, let’s first look at the 1994-1995 Chicago Bulls, a team that did not have Michael Jordan but ran the triangle offense completely. Per the relevant statistics above:

  1. Opp. AST/G = 20.9
  2. STL/G = 9.7
  3. AST/G = 24.0
  4. Opp. TOV/G = 18.1

These are remarkable defensive numbers, which supports Phil’s idea that the triangle offense leads to good defense.


Basketball Machine Learning Paper Updated 

I have now made a significant update to my applied machine learning paper on predicting patterns among NBA playoff and championship teams, which can be accessed here: arXiv Link.

Analyzing Lebron James’ Offensive Play

Where is Lebron James most effective on the court?

Based on 2015-2016 data obtained from NBA.com, the following data tracks Lebron’s FG% as a function of defender distance:

[Figure: Lebron’s FG% by defender distance]

From Basketball-Reference.com, we then obtained data on Lebron’s FG% as a function of his shot distance from the basket:

[Figure: Lebron’s FG% by shot distance]

Based on this data, we generated tens of thousands of sample data points to perform a Monte Carlo simulation and obtain the relevant probability density functions. We found that the joint PDF was a very lengthy closed-form expression, too long to reproduce here.


Graphically, this was:

[Figure: surface plot of the joint PDF]

A contour plot of the joint PDF was computed to be:

[Figure: contour plot of the joint PDF]

From this information, we can compute where LeBron has the highest probability of making a shot. Numerically, we found that the maximum probability occurs when Lebron’s defender is 0.829988 feet away and Lebron is 1.59378 feet from the basket. What is interesting is that this analysis shows that defending Lebron tightly doesn’t seem to be an effective strategy when his shot distance is within 5 feet of the basket; it is only effective farther than 5 feet from the basket. Therefore, opposing teams have the best chance of stopping Lebron from scoring by playing him tightly and forcing him as far away from the basket as possible.
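
For completeness, here is a minimal sketch of the Monte Carlo / joint-density procedure in R. The two sampling distributions below are assumptions standing in for data resampled from the NBA.com and Basketball-Reference.com tables, so the numbers this produces are illustrative only:

```r
# Monte Carlo sketch: sample the two distances, estimate the joint PDF
# with a 2-D kernel density estimate, and locate its mode.
library(MASS)   # for kde2d

set.seed(42)
n <- 50000
defender_dist <- rexp(n, rate = 1 / 2)   # defender distance (ft), assumed
shot_dist     <- rexp(n, rate = 1 / 4)   # shot distance (ft), assumed

# Two-dimensional kernel density estimate of the joint PDF
joint <- kde2d(defender_dist, shot_dist, n = 200)

# Locate the mode of the estimated joint density
idx <- which(joint$z == max(joint$z), arr.ind = TRUE)
c(defender = joint$x[idx[1]], shot = joint$y[idx[2]])

# Contour plot of the estimated joint PDF
contour(joint, xlab = "Defender distance (ft)", ylab = "Shot distance (ft)")
```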

The Relationship Between The Electoral College and Popular Vote

An interesting machine learning problem: Can one figure out the relationship between the popular vote margin, voter turnout, and the percentage of electoral college votes a candidate wins? Going back to the election of John Quincy Adams, the raw data looks like this:

Candidate  Party  Popular Vote Margin  Turnout  Percentage of EC
John Quincy Adams D.-R. -0.1044 0.27 0.3218
Andrew Jackson Dem. 0.1225 0.58 0.68
Andrew Jackson Dem. 0.1781 0.55 0.7657
Martin Van Buren Dem. 0.14 0.58 0.5782
William Henry Harrison Whig 0.0605 0.80 0.7959
James Polk Dem. 0.0145 0.79 0.6182
Zachary Taylor Whig 0.0479 0.73 0.5621
Franklin Pierce Dem. 0.0695 0.70 0.8581
James Buchanan Dem. 0.12 0.79 0.5878
Abraham Lincoln Rep. 0.1013 0.81 0.5941
Abraham Lincoln Rep. 0.1008 0.74 0.9099
Ulysses Grant Rep. 0.0532 0.78 0.7279
Ulysses Grant Rep. 0.12 0.71 0.8195
Rutherford Hayes Rep. -0.03 0.82 0.5014
James Garfield Rep. 0.0009 0.79 0.5799
Grover Cleveland Dem. 0.0057 0.78 0.5461
Benjamin Harrison Rep. -0.0083 0.79 0.58
Grover Cleveland Dem. 0.0301 0.75 0.6239
William McKinley Rep. 0.0431 0.79 0.6063
William McKinley Rep. 0.0612 0.73 0.6532
Theodore Roosevelt Rep. 0.1883 0.65 0.7059
William Taft Rep. 0.0853 0.65 0.6646
Woodrow Wilson Dem. 0.1444 0.59 0.8192
Woodrow Wilson Dem. 0.0312 0.62 0.5217
Warren Harding Rep. 0.2617 0.49 0.7608
Calvin Coolidge Rep. 0.2522 0.49 0.7194
Herbert Hoover Rep. 0.1741 0.57 0.8362
Franklin Roosevelt Dem. 0.1776 0.57 0.8889
Franklin Roosevelt Dem. 0.2426 0.61 0.9849
Franklin Roosevelt Dem. 0.0996 0.63 0.8456
Franklin Roosevelt Dem. 0.08 0.56 0.8136
Harry Truman Dem. 0.0448 0.53 0.5706
Dwight Eisenhower Rep. 0.1085 0.63 0.8324
Dwight Eisenhower Rep. 0.15 0.61 0.8606
John Kennedy Dem. 0.0017 0.6277 0.5642
Lyndon Johnson Dem. 0.2258 0.6192 0.9033
Richard Nixon Rep. 0.01 0.6084 0.5595
Richard Nixon Rep. 0.2315 0.5521 0.9665
Jimmy Carter Dem. 0.0206 0.5355 0.55
Ronald Reagan Rep. 0.0974 0.5256 0.9089
Ronald Reagan Rep. 0.1821 0.5311 0.9758
George H. W. Bush Rep. 0.0772 0.5015 0.7918
Bill Clinton Dem. 0.0556 0.5523 0.6877
Bill Clinton Dem. 0.0851 0.4908 0.7045
George W. Bush Rep. -0.0051 0.51 0.5037
George W. Bush Rep. 0.0246 0.5527 0.5316
Barack Obama Dem. 0.0727 0.5823 0.6784
Barack Obama Dem. 0.0386 0.5487 0.6171

Clearly, the percentage of electoral college votes a candidate wins depends nonlinearly on the voter turnout percentage and the popular vote margin, as this non-parametric regression shows:

[Figure: non-parametric regression surface of EC vote share versus popular vote margin and turnout]

We therefore chose to perform a nonlinear regression using neural networks, for which our structure was:

[Figure: neural network architecture (two inputs, one hidden layer)]

As it turns out, this simple neural network structure with one hidden layer gave the lowest test error, which was 0.002496419 in this case.
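
For reference, here is a minimal sketch of such a fit in R using the single-hidden-layer nnet package. The hyperparameters are assumptions (the post does not specify them), and only five rows of the table above are included for brevity:

```r
# One-hidden-layer neural network regression of EC share on margin and turnout
library(nnet)

# Five rows taken from the historical table above
elections <- data.frame(
  margin   = c(-0.1044, 0.1225, 0.2258, 0.0727, 0.0386),
  turnout  = c(0.27, 0.58, 0.6192, 0.5823, 0.5487),
  ec_share = c(0.3218, 0.68, 0.9033, 0.6784, 0.6171)
)

# size = 3 hidden units and the decay value are assumed; linout = TRUE
# gives a linear output unit, as appropriate for regression
fit <- nnet(ec_share ~ margin + turnout, data = elections,
            size = 3, linout = TRUE, decay = 1e-3, maxit = 500)

# Sweep voter turnout at a fixed 6.1% popular-vote margin,
# mirroring the prediction tables that follow
predict(fit, data.frame(margin = 0.061, turnout = seq(0.30, 0.75, by = 0.05)))
```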

Now, looking at the most recent national polls for the upcoming election, we see that Hillary Clinton has a 6.1% lead in the popular vote. Our neural network model then predicts the following:

Simulation  Popular Vote Margin  Voter Turnout  Predicted Percentage of EC Votes (+/- 0.04996417)
1 0.061 0.30 0.6607371
2 0.061 0.35 0.6647464
3 0.061 0.40 0.6687115
4 0.061 0.45 0.6726314
5 0.061 0.50 0.6765048
6 0.061 0.55 0.6803307
7 0.061 0.60 0.6841083
8 0.061 0.65 0.6878366
9 0.061 0.70 0.6915149
10 0.061 0.75 0.6951424

One sees that even for an extremely low voter turnout (30%), Hillary Clinton can at this point expect to win between 61.08% and 71.07% of the electoral college votes, or roughly 328 to 382 electoral votes. Therefore, what seems like a relatively small lead in the popular vote (6.1%) translates, according to this neural network model, into a large margin of victory in the electoral college.

One can see that the predicted percentage of electoral college votes really depends on popular vote margin and voter turnout. For example, if we reduce the popular vote margin to 1%, the results are less promising for the leading candidate:

Pop. Vote Margin  Voter Turnout  E.C. % Win  E.C. % Win Worst Case  E.C. % Win Best Case
0.01 0.30 0.5182854 0.4675000 0.5690708
0.01 0.35 0.5244157 0.4736303 0.5752011
0.01 0.40 0.5305820 0.4797967 0.5813674
0.01 0.45 0.5367790 0.4859937 0.5875644
0.01 0.50 0.5430013 0.4922160 0.5937867
0.01 0.55 0.5492434 0.4984580 0.6000287
0.01 0.60 0.5554995 0.5047141 0.6062849
0.01 0.65 0.5617642 0.5109788 0.6125496
0.01 0.70 0.5680317 0.5172463 0.6188171
0.01 0.75 0.5742963 0.5235109 0.6250817

One sees that if the popular vote margin is just 1% for the leading candidate, that candidate is not in the clear unless voter turnout exceeds 60%; below that, the worst-case electoral college share dips under 50%.


Breaking Down the 2015-2016 NBA Season

In this article, I will use data science / machine learning methodologies to break down the real factors separating playoff from non-playoff teams. In particular, I used the data from Basketball-Reference.com to associate 44 predictor variables with each team: “FG” “FGA” “FG.” “X3P” “X3PA” “X3P.” “X2P” “X2PA” “X2P.” “FT” “FTA” “FT.” “ORB” “DRB” “TRB” “AST” “STL” “BLK” “TOV” “PF” “PTS” “PS.G” “oFG” “oFGA” “oFG.” “o3P” “o3PA” “o3P.” “o2P” “o2PA” “o2P.” “oFT” “oFTA” “oFT.” “oORB” “oDRB” “oTRB” “oAST” “oSTL” “oBLK” “oTOV” “oPF” “oPTS” “oPS.G”, where the letter ‘o’ before the last 22 predictor variables indicates a defensive (opponent) variable.

Using principal components analysis (PCA), I was able to project this 44-dimensional data set down to a 5-dimensional one. That is, the first 5 principal components were found to explain 85% of the variance.
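
As a quick illustration, here is a minimal sketch of the PCA step in R; the matrix below is a random stand-in for the real 30 x 44 matrix of team statistics:

```r
# PCA on a placeholder 30-team x 44-variable matrix
set.seed(7)
teams <- matrix(rnorm(30 * 44), nrow = 30)

pca <- prcomp(teams, center = TRUE, scale. = TRUE)

# Cumulative proportion of variance explained by the first five components
summary(pca)$importance["Cumulative Proportion", 1:5]

# Biplot of the first two principal components
biplot(pca, choices = 1:2)
```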

Here are the various biplots: 


In these plots, the teams are grouped according to whether they made the playoffs or not. 

One sees from this biplot of the first two principal components that the dominant component along the first PC is 3-point attempts, while the dominant component along the second PC is opponent points. CLE and TOR have a high negative score along the second PC, indicating a strong defensive performance. Indeed, one suspects that the final separating factor that led CLE to the championship was their defensive play, as opposed to 3-point shooting, which all-in-all didn’t do GSW any favours. This is in line with some of my previous analyses.

Basketball Paper Update

Everyone by now knows about this paper I wrote a few months ago: http://arxiv.org/abs/1604.05266

Using data science / machine learning methodologies, it basically showed that the most important factors in characterizing a team’s playoff eligibility are the opponent field goal percentage and the opponent points per game. This suggests that defensive factors, as opposed to offensive factors, are the most important characteristics shared among NBA playoff teams. It was also shown that championship teams must have very strong defensive characteristics, in particular strong perimeter defense, in combination with an effective half-court offense that generates high-percentage two-point shots. A key part of this offensive strategy must also be the ability to draw fouls.

Some people have commented that, despite this, teams who frequently attempt three-point shots can still be considered to have an efficient offense, as doing so leads to better rebounding, floor spacing, and higher-percentage shots. We show below that this is not true. Looking at the last 16 years of all NBA teams (using the same data we used in the paper), we performed a correlation analysis of an individual NBA team’s 3-point attempts per game and other relevant variables, and discovered:


One sees that there is very little correlation between a team’s 3-point attempts per game and its 2-point percentage, free throws, free throw attempts, and offensive rebounds. In fact, at best, there is a moderate anti-correlation between 3-point attempts per game and a team’s 2-point attempts per game.
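
For anyone wanting to replicate this, here is a minimal sketch of the correlation computation in R, using the column-naming convention from the paper; the values are random stand-ins for the 16 seasons of real team data:

```r
# Correlate 3-point attempts per game with the other offensive variables.
# All values are placeholders; the real data came from Basketball-Reference.com.
set.seed(3)
n <- 16 * 30   # 16 seasons x 30 teams
teams16 <- data.frame(
  X3PA = rnorm(n, 20, 5),        # 3-point attempts per game
  X2P. = runif(n, 0.40, 0.55),   # 2-point percentage
  FT   = rnorm(n, 17, 3),        # free throws per game
  FTA  = rnorm(n, 23, 4),        # free throw attempts per game
  ORB  = rnorm(n, 10, 2),        # offensive rebounds per game
  X2PA = rnorm(n, 60, 8)         # 2-point attempts per game
)

cor(teams16$X3PA, teams16[, c("X2P.", "FT", "FTA", "ORB", "X2PA")])
```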