Movie Sentiment Tracker

I wrote an extensive application using NLP and TensorFlow/Keras in Python that looks at all of the current and upcoming Hollywood releases for 2020 and tracks the online Twitter sentiment for each of them. The model output is then displayed in a Power BI dashboard. In essence, we are predicting the classification probability Pr(Sentiment = Positive | Data).
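The full pipeline uses TensorFlow/Keras; as a self-contained stand-in for the classification step, here is a minimal bag-of-words logistic model estimating Pr(Sentiment = Positive | Data) on a toy corpus (all tweets and labels below are hypothetical, not the real Twitter data):

```python
import numpy as np

# Toy corpus standing in for the scored tweet data (hypothetical examples);
# the real project used TensorFlow/Keras on a far larger Twitter corpus.
tweets = ["loved the movie", "great film amazing cast",
          "terrible movie hated it", "boring film awful plot"]
labels = np.array([1.0, 1.0, 0.0, 0.0])  # 1 = positive sentiment

# Bag-of-words features.
vocab = sorted({w for t in tweets for w in t.split()})
X = np.array([[t.split().count(w) for w in vocab] for t in tweets], float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression fit by gradient descent: models Pr(Positive | words).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    grad = sigmoid(X @ w + b) - labels
    w -= 0.5 * X.T @ grad / len(tweets)
    b -= 0.5 * grad.mean()

def pr_positive(tweet):
    x = np.array([tweet.split().count(v) for v in vocab], float)
    return float(sigmoid(x @ w + b))

print(round(pr_positive("loved the film"), 3))
```

A Keras model plays the same role in the real application, just with a learned text representation instead of raw word counts.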

You can access the dashboard by clicking on the screenshot below:

We have also included a new feature that gives a daily popularity score for movies: an algorithm ranks movies according to daily positive sentiment. This can be found on Page 2 of the dashboard.

You can select different titles from the dropdown list. The left-side graph shows you the sentiment distribution of all of the tweet data corresponding to a film. The right-side graph calculates the median tweet sentiment for a given day for the selected film. (Right now, we go back 30 days from the present day.) The dashboard is intended to be refreshed daily.
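The right-side graph's aggregation is easy to sketch: group a film's tweets by day over a trailing 30-day window and take the per-day median. A minimal stand-alone version (film names, dates, and scores below are made up):

```python
from datetime import date, timedelta
from statistics import median

# Hypothetical per-tweet sentiment scores in [0, 1], keyed by
# (film, tweet date) -- stand-ins for the scored Twitter data.
tweets = [
    ("Film A", date(2020, 3, 1), 0.9),
    ("Film A", date(2020, 3, 1), 0.5),
    ("Film A", date(2020, 3, 2), 0.7),
    ("Film B", date(2020, 3, 1), 0.2),
]

def daily_median(tweets, film, days=30, today=date(2020, 3, 2)):
    """Median tweet sentiment per day for one film over a trailing window."""
    cutoff = today - timedelta(days=days)
    by_day = {}
    for name, day, score in tweets:
        if name == film and cutoff <= day <= today:
            by_day.setdefault(day, []).append(score)
    return {day: median(scores) for day, scores in sorted(by_day.items())}

print(daily_median(tweets, "Film A"))
```

The daily refresh would simply rerun this aggregation over the latest 30-day window before pushing results to the dashboard.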

Did Clyburn Help Biden in South Carolina?

By: Dr. Ikjyot Singh Kohli

The conventional wisdom among the political pundits and analysts seeking to explain Joe Biden’s massive win in the 2020 South Carolina primary is that Jim Clyburn’s endorsement was the sole reason Biden won. (Here is just one article describing this.)

I wanted to analyze the data behind this and actually measure the size of the Clyburn effect. Clyburn formally endorsed Biden on February 26, 2020.

Using extensive polling data from RealClearPolitics, I looked at Biden’s margin of victory according to various polling samples before the Clyburn endorsement. I used Kernel Density Estimation to form the following probability density function of Biden’s predicted margin of victory (as a percentage/popular vote) in the 2020 South Carolina Primary:

Denoting this probability density function by p(x), we notice some interesting properties:

  • The expected margin of victory for Biden is given by \int x p(x) dx. Using numerical integration, we find \int x p(x) dx = 18.513 \%. The spread of this prediction is the variance, var(x) = \int x^2 p(x) dx - (\int x p(x) dx)^2 = 107.79, corresponding to a standard deviation of 10.382. The predicted Biden margin of victory is therefore 18.51 \pm 10.38, so the upper bound of this prediction is 28.89%. That is, according to the data before Clyburn’s endorsement, it was perfectly reasonable to expect Biden’s margin of victory in South Carolina to be around 29%. Indeed, Biden’s final margin of victory in South Carolina was 28.5%, which is within the prediction interval. Therefore, it seems unlikely that Jim Clyburn’s endorsement boosted Biden’s margin in South Carolina.
  • Given the density function above, we can make some more interesting calculations:
  • P(Biden win > 5%) = 1 - \int_{-\infty}^{5} p(x) dx = 0.904 = 90.4%
  • P(Biden win > 10%) = 1 - \int_{-\infty}^{10} p(x) dx = 0.799 = 79.9%
  • P(Biden win > 15%) = 1 - \int_{-\infty}^{15} p(x) dx = 0.710 = 71.0%
  • P(Biden win > 20%) = 1 - \int_{-\infty}^{20} p(x) dx = 0.567 = 56.7%

What these calculations show is that the probability that Biden would have won by more than 5% before Clyburn’s endorsement was 90.4%. The probability that Biden would have won by more than 10% before Clyburn’s endorsement was 79.9%. The probability that Biden would have won by more than 20% before Clyburn’s endorsement was 56.7%, and so on.
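These quantities are straightforward to reproduce numerically. The sketch below builds a Gaussian kernel density estimate from a handful of hypothetical poll margins (stand-ins for the RealClearPolitics samples, which are not reproduced here) and computes the mean, variance, and a tail probability by numerical integration:

```python
import numpy as np

# Hypothetical pre-endorsement poll margins (%), standing in for the
# RealClearPolitics samples analyzed in the post.
margins = np.array([4.0, 10.0, 15.0, 18.0, 20.0, 24.0, 28.0, 36.0])

# Gaussian kernel density estimate p(x) with a Silverman-rule bandwidth.
h = 1.06 * margins.std() * len(margins) ** (-1 / 5)
x = np.linspace(-30, 70, 5001)
p = np.exp(-0.5 * ((x[:, None] - margins) / h) ** 2).sum(axis=1)
p /= h * np.sqrt(2 * np.pi) * len(margins)

# Numerical integration (Riemann sums) for the moments and tail mass.
dx = x[1] - x[0]
mean = (x * p * dx).sum()               # E[x] = int x p(x) dx
var = (x**2 * p * dx).sum() - mean**2   # var(x) = int x^2 p(x) dx - E[x]^2
p_win5 = (p[x > 5] * dx).sum()          # P(margin > 5%)

print(round(mean, 2), round(var**0.5, 2), round(p_win5, 3))
```

With the actual poll samples, the same integrals yield the 18.513% mean and 107.79 variance quoted above.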

Given these calculations, it actually seems unlikely that Clyburn’s endorsement made a huge impact on Biden’s win in South Carolina. This analysis shows that Biden would likely have won by 15%-20% or more regardless.

How to Beat the Golden State Warriors

By: Dr. Ikjyot Singh Kohli

The Golden State Warriors have posed quite the conundrum for opposing teams. They are quick, have a spectacular ability to move the ball, and play suffocating defense. Given their play in the playoffs thus far, all of these points have been exemplified even more to the point where it seems that they are unbeatable.

I wanted to take a somewhat simplified approach and see if opposing teams are missing something. That is, is there some weakness in their play that opposing teams can exploit, a “weakness in Helm’s Deep”?

“Helm’s Deep has but one weakness”– (Sorry, couldn’t resist!)
The most obvious place to start, from a data science point of view, seemed to be to look at every single shot the Warriors took as a team this season, in each game, and compile a grand ensemble shot chart. Using data from Basketball-Reference.com and some data-scraping scripts I wrote in R, I obtained the following:

GSWshotchart
Red circles denote missed shots, black circles denote made shots. Note that in this diagram and what follows, we have defined coordinates such that the origin of the x-y plane here denotes the far left and far bottom of an NBA court such that the basket itself is approximately at (x,y) = (25,0).
Certainly, on the surface, it seems that there is no discernible pattern between made shots and missed shots. This is where the machine learning comes in!

From here, I extracted the x and y coordinates of each shot and recorded a response variable of “made” or “missed” in a table, such that the coordinates were the predictor variables and the shot classification (made/missed) was the response variable. Altogether, we had 7104 observations. Splitting this dataset into a 70% training set and a 30% test set, I tried the following algorithms, recording the percentage of correctly classified observations:

Algorithm                                                % of Correctly Predicted Observations
Logistic Regression                                      56.43
Gradient Boosted Decision Trees                          62.62
Random Forests                                           58.54
Neural Networks with Entropy Fitting                     62.47
Naive Bayes Classification with Kernel Density Estimation  57.32

One sees that the gradient boosted decision trees had the best performance, correctly classifying 62.62% of the test observations. Given how noisy the data is, this is not bad, and much better than expected. I should also mention that these numbers were obtained after tuning the models for optimal parameters using cross-validation.
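The original scraping and modeling were done in R with cross-validated tuning; as a rough illustration of the boosting step, here is a self-contained Python sketch that boosts depth-1 regression stumps under the logistic loss on synthetic shot data (the coordinates, labels, and resulting accuracy are stand-ins, not the real scraped dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the scraped shot chart: (x, y) court coordinates
# with made/missed labels whose odds decay with distance from the basket,
# which sits at roughly (25, 0) in the coordinates defined above.
n = 2000
X = np.column_stack([rng.uniform(0, 50, n), rng.uniform(0, 47, n)])
dist = np.hypot(X[:, 0] - 25, X[:, 1])
y = (rng.random(n) < np.clip(0.7 - 0.012 * dist, 0.05, 0.95)).astype(float)

def fit_stump(X, g):
    """Best depth-1 regression tree (stump) fit to the gradients g."""
    best = (np.inf, 0, 0.0, 0.0, 0.0)
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            lv, rv = g[left].mean(), g[~left].mean()
            err = ((g - np.where(left, lv, rv)) ** 2).sum()
            if err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

# Gradient boosting under the logistic loss: each stump is fit to the
# negative gradient (residual) and added with a shrinkage factor.
train, test = slice(0, 1400), slice(1400, None)
F = np.zeros(n)
for _ in range(50):
    p = 1 / (1 + np.exp(-F[train]))
    j, t, lv, rv = fit_stump(X[train], y[train] - p)
    for part in (train, test):
        F[part] += 0.3 * np.where(X[part, j] <= t, lv, rv)

acc = ((F[test] > 0) == (y[test] == 1)).mean()
print(round(acc, 3))
```

Production libraries grow deeper trees and tune the learning rate and tree count by cross-validation, but the additive fit-to-residuals loop is the same idea.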

Using the gradient boosted decision tree model, we made a set of predictions over a vast number of (x,y)-coordinates on the basketball court. We obtained the following contour plot:

contouroneGSW

Overlaying this on top of the basketball court diagram, we got:

contourtwoGSW

The contour plot levels denote the probabilities that the GSW will make a shot from a given (x,y) location on the court. As a sanity check, the lowest probabilities are close to the half-court line and beyond the three-point line. The highest probabilities are surprisingly along very specific areas of the court: very close to the basket, a line from the basket to the left corner (extending up slightly), and a very narrow line extending from the basket to the right corner. Interestingly, the probabilities are low on the right side of the basket, specifically:

contourtwoGSW

A map showing the probabilities more explicitly is as follows (although, upon uploading it, I realized it is a bit hard to read; I will re-upload a clearer version soon!):
contourgsw3
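The contour levels are simply the fitted model's make-probability evaluated on a grid of court coordinates. A minimal sketch, with a hypothetical distance-decay function standing in for the fitted boosted-tree model:

```python
import numpy as np

def make_prob(x, y):
    """Hypothetical stand-in for the fitted model's P(make | x, y):
    probability decays with distance from the basket at (25, 0)."""
    return np.clip(0.7 - 0.012 * np.hypot(x - 25, y), 0.05, 0.95)

# Evaluate the model on a grid spanning the court (50 ft x 47 ft half-court).
xs = np.linspace(0, 50, 101)
ys = np.linspace(0, 47, 95)
XX, YY = np.meshgrid(xs, ys)
P = make_prob(XX, YY)

# "Weak spots": grid cells where the make probability falls below 30%.
weak = np.argwhere(P < 0.30)
print(round(float(P.max()), 3), round(float(P.min()), 3), len(weak))

# The contour figures above correspond to something like:
#   import matplotlib.pyplot as plt
#   plt.contourf(XX, YY, P); plt.colorbar(); plt.show()
```

Swapping `make_prob` for the real model's predicted probabilities reproduces the contour maps shown above.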

In conclusion, it seems that, at least on a first look at the data, the Warriors do indeed have several “weak spots” in their offense that opponents should look to exploit by designing defensive schemes that force them to take shots in the aforementioned low-probability zones. As for future improvements, I think it would be interesting to add predictor variables such as geographic location, crowd size, and opponent strength. I will look into making these improvements in the near future.

The “Interference” of Phil Jackson

By: Dr. Ikjyot Singh Kohli

So, I came across this article today by Matt Moore on CBSSports, who basically once again has taken to the web to bash the Triangle Offense. Of course, much of what he claims (like much of the Knicks media) is flat-out wrong, based on very primitive and simplistic analysis, as I will point out below. Further, much of this article seems to be motivated by several comments Carmelo Anthony made recently expressing his dismay at Jeff Hornacek moving away from the “high-paced” offense that the Knicks were running before the All-Star break:

“I think everybody was trying to figure everything out, what was going to work, what wasn’t going to work,’’ Anthony said in the locker room at the former Delta Center. “Early in the season, we were winning games, went on a little winning streak we had. We were playing a certain way. We went away from that, started playing another way. Everybody was trying to figure out: Should we go back to the way we were playing, or try to do something different?’’

Anthony suggested he liked the Hornacek way.

“I thought earlier we were playing faster and more free-flow throughout the course of the game,’’ Anthony said. “We kind of slowed down, started settling it down. Not as fast. The pace slowed down for us — something we had to make an adjustment on the fly with limited practice time, in the course of a game. Once you get into the season, it’s hard to readjust a whole system.’’

First, it is well-known that the Knicks have been implementing more of the triangle offense since the All-Star break. All-Star Weekend was February 17-19, 2017. The Knicks’ record before All-Star weekend was, amusingly, 23-34 (11 games below .500), a fact that is nowhere mentioned in any of these articles, nor mentioned (realized?) by Carmelo.

Anyhow, the question is as follows. If Hornacek had been allowed to continue his non-triangle ways of pushing the ball at a higher pace (what Carmelo claims he liked), would the Knicks have made the playoffs? Probably not. I claim this based on a detailed machine-learning-based analysis of playoff-eligible teams that has been available for some time now. In fact, what is perhaps most important from this paper is the following classification tree, which determines whether a team is playoff-eligible:

img_4304

So, these are the relevant factors in determining whether or not a team in a given season makes the playoffs. (Please see the paper linked above for details on the justification of these results.)

Looking at these predictor variables for the Knicks up to the All-Star break:

  1. Opponent Assists/Game: 22.44
  2. Steals/Game: 7.26
  3. TOV/Game: 13.53
  4. DRB/Game: 33.65
  5. Opp.TOV/Game: 12.46

Since Opp.TOV/Game = 12.46 < 13.16, the Knicks would actually be predicted to miss the NBA Playoffs. In fact, if the current trends (the so-called “Hornacek trends”) were allowed to continue, one could compute the probability of the Knicks making the playoffs:

knickspdfoTOV1

From this probability density function, we can calculate that the probability of the Knicks making the playoffs was 36.84%. The classification tree also predicted that the Knicks would miss the playoffs. So, what is being missed by Carmelo, Matt Moore, and the like is the complete lack of pressure defense, and hence the insufficient number of opponent turnovers per game. It is therefore completely incorrect to claim that the Knicks were somehow “destined for glory” under Hornacek’s way of doing things. This is exacerbated by the fact that the Knicks’ opponent AST/G pre-All-Star break was already quite high at 22.44.
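For concreteness, the tree's decision path as quoted here can be written as a small function. The thresholds 20.75 and 13.16 come from the analysis above, but the branch ordering is my reading of the tree, so treat this as a sketch:

```python
def predicted_playoffs(opp_ast_per_game, opp_tov_per_game):
    """Sketch of the classification tree's quoted splits: opponent AST/G
    below 20.75 classifies a team as playoff-bound; otherwise the team
    must force at least 13.16 opponent turnovers per game."""
    if opp_ast_per_game < 20.75:
        return True
    return opp_tov_per_game >= 13.16

# Knicks pre-All-Star break: Opp. AST/G = 22.44, Opp. TOV/G = 12.46.
print(predicted_playoffs(22.44, 12.46))  # predicted to miss the playoffs
```

On these numbers, the pre-All-Star Knicks fall on the "miss the playoffs" branch, matching the discussion above.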

The question now is how the Knicks have been doing since Phil Jackson’s supposed interference, that is, since supposedly implementing the triangle in a more complete sense. (On a side note, I still don’t think you can partially implement the triangle; I think it needs a proper off-season implementation, as it is a complete system.)

Interestingly enough, the Knicks’ opponent assists per game (which, according to the machine learning analysis, is the most relevant factor in determining whether a team makes the playoffs) from All-Star weekend to the present day is an impressive 20.642. By the classification tree above, this actually puts the Knicks safely in playoff territory, in the sense of being classified as a playoff team, but it is too little, too late.

The defense has actually improved significantly with respect to the key relevant statistic, opponent AST/G, since the Knicks started to implement the triangle more completely. (Note that, as will be shown in a future article, DRTG and ORTG are largely useless statistics in determining a team’s playoff eligibility, another point completely missed in Moore’s article.)

Again, though, it is obviously too little, too late at this point. Based on this analysis, I would argue that Phil Jackson should actually have interfered earlier in the season. In fact, if the Knicks keep their opponent assists per game below 20.75 next season (which is now very likely, if current trends continue), the above machine learning analysis would predict them to make the playoffs.

Finally, I will just make this point. It is interesting to look at Phil Jackson teams that were not packed with dominant players. The saying, unfortunately, goes: “Phil Jackson’s success had nothing to do with the triangle; it was because he had Shaq/Kobe, Jordan/Pippen, etc.”

Well, let’s first look at the 1994-1995 Chicago Bulls, a team that played most of the season without Michael Jordan but ran the triangle offense completely. Per the relevant statistics above:

  1. Opp. AST/G = 20.9
  2. STL/G = 9.7
  3. AST/G = 24.0
  4. Opp. TOV/G = 18.1

These are remarkable defensive numbers, which supports Phil’s idea that the triangle offense leads to good defense.

Basketball Machine Learning Paper Updated 

I have now made a significant update to my applied machine learning paper on predicting patterns among NBA playoff and championship teams, which can be accessed here: arXiv Link.

Analyzing LeBron James’ Offensive Play

Where is LeBron James most effective on the court?

Based on 2015-2016 data from NBA.com, we obtained the following data, which tracks LeBron’s FG% based on defender distance:

lebrondef

From Basketball-Reference.com, we then obtained data on LeBron’s FG% based on his shot distance from the basket:

lebronshotdist

Based on this data, we generated tens of thousands of sample data points to perform a Monte Carlo simulation and obtain the relevant probability density functions. We found that the joint PDF was a very lengthy expression, too long to reproduce here.

Graphically, this was:

lebronjointplot

A contour plot of the joint PDF was computed to be:

lebroncontour

From this information, we can compute where LeBron has the highest probability of making a shot. Numerically, we found that the maximum probability occurs when LeBron’s defender is 0.829988 feet away, while LeBron is 1.59378 feet away from the basket. What is interesting is that this analysis shows that defending LeBron tightly doesn’t seem to be an effective strategy when his shot distance is within 5 feet of the basket; it is only effective further than 5 feet from the basket. Therefore, opposing teams have the best chance of stopping LeBron from scoring by playing him tightly and forcing him as far away from the basket as possible.
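Numerically, finding that maximum is just a grid search over the joint density. Since the fitted expression is too lengthy to reproduce, the sketch below uses a hypothetical Gaussian-shaped stand-in peaked near the values reported above:

```python
import numpy as np

# Hypothetical stand-in for the Monte Carlo joint PDF p(d, s) over defender
# distance d and shot distance s (feet); the real fitted expression is far
# lengthier. This one is peaked near the reported optimum (0.83, 1.59).
def joint_pdf(d, s):
    return np.exp(-((d - 0.83) ** 2) / 0.5 - ((s - 1.59) ** 2) / 2.0)

d = np.linspace(0, 5, 501)      # defender distances, 0.01 ft grid
s = np.linspace(0, 15, 1501)    # shot distances, 0.01 ft grid
D, S = np.meshgrid(d, s)
P = joint_pdf(D, S)

# The mode of the joint density is the most likely (defender, shot) pair.
i, j = np.unravel_index(P.argmax(), P.shape)
print(round(float(D[i, j]), 2), round(float(S[i, j]), 2))
```

With the actual fitted density in place of `joint_pdf`, the same grid search recovers the (0.829988, 1.59378) optimum quoted above.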

The Relationship Between The Electoral College and Popular Vote

An interesting machine learning problem: Can one figure out the relationship between the popular vote margin, voter turnout, and the percentage of electoral college votes a candidate wins? Going back to the election of John Quincy Adams, the raw data looks like this:

Candidate  Party  Popular Vote Margin (%)  Turnout  Percentage of EC

John Quincy Adams D.-R. -0.1044 0.27 0.3218
Andrew Jackson Dem. 0.1225 0.58 0.68
Andrew Jackson Dem. 0.1781 0.55 0.7657
Martin Van Buren Dem. 0.14 0.58 0.5782
William Henry Harrison Whig 0.0605 0.80 0.7959
James Polk Dem. 0.0145 0.79 0.6182
Zachary Taylor Whig 0.0479 0.73 0.5621
Franklin Pierce Dem. 0.0695 0.70 0.8581
James Buchanan Dem. 0.12 0.79 0.5878
Abraham Lincoln Rep. 0.1013 0.81 0.5941
Abraham Lincoln Rep. 0.1008 0.74 0.9099
Ulysses Grant Rep. 0.0532 0.78 0.7279
Ulysses Grant Rep. 0.12 0.71 0.8195
Rutherford Hayes Rep. -0.03 0.82 0.5014
James Garfield Rep. 0.0009 0.79 0.5799
Grover Cleveland Dem. 0.0057 0.78 0.5461
Benjamin Harrison Rep. -0.0083 0.79 0.58
Grover Cleveland Dem. 0.0301 0.75 0.6239
William McKinley Rep. 0.0431 0.79 0.6063
William McKinley Rep. 0.0612 0.73 0.6532
Theodore Roosevelt Rep. 0.1883 0.65 0.7059
William Taft Rep. 0.0853 0.65 0.6646
Woodrow Wilson Dem. 0.1444 0.59 0.8192
Woodrow Wilson Dem. 0.0312 0.62 0.5217
Warren Harding Rep. 0.2617 0.49 0.7608
Calvin Coolidge Rep. 0.2522 0.49 0.7194
Herbert Hoover Rep. 0.1741 0.57 0.8362
Franklin Roosevelt Dem. 0.1776 0.57 0.8889
Franklin Roosevelt Dem. 0.2426 0.61 0.9849
Franklin Roosevelt Dem. 0.0996 0.63 0.8456
Franklin Roosevelt Dem. 0.08 0.56 0.8136
Harry Truman Dem. 0.0448 0.53 0.5706
Dwight Eisenhower Rep. 0.1085 0.63 0.8324
Dwight Eisenhower Rep. 0.15 0.61 0.8606
John Kennedy Dem. 0.0017 0.6277 0.5642
Lyndon Johnson Dem. 0.2258 0.6192 0.9033
Richard Nixon Rep. 0.01 0.6084 0.5595
Richard Nixon Rep. 0.2315 0.5521 0.9665
Jimmy Carter Dem. 0.0206 0.5355 0.55
Ronald Reagan Rep. 0.0974 0.5256 0.9089
Ronald Reagan Rep. 0.1821 0.5311 0.9758
George H. W. Bush Rep. 0.0772 0.5015 0.7918
Bill Clinton Dem. 0.0556 0.5523 0.6877
Bill Clinton Dem. 0.0851 0.4908 0.7045
George W. Bush Rep. -0.0051 0.51 0.5037
George W. Bush Rep. 0.0246 0.5527 0.5316
Barack Obama Dem. 0.0727 0.5823 0.6784
Barack Obama Dem. 0.0386 0.5487 0.6171

Clearly, the percentage of electoral college votes a candidate wins depends nonlinearly on the voter turnout percentage and the popular vote margin (%), as this non-parametric regression shows:

electoralmap.png

We therefore chose to perform a nonlinear regression using neural networks, for which our structure was:

nnetplot

As it turns out, this simple neural network structure with one hidden layer gave the lowest test error, which was 0.002496419 in this case.
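The fit can be illustrated end to end in a few lines. The sketch below trains a one-hidden-layer network by gradient descent on synthetic data with a made-up nonlinear relationship; the real model was trained on the historical table above, so every number here is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (margin, turnout) -> EC-share data with an assumed nonlinear
# form; a stand-in for the historical table, not actual election records.
m = rng.uniform(-0.1, 0.26, 200)
t = rng.uniform(0.27, 0.82, 200)
X = np.column_stack([m, t])
y = np.clip(0.5 + 1.8 * m + 0.1 * np.tanh(5 * m) * t, 0.0, 1.0)

# One hidden layer (tanh) with a linear output, trained by gradient descent.
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2).ravel() - y
    gH = err[:, None] @ W2.T * (1 - H ** 2)   # backprop through tanh
    W2 -= lr * H.T @ err[:, None] / len(y)
    b2 -= lr * err.mean()
    W1 -= lr * X.T @ gH / len(y)
    b1 -= lr * gH.mean(axis=0)

mse = (((np.tanh(X @ W1 + b1) @ W2 + b2).ravel() - y) ** 2).mean()

# Query the fitted surface at a 6.1% margin and 50% turnout.
ec_share = (np.tanh(np.array([[0.061, 0.5]]) @ W1 + b1) @ W2 + b2).item()
print(round(mse, 5), round(ec_share, 3))
```

The prediction tables below are exactly this kind of query, repeated over a grid of turnout values at a fixed popular-vote margin.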

Now, looking at the most recent national polls for the upcoming election, we see that Hillary Clinton has a 6.1% lead in the popular vote. Our neural network model then predicts the following:

Simulation Popular Vote Margin Percentage of Voter Turnout Predicted Percentage of Electoral College Votes (+/- 0.04996417)
1 0.061 0.30 0.6607371
2 0.061 0.35 0.6647464
3 0.061 0.40 0.6687115
4 0.061 0.45 0.6726314
5 0.061 0.50 0.6765048
6 0.061 0.55 0.6803307
7 0.061 0.60 0.6841083
8 0.061 0.65 0.6878366
9 0.061 0.70 0.6915149
10 0.061 0.75 0.6951424

One sees that even for an extremely low voter turnout (30%), Hillary Clinton can at this point expect to win between 61.08% and 71.07% of the Electoral College, or 328 to 382 electoral college votes. Therefore, what seems like a relatively small lead in the popular vote (6.1%) translates, according to this neural network model, into a large margin of victory in the electoral college.

One can see that the predicted percentage of electoral college votes really depends on popular vote margin and voter turnout. For example, if we reduce the popular vote margin to 1%, the results are less promising for the leading candidate:

Pop. Vote Margin  Voter Turnout %  E.C. % Win  E.C. % Win Worst Case  E.C. % Win Best Case
0.01 0.30 0.5182854 0.4675000 0.5690708
0.01 0.35 0.5244157 0.4736303 0.5752011
0.01 0.40 0.5305820 0.4797967 0.5813674
0.01 0.45 0.5367790 0.4859937 0.5875644
0.01 0.50 0.5430013 0.4922160 0.5937867
0.01 0.55 0.5492434 0.4984580 0.6000287
0.01 0.60 0.5554995 0.5047141 0.6062849
0.01 0.65 0.5617642 0.5109788 0.6125496
0.01 0.70 0.5680317 0.5172463 0.6188171
0.01 0.75 0.5742963 0.5235109 0.6250817

One sees that if the popular vote margin is just 1% for the leading candidate, that candidate is not in the clear (the worst case dips below 50%) unless voter turnout reaches roughly 60%.