Predicting Trump’s Win


I got into polling and forecasting in 2014 in order to bring data-driven, scientific campaigning to the Turkish local elections. I had skin in the game: I cared about the outcome first and foremost, and therefore we had to be right. As a one-time fixed-income derivatives analyst and trader for Goldman Sachs, and a heavy practitioner of the Silicon Valley technology- and data-driven approach, I found the tools and approaches popularized by Nate Silver (time decay, pollster performance weighting, etc.) easy to implement. When the parliamentary elections rolled around in June 2015, the machine started spitting out impressive results. Our proprietary predictions, instrumental in the outcome we cared about, were also incredibly accurate: we were the best predictor.
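For a sense of those mechanics, here is a minimal sketch of a time-decayed, pollster-performance-weighted poll average. The half-life, ratings, and polls below are made-up placeholders, not our actual parameters:

```python
from datetime import date

HALF_LIFE_DAYS = 14  # assumed decay rate: a poll loses half its weight every two weeks

def weighted_share(polls, pollster_ratings, today):
    """polls: list of (pollster, poll_date, share).
    Returns the decay- and quality-weighted average share."""
    num = den = 0.0
    for pollster, poll_date, share in polls:
        age_days = (today - poll_date).days
        # weight = pollster quality rating * exponential time decay
        weight = pollster_ratings.get(pollster, 0.5) * 0.5 ** (age_days / HALF_LIFE_DAYS)
        num += weight * share
        den += weight
    return num / den

polls = [("PollsterA", date(2015, 5, 20), 0.44),
         ("PollsterB", date(2015, 6, 1), 0.41)]
ratings = {"PollsterA": 0.9, "PollsterB": 0.6}  # hypothetical quality scores
print(weighted_share(polls, ratings, today=date(2015, 6, 7)))
```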

Then, five months later in November 2015, we were off by 8% in the snap elections. Our desired outcome also suffered, so I simply stopped predicting. The honorable thing to do in these situations.

It was rather obvious that there were systemic polling errors with respect to voting preference, and issues with respect to voting propensity, in Turkey. Face-to-face, phone, and internet sampling of likely voters all suffered tremendously, and the errors were correlated with voting propensity. We even built an extremely simple polling app and sent our own people to the streets for large raw surveys (to be weighted in post-processing). Didn’t work. We found some polling stations in major cities whose districts robustly mimicked overall voting behavior, so we sampled there. Didn’t work (except for calling the election results accurately immediately after the count of paper ballots at these stations, hours before anyone else could say anything meaningful. This innovation negated claims of election fraud: a handful of districts out of 20,000 were known, only to us, to be specifically targeted for tampering).

There had to be other ways to estimate when current methods seemed to be failing.

In January 2015, we heard about a tech team that claimed accurate results for the local elections and, more importantly, shared our own contrarian prediction for the upcoming general election. Their approach was very simplistic: count all the positive mentions and likes for the parties and their leaders on Facebook’s public forums and pages, and the simple ratio of positive mentions to total mentions, they claimed, was the exact voting percentage for each party. We started classifying public Facebook language and signals into voting propensity and preference. Even the simple count ratio was enough to predict the June 2015 results. Quantifying unguarded social media interaction showed a lot of promise for overcoming egregious sampling errors.
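A minimal sketch of their counting idea, under one reading of the claim (positive mentions of a party divided by that party’s total mentions). The real inputs were classified public Facebook mentions and likes; the counts below are hypothetical:

```python
def vote_share(positive, total):
    """Their claim: positive mentions / total mentions = voting percentage."""
    return positive / total

# party: (positive_mentions, total_mentions) -- hypothetical counts
mentions = {
    "AKP": (4_400_000, 10_000_000),
    "CHP": (2_500_000, 10_000_000),
    "MHP": (1_600_000, 10_000_000),
    "HDP": (1_300_000, 10_000_000),
}
for party, (pos, tot) in mentions.items():
    print(f"{party}: {vote_share(pos, tot):.1%}")
```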

We added a few more bells and whistles, but the method did not smell of rigor. Especially when it started diverging by about 10 points from our Nate Silver-style models for November 2015. The governing party could not grab 50% of the voters. It was not possible. Our models and everyone else’s predictions were around 40-43%. It just was not possible. When reality sank in, I wrote a public apology for sucking so badly and decided to test our “energy” model on other upcoming elections worldwide.

A few things happened. All predictions and pundits kept sucking globally. Austria, Hungary, the UK, Colombia; you have seen it all. Three things were obvious: 1) sampling errors were horrible, and yet survey companies and pundits kept their heads in the sand about this; 2) the electorate worldwide seemed to care much less about voting (unlike in Turkey), so the percentage distribution just did not matter; voting propensity had a much larger effect in general, and yet the IYIs (Taleb’s “Intellectual Yet Idiots”) kept measuring percentage distribution (to the second decimal point); and 3) gross failure (after failure) had none of the IYIs alarmed even a little bit, thereby stifling any new innovation from these experienced practitioners. People had the right analysis sometimes, but nobody cared enough to change methods.

It was with pleasure that we shared our contrarian predictions with friends, taking comfort in the fact that our little “energy” model always painted a different picture than the one predicted elsewhere. We were right on Brexit (and so was Facebook) and the Greek referendum (we unfortunately did not track others).

A few other things also happened. Scared voters across the globe were correlated around a shared trepidation toward a techno-fueled, eat-code-if-you-can’t-find-bread neoliberalism, and fascism, the trusted old bogeyman, energized the left-behinds. They were left behind; no denying that in any shape or form, but that’s another story. And you gotta respect the emotion that causes the “energy.”

One could simply have called Trump the day after Brexit, due to Brexit. Or Colombia. It would have been more robust. One would have made so much money simply betting against the IYIs. That would also be robust. Time after time, tested.

The “energy” model had merit and was showing Trump winning a few months after he entered the race. We shared this with a few friends and started updating our models for the peculiarities of the US context. On the plus side, polling was so much more frequent, comprehensive, and varied at the local, state, and national levels in the US that simply taking the Nate Silver model instead of our own Nate Silver-style model would have been fine for some of the mechanics (the website is beautiful too). Or Nate Cohn’s.

We just had to beef up the energy model at the state and county level, because we knew Nate Silver was going to be wrong (he at least had the right ideas and deductions, especially towards the end). And for a good discussion of why calling a binary outcome with ridiculously precise probability predictions is itself ridiculous, you can refer to a fellow trader’s version of such things in probability here, with Nassim Taleb.

When October 2015 rolled around, we started betting friends on the outcome, playing the prediction markets for fun and profit, and, most importantly, caring about the outcome and doing something about it on the basis of our predictions. Trump was going to win. Unless…

We showed up in battleground states to keep testing our energy model. Volusia was the most battleground county of the most battleground state, Florida. We manually trawled Facebook pages in Volusia to get a sense of the energy, not trusting our own automated data collection and analysis. We talked to undecideds and independents, observed tailgate parties, and anecdotally tried to understand the energy. We analyzed the observations of other folks, such as Zeynep Tufekci, who were formalizing some of the theses we were seeing. We canvassed in Loudoun County, Virginia to get a proprietary understanding of global correlations, energy, interaction, and other significant factors not captured by the entire IYI infrastructure. Trump dominated the Facebook conversation by shockingly high factors.
In every state, according to Facebook’s advertising tool, Trump “interested” more people than Hillary by factors ranging from 2x in very blue states to 12x in very red states.
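A minimal sketch of that state-by-state comparison. The audience counts came from Facebook’s ad-audience estimator, read off by hand; the states and numbers below are hypothetical placeholders:

```python
# state: (trump_audience, clinton_audience) -- hypothetical "interested" counts
interested = {
    "California": (2_400_000, 1_200_000),
    "Ohio":       (1_800_000,   300_000),
    "Wyoming":    (  240_000,    20_000),
}
for state, (trump, clinton) in interested.items():
    # the ratio of "interested" audiences is the raw energy signal per state
    print(f"{state}: Trump/Clinton interest ratio = {trump / clinton:.1f}x")
```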

Ohio was an easy call. So were other correlated states like Pennsylvania, when one looked at the energy. We got Wisconsin wrong (nobody is perfect). We were confident, so confident in fact that we could call the entire election (and go all in) right after the first state was called (Indiana). Anything over 54% for Trump in Indiana meant a Trump win. It was obvious. To us.

Every few months, we shared our results with friends, and, in detail, with those who really cared about the outcome. I tweeted about it publicly.

Trump was going to win when, according to the IYIs, he had no chance of winning the Republican primaries; when he had a 30% chance of doing so; when he had a 10% chance of winning the presidency; and all the way to the day before, when he still had literally no chance of winning it. Nate Silver was actually saying the right things, except for his 25% probability for Trump. He at least had it much better than Sam Wang, supposedly the new Nate Silver. They should have been forced to either run for office on their numbers or at least trade with their life savings, and then share their probabilities with the world. Shoot, betting against the IYIs was such a profitable strategy that we bet on the Cubs after I noticed Nate’s probability against them, and tried a version of our energy model (momentum in this case, e.g., what’s the probability of winning your third game after two consecutive wins?) to solidify the bet. (Now, we were mostly fooling around; it would have been stupid of us to take this fleeting edge and go crazy.)
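The momentum check is simple enough to fit in a few lines. A sketch with a hypothetical win/loss sequence, not the actual Cubs data:

```python
def p_win_after_two_wins(results):
    """results: chronological sequence of 1 (win) / 0 (loss).
    Returns the empirical frequency of winning given two straight prior wins."""
    hits = trials = 0
    # slide a window of three consecutive games across the season
    for prev2, prev1, cur in zip(results, results[1:], results[2:]):
        if prev2 == 1 and prev1 == 1:  # two consecutive wins going in
            trials += 1
            hits += cur                # did the third game land a win?
    return hits / trials if trials else None

season = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1]  # hypothetical results
print(p_win_after_two_wins(season))
```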

Our models had Trump fluctuating between 52% and 55% all through our prediction period. Enough of an edge (significantly so) to trade. And enough warning for those who cared to perhaps change the outcome.
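To make “enough edge to trade” concrete, here is a minimal sketch of sizing such a bet with the Kelly criterion. The model probability and market price below are illustrative, not our actual numbers:

```python
def kelly_fraction(p_model, p_market):
    """Fraction of bankroll to stake on a binary contract priced at p_market,
    given the model's probability p_model."""
    b = (1 - p_market) / p_market  # net odds received per unit staked
    q = 1 - p_model
    return max(0.0, (b * p_model - q) / b)  # never bet when there is no edge

# e.g., model says 54% while the market prices the contract at 20 cents
print(kelly_fraction(p_model=0.54, p_market=0.20))
```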

And we shared our final call for Trump’s electoral vote count with Nate Silver a few days before the election.

298.

Trump got 304…