Why Media Polls Frequently Get It Wrong

Not long ago, a local TV station asked me to look at a political poll it had just commissioned. The poll showed a well-known incumbent unexpectedly in trouble, and the station wanted an outside review to make sure it wasn't missing something.

While I have commented frequently on media polls, it was the first time in 30 years of consulting that a news outlet contacted me to review a poll before running with it. The "top lines" – the numbers from all voters surveyed – accurately showed the incumbent had problems. But I only glanced at those numbers before turning to the "crosstabs" – the gray pages at the back of the poll that few people actually read.

Crosstabs are, in fact, the most important part of a poll. They are the statistical tables that break out how the various demographic groups responded to each question. Using that data, along with a knowledge of traditional voting habits and patterns, I showed the reporter that while the "top lines" accurately reported what respondents said, certain demographic groups were unlikely to behave according to those answers once they entered the privacy of the voting booth.

The station looked prophetic when it reported these anomalies alongside its poll, and just days later the election played out exactly as predicted.

Media poll data is frequently accurate, but it is frequently reported wrong. Why? Because the media typically reports the "top lines" of a poll without doing the hard analysis of comparing voter responses against voter history and other predictors of voter behavior.

For starters, polls are snapshots in time. A poll taken in September is not valid for predicting November. Remember the famous "Dewey Defeats Truman" Chicago Tribune headline that a victorious Harry Truman gleefully held aloft after the 1948 election?

The news media had been overly influenced by September polling – the major pollsters, convinced Dewey's lead was insurmountable, largely stopped surveying weeks before Election Day. It was also an era when polling techniques were still unsophisticated. Yet those numbers convinced the media that Truman would lose.

The polling ended before Truman crisscrossed the country on his famous whistle-stop train campaign and brought "home" Democratic voters whom the surveys had shown straying from their party. Dewey, equally convinced by the polls, curtailed his fall campaign trips and relied primarily on radio speeches to reach voters. Truman eked out a win, much to the Tribune's embarrassment.

Even today, when polling methodologies are far more refined, challenges remain in properly reading poll results. For instance, it may surprise you, but some people lie to pollsters.

Yet in spite of that, most methodologically sound polls are fairly accurate – provided you treat poll responses as just one input into discerning likely voter behavior. Even the best polls require sophisticated analysis to predict that behavior accurately.

In the 1998 Georgia gubernatorial race between Guy Milner and Roy Barnes, the Milner campaign conducted a poll showing its candidate with a seven-point lead in September. Everyone trumpeted it as major news.

Later that week, pollster John McLaughlin was in my party office and asked if he could look at the poll sitting on my desk. He, too, ignored the "top line" data at the front and flipped to the statistical tables at the back. After two or three minutes of examination, he announced, "Milner is down by five points."

How, I asked, could a professionally conducted, methodologically sound poll be off by 12 percentage points? It wasn't. The data was correct; the analysis was wrong.

"Look at the black voter percentages for Milner," he responded. "They're at 17%. Black voters won't vote that high for a Republican. By Election Day, their numbers will drop to the normal single-digit level, and if you extrapolate that across likely voters, Milner's down five to seven points."

Those were almost precisely the election results two months later. (Okay, in this case a September poll did predict November. And Milner's pollster probably understood the discrepancy but used the top lines to influence media coverage of the campaign.)
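To see the arithmetic McLaughlin was doing in his head, here is a minimal sketch of that kind of crosstab adjustment. Every figure in it is a hypothetical placeholder rather than the actual 1998 data, and a real analyst would also reweight each group's turnout share; it only illustrates the core mechanic of swapping a group's polled support for its historical support and recomputing the margin.

```python
# A minimal sketch of the crosstab adjustment described above.
# Every figure here is a hypothetical placeholder, not the actual 1998 data.

def margins(groups):
    """Return (polled, adjusted) margins for a two-candidate race.

    Each group dict holds:
      share      - the group's assumed share of likely voters
      polled_r   - Republican support reported in the crosstabs
      expected_r - Republican support that voter history suggests
    In a two-way race whatever doesn't go R goes D, so margin = 2*R - 1.
    """
    polled_r = sum(g["share"] * g["polled_r"] for g in groups)
    adjusted_r = sum(g["share"] * g["expected_r"] for g in groups)
    return 2 * polled_r - 1, 2 * adjusted_r - 1

electorate = [
    # Black voters: crosstabs say 17% for the Republican; history says ~7%.
    {"share": 0.25, "polled_r": 0.17, "expected_r": 0.07},
    # Everyone else: take the poll's reading at face value.
    {"share": 0.75, "polled_r": 0.65, "expected_r": 0.65},
]

polled, adjusted = margins(electorate)
print(f"Top-line margin: {polled:+.1%}")    # what the press would report
print(f"Adjusted margin: {adjusted:+.1%}")  # after the history check
```

In this toy version, a single group's correction moves the margin by 2 × share × (polled − expected) – here 2 × 0.25 × 0.10, or five points, shrinking a six-point lead to one. Layer in turnout reweighting and corrections to other groups, and the swing compounds; that is how a seven-point published lead can become a five-point deficit.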
 
To use polls accurately in campaigns, you must compare what voters tell pollsters against what they typically do on Election Day. That doesn't mean voters never break from their normal patterns in a given election; it means the analysis must identify the factors that would cause particular demographic groups to depart from those patterns. A properly analyzed poll will find those factors when they exist.

This kind of information is rarely found in the aggregated numbers the press typically reports, but it surfaces easily when you compare the poll's crosstabs against voter history and other past behavioral patterns for defined voter groups – white men, black women, college-educated Democratic women, blue-collar Republican men, and so on.
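For those inclined to mechanize that comparison, a sketch like the following could flag the anomalies worth a second look. The group names echo the examples above, but every baseline and poll number is an invented placeholder; in practice the baselines would come from past exit polls and voter files.

```python
# A sketch of flagging crosstab anomalies against historical baselines.
# All baseline and poll numbers are invented placeholders for illustration.

# Hypothetical historical Republican support by voter group.
HISTORICAL_R = {
    "white men": 0.60,
    "black women": 0.05,
    "college-educated Democratic women": 0.10,
    "blue-collar Republican men": 0.85,
}

def flag_anomalies(crosstabs, threshold=0.05):
    """Return the groups whose polled support strays from history
    by more than `threshold` - the numbers worth a second look."""
    return {
        group: (polled, HISTORICAL_R[group])
        for group, polled in crosstabs.items()
        if abs(polled - HISTORICAL_R[group]) > threshold
    }

# A hypothetical poll's crosstabs.
poll = {
    "white men": 0.48,  # 12 points under the historical norm
    "black women": 0.06,
    "college-educated Democratic women": 0.11,
    "blue-collar Republican men": 0.83,
}

for group, (polled, hist) in flag_anomalies(poll).items():
    print(f"{group}: polled {polled:.0%} vs. historical {hist:.0%}")
```

A flag alone isn't a conclusion. Sometimes a group really is breaking from its pattern; the analyst's job is to figure out which flags reflect genuine shifts and which will fade by Election Day.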

At any given moment there may be valid reasons for Republican men to be upset with a Republican incumbent – and that was the case in the TV poll I reviewed. But history shows that demographic is among the groups least likely to vote Democratic in November.

They almost always "come home" on Election Day. I pointed that out in my off-camera analysis of the poll, and, sure enough, when they voted, their anger took a back seat to partisan loyalty.

Ditto for black voters. As the Milner example showed, they, too, almost always come home in November, meaning a 90-plus percent Democratic vote.

Barring a watershed election in which partisan loyalties shift, normal elections hinge on decisions by swing and independent voters, as well as on partisan turnout – which party's voters are most motivated to vote. While few call 2008 a "watershed," it did feature abnormal elements. Data from that election showed America's most habitual voters – older whites – staying home, while the country's least likely voters – the under-30 age group – came out in droves. Why? McCain never excited his base, while Obama electrified America's youth. Shrewd pollsters were picking up those signals as the campaign closed.

These are the anomalies pollsters look for to arrive at accurate predictions, and they are the factors often overlooked in media polls – and in poorly drawn campaign polls, too.

So, the next time you see a media poll that misses the mark in an election, you'll look like a Frank Luntz-ian genius at the Waffle House when you tell everyone that "the prediction was way off base because the media ignored some anomalies in the crosstabs when they reported their top lines."

While no one there will understand a word of what you just said, they will all nod in wonder at your insights. It might even get you an election-morning invitation to the local Optimist Club every four years to predict the presidential outcome.

At least, it worked for me.
