Rating Presidential Candidates by the Politifact Heuristic

It’s really difficult to be so immensely knowledgeable about politics that you can tell whether a given candidate is being truthful. It’s even more difficult when compounded by the existence of many candidates and the wide variety of things they say, especially in a political climate where the off-the-cuff sound bite matters more than careful and considered research. I study political science quite intensely in college and pay close attention to the news, and even I am nowhere near able to evaluate every candidate.

How are we, even those of us less specialized in politics, supposed to cut through this clutter and make sure our favourite candidate is truthful? It’s not for lack of trying, because it truly is of immense importance to the functioning of our country that the person we elect is competent enough to lead and does not misinform the populace.

Luckily for us, we can rely on what are called political heuristics.

 

Using Heuristics

When solving a problem (anything from determining whether a number is prime to figuring out which candidate to vote for), ideally we want to perform a complete and thorough analysis, arriving at the answer that is most likely to be correct. However, there are many cases in which we simply do not have the time or resources to complete the calculation as thoroughly as possible, and thus will want to approximate a reliable judgment with a quicker and less costly approach.

Such approximations are called heuristics. Heuristics are thus a trade-off, sacrificing reliability and thoroughness to get speed and ease of calculation. A heuristic could perhaps be considered something like an educated guess, except a bit more so.
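To make the trade-off concrete for the primality example above, here’s a toy Python sketch (purely illustrative, and nothing to do with politics yet): an exhaustive trial-division check versus a quick Fermat-style test that is much faster but can occasionally be fooled by rare composite numbers.

    import random

    def is_prime_thorough(n):
        """Exact but slow for large n: try every divisor up to sqrt(n)."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def is_prime_heuristic(n, trials=5):
        """Fast probabilistic check; can wrongly say 'prime' for rare composites."""
        if n < 4:
            return n in (2, 3)
        for _ in range(trials):
            a = random.randrange(2, n - 1)
            if pow(a, n - 1, n) != 1:
                return False
        return True  # probably prime

    print(is_prime_thorough(104729), is_prime_heuristic(104729))  # True True

The heuristic version gives up a guarantee of correctness in exchange for speed — exactly the bargain described above.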

Heuristics are used all the time in making political decisions, as well as in other parts of daily life. Often we don’t have nearly enough time or expertise to calculate the full impact of… say Ballot Issue #2… on ourselves and the entire country over the next decade or so. So instead we rely on heuristics, and assume that beneficial legislation is the kind that increases our ability to do things, increases our access to goods and services, costs us less money, reduces discrimination, etc.

Yet even evaluating an individual issue on these factors is often still a calculation we cannot thoroughly perform, so we use further heuristics, such as trusting experts. Say… Paul Krugman… tells us that voting “yes” on this issue will increase our access to goods and services and cost us less money, and since we believe Paul Krugman (he does have a Nobel prize, after all!), we decide to vote yes on this issue. This is the expert advice heuristic.

 

The PolitiFact Heuristic

But what are we to use as a heuristic for evaluating how truthful a Presidential Candidate is? Well, we could attempt to construct a giant collection of every statement each candidate has made, and then evaluate each statement based on whether it is true or false, and then compare candidates to each other… or we can rely on the data of other people who have done just that.

Enter PolitiFact. PolitiFact is a website located at politifact.com that aims to “fact-check statements by members of Congress, the White House, lobbyists and interest groups”. Run by the St. Petersburg Times, an independent newspaper, PolitiFact elaborates on their history and promises “that no one is behind the scenes telling us what to write for someone else’s benefit. We are an independent, nonpartisan news organization. We are not beholden to any government, political party or corporate interest. We are proud to be able to say that we are independent journalists. And for that, we thank Nelson Poynter.”

Thus we probably have enough information to establish PolitiFact as reasonably trustworthy, and a sufficiently reliable source of information that we can draw upon it to approximate knowledge about the trustworthiness of candidates in a heuristic. But what knowledge are we drawing upon?

 

Truth-O-Meter: Grading Statements

Enter the Truth-o-Meter. The meter rates a statement made by a politician on a six-point scale: True, Mostly True, Half True, Mostly False, False, and Pants on Fire. They explain their scale as follows:

TRUE – The statement is accurate and there’s nothing significant missing.

MOSTLY TRUE – The statement is accurate but needs clarification or additional information.

HALF TRUE – The statement is partially accurate but leaves out important details or takes things out of context.

MOSTLY FALSE – The statement contains an element of truth but ignores critical facts that would give a different impression.

FALSE – The statement is not accurate.

PANTS ON FIRE – The statement is not accurate and makes a ridiculous claim.

Every statement checked is given one of these six grades along with a somewhat thorough and verifiable justification.

 

Ratings: Possible Selection Bias

Don’t get too excited about these ratings, though, because it’s important to know PolitiFact’s standards for picking out statements to be graded. Obviously they don’t grade every sentence each candidate says — if Rick Perry conceded that Texas was indeed a state in the United States or embarrassedly admitted that Barack Obama was indeed our current President (at least as of the time of this upcoming election), he would be deserving of a TRUE rating, but the statement wouldn’t be checked or included in his record because it isn’t sufficiently significant.

Instead, Politifact only checks specific statements, saying “[b]ecause we can’t possibly check all claims, we select the most newsworthy and significant ones”. Explained in further detail:

In deciding which statements to check, we ask ourselves these questions:

  • Is the statement rooted in a fact that is verifiable? We don’t check opinions, and we recognize that in the world of speechmaking and political rhetoric, there is license for hyperbole.
  • Is the statement leaving a particular impression that may be misleading?
  • Is the statement significant? We avoid minor “gotchas” on claims that obviously represent a slip of the tongue.
  • Is the statement likely to be passed on and repeated by others?
  • Would a typical person hear or read the statement and wonder: Is that true?

Thus it’s important to know that there will be a selection effect making our heuristic less reliable, since we only get a picture of a particular kind of statement politicians make — the ones that stand out — rather than of all their statements.

Additionally, PolitiFact has only been scoring statements since 2008, so Bill Clinton’s record here does not include the lies for which he was impeached (though not removed from office).

Thus a politician could be factually incorrect on matters of less significant importance and not get called out on it. Though, perhaps we wouldn’t really care, and it only really matters that the politicians are called out on the big stuff.

 

Results: Parsing the PolitiFact Heuristic

Using the Truth-o-Meter and PolitiFact’s journalism, we can then form a perhaps semi-reliable comparison of the truthfulness of each candidate based on how many of their checked statements are considered true versus considered false. This is the PolitiFact Heuristic — instead of measuring the truthfulness of each candidate ourselves by directly measuring what everyone says, we instead defer to an expert and tabulate based on their data.

This PolitiFact approach has been considered by a few of my friends in discussions about candidates in political science classes, and an approach very similar to the one I propose seems to have been employed by the statistician Nate Silver in his essay “A Look at PolitiFact Grades of Candidates”, written for FiveThirtyEight.

Now, given all of this, how are the current Presidential candidates faring?

 

Candidate Scores

Looking at the tables provided for each candidate, we arrive at the following scores for all the current Republican Presidential candidates, plus Democratic Presidential candidate and current President Barack Obama, and ex-President Bill Clinton as a potentially illuminating control:

Here, we see many candidates are prone to making a decent number of false statements along with true ones. But it’s a bit difficult to compare candidates unless we convert everyone’s ratings into a percentage of all their rated statements. For instance, Herman Cain has three statements rated Mostly True out of 22 total rated statements, making his Mostly True rating 13.64%. Once we do that conversion, the table looks like this:
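(For the computationally inclined, the conversion is just each count divided by the total number of rated statements. Here’s a minimal Python sketch — the Mostly True count of 3 out of 22 comes from the Herman Cain example above, but the other counts are invented for illustration and are not PolitiFact’s actual figures.)

    # Convert raw Truth-o-Meter counts into percentages of all rated statements.
    # Only "Mostly True = 3" and "total = 22" come from the text above;
    # the rest are made up purely to show the arithmetic.
    cain_counts = {
        "True": 3,
        "Mostly True": 3,
        "Half True": 5,
        "Mostly False": 4,
        "False": 5,
        "Pants on Fire": 2,
    }

    total = sum(cain_counts.values())  # 22
    percentages = {rating: 100 * count / total for rating, count in cain_counts.items()}

    print(round(percentages["Mostly True"], 2))  # 13.64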

 

Ranking Candidates by Percentages

Now there’s a couple ways we could take this. We could choose to order candidates by the amount of true statements they made (in percent):

Or, perhaps you could order the candidates by the amount of Pants on Fire statements they made (in percent):
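(Both orderings are just a one-line sort. A quick sketch with invented percentages — not PolitiFact’s real numbers — shows that the two approaches can rank the very same people differently.)

    # Illustrative percentages only, to show the two possible orderings.
    candidates = {
        "Candidate A": {"True": 25.0, "Pants on Fire": 8.0},
        "Candidate B": {"True": 10.0, "Pants on Fire": 2.0},
        "Candidate C": {"True": 18.0, "Pants on Fire": 5.0},
    }

    # Order by percentage of True statements, most truthful first.
    by_true = sorted(candidates, key=lambda c: candidates[c]["True"], reverse=True)

    # Order by percentage of Pants on Fire statements, fewest first.
    by_pof = sorted(candidates, key=lambda c: candidates[c]["Pants on Fire"])

    print(by_true)  # ['Candidate A', 'Candidate C', 'Candidate B']
    print(by_pof)   # ['Candidate B', 'Candidate C', 'Candidate A']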

 

Ranking Candidates by Categories

But, given the differences in ratings based on those two approaches, perhaps we need a different approach. How about the approach used by Nate Silver in his article — combine True, Mostly True, and Half True in one category, and Mostly False, False, and Pants on Fire in another category?
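A rough sketch of that grouping, again with invented counts rather than PolitiFact’s real data:

    # Nate Silver's grouping: lump the six ratings into "True-ish" and "False-ish".
    TRUEISH = ("True", "Mostly True", "Half True")
    FALSEISH = ("Mostly False", "False", "Pants on Fire")

    def trueish_share(counts):
        """Fraction of a person's rated statements that fall in the True-ish bucket."""
        trueish = sum(counts.get(r, 0) for r in TRUEISH)
        total = trueish + sum(counts.get(r, 0) for r in FALSEISH)
        return trueish / total if total else 0.0

    example = {"True": 3, "Mostly True": 3, "Half True": 5,
               "Mostly False": 4, "False": 5, "Pants on Fire": 2}
    print(round(trueish_share(example), 2))  # 0.5 -- True-ish exactly half the time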

This seems to be a good measure of the truthfulness of a candidate, though it isn’t a flattering revelation — more than half of the Presidential candidates lie (or are misinformed) more than half of the time (in the types of statements surveyed)!

However, though the categorization makes sorting and ranking straightforward, perhaps it isn’t entirely fair, because it would score equally a candidate who made only Mostly False statements and a candidate who made tons of brazen Pants on Fire statements.

How do we solve this?

 

Ranking the Candidates by Truth-Points

Another solution is suggested in the comments of Nate Silver’s article by a commenter named Michael Weiss. He suggests:

Of course, if you wanted to ignore the sampling bias (which you’re kind of forced to do if you’re going to use these data at all), it should probably be weighted, with perhaps a scale like this: PoF = -3, F = -2, MF = -1, HT = 0, MT = 1, T = 2
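As a concrete sketch of how that weighting might be applied — the weights are Weiss’s, but the statement counts below are invented for illustration, not PolitiFact’s data:

    # Weighted "truth-points" score from Michael Weiss's suggested scale.
    WEIGHTS = {"Pants on Fire": -3, "False": -2, "Mostly False": -1,
               "Half True": 0, "Mostly True": 1, "True": 2}

    def truth_points(counts):
        """Sum of weight * number of statements over all six ratings."""
        return sum(WEIGHTS[rating] * n for rating, n in counts.items())

    example = {"True": 3, "Mostly True": 3, "Half True": 5,
               "Mostly False": 4, "False": 5, "Pants on Fire": 2}
    print(truth_points(example))  # 3*2 + 3*1 + 5*0 - 4*1 - 5*2 - 2*3 = -11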

Using his method, we get the following scores for our candidates (and Bill Clinton control):

 

Ranking Even More People

Now, just so we have more data to use and analyze, what if we included Fox News pundit Bill O’Reilly; economist, New York Times columnist, and personal hero Paul Krugman; Vice President Joe Biden; House Minority Leader Nancy Pelosi; Speaker of the House John Boehner; Senate Minority Leader Mitch McConnell; Senate Majority Leader Harry Reid; ex-Vice-Presidential candidate and professional gadfly Sarah Palin; Fox News pundit Glenn Beck; Washington Post columnist George Will; and MSNBC pundit Rachel Maddow? (Others could not be included because they had fewer than ten graded statements, which I felt was too low.)

How do all these people fare when scored by Truth-Points and by the True-ish/False-ish methods? Turns out, they fare and rank something like this:

 

What Conclusions Can We Draw?

Before talking about what conclusions we can draw, it’s worth noting what conclusions we cannot draw. I’d warn everyone looking at this data about three things:

First, comparisons may be invalid because of statement sample biases. Again, it may be that 90% of all the statements Glenn Beck makes are trustworthy and accurate; it’s just that, of the specifically interesting things he says, his statements appear to be largely untrustworthy and inaccurate.

Second, comparisons may be invalid because of person inclusion biases. It certainly does look like Democrats are more truthful than Republicans, but this is probably an artifact of the large number of currently campaigning Republicans being compared against a large number of non-campaigning Democrats, and of the propensity of people to utter falsehoods under the pressures of the campaign trail more than in other settings.

Third, generalizations may be invalid because of an insufficient sample. It certainly looks like columnists (George Will and Paul Krugman) are far more trustworthy than pundits (Bill O’Reilly, Glenn Beck, and Rachel Maddow) and politicians (nearly everyone else), but the fact that we only included two famous columnists out of hundreds might mean that we have an unrepresentative sample, and that columnists in general are largely inaccurate.

 

So what does this add up to?

When we acknowledge that we’re using a heuristic — and thus may not have accurate results, but rather results we consider accurate enough — we might not want to consider supporting or voting for Michele Bachmann or Herman Cain, on the basis of their continual habit of lying or being misinformed.

Likewise, we might want to lend extra support to Republican candidates such as Jon Huntsman and Ron Paul for being especially reliable compared to the others.

 

Additionally, we can notice just how untrustworthy politicians seem to be overall — the median politician was a literal coin-flip on whether he or she spoke a “True-ish” statement or not, lying or being misinformed 50% of the time, and the median politician had a negative truth-points score.

We should keep the selection effect in mind and recognize that politicians were only scored on interesting, significant, and potentially dubious statements — but we still have some basis for outrage regarding truth in politics.

But do keep those warnings in mind… as in all other sciences, be careful here not to go beyond what your evidence actually supports.

-

I now blog at EverydayUtilitarian.com. I hope you'll join me at my new blog! This page has been left as an archive.


2 Comments

  1. #1 Garren says:
    23 Nov 2011, 5:28 pm  

    If you had suggested this, and not done it, I would have been compelled to do so.

    Thanks!

  2. #2 Trevon Cullinan says:
    26 Jan 2012, 9:50 am  

    Appreciate you sharing, great blog post. Thanks again. Awesome.
