In my travels this morning, I spied an article at New Republic about Public Policy Polling entitled "There's Something Wrong With America's Premier Liberal Pollster – The problem with PPP's methodology." The article points out that PPP has been deleting questions to avoid criticism, following no consistent methodology, using bad sampling, and, in essence, weighting questions to get the results they like.
Excerpts with emphasis added:
After examining PPP’s polls from 2012 and conducting a lengthy exchange with PPP’s director, I’ve found that PPP withheld controversial elements of its methodology, to the extent it even has one, and treated its data inconsistently. The racial composition of PPP’s surveys was informed by whether respondents voted for Obama or John McCain in 2008, even though it wasn’t stated in its methodology. PPP then deleted the question from detailed releases to avoid criticism. Throughout its seemingly successful run, PPP used amateurish weighting techniques that distorted its samples—embracing a unique, ad hoc philosophy that, time and time again, seemed to save PPP from producing outlying results. The end result is unscientific and unsettling.
It wasn’t just Georgia. PPP’s polls display the same conveniently self-correcting pattern: More weight given to the groups that push PPP’s result closer to the expected outcome throughout 2012. When PPP’s polls otherwise would have shown the president doing worse than expected, there tended to be fewer white voters, which had the effect of bringing PPP closer to expectations.
In many ways, PPP’s technique has the strengths and weaknesses of weighting by party ID, the widely criticized approach employed by some of PPP’s most vocal critics—the pro-Romney poll “unskewers” of 2012. Like party ID, how you voted in the last election correlates well with how you’ll vote this time. The issue is whether the pollster knows the ratio of former Obama and McCain supporters. Unfortunately, there’s no easy way to know whether the poll should be weighted to an electorate where people say they voted for Obama by 7.2 points or 12 points or nothing at all.
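The mechanics of this kind of weighting are simple to sketch. Here's a minimal illustration of reshaping a sample to a chosen recalled-vote margin; the sample counts and the +7.2 target are made-up numbers for illustration, not PPP's actual figures:

```python
# Hypothetical sketch of weighting by recalled 2008 vote.
# All numbers below are illustrative assumptions, not PPP's data.

def vote_weights(sample_counts, target_shares):
    """Per-group weights that reshape raw sample shares to target shares."""
    total = sum(sample_counts.values())
    return {group: target_shares[group] / (sample_counts[group] / total)
            for group in sample_counts}

# Raw sample: 55% of respondents say they voted Obama, 45% McCain (Obama +10).
sample = {"obama": 550, "mccain": 450}

# Suppose the pollster decides the "real" 2008 electorate was Obama +7.2
# (53.6% to 46.4%). That choice is exactly the unverifiable judgment call
# the article describes.
target = {"obama": 0.536, "mccain": 0.464}

weights = vote_weights(sample, target)
# Obama respondents get a weight just under 1, McCain respondents just over 1,
# pulling the weighted sample toward the chosen margin.
```

The arithmetic is trivial; the problem the article identifies is that nothing pins down the target. Weight to Obama +12 and the poll leans one way; weight to +7.2 and it leans another, and the pollster can pick after seeing the raw data.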
Jensen said that using respondents’ 2008 preferences to project the racial composition of the electorate wasn’t as bad as it looks: “It’s not us saying ‘Obama’s doing well enough against Romney so we don’t need to weight African Americans as high,’” he said. “It’s ‘we don’t want to put out a sample that over-represents people who voted for Obama last time and give Republicans something to attack us about.’”
But there’s an obvious problem with that explanation: The detailed releases accompanying PPP surveys rarely included any mention of a question about the 2008 presidential election, so there was nothing for Republicans to attack them about.
Give the opposition, the Republicans, nothing to attack. That was the whole point. PPP might consider changing its name to OFA. To borrow from Instapundit, it looks like these inconsistencies and methodology issues are not bugs to PPP, but features.
Given the details in the New Republic article and the news that PPP withheld a poll showing Colorado gun grabbing legislators likely to be recalled, one should question what else they are withholding and whether their most recent NC poll contains hidden Easter eggs.
Flashback to just last month when PPP did a poll trumpeting the decline of Governor Pat McCrory and a blow to conservatives. I pointed out the sample was a wee bit slanted at +9 Democrat and that this poll was clearly more pointed at moderate Democrats:
Q19 If you are a Democrat, press 1. If a Republican,
press 2. If you are an independent or identify
with another party, press 3.
Nice +9 Democrat Sample there!
The poll is located here: http://www.publicpolicypolling.com/pdf/2011/PPP_Release_NC_814.pdf
I tweeted asking why the PDF was labeled 2011, but they decided to just retweet me instead of answering me.
Oh well. Whatever.
They never did answer me. Maybe they deleted that question too.