It’s been less than a day since President Donald Trump’s Media Survey went live, and there are already several analyses available, complete with comedic fodder, depending on which side of the proverbial (and literal?) fence you align yourself with.
I’m going to #resist the temptation to include all of the personal and political reasons I do not agree with the survey and focus on the methodological reasons that this survey cannot be used as evidence of fact or opinion.
In short, it violates nearly every rule of survey construction in the book.
In order to be generalizable to any population, a survey must take a random sample. In other words, if you want to draw conclusions about the entire American population, each American must have an equal and known chance of being selected for the survey. Random selection reduces sampling bias – it’s a step taken to ensure that survey respondents aren’t disproportionately of one mindset.
When a survey is voluntary, it’s not random; people who have strong opinions on either side are most likely going to be the ones who respond to a voluntary survey. When a survey is posted on the GOP website, it’s most certainly going to be biased in favour of Republicans. Even the way it is worded assumes that Republicans are the respondents.
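The self-selection problem above is easy to see in a quick simulation. This is a minimal sketch with hypothetical numbers (a population split 50/50 on some question, with supporters far more likely to answer a voluntary survey), not data from the actual Trump survey:

```python
import random

random.seed(0)

# Hypothetical population: exactly half approve (True) of some policy.
POPULATION_SIZE = 100_000
population = [i < POPULATION_SIZE // 2 for i in range(POPULATION_SIZE)]

# Random sample: every person has an equal, known chance of selection.
random_sample = random.sample(population, 1_000)
random_estimate = sum(random_sample) / len(random_sample)

# Voluntary sample: assume approvers respond 90% of the time,
# everyone else only 10% of the time (assumed rates for illustration).
voluntary_sample = [person for person in population
                    if random.random() < (0.9 if person else 0.1)]
voluntary_estimate = sum(voluntary_sample) / len(voluntary_sample)

print(f"true approval:       50.0%")
print(f"random sample:       {random_estimate:.1%}")
print(f"voluntary responses: {voluntary_estimate:.1%}")
```

The random sample lands close to the true 50%, while the voluntary responses come in around 90% approval – same population, wildly different conclusion, purely because of who chose to answer.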
Survey respondents are far more likely to give their true feelings when they know their responses are anonymous. Considering Trump’s propensity to threaten those who speak freely (while claiming to act in the name of free speech), the fact that this survey requires a name, email address and zip code to be submitted introduces sample bias as well.
Loaded questions contain assumptions. The question below is a handy example.
Double (or more) barreled questions
A question should ask about one issue so that responses accurately indicate whether respondents agree with that specific issue. In the example below, respondents may feel that the mainstream media tells the truth about positions and not actions, or vice versa.
Leading questions are those built to steer respondents toward one answer. The below question is an example of a leading question because it gives Republican-slanted examples.
Bonus: this is also a triple-barreled question. Though it asks about issues in general, a proper survey would either ask about each issue separately or not give examples at all.
It is best practice for surveys to frame questions positively so people do not get confused about a double-negative response. It’s survey construction 101 to avoid using “not” in survey questions. A more methodologically proper survey would ask “has the media done its due diligence to expose ObamaCare’s failures?”
Bonus: this question is also loaded because it assumes failures, as well as leading because it uses the word “many.”
What does this even mean?
On a related note, AVClub.com earlier reported on the Trump survey and pointed out the poor grammar of question 23:
The question has since been changed to make sense. However, changing a question after a survey has already been released and responded to is a methodological fail.
If President Trump really wanted to be transparent about the survey, results would be readily available once the survey form is submitted. Instead, respondents are taken to a page soliciting donations to help fight (in an unnamed way) the exact issue surveyed. Hmmm, do you think the GOP has pinpointed its audience for this survey?
In summary, there are numerous reasons why this survey is flawed in its method and can in no way be used as evidence of how Americans or even Republicans feel.
Survey administration is a skill and an art, but I suppose Trump is still a novice at this whole administration thing.