Significant Sound and Fury, Signifying Nothing
Andrew Gelman seizes on what bothers me about so much current political science, particularly studies that focus on attitudes:
This becomes particularly clear when we look at work along these lines in political science. If, for example, subliminal smiley faces have big effects on political attitudes, then this should cause us to think twice about how seriously to take such attitudes, no? Or if men’s views on economic redistribution are in large part determined by physical strength, or if women’s vote preferences are in large part determined by what time of the month it is, or if both sexes’ choice to associate with co-partisans is in large part determined by how they smell, then this calls into question a traditional civics-class view of the will of the people.
Luckily (or, perhaps, depending on your view, unluckily), the evidence for the empirical claims in the above paragraph ranges from weak to nonexistent.
But my point is that there is a wave of research, coming from different directions, but all basically saying that our political attitudes are shallow and easily manipulated and thus, implicitly, not to be trusted. I don’t find this evidence convincing and, beyond this, I’m troubled by the eagerness some people seem to show to grab on to such claims, with their ultimately anti-democratic implications.
There’s this idea – a powerful one – that people have attitudes, but they also have “non-attitudes”. Surveys often force people to answer questions about which they have little or no opinion. For example, I simply don’t have an opinion on the prospects of the Boston Red Sox next year. But people don’t like saying “no opinion”, and so will often just answer something. These are the opinions that are most changeable, because they are less “opinions” than randomly produced responses. And so when surveys show strong effects from treatments like smiley faces, most of the movement is coming out of those most weakly held opinions. Almost definitionally, the stronger the treatment effect these studies show, the less important the thing they are measuring.
The proliferation of these types of studies produces an awful lot of papers with a well-designed experiment, a counterintuitive result, and the nagging question, “So…did we actually learn anything important?”