06/12/2013

The Law of Yawn

Regular readers may remember a post about the identification of causal effects that I wrote in August. Here's the full text:
The better a model is at identifying a causal effect, the less likely it is that the effect is going to look substantial. That's because of (i) publication bias, and (ii) how the world works.
You may note that that text contains zero examples - it's just a general impression plus some armchair theorizing. Thankfully, Steve Sailer provides an example from commercial marketing research:
In fact, one side effect of bad quantitative methodologies is that they generate phantom churn, which keeps customers interested. For instance, the marketing research company I worked for made two massive breakthroughs in the 1980s to dramatically more accurate methodologies in the consumer packaged goods sector. Before we put to use checkout scanner data, market research companies were reporting a lot of Kentucky windage. In contrast, we reported actual sales in vast detail. Clients were wildly excited ... for a few years. And then they got kind of bored.

You see, our competitors had previously reported all sorts of exciting stuff to clients: For example, back in the 1970s they'd say: of the two new commercials you are considering, our proprietary methodology demonstrates that Commercial A will increase sales by 30% while Commercial B will decrease sales by 20%.

Wow.

We'd report in the 1980s: In a one year test of identically matched panels of 5,000 households in Eau Claire and Pittsfield, neither new commercial A nor B was associated with a statistically significant increase in sales of Charmin versus the matched control group that saw the same old Mr. Whipple commercial you've been showing for five years. If you don't believe us, we'll send you all the data tapes and you can look for yourselves.

Ho-hum.
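The contrast between the two methodologies is easy to reproduce in a toy simulation. Here's a minimal sketch in Python - all the numbers (purchase rates, the seasonal uptick, the panel size) are made up for illustration - of how a sloppy before/after comparison manufactures an exciting "lift" for a commercial whose true effect is zero, while a comparison against a matched contemporaneous control panel reports the boring truth:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000  # households per panel, as in the Eau Claire/Pittsfield example

# True state of the world: the new commercial does nothing. A seasonal
# uptick in purchases affects every household, whatever ad they saw.
lam, uptick = 2.0, 0.3  # weekly purchases per household (assumed values)
baseline = rng.poisson(lam, size=n)            # last quarter, before the uptick
control = rng.poisson(lam + uptick, size=n)    # this quarter, old Mr. Whipple ad
treatment = rng.poisson(lam + uptick, size=n)  # this quarter, new Commercial A

# Bad methodology: compare the treated panel to last quarter's baseline.
# The seasonal uptick gets credited to the commercial.
print(f"before/after 'lift': {(treatment.mean() / baseline.mean() - 1):+.0%}")

# Good methodology: compare to the matched contemporaneous control panel.
t, p = stats.ttest_ind(treatment, control)
print(f"vs. matched control: t = {t:.2f}, p = {p:.2f}")  # ho-hum, as it should be

The exciting number is pure artifact; the boring one is the truth. That, in a nutshell, is phantom churn.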
In the social sciences - and I would include marketing - there are probably few cases in which the effect of X on Y is genuinely zero. Just about everything influences everything else, if only in a roundabout way. But there's a flipside: the influence of any single factor is usually very small. A core reason for this is that people's personalities and behaviour are pretty stable, which is why the concept "personality" makes sense at all.

Of course, there are also Xs that have a large influence on Y. The problem is that researching these is, or soon becomes, pretty boring. In fact, when people ask "Did we really need a study for that?", they sometimes have a point. When an influence is large, it will usually (though not by logical necessity) be readily apparent. You don't need to be a social scientist to see that adolescents' friends influence their behaviour.

So, shut up shop? I think not. One, you do need a social scientist to tell you how large an obvious effect is. Two, the above allows for a sweet spot where effects are not obvious, but are large enough to detect. Three, and this is perhaps the most important point, it is a worthwhile endeavour to show that the effect of X on Y really is close to zero, contrary to what some people would have you believe. Especially if X costs money.
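A method note on point three: to show that an effect is close to zero, a non-significant p-value isn't enough - it may only reflect low power. What you want is an equivalence test. Here's a minimal sketch of the two one-sided tests (TOST) procedure in Python, again with made-up data; the equivalence margin itself is a substantive judgment about what counts as "close to zero":

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=4000)  # outcome, X present
y = rng.normal(loc=10.0, scale=2.0, size=4000)  # outcome, X absent
margin = 0.1  # effects inside (-0.1, +0.1) count as practically zero (assumed)

# TOST: separately reject "difference <= -margin" and "difference >= +margin".
# Shifting one sample by the margin turns each into an ordinary one-sided t-test.
p_lower = stats.ttest_ind(x + margin, y, alternative="greater").pvalue
p_upper = stats.ttest_ind(x - margin, y, alternative="less").pvalue
p_tost = max(p_lower, p_upper)
print(f"TOST p = {p_tost:.3f}")  # small p: the effect really is inside the margin

If the client is spending money on X, a result like this is worth far more than another exciting phantom effect.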
