Robustness Tests for the Analysis of Observational Data
Inferences from quantitative empirical analyses depend on a correct specification of the data-generating process. If the model specification does not exactly match the 'true model', estimates are biased and inferences may be wrong. In the past, social scientists have tried to overcome these problems by calling for better theory, by developing model specification tests, or by devising clever research designs that aim at holding 'all other factors' constant. Robustness tests do not try to eliminate specification errors. They deal with model uncertainty by asking whether inferences are robust to realistic changes in the model specification. Accordingly, we define robustness as the stability of inferences to plausible changes in the model specification. This definition differs markedly from the use of the term in the literature, which has predominantly understood robustness as stability of point estimates. Yet inferences can be stable despite significant changes in point estimates. The project develops four types of robustness tests: model variation tests, randomized permutation tests, inferential limit tests, and placebo tests.
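A model variation test of the kind described above can be sketched as follows. This is a minimal illustration, not the project's own procedure: the simulated data, the variable names (`x`, `z1`, `z2`), and the choice of alternative control sets are all assumptions made for the example. The point is that robustness is judged by the stability of the inference (here, whether the effect of `x` remains positive and distinguishable from zero) across plausible specifications, not by the stability of the point estimate itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z1 = rng.normal(size=n)                  # plausible control (a confounder)
z2 = rng.normal(size=n)                  # alternative, irrelevant control
x = 0.5 * z1 + rng.normal(size=n)        # variable of interest
y = 1.0 * x + 0.8 * z1 + rng.normal(size=n)

def ols_ci(y, columns, col=1, z=1.96):
    """OLS with a normal-approximation 95% CI for one coefficient."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[col, col])
    return beta[col], beta[col] - z * se, beta[col] + z * se

# Model variation test: re-estimate under plausible alternative control sets.
specs = {"baseline": [x, z1], "drop z1": [x], "add z2": [x, z1, z2]}
results = {name: ols_ci(y, cols) for name, cols in specs.items()}

# The inference (a positive effect of x) is robust if every specification's
# confidence interval lies above zero -- even though the point estimates shift.
robust = all(lo > 0 for _, lo, _ in results.values())
```

Note that the "drop z1" specification omits a confounder and therefore yields a visibly different point estimate; under the definition used here, the inference can still count as robust so long as its direction and significance survive the specification change.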
The robustness project has its own webpage here.
Thomas Plümper 2014-2019