Huron’s Research Slogans
Motivated by truth, with no hope of Proof
There is no inductive proof. We are not in the business of proving something to be true. We would love to know the truth (if that exists), but we understand that we could never be sure of the truth, even if we had it. The best we can hope for is that what we observe is consistent with our theories.
The best research invites failure.
Give the world an opportunity to tell you that you’re wrong. (This is the essence of good research.)
We invite failure by testing predictions.
Test an idea by making a prediction, and then determine whether the observations are consistent with the prediction.
We recognize failure by drawing a line in the sand.
In order to make failure obvious, establish a criterion in advance that says, “If the evidence doesn’t cross this line, then I’ll admit failure.” In statistics, the line is the significance level (for example, α = 0.05).
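Drawing the line in advance can be sketched with a hypothetical coin-flip study (the numbers, 100 flips and 60 heads, are invented for illustration): fix the significance level α before looking at the data, then check whether the observed p-value crosses it.

```python
from math import comb

# Draw the line in the sand BEFORE collecting data.
ALPHA = 0.05

def binomial_tail(n, k):
    """P(X >= k) for X ~ Binomial(n, 0.5): the chance of seeing
    k or more heads in n flips if the coin is actually fair."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Hypothetical observation: 60 heads in 100 flips.
p_value = binomial_tail(100, 60)

if p_value < ALPHA:
    verdict = "consistent with the prediction (not obviously wrong)"
else:
    verdict = "failed to cross the line; admit failure"

print(round(p_value, 4), verdict)
```

Note that a result crossing the line does not make the hypothesis true; it only means the hypothesis survived an opportunity to fail.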
Aim not to be right, but to be not not right.
Instead of establishing The Truth, our more modest aim is to be not obviously wrong. When our observations turn out to be consistent with our hypothesis, we don’t claim that we are right; instead, the observations suggest that our hypothesis may not be wrong.
Test hypotheses by operationalizing terms.
Translate all of the terms in a hypothesis into concrete things you can measure. We can’t directly measure concepts like “sadness.” We have no choice but to measure things using imperfect rulers.
Operationalize, but don’t essentialize.
All concepts are inherently enigmatic and fuzzy. Terms like “melody,” “listen,” or “note” can never be pinned down. It is impossible to provide comprehensive definitions or grasp the essence of a concept. We are forced to approximate or estimate concepts through operational definitions, but don’t confuse the operational definition with the concept itself, and don’t assume concepts are “real.”
Compare, compare, compare.
Contrast a “treatment” condition with one or more “control” conditions.
The rhetoric of science is the rhetoric of prophecy.
People are most impressed when someone accurately foretells the future. Science is a form of rhetoric whose persuasive power resides in the testing of predictions. The rhetorical power of science comes not from scholars assembling evidence, but from scholars testing predictions.
Hindsight is 20/20.
Most things seem obvious in retrospect (hindsight bias). Even when the results aren’t obvious, humans are enormously gifted at coming up with explanatory accounts; we can make up a story for just about any set of data. Post hoc theories don’t have the same plausibility as a priori theories. The true test is making up the story first (i.e., prediction)! Theorize first, then collect your data.
Reductionism is a method, not a belief.
We simplify problems, not because we believe problems to be simple, but because we believe problems to be complex. Restricting our gaze is a useful strategy for discovery.
Don’t try to explain the whole world at once.
Manipulate one variable at a time. Seek simplicity, even as you distrust it.
Generalize, but don’t universalize.
When presenting your results, frame them narrowly rather than broadly.
Avoid chronic hypothesislessness.
Exploratory and descriptive studies are important, but you can’t invite failure without testing predictions.
Beware of the post hoc theory.
The scholar who only offers theories after looking at the evidence is a scholar who is never wrong. Post hoc theorists don’t allow the world to tell them when their ideas are problematic.
From Question to Theory to Conjecture to Hypothesis to Protocol.
Start with a question, propose an explanatory theory, derive a conjecture, refine the conjecture into a hypothesis, then operationalize the terms of the hypothesis into a protocol. The protocol provides an action plan for how to carry out the research.
No causation without manipulation.
Causality cannot be inferred unless you manipulate one of the variables. The experiment is the only type of study in which it is possible to infer causality. Correlational studies don’t allow us to discount the possibility of a “third variable.”
Don’t get stuck with sticky data.
Seek data independence. Ideally, each piece of data should be gathered from a different source. (Collecting independent data is another way of minimizing the effect of unknown third variables.)
The law of large numbers does not apply to small numbers.
Pay attention to sample sizes. The smaller the sample size, the greater the variability.
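The link between sample size and variability can be shown with a small simulation (the die-rolling setup and the sample sizes 5, 50, and 500 are arbitrary illustrative choices): the spread of the sample mean shrinks roughly as 1/√n.

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is reproducible

def spread_of_sample_means(n, trials=2000):
    """Standard deviation across many sample means, each computed
    from n rolls of a fair six-sided die (population mean 3.5)."""
    means = [statistics.mean(random.randint(1, 6) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# Smaller samples produce a wider spread of sample means
# (the standard error is sigma / sqrt(n)).
for n in (5, 50, 500):
    print(n, round(spread_of_sample_means(n), 3))
```

A striking small-n result is therefore weak evidence: the same procedure would often produce equally striking results by chance alone.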
Always debrief.
Listen carefully to what people say about their experiences. Look for ways in which participants are misunderstanding the instructions. Be vigilant for possible demand characteristics — where the participant forms an idea about the experiment that confounds the results.
Make friends with a statistician.
Before you collect any data, talk with a statistician. Describe what you are planning to do and listen carefully to the advice. Statistical consultants are thrilled when people come and talk with them before the data are collected. Take advantage of their expertise.
Correct for multiple tests.
Each statistical test increases the likelihood of making a Type I (false positive) error. Repeating an experiment also increases the probability of a Type I error.
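How quickly the risk compounds can be computed directly: across k independent tests at level α, the chance of at least one false positive (the familywise error rate) is 1 − (1 − α)^k. A minimal sketch, with α = 0.05 chosen as a conventional example:

```python
# Familywise error rate: probability of at least one Type I error
# across k independent tests, each run at significance level alpha.
alpha = 0.05
for k in (1, 5, 20):
    fwer = 1 - (1 - alpha) ** k
    print(k, round(fwer, 3))  # risk grows rapidly with k

# One simple (conservative) remedy is the Bonferroni correction:
# test each of the k hypotheses at alpha / k instead of alpha.
k = 20
corrected = alpha / k
print(round(corrected, 4))
```

With 20 tests at α = 0.05, the chance of at least one spurious “significant” result is roughly 64 percent, which is why uncorrected multiple testing is so misleading.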
If you torture the data long enough it will confess to anything.
It’s not true that “statistics can prove anything,” but it is possible to manipulate data in ways that deceive yourself and others. Create a data analysis plan. Do not exclude outliers, normalize data, set conditions for excluding participants, or introduce post hoc tests without some principled prior reasoning. Be honest in reporting the analyses you carry out. Don’t hide multiple tests.