Meta-Analysis as Power, Not Neutral Synthesis
Meta-analysis is often presented as psychology’s most authoritative evidentiary tool. By aggregating results across studies, it promises to rise above idiosyncratic findings and reveal what the field truly knows. In this framing, individual studies are provisional, while meta-analytic conclusions carry a higher epistemic status. What is less frequently acknowledged is that meta-analysis is not merely a technical procedure. It is an exercise of power. Decisions about inclusion, exclusion, weighting, and interpretation shape what counts as knowledge, often more decisively than any single experiment.
The idealized image of meta-analysis is one of neutrality. Data are gathered, effects are pooled, and the truth emerges statistically purified. This image relies on a critical assumption: that the studies being aggregated are commensurable, that they are examining the same phenomenon in sufficiently similar ways. In psychology, this assumption is often tenuous. Constructs are defined flexibly, operationalizations vary widely, and contextual factors differ substantially across studies. Aggregation can therefore obscure conceptual heterogeneity rather than resolve it.
Jacob Cohen warned of this danger decades ago, noting that psychological constructs often lack the precision required for cumulative inference. Paul Meehl was even more direct. In his critique of “soft” psychological theories, Meehl argued that the field’s tolerance for vague constructs and weak tests made it difficult to interpret confirmatory evidence meaningfully. Meta-analysis, from this perspective, risks amplifying theoretical weakness by pooling results that were never conceptually aligned to begin with.
The authority of meta-analysis rests not only on statistical technique, but on gatekeeping. Analysts decide which studies count, which outcomes are comparable, and which moderators deserve attention. These decisions are necessarily interpretive. Inclusion criteria reflect judgments about methodological adequacy and theoretical relevance. Exclusion criteria reflect judgments about what can be safely ignored. Once formalized, these judgments become invisible, presented as technical necessities rather than as epistemic choices.
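A minimal sketch can make those choices visible again. The example below, written with invented effect sizes and sample sizes, pools the same hypothetical literature twice under a fixed-effect, inverse-variance model: once including every study, and once applying a plausible-sounding minimum-sample-size criterion. Both the numbers and the cutoff are assumptions chosen for illustration, not data from any real meta-analysis.

```python
import numpy as np

# Hypothetical study-level results: standardized effect sizes (Cohen's d) and
# per-group sample sizes. These numbers are invented for illustration only.
effects = np.array([0.45, 0.10, 0.60, 0.05, 0.30, -0.05, 0.50, 0.02])
n_per_group = np.array([20, 120, 25, 150, 40, 200, 30, 180])

# Approximate sampling variance of d for a two-group design with equal n per group.
variances = 2 / n_per_group + effects**2 / (4 * n_per_group)

def pooled_fixed_effect(d, v):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    w = 1 / v
    est = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    return est, se

# Criterion A: include every study.
est_all, se_all = pooled_fixed_effect(effects, variances)

# Criterion B: include only "adequately powered" studies; the cutoff is the analyst's call.
keep = n_per_group >= 100
est_big, se_big = pooled_fixed_effect(effects[keep], variances[keep])

print(f"All studies:   d = {est_all:.2f} (SE {se_all:.2f})")
print(f"n >= 100 only: d = {est_big:.2f} (SE {se_big:.2f})")
```

In this toy literature the pooled estimate shrinks from roughly 0.08 to roughly 0.02 once small studies are excluded. Neither figure is "the" effect; the cutoff is the interpretive act that the final forest plot never displays.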
The power of meta-analysis becomes especially apparent when it is used to settle debates. A meta-analytic conclusion can effectively close inquiry by declaring an effect small, unreliable, or absent. Yet such declarations often mask unresolved theoretical disagreement. When competing theories predict different patterns under different conditions, averaging across those conditions may produce a null effect that satisfies no theory but appears authoritative nonetheless. An effect that is positive in one context and negative in another averages out to nothing, even though nothing is not what either context shows.
The ego depletion literature again provides a revealing example. Large-scale meta-analyses were used to argue both for and against the existence of a depletion effect, depending on analytic decisions and inclusion criteria. Rather than clarifying the phenomenon, meta-analytic disagreement exposed how under-specified the underlying theory was. Aggregation did not resolve the issue; it displaced it.
Meta-analysis also interacts with publication bias in ways that are difficult to correct fully. Statistical techniques such as trim-and-fill and regression-based adjustments exist to estimate the influence of missing studies, but these corrections rest on assumptions, for instance that funnel-plot asymmetry reflects suppression rather than genuine heterogeneity, that may not hold in practice. More importantly, meta-analysis inherits the field’s incentive structures. If null results are underreported, the aggregated literature reflects that skew. The appearance of precision can obscure systematic absence.
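A small simulation can make the mechanism concrete. Everything in the sketch below is stylized and invented: the true effect is set to a modest 0.1, several hundred hypothetical studies are generated, and only those reaching significance in the expected direction are treated as published. It is a cartoon of selective reporting, not a model of any particular literature.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

TRUE_EFFECT = 0.1            # assumed population effect (standardized mean difference)
N_STUDIES = 500              # hypothetical studies conducted across the field
n_per_group = rng.integers(20, 80, size=N_STUDIES)

# Each study's estimate is the true effect plus sampling error.
se = np.sqrt(2.0 / n_per_group)
observed = rng.normal(TRUE_EFFECT, se)

# Stylized publication filter: only significant results in the predicted direction appear.
published = observed / se > 1.96

def pooled(d, s):
    """Fixed-effect, inverse-variance pooled estimate."""
    w = 1.0 / s**2
    return np.sum(w * d) / np.sum(w)

print(f"True effect:                       {TRUE_EFFECT:+.2f}")
print(f"Pooled over all studies conducted: {pooled(observed, se):+.2f}")
print(f"Pooled over published subset only: {pooled(observed[published], se[published]):+.2f}")
print(f"Proportion of studies published:   {published.mean():.0%}")
```

Because the filter keeps only the overestimates large enough to clear the significance threshold, the pooled estimate from the published subset lands well above the value that generated the data, and the tight confidence interval around it reports the precision of a biased quantity.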
The language surrounding meta-analysis often reinforces its authority. Phrases such as “the literature shows” or “the evidence indicates” suggest a level of consensus that may not exist at the level of theory. Effect sizes are reported with confidence intervals that appear definitive, even when the underlying studies vary widely in design and conceptual framing. The synthesis acquires a rhetorical force that exceeds its epistemic warrant.
This rhetorical force has practical consequences. Meta-analytic findings influence policy recommendations, clinical guidelines, and funding priorities. Once an effect is declared weak or strong at the meta-analytic level, alternative lines of inquiry may be deprioritized. The synthesis becomes prescriptive, shaping the future of the field as much as summarizing its past.
The problem is not aggregation per se. In fields with well-defined objects and stable measurement, meta-analysis can be genuinely illuminating. The problem arises when aggregation is treated as conceptually neutral in a field where constructs are fluid and context-dependent. Without careful theoretical curation, meta-analysis risks functioning as a statistical averaging machine that erases meaningful variation.
Cronbach’s distinction between the two disciplines of psychology is instructive here. Cronbach argued that psychology oscillates between experimental control and correlational exploration, and that neither can subsume the other. Meta-analysis, when treated as the pinnacle of evidence, implicitly privileges one discipline over the other. It favors effects that are consistent across contexts, often at the expense of understanding why effects vary.
A more reflective use of meta-analysis would treat it as a hypothesis-generating tool rather than a verdict. Patterns of heterogeneity should prompt theoretical refinement rather than be smoothed away. Divergent findings should be examined for what they reveal about boundary conditions, not dismissed as noise. Meta-analysis should reopen questions, not close them prematurely.
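One modest way to work in that spirit is to read heterogeneity statistics as prompts rather than nuisance parameters. The sketch below computes Cochran's Q and the I² index for an invented set of effect sizes, then recomputes them within a hypothetical moderator (lab versus field studies); every number, and the moderator itself, is an assumption made up for illustration.

```python
import numpy as np
from scipy import stats

def cochran_q_i2(effects, variances):
    """Cochran's Q, its p-value, and the I^2 heterogeneity index."""
    effects = np.asarray(effects, dtype=float)
    w = 1 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)
    df = len(effects) - 1
    p = stats.chi2.sf(q, df)
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, q, p, i2

# Invented effect sizes: the first five from "lab" studies, the last five from "field" studies.
effects   = [0.55, 0.48, 0.62, 0.50, 0.58, 0.05, 0.12, -0.02, 0.08, 0.10]
variances = [0.02, 0.03, 0.04, 0.02, 0.03, 0.02, 0.02, 0.03, 0.02, 0.03]

pooled, q, p, i2 = cochran_q_i2(effects, variances)
print(f"Pooled effect (all studies): {pooled:.2f}")
print(f"Cochran's Q = {q:.1f} (p = {p:.4f}), I^2 = {i2:.0%}")

# The same statistics within each assumed moderator level tell a different story.
for label, sl in [("lab", slice(0, 5)), ("field", slice(5, 10))]:
    m, q_m, p_m, i2_m = cochran_q_i2(effects[sl], variances[sl])
    print(f"{label:>5}: pooled = {m:.2f}, I^2 = {i2_m:.0%}")
```

The overall I² is substantial, while within each assumed moderator level it collapses toward zero: precisely the pattern that should send analysts back to theory about boundary conditions rather than forward to a single averaged effect.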
Recognizing meta-analysis as an exercise of power does not delegitimize it. It situates it. Power is not inherently problematic; unacknowledged power is. When meta-analytic conclusions are presented as inevitable outcomes of data aggregation, the interpretive labor behind them disappears. Restoring visibility to that labor allows the discipline to debate not only results, but the assumptions that produced them.
Psychology does not advance by declaring the literature settled. It advances by clarifying where disagreement lies and why. Meta-analysis can contribute to that process, but only if it is treated as a tool embedded within theoretical judgment rather than as an oracle. Synthesis without theory is not cumulative knowledge. It is statistical compression.
Letter to the Reader
If you have ever felt intimidated by the authority of a meta-analysis, that reaction is understandable. When I was trained in the mid-1980s, meta-analysis was emerging as a corrective to narrative review, and it carried a promise of objectivity that was deeply appealing.
With experience, it becomes clear that synthesis always reflects choices. Pay attention to who made them, and on what grounds. Meta-analysis can illuminate patterns, but it cannot replace theoretical thinking. The moment it seems to do so is the moment its power should be examined most closely.