Qualitative Empirical Legal Scholarship: Standards and Verification Framework
Seth C. Oranburg1
Abstract
Generalist institutions regularly evaluate specialized empirical claims without the expertise or infrastructure to assess evidentiary warrant. Student-edited law reviews are a particularly exposed case. Recent meta-research finds widespread gaps in data and code disclosure even in quantitative empirical work published in leading law journals, despite mature transparency norms in fields such as economics and political science. For qualitative and mixed methods research, the institutional baseline is even less developed: legal scholarship lacks widely adopted, law-review-embedded reporting standards, and law reviews rarely require structured disclosure or warrant-checking protocols. Editors confronting interviews, fieldwork, or case-based analysis therefore lack a consistent, institutionally legible basis for distinguishing rigorous empirical inference from sophisticated narrative.
The result is adverse selection. When methodological quality is hard to observe, editorial selection shifts to proxies: prose, credentials, and topic salience. Authors who invest heavily in rigorous design and validation gain little advantage in law review placement and often route their strongest work to venues with expert screening. Work with thinner design constraints correspondingly faces less screening risk in law reviews, where weaknesses are harder to detect. The equilibrium is a market for lemons: a skewed pool of empirical claims and an eroding basis for confidence in what legal scholarship says about how law operates in practice.
This Article proposes institutional infrastructure that enables generalist editors to evaluate claim-evidence warrant without methodological evangelism or specialized training. It does so through three linked governance tools. First, a claim typology distinguishes pattern, mechanism, and interpretive claims and specifies the minimum evidentiary warrants each requires. Second, a standardized methodological abstract and a short audit rubric translate those warrants into operational disclosure and screening steps that a generalist editor can apply. Third, a best available evidence doctrine supplies a narrow safe harbor for legally constrained research. By standardizing these warrants, the framework provides a consistent basis for both designing rigorous qualitative legal scholarship and verifying it.
Introduction
In 1963, Stewart Macaulay published a study that became foundational to empirical legal scholarship. Drawing on interviews with sixty-eight businesspeople and lawyers in Wisconsin, Macaulay documented a phenomenon invisible to doctrinal analysis: firms routinely conducted business with minimal reliance on formal contract law, preferring relational norms and reputation over legal enforcement.2 The study was qualitative, interpretive, and empirical. It transformed contract theory by revealing the gap between law on the books and law in action.3
Six decades later, qualitative empirical legal scholarship operates in an institutional environment that remains comparatively thin on verification infrastructure. Recent meta-research suggests that even quantitative empirical work in highly ranked law-journal venues often omits basic transparency signals: in a sample of 300 quantitative empirical articles published between 2018 and 2020, most articles did not include a data-availability statement, most did not indicate the availability of analytic scripts, and preregistration was rare.4 A second, larger study that coded every empirical legal study in twenty top student-edited law reviews from 2010 to 2022 found low rates of readily accessible final datasets under its definition of “data availability,” and reported markedly higher availability rates in top economics and political-science journals over a more recent window.5 Even where law reviews adopted mandatory-on-paper data policies, the same study reports that those policies did not consistently translate into accessible data in practice, a pattern the authors attribute to policy design and enforcement gaps.6
These findings concern quantitative work. But they reveal a deeper institutional truth: law reviews have not built basic verification infrastructure even where mature transparency norms exist in other disciplines. For qualitative and mixed methods research, which lacks law-specific reporting standards altogether, the institutional baseline is weaker still.
The 2002 University of Chicago Law Review symposium on empirical methodology created infrastructure for quantitative legal research. Lee Epstein and Gary King articulated explicit standards for statistical inference that became codified in textbooks, enforced by journals, and transmitted through training programs.7 Theodore Eisenberg situated their intervention within a broader movement toward empirically grounded legal scholarship.8 The Journal of Empirical Legal Studies, founded in 2004, required methodological transparency as a condition of publication.9 Quantitative empirical legal studies professionalized.
Qualitative and mixed methods research developed without equivalent infrastructure. In the same 2002 symposium, Jack Goldsmith and Adrian Vermeule observed that Epstein and King’s statistical framework “does not translate cleanly to the single (or small-number) case study, where a detailed contextual analysis can often uncover causal and other explanatory mechanisms that statistical correlation cannot capture.”10 They identified the problem but offered no constructive solution. The question they left unanswered has persisted: If Epstein and King’s rules do not work for qualitative research, what rules do work, and how should law reviews enforce them?
The intellectual foundations for rigorous qualitative legal research have been developed by multiple scholarly communities since 2002. New Legal Realism, articulated in a programmatic 2005 statement by Howard Erlanger and colleagues, called for “rigorous, genuinely interdisciplinary approaches to the empirical study of law.”11 Elizabeth Mertz and Mark Suchman mapped the relationship between empirical legal studies and New Legal Realism, articulating methodological commitments that distinguish the movement’s approach.12 Lisa Webley provided a systematic treatment of qualitative methods in the Oxford Handbook of Empirical Legal Research.13 The Research Handbook on Modern Legal Realism offers the most comprehensive recent statement of these commitments.14 In the social sciences, methodology handbooks document rigorous standards for case study analysis, process tracing, and mixed methods design.15 Reporting frameworks like GRAMMS, JARS-Qual, and SRQR establish criteria that peer-reviewed journals routinely enforce.16
The intellectual case for qualitative rigor thus exists. What remains incomplete is the institutional architecture for law review verification of qualitative empirical claims. This Article therefore targets qualitative and mixed methods empirical scholarship, using the quantitative transparency crisis as evidence of a broader infrastructure failure rather than as its primary object.
Previous efforts have laid important groundwork within law reviews themselves. In 2017, Katerina Linos and Melissa Carlson published in the University of Chicago Law Review a guide to qualitative methods for law review authors, covering process tracing, theoretically informed case selection, and systematic sampling strategies.17 Their contribution addressed the supply side of the problem: how authors can produce rigorous qualitative work. This Article addresses the demand side: how editors can verify that work without specialized training, and how the field can accommodate research constrained by legal barriers that prevent the transparency verification requires. The standardized methodological abstract and audit rubric proposed here operationalize the methodological principles Linos and Carlson articulated into enforceable editorial infrastructure. The best available evidence doctrine addresses a problem their framework did not confront: what happens when law itself prevents the disclosure that verification demands.18
This Article proposes a governance framework to address the adverse selection problem and strengthen the credibility basis for qualitative and mixed methods claims in legal scholarship. The framework operates through three components.
Part II introduces a claim typology distinguishing what evidence different empirical claims require: pattern claims (assertions about frequency, distribution, or typicality), mechanism claims (assertions about causality and process), and interpretive claims (assertions about meaning and institutional constitution). Each claim type generates distinct inferential risks and requires distinct evidentiary warrants. The typology provides editors with a basis for evaluating claim-evidence fit, enabling differential treatment of rigorous and weak submissions along claim-type lines.19
Part III translates these warrants into governance technologies: a standardized methodological abstract (SMA) requiring structured disclosure and an audit rubric enabling verification by generalist editors. The SMA requires authors to provide seven key disclosures: claim classification, population definition, selection procedure, disconfirmation criteria, negative case analysis, validation measures, and legal constraints. The audit rubric enables structured verification through severity-tiered checkpoints. If the analysis in Part I is correct, these screening technologies should raise acceptance rates for rigorous work while lowering them for methodologically weak submissions.20
Part IV proposes a best available evidence (BAE) doctrine addressing a problem distinctive to legal scholarship. Law itself creates opacity through privilege, sealing, and contractual confidentiality that prevents the full transparency empirical verification demands. Existing qualitative reporting standards assume that disclosure is possible; they do not address situations where legal barriers foreclose it. Without accommodation, strict transparency requirements would bias scholarship toward studying easily observable phenomena and away from legally important but confidential questions, producing a different kind of selection problem. The best available evidence doctrine permits constrained research when legal barriers are precisely identified, claims are appropriately narrowed, and compensatory verification is provided.21
Part I establishes why this infrastructure is needed by documenting the transparency deficit in quantitative work, tracing it to institutional causes that affect all empirical scholarship, and explaining the adverse selection dynamic it produces for qualitative and mixed methods research specifically. The problem is not individual incompetence or the absence of qualitative methodology in law or social science. It is structural: law reviews lack the screening technology to distinguish quality, and without that technology, incentives favor adverse selection.
Part I: Why Law Reviews Cannot Verify Qualitative Empirical Claims
Three considerations establish why law reviews currently lack the capacity to evaluate qualitative empirical scholarship. Section A draws on recent transparency audits to show that law reviews have not built verification infrastructure even for quantitative work, where mature standards exist elsewhere. Section B applies the economic logic of adverse selection to explain the sorting consequences: when editors cannot observe methodological quality, rigorous authors migrate to venues that can recognize their investment. Section C identifies a compounding factor specific to legal research—law itself creates the opacity that prevents the transparency verification demands. Together, these three conditions motivate the governance framework developed in Parts II through IV.
A. Evidence from Quantitative Transparency Audits
The credibility problem in quantitative empirical legal scholarship is not hypothetical. Recent meta-research documents systematic transparency failures across top law journals, revealing an institutional environment in which all empirical work, quantitative and qualitative alike, must operate.
A 2024 observational study examined 300 quantitative empirical articles published in highly ranked law journals from 2018 to 2020 and coded whether the articles signaled the availability of underlying data and analytic scripts. In that sample, eighty-one percent of articles did not include a data-availability statement and ninety-four percent did not indicate the availability of analytic scripts; only a small minority provided scripts or similar materials in an accessible form.22 These measures do not establish that data or code never exist, but they do document a recurring reader-facing problem: absent clear location and access information, third parties cannot readily verify whether the reported findings follow from the stated evidentiary base.
Even among studies that stated data were available, actual accessibility was considerably lower due to broken links and statements indicating data were available only “upon request.”23 Only three percent reported preregistration.24 The field operates without the transparency infrastructure that would permit verification.
The gap between law and other disciplines is dramatic. A comprehensive 2025 study examining every empirical article in the top twenty law journals from 2010 to 2022 found that only fifteen percent had readily accessible datasets.25 In economics and political science, the figure is ninety-nine percent.26 Even journals with mandatory data availability policies showed poor enforcement: of forty-eight articles published under such mandatory policies, only ten actually made data available.27 The infrastructure exists elsewhere. Law reviews have not adopted it.
This absence extends to editorial policy as reflected in author guidelines. In a snapshot of submission instructions, Chin and Zeiler report that twenty-five of the top thirty student-edited journals they reviewed made no mention of transparency guidance, while faculty-edited journals were more mixed: some included encouragement language, and a small subset required an explicit statement addressing the availability of data, materials, or analytic code.28 Those policy measures do not by themselves establish actual compliance in published articles, but they help explain why student editors and generalist readers often confront empirical claims without a standardized mechanism for asking—and answering—the basic verification question: where are the underlying materials, and under what conditions can they be inspected?
This Article focuses on qualitative and mixed methods empirical work, not on statistical studies. But these quantitative findings demonstrate that law reviews, as institutions, have not built basic verification infrastructure even where mature transparency norms exist in other disciplines. For qualitative and mixed methods research, which lacks law-specific reporting standards altogether, the institutional baseline is weaker still. Authors make claims about how legal actors interpret rules, why institutions behave as they do, and what mechanisms produce observed outcomes. Yet the evidence rarely appears in reviewable form. Selection criteria go undisclosed. Analytic procedures remain implicit. Negative cases disappear. The result is qualitative empirical legal scholarship that cannot be distinguished, on methodological grounds, from sophisticated narrative.
B. Adverse Selection When Editors Cannot Observe Quality
This transparency deficit reflects an institutional gap, not individual failure. The 2002 symposium created infrastructure for quantitative empirical legal research. Epstein and King’s standards became codified, enforced, and transmitted.29 Qualitative and mixed methods research developed without equivalent editorial infrastructure.
Goldsmith and Vermeule identified the problem in 2002, observing that statistical approaches fail for case studies, interpretive analysis, and mechanism inquiry.30 But their critique was diagnostic, not constructive. They demonstrated that quantitative-only standards produce problematic results when applied to qualitative work. They did not propose alternative standards suited to law review implementation.
Rigorous qualitative tools exist in the broader methodological literature. Process tracing frameworks developed in political science provide systematic approaches to establishing causal mechanisms.31 Causal diagrams have been introduced to empirical legal research to clarify causal inference.32 Qualitative reporting standards like SRQR and COREQ establish transparent disclosure requirements.33 But these tools have not been institutionalized in law review editorial practice. They appear in methodology handbooks and specialized venues, not in the submission requirements or review protocols of student-edited journals.
The result is asymmetry. Quantitative work has shared standards, reporting conventions, and institutional enforcement, even if enforcement remains weak. Qualitative and mixed methods work operates without equivalent infrastructure in law reviews. A quantitative legal empiricist knows what she should disclose and knows reviewers will, at least in principle, evaluate her work against explicit criteria. A qualitative legal empiricist faces no such expectations. She may interview practitioners, observe institutions, or analyze documents.34 But she encounters no shared requirements about what her methodology section should contain, what negative case analysis requires, or what would make her findings checkable by student editors.
This asymmetry creates conditions for adverse selection in qualitative and mixed methods scholarship, a dynamic George Akerlof analyzed in his foundational work on markets where quality is difficult to observe.35 When editors cannot evaluate methodology, they evaluate what they can observe: prose quality, topic salience, credentials, and fit. Under these conditions, methodological investment offers no competitive advantage in law review placement. High-quality qualitative empirical work (rigorous design, transparent methods, verified claims) looks the same to editors as work with weak methods dressed in sophisticated prose.
The incentive structure follows predictably. High-quality empirical scholars can submit to law reviews, where their methodological investment receives no credit, or to peer-reviewed social science venues, where expert reviewers can identify and reward rigor. For scholars whose qualitative work would survive peer review, the choice often favors venues that can recognize quality. Low-quality work faces opposite incentives: peer review would expose methodological weakness, but law review screening cannot detect it. The rational strategy for weak work is to target venues where weakness is invisible.
The equilibrium is Akerlof’s market for lemons applied to qualitative legal scholarship. When screening cannot distinguish quality, high-quality sellers exit for markets that can recognize value; low-quality sellers remain. Applied to qualitative empirical legal scholarship: high-quality authors increasingly send their best work to peer-reviewed venues; law reviews receive a pool increasingly composed of work that could not survive rigorous methodological scrutiny.36
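The sorting logic can be made concrete with a toy calculation. In the sketch below, every payoff and probability value is an illustrative assumption, not a figure drawn from Akerlof or from any study cited in this Article; the point is only to show how pooled screening sends the two quality types to different venues.

```python
# Toy model of the lemons dynamic described above. All numbers are
# illustrative assumptions chosen for exposition.

def venue_payoff(p_accept: float, value: float, cost: float) -> float:
    """Expected payoff of submitting: acceptance probability times
    placement value, net of the sunk cost of methodological investment."""
    return p_accept * value - cost

# Law reviews screen on prose, not methodology, so rigorous and weak
# work face the same acceptance probability (pooling).
LAW_REVIEW_P = 0.30
# Peer review screens on methodology: rigor is rewarded, weakness exposed.
PEER_REVIEW_P = {"rigorous": 0.50, "weak": 0.05}

PLACEMENT_VALUE = 100.0
RIGOR_COST = {"rigorous": 20.0, "weak": 0.0}

for quality in ("rigorous", "weak"):
    lr = venue_payoff(LAW_REVIEW_P, PLACEMENT_VALUE, RIGOR_COST[quality])
    pr = venue_payoff(PEER_REVIEW_P[quality], PLACEMENT_VALUE, RIGOR_COST[quality])
    best = "peer review" if pr > lr else "law review"
    print(f"{quality:9s}: law review {lr:5.1f} vs peer review {pr:5.1f} -> {best}")

# Output: rigorous work earns more in peer review (30.0 vs 10.0), while
# weak work earns more in law reviews (30.0 vs 5.0) -- the sorting
# equilibrium described above.
```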
This is not a claim about author character or editorial competence. It is a claim about incentive structures and institutional design. The lemons dynamic emerges from a mismatch between law reviews’ screening technology (weak for methodological quality in qualitative work) and alternative venues’ screening technology (strong through expert peer review). Given that mismatch, adverse selection is the predicted outcome regardless of individual intentions.
C. How Legal Constraints Compound the Problem
Behind the infrastructure gap lies a deeper problem distinctive to law. Qualitative empirical research requires transparency: evidence must be available, selection procedures disclosed, negative cases reported. But law itself creates opacity that can prevent the transparency empirical verification demands.
Attorney-client privilege shields confidential communications between lawyers and clients from disclosure.37 Judicial sealing and protective orders preserve settlement confidentiality and protect sensitive information.38 Contractual confidentiality agreements and trade secret doctrine bind researchers and firms to non-disclosure.39 Institutional review board restrictions impose confidentiality requirements on research involving human subjects.40 These barriers are not accidental friction. They are governance by design. Privilege encourages candid legal advice. Sealing protects settlement finality. Trade secret doctrine protects innovation incentives. Each serves legitimate public values.
Yet they create a structural consequence for empirical research. When verification requires transparency and law forbids transparency, scholarship confronts the transparency paradox: the demand for disclosure that verification requires conflicts with legal prohibitions protecting important public values. Researchers cannot disclose privileged communications without client consent. They cannot unseal judicial records without court permission. They cannot violate NDAs without destroying the trade secret protection those agreements enable.
The perverse consequence is selection bias of a different kind. Scholars can study phenomena where evidence is freely available: appellate decisions, published regulations, public company filings. They face systematic barriers to studying phenomena where evidence is confidential: how settlement actually operates, why firms choose particular competitive strategies, how compliance officers interpret ambiguous rules, how inside counsel advises on regulatory risk. These are among the most legally important questions. But they are precisely the questions that law-created opacity makes hardest to study transparently.
Strict transparency requirements that do not accommodate legal constraints would inadvertently bias qualitative scholarship toward studying the visible and away from studying the important. This is not the same problem as the lemons dynamic analyzed above, but it compounds it. Without accommodation for law-created opacity, qualitative legal empiricism faces twin selection pressures: adverse selection in quality (lemons) and adverse selection in topic (visibility bias).
D. Empirical Credibility and Legal Legitimacy
The stakes extend beyond academic methodology. Law claims authority through reasons that are publicly accessible and contestable.41 Legal rules bind because they can be justified, criticized, and reformed through rational discourse. Empirical claims about how rules operate in practice are central to this justificatory enterprise. When scholars assert that doctrinal changes produce behavioral effects, that regulatory regimes achieve or fail their purposes, that institutional practices diverge from formal requirements, they provide factual predicates on which legal reasoning depends.
If those empirical claims lack credibility, the legal reasoning they support is compromised. A judge citing an empirical study to support doctrinal innovation cannot evaluate whether the study’s findings are warranted if methodology is undisclosed. A legislator relying on scholarly claims about regulatory effects cannot assess whether those claims rest on representative evidence or selected cases. Empirical claims are asserted with scholarly authority but offered without the transparency that authority requires.
The credibility problem is now documented and discussed in the literature. Scholars observe that empirical legal analysis is increasingly viewed with suspicion, treated as difficult to replicate, opaque in its methods, and indistinguishable from advocacy dressed in empirical language.42 Methodologists note that selective reporting and data fishing can “entirely invalidate” results, contributing to the perception of empirical work as untrustworthy.43 The transparency failures documented in quantitative meta-research confirm the institutional conditions that produce these concerns. A field that does not require disclosure cannot credibly claim that its findings are checkable.
Law review articles are cited in judicial opinions, legislative hearings, and regulatory proceedings.44 Empirical claims about how law operates become inputs to legal reasoning. When those claims rest on unverifiable foundations, the reasoning they support is vulnerable. Courts and policymakers relying on opaque scholarship may reach conclusions that transparent evidence would have foreclosed.
Over time, these conditions risk eroding legal scholarship’s epistemic authority. If law reviews cannot distinguish rigorous qualitative empirical work from narrative invention, they cannot credibly certify findings. If readers cannot verify claims, they cannot credibly rely on them. The connection between scholarship and legal legitimacy weakens.
E. Framework Overview
This Article addresses these problems by proposing governance infrastructure that qualitative and mixed methods legal scholarship currently lacks. The solution operates on three levels, each responding to a specific aspect of the diagnosed problem.
First, the typology of claims developed in Part II provides the conceptual foundation for editorial evaluation of qualitative and mixed methods work. Different empirical claims have different inferential structures. Pattern claims about frequency require population definition and negative case analysis. Mechanism claims about causality require within-case evidence of process. Interpretive claims about meaning require participant-centered evidence, not researcher imputation. This typology gives editors a vocabulary for matching claims to evidentiary warrants, providing the basis for differential treatment that the current system lacks. Work meeting claim-type-appropriate standards can be identified and rewarded; work that cannot meet those standards can be identified and rejected or revised.45
Second, the standardized methodological abstract and audit rubric developed in Part III provide governance technologies that enable structured quality screening for qualitative submissions. The SMA requires authors to state their inferential commitments in structured form. The audit rubric permits editors to verify claim-evidence fit through severity-tiered checkpoints. By adopting these tools, law reviews commit to evaluating methodology alongside prose—a commitment whose credibility depends on enforcement—specifically, on whether journals actually return papers with incomplete disclosures and condition publication on rubric compliance. Part III.D develops the enforcement mechanisms that determine whether this commitment is credible or merely decorative.46
Third, the BAE doctrine developed in Part IV addresses the transparency paradox. When full transparency is legally foreclosed by privilege, sealing, or contractual confidentiality, researchers may conduct and publish constrained research under silver standard verification. Silver standard requires precise identification of legal constraints, claims narrowed to match constrained evidence, compensatory rigor through triangulation and alternative verification, and documented negative case analysis. The doctrine permits studying legally important phenomena without abandoning methodological accountability, addressing the topic selection bias that strict transparency requirements would produce.47
A note on scope is warranted. This framework applies to scholarship making empirical claims about law in action: how legal actors interpret, apply, or respond to doctrine; how legal institutions operate in practice; what effects legal rules produce. Pure doctrinal interpretation, determining what the law means as a normative matter through analysis of authoritative legal materials, lies outside the framework’s direct application. However, when doctrinal scholarship makes assertions about how practitioners understand rules, how institutions implement doctrine, or what behavioral effects legal changes produce, those assertions are empirical claims subject to the evidentiary standards developed here. The distinction is not between doctrinal and empirical scholarship as genres, but between normative legal argument and factual claims about law’s operation.48
Part II: What Different Empirical Claims Require
If law reviews are to evaluate qualitative empirical claims, they need a vocabulary for distinguishing what different claims require. A paper asserting that firms typically behave a certain way makes a different kind of claim than one asserting that doctrinal change caused a behavioral shift, which in turn differs from one asserting that compliance officers understand a regulation as symbolic rather than substantive. Each claim carries distinct inferential risks and demands distinct evidence. The typology developed here specifies those demands and, in doing so, converts what Part I characterized as an intractable evaluation problem into a series of answerable questions.
The typology makes evaluation claim-specific rather than impressionistic. It distinguishes three types of empirical claims, each with distinct inferential risks and verification requirements: pattern claims about frequency and distribution, mechanism claims about causality and process, and interpretive claims about meaning and institutional constitution.49
This approach adapts insights from social science methodology to legal scholarship’s distinctive purposes. Gary Goertz and James Mahoney have documented a fundamental divide between quantitative and qualitative research cultures, each with different assumptions about causation, case selection, and inference.50 The claim typology translates these insights into categories suited for editorial evaluation of qualitative and mixed methods legal scholarship, attending to the interpretive and constitutive dimensions that the New Legal Realism emphasizes.51
A. Three Claim Types and Their Distinct Evidentiary Risks
Empirical claims in qualitative legal scholarship vary in their inferential structure. A paper might assert that “firms typically choose trade secrecy over patents when faced with patent uncertainty.”52 This is a pattern claim about frequency and distribution. A different paper might argue that “doctrinal change operates by shifting reputational costs, which in turn alters strategic behavior.”53 This is a mechanism claim about causality. A third might claim that “compliance officers understand the regulation as addressing systemic risk, not individual transactions.”54 This is an interpretive claim about institutional meaning. Each of these claim types appears in qualitative legal scholarship published in—and submitted to—student-edited law reviews operating without claim-type-specific verification requirements.55
Each claim type invites distinct inferential risks. The first risks generalizing from biased samples. The second risks plausible stories masquerading as causal evidence. The third risks imputing the researcher’s interpretation to the subjects. The warrants that answer these risks are not methodological preferences; they are evidentiary requirements. Different claims demand different warrants.56
The consequence for editorial evaluation is fundamental. A one-size-fits-all standard cannot accommodate these differences. Asking whether a paper has “good methodology” obscures the question that matters: does the evidence presented warrant the specific claims made? The typology responds by making evaluation claim-specific. Editors assessing a pattern claim should ask whether the population is defined, the selection procedure disclosed, and negative cases analyzed. These questions differ from those appropriate for mechanism claims, which require evidence of causal process, or interpretive claims, which require participant-centered evidence of meaning.
This claim-specific approach equips editors with the vocabulary for asking the right questions: What type of claim is this? What evidence would warrant it? Does the paper provide that evidence? These are questions generalist editors can answer without specialized methodological training, provided they have a framework for asking them.
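For concreteness, the claim-specific questions can be written down as a simple lookup from claim type to required warrants, as the sketch below does. The checklists restate the editorial checkpoints developed in the three Sections that follow; the encoding itself is a hypothetical editorial aid, not part of the Article's proposal.

```python
# Illustrative encoding of the claim typology. The warrants attached to
# each claim type restate the checkpoints in Sections B through D; the
# data structure is a hypothetical aid, not the Article's specification.

REQUIRED_WARRANTS = {
    "pattern": [       # frequency and distribution claims (Section B)
        "population defined",
        "selection procedure disclosed",
        "negative case analysis reported",
        "claim scope proportionate to the sample",
    ],
    "mechanism": [     # causality and process claims (Section C)
        "mechanism specified step by step",
        "within-case evidence for each step",
        "alternative mechanisms addressed",
        "timing evidence supports the sequence",
    ],
    "interpretive": [  # meaning and constitution claims (Section D)
        "participant-centered evidence present",
        "saturation documented",
        "validation attempted (e.g., member checking)",
        "negative cases reported",
    ],
}

def editorial_questions(claim_type: str) -> list[str]:
    """Return the claim-specific checklist a generalist editor applies."""
    return REQUIRED_WARRANTS[claim_type]
```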
B. Pattern Claims: Frequency and Distribution
Pattern claims assert frequency or distribution: “firms prefer X,” “regulators typically choose Y,” “courts decide in direction Z.”57 These claims do consequential work in legal argument. Frequency shapes stakes. If a doctrine is rarely applied, arguments for reform based on that doctrine’s effects must account for its limited reach. If enforcement concentrates in particular market segments, compliance costs and deterrent effects will not track formal legal rules alone.58
The inferential risk in pattern claims is selection. Observed instances must connect to the population about which the claim is made.59 The characteristic failure mode is inferential overreach: converting a discrete set of examples into a claim about typicality without demonstrating that those examples represent the broader population.60 This failure appears frequently in qualitative legal scholarship. An author reviews a dozen cases, identifies a pattern across them, and concludes that courts “generally” behave in that manner. The dozen cases may have been selected precisely because they fit the pattern. Cases that do not fit may exist but remain unreported. The selection process determines whether the pattern claim is warranted or rhetorical.
To warrant a pattern claim, an author must disclose three elements. First, the population to which the claim applies.61 If the claim concerns “venture-backed biotechnology firms,” the author should specify what counts as venture-backed, what counts as biotechnology, and whether the sample includes all such firms or a defined subset. Second, the selection procedure.62 Were cases selected randomly? Systematically? Through judgment sampling with stated criteria? The procedure determines what inferences the sample can support. Third, negative case analysis.63 What would constitute a disconfirming case? How did the author search for such instances? What did the search reveal?
Negative case analysis implements what methodologists call analytic induction: the systematic search for cases that would disconfirm the asserted pattern.64 Because pattern claims imply that counterexamples should be rare or fall outside defined scope, rigor requires searching for disconfirming instances and reporting results.65 A paper asserting that “firms typically respond to patent uncertainty by increasing trade secret protection” should state what would count as a disconfirming case, describe how the author searched for such cases, and explain how any discovered counterexamples affect the claim’s scope. This discipline distinguishes pattern claims tested against contrary evidence from pattern claims resting on curated illustrations.
Consider a concrete example. A researcher investigates why closely held businesses incorporate in Delaware rather than their principal place of business. She interviews twenty-five corporate counsel across three states: Delaware, California, and Texas. The sample deliberately includes diversity in firm size, practice focus, and client base. After analysis, she finds that eighteen of twenty-five counsel emphasize legitimacy with investors as the primary driver, while only seven emphasize tax or liability advantages.
What claims can this evidence warrant? The claim “most closely held businesses choose Delaware for legitimacy reasons” is unwarranted. The observed sample provides no warrant for claims about all closely held businesses. The twenty-five interviews are not randomly sampled. They cover three states among fifty. They include only counsel willing to participate.
But a more modest claim is defensible: “Among corporate counsel advising small-to-medium businesses in California, Texas, and Delaware who agreed to participate in this study, legitimacy concerns predominated over economic calculation in Delaware incorporation decisions.” This claim’s scope matches the sample construction. It acknowledges limitations. It invites extension rather than asserting generality.
The evidentiary standard for pattern claims is not statistical significance. It is defensible scope matched with transparent selection. Qualitative research provides what methodologists call “analytical generalization” to theory rather than “statistical generalization” to a population.66 The researcher generalizes to a proposition: that legitimacy concerns can predominate over economic calculation in incorporation decisions. The proposition invites testing in other contexts. The finding is valuable despite modest scope because it identifies a phenomenon that quantitative research cannot easily observe.
Editors evaluating pattern claims should apply four checkpoints. First, is the population defined? If the paper claims that “firms” behave in a certain way, what firms? Where? When? Under what conditions? Second, is the selection procedure disclosed? Did the author select cases before examining outcomes, or afterward? Were criteria stated ex ante or reconstructed ex post? Third, is negative case analysis present? Did the author specify what would constitute disconfirming evidence? Did the author search for such evidence? What did the search reveal? Fourth, is scope proportionate to evidence? Does the claim match what the sample can support, or does it overreach?
These checkpoints do not require methodological expertise. They require attention to what the paper claims and whether disclosed evidence justifies the claim’s scope. An editor applying them can distinguish a pattern claim warranted by systematic inquiry from one resting on selected illustrations.
C. Mechanism Claims: Causality and Process
Mechanism claims answer “why” and “how”: how doctrinal change causes behavioral shift, how a regulation operates to produce market effects, how institutional practices translate doctrine into action.67 Legal reasoning is deeply causal. Lawyers routinely assert that “the statute operates by creating incentives to X” or “the doctrine produces behavioral change through reputational costs.” These are mechanism claims. They assert not merely that an outcome occurred but that it occurred through a particular causal process.
The inferential risk is the causal chain itself. Outcomes can have multiple causes. Observed associations can reflect spurious correlation, reverse causation, or confounding factors.68 Qualitative legal scholarship frequently generates what might be called “just-so stories”: plausible narratives offered without evidence that the mechanism actually operated.69 These stories work backward from observed outcomes to invent causal chains that could have produced them. The chain might sound legally sophisticated and economically sensible, yet it might be wrong. Correlation does not establish mechanism, and plausibility does not establish operation.
To warrant a mechanism claim, an author must demonstrate not merely that an outcome occurred but that the proposed mechanism actually operated. This requires within-case evidence showing intermediate steps. Political scientists call this process tracing: within-case analysis that verifies causal mechanisms by examining observable implications of each step in the proposed causal chain.70 Process tracing does not rely on statistical controls across many cases to isolate causal effects. It relies on detailed examination of individual cases to verify that steps in the proposed causal chain actually occurred and occurred in the hypothesized sequence.71
Process tracing has been underutilized in legal scholarship despite its natural fit with legal reasoning. Lawyers regularly trace causal sequences: establishing that a defendant’s action caused harm, that regulatory intervention produced compliance, that doctrinal change altered behavior. The methodology formalizes this intuition. Recent work has begun adapting process tracing for legal research, particularly in international law contexts where causal claims about treaty compliance and norm diffusion require within-case verification.72 Causal diagrams, which make explicit the hypothesized relationships between variables and identify potential confounders, offer a complementary tool for clarifying mechanism claims before evidence is gathered.73
Two evidentiary tests guide the search for mechanism evidence. The first is the smoking gun test: evidence that, if present, decisively confirms a step in the causal chain.74 A compliance officer stating in an interview, “We redesigned our diversity training program in direct response to the agency’s published enforcement priorities,” provides smoking gun evidence for the link between enforcement signal and organizational response. The statement is probative because it comes from the decision-maker, attributes causation explicitly, and specifies timing. The second test is the hoop test: evidence that must be present if the mechanism operated as proposed, though its presence alone does not confirm the mechanism.75 If the mechanism requires that decision-makers were aware of a legal change, the paper must show awareness. Absence of awareness would refute the mechanism. But presence of awareness alone does not prove the mechanism caused the outcome.
These tests formalize evidentiary logic that legal scholars use implicitly. Lawyers assess evidence by asking what it proves and what alternative explanations it excludes. Process tracing applies that logic systematically to mechanism claims.
Consider a worked example. A researcher claims that the Supreme Court’s Alice decision caused firms in the software industry to shift from patent to trade secret protection. The claim is causal: doctrinal change → strategic response. Without process tracing, the claim rests on temporal correlation (patents declined after Alice) and plausibility (the decision created uncertainty about software patent validity).
Process tracing requires more. The researcher must show that the mechanism’s steps actually occurred in specific cases. Step 1: Firms became aware of Alice and its implications. This is a hoop test; if firms were unaware, the mechanism fails. Evidence might include interviews confirming awareness, internal memoranda referencing the decision, or attendance at CLE programs discussing patent eligibility. Step 2: Firms assessed that Alice threatened their patent strategies. Evidence might include strategy documents, board presentations, or interview statements describing risk assessment. Step 3: Firms considered alternatives including trade secrecy. Evidence might include deliberations about protection options. Step 4: Firms implemented trade secret strategies in response. Evidence might include changes in confidentiality protocols, employee agreements, or explicit statements linking the shift to Alice.
Each step requires evidence that it occurred, not merely that it could have occurred. Smoking gun evidence (explicit statements attributing causation to Alice) strengthens the chain. Hoop evidence (documented awareness, deliberation) is necessary but not sufficient. The conjunction of evidence across steps warrants the mechanism claim in ways that correlation alone cannot.
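The asymmetry between the two tests can be stated compactly, as in the sketch below. It encodes the standard process-tracing logic described above; the function names and the application to the Alice example are illustrative, not drawn from the Article.

```python
# Minimal encoding of the two process-tracing tests described above.

def hoop_test(evidence_present: bool) -> str:
    """Necessary but not sufficient: absence refutes the mechanism;
    presence merely keeps it alive."""
    return "step survives (not confirmed)" if evidence_present else "mechanism refuted"

def smoking_gun_test(evidence_present: bool) -> str:
    """Sufficient but not necessary: presence confirms the step;
    absence does not refute it."""
    return "step confirmed" if evidence_present else "step not confirmed (not refuted)"

# Applied to the Alice example: documented awareness of the decision is
# hoop evidence for Step 1; an interview statement explicitly attributing
# the trade secret shift to Alice is smoking gun evidence for Step 4.
print(hoop_test(evidence_present=True))         # step survives (not confirmed)
print(smoking_gun_test(evidence_present=True))  # step confirmed
```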
Editors evaluating mechanism claims should apply distinct checkpoints. First, is the mechanism specified step by step? A claim that “regulation causes compliance” without intermediate steps is not a mechanism claim; it is a correlation assertion. Second, is within-case evidence provided for each step? Does the paper show that each step occurred in actual cases, not merely that each step is plausible? Third, are alternative mechanisms addressed? Could the outcome have occurred through different causal pathways? Has the author considered and ruled out competing explanations? Fourth, does timing evidence support the sequence? Did steps occur in the order the mechanism requires?
These checkpoints enable editors to distinguish mechanism claims warranted by process evidence from just-so stories—and, critically, to require revision from authors who assert causal claims without tracing the mechanism through specific cases.
D. Interpretive Claims: Meaning and Constitution
Interpretive claims assert meaning: how legal actors understand rules, what institutional practices signify, how legal categories constitute social relations.76 These claims address what the New Legal Realism identifies as the constitutive dimension of law: law does not merely regulate pre-existing behavior but shapes the categories through which actors understand their situations.77 A claim that “compliance officers understand the regulation as symbolic rather than substantive” is interpretive. It asserts something about how actors make sense of legal requirements, not about behavioral frequency or causal mechanism.
The inferential risk is imputation. The researcher’s interpretation may not match participants’ understanding.78 Legal scholars are trained to interpret doctrine; the temptation is to read legal materials and attribute meaning to actors based on what the doctrine seems to require, rather than investigating what actors actually believe. An author might claim that “firms interpret the fiduciary duty as requiring shareholder wealth maximization” based on doctrinal analysis of fiduciary standards, without ever asking corporate officers how they understand their obligations. The claim sounds empirical but rests on the researcher’s reading, not participant evidence.
To warrant an interpretive claim, an author must provide participant-centered evidence: evidence of what actors themselves believed, intended, or understood, not evidence of what the researcher thinks they should have believed given doctrine.79 This evidence typically comes from interviews, documents authored by participants, or observation of practice.80 The key criterion is that meaning is derived from participants’ perspectives, not imputed from the researcher’s doctrinal analysis.
Interpretive research faces a distinctive validation challenge. How does the researcher know that her interpretation of participants’ meanings accurately reflects those meanings? Two techniques address this concern. Member checking involves returning findings to participants and asking whether the interpretation resonates with their experience.81 It is not dispositive; participants may not recognize accurate interpretations of their tacit understandings, or may strategically reject unflattering characterizations. But systematic disagreement between researcher interpretation and participant reaction warrants explanation. Saturation provides a different form of validation: the point at which additional interviews or observations yield no new interpretive themes.82 Saturation does not guarantee accuracy, but it indicates that the researcher has engaged enough cases to identify recurring patterns rather than idiosyncratic views.
Consider a worked example. A researcher claims that “immigration lawyers understand prosecutorial discretion as creating a zone of negotiation rather than a binary enforcement decision.” This is an interpretive claim about how lawyers make sense of a legal category. To warrant it, the researcher must provide evidence of lawyers’ understanding, not merely doctrinal analysis of prosecutorial discretion doctrine.
Participant-centered evidence might include interview statements: “When I talk to clients, I explain that we’re not just hoping the government doesn’t notice them. We’re building a case for why discretion should be exercised in their favor. It’s a conversation, not a coin flip.” Such statements provide direct evidence of how lawyers understand the category. Multiple statements from different lawyers identifying similar themes strengthen the claim. Negative cases, lawyers who describe prosecutorial discretion differently, should be reported and explained. The researcher might find that lawyers in different practice contexts understand discretion differently, leading to a refined claim about variation.
The interpretive claim is not warranted by the researcher analyzing prosecutorial discretion doctrine and concluding that it creates negotiation space. That analysis might be correct as doctrinal interpretation, but it does not establish that lawyers understand the doctrine that way. The empirical claim requires empirical evidence: participant accounts of meaning.
Editors evaluating interpretive claims should apply distinct checkpoints. First, is participant-centered evidence present? Does the paper provide evidence of what actors believed, or does it impute meaning based on doctrinal analysis? Second, is saturation documented? How many participants were interviewed? At what point did new themes stop emerging? Third, was validation attempted? Did the researcher employ member checking or other techniques to verify that interpretations reflect participant meaning? Fourth, are negative cases reported? Did any participants understand the legal category differently? How does variation affect the claim’s scope?
These checkpoints address the imputation risk. An author who provides rich participant-centered evidence with documented saturation and member checking demonstrates methodological seriousness. An author who asserts interpretive claims based on doctrinal reading without participant evidence does not. Editors can distinguish these cases using the checkpoints.
E. From Typology to Editorial Practice
The preceding sections developed the claim typology’s three categories and the evidentiary warrants each requires. What remains is to show how the typology changes editorial practice.
With the typology in hand, editors can ask claim-specific questions. Is this a pattern claim? Then population definition, selection procedure, and negative case analysis are required. Is this a mechanism claim? Then process tracing evidence is required. Is this an interpretive claim? Then participant-centered evidence is required. These requirements are not arbitrary methodological preferences. They follow from the inferential structure of each claim type. Pattern claims without transparent selection risk being generalizations from cherry-picked examples. Mechanism claims without process evidence risk being just-so stories. Interpretive claims without participant evidence risk being researcher imputation.
The screening that Part I characterized as impossible under current conditions becomes feasible. High-quality qualitative work can be identified by checking whether required evidentiary elements are present. Work that asserts claims without corresponding evidence can be identified by the same criteria.
Editors need not become methodologists. They need to classify claims and check for claim-appropriate evidence. The standardized methodological abstract and audit rubric developed in Part III operationalize this process, translating the typology into structured disclosure requirements and verification protocols. The central insight is that different claim types require different evidence, and that asking claim-specific questions enables evaluation that prose-quality assessment cannot provide.83
Part III: Making Quality Observable to Generalist Editors
The typology developed in Part II gives editors a conceptual vocabulary for matching claims to evidence. This Part translates that vocabulary into operational tools—a standardized methodological abstract (SMA) requiring structured author disclosure, and an audit rubric enabling editors to verify claim-evidence fit through severity-tiered checkpoints. The design challenge is to make methodological quality observable to generalist editors without requiring them to become methodologists.
A. From Reporting Standards to Editorial Infrastructure
The analysis in Part I showed why methodological investment currently offers no competitive advantage in law review placement: editors cannot observe it, so it cannot affect acceptance. The governance technologies developed here address that dynamic—not by training editors to evaluate methodology, but by requiring authors to document methodological steps that rigorous research includes and weak research omits. An author who defined a population can describe it. An author who searched for negative cases can report the count and what the search revealed. An author who did not take these steps cannot document them without fabrication. The SMA thus creates a separating mechanism: it converts unobservable quality differences into observable disclosure differences.84
These governance technologies are institutional designs that make quality observable to generalists: they shift the burden of production from editor to author and structure verification around checkable requirements.
A crucial distinction separates these tools from existing methodological guidance. Qualitative reporting standards—SRQR, COREQ, GRAMMS, and JARS-Qual—tell authors what rigorous work should include.85 They are supply-side tools designed for peer-reviewed venues with expert reviewers.
The SMA and audit rubric are demand-side tools designed for student-edited law reviews. The SMA shifts the burden of production: authors state their inferential commitments in structured form, making methodology visible without requiring editors to reconstruct it from prose. The audit rubric operationalizes verification: editors check for required disclosures against claim-type-specific criteria, enabling structured evaluation by generalist editors.
The relationship between these governance technologies and existing reporting standards warrants careful specification, because a peer reviewer’s natural question is: why not simply adopt SRQR or COREQ for law review use? SRQR’s twenty-one items and COREQ’s thirty-two items represent expert consensus on what rigorous qualitative reporting should contain. They have been validated through Delphi processes and refined across thousands of applications in health sciences and psychology. A law review that required SRQR compliance for qualitative submissions would improve the status quo. The question is whether unmodified adoption would work in the distinctive institutional context of student-edited law reviews, and the answer is that three design differences make adaptation necessary.
First, the SMA is claim-type-specific where existing frameworks are uniform. SRQR applies the same reporting requirements regardless of what the research claims to establish: its Item 8 (sampling strategy) asks the same question whether the paper makes a pattern claim about how frequently firms behave a certain way or an interpretive claim about how compliance officers understand a regulation. The SMA maps disclosure requirements to inferential structure. Population definition is required for pattern claims but not for interpretive claims, because the inferential risk in pattern claims is selection bias—and population definition is what addresses selection bias. Process tracing evidence is required for mechanism claims but not for pattern claims, because the inferential risk in mechanism claims is unverified causal chains—and process tracing is what verifies them. This mapping is not a simplification of SRQR; it is a structural redesign that connects disclosure to inferential logic rather than treating all qualitative work as a single genre.
Second, the SMA converts expert judgments into binary checkable disclosures. SRQR Item 14 asks whether “data analysis” methods are adequately described. SRQR Item 15 asks about “techniques to enhance trustworthiness.” COREQ asks whether the “coding tree” has been provided and whether the “theoretical framework” is identified. Evaluating whether an analytic method is “adequately described,” whether trustworthiness techniques are “appropriate,” or whether a coding tree is well-constructed requires precisely the methodological expertise that student editors lack and that annual turnover prevents them from acquiring. The SMA’s elements ask different questions: Is the population defined? Is the selection procedure disclosed? Are negative cases reported? How many were found? These are presence-or-absence questions. Either the population is defined or it is not. Either negative cases are reported or they are not. This operability comes at a cost: the SMA cannot assess whether a research design is optimal, only whether required disclosures are present. That tradeoff is deliberate. The SMA raises the floor of what law reviews can verify; it does not claim to replicate the ceiling that expert peer review provides.
Third, the SMA integrates with the best available evidence doctrine through Element 7 (Constraint Disclosure), addressing legally mandated opacity that no existing framework confronts. SRQR, COREQ, GRAMMS, and JARS-Qual were designed for health sciences, psychology, and social science research where legal barriers to disclosure are uncommon. They provide no guidance for research constrained by attorney-client privilege, judicial sealing, trade secret doctrine, or contractual confidentiality—constraints pervasive in legal scholarship. Element 7 fills this gap by requiring authors to identify constraints with specific legal citation, enabling editors to route constrained submissions to silver standard evaluation rather than rejecting them for incomplete disclosure. The SMA is thus not a simplified SRQR. It is a different tool designed for a different institutional context, addressing a different audience, with a distinctive feature (constraint accommodation) that the context demands. Journals seeking more comprehensive methodological screening—Northwestern’s empirical issue, JELS, the Journal of Law and Empirical Analysis—can layer SRQR or COREQ requirements on top of the SMA. The frameworks are complementary, not competing.
This integration between the SMA and the BAE doctrine has no analogue in existing qualitative reporting standards.86
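To make the first difference concrete, the claim-type-to-disclosure mapping can be written out as a small lookup table. The Python sketch below is purely illustrative, not part of any proposed standard: the labels are informal shorthand for the SMA elements described in Section B, and the entry for interpretive claims anticipates the rubric checkpoints discussed in Section C rather than quoting the text above.

    # Illustrative sketch only: each claim type keyed to the disclosure
    # that addresses its characteristic inferential risk. Labels are
    # informal shorthand, not a proposed schema.
    REQUIRED_BY_CLAIM_TYPE = {
        "pattern": {
            "risk": "selection bias",
            "disclosures": ["population definition", "selection procedure"],
        },
        "mechanism": {
            "risk": "unverified causal chain",
            "disclosures": ["step-by-step process tracing evidence"],
        },
        "interpretive": {
            # Risk label here is an illustrative gloss, not a term from the text.
            "risk": "analyst-imposed rather than participant-centered meaning",
            "disclosures": ["participant-centered evidence", "validation measures"],
        },
    }

The point of the table is structural: the key determines the value, which is exactly the relationship a uniform checklist cannot express.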
The formal model in Appendix D shows that, under specified assumptions, these tools function as screening technologies: by raising the acceptance probability for rigorous work relative to weak work, they increase the payoff to methodological investment that current practice leaves unrewarded. When the screening improvement is large enough, the model predicts that the adverse selection dynamic reverses.87
This analysis parallels Hillel Bavli’s recent work on credibility in quantitative empirical legal analysis. Bavli’s Design, Analysis, Statistics, and Sensitivity framework addresses the credibility crisis in quantitative legal scholarship by specifying disclosure requirements that make analytical choices transparent and verifiable.88 DASS operates on the supply side, providing authors with a framework for demonstrating credibility. The SMA and audit rubric complement DASS by providing the demand-side infrastructure law reviews need to evaluate qualitative and mixed methods submissions. Together, these frameworks could establish comprehensive methodological accountability for empirical legal scholarship.89
B. Standardized Methodological Abstract
The standardized methodological abstract requires authors to state their inferential commitments in structured form. It functions as a transaction-cost-reducing device, paralleling existing disclosure requirements that law reviews already enforce.90 Bluebook citations standardize how sources appear, reducing verification costs for editors checking accuracy. Conflict-of-interest disclosures standardize how affiliations are reported, making potential bias visible. The abstract standardizes how methodology is disclosed, making claim-evidence fit checkable.
The abstract accomplishes three functions simultaneously. First, it makes methodological choices visible. Authors cannot hide selection logic in hedged prose or bury limitations in footnotes. The structured format requires explicit statements about populations, selection procedures, and negative case analysis. Second, it converts quality assessments into disclosure checks. Evaluating whether a research design is sound requires methodological expertise. Evaluating whether a population is defined, a selection procedure is disclosed, and negative cases are analyzed requires attention to whether required fields are complete and responsive. The SMA replaces the first question, which generalist editors cannot reliably answer, with the second, which they can. Third, it creates enforcement leverage. Failure to complete the SMA triggers a presumption against publication, shifting default rules from acceptance-unless-flawed to rejection-unless-disclosed.
Seven required elements implement these functions, each corresponding to verification needs identified through the claim typology.
Element 1: Claim Classification. The author identifies each principal empirical claim and classifies it as pattern, mechanism, or interpretive. This classification enables claim-specific evaluation. Editors need not determine claim type independently; the author provides that information. Classification also disciplines authors: an author forced to classify claims must confront whether evidence matches claim type. A paper presenting correlations while claiming causation must either supply mechanism evidence or reclassify as pattern.91
Element 2: Population Definition. For pattern claims, the author defines the population to which the claim applies and explains how observed cases relate to that population. This element prevents inferential overreach by requiring authors to bound their claims. The definition must be specific: not “firms” but “venture-backed biotechnology firms in California, Texas, and Delaware during 2015-2020.” Specificity forces acknowledgment of scope limitations rather than obscuring them in hedged prose.92
Element 3: Selection Procedure. The author explains how cases, subjects, or documents were selected. The disclosure specifies whether selection occurred before or after outcomes were examined and whether criteria were systematic or convenience-based. Temporal sequencing matters because post-hoc selection creates cherry-picking risk. An author who selected cases after observing outcomes may have chosen confirming instances while ignoring disconfirming ones.93
Element 4: Disconfirmation Definition. The author states what evidence would disconfirm each claim. This requirement forces articulation of falsifiability conditions ex ante rather than rationalization of findings ex post. For a pattern claim that “firms typically choose trade secrecy when facing patent uncertainty,” the disconfirmation definition might specify: “Finding multiple firms that maintained patent strategies despite documented awareness of doctrinal uncertainty would disconfirm the typicality assertion.” For mechanism claims, the definition specifies what evidence would show the proposed causal process did not operate.94
Element 5: Negative Case Analysis. The author reports results of the negative case search: how many potential counterexamples were identified, how they were investigated, and whether they led to scope adjustment or claim modification. This element is critical for editorial verification. A paper reporting zero negative cases invites scrutiny: either the claim is unusually robust, or the search was inadequate. A paper reporting negative cases and explaining how they refined the claim demonstrates methodological seriousness and provides editors with concrete material for evaluation.95
Element 6: Validation Measures. The author describes validation techniques employed and their results. For pattern claims, validation might include triangulation with different data sources. For mechanism claims, validation might include documentary evidence corroborating interview statements. For interpretive claims, validation might include member checking or multiple coders.96 The element requires reporting results, not merely naming techniques. An author claiming member checking must report what participants said when findings were shared.
Element 7: Constraint Disclosure. The author identifies any legal, contractual, or ethical constraints preventing fuller transparency. This element interfaces with the BAE doctrine developed in Part IV. An author constrained by attorney-client privilege cannot disclose confidential communications. An author bound by protective order cannot share sealed documents. The constraint must be specifically identified and sourced, not merely gestured at. This disclosure enables editors to evaluate whether silver standard verification applies and whether compensatory rigor is adequate.97
This design shifts the burden of production from editor to author in a manner analogous to burden-shifting in legal doctrine. The author has already conducted the research; the abstract requires documenting it in structured form. Authors who conducted rigorous research can complete the form as a documentation exercise, not an additional research requirement. Authors who cannot complete it likely have not conducted the research the form describes. The inability to specify a population, articulate disconfirmation criteria, or report negative case analysis suggests these methodological steps were not taken.98
The structured format enables fast-fail screening. If required elements are missing, the paper returns without substantive review. This prevents wasted effort on both sides. Editors do not invest time evaluating arguments in papers that cannot satisfy basic disclosure requirements. Authors receive clear signals about what completion requires. The disclosure requirement thus functions as a sorting mechanism: papers demonstrating methodological seriousness proceed to substantive evaluation; papers lacking basic disclosure return for completion.
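For journals that track submissions electronically, the fast-fail check is mechanical enough to automate. The Python sketch below is a minimal illustration under stated assumptions: the field names are informal stand-ins for the Appendix A template, and Element 7 is excluded from the automated check because whether constraints exist is an editorial judgment.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class SMA:
        """Informal stand-in for the Appendix A template; names are illustrative."""
        claim_types: list = field(default_factory=list)  # Element 1
        population: Optional[str] = None                 # Element 2 (pattern claims)
        selection_procedure: Optional[str] = None        # Element 3
        disconfirmation: Optional[str] = None            # Element 4
        negative_cases: Optional[str] = None             # Element 5
        validation: Optional[str] = None                 # Element 6
        constraints: Optional[str] = None                # Element 7 (if any apply)

    def fast_fail(sma: SMA) -> list:
        """Return missing Required disclosures; an empty list means proceed to audit."""
        missing = []
        if not sma.claim_types:
            missing.append("Element 1: claim classification")
        if "pattern" in sma.claim_types and not sma.population:
            missing.append("Element 2: population definition")
        if not sma.selection_procedure:
            missing.append("Element 3: selection procedure")
        if not sma.negative_cases:
            missing.append("Element 5: negative case analysis")
        return missing

A non-empty result triggers a form letter rather than substantive review, which is precisely the sorting function described above.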
The complete SMA template, ready for adoption, is reproduced in Appendix A.
C. Audit Rubric
The audit rubric translates disclosure requirements into verification checkpoints. It operationalizes the claim typology’s insight that different claims require different evidence by specifying what editors should check for each claim type. The rubric parallels standards-of-review doctrine in administrative law: just as courts reviewing agency action assess whether findings are supported by substantial evidence and reasoning is adequately explained, editors using the rubric assess whether empirical claims are supported by disclosed evidence and methodology is documented.99
The rubric distinguishes three severity levels to allocate editorial effort efficiently. The complete rubric is reproduced in Appendix B.
Required Items are non-negotiable. Papers missing Required items do not receive detailed feedback; they return with a form letter identifying the missing disclosure. Required items include: claim type classification, population definition (for pattern claims), selection procedure disclosure, negative case analysis, and constraint disclosure (if constraints exist). No paper publishes with missing Required items unless the best available evidence doctrine applies and silver standard requirements are satisfied. The rationale is straightforward: without these disclosures, editors cannot perform even basic verification. The paper fails at the threshold.100
Strongly Expected Items trigger revision requests. Papers can proceed if authors provide compelling justification for absence, but missing these items presumptively requires revision before acceptance. Strongly Expected items include: validation measures appropriate to claim type, negative case count and disposition, scope adjustment based on negative cases, and response rates or participation rates. Missing a Strongly Expected item does not trigger automatic rejection. It triggers a request for explanation. An author who conducted thirty interviews but reports no response rate must explain why that information is unavailable. An author claiming saturation must report how many interviews preceded the saturation judgment.101
Context-Dependent Items are assessed based on methodological fit, not checklist compliance. Whether they apply depends on research design. Intercoder reliability matters for structured coding but not for interpretive analysis. Member checking matters for interviews but not for archival research. Preregistration matters for confirmatory studies but not for exploratory inquiry.102 Context-Dependent items prevent mechanical application of standards that may not fit the research in question. Editors apply Context-Dependent checkpoints only when the research design makes them relevant.
This severity system creates graduated consequences. Required items operate as hard constraints: missing them triggers rejection or major revision. Strongly Expected items operate as presumptions: missing them invites scrutiny and justification demand. Context-Dependent items operate as flexible standards: editors assess relevance before evaluating compliance. The graduation matches editorial effort to verification need. Papers with all Required items receive substantive review. Papers missing Required items return immediately.
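Stated as a decision rule, the three tiers reduce to a short dispatch on severity. The sketch below is an illustrative restatement of the graduated consequences, not a substitute for the Appendix B rubric; the action strings paraphrase the text above.

    from enum import Enum

    class Severity(Enum):
        REQUIRED = 1            # hard constraint
        STRONGLY_EXPECTED = 2   # rebuttable presumption
        CONTEXT_DEPENDENT = 3   # relevance assessed before compliance

    def consequence(severity: Severity, present: bool, relevant: bool = True) -> str:
        """Map a rubric item's tier and status to the editorial action the tiers prescribe."""
        if severity is Severity.REQUIRED:
            return "proceed" if present else "return with form letter (unless BAE applies)"
        if severity is Severity.STRONGLY_EXPECTED:
            return "proceed" if present else "request revision or compelling justification"
        # Context-Dependent items: relevance is checked before compliance.
        if not relevant:
            return "not applicable to this research design"
        return "proceed" if present else "request explanation of methodological fit"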
The rubric enables two verification modes. Fast-fail screening—checking whether required SMA elements are present—is estimated at three to five minutes. Does the paper include a completed SMA? Does it classify claims? Does it define populations for pattern claims? If any Required item is missing, the paper returns for completion before substantive evaluation. Fast-fail screening is administrative, not substantive. It does not assess argument quality. It assesses whether required disclosures are present.103
Based on the rubric's structure (approximately fifteen claim-type-specific checkpoints, most requiring presence-or-absence assessment), a full audit should take roughly fifteen to twenty-five minutes per submission. Editors apply Rubric checkpoints matched to the paper's claim types. A paper making pattern claims receives pattern-claim checkpoints: Is the population defined with sufficient specificity? Is the selection procedure disclosed with temporal information? Are negative cases reported with count and disposition? A paper making mechanism claims receives mechanism-claim checkpoints: Is the mechanism specified step by step? Is within-case evidence provided for each step? Are alternative mechanisms addressed? A paper making interpretive claims receives interpretive-claim checkpoints: Is participant-centered evidence present? Is saturation documented? Was validation attempted?104
Effort scales to claim structure. A paper making only pattern claims does not receive mechanism-claim checkpoints. A paper making only interpretive claims does not receive pattern-claim checkpoints. This scaling prevents over-auditing while ensuring that each claim type receives appropriate scrutiny.
The rubric checkpoints are designed for generalist application. They do not ask editors to evaluate whether the methodology is sophisticated or whether the findings are important. They ask whether required disclosures are present and whether evidence matches claim type. These are checkable questions. Either the population is defined or it is not. Either negative cases are reported or they are not. Either within-case evidence supports each mechanism step or it does not. Editors can answer these questions by examining the SMA and the paper’s methodology section, comparing what is claimed to what is disclosed.105
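The claim-type routing of the full audit can be expressed the same way. The checkpoint questions below are taken directly from the discussion above; the lookup structure itself is an illustrative sketch, not the Appendix B rubric.

    # Checkpoint questions quoted from the discussion above; the routing
    # structure is illustrative only.
    CHECKPOINTS = {
        "pattern": [
            "Is the population defined with sufficient specificity?",
            "Is the selection procedure disclosed with temporal information?",
            "Are negative cases reported with count and disposition?",
        ],
        "mechanism": [
            "Is the mechanism specified step by step?",
            "Is within-case evidence provided for each step?",
            "Are alternative mechanisms addressed?",
        ],
        "interpretive": [
            "Is participant-centered evidence present?",
            "Is saturation documented?",
            "Was validation attempted?",
        ],
    }

    def audit_checkpoints(claim_types):
        """Effort scales to claim structure: only checkpoints for claimed types apply."""
        return [q for t in claim_types for q in CHECKPOINTS.get(t, [])]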
D. Enforcement Design
Law reviews already enforce institutional requirements. The Bluebook citation system demands standardized formatting; journals refuse to publish non-compliant submissions. Conflict-of-interest policies require disclosure; journals request corrections before publication. Word limits are enforced. These requirements function because journals treat them as conditions of publication, not suggestions.
The parallel is instructive. No one argues that citation formatting is more important than substantive argument. But journals enforce citation standards because standardization serves institutional purposes: it enables cross-referencing, reduces verification costs, and signals professional competence. Methodological disclosure serves analogous purposes. It enables verification, reduces transaction costs for editors, and signals that empirical claims rest on systematic inquiry rather than selective illustration.
Without enforcement, the SMA becomes decorative: a form authors complete without engagement and editors review without consequence. Research on transparency requirements confirms this concern. When psychology journals adopted disclosure requirements without enforcement mechanisms, compliance was minimal and cosmetic.106 Meaningful change occurred only when journals made disclosure a condition of publication and editors applied consequences for non-compliance.107 The lesson applies to law reviews: disclosure requirements without enforcement produce paperwork, not accountability.
To make the governance technologies consequential, law reviews should adopt three enforcement principles.
First, Default Rule. Failure to provide required disclosures triggers a presumption against publication of the empirical claim. Authors cannot satisfy this presumption by promising to provide information later; they must provide it with the submission. The only exception is when legal or contractual barriers prevent disclosure, invoking the best available evidence doctrine. The default rule operates as a screening device: papers lacking required disclosures return before substantive review. Editorial effort concentrates on papers that can satisfy basic requirements.108
Second, Revision Protocol. When editors identify deficiencies through the audit rubric, they reference specific Rubric items in revision letters. The request is not “strengthen your methodology section.” The request is concrete: “Element 5 of your SMA reports zero negative cases found, but your sampling strategy makes negative cases likely. Please explain your search procedure and report any cases that contradict or complicate your pattern claim.” Specific requests create accountability. Authors know what they must provide. Editors can verify whether revisions satisfy requirements.109
This protocol differs from current practice. Law review revision letters often request vague improvements: “strengthen the empirical section,” “clarify methodology,” “address limitations.” These requests are unverifiable. Authors can respond with prose that sounds responsive but changes nothing substantive. The rubric protocol requires concrete disclosure. Either the SMA element is complete or it is not. Either negative cases are reported or they are not. (A sketch of how item-keyed requests can be assembled appears at the end of this Section.)
Third, Publication Condition. No empirical claim publishes without complete SMA and satisfactory Rubric evaluation. Journals may offer authors the option of removing empirical claims that cannot satisfy disclosure requirements while publishing the remaining contribution. A paper might proceed with its theoretical argument while withdrawing an unsupported empirical assertion. But empirical claims without methodological disclosure do not appear in print.110
This principle distinguishes doctrinal analysis from empirical assertion. Doctrinal arguments rest on legal reasoning; they require no methodological disclosure beyond citation of authorities. Empirical claims rest on observations of the world; they require verification infrastructure. A paper can do both, but each component must satisfy its appropriate standard. This distinction preserves editorial discretion over doctrinal contribution while establishing methodological accountability for empirical claims.
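Because rubric items are discrete, item-keyed revision requests can be assembled almost mechanically. The Python sketch below illustrates the idea with the Element 5 example from the revision protocol above; the names and wording are illustrative, and actual letters would of course be drafted and adapted by editors.

    def revision_request(element: str, finding: str, request: str) -> str:
        """Assemble a concrete revision request tied to a specific SMA element."""
        return f"{element} of your SMA reports {finding}. Please {request}."

    letter = revision_request(
        element="Element 5",
        finding="zero negative cases found",
        request=("explain your search procedure and report any cases that "
                 "contradict or complicate your pattern claim"),
    )

The value of the template is not automation for its own sake; it is that every request names a checkable item, so both author and editor can verify whether a revision satisfies it.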
E. Implementation: A Minimal Adoption Package
The governance technologies must work within law reviews’ actual institutional constraints, and each constraint generates a specific design requirement. Annual editorial turnover means the framework must be learnable during board transition, which is why the rubric uses structured checkpoints rather than open-ended evaluation. Limited resources mean verification must be time-bounded, which is why the rubric’s severity tiers concentrate editorial effort on Required items before proceeding to Strongly Expected and Context-Dependent items. Competing demands mean screening must integrate with existing workflow, which is why fast-fail screening parallels existing compliance checks for formatting and citation. An implementation framework that ignored these constraints would fail regardless of its theoretical merits.
A minimal adoption package for a flagship law review would include three components.
Component 1: Submission Requirement. Papers presenting empirical claims must include a completed SMA as a condition of submission. The SMA template (reproduced in Appendix A) is a one-page form. Author guidelines specify that papers without an SMA will be returned without review. This requirement parallels existing requirements for anonymized submissions, word counts, and citation format. Implementation requires updating author guidelines and training submissions editors to check for SMA completion.111
Component 2: Audit Protocol. Papers passing initial screening receive audit rubric evaluation. The rubric (reproduced in Appendix B) is a two-page checklist. One designated articles editor applies the rubric during substantive review, checking SMA disclosures against Rubric requirements and flagging deficiencies for revision requests. Full audit is estimated at fifteen to twenty-five minutes per paper based on Rubric structure. For a journal receiving one hundred empirical submissions annually, this represents approximately twenty-five to forty hours of editorial time across all submissions (one hundred submissions at fifteen to twenty-five minutes each), comparable to existing substantive review investments.112
Component 3: Training Module. A training component covers the claim typology, SMA elements, and Rubric application, emphasizing that editors check for disclosure rather than evaluate methodological sophistication. Worked examples demonstrate claim classification, missing-disclosure identification, and specific revision requests.113
The institutional cost is comparable to requirements law reviews already enforce: submission formatting, anonymization protocols, and citation compliance.
The benefits of adoption extend beyond individual paper quality. If the framework is adopted and enforced, it should make methodological investment consequential for acceptance decisions—a change that would give rigorous qualitative scholars reason to submit their strongest work to adopting journals rather than routing it exclusively to peer-reviewed venues. Whether adoption produces this predicted shift is an empirical question that the framework’s operation would help answer.114
Part IV: Best Available Evidence Doctrine
Some of the most important questions in empirical legal scholarship involve phenomena that law itself places behind confidentiality barriers. How litigation strategy actually operates, how settlement dynamics unfold, how corporate counsel advises on regulatory risk—these questions require access to evidence that attorney-client privilege, judicial sealing, contractual confidentiality, trade secret doctrine, and institutional review board restrictions may prevent researchers from disclosing. The governance technologies developed in Part III assume transparent disclosure is possible. This Part develops a framework for accommodating research where it is not, so that legal scholarship’s most consequential questions remain within reach.
The best available evidence doctrine addresses a problem that existing qualitative reporting standards do not confront. SRQR, COREQ, GRAMMS, and JARS-Qual assume researchers can disclose their evidence, methods, and analytical procedures.115 They provide no guidance for situations where legal barriers prevent disclosure. Yet such situations are pervasive in legal scholarship. Scholars studying litigation strategy encounter attorney-client privilege. Scholars studying settlement dynamics encounter judicial sealing. Scholars studying corporate decision-making encounter contractual NDAs and trade secret protection. Scholars studying professional practice encounter IRB confidentiality requirements. Without a framework for accommodating these constraints, transparency requirements would inadvertently bias scholarship toward studying easily observable phenomena and away from legally important questions where evidence is confidential.
This is a different selection problem from the one analyzed in Part I. There, the concern was quality: weak screening makes rigorous work indistinguishable from weak work. Here, the concern is topic: strict transparency requirements that cannot accommodate legal constraints channel scholarship toward easily observable phenomena and away from important but confidential ones. The BAE doctrine addresses this second problem; the tools in Parts II and III address the first.116
A. Five Legal Constraints on Empirical Transparency
Empirical research requires transparency. Readers must be able to evaluate whether evidence supports claims. This requires disclosure of evidence itself, of selection procedures that determined what evidence was gathered, of analytical processes that transformed evidence into findings, and of negative cases that complicate or qualify conclusions. The standardized methodological abstract operationalizes these requirements. Without transparency, verification is impossible, and the credibility that distinguishes empirical scholarship from assertion evaporates.
But law creates opacity. Multiple legal doctrines mandate confidentiality that prevents the disclosure transparency requires.
Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining legal advice.117 The privilege exists to encourage candid communication; clients must be able to share sensitive information without fear of disclosure. A researcher studying how corporate counsel advises on regulatory compliance cannot disclose the privileged communications that constitute her primary evidence. Doing so would breach the privilege, expose her sources to liability, and destroy the trust relationships that enabled the research.
Judicial sealing and protective orders restrict disclosure of litigation materials. Federal Rule of Civil Procedure 26(c) authorizes courts to enter protective orders limiting disclosure of discovery materials to protect parties from “annoyance, embarrassment, oppression, or undue burden or expense.”118 Settlement agreements routinely include confidentiality provisions sealing not just settlement terms but underlying litigation records.119 A researcher studying how settlement dynamics vary across case types cannot access the sealed materials that would reveal what actually happens in settlement negotiations.
Contractual confidentiality agreements bind researchers who negotiate access to corporate data. A researcher seeking to study how firms make strategic decisions typically signs a nondisclosure agreement as a condition of access.120 The agreement specifies what can be disclosed: often nothing beyond aggregate patterns, sometimes anonymized examples subject to gatekeeper approval. The researcher who breaches these agreements faces contract damages and reputational consequences that would end future access.
Trade secret doctrine creates a distinctive constraint: disclosure eliminates the legal right being studied. Under the Uniform Trade Secrets Act, information qualifies as a trade secret only if it “derives independent economic value... from not being generally known” and “reasonable efforts” are made to maintain secrecy.121 A researcher studying how firms use trade secrecy as an alternative to patent protection confronts a methodological trap: disclosing the secrets being studied eliminates their legal status as trade secrets and potentially exposes the researcher to misappropriation liability.122
Institutional review board restrictions protect human subjects. The Common Rule requires that research minimize risks to subjects and provide adequate confidentiality protections.123 IRB-approved protocols typically promise confidentiality to participants; disclosure would breach that promise and potentially expose subjects to harm. A researcher studying how compliance officers interpret ambiguous regulations cannot identify specific officers or institutions without violating IRB commitments.124
These constraints share a common structure. Each represents a legal judgment that some value (candid legal advice, settlement finality, commercial confidentiality, innovation incentives, human subject protection) outweighs the value of transparency. Each serves legitimate public purposes. And each creates barriers to the disclosure that empirical verification demands.
The transparency paradox emerges from this conflict. Empirical rigor requires transparency. Legal doctrine mandates opacity. Researchers cannot comply with both demands simultaneously. They cannot disclose privileged communications without breaching privilege. They cannot reveal sealed materials without court permission. They cannot violate NDAs without destroying the relationships that enabled research access. Compliance with legal and ethical obligations means opacity; opacity means research that cannot be verified by conventional standards.
B. Gold Standard and Silver Standard Verification
The best available evidence doctrine resolves the transparency paradox by distinguishing two verification standards. Gold standard verification applies when full transparency is possible. Silver standard verification applies when legal constraints prevent full transparency but the research question is sufficiently important that constrained inquiry serves scholarly purposes.
Gold standard requires complete disclosure. The researcher provides all evidence, explains all selection procedures, documents all analytical steps, and reports all negative cases. Readers can trace the path from raw evidence to stated conclusions. This is the default standard. When no legal barriers prevent disclosure, gold standard applies, and the abstract and rubric evaluate whether the paper satisfies it.
Silver standard permits constrained disclosure when four conditions are satisfied.
First, the legal constraint must be precisely identified and documented. The author must specify the legal doctrine, contractual provision, or ethical requirement that prevents disclosure. “Confidentiality concerns” is insufficient. “The interview subjects are corporate counsel whose communications with their clients are protected by attorney-client privilege under [State] law, and disclosure would breach Rule 1.6 of the Model Rules of Professional Conduct” is sufficient. The constraint must be sourced to specific legal authority, not merely asserted.125
Second, claims must be narrowed to match constrained evidence. An author who cannot disclose firm-specific strategic deliberations cannot claim to have established how “firms generally” make decisions. She can claim to have identified patterns among the constrained sample she studied. The scope of claims must correspond to what constrained evidence can support. Narrowing is not merely cosmetic hedging; it requires genuinely limiting inferential ambition to match available verification.126
Third, compensatory rigor must address the verification gap. When direct disclosure is impossible, indirect verification techniques must substitute. Triangulation uses multiple data sources to corroborate findings. If interview statements cannot be disclosed, documentary evidence supporting those statements can be. Process documentation provides detailed accounts of analytical procedures even when underlying data cannot be shared. Aggregate reporting presents patterns without identifying specifics: “18 of 25 firms exhibited pattern X” provides verification of frequency without disclosing which firms or their specific situations. Member checking confirms interpretations with participants even when verbatim quotations cannot be published.127
Fourth, negative case analysis must be documented even when granular results cannot be disclosed. The author must report that negative case analysis occurred, how many negative cases were identified, and how they affected the claim’s scope, even if specific negative cases cannot be described in detail. “Five of twenty-five firms did not exhibit the expected pattern; all five operated in technology areas where trade secrecy was infeasible, suggesting the pattern is contingent on technology type rather than universal” provides verification-relevant information without identifying specific firms.128
Silver standard does not lower rigor. It shifts verification from direct inspection of evidence to inspection of process and compensatory measures. A carefully executed silver standard paper may be more rigorous than a carelessly executed gold standard one. The question is whether the verification available, given legal constraints, provides adequate warrant for the claims made.
Existing qualitative reporting standards do not address law-created opacity with structured editorial criteria. General research ethics frameworks address confidentiality as a consideration but do not specify how methodological accountability should operate under legal constraints. The doctrine fills this gap by providing structured criteria that editors can apply to evaluate constrained research.129
C. Constraint Types and Verification Strategies
Different legal constraints permit different verification strategies. This section develops silver standard approaches for the major constraint types encountered in qualitative legal scholarship.
1. Attorney-Client Privilege
Attorney-client privilege presents the paradigm case for silver standard verification. The privilege protects communications, not facts. A client’s legal strategy is privileged when communicated to counsel; the business facts underlying that strategy may not be.130 This distinction creates verification opportunities.
Consider a researcher studying how corporate counsel advises on patent strategy following doctrinal uncertainty. She interviews general counsel at twenty-five technology firms, exploring how Alice Corp. v. CLS Bank International affected patent-versus-trade-secret decisions.131 The communications between counsel and their corporate clients are privileged. Verbatim disclosure would breach the privilege.
Constrained-disclosure verification permits the following:
Aggregate pattern reporting. The researcher reports aggregate findings without firm identification: “Twenty of twenty-five general counsel reported that Alice affected their patent filing recommendations. Fifteen characterized the effect as ‘significant’ or ‘transformative.’ Five reported minimal effect, concentrated in technology areas (semiconductors, mechanical devices) where Alice concerns are less salient.” This aggregation enables pattern claims while respecting confidentiality.
Process tracing through non-privileged evidence. The mechanism claim that doctrinal uncertainty caused strategic shift can be supported through non-privileged evidence. Patent filing data is public. Changes in trade secret protocols may be documented in corporate policies not covered by privilege. The researcher can trace the mechanism through observable steps: Alice decided (public) → patent applications in affected technology areas declined (public patent office data) → firms implemented enhanced confidentiality protocols (confirmable through policy documents if access permitted) → counsel report Alice as cause (interview data, reported in aggregate).
Claim narrowing. Claims must match constrained evidence. “Among general counsel at venture-backed technology firms who agreed to be interviewed, Alice awareness correlated with increased attention to trade secret strategy in 80% of cases. Causal attribution was explicit in 70% of cases. The remaining 20% showed awareness without strategic change, concentrated in technology areas where trade secrecy was infeasible.” This claim is bounded and verified within constraint limits.
Editors applying silver standard to privilege-constrained research should verify that the constraint is legally accurate (is the material actually privileged?), that claims are appropriately narrowed, that compensatory verification is present, and that negative case analysis is documented. A paper meeting these requirements is publishable despite inability to disclose privileged communications.132
2. Judicial Sealing and Protective Orders
Sealed materials present different verification challenges. Unlike privilege, which protects communications, sealing often protects entire categories of documents and proceedings. Settlement agreements, discovery materials, and judicial records may be inaccessible. But sealing rarely covers everything relevant to a research question.
Consider a researcher studying how patent doctrine affects settlement dynamics. She hypothesizes that doctrinal uncertainty increases settlement rates as parties face higher litigation risk. Settlement agreements are sealed. Settlement negotiations are confidential.
Constrained-disclosure verification permits the following:
Public signals of sealed phenomena. Settlement rates by technology area, year, and district are derivable from public docket entries. A case that settles produces a docket notation even when settlement terms are sealed. The researcher can document whether software patent cases showed different settlement rate changes after Alice than other patent cases using entirely public data.
Timing analysis. Whether settlement rates changed after major doctrinal shifts can be studied through public disposition data. If the mechanism claim is correct, settlement rate increases should coincide with doctrinal changes.
Interviews about process without content. Attorneys can discuss settlement processes without disclosing sealed content. How do patent litigators assess settlement value under doctrinal uncertainty? What factors influence settlement timing? These questions can be explored without accessing sealed agreements.
Variation analysis. Whether settlement patterns differ between districts with different doctrinal interpretations can be studied through public data. If the mechanism linking uncertainty to settlement operates as proposed, variation across districts should be observable.
Claims must be narrowed accordingly. “Following Alice, settlement rates in software patent cases increased by 23%, consistent with parties preferring settlement under doctrinal uncertainty. Interviews with patent litigators confirm that Alice affected settlement value assessments, though specific settlement terms remain confidential.” This claim rests on public data and process interviews, not sealed materials.133
3. Contractual Confidentiality and Trade Secrets
Contractual confidentiality and trade secret doctrine create overlapping but distinct constraints. NDAs limit what researchers can disclose about information provided under agreement. Trade secret doctrine creates liability risk for disclosing information that meets the statutory definition. Both permit aggregate disclosure and process documentation.
Consider a researcher studying why firms choose trade secrecy over patents. She gains access to strategic deliberations at twenty firms under NDAs specifying that no firm-identifying information may be disclosed.
Constrained-disclosure verification permits the following:
Aggregate pattern reporting without identification. “Fifteen of twenty firms cited enforcement difficulty as a factor in choosing trade secrecy; twelve cited Alice uncertainty; eight cited speed-to-market considerations.” This aggregation permits pattern claims while respecting confidentiality.
Mechanism documentation without firm identity. “The decision sequence observed across firms followed this pattern: first, assessment of patent eligibility doctrine risk; second, comparison of patent and trade secret protection costs; third, cost-benefit analysis weighing enforcement probability against protection duration; fourth, implementation of chosen strategy.” This mechanism documentation reveals no individual firm’s strategy.
Negative case analysis in aggregate. “Five firms maintained patent-first strategies despite Alice awareness. All five operated in technology areas with strong patent eligibility, suggesting the pattern is contingent on technology type rather than universal.” This negative case analysis refines scope without identifying specific firms.134
Trade secret constraints require particular care. Disclosure must not eliminate trade secret status for participating firms. The researcher should confirm that aggregate reporting does not permit reverse-engineering of firm-specific strategies. When in doubt, the researcher should seek firm approval before publication, as required by most NDA provisions.135
4. Institutional Review Board Restrictions
IRB constraints differ from other categories because they are ethical and regulatory rather than strictly doctrinal. The researcher has legal authority to disclose; the constraint is that disclosure would harm subjects or violate informed consent commitments. IRB restrictions typically permit methodology disclosure while restricting identification.
Consider a researcher studying how compliance officers interpret regulatory ambiguity. She interviews forty compliance officers under an IRB protocol promising confidentiality.
Constrained-disclosure verification permits the following:
Full methodology disclosure. The interview protocol, sampling frame, and selection procedure can be disclosed. IRB restrictions concern subject identification, not methodology transparency. The abstract can be completed with full detail on research design.
Aggregate pattern reporting. “Thirty-four of forty compliance officers described the regulation as creating ‘symbolic’ obligations rather than substantive constraints. Six described it as creating meaningful operational requirements.” This pattern claim preserves subject confidentiality.
Anonymized quotations. Quotations can be reported without identifying speakers if they cannot be traced to individuals: “One officer explained: ‘We treat it as a box-checking exercise. The auditors want to see documentation. Nobody believes it actually prevents fraud.’” Care must be taken that quotation context does not permit identification.136
Saturation and validation documentation. The researcher can report that saturation was reached at twenty-eight interviews, that member checking was conducted with twelve participants, and that participants generally affirmed the interpretation. These verification elements are fully disclosable.
IRB constraints are typically the least restrictive for silver standard purposes because they target identification rather than methodology or findings. Researchers operating under IRB protocols should be able to satisfy most SMA requirements without difficulty.137
D. Integration with the Governance Framework
The best available evidence doctrine integrates with the SMA and audit rubric through Element 7 of the SMA: Constraint Disclosure. When an author identifies legal constraints preventing full transparency, editors evaluate whether silver standard requirements are satisfied rather than whether gold standard disclosure is complete.
The integration operates in five steps (a schematic sketch follows the step descriptions):
Step 1: Constraint Identification. The author completes SMA Element 7, specifying the legal doctrine, contractual provision, or ethical requirement preventing disclosure and providing legal citation for the constraint.
Step 2: Constraint Verification. Editors verify that the claimed constraint is legally accurate. Is attorney-client privilege actually applicable to the materials in question? Is the NDA actually enforceable? This verification does not require legal expertise beyond what law students possess; it requires checking that the cited legal authority supports the claimed constraint.
Step 3: Claim-Constraint Matching. Editors assess whether claims are appropriately narrowed to match constrained evidence. If the author cannot disclose firm-specific deliberations, claims about “firms generally” are unsupportable. Claims must be bounded to what constrained evidence can warrant.
Step 4: Compensatory Rigor Assessment. Editors evaluate whether compensatory verification techniques are present and adequate. Does the paper triangulate through multiple sources? Does it provide process documentation? Does it report aggregate patterns with sufficient specificity? The assessment is proportional: more ambitious claims require more compensatory rigor.
Step 5: Negative Case Documentation. Editors verify that negative case analysis is documented even if granular results cannot be disclosed. The paper must report that negative case search occurred and how results affected scope, even if specific negative cases cannot be identified.
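Restated as a procedure, the five steps form an ordered checklist that a designated editor could walk through. The Python sketch below is an informal restatement under stated assumptions: the attribute names are hypothetical, and in practice each step is an editorial judgment, not a boolean flag.

    from types import SimpleNamespace

    def evaluate_silver_standard(sub) -> str:
        """Walk the five-step BAE integration in order; attribute names are hypothetical."""
        # Step 1: constraint identified in SMA Element 7 with specific legal citation.
        if not sub.constraint_cited:
            return "return: complete Element 7 with specific legal citation"
        # Step 2: cited legal authority actually supports the claimed constraint.
        if not sub.constraint_verified:
            return "return: cited authority does not support claimed constraint"
        # Step 3: claims narrowed to what constrained evidence can warrant.
        if not sub.claims_narrowed:
            return "revise: narrow claims to match constrained evidence"
        # Step 4: compensatory rigor (triangulation, process documentation,
        # aggregate reporting) present and proportional to claim ambition.
        if not sub.compensatory_rigor:
            return "revise: add compensatory verification"
        # Step 5: negative case analysis documented, even if only in aggregate.
        if not sub.negative_cases_documented:
            return "revise: document negative case search and its effect on scope"
        return "proceed: silver standard satisfied"

    # Hypothetical usage: a submission satisfying all five steps proceeds.
    example = SimpleNamespace(
        constraint_cited=True, constraint_verified=True, claims_narrowed=True,
        compensatory_rigor=True, negative_cases_documented=True,
    )
    # evaluate_silver_standard(example) -> "proceed: silver standard satisfied"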
Papers satisfying silver standard requirements receive the same publication consideration as papers satisfying gold standard requirements. The doctrine does not create a second tier of diminished credibility. It creates an alternative verification pathway for research that cannot satisfy gold standard due to legally mandated opacity. A rigorously conducted silver standard study may provide greater scholarly value than a gold standard study where full transparency is possible but design and execution are less careful.138
E. Objections and Responses
The best available evidence doctrine may face several objections. This section addresses the most significant.
Objection 1: Silver standard is a loophole for inadequate research.
The concern is that authors will claim legal constraints to avoid disclosure requirements, using the doctrine as cover for methodological weakness. The response is that silver standard is not easier to satisfy than gold standard; it is different. The compensatory rigor requirements are demanding. Triangulation, process documentation, aggregate reporting, and negative case analysis require substantial methodological investment. An author seeking to avoid rigor would find it easier to submit to journals without disclosure requirements than to fabricate legal constraints and satisfy compensatory requirements.
Moreover, constraint verification is built into the doctrine. Authors must cite specific legal authority for claimed constraints. Editors can evaluate whether the claimed constraints are legally accurate. An author claiming attorney-client privilege for materials that are not privileged would be caught at the verification step. The doctrine does not take constraint claims at face value.139
Objection 2: Silver standard undermines verification by permitting non-disclosure.
The concern is that verification requires access to evidence, and silver standard permits withholding evidence. The response is that verification can operate through multiple channels. Direct inspection of evidence is one verification strategy. Process verification, where readers can assess whether methodology was sound even without accessing underlying data, is another. Replication verification, where other researchers can attempt to reproduce findings using similar methods, is a third. Silver standard emphasizes the second and third channels when the first is foreclosed.
Furthermore, the alternative to silver standard is not gold standard transparency; it is no research at all. If law reviews require full disclosure for all empirical claims, research on topics involving law-created opacity cannot be conducted. Scholars will study only topics where evidence is freely available. The field will be biased toward the visible and away from the important. Silver standard permits research on important topics under constrained conditions, accepting reduced verification in exchange for knowledge that would otherwise be unavailable.140
Objection 3: The doctrine is too complex for student editors to apply.
The concern is that silver standard evaluation requires sophisticated judgment that student editors lack. The response is that the doctrine is structured precisely to enable generalist application. The five-step integration process specifies what editors must check at each stage. Constraint verification asks whether cited legal authority supports claimed constraints, a task well within law student competence. Claim-constraint matching asks whether claims are bounded appropriately, a logical assessment that does not require methodological expertise. Compensatory rigor assessment asks whether specific techniques are present, checkable against a list. The doctrine is more structured than substantive peer review, not less.
Editors applying the doctrine may benefit from consultation with faculty advisors or methodological experts in marginal cases. But the core application is designed for generalist editors using structured protocols.141
Objection 4: Legitimate constraints are rare; the doctrine addresses an uncommon problem.
The concern is that law-created opacity affects only a small fraction of legal scholarship, making the doctrine an elaborate solution to an uncommon problem. The response is that the affected topics are disproportionately important. How litigation strategy actually operates, how settlement dynamics function, how corporate decision-making unfolds, how compliance officers interpret ambiguous rules: these questions are central to understanding how law works in practice. They are precisely the questions the law-and-society tradition and New Legal Realism identify as essential. If transparency requirements systematically exclude these questions, the resulting scholarship is impoverished.
Moreover, the frequency of legitimate constraints may be underestimated precisely because current norms discourage constrained research. Scholars may avoid topics involving legal opacity because they perceive no pathway to credible publication. The doctrine opens pathways that may increase research on legally important but confidential phenomena.142
F. Framework Integration
The best available evidence doctrine completes the governance framework. Parts II and III addressed the quality-selection problem: how to make methodological rigor observable so that editorial decisions can reward it. This Part has addressed the topic-selection problem: how to accommodate legally mandated opacity so that important but confidential phenomena remain within the scope of publishable scholarship.
Under this framework, scholars could study legally important phenomena involving privilege, sealing, trade secrets, and confidentiality under structured criteria, rather than being artificially restricted to questions where evidence is publicly accessible.
No single component solves the infrastructure gap identified in Part I. Together, they provide the audit architecture that qualitative empirical legal scholarship has lacked.143
Part V: Implementation Within Law Review Constraints
The preceding Parts developed a governance framework: a typology of claims specifying evidentiary warrants, disclosure and verification tools operationalizing those warrants, and a doctrine accommodating legally constrained research. Whether this framework improves qualitative scholarship depends on whether law reviews adopt it. This Part addresses adoption—showing that the framework integrates with editorial practices law reviews already maintain, and that it can be implemented within the specific constraints of student editing, annual board turnover, and limited institutional resources.
For a practical rollout kit—including an implementation timeline, form-letter templates for fast-fail screening and revision requests, and model submission-guidelines language—see Appendix C.
A. Adoption by a Single Journal
A flagship student-edited law review could adopt the framework’s core elements within one to two volume years through a minimal adoption package requiring modest changes to existing procedures.
Component 1: Standardized Methodological Abstract Requirement. The journal revises its submission guidelines to require a completed SMA for any article making qualitative or mixed methods empirical claims about law in action. Authors complete it as part of submission, alongside existing requirements for anonymization, word count compliance, and citation format. Papers submitted without an SMA receive a form letter requesting completion before review proceeds. This requirement parallels existing submission conditions; it adds one page to author obligations and one verification step to initial processing.144
The SMA requirement need not apply to all submissions. Doctrinal articles making no empirical claims about law in action proceed under existing procedures. The trigger is empirical assertion: when an article claims that legal actors behave in certain ways, that institutions operate through particular mechanisms, or that doctrine produces specified effects, the SMA applies. Authors uncertain whether their work triggers the requirement can complete the SMA as a precaution; the form documents research already conducted rather than imposing additional requirements.145
Component 2: Audit Rubric Protocol. The journal designates one articles editor as “empirical editor” or distributes the function across the articles committee. When a submission includes an SMA, the designated editor applies the rubric reproduced in Appendix B. Fast-fail screening is estimated at three to five minutes: does the SMA classify claims, define populations for pattern claims, disclose selection procedures, and report negative case analysis? If required elements are missing, the paper returns for completion. Full audit is estimated at fifteen to twenty-five minutes: the editor checks claim-type-specific requirements against the SMA and methodology section, flagging deficiencies for revision requests.146
The rubric integrates with existing editorial workflows. Law reviews already use screening memos, admissions checklists, and revision protocols. The rubric adds structured criteria for qualitative and mixed methods submissions without displacing substantive evaluation. Editors still assess whether arguments are compelling, whether contributions are significant, and whether the paper fits the journal’s editorial priorities. The rubric addresses a narrower question: does the methodology disclosed support the empirical claims made? This question is answerable through the rubric’s checkpoints.147
Component 3: Best Available Evidence Policy. The journal adds a short clause to its submission guidelines indicating that research constrained by legal barriers to transparency will be evaluated under silver standard criteria specified in the rubric. Authors claiming constraints must complete SMA Element 7 with specific legal citation. Editors verify that claimed constraints are legally accurate, that claims are appropriately narrowed, and that compensatory rigor is present. This policy clause signals that the journal will consider constrained research rather than requiring gold standard disclosure for all empirical work.148
The BAE policy need not be elaborate. A single paragraph in submission guidelines suffices: “Articles making qualitative or mixed methods empirical claims subject to legal constraints on disclosure (attorney-client privilege, judicial sealing, contractual confidentiality, trade secret protection, or IRB restrictions) should complete Element 7 of the standardized methodological abstract with specific legal citation for claimed constraints. Such articles will be evaluated under silver standard criteria, which require constraint verification, appropriately narrowed claims, compensatory rigor, and documented negative case analysis. See [link to Rubric] for evaluation criteria.”149
The framework treats methodological disclosure for qualitative and mixed methods work as an institutional requirement comparable to citation compliance and conflict disclosure.
B. Fitting into Existing Editorial Practice
Law reviews have developed sophisticated editorial procedures over more than a century of student editing. The framework proposed here does not displace those procedures. It supplements them with structured criteria for a category of scholarship (qualitative and mixed methods empirical work) that existing procedures handle poorly.
Submission Processing. Current practice typically involves initial screening for formatting compliance, anonymization, and basic fit. The SMA requirement adds one check: does the submission include a completed SMA if it makes qualitative or mixed methods empirical claims? This check parallels existing compliance verification. Submissions lacking required elements receive form letters specifying what must be provided. A form letter would read: “Your submission appears to make qualitative or mixed methods empirical claims about law in action. Please complete the attached standardized methodological abstract and resubmit. Papers making empirical claims without a completed SMA will not proceed to substantive review.”150
Screening and Selection. Current practice involves reading submissions and assessing whether they merit extended consideration. The audit rubric adds structured criteria for qualitative and mixed methods submissions. After reading a submission, the editor completes the rubric’s fast-fail screening. If required elements are present, the editor proceeds to full audit, checking claim-type-specific requirements. Rubric results inform the screening memo: “This submission makes pattern claims about firm behavior. The abstract defines the population as venture-backed biotechnology firms in three states. Selection procedure is disclosed as judgment sampling with stated criteria. Negative case analysis reports five counterexamples and explains how they refined the claim’s scope. Rubric requirements are satisfied. Recommend proceeding to full board consideration.”151
Revision Requests. Current practice involves identifying weaknesses and requesting improvements. The rubric structures revision requests for methodological deficiencies. Instead of “strengthen the methodology section,” the revision letter specifies: “Element 5 of your SMA reports that no negative cases were found, but your sampling strategy (judgment sampling of firms exhibiting the pattern of interest) creates selection bias that makes negative cases likely. Please explain your search procedure for disconfirming cases and report any cases that contradict or complicate your pattern claim. If no negative cases exist, please explain why the sampling strategy did not introduce confirmation bias.” Specific requests enable specific responses and create accountability for revision adequacy. (For model form letters implementing this protocol, see Appendix C.)152
Publication Decisions. Current practice involves balancing multiple factors: argument quality, contribution significance, topic interest, author credentials, and fit with journal priorities. The framework does not change this calculus. It adds one requirement: empirical claims must satisfy disclosure requirements appropriate to claim type. A paper might have a brilliant doctrinal argument but unsupportable empirical assertions. The journal can publish the doctrinal contribution while requiring removal or revision of empirical claims that cannot satisfy rubric requirements. This granular approach preserves editorial discretion while establishing methodological accountability.153
C. Editors, Faculty, and Authors
The framework addresses three audiences with distinct roles in qualitative and mixed methods legal scholarship.
Student Editors. The framework is designed for generalist application. Student editors need not become methodologists to apply the abstract and rubric. They need to learn the claim typology’s three categories, understand what each SMA element requires, and apply rubric checkpoints systematically. A one-hour training session, deliverable by outgoing editors to incoming boards during transition, covers these fundamentals. The training emphasizes that editors are checking for disclosure and claim-evidence fit, not evaluating methodological sophistication. The question is whether required information is present and whether evidence matches claim type, not whether the research design is optimal or the findings are important.154
Student editors already perform analogous functions. They verify citation accuracy against Bluebook requirements without being citation scholars. They assess argument coherence without being experts in every doctrinal field. They evaluate writing quality without being professional editors. The SMA and audit rubric extend this pattern: editors verify methodological disclosure against structured requirements without being methodologists. The framework provides the structure that makes verification possible; editors provide the careful attention that makes verification meaningful.155
Faculty Advisors and Methods Experts. Faculty can support implementation without supplanting student judgment. Roles might include: consulting on initial rubric calibration to ensure severity tiers are appropriately set; serving as “methods advisors” whom boards can consult when submissions present marginal cases; reviewing the first volume’s rubric applications to identify systematic errors; and helping boards develop institution-specific guidance for recurring situations. These consultative roles preserve student editorial authority while providing expert backup for difficult cases.156
Faculty involvement need not be extensive. Many submissions will clearly satisfy or clearly fail rubric requirements; these cases require no consultation. Marginal cases, where reasonable editors might disagree about whether requirements are satisfied, benefit from expert input. A faculty advisor might spend five to ten hours per volume consulting on the handful of submissions that present genuine difficulty. This investment is comparable to existing faculty advisor roles at many journals.157
Authors. For scholars conducting qualitative and mixed methods research, the framework is designed to make rigor legible. Authors who have conducted rigorous research—defined populations, disclosed selection procedures, searched for negative cases, employed validation techniques—can document that rigor through structured disclosure. The separating mechanism is straightforward: if an author searched for negative cases, she can report how many she found and how they affected the claim’s scope. If she did not search, she cannot report results she does not have. The SMA’s disclosure requirements are easier to satisfy for authors who did the work than for authors who did not, and that asymmetry is what makes rigor visible to editors who lack the expertise to evaluate it directly.158
The framework’s requirements are not alien impositions. Many align with existing qualitative reporting standards that peer-reviewed journals in other disciplines already enforce. SRQR, COREQ, GRAMMS, and JARS-Qual all require transparent reporting of research design, sampling, analysis, and limitations.159 Authors familiar with these standards will recognize the SMA’s elements. The framework translates established methodological expectations into law review governance rather than inventing new requirements. Authors publishing in both law reviews and social science venues may find that SMA completion requires little beyond what rigorous practice already demands.160
D. Coordination Across Journals
Adoption by a single journal provides local benefits but limited field-wide effects. Authors can submit to non-adopting journals to avoid disclosure requirements. Coordination among journals amplifies impact.
Several coordination mechanisms are feasible. Flagship journals might adopt simultaneously, signaling that methodological accountability is an industry standard rather than an idiosyncratic requirement. Journal associations or conferences might develop model policies that individual journals can adapt. The Society for Empirical Legal Studies or similar organizations might endorse the framework, lending disciplinary credibility. Law school deans or faculty governance bodies might encourage their journals to adopt, creating institutional pressure.161
Coordination need not be formal or simultaneous. Diffusion can proceed through demonstration effects. If one flagship journal adopts and reports positive experience (reduced adverse selection in submission quality, improved author compliance, enhanced reader confidence), other journals have reason to follow. Journals that adopt first bear higher initial costs (developing procedures, training editors, responding to unfamiliar requirements) but may also benefit from signaling institutional seriousness about empirical rigor. As adoption spreads, costs decline (procedures become standardized, author expectations shift) and benefits compound (venue shopping becomes less viable, field norms shift).162
Flagship student-edited law reviews are well positioned to lead this diffusion. Journals that hosted foundational methodological interventions have particular credibility in establishing new standards. Adopting the governance framework developed here would extend a trajectory that began with the 2002 symposium: from diagnosing methodological gaps (Epstein and King), to providing author guidance for qualitative work (Linos and Carlson), to establishing editorial infrastructure that makes qualitative rigor verifiable and enforceable.163
Conclusion
This Article has argued that qualitative and mixed methods empirical legal scholarship suffers from two compounding selection problems—one in quality, one in topic—and has proposed governance infrastructure to address both.
The governance framework developed in this Article offers one architecture for addressing these problems. Under this framework, student editors would treat methodological accountability for qualitative and mixed methods work as a standard part of their institutional role, comparable to citation verification or conflict disclosure. Qualitative and mixed methods submissions would be evaluated against explicit, shared criteria rather than impressions of prose quality and author credentials. Constrained research on legally important but confidential phenomena would be publishable under silver standard verification rather than excluded by transparency requirements that do not accommodate law-created opacity. Courts, agencies, and scholars relying on empirical claims from law reviews could treat those claims as reflecting structured screening rather than informal editorial judgment about what sounds plausible.
A law review might pilot the framework by requiring a one-page SMA for qualitative and mixed methods submissions during a single volume, applying the audit rubric to those submissions, and adding a short BAE policy clause to its guidelines. Faculty advisors could help boards calibrate the rubric and consult on marginal cases. Authors might begin including SMA-like structured abstracts even before formal adoption, modeling the practice and demonstrating demand for methodological accountability. These steps are incremental and reversible; they do not commit journals to permanent transformation.
The framework has limits. The adverse selection prediction for qualitative venue choice remains partly theoretical; direct evidence on author submission patterns awaits empirical study. No structured protocol can eliminate judgment calls; editors applying the rubric will face marginal cases where reasonable people disagree. The claim typology covers predominant claim types but may not capture every form of qualitative inference. These limits mark where further empirical and institutional work lies, not reasons to defer action until perfect solutions emerge.
Stewart Macaulay’s foundational study revealed a gap between law on the books and law in action: formal contract doctrine mattered less to business practice than relational norms and reputation. Qualitative and mixed methods research continues this tradition, investigating how legal actors actually interpret, apply, and respond to law rather than how doctrine formally requires them to behave. If legal scholarship is to engage seriously with law in action, law reviews need governance infrastructure to recognize and reward serious qualitative and mixed methods inquiry. The framework developed here provides one such architecture. Whether it proves adequate, whether modifications or alternatives serve better, and whether law reviews choose to adopt it are questions that scholarship and institutional practice will answer over time.
-
Professor of Law, University of New Hampshire Franklin Pierce School of Law. This Article draws on insights from qualitative fieldwork in which I interviewed over 130 entrepreneurs in the United States and Israel about the impact of law and regulation on innovation. For research support, I thank the John Templeton Foundation, the Asness Family Foundation, and the Institute for Humane Studies. For helpful comments and conversations, I thank Yifat Aran, Avi Bell, Lisa Bernstein, Shahar Dillbary, Michael Dube, Eldar Haber, Adi Libson, Michael McCann, Moran Ofir, Nizan Packin, Daniel Pi, Ilan Talmud, and Eyal Zamir. I also thank participants at the Association of American Law Schools Annual Meeting (January 2025, San Francisco), the Israeli Law and Economics Association Annual Meeting (December 2025), the University of New Hampshire Franklin Pierce School of Law Faculty Workshop (Fall 2025), and the Economics Institute for Law Professors at George Mason University Antonin Scalia Law School (Summer 2025). All errors remain my own. ↩
-
Stewart Macaulay, Non-Contractual Relations in Business: A Preliminary Study, [28 Am. Soc. Rev.]{.smallcaps} 55, 58-62 (1963). ↩
-
Ian R. Macneil, Reflections on Relational Contract, [141 J. Institutional & Theoretical Econ.]{.smallcaps} 541, 543-46 (1985) (describing Macaulay’s study as foundational to relational contract theory). ↩
-
Jason M. Chin et al., The Transparency of Quantitative Empirical Legal Research Published in Highly Ranked Law Journals (2018-2020): An Observational Study, [12 F1000Research]{.smallcaps} 144, 5-8 (2024). ↩
-
Michael J. Matthews & Jason Rantanen, Legal Research as a Collective Enterprise: An Examination of Data Availability in Empirical Legal Scholarship, [41 J.L. Econ. & Org.]{.smallcaps} 570, 575-78 (2025). ↩
-
Hillel J. Bavli, Credibility in Empirical Legal Analysis, [87 Brook. L. Rev.]{.smallcaps} 501, 503 (2022). ↩
-
Lee Epstein & Gary King, The Rules of Inference, [69 U. Chi. L. Rev.]{.smallcaps} 1, 6-15 (2002); see also Hillel J. Bavli, Credibility in Empirical Legal Analysis, [87 Brook. L. Rev.]{.smallcaps} 501 (2022) (demonstrating that empirical legal scholarship remains untrusted). ↩
-
Theodore Eisenberg, The Origins, Nature, and Promise of Empirical Legal Studies and a Response to Concerns, [2011 U. Ill. L. Rev.]{.smallcaps} 1713, 1714-18 (2011). ↩
-
Id. at 1717. ↩
-
Jack Goldsmith & Adrian Vermeule, Empirical Methodology and Legal Scholarship, [69 U. Chi. L. Rev.]{.smallcaps} 153, 163 (2002). ↩
-
Howard Erlanger et al., Is It Time for a New Legal Realism?, [2005 Wis. L. Rev.]{.smallcaps} 335, 339-44 (2005). ↩
-
Mark C. Suchman & Elizabeth Mertz, Toward a New Legal Empiricism: Empirical Legal Studies and New Legal Realism, [6 Ann. Rev. L. & Soc. Sci.]{.smallcaps} 555, 563-68 (2010). ↩
-
Lisa Webley, Qualitative Approaches to Empirical Legal Research, in [The Oxford Handbook of Empirical Legal Research]{.smallcaps} 926, 928-45 (Peter Cane & Herbert M. Kritzer eds., 2010). ↩
-
Shauhin Talesh, Elizabeth Mertz & Heinz Klug eds., [Research Handbook on Modern Legal Realism]{.smallcaps} (2021). ↩
-
See Rosie Cameron & Deborah Golenko eds., [Elgar Handbook of Mixed Methods Research]{.smallcaps} (2024); Cheryl Poth, [SAGE Handbook of Mixed Methods Research Design]{.smallcaps} (2023); Alexander L. George & Andrew Bennett, [Case Studies and Theory Development in the Social Sciences]{.smallcaps} 205-32 (2005). ↩
-
Alicia O’Cathain et al., Guidance on How to Develop Complex Interventions to Improve Health and Healthcare, [9 BMJ Open]{.smallcaps} 1, 3-5 (2019) (GRAMMS); Heidi M. Levitt et al., Journal Article Reporting Standards for Qualitative Primary, Qualitative Meta-Analytic, and Mixed Methods Research in Psychology, [73 Am. Psych.]{.smallcaps} 26, 28-32 (2018) (JARS-Qual); Bridget C. O’Brien et al., Standards for Reporting Qualitative Research: A Synthesis of Recommendations, [89 Acad. Med.]{.smallcaps} 1245, 1246-51 (2014) (SRQR). ↩
-
Katerina Linos & Melissa Carlson, Qualitative Methods for Law Review Writing, [84 U. Chi. L. Rev.]{.smallcaps} 213, 218-35 (2017). ↩
-
The distinction between supply-side author guidance and demand-side editorial infrastructure is crucial. Linos and Carlson explain what rigorous qualitative work looks like; this Article provides tools for editors to verify whether submitted work meets those standards and a doctrinal framework for evaluating research constrained by legal barriers that prevent full transparency. ↩
-
The typology adapts categories developed in social science methodology for legal scholarship’s distinctive purposes. See Gary Goertz & James Mahoney, [A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences]{.smallcaps} 41-63 (2012). ↩
-
In game-theoretic terms, the SMA and Audit Rubric function as screening technologies that move law reviews from weak infrastructure (acceptance approximately type-independent) to strong infrastructure (acceptance rates α for high-quality work and β for low-quality work, where α > β). When α/p > ρ > β/q, the selection dynamic reverses: high-quality authors prefer law reviews; low-quality authors are deterred. See Appendix D for formal development. ↩
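A minimal worked restatement of that reversal condition, using the notation spelled out in the lemons footnote below (V_LR and V_SS denote the values of law review and social science publication, p and q the social science venue’s acceptance probabilities for high- and low-quality work, and ρ = V_SS/V_LR); the formal treatment remains Appendix D’s:

```latex
% Strong infrastructure: law reviews accept high-quality work at rate
% \alpha and low-quality work at rate \beta, with \alpha > \beta.
% High-quality authors prefer law reviews when
\[
  \alpha V_{LR} > p\,V_{SS}
  \quad\Longleftrightarrow\quad
  \frac{\alpha}{p} > \frac{V_{SS}}{V_{LR}} = \rho .
\]
% Low-quality authors are deterred (prefer the expert-screened venue) when
\[
  q\,V_{SS} > \beta\,V_{LR}
  \quad\Longleftrightarrow\quad
  \rho > \frac{\beta}{q} .
\]
% Both conditions hold simultaneously exactly when \alpha/p > \rho > \beta/q.
```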
-
Part IV develops this Doctrine in detail, specifying the conditions under which constrained research satisfies Silver Standard verification and how editors should evaluate claims made under legal constraints. ↩
-
Chin et al., supra note 4, at 5-8. ↩
-
Id. at 7-8. ↩
-
Id. at 8 tbl.2. ↩
-
Matthews & Rantanen, supra note 5, at 575-78. ↩
-
Id. at 576. ↩
-
Id. at 582-84. ↩
-
Jason M. Chin & Kathryn Zeiler, Replicability in Empirical Legal Research, [17 Ann. Rev. L. & Soc. Sci.]{.smallcaps} 239, 247-49 (2021). ↩
-
Eisenberg, supra note 8, at 1714-18. ↩
-
Goldsmith & Vermeule, supra note 10, at 159-64. ↩
-
See George & Bennett, supra note 15, at 205-32; Derek Beach & Rasmus Brun Pedersen, [Process-Tracing Methods: Foundations and Guidelines]{.smallcaps} 1-22 (2d ed. 2019). ↩
-
Tyler J. VanderWeele & Nancy C. Staudt, Causal Diagrams for Empirical Legal Research, [10 Law, Probability & Risk]{.smallcaps} 329, 331-45 (2011). ↩
-
O’Brien et al., supra note 16, at 1246-51 (SRQR); Allison Tong et al., Consolidated Criteria for Reporting Qualitative Research (COREQ): A 32-Item Checklist for Interviews and Focus Groups, [19 Int’l J. Quality Health Care]{.smallcaps} 349, 350-56 (2007) (COREQ). ↩
-
See, e.g., Anna Offit, Prosecuting in the Shadow of the Jury, [113 Nw. U. L. Rev.]{.smallcaps} 1071, 1084–88 (2019) (describing an IRB-approved ethnographic study of 133 Assistant United States Attorneys using semi-structured interviews conducted over five years at a United States Attorney’s office, with participants assigned anonymized codes to protect confidentiality); Joanna C. Schwartz, Qualified Immunity’s Selection Effects, [114 Nw. U. L. Rev.]{.smallcaps} 1101, 1113–19 (2020) (combining a docket dataset of 1,183 § 1983 cases with ninety-four attorney surveys and thirty-five semi-structured interviews to assess how qualified immunity shapes attorney case-selection decisions); I. India Thusi, On Beauty and Policing, [114 Nw. U. L. Rev.]{.smallcaps} 1335, 1358–68 (2020) (drawing on a two-year legal ethnography in Johannesburg, South Africa, supplemented by quantitative enforcement indicators, to examine how police perceptions of beauty influenced differential surveillance of sex workers); Jessica A. Roth, Anna D. Vaynman & Steven D. Penrod, Why Criminal Defendants Cooperate: The Defense Attorney’s Perspective, [117 Nw. U. L. Rev.]{.smallcaps} 1351, 1373–77 (2023) (reporting a survey of 146 defense attorneys with open-ended responses across three federal districts, IRB-reviewed and exempt-determined at both Yeshiva University and CUNY); Peter Dixon & Hadar Dancig-Rosenberg, The Multi-Hatted Court: Community Courts as Boundary Organizations, [120 Nw. U. L. Rev.]{.smallcaps} 1259, 1281–84 (2026) (employing a phenomenological qualitative design consisting of nine focus groups with two to fifteen participants each, across diverse stakeholder groups at the Red Hook Community Justice Center, with ethical safeguards including informed consent and anonymization); Lisa Bernstein, Contract Governance in Small-World Networks: The Case of the Maghribi Traders, [113 Nw. U. L. Rev.]{.smallcaps} 1009, 1038–42 (2019) (analyzing approximately two hundred archival Maghribi merchant letters using a network-governance methodology, with an explicit methodological-caution section acknowledging inferential limits of archival case-study evidence). Each of these studies appeared in the Northwestern University Law Review’s annual empirical legal studies issue; none encountered a standardized disclosure requirement or claim-type-appropriate verification rubric of the kind this Article proposes. ↩
-
The market-for-lemons dynamic in scholarly publishing follows the logic George Akerlof developed for markets with asymmetric information about quality. See George A. Akerlof, The Market for “Lemons”: Quality Uncertainty and the Market Mechanism, [84 Q.J. Econ.]{.smallcaps} 488, 489-92 (1970). Applied to qualitative legal scholarship: let V_LR denote the value to an author of law review publication and V_SS the value of social science venue publication. Let p and q denote acceptance probabilities at the social science venue for high-quality and low-quality work respectively (p > q, reflecting expert screening). Let a_W denote the acceptance rate at law reviews under weak infrastructure, which is approximately type-independent because editors cannot distinguish methodological quality in qualitative submissions. Define ρ = V_SS/V_LR. High-quality authors prefer social science venues when pV_SS > a_W V_LR. Low-quality authors prefer law reviews when a_W V_LR > qV_SS. When ρq < a_W < ρp, the unique equilibrium is adverse selection: high-quality authors exit to social science venues; low-quality authors remain in law reviews. See Appendix D for formal development. ↩
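A compact derivation of that adverse-selection band, restating the inequalities given in the footnote above; this is a sketch only, with Appendix D supplying the formal development:

```latex
% Weak infrastructure: law reviews accept at the type-independent rate a_W.
% High-quality authors exit to the social science venue when
\[
  p\,V_{SS} > a_W V_{LR}
  \quad\Longleftrightarrow\quad
  a_W < \rho\,p .
\]
% Low-quality authors remain in law reviews when
\[
  a_W V_{LR} > q\,V_{SS}
  \quad\Longleftrightarrow\quad
  a_W > \rho\,q .
\]
% Both hold, and adverse selection is the unique equilibrium, exactly on
% the band \rho q < a_W < \rho p .
```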
-
The lemons prediction finds indirect support in documented patterns. If the adverse selection dynamic operates, we should observe: (1) high-quality qualitative legal scholars increasingly placing their best methodological work in peer-reviewed venues rather than law reviews; (2) systematic quality differentials between qualitative work in law reviews and peer-reviewed journals on comparable topics; and (3) declining credibility assessments of law-review-published empirical claims among sophisticated readers. The credibility concerns documented by Bavli and the transparency failures documented by Chin and colleagues are consistent with this prediction, though direct evidence on venue-choice patterns awaits further research. See Bavli, supra note 6, at 503-10; Chin et al., supra note 4, at 5-8. ↩
-
See Upjohn Co. v. United States, 449 U.S. 383, 389 (1981). ↩
-
Fed. R. Civ. P. 26(c). ↩
-
See Defend Trade Secrets Act of 2016, 18 U.S.C. § 1836 (2018). ↩
-
45 C.F.R. § 46.111(a)(7) (2018). ↩
-
See generally Lon L. Fuller, [The Morality of Law]{.smallcaps} 33-94 (rev. ed. 1969). ↩
-
Bavli, supra note 6, at 503. ↩
-
Id. at 510-28 (explaining that selective reporting “entirely invalidates a study’s results” and developing the DASS framework as a response). ↩
-
Epstein & King, supra note 7, at 2. ↩
-
See Suchman & Mertz, supra note 12, at 570-75 (arguing that methodological approaches must be attentive to interpretive and constitutive dimensions of legal phenomena). ↩
-
The governance technologies function as commitment devices because they are observable and enforceable. Announced standards without structured disclosure requirements are cheap talk; authors cannot verify that editors will actually evaluate methodology. The SMA creates verifiable commitment: either the disclosure is complete or it is not. The Audit Rubric creates procedural commitment: editors follow documented protocols with severity-tiered consequences. ↩
-
Part IV specifies Silver Standard requirements and provides worked examples showing how the Doctrine operates across different constraint types (privilege, sealing, trade secrets, IRB restrictions). ↩
-
This distinction aligns with the New Legal Realism’s emphasis on studying law as experienced, not merely as written. See Suchman & Mertz, supra note 12, at 563-68. A doctrinal article analyzing what the best interpretation of a statute should be engages in normative legal argument outside this framework’s scope. But if that article asserts that “courts have generally interpreted the statute to require X” or “compliance officers understand the rule as imposing Y obligation,” those are empirical claims subject to verification. ↩
-
The typology adapts distinctions developed across several methodological traditions. For the pattern/mechanism distinction in qualitative social science, see Gary Goertz & James Mahoney, [A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences]{.smallcaps} 41-63 (2012). For the interpretive dimension in legal scholarship, see Suchman & Mertz, supra note 12, at 570-75. ↩
-
Goertz & Mahoney, supra note 19, at 3-15 (contrasting quantitative culture focused on effects-of-causes with qualitative culture focused on causes-of-effects). ↩
-
See Suchman & Mertz, supra note 12, at 570-75. ↩
-
Cf. Rochelle Cooper Dreyfuss, Trade Secrets: How Well Should We Be Allowed to Hide Them? The Economic Espionage Act of 1996, [9 Fordham Intell. Prop. Media & Ent. L.J.]{.smallcaps} 1, 18-22 (1998) (discussing firm preferences between patent and trade secret protection). ↩
-
Cf. Edward B. Rock, Saints and Sinners: How Does Delaware Corporate Law Work?, [44 UCLA L. Rev.]{.smallcaps} 1009, 1013-16 (1997) (analyzing reputational mechanisms in corporate governance). ↩
-
Cf. Lauren B. Edelman, [Working Law: Courts, Corporations, and Symbolic Civil Rights]{.smallcaps} 45-52 (2016) (analyzing how compliance officers interpret civil rights requirements). ↩
-
Each of these claim types appears in empirical legal scholarship published in the Review’s own empirical issues. Offit makes a pattern claim (prosecutors systematically invoke hypothetical jurors across case types and stages) and an interpretive claim (jurors function as an ethical resource, not merely a strategic one). See Offit, supra note 34, at 1088–1118. Schwartz makes a mechanism claim: qualified immunity raises litigation costs through identifiable intermediate steps, and those cost increases have an equivocal, not straightforward, effect on attorney case-selection decisions. See Schwartz, supra note 34, at 1119–50. Dixon and Dancig-Rosenberg make interpretive claims about how community court professionals understand and manage tensions inherent in the boundary-organization model—claims that required participant-centered evidence from focus groups rather than structural analysis of the court’s formal mandate. See Dixon & Dancig-Rosenberg, supra note 34, at 1284–1311. ↩
-
This insight parallels the evidentiary logic familiar to lawyers: different claims require different proofs. A claim of negligence requires different evidence than a claim of intentional misconduct. Empirical methodology applies analogous logic to scholarly inference. ↩
-
For methodological treatment of pattern claims in qualitative research, see Charles C. Ragin, [The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies]{.smallcaps} 34-52 (1987). ↩
-
On how enforcement patterns shape the practical meaning of legal rules, see Lauren B. Edelman et al., Diversity Rhetoric and the Managerialization of Law, [106 Am. J. Soc.]{.smallcaps} 1589, 1595-1601 (2001). ↩
-
Jason Seawright & John Gerring, Case Selection Techniques in Case Study Research: A Menu of Qualitative and Quantitative Options, [61 Pol. Rsch. Q.]{.smallcaps} 294, 296-99 (2008). ↩
-
The failure mode is familiar from legal reasoning: the case that proves too much. A litigator presenting three favorable cases as evidence that courts “generally” rule a certain way commits the same inferential error as a scholar generalizing from selected examples. ↩
-
On population definition in qualitative research, see Matthew B. Miles, A. Michael Huberman & Johnny Saldaña, [Qualitative Data Analysis: A Methods Sourcebook]{.smallcaps} 31-35 (4th ed. 2019). ↩
-
On selection procedures, see Seawright & Gerring, supra note 59, at 299-308. ↩
-
On negative case analysis, see Anselm Strauss & Juliet Corbin, [Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory]{.smallcaps} 159-62 (2d ed. 1998). ↩
-
The technique derives from analytic induction as developed by Florian Znaniecki and refined by subsequent methodologists. See W. S. Robinson, The Logical Structure of Analytic Induction, [16 Am. Soc. Rev.]{.smallcaps} 812, 812-18 (1951). ↩
-
Jack Katz, A Theory of Qualitative Methodology: The Social System of Analytic Fieldwork, in [Contemporary Field Research: A Collection of Readings]{.smallcaps} 127, 133-38 (Robert M. Emerson ed., 1983). ↩
-
Robert K. Yin, [Case Study Research and Applications: Design and Methods]{.smallcaps} 37-39 (6th ed. 2018). ↩
-
On mechanism-based explanation, see Peter Hedström & Richard Swedberg, Social Mechanisms: An Introductory Essay, in [Social Mechanisms: An Analytical Approach to Social Theory]{.smallcaps} 1, 7-12 (Peter Hedström & Richard Swedberg eds., 1998). ↩
-
For accessible treatment of causal inference challenges, see Joshua D. Angrist & Jörn-Steffen Pischke, [Mostly Harmless Econometrics: An Empiricist’s Companion]{.smallcaps} 52-68 (2009). ↩
-
The term “just-so story” derives from Rudyard Kipling’s children’s tales explaining animal features through invented narratives. In methodology, it refers to post-hoc causal explanations that sound plausible but lack verification. See Stephen Jay Gould & Richard C. Lewontin, The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme, [205 Proc. Royal Soc’y London B]{.smallcaps} 581, 588 (1979). ↩
-
George & Bennett, supra note 15, at 205-32. ↩
-
Derek Beach & Rasmus Brun Pedersen, [Process-Tracing Methods: Foundations and Guidelines]{.smallcaps} 1-22 (2d ed. 2019). ↩
-
See Natalie R. Davidson, Process-Tracing the Meaning of International Human Rights Law, in [Research Methods in International Law: A Handbook]{.smallcaps} 264, 266-78 (Rossana Deplano & Nicholas Tsagourias eds., 2021); Maiko Meguro, Backlash Against International Law by Non-State Actors: A Process-Tracing Method, in [Research Methods in International Law: A Handbook]{.smallcaps} 295, 297-310 (Rossana Deplano & Nicholas Tsagourias eds., 2021). ↩
-
Tyler J. VanderWeele & Nancy C. Staudt, Causal Diagrams for Empirical Legal Research, [10 Law, Probability & Risk]{.smallcaps} 329, 331-45 (2011). Causal diagrams make explicit the hypothesized relationships between variables and identify confounders that must be addressed for causal inference. While developed primarily for quantitative research, they can clarify the logic of mechanism claims in qualitative work by forcing researchers to specify the causal pathway before gathering evidence. ↩
-
Beach & Pedersen, supra note 71, at 95-103. ↩
-
Id. at 103-08. ↩
-
For methodological treatment of interpretive claims, see Clifford Geertz, Thick Description: Toward an Interpretive Theory of Culture, in [The Interpretation of Cultures]{.smallcaps} 3, 5-10 (1973). ↩
-
Suchman & Mertz, supra note 12, at 570-75. On law’s constitutive function more broadly, see Robert W. Gordon, Critical Legal Histories, [36 Stan. L. Rev.]{.smallcaps} 57, 109-13 (1984). ↩
-
The risk parallels a familiar problem in legal interpretation: assuming that statutory meaning is transparent to the interpreter without investigating how regulated parties actually understand requirements. ↩
-
On participant-centered evidence in interpretive research, see Herbert J. Rubin & Irene S. Rubin, [Qualitative Interviewing: The Art of Hearing Data]{.smallcaps} 27-35 (3d ed. 2012). ↩
-
Legal scholarship published in the Review’s empirical issues illustrates each technique. Offit derived her interpretive findings from 133 semi-structured interviews using open-ended prompts, generated thematic codes from unprompted AUSA reflections, and cross-checked themes across the full interview corpus. See Offit, supra note 34, at 1084–88. Dixon and Dancig-Rosenberg used an iteratively refined codebook that clustered first-order codes into higher-order themes across nine focus groups, with role-specific prompts to elicit participant-centered accounts of institutional experience. See Dixon & Dancig-Rosenberg, supra note 34, at 1281–84. Neither article reports member-checking procedures or explicit saturation documentation; the audit rubric proposed in Part III.B would require disclosure of both. ↩
-
On member checking, see Yvonna S. Lincoln & Egon G. Guba, [Naturalistic Inquiry]{.smallcaps} 314-16 (1985). ↩
-
On saturation, see Greg Guest et al., How Many Interviews Are Enough? An Experiment with Data Saturation and Variability, [18 Field Methods]{.smallcaps} 59, 60-65 (2006). ↩
-
The typology is not exhaustive. Some qualitative claims blend types; others may not fit neatly into any category. But the typology covers the predominant claim types in qualitative legal scholarship and provides a starting framework that editors can apply. Borderline cases can be addressed through author classification and editorial dialogue, as the Standardized Methodological Abstract facilitates. ↩
-
See supra Part I.B (analyzing the adverse selection dynamic in qualitative legal scholarship). ↩
-
See O’Brien et al., supra note 16, at 1246-51 (SRQR); Tong et al., supra note 33, at 350-56 (COREQ); O’Cathain et al., supra note 16, at 3-5 (GRAMMS); Levitt et al., supra note 16, at 28-32 (JARS-Qual). ↩
-
Part IV develops the Best Available Evidence Doctrine and explains how it integrates with the SMA’s constraint disclosure element and the Audit Rubric’s evaluation of constrained research. ↩
-
The formal conditions for this reversal are developed in Appendix D. Under strong infrastructure, let α denote the acceptance rate for high-quality work and β the acceptance rate for low-quality work (α > β). When α/p > ρ > β/q, where p and q are acceptance rates at social science venues and ρ = V_SS/V_LR, the equilibrium reverses: high-quality authors prefer law reviews; low-quality authors prefer (or are indifferent to) social science venues where they face acceptance probability q. ↩
-
Bavli, supra note 6, at 520-35 (developing the DASS framework). ↩
-
The complementarity is precise: DASS addresses quantitative credibility through specification of design, analysis, statistical, and sensitivity choices; the SMA and Audit Rubric address qualitative credibility through specification of claim type, population, selection, disconfirmation, negative cases, validation, and constraints. A law review adopting both frameworks would have comprehensive disclosure requirements for empirical legal scholarship regardless of methodological approach. ↩
-
On how standardized disclosure requirements function as governance technologies, see Brian A. Nosek et al., Promoting an Open Research Culture, [348 Science]{.smallcaps} 1422, 1424 (2015). ↩
-
The classification requirement prevents a common evasion: papers that imply causation through prose (“the regulation drove firms to...”) while avoiding explicit causal claims that would require mechanism evidence. Forced classification makes inferential commitments explicit. ↩
-
On the importance of population specification in qualitative research, see Miles, Huberman & Saldaña, supra note 61, at 31-35. ↩
-
Post-hoc selection is not inherently invalid; exploratory research may legitimately identify cases after preliminary analysis reveals relevant patterns. But the selection procedure must be disclosed so that readers can evaluate what inferences the sample supports. See Seawright & Gerring, supra note 59, at 299-308. ↩
-
On falsifiability in qualitative research, see Karl R. Popper, [The Logic of Scientific Discovery]{.smallcaps} 78-92 (1959). Popper’s falsificationism has been critiqued and refined, but the core insight remains: claims that cannot in principle be disconfirmed are not empirical claims. Requiring authors to specify disconfirmation conditions operationalizes this insight for editorial review. ↩
-
The negative case requirement implements analytic induction. See Robinson, supra note 64, at 812-18; Katz, supra note 65, at 133-38. ↩
-
On validation techniques in qualitative research, see Lincoln & Guba, supra note 81, at 289-331 (discussing credibility, transferability, dependability, and confirmability). ↩
-
The constraint disclosure element is essential for integrating the Best Available Evidence Doctrine. Without disclosure of constraints, editors cannot assess whether Silver Standard verification applies or whether compensatory rigor requirements are satisfied. See infra Part IV. ↩
-
This burden-shifting parallels burden-shifting in discrimination law, where plaintiffs establish a prima facie case and defendants must articulate legitimate reasons. See McDonnell Douglas Corp. v. Green, 411 U.S. 792, 802-04 (1973). The SMA establishes what disclosure is required; authors unable to provide it bear the consequence of non-disclosure. ↩
-
See Motor Vehicle Mfrs. Ass’n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29, 43 (1983) (requiring agencies to examine relevant data and articulate satisfactory explanation for action). ↩
-
Required items are non-negotiable because they represent the minimum information needed to evaluate any empirical claim. A pattern claim without population definition cannot be evaluated for scope. A paper without negative case analysis cannot be evaluated for robustness. These are threshold requirements, not best practices. ↩
-
The distinction between Required and Strongly Expected items reflects different functions. Required items determine whether evaluation is possible. Strongly Expected items determine whether evaluation reveals adequate rigor. A paper might satisfy all Required items (evaluation is possible) while missing Strongly Expected items (evaluation reveals deficiencies). ↩
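The difference in function can be sketched in a few lines of illustrative code; the item names and dispositions here are hypothetical placeholders, not the rubric’s actual contents:

```python
# Hypothetical sketch: Required items gate whether evaluation can proceed
# at all (fast-fail); Strongly Expected items surface deficiencies that
# inform, but do not automatically block, substantive review.

REQUIRED = {"claim_type", "population", "negative_case_analysis"}
STRONGLY_EXPECTED = {"validation_technique", "saturation_documentation"}

def audit(disclosed: set[str]) -> str:
    missing_required = REQUIRED - disclosed
    if missing_required:
        # Evaluation is impossible; return to author with a form letter.
        return f"fast-fail: missing {sorted(missing_required)}"
    deficiencies = STRONGLY_EXPECTED - disclosed
    if deficiencies:
        # Evaluation is possible but reveals gaps; flag for a revision request.
        return f"proceed, with revision request for {sorted(deficiencies)}"
    return "proceed to full board consideration"

print(audit({"claim_type", "population", "negative_case_analysis"}))
# proceed, with revision request for ['saturation_documentation', 'validation_technique']
```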
-
On the context-dependence of qualitative standards, see John W. Creswell & Cheryl N. Poth, [Qualitative Inquiry and Research Design: Choosing Among Five Approaches]{.smallcaps} 253-63 (4th ed. 2018) (noting that validation strategies vary by tradition). ↩
-
Fast-fail screening serves efficiency purposes analogous to pleading standards in civil procedure. Just as Twombly and Iqbal require facial plausibility before discovery proceeds, fast-fail screening requires disclosure before substantive review proceeds. See Bell Atl. Corp. v. Twombly, 550 U.S. 544, 570 (2007); Ashcroft v. Iqbal, 556 U.S. 662, 678 (2009). ↩
-
The claim-type-specific checkpoints implement the Claim Typology’s core insight: different claims require different evidence. Applying pattern-claim checkpoints to interpretive claims, or mechanism-claim checkpoints to pattern claims, would produce category errors. The Rubric avoids this by matching checkpoints to claim types. ↩
-
The generalist-accessibility requirement distinguishes the Audit Rubric from peer review. Peer reviewers assess methodological sophistication and contribution to disciplinary knowledge. Editors using the Rubric assess disclosure completeness and claim-evidence fit. These are different tasks requiring different expertise. The Rubric is designed for the second task. ↩
-
See Nosek et al., supra note 90, at 1423-24 (documenting initial failure of voluntary disclosure norms). ↩
-
See Brian A. Nosek et al., The Preregistration Revolution, [115 PNAS]{.smallcaps} 2600, 2603-04 (2018) (documenting improved compliance after journals adopted mandatory preregistration). ↩
-
The default rule parallels summary judgment standards: the moving party must come forward with evidence; the non-moving party cannot rest on allegations. See Celotex Corp. v. Catrett, 477 U.S. 317, 322-23 (1986). Authors making empirical claims must come forward with disclosure; editors need not search for evidence of rigor in undisclosed methodology. ↩
-
Specific revision requests also create a record. If the author resubmits without addressing identified deficiencies, editors can point to the prior request. This accountability mechanism prevents revision theater, where authors make cosmetic changes without substantive improvement. ↩
-
This principle may seem harsh, but it merely applies to empirical claims the same standard law reviews apply to other content. A paper with inadequate citation support would not be published; editors would require the author to substantiate claims or remove them. The same logic applies to empirical claims: substantiate with methodology or remove. ↩
-
The SMA template in Appendix A is designed for immediate adoption. It requires no customization. Law reviews can incorporate it into submission systems by adding a required upload field and a checkbox confirming SMA completion. ↩
-
This time estimate assumes that many submissions will not include empirical claims requiring SMA evaluation. For journals where empirical submissions are a larger fraction, the time investment scales accordingly, but remains modest relative to substantive review time. ↩
-
The training module could be standardized across law reviews through coordination among flagship journals or through resources developed by organizations like the Society for Empirical Legal Studies. Standardization would reduce implementation costs and promote consistent application. ↩
-
On how institutional adoption can shift market equilibria through signaling and network effects, see Paul A. David, Clio and the Economics of QWERTY, [75 Am. Econ. Rev.]{.smallcaps} 332, 332-37 (1985). ↩
-
See O’Brien et al., supra note 16, at 1246-51 (SRQR disclosure requirements assume ability to report data sources and analytical procedures); Tong et al., supra note 33, at 350-56 (COREQ checklist includes items on data presentation that assume disclosure is possible); Alicia O’Cathain et al., Good Reporting of a Mixed Methods Study (GRAMMS), [8 BMC Med. Rsch. Methodology]{.smallcaps} 1, 2-4 (2008) (GRAMMS criteria assume transparent reporting of all methods); Levitt et al., supra note 16, at 28-32 (JARS-Qual standards for reporting qualitative findings assume data accessibility). ↩
-
The two selection problems are analytically distinct but empirically compounding. A field suffering from both lemons and topic bias produces low-quality research on unimportant questions, the worst of both worlds. ↩
-
Upjohn Co. v. United States, 449 U.S. 383, 389 (1981). ↩
-
Fed. R. Civ. P. 26(c)(1). ↩
-
See Laurie Kratky Doré, Secrecy by Consent: The Use and Limits of Confidentiality in the Pursuit of Settlement, [74 Notre Dame L. Rev.]{.smallcaps} 283, 302-18 (1999) (documenting prevalence and scope of settlement confidentiality provisions). ↩
-
See David E. Pozen, The Leaky Leviathan: Why the Government Condemns and Condones Unlawful Disclosures of Information, [127 Harv. L. Rev.]{.smallcaps} 512, 545-52 (2013) (discussing confidentiality agreements governing access to government and corporate information). ↩
-
Unif. Trade Secrets Act § 1(4) (Unif. L. Comm’n 1985). ↩
-
See Defend Trade Secrets Act of 2016, 18 U.S.C. § 1836(b) (2018) (creating federal civil cause of action for trade secret misappropriation). ↩
-
45 C.F.R. § 46.111(a)(7) (2018). ↩
-
Recent empirical legal scholarship published in the Review’s empirical issues illustrates both the prevalence and the management of IRB constraints. Offit’s five-year ethnographic study was governed by an IRB-approved protocol titled “An Ethnographic Study of Lay Participation in the United States Criminal Justice System”; oral consent was obtained from all interviewees; identifying information was excluded from any publication. See Offit, supra note 34, at 1085. Roth, Vaynman, and Penrod submitted their defense-attorney survey to the IRBs of both Yeshiva University and CUNY, each of which issued an exempt determination; attorney anonymity was promised to all participants. See Roth, Vaynman & Penrod, supra note 34, at 1352. Under the best available evidence doctrine, such IRB-mandated constraints would qualify as silver-standard conditions, satisfying the legal-constraint verification criterion provided the constrained findings are disclosed, claims are appropriately scoped, and compensatory rigor is documented. ↩
-
The specificity requirement prevents abuse. An author asserting “confidentiality” without legal citation cannot invoke Silver Standard. The Doctrine applies only when legal constraints are documented with precision sufficient to permit editor verification. ↩
-
Claim narrowing is not mere hedging. Phrases like “our findings suggest” or “this research indicates” do not narrow claims; they soften assertions. Narrowing requires genuinely limiting scope: “among the firms we studied” rather than “firms generally”; “in the jurisdictions where we conducted interviews” rather than “across jurisdictions.” ↩
-
On triangulation, see Norman K. Denzin, [The Research Act: A Theoretical Introduction to Sociological Methods]{.smallcaps} 301-10 (3d ed. 1989). On process documentation as verification strategy, see Lincoln & Guba, supra note 81, at 319-27. ↩
-
Aggregate negative case reporting serves the same function as detailed negative case analysis: it demonstrates that the researcher searched for disconfirming evidence and refined claims in response. The verification is less direct but still meaningful. ↩
-
The American Bar Association’s Model Rules of Professional Conduct address confidentiality in legal practice but do not specify how empirical researchers should handle confidentiality constraints. See [Model Rules of Pro. Conduct]{.smallcaps} r. 1.6 (Am. Bar Ass’n 2020). Research ethics frameworks such as the Belmont Report address confidentiality as a consideration but provide no structured criteria for constrained research publication. See Nat’l Comm’n for the Prot. of Human Subjects of Biomedical & Behavioral Rsch., [The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research]{.smallcaps} (1979). ↩
-
See Upjohn, 449 U.S. at 395-96 (distinguishing privileged communications from underlying facts). ↩
-
Alice Corp. v. CLS Bank Int’l, 573 U.S. 208 (2014). ↩
-
The verification strategy for privilege-constrained research parallels documentary evidence rules in litigation. When original documents are privileged, secondary evidence may be admissible. See Fed. R. Evid. 1004. The principle translates to empirical research: when primary evidence cannot be disclosed, secondary and aggregate evidence can still provide verification. ↩
-
Sealing constraints may be partially addressable through judicial motion. Researchers may petition to unseal records for research purposes, though success rates vary by jurisdiction and record type. See Eugene Cerruti, “Dancing in the Courthouse”: The First Amendment Right of Access Opens a New Round, [29 U. Rich. L. Rev.]{.smallcaps} 237, 269-74 (1995). Silver Standard provides a framework for research when unsealing efforts fail or are impractical. ↩
-
The distinction between aggregate pattern reporting and firm identification is not always clear. Researchers should evaluate whether aggregate reports permit inference of firm identity through context, timing, or other clues. When identification risk exists, additional anonymization measures may be required. ↩
-
Trade secret constraints have a self-enforcing quality that other constraints lack. An author who discloses trade secrets may face misappropriation liability regardless of NDA terms. 18 U.S.C. § 1836(b). This creates strong incentives for author compliance with Silver Standard limits. ↩
-
On anonymization techniques in qualitative research, see Martin Tolich, Internal Confidentiality: When Confidentiality Assurances Fail Relational Informants, [27 Qualitative Soc.]{.smallcaps} 101, 103-09 (2004). ↩
-
IRB protocols often anticipate publication and specify what can be disclosed. Researchers should ensure that their IRB approvals cover anticipated publication uses. Retroactive requests to modify IRB protocols for publication purposes may face delays or denials. ↩
-
The claim that Silver Standard does not create a second tier of diminished credibility may seem optimistic. Readers may discount constrained research relative to fully transparent research. But this discount is appropriate: constrained research does provide less verification. The Doctrine’s purpose is not to eliminate that discount but to provide a principled framework for evaluating constrained research rather than excluding it entirely. ↩
-
Constraint fabrication would also constitute academic misconduct. An author falsely claiming legal constraints to avoid disclosure would be analogous to an author falsifying data. The reputational and professional consequences provide additional deterrence beyond the Doctrine’s verification mechanisms. ↩
-
This framing invokes a familiar tradeoff in evidence law: the best evidence rule yields to practical necessity when original evidence is unavailable. See Fed. R. Evid. 1004 (permitting other evidence when original is lost, destroyed, or otherwise unobtainable). Silver Standard applies analogous logic to empirical research. ↩
-
Law reviews already consult faculty advisors on specialized questions outside student expertise. The Doctrine does not require more consultation than current practice; it structures the questions that consultation should address. ↩
-
This possibility suggests a research agenda: do transparency requirements affect topic selection in empirical legal scholarship? The prediction would be that topics involving legal opacity are understudied relative to their importance. Testing this prediction would require measures of topic importance independent of publication frequency, a methodological challenge but not an insurmountable one. ↩
-
The framework’s completeness should not be overstated. Qualitative and mixed methods research involves judgments that no checklist can fully capture. The framework provides infrastructure that makes evaluation possible; it does not guarantee that all evaluation judgments will be correct. But infrastructure that enables imperfect evaluation is superior to no infrastructure at all. ↩
-
The SMA requirement parallels data availability requirements that economics and political science journals have adopted. See Matthews & Rantanen, supra note 5, at 576-78 (documenting near-universal data availability in those disciplines). The difference is that those requirements apply to quantitative data; the SMA applies to qualitative and mixed methods disclosure. ↩
-
The trigger language, “empirical claims about law in action,” tracks the scope clarification in Part I. Pure doctrinal interpretation lies outside the framework; empirical premises embedded in doctrinal work fall within it. See supra note 48 and accompanying text. ↩
-
Time estimates derive from pilot testing of comparable audit instruments in other disciplines. See Nosek et al., supra note 90, at 1424 (reporting that transparency checklists can be completed in comparable timeframes). ↩
-
The Rubric’s integration with existing workflows distinguishes it from peer review. Peer reviewers assess methodological sophistication and contribution to disciplinary knowledge, tasks requiring domain expertise. The Rubric assesses disclosure completeness and claim-evidence fit, tasks requiring careful attention to structured criteria. ↩
-
The BAE policy signals openness to constrained research. Without such a signal, authors conducting important research under legal constraints may assume that law reviews will reject their work for insufficient transparency and submit elsewhere or decline to pursue the research at all. ↩
-
This policy language is illustrative. Individual journals may prefer different formulations suited to their submission systems and editorial voice. ↩
-
Form letters are standard practice for submission deficiencies. The SMA form letter adds one category to existing deficiency notices (missing anonymization, incorrect formatting, excessive length). ↩
-
Screening memos already summarize submission characteristics for board consideration. Adding Rubric results to these memos documents methodological evaluation alongside substantive assessment. ↩
-
Specific revision requests contrast with current practice, which often involves general requests to “strengthen methodology” or “address limitations.” General requests permit cosmetic revision without substantive improvement. Specific requests tied to Rubric items create accountability. ↩
-
The granular approach, publishing doctrinal contributions while revising or removing unsupported empirical claims, respects the mixed nature of much legal scholarship. Many excellent articles combine doctrinal analysis with empirical assertions; each component should meet its appropriate standard. ↩
-
Training time estimates reflect comparable training for other editorial protocols. New editors learn Bluebook citation, submission processing, and revision procedures during orientation; SMA/Rubric training adds one module to existing training. ↩
-
The analogy to citation verification is precise. Editors verify that citations follow Bluebook format and support stated propositions without being experts in every cited source. Similarly, editors verify that SMA disclosures are complete and evidence matches claim type without being experts in qualitative methodology. ↩
-
Faculty consultation on difficult cases is already common at many law reviews. The framework structures what consultation should address rather than creating a new practice. ↩
-
Five to ten hours per volume represents modest faculty investment. Many faculty advisors already spend comparable time consulting on selection decisions, revision strategies, and publication disputes. ↩
-
The legibility point is central. Under current conditions, rigorous qualitative work competes with weak work on equal footing because editors cannot tell them apart. The framework makes rigor observable. ↩
-
See supra notes 16, 33, 115 and accompanying text (describing SRQR, COREQ, GRAMMS, and JARS-Qual reporting standards). ↩
-
Authors publishing in peer-reviewed social science venues already satisfy disclosure requirements comparable to, and often more demanding than, the SMA. For these authors, law review adoption of the framework removes a competitive disadvantage: their methodological investment, currently invisible to law review editors, becomes visible. ↩
-
Coordination mechanisms for journal policy reform have been studied in other contexts. See Nosek et al., supra note 107, at 2603-04 (documenting how coordinated adoption of preregistration requirements transformed psychology journal practices). ↩
-
On how demonstration effects and network externalities drive institutional diffusion, see David, supra note 114, at 332-37. ↩
-
The trajectory from diagnosis to author guidance to editorial infrastructure reflects a natural progression. Epstein and King identified the problem; subsequent work, including Linos and Carlson’s contribution in this Review, addressed author practices; the present framework addresses editorial practices that make author rigor consequential for publication outcomes. ↩