An initial assessment of any study’s potential contribution to policy should begin with three questions: What is being studied? Why is it expected to work? How is program performance assessed (286)?
Why do some expect that providing an exit option will improve education generally? … Does choice somehow affect teaching, learning, or the broader educational environment? … Policy makers need to know more than that program X and outcome Y are correlated; theory is what justifies reading a correlation as causation. Without theory, it is hard to assess the evidentiary basis for regarding a program as a success or failure. William Howell and Paul Peterson, for example, report finding a positive achievement gain for African Americans—but not Latinos or whites—participating in a privately funded voucher program. 13 This is described as evidence of the benefits of vouchers. Yet what, exactly, did the voucher schools—and more specifically the presence of vouchers—do to cause such gains? Why were the gains racially confined? Is this something specific to the voucher approach? We do not know. There is no satisfactory causal explanation for the reported correlation. 14
In addition to the what and why, it is also important to ask how, that is, how is success or failure being assessed? The default dependent variable for many education studies is a test score measure (one review found roughly three-quarters of 377 studies used test scores as their dependent variable). 15 While much is made of achievement scores, they represent only a narrow dimension of education. The same program that increases test scores may also increase dropout rates. 16 Increasing civic engagement among students may also decrease their tolerance for dissenting views. 17 Satisfied parents do not necessarily mean higher-achieving children. 18 Which outcomes are analyzed and how they are measured can themselves determine the results of a policy study. 19 In short, what constitutes “success” or results in a “better” school is very much in the eye of the beholder, and whether a program works or fails depends on what facets of education are considered politically important. Whatever choice does or does not affect, consumers of choice studies are well advised to put dependent variables into context before buying into sweeping policy conclusions (287).
How, then, can consumers trust our [academics’] research? They should form independent judgments of the veracity of our studies by asking this basic question: Were the data treated fairly? 23 Fair means that the researcher makes known his or her preferences and offers demonstrable assurance that he or she has adhered to scholarly conventions designed to minimize the influence of those preferences. Armed with such a reference point, consumers can be reasonably confident that data are driving a study’s policy conclusions rather than vice versa. 24
For choice studies these reference points are not always clear, and consumers may lack an insider’s knowledge of what “good scholarly practice” means. These problems can be reasonably, albeit imperfectly, dealt with by following a series of simple steps:
Note institutional affiliations. … Stanford University’s Hoover Institution has a public education task force that reads like a who’s who of researchers associated with pro-choice and voucher arguments (including Paul Peterson, John Chubb, Terry Moe, Caroline Hoxby, Chester E. Finn, Jr., and Diane Ravitch). Scholars affiliated with the Education Policy Studies Laboratory at Arizona State University (for example, Alex Molnar and Peter Cookson) tend more toward a skeptical view of choice and, especially, privatization and commercialism in education. …
Note funding sources. … For example, the Olin Foundation, the Bradley Foundation, and the Friedman Foundation support vouchers, and all three have generously supported high-profile academic choice studies. 26 …
Confirm quality. …Peer review by nonpartisan specialists helps ensure the credibility of scholarly work. …
Look for replication. … Indefinite data embargoes should invite outright skepticism. Without the possibility of replication, it is difficult to independently assess the validity of prescriptive claims (287-8).
Choice research is heavily based on variants of regression analysis—the standard statistical tool kit of social science research. Regression, however, has well-known limitations (288). 32
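One of regression's best-known limitations—omitted-variable bias—can be sketched with a short simulation. This is a hypothetical illustration, not an analysis from the article: the variable names, coefficients, and data are invented. The "true" effect of x on y is 1.0, but an unobserved confounder z drives both, so a simple regression of y on x alone overstates the effect.

```python
import random

random.seed(0)

# Hypothetical setup: x (e.g., exposure to a program) and y (e.g., a test
# score) are both driven by an unobserved confounder z (e.g., family
# resources). The true causal effect of x on y is 1.0.
n = 10_000
true_effect = 1.0
confounder_effect = 2.0

z = [random.gauss(0, 1) for _ in range(n)]            # unobserved confounder
x = [zi + random.gauss(0, 1) for zi in z]             # x is correlated with z
y = [true_effect * xi + confounder_effect * zi + random.gauss(0, 1)
     for xi, zi in zip(x, z)]

def slope(xs, ys):
    """Ordinary-least-squares slope for a one-variable regression."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

# Regressing y on x alone (omitting z) yields a biased estimate.
naive = slope(x, y)
print(f"true effect: {true_effect}, naive estimate: {naive:.2f}")
```

With these assumed parameters the naive slope converges to about 2.0—double the true effect—because the regression attributes z's influence to x. A study that cannot observe or control for the relevant confounders faces exactly this problem, whatever its sample size.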
Most choice studies are also correlational rather than truly experimental. There are good reasons for this. …randomly assigning students to different schools or choice programs presents formidable legal, ethical, financial, and logistical problems. … Even randomized field trials, however, are several steps from the experimental ideal. 33 For example, by definition those who apply for vouchers—whether they are awarded them or not—are a self-selected rather than randomly chosen group. Of this self-selected group, a significant percentage (up to 50 percent or more) awarded a voucher will not actually use it. Receiving schools may select nonrandomly among those students offered a voucher. Of those who do use their voucher, many will use it temporarily: voucher programs tend to have high turnover among participants (289). 34
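The selection problems listed above can also be made concrete with a small simulation. Again, this is an illustrative sketch with invented numbers, not the article's data: applicants self-select on unobserved motivation, only about half of awardees actually use the voucher, and the voucher itself is given zero true effect on scores. A naive comparison of users and non-users still finds a "gain."

```python
import math
import random

random.seed(1)

# Hypothetical population: motivation is unobserved, raises both the
# probability of applying for a voucher and the test score itself.
n = 20_000
students = []
for _ in range(n):
    motivation = random.gauss(0, 1)                     # unobserved
    # Self-selection: more motivated families are likelier to apply.
    applies = random.random() < 1 / (1 + math.exp(-2 * motivation))
    awarded = applies and random.random() < 0.5         # lottery among applicants
    uses = awarded and random.random() < 0.5            # roughly half never use it
    # Note: the score formula contains NO voucher term at all.
    score = 50 + 5 * motivation + random.gauss(0, 5)
    students.append((uses, score))

user_scores = [s for u, s in students if u]
nonuser_scores = [s for u, s in students if not u]
gap = (sum(user_scores) / len(user_scores)
       - sum(nonuser_scores) / len(nonuser_scores))
print(f"naive user-vs-nonuser gap: {gap:.2f} points (true effect is 0)")
```

Under these assumptions the user-vs-nonuser gap comes out several points positive even though the program does nothing, because voucher use is a proxy for motivation. This is why comparing users to non-users—rather than lottery winners to lottery losers—cannot recover a causal effect.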
John Chubb and Terry Moe issued probably the best known, and certainly the most sweeping, prescription based on a scholarly study of school choice: “We think reformers would do well to entertain the notion that choice is a panacea…Choice is a self-contained reform with its own rationale and justification. It has the capacity all by itself to bring about the kind of transformation that, for years, reformers have been seeking to engineer in myriad other ways.” 35 Their advice to engage in a wholesale, market-based transformation of education is remarkable since it springs from a study that did not examine any sort of choice program. Chubb and Moe inferred the effects of universal choice from a study of public and private school differences. Their prescriptive leap was to assume that universal choice would result in more schools maximizing the benefits they associated with private schools while minimizing the problems they associated with public schools. Perhaps choice will do this, but their actual study contains no direct analysis or empirical evidence of any such impact (289).
Smith, Kevin. “Data Don’t Matter? Academic Research and School Choice.” Perspectives on Politics 3:2 (June 2005): 285-299.