blog posts and news stories

Where's Denis?

It’s been a busy month for Empirical CEO Denis Newman, who has been conspicuously absent from our Palo Alto office as he jet-sets around the country to spread the good word of rigorous evidence in education research.

His first stop was Washington, DC, for the conference of the Society for Research on Educational Effectiveness (SREE). This was an opportunity to get together with collaborators, as well as plot proposal writing, blog postings, webinars, and revisions to our research guidelines for edtech impact studies. Andrew Jaciw, Empirical’s Chief Scientist, kept up the company’s methodological reputation with a paper presentation on “Leveraging Fidelity Data to Make Sense of Impact Results.” For Denis, a highlight was dinner with Peg Griffin, a longtime friend and his co-author on The Construction Zone. Then it was on to Austin, TX, for a very different kind of meeting—more of a festival, really.

At this year’s SXSWEDU, Denis was one of three speakers on the panel, “Can Evidence Even Keep Up with Edtech?” The problem the panel addressed was that edtech, as a rapidly moving field, seems to be outpacing the research that stakeholders may want to use to evaluate its products. How, then, can education stakeholders make informed decisions about whether to use edtech products?

According to Denis, the most important thing is for a district to have enough information to know whether a given edtech product may or may not work for that district’s unique population and context. Therefore, researchers may need to adapt their methods both to differentiate a product’s impact among subgroups and to meet the faster timelines of edtech product development. Empirical’s own solution to this quandary, Evidence as a Service™, offers quick-turnaround research reports that can examine impact and outcomes for specific student subgroups, with methodology that is flexible but rigorous enough to meet ESSA standards.

Denis praised the panel, stating, “In the festival’s spirit of invention, our moderator, Mitch Weisberg, masterfully engaged the audience from the beginning to pose the questions for the panel. Great questions, too. I got to cover all of my prepared talking points!”

You can read more coverage of our SXSWEDU panel on EdSurge.

After the panel, a string of meetings and parties kept the energy high and continued to show the growing interest in efficacy. The ISTE meetup was particularly important following this theme. The concern raised by the ISTE leadership and its members, who are school-based technology users, was that traditional research doesn’t tell practitioners whether a product is likely to work in their school, given its resources and student demographics. Users are faced with hundreds of choices in any product category and have little information for narrowing down the choice to a few that are worth piloting.

Following SXSWEDU, it was back to DC for the Consortium for School Networking (CoSN) conference. Denis participated in the annual Feedback Forum hosted by CoSN and the Software & Information Industry Association (SIIA), where SIIA—representing edtech developers—looked for feedback from the CIOs and other school district leaders. This year, SIIA was looking for feedback that would help the Empirical team improve the edtech research guidelines, which are sponsored by SIIA’s Education Technology Industry Network (ETIN). Linda Winter moderated and ran the session like a focus group, asking questions such as:

  • What data do you need from products to gauge engagement?
  • How can the relationship of engagement and achievement indicate that a product is working?
  • What is the role of pilots in measuring success?
  • And before a pilot decision is made, what do CoSN members need to know about edtech products to decide if they are likely to work?

The CoSN members were brutally honest, pointing out that as the leaders responsible for the infrastructure, they were concerned with implementability, bandwidth requirements, and standards such as single sign-on. Whether the software improved learning was secondary—if teachers couldn’t get the program to work, it hardly mattered how effective it may be in other districts.

Now, Denis is preparing for the rest of the spring conference season. Next stop will be New York City and the American Education Research Association (AERA) conference, which attracts over 20,000 researchers annually. The Empirical team will be presenting four studies, as well as co-hosting a cocktail reception with AERA’s school research division. Then, it’s back on the plane for ASU-GSV in San Diego.

Find more information about Evidence as a Service and the edtech research guidelines.

2018-03-26

Presenting at AERA 2018

We will again be presenting at the annual meeting of the American Educational Research Association (AERA). Join the Empirical Education team in New York City from April 13-17, 2018.

Research presentations will include the following.

For Quasi-Experiments on EdTech Products, What Counts as Being Treated?
Authors: Val Lazarev, Denis Newman, & Malvika Bhagwat
In Roundtable Session: Examining the Impact of Accountability Systems on Both Teachers and Students
Friday, April 13 - 2:15 to 3:45pm
New York Marriott Marquis, Fifth Floor, Westside Ballroom Salon 3

Abstract: Edtech products are becoming increasingly prevalent in K-12 schools, and schools’ need to evaluate the products’ value for students calls for a program of rigorous research, at least at level 2 of the ESSA standards for evidence. This paper draws on our experience conducting a large-scale quasi-experiment in California schools. The product’s wide-ranging intensity of implementation presented a challenge in identifying schools that had used it intensively enough to be considered part of the treatment group.


Planning Impact Evaluations Over the Long Term: The Art of Anticipating and Adapting
Authors: Andrew P Jaciw & Thanh Thi Nguyen
In Session: The Challenges and Successes of Conducting Large-Scale Educational Research
Saturday, April 14 - 2:15 to 3:45pm
Sheraton New York Times Square, Second Floor, Central Park East Room

Abstract: Perspective. It is good practice to identify core research questions and important elements of study designs a priori, to prevent post-hoc “fishing” exercises and reduce the risk of drawing false-positive conclusions [16,19]. However, programs in education, and evaluations of them, evolve [6], making it difficult to follow a charted course. For example, in the lifetime of a program and its evaluation, new curricular content or evidence standards for evaluations may be introduced and thus drive changes in program implementation and evaluation.

Objectives. This work presents three cases from program impact evaluations conducted through the Department of Education. In each case, unanticipated results or changes in study context had significant consequences for program recipients, developers and evaluators. We discuss responses, either enacted or envisioned, for addressing these challenges. The work is intended to serve as a practical guide for researchers and evaluators who encounter similar issues.

Methods/Data Sources/Results. The first case concerns the problem of outcome measures keeping pace with evolving content standards. For example, in assessing impacts of science programs, program developers and evaluators are challenged to find assessments that align with the Next Generation Science Standards (NGSS). Existing NGSS-aligned assessments are largely untested or in development, so the evaluator must find, adapt, or develop instruments with strong reliability and with construct and face validity: ones that will be accepted by independent review and not considered over-aligned to the interventions. We describe a hands-on approach to working with a state testing agency to develop forms to assess impacts on science generally, and on constructs more specifically aligned to the program evaluated.

The second case concerns the problem of reprioritizing research questions mid-study. As noted above, researchers often identify primary (confirmatory) research questions at the outset of a study. Such questions are held to high evidence standards and are differentiated from exploratory questions, which often originate after examining the data and must be replicated to be considered reliable [16]. Sometimes, however, exploratory analyses produce unanticipated results that may be highly consequential. The evaluator must grapple with the dilemma of whether to re-prioritize the result or attempt to proceed with replication. We discuss this issue with reference to an RCT in which the dilemma arose.

The third case addresses the problem of designing and implementing a study that meets one set of evidence standards when the results will be reviewed according to a later version of those standards. A practical question is what to do when this happens and the study consequently falls under a lower tier of the new evidence standard. With reference to an actual case, we consider several response options, including assessing the consequence of this reclassification for future funding of the program, and augmenting the research design to satisfy the new standards of evidence.

Significance. Responding to demands of changing contexts, programs in the social sciences are moving targets. They demand a flexible but well-reasoned and justified approach to evaluation. This session provides practical examples and is intended to promote discussion for generating solutions to challenges of this kind.


Indicators of Successful Teacher Recruitment and Retention in Oklahoma Rural Schools
Authors: Val Lazarev, Megan Toby, Jenna Lynn Zacamy, Denis Newman, & Li Lin
In Session: Teacher Effectiveness, Retention, and Coaching
Saturday, April 14 - 4:05 to 6:05pm
New York Marriott Marquis, Fifth Floor, Booth

Abstract: The purpose of this study was to identify factors associated with successful recruitment and retention of teachers in Oklahoma rural school districts, in order to highlight potential strategies to address Oklahoma’s teaching shortage. The study was designed to identify teacher-level, district-level, and community characteristics that predict which teachers are most likely to be successfully recruited and retained. A key finding is that for teachers in rural schools, total compensation and increased responsibilities in job assignment are positively associated with successful recruitment and retention. Evidence provided by this study can be used to inform incentive schemes to help retain certain groups of teachers and increase retention rates overall.


Teacher Evaluation Rubric Properties and Associations with School Characteristics: Evidence from the Texas Evaluation System
Authors: Val Lazarev, Thanh Thi Nguyen, Denis Newman, Jenna Lynn Zacamy, Li Lin
In Session: Teacher Evaluation Under the Microscope
Tuesday, April 17 - 12:25 to 1:55pm
New York Marriott Marquis, Seventh Floor, Astor Ballroom

Abstract: A seminal 2009 report, The Widget Effect, alerted the nation to the tendency of traditional teacher evaluation systems to treat teachers like widgets, undifferentiated in their level of effectiveness. Since then, a growing body of research, coupled with new federal initiatives, has catalyzed the reform of such systems. In 2014-15, Texas piloted its reformed evaluation system, collecting classroom observation rubric ratings from over 8,000 teachers across 51 school districts. This study analyzed that large dataset and found that 26.5 percent of teachers were rated below proficient, compared to 2 percent under previous measures. The study also found a promising indication of low bias in the rubric ratings stemming from school characteristics, given that these were minimally associated with observation ratings.

We look forward to seeing you at our sessions to discuss our research. We’re also co-hosting a cocktail reception with Division H! If you’d like an invite, let us know.

2018-03-06

Jefferson Education Accelerator Contracts with Empirical for Evidence as a Service™

Jefferson Education Accelerator (JEA) has contracted with Empirical Education Inc. for research services that will provide evidence of the impact of education technology products developed by their portfolio companies. JEA’s mission is to support and evaluate promising edtech solutions in order to help educators make more informed decisions about the products they invest in. The study is designed to meet level 2 or “moderate” evidence as defined by the Every Student Succeeds Act. Empirical will provide a Student Impact Report under its Evidence as a Service offering, which combines student-level product usage data and a school district’s administrative data to conduct a comparison group study. Denis Newman, Empirical’s CEO, stated, “This is a perfect application of our Evidence as a Service product, which provides fast answers to questions about which kids will benefit the most from any particular learning program.” Todd Bloom, JEA’s Chief Academic Officer and Research Associate Professor at UVA’s Curry School of Education, commented: “Empirical Education is a highly respected research firm and offers the type of aggressive timeline that is sorely needed in the fast-paced edtech market.” A report on impact in the school year 2017-2018 is expected to be completed in July.

2018-02-20

IES Published Our REL Southwest Study on Trends in Teacher Mobility

The U.S. Department of Education’s Institute of Education Sciences published a report of a study we conducted for REL Southwest! We are thankful for the support and engagement we received from the Educator Effectiveness Research Alliance throughout the study.

The study was published in December 2017 and provides updated information regarding teacher mobility for Texas public schools during the 2011-12 through 2015-16 school years. Teacher mobility is defined as teachers changing schools or leaving the public school system.

In the report, descriptive information on mobility rates is presented at the regional and state levels for each school year. Mobility rates are disaggregated further into destination proportions to describe the proportion of teacher mobility due to within-district movement, between-district movement, and leaving Texas public schools. This study leverages data collected by the Texas Education Agency during the pilot of the Texas Teacher Evaluation and Support System (T-TESS) in 57 school districts in 2014-15. Analyses examine how components of the T-TESS observation rubric are related to school-level teacher mobility rates.

During the 2011-12 school year, 18.7% of Texas teachers moved schools within a district, moved between districts, or left the Texas public school system. By 2015-16, this mobility rate had increased to 22%. Moving between districts was the primary driver of the increase in mobility rates. Results indicate significant links between mobility and teacher, student, and school demographic characteristics. Teachers with special education certifications left Texas public schools at nearly twice the rate of teachers with other teaching certifications. School-level mobility rates showed significant positive correlations with the proportion of special education, economically disadvantaged, low-performing, and minority students. School-level mobility rates were negatively correlated with the proportion of English learner students. Schools with higher overall observation ratings on the T-TESS rubric tended to have lower mobility rates.

Findings from this study will provide state and district policymakers in Texas with updated information about trends and correlates of mobility in the teaching workforce, and offer a systematic baseline for monitoring and planning for future changes. Informed by these findings, policymakers can formulate a more strategic and targeted approach for recruiting and retaining teachers. For instance, instead of using generic approaches to enhance the overall supply of teachers or improve recruitment, more targeted efforts to attract and retain teachers in specific subject areas (for example, special education), in certain stages of their career (for example, novice teachers), and in certain geographic areas are likely to be more productive. Moreover, this analysis may enrich the existing knowledge base about schools’ teacher retention and mobility in relation to the quality of their teaching force, or may inform policy discussions about the importance of a stable teaching force for teaching effectiveness.

2018-02-01

How Efficacy Studies Can Help Decision-makers Decide if a Product is Likely to Work in Their Schools

We and our colleagues have been working on translating the results of rigorous studies of the impact of educational products, programs, and policies for people in school districts who are making the decisions whether to purchase or even just try out—pilot—the product. We are influenced by Stanford University methodologist Lee Cronbach, especially his seminal book (1982) and article (1975), in which he concludes “When we give proper weight to local conditions, any generalization is a working hypothesis, not a conclusion…positive results obtained with a new procedure for early education in one community warrant another community trying it. But instead of trusting that those results generalize, the next community needs its own local evaluation” (p. 125). In other words, we consider even the best designed experiment to be like a case study, as much about the local and moderating role of context, as about the treatment when interpreting the causal effect of the program.

Following the focus on context, we can consider characteristics of the people and of the institution where the experiment was conducted to be co-causes of the result that deserve full attention—even though, technically, only the treatment, which was randomly assigned, was controlled. Here we argue that any generalization from a rigorous study, where the question is whether the product is likely to be worth trying in a new district, must consider the full context of the study.

Technically, in the language of evaluation research, these differences in who or where the product or “treatment” works are called “interaction effects” between the treatment and the characteristic of interest (e.g., subgroups of students by demographic category or achievement level, teachers with different skills, or bandwidth available in the building). The characteristic of interest can be called a “moderator”, since it changes, or moderates, the impact of the treatment. An interaction reveals if there is differential impact and whether a group with a particular characteristic is advantaged, disadvantaged, or unaffected by the product.
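As an illustrative sketch (simulated data and hypothetical variable names, not from any actual Empirical study), an interaction effect of this kind can be estimated by adding a treatment-by-moderator term to a regression model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data for a hypothetical product that helps English-proficient
# students (el=0) more than English learners (el=1).
rng = np.random.default_rng(0)
n = 2000
treat = rng.integers(0, 2, n)  # random assignment to the product
el = rng.integers(0, 2, n)     # moderator: English learner status
# True impacts built into the simulation: 5 points for proficient
# students, but only 5 - 4 = 1 point for English learners.
y = 50 + 5 * treat - 4 * treat * el - 3 * el + rng.normal(0, 10, n)
df = pd.DataFrame({"y": y, "treat": treat, "el": el})

# "treat * el" expands to treat + el + treat:el; the treat:el
# coefficient is the interaction, i.e., the differential impact.
model = smf.ols("y ~ treat * el", data=df).fit()
print(model.params["treat"])     # impact for the el=0 subgroup (about 5)
print(model.params["treat:el"])  # how much the impact differs for ELs (about -4)
```

A product with an average impact near 3 points could thus hide a 5-point benefit for one subgroup and a 1-point benefit for the other; the interaction term is what surfaces that difference.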

The rules set out by The Department of Education’s What Works Clearinghouse (WWC) focus on the validity of the experimental conclusion: Did the program work on average compared to a control group? Whether it works better for poor kids than for middle class kids, works better for uncertified teachers versus veteran teachers, increases or closes a gap between English learners and those who are proficient, are not part of the information provided in their reviews. But these differences are exactly what buyers need in order to understand whether the product is a good candidate for a population like theirs. If a program works substantially better for English proficient students than for English learners, and the purchasing school has largely the latter type of student, it is important that the school administrator know the context for the research and the result.

The accuracy of an experimental finding for a new context depends on its not being moderated by conditions. This is recognized in recent methods of generalization (Tipton, 2013) that essentially apply non-experimental adjustments to experimental results to make them more accurate and more relevant to specific local contexts.

Work by Jaciw (2016a, 2016b) takes this one step further.

First, he confirms the result that if the impact of the program is moderated, and if moderators are distributed differently between sites, then an experimental result from one site will yield a biased inference for another site. This would be the case, for example, if the impact of a program depends on individual socioeconomic status, and there is a difference between the study and inference sites in the proportion of individuals with low socioeconomic status. Conditions for this “external validity bias” are well understood, but the consequences are addressed much less often than the usual selection bias. Experiments can yield accurate results about the efficacy of a program for the sample studied, but that average may not apply either to a subgroup within the sample or to a population outside the study.
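A minimal numeric sketch (with hypothetical impact values and subgroup proportions) shows how this bias arises purely from the moderator being distributed differently across sites:

```python
# Hypothetical per-subgroup impacts (test-score points) that are the
# same everywhere: the program's effect for each subgroup is fixed.
impact_low_ses, impact_high_ses = 2.0, 6.0

# What differs between sites is the subgroup mix.
p_low_study = 0.3      # proportion low-SES at the study site
p_low_inference = 0.7  # proportion low-SES at the inference site

# Average impact each site would actually experience
avg_study = p_low_study * impact_low_ses + (1 - p_low_study) * impact_high_ses
avg_inference = (p_low_inference * impact_low_ses
                 + (1 - p_low_inference) * impact_high_ses)

print(avg_study)                  # unbiased for the study sample
print(avg_inference)              # what the inference site would see
print(avg_study - avg_inference)  # the external validity bias if transported
```

With these numbers the study site's average impact is 4.8 points while the inference site's is only 3.2, so transporting the study's "unbiased" estimate overstates the benefit there by 1.6 points, even though the experiment itself was perfectly valid internally.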

Second, he uses results from a multisite trial to show empirically that there is potential for significant bias when inferring experimental results from one subset of sites to other inference sites within the study; however, moderators can account for much of the variation in impact across sites. Average impact findings from experiments provide a summary of whether a program works, but leave the consumer guessing about the boundary conditions for that effect—the limits beyond which the average effect ceases to apply. Cronbach was highly aware of this, titling a chapter in his 1982 book “The Limited Reach of Internal Validity”. Using terms like “unbiased” to describe impact findings from experiments is correct in a technical sense (i.e., the point estimate, on hypothetical repeated sampling, is centered on the true average effect for the sample studied), but it can impart an incorrect sense of the external validity of the result: that it applies beyond the instance of the study.

Implications of the work cited are, first, that it is possible to unpack marginal impact estimates through subgroup and moderator analyses to arrive at more-accurate inferences for individuals. Second, that we should do so—why obscure differences by paying attention to only the grand mean impact estimate for the sample? And third, that we should be planful in deciding which subgroups to assess impacts for in the context of individual experiments.

Local decision-makers’ primary concern should be with whether a program will work with their specific population, and to ask for causal evidence that considers local conditions through the moderating role of student, teacher, and school attributes. Looking at finer differences in impact may elicit criticism that it introduces another type of uncertainty—specifically from random sampling error—which may be minimal with gross impacts and large samples, but influential when looking at differences in impact with more and smaller samples. This is a fair criticism, but differential effects may be less susceptible to random perturbations (low power) than assumed, especially if subgroups are identified at individual levels in the context of cluster randomized trials (e.g., individual student-level SES, as opposed to school average SES) (Bloom, 2005; Jaciw, Lin, & Ma, 2016).

References:
Bloom, H. S. (2005). Randomizing groups to evaluate place-based programs. In H. S. Bloom (Ed.), Learning more from social experiments. New York: Russell Sage Foundation.

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116-127.

Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco, CA: Jossey-Bass.

Jaciw, A. P. (2016). Applications of a within-study comparison approach for evaluating bias in generalized causal inferences from comparison group studies. Evaluation Review, 40(3), 241-276. Retrieved from https://journals.sagepub.com/doi/abs/10.1177/0193841X16664457

Jaciw, A. P. (2016). Assessing the accuracy of generalized inferences from comparison group studies using a within-study comparison approach: The methodology. Evaluation Review, 40(3), 199-240. Retrieved from https://journals.sagepub.com/doi/abs/10.1177/0193841x16664456

Jaciw, A., Lin, L., & Ma, B. (2016). An empirical study of design parameters for assessing differential impacts for students in group randomized trials. Evaluation Review. Retrieved from https://journals.sagepub.com/doi/10.1177/0193841X16659600

Tipton, E. (2013). Improving generalizations from experiments using propensity score subclassification: Assumptions, properties, and contexts. Journal of Educational and Behavioral Statistics, 38, 239-266.

2018-01-16

Spring 2018 Conference Season is Taking Shape


We’ll be on the road again this spring.

SREE

Andrew Jaciw and Denis Newman will be in Washington, DC for the annual spring conference of the Society for Research on Educational Effectiveness (SREE), the premier conference on rigorous research. Andrew Jaciw will present his paper, Leveraging Fidelity Data to Make Sense of Impact Results: Informing Practice through Research. His presentation will be a part of Session 2I: Research Methods - Post-Random Assignment Models: Fidelity, Attrition, Mediation & More, from 8-10am on Thursday, March 1.

SXSW EDU

In March, Denis Newman will be attending SXSW EDU Conference & Festival in Austin, TX and presenting on a panel along with Malvika Bhagwat, Jason Palmer, and Karen Billings titled Can Evidence Even Keep Up with EdTech? This will address how researchers and companies can produce evidence that products work—in time for educators and administrators to make a knowledgeable buying decision under accelerating timelines.

AERA

Empirical staff will be presenting in four sessions at the annual conference of the American Educational Research Association (AERA) in NYC in April, all under Division H (Research, Evaluation, and Assessment in Schools).

  1. For Quasi-Experiments on EdTech Products, What Counts as Being Treated?
  2. Teacher Evaluation Rubric Properties and Associations with School Characteristics: Evidence from the Texas Evaluation System
  3. Indicators of Successful Teacher Recruitment and Retention in Oklahoma Rural Schools
  4. The Challenges and Successes of Conducting Large-Scale Educational Research

In addition to these presentations, we are planning another of our celebrated receptions in NYC so stay tuned for details.

ISTE

A panel on our Research Guidelines has been accepted at this major convention, considered the epicenter of edtech, with thousands of users and hundreds of companies, held this year in Chicago from June 24–27.

2017-12-18

APPAM doesn’t stand for A Pretty Pithy Abbreviated Meeting

APPAM does stand for excellence, critical thinking, and quality research.

The 2017 fall research conference kept reminding me of one recurrent theme: bridging the chasms between researchers, policymakers, and practitioners.

photo of program

Linear processes don’t work. Participatory research is critical!

Another hot topic is generalizability! There is a lot of work to be done here. What works? For whom? Why?

photo of city

Lots of food for thought!

photo of cake

2017-11-06

IES Publishes our Recent REL Southwest Teacher Studies

The U.S. Department of Education’s Institute of Education Sciences published two reports of studies we conducted for REL Southwest! We are thankful for the support and engagement we received from the Educator Effectiveness Research Alliance and the Oklahoma Rural Schools Research Alliance throughout the studies. The collaboration with the research alliances and educators aligns well with what we set out to do in our core mission: to support K-12 systems and empower educators in making evidence-based decisions.

The first study was published earlier this month and identified factors associated with successful recruitment and retention of teachers in Oklahoma rural school districts, in order to highlight potential strategies to address Oklahoma’s teaching shortage. This correlational study covered a 10-year period (the 2005-06 to 2014-15 school years) and used data from the Oklahoma State Department of Education, the Oklahoma Office of Educational Quality and Accountability, federal non-education sources, and publicly available geographic information systems from Google Maps. The study found that teachers who are male, those who have higher postsecondary degrees, and those who have more teaching experience are harder than others to recruit and retain in Oklahoma schools. In addition, for teachers in rural districts, higher total compensation and increased responsibilities in job assignment are positively associated with successful recruitment and retention. In order to provide context, the study also examined patterns of teacher job mobility between rural and non-rural school districts. The rate of teachers in Oklahoma rural schools reaching tenure is slightly lower than the rates for teachers in non-rural areas. Also, rural school districts in Oklahoma had consistently lower rates of success in recruiting teachers than non-rural school districts from 2006-07 to 2011-12.

This most recent study, published last week, examined data from the 2014-15 pilot implementation of the Texas Teacher Evaluation and Support System (T-TESS). In 2014-15 the Texas Education Agency piloted the T-TESS in 57 school districts. During the pilot year teacher overall ratings were based solely on rubric ratings on 16 dimensions across four domains.

The study examined the statistical properties of the T-TESS rubric to explore the extent to which it differentiates teachers on teaching quality and to investigate its internal consistency and efficiency. It also explored whether certain types of schools have teachers with higher or lower ratings. Using data from the pilot for more than 8,000 teachers, the study found that the rubric differentiates teacher effectiveness at the overall, domain, and dimension levels; domain and dimension ratings on the observation rubric are internally consistent; and the observation rubric is efficient, with each dimension making a unique contribution to a teacher’s overall rating. In addition, findings indicated that T-TESS rubric ratings varied slightly in relation to some school characteristics that were examined, such as socioeconomic status and percentage of English Language Learners. However, there is little indication that these characteristics introduced bias in the evaluators’ ratings.

2017-10-30

New Article Published on the Processes Involved with Scaling-Up or Abandoning an Innovation

Our study of scaling up an innovation that challenges conventional approaches to research is being published in the Peabody Journal of Education and is now available online at Taylor & Francis.

The article, “School Processes That Can Drive Scaling-Up of an Innovation or Contribute to Its Abandonment”, looks at the drivers of school-level processes that predict the growth or the attrition of a school’s team implementing an innovation. We looked for the factors that helped to explain the school-level success or failure of a high school academic literacy framework, Reading Apprenticeship, developed by WestEd’s Strategic Literacy Initiative (SLI). The work was funded by an i3 validation grant on which we were independent evaluators. SLI had an innovative strategy for scaling-up, involving school-based cross-disciplinary teacher teams, and brought the framework to 274 schools across five states. This strategy follows research literature that views scale-up as increasing local ownership and depth of commitment. In this study, we show that there are factors working both for and against the increase of teachers and schools joining and staying in an innovation. Given wide variation in teacher uptake, we can identify processes present in the initial year that predicted gains and losses of participants.

Clicking on this link will allow you to read the abstract (and the full article if you subscribe to the journal). If you don’t already subscribe, but you would like to read the article, send us an email, and we will share with you a link that will grant you a free download of the article.

2017-10-20

Sure, the edtech product is proven to work, but will it work in my district?

It’s a scenario not uncommon in your district administrator’s office. They’ve received sales pitches and demos of a slew of new education technology (edtech) products, each one accompanied by “evidence” of its general benefits for teachers and students. But underlying the administrator’s decision is a question often left unanswered: Will this work in our district?

In the conventional approach to research advocated, for example, by the U.S. Department of Education and the Every Student Succeeds Act (ESSA), the finding that is reported and used in the review of products is the overall average impact for any and all subgroups of students, teachers, or schools in the study sample. In our own research, we have repeatedly seen that who a product works for and under what conditions can be more important than its average impact. There are products that are effective on average but don’t work for an important subgroup of students or, conversely, that appear ineffective on average yet work well for a particular subgroup. Some examples:

  • A math product, while found to be effective overall, was effective for white students but ineffective for minority students. This effect would be relevant to any district wanting to close (rather than further widen) an achievement gap.
  • A product that did well on average performed very well in elementary grades but poorly in middle school. This has obvious relevance for a district, as well as for the provider who may modify its marketing target.
  • A teacher PD product greatly benefitted uncertified teachers but didn’t help the veteran teachers do any better than their peers using the conventional textbook. This product may be useful for new teachers but a poor choice for others.
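The arithmetic behind these examples can be sketched in a few lines: an overall average impact can look modestly positive even when one subgroup is harmed. The post-test scores below are invented for illustration and are not from any of the studies described above.

```python
# Illustrative only: an overall average impact can mask opposite
# subgroup effects. All scores below are invented for demonstration.

def mean(xs):
    return sum(xs) / len(xs)

def impact(treatment, control):
    """Simple difference in mean outcomes between the two groups."""
    return mean(treatment) - mean(control)

# Hypothetical post-test scores by grade-level subgroup
elementary = {"treatment": [78, 82, 85, 80], "control": [70, 72, 74, 71]}
middle     = {"treatment": [65, 60, 63, 62], "control": [68, 70, 66, 69]}

overall_t = elementary["treatment"] + middle["treatment"]
overall_c = elementary["control"] + middle["control"]

print(impact(overall_t, overall_c))   # overall impact: small and positive
print(impact(elementary["treatment"], elementary["control"]))  # positive
print(impact(middle["treatment"], middle["control"]))          # negative
```

A review that reports only the first number would rate this hypothetical product favorably, while a middle school adopting it would likely be disappointed.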

As a research organization, we have been looking at ways to efficiently answer these kinds of questions for products. Especially now, with the evidence requirements built into ESSA, school leaders can ask the edtech salesperson: “Does your product have the evidence that ESSA calls for?” They may well hear an affirmative answer supported by an executive summary of a recent study. But there’s a fundamental problem with what ESSA is asking for: ESSA doesn’t ask for evidence that the product is likely to work in your specific district. This is not the fault of ESSA’s drafters. The problem is built into the conventional design of research on “what works”. The U.S. Department of Education’s What Works Clearinghouse (WWC) bases its evidence rating only on an average; if there are different results for different subgroups of students, that difference is not part of the rating. Since ESSA adopts the WWC approach, that’s the law of the land. Hence, your district’s most pressing question is left unanswered: Will this work for a district like mine?

Recently, the Software & Information Industry Association, the primary trade association of the software industry, released a set of guidelines for research explaining to its member companies the importance of working with districts to conduct research that will meet the ESSA standards. As the lead author of this report, I can say it was our goal to foster an improved dialogue between schools and providers about the evidence that should be available to support buying these products. As an addendum to the guidelines, here are three suggestions aimed at arming educators with ways to look at the evidence, and with questions to ask the edtech salesperson:

  1. It is better to have some information than no information. The fact that there’s research that found the product worked somewhere gives you a working hypothesis that it could be a better than average bet to try out in your district. In this respect, you can treat the study ratings from the WWC and from newer sites such as Evidence for ESSA as screening tools: they will point you to valid studies about the product you’re interested in. But you should treat previous research as a working hypothesis rather than proof.
  2. Look at where the research evidence was collected. You’ll want to know whether the research sites and populations in the study resemble your local conditions. WWC has gone to considerable effort to code the research by the population in the study and provides a search tool so you can find studies conducted in districts like yours. And if you download and read the original report, it may tell you whether it will help reduce or increase an achievement gap of concern.
  3. Make a deal with the salesperson. In exchange for your help in organizing a pilot and allowing them to analyze your data, you get the product for a year at a steep discount and a good ongoing price if you decide to implement the product at full scale. While you’re unlikely to get results from a pilot (e.g., based on spring testing) in time to support a decision, you can at least lower your cost for the materials. You’ll also help provide a neighboring district with similar populations and conditions with useful evidence to support a strong working hypothesis as to whether the product is likely to work for them as well.

2017-10-15