Blog Posts and News Stories

Updated Research Guidelines Will Improve Education Technology Products and Provide More Value to Schools

Recommendations include 16 best practices for the design, implementation, and reporting of usable evidence for educators

Palo Alto, CA (April 25, 2018) – Empirical Education Inc. and the Education Technology Industry Network (ETIN) of SIIA released an important update to the “Guidelines for Conducting and Reporting Edtech Impact Research in U.S. K-12 Schools” today.

Authored by Empirical Education researchers Drs. Denis Newman, Andrew Jaciw, and Valeriy Lazarev, the Guidelines detail 16 best practices for the design, implementation, and reporting of efficacy research on education technology. Recommendations range from completing the product’s logic model before fielding it to disseminating a study’s results in accessible and non-technical language.

The Guidelines were first introduced in July 2017 at ETIN’s Edtech Impact Symposium in response to the changing demand for research. They address new challenges driven by the accelerated pace of edtech development and product releases, the movement of new software to the cloud, and the passage of the Every Student Succeeds Act (ESSA). The authors committed to making regular updates to keep pace with technical advances in edtech and research methods.

“Our collaboration with ETIN brought the right mix of practical expertise to this important document,” said Denis Newman, CEO of Empirical Education and lead author of the Guidelines. “ETIN provided valuable expertise in edtech marketing, policy, and development. With over a decade of experience evaluating policies, programs, and products for the U.S. Department of Education, major research organizations, and publishers, Empirical Education brought a deep understanding of how studies are traditionally performed and how they can be improved in the future. Our experience with our Evidence as a Service™ offering to investors and developers of edtech products also informed the guidelines.”

The current edition advocates for analysis of usage patterns in the data collected routinely by edtech applications. These patterns help to identify classrooms and schools with adequate implementation and enable lower-cost, faster-turnaround research. Rather than investing hundreds of thousands of dollars in a single large-scale study, developers should consider multiple small-scale studies. The authors also point to the advantages of subgroup analysis for understanding how and for whom a product works best, more directly answering common educator questions. Issues with quality of implementation are addressed in greater depth, and the visual design of the Guidelines has been refined for improved readability.
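As a rough, hypothetical illustration of the kind of usage-pattern screening the Guidelines describe, the Python sketch below flags classrooms whose logged usage meets an assumed implementation threshold. The usage table, column names, and the 60-minutes-per-week cutoff are invented for this example and are not drawn from the Guidelines themselves.

```python
import pandas as pd

# Hypothetical weekly usage logs of the kind an edtech application might export.
usage = pd.DataFrame({
    "classroom_id": [101, 101, 102, 102, 103, 103],
    "week":         [1,   2,   1,   2,   1,   2],
    "minutes_used": [95, 110,  20,  15,  60,  70],
})

# Assumed implementation threshold: an average of at least 60 minutes per week.
MIN_WEEKLY_MINUTES = 60

avg_use = usage.groupby("classroom_id")["minutes_used"].mean()
adequate = avg_use[avg_use >= MIN_WEEKLY_MINUTES].index.tolist()

print("Classrooms meeting the implementation threshold:", adequate)
```

In a real study, classrooms screened in this way could define the treatment group for a quick-turnaround comparison study, with subgroup analyses layered on top.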

“These guidelines may spark a rebellion against the research business as usual, which doesn’t help educators know whether an edtech product will work for their specific populations. They also provide a basis for schools and developers to partner to make products better,” said Mitch Weisburgh, Managing Partner of Academic Business Advisors, LLC and President of ETIN, who has moderated panels and webinars on edtech research.

Empirical Education, in partnership with a variety of organizations, is conducting webinars to help explain the updates to the Guidelines, as well as to discuss the importance of these best practices in the age of ESSA. The updated Guidelines are available here: https://www.empiricaleducation.com/research-guidelines/.

2018-04-25

Where's Denis?

It’s been a busy month for Empirical CEO Denis Newman, who has been conspicuously absent from our Palo Alto office as he jet-sets around the country to spread the good word of rigorous evidence in education research.

His first stop was Washington, DC and the conference of the Society for Research on Educational Effectiveness (SREE). This was an opportunity to get together with collaborators, as well as plot proposal writing, blog postings, webinars, and revisions to our research guidelines for edtech impact studies. Andrew Jaciw, Empirical’s Chief Scientist, kept up the company’s methodological reputation with a paper presentation on “Leveraging Fidelity Data to Make Sense of Impact Results.” For Denis, a highlight was dinner with Peg Griffin, a longtime friend and his co-author on The Construction Zone. Then it was on to Austin, TX, for a very different kind of meeting—more of a festival, really.

At this year’s SXSWEDU, Denis was one of three speakers on the panel, “Can Evidence Even Keep Up with Edtech?” The problem presented by the panel was that edtech, as a rapidly moving field, seems to be outpacing the rate of research that stakeholders may want to use to evaluate these products. How, then, could education stakeholders make informed decisions about whether to use edtech products?

According to Denis, the most important thing is for a district to have enough information to know whether a given edtech product may or may not work for that district’s unique population and context. Therefore, researchers may need to adapt their methods both to differentiate a product’s impact across subgroups and to meet the faster timelines of edtech product development. Empirical’s own solution to this quandary, Evidence as a Service™, offers quick-turnaround research reports that can examine impact and outcomes for specific student subgroups, with methodology that is flexible but rigorous enough to meet ESSA standards.

Denis praised the panel, stating, “In the festival’s spirit of invention, our moderator, Mitch Weisburgh, masterfully engaged the audience from the beginning to pose the questions for the panel. Great questions, too. I got to cover all of my prepared talking points!”

You can read more coverage of our SXSWEDU panel on EdSurge.

After the panel, a string of meetings and parties kept the energy high and continued to show the growing interest in efficacy. The ISTE meetup was particularly important following this theme. The concern raised by the ISTE leadership and its members—who are school-based technology users—was that traditional research doesn’t tell practitioners whether a product is likely to work in their school, given its resources and student demographics. Users are faced with hundreds of choices in any product category and have little information for narrowing down the choice to a few that are worth piloting.

Following SXSWEDU, it was back to DC for the Consortium for School Networking (CoSN) conference. Denis participated in the annual Feedback Forum hosted by CoSN and the Software & Information Industry Association (SIIA), where SIIA—representing edtech developers—sought input from the CIOs and other school district leaders. This year, SIIA wanted feedback that would help the Empirical team improve the edtech research guidelines, which are sponsored by SIIA’s Education Technology Industry Network (ETIN). Linda Winter moderated and ran the session like a focus group, asking questions such as:

  • What data do you need from products to gauge engagement?
  • How can the relationship between engagement and achievement indicate that a product is working?
  • What is the role of pilots in measuring success?
  • And before a pilot decision is made, what do CoSN members need to know about edtech products to decide if they are likely to work?

The CoSN members were brutally honest, pointing out that as the leaders responsible for the infrastructure, they were concerned with implementability, bandwidth requirements, and standards such as single sign-on. Whether the software improved learning was secondary—if teachers couldn’t get the program to work, it hardly mattered how effective it may be in other districts.

Now, Denis is preparing for the rest of the spring conference season. Next stop will be New York City and the American Education Research Association (AERA) conference, which attracts over 20,000 researchers annually. The Empirical team will be presenting four studies, as well as co-hosting a cocktail reception with AERA’s school research division. Then, it’s back on the plane for ASU-GSV in San Diego.

Find more information about Evidence as a Service and the edtech research guidelines.

2018-03-26

Presenting at AERA 2018

We will again be presenting at the annual meeting of the American Educational Research Association (AERA). Join the Empirical Education team in New York City from April 13-17, 2018.

Research presentations will include the following.

For Quasi-Experiments on EdTech Products, What Counts as Being Treated?
Authors: Val Lazarev, Denis Newman, & Malvika Bhagwat
In Roundtable Session: Examining the Impact of Accountability Systems on Both Teachers and Students
Friday, April 13 - 2:15 to 3:45pm
New York Marriott Marquis, Fifth Floor, Westside Ballroom Salon 3

Abstract: Edtech products are becoming increasingly prevalent in K-12 schools, and schools’ need to evaluate the value of these products for their students calls for a program of rigorous research, at least at level 2 of the ESSA standards for evidence. This paper draws on our experience conducting a large-scale quasi-experiment in California schools. The product’s wide-ranging intensity of implementation presented a challenge in identifying schools that had used it enough to be considered part of the treatment group.


Planning Impact Evaluations Over the Long Term: The Art of Anticipating and Adapting
Authors: Andrew P Jaciw & Thanh Thi Nguyen
In Session: The Challenges and Successes of Conducting Large-Scale Educational Research
Saturday, April 14 - 2:15 to 3:45pm
Sheraton New York Times Square, Second Floor, Central Park East Room

Abstract: Perspective. It is good practice to identify core research questions and important elements of study designs a priori, to prevent post-hoc “fishing” exercises and reduce the risk of drawing false-positive conclusions [16,19]. However, programs in education, and evaluations of them, evolve [6], making it difficult to follow a charted course. For example, in the lifetime of a program and its evaluation, new curricular content or evidence standards for evaluations may be introduced and thus drive changes in program implementation and evaluation.

Objectives. This work presents three cases from program impact evaluations conducted through the Department of Education. In each case, unanticipated results or changes in study context had significant consequences for program recipients, developers, and evaluators. We discuss responses, either enacted or envisioned, for addressing these challenges. The work is intended to serve as a practical guide for researchers and evaluators who encounter similar issues.

Methods/Data Sources/Results. The first case concerns the problem of outcome measures keeping pace with evolving content standards. For example, in assessing impacts of science programs, program developers and evaluators are challenged to find assessments that align with the Next Generation Science Standards (NGSS). Existing NGSS-aligned assessments are largely untested or in development, leaving the evaluator to find, adapt, or develop instruments with strong reliability and with construct and face validity – ones that will be accepted by independent review and not considered over-aligned to the interventions. We describe a hands-on approach to working with a state testing agency to develop forms to assess impacts on science generally, and on constructs more specifically aligned to the program evaluated.

The second case concerns the problem of reprioritizing research questions mid-study. As noted above, researchers often identify primary (confirmatory) research questions at the outset of a study. Such questions are held to high evidence standards and are differentiated from exploratory questions, which often originate after examining the data and must be replicated to be considered reliable [16]. However, exploratory analyses sometimes produce unanticipated results that may be highly consequential. The evaluator must grapple with the dilemma of whether to re-prioritize the result or attempt to proceed with replication. We discuss this issue with reference to an RCT in which the dilemma arose.

The third case addresses the problem of designing and implementing a study that meets one set of evidence standards when the results will be reviewed according to a later version of those standards. A practical question is what to do when this happens and the study consequently falls under a lower tier of the new evidence standard. With reference to an actual case, we consider several response options, including assessing the consequence of this reclassification for future funding of the program and augmenting the research design to satisfy the new standards of evidence.

Significance. Responding to demands of changing contexts, programs in the social sciences are moving targets. They demand a flexible but well-reasoned and justified approach to evaluation. This session provides practical examples and is intended to promote discussion for generating solutions to challenges of this kind.


Indicators of Successful Teacher Recruitment and Retention in Oklahoma Rural Schools
Authors: Val Lazarev, Megan Toby, Jenna Lynn Zacamy, Denis Newman, & Li Lin
In Session: Teacher Effectiveness, Retention, and Coaching
Saturday, April 14 - 4:05 to 6:05pm
New York Marriott Marquis, Fifth Floor, Booth

Abstract: The purpose of this study was to identify factors associated with successful recruitment and retention of teachers in Oklahoma rural school districts, in order to highlight potential strategies to address Oklahoma’s teaching shortage. The study was designed to identify teacher-level, district-level, and community characteristics that predict which teachers are most likely to be successfully recruited and retained. A key finding is that for teachers in rural schools, total compensation and increased responsibilities in job assignment are positively associated with successful recruitment and retention. Evidence provided by this study can be used to inform incentive schemes to help retain certain groups of teachers and increase retention rates overall.


Teacher Evaluation Rubric Properties and Associations with School Characteristics: Evidence from the Texas Evaluation System
Authors: Val Lazarev, Thanh Thi Nguyen, Denis Newman, Jenna Lynn Zacamy, & Li Lin
In Session: Teacher Evaluation Under the Microscope
Tuesday, April 17 - 12:25 to 1:55pm
New York Marriott Marquis, Seventh Floor, Astor Ballroom

Abstract: A 2009 seminal report, The Widget Effect, alerted the nation to the tendency of traditional teacher evaluation systems to treat teachers like widgets, undifferentiated in their level of effectiveness. Since then, a growing body of research, coupled with new federal initiatives, has catalyzed the reform of such systems. In 2014-15, Texas piloted its reformed evaluation system, collecting classroom observation rubric ratings from over 8,000 teachers across 51 school districts. This study analyzed that large dataset and found that 26.5 percent of teachers were rated below proficient, compared with 2 percent under previous measures. The study also found a promising indication of low bias in the rubric ratings stemming from school characteristics, which were only minimally associated with observation ratings.

We look forward to seeing you at our sessions to discuss our research. We’re also co-hosting a cocktail reception with Division H! If you’d like an invite, let us know.

2018-03-06

Jefferson Education Accelerator Contracts with Empirical for Evidence as a Service™

Jefferson Education Accelerator (JEA) has contracted with Empirical Education Inc. for research services that will provide evidence of the impact of education technology products developed by their portfolio companies. JEA’s mission is to support and evaluate promising edtech solutions in order to help educators make more informed decisions about the products they invest in. The study is designed to meet the level 2, or “moderate,” evidence standard defined by the Every Student Succeeds Act. Empirical will provide a Student Impact Report under its Evidence as a Service offering, which combines student-level product usage data and a school district’s administrative data to conduct a comparison group study. Denis Newman, Empirical’s CEO, stated, “This is a perfect application of our Evidence as a Service product, which provides fast answers to questions about which kids will benefit the most from any particular learning program.” Todd Bloom, JEA’s Chief Academic Officer and Research Associate Professor at UVA’s Curry School of Education, commented: “Empirical Education is a highly respected research firm and offers the type of aggressive timeline that is sorely needed in the fast-paced edtech market.” A report on impact in the 2017-2018 school year is expected to be completed in July.

2018-02-20

IES Published Our REL Southwest Study on Trends in Teacher Mobility

The U.S. Department of Education’s Institute of Education Sciences published a report of a study we conducted for REL Southwest! We are thankful for the support and engagement we received from the Educator Effectiveness Research Alliance throughout the study.

The study was published in December 2017 and provides updated information regarding teacher mobility for Texas public schools during the 2011-12 through 2015-16 school years. Teacher mobility is defined as teachers changing schools or leaving the public school system.

In the report, descriptive information on mobility rates is presented at the regional and state levels for each school year. Mobility rates are disaggregated further into destination proportions to describe the proportion of teacher mobility due to within-district movement, between-district movement, and leaving Texas public schools. This study leverages data collected by the Texas Education Agency during the pilot of the Texas Teacher Evaluation and Support System (T-TESS) in 57 school districts in 2014-15. Analyses examine how components of the T-TESS observation rubric are related to school-level teacher mobility rates.
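As a rough illustration of how a mobility rate and its destination proportions can be computed, here is a minimal Python sketch over a small, invented set of teacher records; the column names and status codes are hypothetical and do not reflect the Texas Education Agency’s actual data layout.

```python
import pandas as pd

# Hypothetical teacher-by-year records; status codes are invented for this example.
records = pd.DataFrame({
    "teacher_id": range(1, 11),
    "status": [
        "stayed", "stayed", "moved_within_district", "stayed",
        "moved_between_districts", "stayed", "left_public_schools",
        "stayed", "stayed", "moved_within_district",
    ],
})

# Mobility rate: share of teachers who changed schools or left the system.
mobile = records[records["status"] != "stayed"]
mobility_rate = len(mobile) / len(records)

# Destination proportions: share of all mobility due to each type of move.
destination_proportions = mobile["status"].value_counts(normalize=True)

print(f"Mobility rate: {mobility_rate:.1%}")
print(destination_proportions)
```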

During the 2011-12 school year, 18.7% of Texas teachers moved schools within a district, moved between districts, or left the Texas public school system. By 2015-16, this mobility rate had increased to 22%. Moving between districts was the primary driver of the increase in mobility rates. Results indicate significant links between mobility and teacher, student, and school demographic characteristics. Teachers with special education certifications left Texas public schools at nearly twice the rate of teachers with other teaching certifications. School-level mobility rates showed significant positive correlations with the proportion of special education, economically disadvantaged, low-performing, and minority students. School-level mobility rates were negatively correlated with the proportion of English learner students. Schools with higher overall observation ratings on the T-TESS rubric tended to have lower mobility rates.

Findings from this study will provide state and district policymakers in Texas with updated information about trends and correlates of mobility in the teaching workforce, and offer a systematic baseline for monitoring and planning for future changes. Informed by these findings, policymakers can formulate a more strategic and targeted approach for recruiting and retaining teachers. For instance, instead of using generic approaches to enhance the overall supply of teachers or improve recruitment, more targeted efforts to attract and retain teachers in specific subject areas (for example, special education), in certain stages of their career (for example, novice teachers), and in certain geographic areas are likely to be more productive. Moreover, this analysis may enrich the existing knowledge base about schools’ teacher retention and mobility in relation to the quality of their teaching force, or may inform policy discussions about the importance of a stable teaching force for teaching effectiveness.

2018-02-01

How Efficacy Studies Can Help Decision-makers Decide if a Product is Likely to Work in Their Schools

We and our colleagues have been working on translating the results of rigorous studies of the impact of educational products, programs, and policies for people in school districts who are making the decisions whether to purchase or even just try out—pilot—the product. We are influenced by Stanford University methodologist Lee Cronbach, especially his seminal book (1982) and article (1975), where he concludes, “When we give proper weight to local conditions, any generalization is a working hypothesis, not a conclusion…positive results obtained with a new procedure for early education in one community warrant another community trying it. But instead of trusting that those results generalize, the next community needs its own local evaluation” (p. 125). In other words, when interpreting the causal effect of a program, we consider even the best-designed experiment to be like a case study, as much about the local and moderating role of context as about the treatment.

Following the focus on context, we can consider characteristics of the people and of the institution where the experiment was conducted to be co-causes of the result that deserve full attention—even though, technically, only the treatment, which was randomly assigned, was controlled. Here we argue that any generalization from a rigorous study, where the question is whether the product is likely to be worth trying in a new district, must consider the full context of the study.

Technically, in the language of evaluation research, these differences in who or where the product or “treatment” works are called “interaction effects” between the treatment and the characteristic of interest (e.g., subgroups of students by demographic category or achievement level, teachers with different skills, or bandwidth available in the building). The characteristic of interest can be called a “moderator”, since it changes, or moderates, the impact of the treatment. An interaction reveals if there is differential impact and whether a group with a particular characteristic is advantaged, disadvantaged, or unaffected by the product.
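To make the idea concrete, here is a minimal sketch of estimating a treatment-by-moderator interaction with simulated data; the moderator (English learner status), the effect sizes, and the sample are all invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),        # 1 = used the product
    "english_learner": rng.integers(0, 2, n),  # hypothetical moderator
})
# Simulated outcome: the product helps overall, but less for English learners.
df["score"] = (
    50
    + 5.0 * df["treatment"]
    - 3.0 * df["treatment"] * df["english_learner"]
    + rng.normal(0, 10, n)
)

# The coefficient on treatment:english_learner estimates the differential impact.
model = smf.ols("score ~ treatment * english_learner", data=df).fit()
print(model.summary().tables[1])
```

In this made-up data, the negative interaction coefficient recovers the smaller benefit for English learners, which is exactly the kind of differential impact that an average (marginal) estimate hides.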

The rules set out by the Department of Education’s What Works Clearinghouse (WWC) focus on the validity of the experimental conclusion: Did the program work on average compared to a control group? Whether it works better for poor kids than for middle-class kids, works better for uncertified teachers than for veteran teachers, or increases or closes the gap between English learners and English-proficient students is not part of the information provided in their reviews. But these differences are exactly what buyers need in order to understand whether the product is a good candidate for a population like theirs. If a program works substantially better for English-proficient students than for English learners, and the purchasing school serves largely the latter group, it is important that the school administrator know the context for the research and the result.

How accurately an experimental finding generalizes depends on whether the impact is moderated by such conditions. This is recognized in recent methods of generalization (Tipton, 2013) that essentially apply non-experimental adjustments to experimental results to make them more accurate and more relevant to specific local contexts.

Work by Jaciw (2016a, 2016b) takes this one step further.

First, he confirms the result that if the impact of the program is moderated, and if moderators are distributed differently between sites, then an experimental result from one site will yield a biased inference for another site. This would be the case, for example, if the impact of a program depends on individual socioeconomic status, and there is a difference between the study and inference sites in the proportion of individuals with low socioeconomic status. Conditions for this “external validity bias” are well understood, but the consequences are addressed much less often than the usual selection bias. Experiments can yield accurate results about the efficacy of a program for the sample studied, but that average may not apply either to a subgroup within the sample or to a population outside the study.
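A simple numerical illustration of this kind of external validity bias, using invented impact sizes and subgroup proportions:

```python
# Hypothetical numbers: the impact differs by subgroup, and the subgroup mix
# differs between the study site and the inference site.
impact_low_ses, impact_high_ses = 8.0, 2.0  # assumed subgroup impacts (test-score points)
p_low_study, p_low_inference = 0.3, 0.7     # share of low-SES students at each site

avg_impact_study = p_low_study * impact_low_ses + (1 - p_low_study) * impact_high_ses
avg_impact_inference = p_low_inference * impact_low_ses + (1 - p_low_inference) * impact_high_ses

print(f"Average impact at the study site:     {avg_impact_study:.1f}")
print(f"Average impact at the inference site: {avg_impact_inference:.1f}")
print(f"Bias from naive transfer:             {avg_impact_study - avg_impact_inference:.1f}")
```

With these made-up numbers, naively transferring the study-site average misstates the expected impact at the inference site by 2.4 points; the size and even the sign of the bias depend entirely on how the moderator is distributed at each site.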

Second, he uses results from a multisite trial to show empirically that there is potential for significant bias when inferring experimental results from one subset of sites to other inference sites within the study; however, moderators can account for much of the variation in impact across sites. Average impact findings from experiments provide a summary of whether a program works but leave the consumer guessing about the boundary conditions for that effect—the limits beyond which the average effect ceases to apply. Cronbach was highly aware of this, titling a chapter in his 1982 book “The Limited Reach of Internal Validity”. Using terms like “unbiased” to describe impact findings from experiments is correct in a technical sense (i.e., the point estimate, on hypothetical repeated sampling, is centered on the true average effect for the sample studied), but it can impart an incorrect sense of the external validity of the result: that it applies beyond the instance of the study.

Implications of the work cited are, first, that it is possible to unpack marginal impact estimates through subgroup and moderator analyses to arrive at more accurate inferences for individuals. Second, that we should do so—why obscure differences by paying attention only to the grand mean impact estimate for the sample? And third, that we should be planful in deciding which subgroups to assess impacts for in the context of individual experiments.

Local decision-makers’ primary concern should be whether a program will work with their specific population; they should ask for causal evidence that considers local conditions through the moderating role of student, teacher, and school attributes. Looking at finer differences in impact may elicit criticism that it introduces another type of uncertainty—specifically from random sampling error—which may be minimal with gross impacts and large samples but influential when looking at differences in impact with smaller subgroup samples. This is a fair criticism, but differential effects may be less susceptible to random perturbations (low power) than assumed, especially if subgroups are identified at the individual level in the context of cluster randomized trials (e.g., individual student-level SES, as opposed to school average SES) (Bloom, 2005; Jaciw, Lin, & Ma, 2016).

References:
Bloom, H. S. (2005). Randomizing groups to evaluate place-based programs. In H. S. Bloom (Ed.), Learning more from social experiments. New York: Russell Sage Foundation.

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 116-127.

Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco, CA: Jossey-Bass.

Jaciw, A. P. (2016a). Applications of a within-study comparison approach for evaluating bias in generalized causal inferences from comparison group studies. Evaluation Review, 40(3), 241-276. Retrieved from https://journals.sagepub.com/doi/abs/10.1177/0193841X16664457

Jaciw, A. P. (2016b). Assessing the accuracy of generalized inferences from comparison group studies using a within-study comparison approach: The methodology. Evaluation Review, 40(3), 199-240. Retrieved from https://journals.sagepub.com/doi/abs/10.1177/0193841x16664456

Jaciw, A., Lin, L., & Ma, B. (2016). An empirical study of design parameters for assessing differential impacts for students in group randomized trials. Evaluation Review. Retrieved from https://journals.sagepub.com/doi/10.1177/0193841X16659600

Tipton, E. (2013). Improving generalizations from experiments using propensity score subclassification: Assumptions, properties, and contexts. Journal of Educational and Behavioral Statistics, 38, 239-266.

2018-01-16