
EIR 2023 Proposals Have Been Reviewed and Awards Granted

While children everywhere are excited about winter break and presents in their stockings, some of us in the education space look forward to December for other reasons. That’s right, the Department of Education just announced the EIR grant winners from the summer 2023 proposal submissions. We want to congratulate all our friends who were among the winners.

One of those winning teams was made up of The MLK Sr Community Resources Center, Connect with Kids Network, Morehouse and Spelman Colleges, New York City Public Schools, The Urban Assembly, Atlanta Public Schools, and Empirical Education. We will evaluate the Sankofa Chronicles: SEL Curriculum from American Diasporas with the early-phase EIR development grant funding.

The word sankofa comes from the Twi language spoken by the Akan people of Ghana. The word is often associated with an Akan proverb, “Se wo were fi na wosankofa a yenkyi.” Translated into English, this proverb reminds us, “It is not wrong to go back for that which you have forgotten.” Guided by the philosophy of sankofa, this five-year grant will support the creation of a culturally responsive, multimedia, social-emotional learning (SEL) curriculum for high school students.

Participating students will be introduced to SEL concepts through short films that tell emotional and compelling stories of diverse diaspora within students’ local communities. These stories will be paired with an SEL curriculum that seeks to foster not only SEL skills (e.g., self-awareness, responsible decision making) but also empathy, cultural appreciation, and critical thinking.

Our part in the project will begin with a randomized control trial (RCT) of the curriculum in the 2025–2026 school year and culminate in an impact report following the RCT. We will continue to support the program through the remainder of the five-year grant with an implementation study and a focus on scaling up the program.

Check back for updates on this exciting project!

2023-12-07

Multi-Arm Parallel Group Design Explained

What do unconventional arm wrestling and randomized trials have in common?

Each can have many arms.

What is a 3 arm RCT?

Multi-arm trials (or multi-arm RCTs) are randomized experiments in which individuals are randomly assigned to one of several arms: usually two or more treatment variants plus a control (in a 3-arm RCT, two treatment arms and a control arm).

They can be referred to in a number of ways.

  • multi-arm trials
  • multi-armed trials
  • multiarm trials
  • multiarmed trials
  • multi arm RCTs
  • 3-arm, 4-arm, 5-arm, etc. RCTs
  • multi-factorial design (a type of multi-arm trial)

[Figure: a 2-arm trial, with one arm labeled treatment and one labeled control]

[Figure: a 3-arm trial, with arms labeled treatment 1, treatment 2, and control]

When I think of a multiarmed wrestling match, I imagine a mess. Can’t you say the same about multiarmed trials?

Quite the contrary. They can become messy, but not if they’re done with forethought and consultation with stakeholders.

I had the great opportunity to be the guest editor of a special issue of Evaluation Review on the topic of Multiarmed Trials, where experts shared their knowledge.

Special Issue: Multi-armed Randomized Control Trials in Evaluation and Policy Analysis

We were fortunate to receive five valuable contributions. I hope the issue will serve as a go-to reference for evaluators who want to explore options beyond the standard two-armed (treatment-control) arrangement.

The first three articles are by pioneers of the method.

  • Larry L. Orr and Daniel Gubits: Some Lessons From 50 Years of Multi-armed Public Policy Experiments
  • Joseph Newhouse: The Design of the RAND Health Insurance Experiment: A Retrospective
  • Judith M. Gueron and Gayle Hamilton: Using Multi-Armed Designs to Test Operating Welfare-to-Work Programs

They cover a wealth of ideas essential for the successful conduct of multi-armed trials.

  • Motivations for study design and the choice of treatment variants, and their relationship to real-world policy interests
  • The importance of reflecting the complex ecology and political reality of the study context to get stakeholder buy-in and participation
  • The importance of patience and deliberation in selecting sites and samples
  • The allocation of participants to treatment arms with a view to statistical power

Should I read this special issue before starting my own multi-armed trial?

Absolutely! It’s easy to go wrong with this design, but done right, it can yield more information than you’d get with a 2-armed trial. Sample allocation depends on the question you want to ask. In a 3-armed trial, you have to ask yourself: Do you want 33.3% of the sample in each of the three conditions (two treatment conditions and control), or 25% in each of the treatment arms and 50% in control? The answer depends on the contrast and research question, so the design requires you to think more deeply about which question you want to answer.
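To make that allocation tradeoff concrete, here is a minimal sketch, not from any study discussed here: the total sample of 600 and unit outcome variance are assumed purely for illustration. It compares the standard error of two contrasts under the two allocations mentioned above.

```python
import math

def contrast_se(n_a, n_b, sigma=1.0):
    """Standard error of the difference in means between two arms."""
    return math.sqrt(sigma**2 / n_a + sigma**2 / n_b)

N = 600  # hypothetical total sample size

# Equal allocation: one third of the sample in each arm
equal = {"T1": N // 3, "T2": N // 3, "C": N // 3}
# Control-heavy allocation: 25% / 25% / 50%
heavy = {"T1": N // 4, "T2": N // 4, "C": N // 2}

for name, alloc in [("equal thirds", equal), ("control-heavy", heavy)]:
    se_tc = contrast_se(alloc["T1"], alloc["C"])   # treatment vs. control
    se_tt = contrast_se(alloc["T1"], alloc["T2"])  # treatment 1 vs. treatment 2
    print(f"{name:13s}  SE(T1 vs C) = {se_tc:.4f}  SE(T1 vs T2) = {se_tt:.4f}")
```

Under these assumptions, both allocations give the same precision for each treatment-versus-control contrast, but the control-heavy split inflates the standard error of the treatment-versus-treatment contrast, which is why the choice hinges on which comparison matters most to you.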

This sounds risky. Why would I ever want to run a multi-armed trial?

In short, running a multi-armed trial allows a head-to-head test of alternatives, to determine which provides a larger or more immediate return on investment. It also sets up nicely the question of whether certain alternatives work better with certain beneficiaries.

The next two articles make this clear. One study randomized treatment sites to one of several enhancements to assess the added value of each. The other used a nifty multifactorial design to simultaneously test several dimensions of a treatment.

  • Laura Peck, Hilary Bruck, and Nicole Constance: Insights From the Health Profession Opportunity Grant Program’s Three-Armed, Multi-Site Experiment for Policy Learning and Evaluation Practice
  • Randall Juras, Amy Gorman, and Jacob Alex Klerman: Using Behavioral Insights to Market a Workplace Safety Program: Evidence From a Multi-Armed Experiment

More About 3 Arm RCTs

The special issue of Evaluation Review helped motivate the design of a multiarmed trial conducted through the Regional Educational Laboratory (REL) Southwest in partnership with the Arkansas Department of Education (ADE). We co-authored this study through our role on REL Southwest.

In this study with ADE, we randomly assigned 700 Arkansas public elementary schools to one of eight conditions determining how communication was sent to their households about the Reading Initiative for Student Excellence (R.I.S.E.) state literacy website.

The treatments varied on these dimensions.

  1. Mode of communication (email only or email and text message)
  2. The presentation of information (no graphic or with a graphic)
  3. Type of sender (generic sender or known sender)
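The eight conditions are simply the cross of these three two-level dimensions. A quick sketch (the level labels are paraphrased from the list above):

```python
from itertools import product

# Paraphrased levels of the three dimensions varied in the study
dimensions = {
    "mode": ["email only", "email + text"],
    "presentation": ["no graphic", "with graphic"],
    "sender": ["generic sender", "known sender"],
}

# Crossing the three two-level dimensions yields 2 x 2 x 2 = 8 conditions
conditions = list(product(*dimensions.values()))

for i, condition in enumerate(conditions, start=1):
    print(f"condition {i}: " + ", ".join(condition))
```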

In January 2022, households with children in these schools were sent three rounds of communications with information about literacy and a link to the R.I.S.E. website. The study examined the impact of these communications on whether parents and guardians clicked the link to visit the website (click rate). We also conducted an exploratory analysis of differences in how long they spent on the website (time on page).

How do you tell the effects apart?

It all falls out nicely if you imagine the conditions as branches, or cells in a cube (both are pictured below).

In the branching representation, there are eight possible pathways from left to right representing the eight conditions.

In the cube representation, the eight conditions correspond to the eight distinct cells.

In the study, we evaluated the impact of each dimension across levels of the other dimensions: for example, whether click rate increases if email is accompanied with text, compared to just email, irrespective of who the sender is or whether the infographic is used.

We also tested the impact on click rates of the “deluxe” version (email + text, with known sender and graphic, which is the green arrow path in the branch diagram [or the red dot cell in the cube diagram]) versus the “plain” version (email only, generic sender, and no graphic, which is the red arrow path in the branch diagram [or the green dot cell in the cube diagram]).
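The difference between these two kinds of comparisons can be sketched numerically. The click rates below are invented for illustration (they assume simple additive effects, which a real study would not guarantee); the point is that a marginal effect averages over the other dimensions, while the deluxe-versus-plain contrast uses only two of the eight cells.

```python
from itertools import product

# Invented click rates (%) for the eight cells, keyed by
# (mode, presentation, sender); additive effects for illustration only
rates = {}
for mode, graphic, sender in product(["email", "email + text"],
                                     ["no graphic", "graphic"],
                                     ["generic", "known"]):
    rate = 4.0                                     # baseline cell
    rate += 2.0 if mode == "email + text" else 0.0
    rate += 0.5 if graphic == "graphic" else 0.0
    rate += 1.0 if sender == "known" else 0.0
    rates[(mode, graphic, sender)] = rate

# Marginal effect of adding text: average the four "+ text" cells and
# subtract the average of the four "email only" cells
with_text = [r for (m, _, _), r in rates.items() if m == "email + text"]
email_only = [r for (m, _, _), r in rates.items() if m == "email"]
marginal_text = sum(with_text) / 4 - sum(email_only) / 4

# "Deluxe" vs. "plain" compares just two of the eight cells
deluxe_vs_plain = (rates[("email + text", "graphic", "known")]
                   - rates[("email", "no graphic", "generic")])

print(marginal_text)     # 2.0
print(deluxe_vs_plain)   # 3.5
```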

[Figure: the arms of the RCT and the intervention each arm received]

[Figure: a cube illustrating the eight conditions of the multi-armed trial]

That’s all nice and dandy, but have you ever heard of the KISS principle: Keep It Simple, Sweetie? You are taking on some risk in the design to get more information. Is the tradeoff worth it? I’d rather run a series of two-armed trials. I’m giving you one last chance to convince me.

Two-armed trials will always be the staple approach. But consider the following.

  • Knowing what works among educational interventions is a starting point, but it does not go far enough.
  • The last 5–10 years have witnessed a prioritization of questions, and of methods for addressing them, about what works for whom and under which conditions.
  • However, even this may not go far enough to get to the question at the heart of what people on the ground want to know. We agree with Tony Bryk that practitioners typically want to answer the following question.

What will it take to make it (the program) work for me, for my students, and in my circumstances?

There are plenty of qualitative, quantitative, and mixed methods to address this question. There also are many evaluation frameworks to support systematic inquiry to inform various stakeholders.

We think multi-armed trials help to tease out the complexity in the interactions among treatments and conditions and so help address the more refined question Bryk asks above.

Consider our example above. One question we explored was how response rates differed between rural and urban schools. One might speculate the following.

  • Rural schools are smaller, allowing principals to get to know parents more personally
  • Rural and non-rural households may have different kinds of usage and connectivity with email versus text and with MMS versus SMS

If these moderating effects matter, then the study, as conducted, may help with customizing communications, provide a rationale for improving connectivity, and altogether optimize the costs of communication.

Multi-armed trials, done well, increase the yield of actionable information to support both researcher and on-the-ground stakeholder interests!

Well, thank you for your time. I feel well-armed with information. I’ll keep thinking about this and wrestle with the pros and cons.

2023-05-31

New Research Project Evaluating the Impact of EVERFI’s WORD Force Program on Early Literacy Skills

Empirical Education and EVERFI from Blackbaud are excited to announce a new partnership. Researchers at Empirical will evaluate the impact and implementation of the WORD Force program, a literacy adventure for K-2 students.

The WORD Force program is designed to be engaging and interactive, using games and real-world scenarios to teach students key reading and literacy skills and how to use them in context. It also provides students with personalized feedback and support, allowing them to work at their own pace and track their progress.

We will conduct the experiment within up to four school districts—working with elementary school teachers. This is our second project with EVERFI, and it builds on our 20 years of extensive experience conducting large-scale, rigorous randomized controlled trial (RCT) studies. (Read EVERFI’s press release about our first project with them.)

In our current work together, we plan to answer these five research questions.

  1. What is the impact of WORD Force on early literacy achievement, including spoken language, phonological awareness, phonics, word building, vocabulary, reading fluency, and reading comprehension, for students in grades K–2?
  2. What is the impact of WORD Force on improving early literacy achievement for students in grades K–2 from low- to middle-income households, for English Language Learner (ELL) students, by grade, and depending on teacher background (e.g., years of teaching experience or responses to a baseline survey about orientation to literacy instruction)?
  3. What is the impact of WORD Force on improving early literacy achievement for students in grades K–2 who struggle with reading (i.e., those in greatest need of reading intervention), as determined through a baseline assessment of literacy skills?
  4. What are the realized levels of implementation/usage by teachers and students, and are they associated with achievement outcomes?
  5. Do impacts on intermediate instructional/implementation outcomes mediate impacts on achievement?

Using a matched-pairs design, we will pair teachers who are similar in terms of years of experience and other characteristics. Then, from each pair, we will randomize one teacher to the WORD Force group and the other to the business-as-usual (BAU) control group. This RCT design will allow us to evaluate the causal impact of WORD Force on student achievement outcomes as contrasted with BAU. EVERFI will offer WORD Force to the teachers in BAU as soon as the experiment is over. EVERFI will be able to use these findings to identify implementation factors that influence student outcomes, such as the classroom literacy environment, literacy block arrangements, and teachers’ characteristics. This study will also contribute to the growing corpus of literature around the efficacy of educational technology usage in early elementary classrooms.
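The pair-then-randomize step described above can be sketched in a few lines. This is a simplified illustration, not the study’s actual procedure: the teacher IDs are hypothetical, and pairing is done on a single matching variable (years of experience), whereas the study matches on several characteristics.

```python
import random

# Hypothetical teachers: (teacher_id, years_of_experience)
teachers = [("t01", 2), ("t02", 3), ("t03", 5), ("t04", 6),
            ("t05", 9), ("t06", 11), ("t07", 14), ("t08", 15)]

rng = random.Random(20230413)  # fixed seed makes the assignment reproducible

# Sort on the matching variable, then pair adjacent teachers
ordered = sorted(teachers, key=lambda t: t[1])
pairs = [ordered[i:i + 2] for i in range(0, len(ordered), 2)]

# Within each pair, a coin flip decides who gets the program
assignment = {}
for a, b in pairs:
    treated, control = rng.sample([a, b], 2)
    assignment[treated[0]] = "WORD Force"
    assignment[control[0]] = "BAU control"

for teacher_id in sorted(assignment):
    print(teacher_id, "->", assignment[teacher_id])
```

Because each pair contributes exactly one teacher to each condition, the two groups are balanced on the matching variable by construction.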

For more information on our evaluation services, please visit our research services page and/or contact Robin Means.

All research Empirical Education has conducted for EVERFI can be found on our EVERFI webpage.

2023-04-13

Empirical Education Wraps Up Two Major i3 Research Studies

Empirical Education is excited to share that we recently completed two Investing in Innovation (i3, now EIR) evaluations: one for the Making Sense of SCIENCE program and one for the Collaboration and Reflection to Enhance Atlanta Teacher Effectiveness (CREATE) program. We thank the staff on both programs for their fantastic partnership. We also acknowledge Anne Wolf, our i3 technical assistance liaison from Abt Associates, as well as our Technical Working Group members on the Making Sense of SCIENCE project (Anne Chamberlain, Angela DeBarger, Heather Hill, Ellen Kisker, James Pellegrino, Rich Shavelson, Guillermo Solano-Flores, Steve Schneider, Jessaca Spybrook, and Fatih Unlu) for their invaluable contributions. Conducting these two large-scale, complex, multi-year evaluations over the last five years has not only given us the opportunity to learn much about both programs, but has also challenged our thinking—allowing us to grow as evaluators and researchers. We now reflect on some of the key lessons we learned, lessons that we hope will contribute to the field’s efforts in moving large-scale evaluations forward.

Background on Both Programs and Study Summaries

Making Sense of SCIENCE (developed by WestEd) is a teacher professional learning model aimed at increasing student achievement through improving instruction and supporting districts, schools, and teachers in their implementation of the Next Generation Science Standards (NGSS). The key components of the model include building leadership capacity and providing teacher professional learning. The program’s theory of action is based on the premise that professional learning that is situated in an environment of collaborative inquiry and supported by school and district leadership produces a cascade of effects on teachers’ content and pedagogical content knowledge, teachers’ attitudes and beliefs, the school climate, and students’ opportunities to learn. These effects, in turn, yield improvements in student achievement and other non-academic outcomes (e.g., enjoyment of science, self-efficacy, and agency in science learning). NGSS had been introduced just two years before our study, which ran from 2015 through 2018. The infancy of NGSS, and the resulting shifting landscape of science education, posed a significant challenge to the study, which we discuss below.

Our impact study of Making Sense of SCIENCE was a cluster-randomized, two-year evaluation involving more than 300 teachers and 8,000 students. Confirmatory impact analyses found a positive and statistically significant impact on teacher content knowledge. While impact results on student achievement were mostly positive, none reached statistical significance. Exploratory analyses found positive impacts on teacher self-reports of time spent on science instruction, shifts in instructional practices, and amount of peer collaboration. Read our final report here.

CREATE is a three-year teacher residency program for students of Georgia State University College of Education and Human Development (GSU CEHD) that begins in their last year at GSU and continues through their first two years of teaching. The program seeks to raise student achievement by increasing teacher effectiveness and retention of both new and veteran educators by developing critically-conscious, compassionate, and skilled educators who are committed to teaching practices that prioritize racial justice and interrupt inequities.

Our impact study of CREATE used a quasi-experimental design to evaluate program effects for two staggered cohorts of study participants (CREATE and comparison early career teachers) from their final year at GSU CEHD through their second year of teaching, starting with the first cohort in 2015–16. Confirmatory impact analyses found no impact on teacher performance or on student achievement. However, exploratory analyses revealed a positive and statistically significant impact on continuous retention over a three-year time period (spanning graduation from GSU CEHD, entering teaching, and retention into the second year of teaching) for the CREATE group, compared to the comparison group. We also observed that higher continuous retention among Black educators in CREATE, relative to those in the comparison group, was the main driver of the favorable impact. The fact that the differential impacts on Black educators were positive and statistically significant for measures of executive functioning (resilience) and self-efficacy—and marginally statistically significant for stress management related to teaching—hints at potential mediators of impact on retention and guides future research.

After the i3 program funded this research, Empirical Education, GSU CEHD, and CREATE received two additional grants from the U.S. Department of Education’s Supporting Educator Effectiveness Development (SEED) program for further study of CREATE. We are currently studying our sixth cohort of CREATE residents and will have studied eight cohorts of CREATE residents, five cohorts of experienced educators, and two cohorts of cooperating teachers by the end of the second SEED grant. We are excited to continue our work with the GSU and CREATE teams and to explore the impact of CREATE, especially for retention of Black educators. Read our final report for the i3 evaluation of CREATE here.

Lessons Learned

While there were many lessons learned over the past five years, we’ll highlight two that were particularly challenging and possibly most pertinent to other evaluators.

The first key challenge that both studies faced was the availability of valid and reliable instruments to measure impact. For Making Sense of SCIENCE, a measure of student science achievement that was aligned with NGSS was difficult to identify because of the relative newness of the standards, which emphasized three-dimensional learning (disciplinary core ideas, science and engineering practices, and cross-cutting concepts). This multi-dimensional learning stood in stark contrast to the existing view of science education at the time, which primarily focused on science content. In 2014, one year prior to the start of our study, the National Research Council pointed out that “the assessments that are now in wide use were not designed to meet this vision of science proficiency and cannot readily be retrofitted to do so” (NRC, 2014, p. 12). While state science assessments that existed at the time were valid and reliable, they focused on science content and did not measure the type of three-dimensional learning targeted by NGSS. The NRC also noted that developing new assessments “present[s] complex conceptual, technical, and practical challenges, including cost and efficiency, obtaining reliable results from new assessment types, and developing complex tasks that are equitable for students across a wide range of demographic characteristics” (NRC, 2014, p. 16).

Given this context, despite the research team’s extensive search for assessments from a variety of sources—including reaching out to state departments of education, university-affiliated assessment centers, and test developers—we could not find an appropriate instrument. Using state assessments was not an option. The states in our study were still in the process of either piloting or field testing assessments that were aligned to NGSS or to state standards based on NGSS. This void of assessments left the evaluation team with no choice but to develop one, independently of the program developer, using established items from multiple sources to address general specifications of NGSS, and relying on the deep content expertise of some members of the research team. Of course there were some risks associated with this, especially given the lack of opportunity to comprehensively pilot or field test the items in the context of the study. When used operationally, the researcher-developed assessment turned out to be difficult and was not highly discriminating of ability at the low end of the achievement scale, which may have influenced the small effect size we observed. The circumstances around the assessment and the need to improvise a measure leads us to interpret findings related to science achievement of the Making Sense of SCIENCE program with caution.

The CREATE evaluation also faced a measurement challenge. One of the two confirmatory outcomes in the study was teacher performance, as measured by school administrators’ ratings of teachers on two of the state’s Teacher Assessment on Performance Standards (TAPS), a component of the state’s evaluation system (Georgia Department of Education, 2021). We could not detect impact on this measure because the variance observed in the ordinal ratings was remarkably low, with ratings overwhelmingly centered on the median value. This was not a complete surprise: the literature documents this lack of variability in teaching performance ratings. A seminal report, The Widget Effect by The New Teacher Project (Weisberg et al., 2009), called attention to this “national crisis”—the inability of schools to effectively differentiate between low- and high-performing teachers. The report showed that in districts that use binary evaluation ratings, as well as those that use a broader range of rating options, less than 1% of teachers received a rating of unsatisfactory. In the CREATE study, the median value was chosen overwhelmingly. In a study examining teacher performance ratings, Kraft and Gilmour (2017) found that principals were reluctant to give new teachers a rating below proficient because they acknowledged that new teachers were still working to improve their teaching, and that “giving a low rating to a potentially good teacher could be counterproductive to a teacher’s development.” These reasons are particularly relevant to the CREATE study, given that the teachers in our study were very early in their teaching careers (first-year teachers) and given the high turnover rate among all teachers in Georgia.

We bring up this point about instruments as a way to share with the evaluation community what we see as a not uncommon challenge. In 2018 (the final year of outcomes data collection for Making Sense of SCIENCE), when we presented about the difficulties of finding a valid and reliable NGSS-aligned instrument at AERA, a handful of researchers approached us to commiserate; they too were experiencing similar challenges with finding an established NGSS-aligned instrument. As we write this, perhaps states and testing centers are further along in their development of NGSS-aligned assessments. However, the challenge of finding valid and reliable instruments, generally speaking, will persist as long as educational standards continue to evolve. (And they will.) Our response to this challenge was to be as transparent as possible about the instruments and the conclusions we can draw from using them. In reporting on Making Sense of SCIENCE, we provided detailed descriptions of our process for developing the instruments and reported item- and form-level statistics, as well as contextual information and rationale for critical decisions. In reporting on CREATE, we provided the distribution of ratings on the relevant dimensions of teacher performance for both the baseline and outcome measures. In being transparent, we allow the readers to draw their own conclusions from the data available, facilitate the review of the quality of the evidence against various sets of research standards, support replication of the study, and provide further context for future study.

A second challenge was maintaining a consistent sample over the course of the implementation, particularly in multi-year studies. For Making Sense of SCIENCE, which was conducted over two years, there was substantial teacher mobility into and out of the study. Given the reality of schools, even with study incentives, nearly half of teachers moved out of study schools or study-eligible grades within schools over the two-year period of the study. This obviously presented a challenge to program implementation. WestEd delivered professional learning as intended, and leadership professional learning activities all met fidelity thresholds for attendance, with strong uptake of Making Sense of SCIENCE within each year (over 90% of teachers met fidelity thresholds). Yet only slightly more than half of study teachers met the fidelity threshold for both years. The percentage of teachers leaving the school was consistent with what we observed at the national level: only 84% of teachers stay at the same school year over year (McFarland et al., 2019). For assessing impacts, the effects of teacher mobility can be addressed to some extent at the analysis stage; however, the more important goal is to find ways to achieve fidelity of implementation and exposure for the full program duration. One option is to increase incentives and buy-in, including among administrators, so that more teachers reach the two-year participation targets by remaining in eligible subjects and grades. This solution may go only part way, because teacher mobility is a reality. Another option is to adapt the program to make it shorter and more intensive; however, this may work against the core model of the program’s implementation, which may require time for teachers to assimilate their learning. Yet another option is to make the program more adaptable, for example by letting teachers who leave eligible grades and schools continue to participate remotely, allowing impacts to be assessed over more of the initially randomized sample.

For CREATE, sample size was also a challenge, but for slightly different reasons. During study design and recruitment, we had anticipated and factored the estimated level of attrition into the power analysis, and we successfully recruited the targeted number of teachers. However, several unexpected limitations arose during the study that ultimately resulted in small analytic samples. These limitations included challenges in obtaining research permission from districts and schools (which would have allowed participants to remain active in the study), as well as a loss of study participants due to life changes (e.g., obtaining teaching positions in other states, leaving the teaching profession completely, or feeling like they no longer had the time to complete data collection activities). Also, while Georgia administers the Milestones state assessment in grades 4–8, many participating teachers in both conditions taught lower elementary school grades or non-tested subjects. For the analysis phase, many factors resulted in small student samples: reduced teacher samples, the technical requirement of matching students across conditions within each cohort in order to meet WWC evidence standards, and the need to match students within grades, given the lack of vertically scaled scores. While we did achieve baseline equivalence between the CREATE and comparison groups for the analytic samples, the small number of cases greatly reduced the scope and external validity of the conclusions related to student achievement. The most robust samples were for retention outcomes. We have the most confidence in those results.

As a last point of reflection, we greatly enjoyed and benefited from the close collaboration with our partners on these projects. The research and program teams worked together in lockstep at many stages of the study. We also want to acknowledge the role that the i3 grant played in promoting the collaboration. For example, the grant’s requirements around the development and refinement of the logic model were a major driver of many collaborative efforts. Evaluators reminded the team periodically about the “accountability” requirements, such as ensuring consistency in the definition and use of the program components and mediators in the logic model. The program team, on the other hand, contributed contextual knowledge gained through decades of being intimately involved in the program. In the spirit of participatory evaluation, the two teams benefited from the type of organizational learning that “occurs when cognitive systems and memories are developed and shared by members of the organizations” (Cousins & Earl, 1992). This type of organic and fluid relationship encouraged the researchers and program teams to embrace uncertainty during the study. While we “pre-registered” confirmatory research questions for both studies by submitting the study plans to NEi3 prior to the start of the studies, we allowed exploratory questions to be guided by conversations with the program developers. In doing so, we were able to address questions that were most useful to the program developers and the districts and schools implementing the programs.

We are thankful that we had the opportunity to conduct these two rigorous evaluations alongside such humble, thoughtful, and intentional (among other things!) program teams over the last five years, and we look forward to future collaborations. These two evaluations have both broadened and deepened our experience with large-scale evaluations, and we hope that our reflections here not only serve as lessons for us, but that they may also be useful to the education evaluation community at large, as we continue our work in the complex and dynamic education landscape.

References

Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14(4), 397-418.

Georgia Department of Education (2021). Teacher Keys Effectiveness System. https://www.gadoe.org/School-Improvement/Teacher-and-Leader-Effectiveness/Pages/Teacher-Keys-Effectiveness-System.aspx

Kraft, M. A., & Gilmour, A. F. (2017). Revisiting the widget effect: Teacher evaluation reforms and the distribution of teacher effectiveness. Educational Researcher, 46(5), 234-249.

McFarland, J., Hussar, B., Zhang, J., Wang, X., Wang, K., Hein, S., Diliberti, M., Forrest Cataldi, E., Bullock Mann, F., and Barmer, A. (2019). The Condition of Education 2019 (NCES 2019-144). U.S. Department of Education. National Center for Education Statistics. https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2019144

National Research Council (NRC). (2014). Developing Assessments for the Next Generation Science Standards. Committee on Developing Assessments of Science Proficiency in K-12. Board on Testing and Assessment and Board on Science Education, J.W. Pellegrino, M.R. Wilson, J.A. Koenig, and A.S. Beatty, Editors. Division of Behavioral and Social Sciences and Education. The National Academies Press.

Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The Widget Effect: Our National Failure to Acknowledge and Act on Differences in Teacher Effectiveness. The New Teacher Project. https://tntp.org/wp-content/uploads/2023/02/TheWidgetEffect_2nd_ed.pdf

2021-06-23

SREE 2020 Goes Virtual

We, like many of you, were excited to travel to Washington DC in March 2020 to present at the annual conference of the Society for Research on Educational Effectiveness (SREE). This would have been our 15th year attending or presenting at the SREE conference! We had been looking forward to learning from a variety of sessions and to sharing our own work with the SREE community, so imagine our disappointment when the conference was cancelled (rightfully) in response to the pandemic. Thankfully, SREE offered presenters the option to share their work virtually, and we are excited to have taken part in this opportunity!

Among our several accepted conference proposals, we decided to host the symposium on Social and Emotional Learning in Educational Settings & Academic Learning because it incorporated several of our major projects—three evaluations funded by the Department of Education’s i3/EIR program—two of which focus on teacher professional development and one of which focuses on content enhancement routines and student content knowledge. We were joined by Katie Lass, who presented on another i3/EIR evaluation conducted by the Policy & Research Group, and by Anne Wolf, from Abt Associates, who served as the discussant. The presentations focused on unpacking the logic model for each of the respective programs, and collectively we tried to uncover common threads and lessons learned across the four i3/EIR evaluations.

We were happy to have a larger turnout than we had hoped for and a rich discussion of the topic. The recording of our virtual symposium is now available here. Below are materials from each presentation.

We look forward to next year!

9A. Unpacking the Logic Model: A Discussion of Mediators and Antecedents of Educational Outcomes from the Investing in Innovation (i3) Program

Symposium: September 9, 1:00-2:00 PM EDT

Section: Social and Emotional Learning in Educational Settings & Academic Learning in Education Settings

Abstract

Slides

Organizer: Katie Lass, Policy & Research Group

Impact on Antecedents of Student Dropout in a Cross-Age Peer Mentoring Program

Abstract

Katie Lass, Policy & Research Group*; Sarah Walsh, Policy & Research Group; Eric Jenner, Policy & Research Group; and Sherry Barr, Center for Supportive Schools

Supporting Content-Area Learning in Biology and U.S. History: A Randomized Control Trial of Enhanced Units in California and Virginia

Abstract

Hannah D’Apice, Empirical Education*; Adam Schellinger, Empirical Education; Jenna Zacamy, Empirical Education; Xin Wei, SRI International; and Andrew P. Jaciw, Empirical Education

The Role of Socioemotional Learning in Teacher Induction: A Longitudinal Study of the CREATE Teacher Residency Program

Abstract

Audra Wingard, Empirical Education*; Andrew P. Jaciw, Empirical Education; Jenna Zacamy, Empirical Education

Uncovering the Black Box: Exploratory Mediation Analysis for a Science Teacher Professional Development Program

Abstract

Thanh Nguyen, Empirical Education*; Andrew P. Jaciw, Empirical Education; and Jenna Zacamy, Empirical Education

Discussant: Anne Wolf, Abt Associates

2020-10-24

Report Released on the Effectiveness of SRI/CAST's Enhanced Units

Summary of Findings

Empirical Education has released the results of a semester-long randomized experiment on the effectiveness of SRI/CAST’s Enhanced Units (EU). This study was conducted in cooperation with one district in California and two districts in Virginia, and was funded through a competitive Investing in Innovation (i3) grant from the U.S. Department of Education. EU combines research-based content enhancement routines, collaboration strategies, and technology components for secondary history and biology classes. The goal of the grant is to improve student content learning and higher order reasoning, especially for students with disabilities. EU was developed during a two-year design-based implementation process in which teachers and administrators co-designed the units with developers.

The evaluation employed a group randomized control trial in which classes were randomly assigned, within teachers, either to receive the EU curriculum or to continue with business-as-usual instruction. All teachers were trained in Enhanced Units. Overall, the study involved three districts, five schools, 13 teachers, 14 randomized blocks, and 30 classes (15 in each condition, with 18 in biology and 12 in U.S. History). This was an intent-to-treat design: impact estimates were generated by comparing average student outcomes for classes randomly assigned to the EU group with those for classes assigned to the control group, regardless of the level of participation in, or teacher implementation of, EU instructional approaches after random assignment.
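A within-teacher randomization like the one described above can be sketched as follows. This is a minimal illustration of the general technique, not the study’s actual procedure; the teacher and class identifiers are hypothetical:

```python
import random

def assign_within_teacher(classes_by_teacher, seed=42):
    """Randomly split each teacher's eligible classes between treatment
    (EU) and control. Each teacher forms a randomization block, so the
    treatment/control comparison is made within, not across, teachers."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    assignments = {}
    for teacher, classes in classes_by_teacher.items():
        shuffled = classes[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        for c in shuffled[:half]:
            assignments[c] = "EU"
        for c in shuffled[half:]:
            assignments[c] = "control"
    return assignments

# Hypothetical example: two teachers, each with two eligible classes.
blocks = {"teacher_A": ["bio_1", "bio_2"], "teacher_B": ["hist_1", "hist_2"]}
print(assign_within_teacher(blocks))
```

Because each teacher’s classes are split between conditions, teacher-level differences (experience, school context) are balanced across the two groups by design.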

Overall, we found a positive impact of EU on student learning in history, but not in biology or across the two domains combined. Within biology, we found a greater impact for the Evolution unit than for the Ecology unit. These findings support a theory, developed by the program developers, that EU works especially well with content that progresses in a sequential and linear way. We also found a positive differential effect favoring students with disabilities, an encouraging result given the goal of the grant.

Final Report of CAST Enhanced Units Findings

The full report for this study can be downloaded using the link below.

Enhanced Units final report

Dissemination of Findings

2023 Dissemination

In April 2023, The U.S. Department of Education’s Office of Innovation and Early Learning Programs (IELP) within the Office of Elementary and Secondary Education (OESE) compiled cross-project summaries of completed Investing in Innovation (i3) and Education Innovation and Research (EIR) projects. Our CAST Enhanced Units study is included in one of the cross-project summaries. Read the 16-page summary using the link below.

Findings from Projects with a Focus on Serving Students with Disabilities

2020 Dissemination

Hannah D’Apice presented these findings at the 2020 virtual conference of the Society for Research on Educational Effectiveness (SREE) in September 2020. Watch the recorded presentation using the link below.

Symposium Session 9A. Unpacking the Logic Model: A Discussion of Mediators and Antecedents of Educational Outcomes from the Investing in Innovation (i3) Program

2019-12-26

New Multi-State RCT with Imagine Learning

Empirical Education is excited to announce a new study on the effectiveness of Imagine Math, an online supplemental math program that helps students build conceptual understanding, problem-solving skills, and a resilient attitude toward math. The program provides adaptive instruction so that students can work at their own pace, and offers live support from certified math teachers as students work through the content. Imagine Math also includes diagnostic benchmarks that allow educators to track progress at the student, class, school, and district level.

The research questions to be answered by this study are:

  1. What is the impact of Imagine Math on student achievement in mathematics in grades 6–8?
  2. Is the impact of Imagine Math different for students with diverse characteristics, such as those starting with weak or strong content-area skills?
  3. Are differences in the extent of use of Imagine Math, such as the number of lessons completed, associated with differences in student outcomes?

The new study will use a randomized control trial (RCT), a randomized experiment in which two equivalent groups of students are formed through random assignment. The experiment will specifically use a within-teacher RCT design, with randomization taking place at the classroom level for eligible math classes in grades 6–8.

Eligible classes will be randomly assigned to either use or not use Imagine Math during the school year, with academic achievement compared at the end of the year to determine the impact of the program on grades 6–8 mathematics achievement. In addition, Empirical Education will make use of Imagine Math’s usage data for potential analysis of the program’s impact on different subgroups of users.
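At its core, the planned end-of-year comparison is a difference in mean achievement between students in classes as randomized. The sketch below illustrates that comparison with hypothetical class identifiers and scores; the study’s actual models would additionally account for clustering and baseline covariates:

```python
def impact_estimate(scores, assignment):
    """Difference in mean outcomes between students in classes assigned
    to treatment and students in classes assigned to control.

    `scores` maps class id -> list of student scores;
    `assignment` maps class id -> "treatment" or "control".
    """
    treat, control = [], []
    for cls, student_scores in scores.items():
        (treat if assignment[cls] == "treatment" else control).extend(student_scores)
    # Simple difference in means across the pooled students in each condition
    return sum(treat) / len(treat) - sum(control) / len(control)

# Hypothetical data: two treatment classes, two control classes.
scores = {"c1": [70, 80], "c2": [90, 80], "c3": [60, 70], "c4": [80, 70]}
assignment = {"c1": "treatment", "c2": "treatment",
              "c3": "control", "c4": "control"}
print(impact_estimate(scores, assignment))  # → 10.0
```

Comparing groups as randomized, regardless of how much each class actually used the program, is what makes this an intent-to-treat style estimate.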

This is Empirical Education’s first project with Imagine Learning, and it builds on our extensive experience conducting large-scale, rigorous, experimental impact studies. The study is commissioned by Imagine Learning and will take place in multiple school districts and states across the country, including Hawaii, Alabama, Alaska, and Delaware.

2018-08-03

Partnering with SRI and CAST on an RCT

Empirical Education and CAST are excited to announce a new partnership under an Investing in Innovation (i3) grant.

We’ll evaluate the Enhanced Units program, which was proposed as a development project by SRI and CAST. The project aims to integrate content enhancement routines with learning and collaboration strategies in order to improve student content learning, higher order reasoning, and collaboration.

We will conduct the experiment within up to three school districts in California and Virginia—working with teachers of high school science and social studies students. This is our first project with CAST, and it builds on our extensive experience conducting large-scale, rigorous, experimental impact studies, as well as formative and process evaluations.

For more information on our evaluation services and our work on i3 projects, please visit our i3 /EIR page and/or contact Robin Means.

2017-07-27

Determining the Impact of MSS on Science Achievement

Empirical Education is conducting an evaluation of Making Sense of SCIENCE (MSS) under an Investing in Innovation (i3) five-year validation grant awarded in 2014. MSS is a teacher professional learning approach that focuses on science understanding, classroom practice, literacy support, and pedagogical reasoning. The primary purpose of the evaluation is to assess the impact of MSS on teachers’ science content knowledge and student science achievement and attitudes toward science. The evaluation takes place in 66 schools across two geographic regions—Wisconsin and the Central Valley of California. Participating Local Educational Agencies (LEAs) include: Milwaukee Public Schools (WI), Racine Unified School District (WI), Lodi Unified School District (CA), Manteca Unified School District (CA), Turlock Unified School District (CA), Stockton Unified School District (CA), Sylvan Unified School District (CA), and the San Joaquin County Office of Education (CA).

Using a randomized control trial (RCT) design, in 2015–16 we randomly assigned the schools (32 in Wisconsin and 34 in California) either to receive the MSS intervention or to continue with business-as-usual district professional learning and science instruction. Professional learning activities and program implementation take place during the 2016–17 and 2017–18 school years, with delayed treatment, planned for 2018–19 and 2019–20, for the schools randomized to control.

Confirmatory impacts on student achievement and teacher content knowledge will be assessed in 2018. Confirmatory research questions include:

  1. What is the impact of MSS at the school level, after two years of full implementation, on science achievement in Earth and physical science among 4th and 5th grade students in intervention schools, compared to 4th and 5th grade students in control schools receiving business-as-usual science instruction?
  2. What is the impact of MSS on science achievement among low-achieving students in intervention elementary schools with two years of exposure to MSS (in grades 4-5), compared to low-achieving students in control elementary schools with business-as-usual instruction for two years (in grades 4-5)?
  3. What is the impact of MSS on teachers’ science content knowledge in Earth and physical science, compared to teachers in the business-as-usual control schools, after two full years of implementation?

Additional exploratory analyses are currently being conducted and will continue through 2018. Exploratory research questions examine the impact of MSS on students’ ability to communicate science ideas in writing, as well as non-academic outcomes, such as confidence and engagement in learning science. We will also explore several teacher-level outcomes, including teachers’ pedagogical science content knowledge, and changes in classroom instructional practices. The evaluation also includes measures of fidelity of implementation.

We plan to publish the final results of this study in fall of 2019. Please check back to read the research summary and report.

2017-06-19

Presenting at AERA 2017

We will again be presenting at the annual meeting of the American Educational Research Association (AERA). Join the Empirical Education team in San Antonio, TX from April 27 – 30, 2017.

Research Presentations will include the following.

Increasing Accessibility of Professional Development (PD): Evaluation of an Online PD for High School Science Teachers
Authors: Adam Schellinger, Andrew P Jaciw, Jenna Lynn Zacamy, Megan Toby, & Li Lin
In Event: Promoting and Measuring STEM Learning
Saturday, April 29 10:35am to 12:05pm
Henry B. Gonzalez Convention Center, River Level, Room 7C

Abstract: This study examines the impact of an online teacher professional development, focused on academic literacy in high school science classes. A one-year randomized control trial measured the impact of Internet-Based Reading Apprenticeship Improving Science Education (iRAISE) on instructional practices and student literacy achievement in 27 schools in Michigan and Pennsylvania. Researchers found a differential impact of iRAISE favoring students with lower incoming achievement (although there was no overall impact of iRAISE on student achievement). Additionally, there were positive impacts on several instructional practices. These findings are consistent with the specific goals of iRAISE: to provide high-quality, accessible online training that improves science teaching. Authors compare these results to previous evaluations of the same intervention delivered through a face-to-face format.


How Teacher Practices Illuminate Differences in Program Impact in Biology and Humanities Classrooms
Authors: Denis Newman, Val Lazarev, Andrew P Jaciw, & Li Lin
In Event: Poster Session 5 - Program Evaluation With a Purpose: Creating Equal Opportunities for Learning in Schools
Friday, April 28 12:25 to 1:55pm
Henry B. Gonzalez Convention Center, Street Level, Stars at Night Ballroom 4

Abstract: This paper reports research to explain the positive impact in a major RCT for students in the classrooms of a subgroup of teachers. Our goal was to understand why there was an impact for science teachers but not for teachers of humanities, i.e., history and English. We have labelled our analysis “moderated mediation” because we start with the finding that the program’s success was moderated by the subject taught by the teacher, and then go on to look at the differences in mediation processes depending on the subject being taught. We find that the program’s impacts on teacher practices differ by mediator (as measured in surveys and observations) and that the mediators are differentially associated with student impact depending on context.


Are Large-Scale Randomized Controlled Trials Useful for Understanding the Process of Scaling Up?
Authors: Denis Newman, Val Lazarev, Jenna Lynn Zacamy, & Li Lin
In Event: Poster Session 3 - Applied Research in School: Education Policy and School Context
Thursday, April 27 4:05 to 5:35pm
Henry B. Gonzalez Convention Center, Ballroom Level, Hemisfair Ballroom 2

Abstract: This paper reports a large-scale program evaluation that included an RCT and a parallel study of 167 schools outside the RCT, which provided an opportunity to study the growth of a program and compare the two contexts. Teachers in both contexts were surveyed, and a large subset of the questions was asked of both scale-up teachers and teachers in the treatment schools of the RCT. We find large differences in the level of commitment to program success in the school: far less was found in the RCT, suggesting that a large-scale RCT may not capture the processes at play in the scale-up of a program.

We look forward to seeing you at our sessions to discuss our research. You can also view our presentation schedule here.

2017-04-17