Blog Posts and News Stories

Report Released on the Efficacy of PCI’s Reading Program — Level One

Empirical Education and PCI Education have released the results of a one-year randomized controlled trial on the efficacy of PCI’s Reading Program — Level One for students with moderate to severe disabilities. Conducted in the Brevard and Miami-Dade County school districts, the study found that, after one year, students in the PCI program had substantially greater success in learning sight words than students in the control group — equivalent to a 21 percentile point difference. Although researchers found that students’ grade level had no effect on achievement with the program, they found a small moderating effect of the phonological pre-assessment: students starting with greater phonological skills benefited more from PCI than students starting with lower scores. The report will be presented at the 2009 AERA conference in San Diego, CA. A four-year follow-on study is being conducted with a larger group of students in Florida.
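
Under the convention commonly used in education research (for example, by the What Works Clearinghouse), a percentile point difference of this kind is an “improvement index”: the normal-curve percentile of the average treatment student minus the 50th percentile of the control group. The report’s underlying effect size is not restated here, so the sketch below is only illustrative; the function name and the 0.55 effect size are our assumptions.

```python
from scipy.stats import norm

def improvement_index(effect_size: float) -> float:
    """Percentile points gained by the average control student if moved
    to the treatment distribution: (Phi(g) - 0.5) * 100."""
    return (norm.cdf(effect_size) - 0.5) * 100

# An effect size of roughly 0.55 corresponds to the reported
# 21 percentile point difference:
print(round(improvement_index(0.55)))  # -> 21
```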

2008-11-01

Final Report on “Local Experiments” Project

Empirical Education released the final report of a project that has developed a unique perspective on how school systems can use scientific evidence. Representing more than three years of research and development effort, our report describes the startup of six randomized experiments and traces how local agencies decided to undertake the studies and how the resulting information was used. The project was funded by a grant from the Institute of Education Sciences under its program on Education Policy, Finance, and Systems. It started with a straightforward conjecture:

The combination of readily available student data and the greater pressure on school systems to improve productivity through the use of scientific evidence of program effectiveness could lead to a reduction in the cost of rigorous program evaluations and to a rapid increase in the number of such studies conducted internally by school districts.

The prevailing view of scientifically based research is that educators are consumers of research conducted by professionals. There is also a belief that rigorous research is extraordinarily expensive. The supposition behind our proposal was that the cost could be made low enough for experiments to be conducted routinely in support of district decisions, with local educators as the producers of evidence. The project contributed a number of methodological, analytic, and reporting approaches with the potential to lower costs and make rigorous program evaluation more accessible to district researchers. An important result of the work was bringing to light the difference between conventional research design, which aims at broadly generalized conclusions, and design aimed at answering a local question, where sampling is restricted to the relevant “unit of decision making,” such as a school district with jurisdiction over decisions about instructional or professional development programs.

The final report concludes with an understanding of research use at the central office level, whether “data-driven” or “evidence-based” decision making, as a progression through stages: looking for descriptive patterns in the data (i.e., data mining for questions of interest) precedes statistical analysis of differences and associations among variables of interest using appropriate methods such as hierarchical linear modeling (HLM), and these in turn precede the adoption of an experimental design to isolate causal, moderator, and mediator effects. The report proposes that most districts are not yet prepared to produce and use experimental evidence but can start with useful descriptive exploration of their data, leading to needs assessment as a first step toward a more proactive use of evaluation to inform their decisions.
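
As an illustration of that middle stage, a two-level model of the kind fit with HLM can also be estimated with standard open-source tools. The sketch below assumes a hypothetical district extract with students nested in schools; the file and column names are ours, not the project’s.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical district extract: one row per student, with a school
# identifier, a pretest score, and a program-participation flag.
df = pd.read_csv("district_extract.csv")

# Two-level model: a random intercept for each school absorbs
# between-school differences before estimating the association
# between program participation and the posttest.
model = smf.mixedlm("posttest ~ pretest + in_program",
                    data=df, groups=df["school_id"])
print(model.fit().summary())
```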

For a copy of the report, please choose the paper “Toward School Districts Conducting Their Own Rigorous Program Evaluation” from our reports and papers webpage.

2008-10-01

Maui Community College Hires Empirical Education for an Evaluation of an NSF-Funded Project

In Hawaii, Ho’okahua means “to lay a foundation”. Focusing on Hawaiian students over multiple years, the Ho’okahua Project aims to increase the number of Maui Community College (MCC) students entering, persisting, and succeeding in college-level science, mathematics, and other STEM (Science, Technology, Engineering, and Math) degree programs. Several strategies have already been implemented, including a bridge program with the high schools from which the MCC student community is largely drawn.

The Maui Educational Consortium provides leadership for this work and has been instrumental in a number of other initiatives for increasing the capacity to achieve these goals. For example, the implementation of Cognitive Tutor for Algebra 1 was the subject of a related Empirical Education randomized experiment. Another important capacity fostered by the Educational Consortium, working with the University of Hawai’i Office of the State Director for Career and Technical Education, is an initiative called HI-PASS, which aggregates student data across high school and community college. In the initial phase of its evaluation, Empirical Education will use information on math courses, developed through the HI-PASS project, to follow the success of students from the earlier study.

2008-08-22

Blue Valley Schools and Empirical Education Explain Local Program Evaluations

Dr. Bo Yan, Program Evaluator for the Blue Valley Schools in Kansas, and Dr. Denis Newman, president of Empirical Education, co-presented at the Northwest Evaluation Association’s annual Members Seminar in Portland, OR. The topic was how school districts can use their own testing, such as that administered by NWEA member districts, to conduct their own local program evaluations. Dr. Yan, who expects to conduct seven such evaluations in his district this coming year, used an evaluation of READ 180 as an illustration of a comparison group design using primarily statistical controls. Dr. Newman presented the randomized controlled trials Empirical Education has conducted with the Maui school system to evaluate math software and curriculum from Carnegie Learning (Cognitive Tutor: Year 1 and Year 2). Both emphasized the importance of local evaluations for estimating the impact of programs given the specific populations and resources of the district. They also made clear the need for a comparison group drawn from the local district, since the improvement a district can expect is anchored in its own students’ current level of achievement. While the presentation focused mainly on the use of NWEA’s Measures of Academic Progress (MAP) in the quantitative estimation of a program’s impact, the presenters also emphasized the importance of gathering information on implementation and of the conversations needed to integrate evaluation findings into the district’s decision-making. NWEA will provide this presentation, along with the PowerPoint slides, as a podcast.
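
A comparison group design using statistical controls, as in the READ 180 illustration, can be sketched as a regression of the spring score on the fall score plus a program indicator. This is a minimal sketch of ours with hypothetical column names, not Blue Valley’s actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level file: fall and spring MAP scores plus a
# 0/1 flag for READ 180 participation.
df = pd.read_csv("map_scores.csv")

# The fall score serves as the statistical control; the coefficient on
# read180 estimates the adjusted difference between the groups.
fit = smf.ols("map_spring ~ map_fall + read180", data=df).fit()
print(fit.params["read180"], fit.pvalues["read180"])
```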

2008-06-27

Development Grant Awarded to Empirical Education

The U.S. Department of Education awarded Empirical Education a research grant to develop web-based software tools that support school administrators in conducting their own program evaluations. The two-and-a-half-year project was awarded through the Small Business Innovation Research (SBIR) program, administered and funded by the Institute of Education Sciences in the U.S. Department of Education. The proposal received excellent reviews in this competitive program. One reviewer remarked: “This software system is in the spirit of NCLB and IES to make curriculum, professional development, and other policy decisions based on rigorous research. This would be an improvement over other systems that districts and schools use that mostly generate tables.” While current data-driven decision making systems provide tabular information or comparisons in terms of bar graphs, the software to be developed—an enhancement of our current MeasureResults™ program—helps school personnel create appropriate research designs by following a decision process. It then provides access to a web-based service that uses sophisticated statistical software to test whether there is a difference in results between a new program and the school’s existing programs. The reviewer added that the web-based system instantiates a “very good idea to provide [a] user-friendly and cost-effective software system to districts and schools to insert data for evaluating their own programs.” Another reviewer agreed, noting that “The theory behind the tool is sound and would provide analyses appropriate to the questions being asked.” The reviewer also remarked that “…this would be a highly valuable tool. It is likely that the tool would be widely disseminated and utilized.” The company will begin deploying early versions of the software in school systems this coming fall.
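
In its simplest form, the question such a service automates, whether results differ between a new program and the existing one, reduces to a two-group comparison. The sketch below is ours and is far simpler than what the announcement describes, which would also handle covariates and the clustering of students within classrooms.

```python
from scipy.stats import ttest_ind

# Hypothetical outcome scores for students in the new program and in
# the school's existing program.
new_program = [72, 68, 75, 81, 66, 79, 74]
existing = [70, 65, 69, 73, 64, 71, 68]

# Welch's t-test (unequal variances) for a difference in mean outcomes.
t_stat, p_value = ttest_ind(new_program, existing, equal_var=False)
print(t_stat, p_value)
```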

2008-05-22

IES Holds Its First Workshop on How States and Districts Can Evaluate Their Interventions

Empirical Education staff members participated in a workshop sponsored by the U.S. Department of Education’s Institute of Education Sciences (IES) and held in Washington, DC, on April 24. The goal of the event was to encourage “locally initiated impact evaluations.” The primary presenter, Mark Lipsey of Vanderbilt University, drew on his extensive experience with rigorous evaluations to illustrate and explain the how and why of local evaluations. David Holdzkom of Wake County (NC) provided the perspective of a district whose staff has been conducting rigorous research on its own programs. Empirical Education assisted IES in preparing for the workshop by helping to recruit school district participants and presenters. Of the approximately 150 participants, 40% represented state and local education agencies (the other 60% were from universities, colleges, private agencies, R&D groups, and research companies). An underlying rationale for the workshop was the release of an RFP on this topic. Empirical Education expects to work with a number of school districts on their responses to this solicitation, which are due October 2, 2008.

2008-05-01

Two-Year Study on Effectiveness of Graphing Calculators Released

Results are in from a two-year randomized controlled trial of the effect of graphing calculators on Algebra and Geometry achievement. Two reports are now available for this project, which was sponsored by Texas Instruments. In the first year, we contrasted business as usual in the math classes of two California school districts with classes equipped with sets of graphing calculators and led by teachers who received training in their use. In the second year, we contrasted calculator-only classrooms with those also equipped with a calculator-based wireless networking system.

The project tracked achievement through state and other standardized test scores, and implementation through surveys and observations. For the most part, the experiment could not discern an impact of providing the equipment and teacher training. Data from the surveys and observations make clear that the technology was not used extensively (and by some teachers, not at all), suggesting that training, usability, and alignment issues must be addressed in adopting this kind of program. There were modest effects, especially for Geometry, but these often did not appear consistently across the two measurement scales. In one case, contradictory results for the two school districts suggest that researchers should use caution in combining data from different settings.
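
One way to exercise that caution is to test a treatment-by-district interaction before pooling the data. The sketch below uses assumed column names and is our illustration, not the study’s actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled file from the two districts, with a pretest,
# a 0/1 treatment indicator, and a district label.
df = pd.read_csv("pooled_districts.csv")

# A significant treatment:district interaction indicates the program's
# effect differs by site, so a single pooled estimate would mislead.
fit = smf.ols("score ~ pretest + treatment * C(district)", data=df).fit()
print(fit.summary())
```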

2008-03-01

At NCES Conference, Empirical Education Explains a Difficulty in Linking Student and Teacher Records

The San Francisco Bay Area, our “hometown”, was the site of the 2008 National Center for Education Statistics (NCES) conference. From February 25 to 29, educators and researchers from all over the country came to discuss data collection and analysis. Laurel Sterling and Robert Smith of Empirical Education presented “Tracking Teachers of Instruction for Data Accuracy and Improving Educational Outcomes”. Their topic was the need to differentiate between the teachers who actually provide instruction to students and the teachers with whom those students are officially registered. They explained that in our research we keep track of the “teacher of instruction” vs. the “teacher of registration”. Without this distinction, we are unable to properly identify student clusters or associate student growth with the right teacher. Sharing instructional responsibilities within a grade-level team is common enough to be of concern: in a large experiment involving teachers in grades 4 through 8, 17% reported teaching students who were not assigned to them on the official class roster. The audience was lively and contributed to the topic during the question period. One district IT manager indicated that there is movement in this direction even at the state level. For a copy of the presentation, send us a message on our contact page.
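
In practice, the distinction can be carried in the data by linking each student to both teachers and preferring the instructional link when it exists. A minimal sketch with hypothetical tables and column names:

```python
import pandas as pd

# Hypothetical tables: the official roster assigns each student a
# teacher of registration; a survey-based table records who actually
# delivered instruction.
roster = pd.DataFrame({
    "student_id": [1, 2, 3],
    "teacher_of_registration": ["Adams", "Baker", "Chen"],
})
instruction = pd.DataFrame({
    "student_id": [2],
    "teacher_of_instruction": ["Diaz"],
})

linked = roster.merge(instruction, on="student_id", how="left")

# Attribute growth to the instructing teacher when known, falling
# back to the roster teacher otherwise.
linked["analysis_teacher"] = linked["teacher_of_instruction"].fillna(
    linked["teacher_of_registration"])
print(linked)
```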

2008-02-29

Empirical Education Joins the What Works Clearinghouse Research Team

Mathematica Policy Research, Inc. has subcontracted with Empirical Education to serve as one of the research partners on the new What Works Clearinghouse (WWC) team. This week, Empirical Education research staff joined a seminar to talk through the latest policies and criteria for judging the quality and rigor of effectiveness research.

Last summer, the Department of Education granted leadership of the WWC, formerly led by AIR, to Mathematica, which put together a team consisting of Empirical, RAND, SRI, and a number of other research organizations. This round of work is expected to place a greater emphasis on outreach to schools, industry, and other stakeholders.

2008-01-29

Maui Schools Sign Subscription Agreement for Empirical Education Research Services

Empirical Education will be providing research services to the Maui School District through an innovative subscription arrangement for MeasureResults™, an Internet-based research and consulting offering. The initial application of this service will be an investigation of the longer-term impact of the Cognitive Tutor program implemented under a Math and Science Partnership program grant.

The company’s MeasureResults service is a response to the ever-increasing demands on school systems to validate their program and spending decisions with the analysis of solid data. Most districts do not have the staff and facilities to assemble the data and run complex statistical analyses. In Maui, the service will take advantage of the sophisticated data warehousing capabilities being put in place statewide. MeasureResults is designed to simplify the technical and logistical steps of conducting experiments by building powerful, verified analytical techniques into an uncomplicated framework. The offering includes consultative services on research design and web-based interfaces to gather data, automate analysis, and generate reports. MeasureResults is bundled with technical support that includes review of all analyses and reports by trained statisticians.

2007-12-17