blog posts and news stories

Empirical Presents at AERA 2012

We will again be presenting at the annual meeting of the American Educational Research Association (AERA). Join the Empirical Education team in Vancouver, Canada from April 13 – 17, 2012. Our presentations will span two divisions: 1) Measurement and Research Methodology and 2) Research, Evaluation and Assessment in Schools.

Research Topics will include:

Current Studies in Program Evaluation to Improve Student Achievement Outcomes

Evaluating Alabama’s Math, Science and Technology Initiative: Results of a Three-Year, State-Wide Randomized Experiment

Accommodating Data From Quasi-Experimental Design

Quantitative Approaches to the Evaluation of Literacy Programs and Instruction for Elementary and Secondary Students

We look forward to seeing you at our sessions to discuss our research. You can also download our presentation schedule here. As has become tradition, we plan to host yet another of our popular AERA receptions. Details about the reception will follow in the months to come.

2011-11-18

Need for Product Evaluations Continues to Grow

There is a growing need for evidence of the effectiveness of products and services being sold to schools. A new release of SIIA’s product evaluation guidelines is now available at the Selling to Schools website (with continued free access to SIIA members), to help guide publishers in measuring the effectiveness of the tools they are selling to schools.

It’s been almost a decade since NCLB made its call for “scientifically-based research,” but the calls for research haven’t faded away. This is because resources available to schools have diminished over that time, heightening the importance of cost-benefit trade-offs in spending.

NCLB has focused attention on test score achievement, and this metric is becoming more pervasive; e.g., through a tie to teacher evaluation and through linkages to dropout risk. While NCLB fostered a compliance mentality—product specs had to have a check mark next to SBR—the need to assure that funds are not wasted is now leading to a greater interest in research results. Decision-makers are now very interested in whether specific products will be effective, or how well they have been working, in their districts.

Fortunately, the data available for evaluations of all kinds are getting better and easier to access. The U.S. Department of Education has poured hundreds of millions of dollars into state data systems. These investments make data available to states and drive the cleaning and standardizing of data from districts. At the same time, districts continue to invest in data systems and warehouses. While still not a trivial task, getting the data needed to determine whether an investment paid off—in terms of increased student achievement or attendance—has become much easier for school district researchers over the last decade.

The reauthorization of ESEA (i.e., NCLB) is maintaining the pressure to evaluate education products. We are still a long way from the draft reauthorization introduced in Congress becoming law, but the initial indications are quite favorable to the continued production of product effectiveness evidence. The language has changed somewhat: look for the phrase “evidence-based.” Along with the term “scientifically-valid,” this new language is actually more sophisticated and potentially more effective than the old SBR neologism. Bob Slavin, one of the reviewers of the SIIA guidelines, says in his Ed Week blog that “This is not the squishy ‘based on scientifically-based evidence’ of NCLB. This is the real McCoy.” It is notable that the definition of “evidence-based” goes beyond just setting rules for the design of research, such as the SBR focus on the single dimension of “internal validity,” for which randomization gets the top rating. It now also asks how generalizable the research is, or what its “external validity” is; i.e., does it have any relevance for decision-makers?

One of the important goals of the SIIA guidelines for product effectiveness research is to improve the credibility of publisher-sponsored research. It is important that educators see it as more than just “market research” producing biased results. In this era of reduced budgets, schools need to have tangible evidence of the value of products they buy. By following the SIIA’s guidelines, publishers will find it easier to achieve that credibility.

2011-11-12

Empirical's Chief Scientist co-authored a recently released NCEE Reference Report

Together with researchers from Abt Associates, Andrew Jaciw, Chief Scientist of Empirical Education, co-authored a recently released report entitled “Estimating the Impacts of Educational Interventions Using State Tests or Study-Administered Tests.” The full report, released by the National Center for Education Evaluation and Regional Assistance (NCEE), can be found on the Institute of Education Sciences (IES) website. The report examines and identifies factors that could affect the precision of program evaluations when they are based on state assessments instead of study-administered tests. The authors found that using the same test for both the pre- and post-test yielded more precise impact estimates; that using two pre-test covariates, one from each type of test (state assessment and study-administered standardized test), also yielded more precise impact estimates; and that using the simple average of the post-test scores from the two types of tests as the dependent variable yielded more precise impact estimates and smaller sample size requirements than using post-test scores from only one of the two types of tests.
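To make the covariate finding concrete, here is a minimal sketch of an impact model that includes both types of pre-test scores as covariates. This is an illustration only, not the report’s analysis; the data are simulated and all variable names are hypothetical.

```python
# Minimal sketch (not from the NCEE report): comparing the precision of an
# impact estimate when one vs. both pre-test covariates are included.
# All data are simulated; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
ability = rng.normal(size=n)                          # unobserved student ability
treat = rng.integers(0, 2, size=n)                    # random assignment indicator
pre_state = ability + rng.normal(scale=0.7, size=n)   # state test pre-score
pre_study = ability + rng.normal(scale=0.7, size=n)   # study-administered pre-score
post = 0.2 * treat + ability + rng.normal(scale=0.5, size=n)

df = pd.DataFrame(dict(post=post, treat=treat,
                       pre_state=pre_state, pre_study=pre_study))

one_cov = smf.ols("post ~ treat + pre_state", data=df).fit()
two_covs = smf.ols("post ~ treat + pre_state + pre_study", data=df).fit()

# The standard error on `treat` shrinks when both pre-tests are included.
print(one_cov.bse["treat"], two_covs.bse["treat"])
```

Under these assumptions, the standard error of the treatment coefficient is smaller when the second pre-test covariate is added, which is the sense in which precision improves.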

2011-11-02

Expertise Provided for New York Times Front Page Story

Empirical’s CEO, Denis Newman, was one of the experts consulted by New York Times reporter Trip Gabriel for his front-page story in the Sunday Times, “Inflating the Software Report Card.” Newman’s commentary on the first article in this series can be seen here. The article also refers to the guidelines for evaluation research issued by the Software and Information Industry Association (SIIA), which can be found on the SIIA site. In addition, the report referred to in the article—which was authored not by Newman but by a team of company researchers—can be found on our reports and papers page. (Some readers were confused by the misspelling of Newman’s first name as “Dennis.”)

2011-10-11

Join Empirical Education at ALAS, AEA, and NSDC

This year, the Association of Latino Administrators & Superintendents (ALAS) will be holding its 8th annual summit on Hispanic Education in San Francisco. Participants will have the opportunity to attend speaker sessions and roundtable discussions and to network with fellow attendees. Denis Newman, CEO of Empirical Education, together with John Sipe, Senior Vice President and National Sales Manager at Houghton Mifflin Harcourt, and Jeannetta Mitchell, an eighth-grade teacher at Presidio Middle School and a participant in the pilot study, will take part in a 30-minute discussion reviewing the design of a one-year study of Algebra on the iPad and the experiences gathered from it. The session takes place on October 13th in Salon 8 of the Marriott Marquis in San Francisco from 10:30am to 12:00pm.

Also this year, the American Evaluation Association (AEA) will be hosting its 25th annual conference from November 2–5 in Anaheim, CA. Approximately 2,500 evaluation practitioners, academics, and students from around the globe are expected to gather at the conference. This year’s theme revolves around the challenges of values and valuing in evaluation.

We are excited to be part of AEA again this year and would like to invite you to join us at two presentations. First, Denis Newman will be hosting a roundtable session on Returning to the Causal Explanatory Tradition: Lessons for Increasing the External Validity of Results from Randomized Trials. We will examine how the causal explanatory tradition—originating in the writing of Lee Cronbach—can inform the planning, conduct, and analysis of randomized trials to increase the external validity of findings. Find us in the Balboa A/B room on Friday, November 4th from 10:45am to 11:30am.

Second, Valeriy Lazarev and Denis Newman will present a paper entitled, “From Program Effect to Cost Savings: Valuing the Benefits of Educational Innovation Using Vertically Scaled Test Scores And Instructional Expenditure Data.”

Be sure to stop by on Saturday, November 5th from 9:50am to 11:20am in room Avila A.

Furthermore, Jenna Zacamy, Senior Research Manager at Empirical Education, will be presenting on two topics at the National Staff Development Council (NSDC) annual conference, taking place in Anaheim, CA from December 3rd to 7th. Join her on Monday, December 5th from 2:30pm to 4:30pm, when she will discuss the impact of the Alabama Math, Science, and Technology Initiative on student achievement in grades 4 through 8, together with Pamela Finney and Jean Scott from the SERVE Center at UNCG.

On Tuesday, December 6th from 10:00am to 12:00pm, Jenna will discuss prior and current research on the effectiveness of a large-scale high school literacy reform, together with Cathleen Kral from WestEd and William Loyd from Washtenaw Intermediate School District.

2011-10-10

New Reports Show Positive Results for Elementary Reading Program

Two studies of the Treasures reading program from McGraw-Hill are now posted on our reports page. Treasures is a basal reading program for students in grades K–6. The first study was a multi-site study, while the second was conducted in the Osceola school district; both found positive impacts on reading achievement in grades 3–5.

The primary data for the first study were scores from the MAP reading test, supplied with district permission by the Northwest Evaluation Association. The study used a quasi-experimental comparison group design based on 35 Treasures and 48 comparison schools, primarily in the Midwest. The study found that Treasures had a positive impact on overall elementary reading scores, with the strongest effect observed in grade 5.

The second study’s data were provided by the Osceola school district and consist of demographic information, FCAT test scores, and information on student transfers during the year (between schools within the district and from other districts). The dataset for this time series design covered five consecutive school years, from 2005–06 to 2009–10, including two years prior to the introduction of the intervention and three years after. The study also explored moderators and found a stronger positive effect for students with disabilities and English learners than for the rest of the student population. We also found a stronger positive impact on girls than on boys.
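Moderator findings of this kind are commonly estimated with interaction terms between the program indicator and the subgroup indicator. The sketch below illustrates the general technique on simulated data; it is not the Osceola dataset or the model used in the study, and all names are hypothetical.

```python
# Minimal sketch of a moderator analysis using an interaction term.
# Data are simulated; this is not the Osceola dataset or model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
treat = rng.integers(0, 2, size=n)   # program exposure (e.g., post-introduction years)
ell = rng.integers(0, 2, size=n)     # English learner indicator
pre = rng.normal(size=n)             # prior achievement
# Simulate a larger program effect for English learners (0.15 + 0.10).
score = 0.15 * treat + 0.10 * treat * ell + 0.5 * pre + rng.normal(size=n)

df = pd.DataFrame(dict(score=score, treat=treat, ell=ell, pre=pre))
model = smf.ols("score ~ treat * ell + pre", data=df).fit()

# The treat:ell coefficient estimates how much larger the effect is for ELs.
print(model.params[["treat", "treat:ell"]])
```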

Check back for results from follow-up studies, which are currently underway in other states and districts.

2011-09-21

Comment on the NY Times: In Classroom of Future, Stagnant Scores

The New York Times is running a series of front-page articles on “Grading the Digital School.” The first one ran over Labor Day weekend and raised the question of whether there is any evidence that would persuade a school board or community to allocate extra funds for technology. With the demise of the Enhancing Education Through Technology (EETT) program, federal funds dedicated to technology will no longer be flowing into states and districts. Technology will have to be measured against every other discretionary purchase. The resulting internal debates within schools and their communities about the expense versus the value of technology promise to have interesting implications and are worth following closely.

The first article by Matt Richtel revisits a debate that has been going on for decades between those who see technology as the key to “21st Century learning” and those who point to the dearth of evidence that technology makes any measurable difference to learning. It’s time to try to reframe this discussion in terms of what can be measured. And in considering what to measure, and in honor of Labor Day, we raise a question that is often ignored: what role do teachers play in generating the measurable value of technology?

Let’s start with the most common argument in favor of technology, even in the absence of test score gains. The idea is that technology teaches skills “needed in a modern economy,” and these are not measured by the test scores used by state and federal accountability systems. Karen Cator, director of the U.S. Department of Education’s Office of Educational Technology, is quoted as saying (in reference to the lack of improvement in test scores), “…look at all the other things students are doing: learning to use the Internet to research, learning to organize their work, learning to use professional writing tools, learning to collaborate with others.” Presumably, none of these things directly impacts test scores. The problem with this perennial argument is that, if it were true, many other things that schools keep track of should show improvement. If, as a result of technology, students are more excited about learning or more engaged in collaborating, we could look for an improvement in attendance, a decrease in dropouts, or students signing up for more challenging courses.

Information on student behavioral indicators has become easier to obtain with the standardization of state data systems. There are some basic study designs that use comparisons among students within the district, or between those in the district and those elsewhere in the state. This approach uses statistical modeling to identify trends and control for demographic differences, but it is not beyond the capabilities of many school district research departments1 or the resources available to the technology vendors. (Empirical has conducted research for many of the major technology providers, often focusing on results for a single district interested in obtaining evidence to support local decisions.) Using behavioral or other indicators, a district such as the one in the Times article can answer its own questions. Data from the technology systems themselves can be used to identify users and non-users and to confirm the extent of usage and implementation. It is also valuable to examine whether some students (those in most need or those already doing okay) or some teachers (veterans or novices) receive greater benefit from the technology. This information may help the district focus resources where they do the most good.
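As a rough illustration of what such an analysis might look like, the sketch below compares a behavioral indicator (attendance) for users and non-users of a technology while controlling for demographics, with standard errors clustered by school. The file name, column names, and model are hypothetical assumptions, not drawn from the article or from any district’s data.

```python
# Minimal sketch of the kind of model a district research office might run:
# compare attendance for technology users vs. non-users, adjusting for
# demographics. Data and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed file with one row per student: a user flag taken from the
# technology's usage logs, an attendance rate, and basic demographics.
students = pd.read_csv("student_indicators.csv")

model = smf.ols(
    "attendance_rate ~ tech_user + C(grade) + frl + ell + prior_attendance",
    data=students,
).fit(cov_type="cluster", cov_kwds={"groups": students["school_id"]})

# The coefficient on tech_user is the adjusted difference between users and
# non-users; clustering by school accounts for within-school correlation.
print(model.summary())
```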

A final thought about where to look for impacts of technologies comes from a graph of the school district’s budget. While spending on both technology and salaries has declined over the last three years, spending on salaries is still about 25 times as great as spending on technology. Any discussion of where to find an impact of technology must consider labor costs, which are the district’s primary investment. We might ask whether a small investment in technology would allow the district to reduce the number of teachers by, for example, allowing a small increase in the number of students each teacher can productively handle. Alternatively, we might ask whether technology can make a teacher more effective, by whatever measures of effective teaching the district chooses to use, with their current students. We might also ask whether technologies keep young teachers on the job longer or encourage them to take on more challenging assignments.

It may be a mistake to look for a direct impact of technology on test scores (aside from technologies aimed specifically at that goal), but it is also a mistake to assume the impact is, in principle, not measurable. We need a clear picture of how various technologies are expected to work and where we can look for the direct and indirect effects. An important role of technology in the modern economy is providing people with actionable evidence. It would be ironic if education technology was inherently opaque to educational decision makers.

1 Or, we would hope, the New York Times. Sadly, the article provides a graph of trends in math and reading for the district highlighted in the story compared to trends for the state. The graphic is meant to show that the district is doing worse than the state average. But the article never suggests that we should consider the population of the particular district and whether it is doing better or worse than one would expect, controlling for demographics, available resources, and other characteristics.

2011-09-12

New RFP Calls for Building Regional Research Capacity

The US Department of Education (ED) has just released the eagerly anticipated RFP for the next round of the Regional Education Laboratories (RELs). This RFP contains some very interesting departures from how the RELs have been working, which may be of particular interest to state and local educators.

For those unfamiliar with federal government organizations, the RELs are part of the National Center for Education Evaluation and Regional Assistance (NCEE), which is within the Institute of Education Sciences (IES), part of ED. The country is divided into ten regions, each served by a REL—so the RFP announced today is really a call for proposals in ten different competitions. The RELs have been in existence for decades, but their mission has evolved over time. For example, the previous RFP (about six years ago) put a strong emphasis on rigorous research, particularly randomized controlled trials (RCTs), leading the contractors in each of the ten regions to greatly expand their capacity, in part by bringing in subcontractors with the requisite technical skills. (Empirical conducted or assisted with RCTs in four of the ten regions.) The new RFP changes the focus in two essential ways.

First, one of the major tasks is building capacity for research among practitioners. Educators at the state and local levels told ED that they needed more capacity to make use of the longitudinal data systems that ED has invested in through grants to the states. It is one thing to build the data systems. It is another thing to use the data to generate evidence that can inform decisions about policies and programs. Last month at the conference of the Society for Research on Educational Effectiveness, Rebecca Maynard, Commissioner of NCEE, talked about building a “culture of experimentation” among practitioners and building their capacity for simpler experiments that don’t take as long and are not as expensive as those NCEE has typically contracted for. Her point was that the resulting evidence is more likely to be used if the practitioners are “up close and immediate.”

The second idea found in the RFP for the RELs is that each regional lab should work through “alliances” of state and local agencies. These alliances would cross state boundaries (at least within the region) and would provide an important part of the REL’s research agenda. The idea goes beyond having an advisory panel for the REL that requests answers to questions. The alliances are also expected to build their own capacity to answer these questions using rigorous research methods but applying them cost-effectively and opportunistically. The capacity of the alliances should outlive the support provided by the RELs. If your organization is part of an existing alliance and would like to get better at using and conducting research, there are teams being formed to go after the REL contracts that would be happy to hear from you. (If you’re not sure who to call, let us know and we’ll put you in touch with an appropriate team.)

2011-05-11

A Conversation About Building State and Local Research Capacity

John Q. Easton, director of the Institute of Education Sciences (IES), came to New Orleans recently to participate in the annual meeting of the American Educational Research Association. At one of his stops, he was the featured speaker at a meeting of the Directors of Research and Evaluation (DRE), an organization composed of school district research directors. (DRE is affiliated with AERA and was recently incorporated as a 501(c)(3).) John started his remarks by pointing out that for much of his career he was a school district research director and felt great affinity to the group. He introduced the directions that IES was taking, especially how it was approaching working with school systems. He spent most of the hour fielding questions and engaging in discussion with the participants. Several interesting points came out of the conversation about roles for the researchers who work for education agencies.

Historically, most IES research grant programs have been aimed at university or other academic researchers. It is noteworthy that even in a program for “Evaluation of State and Local Education Programs and Policies,” grants have been awarded only to universities and large research firms. There is no expectation that researchers working for the state or local agency would be involved in the research beyond the implementation of the program. The RFP for the next generation of Regional Education Labs (REL) contracts may help to change that. The new RFP expects the RELs to work closely with education agencies to define their research questions and to assist alliances of state and local agencies in developing their own research capacity.

Members of the audience noted that, as district directors of research, they often spend more time reviewing research proposals from students and professors at local colleges who want to conduct research in their schools than actually answering questions initiated by the district. Funded researchers treat the districts as the “human subjects,” paying incentives to participants and sometimes paying for data services. But the districts seldom participate in defining the research topic, conducting the studies, or benefiting directly from the reported findings. The new mission of the RELs to build local capacity will be a major shift.

Some in the audience pointed out reasons to be skeptical that this REL agenda would be possible. How can we build capacity if research and evaluation departments across the country are being cut? In fact, very little is known about the number of state or district practitioners whose capacity for research and evaluation could be built by applying the REL resources. (Perhaps a good first research task for the RELs would be to conduct a national survey to measure the existing capacity.)

John made a good point in reply: IES and the RELs have to work with the district leadership—not just the R&E departments—to make this work. The leadership has to have a more analytic view. They need to see the value of having an R&E department that goes beyond test administration and is able to obtain evidence to support local decisions. By cultivating a research culture in the district, leaders could ensure that evaluation is routinely built into new program implementations from the beginning. The value of the research would be demonstrated in the improvements resulting from informed decisions. Without a district leadership team that values research to find out what works for the district, internal R&E departments will not be seen as an important capacity.

Some in the audience pointed out that, in parallel to building a research culture in districts, it will be necessary to build a practitioner culture among researchers. It would be straightforward for IES to require that research grantees and contractors engage the district R&E staff in the actual work, rather than merely having them review the research plan and sign the FERPA agreement. Practitioners ultimately hold the expertise in how the programs and research can be implemented successfully in the district, and involving them would improve the overall quality and relevance of the research.

2011-04-20

Quasi-experimental Design Used to Build Evidence for Adolescent Reading Intervention

A study of Jamestown Reading Navigator (JRN) from McGraw-Hill (now posted on our reports page), conducted in Miami-Dade County Public Schools, found positive results on the Florida state reading test (FCAT) for high school students in intensive reading classes. JRN is an online application, and its internal record keeping made it possible to identify the treatment group for a comparison design. The district provided the full student, teacher, and roster data for 9th and 10th grade intensive reading classes, while JRN’s computer logs identified the students and teachers who used the application. The quasi-experimental design was strengthened by using schools with both JRN and non-JRN students. Of the 70 schools that had JRN logs, 23 had both JRN and non-JRN intensive reading classes and sufficient data for analysis.
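For readers curious about the mechanics, the sketch below shows one common way to implement a within-school comparison: regressing the outcome on a usage indicator and a prior score while including school fixed effects, so the effect is estimated from differences between JRN and non-JRN students in the same school. This is a generic illustration under assumed data and column names, not the model reported in the study.

```python
# Minimal sketch of a within-school comparison using school fixed effects.
# Illustration only, not the JRN study's model; data and names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed file with one row per student: JRN usage flag (from the computer
# logs), FCAT outcome, prior-year score, and a school identifier.
df = pd.read_csv("intensive_reading_students.csv")

model = smf.ols(
    "fcat_post ~ jrn_user + fcat_pre + C(school_id)",  # C(school_id) = school fixed effects
    data=df,
).fit()

# With school fixed effects, jrn_user is estimated from differences between
# JRN and non-JRN students within the same school.
print(model.params["jrn_user"])
```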

Download the 2010 report here.

2011-04-15