November 8, 2010
Alliance Evaluators and the American Evaluation Association
The Alliance was well represented at this year's Annual Conference of the American Evaluation Association (AEA). At the conference, held November 10–13 in San Antonio, Texas, over 2,500 evaluators and practitioners from education, health, labor, social services, and other fields shared their work and ideas around the theme of Evaluating with Validity. Brief descriptions of the Alliance papers and presentations accepted by AEA are provided below.
- Evaluating Literacy Curricula for Adolescents: Results From Three Years of Striving Readers
Author: Kim Sprague
The Education Alliance at Brown University is conducting a randomized controlled trial (RCT) evaluating the effectiveness of two adolescent literacy interventions on the reading achievement of low-performing students, as implemented by two school districts in western Massachusetts. Both teachers and students in five participating high schools were randomly assigned to one of three conditions: READ180, Xtreme Reading, or the control condition (business as usual). Year 3 results indicated positive effects on reading achievement for one of the two targeted interventions as compared to the control group. Despite challenges in specifying and monitoring implementation, patterns emerged in the analysis of implementation levels within the assigned treatment group and the Treatment on the Treated (TT) group. Impact findings are presented in the context of the implementation study results. The final five-year study results for all eight grantees will provide administrators and educators with the information necessary to make informed choices about which program to select and how best to implement it within their schools.
- Adaptations to Evaluation Design: Two Examples of Ensuring Quality in Practice
Authors: Stephanie Feger and Elise Laorenza
External evaluators often establish quality initially through the development of an appropriate design, the selection of instruments, and planning for data collection and analysis. However, evaluation plans often require adaptations based on program modifications. Evaluation adaptations are frequently needed to clarify key program components discovered over the course of implementation and to determine benchmarks of program impact. Evaluation adaptations can improve evaluation relevance, an indicator of overall evaluation quality, and provide new tools and/or data to identify program activities that can more effectively contribute to program goals and outcomes. Through discussion of two evaluation studies of statewide science programs, this roundtable will explore (1) the development of a student reflection instrument as an evaluation adaptation in the context of program benchmarks, (2) the process for aligning evaluation adaptations with original methods and the integration of results, and (3) the utilization of evaluation adaptations to support program goals and improve program impact.
- Use of Implementation Rubrics as Indicators of Evaluation Quality
Authors: Elise Laorenza, Joye Whitney, and Stephanie Feger
This presentation will share experiences (1) developing two qualitative evaluation rubrics measuring program implementation levels and (2) ensuring the evaluation rubrics are useful for program planning and policy decision-making. The roundtable discussion will raise questions of how evaluators determine the quality of implementation evaluations when the focus is on qualitative data. We propose to use our experience with implementation rubrics to frame the dialogue around two key indicators of quality: evaluation use and mixed methods. A completed evaluation study of a summer learning program provides an opportunity to reflect on how program stakeholders and evaluators identify factors of quality. While evidence suggests that usefulness was attained, focusing stakeholders on program descriptions rather than quantitative data was a challenge. The roundtable will address the dilemma of when and how to assess the quality of qualitative evaluation designs, as well as strategies developed to enhance the use of evaluation tools.
- Assessing Program Implementation in Multi-Site Evaluations: The Development, Alignment, and Incorporation of Evidence-Based Rubrics in Rigorous Evaluation Design
Authors: Amy Burns and Tara Smith
This presentation shares a multi-step approach to program implementation assessment developed by the Education Alliance at Brown University. Evaluators provide examples from rigorous evaluations of four districts that received federal Magnet Schools Assistance Program funding to describe: the development of implementation rubrics that align with districts' logic models; data sources for evidence-based measures used to identify compliance with logic models; rubric scoring processes; and the incorporation of rubric data into multivariate statistical models. The presenters seek to promote discussion on ways to address challenges in "quantifying" implementation data. [Note: While this paper was accepted for conference presentation, the authors are unable to attend as scheduled. Contact the authors directly for information on this paper.]
- Alliance STEM Evaluations
In the summer of 2010, Alliance evaluators completed three multi-year evaluation studies of the implementation and impact of STEM programs funded through the National Institutes of Health and the National Science Foundation.
The Alliance completed the first-year evaluation study of a state-level Mathematics and Science Program. Partners in the program's implementation remarked on the evaluation design's success in capturing the nuances of a very complex project.
The Alliance continues to partner with the Rhode Island Technology Enhanced Science (RITES) program and in July completed a year-two report, which has supported the program in building partnerships across the state's K–12 community.