Building on What We Have Learned About Quality in Expanded Learning and Afterschool Programs: Working Toward the Development of a Quality Indicator System

Carol McElvain

Director, Afterschool and Expanded Learning, American Institutes for Research

For almost a decade and a half, my colleagues and I at the American Institutes for Research (AIR)1 (and our predecessor organizations, Learning Point Associates and NCREL) have watched the expanded learning community grow and develop in many positive ways, both in its day-to-day practice and in its knowledge of what works well and of how to measure what works, particularly in relation to the 21st Century Community Learning Centers initiative. The growth in the number, sophistication, and strength of 21st Century Community Learning Centers since 1997 has been remarkable: from 10 schools in 1997 to almost 11,000 afterschool and summer learning programs in schools and community centers in every state in 2012–13. These programs are now broadening and deepening learning for almost 1.7 million students, engaging over a quarter-million parents, and coordinating 40,000 school-community partnerships that provide a variety of important academic supports and enriched learning opportunities through afterschool and summer programs.


During this time period, a number of local and state expanded learning initiatives were also launched, and those already underway experienced dramatic growth. Local efforts sprang up nationwide: on the East Coast, the Providence Afterschool Alliance (PASA) and The After School Corporation (TASC) in New York; in the heartland, After School Matters in Chicago and STRIVE in Cincinnati; and on the West Coast, LA’s BEST in Los Angeles and the Partnership for Children and Youth in the Bay Area of Northern California.


State efforts also were initiated and refined, including the largest, California’s After School Education and Safety (ASES) program, created by Proposition 49.


As with any growth, it has not happened without some initial growing pains and significant opportunities to learn and improve. As a training partner in the original incarnation of the 21st Century Community Learning Centers program, through the development of professional development materials and trainings, including Beyond the Bell: A Toolkit for Creating Effective Afterschool Programs (Kaplan, McElvain, & Walter, 2005), and through other work supporting the program and its operation, my colleagues and I have had the privilege of a close view of the positive changes and growth in program development and measurement. It is worth stepping back for a moment to consider the magnitude of the growth in, and learnings from, the 21st Century Community Learning Centers, with an eye toward encouraging and supporting further developments in the field in the years ahead.



While the ultimate goal of educational support programs like the 21st Century Community Learning Centers initiative is increased student achievement and student success, such growth is not possible in isolation; it depends on critical supporting factors. This is where a high-quality expanded learning program after school and during the summer can play a pivotal role. Focusing in isolation on end-of-year test scores in just a few subjects has sometimes left the key actors who work in or are responsible for programs in a quandary. Reports about year-end test scores and other outcome measures are often received after the program year has ended, yet this information is critically needed while programs are actually operating in order to make key decisions about how programs might best serve students and to guide improvement efforts.



Studies are clear that high-quality afterschool programs structured in a variety of ways bring many positive outcomes for students, including achievement in terms of test scores (Durlak, Mahoney, Bohnert, & Parente, 2010). Furthermore, for almost 10 years, the Profile and Performance Information Collection System (PPICS) has been collecting annual data on all 21st Century Community Learning Centers across the nation, working through their respective state departments of education. More recently, several state education departments (for example, those in Texas and New Jersey) have expanded upon the federal PPICS system to collect and analyze more data on the centers in their states. In these data, teachers report that regular program participants tend to show improved homework completion, class participation, attendance, classroom behavior, English and math classroom grades, and reading and math achievement scores, with students who have higher program attendance showing the greatest improvement (Naftzger, Vinson, Manzeske, & Gibbs, 2011; American Institutes for Research, 2012).


This recent knowledge that high-quality afterschool programs work and make a positive difference is indeed a “game changer.” It means that we should spend much less time arguing about whether quality afterschool programs work and much more time working to ensure that all programs are effective and to make high-quality programs more accessible and scalable.


While empirical research investigating the impact of program quality on youth outcomes is still emerging, it is now generally agreed that, in conjunction with youth characteristics, community context, and youth participation, higher levels of program quality promote many robust outcomes, including


  • active youth engagement,

  • higher attendance in school,

  • better school grades, 

  • positive social behaviors,

  • improved homework completion and class participation, and

  • fewer disciplinary issues that disrupt learning.


These are all building blocks for improved student achievement (Birmingham, Pechman, Russell, & Mielke, 2005; Black, Doolittle, Zhu, Unterman, & Grossman, 2008; Durlak & Weissberg, 2007; Granger, 2008; Lauer, Akiba, Wilkerson, Apthorp, Snow, & Martin-Glenn, 2006; Vandell, Shumow, & Posner, 2005). Further, many of these outcomes can be measured while the afterschool program is operating, so that adjustments can be made in both the school-day program and the afterschool program to improve them.


Our hope is that, in the near future, the field will devote itself and its resources to developing consistent measures of these interim indicators of program quality, helping programs identify their critical levers of change for promoting high-quality programming, both in organizational supports and in direct program-level supports.


Other articles in this compendium focus on what those studies have found; this article focuses on what a robust program quality indicator system might measure and demonstrate to those who would support the expansion of high-quality expanded learning programs after school and in the summer. What we are increasingly trying to accomplish is to provide more real-time indicators and information to the educators and community organizations working in afterschool and summer programs so they can adjust, change, and improve opportunities and programming, as appropriate, while programs are actually operating and not after the fact. Such a system builds on the research, evaluation, and quality assessment work that has developed over the past decade and puts it in a context that is both actionable and measurable, with short- and long-term outcomes. It also creates the opportunity for everyone involved in the delivery of services to see how they play a part in creating positive outcomes.


Based on the Weikart Center’s approach to point-of-service program quality (Smith, Peck, Denault, Blazevski, & Akiva, 2010), we use a frame in which organizational processes (such as those described in Beyond the Bell) are integral to the delivery of services at the point of service.


The critical point underlying a quality indicator system is that the indicators focus primarily on the quality of implementation while the program is operating, as opposed to end-of-year information received after the program year has ended. The idea is to help centers engage with data related to the adoption of quality practices and approaches, identify strengths and weaknesses in these areas, and focus staff reflection on those areas where there are opportunities for growth and further development from a practice standpoint. Based on the research we have seen, we believe that better implementation from a quality perspective will better support the achievement of desired youth outcomes.


It is important to recognize that the development of a quality indicator system is not meant to duplicate or replace existing efforts. We recognize that many states and programs have developed or adopted quality assessment processes that reflect the research on program quality as well as local context. Rather, the quality indicator system we are developing is intended to integrate the multiple efforts already in place toward achieving high-quality programs that appropriately reflect context and best practice. It is meant to emulate the quality improvement practices used in other education and business sectors, with the aim of putting in place best practices that support positive youth outcomes and student success, including achievement.


The benefit of quality indicators is twofold. First, they support the integration of continuous quality improvement practices, data collection efforts, and responsibility for aligning with industry-defined quality standards. Second, quality indicators provide valuable data on program processes and on the point-of-service practices that are believed to promote positive youth outcomes. This information is critical to assessing the relationship between program quality and youth outcomes. A quality indicator system thus supports both formative and summative evaluation and affords the opportunity to use the data gathered in ways that are meaningful for program leaders, staff, and participants.


Quality indicators should meet the following criteria:


  • Represent promising, evidence-based practices that are relevant to the local context and to the goals and principles of the program 

  • Are informed by multiple data sources (e.g., PPICS, surveys)

  • Allow program leaders and staff to make data-driven decisions and provide tools for collaboration and reflection related to organizational processes and program practice 

  • Help program leaders and staff strive toward alignment with local and national systems of program quality (e.g., state- or organization-developed program quality standards) 

  • Help programs move toward practices that ultimately support positive youth outcomes 


A quality indicator system has multiple practical elements, including staff and leadership surveys, aligned resources for building program quality (e.g., planning tools), optional technical assistance components (e.g., technical assistance on using data to drive program development), and the quality indicators themselves, organized into three domains: staff, partnerships, and practices. Under each domain are multiple elements:


Indicator Domain: Staff
  • Staff Recruitment and Retention
  • Staff Professional Development
  • Opportunities for Staff Reflection and Improvement

Indicator Domain: Partnerships
  • School Partnerships
  • Community Partnerships
  • Family Partnerships
  • Youth as Partners

Indicator Domain: Practices
  • Practices That Support Implementation Quality
  • Academic Skill Building Practices
  • Youth Development Practices
  • Family Engagement Practices
  • Quality Improvement Practices

Critical to this understanding is that high-quality programming comprises both program-level interactions and the organization of the program itself. Program-level interactions are the ways direct program staff work with their participants. They include elements such as how staff structure activities, the variety of activities they provide, how staff talk to students, provide leadership, and develop opportunities for them, and how engaged children are in the activities in which they participate. Organizational elements comprise the overarching structure of the program and its management. Program elements include such things as the adoption of a quality framework; evaluation and monitoring; the process for selecting staff; and program partnerships and relationships with families, the schools with which programs work, and other stakeholders in the community. Management context elements include opportunities for staff professional development, ongoing staff supervision, and program monitoring and evaluation.


Programs need a quality framework and a related set of indicators to support high-quality programming within all contexts of program operations. 


Developing a system that provides timely, interpretable, and actionable data about how programs are functioning from a quality perspective guides ongoing quality improvement efforts. It also gives programs the time and support they need to use data to drive quality-related decision making.


The initial goals of a quality indicator system would focus on both short- and longer-term outcomes. Critical to that process is combining self-assessment with other data measures to give programs and staff a better picture of where they are and where they want to head. Implementing a reflective self-assessment process would first raise program awareness of organizational quality indicators and would also provide a baseline understanding of how well a program is implementing them. The self-assessment process is also a strategy likely to engage program staff and management in identifying training and professional development needs.


Longer-term goals of a quality indicator system include the following: 


  • Programs will see, over time, how they can use the self-assessment process and the data developed through ongoing assessment of point-of-service quality to create yearly quality improvement plans. 

  • Programs will receive ongoing support through training and professional development in areas targeted for program improvement, including making data-driven program decisions. 

  • Programs will gain experience and knowledge in using evaluation to inform an ongoing cycle of quality improvement. 

  • Student growth on short-term and long-term goals will be measured to evaluate program impacts.


Conclusion


Over more than a decade, the expanded learning field has learned and accomplished a great deal. It is now generally agreed that higher levels of program quality in afterschool and summer learning programs produce significant positive student outcomes. Because of these learnings and positive developments, building a system of program quality indicators is the next logical developmental step in applying what we, as a field, have learned. These indicators have been identified in conjunction with the many implementation studies and evaluations of effective expanded learning programs, as well as from research and literature spanning multiple fields, including youth development, conditions for effective learning, and effective classroom practices.


Now is the time—and the opportunity is ripe—to use these many learnings to enhance the extensive expanded learning infrastructure for afterschool and summer learning programs that is already in place in just about every state and to strengthen the professional practice of the tens of thousands of individuals who work in them, from schools and from other child- and youth-serving organizations, in just about every community across America.


Footnotes

  1. The author would like to acknowledge the significant contribution of her colleagues at AIR toward the development of this work and their input on this article, particularly Neil Naftzger, Deborah Moroney, Jaime Singer, and Fausto López. Any errors or misstatements are the author’s. We are also grateful for the work of Charles Smith and the David P. Weikart Center for Youth Program Quality at the Forum for Youth Investment.

References


American Institutes for Research. (2012). Texas 21st Century Community Learning Centers: Interim evaluation report. Retrieved from http://www.tea.state.tx.us/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID...

Birmingham, J., Pechman, E. M., Russell, C. A., & Mielke, M. (2005). Shared features of high-performing after-school programs: A follow-up to the TASC evaluation. Washington, DC: Policy Studies Associates.


Black, A. R., Doolittle, F., Zhu, P., Unterman, R., & Grossman, J. B. (2008). The evaluation of enhanced academic instruction in after-school programs: Findings after the first year of implementation (NCEE 2008-4021). Retrieved from http://ies.ed.gov/ncee/pdf/20084021.pdf

David P. Weikart Center for Youth Program Quality. (2011). School-age PQA. Ypsilanti, MI: Forum for Youth Investment.


Durlak, J. A., Mahoney, J. L., Bohnert, A. M., & Parente, M. E. (2010). Developing and improving after-school programs to enhance youth’s personal growth and adjustment: A special issue of AJCP. American Journal of Community Psychology, 45(3–4), 285–293.


Durlak, J. A., & Weissberg, R. P. (2007). The impact of after-school programs that promote personal and social skills. Chicago, IL: Collaborative for Academic, Social, and Emotional Learning (CASEL).


Granger, R. C. (2008). After-school programs and academics: Implications for policy, practice, and research. Social Policy Report, 22(2).


Kaplan, J., McElvain, C., & Walter, K. (2005). Beyond the bell: A toolkit for creating effective afterschool programs (Rev. 3rd ed.). Naperville, IL: Learning Point Associates.


Lauer, P. A., Akiba, M., Wilkerson, S. B., Apthorp, H. S., Snow, D., & Martin-Glenn, M. L. (2006). Out-of-school-time programs: A meta-analysis of effects for at-risk students. Review of Educational Research, 76, 275–313.


Naftzger, N., Vinson, M., Manzeske, D., & Gibbs, C. (2011). New Jersey 21st Century Community Learning Centers (21st CCLC) impact report 2009–10. Naperville, IL: American Institutes for Research.


Smith, C., Peck, S. C., Denault, A.-S., Blazevski, J., & Akiva, T. (2010). Quality at the point of service: Profiles of practice in after-school settings. American Journal of Community Psychology, 45(3–4), 358–369.


Vandell, D. L., Shumow, L., & Posner, J. (2005). After-school programs for low-income children: Differences in program quality. In J. L. Mahoney, R. W. Larson, & J. S. Eccles (Eds.), Organized activities as contexts of development: Extracurricular activities, after-school and community programs (pp. 437–456). Mahwah, NJ: Erlbaum.