It is important to push back on part of the current narrative: the claim that American education is slow to change and is not improving. In reality, schools have changed enormously over the past few decades and have made some real improvements. It is also true that our society has continued to ratchet up its expectations for schools, and schools have not been able to keep pace with those growing demands. An increasingly sophisticated and competitive global economy means the skills students need are greater than ever. School also remains the primary way we attempt to address issues of inequality. The result is higher standards, for more students, often coupled with fewer resources.
In response to continually rising demands, education has become quick to change. New change ideas spread like wildfire across states, districts, and schools. Unfortunately, while schools have proven capable of quick change, as a field we are still relatively slow to improve. This habit of fast change and slow improvement shows up systemically as well as individually. Expectations for education will continue to be high, and our own aspirations are as well, so we need to learn to improve faster.
A System-Level Pattern
School leaders often bear the brunt of the pressure for schools to improve, and they know that improvement requires change. There is also no shortage of promising change ideas for them to consider. As Tony Bryk and colleagues note in Learning to Improve: How America's Schools Can Get Better at Getting Better, “Believing in the power of some new reform proposal and propelled by a sense of urgency, educational leaders often plunge headlong into large-scale implementation of new ideas.” An idea is identified, training is provided, and teachers are expected to implement. They go on to note that “invariably, outcomes fall far short of expectations, enthusiasm wanes, attention wanders, and the school moves on to the next idea without ever really understanding why the last one failed. Such is the pattern in schools: implement fast, learn slow, and burn good will as you go.” Seasoned teachers will find that pattern all too familiar.
A parallel pattern occurs at the level of the individual classroom. A key insight from improvement science is that variation in outcome is the natural state of complex systems. Classrooms are complex social systems. Variation in outcome means that when we talk about “best practices,” we should assume that, when first used, they work at best for many students much of the time, not for all students all of the time.
For example, let’s say I try a reading comprehension strategy with my students prior to taking on a challenging new text in science. If it is a good strategy, we can assume it will work “pretty well” for many students, “really well” for a few, and “not very well” for a few. Improvement science calls this pattern “predictable variation,” and it corresponds nicely to our own experience.
We have fallen into a problematic response to this natural variation. When faced with strategies that work for some students and not for others, we multiply the number of strategies we use and hope that in the mix we have a little something for everyone.
Rather than develop the know-how to make something work for more students more of the time, we often opt to do something else in addition. How often have we all picked up a new idea with the satisfaction of having “another tool in our toolboxes”? That is not a bad thing on its face, but we need to acknowledge the underlying logic of the response. Multiplying our instructional strategies is useful, but using them all to address variation is also a time- and resource-intensive way to improve. Teachers and schools continually feel pressed to do more. Is part of that due to the way we respond to variation in outcome?
A Better Way
Whether tackling change at an individual or organizational level, we need to recognize that asking “What works?” is, in most cases, an overly simplistic framing of what it takes to make a meaningful improvement. We need to start with the assumption that the real question is “What works, for whom, and under what circumstances?” and then use a method that helps us design and develop refinements to our strategies so they work for more people in more contexts. That approach channels our energy away from adding something for everyone and toward knowing what really works best in our classrooms and learning what it takes to make that work for more students. It reduces the negative variation common to complex systems. Reducing negative variation is the fundamental problem of improvement.
The Plan-Do-Study-Act (PDSA) cycle, a basic method of inquiry in improvement research, can enable this kind of rapid learning. PDSA pushes educators to start with small, rapid tests of a change idea and then expand its use as they develop the know-how to make it work. A general set of principles guides this approach:
- Whenever possible, learn quickly and cheaply.
- Since some changes will fail, keep initial tests small to limit negative consequences for those involved in early cycles.
- Develop empirical evidence at every step to guide subsequent improvement cycles.
These rapid cycles are designed to target the “slow to improve” challenge because learning is built into each step of the way. Initially, we learn what the new strategy is like and whether it has promise in our context. Assuming it does, we learn what it looks like in a small variety of contexts. Again, assuming that learning moves us forward, we learn what additional scaffolds and supports are necessary to have the same strong results across diverse contexts. When we learn that, we have learned how to reduce unwanted variation in outcome and gotten to real improvement.
Let’s consider that same reading strategy with the goal of improving the reading of complex texts across an entire school. Our typical response would be to make everyone aware of students’ struggles with complex texts and why that matters, deliver whole-staff PD on some new reading strategies, ask teachers to go back and use them, and then expect improvement. In many “data driven” districts, we might even see some pre-post testing brought in as part of the process. What is missing, however, is an approach that creates real learning, actual on-the-ground know-how, among the staff. And so we can predict that early results will fall short of expectations, energy will fade, and the school will move on, having used considerable time and energy in the process.
What if we approached that reading improvement goal with the intent to learn fast? We would likely start by trying to learn fast in a very small context, perhaps one classroom with one teacher. The goal of the early test would be to determine whether a particular strategy has promise and, if so, to learn what it takes to make it work. We might try it with an eager language arts teacher in one of their classes and press them to collect evidence and revise until the strategy works for most or all of their students.
Assuming it has promise and we learn how to make it work there, we apply that new “how-to” across a more diverse set of classrooms, maybe two or three more, to see if it translates and what else it might take to work in a broader context. The early ELA tester might show three interested colleagues outside of ELA what they learned about the strategy for them to test out. Again, we assume those teachers would test and refine the approach to learn if, and how, to make it work in their setting.
Having established strong evidence of promise and having developed expansive local know-how, we expand our testing to our most challenging contexts. Since the earlier tests have built our confidence in the strategy, this test is more about learning what it would take to use the strategy where it is least likely to work easily. This might involve additional scaffolds, supports, or modifications. With the reading strategy example, we could imagine a school where one of the math facilitators is not confident in their ability to support a strategy like that and is perhaps uncertain how it would work with the limited disciplinary reading they do. Freeing up another teacher to co-plan and facilitate the strategy in their class might be a scaffold to address that challenge.
By the end of the last round of testing, we are ready to come back to the whole staff with a plan. People are likely aware of the work because of the expanding set of tests going on, but now we bring the entire group in on a school-wide plan that is energized by a set of teacher-testers who have seen tangible improvements and grounded in ideas that have proven locally successful.
The organization benefits from PDSA cycles as they:
- Build, collect, and help synthesize the practical knowledge needed to implement a change idea.
- Develop early, local capacity among front-line improvement testers.
- Create front-line advocates for the change idea as they see it begin to work in expanding contexts.
At a classroom level, teachers can benefit from PDSA cycles’ ability to:
- Shift efforts from “doing more” to “doing better.”
- Take back data as a way to build momentum and motivation. Think about how much easier it is to stick to a diet when the scale shows a drop in weight. How great if we could more regularly see that kind of near-term data in our own work?
- Begin to help teachers understand their impact. It is not insignificant that John Hattie’s meta-analyses of educational initiatives show that teachers’ shared belief in their ability to affect student outcomes, what he terms “Collective Teacher Efficacy,” has one of the largest effect sizes.
- Create a space to fail safely. Much has been made of the importance of failing to learn, but failure for teachers is tricky when you are dealing with real kids. PDSA style tests help keep those early failures minimal in impact and aimed at generating useful knowledge.
Without question, using a formal method for improvement takes time and effort. As simple as many of the ideas in improvement science can seem, their use runs counter to many of our personal and organizational habits and structures. Still, we live in a time when we all want more for each student who passes through our doors. Getting more can’t merely be the result of doing more. We need methods that let us know what is working and for whom, and that focus our changes on making things work for more individuals in more contexts. Recognizing our own habits around change and working to shift them toward processes that lead to improvement is an exciting way we can all begin to work differently.
This blog originally appeared on P21.org.