A Model for Evidence-Based Innovation in Education

We can’t keep offering the same educational experience to our children and expect different results

David Dockterman
Sep 08, 2016

At a time when the US education system leaves nearly two-thirds of all eighth grade students below proficient in math and reading, approaches that are new and different have great appeal. Districts across the USA have created innovation zones to encourage new ideas and approaches. Charter schools have long promised positive change by releasing school leaders and teachers from the bounds of union rules and legislative handcuffs. But we can’t keep offering the same educational experience to our children and expect different results: we need something new. Innovative, though, isn’t necessarily effective.

An evidence-based approach toward developing new educational practices and programs increases the chances of the innovation being effective. Many elements of this approach are already in use. My hope here is to provide an accessible framework for classroom teachers, administrators, publishers, and educational technology entrepreneurs.

Where Innovation Originates

We are all innovators. Sometimes we innovate to solve a problem. Sometimes it’s to satisfy our desire for novelty. Sometimes it’s about staying ahead of the competition, in business and socially. These innovations can be small and mundane – tinkering with a recipe or creating a technology workaround – or comprehensive and unexpected, such as the sharing economy or driverless cars.

Our drive for innovation pervades education as well. As a teacher I was constantly inventing and tweaking lessons to enliven my classroom or provide a more effective path to learning. Today’s entrepreneurial environment for educational technology, with pitch competitions and incubators, promises financial rewards for all kinds of innovative methods, policies, and products.

Where do those innovative ideas in education come from? While rhetoric for research- and evidence-based approaches remains high, I argue that theory is the real driver of new practices. In the model I present here, research encompasses evidence of all kinds. Research captures results but doesn’t necessarily explain why those results occurred.

That’s the job of theory. We all carry theories of learning in our heads, even if often we don’t explicitly define them. Imagine, for instance, I encounter a young girl struggling to tie her shoelaces. How would I respond? Would I tie them for her? Would I model each step and have her repeat what I did? Would I hand her a URL of a shoelace-tying video to watch overnight and work with her on the steps the next day? My decision likely rests on a theory of action most appropriate for the timing and circumstance. That theory is fed by my own experiences, those shared by others, and maybe – just maybe – some evidence from rigorous scientific research. That collection of evidence broadly constitutes the research directing my theory. My experience with the young girl adds to that evidence base, potentially reinforcing or prompting a revision of my theory of action.

In this dynamic relationship, innovation is fueled by both an openness to a wide range of evidence and by a willingness to test and revise theories. The use of higher-quality research from the learning sciences (and beyond) coupled with a commitment to rigorous testing and revision can help us generate more effective innovations.

From Story to Problem Definition

A critical first step in this innovation process involves making sure we’re solving the right problem. For guidance on this step I recommend invoking what Richard Neustadt and Ernest May call the “Goldberg Rule”. In their book Thinking in Time: The Uses of History for Decision-Makers (1986), Neustadt and May describe how Avram Goldberg, then the CEO of the New England supermarket chain Stop & Shop, responded when one of his managers came to him with an issue. Rather than ask the manager what the problem was, Goldberg would encourage the manager to tell the story of what was happening, so Goldberg could figure out what the real problem was.

Different people can look at the same situation and see different problems. The story contains the background information, the nuance, and the context. Knowing the story, Neustadt and May argued, helps reveal motive and normalcy, what’s driving the actions we want to control. Then we can discern the true problem. Doctors similarly collect a patient’s story – medical, personal, and historical – to uncover what might be behind a set of symptoms. The back-story matters.

We can see this same pattern in education. Data, for instance, may show that a school has a high dropout rate, but they don’t indicate why. Unraveling the story, though, might reveal that many older, low-performing students stop coming to school because they’re embarrassed that they can’t read. Keep digging, and other possible causes could emerge. Sometimes the problem is clear, but often it’s muddy. Tackling a high dropout rate by adding more vocational and theoretically high-interest courses, for example, might help encourage some students, but it misses the underlying problem of student reluctance to put their self-esteem at risk because of low reading ability. Taking in the full context, embracing the data and the surrounding narrative can guide problem definition.

So we need to start by telling the story of the situation we want to change. It should be a rich story, encompassing the students, the teachers, and the institutional and community cultures. In the world of human-centered design, innovators would go and live with the community they want to impact. They would observe the routines of its members and work to ascertain the motives and pressures driving their decisions and actions.

Then we need to share our story with others. Outsiders offer additional perspectives, bringing their own experience-based theories. As particular elements seem to gain importance – like adolescent motivation, the use of norms, best practices for teaching reading to older students, the need to feel valued, and so on – we can delve into them with research from the learning and behavioral sciences. The story helps us define the problem, and we need to be ready to make revisions, because problem definition, like innovation, is a dynamic process itself.

From Theory of Learning to Theory of Action

Problem definition opens avenues for potential treatment. Sometimes the science identifies the appropriate intervention for the problem, but learning is complex. Maybe the research hasn’t reached the problem we’ve identified yet. Maybe the complicated mix of issues we’re tackling requires a blend of interventions. Maybe we got the problem definition wrong. Maybe what worked in one setting, potentially a clinical one, doesn’t translate readily to the wilds of, say, an urban classroom.

Let’s go back to our high school dropouts. Filling out the story of these students’ lives has revealed, for many of them, an embarrassment about their inability to read. They avoid school to avoid humiliation. This definition of the problem generates the need to create a safe place for these students to grow their literacy skills. Or maybe we should instead focus on making the potential value of learning to read worth more than the risk of looking stupid. What’s the right way to think about motivation here? And is it the same for all students?

And what about the reading instruction itself? Why can’t these older students read? Are they new to English? Have they just had poor instruction? Can the students decode but not comprehend? Answering each question calls for further digging into the situation and into the learning sciences. What else can we find in the story of these students, and what does the research suggest, if anything, will be effective? Research on teaching decoding to young children has a much deeper well to draw from than similar research for older non-readers. Non-native English speakers have been picking up the language for generations, but some of those learners were already literate in another language. In addition, those learners often belonged to a supportive community that shared this educational need. Potential humiliation among peers might not have been a significant inhibiting factor. Or was it?

Research and theory should constantly be informing each other. A theory about the specific needs of these students should drive the search for data to confirm it. Research, whether anecdotal (e.g. what happened in a neighboring school district) or culled from the learning sciences, can prompt a theory for a potential intervention. How can we decide what innovation to invent and try?

Here it helps to be explicit about our theories of learning and our theories of action. What do we believe about what these students need, and what should be done to address those needs? When matching theories with the research supporting them, sometimes the evidence will be strong and clear. But often we must take leaps of faith – they are part of the innovation process. We just can’t depend on those leaps being correct, or at least on them working on the first try.

To bolster our confidence and instincts about those leaps, we can broaden our search through the existing research. If we can’t find much about, for instance, humiliation avoidance among adolescent non-readers, we might consider the research about embarrassment and performance anxiety in general. Can we learn something from neuroscientists who purposefully make subjects anxious in order to study the effects of stress? Studies on motivation and mindset might also add to our thinking. Our willingness to look beyond the obvious research can help guide our innovative leaps, providing at least a small evidence-based rationale for those less clear parts of our theory of action. In the end, though, we’ll only know if our innovation works if we try it.

A Mindset for Innovation

For every leap of faith in our theory of action, we should anticipate initial failure. The expectation of failure in the attempt to solve complex problems now seems almost commonplace, from economic policy to education, at least within the academic community. Even if we’ve managed to identify the right problem(s) and construct an innovation based on high-quality evidence, we still have the issue of implementation. Can we get people to embrace and apply the innovation appropriately in the target settings? The human factor is an enormous variable.

Tim Harford’s book Adapt: Why Success Always Starts with Failure looks across a range of innovations; he urges us to build the expectation of failure into our planning. We need to keep the stakes and investment low early on so that the anticipated bumps in the road aren’t catastrophic. We need alternatives, different variations of our plans or different plans altogether, so that we have ways to respond when our innovation falters. And we need to establish mechanisms to gather constructive feedback. Those early mistakes are learning opportunities; they feed the research pool and help us revise our theories and improve our innovation and its implementation.

Fortunately, the emerging field of improvement science has migrated from medicine to education. Frameworks like improvement science and design-based implementation research provide structures for monitoring the implementation of innovations to constantly learn from and improve our efforts. They reflect a broader shift to more agile methods for creating products across industries. Rather than making a plan, fully building it out, and then hoping that it works, the agile approach focuses on quickly building and testing incremental elements of the plan. Don’t wait to find out your innovation has fatal flaws until it’s too late to make changes. Fail early and often to learn and improve, but do it systematically so that the lessons captured are valid.

This approach to innovation rests on a willingness to learn from mistakes. Without the right mindset, the learning can easily fall on deaf ears. The more we invest in an idea, the more we want to defend and protect it. The goal of having our innovation succeed can take precedence over our goal of solving the problem. Gathering feedback is useless if we don’t listen to it.

Innovation will always have a high risk of failure, but we can increase our chance of effectiveness with thoughtful and systematic processes. Rich stories of the situation we’re working to change can help us identify the appropriate mix of problems we need to solve. Being explicit about the evidence underlying our theories about the problem, and potential solutions, can focus our efforts on the riskiest elements of our proposed innovation. An iterative implementation plan with clear feedback loops can provide valid evidence to further guide and revise our theories of action, edging toward an effective solution. And a non-defensive mindset that seeks feedback (as opposed to confirmation) can best turn the inevitable early stumbles into improvement. Yes, we need to do things differently, but we must also do things effectively.

References

Ash K. School Districts Embrace Second Generation of ‘Innovation Zones’. Education Week 33, 21 (2014).

Bryk AS, Gomez LM, Grunow A, LeMahieu P. Learning to Improve: How America's Schools Can Get Better at Getting Better. Harvard Education Press: Cambridge, MA, USA, 2015.

Fishman BJ, Penuel WR, Allen AR, Cheng BH, Sabelli N. Design-Based Implementation Research: An Emerging Model for Transforming the Relationship of Research and Practice. In: Fishman BJ, Penuel WR (eds.), National Society for the Study of Education, 112, 2. Teachers College Press: New York, NY, USA, 2013: 136-156.

Harford T. Adapt: Why Success Always Starts with Failure. Little, Brown: London, UK, 2011.

Improvement Science Research Network (ISRN). Retrieved from http://www.improvementscienceresearch.net (2016).

Neustadt R, May E. Thinking in Time: The Uses of History for Decision Makers. The Free Press: New York, NY, USA, 1986.

U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), various years, 1990–2015 Mathematics and Reading Assessments.

David Dockterman

Lecturer, Harvard Graduate School of Education

I operate at the intersection of research and practice. In 1982 I helped found Tom Snyder Productions, an early pioneer in educational technology, while getting my doctorate at the Harvard Graduate School of Education. Over the last 30 plus years, I have continued to balance lives in the academic and publishing worlds, serving now as Chief Architect, Learning Sciences for Houghton Mifflin Harcourt and Lecturer at the Harvard Graduate School of Education. In both roles, I support the development of research-driven innovative practices to tackle challenging educational problems. At Tom Snyder, and later at Scholastic and HMH, I designed dozens of award-winning computer programs including: Decisions, Decisions; Science Court (which was also a highly-acclaimed TV show on ABC Saturday morning); and The Great Ocean Rescue. Most recently, I served as a key advisor for the development of MATH180 and READ 180 Universal. My long-time Harvard course, Innovation by Design, fosters an iterative, agile approach to innovation across a range of educational issues. My new course on Adaptive Learning delves into new ways that technology can support personalized learning in different contexts and for individuals and groups. Productive failure and a growth mindset fuel innovation and learning, and I have become adept at infusing the underlying research from behavioral psychology and cognitive science to foster those dispositions among students, teachers, and institutional leaders. I am a Fellow of the International Society for Design and Development in Education, an Editorial Board Member for the journal npj Science of Learning, and a Senior Fellow for the International Center for Leadership in Education.
