This is the last of a series of five posts about the role of evidence in teachers’ professional decision making. In part 1, Charlotte worried that an emphasis on evidence-based practice would lead to prescribed practice, which would narrow teachers’ opportunities and options for making their own decisions about practice. In part 2, she discussed the role of a teacher, and the purpose of education. In part 3, she discussed the role of cultural and instrumental research, and suggested that education research holds a unique role for informing education practice. In part 4, she took a closer look at what evidence is, what forms of data are collected, and some of the limitations of evidence.
Evidence is one of several sources of information that can inform teachers’ decisions
Whether it comes from formal academic research, assessment programs, school programs, or class assessment, evidence can be interesting, informative, and useful. It can reveal possible relationships between practice and student learning.
Evidence of “effectiveness” is valuable, and in some cases necessary, but it is very rarely sufficient to justify (or contest) decisions about educational practice. It is one of several sources of information from which a professional educator can reason to justify (or contest) decisions about their practice. Other sources of information, as Gary Jones touches on, include:
- Professional knowledge (theories of practice, pedagogy, technology, education, and child and adolescent development, etc);
- Content knowledge (theories and frameworks of knowledge, understanding, skills, and capabilities related to the content that is to be learned by students);
- Contextual knowledge (knowledge of students, their parents, and other stakeholders; knowledge of context, including physical, social, and cultural environment, policies, laws, and local practices); and
- Experience (learned habits and practices developed through rehearsal and reflection; see Berliner, 2004).
Evidence can inform decisions about classroom practice, but the evidence must be thoughtfully and tentatively applied. My friend and mentor David Geelan suggests:
“Good evidence is always valuable. The problem arises when we surge on ahead of the available evidence and get dogmatic on the basis of poor evidence filled in with guesses and assumptions.”
Simply picking up Hattie’s Visible Learning (for example) and taking its calculated effect sizes at face value is not recommended. Using that evidence appropriately requires an understanding of the purpose, context, specifications, and limitations of the meta-analysis. What meta-analyses like Hattie’s Visible Learning can suggest to us is which practices we can have high confidence in, and which we may need to be more tentative about. Though the presentation of effect sizes suggests approximately how likely a practice is to raise the academic achievement of students as a group (not of any individual student), Hattie’s analyses have come under fire for using inappropriate calculations. It would be inappropriate to read any more than that into the results of his analysis, yet politicians, policymakers, principals, and teachers themselves, with varying degrees of skill at evaluating evidence, commonly hold up Hattie’s results as justification for implementing, banning, or changing practices and policies. The phrase “evidence-based” is meaningless if it is not engaged with critically.
In order to critically evaluate evidence, to know whether it is sound, comprehensive, and substantive, one must know and understand why and how the evidence was collected and analysed, and how it is represented, interpreted, and communicated. An understanding of the theories, frameworks, and backing that underpin the practices being researched is also needed. Otherwise, the claim that a practice is “evidence-based” can be made to support the decision to use almost any practice in the classroom (including learning styles!).
Teachers should be encouraged to use research to guide them in making decisions about their practice
On the surface, it might seem that the requirement to use programs and practices that are promoted as “evidence-based” would save teachers time: fewer decisions to make, less time to spend with research (either formal or informal). A little of this might be acceptable; most of us are happy to accept that a curriculum is necessary to guide content selection, and that some sort of representation of achievement is necessary at the end of the compulsory years of schooling, particularly for those students who wish to enter tertiary education. A lot of prescription, whether or not such approaches are “evidence-based,” is not. A tightly prescriptive curriculum, mandated programs, and compulsory assessment bind teachers and their students to an inflexible structure. This reduces their capacity to respond to the various needs of their students along the way to attaining goals.
Evidence is a useful tool, but using it appropriately for making decisions requires an understanding of what constitutes useful evidence, of appropriate methodologies for data collection, of the contextual knowledge that will help a teacher judge whether the research they’re examining is relevant, and, most importantly, of whether it’s actually going to be useful for a particular student on any one day. When the term “evidence-based” is thrown around as justification for prescribing a particular program, practice, or pedagogy, the nuances of the population that was sampled, the contexts for and of the research, the method and methodology for data collection, and the analysis of the results are lost.
The research literacy required for evaluating claims and supporting evidence and theory is one of the goals of pre-service teacher education. Research literacy involves an understanding of educational theories and philosophies and how and why they arose, how they are consistent and inconsistent and how they might be critically dissected, how evidence is derived and how it might be properly analysed, and how all these ideas are communicated. This is necessary preparation for graduating teachers to be able to justify professional decisions they will make in the classroom.
An ideal situation is one in which teachers can make and use appropriate interpretations of evidence from accessible research, where their interpretations are consistent with our best theories of learning and teaching (mechanisms and theoretical frameworks), to make appropriate decisions about practices that will achieve desirable outcomes for their diverse groups of students in the contexts in which they are teaching. Professional development workshops and conferences, teacher networks, associations, and magazines might all be avenues for teachers to access and evaluate research and evidence.
Let’s also give teachers the time, space, and access they need to evaluate research critically and purposefully, to inform their decisions. Let’s have some research-informed decision-making, rather than “evidence-based” practice.
What do you think?
Berliner, D.C. (1994). Expertise: The wonders of exemplary performance. In J.N. Mangieri & C.C. Block (Eds.), Creating powerful thinking in teachers and students (pp. 141-186). New York: Holt, Rinehart & Winston.
Berliner, D. C. (2004). Describing the behavior and documenting the accomplishments of expert teachers. Bulletin of Science, Technology & Society, 24, 200-212.
Biesta, G. (2007). Why “what works” won’t work: Evidence‐based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1-22.
Biesta, G. J. (2010). Why ‘what works’ still won’t work: From evidence-based education to value-based education. Studies in Philosophy and Education, 29(5), 491-503.
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.
Merton, R. K. (1973). The sociology of science: Theoretical and empirical investigations. University of Chicago Press.
Sanderson, I. (2003). Is it ‘what works’ that matters? Evaluation and evidence-based policy-making. Research Papers in Education, 18(4), 331-345.
Shavelson, R. J., & Towne, L. (2002). Scientific research in education. Washington, DC: National Academy Press.