Research Besmirched—When Practitioners Just Don’t Believe! Why Research Is the Best Source of Information, Even When It Has Limitations and Flaws!


Introduction: In the learning field, research insights can help practitioners (trainers, teachers, instructional designers, elearning developers) build more effective learning interventions. Unfortunately, some practitioners look at the flaws and limitations in the research and reject research entirely. This article, by noted research translator Will Thalheimer, PhD, provides insights into balancing research limitations and benefits, drawing its examples from the workplace learning field.

WHEN PRACTITIONERS ARE SKEPTICAL

I work in the workplace learning field. My job, as I see it, is to take the scientific research on learning, memory, and instruction and convey that research to help practitioners build more effective learning interventions. Most of the time my contributions are welcome—sometimes commended or even celebrated. But maybe my audiences are just being nice. Maybe those with doubts or skepticism keep quiet.

I recently gave a talk where someone in the audience raised her hand and said that she thought all educational research was biased. After momentarily being stunned by the sweeping arc of her attack, I parried with a rather meandering defense of the scientific endeavor. I could have done better.

Just a few weeks ago, I was involved in a private social-media discussion with a whole host of workplace-learning-industry thought leaders. During this conversation, a few folks raised objections to the use of research in making learning-design decisions.

Let me admit that, as a person who has devoted the last 18 years of my life to research-to-practice, my blood boiled just a bit. More specifically, I was alarmed that some of my fellow thought leaders would dismiss research so easily. Still, I endeavored to listen, and here’s what I heard:

Some Workplace Learning Practitioners’ Critiques of the Research on Learning

  1. Much of the research has been done on students rather than in the workplace.

  2. Learners are asked to learn facts and knowledge and simply regurgitate them.

  3. Tests of learning are given soon after learning, not after more realistic delays.

  4. The learning contexts used are not realistic—some are even laboratory-like.

While I agree with these critiques to some extent, I think they miss the big picture. Indeed, I think they miss practicality. I will get to this point later, but first, some reflections.

PROBLEMS WITH THE RESEARCH

Studies Done in Laboratory-like Conditions

Yes, many studies are done in the lab. That's a good thing and a bad thing. It's good because it enables researchers to control the factors that are under study. It is bad in that the results MAY OR MAY NOT generalize to real-world situations. It takes a wise research translator to draw out the correct inferences—or at least reasonable inferences.

Without adequate experimental controls, we can’t tell what caused what. For example, if you provide learners with repetitions of realistic scenario-based decisions and you space these over time, you’ll never know what caused the learning benefits. It could have been the realistic context, or the retrieval practice, or the effect of spacing—or some combination. Untangling the factors is critical. Suppose a trainer decides to share realistic scenarios with learners and has them discuss the scenarios—utilizing the context factor but not the retrieval practice or spacing factors. What’s likely to happen? Without research or carefully-observed experience, we wouldn’t know whether context had benefits on its own. To reiterate, this is why controlled conditions, even lab-like conditions, are beneficial.

Of course, we may find that a factor works in the lab, but not in the real world. I once critiqued the seductive-details research for this very reason. In controlled learning events, researchers found that people learned less when the learning materials included interesting yet mostly-irrelevant information. These “seductive details” tended to distract learners from the main points. What the researchers failed to examine, however, was whether such seductive details might actually be valuable for motivating attention in longer, real-world learning applications—for example, when learners have to sit through a 90-minute lecture or read a two-hour textbook chapter. The average learning event in the research was about four minutes long! You read that correctly. In the research, the average learning event took only four minutes! Under those conditions, motivation and attention wouldn’t even come into play.

To be fair, at the time of my critique, the researchers had not actually tested the seductive-details effect over longer time intervals. Unfortunately, a quick scan of more recent research shows similarly unrealistically-short learning events.

The point is that sometimes research is so unrealistic as to be practically meaningless. Indeed, a conclusion drawn from an unrealistic experiment can send the wrong message to practitioners!

Fortunately, most current research designs are not so obviously lacking in realism.

Studies Done with Very Short Learning Events

Yes, many studies are done with relatively short time frames. This is BAD when learners are tested immediately after learning something. When we assess learning then, we are measuring comprehension, but we’re failing to measure whether the learning intervention will support realistic long-term memory. Researchers who test learning immediately after the learning event do so either because they’re interested only in comprehension or because it’s cheaper and easier to test learners in the same session in which they learned. Lazy? Yes! Harmful to practical insights? Yes!

When research time frames range from a couple of days to a week or so, the research results are much more robust. Realistic forgetting comes into play even after a day or so, making the learning results much more realistic. Of course, utilizing months-long time delays in research designs may be more persuasive to learning practitioners, whose learners often must remember what they’ve learned for longer than a few weeks. Moreover, testing learners at longer delays may reveal that some learning methods are not worth the investment, because the differences between treatments and controls turn out to be insubstantial.

Fortunately, many current research studies utilize retention intervals of a few days or a week or more.

Studies Done Using Unrealistic Learning Content

In the mid-1900s, researchers regularly relied on learning materials that were less than authentic. Nonsense syllables and paired-associate learning were all the rage. Researchers at that time believed that by using oversimplified learning materials they could rule out extraneous influences and focus on underlying cognitive mechanisms. Unfortunately, research on unrealistically simple materials may not relate to real-world learning situations. Certainly, practitioners are right to be skeptical.

Fortunately, most current research utilizes more meaningful learning materials.

Studies Done Using Students

Yes, much of the learning research is done on college students. For workplace learning professionals, this is both good and bad. It’s good because college students’ brains and learning structures are, for the most part, fully formed. In most ways, their learning will be similar to that of the middle-aged learners whom most workplace-learning professionals will train. Research on college students, while more relevant for workplace learning than research on younger students, still is not a perfect analog for workplace learning. Recent research has found that brains may still be developing even into a person’s twenties. Similarly, a college student’s motivation to learn may be vastly different from a middle-aged worker’s, so the type of learning methods required may differ as a result. At a minimum, workplace learning professionals may have sufficient reason to be skeptical unless research findings have also been replicated with people 25 or older.

IMPROVED RESEARCH

Over the last ten years or so, researchers have begun using much more realistic designs in their experiments. They are using more realistic materials and more realistic time frames, and they are testing for transfer as well—not transfer as in transfer to the job, but transfer as in transfer to non-studied tasks.


BEING PRACTICAL—NOT THROWING OUT THE BABY

But all of this is sand dust compared to the big stone here. Research may be imperfect, but its worth should be compared to that of the other sources that may guide us in our work. Let’s examine the other methods we use to get guidance for learning design and see how they compare.

Feedback is the lifeblood of improvement. But what feedback do we get in the workplace learning field that gives us good information about how successful our learning interventions have been?

The most popular feedback method in the training field is the smile sheet (learner-feedback form), and yet smile sheets are virtually uncorrelated with tests of learning or on-the-job behavior. As research has shown, learners don’t always know their own learning, so they are more likely to give us faulty feedback than valid feedback. So, traditional smile sheets are not giving us good feedback. I published a book this year offering a new methodology for learner-feedback design, which will give us better feedback. Still, we must also go beyond learner feedback.

What about our own intuitions—we as learning professionals—about what makes good learning? Nope! Unfortunately, we learning professionals have also demonstrated that we don’t always know what works and what doesn’t. For example, we tend to think that intensive practice is better than spacing practice over time (which is wrong). We think that presenting learning objectives to learners is necessary (which is wrong). We think that immediate feedback is always better than delayed feedback (which is wrong). We think that neuroscience is a good place to get learning-design ideas (which is wrong). So, again, our own intuitions are often not a good source of insight.

What about the tests of learning we present to our learners? Surely these must give us good information? No. We tend to give these to our learners immediately after the learning (and so we aren’t measuring the ability of the learning to support remembering). We tend to use stupid, trivia-focused questions (not the kinds of realistic decision-making we demand of the research). We generally give learners our tests in the same environments in which they learned the concepts (so our results are inflated because the context helps remind the learners of what they learned—which is not realistic, because those contextual cues will not be available on the job). Our tests of learning are severely biased, so we tend to get poor feedback from them.

What about getting wisdom from our experts, from our traditions, from our celebrities? This may sound good, but by what metrics are these folks gauging the success of their recommendations? We’ve seen that our field’s feedback loops are poorly wrought. Why would we assume that our experts are doing any better? Skinner was wrong that learning is as simple as operant reinforcement. Schank was wrong that all learning has to be goal-based. Gagne was wrong that we ought to worry about attention only during the first of his nine events. Mager was wrong about presenting learning objectives to learners. Kirkpatrick was wrong to think that Level 1 results are correlated with Levels 2, 3, and 4.

RESEARCH MUST NOT BE EVALUATED IN A VACUUM!

So, when we rightly criticize the weaknesses of research, we have to ask ourselves—COMPARED TO WHAT? What else offers valid guidance? What else offers BETTER guidance?

Right now in our field there are thousands of vendors trying to sell their wares. Some of these vendors have more effective products and services than others. If we worked in a field that had good feedback loops, there would be some objective metrics on which to evaluate their level of success. BUT SINCE WE HAVE NO SUCH METRICS—OR WE DON'T USE GOOD METRICS—we are left in the dark. And thus, our organizations often make poor choices (or not the best choices).

We’ve seen that research isn’t always perfect. But when these imperfections lead us to ignore it—or lead practitioners to ignore it—we and they are making a fundamental error. We are failing to compare the benefits of research to the benefits of our other sources of feedback.

Research has the advantage of isolating the factors that matter. Our other sources of feedback rarely do. Research has the advantage of competing hypotheses. Researchers compete with each other—and set up their experiments to test competing hypotheses—so that over time, the best ideas rise to the top. It’s not a perfect system, and mistakes do get made along the way, but research is more likely to provide effective recommendations than our other sources of insight.

What can be really helpful is to combine the wisdom from the research with the wisdom of practical insights. This is why good research translators provide such great value.

MAKING THE RESEARCH MORE PRACTICAL

In an ideal world, there would be more applied research—more research that tests recommendations in real-world contexts. Unfortunately, there is no business model for this—that is, people have a hard time getting paid for doing practical research in the learning field. In our universities, researchers tend to do research to develop theories—because that’s where glory and financial rewards lie. In the workplace learning field, learning professionals are paid to crank out training interventions. There appear to be serious disincentives to engaging in any type of practical research.

Over the years, I have argued that enlightened vendors should engage in practical research to gain a competitive advantage. I have seen very little evidence that this is being done, except for a few consultancies that sell high-end products and services—for example to the U.S. Department of Defense.

I actually think that now, more than ever, enlightened vendors may benefit from engaging in a modest program of practical research. The benefits would be fivefold. First, by engaging in research, they’d be able to create more effective learning interventions. Second, by telling their clients and prospective clients of their research efforts, they’d build a brand as a premium provider. Third, by having a research-based premium brand, they’d be able to charge higher prices. Fourth, by attracting the most enlightened clients, they’d be able to do more interesting work and have deeper and more lucrative relationships as they play the role of trusted advisers. Fifth, by engaging in research, an enlightened vendor would be able to create eye-popping content-marketing materials. That is, by sharing the results of their research in white papers, webinars, and conference presentations, an enlightened vendor could draw more interest than their competitors—and ultimately win more business. Creating potent public-interest content is more important than ever in a marketplace that is inundated with a tsunami of content marketing.

BOTTOM LINE

Drawing wisdom from the research—while not always easy—is one of the most valuable things that we as learning practitioners can do. It is a failure of perspective to throw out the research baby with the research bathwater. Research isn’t perfect, but it provides far better insights than what we currently provide for ourselves through learner feedback and tests of learning.

Research translation is especially valuable when it assesses the research from a practical perspective. Practitioners should look to great research translators, including such folks as Ruth Clark, Julie Dirksen, Clark Quinn, Patti Shank, Karl Kapp, me (Will Thalheimer), and others.

Practical applied research may provide vendors with a competitive advantage, helping them to build a high-integrity, premium brand—while leading the field by example.

WHAT HAVE YOU SEEN WORK? WHAT GETS IN THE WAY?

Sometimes practitioners are open to, and even hungry for, research. What have you seen that works to motivate the use of research? What kinds of things get in the way and make practitioners less likely to utilize research?
