Learning as Roadtripping
Imagine that you decide to take a day trip with your family. Everyone gets excited, you pack up the car, pile in, and start driving. Then you drive in circles all day, return home at night, and proudly announce “we just drove for eight hours!” This is basically what we do all too often in education: we capture simple metrics such as learners’ “seat time,” or the number of learners who signed up for a MOOC, rather than what we really care about: progress towards a meaningful learning destination.
Unless you’re Sal in On the Road, you know that this is no way to travel. When you plan a trip, you presumably know where you’re starting, and then you select a destination. You probably also research your options and choose whether you’ll fly, go by car, or maybe take a train. Even after you choose your means of conveyance, you then plan out your route. Though trip duration and traffic are factors in trip planning, we usually try to minimize them in order to make the journey as pleasant as possible.
We have to do the same thing in learning: there are so many different approaches we can take, but not all of them are equally effective or appropriate for our learners. We need to start with clear outcomes to work towards, and we need to know where we're starting. This means we need data. You may quickly realize that you don't know where you are starting because you don't have high-quality evidence that learning is happening, or perhaps you don't have evidence that your learners are interested and engaged.
The Null Hypothesis as a Starting Point
If you don’t have high-quality data, it might be safest to assume that your learners aren’t engaged, they aren’t interested, and they aren’t learning. I call this “the null hypothesis of online learning.” Let’s be clear here: if you collect evidence that disproves this (seemingly pessimistic) statement, that will put you head and shoulders above most online learning experiences, for which this evidence simply doesn’t exist. This gives you a first goal (outcome) to strive for: disprove the null hypothesis.
With this goal in mind, you then need to make some decisions about what to measure. You can typically use platform analytics to calculate engagement metrics (persistence, time on task), and you can measure learners' interest using survey responses. As an aside, I've found that the Net Promoter Score question is a great way to measure learners' interest and enthusiasm, plus it allows for apples-to-apples comparisons with some other online providers. When it comes to measuring learning, you might choose to start with a pre/post test. It's not a perfect instrument, but it's certainly better than nothing. To return to the road trip metaphor, with these simple metrics in hand, you now have the coordinates that define your starting point.
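As a rough sketch of what these starting-point metrics look like in practice, here are the two standard formulas in Python: NPS is the percentage of promoters (ratings of 9-10) minus the percentage of detractors (0-6), and a common way to summarize a pre/post test pair is Hake's normalized gain, (post − pre) / (max − pre). The function names and sample data are mine, purely for illustration.

```python
def net_promoter_score(ratings):
    """NPS from 0-10 'how likely are you to recommend this?' ratings.

    Promoters rate 9-10, detractors rate 0-6; passives (7-8) are ignored.
    Returns a score in the range -100 to 100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)


def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: how much of the available headroom was gained."""
    if pre >= max_score:
        return 0.0  # no room left to improve; avoid dividing by zero
    return (post - pre) / (max_score - pre)


# Hypothetical survey responses: 3 promoters, 2 passives, 2 detractors.
ratings = [10, 9, 9, 8, 7, 6, 3]
print(round(net_promoter_score(ratings), 1))       # (3 - 2) / 7 -> 14.3

# A learner who scores 40 on the pre-test and 70 on the post-test
# captured half of the 60 points they had left to gain.
print(normalized_gain(pre=40, post=70))            # 0.5
```

Even simple functions like these force you to be explicit about what counts as engagement and learning, which is exactly the clarity the road trip metaphor demands.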
Plotting Your Route
Next, you need to figure out how to convey your learners to their destination. There is no “Google Maps of learning,” so you need to follow a design process that accounts for where they are headed (what learning, interest, and engagement you want to foster), how people learn, and the unique needs of your learners. I’ve written before about leveraging learning science as you design online learning, and about the questions you can ask to better “meet learners where they are.” Here, I want to focus on another powerful tool you can use: Design Thinking.
Design Thinking is typically described in five distinct stages (Empathize, Define, Ideate, Prototype, and Test). These steps are often defined separately and in order, but this is a highly iterative process with a lot of interconnections between the stages, so I mentally group the stages of Design Thinking into three phases.
- Empathize and Define: put yourself in your learners’ shoes and define (write down) their unique needs, pain points, and challenges. You should test your assumptions by talking to people who are in your target audience: this is a key aspect of the empathy exercise. For me, this stage also involves some due diligence: I search for learning opportunities that are currently available for my audience and spend some time evaluating them. That helps me to get very specific about the pain points and challenges that my learners currently face.
- Ideate and Prototype: This is your chance to get clear on what features and functionalities you might provide to address your learners’ specific needs, and then start building solutions (prototypes). It’s valuable to actually write down the features you’re aiming to create, then rank those features by priority. Prototyping allows you to re-balance your priorities on the fly (“How essential is this feature, really?”), but even more importantly, it is your first chance to test a working model with your target audience. Perfectionism is verboten at this stage; it’s much more important to get your prototype in front of learners. In addition to observing how they use the prototype, be sure to collect their feedback and then act upon what you learn. Early user testing is particularly critical because your involvement in creating the prototype compromises your ability to evaluate it objectively.
- Test: When you have a prototype that works and an audience willing to try it out, run a pilot. Be sure you are collecting the data you want to measure during the pilot. If you don't get the data, you won't know whether your learners are making progress towards the destination that you've defined. That said, you should strive to avoid "death by survey." Only collect the data you absolutely need to move to the next Design Thinking cycle.
This whole process can be highly iterative. As you get feedback from your learners, make sure you’re considering whether to fold that into your next prototype. Keep in mind that your learners may not report every problem they encounter. Though qualitative feedback is helpful, watch your key metrics (indicators of learning, engagement, and interest) to figure out what to do next. Even as you learn from testing and get ideas for improvements and features, you should be re-ranking your priorities, because it isn’t possible to address every issue simultaneously.
Keep in mind that your testing process may give you evidence that your prototype is not working. That's great! The beauty of this process is that it allows you to learn, adjust, and pivot rapidly. When you've been through one cycle, take a pause to decide whether you want to proceed with the overall concept you've developed (usually with at least a few tweaks) or go back to the drawing board.
Though we humans have an incredible capacity to learn, that doesn’t mean we can or should assume that our audience is really learning what we teach. There are so many intrinsic and extrinsic barriers to learning. It’s much safer to hypothesize that learning isn’t happening and then set out to get evidence to disprove this hypothesis. Just as we need to do some trip planning to avoid lengthy and pointless journeys, we need to thoughtfully design “learning pathways” that make the most of lessons from learning science and minimize our learners’ pain points and distractions. An intentional, user-centered approach – such as Design Thinking – can help us to do just that.