Podcasting and Learning

I've been orbiting and occupying this big ol' Ed Techy world for quite a long time. So, I've read countless papers, proposals, articles, and chapters on "Why my media is better for learning than your media." I've studied this from a cognitive, social, motivational, and developmental perspective, and I am still not convinced that one medium is better for learning than any other. I do believe that all media have different affordances that make them better suited to certain contexts, learning styles, or learning tasks. Needless to say, I've done (and continue to do) my homework on this one, and I get a little annoyed when people who perhaps just haven't done as much homework on this topic make blanket statements like this one:

 A new study found that students who listen to lectures on podcasts test better than those who listen in class. (You can read the entire article here.)

The "new study" in question is taken from a 2009 article in New Scientist, and it's reporting on a study done in the same year by Dani McKinney. I found these links in a 2012 blog post by Michael Hyatt. My issue is not with bloggers quoting bloggers quoting bloggers, some of whom don't understand educational technology research. The problem I have with this quote, and those like it, is that it is stated as if this were "proven" fact. Here are some things to consider when reading results from this type of research:

  1. Learning is complex and influenced by many, many factors: motivation, engagement, prior knowledge, environment, aptitude AND delivery method. To say that students who listen to a podcast at home will score better on a test than those who come to class and listen to the lecture is absurd. This can lead to all kinds of misconceptions about learning, multi-tasking, learning styles and media. "You can learn astrophysics while washing the dishes and updating your Facebook status!"
  2. Most educational technology research studies have a long list of limitations. The findings are almost always limited to the study in question, with some suggestions on how to scale it up or replicate the results. This is called "job security" for us academic types.
  3. Teacher-created exams may not be the best measure for student learning. Sure, they're great for assigning grades, but they are often mismatched with the learning objectives, and if they are multiple choice, there's a chance the students could guess and get the right answer. The test items may also be written in such a way that it is easy to eliminate answer choices and choose the right one without really knowing the material. I did not read the entire research article, so I will not make any judgments about this instructor's exam.

Online and hybrid learning environments are here to stay, and the research into best practices and learning outcomes for this model of teaching will only get broader and better. Studies such as the one I reference above are an important part of this process and must be done. Any research study should produce more questions than answers, which this one obviously did. (Do podcast lectures have the same effect over an entire semester?) But please, if you are going to quote studies like this, look at the original research article and temper your statements with a qualifier or two. Michael Hyatt and Mike Elgan have thousands and thousands of readers. If I go back and read this tonight, I will have one. People believe what they read, and they especially believe what they read when they WANT to believe it. If you want to do better on tests, or learn something new, try the time-tested strategy of applying yourself and taking ownership of your learning. Don't expect media, podcasts or otherwise, to do for you what you aren't willing to do for yourself.

Research and Evidence

From early on in my doctoral studies, I gravitated toward research that had practical implications. I am not suggesting that survey research is not practical, but for the most part it really didn't interest me that much. I was far more interested in studies that measured things that matter to teachers and students: time on task, engagement with the instruction, student artifacts, and learning. As a teacher, these were the things that interested me. I had to be sensitive to each student as an individual and the different factors that directly influenced their lives, but I felt more compelled to make my classroom as exciting as possible than I did to try to change their lives at home or their attitudes toward school. This is just where I chose to put my time and energy. So, when I see research that is really creative, unique, or practical, I am suddenly interested. There are two such studies that I find fascinating.

The first is a study about the influence of success or failure on perception. This could easily have been done with a research instrument (survey, questionnaire, etc.), but these researchers chose to measure the influence of success or failure (in this case, kicking field goals) by having participants adjust a miniature goal post to the size they believed matched the real one after they had just attempted 10 field goals. People who kicked too low routinely adjusted the mini goal post too high; people who kicked wide right or left judged the distance between the goal posts to be narrower than it really was. Even more surprising, the more field goals a person made, the wider they adjusted the goal posts. It's fascinating to think that people standing side by side, depending on their success at kicking field goals, were effectively not looking at the same object. I really like this study because it accurately reflects how people might actually perceive objects or experiences with which they have had past success or failure.

I used to notice something similar with my students in regard to reading ability. Those who struggled with reading were more likely to perceive words with a lot of letters as harder. I found them skipping past or mumbling long words, even if the words weren't really that hard to read (e.g., doorstop). I have no formal data, just my own experience, to back this up, but based on the findings of the field goal study it makes sense that this phenomenon would apply to other areas of life.

The second study, if one can call it that, is based on a series of VW commercials. The premise is that people will be more likely to do otherwise mundane or bothersome activities if those activities are made fun. You need to watch the videos to see what I am talking about. What I find interesting is the way they measure the influence of "fun" on the desired behavior: the number of people using a recycling bin, the number of people using the stairs, and the weight of the trash in a garbage can. Each of these outcomes measures exactly what the fun was meant to increase. No surveys or other validated instruments; just an increase in the thing that is meant to be increased.

Of course, student outcomes aren't as tidy as the number of people who use the stairs instead of the escalator in a 24-hour period. Concepts such as "understanding," "effort," and "engagement" are really hard to define and, thus, hard to measure. But there are some things that teachers would like to see more of from their students that can be measured: time on task, attention to detail, and higher-order thinking. These two studies have breathed a little life into my interest in student outcomes and classroom-based research. They are innovative, creative, and, at least to people who are interested in perception or in increasing civic-minded behavior, relevant. Research should be, if nothing else, relevant.

I can still hear the words of two of my professors ...

Professor A: By the time you leave my class, I want you all to be from Missouri. Why Missouri? Because it's the Show-Me State, and if you make claims based on your research, you need to show me. Your data should show me something.

Professor B: If something exists, then it exists in some amount and can, therefore, be measured.

I didn't realize this at the time, but these have become words to live by.

Google Transcriber? Far from Beta

I have been using Google Voice for about a month now, and I'm really starting to like it. I have yet to use it for academic purposes, but it comes in handy for making long-distance calls from work. I have also put a call widget on my family blog, and it's been fun listening to messages from family and friends from all over the country. I was also excited to learn that Google transcribes the messages into text, in case I want to get the gist of a message before listening to it. This would really come in handy if I ever got any messages from angry students. :) One might assume that since Google has knocked just about every other project out of the park, its transcriptions would be spot on. Well, think again. I have two examples below using messages from my mom and mother-in-law to my two sons (the transcriptions are below the audio widget):

Hi Tina, Kurt being Sam innate sense. This is granny Karen. We love you. Granddad night. Thought about you today. Labor day. We had a fun, Labor Day, so birthday celebration with the and that and for me. I would like to dominos tonight that we miss you all. We hope you had a fun day today. Also, Hi, Okay, I want to tell you bye bye. We'd love to seeing your pictures on the blog spot. Thank you. Bye bye. We love you.

Hey guys, It's G G. We're just hoping that you have sometime today to visit with us on skype. I'm fixing to go out grocery shopping. It's the 920 here 10:20 your time, so I should be. I'll be back here at noon, so if you have time before your afternoon match ups this afternoon. G. G in. Paul Paul would love to visit with you on the web cam. Bye bye.

I was thinking about using this tool to record some phone interviews for a research project I am starting, but obviously I will have to do some serious proofreading. Still, even if this tool gets 60-70% of the words right, it will save me a bundle of time and effort. Using Google Voice, you can record calls, and it will send you a transcription of the conversation. I am going to test it out this week and see how it goes.
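
If you want to put a number on "60-70% of the words right," a rough word-accuracy check is easy to sketch. This is my own illustration, not anything Google Voice provides: you hand-correct a transcript, then compare the machine's version against it using an edit distance computed over words. The sample strings below are made up for the example.

```python
# Rough word-level accuracy: 1 - (word edit distance / reference length).
# "reference" is the transcript you'd type up by hand after proofreading;
# "hypothesis" is what the automatic transcriber produced.

def word_edit_distance(ref_words, hyp_words):
    """Classic Levenshtein distance computed over words instead of characters."""
    rows, cols = len(ref_words) + 1, len(hyp_words) + 1
    dist = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        dist[i][0] = i
    for j in range(cols):
        dist[0][j] = j
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[-1][-1]

def word_accuracy(reference, hypothesis):
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    return 1 - word_edit_distance(ref, hyp) / len(ref)

# Made-up snippets, not the real voicemail messages above.
reference = "hi tina kurt and sam this is granny karen we love you"
hypothesis = "hi tina kurt being sam innate sense this is granny karen we love you"
print(f"Estimated word accuracy: {word_accuracy(reference, hypothesis):.0%}")
```

Run a check like this on a few hand-corrected messages and you'll know quickly whether the transcripts are a time-saver or a time sink for interview work.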

Cookies in a Flash

When I read this article, I chuckled a little. Developers of web content and sites that host web content are getting better and better at collecting data about how people are using their tools. Some examples are YouTube's Insight and Visible Measures. The latter will actually display in a graph when people rewind the scrub bar and re-watch portions of a video. Granted, I think a video has to have about 50,000 or so views for this to work, but it does give a pretty accurate display of audience engagement. In the tool I helped design for my dissertation -- PrimaryAccess Storyboard -- we used something similar to Flash cookies to collect data about how the students were using the tool. The tool is built using Flash, and Bill (the programmer) was able to add in some code that records everything the user does while logged into the application. I was able to get detailed information on each student, such as when they logged in, whether or not they logged in after school from a different computer, and every single move they made with the mouse or keyboard (I didn't know what they wrote and deleted, just the number of characters typed and deleted). I could even see how much time each student spent on task. Of course, those of us who have been teachers know that a general number for time on task, such as 52 minutes, does not really tell us much about what the student was doing. Using the rest of the data, I was able to see the differences in how the students used their time. Some would experiment with different images, layering them on top of each other and tinkering with the order. Other students were just playing with the tool, adding characters, spinning them around, and deleting them.
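
I can't share Bill's actual ActionScript here, but the logging idea itself is simple. Below is a minimal sketch, in Python rather than Flash, of the kind of event log we collected: timestamped actions plus character counts, never the text itself. The class and method names (`EventLogger`, `log_typing`) are mine, invented for illustration, not names from the real tool.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Event:
    timestamp: float  # when the action happened
    kind: str         # e.g., "login", "typing", "image_added"
    detail: dict = field(default_factory=dict)

@dataclass
class EventLogger:
    """Records what a student does, not what they write."""
    user_id: str
    events: list = field(default_factory=list)

    def log(self, kind, **detail):
        self.events.append(Event(time.time(), kind, detail))

    def log_typing(self, chars_typed, chars_deleted):
        # The privacy choice described above: store the *volume* of typing
        # and deleting, never the actual characters.
        self.log("typing", typed=chars_typed, deleted=chars_deleted)

# A toy session: a login, a burst of typing, an image manipulation.
log = EventLogger(user_id="student42")
log.log("login", machine="lab-07")
log.log_typing(chars_typed=120, chars_deleted=35)
log.log("image_added", layer=2)
print(f"{log.user_id}: {len(log.events)} events logged")
```

The design decision worth flagging is in `log_typing`: counting characters rather than capturing them is what let us study effort without ever reading student work.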

This approach to data collection was not perfect, and Bill and I are still thinking about ways to make the data more useful to teachers, but it did address a longstanding issue I have had with technology in the classroom. Computers in the classroom, just as in other contexts, make it easy for students to look busy when they really aren't. As an observer, I was able to see how students would deftly switch between their work and Solitaire without the teacher ever noticing. Even though the data told me how long a student stayed on task, it was the rest of the data that let me infer whether they were really engaged. And even if they looked engaged, I could see how they were spending their time. Some used the tool creatively to make the best product possible, while others played around but did not put a lot of thought into their final product.
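
To make the "52 minutes doesn't tell you much" point concrete, here is one way, again a hypothetical sketch rather than our actual analysis, to split a logged session into active and idle time by looking at the gaps between events. The 60-second idle threshold is an arbitrary choice for illustration.

```python
# Sum only the short gaps between consecutive events; a long silent gap
# counts as idle. The 60-second threshold is illustrative, not a finding.

IDLE_THRESHOLD = 60  # seconds of silence before we call the student idle

def active_seconds(timestamps):
    """Sum the gaps between events that fall under the idle threshold."""
    return sum(later - earlier
               for earlier, later in zip(timestamps, timestamps[1:])
               if later - earlier <= IDLE_THRESHOLD)

# Two ten-minute sessions with very different shapes:
steady = [float(t) for t in range(0, 600, 20)]    # an event every 20 seconds
bursty = [0.0, 10.0, 15.0, 580.0, 590.0, 600.0]   # long silent stretch

print(f"steady: {active_seconds(steady) / 60:.1f} active minutes")  # 9.7
print(f"bursty: {active_seconds(bursty) / 60:.1f} active minutes")  # 0.6
```

Both students would show a ten-minute session, but the steady worker logged events all the way through while the bursty one went silent for most of it, which is exactly the kind of difference the rest of the data let me see.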

I see a lot of potential in this kind of data collection. We can learn a lot about how students use different tools and scaffold projects in a way that anticipates these patterns. It also opens the door to looking at how teachers make instructional decisions based on the feedback they get from students' tool use. Given what they know about how a student is using his or her time, can they make better decisions for directing student effort before the final product is turned in? I think there are a lot of good questions, and hopefully some answers, that will come from this kind of data.