BYOD … or bring me your questions! It’s all good.

 "The times they are a'changing" by brett jordan is licensed under CC BY.
Not an unusual view when you stand at the front of a lecture hall. Image credit: “The times they are a’changing” by brett jordan is licensed under CC BY. https://www.flickr.com/photos/x1brett/1472187414/

About a year ago, I switched from using clickers in my classes to a web-based classroom response system (CRS) – Lecture Tools – where students bring their own internet-enabled devices (BYOD), as I’ve mentioned here before. After three terms, I am generally happy with the system as a replacement for clickers, and I’ll likely talk more about that later.

This is a rather rambly account of something small I tried that worked out. I’m hoping that it might be of use/interest to other folks (or, at least, maybe some of the references will be). Oh, and it has a bit of my philosophy on class attendance. (I’m sure you were curious!) Continue reading “BYOD … or bring me your questions! It’s all good.”

The tenacious myth of preferred learning styles

We care about addressing ALL LEARNING STYLES (real or imagined)!

Learning styles (the idea that we each have a preferred style, such as visual or auditory, and that teaching should cater to it for effective learning) are a myth. This shouldn’t need to be said again. Other people have said it well. (You can skip ahead to the list of references below.)

But it’s a tenacious, popular myth. I understand how attractive the idea is … when I was a neophyte graduate student in a TA training workshop, I remember the satisfaction of completing a learning styles inventory (like this: http://www.personal.psu.edu/bxb11/LSI/LSI.htm & this: http://www.learning-styles-online.com/inventory/ & this: http://www.educationplanner.org/students/self-assessments/learning-styles.shtml & I really need to stop because this is just irritating me …) and figuring out that I was a “kinaesthetic” learner. Of course! Of course, I was a science grad student, and this made sense! We do experiments! I learn by doing! (I didn’t think about the fact that I could probably have found a rationale for being a “visual” learner …) It was an easy way for me to think about my learning! And to justify why I didn’t perform so well in some courses … those ones were not tailored to my learning style! (Woe to those poor nasal learners … )

That was back in 1994.

There is now ample evidence that teaching towards preferred learning styles does not actually help people learn. Even trying to reliably categorize people into preferred learning styles is fraught with issues. Meanwhile, many teachers/professors and students waste time and energy on this, effort they could be directing elsewhere. (Check out the book “Make It Stick: The Science of Successful Learning” by Brown, Roediger and McDaniel for a good overview of what we DO know about teaching/learning based on recent cognitive science research.)

Continue reading “The tenacious myth of preferred learning styles”

More evidence of benefits from increased course structure

Red Mercedes car. Image by motorhead – stockarch.com

Sarah L. Eddy and Kelly A. Hogan (2014) recently published a paper, “Getting Under the Hood: How and for Whom Does Increasing Course Structure Work?”, a nice example of the next wave of discipline-based educational research (DBER) that goes beyond asking “Does active learning work?” to explore how active learning interventions actually work and how their impacts differ across sub-populations of students. Here, Eddy and Hogan describe the results of a study that builds on work led by Scott Freeman at the University of Washington (see Freeman et al. 2011, Haak et al. 2011).

Continue reading “More evidence of benefits from increased course structure”

Thinking (and reading) about grading

I just finished my intersession course (yay!) and am trying to catch up on some reading. Schinske and Tanner’s paper “Teaching More by Grading Less (or Differently)”, recently published in CBE-Life Sciences Education, includes lots of good stuff: a brief history of grading in higher ed, the purposes of grading (giving students feedback and motivation; comparing students; measuring student knowledge/mastery), and “strategies for change” to help instructors who want to maximize the benefits of grading while reducing the pitfalls. There are many interesting points and suggestions in this paper, and hopefully it will be one of the ones we discuss at an upcoming oCUBE journal club meeting.

In the meantime … anyone else want to chat about some of the stuff discussed in the paper? <:-)

Reference:
Schinske, J., and Tanner, K. 2014. Teaching More by Grading Less (or Differently). CBE-Life Sciences Education 13(2): 159-166.
http://www.lifescied.org/content/13/2/159.short

Test question quandary: multiple-choice exams reduce higher-level thinking

Last fall, I read an article by Kathrin F. Stanger-Hall, “Multiple-choice exams: an obstacle for higher-level thinking in introductory science classes” (CBE-Life Sciences Education, 2012, 11(3): 294-306). I was interested and disturbed by the findings … though not entirely surprised by them. When I got the opportunity to choose a paper for the oCUBE Journal Club, this was the one that first came to mind, as I’ve wanted to talk to other educators about it. I’m looking forward to talking to oCUBErs, but I suspect that there are many other educators who would also be interested in this paper and some of the questions/concerns that it prompts.

The study:

Figure 4 from Stanger-Hall (2012), showing lower “fairness in grading” ratings on student evaluations of teaching (SET) in the MC+SA section: “Student evaluations at the end of the semester. The average student evaluation scores from the MC + SA class are shown relative to the MC class (baseline).” Maybe reports of student evaluations of teaching should also include a breakdown of assessments used in each class?

Stanger-Hall conducted a study with two large sections of an introductory biology course, taught in the same term by the same instructor (herself), with differences in the types of questions used on tests for each section. One section was tested on midterms with multiple-choice (MC) questions only, while midterms in the other section included a mixture of MC questions and constructed-response (CR) questions (e.g., short answer, essay, fill-in-the-blank), referred to as MC+SA in the article. She had a nice sample size: 282 students in the MC section, 231 in the MC+SA section. All students were introduced to Bloom’s Taxonomy of thinking skills, informed that 25-30% of exam questions would test higher-level thinking*, and provided guidance regarding study strategies and time. Although (self-reported) study time was similar across sections, students in the MC+SA section performed better on the portion of the final exam common to both groups and reported using more active (vs. passive) study strategies. Despite their higher performance, the MC+SA students did not like the CR questions, and rated “fairness in grading” lower than those in the MC-only section. (I was particularly struck by Figure 4, illustrating this finding.)

Continue reading “Test question quandary: multiple-choice exams reduce higher-level thinking”