The tenacious myth of preferred learning styles

We care about addressing ALL LEARNING STYLES (real or imagined)!

Learning styles (the idea that we each have a preferred style, such as visual or auditory, and that teaching should cater to those styles for effective learning) are a myth. This shouldn’t need to be said again; other people have said it well. (You can skip below for a list of references.)

But it’s a tenacious, popular myth. I understand how attractive the idea is … when I was a neophyte graduate student in a TA training workshop, I remember the satisfaction of completing a learning styles inventory (like this: http://www.personal.psu.edu/bxb11/LSI/LSI.htm & this: http://www.learning-styles-online.com/inventory/ & this: http://www.educationplanner.org/students/self-assessments/learning-styles.shtml & I really need to stop because this is just irritating me …) and figuring out that I was a “kinaesthetic” learner. Of course! Of course, I was a science grad student, and this made sense! We do experiments! I learn by doing! (I didn’t think about the fact that I could probably have found a rationale for being a “visual” learner …) It was an easy way for me to think about my learning! And to justify why I didn’t perform so well in some courses … those ones were not tailored to my learning style! (Woe to those poor nasal learners … )

That was back in 1994.

Now there is ample evidence that teaching towards preferred learning styles does not seem to actually help people learn. Even trying to reliably categorize people into preferred learning styles is fraught with issues. Meanwhile, many teachers/professors and students waste time and energy on this, efforts they could be directing elsewhere. (Check out the book “Make It Stick: The Science of Successful Learning” by Brown, Roediger and McDaniel for a good overview of what we DO know about teaching/learning based on recent cognitive science research.)


Experiential Food Education

In the last couple of days, my most popular tweets have involved science-y food items:

Protist pancakes – pancakes and picture by Nathan Shields, http://www.saipancakes.com/
Plant cell pizza! From: http://www.pinterest.com/pin/231794712045067230/ – Julie Newton

Both of these images were brought to my attention by a couple of the smart young women I am lucky to know (Fatima and Renee), and judging from the number of favourites and re-tweets, the images were appreciated by many of the folks who follow me (and their followers). Food does seem to be a really good way to get people’s attention and engagement!

More evidence of benefits from increased course structure

Red Mercedes car – image by motorhead, stockarch.com

Sarah L. Eddy and Kelly A. Hogan (2014) recently published “Getting Under the Hood: How and for Whom Does Increasing Course Structure Work?”, a nice example of the next wave of discipline-based education research (DBER), which goes beyond asking “Does active learning work?” to explore how active learning interventions actually work and their differential impacts on sub-populations of students. Here, Eddy and Hogan describe the results of a study building on work led by Scott Freeman at the University of Washington (see Freeman et al. 2011, Haak et al. 2011).


BYOD and classroom web response systems – my intersession experience

I recently finished teaching an intersession introductory microbiology course. It was a relatively small class (at least, for me) – just over 50 students – and it was a blended, flipped class. (I may post more about the flipping/blending later.) For the in-person classes, I used a couple of Bring-Your-Own-Device (BYOD) web-based classroom interaction systems: Lecture Tools and Learning Catalytics. (In previous offerings of the course, I used clickers.) In this post, I’ll refer to these types of systems as WRS (Web Response Systems). We had access to both systems (at no cost to the students*), and used Lecture Tools regularly.

As I discussed in an earlier post, I had hoped I could use this experience to help me make decisions about moving away from clickers to a WRS. The fact that I met my students in class for only three hours once a week for six weeks was perhaps not the best way to gather a lot of data, but it was nice to try out new technology in a smaller class. Here are some of the things I observed and learned.

Thinking (and reading) about grading

I just finished my intersession course (yay!), and am trying to catch up on some reading. Schinske and Tanner’s “Teaching More by Grading Less (or Differently)”, recently published in CBE-Life Sciences Education, includes lots of good stuff: a brief history of grading in higher ed, the purposes of grading (feedback and motivation for students; comparing students; measuring student knowledge/mastery), and, finally, “strategies for change” to help instructors who want to maximize the benefits of grading while reducing its pitfalls. There are many interesting points and suggestions in this paper, and hopefully it will be one of the ones we discuss at an upcoming oCUBE journal club meeting.

In the meantime … anyone else want to chat about some of the stuff discussed in the paper? <:-)

Reference:
Schinske, J., and Tanner, K. 2014. Teaching More by Grading Less (or Differently). CBE-Life Sciences Education 13(2): 159-166.
http://www.lifescied.org/content/13/2/159.short

BYOD thoughts – moving on from clickers

Is it time for me to move away from clickers? Can I use an online system that will do the job, and make use of devices that students already own (and can use for other purposes, unlike clickers)?

In most of my larger classes, I’ve found clickers (classroom response systems) very helpful in providing feedback to both students and me, encouraging discussion … and waking up students in 8:30 classes!  Classroom response systems, as educational technologies, can be helpful tools but also have potential pitfalls; how they are used makes a huge difference in terms of outcomes. (Want to know more about clickers? Here’s a plug for an essay I wrote back in 2008 – and the references within).

[Note – I find clickers useful in LARGE classes. In my dream-teaching-world, I’d have class sizes that would allow me to do a lot more interaction with all my students that wouldn’t require technology!]

As tools, though, clickers may not be the only (nor the best) option available. I didn’t expect clickers to actually be around all that long – I figured technology would emerge that let students use their own devices to do the same thing (and, hopefully, more). Indeed, we now have both free (e.g., Four Good Alternatives to Clicker Systems) and commercial systems that provide this functionality (e.g., LectureTools, Learning Catalytics, Top Hat). Until recently, technical barriers and financial concerns discouraged me from using these alternatives, so I’ve continued to use clickers.


Pervasive, persistent, problematic “prokaryote”

There are reasons to avoid using “prokaryote” in biology teaching. So, why are so many biologists resistant to the idea?

Why not use “prokaryote”? Norman Pace published a one-page piece in Nature, “Time for a change” (2006), raising concerns about the use of “prokaryote” in education and about the common biology-textbook paradigm of splitting organisms into prokaryotes vs. eukaryotes. Pace highlighted many of the differences between archaea and bacteria, discussed evolutionary relationships and history, and made a case for avoiding the term “prokaryote” with students. (Check out the 2005 article by Jan Sapp discussing the history behind the prokaryote-eukaryote dichotomy, too.) Pace expanded on this in a lengthier educational piece in 2008.


Teaching-stream faculty positions – response to Globe & Mail article

Yesterday, the Globe and Mail published an article, “For a new kind of professor, teaching comes first”* by James Bradshaw. The story raised some positive points (e.g., qualified academics may prefer to focus on teaching; educational research is carried out by some teaching-focussed professors). Unfortunately, it contained some inaccuracies about teaching-focussed faculty positions at York University, and some disheartening statements from James Turk, Executive Director of the Canadian Association of University Teachers (CAUT/ACPPU). The CAUT/ACPPU is supposed to represent all sorts of university/college staff members, not only research faculty. (It may not be common knowledge that teaching-stream faculty positions already exist at many Ontario universities, although we are in the minority compared to research-stream faculty.)


Studying/learning resources for students – what to share?

I’m working on some information to share with my students about studying/learning strategies. (Note – in my current position, I’m teaching classes to students who have at least one year of university under their belts.) I keep wanting to expand it, but I fear that the chances of students actually reading it are inversely proportional to its length! I am posting it here so that I can get constructive feedback, and hopefully other folks might be able to use some of it (as students or instructors). (Some aspects are specific to the University of Windsor/microbiology, but most of this is pretty general.)

Some studying/learning tips/resources:

I am often asked how best to study (for my classes, and others). Certainly, many students in microbiology have already developed effective studying/learning strategies for university classes, but here are some points about learning that might be helpful.


Test question quandary: multiple-choice exams reduce higher-level thinking

Last fall, I read an article by Kathrin F. Stanger-Hall, “Multiple-choice exams: an obstacle for higher-level thinking in introductory science classes” (CBE-Life Sciences Education, 2012, 11(3): 294-306). I was interested and disturbed by the findings … though not entirely surprised by them. When I got the opportunity to choose a paper for the oCUBE Journal Club, this was the first one that came to mind, as I’ve wanted to talk to other educators about it. I’m looking forward to talking with oCUBErs, but I suspect that many other educators would also be interested in this paper, and in some of the questions and concerns it prompts.

The study:

Figure 4 from Stanger-Hall (2012): “Student evaluations at the end of the semester. The average student evaluation scores from the MC + SA class are shown relative to the MC class (baseline).” Maybe reports of student evaluations of teaching should also include a breakdown of the assessments used in each class?

Stanger-Hall conducted a study with two large sections of an introductory biology course, taught in the same term by the same instructor (herself), with differences in the types of questions used on tests for each section. One section was tested on midterms by multiple-choice (MC) questions only, while midterms in the other section included a mixture of MC questions and constructed-response (CR) questions (e.g., short answer, essay, fill-in-the-blank), referred to as MC+SA in the article. She had a good sample size: 282 students in the MC section and 231 in the MC+SA section. All students were introduced to Bloom’s taxonomy of thinking skills, informed that 25-30% of exam questions would test higher-level thinking*, and given guidance about study strategies and time. Although (self-reported) study time was similar across sections, students in the MC+SA section performed better on the portion of the final exam common to both groups, and reported using more active study strategies rather than passive ones. Despite their higher performance, the MC+SA students did not like the CR questions, and rated “fairness in grading” lower than students in the MC-only section. (I was particularly struck by Figure 4, illustrating this finding.)
