Ancillary fee anxiety

[Image: Anxiety Cat meme, captioned "Anxiety cat is anxious about ancillary fees"]

I had originally planned to write (and actually wrote a draft of) a post to explore my questions and concerns about asking students to pay for access to a web-based classroom response system (WBCRS henceforth), like Lecture Tools (now integrated into Echo 360), Top Hat, or Learning Catalytics. My major concern? These tools are essentially ways to teach huge classes better, bringing in the interactivity and communication that are difficult to achieve in a large-class setting – so asking students to pay for access amounts to a kind of "large class tax". (I’ve used Lecture Tools for several terms – see my previous posts here, here, and here.)


I’d hoped to gain some clarity, and maybe spark some conversation with colleagues, about the issues around using a WBCRS at a cost to students. As part of my thinking, I considered some of the other ancillary items we routinely ask students to purchase (i.e., items required for a course but not usually included in tuition). I originally thought a teaching tool was really different from a required textbook, dissection kit, safety glasses, or a lab coat. Now I’m not only concerned about the ethics/fairness of asking students to purchase licenses for a WBCRS, but also about requiring textbooks and disposable lab coats!

Pervasive, persistent, problematic “prokaryote”

There are reasons to avoid using “prokaryote” in biology teaching.  So, why are so many biologists resistant to the idea?

Why not use “prokaryote”? Norman Pace published a one-page piece in Nature, “Time for a change” (2006), raising concerns about the use of “prokaryote” in education and about the common biology-textbook paradigm of splitting organisms into prokaryotes vs. eukaryotes. Pace highlighted many of the differences between archaea and bacteria, discussed evolutionary relationships/history, and made a case for avoiding the term with students. (Check out the 2005 article by Jan Sapp discussing the history behind the prokaryote-eukaryote dichotomy, too.) Pace expanded on this with a lengthier educational piece in 2008.


Test question quandary: multiple-choice exams reduce higher-level thinking

Last fall, I read an article by Kathrin F. Stanger-Hall, “Multiple-choice exams: an obstacle for higher-level thinking in introductory science classes” (CBE-Life Sciences Education, 2012, 11(3), 294-306). I was interested and disturbed by the findings … though not entirely surprised by them. When I got the opportunity to choose a paper for the oCUBE Journal Club, this was the one that first came to mind, as I’ve wanted to talk to other educators about it. I’m looking forward to talking with oCUBErs, but I suspect that many other educators would also be interested in this paper and in some of the questions/concerns it prompts.

The study:

[Figure 4 from Stanger-Hall (2012), a graph showing lower “fairness in grading” ratings on student evaluations of teaching (SET) in the MC+SA group: “Student evaluations at the end of the semester. The average student evaluation scores from the MC + SA class are shown relative to the MC class (baseline).”] Maybe reports of student evaluations of teaching should also include a breakdown of the assessments used in each class?

Stanger-Hall conducted a study with two large sections of an introductory biology course, taught in the same term by the same instructor (herself), with differences in the types of questions used on tests in each section. One section’s midterms used multiple-choice (MC) questions only, while the other section’s midterms included a mixture of MC questions and constructed-response (CR) questions (e.g., short answer, essay, fill-in-the-blank), referred to as MC+SA in the article. She had a nice sample size: 282 students in the MC section and 231 in the MC+SA section. All students were introduced to Bloom’s Taxonomy of thinking skills, informed that 25-30% of exam questions would test higher-level thinking*, and given guidance on study strategies and time. Although (self-reported) study time was similar across sections, students in the MC+SA section performed better on the portion of the final exam common to both groups, and reported using more active (vs. passive) study strategies. Despite their higher performance, the MC+SA students did not like the CR questions, and rated “fairness in grading” lower than those in the MC-only section. (I was particularly struck by Figure 4, illustrating this finding.)
