Biology online/open-book exams, Part 2: time for reflection & discussion

In a previous post, I shared some tips on making online open-book tests. Those were mostly practical points, pulled together quickly after COVID-19 abruptly pushed us online. Originally, I anticipated writing a second post that went into more depth about some of the challenges, practical and ethical, of online testing in large courses, particularly survey courses in biology (and other disciplines) that tend to be content-heavy. (It’s drafted, and a VERY LONG READ …) Instead, I’ll just mention my concerns at the broadest level … and why I’ll likely revisit the details later.

Continue reading “Biology online/open-book exams, Part 2: time for reflection & discussion”

Biology online/open-book exams, Part 1: general tips/considerations & examples

With COVID-19 shifting us all online, and as someone who has taught online (fully and partially), I have had a number of colleagues approach me with questions and ask for advice. Though I don’t consider myself an “expert” on online teaching, I can (and want to) share what I do know and my own experiences, and I’m going to try to do that here.
Continue reading “Biology online/open-book exams, Part 1: general tips/considerations & examples”

BYOD … or bring me your questions! It’s all good.

 "The times they are a'changing" by brett jordan is licensed under CC BY.
Not an unusual view when you stand at the front of a lecture hall. Image credit: “The times they are a’changing” by brett jordan is licensed under CC BY. https://www.flickr.com/photos/x1brett/1472187414/

About a year ago, I switched from using clickers in my classes to a web-based classroom response system (CRS) – Lecture Tools – where students bring their own internet-enabled devices (BYOD), as I’ve mentioned here before. After three terms, I am generally happy with the system as a replacement for clickers, and I’ll likely talk more about that later.

This is a rather rambly account of something small I tried that worked out. I’m hoping that it might be of use/interest to other folks (or, at least, maybe some of the references will be). Oh, and it has a bit of my philosophy on class attendance. (I’m sure you were curious!)

Continue reading “BYOD … or bring me your questions! It’s all good.”

More evidence of benefits from increased course structure

Image credit: red Mercedes car, by motorhead – stockarch.com

Sarah L. Eddy and Kelly A. Hogan (2014) recently published a paper, “Getting Under the Hood: How and for Whom Does Increasing Course Structure Work?”, a nice example of the next wave of discipline-based education research (DBER) that goes beyond asking “Does active learning work?” to explore how active learning interventions actually work, and how their impacts differ across sub-populations of students. Here, Eddy and Hogan describe the results of a study based on the work led by Scott Freeman at the University of Washington (see Freeman et al. 2011, Haak et al. 2011).

Continue reading “More evidence of benefits from increased course structure”

Why change a university grading scale?

I recently mentioned the 2014 paper by Schinske and Tanner, which is a great review of various aspects of grading, including some of the history of grading in higher education.

Schinske and Tanner highlighted the fact that grades were developed as a method for universities to communicate (e.g., between schools). This is still an important function that grades serve today (within/between schools, and beyond), and there are clear benefits to having a valid, reliable grading system. In the early 20th century, percentage (100-point) scales were frequently used (Cureton, 1971). The letter grade system adopted at Harvard was apparently a result of faculty members’ concerns about the reliability of grades measured on a percentage scale; it was believed that a letter grade system (with five categories) would be more reliable.

Even today, issues with the reliability (as well as validity) of grading exist. (Schinske and Tanner discuss this as well.) Thus, I found it a bit surprising last fall when the University of Windsor (where I currently teach) switched from a letter grade/point system (a 13-point scale) to a percentage system for final grades. I could understand if this shift were bringing the school’s grade reporting in line with many others in the same region (e.g., within a province or country); with different grading systems/scales used by various universities, it can be challenging to compare students from different schools for things like scholarships, professional school applications, etc. However, from my observations (and the OMSAS Undergraduate Conversion Chart), the percentage system isn’t the most widely used grading scale in Ontario, nor across Canada.

I’m not sure why the change to a percentage grade system was made. It is possible that a rationale was provided in some form and I didn’t receive it, or have overlooked it. I’ve asked colleagues here, who also didn’t know. Some (quick) searching of the university website hasn’t pulled up anything helpful, but again, it could be there and I’m just not finding it (as my search terms are pretty common words on a university website). Although I’ll be a bit embarrassed if someone posts a link to something that explains it, I’d still appreciate knowing!

A few questions come to mind: Why did this university change from letter grades to percentages? Is this something that has happened at other institutions? Are there schools that have recently taken the opposite approach (moving from percentages to a point system)? (From the OMSAS chart, I’m guessing that Dalhousie and the University of Toronto made changes, but I don’t know in which direction.) Have any changes in grading systems/scales been accompanied by initiatives relating to how grades are determined?

As ever, I’m interested in seeing your comments (and, hopefully, answers to some of my questions)!

References:
Cureton, L.W. 1971. The history of grading practices. NCME Measurement in Education 2(4): 1-8.
Schinske, J., and Tanner, K. 2014. Teaching More by Grading Less (or Differently). CBE-Life Sciences Education 13(2): 159-166. http://www.lifescied.org/content/13/2/159.short

Thinking (and reading) about grading

I just finished my intersession course (yay!), and am trying to catch up on some reading. Schinske and Tanner’s “Teaching More by Grading Less (or Differently)” paper, recently published in CBE-Life Sciences Education, includes lots of good stuff: a brief history of grading in higher ed, the purposes of grading (giving students feedback and motivation; comparing students; measuring student knowledge/mastery), and “strategies for change” to help instructors who want to maximize the benefits of grading while reducing the pitfalls. There are many interesting points and suggestions in this paper, and hopefully it will be one of the ones we discuss at an upcoming oCUBE journal club meeting.

In the meantime … anyone else want to chat about some of the stuff discussed in the paper? <:-)

Reference:
Schinske, J., and Tanner, K. 2014. Teaching More by Grading Less (or Differently). CBE-Life Sciences Education 13(2): 159-166. http://www.lifescied.org/content/13/2/159.short

Test question quandary: multiple-choice exams reduce higher-level thinking

Last fall, I read an article by Kathrin F. Stanger-Hall, “Multiple-choice exams: an obstacle for higher-level thinking in introductory science classes” (CBE-Life Sciences Education, 2012, 11(3): 294-306). I was interested and disturbed by the findings … though not entirely surprised by them. When I got the opportunity to choose a paper for the oCUBE Journal Club, this was the one that first came to mind, as I’ve wanted to talk to other educators about it. I’m looking forward to talking with oCUBErs, but I suspect that there are many other educators who would also be interested in this paper and some of the questions/concerns that it prompts.

The study:

Figure 4 from Stanger-Hall (2012), showing lower “fairness in grading” ratings on student evaluations of teaching in the MC+SA group: “Student evaluations at the end of the semester. The average student evaluation scores from the MC + SA class are shown relative to the MC class (baseline).” Maybe reports of student evaluations of teaching should also include a breakdown of the assessments used in each class?

Stanger-Hall conducted a study with two large sections of an introductory biology course, taught in the same term by the same instructor (herself), with differences in the types of questions used on each section’s tests. One section was tested on midterms with multiple-choice (MC) questions only, while midterms in the other section included a mixture of MC questions and constructed-response (CR) questions (e.g., short answer, essay, fill-in-the-blank), referred to as MC+SA in the article. She had a nice sample size: 282 students in the MC section, 231 in the MC+SA section. All students were introduced to Bloom’s Taxonomy of thinking skills, informed that 25-30% of exam questions would test higher-level thinking*, and provided with guidance regarding study strategies and time. Although (self-reported) study time was similar across sections, students in the MC+SA section performed better on the portion of the final exam common to both groups, and reported using more active (vs. passive) study strategies. Despite their higher performance, the MC+SA students did not like the CR questions, and rated “fairness in grading” lower than those in the MC-only section. (I was particularly struck by Figure 4, illustrating this finding.)

Continue reading “Test question quandary: multiple-choice exams reduce higher-level thinking”