This is the second in a collection of posts about topics covered in “A Learner-Centered Approach to Effective Teaching,” a workshop series offered by the UC Davis Center for Excellence in Teaching and Learning (CETL). By writing these posts, I hope to solidify my grasp of the topics covered and provide useful information for those unable to attend the workshop.
What’s in a typical syllabus? Most begin with a description of the course, lay out a series of lecture topics and reading assignments, and then delve into grading policies. These policies are generally designed with three purposes in mind: to provide accountability to students, to determine the suitability of each student for advancement, and to describe how students compare to some standard. The tools used to enact these policies – tests, quizzes, graded essays, etc. – are known as summative assessments. While these types of assessments do a good job of determining whether a student has achieved particular learning outcomes, they do little to actually promote learning.
This leads me to what is often missing from syllabi – a reasonable outline of how students are expected to achieve the outcomes and evaluate their progress (at frequent intervals) along the way. This is where formative assessments come into play. These assessments are designed to promote learning and are typically either ungraded or counted as participation points. The information that these assessments provide can be primarily useful to students, to instructors, or to both, depending on the type. Here are some examples of each (I have focused on methods I have used myself or found particularly interesting in the workshop):
Formative assessments that are primarily helpful to students
- Practice quizzes, exams, and problem sets. Posting practice summative assessments can give students a better sense of what types of questions they will be asked and allows them to evaluate their own learning. To truly facilitate learning, though, I feel that answer keys should not be posted with practice problems. This prevents memorization and forces engagement with the problems. I had a lot of success with this approach in an upper-level plant physiology course I TA’d – I posted practice exam problems and then held review sessions where students discussed their answers and collaboratively worked through the problems with input from me and the instructors.
- Exam wrappers. This is an idea that was presented in the workshop. Basically, after each midterm exam or major assignment, students would receive a short questionnaire asking them to reflect on their performance and study process prior to the exam. I think that this would be particularly useful in courses with cumulative exams, as it would force students to identify patterns of mistakes (“I got three questions about monophyletic groups wrong, maybe I don’t understand that concept”), and give them a chance to really learn from the exam. To motivate students to actually complete the self-reflection, you could offer to add a few bonus points to the exam score for completing it.
Formative assessments that are primarily helpful to instructors
- Background knowledge probe. When planning a class at any level, you make assumptions about what your students will know or be able to do when they walk into your classroom. Are those assumptions actually valid? A good way to find out is to do a brief questionnaire at the start of the semester to see if your students are where you think they should be. I think this would be particularly useful when starting at a new institution, where students have taken different preparatory courses and may have come from states with different high school curricula (perhaps less of an issue as Common Core adoption advances).
- Mid-semester evaluation. This is basically the same as the dreaded end-of-term evaluation, with one major difference – you can actually do something about the issues students raise!
- The muddiest point. This is another idea from the workshop and one I really like. After each class or unit, you ask your students to submit a sentence or two describing what they found most confusing or unclear. This allows you to reteach concepts if necessary and improve your instruction. For large classes, this could be done online, so that the comments are dumped into a single column in a spreadsheet that the instructor could scan through to find common issues.
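For the online version of the muddiest-point exercise, a quick word tally can stand in for scanning the spreadsheet column by hand. Below is a minimal sketch in Python; the stopword list and the `common_terms` function are my own illustration, not part of any particular survey tool, and assume the responses have already been exported as a list of strings.

```python
# Minimal sketch: surface the most frequent non-stopword terms in a
# column of "muddiest point" responses so common confusions stand out.
from collections import Counter

# A small, hypothetical stopword list; extend as needed for your course.
STOPWORDS = {"the", "a", "an", "i", "of", "to", "and", "is", "was", "in",
             "it", "that", "what", "how", "about", "on", "for", "not",
             "me", "my", "we", "you"}

def common_terms(comments, n=5):
    """Return the n most frequent non-stopword terms across all comments."""
    counts = Counter()
    for comment in comments:
        # Strip basic punctuation before splitting into words.
        words = comment.lower().replace(",", " ").replace(".", " ").split()
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)
```

Running `common_terms` over a semester’s worth of responses would show, for example, that “monophyly” dominated the comments after a phylogenetics unit, flagging it for re-teaching.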
Formative assessments that are helpful to both students and instructors
- Guided questioning. This is a mainstay of my teaching in the BIS2C lab course, where students work in small groups to complete activities at a series of stations. Based on my experience teaching the course, I know which stations tend to be difficult for students, and I plan a series of questions to ask each group to guide them to a better understanding of the activity. For example, one activity has students place cards with specific bacterial species onto a phylum-level phylogenetic tree of bacteria. These cards state whether the particular species is pathogenic or not, so the end product is a tree that has both pathogenic and non-pathogenic bacteria in most phyla, indicating that pathogenicity evolved multiple times (it is homoplasious). The target learning outcome for the station is for students to be able to identify homoplasious traits and realize that these traits confound phylogenetic analysis. To help students achieve this outcome, I ask them to describe what kind of tree they would draw using only pathogenicity as a trait and to compare that tree to the one we know is correct.
- Instant polling. This has become very common in large lecture classes, where students answer multiple-choice questions using clickers. I think there are two ways to do instant polling well. The first is to engage students prior to teaching material by creating “expectation failure.” For example, prior to teaching plant water transport, I might ask the following clicker question: “A redwood tree is 100 m in height. Do you think the tree must expend energy to move water from the roots to its highest leaves?” If you haven’t learned that water transport is passive in plants, you would almost certainly think energy is required, and you would be quite surprised to find out it isn’t. This expectation failure should pique your curiosity and motivate you to engage with the subsequent discussion of water transport. The second good use of clickers is to assess how well students grasped an explanation of a difficult concept – for example, at the end of an introductory phylogenetics lecture, I might use clickers to ask students to determine whether different taxonomic groups are monophyletic, paraphyletic, or polyphyletic. If the students do poorly, then I can revise my lesson plan and re-teach the ideas using a different approach.
- Student-generated test questions. In this approach, the instructor solicits test questions from students, usually for bonus points or the promise that the best questions will appear on the exam. This forces students to think about which concepts are important in a given unit, and it gives the instructor insight into how students are thinking about the material. A fun variation on this approach, used by one of my undergraduate horticulture professors, is to offer bonus points for original subject-matter-related jokes.
The above approaches offer some good starting points for using formative assessments to drive student learning and improve teaching. I hope that the next time you sit down to write a syllabus, you will not only consider what summative assessments you will use to grade students, but also what formative assessments you will use to help them meet the learning outcomes of the course. Please comment below if you use other types of formative assessments and want to share your success stories.