Four-way High Fives During Exams: Adding a Group Phase to Provide Immediate Feedback and Increase Enjoyment

Jared Stang, University of British Columbia, Vancouver
Joss Ives, University of British Columbia, Vancouver

Active learning strategies, such as peer instruction and collaborative group work, are important components of many contemporary physics classrooms.1 A key part of the efficacy of these teaching techniques comes from increased student access to feedback—“the most powerful single influence” on student achievement.2 For maximum impact, feedback should focus on performance and learning, address small chunks of material, be timely, and match the purpose of the assessment.2

The two-phase collaborative group exam is an active learning strategy that provides students with an opportunity for feedback in situations where timely feedback is typically absent. In a two-phase exam, students first complete the exam individually—the solo phase—and then form groups to complete the same or similar questions in the group phase. Students receive fine-grained and responsive feedback directly matched to the assessment, immediately, from their peers, while they still care about it. This innovation can be effective with many types of low- or high-stakes assessments, such as quizzes, midterms, or final exams.

During the group phase, students are animated, enthused, and often smiling, and the room is loud. Students often leave the test with positive body language. In fact, it is our experience that group exams are the course activity with the highest level of student engagement. This feedback and strong engagement translate into learning. The average for the group phase typically exceeds the average for the solo phase by 15-20%, indicating that, on average, students discuss or see more correct understanding than they brought to the solo phase. Studies on retention of this learning3,4 have shown a statistically significant increase in retention when students completed a group phase.

Our workshop explored several practical aspects of group exam implementations. Participants first identified characteristics of questions that may facilitate learning in the group phase: those with salient conceptual pieces rather than procedural calculations, those with a high ratio of sense-making to answer-making, and those that invite input from a diverse range of perspectives. We identified algebra-heavy, traditional "plug-and-chug" style problems as less effective for a group phase. A fundamental design principle is to maximize feedback opportunities by maximizing group conversations.

Next, we had the participants consider some aspects of group exam design and then shared our recommended implementation for first-time users: Start with a low-stakes assessment, provide 10 minutes of group phase time for every 20 minutes of solo phase time, and place a much higher grading weight on the solo exam (e.g., a weighting of 85% solo and 15% group, or similar). As with all active learning strategies, implementation should include telling and showing the students why you chose the activity and making sure the students know the logistics of the activity. Some participants articulated a desire to present groups with more difficult or synthesizing problems. While this is a viable group assessment strategy, given our primary framing of the two-phase exam as a feedback activity, we default to using the same or very similar problems for the group phase, sometimes lightly edited to be more discussion-friendly.
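The recommended 85%/15% weighting is easy to make concrete. A minimal sketch in Python follows; the function name and the optional "no-harm" rule (guaranteeing that a weak group score never lowers a student's grade below their solo score) are our illustrative assumptions, not a prescription from the workshop.

```python
def two_phase_grade(solo, group, solo_weight=0.85, no_harm=True):
    """Combine solo- and group-phase scores (both on a 0-100 scale).

    Uses the suggested 85/15 solo/group weighting by default.
    The no_harm rule is an assumption for illustration: when enabled,
    a student's combined grade is never lower than their solo score.
    """
    blended = solo_weight * solo + (1.0 - solo_weight) * group
    if no_harm:
        return max(solo, blended)
    return blended

# A student scoring 70 solo and 90 in the group phase earns
# 0.85 * 70 + 0.15 * 90 = 73, a modest but meaningful boost.
combined = two_phase_grade(70, 90)
```

Because the solo phase dominates the weighting, the group phase shifts grades only slightly, which keeps the stakes of the collaborative portion low while preserving its feedback value.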

Group formation raises further implementation choices. With respect to group size, workshop participants noted possible drawbacks of larger groups: “too many cooks in the kitchen,” reduced participation for some group members, and perhaps marginalization, consistent with our own observations. Based on group-phase performance results collected over the past few years in our courses, we recommend groups of three or four.

In courses where students work within assigned groups for extended periods of time, it works well to maintain those groups for the group exam. However, many courses will require that ad hoc groups be formed for the group exam, and the choice between instructor-formed and student-formed groups—an open question in the two-phase exam literature—will need to be made. Instructors choosing to form groups themselves should be careful to avoid isolating female or minority students, advice we extend from Heller and Hollabaugh’s observation5 that group dysfunction is higher in groups with isolated females. We tend to let the students choose their own groups, but recommend that instructors offer to facilitate for those students who find it challenging to form a group. The literature provides some support for student-formed groups, with female students seeing more value in group work6 and groups engaging in more productive scientific behaviours when friends work together.7

We closed the workshop by sharing comments from a recent student survey we ran after a sophomore chemistry midterm. Responses to a prompt asking for advice to help future students get the most out of their group exam experience included themes of consensus (“Discuss each answer in depth, to make sure all group members understand why they reached that decision”; roughly 40% of comments), speaking up and sharing (“Don’t be afraid to share contrasting opinions or bring up new possibilities, that’s what makes group exams beneficial!”; roughly 25% of comments), listening and respect (“Listen to and respect everyone’s opinions, even if you don’t agree with them”; roughly 15% of comments), and knowing your group beforehand (“Get to know your group members before the exam”; roughly 15% of comments).

Overall, two-phase group exams are a low-barrier, easy-to-implement way to incorporate active learning and feedback into a traditional summative assessment. Furthermore, they are overwhelmingly well-received by students: In surveys, we typically see more than 95% of students recommend continued use of two-phase exams for their midterms, matching or exceeding previously reported results.8 We love these exams, and suspect that you might too.

For more information, please see our workshop resources online at https://osf.io/9q86r/ or contact us by email (jared@phas.ubc.ca and joss@phas.ubc.ca).

Jared Stang is a lecturer at the University of British Columbia.

Joss Ives is a senior instructor at the University of British Columbia.

References

1. S. Freeman, S. L. Eddy, M. McDonough, M. K. Smith, N. Okoroafor, H. Jordt, & M. P. Wenderoth, “Active learning increases student performance in science, engineering, and mathematics,” Proceedings of the National Academy of Sciences, 111(23), 8410-8415 (2014).

2. G. Gibbs & C. Simpson, “Conditions under which assessment supports students’ learning,” Learning and Teaching in Higher Education, (1), 3-31 (2005).

3. B. H. Gilley & B. Clarkston, “Collaborative testing: Evidence of learning in a controlled in-class study of undergraduate students,” Journal of College Science Teaching, 43(3), 83-91 (2014).

4. J. Ives, “Measuring the Learning from Two-Stage Collaborative Group Exams,” 2014 PERC Proceedings [Minneapolis, MN, July 30-31, 2014], edited by P. V. Engelhardt, A. D. Churukian, and D. L. Jones, doi:10.1119/perc.2014.pr.027.

5. P. Heller & M. Hollabaugh, “Teaching problem solving through cooperative grouping. Part 2: Designing problems and structuring groups,” American Journal of Physics, 60(7), 637-644 (1992).

6. S. L. Eddy, S. E. Brownell, P. Thummaphan, M. C. Lan, & M. P. Wenderoth, “Caution, student experience may vary: Social identities impact a student’s experience in peer discussions,” CBE Life Sciences Education, 14(4), 1–17 (2015). http://doi.org/10.1187/cbe.15-05-0108

7. M. Azmitia & R. Montgomery, “Friendship, transactive dialogues, and the development of scientific reasoning,” Social Development, 2(3), 202–221 (1993). http://doi.org/10.1111/j.1467-9507.1993.tb00014.x

8. G. W. Rieger & C. E. Heiner, “Examinations that support collaborative learning: The students' perspective,” Journal of College Science Teaching, 43(4), 41-47 (2014).


Disclaimer – The articles and opinion pieces found in this issue of the APS Forum on Education Newsletter are not peer refereed and represent solely the views of the authors and not necessarily the views of the APS.