Writing Better MCQs: Small Fixes, Big Gains

 By Dr Drew Tarmey & Dr Matt Mears



Multiple-choice questions (MCQs), or in some cases single best answer (SBA) questions, are a staple of assessment in higher education, especially in large, professionally oriented programmes. But while they’re easy to deliver and mark, they’re not always easy to write well.

In a recent Science Teaching Network workshop, Dr Drew Tarmey (University of Manchester) guided participants through the subtleties of effective MCQ design. Drawing on examples from medicine, physiology, and general science, the session unpacked the surprisingly delicate art of asking a good question—one that’s clear, fair, and truly tests the intended learning.

Rather than focusing on trick questions or rote recall, the emphasis was on designing MCQs that assess applied knowledge and clinical or conceptual reasoning. We explored what goes into a well-constructed question: how to write purposeful stems and lead-ins, how to build plausible distractors, and how to align question content with what’s actually taught.

The session also tackled common pitfalls—like ambiguous phrasing, grammatical clues, and unbalanced content—and offered practical techniques such as the “cover-up test” (covering the answer options to check whether the stem alone still poses a clear, answerable question) to help spot problems early. Alongside this, we looked at how digital platforms like Blackboard and NUMBAS influence the way questions are delivered and experienced by students.

Across all of this, one thread ran consistently through the session: MCQs aren’t just a logistical solution—they’re a pedagogical tool. And like any tool, their effectiveness depends on how thoughtfully they’re used.

The slide above illustrates common points where MCQ assessments can go wrong—from the initial writing phase to exam delivery. At nearly every point of failure, collaboration acts as a safeguard:

  • Insufficient items written: When writing is left to just one or two individuals under pressure, the result is a shallow question bank. Team-based writing sessions spread the load and generate a broader range of questions.
  • Poor quality questions: Without peer review, it's easy to miss vague lead-ins, biased phrasing, or grammatical clues. Reviewing questions as a group catches issues early and improves overall clarity.
  • Imbalanced or unblueprinted coverage: Working with others makes it easier to map your question set onto the intended curriculum, ensuring no topic is over- or under-assessed.
  • Proofreading and logic errors: Typos, broken formatting, and inconsistent option structures can undermine otherwise strong questions. Colleagues are more likely to spot these mistakes.
  • Overexposure or inappropriate reuse: When a question is reused too frequently or appears in too many places, its value diminishes. Shared question banks and tagging systems help manage item usage strategically (see the sketch below for one way this can work).
Including others in the authoring and reviewing process not only reduces the risk of technical issues or student confusion—it also builds a stronger culture around assessment. It enables less experienced colleagues to develop their skills in a supportive setting, and fosters a sense of collective ownership over the quality and fairness of student evaluation.
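
To make the blueprinting and reuse points above concrete, here is a minimal Python sketch of how a shared question bank might track topic coverage and item exposure. Everything in it (the Item fields, the blueprint_gaps and usable_items helpers, and the threshold of three exposures) is a hypothetical illustration, not the data model of Blackboard, NUMBAS, or any particular platform.

```python
from dataclasses import dataclass

# Illustrative sketch only: the field names and the exposure threshold
# are assumptions, not the data model of any real question-bank system.

@dataclass
class Item:
    stem: str
    topic: str            # blueprint category the item assesses
    times_used: int = 0   # how often the item has appeared in live exams

def blueprint_gaps(bank, blueprint):
    """Compare items-per-topic against the intended blueprint counts.

    Negative values flag under-assessed topics; positive values flag
    over-assessed ones.
    """
    counts = {}
    for item in bank:
        counts[item.topic] = counts.get(item.topic, 0) + 1
    return {topic: counts.get(topic, 0) - wanted
            for topic, wanted in blueprint.items()}

def usable_items(bank, max_exposures=3):
    """Filter out overexposed items so reuse stays within a chosen limit."""
    return [item for item in bank if item.times_used < max_exposures]

bank = [
    Item("Which muscle plantarflexes the foot?", "anatomy", times_used=4),
    Item("Define stroke volume.", "physiology"),
]
blueprint = {"anatomy": 2, "physiology": 2, "pharmacology": 1}

print(blueprint_gaps(bank, blueprint))  # negative = under-assessed topic
print(len(usable_items(bank)))          # the overexposed anatomy item is excluded
```

Run against a real bank, the first check highlights topics with too few (or too many) items relative to the blueprint, while the second quietly retires items that have been seen too often. A shared spreadsheet with the same columns would do the same job; the point is that the bank, not any one author’s memory, holds the usage history.
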

Whether it's through departmental writing retreats, informal buddy systems, or a shared Google Doc with comments, creating MCQs together isn't just easier — it's better.

In the next post, we'll dive into the anatomy of a well-written MCQ, showing which pitfalls to avoid and how to construct an effective question.


Dr Drew Tarmey works in the School of Medical Sciences at the University of Manchester.

Dr Matt Mears (he/him/they/them) is a Senior University Teacher in Physics within the School of Mathematical and Physical Sciences, and has been responsible for the second-year laboratory on and off since 2012. You can contact him by email (m.mears@sheffield.ac.uk) or just put a coffee chat into their diary.