When should we use open-ended assessment tasks?
Understanding when they work, why they matter, and how to use them well
In our post on knowledge architecture, we explored how the structure of knowledge within a subject shapes the way we assess it. This post looks at why some subjects rely more on open-ended tasks than others—and how to use them well in your own classroom.
What do we mean by openness in assessment?
An assessment task is more open if it allows for a wider range of acceptable responses. This contrasts with a closed task that has a clearly defined correct answer. Openness isn’t a binary distinction, but a continuum. For example:
Very closed: “New Delhi is the capital of India – true or false?”
Somewhat closed: “State two tools that might be suitable for making a tongue and groove joint.”
Somewhat open: “Why did the Great War come to an end?”
Very open: “Tell me as much as you can about Buddhism.”
Openness doesn’t just affect how we mark a response—it influences the kind of thinking the task demands. More open questions ask students to make choices: what knowledge to use, how to organise it, and how to express it.
By definition, open-ended tasks require us to be at least somewhat open about what counts as a good answer. That doesn’t mean anything goes—but it does mean stepping away from fixed responses and accepting multiple valid constructions.
What openness isn’t
It’s easy to assume that open-ended tasks are always long, wide-ranging, or difficult, but this isn’t necessarily so.
First, openness isn’t about length. A very open task might ask for a six-word story. A closed one might require pages of working to prove a theorem or an hour’s practical work in DT. The length of response tells us little about how open a task is.
Second, openness isn’t the same as curriculum breadth. Open questions often draw on multiple areas, but so do many closed ones—such as a science question requiring knowledge from biology, chemistry, and physics. An open task might, by contrast, focus narrowly on a single concept but invite a creative or varied response.
Third, openness doesn’t predict difficulty. Closed questions can be extremely hard—especially if the answer is narrow or unfamiliar. Open tasks may be more accessible, particularly for lower attainers, because they allow students to draw on what they do know.
In short: openness is not about length, breadth, or difficulty. Understanding what it isn’t helps us make better choices about when—and how—to use it.
The problem with openness
If open-ended tasks allow for richer, more authentic responses, why not use them all the time?
Because they come with trade-offs. Openness can surface deep understanding—but it also introduces challenges around clarity, marking, and fairness.
The first is ambiguity—and not all ambiguity is helpful. Sometimes students don’t know what’s expected (task ambiguity). When expectations are unclear, students who understand the material may miss the mark, while others guess what’s wanted. In those cases, ambiguity weakens the signal. More useful ambiguity arises where the task allows for multiple valid responses (interpretive openness) or is intentionally under-specified to test how students construct meaning (constructive ambiguity).
Second, marking becomes more subjective. Closed tasks allow for clear criteria. Open tasks are harder to mark consistently, especially when applied across different students or teachers—raising concerns about reliability in high-stakes contexts.
Third, validity is not necessarily high. A task may feel rich but fail to assess what we really care about. Without a tight link between task and construct, we risk rewarding performance over understanding.
So why use open-ended tasks at all?
Given the challenges with open-ended tasks, it’s fair to ask: why use them at all?
The answer lies in what they let us observe. Open tasks reveal not just what students know, but how they think—what knowledge they draw on, how they organise it, and how they apply it in loosely cued situations.
Take the question: “Why did a King of England die at the Battle of Hastings?” It’s a somewhat open task. There isn’t one correct answer—but there are many poor ones. What matters is whether the student selects relevant knowledge and builds a coherent explanation.
Here are four key reasons we might choose an open format:
To test knowledge transfer when cues are weak
We may have a clear idea of what we want to see—such as reference to the succession crisis—but leave the question open to test whether students can find and apply that idea independently.
To allow for multiple valid answers
Students might emphasise William’s strategy, or Harold’s exhaustion after marching south. Both are legitimate, depending on how they’re argued.
To assess synthesis and flexible thinking
Sometimes the goal is to see whether students can connect ideas across topics—drawing comparisons, spotting patterns, or forming integrated arguments.
To invite creative or subjective responses
In expressive subjects, we may value originality or voice. The goal isn’t a single “right” answer, but a meaningful, constructed response.
Why some subjects use openness more than others
If openness is so powerful—yet so problematic—why do some subjects use it more than others?
The answer lies in the nature of knowledge. As we discussed in our post on knowledge architecture, each subject has its own internal structure: how knowledge is built, agreed upon, and used. This shapes both teaching and assessment.
Consider two contrasting examples.
In mathematics, there is typically high agreement about what counts as knowledge, and the knowledge structure is hierarchical. Tasks are often necessarily closed. If you ask a student to “solve for x,” there is a correct answer, and opening the task up to explore their understanding of the concept risks losing clarity or purpose. Many maths teachers (rightly) push back against that.
In history, while factual accuracy matters, assessment often focuses on interpretation and argument about real, contested events. Open-ended tasks make sense here—they reflect the subject’s values and ways of thinking.
What matters is alignment. An open assessment task may be ideal in one subject—or in one part of a subject’s curriculum—yet unhelpful elsewhere.
The dangers of misalignment
When assessment tasks don’t align with a subject’s knowledge architecture, problems arise—sometimes subtly, sometimes with serious consequences.
In subjects with high agreement and convergence—like maths, physics, or grammar—open-ended tasks can blur what students are meant to show. A vague or partial response might reflect misunderstanding, poor recall, or simply confusion about the question. The result is noise: a weak signal about what the student knows.
At the other end, forcing closed questions into subjects that rely on open responses can flatten complexity. In history or literature, an overuse of fixed-response tasks risks promoting surface recall over deeper thinking, argument, or interpretation.
How to use open tasks well
Once you’ve decided an open-ended task is right for your subject and purpose, the next challenge is designing it well.
Poorly designed open tasks can confuse students and produce unreliable results. But designed with care, they can offer rich insights into student thinking. Here are five key principles:
Be clear about what you’re trying to find out
What kind of response will show the understanding you’re looking for—knowledge recall, synthesis, flexible thinking? The prompt should be open to the student, but not open in your own mind.
Give cues—when appropriate
Total openness is rarely helpful. If the goal is argument or connection, scaffolds like short sources, guiding themes, or reminders of the task’s aim can prompt better thinking—without “giving away” the answer.
Design with marking in mind
You may not define a single right answer, but you should know what good looks like. Model responses or agreed marking criteria (e.g. evidence, structure, originality) help ensure consistency—especially across multiple markers.
Help students learn how to respond
Struggles often stem from uncertainty about how to answer, not what to say. Use scaffolded approaches in lessons to help students learn how to structure their responses to open-ended tasks.
Moderate if using for high-stakes, comparative assessments
When used for grading—especially across classes or schools—plan how to maintain consistency. This might include moderation meetings, annotated exemplars, or structured comparative judgement.
Open-ended tasks can be messy—but also deeply meaningful, especially when they reflect the subject, curriculum, and thinking we truly want to assess.