How do you objectively assess open questions?
Open questions are often said to be subjective and laborious to assess. So why use them at all? How do you assess open questions objectively? And how do you make that assessment as efficient as possible? This blog explains when open questions add value and gives tips on preventing common mistakes in their assessment.
When do you use open questions?
We define open questions as questions where candidates have to formulate the answer themselves. Closed questions are questions where the candidate chooses the correct answer, or a combination of answers, from a number of options.
Why would you use open questions when you can also test with closed ones? If closed questions can indeed test the same thing, they are recommended. After all, closed questions have a number of advantages: they can be corrected automatically by digital testing software and assessed objectively (an answer is simply right or wrong).
However, it may sometimes be necessary to use open questions. Think of situations such as:
- The candidate is tested on active knowledge and its application. This often involves a situation in which the candidate has to be creative and formulate an answer or solution of their own.
- The analysis or substantiation is just as important as, or perhaps more important than, the final answer. This is the case, for example, when a calculation must be shown.
- The correct answer to a closed question would stand out as the only logical option, which also makes it difficult to come up with plausible alternative (wrong) answers.
- Multiple answers can be correct and it is impossible to list them all as options. If you nevertheless decide to use a closed question in that case, its objectivity comes under threat.
- There are no fixed standards or clear norms within a certain industry. For example: how exactly to conduct a sales conversation, or a conversation as a manager. Some answers are better than others, but that does not make the others wrong. This is difficult to capture in a closed question.
Efficient assessment of open questions
Digital testing makes the assessment of open questions efficient. There is no need to distribute exam papers: a corrector receives an email when the exam work is ready for assessment. Correctors do not have to leaf through papers and cannot accidentally skip questions, because the system presents the questions sequentially. Questions and answers are shown together, so the chance of errors is small. In addition, no time is lost when a second corrector is used: immediately after the first assessment is completed, this second corrector can receive an email saying that the exam work is ready for assessment.
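As an illustration, here is a minimal sketch of that flow in Python. The notify() and assess() helpers are hypothetical stand-ins for a testing platform's email notifications and sequential grading view; this is not any particular product's API.

```python
# Minimal sketch of the grading flow described above. notify() is a
# hypothetical stand-in for the testing software's email notification.

def notify(corrector: str, message: str) -> None:
    print(f"to {corrector}: {message}")  # stand-in for a real email

def assess(corrector: str, questions: list[str]) -> None:
    # Questions are presented one by one, in a fixed order, so the
    # corrector cannot browse around or accidentally skip one.
    for number, question in enumerate(questions, start=1):
        print(f"{corrector} assesses question {number}: {question}")

exam = ["Explain X.", "Show the calculation for Y.", "Substantiate Z."]

notify("first corrector", "Exam work is ready for assessment.")
assess("first corrector", exam)

# The second corrector is only notified once the first assessment is done.
notify("second corrector", "Exam work is ready for a second assessment.")
assess("second corrector", exam)
```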

Tips for objectively assessing open questions
Tip 1 – Be as specific as possible in the question
- Only ask open questions that can be answered both correctly and incorrectly. The solution often lies in making an open question so specific that only someone who has learned or experienced the material can answer it. So do not ask for general knowledge.
Tip 2 – Do not formulate the example answers too narrowly
- Keep the example answer limited. Only write down which answer elements are required, not what the most perfect answer looks like. Also: if there are other formulations that are equally correct, say so.
For example: “Other phrases with the same meaning are also regarded as correct.”
- If you feel that, alongside the formulated answers, other answers can be given that are not wrong, note this too. This may be the case with more creative solutions, where not one but several roads lead to Rome. For example: “Other answers to be judged by the corrector.”
- Give correctors the opportunity to add comments; digital testing often allows this. Based on the candidates’ answers, a corrector can then flag that the sample answer needs to be expanded.
Tip 3 – Work with positive assessments
- Indicate what earns points rather than what points are deducted for. By working with assessment aspects (in digital testing), you can specify this very precisely and score each aspect separately. For each assessment aspect, give a short characterisation of the sub-answer together with its score, for example (a code sketch of such a scheme follows this list):
- Type of sub-answer 1 – 1 point
- Type of sub-answer 2 – 1 point
- Type of sub-answer 3 – 1 point
Total points: 3
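As a concrete illustration, here is a minimal sketch of such a positively scored rubric, assuming a hypothetical Aspect structure; the aspect descriptions are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Aspect:
    description: str  # short characterisation of the sub-answer
    points: int       # points awarded when this aspect is present

# Hypothetical rubric for one open question (descriptions are illustrative).
rubric = [
    Aspect("Names the cause", 1),
    Aspect("Explains the underlying mechanism", 1),
    Aspect("Draws a correct conclusion", 1),
]

def score(present: list[bool]) -> int:
    """Positive scoring: points are only added, never deducted."""
    return sum(a.points for a, ok in zip(rubric, present) if ok)

print(score([True, True, False]))  # -> 2, out of a maximum of 3
```

The point of this design is that each aspect can only add points; a missing aspect simply contributes nothing, so the corrector never has to decide how much to deduct.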
Tip 4 – Provide overview and structure
- Use clear assessment aspects (see also tip 3) to avoid confusion among correctors.
- Do not ask more than two ‘sub-questions’ in one question. If you do, label them a., b. and c., so that the candidate and the corrector know how many answers need to be given. Alternatively, simply ask more separate questions about the subject.
Tip 5 – Avoid overlap in the answer model
- Look critically at the ‘example answers’ and do not ‘over-ask’. Do they overlap? If so, lower the number of answers the candidate must give. For example: if there are three reasons, two of which overlap, ask for two reasons rather than three, but include all three in the answer model.
- Use assessment aspects. It can also help to indicate what should not be regarded as correct.
Tip 6 – Be alert to the HORN and HALO effect
- With the HORN effect, a negative overall impression of a candidate weighs down the scoring of individual answers; with the HALO effect, a positive impression inflates it. The result: the corrector gives too few or too many points.
- Use segmented assessment: the corrector first checks all answers to question 1, then all answers to question 2, and so on (see the sketch after this list). This masks the overall picture of how a candidate is ‘doing’.
- Make sure that the candidate’s running total (in digital assessment) is not visible. If the corrector sees that the candidate needs just one more point to pass, he or she may be tempted to add a point somewhere. Since the aim is to judge each question on its own (and not the final score), it is better to keep the overall score hidden.
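Both measures can be combined, as in this minimal sketch; the submissions data and the grade() stub are hypothetical and stand in for the corrector's judgement in real testing software.

```python
from collections import defaultdict

# Hypothetical submissions: candidate -> {question number: answer}.
submissions = {
    "candidate_a": {1: "...", 2: "..."},
    "candidate_b": {1: "...", 2: "..."},
}

def grade(question: int, answer: str) -> int:
    """Stand-in for the corrector scoring one answer against the rubric."""
    return 1  # placeholder

# Segmented order: group all answers per question, not per candidate.
by_question = defaultdict(list)
for candidate, answers in submissions.items():
    for number, answer in answers.items():
        by_question[number].append((candidate, answer))

totals = defaultdict(int)  # running totals, kept internal
for number in sorted(by_question):
    for candidate, answer in by_question[number]:
        # The corrector sees only this one answer and its rubric;
        # totals[candidate] is deliberately never displayed here.
        totals[candidate] += grade(number, answer)
```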
A general piece of advice for improving the assessment of open questions: consult the correctors regularly. What do they notice? What do you notice? In response to complaints and objections, give examples of assessment errors and discuss together how to prevent them.
Conclusion
There are good reasons to ask open questions, and despite the disadvantages there are numerous measures (especially with digital testing) you can take to ensure that open questions are assessed efficiently and objectively. Of course, the human factor always remains, but isn’t that exactly what we want? Want to share your opinion or find out more? Then contact us.