How do you evaluate open-ended questions objectively?
A common judgment about open-ended questions is that their assessment is subjective and laborious. So why use them at all? How do you evaluate open-ended questions objectively? And how do you make the assessment as efficient as possible? In this blog, you’ll read about when an open-ended question adds value and get tips on how to avoid common open-ended grading mistakes.
When do you use open-ended questions?
Open-ended questions are questions in which the candidate must formulate the answer themselves. Closed questions are questions in which the candidate chooses the correct answer, or a combination of answers, from a set of options.
Why would you want to use open-ended questions when you can also test with closed questions? If a closed question tests the same thing just as well, it is the recommended choice. Closed questions have a number of advantages, such as automatic correction when using digital assessment software and objective grading (an answer is simply right or wrong).
Still, it may be necessary to use open-ended questions. Consider situations such as:
- The candidate is tested on active knowledge and its active application. This often involves a situation in which the candidate has to be creative and formulate the answer themselves.
- The analysis or rationale is as important as, or more important than, the final answer. This is the case, for example, when a calculation has to be worked out.
- A closed question would quickly give the answer away: once the correct answer appears among the options, it is soon obvious that it is the right one. This also means that it is difficult to come up with plausible alternative (wrong) answers.
- Multiple answers can be correct and it is impossible to list all of them in advance. If a closed question were used anyway, you would run into precisely the objectivity problem you are trying to avoid.
- There are no set standards or clear norms within a particular industry, for example how exactly to conduct a sales call or a conversation as a manager. Some answers are better than others without the rest being outright wrong. This is difficult to capture in a closed question.
Assessing open-ended questions efficiently
Open-ended questions can be assessed efficiently through digital assessment. There is no need to physically distribute exam papers: a reviewer receives an email when an exam paper is ready for them. They do not have to scroll back and forth or accidentally skip questions, because the system presents the questions one at a time, with question and answer neatly together, reducing the chance of error. In addition, no time is lost when a second reviewer is used: immediately after the first review is completed, the second reviewer can receive an email notifying them that the exam paper is ready.
Assessing open-ended questions objectively – tips
How do you evaluate open-ended questions objectively? Read the tips below:
Tip 1 – Be as specific as possible in the question statement
- Ask only open-ended questions that can clearly be answered right or wrong. The solution often lies in making an open-ended question so specific that only someone who has studied the material or has experience with it can answer it. So don’t ask about general knowledge.
Tip 2 – Don’t formulate sample answers too rigidly
- Keep the sample answer limited. Include only what must necessarily be mentioned, not the most perfect possible answer. Also, if you expect that other formulations are equally acceptable, say so in the sample answer. For example: “Other phrasings with the same tenor are also counted as correct.”
- If you expect there may be other, equally valid answers besides those formulated, say so as well. This may be the case with more creative assignments where not one but several roads lead to Rome. For example: “Other answers at the discretion of the reviewer.”
- Allow reviewers to leave corrector comments; in digital assessment this is often possible. A reviewer can then flag, based on the candidates’ answers, that the sample answer should be expanded with additional correct answers.
Tip 3 – Work with positive scoring
- Indicate what earns points rather than what leads to point deductions. By working with assessment aspects (in digital assessment), you can specify this very precisely and score per aspect. For each assessment aspect, give a brief characterization of the partial answer together with a point value. For example:
- Characterization of partial answer 1 – 1 point
- Characterization of partial answer 2 – 1 point
- Characterization of partial answer 3 – 1 point
Total points: 3
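For anyone who wants to mirror this aspect-based scoring in their own tooling or spreadsheet logic, here is a minimal sketch in Python. The class and field names are hypothetical illustrations, not part of any specific assessment package: each aspect carries a brief characterization and a point value, and the question score is simply the sum of the points awarded per aspect.

```python
# Minimal sketch of scoring per assessment aspect (hypothetical structure).
from dataclasses import dataclass

@dataclass
class AssessmentAspect:
    description: str   # brief characterization of the partial answer
    max_points: int    # points available for this aspect
    awarded: int = 0   # points the reviewer actually gives

def question_score(aspects):
    """Sum the points awarded per aspect into the question total."""
    return sum(a.awarded for a in aspects)

aspects = [
    AssessmentAspect("Characterization of partial answer 1", 1, awarded=1),
    AssessmentAspect("Characterization of partial answer 2", 1, awarded=1),
    AssessmentAspect("Characterization of partial answer 3", 1, awarded=0),
]
print(question_score(aspects), "out of", sum(a.max_points for a in aspects))  # 2 out of 3
```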
Tip 4 – Provide overview and structure
- Use clear assessment aspects (see also tip 3) to avoid confusion among assessors.
- Ask no more than two “sub-questions” within one question, or, if you ask more, split them into a, b and c parts. The candidate and the assessor then know exactly how many answers are expected. Alternatively, simply ask more separate questions on the topic.
Tip 5 – Avoid overlap in the answer model
- Look critically at the sample answers and don’t ask for too much. Do they overlap? Then lower the number of answers the candidate must give. For example: if there are three reasons and two of them overlap, ask for two reasons rather than three, but still include all three reasons in the answer model.
- Use assessment aspects. It may also help to indicate what should not be counted as correct.
Tip 6 – Be alert to the horn and halo effects
- With the horn effect, a negative overall impression of a candidate weighs down the scoring; with the halo effect, a positive overall impression pushes it up. The result: the evaluator awards too few or too many points.
- Use segmented assessment. In segmented assessment, a reviewer first marks every candidate’s answer to question 1, then every answer to question 2, and so on. The overall impression of how a particular candidate “is doing” is then missing (see the sketch after this list).
- Ensure that a candidate’s running total (in digital assessment) is not visible. When a reviewer sees that one more point would just tip the candidate over the pass mark, they may be tempted to add a point somewhere. Since the goal is to judge each question on its own merits (and not to determine the final score), it is better to keep the total score hidden.
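To make concrete what segmented assessment means for the order of reviewing, here is a minimal sketch in Python. The data and variable names are purely illustrative and not tied to any particular assessment package: it simply groups answers by question number rather than by candidate, so the reviewer never sees one candidate’s full exam in a row.

```python
# Minimal sketch of segmented assessment ordering (hypothetical data).
from collections import defaultdict

# (candidate, question number, answer) as they might come out of an exam round.
answers = [
    ("Candidate A", 1, "answer A1"), ("Candidate A", 2, "answer A2"),
    ("Candidate B", 1, "answer B1"), ("Candidate B", 2, "answer B2"),
]

# Group by question, not by candidate: the reviewer works through every
# answer to question 1 first, then every answer to question 2, and so on.
by_question = defaultdict(list)
for candidate, number, answer in answers:
    by_question[number].append((candidate, answer))

for number in sorted(by_question):
    for candidate, answer in by_question[number]:
        print(f"Question {number} - {candidate}: {answer}")
```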
Consultation with reviewers
A general recommendation for improving the assessment of open-ended questions is to consult regularly with assessors. What do they notice? What do you notice? In response to complaints and objections, give examples of judgment errors and discuss together how to avoid them.
Conclusion
There are good reasons to ask open-ended questions, and despite the drawbacks there are numerous measures (especially with digital assessment) you can take to ensure that open-ended questions are assessed efficiently and objectively. It remains human work, of course, but isn’t that also what we want? Want to share your opinion or learn more? Please contact us.