A common opinion about open questions is that assessing them is subjective and laborious. So why use them? How can you assess open questions objectively? And how can you make your assessment as efficient as possible? In this blog, you can read about when an open question adds value and get tips on how to avoid common mistakes when assessing open questions.

Contents
- When should you use open questions?
- Efficient assessment of open questions
- Assessing open questions objectively – tips
- Conclusion

When should you use open questions?

By open questions, we mean questions that require the candidate to formulate their own answer. Closed questions are questions where the candidate selects the correct answer, or a combination of answers, from a number of options.

Why would you want to use open questions when you can also test with closed questions? If closed questions are indeed just as effective, testing with closed questions is recommended. It has a number of advantages, such as the convenience of automatic correction in digital test software and objective assessment (an answer is either right or wrong). Nevertheless, it may be necessary to use open questions. Consider situations such as:

- The candidate is assessed on their active knowledge and its active application. This often involves a situation in which the candidate must be creative and come up with and formulate the answer themselves.
- The analysis or reasoning is just as important as, if not more important than, the final answer. This is the case, for example, when providing a calculation.
- A closed question quickly becomes an 'open door' question: once the answer is shown, it is immediately obvious that it is the correct one. This also makes it difficult to come up with plausible alternative (incorrect) answers.
- Multiple answers may be correct, and it is impossible to list all the correct answers.
  If closed testing were used in such a case, it would raise issues regarding the objectivity of the question.
- There are no fixed standards or clear norms within a particular industry: for example, how exactly you should conduct a sales conversation or a conversation as a manager. Some answers are better than others without being outright wrong. This is difficult to measure with a closed question.

Efficient assessment of open questions

The assessment of open questions is made efficient by digital testing. This eliminates the need to distribute examination papers. An assessor receives an email informing them when exam papers are ready for them. They do not have to scroll through the papers and cannot accidentally skip any questions, because the system presents the questions in sequence. Questions and answers are clearly arranged next to each other, minimising the risk of errors. In addition, no time is lost when a second assessor is called in: immediately after the first assessment is completed, the second assessor can receive an email notifying them that an exam paper is ready for them.

Assessing open questions objectively – tips

How do you assess open questions objectively? Read the tips below.

Tip 1 – Be as specific as possible in your question

Only ask open questions that can be answered correctly or incorrectly. The solution often lies in making an open question so specific that only someone who has studied the subject, or has experience in it, can answer it. So do not ask questions about general knowledge.

Tip 2 – Do not formulate the sample answers too rigidly

Keep the sample answer brief. Only include what absolutely needs to be mentioned, rather than the most perfect possible answer.
Furthermore, if you feel that other formulations are also correct, include those as well. For example: 'Other formulations with the same meaning are also considered correct.' If you feel that there may be other answers besides those listed that are not incorrect, allow for them too. This may be the case with more creative solutions, where there is more than one way to achieve the same result. For example: 'Other answers to be assessed by the examiner.' Allow assessors to add comments; this is often possible in digital assessment. Based on the candidates' answers, the assessor can then flag that the model answer needs to be expanded with additional, supplementary answers.

Tip 3 – Work with positive scoring

Indicate where points are awarded rather than where points are deducted. By working with assessment criteria (in digital testing), you can indicate this very precisely and award points for each criterion. For each assessment criterion, give a brief description of a partial answer with a point allocation. For example:

- Characterisation of partial answer 1 – 1 point
- Characterisation of partial answer 2 – 1 point
- Characterisation of partial answer 3 – 1 point
Total number of points: 3

Tip 4 – Ensure clarity and structure

Use clear assessment criteria (see also tip 3) to avoid confusion among assessors. Do not include more than two 'sub-questions' in a single question. Or, if you do, split them into a, b and c questions. This way, both the candidate and the assessor know how many answers are actually required. Alternatively, simply ask more questions on the topic.

Tip 5 – Avoid overlap in the answer model

Take a critical look at the sample answers and do not ask for too many answers. Do they overlap? If so, reduce the number of answers to be given. For example: if there are three reasons, two of which overlap, ask for two reasons rather than three. Be sure to include all three reasons in the answer model. Use assessment criteria.
It may also be helpful to indicate what may not be considered correct.

Tip 6 – Be aware of the Horn and Halo effect

With the Horn effect, the assessor's overall impression of a candidate negatively influences the scoring; with the Halo effect, the influence is positive. The result is that the assessor awards too few or too many points.

- Use segmented assessment. In segmented assessment, an assessor first reviews all answers to question 1, then all answers to question 2, and so on. This means the assessor never has an overall picture of how a candidate is performing.
- Ensure that the candidate's final score (in digital testing) is not visible. If an assessor sees that one extra point would just make the candidate pass, they may be tempted to add an extra point somewhere. Since the aim is to assess each question on its own merits (and not the final score), it is better to keep the total score hidden.

Consultation with assessors

A general tip for improving the assessment of open questions is to consult regularly with the assessors. What do they notice? What do you notice? In response to complaints and objections, give examples of assessment errors and discuss together how to prevent them.

Conclusion

There are good reasons to ask open questions and, despite the disadvantages, there are countless measures you can take (especially with digital testing) to ensure that open questions can be assessed efficiently and objectively. Of course, it remains human work, but isn't that what we want? Would you like to share your opinion or find out more? Then please contact us.