How do you create a good test?

Five things to keep in mind

Assessments are conducted on a large scale: in publicly funded and private education, within company training programmes, by examination institutes and by professional associations. But are they always done properly? Who isn't familiar with that unpleasant feeling that can creep up on you after an exam? Common comments include ‘They asked for pointless details’ or ‘It was unclear what some of the questions meant’. It's easy to fall into a trap. In this blog, we highlight five pitfalls when it comes to developing tests. Keep them in mind and create a good test!   

  1. The test is too short;
  2. The test is about ‘peripheral matters’;
  3. There is debate about the answer;
  4. The scoring system is unfair or insufficiently clear;
  5. The test contains too much text.

The test is too short.

If a test is (too) short, candidates cannot adequately demonstrate their knowledge and insight. Moreover, the score achieved on a test that is too short may be based on chance: if you happen to get questions about the part of the material you have mastered well, you will achieve a good result, but it could just as easily turn out the other way around. This means that the reliability of a test that is too short is low. A test that is too short is usually not valid either: based on too few questions, you cannot see to what extent someone actually masters the material, because you simply do not measure what you intend to measure.

The scope of a test depends greatly on the types of questions used, the purpose of the test (formative or summative), the value attached to the test, the scope of the subject matter, and the time available for the examination. In general, the more questions there are, the more reliable the test is.
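The claim that more questions make a test more reliable can be quantified with the Spearman–Brown prophecy formula from classical test theory (the formula is not mentioned in the original post, but it is the standard way to express this relationship): if a test with reliability r is lengthened by a factor n using comparable questions, the predicted reliability becomes

    r_n = (n × r) / (1 + (n − 1) × r)

For example, doubling (n = 2) a test whose reliability is 0.60 gives a predicted reliability of (2 × 0.60) / (1 + 0.60) = 0.75.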

The test is about ‘peripheral matters’.

What is actually being asked and why does one question fit into the test and another not? Always look at the learning objectives that underlie the test and ask questions about the core of those learning objectives. For example, if you want to test whether candidates understand the different phases involved in building a house, there is no point in asking a question about the properties of tropical hardwood. Or worse still: asking when tropical hardwood was first used in construction.

How can you prevent such ‘off-topic’ questions? Have the test questions reviewed by a subject expert other than the test developer. This reviewer looks at the questions with a fresh perspective and is therefore well placed to filter out irrelevant questions.

There is debate about the answer.

A question may be compelling or well-formulated, but there is something wrong if there is disagreement among experts about the correct answer. Always check this:

  • Is the question specific enough? If the question is too broad, it is not clear which answer is meant, and often several answers are correct.
  • Is there agreement in practice, or are the standards debatable? Questions about issues that have not yet been fully settled in the sector can lead to disagreement about what constitutes a good or bad answer, which causes problems in tests. For example: a question about how a mediator should act in a particular situation. As long as it does not contravene the professional code of conduct, a course of action is not immediately wrong.
  • Is the question unambiguous, or do different (experienced) people interpret it differently?

The same remedy applies here: review by one or more subject-matter experts is necessary to prevent discussion after the test has been taken.

The scoring system is unfair or insufficiently clear.

Sometimes you see excellent exam questions that touch on the core of the learning objective, or even of the subject itself, and that expect candidates to provide multiple answers. How do you handle scoring in such cases? Suppose ten items need to be filled in: do you award ten points? That is fair, but then this part carries a relatively heavy weight in the total score. And if you award a maximum of three points instead, when does a candidate earn one, two or three of them?

Or suppose each question is worth ten points, even when only one answer is required. This can lead to differences in scoring between assessors, because each assessor judges from their own perspective.

In short, leaving room for interpretation in the answer model often results in an assessment that is not entirely accurate. Therefore, specify in the answer model how many points are awarded for each answer. Clear guidelines help keep the scoring as consistent, objective and fair as possible.
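As an illustration, here is a minimal sketch of such an explicit answer model, with hypothetical questions and point values (none of these names come from the post): every accepted answer carries a fixed number of points, so different assessors arrive at the same score.

```python
# Hypothetical answer model: each accepted answer has a fixed number
# of points, decided in advance, so assessors cannot diverge.
ANSWER_MODEL = {
    "q1": {"foundation": 1, "framing": 1, "roofing": 1},  # max 3 points
    "q2": {"building permit": 2},                          # max 2 points
}

def score(question_id, given_answers):
    """Sum the predefined points for each accepted answer that was given."""
    model = ANSWER_MODEL[question_id]
    return sum(points for answer, points in model.items()
               if answer in given_answers)

print(score("q1", {"foundation", "roofing"}))  # 2
print(score("q2", {"building permit"}))        # 2
```

Because the points per answer are fixed in the model rather than left to the assessor, two assessors marking the same answer sheet will always produce the same total.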

The test contains too much text.

The test developer assumes, sometimes incorrectly, that candidates ‘just need to read properly’. Be careful not to test reading skills, as these are not usually a learning objective. Formulate the questions in the simplest possible language. Do not use narrative cases to make the test more enjoyable; a test does not need to be enjoyable. Always check questions for superfluous information.

Conclusion

Keep these five common pitfalls in mind when developing your test. However, there is much more to creating a high-quality test. If you would like to know more about this topic or about test quality in general, please visit our downloads page.