New perspective on the value of assessment tools

A recent meta-analysis (Sackett et al., 2021) on the predictive value of assessment tools has caused quite a stir in both the scientific and HR communities, because it takes a fresh look at the value of these tools. Are assessment tools suitable for predicting job performance? And if so, which tools do this best in our own specific selection context?

General
17.03.2022
Amelie Vrijdags

Hudson's R&D department continuously analyzes the results of our proprietary assessment tools and keeps its finger on the pulse of scientific developments, in order to keep the focus on evidence-based methodologies and ensure maximum validity for our clients.

Holding a PhD in psychology and working as Senior Consultant Research & Development, our colleague Amelie Vrijdags took a close look at the new meta-study, discussed it in detail with Prof. Filip Lievens (co-author of the study), and explains some crucial insights in the paper "The value of assessment tools in personnel selection".

Groundbreaking meta-study

So what is groundbreaking about this meta-study? What strikes us most are the statistical techniques used. The authors convincingly show that older meta-analyses systematically overestimated the predictive value of tools, largely because they applied overly generous statistical corrections (in particular for range restriction). When more appropriate correction techniques are used, as the authors do, the predictive validity of most tools turns out to be somewhat lower than the older studies claimed.
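To get a feel for why these corrections matter so much, consider range restriction: in validation studies we only observe the job performance of people who were actually hired, which depresses the observed correlation, so meta-analysts correct the observed validity upward. As a minimal sketch (in Python, with purely hypothetical numbers, using the classic Thorndike Case II formula for direct range restriction), the correction grows quickly with the assumed restriction ratio u:

import math

def correct_range_restriction(r_obs: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.
    r_obs: observed validity in the restricted (hired) group;
    u: unrestricted predictor SD divided by restricted SD (>= 1)."""
    return (u * r_obs) / math.sqrt(1 + r_obs ** 2 * (u ** 2 - 1))

r_obs = 0.25  # hypothetical observed validity in a hired sample
for u in (1.0, 1.3, 1.5, 2.0):
    corrected = correct_range_restriction(r_obs, u)
    print(f"assumed u = {u:.1f} -> corrected validity = {corrected:.2f}")

The more range restriction you assume, the higher the "true" validity you report. A key argument of Sackett et al. (2021) is precisely that older meta-analyses assumed far more range restriction than the primary studies actually warranted.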

But beyond these updated statistical techniques, how can we properly interpret the results of this meta-study? Which assessment tools are most useful now? Answering this question is less straightforward than it seems.

Key insights

We zoom in on some aspects that are important to keep in mind when interpreting the results of meta-analyses.

1) First of all, the new validity figures are generally somewhat lower than in previous studies, but they are still very respectable. For example, the relationship between cognitive testing and work performance (a validity of 0.31 in the new meta-study) is a lot stronger than the effect of ibuprofen on pain reduction (an effect size of 0.14, expressed as a correlation). A rough translation of these numbers is sketched below.
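One way to make such correlations tangible is the Binomial Effect Size Display (Rosenthal & Rubin), a rough but popular heuristic that converts a correlation r into the "success rates" of an above-average versus a below-average group. A small Python sketch (the labels and interpretation are ours, and the display deliberately simplifies):

def besd(r: float) -> tuple[float, float]:
    """Binomial Effect Size Display: translate a correlation r into
    the 'success rates' of an above-median vs. a below-median group."""
    return 0.5 + r / 2, 0.5 - r / 2

for label, r in [("cognitive test (r = 0.31)", 0.31), ("ibuprofen (r = 0.14)", 0.14)]:
    hi, lo = besd(r)
    print(f"{label}: {hi:.0%} of the high group vs. {lo:.0%} of the low group succeed")

Read this way, a validity of 0.31 corresponds to roughly two-thirds versus one-third "success", which is far from trivial.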

2) Meta-analyses summarize the results of a large number of "primary" studies. These studies examine a wide variety of occupations, instruments, and work settings. The validity coefficients from meta-analyses are therefore not exact numbers, but averages. Indeed, the predictive validity of a given assessment tool varies considerably depending on which study you look at. For example, structured interviews have the highest average validity in the new meta-analysis, but their predictive value also varies greatly across studies: sometimes they predict very well and sometimes very poorly. The sketch below illustrates how an average can mask this spread.
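As a purely illustrative sketch (the validity coefficients and sample sizes below are invented, and a real meta-analysis in the Hunter-Schmidt tradition would also subtract the variance expected from sampling error alone), here is how a sample-size-weighted mean can sit on top of a wide spread of primary-study results:

# Hypothetical primary studies of one tool: (observed validity, sample size)
studies = [(0.45, 120), (0.10, 80), (0.38, 200), (0.05, 60), (0.30, 150)]

total_n = sum(n for _, n in studies)
mean_r = sum(r * n for r, n in studies) / total_n            # N-weighted mean validity
var_r = sum(n * (r - mean_r) ** 2 for r, n in studies) / total_n
print(f"weighted mean validity = {mean_r:.2f}")
print(f"between-study SD       = {var_r ** 0.5:.2f}")

The same average of about .30 could come from studies that all find .30, or from studies ranging from .05 to .45; the practical implications are very different.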

3) That the predictive value of assessment tools varies so much is also to be expected because it depends on a large number of factors, including:

  • The relevance of the tool used to the job for which one is selecting.

For example, a personality trait like extraversion is a relevant predictor for a job in sales, but much less so for a job in accounting.

  • The quality of the tool used (the test content, the scoring method, etc.).

For example, the meta-analysis groups under the heading "cognitive ability test" a whole range of tools that in fact measure slightly different things and were developed according to different standards.

  • The expertise of the assessor/interviewer/test administrator.

We know from research that the proper application of a selection technique depends heavily on the training of the person using it. Think, for example, of the cognitive biases of interviewers who rely purely on their "gut feeling." In a solid interview training course, however, you learn that gut feeling is not a reliable advisor, and you learn techniques to increase the reliability of an interview.

4) Validity is not the only holy grail. For some organizations, equal opportunity, cost and practicality, or the way candidates experience the selection process are at least as important as predictive validity.

5) Finally, HR professionals should not rely solely on general meta-studies. It is also important to study the validity of the tools they use in their own specific context, in order to test whether the results are in line with the meta-study. In this way, they can check whether their own instruments predict as they should. If not, this is a sign that the process or tool needs to be adjusted. A minimal version of such a local check is sketched below.
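As a minimal, purely hypothetical sketch of such a local validation check: collect assessment scores at hire and performance ratings some time later, and estimate the correlation between the two. (The numbers below are invented; a serious local study needs an adequate sample size, a sound performance criterion, and attention to range restriction.)

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equally long lists of numbers."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: test scores at hire vs. performance ratings one year later
scores = [62, 75, 58, 80, 69, 71, 55, 77]
ratings = [3.1, 3.9, 3.0, 4.2, 3.3, 3.8, 2.9, 3.6]
print(f"local validity estimate: r = {pearson(scores, ratings):.2f}")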

What can we conclude from this?

In fact, the most important conclusion is that there is no one-size-fits-all solution. Many types of selection tools are useful, provided they are used under the right conditions and in the right way. Unfortunately, the selection technique that scores perfectly on all important parameters (strong predictive validity, no discrimination against minorities, low cost, and a positive candidate experience) does not exist. Developing a selection procedure will always involve some kind of trade-off. In doing so, it is crucial to strive for a good balance between the needs and requirements of the candidate and those of the selecting organization.

That's why we advise HR professionals to adhere to four simple guidelines:

1) Align selection methods with job requirements as much as possible.

2) Engage experienced (and ideally also certified) test administrators, assessors and interviewers who have been thoroughly trained.

3) Choose instruments that have been developed according to the highest quality standards.

4) Combine different assessment tools. Each tool provides different insights. By combining them smartly and correctly, as in an assessment center, you get a more complete picture of the candidate characteristics that are relevant to the job.

Want to learn more?

Download the summary of the paper "The value of assessment tools in personnel selection".

Contact us

Submit your HR challenge to us. Together, we'll look at how we can help you.
