Automated sifting tools, also known as candidate screening software, are a set of data-driven technologies that assess the applications received for a particular role. They often use natural language processing to evaluate CVs and personal statements and assign them a score. This score is then typically used by the recruiter or firm to help prioritise candidates for invitation to interview. In most cases, the tool scores or ranks candidates by matching their applications against keywords drawn from criteria defined by the employer, such as the keywords in the job overview or candidate specification.
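As a rough illustration of how such keyword-based scoring might work, the sketch below matches CV text against a set of employer-defined keywords. The keywords, CV snippets and scoring function are hypothetical and are not drawn from any particular vendor's product.

```python
# Hypothetical sketch of keyword-based CV scoring; not any vendor's implementation.

# Keywords an employer might draw from the job overview or candidate specification.
JOB_KEYWORDS = {"python", "sql", "data analysis", "stakeholder management"}

def score_cv(cv_text: str, keywords: set) -> float:
    """Return the fraction of employer-defined keywords found in the CV text."""
    text = cv_text.lower()
    matched = [kw for kw in keywords if kw in text]
    return len(matched) / len(keywords)

applications = {
    "candidate_a": "Analyst with experience in Python, SQL and data analysis.",
    "candidate_b": "Project manager with strong stakeholder management experience.",
}

# Rank candidates by score, as a sifting tool might do before a recruiter reviews the results.
for name in sorted(applications, key=lambda n: score_cv(applications[n], JOB_KEYWORDS), reverse=True):
    print(name, round(score_cv(applications[name], JOB_KEYWORDS), 2))
```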
For recruiters and firms, the main opportunity that automated sifting tools offer is time and cost savings. They can relieve a large portion of the manual work associated with reviewing applications, freeing up resources to focus on other aspects of the hiring process. This can be a more efficient way for recruiters to identify the most suitable candidates, particularly where there is a high volume of applications for a role, such as on graduate recruitment schemes.
There is also the potential for this technology to remove the human bias inherent in a traditional recruitment process, by applying the same standardised assessment to every application. Vendors are developing tools which actively seek to improve the diversity of recruitment pools. One example is the Contextual Recruitment System developed by Rare: a tool that seeks to level the playing field for candidates from a lower socioeconomic background. It captures thirteen different markers of disadvantage (e.g. whether a candidate has been a young carer, or whether their parents went to university) and allows employers to understand how far a candidate has outperformed their school average, contextualising their ability to perform in the role.1
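The exact methodology behind the Contextual Recruitment System is proprietary, so the sketch below is purely illustrative of the general idea of contextualising attainment against a school average; the point scores, field names and candidates are hypothetical.

```python
# Illustrative only: not Rare's methodology. Shows the idea of comparing a
# candidate's attainment with the average attainment at their school.

def outperformance(candidate_points: float, school_average_points: float) -> float:
    """How far the candidate's attainment exceeds the average at their school."""
    return candidate_points - school_average_points

# Hypothetical attainment figures: two candidates with similar raw results,
# one from a school with much lower average attainment.
candidates = {
    "candidate_x": {"points": 320, "school_average": 250},
    "candidate_y": {"points": 340, "school_average": 330},
}

for name, record in candidates.items():
    delta = outperformance(record["points"], record["school_average"])
    print(f"{name}: {delta:+} points relative to school average")
```

On raw results alone candidate_y scores higher, but candidate_x has outperformed their school average by a far wider margin, which is the kind of context such a tool surfaces to employers.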
However, if a tool is trained on biased data, it is extremely likely to perpetuate existing workforce biases. This was evidenced when Amazon’s pilot algorithm, which had been trained on 10 years of historical employment data, reportedly downgraded the applications of candidates who had attended women-only universities.2 This algorithm was developed as part of an experiment and was not used in a real-world context.
There are additional risks around the functionality of these tools. Firstly, they apply a prescriptive set of matching criteria and therefore cannot account for the array of different ways candidates might articulate their suitability. For example, a candidate may use a synonym in place of an exact keyword and therefore be unnecessarily rejected from the process, meaning recruiters may miss out on suitable talent. Similarly, if a candidate uses non-standard formatting on their CV (e.g. the CV includes graphics), the tool may downrank an application that otherwise meets the job criteria.
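A simple illustration of the synonym problem, using the same kind of exact keyword match as in the earlier sketch (keywords and CV text again hypothetical):

```python
# Both statements describe the same experience, but only the first contains the
# employer's exact keyword, so only the first passes an exact-match sift.
REQUIRED_KEYWORDS = {"data analysis"}

cv_exact = "Five years of experience in data analysis for retail clients."
cv_synonym = "Five years of experience analysing datasets for retail clients."

def passes_sift(cv_text: str, keywords: set) -> bool:
    text = cv_text.lower()
    return all(kw in text for kw in keywords)

print(passes_sift(cv_exact, REQUIRED_KEYWORDS))    # True  - progresses to the next stage
print(passes_sift(cv_synonym, REQUIRED_KEYWORDS))  # False - unnecessarily rejected
```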
Finally, it is sometimes possible for candidates to manipulate the tool to gain a higher score or ranking, resulting in an unfair process. For example, a recent BBC documentary showed how it is possible to include keywords in white text on a CV that are invisible to the human eye but would be picked up by a data-driven tool. This risk reinforces the case that it is good practice for a human reviewer to be in the loop. In practice, this means making sure that recruiters periodically check the results of the tool by comparing its decisions on a sample of applications with the same applications assessed by a human reviewer. Having a human ‘in the loop’ does not mean requiring an individual to review every decision of the automated sifting tool; rather, the focus should be on monitoring whether or not the system is working as intended.
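One way such periodic monitoring could work is sketched below: the tool's decisions on a random sample of applications are compared with independent human-reviewer decisions, and a high disagreement rate is flagged for investigation. The application IDs, decisions and threshold are hypothetical.

```python
import random

# Hypothetical decisions from the sifting tool and from independent human review.
tool_decisions = {"app_001": "reject", "app_002": "interview", "app_003": "reject",
                  "app_004": "interview", "app_005": "reject", "app_006": "interview"}
human_decisions = {"app_001": "reject", "app_002": "interview", "app_003": "interview",
                   "app_004": "interview", "app_005": "reject", "app_006": "reject"}

# Audit a random sample of the tool's decisions rather than reviewing every one.
sample = random.sample(sorted(tool_decisions), k=4)
disagreements = [app for app in sample if tool_decisions[app] != human_decisions[app]]
disagreement_rate = len(disagreements) / len(sample)

print(f"Disagreement rate on sampled applications: {disagreement_rate:.0%}")
if disagreement_rate > 0.25:  # illustrative threshold for escalation
    print("Flag for investigation:", disagreements)
```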
1 https://contextualrecruitment.co.uk/
2 https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G