Recruitment & Employment Confederation

AI and recruitment — 7 tips to ensure compliance with data protection law

News from our business partners


This is a guest blog by REC business partner Brabners.

Recruiters are increasingly relying on AI to streamline processes, improve efficiency, and screen and select candidates. However, using AI can also create risks for candidates and their information rights.

Recruiters risk breaching their data protection obligations, especially if they transfer personal data or confidential information into an AI system without authority or proper safeguards.

Brabners’ data protection law specialists Eleanore Beard and Sara Ludlam provide 7 practical tips for recruiters to ensure data protection compliance when using AI tools.

 

Caution over automated processing for candidate selection

When using AI, it’s important to comply with the principles of safety and security; transparency and explainability; fairness; accountability and governance; and contestability and redress.

You should conduct a data protection impact assessment when using AI to collect and process personal data, so you understand the impact on people’s privacy. Where AI processes biometric data or performs emotion detection, additional controls will be needed.

You should also be aware that Article 22 of the UK GDPR prohibits decisions based solely on automated processing that have a legal or similarly significant effect on the individual. AI that selects candidates based on profiling criteria is likely to fall within this prohibition, which could cause problems for many apparently time-saving AI applications.

 

7 tips to ensure data protection compliance

  1. Address data protection in contracts — Ensure that contracts with AI providers include clauses covering the sharing of personal data which address data protection, confidentiality and security measures.
  2. Ensure that AI processing is “fair” and lawful — Monitor AI outputs for potential or actual issues of fairness, accuracy or bias, and ensure meaningful human checks and reviews, with any issues addressed quickly.
    1. You should also have a lawful basis for collecting the data. This could be contractual, a legal obligation, or consent. Where you rely on consent, it will only be valid if:
      1. Consent has been freely given
      2. You’ve provided full information on how you intend to use the data
      3. The candidate can opt out after providing the data.
  3. Transparency — Inform individuals about the purpose for collecting personal data, including:
    1. what personal data is processed by AI and how,
    2. the logic involved in making predictions or producing outputs,
    3. whether you will use their personal data for training, testing, or otherwise developing the AI.
  4. Minimise data — Avoid collecting more than the minimum personal data required to develop, train, test and operate each element of the AI.
  5. Implement robust security measures — Protect personal data from unauthorised access, disclosure or alteration. Security measures may include encryption, access controls and regular security assessments to identify and address vulnerabilities.
  6. Ensure data is accurate and up to date — Implement measures to verify the quality of the data at the point of collection and throughout its lifecycle. Establish procedures for updating or rectifying inaccuracies.
  7. Storage limitations — Define retention periods for personal data based on the purposes for which it was collected. Regularly review and securely delete or anonymise data once it’s no longer necessary or if individuals withdraw their consent. Check that the AI system you use allows you to remove personal data.

For more guidance on AI tools and data protection, read our guide and contact our team of experts.

 

 


Learn more about Brabners