How to create an AI policy for your recruitment process

Metaview
26 Sep 2023 • 12 min read

Introduction

AI-enabled recruitment is no longer a far-flung idea — it’s a reality. And as AI deepens its impact on recruiting, it won’t just be an add-on to the hiring process, it’ll rewire the way we hire people entirely.

As we enter a brave new world of talent acquisition strategy, many recruitment leaders are finding that a set of formalized guidelines around AI usage helps them steer their teams through these changes, gives guidance on experimentation, and provides a useful ethical compass for doing so.

In talent acquisition teams, an AI usage policy can become the north star that governs best practice and ethical standards as organizations navigate the shift from being AI-assisted to becoming AI-powered.

An AI policy provides the guardrails for how AI fits into your recruitment process. With one in place, your recruitment team can outsource repetitive tasks, operate more cohesively, minimize risk, and hire better talent, faster.

But most importantly, it means they can spend their time on the most critical bit of hiring — driving high-quality interactions with candidates and hiring teams to ultimately find the best-fit talent.

If you’re thinking of creating an AI usage policy for your team, we’re here to help. In this guide, we’ll outline what an AI policy is, its key components, and how to build one that reflects the needs of your team.

Who is this guide for?

We wrote this guide to help talent acquisition teams get the most from implementing AI in their hiring process. This guide is for you if:

  • You’re a talent acquisition, recruitment, or HR leader
  • You’re a talent acquisition or recruitment operations specialist

1. What an AI policy is and how it can help

An AI policy is a set of guiding principles that outlines how your team works with AI. It guides everyone to use the technology productively and safely, offers clarity on how, why, and when to use AI tooling, and ensures that usage falls within ethical and legal boundaries — as well as your organization’s core values.

But it’s far more than just a set of rules on how to use (or not use) AI — it also builds a framework for proactive risk mitigation and decision-making in the future. And because AI is evolving all the time, it’s not a static document.

In the recruitment process, AI has the potential to be immensely powerful. Think automating all of your most tedious tasks: scheduling candidate interviews, preparing interview notes for hiring managers, scouring the internet to source candidate profiles that may or may not be a fit.

But if you’re starting to worry because you don’t have a policy governing AI usage yet, don’t sweat it. According to a 2023 report by McKinsey, four in five respondents reporting AI adoption said they didn’t have an established policy on using generative AI.

You don’t need a formalized AI policy to reap the benefits of AI — but it can help. Here’s why:

  • Proactively mitigate risk: AI is in its infancy, and as a result, recruiting teams are still figuring out how to leverage it while keeping ethics a priority. Implementing an AI usage policy means recruiting teams can stay proactive, not reactive, in mitigating risks and maintaining data privacy.
  • Maintain candidate trust: According to a 2023 survey from Gartner, only 54% of candidates trust their organization to be honest with them during the hiring process. Having a publicly stated and frequently updated AI policy helps maintain trust, ensuring access to high-quality talent.
  • Increase operational efficiency: Your talent acquisition team can’t work their best if they’re deluged by manual, repetitive tasks. But to use AI to maximize efficiency, there needs to be some consistency in how it’s applied. An AI policy keeps everyone aligned on how, when, and why AI is leveraged, so everyone’s using it as efficiently as possible.
  • Improve candidate experience: For candidates, the hiring process can often seem like a black box. Timelines and steps are often unclear, or uncommunicated, leading to a poor candidate experience. Implementing an AI policy means teams have the tools to speed up this process, leaving more room for the human touches that set your candidate experience apart.

2. What should your AI policy include?

While your exact implementation of AI in your hiring process is likely to differ from other teams’, there are a few common elements your policy should cover. According to 2022 research into organizational governance around AI, effective AI policy isn’t just about getting the right tech; the context outside of the tech matters just as much. This falls under three key pillars:

  • Infrastructure: Your team’s culture, infrastructure, innovation mindset, and ways of working. These have an impact on your AI readiness and chance of success.
  • Processes: Your processes and systems, and how easily AI can integrate into how you work.
  • People: Your employees’ competencies, knowledge, and appetite for using AI.

In a nutshell: the people, processes, and culture matter just as much as the technology itself. Without laying the groundwork for talent acquisition teams to trust their tooling, candidates can’t trust the talent acquisition team, and AI risks ending up as a point solution that fixes a single issue rather than part of your hiring culture. And according to a 2021 research paper, without these components in place, companies won’t be able to realize AI’s full value, or the associated performance gains at an organizational level.

Getting these components right alongside the right tooling will help you maximize the impact of AI across your whole recruitment process, as well as build a framework for leveraging it that scales in the future.

In practice, this means:

  • Evaluating your team’s AI readiness, infrastructure, and culture.
  • Understanding which processes could benefit from an AI-based approach.
  • Helping your employees understand why and how to use AI.
  • Measuring and monitoring the success of any new changes, and adjusting as necessary.

AI policy checklist:

  • Goals and purpose
  • Legal and ethical standards
  • Data privacy policy
  • Communication plan
  • Key stakeholders
  • Employee enablement
  • Feedback and measurement mechanisms

3. Creating your AI usage policy — a 4-step framework

Creating an AI policy for your recruiting process might sound like an overwhelming undertaking. It might even feel prescriptive. We get it.

But it doesn’t have to be. When done well, your AI policy should fit into the flow of how your team works, not against it. It should offer teams the flexibility to experiment and build ways of working that respond to their exact needs.

This boils down to four key steps:

  • Evaluate your current setup, opportunities, and risks.
  • Understand how and where your policy applies, including legal and regulatory requirements.
  • Communicate your policy to employees and candidates.
  • Measure your policy’s success, and adjust as your needs — and the AI tools — evolve.

Here’s how it works.

Step 1. Evaluate your current setup, and how you’re using (or not using) AI already

At this stage, you’ll need to take stock of your existing recruitment processes, identifying any risks, opportunities, and pre-existing uses of AI. This will help you identify what you need to change, and prioritize what is essential right now versus what can wait.

Identify your pain points and opportunities

Start by mapping out your end-to-end talent acquisition process, and identifying the processes, tasks, and human input required for each stage of recruitment. Consider the following questions as you go:

  • What are your current goals for the recruitment function?
  • Where are the biggest pain points in your current process?
  • Are there any processes or tasks you can automate to achieve these goals?
  • Are there any quick wins you can make?

If you’re already using AI in some processes, evaluate how well these contribute to your goals. Use qualitative and quantitative data from recruiting outcomes to inform your evaluation, such as candidate drop-out rates, time-to-hire, and candidate experience surveys.

💡
PRO TIP: Evaluate how AI can automate manual tasks and processes

AI can’t replace the human part of hiring. But it can shoulder some of the manual, repetitive tasks that form part of every talent acquisition team’s day-to-day workload, leaving recruiters with more time to focus on the differentiated, human aspects of the job.

When it comes to tasks like scheduling candidate interviews, managing contracts and candidate documents, and typing up interview notes, AI can perform them more quickly and efficiently than a human can.

Metaview is one way talent acquisition teams can leverage AI to reap these huge productivity gains. Metaview completely removes the need for recruiters to take notes because its AI does it for them. Whether it’s screening calls, intake meetings, interviews, or debriefs, Metaview perfectly summarizes everything that was discussed.

With Metaview, recruiters save an average of 5 to 10+ hours every week — meaning they can operate more efficiently, productively, and focus on high-quality interactions with candidates and hiring teams.

Weigh up internal risks

As with any emerging technology, it’s important to consider the potential risks when bringing AI into your recruitment process. Your policy will help you proactively avoid these risks, as well as set out best-practice guidelines to mitigate them.

At an internal level, consider how implementing AI into your talent process might introduce new challenges, as well as their impact on both the organization and the candidate, and how you’ll take steps to resolve them.

Potential internal risks may relate to the efficiency, experience, and fairness of your process, such as:

  • Introduction of algorithmic bias
  • Maintaining the right balance between automation and the human touch
  • Under-resourcing around training employees with new technology

For example, implementing a tool that can sift and screen resumes could introduce the risk of algorithmic bias if not used in the right way. Mitigating this risk could include an ethical standard in your policy that humans always have input into decision-making.

Make space for experimentation

Giving your team the chance to experiment with AI is essential — it’s likely to lead to some serious gains across your team’s performance, productivity, and efficiency.

A 2023 paper found that generative AI (like ChatGPT) can improve productivity by 14% on average. Countless others link the use of AI to improved individual efficiency, employer branding, and organizational performance and culture.

Running thoughtful pilot experiments that leverage AI on a small scale can build your team’s confidence in experimentation. Keep these on track by making sure they stay connected to business or team goals — and try testing new tools and approaches with dummy data.

💡
PRO TIP: Get inspiration from other teams

Look at different AI use cases across the organization for inspiration. As an example, your customer success team might already be using generative AI to automate customer call transcription — or maybe the product team has a strong AI-driven project management workflow.

Step 2. Understand how and where your policy applies

AI legislation is currently a moving target. In the last year alone, data from Stanford found that legislative bodies in 127 countries passed 37 laws that included the words ‘artificial intelligence’.

As AI continues to evolve, understanding how these changing legal and regulatory requirements apply to your recruiting plans, goals, and tooling choices will mean your policy stays current — and that your organization stays safe from risk.

Know the law and regulatory requirements wherever you operate

The laws on AI vary across the globe. Knowing what applies where your organization operates is essential from both an ethical and a compliance perspective.

The EU, for example, launched its AI Act in 2023, which classifies different uses of AI according to their risk. Across the pond, New York City followed suit, requiring organizations to notify candidates if AI is used in the hiring process.

As you structure your policy, understanding these requirements means you can operate to the highest ethical and compliance standards. If you operate across multiple regions, you’ll also need to bear in mind how these laws overlap or interact.

Partnering with your legal and ethics & compliance teams will help you make sure your policy follows regulatory and legislative best practice across your locations.

Define your data policy

Candidate data — and how you’ll use it — is a critical component of your AI policy. As candidates become ever more wary of organizations collecting their personal information, you’ll need to draft a comprehensive policy that sets out, in plain terms, how your process and tooling uses and stores this data.

Reserve a section of your AI policy to define:

  • What data you’ll collect: Are you collecting data on skills and competencies, or does your talent data include demographic information, like location, interests, or other characteristics?
  • Who can access the data: Is it just members of the recruiting team? Hiring managers? Will there be different levels of access depending on role?
  • How the data will be used: If you’re implementing tooling based on machine learning (for example, a resume screening tool), will candidate data be used to help it learn?
  • How long you’ll store the data: How will you store and use data after a candidate has entered into your hiring process? How will you manage requests for data deletion?
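
To make these decisions concrete, here’s a minimal sketch of how the four fields above might be recorded as a single structured artifact your team and tooling can reference. The structure, field names, and values are purely illustrative, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class CandidateDataPolicy:
    """Illustrative record of the data-handling decisions above."""
    data_collected: list[str]        # e.g. skills and work history; demographics only if justified
    access_roles: dict[str, str]     # role -> level of access to candidate data
    used_for_model_training: bool    # will candidate data help ML tooling learn?
    retention_days: int              # how long data is stored after the process ends
    deletion_sla_days: int           # turnaround commitment for deletion requests

# A hypothetical policy a team might settle on:
policy = CandidateDataPolicy(
    data_collected=["skills", "work_history"],
    access_roles={"recruiter": "full", "hiring_manager": "read-only"},
    used_for_model_training=False,
    retention_days=365,
    deletion_sla_days=30,
)
```

Writing the policy down in a machine-readable form like this makes it easy to surface the same answers consistently in candidate-facing explainers and internal audits.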

Make sure this information is clearly available as a separate, explainer-style resource to send to candidates, or in clear view on your careers site. Or better yet, embed candidate consent and data usage notices in the existing flows candidates are already using (for example, tools like Metaview help you include this information and collect consent as part of existing interview scheduling flows).

Identify and assign key stakeholders to manage your policy

Building a successful AI policy hinges on putting the right failsafes in place to implement and maintain it correctly. Assign key stakeholders — across your recruitment team, your wider HR/people team, and the rest of your organization — who will act as the guardians of your policy.

Members of your HR and talent acquisition teams will have firsthand knowledge of current and evolving talent challenges and needs. Meanwhile, stakeholders outside the talent function, including senior leaders, the ethics and compliance team, and operations, can help with adoption and resourcing.

Step 3. Communicate your policy and enable your employees

As AI increasingly becomes part of the way we work, coaching and training the people using it will be essential. While your primary users will be the talent acquisition team, this also applies to everyone else involved in hiring across your organization, from department leads to hiring managers.

Create training materials around your AI tooling and policy

To embed AI-enabled hiring across the organization and truly maximize its benefits, you’ll need to create training and employee enablement materials.

Create practical training around what your policy entails and what it means for your team. Providing guides for any tooling and skills needs will ensure consistency of messaging and increase trust in new tooling and processes. Make this scalable by creating a slide deck that department heads can deliver to hiring managers and interview panels within their business unit.

Define how you’ll communicate your use of AI to candidates

A 2023 study found that candidates agree that AI can have a positive impact on the recruitment experience by speeding up time-to-hire and increasing efficiency. But a lack of understanding of how it’s deployed may leave candidates feeling less confident about applying for a role.

Avoiding this means telling candidates upfront exactly how, where, and why AI factors into the recruitment process.

You’ll need to address:

  • Which recruitment stages or processes use AI.
  • How the AI works and what it does in each use case.
  • Your data policy relating to AI.
  • When you want to share this information with candidates.

Step 4. Measure, audit, adjust, repeat

AI isn’t static — and neither is your policy. According to a 2022 report by Deloitte, 50% of business leaders cite a lack of post-launch support and maintenance as one of their top challenges when scaling AI.

Once you’ve committed a set of guidelines to paper, establishing processes around how and when you update your policy and measure its success will ensure that it not only stays relevant for your evolving recruiting needs, but also compliant with legal and regulatory changes.

Build a process to audit and adjust your policy

To make sure your AI policy keeps up with the pace of the technology, you’ll need to review, refine, and update it over time. Taking an agile mindset to this process may help you review and refine your policy with a critical eye. First, define how often you’ll revisit your policy, who’s responsible for managing it, and what triggers you to revisit it.

Then, ask:

  • Is this policy still serving its core goals?
  • Is this policy still aligned with organizational goals?
  • Are there any new risks or opportunities we need to review?
  • How can we test new approaches to our recruitment process?
  • Do we have any emerging skills gaps among our team that need addressing?

Collect feedback from employees and candidates

As part of your policy review, collecting qualitative and quantitative feedback from users and candidates will help identify how well it’s working.

  • For employees: Run focus groups or surveys to gauge how successfully AI has integrated into your recruitment process, how well users understand how to implement it, issues with tooling, and suggestions for changes or use cases.
  • For candidates: Collect data as part of a candidate experience survey. Monitor how candidate satisfaction scores are trending, as well as qualitative feedback on how candidates perceive the process, or any difficulties using the tools.
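
As a sketch of the candidate-side monitoring, the snippet below computes a rolling average over hypothetical monthly satisfaction scores, which smooths out month-to-month noise so the underlying trend is easier to see. The scores and window size are invented for illustration.

```python
# Hypothetical monthly candidate-satisfaction scores (1-5 scale) from experience surveys.
monthly_csat = [4.1, 4.0, 3.8, 3.9, 4.2, 4.4]

# A three-month rolling average smooths noise so trends are easier to spot.
window = 3
rolling = [
    round(sum(monthly_csat[i - window + 1 : i + 1]) / window, 2)
    for i in range(window - 1, len(monthly_csat))
]
print(rolling)
```

A sustained dip in the rolling average is a useful trigger for digging into the qualitative feedback from the same period.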

Measure the success and impact of your policy

As your AI policy and implementation scales across the organization, measuring its progress and impact — both on the talent function, and in terms of larger business goals — will help you gauge its success.

Track hiring outcome data, such as time-to-hire, cost-per-hire, drop-out-rates at each stage, pipeline diversity, and candidate satisfaction rates to proactively see how AI shapes hiring trends over time. On a larger scale, connect your hiring data to business outcomes, like employee retention, productivity, and cost or time savings.
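
As a minimal sketch of this kind of tracking, the snippet below computes two of the metrics named above, average time-to-hire and per-stage drop-out rate, from a handful of invented candidate records. The record shape and stage names are assumptions for illustration; in practice this data would come from your ATS.

```python
from datetime import date

# Hypothetical candidate records: application date, hire date (None if not hired),
# and the stage at which the candidate left the process (None if hired).
candidates = [
    {"applied": date(2023, 5, 1), "hired": date(2023, 5, 29), "exit_stage": None},
    {"applied": date(2023, 5, 3), "hired": None, "exit_stage": "screening"},
    {"applied": date(2023, 5, 8), "hired": date(2023, 6, 12), "exit_stage": None},
    {"applied": date(2023, 5, 10), "hired": None, "exit_stage": "onsite"},
]

hires = [c for c in candidates if c["hired"]]

# Average time-to-hire in days, across successful hires.
time_to_hire = sum((c["hired"] - c["applied"]).days for c in hires) / len(hires)

# Drop-out rate per stage: share of all candidates who exited at that stage.
stages = ["screening", "onsite"]
drop_out = {s: sum(c["exit_stage"] == s for c in candidates) / len(candidates) for s in stages}

print(f"time-to-hire: {time_to_hire:.1f} days")
print(drop_out)
```

Computing these metrics the same way before and after an AI rollout gives you a like-for-like baseline for judging its impact.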

Powering the future of talent acquisition

AI has the power to reshape the entire recruitment process, meaning teams can hire better, faster, and more efficiently. But the thing that will make it successful in the long-term isn’t necessarily about the technology we have at our disposal — it’s how we enable recruitment teams with the right processes, coaching, and opportunities to integrate it successfully.

As AI becomes even more embedded in every aspect of how we work, a usage policy isn’t a requirement for recruitment teams navigating this shift successfully. But implementing one can help teams evolve their processes more quickly, and experiment with new ways of working alongside the technology to get the best results for their exact needs.
