AI in HR: When AI Turns Criminal
Artificial Intelligence is already transforming the workplace, from how we recruit and onboard employees to how we monitor, support and even exit them. It’s clever, powerful… and a little bit terrifying.
As HR professionals, we can use AI to save time, streamline admin and even make decision-making smarter. However, there’s a darker side too. Think biases, misinformation, privacy breaches and even AI scams.
That’s why we’re tackling this topic head-on in our next HR Sounding Board Session: When AI Turns Criminal with Jake Moore, Global Cybersecurity Advisor. Before you grab your spot (and we know you will), let’s dive into why understanding AI in HR is essential.
Why HR needs to pay attention to AI
Remember when AI was just something from a sci-fi movie? HAL 9000, Blade Runner and Ex Machina all felt like distant realities. Fast forward to 2025, and we’re casually using ChatGPT for everything from recipes and shopping lists to email rewrites and policy templates.
AI is no longer a “future of work” buzzword; it’s here, embedded in our platforms, apps and daily workflows. Which means we’re all using it already… whether we realise it or not.
So how is that impacting us HR pros?
Let’s start with the good...
When used wisely, AI can be a lifesaver for under-pressure HR teams. Think of it as your super-speedy (slightly robotic) assistant:
Admin automation: Goodbye tedious tasks. AI can summarise CVs, auto-respond to FAQs, schedule interviews, and manage onboarding flows — freeing up your time for the work that actually matters.
Predictive insights: AI can spot patterns in absenteeism, attrition, or engagement before they spiral. It's like having an early warning system for your people problems.
Enhanced employee experience: From personalised learning paths to nudges for wellbeing check-ins, AI can make people feel more supported, without adding to your to-do list.
Templates on tap: Letters, questionnaires, policies, frameworks... AI can whip up a solid starting point in seconds.
Now for the bad…
AI isn’t all shiny dashboards and miracle time-savers; there are pitfalls, and they’re worth knowing about:
Dehumanisation: People want to be supported, not processed. Rely too much on AI, and you risk losing the personal touch that actually builds trust.
Bias: AI only knows what it’s trained on. If your data’s biased, the decisions will be too.
Phantom data: AI sometimes makes stuff up (hello, “hallucinations”). That stat it just gave you? It might not exist in the real world, so don’t assume it’s fact.
That robotic tone: We can all spot a ChatGPT-generated message now. You know the one: over-polite, full of em dashes, and just... not... quite... human.
And finally, the ugly…
This is where things get messy (legally, ethically and reputationally) and where HR needs to be extra careful:
Privacy & GDPR: Copy-pasting sensitive employee data into an AI tool? Big no. Conversations you thought were private can end up surfacing elsewhere. Yes, even on Google.
AI scams & deepfakes: We’re entering an era where phishing emails can sound like your CEO, and deepfake videos are good enough to fool just about anyone. It’s not sci-fi anymore; it’s a genuine risk.
AI-generated misinformation: AI tools can’t always tell fact from fiction. Whether it’s hallucinating employment laws or citing non-existent legal cases, it can land you in trouble if you don’t know what you’re looking at. Use it blindly and you could end up with policies based on fantasy.
No accountability: AI can influence decisions on hiring, firing, and promotions. But when things go wrong (and they will), who’s accountable? Spoiler: It’s still you.
How to approach AI responsibly in HR
We get it: it can be a bit of a minefield, and whizzy new AI apps and chatbots are popping up left, right and centre. You don’t need to become an AI engineer overnight, but you do need a practical approach to how your business uses these tools.
Start with these five steps:
Audit your systems: Know what AI tools your teams are already using (officially or unofficially).
Check the risks: What kind of data is being processed? Where’s it stored? Who has access?
Create a policy: Set expectations, boundaries and approved tools. Top tip: make it practical, not just legal jargon, and involve the managers who will be enforcing it.
Train your managers: They don’t need to be AI experts, but they do need to understand the basics, especially around bias and privacy.
Set a process: Who signs off new tools? How do you handle complaints? What’s your escalation route if something goes wrong?
Join us: The HR Sounding Board Session
It’s a lot to take in, which is why we’re bringing in the expert: Jake Moore.
He’ll be lifting the lid on the darker side of artificial intelligence, taking you deep into the criminal underworld of today’s most powerful tech. In the name of research, he’s used AI to create phishing sites virtually indistinguishable from legitimate ones, sent AI-generated scam messages convincing enough to fool just about anyone, and even used AI-powered face-swapping tools to pass a video job interview under a false identity.
But it doesn’t stop there. With these tools at anyone’s fingertips, Jake will reveal how AI can clone voices, generate documents and produce fake identities in seconds, giving cybercriminals everything they need to exploit trust at scale. When nothing is guaranteed to be real anymore, can you spot the fakes?
Streetwise HR will look at what it all means for us as HR professionals on the ground: what we can do to spread awareness in our own workplaces, how we can spot the fakes and what to do if we suspect AI is doing all the work.
You won’t want to miss this one. Grab your ticket here.
Can’t wait? Let’s talk now.
Whether you're wrestling with AI policies, wondering how to train your teams, or just need help cutting through the hype, we're here for the human-to-human chat.