Shadow AI in HR: How to Counter This Widespread High-Risk Practice
- Jean-Baptiste Audrerie

- Aug 18
- 2 min read
Updated: Aug 19
"Shadow AI" in HR represents a critical challenge today for human resources directors.
With 70% of HR professionals already using generative AI and 50% of employees turning to unauthorized tools, managing risks related to "Shadow AI" in the workplace environment becomes a priority for HR.
Shadow AI Usage in HR: Current State
Shadow AI in HR manifests primarily through unauthorized use of ChatGPT and other AI tools for: job posting creation, resume analysis, interview preparation, personalized training development, payroll automation, HR policy drafting, and predictive turnover analytics.
The numbers are telling:
81% of HR leaders are exploring AI solutions (Gartner, 2023), while 38% of employees share sensitive information with AI tools without authorization.
Generative AI usage surged from 74% to 96% between 2023 and 2024, intensifying Shadow AI risks in HR.

Critical Risks of Shadow AI in HR
Data breaches and legal non-compliance
Shadow AI in HR exposes organizations to sensitive personal data leaks:
- Personal data, contact information, and addresses
- Personal and professional backgrounds
- Salaries
- Evaluations, notes, comments, and skills
- Disciplinary files and employment relations
- Medical records
- Recruitment information
- Coaching and development data
- etc.
One in five companies has already experienced a data breach related to unauthorized generative AI.
Non-compliance with PIPEDA (federal) and Bill 25 (Quebec) represents a major risk. Bill 25 requires designating a data protection officer, strict governance policies, and informed consent. Penalties under Bill 25 can reach CAD $25 million or 4% of worldwide revenue, whichever is greater.
Bias and reputational damage
Unvalidated AI tools can introduce discriminatory bias into recruitment and evaluation, exposing the organization to legal claims and significant reputational harm.
Free vs. paid solutions: Risk management
Free solutions (such as basic ChatGPT) present heightened Shadow AI risks in HR because they may use submitted data to train their models. Paid versions offer stronger confidentiality guarantees and stricter data processing agreements, but still require evaluation against Bill 25 criteria, notably a Privacy Impact Assessment (PIA).
"Anti-Shadow AI" Action Plan in HR
1. Governance and Policies
Designate a data protection officer (Bill 25) and establish clear policies defining authorized tools and validation procedures.
2. Training and Awareness
Deploy a specific training program on Shadow AI in HR covering security risks, PIPEDA/Bill 25 obligations, and best practices.
3. Authorized Solutions and Monitoring
Propose secure alternatives compliant with Canadian and Quebec requirements, while implementing monitoring tools to detect unauthorized usage.
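To illustrate the monitoring step, here is a minimal sketch (not from the article) that scans web proxy logs for connections to well-known public generative AI services. The log format, file path, and domain list are assumptions to adapt to your own environment and tooling.

```python
"""Minimal sketch: flag potential Shadow AI usage in web proxy logs.

Assumptions (not from the article): the log is plain text with one request
per line, and the domain list below is illustrative, not exhaustive.
"""

import re
from collections import Counter

# Illustrative list of public generative AI domains to watch for.
WATCHED_DOMAINS = [
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
]


def scan_proxy_log(path: str) -> Counter:
    """Count log lines that mention a watched generative AI domain."""
    hits = Counter()
    pattern = re.compile("|".join(re.escape(d) for d in WATCHED_DOMAINS))
    with open(path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            match = pattern.search(line)
            if match:
                hits[match.group(0)] += 1
    return hits


if __name__ == "__main__":
    # Hypothetical log file name; replace with your proxy's export.
    for domain, count in scan_proxy_log("proxy_access.log").most_common():
        print(f"{domain}: {count} requests")
```

In practice this kind of detection usually lives in the proxy, CASB, or DLP platform itself; the sketch only illustrates the principle of flagging unauthorized usage so it can be followed up with training and authorized alternatives rather than silent blocking.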
Conclusion
Shadow AI in HR constitutes a major challenge requiring a proactive approach combining governance, training, and authorized solutions. CHROs and HR leaders who anticipate these challenges will transform risks into secure innovation opportunities, preserving PIPEDA/Bill 25 compliance and employee trust.
Consult our AI in HR article series:
To specify and quantify your HRIS and AI in HR strategy: write to us, schedule an appointment to discuss your needs, subscribe to our newsletter, and download one of our 2025 HRIS mappings here → https://www.nexarh.com/cartographies-hr-tech-hcm-talent