Practical AI Governance Guide for CHROs and HRDs
- Jean-Baptiste Audrerie
- Aug 14
- 16 min read
Updated: Aug 18
A strategic framework for responsible and high-performing AI transformation of the HR function
1 - Introduction
Artificial intelligence is radically transforming the landscape of human resources and business management.
With 96% of leaders recognizing that adopting generative AI increases the likelihood of security risks [1], and AI-related incidents up 1,278% between 2022 and 2023 according to the OECD [2], implementing robust AI governance is no longer an option but a strategic necessity.
The simultaneous rise of algorithmic AI (ML, deep learning, recommendation engines), generative AI (LLMs) and, more recently, agentic AI capable of chaining workflows without supervision, opens three value axes in HR: Efficiency, Decision, Experience (our "EDE" approach).
But it also creates a bundle of risks:
Bias and discrimination (e.g., the Mobley v. Workday class action, 2025) that can expose the company to "disparate impact" liability (Holland & Knight).
Hallucinations, shadow AI and data leaks: 27% of content entered into unapproved generative tools already contains sensitive information (Cyber Sierra).
Technical complexity: multiplication of agentic micro-services, model drift, and data chains that are difficult to trace.
Reputation and employer confidence: any breach harms brand image and talent attractiveness.
Faced with these challenges, ignoring governance or outsourcing it "by default" to a vendor exposes the company to regulatory (EU AI Act, AIDA, EEOC) and financial risk.
For Chief Human Resources Officers (CHROs) and Human Resources Directors (HRDs) of medium and large enterprises, the challenge goes far beyond simple regulatory compliance. It's about building a framework that allows leveraging AI's transformative potential while protecting the organization against emerging risks.
This AI governance must clearly distinguish internal AI projects focused on efficiency, decision and experience (EDE) from commercial initiatives intended for customers, partners and citizens.
The progressive entry into force of the European AI Act, with its financial sanctions applicable from August 2025 [3], reinforces the urgency to act. But beyond compliance, well-designed AI governance becomes a competitive advantage, allowing innovation in complete safety and developing an organizational culture adapted to future challenges.
AI Governance: the essential guide for CHROs and HRDs wishing to secure their internal artificial intelligence projects and enhance HR value.

2- Guiding principles of AI governance in enterprise
Effective AI governance rests on four fundamental pillars that guide all decisions and actions related to artificial intelligence in the organization [4].
Organizational empathy
It constitutes the first principle. Organizations must understand the societal implications of AI, not just the technological and financial aspects. This involves anticipating and addressing AI's impact on all stakeholders: employees, customers, partners, and society as a whole. For HRDs, this means evaluating how AI will transform jobs, required skills and employee experience.
AI bias control
It represents a major challenge, particularly critical in HR processes. It is essential to rigorously examine training data to avoid integrating real-world prejudices into AI algorithms. Gender, racial or social biases can lead to discriminatory decisions in recruitment, performance evaluation or promotion, exposing the company to considerable legal and reputational risks.
AI transparency
This guiding principle requires that the way AI algorithms function and make decisions be clear and open. Organizations must be ready to explain the logic and reasoning underlying results obtained through AI. This requirement is particularly important in the HR context where automated decisions can directly affect employees' careers and well-being.
Responsibility
This principle of responsibility requires organizations to proactively define and respect high standards for managing the significant changes that AI can bring. This includes clearly defining responsibilities in case of error or erroneous decision made by AI, a crucial aspect for maintaining trust and legitimacy of automated processes.
3- The importance of governance in an AI strategic plan
The integration of artificial intelligence into business strategy can no longer be done in an improvised manner. Structured governance becomes the indispensable foundation for transforming technological opportunities into sustainable competitive advantages, while mastering the risks inherent to these emerging technologies.
Distinguishing internal challenges from commercial projects
CHROs and HRDs must draw a fundamental distinction between AI applications intended for internal enhancement (efficiency, decision, employee experience) and those oriented toward external customers. Internal AI projects in HR primarily aim at improving recruitment processes, predictive talent analysis, administrative task automation and enriching employee experience. These initiatives require specific governance, centered on protecting employees' personal data and preserving equity in HR decisions.
Specific challenges of vendors' algorithmic AI
The use of AI tools developed by external vendors raises particular challenges. Machine learning, deep learning, recommendation and predictive decision algorithms integrated into HR solutions can introduce undetected biases and opaque processing logic. Dependence on these "black boxes" requires increased vigilance and reinforced control mechanisms to ensure that automated decisions remain aligned with the organization's values and objectives.
Emerging challenges of generative AI
The explosion in the use of generative AI tools like ChatGPT 5, Claude Sonnet or Google Gemini in professional environments creates new risks that HRDs must anticipate. The "Shadow AI" phenomenon - the uncontrolled use of AI tools by employees - represents a major threat to data security and regulatory compliance [5]. AI hallucinations - plausible but factually incorrect responses - can lead to costly judgment errors. Data leaks to free solutions expose the company to confidentiality violations, while the variable quality of prompts and language models (LLMs) can generate inconsistent or biased results.
The emergence of agentic AI and workflow automation
Agentic AI, capable of automating complex task chains, introduces a new dimension of complexity. The multiplication of automated workflows can create interdependencies that are difficult to trace and control. This evolution requires a more sophisticated governance approach, capable of managing not only individual AI decisions, but also automated action sequences that can have cascading repercussions on the entire organization.
4 - Risk ranking and mitigation measures
Effectively managing AI risks requires a structured approach that categorizes threats according to their likelihood of occurrence and potential impact. This classification allows HR departments to prioritize their efforts and allocate resources optimally.
Critical Risks (High Impact, High Probability)
Security and data protection risks constitute the most critical category. Unauthorized reuse of sensitive information in training future models, leaks of employee personal data, and unauthorized access to confidential information represent immediate threats [6]. These risks can lead to GDPR penalties of up to 4% of global annual revenue, costly legal disputes, and an irreversible loss of trust among employees and partners.
Priority mitigation measures:
Implementation of secure AI solutions with encryption of data in transit and at rest,
Exclusive use of enterprise versions of generative AI tools,
Systematic anonymization of data submitted to algorithms,
Intensive training of teams in security best practices.
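The "systematic anonymization" measure above can be sketched in a few lines: pseudonymize employee identifiers and mask emails before any text leaves the company toward a generative tool. The field names, the regex, and the salt handling here are illustrative assumptions, not a production design.

```python
# Sketch (under stated assumptions): pseudonymize known employee names and
# mask email addresses before a prompt is sent to an external AI service.
import hashlib
import re

SALT = "rotate-me-per-project"  # in practice, stored and rotated in a secrets manager

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    digest = hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:10]
    return f"EMP-{digest}"

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(text: str, known_names: list[str]) -> str:
    """Mask emails and known employee names before the prompt leaves the company."""
    text = EMAIL.sub("[email]", text)
    for name in known_names:
        text = text.replace(name, pseudonymize(name))
    return text

print(scrub("Contact Jane Doe at jane.doe@acme.com", ["Jane Doe"]))
# The name and email are replaced by a token and a placeholder.
```

Because the token is stable, the same employee maps to the same pseudonym across prompts, which preserves analytical usefulness without exposing the identity.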
Major Risks (High Impact, Moderate Probability)
Algorithmic bias and discriminatory decisions constitute a major risk, particularly in recruitment, performance evaluation, and career management processes. Algorithms can perpetuate or amplify existing inequalities, leading to discriminatory practices based on gender, ethnicity, age, or other protected characteristics [7]. Consequences include legal sanctions, reputational damage, and the loss of diverse talent.
Mitigation methods:
Regular auditing of algorithms to detect bias,
Diversifying development and validation teams,
Implementing human oversight mechanisms for critical decisions,
Establishment of fairness metrics in automated processes.
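One concrete instance of the "fairness metrics" item above is the selection-rate ratio behind the US "four-fifths rule" used in disparate-impact analysis. The group labels, counts, and the 0.8 alert threshold below are conventional illustrations, not legal guidance.

```python
# Sketch of an adverse impact ratio check (four-fifths rule convention).
def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return rate_group / rate_reference

# Hypothetical example: 30 of 100 women selected vs 50 of 100 men selected.
women = selection_rate(30, 100)            # 0.30
men = selection_rate(50, 100)              # 0.50
ratio = adverse_impact_ratio(women, men)   # 0.60, below the conventional 0.8 threshold
print(f"Adverse impact ratio: {ratio:.2f} (flag if < 0.80)")
```

A ratio below 0.8 does not prove discrimination by itself, but it is a common trigger for the human review and algorithm audit described above.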
Significant Risks (Moderate Impact, High Probability)
Processing errors and hallucinations in generative AI represent a daily risk, but generally less critical. Employees may receive incorrect information, make decisions based on flawed analyses, or disseminate factually inaccurate content. Although the individual impact is moderate, the high frequency of these incidents can gradually erode trust in AI tools and affect productivity.
Mitigation methods:
Training users to think critically about AI results,
Implementing systematic validation processes,
Creating clear principles and safeguards for the appropriate use of generative tools,
Establishing feedback mechanisms to continuously improve the quality of results.
Emerging Risks (Variable Impact, Increasing Probability)
Shadow AI and the uncontrolled use of AI tools by employees constitute an emerging risk whose impact can quickly escalate. Employees are increasingly using personal or free AI tools for their work, creating security vulnerabilities and compliance issues that are difficult to detect [8]. This practice can expose sensitive data, create uncontrolled technology dependencies, and compromise the consistency of organizational processes.
Mitigation methods:
Implementation of a clear AI usage policy,
Provision of approved and secure AI tools to employees,
Raising awareness of the risks of Shadow AI,
Implementation of systems to detect and monitor unauthorized use.
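The detection measure above can be illustrated with a simple scan of outbound proxy logs for generative AI domains that are not on the approved list. The log format, domain lists, and the internal host name are assumptions made for the example.

```python
# Sketch: flag unapproved generative AI hosts seen in outbound proxy logs.
from urllib.parse import urlparse

APPROVED = {"copilot.company.example"}  # hypothetical sanctioned enterprise tool
WATCHLIST = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(log_lines: list[str]) -> list[str]:
    """Return the hosts of watchlisted, unapproved AI services seen in the logs."""
    flagged = []
    for line in log_lines:
        host = urlparse(line.split()[-1]).hostname or ""
        if host in WATCHLIST and host not in APPROVED:
            flagged.append(host)
    return flagged

logs = [
    "2025-08-14T10:02:11 user42 https://chat.openai.com/c/abc",
    "2025-08-14T10:03:05 user17 https://copilot.company.example/chat",
]
print(flag_shadow_ai(logs))  # only the unapproved host is reported
```

In practice such alerts feed the awareness effort rather than sanctions: the goal is to redirect employees toward the approved tools, not to police them.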
Systemic risks (Very high impact, low probability)
Architectural failures and cascading outages of agentic AI systems represent rare but potentially catastrophic systemic risks. A failure in a critical AI system can paralyze entire HR processes, compromise business continuity, and generate considerable costs. The increasing complexity of AI architectures increases the likelihood of these systemic incidents.
Mitigation methods:
Design resilient architectures with failover mechanisms,
Maintain manual backup processes,
Regular business continuity testing,
Diversify suppliers and technologies to avoid single points of failure.

5- Practical implementation: key steps and best practices
Implementing effective AI governance can be approached at two levels of ambition: a minimum foundation essential to ensure basic compliance and security, and a level of excellence that transforms governance into a strategic competitive advantage.
The minimum required: the fundamentals of compliance
The minimum level of AI governance must enable compliance with legal obligations and protect the organization against major risks. This approach includes the creation of a multidisciplinary project team bringing together representatives from HR, IT, legal, and operational departments [9]. Assessing current and future uses of AI is an essential prerequisite, allowing existing tools to be mapped and emerging needs to be anticipated.
Risk identification and assessment must lead to the definition of clear rules and procedures to limit critical exposures. A basic AI charter must establish fundamental ethical principles: transparency of automated decisions, accountability in the event of errors, and fairness in data processing. Compliance with current regulations, particularly the GDPR and the AI Act, is a non-negotiable foundation.
Basic employee training on AI risks and best practices represents a minimal but essential investment. This awareness must cover the dangers of Shadow AI, the risks of data leaks, and the principles for validating generative AI results.
Excellence: Towards Strategic and Differentiating Governance
Excellence in AI governance goes beyond simple compliance to become a driver of performance and innovation. It requires the creation of a permanent AI governance committee with a dedicated budget and clear decision-making powers. This committee includes external experts, business representatives, and data scientists to ensure a holistic and forward-looking vision.
The excellence approach is characterized by sophisticated data governance, including complete traceability of data flows, systematic valuation of information assets, and the implementation of advanced privacy protection mechanisms such as pseudonymization and differential privacy. Validation and certification processes for AI algorithms are formalized, with regular audits and continuous performance metrics.
Training is becoming strategic and differentiated according to the population: general awareness for all employees, in-depth training for managers and advanced users, and technical expertise for IT and data teams. AI skills development programs are integrated into career paths, anticipating changes in professions and skills needs.
Table 1: Comparison of Minimum AI Governance vs. Excellence
Pillars | Minimum requirements (compliance) | Ideal (excellence) |
Framework & Policy | AI policy approved by Executive Leadership. | Adoption of a certifiable AI Management System (ISO/IEC 42001:2023). |
Inventory & Classification | Register of HR AI systems flagging "high-risk" cases. | Dynamic inventory linked to the CMDB/MLOps; criticality scoring updated monthly. |
Committees | Quarterly AI committee with HR and Data Protection Officer (DPO) participation. | AI Governance Board (quarterly) + HR Algorithmic Risk Review (monthly) + Incident Working Group (weekly) with a published RACI matrix. |
Project process | Impact assessment (AIA/DPIA) before deployment. | |
Technical controls | Bias testing at go-live. | Continuous monitoring (e.g., Evidently AI, Arize) + drift alerts + automated retraining with human-in-the-loop approval. |
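The "drift alerts" listed in Table 1 can be sketched with a Population Stability Index (PSI), which compares a model's score distribution at go-live with the distribution observed in production. The bin proportions and the 0.2 alert threshold below are common conventions, not a standard.

```python
# Sketch: PSI over pre-binned score proportions; values above ~0.2 are
# often treated as significant drift warranting human review.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) on empty bins
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]    # score distribution at go-live
production = [0.10, 0.20, 0.30, 0.40]  # distribution observed this month
score = psi(baseline, production)
print(f"PSI = {score:.3f} -> {'drift alert' if score > 0.2 else 'stable'}")
```

Dedicated monitoring platforms compute richer statistics, but even this minimal check operationalizes the monthly criticality review the table calls for.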
Stages of Progression Toward Excellence
The transition from minimal to excellent generally follows a structured path of four phases.
The initiation phase focuses on compliance and reducing critical risks.
The structuring phase sees the emergence of formalized processes and more sophisticated governance.
The optimization phase integrates AI into the overall strategy and develops advanced capabilities.
Finally, the innovation phase transforms AI governance into a source of competitive advantage and continuous innovation.
This progression requires increasing investment in human, technological, and financial resources, but generates exponential returns in terms of performance, resilience, and innovation capacity. Organizations that achieve excellence in AI governance position themselves as leaders in their sector and attract top talent, creating a virtuous circle of performance and innovation.
6 - Team training and awareness
The success of AI governance in HR largely depends on team engagement and competence. Training and awareness therefore constitute essential levers for ensuring governance framework adoption and effectiveness.
Differentiated training programs
Training programs must be adapted to different user profiles: executives, HR managers, end users, technical teams. Each audience requires a specific level of detail and pedagogical approach.
For executives, training must focus on strategic challenges, major risks and business opportunities of AI in HR. For HR managers, it must cover operational aspects, governance processes and control tools. For end users, it must be practical and centered on usage best practices.
Ethical awareness
AI ethical awareness must be transversal and continuous. It must cover questions of bias, transparency, respect for privacy and human dignity. This awareness must be concrete and illustrated by real use cases and examples of good and bad practices.
Ethical awareness must also include training on reporting and processing mechanisms for ethical concerns. Teams must know how to identify problematic situations and whom to turn to in case of doubt.
Technical skills development
Technical AI skills development must be progressive and adapted to each function's needs. The goal is not to turn every employee into an AI expert, but to give them the understanding they need to use these tools in an informed and responsible manner.
This technical training must cover basic AI concepts, different types of algorithms used in HR, model performance and quality evaluation methods, and bias detection and correction techniques.
Table 2: Example training plan to support AI governance
Module | Key objectives |
AI Fundamentals & Limits (4 h) | Understand ML/LLM vs search engine; concept of hallucinations. |
Data Quality & Lifecycle (4 h) | Provenance, bias, versioning, anonymization. |
Responsible Prompt Engineering (3 h) | Write, test, review, and maintain a prompt library; avoid "Shadow AI". |
Access Control & Data Protection (2 h) | Roles, classification, encryption, logs. |
Critical Thinking & Validation (3 h) | Verify AI outputs; detect drift or stereotypes. |
Legal & Ethical Framework (3 h) | Bill C-27, Law 25 (Quebec), EU AI Act, AIDA; internal policies; redress procedures. |
Executive Training: Strategic Vision and Responsibility
Leaders must develop a thorough understanding of the strategic challenges of AI and their responsibilities regarding governance. Their training should cover the business implications of AI, reputational and legal risks, and opportunities for competitive differentiation. Executives must master basic AI concepts to make informed decisions about technology investments and strategic directions.
Executive training also includes an understanding of emerging regulatory frameworks, including the European AI Act and its implications for the organization. Executives must be able to articulate the company's AI vision, communicate ethical choices, and drive the cultural transformation necessary for the responsible adoption of AI.
Manager Training: Operational Steering and Change Management
Middle managers play a crucial role in the operational implementation of AI governance. Their training must prepare them to identify opportunities for applying AI in their areas of responsibility, assess the associated risks, and support their teams in the adoption of new tools and processes.
Managers must develop AI-specific change management skills, including the ability to reassure employees about the evolution of their roles, identify training needs, and maintain engagement in a context of rapid technological transformation. They must also master the principles of human supervision of automated systems and know how to detect warning signs related to bias or malfunctions.
Employee Training: Responsible Use and Critical Thinking
Employee training must develop an "AI mindset" that combines basic technical skills and critical thinking. This training covers several essential dimensions for the responsible use of AI tools.
Awareness of "Shadow AI" is a central element, explaining why the use of unauthorized tools can expose the company to security and compliance risks. Employees must understand the differences between free and professional versions of generative AI tools, and the implications of each choice in terms of data privacy and security.
Training in prompt quality and interaction with language models (LLM) is becoming an essential professional skill. Employees must learn to formulate precise queries, contextualize their requests, and iterate to achieve optimal results. This skill includes understanding the limitations of models and the ability to identify potentially erroneous or biased responses.
Developing Critical Thinking and Validation
Critical thinking regarding AI results is a fundamental cross-functional skill. Employees must develop systematic validation reflexes, learn to cross-reference sources, verify the consistency of information, and identify warning signs indicating possible hallucinations or algorithmic bias.
Training must also cover best practices for collaborative review and validation. Double-checking processes, peer review mechanisms, and escalation protocols in case of doubt must become organizational reflexes.
Specialized Training Programs by Domain
Each functional area requires a training approach tailored to its specificities. In recruitment, training must cover bias detection in selection algorithms and audit techniques for automated processes. In performance management, the emphasis is on interpreting predictive analytics and maintaining fairness in evaluation.
For support functions such as finance and legal, training focuses on regulatory implications and compliance risks. IT teams develop advanced technical skills in AI system security and data governance.
Innovative teaching methods and continuous assessment
The effectiveness of training relies on teaching methods adapted to the teaching of complex technological concepts. Simulations and practical cases allow learners to experiment with AI tools in a secure environment. Collaborative workshops encourage the sharing of experiences and the emergence of contextualized best practices.
Continuous assessment of acquired skills is based on qualitative and quantitative metrics: knowledge tests, professional practice assessments, manager feedback, and employee self-assessment. This approach allows training programs to be adapted to emerging needs and ensures progressive and sustainable skills development throughout the organization.
7- Some legal and practical references in AI for moving forward
AI governance relies on a rapidly evolving regulatory and standards ecosystem, which HR managers and HR directors must master to ensure compliance and anticipate future developments.
International Norms and Standards
ISO/IEC 23053:2022 defines a framework for AI systems that use machine learning (ML), covering the lifecycle components organizations must govern. ISO/IEC 23894:2023 focuses specifically on AI risk management and offers assessment and mitigation methodologies.
The NIST AI Risk Management Framework (AI RMF 1.0) offers a structured approach to identifying, assessing, and managing AI risks. Although American, this framework influences international practices and provides practical tools for organizations.
Bill C-27 in Canada: An Integrated Approach
Introduced in June 2022, Bill C-27 constitutes an ambitious reform that modernizes Canada's entire legal framework for data and artificial intelligence [13].
AIDA (the Artificial Intelligence and Data Act), which forms the third part of this bill, aims to regulate international and interprovincial trade and commerce in AI systems. This integrated approach harmonizes the protection of personal data with AI regulations, creating a coherent legal ecosystem for businesses.
The main objective of AIDA is to ensure that AI systems deployed in Canada are safe and non-discriminatory, while preserving innovation and economic competitiveness. This balanced approach recognizes that AI can generate considerable benefits for society while requiring appropriate oversight to address risks.
Law 25 in Quebec: A Strengthened Data Protection Framework
Quebec has the strictest privacy legislation in Canada with Law 25, which modernizes the Act respecting the protection of personal information in the private sector [23]. This law imposes particularly stringent obligations for the use of AI, particularly regarding consent, transparency, and data retention periods.
For Quebec HR departments, Law 25 establishes a demanding but clear framework for the use of AI in HR processes. Obligations include conducting privacy impact assessments, implementing appropriate security measures, and documenting personal data processing processes.
European Regulatory Frameworks
The European AI Act (EU Regulation 2024/1689) is the global benchmark for regulating artificial intelligence [10]. Coming into force on August 1, 2024, it establishes a risk-based approach with specific obligations for high-risk AI systems. Financial penalties, applicable from August 2025, can reach €35 million or 7% of global annual turnover for the most serious violations.
The GDPR (General Data Protection Regulation) remains the benchmark for personal data protection and fully applies to AI systems. The principles of data minimization, transparency, and informed consent take on a particular dimension in the context of AI, particularly for HR applications that process sensitive employee data.
Academic References and AI Think Tanks
The OECD's work on AI, notably the OECD Principles on AI, adopted in 2019 and revised in 2024, provides strategic guidance for human-centered and trustworthy AI. The OECD's AI Incident Observatory is a valuable resource for understanding emerging risks and good prevention practices.
The Partnership on AI, a consortium of leading technology companies and research organizations, regularly publishes practical guides and recommendations for responsible AI. Their work on algorithmic bias and fairness in AI is particularly relevant for HR applications.
Practical Tools and Resources
The ANSSI (French National Agency for Information Systems Security) published "Security Recommendations for a Generative AI System" in 2024, which provide precise technical guidelines for securing AI deployments in enterprises [11].
CIGREF (French Large Enterprises IT Club) offers an "AI Act Implementation Guide" with practical tools and checklists for organizations [12]. This guide includes AI policy templates and risk assessment grids adapted to the French context.
8 - Conclusion for taking action with AI
AI governance is no longer an option for CHROs and HR Directors of medium and large companies: it is a strategic imperative that determines the organization's ability to leverage the transformative potential of artificial intelligence while managing the associated risks.
The urgency to act is reinforced by the gradual implementation of the European AI Act and the explosion of AI-related incidents observed in recent years. But beyond regulatory compliance, well-designed AI governance is becoming a decisive competitive advantage, enabling secure innovation and developing an organizational culture adapted to the challenges of the future.
The first steps towards effective governance
To initiate this process, HR Directors must begin by building a multidisciplinary project team bringing together HR, IT, legal, and operational expertise. This team will be tasked with mapping current and future uses of AI, identifying priority risks, and defining the initial elements of an AI charter adapted to the organizational context.
Employee training and awareness are an immediate and essential investment. Developing an "AI Mindset" that combines basic technical skills and critical thinking will enable the organization to leverage AI tools while avoiding the pitfalls of Shadow AI and inappropriate uses.
Towards a Sustainable Transformation
AI governance must be designed as an evolving process, capable of adapting to technological innovations and regulatory changes. Organizations that invest today in robust and scalable governance are positioning themselves as leaders in their sector and creating the conditions for a successful and sustainable digital transformation.
The challenge goes far beyond simple risk management: it's about building the foundations of an AI-enhanced organization, where technology amplifies human capabilities while preserving the values and ethics that define the company's identity. For HR Directors and Chief Human Resources Officers (CHROs), this is an opportunity to play a central role in this transformation and demonstrate the strategic value of the HR function in the era of artificial intelligence.

References:
[1] Deloitte. (2024). "State of Generative AI in the Enterprise" https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/state-of-ai-and-intelligent-automation-in-business-survey.html
[2] OECD. (2024). "AI Incidents Monitor" https://oecd.ai/en/incidents
[3] European Union. (2024). "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence (AI Act)" https://eur-lex.europa.eu/eli/reg/2024/1689/oj?locale=fr
[4] IBM. (2024). "What is AI governance?" https://www.ibm.com/fr-fr/think/topics/ai-governance
[5] Varonis. (2025). "Hidden Risks of Shadow AI" https://www.varonis.com/blog/shadow-ai
[6] Mister IA. (2025). "AI risk management: incidents to avoid in enterprise" https://www.mister-ia.com/article/risques-ia-entreprise-incidents
[7] KPMG. (2023). "The different risks of generative AI" https://kpmg.com/fr/fr/articles/data-ia/risque-ia-generative.html
[8] Hub One. (2025). "Shadow AI: how to frame uncontrolled AI usage" https://www.hubone.fr/oneblog/shadow-ai-en-entreprise-comment-maitriser-les-risques-de-lia-utilisee-hors-des-radars/
[9] Eurécia. (2024). "8 steps for effective AI governance in enterprise" https://www.eurecia.com/blog/gouvernance-efficace-IA-entreprise/
[10] Deloitte. (2025). "EU AI Act: understanding the first regulatory framework on artificial intelligence" https://www.deloitte.com/fr/fr/our-thinking/explore/tech/EU-AI-Act.html
[11] ANSSI. (2024). "Security recommendations for a generative AI system" https://cyber.gouv.fr/publications/recommandations-de-securite-pour-un-systeme-dia-generative
[12] CIGREF. (2025). "AI Act implementation guide: User manual and tools to implement AI governance" https://www.cigref.fr/guide-de-mise-en-oeuvre-de-lai-act-impacts-contractuels-des-projets-dia
[13] Government of Canada. (2022). "Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act" https://www.parl.ca/documentviewer/fr/44-1/projet-loi/C-27/premiere-lecture
[14] Innovation, Science and Economic Development Canada. (2025). "Artificial Intelligence and Data Act" https://ised-isde.canada.ca/site/innover-meilleur-canada/fr/loi-lintelligence-artificielle-donnees
[15] Naaia. (2024). "Canada: focus on AIDA, AI regulation" https://naaia.ai/la-reglementation-de-lia-au-canada-focus-sur-la-liad/
[16] Dentons. (2025). "Trends to watch in 2025: Artificial intelligence regulation" https://www.dentons.com/fr-ca/insights/newsletters/2025/january/23/global-regulatory-trends-to-watch/dentons-canadian-regulatory-trends-to-watch-in-2025/artificial-intelligence-trends-to-watch-in-2025
[17] OBVIA. (2024). "International Observatory on Societal Impacts of AI and Digital Technology" https://www.obvia.ca/
[18] Government of Quebec. (2024). "Ministerial Order 2024-02 of the Minister of Cybersecurity and Digital Technology dated June 27, 2024" https://www.publicationsduquebec.gouv.qc.ca/fileadmin/gazette/pdf_encrypte/lois_reglements/2024F/83874.pdf
[19] Government of Quebec. (2025). "Artificial intelligence obligations and framework" https://www.quebec.ca/gouvernement/faire-affaire-gouvernement/services-organisations-publiques/services-transformation-numerique/reussir-sa-transformation-numerique/accompagnement-des-organismes-publics/intelligence-artificielle-dans-ladministration-publique/obligations-et-encadrement-de-lintelligence-artificielle
[20] Quebec Innovation Council. (2024). "Ready for AI: Meeting the challenge of responsible artificial intelligence development and deployment" https://conseilinnovation.quebec/intelligence-artificielle/
[21] Quebec Ministry of Cybersecurity and Digital Technology. (2024). "Guide to best practices for using generative AI" https://cdn-contenu.quebec.ca/cdn-contenu/adm/min/cybersecurite_numerique/Publications/Strategie_cybersecurite_numerique_2024-2028/GU_bonnes_pratiques_utilisation_IA_generative_VF.pdf
[22] SOQUIJ. (2025). "SOQUIJ unveils its ethical framework for responsible use of artificial intelligence" https://soquij.qc.ca/a/fr/nouvelles/2025-02-12-soquij-devoile-cadre-ethique-pour-utilisation-responsable-intelligence-artificielle
[23] Government of Quebec. (2021). "Law 25: An Act to modernize legislative provisions respecting the protection of personal information"
Specify and quantify your HRIS and AI in HR strategy: write to us, schedule an appointment to discuss your needs, subscribe to our newsletter and download one of our HRIS 2025 mappings now here → https://www.nexarh.com/cartographies-hr-tech-hcm-talent
