
AI literacy: the weapon against Shadow AI

Leïla Sayssa
27 August 2025·5 minutes read time

Since February 2, 2025, the AI Act requires providers and deployers of artificial intelligence systems to ensure that their personnel and users of these systems have a sufficient level of knowledge and a good understanding of AI (Article 4 of the AI Act).

This requirement varies based on technical skills, experience, level of education, and the context of AI system use, as well as the individuals or groups involved.

What are the consequences of failing to meet AI literacy obligations?

The obligation to promote AI literacy under Article 4 of the AI Act has been in force since February 2, 2025. However, enforcement by competent national authorities will only begin in August 2025, as Member States have until that date to formally designate these authorities.

Although Article 4 does not attach explicit fines to AI literacy, regulators may treat non-compliance as an aggravating factor in broader investigations, particularly where organizations fail to demonstrate due diligence in areas such as bias management. Conversely, evidence of even basic training programs can strengthen a company’s defense during audits or litigation.

From August 2, 2026, when the penalty regime takes effect, providers and deployers of AI systems risk civil liability if the absence of adequate training leads to harm suffered by consumers, business partners, or third parties.

What can organizations do right now?

AI proficiency should be seen as a core governance tool, not just a compliance checkbox. It is about ensuring that employees understand the risks of uncontrolled AI use:

  • exposure of sensitive data to external platforms, sometimes located in foreign jurisdictions;

  • data transfers to foreign territories without adequate oversight;

  • increased exposure to data breaches and litigation;

  • reputational damage in the event of public incidents, and loss of customer trust;

  • blind spots in risk management, with traceability and auditing becoming impossible when using uncontrolled tools.

There is no one-size-fits-all model. Training content must vary according to roles, levels of responsibility, and specific use cases.

The European Commission emphasizes a risk-proportionate approach: the more critical or sensitive the system, the more thorough, structured, and supervised the training must be. What matters most is that each audience receives sufficient and relevant information to properly manage the use of AI.

The 'Living repository' of the AI Office supports the implementation of Article 4 by sharing examples and practices.

While using these examples does not automatically establish compliance, they encourage learning and consistency across the market.

Practical steps to improve AI proficiency

  • Assess training needs: Audit existing programs to identify gaps in knowledge.

  • Adopt a tiered approach: Provide baseline training to all employees, then introduce role-specific modules.

    • Developers: spotting bias in code.

    • Executives: interpreting AI risk reports.

    • Sales teams: knowing what not to promise to clients.

  • Run crisis simulations: e.g., “Our chatbot leaked customer data—what do we do?”

  • Document initiatives: Keep thorough records of all training to support accountability in audits.
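To illustrate the "document initiatives" step above, here is a minimal sketch of a training register that could support accountability during audits. All names and fields (`TrainingRecord`, `TrainingRegister`, the role and module labels) are illustrative assumptions, not prescribed by the AI Act or the AI Office.

```python
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical sketch: a minimal register documenting AI literacy training.
# Field names and module labels are illustrative assumptions.

@dataclass
class TrainingRecord:
    employee: str
    role: str            # e.g. "developer", "executive", "sales"
    module: str          # e.g. "baseline", "bias-in-code"
    completed_on: date

class TrainingRegister:
    def __init__(self) -> None:
        self._records: list[TrainingRecord] = []

    def log(self, record: TrainingRecord) -> None:
        self._records.append(record)

    def completed_modules(self, employee: str) -> set[str]:
        # Useful for the tiered approach: check baseline vs. role-specific modules.
        return {r.module for r in self._records if r.employee == employee}

    def audit_export(self) -> list[dict]:
        # Thorough, exportable records support accountability in audits.
        return [asdict(r) for r in self._records]

register = TrainingRegister()
register.log(TrainingRecord("a.martin", "developer", "baseline", date(2025, 9, 1)))
register.log(TrainingRecord("a.martin", "developer", "bias-in-code", date(2025, 9, 15)))
```

In practice such records would live in an HR or LMS system; the point is simply that each completion is dated, tied to a role, and exportable on request.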

Risks of poor AI literacy: Shadow AI

Without adequate literacy, organizations face the rise of Shadow AI—the unauthorized use of AI tools by employees without oversight from IT, legal, or compliance teams. This mirrors Shadow IT but comes with AI-specific risks:

  • leaks of sensitive data through unsecured external tools,

  • unauthorized cross-border data transfers,

  • increased exposure to breaches, litigation, and reputational harm.

Shadow AI is an early warning signal of a gap between the speed of AI innovation and organizational governance.

Real-world examples:

  • Internal security incident: Samsung saw proprietary code leak after engineers shared it with ChatGPT.

  • Accountability gaps: a large law firm had to publish AI-use guidelines after some of its lawyers were unable to justify their sources during AI-assisted legal research.

How to assess and manage shadow AI

Robust governance is essential to tackle Shadow AI.

Here are a few helpful measures against Shadow AI:

  1. Launch a confidential survey among your employees with key questions (What AI tools do you use? What types of data do you share? How do you integrate AI results into your deliverables?). Allow a disclosure period without penalties.

  2. Engage with departments: meet with managers to identify used tools, approval processes, and the perceived value of AI.

  3. Establish graded access zones:

    • Green zone: non-sensitive data, pre-approved tools;

    • Yellow zone: prior review required;

    • Red zone: strict prohibition (e.g., fully autonomous decision-making systems).

      Access to certain zones should be contingent on mandatory prior training.

  4. Offer approved alternatives: provide employees with secure, validated tools to reduce unauthorized usage;

  5. Pilot programs: Start with one department, empower “AI champions,” then scale organization-wide.

  6. Involve lawyers and compliance officers from the design stage of projects;

  7. Develop analytical tools: monitor the adoption, compliance, and business impact of AI within the organization.
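The graded access zones in step 3 can be sketched as a simple policy check. This is a hypothetical illustration under stated assumptions: the tool names, zone labels, and the training gate are invented for the example, not a prescribed policy.

```python
# Hypothetical sketch of graded access zones for AI tools.
# Tool names and zone assignments are illustrative assumptions.

ZONES = {
    "approved-internal-assistant": "green",   # non-sensitive data, pre-approved
    "external-llm-api": "yellow",             # prior review required
    "autonomous-decision-engine": "red",      # strictly prohibited
}

def check_access(tool: str, *, reviewed: bool = False, trained: bool = False) -> str:
    """Return 'allow', 'review', or 'block' for a requested AI tool."""
    zone = ZONES.get(tool, "yellow")  # unknown tools default to prior review
    if zone == "red":
        return "block"
    if not trained:                   # access is contingent on mandatory prior training
        return "block"
    if zone == "yellow" and not reviewed:
        return "review"
    return "allow"
```

For example, an untrained employee is blocked even from green-zone tools, while a trained employee requesting a yellow-zone tool is routed to review rather than refused outright.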


Bottom line: Shadow AI highlights a growing gap between the speed of artificial intelligence adoption and companies’ ability to properly regulate its use. Without clear policies, training, and secure solutions, innovation develops in the shadows, exposing organizations to increasingly critical legal, financial, reputational, and operational risks.

