AI illiteracy and the threat of shadow AI



In the corporate world of Nigeria today, from the high-rise buildings of Victoria Island to the industrial hubs of Kano, the conversation is changing rapidly. Leaders are no longer talking about merely "using" Artificial Intelligence; they are focused on mastering its responsible and strategic application. AI is not a futuristic tool reserved for tech wizards; it is here now, reshaping every sector. But as we rush to adopt this powerful technology, we face a critical, hidden need: AI literacy.

This isn't just a "skills gap" that HR can fix with a quick afternoon workshop. For any modern organization, AI illiteracy is a growing operational, legal, and security liability. We cannot afford a purely experimental approach, "playing" with tools we don't really understand. The stakes for data integrity, regulatory compliance, and brand reputation are too high, and the potential disruption is enormous. The business community is already seeing the consequences, as recent global and local data confirm.

There is plenty of enthusiasm, but understanding is lagging behind. A local indicator comes from the KPMG 2025 Nigeria Snapshot: in a revealing study, it found that a staggering 66% of Nigerian employees admit to relying on AI output at work without verifying its accuracy. This is not effective adoption; it is "blind faith," and it is a ticking time bomb.

A global parallel comes from McKinsey & Company (2025), which identifies a widespread "illiteracy barrier." It warns that although employees are keen, they often lack the critical judgment to move beyond simple, surface-level use. This leads to shoddy work, legal risk, and huge losses in real return on investment. Workers are experimenting, but as the World Economic Forum (2025/2026) notes, without guidance on boundaries they are essentially "experimenting in the dark."

The true, hidden danger of this void is the rise of shadow AI.

When organizations fail to provide clear, supported, and secure AI roadmaps, employees do not stop using AI to make their jobs easier; they just hide it. Shadow AI, the use of unapproved, personal, and unsecured AI tools for corporate purposes, has become a major blind spot. Employees may, often with good intentions, input confidential customer data or proprietary strategies into public models that train on that data, inadvertently causing catastrophic security breaches.

Recognizing the urgent need for structured AI training is the first step on an essential journey. Organizations can no longer expect employees to "figure it out"; they must make literacy mandatory. The world-class faculty at AI in Nigeria has empowered everyone from entry-level employees to C-suite executives across sectors to transform from passive users into AI-literate leaders.

The bottom line:

AI illiteracy is no longer a skills issue; it is a fundamental business risk. The rise of shadow AI proves that employees are adopting AI faster than their organizations are. Ignoring this gap invites operational failure, massive data breaches, and reputational ruin. Your organization needs a clear AI mandate: a formal policy paired with ongoing, structured training that prioritizes critical thinking and validation.

Dotun Adeoye is a technology entrepreneur, AI governance leader, and co-founder of AI in Nigeria. He has over 30 years of global experience across Europe, North America, Asia, and Africa, advising organizations on AI transformation.

