The recent growth of generative AI has delivered unprecedented productivity gains. Yet beneath this impressive run lies a looming privacy crisis, one fueled by our own data.
Platforms such as ChatGPT and Copilot routinely ingest sensitive information fed to them by users, potentially exposing businesses to serious compliance and privacy risks.
Beyond basic identifiers like names and email addresses, users—including employees—routinely input proprietary data into AI systems.
Whether rewriting client proposals, creating internal performance plans, or seeking improvements to confidential strategies, most users end up divulging far too much sensitive data.
According to Anna Collard, SVP of Content Strategy & Evangelist at KnowBe4 Africa, the casual, conversational nature of AI lulls users into a false sense of security, leading to oversharing that would never occur in traditional work settings.
“Many people don’t realize just how much sensitive information they’re inputting,” she said.
All users are at risk, but for employees the stakes are much higher. By inputting confidential data into public AI tools, they can expose client details, internal operations, and the strategic plans of entire organizations to competitors, hackers, or regulators.
For instance, a report by cybersecurity firm Harmonic Security found that free-tier AI tools such as ChatGPT account for 54% of sensitive data leaks, largely due to lax controls and permissive licensing terms.
Even paid tools adopted without IT oversight—often dubbed “semi-shadow AI”—pose significant risks, as business units bypass security protocols in pursuit of efficiency.
This is why organizations must move beyond policy documents and implement tangible safeguards: training employees on safe AI usage and restricting access to unapproved tools.
“Cyber hygiene now includes AI hygiene. This should involve restricting access to generative AI tools without oversight, or only allowing those approved by the company.
Organizations need to adopt a privacy-by-design approach when it comes to AI adoption, which includes using platforms with enterprise-level data controls and deploying browser extensions that detect and block sensitive data from being entered,” Collard added.
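To make the browser-extension idea concrete, below is a minimal TypeScript sketch of the kind of client-side check such a tool might run before a prompt leaves the browser. The data categories, regex patterns, and blocking policy here are illustrative assumptions for demonstration, not any specific vendor's implementation.

```typescript
// Illustrative sketch of a DLP-style prompt filter, as a browser
// extension might run before text is sent to a generative AI service.
// Patterns and policy below are assumptions for demonstration only.

interface Finding {
  kind: string;   // category of sensitive data detected
  match: string;  // the offending substring
}

// Simple regex patterns for common sensitive-data shapes.
const PATTERNS: Record<string, RegExp> = {
  email: /[\w.+-]+@[\w-]+\.[\w.]+/g,
  creditCard: /\b(?:\d[ -]?){13,16}\b/g,
  // Generic "secret-looking" token, e.g. an API key.
  apiKey: /\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b/g,
};

// Scan a prompt and collect everything that looks sensitive.
function scanPrompt(text: string): Finding[] {
  const findings: Finding[] = [];
  for (const [kind, pattern] of Object.entries(PATTERNS)) {
    for (const match of text.matchAll(pattern)) {
      findings.push({ kind, match: match[0] });
    }
  }
  return findings;
}

// Block submission when anything sensitive is found; otherwise pass through.
function gatePrompt(text: string): { allowed: boolean; findings: Finding[] } {
  const findings = scanPrompt(text);
  return { allowed: findings.length === 0, findings };
}

// Example: this prompt would be blocked before leaving the browser.
const result = gatePrompt(
  "Rewrite this proposal for jane.doe@example.com, card 4111 1111 1111 1111"
);
console.log(result.allowed, result.findings);
```

In practice such tools pair pattern matching with contextual classification, but even this simple gate illustrates the privacy-by-design principle Collard describes: sensitive data is caught at the point of entry, before it ever reaches the AI platform.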
Additionally, compliance programs should align with international standards governing AI management systems, such as ISO/IEC 42001, to ensure ethical and legal adherence.
Since ChatGPT went mainstream in late 2022, the generative AI market has ballooned to over Sh3.23 trillion ($25 billion) as of 2024. Yet its rapid adoption has outpaced awareness of the dangers of careless usage. Users must be careful what they feed these platforms: once data is out there, there is no taking it back.