A revolution is underway in 2025, and industries are not being left behind: Artificial Intelligence (AI) is being adopted dramatically and rapidly across workplaces in Africa.
In fact, according to a study conducted by the research agency Toluna and commissioned by Kaspersky, 81.7% of the professionals surveyed reported integrating AI tools into their daily tasks, signalling a profound shift in how work is accomplished.
From drafting emails and editing texts to generating images and analyzing data, AI has moved from a theoretical concept to a practical, everyday tool. This trend is underpinned by a widespread understanding of the technology, with 94.5% of respondents stating they know what the term “generative AI” means.
For a majority, this is no longer abstract knowledge; AI has become an integral part of the daily workflow. According to the survey, it is most commonly used for writing or editing texts (63.2%), managing work emails (51.5%), data analytics (50.1%), and creating images or videos with neural networks (45.2%).
However, despite this enthusiastic embrace, the survey, which included 2,800 online interviews with employees and business owners in Kenya, Turkey, South Africa, Pakistan, Egypt, Saudi Arabia, and the UAE, uncovered a severe and dangerous gap in workforce preparedness.
While AI tools are being widely deployed, often as part of ‘shadow IT’ where employees use them without official guidance, the necessary cyber-security training is lagging far behind.
A full third of professionals (33%) reported receiving no AI-related training at all. More alarmingly, among those who had undergone some form of instruction, the focus was predominantly on how to use AI effectively, with 48% learning about effective prompt creation.
In fact, the survey showed that a mere 38% had received any guidance on the critical cyber-security aspects of using neural networks. This disparity leaves organisations acutely vulnerable to a spectrum of AI-related risks, from data leaks to sophisticated prompt injections, even as 72.4% of respondents stated that these tools are permitted at their workplaces.
This situation creates a complex challenge for companies. To navigate it successfully, businesses should avoid the extremes of a total ban, which stifles innovation, and a free-for-all approach, which invites security disasters. Instead, the most effective strategy is to implement a formally documented, company-wide policy that establishes a tiered access model, calibrating AI use to the data sensitivity of each department.
This policy should be backed by comprehensive training for all employees on the responsible and secure use of AI, while IT specialists are equipped with advanced knowledge on exploitation techniques and defense strategies.
Furthermore, organisations are recommended to secure all devices with robust cyber-security solutions, conduct regular surveys to monitor AI usage and its associated risks, and employ specialized AI proxies to sanitize data queries.
Through this balanced and informed approach, organisations can confidently harness the immense power of AI to drive efficiency and innovation, while rigorously protecting themselves from the new generation of risks that this powerful technology brings.