
note of this already, they’ll be hearing it from their security teams soon enough as they look to alleviate workloads for understaffed departments.
AI models themselves are the next focus of AI-centred attacks.
Last year, there was a lot of talk about cybersecurity attacks at the container layer – the less-secured developer playgrounds. Now, attackers are moving up a layer to the machine learning infrastructure. I predict we’ll start seeing patterns such as attackers injecting themselves into different parts of the pipeline so that AI models provide incorrect answers or, even worse, reveal the information and data on which they were trained. There are real concerns in cybersecurity around threat actors poisoning large language models with vulnerabilities that can later be exploited.
Although AI will bring new attack vectors and defensive techniques, the cybersecurity field will rise to the occasion, as it always does. Organisations must establish a rigorous, formal approach to how advanced AI is operationalised. The tech may be new, but the basic concerns – data loss, reputational risk and legal liability – are well understood and the risks will be addressed.
Concerns about data exposure through AI are overblown.
People putting proprietary data into large language models to answer questions or help compose an email pose no greater risk than someone using Google or filling out a support form. From a data loss perspective, harnessing AI isn’t necessarily a new and differentiated threat. At the end of the day, it’s a risk created by human users who take data not meant for public consumption and put it into public tools.
This doesn’t mean that organisations shouldn’t be concerned. It’s increasingly a shadow IT issue, and organisations will need to ratchet up monitoring for unapproved use of generative AI technology to protect against leakage.
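As a minimal sketch of what that kind of monitoring could look like, the short Python example below scans a web-proxy log for requests to generative AI services and tallies them per user. The log format, the file path, the domain list and the function name are all illustrative assumptions for this article, not any particular product’s approach; a real deployment would rely on a maintained URL-category feed and the organisation’s own logging pipeline.

import re
from collections import Counter

# Illustrative list of generative AI hosts to watch for (assumption, not exhaustive).
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

# Assumed log format: "<timestamp> <user> <requested-host> <bytes-sent>"
LOG_LINE = re.compile(r"^(\S+)\s+(\S+)\s+(\S+)\s+(\d+)$")

def flag_genai_usage(log_path: str) -> Counter:
    """Count requests per user to the watched generative AI hosts."""
    hits = Counter()
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            match = LOG_LINE.match(line.strip())
            if not match:
                continue  # skip lines that don't fit the assumed format
            _, user, host, _ = match.groups()
            if host.lower() in GENAI_DOMAINS:
                hits[user] += 1
    return hits

if __name__ == "__main__":
    # "proxy.log" is a placeholder path for whatever export the proxy produces.
    for user, count in flag_genai_usage("proxy.log").most_common():
        print(f"{user}: {count} generative AI requests")

A simple per-user tally like this is only a starting point; the point is visibility, so security teams can follow up with policy and approved alternatives rather than blanket blocking.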