Intelligent CIO APAC Issue 41 | Page 15

NEWS

Gradient Institute receives donation from Cadent for AI safety research

Gradient Institute Chief Scientist Dr Tiberio Caetano

Australia's Gradient Institute, an independent, non-profit research institute that works to build safety, ethics, accountability and transparency into AI systems, has received a donation from ethical technology studio Cadent to develop research on technical AI safety.

This donation will support a three-month research project by a PhD student working on AI safety, under the supervision of Gradient Institute researchers.

The project will aim to investigate the potential misuse of large language models to manipulate individuals for commercial, political or criminal purposes – and to explore original technical solutions against such threats.

Gradient Institute's Chief Scientist, Dr Tiberio Caetano, highlighted the importance of investment in AI safety research.

"Today's reality is that AI systems have become very powerful, but not as safe as they are powerful.

"If we want to keep developing AI for everyone's benefit, it's imperative that we focus more on making these systems safer to close this gap," he said.

The donation is a key part of Cadent's mission. As a Social Traders Certified social enterprise, more than 50% of Cadent's annual profits are reinvested in charities and projects dedicated to causes such as AI safety.

Cadent's Managing Director, James Gauci, encourages others to consider supporting Gradient Institute's vital research.

"In an age where the latest large-scale hack or major AI model is just around the corner, ethical considerations in technology and AI have become paramount," he said. "We believe that all technologists must rise to the occasion."

MITRE and Microsoft collaborate on Generative AI security risks

MITRE and Microsoft have added a data-driven generative AI focus to MITRE ATLAS .

The new framework update and associated new case studies are pitched as directly addressing unique vulnerabilities of systems that incorporate generative AI and LLMs like ChatGPT and Bard.
The updates to MITRE ATLAS – which stands for Adversarial Threat Landscape for Artificial-Intelligence Systems – are intended to realistically describe the rapidly increasing number and types of attack pathways in the LLM-enabled systems that consumers and organizations are adopting. Such characterizations of realistic AI-enabled system attack pathways can be used to strengthen defenses against malicious attacks across a variety of consequential applications of AI, including in healthcare, finance and transportation.
"Many are concerned about security of AI-enabled systems beyond cybersecurity alone, including large language models," said Ozgur Eris, Managing Director of MITRE's AI and Autonomy Innovation Center. "Our collaborative efforts with Microsoft and others are critical to advancing ATLAS as a resource for the nation."
Ram Shankar Siva Kumar, Data Cowboy at Microsoft, said:
"Microsoft and MITRE worked with the ATLAS community to launch the first version of the ATLAS framework for tabulating attacks on AI systems in 2020, and ever since, it has become the de facto Rosetta Stone for security professionals to make sense of this ever-shifting AI security space.

"The latest ATLAS evolution to include more LLM attacks and case studies underscores the framework's incredible relevance and utility."