
CIO OPINION

integrating principles of ethical AI into their generative AI projects from Day One.
One of the areas of concern regarding generative AI laid out in Singapore's National AI Strategy (NAIS 2.0) is the currently inadequate level of transparency around large language models (LLMs). Transparency is critical in light of questions about biases that may have been built into these models, which affect the validity, credibility and even legality of their outputs.
Much like children who take after their parents, AI models can unintentionally absorb patterns from the data they are trained on, due in part to insufficient awareness among data scientists of the historical and societal biases that may be present in that data.
This lack of awareness can have far-reaching consequences in various fields. For example, in healthcare, biases in data or algorithms can negatively impact patient care and resource allocation, while those in human resources can affect recruitment, evaluation and decision-making processes.
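To make this concrete, here is a minimal sketch of the kind of check a data science team might run before trusting a screening model. It uses hypothetical column names ("group" and "shortlisted") and an illustrative threshold; it is an assumption-laden example, not a prescribed method.

import pandas as pd

# Hypothetical recruitment-screening outcomes produced by a model.
# Column names and values are illustrative, not a real dataset.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "shortlisted": [1, 1, 0, 0, 0, 1, 0],
})

# Shortlisting rate per group: a large gap is a first warning sign
# that the model may have absorbed a historical bias.
rates = decisions.groupby("group")["shortlisted"].mean()
print(rates)

# A simple selection-rate ratio between the lowest and highest groups;
# values well below 1.0 warrant closer human review.
ratio = rates.min() / rates.max()
print(f"Selection-rate ratio: {ratio:.2f}")

A check like this does not prove a model is fair, but it surfaces disparities early enough for the team to investigate the underlying data.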
It is crucial to address biases and actively work towards creating more inclusive AI systems. SAS has outlined six core principles to guide responsible innovation, including the use of AI. These are human centricity, transparency, inclusivity, privacy and security, robustness and accountability.
To foster accountability in using AI, organisations must recognise that it is a shared responsibility of all people and entities involved in an AI system.
One way to encourage accountability is by implementing clear decision workflows that assign ownership and add transparency to the AI system.
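As an assumed illustration of what such a workflow could capture (not a SAS-specific mechanism), each model-driven decision could be recorded with a named owner, the model's output, the final decision and a rationale, so that responsibility and the basis for the outcome remain visible after the fact.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    # Illustrative fields: who owns the decision, what the model suggested,
    # what was finally decided, and why.
    decision_id: str
    owner: str            # accountable person or team
    model_output: str
    final_decision: str
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[DecisionRecord] = []

def record_decision(decision_id, owner, model_output, final_decision, rationale):
    # Appending to a shared log creates a transparent trail of
    # who decided what, on what basis, and when.
    record = DecisionRecord(decision_id, owner, model_output, final_decision, rationale)
    audit_log.append(record)
    return record

record_decision(
    decision_id="loan-2024-0042",
    owner="credit-risk-team",
    model_output="decline",
    final_decision="refer to human review",
    rationale="Model confidence below agreed threshold; manual check required.",
)

Even a lightweight record like this makes ownership explicit and gives auditors, regulators and affected customers a clear account of how an AI-assisted decision was reached.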