FEATURE: AUTONOMOUS INFRASTRUCTURE
FOR AUSTRALIA’S INDUSTRIAL SECTOR, NOW IS THE TIME TO INTERROGATE EVERY ASSUMPTION ABOUT CONTROL, RESILIENCE AND RISK IN THE AUTONOMOUS ERA.
Moreover, many safety mechanisms, such as fail-safes and override systems, are built on outdated assumptions: that failure modes can be anticipated in advance and covered by static rules. Those static rules struggle to cope with the fluid, adaptive nature of modern AI, and when systems fail, they often do so silently until the consequences are felt in the real world.
When autonomy becomes a threat
Perhaps most alarming is the rise of what some experts call ‘threat autonomy’. This new class of risks arises when autonomous systems are manipulated not by brute force but by subtle exploitation of their logic. An attacker does not need to hack a system – they can simply poison its inputs.
Imagine a scenario where AI-powered systems controlling a city’s water network are fed slightly skewed sensor data. The AI might overcompensate, adding incorrect chemical doses to the supply.
Or in a smart factory, attackers might introduce subtle variations in input data, prompting the AI to degrade product quality in pursuit of flawed efficiency goals. In a hyper-connected system, these distortions can trigger cascading failures across entire industrial ecosystems.
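How such skew might be caught depends on the plant, but one common mitigation is to cross-check the data an AI controller consumes against an independently instrumented reference before acting on it. The sketch below is a minimal, hypothetical Python illustration of that idea; the sensor streams, tolerance and dosing scenario are assumptions for illustration, not a production design.

```python
"""Illustrative cross-validation of redundant sensor streams.

A minimal sketch, assuming two independent sensors measure the same process
variable (for example, a chemical dosing rate). Names, thresholds and data
are hypothetical; a real OT deployment would use historian data and
site-specific tolerances.
"""

TOLERANCE = 0.03  # maximum relative disagreement tolerated between sensors


def cross_check(primary: list[float], reference: list[float],
                tolerance: float = TOLERANCE) -> list[int]:
    """Return indices where the AI-facing primary stream diverges from an
    independently wired reference stream by more than the tolerance."""
    flagged = []
    for i, (p, r) in enumerate(zip(primary, reference)):
        if r != 0 and abs(p - r) / abs(r) > tolerance:
            flagged.append(i)
    return flagged


if __name__ == "__main__":
    # Reference sensor reports a steady 1.0; the primary stream is slowly
    # skewed upward, the kind of subtle poisoning described above.
    reference = [1.0] * 30
    primary = [1.0 + 0.002 * i for i in range(30)]

    for i in cross_check(primary, reference):
        print(f"t={i}: primary {primary[i]:.3f} vs reference "
              f"{reference[i]:.3f} - hold automated dosing for review")
```

In this toy run the divergence is flagged once it exceeds three per cent, well before the drift would be obvious to an operator watching a single trend line.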
Rethinking oversight and accountability
To mitigate these risks, Australian organisations must rethink how control is defined and exercised in autonomous systems. First and foremost is the need for explainability. If a machine decides to shut down a turbine or reroute power in the grid, decision-makers need to know why. Incorporating explainable AI (XAI) frameworks is essential for transparency, compliance and trust. Preventative and Zero Trust hybrid OT architectures should also be reviewed as the foundation of any rethink of risk mitigation strategies for autonomous operations.
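What explainability looks like in practice varies, but at a minimum it means surfacing the factors behind a decision in terms an operator can challenge. The sketch below is a hypothetical Python illustration using a toy scikit-learn model; the feature names, training data and use of global feature importances are assumptions, and a production XAI deployment would rely on per-decision attribution techniques and real plant data.

```python
"""A minimal sketch of surfacing 'why' alongside an automated decision.

Assumes a simple scikit-learn classifier standing in for the model that
decides whether to trip a turbine; the features, data and thresholds are
illustrative, not a real XAI framework or plant dataset.
"""
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["vibration_mm_s", "bearing_temp_c", "output_mw"]

# Toy training data: rows of sensor readings and whether a trip was warranted.
rng = np.random.default_rng(0)
X = rng.normal(loc=[2.0, 70.0, 150.0], scale=[1.0, 10.0, 20.0], size=(200, 3))
y = ((X[:, 0] > 3.0) | (X[:, 1] > 85.0)).astype(int)  # toy trip rule

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)


def explain_decision(reading: np.ndarray) -> dict:
    """Return the decision together with the model's global feature
    importances, so a trip can be reviewed rather than taken on faith."""
    decision = bool(model.predict(reading.reshape(1, -1))[0])
    attributions = {name: round(float(score), 3)
                    for name, score in zip(FEATURES, model.feature_importances_)}
    return {"trip_turbine": decision, "feature_importance": attributions}


print(explain_decision(np.array([4.2, 78.0, 140.0])))
```

The point is not the model itself but the record it produces: every automated action arrives with the evidence that drove it, which is what auditors, regulators and control-room staff need in order to trust or overrule it.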