‘Guardrails’ introduced to boost AI safety

Twenty-five years ago, if you told someone you could talk on a landline and use the Internet at the same time, they would have thought you were dreaming.
If you told them that the Internet could give you a pretty good essay on the rise of polyester in the 20th century in a matter of seconds, they would have called you crazy.
But now artificial intelligence is everywhere.
It’s been almost two years since the launch of ChatGPT, an AI-powered large language model.
And it’s clear that the way we use technology has changed forever.
“We heard the message loud and clear from the wider Australian public that while AI has huge benefits, the public wants to be protected if things go wrong.”
This is the federal Minister for Industry and Science, Ed Husic.
He says AI is difficult to regulate.
“This is arguably one of the most complex policy challenges facing any government anywhere in the world, and the Australian Government is committed to implementing measures to ensure the safe and responsible use of AI in this country.”
But the government now has a plan.
Step one is a voluntary code to help companies use AI safely.
According to the government’s Responsible AI Index, around 80 per cent of businesses using the technology believe they are doing the right thing, but fewer than a third actually follow best practice.
Minister Husic says the voluntary code, now available, will help businesses lift their standards.
“The Australian Government wants to bridge the gap between best intentions and best practice. The voluntary code gives businesses practical ways to achieve what they want to achieve, which is to use AI safely and responsibly.”
Nicole Gillespie is a professor of management at the University of Queensland, where she holds the Chair in Organisational Trust. She has been researching Australian attitudes towards artificial intelligence for several years.
She says regulation is what the public wants.
“They respond really well to the public demand for regulation. Our 2023 survey found that 70 per cent of Australians believe regulation of AI is necessary, and that they have a really clear preference for AI to be regulated by government, either through existing regulators or through a dedicated, independent AI regulator.”
The voluntary code highlights 10 areas the government calls guardrails.
The first guardrails cover processes: ensuring that businesses have risk-management systems in place, that employees are trained in the use of AI, that data is protected, and that documentation is maintained for compliance checks.
Australian Chamber of Commerce and Industry chief executive Andrew McKellar says education is key.
“A lot of companies are starting to grapple with this, but we need to raise awareness of AI in business. We need to understand the risks and benefits of these new technologies, and of course we need to start that education process.”
The code also sets out requirements for human oversight: companies are responsible for their use of AI, must test the systems they use, and must maintain human oversight and the ability to challenge decisions made by AI.
Those creating AI models must be transparent about how they work, and companies creating or using AI must engage with groups that could be impacted by AI systems.
Professor Ed Santow, co-founder of the Human Technology Institute at the University of Technology Sydney, says this will help mitigate bias in AI.
“We have the phenomenon of algorithmic bias; in other words, when an AI system goes off the rails and starts treating people less favourably because of their gender, their skin colour, their disability, and other things they have no control over.”
This means that when AI is used to make decisions in situations such as recruitment or approving applications, discrimination can effectively be built into the process.
Professor Gillespie believes transparency can help people challenge those decisions.
“AI is increasingly being used in very invisible ways, behind the scenes, to make important decisions about people. So unless it’s clear that AI is being used, it’s often hard for people to know if there’s maybe some bias or if a decision has been made against them that isn’t correct.”
The next step is a mandatory code for high-risk uses of AI.
What qualifies as high risk is still being decided, but the government is looking at areas such as education, law enforcement, employment, biometrics, the criminal justice system, health and safety, and access to services.
Professor Santow believes there are situations where AI is not suitable.
“When it comes to decisions of great importance, where human rights are at stake, it is very important that they are not made by a machine alone. You need good human oversight; in other words, people who are responsible for the decision but who can also overturn it if something goes wrong.”
Minister Husic says the mandatory code will also apply to suppliers of AI models and products.
“They’re going to require organisations that are developing and deploying AI models to properly assess those risks, to plan how they’re going to manage them, to test their AI systems to make sure they’re safe, to make those processes transparent, to be clear about when AI is being used, especially when it looks like a human, to make sure a person can take control of the AI at any time, and to make sure people are held accountable for any safety issues that arise.”
Professor Gillespie believes regulation is key to building trust.

“Australians are among the most cautious and sceptical when it comes to the use of AI. We need these interventions. We need stronger regulation to provide one of the foundations on which trust in these technologies can be built.”