Fewer than a third (28 per cent) of Australian businesses say they comply with existing regulation around using customer data to train AI, and fewer than one in four (24 per cent) say they use only anonymised customer data for AI training.
The results, from a recent report by trust management platform company Vanta, are concerning: they suggest that many business leaders are unaware of the privacy implications of mixing AI and customer data.
Moreover, the Australian government will soon bring new privacy laws into effect to protect customer data and privacy.
Jonathan Coleman, Vanta’s APAC general manager, said that using customer data to train AI could lead to the personal information being resurfaced, damaging customer trust.
“Soon, even more stringent regulation will be in place that forces organisations to take measures to protect their customers’ data, and use AI safely and ethically,” Coleman said.
Australians behind the curve when it comes to AI compliance
Just over half (54 per cent) of Australian businesses say they have a formal policy governing the use of AI, compared with 65 per cent of UK companies.
In September, a report with similar findings was published by Australia’s National AI Centre. It found that only 23 per cent of businesses have implemented oversight and control measures for their AI use.
A major barrier to the adoption of responsible AI practices is the absence of any Australian legislation governing responsible AI use: the Government is working on a set of mandatory guardrails, but these are still in development.
For more on AI legislation and customer trust, see the article we published on this topic last month.