Customers don’t trust businesses to use AI responsibly. What must change?

Consumers still don’t trust that their data is safe in the hands of businesses that use AI.

According to the 2025 Qualtrics Customer Experience Trends report, the vast majority (86 per cent) of Australian consumers said that they placed “low trust” in local organisations to use the technology responsibly.

“Consumers in Australia are some of the biggest sceptics anywhere in the world when it comes to AI,” said Isabelle Zdatny, Customer Loyalty Specialist, Qualtrics. “Companies are more excited than consumers about using AI, and there’s a lot of work to do to persuade people of the benefits.”

What has happened?

We’ve seen a lot of excitement around AI – from businesses and consumers alike – since the technology first emerged. Some small businesses are already using AI-powered customer experiences, from chatbots to bespoke marketing campaigns.

But according to Qualtrics, initial customer excitement about AI has given way to distrust. 

Though customers appreciate tailored experiences when interacting with businesses, they’re also anxious about how their data is being used. Over half (56 per cent) of customers expressed concerns about the misuse of their personal information when interacting with AI.

Can businesses be trusted to use AI responsibly?

The National AI Centre claims that most businesses are not using AI responsibly, though Australian law still doesn’t stipulate exactly what responsible AI usage means.

Last month, the Centre released a report on responsible AI practices across Australian organisations. One of its findings was that, while 78 per cent of businesses believe they are using AI responsibly, only 29 per cent actually are. Only 23 per cent of businesses have implemented oversight and control measures when it comes to their AI use.

The National AI Centre said that businesses were assessed on 38 “responsible AI practices” across five dimensions: accountability and oversight, safety and resilience, fairness, transparency and explainability, and contestability.

The report concluded that Australian businesses need guidance in adopting safe and responsible AI practices. Despite this, there isn’t a huge amount of easily accessible information that small businesses can use to gauge how responsible their use of AI is.

The Government is currently working on initiatives to promote – and potentially even mandate – responsible AI usage. Last month, it released a Voluntary AI Safety Standard, and took feedback on a proposals paper for introducing “mandatory guardrails” for AI in “high-risk settings”.

Could AI legislation impact small businesses?

It’s still unclear exactly how this set of mandatory guardrails would regulate small businesses’ use of AI.

“The definition of ‘high risk’ AI is still being developed, so the exact nature of the impact on small businesses in Australia remains unclear,” said Jean Lukin and Andrew Hynd of law firm Holding Redlich in communication with ISB.

“The proposal paper does not propose a general carve-out/exemption for small businesses, meaning AI developed or deployed by these businesses could still potentially fall into the ‘high risk’ category.”

One benefit of the legislation is that it would, at long last, provide small businesses with a clearer picture of what “responsible AI use” actually is. However, it could also increase regulatory compliance costs.

What we do know is that the proposed guardrails are supposed to reflect regulations in jurisdictions like the EU and Canada, said Lukin and Hynd. They are also intended to align with international standards such as ISO/IEC 42001:2023 Artificial Intelligence Management System, which Australia has adopted.

The uncertainty surrounding AI may leave customers waiting a while before they feel they can fully trust their interactions with AI-powered businesses.