Key facts:
- According to research from HP and Microsoft, 81% of Australian employees admit to sharing sensitive information through free AI tools
- Free AI platforms can store and reuse proprietary data without organizational oversight, posing significant security risks
- The Australian Government's Responsible Use of AI Policy took effect in September 2024, signaling increased oversight of AI governance
- Serious breaches of privacy law now carry potential fines of the greater of $50 million or 30% of adjusted turnover
- Companies are urged to develop in-house AI solutions rather than relying on public platforms in order to maintain data security
Australian businesses face a new type of data risk: their own employees. A recent report from HP and Microsoft, based on a survey of Australian business and IT decision makers, found that 81% of employees admit to sharing sensitive information through free AI tools.
This finding comes from the 2025 HP Windows 11 SMB Survey, conducted in June 2025 by Edelman Data & Intelligence on behalf of HP. The survey covered 500 Australian respondents, including business and IT decision makers.
While public platforms such as ChatGPT promise short-term efficiency gains, they can also carry serious long-term consequences. By feeding sensitive business data into external systems, companies risk exposing trade secrets, compromising intellectual property, and undermining customer trust.
In a recent Canberra Times article, Brad Pulford, managing director of HP Australia and New Zealand, commented: “AI is being used like drinking from a firehose. AI has entered this space fairly quickly.”
“The pace of AI adoption is so fast that many organizations are struggling to keep up,” said Kosala Aravinda, CEO of Australian AI developer Blockstars Technology. “But efficiency at any cost is a dangerous trade-off. Every sensitive document or customer record dropped into an AI system could eventually spiral out of control. The reputational, legal, and competitive risks are severe.”
Blockstars noted that while employees are rapidly adopting these tools, few understand that free AI platforms are often trained on user input. This means proprietary data can be stored, reused, or exposed in ways that organizations cannot see or regulate. For Australian businesses, the pressure to act quickly and deploy their own in-house AI to protect themselves is mounting.
Adding to this urgency, the Australian Federal Government has declared the adoption of AI a national priority. In September 2024, the Responsible Use of AI Policy came into effect, positioning Australia as a leader in safe and responsible AI. The Government also launched the National Framework for the Assurance of Artificial Intelligence in Government, underscoring its commitment to consistent and reliable use of AI across the public sector. Blockstars says these moves are a clear signal that private organizations need to prepare for increased oversight and higher standards of AI governance.
Australia’s Privacy Act 1988 already imposes strict requirements on the handling of personal and confidential information. Amendments passed in late 2022 increased the maximum penalty for serious violations to the greater of $50 million or 30% of adjusted turnover. The Act is currently under active review, with a particular focus on AI and data security, and there are indications that stricter obligations are likely to follow.
“Companies that do not take data governance seriously risk being caught up in this wave of reform,” Kosala added. “The cost of deploying AI recklessly far exceeds the cost of compliance. A major breach can cripple a business financially, reputationally, and operationally, which is why companies need to build their own in-house AI tools.”
The consequences of a data breach go far beyond fines. Litigation, class actions, breach-related claims, and lost contracts all carry significant costs. For many companies, the reputational damage is even greater: once a customer or partner loses trust, it is often impossible to regain. In competitive industries such as finance, healthcare, and professional services, that loss of trust can be fatal.
“Companies spend decades building their brands,” Kosala says. “One data set mishandled through a free AI tool can undo that work overnight. Once the public assumes you have been careless with information, the cost of rebuilding that trust becomes almost insurmountable.”
“In the near future, this will not just be a security discussion, but an operational one, especially for Tier 1 and Tier 2 companies,” Kosala said. “The strategic question is simple: Do you host your own AI internally or rely on public sources? Companies that invest in secure in-house AI will win business. Companies that don’t will be left behind.”
“Efficiency is not free. Efficiency must be balanced with responsibility,” Kosala concluded. “The winners of this new era will be those who deploy AI in a way that keeps their most valuable asset – their data – secure.”
Blockstars Technology is calling on Australian organizations to stop relying on free, public AI tools and instead build private, in-house solutions that maintain full control, compliance and confidentiality.
About us:
Blockstars Technology specializes in secure, enterprise-grade AI platforms that enable organizations to reap the benefits of artificial intelligence without compromising privacy, compliance, or control. For more information, please visit blockstars.ai.
Contact details:
Blockstars Technology
Kirsty Att (email protected)
+61 404 682 986
blockstars.ai