Versa AI hub
Content Creation

PwC and UNSW join forces to fight content creation fraud

By versatileai | November 27, 2025 | 3 Mins Read

UNSW Business School has partnered with accounting giant PwC to develop a research-based solution to combat fraud, misinformation and AI-generated deepfakes.

As social media, AI, and the content-creator economy grow larger than ever, PwC and UNSW Business School have created a new framework that leverages financial collateral to enforce accountability and curb misinformation and deepfakes.

The organizations say the solution was created to hold content creators accountable for the accuracy of their claims.

This has been achieved through the introduction of “truthfulness bonds,” which transform the publication of content from a “risk-free activity to a financially responsible one” and require creators to stake funds guaranteeing the legitimacy of their content.

The framework also lets readers dispute claims by posting counter-bonds when they suspect inaccuracies, with the aim of creating a robust environment for the pursuit of truth.

The research team behind the framework consisted of former PwC Senior Associate and UNSW Distinguished Fellow Lucas Barbosa, UNSW Business School Associate Professors Sam Kershner and Eric Lim, PwC AI Partner Rob Kopel, and former PwC AI Lead Tom Pagram.

Barbosa said the research team published their findings in a paper demonstrating how financial incentives can align market forces with truth-seeking behavior.

“The impetus for the research came from conversations about advances in AI and the futility of trying to spot deepfakes in the future,” he said.

“We realized that detecting deepfakes is a fool’s errand; as these GenAI models become more capable, solutions focused on verifying authenticity at the source become the more robust approach.”

The motivation for the framework comes from research identifying a “growing crisis”: deepfake fraud was projected to quadruple globally in 2024, and cybercrime, including identity fraud using deepfake technology, is now the most reported type of fraud worldwide.

The truthfulness bond system uses a closed-loop mechanism in which content creators must stake collateral reflecting their confidence in the material.

Lim said this financial commitment would deter frivolous disputes and ensure that all challengers face equal risk.

“If inaccuracies are proven, fines collected from inaccurate content will be used to fund rewards for accurate assessments,” he said.

“This mechanism is a way for independent content creators to enhance their reputation as the new gold standard in news dissemination, and a way for traditional news outlets to rebuild trust and build larger audiences on new social media platforms.”
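The closed loop described above — a creator stakes collateral, readers post counter-bonds to dispute it, and the losing side's stake funds the winning side — can be sketched in a few lines of Python. All class and method names here are illustrative, not from the PwC/UNSW paper, and the payout rules are one plausible reading of the mechanism, not its actual specification.

```python
# Illustrative sketch of a closed-loop truthfulness bond.
# Names and payout rules are assumptions, not the paper's mechanism.

class TruthfulnessBond:
    def __init__(self, creator: str, stake: float):
        self.creator = creator
        self.stake = stake      # collateral the creator puts at risk
        self.challenges = []    # (challenger, counter_bond) pairs

    def challenge(self, challenger: str, counter_bond: float):
        """A reader disputes the content by staking a counter-bond.

        Requiring the counter-bond to match the creator's stake means
        all challengers face equal risk, deterring frivolous disputes.
        """
        if counter_bond < self.stake:
            raise ValueError("counter-bond must at least match the creator's stake")
        self.challenges.append((challenger, counter_bond))

    def resolve(self, content_was_accurate: bool) -> dict:
        """Closed loop: the losing side's forfeited funds pay the winners."""
        payouts = {}
        if content_was_accurate:
            # Creator recovers the stake plus all forfeited counter-bonds.
            payouts[self.creator] = self.stake + sum(b for _, b in self.challenges)
        else:
            # Creator forfeits the stake, split among successful challengers.
            share = self.stake / max(len(self.challenges), 1)
            for challenger, bond in self.challenges:
                payouts[challenger] = bond + share
        return payouts
```

For example, if a creator stakes 100 and one reader counter-bonds 100, a successful challenge returns 200 to the reader — their own bond plus the creator's forfeited stake.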

Truthfulness bonds are expected to make the biggest difference within the insurance industry and deter false claims, as they allow claimants to vouch for the accuracy of their submissions.

“The economic model underlying truthfulness bonds incorporates multiple layers of incentives. Creators who stake more on their content gain visibility and are incentivized to provide verified information,” Barbosa said.

“Challengers stand to profit from successful disputes, fostering a culture of accountability. This framework is designed to transform how we interact with information in the digital age and restore trust and authenticity in an increasingly complex information ecosystem.”

© 2026 Versa AI Hub. All Rights Reserved.