The data for a major survey by one of the Big Four consultancies was collected by automated artificial intelligence agents rather than humans, and it found that Australians harbour a deep distrust of AI technology and its outcomes, even in regulated environments like government.
That is one takeaway from the latest EY Artificial Intelligence Index, a multinational sentiment-analysis sweep covering 15 countries including Australia, the US, China, South Korea, India, Germany, Sweden, Brazil and France.
Australia’s sample base is just over 1,000.
“Australians are significantly more concerned about potential negative impacts than people in other markets. In contrast, everyday people in markets like China, the UAE, India and South Korea predominantly believe that the impact of AI will be positive over the next five years,” the consulting report states.
That’s a bit of a problem for the technology and consulting industries, because big-ticket sales and mainstream AI adoption in the public sector remain hostage to the whims of everyday people and their emotions, at least in a democracy.
Only 38% of Australians say they are “excited by the future of AI and what it means to me”, with only the UK, France and New Zealand recording lower enthusiasm.
We certainly aren’t as excited as the consulting and IT companies are.
“There is concern that organisations will not be held accountable for negative uses of AI. This trust deficit is important for both markets to address over the next few years to ensure they do not fall behind other parts of the world,” EY warned.
“To do this, we need a shared vision between governments and commercial organisations to build some degree of confidence that AI regulations and investments are being deployed ethically and transparently.”
The problem is that, according to the EY sweep, government organisations are one of the groups Australians and New Zealanders least trust to manage AI in a way that aligns with their best interests.
“In Australia, historical policy failures such as ‘robodebt’ may be a key factor driving this, while in New Zealand there is a significant national conversation underway about modernising the country’s online safety regulations.”
Putting aside the fact that no artificial intelligence was used in robodebt, EY argues that “deep community consultation” is needed and that “having a clear vision for AI that is effectively communicated to the community is important to overcoming these perceptions.”
Where Australians are comfortable with the government using AI is eye-opening.
The sentiment index found that 26% of the sample were comfortable engaging with AI “general practitioners” rather than human doctors (sic) for basic medical consultations. How the Australian Medical Association and the Royal Australian College of General Practitioners feel about that remains to be seen.
Would AI pay membership fees? What role would the accrediting colleges play in credentialing AI agents? How would indemnity and vicarious liability work?
Policy wonks and legislative drafters, at least, can look forward to a safer future.
Only 33% were comfortable with AI “providing policy and legal guidance”, while just 31% were comfortable with AI “drafting policies/legislation voted on by the government.”
“This indicates a clear hesitancy among the public to allow AI to play a significant role in the formation of laws, reflecting concerns about accountability and possible bias in automated decision-making.”
And how will the government know when people have finally grasped the grand benefits of applying AI to so many aspects of government? Here, AI comes in handy.
“As part of this project, our team used a trained AI research agent (AIRA) to engage respondents in deep conversational interviews. This allowed us to expand our quantitative research by creating a large qualitative dataset with extensive insight into the topic of AI.”
“The calibration process included briefing AIRA on the core goals of the study, the desired conversation flow with respondents, and conversation-length preferences. Once calibration was complete, AIRA was deployed with real-world respondents across all 15 markets of the study, including New Zealand and Australia.”
So, what’s better than deploying AI to find out why humans are resisting AI? Buying AI to overcome that human resistance!
“Our AIRA is a market-leading research technology. If you or your team would like to try out a research agent of your own, reach out to our representatives at EY.”
And ChatGPT sends its sympathies.