AI Legislation

Special committee on AI implementation makes 13 key recommendations

By David Hollingworth | November 27, 2024

The report of the Senate Select Committee on AI Deployment was tabled today (27 November) and includes a dissenting opinion from two Liberal National Party senators, as well as additional comments from the Australian Greens and independent senator David Pocock. The 222-page report is summarized in 13 key recommendations put forward by the committee.

Regulation of AI in Australia

The initial focus of the committee’s report is on what it calls “high-risk uses of AI,” particularly with regard to deepfakes, privacy and data security, and bias and discrimination.

As Dr. Darcy Allen, Professor Chris Berg and Dr. Aaron Lane state in their paper, “biases in generative AI models partly reflect biases inherent in humans.”

“These models are trained on huge datasets… Naturally, biases from the datasets will be embedded in the models,” they said.

The committee makes three recommendations in this area:

1. That the Australian Government introduce new, dedicated, economy-wide legislation to regulate high-risk uses of AI, in line with option 3 set out in the government’s proposals paper, Introducing mandatory guardrails for AI in high-risk settings.
2. That, as part of the dedicated AI legislation, the Australian Government adopt a principles-based approach to defining high-risk AI uses, complemented by a clearly defined, non-exhaustive list of high-risk AI uses.
3. That the Australian Government ensure the non-exhaustive list of high-risk AI uses explicitly includes general-purpose AI models, such as large language models (LLMs).

Developing local AI industry

“The essential challenge for Australia is to develop the AI industry through policies that maximize the wide-ranging opportunities presented by AI technology, while ensuring appropriate protection,” the report said.

According to the committee, AI is a transformative technology that is being developed by organizations large and small in both the private and public sectors. The committee also found that sovereign AI capability was a key point in many of the submissions it received.

There is only one broad and comprehensive recommendation in this area.

4. That the Australian Government continue to strengthen the financial and non-financial support it provides for Australia’s sovereign AI capability, focusing on Australia’s existing areas of comparative advantage and unique Indigenous perspectives.

How AI will impact workers and industry

Perhaps unsurprisingly, the majority of the committee’s recommendations concern the benefits and risks of AI for employers and employees, as well as the industry as a whole.

The committee said the creative industries were particularly at risk, while the healthcare sector could benefit immensely from increased adoption of AI while also facing “very serious risks”.

Overall productivity was identified as an area where significant improvement could be made. According to a submission from Microsoft Corporate Vice President Steven Worrall, “Australia has a great foundation to build on. AI is predicted to create 200,000 new jobs and contribute up to $115 billion to the economy annually.”

The committee makes six recommendations in this area:

5. That the Australian Government ensure that the final definition of high-risk AI, regardless of whether a principles-based or list-based approach to the definition is taken, clearly includes uses of AI that affect the rights of people at work.
6. That the Australian Government extend the existing occupational health and safety legal framework to apply to the workplace risks posed by the introduction of AI.
7. That the Australian Government ensure that workers, workers’ organizations, employers and employers’ organizations are thoroughly consulted on the need for further regulatory responses, and on the best approach to those responses, to address the impact of AI on work and the workplace.
8. That the Australian Government continue to consult, through the Copyright and Artificial Intelligence Reference Group (CAIRG), with creative workers, rights holders and their representative bodies on appropriate solutions to the unprecedented theft of copyrighted material by multinational technology companies operating in Australia.
9. That the Australian Government require developers of AI products to be transparent about the use of copyrighted works in their training datasets, and ensure that the use of such works is appropriately licensed and paid for.
10. That the Australian Government urgently undertake further consultation with the creative industries to consider appropriate mechanisms to ensure that creators are fairly compensated for commercial works generated by AI based on copyrighted material used to train AI systems.

Automating the decision-making process

AI is increasingly being used for automated decision making (ADM). While this has significant benefits and increases efficiency, it also carries risks when it comes to transparency and accountability.

“Transparency is key to the responsible use of ADM by Australian organizations in both the public and private sectors,” the Law Council of Australia said in a submission.

Biases built into AI-based decision-making processes are also a concern.

“AI draws inferences from patterns in existing data,” the ARC Centre of Excellence for Automated Decision-Making and Society said in a submission to the committee. “If a bias is embedded in the data used to train a model, the model will tend to perpetuate that bias…”

The committee’s recommendations are:

11. That the Australian Government implement the recommendations regarding automated decision making in its review of privacy laws, including proposal 19.3, which would introduce a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effects are made.
12. That the Australian Government implement recommendations 17.1 and 17.2 of the Robodebt Royal Commission on establishing a consistent legal framework covering ADM in government services and a body to monitor such decisions. This process should be informed by the consultation currently being led by the Attorney-General’s Department and be consistent with the guardrails for high-risk uses of AI being developed by the Department of Industry, Science and Resources.

Environmental impact

We already know that data centers used to power generative AI come with significant environmental costs, something that was discussed in a number of submissions to the Committee.

Dr Catherine Foley, Australia’s chief scientist, said: “[Training] a model like GPT-3… is estimated to use about 1,500 megawatt hours… (this is) equivalent to about 1.5 million hours of watching Netflix.”
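As a rough sanity check of that comparison (this back-of-envelope arithmetic is mine, not the committee’s, and simply takes both quoted figures at face value): 1,500 megawatt hours is 1,500,000 kilowatt hours, and 1,500,000 kWh ÷ 1,500,000 hours of streaming works out to roughly 1 kWh per hour of Netflix viewing.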

Another submission, this time from the Department of Industry, Science and Resources, notes that “a single data center could consume the energy equivalent to heating 50,000 homes in a year.”

The committee’s final recommendation focuses on making the growth of AI sustainable.

13. That the Australian Government take a coordinated, integrated approach to managing the growth of AI infrastructure in Australia, to ensure that growth is sustainable, delivers value to Australians and is in the national interest.

The HTML version of the full report can be found here and is well worth a read.

David Hollingworth

David Hollingworth has been writing about technology for over 20 years and has worked on a variety of print and online titles during his career. He enjoys getting to grips with cyber security, especially when it gives him a chance to talk about Lego.
