The report from the Senate Select Committee on AI Deployment was tabled today (27 November) and includes a dissenting report from two Liberal National Party senators, as well as additional comments from the Australian Greens and independent senator David Pocock. The 222-page report is summarized in 13 key recommendations put forward by the committee.
Regulation of AI in Australia
The initial focus of the committee’s report is on what it calls “high-risk uses of AI,” particularly with regard to deepfakes, privacy and data security, and bias and discrimination.
As Dr Darcy Allen, Professor Chris Berg and Dr Aaron Lane state in their paper, “biases in generative AI models partly reflect biases inherent in humans.”
“These models are trained on huge datasets … naturally, biases from the datasets will be embedded in the models,” they said.
The committee makes three recommendations in this area:
1. That the Australian Government introduce new, dedicated, economy-wide legislation to regulate high-risk uses of AI, in line with option 3 set out in the government’s Introducing mandatory guardrails for AI in high-risk environments: a proposal.
2. That, as part of the dedicated AI legislation, the Australian Government adopt a principles-based approach to defining high-risk AI uses, complemented by a clearly defined, non-exhaustive list of high-risk AI uses.
3. That the Australian Government ensure the non-exhaustive list of high-risk AI uses explicitly includes general-purpose AI models, such as large language models (LLMs).
Developing local AI industry
“The essential challenge for Australia is to develop the AI industry through policies that maximize the wide-ranging opportunities presented by AI technology, while ensuring appropriate protections,” the report said.
According to the committee, AI is a transformative technology being developed by organizations large and small in both the private and public sectors. Sovereign AI capability was also a key point in many of the submissions made to the committee.
There is only one broad and comprehensive recommendation in this area.
4. That the Australian Government continue to strengthen the financial and non-financial support it provides for Australia’s sovereign AI capability, focusing on Australia’s existing areas of comparative advantage and unique Indigenous perspectives.
How AI will impact workers and industry
Perhaps unsurprisingly, the majority of the committee’s recommendations concern the benefits and risks of AI for employers and employees, as well as the industry as a whole.
The committee said the creative industries were particularly at risk, while the healthcare sector could benefit immensely from increased adoption of AI while also facing “very serious risks”.
Overall productivity was identified as an area where significant improvement could be made. According to a submission from Microsoft corporate vice-president Steven Worrall, “Australia has a great foundation to build on”, with AI predicted to create 200,000 new jobs and contribute up to $115 billion to the economy annually.
The committee makes six recommendations in this area:
5. That the Australian Government ensure that the final definition of high-risk AI clearly includes uses of AI that affect the rights of people at work, regardless of whether a principles-based or list-based approach to the definition is taken.
6. That the Australian Government extend the existing occupational health and safety legal framework to apply to workplace risks posed by the introduction of AI.
7. That the Australian Government ensure that workers, workers’ organisations, employers and employers’ organisations are thoroughly consulted on the need for further regulatory action and the best approach to addressing the impact of AI on work and the workplace.
8. That the Australian Government, through CAIRG, continue to consult with creative workers, rights holders and their representative bodies on appropriate solutions to the unprecedented theft of copyrighted materials by multinational technology companies operating in Australia.
9. That the Australian Government require developers of AI products to be transparent about the use of copyrighted works in their training datasets, and to ensure that the use of such works is appropriately licensed and paid for.
10. That the Australian Government urgently undertake further consultation with the creative industries to consider appropriate mechanisms to ensure that creators are fairly compensated for commercial works generated by AI based on copyrighted material used to train AI systems.
Automating the decision-making process
AI is increasingly being used for automated decision making (ADM). While this can deliver significant benefits and efficiency gains, it also carries serious risks when it comes to transparency and accountability.
“Transparency is key to the responsible use of ADM by Australian organizations in both the public and private sectors,” the Law Council of Australia said in a submission.
Biases built into AI-based decision-making processes are also a concern.
“AI draws inferences from patterns in existing data,” the ARC Centre of Excellence for Automated Decision-Making and Society said in a submission to the committee. “If a bias is embedded in the data used to train a model, the model will tend to perpetuate that bias…”
The committee’s recommendations are:
11. That the Australian Government implement the recommendations relating to automated decision making in its review of privacy law, including proposal 19.3, which would introduce a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made.
12. That the Australian Government implement recommendations 17.1 and 17.2 of the Robodebt royal commission, establishing a consistent legal framework covering ADM in government services and a body to monitor such decisions. This process should be informed by the consultation process currently being led by the Attorney-General’s Department and be consistent with the guardrails for high-risk uses of AI being developed by the Department of Industry, Science and Resources.
Environmental impact
We already know that data centers used to power generative AI come with significant environmental costs, something that was discussed in a number of submissions to the Committee.
Dr Catherine Foley, Australia’s chief scientist, said: “[Training] a model like GPT-3 … is estimated to use about 1,500 megawatt hours … [this is] the equivalent of about 1.5 million hours of watching Netflix.”
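To put those two quoted numbers side by side, a quick back-of-the-envelope calculation shows the per-hour streaming energy the comparison implies. This is purely illustrative: the implied figure is derived by dividing the two quoted estimates, and is not a number from the report itself.

```python
# Rough sanity check of the figures quoted above (illustrative only).
TRAINING_ENERGY_MWH = 1_500      # quoted GPT-3 training energy estimate
NETFLIX_HOURS = 1_500_000        # quoted equivalent hours of Netflix viewing

training_energy_kwh = TRAINING_ENERGY_MWH * 1_000   # 1 MWh = 1,000 kWh
implied_kwh_per_hour = training_energy_kwh / NETFLIX_HOURS

print(f"Training energy: {training_energy_kwh:,} kWh")
print(f"Implied energy per streaming hour: {implied_kwh_per_hour:.1f} kWh")
```

The comparison implies roughly 1 kWh per hour of streaming, which is at the high end of published estimates but the right order of magnitude for the analogy to hold.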
Another submission, this time from the Department of Industry, Science and Resources, notes that “a single data center could consume the energy equivalent to heating 50,000 homes in a year.”
The committee’s final recommendations focus on making AI growth sustainable.
13. That the Australian Government adopt a coordinated, integrated approach to managing the growth of AI infrastructure in Australia, to ensure that growth is sustainable, delivers value to Australians and is in the national interest.
The HTML version of the full report can be found here and is well worth a read.
David Hollingworth
David Hollingworth has been writing about technology for over 20 years and has worked on a variety of print and online titles during his career. He enjoys learning about cyber security and is especially happy to talk about Lego.