ChatGPT, Meta and Google’s generative AI should be designated ‘high risk’ under new laws, bipartisan panel recommends

ChatGPT and the generative AI products of Meta and Google should be designated “high risk” under proposed AI laws that could strictly regulate, or even ban, the riskiest AI technologies.

That’s the bipartisan recommendation of a special parliamentary inquiry into the fast-growing technology, which has also levelled a stunning accusation against the tech giants: that they have committed “unprecedented theft” from Australia’s creative workers.

The senators said that if Amazon, Meta and Google’s use of copyrighted content without permission or compensation isn’t already illegal, “it should be.”

It recommended that work begin “urgently” to develop a mechanism for creators to be paid if their work is used to train commercial AI models.

The findings set the stage for the federal government to introduce landmark legislation that could explicitly ban certain uses of artificial intelligence, along with a comprehensive framework to cover its use in healthcare, the workplace, online and elsewhere in society.

Ed Husic is developing the government’s response to the rapid rise in popularity of AI. (ABC News: Nick Haggarty)

The government set up the parliamentary committee to consider whether it should respond to the rise of AI with “whole-of-economy” legislation, tweaks to existing laws, or a lighter-touch approach to regulation developed in partnership with industry.

The committee opted for the strongest response.

The inquiry’s chair, Labor senator Tony Sheldon, said AI was a big opportunity for Australia, but that companies wanting to operate in the country should not be able to exploit Australians.

“Artificial intelligence has incredible potential to significantly improve productivity, wealth and well-being, but it also creates new risks and challenges to our rights and freedoms that governments around the world must address,” said Senator Sheldon.

“We need new standalone AI laws to control big tech and put strong protections in place for high-risk uses of AI, while existing laws should be amended as needed.

“General-purpose AI models must be treated as high-risk by default, with mandatory requirements for transparency, testing and accountability. If these companies want to operate AI products in Australia, those products should create value rather than rip off Australians’ data and income.”

‘High risk’ to democracy and rights at work

The committee specifically recommended that tools like OpenAI’s ChatGPT, known as large language models, be “explicitly” included on a list of high-risk AI uses, along with AI tools used in the workplace to supervise workers or track their output.

“In doing so, these developers will be subject to higher testing, transparency and accountability requirements than many low-risk, low-impact uses of AI,” it said.

It noted that AI-generated content originating in Russia was used in an attempt to disrupt and influence the recent US presidential election, saying AI’s potential to “hurt democracy” was perhaps the most important risk it posed.

It also said the risk of discrimination, bias and error from AI algorithms was widely recognised, and that there was global concern about a lack of transparency — but Amazon, Google, and Facebook and Instagram owner Meta were uncooperative and refused to directly answer questions from the inquiry.

The senators said their interactions with the AI developers “only heightened” their concerns about how the models work.

The senators said the response of Meta and other big tech companies to the inquiry only heightened their concerns. (ABC News: Adam Kennedy)

An AI Act could establish mandatory guardrails by identifying high-risk or low-risk types of technology — such as an AI tool used in surgery versus one used in an online chess game — as well as specifically listing certain products where necessary.

Similar legislation introduced in Europe sets out a “risk” framework to ban social scoring tools like those used in China, and potentially real-time facial recognition tools like the one Bunnings was recently found to have used in breach of privacy law.

The committee found that trust in AI was lower in Australia than in other countries, leading to lower adoption rates here, and that strong safeguards could give the public confidence that the industry can grow safely.

The senators agreed that a “risk-based” approach — one that could reduce the most significant risks of artificial intelligence without unnecessarily intervening in low-risk tools — could ensure the multibillion-dollar industry develops safely without being stifled.

AI companies committed ‘unprecedented theft’ from creatives

The committee also found that multinational technology companies operating in Australia had committed “unprecedented theft” from creative workers.

It said developers of AI products should be forced to be transparent about the use of copyrighted work in their training datasets, and that such use should be properly licensed and paid for.

A mechanism to ensure fair remuneration to creators whose work is used should also be developed in consultation with the creative industry.

The inquiry heard a “significant body of evidence” that AI was already impacting Australia’s creative industries, and while this could bring some productivity gains, stakeholders almost unanimously expressed “serious” concerns about the impact of AI on jobs and their quality.

It said that while AI systems in the United States were able to take advantage of copyrighted material, the committee heard that such use likely amounted to copyright infringement under Australia’s stricter copyright laws.

“There is no part of the workforce at a more acute and urgent risk of the impact of unregulated AI disruption than the more than one million people who work in the creative industries and related supply chains,” the committee said.

“If the large-scale theft of tens of thousands of Australians’ creative works by large multinational technology companies, without authorization or remuneration, is not already illegal, then it should be.”

It said the notion put forward by Google, Amazon and Meta — that their “stealing” of Australian content was for the greater good because it ensured Australian culture was represented in AI outputs — was a “farce”.