OTTAWA – The battle between AI companies and copyright owners saw an early victory for publishers in mid-February, when a U.S. court found a legal research firm had no right to use a rival's content.
But even as the number of legal cases grows, a clear answer to the question of whether artificial intelligence companies can train their AI products on copyrighted content is still a long way off.
“We’ve been having this conversation for quite some time,” said Carys Craig, a professor at York University’s Osgoode Hall Law School who specializes in intellectual property. “But it’s still early days.”
“There’s a lot going on at the same time, but it’s not at all clear where all of these balls in the air are actually going to land.”
Generative AI can create text, images, videos and computer code based on simple prompts, but the systems must first be trained on vast amounts of existing content.
A coalition of Canadian news publishers, including The Canadian Press, is suing OpenAI in an Ontario court over the use of news content to train its generative AI system, ChatGPT. There have been no developments in that case since it was launched in late November.
In mid-February, a group of major U.S. media companies and the owner of the Toronto Star filed a copyright infringement lawsuit in a New York court against Canadian artificial intelligence company Cohere.
That followed a series of similar lawsuits launched in the United States, including several involving news publishers. The New York Times is suing OpenAI and Microsoft, while the owners of the Wall Street Journal and the New York Post are targeting Perplexity, an AI-powered conversational search engine. Some of these cases date back to 2023.
In mid-February, a U.S. court determined that Ross Intelligence, a now-defunct legal research firm, was not permitted under U.S. copyright law to use content from Thomson Reuters’ legal platform Westlaw to build a competing platform.
Jane Ginsburg is a professor at Columbia University’s law school who studies intellectual property and technology.
“As far as I know, all other cases have either been recently filed or are in preliminary stages, addressing primarily procedural issues rather than substantive copyright issues,” she said.
Craig said the U.S. decisions have no direct bearing on what will happen in Canada and are “certainly not authoritative in a legal way.”
However, courts dealing with new issues like this may still look to previous cases for guidance.
“The American cases are important, but it would be a mistake to assume they will decide the direction Canada takes,” Craig said.
She said that because different cases involve different platforms with different technical characteristics, “it won’t necessarily be clear exactly how far a particular ruling or line of reasoning extends.”
While the courts interpret existing law, Ottawa has been consulting on how Canadian copyright law could be updated to address the emergence of generative AI.
Canadian creators and publishers want the government to curb the use of their content to train generative AI. Artificial intelligence companies, meanwhile, argue that using such material for training does not violate copyright, and that restricting it would hamper the development of AI in Canada.
The federal government recently released a “what we heard” report on those consultations, saying it “continues to consider how best to address the concerns raised by generative AI in Canada, including those raised by the cultural and technology industries.”
In the U.K., the government is consulting on whether tech companies should be able to use copyrighted material to train AI models unless creators explicitly opt out.
That prompted a protest album of silent recordings released under the names of 1,000 musicians. Elton John and Paul McCartney have spoken out against the plan, and several British newspapers ran front-page wraparounds criticizing the government’s consultation.
Craig said that in Canada, the consultations have yet to settle anything, and policy decisions will ultimately need to be made.
“And I think those policy decisions will depend on politics… I think it depends, in particular, on what other jurisdictions are doing, especially developments in the U.S. and in Europe,” she said.
“So there’s a lot that still has to shake out.”
In Canada, changes to the law will almost certainly have to wait until after a federal election, which could be called within weeks. The prospect of prolonged uncertainty could push the parties toward licensing agreements before these questions are resolved.
Craig said the hope among cultural industries and publishers is to establish that using their material for AI training can give rise to copyright liability.
“That then creates a baseline from which to negotiate, not only settlements in particular cases, but also licensing deals, and to press policymakers for collective licensing solutions,” she said.
Ginsburg said more and more publishers are striking licensing deals with AI companies. The Associated Press, for example, has signed agreements with both OpenAI and Google’s Gemini.
She said there are limits to the quality AI companies can achieve by scraping the internet or using pirated books; as AI output is posted back onto the internet and rescraped, the data steadily degrades.
“The need to use high-quality source data will ultimately bring copyright owners and AI companies closer to the negotiating table,” Ginsburg said.
— With files from Tara Deschamps and The Associated Press
This report by The Canadian Press was first published March 2, 2025.
Anja Karadeglija, The Canadian Press