New Delhi: A new wave of ethical scrutiny has engulfed the tech industry after Google confirmed that YouTube videos were used to train its AI-powered video generator, Veo 3, without explicit consent from or compensation to creators.
The revelation has sparked widespread concern across the creator economy, raising pressing questions about ownership, consent, and the use of user-generated content in artificial intelligence development.
Veo 3 is positioned as a generative video model capable of producing movie-quality clips from a simple text prompt. It offers realistic visuals, ambient sound, and dialogue, and is set to be integrated into YouTube Shorts later this year.
Speaking at Cannes Lions 2025, YouTube CEO Neal Mohan described Veo 3 as a transformational tool for democratizing short-form storytelling, noting that YouTube Shorts now draws more than 200 billion views every day.
But behind Veo's innovation lies a controversial development practice. The model was trained in part on YouTube's vast content library, reportedly over 2 billion videos, without creators' active knowledge. A YouTube spokesperson told CNBC that "we have always used YouTube content to improve our products, and this has not changed with the advent of AI," adding that the practice remains consistent with YouTube's creator agreement.
Many creators, however, expressed shock, saying they had been unaware their videos could be used to train AI systems that might ultimately compete with their own content. YouTube's updated Terms of Service from September 2024 grant the platform broad rights to use uploaded content, including for machine learning and AI applications, but offer creators no way to opt out of training for Google's own models, only for third parties such as Apple and Anthropic.
YouTube's AI training practices reflect a wider trend among technology companies racing to build generative models. OpenAI faced backlash in 2024 over allegations that it had transcribed more than a million hours of YouTube content. Nvidia reportedly trained AI on decades' worth of YouTube footage, while companies like Meta, Salesforce, and Apple have relied on public video data to bolster their own AI systems.
Meta, in particular, trained its Llama models on Facebook and Instagram posts, prompting legal action from affected users. Apple, meanwhile, is facing pushback from publishers over AI licensing deals that lack clarity on attribution and compensation.
The issue has also reached emerging AI companies like Perplexity AI, recently valued at $14 billion. The AI-powered search engine's web-scraping practices drew fire from outlets such as Forbes for replicating articles without permission. Collectively, these cases highlight a growing ethical question: are tech companies exploiting content published online under the banner of "fair use" at the expense of its creators?
For creators, Veo 3 presents both promise and peril. Some welcome AI as a tool to streamline production, while others fear being sidelined by machine-generated content. As one creator put it: "It's plausible that they're taking data from creators who have spent years building channels to create synthetic versions of our work (poor facsimiles)."
Technology company Vermillio used its Trace ID tool to evaluate content generated by Veo. In one case, an AI clip modeled on videos from creator Brodie Moss scored 71% on visual similarity and over 90% on audio match, raising concerns that AI tools could replicate a creator's unique style or likeness. YouTube has partnered with the Creative Artists Agency to introduce protection tools for public figures, but smaller creators remain unprotected.
The economic stakes are rising as well. Over 25% of creators in the YouTube Partner Program earn money through Shorts. As the volume of AI-generated video grows, shifting algorithmic preferences and viewer fatigue could push human-made content to the margins.
The debate is also unfolding in the legal arena. A 2024 report from the U.S. Copyright Office argued that the bulk use of copyrighted material for uncompensated AI training is "not fair use under current law." Legal experts also point to a recent lawsuit against Midjourney over the use of iconic characters as an early example of a potential wave of litigation against generative AI platforms.
To reassure users, Google has introduced indemnification for Veo 3, promising to cover legal liabilities arising from copyright disputes involving its output. YouTube also allows creators to request takedowns of infringing AI content. Critics argue, however, that these safeguards are reactive rather than preventive, and that creators are still denied the option to opt out of AI training datasets entirely.
The concerns are even more pronounced in the European Union, where data-usage rules are stricter. Experts suggest that Google's reliance on creator content for AI training could violate the EU AI Act, which mandates transparency and traceability of training data.
The Veo 3 controversy has amplified industry-wide calls for ethical reform in AI development. Key proposals gaining traction include:
Transparency and consent: notify users when their content is used in training datasets, with the right to opt out.
Compensation models: establish royalty or licensing systems, similar to those used in music streaming.
Regulatory oversight: strengthen copyright and data-protection laws for AI-era content use.
Creator tools: expand access to detection and identity-protection features such as Trace ID to all creators.
As AI is integrated ever deeper into content platforms, the clash between innovation and creator rights is intensifying. The companies driving the AI revolution, Google, Meta, and Apple among them, built their platforms on the backs of creators. Now, as they try to redefine content creation through machine learning, creators are demanding a seat at the table.
Industry experts warn that without meaningful protection and compensation, the generative AI boom risks undermining the very community that made platforms like YouTube successful in the first place.