Google is using content from YouTube's vast library to train its AI models, including Gemini and the new video and audio generator Veo 3, CNBC reports, citing sources.

One source said a selection of 20 billion videos is being used for training. Google confirmed this but clarified that it applies only to a portion of the content and falls under agreements with creators and media companies.

A YouTube representative explained that the company has always used its own content to improve its services, and that the rise of generative AI has not changed this. "We recognize the importance of guarantees, so we have developed robust protection mechanisms for creators," the company stated.

Experts, however, are concerned about the copyright implications. They believe that using others' videos to train AI without the creators' knowledge could trigger a crisis in intellectual property. Although YouTube says it has disclosed this practice before, most creators were unaware that their content was being used for training.

Google does not disclose how many videos have been used to train the models. But even 1% of the library would amount to over 2.3 billion minutes of content, 40 times more than its competitors have.

Creators grant YouTube broad permission to use their content when they upload videos. However, there is no way for them to opt out of having their videos used to train Google's models.

Representatives of digital rights organizations argue that creators' years of work are being leveraged for AI development without compensation or even notification. The company Vermillio, for instance, has built a service called Trace ID that measures how similar AI-generated videos are to original content. In some cases, the match exceeded 90%.

Some creators do not object to their content being used for training, seeing the new tools as opportunities for experimentation. Most, however, believe the situation is opaque and requires clearer rules.

YouTube has even struck an agreement with the Creative Artists Agency to develop a system for managing AI-generated content that mimics famous individuals. Yet the mechanisms for tracking and removing such content remain imperfect.

Meanwhile, there are already calls in the U.S. to give authors legal protections that would let them control how their work is used in generative AI.

Separately, Google recently updated its internal content moderation rules on YouTube: videos that partially violate the rules can now remain online if they are deemed to be in the public interest.