Senior executives at New York’s 24/7 news operations recently revealed their biggest concern: managing ever-increasing content demands with fewer team members as the industry grapples with continued headcount reductions. Today, it takes a producer an average of five minutes to find a particular shot in an organization’s vast media library. For a 10-minute story package, that can add up to eight hours of work just to gather the clips needed for a rough cut, a process that is becoming unsustainable as the workforce shrinks.
With rapid advances in technology, the days of straining traditional systems to work at this pace will soon be over. Multimodal and generative AI are transforming media workflows, reducing content discovery time from eight hours to minutes in some cases and significantly accelerating story creation.
Enhanced access and collaboration
Cloud computing enables remote access to digitized media libraries, connects previously siloed media departments, and supports real-time collaboration between teams. But the biggest paradigm shift in content sourcing and discovery in recent years is multimodal generative AI (GenAI).
Multimodal AI is a type of machine learning designed to mimic human perception. It differs from more traditional unimodal AI in that it ingests and processes multiple data sources, including video, still images, audio, and text, to build a more detailed and nuanced understanding of media content. The most well-known example of GenAI is ChatGPT, which is now regularly used to answer questions and brainstorm ideas.
When used for media indexing, multimodal AI analyzes video from every angle: it recognizes faces, on-screen text, logos, landmarks, objects, actions, and shot types; transcribes speech; performs semantic analysis; and generates descriptions. This allows content creators to search media management systems for exact clips rather than complete video files, and drill down to details such as shot type, scene summary, and the most engaging soundbites identified by AI. Fundamentally, multimodal AI generates powerful metadata, giving content teams a real advantage. This is especially true in live reporting scenarios, such as US election coverage, where speed is key to capturing key moments and cutting stories together for publication first.
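To make the idea concrete, here is a minimal sketch of clip-level metadata search. The schema and field names (shot type, AI-generated summary, soundbite score) are hypothetical, standing in for whatever a multimodal indexer actually produces:

```python
# Illustrative clip-level search over AI-generated metadata.
# The Clip schema is an assumption, not any vendor's real API.
from dataclasses import dataclass, field

@dataclass
class Clip:
    clip_id: str
    shot_type: str                 # e.g. "close-up", "wide", "medium"
    summary: str                   # AI-generated scene description
    faces: list = field(default_factory=list)
    soundbite_score: float = 0.0   # AI-estimated engagement of the soundbite

def search_clips(clips, shot_type=None, keyword=None, min_soundbite=0.0):
    """Filter clips by shot type, summary keyword, and minimum soundbite score,
    returning the most engaging soundbites first."""
    results = []
    for c in clips:
        if shot_type and c.shot_type != shot_type:
            continue
        if keyword and keyword.lower() not in c.summary.lower():
            continue
        if c.soundbite_score < min_soundbite:
            continue
        results.append(c)
    return sorted(results, key=lambda c: c.soundbite_score, reverse=True)

library = [
    Clip("a01", "close-up", "Candidate delivers victory speech", ["J. Doe"], 0.92),
    Clip("a02", "wide", "Crowd outside polling station", [], 0.40),
    Clip("a03", "close-up", "Analyst discusses turnout figures", ["A. Smith"], 0.75),
]

hits = search_clips(library, shot_type="close-up", keyword="speech")
print([c.clip_id for c in hits])  # -> ['a01']
```

The point of the sketch is that once clips carry structured metadata, finding an exact shot becomes a query rather than an eight-hour scrub through raw footage.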
The deep search experience enabled by multimodal AI also opens up the possibility of creating niche content packages and collections around specific themes or genres to satisfy different audiences and advertisers.
Reduce production costs
Although the media industry is not yet at the point where feature-length blockbusters are created entirely using AI, many GenAI applications have already proven to be transformative in pre- and post-production. More applications are rapidly emerging.
Lionsgate recently signed a deal with Runway to create and train new models that allow creatives to generate cinematic videos. Hollywood studios expect to save millions of dollars by using GenAI to “augment, enhance, and supplement” their current operations.
At the Tokyo International Film Festival last month, film and technology leaders highlighted the potential of AI to save millions of dollars in production costs by drastically reducing the traditional expense of filming on location.
Production companies are under pressure to create more engaging content with fewer resources. Ad revenue has been mixed for linear TV networks and streamers, which have slashed content budgets and pulled back from commissioning new programming. Multimodal and GenAI enable deeper exploration of vast media archives, unearthing unreleased footage ripe for reuse. That means new documentaries, behind-the-scenes features, and best-of specials, and with them new revenue streams that don’t require expensive filming.
In Tom Hanks’ latest film, Here, visual effects startup Metaphysic applied GenAI to age the actors both up and down, from 18 to 80, work that traditionally takes hundreds of artists months to complete.
A prompt-driven experience will make building rough cuts even more accessible and efficient. Content creators simply tell GenAI what type of story they want to create, and it automatically scans the clips in their media collection and selects the ones that align with their narrative. AI prompts can also efficiently filter content for quality control and compliance. Commands such as “Find scenes with adult content” help editors isolate and review specific video elements that need to be changed or removed to meet audience standards in a particular region.
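A hedged sketch of that compliance workflow is below. In production, the natural-language prompt would be interpreted by a GenAI model; here a simple keyword map stands in for that step, and the content labels on each clip are assumed to have been assigned by a multimodal indexer (all names are illustrative):

```python
# Prompt-driven compliance filtering over indexed clips (illustrative only).
# flags_from_prompt is a stand-in for a real GenAI prompt-interpretation call.
CONTENT_FLAGS = {
    "adult content": "adult",
    "violence": "violence",
    "smoking": "smoking",
}

def flags_from_prompt(prompt):
    """Map a natural-language prompt to content flags (GenAI stand-in)."""
    p = prompt.lower()
    return {flag for phrase, flag in CONTENT_FLAGS.items() if phrase in p}

def find_scenes(clips, prompt):
    """Return IDs of clips whose AI-assigned labels match the prompt's flags."""
    wanted = flags_from_prompt(prompt)
    return [c["id"] for c in clips if wanted & set(c["labels"])]

clips = [
    {"id": "s1", "labels": ["adult"]},
    {"id": "s2", "labels": []},
    {"id": "s3", "labels": ["violence", "smoking"]},
]

print(find_scenes(clips, "Find scenes with adult content"))  # -> ['s1']
```

The design choice worth noting is that the expensive AI analysis happens once at indexing time; each regional compliance pass is then a cheap metadata query.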
Reveal revenue for archived content
AI’s ability to efficiently analyze and index large amounts of archival media is like holding the key to Aladdin’s cave. Licensing movie footage can cost nearly $10,000 for 60 seconds. Comprehensively indexing a media organization’s hundreds of thousands of hours of video archives would take a human logger more than a lifetime. Multimodal and GenAI revolutionize the process, not only in the speed of indexing but also in how the technology helps prioritize tapes for digitization, sale, and reuse. Advanced AI models are showing great potential in their ability to accurately identify what is recorded on physical tape by simply scanning paper labels and run sheets. This approach allows large-scale archival digitization projects to prioritize tapes with the highest potential for resale and reuse.
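The prioritization step described above can be sketched as a simple scoring pass over label-derived metadata. The scoring weights and field names here are purely hypothetical assumptions for illustration; a real archive would tune these against licensing demand:

```python
# Hypothetical tape-prioritization sketch for a digitization project.
# Fields are assumed to come from AI-scanned labels and run sheets.
def priority_score(tape):
    """Rank tapes by estimated reuse value: named events and recognized
    people raise the score; already-digitized duplicates lower it."""
    score = 0.0
    score += 2.0 * len(tape.get("named_events", []))
    score += 1.0 * len(tape.get("recognized_people", []))
    if tape.get("already_digitized"):
        score -= 5.0
    return score

tapes = [
    {"id": "t1", "named_events": ["1996 election night"],
     "recognized_people": ["B. Dole"]},
    {"id": "t2", "named_events": [], "recognized_people": [],
     "already_digitized": True},
    {"id": "t3", "named_events": ["city marathon"], "recognized_people": []},
]

ranked = sorted(tapes, key=priority_score, reverse=True)
print([t["id"] for t in ranked])  # -> ['t1', 't3', 't2']
```

Even a crude ranking like this lets an archive digitize the tapes most likely to be resold or reused first, instead of working through shelves in storage order.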
The media and entertainment industry is undergoing a period of significant change, and as organizations continue to restructure, traditional content workflows and systems will become increasingly unsustainable. Advances in multimodal and GenAI offer new ways to transform processes: creating more content with fewer resources, unearthing valuable content hidden within archives, and establishing new revenue streams that drive future growth.