DeepMind details AI work with YouTube on video compression and AutoChapters
Besides research, Alphabet’s artificial intelligence lab is tasked with applying its various innovations to help improve Google products. DeepMind today detailed three specific areas where AI research helped “enhance the YouTube experience.”
Calling YouTube one of its key partners, DeepMind starts with how its MuZero AI model helps "optimize video compression in the open source VP9 codec."

By learning the dynamics of video encoding and determining how best to allocate bits, our MuZero Rate-Controller (MuZero-RC) is able to reduce bitrate without quality degradation. Since launching to production on a portion of YouTube's live traffic, we've demonstrated an average 4% bitrate reduction across a large, diverse set of videos.

Separately, DeepMind has worked with YouTube since 2018 on a label quality model (LQM) that more accurately identifies which videos meet advertiser-friendly guidelines and can therefore display ads.
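To make the rate-control idea concrete, here is a minimal, purely illustrative sketch of classic feedback-based rate control: adjust a quantization parameter (QP) frame by frame to track a bitrate budget. MuZero-RC replaces this kind of hand-tuned rule with a learned policy; every function, name, and number below is a hypothetical stand-in, not DeepMind's or libvpx's actual code.

```python
# Illustrative only: a naive per-frame rate controller. All numbers and
# functions are hypothetical; real VP9 rate control is far more involved.

def encode_frame(frame_complexity: float, qp: int) -> float:
    """Toy cost model: bits spent on a frame shrink as QP rises."""
    return frame_complexity * 1000.0 / (1 + qp)

def rate_control(complexities, target_bits_per_frame, qp=30):
    """Encode each frame, nudging QP toward the per-frame bit budget."""
    used = []
    for c in complexities:
        bits = encode_frame(c, qp)
        used.append(bits)
        # Simple feedback: coarser quantization (higher QP) when over
        # budget, finer quantization (lower QP) when under budget.
        if bits > target_bits_per_frame and qp < 51:
            qp += 1
        elif bits < target_bits_per_frame and qp > 0:
            qp -= 1
    return used, qp

bits, final_qp = rate_control([1.0, 2.0, 0.5, 1.5],
                              target_bits_per_frame=40)
```

The interesting part of the learned approach is precisely what this sketch cannot do: plan bit allocation across future frames instead of reacting one frame at a time.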
Most recently, DeepMind developed AutoChapters, which is available on 8 million videos today. The plan is to "scale this feature to more than 80M auto-generated chapters over the next year."
Collaborating with the YouTube Search team, we developed AutoChapters. First we use a transformer model that generates the chapter segments and timestamps in a two-step process. Then, a multimodal model – capable of processing text, visual, and audio data – helps generate the chapter titles.
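The two-stage pipeline described in the quote can be sketched as follows, with both models stubbed out by placeholder functions. The stage structure (segment, then title) comes from the source; everything else, including the token format and both stub implementations, is a hypothetical assumption for illustration.

```python
# Hypothetical sketch of a two-stage chaptering pipeline: stage 1 predicts
# segment boundaries, stage 2 titles each segment. Both models are stubbed.
from dataclasses import dataclass

@dataclass
class Chapter:
    start_s: float
    end_s: float
    title: str

def segment_boundaries(transcript_tokens):
    """Stub for stage 1: a transformer would predict boundary timestamps."""
    # Hypothetical: cut wherever a token is flagged as a topic shift.
    return [t["time"] for t in transcript_tokens if t.get("topic_shift")]

def title_segment(text: str) -> str:
    """Stub for stage 2: a multimodal model would fuse text/audio/video."""
    return text.split(".")[0][:40]  # naive: first sentence, truncated

def auto_chapters(transcript_tokens, video_end_s):
    cuts = [0.0] + segment_boundaries(transcript_tokens) + [video_end_s]
    chapters = []
    for start, end in zip(cuts, cuts[1:]):
        text = " ".join(t["word"] for t in transcript_tokens
                        if start <= t["time"] < end)
        chapters.append(Chapter(start, end, title_segment(text)))
    return chapters
```

The point of the sketch is the division of labor: segmentation needs only timing and topic structure, while titling benefits from richer multimodal context.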
DeepMind has previously worked on improving Google Maps ETA predictions, Play Store recommendations, and data center cooling.