TwelveLabs to Bring Its State-of-the-Art Video AI Models to Amazon Bedrock
The expanded TwelveLabs/AWS collaboration enables developers to more easily access and use TwelveLabs’ capabilities to make video more searchable and discoverable

LAS VEGAS--Amazon Web Services (AWS) and TwelveLabs have announced that TwelveLabs' state-of-the-art multimodal foundation models, Marengo and Pegasus, will soon be available in Amazon Bedrock.
The deal is notable because AWS is the first cloud provider to offer TwelveLabs' models and because TwelveLabs is developing AI-based solutions that make it easier to understand, search and discover what is in video assets. That opens up a host of opportunities for creators and owners of video libraries, which have traditionally been difficult to search.
Amazon Bedrock is a fully managed service that offers developers access to high-performing models from leading AI companies through a single API.
Seamless access to TwelveLabs’ advanced video understanding capabilities will enable developers and enterprises to transform how they search, analyze, and generate insights from video content, leveraging the security, privacy, and performance of AWS, the two companies reported.
"Video contains nearly 80% of the world's data, yet most of it remains unsearchable and underutilized," said Jae Lee, co-founder and CEO of TwelveLabs. "By making our models available through Amazon Bedrock, we're empowering even more enterprises to bring video understanding to their existing infrastructure. Our technology enables users to search across their entire content library—from videos collected 10 years ago or 10 minutes ago—to find the precise moment they’re looking for in less than a single second, and then interpret and analyze those moments. This opens the door for all kinds of novel uses. Through the collaboration with AWS, we can extend powerful capabilities to customers and accelerate innovation across industries."
While video is commonly regarded as one of the world's largest unsearchable data sources, TwelveLabs has developed cutting-edge technology capable of turning video libraries and assets into a trove of accessible information. Whether it's giving a sports network the ability to instantly pull every instance of a specific play style or commentator reaction or helping a broadcaster identify recurring themes across large volumes of footage, TwelveLabs helps teams turn their video archives into usable, indexable assets, unlocking both operational efficiency and new revenue opportunities, it explained.
The integration will benefit multiple industries, including media, entertainment and advertising, the two companies reported.
For example, it will allow film and TV studios to rapidly manage video workloads spanning dailies, content repackaging and archive management. Sports leagues and teams can efficiently create match highlights and produce customized, fan-focused content at scale. News agencies and broadcasters can quickly search large libraries to find the moments that matter.
And streaming services can better package and distribute content across platforms and more effectively insert relevant video ads, the two companies explained.
More specifically, TwelveLabs provides capabilities including:
- Natural language video search that pinpoints precise content moments
- Deep video understanding without requiring pre-defined labels
- Multimodal AI processing visual, audio, and text simultaneously
- Temporal intelligence connecting related events across time
- Enterprise-scale solutions that turn extensive video libraries into accessible knowledge
With Marengo and Pegasus available in Amazon Bedrock, AWS customers can use TwelveLabs’ models to build and scale generative AI applications without managing underlying infrastructure. Using Amazon Bedrock, customers gain access to a broad set of capabilities while maintaining complete control over their data, benefiting from enterprise-grade security and utilizing cost control features—all essential for deploying AI responsibly at scale, AWS said.
In practical terms, TwelveLabs’ fully managed, serverless models in Amazon Bedrock allow developers to (a sketch of a typical API call follows the list below):
- Create applications that search through videos, classify scenes, summarize content, and extract insights using natural language
- Build sophisticated video understanding features without specialized AI expertise
- Scale video processing from small collections to massive libraries with consistent performance
- Deploy solutions with enterprise-grade security and governance controls
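Because the models are served through Amazon Bedrock, the call pattern is the same as for other Bedrock models: a single Bedrock Runtime request. The sketch below is a minimal illustration using the standard boto3 invoke_model API; the TwelveLabs model ID, request fields and S3 path shown are illustrative placeholders, not a published schema, so consult the Bedrock model documentation for the actual identifiers and payload format.

```python
# Minimal sketch: calling a TwelveLabs model through Amazon Bedrock Runtime.
# The model ID, request fields and S3 URI below are illustrative placeholders.
import json
import boto3

# Standard Bedrock Runtime client (boto3).
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical payload: ask the video-understanding model to summarize a clip
# stored in S3 and surface its key moments.
request_body = {
    "inputPrompt": "Summarize this clip and list its key moments.",
    "mediaSource": {"s3Location": {"uri": "s3://my-bucket/footage/interview.mp4"}},
}

response = bedrock.invoke_model(
    modelId="twelvelabs.pegasus-1-2-v1:0",  # placeholder identifier
    body=json.dumps(request_body),
)

# The response body is a streaming object; parse it as JSON.
result = json.loads(response["body"].read())
print(result)
```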
“Video understanding is revolutionizing how industries like media & entertainment, sports, automotive, and education work with and discover content,” said Samira Panah Bakhtiar, general manager of Media & Entertainment, Games, and Sports at AWS. “Over the last year, I have consistently said that natural language semantic search is a 'strategic unlock' for our entertainment customers, as they reexamine their existing intellectual property and breathe new life into it. By bringing TwelveLabs’ advanced models to Amazon Bedrock, we’re helping our customers make sense of any video moment, unlocking the full value of their treasured video assets. Businesses will now be able to easily search, categorize, and extract insights from their vast video libraries, enabling new use cases and better user experiences that were previously impossible without significant technical expertise.”
AWS and TwelveLabs' integration partner Monks expressed its excitement: "We've been putting AI to work across the entire video value chain for IP holders, broadcasters and brands. TwelveLabs in Amazon Bedrock makes it easier to realize opportunities for monetization in broadcast news, entertainment and sports by making it simpler and more secure to build and scale applications with powerful video understanding," said Lewis Smithingham, EVP Strategic Industries at Monks.
This announcement builds on strong existing work between AWS and TwelveLabs and continues the momentum of their Strategic Collaboration Agreement (SCA).
"This integration with Amazon Bedrock represents the next phase in our collaboration with AWS, making our video understanding AI more accessible to enterprises worldwide," added Lee.
To learn more about TwelveLabs’ industry-leading models, explore twelvelabs.io, Marengo 2.7 and Pegasus 1.2. Find out more about TwelveLabs models in Amazon Bedrock here.
George Winslow is the senior content producer for TV Tech. He has written about the television, media and technology industries for nearly 30 years for such publications as Broadcasting & Cable, Multichannel News and TV Tech. Over the years, he has edited a number of magazines, including Multichannel News International and World Screen, and moderated panels at such major industry events as NAB and MIP TV. He has published two books and dozens of encyclopedia articles on such subjects as the media, New York City history and economics.