AI is already helping and guiding us in so many aspects of our daily lives. Digital assistants like Siri and Alexa help us perform tasks and find information. Online streaming platforms suggest media we’ll love that we didn’t even know existed. And autocorrect saves us time when typing and prevents spelling mistakes (although left unchecked it can cause some embarrassing moments — just Google “autocorrect fails”).
For digital asset management (DAM) systems, AI is already being used for many different purposes, such as recognizing and tagging objects in visuals or classifying assets by the specific human model that appears in them.
And yet change is upon us in the form of generative AI - a new, revolutionary form of artificial intelligence that will completely change the way that we create visual assets. Companies that embrace it have an opportunity to stand out from their competitors while slow adopters face being left behind - so it’s important to understand what’s coming and prepare accordingly.
To understand generative AI and why it’s such a big leap in terms of technological evolution, let’s first take a closer look at the AI we’re using right now.
The AI we all use every day is discriminative

The technical term for the AI we all use on a daily basis is discriminative artificial intelligence - AI that helps us better understand data that already exists.
WoodWing integrates with discriminative AI to help with recognition (for example, identifying objects in uploaded visuals and adding tags accordingly), understanding (to help drive future content creation) or recommendations (which asset will perform best on which platform).
Discriminative AI is great at understanding existing data, but it can't create anything new. For that, AI had to evolve.
Generative AI enables computers to learn the underlying patterns in a set of inputs, and then use those patterns to generate similar content completely from scratch.
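The idea of "learn a pattern, then sample new content from it" can be illustrated with a deliberately simple sketch. Modern generative AI uses deep neural networks, not the toy Markov chain below, but the principle is the same: the model observes which elements tend to follow which, then produces new sequences that were never in the training data.

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Learn which word tends to follow each pair of words."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=0):
    """Sample new text by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    state = rng.choice(list(model.keys()))
    out = list(state)
    for _ in range(length):
        choices = model.get(tuple(out[-2:]))
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# A tiny made-up corpus; real models train on billions of examples.
corpus = (
    "brands tell stories with visual assets and brands tell stories "
    "with generated visual assets at scale"
)
model = train(corpus)
print(generate(model))
```

The output is synthetic: it follows the statistical shape of the input without copying it verbatim, which is the essence of what generative models do at vastly larger scale.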
This is known as synthetic data, as it’s been artificially created. There are already several companies making significant advances in this space - generating synthetic data for machine learning training, creating visual bots, writing short or long-form text and generating images and videos.
While this technology is still at an early stage, it can already produce some impressive results. Let’s take a look at some examples:
“We provide solutions for Content Orchestration, so leading brands and publishers can tell their stories using an open, transparent, and online platform. Here you can find over 1,500 projects and products from more than 80 brands that can lead the way in creating powerful content solutions that leverage content, technologies, and partnerships.”
The italicized text was generated from a prompt of the non-italicized text, taken from the WoodWing homepage, using DeepAI’s text generation API.
Enter text, select a style, and create a work of art. These images were all generated from the term “Digital Asset Management” using Wombo:
Create photographs and videos from scratch or recast existing assets with Bria
Bria integrates seamlessly into content management and digital asset management systems to enable anyone to create photo-realistic marketing assets - both image and video - from scratch. The high level of control lets you implement branding styles at scale across a whole library, adjust sentiment, recast models and create videos from still photographs.
Apply pre-defined brand styles across your entire asset library with a click of a button in WoodWing Assets
By building generative AI capabilities into their platforms, DAMs are uniquely positioned to help their users customize an infinite number of visuals in seconds. When a company does a brand refresh, new brandbook guidelines on colors, mood, and even logos can be applied across all existing visuals in one go. Existing assets can also be repurposed to tell different stories, localize for new markets or become more engaging.
Companies use A/B testing to make data-driven decisions, but it’s time-consuming to create many variations of visuals. This means tests are normally limited to text, colors, and layouts. DAMs can use generative AI to automate and scale the creation of image variations for their users, making it faster and easier to A/B test different facial expressions, models, backgrounds, and props.
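Once generated variants are in market, comparing them is standard statistics. The sketch below uses a two-proportion z-test on hypothetical click numbers for two AI-generated backgrounds; the figures and scenario are invented for illustration, not taken from any DAM.

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: is variant B's click rate different from A's?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)        # pooled click rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: same ad creative with two generated backgrounds.
z, p = two_proportion_z(clicks_a=120, n_a=2000, clicks_b=165, n_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With enough cheap variants, the bottleneck shifts from producing the visuals to collecting enough impressions per variant for a significant result.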
Given that videos drive higher engagement online than still images, imagine a world where DAMs offer a button to transform any photograph into a dynamic video in just one click. Thanks to generative AI, that world is just around the corner.
Bria’s smart search understands complex concepts to return accurate results to natural-language queries. In this example, a search for “I love you baby” shows images of couples and not of babies.
And text generation can be used for better search. For example, Bria is able to translate visuals into descriptions that go way beyond basic object tagging. This means users can search across their asset library using natural language, returning much more accurate results. The search can also deal with typos and different languages. And if an image doesn’t exist in the library - the AI can generate it for you.
The world is moving toward AI-generated media - within a few years, synthetic visuals will likely make up the majority of marketing assets.
So DAMs that harness this technology fast and early will have a strong competitive advantage in what’s already becoming a crowded market. Offering users the ability to automate and scale visual storytelling is a compelling benefit to marketers who are always looking to drive efficiencies and get assets to market faster than before.
As with any revolutionary technology, generative AI is not without its risks. So it’s important that DAMs seek partners who incorporate guardrails for responsible use of these new tools. If customers demand an ethical product, technology providers will deliver it (and it should be said that many generative AI companies are already working hard to build in ethical considerations).
It’s an exciting time for anyone in the business of creativity. Those who embrace these changes fast will build strong foundations for future success.