This article is part of our series exploring the business of artificial intelligence
As it does every year, Adobe’s Max 2021 event featured product reveals and a look at other innovations underway at the world’s leading computer graphics software company.
Among the most interesting aspects of the event was Adobe’s continued integration of artificial intelligence into its products, an area the company has explored in recent years.
Like many other companies, Adobe relies on deep learning to improve its applications and strengthen its position in the video and image editing market. In turn, the use of AI shapes Adobe’s product strategy.
AI-powered image and video editing
Sensei, Adobe’s AI platform, is now integrated into all products in its Creative Cloud suite. Among the features revealed at this year’s conference was an auto-masking tool in Photoshop, which lets you select an object simply by hovering your mouse over it. A similar function automatically creates mask layers for all the objects it detects in a scene.
The auto mask feature is a great time saver, especially in images where objects have complex outlines and colors and would be very difficult to select with conventional tools.
Adobe has also improved Neural Filters, a feature added to Photoshop last year. Neural Filters use machine learning to enhance images. Many of the filters apply to portraits and images of people. For example, you can apply skin smoothing, transfer makeup from a source image to a target image, or change a subject’s expression in a photo.
Other neural filters make more general changes, such as coloring black and white images or changing the background landscape.
The Max conference also offered previews of upcoming technologies. For example, a new feature in Adobe’s photo-collection product called “in-between” takes two or more photos that were captured within a short interval of each other and creates a video by automatically generating the frames that would fall between the photos.
Another feature in development is ‘on point’, which helps you search Adobe’s huge library of images by providing a reference pose. For example, if you provide it with a photo of a person sitting and reaching out, the machine learning models will detect the person’s pose and find other photos where people are in similar positions.
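The core idea behind such pose search can be illustrated with a minimal, self-contained sketch. In a real system a pose-estimation network extracts the keypoints; here the keypoint vectors, the function names, and all coordinates are hypothetical stand-ins used only to show how a reference pose can be matched against a library by vector similarity:

```python
import math

# Toy sketch of pose-based image search: each pose is a flat list of
# (x, y) keypoint coordinates; similar poses score a high cosine similarity.
# A production system would extract these keypoints with a deep network.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def find_similar_poses(query, library, top_k=1):
    """Rank library poses by similarity to the query pose."""
    scored = sorted(library.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# Hypothetical 4-keypoint poses, flattened to [x1, y1, x2, y2, ...]
query = [0.1, 0.9, 0.5, 0.5, 0.9, 0.5, 0.5, 0.1]   # e.g. sitting, arm extended
library = {
    "sitting_reaching": [0.12, 0.88, 0.52, 0.48, 0.91, 0.52, 0.48, 0.12],
    "standing":         [0.5, 0.95, 0.5, 0.6, 0.5, 0.3, 0.5, 0.05],
}
best = find_similar_poses(query, library)
```

The near-identical keypoint vector ranks first, which is the behavior the feature relies on at library scale.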
AI features have also been added to Lightroom, Premiere, and other Adobe products.
The challenges of delivering AI products
When you look at Adobe’s AI features individually, none of them are groundbreaking. While Adobe didn’t provide any architectural or implementation details at the event, anyone familiar with AI research can immediately relate each of the features presented at Max to one or more papers and presentations from machine learning and computer vision conferences in recent years. Auto-masking uses object detection and segmentation with deep learning, an area of research that has seen tremendous progress recently.
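The final step of that pipeline — turning a segmentation result into per-object mask layers — can be shown with a toy sketch. The per-pixel label map below is hand-written so the example is self-contained; in a real tool it would come from a deep segmentation model, and the function name is a hypothetical illustration, not Adobe’s implementation:

```python
# Toy illustration of automatic mask-layer generation from a segmentation map.
# 0 = background; each positive integer is one detected object.

def masks_from_labels(label_map):
    """Build one binary mask layer per detected object (label > 0)."""
    labels = {px for row in label_map for px in row if px != 0}
    masks = {}
    for obj in sorted(labels):
        masks[obj] = [[1 if px == obj else 0 for px in row]
                      for row in label_map]
    return masks

# A tiny 3x4 "image" containing two detected objects
label_map = [
    [0, 1, 1, 0],
    [0, 1, 0, 2],
    [0, 0, 2, 2],
]
masks = masks_from_labels(label_map)
```

Each returned layer isolates exactly one object, which is what lets an editor toggle, recolor, or composite objects independently without hand-drawn selections.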
Style transfer with neural networks is a technique that is at least four years old. And generative adversarial networks (GANs), which power many of the imaging capabilities, have been around for more than seven years. In fact, many of the technologies used by Adobe are open source and available for free.
The real genius behind Adobe’s AI is not the superior technology, but the company’s strategy to deliver products to its customers.
A successful product must have a differentiating value that convinces users to start using it or move from their old solutions to the new application.
The advantages of applying deep learning to different image-processing applications are clear: they improve productivity and reduce costs. The assistance provided by deep learning models can lower the barrier to artistic creativity for people who lack the skills and experience of expert graphic designers. In the case of auto-masking and neural filters, the tools allow even power users to solve their problems faster and better. And some of the new features, such as “in-between,” address problems that other applications had not solved.
But beyond superior functionality, a successful product must be delivered to its target audience in a smooth and cost-effective manner. For example, suppose you are developing a state-of-the-art neural filter application and want to sell it on the market. Your target users are graphic designers who already use a photo editing tool such as Photoshop. If they want to apply your neural filter, they’ll have to constantly port their images between Photoshop and your app, causing too much friction and degrading the user experience.
You will also have to face the costs of deep learning. Many user devices lack the memory and processing capacity to run neural networks and require cloud-based processing. Therefore, you will need to configure web servers and APIs to serve the deep learning models, and you will also need to ensure that your service will stay online and available as usage grows. You only recover these costs when you reach a large number of paying users.
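The break-even logic in the previous paragraph can be made concrete with a back-of-the-envelope calculation. Every figure below is hypothetical, chosen only to illustrate how fixed infrastructure costs and per-inference costs interact with subscription pricing:

```python
import math

# Back-of-the-envelope break-even for cloud-hosted deep learning inference.
# All figures are hypothetical, for illustration only.

def breakeven_users(fixed_monthly_cost, cost_per_inference,
                    inferences_per_user, price_per_user):
    """Smallest number of paying users at which revenue covers costs."""
    margin_per_user = price_per_user - cost_per_inference * inferences_per_user
    if margin_per_user <= 0:
        raise ValueError("each user costs more to serve than they pay")
    return math.ceil(fixed_monthly_cost / margin_per_user)

users = breakeven_users(
    fixed_monthly_cost=20_000,   # servers, ops, model maintenance
    cost_per_inference=0.002,    # GPU time per filter application
    inferences_per_user=500,     # filter uses per user per month
    price_per_user=10.0,         # monthly subscription
)
```

With these assumed numbers, each subscriber yields $9 of monthly margin after inference costs, so it takes over two thousand paying users just to cover fixed costs — a threshold an established platform clears on day one and a standalone app may never reach.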
You will also need to figure out how to monetize your product in a way that covers your costs while keeping users engaged. Will your product be ad-supported and free, freemium, a one-time purchase, or a subscription service? Most customers prefer not to juggle multiple software vendors with different payment models.
And you will need an awareness strategy to make your product visible in its target market. Will you run ads on social media, do direct sales outreach to design companies, or use content marketing? Many products fail not because they don’t solve a real problem, but because they cannot reach the right market and deliver cost-effectively.
And finally, you will need a roadmap to iterate and continuously improve your product. For example, if you’re using machine learning to improve images, you’ll need a workflow to continually collect new data, find out where your models are failing, and fine-tune them to improve performance.
Adobe’s AI strategy
Adobe already has a very large share of the graphics software market. Millions of people use Adobe’s apps every day, so the company has no problem reaching its target market. Whenever it has a new deep learning tool, it can immediately use the vast reach of Photoshop, Premiere, and other apps in its Creative Cloud suite to make the tool visible and available to users. Users don’t need to pay for or install new apps; they just need to download the new plugins within the apps they already use.
The company’s gradual transition to the cloud over the past few years has also paved the way for the seamless integration of deep learning into its applications. Most of Adobe’s AI functionality runs in the cloud, yet for users the experience is no different from using filters and tools that run directly on their own devices. Meanwhile, Adobe’s cloud scale allows the company to run deep learning inference in a very cost-effective manner, which is why most of the new AI features are available at no extra charge to users who already have a Creative Cloud membership.
Finally, the cloud-based deep learning model gives Adobe the opportunity to run a highly efficient AI factory. As Adobe’s cloud serves deep learning models to its users, it also collects data to improve the performance of its AI features in the future. For example, the company acknowledged at the Max conference that the auto-masking feature does not yet work for all objects but will improve over time. This continuous iteration will in turn allow Adobe to improve its AI capabilities and strengthen its position in the market, and AI will in turn shape the products that Adobe deploys in the future.
Running applied machine learning projects is notoriously difficult, and most companies fail to see them through. Adobe is an interesting case study in how combining the right elements can turn advances in AI into profitable business applications.