How Shutterstock Uses Machine Learning to Improve the User Experience

By Michael Petry, Mobile Product Owner, Shutterstock  |  July 19, 2016


Most companies know by now that the key to making smart, strategic decisions is to look at both current and past data as a foundation for future business. Business intelligence teams and other analysts are brought on to enable more efficient decision making across every department, whether the result is a visible change for customers or a tangible improvement to process for employees.

Advances in computer vision have opened up opportunities to apply data like never before. With artificial intelligence an increasingly popular topic of late and the neural networks behind it steadily improving, it’s a great time to revisit how – and when – your company is applying its data.

At Shutterstock, we ask contributors to enter between seven and 50 keywords with each image they submit to our collection. This process helps form the metadata buried beneath the images that serves as a core part of the data we collect, rely on, and use on a daily basis. Data like this guides us in better assessing our needs, at present and in the future. Powered by metadata, we can study both customer behavioral patterns and contributor styles to ensure that there’s structure while the collection grows organically.
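To make the keyword rule concrete, here is a minimal sketch in Python of what a submission check along those lines might look like. The function names, the normalization steps, and the limits-as-constants are illustrative assumptions, not Shutterstock’s actual submission code; only the seven-to-50 range comes from the paragraph above.

```python
# Minimal sketch of the keyword rule described above: each submitted image
# must carry between 7 and 50 keywords. Names and structure are
# illustrative assumptions, not Shutterstock's actual submission pipeline.

MIN_KEYWORDS = 7
MAX_KEYWORDS = 50

def normalize_keywords(raw_keywords):
    """Trim, lowercase, and de-duplicate keywords while preserving order."""
    seen = set()
    cleaned = []
    for kw in raw_keywords:
        kw = kw.strip().lower()
        if kw and kw not in seen:
            seen.add(kw)
            cleaned.append(kw)
    return cleaned

def validate_submission(raw_keywords):
    """Return (ok, message) for a single image's keyword list."""
    keywords = normalize_keywords(raw_keywords)
    if len(keywords) < MIN_KEYWORDS:
        return False, f"Need at least {MIN_KEYWORDS} keywords, got {len(keywords)}."
    if len(keywords) > MAX_KEYWORDS:
        return False, f"At most {MAX_KEYWORDS} keywords are allowed, got {len(keywords)}."
    return True, "OK"

print(validate_submission(
    ["Tree", "tree ", "forest", "green", "nature", "leaf", "outdoors", "park"]))
# (True, 'OK') -- duplicates collapse, leaving seven distinct keywords
```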

Arduous Image Labeling

The keywording process for our 100,000 contributors has long been onerous and time-consuming. But to be a successful stock contributor, you also must be skilled at labeling your images. Effective labeling ensures that images are displayed prominently and discovered by potential customers down the line.

Many stock contributors pay close attention to, and rely on, their own data as they plan, basing their next shoot either on what has been popular so far or on what they project will be an emerging trend or search term in the upcoming season. Keywording, however, remains a challenge because there are only so many ways you can describe a tree.


The keywording process is especially cumbersome for those working on mobile devices and going through the tedious process of typing on a small screen and keyboard. Contributors know that they need to take keywording seriously and to be as efficient as they can when entering this metadata, yet it’s difficult to maintain focus. Artists tend to be visual or auditory learners and thinkers, and choosing the most effective words can sometimes get in the way of their true craft.

The mobile team in particular has made great progress recently in easing some of these pains for contributors using our mobile app: enabling batch uploads, adding a preview screen for images, offering keyword autocomplete, and more. But time and again we kept coming back to the same issue, keywording, and to the search for a faster, more effective way to keyword on mobile. We knew the best path forward involved autotagging and similar approaches that let people tap options rather than type full words repeatedly. Suggestions like these can be helpful, but sometimes they can also get in the way; we all have seen examples of bots that were almost too perfect for their own good. We didn’t want to roll out automated suggestions until we were confident that they would be just as accurate as what our contributors would come up with themselves.
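To illustrate the kind of tap-to-add flow described above, here is a minimal sketch of prefix-based keyword autocomplete in Python. The tiny vocabulary and the popularity counts are invented for illustration; this is a sketch of the general technique, not our production mobile code.

```python
# Minimal sketch of prefix-based keyword autocomplete: rank candidate
# keywords that start with what the contributor has typed so far.
# The vocabulary and popularity counts are illustrative assumptions.

from bisect import bisect_left

# (keyword, historical popularity) pairs, kept sorted by keyword for prefix lookup
VOCABULARY = sorted([
    ("tree", 9500), ("treehouse", 1200), ("trek", 800),
    ("trend", 2100), ("forest", 7300), ("foliage", 1900),
])

def suggest(prefix, limit=3):
    """Return up to `limit` keywords starting with `prefix`, most popular first."""
    prefix = prefix.strip().lower()
    keys = [kw for kw, _ in VOCABULARY]
    start = bisect_left(keys, prefix)
    matches = []
    for kw, popularity in VOCABULARY[start:]:
        if not kw.startswith(prefix):
            break
        matches.append((kw, popularity))
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [kw for kw, _ in matches[:limit]]

print(suggest("tre"))  # ['tree', 'trend', 'treehouse']
```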

Machine Learning Offers a Solution

Then, earlier this year, things changed and we saw the path forward. A group of Shutterstock’s engineers had spent a year teaching the computer to learn from and make sense of our collection of 90 million images. Once machine learning emerged as an option, we moved quickly. We paired up with other engineering teams to outline the problem we had been grappling with for many years, and together we applied our reverse image search service to identify what’s contained within each image.

Once the computer recognizes what’s inside an image by breaking it down into its key components and primary characteristics, we have much more data to work with and, as a result, can make more informed decisions. We are introducing pixel data to enable smarter keyword suggestions for our contributors, surfacing keywords that have proven to resonate more with customers. This is especially important because, while our contributors want to make great art, they also want to make more money while doing it.
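In broad strokes, keyword suggestion driven by reverse image search can be sketched as: embed the new image, find visually similar images in the collection, and pool their keywords, leaning toward keywords attached to images customers actually download. The sketch below assumes an embedding step already exists and uses download counts as a stand-in for “keywords proven to resonate with customers”; none of the names or data structures are Shutterstock’s actual pipeline.

```python
# Sketch of keyword suggestion via reverse image search: compare the new
# image's embedding to already-keyworded images, then aggregate keywords
# from the closest matches, lightly boosted by customer downloads.
# The in-memory index and download counts are illustrative assumptions.

from collections import defaultdict
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_keywords(new_embedding, indexed_images, top_images=20, top_keywords=10):
    """
    indexed_images: list of dicts like
      {"embedding": np.ndarray, "keywords": ["tree", ...], "downloads": 123}
    Returns the highest-scoring candidate keywords for the new image.
    """
    # Rank existing images by visual similarity to the new one.
    ranked = sorted(indexed_images,
                    key=lambda img: cosine(new_embedding, img["embedding"]),
                    reverse=True)[:top_images]

    # Score each keyword by the similarity of the images that carry it,
    # boosted by how often those images were downloaded.
    scores = defaultdict(float)
    for img in ranked:
        sim = cosine(new_embedding, img["embedding"])
        for kw in img["keywords"]:
            scores[kw] += sim * (1.0 + np.log1p(img["downloads"]))

    return sorted(scores, key=scores.get, reverse=True)[:top_keywords]
```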

It’s too early to know what kind of impact this innovation will have on contributors’ success. But we are confident that this enhancement to their workflow will pay dividends in both time and mental space. We anticipate that the improved keywording process will help Shutterstock customers as well: with more accurate and reliable tagging beneath the images, users will locate the images they need faster than before. We expect that, over time, the overall site experience will be enhanced for everyone, without any disruption or significant adjustments.

Blend of Man and Machine

Toward that end, we didn’t want to mandate these keywords as substitutes for contributors’ own insight. We wanted a balanced and nuanced combination of metadata and pixel data, blending man with machine. Our concern was that, if we depended solely on the computer’s suggestions, we would wind up seeing the same homogeneous list of keywords over and over again; in other words, whatever was popular and successful would always remain that way. With this approach, contributors remain in control of the integrity and vision of their work.
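As a rough sketch of that blend, machine suggestions can be treated as fill-ins around the contributor’s own keywords rather than replacements for them, with a cap on how much of the final list the machine is allowed to supply. The slot limit and the cap below are illustrative assumptions layered on the 50-keyword ceiling mentioned earlier, not the actual Shutterstock logic.

```python
# Sketch of the "blend of man and machine" idea: the contributor's own
# keywords always come first, machine suggestions only fill the remaining
# slots, and a cap keeps suggestions from dominating, so the final list
# does not collapse into the same popular keywords every time.
# The cap and slot limit are illustrative assumptions.

def blend_keywords(contributor_keywords, machine_suggestions,
                   max_total=50, max_machine_share=0.5):
    """Combine contributor and machine keywords, contributor first."""
    final = list(dict.fromkeys(kw.strip().lower() for kw in contributor_keywords))

    machine_budget = min(max_total - len(final),
                         int(max_total * max_machine_share))
    for kw in machine_suggestions:
        if machine_budget <= 0:
            break
        kw = kw.strip().lower()
        if kw not in final:
            final.append(kw)
            machine_budget -= 1
    return final[:max_total]

print(blend_keywords(["Oak", "autumn light"], ["tree", "oak", "forest", "nature"]))
# ['oak', 'autumn light', 'tree', 'forest', 'nature']
```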

Keywording can be extremely subjective at times, so we are trying to take some of the guesswork out of it. As the machine learns, it will become better at understanding mobile-specific trends. But what sometimes gets lost in the conversation about machine learning is that we, too, are learning, gaining a better understanding of the computer’s strengths and limitations. Too often, people look at machine learning as a replacement for people. In truth, the best way to implement machine learning is to learn alongside the machine. Our recipe of blending pixel data with metadata means that we will continue to monitor the computer and to keep contributors informed about these improvements. And our engineers will continue to pay close attention so that the next version we roll out, for the desktop, will be even better than what we provide today.

Data has done more than inform us and educate us; it has paved the way for a creative solution to a longstanding issue.

Michael Petry is the Mobile Product Owner at Shutterstock.
