
User Data Is So 2018. Here Comes Content Data

This article is more than 5 years old.

When the history of media is written years from now, one of the key questions scholars are likely to ask is why successful publishers allowed other platforms to steal their audiences, why they rarely tried to fight back, and why they didn’t at least demand that the Silicon Valley giants share all the data they were collecting, if not the revenue.

That’s a future a company called IRIS.TV is looking to prevent.

IRIS.TV is a video personalization and programming platform. Its platform creates playlists out of the videos on a publisher’s site in the hopes of (a) keeping viewers there for longer periods of time, (b) learning their likes and dislikes, and (c) getting them to return to the publisher’s site because they’ve had a positive experience with its video offering.

The company started out using algorithms to determine what viewers might want to watch next, moving from there into early forms of AI (artificial intelligence) to understand preferences.

This week, IRIS.TV is making a giant leap forward, pairing with the IBM Watson Media team to use their technology to improve recommendations. “What we do is parse through the video and identify the various elements, creating detailed metatags,” explains David Mowrey, Head of Product and Development at IBM Watson Media. “Our automated system goes through the video second-by-second and analyzes both the visual elements and the acoustic elements. Unlike humans, we don’t miss anything.”

Metatags, for those who are unfamiliar with the term, are tags affixed to pieces of content to let search engines understand what’s in them, so that they can be more easily searched and catalogued. Since metatagging video is a relatively new phenomenon, most older video content has limited metatagging, e.g., an episode of Seinfeld may just be tagged with “Comedy” and “Jerry Seinfeld”—nothing about the well-known gags in the episode or any of the words associated with them (e.g., “close-talker”). That makes finding that particular episode difficult, and makes it difficult to recommend it to someone who likes shows with similar gags.
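To make the difference concrete, here is a minimal sketch in Python of sparse, human-applied tags versus the denser, scene-level tags a machine-tagging system of the kind described above could produce. All tag names, field names, and the data structure are invented for illustration, not taken from any actual IBM Watson or IRIS.TV format.

```python
# Hypothetical metadata for one episode: sparse human tags vs. rich,
# scene-level machine tags. Everything here is invented for illustration.

sparse_meta = {
    "title": "Seinfeld episode",
    "tags": ["Comedy", "Jerry Seinfeld"],
}

rich_meta = {
    "title": "Seinfeld episode",
    "tags": ["Comedy", "Jerry Seinfeld"],
    "scenes": [
        # start/end are seconds into the video
        {"start": 312, "end": 345,
         "visual": ["apartment", "two people standing very close"],
         "audio": ["laugh track"],
         "concepts": ["close-talker"]},
    ],
}

def searchable_terms(meta):
    """Flatten every tag in the metadata into one searchable set of terms."""
    terms = set(meta.get("tags", []))
    for scene in meta.get("scenes", []):
        terms |= set(scene["visual"]) | set(scene["audio"]) | set(scene["concepts"])
    return terms

# The gag is only findable when scene-level tags exist:
print("close-talker" in searchable_terms(sparse_meta))  # False
print("close-talker" in searchable_terms(rich_meta))    # True
```

The point of the sketch: a search or recommendation engine can only match on terms that exist somewhere in the metadata, so scene-level tagging is what makes gag-level search and gag-level recommendations possible.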

It’s also why Netflix has a dedicated team of hundreds devoted to watching and tagging the movies and TV shows it owns, breaking them down so they can be sorted into the quirky categories its recommendation engine serves up to users (“Dark Comedies Starring Dark-Haired Women” and the like).

For short-form video, the tagging is also done by humans, and it’s generally just a sentence or two describing the overall topic of the video, e.g., “Olympic skater Roberta Jackson describes her training routine.” That leaves out the music she trains to, the funny story about the arena she trains at, where that arena is actually located, who comes to watch her—hundreds of details that Watson can easily identify, but that get lost in those single-sentence descriptions.

Think of it as the difference between a one-minute trailer and a full-on movie review.

“We can use all that data that IBM Watson uncovers and combine it with our own contextual data to create far more meaningful recommendations and playlists,” Richie Hyden, Co-Founder and Chief Operating Officer at IRIS.TV, tells me. “It lets us start to see patterns in what the user likes and be able to surprise and delight them with something that might not have been as obvious without the deep content-based data we get from IBM Watson.”

With IBM Watson, Hyden feels that recommendations and playlists are just the beginning. “It’s easy to see how this can translate into advertising and branded content,” he notes. “You can see the types of products, the types of ads that people respond to, understand the various elements that are in the ad, and use that to serve up a better experience. That’s a win for the consumer and for the advertiser.”

“What we’re really trying to do is put the ‘R’ back in ‘ROI’,” Hyden continues. “And we’re doing that by using machine learning and advanced AI to understand an individual consumer, and then seeing how can we use that intelligence to make their consumer experience better—for video, for branded content and for advertising—in a way that also helps create a better return on investment for the publisher.”

The beauty of having all that metadata is obvious to anyone who has ever attempted to put together any sort of recommendation engine.

Recommendations are tricky. Get a few wrong and the user totally loses faith in you. And people are tricky too—they don’t always like what they’re supposed to like and they crave serendipity, the ability to discover things seemingly randomly.

That’s where data comes in.

Right now the industry is focused on having data around viewers—who they are, what they like, what they’ve bought, who their friends are. What’s been lacking, however, is deep data around the content. Combine the two, and you have a powerful tool for matching the right viewer with the right programming and the right viewer with the right ads. People respond to more than just top-level storylines, and with the proper AI program in place, networks and advertisers will be able to take advantage of these connections.
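One very simple way to picture how viewer-side and content-side data combine is tag overlap: score each video by how many of its content tags match the viewer’s known interests, then rank. This is a toy sketch with invented data, far simpler than the machine-learning approach the article describes, but it shows why both halves of the data are needed.

```python
# Toy matching sketch (all data invented): combine what we know about the
# viewer with deep tags on the content, and rank videos by tag overlap.

viewer_interests = {"figure skating", "training montages", "classical music"}

video_catalog = {
    "skater_profile": {"figure skating", "training montages", "interview"},
    "cooking_show":   {"baking", "interview"},
}

def rank_videos(interests, catalog):
    """Rank catalog entries by how many tags they share with the viewer."""
    scores = {video: len(tags & interests) for video, tags in catalog.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank_videos(viewer_interests, video_catalog))
# ['skater_profile', 'cooking_show']
```

With only viewer data, there is nothing to match against; with only content data, there is no one to match for. The overlap score exists only when both sides are tagged, which is the argument the passage above is making.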

“The advantage we have over the other companies playing in the space is that we don’t own this data,” notes Hyden. “We’re like Switzerland—we’re neutral. What we can do is analyze the data, both about the viewers and about the programming, and help publishers make better decisions, help them improve the overall journey for the user and get the most value out of their programming. We think that as television and video evolve, that is going to be an increasingly valuable proposition.”

Indeed.