Next Big Things: Creativity gets automated

Keep your eye on neural networks, and forget devising just a single clever campaign, our pundits say.


This story appears in the September 2015 issue of strategy.

For our Next Big Things issue, we reached out to creative minds in several corners of the industry to get a sense of which recent developments excite them, and what their next steps and applications should be. Here are some AI and automation ideas our pundits predict will make waves in the industry.

Dreaming of more creative sheep

Mikko Haapoja, director of creative technology, Jam3

You’ve probably seen Google’s eye-catching Deep Dream images floating around the internet. At first glance, you’d think these images are simply examples of ’60s psychedelic art, but they’re in fact visualizing the work of a form of AI called a neural network as it tries to find animals and other familiar shapes within images.

The main purpose of neural networks is not to create trippy images, but to solve “fuzzy” or non-binary problems. Fuzzy problems are the kinds of things humans are very good at solving but give computers a hard time.

With its neural network, Google is trying to predict whether or not you’d like to visit a specific site. Simply knowing what your consumer wants is already a powerful thing in advertising, but there are also potential creative uses.

What if Coke wanted to create a website where users could share images of happiness? Currently, to recognize the emotion a user is conveying, a facial-recognition library could be used to output the geometry of the user’s facial features. Then a developer would meticulously write code that analyzes that geometry, checking, for instance, whether the corners of the mouth are upturned. This technique is tedious to write and very error-prone.

The most amazing part of neural nets is that they are applications that can learn. The developer instead writes the base application, which then learns what faces look like when conveying different emotions through “training,” as images of people are passed through the neural net. This is similar to the way banks have been analyzing handwritten cheques for years, using optical character recognition, with an accuracy of more than 99%.
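The contrast between hand-coded geometry rules and a system that learns from examples can be sketched in a few lines. The following is an illustrative toy only: a single-neuron perceptron (the simplest ancestor of a neural network) that learns to label mouth shapes as happy or not from labeled examples, rather than having a developer hard-code the rule. The features, values and names are all hypothetical; a real system would use a neural network library and far richer image data.

```python
def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Learn weights from labeled (feature-vector, label) examples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # learn only from mistakes
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features: how far each mouth corner sits above the
# mouth centre (positive = upturned). Label 1 = "happy", 0 = not.
faces  = [(0.8, 0.7), (0.6, 0.9), (-0.5, -0.4), (-0.7, -0.6)]
labels = [1, 1, 0, 0]

w, b = train_perceptron(faces, labels)
print(predict(w, b, (0.7, 0.8)))   # an unseen upturned mouth → 1
```

No one ever wrote an "if corners upturned" rule here; the weights that encode it fall out of the training examples, which is the point the paragraph above is making.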

The rise of “easy to use” neural network libraries, the stunning results of experiments like Deep Dream and the proven reliability of optical character recognition at banks make neural networks a really interesting technology to watch as they become incorporated into more creative concepts and applications.

Creativity gets automated

Todd Lawson, CCO, Dashboard

Automated creativity is changing a digital creative’s role. Universally accessible big data, app development and faster cross-platform server-side coding platforms have allowed us to work with more automated processes in real time. Early systems are already letting retail clients create online campaigns from customizable experiences and automated media builders, as well as use the dynamic built-in testing of multiple versions offered by many SaaS companies.

Right now, these systems rely on agencies to create large libraries of completed assets such as banners, social ads, videos or mobile versions. We spend our grunt hours and client money manually creating dozens of them, each with alternate messaging.

What we’re starting to move towards is true real-time content creation: libraries not filled with finished singular assets, but a collection of sub-components that dynamically form to create the finished ad unit or experience. Not the kind of variety we see in the latest media-serving case study, but true user data-based experience delivery with little need to monitor or update manually.

Say Toyota set up every dealer franchise website and its media hosting on a single platform. It could connect to a trusted auto feed to access every car image and specification, and build out an initial library of individual, brand-approved styles and assets. Then, coded logic would change each user’s experience in real time, based on their interactions at the micro level: vehicle trim based on the colours they research; button sizes, headlines and copy compositions that grabbed their attention; offer hierarchy and types of embedded content; even decisions about what not to show. It’s similar to how current re-marketing display ads offer up a link back, but far more granular, reaching into the marketing message’s DNA.
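The "coded logic" described above amounts to rules that assemble an ad unit from sub-components using a user's tracked interactions. Here is a minimal sketch of that idea; every field name, component and rule is invented for illustration (this is not Toyota's or any real platform's schema), and a production system would sit behind a real data feed and a much larger component library.

```python
# Hypothetical sub-component library keyed by researched colour.
COLOUR_TRIMS = {
    "red": "Sport trim, Barcelona Red",
    "blue": "XLE trim, Blueprint",
}

def assemble_ad(user):
    """Pick hero image, headline and offer from interaction history."""
    ad = {}
    # Vehicle trim based on the colours the user researched.
    colour = user.get("most_viewed_colour")
    ad["hero_image"] = COLOUR_TRIMS.get(colour, "Base trim, neutral silver")
    # Headline style based on what previously held their attention.
    if user.get("clicked_spec_sheets", 0) > 2:
        ad["headline"] = "Every spec, side by side."
    else:
        ad["headline"] = "Find the one that fits."
    # Offer hierarchy: lease-first for returning visitors.
    ad["offer"] = "lease" if user.get("return_visits", 0) > 1 else "finance"
    return ad

print(assemble_ad({"most_viewed_colour": "red",
                   "clicked_spec_sheets": 3,
                   "return_visits": 2}))
```

The key design point is that no finished banner exists anywhere; the unit is composed per user at request time, which is the shift from asset libraries to sub-component libraries described above.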

A creative team’s role won’t be devising a single clever campaign, but better optimizing the input and output of content creation systems with a human touch. It means shaping results based on live consumer data and creative intent, not creative mandate.

Intelligent ads and product design

Marc Cattapan, technology and UX director, Grey Canada

For Volvo’s “6 Billion Hours” campaign, Grey Canada delivered pre-roll using a set of rules designed to correlate a vehicle feature to the relevancy of the YouTube video being watched. We strategized against popular video uploads and searches provided by Google’s DoubleClick team to identify which pre-roll would play, based on keywords, video title and description.
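That kind of rule set can be pictured as a table mapping video-metadata keywords to the most relevant creative. The sketch below is a hedged illustration of the general technique only; the keywords and creative names are hypothetical, not the actual "6 Billion Hours" rules.

```python
# Hypothetical keyword rules: each maps a set of trigger words to a
# pre-roll creative highlighting a related vehicle feature.
RULES = [
    ({"snow", "winter", "ski"}, "awd_preroll"),
    ({"family", "road trip", "camping"}, "cargo_space_preroll"),
    ({"city", "parking", "commute"}, "park_assist_preroll"),
]

def pick_preroll(title, description, default="brand_preroll"):
    """Return the creative whose keywords best match the video metadata."""
    text = f"{title} {description}".lower()
    best, best_hits = default, 0
    for keywords, creative in RULES:
        hits = sum(1 for kw in keywords if kw in text)
        if hits > best_hits:
            best, best_hits = creative, hits
    return best

print(pick_preroll("Epic ski trip", "Winter snow driving vlog"))
```

Because the mapping is fixed in advance, relevancy is only as good as the authored rules, which is exactly the limitation the next paragraph describes.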

Setting up and testing the rules of engagement absorbed many hours, and judging their relevancy was largely assumptive, since we had no way of knowing whether a visitor was in the market for a new car or whether the vehicle met that visitor’s current transportation needs.

As AI becomes more robust and commonplace in the advertising process, those same data sets used in the Volvo pre-roll will be less prescriptive and more fluid. The customization of the ads will be more instantaneous and personalized.

The ad will be manipulated to match the user’s propensity to search for specific colours of a car, whether they currently own a vehicle, their immediate family size (to determine the cargo capacity), what their credit rating allows them to afford and whether they like to spend leisure time in wilderness or urban environments.

Getting to that stage of intelligent marketing will hinge on the development of artificial neural networks. As technology becomes more integrated into our lives, we provide neural networks with more data, and these algorithms become more refined. With more data they will anticipate and participate in conversations with empathy and timing, knowing which cues lead into the next topic, at which point to interject key messaging and which topics to steer clear of.

Google is already integrating neural networks’ ability to learn into its speech and facial recognition software, but the potential of neural networks’ AI goes further. Advertising will be closely tied to the manufacturing process, as the “on demand” economy forces the build and design of products to respond to when and how a consumer reacts to ads. The cost of producing inventory will plummet, but that doesn’t mean media spend will increase, as a single ad can be dynamic and far-reaching. Suddenly, ads will have the ability to target consumers with the intimacy of a whisper, instead of trying to shout louder amongst the noise.

Want more Next Big Things? Check out these future-looking ideas in data and mobile and smart devices.

VR photo: Stefano Tinti; Imaging: Jam3