By Nick Paget
In April, Meta’s chief technology officer Andrew Bosworth told Nikkei Asia that the company’s top brass are redirecting their attention toward using generative AI to create ads on Meta’s platforms. Weeks later, while presenting the company’s Q1 results, Mark Zuckerberg excitedly pointed to the possibilities for AI to both create and optimize ads.
I believe this has the potential to be catastrophic for our industry and for civilization at large. A system that combines Meta’s behavioural data, its content distribution algorithm and generative AI represents a truly terrifying power to shape human behaviour, swing the outcomes of elections and literally alter the course of human history.
Think I’m exaggerating? Cast your mind back to 2018. People were outraged to learn that Cambridge Analytica had combined Facebook’s ad targeting systems with huge quantities of ill-gotten personal data to show voters in the 2016 U.S. presidential election ads tailored to their personalities. Famously (or infamously), neurotic Americans saw ads that framed the Second Amendment as a means of ensuring their personal security, conscientious Americans saw ads that framed it as an important part of their heritage, and so on.
Make no mistake: the planned integration of Facebook’s data and ad network with generative AI will make Cambridge Analytica look positively paleolithic. Meta’s algorithm already serves up an endless stream of outrage-inducing content from around the internet that puts people in the optimal frame of mind to respond to ads’ messages. And now Meta is proudly supercharging this unprecedented tool of mass manipulation with generative AI.
If Meta creates an ad-producing generative AI, gives it access to their petabytes of behavioural data and lets it loose on their algorithmic ad network, then as our sons and daughters scroll through their feeds they won’t just see one of five ads that frame a product, service or social issue in terms of one of the “Big Five” personality traits. They’ll see totally unique creative, made just for them.
ChatGPT was trained on roughly 570 GB of internet text representing over 300 billion words. A company of Meta’s size and resources could train its AI on every publicly available study, article, journal and paper ever written in the fields of psychology and sociology, as well as a comprehensive database of the most effective ads, stories, speeches and modes of persuasion in the history of communication. Ads produced by such an AI would reflect the viewer’s specific personality, their unique proclivities, their personal tastes and purchase history. Obviously.
But such ads would also take into consideration the viewer’s weaknesses, their neuroses, their insecurities, their current state of mind, their mood, the weather, the times of day at which they’re most open to new ideas, which kinds of faces they prefer on Wednesdays, which colours they like most in the afternoons, what tone of voice they prefer on days when they’ve been chatting with particular friends, and a thousand other surprisingly compelling factors that the people who programmed the Black Box AI never actually taught it to consider.
Before you object that ads experienced in common can be more powerful than bespoke collateral, know that the ads produced by this generative AI will be tested on thousands of “lookalikes” in real time. The most effective ones will then be selected, optimized and customized – the actors, the lighting, the music, the wording – to be perfectly pitched and optimally compelling both for each viewer and for the critical sub-section of their network with whom they are most likely to share the experience.
I’ll be the first to admit that all of the above sounds like the predictable invective of a 21st century Luddite railing against the new hyper-effective means of production that’s coming for his job. But while I am a marketer, I’m also a father. And I worry that if we don’t raise the flag on this soon, a totally unprecedented power to reshape our individual and collective psyches will be unleashed, sold to anyone who can afford it and then swiftly normalised, just like Cambridge Analytica’s modular content model was.
So what do we do about it?
As advertisers who leverage data, segmentation, targeting and timing to create highly effective means of persuasion, it may seem self-serving or even hypocritical for us to be the ones who draw a bright line between the forms of persuasion that constitute fair play and others that represent insidious manipulation.
But it is precisely because we are human advertisers that it falls to us to take the impacts that ads have on culture into consideration. Because we can be very confident that an AI created by Meta – the company that has repeatedly put growth and profits ahead of the safety, mental well-being and security of both users and society at large – will not.
It’s because we know what goes into making ads that it behooves us to alert the uninitiated to the following fact: ads that bring to bear quadrillions of data points about audiences, billions of data points about individuals, and millions of data points about the art of persuasion are truly unprecedented and have the potential to overwhelm the defences of even the most savvy consumers of media, let alone those of their children.
So before Meta starts selling the services of an amoral, nearly omniscient Black Box creative AI to the highest bidders, I propose that, as professional contributors to the cultural conversation, we be the first to ask the question: How safe will our cultural waters be if Meta provides its clients with dynamite?
Nick Paget is VP and ECD at LG2.