“AI is not going to take your job. The person who uses AI is going to take your job…. and when AI increases the productivity of your company, what happens? You hire more people.”—Jensen Huang, CEO of NVIDIA, speaking at Columbia Business School in 2024
We’ve been here before. In 1779, Ned Ludd destroyed two stocking frames in protest at the automation of textile knitting. From that act, a movement was born, along with a term still used today: Luddite. A byword for those who resist technological advancement, the term is once again being heard as AI usage grows.
For humans, new technology can feel challenging. It must be learned, behaviours must adapt, and it casts the shadow of 'Will it take my job?'
Marketing is somewhat conflicted. On the one hand, it is highly creative, yet on the other, heavily engineering-driven. So, can AI be the ultimate marketing solution? If we can automate processes, cut through the clutter and complexity, and focus more on what we believe we should be doing as marketers, what is there to worry about?
As the great Nick Cave put it, “As far as I know, algorithms don’t feel. Data doesn’t suffer.” It takes human capacity to cut through and see the idea—the creativity. To contextualise: Picasso completed Guernica in 1937—a visceral human response to the bombing of a small Spanish town. Could AI tools like Midjourney have created such work? What about the works of Shakespeare, Miller, or Atwood? Music from Mozart, Reich, The Beatles, or Taylor Swift? How would they have emerged without human experience, observation, and knowledge?
For a modern example, look at the cover of Charli XCX’s Brat. That green (HEX code #89CC04)? That font (Arial)? The treatment of that font (stretched, low resolution)? It took six months of creative iteration to get the exact colour, the exact font size, quality, and position to evoke the desired reaction.
Given the apparent importance of human nuance, we set out to explore the role of AI in marketing and how marketers should use it by examining three areas of tension:
- Creativity versus efficiency
- Authenticity in voice versus the risk of bland artificiality
- The desire to democratise knowledge versus the risk that such access amplifies disinformation
Let's explore each one.
Creativity versus efficiency
Sasser and Koslow (2012) describe marketing and creativity as an organic process, “like a delicate flower to be nurtured”. Creativity develops over time; it finds and feels its way based on context, emotions, and responses to circumstances. Marketing cannot thrive without it.
So, why might marketers turn to creativity automation? Bluntly, marketing has changed. While nine out of 10 marketers believe marketing should involve the freedom to create game-changing ideas, research indicates they spend upwards of 16 hours per week on routine tasks (HubSpot), such as raising purchase orders or dealing with procurement, and a further 14 hours processing data (Treasure Data). In the blink of an eye, more than 30 hours, roughly two-thirds of a marketer’s week, are consumed by anything but creative work.
Every marketer needs headspace to think. Here, new areas for brands to play are found, points of distinctiveness and differentiation are uncovered, and opportunities to stand out from competitors are defined. This space is critical for brand growth. Emotion, fame, and impact—factors consistently shown to drive higher returns from advertising investment—rest in the lap of creativity.
In Lemon (2019), Orlando Wood describes how advertising has turned “sour”. Ads have become “left-brained”: Rational, short on stories, and focused on function. They follow a very formulaic style. It is easy to see how brands have arrived at this: Such ads are fast to produce, cost less, and require minimal creative input.
Not all ads are like this. Recent Axe/Lynx campaigns, for example, are long on story and emotion. However, in a time-pressured world, coupled with the existence of AI, it is easy to see how marketers might lean into left-brained, formulaic AI-driven approaches: fast, low-cost, with little creative input.
[Images: Pepsi ad, 2022 (left-brained); Axe/Lynx deodorant, 2024 (right-brained)]
This would be a misuse of AI; efficiency at the cost of creativity. A shift towards Morgan and Field’s “dull” advertising—ads that have been shown to require higher excess share of voice (eSOV) to drive growth, leading to inefficiency. The cost of dull advertising is high.
All is not lost. AI is finding a role in driving creativity. The Cadbury ‘MyAd’ featuring Shah Rukh Khan was only possible because AI enabled legitimate deepfake versions of a core advert. Tools like Scibids and Adobe GenStudio can remove much of the heavy lifting, reducing the manual, tedious work that has taken time and resources away from creativity and discovery.
Authenticity versus artificiality
Synthetic data is the shiny new toy in the marketing world, offering the ability to rapidly scale real-world data without the time and cost of traditional methods. It automates the data collection process—a dream?
While much evidence points to synthetic data replicating the real world effectively, studies also indicate that this may not always be the case. The challenge is that synthetic data allows work to be done faster, but without the knowledge wrought by human expertise. The risk for marketers? They take what they see for granted.
“Those aren't your memories, they’re somebody else’s,” Deckard tells Rachael in Blade Runner. A hard truth even for a synthetic being. Large language models (LLMs) face a similar challenge: They only 'learn' from what they have been given. The 'memories' are not real.
LLMs do not discern bias in their training data. They are machine-learning, pattern-recognition tools; they do not learn or build knowledge in the human sense. They produce responses based only on the information they have been given. Feed a model significantly biased data, positive or negative, and it will treat that bias as the truth.
If all brands work from the same ‘memories’, nuance disappears. For example, GWI recently made their data available via an API. Applying a generative AI tool over this would yield rapid insights, but a generic dataset produces generic results. In marketing, this leads all brands to converge toward the same point.
All models have biases. AI models are particularly WEIRD: built in Western, educated, industrialised, rich, and democratic countries. They do not reflect the majority of the world. When using these models in APAC, it is essential to critically evaluate the data used to build out solutions. How does a WEIRD model perform when providing solutions for Japan, India, Thailand, or even Australia?
The cultural edges and nuances are removed. Prompting a model to think like a specific culture only works within the boundaries of what it was trained on. Without local context, or if built with the cultural perspective of another region, a model’s output will be biased or stereotypical, not truly local.
We can learn from programmatic advertising. A solution lies in calling on bespoke datasets, augmenting core data to bring nuance back in, and using AI to fine-tune. The larger challenge in marketing will be developing a new cadence for bespoke data to ensure synthetic data continually finds new edges.
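To make that pattern concrete, here is a minimal sketch of layering bespoke, market-specific notes over a generic brief before it ever reaches a model. The brief, the local context records, and the build_prompt helper are all illustrative assumptions, not a description of any particular tool; a real implementation would depend on the data platforms and LLM a team actually uses.

```python
# Minimal sketch: augmenting a generic brief with bespoke, market-specific context
# before it is sent to an LLM. Dataset contents and helper names are hypothetical.

GENERIC_BRIEF = "Summarise attitudes to deodorant brands among 18-24 year olds."

# Bespoke notes gathered from local teams or primary research (illustrative only)
LOCAL_CONTEXT = {
    "JP": ["Convenience-store distribution shapes discovery.",
           "Subtle fragrance is preferred over strong projection."],
    "IN": ["Regional-language creative outperforms English-only copy.",
           "Cricket sponsorship drives outsized recall."],
}

def build_prompt(market: str) -> str:
    """Combine the generic brief with market-specific nuance so the model
    is not answering purely from its generic 'memories'."""
    notes = "\n".join(f"- {n}" for n in LOCAL_CONTEXT.get(market, []))
    return (
        f"{GENERIC_BRIEF}\n\n"
        f"Ground your answer in the following local context for {market}:\n{notes}"
    )

if __name__ == "__main__":
    print(build_prompt("JP"))
    # The prompt would then be passed to whichever LLM the team uses;
    # that call is omitted here because it depends on the chosen provider.
```

The point is not the tooling but the pattern: a generic model, a bespoke local overlay, and a regular cadence for refreshing that overlay so the synthetic layer keeps finding new edges.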
Democratisation versus disinformation
For three decades, marketers have sought ways to deliver effective knowledge management solutions. Even with the best examples, it has taken experts to manipulate structured data and turn it into something meaningful. The future of data analysis lies in the democratisation of understanding.
Generative AI models are driving this change. Sitting across a dataset, they can extract meaning, tabulate, and chart in seconds—work that previously required a brief and an extended turnaround time. More interesting is the ability to update content on the fly as data refreshes, delivering insights in real time.
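As a rough illustration of that workflow (not a description of any specific product), the sketch below recomputes a summary table each time the underlying data refreshes; a generative model would then turn the table into a written insight. The campaign figures are invented for the example.

```python
# Rough illustration of insight-on-refresh: recompute a summary whenever the
# underlying data changes, then (optionally) hand it to an LLM for narrative.
# The campaign figures below are invented for the example.
import pandas as pd

def summarise(df: pd.DataFrame) -> pd.DataFrame:
    """Tabulate spend and conversions by channel, the kind of output a
    generative model could then turn into a written insight."""
    return (df.groupby("channel")[["spend", "conversions"]]
              .sum()
              .assign(cost_per_conversion=lambda t: t["spend"] / t["conversions"]))

data = pd.DataFrame({
    "channel": ["social", "search", "social", "ctv"],
    "spend": [1200.0, 800.0, 450.0, 2000.0],
    "conversions": [60, 55, 20, 35],
})

print(summarise(data))
# On each data refresh, summarise() runs again and the narrative layer
# (an LLM prompt built from this table) is regenerated: the "real time"
# insight described above, still subject to the human checks that follow.
```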
But caution is necessary. If AI models are taken as omniscient and trusted without question, we lose the power of human judgement. This undermines what marketers are trained to do: Question, confirm, explore, and find the edges that provide opportunities.
With AI models proven to create disinformation, human checks will be required for years to come. Not because the input data is wrong, but because there is enough data for the model to build false patterns.
So, the human touch remains essential. Whether the output is an insight, an image, or a video, verification processes must be implemented to avoid accepting it blindly. Removing AI hallucination, bias, or manipulation requires the hand of a knowledgeable, capable, and questioning marketer. Does it slow the process? Perhaps. But it remains faster than relying on teams of experts.
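What might such a check look like in practice? One lightweight option, sketched below under assumed rules, is a gate that routes low-confidence or high-stakes outputs to a marketer before they are published or acted on. The confidence score and the list of high-stakes topics are assumptions for illustration, not a prescribed method.

```python
# Minimal human-in-the-loop gate for AI-generated outputs. The confidence
# threshold and the notion of "high-stakes" topics are illustrative assumptions.
from dataclasses import dataclass

HIGH_STAKES_TOPICS = {"pricing", "health claims", "competitor comparison"}

@dataclass
class AIOutput:
    text: str
    topic: str
    model_confidence: float  # assumed to be reported by the generating system

def needs_human_review(output: AIOutput, threshold: float = 0.8) -> bool:
    """Route anything low-confidence or high-stakes to a marketer for checking."""
    return output.model_confidence < threshold or output.topic in HIGH_STAKES_TOPICS

drafts = [
    AIOutput("Gen Z engagement up 12% on short-form video.", "channel performance", 0.92),
    AIOutput("Brand X outperforms its nearest rival on trust.", "competitor comparison", 0.95),
]

for draft in drafts:
    status = "send to human review" if needs_human_review(draft) else "auto-approve"
    print(f"{draft.topic}: {status}")
```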
AI and the human touch
As we navigate the evolving landscape of AI in marketing, we are reminded of both the opportunities and risks that technological advancement presents. From the Luddites to today’s AI sceptics, the fear of automation replacing human ingenuity remains.
Yet, the real danger lies not in AI itself, but in how we choose to integrate it. Marketers must resist the temptation to prioritise efficiency over creativity. AI should handle the mundane, allowing us to focus on what machines cannot replicate: human insight, emotion, and creativity. Knowledge should be democratised, but delivered in an authentic voice, with nuance and local context injected to avoid homogenisation.
Ultimately, AI should be a tool that amplifies human potential, not a replacement for it. As we continue to adapt, the mandate is clear: harness AI thoughtfully to cultivate a future where technology and humanity co-create, ensuring that the art of marketing thrives in the digital age.
Ben Tuff is the chief product officer for UM Asia-Pacific.