David Beckham's Malaria No More campaign has raised fresh concerns around synthetic content. Digitas Chief Product Officer, Rafe Blandford, explores how brands can navigate this landscape.

The new Malaria No More campaign, featuring David Beckham speaking in nine languages about the need to stamp out the disease, was a strong example of technology’s potential to do good in the world.

Yet there’s deep unease in some quarters about the automation that enabled the charity to synthesise Beckham’s voice. He isn’t, after all, fluent in all of the languages featured in the campaign film.

This is the latest high-profile example of so-called “deepfake” technology (in this case provided by UK firm Synthesia), enabling a brand or organisation to replicate reality. Politicians on both sides of the Atlantic have warned about the damage that could ensue if it’s deployed as a propaganda weapon. And yet, despite all the ethical and legal issues that will undoubtedly arise, the creative potential for advertisers and their agencies is strong.

There’s no doubt in my mind that 2019 is the year of synthetic content. But what are brands’ responsibilities when using this emerging technology - are they in danger of perpetuating the cynicism that surrounds fake news, and of alienating their audiences?

First, a bit of context. Synthetic content is obviously not new. After all, we’ve talked about “Photoshopping” images for decades, and it’s been used to satisfying effect in Hollywood films – from Brad Pitt ageing in reverse in The Curious Case of Benjamin Button through to the digitally de-aged stars of more recent blockbusters. But its adoption has accelerated, allowing developers to burst through the “uncanny valley” and create convincing synthetic images, video and audio for far less money, and with much less effort, than previously.

That’s because we’re in a technology race that has seen the emergence of generative adversarial networks (GANs), systems featuring two neural networks working in competition - one as the generator, the other as the discriminator - until they produce something detailed and convincing. These networks learn in a way loosely inspired by the human brain, but far faster and at almost unlimited scale.
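For readers curious about the mechanics, here is a minimal toy sketch of that adversarial loop in Python using PyTorch. It is purely illustrative – the tiny networks, the two-dimensional “real” data distribution and all the hyperparameters are my own assumptions, not part of any production deepfake system:

```python
# Toy generative adversarial network (GAN): two networks in competition.
# The generator turns random noise into candidate samples; the
# discriminator learns to tell those fakes apart from real data.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(
    nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" data: a cloud of 2-D points we want the generator to imitate.
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, 2.0])
    fake = generator(torch.randn(64, 8))

    # Discriminator step: learn to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to fool the discriminator into labelling fakes 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Run long enough, the generator’s output drifts towards the real distribution; scale the same contest up to millions of parameters and image or audio data, and you get convincing synthetic faces and voices.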

The resulting improvements in the quality and delivery times of simulated audio, video, sounds and text are remarkable. In audio, for instance, advances from the likes of Lyrebird and Baidu’s Deep Voice mean creators can now base highly realistic and extensive simulations of a human’s voice on just two to three minutes of captured audio (the previous standard was 30 minutes or so).

For advertisers, the applications are extensive. If you’ve got Judi Dench voicing your commercial, you don’t need her in a studio for hours: you can capture a sample in just a few minutes and then create and develop the script however you like.

The scope will widen along with the growth in computational creative - enabling brands to issue personalised, machine-created messages from celebrities or brand mascots to individuals, and to create sonic branding and voiceovers much more easily and consistently.

But maintaining trust will be vital. The now-widespread assumption that a great deal of the news and content around us is fake presents a real challenge for marketers. Yet if it’s difficult for people to detect whether something is real, the importance of a trusted platform only grows – advertisers using synthetic content need to own, or have access to, secure spaces in which to connect and talk with their audiences.

And the rise of this content will only make the strength of a brand matter more. Brands that are trusted, on channels that are trusted, will be the most powerful in a future where the uncanny valley is truly closed and humans can no longer spot the difference between “real” and “synthetic” content.

That’s when brands must embrace one of their most important responsibilities: acknowledging the difference between entertainment and manipulation when using synthetic content. No-one will really mind if an advertiser recreates reality to entertain an audience, to give them an experience to enjoy. But manoeuvring people into parting with money or signing up for a new service is a different matter. Then you really have to ask: will people be comfortable interacting with a synthetic creation?

The answer’s already out there. It’s about being honest and upfront. We’ve seen it with Google’s Duplex assistant, which can make calls on a user’s behalf to book appointments or order products. Its initial launch provoked an ethical storm before Google announced that it would identify itself as an AI when contacting a person on the phone.

Beyond the obvious legal and anti-fraud concerns that companies are likely to face, it’s the ethical issues that marketers will wrestle with, and the challenge will only grow as we enter an arms race set to intensify over the next decade. But the brands that have built the highest levels of trust will have the greatest licence to push the boundaries with synthetic content.