This Program Makes It Even Easier to Make Deepfakes

vice.com/en_us/article/kz4amx/fsgan-program-makes-it-even-easier-to-make-deepfakes

Samantha Cole, August 19, 2019

In September, we saw the launch of Generated Photos, a collection of 100,000 images of AI-generated faces for use in stock images. Another company, Rosebud AI, is now taking that concept a step further, with faces that aren't just part of a static, stock database, but customizable and personalizable. Users will be able to algorithmically place any face onto any body in their collection.

Maybe you're thinking, another AI face generator? Yes, another AI face generator. But this time, you'll be able to upload any face into a system that places it onto another person's stock-image body.

Rosebud AI, a San Francisco-based synthetic media company, launched Generative.Photos this week with a Product Hunt page and demo site. The demo only uses its pre-loaded models for now, but includes placeholders for uploading your own photos and a signup for a user waitlist.

"Generative.photos is a first step in our synthetic stock photo and API offering, which will eventually allow users to edit and fully synthesize visual content with an intuitive interface," Lisha Li, the founder of Rosebud AI, wrote on Product Hunt. "We focused on bringing forth a way to diversify stock photo content since it was a need we heard voiced by stock photo users. All the faces in our 25k photo collection are not of real people."

If this diversity line sounds familiar, that's because it's also what Generated Photos claimed it was setting out to fix. Li also says the company wants to give "consumers the power to choose an advertising model that they can relate to," with more diverse models. She wrote that what makes Generative.Photos different from other attempts is the context: it gives a fictitious, generated face a stock body and background, and adjusts it to whatever skin color or gender an advertiser or marketer wants.

Li told Motherboard that Rosebud AI's tools are still in closed beta. But releasing something into the world before establishing public terms of use, or considering any kind of guidelines or prevention measures for the tool's potential for malicious use, is unfortunately not uncommon. We see it again and again with AI programs hustled out into the wild before any ethical guidelines are established: deepfakes, DeepNude, Generated Photos.

In addition, Generated Photos and Rosebud AI are allowing people to create their own realities, letting companies demonstrate artificial diversity where there actually isn’t any. Rather than real diversity, we get algorithmically generated, customizable stock images.

"It’s pretty harmful and a major oversight to launch any kind of project where users can add content to a repository and not check and verify if that content is 'harmful' or not," machine learning designer Caroline Sinders, a fellow with Mozilla Foundation and the Harvard Kennedy School who studies biases in AI systems, told Motherboard. "It’s even more of an oversight and downright neglectful not to have policies that define 'harm' in terms of contention and actions. In 2019, this is a major issue for a company to not have these things."

Update: Following publication, Li told Motherboard that Rosebud AI's self-serve tool is not open yet, as it is still in closed beta, and will require users to sign terms of service that reflect a code of ethics before using the beta version of the tools.

In anticipation of the moon landing in 1969, William Safire, a journalist and speechwriter, prepared a speech for then-President Richard Nixon, in the event that things went horribly wrong in space. Titled “In Event of Moon Disaster,” the speech Safire penned was a solemn and poetic elegy, one that never had to be delivered. In a new immersive installation, In Event of Moon Disaster, shown this past weekend at the International Documentary Film Festival Amsterdam, a simulated Richard Nixon delivers Safire’s speech from an alternative moon landing history, using deepfake audio and video technologies.

The deepfake Nixon speaks to installation viewers in a faithfully recreated 1960s-era living room, complete with a vintage television set, wallpaper, furniture, and the decade’s TV ads. A work of science fiction, In Event of Moon Disaster is both a commentary on the threat of political deepfakes and a demonstration of the artistic potential of synthetic media.

“The moon landing is one of the most memorable historic events, at least within the last 50 to 100 years, so what would be interesting is an alternative history of this seminal event,” one of the installation’s co-creators, Francesca Panetta, a journalist and fellow at the MIT Center for Advanced Virtuality, told Motherboard. “Just as people say, ‘Where were you on 9/11?’ or ‘Where were you when JFK was shot?’, people ask, ‘Where were you on the day of the Moon Landing?’ What happens if we use deepfake technologies to provide this alternative history, but using a real documentary archive piece, which is this Bill Safire speech written for Richard Nixon if the astronauts had not been able to make it back to Earth?”

Most deepfake videos are used for creating nonconsensual porn, not fake news. But the worry among journalists, politicians, and other observers is that this technology could be used to influence an election, like making a politician seem to say or do something that did not actually happen.

As Panetta says, for a long time, people have said history is written by the victors, or that history is fluid; but now, with deepfake technology, history is even more fragile. So, rather than exploring deepfakes within the context of current news, the team was thinking about what it meant to retroactively rewrite a past event.

“It was a lot harder than the popular perception of deepfake creation is,” co-creator Halsey Burgund, a sound artist and fellow at the MIT Open Documentary Lab, said. “This is a two-part deepfake creation. One part is the visuals of Nixon speaking, and then his synthetic voice.”

To synthesize Nixon’s voice, the team worked with a Ukrainian company called Respeecher. Respeecher uses speech-to-speech synthetic voice production, a process in which they input into their AI model a speech by a voice actor, which then outputs the same speech with the same performative components—pacing, inflection—but with the target person’s voice. In this case, Nixon’s.
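Respeecher's actual models are proprietary, but the core idea the paragraph describes, keeping the actor's performance while swapping in the target speaker's identity, can be sketched as a toy transformation. Everything here (the `Frame` fields, the `speech_to_speech` function) is illustrative, not Respeecher's API:

```python
# Toy sketch of speech-to-speech voice conversion: the performative
# components (pacing, inflection) come from the voice actor's take;
# only the speaker identity (timbre) is replaced with the target's.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    pitch_hz: float    # inflection: fundamental frequency, from the actor
    duration_s: float  # pacing: how long this frame lasts, from the actor
    timbre: str        # speaker identity: the part that gets swapped

def speech_to_speech(source: List[Frame], target_timbre: str) -> List[Frame]:
    """Keep the source performance (pitch, timing); replace only the voice."""
    return [Frame(f.pitch_hz, f.duration_s, target_timbre) for f in source]

# The actor records Safire's speech; the model re-voices it as Nixon.
actor_take = [Frame(120.0, 0.30, "actor"), Frame(95.0, 0.45, "actor")]
nixon_take = speech_to_speech(actor_take, "nixon")
```

In a real system both the "performance" and the "identity" are learned latent representations rather than explicit fields, but the separation of concerns is the same: the actor contributes everything except the voice itself.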

For the visuals, the team worked with Canny AI (the same team that worked on the deepfake Mark Zuckerberg video). They filmed an actor reading the speech, and selected target videos of Nixon they wanted to use.

As for the installation itself, the speech is just the last two minutes of a six-minute video, experienced inside a 1960s-era living room. The installation is meant to simulate the day of the moon landing, as if viewers are crowding around the television as people did on that historic day.

“Since the aim is creating a more discerning public around deepfakes, because forensic technologies that can automatically detect them aren't always available to end users, there is a newspaper within the setting that actually explains how deepfakes are made and what the issues around them are,” said D. Fox Harrell, director of the MIT Center for Advanced Virtuality. “And there is a lot of discussion around convolutional neural networks [like the one used by Respeecher] and algorithmic bias, and how these techniques can mislead. And I think that’s something important about this project.”

Panetta said that the team is currently making a digital version of In Event of Moon Disaster, which they plan on releasing to the public in the spring of 2020.
