James Earl Jones approves using recordings to recreate Darth Vader’s voice with AI

Startup Respeecher uses sound clips to clone the actor’s voice

James Earl Jones has been the iconic voice of Darth Vader since the start of Star Wars, but at 91, it seems he is ready to let AI take over. Jones has licensed the rights to his archival voice recordings, allowing Ukrainian startup Respeecher to use AI technology to recreate the sound of his voice for Disney Plus’s Obi-Wan Kenobi.

To do this, Respeecher uses sound clips to clone an actor’s voice, allowing a studio to record new lines without the actor present. Matthew Wood, Skywalker Sound’s supervising sound editor, says he pitched the option to Jones after the actor mentioned he was considering retiring from the role of Darth Vader. Once Jones gave Lucasfilm permission to use an AI-generated voice, the studio commissioned Respeecher to produce output matching the voice of Jones, who first voiced the dark side villain 45 years ago, for Disney Plus’s Obi-Wan Kenobi.

That’s why you might notice that Vader sounds much as he did in the earlier films, rather than like Jones’s actual voice in 2019’s The Rise of Skywalker. Even with AI supplying Vader’s voice, Wood says Jones acts as a benevolent godfather, still helping to guide the studio’s portrayal of the villain.

Matthew Wood, the supervising sound editor, received Respeecher’s transmissions from Ukraine.

What Respeecher could do better than anyone was recreate the unforgettably menacing way Jones spoke decades ago. Wood estimates he has recorded the actor at least a dozen times, most recently for a brief line of dialogue in 2019’s The Rise of Skywalker. “He had mentioned that he was looking to wind down this particular character,” explains Wood. “So how do we move forward?” When Wood finally pitched Jones on Respeecher’s work, the actor approved the use of his archival voice recordings to keep Vader alive through artificial means, fitting, perhaps, for a half-mechanical character. Jones is credited with guiding the performance on Obi-Wan Kenobi, and Wood describes his contribution as that of “a benevolent godfather”: they update the actor on their plans for Vader and heed his advice on how to stay on track.

Before the invasion, there was an almost constant flow of information between Wood, Deborah Chow (the director and showrunner of Obi-Wan Kenobi), and the Respeecher team. “For a character like Darth Vader, who might have 50 lines in an episode, I might have a back-and-forth of almost 10,000 files,” Wood says. Much of that was dialogue changes and later tweaks. As the Russian attack loomed, Wood says, he began to step back. He remembers thinking: “I don’t need to get back to them, while they’re hearing the air-raid sirens, to let them know that this particular part is a little different.” But the attitude of the Respeecher team, he says, was: “Work, work in the face of adversity, persevere”.

Alex Serdiuk, CEO and co-founder of the voice cloning company, knows that creating the voice of Darth Vader for a TV show is not a life-or-death venture. Still, he’s proud of their Obi-Wan Kenobi contribution and wants the world to know that Ukrainians helped make this particular trip to the galaxy far, far away possible, even under horrific circumstances. “We are creating workplaces for people, we are creating jobs, we are giving them money, we are contributing to the Ukrainian economy, and that is very meaningful,” he says. “But also, hopefully, more people will hear about Ukraine – our tech community, our start-ups – because of it.”

Respeecher’s work continued, mostly on still-secret projects.

This isn’t Respeecher’s first time working with Lucasfilm, either. The startup also generated a voice for the younger version of Luke Skywalker in Disney Plus’s The Mandalorian and The Book of Boba Fett. In a press release, Respeecher explains that it used clips of Mark Hamill from early radio shows, interviews, ADR sessions, and dubs to digitally recreate Skywalker’s voice.

Other AI text-to-speech tools, such as Voicemod, Veritone, Descript, and Resemble AI, have also emerged as potential ways for celebrities and creators to digitally recreate their voices. As some observers have pointed out, the trend could catch on among celebrities who want to boost their income with minimal effort by cloning and renting out their voices. Or, as in Jones’s case, it could help preserve the voice of one of cinema’s most notorious villains.

AI and art, a difficult mix?

While on the audio side the twist may strike some people as interesting, a big gray area remains when it comes to creative work.

Take visual art for example.

The arrival of widely available image synthesis models, such as Midjourney and Stable Diffusion, has caused an intense online battle between artists who view AI-assisted works as a form of theft and those who enthusiastically welcome these new creative tools. Established artist communities are at a crossroads: they fear that non-AI works will be drowned out by an unlimited supply of AI-generated works, even as these tools grow popular among some of their members.

In banning AI art from its art portal, Newgrounds wrote: “We want to keep the focus on art made by people and not flood the art portal with computer-generated art.” Fur Affinity cited concerns about the ethics of how image synthesis models learn from existing artwork, writing: “Our goal is to support artists and their content. We do not believe it is in the interest of our community to allow AI-generated content on the site.” These are just the latest moves in a rapidly evolving debate over how art communities (and art professionals) can adapt to software that can potentially produce limitless works of beautiful art at a pace no human working without such tools could match.

It’s no secret that image synthesis models like Stable Diffusion were trained, in part, on stock photography websites. With AI art now appearing on sites like Shutterstock, if future image models trained on pictures scraped from the internet end up learning from their own output, the future of art could in effect become very recursive.

But what does it mean to be able to generate any type of visual content, image or video, with a few lines of text and the click of a button? What will it be like when you can generate a movie script with GPT-3 and a movie animation with DALL-E 2? And looking deeper, what will happen when social media algorithms not only select content for your feed, but generate it? What will happen when, in a few years, this trend meets the metaverse and virtual reality worlds are generated in real time, just for you?

These are all important questions to consider. Some think that in the short term, this means that human creativity and art are deeply threatened. Perhaps in a world where anyone can generate any image, graphic designers as we know them today will be redundant. However, history shows that human creativity finds a way. The electronic synthesizer did not kill music, and photography did not kill painting. On the contrary, they have catalyzed new forms of art.

It is important to be mindful of the implications of automation and what it means for the humans who might be “replaced”. But that doesn’t necessarily mean fearing obsolescence. “Rather, the question we should be asking is what do we want from machines and how can we best use them for the benefit of humans,” says Cansu Canca, a research associate professor at Northeastern and founder and director of the AI Ethics Lab.

Concerns about AI’s incursion into art go beyond accusations of digital plagiarism. Derek Curry, a Northeastern associate professor of art and design, isn’t convinced that AI art will ever replace the creative work of humans. “By its very nature, the technology has its limits. It can’t produce anything it hasn’t already been trained on, so it’s impossible for it to create legitimately new things,” Curry explains.

Still, in what could be a first, a New York-based artist named Kris Kashtanova has received a US copyright registration for her graphic novel featuring AI-generated artwork, according to her Instagram feed.

“I got the copyright from the United States Copyright Office for my AI-generated graphic novel. I was open about how it was made and put Midjourney on the cover page. It wasn’t altered in any other way. Just the way you saw it here.

“I tried to argue that we own the copyright when we make something using AI. I registered it as a work of visual art. My certificate is in the mail, and I received the number and confirmation today that it was approved.

“My lawyer friend gave me this idea, and I decided to set a precedent.”

According to her announcement, Kashtanova framed her application by noting that the artwork was AI-assisted, not entirely AI-generated: Kashtanova wrote the story for the comic, created the layout, and made the artistic choices in putting the images together.

Source: Respeecher

And you?

What do you think of the impact of AI on audiovisual in general and on audiovisual art in particular?

What potential uses do you see with Respeecher’s technology?

Do you believe we should embrace technological advances in this area, refining them to avoid harmful impacts (economic, political, or even socio-cultural)? Or would you rather see this technology restricted to a select group of people who can study its impacts and correct them before it becomes more widely available?

See also:

Artist Receives First Known U.S. Copyright Registration for AI-Generated Artwork Amid Heated Online Debate Over Ethics of AI Art
