The Rise of AI Taylor Swift

AI Taylor Swift is unhinged. She calls up Kim Kardashian to complain about her “lame excuse for a man,” Kanye West. (Kardashian and West are divorced in real life.) She threatens to skip Europe on her Eras Tour if her fans don't stop asking her about international dates. She insults people who can't afford tickets to her concerts and swears an unusual amount. She is, in a word, rude.

But she can also be very sweet. She gives a vanilla pep talk: “If you're having a bad day, just know you're loved. Don't give up!” And she just loves the outfit you're wearing to her concert.

She is also a fan creation. Following tutorials posted on TikTok, many Swifties use a program to create hyper-realistic sound bites in Swift's voice and then circulate them on social media. The tool, whose beta was launched in late January by ElevenLabs, offers “Instant Voice Cloning”: upload an audio sample of a person's voice, and you can make it say whatever you want. It's not perfect, but it's pretty good. The audio has some tonal glitches here and there, but it tends to sound fairly natural — close enough to fool you if you're not paying close attention. Dark corners of the internet immediately used it to make celebrities say offensive or racist things; ElevenLabs said in response that it “can track all generated audio to the user” and could consider adding more guardrails, such as manually verifying each submission.

Whether it has done so is unclear. After I handed over $1 (a discounted price for the first month) to try out the technology for myself, my upload was approved almost immediately. The slowest part of the process was finding a clear one-minute audio clip of Swift to use as the source for my custom AI voice. Once the upload was approved, I could use it to create fake audio right away. The whole process took less than five minutes. ElevenLabs declined to comment on its policy or on the possibility of its technology being used to fake Taylor Swift's voice, but it provided a link to its guidelines on voice cloning. The company told The New York Times earlier this month that it wants to create a “universal detection system” in collaboration with other AI developers.

The arrival of AI Taylor Swift feels like a teaser of what's to come in a strange new era defined by synthetic media, when the lines between real and fake might blur into meaninglessness. For years, experts have warned that AI would lead us to a future of endless misinformation. Now that world is here. But despite apocalyptic expectations, the Swift fandom is faring just fine (for now). AI Taylor shows how human culture can evolve alongside increasingly complex technology. Swifties, for the most part, don't seem to use the tool maliciously: they use it to play and to make jokes among themselves. Giving fans this tool is “like giving them a new kind of pen or brush,” explains Andrea Acosta, a Ph.D. candidate at UCLA studying K-pop and its fandom. They explore creative uses of the technology, and when someone seems to be going too far, others in the community aren't afraid to say so.

In some ways, fans may be uniquely well prepared for the synthetic future: they've been debating the ethics of using real people in fan fiction for years. And while every fandom is different, researchers say these communities tend to have their own norms and to be somewhat self-regulating. They may be some of the internet's most diligent investigators. K-pop fans, Acosta told me, are so good at parsing what's true and what's fake that they sometimes manage to stop misinformation about their favorite artists from circulating. BTS fans, for example, have been known to call out factual inaccuracies in published articles on Twitter.

The possibilities for fans hint at a lighter side of audio and video produced by generative AI. “There [are] a lot of fears — and a lot of them are very legitimate — about deepfakes and how AI will play with our perceptions of what reality is,” Paul Booth, a professor at DePaul University who has studied fandoms and technology for two decades, told me. “These fans illustrate different parts of that, which is the playfulness of the technology and how it can always be used in a fun and maybe more engaging way.”

But AI Taylor Swift's viral spread on TikTok adds a wrinkle to this dynamic. It's one thing for fans to debate the ethics of so-called real-person fiction in a quiet corner of the internet, but on such a large and algorithmically driven platform, the content can instantly reach a huge audience. Swifties playing with this technology share a knowledge base, but other viewers may not. “They know what she's said and what she hasn't said, right? They can almost instantly clock, Okay, this is an AI; she never said that,” Lesley Willard, program director of the Center for Entertainment and Media Industries at the University of Texas at Austin, told me. “It's when it leaves that space that it becomes more concerning.”

Swifties on TikTok are already establishing norms around the voice AI, based at least in part on how Swift herself might feel about it. “If a bunch of people start saying, ‘Maybe this isn't a good idea; it could affect her negatively,' most people take it to heart,” a 17-year-old TikTok Swiftie named Riley told me. Maggie Rossman, a professor at Bellarmine University who studies the Swift fandom, believes that if Taylor came out against specific audio bits or certain uses of the AI voice, then “we would see it shut down among a large portion of the fandom.”

But this is tricky territory for artists. They don't necessarily want to crush their fans' creativity and the sense of community it builds — fan culture is good for business. In this new world, they must navigate the tension between allowing some remixing and maintaining ownership of their voice and reputation.

A rep for Swift did not respond to a request for comment about how she and her team are thinking about this technology, but fans are confident she's listening. After her official TikTok account “liked” a video using the AI voice, one commenter exclaimed, “SHE HEARD THE SOUND,” followed by three crying emoji.

TikTok, for its part, has just released new community guidelines for synthetic media. “We welcome the creativity that new artificial intelligence (AI) and other digital technologies may unlock,” the guidelines state. “However, AI can make it more difficult to distinguish between fact and fiction, carrying both societal and individual risks.” The platform doesn't allow AI re-creations of private figures, but it gives “more latitude” to public figures — as long as the media is identified as AI-generated and adheres to the company's other content policies, including those on misinformation.

But rogue Swift fans can probably do only so much damage. Sure, they might crash Ticketmaster, but they're unlikely to bring about an AI armageddon. Booth thinks of all this in terms of “degrees of concern.”

“My concern with fandom is, like, Oh, people will be confused and upset, and that can cause stress,” he said. “My concern with [an AI fabrication of President Joe] Biden is, like, It could cause a nuclear apocalypse.”
