For the final chapter of my Lorecore Trilogy (read the first part here, and the second here), I’ve collaborated with Y7, a duo based in Salford, England, who specialize in theory and audiovisual work. Ever since generative AI flooded our feeds—just think of Moncler Pope or Fake Drake—Y7 and I have joked that we have entered the “Promptcore” era. Here, according to a neologism from “The Lexicon of Lorecore,” the zeitgeist is taken over by “Deepfake Surrender”—“to accept that soon, everyone or everything one sees on a screen will most likely have been generated or augmented by AI to look and sound more real than reality ever did.” Y7 and I also agreed that, so far, most material output by generative AI apps (ChatGPT, DALL-E, Midjourney) is decidedly mid. But does it have to be?
I set about writing a film script. The premise is “Lore Island at the end of the end of the internet.” An apocalyptic meet-cute! Y7 then used the script as prompts in a panoply of text-to-content AI models (such as Runway’s Gen-2, Midjourney’s Infinite Zoom, and AudioLDM) to produce characters, voices, video, theme music, and sound design. At the time of release, Hollywood’s writers and actors are on a sensational strike, in part out of existential anxiety about the survival of their crafts once studios use this kind of technology to auto-generate film and television content. In “Core + Lore,” Y7 and I have not set out to make writing, set design, scoring, or acting redundant through an aesthetics of illusory realism. That would be truly mid. Instead, we are interested in what Boris Groys calls “the chaos hidden behind the smooth surface” promised by generative AI, at this nascent moment. Or, as one of our film’s protagonists, Lore, blurts out: “Prompting is winning.”