Conversations

Aleatoric is the NFT Project Letting Chance Take the Wheel

Matt Condon and Evan Casey on how the agency of AI and sleep talking can be distilled into comical audio-visual vignettes

Text Zine
Published 18 Jan 2022

Rorschach tests, REM research, and ketamine-assisted sleep studies may unlock how our subconscious manifests. But where those exercises are driven by serious scientific inquiry, collaborators Matt Condon and Evan Casey fill in the gaps with machine-learning fun and artistic merit in their NFT project, Aleatoric. For them, nocturnal rambling and the dance with AI have never been more fruitful.

Liam Casey (ZORA): What are both your backgrounds, and what led you down the rabbit hole of Web3?

Evan Casey: I was interested in the art and AI space for many years, and really just became fascinated by this idea of combining human creative inspiration with AI. As a drawer and painter, I've always loved making art, but I've also had this side of me interested in programming and machine learning. So being able to combine those two, and seeing some of the early explorations in deep dream and style transfer, was inspiring for me. [I] carried that through some of my work over the past few years, building tools for animators through Cadmium, which applies AI to hand-drawn animation. Some of the stuff this year with VQGAN+CLIP has been really cool to see: the grassroots community that's come together and hacked these algorithms, which aren't really designed to make art, and found ways of using them.

Matt Condon: I showed up in crypto during the 2017 bull market and got into the tech pretty quickly. [I] learned about the social side of crypto—the self-sovereignty and governance. Very quickly I discovered NFTs via CryptoPunks and was like, ‘Oh, yeah, that makes way more sense to me.’ I've always been a fan of conceptual art, things that poke at how people think or poke at the seams of an idea, or make you ask a question, but I didn't realize that until recently. I find it delightful, and I started this project where I have a chip in my hand and I've connected it to an NFT. So if you meet me IRL, you can get an NFT by tapping my hand with your phone. These are stickers of my face, and there are different rarities. I got the chip in 2018, so I'm the first person in the world with an NFT implanted in their body.

Matt Condon, Evan Casey

LC: How did Aleatoric come to fruition? How did you approach the project, and what were your processes like?

MC: Back in 2018, I was going through this crisis of understanding digital scarcity. I didn't have an art background, I didn't have a philosophy background, so I was rediscovering what ownership is. [I thought]: ‘Okay, all digital scarcity is architected, but that means nothing is authentic.’ How do you decide if something is real or true? What's true is what people believe is true. I wanted Aleatoric to ask the question of how one decides the scarcity of something without having conscious control over said scarcity. A partner told me that I sleep talk at night, and I was like, ‘Okay.’

I started recording [the sleep talking] and what comes out is hilarious, insane babble. I was like, ‘There's got to be something I can do with this.’ The idea struck later where it was like, ‘Oh, I can connect this to an NFT and get at this question of digital scarcity and abundance.’ I didn't know how to visualize this babble, but then the VQGAN+CLIP thing came out and I was like, ‘Oh, text to video: that's exactly what I need’.

The process is: I sleep talk at night, and then every morning I wake up and check whether any of the recordings are me talking. If there are, we've scripted it up such that I just AirDrop them over to my computer. I finally have an SSH connection to Evan's GPU cluster, so I can run the actual AI CLIP thing, which I don't understand at all, which I think is important, and then generate the image and upload it to ZORA.
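
To make that routine concrete, here is a minimal sketch, in Python, of the kind of script Matt describes: a transcript goes in, a VQGAN+CLIP render comes back from a remote GPU. The SSH alias, directory paths, and the remote generate.py are illustrative assumptions, not the project's actual code.

```python
"""Sketch of the nightly Aleatoric pipeline described above (assumed names)."""
import shlex
import subprocess
from pathlib import Path

GPU_HOST = "evans-gpu-box"                   # hypothetical SSH alias for Evan's cluster
REMOTE_SCRIPT = "~/vqgan_clip/generate.py"   # hypothetical VQGAN+CLIP runner
INBOX = Path.home() / "aleatoric" / "inbox"  # assumed folder where AirDropped clips land

def render(prompt: str, out_name: str) -> Path:
    """Run the (assumed) remote VQGAN+CLIP script over SSH and fetch the image."""
    remote_out = f"/tmp/{out_name}.png"
    # Quote the prompt so spaces survive the remote shell invocation.
    cmd = f"python {REMOTE_SCRIPT} --prompt {shlex.quote(prompt)} --out {remote_out}"
    subprocess.run(["ssh", GPU_HOST, cmd], check=True)
    local_out = Path.home() / "aleatoric" / "renders" / f"{out_name}.png"
    local_out.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(["scp", f"{GPU_HOST}:{remote_out}", str(local_out)], check=True)
    return local_out  # ready to be uploaded to ZORA

if __name__ == "__main__":
    # One transcript per clip, kept as a .txt sidecar next to the audio.
    for transcript in sorted(INBOX.glob("*.txt")):
        prompt = transcript.read_text().strip()
        if prompt:  # skip recordings that were just AC noise
            print(render(prompt, transcript.stem))
```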

LC: Did you have any say in how the imagery and visuals were integrated, as with Aleatoric 9 // pirates?

EC: We don't do anything in terms of dictating what the visuals are. Those all come from Matt’s hilarious sleep talk. It's coming from this model that's been trained on billions of images and can effectively pull out infinitely many images, but it's reactive to these words. So with Aleatoric 9 // pirates, I think ‘pirates’ showed up in the actual text and that's what triggered it. One of the cool things about this type of art is that it can combine different concepts in ways that might be unexpected.

MC: It's mostly the transcript prompts, and the model itself will come up with whatever concept it wants to fit them. In one of the first ones [it’s gonna be hot], I just said the words ‘It's going to be hot’, which could be referencing so many different things, but then the model put sexy bodies and skin tones and a noticeable butt in there. And I'm like, ‘Oh, okay. That must have been what I was dreaming about.’

LC: Were there any challenges in the project itself, or in how the pieces have evolved from the start up until now? And what do you envision for the future?

EC: I don't know if we had any significant ones. It's a very conceptual piece that also takes a twist on some of the other AI art that has exploded this year. What's different about this is this uncontrollable piece where we don't really know what's going to come out of it, which is different from a lot of art that's out there. Going deeper into that, and this is totally not going to happen, but some of Matt's speech is maybe a language in itself, and so translating it can be really challenging. Wouldn't it be cool if we could somehow decode that language in its raw form?

MC: Yeah, that's true. With the transcription, it loses the texture, timbre, and flavor of my, say, ‘Ooh’ or ‘aah’. The transcription is just, ‘It's going to be hot,’ but maybe that could be input into a model a bit more…linguistically?

EC: I think I had this tweet that was like: we need a CLIP for made-up languages ASAP.

MC: Right? Let me feed the raw audio files into this thing and have it transcribe meaning internally and then reflect that—that's a whole research paper. The difficulty was [that] initially we didn't script it up; we just had each little Lego piece, but it wasn't put together, and I just wanted to launch it. So every morning that I sleep-talked, I had about an hour's worth of work to do, and it also required Evan to be online in the morning to run the model on his GPUs. Part of the art is that I don't skip anything; it's sort of a process piece, and I record all the time. Even though right now I'm in a hotel room with a very loud AC, I still record, but the clips don't come out because it's just AC noise.

For the future, I would love to explore seasons of styles, because the method we're using is just one of many ways to turn sound or concepts into a visual. I like it a lot, but we were also thinking: what about collages? Could we break each transcription block out, produce a single image for each one, and then collage them together somehow? Or maybe there's an audio-to-visual approach—like a visualizer-type thing—that makes sense here. As for the time horizon, we didn't set an end date, even though a lesson I've learned again and again is: put an end date on your projects. But there's no end date in sight.

LC: There’s a human element and an AI element, where you've ceded some of your own agency to these technologies. How much do you see this project as your own, and how much as collaborative?

MC: I definitely disassociate a little bit from the clips; I hear the sounds, and I don't feel any emotion about them. It feels like a completely different person, because I guess in some sense it is. In terms of authorship, people say this all the time with AI: it's a collaboration, a sort of dance. And in a very map-territory sort of way, the model is visualizing what I'm saying, but it's also providing information back in this feedback loop where I say a thing, it could be interpreted one way, the model provides some more information, and now it's interpreted in a different way.

EC: That's a really cool point. When I'm thinking about stuff, drawing or painting, a lot of it comes from journaling. I'll wake up first thing in the morning and maybe write stream-of-consciousness or a little poem. In some ways it's similar to that because we do have these subconscious things, these emotions, and we're channeling that through art. So I think that's pretty cool, [where] Matt is subconsciously doing it, and we are in control of our subconscious indirectly.

LC: Have you ever considered having other people do the sleep talking, or running sleep studies where you could also collect these recordings?

MC: I could totally see this being something where there's a collective choosing the model of the day, or the model of the month, and people are submitting their sleep babble from around the world and there's this whole lively collector thing of like, ‘Oh, that speaks to me, I think that's really funny’, and you're just collecting your favorites.

LC: Any final thoughts?

MC: Right now, every piece is a one-of-one. One of the innovative things about digital scarcity is that supply doesn't have to be fixed, yet all of the auction mechanics we're familiar with usually assume that it is: there's one object, or there are a thousand editions. The design space of auction mechanics for objects that could have any arbitrary supply is still in flux.

Dan Finlay of MetaMask fame came up with an auction mechanic, which he jokingly, but also accurately, acronymed a ‘MATT Auction’ after me. It's a very simple mechanic based on interest: how much someone is willing to pay for X copies of a thing. In a graph of, say, ten people naming five different prices they'd pay for X editions, there's an optimal point that both maximizes the number of people who get the thing and maximizes the revenue for the artist. I think it's a perfect digitally native auction mechanic, and I would love to apply it to Aleatoric such that as many people who want one could have one, assuming we're also maximizing the artist's return.
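
To make that mechanic concrete, here is a toy sketch of one way to read the description: every bidder names a price, a single clearing price is chosen, and everyone who bid at least that price receives a copy at that price. The function and its tie-breaking rule are my assumptions based on the interview, not Dan Finlay's actual specification.

```python
"""Toy clearing logic for the MATT Auction as described above (assumed rules)."""

def clear(bids: list[float]) -> tuple[float, int, float]:
    """Return (clearing_price, copies_sold, revenue), maximizing revenue and
    breaking ties toward the lower price so more people get a copy."""
    best_price, best_buyers, best_revenue = 0.0, 0, 0.0
    for price in sorted(set(bids)):  # each distinct bid is a candidate price
        buyers = sum(1 for b in bids if b >= price)
        revenue = price * buyers
        # Strictly higher revenue wins; on a tie, the earlier (lower) price
        # stands, which serves more buyers.
        if revenue > best_revenue:
            best_price, best_buyers, best_revenue = price, buyers, revenue
    return best_price, best_buyers, best_revenue

if __name__ == "__main__":
    # Ten people naming five distinct prices, as in Matt's example.
    bids = [1, 1, 2, 2, 2, 5, 5, 8, 8, 13]
    price, sold, revenue = clear(bids)
    print(f"clear at {price}: {sold} copies sold, revenue {revenue}")
```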

