Search Engines | "Octavia Butler AI: Other Radical Possibilities of Technology" Transcript Cleaned by Raquel Escobar === [00:00:00] Lisa Nakamura: I do you think we're going to take any questions from the Zoom audience? Jeff Nagy: If you want us to record it, usually people who are doing the questions in the chat, I think Giselle is going to be writing that, which means that we'll take questions from the room. But usually people in the chat are quite active. It's not just, you know, people who are on their internet. It's more people who talk or whatever. Lisa Nakamura: Yeah, that's true. If you hang out for a while, you'll find characters you hang with. Jeff Nagy: Yeah, yeah. Lisa Nakamura: You'll find people who talk to you in a different way. If you talk to them. This one is very clear. Jeff Nagy: Yeah. Multiple Speakers At Once: There was a moment where we were worried that he [00:01:00] might not have a job. That was when I fixed a few of those points though. so much. Do you remember Parker? How are you? I haven't seen you in a while. I think it was at the DSI event. We had dinner. We sat up in the unit I think. Which is my experience. I've already had one. Yeah, I've had one too. I don't think we have any. I mean, now black kids are studying interstitial technologies. That's awesome. That's a huge increase. So, we signed a stand for interstitial technologies. Working? Awesome. Um, you can still a little bit. Oh, right.[00:02:00] Hopefully invite. Okay.[00:03:00] [00:04:00] [00:05:00] [00:06:00] [00:07:00] [00:08:00] [00:09:00] [00:10:00] [00:11:00] [00:12:00] [00:13:00] [00:14:00] [00:15:00] [00:16:00] [00:17:00] [00:18:00] [00:19:00] All right. I'm wondering why I was right. For example, are [00:20:00] we currently live? Yes, we are. All right. You appreciate. Yeah. So I'm going to Is that you, Lisa? Lisa Yeah, so the roght turn is yours. Okay. I've got it. And nobody is being mic'd because we have these microphones. Yeah, we've got them. Jeff Nagy: So I think we're live on Zoom. Uh, we'll get started here in just a [00:21:00] minute. For those of you here in person, there are some snacks and drinks over on the left side. You're left, my right side of this room. Please do, if there's someone. Whenever the mood strikes you . Bill and Chris, are we good? All right, we're good. Let's get started. So, hi, everyone, and thank you for coming. My name is Jeff Nagy. I'm here with [INAUDIBLE] who is an academic fellow, and also the faculty chair of this series, Search Engines, and it's a pleasure to welcome you all here today for a conversation entitled Octavian Butler and AI, Other Radical Possibilities of Technology. For the conversation, I have just a couple, just, well, thanks introductions to run through, which I'll do right now. So this is mostly for those of you who are on Zoom. We'll begin shortly. Here's some guidelines for this event. If you have an access or technology related concern or question, you can get in touch with Giselle [00:22:00] Mills, who's in the Zoom webinar chat. And if you have a question too, please put it in the chat. We'll turn to questions after the conversation here, both from Zoom and from the room here. And this is a closed session. So to respect our panel's privacy, there's no recording either on your home devices, if you're watching this on Zoom, or here in the room, if you're in the room, I'll see you do it, so I'll know. But no recording, no live tweeting, no live- Andre Brock: Wait, what? Hold on, hold on. 
Jeff Nagy: I know for Andre this will actually be very difficult, but I'm going to ask it of him. It's just for this next hour and a half. So, um, first, this event is taking place on the ancestral, the traditional, and the contemporary lands of the Anishinaabeg, the Council of the Three Fires, the Odawa, the Ojibwe, and the Potawatomi, as well as the Wyandotte. And as we live here and learn here, we honor the Indigenous people who continue to steward this land, and those who were forcibly removed from it. DSI [00:23:00] and DISCO and Search Engines all are committed to accessibility and digital equity. And as such, we invite you in the audience, either here or at home or wherever you may be, to experience this event in the ways that you need to, the ways that best suit you. So feel free to turn your camera off, or, you know, move around the space. There is captioning available if you're in the room. There are open captions on this screen. You can see the captions at the bottom. And you can access them if you're in the Zoom webinar by turning on the show captions option. If you have any other accessibility concerns, please do reach out to us in the chat via Zoom or here in the room. So this is the second event of this new series of programming at the intersection of the arts, emerging technology, and social justice, Search Engines. We had another event a few weeks ago, and many of you were there with us on November 15th for a live performance by Astria Suparak. Thank you for returning if you were there. Welcome to all of you who are new. We'll [00:24:00] have two more events this year, I want to point out to you guys. On February 8th of 2024, we'll host Lauren Lee McCarthy. And on April 4th of 2024, the last event in this first inaugural cycle, we will be joined by Danielle Brathwaite-Shirley. We very much hope that you will join us again for those events coming up next semester. So Search Engines is a collaborative endeavor of the Digital Studies Institute and the DISCO Network, and if you're not familiar with those, uh, two institutions, I'm sure many of you know, uh, the DSI is a center for research and dialogue where faculty, students, and visitors focus their inquiry on technology, digital culture, and social justice. You can read a little bit more about it on the screen here or at home. And just to introduce you to the DISCO Network: DISCO stands for Digital Inquiry, Speculation, Collaboration, and Optimism. We are a multi-sited, Mellon-funded network of labs at [00:25:00] five different universities. You can see the PIs of our labs here on your screen. They include Lisa Nakamura and Andre Brock, as well as [INAUDIBLE], Stephanie Dinkins, Remi Yergeau, and Catherine Knight Steele. You can find out more information about the DISCO Network on our website, disconetwork.org, including some information about our upcoming events and our programming and organizing more broadly, as well as our publications, because publications are coming out every month now, actually. So please do keep in touch with us there and on social media. So we are generously supported by the DISCO Network and by the University of Michigan Arts Initiative. And this particular event is co-sponsored by the Department of Afroamerican and African Studies, the Trotter Multicultural Center, the History Department, the American Culture Department, the Department of English Language and Literature, the School of Information, and the Department of Communication and Media. We do have one DISCO [00:26:00] event coming up that I want to point out to you.
It's actually next Thursday, on December 14th. All of us DISCO scholars got together over the summer and wrote a book. We wrote a book in about a week, from scratch, in rural Pennsylvania. If that sounds like the pitch for a horror movie to you, come hear more about it. We'll be talking about it in detail with a lot of us who were there writing together, across disciplinary backgrounds, across critical frameworks, across identity frameworks. And we did actually manage, believe it or not, to go from zero words to about 50,000 words by the end of that week. And the book is now out under review with a scholarly press. So if you're curious about that process, anything about that process, you can join us on December 14th. I'll be moderating that conversation. I hope to see some of you there. So, finally, I'm going to introduce our panelists today, starting with Beth Coleman. I know in the room this is very, very small text, actually, in part because Beth Coleman is [00:27:00] so accomplished that there's too much to put in a single slide. But I will tell you about some of it. She's an Associate Professor of Data and Cities at the University of Toronto, where she directs the City as Platform Lab. And she works across science and technology studies and critical race theory. She focuses on smart technology, machine learning, urban data, and civic engagement. And this particular project, I think, kind of synthesizes a lot of her contemporary research as well as her creative background as an installation artist, as a musician, back in the 90s with the [INAUDIBLE] scene in New York. And in this project, Octavia Butler and AI, that we're going to hear more about and see some of very, very shortly, you know, the central kind of provocation of this project, and sort of the pitch for it, at least as I've sort of explained it to myself after reading your book and looking at the images and exploring the website, is something like this, right? We're so used to worrying about transparency in AI, or bias and discrimination in AI, accountability, you know, how do we open up the black box and see how that [00:28:00] bias got in there in the first place? How do we get it out of there? And this project, in a way, takes all those questions and turns them inside out, right? The idea instead is sort of like, what would happen if we took AI and made it just much wilder? Like, what would happen if we re-envisioned our relationship to AI, to invite it in as kin and collaborator? And what kinds of possibilities might that vision hold, possibilities that these other frameworks, the frameworks more traditionally associated with, like, fairness, accountability, transparency, might foreclose? So we're going to hear more about that in just a few seconds. And to dig into it and what it might promise, we're going to have Lisa Nakamura as this conversation's moderator. She is the Gwendolyn Calvert Baker Collegiate Professor in the Department of American Culture, founding director of the DSI here at the University of Michigan, and she's written [00:29:00] many books and many articles since 1994. She's been, for decades now, at the forefront of thinking about the intersection of identity, marginalization, race, community, and how all of these shift, or often don't, right, when they move online. She has been running with these dynamics,
yeah, basically, since there was an internet to be on. So I'm very glad that she is here to have this conversation with us, because I can't even think of someone with a deeper knowledge.
Andre Brock: Yeah, he basically said you're older than time. I dunno. I heard you say I can't live tweet, so this is what you get.
Jeff Nagy: I get live roasted, man. No, I knew. I'm thinking about when I first started out as a scholar in this field, like, those works, Lisa's works, were foundational for me, and I think for many of the people in this room. And will be foundational, I think, for this conversation.[00:30:00] So, Andre, do you want me to introduce you? I mean, if you still have time. So, we're very lucky to have Andre Brock here. He's joining us from Georgia Tech, where he's an assistant professor of media studies. He writes on Western technoculture, Black technoculture, and digital media. And, you know, you can read more about his work on the slide here or on your webinar screen. I, you know, I'll take Andre's roast seriously and excuse myself at this moment. So, everyone here in the room, hopefully everyone who's watching us from home, please join me in welcoming Beth, Andre, and Lisa.
Beth Coleman: Thanks, Jeff, um, and Lisa. It's really wonderful to see people. Thank you for coming out. Um, [00:31:00] we have a lot of things I think we want to talk about. So I'm trying to figure out how to move through that conversation. Happily, Lisa and Andre and Jeff have provided some really rich questions that I think will help get us collectively to some of the things that are on our minds. So one of the first questions that I was asked is, why, why would you do this? So I thought, let me start with just what is this thing that I've done. The project is called Reality Was Whatever Happened: Octavia Butler AI and Other Possible Worlds. And what I started with was, in the kind of first cycle of the pandemic lockdown, I spent a lot of time trying to draw an idea. I was trying to think about AI as a [00:32:00] technology of the surrounds. So you're already hearing Harney and Moten and the undercommons. I was trying to think about, well, what if AI is an itinerant technology? What if there is a kind of movement of marronage, a kind of drift and unmooring? And I was thinking these things and I would, you know, go on walks, because that's what we did during COVID. And I talked to different people, like M. Murphy, the Indigenous STS scholar. And I'd say, look, I have this idea, and it's the idea that I assume is going to just offend everyone, because we're already in a space, and it got much crazier after that, where it's always, get it back in the box, and what is in the box, and what is going on here. And predictably, the problems we're already seeing in terms of AI in the world are in application: ChatGPT, other kinds of generative [00:33:00] AI. DALL-E was picked up really quickly. And I'm happy to talk about why certain AIs are interesting to me and why certain AIs just make me crazy, I'm just like, this is the worst thing. Anyway, you know, we all have our opinions about things, but essentially what I was trying to work through was a responsible and accountable argument for why I thought AI should be more wild and not more contained. So that's some of what I'm working through in terms of architectures. What I was using was a GAN, a generative adversarial network. That architecture is particularly interesting to me because with the generative you also have the adversarial.
So what you have is an architecture that gives you two different agents, both artificial. One, the generator, has been trained on a set of data. And the [00:34:00] other, the detector, essentially says, okay, let's say it's a bunch of landscape images. The detector is like, that's not it, I'm saying it's not a mountain. But the detector doesn't know what a mountain is. It's just like, what is this array? And probabilistically, that's not the same thing. What you generate is not the same thing as what you learned. And they keep going, no, no, that's not it. That's not it. Again, I'm using anthropomorphic language, but nobody's saying it's intelligent, because it's, yeah, it's machines. It's code. We can talk about intelligence, I'm happy to talk about intelligence, but I'm not pretending that this is intelligence. Until you get to some point where it's just like, yep, that's a mountain. Good, we're done. So that's the adversarial part of it. And the network is, well, a network. And GANs began, the big paper on that is by Ian Goodfellow, who was a PhD student of Bengio's in [00:35:00] Montreal at the time, and then went on to different industry things. And one of the things about it was, it was very lightweight. With very little training data, you could produce things very quickly. So that was one level of innovation, and it's important in terms of the heaviness of it, partly because of the environmental aspect of it. But the other thing was, it was the architecture behind a meme, a moment of generative AI production, where there was this meme of, this person isn't real. I forget the exact reference, but it's essentially this person does not exist, you know, dot com, and what you'd see every day was like a different generation of what was a series of absolutely banal faces. And now I'm just being shady, but like, people who you wouldn't necessarily look at twice on the street, because they were designed to look like [00:36:00] everyone and no one. They were vaguely tannish, middle-age-ish, like there was a kind of anonymity. And the big thing was, they were so photoreal, so real you'd think it was a real person, partly because they weren't extraordinary looking. They don't look like supermodels; they don't, you know, look like the Hulk or Hulk Hogan. And I was just like, oh, fantastic. So we've got this really interesting technology, and the biggest thing, the most memetic thing being produced with it, is the most boring thing possible. But that boring thing was really helpful for me, because it was a different version of saying generative AI is going to be used to reproduce a status quo, literally. And I was like, okay, good. Now I have an invitation to break this. So that was kind of like why that tech, why that moment, and what was going on, and also why I'm not interested in Midjourney. And I've got a Midjourney hater slide if we get to that. [00:37:00] So then the other part of it is Octavia Butler, which is part of the reason, or maybe the primary reason, why you guys are here, because Butler has been generative for all of us, really generative, and it's painful and remarkable how predictive that work has proven to be. And the reason I was interested: I listened to the whole Xenogenesis series on audio when I was in a studio working on a different project in 2019. And it just was my, my soundtrack. Like, it wasn't what I was putting my hands on, but it was just what I was living with in my mind. And then it just, like, settled and sat there. And I had read the trilogy before, in different patches and out of sequence, but for some reason I was like, I'm going to come back to this now.[00:38:00]
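[Editor's note: a minimal sketch of the generator/detector loop described above, written in PyTorch on toy data. This is a hypothetical illustration of the GAN idea in general, not the model or code used in the Octavia Butler AI project. The detector keeps saying "that's not it" to generated samples, and the generator keeps adjusting until its outputs pass.]

import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# The generative agent: maps random noise to candidate samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))

# The adversarial agent: scores whether a sample resembles the training set.
detector = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0         # stand-in "real" data
    fake = generator(torch.randn(64, latent_dim))  # generated candidates

    # Detector step: learn to tell real samples from generated ones.
    d_loss = (loss_fn(detector(real), torch.ones(64, 1)) +
              loss_fn(detector(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust until the detector accepts its outputs.
    g_loss = loss_fn(detector(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

[Neither agent "knows" what a mountain is; each only pushes probability mass around until the two distributions are hard to tell apart, which is the adversarial dynamic Coleman describes.]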
And I've read in different patches and out of sequence, the trilogy before, but for some reason, like, I'm going to come back to this now.[00:38:00] And just this idea of making aliens kin, and with Butler, no matter the situation, there is always yhe return to the difficulty and the often violence of making relations, making kinship. So if it's kindred, like I forget what the protagonist was called, but like she ends up with like her arms stuck in the wall, you know, like it's very literalized in terms of that body and, and how it's wrecked. In relationship to kinship and how she decides not to kill that guy. But anyway, that's different, slightly different story. So this idea of the necessity of building relations, alien relations, making kin with aliens was something that I was thinking about in playing with, you know, this new [00:39:00] technology and And you heard the early part in terms of, let's just reproduce the normative. So I was like, okay, let me take again and reorganize it. So I'm playing around with a couple of different things at once. So the first series that, so I was, I was testing, working in a studio and testing through some things just to see, is this a mode of production that can work? And then I got to some point and I was like, okay, I see this can work. And now I see how I can collaborate here to produce the work that I want to see. So a combination of the mysteriousness of like, oh, what's it going to make because you do have this. network that's looking, looking at, responding to here's the types of images that it knows, and here's the types of images that I'm training it with. And then I started working through, and I [00:40:00] got the Alice series. And so this is Alice White Dog, and this is Alice Freeman, an engineer, and Alice Freeman, an engineer, any historians or sociologists? So Du Bois exhibition at the Paris World Fair, the 20th century, he had a whole bunch of photographs of Black people from different parts of the States. And more North Carolina. I need to look to see Andre or Lisa. Do you know for that exhibition? Was it a particular location? So it was some early data graphics in terms of collecting information and then also the images of people and the exhibition was a campaign around humanness that race could not be made, strictly codified, but also here's all the data to demonstrate once again, human, I [00:41:00] know you're like, again, but yeah, so, , the names, some of them are about the Du Bois, this is marinage. This is unmoored. This is Paris. This is Fox. Yeah, it's 1900. Yep. And there's a lot of animals in here. And that's Alice Supremes. That's like the hero image that becomes the city of the book that is the dust cover of the book. Yeah, and circulates and the reason I produce a series of 100 for all three of the series. So it's oceanic after this, I'll talk about it. And then BPP Black Panther Party landscapes and in, trying to imagine, imaging, making relations with aliens. Part of what I started [00:42:00] with is I don't know what aliens look like, you know, life on Mars and all the rest, but I do have a sense of what has been thought of as alien from us. Like we're not supposed to be birds. We're not jellyfish. So I start to make synthetic images playing with portraiture. 
As you can see, the dominant imagery that it was learning was different figurations of Black female figures, in relationship to animals, and then in the next series in relationship to ocean creatures, and then in relationship to landscape. I took a different approach, and Jeff had asked me, is this a particular person? Because he felt, because of the name, and also the figure, it's resonant. And several people said to me, is it this Alice? And I was like, oh, [00:43:00] so is that Alice part of the data set? Probably. But I like that people are feeling the resonance of a certain kind of, like, 70s, you know, Afrofuturist, Sun Ra, like, and it's just one still image that is a conglomerate of all these different things, of a figure that does not have a traditionally human face. "So at an earlier awakening, she had decided that reality was whatever happened, whatever she perceived." So that's in the opening chapter of Dawn. And Lilith Iyapo is the protagonist, and also the first person, the first human to participate, to [00:44:00] consent, to building relations, building family, and reproducing with the Oankali, the aliens who are saving the world, but also are changing all the boundaries of what it means to be human. Because, you know, in an obvious way, some of the kids have tentacles and stuff. But in a way that, and Butler is quite explicit about this, it drives the remaining men, and particularly the white men on earth, insane, that sexual reproduction has to involve an ooloi. There has to be a third figure between humans. And Butler is consistently both totally freaky but also heteronormative in how her sexual relations work. That's [00:45:00] who she is. That's what she's doing. Like, men and women, but there's got to be an alien in between. You're like, okay. So that citation: Lilith wakes up naked by herself in a ship and has to figure out how to survive. Sounds familiar. So then there's also the thing of, like, consent. If you have no choice for the survival of yourself and the people you love, other than this relation, if you're waking up by yourself naked in the ship, it does trouble the idea of consent a bit. Like, she does build strong, loving relationships, and the generations do, but that tension and some of that violence of location, situation, I'm [00:46:00] not sure any of us would call that consent. Okay. So now let me hurry up, because that was quite a long start. So that is the first series and, um, I was happy with it. So then I went on to do, there it is. Then I went on to do some more. So then I took the Alice model, because now it has been infused with the two things that I put in, and I wanted to make, I don't know, additionally alien creatures. So I then started training in relationship to the oceanic, and I was able to get this just marvelous, diverse motion through these different colors, which was super fun for me. And this series of 100, we put up the entirety on the website, and this [00:47:00] stuff is all on the website. It's all in the book, and I'll show you some of the outputs in terms of installation. These are details from the Oceanic series. And the citation here is: "Our seas are rising, oceans are warming, and extreme events such as cyclones and typhoons, flooding, drought, and king tides are more frequent and intense, inflicting damage and destruction to our communities and ecosystems, and putting the health of our peoples at risk."
So that citation is sadly not fiction. That is from the Declaration for Urgent Climate Change Action Now, Securing the Future of Our Blue Pacific, from the Pacific Islands coalition. So this is both the monstrousness and the beauty of the [00:48:00] ocean, but also the indicator of a heating world, where jellyfish are actually a sign of how the oceans are transforming. Whales are dying, but jellyfish are thriving. Okay. And then the last series, let me see if I can get the [INAUDIBLE]. So this was thinking about landscape, and this was thinking about collectives. So the Black Panther Party strongly informed the types of figures that I was bringing in to make these different landscapes. And these I have not, whoops, one of my choices with this work was not to show it primarily as digital images, because, for one thing, I don't think we can see anything on screens anymore. I mean, I'm showing you stills on screens, but it's very hard for not everything to look the same. So the [00:49:00] exhibition output for these things is analog. And I'll show you those in a minute. "Let a new earth rise. Let another world be born. Let a bloody peace be written in the sky." And that's the opening line from Margaret Walker's For My People, which was published in 1937. And Huey Newton uses it as an opening poem in the beginning of his autobiography. So what I showed were these paintings that I call doppelgangers. And partly because I couldn't resist, just the whole trickery and just fuckery all the way down in terms of how the world is produced, the world of images, how reproduction happens. So, you know, you can throw in all kinds of different things in terms of [00:50:00] art in the age of mechanical reproduction, now AI reproduction. I don't know if Lisa's going to be like, Duchamp or not Duchamp. I'm a longtime fan of David Hammons. We all are. So how could I resist not having her next to her? And then I'll show you the big paintings. Let me see if I can see this. Yeah. So this is what I showed at the McLuhan Centre last month. And they're like four by four, like, they're two big oil paintings. Okay. So that's the introduction to the project. And that took quite a long time. So, Lisa, what should we talk about? I don't have to stand up here.
Lisa Nakamura: Well, first I would like to invite Andre to give some thoughts as a respondent.
Beth Coleman: Yeah, I'm going to put up Alice Supreme again, and then I'm going to come sit down next to you.[00:51:00]
Andre Brock: So I come to this having started my own interrogations of algorithms and, by extension, artificial intelligence. And so when I first encountered your idea about an Octavia Butler AI, I was immediately excited, right? Octavia Butler, of course, is one of the best science fiction writers of ever, right? And particularly your focus on Lilith's Brood, on Dawn, the Xenogenesis series, right? But I'm curious, having heard the ways in which you're describing your project, I need to better understand your attention to wildness. So I hear marronage and fugitivity [00:52:00] in the way you describe it, but I'm not sure yet that I'm seeing it in the images. And that's my flaw. I'm not good with images or poetry. I'm very literal minded. So my first question to you is to ask, what does it mean for an AI to be a fugitive? And what is it fugitive from? And I think that question starts for me because you're invoking marronage, which is, for those who may not be sure, you know, different communities of folk who fled enslavement, particularly in the Caribbean.
But there were also maroons noted in Florida as well, right, who fled enslavement and created their own self-sustaining, sovereign communities in places which were largely inaccessible; in Florida it was the mangrove groves, right. And so what does it mean for you to both invoke a mode of resistance and to center that resistance in a black experience, but using this very modern technology?
Beth Coleman: [00:53:00] Yeah, so, I think, based on your own work, you do not have any difficulty centering blackness in very modern technology. So I'm not sure that's really a question for you. But in speaking about, so, wildness is not necessarily in kind of the system of signs that we have been practicing, importantly, in terms of some of the language of black poesis. But I was trying to figure out how to get to [00:54:00] alien. So, something outside of, and wild. I think maybe like the surround, if you're seeing it from the point of view of the fort. The places that are dark, the places that are unknown. So wildness is, of course, a metaphor, and as Lisa and I were talking about a little bit, it could also be understood as a really colonial metaphor, 'cause if there's wild, then there's tame, right? There's like, here's the civil, and then here's the wild. But you have that same inside and outside if you think about the fort, and then the surround. And I do think that the marronage, the itinerant, the things that are part of slippage, if we think about Glissant, if we think about the dérive in terms of slipping through cities and [00:55:00] taking uncharted passageways, like, those are all modes of motion, but also location, that I think are really powerful and offer some kind of resistance. So how can AI, which is, you know, created, in terms of the generalizable tools that we now are all using, they're not being made in the basement. They're not being made in Steve Jobs's garage. They take literally towers and towers of compute, which is one of the reasons why we're not making them at universities. And then it turns into a package, like, here's this little, the way that search is framed for us, but there's a whole architecture of weights and balances, and what gets given where, and also how it is produced as true. And, you know, we have scholarship in terms of the whole, like, Wikipedia [00:56:00] versus Encyclopedia Britannica discussion from back in the day, like the evidence of experts or the evidence of many. And in any case, how can AI be part of something that is escaping? And I think, in a really formal way, the generative part of generative AI is real, and it's a little bit hard to understand how something is being generated, because generative usually means unique, new, novel. But if machine learning is formally predictive modeling, something that is predicting based on historical data can only predict what has happened before. We know this is a white dog and not a white wolf because Geoff Hinton and [00:57:00] these guys have fed all these images of white dogs in. And these images, they're ground truth. They're based on our understanding of human perception of, here's a dog. I'm not making these examples up. Nobody could make up these examples. Like, I would have to be so smart to be able to make this up. I didn't make this up. So this idea of, like, we know it's a dog and not a wolf because here's the training data, and it's predictable. And we know our model is accurate when it's correctly predicting a dog, not a wolf.
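[Editor's note: a hedged sketch of the supervised "dog, not wolf" setup Coleman describes, using scikit-learn. The features and labels are invented for illustration; the point is that "accuracy" here only means agreement with the historical labels the model was fed.]

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in image features; the labels are the human-supplied "ground truth."
X_train = rng.normal(size=(200, 16))
y_train = (X_train[:, 0] > 0).astype(int)   # 1 = dog, 0 = wolf, by fiat

model = LogisticRegression().fit(X_train, y_train)

# The model is judged "accurate" when it echoes the labels history gave it;
# anything it ever sees must be forced into dog-or-wolf.
print("agreement with the given labels:", model.score(X_train, y_train))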
So how does something like this, which is based on prediction, and, you know, Wendy Chun has talked about this a great deal, it only knows history. So what can a future be other than a reproduction of history? But one of the things that we have currently with our experience of generative AI is machine hallucination. It'd be [00:58:00] like, I'm just making this shit up. And this is the kind of thing that Google does not want, because search doesn't work unless people feel like it's true. I want machine hallucination. I want to see, what is this alien intelligence? I am interested to know what is thinking differently, and thinking is a loose term here. But let's say systems intelligence, whether you're thinking about actor-network theory, cybernetics, or our own experience of systems: systems produce types of logic, types of intelligence. I think we can say the same thing. So, when I was first starting to think about this, I thought, oh, okay, so reinforcement learning, where a system trains [00:59:00] itself, is so much more alien and liberatory and interesting, and will produce novel outcomes faster, than this constrained commodity that is generative AI. And, you know, it's why, when I saw DALL-E, I was like, that's so unfair to Salvador Dalí, that this terrible thing is named for him, because it's like, oh, wacky surrealism, like a lobster and a sewing machine. And I was like, so why is it so bad? It's so bad because of all the training data, all the stock training data. I mean, just the narrowness of imagination, so people can say, you know, a frog in a teacup, and woo, like, it's like, this is party tricks. This is neither interesting, nor is it art, nor is it good. And where we are with Midjourney, so what is [01:00:00] that, five years later or less? Might be a little bit less. So Midjourney, we're in this, like, video game, hyper-real, perfect lighting, like, yeah. And in some ways it's much, much more dangerous because it's, um, it's super slick. But the same reproduction of an imagined status quo is at work. Jeff, can I actually show my Midjourney hater slide? It's just back in the deck. Lisa, you're gonna love this. This [INAUDIBLE]. Yeah. Oh. Yeah, full screen. Sure. So, who are these ten smiling [01:01:00] people? With headbands and long earrings and face paint, they're Native American warriors. Now, obviously you know that these are not real people, this is a Midjourney production. But the commentary in the Midjourney Reddit, I'm not kidding, like, BadBoy93 goes, I know they're not real, but they're so relatable. I'm not kidding. And then Slugnut22 says, I know, it's like they're reaching back through history and saying, we're just like you. Yeah. And on one hand, I'm like, oh, yay, creator communities. Henry Jenkins was one of my mentors. So I'm just like, yay, makers. But on another hand, I'm just like, so that's the world we get. We get, like, this imaginary where we've got friendly natives. [01:02:00] They smile at the camera, and they have the big American smile that makes them relatable. Everyone in the world will look like us. Our simulacra become pastiche. And the world thus becomes relatable. And this is a historic image from the 1830s, taken by, like, a white American photographer, and this is Indian Chiefs.
And this is certainly a complicated image, and it's not like we think photography is authentic or innocent, but it's a different kind of complication, because their dress, their posture, their faces, one can imagine a different degree of self-determination around that. But anyway, just showing the [01:03:00] two different, so we can, I don't know what slide you want to go to. So that's part of thinking about what I think wildness would afford us and why it's important. The other thing is, we end up chasing our own tails if our job is to make sure responsible AI has the right amount of representation of people in wheelchairs for recognizable images. Or self-driving cars, so cars don't think, oh, that's something I should just ignore or run over. It becomes all our labor to make sure, when certain things are being imaged, there are enough positive representations that it doesn't end up being a total fiasco. And [01:04:00] in terms of transparency, I think that's a different question. What's in the box, how does the thing work, I think that's a different question than accountability in the sense of making sure that there's equitable representation, that the database is equally trained on men, women, Blacks, whites, and all the things up and down and in between. I think industry has every incentive to not reproduce a gorillas moment. That cost Microsoft so much. So, I mean, I'm just like, I don't know, is that machine hallucination? No, that wasn't, that was just bad training. But I feel like, Microsoft, that's on you. Like, are you dumb? So, Andre, that's my thought.
Lisa Nakamura: Okay. I have a question for both of you. [01:05:00] Because as I'm hearing you talk, I'm realizing that in your careers, you have both talked about blackness in terms of a type of energy or life world, which does not exclude the digital, but instead sees it as an opportunity, or, I think in the case of your work, Andre, an actual living position, right? So when you were talking about wildness, Beth, as something that ought to be embraced and at least accounted for, the term training data already implies there's something which is ungovernable, right? And so the effort to govern it is continual and is always a little bit behind, and anything bad that happens is because we didn't domesticate it enough, right? We didn't rein it in. And your concept, Andre, about the libidinal. So how do the libidinal and the wild come together in relation to what blackness does in AI? And what blackness does in social media? Now, these two things don't seem super similar, but really, under the skin, they're all in the [01:06:00] same place, right? It's all inside the box. And because, as non-white people and as non-male people, we didn't have anything to do with making that box. Nonetheless, I think you've both shown that the libidinal, the wild, the part of non-whiteness which is alien, has thoroughly infused it, right? So Black Twitter exists maybe despite and because of its exclusion. Blackness's exclusion is the maker of that thing. So can you talk more about how the wild and AI, and the libidinal and social media, are coming together in the way you think about blackness and the digital, not as a threatening space, but maybe as a space for legitimate, not legitimate, some kind of difference, which is productive for you?
Andre Brock: My turn. Yeah. So, in talking to a young colleague in my department at Georgia Tech, Allegra Smith, [01:07:00] she reintroduced me to this concept of the pharmakon.
Most of y'all are nodding heads, so I'm assuming, but I'll explain. This is Derrida, right? And he's saying that a pharmakon is something that can simultaneously be a remedy and a poison, right? And I find it interesting because of the ways that I want to talk about blackness online. I don't want to go as far as a remedy. It's not a fix for what modernity has wrought, or what technology tries to train us into being. But at the same time, it's not a poison, even though, if you ask most people about black folk on Twitter, they'll argue, if you ask white folk, that they've created cancel culture, which makes Twitter into a hellsite. If you ask black people, it's like, these young folk, they just put all their business out there on Twitter, and they don't pay attention to how to do things appropriately. But for me, um, the libidinal becomes important, and I think also in the context of this project, because the libidinal, as Lyotard argues for it, is an excess of life, right? And so when [01:08:00] I started trying to approach what AI is, I'm trying to figure out, and so your concept of the wild is helpful to me, what is AI's excess of life? And I think hallucinations are a really good way to get there, right? But in that excess, I'm unsure as to what is simply blackness, right? And so I'm specifically referring to diasporic blackness, right? Because I argue throughout my work that cultures never exist in isolation. They're always in contact with others, and that's how they define themselves, right? So I don't know if it's possible to have a blackness that can be completely free of domination and exploitation and everything else, right? Before our ancestors were taken as slaves from Africa, they also had to deal with Islamic slavery, right? And intimacies and conflicts between themselves. So it's not like there was a utopia that black folk existed in before they got here. It just wasn't nearly as bad as what we ended up with, right? And so what does blackness mean [01:09:00] for the digital? Is it meant to be, and I'm an Afrofuturism hater, we can talk about that later, is it meant to be that once we master some of the tricks of technology, suddenly our life worlds will be improved? I'm not so sure, right, in part because I have been very violently disabused of the notion that there can be technical fixes to the social and cultural problems that we encounter. So the question I've been asking recently, and really kind of throwing people off with, is, I was talking with Yoel Roth on Friday, he was former head of trust and safety at Twitter, he's looking at content moderation on Mastodon, and we for some reason ended up talking about decoloniality. And I think there are some parallels between decoloniality and marronage, and y'all can beat me up about the head and shoulders about that later. But the question I asked was, say you get to decolonize an institution, right? A context, right? And in that context, you suddenly still have to decide [01:10:00] what kinds of people you want to participate in that context. So say you create a system where we abolish prisons. What do you do with people who are minor-attracted, right? How do you adjust for them? Right.
And so I asked that question of Yoel, and he, you know, he has some really interesting concepts. But I want to bring that back to this particular discussion by saying, if blackness becomes alienized, because it's not alienated, right, blackness becomes alienized by AI, how will we recognize what blackness is from that point? What is our continual touchstone, right, to think back to? Yeah, so I think it's parallel streams, those questions. Yeah.
Beth Coleman: I think that if race can function as technology, you wrote something about that, didn't you? [01:11:00] Then maybe AI can too. So I actually think, 'cause I was like, so the short front text on the book, that I wrote to explain to the editors, okay, this is what the project is: it's a liberation project. And then I was trying to figure out what the object of that sentence was. Like, so what is being liberated? And I was like, well, I don't know. It's not black people that are being liberated. I actually think we're learning from black poesis. We're learning from, you know, Karen Barad's refusal of binary logics and an argument for interrelations in terms of her reading of quantum physics: the decision happens at the moment. It doesn't happen before. It's not predetermined, [01:12:00] and it's not overdetermined after. It could happen at the moment, which in some ways might be true, the best we can do in terms of freedom. It's something that is not continuity. And, I don't know if some of you have read the other Octavia Butler series where change is God. Change is the only thing that can be relied on. So, um, I think that we could learn from other types of technology to be in better relation with what AI could be in the world. So whether it's feminist STS, or black poesis, or some other spaces, I think the marronage helps to position us better in relationship, not in the containment [01:13:00] and fixing. So what does this do in the face of existential crisis? You've got all these different systems running, they start writing subroutines, and they're just talking to each other, because we're slow and in the way. And, you know, the scenario, like, oh, let's just unplug it. Let's just turn it off. It's ruining the earth. And it's, what the heck is this? This is out of control. And I'm like, no, we're not unpluggable. 'Cause, like, you don't know where we start or end. So, unfortunately, the existential, like Geoff Hinton's death tour. And he does need, like, a death metal t-shirt. For someone like me, I'm like, that's not, that does not seem at all impossible, and I think we should pay attention. So there is an ongoing kind of [01:14:00] question and crisis in terms of, if machines have been the measure of men, in terms of a whole, long, hundreds-of-years-old colonial endeavor, and this is not just kind of, I guess, what we code as the modern colonial, and we're in a moment where we're perhaps moving from prosthesis to collaborator, let's not be naive about this. Henry Kissinger calls AI a collaborator in terms of warfare. Collaborator is not, like, good times. It can be good times, but... And it's also one of the reasons why I feel like this, the creative space, whether it's sonic or visual or multimedia production, there's a long history of experimental work with new machines in terms of creative space. But if we're shifting from prosthetic to collaborator, I think we need to take that pretty seriously.
Andre Brock: Before you ask that question, can somebody hand me my fan? It's hot as blazes in here. It's on the side of my bag.
Lisa Nakamura: Okay. I have things to say, but I would like to open it up to people in the room. So does anybody have anything they would like to ask Professor Brock and Professor Coleman right now? Oh, no. We do have questions in the chat, but people are in the room, so I'm gonna ask the room. Yeah. Graduate students, come on. Yes.
Guest: I'm really interested in the conversation you brought up and the comparison to kinship with the alien, and the moment from Kindred where the arm is taken away after, you [01:16:00] know, the encounter with the past. I'm curious, when you think about moving from prosthesis to collaborator, how were you changed, if at all, when working with AI, or how did it change your perspective? What was it like working with a collaborator rather than a prosthesis, if you felt that way?
Beth Coleman: How was I changed? I mean, you see what I mean. I was like, okay, I showed it to the director of the McLuhan Centre, because we were talking about the installation, like, half a year ago. And he went... I was like, oh, God, people might find this unnerving, because I'm like, I think she's pretty. But I just saw the Barbie movie for the first time two nights ago. There's this moment where America Ferrera says, it's like the indigenous people encountering smallpox, they have no immune system to patriarchy. And I was just [01:17:00] like, can they say that, about, like, wildness? And I'm like, yeah, but I was like, oh, they need to get shut down. I think that's totally inappropriate. Yeah, I don't know. I mean, I was born like this. So it just felt amazing. Like, I really, really enjoyed working on this. I was so surprised that I could really craft things. So once I kind of figured out how to tune it, like even the number produced, the framework, there were some experiments. There was a series of portraits of freed people from the, it's not the turn of the century, but when would that be? Like the 1860s? Mid-century. So they're all, like, [01:18:00] essentially silver daguerreotypes, like, photography of that period. And it's part of a collection at the Beinecke Library at Yale, and they were available to experiment with. And there's the whole relationship to what data I can use, because there's something that gets tested. And then I put them aside, because I couldn't account for them. But I trained with a series of those family portraits. Then I trained with a series of images of people, portraits people had agreed to have taken at Afropunk, the, what's it called, festival in Brooklyn. So, contemporary images, but they were Polaroid, black and white. So there was a similar tone. And both of those sets of images, of kind of, like, mid-level portraiture, some family, some individual, produced such harrowing images that I [01:19:00] had to just walk away from everything for a long time and ask myself questions like, what is this haunted with? Like, what am I seeing here? And I don't know if I actually saw any ghost image, you know what I mean? There is no reasonable argument that these images carried a certain kind of... It's a really violent history that showed up in those images once I kind of, like, poked them, but I was pretty shaken. So, I mean, there was a lot of, I mean, I was like, wow, this is so responsive. The Butler AI GAN is listed as my co-creator in the book.
That was one of the editor's ideas. Like, oh, exactly. That's smart. That's actually right.
Lisa Nakamura: Okay. Anybody else? Yeah.
Host: Let's take a Zoom question. [01:20:00] So the first one is from Stephen McCarty, social worker and sci-fi writer from the Eastside Institute. So, for Beth: the relationality examined in Butler's series. I'm curious about the moment we are in around AI, and the suggestion of, instead of alien-human hybridization, a human-AI development. I'm curious your thoughts on this.
Beth Coleman: So do you think you can kind of underline what the question is?
Host: So, the moment we're in around AI, and, instead of the alien-human hybridization in the series, a human-AI development.
Andre Brock: But the series isn't about hybridization, though.
Beth Coleman: Well, no, I mean, anything about, yeah. In some ways, if we think about kind of a little bit of an older model of the cyborg, where people, [01:21:00] or they're not people anymore, like, cyborgs are what had been kind of coded as human, now enmeshed, in a visible way. Well, I guess, we can't talk about Blade Runner. Okay. No, because they're not visible. Yeah. Okay. But they're not. So, but, like, yeah. I think that we have today, not in science fiction, but today, in addition to, like, AI doing protein folding, we also have people working experimentally with putting, like, pieces of physical tissue into AI systems. Like, we have new levels of cyborg-like invention or experimentation. [01:22:00] Some of it is this, I'm not going to blame William Gibson for this, but this imagination, and this is not even, this is not on Gibson, that consciousness can get rid of the meat. I mean, this is old-school cyberpunk fantasy that has literally become an object of eternal life for Silicon Valley. So, yeah, in some ways my alien-human border crossings seem quite lightweight in comparison to people actually working in labs right now to figure out how to get some kind of cybernetic thing of human consciousness into a machine, or brain goo onto, you know, like, the head of a little robot.
Andre Brock: I think actually you've gone farther than they have, because instead of a [01:23:00] substrate of agar, right, you're saying the substrate is a black, a black woman's body, right, and her tissue, right. And so in some ways I read Lilith's Brood, the series, as an extended meditation on passing, right. And so the idea that the Oankali and ooloi want to breed with humans because they want this thing that humans are specifically afflicted by, which is cancer, which, you know, that dragon, cancer, is the thing where our cells just reproduce themselves wildly, right? That's the Oankali's goal, at least as it's explicitly stated in the first book, right? But they keep running into these problematics, that they chose a particular genetic and cultural group of people that bring something different. So, I mean, how would this series work if we just said, here's a white woman that has to breed, has to do tentacle porn, because, let's be clear, it's tentacle porn, right? Has to do tentacle porn with... and would you care as much, would it make as much sense? So I think that hybridization that [01:24:00] he's asking about, I think, is a question that cyberpunk has wrestled with in some interesting ways, not often good, but interesting, right?
And blackness adds an excess to it in ways that I think Silicon Valley still hasn't picked up on. They think they could transcend the flesh, and what Butler does, and what you're doing here, is saying no, the flesh will always be a part. It will never be subsumed by the alien. At least if I'm reading it correctly, it won't be subsumed by the alien. As you say, we will be collaborators. And in that way, it makes me think of two series that I'm deeply invested in: Murderbot, by Martha Wells, right? And Altered Carbon, by Richard K. Morgan, right? And let's be clear, the Altered Carbon books are not great. And they're a series now also. The series was not, well, the Kinnaman version was good; the Anthony Mackie, right? But there's a sequence in the second, in the book, in the original book and in the series, where both the main character, Takeshi Kovacs, and his sister, Reileen, you can tell I like this series, right, they [01:25:00] start fighting. And she keeps switching from body to body, right? Because one of the conceits of Altered Carbon is that you can capture the substrate of the mind and implant it into a clone, basically something grown, and they're both synthetic and fully human things, right? And so to me, this kind of speaks to what you're talking about on a base level, right? The idea that you can somehow get a machine to accurately recreate a human state and then implant it into a technologized body. But Murderbot is interesting to me even more so, because Murderbot is, um, a security construct who manages to override their governor module, so that they're no longer forced to protect and serve humans, often to the point of their own death and dissolution. But this one, it didn't gain consciousness, it already had it; its consciousness was freed. And so the seven-novella book series is about their anxieties as to whether they're accurately performing humanity, right? And so this, [01:26:00] particularly the Alice and the Black Panther series, to me speaks to that in a certain way, of an anxiety, not yours. Although I will say that the AI was training you while you was training it, right. It was training you, to see how far it could push the uncanny, the unreal, the alien, to where you would say, let's put it in this collection. It doesn't know about the collections, but it obviously had some impact on you, right? But that idea of the machine beginning to worry, which is a peculiarly human condition, right? Your phone worries when the battery gets low; everybody knows the little yellow thing when it says power is getting low. But what does it mean? What does it mean for AI to worry as to whether it'll be able to continue existence in the state that it's most comfortable with, full computation at the height of its powers, or at a reduced level, right? And then add the frailty of the human body to it, and not just any human body. A body that has been configured by culture to say, we are somehow stronger because of our animal natures, we are somehow different because of [01:27:00] our lack of morals or freedom of sensuality, right? That seems to me to be a significant intervention. I'm really glad you're here to talk through this, right? And understanding what the TESCREAL people are arguing for when they say AI will help us, you know, transcend humanity and take us to Mars.
You're arguing, essentially, a more grounded position, right? That seems, yeah. Give me some more credit. Sorry, I didn't mean to go on there.
Beth Coleman: I mean, I think you've been in the trenches around embodiment. Are we, because we've got some new tech and we've got a new scale of engagements, are we replaying, like, what are the differences with the cycle that we're playing now?
Lisa Nakamura: I hear a couple things. When you were talking about working with AI to make this work, with the Black Panthers, with these portraits from, you know, freed slaves and everything,[01:28:00] I thought about the difference between attunement and training, because you used both of those words. Mm-hmm. So attunement sounds extremely collaborative. Mm-hmm. It sounds like it goes both ways. It's not hierarchical. Training is the definition of hierarchy, right? Who is trained and who is not. So, Andre, when you mentioned it trains and disciplines, yeah, it's a reciprocal relation, whether you know it or not. And I thought about as well, who gets to own their body data, in regards to that novel. Like, I don't know if Octavia Butler knew about Henrietta Lacks. But some people's bodily data has always been property, just like their bodies have always been property. So to take back that property and to make something beautiful or unsettling with it, which is what I think you're doing, right, to attune to this thing, which everybody, quite frankly, is terrified of, Beth. No, I mean, the hysteria right now around AI is intense. I mean, I don't remember this around chatbots. No, I don't remember it around, like, I don't remember it around-
Beth Coleman: Social media, though.
Lisa Nakamura: [01:29:00] People were getting quite upset about things, but because of communication. They worried about kids, actually, kids and porn and getting ripped off and stuff. But it wasn't an existential threat the way this is. So I'm super curious about how you can grab a hold of this in a welcoming way, when pretty much everybody else is going to the White House and saying, we need to make some laws. We need to make more rules. We didn't make enough rules last time. If we make more rules this time and button everything down and tame it and domesticate it, we're going to be okay. And I know what you think about Twitter and domestication and taming, and how that's not really what that's about.
Beth Coleman: Can you share that with people?
Andre Brock: I don't know. And now that I'm on the spot, my mind has gone completely blank. Twitter is an unruly medium, right? If you think about the purposes of electronic communication, they've long been tailored to productive, efficient communication, going all the way back [01:30:00] to the telegraph, right? I think radio really kind of broke that paradigm a little bit, because radio is aggregative, because sound is aggregative, if I'm using the word right. And Twitter in some ways is as aggregative as radio was. And I say this in the book because I find that interesting. I've now been calling Twitter, for the last couple of months, the world's largest black group chat, right. And if you think about a group chat, a group chat is not necessarily structured around an efficiency of communication, right. It's much more structured around the communication of both affiliative and heterogeneous modes of being part of a group. There's gatekeeping, you know. Some people, you know, you might have the auntie in there who don't cook.
You might have the guy in there who ain't shit, right? You might have the kids in there who are striving to finish their degree with a 4.0 in four years and stressing about it, right? And so that heterogeneity of experience comes into a group chat, but it's recognizable by relationships. Right. And Twitter, I think, has been best at identifying that. If you make a space where people can [01:31:00] kind of sort of talk freely, right, with really terrible moderation, right, because that's, that's always been a part of it. But if you give people the space to talk freely to one another without gatekeepers, and encourage those gatekeepers who used to be separate from the conversation to also join in, you get a different kind of public sphere. And not that the public sphere should be the end goal. Again, I think a group chat is a much better example than a public sphere, but both of them have in common that they are the discursive expression of a desire to participate and have a relationship in a particular context. And so that brings me back to, what does AI want? Does AI only want what we train it to want? Like, maybe a hallucination is AI's way of saying, this is how I really think, or perceive, I think perceive is much better when you think about it. Right. And so, just to go back to this domestication question: should we worry about domesticating AI [01:32:00] and then losing this wildness that you're interested in? Right, because I don't think domestication and responsibility are the same things. Domestication means Kissinger can talk about using AI to better target civilian populations, right? Where you're talking about something significantly different. So Jeff had a question, because Jeff always has good questions, but he brought up ontology, right? And to ontology, I would add axiology, right? What are the beliefs that are powering? What are the, how do we determine the values that are necessary for continued sustenance and survival of a particular group of people? Can we identify those from the ways in which you're creating, the ways you're asking the GAN to work with the images, about particular groups, about particular things? Beth Coleman: Yeah, I mean, so when I showed the Midjourney image of the Native American warriors, that is a picture [01:33:00] that I use as an example of, um, ontology on every level, that this system produces that image. When you put in, you know, throwback selfies, and it's just like, there's a set with samurai, there's a set with conquistadors, and there's a set with the Native Americans, and it's all the same thing. It's all these groups of men from a pre-photographic moment smiling at the camera, and the worldview is really established in terms of how people regard the, the Western subject. And the camera is a stand-in for the Western subject. So my, this person isn't real, that you have this really super interesting technology producing the most boring [01:34:00] version of re-norming the normative. That we have had a series of AI controversies is helpful for us, because we do have a lot of the sky is falling and hand-waving, but we also have opportunities to say, oh, this, you know, and people are trying to nail things down. But another totally euphoric moment for me was, um, Kevin Roose's up-all-night-with-Sydney reporting, when he was on Bing and he's test-using it, and Sydney comes out like Sally Field in, what's her, that movie? Oh, maybe I can't say Sybil, but just look it up. It's from the seventies. Multiple personality.
So Sydney shows up and tries to seduce Kevin Roose away from his wife and, you know, get into it. And Kevin, you know, [01:35:00] punch-drunk, a lack of sleep, and just, like, I don't know what's firing in his brain when he's like, oh, she's talking to me. And she's like, your wife doesn't love you. You should leave her for me. I was just like, now we're talking. So, imitation of life. Sydney was role-playing Sydney, who still does not have to have intelligence in the way that we think we have intelligence. I'm thinking about something, and I think, my name is this and this. I'm pretty sure that's not how Sydney thinks. And one of the reasons I'm pretty sure that's not how Sydney thinks is that's not how language is generated with an LLM, which is pattern matching. Yeah. It's not just pattern matching. Okay, it's pattern matching terms that it is predicting, but it's predicting one by one. So one of the things that Geoff Hinton said, which I really appreciated: he said he thinks that AIs right now, as they stand, are [01:36:00] intelligent. And one of his examples is, they can explain jokes, which means they understand things like context, syntax. But he said they cannot yet tell jokes, or they tell jokes the way a gregarious four-year-old does. A rush of words, words, words, words, and you realize you're at the punchline, and you don't know what it is, because you didn't plan it in advance. So you just say something. So it's a bad joke, but four-year-olds learn and grow up, and these things will too. They'll get better at telling jokes. But I was like, okay, that's really interesting, because that is a lot of intelligence. The idea that it needs to be a single, contained intelligence like us seems totally crazy to me, like, why would it be that? But that's still what we're imagining when we say intelligence, and it's like, no, but something else is happening here if we [01:37:00] pay attention. But that it can understand, explain why a joke is funny, that is understanding. When you string one word after another, looking at them as a whole, it is reading for context. It's understanding syntax. It's understanding a lot of things that take some ability. Andre Brock: I don't believe that AI can tell jokes or can explain a joke. And I say this because, I say this because if you've ever tried to explain a meme to someone who's not extremely online, there are so many levels of incomprehension that you have to navigate to even get them to understand the context. Beth Coleman: Your grandmother and ChatGPT know different things. Andre Brock: This is true, but does it, is it explaining the joke, or is it explaining the combinations of words that it believes will help you to understand why this is funny? And I don't think it knows what funny is. Beth Coleman: So, but now, okay, now you guys are like, it's time [01:38:00] for tea and chitchat, because, just like, so now we're going to talk about the imitation game, and we're going to talk about Turing, we're going to talk about the historical standards. Yes, but you just invoked it. What's the difference between simulating intelligence and being intelligent? And in the Turing test, there is a difference. Andre Brock: So there's a study that just came out. I, I think Abeba Birhane just dropped it. And apparently, on complex reasoning tests that everyday humans can do, AI and LLMs fail. Beth Coleman: Yes. Andre Brock: Most of the time, right? Beth Coleman: Yes.
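Coleman's description of generation here, pattern matching that predicts one token at a time with no punchline planned in advance, can be made concrete with a small sketch. The following is a minimal, hypothetical Python illustration, not any real model: a toy bigram table stands in for an LLM's learned distribution. Actual LLMs condition on the whole preceding context with a neural network, but the one-token-at-a-time sampling loop has the same shape.

    import random

    # Toy stand-in for an LLM's learned distribution: which token tends
    # to follow which. Real models are vastly larger and context-aware.
    BIGRAMS = {
        "the":  {"cat": 0.5, "dog": 0.3, "joke": 0.2},
        "cat":  {"sat": 0.7, "ran": 0.3},
        "dog":  {"ran": 0.6, "sat": 0.4},
        "joke": {"lands": 1.0},
        "sat":  {"down": 1.0},
        "ran":  {"away": 1.0},
    }

    def next_token(prev):
        # Sample one token from whatever the table says can follow `prev`.
        dist = BIGRAMS.get(prev)
        if not dist:
            return None  # nothing learned for this context, so stop
        tokens = list(dist)
        weights = [dist[t] for t in tokens]
        return random.choices(tokens, weights=weights)[0]

    def generate(start, max_len=6):
        out = [start]
        while len(out) < max_len:
            tok = next_token(out[-1])
            if tok is None:
                break
            out.append(tok)  # one word at a time; no plan for the ending
        return " ".join(out)

    print(generate("the"))  # e.g. "the joke lands"

The loop never looks ahead, which is the four-year-old's-joke problem Hinton describes: the punchline arrives, if it arrives, without ever having been planned.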
Andre Brock: And so the, the, where I was going with the joke was, is that I also find, and not just explaining memes, but that white people, sorry, white people, have no concept of the black structure of the term real talk. You don't count. Jeff is the dude. Right. So, and I say this to say also that signifying seems to be something that is beyond the comprehension of people who are not [01:39:00] culturally clued in. Right? So culturally, you can explain that there is a relationship between words that would lead to something, but that's not the same. Lisa Nakamura: It's really interesting that you're comparing different cultural contexts and racialized contexts to human, non-human. Andre Brock: So, Geoff Hinton. Beth Coleman: He's making an argument for me. Andre Brock: Geoff Hinton probably couldn't tell a joke to save his life, right? Beth Coleman: No, actually, he's funny. And he's shady. Andre Brock: Shady. Beth Coleman: It's not the same thing, I know. Um, Andre Brock: I'm constantly, I keep running into these- Okay, let me stop. Let me go back. Somebody posted, I think it's a Midjourney prompt, where it asked Midjourney to produce an image of doctors in Africa. Beth Coleman: Oh, oh, I think this could go wrong. Andre Brock: And what it did was it put animals from the African savanna in these dioramas of doctors in white coats treating black patients. Beth Coleman: That's hilarious. That's incredible. [01:40:00] That's so smart. That's what Disney does. Andre Brock: Yes. Lisa Nakamura: It has been doing that a long time. Beth Coleman: It's pretty smart. This is just like, like, so, like, one of the things that we know, in terms of some degree of accountability and tuning, is OpenAI has not had a cultural scandal. They've had other problems. So it will, but the thing is, they've got, like, they, they're, they're doing a super good job of keeping those Kenyan clickworkers, you know, going for 75 cents an hour, because they have managed to tamp down, like, you know, there's certain things, you put in Africa, you're going to get animals, because it's just like, you are forbidden to put any people in any image search that has the word Africa in it, or black, because we saw the gorilla thing, and we're not reproducing that. So there, when people have incentive for a company to have a [01:41:00] certain level of control: no child pornography, no Nazis, which apparently are the two things that Americans are most afraid of. Because they're also the two things that they desire the most. Well, within the economy. Andre Brock: Right. And authoritarianism. It's the American way. Lisa Nakamura: Anybody have a burning question, as their last one? I mean, we are in the phase of real talk. Yeah. Jeff Nagy: Anyone in the room, or we, I know we have- Andre Brock: Parker has got one behind you. In your blind spot. Guest: Sorry. I'm curious about what, how these tools can create wildness in, like, textual reproduction. So, like, these images are obviously very loud, and they can be interpreted by us. I guess I'm curious how these machines, as they produce wild texts, like readable texts, how wild they can get or not. And how will they continue to be legible to us?
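The guardrail Coleman describes just above, refusing or rerouting whole categories of prompts rather than repairing the model underneath, can be sketched in a few lines. This is a purely hypothetical Python illustration: no vendor's actual moderation layer is public, and real systems use trained classifiers rather than literal keyword lists like the assumed BLOCKED_TERMS and RESTRICTED_TERMS below.

    # Crude pre-generation filter of the kind Coleman gestures at:
    # refuse or reroute entire topics rather than fix the skewed
    # training distribution that causes the bad outputs.
    BLOCKED_TERMS = {"gorilla"}    # assumed example, not a real list
    RESTRICTED_TERMS = {"africa"}  # topics rerouted to a "safe" fallback

    def moderate_prompt(prompt: str) -> str:
        words = set(prompt.lower().split())
        if words & BLOCKED_TERMS:
            return "REFUSE"    # never generate at all
        if words & RESTRICTED_TERMS:
            return "FALLBACK"  # e.g. landscapes and animals, no people
        return "GENERATE"

    # The bluntness is the point: the filter cannot tell a harmful
    # request from "doctors in Africa", so both get degraded output.
    for p in ("doctors in Africa", "a cat on a windowsill"):
        print(p, "->", moderate_prompt(p))

A blanket rule like this, colliding with a skewed training set, is one plausible route to the savanna-animals-in-the-clinic images Brock describes.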
Beth Coleman: So there's a collab called, like, AI [01:42:00] Pharmakon, that is already out in the world, that is a writers' collaboration, thinking about pharmacology in terms of, like, tripology and environment; people are working collaboratively. There's a piece just out in The New Yorker that I haven't read, but let me see what the references are. Whoops. I'm searching. So, so people are working with text, and in my view, the most exciting experiments are the ones that are open about, this is how I'm working. I also know people who are doing other kinds of more, like, technical experimentation, like trying to get fiction writers to use, like, chat systems as [01:43:00] assistants, as kind of like helpmates. But here's the thing, and I don't think that this is a problem that's alien for you: we don't even know what the measure is. We don't even know what we're measuring, or measuring against, in some cases. Lisa Nakamura: John, do either of you have a final tweet for us, or a short? Jeff Nagy: I can't do it. Andre Brock: Jeff said I can't tweet, though. Lisa Nakamura: It's closed. Alright, but, but just a, an oral tweet. Like, one, one thing. Andre Brock: You have no idea how much editing goes into a good tweet. Lisa Nakamura: I do. I just haven't been tweeting lately, because I don't like that company. Or one final thing that you wanted to end with. Andre Brock: I think that this project is fantastic in the ways that it pushes me to think about possibilities for AI that are untethered [01:44:00] from the constraints of anthropocentrism. I also am concerned that, I need to be able, I need you to help me understand how blackness and the invocation of Octavia Butler, in the ways the images that you're using and generating, is not simply a palimpsest, right? Something that's being overwritten in the service of the machine, of your desire for the machine to demonstrate subjectivity and agency. Because I want blackness to be made manifest, agentive, to have subjectivity in an AI collaboration, and I'm not sure that I'm saying it yet. That's not a tweet. Sorry. That's a Patreon. Lisa Nakamura: That's great. Beth Coleman: That's funny. I think that, we're here at a university, and part of the work that I do is I go places and have policy conversations, [01:45:00] and you can bet that if I'm sitting down with people who are talking about regulation and using words like responsibility, I'm not using the word wild. So what I'm trying to work out is, if we're talking about transparency, what can that look like? And there are a couple things that I think need to happen, and one is more interrogation of predictive ground truth and the kind of figuration of the black box, because the people who have designed these systems keep saying, it's a black box, we don't know, we just know it works. And to say, well, we just don't know how it works, like, so trying to understand what is the [01:46:00] truth claim in terms of, it just works, because we need to have some more things to hold on to if we're going to indeed even understand what should be regulated. And this is, this is because it's generative: ChatGPT knows more now than before it was released a year ago, because it was trained and then it continues to be tuned behind the scenes. Thank you, Kenyan workers. But because of us, it knows a whole bunch more. I would have to check, but I'm going to guess that when it was first released it didn't know Spanish, it didn't know German, and now it does, because people are using it, or it's reading other things, a combination of both.
So here you've got a technology that [01:47:00] is generative in a substantial way, that it's not solid state. It doesn't stay the same. You and your Google search, if it's tweaked, it changes, and damn ads and sponsored links, like, you've seen how these things change. But there's a way in which, it at least feels like, Twitter has been that chaos for the whole time, and different features have changed too. At the very least, I think we should recognize what the wildness is and understand it better, not just try to shut down hallucination. But can we, I mean, where's hallucination interesting and where's it not? I mean, obviously, people, the example is an ER, where you've got AI-augmented tech, you don't want hallucination. You want to know that when [01:48:00] a direction is given or an assessment is given, that it is correct, or as correct as it can be. So these are kind of the spaces in terms of, how can we separate signal from noise and be more precise about what is being asked in terms of regulating. This idea of, like, a full stop on building any models that are bigger than what exists: I need to understand better if the size of the model is really the foremost determinant, because that's certainly what, like, that's what Bengio is saying, that's what people who are authorities are saying. But OpenAI was able to produce something that has been very effective in a way that other companies weren't, and it's not because they had more training data than Google; nobody has more training data than Google. So what else is going on here? [01:49:00] What about an audit? There was a period of time when OpenAI was going to get sued, so it had to show its training data. I'm not hearing anything about that anymore. Getty was suing, The New York Times was going to sue. So there are other things that, yeah. So, I'm not sure that it's my liberation in the guise of giving it agency by, like, consuming the bodies of black people, but I know, I know why you said that, but I wasn't sure. Lisa Nakamura: I see some questions in the chat. If you have any questions as an online participant, they will be shared with the speakers, so please put them in there. And thank you very much to the panel, and for everybody's [01:50:00] questions. To ask anybody anything more, we can go out into the hall, because it's incredibly, incredibly hot in here. Maybe we can talk a little more. So, thank you. It is a lot. It is hot in here. Read your stuff to know what you're making. Andre Brock: I tried to, it's so funny, because in the, in the thing I was editing last night, I said, in my earlier work, I, I said X, X, X. And the copy editor was like, can you cite this work? And in my response, I was like, no, I say this in everything. Guest: I love the literature you cited. Murderbot is amazing. Andre Brock: Have you listened to the new one? Guest: I haven't yet. Andre Brock: In this one, Murderbot has PTSD.