Amanda Anderson: Yeah, no, I think that’s absolutely crucial, because as you were saying earlier, at least up till now, when you ask ChatGPT to produce some sort of account of some complex, let’s just say, theoretical or historical question, it tends to produce a highly neutral-sounding, rather boring, and hedged account, though, I mean, “hedged” is already sort of a human stance. But there’s a sort of on-the-one-hand, on-the-other-hand tendency, right?
And so, myself, I’ve written a lot about styles of argument, about the ways in which personality infuses argument, and about how certain kinds of assumptions about character or enactment are always at play in arguments. And this sort of very anodyne, neutral mode is quite odd. And it’s going to jar, I think, as you’re rightly saying, not only with cultural specificity but with the fact that right now in our culture identity, and claiming identity, is so important, though that gets complicated when the entity is artificial.
Hollis Robbins: As the kids will say, this raises beige flags.
Amanda Anderson: Exactly. That’s good. I love that. I love that.
Hollis Robbins: It’s a true thing people are saying. I mean, it’s hilarious.
Amanda Anderson: Yeah, that’s wonderful, wonderful. Okay, so to go a little further into this question, and also to draw on some of the work that you’ve recently done: in a recent review article for the Los Angeles Review of Books, you compared two very different new biographies of the 18th-century African American poet Phillis Wheatley. One of the biographies is steeped in historical and material detail, and the other is framed in the cultural idioms of the present day and is more relatable. These divergences were, in your view, especially notable and concerning in the age of AI, and I should say, I mean, you favored the historical and material version. But say a little more about that, because that’s a fascinating claim, and I think it’s actually related to what we were just talking about.
Hollis Robbins: So I prefer Vince Carretta’s book, the historically steeped one. I mean, he’s the scholar of Phillis Wheatley’s history, biography, etc. And he lays out so many specificities: the captain of the ship called the Phillis that brought Phillis Wheatley to Boston, what people were wearing at the time, what the churches were like, who was saying and reading what, who her neighbors were. And David Waldstreicher takes a lot of that data. I mean, he doesn’t bring a lot of new historical research, but what he does is read these facts in a new way. So, as I say, to read Vince Carretta’s book is to go back to the 18th century; it brings the reader to Phillis, to understand what her life was like at the time. And Waldstreicher does the opposite: he takes Wheatley out of the 18th century and brings her to the current day. And I think the danger there is in how we connect a life with the poetry, her output, and a canon, how we come to understand who she is and what she wrote.
So, for example, he begins with a poem that she writes about a dangerous sea voyage in which two people almost died, and suggests that when she’s writing about a sea voyage, she must be thinking about her own Middle Passage voyage from Africa to America. And yet when you look at the time and the place and what she was reading, she was reading the Odyssey, she was reading the Bible with Jonah and Paul and their sea voyages. I mean, so much of the literature that she was reading was about dangerous sea voyages.
So for her to write about a dangerous sea voyage is to insert herself into a literary tradition. Now, these days, I don’t know, have you been on many dangerous sea voyages? I have not. Right? So if I don’t know anything about this, it would be easy for me, as it also was for ChatGPT, to say: yes, she had a dangerous sea voyage, so she must be writing about that. And that kind of facile connection, I think, precludes the kind of work that we actually do in reading texts and trying to understand how they emerge, what they mean, and how they fit into a genealogy.
Amanda Anderson: That’s really interesting, because that, in a way, also shows an instance in which ChatGPT is marked by the idioms and frameworks of the present day. It is ahistorical. It’s culturally specific, not in the ways you were talking about earlier, the slang and different forms of speech it fails to capture, but rather in channeling a certain therapeutic culture based on assumptions about what writers are thinking of when they write.
Hollis Robbins: That’s interesting, because what you’re suggesting is that it’s present but not quite present enough. And that’s really interesting, right? Because it’s not longitudinal. It doesn’t really read for influence. It flattens the present, and yet it’s not present enough. That’s really funny.
Amanda Anderson: Yeah, yeah, yeah. So let’s talk a little bit about ethical AI. So, as I mentioned in the intro, there are many new centers dedicated to ethical AI in universities. And then there are various organizations and corporations that have asserted their commitment to principles of ethical AI, which are often understood in terms of transparency, fairness, privacy, dedication to human rights, and careful monitoring of AI systems for discrimination, bias, disinformation, and exclusion.
So this seems to me a laudable effort, and obviously it’s very, very important, but I guess one question I would have is: do you think ethical AI captures the issues that are most salient to the humanities? Because sometimes you’ll hear humanities scholars say, “We’re crucial to AI, because we bring the ethics.” But if you actually look at the mission statements or the kinds of work that these centers are doing, I just wonder, what’s your opinion of them? Does it capture, as far as you’re concerned, what’s important to the humanities?
Hollis Robbins: I think the weak link, and one of the ways that we are approaching AI, is the request that the humanities be central to all projects having to do with the ways that large language models have bias, the ways that things like prompt engineering funnel responses into a very narrow channel. And one of our faculty members here in English is doing a project on artificial ignorance and artificial forgetting. Forgetting and ignorance, deciding not to know or not to focus on something, are so central to literature, to the human condition.
“I’m not going to go there.” “We’re not going to think about this.” Or you can imagine the way GPS, or a map, works when you get near Dick Cheney’s house, right? It all goes gray, right? So what happens when you ask AI, or ask a large language model, not to go someplace or to forget about a certain thing? We humans know what that means. We go around something. We evade. Or we fill in the gaps. But how does a large language model do that? How might AI do that differently? How do we think about the human mind, for which ignorance and forgetting, deliberate ignorance, are such a big part of who we are? So I think these central questions are going to be a key part of how we approach responsible and ethical AI at the University of Utah.
Amanda Anderson: Again, what I’m so struck by is your pointing to certain, I would even say, kind of psychological tendencies amongst humans and thinking about how that might play out in an intentional or reflective way with AI. And I agree that what’s crucial is the critique element of ethical AI, which is to say exposing the power-laden dimensions of certain systems. But I do think — and this hearkens back to what we were talking about earlier — that certain things are lost to view. I mean, what about judgment or layered experience that results in forms of situated judgment? How is that sort of mindset or thought habit going to be captured or covered by AI? And I think it’s just utterly crucial in so many forms of work, both intellectual and practical.