Cogut Institute for the Humanities

Models-Scale-Context: AI and the Humanities

Collaborative Humanities Lab led by historian Holly Case and computer scientist Suresh Venkatasubramanian

From 2025 to 2028, the collaborative humanities lab explores three basic terms of everyday and scientific discourse (models, scale, and context), highlighting assumptions and frameworks through which AI is imagined and implemented. Questioning what these terms mean across disciplines and technological practices is an invitation to explore the modes of thinking, being, and doing that have shaped AI and could shape its possible futures.

Banner image: Still from “Pixillation” by Lillian F. Schwartz, 1970, from the collections of the Henry Ford.

About the Lab

Humanities scholarship on the background conditions accompanying AI’s emergence within the broad field of big data has, to this point, largely framed its analyses through forms of critique, focusing on the negative social and political effects of algorithms, surveillance, and the influence of entrenched ideologies on the building of structures and systems. While engaging with questions of power and privilege, the lab brings to the fore modes of thinking and methods within the various humanities disciplines, asking how these have prefigured and shaped AI, often unintentionally but with wide-ranging implications. New thinking in the humanities will need to take account of these imbrications and be brought into conversation with parallel debates in the sciences around the ethics and politics of unintended consequences.

We have been struck by the different levels of awareness and instrumentalization, across disciplines, of what is left out of models. Though modeling is viewed more critically in the humanities generally, humanists are also prone to presume that modeling is a largely unreflexive undertaking in the sciences, which it is not. Interestingly, there is a difference in the function of reflexivity across disciplines. We plan to study:

  • How models are understood across disciplines and various techno-creative endeavors.
  • The relationship between models and metaphors/analogies, theories, or ideal types/forms, and how these valences in the meaning and function of models inform our approach to large language models.
  • How to characterize the relationship between a model and the thing itself, or instances in which models take on lives of their own, and how this quality of emergence relates to the notion of subjectivity/personhood.
  • How different data and algorithmic models are understood and operate in the world, and what aspects of society need rethinking given recent technological advances such as text-to-image diffusion models, LLMs, and other deep/machine learning breakthroughs.

The humanities have long considered — and creatively mobilized — the qualitative impact of variations in scale for understanding society, history, culture, and law: consider Jonathan Swift’s Gulliver’s Travels, Edmund Burke’s A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful, W.E.B. Du Bois’ scalar projections of sociological insights in “The Princess Steel,” the French Annalistes’ concept of the longue durée and the historiographical practice of micro-history, the legal concept of genocide, the artistic concepts of the miniature and the Gesamtkunstwerk, the intimate scale of the smart home in Nnedi Okorafor’s “Mother of Invention,” or the superpowers of the postwar Japanese “Astro Boy.”

In considering how scale affects essence and meaning, we ask:

  • How phenomena change or exhibit new properties as they increase in scale, drawing on examples from various fields in which a model, theory, or ideology changes when subjected to significant re-scaling (spatial, temporal, quantitative, or otherwise), and how such transformations relate to conversations about emergent properties of AI systems.
  • How centralized and distributed approaches to scaling differ in terms of social architecture, and what the implications of these differences are for scalability, robustness, and efficiency. Does centralization or decentralization of AI inherently offer greater transparency or democratic potential? How might we confront excessive concentrations of power under centralized scaling, or address violent and/or anti-democratic tendencies in a decentralized AI environment?
  • How the scale of systems affects their scrutability and the ethical implications of scaling out systems to a point where their decisions are no longer interpretable by humans or commensurable with human-scale ethics.
  • How historical or contemporary technological scaling has impacted societal norms and ethical boundaries, and how we might weigh and define societal values relevant to AI governance and design at different scales (fairness, privacy, access, etc.).

The various disciplines and the tech world often have vastly different ways of framing “context” and its relevance to human — and non-human — activity, with wide-ranging impacts and implications. We will therefore consider:

  • The normative role of contextualization framed as an antidote to the dangers of abstraction and decontextualization; and the methods (historicization, cultural analysis, biography, political economy, etc.), nature, and limits of the presumed prophylactic function of contextual analysis as it relates to AI.
  • The need to reevaluate the nature and value of context, not least because the current disciplinary understanding of context and its significance cannot adequately answer the question of how to analyze and weigh the contextual origins of AI-generated text and images.
  • The realms (contexts) where artificial intelligence systems should not tread, or should be temporally, spatially, or operationally constrained, and how we might determine and advocate for such boundaries. For example, Hollis Robbins recently referred to ChatGPT as “not present enough,” i.e., not able to capture emergent cultural forms quickly enough, which raises the question of how “present” we might wish it to be (spatially, temporally, quantitatively, etc.), and to what ends.
  • The question of what becomes of context in the course of various mathematical, algorithmic, analytic, or other sorts of operations (e.g., statistical inference, theorization, codification); how context functions within existing legal, political, and/or ethical categories and settings; and the extent to which its historic and contemporary uses are being wielded to develop, make sense of, and/or govern AI (for example, the notions of mens rea/premeditation, majority, citizenship, mitigating circumstances, personhood, rights, privilege, locus standi, criminal responsibility, and restorative justice).

In exploring these themes, the lab will seek to wed theory and practice to facilitate research in the humanities and AI and to give researchers the time and resources to study this fast-moving field. As the symbolist poet Paul Valéry wrote in 1894, “the problems of composition and those of analysis are reciprocal.” It is impossible to separate the technical skills and practical applications of AI from the critical thinking and ethical frameworks that humanities researchers bring to questions in the humanities and the humanistic social sciences.

“Our goal is for the lab to operate flexibly and inclusively so as to enhance exchanges within an already vibrant community of faculty and doctoral students at Brown and to catalyze important new research on AI.”

Holly Case and Suresh Venkatasubramanian
