The humanities have long considered — and creatively mobilized — the qualitative impact of variations in scale for understanding society, history, culture, and law: consider Jonathan Swift’s Gulliver’s Travels, Edmund Burke’s A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful, W.E.B. Du Bois’ scalar projections of sociological insights in “The Princess Steel,” the French Annales school’s longue durée and the counter-scale of microhistory, the legal concept of genocide, the artistic concepts of the miniature and the Gesamtkunstwerk, the intimate scale of the smart home in Nnedi Okorafor’s “Mother of Invention,” or the superpowers of the postwar Japanese “Astro Boy.”
In considering how scale affects essence and meaning, we ask:
- How phenomena change or exhibit emergent properties as they increase in scale: drawing on examples from various fields where a model, theory, or ideology transforms when subjected to significant re-scaling (spatial, temporal, quantitative, or otherwise), how might such transformations illuminate current conversations about the emergent properties of AI systems?
- How centralized and distributed approaches to scaling differ in terms of social architecture, and what the implications of these differences are for scalability, robustness, and efficiency. Does centralization or decentralization of AI inherently offer greater transparency or democratic potential? How might we confront excessive concentrations of power under centralized scaling, or address violent and/or anti-democratic tendencies in a decentralized AI environment?
- How the scale of a system affects its scrutability, and what the ethical implications are of scaling systems to the point where their decisions are no longer interpretable by humans or commensurable with human-scale ethics.
- How historical or contemporary technological scaling has impacted societal norms and ethical boundaries, and how we might weigh and define the societal values relevant to AI governance and design at different scales (fairness, privacy, access, etc.).