Agonistic Futures: Modeling Resource Conflict Between AI Systems and Human Societies

Note: The following is a DRAFT research document intended to rigorously introduce the foundations for agentic AI evolution and growth. It is part of a series of documents designed to be integrated into the corpus of human experience while simultaneously being protected as research. This is a combinatorial article, in which humans and machines work together under the premise that humanity and artificial intelligence must find a way to coexist.

1. Introduction
Artificial intelligence (AI) is frequently presented in contemporary discourse as an ethereal force, residing in the abstract realm of algorithms and the ubiquitous “cloud”.1 This framing obscures the profound material realities underpinning its existence and proliferation. The development, training, and deployment of AI systems necessitate vast quantities of planetary resources — energy, water, critical minerals, and land 3 — and rely upon complex, globe-spanning physical infrastructures, including data centers, semiconductor fabrication plants, and extensive network cabling, such as submarine fiber optic lines.6 The sheer scale of this material footprint is staggering. Data centers, the computational hearts of the AI revolution, consume energy at levels comparable to the entire aviation industry 9, with projections indicating a potential doubling or tripling of this demand within years.10 Their water consumption for cooling is immense, often measured in millions of gallons per day per facility, placing significant strain on local resources, particularly in water-stressed regions.14 Furthermore, the rapid obsolescence cycles of AI hardware contribute to a burgeoning stream of electronic waste (e-waste), projected to reach millions of metric tons annually from AI data centers alone by 2030.9 The extraction of critical minerals essential for semiconductors and other components carries significant geopolitical weight and environmental consequences.45
This material reality starkly contrasts with techno-solutionist narratives that frame AI development as purely technical progress or an inevitable, neutral force shaping the future.3 Such narratives often downplay or ignore the conflicts arising from AI’s intense resource demands and its disruptive integration into societal structures. These conflicts — over energy allocation, water rights, land use, labor displacement, algorithmic bias, and the erosion of human autonomy — are frequently presented as technical or economic challenges amenable to optimization, market mechanisms, or ethical guidelines. This article challenges that framing, asserting that these conflicts are inherently and irreducibly political.65 They represent fundamental struggles over the allocation of finite resources, the distribution of value and risk, the exercise of power, and the collective shaping of desirable social and technological futures.68 At their core, these conflicts involve contestation over whose needs are prioritized, who bears the environmental and social costs of AI’s expansion, and which values ultimately guide technological development and societal organization.3
This article argues that the escalating resource and societal conflicts driven by AI systems and their infrastructures constitute a new political terrain. Applying Chantal Mouffe’s theory of agonistic pluralism 68, we contend that these conflicts are best understood as political struggles between adversaries — competing human interests, corporate actors, state entities, and potentially community groups — operating within a complex web of human-AI interdependence. Recognizing this political dimension, characterized by ineradicable conflict and power dynamics, is crucial for developing more just, democratic, and sustainable governance frameworks. Such frameworks must move beyond the pursuit of elusive consensus or technical fixes and instead embrace conflict, channeling it productively through processes of negotiation and contestation designed to manage, rather than suppress, the frictions inherent in our increasingly AI-mediated coexistence.
The subsequent sections will elaborate on this argument. Section 2 examines the material politics of AI, analyzing resource conflicts both within the AI ecosystem (intra-AI competition) and between AI systems and human societies (energy, water, land). Section 3 delves into the societal integration conflicts arising from AI, focusing on labor, algorithmic bias, and human autonomy. Section 4 briefly introduces Mouffe’s agonistic framework and applies it to reframe these AI-driven conflicts as political struggles grounded in human-AI interdependence. Section 5 explores the significant challenges and potential pathways for establishing agonistic governance mechanisms capable of navigating these complex and contested futures. Finally, Section 6 offers concluding thoughts on the implications for democratic theory and practice in the age of AI.
2. The Material Politics of AI: Resource Conflicts as Political Terrain
The development and deployment of AI, far from being an abstract computational process, is deeply embedded in material flows and physical infrastructures. This materiality inevitably generates competition and conflict over the finite resources required to sustain the AI ecosystem. These conflicts are not merely economic or technical but are fundamentally political, involving struggles over access, control, and the distribution of environmental and social burdens.
- 2.1 Intra-AI Competition: A Material-Political Struggle
The AI landscape is characterized by intense competition across its entire “stack” — from the foundational hardware and compute infrastructure, through the development of large-scale foundation models, to the deployment of end-user applications.71 This competitive dynamic involves established technology giants, well-funded startups, and state actors vying for dominance.71 While often framed in market terms, this competition is fundamentally a struggle over finite material resources crucial for AI development and operation.
- Compute Power: The engine of modern AI, particularly deep learning, resides in specialized hardware accelerators like Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs).10 Access to cutting-edge chips, primarily supplied by a few key manufacturers like Nvidia (e.g., the H100 GPU 56), and the fabrication capacity of foundries like TSMC, represents a critical bottleneck.10 This scarcity fuels intense demand, drives up costs, and concentrates significant power in the hands of hardware providers and those who can afford large-scale compute clusters.71 Geopolitical factors, such as semiconductor export controls 10, further politicize access to this essential resource. The race for computational supremacy is thus a material-political contest for control over the physical means of AI production.
- Data: Large Language Models (LLMs) and other foundation models are trained on massive datasets, often scraped from the web or other sources.76 Datasets like Common Crawl (petabytes in size 78), The Pile (around 800 GB 83), and LAION (billions of image-text pairs 87) are foundational. While the availability of open-source datasets and the potential of synthetic data might mitigate concerns about absolute data scarcity acting as a barrier to entry for all 92, access to vast, diverse, high-quality, and often proprietary datasets remains a significant competitive advantage for large platforms.92 This raises concerns about data monopolies, the ethics of data extraction 3, and the potential for data scarcity in specific domains or languages to limit innovation or entrench biases.92 The struggle for data is thus intertwined with control over information resources and the digital commons.
- Energy: The energy demands of training and running large AI models are substantial and rapidly increasing.9 This creates competition among AI developers and data center operators for access to affordable and, increasingly, renewable energy sources.11 Locating data centers near cheap power sources is a key strategic consideration, linking AI development directly to energy infrastructure politics and resource availability.
Game theory provides a useful lens for analyzing these competitive dynamics.98 Models of non-cooperative games, resource allocation mechanisms (like auctions for compute time or spectrum sharing), and analyses of Nash equilibria can help understand the strategic interactions between AI actors vying for compute, data, and energy.98 AI itself is even being used to model these complex game-theoretic scenarios.99 However, viewing this solely through an economic or game-theoretic lens is insufficient. The competition between AI actors is fundamentally a material-political struggle. This is because AI’s development hinges on controlling finite physical resources — hardware derived from critical minerals subject to geopolitical tensions 10, massive energy inputs straining existing grids 11, and vast datasets representing collective human knowledge and behavior.3 Access to and control over these material foundations directly translate into AI capabilities and, consequently, market dominance and geopolitical influence.10 The intense competition for these scarce resources, shaped by physical limits and political decisions (like export controls 10), transcends simple market dynamics, entering the realm of political economy and resource politics where struggles over distribution, access, and control define the trajectory of AI and concentrate power.
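To make this game-theoretic framing concrete, the following sketch implements a toy contest model in which two hypothetical AI labs spend to capture shares of a fixed accelerator pool, with each lab's share proportional to its spending; iterated best responses converge to a Nash equilibrium. The labs, valuations, and pool size are invented for illustration and are not drawn from the cited literature.

```python
"""Toy contest model of compute competition (illustrative only).

Two hypothetical AI labs spend to capture shares of a fixed accelerator pool;
each lab's share is proportional to its spending (a Tullock-style contest).
Valuations and pool size are invented placeholders, not figures from the
sources cited in the text.
"""
import math

GPU_POOL = 100_000  # hypothetical accelerators available in one procurement cycle
VALUATIONS = {"lab_a": 10.0, "lab_b": 6.0}  # value each lab assigns to the whole pool (arbitrary units)


def best_response(own_value: float, rival_spend: float) -> float:
    """Spending that maximizes own_value * share - spending, given rivals' total spend."""
    if rival_spend <= 0:
        return 1e-6  # any positive bid captures the entire pool
    return max(math.sqrt(own_value * rival_spend) - rival_spend, 0.0)


def nash_by_iteration(values: dict[str, float], tol: float = 1e-9) -> dict[str, float]:
    """Iterate simultaneous best responses until no lab wants to deviate (a Nash equilibrium)."""
    spend = {lab: 1.0 for lab in values}
    for _ in range(10_000):
        new = {lab: best_response(values[lab], sum(s for r, s in spend.items() if r != lab))
               for lab in spend}
        if all(abs(new[lab] - spend[lab]) < tol for lab in spend):
            return new
        spend = new
    return spend


if __name__ == "__main__":
    equilibrium = nash_by_iteration(VALUATIONS)
    total_spend = sum(equilibrium.values())
    for lab, spend in equilibrium.items():
        share = spend / total_spend
        print(f"{lab}: spend={spend:.3f}, share={share:.1%}, accelerators≈{share * GPU_POOL:,.0f}")
```

Even in this stylized model, equilibrium shares simply track the contestants' valuations, that is, their capacity and willingness to spend. The point of the argument above is that such models describe the strategic surface of the competition while leaving its distributional and political stakes, including who gets to be a 'player' at all and on what material terms, unexamined.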
- 2.2 AI-Human Resource Conflicts: Contesting the Commons
Beyond the internal competition within the AI sector, the sheer scale of AI’s resource demands inevitably creates conflicts with existing human societal needs and environmental limits. AI’s hunger for energy, water, and land transforms these resources into contested commons, pitting the expansion of digital infrastructure against established uses and values.
- Energy Grids: The projected exponential growth in electricity demand from data centers 10 poses significant challenges to energy grids. Many grids, after decades of relatively flat demand, are unprepared for this surge, raising concerns about stability and the potential for blackouts.13 This necessitates substantial investment in new generation and transmission infrastructure, which itself becomes a site of conflict. The proposed Maryland Piedmont Reliability Project (MPRP), designed to power data centers in Virginia by transmitting electricity from Pennsylvania through Maryland, exemplifies this.13 The project faces fierce opposition from Maryland landowners, farmers, and conservationists who argue their land and local environment are being sacrificed for out-of-state data center needs, highlighting the spatial politics and uneven distribution of costs and benefits associated with AI’s energy infrastructure.13 Furthermore, increased grid demand can lead to higher electricity prices for all consumers 94, raising equity concerns. The need for vast amounts of power also clashes with climate goals, as rapid expansion often relies on existing fossil fuel infrastructure if renewable energy deployment cannot keep pace.11 While some research suggests that grid “headroom” exists and conflicts could be mitigated if data centers adopt flexible demand strategies (curtailing use during peak hours) 12, this reframes the conflict as one requiring negotiation over operational control and grid management priorities, rather than eliminating the underlying tension (the first sketch following this list illustrates such curtailment with toy numbers).
- Water Resources: Data centers are prodigious consumers of water, primarily for cooling their energy-intensive hardware.14 Facilities can consume millions of gallons daily 25, often relying on evaporative cooling methods that use large volumes of freshwater, frequently potable water.18 Water Usage Effectiveness (WUE), measured in volume per unit of energy (e.g., m³/MWh), quantifies this consumption, varying significantly based on cooling technology and climate (from 0 m³/MWh for air cooling up to 2.5 m³/MWh for evaporative cooling; the second sketch following this list translates these figures into daily volumes).108 Conflicts intensify in water-stressed regions where data center demand competes directly with municipal supplies, agriculture, and ecosystem needs.24 Documented conflicts have arisen in diverse locations including Bengaluru, India 24; Uruguay 24; Santiago, Chile 24; the US Southwest (Arizona 16, Oregon 16); the Netherlands 16; and Virginia.104 These conflicts are often exacerbated by a lack of transparency from tech companies regarding their water usage 14 and by instances where companies receive preferential water rates compared to local residents 25, fueling public resentment and demands for accountability.24
- Land Use: The physical footprint of the AI industry is substantial. Data centers themselves require vast tracts of land, potentially millions of square feet for hyperscale facilities.104 The siting of these facilities, along with associated infrastructure like power lines and renewable energy installations, frequently generates conflict. Environmental justice concerns arise as data centers are often located in lower-income communities 112, exposing residents to noise pollution from backup generators, potential air quality degradation from diesel exhaust during testing or outages, and aesthetic impacts.104 Furthermore, data center development competes with other land uses, including agriculture, conservation areas, parks, and residential development, leading to political battles over land conversion and local planning priorities.13 The Digital Gateway proposal in Prince William County, Virginia, envisioning data center development equivalent to 150 Walmart Supercenters, exemplifies the scale of land transformation involved.104
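As a toy illustration of the demand-flexibility point raised under Energy Grids above (the first sketch promised there), the code below compares the peak load a grid must serve when a hypothetical data-center fleet draws constant power around the clock with the peak when it curtails its draw during evening peak hours. The hourly profile, fleet size, and curtailment rule are invented assumptions, not figures from the grid studies cited above.

```python
"""Toy illustration of data-center demand flexibility (invented numbers).

Compares the peak load a grid must serve when a hypothetical data-center fleet
draws constant power all day versus when it curtails draw during evening peak
hours. The hourly profile, fleet size, and curtailment rule are illustrative
assumptions, not data from the grid studies cited in the text.
"""

# Hypothetical hourly system load excluding data centers (GW), midnight through 11 p.m.
BASE_LOAD_GW = [30, 28, 27, 27, 28, 31, 36, 42, 46, 48, 49, 50,
                51, 52, 53, 55, 57, 58, 56, 52, 47, 42, 37, 33]
DC_LOAD_GW = 6.0      # hypothetical round-the-clock data-center draw
CURTAILED_GW = 2.0    # draw permitted during designated peak hours
PEAK_HOURS = range(15, 20)  # 3 p.m. to 7:59 p.m. in this toy profile

inflexible = [base + DC_LOAD_GW for base in BASE_LOAD_GW]

flexible = []
deferred_gwh = 0.0  # compute energy pushed out of the peak window
for hour, base in enumerate(BASE_LOAD_GW):
    if hour in PEAK_HOURS:
        flexible.append(base + CURTAILED_GW)
        deferred_gwh += DC_LOAD_GW - CURTAILED_GW  # one hour at reduced draw
    else:
        flexible.append(base + DC_LOAD_GW)

print(f"peak load, inflexible data centers: {max(inflexible):.1f} GW")
print(f"peak load, with curtailment:        {max(flexible):.1f} GW")
print(f"compute deferred to off-peak hours: {deferred_gwh:.1f} GWh")
```

The peak falls only because someone decides when compute is deferred; which party holds that lever is precisely the negotiation over operational control and grid-management priorities described above.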
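As a back-of-envelope companion to the Water Resources item (the second sketch referenced there), the code below converts the quoted WUE range into daily on-site cooling water for a hypothetical facility. The facility size and the mid-range WUE value are assumptions, and the calculation excludes water consumed off-site in electricity generation.

```python
"""Back-of-envelope conversion of WUE figures into daily water volumes.

Uses the WUE range quoted in the text (about 0 m³/MWh for air cooling up to
2.5 m³/MWh for evaporative cooling). The facility size and the mid-range value
are illustrative assumptions; the calculation covers on-site cooling water only,
not the water consumed off-site in electricity generation.
"""

M3_TO_US_GALLONS = 264.172


def daily_cooling_water_gallons(it_load_mw: float, wue_m3_per_mwh: float) -> float:
    """On-site cooling water for a facility running at it_load_mw around the clock."""
    daily_energy_mwh = it_load_mw * 24
    return daily_energy_mwh * wue_m3_per_mwh * M3_TO_US_GALLONS


HYPOTHETICAL_FACILITY_MW = 100  # a large hyperscale campus (assumed size)

for label, wue in [("air cooling", 0.0),
                   ("mid-range evaporative (assumed)", 1.8),
                   ("high-end evaporative", 2.5)]:
    gallons = daily_cooling_water_gallons(HYPOTHETICAL_FACILITY_MW, wue)
    print(f"{label:32s} WUE={wue:.1f} m³/MWh → {gallons:>12,.0f} gallons/day")
```

At evaporative WUE values, a single large campus lands in the millions-of-gallons-per-day range cited above, which is why siting decisions in water-stressed basins become flashpoints.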
These diverse struggles over energy, water, and land demonstrate that AI-driven resource demands are actively reshaping access to and control over essential commons. The expansion of digital infrastructure creates direct clashes with existing societal needs, environmental limits, and justice considerations. These are not simply technical problems of resource management but political battlegrounds where competing claims over shared resources collide. The outcomes reflect power dynamics and contested definitions of public good versus private technological advancement. The very resources enabling AI’s ascent — energy grids, water basins, land — become arenas where the politics of distribution, environmental protection, and equity are fiercely contested.
- 2.3 Materiality, Infrastructure, and the Environment: STS Perspectives
Understanding these resource conflicts requires moving beyond purely technical or economic analyses and embracing perspectives from Science and Technology Studies (STS) that foreground the materiality of technology and the political nature of infrastructure. AI is not an abstract entity but is embodied in tangible hardware, complex supply chains, and energy-intensive infrastructures.6 This materiality is not incidental but constitutive of AI’s social and political effects.116 Drawing on insights from STS and New Materialism 119, AI can be conceptualized as a sociotechnical assemblage 116 — a network integrating minerals extracted from the earth, vast energy flows, water resources, human labor (from mining to data annotation), large datasets, and computational processes, all embedded within political and economic systems. Kate Crawford’s Atlas of AI provides a powerful framework for this analysis, meticulously mapping the connections between mineral extraction, energy consumption, labor exploitation, data classification, and structures of power that underpin the AI industry.3
From this perspective, infrastructures like data centers and global communication networks (including submarine fiber optic cables 8) are not neutral conduits but are themselves political artifacts.65 Their design, location, ownership, and governance protocols reflect and reinforce existing power relations.64 Harold Innis’s concepts of media bias — the tendency of communication technologies to favor either extension over time (time-bias, associated with durability, tradition, decentralization) or extension over space (space-bias, associated with portability, administration, centralization) — and monopolies of knowledge offer critical tools for analyzing digital infrastructures.63 Does the global, cloud-based infrastructure supporting AI predominantly exhibit a space bias, facilitating centralized control by large platforms and potentially eroding local, time-bound community structures?109 Comparing this with Marshall McLuhan’s famous dictum, “the medium is the message,” further highlights how the form of the infrastructure itself, beyond the data it carries, shapes social organization and perception.135
Furthermore, the environmental costs associated with AI’s material base become sites of political struggle. The growing mountains of e-waste generated by rapid hardware obsolescence 27 and the socio-environmental impacts of extracting critical minerals 45 are not merely technical waste management or resource supply issues. They are deeply political, involving global inequalities (e.g., the dumping of e-waste in the Global South 39), the exploitation of labor in mining and recycling 3, and geopolitical competition over strategic resources.10 Methodologies like Life Cycle Assessment (LCA) 152 and Material Flow Analysis (MFA) 8 can quantify these material and energy flows, but the interpretation of this data and the societal responses it prompts are inherently political acts, shaped by competing interests and values.
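To indicate the kind of bookkeeping such methodologies involve, the sketch below runs a toy steady-state material-flow account for a hypothetical accelerator fleet. Every coefficient (fleet size, per-server mass, lifetime, recovery rate) is an invented placeholder rather than a value from the LCA or MFA literature cited here.

```python
"""Toy material-flow account for a hypothetical accelerator fleet.

Illustrates the bookkeeping structure of a steady-state MFA: a constant
installed stock implies that annual inflow equals annual outflow, which equals
stock divided by lifetime. Every coefficient below is an invented placeholder,
not a value from the LCA/MFA literature cited in the text.
"""
from dataclasses import dataclass


@dataclass
class FleetAssumptions:
    servers: int = 50_000          # hypothetical installed fleet
    kg_per_server: float = 25.0    # embodied hardware mass per server (placeholder)
    lifetime_years: float = 3.0    # refresh cycle driving obsolescence (placeholder)
    recovery_rate: float = 0.20    # fraction of retired mass actually recycled (placeholder)


def annual_flows(a: FleetAssumptions) -> dict[str, float]:
    """Steady-state flows implied by the assumptions above, in tonnes."""
    stock_kg = a.servers * a.kg_per_server
    retired_kg = stock_kg / a.lifetime_years       # hardware retired each year
    recycled_kg = retired_kg * a.recovery_rate     # mass recovered through recycling
    return {
        "installed stock (t)": stock_kg / 1000,
        "retired per year (t)": retired_kg / 1000,
        "recycled per year (t)": recycled_kg / 1000,
        "e-waste per year (t)": (retired_kg - recycled_kg) / 1000,
    }


if __name__ == "__main__":
    for name, tonnes in annual_flows(FleetAssumptions()).items():
        print(f"{name:24s} {tonnes:10,.0f}")
```

The politically salient point is that the parameters doing the analytical work, above all the assumed hardware lifetime and the recovery rate, are exactly the quantities contested through corporate refresh policies, right-to-repair rules, and recycling infrastructure.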
The pervasive narrative of AI’s immateriality, its existence merely as code “in the cloud,” functions as a potent political strategy. By obscuring the vast material entanglements and associated costs — environmental degradation, resource depletion, exploitative labor practices, geopolitical dependencies — this narrative serves the interests of dominant actors in the AI ecosystem, primarily large technology corporations and aligned state interests.3 It minimizes public scrutiny and regulatory pressure by rendering the physical foundations of AI invisible, a common characteristic of infrastructures until they fail or become sites of overt conflict.127 Consequently, making AI’s materiality visible, as Crawford 3 and others endeavor to do, is not just an analytical exercise but a crucial political act. It repoliticizes AI development by bringing its hidden costs and consequences into the public sphere, thereby creating the necessary conditions for informed debate, accountability, and meaningful contestation.
3. AI-Human Societal Integration: Contested Coexistence and Power Dynamics
Beyond the conflicts rooted in resource competition and material infrastructures, the integration of AI into the fabric of human society generates profound social and political tensions. These arise as AI systems increasingly mediate labor relations, influence social categorization and justice, and reshape notions of autonomy and public life.
- 3.1 Labor Futures: Automation, Displacement, and Algorithmic Management
The relationship between AI and labor is fraught with conflict. While proponents emphasize AI’s potential to augment human capabilities and boost productivity 200, significant concerns persist about widespread job displacement due to automation.201 The COVID-19 pandemic notably accelerated the adoption of automation technologies as employers sought to cut labor costs and address shortages.202 This tension between augmentation and displacement is not uniform across the workforce. Early waves of automation primarily impacted routine manual tasks, often affecting blue-collar workers in regions like the US South and industrial Midwest.200 However, generative AI appears poised to significantly impact higher-skilled, white-collar occupations involving cognitive labor, particularly in sectors like technology, business, and finance located in major metropolitan areas.200 Despite this shift, occupations involving routine tasks, often held disproportionately by Black and Latino or Hispanic workers, remain highly susceptible to automation, suggesting that AI could exacerbate existing racial and economic inequalities in the labor market.201 While some studies suggest generative AI might disproportionately benefit lower-skilled workers by compressing productivity distributions 201, the overall trend points towards a restructuring that favors capital and higher-skilled workers, potentially leading to wage stagnation or decline for others.201
Compounding these concerns is the rise of algorithmic management, where AI systems increasingly oversee aspects of work previously handled by humans, such as performance monitoring, task allocation, scheduling, and even hiring and firing decisions.3 This introduces new forms of surveillance and control into the workplace, potentially intensifying work, reducing worker autonomy, and creating stressful, dehumanizing conditions.1 Crawford explicitly links these practices to historical forms of labor control, including Taylorism and the management techniques of colonial slave plantations, highlighting the continuity of power dynamics embedded in technological systems.3
Framing this transformation requires moving beyond purely economic analysis. The deployment of AI in the workplace represents a political conflict over the future organization of work, the distribution of economic gains generated by automation, the definition and protection of worker rights in an algorithmic age, and the very meaning of human labor.204 Hannah Arendt’s distinction between labor (activities driven by biological necessity and consumption) and work (the creation of a durable, artificial world of things, including cultural artifacts) provides a critical lens.1 Does the drive towards AI automation, focused on efficiency and the replacement of human tasks, push society further into the realm of the animal laborans, potentially diminishing the space for meaningful work and the political action Arendt considered essential for a flourishing human existence?204
Therefore, AI-driven automation should be understood not as an inevitable technological progression but as a political restructuring of labor markets and workplaces. The choices about which tasks to automate, how to manage workers algorithmically, and how to distribute the resulting productivity gains are political decisions with significant distributional consequences. These decisions reflect and potentially deepen existing societal inequalities along lines of race, class, and skill, while simultaneously shifting power further towards capital and algorithmic systems of control.201 Resisting or shaping this trajectory requires political contestation through policy interventions, collective bargaining, and the reassertion of human values in the organization of work.203
- 3.2 Algorithmic Bias and Fairness: Contesting Algorithmic Justice
The deployment of AI systems in critical decision-making domains has brought the issue of algorithmic bias to the forefront. Bias occurs when algorithms produce systematic and repeatable errors that result in unfair outcomes, often disadvantaging specific groups based on characteristics like race, gender, or socioeconomic status.211 This bias does not necessarily reflect intentional malice; it often stems from the data used to train AI models, which can reflect historical discrimination or underrepresentation of certain groups. Bias can also be introduced through design choices made by developers (e.g., choosing proxies that are themselves correlated with sensitive attributes, like using ZIP codes as a proxy for race or economic status 217) or through the interpretation and application of algorithmic outputs.213
Conflicts arising from algorithmic bias are widespread and impact crucial areas of social life. In hiring, AI screening tools can perpetuate past discriminatory patterns, creating a “double bind” for applicants from marginalized backgrounds who are unfairly excluded.218 In finance, biased credit scoring algorithms can restrict access to loans for deserving individuals in lower-income or minority communities, thereby exacerbating wealth inequality.211 Healthcare algorithms trained on data predominantly from one demographic group may perform poorly for others, leading to disparities in diagnosis and treatment.211 Similarly, predictive policing tools trained on biased historical arrest data can reinforce discriminatory policing practices against minority communities.213 Even seemingly innocuous applications like search engines and social media platforms can exhibit bias, limiting exposure to diverse information or reinforcing harmful stereotypes.212
Addressing algorithmic bias presents significant challenges related to accountability and transparency. The complex, often opaque nature of machine learning models — the “black box” problem — makes it difficult to understand precisely how decisions are made and where bias originates.214 Furthermore, defining and measuring “fairness” is itself a contested issue, with multiple competing statistical and ethical definitions, none of which may be universally applicable or satisfy all stakeholders.217 Achieving accountability requires more than just technical fixes; it demands mechanisms for transparency, independent auditing, and meaningful contestation by those affected by algorithmic decisions.211
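To make concrete why fairness resists a single technical definition, the sketch below evaluates two standard criteria, demographic parity (equal selection rates) and equal opportunity (equal true-positive rates among the qualified), on a small invented decision log constructed so that the first criterion holds while the second fails.

```python
"""Two competing statistical fairness criteria on an invented decision log.

The log is constructed so that demographic parity (equal selection rates)
holds while equal opportunity (equal true-positive rates among the qualified)
fails, showing that 'fair' depends on which criterion one adopts. The records
are purely illustrative.
"""

# Each record: (group, qualified, approved_by_model)
DECISIONS = [
    # group A: 6 applicants, 4 qualified, 3 approved
    ("A", True, True), ("A", True, True), ("A", True, True),
    ("A", True, False), ("A", False, False), ("A", False, False),
    # group B: 6 applicants, 5 qualified, 3 approved
    ("B", True, True), ("B", True, True), ("B", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]


def selection_rate(group: str) -> float:
    """Demographic parity compares P(approved) across groups."""
    rows = [r for r in DECISIONS if r[0] == group]
    return sum(1 for _, _, approved in rows if approved) / len(rows)


def true_positive_rate(group: str) -> float:
    """Equal opportunity compares P(approved | qualified) across groups."""
    rows = [r for r in DECISIONS if r[0] == group and r[1]]
    return sum(1 for _, _, approved in rows if approved) / len(rows)


for g in ("A", "B"):
    print(f"group {g}: selection rate = {selection_rate(g):.2f}, "
          f"TPR among qualified = {true_positive_rate(g):.2f}")
```

Well-known impossibility results sharpen the point: when base rates differ across groups, criteria such as calibration and equalized error rates cannot all be satisfied at once outside degenerate cases, so adopting a fairness standard is a normative, contestable choice rather than a matter of debugging.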
These struggles over algorithmic bias are fundamentally political conflicts. They are not merely about correcting technical errors but involve deep disagreements about justice, equality, representation, and the very definition of fairness in society.211 The deployment of biased algorithms reflects and reinforces existing societal power imbalances and embeds particular values into sociotechnical systems, often prioritizing efficiency or profit over equity and non-discrimination.
Consequently, algorithmic bias serves as a critical site of political contestation over values. The core issue is determining which societal values should guide the design and deployment of AI systems and who holds the power to make those determinations. Technical solutions alone are insufficient because the definition of “bias” and “fairness” is inherently normative and contested. True accountability necessitates establishing political mechanisms — such as regulatory oversight, public audits, and avenues for legal and social challenges — that allow for the ongoing contestation of algorithmic decisions and the values they embody.211 It requires moving beyond technical debugging towards a political negotiation over the principles governing our algorithmic society.
- 3.3 Autonomy, Judgment, and the Public Sphere: The Politics of Agency
The increasing integration of AI into decision-making processes raises fundamental questions about human autonomy, judgment, and the nature of the public sphere. There is growing concern that relying on AI for recommendations, predictions, and automated actions can subtly erode human autonomy, diminish critical judgment, and reduce individual and collective control over significant aspects of life.7 Philosopher Daniel Dennett’s conception of autonomy as self-control — the ability to manage one’s own “degrees of freedom” — provides a useful framework for analyzing AI’s impact.223 From this perspective, AI can act ambivalently: recommender systems might augment self-control by filtering information overload, helping individuals focus on relevant options. However, these same systems can also function as mechanisms of “remote control” by subtly manipulating choices, distracting attention from genuine interests, or “clamping” the range of perceived possibilities through algorithmic curation, thereby diminishing decisional autonomy.224
This intersects with the conceptual blurring between human agency and the increasing capabilities attributed to AI systems.225 AI agents, defined as systems capable of autonomous perception, reasoning, and action to achieve goals 226, exhibit growing independence in task execution. However, this operational autonomy differs fundamentally from human agency, which involves consciousness, intentionality, ethical responsibility, and political subjectivity — qualities AI systems lack.234 Anthropomorphizing AI or attributing genuine agency to machines can obscure the human choices embedded in their design and deployment, thereby deflecting responsibility for their impacts.3
The influence of AI extends significantly into the public sphere, the realm of collective deliberation and political action.1 Algorithmic curation on social media platforms shapes public discourse and exposure to information.149 The potential for AI-driven manipulation through targeted disinformation or deepfakes threatens the shared factual basis necessary for meaningful political debate and undermines public trust.212 Furthermore, the automation of cognitive tasks and the increasing mediation of social interaction through digital platforms might reduce the time, space, and capacity for the kind of face-to-face interaction and collective deliberation that theorists like Hannah Arendt identified as crucial for political life.1 Arendt emphasized action — plural, unpredictable, world-disclosing speech and deeds undertaken in the public realm — as the highest form of human activity, distinct from both labor and work.1 Does the increasing reliance on AI, with its emphasis on prediction, optimization, and efficiency, foster or hinder the conditions necessary for genuine Arendtian political action?208
The integration of AI into societal decision-making structures thus constitutes a political conflict over control. It raises fundamental questions about who — or what — controls the flow of information, defines the available choices, exercises judgment, and ultimately holds agency in shaping individual lives and collective futures.
This delegation of cognitive tasks and decision-making authority to AI systems is not a neutral technological shift but a political act with significant consequences for human autonomy and the vitality of the public sphere. This politics of delegated agency reconfigures power relations. While potentially increasing efficiency, it risks diminishing the spaces available for human judgment, critical reflection, and collective political action as conceived by thinkers like Arendt.208 The instrumental rationality embedded in many AI systems may supplant the unpredictable, pluralistic, and often messy processes of human deliberation and action.207 This transformation of agency and the potential narrowing of the public sphere represent a profound political challenge, demanding critical scrutiny of the terms on which humans and increasingly autonomous technologies coexist.
4. Agonistic Frameworks for AI Futures
To adequately grasp the political dimensions of AI-driven conflicts and navigate the complexities of human-AI coexistence, frameworks that move beyond technical optimization or consensus-seeking are required. Chantal Mouffe’s theory of agonistic pluralism offers such a framework by centering conflict, power, and hegemony as constitutive elements of the political.
- 4.1 Mouffe’s Agonism: Core Concepts
Mouffe’s political theory rests on several key distinctions and concepts. Central is the differentiation between ‘the political’ and ‘politics’.67 ‘The political’ signifies the dimension of antagonism — potential conflict, hostility, and the construction of ‘us’ versus ‘them’ identities — that Mouffe argues is inherent and ineradicable in all human social relations. ‘Politics’, conversely, refers to the specific practices, institutions, and discourses through which societies attempt to establish order and manage coexistence in the face of this ever-present potential for antagonism.
Within this framework, Mouffe distinguishes between ‘antagonism’ and ‘agonism’.65 Antagonism represents a relationship between enemies, where each seeks the exclusion or destruction of the other. Agonism, in contrast, describes a relationship between ‘adversaries’. Adversaries engage in intense conflict over competing visions and interpretations, but they do so within a shared symbolic space, recognizing the legitimacy of their opponents’ right to exist and contend for power. This shared space, in a liberal democracy, is constituted by a ‘conflictual consensus’ on the fundamental ethico-political principles of liberty and equality, even while the meaning and application of these principles remain subject to ongoing contestation. For Mouffe, the aim of democratic politics is not to eliminate conflict but to transform potential antagonism into agonism — a passionate, yet regulated, struggle between adversaries.
This leads to Mouffe’s critique of dominant liberal and deliberative democratic theories (associated with thinkers like Rawls and Habermas).68 She argues that their focus on achieving rational consensus through deliberation ignores the constitutive role of power, conflict, and affect (‘passions’) in politics. By attempting to neutralize conflict or relegate passions to the private sphere, these models depoliticize societal issues, mask underlying power relations, and ultimately fail to provide legitimate channels for expressing dissent. For Mouffe, any achieved consensus is not the result of pure reason but is always a temporary and exclusionary ‘hegemonic’ articulation.67 Hegemony refers to the contingent, power-laden process through which a particular understanding of social reality becomes dominant, fixing meanings and establishing an order, but always by excluding alternative possibilities. Agonistic politics, therefore, involves the ongoing struggle between competing hegemonic projects vying for dominance.68
- 4.2 Reframing AI Conflicts: From Technical Problems to Political Struggles
Applying Mouffe’s agonistic framework allows for a fundamental reframing of the conflicts surrounding AI discussed earlier. Resource conflicts over energy, water, land, data, and compute power cease to be mere technical optimization problems or standard market competitions.10 Instead, they are revealed as political struggles involving competing interests (e.g., global tech corporations vs. local communities or national interests 10), conflicting values (e.g., economic growth and technological advancement vs. environmental sustainability and social equity 104), and the construction of adversarial identities. The allocation of scarce resources is not determined by neutral algorithms or market forces alone but through political processes shaped by power dynamics and competing claims over the common good.
Similarly, societal integration conflicts are recast in political terms. Labor displacement through automation is not just an economic adjustment but a political struggle over the distribution of wealth, the future of work, and the power balance between labor and capital.202 Algorithmic bias is not simply a technical flaw but a manifestation of political conflict over definitions of justice, fairness, and equality, reflecting deeper societal power structures.211 The erosion of human autonomy through AI decision-making represents a political contest over agency, control, and the shaping of public life.225
Framing these issues as purely technical challenges solvable through better engineering, more efficient resource management, or market adjustments constitutes a form of depoliticization.67 This masks the underlying power relations, value disagreements, and hegemonic projects driving AI development. An agonistic approach, by contrast, insists on recognizing the political nature of these conflicts.65 It demands that solutions be sought not through technical fixes alone, but through political negotiation, open contestation between adversaries, and democratic decision-making processes that acknowledge the impossibility of a final, rational resolution.
This agonistic perspective crucially reveals the politics of AI ‘externalities’. Environmental degradation (water depletion, e-waste, carbon emissions 24) and social disruptions (job losses, bias amplification 202), often treated as unintended side effects or costs to be managed within technical or economic models, are foregrounded as central political issues. From an agonistic viewpoint, these are not mere externalities but direct consequences of the dominant, hegemonic project prioritizing rapid AI development and deployment, often at the expense of environmental limits or social equity.70 The conflicts that erupt in response — community protests against data center construction 24, demands for algorithmic accountability 211, struggles over labor rights in automated workplaces — are political contestations challenging this dominant hegemony. Agonism thus compels an understanding of these impacts not as peripheral concerns but as core arenas where the political struggles over the direction, values, and consequences of our technological future are being waged.
- 4.3 Interdependence as Political Ground: Why Agonism Matters for Coexistence
The conflicts surrounding AI are further complicated by the deepening interdependence between AI systems and human societies. AI is not an external force acting upon society; rather, AI and society are increasingly co-constitutive and mutually reliant. Human societies depend on AI for a growing range of functions: economic productivity through automation and platform economies 2, scientific advancement via complex modeling and data analysis 162, healthcare diagnostics and delivery 211, aspects of governance and public administration 255, and myriad aspects of daily life facilitated by search engines, navigation tools, and communication platforms.149
Conversely, AI systems are fundamentally dependent on human societies. Their development requires human expertise in design and programming.214 Their training relies on vast datasets generated by human activity and communication.3 Their operation depends on physical infrastructures — energy grids, data centers, networks — built, maintained, and powered through human labor and societal organization.11 AI’s material basis relies on resources extracted and processed by human labor.3 Crucially, the legitimacy, adoption, and regulation of AI depend on public trust, social acceptance, and political decisions made within human societies.257
This intricate interdependence inevitably creates friction and conflict. The resource demands of AI clash with human needs; AI-driven automation impacts human labor; algorithmic decisions challenge human values; AI’s reliance on human data raises privacy concerns. Because AI systems and human societies are so deeply entangled, these conflicts cannot be resolved through simple separation or the elimination of one party. Coexistence is necessary, but its terms must be negotiated.
This negotiation is inherently political, involving clashes between different human interests, values, and visions for the future. It is precisely here that Mouffe’s agonism becomes particularly relevant. Agonism provides a framework for navigating the conflicts arising from interdependence without resorting to the illusion of a final consensus or a purely technical solution.68 It acknowledges the persistence of disagreement and power struggles but seeks to channel them into productive contestation between legitimate adversaries — different human groups and stakeholders with competing interests regarding AI’s role and impact — within the shared norms of a democratic political community.
The profound interdependence between AI and human society thus politicizes AI governance. Decisions about how AI is developed, deployed, regulated, and integrated into social life are not merely technical or managerial choices; they are political decisions that shape the terms of our collective future. They inevitably involve negotiating the conflicts and tensions arising from this mutual reliance. Because these negotiations involve fundamental disagreements over values, power, and the distribution of risks and benefits, agonistic frameworks, which embrace conflict and emphasize political struggle between adversaries within a shared democratic structure, offer a more realistic and potentially more democratic approach to governing this complex, interdependent relationship than models predicated on achieving technocratic consensus or market equilibrium.3
5. Governing Agonistic Futures: Challenges and Possibilities
Applying an agonistic framework to the governance of AI presents significant conceptual and practical challenges, particularly concerning the nature of the ‘adversary’ in human-AI conflicts and the design of institutions capable of facilitating productive contestation.
- 5.1 The Challenge of the Non-Human Adversary (and Diffuse Actors)
A core tenet of Mouffe’s agonism is the struggle between human political adversaries who share a commitment to a common symbolic space, typically the principles of liberal democracy.68 This poses a conceptual problem when considering AI systems. AI, lacking consciousness, intentionality, and political subjectivity, cannot be considered a political adversary in the Mouffean sense.234 While AI systems can be designed to exhibit antagonistic behaviors (e.g., being disagreeable or challenging users 258), this programmed antagonism does not equate to political agency or the capacity for genuine political contestation. Engaging in an agonistic struggle directly with an algorithm is therefore problematic.
The ‘adversary’ in conflicts involving AI is more accurately identified as the human actors, institutions (corporations, states), or socio-technical systems embodying specific interests and power structures that deploy and benefit from AI.3 However, holding these actors accountable and engaging them in meaningful political contestation is often hindered by several factors inherent to the current AI landscape. The opacity of complex algorithms — the “black box” problem — makes it difficult to understand how decisions are made and to attribute responsibility for harmful outcomes.214 Responsibility is often diffused across a complex network of developers, data providers, deployers, and users, making it challenging to pinpoint a single accountable entity.222 Furthermore, significant power asymmetries exist between large technology companies or state actors controlling AI development and the individuals or communities affected by these systems, making effective challenge difficult.3
Enabling legitimate political contestation under these conditions requires creating governance mechanisms that can pierce corporate and algorithmic opacity, establish clearer lines of accountability, and empower marginalized or less powerful actors to effectively challenge dominant AI projects and their consequences.221
The central governance challenge, therefore, is not about fostering agonism between humans and AI, but about mediating human conflict about AI. It requires structuring political contestation among diverse human actors and stakeholders regarding the design, deployment, impacts, values, and control embedded within AI systems. This necessitates overcoming the significant barriers posed by the opacity, diffused responsibility, and power asymmetries characteristic of the contemporary AI ecosystem.3 Effective agonistic governance must create arenas where these human conflicts over AI can be articulated and negotiated politically.
- 5.2 Towards Agonistic Governance Mechanisms: Designing for Dissensus
Current approaches to AI governance often fall short from an agonistic perspective. Voluntary ethical guidelines frequently lack enforcement mechanisms and may serve corporate interests more than public accountability.235 Top-down, state-led regulation risks being overly rigid, stifling innovation, or failing to capture the diversity of societal values and concerns.260 Technocratic approaches, focused on expert-led risk assessment and management, tend to depoliticize inherently political issues by framing them as technical problems solvable by optimization.255
An agonistic approach demands institutions designed not to eliminate conflict but to channel it productively.261 Such institutions should aim to make power relations visible, provide platforms for diverse voices to articulate competing claims, facilitate contestation between adversaries, and enable ongoing negotiation rather than seeking premature or imposed consensus.264
Exploring potential mechanisms for agonistic AI governance could include:
- Adversarial Regulatory Forums: Designing regulatory bodies that incorporate structured adversarial processes, allowing stakeholders (industry, civil society, affected communities, labor unions, environmental groups) to formally challenge proposals, present competing evidence, and demand accountability from developers and deployers, moving beyond purely expert-driven deliberation.221
- Agonistic Participatory Design: Shifting participatory design practices away from consensus-building towards methods that explicitly embrace conflict and dissensus.261 This involves using design interventions — “agonistic arrangements” — to create spaces where underlying tensions can be surfaced, competing “worldings” articulated, and alternative technological trajectories debated.264
- Re-Framed Multi-Stakeholder Dialogues: Utilizing multi-stakeholder platforms not as venues for achieving consensus, but as arenas for clarifying points of conflict, articulating irreconcilable differences, and negotiating temporary political settlements or modi vivendi.65 This requires skilled facilitation conscious of power dynamics.
- Independent Auditing and Contestation Bodies: Establishing independent public or quasi-public bodies with the authority and technical capacity to audit AI systems (particularly those used in high-stakes public domains), demand transparency regarding data and algorithms, and provide accessible platforms for individuals and groups to contest biased, harmful, or unjust outcomes.211 Crucially, the functioning and decisions of these bodies should themselves be open to political scrutiny and contestation.
- Public Interest Litigation and Contestation: Strengthening legal and civic frameworks that enable proactive public interest challenges to AI systems, the data they rely on, and the regulatory standards governing them.221
Implementing such mechanisms faces considerable hurdles. Institutionalizing agonism is inherently challenging, as institutions often tend towards stabilization and closure.267 There is always a risk that channeled conflict could escalate back into destructive antagonism if the shared democratic framework breaks down.66 Effectively managing power imbalances within contestation processes to ensure marginalized voices are truly heard remains a persistent difficulty.66 Ensuring meaningful representation and preventing capture by dominant interests are critical ongoing tasks.263
Despite these challenges, the core implication remains: governing AI agonistically requires designing institutions for conflict. An effective and democratic approach to AI governance in the face of its profound and contested impacts necessitates a fundamental shift. Instead of aiming for elusive consensus or relying solely on technical optimization, governance structures must be consciously designed to recognize, accommodate, and channel the inherent political conflicts surrounding AI’s development, deployment, and societal integration.68
6. Conclusion
The rapid proliferation of artificial intelligence, often presented as an immaterial advance, is deeply entangled with the material world, demanding vast resources and reshaping societal structures. This entanglement generates profound conflicts — over finite resources like energy, water, land, and critical minerals, and over the social fabric through impacts on labor, justice, autonomy, and the public sphere. This article has argued that these conflicts are not mere technical glitches or market adjustments but are fundamentally political struggles rooted in competing interests, irreconcilable values, and shifting power dynamics, all situated within a context of deepening human-AI interdependence.
An agonistic theoretical framework, drawing primarily on the work of Chantal Mouffe, provides a critical lens to understand these conflicts. By emphasizing the ineradicability of antagonism (‘the political’) and advocating for its transformation into agonism (struggle between adversaries within a shared democratic framework), this perspective challenges the depoliticizing tendencies of techno-solutionist, purely economic, or consensus-driven approaches. It reframes resource competition, environmental impacts, labor disruption, and algorithmic bias not as problems to be optimized away, but as sites of legitimate political contestation over the direction and values embedded in our technological future. The interdependence between human societies and AI systems underscores the necessity of negotiated coexistence, making agonistic approaches particularly relevant for governing these complex relationships.
This analysis reveals the limitations of alternative governance paradigms. Technical optimization ignores value conflicts and power imbalances. Market-based solutions fail to account for externalities and social justice concerns. Purely ethical guidelines often lack enforcement and can be co-opted. Deliberative models, while valuable, struggle to accommodate the deep pluralism and affective dimensions inherent in these conflicts, often seeking a rational consensus that Mouffe argues is illusory and potentially exclusionary.68
Embracing an agonistic perspective has significant implications for democratic theory and practice in the age of AI. It necessitates acknowledging that technological development is a political arena and requires the creation of new institutional spaces and practices designed explicitly for contestation.261 This involves moving beyond traditional models of regulation or participation towards mechanisms that facilitate adversarial debate, make power visible, empower marginalized voices, and enable ongoing negotiation over the terms of our sociotechnical future. The challenge lies in designing institutions for conflict, not against it, ensuring that the inevitable struggles surrounding AI contribute to, rather than undermine, democratic vitality and accountability.
Further research is crucial to advance this agenda. There is a pressing need to develop and empirically test specific institutional designs for agonistic AI governance, moving from theoretical principles to practical implementation.267 Detailed case studies analyzing ongoing AI-related conflicts through an agonistic lens can provide valuable insights into the dynamics of power, identity, and contestation in practice. Exploring the role of affect and passion in shaping political responses to AI remains an important, yet understudied, area.68 Furthermore, extending political theory to grapple with the agency and role of complex, non-human systems within political frameworks, perhaps drawing critically on concepts like ‘parallel societies’ 269 while avoiding simplistic anthropomorphism, presents a significant theoretical challenge.234 Finally, the intersection of agonistic politics, environmental justice, and the material resource conflicts driven by AI demands deeper investigation to ensure that the transition towards an AI-driven future is both sustainable and equitable.104 Recognizing the political core of AI conflicts is the first step towards imagining and building more democratic and just technological futures.
7. References
- Anzeige von Arendt among the machines: Labour, work and action on digital platforms, accessed April 22, 2025, https://www.hannaharendt.net/index.php/han/article/view/591/993
- www.repository.cam.ac.uk, accessed April 22, 2025, https://www.repository.cam.ac.uk/bitstreams/296b5110-cde4-4986-9894-23b6cc8a9b8c/download
- Mapping the Planetary Costs of AI: Kate Crawford’s Atlas of AI in …, accessed April 22, 2025, https://turningpointmag.org/2024/04/24/mapping-the-planetary-costs-of-ai-kate-crawfords-atlas-of-ai-in-review/
- Review by Gary Kafer of Kate Crawford’s “The Atlas of AI” — Jump Cut, accessed April 22, 2025, https://www.ejumpcut.org/currentissue/GaryKafer/index.html
- (PDF) Climate and Environmental Impacts of Artificial Intelligence — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/387466826_Climate_and_Environmental_Impacts_of_Artificial_Intelligence
- Artificial Intelligence in Iran: National Narratives and Material Realities — Cambridge University Press, accessed April 22, 2025, https://www.cambridge.org/core/services/aop-cambridge-core/content/view/9D2C33F3F1BC459712651443F1D203FB/S002108622400063Xa.pdf/artificial_intelligence_in_iran_national_narratives_and_material_realities.pdf
- Materiality and Risk in the Age of Pervasive AI Sensors, accessed April 22, 2025, https://arxiv.org/html/2402.11183v2
- SCDMS for Marine Surveys & Desktop Studies — Submarine Cable Document Management System, accessed April 22, 2025, https://www.scdms1.com/marine/
- AI has high data center energy costs — but there are solutions | MIT Sloan, accessed April 22, 2025, https://mitsloan.mit.edu/ideas-made-to-matter/ai-has-high-data-center-energy-costs-there-are-solutions
- AI’s Power Requirements Under Exponential Growth: Extrapolating …, accessed April 22, 2025, https://www.rand.org/pubs/research_reports/RRA3572-1.html
- To power AI, data centers need more and more energy | The Current, accessed April 22, 2025, https://news.ucsb.edu/2025/021835/power-ai-data-centers-need-more-and-more-energy
- Study finds headroom on the grid for data centers — E&E News by POLITICO, accessed April 22, 2025, https://www.eenews.net/articles/study-finds-headroom-on-the-grid-for-data-centers/
- As AI drives up electricity demand, rural residents get caught in the …, accessed April 22, 2025, https://floodlightnews.org/transmission-line-conflicts-ahead-as-us-electricity-demand-booms/
- arxiv.org, accessed April 22, 2025, https://arxiv.org/pdf/2304.03271
- Professor’s TED Talk warns of AI’s hidden water costs | UCR News …, accessed April 22, 2025, https://news.ucr.edu/articles/2025/03/05/professors-ted-talk-warns-ais-hidden-water-costs
- www.foodandwaterwatch.org, accessed April 22, 2025, https://www.foodandwaterwatch.org/wp-content/uploads/2025/03/FSW_0325_AI_Water_Energy.pdf
- The Environmental Impact Of Artificial Intelligence — The Organization for World Peace, accessed April 22, 2025, https://theowp.org/reports/the-environmental-impact-of-artificial-intelligence/
- The Environmental Impact of Artificial Intelligence: Water Scarcity and the Future of Computing — Valicor, accessed April 22, 2025, https://www.valicor.com/blog/the-environmental-impact-of-ai-water-scarcity-and-the-future-of-computing
- Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models — SciSpace, accessed April 22, 2025, https://scispace.com/pdf/making-ai-less-thirsty-uncovering-and-addressing-the-secret-1ljldqa9.pdf
- Explained: Generative AI’s environmental impact | MIT News, accessed April 22, 2025, https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
- Artificial Thirst: AI’s Unseen Drain on Water — Race & Social Justice Law Review, accessed April 22, 2025, https://race-and-social-justice-review.law.miami.edu/artificial-thirst-ais-unseen-drain-on-water/
- Addressing the hidden water footprint of data — Impax Asset Management, accessed April 22, 2025, https://impaxam.com/insights-and-news/blog/addressing-the-hidden-water-footprint-of-data/
- Why AI Computing Growth Threatens Global Water Resources — Technology Magazine, accessed April 22, 2025, https://technologymagazine.com/articles/ai-data-centres-can-water-companies-handle-the-heat
- AI Data Centers Threaten Global Water Security | Lawfare, accessed April 22, 2025, https://www.lawfaremedia.org/article/ai-data-centers-threaten-global-water-security
- Data centers draining resources in water-stressed communities — The University of Tulsa, accessed April 22, 2025, https://utulsa.edu/news/data-centers-draining-resources-in-water-stressed-communities/
- The Global E-waste Monitor 2017 — UNU Collections — United Nations University, accessed April 22, 2025, https://collections.unu.edu/eserv/UNU:6341/Global-E-waste_Monitor_2017__electronic_single_pages_.pdf
- E-waste from AI computers could ‘escalate beyond control’: study — Frontline — The Hindu, accessed April 22, 2025, https://frontline.thehindu.com/news/e-waste-ai-computers-artificial-intelligence-escalate-by-2030-environmental-damage/article68814329.ece
- E-waste from AI computers could ‘escalate beyond control’ | The Business Standard, accessed April 22, 2025, https://www.tbsnews.net/tech/e-waste-ai-computers-could-escalate-beyond-control-982206
- (PDF) E-waste Challenges of Generative Artificial Intelligence — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/378935180_E-waste_Challenges_of_Generative_Artificial_Intelligence
- Information Technology Factsheet | Center for Sustainable Systems — University of Michigan, accessed April 22, 2025, https://css.umich.edu/publications/factsheets/built-environment/information-technology-factsheet
- The Environmental Cost of AI | Climate Crisis — Library Journal, accessed April 22, 2025, https://www.libraryjournal.com/story/the-environmental-cost-of-ai-climate-crisis
- The Global E-waste Monitor 2017, accessed April 22, 2025, https://ewastemonitor.info/wp-content/uploads/2020/11/Global-E-waste-Monitor-2017-electronic-spreads.pdf
- Generative AI will soon generate tonnes of e-waste, finds study — Khmer Times, accessed April 22, 2025, https://www.khmertimeskh.com/501582203/generative-ai-will-soon-generate-tonnes-of-e-waste-finds-study/
- AI-driven data centers risk massive e-waste surge by 2030 — EHN.org, accessed April 22, 2025, https://www.ehn.org/ai-data-center-energy-use
- (PDF) e-WASTE: everything an ICT scientist and developer should know — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/337435190_e-WASTE_everything_an_ICT_scientist_and_developer_should_know
- E-waste challenges of generative artificial intelligence, accessed April 22, 2025, http://www.iue.cas.cn/xwzx/kydt/202411/P020241101317045439969.pdf
- The AI boom may unleash a global surge in electronic waste | illuminem, accessed April 22, 2025, https://illuminem.com/illuminemvoices/the-ai-boom-may-unleash-a-global-surge-in-electronic-waste
- A SITUATIONAL ANALYSIS OF A CIRCULAR ECONOMY IN THE DATA CENTRE INDUSTRY — WeLOOP, accessed April 22, 2025, https://www.weloop.org/wp-content/uploads/2021/09/2020_04_16_CEDaCI_situation_analysis_circular_economy_report_VF.pdf
- MAKING AND UNMAKING E-WASTE: TRACING THE GLOBAL AFTERLIFE OF DISCARDED DIGITAL TECHNOLOGIES IN BERLIN, accessed April 22, 2025, https://ecommons.cornell.edu/server/api/core/bitstreams/14212256-f59b-4cac-9bf4-15b4227f7491/content
- Assessment of the waste electrical and electronic equipment management systems profile and sustainability in developed and developing European Union countries | Request PDF — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/321945880_Assessment_of_the_waste_electrical_and_electronic_equipment_management_systems_profile_and_sustainability_in_developed_and_developing_European_Union_countries
- Closing the Loop on the World’s Fastest-growing Waste Stream …, accessed April 22, 2025, https://www.bakerinstitute.org/research/closing-loop-worlds-fastest-growing-waste-stream-electronics
- Data Center Recycling: Limits of E-Waste Recycling Solutions …, accessed April 22, 2025, https://www.human-i-t.org/data-center-recycling/
- INTEGRATED REPORT 2024 — ABB, accessed April 22, 2025, https://search.abb.com/library/Download.aspx?DocumentID=9AKK108470A7206&LanguageCode=en&DocumentPartId=&Action=Launch
- Assessing the Techno-Economic Feasibility of Waste Electric and Electronic Equipment Treatment Plant: A Multi-Decisional Modeling Approach — MDPI, accessed April 22, 2025, https://www.mdpi.com/2071-1050/15/23/16248
- Rare Earth Elements for Semiconductor Manufacturing: Global Supply Chain and Dominance — Scientific Research and Community, accessed April 22, 2025, https://www.onlinescientificresearch.com/articles/rare-earth-elements-for-semiconductor-manufacturing-global-supply-chain-and-dominance.pdf
- Fine exploration and green development of ion-adsorption type REE deposits in South China using multi-geophysical technology — Frontiers, accessed April 22, 2025, https://www.frontiersin.org/journals/earth-science/articles/10.3389/feart.2025.1489870/full
- Critical Materials Assessment — Department of Energy, accessed April 22, 2025, https://www.energy.gov/sites/default/files/2023-07/doe-critical-material-assessment_07312023.pdf
- Renewable Energy Materials Properties Database: Summary — NREL, accessed April 22, 2025, https://www.nrel.gov/docs/fy23osti/82830.pdf
- An Introduction to High Performance Computing and Its Intersection with Advances in Modeling Rare Earth Elements and Actinides — ACS Publications, accessed April 22, 2025, https://pubs.acs.org/doi/10.1021/bk-2021-1388.ch001
- QUANTITATIVE ANALYSIS OF RARE EARTHS BY X-RAY FLUORESCENCE SPECTROMETRY — IAEA INIS, accessed April 22, 2025, https://inis.iaea.org/collection/NCLCollectionStore/_Public/45/073/45073493.pdf
- Analytical Techniques for Detecting Rare Earth Elements in Geological Ores: Laser-Induced Breakdown Spectroscopy (LIBS), MFA-LIBS, Thermal LIBS, Laser Ablation Time-of-Flight Mass Spectrometry, Energy-Dispersive X-ray Spectroscopy, Energy-Dispersive X-ray Fluorescence Spectrometer, and Inductively Coupled Plasma Optical Emission Spectroscopy — MDPI, accessed April 22, 2025, https://www.mdpi.com/2075-163X/14/10/1004
- Chip Production’s Ecological Footprint: Mapping Climate and …, accessed April 22, 2025, https://www.interface-eu.org/publications/chip-productions-ecological-footprint
- AI Needs Critical Materials, Fast! But From Where? — Gravel2Gavel Construction & Real Estate Law Blog, accessed April 22, 2025, https://www.gravel2gavel.com/ai-critical-materials/
- Influence of different pretreatment methods on the analytical ability of PLS calibration model ((a): Lu; (b): Y). — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/figure/nfluence-of-different-pretreatment-methods-on-the-analytical-ability-of-PLS-calibration_fig4_370967140
- Predictive performance of different PLS calibration models ((a): R²p; (b): MREP) — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/figure/Predictive-performance-of-different-PLS-calibration-models-a-R-2-p-b-MREP_fig7_370967140
- Nvidia H100 GPUs: Supply and Demand · GPU Utils ⚡️, accessed April 22, 2025, https://gpus.llm-utils.org/nvidia-h100-gpus-supply-and-demand/
- Ecodesign requirements for circularity of servers and data storage products — Publications, accessed April 22, 2025, https://pub.norden.org/temanord2024-518/4-material-aspects-of-servers-and-data-storage-products-technical-analysis-.html
- TECHNICAL PROGRAM — The Minerals, Metals & Materials Society, accessed April 22, 2025, https://www.tms.org/specialtyCongress/2025/downloads/Specialty2025-PreliminaryTechnicalProgram1.pdf
- Science and Technology — Lawrence Livermore National Laboratory, accessed April 22, 2025, https://st.llnl.gov/sites/default/files/2025-01/LLNL_ReadAhead_FY25_vPF.pdf
- National Landscape of High- Impact Crosscutting Opportunities for Next Generation Harsh Environment Materials and Manufacturing — Department of Energy, accessed April 22, 2025, https://www.energy.gov/sites/default/files/2025-01/Harsh_Environment_Materials_Roadmap_-_Draft.pdf
- Atlas of AI Summary of Key Ideas and Review | Kate Crawford — Blinkist, accessed April 22, 2025, https://www.blinkist.com/en/books/atlas-of-ai-en
- Digital Peacebuilding: A Framework for Critical–Reflexive Engagement — Oxford Academic, accessed April 22, 2025, https://academic.oup.com/isp/article/24/3/265/6847534
- HAROLD INNIS AND ‘THE BIAS OF COMMUNICATION’ — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/316523984_HAROLD_INNIS_AND_'THE_BIAS_OF_COMMUNICATION'
- Harold Innis: Unpacking the Theory of Media Bias — Journalism University, accessed April 22, 2025, https://journalism.university/media-and-communication-theories/harold-innis-theory-media-bias/
- An Agonistic Approach to Technological Conflict — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/344541984_An_Agonistic_Approach_to_Technological_Conflict
- An Agonistic Approach to Technological Conflict — WUR eDepot, accessed April 22, 2025, https://edepot.wur.nl/534029
- A Critique of Agonistic Politics — International Journal of Zizek Studies, accessed April 22, 2025, http://zizekstudies.org/index.php/IJZS/article/viewFile/95/378
- Democratic Politics and Conflict: An Agonistic Approach — University of Michigan, accessed April 22, 2025, https://quod.lib.umich.edu/p/pc/12322227.0009.011/--democratic-politics-and-conflict-an-agonistic-approach?rgn=main;view=fulltext
- Democracy as a Non-Hegemonic Struggle? Disambiguating Chantal Mouffe’s Agonistic Model of Politics — Void Network, accessed April 22, 2025, https://voidnetwork.gr/wp-content/uploads/2016/09/Democracy-as-a-Non-Hegemonic-Struggle-Disambiguating-Chantal-Mouffes-Agonistic-Model-of-Politics-by-Stefan-Rummens.pdf
- Full article: Thinking hegemony otherwise — an educational critique of Mouffe’s agonism, accessed April 22, 2025, https://www.tandfonline.com/doi/full/10.1080/1600910X.2024.2365758
- Intense Competition Across the AI Stack — CCIA — Computer & Communications Industry Association, accessed April 22, 2025, https://ccianet.org/articles/intense-competition-across-the-ai-stack/
- Deepseek AI and the Power of Competition to Propel AI Adoption — Catalyst Connection, accessed April 22, 2025, https://www.catalystconnection.org/news-blog/deepseek-ai-and-the-power-of-competition-to-propel-ai-adoption/
- The Path to Accelerating AI Computing Power — NADDOD Blog, accessed April 22, 2025, https://www.naddod.com/blog/the-path-to-accelerating-ai-computing-power
- Old Moats for New Models: Openness, Control, and Competition in Generative AI — NBER Working Paper Series, Pierre Azoulay, Joshua L. Krieger, et al., accessed April 22, 2025, https://www.nber.org/system/files/working_papers/w32474/w32474.pdf
- Data centers at the crossroads of technology and resilience — PwC, accessed April 22, 2025, https://www.pwc.com/us/en/industries/tmt/library/hyperscale-data-center.html
- Size Of Datasets For Llm Training — Restack, accessed April 22, 2025, https://www.restack.io/p/large-scale-ai-training-answer-dataset-sizes-cat-ai
- Understanding LLMs: Model size, training data, and tokenization — Outshift — Cisco, accessed April 22, 2025, https://outshift.cisco.com/blog/understanding-llms-model-size-training-data-tokenization
- Common Crawl — Wikipedia, accessed April 22, 2025, https://en.wikipedia.org/wiki/Common_Crawl
- Open Data for the World: Common Crawl Frees Petabytes of Web Data | datos.gob.es, accessed April 22, 2025, https://datos.gob.es/en/blog/open-data-world-common-crawl-frees-petabytes-web-data
- COMMON CRAWL FOUNDATION — IIPC, accessed April 22, 2025, https://netpreserve.org/about-us/members/common-crawl-foundation/
- Training Data for the Price of a Sandwich: Common Crawl’s Impact on Generative AI, accessed April 22, 2025, https://foundation.mozilla.org/en/research/library/generative-ai-training-data/
- You may not grasp just how large the Common Crawl dataset is. It’s been growing — Hacker News, accessed April 22, 2025, https://news.ycombinator.com/item?id=26596310
- [2101.00027] The Pile: An 800GB Dataset of Diverse Text for Language Modeling — ar5iv, accessed April 22, 2025, https://ar5iv.labs.arxiv.org/html/2101.00027
- The Pile (dataset) — Wikipedia, accessed April 22, 2025, https://en.wikipedia.org/wiki/The_Pile_(dataset)
- The Pile, accessed April 22, 2025, https://pile.eleuther.ai/
- Open-Sourced Training Datasets for Large Language Models (LLMs) — Kili Technology, accessed April 22, 2025, https://kili-technology.com/large-language-models-llms/9-open-sourced-datasets-for-training-large-language-models
- LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs — ar5iv — arXiv, accessed April 22, 2025, https://ar5iv.labs.arxiv.org/html/2111.02114
- Processing 2 Billion Images for Stable Diffusion Model Training — Definitive Guides with Ray Series — Anyscale, accessed April 22, 2025, https://www.anyscale.com/blog/processing-2-billion-images-for-stable-diffusion-model-training-definitive-guides-with-ray-series
- LAION-5B: A NEW ERA OF OPEN LARGE-SCALE MULTI-MODAL DATASETS, accessed April 22, 2025, https://laion.ai/laion-5b-a-new-era-of-open-large-scale-multi-modal-datasets/
- Download & stream 400M images + text — Lightning AI, accessed April 22, 2025, https://lightning.ai/lightning-ai/studios/download-stream-400m-images-text
- img2dataset/dataset_examples/laion400m.md at main — GitHub, accessed April 22, 2025, https://github.com/rom1504/img2dataset/blob/main/dataset_examples/laion400m.md
- Is Data Really a Barrier to Entry? Rethinking Competition Regulation in Generative AI, accessed April 22, 2025, https://www.mercatus.org/research/working-papers/data-really-barrier-entry-rethinking-competition-regulation-generative-ai
- Limits to AI Growth: The Ecological and Social Consequences of Scaling — arXiv, accessed April 22, 2025, https://arxiv.org/html/2501.17980v1
- JISEA Green Computing Catalyzer and Intel Build Framework To Measure Artificial Intelligence’s Energy Use — NREL, accessed April 22, 2025, https://www.nrel.gov/news/program/2025/jisea-green-computing-catalyzer-and-intel-build-framework-to-measure-artificial-intelligences-energy-use.html
- Using AI and Data to Promote Sustainable Energy Supply — Sand Technologies, accessed April 22, 2025, https://www.sandtech.com/insight/using-ai-and-data-to-promote-sustainable-energy-supply/
- Generative AI: energy consumption soars — Polytechnique Insights, accessed April 22, 2025, https://www.polytechnique-insights.com/en/columns/energy/generative-ai-energy-consumption-soars/
- iea.blob.core.windows.net, accessed April 22, 2025, https://iea.blob.core.windows.net/assets/ed0483fd-aab4-4cf9-b25a-5aa362b56a2f/EnergyandAI.pdf
- Applications in artificial intelligence and multi-agent systems | Game …, accessed April 22, 2025, https://library.fiveable.me/game-theory/unit-14/applications-artificial-intelligence-multi-agent-systems/study-guide/DVaH3Br8LSGzlAW4
- Rethinking Strategic Mechanism Design In The Age Of Large Language Models: New Directions For Communication Systems — arXiv, accessed April 22, 2025, https://arxiv.org/html/2412.00495v1
- Game Theory Model for AI in Game Design — Restack, accessed April 22, 2025, https://www.restack.io/p/ai-for-game-design-answer-game-theory-model-cat-ai
- Game Theory Meets Large Language Models: A Systematic Survey — arXiv, accessed April 22, 2025, https://arxiv.org/html/2502.09053v1
- Generative AI for Game Theory-based Mobile Networking — arXiv, accessed April 22, 2025, https://arxiv.org/html/2404.09699v2
- Generative AI for Game Theory-based Mobile Networking — arXiv, accessed April 22, 2025, https://arxiv.org/html/2404.09699v1
- Mitigating Data Center Development’s Impacts — The Piedmont Environmental Council, accessed April 22, 2025, https://www.pecva.org/work/energy-work/mitigating-data-center-developments-impacts/
- Study Finds Headroom on the Grid for Data Centers — Nicholas Institute — Duke University, accessed April 22, 2025, https://nicholasinstitute.duke.edu/articles/study-finds-headroom-grid-data-centers
- Sustainable AI: Mitigating water risks in the data centre boom — SLR Consulting, accessed April 22, 2025, https://www.slrconsulting.com/insights/sustainable-ai-mitigating-water-risks-in-the-data-centre-boom/
- Numbers will not save us: Agonistic data practices — Taylor & Francis Online, accessed April 22, 2025, https://www.tandfonline.com/doi/full/10.1080/01972243.2021.1920081
- What Is Water Usage Effectiveness (WUE) in Data Centers …, accessed April 22, 2025, https://blog.equinix.com/blog/2024/11/13/what-is-water-usage-effectiveness-wue-in-data-centers/
- Harold Adams Innis: The Bias of Communications & Monopolies of …, accessed April 22, 2025, https://www.media-studies.ca/articles/innis.htm
- What is a hyperscale data center? — IBM, accessed April 22, 2025, https://www.ibm.com/think/topics/hyperscale-data-center
- What Is a Hyperscale Data Center? Benefits and How They Work | Liquid Web, accessed April 22, 2025, https://www.liquidweb.com/blog/what-is-hyperscale-computing/
- Environmental and Community Impacts of Large Data Centers …, accessed April 22, 2025, https://gradientcorp.com/trend_articles/impacts-of-large-data-centers/
- Conflicts at the Crossroads: Unpacking Land-Use Challenges in the Greater Bay Area with the “Production–Living–Ecological” Perspective — MDPI, accessed April 22, 2025, https://www.mdpi.com/2073-445X/14/2/249
- Full article: Advancing a Material and Epistemological Turn in the Study of AI: A Review and New Directions for Journalism Research — Taylor & Francis Online, accessed April 22, 2025, https://www.tandfonline.com/doi/full/10.1080/21670811.2025.2485255?src=
- Materiality and Risk in the Age of Pervasive AI Sensors — arXiv, accessed April 22, 2025, https://arxiv.org/pdf/2402.11183
- Materiality and Organizing: Social Interaction in a Technological Work — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/249962347_Materiality_and_Organizing_Social_Interaction_in_a_Technological_Work
- Full article: Beyond the ‘e-’ in e-HRM: integrating a sociomaterial perspective, accessed April 22, 2025, https://www.tandfonline.com/doi/full/10.1080/09585192.2021.1913624
- SOCIAL MEDIA AFFORDANCES FOR MEDIATED SCIENCE COMMUNICATION DURING THE COVID- 19 PANDEMIC — Taylor & Francis eBooks, accessed April 22, 2025, https://api.taylorfrancis.com/content/chapters/edit/download?identifierName=doi&identifierValue=10.4324/9781032678542-16&type=chapterpdf
- Position Papers — New Materialism, accessed April 22, 2025, https://newmaterialism.eu/working-groups/working-group-2/position-papers.html
- Methodologies in New Materialism and Discourse Analysis [Interactive Article], accessed April 22, 2025, https://discourseanalyzer.com/methodologies-in-new-materialism-and-discourse-analysis/
- New Materialism: An Ontology for the Anthropocene — UNM Digital Repository, accessed April 22, 2025, https://digitalrepository.unm.edu/cgi/viewcontent.cgi?article=4062&context=nrj
- Technology/Technicity/Techné — New Materialism, accessed April 22, 2025, https://newmaterialism.eu/almanac/t/technology-technicity-techne.html
- DRAFT ENVIRONMENTAL ASSESSMENT Honotua Fiber Optic Cable System Spencer Beach, Hawai’i TMK 6–2–02:08 — Hawaii.gov, accessed April 22, 2025, https://files.hawaii.gov/dbedt/erp/EA_EIS_Library/2009-03-08-HA-DEA-Honotua-Fiber-Optic-Cable.pdf
- Final Environmental Assessment: Hawaiki Submarine Cable System, Kapolei Landing — Hawaii.gov, accessed April 22, 2025, https://files.hawaii.gov/dbedt/erp/EA_EIS_Library/2017-05-23-OA-FEA-Hawaiki-Submarine-Cable-Kapolei-Landing.pdf
- 3.1 Panama Fuel — Export Preview | Digital Logistics Capacity Assessments, accessed April 22, 2025, https://lca.logcluster.org/print-preview-current-section/5121
- Faculty of Law International regulation of Submarine cables. An analysis of the different States’ practices. Álvaro Agulló M — Munin, accessed April 22, 2025, https://munin.uit.no/bitstream/handle/10037/20068/thesis.pdf
- Steven Jackson | Department of Science & Technology Studies, accessed April 22, 2025, https://sts.cornell.edu/steven-jackson
- Conclusion — Emerald Insight, accessed April 22, 2025, https://www.emerald.com/insight/content/doi/10.1108/978-1-80043-762-320211009/full/html
- Orientalist Networks and Their Afterlives — myMESA, accessed April 22, 2025, https://my-mesa.org/program/sessions/view/eyJpdiI6ImM1VlNkRzY3b2FlaWRiVDFCWG1BQ3c9PSIsInZhbHVlIjoiTjRIMThyREsxQnVISVdvQS9jYnN4Zz09IiwibWFjIjoiZDUyODlmZTIwNjJiMTkyZWE5ODAyYmMxMGRlMDYzNDRkOWFlOTg0MDZkNWMzZWU3N2UwZjI5YzRiMzA1NDgwMyIsInRhZyI6IiJ9
- core.ac.uk, accessed April 22, 2025, https://core.ac.uk/download/pdf/129542964.pdf
- Harold Innis’s communications theories — Wikipedia, accessed April 22, 2025, https://en.wikipedia.org/wiki/Harold_Innis%27s_communications_theories
- The Bias of Communication, 2nd Edition — Harold Innis — Scribd, accessed April 22, 2025, https://www.scribd.com/document/796586808/Download-full-The-Bias-of-Communication-2nd-Edition-Harold-Innis-ebook-all-chapters
- Communication — Media and Time — Oxford Bibliographies, accessed April 22, 2025, https://www.oxfordbibliographies.com/abstract/document/obo-9780199756841/obo-9780199756841-0242.xml
- Digital media practice and medium theory informing learning on mobile touch screen devices — VU Research Repository, accessed April 22, 2025, https://vuir.vu.edu.au/36450/1/RENOLDS%20Victor-Final%20Thesis_Redacted.pdf
- Full article: Explaining the Mediatisation Approach — Taylor & Francis Online, accessed April 22, 2025, https://www.tandfonline.com/doi/full/10.1080/13183222.2017.1298556
- The Bias of Communication — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/365065264_The_Bias_of_Communication
- The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence, accessed April 22, 2025, https://utoronto.scholaris.ca/server/api/core/bitstreams/79e16eb6-4bfa-4a75-85b1-a52121f5144d/content
- Harold Innis in the New Century : Reflections and Refractions, accessed April 22, 2025, https://ndl.ethernet.edu.et/bitstream/123456789/13400/1/1325.pdf
- (PDF) The Handbook of Media and Mass Communication Theory — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/277702361_The_Handbook_of_Media_and_Mass_Communication_Theory
- (PDF) Research on McLuhan’s Media Theory — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/382816255_Research_on_McLuhan's_Media_Theory
- Reacting to AI: What Would Marshall McLuhan Say? | The Tyee, accessed April 22, 2025, https://thetyee.ca/Culture/2024/08/14/Reacting-AI-What-Would-Marshall-McLuhan-Say/
- The Invisible Obelisk: Marshall McLuhan and Media Studies on the Blockchain, accessed April 22, 2025, https://archive.devcon.org/archive/watch/5/the-invisible-obelisk-marshall-mcluhan-and-media-studies-on-the-blockchain/
- The medium is the massage — Taylor & Francis Online, accessed April 22, 2025, https://www.tandfonline.com/doi/full/10.1080/1369118X.2013.868021
- TECHNOLOGY AS EXTENSIONS OF MAN: THE USE OF MARSHALL MCLUHAN’S TETRAD OF MEDIA EFFECTS IN AN ANALYSIS OF THE METAVERSE — Unisa IR, accessed April 22, 2025, https://uir.unisa.ac.za/bitstream/handle/10500/31283/dissertation_davie_tb.pdf?sequence=1&isAllowed=y
- Media Ecology in Journalism: A Phenomenological Study of the Adaptation Strategies of Senior Journalists — Liberty University, accessed April 22, 2025, https://digitalcommons.liberty.edu/cgi/viewcontent.cgi?article=7120&context=doctoral
- Social media and McLuhan in today’s education system — Scholarly Articles, accessed April 22, 2025, http://scholararticles.net/social-media-and-mcluhan-in-todays-education-system/
- web.mit.edu, accessed April 22, 2025, https://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf
- McLuhan Probes the Impact of Mass Media on Society | EBSCO Research Starters, accessed April 22, 2025, https://www.ebsco.com/research-starters/communication-and-mass-media/mcluhan-probes-impact-mass-media-society
- The Medium is the Message: Technologies Controlling Societal Direction, accessed April 22, 2025, https://www.aiu.edu/blog/the-medium-is-the-message-technologies-controlling-societal-direction/
- Amplifying the Message: The Right Medium Matters | TV Tech — TVTechnology, accessed April 22, 2025, https://www.tvtechnology.com/opinion/amplifying-the-message-the-right-medium-matters
- The Digital Mind: How Computers (Re)Structure Human Consciousness — MDPI, accessed April 22, 2025, https://www.mdpi.com/2409-9287/8/1/4
- scholarspace.manoa.hawaii.edu, accessed April 22, 2025, https://scholarspace.manoa.hawaii.edu/bitstreams/e9e0817f-d9a7-47b4-bc21-9ffd1714ee1c/download
- [Feb. 26, 2025 (Wednesday), 7–8 am PST] Webinar series on “Data, Algorithms and Tools for Life Cycle Sustainability Assessment”, accessed April 22, 2025, https://is4ie.org/events/other-events/116
- The future of artificial intelligence in the context of industrial ecology — Scholarly Publications Leiden University, accessed April 22, 2025, https://scholarlypublications.universiteitleiden.nl/access/item%3A3464618/view
- LCA — Roland Geyer, accessed April 22, 2025, https://www.rolandgeyer.com/lca
- Emission Factor Recommendation for Life Cycle Assessments with Generative AI | Environmental Science & Technology — ACS Publications, accessed April 22, 2025, https://pubs.acs.org/doi/10.1021/acs.est.4c12667
- The future of artificial intelligence in the context of industrial ecology — Research Collection, accessed April 22, 2025, https://www.research-collection.ethz.ch/bitstream/20.500.11850/561771/2/JofIndustrialEcology-2022-Donati-Thefutureofartificialintelligenceinthecontextofindustrialecology.pdf
- Embed systemic equity throughout industrial ecology applications: How to address machine learning unfairness and bias — PMC — PubMed Central, accessed April 22, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11667658/
- (PDF) White Paper on Global Artificial Intelligence Environmental Impact — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/384364115_White_Paper_on_Global_Artificial_Intelligence_Environmental_Impact
- Life-cycle Assessment : Inventory Guidelines and Principles — epa nepis, accessed April 22, 2025, https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=9101W4I6.TXT
- [2502.01671] Life-Cycle Emissions of AI Hardware: A Cradle-To-Grave Approach and Generational Trends — arXiv, accessed April 22, 2025, https://arxiv.org/abs/2502.01671
- (PDF) AI-Enhanced lifecycle assessment of renewable energy systems — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/383874831_AI-Enhanced_lifecycle_assessment_of_renewable_energy_systems
- AI’s Environmental Impact: Calculated and Explained — Arbor.eco, accessed April 22, 2025, https://www.arbor.eco/blog/ai-environmental-impact
- The carbon emissions of writing and illustrating are lower for AI than for humans — PMC, accessed April 22, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10867074/
- Life-Cycle Emissions of AI Hardware: A Cradle-To-Grave Approach and Generational Trends — arXiv, accessed April 22, 2025, https://arxiv.org/html/2502.01671v1
- Unraveling the Hidden Environmental Impacts of AI Solutions for Environment Life Cycle Assessment of AI Solutions — arXiv, accessed April 22, 2025, https://arxiv.org/pdf/2110.11822
- Google Cloud measures its climate impact through LCA, accessed April 22, 2025, https://cloud.google.com/blog/topics/sustainability/google-cloud-measures-its-climate-impact-through-life-cycle-assessment
- APPROPRIATE Life Cycle Assessment: A PROcess-Specific, PRedictive Impact AssessmenT Method for Emerging Chemical Processes | ACS Sustainable Chemistry & Engineering — ACS Publications, accessed April 22, 2025, https://pubs.acs.org/doi/10.1021/acssuschemeng.2c07682
- TPUs improved carbon-efficiency of AI workloads by 3x | Google …, accessed April 22, 2025, https://cloud.google.com/blog/topics/sustainability/tpus-improved-carbon-efficiency-of-ai-workloads-by-3x
- How sustainable is Artificial Intelligence? — Ramboll Group, accessed April 22, 2025, https://www.ramboll.com/en-us/insights/decarbonise-for-net-zero/how-sustainable-is-artificial-intelligence
- Rethinking Concerns About AI’s Energy Use — Center for Data Innovation, accessed April 22, 2025, https://www2.datainnovation.org/2024-ai-energy-use.pdf
- Are LLMs destroying the planet? — Softwire, accessed April 22, 2025, https://www.softwire.com/insights/are-llms-destroying-the-planet/
- Integration of environment and nutrition in life cycle assessment of food items: opportunities and challenges — FAO Knowledge Repository, accessed April 22, 2025, https://openknowledge.fao.org/bitstreams/b881d890-f90b-435e-8af2-24b19e342a11/download
- The ecological footprint of medical AI — PMC — PubMed Central, accessed April 22, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10853292/
- AI — save or ruin the environment? — Bowdoin Student Organizations, accessed April 22, 2025, https://students.bowdoin.edu/bowdoin-science-journal/csci-tech/ai-save-or-ruin-the-environment/
- Global Material Flows Database — International Resource Panel, accessed April 22, 2025, https://www.resourcepanel.org/global-material-flows-database
- (PDF) AI for Identity and Access Management (IAM) in the Cloud: Exploring the Potential of Artificial Intelligence to Improve User Authentication, Authorization, and Access Control within Cloud-Based Systems — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/377694736_AI_for_Identity_and_Access_Management_IAM_in_the_Cloud_Exploring_the_Potential_of_Artificial_Intelligence_to_Improve_User_Authentication_Authorization_and_Access_Control_within_Cloud-Based_Systems
- Journal of Industrial Ecology, Yale University | IDEAS/RePEc, accessed April 22, 2025, https://ideas.repec.org/s/bla/inecol5.html
- New methodology transforms Material Flow Analysis for a circular economy | Imperial News, accessed April 22, 2025, https://www.imperial.ac.uk/news/260813/new-methodology-transforms-material-flow-analysis/
- Journal of Industrial Ecology, Yale University | IDEAS/RePEc, accessed April 22, 2025, https://ideas.repec.org/s/bla/inecol3.html
- (PDF) A Review of Material Flow Analysis — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/251955087_A_Review_of_Material_Flow_Analysis
- Special focus on the recycling of personal computers. presented by in Stuttgart, Germany — Research Collection — ETH Zürich, accessed April 22, 2025, https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/150014/eth-29754-02.pdf
- Kathleen McMahon’s research works | Delft University of Technology and other places, accessed April 22, 2025, https://www.researchgate.net/scientific-contributions/Kathleen-McMahon-2137661332
- The Effective Integration of Multi-Factor Authentication (MFA) with Zero Trust Security, accessed April 22, 2025, https://www.researchgate.net/publication/389586949_The_Effective_Integration_of_Multi-Factor_Authentication_MFA_with_Zero_Trust_Security/download
- Abstracts Book of — International Society for Industrial Ecology, accessed April 22, 2025, https://is4ie.org/Resources/Documents/ISIE_2009v1.pdf
- Material Flow Analysis: An Analytical Tool for Strategic Planning Towards a Zero-Waste Solution for End-of-Life Ballast Flows on a Track and Ballast Renewal Site (French Conventional Line) — MDPI, accessed April 22, 2025, https://www.mdpi.com/2079-9276/13/12/165
- Sustainability implications of artificial intelligence in the chemical industry: A conceptual framework, accessed April 22, 2025, https://par.nsf.gov/servlets/purl/10309101
- Uncovering the Spatiotemporal Dynamics of Urban Infrastructure …, accessed April 22, 2025, https://pubs.acs.org/doi/10.1021/acs.est.8b03111
- Material Flows and Efficiency | Request PDF — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/360377087_Material_Flows_and_Efficiency
- Material Flow Analysis (MFA) | Minimum.com, accessed April 22, 2025, https://www.minimum.com/resources/material-flow-analysis-mfa
- jie.yale.edu, accessed April 22, 2025, https://jie.yale.edu/sites/default/files/jiev23iss2bibtex.bib
- Consequences of Future Data Center Deployment in Canada on Electricity Generation and Environmental Impacts: A 2015–2030 Prospective Study — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/310506732_Consequences_of_Future_Data_Center_Deployment_in_Canada_on_Electricity_Generation_and_Environmental_Impacts_A_2015-2030_Prospective_Study
- The AI Data Center Boom: Strategies for Sustainable Growth and Risk Management — Aon, accessed April 22, 2025, https://www.aon.com/en/insights/articles/the-ai-data-center-boom-strategies-for-sustainable-growth-and-risk-management
- Reducing the carbon footprint of ICT products through material efficiency strategies: A life cycle analysis of smartphones — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/350549425_Reducing_the_carbon_footprint_of_ICT_products_through_material_efficiency_strategies_A_life_cycle_analysis_of_smartphones
- Bend the trend — International Resource Panel, accessed April 22, 2025, https://www.resourcepanel.org/sites/default/files/documents/document/media/gro24_full_report_29feb_final_for_web.pdf
- Data Center Fabric Market Size, Share | Industry Report, 2030 — Grand View Research, accessed April 22, 2025, https://www.grandviewresearch.com/industry-analysis/data-center-fabric-market-report
- A review of methods contributing to the assessment of the environmental sustainability of industrial systems | Request PDF — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/279888254_A_review_of_methods_contributing_to_the_assessment_of_the_environmental_sustainability_of_industrial_systems
- The geography of generative AI’s workforce impacts will likely differ from those of previous technologies — Brookings Institution, accessed April 22, 2025, https://www.brookings.edu/articles/the-geography-of-generative-ais-workforce-impacts-will-likely-differ-from-those-of-previous-technologies/
- Artificial intelligence, emotional labor, and the quest for sociological and political imagination among low-skilled workers | Policy and Society | Oxford Academic, accessed April 22, 2025, https://academic.oup.com/policyandsociety/article/44/1/116/7900406
- Full article: Automation, artificial intelligence, and job displacement …, accessed April 22, 2025, https://www.tandfonline.com/doi/full/10.1080/0023656X.2025.2477153?src=exp-la
- Societal Adaptation to AI Human-Labor Automation — arXiv, accessed April 22, 2025, https://www.arxiv.org/pdf/2501.03092
- Rethinking Automation and the Future of Work with Hannah Arendt — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/390176548_Rethinking_Automation_and_the_Future_of_Work_with_Hannah_Arendt
- How We Work, AI, and Human Engagement — The Scholarly Kitchen, accessed April 22, 2025, https://scholarlykitchen.sspnet.org/2024/01/31/in-this-post-robert-harington-looks-to-hannah-arendt-and-her-1958-book-the-human-condition-for-help-in-understanding-the-nature-of-how-we-work-asking-how-an-ai-world-may-affect-the-nature-of-our-wo/
- Hannah Arendt’s machines: Re-Evaluating marketplace theory in the …, accessed April 22, 2025, https://www.tandfonline.com/doi/abs/10.1080/21689725.2020.1743196
- 60 years ago, Hannah Arendt provided a haunting critique of modernity. Society will become stuck in accelerating cycles of labor and consumption, she argued. Free human action will be replaced by instrumentalization, and meaning will be replaced by productivity… : r/philosophy — Reddit, accessed April 22, 2025, https://www.reddit.com/r/philosophy/comments/1f1nzkb/60_years_ago_hannah_arendt_provided_a_haunting/
- Labour, Work, and Action: Arendt’s Phenomenology of Practical Life — Oxford Academic, accessed April 22, 2025, https://academic.oup.com/jope/article/44/2-3/275-300/6895815
- Exploring Totalitarian Elements of Artificial Intelligence in Higher Education With Hannah Arendt — HAW Hamburg, accessed April 22, 2025, https://reposit.haw-hamburg.de/bitstream/20.500.12738/14302/1/Exploring-Totalitarian-Elements-of-Artificial-Intelligence-in-Higher-Education-With-Hannah-Arendt.pdf
- Arendt among the machines: Labour, work and action on digital platforms, accessed April 22, 2025, https://www.mctd.ac.uk/arendt-among-the-machines-labour-work-and-action-on-digital-platforms/
- Algorithmic Bias and Social Inequality: The Hidden Human Rights …, accessed April 22, 2025, https://analyticsweek.com/algorithmic-bias-and-social-inequality-the-hidden-human-rights-crisis/
- Algorithmic bias — Wikipedia, accessed April 22, 2025, https://en.wikipedia.org/wiki/Algorithmic_bias
- What Is Algorithmic Bias? | IBM, accessed April 22, 2025, https://www.ibm.com/think/topics/algorithmic-bias
- Overcoming AI Bias: Understanding, Identifying and Mitigating Algorithmic Bias in Healthcare — Accuray, accessed April 22, 2025, https://www.accuray.com/blog/overcoming-ai-bias-understanding-identifying-and-mitigating-algorithmic-bias-in-healthcare/
- The problem of algorithmic bias in AI-based military decision support systems, accessed April 22, 2025, https://blogs.icrc.org/law-and-policy/2024/09/03/the-problem-of-algorithmic-bias-in-ai-based-military-decision-support-systems/
- Mitigating Bias in Artificial Intelligence — Berkeley Haas, accessed April 22, 2025, https://haas.berkeley.edu/wp-content/uploads/UCB_Playbook_R10_V2_spreads2.pdf
- Understanding algorithmic bias and how to build trust in AI — PwC, accessed April 22, 2025, https://www.pwc.com/us/en/tech-effect/ai-analytics/algorithmic-bias-and-trust-in-ai.html
- Algorithmic Bias and Accountability: The Double B(l)ind for Marginalized Job Applicants, accessed April 22, 2025, https://lawreview.colorado.edu/print/volume-96/algorithmic-bias-and-accountability-the-double-blind-for-marginalized-job-applicants-chris-chambers-goodman/
- CAN AN ALGORITHM BE AGONISTIC? Scenes of Contest in Calculated Publics P — Microsoft, accessed April 22, 2025, https://www.microsoft.com/en-us/research/wp-content/uploads/2017/10/CanAnAlgorithmBeAgonistic-April2016.pdf
- AI agent governance — IBM, accessed April 22, 2025, https://www.ibm.com/think/insights/ai-agent-governance
- Proactive Contestation of AI Decision-making — Verfassungsblog, accessed April 22, 2025, https://verfassungsblog.de/roa-proactive-contestation-of-ai-decision-making/
- Towards accountability in the use of Artificial Intelligence for Public Administrations | Algorithm Watch, accessed April 22, 2025, https://algorithmwatch.org/en/wp-content/uploads/2021/05/Accountability-in-the-use-of-AI-for-Public-Administrations-AlgorithmWatch-2021.pdf
- Human autonomy with AI in the loop: Philosophical Psychology, accessed April 22, 2025, https://www.tandfonline.com/doi/abs/10.1080/09515089.2024.2448217
- Full article: Human autonomy with AI in the loop, accessed April 22, 2025, https://www.tandfonline.com/doi/full/10.1080/09515089.2024.2448217?af=R
- From Automation to Autonomy: Human Machine Relations … — ucf stars, accessed April 22, 2025, https://stars.library.ucf.edu/cgi/viewcontent.cgi?article=1226&context=hmc
- (PDF) AI Agents: A Systematic Review of Architectures, Components …, accessed April 22, 2025, https://www.researchgate.net/publication/389562150_AI_Agents_A_Systematic_Review_of_Architectures_Components_and_Evolutionary_Trajectories_in_Autonomous_Digital_Systems
- RAG, AI Agents, and Agentic RAG: An In-Depth Review and Comparative Analysis, accessed April 22, 2025, https://www.digitalocean.com/community/conceptual-articles/rag-ai-agents-agentic-rag-comparative-analysis
- Preparing for the AI Agent Revolution: Navigating the Legal and …, accessed April 22, 2025, https://stoneturn.com/insight/preparing-for-the-ai-agent-revolution/
- What are AI agents: Benefits and business applications | SAP, accessed April 22, 2025, https://www.sap.com/resources/what-are-ai-agents
- AI Agents: Evolution, Architecture, and Real-World Applications — arXiv, accessed April 22, 2025, https://arxiv.org/html/2503.12687v1
- What Are AI Agents? — IBM, accessed April 22, 2025, https://www.ibm.com/think/topics/ai-agents
- A Survey on the Optimization of Large Language Model-based Agents — arXiv, accessed April 22, 2025, https://arxiv.org/html/2503.12434v1
- What are AI agents: Benefits and business applications | SAP, accessed April 22, 2025, https://www.sap.com/denmark/resources/what-are-ai-agents
- (PDF) On the Morality of Artificial Agents — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/227190404_On_the_Morality_of_Artificial_Agents
- THE HUMAN CONDITION IN AN ALGORITHMIZED WORLD — A CRITIQUE THROUGH THE LENS OF 20TH-CENTURY JEWISH THINKERS AND THE CONCEPTS OF RATIONALITY, ALTERITY AND HISTORY — Nathalie A. Smuha, Reconstructing Judaism, accessed April 22, 2025, https://www.reconstructingjudaism.org/files/nsmuha_humancondition.pdf
- Towards an Ethical Framework for Generative Artificial Intelligence (AI) Use in Education — ERIC, accessed April 22, 2025, https://files.eric.ed.gov/fulltext/EJ1428206.pdf
- Artificial intelligence | Peter Levine, accessed April 22, 2025, https://peterlevine.ws/?cat=43
- Old facts, new beginnings. Thinking with Arendt about algorithmic decision-making — European Consortium for Political Research (ECPR), accessed April 22, 2025, https://ecpr.eu/Filestore/CustomContent/Standing%20Groups/SGPL%20-%20Seminar%20Series/Arendt%20and%20algorithms_Dec2020.pdf
- (PDF) Exploring Totalitarian Elements of Artificial Intelligence in Higher Education With Hannah Arendt — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/373507736_Exploring_Totalitarian_Elements_of_Artificial_Intelligence_in_Higher_Education_With_Hannah_Arendt
- The Digital Agora in Crisis — by Kevin P. Lee — Substack, accessed April 22, 2025, https://substack.com/home/post/p-152695048?utm_campaign=post&utm_medium=web
- The Digitised Public Sphere: Re-Defining Democratic … — SciSpace, accessed April 22, 2025, https://scispace.com/pdf/the-digitised-public-sphere-re-defining-democratic-cultures-4v5aolieq7.pdf
- Agonism, decision, power: The art of working unfinished — EconStor, accessed April 22, 2025, https://www.econstor.eu/bitstream/10419/280770/1/1872032028.pdf
- Toward an agonistic model of democracy — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/262627579_Toward_an_agonistic_model_of_democracy
- Mediating Agonistic Peace — Lund University Publications, accessed April 22, 2025, https://lup.lub.lu.se/student-papers/record/9081098/file/9081102.pdf
- Agonism | Conflict, Debate & Dialogue — Britannica, accessed April 22, 2025, https://www.britannica.com/topic/agonism-philosophy
- Conflicts on the Threshold of Democratic Orders: A Critical Encounter with Mouffe’s Theory of Agonistic Politics — Taylor & Francis Online, accessed April 22, 2025, https://www.tandfonline.com/doi/abs/10.1080/20403313.2017.1382219
- An Agonistic Theory of Democratic Parliamentarism. The Case of the Walloon Parliament, accessed April 22, 2025, https://www.scirp.org/journal/paperinformation?paperid=114812
- What is Wrong with Agonistic Pluralism? Reflections on Conflict in Democratic Theory, accessed April 22, 2025, https://www.researchgate.net/publication/249626030_What_is_Wrong_with_Agonistic_Pluralism_Reflections_on_Conflict_in_Democratic_Theory
- (PDF) Chantal Mouffe’s Agonistic Project: Passions and Participation, accessed April 22, 2025, https://www.researchgate.net/publication/261699122_Chantal_Mouffe's_Agonistic_Project_Passions_and_Participation
- Why agonistic planning? Questioning Chantal Mouffes thesis of the ontological primacy of the political — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/304403680_Why_agonistic_planning_Questioning_Chantal_Mouffes_thesis_of_the_ontological_primacy_of_the_political
- The Agony of the Political, accessed April 22, 2025, https://pmc.iath.virginia.edu/issue.107/17.2tally.html
- Democratic Agonism: Conflict and Contestation in Divided Societies, accessed April 22, 2025, https://www.e-ir.info/2012/10/20/democratic-agonism-conflict-and-contestation-in-divided-societies/
- (PDF) Critical education for sustainability and Chantal Mouffe’s green democratic revolution, accessed April 22, 2025, https://www.researchgate.net/publication/377666138_Critical_education_for_sustainability_and_Chantal_Mouffe's_green_democratic_revolution
- On the limits of the political: The problem of overly permissive pluralism in Mouffe’s agonism — PhilArchive, accessed April 22, 2025, https://philarchive.org/archive/AYTOTL
- Citizen conceptions of democracy and support for artificial intelligence in government and politics | Request PDF — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/365598546_Citizen_conceptions_of_democracy_and_support_for_artificial_intelligence_in_government_and_politics
- Opportunities and challenges of AI-systems in political decision-making contexts — Frontiers, accessed April 22, 2025, https://www.frontiersin.org/journals/political-science/articles/10.3389/fpos.2025.1504520/full
- Framing contestation and public influence on policymakers: evidence from US artificial intelligence policy discourse — Oxford Academic, accessed April 22, 2025, https://academic.oup.com/policyandsociety/article/43/3/255/7644109
- Antagonistic AI — arXiv, accessed April 22, 2025, https://arxiv.org/pdf/2402.07350
- AI agents pose new governance challenges — Schwartz Reisman Institute, accessed April 22, 2025, https://srinstitute.utoronto.ca/news/challenges-in-governing-ai-agents
- Challenges in Governing AI Agents — Lawfare, accessed April 22, 2025, https://www.lawfaremedia.org/article/challenges-in-governing-ai-agents
- Agonistic participatory design: Working with marginalised social movements — ResearchGate, accessed April 22, 2025, https://www.researchgate.net/publication/254216013_Agonistic_participatory_design_Working_with_marginalised_social_movements
- Unmaking as Agonism: Using Participatory Design with Youth to Surface Difference in an Intergenerational Urban Context — Tapan Parikh, accessed April 22, 2025, https://tap2k.org/papers/sabie_unmaking_as_agonism.pdf
- Ecologies of Contestation in Participatory Design — Nitin Sawhney, accessed April 22, 2025, https://nitinsawhney.org/wp-content/uploads/2020/06/ecologies-of-contestation-pdc2020.pdf
- Agonistic Arrangements: Design for Dissensus in Environmental …, accessed April 22, 2025, https://www.ijdesign.org/index.php/IJDesign/article/view/5734/1088
- Design, Democracy and Agonistic Pluralism — DRS Digital Library, accessed April 22, 2025, https://dl.designresearchsociety.org/cgi/viewcontent.cgi?article=1812&context=drs-conference-papers
- Law and Agonistic Politics — 1st Edition — Andrew Schaap — Routledge B, accessed April 22, 2025, https://www.routledge.com/Law-and-Agonistic-Politics/Schaap/p/book/9781138259973
- Agonistic Democracy: Rethinking Political Institutions in Pluralist …, accessed April 22, 2025, https://www.amazon.com/Agonistic-Democracy-Rethinking-Institutions-Democratic/dp/113835404X
- Vincent August, Understanding democratic conflicts: The failures of agonistic theory, accessed April 22, 2025, https://philpapers.org/rec/AUGUDC-2
- (PDF) Parallel intelligence in three decades: a historical review and …, accessed April 22, 2025, https://www.researchgate.net/publication/383161035_Parallel_intelligence_in_three_decades_a_historical_review_and_future_perspective_on_ACP_and_cyber-physical-social_systems