Contingent Futures
May 27, 2022
Benjamin Bratton
Benjamin Bratton's work spans philosophy, architecture, computer science and geopolitics. He is Professor of Visual Arts at University of California, San Diego, and, until recently, Program Director of The Terraforming program at the Strelka Institute in Moscow, a three-year research program that considers the past and future role of cities as a planetary network by which humans occupy the Earth’s surface.
In his groundbreaking book The Stack: On Software and Sovereignty (MIT Press, 2016) Bratton proposes that different genres of planetary scale computation can be seen not as so many species evolving on their own, but as forming a coherent whole: an accidental megastructure that is both a computational infrastructure and a new governing architecture. His latest book The Revenge of The Real: Politics for a Post-Pandemic World (Verso Press, 2021), sees the COVID-19 pandemic as a crisis of political imagination and capacity in the West and in response argues on behalf of a positive biopolitics.
This interview was recorded on February 25th, 2022, one day after Russia invaded Ukraine. Three days later on February 28th, the Strelka Institute suspended its operations in protest along with The Terraforming program.
Joel Fear & Midjourney AI — Bathing Pools (2022)
What do you think of our magazine and the role of humor in this post-pandemic age?
Humor is an important defense mechanism, a healthy response to chronic traumas. Some people will burst out in laughter when they’re dazed after a car accident. There are ways in which laughter helps us embrace the intensity and absurdity of an experience; it’s part of how we cope. We’re all trying to pin down narrative structure in a world that feels chaotic, which can also lead to conspiracy theory. People would rather believe that shape-shifting lizards run the world than that human society is acephalic, because that idea is more comforting than the prospect that nobody is driving the train at all. Conspiracy theory finds its ground by projecting a narrative onto the world that allows people to believe in at least some form of coherency. Humor perhaps responds to the void more directly and with less evasion.
I’m sure we have plenty of readers who are conspiracy theorists.
In fact, everyone says you’re really paranoid.
What?
Everyone says you are paranoid.
That I’m paranoid?
That’s a joke.
You got me there.
You’ve mentioned before that you don’t view society as just a bunch of signs and symbols, and that there’s a deeper biochemical circumstance that is indifferent to the kinds of narratives that we may try to project upon it. How have our perspectives on the relationships between people and things shifted since the pandemic, and how do we teach ourselves to see, what you call, the epidemiological view of society?
The epidemiological view of society recognizes that beneath the symbolic layer of signs, symbols, and meaning-making, there is a biological, biochemical, anatomical, microbial, physical reality that is mind-independent and has its own processes and dynamics that are foundational to existence. It's not something that's outside of us; it's something essential that we are part of and that constitutes any sapient entity. In comparison, sociology holds to social-deterministic and cultural-reductionist views of the world that have become self-destructive. Language is a material technology that shapes social reality, but it is not the reality itself. Unfortunately, when someone is convinced that reality is reducible to the stories we tell about it, and that changing the stories we tell about reality will also change reality itself, then their project has lost much of the plot. The planet is, in fact, not just a text, even if textuality itself is an historical effect of planetary processes.
In ways that are not captured by the facile accusations of the online culture wars, reality itself — in the form of geochemistry, biochemistry, biology, etc. — is increasingly inserting itself into the politics of culture. The rise of political populism as a larger political movement is a symptomatic immune response to this intensification: consider the revival of autochthonous myths and the arbitrary social constructions around nation and race that are asked to constitute a reality more real than reality itself. But more importantly, climate change must be understood as a geochemical concentration of greenhouse gases that has atmospheric effects, regardless of the stories we tell or deny about it. Like the COVID-19 virus, it is utterly indifferent and must be addressed as such.
So on the one hand, sociologists may tacitly model society primarily as an aggregation of meaning-making actors that produce a collective social construction of reality which comes to have real material effects. The priority is always on how a social “world” creates a planet and not on how a planet makes possible social worlds in the first place. The epidemiological model, on the other hand, forces people to think about themselves not only as meaning-making subjects, but as biological objects at the same time. The COVID-19 pandemic situated humans as biological objects that are both vulnerable to a virus and capable of transmitting it to someone sitting next to you. Your subjective intention towards that person has nothing to do with your ability to infect them. You can love or hate them, but your objective proximate relationship means your presence could potentially harm or even kill them. The meaning you assign to them is not the fundamental cause of an outcome.
The pandemic should have instilled in our thinking a greater sense of how this underlying reality actually structures the world we live in, and of how we see and imagine ourselves as beings in relation to that reality. But it obviously didn’t. For a number of reasons, whatever one may choose to recognize as planetary society has not done the work of reorienting worldviews in response to the crisis. Rightwing populist regimes all around the world basically decided that the virus was whatever they said it was: “It exists. It doesn’t exist. It came from China. It doesn't come from China. It's safe. It's not safe. It doesn't really matter.” They convinced enough people that what matters are the stories they tell about the virus, not the virus itself, and so this life form was left to its own devices.
In real time, we watched how catastrophic that mode of governance can be. It converged with the anti-vaxx narrative which has long been horrified by the self-image of the material animal body and prefers categories of purity and invasion. “The vaccine might be right for you, but my personal health decision is to not get the vaccine for myself.” They hold this personal commitment to a particular narrative that is beyond the scope of reconciling with the notion of a shared physical society. Someone may come to this conclusion by imagining that the reason that anyone would get a vaccine is for their own personal protection. It’s a narrative that is predicated on the very Californian idea that what your reality is for you is yours, and what my reality is for me is mine. And then who's to say whose reality is really more real? Actually, an anti-vaxxer Marxist who attacked my book online came up with an accidentally ingenious summary of what’s at stake: “To get the vaccine is to literally inject The Establishment into your body.” How can you argue with such genius?
So how does the epidemiological model fit into the discussion on planetarity? How do we reinvent our forms of governance in the face of planetary crises, like the next pandemic or the impacts of climate change?
The present political systems that we live within are largely maladapted to the challenges we face. This is not just a matter of formal policy, but one of political culture. We’re witnessing a vicious circle that includes both the delegitimation of governance per se and the inability of governance to actually act in the collective self-interest. This tendency across the political spectrum toward anti-governance (not just anti-state, but anti-governance), informed by what may be a legitimate contempt for existing political systems, produces a deep suspicion of the very principle of collective self-composition. It is fundamentally myopic because it locates all sovereignty in the smallest of actions and affects. This makes any form of long-term governance much more difficult, if not impossible, and so the result is incompetence.
In reality, beneath the fictional constructions of nation states lie the planetary bio-ecological assemblages of life, and they exist at scales that make any person or place already enmeshed even if they imagine themselves as separate. Viruses and climate change don’t care whether people live in Canada, the United States, or Mexico. When the next pandemic happens, one can only hope for a more rational collective response. We’ve witnessed the momentum of symbolic obligations, invested with both emotion and capital, attempt to bend the biochemical reality of the world toward them. It didn’t work, but that doesn’t matter for many people whose allegiance is foremost narrative. Biology is a political topic because it infuses and is infused by power, but we have lived through a form of politicization that won’t serve us in the future, for all the reasons I have described.
So the question of planetary governance — if you imagine nearly eight billion people on the planet as a heterogeneous polity — is how would they be able to sense, model, and act back upon themselves in such a way that their collective intelligence harnesses the agency necessary to steer themselves in the right directions? This is a first principle of “governance” at whatever scale, from local to global. But given the complexity of planetary-scale conditions, I don't see how we get to rational self-compositional agency without a very different kind of system of self-knowing and self-tracking, not only of ourselves individually, but of social systems, health systems, and ecological processes. That might mean very different kinds of biosensing practices, platforms, mechanisms, and regimes than presently at hand, as well as a foundational repurposing of those already at work.
I recognize that this seems implausible in the post-Soviet era of geopolitical fragmentation, secessionist fever, and multilevel polarization. But you can imagine this next decade will see momentum towards consolidation, as these great hemispheres of global influence gobble up all the little guys into superstructural spheres of influence.
What is different from what we already have?
What I argue in The Revenge of the Real is that the model of planetary computation that we built around Web2.0 is fatally broken and unreformable on its own terms. More GDPR (General Data Protection Regulation) style privacy mechanisms aren’t actually going to fix the fundamental problem, which is the idea that the primary function and purpose for planetary computation is the sensing, modeling, and predicting of individual humans — what they want to look like, look at, read, click, say, or hear. This notion suggests that the simulation and accommodation of individual human actors is the purpose of planetary computation. When we concentrate less on planetary computation as weaponized Web2.0, we will be able to start the real work.
Precedents already exist in, for example, climate science. The very idea of climate change is an epistemological accomplishment of planetary computation. In order to construct climate science, some of the most complex computational simulations ever created process billions and billions of data points. Without the megastructure of sensors and models and supercomputing simulations that scientists have created, we wouldn’t have generated the initial hockey stick curves that pointed to climate change as a reality. Planetary computation makes climate science possible, which in turn makes the idea of climate change possible, which in turn provides for the various cultural theories about the “Anthropocene.” Indirectly, planetary computation has made our cultural understanding of the Anthropocene possible.
But the models of sensing we have now are highly individuated, focusing on what you or I want to click on next or how our results compare to others. Rightly so, people are concerned about the manipulative connotations of Web2.0 planetary computation, but what we need is the deindividuation of biosensing, where technology models aggregate wholes instead of anthropomorphized individuals, such that the complex systems in which we're embedded would be able to steer themselves in ways that allow for their long-term viability. Our current systems don't have the capacity to actually conceive of and direct themselves. To prevent their collapse, these complex systems need the capacity for self-steerage. Acephalic emergence is not enough.
What would that look like?
One of the more fundamental critiques one can make of the nature/culture distinction is that it implies, on one level, that humans are not part of planetary systems. Of course we are, and that includes our cognition, our feats of intelligence, our technologies, and everything that makes up these processes. However, the perspective of much contemporary cultural critique is that human processes of reason, rationality, and technicity are fundamentally disassociated or delinked from natural conditions. That is because the history of the concept of “reason” includes this dissociation, and therefore — because reality is taken to be discursively constructed — reason as such must be dissociated from real nature. And so, de-anthropocentrism comes to mean de-rationality. This is mistaken. The emergence of the technosphere, if you like, is also part of a planetary process, so we can think of the aggregate forms of human collective intelligence as already part of an emergent planetary intelligence. This idea goes back at least to the 19th century, when the Russian cosmist Konstantin Tsiolkovsky formulated that humans are one of the mediums “through which the planet thinks.”
But what is intelligence? The astronomically rare and unlikely phenomenon of intelligence as an emergent faculty of both biology and non-biological entities (such as artificial intelligence) is extraordinarily precious, but in a cosmic sense, still relatively immature. Intelligence is not merely the process of individual induction and deduction, but an aggregate pathfinding phenomenon extended and embedded in the technical systems in which we interface with the world.
The fundamental criterion for the direction of new intelligence by human political systems and planetary computation is learning how to amplify and orient intelligence in ways that prevent its extinction and collapse. Honestly, it’s uncertain whether complex intelligence as we know it will have a deep future. Perhaps we find ourselves in a great filtration moment, when our discoveries and technological advances, such as nuclear fission, generate a fork in the road — do we destroy ourselves, or do we carry on?
In the past 25-30 years, humans have discovered how to model and comprehend the climate, allowing for the self-recognition of anthropogenic agency in transforming the planet. Once the dominant intelligent species of a planet is able to comprehend its own agency in transforming the planet, does it use this agency to destroy its own conditions of survival, or does it use it to ensure not only its own survival, but the long-term viability of the biodiversity upon which it depends? This is the great filtration moment in which we find ourselves. Planetary computation has thus generated an emergence of planetary intelligence, a sort of long-term evolutionary sense of trying to understand and locate the processes, events, phenomena, and problems that explain our evolutionary arc.
This reminds me of your lecture on The Inverse Uncanny Valley — how when we look at AI, when we read the information presented to us about ourselves gathered by an AI, we see how AI sees us. We gain intelligence by seeing ourselves through an intelligence that is artificial.
As you probably know from Masahiro Mori’s work in the 1970s, we have a gut reaction to something that is not quite human. The uncanny valley is represented in a curve, where we find things that don’t freak us out at the two peaks of the curve, but in the valley between these two points, we find the uncanny. The Simpsons are okay because they don’t look enough like us to unsettle us, while healthy humans are okay because they represent what we are most familiar with. The things in the middle are what create an uncanny response, where something kicks in and we want whatever it is to stay away from us, something like a poorly designed CGI character or an actual corpse.
What I’m suggesting in the inverse uncanny valley is not that AIs are creepy because they aren’t quite human, but rather that what creeps us out is the feeling we get when we see ourselves through the eyes of AI. An unsettling, uncanny feeling resonates when we see the human presented back to us, but one that we don’t entirely recognize or only partially identify with. We’re creeped out because it’s our own self image seen from the outside. It’s sort of like how listening to one’s voice on a tape recording is a bit disconcerting, but AI multiplies that sensation to an existential level, as the AI’s coherency is disturbingly accurate at pinning you down. Philip K. Dickian questions arise, like what is the difference between me and the android speaking in my voice? What constitutes the coherency of myself as an individual and what is the specificity of humanity and our species in relation to these things? What happens when AI takes off the mask covering our faces, and humanity realizes we aren’t who we thought we were?
AI isn’t just a weird new tool to make things with, but an epistemological technology that changes the way in which we see ourselves and the world through the externalization of thought, language, culture, and cognition. I suggest that one of the important long-term social and cultural impacts of machine intelligence will be a greater understanding of intelligence through its artificialization. And as applied to climate change, it will change our perception of ourselves in relation to the Earth in ways we could never notice when perceiving ourselves directly.
So what happens when this use of artificial intelligence for planetary computation leads to a discovery that terrifies us? How do you think we can navigate climate change or other planetary issues if we become alienated by what we find?
This is the question I want to pose as the one that is preconditional to the rest. In a way, it’s a much more fundamental question than the ones on which the humanities currently spend the bulk of their time. Finding an answer will of course be a traumatic process. Think about how intelligence has evolved on Earth in relation to predator-prey dynamics. The reason we have binocular vision and big visual cortexes and intense muscle reflexes is because we were once prey. Now we have the abilities of foresight and planning and simulating and modeling and acting upon contingent futures.
These traits developed not only as survival mechanisms, but also through both accidental and deliberate self-alienation, an externalization of sapience. We invented the telescope to see very far, but its function eventually became the basis for the heliocentric model of our solar system. This radical transformation in basic cosmological thinking expanded our intelligence, but it was a traumatic and alienating process. Likewise, Darwinian biology and the discovery of DNA caused alienating shifts in our sense of place in the world, and with the emergence of neuroscience and machine intelligence, we discover that the phenomenon of thinking doesn’t occur as we expect. We don’t think the way we think we think. The self-image of the human is destabilized. Those who think such technologies primarily refortify old ideas about what “humans” are miss the forest for the trees.
These Copernican accomplishments could also be called Copernican traumas. They don’t reverberate directly; there was no law the authorities could have passed that said, “Look, we are now heliocentrists.” It was a long, weird, messy process that required ontological shifts within the cultures it touched, shifts that remain utterly incomplete. We saw it happen again just recently during the pandemic, as people were asked to comprehend themselves as contagion vectors and to imagine their own bodies in terms of a biophysical materialism. Everyone was forced to make sense of the grief, trauma, and strangeness this imposition placed on them. Hence so many people chose instead to embrace the reified, grounded cultural meaning-making they had always known. This kind of disembedding or disorientation or disenchantment of the world leads almost inevitably to the forms of cultural fundamentalism we witnessed over the past two years, on both the right and the left. These slow processes of reorientation either percolate throughout culture or are debated away.
Right, there are still people who believe the world is flat. So you’re suggesting the data we collect with artificial intelligence might lead to the next big Copernican trauma — that we might see something about ourselves and our relationship to the planet that is beyond our comprehension. What role do institutions have to play in shepherding society through this traumatic process, especially pertaining to climate change or imagining new forms of governance?
If you’d asked me this question three weeks ago, my answer would have been totally different. But today, just a twenty minute walk from our campus at Strelka Institute, there are thousands of protesters at Pushkin Square being arrested. For a lot of our participants, these issues are no longer theoretical, but a matter of life and death. We’re pondering the role of a program like ours in a crisis like this, how we can contribute an understanding of intervention that is productive and honest. We have students from London and Moscow who of course approach this issue with different perspectives. Our discussions have become a microcosm of the question of planetary governance more generally. How do the disparate positions our participants find themselves in actually contribute to something that becomes collectively beneficial in a programmatic way?
In an environment such as this, institutions must ensure they continue the mission they set out to accomplish. An institution finds its direction, value, and purpose because of the constraints it is placed under. The mode of governance Putin has constructed, for example, is the production of a closed, simulated reality inside a different symbolic universe. He’s used the distinct and beautiful Russian culture to construct an artificial world that no one else would recognize as reality. I think one of the goals of Putinism is to cut Russia off from the rest of the world into an isolated bubble, so that his people feel they have no alternative to tying their destiny to the nation he envisions. As institutes like Strelka become increasingly rare in Russia, we exert a tremendously positive influence. Our role is to envision different structures that can tackle bigger issues.
With this in mind, how do you suggest we move toward deep synchronicity — bringing the geospheres and biospheres closer together in sync? What role does the technosphere play in the long-term?
Geosphere, meaning the geophysical, geologic processes of the Earth like minerals, volcanoes, rocks, and storms. Biosphere, meaning the comparatively recent carbon-based organic life that produces oxygen and produced our atmosphere. Technosphere, meaning what Peter Haff describes as the physical properties of a human-technological system that take on a role equivalent to the biosphere. Humans are part of the latter two. Key to the relationship amongst all three spheres is that the emergence of life produced the planet that we have. It’s not like the planet was always conducive to life, and then life emerged. On the contrary, life emerged and emitted gases that formed the atmosphere, creating the Earth that we recognize, but also enabling the human-made technosphere to evolve. The long-term viability of the continuance and complexity of the geosphere and biosphere is dependent on the role of the technosphere.
It’s a destructive force in and of itself, but it’s also constructed in a way that constitutes a positive and important mechanism — a medium through which planetary intelligence can act upon and act back upon itself. That’s to say, we can use aspects of the technosphere like planetary computation, which comprises both human and artificial intelligence, to change and evolve the technosphere so it doesn’t destroy the planet. To put it simply, the only viable response to anthropogenic climate change will need to be equally anthropogenic. An artificial intervention will need to take place. In the next few generations, there are several specific undertakings for the human species, beginning with expanding what we think of as a planet, imagining it as a park system. Edward O. Wilson’s Half-Earth thesis calls for setting aside massive amounts of the Earth’s surface as protected areas, shielded from any kind of industrialization. On top of this, we need to subtract billions and billions of tons of greenhouse gas from the atmosphere and rebury it in the form of dense carbon. Because these undertakings will be inherently artificial and anthropogenic, they’ll inevitably be technological. This doesn’t mean technology is the only solution or that it will even work, but there’s no way to conceive of a course correction that is non-technological.
We covered these ideas in The Terraforming program at Strelka Institute. There are three forms of terraforming to consider here. One is the terraforming that we live in right now, which some people call the Anthropocene. We are and have been terraforming the Earth for generations, and as I mentioned earlier, we didn’t have the agency to recognize this process until recently, when we accumulated enough data and planetary intelligence to understand it. The second form comes with the recognition that whatever we do going forward, terraforming processes will continue to happen, because complex intelligence inevitably terraforms. The baseline is that the apex intelligent species on Earth will terraform the planet in its own image to at least some degree, for better or worse.
Even termites terraform.
So, there was a long-term emergence of sapient intelligence, which itself evolved in relation to its technologization of the world and assumed terraforming scale agency in ways that it at first did not recognize. But through a process of technical alienation of its own experience, this species came to recognize through climate science its agency, the artificiality of its condition, and the anthropogenic effects of its initiatives. So now it is faced with the conundrum of how to artificially organize that intelligence and anthropogenic agency to remake the planetary conditions of its own survival.
This third notion of terraforming comes with a few questions — how do we make this process more deliberate? How do we make this process more predicated on the protection, remediation, and extension of the long-term viability of complex biodiversity and intelligence? How do we compose and organize a technosphere that amplifies the continuation of life without destroying it? I think the answer to these questions is the answer to deep synchronicity: bringing the geosphere, biosphere, and technosphere closer into sync is now a human project. The solution won’t be a cultural one of the kind we’ve seen in recent decades, in which the West in particular has absorbed the climate crisis into culture through both mainstream environmentalism and Extinction Rebellion-type activism, as if it were merely a matter of reorienting our ethical disposition toward the world.
Going back to our earlier conversation on symbols and meaning-making, it’s not like thinking better thoughts and being better people will ultimately accumulate into the geophysical scale effects that we need to solve the climate crisis. This intense belief in the reality of symbols and inner mental states is a misunderstanding of the relationship between humans and the planet. It’s not about you or your personal morality at all. The externalization of personal emotional states won’t be enough to ensure the long-term viability of the planet. The terraforming of the future will need to be depersonalized, deindividuated, and not necessarily predicated upon the activation of cultural norms.
Do you see any emerging fields that might find this to be a challenge?
It’s interesting to see how the art world and speculative design have latched onto Web3.0. The art world tends to place its faith in LARPing and play-acting, and from the outside looking in, it’s obvious to all but the participants that it’s a performance. Regardless, I still think there’s a purpose and value to their roleplay that tries out different possibilities. With imagination comes live-prototyping, and with the emergence of Web3.0, people are prototyping new and different kinds of economic and governance models. This way of thinking through alternatives is itself quite interesting.
Ten years ago, it would’ve been crazy to suggest that all the cool kids would move to Berlin and become avant-garde accountants who speculate with avant-garde insurance and the future of bureaucracy with DAOs. It’s an interesting development where people who might have otherwise made paintings or movies are now engineering speculative accounting platforms. I think this training of creative imagination on infrastructure is a positive thing with positive effects.
Another valuable shift with Web3.0 is the ways in which people are rethinking the relationship between the political and the technological. Within political theory and political activism more generally, we have the sense that the political is an autonomous domain from its technological milieu, that it deals with the distributions of power in an abstract sense, like antagonism, discursive resistance, and so forth. The popular notion is that the antagonistic distribution of this ethereal substance called power manifests itself in particular technological relationships that are not preconditioned. The discourses around Web3.0 are shifting this attitude toward an understanding that politics has always operated within a particular technological milieu and is limited by the same technological capacity that produces it. Power is constructed technologically, and always has been. As this becomes more clear, we can learn to play around with it in different kinds of ways. As Web3.0 and all the things it implies succeed, the dichotomization of centralization versus decentralization will start to complicate and even collapse. We’ll recognize more easily how the functions of certain kinds of centralized systems enable the decentralization of other kinds of activities, and vice versa. This will enable a shift in the speculative infrastructural imaginary that recognizes that infrastructures like plumbing or roads or electricity are both centralized and decentralized at the same time.
So this implies that our understanding of politics and power will also evolve, which will open doors to imagining new forms of planetary computation and intelligence. What advice do you have for the generation in their twenties and thirties who will steer these undertakings?
Look, I don’t mean this in a bad way. Every generation in their twenties thinks they’re the fulcrum generation of history. To a certain extent, they are, at least for the history they will live. I’ve noticed positive and negative distortion effects caused by a society that has constructed itself institutionally around educating people ages 18 to 25 with theory, philosophy, and critical reading. This is the time in a young adult’s life when they question who they want to be, what they want to do, what it all means, and what the future will hold. This means that an audience distortion effect has taken hold; because philosophy is now geared towards the preoccupations of 18 to 25 year-olds, it focuses primarily on identity, self-becoming, and so forth. The identity crisis of being a young person in the world becomes the primary concern of all philosophy, so that authors start telling 24-year-olds that they are the revolutionary vanguard.
But the reality is that you likely won’t have the most leverage and power to rebuild the world until you reach the age of 40, which is when your generation will be replacing mine, just as mine is currently replacing the Boomers. My advice is to think long-term, because as you get older, your sense of time will change and feel much shorter. Understand that a lot of the decisions you make right now will form the foundation of whatever life you’re going to lead in the future. Imagine who you want to be at 40, what position in the world you feel you should have, and how you can uniquely contribute to the world in a way that no one else can. How can you organize your decisions around the role you want to have later in life?
The fact that so much attention today is spent on the implications of blockchains and the building of egalitarian infrastructure means this momentum and commitment can apply to a lot of different things in the future. The agency you have now will truly expand in a few decades, especially because you see the world in ways older generations do not. Their world doesn’t exist anymore.
And ours will be radically different.