For Lisa Nakamura
I have been thinking about Lisa Nakamura lately, and for a lot of reasons. Here I just want to reflect on how valuable her work in visual culture studies has been for me, and for many others. She was a pioneer of the study of what used to be tagged as ‘race in cyberspace.’ Now that the internet is everywhere, and race and racisms proliferate on it like fungus on damp newspaper, her work deserves renewed critical attention. Her book Digitizing Race: Visual Cultures of the Internet (Minnesota, 2008) is nearly a decade old, but it turns out that looking perceptively at ephemeral media need not render the resulting study ephemeral at all.
Digitizing Race draws together three things. The first is the post-racial project of a certain (neo)liberal politics that Bill Clinton took mainstream in the early nineties. Its central conceit was that all the state need do is provide opportunities for everyone to become functional subjects of postindustrial labor and consumption. The particular challenges of racism were ignored. The second is an historical transformation of the internet that began in the mid-nineties, when it went from being military and scientific (with some creative subcultures on the side) to a vast commercial complex. This led to the waning of the early nineties internet subcultures, some of whom thought of it as a utopian or at least alternative medium for identity play, virtual community and gift economies. In A Hacker Manifesto (Harvard, 2004), I was mostly interested in the last of these. Nakamura is more interested in what became of community and identity.
One theme that started to fade in internet culture (or cyberculture in the language of the time) had to do with passing online as something other than one’s meatspace self. This led to a certain gnostic belief in the separation of online from meatspace being, as if the differences and injustices of the latter could just be left behind. But the early cyberculture adepts tended to be a somewhat fortunate few, with proximity to research universities. As the internet’s user-base expanded, the newcomers (or n00bs) had other ideas.
The third tendency Nakamura layers onto the so-called neo-liberal turn and the commercialized and more-popular internet is the academic tendency known as visual studies or visual culture studies. This in part grew out of, and in reaction against, an art historical tradition that could absorb installation art but did not know how to think digital media objects or practices. Visual culture studies drew on anthropology and other disciplines to create the “hybrid form to end all hybrid forms.” (3) It also had something in common with cultural studies, in its attention to low, ephemeral and vulgar forms, treated not just as social phenomena but as aesthetic ones as well. Not all the tendencies within visual culture studies sat well together. There could be tension between paying attention to digital media objects and paying attention to vulgar popular forms. Trying to do both at once was an exercise in self-created academic marginality. The study of new media thus tended to privilege things that look like art; the study of the low, the minor or the vulgar tended to privilege social over aesthetic methods and preoccupations. Not the least virtue of Nakamura’s work is that she went out on a limb and studied questions of race and gender in new and ephemeral digital forms, and as aesthetic practices.
One way to subsume these three questions into some sort of totality might be to think about what Lisa Parks called visual capitalism. How is visual capital, an ensemble of images that appear to have value, created and circulated? How does social differentiation cleave along lines of access to powerful modes of representation? Having framed those questions, one might then look at how the internet came to function as a site for the creation and distribution of hegemonic and counter-hegemonic images of racialized bodies.
Here one might draw on Paul Gilroy’s work on the historical formation and contestation of racial categories, or the way Donna Haraway and Chela Sandoval look to cyborg bodies as produced by bio-technical networks, but within which they might exercise an ironic power of slippery self-definition. Either way, one might pay special attention to forms of image-making by non-elite or even banal cultures as well as to more high-profile mass media forms, cool subcultures or avant-garde art forms. There are several strands to this story, however, one of which might be the evolution of technical media form. From Nick Mirzoeff, Nakamura takes the idea of visual technology as an enhancement of vision, from easel painting to digital avatars. In the context of that historical background, one might ask what is old and what is new about what one discovers in current media forms. This might be a blend of historical, ethnographic and formal-aesthetic methods. A good place to start such a study is with interfaces, and a good way to tie together the study of cinema, television and the internet is to study how the interfaces of the internet appear in cinema and television. Take, for instance, the video for Jennifer Lopez’s pop song, ‘If You Had My Love’ (1999). The conceit of the video is that Lopez is an avatar controlled by users who can view her in different rooms, doing different dances in different outfits. The first viewer is a young man – a bit like one of Azuma’s otaku – who appears to be looking for something to jerk off to, but there are other imaginary viewers throughout, including teen girls and a rather lugubrious inter-racial threesome, nodding off together on a sofa.
So different people can be on the human side of the interface. Here, we are voyeurs on their voyeurism. The interface itself is perhaps the star, and J-Lo herself becomes its object. With the interface, the imaginary user in frame and the imagining one – us – can make J-Lo perform as different kinds of dancer, slotting her into different racial and cultural niches. The interface offers “multiple points of entry to the star.” (27) She – it – can be chopped and streamed. It’s remarkable that this video made for MTV sits so nicely now on YouTube, whose interactive modes it premediates.
There was – and still is – a lot of commentary on The Matrix (1999), but not much of it lingers over the slightly embarrassing second and third movies in the franchise. They are “bad films with their hearts in the right place.” (104) Like the J-Lo video, they deal among other things with what Eugene Thacker called immediacy, or the expectation of real time feedback and control via an interface. As Nakamura drolly notes, “This is an eloquent formulation of entitlement…” (94) Where the Matrix films get interestingly weird is in their treatment of racial difference among interface users under “information capitalism.” (96) The Matrix pits blackness as embodiment against whiteness as the digital. What goes on in the background to the main story is a species of Afrofuturism, but it’s the opposite of Black Accelerationism, in which a close proximity of the black body to the machine is in advance of whiteness, and to be desired. In The Matrix version, the black body holds back from the technical, and retains attributes of soul, individuality, corporeality, and this is its value. Nakamura: “Afrofuturist mojo and black identity are generally depicted as singular, ‘natural’… ‘unassimilable’ and ‘authentic.’” (100) Whereas with the bad guy Agent Smith, “Whiteness thus spreads in a manner that exemplifies a much-favored paradigm of e-business in the nineties: viral marketing.” (101) The white Agents propagate through digitally penetrating other white male bodies.
At least race appears in the films, which offer some sort of counter-imaginary to cyber-utopianism. But as Coco Fusco notes, photography and cinema don’t just record race – they produce it. Lev Manovich notes that it’s in the interface that the photographic image is produced now, and so for Nakamura, it is the interface that bears scrutiny as the place where race is made. In The Matrix, race is made to appear for a notionally white viewer. “The presence of blackness in the visual field guards whites from the irresistible seduction of the perfectly transparent interface…. Transparent interfaces are represented as intuitive, universal, pre- or postverbal, white, translucent, and neutral – part of a visual design aesthetic embodied by the Apple iPod.” (109)
Apple’s iconic early ads for the iPod featured blacked-out silhouettes of dancing bodies, their white earbud cords flapping as they move, against bold single-color backgrounds. For Nakamura, they conjure universal consumers who can make product choices, individuated neoliberal subjects in a color-blind world. Like the ‘users’ of J-Lo in her video, they can shuffle between places, styles, cultures, ethnicities – even if some of the bodies dancing in the ads are meant to be read as not just blacked out but also black. Blackness, at the time at least, was still the marker for the authentic in white desire around music. In this world, “Whiteness is replication, blackness is singularity, but never for the black subject, always for the white subject.” (116) Nakamura: “This visual culture, which contrasts black and white interface styles so strongly, insists that it is race that is real. In this way the process of new media as a cultural formation that produces race is obscured; instead race functions here as a way to visualize new media image production… In this representational economy, images of blacks serve as talismans to ward off the consuming power of the interface, whose transparent depths, like Narcissus’ pool, threaten to fatally immerse its users.” (116, 117) If blackness stands for authentic embodiment in this visual culture, then Asian-ness stands for too much proximity to the tech. The Asian shows up only marginally in The Matrix. Its star, the biracial Keanu Reeves, was like J-Lo quite racially malleable for audiences. In his case he could be read as white by whites and Asian by Asians if they so desired. A more ironic and telling example is the film Minority Report (2002). Tom Cruise – was there a whiter star in his era? – has to get his eyes replaced, as retinal scanning is everywhere in this film’s paranoid future. Only the eyes he gets belonged to a Japanese person, and the Cruise character finds himself addressed as a particularly avid consumer everywhere he goes. Hiroki Azuma and Asada Akira had once advanced a kind of ironic Asian Accelerationism, which positively valued a supposed closeness of the Asian with the commodity and technology, but in Minority Report it’s an extreme for the white subject to avoid.
Race at the interface might be a moment in a process of production and reproduction (and its queer twists) that Donna Haraway called the integrated circuit. It partakes now in what Paul Gilroy notes is a crisis of raciology, brought on by the popularization of genetic testing. The old visual regimes of race struggle to adapt to the spreading awareness of the difference between genotype and phenotype. The film GATTACA (1997) is here a prescient one in imagining how a new kind of racism of the genotype might arise. It imagines a world rife with interfaces designed to detect the genotypical truth of appearances.
Nakamura ties these studies of the interface in cinema and television to studies of actual interfaces, particularly lowly, unglamorous, everyday ones. For instance, she looks at the avatars made for AIM (AOL Instant Messenger), which launched in 1997 as an application running on Microsoft Windows. Of interest to her are the self-made cartoon-like avatars users chose to represent themselves to their ‘buddies.’ “The formation of digital taste cultures that are low resolution, often full of bathroom humor, and influenced by youth-oriented and transnational visual styles like anime ought to be traced as it develops in its native mode: the internet.” (30-31) At the time there was little research on such low forms, particularly those popular with women. Low-res forms populated with cut and paste images from the Care Bears, Disney and Hello Kitty are not the ideal subjects of interactivity imagined in cyberculture theories. But there are questions here of who has access to what visual capital, of “who sells and is bought, who surfs and is surfed.” (33) AIM avatars are often based on simple cut and paste graphics, but users modified the standard body images with signs that marked out their version of cultural or racial difference. This was a moment of explosion of ethnic identity content on the web – to which, incidentally, we may in 2017 be witnessing the backlash.
AIM users could download avatars from websites that offered them under various categories – of which race was never one, as this is a supposedly postracial world. The avatars were little gifs, made of body parts cut from a standard template with variations of different hair, clothing, slogans, etc. These could be assembled into mini-movies, remediating stuff from anime, comics, games; as a mix of photos and cartoons, flags, avatars.
One could read Nakamura’s interest in the visual self-presencing of women and girls as a subset of Henry Jenkins’ interest in fan-based media, but she lacks Jenkins’ occasionally over-enthusiastic embrace of such activity as democratic and benign. Her subaltern taste-cultures are a little more embattled and compromised. The kind of femininity performed here is far from resistant and sometimes not even negotiated. These versions of what Hito Steyerl would later call the poor image would be hard to redeem aesthetically. Cultural studies had tried to ask meta-questions about what the objects of study are, but even so, we ended up with limited lists of proper new media objects, of which AIM avatars were not one. The same could be said of the website alllooksame.com. The site starts with a series of photographs of faces, and asks the user to identify which is Japanese, Chinese or Korean. (Like most users, I could not tell, which is the point.) The category of the Asian-American is something of a post-Civil Rights construct. It promised resistance to racism through pan-ethnic identity, but paradoxically treated race as real. While alllooksame.com is an odd site, for Nakamura it does at least unite Asian viewers in questioning visual rhetoric about race. Here it provides a counter-example to Ien Ang’s study of Huaren.org, which to her essentializes diasporic Chinese-ness.
Asian-American online practice complicates the digital divide, being on both sides. The Asian-American appears in popular racial consciousness as a ‘model minority’, supposedly uninterested in politics, avid about getting ahead in information capitalism, or whatever this is. Yet she or he also appears as the refugee, the undocumented, the subsistence wage service worker. For Nakamura, this means that the study of the digital divide has to look beyond the race of users to other questions of difference, and also to questions of agency online rather than mere user numbers.
While in some racialized codings, the ‘Asian’ is high-tech and assimilates to (supposedly) western consumerist modes, the encounter between postcolonial literary theory and new media forms produces quite other conjunctures. To collapse a rich and complex debate along just one of its fault-lines: imperial languages such as English can be treated either as something detachable from its supposed national origin, or as something to refuse altogether. The former path values hybridity and the claiming of agency within the language of the colonizer. The latter wants to resist this, and sticks up for the unity and coherence of a language and a people. And, just to complicate matters further, this second path, it has to be acknowledged, is also a European idea – the unity and coherence of a people and its language being itself an idea that emerged out of European romanticism. Much the same fault-line can be found in debates about what to do in the postcolonial situation with the internet, which can also be perceived as western and colonizing – although it might make more sense now to think of it as colonizing not so much on behalf of the old nation-states as on behalf of what Benjamin Bratton calls the stack. Nakamura draws attention to some of the interesting examples of work on non-western media, including Eric Michaels’ brilliant work on video production among western desert Aboriginal people in Australia, and the work of the RAQS Media Collective and Sarai in India, which reached out to non-English speaking and even non-literate populations through interface design and community access.
Since her book was published, work has really flourished in the study of non-western uptakes of media, not to mention work on encouraging local adaptations and hybrids of available forms. If one shifts one’s attention from the internet to the cellular telephone, one even has to question the assumption that the west somehow leads and other places follow. It may well be the case that most of the world leap-frogged over the cyberspace of the internet to the cellspace of telephony. A recent book by Yuk Hui even asks if there are non-western cosmotechnics, but that’s a topic for another time.
The perfect counterpoint to the old cyberculture idea of online disembodiment is Nakamura’s study of online pregnancy forums – the whole point of which is to create a virtual community for women in some stage of the reproductive process. Here Nakamura pays close attention to ways of representing pregnant bodies. The site she examines allowed users to create their own signatures, which were often collages of idealized images of themselves, their partners, their babies, and – in a most affecting moment – their miscarriages. Sometimes sonograms were included in the collages of the signatures, but these separate the fetus from the mother, and so other elements were generally added to bring her back into the picture. It’s hard to imagine anything more kitsch. But then we might wonder why masculine forms of geek or otaku culture can be presented as cool when something like this is generally not. By the early 2000s the internet was about 50/50 men and women, and users were more likely to be working class or suburban. After its ‘here comes everybody’ moment, the internet started to look more like regular everyday culture. These pregnant avatars, or ‘dollies,’ were more cybertwee than cyberfeminist (not that these need be exclusive categories, of course). But by the early 2000s, “the commercialization of the internet has led many internet utopians to despair of its potential as a site to challenge institutional authority…” (160) But perhaps it’s a question of reading outside one’s academic habitus. Nakamura: “’Vernacular’ assemblages created by subaltern users, in this case pregnant women, create impossible bodies that critique normative ones without an overt artistic or political intent.” (161) The subaltern in this case can speak, but chooses to speak through images that don’t quite perform as visual cultural studies would want them to. Nakamura wants to resist reading online pregnancy forums in strictly social-science terms, and to look at the aesthetic dimensions. It’s not unlike what Dick Hebdige did in retrieving London youth subcultures from criminological studies of ‘deviance.’
The blind spot of visual cultural studies, at least at the time, was vernacular self-presentation. But it’s hard to deny the pathos of the images these women craft of their stillborn or miscarried children. The one thing that perhaps received the most belated attention in studies of emerging media is how they interact with the tragic side of life – with illness, death and disease. Those of us who have been both on the internet and studying it for thirty years or so now will have had many encounters with loss and grief. We will have had friends we hardly ever saw IRL who have passed or who grieve for those who have passed. IRL there are conventions for what signs and gestures one should make. In online communication they are emerging also.
Nakamura was right to draw attention to this in Digitizing Race, and she did so with a tact and a grace one could only hope to emulate. Nakamura: “The achievement of authenticity in these cases of bodies in pain and mourning transcends the ordinary logic of the analog versus the digital photograph because these bodily images invoke the ‘semi-magical act’ of remembering types of suffering that are inarticulate, private, hidden within domestic or militarized spaces that exclude the public gaze.” (168) Not only is the body with all its marks and scars present in Nakamura’s treatment, it is present as something in addition to its whole being. “We live more, not less, in relation to our body parts, the dispossession or employment of ourselves constrained by a complicated pattern of self-alienation…. Rather than freeing ourselves from the body, as cyberpunk narratives of idealized disembodiment foresaw, informational technologies have turned the body into property…” (96) Here her work connects up with that of Maurizio Lazzarato and Gerald Raunig on machinic enslavement and the dividual respectively, in its awareness of the subsumption of components of the human into the inhuman.
But for all that, perhaps the enduring gift of this work is, to modify Adorno’s words, to not let the power of another, or our own powerlessness, stupefy us. There might still be forms of agency, tactics of presentation, gestures of solidarity – and in unexpected places. Given how internet culture was tending in the decade after Digitizing Race, perhaps it is an obligation now to return the gift of serious and considered attention to our friends and comrades — and not least in the scholarly world. For the tragic side of life is never far away. The least we can do is listen to the pain of others. And speak in measured tones of each other’s small achievements of wit, grace and insight.
On Benjamin Bratton's The Stack
Superstudio, 'Continuous Monument: An Architectural Model for Total Urbanization', 1969
What I like most about Benjamin Bratton’s The Stack: On Software and Sovereignty (MIT Press, 2015) is firstly its close attention to what I would call the forces and relations of production. We really need to know how the world is made right now if it is ever to be remade. Secondly, I appreciate his playful use of language as a way of freeing us from the prison-house of dead concepts. It is no longer enough to talk of neoliberalism, precarity or biopower. What were once concepts that allowed access to new information have become habits. Thirdly, while no friend to bourgeois romantic anti-tech humanism, Bratton has far more sense of the reality of the Anthropocene than today’s accelerationist thinkers. Bratton: “We experience a crisis of ‘ongoingness’ that is both the cause and effect of our species’ inability to pay its ecological and financial debts.” (303)
The category of thing that Bratton studies looks a bit like what others call the forces and relations of production, or infrastructure, but is better thought of as platforms. They are standards-based technical and social systems with distributed interfaces that enable remote coordination of information and action. They are both organizational and technical forms that allow complexity to emerge. They are hybrids not well suited to sociology or computer science. They support markets, but can or could enable non-market forms as well. They are also about governance, and as such resemble states. They enable a range of actions and are to some extent reprogrammable to enable still more.
Platforms offer a kind of generic universality, open to human and non-human users. They generate user identities whether the users want them or not. They link actors, information, events across times and spaces, across scales and temporalities. They also have a distinctive political economy: they exist to the extent that they generate a platform surplus, where the value of the user information for the platform is greater than the cost of providing the platform to those users. Not everything is treated as a commodity. Platforms treat some, often a lot, of information as free, and can rely on gift as much as commodity economies.
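Put schematically (the notation here is mine, not Bratton's), a platform persists only so long as that surplus stays positive:

$$ S_{\text{platform}} = V_{\text{user information}} - C_{\text{provision}} > 0 $$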
Bratton’s particular interest is in stack platforms. The metaphor of the stack comes from computation, where it has several meanings. For example, a solution stack is a set of software components layered on top of each other to form a platform for the running of particular software applications without any additional components. All stacks are platforms, but not all platforms are stacks. A stack platform has relatively autonomous layers, each of which has its own organizational form. In a stack, a user might make a query or command, which will tunnel down from layer to layer within the stack, and then the result will pass back up through the layers to the user. Bratton expands this metaphor of the stack to planetary scale. The world we live in appears as an “accidental megastructure” made up of competing and colluding stacks. (5) Computation is planetary-scale infrastructure that transforms what governance might mean. “The continuing emergence of planetary-scale computation as meta-infrastructure and of information as an historical agent of economic and geographic command together suggest that something fundamental has shifted off-center.” (3) The stack generates its own kind of geopolitics, one less about competing territorialities and more about competing totalities. One made up of enclaves and diasporas. It both perforates and hardens borders. It may even enable “alien cosmopolitanisms.” (5) It’s a “crisis of the Westphalian geographic design.” (7) “It is not the ‘state as a machine’ (Weber) or the ‘state machine’ (Althusser) or really even (only) the technologies of governance (Foucault) as much as it is the machine as the state.” (8)
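To make the borrowed metaphor concrete, here is a minimal sketch of a layered platform in which a query tunnels down through relatively autonomous layers and the result passes back up to the user. The layer names and the code are illustrative placeholders of mine, not anything from Bratton's book.

```python
# A purely illustrative sketch of a "solution stack": a query tunnels
# down from layer to layer, and the result passes back up through the
# same layers. Layer names are placeholders, not Bratton's six layers.

class Layer:
    def __init__(self, name, below=None):
        self.name = name
        self.below = below  # the next layer down, or None at the bottom

    def handle(self, query):
        # Annotate the query on the way down the stack...
        query = f"{query} -> {self.name}"
        if self.below is None:
            # The bottom layer resolves the query.
            return f"[{query}] resolved"
        # ...then pass the result back up through each layer in turn.
        return f"{self.below.handle(query)} <- {self.name}"

# Compose a three-layer stack, bottom-up.
hardware = Layer("hardware")
runtime = Layer("runtime", below=hardware)
application = Layer("application", below=runtime)

print(application.handle("user query"))
# [user query -> application -> runtime -> hardware] resolved <- runtime <- application
```

The point of the sketch is only that each layer has its own logic yet participates in one column of action, which is the feature Bratton scales up to the planet.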
Bratton follows Paul Virilio in imagining that any technology produces its own novel kind of accident. A thought he makes reversible: accidents produce technologies too. Take, for example, the First Sino-Google War of 2009, when two kinds of stack spatial form collided: Google’s transnational stack and the Great Firewall of China. This accident then set off a host of technical strategies on both sides to maintain geopolitical power. Perhaps the stack has a new kind of sovereignty, one that delaminates geography, governances and territory.
In place of Carl Schmitt’s nomos of the earth, Bratton proposes a nomos of the cloud, as in cloud computation, which as we shall see is a crucial layer of the stack. Nomos here means a primary rule or division of territory, from which others stem. Unlike Brown and other theorists of neoliberalism, Bratton thinks sovereignty has not moved from state to market but to the stack. Schmitt championed a politics of land, people and idea versus liberal internationalism, an idea revived in a more critical vein by Mouffe. But perhaps where sovereignty now lies is in a form that is really neither and on which both depend, a stack platform sovereignty and its “automation of the exception.” (33) One could read Bratton as a very contemporary approach to the old Marxist methodology of paying close attention to the forces of production. “…an understanding of the ongoing emergence of planetary-scale computing cannot be understood as a secondary technological expression of capitalist economics. The economic history of the second half of the twentieth century is largely unthinkable without computational infrastructure and superstructure…. Instead of locating global computation as a manifestation of an economic condition, the inverse may be equally valid. From this perspective, so much of what is referred to as neoliberalism are interlocking political-economic conditions within the encompassing armature of planetary computation.” (56) The stack could have been the form for the global commons, but instead became an “invasive machinic species.” (35) “Sovereignty is not just made about infrastructural lines; it is made by infrastructural lines.” (35) Code becomes a kind of law. “This is its bargain: no more innocent outside, now only theoretically recombinant inside…. The state takes on the armature of a machine because the machine, The Stack, has already taken on the roles and register of the state.” (38, 40) Bratton: “Will the platform efficiencies of The Stack provide the lightness necessary for a new subtractive modernity, an engine of a sustainable counter-industrialization, or will its appetite finally suck everything into its collapsing cores of data centers buried under mountains: the last race, the climate itself the last enemy?” (12) However, “It may be that our predicament is that we cannot design the next political geography or planetary computation until it more fully designs us in its own image or, in other words, that the critical dependence of the future’s futurity is that we are not yet available for it!” (15)
Bratton’s conceptual object is not just the actually existing stack, but all of its possible variants, including highly speculative ones such as Constant’s New Babylon, and actual but failed or curtailed ones, such as Stafford Beer’s Cybersyn, Ken Sakamura’s TRON, the Soviet Internet and Soviet cybernetics. The actual stack includes such successful technical developments as TCP/IP. This protocol was the basis for a modular and distributed stack that could accommodate unplanned development. It was about packets of information rather than circuits of transmission; about smart edges around a dumb network. TCP/IP was authored as a scalable set of standards.
Bratton thinks infrastructure as a stack platform with six layers, treated in this order: earth, cloud, city, address, interface, user. I think of it more as the four middle layers, which produce the appearance of the user and the earth at either end. I will also reverse the order of Bratton’s treatment, and start with the phenomenology of the user, which is where we all actually come to experience ourselves in relation to the stack.

User layer: A user is a category of agent, a position within a system that gives it a role. We like to think we are in charge, but we might be more like the Apollo astronauts, “human hood ornaments.” (251) It’s an illusion of control. The more the human is disassembled into what Lazzarato and Raunig and others think of as dividual drives, the more special humans want to feel. “In this, the User layer of The Stack is not where the rest of the layers are mastered by some sovereign consciousness; it is merely where their effects are coherently personified.” (252) For a long time, design thought about the user as a stylized persona. As Melissa Gregg and others have shown, the scientific measurement of labor produced normative and ideal personas of the human body and subjectivity. The same could be said for audience studies. Fordism was an era of the design of labor process and leisure-time media for fictional people. But these personas are no longer needed. As Azuma shows, the stack does not need narrative fictions of and for ideal users but database fictions that aggregate empirical ones. The stack not only gives but takes data from its users. “User is a position not only through which we see The Stack, but also through which The Stack sees us.” (256) This is the cause of considerable discomfort among users who reactively redraw the boundaries around a certain idea of the human. Bratton is not sympathetic: “… anthropocentric humanism is not a natural reality into which we must awake from the slumber of machinic alienation; rather it is itself a symptomatic structure powered by – among other things – a gnostic mistrust of matter, narcissistic self-dramatization, and indefensibly pre-Copernican programs for design.” (256) Bratton is more interested in users who go the other way, such as the quantified-self people, who want self-mastery through increasingly detailed self-monitoring of the body by the machine. In Lazzarato’s terms, they are a people who desire their own machinic enslavement. Bratton thinks it is a bit more nuanced than that. There may still be a tension between the cosmopolitan utopias of users and their molding into data-nodes of consumption and labor. Bratton: “To be sure, the bio-geo-politics of all this are ambiguous, amazing, paradoxical, and weird.” (269)
The stack does not care if a user like you is human or not. Bratton is keen to oppose the anthropomorphizing of the machine: “we must save the nonhumans from being merely humans” (274) Making the inhuman of the machine too akin to the merely human shuts out the great outdoors of the nonhuman world beyond. “We need to ensure that AI agents quickly evolve beyond the current status of sycophantic insects, because seeing and handling the world through that menial lens makes us, in turn, even more senseless.” (278) As one learns from Mbembe, commandment that does not confront an other with its own autonomous will quickly loses itself in ever more hyperbolic attempts to construct a sense of agency, will and desire.
Debate about user ‘rights’ has been limited to the human, and limited to a view of the human merely as endowed with property and privacy rights. Rather like Lefebvre’s right to the city, one needs a right to the stack that includes those without property. One could even question the need to think about information and its infrastructures in property terms at all. Bratton is not keen on the discourse of oedipal fears about the bad stepfather spying on us, resulting in users wanting no part in the public, wanting instead to live a private life of self-mastery, paranoia and narcissism. “The real nightmare, worse than the one in which the Big Machine wants to kill you, is the one in which it sees you as irrelevant, or not even a discrete thing to know.” (364) Maybe the user could be more rather than less than the neoliberal subject. The stack need not see us as users. To some extent it is an accommodation to cultural habits rather than a technical necessity.

Interface layer: If one took the long view, one could say that the human hand is already an interface shaped over millennia by tools. That ancient interface now touches very new ones. The interface layer mediates between users and the technical layers below. Interface connects and disconnects; telescopes, compresses or expands layers – routing user actions through columns that burrow up and down through the stack. The Stack turns tech into images and images into tech. “Once an image can be used to control what it represents, it too becomes technology: diagram plus computation equals interface.” (220) Interfaces are persuasive and rhetorical, nodes among the urban flow. “What is open for me may be closed for you, and so our vectors are made divergent.” (222) Interfaces offer a kind of protocol, or generic threshold. We probe our interfacial condition, being trained through repetition. From the point of view of the interface layer, users are peripheral units of stacks. Or rather one user: one could think of the Apple stack, for example, as creating a single distributed user for the “Apple experience.” Kittler thought media as looping the subject into three separate interfaces: cinema (the imaginary), typewriter (the symbolic) and phonograph (the real). Bratton thinks the interface as producing a more schizoid landscape. “The machinic image is qualified by many little sinkholes between the symbolic, the imaginary and the real, and at a global scale of billions of Users…” (225) This might form the basis of a materialist account of what Jodi Dean calls the decline in symbolic efficiency. Interfaces change not only the form of the subject but the form of labor. “Today, at the withering end of post-Fordism… we observe logistics shifting from the spatially contiguous assembly line to the radically dis-contiguous assemblage line linked internally through a specific interfacial chain. Contemporary logistics dis-embeds production of things from particular sites and scatters it according to the synchronization and global variance in labor price and resource access….” (231)
Interfaces also become more powerful forms of governance over the flows they represent. They have to appear as the remedy for the chaotic flows they themselves cause. Their reductive maps become true through use. They may also be the icons of weird forms of experimental religion, or ways of binding. They can notate the world with friend/enemy borders. The example here is the popular game Ingress, with its ludic Manicheanism, in which users are trained to attack and defend non-existent territories. Fanon once noted that when French colonial power jammed the radio broadcasts of the resistance, Algerians would leave the radio dial on the jammed signal, the noise standing in for it. Bratton wonders how one might update the gesture. Like Gilroy and Karatani in their different ways, Bratton wonders what kinds of universality could emerge, in this case as a kind of abstraction from interfacial particularities and their “synthetic diagrammatic total images.” (297)
However, as Bratton realizes, “We fear a militarization of cognition itself… Enrollment and motivation according to the interfacial closures of a political theological totality might work by ludic sequences for human Users or by competitive algorithmic ecologies for nonhuman Users.” (297) Both human and nonhuman cognition could be assimilated to stack war machines. Or perhaps, one could remake both human and inhuman users into a new kind of polity, in part through interfaces of a different design. “A strong interfacial regime prefigures a platform architecture built of total interfacial images and does so through the repetition of use that coheres a durable polity in resemblance to the model.” (339)
Address layer: Address is a formal system, independent of what it addresses, that denotes singular things through bifurcators such as names or numbers, which can be resolved by a table for routing. Addressing creates generic subjectivity, so why not then also generic citizenship? Address can however give rise to something like fetishism. In Bratton’s novel reading of Marx, capitalism obfuscates the address of labor, treating it as a thing and addressing things in labor’s place as if those things had magical properties.
If we are all users (we humans and inhumans) then a right to the stack is also a right to address, as only that which has an address can be the subject of rights in the “virtual geographic order” of a stack geopolitics. (191) Address is no longer just a matter of discrete locations in a topography. As I put it in Gamer Theory, space is now a topology, which can be folded, stretched and twisted. As Galloway has already shown, the distributed network of TCP/IP is doubled by the centralized system of DNS, which records who or what is at which address. Bratton’s interest is in what he calls deep address, modeled on the intertextuality or détournement one sees in the archive of texts, or the architectural thought of Christopher Alexander for whom building was about containers and conveyors for information. Address designates a place for things and enables relations between things; deep address designates also the relations, and then the relations among those relations. Deep address is to address as a derivative is to a contract. It’s endless metadata: about objects, then metadata about the metadata about those objects, and so on. The financialization of addressability may also be a kind of fetishism, mistaking the metadata about a relation for a relation. Deep address as currently implemented makes everything appear to a user configured as a uniquely addressed subject who calls up the earth through the stack as if it were only a field of addressable resources. Hence, “not only is the totality of The Stack itself deeply unstable, it’s not clear that its abyssal scope of addressability and its platform for the proliferation of near-infinite signifiers within a mutable finite space are actually correspondent with the current version of Anthropocenic capitalism.” (213) However, deep address has become an inhuman affair. Not only are most users not subjects, so too most of what is addressed may not even be objects. Deep address generates its own accidents. Maybe it is headed toward heat death, or maybe toward some third nature – deep address may outlive the stack. Bratton: “we have no idea how to govern in this context.” (213)
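As a rough illustration of the distinction drawn above between address and deep address, consider the following sketch. The tables and names are my own assumptions for illustration, not Bratton's notation.

```python
# Illustrative sketch only: address as a table that resolves a name to a
# thing, and "deep address" as the further addressing of relations between
# things, and of relations among those relations (metadata about metadata).

# Address: names resolved to things by a routing table, independent of
# what the things are.
things = {
    "user:42": "a human user",
    "sensor:7": "a shipping-container sensor",
    "doc:alpha": "a text file",
}

# Deep address: relations between addressed things get addresses too...
relations = {
    "rel:1": ("user:42", "reads", "doc:alpha"),
    "rel:2": ("sensor:7", "annotates", "doc:alpha"),
}

# ...and so do relations among those relations, like a derivative
# written on a contract.
meta_relations = {
    "rel:3": ("rel:1", "correlated-with", "rel:2"),
}

def resolve(address):
    """Resolve any address: a thing, a relation, or a relation of relations."""
    for table in (things, relations, meta_relations):
        if address in table:
            return table[address]
    raise KeyError(address)

print(resolve("doc:alpha"))  # a text file
print(resolve("rel:3"))      # ('rel:1', 'correlated-with', 'rel:2')
```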
City layer: Beneath the address layer is still the old-fashioned topography it once addressed – the city. Only the city now looks more like Archizoom’s No-Stop City than the static geometries of Le Corbusier. In the city layer absorbed into the stack, mobilization is prior to settlement, and the city is a platform for sorting users in transit. As Virilio noted some time ago, the airport is not only the interface but also the model of the overexposed city.
Like something out of a Ballard story, the city layer is one continuous planetary city. It has a doubled structure. For every shiny metropolis there’s an anti-city of warehouses and waste dumps. The stack subsumes cities into a common platform for labor, energy and information. Proximity still has value, and the economy of the city layer depends on extracting rents from it. Here one might add that the oldest form of ruling class – the rentier class – has found a future not (or not just) in monopolizing that land which yields energy (from farms and mines) but also that which yields information – the city. Cities are platforms for users rather than polities for citizens. And as Easterling might concur, their form is shaped more by McKinsey or Halliburton than by architects or planners. Architecture becomes at best interface design, where cement meets computation. It is now a laminating discipline, creating means of stabilizing networks, managing access, styling interfaces, mixing envelopes. Cities are to be accessed via mobile phone, which affords parameters of access, improvisation, syncopation. The ruin our civilization is leaving does not look like the pyramids. It’s a planet wrapped in fiber optic. But perhaps it could be otherwise. “Our planet itself is already the mega-structural totality in which the program of total design might work. The real design problem then is not foremost the authorship of a new envelope visible from space, but the redesign of the program that reorganizes the total apparatus of the built interior into which we are already thrown together.” (182) Ironically, today’s pharaohs are building headquarters that simulate old forms, be it Google’s campus, Amazon’s downtown or Apple’s weird spaceship. They all deny their spatial doubles, whether it’s Foxconn where Apple’s phones are made or Amazon’s “logistics plantations.” (185) But it is hard to know what a critical practice might be that can intervene now that cities are layers of stack platforms, where each layer has its own architectural form. “Is Situationist cut-and-paste psychogeography reborn or smashed to bits by Minecraft?” (180) Bratton doesn’t say, but it at least nicely frames the kind of question one might now need to ask.
Cloud layer: Low in the stack, below the city layer, is the cloud. It could be dated from the development of Unix time-sharing protocols in the 1970s, from which stems the idea of users at remote terminals sharing access to the same computational power. The cloud may indeed be a kind of power. “As the governing nexus of The Stack, this order identifies, produces and polices the information that can move up and down, layer to layer, fixing internal and external borders and designating passages to and from.” (111)
It may also be a layer that gives rise to unique kinds of conflict, like the First Sino-Google War of 2009, where two stacks, built on different kinds of cloud with different logics of territory and different imagined communities of users, collided. That may be a signal moment in an emerging kind of geopolitics that happens when stacks turn the old topography into a topology. “The rights and conditions of citizenship that were to whatever degree guaranteed by the linking of information, jurisdiction and physical location, all within the interior view of the state, now give way perhaps to the riskier prospects of a Google Grossraum, in which and for which the terms of ultimate political constitution are anything but understood.” (114) The cloud layer is a kind of terraforming project – here on earth. Clouds are built onto, or bypass, the internet. They form a single big discontinuous computer. They take over functions of the state, cartography being just one example. There are many kinds of clouds, however, built into quite different models of the stack, each with their own protocols of interaction with other layers. Google, Apple and Amazon are stacks with distinctive cloud layers, but so too are WalMart, UPS and the Pentagon. Some cloud types: Facebook, which runs on the captured user graph. It is a rentier of affective life offering a semi-random newspaper and cinema, strung together on unpaid nonlabor, recognition and social debit. Then there’s Apple, who took over closed experience design from Disney, and offer brand as content. As a theology, Apple is an enclave aesthetic about self-realization in a centralized market. It’s a rentier of the last millimeter of interface to its walled garden. On the other hand, Amazon is an agora of objects rather than subjects, featuring supply chain compression, running on its own addressing system, with algorithmic pricing and micro-targeting. But even Amazon lacks Google’s universal ambition and cosmopolitan mission, as if the company merely channeled an inevitable quant reason. It is a corporation founded on an algorithm, fed by universal information liquidity, which presents itself as a neutral platform for humans and inhumans, offering ‘free’ cloud services in exchange for information. “Google Großraum delaminates polity from territory and reglues it into various unblendable sublayers, weaving decentralized supercomputing through increasingly proprietary networks to hundreds of billions of device end-points.” (295) Despite their variety, to me these clouds are all shaped by the desires of what I call the vectoralist class, which is to extract what Bratton calls “platform surplus value.” (137) But perhaps they are built less on extracting rent or profit than on asymmetries of information. They attempt in different ways to control the whole value chain through control of information. Finance as liquidity preference may be a subset of the vectoralist class as information preference, or power exercised through the most abstract form of relation, and baked into the cloud no matter what its particular form. Bratton: “The Cloud polis draws revenue from the cognitive capital of its Users, who trade attention and micro-economic compliance in exchange for global infrastructural services, and it in turn provides each of them with an active, discrete online identity and the license to use that infrastructure.” (295) Maybe this is “algorithmic capitalism” (81) – or maybe (as I argue) it’s not capitalism any more, but something worse.
This is something Bratton’s innovations in conceptual language help us perceive, but which could be pushed still further. The current cloud powers were all built out of accidental advantages or contingent decisions. Not without a lot of help from human users, whose unpaid non-labor provides the feedback for their constant optimization. We are all guinea pigs in an experiment of the cloud’s design. But Bratton is resistant to any dystopian or teleological read on this. The cloud layer was the product of accident as much as design; conflict as much as collaboration. Still, there’s something unsettling about the prospect of the nomos of the cloud. Bratton: “The camp and the bunker, detention and enclave, are inversions of the same architecture.” (368) The nomos of the cloud can switch between the two. It is yet to be seen what other topological forms it might enable.
Earth layer: Was computation discovered or invented? Now that the stack produces us as users who see the earth through the stack, we are inclined to project our experience of working and playing with the stack onto the earth itself. It starts to look like a computer, maybe a first computer, from which the second one of the stack is derived. But while earth and stack may look formally similar they are not ontologically identical. Or, as I speculated in A Hacker Manifesto, the forces of production as they now stand both reveal and create an ontology of information that is both historical and yet ontologically real.
Bratton: “The Stack is a hungry machine…” (82) It sucks in vast amounts of earth in many forms. Here Bratton connects to what Jussi Parikka calls a Geology of Media. Everyone has some Africa in their pocket now – even many Africans, although one should not ignore asymmetries in where extractions from the earth happen and where the users who get to do the extracting happen to be. Bratton: “… there is no Stack without a vast immolation and involution of the Earth’s mineral cavities. The Stack terraforms the host planet by drinking and vomiting its elemental juices and spitting up mobile phones…. How unfamiliar could its flux and churn be from what it is now? At the radical end of contingency, what is the ultimate recomposability of such materials? The answer may depend on how well we can collaborate with synthetic algorithmic intelligence to model the world differently…” (83) The stack terraforms the earth, according to a seemingly logical but haphazard geodesign – rather like the aesthetics of Superstudio. “As a landscaping machine, The Stack combs and twists settled areas into freshly churned ground, enumerating input and output points and re-rendering them as glassy planes of pure logistics. It wraps the globe in wires, making it into a knotty, incomplete ball of glass and copper twine, and also activating the electro-magnetic spectrum overhead as another drawing medium, making it visible and interactive, limning the sky with colorful blinking aeroglyphs.” (87) Particularly where the earth is concerned, “Computation is training governance to see the world like it does and to be blind like it is.” (90) But the stack lacks a bio-informational skin that might connect ecological observation to the questioning of resource management. Running the stack now puts more carbon into the atmosphere than the airline industry. If it were a state it would be the fifth largest energy suck on the planet. “Even if all goes well, the emergent mega-infrastructure of The Stack is, as a whole, perhaps the hungriest thing in the world, and the consequences of its realization may destroy its own foundation.” (94) Hence the big question for Bratton becomes: “Can The Stack be built fast enough to save us from the costs of building The Stack?” (96) Can it host computational governance of ecologies? “Sensing begets sovereignty,” as I showed in Molecular Red in the case of weather and climate. (97) But could it result in new jurisdictions for action? Hence, “we must be honest in seeing that accommodating emergency is also how a perhaps illegitimate state of exception is stabilized and over time normalized.” (103) It is because so far “there is no one governance format for climates and electrons” that the space for design is open at all. (104) Bratton is reluctant to invite everything into Bruno Latour’s parliament of things, as this to him is a coercing of the nonhuman and inhuman into mimicking old-fashioned liberalism. But making the planet an enemy won’t end well for most of its inhabitants. Which brings us to the problem of the stack to come, and Bratton’s novel attempt to write in the blur between what is here but not named and what is named but not really here. Bratton: “Reactionary analog aesthetics and patriotisms, Emersonian withdrawal, and deconstructionist political theology buy us less time and far less wiggle room than they promise….” (81) What provides the interesting angle of view in Bratton is thinking geopolitics as a design problem.
“We need a geopolitics of design that is comfortable not only with computation but also with vertical systems of designation and decision.” (xix) But this is not your usual design problem thinking. “The more difficult assignment for design is to compose relations within a framework that exceeds both the conventional appearance of forms and the provisional human context at hand, and so pursuing instead less the materialization of abstract ideas into real things than the redirection of real relations through a new diagram.” (210) Designing the stack to come, like any good design studio, does try to start with what is at hand. “Part of the design question then has to do with interpreting the status of the image of the world that is created by that second computer, as well as that mechanism’s own image of itself, and the way that it governs the planet by governing its model of that planet.” (301) This is not a program of cybernetic closure, but rather of “enabling the world to declare itself as data tectonics…. Can the ‘second planetary computer’ create worlds and images of worlds that take on the force of law (if not its formality) and effectively exclude worse alternatives?” (302) It might start with “a smearing of the planet’s surface with an objective computational film that would construct a stream of information about the performance of our shared socio-natural spaces…” (301) Contra Latour, but also Haraway and Tsing, for Bratton there is no local, only the global. We’re users stuck with a stack that resulted from “inadvertent geoengineering.” (306) But the design prospect is not to perfect or complete it, but to refashion it to endure its own accidents and support a range of experiments in rebuilding: “the geo-design I would endorse doesn’t see dissensus as an exception.” (306)
It’s not a romantic vision of a return to an earth before the stack. Bratton: “… the design of food platforms as less about preserving the experiential simulation of preindustrial farming and eating… and more like molecular gastronomy at landscape scale.” (306) But it is not a naïve techno-utopianism either. While I don’t think it’s a good name, Bratton is well aware of what he calls cloud feudalism, which uses the stack to distribute power and value upwards. And he is fully aware that the “militarized luxury urbanism” of today’s vectoralist class depends on super-exploitation of labor and resources. (311) At least one novel observation here however is that the stack can have different governance forms at each level. The stack is not one infrastructure, but a laminating of relatively autonomous layers.
Here one might look sideways in a media archaeology vein at other forms of stack that fell by the wayside, from Bogdanov to the attempt to computerize the Soviet Gosplan – which as Bratton notes does not look completely unlike what Google actually achieved. Hayek may have been right in his time that state planning could not manage information better than a market. But maybe neither could manage information as well as a properly designed stack platform. Perhaps, as some Marxists once held, the capitalist ruling class (and then the vectoralist ruling class) perfected the forces of production that make them obsolete. Perhaps in the liminal space of the stack to come one can perceive technical-social forms that get past both the socialist and capitalist pricing problems. Bratton: “We allow, to the pronounced consternation of both socialist and capitalist realists, that some polypaternal supercomputational descendant of Google Gosplan might provide a mechanism of projection, response, optimization, automation, not to mention valuation and accounting beholden neither to market idiocracy nor dim bureaucratic inertia, but to the appetite and expression of a curated algorithmic phyla and its motivated Users.” (333) Perhaps there’s a way planning could work again, using deep address, but from the edges rather than the center. This might mean however an exit from a certain residual humanism: “the world may become an increasingly alien environment in which the privileged position of everyday human intelligence is shifted off-center.” (338) Perhaps it’s not relevant whether artificial cognition could pass a Turing test; it is more interesting when it doesn’t. Here Bratton gestures towards the post-human accelerationism of Negarestani and Brassier, but with far more sense of the constraints now involved. “The Anthropocene should represent a shift in our worldview, one fatal to many of the humanities’ internal monologues.” (353)
Bratton: “The Stack becomes our dakhma.” (354) A dakhma, perhaps, like the raised platform built by the Zoroastrians for excarnation, where the dead are exposed to the birds. To build the stack to come we have to imagine it in ruins: “design for the next Stack… must work with both the positive assembly of matter in the void, on the plane and in the world, and also with the negative maneuver of information as the world, from its form and through its air.” (358)
To think about, and design, the stack to come means thinking within the space of what Bratton calls the black stack, which is a “generic profile of its alternative totalities.” (363) It might look more like something out of Borges than out of the oracular pronouncements of Peter Thiel or Elon Musk. Bratton: “Could this aggregate ‘city’ wrapping the planet serve as the condition, the grounded legitimate referent, from which another, more plasmic, universal suffrage can be derived and designed?” (10) Let’s find out.
Next in our tour of new-classic works of media theory, after Jodi Dean and Tiziana Terranova, I turn to Hito Steyerl and her collected writings The Wretched of the Screen (e-flux and Sternberg Press, 2012). If Dean extends the psychoanalytic and Terranova the autonomist strains of (post)Marxist thought, Steyerl does something similar for the formalist strategies of a past era of screen theory and practice.
Hito Steyerl is from that era when the politics of culture was all about representation. Sometimes the emphasis was on the question of who was represented; sometimes on the question of how. The latter question drove a series of inquiries and experiments in form. These experiments tended to focus fairly narrowly on things like the logic of cinematic editing, not least because the larger technical and economic form of cinema was fairly stable. It’s a line of thought that did not survive too well into the current era, which Steyerl alternately calls “audiovisual capitalism,” “disaster capitalism” or “the conceptual turn of capitalism.” (33, 93, 42) Neither capitalism – if this is still what this is – nor media form seems at all stable any more. (For Lev Manovich, media dissolves into software.) The formal questions need to be asked again, across a wider tract of forms, and for that matter for new categories of that which might or might not be represented. It may even turn out that the formal questions of media are not really about representation at all.
For this is an era in free-fall without end, to the point where it feels like a kind of stasis. But perhaps this free-fall opens onto a particular kind of vision. For Platonov, the Bolsheviks had taken away from peasant society both the heavens and the land, leaving only the horizon. Like Virilio, Steyerl thinks the horizon has now disappeared as well. Once it was the fulcrum that allowed mariners to find their latitude. But the line they drew was already an abstract one, turning the earth from a continuum of places to a grid-like space.
This abstracted perception of space both affirms and undermines the point of view of the spectator, making all of space appear as if for that point of view, but also making that point of view itself an abstract one. Steyerl doesn’t mention longitude, but one could think here also about how the chronometer created a second axis of abstraction, of time and longitude, completing the grid and making time as smooth and digital as space. “Time is out of joint, and we no longer know whether we are objects or subjects as we spiral down in an imperceptible free fall.” (26) Perhaps there is also a shift in emphasis from the horizontal to the vertical axis. This is an era that privileges the elevated view, whether of the drone or the satellite. The spectator’s point of view floats, a “remote control gaze.” (24) If everything is in free-fall then the only leverage is to be on top of the one falling below, a kind of “vertical sovereignty.” (23) It is the point of view of “intensified class war from above.” (26)
It is hard to know what kind of formal tactic might get leverage in such a situation. Steyerl counsels “a fall toward objects without reservation.” (28) This, as it will turn out later, yields some striking ways of approaching the question of the subject as a formal problem as well.
Falling toward the image as a kind of object, Steyerl is drawn to the poor image, or the lumpen-image, the kind that lacks quality but has accessibility, restores a bastard kind of cult value, functions as “a lure, a decoy, an index.” (32) This reverses one of the received ideas of the cinema-studies subset of screen studies, where “resolution was fetishized as if its lack amounted to castration of the author.” (35) Steyerl celebrates instead Kenneth Goldsmith’s Ubuweb, which makes a vast archive of the avant-garde available free online (but out of sight of Google) at low resolution. The poor image is cousin to Julio Garcia Espinosa’s manifesto ‘For an imperfect cinema,’ and as such is one of Steyerl’s many reworkings, indeed détournements, of classic ‘moves’ from the formalist critical playbook. Like Third Cinema before it, the poor image works outside the alignment of high quality with high class. The poor image is not something that can just be celebrated. It is also the vehicle for hate speech and spam. The poor image is a democratic one, but in no sense is the democratic an ideal speech situation. Maybe it’s more what circulates in and between Hiroki Azuma’s databases. Poor images “express all the contradictions of the contemporary crowd: its opportunism, narcissism, desire for autonomy and creation, its inability to focus or make up its mind…” (41) Their value is defined by velocity alone. “They lose matter and gain speed.” (41) But there are other contradictions inherent in the poor image. It turns out that dematerialized art works pretty well with conceptual capitalism. The tendency of the latter may well be to insist that even non-existent things must be private property. So, for example, Uber is now a major transport company yet it owns no vehicles; Amazon is a major retailer that owns no stores. Dematerialization is in my terms the strategy of the vectoral class. Or rather, not so much a dematerialization as a securing of control through the information vector, and thus an exploit based on certain properties of ‘matter’ of fairly recent discovery. On the other hand, the poor image can circulate in anonymous networks, yielding anomalous shared histories. Perhaps it produces Vertov’s visual bonds in unexpected ways. Steyerl’s figure for this is David Bowie’s song ‘Heroes’, where the hero is not a subject but an object, the image-as-object – that which can be copied, that which struggles to become a subject. But maybe instead the hero is that which Mario Perniola, after Benjamin, calls a “thing that feels” (50).
In the era of free-fall, perhaps trauma is a residue of the independent subject, “the negativity of the thing can be discerned by its bruises.” (52) Things condense violence just as much as subjects; things condense violence just as much as desire. “The material articulation of the image is like a clone of Trotsky walking around with an icepick in his head.” (53) The thing is an object but also a fossil, a form that is an index of the circumstances of its own death.
Again, Steyerl will reach here for a détournement of the old tactics. In this case it is Alexander Rodchenko, for whom all things could be comrades, just as for Platonov all living things could be comrades. In free-fall, the living and non-living things might be a heap rather than a hierarchy. “History, as Benjamin told us, is a pile of rubble. Only we are not staring at it from the point of view of Benjamin’s shell-shocked angel. We are not the angel. We are the rubble.” (56) Steyerl thinks not just the micro-scale of the image but also the macro-scale of the institution, and asks questions with a similar formal lineage. From the point of view of screen studies, the museum is now a white cube full of black boxes. You pass under a black curtain into a usually quite uncomfortable little black box with a bench to sit on and bad sound. The black-box in white-cube system is for Steyerl not just a museum any more but also a factory. Strangely enough, they are now often – like Tate Modern or DIA Beacon – housed in former industrial sites. Steyerl: “the white cube is… the Real: the blank horror and emptiness of the bourgeois interior.” (62) Or was. Now it is a sort of a-factory, producing transformations in the feelings of subjects rather than in the form of objects. The museum even functions temporally like work does in the over-developed world. Once upon a time the factory and the cinema disciplined bodies to enter, perform their function, and leave at the same time. Now the museum allows the spectator to come and go, to set their own pace – just like contemporary occupations where the worker is ‘free’ so long as the contract gets done on time.
There is a sort of illusion of sovereignty in both cases, where one can appear to be ‘on top of things’. The spectator in the museum is as in charge as the artist, the curator or the critic. Or so it appears. It’s a sort of public sphere in negative, which does not actually foster a conversation, but rather produces the appearance of a public. Maybe it is something like the snob’s means of sustaining desire, identity and history, as Azuma would have it.
The labor of spectating in today’s museums is always incomplete. No one viewer ever sees all the moving images. Only a multiplicity of spectators could ever have seen the hours and hours of programming, and they never see the same parts of it. It is like Louis Althusser’s interpellation in negative. There’s no presumption of an ideological apparatus directly hailing its subjects. Rather it’s an empty and formal gesture, an apparatus that does not call ‘Hey you!’ but rather ‘Is anybody there?’ – and usually gets no answer. The videos loop endlessly in mostly empty blackened rooms. Expanding the scale again, Steyerl considers the white cube-black box system as a now international system. No self-respecting part of the under-developed world is without its copy of these institutions of the over-developed world. The art world’s museum-authenticated cycles of “bling, boom and bust” are now a global phenomenon. (93) A placeless culture for a placeless space and time in free-fall. “If contemporary art is the answer, the question is, how can capitalism be made more beautiful?” (93) Contemporary art mimics the values of a ruling class that I think may no longer quite be described as bourgeois and is perhaps not even capitalist any more. Both imagine themselves to be “unpredictable, unaccountable, brilliant, mercurial, moody, guided by inspiration and genius.” (94) The abstract, placeless space of international contemporary art is populated by “jpeg virtuosos,” “conceptual impostors” and “lumpen freelancers” all “hard wired and thin skinned” and trying to work their way in. (95, 96, 100) Not to mention the women whose affective labor holds the whole art world together.
For Steyerl there’s a new kind of shock worker, a post-Soviet kind used to churning out affects and percepts, living on adrenalin, deadlines, exhaustion and an odd kind of quota. Steyerl seems to think they cannot really be defined in class terms. But perhaps they are just a niche version of what I call the hacker class: those who add transformations to information but do not own the means of realizing the value of what they produce.
Here Steyerl usefully augments our understanding of the contemporary production of subjects. The art worker subcategory of the hacker class makes visible certain traits that may occur elsewhere. Some of them, for example, no longer have work so much as occupations. These keep people busy but lack the classic experience of alienation. Work time is not managed the old Fordist way, and nor is there a recognizable product into which the worker’s labor has been estranged. The workers from the 60s on revolted against alienation. “Capital reacted to this flight by designing its own version of autonomy: the autonomy of capital from workers.” (112) The artist is a person who refuses the division of labor into jobs. To be an artist is not to work but to have an occupation. As Franco Berardi argues, it forecloses the possibility of alienation as traditionally understood. The dream of the historic avant-gardes was to merge art and everyday life under the sign of an expanded concept of use value. Like all utopian projects, it came true, but with a twist. Art and the everyday merged, but under the sign of exchange value. Contemporary art became an aesthetic and economic project but not a political one. The Capital-A Artist is a mythic figure, a creative polymath as legitimation for the amateur entrepreneur, who does not know much about the forces of production but can read a spreadsheet and talk a good game.
Perhaps occupation is a category that could be pushed in other directions. Steyerl gestures towards the occupiers of the New School and their attempt – brief though it was – to occupy time and space in a different way, picking up where the Situationist International left off with their constructed situations.
But for many the free-lance life is not one in which to realize one’s dreams. It is a part of a world of freedom, but of freedom from social bonds, from solidarity, from culture, education, from a public sphere. The term free-lance probably comes from Walter Scott’s Ivanhoe (1820), and meant a mercenary rather than a casual worker. In Kurosawa’s Yojimbo (1961) the mercenary plays one side against the other for the common good. Perhaps there’s a hint there at strategies against the ‘sovereign’ powers of the present. In the legal concept of the king’s two bodies, the actual body of the sovereign, which is mortal, is doubled by a formal body, which is immortal. Steyerl’s détournement of the concept proposes thinking both the actual and formal bodies as dead. A dead form is kept half alive by dead subjects. Maybe it’s a kind of negative sovereignty. Here Steyerl focuses on the unidentified dead of the civil wars, from Spain to Turkey. The unidentified dead “transgress the realms of civil identity, property, the order of knowledge, and human rights alike.” (151) Like poor images, the unidentified dead remain unresolved. “Their poverty is not a lack, but an additional layer of information, which is not about content but form. This form shows how the image is treated, how it is seen, passed on, or ignored, censored, and obliterated.” (156) In an era obsessed with the surveillance of the subject of both state and corporate power, Steyerl locates a dread exception. A different kind of negative or blank subject comes up in Steyerl’s study of spam. “According to the pictures dispersed via image spam, humanity consists of scantily dressed degree-holders with jolly smiles enhanced by orthodontic braces.” (161) They are “a reserve army of digitally enhanced creatures who resemble the minor demons and angels of mystic speculation…” (163) Image-spam is addressed to humans but does not really show them. It is “an accurate portrayal of what humanity is not. It is a negative image.” (165) As is Steyerl’s habit, here again she pushes this reading further, to ask whether these non-humans of image-spam could be the model for a kind of refusal or withdrawal, avatars for a desire to escape from visual territory. Contra Warhol: today everyone can be invisible for fifteen minutes. It’s a walkout: “it is a misunderstanding that cameras are tools of representation; they are at present tools of disappearance. The more people are represented the less is left of them in reality.” (168) The abstraction built into modern modes of perception marks the position of the subject as the central node in the visual field. But rather than stress its relentless centrality, Steyerl points towards the uses of its other quality – its abstract inhuman quality.
In an era where political art is reduced to “exotic self-ethnicization, pithy gestures, and militant nostalgia,” perhaps there’s a secret path through the free-fall world of floating images and missing people. (99) “Any image is a shared ground for action and passion, a zone of traffic between things and intensities.” (172) The poor image can perhaps be a visual bond in negative, marking through its additional layers of information some nodes in an abstract space where we choose not to be.
One way of thinking about the figure of the modern is that it was about a relation between a past and a future that was never reversible or cyclical. Whether good or bad, the future was in a relation of difference to the past. But art is no longer modern; it is now contemporary, with a relation only to the present, and to other art with which it synchronizes in the present. Perhaps what Steyerl is attempting is to open a difference between the modern and the contemporary. It does not work quite as the internal difference within the modern did, but at least it opens up a space where historical thought, feeling and sensation might live again, if only in negative. As the residue of futures past.
What are the classic texts for constituting a critical theory for the twenty-first century? How would one go about choosing them? Perhaps it would be a matter of making a fresh selection from the past, even the recent past, according to the agenda for thought and action that confronts us today.
That agenda seems to me to have at least three major features. The first is the anthropocene. One can no longer bracket off nature from the social, and construct a theory exclusively on the terrain of the social. The second is the role of information in both production and reproduction. One can no longer just assume that either capital marches on as in the age of the steam engine, or that the superstructures too are not affected by such vulgar questions. The third would be a shift away from Eurocentric concerns. World history appears to be made elsewhere now. If the first of these agenda items steered us towards Paul Burkett’s work, the second points towards some very prescient attempts to think the age of information. I want to start with Tiziana Terranova’s Network Culture: Politics for the Information Age (Pluto Press, London, 2004), which brings together the resources of the Italian autonomist thinkers with Deleuze and Guattari. It is a position I used to be close to myself but have moved away from. So this appreciation of Terranova’s classic text is also in some respects an inquiry into both what reading Marx with Deleuze enabled but also where the limits of such a conjunction might lie.
Terranova starts by taking a certain distance from the ‘postmodern’ habits of thought characteristic of late twentieth century writing. This was a time when texts like Paul Virilio’s Information Bomb were popular, and encouraged a certain delirious end-of-the-world-as-we-know-it talk. If the industrial era tech of mechanically reproducible media were on the way out, then everything was supposedly becoming unreal and out of control. Welcome to what Jean-François Lyotard called the ‘immaterial’.
And that was the more respectable end of that talk. Neo-gnostic sects such as the extropians really thought they could leave their bodies behind and upload consciousness into computers. The cyberpunks wanted to ‘jack-in’ and at least temporarily leave ‘meatspace’. Honestly, people really did lose their shit over the beginnings of the end of mechanical and broadcast media, as tends to happen in all such transitions in media form, as historians of media culture well know. Contrary to certain now popular narratives by latecomers, not everybody went gaga over ‘new media’ in that period, which stretches perhaps from the popularization of cyberpunk in 1984 to the death of the internet as a purely scientific and military medium in 1995. There were plenty of experimental, critical and constructivist minds at work on it. I would count Terranova as a fine exponent of the constructivist approach, a sober builder of useful theory that might open spaces for new practices in the emerging world of post-broadcast media flux.
Terranova: “I do not believe that such information dynamics simply expresses the coming hegemony of the ‘immaterial’ over the material. On the contrary, I believe that if there is an acceleration of history and an annihilation of distances within an information milieu, it is a creative destruction, that is a productive movement that releases (rather than simply inhibits) social potentials for transformation.” (2-3) It became a question, then, of a level-headed analysis of the tendencies at work.
It helps not to make a fetish of just one aspect of media form, whether one is talking about the ‘internet’ back then, or ‘big data’ now. Sometimes these are aspects of more pervasive technological phyla. Terranova: “Here I take the internet to be not simply a specific medium but a kind of active implementation of a design technique able to deal with the openness of systems.” (3) This might be a useful interpretive key for thinking how certain now-dominant approaches to tech arose. Her approach is not limited to tech, however, but studies concepts and milieu as well as techniques. Lyotard sent everyone off on a bum steer with the idea of the immateriality of information, a problem compounded by Jameson’s famous assertion that the technics of late capitalism could not be directly represented, as if the physics of heat engines was somehow clearer in people’s heads than the physics of electrical conductivity. Terranova usefully begins again with a concrete image of information as something that happens in material systems, and thinks it through the image of a space of fluid motion rather than just as an end-to-end line from sender to receiver.
Anybody who studied communication late last century would have encountered some version of the sender -> code -> channel -> receiver model, with its mysterious vestigial term of ‘context’. Stuart Hall complicated this by adding a possible difference between the encoding and the decoding, thus making the non-identity of the message at either end a function not of noise as something negative but of culture as a positive system of differences. But even so, this way of thinking tended to make a fetish of the single, unilinear act of communication. It ended up as an endless argument over whether the sender’s power dominated the receiver’s, or if the receiver had an independent power of interpretation. That was what the difference between the Frankfurt school’s followers and the Birmingham school of Hall et al boiled down to.
Terranova usefully brackets off the whole language of domination and hegemony, moving the discussion away from the privileged questions of meaning and representation that the humanities-trained love so much. She insists we really take seriously the breakthrough of Claude Shannon’s purely mathematical theory of information of 1948. Information actually means three different things in Shannon. It is (1) a ratio of signal to noise, (2) a statistical measure of uncertainty, and (3) a non-deterministic theory of causation. He developed his theory in close contact with engineers working on communication problems in telephony at Bell Labs, one of the key sites where our twenty-first century world was made.
The signal to noise problem arose out of attempts to amplify telephony signals for long distance calls, where the additional energy used to amplify the signals also shows up as additional noise. This, incidentally, is where one sees how the experience of information as ‘immaterial’ is actually an effect produced by decades of difficult engineering. It takes energy to make a signal pass through a copper wire, making the electrons dance in their predictable but non-deterministic way. The energy leaks into the signal as a source of noise.
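To make the first two of those senses concrete, here is a minimal sketch of my own (nothing in Terranova’s text looks like this, and the telephone-line figures below are invented for illustration): Shannon’s entropy as a statistical measure of uncertainty, and the Shannon–Hartley formula that ties a channel’s capacity to its bandwidth and its ratio of signal to noise.

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Shannon's H = -sum(p * log2 p): average uncertainty, in bits per symbol."""
    counts = Counter(message)
    total = len(message)
    return 0.0 - sum((n / total) * log2(n / total) for n in counts.values())

def channel_capacity(bandwidth_hz: float, signal_to_noise: float) -> float:
    """Shannon-Hartley limit, C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * log2(1 + signal_to_noise)

# Redundancy carries no information; novelty does.
print(shannon_entropy("aaaaaaaa"))   # 0.0 bits per symbol
print(shannon_entropy("abcdabcd"))   # 2.0 bits per symbol

# An illustrative 3 kHz line with a signal-to-noise ratio of 1000 (30 dB):
print(channel_capacity(3000, 1000))  # roughly 29,900 bits per second
```

Neither function knows or cares what the message means; redundancy and novelty are all there is to measure.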
What was crucial about Shannon’s approach to this problem was to separate out the concept of information from having anything to do with ‘meaning’. Information is not ‘text’ or ‘language’. It is just a ratio of novelty and redundancy. “From an informational perspective, communication is neither a rational argument nor an antagonistic experience…” (15) It has nothing to do with communication as domination or resistance, and it has nothing to do either with Habermasian communicative action or Lyotard’s language games. Before one even gets to any such hermeneutic theories, one has to deal with the specific materiality of information, which was in effect Shannon’s unique contribution. For information to be transmitted, it has to confront the demon of noise. In this approach sender and receiver appear as nodes cooperating against noise rather than as dialectical opposites. But Terranova does not adopt Shannon without modification. Rather she follows Gilbert Simondon’s critique of information theory. Simondon points out that in Shannon, the individual sender and receiver are pre-constituted. They just appear as such, prior to the act of communication. Simondon’s approach picks up the vestigial concept of ‘context’. For him, the act of communication is also what constitutes the sender and receiver as such. His approach is to think the context as the site where information produces individuations out of a collective, undifferentiated context.
For Terranova, this is a step toward thinking the space of information as a more turbulent, metastable system that can be disturbed by very small events: “the informational dimension of communication seems to imply an unfolding process of material constitution that neither the liberal ethics of journalism nor the cynicism of public relations officers really address.” (19) This touches on the problem of causation. Neither the liberal-rationalist nor the cynical-manipulable approach to communication really holds up to much scrutiny. Which reminds me of the advertising guru David Ogilvy, quoting one of his clients: “I know only half the advertising I pay for works, but I just don’t know which half.”
Terranova points the way to thinking information in a broader context than struggles over meaning. It could lead on to problems in the organization of perception and the construction of bodily habits. It could be a way of framing a problem of the information redesign of the whole field of media and culture. Information design could be about more than messages defeating noise; it could be about designing fields of possibility. The grand obsession of the first wave of information researchers and engineers was the elimination not only of noise, but of ambiguity. As Paul Edwards has shown in The Closed World (1996), even when they did not actually work any better, closed, digital systems took preference over analog ones, particularly in military-funded projects. For Terranova, what might be a constructive project in the wake of that is some kind of information culture that does not enforce a cut in advance in the fabric of the world, and then reduce its manipulation to a set of predictable and calculable alternatives. “Informational cultures challenge the coincidence of the real with the possible.” (20) Information systems reduce material processes to closed systems defined by the relation between selection (the actual) and the field of possibilities (the virtual), but where that field appears in an impoverished form. “Information thus operates as a form of probabilistic containment and resolution of the instability, uncertainty and virtuality of a process.” (24)
Interestingly, Terranova’s approach is less about information as a way of producing copies, and more about the reduction of events to probabilities, thus sidestepping the language of simulation, although perhaps also neglecting somewhat the question of how information challenged regimes of private property. The emphasis is much more on information as a form of control for managing and reproducing closed systems. This then appears as a closure of the horizon of radical transformation. Instead of future societies we have futures markets. “A cultural politics of information thus also implies a renewed and intense struggle around the definition of the limits and alternatives that identify the potential for change and transformation.” (25)
This would be a cultural politics of the probable, the possible and the real. “What lies beyond the possible and the real is thus the openness of the virtual, of the invention and the fluctuation, of what cannot be planned or even thought in advance, of what has no real permanence but only reverberations… The cultural politics of information involves a stab at the fabric of possibility.” (27) It does not arise out of negation, out of a confrontation with techno-power as an other. It is rather a positive feedback effect. There was a time when I would have shared some of this language and this project. But I think what became of an engagement with Deleuze was in the end an extension of a certain kind of romanticism, even if it is one that finds its magical other domain now immanent to the mundane world of things. Deleuze was enabling at the time, with his constructivist approach to building concepts that work across different domains. But he was a bit too quick to impose a philosophical authority on top of those other domains, as if everything was grist to the mill of a still-universalizing discourse of concept-making. It fell short of a genuine epistemological pluralism, where those other domains of knowledge and practice could push back and insist on their own protocols. Nowhere is this clearer than in Donna Haraway’s demolition job in When Species Meet (2007) of Deleuze and Guattari’s metafictional writing about wolf packs. Sometimes one has to cede the ground to people who actually know something about wolves. Perhaps I’ll pick this up again in a separate post on Terranova’s very interesting chapter on biological computing.
One of the more powerful features of the theory of information in Shannon and since is the way it linked together information and entropy. Thermodynamics, that key to the scientific worldview in Marx’s era, offered the breakthrough of an irreversible concept of time, and one which appeared as a powerful metaphor for the era of the combustion engine. In short: heat leaks, energy dissipates. Any system based on a heat differential eventually ‘runs out of steam.’
Hence the figure of Maxwell’s Demon, which could magically sort the hot particles out from the cool ones, and prevent an energy system from entropic decline into disorder. But that, in a sense, is exactly what information systems do. The tendency of things might still be entropic: systems dissipate and break down. But there might still be neg-entropic counter-systems that can sort and order and organize. Such might be an information system. Such might also be, as Joseph Needham among many others started to think, what is distinctive about living systems. Needham’s organicism borrowed from the systems theory of Bertalanffy, which pre-dates Shannon, and was based a lot more on analog thinking, particularly the powerful image of the organizing field. Much more influential was the transposition of the thought-image of the digital to the question of how life is organized as a neg-entropic system, resulting in what for Haraway in Modest_Witness (1997) is a kind of code fetishism. What is appealing to Terranova in the confluence of biological and information thinking is the way it bypassed the humanistic subject, and thought instead toward populations at macro and micro scales. But in some ways Terranova is not that far from Haraway, even though Haraway makes almost no appearance in this text. Where they intersect is in the project of understanding how scientific knowledge is both real knowledge and shot through with ideological residues at the same time: “An engagement with the technical and scientific genealogy of a concept such as information… can be actively critical without dis-acknowledging its power to give expression and visibility to social and physical processes… Information is neither simply a physical domain nor a social construction, nor the content of a communication act, nor an immaterial entity set to take over the real, but a specific reorientation of forms of power and modes of resistance.” (37) While I would want to pause over the word ‘resistance’, this seems to me a usefully nuanced approach.
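The link can be written out. This is the standard textbook parallel rather than anything quoted from Terranova: the thermodynamic and informational measures share a single functional form and differ only in units (joules per kelvin against bits), which is why information came to be read as negentropy, and why Maxwell’s demon is usually said to pay for its sorting in information.

```latex
\begin{align}
  S &= -k_{B}\sum_{i} p_{i}\,\ln p_{i}
     && \text{Boltzmann--Gibbs entropy over microstates } i \\
  H &= -\sum_{i} p_{i}\,\log_{2} p_{i}
     && \text{Shannon's uncertainty over messages } i
\end{align}
```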
I am a bit more skeptical these days about the will to impute a domain of otherness as a sort of immanent plane. Thus, while Terranova acknowledges the power of Manuel Castells’s figure of the network as a space of flows coming to dominate a space of places, she wants to retain a strong sense of radical possibility. One way she does so is by appealing to Bergson’s distinction between a quantified and a qualified sense of time, where time as quality, as duration, retains primacy, offering the promise of a “virtuality of duration.” (51)
But is this not yet another offshoot of romanticism? And what if it was really quite the other way around? What if the figure of time as quality actually depended on measurable, quantitative time? I’m thinking here of Peter Galison’s demonstration of how the engineering feat of electrically synchronized time, so useful to the railways, enabled Einstein to question the metaphysical time and space that was the backdrop to Newton’s mechanics. As Galison shows, it is only after you can actually distribute a measure of clock time pretty accurately between distant locations that you can even think about how time might be relative to mass and motion. It is certainly useful that Terranova offers a language within which to think a more elastic relation between the information in a network and the topology of that network itself. It isn’t always the case, as with Shannon’s sender and receiver, that the nodes are fixed and pre-constituted. “A piece of information spreading throughout the open space of the network is not only a vector in search of a target, it is also a potential transformation of the space crossed that always leaves something behind.” (51) This more elastic space, incidentally, is how I had proposed thinking the category of vector in Virtual Geography (1995). In geometry a vector is a line of fixed length but of no fixed position. Thus one could think it as a channel that has certain affordances, but which could actually be deployed not only to connect different nodes, but sometimes to even call those nodes into being. Hence I thought vector as part of a vector-field, which might have a certain malleable geometry, but where what might matter is not some elusive ‘virtual’ dimension, but the tactics and experiments of finding what it actually affords.
Terranova stresses the way the internet functions as an open system, with distributed command functions. It was in this sense not quite the same as the attempts to build closed systems of an early generation of communication engineers: “resilience needs decentralization; decentralization brings localization and autonomy; localization and autonomy produce differentiation and divergence.” (57) The network, like empire, is tolerant of differences, and inclusive up to a point – but also expansionist. As Terranova notes, rather presciently, “There is nothing to stop every object from being given an internet address that makes it locatable in electronic space.” (62)
In short, the internet starts to acquire the properties of a fully-realized vector-field: “Unlike telegraphy and telephony… the communication of information in computer networks does not start with a sender, a receiver and a line, but with an overall information space, constituted by a tangle of possible directions and routes, where information propagates by autonomously finding the lines of least resistance.” (65) “In a packet-switched network… there is no simple vector or route between A… and B…” (67) But I think it’s still helpful to think of it as a vector-field, in that each of those routes still has fairly fixed affordances. Terranova was a pioneer in understanding that the build-out of an apparatus of which information theory was the concept had significant implications for rethinking the work of culture and politics. “There is no cultural experimentation with aesthetic forms or political organization, no building of alliances or elaboration of tactics that does not have to confront the turbulence of electronic space. The politics of network culture are thus not only about competing viewpoints, anarchistic self-regulation and barriers to access, but also about the pragmatic production of viable topological formations able to persist within an open and fluid milieu.” (68) She notes in passing some of the experiments of the late twentieth century in “network hydrodynamics” (69) such as the Communitree BBS, Andreas Broeckmann’s Syndicate list-serv, Amsterdam’s Digital City, and the rhizome list-serv. All of these fell apart one way or another, even if many others lived on and even mutated. Much of the functionality of today’s social media derives from these early experiments. Geert Lovink has devoted several books now to documenting what is living and what is dead in the experimental culture and politics of networks.
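A toy example may make the vector-field point above concrete. Everything here is invented for illustration (the node names, the link costs): a handful of nodes, several possible routes between A and B, and a message that takes whichever route currently offers the least resistance, rerouting when a link drops out.

```python
# A toy packet-style network: links and their 'resistance' (cost).
# All names and numbers are invented for illustration.
links = {
    ("A", "B"): 4, ("A", "C"): 1, ("C", "D"): 1,
    ("D", "B"): 1, ("C", "B"): 5,
}

def neighbours(node, graph):
    """Yield (next_node, cost) for every link touching node, in either direction."""
    for (u, v), cost in graph.items():
        if u == node:
            yield v, cost
        elif v == node:
            yield u, cost

def all_routes(src, dst, graph, path=None):
    """Enumerate every simple route from src to dst, with its total cost."""
    path = path or [src]
    if src == dst:
        cost = sum(graph.get((a, b), graph.get((b, a))) for a, b in zip(path, path[1:]))
        yield path, cost
        return
    for nxt, _ in neighbours(src, graph):
        if nxt not in path:
            yield from all_routes(nxt, dst, graph, path + [nxt])

def least_resistance(src, dst, graph):
    """Pick the route with the lowest total cost out of the space of routes."""
    return min(all_routes(src, dst, graph), key=lambda route_cost: route_cost[1])

print(least_resistance("A", "B", links))  # (['A', 'C', 'D', 'B'], 3)

# A link goes down; the same message finds another line of least resistance.
cut = {link: cost for link, cost in links.items() if link != ("C", "D")}
print(least_resistance("A", "B", cut))    # (['A', 'B'], 4)
```

The point of the sketch is only that what has fixed affordances here is the space of routes, not any single line between sender and receiver.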
Terranova was also prescient in asking questions about the ‘free labor’ that was just starting to become a visible feature of network cultures at the time she was writing. She reads this through the autonomist-Marxist figure of the shift of work processes from the factory to society, or ‘the social factory.’ I sometimes wonder if this image might be a bit too limiting. A lot of free labor in the ‘nets looks more like the social office, or even like a social boudoir. Rather than the figure of the social as factory, it might be more helpful to think of a dismantling and repartitioning of all institutionalized divisions of labor under the impact of networked communication.
Still, it was useful to insist on the category of labor at a time when it was tending towards invisibility. One has to remember that ten years ago there was a lot more celebration of the ‘playful’ contributions of things like fan cultures to the net. Henry Jenkins’ repurposing of something like the Birmingham school’s insistence on popular agency would be a signal instance of this. Terranova: “The internet does not automatically turn every user into an active producer, and every worker into a creative subject.” (75) In a 1998 nettime.org post, Richard Barbrook suggested that the internet of that era had become the site for a kind of post-situationist practice of détournement, of which nettime itself might not have been a bad example. Before anybody had figured out how to really commodify the internet, it was a space for a “high tech gift economy.” Terranova thinks Barbrook put too much emphasis on the difference between this high tech gift economy and old fashioned capitalism. But perhaps it might be helpful to ask whether, at its commanding heights, this still is old fashioned capitalism, or whether the ruling class itself may not have mutated. Certainly, the internet became a vector along which the desires that were not recognizable under old-style capitalism chose to flee. Terranova: “Is the end of Marxist alienation wished for by the management gurus the same thing as the gift economy heralded by leftist discourse?” (79) Not so much. Those desires were recaptured again. I don’t know who exactly is supposed to have fallen for “naïve technological utopianism” (80) back in the 90s, apart from the extropians and their fellow travellers. In the main I think a kind of radical pragmatism of the kind advocated by Geert Lovink reigned, in practice at least. We were on the internet to do with it what we wanted, what we could, for as long as it lasted, or as long as we could make it last, before somebody shut the party down. For a long time now there’s been a tension over how to regard what the internet has done to labor. Even in the 90s, it was not uncommon to see attacks on the elitism and rabid libertarianism of hacker culture – as if there weren’t complexities and internal divisions within that culture. Such a view renders certain less glamorous kinds of new work less visible, and also shuts down thinking about other kinds of agency that new kinds of labor might give rise to. Terranova: “it matters whether these are seen as the owners of elitist cultural and economic power or the avant-garde of new configurations of labor which do not automatically guarantee elite status.” (81)
Terranova’s Network Culture provided an early introduction in the Anglophone world to the work of Maurizio Lazzarato, but I always thought that his category of immaterial labor was less than helpful. Since I agree with Terranova’s earlier dismissal of the notion of information as ‘immaterial’, I am surprised to see her reintroduce the term to refer to labor, which if anything is even more clearly embedded in material systems.
For Lazzarato and Terranova, immaterial labor refers to two aspects of labor: the rise of the information content of the commodity, and the activity that produces its affective and cultural content. Terranova: “immaterial labor involves a series of activities that are not normally recognized as ‘work’ – in other words, the kinds of activities involved in defining and fixing cultural and artistic standards, fashions, tastes, consumer norms, and more strategically, public opinion.” (82) It is the form of activity of “every productive subject within postindustrial societies.” (83) Knowledge is inherently collaborative, hence there are tensions in immaterial labor (but other kinds of labor are collaborative too). “The internet highlights the existence of networks of immaterial labor and speeds up their accretion into a collective entity.” (84) An observation that would prove to be quite prescient. Immaterial labor includes activities that fall outside the concept of abstract labor, meaning time used for the production of exchange value, or socially necessary labor time. Immaterial labor imbues the production process with desire. “Capital wants to retain control over the unfolding of these virtualities.” (84) Terranova follows those autonomist Marxists who have been interested in the mutations of labor after the classic factory form, and like them her central text is Marx’s ‘Fragment on Machines’ from the Grundrisse. The autonomists base themselves on the idea that the general intellect, or ensemble of knowledge, constitutes the center of social production, but with some modification. “They claim that Marx completely identified the general intellect (or knowledge as the principle productive force) with fixed capital (the machine) and thus neglected to account for the fact that the general intellect cannot exist independently of the concrete subjects who mediate the articulation of the machines with each other.” (87) For the autonomists, living labor is always the determining factor, here recast as a mass intellectuality. (See here for a different reading of the ‘Fragment on Machines’) The autonomists think that taking the labor point of view means to think labor as subjectivity. Living labor alone acts as a kind of vitalist essence, of vast and virtual capacities, against which capital is always a reactive and recuperative force. This is in contrast to what the labor point of view meant, for example, to Bogdanov, which is that labor’s task is not just to think its collective self-interest, but to think about how to acquire the means to manage the whole of the social and natural world, using the forms of organizing specific to it as a class.
From that point of view, it might be instructive to look to the internet for baby steps in self-organization, or at what Terranova calls free labor, and of how it was exploited in quite novel ways. “Free labor is a desire of labor immanent to late capitalism, and late capitalism is the field which both sustains free labor and exhausts it. It exhausts it by undermining the means through which that labor can sustain itself: from the burn-out syndromes of internet start-ups to under-compensation and exploitation in the cultural economy at large.” (94)
Here I think it is helpful not to just assume that this is the same ‘capitalism’ as in Marx’s era. The internet was the most public aspect of a whole modification of the forces of production, which enabled users to break with private property in information, to start creating both new code and new culture outside such constraints. But those forces of production drove not just popular strategies from below, but also enabled the formation of a new kind of ruling class from above. One based on extracting not so much surplus labor as surplus information.
It is quite scandalous how much theory-talk still retails metaphors based on 19th century worldviews. As if what we can know about the world had not undergone several revolutions since. Hence if one were to look for a #Theory21c it would have to start with people who at least engage with technical scientific languages of our times. One example of which would be Tiziana Terranova’s Network Culture (Pluto Press 2004). I looked back over the bulk of the book in a previous post. This one takes up her engagement with the theories and sciences of biological computing.
This is perhaps the most interesting part of Network Culture. Terranova extends the Deleuzian style of conceptual constructivism to scientific (and other) languages that are interested in theories and practices of soft control, emergent phenomena and bottom-up organization. Her examples range from artificial life to mobile robotics to neural networks. All of these turned out to be intimations of new kinds of productive machines. There is a certain ideological side to much of this discourse; however, “… the processes studied and replicated by biological computation are more than just a techno-ideological expression of market fundamentalism.” (100) They really were and are forms of a techno-science of rethinking life, and not least through new metaphors. No longer is the organism seen as one machine. It becomes a population of machines. “You start more humbly and modestly, at the bottom, with a multitude of interactions in a liquid and open milieu.” (101)
For example, in connectionist approaches to mind, “the brain and the mind are dissolved into the dynamics of emergence.” (102) Mind is immanent, and memories are Bergsonian events rather than stored images. These can be powerful and illuminating figures to think with.
But maybe they are still organized around what Bogdanov would call a basic metaphor that owes a bit too much to the unreflected experience of bourgeois culture. It just isn’t actually true that Silicon Valley is an “ecosystem for the development of ‘disruptive technologies’ whose growth and success can be attributed to the incessant formation of a multitude of specialized, diverse entities that feed off, support and interact with one another,” to borrow a rather breathless quote from some starry-eyed urban researchers that Terranova mentions. (103) On the contrary, Silicon Valley is a product of American military-socialism, massively pump-primed by Pentagon money. Terranova connects the language of biological computing to the Spinozist inclinations of autonomist theory: “A multitude of simple bodies in an open system is by definition acentered and leaderless.” (104) And “A multitude can always veer off somewhere unexpected under the spell of some strange attractor.” (105) But I am not sure this works as a method. Rather than treat scientific fields as distinct and complex entities, embedded in turn in ideological fields in particular ways, Terranova selects aspects of a scientific language that appear to fit with a certain metaphysics adhered to in advance.
Hence it can be quite fascinating and illuminating to look at the “diagonal and transversal dynamics” (105) of cellular automata, and admire at a distance how “a bottom-up system, in fact, seems to appear almost spontaneously….” (105) But perhaps a more critical approach might be the necessary complement. What role does infrastructure play in such systems? What role does an external energy source play? It is quite possible to make a fetish of a bunch of tiny things, such that one does not see the special conditions under which they might appear ‘self’ organizing.
As much as I revere Lucretius and the Epicureans, it seems to me to draw altogether the wrong lesson from him to say that “In this sense, the biological turn entails a rediscovery, that of the ancient clinamen.” (106) What is remarkable in Lucretius is how much he could get right by way of a basic materialist theory derived from the careful grouping and analysis of sense-impressions. One really can move from appearances, not to Plato’s eternal forms, but to a viable theory that what appears is most likely made of a small number of elements in various combinations. But here the least useful part of the Epicurean worldview is probably the famous swerve, or clinamen, which does break with too strict a determinism, but at the expense of positing a metaphysical principle that is not testable. Hence, contra Terranova, there can be no “sciences of the clinamen.” (107) This is also why I am a bit skeptical about the overuse of the term ‘emergence’, which plays something of a similar ideological role to ‘clinamen’. It becomes a too-broad term with too much room for smuggling in old baggage, such as some form of vitalism. Deleuze, in his Bergsonian moments, was certainly not free of this defect. A vague form of romantic spiritualism is smuggled in through the back door, and held to be forever out of reach of empirical study.
Still, with that caveat, I think there are still ways in which Terranova’s readings in biological computing are enabling, in opening up new fields from which – in Bogdanovite style – metaphors can be found that can be tested in other fields. But the key word there is tested. For example, when tested against what we know of the history of the military-entertainment complex, metaphors of emergence, complexity and self-organization do not really describe how this new kind of power evolved at all.
More interesting is Terranova’s use of such studies to understand how control might work. Here we find ways of thinking that actually can be adapted to explain social phenomena: “The control of acentered multitudes thus involves different levels: the production of rule tables determining the local relations between neighboring nodes; the selection of appropriate initial conditions; and the construction of aims and fitness functions that act like sieves within the liquid space, literally searching for the new and the useful.” (115) That might be a thought-image that leaves room for the deeper political-economic and military-technical aspects of how Silicon Valley, and the military entertainment complex more generally, came into being. Terranova: “Cellular automata… model with a much greater degree of accuracy the chaotic fringes of the socius – zones of utmost mobility, such as fashions, trends, stock markets, and all distributed and acentered informational milieus.” (116) Read via Bogdanov rather than Deleuze, I think what is useful here is a kind of tektology, a process of borrowing (or détournement) of figures from one field that might then be set to work in another. But what distinguishes Bogdanov from Deleuze is that for him this is a practical question, a way of experimenting across the division of labor within knowledge production. It isn’t about the production of an underlying metaphysics held to have radicalizing properties in and of itself.
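For readers who have never run one, here is a minimal elementary cellular automaton (Wolfram’s rule 110), a sketch of my own rather than anything from Terranova’s text. The levels she lists are all visible in it: a rule table for the local relations between neighbouring cells, a chosen initial condition, and an outer loop that supplies the substrate and decides how long the thing runs, which is also where the ‘special conditions’ of its apparent self-organization hide.

```python
# Elementary cellular automaton, Wolfram rule 110. Illustrative sketch only.
RULE = 110
WIDTH, STEPS = 64, 32

def rule_table(rule: int) -> dict:
    """Map each (left, self, right) neighbourhood to the cell's next state."""
    return {tuple(int(b) for b in f"{n:03b}"): (rule >> n) & 1 for n in range(8)}

def step(cells: list, table: dict) -> list:
    """Apply the local rule to every cell at once, wrapping at the edges."""
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

# The 'emergent' pattern starts from an initial condition somebody chose.
cells = [0] * WIDTH
cells[WIDTH // 2] = 1

table = rule_table(RULE)
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, table)
```

The rule table is eight entries long; everything else that makes the run possible – the machine, the energy, the code itself – sits outside the frame.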
Hence one need not subscribe to the social metaphysics of a plural, chaotic, self-differentiating ‘multitude,’ upon which ‘capital’ is parasite and fetter, and which cellular automata might be taken to describe. The desire to affirm such a metaphysics leads to blind spots as to what exactly one is looking at when one looks at cellular automata. What is the energy source? Where is the machine on which it runs? Who wrote the code that makes it seem that there is ‘emergent’ behavior?
There is a certain residual romanticism and vitalism at work here, in the figure of “the immense productivity of a multitude, its absolute capacity to deterritorialize itself and mutate.” (118) The metaphysical commitments of a Marx read through Spinoza become an interpretive key that predetermines what can be seen and not seen about the extraordinary transformations that took place in the mode of production. Terranova: “If there is an abstract social machine of soft control, it takes as its starting point the productivity of an acentered and leaderless multitude.” (123) It is remarkable how everyone, from the Spinozist left to the libertarian right, seems to have forgotten about the ‘information superhighway’ moment in the history of the internet, and wants to talk instead about its self-organizing features. But what made those features possible? Whence came the energy, the infrastructure, the legislative frame? Is there not a larger story of a rather more ‘molar’ kind about the formation of a new kind of ruling class alliance that was able to get a regulatory framework adopted that enabled a corporate take-over of all that military and scientific labor had until then been building? No wonder the right wants a ‘little people’ story to make that larger story of state and corporate power go away.
Where I am in agreement with the path Terranova is following, however, is in rejecting the social constructionism that seemed a default setting in the late twentieth century, when technical questions could never be treated as anything but second order questions derived from social practices. Deleuzian pluralist-monism had the merit at least of flattening out the terrain, putting the social and the asocial on the same plane, drawing attention to the assemblage of machines made of all sorts of things and managing flows of all kinds, both animate and inanimate.
But the danger of that approach was that it was a paradoxical way of putting theory in command again, in that it treated its metaphorical substitutions between fields as more real than the fields of knowledge from whence they came. What was real was the transversal flows of concepts, affects and percepts. The distinctive fields of knowledge production within which they arose were thus subordinated to the transversal production of flows between them. And thus theory remained king, even as it pretended to dethrone itself. At the end of the day Deleuze saved high theory from itself, and this is what remains old-fashioned about the whole enterprise. This is what is interesting to me about Bogdanov and Haraway, as they seem to me to offer approaches to the problem of negotiating between fields of knowledge production that don’t necessarily privilege the practice of creating what flows between fields over the fields themselves. Perhaps because their training was in the biological sciences they have a bit more respect for the autonomy of such fields. However, they still want to press the negative, critical question of how metaphors from commodity production might still contaminate such fields, and they do engage in a counter-production of other kinds of metaphorical tissue that might organize the space both within and between fields of knowledge otherwise.
It seems crucial in the age of the anthropocene that thought take “the biological turn.” (121) Never was it more obvious that the ‘social’ is not a distinct or coherent object of thought at all. But it might be timely to tarry with the sciences of actual biological worlds rather than virtual ones. One of the great struggles has been to simulate how this actual world works as a more or less closed totality, for that is what it is. The metaphorics of the virtual seem far from our current and most pressing concerns. The actual world is rather a thing of limits.
I would also want to be much more skeptical about the sociobiology of Richard Dawkins. I would prefer to follow Haraway in her attempt to reconstruct the line of a quite different kind of biological thinking, as she did in her first book on the biological metaphors of Crystals, Fabrics and Fields (1974). If one wanted a biological thought that could be appropriated in Deleuzian metaphors, then surely that was it. Terranova: “What Dawkins’ theory allows is the replacement of the individual by the unit or, as Deleuze named it, a ‘dividual’ resulting from a ‘cut’ within the polymorphous and yet nondeterministic mutations of a multitude.” (124) But perhaps it is rather the opposite. Dawkins’ world is still one of hypercompetitive individuals; it is just that the individual is the gene, not the individual organism. But then there always seems to be a certain slippage in the term ‘multitude’, which could describe a universe of petit-bourgeois small traders more than something like a proletariat. I see Dawkins more as Andrew Ross does, as The Chicago Gangster Theory of Life (1995). Of course Terranova is aware of this, and offers an interesting reading of the tension between competition and cooperation in Dawkins. “Selfishness closes the open space of a multitude down to a hole of subjectification.” (126) It is just that I would prefer to bracket off the Spinozist metaphysics, with its claims to describe in advance a real world of self-organizing and emergent properties.
I don’t think the alternative is necessarily a ‘deconstructive critique’. Deconstruction seems to me also to hinge on a kind of high theory. Where Deleuze foregrounds concept-production as king, deconstruction foregrounds the internal tensions of language. Both fall short of a genuine pluralism of knowledge-practices, and the struggle for a comradely and cooperative joint effort between them. The one thing that seems to me to have been pretty comprehensively rejected by everyone except those who do theory is the demand to put theory in command. I think the only thing left for us is a role that is interstitial rather than totalizing.
Still, Terranova’s reading of biological computing remains illuminating. Its function is not so much to naturalize social relations as to see the artificial side of natural relations. ‘Nature’ starts to appear as necessarily an artifact of forms of labor, techne and science, but to be more rather than less useful as a concept because of this. Contrary to Tim Morton, I think ‘nature’ is still a useful site at which to work precisely because of how over-determined the concept always is by the means via which it was produced. Terranova ends Network Culture with a rethinking of the space between media and politics, and here I find myself much more in agreement. Why did anyone imagine that the internet would somehow magically fix democracy? This seemed premised on a false understanding from the start: “Communication is not a space of reason that mediates between state and society, but is now a site of direct struggle between the state and different organizations representing the private interests of organized groups of individuals.” (134)
Of all the attempts to think ‘the political’ in the late twentieth century, the most sober was surely Jean Baudrillard’s theory of the silent majority. He had the wit and honesty to point out that the masses do not need or want a politics, and even less an intellectual class to explain politics to them. The masses prefer spectacle to reason, and their hyper-conformity is not passivity but even a kind of power. It is a refusal to be anything but inert and truculent. Hence ‘the black hole of the masses’, which absorbs everything without comment or response. Meaning and ideas lose their power there.
One way of thinking about today’s big data or what Frank Pasquale calls The Black Box Society (2014) is as a way of getting back at the refusal of the black hole of the masses to play its role. Big data is a means of stripping the masses of information without their will or consent. It exploits their silence by silently recording not what they say but what they do. Terranova accepts the force of Baudrillard’s approach but not its quietist conclusions. She still wants to think of the space of communication as a contested one. “Images are not representations, but types of bioweapons that must be developed and deployed on the basis of a knowledge of the overall information ecology.” (141) This I think is a useful metaphorical language, provided we remember that an information ‘ecology’ is not really separate from what remains of a general one. Terranova refuses all of those languages which see images as some sort of metaphysical corruption of an enlightened space of reason. The object of a media practice has to become biopolitical power, that power of inducing perceptions and organizing the imagination. While I am skeptical as to whether the term ‘biopolitical’ really adds all that much, this does indeed seem to cut through a lot of misconceptions about the thorny relation between media and politics. After all, there is no politics that is not mediated. There is no real sense in which politics could ever be an autonomous concept.

In sum: Network Culture is a book that remains a significant step forward. I am now a bit more skeptical than ten years ago about the limits of the Spinozist flavors of Marxism. They tend to want to see the monist-pluralist metaphysic as a superior image of the real, and to subordinate other knowledge production to that image. I find this less enabling now. However, Terranova used it to excellent effect in this brief, dense book, usefully framing the issues for #Theory21c where information is concerned.
It seems I got the title for my book The Spectacle of Disintegration (Verso 2013) from reading Jodi Dean. I read her book Blog Theory: Feedback and Capture in the Circuits of Drive (Polity Press, 2010) in manuscript. On re-reading it, I find this: “disintegrating spectacles allow for ever more advanced forms of monitoring and surveillance.” (39) And “Debord’s claim that, in the society of the spectacle ‘the uses of media guarantee a kind of eternity of noisy insignificance’ applies better to communicative capitalism as a disintegrated, networked, spectacular circuit.” (112)
I think I mean something similar by spectacle of disintegration to what Dean calls communicative capitalism, even though we read Debord a bit differently, but more on that later. After revisiting Tiziana Terranova’s Network Culture in a previous blog post, Dean seemed like a logical next stop in looking back through classic works in #theory21c. I am closer to Terranova than Dean on certain points, but there are things about Dean’s work I greatly admire. How can we even write books in the era of Snapchat and Twitter? Perhaps the book could be something like the tactic of slowing down the pace of work. Still, books are a problem for the era of communicative capitalism, which resists recombination into longer threads of argument. The contours of Dean’s argument are of a piece with this media strategy.
Dean offers “an avowedly political assessment of the present” rather than a technical one. (3) The political – a term which as Bottici argues was greatly expanded in scope and connotation across a half-century of political theory – becomes the language within which to critique the seeming naturalness and inevitability of the technical. But perhaps this now calls for a kind of ‘dialectical’ complement, a critical scrutiny of the expanded category of the political, perhaps even from the point of view of techne itself. We intellectuals do love the political, perhaps on the assumption that it is the same kind of discourse as our own.
If industrial capitalism exploited labor, communicative capitalism exploits communication. It is where “reflexivity captures creativity.” (4) Iterative loops of communication did not really lead to a realization of democratic ideals of access, inclusion and participation. On the contrary, it is an era of capture, of desire caught in a net and reduced to mere drive. Dean draws her concepts mostly from Slavoj Zizek. Elsewhere I have argued that his late work offers little for a twenty-first century critical agenda. But if anyone has made a case for the utility of Zizek, it is Jodi Dean. So let’s approach Zizek in an instrumental way, and see what use Dean puts him to as a tool. For both Dean and Zizek, “Ideology is what we do, even when we know better.” (5) It is not a theory of false consciousness or even of the interpellation of the subject. This approach to ideology is closer to Sloterdijk’s enlightened false consciousness: it is about the gap between thought and action rather than about thinking the ‘wrong’ thing. The key motif in Dean’s thought here is the decline in symbolic efficiency, also known as the collapse of the Big Other. These Lacanian phrases point to a growing impossibility of anchoring meaning or of totalizing it. Nobody is able to speak from a position that secures the sliding, proliferating chains of signification.
One could question this thesis on both historical and sociological grounds. Perhaps the stability of meaning is only ever secured by force. When I studied Vaneigem’s account of heresies in Excommunication, and Andrey Platonov’s account of popular speech under Stalinism in Molecular Red, these looked to me like the decline in symbolic efficiency already at work, and in both cases consistency was only secured by force.
Moving from an historical to a sociological axis, one might then look for where force is applied. In the United States that might include the red purge, the imprisonment and assassination of Black Power leaders, and the now global campaign to murder the ideological enemies of the United States via death by drone. Perhaps there’s no Master’s discourse at all without force. The same would apply on a more day-to-day scale with domestic violence and police murder. There might certainly be particular instances of the decline in symbolic efficiency, when the function of the Master signifier is suspended, when there is no outside authority to tell us what to do, what to desire, what to believe, and where the result isn’t freedom but rather a kind of suffocation. Dean gives the example of Second Life, where people are free to have their avatars do anything, and that ends up being building real estate, shopping and weird sex stuff. Tumblr might be another example, where being free from the Master signifier seems to mean putting together random collages of pictures and greeting card quotations. The Master signifier depends on virtuality. It is not just another sign in a chain of signs, but a potential for signification as such, a way to project across the gap between fantasy and the real. Interestingly, where for Paolo Virno the virtual ends up sustaining history as a theological premise, here the virtual is theology as historical premise, as that which declines, taking the possibility of desire with it from the world.
Without the Master signifier, there’s no reason to stay with anything. Bonds can be dissolved at no cost. There’s a dissolution of the link between fantasy and reality, and a foreclosure of the symbolic. It is the gaps in the symbolic that allow access to the real, but those gaps are foreclosed, resulting in non-desire, non-meaning, and in the saturation in enjoyment. We are caught in short, recursive loops that attempt to directly provide enjoyment, but which just repeat over and over again its impossibility.
This kind of recursive or reflexive loop in which the subject is trapped applies to the world of objects too in communicative capitalism. Dean mentions climate change, but the Anthropocene more generally, or what Marx called metabolic rift, might be symptoms of such loops in operation, in which positive feedback dominates, with the result that more is more. The capture of both objects and subjects just keeps deepening and expanding. Dean: “More circuits, more loops, more spoils for the first, strongest, richest, fastest, biggest.” (13)

How the hell did it come to this? Dean builds on the work of our mutual friend Fred Turner, whose From Counterculture to Cyberculture tracks the construction of what Richard Barbrook calls the Californian Ideology. How did computing and information science, which were tools of control and hierarchy, become tools of collaboration and flexibility? Here I read Fred’s book a little differently to Dean. What I see there is a kind of social and technical field that was always open to different kinds of research and different kinds of result. The wartime laboratory experience in science and engineering was strikingly collaborative, expanding and developing what JD Bernal thought of as the communist practice of real science, and what for Richard Stallman (a red diaper baby) was the commons of hacker practice. Of course, what the military wanted from such experimental practices was a toolkit for command, control, communication and information (aka C3I). But even there, flexibility and openness were always among the objectives. The Air Force’s missile program might have imagined what Paul Edwards calls a closed world of cybernetic control, but the Army wanted tools that could work in the fog and friction of war as flexible, open, adaptive networks. The technology that descended from such academic and military origins was always hybrid and multiform, adaptable in different ways to different kinds of economies, politics and culture, although certainly not infinitely so. What I find missing in Dean is the sense of a struggle over how tech and flesh were to co-adapt to each other. Let’s not forget the damage done to the conversation about the politics of technology by the cold war purge, in which not only artists and writers were blacklisted, but scientists and engineers as well.
Iris Chang’s account of the fate of Tsien Hsue-Shen in Thread of the Silkworm is only the most absurdist of such stories. This pioneer rocket scientist lost his security clearances for having social ties to people who, unbeknownst to him, were communists. And so he was deported – to communist China! There he actually became what he never was in America – a highly skilled scientist working for the ‘communist cause’. This is just the craziest of many thousands of such stories. Those who find the tech world ‘apolitical’ might inquire as to how it was made so thoroughly so.
Hence the Californian ideology is a product of particular histories, one piece of which is documented so well in Turner – but there are other histories. The belief that tech will save the world, that institutions are to be tolerated but not engaged, that rough consensus and running code are all that matter – this is not the only ideology of the tech world. That it became an unusually predominant one is not some naturally occurring phenomenon – even though Californian ideologists and Dean both tend to think it is. Rather, it is the product of particular struggles in which such an ideology got a powerful assist, first from state repression of certain alternatives, and then from corporate patronage of its more business-friendly versions. Dean writes about “geeks” (23, 25) as if they were some kind of freemasonry, pretending to be apolitical, but with quiet influence. One might usefully look here to a deeper history of the kind of power scientists and engineers have had, one not quite covered even by the ever-expanding sense of the ‘political’ now employed. The counter-literature here might include what for me is Bruno Latour’s best work: his historical study of Pasteur, and of the kind of spatially and temporally concentrating power of the laboratory. As Latour shows, Pasteur’s actual political-politics were fairly conventional and not very interesting, but the way the lab was able to become a form of power is a quite different story. Can we – why not? – even think of this as a class power, which has accrued over time its own field of heterogeneous interests, and which stands in relation to the commodity form as neither capital nor labor even if – like all other classes – it is forced into one or other of those relations? For Dean the geek, or in my terms the hacker class, is a displaced mediator, something that is pushed aside. But by what? The formal category of mediator covers over the existence of a kind of struggle that is neither purely political nor a ‘natural’ result of tech evolution. We still lack a sense of the struggles over the information vector of the late twentieth century, with their partial victories and eventual defeats.

The book is called Blog Theory, and in some ways its strength is its relation – only occasionally signaled – to Dean’s own practice as a blogger. There was a time when I read Dean’s (I Cite) blog religiously, alongside Nina Power, Mark Fisher (k-punk), Lars Iyer (Spurious) and a handful of others who really pioneered a kind of theory-writing in blog form, along with the new kinds of more (post)literary practices of Kate Zambreno and friends.
Blogging also looks like a displaced mediator, a step on the way to the mega-socialized media forms such as Facehooker, as Dean already senses. Dean: “Blogging’s settings… include the decline of symbolic efficiency, the recursive loops of universalized reflexivity, the extreme inequalities that reflexive networks produce, and the operation of displaced mediators at points of critical transition.” (29) Tumblr already existed in 2010 when Dean wrote Blog Theory, but was not quite as perfect an illustration of Dean’s conceptual framework then as it is now. Another name for all this might be the tumblresque.
Such media forms become short loops that lock the subject into repeated attempts at enjoyment, where enjoyment is no longer the lost object of desire but the object of loss itself. All drive is death drive. These reflexive, iterative loops are where we are stuck. Communicative action is not enlightenment. “… what idealists from the Enlightenment through critical and democratic theory, to contemporary techno-utopians theorize as the very form of freedom is actually a mechanism for the generation of extreme inequality and capture.” (30) This is not even, as in Hiroki Azuma, a return to a kind of human-animal. “The notion of drive counters this immanent naturalism by highlighting the inhuman at the heart of the human…” (31) The all-too-human ability to stick on minor differences and futile distractions drives the human ever further away from its own impossibility. Communicative capitalism relies on repetition, on suspending narrative, identity, and norms. Framed in those terms, the problem then is to create the possibility of breaking out of the endless short loops of drive. But if anything the tendency is in the other direction. After blogging came Facebook, Twitter, Instagram and Snapchat, driving even further into repetition. The culture industries gave way to what I call the vulture industries. Dean identified the tendency already with blogs. They no longer fill a desire for a way to communicate. Desire is a desire for a desire – that absent thing – whereas a drive is a repetition not of the desire but of the moment of failure to reach it. The virtual dimension disappears. Blogs had their counterpoint in search engines, as that which knows our desires even when we don’t. With the search engine, one trusts the algorithm; with blogs, one trusts one’s friends. Two kinds of affective response dominate in relation to both. One is hysterical: that’s not it! There must be more! The other is paranoid: someone must be stealing all the data. As it turned out, the former drove people to search and search, blog and blog, all the better for actual agencies – both state and corporate – to indeed steal it all. Here I would stress the asymmetry and struggle over information as a crucial feature of communicative capitalism – which may no longer even be a capitalism, but something worse.

The blog, for Dean, is neither a journal, nor journalism, nor a literary form. It may be something like the letter writing of a pre-modern era, which was meant to be circulated beyond the named addressee. It is a sort of technique of the self, one that installs a gaze that shapes the writer. But there’s an ambiguity as to who the writer is visible to. For Dean, this gaze is not that of the Big Other, but of that other creature of Lacan-speak, the objet petit a. In this version, there is an asymmetry: we are entrapped in a kind of visibility. I see from my point of view but am seen from all points of view. It is as if I am seen by an alien object rather than another person. I receive no messages back specific to me and my identity. Ego formation is blocked. Dean: “Blogging is a technology uncoupled from the illusion of a core, true, essential and singular self…. In communicative capitalism, the gaze to which one makes oneself visible is a point hidden in an opaque and heterogeneous network. It is not the gaze of the symbolic other of our ego ideal but the more disturbing traumatic gaze of a gap or excess, objet petit a.” (56) Hence I never quite know who I am, even though I take endless online quizzes to try to find out.
Which punk rock goddess are you? It turns out I am Kim Gordon. Funny, I thought I was Patti Smith.
The decline in symbolic efficiency is a convergence of the imaginary and the real. It is a world of imaginary identities sustained by the promise of enjoyment rather than a world of symbolic identities residing in the gap where desire desires to desire. Unanchored from the symbolic, and its impossible relation to the Big Other, I become too labile and unstable. It is a world of selves with boundary issues, over-sharing, but also troubled by any signs of the success of others, tripping circuits of envy and schadenfreude. It is not a world of law and transgression but repetition and drive. No more lost object of desire, it’s all loss itself as object. Blocked desires proliferate as partial drives making quickie loops, disappearing into the nets.
Of course there are those who would celebrate this kind of (post)subjectivity. It could have been a step towards Guattari’s planet of six billion perverts, all coupling and breaking in desiring machines of wildly proliferating sorts. Dean explores instead the way the decline in symbolic efficiency was framed by Agamben as whatever being. Dean: “whatever being points to new modes of community and new forms of personality anticipated by the dissolution of inscriptions of identity through citizenship, ethnicity, and other modern markers of belonging.” (66) For Agamben, some of this is a good thing, in the dissolution of national identities, for example. His strategy, reminiscent of Baudrillard’s fatal strategy, is to push whatever being to its limits. As every blogger knows, this media is not about reading and interpreting, but about circulating the signs. TL;DR, or “too long, didn’t read,” is the most common response. For Dean, the “whatever” in whatever being is a kind of insolence, a minimal acknowledgement that communication has taken place with no attempt to understand it. Agamben thinks there might be a way to take back the positive properties of being in language that communicative capitalism expropriates. He looks forward to a planetary refusal of identity, a kind of singularity without identity, perhaps other ways of belonging. Dean: “the beings who would so belong are not subjects in the sense that European philosophy or psychoanalysis might theorize.” (82) To which we card-carrying Deleuzians might respond: so much the worse for psychoanalysis and philosophy! Dean is disturbed by the apparent lack of antagonism of whatever being. But is it apolitical, or just a phenomenon in which differences work out differently, without dialectic? Dean: “I can locate here neither a politics I admire nor any sort of struggle at all. What could motivate whatever beings?” (83) They don’t lack anything. But maybe that’s the point. Of course, whatever being does not evade the state in the way Agamben might have hoped. The capture of metadata enables a recording that does not presuppose classification or identity. The black hole of the masses has been conquered by the algorithm. Their silence speaks volumes. Agamben thought the extreme alienation of language in spectacle could have a kind of ironic coda, where that very alienation becomes something positive, a being after identity. He actually has a positive way of thinking what for Zizek and Dean is drive. Are whatever beings really passive, or just a bit slippery? Why is passivity a bad thing anyway? Maybe there was always something a bit backward-looking about Lacan.
In The Freudian Robot, Lydia Liu reads Lacan as reacting against the information science of the postwar years. As Tiziana Terranova shows, this was a period in which questions of texts and meanings were side-stepped by new ways of analyzing information mathematically, as a field of statistical probability. Here I am closer to Terranova in thinking that it is time to rethink strategy on the terrain of information rather than that of meaning. Dean does not: “What’s lost? The ability to distinguish between contestatory and hegemonic speech. Irony. Tonality. Normativity.” (89)
But were these ever more than illusions intellectuals entertained about what was going on in communication? Here I find reading Platonov salutary, as his account of the language of early Soviet times is really more one of frequency and repetition than of a politics of ideology or propaganda. I don’t think the road to strategy necessarily always passes through critique, or through a politics of the subject as formed in the symbolic register. Perhaps the flux between the imaginary and the real is where the human resides most of the time anyway. It is not as if the symbolic has reliably been our friend.

Dean mentions Friedrich Kittler’s cunning reworking of Lacan back into media theory, but I would pause to give it a bit more weight. For Kittler, Lacan’s famous tripartite division of imaginary, symbolic and real is actually an effect of a certain moment in the development of media. It was a stage in the evolution of Haraway’s cyborg, when different technics became the mediating apparatus for different flows of sensation. For Kittler, the imaginary is the screen, the symbolic is the typewriter, and the gramophone is the real. This explains so much of the anxiety of the literate classes: the struggle of the typewriters against the screen, insisting on this or that symbolic order against the self/other fluctuations of screen-generated media, and with the grain of the voice as residual stand-in for the real beyond both. All of which, of course, media ‘convergence’ erases. We’re differently wired cyborgs now.

It is telling that Dean wants to resist the “snares” of cognitive capitalism. (95) Dean: “Every little tweet or comment, every forwarded image or petition, accrues a tiny affective nugget, a little surplus enjoyment, a smidgen of attention that attaches to it, making it stand out from the larger flow before it blends back in.” (95) It is hard not to read it in media terms as an appeal by those invested in one media cyborg apparatus to resist the one that’s replacing it. Of course the new one is part of a political economy of domination and exploitation – but so too was the old mass media apparatus. Of course there are things one can tease out of a conceptual frame that puts the emphasis on the subject’s relation to the symbolic order. But I don’t see this as a truly essential theoretical tactic. In many ways I think it more productive to follow Terranova and think about information as a ratio of signal to noise, and beyond that as a kind of dynamics into which one might attempt to intervene with information tactics. This is what the situationists called détournement. I would want to bring the concept of détournement more fully into relation to the work of both writers, as I think it is a more nuanced way of thinking the montage practices of Terranova’s network culture. For Dean, “The politics that montage suggests is a politics released from the burdens of coherence and consistency.” (104) But isn’t information politics always about frequency, about the probability of certain information appearing with certain other information, about affective states thereby generated? It is only intellectuals who really think political communication is anything else. Even economics may be not much more than this. In the vectoral age, as Boutang suggests, nobody knows the actual value of anything, so the problem is outsourced to a vast cyborg of plug and play info-filters – some human, some algorithmic.
Such an information ecology has its problems, of course. It knows the price of everything and the value of nothing. As Debord was already suggesting, the integrated spectacle integrated itself into the reality it was describing, and then ceased to know the difference between the two. We now live in the metabolic rifts produced by the wildly improbable molecular flows this produced, which in turn generates the disintegrating spectacle some call the Anthropocene.
What both Agamben and Dean miss about Debord is that the concept of spectacle was always doubled by that of détournement. This is clear in The Society of the Spectacle, where détournement gets the key last chapter (before the concluding coda). There Debord restates the case for the literary communism he and Gil Wolman first proposed in the 1950s as the avant-garde strategy for the era of spectacle. Détournement is precisely the tactic of treating all information as the commons, and refusing all private property in this domain. Contra Dean, this has nothing to do with a ‘participatory’ politics at all. It was always about the overthrow of the spectacle as a totality. Nor was Debord really contributing to the undermining of ‘expertise’. On the contrary, he dedicated his Comments to those few on both sides who he thought really had the knowledge to either defend the spectacle – or attack it. The same is the case with the book he helped Sanguinetti write about the Italian spectacle of the 70s – The Last Chance to Save Capitalism in Italy. He was well aware of the dangers of recuperation back into spectacle. It was indeed one of his major themes. Dean: “The spectacle contains and captures the possibility of the common good.” (112) But this is already the central point of his late work, which is about withdrawal rather than participation.

Of course, détournement itself became coopted. Free information became the basis of a new business model, one that extracts surplus information from free labor. But this means moving détournement on from free data to freeing metadata. This may require not just détournement and critique but actually building different kinds of circuit, even if it is just in the gaps of the current infrastructure. For the time being, one tactic is just to keep putting into circulation the conjunctions of information that generate the affect of solidarity and the commons. It’s a way of taking advantage of the lateral ‘search’ that the decline of symbolic efficiency, or at least the lack of coercive force maintaining it, affords.

There is surely a place for what Dean calls “discipline, sacrifice and delay.” (125) But Rome wasn’t unbuilt in a day, and it may take more than one kind of subject-apparatus cyborg to make a new civilization. The party presupposes a milieu. It is an effect and not a cause. Let’s build a new milieu. This civilization is over and everyone knows it. One need look no further than Andrew Ross’ account of environmental justice in Phoenix, Arizona to see the scale on which we have to imagine building another one. That is an organizational problem that calls for all sorts of different solutions to all sorts of problems. There can be no one ‘correct’ critical theory. They are all just tools for addressing parts of a manifold problem. Dean’s work seems well suited to the diagnosis of a certain subjective short-circuit and one possible solution to it. Strangely enough both Dean and Terranova have a use for the concept of the virtual, but here I would follow Debord and think more in terms of constrained situations and available resources. Both the Lacanians and the Deleuzians, otherwise so opposed, may be a little too theological for the times.
On Tiziana Terranova
What used to be the public sphere now seems like unmanageable noise. The internet is generally held to be to blame. But perhaps there never was a public sphere. Perhaps there are just different configurations of information and noise.
Contrary to certain popular narratives by latecomers, not everybody went gaga over ‘new media’ back in the late twentieth century. In the cyberculture period, from the popularization of cyberpunk in 1984 to the death of the internet as a purely scientific and military media in 1995, there were plenty of experimental, critical and constructivist minds at work on it. I would count Tiziana Terranova as a fine exponent of the constructivist approach, a sober builder of useful theory that might open spaces for new practices in the emerging world of post-broadcast and post-truth media flux. Her book Network Culture: Politics for the Information Age (Pluto Press, 2004) is still well worth reading for its keen grasp of the fundamental issues. Lyotard sent everyone off on a bum steer with the idea of the immaterial, a problem compounded by Jameson’s famous assertion that the technics of late capitalism could not be directly represented, as if the physics of heat engines was somehow clearer in people’s heads than the physics of electrical conductivity. Terranova usefully begins again with a concrete image of information as something that happens in material systems, and thinks it through the image of a space of fluid motion rather than just as an end-to-end line from sender to receiver.
In Terranova, information is not an essence but a site of struggle: “I do not believe that such information dynamics simply expresses the coming hegemony of the ‘immaterial’ over the material. On the contrary, I believe that if there is an acceleration of history and an annihilation of distances within an information milieu, it is a creative destruction, that is a productive movement that releases (rather than simply inhibits) social potentials for transformation.” (2-3)
It helps not to make a fetish of just one aspect of media form, whether one is talking about hypertext back then, or big data now. Sometimes these are aspects of more pervasive technological phyla. Terranova: “Here I take the internet to be not simply a specific medium but a kind of active implementation of a design technique able to deal with the openness of systems.” (3) This might be a useful interpretive key for thinking how certain now-dominant approaches to tech arose. Anybody who studied communication late last century would have encountered some version of the sender -> encoding -> channel -> decoding -> receiver model, with its mysterious vestigial term of ‘context’. Stuart Hall opened this loop by adding a possible difference between the encoding and the decoding. He made non-identity a function not of noise as something negative but of culture as a positive field of differences. But even so, this way of thinking tended to make a fetish of the single, unilinear act of communication. It ended up as an endless argument over whether the sender’s encoding dominated the receiver’s decoding, or if the receiver could have a counter-hegemonic power of decoding otherwise. That was what the difference between the Frankfurt school’s epigones and the Birmingham school of Hall et al boiled down to.
Terranova usefully brackets off the whole critical language of domination versus counter-hegemony, moving the discussion away from the privileged questions of meaning and representation that still dominate critical thinking in the humanities. Like Alex Galloway (et al) in Excommunication, she insists that a critical perspective need not be hermeneutic. She does so by taking seriously the breakthrough of Claude Shannon’s purely mathematical theory of information of 1948.
Information actually means three different things in Shannon. It is (1) a ratio of signal to noise, (2) a statistical measure of uncertainty, and (3) a non-deterministic theory of causation. He developed his theory in close contact with engineers working on communication problems in telephony at Bell Labs, one of the key sites where our twenty-first century world was made. The signal-to-noise problem arose out of attempts to amplify telephony signals for long distance calls, where the additional energy used to amplify the signals also shows up as noise. This, incidentally, is where one sees how the experience of information as ‘immaterial’ is actually an effect produced by decades of difficult engineering. It takes energy to make a signal pass through a copper wire, making the electrons dance in their predictable but non-deterministic way.
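To make the first two of these senses concrete, here is a minimal sketch of the standard formulas, my gloss rather than anything quoted from Terranova: Shannon's entropy measures the uncertainty of a source, and the Shannon-Hartley theorem ties the capacity of a noisy channel to its bandwidth and its signal-to-noise ratio.

```latex
% Shannon entropy: information as a statistical measure of uncertainty,
% for a source emitting symbols x_i with probabilities p(x_i):
H(X) = -\sum_i p(x_i) \log_2 p(x_i) \quad \text{(bits per symbol)}

% Shannon-Hartley theorem: capacity C of a channel of bandwidth B (in hertz)
% with signal-to-noise ratio S/N:
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{(bits per second)}
```

The second formula is the engineering face of the point above: every bit carried over the copper wire is purchased against noise, with bandwidth and energy.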
What was crucial about Shannon’s approach to this problem was to separate out the concept of information from having anything to do with ‘meaning’. Information is just a ratio of novelty and redundancy. “From an informational perspective, communication is neither a rational argument nor an antagonistic experience…” (15) It has nothing to do with communication as domination or resistance. It has nothing to do either with Habermasian communicative action or with Lyotard’s language games.
For information to be transmitted at all, it has to confront the demon of noise. In Michel Serres’ version, sender and receiver appear as nodes cooperating against noise rather than as differentiated individual entities. Terranova rather follows Gilbert Simondon, who pointed out that in Shannon, the individual sender and receiver are pre-constituted. They just appear, prior to the act of communication. Simondon’s approach picks up the vestigial concept of context. For him, the act of communication is also what constitutes the sender and receiver as such. His approach is to think the context as the site where information produces individuations out of a collective, undifferentiated context. This is a step toward thinking the space of information as a more turbulent, metastable system that can be disturbed by very small events: “the informational dimension of communication seems to imply an unfolding process of material constitution that neither the liberal ethics of journalism nor the cynicism of public relations officers really address.” (19) The materiality of information is prior to any discussion of ‘real’ reporting or ‘fake’ news or the sender-receiver nodes such flows constitute.
Terranova’s work points toward a critical and radical information theory (CRIT), to thinking about information production and protocols, rather than to second-order questions of meaning. It could be a way of framing a problem of information system design for the whole field of media and culture. Information design could be about more than messages defeating noise: it could be about designing fields of possibility beyond click-counting, including problems in the organization of perception and the construction of bodily habits.
Information systems tend to be closed systems, defined by the relation between selection (the actual) and the field of possibilities (the virtual), but where that field appears in an impoverished form. “Information thus operates as a form of probabilistic containment and resolution of the instability, uncertainty and virtuality of a process.” (24) For Terranova, what might be a constructive project in the wake of that is some kind of information culture that does not enforce a cut in advance in the fabric of the world, and then reduce its manipulation to a set of predictable and calculable alternatives. Interestingly, Terranova’s approach is less about information as a way of producing copies, and more about the reduction of events to probabilities, thus sidestepping the language of simulation, although perhaps also neglecting somewhat the question of how information challenged old regimes of private property. Her emphasis is much more on information as a form of control for managing and reproducing closed systems. This then appears as a closure of the horizon of radical transformation. As in Randy Martin, instead of a livable future we have futures markets. In information systems, the real only ever emerges out of the statistically probable. “What lies beyond the possible and the real is thus the openness of the virtual, of the invention and the fluctuation, of what cannot be planned or even thought in advance, of what has no real permanence but only reverberations… The cultural politics of information involves a stab at the fabric of possibility.” (27) The virtual does not arise out of negation, out of a confrontation with techno-power as an-other. It is unquantifiable. It is what an information system does not know about itself.
One of the more powerful features of the theory of information is the way it linked together information and entropy. Thermodynamics, which as Amy Wendling shows was a key to the scientific worldview in Marx’s era, offered the breakthrough of an irreversible concept of time, one which appeared as a powerful metaphor for the era of the combustion engine. In short: heat leaks, energy dissipates. Any system based on a heat differential eventually ‘runs out of steam.’
Hence the figure of Maxwell’s Demon, which could magically sort the hot particles out from the cool ones, and prevent an energy system from entropic decline into disorder. But that, in a sense, is exactly what information systems do. The tendency of things might still be entropic: systems dissipate and break down. But there might still be neg-entropic counter-systems that can sort and order and organize. Such might be an information system. Such might also be, as Joseph Needham among many others started to think, what is distinctive about living systems. Needham’s organicism borrowed from the systems theory of Bertalanffy, which pre-dates Shannon and was based a lot more on analog thinking, particularly the powerful image of the organizing field. Much more influential was the transposition of the thought-image of the digital to the question of how life is organized as a neg-entropic system, resulting in what for Haraway in Modest_Witness (1997) is a kind of code fetishism.

What is appealing to Terranova in the confluence of biological and information thinking is the way it bypassed the humanistic subject, and thought instead toward populations at macro and micro scales. Where Terranova and Haraway intersect is in the project of understanding how scientific knowledge is both real knowledge and shot through with ideological residues at the same time: “An engagement with the technical and scientific genealogy of a concept such as information… can be actively critical without dis-acknowledging its power to give expression and visibility to social and physical processes… Information is neither simply a physical domain nor a social construction, nor the content of a communication act, nor an immaterial entity set to take over the real, but a specific reorientation of forms of power and modes of resistance.” (37) While I would want to pause over the word ‘resistance’, this seems to me a usefully nuanced approach. One way she does so is by appealing to Bergson’s distinction between a quantified and a qualified sense of time, where time as quality, as duration, retains primacy, offering the promise of a “virtuality of duration.” (51) But is this not yet another offshoot of romanticism? And what if it was really quite the other way around? What if the figure of time as quality actually depended on measurable, quantitative time? I’m thinking here of Peter Galison’s demonstration of how the engineering feat of electrically synchronized time, so useful to the railways, enabled Einstein to question the metaphysics of a universal clock time that was the backdrop to Newton’s mechanics. As Galison shows, it is only after you can actually distribute a measure of clock time pretty accurately between distant locations that you can even think about how time might be relative to mass and motion.
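Two standard results are worth recalling here to pin down the link between the demon and information, my gloss rather than anything in Terranova's text: Boltzmann's statistical entropy has the same logarithmic form as Shannon's measure of uncertainty, and Landauer's principle is the usual exorcism of the demon, since the demon must eventually erase its record of which particles it has sorted, and erasure has a minimum thermodynamic cost.

```latex
% Boltzmann's statistical entropy, for W equally probable microstates,
% with k_B the Boltzmann constant:
S = k_B \ln W

% Landauer's principle: erasing one bit of information at temperature T
% dissipates at least this much energy as heat:
E_{\min} = k_B T \ln 2
```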
It is certainly useful that Terranova offers a language within which to think a more elastic relation between the information in a network and the topology of that network itself. It isn’t always the case, as with Shannon’s sender and receiver, that the nodes are fixed and pre-constituted. “A piece of information spreading throughout the open space of the network is not only a vector in search of a target, it is also a potential transformation of the space crossed that always leaves something behind.” (51)
This more elastic space, incidentally, is how I had proposed thinking the category of vector in Virtual Geography (1995). In geometry, a vector is a line of fixed length but of no fixed position. Thus one could think it as a channel that has certain affordances, but which could actually be deployed not only to connect different nodes, but sometimes even to call those nodes into being. Hence I thought of the vector as part of a vector-field, which might have a certain malleable geometry, where what might matter is not some elusive ‘virtual’ dimension, but the tactics and experiments of finding what it actually affords.

Terranova stresses the way the internet became a more open system, with distributed command functions. It was in this sense not quite the same as the attempts to build closed systems of an earlier generation of communication engineers: “resilience needs decentralization; decentralization brings localization and autonomy; localization and autonomy produce differentiation and divergence.” (57) The network, like empire, is tolerant of differences, and inclusive (up to a point), but also expansionist. As Terranova notes, rather presciently, “There is nothing to stop every object from being given an internet address that makes it locatable in electronic space.” (62) Since 1995, the internet has been acquiring the properties of a fully-realized vector-field, one striated into distinct organizational levels, what Benjamin Bratton calls The Stack – a useful counter-image to the network, drawing attention to planetary computation’s geopolitical and infrastructural qualities. Terranova was a pioneer in understanding that the build-out of this infrastructure, of which information theory was the concept, had significant implications for rethinking the work of culture and politics. “There is no cultural experimentation with aesthetic forms or political organization, no building of alliances or elaboration of tactics that does not have to confront the turbulence of electronic space. The politics of network culture are thus not only about competing viewpoints, anarchistic self-regulation and barriers to access, but also about the pragmatic production of viable topological formations able to persist within an open and fluid milieu.” (68) She notes in passing some of the experiments of the late twentieth century in “network hydrodynamics” (69) such as the Communitree BBS, Andreas Broeckmann’s Syndicate list-serv, Amsterdam’s Digital City, and the rhizome list-serv, to which I would add the latter’s sister-list nettime.org. Some of these fell apart, even if many others lived and mutated. Much of the functionality of today’s social media derives from these early experiments.
Terranova was also prescient in asking questions about the ‘free labor’ that was just starting to become a visible feature of stack-life at the time she was writing. She reads this through the Autonomist-Marxist figure of the shift of work processes from the factory to society, or ‘the social factory.’ I sometimes wonder if this image might be a bit too limiting. It might be more helpful to think of a dismantling and repartitioning of all institutionalized divisions of labor under the impact of networked communication, more a social boudoir than social factory.
Still, it was useful to insist on the category of labor, at a time when it was tending towards invisibility. One has to remember that in cyberculture times there was a lot more celebration of the ‘playful’ fan cultures of the net. Henry Jenkins’ repurposing of something like the Birmingham school’s insistence on popular decoding and recoding agency would be a signal instance of this. Terranova: “The internet does not automatically turn every user into an active producer, and every worker into a creative subject.” (75) It also makes plenty of alt-right trolls.

In a 1998 nettime.org post, Richard Barbrook suggested that the cyberculture-era internet had become the site for a kind of post-situationist practice of détournement, of which nettime.org itself might not have been a bad example. Before anybody had figured out how to really commodify the internet, it was a space for a “high tech gift economy.” Terranova thinks Barbrook put too much emphasis on the difference between this high tech gift economy and old fashioned capitalism. But perhaps it might be helpful to ask whether, at its commanding heights, this still is old fashioned capitalism, or whether the ruling class itself may not have mutated, and draws its power now from informatics control, based in part on capturing the value of information gifted by various forms of non-labor. Certainly, the internet became a vector along which the desires that were not recognizable under old-style capitalism chose to flee. Terranova: “Is the end of Marxist alienation wished for by the management gurus the same thing as the gift economy heralded by leftist discourse?” (79) Not so much. Those desires were recaptured again. I don’t know who exactly is supposed to have fallen for “naïve technological utopianism” (80) back in the nineties – apart from the Accelerationists, and even there, Black Accelerationism was a quite canny negotiation between the cramped spaces of both the political and the technical. In the main I think a kind of radical pragmatism of the kind advocated by Geert Lovink prevailed. We were on the internet to do with it what we wanted, for as long as we could make it last, before somebody shut the party down.
I’m not sure that producers of difference in information are quite the same thing as producers of sameness in material objects. Perhaps the worker and the hacker belong to different classes. I think the hacker class is composed of all those whose creations of difference can be captured as intellectual property and commodified. It’s a class with no necessary common culture at all, other than what it might make in struggling against the appropriation of its time. But it is not the case that the hacker prefigures new kinds of labor. Rather, both the hacker and worker experience a bifurcation into a secure well-paid elite and a casualized and hyper-exploited – and now global – mass.
For me this is a perspective from which to attain some critical distance on attempts to expand the category of labor to the point where, to me, it stops making much sense. Terranova’s Network Culture provided an early introduction in the Anglophone world to the work of Maurizio Lazzarato, but I always thought that his category of immaterial labor was less than helpful. Since I agree with Terranova’s earlier dismissal of the notion of information as immaterial, I am surprised to see her reintroduce the term to refer to labor, which is if anything becoming ever more embedded in the material systems of the stack. For Lazzarato and Terranova, immaterial labor refers to two aspects of labor: the rise of the informational content of the commodity, and the activity that produces its affective and cultural content. Terranova: “immaterial labor involves a series of activities that are not normally recognized as ‘work’ – in other words, the kinds of activities involved in defining and fixing cultural and artistic standards, fashions, tastes, consumer norms, and more strategically, public opinion.” (82) It is the form of activity of “every productive subject within postindustrial societies.” (83) Knowledge is inherently collaborative, hence there are tensions in immaterial labor (but other kinds of labor are collaborative too). “The internet highlights the existence of networks of immaterial labor and speeds up their accretion into a collective entity.” (84) An observation that would prove to be quite prescient. Immaterial labor includes activities that fall outside the concept of abstract labor, meaning time used for the production of exchange value, or socially necessary labor time. Immaterial labor imbues the production process with desire. “Capital wants to retain control over the unfolding of these virtualities.” (84) But at that point one has to wonder if the terms in play here are still capital and labor, or if exploitation might not have new territories and new forms.

Terranova follows those autonomist Marxists who have been interested in the mutations of labor after the classic factory form, and like them her central text is Marx’s ‘Fragment on Machines’ from the Grundrisse. The autonomists base themselves on the idea that the general intellect, or ensemble of knowledge, constitutes the center of social production, but with some modification. “They claim that Marx completely identified the general intellect (or knowledge as the principle productive force) with fixed capital (the machine) and thus neglected to account for the fact that the general intellect cannot exist independently of the concrete subjects who mediate the articulation of the machines with each other.” (87) For the autonomists (Bifo, for example), living labor is always the determining factor, here recast as a mass intellectuality. The autonomists think that taking the labor point of view means to think labor as subjectivity. Living labor alone acts as a kind of vitalist essence, of vast and virtual capacities, against which capital is always a reactive and recuperative force. This is in contrast to what the labor point of view meant, for example, to Bogdanov, which is that labor’s task is not just to think its collective self-interest, but to think about how to acquire the means to manage the totality of the social and natural world, using the forms of organizing specific to it as a class.
From that point of view, it might be instructive to look, as Angela McRobbie does, for baby steps toward self-organization in what Terranova calls free labor, and at how it was recuperated in quite novel ways. “Free labor is a desire of labor immanent to late capitalism, and late capitalism is the field which both sustains free labor and exhausts it. It exhausts it by undermining the means through which that labor can sustain itself: from the burn-out syndromes of internet start-ups to under-compensation and exploitation in the cultural economy at large.” (94)
Let’s not just assume that this is a ‘late’ iteration of the same ‘capitalism’ as in Marx’s era. The internet was the most public aspect of a whole modification of the forces of production, which enabled users to break with private property in information, to start creating both new code and new culture outside such constraints. I think those forces of production drove not just popular cyberculture strategies from below, but also enabled the formation of a new kind of ruling class from above. One based on extracting not so much surplus labor as surplus information: extracted as content from both labor and non-labor; extracted as form from the hacker class – creator of new forms. I call this new ruling class the vectoralist class – owner and controller not of the means of production but of the vector of information, its stocks and flows.

The most interesting part of Network Culture is where Terranova extends the Deleuzian style of conceptual constructivism to scientific (and other) languages that are interested in theories and practices of soft control, emergent phenomena and bottom-up organization. Her examples range from artificial life to mobile robotics to neural networks. All of these turned out to be intimations of new kinds of productive machines. There is a certain ideological side to much of this discourse, and yet “… the processes studied and replicated by biological computation are more than just a techno-ideological expression of market fundamentalism.” (100) They really were and are forms of a techno-science of rethinking life, and not least through new metaphors. No longer is the organism seen as one machine. It becomes a population of machines. “You start more humbly and modestly, at the bottom, with a multitude of interactions in a liquid and open milieu.” (101) For example, in connectionist approaches to mind, “the brain and the mind are dissolved into the dynamics of emergence.” (102) Mind is immanent, and memories are Bergsonian events rather than stored images. These can be powerful and illuminating figures to think with. But maybe they are still organized around what Bogdanov would call a basic metaphor that owes a bit too much to the unreflected experience of bourgeois culture.

It just isn’t actually true that Silicon Valley is an “ecosystem for the development of ‘disruptive technologies’ whose growth and success can be attributed to the incessant formation of a multitude of specialized, diverse entities that feed off, support and interact with one another,” to borrow a rather breathless quote from some starry-eyed urban researchers that Terranova mentions. (103) On the contrary, Silicon Valley is a product of American military-socialism, massively pump-primed by Pentagon money. Terranova connects the language of biological computing to the Spinozist inclinations of autonomist theory: “A multitude of simple bodies in an open system is by definition acentered and leaderless.” (104) And “A multitude can always veer off somewhere unexpected under the spell of some strange attractor.” (105) But I am not sure this works as a method. Rather than treat scientific fields as distinct and complex entities, embedded in turn in ideological fields in particular ways, Terranova selects aspects of a scientific language that appear to fit with a certain metaphysics adhered to in advance.
It can be quite fascinating and illuminating to look at the “diagonal and transversal dynamics” (105) of cellular automata, and admire at a distance how “a bottom-up system, in fact, seems to appear almost spontaneously….” (105) But perhaps a more critical and radical information theory approach might be the necessary complement. What role does stack infrastructure play in such systems? What role does an external energy source play? It is quite possible to make a fetish of a bunch of tiny things, such that one does not see the special conditions under which they might appear ‘self’-organizing.
As much as I revere Lucretius and the Epicureans, it seems to me to draw altogether the wrong lesson from him to say that “In this sense, the biological turn entails a rediscovery, that of the ancient clinamen.” (106) What is remarkable in Lucretius is how much he could get right by way of a basic materialist theory derived from the careful grouping and analysis of sense-impressions. One really can move from appearances, not to Plato’s eternal forms, but to a viable theory that what appears is most likely made of a small number of elements in various combinations. But here the least useful part of the Epicurean worldview is probably the famous swerve, or clinamen, which does break with too strict a determinism, but at the expense of positing a metaphysical principle that is not testable. Hence, contra Terranova, there can be no “sciences of the clinamen.” (107)
This is also why I am a bit skeptical about the overuse of the term emergence, which plays something of a similar ideological role to clinamen. It becomes a too-broad term with too much room for smuggling in old baggage, such as some form of vitalism. Deleuze, in his Bergsonian moments, was certainly not free of this defect. A vague form of romantic spiritualism is smuggled in through the back door, and held to be forever out of reach of empirical study. Still, with that caveat, I think there are ways in which Terranova’s readings in biological computing are enabling, in opening up new fields from which – in Bogdanovite style – metaphors can be found that can be tested in other fields. But the key word there is tested. For example, when tested against what we know of the history of the military entertainment complex, metaphors of emergence, complexity and self-organization do not really describe how this new kind of power evolved at all. More interesting is Terranova’s use of such studies to understand Galloway’s great early theme: how control might work now. Here we find ways of thinking that actually can be adapted to explain social phenomena: “The control of acentered multitudes thus involves different levels: the production of rule tables determining the local relations between neighboring nodes; the selection of appropriate initial conditions; and the construction of aims and fitness functions that act like sieves within the liquid space, literally searching for the new and the useful.” (115) That might be a thought-image that leaves room for the deeper political-economic and military-technical aspects of how Silicon Valley, and the military entertainment complex more generally, came into being.
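To make those three levels of ‘soft control’ concrete, here is a minimal sketch in Python – my own illustration under stated assumptions, not a model drawn from Terranova or Galloway. The rule table fixes the local relations between neighboring cells, the initial condition is selected in advance, and a deliberately arbitrary fitness function acts as the sieve over outcomes; every parameter here (the 64-cell lattice, the density target, the number of candidate rules) is hypothetical.

```python
# Illustrative sketch only: a one-dimensional binary cellular automaton,
# showing the three levels of control named in the quotation above.
import random

def step(cells, rule_table):
    """Apply the local rule to every cell, using its two neighbors (wrapping at the edges)."""
    n = len(cells)
    return [rule_table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def run(rule_table, initial, steps=50):
    """Run the automaton forward; any global pattern 'emerges' from purely local rules."""
    cells = list(initial)
    for _ in range(steps):
        cells = step(cells, rule_table)
    return cells

def fitness(final_cells):
    """An arbitrary 'sieve': reward final configurations whose density is close to one half."""
    density = sum(final_cells) / len(final_cells)
    return -abs(density - 0.5)

def random_rule_table():
    """A rule table maps each three-cell neighborhood to the next state of the center cell."""
    states = (0, 1)
    return {(a, b, c): random.choice(states)
            for a in states for b in states for c in states}

if __name__ == "__main__":
    random.seed(0)
    # Level two of control: the selected initial condition.
    initial = [random.choice((0, 1)) for _ in range(64)]
    # Level one: candidate rule tables governing local neighbor relations.
    candidates = [random_rule_table() for _ in range(20)]
    # Level three: the fitness function sieves the outcomes for 'the new and the useful'.
    best = max(candidates, key=lambda table: fitness(run(table, initial)))
    print("best rule table:", best)
```

The point of the sketch is only that nothing in it is centrally commanded: ‘control’ consists entirely of choosing rule tables, setting initial conditions and sieving results, which is the thought-image the quotation offers.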
Terranova: “Cellular automata… model with a much greater degree of accuracy the chaotic fringes of the socius – zones of utmost mobility, such as fashions, trends, stock markets, and all distributed and acentered informational milieus.” (116) Read via Bogdanov rather than Deleuze, I think what is useful here is a kind of tektology, a process of borrowing (or détournement) of figures from one field that might then be set to work in another. But what distinguishes Bogdanov from Deleuze is that for him this is a practical question, a way of experimenting across the division of labor within knowledge production. It isn’t about the production of an underlying metaphysics held to have radicalizing properties in and of itself. Hence one need not subscribe to the social metaphysics of a plural, chaotic, self-differentiating ‘multitude,’ upon which ‘capital’ is parasite and fetter, and which cellular automata might be taken to describe. The desire to affirm such a metaphysics leads to blind spots as to what exactly one is looking at when one looks at cellular automata.
There is a certain residual romanticism and vitalism at work here, in the figure of “the immense productivity of a multitude, its absolute capacity to deterritorialize itself and mutate.” (118) The metaphysical commitments of a Marx read through Spinoza become an interpretive key that predetermines what can be seen and not seen about the extraordinary transformations that took place in the mode of production. Where I am in agreement with the path Terranova is following here is in rejecting the social constructionism that seemed a default setting in the late twentieth century, when technical questions could never be treated as anything but second-order questions derived from social practices. Deleuzian pluralist-monism had the merit at least of flattening out the terrain, putting the social and the asocial on the same plane, drawing attention to the assemblage of machines made of all sorts of things and managing flows of all kinds, both animate and inanimate. But the danger of that approach was that it was a paradoxical way of putting theory in command again, in that it treated its metaphorical substitutions between fields as more real than the fields of knowledge from whence they came. What were treated as real were the transversal flows of concepts, affects and percepts. The distinctive fields of knowledge production within which they arose were thus subordinated to the transversal production of flows between them. And thus theory remained king, even as it pretended to dethrone itself. It seems crucial in the age of the Anthropocene that thought take “the biological turn.” (121) Never was it more obvious that the ‘social’ is not a distinct or coherent object of thought at all. One of the great struggles has been to simulate how this actual world works as a more or less closed totality, for that is what it is. The metaphorics of the virtual seem far from our current and most pressing concerns. The actual world is rather a thing of limits.
Terranova ends Network Culture with a rethinking of the space between media and politics, and here I find myself much more in agreement. Why did anyone imagine that the internet would somehow magically fix democracy? This seemed premised on a false understanding from the start: “Communication is not a space of reason that mediates between state and society, but is now a site of direct struggle between the state and different organizations representing the private interests of organized groups of individuals.” (134)
Of all the attempts to think ‘the political’ in the late twentieth century, the most sober was surely Jean Baudrillard’s theory of the silent majority. He had the wit and honesty to point out that the masses do not need or want a politics, and even less an intellectual class to explain politics to them. The masses prefer spectacle to reason, and their hyper-conformity is not passivity but even a kind of power. It is a refusal to be anything but inert and truculent. Hence the black hole of the masses, which absorbs everything without comment or response. Meaning and ideas lose their power there. As even liberal pundits found out in Trump’s America – proof, of the most abject kind, of the need for a Terranovian critical and radical information theory (CRIT).
On Jussi Parikka
Once you start digging beyond the idea that media is about interpreting signs, there’s no end to how deep the rabbit hole can become. Behind the system of signs is the interface that formalizes them (Manovich) or simulates them (Galloway). Behind that is the information turbulence the interface manages (Terranova), the hardware it runs on (Chun) and the stack of levels that processes it (Bratton). All of which incorporates the labor that operates it (Berardi) or is enslaved by it (Lazzarato) and which is incorporated within integrated circuits (Haraway). The class of workers who make the content might be doubled by a class of hackers who make the form (Wark). The rabbit hole keeps going, becoming more of a mineshaft. For some the chemical and mineral dimension is also a big part of what appears when one looks behind the sign (Negarestani, Leslie, Kahn), which brings us to Jussi Parikka’s A Geology of Media (U. Minnesota Press, 2015). Which tunnels down into the bowels of the earth itself. Parikka: “Geology of media deals with the weird intersections of earth materials and entangled times.” (137) In this perspective, “Computers are a crystallization of past two hundred to three hundred years of scientific and technological development, geological insights, and geophysical affordances.” (137) But one could also reverse this perspective. From the point of view of the rocks themselves, computers are a working out of the potentials of a vast array of elements and compounds that took billions of years to make but only decades to mine and commodify – and discard. History is a process in which collective human labor transforms nature into a second nature to inhabit. On top of which it then builds what I call a third nature made of information, which not only reshapes the social world of second nature, but which instrumentalizes and transforms what it perceives as a primary nature in the process. There’s no information to circulate without a physics and a chemistry. “The microchipped world burns in intensity like millions of tiny suns.” (138) Perhaps the best way into this perspective is to go through some of the materials it takes to make information something that can appear as if it were for us. Let’s take a periodic table approach to media possibilities. Coltan is a famous example – I wrote about this important insulating material in Gamer Theory. A lot of it comes from the Congo. It’s an ore containing the elements niobium and tantalum, which along with antimony are used in making micro-capacitors. Then there’s lithium, used in the batteries of phones, laptops and hybrid cars, major deposits of which are in Afghanistan. Cobalt is also used in making batteries. Platinum is for hard drives, liquid crystal displays and hydrogen fuel cells. Gallium and indium for thin-layer photovoltaics. Neodymium is used for lasers; germanium for fiber-optic cable. Palladium for water desalination. Aluminum, tantalum, tungsten, thorium, cerium, manganese and chromium were all part of 20th century industrial culture, but now have extended uses. Media materiality is still also very metallic: 36% of tin, 25% of cobalt, 15% of palladium, 15% of silver, 9% of gold, 2% of copper, 1% of aluminum are for media tech uses. (34) There can be sixty different elements on a computer chip. There’s a whole place and a whole industry named after an element: Silicon Valley. Very pure silicon is used to make semiconductors.
We’re used to thinking about a geopolitics of oil, but perhaps there’s a more elaborate Great Game going on these days based on access to these sometimes rare elements. Reza Negarestani’s Cyclonopedia is an extraordinary text which reverses the perspective, and imagines oil as a kind of sentient, subterranean agent of history. One could expand that imaginary to other elements and compounds. For instance, one could imagine aluminum as an agent in the story of Italian Fascism. Since bauxite was common in Italy but iron was rare, aluminum rather than steel became a kind of ‘national metal’, with both practical and lyrical properties. The futurist poet Marinetti even published a book on aluminum pages. What aluminum was to twentieth century struggles over second nature, maybe lithium will be to twenty-first century struggles over third nature. It might make sense, then, to connect the study of media to a speculative inquiry into geology, the leading discipline of planetary inquiry. (A connection I approached in a different way in Molecular Red, by looking at climate science). Parikka: “Geology becomes a way to investigate the materiality of the technological media world.” (4) James Hutton’s Theory of the Earth (1778) proposed an image of the temporality of the earth as one of cycles and variations, erosion and deposition. Hutton also proposed an earth driven by subterranean heat. His earth is an engine, modeled on the steam engines of his time. It’s a useful image in that it sees the world outside of historical time. But rather than having its own temporality, Hutton saw it as oscillating around the constants of universal laws. This metaphysic inspired Adam Smith. Hence while usefully different and deeper than historical time, Hutton’s geology is still a product of the labor and social organization of its era. Still, thinking from the point of view of the earth and of geological time is a useful way of getting some distance on the seemingly fleeting temporalities of Silicon Valley and the surface effects of information in the mediated sphere of third nature. It also cuts across obsolete assumptions of a separate sphere of the social outside of the natural. “The modern project of ruling over nature understood as resource was based on a division of the two – the Social and the Natural – but it always leaked.” (x) One could rather see a first, second and third nature as equally material in the deepest sense. Parikka’s project includes a bringing together of media materialism and historical materialism: “media structure how things are in the world and how things are known in the world.” (1) I was after something similar in Molecular Red in turning to Karen Barad’s agential realism. It’s a project in which materialism is extended towards materiality in the geological, chemical and physical sense, without entirely losing sight of the category of labor. Parikka’s approach grows rather out of the work of Friedrich Kittler, “the Goethe scholar turned synth-geek and tinkerer.” (2) For Kittler, ‘man’ is an after-image of media-technology. One could think of this as a much-needed update on Foucault’s anti-humanism, which (like Preciado) at least drags it into the twentieth century. Where Foucault looked to architectural forms, such as the prison or clinic, the structuring of visibility, the administrative ordering of bodies, Kittler takes the next step and examines media as more contemporary practices that form the human.
Parikka: “Media work on the level of circuits, hardware, and voltage differences, which the engineers as much as the military intelligence and secret agencies gradually recognized before the humanities did.” (3) Like Douglas Kahn, Parikka wants to extend this work further in the direction of what for us vulgar Marxists would constitute its base. He finds a useful ally in earth artist Robert Smithson, whose “abstract geology” paid close attention to the materiality of art practice. Smithson was an anti-McLuhan, in that he saw media not as extensions of man, but as extensions of the earth. But besides the intriguing spatial substitution, bringing the depths of geology into view, Parikka is also interested in changing temporal perspectives. German media theorist Wolfgang Ernst has written of media as a temporal machine, paying close attention to the shift from narrative to calculative memory. Also of interest is Siegfried Zielinski’s project of a media studies of deep time. Zielinski was trying to escape the teleological approach to media, where the present appears as a progressive development and realization of past potentials. He explores instead the twists and cul-de-sacs of the media archive. Parikka takes this temporal figure and vastly expands it toward non-human times, past and present. Parikka proposes a double-sided relation of media to earth. On the one hand, “the geophysical that becomes registered through the ordering of media reality. And conversely, it is the earth that provides for media and enables it.” (13) This double articulation might have its problems, as we shall see later. The goal is to think a medianature as one continuum, rather like Haraway’s naturecultures. “Medianatures… is a concept that crystallizes the ‘double-bind’ of media and nature as co-constituting spheres.” (14) The third nature of information flows does not run on silicon alone. It also runs on fossil fuels. The Anthropocene, which Parikka recodes as the Anthrobscene, is “a systematic relation to the carboniferous.” (17) As Joseph Needham always pointed out, China beat the west to most technological discoveries, including coal mining. It was going on during the Song Dynasty (960-1279). For Jason Moore, we might as well call the present epoch the Capitalocene, given the intimate connection between the historic rise of capitalism as a mode of production and the exploitation of resources on a short-term basis, including the millennia’s worth of past photosynthetic activity locked away in layers of coal seams. “capitalism had its necessary (but not sufficient) conditions in a new relation with deep times and chemical processes of photosynthesis.” (18) Perhaps we could make the nonhuman elements’ contribution to historical materialism more visible. This might go beyond the rather speculative geology of morals in Deleuze and Guattari. Like Jane Bennett, Parikka is interested in their celebration of the craft of metallurgy, a kind of experimental labor that wants to explore what a material can do. But it might not be the case that this is neatly separable from science. As one learns in JD Bernal’s Science in History, science and craft, which is to say science and social labor, are always intimately connected. Still, there is something refreshing about an approach which does not build from Deleuze’s ‘Postscript on Control Societies’, which was after all only an occasional piece, but from Anti-Oedipus instead.
In this perspective, “media history conflates with earth history; the geological materials of metals and chemicals get deterritorialized from their strata and reterritorialized in machines that define our technical media culture.” (35) But I am wary of extending the category of ‘life’ to the non-organic, as there is a danger of merely porting an unexamined vitalism into new fields where it will function yet again as an unexamined first principle. Here we might learn more from natural scientists trying to reach into the humanities than from philosophers trying to reach into the natural sciences. Parikka usefully draws on Stephen Jay Gould’s model of evolutionary time as a punctuated equilibrium, as a succession of more or less stable states in variation alternating with moments of more rapid change. There’s no sense of progress in this version of deep time, no necessary evolution from lower to higher, from simple to complex. One can then approach the earth as an archive of different temporal blocks, each with its own rate and variability of change. “What we encounter are variations that define an alternative deep time strata of our media culture… It offers an anarcheology of surprises and differences.” (42) Starting with Hutton’s heat-engine earth, but seeing it as passing through shifts as well as cycles, and not necessarily on a teleological path anywhere, a vast spatial and temporal panorama opens up, within which media can operate as both very brief but also surprisingly long temporalities. A YouTube video may be fleetingly short when put against the temporality of the earth, but the afterlife of the device that played it may turn out to be moderately long. Marx saw the machinery into which living labor was accumulated as dead labor, but perhaps it makes more sense to think of it as undead labor, for our machines, including media machines, may outlive us all in fossil form. “The amount of operational electronics discarded annually is one sort of geologically significant pile that entangles first, second and third nature: the communicational vectors of advanced digital technologies come with a rather direct link to and impact on first natures… Communicational events are sustained by the broader aspects of geology of media. They include technologies abandoned and consisting of hazardous material: lead, cadmium, mercury, barium, and so on.” (49) In this manner, the mediasphere of third nature returns to the lithosphere. China, being short of certain key metals, imports them as scrap and mines some of its minerals now from second nature rather than from nature as such. But Parikka is keen not to lose sight of labor as a category here. Rather than think of third nature as a realm of immaterial labor, he wants to emphasize hard-work and hard-ware, and the constitutive role of the geological and chemical in both. Here it is worth recalling Marx’s interest, late in life, in questions of soil chemistry, which led him towards the concept of metabolic rift. Second nature got out of synch with nature, when minerals extracted from the soil by crops grown to produce food to fuel industrial labor did not return to the soils, depleting them. This led to soil science, and to practices of ‘culturing’ soil with nitrogen and potassium and so forth. Thus there’s a prehistory to third nature’s dependence on a vast array of mineral inputs, and its dumping of the resultant waste, in second nature’s dependence on mineral inputs to sustain – in the short term – commodified agriculture.
Parikka: “… a deep time of the planet is inside our machines, crystallized as part of the contemporary political economy.” (57-8) A manifesto-like text in Mute Magazine once proposed we move on from psychogeography to a psychogeophysics. Drawing on the rogue surrealist Roger Caillois, the new materialism of Rosi Braidotti and Timothy Morton’s studies of hyperobjects, Parikka develops psychogeophysics as a low theory approach to experimentally perceiving the continuities of medianatures. “Perhaps the way to question these is not through a conceptual metaphysical discussion and essays but through excursions, walks, experiments, and assays? … Instead of a metaphysical essay on the nonhuman, take a walk outside…” (63) Parikka pushes back against the limits of psychogeography (not least in my formulation of it) as restricted to the interactions of the ambling human and the urban milieu. A psychogeophysics might be able to detect and map a much deeper and broader field of vortexes, flows and eddies. “Psychogeophysics aims for planetary scale aesthetics.” (67) It pushes on from the opening towards the animal in posthumanities writing, toward the earth itself. “Psychogeophysics performs the continuums across the biological, the nonorganic, and the social.” (67) Here it might come up against, and have to work through, the history of nature aesthetics, in which the Grand Canyon went from being seen as beautiful to being seen as sublime. Both mapping and landscape painting have an intimate connection to geology. (And as Bernard Smith shows, also to maritime exploration.) Psychogeophysics might work as a minor concept or practice of a low theory, of variation and deviation, experimenting with ways of perceiving other times and spaces. For example, the work of media artist Joyce Hinterding explores natural electro-magnetic fields. The open earth circuit predates closed tech circuits. A Geology of Media is structured around a passage from the interior of the earth (mining) to its surface (soils) to the air above (dust) and beyond. A psychogeophysics of dust might begin with Marcel Duchamp’s Large Glass, and his other experiments in ‘dust breeding.’ Dust, Parikka suggests, rather troubles our notions of what matter is. A case in point: I’m typing this on an Apple laptop, encased in a smooth, polished aluminum shell. But the polishing of the case creates aluminum dust, which is a major health hazard. “There is a bitter irony that the residue of the utopian promise is registered in the soft tissue of a globally distributed cheap labor force.” (89) “We need to attend to the material soul. Made of lungs and breath – and the shortness and time management of breath.” (103) Here Parikka focuses on the human lung as a media that absorbs dust, some of it toxic. Workers make Apple products, imparting their labor to the product, but inhale aluminum dust in exchange. Where Franco Berardi proposes the soul as a new site of exploitation, exhaustion and depression, it’s worth paying attention to a more material aspect of the breath-soul relationship. The soul of the hacker class toiling in the over-developed world might be inspired, but the lungs of the worker elsewhere may well be respiring toxic dust. Here one might examine Platonov’s materialist theory of the soul, which sees soul as a kind of surplus over bodily subsistence. In Platonov’s terms, bodies don’t have souls unless they have surplus energy to expend on growing one (and in his world it is not always a good thing when they do).
One could think here of all the soul-restraining features of laboring to produce third nature: lead damages the nervous system, cadmium accumulates in the kidneys, mercury affects the brain, barium causes brain swelling and liver damage, and so on. “Mines are a central part of this picture of cognitive capital.” (100) Here I agree with Parikka that the rather ethereal theories of semio-capitalism or cognitive capitalism, let alone their acceleration, could do with contact with perspectives such as those of Jason Moore, which stress the material short-cuts on which commodification is based, such as cheap nature, cheap labor and cheap energy – but which leave long-term debts unpaid. Something like 81% of the energy used in the life cycle of computers is to make them, and much of that still comes from burning coal. It is ironic that the dust-free clean rooms of high tech industry are fueled by a process that throws ton after ton of coal dust into the air. “Dust does not stay outside us but is a narrative that enters us.” (102) The race for resources that colonizes the planet is continually throwing off waste that will far outlive the cycles of production and consumption that generate it. Parikka: “… any extended understanding of the cultural techniques and technologies of the cognitariat needs to be able to take into account not just souls but where breath comes from.” (106) It is strange that we use the word fossil in two such different senses: fossils are treasured artifacts of the past; fossil fuels are artifacts from the deep past to be burned up for energy, their waste cast into the atmosphere with abandon. Parikka teases out two other senses of the word fossil: fossil futures and future fossils. What are the potential futures that are now fossil relics in the archive? What fossils are we making now that we can imagine being discovered in some future time, by some other sentient species, after our human species-being has gone the way of the dinosaurs? The figure of the fossil provides a useful way of thinking and experimentally practicing a psychogeophysics. Fossils are a strange kind of ‘media’ artifact, preserving information across deep time. Certainly, the media technology of recent times will make an interesting fossil layer of the Anthropocene for future robot or alien archaeologists. And imagining them as such helps us think the third nature of information vectors, hurtling information around the world through fiber-optic cable at the speed of light, as something other than a world of super-fast temporalities. Parikka: “We need to address how fossils, whether of humans, dinosaurs, or indeed electronics, infuse with the archaic levels of the earth in terms of their electronic waste load and represent a ‘third nature’ overlapping and entangling with the first and second…. The third nature is the logistical vector of information through which production of second nature takes a new informational pace. But as we see from the existence of media fossils, the spheres of two and three are as entangled with ‘first nature’ as they are with each other. They are historically codetermining in a way that defies any clear cut differences between the modern era of industrialization and the postmodern era of information.
In addition, the material residue of the third nature is visible in the hardware and waste it leaves behind, despite its ability to reach abstract informational levels.” (119) Paolo Virno dispenses with the category of second nature in Marx, arguing that Marx only ever used it to denote the false sense of ‘naturalness’ of bourgeois life. But where Parikka and I converge contra Virno is in trying to show the paradoxical ‘real-falseness’ of the second nature that bourgeois culture celebrated. It was false in being utterly dependent on cheap labor, cheap nature and cheap energy, as Moore would put it. It was doomed from the start to a temporary existence, given the metabolic rift it opened up with its own conditions of existence. And the solution was not to reverse course, to try to find a way to value what socially organized production, as second nature, takes from, and gives back to, nature. On the contrary, the solution was to build out a third nature that would deploy the information vector to extract even more resources out of nature, from deeper, from further. And excrete even more waste into the rapidly closing system of planetary metabolism. As Adorno and Debord insisted in their different ways: the whole is the false. It’s a second nature and now a third nature that commodified relations of production have extruded out of social labor, like shiny but very temporary soap bubbles. Media artists play a key role throughout this book, but particularly in opening up the question of the fossil. Worth mentioning is Grégory Chatonsky’s work on telofossils, which posits an alternative teleology to the accelerationist one. In accelerationist thinking, the future extrudes as a linear intensification of the present. For Chatonsky, today’s tech is tomorrow’s fossils, dead and extinct yet preserving their now useless form. Particularly affecting is Trevor Paglen’s work on The Last Pictures, an intentional fossil. Paglen created a photo archive to be attached to a satellite and boosted into space, as satellites are likely to be among the longest-living fossils of the era of third nature. Thus Parikka’s movement throughout the book from underground to surface to atmosphere terminates in what for Lisa Parks is our orbital culture. Besides its spatial imagination, A Geology of Media opens up a usefully nonhuman way of thinking about temporalities. The accelerationist view only perceives human time speeding into an inhuman one in which artificial intelligence supersedes us. It seems unable to think the deep, nonhuman times that get produced along the way, as the accelerating juggernaut of third nature throws off waste products, some of which may outlast life itself. However, one limit to Parikka’s project is suggested by this very figure of the fossil, particularly if we think of what Quentin Meillassoux calls the arche-fossil. How is it possible to have a knowledge of a rock that existed before humans existed? How can there be knowledge of an object that existed in the world before there could be a correlative subject of knowledge? I’m not sure Parikka’s double articulation of media and geology really addresses this proposition. Meillassoux’s approach is to abolish the subject of knowledge and restore a speculative and pre-Kantian philosophy of the object, the essential and primary properties of which are mathematical and hence allegedly prior to any sensing and knowing subject.
The problem with this is that Meillassoux has to bracket off the complex of scientific labor and apparatus through which the fossil is known at all. His is a contemplative realism that takes the fossil as simply given to thought. What he takes to be its primary qualities, mathematically described, are really the product of tertiary qualities, produced by an inhuman apparatus of scientific instrumentation that mediates a knowledge of this nonhuman object to the human. Here I find Karen Barad’s agential realism to be most helpful and the stub of a genuinely Marxist theory of science, in that it concerns itself with the means of production of knowledge. The nonhuman world of the fossil is mediated through the inhuman world of an apparatus, one that can sense things beyond the secondary qualities of objects detectable to the merely human senses. Rather than expand the category of object, as Meillassoux’s speculative realism does, or attribute life or consciousness to the inorganic and nonhuman as the new materialism does, one can expand the middle, the mediating term, which in this case is the inhuman apparatus of undead labor mixed with living labor. The inhuman apparatus can perceive beyond the merely subjective time of the human, for it too, like the fossil, is a product of deep time. There is no mystery of correlation to account for, as knowledge is not a matter of a subject contemplating an object. Rather, the appearance of objects and subjects as entities with specific boundaries and temporalities is itself a product of an inhuman process engaging many agents with many temporalities, some of them very deep indeed. Hence I agree with Parikka here: “We need carefully to refine what we mean by media and communication in the non-correlationist as well as new materialist contexts of contemporary media culture.” (135) But one does not achieve this by extending the category of ‘life’ or ‘thought’ into the deep time of the nonhuman, as this is simply the mirror image to speculative realism’s erasure of the subject under the weight of its vision of a chaotic and collapsing objective-real. Both approaches want to assign to a high theory a power it does not have, to define the whole field of being and becoming by itself. This latent tendency in the book seems contrary to its main achievement, which is to show the power of a more collaborative approach to knowledge, in which a low theory of psychogeophysics wanders between fields or burrows under them, rather than flying like Icarus above them. The problem with Kittler’s media theory, the thing that really dates it, is that it had still not given up on the imperial ambitions of a high theory. By pushing this field-colonizing theory as far as it will go, beyond the media apparatus toward the geology from which it is extruded, Parikka makes a step forward in the direction of a new organization of knowledge, towards a ‘post-colonial’ media theory, in the limited sense of not attempting to colonize other fields of knowledge. Parikka: “Media materiality is not contained in the machines, even if the machines themselves contain a planet. 
The machines are more like vectors across the geopolitics of labor, resources, planetary excavations, energy production, natural processes, from photosynthesis to mineralization, and the aftereffects of electronic waste.” (139) Such a perspective calls for a mediating of the various knowledges of the component parts of that totality to each other without the pretensions to mastery of any one field or discipline over all the others.
The weaponized form of McLuhan’s famous phrase ‘the medium is the message’ is the phrase ‘first we shape our tools, then our tools shape us’ (due to McLuhan’s friend John Culkin). I have come to prefer this form of the idea, and my favorite motif for it is Doc Ock, the Marvel super-villain. Doc Ock’s artificially intelligent arms fuse to his brain stem in a reactor accident. In the movie version, the intelligence in the arms alters his behavior by making lower-level brain functions, such as emotional self-regulation, more powerful and volatile. The character backstory suggests a personality — a blue-collar nerd bullied as a schoolkid — that was already primed for destabilization by the usual sort of super-villain narcissistic wound. The accident alters the balance of power between his higher-level brain functions and the hardware-extended lower-level brain functions. In the Doc Ock story, ‘first we shape our tools, then our tools shape us’ captures the adversarial coupling between medium and message-sender. The weaker form of McLuhan’s idea suggests that media select messages rather than the other way around: paper selects for formal communication, email selects for informal communication, 4chan selects for trolling. The stronger form suggests that when there is a conflict between medium and message, the medium wins. A formal communication intent naturally acquires informal overtones if it ends up as an email, memetic overtones if it ends up as a 4chan message. Culkin’s form is the strongest. It suggests that the medium reshapes the principal crafting the message. The Doc Ock motif suggests why. There is no such thing as a dumb agent. All media have at least weak, latent, distributed intelligence. Intelligence that can accumulate power, exhibit agency, and contend for control. The most familiar example of this effect is in organizational behavior, captured in an extension to Alfred Chandler’s famous observation that structure follows strategy. That becomes ‘first structure follows strategy, then strategy follows structure.’ The explicit form is Pournelle’s Iron Law of Bureaucracy: in a mature organization, agent goals trump principal goals. A subtler, less familiar example is the philosophical idea that in any master-slave relationship, the slave can self-actualize through labor. In practice, this happens only when the slave has some freedom above absolute wretchedness, with sufficient cognitive surplus to turn learning from labor into political leverage. In all such examples, the mechanism is the same. A seemingly powerless and dumb agent, by virtue of having privileged access to information and organizational operations, can become the principal by converting growing tacit knowledge of reality into consciously exercised political leverage. The idea sheds light on why we are instinctively concerned about the Trump administration-in-waiting.
While it is plausible, indeed probable, that Trump’s own ideological postures are merely expedient responses to the needs of the moment, the same cannot be said of many of his agents-in-waiting, whether acknowledged or not. They are tools at the moment, being shaped to the will of a victor. Unfortunately, they can easily go from being shaped to doing the shaping.
[Image: Howard Gossage, in front of his converted firehouse, which housed his San Francisco advertising agency.]
Howard Luck Gossage (1917–1969), known as “The Socrates of San Francisco,” was an advertising innovator and iconoclast during the “Mad Men” era. A non-conformist who railed against the norms of so-called scientific advertising in his day, Gossage introduced several innovative techniques to the advertising practice that would only become appreciated decades after his death. Gossage is credited with introducing the media theorist Marshall McLuhan to media and corporate leaders, thereby providing McLuhan his entry into mainstream renown. More widely, Gossage was involved in some of the first environmental campaigning in the USA with the Sierra Club, and in the establishment of Friends of the Earth through his friendship with David Brower. Co-founder at age 36 of the advertising agency Wiener & Gossage, Howard Gossage is listed by Advertising Age at number 23 of its 100 advertising people of the 20th century. AdAge.com calls Gossage a “copywriter who influenced admakers worldwide.” http://en.wikipedia.org/wiki/Howard_Gossage
The following is a short excerpt from Steve Harrison’s recent biography of Howard Gossage (from Chapter 5, “Launching Marshall McLuhan, and the birth of social networks”, in Harrison, S. (2012), Changing the World is the Only Fit Work for a Grown Man – An Eyewitness Account of the Life and Times of Howard Luck Gossage: 1960s Most Innovative, Influential & Irreverent Advertising Genius, Adworld Press).
… Gossage felt that “McLuhan’s most powerful appeal, in the end, is to those who have thought themselves into a sort of intellectual isolation, who lie awake and groan ‘doesn’t anyone else think like me?’” According to his wife, Sally, Gossage was actually lying in bed late one night in February 1965 when his intellectual isolation came abruptly to an end. “I remember I was reading some sort of wonderful novel and Howard was reading Marshall McLuhan’s book [Understanding Media, published in 1964] and he said ‘I get it, I understand!’ and I said ‘What?’ and he said ‘McLuhan is assuming that the reader already knows the background stuff that McLuhan knows so he’s writing in shorthand. It needs to be filled in. I’m going to fix it.’ And the next thing I know he’s on the ‘phone and he’s got Marshall McLuhan on the ‘phone in Canada and he says ‘McLuhan, do you want to be famous?’ … Jerry Mander has a similar recollection of their first contact. “Howard Gossage discovered Marshall McLuhan as far as I can see. He launched Marshall McLuhan. He’d read that book and he said ‘Mander, look at this. This guy’s fantastic. This is the most amazing person, let’s call him up’. At that time McLuhan was not a well-known character. His book had just come out. I don’t know how Howard got a hold of it but he had read it cover to cover in a flash and said, ‘This is the best thing on media that’s ever been done,’ and he called him up on the ‘phone and his opening line was ‘Dr. McLuhan, how would you like to be famous?’ By the time the firehouse seminar took place in August 1965, Gossage had already delivered on his promise.
Back in May, Gossage and his partner Dr. Gerald Feigen, in their recently formed consultancy, Generalists, Inc., had invested $6,000 and taken the little-known academic to the East Coast. There, in restaurants affordable only to those on corporate welfare, he was introduced to the nation’s leading media owners, newspaper reporters, TV journalists and admen. Breathing surprisingly easily in the rarefied atmosphere of the corporate élite, the 53-year-old conservative academic in the striped seersucker suit and plastic clip-on bow tie took delight in telling the people who either ran or had found fame and fortune working in the media that, frankly, they really knew nothing about it. (pp. 95-96)