By Pit Schultz
The current Facebook debate is a chance to get your act together and get organized – just a little.
What does it mean to get dis-, re-, or co-organized, for the worse or the better? A further balkanisation, a migration to the cryptoanarchist waste of resources, blockchain-nations as a refresh of the independent-cyberspace myth, or the various academic art conferences giving a place to certain representative counter-movements in order to map and neutralize them? The culture war is probably a trap, enabling single career paths instead of lifting the standards on a larger scale. This goes especially for the branch of critical media art, which has buffered criticality away from the rest of the art world for too long. While often fruitful and interesting on the lower layers, it gets thinner and weaker the higher you get. The neoliberal call for self-diversification is part of a parapolitical neutralisation effort to keep resistant forces away from where they could do real harm and lead to systemic change.
Synchronize and Change Facebook from Within
It is understandable to leave Facebook because it is dull, depressing, boring – but the same probably goes for your workplace, the compromise you had to make to rent an affordable flat, the places where you need to go shopping or studying. Even if there are alternatives in the physical, urban world, the ecosystem of myriad websites, Linux distros, apps you have never seen, and tracks you will never listen to is part of the long-tail myth of consumerist choice. Culturally, the unification under one media platform, such as the book, or the internet, has been revolutionary in terms of “consciousness building”. Today you are told that somewhere else, with a different type of media, speech will be authentic and free again – just to stop you from waking up and stating the obvious.
The existence of pluralism and diversity depends on the conditions of the surroundings they derive from. The current platformisation is adding new application layers on top of the web. One can dream of and fight for a niche in between, or fight for the change and opening of these platforms in terms of democratic design principles. Both approaches have pros and cons. Since we have no better political system architectures available, we could stick with embedded democracy and discuss its specifications, given the complete lack of these features in today's online infrastructure.
Democratisation or Exodus as Hegemonic Choice
To run away from Facebook headlessly, and to leave it before being censored, kicked out or shut off, is ill advised from a radical democratic point of view. Maybe it would be possible, in a Gramscian way, to doubt the absolutist public sphere that Facebook has erected. But going back through the alternatives – the municipal level of creative, digital and global cities, the balkanisation of cryptoanarchist blockchain-based currencies, the ghost towns of abandoned homepages in the dark net, the countless masculinist Linux projects which reinvented the wheel, as well as the various counter-platforms that clone and modify the UX of Facebook in one way or another – it turns out that they have proven to be dead-end devolutions. Compared to the mass-consumerist time waste of Facebook, they are still interesting tactical forms of excess. But due to their false promise of offering a strategy rather than just an individualist tactical sidestep, these outside positions are certainly not inherently better ones. Nor are they inherently bad – they are just not a solution. And they are certainly not politically or theoretically smarter than trying to change Facebook on Facebook.
From a media-theoretical point of view, #deletefacebook seems blind, since the deletion confirms that Facebook reduces you to an effect of the medium. You tacitly accept that you cannot change the channel from within the channel – being in it, debating it, critiquing it, protesting against it or subverting it, or taking any distanced meta-position from within the medium. From a political point of view, the spectrum of protest forms includes exercising the right to delete yourself (#loeschdich), and there are various existing channels to discuss strategic common goals in the aftermath of the CA scandal. Not to confuse the means of change with the goal itself: we need to achieve more rights – more, or at least some, democratic freedom – to transform this powerful platform in an exemplary way. Instead of dispersing the platform into micropolitical niches, which ultimately risks neutralizing its potentials, we could form new brilliant alliances of productive alienations.
By taking a virtual outside position – eccentric, external, artistic or theoretical – one cannot deny that even the most underprivileged and precarious existence is impossible outside of today's capitalist realism. Trying to escape the network effects of Facebook in an exemplary way will only be a symbolic move of self-alienation and ‘dark’ independence. Leaving Facebook weakens the resistance against the platform on the platform. As a representative function of the powerlessness and passivity in society, flirting with exodus, hipsterist escapism or cocooning into the digital diaspora will never make oneself less vulnerable, or free, without forming collective agencies of real resistance: running archives, sharing strange interests and hobbies, collecting and filtering what has been too easily neglected or forgotten. We seek strategic alternatives for a planetary order, branching points and possible future forks. Of course they can be developed on or off the platform. But as an individual strategy, better not to fool yourself into believing that there is a safe zone of a pirate utopia reserved for you in cyberspace.
Across the Stack
Let's face the facts. Facebook provides an application layer on top of the web. It holds a horizontal monopoly position in web usage, as well as in the messenger and mobile social media space. To propose free alternative solutions is politically and technically ill advised. Alternatives based on a free and open clone approach are rather doomed, due to network effects. Diaspora, Ello, Vero and various comparable approaches have tried to operate next to Facebook, but there is not even a niche left for them. You need to expand to specialized social networks such as LinkedIn for CVs and job-related matters, or Academia.edu, SoundCloud etc., to pick up the crumbs. In terms of surveillance features and data vampirism, the business models of these alternative platforms are rarely better than Facebook's. To tap into new network effects, one must either go vertically down the stack, expanding into physical appliances (smart home) and/or protocols (p2p), or go up the stack (AI-driven agents). With a new abstraction layer – an open API that turns Facebook into an opened infrastructure and thereby neutralizes it – a democratized Facebook could serve as a backend: as in Tim Berners-Lee's Solid project, for autonomous bots and scrapers, a social media layer to build new things on top of. Lately even Google has given up on building a competitor to Facebook's quasi-monopoly in instant messaging, going down and sideways in the mobile stack to use SMS and no-internet messaging as the base for a new app.
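The "backend" idea can be made concrete with a minimal sketch. Assuming a hypothetical open, Solid-style API that hands a third-party client its feed as plain data, the client could replace the platform's engagement ranking with a user-chosen one. The endpoint, schema and field names here are invented for illustration; only the client-side logic such openness would enable is shown.

```python
# Sketch: a third-party client on top of a hypothetical open social backend.
# The API itself is assumed; 'feed' below stands in for its JSON response.
# The point: with an open backend, ranking becomes the user's choice.

from datetime import datetime

def rerank_feed(posts, mode="chronological"):
    """Re-rank a feed fetched from an open backend.

    posts: list of dicts with 'id', 'created_at' (ISO 8601) and
    'engagement' keys -- a deliberately minimal, invented schema.
    """
    if mode == "chronological":
        return sorted(posts,
                      key=lambda p: datetime.fromisoformat(p["created_at"]),
                      reverse=True)
    if mode == "engagement":  # what the platform itself would do
        return sorted(posts, key=lambda p: p["engagement"], reverse=True)
    raise ValueError(f"unknown mode: {mode}")

# Example data standing in for an API response.
feed = [
    {"id": "a", "created_at": "2018-05-01T10:00:00", "engagement": 900},
    {"id": "b", "created_at": "2018-05-20T09:00:00", "engagement": 3},
]

print([p["id"] for p in rerank_feed(feed)])                     # newest first
print([p["id"] for p in rerank_feed(feed, mode="engagement")])  # most engaged first
```

A trivial example, but it marks the political line: whoever controls the ranking function controls the feed, and an open API moves that function to the edge.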
Interoperabilities, Design Change and Open Data Sheets
Even if regulation achieves some opening of the APIs, interoperability will not be able to replicate the complexities of one system onto another without a more radical approach. Interoperability as the lowest common denominator of regulatory efforts represents the misunderstanding of today's infrastructures by the “other” culture, which still thinks in railway or telephony metaphors. Only in combination with a truly open architecture that provides open source, open data and open datasheets will forks become possible, as well as federated spin-offs. Unfortunately it is precisely the API-based third-party ecosystem which will be restricted after the CA case. Interoperability as a means of capitalist appeasement is used in industries such as the military industry to make heterogeneous existing systems work together. In computer hardware, such as the PC, interoperability is achieved by open standards. In software architecture it is called legacy, and a migration or a replacement is often easier to achieve. The same goes for the keyword “algorithm”: algorithms do not make much sense without data structures.
The way our data, attention, and online labor is defined is much more present in the specifications of data sets and the modelling of input-output relations than it is visible in the implementations of running code, i.e. the algorithms. By demanding access to the documentation of the design process, the iterative, agile but nevertheless non-democratic development cycles within social media companies become debatable. The strategies and use-case modelling will be much more revealing than merely fetishizing the code itself, or countering the assumed power with a fetishisation of jurisdiction. Neither law nor code is the most effective lever of corporate control. In order to change the companies, one must first try to understand their organisational models, their design processes and their decision structures.
Demanding transparency for algorithms alone just demonstrates the limitations of digital literacy today, 30 years after the internet was introduced and 70 years after the computer was invented. So instead of talking about acceleration one must talk about education. Besides the documentation of a system's architecture, its specifications and source code, a full documentation of its “datasheets” – the metadata describing the data structures – is a necessary demand. Without them, interoperability as well as a regulatory approach to algorithms are ill-fated forms of cross-cultural intermediation, which rather establish bodies of agency that are open to lobbyism and obstruction of all kinds.
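What an open "datasheet" could look like can be sketched in a few lines. Everything here is invented for illustration – the record name, fields and policy values – but it shows why such metadata, rather than the ranking code, reveals what is actually extracted, and how it could be audited mechanically once published.

```python
# Sketch of a machine-readable "datasheet" for a social-media data
# structure. All names and values are hypothetical; the point is that
# field-level metadata, not the algorithm, shows what is collected,
# how long it is kept, and with whom it is shared.

DATASHEET = {
    "record": "feed_event",
    "fields": [
        {"name": "user_id",    "type": "string", "retention_days": 3650, "shared_with_third_parties": True},
        {"name": "dwell_time", "type": "float",  "retention_days": 365,  "shared_with_third_parties": True},
        {"name": "post_text",  "type": "string", "retention_days": 3650, "shared_with_third_parties": False},
    ],
}

def audit(datasheet):
    """Return the fields a regulator or user could object to:
    long-retained and passed on to third parties."""
    return [
        f["name"]
        for f in datasheet["fields"]
        if f["shared_with_third_parties"] and f["retention_days"] > 365
    ]

print(audit(DATASHEET))
```

Published datasheets of this kind would make the regulatory demand enforceable by anyone with a scraper, instead of depending on a commission's goodwill.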
Facebook also combines a few known design patterns, such as the “portal” and good old “collaborative filtering”, as we called it in the old nettime footer, while taking thread-based use cases known from Usenet and CompuServe to provide the annotated web that was promised in the '90s. The problem behind the current privacy debacle is the property issue: ‘no commercial use without permission’ is as problematic as the complete lack of democratic functionality within the platform. A de-individuation of social media, as Benjamin Bratton has proposed, would be accompanied by new options to reset the priorities.
An individualist liberal movement comes with tactical sidesteps and prefers issues of smaller group identification over larger common goals. It is not that the bias of algorithms is not a problem, but it needs to be embedded into a larger analysis of power, including the data structures, the system architecture, flaws in laws and regulation, exploitative business models and the various forms of discriminatory bias in the engineering process, often due to a lack of governance in terms of transparency and accountability. On the other hand, a new universalism, even if proposed by Facebook, serves as a plane on which today's planetary problems need to be understood and solved globally. The equality it offers, and a certain degree of neutrality in the interface layers, are a cultural potential to build on, to change from within, to modify and fork, and to politicise against. Dreaming about building your own little world on the side is fruitless if it is not connected to a larger political fight to change the architecture of power and property mediated by the internet. Nevertheless, tacit forms of protest, as isolationist, anti-productive or escapist as they might appear, should not be condemned, as the change of these platforms is a long-term goal which can be fought for on and off these platforms in various ways.
A more object-oriented social network is possible, in which subjects group around issues, goals, projects and events, and the individual is no longer just the ultimate product at the center of the social graph. Imagine a combination of models known from Wikipedia and Mozilla: non-profit co-ops with more democratic functionality in iterative design cycles, such as bug trackers, eternal logs and transparency of documentation. Democratic design principles that have long been known need to be formulated, discussed and implemented. The discourse of regulatory law, like that of ethical commissions, will not prevent the next levels of alienation, surveillance and oppression that are coming with machine learning and big-data-driven AI. Economic inequality and the property relation should be the first common issue beyond all minority-based struggles, in order to connect the various fights and not obey the framings and neutralising offers of liberalism.
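The "object oriented" inversion can be sketched as a data model. In the sketch below – schema and names invented for illustration – issues, projects and events are the first-class nodes, people attach to them, and the interesting query is not "who knows whom" but "which issues share people", i.e. where alliances can form.

```python
# Minimal sketch of an issue-centric social graph: issues are the nodes,
# people are edges into them, and coalitions emerge from shared membership.
# Hypothetical schema, for illustration only.

from collections import defaultdict

class IssueGraph:
    def __init__(self):
        self.members = defaultdict(set)  # issue -> set of people

    def join(self, person, issue):
        """A person attaches to an issue, project or event."""
        self.members[issue].add(person)

    def coalition(self, issue_a, issue_b):
        """People active on both issues: candidates for an alliance."""
        return self.members[issue_a] & self.members[issue_b]

g = IssueGraph()
g.join("ana", "open data")
g.join("ben", "open data")
g.join("ana", "platform co-ops")

print(sorted(g.coalition("open data", "platform co-ops")))
```

Note what is absent: there is no profile object at the center, so there is nothing to sell advertisers a "social graph" of. The design choice is the politics.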
Recently it has been quiet around the defense of the commons. The liberalisation of open data has led to a pipeline of structured data going directly from the foundations of Wikipedia to Google's knowledge graph and DeepMind. Wikidata, a project funded by Google, does not come with a new license that would compensate for the billions of dollars worth of human labour now used to train AI and to structure and enhance search results. The social factory of Facebook, like Amazon, Google and every other large commercial online platform, will turn to a model of commodifying and monetizing data by feeding it into value-extraction methods run by machine-learning algorithms. The underlying proprietization of data is the central strategic point to attack.
The French AI proposal is a great chance for a defense of open data, and not just along the values of Diderot and d'Alembert. All training data for machine learning should be put under a new kind of data-GPL license which is free for non-commercial or scientific use, but which forces companies that make a profit to recompense the commons, possibly using blockchain methods, and to become accountable in terms of reproducible training – which today lacks methods for debugging and controlling AI.
Press Pause, Get Organized and Strike!
It has been a few years now since Tiziana Terranova proposed the red stack. Burak Arikan has proposed a formalisation of unpaid online labour. Tommaso Tozzi proposed the netstrike in 1995. Richard Barbrook and I wrote a manifesto for the digital artisan. There are plenty of people who are concerned with and interested in possible, exemplary changes to Facebook here and now: theoreticians, activists, journalists, artists … Yet until now there has hardly been a place where they join, filter and exchange relevant texts and get a little more self-organized.
A strike or #facebreak might still be a valid form of protest, a demand for forms of governance on Facebook and a manifestation of not wanting to be governed in such a way. As proposed above, I will suspend my account from the 25th of May to the 1st of June 2018.
In the meantime, let's work on a list of demands – of democratic design changes – to Facebook and other platforms.
Let's call it #non-facebook.
Walter Benjamin has a reputation as a sophisticated reader of literary texts. But perhaps his media theory is not quite so elaborate. Here I shall attempt to boil it down in a very instrumental way. The question at this juncture might be less what we owe this no longer neglected figure but what he can do for us.
Benjamin thought that there were moments when a fragment of the past could speak directly to the present, but only when there was a certain alignment of the political and historical situation of the present that might resonate with that fragment. Applying this line of thought to Benjamin himself, we can wonder why he appeared to speak directly to critics of the late twentieth century, and whether he still speaks to us today.
Perhaps the connection then was that he seemed to speak from, and to, an era that had recently experienced a political defeat. Just as the tide turned against the labor movement in the interwar period, so too by the late seventies the new left seemed exhausted.
He appeared then as a figure both innocent and tragic. He had no real part in interwar politics, and committed suicide fleeing the Nazis. He could be read as offering a sort of will-to-power for a by then powerless cultural and political movement, which thought of itself as living through dark times, awaiting the irruption of the messianic time alongside it in which the dark times might be redeemed. He was a totem for a kind of quiet endurance, a gathering of fragments in the dark for a time to come. But perhaps his time no longer connects to our time in quite the same way. And perhaps it does Benjamin no favors to make him a canonic figure, his ideas reduced to just another piece of the reigning doxa of the humanities.
As a media theorist, Benjamin’s contributions are fragmentary and scattered. They might be organized around the following topics: art, politics, history, technology and the unconscious. Here I will draw on the useful one-volume collection Walter Benjamin, The Work of Art in the Age of Technological Reproducibility and Other Writings on Media (Harvard University Press, 2008).
Benjamin had already started to think art historically rather than stylistically before his engagement with Marxism. Here the work of the art historian Alois Riegl was probably decisive. Formal aesthetic problems are of no particular interest in themselves. “The dialectical approach… has absolutely no use for such rigid, isolated things as work, novel, book. It has to insert them into the living social contexts.” (80) Attention shifts to practices. The reception of the work by its contemporaries is part of its historical effect on its later critics as well. Reception may vary by class, Benjamin notes in passing, anticipating Raymond Williams.
Not the least significant historical fact about art was the emergence of forms of reproducibility far more extensive than the duplication of images on coins known since the Greeks. The project for modern art history was thus an “attempt to determine the effect of the work of art once its power of consecration has been eliminated.” (56) The eternal as a theme in art – in western art at least – was linked to the impossibility of reproducing artworks. Fine art practices thus have to be thought in terms of the historical situation in which they appear alongside other practices, in particular new forms of reproducibility.
Central to thinking art then is an intersection of technical and historical forces, and hence the question of how art is made: “the vital, fundamental advances in art are a matter neither of new content nor of new forms – the technological revolution takes precedence over both.” (329) Benjamin draws our attention to the uneven history by which the mechanical intervenes in its production. So while the piano is to the camera as the violin is to painting, reproducibility has specific histories in regard to specific media forms.
The question of technical reproduction brings up the wider concept of technology itself. Here Benjamin provides at a stroke a key change of point of view: “… technology is the mastery not of nature but of the relation between nature and man.” (59) The technical relation is embedded in a social relation, making it a socio-technical relation. Here we step away from merely ontological accounts of technology towards a social and historical one, which nevertheless pays attention to the distinctiveness of the technical means of production.
Writing in the wake of the Great War, Benjamin is one of innumerable writers and artists who saw the potential of technology, including media technology, within the context of its enormous destructive power. The war was a sort of convulsion of the techno-social body, a sign of a failed attempt of our species-being to bring a new body under control.
But technology is not an exogenous force in Benjamin. He is neither a technological determinist nor for that matter a social constructivist. (Note, incidentally, how this hoary old way of framing this debate puts a finger on the scale of the social.) “Technology… is obviously not a purely scientific development. It is at the same time an historical one. As such, it forces an examination of the attempted positivistic and undialectical separation between the natural sciences and the humanities. The questions that humanity brings to nature are in part conditioned by the level of production.” (122) It’s a matter then of thinking the continuum of techno <-> social phenomena as instances of a larger historical praxis.
Benjamin also dissents from the optimistic belief in technology as progress that he thought had infected social democratic thinking in the inter-war years. “They misunderstood the destructive side of this development because they were alienated from the destructive side of dialectics. A prognosis was due, but failed to materialize. That failure sealed a process characteristic of the last century: the bungled reception of technology.”
This might be a useful lesson still for our own time. Benjamin does not want to retreat from thinking the technical, nor does he fetishize it. The technical changes to the forces of production have their destructive side, but that too can be taken in different ways. It is destructive in the sense made plain by the war; but it might be destructive in another sense too – destructive of the limits formed by the existing relations of production.
Tech change in media form brings political questions to the fore, not least because it usually disrupts the media producer’s relation to the means of production. “The technical revolutions – these are the fracture points in artistic development where political positions, exposed bit by bit, come to the surface. In every new technical revolution, the political position is transformed – as if on its own – from a deeply hidden element of art into a manifest one.” (329)
This understanding of technology frames Benjamin’s approach to the politics of both art and knowledge. Both are potentially the means by which our species-being might acquire a sensory perception and a conceptual grasp of its own socio-technical body and direct it towards its own emancipation. The task of the cultural worker is to contribute to such a project. It’s a matter of understanding and redeploying the mode of perception in its most developed form to such ends. The mode of perception of the early twentieth century appears as one in which “distraction and destruction [are] the subjective and objective sides, respectively, of one and the same process.” (56) That may be even more so today.
Benjamin practiced his own version of what I call low theory, in that the production of knowledge was not contemplative and was disinterested in the existing language games of the disciplines. Knowledge has to be communicated in an effective manner. “The task of real, effective presentation is just this: to liberate knowledge from the bounds of compartmentalized discipline and make it practical.” (61)
Both knowledge and art matter as part of the self-education of the working class. Benjamin thought the social democrats had made an error in diluting this labor point of view into a mere popular and populist pedagogy. “They believed that the same knowledge which secured the domination of the proletariat by the bourgeoisie would enable the proletariat to free itself from this domination.” (121) The project of an art and knowledge for liberation posed questions about the form of such art and knowledge, in which “real presentation banishes contemplation.” (62)
The artist or writer’s relationship to class is however a problematic one. Mere liberal sympathy for the ‘down-trodden’ is not enough, no matter how sincere: “a political tendency, however revolutionary it may seem, has a counter-revolutionary function so long as the writer feels his solidarity with the proletariat only in his attitudes, not as a producer.” (84) What matters is the relation to the means of production, not the attitude: “the place of the intellectual in the class struggle can be identified – or better, chosen – only on the basis of his position in the process of production.” (85)
Benjamin was already well aware that bourgeois cultural production can absorb revolutionary themes. The hack writer or artist is the one who might strike attitudes but does nothing to alienate the productive apparatus of culture from its rulers. “The solidarity of the specialist with the proletariat… can only be a mediated one.” (92) And so the politics of the cultural worker has to focus on those very means of mediation. Contrary to those who would absorb Benjamin into some genteel literary or fine art practice, he insists that “technical progress is for the author as producer the foundation of his political progress.” (87)
The task then is to be on the forefront of the development of the technical forces of cultural production, to make work that orients the working class to the historical task of making history consciously, and to overcome in the process the division of labor and cult of authorship of bourgeois culture. Particularly in a time of technical transition, the job is to seize all the means of making the perception and conception of the world possible and actionable: “photography and music, and whatever else occurs to you, are entering the growing, molten mass from which new forms are cast.” (88)
Moreover, Benjamin saw that the technical means were coming into being to make consumers of media into producers. Benjamin is ahead of his time on this point, but the times have surely overtaken him. The ‘prosumer’ – celebrated by Henry Jenkins – turned out to be as recuperable for the culture industry as the distracted spectator. The culture industry became the vulture industry, collecting a rent while we ‘produce’ entertainment for each other. Still, perhaps it’s a question of pushing still further, and really embracing Benjamin’s notion of the cultural producer as the engineer of a kind of cultural apparatus beyond the commodity form and the division of labor.
Reading Benjamin can easily lead to a fascination with the avant-garde arts of the early and mid 20th century, but surely this is to misunderstand what his project might really point to in our own times. Benjamin had a good eye for the leading work of his own time, which sat in interesting tension with his own antiquarian tendencies. His focus on the technical side of modern art came perhaps from László Moholy-Nagy and others who wrote for G: An Avant Garde Journal of Art, Architecture & Design.
He understood the significance of Dada, the post-war avant-garde that already grasped the melancholy fact that within existing relations of production, the forces of production could not be used to comprehend the social-historical totality. Dada insisted on a reality in fragments, a still-life with bus-tickets. “The whole thing was put in a frame. And thereby the public was shown: Look, your picture frame ruptures time; the tiniest authentic fragment of daily life says more than painting. Just as the bloody fingerprint of a murderer on the page of a book says more than the text.” (86) The picture-frame that ruptures time might be a good emblem for Benjamin’s whole approach.
His optimism about Soviet constructivism makes for poignant reading today. He celebrated Sergei Tretyakov, who wanted to be an operating rather than a merely informing writer. Too bad that the example Benjamin celebrates is from Stalin’s disastrous forced collectivization of agriculture. Tretyakov would be executed in 1937. Still, Gerald Raunig has more recently taken up the seemingly lost cause of Tretyakov.
Benjamin was however surely right to take an interest in what Soviet media had attempted up until Stalin’s purge of it. “To expose such audiences to film and radio constitutes one of the most grandiose mass psychological experiments ever undertaken in the gigantic laboratory that Russia has become.” (325) This was one of the great themes of the Soviet writer Andrei Platonov. Unfortunately Benjamin, like practically everyone else, was ignorant of Platonov’s work of the time.
Surrealism contributed much to Benjamin’s aesthetic, particularly its fascination with the convulsive forces lurking in the urban and the popular. Like the Surrealists, and contra Freud, Benjamin was interested in the dream as an agency rather than a symptom. Surrealist photography taught him to see the photograph in a particular way, in that its estrangement from the domestic yielded a free play for the eye to perceive the unconscious detail.
Surely the strongest influence on Benjamin as a critical theorist of media was the German playwright Bertolt Brecht and his demand that intellectuals not only supply but change the process of cultural production. Brecht’s epic theater was one of portraying situations rather than developing plots, using all the modern techniques of interruption, montage, and the laboratory, making use of elements of reality in experimental arrangements. Benjamin: “It is concerned less with filling the public with feelings, even seditious ones, than with alienating it in an enduring way, through thinking, from the conditions in which it lives.” (91)
One thing from Brecht that could have received a bit more attention in Benjamin is his practice of refunctioning, basically a version of what the Situationists later called détournement. This is the intentional refusal to accept that the work of art is anyone’s private property. Reproducibility has the capacity to abolish private property in at least one sphere: that of cultural production.
Here the ‘molten’ dissolution of forms of a purely aesthetic sort meets the more crucial issue of ownership of culture. But Benjamin was not always clear about the difference. “There were not always novels in the past, and there will not always have to be; there have not always been tragedies or great epics. Not always were the forms of commentary, translation, indeed even so-called plagiarism playthings in the margins of literature…” (82) Here he comes close to the Situationist position that all of culture is a commons, but he still tended to confuse formal innovation within media with challenges to its property form.
Benjamin also had his own idiosyncratic relation to past cultures. The range of artifacts from the past over which Benjamin’s attention wanders is a wide one. His was a genius of the fragment. He was alert to the internal tension in aesthetic objects. Those that particularly drew his attention are objects that are at once wish-images of the future but which, at the very moment of imagining a future, also reveal something archaic.
Famously, in writing about mid-19th century Paris, he took an interest in Parisian shopping arcades, with their displays of industrial luxury, lit by gas lighting and the weak sun penetrating their covered walkways through iron-framed skylights. These provide one of the architectural forms for the imagination of the utopian thinker Charles Fourier, who thinks both forwards and backwards, mingling modern architecture with a primal image of class society.
Actually, one could dispute this reading. Jonathan Beecher, Fourier’s biographer, thinks the Louvre was his architectural inspiration. And Fourier’s utopia is hardly classless. On the contrary, he wanted a way to render the passion for distinction harmless. The method might be more interesting in this example than the result.
Benjamin is on firmer ground in relating the daguerreotype to the panorama. (Of which the Met has a fine example). The invention of photography expands commodity exchange by opening up the field of the image to it. Painting turns to color and scale to find employment. This chimes with McLuhan’s observation that it is when a medium becomes obsolete that it becomes Art, where the signature of the artist recovers the otherwise anonymous toil of the artisan as something worthy of private property: “The fetish of the art market is the master’s name.” (142)
Art favors regression to remote spheres that do not appear either technical or political. For example: “Genre painting documents the failed reception of technology… When the bourgeoisie lost the ability to conceive great plans for the future, it echoed the words of the aged Faust: ‘Linger awhile! Thou art so fair.’ In genre painting it captured and fixed the present moment, in order to be rid of the image of its future. Genre painting was an art which refused to know anything of history.” (161) So many other genres might fall under the same heading.
Benjamin also draws our attention to a class of writing that saw a significant rebirth closer to our own time, but which in the Paris of the mid-19th century was represented by Saint-Simon. This is the writing that sizes up the transformative power of technology and globalization but omits class conflict. These days it is called techno-utopianism. In distancing itself from such enthusiasms, critical theory has all too often made the reverse error: focusing on class or the commodity and omitting technological change altogether, or repeating mere petit-bourgeois romantic quibbles about its erosion of the homely and familiar. The challenge with Benjamin is to think the tension between technical changes in the forces of production and the class conflicts in which they are enmeshed but to which they cannot be entirely reduced. In thinking the hidden structural aspects not just of consciousness but also of infrastructure, Benjamin channels Sigfried Giedion: “Construction plays the role of the subconscious.”
The life of the commodity is full of surprises. For instance, consider Benjamin’s intuition about how fashion was starting to work in the mid-19th century. “Fashion prescribes the ritual according to which the commodity fetish demands to be worshipped.” (102) Fashion places itself in opposition to the organic, and couples the living body to the inorganic world. This is the “sex appeal of the inorganic,” which Mario Perniola will later expand into a whole thesis. (102)
Fashion makes dead labor sexy. It points to a kind of value that is neither an exchange value nor a use value, but that lies in novelty itself – a hint at what Baudrillard will call sign value. “Just as fashion brings out the subtler distinctions of social standing, it keeps a particularly close watch over the coarser distinctions of class.” (138)
Another artifact that turned out to have a long life is the idea of the home and of interior decoration. The mid-19th century bourgeois was beginning to think the home as an entirely separate, even antithetical, place from the place of work, in comparison to the workshops of their artisanal predecessors. “The private individual, who in the office has to deal with reality, needs the domestic interior to sustain him in his illusions.” (103) One might wonder if in certain respects this distinction is now being undone.
The central thread of Benjamin’s work on Paris was supposed to be Baudelaire, who made Paris a subject for lyric poetry. It was a poetry of the urban wanderer, the celebrated flaneur, who for Benjamin had the gaze of the alienated man. The arcades and department stores used the flaneur as a kind of unpaid labor to sell goods. It’s a precursor to social media.
Both the flaneur and the facebooker are voluntary wanderers through the signage of commodified life, taking news of the latest marvels to their friends and acquaintances. The analogy can be extended. The flaneur, like today’s ‘creatives’, was not really looking to buy, but to sell. Benjamin’s image for this is the prostitute: the seller of the goods and the goods to be sold all at once.
The flaneur as bohemian, not really locatable in political or economic terms as bourgeois or proletarian, is a hint at the complexities of the question of class once the production of new information becomes a form of private property. Who is the class that produces, not use values in the form of exchange values, but sign values in the form of exchange values? Benjamin comes close to broaching this question of our times.
Benjamin offers a rather condensed formula for what he is looking at in his historical studies of wish-images. “Ambiguity is the appearance of dialectics in images, the law of dialectics at a standstill. This standstill is utopia and the dialectical image, therefore, dream image. Such an image is afforded by the commodity per se: as fetish.” (105)
The commodified image is a fragment of dead labor, hived-off from a process it obscures. This is the image as fetish, a part-thing standing in for the whole-process of social labor. And yet at the same time it cannot but bear the trace of its own estrangement. As fragment it is fetish, but as mark of the absence of a real totality it points in negative toward utopia.
The Paris Commune of 1871 put an end to a certain dream image, forward-looking yet archaic. It was no longer an attempt to complete the bourgeois revolution but to oppose it with a new social force. The proletariat emerged from the shadows of bourgeois leadership as an independent movement. The dialectic might move forward again.
But this was only one of two developments that characterize the later 19th century. The other is the technical development of the forces of production subsuming the old arts and crafts practices of cultural production. Aesthetics, like science before it, becomes modern, meaning of a piece with the development of capitalism as a whole.
Benjamin: “The development of the forces of production shattered the wish symbols of the previous century, even before the monuments representing them had collapsed. In the nineteenth century this development worked to emancipate the forms of construction from art, just as in the sixteenth century the sciences freed themselves from philosophy. A start is made with architecture as engineered construction. Then comes the reproduction of nature as photography. The creation of fantasy prepares to become practical as commercial art.” (109)
Out of the study of 19th century Paris, Benjamin develops a general view of historical work that might properly be called historical materialist. “Every epoch… not only dreams the one to follow but, in dreaming, precipitates its awakening. It bears its end within itself and unfolds it – as Hegel already noted – by cunning. With the destabilizing of the market economy, we begin to recognize the monuments of the bourgeoisie as ruins even before they have crumbled.” (109) Benjamin pays much less attention to an epoch’s ideas about itself than its unconscious production and reproduction of forms, be they conceptual or architectural. (Which incidentally is why I am more interested in container ports and server farms than the explicit discourse of ‘neoliberalism’ as a key to the age).
The historical-reconstructive task is not to restore a lost unity to the past, but rather to show its incompletion, to show how it implies a future development, and not at all consciously. Fragments from the past don’t lodge in a past totality but in constellation with fragments of the present. Benjamin: “history becomes the object of a construct whose locus is not empty time but rather the specific epoch, the specific life, the specific work. The historical materialist blasts the epoch out of its reified ‘historical continuity’ and thereby the life out of the epoch and the work out of the lifework. Yet this construct results in the simultaneous preservation and sublation of the lifework in the work, of the epoch in the lifework, and of course of history in the epoch.” (118)
“Historical materialism sees the work of the past as still uncompleted.” (124) The task is to find – in every sense – the openings of history. “To put to work an experience with history – a history that is originary for every present – is the task of historical materialism. The latter is directed toward a consciousness of the present which explodes the continuum of history.” (119)
The materials for historical work may not actually exist. In his essay on Eduard Fuchs, Benjamin draws attention to their shared passion for collecting, and for the collection as “the practical man’s answer to the aporias of theory.” (119) Whether in Daumier’s images, erotica, or children’s books, the collector feels the resonance in low forms.
Such material has to be thought at one and the same time in terms of what it promises and what it obscures. “Whatever the historical materialist surveys in art or science has, without exception, a lineage he cannot observe without horror. The products of art and science owe their existence not merely to the effort of the great geniuses who created them, but also, in one degree or another, to the anonymous toil of their contemporaries. There is no document of culture which is not at the same time a document of barbarism.” (124)
Benjamin sets a high standard for the sorts of political claim that cultural work of any kind might make, as it is always dependent on the labor of others. “It may augment the weight of the treasure accumulating on the back of humanity, but it does not provide the strength to shake off this burden so as to take control of it.” (125)
He did not share the optimism of inter-war social democracy, which still tended to see capitalism as a deterministic machine grinding on toward its own imminent end. Benjamin was far more attuned to the barbaric side that Engels had glimpsed in his walks around Manchester. This barbarism, taken over from bourgeois culture, infected the proletariat via repression with “masochistic and sadistic complexes.” (137)
Benjamin thought both art and literature from the point of view of the pressure put on them by modern technical means. “Script – having found, in the book, a refuge in which it can lead an autonomous existence – is pitilessly dragged out into the street by advertisements and subjected to the brutal heteronomies of economic chaos.” (171) A great poet might acknowledge rather than ignore this. “Mallarmé… was in the Coup de dés the first to incorporate the graphic tensions of advertising into the printed page.” (171) (A quite opposite reading, incidentally, to the recent and very interesting one offered by Quentin Meillassoux.)
Benjamin also grasped the role of the rise of administrative textuality in shaping its aesthetics: “the card index marks the conquest of three dimensional writing…. And today the book is already… an outdated mediation between two different filing systems.” (172) The modern poet needed to master statistics and technical drawing. “Literary competence is no longer founded on specialized training but is now based on polytechnical education, and thus becomes public property.” (360) One wonders what he would have thought about the computer-assisted distant reading of the digital humanities.
His more famous study is of photography and its transformation of the mode of perception, influenced by the remarkable photographer and activist Germaine Krull (subject of a recent retrospective). The first flowering of photography was before it was industrialized, and before it was art, which arises as a reaction to mechanical reproducibility. “The creative in photography is its capitulation to fashion.” (293)
Benjamin draws attention to pioneers such as Julia Margaret Cameron and David Octavius Hill. Looking at his image of the Newhaven fishwife, Benjamin “feels an irresistible compulsion to search such a picture for the tiny spark of contingency, the here and now, with which reality has, so to speak, seared through the image-character of the photograph…” (276) The camera is a tech for revealing the “optical unconscious.” (278)
Eugene Atget comes in for special consideration as the photographer who began the emancipation of object from aura. This is perhaps the most slippery – and maybe least useful – of Benjamin’s concepts. “What is aura, actually? A strange web of space and time: the unique appearance of a distance, no matter how close it may be. While at rest on a summer’s noon, to trace a range of mountains on the horizon, or a branch that throws its shadow on the observer, until the moment or the hour becomes part of the appearance – this is what it means to breathe the aura of these mountains, that branch. Now, ‘to bring things closer’ to us, or rather to the masses, is just as passionate an inclination in our day as the overcoming of whatever is unique in every situation by means of its reproduction.” (285) Aura is the “unique appearance of a distance,” at odds with transience and reproducibility.
Where other critical theorists put the stress on how commodity fetishism and the culture industry limit the ability of the spectator to see the world through modern media, Benjamin saw a more complex set of images and objects. He does not deny such constraints: “But it is precisely the purpose of the public opinion generated by the press to make the public incapable of judging…” (361) Rather, he tries to think them dialectically, as also implicated in their own overcoming. Even a limited and limiting media cannot help pointing outside itself, while at the same time containing the trace of its own limits.
Thus, in thinking about Mickey Mouse cartoons, Benjamin remarks that “In these films, mankind makes preparations to survive civilization.” (388) Disorder lurks just beyond the home, encouraging the viewer to return to the familiar. On the other hand, cinema can be a space in which the domestic environment can become visible and relatable to other spaces. “The cinema then exploded this entire prison-world with the dynamite of its fractions of a second, so that now we can take extended journeys of adventure between their widely scattered ruins.” (329)
The figure of the ruin in Benjamin goes back to his study of The Origin of German Tragic Drama, his habilitation thesis (which he withdrew in the face of certain rejection). There the ruin is connected to allegory. “Allegories are, in the realm of thought, what ruins are in the realm of things.” (180) Allegory, in turn, implies that “Any person, any thing, any relationship can mean absolutely anything else. With this possibility, an annihilating but just verdict is pronounced on the profane world.” (175) The allegorical is central to Benjamin’s whole method (and taken up by many, from Jameson to Alex Galloway). “Through allegorical observation, then, the profane world is both elevated in rank and devalued.” (175)
Benjamin saw the baroque rather than the romantic as a worthy counterpoint to classicism, which had no sense of the fragmentary and disintegrating quality of the sensuous world. Nature appears to the baroque as over-ripeness and decay, an eternal transience. It is the classical ideal of eternal, pure and absolute forms or ideas in negative. From there, he removed the ideal double. It may creep back in, at least among some interpreters, at various moments when Benjamin evokes the messianic, but the contemporary reader is encouraged to complete the struggle Benjamin was having with his various inheritances.
Historical thought and action is about seizing the fragment of the past that opens towards the present and might provide leverage towards a future in which it can never be restored as a part of a whole. Benjamin: “structure and detail are always historically charged.” (184) And they are never going to coincide in an integrated totality, either as a matter of aesthetics, or as a matter of historical process.
Allegory is also connected to the dream. On the other side of the thing or the image is not its ideal form but the swarming multiplicity of what it may mean or become. This is where the critic, like the poet, sets up shop, “in order to blaze a way into the heart of things abolished or superseded, to decipher the contours of the banal as rebus…” (237)
The dream was all the rage in the early 20th century, as Aragon notes in Wave of Dreams. Benjamin refunctioned this surrealist obsession. Benjamin was rather more interested in the dreams of objects than of subjects. “The side which things turn towards the dream is kitsch.” (236) He met the kitsch products of the design and culture industries with curiosity rather than distaste or alarm. “Art teaches us to see into things. Folk art and kitsch allow us to see outward from within things.” (255)
Benjamin has a genius for using the energies of the obsolete. But one has to ask if the somewhat cult-like status he now enjoys is something of a betrayal of the critical role he thought the obsolete materials of the past could play in the present.
After discussing him with my students, we came to the conclusion that one could think of, and use, all of Benjamin’s methods as ways of detecting the historical unconscious working through the tensions within cultural artifacts. Benjamin can be a series of lessons in which artifacts to look at, and how to look. One can look for the fragment of the past that speaks to the present. One can look within the photograph for the optical unconscious at work. One can look at obsolete forms, where the tension between past and dreamt future is laid bare. One can look at avant-gardes, which might anticipate where the blockage is in the incomplete work of history. One can look at the low or the kitsch, where certain dream-images are passed along in a different manner to fine art.
Our other thought was that one thing that seems to connect Benjamin to the present even more than the content of his writing is the precarity of his situation while writing it. Like Baudelaire and the bohemian flaneur, his was, in contemporary terms, a ‘gig economy’ existence of freelance work and permanent exclusion from security. This precarity seemed to wobble on the precipice of an even greater, and more ostensibly political one — the rise of fascism. Today, the precarity of so many students, artists, traders in new information — the hacker class as I call it — seems to wobble on the precipice of an ecological precarity. If in Benjamin’s day it was the books that were set on fire, now it is the trees.
The thing about an interface is that when it is working smoothly you hardly notice it is there at all. Rather like ideology, really. Perhaps in some peculiar way it is ideology. That might be one of the starting points of Alexander Galloway’s book, The Interface Effect (Polity 2012). Like Alberto Toscano and Jeff Kinkle in Cartographies of the Absolute (Zer0 Books 2015), Galloway revives, and revises, Fredric Jameson’s idea of cognitive mapping, which might in shorthand be described as a way of tracing how the totality of social relations in our commodified world show up, in spite of themselves, in a particular work of literature, art or media.
Take the tv show, 24. In what sense could one say that the show is ‘political’? It certainly appears so in a ‘red state’ sort of way. The Jack Bauer character commits all sorts of crimes, including torture, in the name of ‘national security.’ But perhaps there’s more to it. Galloway draws attention to how certain formal properties of narrative, editing and so forth might help us see ‘politics’ at work in 24 in other ways.
24 is, curiously, a show about the totality, but in a rather reactionary way. Characters are connected to something much greater than their petty interests, but that thing is national security, whose overriding ethical imperative justifies any action. This is of course much the same ‘moral relativism’ of which both the conservative right and the liberal center accused communists and then postmodernists.
The hero, Jack Bauer, is a kind of hacker, circumventing the protocols of both technologies and institutions. Everything is about informatics weapons. Interrogation is about extracting information. “The body is a database, torture a query algorithm.” (112) Time is always pressing, and so short-cuts and hacks are always justified. The editing often favors ‘windowing’, where the screen breaks into separate panels showing different simultaneous events, cutting across the older logic of montage as succession.
The show’s narrative runs on a sort of networked instantaneity. Characters in different places are all connected and work against the same ever-ticking clock. Characters have no interiority, no communal life. They are on the job (almost) 24/7, like perfect postfordist workers, and like them their work is under constant surveillance. There is no domestic space. They have nothing but their jobs, and as Franco Berardi’s work also shows, a heightened ownership of their labor is their only source of spiritual achievement. “Being alive and being on the clock are now essentially synonymous.” (109) What was prefigured in modern works like Kenneth Fearing’s The Big Clock is now a total world.
But Galloway takes a step back and looks at a broader question of form and its relation to the content of the show. The twenty-four hour-long episodes in a season of 24 are supposed to be twenty-four consecutive hours, but there’s actually only 16.8 hours of television. The show makes no reference to the roughly 30% of the viewer’s time spent watching the ads. As Dallas Smythe and later Sut Jhally have argued, watching tv is more or less ‘work,’ and we might now add, a precursor form of the vast amount of more detailed non-labor we all perform on all kinds of screens.
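The figures in this paragraph can be quickly verified (assuming the standard 42 minutes of actual program per broadcast ‘hour’ of US network television, which is what yields Galloway’s 16.8 hours):

```latex
24 \times 42\,\text{min} = 1008\,\text{min} = 16.8\,\text{h},
\qquad
\frac{24\,\text{h} - 16.8\,\text{h}}{24\,\text{h}} = \frac{7.2}{24} = 0.3 = 30\%.
```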
Galloway: “24 is political because the show embodies in its formal technique the essential grammar of the control society, dominated as it is by specific and information logics.” (119) One might add that it is probably watched now in a specific way as well, by viewers checking their text messages or with Facebook open on their laptops while it plays on the big screen. It has to compete with all our other interfaces.
How then can the interface be a site where the larger historical and political forces can be detected playing themselves out as they articulate individual experiences and sensibilities into that larger world? How is the one transformed into the other, as a kind of parallel world, both attuned and blind to what is beyond it? What is the dialectic between culture and history? This might be what Fredric Jameson called allegory. For Galloway, allegory today takes the specific form of an interface, and even more specifically of the workings of an intraface, which might be described as the relation between the center and the edge within the interface itself.
Culture is history in representational form, as social life as a whole cannot be expressed directly (to say nothing of social-natural metabolic life). Culture is if anything not a representation of social life per se, but of the impossibility of its representation. Hence one might pay as much attention to the blind spots of an instance of cultural work – like the missing 30% of 24, where the ads run.
Further, might there be a certain homology between the mode of production at work at large in history and the specific way in which the form of a cultural work does its work? This was perhaps what Jameson was proposing in his famous essay on the ‘postmodern.’ But these times are not post anything: they just are what they are. If this is, in a term Galloway borrows from Deleuze, a society of control, then perhaps the interface is a kind of control allegory.
I can remember a time when we still called all this new media. It is an absurd term now, especially for students whose whole conscious life exists pretty much within the era of the internet and increasingly also of the web and the cellphone. I can also remember a time when the potentials of ‘new media’ appeared, and in some ways really were, quite open. That past is now often read as a kind of teleology where it was inevitable that it would end up monopolized by giant corporations profiting off non-labor in a society of control and surveillance. But this is selective memory. There were once avant-gardes who tried, and failed, to make it otherwise. That they – or we – failed is no reason to accept the official Silicon Valley ideologies of history.
I mention this because Galloway starts The Interface Effect by recalling in passing the avant-garde in listserv form that was nettime.org and rhizome.org – but without flagging his own role in any of this. His work with the Radical Software Group and rhizome.org is not part of the story. That world appears here just as the place of the first reception for that pioneering attempt to describe it, Lev Manovich’s The Language of New Media (MIT Press, 2002).
Manovich came at this topic from a very different place from either the techno-boosters of Silicon Valley’s California ideology or the politico-media avant-gardes of Western Europe. His own statement about this, which Galloway quotes, turned out to be prescient: “As a post-communist subject, I cannot but see Internet as a communal apartment of Stalin era: no privacy, everybody spies on everybody else, always present line for common areas such as the toilet or the kitchen.” How ironic, now that Edward Snowden, who showed that this is where we had ended up, had to seek asylum of sorts in Putin’s post-Soviet Russia.
As Galloway reads him, Manovich is a modernist, whose attention is drawn to the formal principles of a medium. Where new media is concerned, he will find five: numeric representation, modularity, automation, variability and transcoding. Emphasis shifts from the linear sequence to the database from which it draws, or from syntagm to paradigm. Ironically, the roots of digital media for Manovich are in cinema, and Dziga Vertov is his key example. Cinema, with its standard frames, is already in a sense ‘digital’, and the film editor’s trim-bins are already a ‘database’. Vertov was, after all, not a director so much as an editor.
Manovich’s perception of the roots of ‘new media’ in Vertov is still something of a scandal for the October journal crowd, who rather hilariously think it makes more sense to see today’s Ivy League art history program as the true inheritor of Vertov. Manovich refused that sort of political-historical approach to avant-garde aesthetics, which in a more lively form could be found in, say Brian Holmes or Geert Lovink. Manovich was also at some remove from those who want to reduce new media to hardware, such as Friedrich Kittler or Wendy Chun, or those who focused more on networks than on the computer itself, such as Eugene Thacker or Tiziana Terranova.
As a post-Soviet citizen, Manovich was also wary of politicized aesthetics that gloss over questions of form as well as those who – in the spirit of Godard – want to treat formalism as inherently radical. Interestingly, Galloway will take a – somewhat different – formalism and bring it back to political-historical questions, as one of those heroic Jameson-style moves in which quite opposed methods are reconciled within a larger whole.
Galloway’s distinctive, and subtle, argument is that digital media are not so much a new ontology as a simulation of one. The word ‘ontology’ is a slippery one here, and perhaps best taken in a naïve sense of ‘what is.’ A medium such as cinema has a certain material relation to what is, or rather what was. The pro-filmic event ends up as a sort of trace in the film, or, put the other way around, the film is an index of a past event. Here it is not the resemblance but the sequence of events that make film a kind of sign of the real, in much the same way that smoke is an indexical sign for fire.
Galloway: “Today all media are a question of synecdoche (scaling a part for the whole), not indexicality (pointing from here to there).” (9) Galloway doesn’t draw on Benjamin here, but one could think Benjamin’s view of cinema as a kind of organizing of indexical signs from perceptual scales and tempos that can exceed the human – signs pointing to a bigger world. It takes a certain masochistic posture to even endure it, and not quite in the way Laura Mulvey might have thought one of cinema-viewing’s modes as masochistic. For any viewer it is a sort of giving over of perceptual power to a great machine.
To the extent that it sharpens often subtle, continuous changes into the hard edges of a binary language, let’s say that, by contrast, digital media is sadistic rather than masochistic. “The world no longer indicates to us what it is. We indicate ourselves to it, and in so doing the world materializes in our image.” This media is not about indexes of a world, but about the profiles of its users.
Galloway does not want to go too far down this path, however. His is a theory not of media but of mediation, which is to say not a theory of a new class of objects but of a new class of relations: mediation, allegory, interface. Instead of beginning and ending from technical media, we are dealing instead with their actions: storing, transmitting, processing. Having learned his anti-essentialism – from Donna Haraway among others – he is careful not to seek essences for either objects or subjects.
A computer is not an ontology, then, but neither is it a metaphysics, in that larger sense of not just what is, but why and how what is, is. Most curiously, Galloway proposes that a computer is actually a simulation of a metaphysical arrangement, not a metaphysical arrangement: “… the computer does not remediate other physical media, it remediates metaphysics itself.” (20)
Here Galloway gives a rather abbreviated example, which I will flesh out a bit more than he does, as best I can. That example is object oriented programming. “The metaphysico-Platonic logic of object-oriented systems is awe inspiring, particularly the way in which classes (forms) define objects (instantiated things): classes are programmer-defined templates, they are (usually) static and state in abstract terms how objects define data types and process data; objects are instances of classes, they are created in the image of a class, they persist for finite amounts of time and are eventually destroyed. On the one hand an idea, on the other a body. On the one hand an essence, on the other an instance. On the one hand the ontological, on the other the ontical.” (21)
One could say a bit more about this, and about how the ‘ontology’ (in the information science sense) of object oriented programming, or of any other school of it, is indeed an ontology in a philosophical sense, or something like it. Object oriented programming (oop) is a programming paradigm based on objects that contain data and procedures. Most flavors of oop are class-based, where objects are instances of classes. Classes define data formats and procedures for the objects of a given class. These classes can be arranged hierarchically, where subordinate classes inherit from the ‘parent’ class. Objects then interact with each other as more or less black boxes. In some versions of oop, those boxes can not only hide their code, they can lock it away.
Among other things, this makes code more modular, and enables a division of labor among coders. Less charitably, it means that half-assed coders working on big projects can’t fuck too much of it up beyond the particular part they work on. Moreover, oop offers the ability to mask this division of labor and its history. The structure of the software enables a social reality where code can be written in California or Bangalore.
A commercially popular programming language that is substantially oop based is Java, although there are many others. Such languages encourage the reuse of functional bits of code, but add a heavy burden of unnecessary complexity and often lack transparency. It is an ontology that sees the world as collections of things interacting with things, but where the things share inputs and outputs only. How they do so is controlled at a higher level. Such is its ‘metaphysico-Platonic logic’, as Galloway calls it, although to me it is sounding rather more like Leibniz.
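The class-and-instance pattern described above – a ‘parent’ class as template, a subordinate class that inherits from it, objects as finite instances interacting only through their interfaces – can be sketched in a few lines of Java. The class names here (Account, SavingsAccount) are purely illustrative, not drawn from any real codebase:

```java
class Account {                        // the 'parent' class: a programmer-defined template
    private long balanceCents = 0;     // hidden state: the object as black box
    void deposit(long cents) { balanceCents += cents; }
    long balance() { return balanceCents; }
}

class SavingsAccount extends Account { // a subordinate class inherits from the parent
    void addInterest() {               // new behavior layered on top, reaching the
        deposit(balance() / 100);      // hidden state only via the parent's methods (1%)
    }
}

public class Demo {
    public static void main(String[] args) {
        SavingsAccount a = new SavingsAccount(); // an object: a finite instance of a class
        a.deposit(10_000);
        a.addInterest();
        System.out.println(a.balance());         // the instance is used through its interface
        a = null;                                // ...and is eventually destroyed
    }
}
```

The point of the sketch is the division Galloway describes: `Account` is the static, abstract form; each `SavingsAccount` object is a created, finite, eventually-destroyed instance of it, and its inner state is invisible to everything that uses it.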
The structure of software – its ‘ontology’ in the information science sense – makes possible a larger social reality. But perhaps not in the same way as the media of old. Cinema was the defining medium of the 20th century; the game-like interfaces of our own time are something else (as I proposed in Gamer Theory). The interface itself still looks like a screen, so it is possible to imagine it still works the same way. Galloway: “It does not facilitate or make reference to an arrangement of being, it remediates the very conditions of being itself.” (21) The computer simulates an ontological plane with logical relations: “The computer instantiates a practice not a presence, an effect not an object.” (22)
An ethic not an ontology – although not necessarily an ‘ethical’ one. “The machine is an ethic because it is premised on the notion that objects are subject to definition and manipulation according to a set of principles for action. The matter at hand is not that of coming to know a world, but rather that of how specific, abstract definitions are executed to form a world.” (23) (I would rather think this as a different kind of index, of the way formal logics can organize electrical conductivity, for example.)
“The computer is not an object, or a creator of objects, it is a process or active threshold mediating between two states.”(23) Or more than two – there can be many layers. (Benjamin Bratton’s stack). “The catoptrics of the society of the spectacle is now the dioptrics of the society of control.” (25) Or: we no longer have mirrors, we have lenses. Despite such a fundamental reorganization of the world, Galloway insists on the enduring usefulness of Marx (and Freud) and of their respective depth models of interpretation, which attempt to ferret out how something can appear as its opposite.
Galloway tips the depth model sideways, and considers the interface in terms of centers and edges, as “… the edges of art always make reference to the medium itself.” (33) This center-edge relation Galloway calls the intraface. It is a zone of indecision between center and edge, or what Roland Barthes called the studium and the punctum. Making an intraface internally consistent requires a sort of neurotic repression of the problem of its edge. On the other hand, signaling the real presence of an edge to the intraface ends up making the work itself incoherent and schizophrenic, what Maurice Blanchot called the unworkable.
In cinema the great artists of the neurotically coherent and schizophrenically incoherent intrafaces respectively might be Hitchcock and Godard. The choice appears to be one of a coherent aesthetic of believing in the interface, but not enacting it (Hitchcock); and an incoherent aesthetic of enacting the interface, but not believing in it (Godard).
But Galloway is wary of assuming that only the second kind of intraface is a ‘political’ one. The multiplayer computer game World of Warcraft is as much an example of a schizophrenic intraface as any Godard movie. “At root, the game is not simply a fantasy landscape of dragons and epic weapons but a factory floor, an information-age sweat-shop, custom tailored in every detail for cooperative ludic labor.” (44)
In a classic Jameson move, Galloway doubles the binary of coherent vs incoherent aesthetics with a second: coherent vs incoherent politics, to make a four-fold scheme. The coherent aesthetics + coherent politics quadrant is probably a rare one now. Galloway doesn’t mention architecture here, but Le Corbusier would be a great example, where a new and clarified aesthetic geometry was supposed to be the representative form for the modern ruling class.
The quadrant of incoherent aesthetics + coherent politics is a lively one, giving us Bertolt Brecht, Alain Badiou, Jean-Luc Godard, or the punk band Fugazi. All in very different ways combine a self-revealing or self-annihilating aesthetic with a fixed political aspiration, be it communist or ‘straight edge.’ The World of Warcraft interface might fit here too, with its schizo graphics interfacing with an order whose politics we shall come to later.
Then there’s the coherent aesthetic + incoherent politics quadrant, which for Galloway means art for art’s sake, or a prioritizing of the aesthetic over the political, giving us the otherwise rather different cinema of Billy Wilder and Alfred Hitchcock, but also the aesthetics of Gilles Deleuze, and I would add Oscar Wilde, and all those with nothing to declare but their ‘genius.’
The most interesting quadrant combines incoherent aesthetics with incoherent politics. This is the ‘dirty’ regime, of the inhuman, of nihilism, of the “the negation of the negation.” Galloway will also say the interface of truth. Here lurk Nietzsche and Georges Bataille, and I would add the Situationists or the Jean-François Lyotard of Libidinal Economy. Galloway will by the end of the book place his own approach here, but to trouble that a bit, let me also point out that here lie the strategies of Nick Land and his epigones. Or, more interestingly – Beatriz Preciado’s Testo Junkie.
So in short there are four modes of aesthetic-political interface. The first is ideological, where art and justice are coterminous (the dominant mode). The second is ethical, which must destroy art in the service of justice (a privileged mode). The third is poetic, where one must banish justice in the service of art (a tolerated mode). The last is nihilist, and wants the destruction of all existing modes of art and justice (which for Galloway is a banished mode – unless one sees it – in the spirit of Nick Land – as rather the mode of capitalist deterritorialization itself, in which case it is actually dominant and the new ideological mode. Its avatar would perhaps be Joseph Schumpeter).
Galloway thinks one can map the times as a shift from the ideological to the ethical mode, and a generalized “decline in ideological efficiency.” (51) I suspect it may rather be a shift from the ideological to the nihilist, but which cannot declare itself, leading to a redoubling of efforts to produce viable ideological modes despite their waning effect. (The sub rosa popularity of Nick Land finds its explanation here as delicious and desirable wound and symptom).
Either way, the mechanism – in a quite literal sense – that produces this effect might be the transformation of the interface itself by computing, producing as it does an imaginary relation to ideological conditions, where ideology itself is modeled as software. The computer interface is an incoherent aesthetic that is either in the service of a coherent politics (Galloway’s reading), or, which wants to appear as such but is actually in the service of an incoherent politics that it cannot quite avow (my reading).
Hence where Galloway sees the present aesthetico-politics of the interface as oscillating between regimes 2 and 3, I think it is more about regimes 1 and 4, and entails a devaluing of the aesthetic-political compromises of the Godards and Hitchcocks, or Badious and Deleuzes of this world. I think we now have a short-circuit between ideology and nihilism that accepts no compromise formations. Galloway usefully focuses attention on the intraface as the surface between the problems of aesthetic form and the political-historical totality of which it is a part.
The interface is an allegorical device for Galloway, a concept that is related to, but not quite the same as Wendy Hui Kyong Chun’s claim that “software is a functional analog to ideology.” Certainly both writers have zeroed-in on a crucial point. “Today the ‘culture industry’ takes on a whole new meaning, for inside software the ‘cultural’ and the ‘industrial’ are coterminous.” (59)
The point where Galloway and Chun differ is that he does not follow her and Kittler in reducing software to hardware. Kittler’s view is part of a whole conceptual field that may be produced by the interface effect itself. There is a kind of segregation where data are supposedly immaterial ideas and the computer is a machine from a world called ‘technology.’ The former appears as a sort of idealist residue reducible to the latter in a sort of base-trumps-superstructure move.
This might correct for certain idealist deviations where the ‘immaterial’ or the ‘algorithm’ acquire mysterious powers of their own without reference to the physical logic-gates, memory cores, not to mention energy sources that actually make computers compute. However, it then runs the risk of treating data and information as somehow less real and less ‘material’ than matter and energy. Hence, as usual, a philosophical ‘materialism’ reproduces the idealism it seeks to oppose.
I think Galloway wants to accord a little more ‘materiality’ to data and information than that, although it is not a topic the book tackles directly. But this is a theory not of media but of mediation, or of action, process, and event. Galloway also has little to say about labor, but that might be a useful term here too, if one can separate it from assumptions about it being something only humans do. A theory of mediation might also be a theory of information labor. An interface would then be a site of labor, where a particular, concrete act meets social, abstract labor in its totality.
Software is not quite reducible to hardware. I think we can use a formula here from Raymond Williams: hardware sets limits on what software can do, but does not determine what it does in any stronger sense. Software is not then ‘ideological’, but something a bit more complicated. For Galloway, software is not just a vehicle for ideology, “instead, the ideological contradictions of technical transcoding and fetishistic abstraction are enacted and ‘resolved’ within the very form of software itself.” (61)
Of course not all interfaces are for humans. Actually most are probably now interfaces between machines and other machines. Software is a machinic turn for ideology, an interface which is mostly about the machinic. Here Galloway also takes his distance from those who, like Katherine Hayles, see code as something like an illocutionary speech act. Just as natural languages require a social setting, code requires a technical setting. But to broaden the context and see code as a subset of enunciation (a key term for Lazzarato) is still to anthropomorphize it too much. I am still rather fond of a term Galloway has used before – allegorithm – an allegory that takes an algorithmic form, although in this book he has dropped it.
What does it mean to visualize data? What is data? In simple terms, maybe data are ‘the givens’, whereas information might mean to give (in turn) some form to what is given. Data is empirical; information is aesthetic. But data visualization mostly pictures its own rules of representation. Galloway’s example here is the visualization of the internet itself, of which there are many examples, all of which look pretty much the same. “Data have no necessary information.” (83) But the information that is applied to it seems over and over to be the same, a sort of hub-and-spoke cloud aesthetic, which draws connections but leaves out protocols, labor, or power.
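The claim that “data have no necessary information” can be made concrete: the same givens admit many forms. A toy sketch in Python, with a hypothetical three-node edge list (not from the book), rendering one dataset two different ways:

```python
# The same 'given' data rendered two ways: as a hub-and-spoke listing,
# and as an adjacency matrix. Same data, different information.
edges = [("hub", "a"), ("hub", "b"), ("a", "b")]

# Form 1: hub-and-spoke listing
nodes = sorted({n for e in edges for n in e})
for n in nodes:
    links = [dst for src, dst in edges if src == n]
    print(n, "->", ", ".join(links) if links else "(leaf)")

# Form 2: adjacency matrix
index = {n: i for i, n in enumerate(nodes)}
matrix = [[0] * len(nodes) for _ in nodes]
for src, dst in edges:
    matrix[index[src]][index[dst]] = 1
for row in matrix:
    print(row)
```

Neither form is dictated by the edge list itself; the choice of form is where the aesthetic – and what it leaves out – comes in.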
Maybe one form of what Jodi Dean calls the “decline in symbolic efficiency” is a parallel increase in aesthetic information that goes hand in hand with a decline in information aesthetics. There’s no necessary visual form for data, but the forms it gets seem to come from a small number of presets.
Galloway thinks this through Jacques Rancière’s “distribution of the sensible.” Once upon a time there were given forms for representing particular things in particular situations. But after that comes a sort of sublime regime, which tries to record the trace of the unrepresentable, which succeeds the old distribution, as a result of breakdown between subjects of art and forms of representation. The nihilism of modernity actually stems from realism, which levels the representational system, for in realism everything is equally representable.
Realism can even represent the Shoah, and its representability is actually the problem for there is nothing specific about the language in which it is represented, which could just as easily represent a tea party. The problem might be not so much about the representability of the Shoah as that its representation seems to have negligible consequences. Representation has lost ethical power.
But perhaps Rancière was speaking only of the former society of the spectacle, not the current society of control. Galloway: “One of the key consequences of the control society is that we have moved from a condition in which singular machines produce proliferations of images, into a condition in which multitudes of machines produce singular images.” (91) We have no adequate pictures of the control society. Its unrepresentability is connected to what the mode of production itself makes visible and invisible.
Galloway: “the point of unrepresentability is the point of power. And the point of power today is not in the image. The point of power today resides in networks, computers, algorithms, information and data.” (92) Like Kinkle and Toscano, Galloway cites Mark Lombardi’s work, and adds that of the Bureau d’études as examples of what Brian Holmes calls a counter cartography of information. One that might actually restore questions of power and protocol to images of ‘networks.’ But these are still limited to certain affordances of the map-form as interface.
So we have no visual language yet for the control society. Although we have them for some of its effects. Galloway does not mention climate modeling, but to me that is surely the key form of
data -> information -> visualization
problem to which to attend in the Anthropocene. As I tried to show in Molecular Red, the data -> information interface is actually quite complicated. In climate science each co-produces the other. Data are not empirical in a philosophical sense, but they are wedded to specific material circumstances of which they are unwitting indexes.
One could also think about the problems of visualizing the results, particularly for lay viewers. I see a lot of maps of the existing continents with data on rising temperatures; and a lot of maps of rising seas making new continents which omit the climate projections. Imagine being at a given GPS coordinate 60 years from now where neither the land form nor the climate were familiar. How could one visualize such a terra incognita? Most visualizations hold one variable constant to help understand the other. In Gamer Theory I showed how the SimEarth game actually made some progress on this – but then that was commercially a failed game.
There are lots of visualizations of networks and of climate change – curious how there are few visualizations which show both at the same time. And what they tend to leave out is agency. Both social labor and the relations of production go unpictured. Images of today’s social labor often land on images of elsewhere. Galloway mentions the Chinese gold farmers, those semi-real, semi-mythical creatures (under)paid to dig up items worth money in games like World of Warcraft. Another might be the call center worker, whom we might more often hear but never see. These might be the allegorical figures of labor today.
For Galloway, we are all Chinese gold farmers, in the sense that all computerized and networked activity is connected to value-extraction. One might add that we are all call center workers, in that we are all responding to demands placed on us by a network and to which we are obliged to respond. There is of course a massive iniquity in how such labor (and non-labor) is rewarded, but all of it may increasingly take similar forms.
All labor and even non-labor becomes abstract and valorized, but race is a much more stubborn category, and a good example of how software simulates ideology. In a game like World of Warcraft, class is figured as something temporary. By grinding away at hard work you can improve your ‘position’. But race is basic and ineradicable. The ‘races’ are all fantasy types, rather than ‘real’ ones, but perhaps it is only in fantasy form that race can be accepted and become matter-of-fact. Control society may be one that even encourages a certain relentless tagging and identifying through race and other markers of difference – all the better to connect you at a fine-grained level of labor and consumption.
The answer to Gayatri Spivak’s question – can the subaltern speak? – is that the subaltern not only speaks but has to speak, even if restricted to certain stereotypical scripts. “The subaltern speaks and somewhere an algorithm listens.” (137) One version of this would be Lisa Nakamura’s cyber-types. In an era when difference is the very thing that what Galloway calls ludic capitalism feasts on, it is tempting to turn, as Badiou and Zizek do, back to the universal. But the questions of what the universal erases or suppresses are not addressed in this turn, just ignored.
Galloway advocates instead a politics of subtraction and disappearance: to be neither the universal nor differentiated subject, but rather the generic one of whatever-being. I’m not entirely convinced by this metaphysical-political turn, at least not yet. It is striking to me that most of The Interface Effect is written under the sign of Fredric Jameson, for whom politics is not a separate domain, but is itself an allegory for the history of capitalism itself. And yet the concluding remarks are built much more on the Jacobin approach to the political of the post-Althusserians such as Badiou, for whom the political is an autonomous realm against the merely economic.
From that Jacobin-political-philosophical point of view, the economic itself starts to become a bit reified. Hence Galloway associates the logic of the game World of Warcraft with the economics of capital itself, because the game simulates a world in which resources are scarce and quantifiable. But surely any mode of production has to quantify. Certainly pre-capitalist ones did. I don’t think it is entirely helpful to associate use value only with the qualitative and uncountable, and to equate exchange value with quantification tout court. One of the lessons of climate science, and the earth science of which it is a subset, is that one of the necessary ways in which one critiques exchange value is by showing that it attempts to quantify imaginary values. It is actually the ‘qualities’ of exchange value that are the problem, not its math.
So while Galloway and I agree on a lot of things, there’s also points of interesting divergence. Galloway: “The virtual (or the new, the next) is no longer the site of emancipation… No politics can be derived today from a theory of the new.” (138) I would agree that the virtual became a way in which the theological entered critical theory, once again, through a back door. I tried to correct for that, between A Hacker Manifesto and Spectacle of Disintegration, through a reading of Debord’s concept of strategy, which I think tried to mediate between pure, calculative models and purely romantic, qualitative ones. It was also a way of thinking with a keen sense of the actual affordances of situations rather than a hankering for mystical ‘events’.
But I think there’s a problem with Galloway’s attempt to have done with an historicist (Jamesonian) mode of thought in favor of a spatialized and Jacobin or ‘political’ one. To try to supersede the modifier of the ‘post’ with that of the ‘non’ is still in spite of itself a temporal succession. I think rather that we need to think about new past-present configurations. It’s a question of going back into the database of the archive and understanding it not as a montage of successive theories but as a field of possible paths and forks – and selecting other (but not ‘new’) ones.
Galloway is quite right to insist that “Another world is not possible.” (139) But I read this more through what the natural sciences insist are the parameters of action than through what philosophy thinks are the parameters for thought. I do agree that we need to take our leave from consumerist models of difference and the demand to always tag and produce ourselves for the benefit of ludic capitalism. In Spectacle of Disintegration I called the taking-leave the language of discretion. But I dissented there somewhat from the more metaphysical cast Agamben gives this, and pulled back to the more earthy and specific form of Alice Becker-Ho’s studies of Romani language practices – that scandal of a language that refuses to belong to a nation.
I think there’s a bit of a danger in opting for the fourth quadrant of the political-aesthetic landscape. Incoherence in politics and aesthetics is as ambivalent as all the others in its implications. Sure it is partly this: “A harbinger of the truth regime, the whatever dissolves into the common, effacing representational aesthetics and representational politics alike.” (142) But it is also the corporate-nihilism of Joseph Schumpeter. I think it more consistent with Galloway’s actual thinking here to treat all four quadrants of the aesthetic-political interface as ambiguous and ambivalent rather than exempt the fourth.
Ludic capitalism is on the one hand a time of that playfulness which Schiller and Huizinga thought key to the social whole and its history, respectively. On the other, it is an era of cybernetic control. Poetics meets design in a “juridico-geometric sublime” (29) whose star ideologues are poet-designers like Steve Jobs. The trick is to denaturalize the surfaces of this brand of capitalist realism, which wants to appear as a coherent regime of ideology, but which is actually one of the most perfect nihilisms – and not in the redeemable sense.
I’m not entirely sure that the good nihilism of withdrawal can be entirely quarantined from the bad one of the celebration of naked, unjust power. It’s a question that needs rather more attention. Alex Galloway, Eugene Thacker and I may be in accord as ‘nihilists’ who refuse a certain legislative power to philosophy. As I see it, Galloway thinks there’s a way to ‘hack’ philosophy, to turn it against itself from within. I think my approach is rather to détourn it, to see it as a metaphor machine for producing both connections and disconnections that can move across the intellectual division of labor, that can find ways knowledge can be comradely, and relate to itself and the world other than via exchange value. From the latter point of view, these might just be component parts of the same project.
Software can now simulate the spectacle so effectively that it is able to insinuate another logic into it, the simulation of the spectacle itself, but under which lies an abyss of non-knowledge. But I am not sure this was an inevitable outcome. As with Jodi Dean, I find Galloway rather erases the struggles around what ‘new media’ would become, and now retrospectively sees the outcome as, if not an essence, then a given. This is partly what makes me nervous about a language of seceding or withdrawing. One of the great political myths is of the outsider-subject, untouched by power. All such positions, be it the worker, the woman, the subaltern, can now be taken as fully subsumed. But I think that means one works from an inside – for example via Haraway’s figure of the cyborg – rather than looking for an exit back out again.
What if one took up the study of computation not just as a form of reason, but as a form of rhetoric? That might be one of the key questions animating media studies today. But it is not enough to simply describe the surface effects of computational media as in some sense creating or reproducing rhetorical forms. One would need to understand something of the genesis and forms of software and hardware themselves. Then one might have something to say not just about software as rhetoric, but software as ideology, computation as culture — as reproducing or even producing some limited and limiting frame for acting in and on the world.
Is the relation between the analog and the digital itself analog or digital? That might be one way of thinking the relation between the work of Alexander Galloway and Wendy Hui Kyong Chun. I wrote elsewhere about Galloway’s notion of software as a simulation of ideology. Here I take up Chun’s notion of software as an analogy for ideology, through a reading of her book Programmed Visions: Software and Memory (MIT Press, 2011).
Software as analogy is a strange thing. It illustrates an unknown through an unknowable. It participates in, and embodies, some strange properties of information. Chun: “digital information has divorced tangibility from permanence.” (5) Or as I put it in A Hacker Manifesto, the relation between information and its material support becomes arbitrary. Chun: “Software as thing has led to all ‘information’ as thing.” (6)
The history of the reification of information passes through the history of the production of software as a separate object of a distinct labor process and form of property. Chun puts this in more Foucauldian terms: “the remarkable process by which software was transformed from a service in time to a product, the hardening of relations into a thing, the externalization of information from the self, coincides with and embodies larger changes within what Michel Foucault has called governmentality.” (6)
Software coincides with a particular mode of governmentality, which Chun follows Foucault in calling neoliberal. I’m not entirely sure ‘neoliberal’ holds much water as a concept, but the general distinction would be that in liberalism, the state has to be kept out of the market, whereas in neoliberalism, the market becomes the model for the state. In both, there’s no sovereign power governing from above so much as a governmentality that produces self-activating subjects whose ‘free’ actions can’t be known in advance. Producing such free agents requires a management of populations, a practice of biopower.
Such might be a simplified explanation of the standard model. What Chun adds is the role of computing in the management of populations and the cultivation of individuals as ‘human capital.’ The neoliberal subject feels mastery and ‘empowerment’ via interfaces to computing which inform the user about past events and possible futures, becoming, in effect, the future itself.
The ‘source’ of this mode of governmentality is source code itself. Code becomes logos: in the beginning was the code. Code becomes fetish. On some level the user knows code does not work magically on its own but merely controls a machine, but the user acts as if code had such a power. Neither the work of the machine nor the labor of humans figures much at all.
Code as logos organizes the past as stored data and presents it via an interface as the means for units of human capital to place their bets. “Software as thing is inseparable from the externalization of memory, from the dream and nightmare of an all encompassing archive that constantly regenerates and degenerates, that beckons us forward and disappears before our very eyes.” (11) As Berardi and others have noted, this is not the tragedy of alienation so much as what Baudrillard called the ecstasy of communication.
Software is a crucial component in producing the appearance of transparency, where the user can manage her or his own data and imagine they have ‘topsight’ over all the variables relevant to their investment decisions about their own human capital. Oddly, this visibility is produced by something invisible, that hides its workings. Hence computing becomes a metaphor for everything we believe is invisible yet generates visible effects. The economy, nature, the cosmos, love, are all figured as black boxes that can be known by the data visible on their interfaces.
The interface appears as a device of some kind of ‘cognitive mapping’, although not the kind Fredric Jameson had in mind, which would be an aesthetic intuition of the totality of capitalist social relations. What we get, rather, is a map of a map, of exchange relations among quantifiable units. On the screen of the device, whose workings we don’t know, we see clearly the data about the workings of other things we don’t know. Just as the device seems reducible to the code that makes the data appear, so too must the other systems it models be reducible to the code that makes their data appear.
But this is not so much an attribute of computing in general as a certain historical version of it, where software emerged as a second (and third, and fourth…) order way of presenting the kinds of things the computer could do. Chun: “Software emerged as a thing – as an iterable textual program – through a process of commercialization and commodification that has made code logos: code as source, code as true representation of action, indeed code as conflated with, and substituting for, action.” (18)
One side effect of the rise of software was the fantasy of the all-powerful programmer. I don’t think it is entirely the case that the coder is an ideal neoliberal subject, and not least because of the ambiguity as to whether the coder makes the rules or simply has to follow them. That creation involves rule-breaking is a romantic idea of the aesthetic, not the whole of it.
The very peculiar qualities of information, in part a product of this very technical-scientific trajectory, makes the coder a primary form of an equally peculiar kind of labor. But labor is curiously absent from parts of Chun’s thinking. The figure of the coder as hacker may indeed be largely myth, but it is one that poses questions of agency that don’t typically appear when one thinks through Foucault.
Contra Galloway, Chun does not want to take as given the technical identity of software as means of control with the machine it controls. She wants to keep the materiality of the machine in view at all times. Code isn’t everything, even if that is how code itself gets us to think. “This amplification of the power of source code also dominates critical analyses of code, and the valorization of software as a ‘driving layer’ conceptually constructs software as neatly layered.” (21)
And hence code becomes fetish, as Donna Haraway has also argued. However, this is a strange kind of fetish, not entirely analogous to the religious, commodity, or sexual fetish. Where those fetishes supposedly offer imaginary means of control, code really does control things. One could even reverse the claim here. What if not accepting that code has control was the mark of a fetishism? One where particular objects have to be interposed as talismans of a power relationship that is abstract and invisible?
I think one could sustain this view and still accept much of the nuance of Chun’s very interesting and persuasive readings of key moments and texts in the history of computing. She argues, for instance, that code ought not to be conflated with its execution. One cannot run ‘source’ code itself. It has to be compiled. The relation between source code and machine code is not a mere technical identity. “Source code only becomes a source after the fact.” (24)
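That “source code only becomes a source after the fact” – that one cannot run source code as written, only a compiled form of it – is visible even in an interpreted language. As a minimal sketch (not Chun’s example), Python exposes the intermediate step through its built-in compile():

```python
# Source text is not itself runnable; it must first be compiled into
# a distinct object, which is what the machine actually executes.
source = "result = 2 + 3"

code_obj = compile(source, "<string>", "exec")  # source text -> bytecode
namespace = {}
exec(code_obj, namespace)                       # bytecode -> execution

print(namespace["result"])          # 5
print(type(code_obj).__name__)      # 'code' -- not a string at all
```

The code object is not the text: it is another artifact, produced after the fact, and it is this artifact – not the ‘source’ – that runs.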
Mind you, one could push this even further than Chun does. She grounds source code in machine code and machine code in machine architectures. But these in turn only run if there is an energy ‘source’, and can only exist if manufactured out of often quite rare materials – as Jussi Parikka shows in his Geology of Media. All of which, in this day and age, are subject to forms of computerized command to bring such materials and their labors together. To reduce computers to command, and indeed not just computers but whole political economies, might not be so much an anthropomorphizing of the computer as a recognition that information has become a rather nonhuman thing.
I would argue that the desire to command an unknown, invisible device through an interface, through software in which code appears as source and logos, is at once a way to make sense of neoliberal political-economic opacity and indeed irrationality. But perhaps command itself is not quite so commanding, and only appears as a gesture that restores the subject to itself. Maybe command is not ‘empowering’ of anything but itself. Information has control over both objects and subjects.
Here Chun usefully recalls a moment from the history of computing – the “ENIAC girls.” (29) This key moment in the history of computing had a gendered division of labor, where men worked out the mathematical problem and women had to embody the problem in a series of steps performed by the machine. “One could say that programming became programming and software became software when the command structure shifted from commanding a ‘girl’ to commanding a machine.” (29)
Although Chun does not quite frame it as such, one could see the postwar career of software as the result of struggles over labor. Software removes the need to program every task directly in machine language. Software offers the coder an environment in which to write instructions for the machine, or the user to write problems for the machine to solve. Software appears via an interface that makes the machine invisible but offers instead ways to think about the instructions or the problem in a way more intelligible to the human and more efficient in terms of human abilities and time constraints.
Software obviates the need to write in machine language, making programming a higher-order task, based on mathematical and logical operations rather than machine operations. But it also made programming available as a kind of industrialized labor. Certain tasks could be automated. The routine running of the machine could be separated from the machine’s solution of particular tasks. One could even see it as in part a kind of ‘deskilling.’
The separation of software from hardware also enables the separation of certain programming tasks in software from each other. Hence the rise of structured programming as a way of managing quality and labor discipline when programming becomes an industry. Structured programming enables a division of labor and secures the running of the machine from routine programming tasks. The result might be less efficient from the point of view of organizing machine ‘labor’ but more efficient from the point of view of organizing human labor. Software recruits the machine into the task of managing itself. Structured programming is a step towards object oriented programming, which further hides the machine, and also the interior of other ‘objects’ from the ones with which the programmer is tasked within the division of labor.
As Chun notes, it was Charles Babbage more than Marx who foresaw the industrialization of cognitive tasks and the application of the division of labor to them. Neither foresaw software as a distinct commodity; or (I would add) one that might be the product of a quite distinct kind of labor. More could be said here about the evolution of the private property relation that will enable software to become a thing made by labor rather than a service that merely applies naturally-occurring mathematical relations to the running of machines.
Crucial to Chun’s analysis is the way source code becomes a thing that erases execution from view. It hides the labor of the machine, which becomes something like one of Derrida’s specters. It makes the actions of the human at the machine appear as a powerful relation. “Embedded within the notion of instruction as source and the drive to automate computing – relentlessly haunting them – is a constantly repeated narrative of liberation and empowerment, wizards and (ex)slaves.” (41)
I wonder if this might be a general quality of labor processes, however. A car mechanic does not need to know the complexities of the metallurgy involved in making a modern engine block. She or he just needs to know how to replace the blown gasket. What might be more distinctive is the way that these particular ‘objects’, made of information stored on some random material surface or other, can also be forms of private property, and can be designed in such a way as to render the information in which they traffic also private property. There might be more distinctive features in how the code-form interacts with the property-form than in the code-form alone.
If one viewed the evolution of those forms together as the product of a series of struggles, one might then have a way of explaining the particular contours of today’s devices. Chun: “The history of computing is littered with moments of ‘computer liberation’ that are also moments of greater obfuscation.” (45) This all turns on the question of who is freed from what. But in Chun such things are more the effects of a structure than the result of a struggle or negotiation.
Step by step, the user is freed from not only having to know about her or his machine, but then also from ownership of what runs on the machine, and then from ownership of the data she or he produces on the machine. There’s a question of whether the first kind of ‘liberation’ – from having to know the machine – necessarily leads to the other all on its own, or rather in combination with the conflicts that drove the emergence of a software-driven mode of production and its intellectual property form.
In short: Programmers appeared to become more powerful but more remote from their machines; users appeared to become more powerful but more remote from their machines. The programmer and then the user work not with the materiality of the machine but with its information. Information becomes a thing, perhaps in the sense of a fetish, but perhaps also in the senses of a form of property and an actual power.
But let’s not lose sight of the gendered thread to the argument. Programming is an odd profession, in that at a time when women were making inroads into once male-dominated professions, programming went the other way, becoming more of a male domain. Perhaps that is because it started out as a kind of feminine clerical labor but became – through the intermediary of software – a priestly caste, an engineering and academic profession. Perhaps its male bias is in part an artifact of timing: programming became a profession rather late. I would compare it to the well-known story of how obstetrics pushed the midwives out of the birth-business, masculinizing and professionalizing it, now over a hundred years ago, but then more recently challenged as a male-dominated profession by the re-entry of women as professionals.
My argument would be that while the timing is different, programming might not be all that different from other professions in its claims to mastery and exclusive knowledge, based on protocols shorn of certain material and practical dimensions. In this regard, is it all that different from architecture?
What might need explaining is rather how software intervened in, and transforms, all the professions. Almost all of them have been redefined as kinds of information-work. In many cases this can lead to deskilling and casualization, on the one hand, and to the circling of the wagons around certain higher-order, but information-based, functions on the other. As such, it is not that programming is an example of ‘neoliberalism’, so much as that neoliberalism has become a catch-all term for a collection of symptoms of the role of computing in its current form in the production of information as a control layer.
Hence my problem is with the ambiguity in formulations such as this: “Software becomes axiomatic. As a first principle, it fastens in place a certain neoliberal logic of cause and effect, based on the erasure of execution and the privileging of programming…” (49) What if it is not that software enables neoliberalism, but rather that neoliberalism is just a rather inaccurate way of describing a software-centric mode of production?
The invisible machine joins the list of other invisible operators: slaves, women, workers. They don’t need to be all that visible so long as they do what they’re told. They need only to be seen to do what they are supposed to do. Invisibility is the other side of power. To the extent that software has power or is power, it isn’t an imaginary fetish.
Rather than fetish and ideology, perhaps we could use some different concepts, what Bogdanov calls substitution and the basic metaphor. In this way of thinking, the actual organizational forms through which labor is controlled get projected onto other, unknown phenomena. We substitute the form of organization we know and experience for forms we don’t know – life, the universe, etc. The basic metaphors in operation are thus likely to be those of the dominant form of labor organization, and its causal model will become a whole worldview.
That seems to me a good basic sketch for how code, software and information became terms that could be substituted into any and every problem, from understanding the brain, or love, or nature or evolution. But where Chun wants to stress what this hides from view, perhaps we could also look at the other side, at what it enables.
Chun: “This erasure of execution through source code as source creates an intentional authorial subject: the computer, the program, or the user, and this source is treated as the source of meaning.” (53) Perhaps. Or perhaps it creates a way of thinking about relations of power, even of mapping them, in a world in which both objects and subjects can be controlled by information.
As Chun acknowledges, computers have become metaphor machines. As universal machines in Turing’s mathematical sense, they become universal machines also in a poetic sense. Which might be a way of explaining why Galloway thinks computers are allegorical. I think for him allegory is mostly spatial, the mapping of one terrain onto another. I think of allegory as temporal, as the paralleling of one block of time with another, indeed as something like Bogdanov’s basic metaphor, where one cause-effect sequence is used to explain another one.
The computer is in Chun’s terms a sort of analogy, or in Galloway’s a simulation. This is the sense in which for Chun the relation between analog and digital is analog, while for Galloway it is digital. Seen from the machine side, one sees code as an analogy for the world it controls; seen from the software side, one sees a digital simulation of the world to be controlled. Woven together with Marx’s circuit of money -> commodity -> money there is now another: digital -> analog -> digital. The question of the times might be how the former got subsumed into the latter.
For Chun, the promise of what the ‘intelligence community’ calls ‘topsight’ through computation proves illusory. The production of cognitive maps via computation obscures the means via which they are made. But is there not a kind of modernist aesthetic at work here, where the truth of appearances is in revealing the materials via which appearances are made? I want to read her readings in the literature of computing a bit differently. I don’t think it’s a matter of the truth of code lying in its execution by and in and as the machine. If so, why stop there? Why not further relate the machine to its manufacture? I am also not entirely sure one can say, after the fact, that software encodes a neoliberal logic. Rather, one might read for signs of struggles over what kind of power information could become.
This brings us to the history of interfaces. Chun starts with the legendary SAGE air defense network, the largest computer system ever built. It used 60,000 vacuum tubes and took 3 megawatts to run. It was finished in 1963 and already obsolete, although it led to the SABRE airline reservation system. Bits of old SAGE hardware were used in film sets wherever blinky computers were called for – as in Logan’s Run.
SAGE is an origin story for ideas of real time computing and interface design, allowing ‘direct’ manipulation that simulates engagement by the user. It is also an example of what Brenda Laurel would later think in terms of Computers as Theater. Like a theater, computers offer what Paul Edwards calls a Closed World of interaction, where one has to suspend disbelief and enter into the pleasures of a predictable world.
The choices offered by an interface make change routine. The choices offered by an interface shape notions of what is possible. We know that our folders and desktops are not real but we use them as if they were anyway. (Mind you, a paper file is already a metaphor. The world is no better or worse represented by my piles of paper folders than by my ‘piles’ of digital ‘folders’, even if they are not quite the same kind of representation.)
Chun: “Software and ideology fit each other perfectly because both try to map the tangible effects of the intangible and to posit the intangible causes through visible cues.” (71) Perhaps this is one response to the disorientation of the postmodern moment. Galloway would say rather that software simulates ideology. I think it’s a matter of software emerging as a basic metaphor, a handy model from the leading labor processes of the time substituted for processes unknown.
So the cognitive mapping Jameson called for is now something we all have to do all the time, and in a somewhat restricted form – mapping data about costs and benefits, risks and rewards – rather than grasping the totality of commodified social relations. There’s an ‘archaeology’ for these aspects of computing too, going back to Vannevar Bush’s legendary article ‘As we may think’, with its model of the memex, a mechanical machine for studying the archive, making associative indexing links and recording intuitive trails.
In perhaps the boldest intuition in the book, Chun thinks that this was part of a general disposition, an ‘episteme’, at work in modern thought, where the inscrutable body of a present phenomenon could be understood as the visible product of an invisible process that was in some sense encoded. Such a process requires an archive, a past upon which to work, and a process via which future progress emerges out of past information.
Computing meets and creates such a worldview. JCR Licklider, Douglas Engelbart and other figures in postwar computing wanted computers that were networked, ran in real time and had interfaces that allowed the user to ‘navigate’ complex problems while ‘driving’ an interface that could be learned step by step. Chun: “Engelbart’s system underscores the key neoliberal quality of personal empowerment – the individual’s ability to see, steer, and creatively destroy – as vital social development.” (83) To me it makes more sense to say that the symptoms shorthanded by the commonplace ‘neoliberal’ are better thought of as ‘Engelbartian.’ His famous ‘demo’ of interactive computing for ‘intellectual workers’ ought now to be thought of as the really significant cultural artifact of 1968.
Chun: “Software has become a common-sense shorthand for culture, and hardware shorthand for nature… In our so-called post-ideological society, software sustains and depoliticizes notions of ideology and ideology critique. People may deny ideology, but they don’t deny software – and they attribute to software, metaphorically, greater powers than have been attributed to ideology. Our interactions with software have disciplined us, created certain expectations about cause and effect, offered us pleasure and power – a way to navigate our neoliberal world – that we believe should be transferrable elsewhere. It has also fostered our belief in the world as neoliberal; as an economic game that follows certain rules.” (92) But does software really ‘depoliticize’, or does it change what politics is or could be?
Digital media program both the future and the past. The archive is first and last a public record of private property (which was of course why the Situationists practiced détournement, to treat it not as property but as a commons.) Political power requires control of the archive, or better, of memory – as Google has surely figured out. Chun: “This always there-ness of new media links it to the future as future simple. By saving the past, it is supposed to make knowing the future easier. The future is technology because technology enables us to see trends and hence to make projections – it allows us to intervene on the future based on stored programs and data that compress time and space.” (97)
Here is what Bogdanov would recognize as a basic metaphor for our times: “To repeat, software is axiomatic. As a first principle, it fastens in place a certain logic of cause and effect, a causal pleasure that erases execution and reduces programming to an act of writing.” (101) Mind, genes, culture, economy, even metaphor itself can be understood as software.
Software produces order from order, but as such it is part of a larger episteme: “the drive for software – for an independent program that conflates legislation and execution – did not arise solely from within the field of computation. Rather, code as logos existed elsewhere and emanated from elsewhere – it was part of a larger epistemic field of biopolitical programmability.” (103) As indeed Foucault’s own thought may be too.
In a particularly interesting development, Chun argues that both computing and modern biology derive from this same episteme. It is not that biology developed a fascination with genes as code under the influence of computing. Rather, both computing and genetics develop out of the same space of concepts.
Actually, early cybernetic theory had no concept of software. It isn’t in Norbert Wiener or Claude Shannon. Their work treated information as signal. In the former, the signal is feedback, and in the latter, the signal has to defeat noise. How then did information thought of as code and control develop both in cybernetics and also in biology? Both were part of the same governmental drive to understand the visible as controlled by an invisible program that derives present from past and mediates between populations and individuals.
A key text for Chun here is Erwin Schrödinger’s ‘What is Life?’ (1944), which posits the gene as a kind of ‘crystal’. He saw living cells as run by a kind of military or industrial governance, each cell following the same internalized order(s). This resonates with Shannon’s conception of information as entropy (a measure of randomness) and Wiener’s conception of information as negative entropy (a measure of order).
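Shannon’s measure, at least, can be stated compactly. A minimal Python sketch (the function name is my own, not Shannon’s notation): entropy is the average uncertainty of a source in bits, highest when all symbols are equally likely, lowest when one symbol is certain.

```python
import math

def shannon_entropy(probabilities):
    """Shannon's H = -sum(p * log2(p)): the average uncertainty,
    in bits, of a source emitting symbols with these probabilities."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin is maximally uncertain: one full bit per toss.
print(shannon_entropy([0.5, 0.5]))      # 1.0
# A heavily biased coin carries far less information per toss.
print(shannon_entropy([0.99, 0.01]))    # ~0.08
```

The measure is indifferent to meaning, which is precisely what made it portable between telephone engineering, cybernetics, and (eventually) biology.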
Schrödinger’s text made possible a view of life that was not vitalist – no special spirit is invoked – but which could explain organization above the level of a protein, which was about the level of complexity that Needham and other biochemists could explain at the time. But it comes at the price of substituting ‘crystal’, or ‘form’ for the organism itself.
Drawing on early Foucault, Chun thinks some key elements of a certain episteme of knowledge are embodied in Schrödinger’s text. Foucault’s interest was in discontinuities. Hence his metaphor of ‘archeology,’ which gives us the image of discontinuous strata. It was never terribly clear in Foucault what accounts for ‘mutations’ that form the boundaries of these discontinuities. The whole image of ‘archaeology’ presents the work of the philosopher of knowledge as a sort of detached ‘fieldwork’ in the geological strata of the archive.
Chun: “The archeological project attempts to map what is visible and what is articulable.” (113) One has to ask whether Foucault’s work was perhaps more an exemplar than a critique of a certain mode of knowledge. Foucault said that Marx was a thinker who swam in the 19th century as a fish swims in water. Perhaps now we can say that Foucault is a thinker who swam in the 20th century as a fish swims in water. Computing, genetics and Foucault’s archaeology are about discontinuous and discrete knowledge.
Still, he has his uses. Chun puts Foucault to work to show how there is a precursor to the conceptual architecture of computing in genetics and eugenics. The latter was a political program, supposedly based on genetics, whose mission was improving the ‘breeding stock’ of the human animal. But humans proved very hard to program, so perhaps that drive ended up in computing instead.
The ‘source’ for modern genetics is usually acknowledged to be the rediscovered experiments of Gregor Mendel. Mendelian genetics is in a sense ‘digital’. The traits he studied are binary pairs. The appearance of the pea (phenotype) is controlled by a code (genotype). The recessive gene concept made eugenic selective breeding rather difficult as an idea. But it is a theory of a ‘hard’ inheritance, where nature is all and nurture does not matter. As such, it could still be used in debates about forms of biopower on the side of eugenic rather than welfare policies.
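The ‘digital’ character of Mendelian inheritance, and the trouble the recessive gene made for selective breeding, can be sketched in a few lines of Python (the trait names follow Mendel’s round/wrinkled peas; the code itself is my illustration, not Chun’s):

```python
from itertools import product

# Illustrative sketch: Mendel's pea traits as 'digital' code.
# Allele 'A' (dominant: round pea) vs 'a' (recessive: wrinkled).
def phenotype(genotype):
    """The visible trait is a logical OR over the hidden code:
    one dominant allele suffices to express the dominant form."""
    return "round" if "A" in genotype else "wrinkled"

# Cross two heterozygous ('Aa') parents: a Punnett square.
offspring = ["".join(pair) for pair in product("Aa", "Aa")]
phenotypes = [phenotype(g) for g in offspring]
print(offspring)    # ['AA', 'Aa', 'aA', 'aa']
print(phenotypes)   # a 3:1 ratio of round to wrinkled
```

The difficulty for eugenics follows directly: ‘Aa’ and ‘AA’ display the same phenotype, so the hidden code cannot simply be read off the visible body and bred out.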
Interestingly, Chun makes the (mis)use of Mendelian genetics as a eugenic theory a precursor to cybernetics. “Eugenics is based on a fundamental belief in the knowability of the human body, an ability to ‘read’ its genes and to program humanity accordingly…. Like cybernetics, eugenics is a means of ‘governing’ or navigating nature.” (122) The notion of information as a source code was already at work in genetics long before either computing or modern biology. Control of, and by, code as a means of fostering life, agency, communication and the qualities of freely acting human capital is then an idea with a long history. One might ask whether it might not correspond to certain tendencies in the organization of labor at the time.
What links machinic and biological information systems is the idea of some kind of archive of information out of which a source code articulates future states of a system. But memory came to be conflated with storage. The active process of both forgetting and remembering turns into a vast and endless storage of data.
Programmed Visions is a fascinating and illuminating read. I think where I would want to diverge from it is at two points. One has to do with the ontological status of information, and the second has to do with its political-economic status. In Chun I find that information is already reduced to the machines that execute its functions, and then those machines are inserted into a historical frame that sees only governmentality and not a political economy.
Chun: “The information travelling through computers is not 1s and 0s; beneath binary digits and logic lies a messy, noisy world of signals and interference. Information – if it exists – is always embodied, whether in a machine or an animal.” (139) Yes, information has no autonomous and prior existence. In that sense neither Chun nor Galloway nor I are Platonists. But I don’t think information is reducible to the material substrate that carries it.
Information is a slippery term, meaning both order, neg-entropy, form, on the one hand, and something like signal or communication on the other. These are related aspects of the same (very strange) phenomenon, but not the same. The way I would reconstruct technical-intellectual history would put the stress on the dual production of information both as a concept and as a fact in the design of machines that could be controlled by it, but where information is meant as signal, and as signal becomes the means of producing order and form.
One could then think about how information was historically produced as a reality, in much the same way that energy was produced as a reality in an earlier moment in the history of technics. In both cases certain features of natural history are discovered and repeated within technical history. Or rather, features of what will retrospectively become natural history. For us there was always information, just as for the Victorians there was always energy (but no such thing as information). The nonhuman enters human history through the inhuman mediation of a technics that deploys it.
So while I take the point of refusing to let information float free and become a kind of new theological essence or given, wafting about in the ‘cloud’, I think there is a certain historical truth to the production of a world where information can have arbitrary and reversible relations to materiality. Particularly when that rather unprecedented relation between information and its substrate is a control relation. Information controls other aspects of materiality, and also controls energy, the third category here that could do with a little more thought. Of the three aspects of materiality: matter, energy and information, the latter now appears as a form of controlling the other two.
Here I think it worth pausing to consider information not just as governmentality but also as commodity. Chun: “If a commodity is, as Marx famously argued, a ‘sensible supersensible thing’, information would seem to be its complement: a supersensible sensible thing…. That is, if information is a commodity, it is not simply due to historical circumstances or to structural changes, it is also because commodities, like information, depend on a ghostly abstract.” (135) As retrospective readers of how natural history enters social history, perhaps we need to re-read Marx from the point of view of information. He had a fairly good grasp of thermodynamics, as Amy Wendling observes, but information as we know it today did not yet exist.
To what extent is information the missing ‘complement’ to the commodity? There is only one kind of (proto)information in Marx, and that is the general equivalent – money. The materiality of a thing – let’s say ‘coats’ – its use value, is doubled by its informational quantity, its exchange value, and it is exchanged against the general equivalent, or information as quantity.
But notice the missing step. Before one can exchange the thing ‘coats’ for money, one needs the information ‘coats’. What the general equivalent meets in the market is not the thing but another kind of information – let’s call it the general non-equivalent – a general, shared, agreed kind of information about the qualities of things.
Putting these sketches together, one might then ask what role computing plays in the rise of a political economy (or a post-political one), in which not only is exchange value dominant over use value, but where use value further recedes behind the general non-equivalent, or information about use value. In such a world, fetishism would be mistaking the body for the information, not the other way around, for it is the information that controls the body.
Thus we want to think bodies matter, lives matter, things matter – when actually they are just props for the accumulation of information and information as accumulation. ‘Neo’liberal is perhaps too retro a term for a world which does not just set bodies ‘free’ to accumulate property, but sets information free from bodies, and makes information property in itself. There is no biopower here, as information is not there to make better bodies, but bodies are just there to make information.
Unlike Kittler, Chun is aware of the difficulties of a fully reductive approach to media. For her it is more a matter of keeping in mind both the invisible and visible, the code and what executes it. “The point is not to break free of this sourcery but rather to… make our computers more productively spectral by exploiting the unexpected possibilities of source code as fetish.” (20)
I think there may be more than one kind of non-visible in the mix here, though. So while there are reasons to anchor information in the bodies that display it, there are also reasons to think about the relation of different kinds of information to each other. Perhaps bodies are shaped now by more than one kind of code. Perhaps it is no longer a time in which to use Foucault and Derrida to explain computing, but rather to see them as side effects of the era of computing itself.
Lev Manovich wrote the standard text on ‘new media’, back when that was still a thing. It was called The Language of New Media (MIT Press 2001). Already in that book, Manovich proposed a more enduring way of framing media studies for the post-broadcast age. In his most recent book, Software Takes Command (Bloomsbury 2013) we get this more robust paradigm without apologies. Like its predecessor it will become a standard text.
I’m sure I’m not the only one annoyed by the seemingly constant interruptions to my work caused by my computer trying to update some part of the software that runs on it. As Cory Doctorow shows so clearly, the tech industry will not actually leave us to our own devices. I have my machine set to ask my permission before it updates, at least for those things whose updating I can control. This might be a common example that points to one of Manovich’s key points about media today. The software is always changing.
Everything is always in beta, and the beta-testing will in a lot of cases be done by us, as a free service, for our vendors. Manovich: “Welcome to the world of permanent change – the world that is now defined not by heavy industrial machines that change infrequently, but by software that is always in flux.” (2)
Manovich takes his title from the modernist classic, Mechanisation Takes Command, published by Sigfried Giedion in 1948. (On which see Nicola Twiley, here.) Like Giedion, Manovich is interested in the often anonymous labor of those who make the tools that make the world. It is on Giedion’s giant shoulders that many students of ‘actor networks’, ‘media archaeology’ and ‘cultural techniques’ knowingly or unknowingly stand. Without Giedion’s earlier work, Walter Benjamin would be known only for some obscure literary theories.
Where Giedion is interested in the inhuman tool that interfaces with nonhuman natures, Manovich is interested in the software that controls the tool. “Software has become our interface to the world, to others, to our memory and our imagination – a universal language through which the world speaks, and a universal engine on which the world runs.” (2) If you are reading this, you are reading something mediated by, among other things, dozens of layers of software, including Word v. 14.5.1 and Mac OS v. 10.6.8, both of which had to run for me to write it in the first place.
Manovich’s book is limited to media software, the stuff that both amateurs and professionals use to make texts, pictures, videos, and things like websites that combine texts, pictures and videos. This is useful in that this is the software most of us know, but it points to a much larger field of inquiry that is only just getting going, including studies of software that runs the world without most of us knowing about it, platform studies that look at the more complicated question of how software meets hardware, and even infrastructure studies that look at the forces of production as a totality. Some of these are research questions that Manovich’s methods tend to illuminate and some not, as we shall see.
Is software a mono-medium or a meta-medium? Does ‘media’ even still exist? These are the sorts of questions that can become diversions if pursued too far. Long story short: While they disagree on a lot, I think what Alex Galloway and Manovich have in common is an inclination to suspect that there’s a bit of a qualitative break introduced into media by computation.
Giedion published Mechanization Takes Command in the age of General Electric and General Motors. As Manovich notes, today the most recognizable brands include Apple, Microsoft and Google. The last of these, far from being ‘immaterial’, probably runs a million servers. (24) Billions of people use software; billions more are used by software. And yet it remains an invisible category in much of the humanities and social sciences.
Unlike Wendy Chun and Friedrich Kittler, Manovich does not want to complicate the question of software’s relation to hardware. In Bogdanovite terms, it’s a question of training. As a programmer, Manovich (like Galloway) sees things from the programming point of view. Chun on the other hand was trained as a systems engineer. And Kittler programmed in assembler language, which is the code that directly controls a particular kind of hardware. Chun and Kittler are suspicious of the invisibility that higher level software creates vis-à-vis hardware, and rightly so. But for Galloway and Manovich this ‘relative autonomy’ of software of the kind most people know is itself an important feature of its effects.
The main business of Software Takes Command is to elaborate a set of formal categories through which to understand cultural software. This would be that subset of software that is used to access, create and modify cultural artifacts for online display, communication, for making inter-actives or adding metadata to existing cultural artifacts, as perceived from the point of view of its users.
The user point of view somewhat complicates one of the basic diagrams of media theory. From Claude Shannon to Stuart Hall, it is generally assumed that there is a sender and a receiver, and that the former is trying to communicate a message to the latter, through a channel, impeded by noise, and deploying a code. Hall breaks with Shannon with the startling idea that the code used by the receiver could be different to that of the sender. But he still assumes there’s a complete, definitive message leaving the sender on its way to a receiver.
Tiziana Terranova takes a step back from this modular approach that presumes the agency of the sender (and in Hall’s case, of the receiver). She is interested in the turbulent world created by multiples of such modular units of mediation. Manovich heads in a different direction. He is interested in the iterated mediation that software introduces, in the first instance, between the user and the machine itself.
Manovich: “The history of media authoring and editing software remains pretty much unknown.” (39) There is no museum for cultural software. “We lack not only a conceptual history of media editing software but also systematic investigations of the roles of software in media production.” (41) There are whole books on the palette knife or the 16mm film camera, but on today’s tools – not so much. “[T]he revolution in the means of production, distribution, and access of media has not been accompanied by a similar revolution in the syntax and semantics of media.” (56)
We actually do know quite a bit about the pioneers of software as a meta-medium, and Manovich draws on that history. The names of Ivan Sutherland, JCR Licklider, Douglas Engelbart, the maverick Ted Nelson and Alan Kay have not been lost to history. But they knew they were creating new things. The computer industry got in the habit of passing off new things as if they were old, familiar things, in order to ease users gently into unfamiliar experiences. But in the process we lost sight of the production of new things under the cover of the old in the cultural domain.
Of course there’s a whole rhetoric about disruption and innovation, but successful software gets adopted by users by not breaking too hard with those users’ cultural habits. Thus we think there’s novelty where often there isn’t: start-up business plans are often just copies of previous successful ones. But we miss it when there’s real change: at the level of what users actually do, where the old is a friendly wrapper for the new.
Where Wendy Chun brings her interpretive gifts to bear on Engelbart, Manovich is more interested in Alan Kay and his collaborators, particularly Adele Goldberg, at Xerox Palo Alto Research Center, or Parc for short. Founded in 1970, Parc was the place that created the graphic user interface, the bitmapped display, Ethernet networking, the laser printer, the mouse, and windows. Parc developed the models for today’s word processing, drawing, painting and music software. It also gave the world the programming language Smalltalk, a landmark in the creation of object oriented programming. All of which are component bits of a research agenda that Alan Kay called vernacular computing.
Famously, it was not Xerox but Apple that brought all of that together in a consumer product, the 1984 Apple Macintosh computer. By 1991 Apple had also incorporated video software based on the QuickTime standards, which we can take as the start of an era in which a consumer desktop computer could be a universal media editor. At least in principle: those who remember early 90s era computer-based media production will recall also the frustration. I was attempting to create curriculum for digital publishing in the late eighties – I came into computing sideways from print production. That was pretty stable by the start of the 90s, but video would take a bit longer to be viable on consumer-grade hardware.
The genius of Alan Kay was to realize that the barriers to entry of the computer into culture were not just computational but also cultural. Computers had to do things that people wanted to do, and in ways that they were used to doing them, initially at least. Hence the strategy of what Bolter and Grusin called remediation, wherein old media become the content of new media form.
If I look at the first screen of my iPhone, I see icons. The clock icon is an analog clock. The iTunes icon is a musical note. The mail icon is the back of an envelope. The video icon is a mechanical clapboard. The Passbook icon is old-fashioned manila files. The Facetime icon is an old-fashioned looking video camera. The Newsstand icon looks like a magazine rack. Best of all, the phone icon is the handset of an old-fashioned landline. And so on. None of these things pictured even exist in my world any more, as I have this machine that does all those things. The icons are cultural referents from a once-familiar world that have become signs within a rather different world which I can pretend to understand because I am familiar with those icons.
All of this is both a fulfillment and a betrayal of the work of Alan Kay and his collaborators at Parc. They wanted to turn computers into “personal dynamic media.” (61) Their prototype was even called a Dynabook. They wanted a new kind of media, with unprecedented abilities. They wanted a computer that could store all of the user’s information, which could simulate all kinds of media, and could do so in a two-way, real-time interaction. They wanted something that had never existed before.
Unlike the computer Engelbart showed in his famous Demo of 1968, Kay and co. did not want a computer that was just a form of cognitive augmentation. They wanted a medium of expression. Engelbart’s demo shows, many years ahead of its time, what the office would look like, but not what the workspace of any kind of creative activity would look like. Parc wanted computers for the creation of new information in all media, including hitherto unknown ones. They wanted computers for what I call the hacker class – those who create new information in all media, not just code. In a way the computers that resulted make such a class possible and at the same time set limits on its cognitive freedom.
The Parc approach to the computer is to think of it as a meta-medium. Manovich: “All in all, it is as though different media are actively trying to reach towards each other, exchanging properties and letting each other borrow their unique features.” (65) To some extent this requires making the computer itself invisible to the user. This is a problem for a certain kind of modernist aesthetic – and Chun may participate in this – for whom the honest display of materials and methods is a whole ethics or even politics of communication. But modernism was only ever a partial attempt to reveal its own means of production, no matter what Benjamin and Brecht may have said on the matter. Perhaps all social labor, even of the creative kind, requires separation between tasks and stages.
The interactive aspect of modern computing is of course well known. Manovich draws attention to another feature, and one which differentiates software more clearly from other kinds of media: view control. This one goes back to Engelbart’s Demo rather than Parc. At the moment I have this document in Page View, but I could change that to quite a few different ways of looking at the same information. If I was looking at a photo in my photo editing software, I could also choose to look at it as a series of graphs, and maybe manipulate the graphs rather than the representational picture, and so on.
This might be a better clue to the novelty of software than, say, hypertext. The linky, not-linear idea of a text has a lot of precursors, not least every book in the world with an index, and every scholar taught to read books index-first. Of course there are lovely modernist-lit versions of this, from Cortázar’s Hopscotch to Roland Barthes and Walter Benjamin to this nice little software-based realization of a story by Georges Perec.
Then there’s nonlinear modernist cinema, such as Hiroshima Mon Amour, David Blair’s Wax, or the fantastic cd-roms of The Residents and Laurie Anderson made by Bob Stein’s Voyager Interactive. But Manovich follows Espen Aarseth in arguing that hypertext is not modernist. It’s much more general and supports all sorts of poetics. Stein’s company also made very successful cd-roms based on classical music. Thus while Ted Nelson got his complex, linky hypertext aesthetic from William Burroughs, what was really going on, particularly at Parc, was not tail-end modernism but the beginnings of a whole new avant-garde.
Maybe we could think of it as a sort of meta- or hyper-avant-garde, that wanted not just a new way of communicating in a media but new kinds of media themselves. Kay and Nelson in particular wanted to give the possibility of creating new information structures to the user. For example, consider Richard Shoup’s Superpaint, coded at Parc in 1973. Part of what it does is simulate real-world painting techniques. But it also added techniques that go beyond simulation, including copying, layering, scaling and grabbing frames from video. Its ‘paintbrush’ tool could behave in ways a paintbrush could not.
For Manovich, one thing that makes ‘new’ media new is that new properties can always be added to it. The separation between hardware and software makes this possible. “In its very structure computational media is ‘avant-garde’, since it is constantly being extended and thus redefined.” (93) The role of media avant-garde is no longer performed by individuals or artist groups but happens in software design. There’s a certain view of what an avant-garde is that’s embedded in this, and perhaps it stems from Manovich’s early work on the Soviet avant-gardes, understood in formalist terms as constructors of new formalizations of media. It’s a view of avant-gardes as means of advancing – but not also contesting – the forward march of modernity.
Computers turned out to be malleable in a way other industrial technologies were not. Kay and others were able to build media capabilities on top of the computer as universal machine. It was a sort of détournement of the Turing and von Neumann machine. They were not techno-determinists. It all had to be invented, and some of it was counter-intuitive. The Alan Kay and Adele Goldberg version was at least as indebted to the arts, humanities and humanistic psychology as to engineering and design disciplines. In a cheeky aside, Manovich notes: “Similar to Marx’s analysis of capitalism in his works, here the analysis is used to create a plan for action for building a new world – in this case, enabling people to create new media.” (97)
Unlike Chun or David Golumbia, Manovich downplays the military aspect of postwar computation. He dismisses SAGE, even though out of it came the TX-2 computer, which was perhaps the first machine to allow a kind of real time interaction, if only for its programmer. From which, incidentally, came the idea of the programmer as hacker, Sutherland’s early work on computers as a visual medium, and the game Spacewar.
The Parc story is nevertheless a key one. Kay and co wanted computers that could be a medium for learning. They turned to the psychologist Jerome Bruner, and his version of Piaget’s theory of developmental stages. The whole design of the Dynabook had something for each learning stage, which to Bruner and Parc were more parallel learning strategies rather than stages. For the gestural and spatial way of learning, there was the mouse. For the visual and pictorial mode of learning, there were icons. For the symbolic and logical mode of learning, there was the programming language Smalltalk.
For Manovich, this was the blueprint for a meta-medium, which could not only represent existing media but also add qualities to them. It was also both an ensemble of ways of experiencing multiple media and also a system for making media tools, and even for making new kinds of media. A key aspect of this was standardizing the interfaces between different media. For example, when removing a bit of sound, or text, or picture, or video from one place and putting it in another, one would use standard Copy and Paste commands from the Edit menu. On the Mac keyboard, these even have standard key-command shortcuts: Cut, Copy and Paste are command-X, C and V, respectively.
But something happened between the experimental Dynabook and the Apple Mac. The Mac shipped without Smalltalk, or any software authoring tool. From 1987 it came with Hypercard, written by Parc alumnus Bill Atkinson – which many of us fondly remember. Apple discontinued it in 2004. It seems clear now, with the iPad, that Apple’s trajectory led in the long term away from democratizing computing and toward treating the machine as a media device.
And so Kay’s vision was both realized and abandoned. It became cheap and relatively easy to make one’s own media tools. But the computer became a media consumption hub. By the time one gets to the iPad, it does not really present itself as a small computer. It’s more like a really big phone, where everything is locked and proprietary.
A meta-medium contains simulations of old media but also makes it possible to imagine new media. This comes down to being able to handle more than one type of media data. Media software has ways of manipulating specific types of data, but also some that can work on data in general, regardless of type. View control, hyperlinking, sort and search would be examples. If I search my hard drive for ‘Manovich’, I get Word text by me about him, pdfs of his books, video and audio files, and a picture of Lev and me in front of a huge array of screens.
Such media-independent techniques are general concepts implanted into algorithms. Besides search, geolocation would be another example. So would visualization, or infovis, which can graph lots of different kinds of data set. You could read my book, Gamer Theory, or you could look at this datavis of it that uses Bradford Paley’s TextArc.
Manovich wants to contrast these properties of a meta-medium to medium-specific ways of thinking. A certain kind of modernism puts a lot of stress on this: think Clement Greenberg and the idea of flatness in pictorial art. Russian formalism and constructivism in a way also stressed the properties of specific media, their given textures. But it was also interested in working experimentally in between them, in parallel experiments in breaking media down to their formal grammars. Manovich: “… the efforts by modern artists to create parallels between mediums were proscriptive and speculative… In contrast, software imposes common media ‘properties.’” (121)
One could probably quibble with Manovich’s way of relating software as meta-medium to precursors such as the Russian avant-gardes, but I won’t, as he knows a lot more about both than I do. I’ll restrict myself to pointing out that historical thought on these sorts of questions has only just begun. Particular arguments aside, I think Manovich is right to emphasize how software calls for a new way of thinking about art history, which as yet is not quite a pre-history to our actual present.
I also think there’s a lot more to be said about something that is probably no longer a political economy but more of an aesthetic economy, now that information is a thing that can be property, that can be commodified. As Manovich notes of the difference between the actual and potential states of software as meta-medium: “Of course, not all media applications and devices make all these techniques equally available – usually for commercial and copyright reasons.” (123)
So far, Manovich has provided two different ways of thinking about media techniques. One can classify them as media independent vs media specific; or as simulations of the old media vs new kinds of technique. For example, in Photoshop, you can Cut and Paste like in most other programs. But you can also work with layers in a way that is specific to Photoshop. Then there are things that look like old time darkroom tools. And there are things you could not really do in the darkroom, like add a Wind filter, to make your picture look like it is zooming along at Mach 1. Interestingly, there are also high pass, median, reduce noise, sharpen and equalize filters, all of which are hold-overs from something in between mechanical reproduction and digital reproduction: analog signal processing. There is a veritable archaeology of media just in the Photoshop menus.
What makes all this possible is not just the separation of hardware from software, but also the separation of the media file from the application. The file format allows the user to treat the media artifact on which she or he is working as “a disembodied, abstract and universal dimension of any message separate from its content.” (133) You work on a signal, or basically a set of numbers. The numbers could be anything so long as the file format is the right kind of format for a given software application. Thus the separation of hardware and software, and software application and file, allow an unprecedented kind of abstraction from the particulars of any media artifact.
One could take this idea of separation (which I am rather imposing as a reading on Manovich) down another step. Within Photoshop itself, the user can work with layers. Layers redefine an image as a content image with modifications conceived as happening in separate layers. These can be transparent or not, turned on or off, masked to affect part of the underlying image only, and so on.
The kind of abstraction that layers enable can be found elsewhere. It’s one of the non-media-specific techniques. The typography layer of this text is separate from the word-content layer. GIS (Geographic Information System) also uses layers, turning space into a media platform holding data layers. Turning on and off the various layers of Google Earth or Google Maps will give a hint of the power of this. Load some proprietary information into such a system, toggle the layers on and off, and you can figure out the optimal location for the new supermarket. Needless to say, some of this ability, in both Photoshop and GIS, descends from military surveillance technologies from the cold war.
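The layer logic common to Photoshop and GIS can be sketched as a stack of toggleable overlays applied to a base. This is a schematic toy, not either program’s actual model; all names here are invented for illustration:

```python
# An "image" as a base value plus a stack of layers, each of which can
# be toggled on or off; the visible result is recomputed from the stack,
# leaving the underlying content untouched.
def composite(base, layers):
    """Apply each enabled layer's edit function, in order, to the base."""
    value = base
    for enabled, edit in layers:
        if enabled:
            value = edit(value)
    return value

base = 100  # stand-in for a single pixel's brightness
layers = [
    (True,  lambda v: v + 20),  # a "brightness" layer, turned on
    (False, lambda v: 0),       # a "blackout" layer, toggled off
]
print(composite(base, layers))  # 120
```

Toggling the second layer on would zero the result without ever modifying the base: the non-destructive editing that makes layers such a powerful abstraction.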
So what makes new or digital media actually new and digital is the way software both is, and further enables, a kind of separation. It defines an area for users to work in an abstract and open-ended way. “This means that the terms ‘digital media’ and ‘new media’ do not capture very well the uniqueness of the ‘digital revolution.’… Because all the new qualities of ‘digital media’ are not situated ‘inside’ the media objects. Rather, they all exist ‘outside’ – as commands and techniques of media viewers, authoring software, animation, compositing, and editing software, game engine software, wiki software, and all other software ‘species.’” (149)
The user applies software tools to files of specific types. Take the file of a digital photo, for example. The file contains an array of pixels that have color values, a file header specifying dimensions, color profile, information about the camera and exposure and other metadata. It’s a bunch of – very large – numbers. A high def image might contain 2 million pixels and six million RGB color values. Any digital image seen on a screen is already a visualization. For the user it is the software that defines the properties of the content. “There is no such thing as ‘digital media.’ There is only software as applied to media (or ‘content’).” (152)
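Manovich’s point that a digital image is “just” numbers plus software can be sketched in a few lines of Python. This is a toy illustration of the idea, not any real file format:

```python
# A toy "image file": a header of metadata plus an array of pixels.
# Each pixel is an (R, G, B) triple of 8-bit values; nothing here is
# visual until some software chooses how to render the numbers.
width, height = 4, 2
header = {"width": width, "height": height, "color_profile": "sRGB"}
pixels = [(255, 0, 0)] * (width * height)  # eight pure-red pixels

# The same numbers support entirely non-visual operations:
total_values = sum(len(p) for p in pixels)
mean_red = sum(p[0] for p in pixels) / len(pixels)

print(total_values)  # 8 pixels x 3 channels = 24 numbers
print(mean_red)      # 255.0
```

At the scale Manovich mentions, a 2-megapixel frame is 6 million such color values; the “image” a user sees is one software-chosen visualization of that array among many possible ones (a histogram, a curve, a grid of numbers).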
New media are new in two senses. The first is that software is always in beta, continually being updated. The second is that software is a meta-medium, both simulating old media tools and adding new ones under the cover of the familiar. Yet a third might be the creation not of new versions of old media or new tools for old media but entirely new media forms – hybrid media.
For instance, Google Earth combines aerial photos, satellite images, 3D computer graphics, stills and data overlays. Another example is motion graphics, including still images, text, audio, and so on. Even a simple website can contain page description information for text, vector graphics, animation. Or the lowly PowerPoint, able to inflict animation, text, images or movies on the public.
This is not quite the same thing as the older concept of multimedia, which for Manovich is a subset of hybrid media. In multimedia the elements are next to each other. “In contrast, in media hybrids, interfaces, techniques, and ultimately the most fundamental assumptions of different media forms and traditions, are brought together, resulting in new media gestalts.” (167) It generates new experiences, different from previously separate experiences. Multimedia does not threaten the autonomy of media, but hybridity does. In hybrid media, the different media exchange properties. For example, text within motion graphics can be made to conform to cinematic conventions, go in and out of ‘focus’.
Hybrid media is not the same as convergence, as hybrid media can evolve new properties. Making media over as software did not lead to their convergence, as some thought, but to the evolution of new hybrids. “This, for me, is the essence of the new stage of computer meta-medium development. The unique properties and techniques of different media have become software elements that can be combined together in previously impossible ways.” (176)
Manovich thinks media hybrids in an evolutionary way. Like Franco Moretti, he is aware of the limits of the analogy between biological and cultural evolution. Novel combinations of media can be thought of as a new species. Some are not selected, or end up restricted to certain specialized niches. Virtual Reality, for example, was as I recall a promising new media hybrid at the trade shows of the early 90s, but it ended up with niche applications. A far more successful hybrid is the simple image map in webdesign, where an image becomes an interface. It’s a hybrid made of a still image plus hyperlinks. Another would be the virtual camera in 3D modeling, which is now a common feature in video games.
One might pause to ask, like Galloway, or Toscano and Kinkle, whether such hybrid media help us cognitively map the totality of relations within which we are webbed. But the problem is that not only is the world harder to fathom in its totality, even media itself recedes from view. “Like the post-modernism of the 1980s and the web revolution of the 1990s, the ‘softwarization’ of media (the transfer of techniques and interfaces of all previously existing media technologies to software) has flattened history – in this case the history of modern media.” (180) Perhaps it’s a declension of the fetish, which no longer takes the thing for the relation, or even the image for the thing, but takes the image as an image, rather than an effect of software.
Software is a difficult object to study, in constant flux and evolution. One useful methodological tip from Manovich is to focus on formats rather than media artifacts or even instances of software. At its peak in the 1940s, Hollywood made about 400 movies per year. It would be possible to see a reasonable sample of that output. But Youtube uploads something like 300 hours of video every minute. Hence a turn to formats, which are relatively few in number and stable over time. Jonathan Sterne’s book on the mp3 might stand as an exemplary work along these lines. Manovich: “From the point of view of media and aesthetic theory, file formats constitute the ‘materiality’ of computational media – because bits organized in these formats is what gets written to a storage media…” (215)
Open a file using your software – say in Photoshop – and one quickly finds a whole host of ways in which you can make changes to the file. Pull down a menu, go down the list of commands. A lot of them have sub-menus where you can change the parameters of some aspect of the file. For example, a color-picker. Select from a range of shades, or open a color wheel and choose from anywhere on it. A lot of what one can fiddle with are parameters, also known as variables or arguments. In a GUI, or Graphic User Interface, there’s usually a whole bunch of buttons and sliders that allow these parameters to be changed.
Modern procedural programming is modular. Every procedure that is used repeatedly is encapsulated in a single function that software programs can invoke by name. These are sometimes called subroutines. Such functions generally solve equations. Ones that perform related functions are gathered in libraries. Using such libraries speeds up software development. A function works on particular parameters – for example, the color picker.
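The modularity described here can be sketched as a tiny “library” of parameterized functions. The names and the workflow are invented for illustration, but the structure is exactly what a GUI slider or color picker exposes to the user:

```python
# A minimal "library" of reusable media functions, each driven by
# parameters -- the same values a GUI button or slider lets a user set.
def adjust_brightness(pixel, amount):
    """Add `amount` to each channel, clamped to the 0-255 range."""
    return tuple(max(0, min(255, c + amount)) for c in pixel)

def apply(image, tool, **params):
    """Select a tool, choose parameters, apply to every pixel --
    the repeated workflow cycle of parameterized media software."""
    return [tool(p, **params) for p in image]

image = [(10, 20, 30), (250, 20, 30)]
print(apply(image, adjust_brightness, amount=10))
# [(20, 30, 40), (255, 30, 40)]
```

The same `apply` loop would serve any other function from the library; only the tool and its parameters change, which is why such different kinds of media work come to feel alike.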
Softwarization allows for a great deal of control of parameters. “In this way, the logic of programming is projected to the GUI level and becomes part of the user’s cognitive model of working with media inside applications.” (222) It may even project into the labor process itself. Different kinds of media work become similar in workflow. Select a tool, choose parameters, apply, repeat. You could be doing the layout for a book, designing a building, editing a movie or preparing photos for a magazine.
Manovich: “Of course, we should not forget that the practices of computer programming are embedded within the economic and social structures of the software and consumer electronics industries.” (223) What would it mean to unpack that? How were the possibilities opened up by Alan Kay and others reconfigured in the transition from experimental design to actual consumer products? How did the logic of the design of computation end up shaping work as we know it? These are questions outside the parameters of Manovich’s formalist approach, but they are questions his methods usefully clarify. “We now understand that in software culture, what we identify by conceptual inertia as ‘properties’ of different mediums are actually the properties of media software.” (225)
There’s a lot more to Software Takes Command, but perhaps I’ll stop here and draw breath. It has a lot of implications for media theory. The intermediate objects of such a theory dissolve in the world of software: “… the conceptual foundation of media discourse (and media studies) – the idea that we can name a relatively small number of distinct mediums – does not hold any more.” (234) Instead, Manovich sees an evolutionary space with a large number of hybrid media forms that overlap, mutate and cross-fertilize.
If one way to navigate such a fluid empirical field might be to reduce it to the principles of hardware design, Manovich suggests another, which takes the relative stability of file formats and means of modifying them as key categories. This reformats what we think the ‘media’ actually are: “… a medium as simulated in software is a combination of a data structure and a set of algorithms.” (207)
There are consequences not only for media theory but for cultural studies as well. Cultural techniques can be not only transmitted, but even invented in software design: “a cultural object becomes an agglomeration of functions drawn from software libraries.” (238) These might be embedded less in specific software programs than in the common libraries of subroutines on which they draw to vary given parameters. These design practices then structure workflow and workplace habits.
Software studies offers a very different map as to how to rebuild media studies than just adding new sub-fields, for example, for game studies. It also differs from comparative media studies, in not taking it for granted that there are stable media – whether converging or not – to compare. It looks rather for the common or related tools and procedures embedded now in the software that runs all media.
Manovich’s software studies approach is restricted to that of the user. One might ask how the user is itself produced as a byproduct of software design. Algorithm plus data equals media: which is how the user starts to think of it too. It is a cultural model, but so too is the ‘user’. Having specified how software reconstructs media as simulation for the user, one might move on to thinking about how the user’s usage is shaped not just by formal and cultural constraints but by other kinds as well. And one might think also about who or what gets to use the user in turn.
For Lisa Nakamura
I have been thinking about Lisa Nakamura lately, and for a lot of reasons. Here I just want to reflect on how valuable her work in visual culture studies has been for me, and for many others. She was a pioneer of the study of what used to be tagged as ‘race in cyberspace.’ Now that the internet is everywhere, and race and racisms proliferate on it like fungus on damp newspaper, her work deserves renewed critical attention. Her book Digitizing Race: Visual Cultures of the Internet (Minnesota, 2008) is nearly a decade old, but it turns out that looking perceptively at ephemeral media need not render the resulting study ephemeral at all.
Digitizing Race draws together three things. The first is the post-racial project of a certain (neo)liberal politics that Bill Clinton took mainstream in the early nineties. Its central conceit was that all the state need do is provide opportunities for everyone to become functional subjects of postindustrial labor and consumption. The particular challenges of racism were ignored.
The second is an historical transformation in the internet that began in the mid-nineties, which went from being military and scientific (with some creative subcultures on the side) to a vast commercial complex. This led to the waning of the early nineties internet subcultures, some of whom thought of it as a utopian or at least alternative media for identity play, virtual community and gift economies. In A Hacker Manifesto (Harvard, 2004), I was mostly interested in the last of these. Nakamura is more interested in what became of community and identity.
One theme that started to fade in internet culture (or cyberculture in the language of the time) had to do with passing online as something other than one’s meatspace self. This led to a certain gnostic belief in the separation of online from meatspace being, as if the differences and injustices of the latter could just be left behind. But the early cyberculture adepts tended to be a somewhat fortunate few, with proximity to research universities. As the internet’s user-base expanded, the newcomers (or n00bs) had other ideas.
The third tendency Nakamura layers onto the so-called neo-liberal turn and the commercialized and more-popular internet is the academic tendency known as visual studies or visual culture studies. This in part grew out of, and in reaction against, an art historical tradition that could absorb installation art but did not know how to think digital media objects or practices. Visual culture studies drew on anthropology and other disciplines to create the “hybrid form to end all hybrid forms.” (3) It also had something in common with cultural studies, in its attention to low, ephemeral and vulgar forms, treated not just as social phenomena but as aesthetic ones as well.
Not all the tendencies within visual culture studies sat well together. There could be tension between paying attention to digital media objects and paying attention to vulgar popular forms. Trying to do both at once was an exercise in self-created academic marginality. The study of new media thus tended to privilege things that look like art; the study of the low, the minor or the vulgar tended to privilege social over aesthetic methods and preoccupations. Not the least virtue of Nakamura’s work is that she went out on a limb and studied questions of race and gender in new and ephemeral digital forms and as aesthetic practices.
One way to subsume these three questions into some sort of totality might be to think about what Lisa Parks called visual capitalism. How is visual capital, an ensemble of images that appear to have value, created and circulated? How does social differentiation cleave along lines of access to powerful modes of representation? Having framed those questions, one might then look at how the internet came to function as a site for the creation and distribution of hegemonic and counter-hegemonic images of racialized bodies.
Here one might draw on Paul Gilroy’s work on the historical formation and contestation of racial categories, or the way Donna Haraway and Chela Sandoval look to cyborg bodies as produced by bio-technical networks, but within which they might exercise an ironic power of slippery self-definition. Either way, one might pay special attention to forms of image-making by non-elite or even banal cultures as well as to more high-profile mass media forms, cool subcultures or avant-garde art forms.
There are several strands to this story, however, one of which might be the evolution of technical media form. From Nick Mirzoeff, Nakamura takes the idea of visual technology as an enhancement of vision, from easel painting to digital avatars. In the context of that historical background, one might ask what is old and what is new about what one discovers in current media forms. This might be a blend of historical, ethnographic and formal-aesthetic methods.
A good place to start such a study is with interfaces, and a good way to tie together the study of cinema, television and the internet is to study how the interfaces of the internet appear in cinema and television. Take, for instance, the video for Jennifer Lopez’s pop song, ‘If You Had My Love’ (1999). The conceit of the video is that Lopez is an avatar controlled by users who can view her in different rooms, doing different dances in different outfits. The first viewer is a young man – a bit like one of Azuma’s otaku – who appears to be looking for something to jerk-off to, but there are other imaginary viewers throughout, including teenage girls and a rather lugubrious inter-racial threesome, nodding off together on a sofa.
So different people can be on the human side of the interface. Here, we are voyeurs on their voyeurism. The interface itself is perhaps the star, and J-Lo herself becomes its object. With the interface, the imaginary user in frame and the imagining one – us – can make J-Lo perform as different kinds of dancer, slotting her into different racial and cultural niches. The interface offers “multiple points of entry to the star.” (27) She – it – can be chopped and streamed. It’s remarkable that this video made for MTV now sits so nicely on YouTube.com, whose interactive modes it premediates.
There was – and still is – a lot of commentary on The Matrix (1999), but not much of it lingers over the slightly embarrassing second and third movies in the franchise. They are “bad films with their hearts in the right place.” (104) Like the J-Lo video, they deal among other things with what Eugene Thacker called immediacy, or the expectation of real time feedback and control via an interface. As Nakamura drolly notes, “This is an eloquent formulation of entitlement…” (94) Where the Matrix films get interestingly weird is in their treatment of racial difference among interface users under “information capitalism.” (96)
The Matrix pits blackness as embodiment against whiteness as the digital. What goes on in the background to the main story is a species of Afrofuturism, but it’s the opposite of Black Accelerationism, in which a close proximity of the black body to the machine is in advance of whiteness, and to be desired. In The Matrix version, the black body holds back from the technical, and retains attributes of soul, individuality, corporeality, and this is its value. Nakamura: “Afrofuturist mojo and black identity are generally depicted as singular, ‘natural’… ‘unassimilable’ and ‘authentic.’” (100) Whereas with the bad guy Agent Smith, “Whiteness thus spreads in a manner that exemplifies a much-favored paradigm of e-business in the nineties: viral marketing.” (101) The white Agents propagate through digitally penetrating other white male bodies.
At least race appears in the films, which offer some sort of counter-imaginary to cyber-utopianism. But as Coco Fusco notes, photography and cinema don’t just record race – they produce it. Lev Manovich notes that it’s in the interface that the photographic image is produced now, and so for Nakamura, it is the interface that bears scrutiny as the place where race is made. In The Matrix, race is made to appear for a notionally white viewer. “The presence of blackness in the visual field guards whites from the irresistible seduction of the perfectly transparent interface…. Transparent interfaces are represented as intuitive, universal, pre- or postverbal, white, translucent, and neutral – part of a visual design aesthetic embodied by the Apple iPod.” (109)
Apple’s iconic early ads for the iPod featured blacked-out silhouettes of dancing bodies, their white earbud cords flapping as they move, against bold single-color backgrounds. For Nakamura, they conjure universal consumers who can make product choices, individuated neoliberal subjects in a color-blind world. Like the ‘users’ of J-Lo in her video, they can shuffle between places, styles, cultures, ethnicities – even if some of the bodies dancing in the ads are meant to be read as not just blacked-out but also black. Blackness, at the time at least, was still the marker for the authentic in white desire around music. In this world, “Whiteness is replication, blackness is singularity, but never for the black subject, always for the white subject.” (116)
Nakamura: “This visual culture, which contrasts black and white interface styles so strongly, insists that it is race that is real. In this way the process of new media as a cultural formation that produces race is obscured; instead race functions here as a way to visualize new media image production… In this representational economy, images of blacks serve as talismans to ward off the consuming power of the interface, whose transparent depths, like Narcissus’ pool, threaten to fatally immerse its users.” (116, 117)
If blackness stands for authentic embodiment in this visual culture, then Asian-ness stands for too much proximity to the tech. The Asian shows up only marginally in The Matrix. Its star, the biracial Keanu Reeves, was like J-Lo quite racially malleable for audiences. In his case he could be read as white by whites and Asian by Asians if they so desired. A more ironic and telling example is the film Minority Report (2002). Tom Cruise – was there a whiter star in his era? – has to get his eyes replaced, as retinal scanning is everywhere in this film’s paranoid future. Only the eyes he gets belonged to a Japanese person, and the Cruise character finds himself addressed as a particularly avid consumer everywhere he goes. Hiroki Azuma and Asada Akira had once advanced a kind of ironic Asian Accelerationism, which positively valued a supposed closeness of the Asian with the commodity and technology, but in Minority Report it’s an extreme for the white subject to avoid.
Race at the interface might be a moment in a process of production and reproduction (and its queer twists) that Donna Haraway called the integrated circuit. It partakes now in what Paul Gilroy notes is a crisis of raciology, brought on by the popularization of genetic testing. The old visual regimes of race struggle to adapt to the spreading awareness of the difference between genotype and phenotype. The film GATTACA (1997) is here a prescient one in imagining how a new kind of racism of the genotype might arise. It imagines a world rife with interfaces designed to detect the genotypical truth of appearances.
Nakamura ties these studies of the interface in cinema and television to studies of actual interfaces, particularly lowly, unglamorous, everyday ones. For instance, she looks at the avatars made for AIM (AOL Instant Messenger), which started in 1997 as an application running on Microsoft Windows. Of interest to her are the self-made cartoon-like avatars users chose to represent themselves to their ‘buddies.’ “The formation of digital taste cultures that are low resolution, often full of bathroom humor, and influenced by youth-oriented and transnational visual styles like anime ought to be traced as it develops in its native mode: the internet.” (30-31)
At the time there was little research on such low forms, particularly those popular with women. Low-res forms populated with cut and paste images from the Care Bears, Disney and Hello Kitty are not the ideal subjects of interactivity imagined in cyberculture theories. But there are questions here of who has access to what visual capital, of “who sells and is bought, who surfs and is surfed.” (33) AIM avatars are often based on simple cut and paste graphics, but users modified the standard body images with signs that marked out their version of cultural or racial difference. This was a moment of explosion of ethnic identity content on the web – to which, incidentally, we may in 2017 be witnessing the backlash.
AIM users could download avatars from websites that offered them under various categories – of which race was never one, as this is a supposedly postracial world. The avatars were little gifs, made of body parts cut from a standard template with variations of different hair, clothing, slogans, etc. These could be assembled into mini-movies remediating material from anime, comics and games: a mix of photos, cartoons, flags and avatars.
One could read Nakamura’s interest in the visual self-presencing of women and girls as a subset of Henry Jenkins’ interest in fan-based media, but she lacks Jenkins’ occasionally over-enthusiastic embrace of such activity as democratic and benign. Her subaltern taste-cultures are a little more embattled and compromised.
The kind of femininity performed here is far from resistant and sometimes not even negotiated. These versions of what Hito Steyerl would later call the poor image would be hard to redeem aesthetically. Cultural studies had tried to ask meta-questions about what the objects of study are, but even so, we ended up with limited lists of proper new media objects, of which AIM avatars were not one.
The same could be said of the website alllooksame.com. The site starts with a series of photographs of faces, and asks the user to identify which is Japanese, Chinese or Korean. (Like most users, I could not tell, which is the point.) The category of the Asian-American is something of a post-Civil Rights construct. It promised resistance to racism in pan-ethnic identity, but paradoxically treated race as real. While alllooksame.com is an odd site, for Nakamura it does at least unite Asian viewers in questioning visual rhetoric about race. Here it provides a counter-example to Ien Ang’s study of Huaren.org, which to her essentializes diasporic Chinese-ness.
Asian-American online practice complicates the digital divide, being on both sides. The Asian-American appears in popular racial consciousness as a ‘model minority’, supposedly uninterested in politics, avid about getting ahead in information capitalism, or whatever this is. Yet she or he also appears as the refugee, the undocumented, the subsistence wage service worker. For Nakamura, this means that the study of the digital divide has to look beyond the race of users to other questions of difference, and also to questions of agency online rather than mere user numbers.
While in some racialized codings, the ‘Asian’ is high-tech and assimilates to (supposedly) western consumerist modes, the encounter between postcolonial literary theory and new media forms produces quite other conjunctures. To collapse a rich and complex debate along just one of its fault-lines: imperial languages such as English can be treated either as something detachable from their supposed national origin, or as something to refuse altogether.
The former path values hybridity and the claiming of agency within the language of the colonizer. The latter wants to resist this, and sticks up for the unity and coherence of a language and a people. And, just to complicate matters further, this second path, it has to be acknowledged, is also a European idea – the unity and coherence of a people and its language being itself an idea that emerged out of European romanticism.
Much the same fault-line can be found in debates about what to do in the postcolonial situation with the internet, which can also be perceived as western and colonizing – although it might make more sense now to think of it as colonizing not so much on behalf of the old nation-states as on behalf of what Benjamin Bratton calls the stack.
Nakamura draws attention to some of the interesting examples of work on non-western media, including Eric Michaels’ brilliant work on video production among western desert Aboriginal people in Australia, and the work of the RAQS Media Collective and Sarai in India, which reached out to non-English-speaking and even non-literate populations through interface design and community access.
Since her book was published, work has really flourished in the study of non-western uptakes of media, not to mention work on encouraging local adaptations and hybrids of available forms. If one shifts one’s attention from the internet to cellular telephony, one even has to question the assumption that the west somehow leads and other places follow. It may well be the case that most of the world leap-frogged over the cyberspace of the internet to the cellspace of telephony. A recent book by Yuk Hui even asks if there are non-western cosmotechnics, but that’s a topic for another time.
The perfect counterpoint to the old cyberculture idea of online disembodiment is Nakamura’s study of online pregnancy forums – the whole point of which is to create a virtual community for women in some stage of the reproductive process. Here Nakamura pays close attention to ways of representing pregnant bodies. The site she examines allowed users to create their own signatures, which were often collages of idealized images of themselves, their partners, their babies, and – in a most affecting moment – their miscarriages. Sometimes sonograms were included in the collages of the signatures, but these separate the fetus from the mother, and so other elements were generally added to bring her back into the picture.
It’s hard to imagine anything more kitsch. But then we might wonder why masculine forms of geek or otaku culture can be presented as cool when something like this is generally not. By the early 2000s the internet was about 50/50 men and women, and users were more likely to be working class or suburban. After its ‘here comes everybody’ moment, the internet started to look more like regular everyday culture. These pregnant avatars, or ‘dollies’, were more cybertwee than cyberfeminist (not that these need be exclusive categories, of course). But by the early 2000s, “the commercialization of the internet has led many internet utopians to despair of its potential as a site to challenge institutional authority…” (160)
But perhaps it’s a question of reading outside one’s academic habitus. Nakamura: “’Vernacular’ assemblages created by subaltern users, in this case pregnant women, create impossible bodies that critique normative ones without an overt artistic or political intent.” (161) The subaltern in this case can speak, but chooses to speak through images that don’t quite perform as visual cultural studies would want them to. Nakamura wants to resist reading online pregnancy forums in strictly social-science terms, and to look at the aesthetic dimensions. It’s not unlike what Dick Hebdige did in retrieving London youth subcultures from criminological studies of ‘deviance.’
The blind spot of visual cultural studies, at least at the time, was vernacular self-presentation. But it’s hard to deny the pathos of the images these women craft of their stillborn or miscarried children. The one thing that perhaps received the most belated attention in studies of emerging media is how they interact with the tragic side of life – with illness, death and disease. Those of us who have been both on the internet and studying it for thirty years or so now will have had many encounters with loss and grief. We will have had friends we hardly ever saw IRL who have passed or who grieve for those who have passed. IRL there are conventions for what signs and gestures one should make. In online communication they are emerging also.
Nakamura was right to draw attention to this in Digitizing Race, and she did so with a tact and a grace one could only hope to emulate. Nakamura: “The achievement of authenticity in these cases of bodies in pain and mourning transcends the ordinary logic of the analog versus the digital photograph because these bodily images invoke the ‘semi-magical act’ of remembering types of suffering that are inarticulate, private, hidden within domestic or militarized spaces that exclude the public gaze.” (168)
Not only is the body with all its marks and scars present in Nakamura’s treatment, it is present as something in addition to its whole being. “We live more, not less, in relation to our body parts, the dispossession or employment of ourselves constrained by a complicated pattern of self-alienation…. Rather than freeing ourselves from the body, as cyberpunk narratives of idealized disembodiment foresaw, informational technologies have turned the body into property…” (96) Here her work connects up with that of Maurizio Lazzarato and Gerald Raunig on machinic enslavement and the dividual respectively, in its awareness of the subsumption of components of the human into the inhuman.
But for all that, perhaps the enduring gift of this work is, to modify Adorno’s words, to not let the power of another or our own powerlessness – stupefy us. There might still be forms of agency, tactics of presentation, gestures of solidarity – and in unexpected places. Given how internet culture was tending in the decade after Digitizing Race, perhaps it is an obligation now to return the gift of serious and considered attention to our friends and comrades — and not least in the scholarly world. For the tragic side of life is never far away. The least we can do is listen to the pain of others. And speak in measured tones of each other’s small achievements of wit, grace and insight.
On Benjamin Bratton's The Stack
Superstudio, 'Continuous Monument: An Architectural Model for Total Urbanization', 1969
What I like most about Benjamin Bratton’s The Stack: On Software and Sovereignty (MIT Press, 2015) is firstly its close attention to what I would call the forces and relations of production. We really need to know how the world is made right now if it is ever to be remade. Secondly, I appreciate his playful use of language as a way of freeing us from the prison-house of dead concepts. It is no longer enough to talk of neoliberalism, precarity or biopower. What were once concepts that allowed access to new information have become habits. Thirdly, while no friend to bourgeois romantic anti-tech humanism, Bratton has far more sense of the reality of the Anthropocene than today’s accelerationist thinkers. Bratton: “We experience a crisis of ‘ongoingness’ that is both the cause and effect of our species’ inability to pay its ecological and financial debts.” (303)
The category of thing that Bratton studies looks a bit like what others call the forces and relations of production, or infrastructure, but is better thought of as platforms. They are standards-based technical and social systems with distributed interfaces that enable remote coordination of information and action. They are both organizational and technical forms that allow complexity to emerge. They are hybrids not well suited to sociology or computer science. They support markets, but can or could enable non-market forms as well. They are also about governance, and as such resemble states. They enable a range of actions and are to some extent reprogrammable to enable still more.
Platforms offer a kind of generic universality, open to human and non-human users. They generate user identities whether the users want them or not. They link actors, information, events across times and spaces, across scales and temporalities. They also have a distinctive political economy: they exist to the extent that they generate a platform surplus, where the value of the user information for the platform is greater than the cost of providing the platform to those users. Not everything is treated as a commodity. Platforms treat some, often a lot, of information as free, and can rely on gift as much as commodity economies.
Bratton’s particular interest is in stack platforms. The metaphor of the stack comes from computation, where it has several meanings. For example, a solution stack is a set of software components layered on top of each other to form a platform for the running of particular software applications without any additional components. All stacks are platforms, but not all platforms are stacks. A stack platform has relatively autonomous layers, each of which has its own organizational form. In a stack, a user might make a query or command, which will tunnel down from layer to layer within the stack, and then the result will pass back up through the layers to the user.
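The tunneling Bratton describes can be sketched in a few lines of code. This is a toy illustration only, not Bratton’s model: the `Layer` class, the `traverse` function and the simple annotate-on-the-way-through behavior are all invented for the example, which assumes the simplest case where each relatively autonomous layer merely marks the request on the way down and the response on the way back up.

```python
# Toy sketch of a layered stack: a request tunnels down through the
# layers, then a response passes back up through them in reverse order.

class Layer:
    def __init__(self, name):
        self.name = name

    def down(self, request):
        # Each layer annotates the request on its way down.
        return request + [f"down:{self.name}"]

    def up(self, response):
        # And annotates the response on its way back up.
        return response + [f"up:{self.name}"]

def traverse(layers, request):
    """Pass a request down the stack, then a response back up."""
    for layer in layers:                # top to bottom
        request = layer.down(request)
    response = list(request)
    for layer in reversed(layers):      # bottom back to top
        response = layer.up(response)
    return response

# Hypothetical layer names, loosely echoing Bratton's six layers.
stack = [Layer("user"), Layer("interface"), Layer("address"),
         Layer("city"), Layer("cloud"), Layer("earth")]
trace = traverse(stack, ["query"])
```

The point of the sketch is only the shape of the traversal: the column of calls burrows down through every layer and returns through each of them in turn, so the topmost layer is both the first to touch the query and the last to touch the result.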
Bratton expands this metaphor of the stack to planetary scale. The world we live in appears as an “accidental megastructure” made up of competing and colluding stacks. (5) Computation is planetary-scale infrastructure that transforms what governance might mean. “The continuing emergence of planetary-scale computation as meta-infrastructure and of information as an historical agent of economic and geographic command together suggest that something fundamental has shifted off-center.” (3)
The stack generates its own kind of geopolitics, one less about competing territorialities and more about competing totalities. One made up of enclaves and diasporas. It both perforates and hardens borders. It may even enable “alien cosmopolitanisms.” (5) It’s a “crisis of the Westphalian geographic design.” (7) “It is not the ‘state as a machine’ (Weber) or the ‘state machine’ (Althusser) or really even (only) the technologies of governance (Foucault) as much as it is the machine as the state.” (8)
Bratton follows Paul Virilio in imagining that any technology produces its own novel kind of accident. A thought he makes reversible: accidents produce technologies too. Take, for example, the First Sino-Google War of 2009, when two kinds of stack spatial form collided: Google’s transnational stack and the Great Firewall of China. This accident then set off a host of technical strategies on both sides to maintain geopolitical power. Perhaps the stack has a new kind of sovereignty, one that delaminates geography, governances and territory.
In place of Carl Schmitt’s nomos of the earth, Bratton proposes a nomos of the cloud, as in cloud computation, which as we shall see is a crucial layer of the stack. Nomos here means a primary rule or division of territory, from which others stem. Unlike in Brown and other theorists of neoliberalism, Bratton thinks sovereignty has not moved from state to market but to the stack. Schmitt championed a politics of land, people and idea versus liberal internationalism, an idea revived in a more critical vein by Mouffe. But perhaps where sovereignty now lies is in a form that is really neither and on which both depend, a stack platform sovereignty and its “automation of the exception.” (33)
One could read Bratton as a very contemporary approach to the old Marxist methodology of paying close attention to the forces of production. “…an understanding of the ongoing emergence of planetary-scale computing cannot be understood as a secondary technological expression of capitalist economics. The economic history of the second half of the twentieth century is largely unthinkable without computational infrastructure and superstructure…. Instead of locating global computation as a manifestation of an economic condition, the inverse may be equally valid. From this perspective, so much of what is referred to as neoliberalism are interlocking political-economic conditions within the encompassing armature of planetary computation.” (56)
The stack could have been the form for the global commons, but instead became “an invasive machinic species.” (35) “Sovereignty is not just made about infrastructural lines; it is made by infrastructural lines.” (35) Code becomes a kind of law. “This is its bargain: no more innocent outside, now only theoretically recombinant inside…. The state takes on the armature of a machine because the machine, The Stack, has already taken on the roles and register of the state.” (38, 40)
Bratton: “Will the platform efficiencies of The Stack provide the lightness necessary for a new subtractive modernity, an engine of a sustainable counter-industrialization, or will its appetite finally suck everything into its collapsing cores of data centers buried under mountains: the last race, the climate itself the last enemy?” (12) However, “It may be that our predicament is that we cannot design the next political geography or planetary computation until it is more fully designs us in its own image or, in other words, that the critical dependence of the future’s futurity is that we are not yet available for it!” (15)
Bratton’s conceptual object is not just the actually existing stack, but all of its possible variants, including highly speculative ones such as Constant’s New Babylon, and actual but failed or curtailed ones, such as Stafford Beer’s Cybersyn, Ken Sakamura’s TRON, the Soviet Internet and Soviet cybernetics. The actual stack includes such successful technical developments as TCP/IP. This protocol was the basis for a modular and distributed stack that could accommodate unplanned development. It was about packets of information rather than circuits of transmission; about smart edges around a dumb network. TCP/IP was authored as a scalable set of standards.
Bratton thinks infrastructure as a stack platform with six layers, treated in this order: earth, cloud, city, address, interface, user. I think of it more as the four middle layers, which produce the appearance of the user and the earth at either end. I will also reverse the order of Bratton’s treatment, and start with the phenomenology of the user, which is where we all actually come to experience ourselves in relation to the stack.
User layer: A user is a category of agent, a position within a system that gives it a role. We like to think we are in charge, but we might be more like the Apollo astronauts, “human hood ornaments.” (251) It’s an illusion of control. The more the human is disassembled into what Lazzarato and Raunig and others think of as dividual drives, the more special humans want to feel. “In this, the User layer of The Stack is not where the rest of the layers are mastered by some sovereign consciousness; it is merely where their effects are coherently personified.” (252)
For a long time, design thought about the user as a stylized persona. As Melissa Gregg and others have shown, the scientific measurement of labor produced normative and ideal personas of the human body and subjectivity. The same could be said for audience studies. Fordism was an era of the design of labor process and leisure-time media for fictional people. But these personas are no longer needed. As Azuma shows, the stack does not need narrative fictions of and for ideal users but database fictions that aggregate empirical ones.
The stack not only gives but takes data from its users. “User is a position not only through which we see The Stack, but also through which The Stack sees us.” (256) This is the cause of considerable discomfort among users who reactively redraw the boundaries around a certain idea of the human. Bratton is not sympathetic: “… anthropocentric humanism is not a natural reality into which we must awake from the slumber of machinic alienation; rather it is itself a symptomatic structure powered by – among other things – a gnostic mistrust of matter, narcissistic self-dramatization, and indefensibly pre-Copernican programs for design.” (256)
Bratton is more interested in users who go the other way, such as the quantified-self people, who want self-mastery through increasingly detailed self-monitoring of the body by the machine. In Lazzarato’s terms, they are a people who desire their own machinic enslavement. Bratton thinks it is a bit more nuanced than that. There may still be a tension between the cosmopolitan utopias of users and their molding into data-nodes of consumption and labor. Bratton: “To be sure, the bio-geo-politics of all this are ambiguous, amazing, paradoxical, and weird.” (269)
The stack does not care if a user like you is human or not. Bratton is keen to oppose the anthropomorphizing of the machine: “we must save the nonhumans from being merely humans.” (274) Making the inhuman of the machine too akin to the merely human shuts out the great outdoors of the nonhuman world beyond. “We need to ensure that AI agents quickly evolve beyond the current status of sycophantic insects, because seeing and handling the world through that menial lens makes us, in turn, even more senseless.” (278) As one learns from Mbembe, commandment that does not confront an other with its own autonomous will quickly loses itself in ever more hyperbolic attempts to construct a sense of agency, will and desire.
Debate about user ‘rights’ has been limited to the human, and limited to a view of the human merely as endowed with property and privacy rights. Rather like Lefebvre’s right to the city, one needs a right to the stack that includes those without property. One could even question the need to think about information and its infrastructures in property terms at all. Bratton is not keen on the discourse of oedipal fears about the bad stepfather spying on us, resulting in users wanting no part in the public, but to live a private life of self-mastery, paranoia and narcissism. “The real nightmare, worse than the one in which the Big Machine wants to kill you, is the one in which it sees you as irrelevant, or not even a discrete thing to know.” (364) Maybe the user could be more rather than less than the neoliberal subject. The stack need not see us as users. To some extent it is an accommodation to cultural habits rather than a technical necessity.
Interface layer: If one took the long view, one could say that the human hand is already an interface shaped over millennia by tools. That ancient interface now touches very new ones. The interface layer mediates between users and the technical layers below. Interface connects and disconnects; telescopes, compresses or expands layers – routing user actions through columns that burrow up and down through the stack. The Stack turns tech into images and images into tech. “Once an image can be used to control what it represents, it too becomes technology: diagram plus computation equals interface.” (220)
Interfaces are persuasive and rhetorical, nodes among the urban flow. “What is open for me may be closed for you, and so our vectors are made divergent.” (222) Interfaces offer a kind of protocol, or generic threshold. We probe our interfacial condition, being trained through repetition. From the point of view of the interface layer, users are peripheral units of stacks. Or one user: one could think of the Apple stack, for example, as creating a single distributed user for the “Apple experience.”
Kittler thought media as looping the subject into three separate interfaces: cinema (the imaginary), typewriter (the symbolic) and phonograph (the real). Bratton thinks the interface as producing a more schizoid landscape. “The machinic image is qualified by many little sinkholes between the symbolic, the imaginary and the real, and at a global scale of billions of Users…” (225) This might form the basis of a materialist account of what Jodi Dean calls the decline in symbolic efficiency.
Interfaces change not only the form of the subject but the form of labor. “Today, at the withering end of post-Fordism… we observe logistics shifting from the spatially contiguous assembly line to the radically dis-contiguous assemblage line linked internally through a specific interfacial chain. Contemporary logistics dis-embeds production of things from particular sites and scatters it according to the synchronization and global variance in labor price and resource access….” (231)
Interfaces also become more powerful forms of governance over the flows they represent. They have to appear as the remedy for the chaotic flows they themselves cause. Their reductive maps become true through use. They may also be the icons of weird forms of experimental religion, or ways of binding. They can notate the world with friend/enemy borders. The example here is the popular game Ingress, with its ludic Manicheanism, in which users are trained to attack and defend non-existent territories. Fanon once noted that when French colonial power jammed the radio broadcasts of the resistance, Algerians would leave the radio dial on the jammed signal, the noise standing in for it. Bratton wonders how one might update the gesture. Like Gilroy and Karatani in their different ways, Bratton wonders what kinds of universality could emerge, in this case as a kind of abstraction from interfacial particularities and their “synthetic diagrammatic total images.” (297)
However, as Bratton realizes, “We fear a militarization of cognition itself… Enrollment and motivation according to the interfacial closures of a political theological totality might work by ludic sequences for human Users or by competitive algorithmic ecologies for nonhuman Users.” (297) Both human and nonhuman cognition could be assimilated to stack war machines. Or perhaps, one could remake both human and inhuman users into a new kind of polity, in part through interfaces of a different design. “A strong interfacial regime prefigures a platform architecture built of total interfacial images and does so through the repetition of use that coheres a durable polity in resemblance to the model.” (339)
Address layer: Address is a formal system, independent of what it addresses, that denotes singular things through bifurcators such as names or numbers, that can be resolved by a table for routing. Addressing creates generic subjectivity, so why not then also generic citizenship? Address can however give rise to something like fetishism. In Bratton’s novel reading of Marx, capitalism obfuscates the address of labor, treating it as a thing and addressing things in labor’s place as if those things had magical properties.
If we are all users (we humans and inhumans) then a right to the stack is also a right to address, as only that which has an address can be the subject of rights in the “virtual geographic order” of a stack geopolitics. (191) Address is no longer just a matter of discrete locations in a topography. As I put it in Gamer Theory, space is now a topology, which can be folded, stretched and twisted. As Galloway has already shown, the distributed network of TCP/IP is doubled by the centralized hierarchy of DNS, which records who or what is at which address.
Bratton’s interest is in what he calls deep address, modeled on the intertextuality or détournement one sees in the archive of texts, or the architectural thought of Christopher Alexander for whom building was about containers and conveyors for information. Address designates a place for things and enables relations between things; deep address designates also the relations, and then the relations among those relations. Deep address is to address as a derivative is to a contract. It is endless metadata: about objects, then metadata about the metadata about those objects, and so on.
The financialization of addressability may also be a kind of fetishism, mistaking the metadata about a relation for a relation. Deep address as currently implemented makes everything appear to a user configured as a uniquely addressed subject who calls up the earth through the stack as if it were only a field of addressable resources. Hence, “not only is the totality of The Stack itself deeply unstable, it’s not clear that its abyssal scope of addressability and its platform for the proliferation of near-infinite signifiers within a mutable finite space are actually correspondent with the current version of Anthropocenic capitalism.” (213)
However, deep address has become an inhuman affair. Not only are most users not subjects; most of what is addressed may not even be objects. Deep address generates its own accidents. Maybe it is headed toward heat death, or maybe toward some third nature – deep address may outlive the stack. Bratton: “we have no idea how to govern in this context.” (213)
City layer: Beneath the address layer is still the old-fashioned topography it once addressed – the city. Only the city now looks more like Archizoom’s No-Stop City than the static geometries of Le Corbusier. In the city layer absorbed into the stack, mobilization is prior to settlement, and the city is a platform for sorting users in transit. As Virilio noted some time ago, the airport is not only the interface but also the model of the overexposed city.
Like something out of a Ballard story, the city layer is one continuous planetary city. It has a doubled structure. For every shiny metropolis there’s an anti-city of warehouses and waste dumps. The stack subsumes cities into a common platform for labor, energy and information. Proximity still has value, and the economy of the city layer depends on extracting rents from it. Here one might add that the oldest form of ruling class – the rentier class – has found a future not (or not just) in monopolizing that land which yields energy (from farms and mines) but also that which yields information – the city.
Cities are platforms for users rather than polities for citizens. And as Easterling might concur, their form is shaped more by McKinsey or Halliburton than by architects or planners. Architecture becomes at best interface design, where cement meets computation. It is now a laminating discipline, creating means of stabilizing networks, managing access, styling interfaces, mixing envelopes. Cities are to be accessed via mobile phone, which affords parameters of access, improvisation, syncopation.
The ruin our civilization is leaving does not look like the pyramids. It’s a planet wrapped in fiber optic. But perhaps it could be otherwise. “Our planet itself is already the mega-structural totality in which the program of total design might work. The real design problem then is not foremost the authorship of a new envelope visible from space, but the redesign of the program that reorganizes the total apparatus of the built interior into which we are already thrown together.” (182)
Ironically, today’s pharaohs are building headquarters that simulate old forms, be it Google’s campus, Amazon’s downtown or Apple’s weird spaceship. They all deny their spatial doubles, whether it’s Foxconn where Apple’s phones are made or Amazon’s “logistics plantations.” (185) But it is hard to know what a critical practice might be that can intervene now that cities are layers of stack platforms, where each layer has its own architectural form. “Is Situationist cut-and-paste psychogeography reborn or smashed to bits by Minecraft?” (180) Bratton doesn’t say, but it at least nicely frames the kind of question one might now need to ask.
Cloud layer: Low in the stack, below the city layer, is the cloud. It could be dated from the development of Unix time-sharing protocols in the 1970s, from which stems the idea of users at remote terminals sharing access to the same computational power. The cloud may indeed be a kind of power. “As the governing nexus of The Stack, this order identifies, produces and polices the information that can move up and down, layer to layer, fixing internal and external borders and designating passages to and from.” (111)
It may also be a layer that gives rise to unique kinds of conflict, like the First Sino-Google War of 2009, where two stacks, built on different kinds of cloud with different logics of territory and different imagined communities of users, collided. That may be a signal moment in an emerging kind of geopolitics that happens when stacks turn the old topography into a topology. “The rights and conditions of citizenship that were to whatever degree guaranteed by the linking of information, jurisdiction and physical location, all within the interior view of the state, now give way perhaps to the riskier prospects of a Google Grossraum, in which and for which the terms of ultimate political constitution are anything but understood.” (114)
The cloud layer is a kind of terraforming project – here on earth. Clouds are built onto, or bypass, the internet. They form a single big discontinuous computer. They take over functions of the state, cartography being just one example. There are many kinds of clouds, however, built into quite different models of the stack, each with their own protocols of interaction with other layers. Google, Apple and Amazon are stacks with distinctive cloud layers, but so too are WalMart, UPS and the Pentagon.
Some cloud types: Facebook, which runs on the captured user graph. It is a rentier of affective life offering a semi-random newspaper and cinema, strung together on unpaid nonlabor, recognition and social debit. Then there’s Apple, who took over closed experience design from Disney, and offer brand as content. As a theology, Apple is an enclave aesthetic about self-realization in a centralized market. It’s a rentier of the last millimeter of interface to its walled garden.
On the other hand, Amazon is an agora of objects rather than subjects, featuring supply chain compression, running on its own addressing system, with algorithmic pricing and micro-targeting. But even Amazon lacks Google’s universal ambition and cosmopolitan mission, as if the company merely channeled an inevitable quant reason. Google is a corporation founded on an algorithm, fed by universal information liquidity, which presents itself as a neutral platform for humans and inhumans, offering ‘free’ cloud services in exchange for information. “Google Großraum delaminates polity from territory and reglues it into various unblendable sublayers, weaving decentralized supercomputing through increasingly proprietary networks to hundreds of billions of device end-points.” (295)
Despite their variety, to me these clouds are all shaped by the desires of what I call the vectoralist class, which is to extract what Bratton calls “platform surplus value.” (137) But perhaps they are built less on extracting rent or profit so much as asymmetries of information. They attempt in different ways to control the whole value chain through control of information. Finance as liquidity preference may be a subset of the vectoralist class as information preference, or power exercised through the most abstract form of relation, and baked into the cloud no matter what its particular form.
Bratton: “The Cloud polis draws revenue from the cognitive capital of its Users, who trade attention and micro-economic compliance in exchange for global infrastructural services, and it in turn provides each of them with an active, discrete online identity and the license to use that infrastructure.” (295) Maybe this is “algorithmic capitalism” – or maybe (as I argue) it’s not capitalism any more, but something worse. (81) Something Bratton’s innovations in conceptual language help us perceive, but which could be pushed still further.
The current cloud powers were all built out of accidental advantages or contingent decisions. Not without a lot of help from human users, whose unpaid non-labor provides the feedback for their constant optimization. We are all guinea pigs in an experiment of the cloud’s design. But Bratton is resistant to any dystopian or teleological reading of this. The cloud layer was the product of accident as much as design; conflict as much as collaboration. Still, there’s something unsettling about the prospect of the nomos of the cloud. Bratton: “The camp and the bunker, detention and enclave, are inversions of the same architecture.” (368) The nomos of the cloud can switch between them. It is yet to be seen what other topological forms it might enable.
Earth layer: Was computation discovered or invented? Now that the stack produces us as users who see the earth through the stack, we are inclined to project our experience of working and playing with the stack onto the earth itself. It starts to look like a computer, maybe a first computer, from which the second one of the stack is derived. But while earth and stack may look formally similar they are not ontologically identical. Or, as I speculated in A Hacker Manifesto, the forces of production as they now stand both reveal and create an ontology of information that is both historical and yet ontologically real.
Bratton: “The Stack is a hungry machine…” (82) It sucks in vast amounts of earth in many forms. Here Bratton connects to what Jussi Parikka calls a Geology of Media. Everyone has some Africa in their pocket now – even many Africans, although one should not ignore asymmetries in where extractions from the earth happen and where the users who get to do the extracting happen. Bratton: “… there is no Stack without a vast immolation and involution of the Earth’s mineral cavities. The Stack terraforms the host planet by drinking and vomiting its elemental juices and spitting up mobile phones…. How unfamiliar could its flux and churn be from what it is now? At the radical end of contingency, what is the ultimate recomposability of such materials? The answer may depend on how well we can collaborate with synthetic algorithmic intelligence to model the world differently…” (83)
The stack terraforms the earth, according to a seemingly logical but haphazard geodesign – rather like the aesthetics of Superstudio. “As a landscaping machine, The Stack combs and twists settled areas into freshly churned ground, enumerating input and output points and re-rendering them as glassy planes of pure logistics. It wraps the globe in wires, making it into a knotty, incomplete ball of glass and copper twine, and also activating the electro-magnetic spectrum overhead as another drawing medium, making it visible and interactive, limning the sky with colorful blinking aeroglyphs.” (87)
Particularly where the earth is concerned, “Computation is training governance to see the world like it does and to be blind like it is.” (90) But the stack lacks a bio-informational skin that might connect ecological observation to the questioning of resource management. Running the stack now puts more carbon into the atmosphere than the airline industry. If it were a state it would be the fifth largest energy suck on the planet. “Even if all goes well, the emergent mega-infrastructure of The Stack is, as a whole, perhaps the hungriest thing in the world, and the consequences of its realization may destroy its own foundation.” (94)
Hence the big question for Bratton becomes: “Can The Stack be built fast enough to save us from the costs of building The Stack?” (96) Can it host computational governance of ecologies? “Sensing begets sovereignty,” as I showed in Molecular Red in the case of weather and climate. (97) But could it result in new jurisdictions for action? Hence, “we must be honest in seeing that accommodating emergency is also how a perhaps illegitimate state of exception is stabilized and over time normalized.” (103) Because so far “there is no one governance format for climates and electrons,” the space for design is open at all. (104)
Bratton is reluctant to invite everything into Bruno Latour’s parliament of things, as this to him is a coercing of the nonhuman and inhuman into mimicking old-fashioned liberalism. But making the planet an enemy won’t end well for most of its inhabitants. Which brings us to the problem of the stack to come, and Bratton’s novel attempt to write in the blur between what is here but not named and what is named but not really here. Bratton: “Reactionary analog aesthetics and patriotisms, Emersonian withdrawal, and deconstructionist political theology buy us less time and far less wiggle room than they promise….” (81)
What provides the interesting angle of view in Bratton is thinking geopolitics as a design problem. “We need a geopolitics of design that is comfortable not only with computation but also with vertical systems of designation and decision.” (xix) But this is not your usual design problem thinking. “The more difficult assignment for design is to compose relations within a framework that exceeds both the conventional appearance of forms and the provisional human context at hand, and so pursuing instead less the materialization of abstract ideas into real things than the redirection of real relations through a new diagram.” (210)
Designing the stack to come, like any good design studio, does try to start with what is at hand. “Part of the design question then has to do with interpreting the status of the image of the world that is created by that second computer, as well as that mechanism’s own image of itself, and the way that it governs the planet by governing its model of that planet.” (301)
This is not a program of cybernetic closure, but rather of “enabling the world to declare itself as data tectonics…. Can the ‘second planetary computer’ create worlds and images of worlds that take on the force of law (if not its formality) and effectively exclude worse alternatives?” (302) It might start with “a smearing of the planet’s surface with an objective computational film that would construct a stream of information about the performance of our shared socio-natural spaces…” (301)
Contra Latour, but also Haraway and Tsing, for Bratton there is no local, only the global. We’re users stuck with a stack that resulted from “inadvertent geoengineering.” (306) But the design prospect is not to perfect or complete it, but to refashion it to endure its own accidents and support a range of experiments in rebuilding: “the geo-design I would endorse doesn’t see dissensus as an exception.” (306)
It’s not a romantic vision of a return to an earth before the stack. Bratton: “… the design of food platforms as less about preserving the experiential simulation of preindustrial farming and eating… and more like molecular gastronomy at landscape scale.” (306) But it is not a naïve techno-utopianism either. While I don’t think it’s a good name, Bratton is well aware of what he calls cloud feudalism, which uses the stack to distribute power and value upwards. And he is fully aware that the “militarized luxury urbanism” of today’s vectoralist class depends on super-exploitation of labor and resources. (311) At least one novel observation here however is that the stack can have different governance forms at each level. The stack is not one infrastructure, but a laminating of relatively autonomous layers.
Here one might look sideways in a media archaeology vein at other forms of stack that fell by the wayside, from Bogdanov to the attempt to computerize the Soviet Gosplan – which as Bratton notes does not look completely unlike what Google actually achieved. Hayek may have been right in his time that state planning could not manage information better than a market. But maybe neither could manage information as well as a properly designed stack platform. Perhaps, as some Marxists once held, the capitalist ruling class (and then the vectoralist ruling class), perfected the forces of production that make them obsolete. Perhaps in the liminal space of the stack to come one can perceive technical-social forms that get past both the socialist and capitalist pricing problems.
Bratton: “We allow, to the pronounced consternation of both socialist and capitalist realists, that some polypaternal supercomputational descendant of Google Gosplan might provide a mechanism of projection, response, optimization, automation, not to mention valuation and accounting beholden neither to market idiocracy nor dim bureaucratic inertia, but to the appetite and expression of a curated algorithmic phyla and its motivated Users.” (333) Perhaps there’s a way planning could work again, using deep address, but from the edges rather than the center.
This might mean however an exit from a certain residual humanism: “the world may become an increasingly alien environment in which the privileged position of everyday human intelligence is shifted off-center.” (338) Perhaps it’s not relevant whether artificial cognition could pass a Turing test; it is more interesting when it doesn’t. Here Bratton gestures towards the post-human accelerationism of Negarestani and Brassier, but with far more sense of the constraints now involved. “The Anthropocene should represent a shift in our worldview, one fatal to many of the humanities’ internal monologues.” (353)
Bratton: “The Stack becomes our dakhma.” (354) Perhaps a dakhma like the raised platform built by the Zoroastrians for excarnation, where the dead are exposed to the birds. To build the stack to come we have to imagine it in ruins: “design for the next Stack… must work with both the positive assembly of matter in the void, on the plane and in the world, and also with the negative maneuver of information as the world, from its form and through its air.” (358)
To think about, and design, the stack to come, means thinking within the space of what Bratton calls the black stack, which is a “generic profile of its alternative totalities.” (363) It might look more like something out of Borges than out of the oracular pronouncements of Peter Thiel or Elon Musk. Bratton: “Could this aggregate ‘city’ wrapping the planet serve as the condition, the grounded legitimate referent, from which another, more plasmic, universal suffrage can be derived and designed?” (10) Let’s find out.
Next in our tour of new-classic works of media theory, after Jodi Dean and Tiziana Terranova, I turn to Hito Steyerl, and her collected writings The Wretched of the Screen (e-flux and Sternberg Press, 2012). If Dean extends the line of psychoanalytic and Terranova the autonomist strains of (post)marxist thought, Steyerl does something similar for the formalist strategies of a past era of screen theory and practice.
Hito Steyerl is from that era when the politics of culture was all about representation. Sometimes the emphasis was on the question of who was represented; sometimes on the question of how. The latter question drove a series of inquiries and experiments in form. These experiments tended to focus fairly narrowly on things like the logic of cinematic editing, not least because the larger technical and economic form of cinema was fairly stable.
It’s a line of thought that did not survive too well into the current era, which Steyerl alternately calls “audiovisual capitalism,” or “disaster capitalism” or “the conceptual turn of capitalism.” (33, 93, 42) Neither capitalism – if this is still what this is – nor media form seems at all stable any more. (For Lev Manovich media dissolves into software). The formal questions need to be asked again and across a wider tract of forms, and for that matter for new categories of that which might or might not be represented. It may even turn out that the formal questions of media are not really about representation at all.
For this is an era in free-fall without end, to the point where it feels like a kind of stasis. But perhaps this free-fall opens onto a particular kind of vision. For Platonov, the Bolsheviks had taken away from peasant society both the heavens and the land, leaving only the horizon. Like Virilio, Steyerl thinks the horizon has now disappeared as well. Once it was the fulcrum that allowed mariners to find their latitude. But the line they drew was already an abstract one, turning the earth from a continuum of places to a grid-like space.
This abstracted perception of space both affirms and undermines the point of view of the spectator, making all of space appear as if for that point of view, but also making that point of view itself an abstract one. Steyerl doesn’t mention longitude, but one could think here also about how the chronometer created a second axis of abstraction, of time and longitude, to complete the grid, making time as smooth and digital as space. “Time is out of joint, and we no longer know whether we are objects or subjects as we spiral down in an imperceptible free fall.” (26)
Perhaps there is also a shift in emphasis from the horizontal to the vertical axis. This is an era that privileges the elevated view, whether of the drone or the satellite. The spectator’s point of view floats, a “remote control gaze.” (24) If everything is in free-fall then the only leverage is to be on top of the one falling below, a kind of “vertical sovereignty.” (23) It is the point of view of “intensified class war from above.” (26)
It is hard to know what kind of formal tactic might get leverage in such a situation. Steyerl counsels “a fall toward objects without reservation.” (28) This, as it will turn out later, yields some striking ways of approaching the question of the subject as a formal problem as well.
Falling toward the image as a kind of object, Steyerl is drawn to the poor image, or the lumpen-image, the kind that lacks quality but has accessibility, restores a bastard kind of cult value, functions as “a lure, a decoy, an index.” (32) This reverses one of the received ideas of the cinema-studies subset of screen studies, where “resolution was fetishized as if its lack amounted to castration of the author.” (35) Steyerl celebrates instead Kenneth Goldsmith’s Ubuweb, which makes a vast archive of the avant-garde available free online (but out of sight of Google) at low resolution.
The poor image is cousin to Julio Garcia Espinosa’s manifesto ‘For an imperfect cinema,’ and as such is one of Steyerl’s many reworkings, indeed détournements, of classic ‘moves’ from the formalist critical playbook. Like Third Cinema before it, the poor image works outside the alignment of high quality with high class.
The poor image is not something that can just be celebrated. It is also the vehicle for hate speech and spam. The poor image is a democratic one, but in no sense is the democratic an ideal speech situation. Maybe it’s more like what circulates in and between Hiroki Azuma’s databases. Poor images “express all the contradictions of the contemporary crowd: its opportunism, narcissism, desire for autonomy and creation, its inability to focus or make up its mind…” (41) Their value is defined by velocity alone. “They lose matter and gain speed.” (41)
But there are other contradictions inherent in the poor image. It turns out that dematerialized art works pretty well with conceptual capitalism. The tendency of the latter may well be to insist that even non-existent things must be private property. So, for example, Uber is now a major transport company yet it owns no vehicles; Amazon is a major retailer that owns no stores. Dematerialization is in my terms the strategy of the vectoral class. Or rather, not so much a dematerialization as a securing of control through the information vector, and thus an exploit based on certain properties of ‘matter’ of fairly recent discovery.
On the other hand, the poor image can circulate in anonymous networks, yielding anomalous shared histories. Perhaps it produces Vertov’s visual bonds in unexpected ways. Steyerl’s figure for this is David Bowie’s song ‘Heroes’, where the hero is not a subject but an object, the image-as-object – that which can be copied, that which struggles to become a subject. But maybe instead the hero is that which Mario Perniola, after Benjamin, calls a “thing that feels” (50).
In the era of free-fall, perhaps trauma is a residue of the independent subject, “the negativity of the thing can be discerned by its bruises.” (52) Things condense violence just as much as subjects; things condense violence just as much as desire. “The material articulation of the image is like a clone of Trotsky walking around with an icepick in his head.” (53) The thing is an object but also a fossil, a form that is an index of the circumstances of its own death.
Again, Steyerl will reach here for a détournement of the old tactics. In this case it is Alexander Rodchenko, for whom all things could be comrades, just as for Platonov all living things could be comrades. In free-fall, the living and non-living things might be a heap rather than a hierarchy. “History, as Benjamin told us, is a pile of rubble. Only we are not staring at it from the point of view of Benjamin’s shell-shocked angel. We are not the angel. We are the rubble.” (56)
Steyerl thinks not just the micro-scale of the image but also the macro scale of the institution, and asks questions with a similar formal lineage. From the point of view of screen studies, the museum is now a white cube full of black boxes. You pass under a black curtain into a usually quite uncomfortable little black box with a bench to sit on and bad sound. The black-box in white-cube system is for Steyerl not just a museum any more but also a factory. Strangely enough they are now often – like Tate Modern or DIA Beacon – located in former industrial sites.
Steyerl: “the white cube is… the Real: the blank horror and emptiness of the bourgeois interior.” (62) Or was. Now it is a sort of a-factory, producing transformations in the feelings of subjects rather than in the form of objects. The museum even functions temporally like work does in the over-developed world. Once upon a time the factory and the cinema disciplined bodies to enter, perform their function, and leave at the same time. Now the museum allows the spectator to come and go, to set their own pace – just like contemporary occupations where the worker is ‘free’ so long as the contract gets done on time.
There is a sort of illusion of sovereignty in both cases, where one can appear to be ‘on top of things’. The spectator in the museum is as in charge as the artist, the curator or the critic. Or so it appears. It’s a sort of public sphere in negative, which does not actually foster a conversation, but rather produces the appearance of a public. Maybe it is something like the snob’s means of sustaining desire, identity and history, as Azuma would have it.
The labor of spectating in today’s museums is always incomplete. No one viewer ever sees all the moving images. Only a multiplicity of spectators could ever have seen the hours and hours of programming, and they never see the same parts of it. It is like Louis Althusser’s interpellation in negative. There’s no presumption of an ideological apparatus directly hailing its subjects. Rather it’s an empty and formal gesture, an apparatus that does not call ‘Hey you!’ but rather ‘Is anybody there?’, and usually gets no answer. The videos loop endlessly in mostly empty blackened rooms.
Expanding the scale again, Steyerl considers the white cube-black box system as a now international system. No self-respecting part of the under-developed world is without its copy of these institutions of the over-developed world. The art world’s museum-authenticated cycles of “bling, boom and bust” are now a global phenomenon. (93) A placeless culture for a placeless space and time in free-fall.
“If contemporary art is the answer, the question is, how can capitalism be made more beautiful?” (93) Contemporary art mimics the values of a ruling class that I think may no longer quite be described as bourgeois and is perhaps not even capitalist any more. Both imagine themselves to be “unpredictable, unaccountable, brilliant, mercurial, moody, guided by inspiration and genius.” (94)
The abstract, placeless space of international contemporary art is populated by “jpeg virtuosos,” “conceptual impostors” and “lumpen freelancers” all “hard wired and thin skinned” and trying to work their way in. (95, 96, 100) Not to mention the women whose affective labor holds the whole art world together.
For Steyerl there’s a new kind of shock worker, a post-Soviet kind used to churning out affects and percepts, living on adrenalin, deadlines, exhaustion and an odd kind of quota. Steyerl seems to think they cannot really be defined in class terms. But perhaps they are just a niche version of what I call the hacker class, adding transformations to information but who do not own the means of realizing the value of what they produce.
Here Steyerl usefully augments our understanding of the contemporary production of subjects. The art worker subcategory of the hacker class makes visible certain traits that may occur elsewhere. Some of them, for example, no longer have work so much as occupations. These keep people busy but lack the classic experience of alienation. Work time is not managed the old Fordist way, nor is there a recognizable product into which the worker’s labor has been estranged.
The workers from the 60s on revolted against alienation. “Capital reacted to this flight by designing its own version of autonomy: the autonomy of capital from workers.” (112) The artist is a person who refuses the division of labor into jobs. To be an artist is not to work but to have an occupation. As Franco Berardi argues, it forecloses the possibility of alienation as traditionally understood.
The dream of the historic avant-gardes was to merge art and everyday life under the sign of an expanded concept of use value. Like all utopian projects, it came true, but with a twist. Art and the everyday merged, but under the sign of exchange value. Contemporary art became an aesthetic and economic project but not a political one. The Capital-A Artist is a mythic figure, a creative polymath as legitimation for the amateur entrepreneur, who does not know much about the forces of production but can read a spreadsheet and talk a good game.
Perhaps occupation is a category that could be pushed in other directions. Steyerl gestures towards the occupiers of the New School and their attempt – brief though it was – to occupy time and space in a different way, picking up where the Situationist International left off with their constructed situations.
But for many the free-lance life is not one in which to realize one’s dreams. It is a part of a world of freedom, but of freedom from social bonds, from solidarity, from culture, education, from a public sphere. The term free-lance probably comes from Walter Scott’s Ivanhoe (1820), and meant a mercenary rather than a casual worker. In Kurosawa’s Yojimbo (1961) the mercenary plays one side against the other for the common good. Perhaps there’s a hint there at strategies against the ‘sovereign’ powers of the present.
In the legal concept of the king’s two bodies, the actual body of the sovereign, which is mortal, is doubled by a formal body, which is immortal. Steyerl’s détournement of the concept proposes thinking both the actual and formal bodies as dead. A dead form is kept half alive by dead subjects. Maybe it’s a kind of negative sovereignty.
Here Steyerl focuses on the unidentified dead of the civil wars, from Spain to Turkey. The unidentified dead “transgress the realms of civil identity, property, the order of knowledge, and human rights alike.” (151) Like poor images, the unidentified dead remain unresolved. “Their poverty is not a lack, but an additional layer of information, which is not about content but form. This form shows how the image is treated, how it is seen, passed on, or ignored, censored, and obliterated.” (156) In an era obsessed with the surveillance of the subject of both state and corporate power, Steyerl locates a dread exception.
A different kind of negative or blank subject comes up in Steyerl’s study of spam. “According to the pictures dispersed via image spam, humanity consists of scantily dressed degree-holders with jolly smiles enhanced by orthodontic braces.” (161) They are “a reserve army of digitally enhanced creatures who resemble the minor demons and angels of mystic speculation…” (163) Image-spam is addressed to humans but does not really show them. It is “an accurate portrayal of what humanity is not. It is a negative image.” (165)
As is Steyerl’s habit, here again she pushes this reading further, to ask whether these non-humans of image-spam could be the model for a kind of refusal or withdrawal, avatars for a desire to escape from visual territory. Contra Warhol: today everyone can be invisible for fifteen minutes. It’s a walkout: “it is a misunderstanding that cameras are tools of representation; they are at present tools of disappearance. The more people are represented the less is left of them in reality.” (168) The abstraction built into modern modes of perception marks the position of the subject as the central node in the visual field. But rather than stress its relentless centrality, Steyerl points towards the uses of its other quality – its abstract inhuman quality.
In an era where political art is reduced to “exotic self-ethnicization, pithy gestures, and militant nostalgia,” perhaps there’s a secret path through the free-fall world of floating images and missing people. (99) “Any image is a shared ground for action and passion, a zone of traffic between things and intensities.” (172) The poor image can perhaps be a visual bond in negative, marking through its additional layers of information some nodes in an abstract space where we choose not to be.
One way of thinking about the figure of the modern is that it was about relations between a past and a future that was never reversible or cyclical. Whether good or bad, the future was in a relation of difference to a past. But art is no longer modern, it is now contemporary, in which art has a relation only to the present, and to other art with which it synchronizes, in the present. Perhaps what Steyerl is attempting is to open a difference between the modern and the contemporary. It does not work quite as the internal difference within the modern did, but at least it opens up a space where historical thought, feeling and sensation might live again, if only in negative. As the residue of futures past.
by McKenzie Wark
What are the classic texts for constituting a critical theory for the twenty-first century? How would one go about choosing them? Perhaps it would be a matter of making a fresh selection from the past, even the recent past, according to the agenda for thought and action that confronts us today.
That agenda seems to me to have at least three major features. The first is the Anthropocene. One can no longer bracket off nature from the social, and construct a theory exclusively on the terrain of the social. The second is the role of information in both production and reproduction. One can no longer just assume that either capital marches on as in the age of the steam engine, or that the superstructures too are not affected by such vulgar questions. The third would be a shift away from Eurocentric concerns. World history appears to be made elsewhere now.
If the first of these agenda items steered us towards Paul Burkett’s work, the second points towards some very prescient attempts to think the age of information. I want to start with Tiziana Terranova’s Network Culture: Politics for the Information Age (Pluto Press, London, 2004), which brings together the resources of the Italian autonomist thinkers with Deleuze and Guattari. It is a position I used to be close to myself but have moved away from. So this appreciation of Terranova’s classic text is also in some respects an inquiry into both what reading Marx with Deleuze enabled but also where the limits of such a conjunction might lie.
Terranova starts by taking a certain distance from the ‘postmodern’ habits of thought characteristic of late twentieth century writing. This was a time when texts like Paul Virilio’s Information Bomb were popular, and encouraged a certain delirious end-of-the-world-as-we-know-it talk. If the industrial era tech of mechanically reproducible media were on the way out, then everything was supposedly becoming unreal and out of control. Welcome to what Jean-François Lyotard called the ‘immaterial’.
And that was the more respectable end of that talk. Neo-gnostic sects such as the extropians really thought they could leave their bodies behind and upload consciousness into computers. The cyberpunks wanted to ‘jack in’ and at least temporarily leave ‘meatspace’. Honestly, people really did lose their shit over the beginnings of the end of mechanical and broadcast media, as tends to happen in all such transitions in media form, as historians of media culture well know.
Contrary to certain now popular narratives by latecomers, not everybody went gaga over ‘new media’ in that period, which stretches perhaps from the popularization of cyberpunk in 1984 to the death of the internet as a purely scientific and military media in 1995. There were plenty of experimental, critical and constructivist minds at work on it. I would count Terranova as a fine exponent of the constructivist approach, a sober builder of useful theory that might open spaces for new practices in the emerging world of post-broadcast media flux.
Terranova: “I do not believe that such information dynamics simply expresses the coming hegemony of the ‘immaterial’ over the material. On the contrary, I believe that if there is an acceleration of history and an annihilation of distances within an information milieu, it is a creative destruction, that is a productive movement that releases (rather than simply inhibits) social potentials for transformation.” (2-3) It became a question, then, of a level-headed analysis of the tendencies at work.
It helps not to make a fetish of just one aspect of media form, whether one is talking about the ‘internet’ back then, or ‘big data’ now. Sometimes these are aspects of more pervasive technological phyla. Terranova: “Here I take the internet to be not simply a specific medium but a kind of active implementation of a design technique able to deal with the openness of systems.” (3) This might be a useful interpretive key for thinking how certain now-dominant approaches to tech arose. Her approach is not limited to tech, however, but studies concepts and milieu as well as techniques.
Lyotard sent everyone off on a bum steer with the idea of the immateriality of information, a problem compounded by Jameson’s famous assertion that the technics of late capitalism could not be directly represented, as if the physics of heat engines was somehow clearer in people’s heads than the physics of electrical conductivity. Terranova usefully begins again with a concrete image of information as something that happens in material systems, and thinks them through the image of a space of fluid motion rather than just as an end-to-end line from sender to receiver.
Anybody who studied communication late last century would have encountered some version of the sender -> code -> channel -> receiver model, with its mysterious vestigial term of ‘context’. Stuart Hall complicated this by adding a possible difference between the encoding and the decoding, thus making the non-identity of the message at either end a function not of noise as something negative but of culture as a positive system of differences. But even so, this way of thinking tended to make a fetish of the single, unilinear act of communication. It ended up as an endless argument over whether the sender’s power dominated the receiver’s, or whether the receiver had an independent power of interpretation. That was what the difference between the Frankfurt school’s followers and the Birmingham school of Hall et al boiled down to.
Terranova usefully brackets off the whole language of domination and hegemony, moving the discussion away from the privileged questions of meaning and representation that the humanities-trained love so much. She insists we really take seriously the breakthrough of Claude Shannon’s purely mathematical theory of information of 1948.
Information actually means three different things in Shannon. It is (1) a ratio of signal to noise, (2) a statistical measure of uncertainty, and (3) a non-deterministic theory of causation. He developed his theory in close contact with engineers working on communication problems in telephony at Bell Labs, one of the key sites where our twenty-first century world was made.
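Shannon’s second sense, information as a statistical measure of uncertainty, can be made concrete in a few lines. This is an illustrative sketch of the standard entropy formula, not anything drawn from Terranova’s text:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)): average uncertainty in bits.

    A uniform distribution (maximum novelty) maximizes H; a skewed
    distribution (more redundancy) lowers it.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally uncertain: 1 bit per toss.
print(shannon_entropy([0.5, 0.5]))  # 1.0

# A heavily biased coin is more redundant, hence less informative
# (about 0.47 bits per toss).
print(shannon_entropy([0.9, 0.1]))
```

On this view, the ratio of novelty to redundancy that Steyerl and Terranova invoke is just a property of a probability distribution, prior to any question of meaning.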
The signal to noise problem arose out of attempts to amplify telephony signals for long distance calls, where the additional energy used to amplify the signals also shows up as additional noise. This, incidentally, is where one sees how the experience of information as ‘immaterial’ is actually an effect produced by decades of difficult engineering. It takes energy to make a signal pass through a copper wire, making the electrons dance in their predictable but non-deterministic way. The energy leaks into the signal as a source of noise.
What was crucial about Shannon’s approach to this problem was to separate out the concept of information from having anything to do with ‘meaning’. Information is not ‘text’ or ‘language’. It is just a ratio of novelty and redundancy. “From an informational perspective, communication is neither a rational argument nor an antagonistic experience…” (15) It has nothing to do with communication as domination or resistance, and it has nothing to do either with Habermasian communicative action or Lyotard’s language games. Before one even gets to any such hermeneutic theories, one has to deal with the specific materiality of information, which was in effect Shannon’s unique contribution.
For information to be transmitted, it has to confront the demon of noise. In this approach sender and receiver appear as nodes cooperating against noise rather than as dialectical opposites. But Terranova does not adopt Shannon without modification. Rather she follows Gilbert Simondon’s critique of information theory. Simondon points out that in Shannon, the individual sender and receiver are pre-constituted. They just appear as such, prior to the act of communication. Simondon’s approach picks up the vestigial concept of ‘context’. For him, the act of communication is also what constitutes the sender and receiver as such. His approach is to think the context as the site where information produces individuations out of a collective, undifferentiated context.
For Terranova, this is a step toward thinking the space of information as a more turbulent, metastable system that can be disturbed by very small events: “the informational dimension of communication seems to imply an unfolding process of material constitution that neither the liberal ethics of journalism nor the cynicism of public relations officers really address.” (19) This touches on the problem of causation. Neither the liberal-rationalist nor the cynical-manipulative approach to communication really holds up to much scrutiny. Which reminds me of the advertising guru David Ogilvy, quoting one of his clients: “I know only half the advertising I pay for works, but I just don’t know which half.”
Terranova points the way to thinking information in a broader context than struggles over meaning. It could bear on problems of the organization of perception and the construction of bodily habits. It could be a way of framing a problem of the informational redesign of the whole field of media and culture. Information design could be about more than messages defeating noise: it could be about designing fields of possibility.
The grand obsession of the first wave of information researchers and engineers was the elimination not only of noise, but of ambiguity. As Paul Edwards has shown in The Closed World (1996), even when they did not actually work any better, closed, digital systems were preferred over analog ones, particularly in military-funded projects.
For Terranova, what might be a constructive project in the wake of that is some kind of information culture that does not enforce a cut in advance in the fabric of the world, and then reduce its manipulation to a set of predictable and calculable alternatives. “Informational cultures challenge the coincidence of the real with the possible.” (20) Information systems reduce material processes to closed systems defined by the relation between selection (the actual) and the field of possibilities (the virtual), but where that field appears in an impoverished form. “Information thus operates as a form of probabilistic containment and resolution of the instability, uncertainty and virtuality of a process.” (24)
Interestingly, Terranova’s approach is less about information as a way of producing copies, and more about the reduction of events to probabilities, thus sidestepping the language of simulation, although perhaps also neglecting somewhat the question of how information challenged regimes of private property. The emphasis is much more on information as a form of control for managing and reproducing closed systems. This then appears as a closure of the horizon of radical transformation. Instead of future societies we have futures markets. “A cultural politics of information thus also implies a renewed and intense struggle around the definition of the limits and alternatives that identify the potential for change and transformation.” (25)
This would be a cultural politics of the probable, the possible and real. “What lies beyond the possible and the real is thus the openness of the virtual, of the invention and the fluctuation, of what cannot be planned or even thought in advance, of what has no real permanence but only reverberations… The cultural politics of information involves a stab at the fabric of possibility.” (27) It does not arise out of negation, out of a confrontation with techno-power as an other. It is rather a positive feedback effect.
There was a time when I would have shared some of this language and this project. But I think what became of an engagement with Deleuze was in the end an extension of a certain kind of romanticism, even if it is one that finds its magical other domain now immanent to the mundane world of things. Deleuze was enabling at the time, with his constructivist approach to building concepts that work across different domains. But he was a bit too quick to impose a philosophical authority on top of those other domains, as if everything was grist to the mill of a still-universalizing discourse of concept-making. It fell short of a genuine epistemological pluralism, where those other domains of knowledge and practice could push back and insist on their own protocols. Nowhere is this clearer than in Donna Haraway’s demolition job in When Species Meet (2008) of Deleuze and Guattari’s metafictional writing about wolf packs. Sometimes one has to cede the ground to people who actually know something about wolves. Perhaps I’ll pick this up again in a separate post on Terranova’s very interesting chapter on biological computing.
One of the more powerful features of the theory of information in Shannon and since is the way it linked together information and entropy. Thermodynamics, that key to the scientific worldview in Marx’s era, offered the breakthrough of an irreversible concept of time, and one which appeared as a powerful metaphor for the era of the combustion engine. In short: heat leaks, energy dissipates. Any system based on a heat differential eventually ‘runs out of steam.’
Hence the figure of Maxwell’s Demon, which could magically sort the hot particles out from the cool ones, and prevent an energy system from entropic decline into disorder. But that, in a sense, is exactly what information systems do. The tendency of things might still be entropic: systems dissipate and break down. But there might still be neg-entropic counter-systems that can sort and order and organize. Such might be an information system. Such might also be, as Joseph Needham among many others started to think, what is distinctive about living systems.
Needham’s organicism borrowed from the systems-theory of Bertalanffy which pre-dates Shannon, and was based a lot more on analog thinking, particularly the powerful image of the organizing field. Much more influential was the transposition of the thought-image of the digital to the question of how life is organized as neg-entropic system, resulting in what for Haraway in Modest_Witness (1997) is a kind of code fetishism. What is appealing to Terranova in the confluence of biological and information thinking is the way it bypassed the humanistic subject, and thought instead toward populations at macro and micro scales.
But in some ways Terranova is not that far from Haraway, even though Haraway makes almost no appearance in this text. Where they intersect is in the project of understanding how scientific knowledge is both real knowledge and shot through with ideological residues at the same time: “An engagement with the technical and scientific genealogy of a concept such as information… can be actively critical without dis-acknowledging its power to give expression and visibility to social and physical processes… Information is neither simply a physical domain nor a social construction, nor the content of a communication act, nor an immaterial entity set to take over the real, but a specific reorientation of forms of power and modes of resistance.” (37) While I would want to pause over the word ‘resistance’, this seems to me a usefully nuanced approach.
I am a bit more skeptical these days about the will to impute a domain of otherness as a sort of immanent plane. Thus, while Terranova acknowledges the power of Manuel Castells’s figure of the network as a space of flows coming to dominate a space of places, she wants to retain a strong sense of radical possibility. One way she does so is by appealing to Bergson’s distinction between a quantified and a qualified sense of time, where time as quality, as duration, retains primacy, offering the promise of a “virtuality of duration.” (51)
But is this not yet another offshoot of romanticism? And what if it was really quite the other way around? What if the figure of time as quality actually depended on measurable, quantitative time? I’m thinking here of Peter Galison’s demonstration of how the engineering feat of electrically synchronized time, so useful to the railways, enabled Einstein to question the metaphysical time and space that was the backdrop to Newton’s mechanics. As Galison shows, it is only after you can actually distribute a measure of clock time pretty accurately between distant locations that you can even think about how time might be relative to mass and motion.
It is certainly useful that Terranova offers a language within which to think a more elastic relation between the information in a network and the topology of that network itself. It isn’t always the case that, as with Shannon’s sender and receiver, the nodes are fixed and pre-constituted. “A piece of information spreading throughout the open space of the network is not only a vector in search of a target, it is also a potential transformation of the space crossed that always leaves something behind.” (51)
This more elastic space, incidentally, is how I had proposed thinking the category of vector in Virtual Geography (1995). In geometry a vector is a line of fixed length but of no fixed position. Thus one could think it as a channel that has certain affordances, but which could actually be deployed not only to connect different nodes, but sometimes to even call those nodes into being. Hence I thought vector as part of a vector-field, which might have a certain malleable geometry, but where what might matter is not some elusive ‘virtual’ dimension, but the tactics and experiments of finding what it actually affords.
Terranova stresses the way the internet functions as an open system, with distributed command functions. It was in this sense not quite the same as the attempts to build closed systems of an early generation of communication engineers: “resilience needs decentralization; decentralization brings localization and autonomy; localization and autonomy produce differentiation and divergence.” (57) The network, like empire, is tolerant of differences, and inclusive up to a point – but also expansionist. As Terranova notes, rather presciently, “There is nothing to stop every object from being given an internet address that makes it locatable in electronic space.” (62)
In short, the internet starts to acquire the properties of a fully-realized vector-field: “Unlike telegraphy and telephony… the communication of information in computer networks does not start with a sender, a receiver and a line, but with an overall information space, constituted by a tangle of possible directions and routes, where information propagates by autonomously finding the lines of least resistance.” (65) “In a packet-switched network… there is no simple vector or route between A… and B…” (67) But I think it’s still helpful to think of it as a vector-field, in that each of those routes still has fairly fixed affordances.
Terranova was a pioneer in understanding that the build-out of an apparatus of which information theory was the concept had significant implications for rethinking the work of culture and politics. “There is no cultural experimentation with aesthetic forms or political organization, no building of alliances or elaboration of tactics that does not have to confront the turbulence of electronic space. The politics of network culture are thus not only about competing viewpoints, anarchistic self-regulation and barriers to access, but also about the pragmatic production of viable topological formations able to persist within an open and fluid milieu.” (68)
She notes in passing some of the experiments of the late twentieth century in “network hydrodynamics” (69) such as the Communitree BBS, Andreas Broeckmann’s Syndicate list-serv, Amsterdam’s Digital City, and the rhizome list-serv. All of these fell apart one way or another, even if many others lived on and even mutated. Much of the functionality of today’s social media derives from these early experiments. Geert Lovink has devoted several books now to documenting what is living and what is dead in the experimental culture and politics of networks.
Terranova was also prescient in asking questions about the ‘free labor’ that was just starting to become a visible feature of network cultures at the time she was writing. She reads this through the autonomist-Marxist figure of the shift of work processes from the factory to society, or ‘the social factory.’ I sometimes wonder if this image might be a bit too limiting. A lot of free labor in the ‘nets looks more like the social office, or even like a social boudoir. Rather than the figure of the social as factory, it might be more helpful to think of a dismantling and repartitioning of all institutionalized divisions of labor under the impact of networked communication.
Still, it was useful at the time to insist on the category of labor, at a time when it was tending towards invisibility. One has to remember that ten years ago there was a lot more celebration of the ‘playful’ contributions of things like fan cultures to the net. Henry Jenkins’ repurposing of something like the Birmingham school’s insistence on popular agency would be a signal instance of this. Terranova: “The internet does not automatically turn every user into an active producer, and every worker into a creative subject.” (75)
In a 1998 nettime.org post, Richard Barbrook suggested that the internet of that era had become the site for a kind of post-situationist practice of détournement, of which nettime itself might not have been a bad example. Before anybody had figured out how to really commodify the internet, it was a space for a “high tech gift economy.” Terranova thinks Barbrook put too much emphasis on the difference between this high tech gift economy and old fashioned capitalism. But perhaps it might be helpful to ask whether, at its commanding heights, this still is old fashioned capitalism, or whether the ruling class itself may not have mutated.
Certainly, the internet became a vector along which the desires that were not recognizable under old-style capitalism chose to flee. Terranova: “Is the end of Marxist alienation wished for by the management gurus the same thing as the gift economy heralded by leftist discourse?” (79) Not so much. Those desires were recaptured again. I don’t know who exactly is supposed to have fallen for “naïve technological utopianism” (80) back in the 90s, apart from the extropians and their fellow travellers. In the main I think a kind of radical pragmatism of the kind advocated by Geert Lovink reigned, in practice at least. We were on the internet to do with it what we wanted, what we could, for as long as it lasted, or as long as we could make it last, before somebody shut the party down.
For a long time now there’s been a tension over how to regard what the internet has done to labor. Even in the 90s, it was not uncommon to see attacks on the elitism and rabid libertarianism of hacker culture – as if there weren’t complexities and internal divisions within that culture. Such a view renders certain less glamorous kinds of new work less visible, and also shuts down thinking about other kinds of agency that new kinds of labor might give rise to. Terranova: “it matters whether these are seen as the owners of elitist cultural and economic power or the avant-garde of new configurations of labor which do not automatically guarantee elite status.” (81)
Terranova’s Network Culture provided an early introduction in the Anglophone world to the work of Maurizio Lazzarato, but I always thought that his category of immaterial labor was less than helpful. Since I agree with Terranova’s earlier dismissal of the notion of information as ‘immaterial’, I am surprised to see her reintroduce the term to refer to labor, which if anything is even more clearly embedded in material systems.
For Lazzarato and Terranova, immaterial labor refers to two aspects of labor: the rise of the information content of the commodity, and the activity that produces its affective and cultural content. Terranova: “immaterial labor involves a series of activities that are not normally recognized as ‘work’ – in other words, the kinds of activities involved in defining and fixing cultural and artistic standards, fashions, tastes, consumer norms, and more strategically, public opinion.” (82) It is the form of activity of “every productive subject within postindustrial societies.” (83)
Knowledge is inherently collaborative, hence there are tensions in immaterial labor (but other kinds of labor are collaborative too). “The internet highlights the existence of networks of immaterial labor and speeds up their accretion into a collective entity.” (84) An observation that would prove to be quite prescient. Immaterial labor includes activities that fall outside the concept of abstract labor, meaning time used for the production of exchange value, or socially necessary labor time. Immaterial labor imbues the production process with desire. “Capital wants to retain control over the unfolding of these virtualities.” (84)
Terranova follows those autonomist Marxists who have been interested in the mutations of labor after the classic factory form, and like them her central text is Marx’s ‘Fragment on Machines’ from the Grundrisse. The autonomists base themselves on the idea that the general intellect, or ensemble of knowledge, constitutes the center of social production, but with some modification. “They claim that Marx completely identified the general intellect (or knowledge as the principal productive force) with fixed capital (the machine) and thus neglected to account for the fact that the general intellect cannot exist independently of the concrete subjects who mediate the articulation of the machines with each other.” (87) For the autonomists, living labor is always the determining factor, here recast as a mass intellectuality. (See here for a different reading of the ‘Fragment on Machines’)
The autonomists think that taking the labor point of view means to think labor as subjectivity. Living labor alone acts as a kind of vitalist essence, of vast and virtual capacities, against which capital is always a reactive and recuperative force. This is in contrast to what the labor point of view meant, for example, to Bogdanov, which is that labor’s task is not just to think its collective self-interest, but to think about how to acquire the means to manage the whole of the social and natural world, but using the forms of organizing specific to it as a class.
From that point of view, it might be instructive to look to the internet for baby steps in self-organization, or at what Terranova calls free labor, and of how it was exploited in quite novel ways. “Free labor is a desire of labor immanent to late capitalism, and late capitalism is the field which both sustains free labor and exhausts it. It exhausts it by undermining the means through which that labor can sustain itself: from the burn-out syndromes of internet start-ups to under-compensation and exploitation in the cultural economy at large.” (94)
Here I think it is helpful not just to assume that this is the same ‘capitalism’ as in Marx’s era. The internet was the most public aspect of a whole modification of the forces of production, which enabled users to break with private property in information, to start creating both new code and new culture outside such constraints. But those forces of production drove not just popular strategies from below, but also enabled the formation of a new kind of ruling class from above. One based on extracting not so much surplus labor as surplus information.
It is quite scandalous how much theory-talk still retails metaphors based on 19th century worldviews. As if what we can know about the world had not undergone several revolutions since. Hence if one were to look for a #Theory21c it would have to start with people who at least engage with the technical and scientific languages of our times. One example would be Tiziana Terranova’s Network Culture (Pluto Press 2004). I looked back over the bulk of the book in a previous post. This one takes up her engagement with the theories and sciences of biological computing.
This is perhaps the most interesting part of Network Culture. Terranova extends the Deleuzian style of conceptual constructivism to scientific (and other) languages that are interested in theories and practices of soft control, emergent phenomena and bottom-up organization. Her examples range from artificial life to mobile robotics to neural networks. All of these turned out to be intimations of new kinds of productive machines.
There is a certain ideological side to much of this discourse; however, “… the processes studied and replicated by biological computation are more than just a techno-ideological expression of market fundamentalism.” (100) They really were and are forms of a techno-science of rethinking life, and not least through new metaphors. No longer is the organism seen as one machine. It becomes a population of machines. “You start more humbly and modestly, at the bottom, with a multitude of interactions in a liquid and open milieu.” (101)
For example, in connectionist approaches to mind, “the brain and the mind are dissolved into the dynamics of emergence.” (102) Mind is immanent, and memories are Bergsonian events rather than stored images. These can be powerful and illuminating figures to think with.
But maybe they are still organized around what Bogdanov would call a basic metaphor that owes a bit too much to the unreflected experience of bourgeois culture. It just isn’t actually true that Silicon valley is an “ecosystem for the development of ‘disruptive technologies’ whose growth and success can be attributed to the incessant formation of a multitude of specialized, diverse entities that feed off, support and interact with one another,” to borrow a rather breathless quote from some starry-eyed urban researchers that Terranova mentions. (103) On the contrary, Silicon valley is a product of American military-socialism, massively pump-primed by Pentagon money.
Terranova connects the language of biological computing to the Spinozist inclinations of autonomist theory: “A multitude of simple bodies in an open system is by definition acentered and leaderless.” (104) And “A multitude can always veer off somewhere unexpected under the spell of some strange attractor.” (105) But I am not sure this works as a method. Rather than treat scientific fields as distinct and complex entities, embedded in turn in ideological fields in particular ways, Terranova selects aspects of a scientific language that appear to fit with a certain metaphysics adhered to in advance.
Hence it can be quite fascinating and illuminating to look at the “diagonal and transversal dynamics” (105) of cellular automata, and admire at a distance how a “bottom-up system, in fact, seems to appear almost spontaneously….” (105) But perhaps a more critical approach might be the necessary complement. What role does infrastructure play in such systems? What role does an external energy source play? It is quite possible to make a fetish of a bunch of tiny things, such that one does not see the special conditions under which they might appear ‘self’ organizing.
As much as I revere Lucretius and the Epicureans, it seems to me to draw altogether the wrong lesson from him to say that “In this sense, the biological turn entails a rediscovery, that of the ancient clinamen.” (106) What is remarkable in Lucretius is how much he could get right by way of a basic materialist theory derived from the careful grouping and analysis of sense-impressions. One really can move from appearances, not to Plato’s eternal forms, but to a viable theory that what appears is most likely made of a small number of elements in various combinations. But here the least useful part of the Epicurean worldview is probably the famous swerve, or clinamen, which does break with too strict a determinism, but at the expense of positing a metaphysical principle that is not testable. Hence, contra Terranova, there can be no “sciences of the clinamen.” (107)
This is also why I am a bit skeptical about the overuse of the term ‘emergence’, which plays something of a similar ideological role to ‘clinamen’. It becomes a too-broad term with too much room for smuggling in old baggage, such as some form of vitalism. Deleuze, in his Bergsonian moments, was certainly not free of this defect. A vague form of romantic spiritualism is smuggled in through the back door, and held to be forever out of reach of empirical study.
Still, with that caveat, I think there are ways in which Terranova’s readings in biological computing are enabling, in opening up new fields from which – in Bogdanovite style – metaphors can be found that can be tested in other fields. But the key word there is tested. For example, when tested against what we know of the history of the military-entertainment complex, metaphors of emergence, complexity and self-organization do not really describe how this new kind of power evolved at all.
More interesting is Terranova’s use of such studies to understand how control might work. Here we find ways of thinking that actually can be adapted to explain social phenomena: “The control of acentered multitudes thus involves different levels: the production of rule tables determining the local relations between neighboring nodes; the selection of appropriate initial conditions; and the construction of aims and fitness functions that act like sieves within the liquid space, literally searching for the new and the useful.” (115) That might be a thought-image that leaves room for the deeper political-economic and military-technical aspects of how Silicon valley, and the military entertainment complex more generally, came into being.
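The three levels of ‘soft control’ Terranova names here are easy to see in the simplest kind of cellular automaton. The sketch below (my own illustration, not an example from the book; the rule number and the arbitrary density threshold are my assumptions) shows a rule table fixing local relations between neighbors, a chosen initial condition, and a fitness function acting as a sieve over the generations it produces:

```python
# Level 1: the rule table. A Wolfram-style elementary CA rule maps each
# 3-cell neighborhood to the next state of the middle cell.
def make_rule_table(rule_number):
    bits = [(rule_number >> i) & 1 for i in range(8)]
    return {(a, b, c): bits[(a << 2) | (b << 1) | c]
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells, table):
    """Each cell updates from purely local relations with its neighbors."""
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def run(rule_number, initial, steps):
    table = make_rule_table(rule_number)
    history = [initial]
    for _ in range(steps):
        history.append(step(history[-1], table))
    return history

# Level 2: the selected initial condition -- here a single live cell.
initial = [0] * 31
initial[15] = 1

# Rule 110 is a classic example of 'emergent' complexity from local rules.
history = run(110, initial, 15)

# Level 3: a fitness function as a sieve, 'searching for the new and the
# useful' -- here, arbitrarily, generations with over a third of cells alive.
selected = [t for t, row in enumerate(history) if sum(row) > len(row) / 3]

for row in history:
    print(''.join('#' if c else '.' for c in row))
print('generations passing the sieve:', selected)
```

Nothing in the grid ‘self-organizes’ without all three levels being imposed from outside, which is rather the critical point: the rule table, the initial condition, and the sieve are all written by someone.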
Terranova: “Cellular automata… model with a much greater degree of accuracy the chaotic fringes of the socius – zones of utmost mobility, such as fashions, trends, stock markets, and all distributed and acentered informational milieus.” (116) Read via Bogdanov rather than Deleuze, I think what is useful here is a kind of tektology, a process of borrowing (or détournement) of figures from one field that might then be set to work in another. But what distinguishes Bogdanov from Deleuze is that for him this is a practical question, a way of experimenting across the division of labor within knowledge production. It isn’t about the production of an underlying metaphysics held to have radicalizing properties in and of itself.
Hence one need not subscribe either to the social metaphysics of a plural, chaotic, self-differentiating ‘multitude,’ upon which ‘capital’ is parasite and fetter, and which cellular automata might be taken to describe. The desire to affirm such a metaphysics leads to blind spots as to what exactly one is looking at when one looks at cellular automata. What is the energy source? Where is the machine on which it runs? Who wrote the code that makes it seem that there is ‘emergent’ behavior?
There is a certain residual romanticism and vitalism at work here, in the figure of “the immense productivity of a multitude, its absolute capacity to deterritorialize itself and mutate.” (118) The metaphysical commitments of a Marx read through Spinoza become an interpretive key that predetermines what can be seen and not seen about the extraordinary transformations that took place in the mode of production.
Terranova: “If there is an abstract social machine of soft control, it takes as its starting point the productivity of an acentered and leaderless multitude.” (123) It is remarkable how everyone, from the Spinozist left to the libertarian right, seems to have forgotten about the ‘information superhighway’ moment in the history of the internet, and wants to talk instead about its self-organizing features. But what made those features possible? Whence came the energy, the infrastructure, the legislative frame? Is there not a larger story of a rather more ‘molar’ kind about the formation of a new kind of ruling class alliance that was able to get a regulatory framework adopted that enabled a corporate take-over of all that military and scientific labor had until then been building? No wonder the right wants a ‘little people’ story to make that larger story of state and corporate power go away.
Where I am in agreement with the path Terranova is following, however, is in rejecting the social constructionism that seemed a default setting in the late twentieth century, when technical questions could never be treated as anything but second order questions derived from social practices. Deleuzian pluralist-monism had the merit at least of flattening out the terrain, putting the social and the asocial on the same plane, drawing attention to the assemblage of machines made of all sorts of things and managing flows of all kinds, both animate and inanimate.
But the danger of that approach was that it was a paradoxical way of putting theory in command again, in that it treated its metaphorical substitutions between fields as more real than the fields of knowledge from whence they came. What was real was the transversal flows of concepts, affects and percepts. The distinctive fields of knowledge production within which they arose were thus subordinated to the transversal production of flows between them. And thus theory remained king, even as it pretended to dethrone itself. At the end of the day Deleuze saved high theory from itself, and this is what remains old-fashioned about the whole enterprise.
This is what is interesting to me about Bogdanov and Haraway, as they seem to me approaches to the problem of negotiating between fields of knowledge production that don’t necessarily privilege the practice of creating what flows between fields over the fields themselves. Perhaps because their training was in the biological sciences they have a bit more respect for the autonomy of such fields. However they still want to press the negative, critical question of how metaphors from commodity production might still contaminate such fields, and they do engage in a counter-production of other kinds of metaphorical tissue that might organize the space both within and between fields of knowledge otherwise.
It seems crucial in the age of the Anthropocene that thought take “the biological turn.” (121) Never was it more obvious that the ‘social’ is not a distinct or coherent object of thought at all. But it might be timely to tarry with the sciences of actual biological worlds rather than virtual ones. One of the great struggles has been to simulate how this actual world works as a more or less closed totality, for that is what it is. The metaphorics of the virtual seem far from our current and most pressing concerns. The actual world is rather a thing of limits.
I would also want to be much more skeptical about the sociobiology of Richard Dawkins. I would prefer to follow Haraway in her attempt to reconstruct the line of a quite different kind of biological thinking, as she did in her first book on the biological metaphors of Crystals, Fabrics and Fields (1974). If one wanted a biological thought that could be appropriated in Deleuzian metaphors, then surely that was it.
Terranova: “What Dawkins’ theory allows is the replacement of the individual by the unit or, as Deleuze named it, a ‘dividual’ resulting from a ‘cut’ within the polymorphous and yet nondeterministic mutations of a multitude.” (124) But perhaps it is rather the opposite. Dawkins’ world is still one of hypercompetitive individuals, it is just that the individual is the gene, not the individual organism. But then there always seems to me to be a certain slippage in the term ‘multitude’, which could describe a universe of petit-bourgeois small traders more than something like a proletariat.
I see Dawkins more as Andrew Ross does, as The Chicago Gangster Theory of Life (1995). Of course Terranova is aware of this, and offers an interesting reading of the tension between competition and cooperation in Dawkins. “Selfishness closes the open space of a multitude down to a hole of subjectification.” (126) It is just that I would prefer to bracket off the Spinozist metaphysics, with its claims to describe in advance a real world of self-organizing and emergent properties.
I don’t think the alternative is necessarily a ‘deconstructive critique’. Deconstruction seems to me also to hinge on a kind of high theory. Where Deleuze foregrounds concept-production as king, deconstruction foregrounds the internal tensions of language. Both fall short of a genuine pluralism of knowledge-practices, and the struggle for a comradely and cooperative joint effort between them. The one thing that seems to me to have been pretty comprehensively rejected by everyone except those who do theory is the demand to put theory in command. I think the only thing left for us is a role that is interstitial rather than totalizing.
Still, Terranova’s reading of biological computing remains illuminating. Its function is not so much to naturalize social relations as to see the artificial side of natural relations. ‘Nature’ starts to appear as necessarily an artifact of forms of labor, techne and science, but to be more rather than less useful as a concept because of this. Contrary to Tim Morton, I think ‘nature’ is still a useful site at which to work precisely because of how over-determined the concept always is by the means via which it was produced.
Terranova ends Network Culture with a rethinking of the space between media and politics, and here I find myself much more in agreement. Why did anyone imagine that the internet would somehow magically fix democracy? This seemed premised on a false understanding from the start: “Communication is not a space of reason that mediates between state and society, but is now a site of direct struggle between the state and different organizations representing the private interests of organized groups of individuals.” (134)
Of all the attempts to think ‘the political’ in the late twentieth century, the most sober was surely Jean Baudrillard’s theory of the silent majority. He had the wit and honesty to point out that the masses do not need or want a politics, and even less an intellectual class to explain politics to them. The masses prefer spectacle to reason, and their hyper-conformity is not passivity but even a kind of power. It is a refusal to be anything but inert and truculent. Hence ‘the black hole of the masses’, which absorbs everything without comment or response. Meaning and ideas lose their power there.
One way of thinking about today’s big data or what Frank Pasquale calls The Black Box Society (2014) is as a way of getting back at the refusal of the black hole of the masses to play its role. Big data is a means of stripping the masses of information without their will or consent. It exploits its silence by silently recording not what it says but what it does.
Terranova accepts the force of Baudrillard’s approach but not its quietist conclusions. She still wants to think of the space of communication as a contested one. “Images are not representations, but types of bioweapons that must be developed and deployed on the basis of a knowledge of the overall information ecology.” (141) This I think is a useful metaphorical language, provided we remember that an information ‘ecology’ is not really separate from what remains of a general one.
Terranova refuses all of those languages which see images as some sort of metaphysical corruption of an enlightened space of reason. The object of a media practice has to become biopolitical power, that power of inducing perceptions and organizing the imagination. While I am skeptical as to whether the term ‘biopolitical’ really adds all that much, this does indeed seem to cut through a lot of misconceptions about the thorny relation between media and politics. After all, there is no politics that is not mediated. There is no real sense in which politics could ever be an autonomous concept.
In sum: Network Culture is a book that remains a significant step forward. I am now a bit more skeptical than ten years ago about the limits of the Spinozist flavors of Marxism. They tend to want to see the monist-pluralist metaphysic as a superior image of the real, and to subordinate other knowledge production to that image. I find this less enabling now. However, Terranova used it to excellent effect in this brief, dense book, usefully framing the issues for #Theory21c where information is concerned.