by Steven Craig Hickman
Berardi makes a valid point in his critique of Srnicek and Williams' Inventing the Future:
“Srnicek and Williams suggest that we should ‘demand full automation, demand universal basic income, demand reduction of the work week’. But they do not explain who the recipient is of these demands. Is there any governing volition that can attend to these requests and implement them?
No, because governance has taken the place of government, and command is no longer inscribed in political decision but in the concatenation of techno-linguistic automatisms. This is why demands are pointless, and why building political parties is pointless as well.”1
Governance is now ubiquitous, invisible, and decentralized within the networks themselves; power is part of the very interactive environment we face daily. The moment you open your iPhone you are confronted with a governed set of choices and possibilities that capture your desires and modulate those very choices through sophisticated and ubiquitous algorithms. The same holds for almost every aspect of our once sacrosanct private lives. In the coming decades our homes will be invasively programmed with ubiquitous smart devices that attune us to techno-commercial decisioning processes beyond our control, and yet they will let us go on believing that it is we who are choosing and deciding, exercising our ingrained "free will" – which, as many neuroscientists keep telling us, is an illusion, a delusion, a cognitive bias, a hereditary error of judgment.
If Edward Bernays opened the door to mass manipulation through images, sound, and print, our new world of smart devices will no longer need to manipulate us: it will decide for us and let us think that we alone have made our choices, independent of any external agent, not knowing that this very thought of independence was itself the work of algorithmic governance. Manipulation by design that allows the participant to fool herself. In this way we have reversed the earlier public-relations need to externally manipulate our environment with images, sound bites, and eye candy: now we internalize the whole process and believe we alone are in charge of our choices, making rational and deliberate decisions, while all along these come to us through pattern-matching algorithms so accelerated that we will not know they are not our own thoughts. Predictive engineering so fast, and so closely modeled on our own neural processes, that we will not be able to tell the difference. Like the street magician who seems to float on air or to produce strange objects from one's own person, this new world of virtual precision will act at a distance and manipulate the very biases of our cognitive powers.
Our very thought processes and reason are being reformatted by the instruments and technologies we use daily. Not realizing this, we assume absolute independence of judgment over our lives, while it is these very tools, so invisible and ubiquitous, that are rewiring our cognitive machinery and delivering us to a governing power so pervasive that it has become internalized as the very core of our sense of Agency. Essentially we are being reprogrammed as androids or robots in a machinic society in which our very humanity is faked and manipulated for the benefit of the power elite. But this is not the power elite of old: as AGI comes online in ever more stable and ubiquitous forms, the very truth of power will reside within this inhuman network, through the global interfaces of decisions we ourselves will so gladly accept. An inversion of human power into machinic systems is taking place even as we enjoy the comforts and leisure of our toys.
People speak of a post-work society and do not realize it is already in the offing: every text message you send, every web page you access, every time you retrieve data or pass information across the network, you are allowing profits to be extracted by some invisible entity, some corporation or government agency. Our lives of leisure have become the site of a 24/7 profit machine in which we not only do not know we are participating, but gladly allow it and enjoy it as pleasure and jouissance. We have entered the true age of leisure capitalism. Some will still work and maintain machines for a long while during the transition of the coming decades, while many on the planet will become unnecessary to the process and, like other machines and products, be obsolesced. The horror is that we have become the total product of leisure capitalism. Our very lives are the engines of profit, and we will live out our existence enjoying the minor aspects of a mindless existence, not even knowing that we have been enslaved in a system of surplus value so ubiquitous that no one except its beneficiaries knows it exists.
Even the staged conflicts and wars will be planned and bound by this same governance process. Nations will appear to be enemies in images and simulated dramas, staged operatic forms in the media-tainment industry, while all the while, behind the scenes, the great powers work for the same global corporations. Orwell had part of this figured out, as did Aldous Huxley; now we do. But will we wake up long enough to disconnect? I doubt it…
Yet, with the emergence of cryptocurrencies, blockchain technology, and the various aspects of a neo-market bound to secrecy and darkness, underground resistances and pockets of awakening can become possible… we will follow this trail in time… As Adam Winfield relates it:
The rise of cryptocurrencies and blockchains has people speculating whether the technologies will lead to a libertarian future, in which governments and corporations lose their grip on centralized power, an authoritarian future, in which they tighten that grip, or something in between.
Yet, as he suggests, what some of these libertarians perhaps didn't anticipate was the extent to which governments and corporations would eventually begin embracing the technologies – particularly blockchain. Increasingly, state powers and big corporations are recognizing the potential of blockchain to restructure their operations, making them more efficient, transparent and secure, and could – advertently or not – be solidifying their rule in the process.
So what we're seeing is the great global conglomerates and monopolies once again consolidating power, overtaking the very resistance of cognitive inventors and knowledge workers through a process of cooptation and immersive buy-in to the very systems that were meant to resist such elite powers in the first place.
The libertarians who pioneered blockchain aren’t happy about this, as it goes against everything they believe the technology was created for (though some argue the intention was to sidestep state-backed currency systems, not replace them). As Ian Bogost writes in The Atlantic, “the irony would be tragic if it weren’t also so frightening. The invitation to transform distributed-ledger systems into the ultimate tool of corporate and authoritarian control might be too great a temptation for human nature to forgo.”
Winfield relates one corporate defense of this takeover of the technology. James Zdralek, a Montreal-based software designer for the global tech company SAP, believes the idea that blockchain could lead to total control is flawed because anything made up of individual units – including corporations and governments – cannot have a unified conscience that chooses good or evil. "It's like judging a group of mold – it either gives you cheese or anthrax," says Zdralek. "Either result was not the conscious choice of the group of mold cells."
Yet, as I've said above, the whole point of algorithmic governance is that there never was any "conscious choice" – no free will or independence of judgment – and there is no unified or monolithic entity or power behind the proverbial conspiracy scene pulling the puppet strings either. Rather, this whole world is oriented toward one thing, "profits," and it is this ubiquitous and pervasive engine of accumulation and surplus value that drives all corporations, as well as the techno-commercium that produces the very tools we've spoken of. So there need be no conscious decision because, as neuroscientists agree, it is all done within ubiquitous processes outside any conscious choice.
One can discover in works such as Neuroscience and the Economics of Decision Making (Alessandro Innocenti, ed.) that for the last two decades flourishing research has been carried out jointly by economists, psychologists and neuroscientists. This melding of competences has led to original approaches for investigating the mental and cognitive mechanisms involved in the way the economic agent collects, processes and uses information to make choices. The field involves a new kind of scientist, trained in different disciplines, at home with experimental data and with the mathematical foundations of decision making. The ultimate goal of this research is to open the black box and understand the behavioral and neural processes through which humans set preferences and translate them into optimal choices.
The point here is to understand every aspect of the brain and how it can be manipulated to serve the optimal choices of those systems of governance and techno-commercial processes, to the benefit of the expanding power of global corporations. Politics and nation-states matter little to these larger entities; rather, the stagecraft of nation and politics serves the corporate interests of the competing global players. Social cognition research examines the mental representations that people hold of their social world and the ways that social information is processed, stored, and retrieved. Hundreds of millions of dollars are being spent on ways to manipulate the masses in every nation, whatever the culture or social conditioning.
Antoinette Rouvroy, in Human Genes and Neoliberal Governance, outlines the knowledge-power relations of the post-genomic era. Addressing the pressing issues of genetic privacy and discrimination in the context of neoliberal governance, the book demonstrates and explains the mechanisms of mutual production between biotechnology and cultural, political, economic and legal frameworks. In its first part she explores the social, political and economic conditions and consequences of this new 'perceptual regime'; in the second she pursues the analysis through a consideration of the impact of 'geneticization' on political support for the welfare state and on the operation of private health and life insurance. Genetics and neoliberalism, she argues, are complicit in fostering the belief that social and economic patterns have a fixed nature beyond the reach of democratic deliberation, whilst the characteristics of individuals are unusually plastic, and within the scope of individual choice and responsibility.
With the emergence of Human Enhancement, H+, or Transhumanism, the elite seek to empower their own children with favorable adjustments and enhancements, breeding a new level of genetic superiority into their germlines. Eugenics is back with a vengeance. With the so-called convergence technologies the power elite hope to consolidate their power base and build a new capitalist platform on a techno-commercial footing that can carry us into a space-faring civilization.
Convergence in knowledge, technology, and society is the accelerating, transformative interaction among seemingly distinct scientific disciplines, technologies, and communities to achieve mutual compatibility, synergism, and integration, and through this process to create added value for societal benefit. It is a movement recognized by scientists and thought leaders around the world as having the potential to provide far-reaching solutions to many of today's complex knowledge, technology, and human-development challenges. Four essential and interdependent convergence platforms of human activity are defined in the first part of the report: the foundational nanotechnology-biotechnology-information technology and cognitive science ("NBIC") tools; Earth-scale environmental systems; human-scale activities; and convergence methods for societal-scale activities. The report then presents the main implications of convergence for human physical potential, cognition and communication, productivity and societal outcomes, education and physical infrastructure, sustainability, and innovative and responsible governance. As a whole, a new model for convergence is emerging in our time. To take effective advantage of this potential, a proactive governance approach is urged by these powerful elites in think-tanks and techno-commercial enterprises such as Google and other global market leaders. The United States and its corporate overlords have for some time been implementing a program aimed at focusing disparate R&D energies into a coherent activity – a "Societal Convergence Initiative."
The notion of a technocracy arose during the twentieth century, but it then lacked today's collusion of corporation, government, and academia. With the demise of the humanistic learning systems that stood in its way, the ongoing privatizing of academic institutions, and the specialized and compartmentalized forms of education and corporate promotion within those institutions, the convergence of business, expertise, technology, and society through these social-conditioning technologies is becoming all-pervasive and will only continue down this course.
We've seen in recent years how masses of people can be swayed by emotion – anger, frustration, distrust of government and authority – to the point that they are willing to try anything to regain a sense of sanity and stability in their lives, even to allow an Authoritarian Leader to take charge and correct what was perceived as the weakness of other leaders. As Kathleen Taylor describes it in her recent book Brainwashing, there are three approaches to mind-changing: by force, by stealth, and by direct brain-manipulation technologies. The first two, as she describes in the book, use standard psychological processes; in this sense there is nothing unnatural about mind control. The aim is to isolate victims from their previous environment; control what they perceive, think, and do; increase uncertainty about previous beliefs; instill new beliefs by repetition; and employ positive and negative emotions to weaken former beliefs and strengthen new ones.2
As she puts it: “People can be persuaded to give up objective freedoms and hand over control of their lives to others in return for apparent freedoms–in other words, as long as they are aware of the freedoms they are gaining and either contemptuous, or altogether unaware, of the freedoms they are giving up.” “The trick is to disable the brain’s alarm system”. (Brainwashing, p. 243)
Most of what she describes is the older mode of mind manipulation, used in many forms by nations, terror groups, security agencies, and cults for centuries. The refinement has come through experimentation, pharmaceuticals, and social or mass psychology. Yet in our time, as technology displaces many of the decisioning processes we as humans have relied on as independent agents, our belief in a Self-Subject no longer holds. For decades the very notion of the liberal Subject and Self has come under scrutiny, and under a propaganda mission from both the corporate-controlled sciences and the academy, undermining the democratic political subject as we have come to know it. Without a sense of Self and subjectivity we no longer have a center from which to make normative claims or political judgments. If, as many neuroscientists suggest, the Self is an illusion and a delusion, then our whole democratic system of government is no longer viable in the Enlightenment tradition founded on Reason; the Lockean and Rousseauean notions, along with Mill's utilitarianism, fall by the wayside. Secular modernity itself is at an end if these neuroscientists and academic thinkers are correct.
Are advances in science and technology enabling us, in the foreseeable future, to create digital minds? The notion of exponential growth is a pattern built deep into the scheme of life, but technological change now promises to outstrip even evolutionary change. Many neuroscientists, in collusion with engineers, are beginning to reverse-engineer the brain through the various EU and U.S. brain initiatives. Technological and scientific advances, ranging from the discovery of the laws that govern electromagnetic fields to the development of computers, may change the very nature of what it means to be human. Some even see in this process of artificial selection the continuance of natural selection, mutating into the ultimate algorithm, with genetics and the evolution of the central nervous system, along with the role computer imaging has played in understanding and modeling the brain. Having considered the behavior of the unique system that creates a mind, neuroscientists are turning to an unavoidable question: Is the human brain the only system that can host a mind? If digital minds come into existence – and it is difficult to argue that they will not – what are the social, legal, and ethical implications? Will digital minds be our partners, or our rivals?
In an age when so many things are happening at once, will decisions like these, with dire implications for homo sapiens, be made for us, taken out of our hands as we are manipulated and distracted by war, climate change, terror, fear, anger, and political and social unrest that keep us from thinking clearly? All the while the power elites continue investing and making these decisions outside the political and social systems of nation, culture, or the ethico-religious dimensions of our planetary civilization.
by Achim Szepanski
Satellite monitoring, enormous computing power on silicon chips, sensors, networks and predictive analytics are the components of the digital systems (surveillance capital) that are currently tracking, analyzing and capitalizing on the lives and behaviors of populations on a vast scale. Under the pressure of the financial markets, for example, Google is forced constantly to increase the effectiveness of its machine-intelligence-driven data tracking and analysis and, for that very reason, to combat every user's claim to privacy by the most diverse means. Thanks to a series of devices such as laptops and smartphones, cameras and sensors, computers are today ubiquitous in capitalized everyday life. They are character-reading machines whose algorithms (unconditionally calculable, formally unambiguous procedural instructions) develop their full power only in the context of networked digital media, for which the programmatic design, transformation and reproduction of all media formats is a prerequisite. The social networks in particular sustain a kind of economy that has established a strange new algorithmic governance by extracting personal data, which leads to the construction of metadata, cookies, tags, and other tracking technologies. This development has become known primarily as "Big Data," a system based on networking, databases, and high computing performance and capacity. The processes involved are, according to Stiegler, those of "grammatization": in the digital stage, individuals are led through a world in which their behavior is grammatized by interaction with computer systems operating in real time. Grammatization, for Stiegler, begins with the cave paintings and leads through cuneiform, photography, film and television to the computer, the Internet and the smartphone.
The result is that the data paths and traces generated by today's computing technologies constitute tertiary retentions or mnemotechnics that reduce attention and encompass specific processes of temporalization and individuation – "industrialization processes of memory," or "political processes of memory," and an industrial economics based on the industrial exploitation of periods of consciousness. With the digitization of data paths and processes, which today are generated by sensors, interfaces and other means as binary numbers and calculable data, there arises, as Stiegler says, an automated social body in which even life is transformed into an agent of the hyper-industrial economy of capital. Deleuze anticipated this development in his famous essay on control societies, but control arrives fully only when digital calculation integrates Deleuze's modulations of control techniques into an algorithmic governance that automates all existences, ways of life, and cognition as well.
The power technologies underlying the protocols are a-normative, because they are rarely debated in public; rather, they appear inherent in algorithmic governance itself. The question now is: do data create digital protocols, or do digital protocols create data? Or more pointedly: are data digital protocols? In any case it is their setting, not only their results, that has a structuring character. Like any governance, if we think of it in Foucault's terms, algorithmic governance implements specific technologies of power, which today are no longer based on statistics oriented to the average and the norm; instead we have an automated, atomizing and probabilistic machine intelligence.1 The digital data machines that continuously collect and read data traces mobilize an a-normative and a-political rationality consisting of the automatic analysis and monetary valorization of enormous amounts of data, by modeling, anticipating and influencing the behavior of populations. Today this is trivialized as "ubiquitous computing," in which – and it must be pointed out again and again – surveillance capital extracts users' behavior and develops prediction products from it through its algorithms, no longer only on the Internet but in the real world, diversifying those prediction products through special techniques. Everything, animate or inanimate, can be networked, connected, made to communicate and calculate. From automobiles, refrigerators, houses and bodies, signals flow constantly through digital devices; activities of human and non-human actors taking place in the real world enter the digital networks as data and serve there to be transformed into forecasting products that are sold to advertisers for precisely targeted advertising (Zuboff 2018: 225).
To put it in more detail: the surveillance capital of companies such as Google or Facebook automates the buying behavior of consumers, channels it through the famous feedback loops of their AI machines, and binds it purposefully to the companies that are surveillance capital's advertising customers. The behavioral modifications to be achieved in users rest on machine processes and techniques such as tuning (adaptation to a system), herding (conditioning of the mass), and conditioning (training of stimulus-response patterns) that direct users' behavior, so that the engineered prediction products actually drive users' behavior toward Google's guaranteed intentions (ibid.). The maximum predictability of users' behavior is now a genuine source of profit: the consumer who uses a fitness app should ideally buy a healthy beverage at the moment of maximum receptivity, such as after jogging, a product previously made palatable by targeted advertising. The sporting-goods manufacturer Nike has bought the data-analysis company Zodiac and uses it in its New York stores. If a customer enters a store with the Nike app on their smartphone, they are immediately recognized and categorized by geofencing software. The homepage of the app changes at once: instead of online offers, new features appear on the screen, including, of course, special offers tailored to the customer and recommendations for goods currently on offer in the store.
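The geofencing trigger described above can be sketched in a few lines. This is a minimal illustration of the general technique, not Nike's or Zodiac's actual implementation: the store coordinates, fence radius, profile fields and offer names are all invented for the example.

```python
import math

# Hypothetical geofence-triggered personalization. All coordinates,
# thresholds, and offer names below are illustrative assumptions.
STORE_LAT, STORE_LON = 40.7425, -73.9903  # a fictional Manhattan store
GEOFENCE_RADIUS_M = 75                    # hypothetical fence radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def app_homepage(customer_profile, lat, lon):
    """Swap the app homepage to in-store offers once the phone crosses the fence."""
    if haversine_m(lat, lon, STORE_LAT, STORE_LON) <= GEOFENCE_RADIUS_M:
        # Inside the fence: the customer is "recognized and categorized"
        return {"view": "in_store", "offers": customer_profile["segment_offers"]}
    return {"view": "online", "offers": ["default_catalog"]}

profile = {"segment_offers": ["running_shoes_discount", "member_event"]}
print(app_homepage(profile, 40.7426, -73.9904)["view"])  # a phone just inside
print(app_homepage(profile, 40.76, -73.98)["view"])      # a phone well outside
```

The point of the sketch is how little is needed: one distance check turns a location stream into a categorization event, after which the interface itself becomes the advertising surface.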
Surveillance capital long ago ceased to be advertising-only; it quickly became a model for capital accumulation in Silicon Valley, adopted by virtually every startup. Today it is not limited to individual companies or to the Internet sector but has spread to a wide range of products, services and economic sectors, including insurance, healthcare, finance, the cultural industries, transportation, and so on. Almost every product or service that begins with the word "smart" or "personalized," every Internet-connected device, every "digital assistant," is an interface in the enterprise supply chain for the invisible flow of behavioral data on the way to predicting the future of populations in a surveillance economy. Under investor pressure, Google quickly abandoned its stated antipathy to advertising, opting instead to increase revenue by exploiting exclusive access to user data logs (once known as "data exhaust") in combination with substantial analytic capacity and maximum computing power, in order to generate predictions of users' click-through rates, which are seen as a signal of an ad's relevance. Operationally, this meant that Google transformed its growing database into a behavioral-data surplus to be worked up, while developing new ways to search aggressively for sources of surplus production. The company developed new methods for seizing this secret surplus by exposing data that users considered private and by extensively personalizing users' information, and the surplus was secretly analyzed for its importance in predicting user click behavior. It became the basis for the new forecasts called "targeted advertising." Here was the source of surveillance capital: behavioral surplus, material infrastructures, computing power, algorithmic systems, and automated platforms.
As click-through rates shot through the roof, advertising became as important for Google as the search engine itself, perhaps the entry point for a new kind of e-commerce that relied on broad online monitoring. The success of these new mechanisms became apparent when Google went public in 2004.
The first surveillance capitalists proceeded by simple declaration: users' private experience was treated as something that could be taken, translated into data, claimed as private property, and exploited for private gains in knowledge. This was wrapped in rhetorical camouflage, in tacit declarations that no one recognized for what they were. Google began unilaterally to postulate that the Internet was merely a resource for its search engine. With a second declaration, it claimed users' private experience for its own reward, selling these personal fortunes on to other companies. The next step was to move the surplus operations beyond the online milieu into the real world, where personal behavioral data is considered free for the taking by Google. This is a familiar story in capitalism: finding things outside the market sphere and turning them into commodities. Once we searched Google; now Google searches us. Once we thought the digital services were free; now the surveillance capitalists think we are fair game. Surveillance capital no longer needs the population in its function as consumers; supply and demand now orient the surveillance companies toward transactions based on the anticipation of the behavior of populations, groups, and individuals. Surveillance companies have few employees relative to their computing power (unlike the early industrial companies). Surveillance capital depends on eroding individual self-determination, autonomy, and the right of free choice in order to generate an unobserved stream of behavioral data and to feed markets that operate not for but against the population. It is no longer enough to automate the streams of information about the population; the goal is to automate the behavior of the population itself.
These processes are constantly redesigned to deepen the ignorance that shields them from individual observation and to eliminate any possibility of self-determination. Surveillance capital shifts the focus from individual users to populations, such as cities or even the economy of a whole country, which is far from insignificant for the capital markets once predictions about the behavior of populations gradually approach certainty. In the competition for the most efficient prediction products, surveillance capitalists have learned that the more behavioral surplus they acquire, the better the predictions become, which through economies of scale spurs capitalization to ever new efforts. And the more varied the surplus, the higher its predictive value. This new economic drive leads from the desktop via the smartphone into the real world: you drive, run, shop, find a parking space, your blood circulates and you show a face. Everything is to be recorded, localized and marketed.
There is a duality in information technology: its capacity to automate, but also to informate, that is, to translate things, processes and behavior into information. New territories of knowledge are thereby produced on the basis of informational capacity, and these may become the subject of political conflict over the distribution of knowledge, the decision about knowledge, and the power of knowledge. Zuboff writes that the surveillance capitalists claim for themselves alone the right to know, the right to decide who knows, and the right to decide who decides. They dominate the automation of knowledge and its specific division of labor. Zuboff goes on to say that one cannot understand surveillance capital without the digital, but the digital could exist without surveillance capital: surveillance capital is not pure technology, and digital technologies could take many forms. Surveillance capital rests on algorithms and sensors, artificial machines and platforms, but it is not identical with these components.
A company such as Google must already possess a certain size and diversity of resources when it collects data that reflects user behavior and tracks behavioral surplus (Google's "data exhaust"), then converts that data through its machine intelligence into prediction products of user behavior and sells them, targeted, to advertisers: products that home in on the user like heat-seekers, proposing, for example, at a pulse of 78, just the right fitness product via displayed advertising. Diversification, which serves to increase the quality of the forecasting products, requires first that a broad range of observable topics in the virtual world be covered, and second that the extraction operations be transferred from the network to the real world. In addition, the algorithmic operations must gain in depth: they must aim at the intimacy of users and intervene to actuate, control, indeed form their behavior, for example by timing and targeting pay-buttons on the smartphone, or by automatically locking a car if the person concerned has not paid the insurance premiums on time.
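The shift from prediction to actuation that this paragraph describes can be made concrete with a toy rule. This is a hypothetical sketch, not any insurer's or carmaker's actual system: the `Policy` and `Car` types, the grace period, and the enforcement rule are all invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of remote actuation: a connected car immobilized
# when an insurance premium is overdue. All types and thresholds invented.

@dataclass
class Policy:
    premium_due: date
    paid: bool

@dataclass
class Car:
    locked: bool = False

GRACE_DAYS = 3  # hypothetical grace period

def enforce(policy: Policy, car: Car, today: date) -> Car:
    """Lock the car remotely once the premium is overdue past the grace period."""
    overdue_days = (today - policy.premium_due).days
    if not policy.paid and overdue_days > GRACE_DAYS:
        car.locked = True  # behavior is actuated, not merely predicted
    return car

car = enforce(Policy(premium_due=date(2024, 3, 1), paid=False),
              Car(), today=date(2024, 3, 10))
print(car.locked)  # the car is immobilized: 9 days overdue, past the grace period
```

What makes such a rule "deep" in the sense above is not its complexity but its reach: a few lines of policy code act directly on a body-scale object in the physical world.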
The data pool from which analysts can now draw is almost infinite. They know exactly who returns goods, calls hotlines, or complains about a company on online portals. They know many consumers' favorite shops, restaurants and bars, the number of their "friends" on Facebook, and the makers of the ads that social-media users have clicked. They know who has in recent days visited the website of an advertiser's competitor or googled certain goods. They know a person's skin color, sex, and financial situation, their physical illnesses and emotional complaints. They know their age, profession, number of children, neighborhood, and the size of their apartment – after all, it is quite interesting for a mattress manufacturer to know whether a customer is single or, in the worst-case scenario, orders the same foam mattress five times over for the entire family.
Today, the group has materialized in Facebook, in its invisible algorithms, which have evoked a largely imaginary group addiction of unimaginable proportions. And here the theory of simulation is wrong, because there is nothing false about the digital networks: they are quite real and create a stability for those connected to them, simply by expanding things, more inquiries, more friends, and so on. With the closure of the factories came the opening of the data mines. And the associated violation of privacy is the systematic result of a pathological division of knowledge, in which surveillance capital knows, decides, and decides who decides. Marcuse wrote that it was one of the boldest plans of National Socialism to wage the fight against the taboo of the private. And privacy is today so freed from any curiosity or secrecy that, without any hesitation, almost avidly, you write everything on your timeline so that everyone can read it. We are so happy when a friend comments on anything. And you are always busy managing all the data feeds and updates; at the very least you have to divert a bit of time from your daily routines. Your tastes, preferences, and opinions are the market price you pay. But the social media business model will reach its limit and come to an end, though it is still pushed along by the growth of consumerism. This business model has repeated itself ever since the dotcom boom of the 1990s: if growth stagnates, the project must be wound up. The frictionless growth of customer-centric, decentralized marketing is fueled by a mental pollution of the digital environment that corresponds to the pollution of the natural environment.
For a search query, factors such as the search terms, the length of stay, the formulation of the query, spelling, and punctuation are among the clues used to spy on users' behavior, so that even these so-called data fumes can be collected in order to target advertising at the user's surplus behavior. Google also assigns the best advertising positions algorithmically among the highest-paying advertisers: each ad is scored by its price per click multiplied by the probability that the advertisement will actually be clicked. In the end, these procedures also reveal what a particular individual thinks in a particular place at a particular time. Singularity indeed. Every click on an ad banner displayed on Google is a signal of its relevance and is therefore taken as a measure of successful targeting. Google is currently seeing an increase in paid clicks and, at the same time, a fall in average cost per click, which equates to an increase in productivity: the volume of output has increased while costs are falling.
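The pricing rule described above can be sketched in a few lines. This is only an illustrative model of the general mechanism the passage names (price per click multiplied by predicted click probability); the ad names, bids, and click-through rates below are invented for the sketch and are not Google's actual figures or interfaces.

```python
# Illustrative sketch of expected-value ad ranking: each candidate ad is
# scored by its bid (price per click) multiplied by the predicted
# probability that it will be clicked. All names and numbers are invented.

def rank_ads(ads):
    """Sort ads by expected revenue per impression: bid * predicted CTR."""
    return sorted(ads, key=lambda ad: ad["bid"] * ad["ctr"], reverse=True)

ads = [
    {"name": "fitness_tracker", "bid": 2.00, "ctr": 0.05},  # score 0.100
    {"name": "mattress",        "bid": 5.00, "ctr": 0.01},  # score 0.050
    {"name": "energy_drink",    "bid": 1.00, "ctr": 0.08},  # score 0.080
]

for ad in rank_ads(ads):
    print(ad["name"], round(ad["bid"] * ad["ctr"], 3))
```

Note how the highest bidder (the mattress seller) does not win the best position: a cheaper ad with a higher predicted click probability outranks it, which is precisely why every click becomes a valuable behavioral signal.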
Just as the protocols are everywhere, so are the standards. One can speak of environmental standards, safety and health standards, building standards, and digital and industrial standards whose inter-institutional and technical status is made possible by the functioning of protocols. The capacity of standards depends on the control of protocols, a system of governance whose organizational techniques shape how value is extracted from those integrated into the different modes of production. But there are also standards of the protocols themselves. The TCP/IP model of the Internet is a protocol that has become a technical standard for Internet communication. There is a specific relationship between protocol, implementation, and standard in digital processes: a protocol is a description of the precise terms by which two computers can communicate with each other (i.e., a dictionary and a handbook for communicating). An implementation is the creation of software that uses the protocol, i.e., that handles the communication (two implementations using the same protocol should be able to exchange data with each other). A standard defines which protocol should be used for specific purposes on certain computers: although it does not define the protocol itself, it sets limits on changing the protocol.
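The three-way distinction drawn above can be made concrete with a toy example. The message format below is invented purely for illustration; it plays the role of a "protocol" (the dictionary and handbook), while the two functions stand in for independently written "implementations" that interoperate because they both follow it. A "standard" would then be the institutional decision that this format must be used for a given purpose, as the TCP/IP specifications are for Internet communication.

```python
# Toy "protocol": a message is KEY=VALUE lines separated by newlines,
# terminated by a blank line. The protocol is this agreed format, not
# either piece of code.

def encode(fields):
    """One implementation's sender side: serialize fields to the wire format."""
    return "".join(f"{k}={v}\n" for k, v in fields.items()) + "\n"

def decode(wire):
    """A second, independently written receiver side: parse the wire format."""
    fields = {}
    for line in wire.split("\n"):
        if not line:
            break  # blank line terminates the message
        key, _, value = line.partition("=")
        fields[key] = value
    return fields

# The two implementations interoperate because they share the protocol.
msg = encode({"host": "example.org", "path": "/index"})
assert decode(msg) == {"host": "example.org", "path": "/index"}
```

The governance point follows: whoever fixes the message format also fixes what any conforming implementation can and cannot express, without ever touching the implementations themselves.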
translated by Dejan Stojkovski
@ Obsolete Capitalism blog
1) Let’s start with your first book, published in 2009: The Spam Book, edited in collaboration with Jussi Parikka, a compendium from the dark side of digital culture. Why did you feel the urge to investigate the bad sides of digital culture as a writing debut? In the realm of “spam”, seen as an intruder, an excess, an anomaly, and a menace, you have met the “virus”, which has characterized your research path up until today.
As I recall Jussi and I jokingly framed The Spam Book as the antithesis to Bill Gates’ Road Ahead, but our dark side perspective was not so much about an evil “bad” side. It was more about shedding some light on digital objects that were otherwise obscured by discourses concerning security and epidemiological panics that rendered objects “bad”. So our introduction is really about challenging these discursively formed “bad” objects; these anomalous objects and events that seem to upset the norms of corporate networking.
We were also trying to escape the linguistic syntax of the biological virus, which defined much of the digital contagion discourse at the time, trapping the digital anomaly in the biological metaphors of epidemiology and Neo-Darwinism. This is something I’ve tried to stick to throughout my writings on the viral, although in some ways we did stay with the biological metaphor to some extent in The Spam Book; we tried to turn it on its head so that rather than pointing to the nasty bits (spam, viruses, worms) as anomalous threats, we looked at the viral topology of the network in terms of horror autotoxicus or autoimmunity. That is, the very same network that is designed to share information becomes this auto-destructive vector for contagion. But beyond that, the anomaly is also constitutive of network culture. For example, the computer virus determines what you can and can’t do on a network. In a later piece we also pointed to the ways in which spam and virus writing had informed online marketing practices. (1)
In this context we were interested in the potential of the accidental viral topology. Jussi’s Digital Contagions looked at Virilio’s flipping of the substance/accident binary and I did this Transformations journal article on accidental topologies, so we were, I guess, both trying to get away from prevalent discursive formations (e.g. the wonders of sharing versus the perils of spam) and look instead to the vectorial capacities of digital networks in which various accidents flourished.
2) Virality: Contagion Theory in the Age of Networks came out in 2012. It is an important essay which enables readers to understand virality as a social theory of the new digital dominion from a philosophical, sociological and political point of view (with the help of thinkers like Tarde and Deleuze). The path moves from the virus (the object of research) to viral action (the spreading through social network areas to produce drives) to contagion (the hypnotic theory of collective behaviour). How does the virus act in the digital field and on the web? And how can we control spreading and contagion?
Before answering these specific questions, I need to say how important Tarde is to this book. Even the stuff on Deleuze and Guattari is really only read through their homage to Tarde. His contagion theory helped me to eschew biological metaphors, like the meme, which are discursively applied to nonbiological contexts. More profoundly Tarde also opens up a critical space wherein the whole nature/culture divide might be collapsed.
So to answer your questions about the digital field and control, we need to know that Tarde regarded contagion as mostly accidental. Although it is the very thing that produces the social, to the extent that by even counter-imitating we are still very much products of imitation, Tarde doesn’t offer much hope in terms of how these contagions can be controlled or resisted. He does briefly mention the cultivation or nurturing of imitation, however, this is not very well developed. But Virality adds affect theory to Tarde (and some claim that he is a kind of proto-affect theorist), which produces some different outcomes. When, for example, we add notions of affective atmospheres to his notion of the crowd, i.e. the role of moods, feelings and emotions, and the capacity to affectively prime and build up a momentum of mood, a new kind of power dynamic of contagion comes into view.
While we must not lose sight of Tarde’s accident, the idea that capricious affective contagion can be stirred or steered into action in some way so as to have a kind of an effect needs to be considered. Crudely, we can’t cause virality or switch it on, but we can agitate or provoke it into potential states of vectorial becoming. This is how small changes might become big; how that is, the production of a certain mood, for example, might eventually territorialize a network. Although any potential contagious overspill needs to be considered a refrain that could, at any moment, collapse back into a capricious line of flight.
The flipside of this affective turn, which has, on one hand, allowed us new critical insights into how things might potentially spread on a network, is that digital marketers and political strategists are, on the other hand, looking very closely at moods through strategies of emotional branding and marketing felt experiences. The entire “like” economy of corporate social media is, of course, designed emotionally. Facebook’s unethical emotional contagion experiment in 2014 stands out as an example of how far these attempts to steer the accidents of contagion might go.
3) Five years after the release of Virality, The Assemblage Brain is published in 2017. A year that has seen a new political paradigm: Trump has succeeded Obama in the United States, a country which we could define as the benchmark of the development of today’s western élites and as a metaphor of power. Both have used the social networks to spread their political message, political unconscious as you would say. As an expert of contagion, and political use of the social networks, what lesson can we learn from such experience?
In the UK we’re still arguing over what kind of dystopia we’re in: 1984, Brave New World? So it’s funny that someone described the book to me as a dystopian novel.
“Surely all these terrible things haven’t happened yet?”
“This is just a warning of where we might go wrong in the future.”
I’m not so sure about that. Yes, I make references to the dystopian fictions that inspired Deleuze’s control society, but in many ways I think I underestimated just how bad things have got.
It’s a complex picture though, isn’t it? There are some familiar narratives emerging. The mass populist move to the right has, in part, been seen as a class-based reaction against the old neoliberal elites and their low wage economy, which has vastly enriched the few. We experienced the fallout here in the UK with Brexit too. Elements of the working class seemed to vociferously cheer for Farage. Perhaps Brexit was a catchier, emotionally branded virus. It certainly unleashed a kind of political unconsciousness, tapping into a nasty mixture of nationalism and racism under the seemingly empowering, yet ultimately oppressive slogan “We Want Our Country Back.” Indeed, the data shows that more Leave messages spread on social media than Remain.
But those quick to blame the stupidity of white working class somnambulists rallying against a neoliberal elite have surely got it wrong. Brexit made a broad and bogus emotional appeal to deluded nationalists from across the class divide who feared the country had lost its identity because of the free movement of people. This acceleration towards the right was, of course, steered by the trickery of a sinister global coalition of corporate-political fascists – elites like Farage, Brexiteers like Johnson and Gove, and Trump’s knuckleheads in the US.
What can we learn about the role digital media played in this trickery? We are already learning more about the filter bubbles that propagate these influences, and fake news, of course. We also need to look more closely at the claims surrounding the behavioural data techniques of Cambridge Analytica and the right wing networks that connect this sinister global coalition to the US billionaire, Robert Mercer. Evidently, claims that the behavioural analysis of personal data captured from social media can lead to mass manipulation are perhaps overblown, but again, we could be looking at very small and targeted influences that lead to something big. Digital theorists also need to focus on the effectiveness of Trump-supporting Twitter bots and the affects of Trump’s unedited, troll-like directness on Twitter.
But we can’t ignore the accidents of influence. Indeed, I’m now wondering if there’s a turn of events. Certainly, here in the UK, after the recent General Election, UKIP seem to be a spent political force, for now anyhow. The British National Party have collapsed. The Tories are now greatly weakened. So while we cannot ignore the rise of extreme far right hate crime, it seems now that although we were on the edge of despair, and many felt the pain was just too much to carry on, all of a sudden, there’s some hope again. “We Want Our Country Back” has been replaced with a new hopeful earworm chant of “Oh Jeremy Corbyn!”
There are some comparisons here with Obama’s unanticipated election win. A good part of the Obama love grew from some small emotive postings on social media. Similarly, Corbyn’s recent political career has emerged from a series of almost accidental events; from his election as party leader to this last election result. Public opinion about austerity, which seemed to be overwhelmingly and somnambulistically in favour of self-oppression, has, it seems, flipped. The shocking events of the Grenfell Tower fire seem to be having a similar impact on Tory austerity as Hurricane Katrina did on the unempathetic G.W. Bush.
It’s interesting that Corbyn’s campaign machine managed to ride the wave of social media opinion with some uplifting, positive messages about policy ideas compared to the fearmongering of the right. The Tories spent £1 million on negative Facebook ads, while Labour focused on producing mostly positive, motivating and sharable videos. Momentum are also working with developers, designers and UI/UX engineers on mobile apps that might help galvanize campaign support on the ground.
4) Let’s now turn to your book, The Assemblage Brain. The first question is about neuroculture. It is in fact quite clear that you are not approaching it from a biological, psychological, economic or marketing point of view. What is your approach in outlining neuroculture and, more specifically, what do you define as neurocapitalism?
The idea for the book was mostly prompted by criticism of fleeting references to mirror neurons in Virality. Both Tarde and Deleuze invested heavily in the brain sciences in their day and I suppose I was following on with that cross-disciplinary trajectory. But this engagement with science is, of course, not without its problems. So I wanted to spend some time thinking through how my work could relate to science, as well as art. There were some contradictions to reconcile. On one hand, I had followed this Deleuzian neuro-trajectory, but on the other hand, the critical theorist in me struggled with the role science plays in the cultural circuits of capitalism. I won’t go into too much detail here, but the book begins by looking at what seems to be a bit of theoretical backtracking by Deleuze and Guattari in their swansong What is Philosophy? In short, as Stengers argues, the philosophy of mixture in their earlier work is ostensibly replaced by the almost biblical announcement of “thou shalt not mix!” But it seems that the reappearance of disciplinary boundaries helps us to better understand how to overcome the different enunciations of philosophy, science and art, and ultimately, via the method of the interference, produce a kind of nonlocalised philosophy, science and art.
What is Philosophy? is also crucially about the brain’s encounter with chaos. It’s a counter- phenomenological, Whiteheadian account of the brain that questions the whole notion of matter and what arises from it. I think its subject matter also returns us to Bergson’s antilocationist stance in Matter and Memory. So in part, The Assemblage Brain is a neurophilosophy book. It explores the emotional brain thesis and the deeply ecological nature of noncognitive sense making. But the first part traces a neuropolitical trajectory of control that connects the neurosciences to capitalism, particularly apparent in the emotion turn we see in the management of digital labour and new marketing techniques, as well as the role of neuropharmaceuticals in controlling attention.
So neurocapitalism perhaps begins with the G.W. Bush announcement that the 1990s were the Decade of the Brain. Thereafter, government and industry investment in neuroscience research has exceeded that in genetics and has been spun out into all kinds of commercial applications. It is now this expansive discursive formation that needs unpacking. But how to proceed? Should we analyse this discourse? Well, yes, but a problem with discourse analysis is that it too readily rubbishes science for making concrete facts from the hypothetical results of experimentation rather than trying to understand the implications of experimentation. To challenge neurocapitalism I think we need to take seriously both concrete and hypothetical experimentation. Instead of focusing too much on opening up a critical distance, we need to ask what it is that science is trying to make functional. For example, critical theory needs to directly engage with neuroeconomics and subsequent claims about the role neurochemicals might play in the relations between emotions and choice, addiction and technology use, and attention and consumption. It also needs to question the extent to which the emotional turn in the neurosciences has been integrated into the cultural circuits of capitalism. It needs to ask why neuroscientists, like Damasio, get paid to do keynotes at neuromarketing conferences!
5) A Spinozian question. After What can a virus do? in Virality you have moved to What can a brain do? in The Assemblage Brain. Can you describe your shift from the virus to the brain and especially what you want to reach in your research path of Spinozian enquiry What can a body do? What creative potential do you attribute to the brain? And in Virilio’s perspective how many “hidden incidents in the brain itself” may lie in questioning: What can be done to a brain? How dangerous can the neural essence be when applied to technological development? The front line seems to be today in the individual cerebral areas and in the process of subjectivity under ruling diagrams of neural types...
Yes, the second part of the book looks at the liberating potential of sense making ecologies. I don’t just mean brain plasticity here. I’m not so convinced by Malabou’s idea that we can free the brain by way of knowing our brain’s plastic potential. It plays a part, but we risk simply transferring the sovereignty of the self to the sovereignty of the synaptic self. I’m less interested in the linguistically derived sense of self we find here, wherein the symbolic is assumed to explain to us who we are (the self that says “I”). I’m more interested in Malabou’s warning that brain plasticity risks being hijacked by neoliberal notions of individualised worker flexibility.
Protevi’s Spinoza-inspired piece on the Nazi Nuremberg Rallies becomes more important in the book. There are different kinds of sensory power that can either produce more passive, somnambulist Nazi followers or encourage a collective capacity towards action that fights fascism. Both work on a population through affective registers, which are not necessarily positive or negative, but rather sensory stimulations that produce certain moods. So, Protevi usefully draws on Deleuze and Bruce Wexler’s social neuroscience to argue that subjectivity is always being made (becoming) in deeply relational ways. Through our relation to carers, for instance, we see how subjectivity is a multiple production, never a given; more a perpetual proto-subjectivity in the making. Indeed, care is, in itself, deeply sensory and relational. The problem is that the education of our senses is increasingly experienced in systems of carelessness; from Nuremberg to the Age of Austerity. This isn’t all about fear. The Nazi focus on joy and pleasure (Freude), for example, worked on the mood of a population, enabling enough racist feeling and a sense of superiority to prepare for war and the Holocaust. Capitalism similarly acts to pacify consumers and workers; to keep “everybody happy now” in spite of the degrees of nonconscious compulsion, obsolescence and waste, and disregard for environmental destruction. Yet, at the extreme, in the Nazi death camps, those with empathy were most likely to die. Feelings were completely shut down. In all these cases, though, we find anti-care systems in which the collective capacity to power is closed down.
Nonetheless, brains are deeply ecological. In moments of extreme sensory deprivation they will start to imagine images and sounds. The socially isolated brain will imagine others. In this context, it’s interesting that Wexler returns us to the importance of imitative relations. Again, we find here an imitative relation that overrides the linguistic sense of an inner self (a relation of interiority) and points instead to sense making in relation to exteriority. Without having to resort to mirror neurons, I feel there is a strong argument here for imitation as a powerful kind of affective relation that can function on both sides of Spinoza’s affective registers.
6) Let’s talk about specialized control and neurofeedback: the neurosubject seen as the future slave of sedated behaviour. Is it possible to train or to correct a brain? Let’s go back to the relation between politics and neuroculture. Trump’s administration displays neuropolitics today: for example, “Neurocore” is a company whose main shareholder is Betsy DeVos (Trump’s current US Secretary of Education). It is a company specialised in neurofeedback techniques through which one can learn how to modulate and therefore control internal or external cerebral functions, as some human-computer interfaces do. Neurocore affirms that it is able to work positively on the electrical impulses of cerebral waves. What can we expect from mental wellness research through neurofeedback, and from self-regulated or digitally self-empowered cerebral manipulations, in politics and in society?
Of course, the claims made by these brain training companies are mostly gimmicky, money-spinning neuro-speculation. But I think this focus on ADHD is interesting. It also addresses the point you made in the previous question about being neurotypical. So Neurocore, like other similar businesses, claims to be able to treat the various symptoms of attention deficit by applying neuroscience. This usually means diagnosis via EEG – looking at brainwaves associated with attention/inattention – and then some application of noninvasive neurofeedback rather than drug interventions. OK, so by stimulating certain brainwaves it is perhaps possible to produce a degree of behavioural change akin to Pavlov or Skinner. But aside from these specific claims, there’s a more general and political relation established between the sensory environments of capitalism and certain brain-somatic states. I think these relations are crucial to understanding the paradoxical and dystopic nature of neurocapitalism.
For example, ADHD is assumed by many to be linked to faulty dopamine receptors and detected by certain brainwaves (there’s an FDA-certified EEG diagnosis in the US), but the condition itself is a paradoxical mix of attention and inattention. On one hand, people with ADHD are distracted from the things they are supposed to neurotypically pay attention to, like school, work, paying the bills etc., and on the other, they are supposed to be hyper-attentive to the things that are regarded as distractions, like computer games and other obsessions that they apparently spend disproportionate time on. There is a clear attempt here to manage certain kinds of attention through differing modes of sensory stimulation. But what’s neurotypical for school seems to clash with what’s neurotypical in the shopping mall. Inattention, distractibility, disorganization, impulsiveness and restlessness seem to be prerequisite behaviours for hyper-consumption.
Not surprisingly then, ADHD, OCD and dementia become part of the neuromarketer’s tool bag; that is, the consumer is modelled by a range of brain pathologies, e.g. the attention-challenged, forgetful consumer whose compulsive drives are essential to brand obsessions. All this links to the control society thesis, Deleuze’s location of marketing as the new enemy, and the potential infiltration of neurochemicals and brainwaves as the latest frontier in control.
What I do in the book is look back at the origins of the control society thesis, found explicitly in the dystopias of Burroughs and implicitly in Huxley. What we find is a familiar paradoxical switching between freedom and slavery, joyful coercion and oppression. In short, the most effective dystopias are always dressed up as utopias.
7) What then is an assemblage brain? It seems to me that a precise line of thought passing from Bergson, Tarde, Deleuze, Guattari, Whitehead, Ruyer and Simondon has been traced here. You write: everything is potentially «becoming brain». Why? And what is the difference from the cybernetic model of the brain that prevails today?
Although I don’t really do much Whitehead in the book, I think his demand for a nonbifurcated theory of nature is the starting point for the assemblage brain. Certainly, by the time I get to discuss Deleuze’s The Fold, Whitehead is there in all but name. So there’s this beautiful quote that I’ve used in a more recent article that perfectly captures what I mean...
[W]e cannot determine with what molecules the brain begins and the rest of the body ends. Further, we cannot tell with what molecules the body ends and the external world begins. The truth is that the brain is continuous with the body, and the body is continuous with the rest of the natural world. Human experience is an act of self-origination including the whole of nature, limited to the perspective of a focal region, located within the body, but not necessarily persisting in any fixed coordination with a definite part of the brain. (2)
This captures the antilocationist stance of the book, which rallies against a series of locationist positions in neuroculture, ranging from what has been described as fMRI-phrenology to the neurophilosophy of Metzinger’s Platonic Ego Tunnel. The cybernetic model of sense making is a locationist model of sense making writ large. The cognitive brain is this computer that stores representations somewhere in a mental model that seems to hover above matter. It communicates with the outside world through internal encoding/decoding information processors, and even when this information becomes widely distributed through external networks, the brain model doesn’t change, but instead we encounter the same internal properties in this ridiculous notion of a megabrain or collective intelligence. We find a great antidote to the megabrain in Tarde’s social monadology, but The Fold brilliantly upsets the whole notion that the outside is nothing more than an image stored on the inside. On the contrary, the inside is nothing more than a fold on the outside.
To further counter such locationist perspectives on sense making – Whitehead’s limitations of the focal region – we need to rethink the question of matter and what arises from it. For example, Deleuze’s use of Ruyer results in this idea that everything is potentially becoming brain. There are, as such, micro-brains everywhere in Whitehead’s nonbifurcated assemblage: the society of molecules that composes the stone, for example, and senses the warmth of the sun.
There’s evidently politics in here too. The ADHD example I mentioned is a locationist strategy that says our response to the stresses and disruptions experienced in the world today can be traced back to a problem that starts inside the head. On the contrary, it’s in our relations with these systems of carelessness that we will find the problem!
8) You declare that the couple “mind/brain” is irresolvable. Against the ratio of the scientific concept of the «mind» you counterpose the chaotic materiality of the «brain», writing that the brain is the chaos which continues to haunt science (p. 195). Can we say that such an irreducible escape from chaos, expressed in your metaphor of Huxley’s escape from Plato’s cavern, shows your preference for What is Philosophy? by Deleuze and Guattari rather than A Thousand Plateaus, where assemblage theory is displayed?
So yes, in The Fold there is no mind/brain distinction, just, as What is Philosophy continues with, this encounter between matter and chaos. The brain simply returns or is an exchange point for the expression of chaos – Whitehead’s narrow “focal point” of the percipient event. This is, as Stengers argues, nothing more than a mere foothold of perception, not a command post! Such a concept of nature evidently haunts the cognitive neurosciences approach that seeks, through neuroaesthetics, for example, to locate the concept of beauty in the brain. We might be able to trace a particular sensation to a location in the brain, by, for example, tweaking a rat’s whisker so that it corresponds with a location in the brain, but the neurocorrelates between these sensations and the concept of beauty are drastically misunderstood as a journey from matter to mental stuff or matter to memory.
I think the metaphor of Huxley’s acid-fuelled escape from Plato’s cave, which is contrasted with De Quincey’s opiated journey to the prison of the self, helps, in a slightly tongue-in-cheek way, to explore the difference between relations of interiority and exteriority, or tunnels and folds. The point is to contrast De Quincey’s need to escape the harsh world he experienced in the early industrial age by hiding inside his opiated dream world with Huxley’s acid-induced experience of “isness.” Huxley was certainly reading Bergson when he wrote The Doors of Perception, so I think he was looking to route round the kind of perception explained by the journey from matter to the mental. My attempt at a somewhat crude lyrical conclusion is that while De Quincey hides in his tunnel, Huxley is out there in the nonbifurcated fold...
9) One last question (maybe more ethical than what we would expect from new media theorists today) involves the meeting between a virus and a brain. Which ethical, biological, political, social and philosophical effects may occur when viruses are purposely introduced or inoculated into a human brain, as with the «organoids» derived from cells grown in research laboratories? Can growing a brain from embryonic cells and wildly experimenting with modifying its growth take the zoon politikon to a critical edge? Neither machines, nor men, nor cyborgs, but simply wearable synthetic micro-masses. Are we approaching with huge strides the bio-inorganic era that Deleuze defined, in his book on Foucault, as the era of man in charge of the very rocks, of inorganic matter (the domain of silicon)?
One way to approach this fascinating question might be to again compare Metzinger’s neuroethics with an ethics of The Fold. On one hand, there’s this human right to use neurotechnologies and pharmaceutical psychostimulants to tinker with the Ego Tunnel. It’s these kinds of out-of-body experiences that Metzinger claims will free us from the virtual sense of self by enabling humans to look back at ourselves and see through the illusion of the cave brain. On the other hand, the ethics of The Fold suggests a more politically flattened and nonbifurcated ecological relation between organic and inorganic matter. The nightmare of the wearable micro-masses ideal you mention would, I suppose, sit more concretely in the former. Infected with this virus, we would not just look back at ourselves, but perhaps spread the politics of the Anthropocene even further into the inorganic world. In many ways, looking at the capitalist ruins in which we live now, we perhaps already have this virus in our heads. Indeed, isn’t humanity a kind of virus in itself? Certainly, our lack of empathy for the planet we contaminate is staggering. I would tend to be far more optimistic about being in the fold since, even though we still have our animal politics and Anthropocene to contend with, if we are positioned more closely in nature, that is, in the consequential decay of contaminated matter, we may, at last, share in the feeling of decay. I suppose this is again already the case. We are living in the early ruins of inorganic and organic matter right now, yet we seem to think we can rise above it. But even Ego Tunnels like Trump will eventually find themselves rotting in the ruins.
1) Tony D Sampson and Jussi Parikka, “Learning from Network Dysfunctionality: Accidents, Enterprise and Small Worlds of Infection” in The Blackwell Companion to New Media Dynamics, Hartley, Burgess and Bruns (eds.), Wiley-Blackwell, 2012.
2) Whitehead cited in Dewey, J., “The Philosophy of Whitehead” in Schilpp, P. A. (ed.), The Philosophy of Alfred North Whitehead, Tudor Publishing Company, New York, 1951.
Explorations in Media Ecology, Volume 15, Number 1
© 2016 Intellect Ltd Article. English language. doi: 10.1386/eme.15.1.55_1
University of Cincinnati
Grand Valley State University
The authors argue that Gilles Deleuze can be read as a media ecologist, extending many insights of Marshall McLuhan’s, including the idea that the medium is the message, that the content of any medium is another medium, and that media extend and alter human faculties. Yet since McLuhan preferred to write in axioms and probes, Deleuze provides a more robust theorizing of these issues. Specifically, Deleuze advances on McLuhan by providing a more complex notion of media as assemblages, avoiding the dilemmas of technological determinism; by developing a more robust way of understanding affect and desire, away from McLuhan’s notion of sensory ratios; and by establishing power and ethics as central concerns, against McLuhan’s primarily descriptive scholarly approach. We conclude that Deleuze thus illustrates the continuing relevance of McLuhan’s foundational work, yet his advances on McLuhan offer many prospects for improving the study of media from a media ecological perspective.
The vast corpus of Gilles Deleuze has recently found significant uptake in the wide world of critical theory. The corpus’ sheer volume helps explain this widespread interest, since Deleuze theorizes on issues relevant to a broad diversity of topics of central concern for critical scholars such as power, desire and affect. Furthermore, Deleuze’s concepts, such as smooth and striated space, deterritorialization, control society, the rhizome, the molar and molecular, the refrain, and machinic assemblages, seem particularly relevant for today’s postmodern capitalism and digital media age. Arguing that this is the proper role for the philosopher, Deleuze seeks to develop concepts that might spark new ways of thinking. As such, Deleuze frequently borrows from and refines concepts from other thinkers, especially those either ignored or excluded from mainstream western philosophizing such as, in the twentieth century, Alfred North Whitehead, Gilbert Simondon, Paul Virilio and William James. Tracing Deleuze’s intellectual heritage further into the past, Todd May (2005: 26), one of his most lucid interpreters, elucidates how Baruch Spinoza, Henri Bergson and Friedrich Nietzsche represent the Christ, Father and Holy Ghost of Deleuze’s thinking.
In this article, we would like to add another theorist to this list of Deleuze’s forebears – Marshall McLuhan. This is perhaps a surprising addition since, unlike these other theorists, Deleuze’s references to McLuhan are sporadic and brief. Nevertheless we argue that Deleuze can be envisioned as a proper heir of McLuhan’s, that is, as a media scholar with an ecological perspective. This is, of course, not an exclusive claim, since Deleuze is also concerned with many issues not directly relevant to communication media, at least in McLuhan’s conceptualization. Yet many of Deleuze’s concepts have relevance for media studies, and his theorization often proceeds from insights originally developed by McLuhan including, as we illustrate in the first section below, the idea that the medium is the message, that the content of any medium is another medium, and that media extend and alter human faculties.
Despite these shared insights, Deleuze develops a more rigorous and complex theoretical perspective than McLuhan, who preferred to write in axioms and probes designed to innervate thought, rather than elaborate an entire framework. As such, we also make a second major claim, namely that as a media ecological scholar Deleuze refines and advances McLuhan’s initial explorations. Deleuze does so in three ways, thereby fine-tuning McLuhan’s thoughts in ways that mitigate some significant criticisms. First, Deleuze defines the hazy concept ‘media’ with a turn towards ‘machinic assemblage’, addressing the widespread indictment of McLuhan for technological determinism. Second, although both Deleuze and McLuhan recognize that media generate different affects and hence desires, Deleuze provides a more extensive understanding of affect and desire that grounds and warrants McLuhan’s, at times, hasty proclamations. Finally, Deleuze directs attention to power and ethics, issues that McLuhan only briefly touches upon and that, due to their submerged role, risk turning McLuhan’s scholarship into a purely descriptive and hence politically debilitating enterprise, as evidenced by McLuhan’s work as a consultant for advertising firms. In sum, we argue that Deleuze, as an heir of McLuhan, takes up the work of his forebear in ways consistent with a media ecological perspective but in a manner that greatly advances that perspective.
Media and assemblages in McLuhan and Deleuze
Although Deleuze prefers the label ‘philosopher’, one can envision Deleuze as a media theorist in much the same vein as McLuhan. For one example, Deleuze’s (1995) description of the transition from disciplinary to control society relies upon the shift from analogue to digital technics, and his (Deleuze and Parnet 2002: 112–15) distinction, borrowed from Henri Bergson, between the actual and the virtual gains enhanced relevance with the onset of the digital age. Furthermore, Deleuze’s canon includes works explicitly focusing on cinema (1986, 1989), the art of Francis Bacon (2005), and The Logic of Sense (1990), in which Deleuze theorizes sense as the faculty with one side turned towards actuality (the thing or the state of affairs) and the other turned towards the proposition. In Deleuze’s (1990: 22) words, sense is ‘exactly the boundary between proposition and things’, and, in McLuhan’s language, we could see this boundary as a medium, which, for McLuhan, extends and translates human senses. Indeed, in Cinema 1, Deleuze (1986: 7–8) sounds a note similar to McLuhan’s emphasis on media as extensions of human faculties when he remarks that cinema is ‘the organ for perfecting the new reality’, an ‘essential factor’ in a ‘new way of thinking’, or when he and Félix Guattari (1987: 61) write, ‘The hand as a general form of content is extended in tools….’
Deleuze is often concerned with other media topics discussed by McLuhan, such as language, music, literature and, especially, different types of space. In addition, many of Deleuze’s major concepts hold direct relevance for the study of media. Peter Zhang (2011) has illustrated how Deleuze’s notions of striated versus smooth space map onto McLuhan’s distinction between acoustic and visual space, a distinction Stanley Cavell (2003) portrays as central to McLuhan’s corpus. The connection is not lost on Deleuze, who makes positive remarks about McLuhan and other media scholars such as Lewis Mumford and Paul Virilio.1 Although Deleuze rarely uses the term ‘media’, preferring to discuss assemblages, modes and machines, his use of these terms remains consonant with McLuhan’s emphasis on media, since both are concerned with how different modes of thought, perception, language, affection and action shape society. D. N. Rodowick, one of the best interpreters of Deleuze’s cinema work, thus describes Deleuze’s fundamental task as to ‘understand the specific set of formal possibilities – modes of envisioning and representing, of seeing and saying – historically available to different cultures at different times’ (1997: 5).
McLuhan, on the other hand, may seem more narrowly focused on media than Deleuze, especially since his chapter headings in Understanding Media (1964) are all different media technologies such as movies and television. Such a seemingly narrow focus has led to many criticisms of McLuhan’s supposed technological determinism. Yet careful consideration of McLuhan’s work reveals a broader focus than media alone since McLuhan is also concerned with the interfacing of culture with media, something that Deleuze’s terms machine, assemblage, and modes point towards. Indeed, McLuhan’s (1967: 159) later work frequently employs the concept of modes, stressing that the study of media is the study of modes: ‘All that remains to study are the media themselves, as forms, as modes ever creating the new assumptions and hence new objectives’. McLuhan’s use of the term modes resonates with Deleuze’s, indicating their shared concern about how culture interfaces with media. As McLuhan states, ‘Vivisective inspection of all modes of our own inner-outer individual-social lives makes us acutely sensitive to all inter-cultural and inter-media experience’ (1969: 64). Here, McLuhan understands modes as the manner of interfacing with media, as things that make us sensitive to inter-cultural and inter-media experience. This quotation seems to make clear, then, that McLuhan understands modes as liminal, in-between media and culture, just as Deleuze’s conceptualization of modes, assemblages and machines understands human experience as a coupling of media and culture, of human faculties and technology. A focus on how culture interfaces with media denies the frequent accusations of McLuhan’s technological determinism. Like Deleuze, McLuhan is well aware that, to generate an effect, a medium first needs to be taken up by a social matrix.
Although the accusations of technological determinism may be based in a less than generous read, they remain a necessary cautionary note and their existence is far from surprising. McLuhan’s predilection for axioms and probes, part of a writing style he saw as an adaptation to an electronic media environment, produces claims that may seem, to the more deliberate scholar, to be overstatement, hyperbole, or a gross generalization. Take ‘the medium is the message’. McLuhan’s famous axiom seems to dismiss any socio-cultural effects from message content. Other statements supporting this axiom seem to confirm this extreme position, such as when McLuhan remarks that ‘the medium shapes and controls the scale and form of human association and action’, or when he earlier claims that the assembly line altered our ‘relations to one another and to ourselves’, and it ‘mattered not in the least whether it turned out cornflakes or Cadillacs’ (1964: 24, 23). The automobile certainly had a major impact on society, and elsewhere in the same work McLuhan treats things like the wheel, the bicycle, and the airplane as media, seeming to contradict his earlier dismissal since, with Cadillacs, the content of the assembly line is another technological medium.
Yet, despite his hyperbolic style, McLuhan’s basic claim that media introduce changes of scale, pace or pattern into human affairs remains undeniable. Furthermore, McLuhan’s notion of medium at least points towards a more complex understanding than just the material technology. Indeed, it is McLuhan (1964: 23) who first recognizes that the content of one medium is ‘always another medium’, such as in our assembly line and Cadillac example. If media shape human affairs, and the content of a medium is always another medium, then McLuhan’s position on the effects of media form versus their content is more complex than the accusations of technological determinism entail. The warnings against a simple and direct technological causation should be heeded, but McLuhan can be read more generously without inferring such a notion of causation.
In fact, we can see McLuhan’s axioms and probes as the first volley, necessary to shake free some encrusted biases towards content analysis, and Deleuze’s work as the more sustained ground strokes. Deleuze and Guattari (1983: 240) approvingly cite McLuhan’s realization that the content of one medium is another medium in their extended criticism of Saussurian linguistics’ emphasis on the signifier. Whereas Saussurian linguistics stresses content (the signifiers, which mean only in relation), ‘the significance of McLuhan’s analysis’ is to have shown the import of ‘decoded flows, as opposed to a signifier that strangles and overcodes the flows’. They continue:
In the first place, for non-signifying language anything will do: whether it be phonic, graphic, gestural, etc., no flow is privileged in this language, which remains indifferent to its substance or its support, inasmuch as the latter is an amorphous continuum… (A) substance is said to be formed when a flow enters into a relationship with another flow, such that the first defines a content and the second, an expression. The deterritorialized flows of content and expression are in a state of conjunction or reciprocal precondition that constitutes figures as the ultimate units of both content and expression. These figures do not derive from a signifier nor are they even signs as minimal elements of the signifier; they are non-signs, or rather non-signifying signs, point-signs having several dimensions, flow-breaks or schizzes that form images through their coming together in a whole, but that do not maintain any identity when they pass from one whole to another. Hence the figures… are in no way ‘figurative’; they become figurative only in a particular constellation that dissolves in order to be replaced by another one. Three million points per second transmitted by television, only a few of which are retained.
Some elucidation is necessary, since Deleuze and Guattari’s theoretical vocabulary is quite different from McLuhan’s. Basically, Deleuze and Guattari argue, against linguistics, that content does not determine or ‘overcode’ the communication. In other words, similar to McLuhan’s idea that the medium is the message, Deleuze and Guattari stress that the signifier is not what is significant. The quotation begins by calling attention to media besides language – non-signifying languages – and, based in their understanding of machinic assemblages, calls attention to the flows that compose these languages, like the flows of gestures, graphics, and sounds on television. They are arguing that we cannot understand all media communication on the model of language, as Deleuze also concludes in his cinema books. Instead of the dialectical linguistic model based upon signifier-signified relations, they draw upon Louis Hjelmslev’s four-part model, which recognizes a substance and form of a content and an expression. A substance (say a television show) is formed when the flows of content (the gestures and sounds and images) and the flows of expression (the video camera and the editing, cuts and montage) are combined. The form is the arrangement and structuring of the substance; on a television show, the form is the order of the shots and the linkages between them. Hence why Deleuze and Guattari cite McLuhan’s recognition that the content of any medium is another medium; the content of a television programme is the flows of gestures, sounds, and images. Only combined with the flows of expression (the tele-visual flow, its camera, and editing techniques) does a figure or a whole (something with both substance and form) emerge.
This four-part model emphasizing couplings or combinations leads Deleuze and Guattari to refer to what McLuhan would call the medium with the term machinic assemblage. Drawing on the insight that the content of any medium is another medium, Deleuze and Guattari prefer to describe this media coupling as an assemblage. This assemblage is machinic in the sense that it works like any machine, combining flows and breaks into a whole operation. As Deleuze and Guattari explain,
An organ machine is plugged into an energy-source machine: the one produces a flow that the other interrupts. The breast is a machine that produces milk, and the mouth a machine coupled to it… For every organ-machine, an energy-machine: all the time, flows and interruptions.
Lest the reference to breast feeding leads us astray, think of television again. There are flows of gestures, images and sounds that are interrupted or broken by the television camera, through such devices as editing and montage. The coupling of the two produces the whole, a machinic assemblage.
As the last line of the quotation above indicates, the assemblage and its machinic couplings do not end here but have another component, another break – the viewer, who sees only a few figures from the three million points per second transmitted. At this level, the content of the television programme (the flows of light) becomes coupled with the viewer’s senses to constitute a new assemblage, one that may evoke significance and affect. Once again, in this assemblage the content of one medium is another medium; indeed, what was once the flow of expression (the tele-visual flow) now becomes the flow of content that the viewer’s sensory system processes into expression, into figuration. In other words, the expression (the figures perceived) only emerges from the tele-visual flow, which only emerges from the flows of gesture, image and sound and their coupling by the camera and in the editing booth.
This notion of humans plugging into other machines to become machinic assemblages seems consonant with McLuhan’s idea of media as extensions, such as the wheels and the accelerator of a car being extensions of our feet. As McLuhan (1964: 272) remarks about television, ‘With TV, the viewer is the screen. He is bombarded with light impulses that James Joyce called the “Charge of the Light Brigade” that imbues his “soulskin with subconscious inklings”’. Shortly thereafter, McLuhan continues, further illustrating his like-mindedness with Deleuze and Guattari: ‘The TV image offers some three million dots per second to the receiver. From these, he accepts only a few dozen each instant, from which to make an image’ (1964: 273).
The existence of the machinic assemblage (or what McLuhan would call medium) of television means both that the dialectical signifier-signified model is inadequate to understand media, with their non-signifying semiotics, and that this model must be expanded to include both content and expression. Precisely because the content of any medium is another medium, precisely because all media are machinic assemblages, we must pay attention to the coupling of content and expression, of flows and breaks, not simply to the linguistic content alone, the chain of signifiers that so occupies Saussurian linguists and rhetorical critics. Recognition of assemblages of content and expression also means that a linguistic model that only addresses content (signifiers) cannot adequately describe the production of communication and, as we will see, desire in the social. Asking what signifiers are present on television, for instance, misses the sensory intimacy of the televisual experience, which McLuhan describes as ‘cool’. This intimacy can explain why Nixon flops on television while Kennedy soars, whereas attention to their string of signifiers offers no insight (as the radio listeners who thought Nixon won this famous debate attest).
Indeed, this televisual intimacy has dramatically transformed political discourse, which now prefers the cool stylings of a Reagan and Clinton to the hot Nixon or McCain, which now demands political oratory characterized by sound bites, narrative form, and self-disclosure that Kathleen Hall Jamieson (1990) deems the ‘effeminate style’. In politics, television has certainly been the ‘message’; the change in content can only be explained by consideration of the media constituting the social environment since, considered apart from their machinic assemblages, political signifiers lack significance. With John McCain and Barack Obama equally and fervently appealing to the American Dream, for instance, consideration of only their signifiers cannot account for the vast differences in affect and desire innervated by the coupling of those signifiers (and images and gestures) into a televisual assemblage. Obama emerged as the preferable figure in this media environment. This is not to discount party loyalty, ideology or other factors for causing some voters to prefer McCain but only to say that Obama crafted the more attractive televisual image.
Shifting the critical focus from the content to assemblage entails examining the perceptual and affective aspects of media experience. In other words, it is not so much because of their content (since the content is mostly full of repetitive, generic promises anyway) but because of how they feel and seem that some politicians make better images for television. Thus McLuhan bases his claims about how television has changed politics on an explanation of the viewing experience. To do so, McLuhan depicts the tele-visual experience as a primarily tactile perception, one innervating a synaesthetic affect. In this depiction, McLuhan offers a typically eccentric notion of tactility, closer to what people mean when they say they were touched by art. As McLuhan (1964: 67) remarks, ‘It begins to be evident that “touch” is not skin but the interplay of the senses, and keeping in touch or getting in touch is a matter of a fruitful meeting of the senses, of sight translated into sound and sound into movement, and taste and smell’. Since the television image is profoundly participatory and in-depth, it touches viewers by causing a sort of synaesthetic interplay among the senses. The cool medium of television involves the viewer in the image construction, engaging an in-depth interplay of all the senses. McLuhan states, ‘The TV image requires each instant that we “close” the spaces in the mesh by a convulsive, sensuous participation that is profoundly kinetic and tactile, because tactility is the interplay of the senses, rather than the isolated contact of skin and object’ (1964: 273). For McLuhan, we are ‘touched’ by the televisual image, affected by flows of sight and sound to not only see and hear but to feel and think.
Needless to say, McLuhan often brings on criticisms of technological determinism with such claims. Indeed, unlike Deleuze who describes different regimes of the cinematic image, McLuhan treats television and cinema as distinct media with dissimilar image qualities. To do so, McLuhan (1964: 273) must dismiss anything like high-definition television as simply not television: ‘Nor would “improved” TV be television. The TV image is now a mosaic mesh of light and dark spots, which a movie shot never is, even when the quality of the movie image is very poor’. McLuhan’s argument here is defensive and belies the history of cinema and television, including the advance of technology and the evolution of image forms, readily apparent from today’s perspective. In contrast, Deleuze’s emphasis on assemblages allows him to recognize different cinematic images, some of which are closer to McLuhan’s depiction of television than cinema. For instance, Deleuze (1989: 6, 59–64) describes the moments in musicals where characters break into song as pure sonsigns. Pure sonsigns are part of the time-image regime in cinema, which presents a moment in time directly, like a musical performance the audience presently enjoys (instead of a representation of a musical performance that the characters enjoy). These sonsigns are participatory, involving and non-linear, possessing the same characteristics that McLuhan attributes to the televisual image.
Lest we wander off into an extended discussion of Deleuze’s cinema theory, let us summarize the conclusions of this section. Deleuze and McLuhan shared a concern with media, directing our attention to media and away from the content focus of early media studies and linguistics. Deleuze presents an advance over McLuhan, however, by conceiving media as a machinic assemblage, the coupling of flows and their interruptions. Conceiving media as assemblages helps avoid some of McLuhan’s more totalizing claims about media considered as stable categories, claims often evoking accusations of technological determination. The assemblage concept is more nuanced than simple technological determinism, and, as we will see, leads directly into a more developed theory of affect and desire, whose basic precepts can be unearthed in McLuhan’s writings. Again, however, Deleuze provides the theoretical backing for McLuhan’s probes, spelling out in more detail the scholarly task – to perform a mapping of assemblages and their modes.
Affect and desire in McLuhan and Deleuze
Affect and desire remain consistent concerns shared by Deleuze and McLuhan, concerns pointed to by McLuhan and further developed in Deleuze’s work. To illustrate these concerns for McLuhan, let’s stick with television. McLuhan (1964) seems distinctly concerned with how media alter cultural attractions and desires, claiming that, among other effects, television led to preferences for cool stars like Ed Sullivan, for skin diving and the wraparound spaces of small cars, for westerns and their ‘varied and rough textures’, for the beatnik sensibility, for football over baseball, and for different forms of fashion, literature, music, poetry and painting. The warrant for each of these claims is basically the same. Television is low-definition, fragmented and disconnected, requiring images that are iconic and in-depth. Such images demand a high level of audience participation to complete, and therefore create attractions to cool, participatory forms and qualities that allow viewers to participate in their construction. For instance, viewers must interpret a cool, rounded, diverse character instead of being directly shown how to understand an easily classifiable character. Football is a collaborative, in-depth sport whereas baseball is mano a mano, an individualized challenge of batter versus pitcher. Yet, setting aside the correctness of these claims, the point here is that McLuhan remains primarily concerned with how media spawn differences in cultural attraction and desire.
Furthermore, at least with television, McLuhan attributes these changes in attraction and desire to alterations in experiential sensation, which Deleuze will describe as affects. In other words, both McLuhan and Deleuze understand desire as a production of affects that are enjoyable. Media become desirable because they produce affects such as fear, surprise and joy that ‘touch’ audiences. Such a view sees desire as a surface phenomenon, rather than resorting to a depth explanation as does psychoanalysis, which relates desire to some more fundamental longing such as the desire to mend the split in subjectivity from the entrance into the symbolic, or as a representative of the Oedipus myth. Both McLuhan and Deleuze and Guattari (1983) stridently criticize psychoanalysis, faulting it for, in part, ignoring mediated experience. Psychoanalysis must focus on content to portray it as a representation of some more fundamental desire, thereby dismissing the production of affect as irrelevant at best or at worst as a cover for this somehow more real source of desire. In contrast, Deleuze, Guattari and McLuhan conceive desire as machinic production that generates pleasurable affects. Such a conceptualization requires no deep mystery – unlike in psychoanalysis, which often reads in a compelling manner but tends to lack any empirical support (how do we know the Oedipus myth is fundamental, universal?) – and instead allows the scholar to focus directly on media and their affect-laden experience.
Yet while McLuhan senses that media spark affects that thereby alter desires, Deleuze provides further theoretical refinement of affect and desire. Following Spinoza, Deleuze and Guattari (1987: xvi) define affect as the body’s ability to affect and be affected. Affect designates a pre-personal, embodied intensity experienced in the transition from one state to another. Affect accompanies and provides the texture for all of experience. At particularly sharp moments, we experience it as a spark or shock (see Jenkins 2014) but it persists throughout lived experience regardless of its degree of intensity. Affects are thus pre-conscious, continuous flows of intensity that accompany experience. As pre-conscious, they take place before their cognitive processing into the separate sensory channels – I saw this, or I heard this. In this sense, affects are like the synaesthetic touch depicted by Deleuze and McLuhan alike. As one of Deleuze’s interpreters and translators, Brian Massumi, explains, ‘Affects are virtual synesthetic perspectives anchored in … the actually existing, particular things that embody them’ (2002: 35, original emphasis).
As intensities, affects are the flip side of the extensions McLuhan associates with media. That is, affects are the experienced impingements that rebound onto the mind-body from its mediated extensions. Without using the term affect, McLuhan evinces a similar conceptualization, especially in his retelling of the Narcissus myth. McLuhan (1964: 51) argues that Narcissus did not fall in love with himself but instead mistook the image in the water for another person. This is because Narcissus indicates a state of narcosis or numbness, and self-love does not evoke such affects. Instead, Narcissus experienced a shock from the extension of himself that he mistakes for another person, and that shock sparks a physiological response of numbness, similar to battle shock or auto-amputation. As McLuhan explains,
We speak of ‘wanting to jump out of my skin’ or of ‘going out of my mind,’ being ‘driven batty’ or ‘flipping my lid’… In the physical stress of super stimulation of various kinds, the central nervous system acts to protect itself by a strategy of amputation or isolation of the offending organ, sense, or function.
Thus Narcissus’ narcosis is a defence mechanism, and McLuhan perceives a similar defensive numbness in response to electronic media that extend central nervous systems. Just as Narcissus responds with shock and numbness to seeing himself extended, extending central nervous systems exposes and makes vulnerable that system, thereby inducing a similar narcosis. This numbness is a particularly strong affect (intensity) resultant from the mediated extension, one with potentially dire results according to McLuhan. In shock, we risk mistaking media as something other than ourselves extended and hence become ‘servo-mechanisms’ of technology (McLuhan 1964: 55). Such surrendering of ‘our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves’ leaves us without ‘any rights left’, McLuhan (1964: 73) continues. Thus for McLuhan awareness is the solution to narcosis, ‘As long as we adopt the Narcissus attitude of regarding the extensions of our own bodies as really out there and really independent of us, we will meet all technological challenges with the same sort of banana-skin pirouette and collapse’ (1964: 73, original emphasis).
McLuhan describes such affects via reference to sensory ratios. He contends that when one sense faculty (like vision) is super stimulated, human beings respond with narcosis, unable to perceive their mediated environment. Television stresses the sense of touch to such an extent that cultural desires alter in favour of the tactile and participatory. Thus McLuhan bases his ontology upon and begins with a pre-organized human body, with certain sensory faculties whose ratios are re-ordered by media. Such a perspective often leaves readers wondering how McLuhan knows these changes are effected, such as in the television examples above. The equation that television is a tactile medium and thus evokes tactile desires seems too simple, and reduces tactility (and vision, hearing, etc.) to a single mode. In contrast, Deleuze, following Spinoza, begins with an ‘I do not know’, one more open to differences and the vast possibilities of becoming. Thus Deleuze repeatedly quotes the famous passage from Spinoza that reads, ‘Nobody as yet has determined the limits of the body’s capabilities: that is, nobody as yet has learned from experience what the body can and cannot do’ (1992: 105).
Instead of beginning with an organized body, with particular sense organs and their faculties, Deleuze starts with the Body without Organs (BwO). The BwO is the unorganized body, prior to its extensions and couplings in machinic assemblages, the body conceived as a glutinous mass of potential rather than a solid substance and form. Deleuze thus often compares the BwO to an egg, a soup of undifferentiated cells prior to its organization into limbs, organs, and the like. From the perspective of the BwO, the body only has potential affects, virtual affects, affects as yet unactualized into various assemblages. Massumi offers one of the clearest explanations:
Call each of the body’s different vibratory regions a ‘zone of intensity.’ Look at the zone of intensity from the point of view of the actions it produces. From that perspective, call it an ‘organ’… Imagine the body in suspended animation: intensity = 0. Call that the ‘body without organs’…. Think of the body without organs as the body outside any determinate state, poised for any action in its repertory; this is the body from the point of view of its potential, or virtuality.
Whereas McLuhan begins from bodies presumed to be structured by certain sensory organs, Deleuze begins from the BwO and asks how the body becomes organized through various machinic assemblages. Such a perspective allows Deleuze to recognize difference, to leave open the possibility for a wide variety of becomings. Rather than a visual medium necessarily producing visual ratios, affects and desires, beginning with the BwO allows scholars to recognize becomings where an eye is not just an eye, an ear not just an ear, a hand not just a hand. Massumi (1992: 93–94) offers the example of a man who wishes to become a dog who wears shoes, only to discover that, walking on all fours, he has no hand left to tie the final shoe. The man employs his mouth-as-hand, tying the shoes with his teeth, in the process of becoming this strange monster. This example may seem to lie at quite a remove from media studies, yet it is only by beginning with an ontology that conceives the body as a pool of liquid potential, rather than a pre-organized sensory apparatus, that scholars can account for the differences in the translations and actualizations of media form, such as the shift from movement-images to time-images in the cinema. McLuhan's ontology requires that he envision cinema as singular, a highly visual and hot medium, rather than recognizing the potential for cinema to become otherwise, to become aural, or tactile, or many other admixtures of percept, affect and cognition.
Beginning with this different ontology beckons a different scholarly gesture, especially with regard to affect. In his work on Spinoza, Deleuze describes this different scholarly gesture as an ethology. An ethology does not describe bodies according to their form, function, or organs, as does McLuhan, but according to their modes, that is, their manners of becoming, their capacities to affect and be affected. As Deleuze remarks, ‘Every reader of Spinoza knows that for him bodies and minds are not substances or subjects, but modes’ (1988: 123–24). Bodies are thus not bundles of sensory ratios but capabilities or capacities, such as the capacity of the hand to act like an eye. Such a perspective precludes McLuhan’s gesture of confidently predicting the effects and affects of the senses, and instead presumes difference – that we do not know what a body can become in different combinations or assemblages. The scholarly gesture changes because bodies are not conceived of as organizations of form but as complex relations with other bodies, as assemblages, or as modes, those manners in which these relations become organized. As a result, the scholar sees life differently. Thus Deleuze writes,
Concretely, if you define bodies and thoughts as capacities for affecting and being affected, many things change. You will define an animal, or a human being, not by its form, its organs and its functions, and not as a subject either; you will define it by the affects of which it is capable.
Deleuze’s advance upon, and difference from, McLuhan’s implicit notion of affect does not end here, however. Again following Spinoza, Deleuze also gives us clues into how modes produce different affects. Modes produce affect in two primary ways, by composing a relation of speed and slowness and by composing a relation between affective capacities. In Deleuze’s terms, ‘For concretely, a mode is a complex relation of speed and slowness, in the body but also in thought, and it is a capacity for affecting or being affected, pertaining to the body or to thought’ (1988: 124). Deleuze employs the example of music to describe the relations of speed and slowness. Beginning from form and substance, one can describe a musical piece as composed of notes, arranged in a particular order. Yet such a perspective misses something fundamental in music – the rhythm and tempo. The same order and notes can produce widely variant songs based upon the speed of the playing. As Deleuze (1988: 123) explains:
The important thing is to understand life… not as a form … but as a complex relation between differential velocities, between deceleration and acceleration of particles… In the same way, a musical form will depend on a complex relation between speeds and slowness of sound particles. It is not just a matter of music but of how to live: it is by speed and slowness that one slips in among things, that one connects with something else. One never commences; one never has a tabula rasa; one slips in, enters in the middle; one takes up or lays down rhythms.
Ethology first of all studies relations of speed and slowness, organized via modes. Second, ethology asks how the modes relate different capacities for affect. Deleuze often employs the example of the wasp and orchid, conceived collectively as an assemblage. The wasp’s capacity to fly, to smell, and to gather pollen combines with the orchid’s capacity to flower, to produce pollen, and to emit scents. In combination, the orchid reproduces and the wasp feeds, forming a complex assemblage that studying either the wasp or orchid in isolation would miss. To return to a media example, HBO employs television’s capacity for home broadcast and episodic organization combined with cinema’s capacity for high production value and epic narrative to produce many shows that are closer to McLuhan’s depiction of cinema, yet that still take place in private, intimate locales and via the lower-definition television screen. With these shows, we have a hybrid becoming of cinema and television, a cinema made for television, that shapes different modes of production (such as elongated narratives told episodically) and consumption (40-minute viewings without commercial interruptions).
In this sense, Deleuze’s conceptualization of assemblages, modes and affect is fundamentally based upon an ecological perspective, just as McLuhan repeatedly beckons for ecological thinking about media. For the media scholar, each such modal relation, each assemblage, must be mapped independently, rather than reduced to global categories such as cinema or television as McLuhan is wont to do, since each relation of speed and slowness has, to quote Deleuze, its own ‘amplitudes, thresholds…, and variations or transformations that are peculiar to them’ and since each relation of affective capacities remains unique due to ‘circumstances, and the way in which these capacities for being affected are filled’ (1988: 125–26).
Deleuze depicts these two functions of modes as a longitude (speed and slowness) and a latitude (affective capacities). Ethology entails depicting these longitudes and latitudes, drawing a map of the embodied modes as they actualize from the virtual potential of the BwO. Ethology constitutes a major theoretical advance over McLuhan’s earlier probes, although one with many similarities to McLuhan’s thinking. Primarily, the advance occurs because the theory of modes and affect is more open to flexibility, becoming and the acknowledgment of difference. Instead of a pre-formed body with certain sensory ratios, beginning with the BwO allows scholars to recognize a wide variety of virtual potentials whose becomings offer more possibilities than McLuhan’s static understandings entail. Furthermore, conceiving modes as manners of relating speed and slowness and of relating affective capacities backs away from McLuhan’s more general and totalizing claims about media forms, such as television or cinema, considered as whole and static. Doing so allows us to better understand the transformations of media over time, as television becomes cinema and cinema becomes television and they both become something else. This is especially important in a digital age, which has created the capacity for any content to be translated across a wide variety of mediums. Ethology also entails a final advance over McLuhan because it offers not only a prescription for a scholarly approach but also an outline of an ethics.
Ethics and Power in McLuhan and Deleuze
The practice of ethology is based in an ethics that offers guidelines for becomings in process, not a morality that proffers proscriptions from above. According to Spinoza, an ethical becoming or mode is one that produces joy and heals whereas an unethical becoming produces sadness and illness. Similar modes (say, drug use or a certain sexual practice) may be ethical for some and unethical for others depending on the situation, illustrating why ethology constitutes an ethics and not a morality. Besides providing this criterion for discerning ethical versus unethical modes, the ontology behind ethology represents an ethical gesture, in part by rejecting the imperializing and totalizing gesture of morality. As Deleuze elucidates:
Spinoza’s ethics has nothing to do with a morality; he conceives it as an ethology, that is, as a composition of fast and slow speeds, of capacities for affecting and being affected on this plane of immanence. This is why Spinoza calls out to us in the way he does; you do not know beforehand what good or bad you are capable of; you do not know beforehand what a body or a mind can do, in a given encounter, a given arrangement, a given combination.
By starting with an ‘I do not know’ about the body-mind instead of a confident and imperializing ontology of what the body-mind can do, ethology constitutes an ethical gesture because it remains open to difference and to the possibility of things becoming otherwise. Doing so also demands that the scholar begins in the middle, amidst embodied experience and its assemblages, rather than passing moral judgement and attempting, through force of word or often law, to make worldly actualities fit into those boxes. Ethology, then, offers guidance for a mode of living, one that begins in the middle and asks what new modes can be thought and produced which might spread love instead of hate, might heal instead of make ill, might produce happiness instead of sadness.
Thus ontology and ethics fuse into ethology in Deleuze in a way perhaps best described as an interology. The ethics of ethology is entirely Other-oriented, since it is based in an ontology that rests on percept and affect. Per this ontology, the becomings of humankind are no longer finished but radically open-ended. It is a matter of what assemblages or environments take him up, what assemblages or environments he is capable of being taken up by, what Others – human or non-human – he enters into composition with. To live in an intensive mode means to have a good encounter, to compose a good interality, to be taken up by a good assemblage, to unblock life so the mind-body can do what it is capable of doing – affecting and being affected, so it can enter into composition with what suits its nature, or what affirms its elan vital (roughly, life force).
What makes humankind virtuous is precisely our radical unfinishedness, our affinity, affectability, versatility, empower-ability, extendibility, composition-ability, or assemble-ability. Horseman-armour-lance-entourage-land makes a knight assemblage, which embodies the social posture of chivalry and courtly love. Archer-bow-arrow-mark-air-distance-gravity forms either a Zen assemblage of self-cultivation and satori or an assemblage of hunting or belligerence. Fiddler-violin-serenade-night-window forms a courtship assemblage. To live an ethical life entails switching from a ‘to be’ mode to an ‘and… and… and…’ mode, that is to say, from a subject orientation to an assemblage orientation, from ontology to interology, from being to becoming.
When we imagine humans as machinic assemblages, ‘I’m watching TV’ no longer makes sense because TV is me at this moment. The person-remote-TV-couch assemblage is my mode of being, which means I’m not in another mode of being. When I multitask, e.g., when I drive a Penske on the super-highway while listening to the news on radio plus some music on MP3 and also having a phone conversation with someone who’s trying to follow the vehicle I’m operating, I’ve composed a busy and dangerous assemblage, and invented a schizophrenic mode of being, which is nothing like the mode I’m in when I’m meditating while washing dishes. Neither assemblage is evil but one is potentially bad and the other good. Bullfighting and petting your dog involve two very different assemblages (spectators form an important element of the former) and two very different modes of being, which should not be conflated as ‘interacting with animals’. The one catalyses the bull-becoming of the bullfighter, whereas the other catalyses the pet-becoming of the one who pets. This is not to deny the possibility of fighting the bull in a petting mode – bringing the two assemblages together makes possible a strange becoming. ‘I’ am capable of doing very different things depending on whether I’m in the ‘Penske and…’ assemblage or the dishes-water-meditation assemblage. As such, ‘I’ is more a function of the assemblage than the organizer of it.
While McLuhan occasionally seems concerned with ethics, as in the earlier quotation about becoming servomechanisms of media, or in his promotion of art and games as anti-environments indispensable for awareness and survival, more often than not McLuhan’s project remains descriptive, assiduously avoiding issues of power. The chapter on TV in Understanding Media (1964), for example, speaks of TV as a new cultural ground that reconfigures people’s tastes. It is not, however, interested in the fact that many parents leave their children in front of the TV set not by choice but out of economic necessity. If there is an ethics in McLuhan, for the most part it remains implicit and ambivalent, to be derived by the readers for themselves. The telos of McLuhan’s explorations is laid bare in the title, Laws of Media: The New Science (1988). The subtitle gives it away; although McLuhan subtilizes what he means by ‘science’ so it is synonymous with what Vico means by ‘poetic wisdom’, it is nevertheless a descriptive enterprise, rather than an explicitly ethical and creative one. If there is an ethics in McLuhan, it is a power-blind ethics that ends with understanding – like the artist, the critic’s role is to promote understanding so we can change course. Thus the scholar’s task becomes merely descriptive, an attempt to promote understanding. Yet this purely descriptive enterprise is surprisingly humanist and elides power, since it fails to attend to the assemblages of understanding and description, including the scholar’s role in regimes of power. In short, McLuhan fails to comprehend power-knowledge as a machinic assemblage, one that enables and disables certain forms of understanding, description and awareness.
This descriptive project leaves McLuhan without a politics or ethics, unlike Deleuze, who makes these concerns front and center. For instance, McLuhan (1964: 199) contends that ‘the Gutenberg technology and literacy… created the first classless society in the world’. For ‘[t]he highest income cannot liberate a North American from his “middle-class” life. The lowest income gives everybody a considerable piece of the same middle-class existence’ (McLuhan 1964: 199). The way McLuhan (1969: 140) sees it, Marxists were thus wholly misguided because they did not understand the media environment: ‘The Marxists spent their lives trying to promote a theory after the reality had been achieved. What they called the class struggle was a spectre of the old feudalism in their rear-view mirror’. Sociological realities of the time and of the present day strictly deny McLuhan’s claims, and ignoring the realities of economic inequality and oppression leaves any critical philosophy without an ethical and political grounding. Consider, in contrast, Deleuze’s treatment of class and capitalism. Deleuze, with Guattari, ponders how people can desire fascism, and calls for a close examination of power in connection with desire, thus extending Marx’s work. In A Thousand Plateaus (1987), while promoting nomadism as an ethical posture for the multitudes, he and Guattari also suggest that capitalism itself has operated as a nomad war machine that betrays and dominates society. In Anti-Oedipus, just before the extended quotation above with the reference to McLuhan, Deleuze and Guattari (1983: 240) point out: ‘Capitalism is profoundly illiterate’. Following a non-linear, acoustic, disorganized organizational pattern, capitalism has made of the world a smooth space for itself, a control society for the multitudes, and a miserable place for millions of people.
In short, Deleuze is directly concerned with power and ethics, and this concern represents his final advance over McLuhan, one that is necessary to make any revived media ecological scholarship critical and relevant to the challenges facing our world. Deleuze’s concern with power is evident in the tonality of his concepts, such as striated space, smooth space and the control society, as distinguished from McLuhan’s descriptive, apolitical terms, such as visual space, acoustic space and the global village. McLuhan describes visual and acoustic spaces as products of media, whereas Deleuze understands them as elements of assemblages that can always break down or reverse into the opposite. While McLuhan says the phonetic alphabet and print media create visual space, Deleuze recognizes that ‘visual space’ can come in a wide variety of assemblages, or arrangements of power. This notion of McLuhan’s therefore makes a ‘badly analysed composite’ since it is too homogenized, too inattentive to the multifaceted, heterogeneous assemblages of media and power. Thus Deleuze suggests that we use the analytically more rigorous ‘striated’ and ‘smooth’ spaces, the implication being that a visual space can be striated or smooth depending on the actual assemblage. For example, McLuhan would say that the cityscape of Manhattan, having been rationally laid out, makes a visual space, and that electronic media turn it into an acoustic space, an echo chamber, Times Square being an arch example. Deleuze would say that the grid-like cityscape of Manhattan enacts state power and makes a striated space typical of a disciplinary society, which is behind us. Disney World better represents the new spaces of control society, giving people the semblance of freedom within a framework in which space and time are regulated in a far more intricate way, making a striated space typical of a control society.
In short, McLuhan tends to depict social changes as exclusively the product of media, leaving little influence for the changes that come with different power dynamics. Visual space does not necessarily create unawareness and oppression any more than acoustic space creates community. Instead, for Deleuze, any becoming is always a risky operation. What promises to be a breakthrough may turn out to be a breakdown. A line of becoming may end up being a line of micro-fascism. These are concerns starkly missing in McLuhan. Although McLuhan recognizes, in Laws of Media, that media often reverse into their opposites, his inattention to the interaction of media and power leads him to downplay these possibilities. Likewise, McLuhan seems unconcerned with alternatives that might develop new ways of thinking or new tactics for resistance. If power is the result of changes in media, then attacking or resisting power is a misguided effort. Seeing power and media as a more complex assemblage, following Deleuze, entails a different scholarly and ethical task.
Deleuze’s analytics of power encompasses and entails a poetics of active power, which is synonymous with the techné of life, or the practice of the self as an ego-less, non-organic, machinic assemblage. It inspires us to imagine resistance as none other than the affirmation of elan vital, the mapping of lines of flight, the invention of new possibilities of life – an active operation that betokens the innocence of becoming. As such, resistance precedes (reactive) power (which striates the life world and blocks becoming). It is self-defeating to imagine resistance as derivative of, as a reaction against, (reactive) power. The telos of resistance is the free spirit, one that inhabits striated spaces in an imperceptible, smooth mode, that accomplishes becomings regardless of control, that opens up conditions for different becomings.
As a result, the (ethical) task for the scholar radically shifts in the move from McLuhan to Deleuze. McLuhan primarily envisions his role as providing a descriptive account of the media environment in order to raise awareness that might provide a better social map, a role he compares to that of the artist. For Deleuze, in contrast, the scholarly task does not end with mapping but must include a creative gesture, must seek to create new concepts for thinking new modes of existence. Basically, he asks: in a late capitalist or a control society, how do we make becoming otherwise possible? In contrast, McLuhan’s faith in awareness as solution smacks of a naïve humanism impossible to adopt in Deleuze’s interology and ironic given McLuhan’s simultaneous call for ecological thinking. The ethical and hence scholarly task remains not just to analyse media or machinic environments so that we have a better understanding, since this very notion of understanding elides what mode of understanding, what assemblage of knowledge this understanding finds uptake in. As Deleuze surely ascertained from Foucault, knowledge is only ever existent in an assemblage with power, power/knowledge. Thus following a Deleuzian ethology, the ethical and scholarly task becomes creative as well as descriptive – to invent new concepts fruitful for different modes and different assemblages. Much of McLuhan’s work contributes to this creative fruiting of concepts, yet by ending with the descriptive and downplaying the issues of power and ethics, McLuhan remains an insufficient precursor to Deleuze’s more robust and elaborated alternative.
Although scholarship on McLuhan and Deleuze has been proliferating, efforts to render visible the implicit resonances between the two are still scanty. This article has been called forth by this gap. We have suggested that Deleuze can be read as an heir of McLuhan, as likewise a theorist of media guided by ecological thinking. Yet our understanding is that Deleuze has been inspired by McLuhan but not constrained by him. Instead, Deleuze always transforms McLuhan’s insights even as he uses them. If McLuhan is poetic, provocative, and full of potentials, Deleuze allows those potentials to come to fruition with his rigorous theorizing. Among all the resonances that can possibly be articulated, we have foregrounded three closely interconnected ones that are restated below.
First, McLuhan’s understanding of media as extensions of humans often treads near the trap of technological determinism, despite McLuhan’s more complex understanding of media as ground and formal cause. McLuhan’s suggestive, heuristic style of writing only aggravates the situation. Deleuze absorbs the thrust of McLuhan’s understanding but completely reverses the point of departure. His notion of machinic assemblage is neither human-centered nor technology-centered, giving no determining priority to either. Rather, the assemblage comes first. Second, whereas McLuhan coaches us to attend to percept and affect and shifts in people’s taste by bringing into focus the human-technology interface, Deleuze enables us to home in on issues of desire since his machinic assemblage is also a desiring machine, a plane of immanence in which desire is produced and circulated. McLuhan lacks the rigorous theorizing of desire and affect developed in Deleuze, based in the uplifting Spinozan notion of affect as a matter of affecting and being affected.
Third, Deleuze’s notion of assemblage also entails an ethics – one that is based on ethology and interology – and an understanding of and posture towards power. To be ethical means to care about what assemblages to enter into, to organize one’s encounters, to map out new possibilities of life, to enhance one’s capacity to affect and be affected. In McLuhan’s work, the concern with ethics is implicit and ambivalent, and issues of power are elided, as required by his pseudo-scientific and descriptive scholarly endeavour. In contrast, Deleuze calls the scholar to the creative task of creating new concepts for thinking new, healthy modes of life.
Lastly, we’d like to reiterate that Deleuze’s ontology is an open ontology, an interology, one that befits the radically unfinished form of life known as humans. If the point of philosophizing is to contribute adequate concepts, then Deleuze has transformed a whole volley of McLuhan’s suggestive, ethically ambivalent probes into concepts and ethical precepts useful for understanding – and perhaps changing – the mediated environment in which we all float.
1. In Anti-Oedipus, Deleuze and Guattari cite favorably McLuhan’s insight that the content of any medium is another medium (Deleuze and Guattari 1983: 240–41). For references to Mumford, see Deleuze and Guattari 1987: 428, 457. For references to Virilio, see Deleuze and Guattari 1987: 231, 345, 395–96, 480, 520–21.
Cavell, R. (2003), McLuhan in Space: A Cultural Geography, Toronto: University of Toronto Press.
Deleuze, G. (1986), Cinema 1: The Movement-Image, Minneapolis: University of Minnesota Press.
—— (1988), Spinoza: Practical Philosophy, San Francisco: City Lights Books.
—— (1989), Cinema 2: The Time-Image, Minneapolis: University of Minnesota Press.
—— (1990), The Logic of Sense, New York: Columbia University Press.
—— (1995), Negotiations: 1972–1990, New York: Columbia University Press.
—— (2005), Francis Bacon: The Logic of Sensation, Minneapolis: University of Minnesota Press.
Deleuze, G. and Guattari, F. (1983), Anti-Oedipus: Capitalism and Schizophrenia, Minneapolis: University of Minnesota Press.
—— (1987), A Thousand Plateaus: Capitalism and Schizophrenia, Minneapolis: University of Minnesota Press.
Deleuze, G. and Parnet, C. (2002), Dialogues II, London: Continuum.
Jamieson, K. H. (1990), Eloquence in an Electronic Age: The Transformation of Political Speechmaking, New York: Oxford University Press.
Jenkins, E. (2014), Special Affects: Cinema, Animation, and the Translation of Consumer Culture, Edinburgh: Edinburgh University Press.
Jenkins, E. and Zhang, P. (2016), ‘Deleuze the media ecologist? Extensions of and advances on McLuhan’, Explorations in Media Ecology, 15: 1, pp. 55–72, doi: 10.1386/eme.15.1.55_1.
Massumi, B. (1992), A User’s Guide to Capitalism and Schizophrenia: Deviations from Deleuze and Guattari, Cambridge: The MIT Press.
—— (2002), Parables for the Virtual: Movement, Affect, Sensation, Durham: Duke University Press.
May, T. (2005), Gilles Deleuze: An Introduction, New York: Cambridge University Press.
Eric S. Jenkins is Assistant Professor of Communication at the University of Cincinnati. He is author of Special Affects: Cinema, Animation, and the Translation of Consumer Culture as well as numerous articles in national and international journals. His research focuses on the interaction of media and consumerism.
Contact: 144-A McMicken Hall, 2700 Campus Way, Cincinnati, OH 45219,USA
Peter Zhang is Associate Professor of Communication Studies at Grand Valley State University. He is author of a series of articles on media ecology, rhetoric, Deleuze, Zen and interality. He has guest edited a special section of China Media Research and guest co-edited two special sections of EME. Currently he is spearheading a second collective project on interality.
Contact: LSH 290, 1 Campus Dr, Allendale, MI 49401, USA.
Eric Jenkins and Peter Zhang have asserted their right under the Copyright, Designs and Patents Act, 1988, to be identified as the authors of this work in the format that was submitted to Intellect Ltd.
by Steven Craig Hickman
I’ve been reading Niklas Luhmann’s works for a couple of years now and have slowly incorporated many of his theoretical concepts into my own sociological perspective. Along with Zygmunt Bauman, I find Luhmann’s theoretical framework one of the most intriguing in that long tradition stemming from Talcott Parsons, one of the world’s most influential social systems theorists. Of course, Luhmann in later years would oppose his own conceptual framework to that of his early teacher and friend. Against many sociologists, especially those like Jürgen Habermas who developed and reduced their conceptual frameworks to human-centered theories and practices, Luhmann developed a theory of Society in which communication was central. He did not exclude humans per se, but saw that within society humans had over time invented systems of dissemination that did not require the presence of the human element as part of their disseminative practices. We live amid impersonal systems that are not human but machinic entities, ones that communicate among themselves more readily than with us. Instead of stratification and normative theories codifying our personal relations within society, Luhmann advocated a functionalism that dealt with these impersonal systems on their own terms rather than reducing them to outdated theories based on morality and normative practices. For Luhmann, we continue to reduce the social to an outdated political and moral dimension that no longer understands the problems of our current predicament. In fact these sociologists do not even know what the problem is, or how to ask the right questions, much less what questions to ask.
Luhmann was one of the first, and definitely not the last, sociologists to decenter the human from society. The notion of the social as grounded in the human actor was replaced by communication itself. Luhmann himself saw his theories as forming a new Trojan horse: “It had always been clear to me that a thoroughly constructed conceptual theory of society would be much more radical and much more discomforting in its effects than narrowly focused criticisms—criticisms of capitalism for instance—could ever imagine.” His reception in the North American academy has been underwhelming, according to Moeller, because he couched his terminology in the discourse of Habermas and the sociologists of his day in Germany. Over and over, Foucault spoke of the conformity to discourse that scholars were forced to inhabit in order to be read as legitimate sources of scholarship. Yet, as Moeller tells it, Luhmann hoped to hide his radical concepts in plain sight even while writing within the discourse of his day: “Luhmann ascribes to his theory the “political effect of a Trojan horse.” – Luhmann openly admits to his attempt to smuggle into social theory, hidden in his writings, certain contents that could demolish and replace dominating self-descriptions, not only of social theory itself, but of society at large.” (Moeller, KL 223)
Recently I’ve been reading Luhmann’s The Reality of Mass Media, which was published late in his life in 1996 (Luhmann died in 1998). I’ve yet to work through his magnum opus, Theory of Society, of which only two volumes – the one on society (in two parts) and the one on religion – remain of a project left unfinished. But this shorter book introduces many of the basic themes of Luhmann’s theoretical framework: the functional differentiation of modern society; the differing formations – law, religion, mass media, etc. – that constitute the communicative operations which enable the differentiation and operational closure of the system in question; reflexive organization – autopoiesis (Maturana); and second-order observation – or, the observation of observation. What interests me in this work is how it touches base with current media theory from McLuhan, Innis, and others, as well as the specific notions surrounding his use of what he termed ‘cognitive constructivism’. Obviously the notion of the mass media as the purveyor of reality for society hits on the traditions of propaganda, public relations, social constructivism, and all those Kantian notions and traditions from Vico onward that developed theories of how societies invent reality through various systems, myths, ideologies, etc.
Luhmann considered himself a radical “anti-humanist”, not in the sense of some Nietzschean overreaching of the human as an Übermensch, but rather in the form of an inhumanism that decenters human agency from its primal place in the cosmos as something exceptional, distinct, superior to other creatures, and instead situates the human back within the natural realm on an equal footing with all beings on this planet and in the cosmos. As Moeller observes, “a radically antihumanist theory tries to explain why anthropocentrism—having been abolished in cosmology, biology, and psychology—now has to be abolished in social theory. Once this abolition has taken place, there is not much room left for traditional philosophical enquiries of a humanistic sort” (Moeller, 6).
For him, theory develops both anti-foundational and operationally closed systems that are also open to the observation of observation; or, second-order reflexivity that takes into account nontrivial or complex systems, which, being in a system-environment relation, are open to mutual resonance, perturbation, and irritation (Moeller, 7). This brings us to his use of the concept of distinction, which for Luhmann was neither a principle, nor an objective essence, nor even a final formula (telos), but was instead a “guiding difference which still leaves open the question as to how the system will describe its own identity; and leaves it open also inasmuch as there can be several views on the matter, without the ‘contexturality’ of self-description hindering the system in its operating” (Luhmann, 17).
I have to admit it was through Levi R. Bryant, in his The Democracy of Objects and on his blog Larval Subjects, that I first heard of Niklas Luhmann. In Chapter 4, “The Interior of Objects”, of that book Levi goes into detail about Luhmann and his theories as they relate to his own version of Object-Oriented Ontology – or what he now terms Machine Ontology, or Onto-Cartography. In one of his blog posts, “Laruelle and Luhmann”, Levi makes an acute observation on Luhmann’s conception and use of distinction, comparing it with Laruelle’s notion of distinction as decision, in which every philosophy, as opposed to non-philosophy, starts with a decision “that allows it to observe the world philosophically”. Non-philosophy, instead of starting with a decision, observes the distinctions used by philosophers in order to understand how they actually structure the world the philosopher describes. Be that as it may, what Luhmann’s distinction does, according to Levi, is that it “allows an observer to observe a marked state as the blind spot of the observer. Every observation implies a blind spot, a withdrawn distinction from which indications are made, that is not visible to the observer that observes. The eye cannot see itself seeing.” Take for example my friend R. Scott Bakker’s notion of the Blind Brain Theory, in which we are blind to the very processes that shape and form our thoughts and reality, yet we observe in a fashion that neglects this fact, never knowing of this blindness, and believe we have all the information we need to understand and communicate effectively about ourselves and reality. What intrigues both Laruelle and Luhmann is this second-order reflection of the observer observing the observer. As Levi explains it:
“Observing the observer” consists in investigating how observers draw distinctions to bring a world into relief and make indications. Were, for example, Luhmann to investigate philosophy from a “sociological” perspective, his aim wouldn’t be to determine whether Deleuze or Rawls or Habermas, etc., was right. Rather, he would investigate the distinctions they draw to bring the world into relief in particular ways unique to their philosophy. In other words, he would investigate the various “decisional structures” upon which these various ways of observing are based.
In another work on Luhmann, Niklas Luhmann’s Modernity: The Paradoxes of Differentiation, in a chapter titled “Injecting Noise into the System”, Rasch notes that contingency, a concept Luhmann used often, meant quite simply the “fact that things could be otherwise than they are”; and things can be otherwise than they are because “things” are the result of selection (Rasch, 52).3 This notion of selection is, if not equivalent to making a distinction, at least necessarily a qualification of that concept. All systems are observable because all systems are formed by distinctions, and these distinctions operate on the elements in a system whether that system is conscious or not, because they operate by distinctions and can decide or choose between the alternatives those distinctions establish (Rasch, 52). The information generated by the system being observed is contingent because other distinctions could have been made, producing different information based on choice or exclusion; it is also based on an enforced selection due to time constraints, which affords a view onto the complexity of the system being observed, thereby producing meaning (Rasch, 53). The chain of complex information observed within this system is the communication of that system.
As Luhmann would observe, the moment a system communicates its information, that information becomes non-information and cannot be reproduced in the same way again. He makes the observation that in our modern hypermedia infotainment society the observation of news events now occurs simultaneously with the events themselves. In an accelerated society one never knows what the causal order is: did the event produce the communication, or did the communication produce the event? It’s as if the future were being produced in a reality machine that communicates our information simultaneously and for all time. The desuturing of history from effective communication is rendering our society helpless in the face of events. More and more we have neither the time to reflect nor the ability to observe; instead we let these autonomous systems do our thinking for us while we sift through the noise of non-information as if it were our reality.
Corporate media in their search for the newsworthy end up generating reproductions of future uncertainties – contrary to all evidence of continuity in the world we know from daily perceptions (Luhmann, 35). Instead of news we get daily reports, repeatable sound bites of events that can be rewired to meet the particular ideological needs of the reporters as they convey their genealogy of non-informational blips as if it were news of import. The redundancy of non-informational reports is fed into the stream through a series of categories – sports, celebrities, local and national events, politics, finance, etc. – as it reweaves the reality stories of the day through its invisible ideological conveyor belt as if for the very first time. As Luhmann tells us, the “systems coding and programming, specialized towards selection of information, causes suspicion to arise almost of its own accord that there are background motives at work” (Luhmann, 38). Corporate media thrives on suspicion, on the paranoia it generates through gender, race, political, religious, national, and global encodings/decodings selected not for their informational content but for their contingent production of future fears.
The smiley faces of corporate reporters provide us with communications that generate a pleasing appearance by which the individuals who cross the media threshold conceal themselves from others and therefore ultimately from themselves. The façade of truth bearing witness becomes the truth of a witness bearing the façade. “The mass media seem simultaneously to nurture and to undermine their own credibility. They ‘deconstruct’ themselves, since they reproduce the constant contradiction of their constative and their performative textual and image components with their own operations” (Luhmann, 39). The news media, instead of giving us the world as it is, provide us with new realities supported by the endless operations of selective ideological algorithms that filter the vast datamix of information into non-informational contexts, presented to the unsuspecting viewer’s eyes or ears as if it were immediate news rather than the façade of lost information. In a final insight Luhmann tells us that no autopoietic system can do away with itself. And in this, too, we have confirmation that we are hoodwinked by a specific problematic related to a system’s code. As he says, the “system could respond with its everyday ways of operating to suspicions of untruthfulness, but not to suspicions of manipulation” (Luhmann, 41). There is always that blind spot in your mind that sees the ideological subterfuge but never notices that you were complicit in feeding the very system that now entraps you. Being blind to its very nature, you assume valid information when in fact all you have is the non-informational blips of a cultural matrix out of control. The datahives of capitalism churn away collecting neither news nor truth, but rather the informational bits that make up your onlife for future modes of economic tradecraft, even as your panic flesh is made obsolescent just like all other commoditized bits in the lightbins of our global corporatist state(s).
Even the Snowdens of our world are but a trace of a trace, lost in the paranoiac ocean of non-information. The secret worlds below the datahive hum away like the remembrance of bees that no longer pollinate. Living in a blipworld, we hide from ourselves in endless chatter, noise of noise that means only one thing: the death of our humanity. As Luhmann asks: “How is it possible to accept information about the world and about society as information about reality when one knows how it is produced?” (Luhmann, 122) Knowing that we live in a constructed tale narrated not by us but by the machinic code of machinic minds, what is left of reality, anyway?
1. Moeller, Hans-Georg (2011-11-29). The Radical Luhmann (Kindle Locations 216-218). Columbia University Press. Kindle Edition.
2. Niklas Luhmann, The Reality of Mass Media (Stanford University Press, 2000).
3. William Rasch, Niklas Luhmann’s Modernity: The Paradoxes of Differentiation (Stanford University Press, 2000).
By Pit Schultz
The current Facebook debate is a chance to get your act together and get organized – just a little.
What does it mean to get dis-, re-, co-organized, for the worse or the better? A further balkanisation, a migration to the cryptoanarchist waste of resources, blockchain-nations as a refresh of the independent-cyberspace myth, or the various academic art conferences giving a place to certain representative counter-movements in order to map and neutralize them? The culture war is probably a trap, enabling single career paths instead of lifting the standards on a larger scale. This applies especially to the branch of critical media art, which has buffered criticality away from the rest of the art world for too long. While often fruitful and interesting on the lower layers, it gets thinner and weaker the higher you go. The neoliberal call for self-diversification is part of a parapolitical neutralisation effort to keep resistant forces away from where they could do real harm and lead to systemic change.
Synchronize and Change Facebook from Within
It is understandable to leave Facebook because it is dull, depressing, boring – but the same probably goes for your workplace, the compromise you had to make to rent an affordable flat, the places you need to go shopping or studying. Even if there are alternatives in the physical, urban world, the ecosystem of myriad websites, Linux distros, apps you have never seen, and tracks you will never listen to is part of the long-tail myth of consumerist choice. Culturally, the unification under one media platform, such as the book or the internet, has been revolutionary in terms of “consciousness building”. Today you are told that somewhere else, with a different type of media, speech will be authentic and free again – just to stop you from waking up and stating the obvious.
The existence of pluralism and diversity depends on the conditions of the surroundings from which it derives. The current platformisation is adding new application layers on top of the web. One can dream of and fight for a niche in between, or fight for the change and opening of these platforms in terms of democratic design principles. Both approaches have pros and cons. Since we have no better political system architectures available, we could stick with embedded democracy and discuss its specifications, especially when looking at the complete lack of these features in today’s online infrastructure.
Democratisation or Exodus as Hegemonic Choice
To run away from Facebook headlong, and to leave it before being censored, kicked out or shut off, is ill-advised from a radical democratic point of view. Maybe it would be possible, in a Gramscian way, to doubt the absolutist public sphere that Facebook has erected. But looking at the alternatives – the municipal level of creative, digital and global cities; the balkanisation of cryptoanarchist blockchain-based currencies; the ghost towns of abandoned homepages in the dark net; the countless masculinist Linux projects that reinvented the wheel; as well as the various counter-platforms that clone and modify the UX of Facebook in one way or another – it turns out that they have proven to be dead-end devolutions. Compared to the mass-consumerist time waste of Facebook, they are still interesting tactical forms of excess. But because of their false promise of offering a strategy rather than just an individualist tactical sidestep, these outside positions are certainly not inherently better ones. Nor are they inherently bad – they’re just no solution. And they are certainly not politically or theoretically smarter than trying to change Facebook on Facebook.
From a media-theoretical point of view, #deletefacebook seems blind, since the deletion confirms that Facebook reduces you to an effect of the medium. You tacitly accept that you cannot change the channel from within the channel – being in it, debating it, critiquing it, protesting against it or subverting it, or taking any distanced meta-position from within the medium. From a political point of view, the spectrum of protest forms includes exercising the right to delete yourself (#loeschdich), and there are various existing channels to discuss strategic common goals in the aftermath of the CA scandal. So as not to confuse the means of change with the goal itself: we need to achieve more rights – more, or at least some, democratic freedom – to transform this powerful platform in an exemplary way. Instead of dispersing the platform into micropolitical niches, which ultimately risks neutralizing its potentials, we could form new brilliant alliances of productive alienations.
By taking a virtual outside position – which can be, e.g., eccentric, external, artistic or theoretical – one cannot neglect that even the most underprivileged and precarious existence will be impossible outside of today’s capitalist realism. Trying to escape the network effects of Facebook in an exemplary way will only be a symbolic move of self-alienation and ‘dark’ independence. Leaving Facebook weakens the resistance against the platform on this platform. As a representative function of the powerlessness and passivity in society, flirting with exodus, hipsterish escapism or cocooning into the digital diaspora will never make oneself less vulnerable, or free, without forming collective agencies of real resistance: running archives, sharing strange interests and hobbies, collecting and filtering what has been easily neglected or forgotten. We seek strategic alternatives for a planetary order, branching points and possible future forks. Of course they can be developed on or off the platform. But as an individual strategy, better not to fool yourself into believing that there’s a safe zone of a pirate utopia reserved for you in cyberspace.
Across the Stack
Let’s face the facts. Facebook provides an application layer on top of the web. It holds a horizontal monopoly position in web usage, as well as in the messenger and mobile social media space. Proposing free alternative solutions is politically and technically ill-advised: alternatives based on a free and open clone approach are rather doomed, due to network effects. Diaspora, Ello, Vero and various comparable approaches tried to operate next to Facebook, but there is not even a niche left for them. You need to expand into specialized social networks such as LinkedIn for CVs and job-related stuff, or Academia.edu, SoundCloud etc. to pick up the crumbs. In terms of surveillance features and data vampirism, the business models of these alternative platforms are rarely better than Facebook’s. To tap into new network effects, one must either go down the stack vertically, expanding into physical appliances (smart home) and/or protocols (p2p), or go up the stack (AI-driven agents). With a new abstraction layer – an open API as an opened infrastructure, which would lead to the neutralisation of Facebook – a democratized Facebook could serve as a backend, as in Tim Berners-Lee’s Solid project: a social media layer for autonomous bots and scrapers to build new stuff on top of. Lately even Google has given up on building a competitor to Facebook’s quasi-monopoly in instant messaging, going down and sideways in the mobile stack to use SMS and no-internet messaging as the base for a new app.
Interoperabilities, Design Change and Open Data Sheets
Even if regulation achieves some opening of the APIs, interoperability will not be able to replicate the complexities of one system onto another without a more radical approach. Interoperability as the lowest common denominator of regulatory efforts represents the misunderstanding of today’s infrastructures by the “other” culture, which still thinks in railway or telephony metaphors. Only in combination with a true open architecture that provides open source, open data and open datasheets will forks become possible, as well as federated spin-offs. Unfortunately it is precisely the API-based third-party ecosystem which will get restricted after the CA case. Interoperability as a means of capitalist appeasement is used in industries such as the military industry to make heterogeneous existing systems work together. In computer hardware, such as the PC, interoperability is achieved by open standards. In software architecture it is called legacy, and a migration or a replacement is often easier to achieve. The same goes for the keyword “algorithm”: algorithms do not make much sense without data structures.
The way our data, attention, and labor online are defined is presented much more in the specifications of data sets and the modelling of input-output relations than it is visible in the implementations of running code, i.e. the algorithms. By demanding access to the documentation of the design process, the iterative, agile but nevertheless non-democratic development cycles within social media companies become debatable. The strategies and use-case modelling will be much more revealing than merely fetishizing the code itself, or countering its assumed power with a fetishisation of jurisdiction. Both law and code are not the most effective levels of corporate control. In order to change the companies, one must first try to understand their organisational models, their design processes and their decision structures.
Demanding transparency for algorithms alone just exposes the limitations of digital literacy today, 30 years after the internet was introduced and 70 years after the computer was invented. So instead of talking about acceleration one must talk about education. Besides the documentation of a system’s architecture, its specifications and its source code, a full documentation of its “datasheets” – the metadata describing the data structures – is a necessary demand. Without them, interoperability as well as a regulatory approach to algorithms are ill-fated forms of cross-cultural intermediation, which rather establish bodies of agency that are open to lobbyism and obstruction of all kinds.
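To make the idea of an open “datasheet” concrete, here is a minimal, hypothetical sketch in Python. It is not any existing standard; the dataset name, field names and license wording are all illustrative assumptions. The point is simply that such metadata about data structures, rather than the filtering code itself, is where questions of personal data, provenance and licensing become inspectable.

```python
# A hypothetical machine-readable "datasheet" for a platform's training data.
# All names and values here are illustrative assumptions, not a real schema.
datasheet = {
    "name": "example-interaction-log",       # hypothetical dataset name
    "collected_by": "example-platform",
    "fields": [
        {"name": "user_id",   "type": "string", "pii": True},
        {"name": "timestamp", "type": "int64",  "unit": "ms since epoch"},
        {"name": "action",    "type": "enum",
         "values": ["like", "share", "comment"]},
    ],
    # cf. the data-GPL idea proposed below: free for non-commercial use
    "license": "non-commercial / scientific use only",
    "provenance": "logged client-side, aggregated daily",
}

def pii_fields(sheet):
    """Return the names of fields flagged as personally identifying."""
    return [f["name"] for f in sheet["fields"] if f.get("pii")]

print(pii_fields(datasheet))  # → ['user_id']
```

With even this much published, a regulator or researcher could ask which fields carry personal data and under what terms they may be reused, without ever seeing the ranking algorithms that consume them.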
Facebook combines a few known design patterns, such as the “portal” and the good old “collaborative filtering”, as we called it in the old nettime footer, while taking thread-based use cases known from Usenet and CompuServe to provide the annotated web that was promised in the ’90s. The problem behind the current privacy debacle is the property issue: ‘no commercial use without permission’ is as problematic as the complete lack of democratic functionality within the platform. A de-individuation of social media, as Benjamin Bratton has proposed, would be accompanied by new options to reset the priorities.
An individualist liberal movement comes with tactical sidesteps and prefers issues of smaller group identification over larger common goals. It is not that the bias of algorithms is not a problem, but it needs to be embedded in a larger analysis of power, including the data structures, the system architecture, flaws in laws and regulation, exploitative business models and the various forms of discriminatory bias in the engineering process, often due to a lack of governance in terms of transparency and accountability. On the other hand, a new universalism, even if proposed by Facebook, serves as a plane on which today’s planetary problems need to be understood and solved globally. The equality it offers, and a certain degree of neutrality in the interface layers, are a cultural potential to build on, to change from within, to modify and fork, and to politicise against. Dreaming about building your own little world on the side is fruitless if it is not connected to a larger political fight over the architecture of power and property mediated by the internet. Nevertheless, tacit forms of protest, as isolating, anti-productive or escapist as they might appear, should not be condemned, as the change of these platforms is a long-term goal which can be fought for on and off these platforms in various ways.
A more object-oriented social network is possible, where subjects group around issues, goals, projects and events, and the individual is no longer just the ultimate product at the center of the social graph. Imagine a combination of models known from Wikipedia and Mozilla: non-profit coops with more democratic functionality in iterative design cycles, such as bug trackers, eternal logs, and transparency of documentation. Democratic design principles that have long been known need to be formulated, discussed and implemented. The discourse of regulatory law, as well as ethical commissions, will not prevent the next levels of alienation, surveillance and oppression that are coming with machine learning and big-data-driven AI. Economic inequality and the property relation should be the first common issue beyond all minority-based struggles, to connect various fights and not submit to the framings and neutralising offers of liberalism.
Lately it has gone quiet around the defense of the commons. The liberalisation of open data has led to a pipeline of structured data going directly from the foundations of Wikipedia to Google’s knowledge graph and DeepMind. Wikidata, a project funded by Google, is not combined with a new license which would compensate for the billions of dollars’ worth of human labour now used to train AI and to structure and enhance search results. The social factory of Facebook, as well as Amazon, Google and every other large commercial online platform, will turn to a model of commodifying and monetizing data by feeding it into value-extraction methods run by machine-learning algorithms. The underlying proprietization of data is the central strategic point to attack.
The French AI proposal is a great chance for a defense of open data, and not just along the values of Diderot and d’Alembert. All training data for machine learning should be put under a new kind of data-GPL license which is free for non-commercial or scientific use, but which forces companies that make a profit to recompense the commons, probably using blockchain methods, and to become accountable through reproducible training runs, given the current lack of methods for debugging and controlling AI.
Press Pause, Get Organized and Strike!
It was a few years ago now that Tiziana Terranova proposed the Red Stack. Burak Arikan has proposed a formalisation of unpaid online labour. Tommaso Tozzi proposed the netstrike in 1995. Richard Barbrook and I wrote a manifesto for the digital artisan. There are plenty of people who are concerned with and interested in possible and exemplary changes to Facebook here and now: theoreticians, activists, journalists, artists… Until now there has hardly been a place where they join, filter and exchange relevant texts and get a little more self-organized.
A strike or #facebreak might still be a valid form of protest, of demanding forms of governance on Facebook, and a manifestation of not wanting to be governed in such a way. As proposed above, I will suspend my account from 25 May to 1 June 2018.
In the meantime let’s work on a list of demands – democratic design changes – to Facebook and other platforms.
Let’s call it #non-facebook.
Walter Benjamin has a reputation as a sophisticated reader of literary texts. But perhaps his media theory is not quite so elaborate. Here I shall attempt to boil it down in a very instrumental way. The question at this juncture might be less what we owe this no longer neglected figure but what he can do for us.
Benjamin thought that there were moments when a fragment of the past could speak directly to the present, but only when there was a certain alignment of the political and historical situation of the present that might resonate with that fragment. Applying this line of thought to Benjamin himself, we can wonder why he appeared to speak directly to critics of the late twentieth century, and whether he still speaks to us today.
Perhaps the connection then was that he seemed to speak from, and speak to, an era that had recently experienced a political defeat. Just as the tide turned against the labor movement in the interwar period, so too by the late seventies the new left seemed exhausted.
He appeared then as a figure both innocent and tragic. He had no real part in interwar politics, and committed suicide fleeing the Nazis. He could be read as offering a sort of will-to-power for a by then powerless cultural and political movement, which thought of itself as living through dark times, awaiting the irruption of the messianic time alongside it in which the dark times might be redeemed. He was a totem for a kind of quiet endurance, a gathering of fragments in the dark for a time to come. But perhaps his time no longer connects to our time in quite the same way. And perhaps it does Benjamin no favors to make him a canonic figure, his ideas reduced to just another piece of the reigning doxa of the humanities.
As a media theorist, Benjamin’s contributions are fragmentary and scattered. They might be organized around the following topics: art, politics, history, technology and the unconscious. Here I will draw on the useful one-volume collection Walter Benjamin, The Work of Art in the Age of Technological Reproducibility and Other Writings on Media (Harvard University Press, 2008).
Benjamin had already started to think art historically rather than stylistically before his engagement with Marxism. Here the work of the art historian Alois Riegl was probably decisive. Formal aesthetic problems are of no particular interest in themselves. “The dialectical approach… has absolutely no use for such rigid, isolated things as work, novel, book. It has to insert them into the living social contexts.” (80) Attention shifts to practices. The reception of the work by its contemporaries is part of its historical effect on its later critics as well. Reception may vary by class, Benjamin notes in passing, anticipating Raymond Williams.
Not the least significant historical fact about art was the emergence of forms of reproducibility far more extensive than the duplication of images on coins known since the Greeks. The project for modern art history was thus an “attempt to determine the effect of the work of art once its power of consecration has been eliminated.” (56) The eternal as a theme in art – in western art at least – was linked to the impossibility of reproducing the work. Fine art practices thus have to be thought in terms of the historical situation in which they appear alongside other practices, in particular new forms of reproducibility.
Central to thinking art then is an intersection of technical and historical forces, and hence the question of how art is made: “the vital, fundamental advances in art are a matter neither of new content nor of new forms – the technological revolution takes precedence over both.” (329) Benjamin draws our attention to the uneven history by which the mechanical intervenes in its production. So while the piano is to the camera as the violin is to painting, reproducibility has specific histories in regard to specific media forms.
The question of technical reproduction brings up the wider concept of technology itself. Here Benjamin provides at a stroke a key change of point of view: “… technology is the mastery not of nature but of the relation between nature and man.” (59) The technical relation is embedded in a social relation, making it a socio-technical relation. Here we step away from merely ontological accounts of technology towards a social and historical one, which nevertheless pays attention to the distinctiveness of the technical means of production.
Writing in the wake of the Great War, Benjamin is one of innumerable writers and artists who saw the potential of technology, including media technology, within the context of its enormous destructive power. The war was a sort of convulsion of the techno-social body, a sign of a failed attempt of our species-being to bring a new body under control.
But technology is not an exogenous force in Benjamin. He is neither a technological determinist nor for that matter a social constructivist. (Note, incidentally, how this hoary old way of framing this debate puts a finger on the scale of the social.) “Technology… is obviously not a purely scientific development. It is at the same time an historical one. As such, it forces an examination of the attempted positivistic and undialectical separation between the natural sciences and the humanities. The questions that humanity brings to nature are in part conditioned by the level of production.” (122) It’s a matter then of thinking the continuum of techno <-> social phenomena as instances of a larger historical praxis.
Benjamin also dissents from the optimistic belief in technology as progress that he thought had infected social democratic thinking in the inter-war years. “They misunderstood the destructive side of this development because they were alienated from the destructive side of dialectics. A prognosis was due, but failed to materialize. That failure sealed a process characteristic of the last century: the bungled reception of technology.”
This might be a useful lesson still for our own time. Benjamin does not want to retreat from thinking the technical, nor does he fetishize it. The technical changes to the forces of production have their destructive side, but that too can be taken in different ways. It is destructive in the sense made plain by the war; but it might be destructive in another sense too – destructive of the limits formed by the existing relations of production.
Tech change in media form brings political questions to the fore, not least because it usually disrupts the media producer’s relation to the means of production. “The technical revolutions – these are the fracture points in artistic development where political positions, exposed bit by bit, come to the surface. In every new technical revolution, the political position is transformed – as if on its own – from a deeply hidden element of art into a manifest one.” (329)
This understanding of technology frames Benjamin’s approach to the politics of both art and knowledge. Both are potentially the means by which our species-being might acquire a sensory perception and a conceptual grasp of its own socio-technical body and direct it towards its own emancipation. The task of the cultural worker is to contribute to such a project. It’s a matter of understanding and redeploying the mode of perception in its most developed form to such ends. The mode of perception of the early twentieth century appears as one in which “distraction and destruction [are] the subjective and objective sides, respectively, of one and the same process.” (56) That may be even more so today.
Benjamin practiced his own version of what I call low theory, in that the production of knowledge was not contemplative and was disinterested in the existing language games of the disciplines. Knowledge has to be communicated in an effective manner. “The task of real, effective presentation is just this: to liberate knowledge from the bounds of compartmentalized discipline and make it practical.” (61)
Both knowledge and art matter as part of the self-education of the working class. Benjamin thought the social democrats had made an error in diluting this labor point of view into a mere popular and populist pedagogy. “They believed that the same knowledge which secured the domination of the proletariat by the bourgeoisie would enable the proletariat to free itself from this domination.” (121) The project of an art and knowledge for liberation posed questions about the form of such art and knowledge, in which “real presentation banishes contemplation.” (62)
The artist or writer’s relationship to class is however a problematic one. Mere liberal sympathy for the ‘down-trodden’ is not enough, no matter how sincere: “a political tendency, however revolutionary it may seem, has a counter-revolutionary function so long as the writer feels his solidarity with the proletariat only in his attitudes, not as a producer.” (84) What matters is the relation to the means of production, not the attitude: “the place of the intellectual in the class struggle can be identified – or better, chosen – only on the basis of his position in the process of production.” (85)
Benjamin was already well aware that bourgeois cultural production can absorb revolutionary themes. The hack writer or artist is the one who might strike attitudes but does nothing to alienate the productive apparatus of culture from its rulers. “The solidarity of the specialist with the proletariat… can only be a mediated one.” (92) And so the politics of the cultural worker has to focus on those very means of mediation. Contrary to those who would absorb Benjamin into some genteel literary or fine art practice, he insists that “technical progress is for the author as producer the foundation of his political progress.” (87)
The task then is to be on the forefront of the development of the technical forces of cultural production, to make work that orients the working class to the historical task of making history consciously, and to overcome in the process the division of labor and cult of authorship of bourgeois culture. Particularly in a time of technical transition, the job is to seize all the means of making the perception and conception of the world possible and actionable: “photography and music, and whatever else occurs to you, are entering the growing, molten mass from which new forms are cast.” (88)
Moreover, Benjamin saw that the technical means were coming into being to make consumers of media into producers. Benjamin is ahead of his time on this point, but the times have surely overtaken him. The ‘prosumer’ – celebrated by Henry Jenkins – turned out to be as recuperable for the culture industry as the distracted spectator. The culture industry became the vulture industry, collecting a rent while we ‘produce’ entertainment for each other. Still, perhaps it’s a question of pushing still further, and really embracing Benjamin’s notion of the cultural producer as the engineer of a kind of cultural apparatus beyond the commodity form and the division of labor.
Reading Benjamin can easily lead to a fascination with the avant-garde arts of the early and mid 20th century, but surely this is to misunderstand what his project might really point to in our own times. Benjamin had a good eye for the leading work of his own time, which sat in interesting tension with his own antiquarian tendencies. His focus on the technical side of modern art came perhaps from László Moholy-Nagy and others who wrote for G: An Avant Garde Journal of Art, Architecture & Design.
He understood the significance of Dada, the post-war avant-garde that already grasped the melancholy fact that within existing relations of production, the forces of production could not be used to comprehend the social-historical totality. Dada insisted on a reality in fragments, a still-life with bus-tickets. “The whole thing was put in a frame. And thereby the public was shown: Look, your picture frame ruptures time; the tiniest authentic fragment of daily life says more than painting. Just as the bloody finger print of a murderer on the page of a book says more than the text.” (86) The picture-frame that ruptures time might be a good emblem for Benjamin’s whole approach.
His optimism about Soviet constructivism makes for poignant reading today. He celebrated Sergei Tretyakov, who wanted to be an operating rather than a merely informing writer. Too bad that the example Benjamin celebrates is from Stalin’s disastrous forced collectivization of agriculture. Tretyakov would be executed in 1937. Still, Gerald Raunig has more recently taken up the seemingly lost cause of Tretyakov.
Benjamin was however surely right to take an interest in what Soviet media had attempted up until Stalin’s purge of it. “To expose such audiences to film and radio constitutes one of the most grandiose mass psychological experiments ever undertaken in the gigantic laboratory that Russia has become.” (325) This was one of the great themes of the Soviet writer Andrei Platonov. Unfortunately Benjamin, like practically everyone else, was ignorant of Platonov’s work of the time.
Surrealism contributed much to Benjamin’s aesthetic, particularly its fascination with the convulsive forces lurking in the urban and the popular. Like the Surrealists, and contra Freud, Benjamin was interested in the dream as an agency rather than a symptom. Surrealist photography taught him to see the photograph in a particular way, in that its estrangement from the domestic yielded a free play for the eye to perceive the unconscious detail.
Surely the strongest influence on Benjamin as a critical theorist of media was the German playwright Bertolt Brecht and his demand that intellectuals not only supply but change the process of cultural production. Brecht’s epic theater was one of portraying situations rather than developing plots, using all the modern techniques of interruption, montage, and the laboratory, making use of elements of reality in experimental arrangements. Benjamin: “It is concerned less with filling the public with feelings, even seditious ones, than with alienating it in an enduring way, through thinking, from the conditions in which it lives.” (91)
One thing from Brecht that could have received a bit more attention in Benjamin is his practice of refunctioning, basically a version of what the Situationists later called détournement. This is the intentional refusal to accept that the work of art is anyone’s private property. Reproducibility has the capacity to abolish private property in at least one sphere: that of cultural production.
Here the ‘molten’ dissolution of forms of a purely aesthetic sort meets the more crucial issue of ownership of culture. But Benjamin was not always clear about the difference. “There were not always novels in the past, and there will not always have to be; there have not always been tragedies or great epics. Not always were the forms of commentary, translation, indeed even so-called plagiarism playthings in the margins of literature…” (82) Here he comes close to the Situationist position that all of culture is a commons, but he still tended to confuse formal innovation within media with challenges to its property form.
Benjamin also had his own idiosyncratic relation to past cultures. The range of artifacts from the past over which Benjamin’s attention wanders is a wide one. His was a genius of the fragment. He was alert to the internal tension in aesthetic objects. Those that particularly draw his attention are objects that are at once wish-images of the future but which, at the very moment of imagining a future, also reveal something archaic.
Famously, in writing about mid-19th century Paris, he took an interest in Parisian shopping arcades, with their displays of industrial luxury, lit by gas lighting and the weak sun penetrating their covered walkways through the iron-framed skylights. These provide one of the architectural forms for the imagination of the utopian thinker Charles Fourier, who thinks both forwards and backwards, mingling modern architecture with a primal image of class society.
Actually, one could dispute this reading. Jonathan Beecher, Fourier’s biographer, thinks the Louvre was his architectural inspiration. And Fourier’s utopia is hardly classless. On the contrary, he wanted a way to render the passion for distinction harmless. The method might be more interesting in this example than the result.
Benjamin is on firmer ground in relating the daguerreotype to the panorama. (Of which the Met has a fine example). The invention of photography expands commodity exchange by opening up the field of the image to it. Painting turns to color and scale to find employment. This chimes with McLuhan’s observation that it is when a medium becomes obsolete that it becomes Art, where the signature of the artist recovers the otherwise anonymous toil of the artisan as something worthy of private property: “The fetish of the art market is the master’s name.” (142)
Art favors regression to remote spheres that do not appear either technical or political. For example: “Genre painting documents the failed reception of technology… When the bourgeoisie lost the ability to conceive great plans for the future, it echoed the words of the aged Faust: ‘Linger awhile! Thou art so fair.’ In genre painting it captured and fixed the present moment, in order to be rid of the image of its future. Genre painting was an art which refused to know anything of history.” (161) So many other genres might fall under the same heading.
Benjamin also draws our attention to a class of writing that saw a significant rebirth closer to our own time, but which in the Paris of the mid-19th century was represented by Saint-Simon. This is that writing that sizes up the transformative power of technology and globalization but omits class conflict. These days it is called techno-utopianism. In distancing itself from such enthusiasms, critical theory has all too often made the reverse error: focusing on class or the commodity and omitting technological change altogether or repeating mere petit-bourgeois romantic quibbles about its erosion of the homely and familiar. The challenge with Benjamin is to think the tension between technical changes in the forces of production and the class conflicts in which it is enmeshed but to which it cannot be entirely reduced. In thinking the hidden structural aspects not just of consciousness but also of infrastructure, Benjamin channels Sigfried Giedion: “Construction plays the role of the subconscious.”
The life of the commodity is full of surprises. For instance, consider Benjamin’s intuition about how fashion was starting to work in the mid-19th century. “Fashion prescribes the ritual according to which the commodity fetish demands to be worshipped.” (102) Fashion places itself in opposition to the organic, and couples the living body to the inorganic world. This is the “sex appeal of the inorganic,” which Mario Perniola will later expand into a whole thesis. (102)
Fashion makes dead labor sexy. It points to a kind of value that is neither an exchange value nor a use value, but that lies in novelty itself – a hint at what Baudrillard will call sign value. “Just as fashion brings out the subtler distinctions of social standing, it keeps a particularly close watch over the coarser distinctions of class.” (138)
Another artifact that turned out to have a long life is the idea of the home and of interior decoration. The mid-19th century bourgeois was beginning to think the home as an entirely separate, even antithetical, place from the place of work, in comparison to the workshops of their artisanal predecessors. “The private individual, who in the office has to deal with reality, needs the domestic interior to sustain him in his illusions.” (103) One might wonder if in certain respects this distinction is now being undone.
The central thread of Benjamin’s work on Paris was supposed to be Baudelaire, who made Paris a subject for lyric poetry. It was a poetry of the urban wanderer, the celebrated flaneur, who for Benjamin had the gaze of the alienated man. The arcades and department stores used the flaneur as a kind of unpaid labor to sell goods. It’s a precursor to social media.
Both the flaneur and the facebooker are voluntary wanderers through the signage of commodified life, taking news of the latest marvels to their friends and acquaintances. The analogy can be extended. The flaneur, like today’s ‘creatives’, was not really looking to buy, but to sell. Benjamin’s image for this is the prostitute: the seller of the goods and the goods to be sold all at once.
The flaneur as bohemian, not really locatable in political or economic terms as bourgeois or proletarian, is a hint at the complexities of the question of class once the production of new information becomes a form of private property. Who is the class that produces, not use values in the form of exchange values, but sign values in the form of exchange values? Benjamin comes close to broaching this question of our times.
Benjamin offers a rather condensed formula for what he is looking at in his historical studies of wish-images. “Ambiguity is the appearance of dialectics in images, the law of dialectics at a standstill. This standstill is utopia and the dialectical image, therefore, dream image. Such an image is afforded by the commodity per se: as fetish.” (105)
The commodified image is a fragment of dead labor, hived-off from a process it obscures. This is the image as fetish, a part-thing standing in for the whole-process of social labor. And yet at the same time it cannot but bear the trace of its own estrangement. As fragment it is fetish, but as mark of the absence of a real totality it points in negative toward utopia.
The Paris Commune of 1871 put an end to a certain dream image, forward-looking yet archaic. It was no longer an attempt to complete the bourgeois revolution but to oppose it with a new social force. The proletariat emerged from the shadows of bourgeois leadership as an independent movement. The dialectic might move forward again.
But this was only one of two developments that characterize the later 19th century. The other is the technical development of the forces of production subsuming the old arts and crafts practices of cultural production. Aesthetics, like science before it, becomes modern, meaning of a piece with the development of capitalism as a whole.
Benjamin: “The development of the forces of production shattered the wish symbols of the previous century, even before the monuments representing them had collapsed. In the nineteenth century this development worked to emancipate the forms of construction from art, just as in the sixteenth century the sciences freed themselves from philosophy. A start is made with architecture as engineered construction. Then comes the reproduction of nature as photography. The creation of fantasy prepares to become practical as commercial art.” (109)
Out of the study of 19th century Paris, Benjamin develops a general view of historical work that might properly be called historical materialist. “Every epoch… not only dreams the one to follow but, in dreaming, precipitates its awakening. It bears its end within itself and unfolds it – as Hegel already noted – by cunning. With the destabilizing of the market economy, we begin to recognize the monuments of the bourgeoisie as ruins even before they have crumbled.” (109) Benjamin pays much less attention to an epoch’s ideas about itself than its unconscious production and reproduction of forms, be they conceptual or architectural. (Which incidentally is why I am more interested in container ports and server farms than the explicit discourse of ‘neoliberalism’ as a key to the age).
The historical-reconstructive task is not to restore a lost unity to the past, but rather to show its incompletion, to show how it implies a future development, and not at all consciously. Fragments from the past don’t lodge in a past totality but in constellation with fragments of the present. Benjamin: “history becomes the object of a construct whose locus is not empty time but rather the specific epoch, the specific life, the specific work. The historical materialist blasts the epoch out of its reified ‘historical continuity’ and thereby the life out of the epoch and the work out of the lifework. Yet this construct results in the simultaneous preservation and sublation of the lifework in the work, of the epoch in the lifework, and of course of history in the epoch.” (118)
“Historical materialism sees the work of the past as still uncompleted.” (124) The task is to find – in every sense – the openings of history. “To put to work an experience with history – a history that is originary for every present – is the task of historical materialism. The latter is directed toward a consciousness of the present which explodes the continuum of history.” (119)
The materials for historical work may not actually exist. In his essay on Eduard Fuchs, Benjamin draws attention to their shared passion for collecting, and for the collection as “the practical man’s answer to the aporias of theory.” (119) Whether Daumier’s images, erotica or children’s books, the collector feels the resonance in low forms.
Such material has to be thought at one and the same time in terms of what it promises and what it obscures. “Whatever the historical materialist surveys in art or science has, without exception, a lineage he cannot observe without horror. The products of art and science owe their existence not merely to the effort of the great geniuses who created them, but also, in one degree or another, to the anonymous toil of their contemporaries. There is no document of culture which is not at the same time a document of barbarism.” (124)
Benjamin sets a high standard for the sorts of political claim that cultural work of any kind might make, as it is always dependent on the labor of others. “It may augment the weight of the treasure accumulating on the back of humanity, but it does not provide the strength to shake off this burden so as to take control of it.” (125)
He did not share the optimism of inter-war social democracy, which still tended to see capitalism as a deterministic machine grinding on toward its own imminent end. Benjamin was far more attuned to the barbaric side that Engels had glimpsed in his walks around Manchester. This barbarism, taken over from bourgeois culture, infected the proletariat via repression with “masochistic and sadistic complexes.” (137)
Benjamin thought both art and literature from the point of view of the pressure put on them by modern technical means. “Script – having found, in the book, a refuge in which it can lead an autonomous existence – is pitilessly dragged out into the street by advertisements and subjected to the brutal heteronomies of economic chaos.” (171) A great poet might acknowledge rather than ignore this. “Mallarmé… was in the Coup de dés the first to incorporate the graphic tensions of advertising into the printed page.” (171) (A quite opposite reading, incidentally, to the recent and very interesting one offered by Quentin Meillassoux.)
Benjamin also grasped the role of the rise of administrative textuality in shaping its aesthetics: “the card index marks the conquest of three dimensional writing…. And today the book is already… an outdated mediation between two different filing systems.” (172) The modern poet needed to master statistics and technical drawing. “Literary competence is no longer founded on specialized training but is now based on polytechnical education, and thus becomes public property.” (360) One wonders what he would have thought about the computer-assisted distant reading of the digital humanities.
His more famous study is of photography and its transformation of the mode of perception, influenced by the remarkable photographer and activist Germaine Krull (subject of a recent retrospective). The first flowering of photography was before it was industrialized, and before it was art, which arises as a reaction to mechanical reproducibility. “The creative in photography is its capitulation to fashion.” (293)
Benjamin draws attention to pioneers such as Julia Margaret Cameron and David Octavius Hill. Looking at his image of the Newhaven fishwife, Benjamin “feels an irresistible compulsion to search such a picture for the tiny spark of contingency, the here and now, with which reality has, so to speak, seared through the image-character of the photograph…” (276) The camera is a tech for revealing the “optical unconscious.” (278)
Eugene Atget comes in for special consideration as the photographer who began the emancipation of object from aura. This is perhaps the most slippery – and maybe least useful – of Benjamin’s concepts. “What is aura, actually? A strange web of space and time: the unique appearance of a distance, no matter how close it may be. While at rest on a summer’s noon, to trace a range of mountains on the horizon, or a branch that throws its shadow on the observer, until the moment or the hour becomes part of the appearance – this is what it means to breathe the aura of these mountains, that branch. Now, ‘to bring things closer’ to us, or rather to the masses, is just as passionate an inclination in our day as the overcoming of whatever is unique in every situation by means of its reproduction.” (285) Aura is the “unique appearance of a distance,” at odds with transience and reproducibility.
Where other critical theorists put the stress on how commodity fetishism and the culture industry limit the ability of the spectator to see the world through modern media, Benjamin saw a more complex set of images and objects. He does not deny such constraints: “But it is precisely the purpose of the public opinion generated by the press to make the public incapable of judging…” (361) But he tries, rather, to think them dialectically, as also implicated in their own overcoming. Even a limited and limiting media cannot help pointing outside itself, while at the same time carrying the trace of its own limits.
Thus, in thinking about Mickey Mouse cartoons, Benjamin remarks that “In these films, mankind makes preparations to survive civilization.” (388) Disorder lurks just beyond the home, encouraging the viewer to return to the familiar. On the other hand, cinema can be a space in which the domestic environment can become visible and relatable to other spaces. “The cinema then exploded this entire prison-world with the dynamite of its fractions of a second, so that now we can take extended journeys of adventure between their widely scattered ruins.” (329)
The figure of the ruin in Benjamin goes back to his study of The Origins of German Tragic Drama, his habilitation thesis (which did not receive a pass). There the ruin is connected to allegory. “Allegories are, in the realm of thought, what ruins are in the realm of things.” (180) Allegory, in turn, implies that “Any person, any thing, any relationship can mean absolutely anything else. With this possibility, an annihilating but just verdict is pronounced on the profane world.” (175) The allegorical is central to Benjamin’s whole method (and taken up by many, from Jameson to Alex Galloway). “Through allegorical observation, then, the profane world is both elevated in rank and devalued.” (175)
Benjamin saw the baroque rather than the romantic as a worthy counterpoint to classicism, which had no sense of the fragmentary and disintegrating quality of the sensuous world. Nature appears to the baroque as over-ripeness and decay, an eternal transience. It is the classical ideal of eternal, pure and absolute forms or ideas in negative. From there, he removed the ideal double. It may creep back, at least among some interpreters, at various moments when Benjamin evokes the messianic, but the contemporary reader is encouraged to complete the struggle Benjamin was having with his various inheritances.
Historical thought and action is about seizing the fragment of the past that opens towards the present and might provide leverage towards a future in which it can never be restored as a part of a whole. Benjamin: “structure and detail are always historically charged.” (184) And they are never going to coincide in an integrated totality, either as a matter of aesthetics, or as a matter of historical process.
Allegory is also connected to the dream. On the other side of the thing or the image is not its ideal form but the swarming multiplicity of what it may mean or become. This is where the critic, like the poet, sets up shop, “in order to blaze a way into the heart of things abolished or superseded, to decipher the contours of the banal as rebus…” (237)
The dream was all the rage in the early 20th century, as Aragon notes in Wave of Dreams. Benjamin refunctioned this surrealist obsession. Benjamin was rather more interested in the dreams of objects than of subjects. “The side which things turn towards the dream is kitsch.” (236) He met the kitsch products of the design and culture industries with curiosity rather than distaste or alarm. “Art teaches us to see into things. Folk art and kitsch allow us to see outward from within things.” (255)
Benjamin has a genius for using the energies of the obsolete. But one has to ask if the somewhat cult-like status Benjamin now enjoys is something of a betrayal of the critical leverage he thought the obsolete materials of the past could provide in the present.
After discussing him with my students, we came to the conclusion that one could think of, and use, all of Benjamin’s methods as ways of detecting the historical unconscious working through the tensions within cultural artifacts. Benjamin can be a series of lessons in which artifacts to look at, and how to look. One can look for the fragment of the past that speaks to the present. One can look within the photograph for the optical unconscious at work. One can look at obsolete forms, where the tension between past and dreamt future is laid bare. One can look at avant-gardes, which might anticipate where the blockage is in the incomplete work of history. One can look at the low or the kitsch, where certain dream-images are passed along in a different manner to fine art.
Our other thought was that one thing that seems to connect Benjamin to the present even more than the content of his writing is the precarity of his situation while writing it. Like Baudelaire and the bohemian flaneur, his was, in contemporary terms, a ‘gig economy’ existence of freelance work and permanent exclusion from security. This precarity seemed to wobble on the precipice of an even greater, and more ostensibly political one — the rise of fascism. Today, the precarity of so many students, artists, traders in new information — the hacker class as I call it — seems to wobble on the precipice of an ecological precarity. If in Benjamin’s day it was the books that were set on fire, now it is the trees.
The thing about an interface is that when it is working smoothly you hardly notice it is there at all. Rather like ideology, really. Perhaps in some peculiar way it is ideology. That might be one of the starting points of Alexander Galloway’s book, The Interface Effect (Polity 2012). Like Alberto Toscano and Jeff Kinkle in Cartographies of the Absolute (Zer0 Books 2015), Galloway revives and revises Fredric Jameson’s idea of cognitive mapping, which might in shorthand be described as a way of tracing how the totality of social relations in our commodified world show up, in spite of themselves, in a particular work of literature, art or media.
Take the tv show, 24. In what sense could one say that the show is ‘political’? It certainly appears so in a ‘red state’ sort of way. The Jack Bauer character commits all sorts of crimes, including torture, in the name of ‘national security.’ But perhaps there’s more to it. Galloway draws attention to how certain formal properties of narrative, editing and so forth might help us see ‘politics’ at work in 24 in other ways.
24 is curiously a show about the totality, but in a rather reactionary way. Characters are connected to something much greater than their petty interests, but that thing is national security, whose over-riding ethical imperative justifies any action. This is of course much the same ‘moral relativism’ of which both the conservative right and the liberal center accused communists and then postmodernists.
The hero, Jack Bauer, is a kind of hacker, circumventing the protocols of both technologies and institutions. Everything is about informatics weapons. Interrogation is about extracting information. “The body is a database, torture a query algorithm.” (112) Time is always pressing, and so short-cuts and hacks are always justified. The editing often favors ‘windowing’, where the screen breaks into separate panels showing different simultaneous events, cutting across the older logic of montage as succession.
The show’s narrative runs on a sort of networked instantaneity. Characters in different places are all connected and work against the same ever-ticking clock. Characters have no interiority, no communal life. They are on the job (almost) 24/7, like perfect postfordist workers, and like them their work is under constant surveillance. There is no domestic space. They have nothing but their jobs, and as Franco Berardi’s work also shows, a heightened ownership of their labor is their only source of spiritual achievement. “Being alive and being on the clock are now essentially synonymous.” (109) What was prefigured in modern works like Kenneth Fearing’s The Big Clock is now a total world.
But Galloway takes a step back and looks at a broader question of form and its relation to the content of the show. The twenty-four hour-long episodes in a season of 24 are supposed to be twenty-four consecutive hours, but there’s actually only 16.8 hours of television. The show makes no reference to the roughly 30% of the viewer’s time spent watching the ads. As Dallas Smythe and later Sut Jhally have argued, watching tv is more or less ‘work,’ and we might now add, a precursor form of the vast amount of more detailed non-labor we all perform on all kinds of screens.
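The arithmetic behind those figures is easy to verify; a minimal sketch, assuming the standard 42 content-minutes per one-hour US network slot (an assumption on my part; the essay states only the totals):

```python
# 24 episodes, each filling a one-hour broadcast slot with ~42 minutes
# of actual program (assumed; the essay gives only the 16.8-hour total).
episodes = 24
content_minutes_per_episode = 42

content_hours = episodes * content_minutes_per_episode / 60
ad_hours = episodes - content_hours   # the rest of each slot is ads
ad_share = ad_hours / episodes

print(content_hours)                  # 16.8 hours of television
print(round(ad_share, 2))             # 0.3, roughly 30% of viewing time
```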
Galloway: “24 is political because the show embodies in its formal technique the essential grammar of the control society, dominated as it is by specific and information logics.” (119) One might add that it is probably watched now in a specific way as well, by viewers checking their text messages or with Facebook open on their laptops while it plays on the big screen. It has to compete with all our other interfaces.
How then can the interface be a site where the larger historical and political forces can be detected playing themselves out as they articulate individual experiences and sensibilities into that larger world? How is the one transformed into the other, as a kind of parallel world, both attuned and blind to what is beyond it? What is the dialectic between culture and history? This might be what Fredric Jameson called allegory. For Galloway, allegory today takes the specific form of an interface, and even more specifically of the workings of an intraface, which might be described as the relation between the center and the edge within the interface itself.
Culture is history in representational form, as social life as a whole cannot be expressed directly (To say nothing of social-natural metabolic life). Culture is if anything not a representation of social life per se, but of the impossibility of its representation. Hence one might pay as much attention to the blind spots of an instance of cultural work – like the missing 30% of 24, where the ads run.
Further, might there be a certain homology between the mode of production at work at large in history and the specific way in which the form of a cultural work does its work? This was perhaps what Jameson was proposing in his famous essay on the ‘postmodern.’ But these times are not post anything: they just are what they are. If this is, in a term Galloway borrows from Deleuze, a society of control, then perhaps the interface is a kind of control allegory.
I can remember a time when we still called all this new media. It is an absurd term now, especially for students whose whole conscious life exists pretty much within the era of the internet and increasingly also of the web and the cellphone. I can also remember a time when the potentials of ‘new media’ appeared, and in some ways really were, quite open. That past is now often read as a kind of teleology where it was inevitable that it would end up monopolized by giant corporations profiting off non-labor in a society of control and surveillance. But this is selective memory. There were once avant-gardes who tried, and failed, to make it otherwise. That they – or we – failed is no reason to accept the official Silicon valley ideologies of history.
I mention this because Galloway starts The Interface Effect by recalling in passing the avant-garde in listserv form that was nettime.org and rhizome.org – but without flagging his own role in any of this. His work with the Radical Software Group and rhizome.org is not part of the story. That world appears here just as the place of the first reception for that pioneering attempt to describe it, Lev Manovich’s The Language of New Media (MIT Press, 2002).
Manovich came at this topic from a very different place from either the techno-boosters of Silicon valley’s California ideology or the politico-media avant-gardes of Western Europe. His own statement about this, which Galloway quotes, turned out to be prescient: “As a post-communist subject, I cannot but see Internet as a communal apartment of Stalin era: no privacy, everybody spies on everybody else, always present line for common areas such as the toilet or the kitchen.” How ironic, now that Edward Snowden, who showed that this is where we had ended up, had to seek asylum of sorts in Putin’s post-Soviet Russia.
As Galloway reads him, Manovich is a modernist, whose attention is drawn to the formal principles of a medium. Where new media is concerned, he finds five: numeric representation, modularity, automation, variability and transcoding. Emphasis shifts from the linear sequence to the database from which it draws, or from syntagm to paradigm. Ironically, the roots of digital media for Manovich are in cinema, and Dziga Vertov is his key example. Cinema, with its standard frames, is already in a sense ‘digital’, and the film editor’s trim-bins are already a ‘database’. Vertov was, after all, not a director so much as an editor.
Manovich’s perception of the roots of ‘new media’ in Vertov is still something of a scandal for the October journal crowd, who rather hilariously think it makes more sense to see today’s Ivy League art history program as the true inheritor of Vertov. Manovich refused that sort of political-historical approach to avant-garde aesthetics, which in a more lively form could be found in, say, Brian Holmes or Geert Lovink. Manovich was also at some remove from those who want to reduce new media to hardware, such as Friedrich Kittler or Wendy Chun, or those who focused more on networks than on the computer itself, such as Eugene Thacker or Tiziana Terranova.
As a post-Soviet citizen, Manovich was also wary of politicized aesthetics that gloss over questions of form as well as those who – in the spirit of Godard – want to treat formalism as inherently radical. Interestingly, Galloway will take a – somewhat different – formalism and bring it back to political-historical questions, as one of those heroic Jameson-style moves in which quite opposed methods are reconciled within a larger whole.
Galloway’s distinctive, and subtle, argument is that digital media are not so much a new ontology as a simulation of one. The word ‘ontology’ is a slippery one here, and perhaps best taken in a naïve sense of ‘what is.’ A medium such as cinema has a certain material relation to what is, or rather what was. The pro-filmic event ends up as a sort of trace in the film, or, put the other way around, the film is an index of a past event. Here it is not the resemblance but the sequence of events that make film a kind of sign of the real, in much the same way that smoke is an indexical sign for fire.
Galloway: “Today all media are a question of synecdoche (scaling a part for the whole), not indexicality (pointing from here to there).” (9) Galloway doesn’t draw on Benjamin here, but one could think of Benjamin’s view of cinema as a kind of organizing of indexical signs from perceptual scales and tempos that can exceed the human – signs pointing to a bigger world. It takes a certain masochistic posture to even endure it, and not quite in the way Laura Mulvey might have thought one of cinema-viewing’s modes as masochistic. For any viewer it is a sort of giving over of perceptual power to a great machine.
To the extent that it helps perceive often subtle, continuous changes by sharpening the edges through a binary of language, let’s say that by contrast digital media is sadistic rather than masochistic. “The world no longer indicates to us what it is. We indicate ourselves to it, and in so doing the world materializes in our image.” This media is not about indexes of a world, but about the profiles of its users.
Galloway does not want to go too far down this path, however. His is a theory not of media but of mediation, which is to say not a theory of a new class of objects but of a new class of relations: mediation, allegory, interface. Instead of beginning and ending from technical media, we are dealing instead with their actions: storing, transmitting, processing. Having learned his anti-essentialism – from Donna Haraway among others – he is careful not to seek essences for either objects or subjects.
A computer is not an ontology, then, but neither is it a metaphysics, in that larger sense of not just what is, but why and how what is, is. Most curiously, Galloway proposes that a computer is actually a simulation of a metaphysical arrangement, not a metaphysical arrangement: “… the computer does not remediate other physical media, it remediates metaphysics itself.” (20)
Here Galloway gives a rather abbreviated example, which I will flesh out a bit more than he does, as best I can. That example is object oriented programming. “The metaphysico-Platonic logic of object-oriented systems is awe inspiring, particularly the way in which classes (forms) define objects (instantiated things): classes are programmer-defined templates, they are (usually) static and state in abstract terms how objects define data types and process data; objects are instances of classes, they are created in the image of a class, they persist for finite amounts of time and are eventually destroyed. On the one hand an idea, on the other a body. On the one hand an essence, on the other an instance. On the one hand the ontological, on the other the ontical.” (21)
One could say a bit more about this, and about how the ‘ontology’ (in the information science sense) of object oriented programming, or of any other school of it, is indeed an ontology in a philosophical sense, or something like it. Object oriented programming (oop) is a programming paradigm based on objects that contain data and procedures. Most flavors of oop are class-based, where objects are instances of classes. Classes define data formats and procedures for the objects of a given class. These classes can be arranged hierarchically, where subordinate classes inherit from the ‘parent’ class. Objects then interact with each other as more or less black boxes. In some versions of oop, those boxes can not only hide their code, they can lock it away.
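To make this class/object relation concrete, here is a minimal sketch of class-based oop (in Python rather than Java, purely for brevity; the `Shape` and `Circle` names are my own illustration, not Galloway’s):

```python
# Class as 'form': a static, programmer-defined template stating in
# abstract terms how its objects hold and process data.
class Shape:
    def __init__(self, name):
        self._name = name          # underscore: data hidden by convention

    def area(self):
        raise NotImplementedError  # subordinate classes must supply this

    def describe(self):
        # Other objects interact only with this interface, not the internals.
        return f"{self._name} with area {self.area():.1f}"

# A subordinate class inherits from the 'parent' class in the hierarchy.
class Circle(Shape):
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):
        return 3.14159 * self._radius ** 2

# Object as instance: created in the image of the class, it persists for
# a finite time and is eventually destroyed; the class-as-form remains.
c = Circle(2.0)
print(c.describe())                # prints "circle with area 12.6"
del c                              # the instance dies; the template persists
```

The class is the ‘idea’ or ‘essence’; the object `c` is the transient ‘body’ made in its image, which is the Platonic arrangement the passage describes.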
Among other things, this makes code more modular, and enables a division of labor among coders. Less charitably, it means that half-assed coders working on big projects can’t fuck too much of it up beyond the particular part they work on. Moreover, oop offers the ability to mask this division of labor and its history. The structure of the software enables a social reality where code can be written in California or Bangalore.
A commercially popular programming language that is substantially oop based is Java, although there are many others. They encourage the reuse of functional bits of code but add a heavy burden of unnecessary complexity and often lack transparency. It is an ontology that sees the world as collections of things interacting with things, but where the things share inputs and outputs only. How they do so is controlled at a higher level. Such is its ‘metaphysico-Platonic logic’, as Galloway calls it, although to me it is sounding rather more like Leibniz.
The structure of software – its ‘ontology’ in the information science sense – makes possible a larger social reality. But perhaps not in the same way as the media of old. Cinema was the defining medium of the 20th century; the game-like interfaces of our own time are something else (as I proposed in Gamer Theory). The interface itself still looks like a screen, so it is possible to imagine it still works the same way. Galloway: “It does not facilitate or make reference to an arrangement of being, it remediates the very conditions of being itself.” (21) The computer simulates an ontological plane with logical relations: “The computer instantiates a practice not a presence, an effect not an object.” (22)
An ethic not an ontology – although not necessarily an ‘ethical’ one. “The machine is an ethic because it is premised on the notion that objects are subject to definition and manipulation according to a set of principles for action. The matter at hand is not that of coming to know a world, but rather that of how specific, abstract definitions are executed to form a world.” (23) (I would rather think this as a different kind of index, of the way formal logics can organize electrical conductivity, for example.)
“The computer is not an object, or a creator of objects, it is a process or active threshold mediating between two states.” (23) Or more than two – there can be many layers (Benjamin Bratton’s stack). “The catoptrics of the society of the spectacle is now the dioptrics of the society of control.” (25) Or: we no longer have mirrors, we have lenses. Despite such a fundamental reorganization of the world, Galloway insists on the enduring usefulness of Marx (and Freud) and of their respective depth models of interpretation, which attempt to ferret out how something can appear as its opposite.
Galloway tips the depth model sideways, and considers the interface in terms of centers and edges, as “… the edges of art always make reference to the medium itself.” (33) This center-edge relation Galloway calls the intraface. It is a zone of indecision between center and edge, or what Roland Barthes called the studium and the punctum. Making an intraface internally consistent requires a sort of neurotic repression of the problem of its edge. On the other hand, signaling the real presence of an edge to the intraface ends up making the work itself incoherent and schizophrenic, what Maurice Blanchot called the unworkable.
In cinema the great artists of the neurotically coherent and schizophrenically incoherent intrafaces respectively might be Hitchcock and Godard. The choice appears to be one of a coherent aesthetic of believing in the interface, but not enacting it (Hitchcock); and an incoherent aesthetic of enacting the interface, but not believing in it (Godard).
But Galloway is wary of assuming that only the second kind of intraface is a ‘political’ one. The multiplayer computer game World of Warcraft is as much an example of a schizophrenic intraface as any Godard movie. “At root, the game is not simply a fantasy landscape of dragons and epic weapons but a factory floor, an information-age sweat-shop, custom tailored in every detail for cooperative ludic labor.” (44)
In a classic Jameson move, Galloway doubles the binary of coherent vs incoherent aesthetics with a second: coherent vs incoherent politics, to make a four-fold scheme. The coherent aesthetics + coherent politics quadrant is probably a rare one now. Galloway doesn’t mention architecture here, but Le Corbusier would be a great example, where a new and clarified aesthetic geometry was supposed to be the representative form for the modern ruling class.
The quadrant of incoherent aesthetics + coherent politics is a lively one, giving us Bertolt Brecht, Alain Badiou, Jean-Luc Godard, or the punk band Fugazi. All in very different ways combine a self-revealing or self-annihilating aesthetic with a fixed political aspiration, be it communist or ‘straight edge.’ The World of Warcraft interface might fit here too, with its schizo graphics interfacing with an order whose politics we shall come to later.
Then there’s the coherent aesthetic + incoherent politics quadrant, which for Galloway means art for art’s sake, or a prioritizing of the aesthetic over the political, giving us the otherwise rather different cinema of Billy Wilder and Alfred Hitchcock, but also the aesthetics of Gilles Deleuze, and I would add Oscar Wilde, and all those with nothing to declare but their ‘genius.’
The most interesting quadrant combines incoherent aesthetics with incoherent politics. This is the ‘dirty’ regime, of the inhuman, of nihilism, of “the negation of the negation.” Galloway will also say the interface of truth. Here lurks Nietzsche, Georges Bataille, and I would add the Situationists or the Jean-François Lyotard of Libidinal Economy. Galloway will by the end of the book place his own approach here, but to trouble that a bit, let me also point out that here lie the strategies of Nick Land and his epigones. Or, more interestingly – Beatriz Preciado’s Testo Junkie.
So in short there are four modes of aesthetic-political interface. The first is ideological, where art and justice are coterminous (the dominant mode). The second is ethical, which must destroy art in the service of justice (a privileged mode). The third is poetic, where one must banish justice in the service of art (a tolerated mode). The last is nihilist, and wants the destruction of all existing modes of art and justice (which for Galloway is a banished mode – unless one sees it – in the spirit of Nick Land – as rather the mode of capitalist deterritorialization itself, in which case it is actually dominant and the new ideological mode. Its avatar would perhaps be Joseph Schumpeter).
Galloway thinks one can map the times as a shift from the ideological to the ethical mode, and a generalized “decline in ideological efficiency.” (51) I suspect it may rather be a shift from the ideological to the nihilist, but one which cannot declare itself, leading to a redoubling of efforts to produce viable ideological modes despite their waning effect. (The sub rosa popularity of Nick Land finds its explanation here, as delicious and desirable wound and symptom.)
Either way, the mechanism – in a quite literal sense – that produces this effect might be the transformation of the interface itself by computing, producing as it does an imaginary relation to ideological conditions, where ideology itself is modeled as software. The computer interface is an incoherent aesthetic that is either in the service of a coherent politics (Galloway’s reading), or, which wants to appear as such but is actually in the service of an incoherent politics that it cannot quite avow (my reading).
Hence where Galloway sees the present aesthetico-politics of the interface as oscillating between regimes 2 and 3, I think it is more about regimes 1 and 4, and entails a devaluing of the aesthetic-political compromises of the Godards and Hitchcocks, or Badious and Deleuzes of this world. I think we now have a short-circuit between ideology and nihilism that accepts no compromise formations. Galloway usefully focuses attention on the intraface as the surface between the problems of aesthetic form and the political-historical totality of which it is a part.
The interface is an allegorical device for Galloway, a concept related to, but not quite the same as, Wendy Hui Kyong Chun’s claim that “software is a functional analog to ideology.” Certainly both writers have zeroed in on a crucial point. “Today the ‘culture industry’ takes on a whole new meaning, for inside software the ‘cultural’ and the ‘industrial’ are coterminous.” (59)
The point where Galloway and Chun differ is that he does not follow her and Kittler in reducing software to hardware. Kittler’s view is part of a whole conceptual field that may be produced by the interface effect itself. There is a kind of segregation where data are supposedly immaterial ideas and the computer is a machine from a world called ‘technology.’ The former appears as a sort of idealist residue reducible to the latter in a sort of base-trumps-superstructure move.
This might correct for certain idealist deviations where the ‘immaterial’ or the ‘algorithm’ acquire mysterious powers of their own without reference to the physical logic-gates, memory cores, not to mention energy sources that actually make computers compute. However, it then runs the risk of treating data and information as somehow less real and less ‘material’ than matter and energy. Hence, as usual, a philosophical ‘materialism’ reproduces the idealism it seeks to oppose.
I think Galloway wants to accord a little more ‘materiality’ to data and information than that, although it is not a topic the book tackles directly. But this is a theory not of media but of mediation, or of action, process, and event. Galloway also has little to say about labor, but that might be a useful term here too, if one can separate it from assumptions about it being something only humans do. A theory of mediation might also be a theory of information labor. An interface would then be a site of labor, where a particular, concrete act meets social, abstract labor in its totality.
Software is not quite reducible to hardware. I think we can use a formula here from Raymond Williams: hardware sets limits on what software can do, but does not determine what it does in any stronger sense. Software is not then ‘ideological’, but something a bit more complicated. For Galloway, software is not just a vehicle for ideology, “instead, the ideological contradictions of technical transcoding and fetishistic abstraction are enacted and ‘resolved’ within the very form of software itself.” (61)
Of course not all interfaces are for humans. Actually most are probably now interfaces between machines and other machines. Software is a machinic turn for ideology, an interface which is mostly about the machinic. Here Galloway also takes his distance from those who, like Katherine Hayles, see code as something like an illocutionary speech act. Just as natural languages require a social setting, code requires a technical setting. But to broaden the context and see code as a subset of enunciation (a key term for Lazzarato) is still to anthropomorphize it too much. I am still rather fond of a term Galloway has used before – allegorithm – an allegory that takes an algorithmic form, although in this book he has dropped it.
What does it mean to visualize data? What is data? In simple terms, maybe data are ‘the givens’, whereas information might mean to give (in turn) some form to what is given. Data is empirical; information is aesthetic. But data visualization mostly pictures its own rules of representation. Galloway’s example here is the visualization of the internet itself, of which there are many examples, all of which look pretty much the same. “Data have no necessary information.” (83) But the information that is applied to it seems over and over to be the same, a sort of hub-and-spoke cloud aesthetic, which draws connections but leaves out protocols, labor, or power.
Maybe one form of what Jodi Dean calls the “decline in symbolic efficiency” is a parallel increase in aesthetic information that goes hand in hand with a decline in information aesthetics. There’s no necessary visual form for data, but the forms it gets seem to come from a small number of presets.
Galloway thinks this through Jacques Rancière’s “distribution of the sensible.” Once upon a time there were given forms for representing particular things in particular situations. But after that comes a sort of sublime regime, which tries to record the trace of the unrepresentable; it succeeds the old distribution as a result of a breakdown between the subjects of art and the forms of representation. The nihilism of modernity actually stems from realism, which levels the representational system, for in realism everything is equally representable.
Realism can even represent the Shoah, and its representability is actually the problem for there is nothing specific about the language in which it is represented, which could just as easily represent a tea party. The problem might be not so much about the representability of the Shoah as that its representation seems to have negligible consequences. Representation has lost ethical power.
But perhaps Rancière was speaking only of the former society of the spectacle, not the current society of control. Galloway: “One of the key consequences of the control society is that we have moved from a condition in which singular machines produce proliferations of images, into a condition in which multitudes of machines produce singular images.” (91) We have no adequate pictures of the control society. Its unrepresentability is connected to what the mode of production itself makes visible and invisible.
Galloway: “the point of unrepresentability is the point of power. And the point of power today is not in the image. The point of power today resides in networks, computers, algorithms, information and data.” (92) Like Kinkle and Toscano, Galloway cites Mark Lombardi’s work, and adds that of the Bureau d’études as examples of what Brian Holmes calls a counter cartography of information. One that might actually restore questions of power and protocol to images of ‘networks.’ But these are still limited to certain affordances of the map-form as interface.
So we have no visual language yet for the control society, although we do have them for some of its effects. Galloway does not mention climate modeling, but to me that is surely the key form of the data -> information -> visualization problem to attend to in the Anthropocene. As I tried to show in Molecular Red, the data -> information interface is actually quite complicated. In climate science each co-produces the other. Data are not empirical in a philosophical sense, but they are wedded to specific material circumstances of which they are unwitting indexes.
One could also think about the problems of visualizing the results, particularly for lay viewers. I see a lot of maps of the existing continents with data on rising temperatures; and a lot of maps of rising seas making new continents which omit the climate projections. Imagine being at a given GPS coordinate 60 years from now where neither the land form nor the climate were familiar. How could one visualize such a terra incognita? Most visualizations hold one variable constant to help understand the other. In Gamer Theory I showed how the SimEarth game actually made some progress on this – but then that was commercially a failed game.
There are lots of visualizations of networks and of climate change; it is curious how few visualizations show both at the same time. And what they tend to leave out is agency. Neither social labor nor the relations of production are pictured. Images of today’s social labor often land on images of elsewhere. Galloway mentions the Chinese gold farmers, those semi-real, semi-mythical creatures (under)paid to dig up items worth money in games like World of Warcraft. Another might be the call center worker, whom we might often hear but never see. These might be the allegorical figures of labor today.
For Galloway, we are all Chinese gold farmers, in the sense that all computerized and networked activity is connected to value-extraction. One might add that we are all call center workers, in that we are all responding to demands placed on us by a network and to which we are obliged to respond. There is of course a massive inequity in how such labor (and non-labor) is rewarded, but all of it may increasingly take similar forms.
All labor and even non-labor becomes abstract and valorized, but race is a much more stubborn category, and a good example of how software simulates ideology. In a game like World of Warcraft, class is figured as something temporary. By grinding away at hard work you can improve your ‘position’. But race is basic and ineradicable. The ‘races’ are all fantasy types, rather than ‘real’ ones, but perhaps it is only in fantasy form that race can be accepted and become matter-of-fact. Control society may be one that even encourages a certain relentless tagging and identifying through race and other markers of difference – all the better to connect you at a fine-grained level of labor and consumption.
The answer to Gayatri Spivak’s question – can the subaltern speak? – is that the subaltern not only speaks but has to speak, even if restricted to certain stereotypical scripts. “The subaltern speaks and somewhere an algorithm listens.” (137) One version of this would be Lisa Nakamura’s cyber-types. In an era when difference is the very thing that what Galloway calls ludic capitalism feasts on, it is tempting to turn, as Badiou and Zizek do, back to the universal. But the questions of what the universal erases or suppresses are not addressed in this turn, just ignored.
Galloway advocates instead a politics of subtraction and disappearance: to be neither the universal nor differentiated subject, but rather the generic one of whatever-being. I’m not entirely convinced by this metaphysical-political turn, at least not yet. It is striking to me that most of The Interface Effect is written under the sign of Fredric Jameson, for whom politics is not a separate domain, but is itself an allegory for the history of capitalism itself. And yet the concluding remarks are built much more on the Jacobin approach to the political of the post-Althusserians such as Badiou, for whom the political is an autonomous realm against the merely economic.
From that Jacobin-political-philosophical point of view, the economic itself starts to become a bit reified. Hence Galloway associates the logic of the game World of Warcraft with the economics of capital itself, because the game simulates a world in which resources are scarce and quantifiable. But surely any mode of production has to quantify. Certainly pre-capitalist ones did. I don’t think it is entirely helpful to associate use value only with the qualitative and uncountable, and to equate exchange value with quantification tout court. One of the lessons of climate science, and the earth science of which it is a subset, is that one of the necessary ways in which one critiques exchange value is by showing that it attempts to quantify imaginary values. It is actually the ‘qualities’ of exchange value that are the problem, not its math.
So while Galloway and I agree on a lot of things, there’s also points of interesting divergence. Galloway: “The virtual (or the new, the next) is no longer the site of emancipation… No politics can be derived today from a theory of the new.” (138) I would agree that the virtual became a way in which the theological entered critical theory, once again, through a back door. I tried to correct for that, between A Hacker Manifesto and Spectacle of Disintegration, through a reading of Debord’s concept of strategy, which I think tried to mediate between pure, calculative models and purely romantic, qualitative ones. It was also a way of thinking with a keen sense of the actual affordances of situations rather than a hankering for mystical ‘events’.
But I think there’s a problem with Galloway’s attempt to have done with an historicist (Jamesonian) mode of thought in favor of a spatialized and Jacobin or ‘political’ one. To try to supersede the modifier of the ‘post’ with that of the ‘non’ is still in spite of itself a temporal succession. I think rather that we need to think about new past-present configurations. It’s a question of going back into the database of the archive and understanding it not as a montage of successive theories but as a field of possible paths and forks – and selecting other (but not ‘new’) ones.
Galloway is quite right to insist that “Another world is not possible.” (139) But I read this more through what the natural sciences insist are the parameters of action than through what philosophy thinks are the parameters for thought. I do agree that we need to take our leave from consumerist models of difference and the demand to always tag and produce ourselves for the benefit of ludic capitalism. In Spectacle of Disintegration I called the taking-leave the language of discretion. But I dissented there somewhat from the more metaphysical cast Agamben gives this, and pulled back to the more earthy and specific form of Alice Becker-Ho’s studies of Romani language practices – that scandal of a language that refuses to belong to a nation.
I think there’s a bit of a danger in opting for the fourth quadrant of the political-aesthetic landscape. Incoherence in politics and aesthetics is as ambivalent as all the others in its implications. Sure it is partly this: “A harbinger of the truth regime, the whatever dissolves into the common, effacing representational aesthetics and representational politics alike.” (142) But it is also the corporate nihilism of Joseph Schumpeter. I think it more consistent with Galloway’s actual thinking here to treat all four quadrants of the aesthetic-political interface as ambiguous and ambivalent rather than exempt the fourth.
Ludic capitalism is on the one hand a time of that playfulness which Schiller and Huizinga thought key to the social whole and its history, respectively. On the other, it is an era of cybernetic control. Poetics meets design in a “juridico-geometric sublime” (29) whose star ideologues are poet-designers like Steve Jobs. The trick is to denaturalize the surfaces of this brand of capitalist realism, which wants to appear as a coherent regime of ideology, but which is actually one of the most perfect nihilisms – and not in the redeemable sense.
I’m not entirely sure that the good nihilism of withdrawal can be entirely quarantined from the bad one of the celebration of naked, unjust power. It’s a question that needs rather more attention. Alex Galloway, Eugene Thacker and I may be in accord as ‘nihilists’ who refuse a certain legislative power to philosophy. As I see it, Galloway thinks there’s a way to ‘hack’ philosophy, to turn it against itself from within. I think my approach is rather to détourn it, to see it as a metaphor machine for producing both connections and disconnections that can move across the intellectual division of labor, that can find ways knowledge can be comradely, and relate to itself and the world other than via exchange value. From the latter point of view, these might just be component parts of the same project.
Software can now simulate the spectacle so effectively that it is able to insinuate another logic into it, the simulation of the spectacle itself, but under which lies an abyss of non-knowledge. But I am not sure this was an inevitable outcome. As with Jodi Dean, I find Galloway rather erases the struggles around what ‘new media’ would become, and now retrospectively sees the outcome as, if not an essence, then a given. This is partly what makes me nervous about a language of seceding or withdrawing. One of the great political myths is of the outsider-subject, untouched by power. All such positions, be it the worker, the woman, the subaltern, can now be taken as fully subsumed. But I think that means one works from an inside – for example via Haraway’s figure of the cyborg – rather than looking for an exit back out again.
What if one took up the study of computation not just as a form of reason, but as a form of rhetoric? That might be one of the key questions animating media studies today. But it is not enough to simply describe the surface effects of computational media as in some sense creating or reproducing rhetorical forms. One would need to understand something of the genesis and forms of software and hardware themselves. Then one might have something to say not just about software as rhetoric, but software as ideology, computation as culture — as reproducing or even producing some limited and limiting frame for acting in and on the world.
Is the relation between the analog and the digital itself analog or digital? That might be one way of thinking the relation between the work of Alexander Galloway and Wendy Hui Kyong Chun. I wrote elsewhere about Galloway’s notion of software as a simulation of ideology. Here I take up Chun’s notion of software as an analogy for ideology, through a reading of her book Programmed Visions: Software and Memory (MIT Press, 2011).
Software as analogy is a strange thing. It illustrates an unknown through an unknowable. It participates in, and embodies, some strange properties of information. Chun: “digital information has divorced tangibility from permanence.” (5) Or as I put it in A Hacker Manifesto, the relation between matter as support for information becomes arbitrary. Chun: “Software as thing has led to all ‘information’ as thing.” (6)
The history of the reification of information passes through the history of the production of software as a separate object of a distinct labor process and form of property. Chun puts this in more Foucauldian terms: “the remarkable process by which software was transformed from a service in time to a product, the hardening of relations into a thing, the externalization of information from the self, coincides with and embodies larger changes within what Michel Foucault has called governmentality.” (6)
Software coincides with a particular mode of governmentality, which Chun follows Foucault in calling neoliberal. I’m not entirely sure ‘neoliberal’ holds much water as a concept, but the general distinction would be that in liberalism, the state has to be kept out of the market, whereas in neoliberalism, the market becomes the model for the state. In both, there’s no sovereign power governing from above so much as a governmentality that produces self-activating subjects whose ‘free’ actions can’t be known in advance. Producing such free agents requires a management of populations, a practice of biopower.
Such might be a simplified explanation of the standard model. What Chun adds is the role of computing in the management of populations and the cultivation of individuals as ‘human capital.’ The neoliberal subject feels mastery and ‘empowerment’ via interfaces to computing which inform the user about past events and possible futures, becoming, in effect, the future itself.
The ‘source’ of this mode of governmentality is source code itself. Code becomes logos: in the beginning was the code. Code becomes fetish. On some level the user knows code does not work magically on its own but merely controls a machine, but the user acts as if code had such a power. Neither the work of the machine nor the labor of humans figures much at all.
Code as logos organizes the past as stored data and presents it via an interface as the means for units of human capital to place their bets. “Software as thing is inseparable from the externalization of memory, from the dream and nightmare of an all encompassing archive that constantly regenerates and degenerates, that beckons us forward and disappears before our very eyes.” (11) As Berardi and others have noted, this is not the tragedy of alienation so much as what Baudrillard called the ecstasy of communication.
Software is a crucial component in producing the appearance of transparency, where the user can manage her or his own data and imagine they have ‘topsight’ over all the variables relevant to their investment decisions about their own human capital. Oddly, this visibility is produced by something invisible, that hides its workings. Hence computing becomes a metaphor for everything we believe is invisible yet generates visible effects. The economy, nature, the cosmos, love, are all figured as black boxes that can be known by the data visible on their interfaces.
The interface appears as a device of some kind of ‘cognitive mapping’, although not the kind Fredric Jameson had in mind, which would be an aesthetic intuition of the totality of capitalist social relations. What we get, rather, is a map of a map, of exchange relations among quantifiable units. On the screen of the device, whose workings we don’t know, we see clearly the data about the workings of other things we don’t know. Just as the device seems reducible to the code that makes the data appear, so too must the other systems it models be reducible to the code that makes their data appear.
But this is not so much an attribute of computing in general as a certain historical version of it, where software emerged as a second (and third, and fourth…) order way of presenting the kinds of things the computer could do. Chun: “Software emerged as a thing – as an iterable textual program – through a process of commercialization and commodification that has made code logos: code as source, code as true representation of action, indeed code as conflated with, and substituting for, action.” (18)
One side effect of the rise of software was the fantasy of the all-powerful programmer. I don’t think it is entirely the case that the coder is an ideal neoliberal subject, and not least because of the ambiguity as to whether the coder makes the rules or simply has to follow them. That creation involves rule-breaking is a romantic idea of the aesthetic, not the whole of it.
The very peculiar qualities of information, in part a product of this very technical-scientific trajectory, make the coder a primary form of an equally peculiar kind of labor. But labor is curiously absent from parts of Chun’s thinking. The figure of the coder as hacker may indeed be largely myth, but it is one that poses questions of agency that don’t typically appear when one thinks through Foucault.
Contra Galloway, Chun does not want to take as given the technical identity of software as means of control with the machine it controls. She wants to keep the materiality of the machine in view at all times. Code isn’t everything, even if that is how code itself gets us to think. “This amplification of the power of source code also dominates critical analyses of code, and the valorization of software as a ‘driving layer’ conceptually constructs software as neatly layered.” (21)
And hence code becomes fetish, as Donna Haraway has also argued. However, this is a strange kind of fetish, not entirely analogous to the religious, commodity, or sexual fetish. Where those fetishes supposedly offer imaginary means of control, code really does control things. One could even reverse the claim here. What if not accepting that code has control was the mark of a fetishism? One where particular objects have to be interposed as talismans of a power relationship that is abstract and invisible?
I think one could sustain this view and still accept much of the nuance of Chun’s very interesting and persuasive readings of key moments and texts in the history of computing. She argues, for instance, that code ought not to be conflated with its execution. One cannot run ‘source’ code itself. It has to be compiled. The relation between source code and machine code is not a mere technical identity. “Source code only becomes a source after the fact.” (24)
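Chun’s point that “source code only becomes a source after the fact” is easy to demonstrate in practice. A minimal Python sketch (my own illustration of the general point, not anything from Chun’s text): the source text must first be compiled into a code object, and what actually runs is bytecode that bears little resemblance to what the programmer wrote.

```python
import dis

source_code = "x = 1 + 2"

# The 'source' is just text; compile() turns it into a code object,
# and the bytecode in that object, not the source, is what executes.
code_obj = compile(source_code, "<example>", "exec")

namespace = {}
exec(code_obj, namespace)
print(namespace["x"])  # 3 — the result of running the compiled form
dis.dis(code_obj)      # the bytecode looks nothing like "x = 1 + 2"
```

Even in an 'interpreted' language, in other words, there is no technical identity between the source and its execution.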
Mind you, one could push this even further than Chun does. She grounds source code in machine code and machine code in machine architectures. But these in turn only run if there is an energy ‘source’, and can only exist if manufactured out of often quite rare materials – as Jussi Parikka shows in his Geology of Media. All of which, in this day and age, are subject to forms of computerized command to bring such materials and their labors together. To reduce computers to command, and indeed not just computers but whole political economies, might not be so much an anthropomorphizing of the computer as a recognition that information has become a rather nonhuman thing.
I would argue that the desire to command an unknown, invisible device through an interface, through software in which code appears as source and logos, is at once a way to make sense of neoliberal political-economic opacity and indeed irrationality. But perhaps command itself is not quite so commanding, and only appears as a gesture that restores the subject to itself. Maybe command is not ‘empowering’ of anything but itself. Information has control over both objects and subjects.
Here Chun usefully recalls a moment from the history of computing – the “ENIAC girls.” (29) This key moment in the history of computing had a gendered division of labor, where men worked out the mathematical problem and women had to embody the problem in a series of steps performed by the machine. “One could say that programming became programming and software became software when the command structure shifted from commanding a ‘girl’ to commanding a machine.” (29)
Although Chun does not quite frame it as such, one could see the postwar career of software as the result of struggles over labor. Software removes the need to program every task directly in machine language. Software offers the coder an environment in which to write instructions for the machine, or the user to write problems for the machine to solve. Software appears via an interface that makes the machine invisible but offers instead ways to think about the instructions or the problem in a way more intelligible to the human and more efficient in terms of human abilities and time constraints.
Software obviates the need to write in machine language, which made programming a higher order task, based on mathematical and logical operations rather than machine operations. But it also made programming available as a kind of industrialized labor. Certain tasks could be automated. The routine running of the machine could be separated from the machine’s solution of particular tasks. One could even see it as in part a kind of ‘deskilling.’
The separation of software from hardware also enables the separation of certain programming tasks in software from each other. Hence the rise of structured programming as a way of managing quality and labor discipline when programming becomes an industry. Structured programming enables a division of labor and secures the running of the machine from routine programming tasks. The result might be less efficient from the point of view of organizing machine ‘labor’ but more efficient from the point of view of organizing human labor. Software recruits the machine into the task of managing itself. Structured programming is a step towards object oriented programming, which further hides the machine, and also the interior of other ‘objects’ from the ones with which the programmer is tasked within the division of labor.
As Chun notes, it was Charles Babbage more than Marx who foresaw the industrialization of cognitive tasks and the application of the division of labor to them. Neither foresaw software as a distinct commodity; or (I would add) one that might be the product of a quite distinct kind of labor. More could be said here about the evolution of the private property relation that will enable software to become a thing made by labor rather than a service that merely applies naturally-occurring mathematical relations to the running of machines.
Crucial to Chun’s analysis is the way source code becomes a thing that erases execution from view. It hides the labor of the machine, which becomes something like one of Derrida’s specters. It makes the actions of the human at the machine appear as a powerful relation. “Embedded within the notion of instruction as source and the drive to automate computing – relentlessly haunting them – is a constantly repeated narrative of liberation and empowerment, wizards and (ex)slaves.” (41)
I wonder if this might be a general quality of labor processes, however. A car mechanic does not need to know the complexities of the metallurgy involved in making a modern engine block. She or he just needs to know how to replace the blown gasket. What might be more distinctive is the way that these particular ‘objects’, made of information stored on some random material surface or other, can also be forms of private property, and can be designed in such a way as to render the information in which they traffic also private property. There might be more distinctive features in how the code-form interacts with the property-form than in the code-form alone.
If one viewed the evolution of those forms together as the product of a series of struggles, one might then have a way of explaining the particular contours of today’s devices. Chun: “The history of computing is littered with moments of ‘computer liberation’ that are also moments of greater obfuscation.” (45) This all turns on the question of who is freed from what. But in Chun such things are more the effects of a structure than the result of a struggle or negotiation.
Step by step, the user is freed from not only having to know about her or his machine, but then also from ownership of what runs on the machine, and then from ownership of the data she or he produces on the machine. There’s a question of whether the first kind of ‘liberation’ – from having to know the machine – necessarily leads to the other all on its own, or rather in combination with the conflicts that drove the emergence of a software-driven mode of production and its intellectual property form.
In short: Programmers appeared to become more powerful but more remote from their machines; users appeared to become more powerful but more remote from their machines. The programmer and then the user work not with the materiality of the machine but with its information. Information becomes a thing, perhaps in the sense of a fetish, but perhaps also in the senses of a form of property and an actual power.
But let’s not lose sight of the gendered thread to the argument. Programming is an odd profession, in that at a time when women were making inroads into once male-dominated professions, programming went the other way, becoming more a male domain. Perhaps it is because it started out as a kind of feminine clerical labor but became – through the intermediary of software – a priestly caste, an engineering and academic profession. Perhaps its male bias is in part an artifact of timing: programming becomes a profession rather late. I would compare it then to the well-known story of how obstetrics pushed the midwives out of the birth-business, masculinizing and professionalizing it, now over a hundred years ago, but then more recently challenged as a male-dominated profession by the re-entry of women as professionals.
My argument would be that while the timing is different, programming might not be all that different from other professions in its claims to mastery and exclusive knowledge based on knowledge of protocols shorn of certain material and practical dimensions. In this regard, is it all that different from architecture?
What might need explaining is rather how software intervened in, and transforms, all the professions. Most all of them have been redefined as kinds of information-work. In many cases this can lead to deskilling and casualization, on the one hand, and to the circling of the wagons around certain higher-order, but information based, functions on the other. As such, it is not that programming is an example of ‘neoliberalism’, so much as that neoliberalism has become a catch-all term for a collection of symptoms of the role of computing in its current form in the production of information as a control layer.
Hence my problem is with the ambiguity in formulations such as this: “Software becomes axiomatic. As a first principle, it fastens in place a certain neoliberal logic of cause and effect, based on the erasure of execution and the privileging of programming…” (49) What if it is not that software enables neoliberalism, but rather that neoliberalism is just a rather inaccurate way of describing a software-centric mode of production?
The invisible machine joins the list of other invisible operators: slaves, women, workers. They don’t need to be all that visible so long as they do what they’re told. They need only to be seen to do what they are supposed to do. Invisibility is the other side of power. To the extent that software has power or is power it isn’t an imaginary fetish.
Rather than fetish and ideology, perhaps we could use some different concepts, what Bogdanov calls substitution and the basic metaphor. In this way of thinking, actual organizational forms through which labor is controlled get projected onto other, unknown phenomena. We substitute the form of organization we know and experience for forms we don’t know – life, the universe, etc. The basic metaphors in operation are thus likely to be those of the dominant form of labor organization, and its causal model will become a whole worldview.
That seems to me a good basic sketch for how code, software and information became terms that could be substituted into any and every problem, from understanding the brain, or love, or nature or evolution. But where Chun wants to stress what this hides from view, perhaps we could also look at the other side, at what it enables.
Chun: “This erasure of execution through source code as source creates an intentional authorial subject: the computer, the program, or the user, and this source is treated as the source of meaning.” (53) Perhaps. Or perhaps it creates a way of thinking about relations of power, even of mapping them, in a world in which both objects and subjects can be controlled by information.
As Chun acknowledges, computers have become metaphor machines. As universal machines in Turing’s mathematical sense, they become universal machines also in a poetic sense. Which might be a way of explaining why Galloway thinks computers are allegorical. I think for him allegory is mostly spatial, the mapping of one terrain onto another. I think of allegory as temporal, as the paralleling of one block of time with another, indeed as something like Bogdanov’s basic metaphor, where one cause-effect sequence is used to explain another one.
The computer is in Chun’s terms a sort of analogy, or in Galloway’s a simulation. This is the sense in which for Chun the relation between analog and digital is analog, while for Galloway it is digital. Seen from the machine side, one sees code as an analogy for the world it controls; seen from the software side, one sees a digital simulation of the world to be controlled. Woven together with Marx’s circuit of money -> commodity -> money there is now another: digital -> analog -> digital. The question of the times might be how the former got subsumed into the latter.
For Chun, the promise of what the ‘intelligence community’ calls ‘topsight’ through computation proves illusory. The production of cognitive maps via computation obscures the means via which they are made. But is there not a kind of modernist aesthetic at work here, where the truth of appearances is in revealing the materials via which appearances are made? I want to read her readings in the literature of computing a bit differently. I don’t think it’s a matter of the truth of code lying in its execution by and in and as the machine. If so, why stop there? Why not further relate the machine to its manufacture? I am also not entirely sure one can say, after the fact, that software encodes a neoliberal logic. Rather, one might read for signs of struggles over what kind of power information could become.
This brings us to the history of interfaces. Chun starts with the legendary SAGE air defense network, the largest computer system ever built. It used 60,000 vacuum tubes and took 3 megawatts to run. It was finished in 1963 and already obsolete, although it led to the SABRE airline reservation system. Bits of old SAGE hardware were used in film sets wherever blinky computers were called for – as in Logan’s Run.
SAGE is an origin story for ideas of real time computing and interface design, allowing ‘direct’ manipulation that simulates engagement by the user. It is also an example of what Brenda Laurel would later think in terms of Computers as Theater. Like a theater, computers offer what Paul Edwards calls a Closed World of interaction, where one has to suspend disbelief and enter into the pleasures of a predictable world.
The choices offered by an interface make change routine. The choices offered by an interface shape notions of what is possible. We know that our folders and desktops are not real but we use them as if they were anyway. (Mind you, a paper file is already a metaphor. The world is no better or worse represented by my piles of paper folders than by my ‘piles’ of digital ‘folders’, even if they are not quite the same kind of representation).
Chun: “Software and ideology fit each other perfectly because both try to map the tangible effects of the intangible and to posit the intangible causes through visible cues.” (71) Perhaps this is one response to the disorientation of the postmodern moment. Galloway would say rather that software simulates ideology. In my mind it’s a matter of software emerging as a basic metaphor, a handy model from the leading labor processes of the time substituted for processes unknown.
So the cognitive mapping Jameson called for is now something we all have to do all the time, and in a somewhat restricted form – mapping data about costs and benefits, risks and rewards – rather than grasping the totality of commodified social relations. There’s an ‘archaeology’ for these aspects of computing too, going back to Vannevar Bush’s legendary article ‘As we may think’, with its model of the memex, a mechanical machine for studying the archive, making associative indexing links and recording intuitive trails.
In perhaps the boldest intuition in the book, Chun thinks that this was part of a general disposition, an ‘episteme’, at work in modern thought, where the inscrutable body of a present phenomenon could be understood as the visible product of an invisible process that was in some sense encoded. Such a process requires an archive, a past upon which to work, and a process via which future progress emerges out of past information.
Computing meets and creates such a worldview. JCR Licklider, Douglas Engelbart and other figures in postwar computing wanted computers that were networked, ran in real time and had interfaces that allowed the user to ‘navigate’ complex problems while ‘driving’ an interface that could be learned step by step. Chun: “Engelbart’s system underscores the key neoliberal quality of personal empowerment – the individual’s ability to see, steer, and creatively destroy – as vital social development.” (83) To me it makes more sense to say that the symptoms shorthanded by the commonplace ‘neoliberal’ are better thought of as ‘Engelbartian.’ His famous ‘demo’ of interactive computing for ‘intellectual workers’ ought now to be thought of as the really significant cultural artifact of 1968.
Chun: “Software has become a common-sense shorthand for culture, and hardware shorthand for nature… In our so-called post-ideological society, software sustains and depoliticizes notions of ideology and ideology critique. People may deny ideology, but they don’t deny software – and they attribute to software, metaphorically, greater powers than have been attributed to ideology. Our interactions with software have disciplined us, created certain expectations about cause and effect, offered us pleasure and power – a way to navigate our neoliberal world – that we believe should be transferrable elsewhere. It has also fostered our belief in the world as neoliberal; as an economic game that follows certain rules.” (92) But does software really ‘depoliticize’, or does it change what politics is or could be?
Digital media program both the future and the past. The archive is first and last a public record of private property (which was of course why the Situationists practiced détournement, to treat it not as property but as a commons.) Political power requires control of the archive, or better, of memory – as Google surely have figured out. Chun: “This always there-ness of new media links it to the future as future simple. By saving the past, it is supposed to make knowing the future easier. The future is technology because technology enables us to see trends and hence to make projections – it allows us to intervene on the future based on stored programs and data that compress time and space.” (97)
Here is what Bogdanov would recognize as a basic metaphor for our times: “To repeat, software is axiomatic. As a first principle, it fastens in place a certain logic of cause and effect, a causal pleasure that erases execution and reduces programming to an act of writing.” (101) Mind, genes, culture, economy, even metaphor itself can be understood as software.
Software produces order from order, but as such it is part of a larger episteme: “the drive for software – for an independent program that conflates legislation and execution – did not arise solely from within the field of computation. Rather, code as logos existed elsewhere and emanated from elsewhere – it was part of a larger epistemic field of biopolitical programmability.” (103) As indeed Foucault’s own thought may be too.
In a particularly interesting development, Chun argues that both computing and modern biology derive from this same episteme. It is not that biology developed a fascination with genes as code under the influence of computing. Rather, both computing and genetics develop out of the same space of concepts.
Actually, early cybernetic theory had no concept of software. It isn’t in Norbert Wiener or Claude Shannon. Their work treated information as signal. In the former, the signal is feedback, and in the latter, the signal has to defeat noise. How then did information thought of as code and control develop both in cybernetics and also in biology? Both were part of the same governmental drive to understand the visible as controlled by an invisible program that derives present from past and mediates between populations and individuals.
A key text for Chun here is Erwin Schrödinger’s ‘What is Life?’ (1944), which posits the gene as a kind of ‘crystal’. He saw living cells as run by a kind of military or industrial governance, each cell following the same internalized order(s). This resonates with Shannon’s conception of information as entropy (a measure of randomness) and Wiener’s of information as negative entropy (a measure of order).
Schrödinger’s text made possible a view of life that was not vitalist – no special spirit is invoked – but which could explain organization above the level of a protein, which was about the level of complexity that Needham and other biochemists could explain at the time. But it comes at the price of substituting ‘crystal’, or ‘form’ for the organism itself.
Drawing on early Foucault, Chun thinks some key elements of a certain episteme of knowledge are embodied in Schrödinger’s text. Foucault’s interest was in discontinuities. Hence his metaphor of ‘archeology,’ which gives us the image of discontinuous strata. It was never terribly clear in Foucault what accounts for ‘mutations’ that form the boundaries of these discontinuities. The whole image of ‘archaeology’ presents the work of the philosopher of knowledge as a sort of detached ‘fieldwork’ in the geological strata of the archive.
Chun: “The archeological project attempts to map what is visible and what is articulable.” (113) One has to ask whether Foucault’s work was perhaps more an exemplar than a critique of a certain mode of knowledge. Foucault said that Marx was a thinker who swam in the 19th century as a fish swims in water. Perhaps now we can say that Foucault is a thinker who swam in the 20th century as a fish swims in water. Computing, genetics and Foucault’s archaeology are about discontinuous and discrete knowledge.
Still, he has his uses. Chun puts Foucault to work to show how there is a precursor to the conceptual architecture of computing in genetics and eugenics. The latter was a political program, supposedly based on genetics, whose mission was improving the ‘breeding stock’ of the human animal. But humans proved very hard to program, so perhaps that drive ended up in computing instead.
The ‘source’ for modern genetics is usually acknowledged to be the rediscovered experiments of Gregor Mendel. Mendelian genetics is in a sense ‘digital’. The traits he studied are binary pairs. The appearance of the pea (phenotype) is controlled by a code (genotype). The recessive gene concept made eugenic selective breeding rather difficult as an idea. But it is a theory of a ‘hard’ inheritance, where nature is all and nurture does not matter. As such, it could still be used in debates about forms of biopower on the side of eugenic rather than welfare policies.
Interestingly, Chun makes the (mis)use of Mendelian genetics as a eugenic theory a precursor to cybernetics. “Eugenics is based on a fundamental belief in the knowability of the human body, an ability to ‘read’ its genes and to program humanity accordingly…. Like cybernetics, eugenics is a means of ‘governing’ or navigating nature.” (122) The notion of information as a source code was already at work in genetics long before either computing or modern biology. Control of, and by, code as a means of fostering life, agency, communication and the qualities of freely acting human capital is then an idea with a long history. One might ask whether it might not correspond to certain tendencies in the organization of labor at the time.
What links machinic and biological information systems is the idea of some kind of archive of information out of which a source code articulates future states of a system. But memory came to be conflated with storage. The active process of both forgetting and remembering turns into a vast and endless storage of data.
Programmed Visions is a fascinating and illuminating read. I think where I would want to diverge from it is at two points. One has to do with the ontological status of information, and the second has to do with its political-economic status. In Chun I find that information is already reduced to the machines that execute its functions, and then those machines are inserted into a historical frame that sees only governmentality and not a political economy.
Chun: “The information travelling through computers is not 1s and 0s; beneath binary digits and logic lies a messy, noisy world of signals and interference. Information – if it exists – is always embodied, whether in a machine or an animal.” (139) Yes, information has no autonomous and prior existence. In that sense neither Chun nor Galloway nor I is a Platonist. But I don’t think information is reducible to the material substrate that carries it.
Information is a slippery term, meaning both order, neg-entropy, form, on the one hand, and something like signal or communication on the other. These are related aspects of the same (very strange) phenomenon, but not the same. The way I would reconstruct technical-intellectual history would put the stress on the dual production of information both as a concept and as a fact in the design of machines that could be controlled by it, but where information is meant as signal, and as signal becomes the means of producing order and form.
One could then think about how information was historically produced as a reality, in much the same way that energy was produced as a reality in an earlier moment in the history of technics. In both cases certain features of natural history are discovered and repeated within technical history. Or rather, features of what will retrospectively become natural history. For us there was always information, just as for the Victorians there was always energy (but no such thing as information). The nonhuman enters human history through the inhuman mediation of a technics that deploys it.
So while I take the point of refusing to let information float free and become a kind of new theological essence or given, wafting about in the ‘cloud’, I think there is a certain historical truth to the production of a world where information can have arbitrary and reversible relations to materiality. Particularly when that rather unprecedented relation between information and its substrate is a control relation. Information controls other aspects of materiality, and also controls energy, the third category here that could do with a little more thought. Of the three aspects of materiality: matter, energy and information, the latter now appears as a form of controlling the other two.
Here I think it worth pausing to consider information not just as governmentality but also as commodity. Chun: “If a commodity is, as Marx famously argued, a ‘sensible supersensible thing’, information would seem to be its complement: a supersensible sensible thing…. That is, if information is a commodity, it is not simply due to historical circumstances or to structural changes, it is also because commodities, like information, depend on a ghostly abstract.” (135) As retrospective readers of how natural history enters social history, perhaps we need to re-read Marx from the point of view of information. He had a fairly good grasp of thermodynamics, as Amy Wendling observes, but information as we know it today did not yet exist.
To what extent is information the missing ‘complement’ to the commodity? There is only one kind of (proto)information in Marx, and that is the general equivalent – money. The materiality of a thing – let’s say ‘coats’ – its use value, is doubled by its informational quantity, its exchange value, and it is exchanged against the general equivalent, or information as quantity.
But notice the missing step. Before one can exchange the thing ‘coats’ for money, one needs the information ‘coats’. What the general equivalent meets in the market is not the thing but another kind of information – let’s call it the general non-equivalent – a general, shared, agreed kind of information about the qualities of things.
Putting these sketches together, one might then ask what role computing plays in the rise of a political economy (or a post-political one), in which not only is exchange value dominant over use value, but where use value further recedes behind the general non-equivalent, or information about use value. In such a world, fetishism would be mistaking the body for the information, not the other way around, for it is the information that controls the body.
Thus we want to think bodies matter, lives matter, things matter – when actually they are just props for the accumulation of information and information as accumulation. ‘Neo’liberal is perhaps too retro a term for a world which does not just set bodies ‘free’ to accumulate property, but sets information free from bodies, and makes information property in itself. There is no biopower, as information is not there to make better bodies; bodies are just there to make information.
Unlike Kittler, Chun is aware of the difficulties of a fully reductive approach to media. For her it is more a matter of keeping in mind both the invisible and visible, the code and what executes it. “The point is not to break free of this sourcery but rather to… make our computers more productively spectral by exploiting the unexpected possibilities of source code as fetish.” (20)
I think there may be more than one kind of non-visible in the mix here, though. So while there are reasons to anchor information in the bodies that display it, there are also reasons to think the relation of different kinds of information to each other. Perhaps bodies are shaped now by more than one kind of code. Perhaps it is no longer a time in which to use Foucault and Derrida to explain computing, but rather to see them as side effects of the era of computing itself.
Lev Manovich wrote the standard text on ‘new media’, back when that was still a thing. It was called The Language of New Media (MIT Press 2001). Already in that book, Manovich proposed a more enduring way of framing media studies for the post-broadcast age. In his most recent book, Software Takes Command (Bloomsbury 2013) we get this more robust paradigm without apologies. Like its predecessor it will become a standard text.
I’m sure I’m not the only one annoyed by the seemingly constant interruptions to my work caused by my computer trying to update some part of the software that runs on it. As Cory Doctorow shows so clearly, the tech industry will not actually leave us to our own devices. I have it set to require my permission when it does so, at least for those things I can control it updating. This might be a common example that points to one of Manovich’s key points about media today. The software is always changing.
Everything is always in beta, and the beta-testing will in a lot of cases be done by us, as a free service, for our vendors. Manovich: “Welcome to the world of permanent change – the world that is now defined not by heavy industrial machines that change infrequently, but by software that is always in flux.” (2)
Manovich takes his title from the modernist classic, Mechanization Takes Command, published by Sigfried Giedion in 1948. (On which see Nicola Twiley, here.) Like Giedion, Manovich is interested in the often anonymous labor of those who make the tools that make the world. It is on Giedion’s giant shoulders that many students of ‘actor networks’, ‘media archaeology’ and ‘cultural techniques’ knowingly or unknowingly stand. Without Giedion’s earlier work, Walter Benjamin would be known only for some obscure literary theories.
Where Giedion is interested in the inhuman tool that interfaces with nonhuman natures, Manovich is interested in the software that controls the tool. “Software has become our interface to the world, to others, to our memory and our imagination – a universal language through which the world speaks, and a universal engine on which the world runs.” (2) If you are reading this, you are reading something mediated by, among other things, dozens of layers of software, including Word v. 14.5.1 and Mac OS v. 10.6.8, both of which had to run for me to write it in the first place.
Manovich’s book is limited to media software, the stuff that both amateurs and professionals use to make texts, pictures, videos, and things like websites that combine texts, pictures and videos. This is useful in that this is the software most of us know, but it points to a much larger field of inquiry that is only just getting going, including studies of software that runs the world without most of us knowing about it, platform studies that looks at the more complicated question of how software meets hardware, and even infrastructure studies that looks at the forces of production as a totality. Some of these are research questions that Manovich’s methods tend to illuminate and some not, as we shall see.
Is software a mono-medium or a meta-medium? Does ‘media’ even still exist? These are the sorts of questions that can become diversions if pursued too far. Long story short: While they disagree on a lot, I think what Alex Galloway and Manovich have in common is an inclination to suspect that there’s a bit of a qualitative break introduced into media by computation.
Giedion published Mechanization Takes Command in the age of General Electric and General Motors. As Manovich notes, today the most recognizable brands include Apple, Microsoft and Google. The last of whom, far from being ‘immaterial’, probably runs a million servers. (24) Billions of people use software; billions more are used by software. And yet it remains an invisible category in much of the humanities and social sciences.
Unlike Wendy Chun and Friedrich Kittler, Manovich does not want to complicate the question of software’s relation to hardware. In Bogdanovite terms, it’s a question of training. As a programmer, Manovich (like Galloway) sees things from the programming point of view. Chun on the other hand was trained as a systems engineer. And Kittler programmed in assembler language, which is the code that directly controls a particular kind of hardware. Chun and Kittler are suspicious of the invisibility that higher level software creates vis-à-vis hardware, and rightly so. But for Galloway and Manovich this ‘relative autonomy’ of software of the kind most people know is itself an important feature of its effects.
The main business of Software Takes Command is to elaborate a set of formal categories through which to understand cultural software. This would be that subset of software that is used to access, create and modify cultural artifacts for online display and communication, for making interactives, or for adding metadata to existing cultural artifacts, as perceived from the point of view of its users.
The user point of view somewhat complicates one of the basic diagrams of media theory. From Claude Shannon to Stuart Hall, it is generally assumed that there is a sender and a receiver, and that the former is trying to communicate a message to the latter, through a channel, impeded by noise, and deploying a code. Hall breaks with Shannon with the startling idea that the code used by the receiver could be different to that of the sender. But he still assumes there’s a complete, definitive message leaving the sender on its way to a receiver.
Tiziana Terranova takes a step back from this modular approach that presumes the agency of the sender (and in Hall’s case, of the receiver). She is interested in the turbulent world created by multiples of such modular units of mediation. Manovich heads in a different direction. He is interested in the iterated mediation that software introduces, in the first instance between the user and the machine itself.
Manovich: “The history of media authoring and editing software remains pretty much unknown.” (39) There is no museum for cultural software. “We lack not only a conceptual history of media editing software but also systematic investigations of the roles of software in media production.” (41) There are whole books on the palette knife or the 16mm film camera, but on today’s tools – not so much. “[T]he revolution in the means of production, distribution, and access of media has not been accompanied by a similar revolution in the syntax and semantics of media.” (56)
We actually do know quite a bit about the pioneers of software as a meta-medium, and Manovich draws on that history. The names of Ivan Sutherland, JCR Licklider, Douglas Engelbart, the maverick Ted Nelson and Alan Kay have not been lost to history. But they knew they were creating new things. The computer industry got in the habit of passing off new things as if they were old, familiar things, in order to ease users gently into unfamiliar experiences. But in the process we lost sight of the production of new things under the cover of the old in the cultural domain.
Of course there’s a whole rhetoric about disruption and innovation, but successful software gets adopted by users by not breaking too hard with those users’ cultural habits. Thus we think there’s novelty where often there isn’t: start-up business plans are often just copies of previous successful ones. But we miss it when there’s real change: at the level of what users actually do, where the old is a friendly wrapper for the new.
Where Wendy Chun brings her interpretive gifts to bear on Engelbart, Manovich is more interested in Alan Kay and his collaborators, particularly Adele Goldberg, at Xerox Palo Alto Research Center, or Parc for short. Founded in 1970, Parc was the place that created the graphical user interface, the bitmapped display, Ethernet networking, the laser printer, the mouse, and windows. Parc developed the models for today’s word processing, drawing, painting and music software. It also gave the world the programming language Smalltalk, a landmark in the creation of object oriented programming. All of which are component bits of a research agenda that Alan Kay called vernacular computing.
Famously, it was not Xerox but Apple that brought all of that together in a consumer product, the 1984 Apple Macintosh computer. By 1991 Apple had also incorporated video software based on the QuickTime standards, which we can take as the start of an era in which a consumer desktop computer could be a universal media editor. At least in principle: those who remember early 90s era computer based media production will recall also the frustration. I was attempting to create curriculum for digital publishing in the late eighties – I came into computing sideways from print production. That was pretty stable by the start of the 90s, but video would take a bit longer to be viable on consumer-grade hardware.
The genius of Alan Kay was to realize that the barriers to entry of the computer into culture were not just computational but also cultural. Computers had to do things that people wanted to do, and in ways that they were used to doing them, initially at least. Hence the strategy of what Bolter and Grusin called remediation, wherein old media become the content of new media form.
If I look at the first screen of my iPhone, I see icons. The clock icon is an analog clock. The iTunes icon is a musical note. The mail icon is the back of an envelope. The video icon is a mechanical clapboard. The Passbook icon is old-fashioned manila files. The Facetime icon is an old-fashioned looking video camera. The Newstand icon looks like a magazine rack. Best of all, the phone icon is the handset of an old-fashioned landline. And so on. None of these things pictured even exist in my world any more, as I have this machine that does all those things. The icons are cultural referents from a once-familiar world that have become signs within a rather different world which I can pretend to understand because I am familiar with those icons.
All of this is both a fulfillment and a betrayal of the work of Alan Kay and his collaborators at Parc. They wanted to turn computers into “personal dynamic media.” (61) Their prototype was even called a Dynabook. They wanted a new kind of media, with unprecedented abilities. They wanted a computer that could store all of the user’s information, which could simulate all kinds of media, and could do so in a two-way, real-time interaction. They wanted something that had never existed before.
Unlike the computer Engelbart showed in his famous Demo of 1968, Kay and co. did not want a computer that was just a form of cognitive augmentation. They wanted a medium of expression. Engelbart’s demo shows, many years ahead of its time, what the office would look like, but not what the workspace of any kind of creative activity would look like. Parc wanted computers for the creation of new information in all media, including hitherto unknown ones. They wanted computers for what I call the hacker class – those who create new information in all media, not just code. In a way the computers that resulted make such a class possible and at the same time set limits on its cognitive freedom.
The Parc approach to the computer is to think of it as a meta-medium. Manovich: “All in all, it is as though different media are actively trying to reach towards each other, exchanging properties and letting each other borrow their unique features.” (65) To some extent this requires making the computer itself invisible to the user. This is a problem for a certain kind of modernist aesthetic – and Chun may participate in this – for whom the honest display of materials and methods is a whole ethics or even politics of communication. But modernism was only ever a partial attempt to reveal its own means of production, no matter what Benjamin and Brecht may have said on the matter. Perhaps all social labor, even of the creative kind, requires separation between tasks and stages.
The interactive aspect of modern computing is of course well known. Manovich draws attention to another feature, and one which differentiates software more clearly from other kinds of media: view control. This one goes back to Engelbart’s Demo rather than Parc. At the moment I have this document in Page View, but I could change that to quite a few different ways of looking at the same information. If I was looking at a photo in my photo editing software, I could also choose to look at it as a series of graphs, and maybe manipulate the graphs rather than the representational picture, and so on.
This might be a better clue to the novelty of software than, say, hypertext. The linky, not-linear idea of a text has a lot of precursors, not least every book in the world with an index, and every scholar taught to read books index-first. Of course there are lovely modernist-lit versions of this, from Cortázar’s Hopscotch to Roland Barthes and Walter Benjamin to this nice little software-based realization of a story by Georges Perec.
Then there’s nonlinear modernist cinema, such as Hiroshima Mon Amour, David Blair’s Wax, or the fantastic cd-roms of The Residents and Laurie Anderson made by Bob Stein’s Voyager Interactive. But Manovich follows Espen Aarseth in arguing that hypertext is not modernist. It’s much more general and supports all sorts of poetics. Stein’s company also made very successful cd-roms based on classical music. Thus while Ted Nelson got his complex, linky hypertext aesthetic from William Burroughs, what was really going on, particularly at Parc, was not tail-end modernism but the beginnings of a whole new avant-garde.
Maybe we could think of it as a sort of meta- or hyper-avant-garde, that wanted not just a new way of communicating in a media but new kinds of media themselves. Kay and Nelson in particular wanted to give the possibility of creating new information structures to the user. For example, consider Richard Shoup’s SuperPaint, coded at Parc in 1973. Part of what it does is simulate real-world painting techniques. But it also added techniques that go beyond simulation, including copying, layering, scaling and grabbing frames from video. Its ‘paintbrush’ tool could behave in ways a paintbrush could not.
For Manovich, one thing that makes ‘new’ media new is that new properties can always be added to it. The separation between hardware and software makes this possible. “In its very structure computational media is ‘avant-garde’, since it is constantly being extended and thus redefined.” (93) The role of media avant-garde is no longer performed by individuals or artist groups but happens in software design. There’s a certain view of what an avant-garde is that’s embedded in this, and perhaps it stems from Manovich’s early work on the Soviet avant-gardes, understood in formalist terms as constructors of new formalizations of media. It’s a view of avant-gardes as means of advancing – but not also contesting – the forward march of modernity.
Computers turned out to be malleable in a way other industrial technologies were not. Kay and others were able to build media capabilities on top of the computer as universal machine. It was a sort of détournement of the Turing and von Neumann machine. They were not techno-determinists. It all had to be invented, and some of it was counter-intuitive. The Alan Kay and Adele Goldberg version was at least as indebted to the arts, humanities and humanistic psychology as to engineering and design disciplines. In a cheeky aside, Manovich notes: “Similar to Marx’s analysis of capitalism in his works, here the analysis is used to create a plan for action for building a new world – in this case, enabling people to create new media.” (97)
Unlike Chun or David Golumbia, Manovich downplays the military aspect of postwar computation. He dismisses SAGE, even though out of it came the TX-2 computer, which was perhaps the first machine to allow a kind of real time interaction, if only for its programmer. From which, incidentally, came the idea of the programmer as hacker, Sutherland’s early work on computers as a visual medium, and the game Spacewar.
The Parc story is nevertheless a key one. Kay and co wanted computers that could be a medium for learning. They turned to the psychologist Jerome Bruner, and his version of Piaget’s theory of developmental stages. The whole design of the Dynabook had something for each learning stage, which to Bruner and Parc were more parallel learning strategies than stages. For the gestural and spatial way of learning, there was the mouse. For the visual and pictorial mode of learning, there were icons. For the symbolic and logical mode of learning, there was the programming language Smalltalk.
For Manovich, this was the blueprint for a meta-medium, which could not only represent existing media but also add qualities to them. It was also both an ensemble of ways of experiencing multiple media and also a system for making media tools, and even for making new kinds of media. A key aspect of this was standardizing the interfaces between different media. For example, when removing a bit of sound, or text, or picture, or video from one place and putting it in another, one would use standard Copy and Paste commands from the Edit menu. On the Mac keyboard, these even have standard key-command shortcuts: Cut, Copy and Paste are command-X, C and V, respectively.
But something happened between the experimental Dynabook and the Apple Mac. The Mac shipped without Smalltalk, or any software authoring tool. From 1987 it came with Hypercard, written by Parc alumnus Bill Atkinson – which many of us fondly remember. Apple discontinued it in 2004. It seems clear, now that the iPad is a thing, that Apple’s trajectory led in the long term away from democratizing computing and toward thinking of the machine as a media device.
And so Kay’s vision was both realized and abandoned. It became cheap and relatively easy to make one’s own media tools. But the computer became a media consumption hub. By the time one gets to the iPad, it does not really present itself as a small computer. It’s more like a really big phone, where everything is locked and proprietary.
A meta-medium contains simulations of old media but also makes it possible to imagine new media. This comes down to being able to handle more than one type of media data. Media software has ways of manipulating specific types of data, but also some that can work on data in general, regardless of type. View control, hyperlinking, sort and search would be examples. If I search my hard drive for ‘Manovich’, I get Word text by me about him, pdfs of his books, video and audio files, and a picture of Lev and me in front of a huge array of screens.
Such media-independent techniques are general concepts implanted into algorithms. Besides search, geolocation would be another example. So would visualization, or infovis, which can graph lots of different kinds of data set. You could read my book, Gamer Theory, or you could look at this datavis of it that uses Bradford Paley’s TextArc.
Manovich wants to contrast these properties of a meta-medium to medium-specific ways of thinking. A certain kind of modernism puts a lot of stress on this: think Clement Greenberg and the idea of flatness in pictorial art. Russian formalism and constructivism in a way also stressed the properties of specific media, their given textures. But they were also interested in working experimentally in between them, in parallel experiments in breaking media down to their formal grammars. Manovich: “… the efforts by modern artists to create parallels between mediums were proscriptive and speculative… In contrast, software imposes common media ‘properties.’” (121)
One could probably quibble with Manovich’s way of relating software as meta-medium to precursors such as the Russian avant-gardes, but I won’t, as he knows a lot more about both than I do. I’ll restrict myself to pointing out that historical thought on these sorts of questions has only just begun. Particular arguments aside, I think Manovich is right to emphasize how software calls for a new way of thinking about art history, which as yet is not quite a pre-history to our actual present.
I also think there’s a lot more to be said about something that is probably no longer a political economy but more of an aesthetic economy, now that information is a thing that can be property, that can be commodified. As Manovich notes of the difference between the actual and potential states of software as meta-medium: “Of course, not all media applications and devices make all these techniques equally available – usually for commercial and copyright reasons.” (123)
So far, Manovich has provided two different ways of thinking about media techniques. One can classify them as media independent vs media specific; or as simulations of the old media vs new kinds of technique. For example, in Photoshop, you can Cut and Paste like in most other programs. But you can also work with layers in a way that is specific to Photoshop. Then there are things that look like old time darkroom tools. And there are things you could not really do in the darkroom, like add a Wind filter, to make your picture look like it is zooming along at Mach 1. Interestingly, there are also high pass, median, reduce noise, sharpen and equalize filters, all of which are hold-overs from something in between mechanical reproduction and digital reproduction: analog signal processing. There is a veritable archaeology of media just in the Photoshop menus.
What makes all this possible is not just the separation of hardware from software, but also the separation of the media file from the application. The file format allows the user to treat the media artifact on which she or he is working as “a disembodied, abstract and universal dimension of any message separate from its content.” (133) You work on a signal, or basically a set of numbers. The numbers could be anything so long as the file format is the right kind of format for a given software application. Thus the separation of hardware and software, and software application and file, allow an unprecedented kind of abstraction from the particulars of any media artifact.
One could take this idea of separation (which I am rather imposing as a reading on Manovich) down another step. Within Photoshop itself, the user can work with layers. Layers redefine an image as a content image with modifications conceived as happening in separate layers. These can be transparent or not, turned on or off, masked to affect part of the underlying image only, and so on.
The kind of abstraction that layers enable can be found elsewhere. It’s one of the non-media-specific techniques. The typography layer of this text is separate from the word-content layer. GIS (Geographic Information System) also uses layers, turning space into a media platform holding data layers. Turning on and off the various layers of Google Earth or Google Maps will give a hint of the power of this. Load some proprietary information into such a system, toggle the layers on and off, and you can figure out the optimal location for the new supermarket. Needless to say, some of this ability, in both Photoshop and GIS, descends from military surveillance technologies from the cold war.
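The layer logic described here can be sketched in a few lines of code. This is a toy model, not Photoshop’s or any GIS’s actual engine: an image is a base pixel array plus a stack of layers, each of which can be toggled on or off and masked to affect only part of the image – the names and the four-pixel “image” are purely illustrative.

```python
# Toy model of layer-based compositing (illustrative, not a real engine):
# an image is a base pixel array plus a stack of toggleable, masked layers.

def composite(base, layers):
    """Apply each visible layer's adjustment to its masked pixels, in order."""
    out = list(base)
    for layer in layers:
        if not layer["visible"]:      # toggling a layer off leaves the image unchanged
            continue
        for i, px in enumerate(out):
            if layer["mask"][i]:      # the mask limits the layer to part of the image
                out[i] = layer["adjust"](px)
    return out

# A tiny 4-pixel grayscale 'image' with two layers: a brighten layer over
# the whole image, and an invert layer masked to the first two pixels only.
base = [10, 20, 30, 40]
layers = [
    {"visible": True, "mask": [1, 1, 1, 1], "adjust": lambda p: min(p + 50, 255)},
    {"visible": True, "mask": [1, 1, 0, 0], "adjust": lambda p: 255 - p},
]
print(composite(base, layers))  # → [195, 185, 80, 90]
```

The point of the sketch is the separation: the base “content” pixels are never modified; every effect lives in a layer that can be switched off or re-masked without loss, which is exactly the abstraction that makes the toggle-and-compare workflow of Photoshop or Google Earth possible.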
So what makes new or digital media actually new and digital is the way software both is, and further enables, a kind of separation. It defines an area for users to work in an abstract and open-ended way. “This means that the terms ‘digital media’ and ‘new media’ do not capture very well the uniqueness of the ‘digital revolution.’… Because all the new qualities of ‘digital media’ are not situated ‘inside’ the media objects. Rather, they all exist ‘outside’ – as commands and techniques of media viewers, authoring software, animation, compositing, and editing software, game engine software, wiki software, and all other software ‘species.’” (149)
The user applies software tools to files of specific types. Take the file of a digital photo, for example. The file contains an array of pixels that have color values, a file header specifying dimensions, color profile, information about the camera and exposure and other metadata. It’s a bunch of – very large – numbers. A high def image might contain 2 million pixels and six million RGB color values. Any digital image seen on a screen is already a visualization. For the user it is the software that defines the properties of the content. “There is no such thing as ‘digital media.’ There is only software as applied to media (or ‘content’).” (152)
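The claim that a digital image is “a bunch of very large numbers” can be made concrete with a toy sketch. The field names below are illustrative, not an actual file format; the arithmetic just checks the pixel counts mentioned above for a 1920×1080 frame.

```python
# Toy model of a digital photo file (illustrative field names, not a real
# format): a header of metadata plus a flat array of RGB pixel values.
width, height = 1920, 1080           # a 'high def' frame: about 2 million pixels

photo = {
    "header": {"width": width, "height": height,
               "color_profile": "sRGB", "camera": "example"},
    # One (R, G, B) triple per pixel -- here just a uniform gray fill.
    "pixels": [(128, 128, 128)] * (width * height),
}

pixel_count = photo["header"]["width"] * photo["header"]["height"]
value_count = pixel_count * 3        # three color values per pixel
print(pixel_count, value_count)      # → 2073600 6220800
```

Nothing in the `pixels` array is inherently visual; it only becomes an image when software interprets the numbers through the header – which is the sense in which any image on screen is already a visualization.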
New media are new in two senses. The first is that software is always in beta, continually being updated. The second is that software is a meta-medium, both simulating old media tools and adding new ones under the cover of the familiar. Yet a third might be the creation not of new versions of old media or new tools for old media but entirely new media forms – hybrid media.
For instance, Google Earth combines aerial photos, satellite images, 3D computer graphics, stills, and data overlays. Another example is motion graphics, including still images, text, audio, and so on. Even a simple website can contain page description information for text, vector graphics, animation. Or the lowly PowerPoint, able to inflict animation, text, images or movies on the public.
This is not quite the same thing as the older concept of multimedia, which for Manovich is a subset of hybrid media. In multimedia the elements are next to each other. “In contrast, in media hybrids, interfaces, techniques, and ultimately the most fundamental assumptions of different media forms and traditions, are brought together, resulting in new media gestalts.”(167) It generates new experiences, different from previously separate experiences. Multimedia does not threaten the autonomy of media, but hybridity does. In hybrid media, the different media exchange properties. For example, text within motion graphics can be made to conform to cinematic conventions, go in and out of ‘focus’.
Hybrid media is not the same as convergence, as hybrid media can evolve new properties. Making media over as software did not lead to their convergence, as some thought, but to the evolution of new hybrids. “This, for me, is the essence of the new stage of computer meta-medium development. The unique properties and techniques of different media have become software elements that can be combined together in previously impossible ways.” (176)
Manovich thinks media hybrids in an evolutionary way. Like Franco Moretti, he is aware of the limits of the analogy between biological and cultural evolution. Novel combinations of media can be thought of as a new species. Some are not selected, or end up restricted to certain specialized niches. Virtual Reality, for example, was as I recall a promising new media hybrid at the trade shows of the early 90s, but it ended up with niche applications. A far more successful hybrid is the simple image map in web design, where an image becomes an interface. It’s a hybrid made of a still image plus hyperlinks. Another would be the virtual camera in 3D modeling, which is now a common feature in video games.
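The image map makes the "exchange of properties" easy to see. In actual web design it is HTML's map and area elements; here the same idea is modeled abstractly, with hypothetical URLs, to show how a still image acquires interface behavior once hyperlink regions are layered onto it.

```python
# A sketch of the image-map hybrid: a still image plus hyperlink
# regions. The image name and URLs are hypothetical examples.
image_map = {
    "image": "world-map.png",          # the still image
    "regions": [                       # hyperlinks layered onto it
        {"rect": (0, 0, 100, 100), "href": "https://example.com/north"},
        {"rect": (0, 100, 100, 200), "href": "https://example.com/south"},
    ],
}

def resolve_click(image_map, x, y):
    """Return the hyperlink under a click, if any - the 'interface' behavior."""
    for region in image_map["regions"]:
        left, top, right, bottom = region["rect"]
        if left <= x < right and top <= y < bottom:
            return region["href"]
    return None

print(resolve_click(image_map, 50, 50))   # https://example.com/north
print(resolve_click(image_map, 50, 150))  # https://example.com/south
```

Neither the picture nor the link is new; the hybrid is the picture behaving as a navigable surface.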
One might pause to ask, like Galloway, or Toscano and Kinkle, whether such hybrid media help us cognitively map the totality of relations within which we are webbed. But the problem is that not only is the world harder to fathom in its totality, even media itself recedes from view. “Like the post-modernism of the 1980s and the web revolution of the 1990s, the ‘softwarization’ of media (the transfer of techniques and interfaces of all previously existing media technologies to software) has flattened history – in this case the history of modern media.” (180) Perhaps it’s a declension of the fetish, which no longer takes the thing for the relation, or even the image for the thing, but takes the image as an image, rather than an effect of software.
Software is a difficult object to study, in constant flux and evolution. One useful methodological tip from Manovich is to focus on formats rather than media artifacts or even instances of software. At its peak in the 1940s, Hollywood made about 400 movies per year. It would be possible to see a reasonable sample of that output. But YouTube uploads something like 300 hours of video every minute. Hence a turn to formats, which are relatively few in number and stable over time. Jonathan Sterne’s book on the mp3 might stand as an exemplary work along these lines. Manovich: “From the point of view of media and aesthetic theory, file formats constitute the ‘materiality’ of computational media – because bits organized in these formats is what gets written to a storage media…” (215)
Open a file using your software – say in Photoshop – and you quickly find a whole host of ways in which you can make changes to the file. Pull down a menu, go down the list of commands. A lot of them have sub-menus where you can change the parameters of some aspect of the file. For example, a color-picker. Select from a range of shades, or open a color wheel and choose from anywhere on it. A lot of what one can fiddle with are parameters, also known as variables or arguments. In a GUI, or Graphic User Interface, there’s usually a whole bunch of buttons and sliders that allow these parameters to be changed.
Modern procedural programming is modular. Every procedure that is used repeatedly is encapsulated in a single function that software programs can invoke by name. These are sometimes called subroutines. Such functions generally solve equations. Functions that perform related tasks are gathered in libraries. Using such libraries speeds up software development. A function works on particular parameters – for example, the color picker.
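The slider-to-subroutine relation can be sketched directly. The function below is hypothetical, not any real application's code: a reusable subroutine that operates on pixel data according to a parameter of the kind a GUI slider would expose to the user.

```python
# A hypothetical subroutine of the kind described: one reusable
# function, varied only by its parameter (what a GUI slider exposes).
def adjust_brightness(pixels, factor):
    """Scale every RGB value by `factor`, clamping to the 0-255 range.

    `pixels` is a list of (R, G, B) tuples; `factor` is the parameter
    a brightness slider in an application would let the user vary.
    """
    return [tuple(min(255, int(c * factor)) for c in px) for px in pixels]

# The same subroutine serves any image, with different parameters:
pixels = [(100, 150, 200), (10, 20, 30)]
print(adjust_brightness(pixels, 1.5))  # brighter
print(adjust_brightness(pixels, 0.5))  # darker
```

The user never sees the function, only the slider; but dragging the slider is choosing an argument.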
Softwarization allows for a great deal of control of parameters. “In this way, the logic of programming is projected to the GUI level and becomes part of the user’s cognitive model of working with media inside applications.” (222) It may even project into the labor process itself. Different kinds of media work become similar in workflow. Select a tool, choose parameters, apply, repeat. You could be doing the layout for a book, designing a building, editing a movie, or preparing photos for a magazine.
Manovich: “Of course, we should not forget that the practices of computer programming are embedded within the economic and social structures of the software and consumer electronics industries.” (223) What would it mean to unpack that? How were the possibilities opened up by Alan Kay and others reconfigured in the transition from experimental design to actual consumer products? How did the logic of the design of computation end up shaping work as we know it? These are questions outside the parameters of Manovich’s formalist approach, but they are questions his methods usefully clarify. “We now understand that in software culture, what we identify by conceptual inertia as ‘properties’ of different mediums are actually the properties of media software.” (225)
There’s a lot more to Software Takes Command, but perhaps I’ll stop here and draw breath. It has a lot of implications for media theory. The intermediate objects of such a theory dissolve in the world of software: “… the conceptual foundation of media discourse (and media studies) – the idea that we can name a relatively small number of distinct mediums – does not hold any more.” (234) Instead, Manovich sees an evolutionary space with a large number of hybrid media forms that overlap, mutate and cross-fertilize.
If one way to navigate such a fluid empirical field might be to reduce it to the principles of hardware design, Manovich suggests another, which takes the relative stability of file formats and means of modifying them as key categories. This reformats what we think the ‘media’ actually are: “… a medium as simulated in software is a combination of a data structure and a set of algorithms.” (207)
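Manovich's formula can be given a toy rendering. The class below is a hypothetical illustration, not any real application's design: the data structure is a pixel grid, and the methods are the algorithms that, together with it, constitute the simulated "medium".

```python
# A sketch of "a medium as simulated in software is a combination of
# a data structure and a set of algorithms" (207). Hypothetical code.
class RasterMedium:
    def __init__(self, width, height):
        # the data structure: a 2D grid of grayscale values (0-255)
        self.grid = [[0] * width for _ in range(height)]

    # the algorithms: operations defining what this "medium" can do
    def invert(self):
        self.grid = [[255 - v for v in row] for row in self.grid]

    def flip_vertical(self):
        self.grid = self.grid[::-1]

m = RasterMedium(2, 2)
m.grid = [[0, 255], [10, 20]]
m.invert()
print(m.grid)  # [[255, 0], [245, 235]]
```

Change either half – a different data structure, or a different set of operations – and you have, in effect, a different medium.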
There are consequences not only for media theory but for cultural studies as well. Cultural techniques can be not only transmitted, but even invented in software design: “a cultural object becomes an agglomeration of functions drawn from software libraries.” (238) These might be embedded less in specific software programs than in the common libraries of subroutines on which they draw to vary given parameters. These design practices then structure workflow and workplace habits.
Software studies offers a very different map as to how to rebuild media studies than just adding new sub-fields, for example, for game studies. It also differs from comparative media studies, in not taking it for granted that there are stable media – whether converging or not – to compare. It looks rather for the common or related tools and procedures embedded now in the software that runs all media.
Manovich’s software studies approach is restricted to that of the user. One might ask how the user is itself produced as a byproduct of software design. Algorithm plus data equals media: which is how the user starts to think of it too. It is a cultural model, but so too is the ‘user’. Having specified how software reconstructs media as simulation for the user, one might move on to thinking about how the user’s usage is shaped not just by formal and cultural constraints but by other kinds as well. And one might think also about who or what gets to use the user in turn.