by Steven Craig Hickman

Berardi makes a valid point in his critique of Srnicek and Williams’ Inventing the Future: “Srnicek and Williams suggest that we should ‘demand full automation, demand universal basic income, demand reduction of the work week’. But they do not explain who the recipient is of these demands. Is there any governing volition that can attend to these requests and implement them? No, because governance has taken the place of government, and command is no longer inscribed in political decision but in the concatenation of techno-linguistic automatisms. This is why demands are pointless, and why building political parties is pointless as well.”1

Governance is now ubiquitous, invisible, and decentralized within the networks themselves; power is part of the very interactive environment we face daily. The moment you open your iPhone you are confronted with a governed set of choices and possibilities that capture your desires and modulate those very choices through sophisticated and ubiquitous algorithms. The same goes for almost every aspect of our once sacrosanct private lives. Our homes in the coming decades will be invasively programmed with ubiquitous smart devices that attune us to techno-commercial decisioning processes outside our control, and yet they will allow us to go on believing that it is we who are choosing, deciding, exercising our oh-so-ingrained “free will” – which, as many neuroscientists keep telling us, is an illusion, a delusion, a cognitive bias, a hereditary error of judgment.

If Edward Bernays opened the door to mass manipulation through the use of images, sound, and print, our new world of smart devices will no longer need to manipulate us: it will decide for us and let us think that we alone have made our choices independently of any external agent, not knowing that this very thought of independence was itself the work of algorithmic governance. Manipulation by design that allows the participant to fool herself. In this way we have reversed the earlier public-relations need to externally manipulate our environment with images, sound bites, and eye candy: now we internalize the whole process and believe we alone are in charge of our choices, making rational and deliberate decisions, while all along these are delivered to us ubiquitously by pattern-matching algorithms so accelerated that we will not know they are not our own thoughts. Predictive engineering so fast, and so closely modeled on our own neural processes, that we will not be able to tell the difference. Like the street magician who seems to float on air, or to produce strange objects from one’s own person, this new world of virtual precision will act at a distance and manipulate the very biases of our own cognitive powers.

Our thought processes and reason are being reformatted by the instruments and technologies we use daily, and, not realizing this, we assume absolute independence of judgment over our lives, while it is these very tools – now so invisible and ubiquitous – that are rewiring our cognitive machinery and delivering us to a governing power so pervasive that it has become internalized as the very core of our own sense of Agency. Essentially we are being reprogrammed as androids or robots in a machinic society in which our very humanity is itself faked and manipulated for the benefit of the power elite.
But this is not the power elite of old. No, as AGI comes online in ever more stable and ubiquitous forms, the very truth of power will reside within this inhuman network, through the global interfaces of decisions we ourselves will so gladly accept. An inversion of human power into machinic systems is taking place even as we enjoy the comforts and leisure of our toys. People speak of a post-work society and do not realize it is already in the offing: every text message you send, every web page you access, every time you retrieve data from anywhere or pass information across the network, you are allowing profits to be extracted by some invisible entity, some corporation or government agency. Our lives of leisure have become the very site of a 24/7 profit machine in which we not only do not know we are participating, but gladly allow it and enjoy it as pleasure and jouissance. We have entered the true age of leisure capitalism.

Some will still work, maintaining machines and the like, for a long while during the transitional decades, while many on the planet will become unnecessary to the process and, like other machines and products, be obsolesced. The horror is that we have become the total product of leisure capitalism. Our very lives are the engines of profit, and we will live out our existence enjoying the minor pleasures of our mindless existence, not even knowing that we have been enslaved in a system of surplus value so ubiquitous that no one except its beneficiaries knows it exists. Even the staged conflicts and wars will be planned and bound by this same governance process. The nations will appear to be enemies in images and simulated dramas, staged operatic forms in the media-tainment industry, while behind the scenes the great powers work for the same global corporations. Orwell had part of this already figured out, as did Aldous Huxley; now we do, too. But will we wake up long enough to disconnect? I doubt it. Yet, with the emergence of cryptocurrencies, blockchain technology, and the various aspects of a neo-market bound to secrecy and darkness, underground resistances and pockets of awakening can become possible. We will follow this trail in time.

As Adam Winfield relates it: “The rise of cryptocurrencies and blockchains has people speculating whether the technologies will lead to a libertarian future, in which governments and corporations lose their grip on centralized power, an authoritarian future, in which they tighten that grip, or something in between.” Yet, as he suggests, what some of these libertarians perhaps didn’t anticipate was the extent to which governments and corporations would eventually begin embracing the technologies – particularly blockchain. Increasingly, state powers and big corporations are recognizing the potential of blockchain to restructure their operations, making them more efficient, transparent, and secure, and could – advertently or not – be solidifying their rule in the process. So what we are seeing is the great global conglomerates and monopolies once again consolidating power, overtaking the very resistance of cognitive inventors and knowledge workers through a process of co-optation and immersive buy-in to the very systems that were meant to resist such elite powers in the first place.
The libertarians who pioneered blockchain aren’t happy about this, as it goes against everything they believe the technology was created for (though some argue the intention was to sidestep state-backed currency systems, not replace them). As Ian Bogost writes in The Atlantic, “the irony would be tragic if it weren’t also so frightening. The invitation to transform distributed-ledger systems into the ultimate tool of corporate and authoritarian control might be too great a temptation for human nature to forgo.”

Winfield quotes one corporate defender of the takeover of this technology. James Zdralek, a Montreal-based software designer for the global tech company SAP, believes the idea that blockchain could lead to total control is flawed because anything made up of individual units – including corporations and governments – cannot have a unified conscience that chooses good or evil. “It’s like judging a group of mold – it either gives you cheese or anthrax,” says Zdralek. “Either result was not the conscious choice of the group of mold cells.”

Yet, as I’ve said above, the whole point of algorithmic governance is that there never was any “conscious choice” – no free will or independence of judgment – and there is no unified or monolithic entity or power behind the proverbial conspiracy scene pulling the puppet strings either. Rather, this whole world is oriented toward one thing, “profits,” and it is this ubiquitous and pervasive engine of accumulation and surplus value that drives all corporations, as well as the techno-commercium that is producing the very tools we’ve spoken of. So there need be no conscious decision, because, as neuroscientists agree, it is all done within ubiquitous processes outside any conscious choice.

One can discover in works such as Neuroscience and the Economics of Decision Making (Alessandro Innocenti, ed.) that for the last two decades flourishing research has been carried out jointly by economists, psychologists, and neuroscientists. This meltdown of competences has led to original approaches for investigating the mental and cognitive mechanisms involved in the way the economic agent collects, processes, and uses information to make choices. This research field involves a new kind of scientist, trained in different disciplines, familiar with managing experimental data and with the mathematical foundations of decision making. The ultimate goal of this research is to open the black box, to understand the behavioral and neural processes through which humans set preferences and translate these behaviors into optimal choices. The point here is to understand every aspect of the brain and how it can be manipulated to serve the optimal choices of those systems of governance and techno-commercial processes, to the benefit of the expanding power of global corporations. Politics and nation-states matter not at all to these larger entities; rather, the stagecraft of nation and politics serves the corporate interests of the competing global entities. Social cognition research works on the mental representations that people collectively hold of their social world and the ways social information is processed, stored, and retrieved. Hundreds of millions of dollars are being spent on various ways to manipulate the masses in every nation, no matter what culture or social conditioning.
Antoinette Rouvroy, in Human Genes and Neoliberal Governance, outlines the knowledge-power relations of the post-genomic era. Addressing the pressing issues of genetic privacy and discrimination in the context of neoliberal governance, the book demonstrates and explains the mechanisms of mutual production between biotechnology and cultural, political, economic, and legal frameworks. In the first part she explores the social, political, and economic conditions and consequences of this new ‘perceptual regime’. In the second she pursues her analysis through a consideration of the impact of ‘geneticization’ on political support for the welfare state and on the operation of private health and life insurance. Genetics and neoliberalism, she argues, are complicit in fostering the belief that social and economic patterns have a fixed nature beyond the reach of democratic deliberation, whilst the characteristics of individuals are unusually plastic, and within the scope of individual choice and responsibility.

With the emergence of human enhancement – H+ or transhumanism – the elite seek to empower their own children with favorable adjustments and enhancements, breeding a new level of genetic superiority into their germlines. Eugenics is back with a vengeance. With the so-called convergence technologies the power elite hope to consolidate their power base and build a new capitalist platform on techno-commercial footing that can carry us into a space-faring civilization. Convergence in knowledge, technology, and society is the accelerating, transformative interaction among seemingly distinct scientific disciplines, technologies, and communities to achieve mutual compatibility, synergism, and integration, and through this process to create added value for societal benefit. It is a movement recognized by scientists and thought leaders around the world as having the potential to provide far-reaching solutions to many of today’s complex knowledge, technology, and human development challenges. Four essential and interdependent convergence platforms of human activity are defined in the first part of this report: nanotechnology-biotechnology-information technology and cognitive science (“NBIC”) foundational tools; Earth-scale environmental systems; human-scale activities; and convergence methods for societal-scale activities. The report then presents the main implications of convergence for human physical potential, cognition and communication, productivity and societal outcomes, education and physical infrastructure, sustainability, and innovative and responsible governance.

As a whole, a new model for convergence is emerging in our time. To effectively take advantage of this potential, a proactive governance approach is suggested by these very powerful elites in many think-tanks and techno-commercial enterprises like Google and other global market leaders. The United States and its corporate overlords have for some time been implementing a program aimed at focusing disparate R&D energies into a coherent activity – a “Societal Convergence Initiative”. The notion of a technocracy arose during the twentieth century but then lacked the collusion of corporation, government, and academia; that is no longer the case.
With the demise of the humanistic learning systems that stood in its way, the ongoing privatization of academic institutions, and the specialized and compartmentalized forms of education and corporate promotion in these various learning institutions, the convergence of business, expertise, technology, and society through these social-conditioning technologies is becoming all-pervasive and will only continue down this course. We’ve seen in recent years how masses of people can be swayed by emotion – anger, frustration, distrust of government and authority – to the point that they are willing to try anything to regain a sense of sanity and stability in their lives, even to the point of allowing an authoritarian leader to take charge and correct what was perceived as the weakness of other leaders.

As Kathleen Taylor describes it in her recent book, Brainwashing, there are three approaches to mind-changing: by force, by stealth, and by direct brain-manipulation technologies. “The first two, as I describe in the book, use standard psychological processes; in this sense there is nothing unnatural about mind control. The aim is to isolate victims from their previous environment; control what they perceive, think, and do; increase uncertainty about previous beliefs; instill new beliefs by repetition; and employ positive and negative emotions to weaken former beliefs and strengthen new ones.”2 As she puts it: “People can be persuaded to give up objective freedoms and hand over control of their lives to others in return for apparent freedoms – in other words, as long as they are aware of the freedoms they are gaining and either contemptuous, or altogether unaware, of the freedoms they are giving up.” “The trick is to disable the brain’s alarm system” (Brainwashing, p. 243).

Most of what she describes is the older mode of mind manipulation that has been used in many forms by nations, terror groups, government security agencies, and cults for centuries. The refinement has come in experimentation, the use of pharmaceuticals, and social or mass psychology. Yet in our time, as technology displaces many of the decisioning processes we as humans have relied on as independent agents, our belief in a Self-Subject is no longer the same. For decades the very notion of the liberal Subject and Self has come under scrutiny, in a propaganda mission from both the corporate-controlled sciences and the academy that undermines the whole democratic political subject as we have come to know it. Without a sense of Self and subjectivity we no longer have a center from within which to make normative claims or political judgments. If, as many neuroscientists suggest, the Self is an illusion and a delusion, then our whole democratic system of government is no longer viable in the Enlightenment tradition founded on Reason, and the Lockean and Rousseauean notions, along with Mill’s utilitarianism, fall by the wayside. If these neuroscientists and academic thinkers are correct, secular modernity itself is at an end.

Are advances in science and technology enabling us, in the foreseeable future, to create digital minds? The notion of exponential growth is a pattern built deep into the scheme of life, but technological change now promises to outstrip even evolutionary change. Many neuroscientists, in collusion with engineers, are beginning to reverse-engineer the brain through the various EU and U.S. brain initiatives.
We may see in the future that technological and scientific advances, ranging from the discovery of the laws that govern electromagnetic fields to the development of computers, will change the very nature of what it means to be human. Some even see in this process of artificial selection the continuance of natural selection, mutating into the ultimate algorithm, with genetics and the evolution of the central nervous system, along with the role computer imaging has played in understanding and modeling the brain. Having considered the behavior of the unique system that creates a mind, neuroscientists are turning to an unavoidable question: is the human brain the only system that can host a mind? If digital minds come into existence – and it is difficult to argue that they will not – what are the social, legal, and ethical implications? Will digital minds be our partners, or our rivals? In an age when so many things are happening at once, will decisions like these, which could have dire implications for homo sapiens as a species, be made for us, taken out of our hands, while we are manipulated and distracted by war, climate change, terror, fear, anger, and political and social unrest that keep us from thinking clearly about these things? All the while the power elites continue investing and making these decisions outside the political and social systems of nation, culture, or the ethico-religious dimensions of our planetary civilization.
taken from:
by Achim Szepanski

Satellite monitoring, enormous computing power on silicon chips, sensors, networks, and predictive analytics are the components of the digital systems (surveillance capital) that currently track, analyze, and capitalize on the lives and behaviors of populations on a massive scale. Under the pressure of the financial markets, for example, Google is forced constantly to increase the effectiveness of the data tracking and analysis generated by its machine intelligence and, for that very reason, to combat every user's claim to privacy by the most diverse means. Thanks to a series of devices such as laptops and smartphones, cameras and sensors, computers are today ubiquitous in capitalized everyday life. They are sign-reading machines whose algorithms (unambiguously calculable, formally explicit procedural instructions) develop their full power only in the context of digital media networking, for which the programmatic design, transformation, and reproduction of all media formats is a prerequisite. The social networks in particular supply a kind of economy that has established a strange new algorithmic governance by extracting personal data, which leads to the construction of metadata, cookies, tags, and other tracking technologies. This development has become known primarily as "Big Data," a system based on networking, databases, and high computing performance and capacity.

The processes involved are, according to Stiegler, those of "grammatization". In the digital stage these lead to individuals being guided through a world in which their behavior is grammatized by interacting with computer systems operating in real time. Grammatization, for Stiegler, begins with the cave paintings and leads through cuneiform, photography, film, and television to the computer, the Internet, and the smartphone. The result of all this is that the data paths and tracks generated by today's computerization technologies constitute tertiary, attention-reducing retentions or mnemotechnics that involve specific processes of temporalization and individuation: "industrialization processes of memory" or "political processes of memory," an industrial economics based on the industrial exploitation of periods of consciousness. With the digitization of the data paths and processes, which today are relentlessly generated by sensors, interfaces, and other means as binary numbers and calculable data, an automated social body is created, as Stiegler says, in which even life is transformed into an agent of the hyper-industrial economy of capital.

Deleuze anticipated this development in his famous essay on control societies, but control proper arrives only when digital calculation integrates Deleuze's modulating control techniques into an algorithmic governance that also automates all existences, ways of life, and cognition. The power technologies underlying the protocols are a-normative, because they are rarely debated widely in the public domain, but rather appear to be inherent in algorithmic governance itself. The debate now is: do data create digital protocols, or do digital protocols create data? Or, more pointedly: are data digital protocols? In any case, their setting has a structuring character, and not only in its results.
Like any governance, if we think of it in Foucault's terms, algorithmic governance also implements specific technologies of power, which today are no longer based on statistics oriented to the average and the norm; instead we have an automated, atomistic, and probabilistic machine intelligence.1 The digital data machines that continuously collect and read data tracks mobilize an a-normative and a-political rationality, consisting of the automatic analysis and monetary valorization of enormous amounts of data, by modeling, anticipating, and influencing the behavior of populations. Today this is trivializingly called ubiquitous computing, in which – and it must be pointed out again and again – surveillance capital carries out the extraction of users' behavior and builds prediction products on it through its algorithms, no longer only on the Internet but in the real world, and then diversifies those prediction products by special techniques. Everything, whether animate or inanimate, can be networked, can connect, communicate, and calculate. From automobiles, refrigerators, houses, and bodies, signals flow constantly through digital devices, signals based on the activities of human and non-human actors in the real world, entering the digital networks as data and serving there for transformation into forecasting products that are sold to advertisers for targeted advertising (Zuboff 2018: 225).

To put it in more detail: the surveillance capital of companies like Google or Facebook automates the buying behavior of consumers, channels it through the famous feedback loops of their AI machines, and binds it purposefully to companies that are the surveillance capital's advertising customers. The behavioral modifications to be achieved in users rest on machine processes and techniques such as tuning (adaptation to a system), herding (conditioning of the mass), and conditioning (training of stimulus-response patterns) that direct the behavior of users' bodies, so that the engineered prediction products actually drive users' behavior toward Google's guaranteed intentions (ibid.). The maximum predictability of users' behavior is now a genuine source of profit: the consumer who uses a fitness app should, at the moment of maximum receptivity – after jogging, say – buy the healthy beverage product that targeted advertising has previously made palatable. The sporting-goods manufacturer Nike has bought the data-analysis company Zodiac and uses it in its stores in New York. If a customer enters a store with the Nike app on their smartphone, they are immediately recognized and categorized by the geofencing software. The homepage of the app changes at once, and instead of online offers, new features appear on the screen, including, of course, special offers and recommendations tailored to the customer and currently available in the store. Surveillance capital long ago ceased to be advertising-only; it quickly became a model for capital accumulation in Silicon Valley, adopted by virtually every startup. And today it is no longer limited to individual companies or the Internet sector, but has spread to a large number of products, services, and economic sectors, including insurance, healthcare, finance, the cultural industries, transportation, and so on.
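How might such a geofence trigger work mechanically? The following Python sketch is an illustration only: the store coordinates, radius, and offer data are invented for the example, and this is a generic geofence check, not Nike's or Zodiac's actual software.

import math

# Hypothetical store geofence: coordinates and radius are illustrative only.
STORE_LAT, STORE_LON = 40.7425, -73.9881   # a fictional New York store
GEOFENCE_RADIUS_M = 75                     # trigger distance in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def app_homepage(user_lat, user_lon, profile):
    """Swap the app's homepage content when the user crosses the geofence."""
    if haversine_m(user_lat, user_lon, STORE_LAT, STORE_LON) <= GEOFENCE_RADIUS_M:
        # In-store: surface offers keyed to the user's tracked profile.
        return {"view": "in_store", "offers": profile.get("targeted_offers", [])}
    # Outside the fence: fall back to generic online content.
    return {"view": "online", "offers": ["generic_promotion"]}

print(app_homepage(40.7426, -73.9880, {"targeted_offers": ["running_shoes_10pct"]}))

The point of the sketch is how little is needed: a stream of location data plus a stored behavioral profile suffices to recognize, categorize, and re-target a body the moment it enters a marked zone.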
Almost every product or service that begins with the word "smart" or "personalized," every Internet-connected device, every "digital assistant," is an interface in the corporate supply chain for the invisible flow of behavioral data on its way toward predicting the future of populations in a surveillance economy. Under investor pressure, Google quickly quashed its stated antipathy to advertising, opting instead to increase revenue by using its exclusive access to user data logs (once known as "data exhaust") in combination with substantial analytical capacity and maximum computing power to generate predictions of users' click-through rates, which are read as a signal of an ad's relevance. Operationally, this meant that Google transformed its growing database into a working surplus of behavioral data while developing new ways to aggressively search for sources of surplus production. The company developed new methods of seizing this secret surplus by exposing data that users considered private and by extensively personalizing users' information. This data surplus was secretly analyzed for what it could contribute to predicting user click behavior, and it became the basis for new forecasts called "targeted advertising". Here was the source of surveillance capital: behavioral surplus, material infrastructures, computing power, algorithmic systems, and automated platforms. As click-through rates shot through the ceiling, advertising became as important for Google as the search engine, perhaps the entry point for a new kind of e-commerce that relied on broad online monitoring. The success of these new mechanisms became apparent when Google went public in 2004.

The first surveillance capitalists enacted their claims by simply treating users' private experience as something that can be taken, translated into data, used as private property, and exploited for private gains in knowledge. This was cloaked in rhetorical camouflage and in unilateral declarations that no one recognized as such. Google began unilaterally to postulate that the Internet was merely a resource for its search engine. With a second declaration, it claimed that users' private experience could serve its revenues through the sale of predictions of personal futures to other companies. The next step was to move these surplus operations beyond the online milieu into the real world, where personal behavioral data is likewise treated as free for the taking. This is a familiar story in capitalism: finding things outside the market sphere and turning them into commodities. Once we searched Google; now Google searches us. Once we thought the digital services were free; now the surveillance capitalists think we are fair game. Surveillance capital no longer needs the population in its function as consumers; supply and demand instead orient the surveillance companies toward transactions based on anticipating the behavior of populations, groups, and individuals. Surveillance companies have few employees relative to their computing power (unlike the early industrial companies). Surveillance capital depends on the erosion of individual self-determination and autonomy, and of the right of free choice, in order to generate an unobserved stream of behavioral data and to feed markets that are not for but against the population.
It is no longer enough to automate the streams of information about the population; the goal now is to automate the behavior of the population itself. These processes are constantly redesigned to deepen the asymmetry of knowledge that bears on individual observability and to eliminate any possibility of self-determination. Surveillance capital shifts the focus away from individual users toward populations, such as cities or even a country's economy, which is not insignificant for the capital markets once predictions about the behavior of populations gradually approach certainty. In the competition for the most efficient prediction products, the surveillance capitalists have learned that the more behavioral surplus they acquire, the better the predictions, which spurs capitalization through economies of scale to ever new efforts. And the more varied the surplus, the higher its predictive value. This new economic drive leads from the desktop via the smartphone into the real world: you drive, run, shop, find a parking space, your blood circulates, and you show a face. Everything is to be recorded, localized, and marketed.

There is a duality in information technology: it automates, but it also informates, that is, it translates things, processes, and behaviors into information. New territories of knowledge are thus produced on the basis of informational capacity, and these may also become the subject of political conflicts concerning the distribution of knowledge, decisions about knowledge, and the power of knowledge. Zuboff writes that the surveillance capitalists claim for themselves alone the right to know, the right to decide who knows, and the right to decide who decides. They dominate the automation of knowledge and its specific division of labor. Zuboff goes on to say that one cannot understand surveillance capital without the digital, but the digital could exist without surveillance capital: surveillance capital is not pure technology, and digital technologies could take many forms. Surveillance capital relies on algorithms and sensors, artificial machines and platforms, but it is not the same as these components.

A company such as Google must reach certain dimensions of size and diversification of resources in order to collect data that reflects user behavior, to track the behavioral surplus (Google's data exhaust), and then to use its machine intelligence to convert that data into prediction products of user behavior and sell them, targeted, to advertisers – products that home in on the user like heat-seekers, proposing to him, at a pulse of 78, say, just the right fitness product via displayed advertising. The diversification that serves to increase the quality of the forecasting products requires, first, a broad diversification of observable topics in the virtual world, and second, the transfer of extraction operations from the network into the real world. In addition, the algorithmic operations must gain in depth; that is, they must aim at the intimacy of users, intervening to actuate, control, indeed form their behavior – for example, by timing and targeting pay-buttons on the smartphone, or by locking a car automatically if its owner has not paid the insurance installments on time. The data pool from which analysts can now draw is almost infinite.
They know exactly who returns goods, calls hotlines, or complains about a company in online portals. They know the favorite shops, restaurants, and bars of many consumers, the number of their "friends" on Facebook, the creators of the ads that social-media users have clicked on. They know who in recent days has visited the website of an advertiser's competitor or googled certain goods. They know a person's skin color, sex, and financial situation, their physical illnesses and emotional complaints. They know the age, the profession, the number of children, the neighborhood, the size of the apartment – after all, it is quite interesting for a company that manufactures mattresses to know whether a customer is single or, in the worst case for business, orders the same five foam mattresses for the entire family.

Today the group has materialized in Facebook, in its invisible algorithms, which have evoked a largely imaginary group addiction of unimaginable proportions. And here the theory of simulation is wrong, because there is nothing false about the digital networks; they are quite real, and they create stability for those who are connected to them simply by expanding things: more inquiries, more friends, and so on. With the closure of the factories came the opening of the data mines. And the associated violation of privacy is the systematic result of a pathological division of knowledge in which surveillance capital knows, decides, and decides who decides. Marcuse wrote that it was one of the boldest plans of National Socialism to wage the fight against the taboo of the private. And privacy is today so freed from any curiosity or secrecy that, without any hesitation, almost avidly, you write everything on your timeline so that everyone can read it. We are so happy when a friend comments on anything. And you are always busy managing all the data feeds and updates; at the very least you have to divert a bit of time from your daily routines. Your taste, your preferences, and your opinions are the market price you pay.

But the social-media business model will reach its limit and be ended, though it is still pushed by the growth of consumerism. This business model has repeated itself ever since the dotcom boom of the 1990s: if growth stagnates, the project must be wound up. The frictionless growth of customer-centric, decentralized marketing is fueled by a mental pollution of the digital environment that corresponds to the pollution of the natural one. In a search query, factors such as the search terms, the length of stay, the formulation of the query, letters, and punctuation are among the clues used to spy on users' behavior, so that even these so-called data fumes can be collected to target advertising at users' behavioral surplus. Google also assigns the best advertising slots algorithmically: each advertiser's price per click is multiplied by the probability that the advertisement will actually be clicked on, and the highest-scoring combination wins. In the end, these procedures also reveal what a particular individual thinks in a particular place at a particular time. Singularity indeed. Every click on an ad banner displayed via Google is a signal of its relevance and is therefore taken as a measure of successful targeting. Google is currently seeing an increase in paid clicks and at the same time a fall in the average cost per click, which amounts to an increase in productivity: the volume of output has increased while costs are falling.
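The ranking rule just described, price per click weighted by the predicted probability of a click, can be made concrete in a few lines. The Python sketch below illustrates that expected-value logic only; the bidders and numbers are invented, and this is not Google's actual auction system.

# Rank ads by expected revenue per impression: bid-per-click multiplied by
# predicted click probability. Illustrative sketch with invented figures.
bids = [
    {"advertiser": "A", "bid_per_click": 2.50, "p_click": 0.010},
    {"advertiser": "B", "bid_per_click": 0.80, "p_click": 0.045},
    {"advertiser": "C", "bid_per_click": 1.20, "p_click": 0.020},
]

for ad in bids:
    ad["expected_value"] = ad["bid_per_click"] * ad["p_click"]

# The best slot goes to the highest expected value, not the highest bid:
# here B wins (0.036) despite bidding far less than A (0.025).
for ad in sorted(bids, key=lambda a: a["expected_value"], reverse=True):
    print(ad["advertiser"], round(ad["expected_value"], 4))

This is why the prediction itself becomes the commodity: improving p_click by even a fraction reorders the whole auction, so every scrap of behavioral surplus that sharpens the probability estimate translates directly into revenue.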
Just as the protocols are everywhere, so are the standards. One can speak of environmental standards, safety and health standards, building standards, and digital and industrial standards, whose inter-institutional and technical status is made possible by the functioning of the protocols. The capacity of standards depends on the control of protocols, a system of governance whose organizational techniques shape how value is extracted from those integrated into the different modes of production. But there are also standards of the protocols themselves. The TCP/IP model of the Internet is a protocol that has become the technical standard for Internet communication. There is a specific relationship between protocol, implementation, and standard in digital processes: protocols are descriptions of the precise terms by which two computers can communicate with each other (a dictionary and a handbook for communicating, as it were). Implementation means the creation of software that uses the protocol, i.e., that handles the communication (two implementations that use the same protocol should be able to exchange data with each other). A standard defines which protocol should be used for specific purposes on certain computers; although it does not define the protocol itself, it sets limits on changing the protocol.
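The protocol/implementation/standard distinction can be made concrete with a toy example. In the Python sketch below, the "protocol" is an invented line format, the two classes are independent implementations of it, and a "standard" would be the institutional rule that this protocol must be used for a given purpose. Everything here is illustrative, not TCP/IP itself.

# Protocol (the "dictionary and handbook"): a message is a UTF-8 line of the
# form "<VERB> <payload>\n", and every GREET must be answered with a WELCOME.

class ImplementationA:
    def encode(self, verb, payload):
        return f"{verb} {payload}\n".encode("utf-8")
    def decode(self, wire):
        verb, _, payload = wire.decode("utf-8").rstrip("\n").partition(" ")
        return verb, payload

class ImplementationB:
    # Written differently, but conforming to the same protocol.
    def encode(self, verb, payload):
        return (verb + " " + payload + "\n").encode("utf-8")
    def decode(self, wire):
        parts = wire.decode("utf-8").strip().split(" ", 1)
        return parts[0], parts[1] if len(parts) > 1 else ""

# Because both follow the protocol, A and B can exchange data:
a, b = ImplementationA(), ImplementationB()
verb, payload = b.decode(a.encode("GREET", "hello"))
assert (verb, payload) == ("GREET", "hello")
reply = b.encode("WELCOME", payload) if verb == "GREET" else b.encode("ERROR", "")
print(a.decode(reply))  # ('WELCOME', 'hello')

The structuring character the passage points to is visible even here: whoever fixes the line format decides in advance what can be said at all, before any single message is sent.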
translated by Dejan Stojkovski

taken from:

by Rizosfera @ Obsolete Capitalism blog

1) Let’s start with your first book, published in 2009, The Spam Book, edited in collaboration with Jussi Parikka, a compendium from the dark side of digital culture. Why did you feel the urge to investigate the bad sides of digital culture as a writing debut? In the realm of “spam” seen as an intruder, an excess, an anomaly, and a menace, you have met the “virus” which has characterized your research path up until today.

As I recall, Jussi and I jokingly framed The Spam Book as the antithesis to Bill Gates’ Road Ahead, but our dark-side perspective was not so much about an evil “bad” side. It was more about shedding some light on digital objects that were otherwise obscured by discourses concerning security and epidemiological panics that rendered objects “bad”. So our introduction is really about challenging these discursively formed “bad” objects; these anomalous objects and events that seem to upset the norms of corporate networking. We were also trying to escape the linguistic syntax of the biological virus, which defined much of the digital contagion discourse at the time, trapping the digital anomaly in the biological metaphors of epidemiology and Neo-Darwinism. This is something that I’ve tried to stick to throughout my writings on the viral. In some ways, though, I think we did stay with the biological metaphor to some extent in The Spam Book, but tried to turn it on its head, so that rather than pointing to the nasty bits (spam, viruses, worms) as anomalous threats, we looked at the viral topology of the network in terms of horror autotoxicus or autoimmunity. That is, the very same network that is designed to share information becomes this auto-destructive vector for contagion. But beyond that, the anomaly is also constitutive of network culture. For example, the computer virus determines what you can and can’t do on a network. In a later piece we also pointed to the ways in which spam and virus writing had informed online marketing practices. (1) In this context we were interested in the potential of the accidental viral topology.

Jussi’s Digital Contagions looked at Virilio’s flipping of the substance/accident binary, and I did a Transformations journal article on accidental topologies, so we were, I guess, both trying to get away from prevalent discursive formations (e.g. the wonders of sharing versus the perils of spam) and look instead to the vectorial capacities of digital networks in which various accidents flourished.

2) Virality: Contagion Theory in the Age of Networks came out in 2012. It is an important essay which enables readers to understand virality as a social theory of the new digital dominion from a philosophical, sociological, and political point of view (with the help of thinkers like Tarde and Deleuze). The path moves from the virus (the object of research) to the viral action (the spreading in social network areas to produce drives) to the contagion (the hypnotic theory of collective behaviour). How does the virus act in the digital field and on the web? And how can we control spreading and contagion?

Before answering these specific questions, I need to say how important Tarde is to this book. Even the stuff on Deleuze and Guattari is really only read through their homage to Tarde. His contagion theory helped me to eschew biological metaphors, like the meme, which are discursively applied to nonbiological contexts. More profoundly, Tarde also opens up a critical space wherein the whole nature/culture divide might be collapsed. So, to answer your questions about the digital field and control, we need to know that Tarde regarded contagion as mostly accidental. Although it is the very thing that produces the social – to the extent that even by counter-imitating we are still very much products of imitation – Tarde doesn’t offer much hope in terms of how these contagions can be controlled or resisted. He does briefly mention the cultivation or nurturing of imitation; however, this is not very well developed. But Virality adds affect theory to Tarde (and some claim that he is a kind of proto-affect theorist), which produces some different outcomes. When, for example, we add notions of affective atmospheres to his notion of the crowd, i.e. the role of moods, feelings and emotions, and the capacity to affectively prime and build up a momentum of mood, a new kind of power dynamic of contagion comes into view. While we must not lose sight of Tarde’s accident, the idea that capricious affective contagion can be stirred or steered into action in some way, so as to have a kind of effect, needs to be considered. Crudely, we can’t cause virality or switch it on, but we can agitate or provoke it into potential states of vectorial becoming. This is how small changes might become big; how, that is, the production of a certain mood might eventually territorialize a network. Although any potential contagious overspill needs to be considered a refrain that could, at any moment, collapse back into a capricious line of flight. The flipside of this affective turn, which has, on one hand, allowed us new critical insights into how things might potentially spread on a network, is that digital marketers and political strategists are, on the other hand, looking very closely at moods through strategies of emotional branding and marketing felt experiences. The entire “like” economy of corporate social media is, of course, designed emotionally. Facebook’s unethical emotional contagion experiment in 2014 stands out as an example of how far these attempts to steer the accidents of contagion might go.
3) Five years after the release of Virality, The Assemblage Brain was published in 2017 – a year that has seen a new political paradigm: Trump has succeeded Obama in the United States, a country which we could define as the benchmark of the development of today’s western élites and as a metaphor of power. Both have used the social networks to spread their political message – political unconscious, as you would say. As an expert on contagion and the political use of the social networks, what lesson can we learn from such experience?

In the UK we’re still arguing over what kind of dystopia we’re in: 1984? Brave New World? So it’s funny that someone described the book to me as a dystopian novel. “Surely all these terrible things haven’t happened yet?” “This is just a warning of where we might go wrong in the future.” I’m not so sure about that. Yes, I make references to the dystopian fictions that inspired Deleuze’s control society, but in many ways I think I underestimated just how bad things have got. It’s a complex picture though, isn’t it? There are some familiar narratives emerging. The mass populist move to the right has, in part, been seen as a class-based reaction against the old neoliberal elites and their low-wage economy which has vastly enriched the few. We experienced the fallout here in the UK with Brexit too. Elements of the working class seemed to vociferously cheer for Farage. Perhaps Brexit was a catchier, emotionally branded virus. It certainly unleashed a kind of political unconsciousness, tapping into a nasty mixture of nationalism and racism under the seemingly empowering, yet ultimately oppressive slogan “We Want Our Country Back.” Indeed, the data shows that more Leave messages spread on social media than Remain. But those quick to blame the stupidity of white working-class somnambulists rallying against a neoliberal elite have surely got it wrong. Brexit made a broad and bogus emotional appeal to deluded nationalists from across the class divide who feared the country had lost its identity because of the free movement of people. This acceleration towards the right was, of course, steered by the trickery of a sinister global coalition of corporate-political fascists – elites like Farage, Brexiteers like Johnson and Gove, and Trump’s knuckleheads in the US. What can we learn about the role digital media played in this trickery? We are already learning more about the role of the filter bubbles that propagate these influences, and fake news, of course. We also need to look more closely at the claims surrounding the behavioural data techniques of Cambridge Analytica and the right-wing networks that connect this sinister global coalition to the US billionaire Robert Mercer. Evidently, claims that the behavioural analysis of personal data captured from social media can lead to mass manipulation are perhaps overblown, but again, we could be looking at very small and targeted influences that lead to something big. Digital theorists also need to focus on the effectiveness of Trump-supporting Twitter bots and the affects of Trump’s unedited, troll-like directness on Twitter. But we can’t ignore the accidents of influence. Indeed, I’m now wondering if there’s a turn of events. Certainly, here in the UK, after the recent General Election, UKIP seem to be a spent political force, for now anyhow. The British National Party have collapsed. The Tories are now greatly weakened.
So while we cannot ignore the rise of extreme far-right hate crime, it seems now that although we were on the edge of despair, and many felt the pain was just too much to carry on, all of a sudden there’s some hope again. “We Want Our Country Back” has been replaced with a new hopeful earworm chant of “Oh, Jeremy Corbyn!” There are some comparisons here with Obama’s unanticipated election win. A good part of Obama love grew from some small emotive postings on social media. Similarly, Corbyn’s recent political career has emerged from a series of almost accidental events; from his election as party leader to this last election result. Public opinion about austerity, which seemed to be overwhelmingly and somnambulistically in favour of self-oppression, has, it seems, flipped. The shocking events of the Grenfell Tower fire seem to be having a similar impact on Tory austerity as Hurricane Katrina did on the unempathetic G.W. Bush. It’s interesting that Corbyn’s campaign machine managed to ride the wave of social-media opinion with some uplifting, positive messages about policy ideas, compared to the fearmongering of the right. The Tories spent £1 million on negative Facebook ads, while Labour focused on producing mostly positive, motivating and sharable videos. Momentum are also working with developers, designers and UI/UX engineers on mobile apps that might help galvanize campaign support on the ground.

4) Let’s now turn to your book, The Assemblage Brain. The first question is about neuroculture. It is in fact quite clear that you are not approaching it from a biological, psychological, economic or marketing point of view. What is your approach in outlining neuroculture, and more specifically what do you define as neurocapitalism?

The idea for the book was mostly prompted by criticism of fleeting references to mirror neurons in Virality. Both Tarde and Deleuze invested heavily in the brain sciences of their day, and I suppose I was following on with that cross-disciplinary trajectory. But this engagement with science is, of course, not without its problems. So I wanted to spend some time thinking through how my work could relate to science, as well as art. There were some contradictions to reconcile. On one hand, I had followed this Deleuzian neuro-trajectory, but on the other hand, the critical theorist in me struggled with the role science plays in the cultural circuits of capitalism. I won’t go into too much detail here, but the book begins by looking at what seems to be a bit of theoretical backtracking by Deleuze and Guattari in their swansong What is Philosophy? In short, as Stengers argues, the philosophy of mixture in their earlier work is ostensibly replaced by the almost biblical announcement of “thou shalt not mix!” But it seems that the reappearance of disciplinary boundaries helps us to better understand how to overcome the different enunciations of philosophy, science and art, and ultimately, via the method of interference, produce a kind of nonlocalised philosophy, science and art. What is Philosophy? is also crucially about the brain’s encounter with chaos. It’s a counter-phenomenological, Whiteheadian account of the brain that questions the whole notion of matter and what arises from it. I think its subject matter also returns us to Bergson’s antilocationist stance in Matter and Memory. So in part, The Assemblage Brain is a neurophilosophy book. It explores the emotional brain thesis and the deeply ecological nature of noncognitive sense making.
But the first part traces a neuropolitical trajectory of control that connects the neurosciences to capitalism, particularly apparent in the emotion turn we see in the management of digital labour and new marketing techniques, as well as the role of neuropharmaceuticals in controlling attention. So neurocapitalism perhaps begins with G.H.W. Bush’s announcement that the 1990s were the Decade of the Brain. Thereafter, government and industry investment in neuroscience research has exceeded genetics and is spun out into all kinds of commercial applications. It is now this expansive discursive formation that needs unpacking. But how to proceed? Should we analyse this discourse? Well, yes, but a problem with discourse analysis is that it too readily rubbishes science for making concrete facts from the hypothetical results of experimentation rather than trying to understand the implications of experimentation. To challenge neurocapitalism I think we need to take seriously both concrete and hypothetical experimentation. Instead of focusing too much on opening up a critical distance, we need to ask what it is that science is trying to make functional. For example, critical theory needs to directly engage with neuroeconomics and subsequent claims about the role neurochemicals might play in the relation between emotions and choice, addiction and technology use, and attention and consumption. It also needs to question the extent to which the emotional turn in the neurosciences has been integrated into the cultural circuits of capitalism. It needs to ask why neuroscientists, like Damasio, get paid to do keynotes at neuromarketing conferences!

5) A Spinozian question. After “What can a virus do?” in Virality you have moved to “What can a brain do?” in The Assemblage Brain. Can you describe your shift from the virus to the brain, and especially what you want to reach in your research path of the Spinozian enquiry “What can a body do?” What creative potential do you attribute to the brain? And, in Virilio’s perspective, how many “hidden incidents in the brain itself” may lie in the question: what can be done to a brain? How dangerous can the neural essence be when applied to technological development? The front line seems to be today in the individual cerebral areas and in the process of subjectivity under ruling diagrams of neural types...

Yes, the second part of the book looks at the liberating potential of sense-making ecologies. I don’t just mean brain plasticity here. I’m not so convinced by Malabou’s idea that we can free the brain by way of knowing our brain’s plastic potential. It plays a part, but we risk simply transferring the sovereignty of the self to the sovereignty of the synaptic self. I’m less interested in the linguistically derived sense of self we find here, wherein the symbolic is assumed to explain to us who we are (the self that says “I”). I’m more interested in Malabou’s warning that brain plasticity risks being hijacked by neoliberal notions of individualised worker flexibility. Protevi’s Spinoza-inspired piece on the Nazis’ Nuremberg Rallies becomes more important in the book. So there are different kinds of sensory power that can either produce more passive somnambulist Nazi followers or encourage a collective capacity towards action that fights fascism. Both work on a population through affective registers, which are not necessarily positive or negative, but rather sensory stimulations that produce certain moods.
So, Protevi usefully draws on Deleuze and Bruce Wexler’s social neuroscience to argue that subjectivity is always being made (becoming) in deeply relational ways. Through our relation to carers, for instance, we see how subjectivity is a multiple production, never a given – more a perpetual proto-subjectivity in the making. Indeed, care is, in itself, deeply sensory and relational. The problem is that the education of our senses is increasingly experienced in systems of carelessness; from Nuremberg to the Age of Austerity. This isn’t all about fear. The Nazis’ focus on joy and pleasure (Freude), for example, worked on the mood of a population, enabling enough racist feeling and sense of superiority to prepare for war and the Holocaust. Capitalism similarly acts to pacify consumers and workers; to keep “everybody happy now” in spite of the degrees of nonconscious compulsion, obsolescence and waste, and disregard for environmental destruction. Yet, at the extreme, in the Nazi death camps, those with empathy were most likely to die. Feelings were completely shut down. In all these cases, though, we find anti-care systems in which the collective capacity to power is closed down. Nonetheless, brains are deeply ecological. In moments of extreme sensory deprivation they will start to imagine images and sounds. The socially isolated brain will imagine others. In this context, it’s interesting that Wexler returns us to the importance of imitative relations. Again, we find here an imitative relation that overrides the linguistic sense of an inner self (a relation of interiority) and points instead to sense making in relation to exteriority. Without having to resort to mirror neurons, I feel there is a strong argument here for imitation as a powerful kind of affective relation that can function on both sides of Spinoza’s affective registers.

6) Let’s talk about specialized control and neurofeedback: the neurosubject seen as the slave of a future of sedated behaviour. Is it possible to train or to correct a brain? Let’s go back to the relation between politics and neuroculture. Trump’s administration displays neuropolitics today: for example, “Neurocore” is a company in which Betsy DeVos (Trump’s current US Secretary of Education) is the main shareholder. It is a company specialised in neurofeedback techniques through which one can learn how to modulate and therefore control internal or external cerebral functions, as some human-computer interfaces do. Neurocore claims to be able to work positively on the electric impulses of the cerebral waves. What can we expect from mental-wellness research through neurofeedback, and from self-regulated or digitally self-empowered cerebral manipulations, in politics and in society?

Of course, claims made by these brain-training companies are mostly about gimmicky, money-spinning neuro-speculation. But I think this focus on ADHD is interesting. It also addresses the point you made in the previous question about being neurotypical. So Neurocore, like other similar businesses, claims to be able to treat the various symptoms of attention deficit by applying neuroscience. This usually means diagnosis via EEG – looking at brainwaves associated with attention/inattention – and then some application of noninvasive neurofeedback rather than drug interventions. OK, so by stimulating certain brainwaves it is perhaps possible to produce a degree of behavioural change akin to Pavlov or Skinner.
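[By way of illustration only: the kind of EEG diagnosis mentioned here typically rests on simple band-power measures. The Python sketch below computes one widely cited attention-related measure, the theta/beta ratio, on a synthetic single-channel signal. It is a generic textbook calculation, not Neurocore’s method or any FDA-cleared device’s actual pipeline, and the signal and figures are invented.]

import numpy as np
from scipy.signal import welch

FS = 256                       # sampling rate in Hz
t = np.arange(0, 30, 1 / FS)   # 30 seconds of synthetic single-channel "EEG"
rng = np.random.default_rng(0)
eeg = (40 * np.sin(2 * np.pi * 6 * t)      # strong theta component (4-8 Hz)
       + 10 * np.sin(2 * np.pi * 20 * t)   # weaker beta component (13-30 Hz)
       + 5 * rng.standard_normal(t.size))  # background noise

def band_power(freqs, psd, lo, hi):
    """Approximate power in a frequency band from a power spectral density."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

# Estimate the spectrum, then compare theta power to beta power.
freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
theta = band_power(freqs, psd, 4, 8)
beta = band_power(freqs, psd, 13, 30)
print(f"theta/beta ratio: {theta / beta:.2f}")  # elevated ratios are the flagged pattern

[The neurofeedback loop is then just this measure run in real time, with a reward signal whenever the ratio moves in the desired direction – a conditioning circuit in the plain Skinnerian sense.]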
But aside from these specific claims, there’s a more general and political relation established between the sensory environments of capitalism and certain brain-somatic states. I think these relations are crucial to understanding the paradoxical and dystopic nature of neurocapitalism. For example, ADHD is assumed by many to be linked to faulty dopamine receptors and detected by certain brainwaves (there’s an FDA-certified EEG diagnosis in the US), but the condition itself is a paradoxical mix of attention and inattention. On one hand, people with ADHD are distracted from the things they are supposed to neurotypically pay attention to, like school, work, paying the bills etc., and on the other, they are supposed to be hyper-attentive to the things that are regarded as distractions, like computer games and other obsessions that they apparently spend disproportionate time on. There is a clear attempt here to manage certain kinds of attention through differing modes of sensory stimulation. But what’s neurotypical for school seems to clash with what’s neurotypical in the shopping mall. Inattention, distractibility, disorganization, impulsiveness and restlessness seem to be prerequisite behaviours for hyper-consumption. Not surprisingly, then, ADHD, OCD and dementia become part of the neuromarketer’s tool bag; that is, the consumer is modelled by a range of brain pathologies, e.g. the attention-challenged, forgetful consumer whose compulsive drives are essential to brand obsessions. All this links to the control society thesis and Deleuze’s location of marketing as the new enemy, and the potential infiltration of neurochemicals and brainwaves as the latest frontier in control. What I do in the book is look back at the origins of the control society thesis, found explicitly in the dystopias of Burroughs and implicitly in Huxley. What we find is a familiar paradoxical switching between freedom and slavery, joyful coercion and oppression. In short, the most effective dystopias are always dressed up as utopias.

7) What then is an assemblage brain? It seems to me that a precise line of thought passing from Bergson, Tarde, Deleuze, Guattari, Whitehead, Ruyer and Simondon has been traced here. You write: everything is potentially «becoming brain». Why? And what difference is there from the cybernetic model of the brain prevailing today?

Although I don’t really do much Whitehead in the book, I think his demand for a nonbifurcated theory of nature is the starting point for the assemblage brain. Certainly, by the time I get to discuss Deleuze’s The Fold, Whitehead is there in all but name. So there’s this beautiful quote that I’ve used in a more recent article that perfectly captures what I mean...

“[W]e cannot determine with what molecules the brain begins and the rest of the body ends. Further, we cannot tell with what molecules the body ends and the external world begins. The truth is that the brain is continuous with the body, and the body is continuous with the rest of the natural world. Human experience is an act of self-origination including the whole of nature, limited to the perspective of a focal region, located within the body, but not necessarily persisting in any fixed coordination with a definite part of the brain.” (2)

This captures the antilocationist stance of the book, which rallies against a series of locationist positions in neuroculture, ranging from what has been described as fMRI-phrenology to the neurophilosophy of Metzinger’s Platonic Ego Tunnel.
The cybernetic model of sense making is a locationist model writ large. The cognitive brain is a computer that stores representations somewhere in a mental model that seems to hover above matter. It communicates with the outside world through internal encoding/decoding information processors, and even when this information becomes widely distributed through external networks, the brain model doesn’t change; instead we encounter the same internal properties in this ridiculous notion of a megabrain or collective intelligence. We find a great antidote to the megabrain in Tarde’s social monadology, but The Fold brilliantly upsets the whole notion that the outside is nothing more than an image stored on the inside. On the contrary, the inside is nothing more than a fold of the outside. To further counter such locationist perspectives on sense making – Whitehead’s limitations of the focal region – we need to rethink the question of matter and what arises from it. For example, Deleuze’s use of Ruyer results in this idea that everything is potentially becoming brain. There are, as such, micro-brains everywhere in Whitehead’s nonbifurcated assemblage – e.g. the society of molecules that compose the stone, which senses the warmth of the sun. There’s evidently politics in here too. The ADHD example I mentioned is a locationist strategy that says our response to the stresses and disruptions experienced in the world today can be traced back to a problem that starts inside the head. On the contrary, it’s in our relations with these systems of carelessness that we will find the problem! 8) You declare that the couple “mind/brain” is insoluble. Against the ratio of the scientific concept of the «mind» you counterpose the chaotic materiality of the «brain», writing that the brain is the chaos which continues to haunt science (p. 195). Can we say that such an irreducible escape from chaos, expressed in your metaphor of Huxley’s escape from Plato’s cave, shows your preference for What is Philosophy? by Deleuze and Guattari rather than A Thousand Plateaus, where assemblage theory is displayed? So yes, in The Fold there is no mind/brain distinction, just, as What is Philosophy? continues, this encounter between matter and chaos. The brain simply returns, or is an exchange point for, the expression of chaos – Whitehead’s narrow “focal point” of the percipient event. This is, as Stengers argues, nothing more than a mere foothold of perception, not a command post! Such a concept of nature evidently haunts the cognitive neurosciences’ approach that seeks, through neuroaesthetics, for example, to locate the concept of beauty in the brain. We might be able to trace a particular sensation to a location in the brain – by, for example, tweaking a rat’s whisker so that it corresponds with a location in the brain – but the neurocorrelates between these sensations and the concept of beauty are drastically misunderstood as a journey from matter to mental stuff, or matter to memory. I think the metaphor of Huxley’s acid-fuelled escape from Plato’s cave, which is contrasted with De Quincey’s opiated journey to the prison of the self, helps, in a slightly tongue-in-cheek way, to explore the difference between relations of interiority and exteriority, or tunnels and folds.
The point is to contrast De Quincey’s need to escape the harsh world he experienced in the early industrial age by hiding inside his opiated dream world with Huxley’s acid-induced experience of “isness.” Huxley was certainly reading Bergson when he wrote The Doors of Perception, so I think he was looking to route around the kind of perception explained by the journey from matter to the mental. My attempt at a somewhat crude lyrical conclusion is that while De Quincey hides in his tunnel, Huxley is out there in the nonbifurcated fold... 9) One last question (maybe more ethical than what we would expect from new media theorists today) involves the meeting between a virus and a brain. What ethical, biological, political, social and philosophical effects may occur when viruses are purposely introduced/inoculated into a human brain, as with «organoids» derived from cells grown in research laboratories? Can growing a brain from embryonic cells and wildly experimenting with modifying its growth take the zoon politikon to a critical edge? Neither machines, nor men, nor cyborgs, but simple wearable synthetic micro-masses. Are we approaching in huge strides the bio-inorganic era that Deleuze defined, in his book on Foucault, as the era of man in charge of the very rocks, or inorganic matter (the domain of silicon)? One way to approach this fascinating question might be to again compare Metzinger’s neuroethics with an ethics of The Fold. On one hand, there’s this human right to use neurotechnologies and pharmaceutical psychostimulants to tinker with the Ego Tunnel. It’s these kinds of out-of-body experiences that Metzinger claims will free us from the virtual sense of self by enabling humans to look back at ourselves and see through the illusion of the cave brain. On the other hand, the ethics of The Fold suggests a more politically flattened and nonbifurcated ecological relation between organic and inorganic matter. The nightmare of the wearable micro-masses you mention would, I suppose, sit more concretely in the former. Infected with this virus, we would not just look back at ourselves, but perhaps spread the politics of the Anthropocene even further into the inorganic world. In many ways, looking at the capitalist ruins in which we live now, we perhaps already have this virus in our heads? Indeed, isn’t humanity a kind of virus in itself? Certainly, our lack of empathy for the planet we contaminate is staggering. I would tend to be far more optimistic about being in the fold, since even though we still have our animal politics and Anthropocene to contend with, if we are positioned more closely in nature – that is, in the consequential decay of contaminated matter – we may, at last, share in the feeling of decay. I suppose this is again already the case. We are living in the early ruins of inorganic and organic matter right now, yet we seem to think we can rise above it. But even Ego Tunnels like Trump will eventually find themselves rotting in the ruins.
Notes:
1) Tony D. Sampson and Jussi Parikka, “Learning from Network Dysfunctionality: Accidents, Enterprise and Small Worlds of Infection” in The Blackwell Companion to New Media Dynamics, Hartley, Burgess and Bruns (eds.), Wiley-Blackwell, 2012.
2) Whitehead cited in Dewey, J., “The Philosophy of Whitehead” in Schilpp, P. A. (ed.), The Philosophy of Alfred North Whitehead, Tudor Publishing Company, New York, 1951.
Explorations in Media Ecology, Volume 15, Number 1, © 2016 Intellect Ltd. doi: 10.1386/eme.15.1.55_1. Eric Jenkins, University of Cincinnati; Peter Zhang, Grand Valley State University.
Abstract: The authors argue that Gilles Deleuze can be read as a media ecologist, extending many insights of Marshall McLuhan’s, including the idea that the medium is the message, that the content of any medium is another medium, and that media extend and alter human faculties. Yet since McLuhan preferred to write in axioms and probes, Deleuze provides a more robust theorizing of these issues. Specifically, Deleuze advances on McLuhan by providing a more complex notion of media as assemblages, avoiding the dilemmas of technological determinism; by developing a more robust way of understanding affect and desire, away from McLuhan’s notion of sensory ratios; and by establishing power and ethics as central concerns, against McLuhan’s primarily descriptive scholarly approach. We conclude that Deleuze thus illustrates the continuing relevance of McLuhan’s foundational work, yet his advances on McLuhan offer many prospects for improving the study of media from a media ecological perspective.
The vast corpus of Gilles Deleuze has recently found significant uptake in the wide world of critical theory. The corpus’ sheer volume helps explain this widespread interest, since Deleuze theorizes on issues relevant to a broad diversity of topics of central concern for critical scholars such as power, desire and affect. Furthermore, Deleuze’s concepts, such as smooth and striated space, deterritorialization, control society, the rhizome, the molar and molecular, the refrain, and machinic assemblages, seem particularly relevant for today’s postmodern capitalism and digital media age. Arguing that this is the proper role for the philosopher, Deleuze seeks to develop concepts that might spark new ways of thinking. As such, Deleuze frequently borrows from and refines concepts from other thinkers, especially those either ignored or excluded from mainstream western philosophizing such as, in the twentieth century, Alfred North Whitehead, Gilbert Simondon, Paul Virilio and William James. Tracing Deleuze’s intellectual heritage further into the past, Todd May (2005: 26), one of his most lucid interpreters, elucidates how Baruch Spinoza, Henri Bergson and Friedrich Nietzsche represent the Christ, Father and Holy Ghost of Deleuze’s thinking. In this article, we would like to add another theorist to this list of Deleuze’s forebears – Marshall McLuhan. This is perhaps a surprising addition since, unlike these other theorists, Deleuze’s references to McLuhan are sporadic and brief. Nevertheless, we argue that Deleuze can be envisioned as a proper heir of McLuhan’s, that is, as a media scholar with an ecological perspective. This is, of course, not an exclusive claim, since Deleuze is also concerned with many issues not directly relevant to communication media, at least in McLuhan’s conceptualization. Yet many of Deleuze’s concepts have relevance for media studies, and his theorization often proceeds from insights originally developed by McLuhan including, as we illustrate in the first section below, the idea that the medium is the message, that the content of any medium is another medium, and that media extend and alter human faculties.
Despite these shared insights, Deleuze develops a more rigorous and complex theoretical perspective than McLuhan, who preferred to write in axioms and probes designed to innervate thought, rather than elaborate an entire framework. As such, we also make a second major claim, namely that as a media ecological scholar Deleuze refines and advances McLuhan’s initial explorations. Deleuze does so in three ways, thereby fine-tuning McLuhan’s thoughts in ways that mitigate some significant criticisms. First, Deleuze defines the hazy concept ‘media’ with a turn towards ‘machinic assemblage’, addressing the widespread indictment of McLuhan for technological determinism. Second, although both Deleuze and McLuhan recognize that media generate different affects and hence desires, Deleuze provides a more extensive understanding of affect and desire that grounds and warrants McLuhan’s, at times, hasty proclamations. Finally, Deleuze directs attention to power and ethics, issues that McLuhan only briefly touches upon and that, due to their submerged role, risk turning McLuhan’s scholarship into a purely descriptive and hence politically debilitating enterprise, as evidenced by McLuhan’s work as a consultant for advertising firms. In sum, we argue that Deleuze, as an heir of McLuhan, takes up the work of his forebear in ways consistent with a media ecological perspective but in a manner that greatly advances that perspective.
Media and assemblages in McLuhan and Deleuze
Although Deleuze prefers the label ‘philosopher’, one can envision Deleuze as a media theorist in much the same vein as McLuhan. For one example, Deleuze’s (1995) description of the transition from disciplinary to control society relies upon the shift from analogue to digital technics, and his (Deleuze and Parnet 2002: 112–15) distinction, borrowed from Henri Bergson, between the actual and the virtual gains enhanced relevance with the onset of the digital age. Furthermore, Deleuze’s canon includes works explicitly focusing on cinema (1986, 1989), the art of Francis Bacon (2005), and The Logic of Sense (1990), in which Deleuze theorizes sense as the faculty with one side turned towards actuality (the thing or the state of affairs) and the other turned towards the proposition. In Deleuze’s (1990: 22) words, sense is ‘exactly the boundary between proposition and things’, and, in McLuhan’s language, we could see this boundary as a medium, which, for McLuhan, extends and translates human senses. Indeed, in Cinema 1, Deleuze (1986: 7–8) sounds a note similar to McLuhan’s emphasis on media as extensions of human faculties when he remarks that cinema is ‘the organ for perfecting the new reality’, an ‘essential factor’ in a ‘new way of thinking’, or when he and Félix Guattari (1987: 61) write, ‘The hand as a general form of content is extended in tools….’ Deleuze is often concerned with other media topics discussed by McLuhan, such as language, music, literature and, especially, different types of space. In addition, many of Deleuze’s major concepts hold direct relevance for the study of media. Peter Zhang (2011) has illustrated how Deleuze’s notions of striated versus smooth space map onto McLuhan’s distinction between acoustic and visual space, a distinction Stanley Cavell (2003) portrays as central to McLuhan’s corpus.
The connection is not lost on Deleuze, who makes positive remarks about McLuhan and other media scholars such as Lewis Mumford and Paul Virilio.1 Although Deleuze rarely uses the term ‘media’, preferring to discuss assemblages, modes and machines, his use of these terms remains consonant with McLuhan’s emphasis on media, since both are concerned with how different modes of thought, perception, language, affection and action shape society. D. N. Rodowick, one of the best interpreters of Deleuze’s cinema work, thus describes Deleuze’s fundamental task as to ‘understand the specific set of formal possibilities – modes of envisioning and representing, of seeing and saying – historically available to different cultures at different times’ (1997: 5). McLuhan, on the other hand, may seem more narrowly focused on media than Deleuze, especially since his chapter headings in Understanding Media (1964) are all different media technologies such as movies and television. Such a seemingly narrow focus has led to many criticisms of McLuhan’s supposed technological determinism. Yet careful consideration of McLuhan’s work reveals a broader focus than media alone, since McLuhan is also concerned with the interfacing of culture with media, something that Deleuze’s terms machine, assemblage and mode point towards. Indeed, McLuhan’s (1967: 159) later work frequently employs the concept of modes, stressing that the study of media is the study of modes: ‘All that remains to study are the media themselves, as forms, as modes ever creating the new assumptions and hence new objectives’. McLuhan’s use of the term modes resonates with Deleuze’s, indicating their shared concern about how culture interfaces with media. As McLuhan states, ‘Vivisective inspection of all modes of our own inner-outer individual-social lives makes us acutely sensitive to all inter-cultural and inter-media experience’ (1969: 64). Here, McLuhan understands modes as the manner of interfacing with media, as things that make us sensitive to inter-cultural and inter-media experience. This quotation seems to make clear, then, that McLuhan understands modes as liminal, in between media and culture, just as Deleuze’s conceptualization of modes, assemblages and machines understands human experience as a coupling of media and culture, of human faculties and technology. A focus on how culture interfaces with media belies the frequent accusations of McLuhan’s technological determinism. Like Deleuze, McLuhan is well aware that, to generate an effect, a medium first needs to be taken up by a social matrix. Although the accusations of technological determinism may be based in a less than generous reading, they remain a necessary cautionary note and their existence is far from surprising. McLuhan’s predilection for axioms and probes, part of a writing style he saw as an adaptation to an electronic media environment, produces claims that may seem, to the more deliberate scholar, to be overstatement, hyperbole, or gross generalization. Take ‘the medium is the message’. McLuhan’s famous axiom seems to dismiss any socio-cultural effects from message content. Other statements supporting this axiom seem to confirm this extreme position, such as when McLuhan remarks that ‘the medium shapes and controls the scale and form of human association and action’, or when he earlier claims that the assembly line altered our ‘relations to one another and to ourselves’, and it ‘mattered not in the least whether it turned out cornflakes or Cadillacs’ (1964: 24, 23).
The automobile certainly had a major impact on society, and elsewhere in the same work McLuhan treats things like the wheel, the bicycle and the airplane as media, seeming to contradict his earlier dismissal since, with Cadillacs, the content of the assembly line is another technological medium. Yet, despite his hyperbolic style, McLuhan’s basic claim that media introduce changes of scale, pace or pattern into human affairs remains undeniable. Furthermore, McLuhan’s notion of medium at least points towards a more complex understanding than just the material technology. Indeed, it is McLuhan (1964: 23) who first recognizes that the content of one medium is ‘always another medium’, such as in our assembly line and Cadillac example. If media shape human affairs, and the content of a medium is always another medium, then McLuhan’s position on the effects of media form versus their content is more complex than the accusations of technological determinism entail. The warnings against a simple and direct technological causation should be heeded, but McLuhan can be read more generously without inferring such a notion of causation. In fact, we can see McLuhan’s axioms and probes as the first volley, necessary to shake free some encrusted biases towards content analysis, and Deleuze’s work as the more sustained ground strokes. Deleuze and Guattari (1983: 240) cite McLuhan’s realization that the content of one medium is another medium approvingly, in their extended criticism of Saussurian linguistics’ emphasis on the signifier. Whereas Saussurian linguistics stresses content (the signifiers, which mean only in relation), ‘the significance of McLuhan’s analysis’ is to have shown the import of ‘decoded flows, as opposed to a signifier that strangles and overcodes the flows’. They continue: In the first place, for non-signifying language anything will do: whether it be phonic, graphic, gestural, etc., no flow is privileged in this language, which remains indifferent to its substance or its support, inasmuch as the latter is an amorphous continuum… [A] substance is said to be formed when a flow enters into a relationship with another flow, such that the first defines a content and the second, an expression. The deterritorialized flows of content and expression are in a state of conjunction or reciprocal precondition that constitutes figures as the ultimate units of both content and expression. These figures do not derive from a signifier nor are they even signs as minimal elements of the signifier; they are non-signs, or rather non-signifying signs, point-signs having several dimensions, flow-breaks or schizzes that form images through their coming together in a whole, but that do not maintain any identity when they pass from one whole to another. Hence the figures… are in no way ‘figurative’; they become figurative only in a particular constellation that dissolves in order to be replaced by another one. Three million points per second transmitted by television, only a few of which are retained. (1983: 240–41) Some elucidation is necessary, since Deleuze and Guattari’s theoretical vocabulary is quite different from McLuhan’s. Basically, Deleuze and Guattari argue, against linguistics, that content does not determine or ‘overcode’ the communication. In other words, similar to McLuhan’s idea that the medium is the message, Deleuze and Guattari stress that the signifier is not what is significant.
The quotation begins by calling attention to media besides language – non-signifying languages – and, based in their understanding of machinic assemblages, calls attention to the flows that compose these languages, like the flows of gestures, graphics and sounds on television. They are arguing that we cannot understand all media communication on the model of language, as Deleuze also concludes in his cinema books. Instead of the dialectical linguistic model based upon signifier-signified relations, they draw upon Louis Hjelmslev’s four-part model, which recognizes a substance and form of both a content and an expression. A substance (say, a television show) is formed when the flows of content (the gestures and sounds and images) and the flows of expression (the video camera and the editing, cuts and montage) are combined. The form is the arrangement and structuring of the substance; on a television show, the form is the order of the shots and the linkages between them. Hence why Deleuze and Guattari cite McLuhan’s recognition that the content of any medium is another medium; the content of a television programme is the flows of gestures, sounds and images. Only combined with the flows of expression (the televisual flow, its camera and editing techniques) does a figure or a whole (something with both substance and form) emerge. This four-part model emphasizing couplings or combinations leads Deleuze and Guattari to refer to what McLuhan would call the medium with the term machinic assemblage. Drawing on the insight that the content of any medium is another medium, Deleuze and Guattari prefer to describe this media coupling as an assemblage. This assemblage is machinic in the sense that it works like any machine, combining flows and breaks into a whole operation. As Deleuze and Guattari explain: An organ machine is plugged into an energy-source machine: the one produces a flow that the other interrupts. The breast is a machine that produces milk, and the mouth a machine coupled to it… For every organ-machine, an energy-machine: all the time, flows and interruptions. (1983: 1–2) Lest the reference to breastfeeding lead us astray, think of television again. There are flows of gestures, images and sounds that are interrupted or broken by the television camera, through such devices as editing and montage. The coupling of the two produces the whole, a machinic assemblage. As the last line of the quotation above indicates, the assemblage and its machinic couplings do not end here but have another component, another break – the viewer, who sees only a few figures from the three million points per second transmitted. At this level, the content of the television programme (the flows of light) becomes coupled with the viewer’s senses to constitute a new assemblage, one that may evoke significance and affect. Once again, in this assemblage the content of one medium is another medium; indeed, what was once the flow of expression (the televisual flow) now becomes the flow of content that the viewer’s sensory system processes into expression, into figuration. In other words, the expression (the figures perceived) only emerges from the televisual flow, which only emerges from the flows of gesture, image and sound and their coupling by the camera and in the editing booth. This notion of humans plugging into other machines to become machinic assemblages seems consonant with McLuhan’s idea of media as extensions, such as the wheels and the accelerator of a car being extensions of our feet.
As McLuhan (1964: 272) remarks about television, ‘With TV, the viewer is the screen. He is bombarded with light impulses that James Joyce called the “Charge of the Light Brigade” that imbues his “soulskin with subconscious inklings”’. Shortly thereafter, McLuhan continues, further illustrating his like-mindedness with Deleuze and Guattari: ‘The TV image offers some three million dots per second to the receiver. From these, he accepts only a few dozen each instant, from which to make an image’ (1964: 273). The existence of the machinic assemblage (or what McLuhan would call medium) of television means both that the dialectical signifier-signified model is inadequate to understand media, with their non-signifying semiotics, and that this model must be expanded to include both content and expression. Precisely because the content of any medium is another medium, precisely because all media are machinic assemblages, we must pay attention to the coupling of content and expression, of flows and breaks, not simply to the linguistic content alone, the chain of signifiers that so occupies Saussurian linguists and rhetorical critics. Recognition of assemblages of content and expression also means that a linguistic model that only addresses content (signifiers) cannot adequately describe the production of communication and, as we will see, desire in the social. Asking what signifiers are present on television, for instance, misses the sensory intimacy of the televisual experience, which McLuhan describes as ‘cool’. This intimacy can explain why Nixon flops on television while Kennedy soars, whereas attention to their string of signifiers offers no insight (as the radio listeners who thought Nixon won this famous debate attest). Indeed, this televisual intimacy has dramatically transformed political discourse, which now prefers the cool stylings of a Reagan or Clinton to the hot Nixon or McCain, and which now demands political oratory characterized by the sound bites, narrative form and self-disclosure that Kathleen Hall Jamieson (1990) deems the ‘effeminate style’. In politics, television has certainly been the ‘message’; the change in content can only be explained by consideration of the media constituting the social environment since, considered apart from their machinic assemblages, political signifiers lack significance. With John McCain and Barack Obama equally and fervently appealing to the American Dream, for instance, consideration of only their signifiers cannot account for the vast differences in affect and desire innervated by the coupling of those signifiers (and images and gestures) into a televisual assemblage. Obama emerged as the preferable figure in this media environment. This is not to discount party loyalty, ideology or other factors for causing some voters to prefer McCain, but only to say that Obama crafted the more attractive televisual image. Shifting the critical focus from content to assemblage entails examining the perceptual and affective aspects of media experience. In other words, it is not so much because of their content (since the content is mostly full of repetitive, generic promises anyway) but because of how they feel and seem that some politicians make better images for television. Thus McLuhan bases his claims about how television has changed politics on an explanation of the viewing experience. To do so, McLuhan depicts the televisual experience as a primarily tactile perception, one innervating a synaesthetic affect.
In this depiction, McLuhan offers a typically eccentric notion of tactility, closer to what people mean when they say they were touched by art. As McLuhan (1964: 67) remarks, ‘It begins to be evident that “touch” is not skin but the interplay of the senses, and keeping in touch or getting in touch is a matter of a fruitful meeting of the senses, of sight translated into sound and sound into movement, and taste and smell’. Since the television image is profoundly participatory and in-depth, it touches viewers by causing a sort of synaesthetic interplay among the senses. The cool medium of television involves the viewer in the image construction, engaging an in-depth interplay of all the senses. McLuhan states, ‘The TV image requires each instant that we “close” the spaces in the mesh by a convulsive, sensuous participation that is profoundly kinetic and tactile, because tactility is the interplay of the senses, rather than the isolated contact of skin and object’ (1964: 273). For McLuhan, we are ‘touched’ by the televisual image, affected by flows of sight and sound not only to see and hear but to feel and think. Needless to say, McLuhan often brings on criticisms of technological determinism with such claims. Indeed, unlike Deleuze, who describes different regimes of the cinematic image, McLuhan treats television and cinema as distinct media with dissimilar image qualities. To do so, McLuhan (1964: 273) must dismiss anything like high-definition television as simply not television: ‘Nor would “improved” TV be television. The TV image is now a mosaic mesh of light and dark spots, which a movie shot never is, even when the quality of the movie image is very poor’. McLuhan’s argument here is defensive and belies the history of cinema and television, including the advance of technology and the evolution of image forms, readily apparent from today’s perspective. In contrast, Deleuze’s emphasis on assemblages allows him to recognize different cinematic images, some of which are closer to McLuhan’s depiction of television than cinema. For instance, Deleuze (1989: 6, 59–64) describes the moments in musicals where characters break into song as pure sonsigns. Pure sonsigns are part of the time-image regime in cinema, which presents a moment in time directly, like a musical performance the audience presently enjoys (instead of a representation of a musical performance that the characters enjoy). These sonsigns are participatory, involving and non-linear, possessing the same characteristics that McLuhan attributes to the televisual image. Lest we wander off into an extended discussion of Deleuze’s cinema theory, let us summarize the conclusions of this section. Deleuze and McLuhan shared a concern with media, directing our attention to media and away from the content focus of early media studies and linguistics. Deleuze presents an advance over McLuhan, however, by conceiving media as machinic assemblages, the coupling of flows and their interruptions. Conceiving media as assemblages helps avoid some of McLuhan’s more totalizing claims about media considered as stable categories, claims often evoking accusations of technological determinism. The assemblage concept is more nuanced than simple technological determinism and, as we will see, leads directly into a more developed theory of affect and desire, whose basic precepts can be unearthed in McLuhan’s writings.
Again, however, Deleuze provides the theoretical backing for McLuhan’s probes, spelling out in more detail the scholarly task – to perform a mapping of assemblages and their modes.
Affect and desire in McLuhan and Deleuze
Affect and desire remain consistent concerns shared by Deleuze and McLuhan, concerns pointed to by McLuhan and further developed in Deleuze’s work. To illustrate these concerns for McLuhan, let’s stick with television. McLuhan (1964) seems distinctly concerned with how media alter cultural attractions and desires, claiming that, among other effects, television led to preferences for cool stars like Ed Sullivan, for skin diving and the wraparound spaces of small cars, for westerns and their ‘varied and rough textures’, for the beatnik sensibility, for football over baseball, and for different forms of fashion, literature, music, poetry and painting. The warrant for each of these claims is basically the same. Television is low-definition, fragmented and disconnected, requiring images that are iconic and in-depth. Such images demand a high level of audience participation to complete, and therefore create attractions to cool, participatory forms and qualities that allow viewers to participate in their construction. For instance, viewers must interpret a cool, rounded, diverse character instead of being directly shown how to understand an easily classifiable character. Football is a collaborative, in-depth sport whereas baseball is mano a mano, an individualized challenge of batter versus pitcher. Yet, setting aside the correctness of these claims, the point here is that McLuhan remains primarily concerned with how media spawn differences in cultural attraction and desire. Furthermore, at least with television, McLuhan attributes these changes in attraction and desire to alterations in experiential sensation, which Deleuze will describe as affects. In other words, both McLuhan and Deleuze understand desire as a production of affects that are enjoyable. Media become desirable because they produce affects such as fear, surprise and joy that ‘touch’ audiences. Such a view sees desire as a surface phenomenon, rather than resorting to a depth explanation as does psychoanalysis, which relates desire to some more fundamental longing, such as the desire to mend the split in subjectivity created by the entrance into the symbolic, or treats it as a representative of the Oedipus myth. Both McLuhan and Deleuze and Guattari (1983) stridently criticize psychoanalysis, faulting it, in part, for ignoring mediated experience. Psychoanalysis must focus on content to portray it as a representation of some more fundamental desire, thereby dismissing the production of affect as irrelevant at best, or at worst as a cover for this somehow more real source of desire. In contrast, Deleuze, Guattari and McLuhan conceive desire as machinic production that generates pleasurable affects. Such a conceptualization requires no deep mystery – unlike psychoanalysis, which often reads in a compelling manner but tends to lack any empirical support (how do we know the Oedipus myth is fundamental, universal?) – and instead allows the scholar to focus directly on media and their affect-laden experience. Yet while McLuhan senses that media spark affects that thereby alter desires, Deleuze provides further theoretical refinement of affect and desire. Following Spinoza, Deleuze and Guattari (1987: xvi) define affect as the body’s ability to affect and be affected.
Affect designates a pre-personal, embodied intensity experienced in the transition from one state to another. Affect accompanies and provides the texture for all of experience. At particularly sharp moments, we experience it as a spark or shock (see Jenkins 2014), but it persists throughout lived experience regardless of its degree of intensity. Affects are thus pre-conscious, continuous flows of intensity that accompany experience. As pre-conscious, they take place before their cognitive processing into the separate sensory channels – I saw this, or I heard this. In this sense, affects are like the synaesthetic touch depicted by Deleuze and McLuhan alike. As one of Deleuze’s interpreters and translators, Brian Massumi, explains, ‘Affects are virtual synesthetic perspectives anchored in … the actually existing, particular things that embody them’ (2002: 35, original emphasis). As intensities, affects are the flip side of the extensions McLuhan associates with media. That is, affects are the experienced impingements that rebound onto the mind-body from its mediated extensions. Without using the term affect, McLuhan evinces a similar conceptualization, especially in his retelling of the Narcissus myth. McLuhan (1964: 51) argues that Narcissus did not fall in love with himself but instead mistook the image in the water for another person. This is because Narcissus indicates a state of narcosis or numbness, and self-love does not evoke such affects. Instead, Narcissus experienced a shock from the extension of himself that he mistook for another person, and that shock sparked a physiological response of numbness, similar to battle shock or auto-amputation. As McLuhan explains: We speak of ‘wanting to jump out of my skin’ or of ‘going out of my mind,’ being ‘driven batty’ or ‘flipping my lid’… In the physical stress of super stimulation of various kinds, the central nervous system acts to protect itself by a strategy of amputation or isolation of the offending organ, sense, or function. (1964: 52) Thus Narcissus’ narcosis is a defence mechanism, and McLuhan perceives a similar defensive numbness in response to electronic media that extend central nervous systems. Just as Narcissus responds with shock and numbness to seeing himself extended, extending the central nervous system exposes and makes vulnerable that system, thereby inducing a similar narcosis. This numbness is a particularly strong affect (intensity) resulting from the mediated extension, one with potentially dire results according to McLuhan. In shock, we risk mistaking media as something other than ourselves extended and hence become ‘servo-mechanisms’ of technology (McLuhan 1964: 55). Such surrendering of ‘our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves’ leaves us without ‘any rights left’, McLuhan (1964: 73) continues. Thus for McLuhan awareness is the solution to narcosis: ‘As long as we adopt the Narcissus attitude of regarding the extensions of our own bodies as really out there and really independent of us, we will meet all technological challenges with the same sort of banana-skin pirouette and collapse’ (1964: 73, original emphasis). McLuhan describes such affects via reference to sensory ratios. He contends that when one sense faculty (like vision) is super-stimulated, human beings respond with narcosis, unable to perceive their mediated environment.
Television stresses the sense of touch to such an extent that cultural desires alter in favour of the tactile and participatory. Thus McLuhan bases his ontology upon, and begins with, a pre-organized human body, with certain sensory faculties whose ratios are re-ordered by media. Such a perspective often leaves readers wondering how McLuhan knows these changes are effected, such as in the television examples above. The equation that television is a tactile medium and thus evokes tactile desires seems too simple, and reduces tactility (and vision, hearing, etc.) to a single mode. In contrast, Deleuze, following Spinoza, begins with an ‘I do not know’, one more open to differences and the vast possibilities of becoming. Thus Deleuze repeatedly quotes the famous passage from Spinoza that reads, ‘Nobody as yet has determined the limits of the body’s capabilities: that is, nobody as yet has learned from experience what the body can and cannot do’ (1992: 105). Instead of beginning with an organized body, with particular sense organs and their faculties, Deleuze starts with the Body without Organs (BwO). The BwO is the unorganized body, prior to its extensions and couplings in machinic assemblages, the body conceived as a glutinous mass of potential rather than a solid substance and form. Deleuze thus often compares the BwO to an egg, a soup of undifferentiated cells prior to its organization into limbs, organs and the like. From the perspective of the BwO, the body has only potential affects, virtual affects, affects as yet unactualized in various assemblages. Massumi offers one of the clearest explanations: Call each of the body’s different vibratory regions a ‘zone of intensity.’ Look at the zone of intensity from the point of view of the actions it produces. From that perspective, call it an ‘organ’… Imagine the body in suspended animation: intensity = 0. Call that the ‘body without organs’…. Think of the body without organs as the body outside any determinate state, poised for any action in its repertory; this is the body from the point of view of its potential, or virtuality. Whereas McLuhan begins from bodies presumed to be structured by certain sensory organs, Deleuze begins from the BwO and asks how the body becomes organized through various machinic assemblages. Such a perspective allows Deleuze to recognize difference, to leave open the possibility for a wide variety of becomings. Rather than a visual medium necessarily producing visual ratios, affects and desires, beginning with the BwO allows scholars to recognize becomings where an eye is not just an eye, an ear not just an ear, a hand not just a hand. Massumi (1992: 93–94) offers the example of a man who wishes to become a dog who wears shoes, only to discover that, walking on all fours, he has no hand left to tie the final shoe. The man employs his mouth-as-hand, tying the shoes with his teeth, in the process of becoming this strange monster. This example may seem to lie at quite a remove from media studies, yet it is only by beginning with an ontology that conceives the body as a pool of liquid potential, rather than a pre-organized sensory apparatus, that scholars can account for the differences in the translations and actualizations of media form, such as the shift from movement-images to time-images in the cinema.
McLuhan’s ontology requires that he envision cinema as singular, a highly visual and hot medium, rather than recognizing the potential for cinema to become otherwise, to become aural, or tactile, or many other admixtures of percept, affect and cognition. Beginning with this different ontology beckons a different scholarly gesture, especially with regard to affect. In his work on Spinoza, Deleuze describes this different scholarly gesture as an ethology. An ethology does not describe bodies according to their form, function or organs, as does McLuhan, but according to their modes, that is, their manners of becoming, their capacities to affect and be affected. As Deleuze remarks, ‘Every reader of Spinoza knows that for him bodies and minds are not substances or subjects, but modes’ (1988: 123–24). Bodies are thus not bundles of sensory ratios but capabilities or capacities, such as the capacity of the hand to act like an eye. Such a perspective precludes McLuhan’s gesture, which confidently predicts the effects and affects of the senses, and instead presumes difference: we do not know what a body can become in different combinations or assemblages. The scholarly gesture changes because bodies are not conceived as organizations of form but as complex relations with other bodies, as assemblages, or as modes, the manners in which these relations become organized. As a result, the scholar sees life differently. Thus Deleuze writes: Concretely, if you define bodies and thoughts as capacities for affecting and being affected, many things change. You will define an animal, or a human being, not by its form, its organs and its functions, and not as a subject either; you will define it by the affects of which it is capable. (1988: 124) Deleuze’s advance upon, and difference from, McLuhan’s implicit notion of affect does not end here, however. Again following Spinoza, Deleuze also gives us clues into how modes produce different affects. Modes produce affect in two primary ways: by composing a relation of speed and slowness, and by composing a relation between affective capacities. In Deleuze’s terms, ‘For concretely, a mode is a complex relation of speed and slowness, in the body but also in thought, and it is a capacity for affecting or being affected, pertaining to the body or to thought’ (1988: 124). Deleuze employs the example of music to describe the relations of speed and slowness. Beginning from form and substance, one can describe a musical piece as composed of notes, arranged in a particular order. Yet such a perspective misses something fundamental in music – the rhythm and tempo. The same notes in the same order can produce widely variant songs depending upon the speed of the playing. As Deleuze (1988: 123) explains: The important thing is to understand life… not as a form … but as a complex relation between differential velocities, between deceleration and acceleration of particles… In the same way, a musical form will depend on a complex relation between speeds and slowness of sound particles. It is not just a matter of music but of how to live: it is by speed and slowness that one slips in among things, that one connects with something else. One never commences; one never has a tabula rasa; one slips in, enters in the middle; one takes up or lays down rhythms. Ethology first of all studies relations of speed and slowness, organized via modes. Second, ethology asks how the modes relate different capacities for affect.
Deleuze often employs the example of the wasp and the orchid, conceived collectively as an assemblage. The wasp’s capacity to fly, to smell and to gather pollen combines with the orchid’s capacity to flower, to produce pollen and to emit scents. In combination, the orchid reproduces and the wasp feeds, forming a complex assemblage that studying either the wasp or the orchid in isolation would miss. To return to a media example, HBO employs television’s capacity for home broadcast and episodic organization combined with cinema’s capacity for high production value and epic narrative to produce many shows that are closer to McLuhan’s depiction of cinema, yet that still take place in private, intimate locales and via the lower-definition television screen. With these shows, we have a hybrid becoming of cinema and television, a cinema made for television, that shapes different modes of production (such as elongated narratives told episodically) and consumption (40-minute viewings without commercial interruptions). In this sense, Deleuze’s conceptualization of assemblages, modes and affect is fundamentally based upon an ecological perspective, just as McLuhan repeatedly beckons for ecological thinking about media. For the media scholar, each such modal relation, each assemblage, must be mapped independently, rather than reduced to global categories such as cinema or television, as McLuhan is wont to do, since each relation of speed and slowness has, to quote Deleuze, its own ‘amplitudes, thresholds…, and variations or transformations that are peculiar to them’, and since each relation of affective capacities remains unique due to ‘circumstances, and the way in which these capacities for being affected are filled’ (1988: 125–26). Deleuze depicts these two functions of modes as a longitude (speed and slowness) and a latitude (affective capacities). Ethology entails depicting these longitudes and latitudes, drawing a map of embodied modes as they actualize from the virtual potential of the BwO. Ethology constitutes a major theoretical advance over McLuhan’s earlier probes, although one with many similarities to McLuhan’s thinking. Primarily, the advance occurs because the theory of modes and affect is more open to flexibility, becoming and the acknowledgment of difference. Instead of a pre-formed body with certain sensory ratios, beginning with the BwO allows scholars to recognize a wide variety of virtual potentials whose becomings offer more possibilities than McLuhan’s static understandings entail. Furthermore, conceiving modes as manners of relating speed and slowness and of relating affective capacities backs away from McLuhan’s more general and totalizing claims about media forms, such as television or cinema, considered as whole and static. Doing so allows us to better understand the transformations of media over time, as television becomes cinema and cinema becomes television and they both become something else. This is especially important in a digital age, which has created the capacity for any content to be translated across a wide variety of mediums. Ethology also entails a final advance over McLuhan because it offers not only a prescription for a scholarly approach but also an outline of an ethics.
Ethics and Power in McLuhan and Deleuze
The practice of ethology is based in an ethics that offers guidelines for becomings in process, not a morality that proffers proscriptions from above.
According to Spinoza, an ethical becoming or mode is one that produces joy and heals, whereas an unethical becoming produces sadness and illness. Similar modes (say, drug use or a certain sexual practice) may be ethical for some and unethical for others depending on the situation, illustrating why ethology constitutes an ethics and not a morality. Besides providing this criterion for discerning ethical versus unethical modes, the ontology behind ethology represents an ethical gesture, in part by rejecting the imperializing and totalizing gesture of morality. As Deleuze elucidates: Spinoza’s ethics has nothing to do with a morality; he conceives it as an ethology, that is, as a composition of fast and slow speeds, of capacities for affecting and being affected on this plane of immanence. This is why Spinoza calls out to us in the way he does: you do not know beforehand what good or bad you are capable of; you do not know beforehand what a body or a mind can do, in a given encounter, a given arrangement, a given combination. (1988: 125) By starting with an ‘I do not know’ about the body-mind, instead of a confident and imperializing ontology of what the body-mind can do, ethology constitutes an ethical gesture because it remains open to difference and to the possibility of things becoming otherwise. Doing so also demands that the scholar begin in the middle, amidst embodied experience and its assemblages, rather than passing moral judgement and attempting, through force of word or often law, to make worldly actualities fit into those boxes. Ethology, then, offers guidance for a mode of living, one that begins in the middle and asks what new modes can be thought and produced which might spread love instead of hate, might heal instead of make ill, might produce happiness instead of sadness. Thus ontology and ethics fuse into ethology in Deleuze in a way perhaps best described as an interology. The ethics of ethology is entirely Other-oriented, since it is based in an ontology that rests on percept and affect. Per this ontology, the becomings of humankind are no longer finished but radically open-ended. It is a matter of what assemblages or environments take humankind up, what assemblages or environments it is capable of being taken up by, what Others – human or non-human – it enters into composition with. To live in an intensive mode means to have a good encounter, to compose a good interality, to be taken up by a good assemblage, to unblock life so the mind-body can do what it is capable of doing – affecting and being affected – so it can enter into composition with what suits its nature, or what affirms its élan vital (roughly, life force). What makes humankind virtuous is precisely our radical unfinishedness, our affinity, affectability, versatility, empower-ability, extendibility, composition-ability, or assemble-ability. Horseman-armour-lance-entourage-land makes a knight assemblage, which embodies the social posture of chivalry and courtly love. Archer-bow-arrow-mark-air-distance-gravity forms either a Zen assemblage of self-cultivation and satori or an assemblage of hunting or belligerence. Fiddler-violin-serenade-night-window forms a courtship assemblage. To live an ethical life entails switching from a ‘to be’ mode to an ‘and… and… and…’ mode, that is to say, from a subject orientation to an assemblage orientation, from ontology to interology, from being to becoming. When we imagine humans as machinic assemblages, ‘I’m watching TV’ no longer makes sense because TV is me at this moment.
The person-remote-TV-couch assemblage is my mode of being, which means I’m not in another mode of being. When I multitask, e.g. when I drive a Penske on the super-highway while listening to the news on the radio plus some music on MP3 and also having a phone conversation with someone who’s trying to follow the vehicle I’m operating, I’ve composed a busy and dangerous assemblage, and invented a schizophrenic mode of being, which is nothing like the mode I’m in when I’m meditating while washing dishes. Neither assemblage is evil, but one is potentially bad and the other good. Bullfighting and petting your dog involve two very different assemblages (spectators form an important element of the former) and two very different modes of being, which should not be conflated as ‘interacting with animals’. The one catalyses the bull-becoming of the bullfighter, whereas the other catalyses the pet-becoming of the one who pets. This is not to deny the possibility of fighting the bull in a petting mode – bringing the two assemblages together makes possible a strange becoming. ‘I’ am capable of doing very different things depending on whether I’m in the ‘Penske and…’ assemblage or the dishes-water-meditation assemblage. As such, ‘I’ is more a function of the assemblage than the organizer of it. While McLuhan occasionally seems concerned with ethics, as in the earlier quotation about becoming servomechanisms of media, or in his promotion of art and games as anti-environments indispensable for awareness and survival, more often than not McLuhan’s project remains descriptive, assiduously avoiding issues of power. The chapter on TV in Understanding Media (1964), for example, speaks of TV as a new cultural ground that reconfigures people’s tastes. It is not, however, interested in the fact that many parents leave their children in front of the TV set not by choice but out of economic necessity. If there is an ethics in McLuhan, for the most part it remains implicit and ambivalent, to be derived by readers for themselves. The telos of McLuhan’s explorations is laid bare in the title Laws of Media: The New Science (1988). The subtitle gives it away; although McLuhan subtilizes what he means by ‘science’ so that it is synonymous with what Vico means by ‘poetic wisdom’, it is nevertheless a descriptive enterprise, rather than an explicitly ethical and creative one. If there is an ethics in McLuhan, it is a power-blind ethics that ends with understanding – like the artist, the critic’s role is to promote understanding so we can change course. Thus the scholar’s task becomes merely descriptive, an attempt to promote understanding. Yet this purely descriptive enterprise is surprisingly humanist and elides power, since it fails to attend to the assemblages of understanding and description, including the scholar’s role in regimes of power. In short, McLuhan fails to comprehend power-knowledge as a machinic assemblage, one that enables and disables certain forms of understanding, description and awareness. This descriptive project leaves McLuhan without a politics or ethics, unlike Deleuze, who makes these concerns front and centre. For instance, McLuhan (1964: 199) contends that ‘the Gutenberg technology and literacy… created the first classless society in the world’. For ‘[t]he highest income cannot liberate a North American from his “middle-class” life. The lowest income gives everybody a considerable piece of the same middle-class existence’ (McLuhan 1964: 199).
The way McLuhan (1969: 140) sees it, Marxists were thus wholly misguided because they did not understand the media environment: ‘The Marxists spent their lives trying to promote a theory after the reality had been achieved. What they called the class struggle was a spectre of the old feudalism in their rear-view mirror’. Sociological realities of the time and of the present day flatly contradict McLuhan’s claims, and ignoring the realities of economic inequality and oppression leaves any critical philosophy without an ethical and political grounding. Consider, in contrast, Deleuze’s treatment of class and capitalism. Deleuze, with Guattari, ponders how people can desire fascism, and calls for a close examination of power in connection with desire, thus extending Marx’s work. In A Thousand Plateaus (1987), while promoting nomadism as an ethical posture for the multitudes, he and Guattari also suggest that capitalism itself has operated as a nomad war machine that betrays and dominates society. In Anti-Oedipus, just before the extended quotation above with the reference to McLuhan, Deleuze and Guattari (1983: 240) point out: ‘Capitalism is profoundly illiterate’. Following a non-linear, acoustic, disorganized organizational pattern, capitalism has made of the world a smooth space for itself, a control society for the multitudes, and a miserable place for millions of people. In short, Deleuze is directly concerned with power and ethics, and this concern represents his final advance over McLuhan, one that is necessary to make any revived media ecological scholarship critical and relevant to the challenges facing our world. Deleuze’s concern with power is evident in the tonality of his concepts, such as striated space, smooth space and the control society, as distinguished from McLuhan’s descriptive, apolitical terms, such as visual space, acoustic space and the global village. McLuhan describes visual and acoustic spaces as products of media, whereas Deleuze understands them as elements of assemblages that can always break down or reverse into their opposites. While McLuhan says the phonetic alphabet and print media create visual space, Deleuze recognizes that ‘visual space’ can come in a wide variety of assemblages, or arrangements of power. This notion of McLuhan’s therefore makes a ‘badly analysed composite’, since it is too homogenized, too inattentive to the multifaceted, heterogeneous assemblages of media and power. Thus Deleuze suggests that we use the analytically more rigorous ‘striated’ and ‘smooth’ spaces, the implication being that a visual space can be striated or smooth depending on the actual assemblage. For example, McLuhan would say that the cityscape of Manhattan, having been rationally laid out, makes a visual space, and that electronic media turn it into an acoustic space, an echo chamber, Times Square being an arch example. Deleuze would say that the grid-like cityscape of Manhattan enacts state power and makes a striated space typical of a disciplinary society, which is behind us. Disney World better represents the new spaces of control society, giving people the semblance of freedom within a framework in which space and time are regulated in a far more intricate way, making a striated space typical of a control society. In short, McLuhan tends to depict social changes as exclusively the product of media, leaving little influence for the changes that come with different power dynamics.
Visual space does not necessarily create unawareness and oppression any more than acoustic space creates community. Instead, for Deleuze, any becoming is always a risky operation. What promises to be a breakthrough may turn out to be a breakdown. A line of becoming may end up being a line of micro-fascism. These are concerns starkly missing in McLuhan. Although McLuhan recognizes, in Laws of Media, that media often reverse into their opposites, his inattention to the interaction of media and power leads him to downplay these possibilities. Likewise, McLuhan seems unconcerned with alternatives that might develop new ways of thinking or new tactics for resistance. If power is the result of changes in media, then attacking or resisting power is a misguided effort. Seeing power and media as a more complex assemblage, following Deleuze, entails a different scholarly and ethical task. Deleuze’s analytics of power encompasses and entails a poetics of active power, which is synonymous with the techné of life, or the practice of the self as an ego-less, non-organic, machinic assemblage. It inspires us to imagine resistance as none other than the affirmation of élan vital, the mapping of lines of flight, the invention of new possibilities of life – an active operation that betokens the innocence of becoming. As such, resistance precedes (reactive) power (which striates the life world and blocks becoming). It is self-defeating to imagine resistance as derivative of, as a reaction against, (reactive) power. The telos of resistance is the free spirit, one that inhabits striated spaces in an imperceptible, smooth mode, that accomplishes becomings regardless of control, that opens up conditions for different becomings. As a result, the (ethical) task for the scholar radically shifts in the move from McLuhan to Deleuze. McLuhan primarily envisions his role as providing a descriptive account of the media environment in order to raise awareness that might provide a better social map, a role he compares to that of the artist. For Deleuze, in contrast, the scholarly task does not end with mapping but must include a creative gesture, must seek to create new concepts for thinking new modes of existence. Basically, he asks: in a late capitalist or control society, how do we make becoming otherwise possible? McLuhan’s faith in awareness as a solution, by contrast, smacks of a naïve humanism impossible to adopt within Deleuze’s interology, and is ironic given McLuhan’s simultaneous call for ecological thinking. The ethical and hence scholarly task remains not just to analyse media or machinic environments so that we have a better understanding, since this very notion of understanding elides what mode of understanding, what assemblage of knowledge, this understanding finds uptake in. As Deleuze surely ascertained from Foucault, knowledge only ever exists in an assemblage with power: power/knowledge. Thus, following a Deleuzian ethology, the ethical and scholarly task becomes creative as well as descriptive – to invent new concepts fruitful for different modes and different assemblages. Much of McLuhan’s work contributes to this creative fruiting of concepts, yet by ending with the descriptive and downplaying the issues of power and ethics, McLuhan remains an insufficient precursor to Deleuze’s more robust and elaborated alternative.
Conclusion
Although scholarship on McLuhan and Deleuze has been proliferating, efforts to render visible the implicit resonances between the two are still scanty.
This article has been called forth by this gap. We have suggested that Deleuze can be read as an heir of McLuhan, as likewise a theorist of media guided by ecological thinking. Yet our understanding is that Deleuze was inspired by McLuhan but not constrained by him. Instead, Deleuze always transforms McLuhan’s insights even as he uses them. If McLuhan is poetic, provocative and full of potentials, Deleuze allows those potentials to come to fruition with his rigorous theorizing. Among all the resonances that can possibly be articulated, we have foregrounded three closely interconnected ones, restated below. First, McLuhan’s understanding of media as extensions of humans often treads near the trap of technological determinism, despite McLuhan’s more complex understanding of media as ground and formal cause. McLuhan’s suggestive, heuristic style of writing only aggravates the situation. Deleuze absorbs the thrust of McLuhan’s understanding but completely reverses the point of departure. His notion of machinic assemblage is centered neither on the human nor on technology, giving determining priority to neither. Rather, the assemblage comes first. Second, whereas McLuhan coaches us to attend to percept, affect and shifts in people’s taste by bringing into focus the human-technology interface, Deleuze enables us to home in on issues of desire, since his machinic assemblage is also a desiring machine, a plane of immanence in which desire is produced and circulated. McLuhan lacks the rigorous theorizing of desire and affect developed in Deleuze, based in the uplifting Spinozan notion of affect as a matter of affecting and being affected. Third, Deleuze’s notion of assemblage also entails an ethics – one based on ethology and interology – and an understanding of and posture towards power. To be ethical means to care about what assemblages to enter into, to organize one’s encounters, to map out new possibilities of life, to enhance one’s capacity to affect and be affected. In McLuhan’s work, the concern with ethics is implicit and ambivalent, and issues of power are elided, as required by his pseudo-scientific and descriptive scholarly endeavour. In contrast, Deleuze calls the scholar to the creative task of creating new concepts for thinking new, healthy modes of life. Lastly, we’d like to reiterate that Deleuze’s ontology is an open ontology, an interology, one that befits the radically unfinished form of life known as humans. If the point of philosophizing is to contribute adequate concepts, then Deleuze has transformed a whole volley of McLuhan’s suggestive, ethically ambivalent probes into concepts and ethical precepts useful for understanding – and perhaps changing – the mediated environment in which we all float.
Note: 1. In Anti-Oedipus, Deleuze and Guattari favorably cite McLuhan’s insight that the content of any medium is another medium (Deleuze and Guattari 1983: 240–41). For references to Mumford, see Deleuze and Guattari 1987: 428, 457. For references to Virilio, see Deleuze and Guattari 1987: 231, 345, 395–96, 480, 520–21.
References
Cavell, R. (2003), McLuhan in Space: A Cultural Geography, Toronto: University of Toronto Press.
Deleuze, G. (1986), Cinema 1: The Movement-Image, Minneapolis: University of Minnesota Press.
—— (1988), Spinoza: Practical Philosophy, San Francisco: City Lights Books.
—— (1989), Cinema 2: The Time-Image, Minneapolis: University of Minnesota Press.
—— (1990), The Logic of Sense, New York: Columbia University Press.
—— (1995), Negotiations: 1972–1990, New York: Columbia University Press.
—— (2005), Francis Bacon: The Logic of Sensation, Minneapolis: University of Minnesota Press.
Deleuze, G. and Guattari, F. (1983), Anti-Oedipus: Capitalism and Schizophrenia, Minneapolis: University of Minnesota Press.
—— (1987), A Thousand Plateaus: Capitalism and Schizophrenia, Minneapolis: University of Minnesota Press.
Deleuze, G. and Parnet, C. (2002), Dialogues II, London: Continuum.
Jamieson, K. H. (1990), Eloquence in an Electronic Age: The Transformation of Political Speechmaking, New York: Oxford University Press.
Jenkins, E. (2014), Special Affects: Cinema, Animation, and the Translation of Consumer Culture, Edinburgh: Edinburgh University Press.
Massumi, B. (1992), A User’s Guide to Capitalism and Schizophrenia: Deviations from Deleuze and Guattari, Cambridge: The MIT Press.
—— (2002), Parables for the Virtual: Movement, Affect, Sensation, Durham: Duke University Press.
May, T. (2005), Gilles Deleuze: An Introduction, New York: Cambridge University Press.
Suggested citation: Jenkins, E. and Zhang, P. (2016), ‘Deleuze the media ecologist? Extensions of and advances on McLuhan’, Explorations in Media Ecology, 15: 1, pp. 55–72, doi: 10.1386/eme.15.1.55_1.
Eric S. Jenkins is Assistant Professor of Communication at the University of Cincinnati. He is author of Special Affects: Cinema, Animation, and the Translation of Consumer Culture as well as numerous articles in national and international journals. His research focuses on the interaction of media and consumerism. Peter Zhang is Associate Professor of Communication Studies at Grand Valley State University. He is author of a series of articles on media ecology, rhetoric, Deleuze, Zen and interality. He has guest-edited a special section of China Media Research and guest co-edited two special sections of EME. Currently he is spearheading a second collective project on interality.
by Steven Craig Hickman
I’ve been reading Niklas Luhmann’s works for a couple of years now and have slowly incorporated many of his theoretical concepts into my own sociological perspective. Along with Zygmunt Bauman, I find Luhmann’s theoretical framework one of the most intriguing in that long tradition stemming from Talcott Parsons, one of the world’s most influential social systems theorists. Of course, Luhmann in later years would oppose his own conceptual framework to that of his early teacher and friend. Against many sociologists, especially those like Jürgen Habermas who developed and reduced their conceptual frameworks to human-centered theories and practices, Luhmann developed a theory of society in which communication was central. He did not exclude humans per se, but saw that within society humans had over time invented systems of dissemination that did not require the presence of the human element as part of their disseminative practices. We live amid impersonal systems that are not human but machinic entities, which communicate among themselves more readily than with us.
Instead of stratification and normative theories codifying our personal relations within society, Luhmann advocated a functionalism that dealt with these impersonal systems on their own terms rather than reducing them to outdated theories based on morality and normative practices. For Luhmann, we continue to reduce the social to an outdated political and moral dimension that no longer understands the problems of our current predicament. In fact these sociologists do not even know what the problem is, or how to ask the right questions, much less what questions to ask. Luhmann was one of the first, and definitely not the last, sociologists to decenter the human from society. The notion of the social grounded in the human actor was replaced by communication itself. Luhmann himself saw his theories as forming a new Trojan horse: “It had always been clear to me that a thoroughly constructed conceptual theory of society would be much more radical and much more discomforting in its effects than narrowly focused criticisms—criticisms of capitalism for instance—could ever imagine.” His reception in the North American academy has been underwhelming, according to Moeller, because he couched his terminology in the discourse of Habermas and the sociologists of his day in Germany. Over and over, Foucault spoke of the conformity to discourse that scholars were forced to inhabit in order to be read as legitimate sources of scholarship. Yet, as Moeller tells it, Luhmann hoped to hide his radical concepts in plain sight, even within the discourse of his day: “Luhmann ascribes to his theory the ‘political effect of a Trojan horse.’ – Luhmann openly admits to his attempt to smuggle into social theory, hidden in his writings, certain contents that could demolish and replace dominating self-descriptions, not only of social theory itself, but of society at large.” (Moeller, KL 223) Recently I’ve been reading Luhmann’s The Reality of Mass Media, which was published late in his life in 1996 (Luhmann died in 1998). I’ve yet to work through his magnum opus, the Theory of Society, of which only two volumes – one on society (in two parts) and one on religion – were completed, the rest left unfinished at his death. But this book introduces many of the basic themes of Luhmann’s theoretical framework: the functional differentiation of modern society; the differing formations – law, religion, mass media, etc. – that constitute the communicative operations which enable the differentiation and operational closure of the system in question; reflexive organization – autopoiesis (Maturana) – and second-order observation, or the observation of observation. What interests me in this work is how it touches base with current media theory from McLuhan, Innis and others, as well as the specific notions surrounding his use of what he termed ‘cognitive constructivism’. Obviously, notions of the mass media as the purveyor of reality for society touch on the traditions of propaganda, public relations and social constructivism, and on all those Kantian notions and traditions from Vico onward that developed theories of how societies invent reality through various systems, myths, ideologies, etc.
Luhmann considered himself a radical “anti-humanist”, not in the sense of some Nietzschean overreaching of the human toward an Übermensch, but rather in the form of an inhumanism that decenters human agency from its primal place in the cosmos as something exceptional, distinct and superior to other creatures, and instead situates the human back within the natural realm, on an equal footing with all beings on this planet and in the cosmos. As Moeller observes, “a radically antihumanist theory tries to explain why anthropocentrism—having been abolished in cosmology, biology, and psychology—now has to be abolished in social theory. Once this abolition has taken place, there is not much room left for traditional philosophical enquiries of a humanistic sort” (Moeller, 6). For him, theory develops systems that are both anti-foundational and operationally closed, yet open to the observation of observation – a second-order reflexivity that takes into account nontrivial or complex systems which, being in a system-environment relation, are open to mutual resonance, perturbation and irritation (Moeller, 7). This brings us to his use of the concept of distinction, which, for Luhmann, was neither a principle, nor an objective essence, nor even a final formula (telos), but was instead a “guiding difference which still leaves open the question as to how the system will describe its own identity; and leaves it open also inasmuch as there can be several views on the matter, without the ‘contexturality’ of self-description hindering the system in its operating” (Luhmann, 17). I have to admit it was through Levi R. Bryant, in his The Democracy of Objects and on his blog Larval Subjects, that I first heard of Niklas Luhmann. In Chapter 4 of that book, “The Interior of Objects”, Levi goes into detail about Niklas Luhmann and his theories as they relate to his own version of Object-Oriented Ontology – or what he now terms Machine Ontology, or Onto-Cartography. In one of his blog posts, “Laruelle and Luhmann”, Levi makes an acute observation on Luhmann’s conception and use of distinction, comparing it with Laruelle’s notion of distinction as decision, in which every philosophy, as contrasted with non-philosophy, starts with a decision “that allows it to observe the world philosophically”. Non-philosophy, rather than starting with a decision, observes the distinctions philosophers use, in order to understand how those distinctions actually structure the world each philosopher describes. Be that as it may, what Luhmann’s distinction does, according to Levi, is that it “allows an observer to observe a marked state as the blind spot of the observer. Every observation implies a blind spot, a withdrawn distinction from which indications are made, that is not visible to the observer that observes. The eye cannot see itself seeing.” Take, for example, my friend R. Scott Bakker’s notion of Blind Brain Theory, in which we are blind to the very processes that shape and form our thoughts and reality, yet we observe in a fashion that neglects this fact, never knows of this blindness, and believes it has all the information it needs to understand and communicate effectively about itself and reality. What intrigues both Laruelle and Luhmann is this second-order reflection of the observer observing the observer. As Levi explains it: “Observing the observer” consists in investigating how observers draw distinctions to bring a world into relief and make indications.
Were, for example, Luhmann to investigate philosophy from a “sociological” perspective, his aim wouldn’t be to determine whether Deleuze or Rawls or Habermas, etc., was right. Rather, he would investigate the distinctions they draw to bring the world into relief in the particular ways unique to their philosophies. In other words, he would investigate the various “decisional structures” upon which these various ways of observing are based. In another work on Luhmann, Niklas Luhmann’s Modernity: The Paradoxes of Differentiation, in a chapter titled “Injecting Noise into the System”, Rasch notes that contingency, a concept Luhmann used often, meant quite simply the “fact that things could be otherwise than they are”; and things can be otherwise than they are because “things” are the result of selection (Rasch, 52).3 This notion of selection is, if not equivalent to making a distinction, at least a necessary qualification of that concept. All systems are observable because all systems are formed by distinctions, and these distinctions operate on the elements of a system whether that system is conscious or not, since systems operate by distinctions and can decide or choose among the alternatives those distinctions establish (Rasch, 52). The information generated by the system being observed is contingent, because other distinctions could have been made, producing different information based on choice or exclusion; it is also based on an enforced selection under time constraints, which affords a view onto the complexity of the system being observed, thereby producing meaning (Rasch, 53). The chain of complex information observed within this system is the communication of that system. As Luhmann would observe, the moment a system communicates its information, that information becomes non-information and cannot be reproduced in the same way again. He makes the observation that in our modern hypermedia infotainment society, the observation of news events now occurs simultaneously with the events themselves. In an accelerated society one never knows what the causal order is: did the event produce the communication, or did the communication produce the event? It’s as if the future were being produced in a reality machine that communicates our information simultaneously and for all time. The desuturing of history from effective communication is rendering our society helpless in the face of events. More and more we have neither the time to reflect nor the ability to observe; instead we let these autonomous systems do our thinking for us while we sift through the noise of non-information as if it were our reality. Corporate media, in their search for the newsworthy, end up generating reproductions of future uncertainties – contrary to all evidence of continuity in the world we know from daily perceptions (Luhmann, 35). Instead of news we get daily reports, repeatable sound bites of events that can be rewired to meet the particular ideological needs of the reporters as they convey their genealogy of non-informational blips as if it were news of import. The redundancy of non-informational reports is fed into the stream through a series of categories – sports, celebrities, local and national events, politics, finance, etc. – as the media reweave the reality stories of the day through an invisible ideological conveyor belt as if for the very first time.
As Luhmann tells us, the “systems coding and programming, specialized towards selection of information, causes suspicion to arise almost of its own accord that there are background motives at work” (Luhmann, 38). Corporate media thrive on suspicion, on the paranoia they generate through gender, race, political, religious, national and global encodings/decodings, selected not for their informational content but for their contingent production of future fears. The smiley faces of corporate reporters provide us with communications that generate a pleasing appearance, by which the individuals who cross the media threshold conceal themselves from others and therefore ultimately from themselves. The façade of truth bearing witness becomes the truth of a witness bearing the façade. “The mass media seem simultaneously to nurture and to undermine their own credibility. They ‘deconstruct’ themselves, since they reproduce the constant contradiction of their constative and their performative textual and image components with their own operations” (Luhmann, 39). The news media, instead of giving us the world as it is, provide us with new realities supported by the endless operations of selective ideological algorithms that filter the vast datamix of information into non-informational contexts, presented to the unsuspecting viewer’s eyes and ears as if it were immediate news rather than the façade of lost information. In a final insight Luhmann tells us that no autopoietic system can do away with itself. And in this, too, we have confirmation that we are hoodwinked by a specific problematic related to a system’s code. As he says, the “system could respond with its everyday ways of operating to suspicions of untruthfulness, but not to suspicions of manipulation” (Luhmann, 41). There is always that blind spot in your mind that sees the ideological subterfuge but never notices that you were complicit in feeding the very system that now entraps you. Being blind to its very nature, you assume valid information when in fact all you have is the non-informational blips of a cultural matrix out of control. The datahives of capitalism churn away, collecting neither news nor truth but rather the informational bits that make up your onlife for future modes of economic tradecraft, even as your panic flesh is made obsolescent just like all the other commoditized bits in the lightbins of our global corporatist state(s). Even the Snowdens of our world are but a trace of a trace lost in the paranoiac ocean of non-information. The secret worlds below the datahive hum away like the remembrance of bees that no longer pollinate. Living in a blipworld, we hide from ourselves in endless chatter, noise of noise that means only one thing: the death of our humanity. As Luhmann asks: “How is it possible to accept information about the world and about society as information about reality when one knows how it is produced?” (Luhmann, 122) Knowing that we live in a constructed tale narrated not by us but by the machinic code of machinic minds, what is left of reality, anyway?
1. Moeller, Hans-Georg (2011), The Radical Luhmann, New York: Columbia University Press, Kindle edition, locations 216–18.
2. Niklas Luhmann, The Reality of Mass Media, Stanford, CA: Stanford University Press, 2000.
3. William Rasch, Niklas Luhmann’s Modernity: The Paradoxes of Differentiation, Stanford, CA: Stanford University Press, 2000.