by Achim Szepanski
It was one of Leibniz's intentions to show that the world is made up of elemental automata that he called "monads." Monads, each of which consists of aggregates and forms complex automata, can be unfolded further and further into Leibniz's infinitely small, from the organism down to the elementary particles, without ever arriving at a final atom. And the monad is always coded, insofar as the entire present, past and future is inscribed in it. If today's cellular automata are to function like a universal Turing machine, then for progress-minded authors who discuss Leibniz in terms of atomism rather than, like Deleuze, in the context of the fold, this means nothing less than that the world's organizing principle is that of the computer and its progressive capacities, in which Moore's Law constantly pushes the progression forward, so that every technical problem can be overcome again and again. The smallest calculating machines today contain a great number of electronic components, but it is becoming increasingly difficult to guarantee the power supply of these digital machines and to control excessive heat generation. Thus the biological model, which operates under very different conditions, remains unmatched.
According to Klaus Mainzer, cellular automata consist of "checkerboard-like grids, whose cells change their states (e.g., the colors black or white) according to selected rules and thereby depend on the color distribution of the respective cell environments" (Mainzer 2014: 25). They are mostly two-dimensional grids (one- or higher-dimensional grids are also possible) whose cells take discrete states that can change over time.
All cells must be identical, so they behave according to the same rules. In addition, since all rules are executed stepwise and discretely, the automaton network operates synchronously and in clocked steps. Each cell relates to its neighboring cells according to certain rules by comparing its own state with those of the other cells at each clock cycle, in order to calculate its new state from this data. The state of a cell at time t thus results from its own state and the states of its neighboring cells at time t-1, and those neighbors are in turn connected to further neighboring cells. Cellular automata are thus characterized by their interactive dynamics in time and space.
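The synchronous update rule described above – each cell computes its state at time t from its own state and its neighbors' states at t-1 – can be sketched with a minimal one-dimensional automaton. The choice of Wolfram's elementary Rule 110 and the periodic (wrap-around) boundary are illustrative assumptions, not taken from the text:

```python
# Minimal 1D cellular automaton: every cell updates synchronously from
# its own state and its two neighbors at the previous time step.
# Rule 110 is used here purely as an illustrative rule table.

def step(cells, rule=110):
    n = len(cells)
    new = []
    for i in range(n):
        # neighborhood at t-1: left neighbor, self, right neighbor
        # (periodic boundary: the lattice wraps around)
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        new.append((rule >> index) & 1)  # look up the new state in the rule table
    return new

cells = [0] * 15 + [1] + [0] * 15  # single "black" cell in the middle
for t in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The same scheme generalizes to two or more dimensions simply by enlarging the neighborhood over which each cell reads its inputs.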
In the universe of cellular automata, space is a discrete set of cells (a chain or lattice) that have a discrete number of possible states and that transform and update in discrete time steps. A cellular automaton can finally be specified as follows: 1. cell space: size of the field and number of dimensions; 2. boundary conditions; 3. neighborhood and radius of influence of the cells on each other; 4. set of possible states of a cell; 5. neighborhood change and self-change of the cells; 6. environment function, which indicates to which other cells a cell is connected. With the help of powerful computing resources, the pattern development of future generations of cells can then be simulated. According to Mainzer, the cells of a cellular automaton, reacting to their respective environment, behave like a swarm intelligence (ibid.: 161). Such pattern formation of cellular automata can be modeled with the help of differential equations.
A configuration of cells is considered stable if and only if it matches its successor; it disappears in the next generation if all its cells end up in the white state. Mainzer writes: "In this case, the entire system is destabilized. It could be said that two dead isolated cells are brought to life through coupling and diffusion. The awakening to life becomes precisely calculable." (Ibid.: 138) Mainzer presumably assumes that life could have come about through such a simple process, "through coupling and diffusion," although we are not yet dealing with the self-preserving "exchange of substance and energy" of an organism. Nevertheless, there does seem to be "a definite reaction-diffusion equation" that has "a limit cycle, that is, an oscillating solution" (ibid.: 139). If limit cycle means something like an inside-outside difference, then we actually have an indication of the spontaneous emergence of life here. The term "oscillation," according to Mainzer, refers to the "cycle of a metabolism." And Mainzer notes that we are leaving the realm of entropy here.
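The stability criterion – a configuration is stable if and only if it equals its own successor, and it has died once every cell is white – can be illustrated with Conway's Game of Life, used here only as a familiar example rule (it is not the specific automaton Mainzer discusses); the toroidal grid is likewise an illustrative assumption:

```python
# Stability in a 2D cellular automaton, illustrated with Conway's
# Game of Life. A configuration is stable iff it equals its own
# successor; it has "died" when every cell is in the white (0) state.

def life_step(grid):
    rows, cols = len(grid), len(grid[0])

    def neighbors(r, c):
        # count live cells in the Moore neighborhood (torus boundary)
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return [[1 if (grid[r][c] and neighbors(r, c) in (2, 3))
                  or (not grid[r][c] and neighbors(r, c) == 3) else 0
             for c in range(cols)] for r in range(rows)]

def is_stable(grid):
    return life_step(grid) == grid

block = [[0, 0, 0, 0],          # the 2x2 "block" is a still life:
         [0, 1, 1, 0],          # it matches its own successor
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
pair  = [[0, 0, 0, 0],          # two isolated cells have too few
         [0, 1, 1, 0],          # neighbors and vanish entirely
         [0, 0, 0, 0],
         [0, 0, 0, 0]]

print(is_stable(block))                                      # True
print(all(c == 0 for row in life_step(pair) for c in row))   # True
```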
The disorder or dissolution of a closed system can possibly be compensated by a specific opening of the system, in which the destruction is countered by an incomplete new regulation, by a deviation, which can in turn lead to order again. The times of the destruction of order (thermodynamics) and the times of the composition of parts (negentropy) are integrated into the thermodynamics of open systems (Serres 1994: 103). Serres speaks of life as a multi-faceted, polychronic process, bathing as a syrrhesis in the flow of several times: Bergson's duration, Prigogine's deviation between reversibility and irreversibility, Darwin's evolution, and Boltzmann's disorder. Finally, with Serres' conception of an open polychronology, it can be summed up that the idea that one could coherently describe the complex forms of the universe with the help of the logic of cellular automata is only another variation of the philosophical decision that claims the possibility of capturing the infinity of data in a scientifically consistent theory. But things are not that easy. Let us recall the distinction between smooth and striated spaces made by Deleuze/Guattari in A Thousand Plateaus. Some networks operate less in the smooth mode; they are more strictly striated. And striated networks can be compared to the logic of cellular automata or cellular spaces. Such networks were described in the 1950s by scientists such as John von Neumann and Nils Aall Barricelli. Cellular spaces, be they the lattices mentioned above or elastic topologies, always contain clear distinctions between links and nodes as well as between one node and another. Such networks can be contrasted with non-cellular spaces such as Konrad Wachsmann's »grapevine structures« or the works of the architect Lebbeus Woods, in which there is no clear separation between links and nodes or between the nodes.
Instead, the smooth form dominates here, governed by various "logics": hydraulics, metallurgy, and pure difference in motion, flow, and variation (see Deleuze/Guattari 1992: 409). However, it has already been pointed out elsewhere that this type of multilateral, dynamic, nomadic network, which Deleuze/Guattari call a "rhizome," may well be compatible with the volatile processing of money capital, which circulates between virtualization and actualization.
Let us briefly look at new trends in automation, which the feature pages today summarize under the label "Industry 4.0". Following a linearly conceived genealogy of the history of technology, the first epoch of industrialization begins with the steam engine, which is replaced by the epoch of electrification and Taylorization, and this in turn by the third epoch of digital automation and the introduction of robots. With the real-time online networking of machines (cyber-physical systems), the fourth stage of the industrial revolution has today allegedly been reached. "Cyber-physical systems" organize themselves largely autonomously via intelligent interfaces, for example as "smart grids" or "smart cities".
The term "Industry 4.0" thus refers to machine complexes that are networked online without exception and usually process within an internal and, of course, secure production network. One now speaks of the "Internet of Things". It can be assumed that in the "smart factories" of the future mainly networked machines will be active, which also largely control themselves within the networks. All the machines of a company are then online, and every single machine part, once equipped with sensors, RFID chips and specific software functions, communicates not only with other machine parts but also, along certain lines, with the areas of management, transport and logistics, sales and shipping. It is no longer the computer but the IT networks themselves that are growing together with the physical and immaterial infrastructures, and this happens beyond the respective production sites, including the external environment of the companies, vehicles, home appliances or supermarket shelves. Even today, infrastructure tasks and the functioning of logistics are so complex that "cyber-physical systems" are absolutely necessary for the self-organization and automation of logistics, supply, health and transport systems. In the process, local places like factories are tapped as databases, but the arithmetic operations themselves usually take place in remote locations. In any case, computing operations are migrating from isolated computers to networked environments, where they process on the basis of sensor data and technologies that not only collect and distribute data but also use them to calculate future events by means of algorithmic techniques. In addition, the machines are to be permanently addressable by means of RFID tags and to seek their own paths through the global logistics chains, analogous to packet-switched networks. The storage, evaluation and indexing of the data take place in the so-called clouds of the data centers.
The online-integrated human actors will communicate directly with the machines: the machines tell the actors what to do and, vice versa, the actors give the machines orders. It could also be that similar modulations, relations and functions emerge in the internal networks of the companies as in Web 2.0. In principle, any access to the machine complexes could take place in real time; every machine is permanently reachable, and every machine can send signals on the spot, just in time and as needed. With Industry 4.0, customer-oriented production becomes possible: production is "on demand", i.e. tailored to individual consumers. Sensors capture gigantic amounts of data ("big data") in order to observe and control the production processes. At this point, however, the immediate question arises how the storage and access rights to the data are to be determined in the future. If "big data" migrates into the factory halls, then the comparatively transparent production processes also facilitate manipulation, so that the highly complex systems become even more sensitive to disturbances: local irregularities can cascade in the sense of chaos theory.
As modular components of networks, human and non-human agents are integrated via the online mode into the dynamics of permanent business communication. The distinction between analog (carbon-based, offline) and digital (silicon-based, online) areas is breaking up, with the latter overflowing into and mixing with the former. These phenomena are known as ubiquitous computing, ambient intelligence or the Internet of Things. The information theorist Luciano Floridi speaks here of a ubiquitous »onlife experience«, of a drift into the post-human or the inhuman, in which the boundaries between the human, technology and nature are blurred, and further of a shift from scarcity to an overflow of information, from entities to processes, and from substances to relations (see Floridi 2013). All of this involves new forms of control and power that run along multiple dimensions and lines, including corporate, technological, scientific, military and cultural elements.
However, it is also necessary to put the term "Industry 4.0" into perspective. From the very beginning, microelectronics has been bound up with the revolutionizing of production and distribution, for example through computer-aided design (CAD) or the various control technologies in industry. And the associated increase in productivity ran through all sectors of the economy, be it industrial production, agricultural production or raw material extraction, including the non-productive sectors of the state.
The accelerated growth driven by computer and information technologies is called "singularity" in scientific circles, but a distinction has to be made between technical and economic singularity. The economic singularity is measured by the general development of the substitutability between information and conventional inputs. The economist William D. Nordhaus has pointed out in a recent study that economic growth depends on how much material can be replaced by electronics. In addition, at the macroeconomic level, account must be taken of the fact that higher productivity leads to lower prices, with the result that an increasing share of the high-productivity sectors in the total can only be demonstrated if their increase in volume overcompensates for the fall in prices. Nordhaus also investigates, at the organizational level, the question of the permanent substitutability of certain factors of production by information (see Nordhaus 2015). He shows that this has not been the case in the past and predicts only a slow development towards the economic singularity for the 21st century, although capital intensity (the capital stock relative to labor input) will continue to increase, and hence also the share of information capital. Nevertheless, digital information and communication technologies in the new millennium can be credited with having helped establish a new level of productivity in production and distribution: by streamlining and accelerating trade between companies, and through rationalization processes in the global supply chains and in the supplier industry that enable the reorganization of areas such as architecture, urban planning, health care, etc. The software with which management methods, derivatives and digital logistics are processed may well be understood as a basic innovation that is integrated into a new techno-economic paradigm (see Perez 2002: IX).
In the context of the neo-imperialism of the world's leading industrialized countries, 4.0 industries will in future be needed to secure their competitive advantages. So it is no coincidence that German scientists constantly point out that the industrial Internet can provide a central locational advantage for both Germany and the European Union. Building on a world-leading mechanical engineering, automotive and supplier industry, the aim is to rapidly develop new technologies that can connect factories, energy, transportation and data networks on a global scale. The state, too, is expected to provide intensive research and development programs to secure this locational advantage.
This requires a sophisticated and complex logistics industry. Logistics is a sub-discipline of operations management; it quickly gained importance as a result of containerization and its integration into the valorization chains of global capital. In globalized value chains, concentrating on the product, on its efficiency and quality, is increasingly losing importance; instead, capital valorization proceeds more along abstract lines that process in spirals and cybernetic feedback loops, which in turn are integrated into ubiquitous digital networking. Companies like Google and Amazon play an increasingly important role in that the relation between production and consumption is weighted more strongly than the product, while at the same time horizontality and verticality are mixed in a unique way in the operational processes. Verticality refers to the line manager who has to operate or supervise the "algorithmic" metrics and rhythms along which the error rate and slowness of human decision-making are to be eliminated. The line manager works along the operational lines; his function in the production processes is chiefly that of an enforcement agency, which gives commands while at the same time the rhythm of the production processes operates through him. After all, management seems to be mainly concerned with protecting the algorithmically organized production processes from the resistance of the workers. And finally, working on the line also means working in the "progressive" mode: constantly improving the line, expanding it, adding to it, in order to set a new line. This is also the new role of senior management, which knows no managers but only "leaders".
Deleuze, Gilles / Guattari, Félix (1974): Anti-Oedipus. Capitalism and Schizophrenia 1. Frankfurt/M.
– (1992): A Thousand Plateaus. Capitalism and Schizophrenia. Berlin.
– (1996): What Is Philosophy? Frankfurt/M.
Floridi, Luciano (2013): The Philosophy of Information. Oxford/New York.
Mainzer, Klaus (2014): The Calculation of the World. From the World Formula to Big Data. Munich.
Serres, Michel (1991): Hermes I. Communication. Berlin.
– (1993): Hermes IV. Distribution. Berlin.
– (1994): Hermes V. The Northwest Passage. Berlin.
– (2008): Clarifications. Five Talks with Bruno Latour. Berlin.
translated by Dejan Stojkovski
by Max Haiven
Rethinking agency in an AI-driven world – as the AMBIENT REVOLTS conference sets out to do – critic Krystian Woznicki interviews the social thinker Max Haiven about seminal notions of agency under AI-driven financialization.
Krystian Woznicki: As you repeatedly point out in your book “Art after Money, Money after Art”, agency cannot be considered an individual matter but should be understood as a collective one – a social matter, that is. Why does individual (political) action still matter, and why, nonetheless, should we not limit our imaginative horizon to the idea of individual (political) action?
Max Haiven: This is a tricky question because I think that to fully understand the potentials of individual agency we need to fundamentally problematize, even demolish, the Eurocentric and colonial divide we habitually make between the individual and the collective, or the social. I have recently been reading Jason Moore and Raj Patel’s book “A History of the World in Seven Cheap Things”, where they remind us of an observation made decades ago by feminist and anti-colonial thinkers: that in order for the world to be shaped as it has been shaped by capitalism, colonialism and patriarchy, it was necessary for European philosophers to develop a framework that separated this thing we call “nature” from this thing we call “society” (Patel and Moore borrow Marx’s notion of the real abstraction to describe how an invented set of ideas becomes functionally real in practice). Part of that separation is also the separation of humans from one another, as social beings who cooperate with other beings to reproduce the world together.
So while ultimately I do believe individuals need to make ethical and political choices and take actions, I think that we need to do so at the same time as we challenge and re-imagine what it means to be an actor or an agent – and that is obviously very difficult in this world we have created, which only seems to acknowledge and tell stories about individuals. Ultimately, as you also recently argued elsewhere, the European Enlightenment taught us the notion of agency over rather than agency with (others).
In my work with Alex Khasnabish we theorized the radical imagination not as a thing an individual has, but as something we do together as we struggle within, against and beyond the systems of power that surround us. I think that, even when we imagine we are taking individual political action we are, in fact, always part of some common or collective movement, whether we acknowledge it or not. The ethical gesture of the individual ultimately has almost no meaning if it is not echoed by or acknowledged by others. Meanwhile, the whole meaning of “politics” as such is the question of living together, acting together.
So, any time we think of the individual, we need to question what it is we mean, and more importantly what it is we desire. Our obsession with individual agency – emblematized in the heroic Hollywood biopic narratives of triumphant individuals – clearly (temporarily) satisfies some sort of desire in us. What is this desire? Where does it come from? I would hazard the hypothesis that, at the same time as this system we live under – of competitive, individualistic, consumerist capitalism – increasingly strips us of our ability to change the world collectively, we gravitate towards narratives of the superhuman individual, the superhero, the maverick, even the anti-hero. And, as you also have pointed out, is it so surprising, then, that we increasingly seem to feel that the only ones who can save us (from ourselves) are the most bombastic, selfish, belligerent and unapologetic individualists: the proto-fascist strongmen of our current political climate?
Could you explain how this thinking of agency at a collective level relates to and is circumscribed by financialized sociality? Here I am thinking, among other things, that the financialized subject is, last but not least, a collectivity, and that it is only in the mode of collectivity that this “incorporation” can properly be challenged.
Collective agency is an equally tricky matter, because of course we must problematize the distinction between individual and collective as we have learned to draw it. It is, for me, not simply a matter of valorizing the collective over the individual. I recently reread Ursula K. Le Guin’s beautiful classic “The Dispossessed”, which I think presents some illuminating and inspiring notions of what it means to reimagine the relationship of the individual and the collective, based in her reading of Taoism, in the Western traditions of anarchism, and in her exploration of the diversity of world cultures through the study of ethnography.
If I may be schematic, I would say that we are always, whether we know it or not, acting collectively. We are a cooperative-imaginative species. We reproduce our world through complex divisions of labour and collaboration, often with “non-human” species, or with the “natural” world (I place these terms in scare-quotes to recall their artificiality). Often this cooperation is not at the front of our minds: it is encrypted in custom, tradition or habit. Power, as in “power-over,” the will to dominate, sovereignty, seems to me to be the methods by which certain classes, groups or people seek to take control over the means or the ends of this imaginative cooperation. But ultimately such control is difficult to gain and even more difficult to maintain. People rebel in big and small ways. Power-seekers fight amongst themselves. Things fall apart. We’re difficult animals.
So, now to come to your question, I think we gain a great deal when we think about finance capital and financialization as forces of power and control that are extremely adept at shaping our collective action. Finance names this global digitized monetary nexus that David Harvey argues acts as a “central nervous system” for capitalism, taking in information signals from around the globe and sending out triggers for response. I have suggested elsewhere that potentially the imagination is a better metaphor, but more on that in a moment. It is a form of meta-human reflexivity, a method by which a system of domination comes to know the world and act upon it, to take command over imaginative-cooperation at a global scale.
When we take this to be normal and natural, and indeed when we internalize finance’s paradigms, metaphors, value paradigms and so on, this, I think, is when we enter into a phase of financialized sociality, a phrase I take from my late advisor Randy Martin. Here, the social field becomes suffused with the logic of finance. Everything from education to housing to food to family life come to be reframed as “investments.” Debt is offered not as a means of domination and discipline (which it is indeed, functionally speaking) but as a means of personal liberation and responsibility.
But at the core, financialization is tyranny. It is a method by which the means and ends of our imaginative-cooperation, our capacities to act together to reproduce our world and our lives, are conscripted into the reproduction of a more and more unequal world of social and ecological ruination. And we will be made to fight over the scraps.
Let us turn now to the role of (AI) technology in financialization and financialized sociality. At some point in your book you cunningly say that we are challenged to discover “how we are bundled together” and to respond to that from within this condition. To my mind, the bundle concept was prominent in the middle of the 1990s – during the first bigger waves of the “digital revolution” – when thinkers such as Kojin Karatani unearthed it from the writings of Immanuel Kant, who defined the subject as a bundle. Back then the bundle concept was seen as a possibility to think the subject as a bundle of information flows. This said, how is the condition of being “bundled together” an at once financial and technological matter? In other words, how does the technological catalyze this particular articulation of financialization?
This language of bundling – as I use it – comes from the financial process known as “securitization,” familiar to many from the 2007/8 subprime loan meltdown. Essentially, many debts or other financial obligations are pooled together, then repackaged as new financial assets that offer investors access to different forms of risk and yield. Often this can be extremely complicated and arcane, and these new financial assets themselves can be pooled and redivided again and again. And, of course, the debtor has no say and usually no knowledge of this necromancy.
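The pooling and repackaging described here can be caricatured in a few lines of code. The two-tranche "waterfall" and all figures below are invented purely for illustration; real securitizations involve many more layers and far more arcane contractual rules:

```python
# Toy sketch of securitization: individual debts are pooled, and claims
# on the pool's cash flow are repackaged as tranches with different
# exposures to risk. All numbers here are invented for illustration.

def tranche_payout(collected, senior_face, junior_face):
    """Distribute whatever the pool actually collects: the senior
    tranche is paid first; the junior tranche absorbs losses."""
    senior = min(collected, senior_face)
    junior = min(collected - senior, junior_face)
    return senior, junior

loans = [100, 100, 100, 100]          # four pooled debts, face value 400
senior_face, junior_face = 300, 100   # repackaged as two tranches

# If one debtor defaults entirely, the pool collects only 300,
# and the entire loss falls on the junior tranche:
collected = sum(loans) - 100
print(tranche_payout(collected, senior_face, junior_face))  # (300, 0)
```

The point of the sketch is only that the debtor appears nowhere in the waterfall: the repackaging happens entirely over their head.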
This in many ways mirrors the processes by which data about us (I hesitate to say “our data” because I am skeptical of strategies that are predicated on the notion that we can or should “own” data associated with us) is congregated, parsed, bundled, sold, read, and made actionable. In fact, we know that many of the algorithmic and “self-learning” systems developed in finance are also used in other fields of “big data” analytics.
So the question becomes: in a world where debt and data about us is manipulated behind closed doors, by computer systems we cannot truly understand let alone control (unless we are part of the very corporations who oversee these processes, and even then we would feel helpless), what is happening to our collective powers? What are we, as a species that is in some ways defined by our ability to take collective action, becoming? Who are we anymore?
That language of bundling also comes from my partner, the artist Cassie Thornton, whose work on debt and financialization has been a constant inspiration. She undertook a very expensive graduate degree at the California College of the Arts and came to the realization that, whether they knew it or not, she and all her colleagues were actually making art about their debt which hung over the place like a cloud. She wanted to reveal it. Her MFA project was a yearbook titled “Our Bundles, Our Selves”, a play on the famous feminist health initiative Our Bodies, Ourselves from the 1970s. One of the points in this and all her work is that we may have been bundled together in new configurations against our will, but what sense and power can we find, together, if we have the courage to look? How can the isolation and fear of living in a world where vast unaccountable forces dominate be transformed into platforms for new solidarity?
In my book “Art after Money, Money after Art” I look at a number of similar artistic experiments in reassembling collective power within, against and beyond securitization. The question for me becomes: how can these experiments help us imagine a notion of security beyond this endless imperative to “manage risk”, which is the hallmark of financialization – a grim individualized task we are each forced to take on, but at which we cannot help but fail, again and again. We cannot, any one of us, contain, control or even anticipate the risks that living in this system poses for us, ecologically, socially, financially. To live by the imperative of risk management is to undertake an impossible task: we are individually tasked with managing risks that are structural and systemic, that can’t possibly be managed by individuals. If we want to escape, we need to reimagine what it means to be “secure,” beyond securitization. And we are only secure to the extent that we can rely on one another and reproduce our world together.
With respect to the particular role of AI in this context, I would like to turn to a phrase that you use in your book like a (variable) leitmotif: “hidden in plain sight”. One of the assumptions of the AMBIENT REVOLTS conference is that AI is everywhere and nowhere: implemented and deployed not only at all kinds of abstract levels of society but also literally in your living room and in your hand; nevertheless it remains sort of invisible, as – and this is only the most obvious reason – its implementation and application have become so naturalized. With regard to financialization we can observe a similar situation: although the major financial crisis of 2007/8 was conditioned by the massive and semi-supervised use of AI-driven trading with derivatives, and although this practice has been en vogue on Wall Street since the 1990s, this fact remains a blind spot in criticism and scholarship in general.
This is especially surprising if one takes into account the large number of academic and popular books from that decade, for instance “Neural Networks in Finance and Investing: Using Artificial Intelligence to Improve Real-World Performance” (1992), “Trading on the Edge: Neural, Genetic, and Fuzzy Systems for Chaotic Financial Markets” (1994), “Neural Networks in the Capital Markets” (1995), “Artificial Intelligence in Finance & Security” (1995) or “Neural Networks for Financial Forecasting” (1996). Despite this rich body of literature, there has hardly been any deeper analysis of the role that neural networks and AI play in finance and financialization. The works that often get referenced in this context, including Frank Pasquale’s “Black Box Society” or Scott Patterson’s and Michael Lewis’s respective books on the subject, do not go beyond generalized ideas of algorithms and even more generalized ideas of AI; often these ideas even get blurred.
Yet, there is an important difference between algorithms in general and algorithms in AI. As media theoretician Felix Stalder reminds us in his book “Kultur der Digitalität”, in AI “algorithms are used to write new algorithms or to determine their variables. If this reflexive process is integrated into an algorithm, it becomes ‘self-learning’: The programmers do not define the rules for its execution, but rules according to which the algorithm should learn to reach a certain goal. In many cases, its solution strategies are so complex that they cannot even be comprehended afterwards. They can only be tested experimentally, but no longer logically. Such [self-learning] algorithms are basically black boxes, objects that can only be understood through their external behavior, but whose internal structure eludes recognition.”
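Stalder's distinction can be made concrete with a minimal sketch: the programmer writes the rule by which the algorithm learns, not the decision rule itself. The perceptron and the toy data below are my own illustrative assumptions, orders of magnitude simpler than the self-learning systems used in finance, but the structure is the same – the final weights are found through training, not written by hand:

```python
# Sketch of a "self-learning" algorithm in Stalder's sense: we specify
# a learning rule (how to adjust weights on error), not the decision
# rule. The decision boundary the perceptron ends up with is learned.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:       # label is +1 or -1
            pred = 1 if w[0]*x1 + w[1]*x2 + b > 0 else -1
            if pred != label:                 # learning rule: nudge weights on error
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b    += lr * label
    return w, b

# Invented toy data: points above the diagonal are +1, below are -1.
data = [((0, 1), 1), ((1, 2), 1), ((1, 0), -1), ((2, 1), -1)]
w, b = train_perceptron(data)
classify = lambda x1, x2: 1 if w[0]*x1 + w[1]*x2 + b > 0 else -1
print([classify(x1, x2) for (x1, x2), _ in data])  # learned, not programmed
```

Even in this transparent toy case the learned weights are not "rules" anyone wrote down; in deep, multi-layered systems this opacity becomes the black box Stalder describes.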
Since at least the 2007/8 crisis, all this conceptual insight and the discursive material from the 1990s should have enabled and motivated a profound theorization of AI and neural networks in finance. Yet the deeper connection between AI and finance has remained hidden in plain sight – a blind spot, that is. This is alarming given that AI is now hyped as the omnipotent solution for all sectors of society and as a docile assistant in your daily life. The danger is that the naturalization of AI will even ‘consolidate’ the blind spot.
Well, on a very practical level, it would appear that the sort of AI work going on in the world’s financial capitals is extremely complex, cutting-edge stuff, the secrets of which are jealously guarded and patented, so we end up not learning what they’re up to for many years, and even when we do it’s very obscure. The financial sphere is one where the world’s most accomplished data scientists and programmers are working, because financial firms have the money to pay for the talent. So there is a great lag between the technological developments in machine learning and self-building or self-correcting algorithms in finance and our learning of them publicly. Essentially, we have some of the greatest scientific and mathematical minds of a generation put to work building gladiator robots that seek to outdo one another at trading, moving conjectural representations of global wealth around at superhuman speeds. Arguably, it’s the money and competition in this sector that is driving forward a huge number of so-called advances in AI technologies, but, of course, not only is it totally secretive and unregulated, it’s also in the service of a deeply unethical and socially destructive cause.
If these AI systems are tasked with managing the global economy, and if, forty years into the neoliberal global revolution, the global capitalist economy has its hands in almost every aspect of human life, from food to housing to work to medicine to education to the technology we increasingly use to manage social life, romance, entertainment and so on, should we not already admit we are ruled by machines? Maybe the feared “Skynet” moment, where AI powers grow to such an extent that they supersede human agency, already came and went, and most of us missed it completely?
It is an interesting hypothesis, but one that, to my mind, may obscure more than it reveals. On the one hand, it’s important to note that, if it is true the world is already ruled by financial AI, it is ruled not by some central, broad (artificial) intelligence, but by the cutthroat competition between multiple very targeted, specifically calibrated AIs. Further, I think there is a very interesting argument made by Anis Shivani, which went without enough fanfare, and which basically held that our fears about a world run by AI are actually (legitimate) fears about a world run by capitalism. It’s not simply that AI is a neutral tool put towards evil ends by profiteers. Shivani argues that capital is already a kind of AI: this dark, inhuman product of our alienated social cooperation that comes to command and shape our labour and take control of society for its own reproduction and growth. This recalls Harvey’s metaphor, mentioned earlier, of financial markets as the “central nervous system” of capitalism. The “artificial intelligence” already exists, and may have existed for centuries. Today’s machine-learning and self-producing algorithms, from this perspective, are upgrades to an already-existing system.
I would simply ask, however, whether at a certain point it becomes more accurate, or at least more evocative, to discuss the challenge of artificial imagination, by which I mean not simply the particular algorithms and protocols by which individual, competitive machines seek to translate the complex world into actionable data but, rather, the dynamics of a whole system made up of an innumerable quantity and velocity of such digital actions. As numerous scholars have already noted, “intelligence” is a poor metaphor for what algorithms “do.” But so long as we are using metaphors, I wonder what the notion of artificial imagination might open up?
The most advanced financial instrument of AI-driven high-frequency trading is the derivative. Could you elucidate financialized sociality as derivative sociality, and could you then also reflect on the politics of the technological set-up of derivative sociality? Here I have in mind, for instance, the fact that the group of those who create and profit from the technological set-up is rather small, while the group of those who are instrumentalized by this set-up is practically all-inclusive – an asymmetric situation that is not only an imbalance of power but also affords “the many” a surprising amount of power that is “hidden in plain sight”.
Randy Martin, mentioned earlier, was among a group of critical scholars fascinated by the power and dark baroque elegance of the derivative. Briefly, derivatives are, at their simplest level, agreements between two parties to conduct some transaction at a specified future date: I will buy, or have the option to buy, 100 shares of Google stock from you in one year’s time at a set price. These contracts are literally ancient. In fact, some of the oldest clay tablets from Sumer are essentially futures contracts, often used by farmers to hedge against the risk of future price fluctuations. But today, thanks to “developments” in financial accounting since the 1970s, the “value” of these contracts can be calculated and, thanks to “developments” in financial market infrastructure, there is today a massive trade in derivatives: the volume of annual trade in “over the counter” derivative products is, by some estimates, in the range of 700 times the entire planet’s economic output (GDP). All these measures are problematic (as is the comparison of trade volume to total output), but it gives us a good sense of the magnitude of the problem.
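The mechanics of the two contracts described above can be sketched in a few lines (an illustrative toy with made-up numbers, not drawn from the interview): the holder of a call option profits only when the market price at expiry exceeds the agreed “strike” price, while a farmer who has sold a futures contract is compensated exactly when the market price falls.

```python
def call_option_payoff(spot, strike, premium=0.0):
    """Payoff at expiry for the holder of a call option: the right,
    but not the obligation, to buy at the strike price."""
    return max(spot - strike, 0.0) - premium

def futures_payoff(spot, agreed_price):
    """Payoff for a farmer who agreed to sell at a fixed price:
    the hedge gains what the market loses, and vice versa."""
    return agreed_price - spot

# An option struck at 100: worthless below the strike,
# worth the difference above it (before the premium paid).
profit_per_share = call_option_payoff(spot=120, strike=100)

# If grain falls from an agreed 100 to a market price of 60,
# the contract compensates the farmer per unit sold.
hedge_gain = futures_payoff(spot=60, agreed_price=100)
```

The point of both contracts, and of everything built on top of them, is the same: the contract’s value today depends on a price that does not yet exist.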
But the exact nature of the problem is complex. For Martin and others, derivatives essentially represent a method by which the future is mapped by financialized metrics of probability, and by which the whole world is measured on the basis of potential risks and rewards for speculative investment. Derivatives exist today that measure and trade on potential weather patterns, on the effects of climate change, on the direct and indirect results of geopolitical conflict (e.g. rising or falling oil prices), on practically anything at all. What’s more, thanks in part to new computing technologies, all of these bets, which are largely made by huge financial players, are often interlaced, cross-referenced, securitized and bundled into portfolios, creating a collectively built house of cards that grows ever higher. I’m dramatically simplifying here to get at Martin’s key point: ultimately, the whole world is engridded in the framework of the derivative, and indeed the derivative ceases to merely measure and speculate on the world and comes to actively shape and construct it.
Derivative sociality gets at the way this whole process is based on and accelerates the phenomenon we discussed earlier: the way that each of us is increasingly, and in many different intersecting and conflicting ways, measured, bundled and securitized by this financialized order, of which the derivative is the key technology. Further, derivative sociality names the way we internalize this methodology: we come to apply the thought-world of the derivative in our own lives: everything becomes an investment, and we are each tasked with translating our circumstances into a series of risks to be managed, assets to be leveraged, hustles to be undertaken. Finally, derivative sociality indicates the way that this individualized imperative manifests in unusual or unforeseen ways in the form of new collectivities. Martin was also a researcher of contemporary dance, which for him also meant all kinds of creative movement. So he saw grassroots forms like skateboarding, hip-hop, break-dancing and experimental contemporary dance as methods by which alienated and exploited individuals found one another and started to experiment with new forms of collective risk, underneath the financialized order, in its ruins.
And I think that contrast, between Ivy-League-educated astrophysicists working on Wall Street building AIs on the one hand, and kids in the ghetto inventing new ways of using their bodies together, of creating new socialities, on the other, maps out the inequalities of an era of derivative sociality. Elsewhere, Martin spoke about the new bifurcation of society between the lauded, valorized risk-takers and the abject “at risk”, who must be managed lest their failures infect others.
Tackling the logic of derivative sociality, I would like to direct our attention to the work of political geographer Louise Amoore, whom I also interviewed recently on the politics of AI. In her work on the rise of what could be called “calculative security” within “AI-driven governmentality”, she is also engaged with the derivative as such and as a form that migrates across social fields. In her important book “The Politics of Possibility” Amoore writes: “The derivative’s ontology of association is overwhelmingly indifferent to the occurrence of specific events, on the condition that there remains the possibility for action: in the domain of finance, the derivative can be exchanged and traded for as long as the correlation is sustained; in the domain of security, the derivative circulates for as long as the association rules are sustained.”
It is this reading of the redeployment of the derivative form in the sector of security and governmentality that not only highlights how this form migrates across social fields – and thereby becomes the dominant form in society – but also how it comes to be constitutive of sociality. As Amoore writes about algorithms: “They infer possible futures on the basis of underlying fragmented elements of data toward which they are for the most part indifferent.” Here an enormous socio-political crisis becomes apparent: “Indifferent to the contingent biographies that actually make up the underlying data, risk in its derivative form is not centered on who we are, nor even on what our data say about us, but on what can be imagined and inferred about who we might be – on our very proclivities and potentialities.” And it is this indifference, we might conclude, that makes the redeployment of the derivative logic in “AI-driven governmentality” so productive (or rather, so destructive) of sociality. Does it make sense to you?
Yes, absolutely. But I would simply ask: when have the powerful, and especially the powerful under capitalism, not been indifferent to the biographies and lives of those whom they subjugate? Subjugation in any system seems to me to always depend on dehumanization, the reduction of the subjugated to a kind of machine or indifferent mass. This was the ideology of colonialism and empire.
What is different, perhaps, is that this system is precisely interested in difference, specificity, idiosyncrasy, but only in aggregate. I think this is implicit in the quote above. Martin likewise observes that the power of the “order of the derivative” is precisely that it uses new technologies to pay extremely close attention to differences, to parse those differences ever more finely, in order to assemble or bundle those differences more precisely, and thereby to organize populations, tendencies, characteristics and profiles more expediently.
This has a lot to do with the digital iteration of what Deleuze, following Foucault, calls a “society of control,” one where the bio-political apparatuses of “making life” take on a distinctly neoliberal frame, where each individual is tasked with carving out a space within a network of codes, fragmented institutions, discontinuous systems of power, all under the inhuman sovereignty of the market.
So, to be schematic, the problem with these technologies is not that the individual is being lost in the mass or in the data, but in fact that the individual, as a construct of the capitalist, colonial, patriarchal order, is being accelerated: more difference, more distinction, more atomization, more specificity. And it is not so simple as saying all forms of commonality or collectivity are being lost either: rather, it would appear that the dominant forms of commonality or collectivity are being generated from the movements of a swarm, but only those with access to the algorithms and big data sets can hope to “see” and leverage that movement. The rest of us wake up daily to find ourselves part of (or fail to recognize ourselves as part of) ephemeral collectivities we never knew existed. Until disaster strikes.
Reading AI-driven financialization as fostering a derivative sociality brings us to the question of what happens to the common under financialization. For some time now, capitalism has “colonized” the common, e.g. by turning money into our common imaginative horizon. In the current phase of capitalism, characterized by the excesses of financialization, this process moves to a new stage. How can this “colonization” of the common – which you yourself don’t call colonization but “encryption” – also be understood as transforming the very nature of the political? What is the “encrypted common” with regard to the questions concerning collectivity and sociality we have discussed so far?
To begin, I have been called on to be much more careful about using the term “colonization” as a metaphor, as I have done in my past work. Colonization, like slavery, is a very specific historical process, one that is still with us. So both for the sake of analytic clarity and critical efficacy, I think we must use colonialism as a term carefully. There are very important ways in which finance and financialization are bound up with colonialism, past and present, but they need to be addressed with some specificity.
Now, the enclosure of the commons and the colonization of lands and peoples are very closely connected in the history of capitalism. Patel and Moore, whom I mentioned earlier, build on the insights of Silvia Federici and Massimo De Angelis in identifying both as methods by which capitalism destroys or strips populations of their means of subsistence and of their relationship to the non-human (or, as we say, “more-than-human”) world, that thing we used to call “nature” – except now we realize that the separation of society from nature was always artificial and epistemologically violent, and justified very real ecological and human violence as well.
So, with that said, let me now turn to your question more directly. It has indeed been my argument that finance and financialization represent new iterations or developments in the methods by which capitalism encloses or forecloses the common, a vast, now-global engine for what David Harvey has called “accumulation by dispossession,” which feels slightly more accurate than Marx’s terminology of “primitive accumulation.” We can see this today in the turn towards questions of extraction and extractivism, where communities and landscapes are ripped apart when speculative capital finances mining projects. Often, perhaps even usually, this process is also (neo-)colonial, in the sense that it represents an incursion on Indigenous peoples’ lands and lives, or rhymes with the history of imperialism, except today in directly financialized forms.
I have also sought to argue, especially in my book “Crises of Imagination, Crises of Power”, that we must understand the cultural and social world as sites of enclosure, where our methods and practices of expression, aesthetics, connection and joy become targets of a form of capitalism that is increasingly bound up in accelerating consumerism, individualism and competition. And here I think money is the model: as Marx taught us, but also Mauss and others, money is this artificial bond with society we carry around with us, a kind of fragment of our own estranged potential for cooperation, solidified into an icon of capital itself that haunts us like a ghost. Money, then, is like a shard of a hologram, wherein we see the fractured whole if we learn how to look.
This is why I turn to the language of encryption when I speak of money, or when I speak of the way finance re-configures culture and society. It returns our own collective being, our collective agency, to us in encoded form, scrambled and fragmented. My credit rating, for instance, is a reflection of my place within a capitalist society, my trustworthiness. But, of course, it is not a reflection of my whole being, my real relationships, but of a kind of speculative persona. And also: I have practically no control over or access to the means of telling this story about myself. But it is accessible to financialized corporations or their AIs. It is functional in the world, arguably much more functional than any story I may tell about myself. It determines, for instance, my access to my society’s resources, to the fruits of other people’s labor like housing, food, education or transportation. In the US, credit ratings are assessed by prospective landlords and employers. We are hearing terrible rumors about China’s planned Social Credit system, where access to nearly all public and private resources will be controlled by a kind of credit rating derived not only from one’s personal financial history but also from social media interactions, schooling, the testimony of friends and family and, more generally, from algorithmically determined scoring based on the value of others in your social network.
So, we are part of encrypted societies, and we ourselves are in a kind of crypt, a concept I borrow from Derrida, who in turn borrows it from psychoanalysis. We are both sealed in and sealed outside of ourselves and our society at the exact same time, both alive and dead. I use this language as well as a method to turn our attention away from the present fascination with cryptocurrencies, which, to borrow a formulation from the Institute of Network Cultures, are often brilliant answers to the wrong set of questions. If we want to liberate ourselves from this form of financialized capitalism, it is not enough to simply create what we imagine to be a more fair or pure form of money. We must think of money as one among many methods we use to reflect on and shape our own imaginative-cooperative potentials.
Turning to art, which – in its best moments – can be the “raw magma of the common”, as you say in your book, I wonder what these moments are in which art becomes (or reveals itself as) the “raw magma of the common”. And I wonder what this “magma” actually is?
I borrow this concept of magma from Cornelius Castoriadis, who uses it to describe the power of the imagination, out of which, he posits, all social institutions are formed. We are imaginative-cooperative beings and we build a life together, and reproduce life together, by creating certain frameworks of the imagination, ways of signifying the world, ways of formalizing our relationships. In unequal societies, these solidify, like cooling magma, into rock-like shapes of hierarchies, ranks, castes and institutions. We take these for eternal, natural or necessary when they are simply the momentary crystallization or petrification of our own imaginative power. But the metaphor of magma also hints that another volcanic eruption is coming, one so powerful it will sweep away these solidified forms. And indeed history teaches us how very temporary and fragile our social institutions are. Sometimes that appears to be a good thing. As Ursula K. Le Guin, my patron saint of the radical imagination, put it, “We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings.” But equally, I think we are now seeing institutions that seek to protect human freedom, like the various United Nations declarations on human rights, the rights of children, or the rights of Indigenous people (which, in spite of the many horrific flaws of the UN as a tool of imperialism, were each the result of huge grassroots struggles), simply being ignored by today’s cynical neofascists; in any case, they were largely rendered toothless by neoliberalism long before the most recent authoritarian wave.
In any case, my notion of the magma of the common is an attempt to build on my past work, where I associated the imagination with the common, by which I mean the potential for human beings to collaborate, communicate and reproduce the world together on a horizontal, egalitarian and caring basis. I think this is at the root of all those phenomena we, today, call “economics,” though it appears today in perverted form. An economy is, ultimately, a framework for cooperative, imaginative reproduction, for commoning. But most economies are highly exploitative, oppressive and unequal expressions or articulations of the common. But like the imagination, the common is restless, within, against and beyond its current forms of incarceration.
How can art, in its very best moments – that is, as the “raw magma of the common” – challenge derivative sociality?
We must remember what we are capable of, together. We remember by doing. We remember in struggle, I think. The society of the derivative seeks to organize the imaginative-cooperative potential of the common by encouraging each of us to reimagine ourselves, and so act in the world, as isolated, atomized, paranoid, competitive risk-managers. It encourages (and promises to reward) us for translating or transmuting everything of value in our lives into assets to be leveraged: our education, our skills and talents, our relationships, our inheritance, our culture – anything at all. That is financialization’s dark magic, its cruel alchemy. And it brings a whole new layer of digital power to bear to defend, reproduce and extend this disorganized social order, this decentralized planned economy, where “finance” writ large controls everything like some authoritarian hive-mind. And we have learned to see the embrace of this derivative sociality as empowering, to embrace the ethos of the financier, the entrepreneur, the unapologetic gangster who realizes that everything is for sale and nothing is sacred. Is it any wonder we worship and indeed elect the most disgusting but vivid personifications of these tendencies?
And yet, as De Angelis points out, this is only a small part of the richness that makes up our lives. Still, the vast majority of our human relationships, even inside the bank or at work, are based on other values, on commoning even in some small and fractured way. Workers are constantly rebelling, even if it’s a small joke or an unspoken agreement to be lazy. Resistance flourishes, though not (yet) in the forms of open rebellion. We crave non-derivative connection, community and care, though fatefully we are willing to accept it in often authoritarian forms of nationalism, religion or ethno-nationalist solidarity. To remember the common, to see the common hidden in plain sight, to recognize the power we have, the power on which the system that exploits us depends, this is always the most powerful weapon in the hands of the exploited. We are broken, divided, exhausted and sick. But that power remains.
The notion that AI-driven financialization fosters derivative sociality is a prime subject of the episodic film “Popular Unrest”. The AI-driven financial system (referred to as the “World Spirit”) elicits random solidarity and random killings, and in the course of this brings the common to the fore. As it is hard to distinguish here between the utopian and the dystopian, I wonder whether you could explain why, in this particular case, the line is necessarily so thin in order for the “raw magma of the common” to emerge, and what this tells us about our predicament in times of AI-driven financialization?
I think ultimately this film by Melanie Gilligan is dystopian and quite pessimistic. To recap, the film depicts a world in the grips of the “World Spirit”, which is a kind of hybrid of online markets, social media, a massive digital state bureaucracy and algorithmic control mechanisms. In its attempts to constantly learn more about the world, it produces a kind of glitch in its own system, with two effects: in one, people just start being randomly murdered, as if attacked from above by a knife. In the other, seemingly random strangers become inexplicably drawn together with a feeling of intense affinity. Spoiler alert: in the end, one of these “groupings” discovers the truth and seeks to confront the World Spirit, only to ultimately be corrupted by it, betraying one another, succumbing to competition, paranoia and individualism. The film ends with a world in which the wealthy spend their resources trying to protect themselves from the World Spirit, which, in so doing, they also help to reproduce, while the world’s poor are put perpetually “at risk” from its predations and violence.
For me, this film is a brilliant warning about where we may be headed, all the more impressive since it dates back almost a decade now. Ultimately, it is a warning about how our own powers and potentials are turned back against us. The World Spirit is not an autonomous thing, at least not as we are used to imagining autonomy. It’s a kind of collective hallucination given real power by the way it causes individuals and collectives to act. “We” built it to help ourselves cooperate and organize our society, and now it commands us. It actively seeks to prevent or coopt any other formats of collective agency that might challenge it.
You elaborate in your book that art could and should become a source of inspiration for radical change in general and for social movements in particular. Moreover, you suggest that radical social movements not only could and should turn to art (as a source of inspiration) but also could and should turn into art, or something like art. And if “Popular Unrest” is somehow visionary in this context, as it presents crowds that emerge out of nothing, somewhat performatively, as if part of a semi-abstract social sculpture, and as these crowds go about challenging the AI-driven financial system called the “World Spirit”, then what does the fact that the attempted abolition proves to be futile tell us?
While I do suggest in the book that art can offer us lessons for struggle, I don’t necessarily think that all art does so, or does so well. And I am also a bit wary of the idea that art must offer us visions or models of successful collective or individual agency or social transformation. Ultimately, in this film, the Grouping fails and is co-opted. It’s not a happy story.
I end the book on the question of abolition as a means to draw strength and inspiration less from art than from movements, in this case the constellation of movements around the world that are seeking to abolish prisons and police as social institutions. They do so not only by making vigorous, radical demands, but also by working in the here and now to support individuals and communities affected by these institutions (for instance, those who endure racial profiling or harassment by the police, or communities that are ripped apart by the prison system) and by actively working to build alternative, grassroots institutions. Here, I think, we see efforts to generate new forms of connectivity and collectivity as both the means and ends of generating or reproducing a very different form of security, within, against and beyond the financialized imperative to securitize.
It strikes me that the failure of the Grouping in Gilligan’s film stems from how terribly naive they are, believing that their little cluster could, in a heroic gesture, reform or shut down the World Spirit, relying only on the ephemeral solidarity that the Spirit itself had generated within them. Instead, I would imagine successful resistance and rebellion would mean a much more gradual (and certainly less cinematic!) building of solidarities and alliances, in formations that actually transformed people’s lives, that reduced dependency on the data-capitalist-sovereign, that created new bonds of social reproduction and robust radical organization.
The key for me is this creation of new bonds of social reproduction and robust radical organization. I think our abilities and inclinations towards these ends have atrophied in a financialized society, and our muscles of individualized risk management are engorged to the point of even obstructing and inhibiting our movement, both individual and collective. Art can and should play many roles, including inspiring and warning us. But it can also be part of a process of collective remembering, a remembering of our powers of imaginative cooperation which I mentioned earlier, powers that can be turned towards creating new bonds of social reproduction and robust radical organization. We must become different animals to survive what we have created, to create a new relationship with the world, and to avenge what we have been made to become.
Max Haiven is Canada Research Chair in Culture, Media and Social Justice at Lakehead University in Northwest Ontario and co-director of the ReImagining Value Action Lab (RiVAL). He writes articles for both academic and general audiences and is the author of the books “Crises of Imagination, Crises of Power”, “The Radical Imagination” (with Alex Khasnabish) and “Cultures of Financialization”. His new book “Art after Money, Money after Art: Creative Strategies Against Financialization” has just been published by Pluto Press.
Krystian Woznicki is a critic and the co-founder of Berliner Gazette. His recently published book “Fugitive Belonging” blends writing and photography. Other publications include “A Field Guide to the Snowden Files” (with Magdalena Taube), “After the Planes” (with Brian Massumi), “Wer hat Angst vor Gemeinschaft?” (with Jean-Luc Nancy) and “Abschalten. Paradiesproduktion, Massentourismus und Globalisierung”.
This is a caricature, but we have left this idea of actuarial reality behind for what I would call a ‘post-actuarial reality’, in which it is no longer a matter of calculating probabilities but of accounting in advance for what escapes probability, and thus for the excess of the possible over the probable.
It is no longer a matter of threatening you or inciting you, but simply of sending you signals that provoke stimuli and therefore reflexes. There is, in fact, no longer any subject. It is not only that there is no longer any subjectivity; the very notion of the subject is itself being completely eliminated thanks to this collection of infra-individual data, which is recomposed at a supra-individual level in the form of profiles. You no longer ever appear.
Prevention consists in acting on the causes of phenomena so that we know these phenomena will or will not happen. This is not at all what we are dealing with here in algorithmic governmentality, since we have forgotten about causality and we are no longer in a causal regime. It is pre-emption, and it consists in acting not on the causes but on the informational and physical environment, so that certain things can or cannot be actualised, so that they can or cannot be possible. This is extremely different: it is an augmented actuality of the possible. Reality therefore fills the entire room, reality as actuality. This is a specific actuality that takes the form of a vortex sucking in both the past and the future. Everything becomes actual.
By Alexander Galloway
The politics of algorithms has been on people’s minds a lot recently. Only a few years ago, tech authors were still hawking Silicon Valley as the great hope for humanity. Today one is more likely to see books about how math is a weapon, how algorithms are oppressive, and how tech increases social inequality.
The incendiary failures are almost too numerous to mention: a digital camera that thinks Asians have their eyes closed; facial recognition technologies that misgender African-American women (or miss them entirely); Google searches that portray young black men as thugs and threats. A few hours after its launch in 2016, Microsoft’s chatbot “Tay” was already denying the Holocaust.
It used to be that if you wanted to explore the political nature of algorithms and digital media you had to go to Media Studies and STS, reading the important work of scholars like Lisa Nakamura, Wendy Chun, Seb Franklin, Simone Browne, David Golumbia, or Jacob Gaboury. (Or, before them, work on cybernetics and control from the likes of Gilles Deleuze, Donna Haraway, James Beniger, or Philip Agre.)
Now the politics of computation has gone mainstream. Social media followers no doubt saw the recent video clip in which New York congresswoman Alexandria Ocasio-Cortez claimed that algorithms perpetuate racial bias. And in a recent New York Times column, legal scholar Michelle Alexander quoted Cathy O’Neil’s argument that “algorithms are nothing more than opinions embedded in mathematics,” suggesting that algorithms constitute the newest system of Jim Crow.
I too am interested in the politics of computation, and have tried, over the years, to approach this problem from a variety of different angles. The specter of the “Chinese gold farmer,” for instance, has been an important topic in game studies, not least for what it reveals about the ideology of race. Or, to take another example, network architectures display an intricate intermixing of both vertical hierarchy and horizontal distribution, which together construct a novel form of “control” technology.
A topic that has captured my attention for several years now — although so far I’ve written about it only episodically and tangentially — is the way in which software (including its math and its logic) might itself be sexist, racist, or classist. And I don’t mean the uses of software. Use is too obvious; we know that answer already. I mean numbers like 5 or 7. Or the variable x. Or an if/then control structure. Or an entire computer language like C++ or Python. Do these kinds of things contain inherent bias? Could they be sexist or racist?
The use of tech is one thing. Tech itself is another. Even the most ardent critics of Amazon or Google will frown and backpedal if one begins to criticize algebra or deductive logic. It’s the third rail of digital studies: don’t touch. For instance, some of you might remember the uproar a few years ago when Ari Schlesinger suggested designing a feminist computer language. How dare she! The very notion that computer languages might be sexist was anathema to most of the Internet public, fomenting a Gamergate-style backlash.
While I intend to make this kind of argument more explicitly in the future — the argument that mathematics itself is typed if not also explicitly gendered and racialized — I won’t do that here. Suffice it to say that the topic interests me a great deal. What I want to present here is an annotated collection of the various attempts to resist such a project, the many voices — so loud, so cocksure — that aim to silence and subdue the politicization of math and code.
But before starting, a few caveats. First, math, logic, and computation are not the same thing. Given more time it would be necessary to define these terms more clearly and show how they are related. Personally I consider math, logic, and computation to be intimately connected, often so intimately connected as to reduce one to another. For instance, in the past I’ve made claims like “software is math”; I acknowledge that others might be uncomfortable with this kind of reduction.
Second caveat: Race, class, and gender are not the same thing. I’m referencing them together here because they evoke a specific kind of cultural and political context, and because they all emerge from processes of discretization and difference. A richer discussion would necessarily need to address the complex interconnection between race, class, gender and other qualities of lived experience.
Third caveat: There are people working on these topics who do not fall into one of the responses below — see paragraph three above for some initial suggestions. I am aware of many of them, but of course not all of them, so please feel free to alert me to relevant references if you feel so moved. My intent is not to ignore or silence people. I am focusing on these responses because, in my assessment, they represent the set of dominant positions.
In documenting the resistance to the politicization of math and code, I’ve paraphrased and condensed texts found online and in various kinds of public debate. The italicized block quotes below are fictionalized accounts, but based on things said by real people. Each fictionalized account is followed by my own commentary. Note that I’m omitting all manner of bad-faith responses of the form “women are inherently bad at math.” The responses below are all examples of good-faith responses from people who consider themselves more or less charitable and reasonable.
+ + +
Response #1: “Pure Abstraction”
“Math is the pursuit of abstraction and formal relation. Math expresses number in its purest form. Algorithms, math, and logic are agnostic to people and their specific qualities. An algorithm has no political or cultural agenda. It does not matter if you are a man or a woman. In math, a correct answer is the only thing that matters.”
Perhaps the most common response, particularly among mathematicians and computer scientists, is to occupy the position of, shall we call it, Naive Abstraction. Here math is entirely uncoupled from the real world. It merely expresses the clearest, most rigorous, and most formal relation between abstract entities. Even if they are unlikely to admit it publicly, most mathematicians are Platonists; most of them secretly (or not so secretly) think that mathematical entities exist in a realm of pure, formal abstraction.
Response #2: “Politics Unwelcome Here”
“How silly to try to classify mathematics along political or social lines. You are misusing math to further a personal agenda. If math has a cultural or political agenda, the agenda is invalid because agendas by definition deviate from the pure and abstract nature of math. A ‘feminist mathematics’ is simply nonsense; the very notion conflicts with the basic definition of mathematics. Math is pure abstraction uncoupled from world-bound facts such as race, ethnicity, gender, class, or culture.”
The second response is similar to the first. Here, Naive Abstraction still holds, only its proponents have become more aggressive in defending their turf. That’s just not how math works, they say, and to suggest otherwise is to commit an infraction against mathematics. Politicizing mathematics means subjecting it to an external reality for which it was never intended. It constitutes a kind of category mistake, they claim. Let’s keep math over here, and politics over there, and be careful not to mix them.
Response #3: “Your Terms Aren’t Clearly Defined”
“Who says math is the pursuit of pure abstraction, insensitive to the real world? Innumerable mathematicians have recognized the importance of observation and even experimentation in the development of mathematical knowledge. Math can be applied, even empirical — just ask any working scientist. And you overlook the extensive attention given within mathematics to real phenomena in disciplines like geometry, topology, or calculus. Hence your terms aren’t clearly defined. Math is complex, so don’t make indictments based on generalizations; deal instead with specific cases.”
Response #4: “Essentialism Is Bad”
“Certainly math can be understood culturally and politically, but to define math in this way means to assign it an essence, and essentialism is the worst form of cultural and political misuse. There is no essence to form or structure. A form gains its definition through encounters with other forms. Structures gain their meanings only after being put into exchange with other structures. Tech has no essence; to offer a rigid definition is to be guilty of essentialism.”
Now we have the same problem as before, only in reverse. Here the respondent might freely acknowledge the cultural and political valence of algorithms or code. They might even reject nominalism and acknowledge that math has general characteristics. However this introduces a new threat that must be resisted: essentialism. To define something is to assign it an essence. And since we already know essentialism is bad — thanks to a few decades of poststructuralism — this approach is destined to fail. Don’t try to politicize things because you’ll simply expose yourself to even greater hazards. (Try ethics instead.)
Response #5: “Not My Problem”
“Cultural and political concepts like race or gender might be interesting, but that’s just not my topic. They’re specific to particular contexts, while I’m looking at generalizable phenomena. Since race/class/gender aren’t generalizable mathematically, it’s safe to ignore them.”
The dynamic between general and specific can also be leveraged in other ways. A common technique is to suggest that race, class, or gender are “particulars”; they pertain to particular contexts, to particular bodies, to particular histories. And, as particulars, they do not rise to the level of general concern. Thus the Not My Problem folks often think they are operating in good faith even while avoiding or dismissing politics: yes, I care about your plight; but it’s yours alone; I’m simply interested in other things (birdwatching, stamp collecting, prime numbers).
Response #6: “Focus on Subjects”
“Yes, of course math is a cultural and political technology. Math is a tool of governmentality that constructs and disciplines subjects. Instead of studying math for its own sake, focus on how math produces subjects. Given that tech inscribes power onto bodies, effects will be visible in how subjects are coded and organized.”
Thus far we’ve been considering responses from people who are typically outside the field of critical digital studies. The two final responses — responses 6 and 7 — are interesting in that we find them within critical digital studies. In fact, these last two responses are some of the most popular positions in media studies today. Response 6 freely admits the cultural and political nature of math, code, logic, and software. Response 6 asserts, however, that the best way to understand the culture and politics of math is to look not at math but at subjects. Persons and their bodies become the legible substrates on which the various successes and failings of technology are inscribed. These folks tend either to be Foucauldians — “if you want to understand tech, first you have to understand power.” Or they tack more toward anthropology and the human sciences — “if you want to understand tech, first you have to understand people.” Either way, math falls out of the frame.
Response #7: “Focus on Use”
“Math is just a tool. Sure, algorithms can be cultural and political, but that’s a truism for most things. If racist or sexist values are deployed technically, then technology will appear racist or sexist. Math is a neutral vessel, but it’s rarely objective because it harbors people’s goals and intentions. To remedy any perceived bias, focus on the cultural and political context in which tech is used. In other words don’t talk about sexist algorithms so much as sexist uses of tech or sexist contexts.”
Last but not least, a common response to the question of political tech — arguably the most common response, at least for those “in the know” — is to say that code and software are embedded with values. What values exactly? The values of their creators, which is to say all the biases and assumptions of whoever designs the algorithm. Thus if you have a racist algorithm, it’s because some racist designer somewhere made it that way. If you have sexist software, it’s because some coder was negligent. If this argument sounds familiar, it should: it’s a version of the gun rights argument that “guns don’t kill people, people kill people.” Only now the argument is: math doesn’t hurt people, negligent mathematicians hurt people (using math). Sometimes we call this the Neutral Vessel response — sometimes the Just A Tool response — because it turns tech into a neutral, valueless vessel ready to receive someone else’s values, or a passive tool waiting to accomplish someone else’s agenda.
+ + +
What do these responses all have in common? First, they all delink math and code from culture and politics. Either the link is explicitly denied (responses 1-3) or the link is acknowledged before being disavowed (responses 4-7). So while the non-denial responses (4-7) seem more enlightened, given how they admit culture and politics, they remain particularly pernicious since they divert attention elsewhere. They suggest that we investigate the essence or non-essence of math, that we focus on math’s use, or on the subjects of math — anything to avoid looking at math itself.
Overall I see this as a kind of “fear of media.” Whether in denial or acknowledgement, all of the responses work to undermine the notion that math, code, logic, or software are, or could be, a medium at all. If math were a full-fledged medium, one would need to attend to its affordances, its forms and structures, its genres, its modes of signification, its various lapses and slippages, and all the many other qualities and capacities (active or passive) that make up a mode of mediation.
Ironically this fear of media tends to perpetuate stereotypes rather than remedy them. For instance, the “neutral vessel” is an ancient trope for female sexuality going back at least to Aristotle if not earlier, as are neutral media substrates more generally (matter as mater/mother, feminine substrates receiving masculine form, and so on). And the act of “injecting ethics” or “embedding values” into an otherwise passive, receptive technology resembles a kind of insemination. In other words “fear of media” also means “fear of the feminine.”
Yet most significantly, all the above responses favor incidental bias over essential bias. None of them asserts any sort of specific quality inherent to the nature of math or code. Hence the question remains: do math and code contain an essential bias, and, if so, what is it? Not that long ago affirmative answers would have easily been forthcoming. Rationality is an “iron cage” (Max Weber). Abstraction perpetuates alienation (Karl Marx). Discrete binaries are heteronormative (Judith Butler). Still, the notion that mathematics contains an essential bias has slipped away in recent years, replaced by other arguments.
My answer is also affirmative, only the explanation is a bit different. I maintain — and will need to elaborate further in a future post — that mathematics has been defined since the ancients through an elemental typing (or gendering), and that within such typing there exists a general segregation or prohibition on the mixing of types, and that the two core types themselves (geometry and arithmetic) are mutually intertwined using notions of hierarchy, foreignness, priority, and origin. Given the politicized nature of such a scenario — gendering, segregation, hierarchy, origin — only one conclusion is possible, that whatever incidental biases it may bear, mathematics also contains an essential bias. Any analysis of the culture and politics of math and code will need to address this core directly, if not now then soon.
by Himanshu Damle
If it is true that string theory cannot accommodate stable dark energy, that may be a reason to doubt string theory. But it may equally be a reason to doubt dark energy – that is, dark energy in its most popular form, called a cosmological constant. The idea originated in 1917 with Einstein and was revived in 1998 when astronomers discovered that not only is spacetime expanding – the rate of that expansion is picking up. The cosmological constant would be a form of energy in the vacuum of space that never changes and counteracts the inward pull of gravity. But it is not the only possible explanation for the accelerating universe. An alternative is “quintessence,” a field pervading spacetime that can evolve. According to Cumrun Vafa of Harvard, “Regardless of whether one can realize a stable dark energy in string theory or not, it turns out that the idea of having dark energy changing over time is actually more natural in string theory. If this is the case, then one can measure this sliding of dark energy by astrophysical observations currently taking place.”
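The contrast between a constant and an evolving dark energy is usually stated through the equation-of-state parameter. The following is a standard textbook summary, including a common phenomenological convention (the CPL parameterization); none of it is specific to Vafa’s papers:

```latex
% Dark energy equation-of-state parameter:
w \equiv \frac{p}{\rho}
% Cosmological constant: w = -1, fixed for all time.
% Quintessence: w evolves; a common phenomenological form is the
% Chevallier-Polarski-Linder (CPL) parameterization,
w(a) = w_0 + w_a\,(1 - a),
% where a is the cosmic scale factor (a = 1 today), and
% (w_0, w_a) = (-1, 0) recovers the cosmological constant.
% Accelerated expansion requires w < -1/3, by the Friedmann
% acceleration equation:
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p).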
So far all astrophysical evidence supports the cosmological constant idea, but there is some wiggle room in the measurements. Upcoming experiments such as Europe’s Euclid space telescope, NASA’s Wide-Field Infrared Survey Telescope (WFIRST) and the Simons Observatory being built in Chile’s desert will look for signs that dark energy was stronger or weaker in the past than in the present. “The interesting thing is that we’re already at a sensitivity level to begin to put pressure on [the cosmological constant theory],” says Paul Steinhardt of Princeton University. “We don’t have to wait for new technology to be in the game. We’re in the game now.” And even skeptics of Vafa’s proposal support the idea of considering alternatives to the cosmological constant. “I actually agree that [a changing dark energy field] is a simplifying method for constructing accelerated expansion,” says Eva Silverstein of Stanford University. “But I don’t think there’s any justification for making observational predictions about the dark energy at this point.”
Quintessence is not the only other option. In the wake of Vafa’s papers, Ulf Danielsson, a physicist at Uppsala University, and his colleagues proposed another way of fitting dark energy into string theory. In their vision our universe is the three-dimensional surface of a bubble expanding within a higher-dimensional space. “The physics within this surface can mimic the physics of a cosmological constant,” Danielsson says. “This is a different way of realizing dark energy compared to what we’ve been thinking so far.”
FREE ONLINE BOOKS
1. Deep Learning by Yoshua Bengio, Ian Goodfellow and Aaron Courville
2. Neural Networks and Deep Learning by Michael Nielsen
3. Deep Learning by Microsoft Research
4. Deep Learning Tutorial by LISA lab, University of Montreal
COURSES
1. Machine Learning by Andrew Ng in Coursera
2. Neural Networks for Machine Learning by Geoffrey Hinton in Coursera
3. Neural networks class by Hugo Larochelle from Université de Sherbrooke
4. Deep Learning Course by CILVR lab @ NYU
5. CS231n: Convolutional Neural Networks for Visual Recognition On-Going
6. Probabilistic Graphical Model by Daphne Koller in Coursera
7. Kevin Duh Class for Deep Net Deep Learning and Neural Network
VIDEO AND LECTURES
1. How To Create A Mind by Ray Kurzweil - an inspiring talk
2. Deep Learning, Self-Taught Learning and Unsupervised Feature Learning by Andrew Ng
3. Recent Developments in Deep Learning by Geoff Hinton
4. The Unreasonable Effectiveness of Deep Learning by Yann LeCun
5. Deep Learning of Representations by Yoshua Bengio
6. Principles of Hierarchical Temporal Memory by Jeff Hawkins
7. Machine Learning Discussion Group - Deep Learning w/ Stanford AI Lab by Adam Coates
8. Making Sense of the World with Deep Learning by Adam Coates
9. Demystifying Unsupervised Feature Learning by Adam Coates
10. Visual Perception with Deep Learning by Yann LeCun
PAPERS
1. ImageNet Classification with Deep Convolutional Neural Networks
2. Using Very Deep Autoencoders for Content Based Image Retrieval
3. Learning Deep Architectures for AI
4. CMU’s list of papers
TUTORIALS
1. UFLDL Tutorial 1
2. UFLDL Tutorial 2
3. Deep Learning for NLP (without Magic)
4. A Deep Learning Tutorial: From Perceptrons to Deep Networks
DATASETS
1. MNIST Handwritten digits
2. Google House Numbers from street view
3. CIFAR-10 and CIFAR-100
4. Tiny Images 80 Million tiny images
5. Flickr Data 100 Million Yahoo dataset
6. Berkeley Segmentation Dataset 500
MISCELLANEOUS
1. Google Plus - Deep Learning Community
2. Caffe Webinar
3. 100 Best Github Resources in Github for DL
4. Caffe DockerFile
5. TorontoDeepLearning convnet
6. Vision data sets
7. Fantastic Torch Tutorial My personal favourite. Also check out gfx.js
8. Torch7 Cheat sheet
Perspectives from Leading Practitioners
Machine intelligence has been the subject of both exuberance and skepticism for decades. The promise of thinking, reasoning machines appeals to the human imagination, and more recently, the corporate budget. Beginning in the 1950s, Marvin Minsky, John McCarthy and other key pioneers in the field set the stage for today’s breakthroughs in theory, as well as practice. Peeking behind the equations and code that animate these peculiar machines, we find ourselves facing questions about the very nature of thought and knowledge. The mathematical and technical virtuosity of achievements in this field evoke the qualities that make us human: everything from intuition and attention to planning and memory. As progress in the field accelerates, such questions only gain urgency.
Heading into 2016, the world of machine intelligence has been bustling with seemingly back-to-back developments. Google released its machine learning library, TensorFlow, to the public. Shortly thereafter, Microsoft followed suit with CNTK, its deep learning framework. Silicon Valley luminaries recently pledged up to one billion dollars towards the OpenAI institute, and Google developed software that bested Europe’s Go champion. These headlines and achievements, however, only tell a part of the story. For the rest, we should turn to the practitioners themselves.

In the interviews that follow, we set out to give readers a view into the ideas and challenges that motivate this progress. We kick off the series with Anima Anandkumar’s discussion of tensors and their application to machine learning problems in high-dimensional space and non-convex optimization. Afterwards, Yoshua Bengio delves into the intersection of natural language processing and deep learning, as well as unsupervised learning and reasoning. Brendan Frey talks about the application of deep learning to genomic medicine, using models that faithfully encode biological theory. Risto Miikkulainen sees biology in another light, relating examples of evolutionary algorithms and their startling creativity. Shifting from the biological to the mechanical, Ben Recht explores notions of robustness through a novel synthesis of machine intelligence and control theory. In a similar vein, Daniela Rus outlines a brief history of robotics as a prelude to her work on self-driving cars and other autonomous agents. Gurjeet Singh subsequently brings the topology of machine learning to life. Ilya Sutskever recounts the mysteries of unsupervised learning and the promise of attention models. Oriol Vinyals then turns to deep learning vis-à-vis sequence-to-sequence models and imagines computers that generate their own algorithms.
To conclude, Reza Zadeh reflects on the history and evolution of machine learning as a field and the role Apache Spark will play in its future.
It is important to note that this report can only cover so much ground. With just ten interviews, it is far from exhaustive: indeed, for every such interview, dozens of other theoreticians and practitioners successfully advance the field through their efforts and dedication. This report, its brevity notwithstanding, offers a glimpse into this exciting field through the eyes of its leading minds.
by Himanshu Damle
The formation of black holes can be understood, at least partially, within the context of general relativity. According to general relativity, gravitational collapse leads to a spacetime singularity. But this spacetime singularity cannot be adequately described within general relativity, because the equivalence principle of general relativity is not valid for spacetime singularities; therefore, general relativity does not give a complete description of black holes. The same problem exists with regard to the postulated initial singularity of the expanding cosmos. In these cases, quantum mechanics and quantum field theory also reach their limits; they are not applicable to highly curved spacetimes. At a certain degree of curvature (the famous Planck scale), gravity becomes as strong as the other interactions; then it is no longer possible to ignore gravity in a quantum field theoretical description. So there exists no theory able to describe gravitational collapses, or to explain why (although they are predicted by general relativity) they don’t happen, or why there is no spacetime singularity. And the real problems start if one brings general relativity and quantum field theory together to describe black holes. Then it comes to rather strange contradictions, and the mutual conceptual incompatibility of general relativity and quantum field theory becomes very clear:
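The “Planck scale” invoked here can be made concrete: it is conventionally expressed through the Planck units, combinations of ħ, G and c. These are standard textbook values, not specific to this text:

```latex
% Planck length, time, and energy, built from \hbar, G, and c:
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{m}, \qquad
t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4 \times 10^{-44}\,\mathrm{s}, \qquad
E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2 \times 10^{19}\,\mathrm{GeV}.
```

At curvature radii approaching ℓ_P (equivalently, energies approaching E_P), gravitational and quantum effects become comparable in strength, which is why neither theory alone suffices there.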
According to general relativity, black holes are surrounded by an event horizon. Material objects and radiation can enter the black hole, but nothing inside its event horizon can leave this region, because the gravitational pull is strong enough to hold back even radiation; the escape velocity is greater than the speed of light. Not even photons can leave a black hole. Black holes have a mass; in the case of the Schwarzschild metric, they have exclusively a mass. In the case of the Reissner-Nordström metric, they have a mass and an electric charge; in the case of the Kerr metric, a mass and an angular momentum; and in the case of the Kerr-Newman metric, mass, electric charge and angular momentum. These are, according to the no-hair theorem, all the characteristics a black hole has at its disposal. Let’s restrict the argument in what follows to the Reissner-Nordström metric, in which a black hole has only mass and electric charge. In the classical picture, the electric charge of a black hole becomes noticeable in the form of a force exerted on an electrically charged probe outside its event horizon. In the quantum field theoretical picture, interactions are the result of the exchange of virtual interaction bosons – in the case of an electric charge, virtual photons. But how can photons be exchanged between an electrically charged black hole and an electrically charged probe outside its event horizon, if no photon can leave a black hole – which can be considered a definition of a black hole? One could think that virtual photons, which mediate the electrical interaction, are able (in contrast to real photons, which represent radiation) to leave the black hole. But why? There is no good reason and no good answer for that within our present theoretical framework.
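The “escape velocity greater than the speed of light” characterization can be made quantitative with a Newtonian back-of-the-envelope calculation, which happens to reproduce the exact Schwarzschild radius of general relativity. A minimal sketch using standard constants (the function name is mine, chosen for illustration):

```python
# Setting the Newtonian escape velocity v = sqrt(2*G*M/r) equal to the
# speed of light c and solving for r gives r_s = 2*G*M/c**2, which
# coincides with the Schwarzschild radius derived from general relativity.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius (in metres) inside which not even light can escape."""
    return 2 * G * mass_kg / C**2

# One solar mass corresponds to an event horizon of roughly 3 km;
# the radius scales linearly with mass.
print(f"{schwarzschild_radius(M_SUN) / 1000:.2f} km")
```

The coincidence is only numerical: the Newtonian derivation says nothing about the one-way causal structure of the horizon, which is precisely what generates the virtual-photon puzzle above.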
The same problem exists for the gravitational interaction, for the gravitational pull of the black hole exerted on massive objects outside its event horizon, if the gravitational force is understood as an exchange of gravitons between massive objects, as the quantum field theoretical picture in its extrapolation to gravity suggests. How could (virtual) gravitons leave a black hole at all?
There are three possible scenarios resulting from the incompatibility of our assumptions about the characteristics of a black hole, based on general relativity, and on the picture quantum field theory draws with regard to interactions:
(i) Black holes don’t exist in nature. They are a theoretical artifact, demonstrating the asymptotic inadequacy of Einstein’s general theory of relativity. Only a quantum theory of gravity will explain where the general relativistic predictions fail, and why.
(ii) Black holes exist, as predicted by general relativity, and they have a mass and, in some cases, an electric charge, both leading to physical effects outside the event horizon. Then we would have to explain how these effects are realized physically. Either the quantum field theoretical picture of interactions is fundamentally wrong, or we would have to explain why virtual photons behave, with regard to black holes, completely differently from real radiation photons. Or the features of a black hole – mass, electric charge and angular momentum – would be features imprinted during its formation onto the spacetime surrounding the black hole or onto its event horizon. Then interactions between a black hole and its environment would rather be interactions between the environment and the event horizon, or even interactions within the environmental spacetime.
(iii) Black holes exist as the product of gravitational collapses, but they do not exert any effects on their environment. This is the craziest of all scenarios. For this scenario, general relativity would have to be fundamentally wrong. In contrast to the picture given by general relativity, black holes would have no physically effective features at all: no mass, no electric charge, no angular momentum, nothing. And after the formation of a black hole, there would be no spacetime curvature, because no mass remains. (Or the spacetime curvature would have to result from other effects.) The mass and the electric charge of objects falling (casually) into a black hole would be irretrievably lost. They would simply disappear from the universe when they pass the event horizon. Black holes would not exert any forces on massive or electrically charged objects in their environment. They would not pull any massive objects into their event horizon and thereby increase their mass. Moreover, their event horizon would mark a region causally disconnected from our universe: a region outside of our universe. Everything falling casually into the black hole, or thrown intentionally into this region, would disappear from the universe.
World renowned physicist Dr. Michio Kaku made a shocking confession on live TV when he admitted that HAARP is responsible for the recent spate of hurricanes.
In an interview aired by CBS, Dr. Kaku admitted that recent ‘man-made’ hurricanes have been the result of a government weather modification program in which the skies were sprayed with nano particles and storms were then “activated” through the use of “lasers”.
In the interview (below), Michio Kaku discusses the history of weather modification, before the CBS crew stop him in his tracks. The High-Frequency Active Auroral Research Program (HAARP) was created in the early 1990’s as part of an ionospheric research program jointly funded by the U.S. Air Force, the U.S. Navy, the University of Alaska Fairbanks, and the Defense Advanced Research Projects Agency (DARPA).
According to government officials, HAARP allows the military to modify and weaponize the weather, by triggering earthquakes, floods, and hurricanes.
Anongroup.org reports: Among a plethora of academic papers and patents about altering the weather with electromagnetic energy and conductive particles in the stratosphere, research published in the Proceedings of the National Academy of Sciences said that “laser beams” can create plasma channels in air, causing ice to form. According to Professor Wolf Kasparian:
Under the conditions of a typical storm cloud, in which ice and supercooled water coexist, no direct influence of the plasma channels on ice formation or precipitation processes could be detected.
Under conditions typical for thin cirrus ice clouds, however, the plasma channels induced a surprisingly strong effect of ice multiplication.
Within a few minutes, the laser action led to a strong enhancement of the total ice particle number density in the chamber by up to a factor of 100, even though only a 10⁻⁹ fraction of the chamber volume was exposed to the plasma channels.
The newly formed ice particles quickly reduced the water vapor pressure to ice saturation, thereby increasing the cloud optical thickness by up to three orders of magnitude.
Researchers seeking to understand geoengineering have identified defense contractors such as Raytheon and BAE Systems, and corporations such as General Electric, as being heavily involved with it. According to Peter A. Kirby, Massachusetts has historically been a center of geoengineering research.
With the anomalous hurricanes currently ravaging the Americas, floods destroying India, and wildfires destroying the Pacific Northwest, weather warfare is a topic in the public consciousness right now.
The drought is government-made. HAARP can influence weather anywhere on earth. Here is more proof. If you are tired of the drought and the dry weather conditions, consider that the government is manipulating the weather for purposes that are not in your best interest.
Slavoj Zizek - Nature does not exist
Controversial Slovenian philosopher, Slavoj Žižek, talks about the ongoing ecological crisis and concludes that Nature doesn't exist.
by David Roden
In the philosophy of technology, substantivism is a critical position opposed to the common sense philosophy of technology known as “instrumentalism”. Instrumentalists argue that tools have no agency of their own – only tool users. According to instrumentalism, technology is a mass of instruments whose existence has no special normative implications. Substantivists like Martin Heidegger and Jacques Ellul argue that technology is not a collection of neutral instruments but a way of existing and understanding entities which determines how things and other people are experienced by us. If Heidegger is right, we may control individual devices, but our technological mode of being exerts a decisive grip on us: “man does not have control over unconcealment itself, in which at any given time the real shows itself or withdraws” (Heidegger 1978: 299).
For Ellul, likewise, technology is not a collection of devices or methods which serve human ends, but a nonhuman system that adapts humans to its ends. Ellul does not deny human technical agency but claims that the norms according to which agency is assessed are fixed by the system rather than by human agents. Modern technique, for Ellul, is thus “autonomous” because its principles of action are internal to it (Winner 1977: 16). The content of this prescription can be expressed as the injunction to maximise efficiency: a principle overriding conceptions of the good adopted by human users of technical means.
In Chapter 7 of Posthuman Life, I argue that a condition of technical autonomy, self-augmentation, is in fact incompatible with technical autonomy. “Self-augmentation” refers to the propensity of modern technique to catalyse the development of further techniques. Thus while technical autonomy is a normative concept, self-augmentation is a dynamical one.
I claim that technical self-augmentation presupposes the independence of techniques from culture, use and place (technical abstraction). However, technical abstraction is incompatible with the technical autonomy implied by traditional substantivism, because where techniques are relatively abstract they cannot be functionally individuated. Self-augmentation can only operate where techniques do not determine how they are used. Thus substantivists like Ellul and Heidegger are wrong to treat technology as a system that subjects humans to its strictures. Self-augmenting Technical Systems (SATS) are not in control because they are not subjects or stand-ins for subjects. However, I argue that there are grounds for claiming that technology may be beyond our capacity to control.
This hypothesis is, admittedly, quite speculative but there are four prima facie grounds for entertaining it:
If enough of 1-4 hold, then technology is not in control of anything but is largely out of our control. Yet there remains something right about the substantivist picture, for technology exerts a powerful influence on individuals, society, and culture, even if not an “autonomous” one. However, since technology is self-augmenting and thus abstract, it is counter-final: it has no ends of its own and tends to render human ends contingent by altering the material conditions on which our normative practices depend.
Esposito, E., 2013. The structures of uncertainty: performativity and unpredictability in economic operations. Economy and Society, 42(1), pp.102-129.
Ellul, J. 1964. The Technological Society, J. Wilkinson (trans.). New York: Vintage Books.
Heidegger, M. 1978. “The Question Concerning Technology”. In Basic Writings, D. Farrell Krell (ed.), 283–317. London: Routledge & Kegan Paul.
Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.
Winner, L. 1977. Autonomous Technology: Technics-out-of-control as a Theme in Political Thought. Cambridge, MA: MIT Press.
open culture - George Orwell Predicted Cameras Would Watch Us in Our Homes; He Never Imagined We’d Gladly Buy and Install Them Ourselves
Rouvroy/Stiegler - THE DIGITAL REGIME OF TRUTH: FROM THE ALGORITHMIC GOVERNMENTALITY TO A NEW RULE OF LAW
Steven Craig Hickman - Philip K. Dick, William Gibson and Science Experiments: Information from the Future