by Achim Szepanski
One should insist that paranoia is the mass-psychological fuel of neo-fascist and right-wing populist movements. Today they have chosen vulnerable foreigners as their suitable enemy object. And to anticipate it right away: the paranoia of right-wing populists and that of the state reinforce each other. But the paranoia held in latency remains first and foremost that of the state and its authorities, which have used data processing, cameras, satellites, computer spying and wiretapping to stage such a comprehensive observation of the population that we can easily speak of a paranoid political situation. Paranoia is further fuelled by the fact that, despite the overwhelming power and perfection of the technological means, not everything can be completely recorded (and must therefore continue to be monitored). Since global optical surveillance from the satellite’s viewpoint has become possible, unmonitored space remains only a mildly laughable atopos, for a surveillance satellite a few hundred kilometers away can send high-resolution real-time images of any place in the world, as top shots of pixel-exact detail, to the computer monitors of secret-service employees. The images arise from the calculations themselves; they are numerically programmed.
The paranoia we are talking about here implies the constant construction of fear, and of situations that all but invite it, based on the apprehension that there might be a totally endangered system constantly traversing the subject, a system that is, however, imagined rather than known. In this context, all perceptions serve the paranoiac only as signs of the existence of an invisible power, whereby the excessive interpretations of power, which ultimately always result in power remaining untraceable, end up generating paranoid delusion. Paranoia here is not to be understood pathologically, but as a normalizing and, to a certain extent, institutionalized procedure that can pass through different levels of intensity, so that all in all one must speak of a functional paranoia. One can only talk and write about paranoia if it is referred to the delusion of interpretation (Lacan), which consists in reading a meaning into anything and everything. Whether paranoia ranges from the unformed to the pathological, as Jacques Lacan assumes, or can even be enriched with rationalizing justifications, it remains, once it is collectively attuned, a procedure of normality, a patho-logic with various degrees and gradations of normality, whereby the constant exaggeration that today is carried out above all medially and in the social networks accommodates paranoia very well. It seems as if prevention, prophecy and paranoia together form a tendency towards “self-fulfilling prophecy”: they produce what they anticipate, both virtually and as an imaginary expectation, while the effect of paranoid actions on the social generates thoroughly dangerous reality effects.
The production of paranoia goes through several stages. It begins in an uncertain and tense atmosphere in which one only knows that something is wrong. There follows a mood in which situations suddenly make sense and the world is filled with meaning, and it ends in a third phase in which things and situations are charged with excess meaning. Paranoia therefore always implies a delusion of interpretation and meaning that consolidates itself in an immovable symbolic and imaginary order establishing relatively consistent delusional aggregates. And thus an old system of symbols is replaced by a certain number of collective delusional aggregates with which an, albeit fragile, symbolic order can be produced, one that today is constantly transformed in the media. Today, for example, an existing factual situation is literally flooded with data and information streams and fake convulsions via all media channels, so that because of this excess alone every fact can easily be reinterpreted. It is often difficult to tell what, in view of the oversupply of information, is targeted disinformation and what is merely inaccurate screening. This makes the processing problem itself ever more precarious, so that ultimately only the permanent reconstruction of the risky events helps, in which, however, all the conditions that led to an event are set by the reconstruction work itself within the framework of a teleological necessity. Paranoiacs either construct conspiracies or claim the gift of being able to see through the dark secrets of others before these are even set into the world, so that the paranoid judgment also corresponds to an anticipatory, preventive logic by which a suspect is considered guilty from the outset. (Cf. Deleuze/Guattari 1992: 393) Recent events such as Brexit, Trump’s fake-news politics or the notorious lying politics of right-wing populism show that statements in this post-factual situation sometimes no longer even require the appearance of truth. Crime rates in Germany, for example, are knowingly misattributed to refugees, which neither creates a sense of shame nor leads to serious consequences for the media that disseminate the post-factual.
In this respect, it is not surprising that the incessant generation and interpretation of data is immanent to the post-factual age; this interpretation, however, is by no means arbitrary (and without any meaning), but takes on a paranoid form due to institutionally created contingencies. This, in turn, is technologically underpinned by the automated evaluation of large amounts of data for the purpose of recognizing patterns, regularities and hidden laws in the data streams. (Doll 2016: 312) This liquefaction of meaning in the endless data mush, the result of the permanent search for patterns and correlations in the data sets produced, does not mean, as Baudrillard assumed with his simulation theory, that every meaning disappears and the signs circulate indifferently only in the as-if (Baudrillard 1982), but rather that the delusion of interpretation becomes more and more intense, precisely because something still has to be meant and interpreted, regardless of what is meant in detail. In the last instance, this is grounded in future-oriented capitalization, which encompasses both money and bits in their interchangeability as well as an accretion of capital that calculates the future, solely for the purpose of staging anything and everything as a financial investment which, regardless of the underlying value to which it allegedly refers, is supposed to generate nothing but returns. (Cf. Vief 1991: 132f.)
With regard to the exchangeability of bits, the computer proves to be a sign transformer that processes pure information: not without content, but with arbitrary and exchangeable content. Just as money must be exchangeable for commodities, no matter which, bits must mean something, no matter what they mean. (ibid.) Money and bits indicate communication exclusively under the aspect of the negation of any specific meaning. Or, to put it another way, under the exclusion of every meaning except that something must incessantly be meant, so that the sheer fact of meaning, or the crossed-out perspective of meaning, clearly comes to the fore here. For this reason, the current data invasion does not lead to a loss of meaning, but to an overproduction of meaning that is complementary to the indifference of every meaning posited by capital; yet something must still be meant, otherwise the system would fall apart. This kind of overproduction of meaning constitutes the real loss of meaning.
Thus, the creative capacity of high-tech paranoia today is characterized less by a lack of orienting knowledge than by the overproduction of meaning that results from the game that there is any meaning at all. When meanings become interchangeable in multiple circulating artificial interpretation processes, out of which the struggles over interpretation first arise, a mad search for meaning inevitably follows. While capital can certainly live with this kind of loss of meaning through the overproduction of meanings, this is not so easily possible for the state, because, as we have seen, with it the equivalence of all meanings is called into question if the state is to be designated as the viewpoint of all viewpoints.
One could say with Lacan that the unfiltered data stream is the realm of the real, while information and metadata represent reality, a world made intelligible by cognitive filters and technological infrastructures, itself composed of the registers of the imaginary and the symbolic.1 There is an unwavering accumulation of data, information and opinions which, especially in the social networks, react almost hysterically to one another, the same and yet different in every moment, so as to produce delusional aggregates and illusionary refuse of every kind. But none of this is open in the sense that an observer sets the difference between before/after in such a way that he designates neither one side nor the other but the difference itself, namely the present, which in turn is itself a non-place and thus remains open. Instead, the alternativelessness of the system is obtrusively preached along an ever-same present that stretches like chewing gum, and which therefore remains bleak with regard to its reference to the future, since capital as the all-constituting system is about borrowing from or accessing an already modified becoming of the future, which in turn affects the present, insofar as the present is determined by what it is supposed to have been in the future and will not have been. In the sense of the future perfect (Futur II), the system is thus closed, but it thoroughly forgets that the future remains black and contingent.
The postmedial condition is characterized not by the radical abolition of the old media, but by their transformation and mixing with new media. Every thing and every “event” – however much singularity may be insisted upon, as the propagandists of the society of singularities do – is today not only integrated into media networks in which objects and subjects, opinions and counter-opinions circulate endlessly, but is often first constructed in the clouds of the social networks.
And these networks are permanently searched and exploited by the personalized and personalizing algorithms of the data industry. The subject groups on the Internet by no means indulge in free participation and enjoy re-singularization; rather, the media are adapted to the new conditions of data production. In the social media, the divided and at the same time atomized user, whose activity is made productive in unpaid time as a source of data, is fed into algorithmic governance, whereby this kind of algorithmic data management leads to new forms of cerebral devastation.
It is clear that the mania for interpretation must constantly serve the manic search for signs and their relations, in order also to underpin the political argumentations circulating in the media and state machines. This process can be set in motion at the political level because today there is a) the notorious collection and processing of data by the state (combined with the population’s fear that a technical super-player might develop), b) automatic pattern recognition in the data, which amounts to an algorithmization of the delusion of interpretation, and c) the individual’s fear that this state apparatus could become autonomous. The population’s mistrust of the state mirrors the state’s mistrust of the external and internal enemy. State paranoia is characterized by this reciprocity. (Doll 2016: 308) The secret services in particular rely on the paranoid mode they have breathed into their computer programs, in which excessive data processing is carried out in order to recognize patterns and correlations in the metadata and thus literally to construct enemies. If it is assumed that the opposing side always wants to inflict the worst on you and that the enemy sits everywhere, then a kind of paranoid state warfare takes place by means of a permanent data paranoia. (ibid.)
Finally, paranoia is incorporated into program instructions at the level of algorithmic data processing itself, so that computers today are subject to the same tendency as states – they find anything suspicious because they are programmed to search for dangerous data clusters. The algorithms necessary for data filtering use specific patterns. Thus, state paranoia itself makes the search algorithms of data processing paranoid, i.e. all data and metadata are treated as those of an enemy.
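The logic described here – a filter that treats all data as that of an enemy – can be sketched as a toy example. Everything in the following code is invented for illustration (the patterns, the scoring, the threshold); no real surveillance system is being reproduced. The point is only that a pattern-matching filter with a sufficiently low threshold flags nearly everything:

```python
import re

# Hypothetical "suspicious" patterns, invented for this sketch.
PATTERNS = [r"\bmeet\b", r"\btransfer\b", r"\bborder\b", r"\d{4,}"]

def suspicion_score(message: str) -> int:
    # Count how many of the patterns match; each match raises the score.
    return sum(1 for p in PATTERNS if re.search(p, message, re.IGNORECASE))

def flag(messages, threshold=1):
    # With a threshold of 1, a single pattern match suffices:
    # the filter treats nearly all traffic as potentially hostile.
    return [m for m in messages if suspicion_score(m) >= threshold]

msgs = [
    "let's meet at noon",
    "invoice 20319 attached",
    "happy birthday!",
    "transfer complete",
]
print(flag(msgs))  # everything but the birthday note is flagged
```

The sketch makes the paradox visible: the more generic the patterns and the lower the threshold, the more the filter's output converges on "everyone is a suspect" – which is precisely the paranoid treatment of all data described above.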
Big Data has its origins in the manic production of patterns and correlations in the financial markets. If Big Data is now used in governmental contexts, it is a matter of producing political meanings, especially through the search for patterns and correlations in the corresponding data columns, in order, building on this, to find “answers” to questions such as how the population behaves as a social aggregate and how individuals react to further governmental impositions. However, data processing can also lead to the establishment of random correlations between different components, whose importance is overestimated or without any significance, up to the perception of patterns and correlations where in fact none are to be found at all. Data paranoia constantly generates this kind of clustering illusion, especially as data collection continues to grow quantitatively. (ibid.: 313) And owing to correlations found by chance, the wrong people have been targeted by the intelligence services.
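The statistical mechanism behind this clustering illusion can be demonstrated with a minimal sketch (the function names `pearson` and `spurious_pairs` are mine, not drawn from any cited source): generate completely independent noise series and count how many pairs nevertheless correlate above a threshold. Because the number of pairs grows quadratically with the number of series collected, chance "patterns" multiply as the data set grows:

```python
import random
import statistics

def pearson(xs, ys):
    # Plain Pearson correlation coefficient of two equal-length series.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spurious_pairs(n_series, n_points, threshold=0.5, seed=1):
    # Draw n_series mutually independent pure-noise series and count
    # the pairs that correlate above the threshold purely by chance.
    rng = random.Random(seed)
    series = [[rng.gauss(0, 1) for _ in range(n_points)]
              for _ in range(n_series)]
    hits = 0
    for i in range(n_series):
        for j in range(i + 1, n_series):
            if abs(pearson(series[i], series[j])) > threshold:
                hits += 1
    return hits
```

Since every series here is noise by construction, every "hit" is a pattern where none exists – exactly the clustering illusion the text describes; and the more series are collected, the more such hits there are to interpret.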
Cloud computing, on the other hand, speculates on the future of data processing. Whether the collected data will really make sense in the future, after it has been automatically processed and further collected data has been added, will only become clear at a later point in time. Paranoia here indicates that normativity is derived from the construction of a future that could always be different, and for this very reason must, on the one hand, continue to be pursued and, on the other, be normalized, i.e. dealt with preventively. State paranoia is thus closely related to prevention, for it has its own anticipatory aspects. Negatively connoted future surprises are to be avoided by recognizing them already in the uncertain present. The future already exists today as a virtual ground, in the form of a current risk that legitimizes the use of pre-emptive and partly already automated forms of government, and especially of repressive instruments. Government now proceeds via a digital, technological performativity that has arisen through algorithmic governance, which itself proclaims that it is set to function on autopilot.
Beyond all this, the state feeds paranoia – according to Canetti a disease of power – with entire registers of affect modulation. Thus colors can serve as collective warning signals for the population: as could be observed in George W. Bush’s anti-terror measures after September 11, through the use of certain colors as warning systems, central governmental functions that amount to prevention can be coupled directly to the nervous system of each individual. “Direct perception” is an amedial, asymbolic perception (Massumi 2010: 106) based on the future condensing into an emphatic experience of the present. In retrospect, the event 9/11 thus proves to be a predetermined breaking point of power, with which new instances of interpretation were created in order to keep the coming abyss of catastrophe, which remains a construction, running through the hyper-production of contingency and conspiracy narratives. This happens above all by shifting the belief in coherent causalities in politics into a mode of latency, since the masses’ disenchantment with politics is not so much a fact as it is frequently conjured up, so that politics must be made less with discourses than with moods. The result is a veritable market of moods, less fluid than the capital market, which is why regulatory mechanisms must be used that the capital market does not know.
In the course of the rise of right-wing populist movements, the ever reactionary and frothing fear of social decline among some sections of the population, especially in Europe, has long since turned into a passion, and with every call for even higher fences at the European borders this collectivized passion reaches an even higher level of libidinally charged paranoia, which – and this is noteworthy – in turn congeals in new state, infrastructural and architectural forms as well as in mechanisms of exclusion. This mass-intensive paranoia thus ultimately remains the state’s form of governing the insecurity of security, which requires the development of a social police that today ensures that more and more people worldwide are marked as virtual enemies and terrorists. It must therefore be understood why the new forms of social policing and militarization in the social field cannot be separated from the discursive construction of the new figure of the enemy. This form of enemy construction creates a monstrosity that culminates in the general virtualization of an unspecific and unqualified enemy, who must be constantly updated through accelerated procedures of qualification and continuous re-qualification, at the cost of a growing criminalization of all those social practices that do not conform to the institutions of capital and the state. At present, collective paranoia flows continuously through symbolic and imaginary monuments, through sluices and into canalization systems that are constantly fed with desires oriented towards a neoliberally propagated, possessive individualism without the individual, which rages above all in the middle class, but also among the dependent.
In the middle classes, this individualism combines effortlessly with a form of nationalism fed by a kind of victim logic that serves exclusively to demand a larger share of the general wealth, even though the failure of one’s own efforts is due not to the state and capital, but to the fact that the individual is not the result of its own efforts.
A theorist of the Frankfurt School, Franz Neumann, blamed the interplay of external fear, crisis and internal fear for citizens’ susceptibility to a politics of moods. (Neumann 1967: 266ff.) External fear reacts to economic and political crisis phenomena; at this point the moods of the right-wing populist movements respond to the neoliberal reduction of politics to a productive struggle of economic interests, founded on the capitalization of all social relations, in order to serve the fear (of the middle classes) of losing privileges and economic standards. Capitalization here is, unconsciously, the economic cancer that eats away at the disposable incomes of the middle class and the precariat. In flight – which today is not so much a flight from public life, as Neumann still assumed, but rather a flight into the possible – the individuals, into whom it is constantly instilled that their life is supposed to be nothing more than the multiplication of their own small capital, encounter the inner monster of a hyper-narcissism whose courses of psychological action grotesquely resemble the catastrophe theorems staged in the media field. Thus one willingly leaves the badly designed world behind to discover an ego that urges a return to bestiality and is promoted with the construction of the financially risk-taking subject, yet which for the greater part of the population is identical with debt, through which it is absorbed by the financialized economy without leaving bubbles on the sea of debt. But the freedom of the citizens continues to be mobilized as a promise, a freedom that often enough circulates in the hapless cycles of debt, so that the latent fear that leads to a state of exhaustion must be transformed into a political sadism directed against the poor and against foreigners.
In a state of powerlessness, such a deconstructed self sends out signals on which the right-wing populist movements can easily rely, using the external fear that reacts to the impending loss of privileges and is projected onto the refugees, while at the same time prescribing political therapy for the moods of individual failure and well-being.
External fear is transposed into a political sadism that knows no pity for the poor and the excluded and must therefore be directed against the surplus population in the South (and West). The right-wing populist therapeutics of political sadism can thus tie in with the fearful and at the same time cynical ego-interest of the financially risk-taking subject who serves the eros of economic selection. The endless repetition of stereotypical, projective formulas, of which Adorno already spoke, is the discursive fuel of right-wing populism, which today attempts to steer neoliberal politics in the direction of a free-floating unease. Conversely, the neoliberal elite, in order to stifle any desire for change in the population, needs to a certain extent right-wing populism, which it also promotes, as a means of generating fear and of mobilization to secure its power. The associated transformation of political affairs into religious conflicts, fake news and wars of civilization was, incidentally, pursued from the outset by all the apparatuses of neoliberal power. So it is not surprising that the collectively organized paranoia is permanently enriched with identitarian delusions, fake news and delirious ideas in order finally to reach such a highly explosive state – a kairos, so to speak – in which the politics of feelings demands, for the sake of one’s own happiness, the genocide to be carried out against foreigners, the poor and the invaders, ultimately against the global surplus population.
In spite of the new situational mixture of folkish set pieces, xenophobia, racism and nationalism, as they are carried onto the streets especially in eastern Germany, the strategies remain classically oriented towards the seizure of state power – thus there are clear hierarchies, functions and divisions of labour among the right-wing populists, the statesmanlike AfD cadres and the mobile executive of neo-Nazis and right-wing hooligans, as well as entangled secret services, a partly already infected police force and a grumbling “people” – all in all, these are the situational, machinic chains of a proto-fascist overall structure.
The search for meaning and the delusion of interpretation have the form of time, and time has the form of the search for meaning and the delusion of interpretation. Paranoia is the production and circulation of interpretation through the production of difference, tied to the shooting-through of the real (noise) with soft gaps and the shooting-through of intrusive durations with hard gaps.
by Achim Szepanski
If one follows the theory of technical objects as developed by the French theorist Gilbert Simondon (Simondon 2012), and then the statements of Frédéric Neyrat in the anthology Die technologische Bedingung (Hörl 2011), the task for today’s hypertechnicized societies is to fundamentally rethink the always already disturbed identity of nature and technology (there is neither a total integration of nature into technology, nor is technology to be understood as an expression of nature) by rethinking the machines or technical objects themselves. Technical objects, which by no means prolong human organs like prostheses or serve human beings merely as means of use, are first to be affirmed in their pure functionality, so that in their inconclusive supplementarity they can finally attain the status of coherent and at the same time individualized systems, whose localizations are embedded in complex machine associations or networks. (ibid.: 37) Almost simultaneously with Gilbert Simondon, Günther Anders spoke of machines as apparatuses – but of a world of apparatuses that had made the difference between technical and social designs obsolete and thus rendered the distinction between the two areas generally irrelevant. (Anders 1980: 110) According to Günther Anders, every single technical device is integrated into an ensemble, is itself only a device-part, a part in the system of devices – the apparatuses – with which it satisfies the needs of other devices on the one hand, and by its mere presence stimulates the need for new devices on the other. Anders writes: “What applies to these devices applies mutatis mutandis to all … To claim that this system of devices, this macro-device, is a ‘means’, that it is available to us for free use, would be completely pointless. The device system is our ‘world’. And ‘world’ is something other than ‘means’. 
Something categorically different.” (Anders 2002: 2) Or, to put it in Simondon’s terms: in view of our post-industrial situation we should speak of technical objects whose elements always form recursions and entertain inner resonances with each other, while at the same time the objects stand in external resonances with other technical objects, in order to be able to play out their own technicity in ensembles as open machines. In addition, many technical entities develop a plural functionality, executing several functions within a machine system instead of one, like the internal combustion engine, whose cooling fins take on the functions of cooling and of reinforcement when they counteract the deformation of the cylinder head. (Cf. Hörl 2011: 97) Simondon did not adopt the profoundly pessimistic view of post-industrial technologies found in Günther Anders’ work; rather, in those technical objects that elude the hylomorphic opposition of form and matter, as still conceived in the model of work qua matter to be formed by tools, Simondon identified precisely a possibility of technology approaching nature’s autonomy, a tendency that leads to the dynamic unity of the technical objects themselves, in that these and the other objects close themselves dynamically and, among other things, incorporate a part of the natural world by creating associated milieus, connecting their interior (resonance of different parts and multifunctionality of parts) with the exterior, with other objects, be they natural or artificial. Yet the technical object cannot completely separate itself from an excess of abstraction that distinguishes the so-called artificial, heteronomous object, and Simondon attributes the power of abstraction above all to the human being as his constitutive contribution to technology, who thereby prevents the technical objects from concretizing themselves in open structures and playing out their tendency towards autonomy. 
(Ibid.: 154) Simondon, however, is by no means tempted by the thesis that in a post-industrial future everything living must be rigorously subordinated to open technical ensembles; on the contrary, Simondon advocates a social concept of technical ensembles or open machine associations in which the human being coexists with the “society of technical objects”. But where man intervenes too strongly and dominates the technical, we are dealing with heteronomous artificial objects, whereas the technical object at least tends towards an autonomy (it cannot completely abandon abstraction), which includes the natural moment, i.e. the unity and consistency of a machine system. Paradoxically, for Simondon it is precisely the artificial that prevents technology from becoming natural. (ibid.: 154) According to Simondon, abstract artificiality is always due to a lack of technicality, whereas the technical object is to be concretized in coherent processes, whereby each local operation of the technical object is to be integrated into a comprehensive arrangement of the mechanical ensembles. (Hegel defines the concrete as that which includes the relational, while the abstract is regarded as one-sided or isolated. The terms “concrete” and “abstract” therefore do not designate types of entities, such as the material and immaterial, but are used to describe the way in which thinking is related to the entities. Thus the abstract can prove to be the most concrete and the concrete the most abstract. A materialist concept must be able to explain what constitutes the reality of a conceptually formed abstraction without hypostatizing this form. It must be able to show how abstractions are treated by social practices, the latter being more than just work processes that shape matter, when they ultimately reposition themselves in a very specific way, i.e. concretize themselves in relation to technical objects, as proposed by Simondon.) 
Thus the technical object always functions in associated milieus, i.e. it is connected with other technical objects or is sufficient unto itself, and in doing so it must always respect nature.
Simondon’s technical objects point to their embedding in network structures; already in the 1960s he foresaw the contemporary coupling of technical objects to the digital, information- and computation-intensive ecology of the new media, the dispositive of digital, transformational technologies, including a neo-subjectivity deformed, rendered non-intentional and distributed by machine speeds. Almost in tune with cybernetics, Simondon is aware that the machine is not used as or like a tool, but rather operated. Technical objects are neither prostheses of the human being, nor, conversely, can the human being be completely dissolved into a prosthesis of the machines. First of all, technical objects should be conceived purely in terms of their functionality, and this with regard to their explicable genesis, in the course of which, according to Simondon, they increasingly concretize (rather than abstract) themselves on the basis of an immanent evolution, beyond the adaptation and expediency of their use or their fixation as means. However, the technical object is not a creative agent in its own right; it remains confined within an economic and scientific context, and the process of its concretization asserts synergies, the interaction with other functional subsystems, by modifying and completing the functionality of the technical object. The movement of concretization of the technical object includes the organization of functional subsystems in which the technical object matures into a technical ensemble, which in turn is characterized by comprehensive social, economic and technical processes and their structuring. Concretization also means the tendency towards innovation, in which a series of conflicting requirements is satisfied by multifunctional solutions of individual technical objects, creating causal cycles that integrate the respective requirements. 
Technical elements (parts of the machine), technical individuals (machine as a whole) and technical ensembles (machines as part of social, technical and economic systems) are each already in a dynamic relationship that potentially releases a process of technological change. However, the economy is not dominated by the media/machines; rather, capital and its economy determine the technological situation.
According to the French theorist Frédéric Neyrat, the disturbed identity between nature and technology refers to the “hyperject”, which describes the machinic autonomization of technology vis-à-vis human actants as well as the material substitution of the natural by the artificial, without, however, a total integration of nature into technology having to be assumed. (Technology as a detachment from nature, as a substitution of natural substances by plastics, and as a detachment of technology from man by means of machinic autonomy; it must be assumed that machines and their materials stand in a relation of interference.) One can identify the hyperject as a milieu of substitution and autonomization (materials and machines) of the technical, independent of subject/mind and object/nature, whereby, with regard to the contextualization of the two milieus, one should speak not of associations but of superimpositions when discussing the internal and external resonances of technical objects.
Post-industrial technology, e.g. Gotthard Günther’s concept of the transclassical machine, stands between nature and spirit: precisely because of the processes of double detachment, the transclassical machine cannot be reduced purely to scientific-human creation, since it follows an independent logic of reflection. Its essential function is to deliver, transform and translate information. (Information articulates the difference that makes a difference, as Gregory Bateson sees it, yet not insofar as the smallest unit of information, a bit, is simple, as Bateson assumes; rather, as Bernhard Vief writes in his essay Digital Money (Vief 1991), it is given twice: bits are immaterial, relative divisors, they stand for a movement of differentiality that is neither present nor absent, and thus the binary code, the binary sequence of numbers, can be positioned as an effect of the alternance that articulates and positions it. As Lacan has shown with the example of the cybernetic machine, the articulated is of the same order as the symbolic register, whereby the switches of switching algebra represent the third of that order: the articulation, which itself is neither open nor closed, indicates the possibility of the purely positional states.) The transclassical machine can be mapped neither onto the object nor onto the subject; rather, it holds within it a trivalent logic: subject, object, and the transclassical machine as hyperject.
The hyperject belongs neither to nature (object) nor to spirit (subject), and thus it is subject to an exteriority which, however, is by no means to be understood as the outsourcing of the interior of a subject, but rather indicates an independent “region of being”. It contains a trivalence that proves its incompleteness per se, because it does not synthesize the opposites (subject and object); on the contrary, these non-trivial machines (Heinz von Foerster) remain beyond complete analysis as well as beyond synthesization. At this point, however, the concept of technical being must face the question of whether the media of technical objects can be captured ontologically as modes of dispersion into open spaces, or as the dispersion of space itself. In the last century, second-order cybernetics created its own constellation of concepts (feedback, autopoiesis, temporal irreversibility, self-referentiality, etc.) that long ago migrated into mathematical models and computer simulation. Although this does not dissolve the material substrate or the physicality on which those processes sit, the autonomous-immanent relations and interactions of a multi-level complexity reign here, with complexifications taking place in each individual contingent process: systems transform random events into structures, and conversely, certain events can also destroy structures, so that a single system indicates a continuous fluctuation between disorganization and reorganization as well as between the virtual and the actual in almost every conceivable case. Gotthard Günther has above all tried to present the ontological implications of these forms of knowledge and has introduced the concept of polycontexturality. In a polycontextural world context, the transclassical machines that operate in a rift, or as the third between subject/spirit and object/nature, are scattered over a multitude of objects, qualities and differences. (ibid.: 165f.)
These transclassical machines are conceivable as ensembles of universes, each of which can raise an equivalent demand for objectivity without having to represent or even eliminate the demands of other ensembles. In it, the concept of context describes a continuum of potential reality that changes shape with each quantification. Günther therefore speaks of the contingency of the objective itself, whose difference does not convey an intelligent hierarchy, with the consequence that in these technological fields we are dealing less with classifications or taxonomies, but with decision-making situations and flexible practices. On the other hand, the computers known to us so far operate only auto-referentially, i.e. they cannot process the difference between their own operations and the environment within themselves.
Frédéric Neyrat introduces the so-called holoject as a fourth level of technology, which, in contrast to the hyperject, functions as a medium of absolute connectivity referring both to the subject and to the object, to the superposition of both components, which is always continuous, unstable and endless. (ibid.: 168f.) As such, the holoject inexists, but it can transfer its continuity properties to the hyperject and thus give it shape, which we then finally call a body without organs, a mechanical ensemble that is machinic in all its parts. There is no fusion of areas (subject/object, knowledge/thing, etc.); rather, as in quantum physics, there are superpositions in which, for example, two waves retain their identity when they generate a third wave, which neither represents a synthesis of the two preceding waves nor their destruction, but rather indicates a non-commutative identity in the sense of François Laruelle. The concept of idempotence, a term from computer science, designates a function that, linked to itself or combined with further functions, remains unchanged, so that the generative matrix persists as a non-commutative identity through all variations without ever requiring transcendence. According to Neyrat, idempotence is the characteristic feature of the holoject, which is “both, as well as, as well as …”, whereby, with regard to idempotence, the focus falls primarily on the function of the “and”, i.e. on the insistence of conjunctive syntheses, and this leads us to an open technical structure in which the technical object, as an “in-between”, always appears with a certain delay and as an inexhaustible reserve of the technical medium itself.
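The computer-science sense of idempotence that Neyrat borrows can be stated precisely: an operation is idempotent if applying it a second time changes nothing beyond the first application, i.e. f(f(x)) = f(x). A minimal sketch (the `normalize` function is a hypothetical illustration, not drawn from Neyrat):

```python
def normalize(s: str) -> str:
    # Collapse whitespace and lowercase: a typical idempotent operation.
    return " ".join(s.lower().split())

once = normalize("  Hello   WORLD ")
twice = normalize(once)
# once == twice == "hello world": repetition adds nothing,
# f(f(x)) == f(x) for every input x.
```

However often the function is composed with itself, the result persists unchanged through all further applications, which is the formal sense in which the matrix “persists through all variations”.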
In this context, McLuhan’s formula “The medium is the message” does not postulate an identity of terms, nor is the message degraded to a mere effect of technical structures; rather, something resounds in the “is” that returns in the medium as difference, virulence or disruption without ever being shut down. (Cf. Lenger 2013) The message of the medium occurs in the fact that difference only joins media “together” in order to return as disparation within them and to repeat itself as difference, thus simultaneously undermining its previous technical modes and modifications. At this point Jean-Luc Nancy speaks of an eco-technics of intersections, twists and tensions, a technics that is alien to the principle of coordination and control, and he describes this pure juxtaposition, this unstable assembly without any sense, as a structure. (Cf. Hörl 2011: 61)
With regard to the cybernetic situation, Alexander Galloway has defined the black box as an apparatus of which primarily the inputs and outputs are known or visible, whereby the various interfaces establish its relation to the outside. While Marx’s fetishism critique of the commodity was still about deciphering the mystical shell in order to penetrate to the rational core, today’s post-industrial technologies, which constantly produce new goods such as information, are open and visible in a shell that functions purely via interfaces, while the core remains invisible. (ibid.: 269) The interactive interfaces occupy the surfaces of the black boxes and usually allow only selective passageways from the visible outside to the opaque inside. Black boxes function as nodes integrated into networks, whose external connectivity is subject to an architecture and management that remains largely invisible. According to Vilém Flusser, the camera can be regarded as exemplary for most devices and their function. Its agent controls the camera to a certain extent by controlling the interface, i.e. by means of input and output selections, but the camera controls the agent precisely because of the opacity of the inside of the black box. For Simondon, today’s digital technologies with their visually attractive and “black-boxed” interfaces would prove highly problematic. These technologies usually derive their popularity from a suggestive aesthetics of the surface, and they do not attract users because they offer, say, a liberating indetermination of the technology, flexible couplings between machines and with the human, such as Simondon would consider worthy of cultivation. Simondon insists that the fundamental movement of technological development does not consist in an increase in automation, but rather in the emergence and evolution of those open machines that are susceptible to social regulation.
With the black boxes, on the other hand, we are dealing with technological objects described as ensembles of readable rational functions with regard to their input-output relations, which are supposed to run as smoothly as possible, while their core remains unreadable and their material construction figures in discourse at best as a negligible afterthought. Simondon, by contrast, challenges us to look inside the black boxes.
In addition, the problem of connectivity must be taken into account with regard to machines that not only emit but also transmit, machines that have a plurality of processes and effects of their own; and this proves to be a matter of the highest economic relevance when these machines then produce multiple machine functions and effects in and with their complexes, contrary to a one-dimensional chain of effects; indeed, these functions even release explosions of previous machines and thus produce new conjunctions. “The spheres of production and energy technology, transport, information and human technology indicate vague field definitions of machines in which the machine-environmental is already inscribed,” writes Hans-Dieter Bahr (Bahr 1983: 277). In principle, mechanical ensembles and processes can thus be described as transmitting information, information into which natural, economic and social structures and processes also enter, including their postponements, complexifications and layer changes. It is by no means just a matter of communications, but also of absorptions and filters of the information itself, of the manipulation of data qua algorithms; and thus the respective relations and programmings/functionalizations would also have to be decoded inside the technical objects themselves, which, however, the hegemonic discourses on technology virtually prevent. In contrast to the darkening of the interior of the black boxes, Simondon pleads for a discourse that focuses on the perfect transparency of the machines. The aim here is to recognize potentials and relations that are sometimes already condensed in the machines and which then concretize themselves qua a functional overdetermination of the technical objects. For Simondon, the machines represent something like mediators between nature and man, which we have to grasp, among other things, in the discourses on media.
The machine, as Hans-Dieter Bahr explained in his paper Über den Umgang mit Maschinen, could therefore be described less as the concept of an object “machine” than as a discursive formation. (ibid.: 277) Every (digital) machine is functionalized by programming, whereby it quickly becomes apparent, however, that the mere description and maintenance of the constructive functions does not yet mean that a machine “functions”; rather, the manifold dysfunctionalities of the machines must be taken into account, which can cross the functioning system of input and output relations at any time: accidents, crashes, crises, etc., which belong to the “machine” itself. (It may well happen that a deceleration of machine speed is cost-saving for an economy as a whole, think, for example, of the (external) climate costs that then do not arise, even though the deceleration increases costs for the individual capital; and a machine may well become obsolete through competition between companies, i.e. from an economic point of view, although it is still fully functional in material terms, a constellation that Marx called moral wear and tear.) The in-between of the machines, or the machine transmissions, massively blocks a teleological view: the outputs of the complex machines today are less than ever mere commodities, most of which are already further machine inputs; rather, they generate much stronger complexes of effects, including unintentional side effects, with which the machines themselves mutate into the labyrinthine and therefore constantly need new programming and functionalities for orientation and control in order to maintain their input selections and outputs, since the machines are supposed to function through the supply of programs, materials and information and by the control of input-output relations.
Possible outputs of the machines can be use values, but also dysfunctions that disturb continuous operation; most of these outputs, however, are inputs into other machines. Machines thus emit energy and information streams that are cut or interrupted by other machines, while the source machines of the emitted streams have themselves already made cuts into or withdrawals from other streams, which in turn belong to other source machines. Each emission of a stream is thus an incision into another emission, and so on and so forth; at least this is how Deleuze/Guattari see it in Anti-Oedipus. (Deleuze/Guattari 1974: 11f.) At the same time, a double division emerges with the mechanical incisions, whereby the concept of the incision does not emerge as meaning from an inside, in order then to be translated or transported into the inside of another; rather, in the communication of the cut something is displayed that is already “outside” as an outside, for example a network of machine series that flee in all directions. (Lenger 2013) Each communication or translation takes place over an inexpressive incision into which the net divides. This division remains inexpressive in the message, but only because an open space is opened that allows everything to be communicated and expressed. And these divisions take place today via interfaces. Interfaces are usually referred to as significant surfaces. An extension of this conceptuality takes place when the interface is conceived as transition or passage, described as threshold, door or window, or, furthermore, understood in the sense of a flexibilization of input selections as a field of choice, whereby we can then speak of an intraface that identifies itself as an indefinite zone of translations between inside and outside. (Cf. Galloway 2012) The intraface opens the machine structures in an indefinite way to associated milieus, which we have always encountered with open machines or processes in which several intrafaces are integrated, as effects of translations that work or don’t work, whereby even this binary distinction is still questionable when one considers that machine transmissions simply cannot do without side effects and disturbances.
Now the cybernetic hypothesis is characterized precisely by the fact that it defines the technological object or the technical system by the sum of its inputs and outputs, whereby black boxes (computers, data objects, interfaces, codes) permanently have to eliminate dysfunctional inputs. Among the unfavorable inputs are climatic conditions, incomplete classifications, influences of other machines, faulty programs, and wear and tear; it is up to the cybernetic machines to absorb these structures and correct them according to their own criteria, and these transformations in turn affect the outputs. When machine systems select and transform different types of input, this means that a multitude of economic, social, natural, cultural and legal functions belong to their inputs as well as to their outputs. (Bahr 1983: 281) Here the disciplining function of the feedback mode of cybernetic control loops becomes quite evident: the attempt to feed outputs back to inputs in such a way that dysfunctional inputs can be faded out or eliminated in the future, or at least more functional selections of inputs can take place than before. Cybernetics is thus characterized not only by automation, but above all by the mechanism of input selections. If the human element is selected out, one speaks of the automaton. This of course contradicts a posthuman situation as Gilbert Simondon had still imagined it: when technical objects individualize themselves, they are always also in external resonance, whereby the resonances between the technical individual and the associated technological milieu insist, creating a recursive causality in between.
Cybernetics, however, wants to subject the in-between entirely to its automatism or to its input selections, whereby the identity of living beings and machine is thought of purely from the point of view of the automaton, while Simondon conceives this asymptotic analogy between the human and the machine from the perspective of the machines that have always oriented themselves towards open spaces and associated milieus, which in turn corresponds to a certain affirmation of non-selective inputs and a variety of stratagems that continue themselves as incisions, divisions and crossings of the machine milieus. Machines as media configure intermediate worlds insofar as they indicate a mediation without extremes or poles, since the poles (inputs and outputs) often prove to be further stratagems. Technological objects today are usually embedded in digital networks, with the associated architecture of the protocols regulating their exchange of information among themselves, which thus proliferates over a complex topology of densifications and scatters, and even from this Simondon would probably still gain a cultural force. Such a force does not simply use the machines, but affirms that cultural dignity lies precisely in the recognition of the pure functioning of the technical objects, whereby the human being enters into a dialogue with the technical ensembles, a dialogue that can lead to a true transindividuality. We speak here generally of technicity.
If the input and output selections are considered on the basis of their intersecting contingencies, then we are no longer dealing with automatic machines but with open machines – and concretization then means appreciating the contingency of the functions and the interdependence of the elements in order to do justice to their inner resonance. This makes them probable machines that cannot be measured against the ideal of precision, but which display different degrees of precision by expanding their range of use into new areas until they occupy, or at least affect, all fields of the social, cultural, economic and technological, as in the case of computer technology, though in a usurping manner. It is the process of disparation between two realities, in Deleuze’s sense the disparation between the virtual and the actual, which ultimately activates information differently from the digital and sets in motion a process of individuation that comes from the future. The information is located less on the homogeneous level of a single reality than on at least two or more disparate levels, e.g. a 3D topology that knots our posthuman reality; it is a fabrication of reality that folds the past and the future into the present, an individuation of reality through disparation that is information in itself. If individuation encompasses the disparation of the virtual and the actual, then information is always already there, already the present of a future present. What is called past or present is therefore mainly the disparation of an immanent source of information, which is always in the process of dissolution. For Simondon, the idea of the capacity or potential of a technical object is closely linked to his theory of individuation. The individual object is never given in advance; it must be produced, it must coagulate, it must gain existence in an ongoing process.
The pre-individual is not a stage that lacks identity, it is not an undifferentiated chaos, but rather a condition that is more than a unit or an identity, namely a system of highest potentiality or full potentials, an excess or a supersaturation, a system that exists independently of thinking.
Digital networks today not only encompass the globe that they themselves generate, but they also penetrate into the social microstructures of the capitalist economy, whose human agents in turn subject them to permanent addressability, online presence and informational control. (Lenger 2013) Being “online” today condenses into a hegemonic form of life; constantly mobilizable availability is part of a flexible normalization that affirms users in toto with the practice of everyday wellness, cosmetics and fitness programs until, in the course of their permanent recursion with the machines, they finally completely incorporate the processes of normalization. In the Postscript on the Societies of Control, Deleuze described human agents as “dividuals”, mostly a-physical entities, infinitely divisible and condensable into data representations, which, precisely because of the effects of a-human technologies of control, at some point act like computer-based systems. At present, we can assume a homology between post-Fordist management methods, which propagate non-hierarchical networks, self-organization, flexibility and innovation in heroic litanies, and the neurosciences, which describe the brain as a decentralized network of neuronal aggregates and emphasize neurological plasticity (Catherine Malabou) as the basis for cognitive flexibility and adaptation. According to Malabou, neuronal and social functions influence each other until it is no longer possible to distinguish between them. At least there is the possibility that the human species, with the rapid translation of its own material history into data streams, networked connectivity, artificial intelligence and satellite monitoring, tends to become a decal of technology. If the events – mobile apps, technological devices, economic crises, digital money, drone wars, etc.
– process at the speed of light, then the reference systems of traditional techno-discourses will definitely be destabilized, and their definitions and hypotheses will increasingly fail as useful indicators of what the future of hyper-accelerated capitalism could still bring. The blurring of clearly defined boundaries between bodies and machines, the interpenetration of human perception and algorithmic code, the active remixing of the edges of humans, animals, plants and inanimate objects – all this leads to the injection of a fundamental technological drift into the social, cultural and economic, while the economy and its machinery continue to determine the technological. Implemented in social reality, the important signifiers of technological acceleration today include concepts such as “big data”, “distant reading” and “augmented reality”; they lock words still bound to gravity, and capital as power, into the weightless space of the regimes of computation. There will be more migrations into this weightless space in the future, for example of thought into mobile technologies; we will at the same time have to deal with increasing volatility in the field of digital financial economics, triggered by trading algorithms based on neural networks and genetic programming; we will be completely immersed in the relational networks of social media; and last but not least we will be confronted with a completely distributed brain that modulates experiments in neurotechnology. Nothing remains stable, everything is in motion.
Algorithms need to be discussed in the context of this new mode of automation. Usually the algorithm is defined as an instruction for solving a problem in a sequence of finite, well-defined steps: sets of ordered operations on data and computable structures, implemented in computer programs. As such, the algorithm is an abstraction; its existence is bound to the particular programming language of a particular machine architecture, which in turn consists of hardware, data, bodies and decisions. The currently existing algorithms process ever larger amounts of data and thus a growing entropy of data streams (big data); they generate far more than just instructions to be executed, namely potentially infinite amounts of data and information, which in turn interfere with other algorithms in order to re-program the various algorithmic processes. From an economic perspective, algorithms are a form of fixed capital in which social knowledge (extracted from the work of mathematicians and programmers, but also from user activities) is objectified, whereby this form of fixed capital is not valuable in itself, but only insofar as it is drawn into monetary capitalization, which it can in turn drive and intensify. In any case, algorithms are not to be understood as mere tools: they actively intervene in the analysis and processing of data streams in order to translate them into economically relevant information, to valorize them, and to generate and successfully conclude the respective orders on the financial markets self-referentially.
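The textbook definition rehearsed above, an instruction for solving a problem in a sequence of finite, well-defined steps, can be illustrated with the oldest example in the literature, Euclid’s algorithm for the greatest common divisor (a generic illustration, not tied to the financial algorithms discussed here):

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: repeat one well-defined step until b reaches 0.
    # The sequence of remainders strictly decreases, so it always terminates.
    while b != 0:
        a, b = b, a % b
    return a

result = gcd(48, 18)  # 48 % 18 = 12, 18 % 12 = 6, 12 % 6 = 0, so result = 6
```

Each pass through the loop is one of the “finite, well-defined steps”; the abstraction only gains existence once implemented in a concrete language on a concrete machine.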
This means that the far greater share of financial transactions in high-frequency trading (HFT) today runs via pure machine-machine communication, which the human actants are no longer able to observe, because the data and information flows move at a-human speeds via invisible apparatuses and further liquefy the distinction between machine, body and image. (Cf. Wilkins/Dragos 2013) Although the composition of human and a-human entities varies across the various HFT systems, in extreme cases some financial companies eliminate almost any human intervention in the automated transactions, so that the data read by the machines flow self-referentially into and back from the algorithms controlling the processes, and trading decisions can be largely automated. Every human intervention, on the other hand, complicates even those financial processes in which specific errors and problems have arisen. Some algorithms are already being physically implemented in silicon chips: a fusion of hardware and software. In HFT systems, the contemporary financial economy is thus largely invisibly shaped by algorithms; certain programs, for example, permanently scan the financial markets to see whether the indicators fixed by algorithms reach certain levels, which then become effective as buy or sell signals. There are current versions of algorithms such as the volume-weighted average price algorithms (VWAP) which, in conjunction with econometric methods, generate complex randomness functions to optimize the size and execution times of monetary transactions in the context of global trading volumes. (ibid.) Other types of algorithms try to identify and anticipate such transactions, and there are non-adaptive, low-latency algorithms that “process” both the differentials of transmission rates in global financial networks and the correlating material transformations that enable those informational relations.
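The benchmark behind the VWAP algorithms mentioned above is simple to state, even though the execution algorithms built on it are proprietary and far more complex: the volume-weighted average price is the total traded value divided by the total traded volume. A minimal sketch of the benchmark calculation only, with invented figures:

```python
def vwap(trades):
    # Volume-weighted average price: total traded value / total traded volume.
    total_value = sum(price * volume for price, volume in trades)
    total_volume = sum(volume for _, volume in trades)
    return total_value / total_volume

# Hypothetical figures: three trades as (price, volume) pairs.
trades = [(100.0, 200), (101.0, 300), (99.5, 500)]
benchmark = vwap(trades)  # (20000 + 30300 + 49750) / 1000 = 100.05
```

A VWAP execution algorithm then tries to slice a large order over the day so that its own average fill price tracks this benchmark, which is where the econometric randomization of sizes and times comes in.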
Genetic algorithms are used to optimize the possible combinations of price fluctuations of financial derivatives and instruments and to ensure the optimal fine-tuning of each parameter within a financial system. (ibid.) The implementation of algorithmic systems in computerized financial economics represents a qualitatively new phase of the real subsumption of machinery under capital, indicating the transition from cybernetics to contemporary scientific technicity, the so-called “nano-bio-info-cognitive” revolution, which sits on distributed networks and supposedly friction-free systems (superconductors, ubiquitous computing). (Cf. Srnicek/Williams 2013) (Real subsumption under capital includes every aspect of the production process: technology, markets, labor.)
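A genetic algorithm of the kind invoked here tunes parameters by selection, crossover and mutation over a population of candidate parameter sets. The following sketch optimizes a deliberately toy objective; a real system would score simulated trading performance instead, and every name and constant below is an assumption for illustration only:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def fitness(params):
    # Hypothetical objective: negative squared distance to a target (1.5, -0.5).
    x, y = params
    return -((x - 1.5) ** 2 + (y + 0.5) ** 2)

def crossover(a, b):
    # Uniform crossover: each parameter inherited from one parent at random.
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(params, sigma=0.1):
    # Gaussian mutation of every parameter.
    return tuple(p + random.gauss(0, sigma) for p in params)

def evolve(pop_size=50, generations=100):
    population = [(random.uniform(-5, 5), random.uniform(-5, 5))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the fitter half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()  # converges towards the target (1.5, -0.5)
```

The point of the technique is that no gradient or closed-form model of the objective is needed; the population simply breeds towards whatever the fitness function rewards, which is why it suits the opaque, jointly interacting parameters of a trading system.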
Let us now come to the current machine ensembles and their environments, the digital networks and their complex ecologies of the material and the economic, in which high-frequency trading (HFT) is integrated. Digital technologies have long since permeated the entire financial system; with HFT, the fluid planetary movement of financial capital, which is a drive towards the violence of pure market abstraction and towards the substitution of material experience by the various models of computer simulation, easily detaches itself from production and the orders of consumption in order to continue in a closed, self-referential, semiotic system that permanently forces the calibration and recalibration of machine-machine relations. The process of decimalization (the pricing of assets in decimals rather than fractions), which has been self-accelerating on the financial markets since about 2000 and has further reduced the spread between purchase and sale prices, reflects and fuels the need to move ever larger transaction sums ever faster on the financial markets, so that the ever smaller spreads can still be compensated at all. The traders hold the positions of the respective deals only for extremely short periods of time and realize only small spreads, so that the high profits result solely from the quantity and speed of transactions. With so-called direct trading, which above all allows large institutional investors to bypass all mediators (including the stock exchange) between themselves and the respective trading partner, and with the existence of almost completely automated infrastructures, it is becoming increasingly urgent for financial companies to access the latest technological innovations in order to manage, monitor and, if at all possible, control them in the sense of an accelerative dynamic.
Thus, in current HFT, digital automation infiltrates almost every aspect of the trading process, from analysis to execution to back-end processes, with all components controlled by algorithms. An HFT system must perform the fine-tuning of all programming and memory capacities, the manipulation of individual data points and packets, the acquisition of databases and the selection of inputs, etc. There is therefore a clear trend towards hegemonic automation on the financial markets. (Marx had at least rudimentarily described automation in the Grundrisse as a process of absorption of the general productive forces, part of the social brain, into the machine or fixed capital, which also includes the knowledge and technological abilities of the workers (Marx 1974: 603), who now follow the logic of capital rather than still being expressions of social labor.) If one visualizes the history of the relation between capital and technology, it seems quite obvious that automation has moved away from the thermomechanical model of classical industrial capitalism and integrated itself into the electronic-computational networks of contemporary capitalism. Digital automation today processes the social nervous system and the social brain in detail; it encompasses the potentials of the virtual, the simulative and the abstract, of feedback and autonomous processes; it unfolds in networks and their electronic and nervous connections, in which the operator/user functions as a quasi-automatic relay of the continuously flowing streams of information.
by Achim Szepanski
It was one of Leibniz's intentions to show that the world is made up of elementary automata that he called "monads." Monads, each of which consists of aggregates and forms complex automata, can be fanned out further and further into the infinitely small, from the organism down to the elementary particles, without ever arriving at a final atom. And the monad is always coded, insofar as the entire present, past and future is inscribed in it. If today's cellular automata can function like a universal Turing machine, then for progress-believing authors, who discuss Leibniz in terms of atomism rather than, like Deleuze, in the context of the fold, this means nothing less than that the world's organizing principle is that of the computer and its progressive capacities, in which Moore's law constantly pushes the progression forward, so that all technical problems can be overcome again and again. The smallest calculating machines today contain a multitude of electronic components, but it is becoming increasingly difficult to guarantee the power supply of the digital machines and to control excessive heat generation. Thus the biological model, under very different conditions, remains unmatched.
According to Klaus Mainzer, cellular automata consist of "checkerboard-like grids, whose cells change their states (e.g. the colors black or white) according to selected rules and thereby depend on the color distribution of the respective cell environments" (Mainzer 2014: 25). They are mostly two-dimensional grids (one- or higher-dimensional grids are also possible), whose cells have discrete states that can change over time.
All cells are identical, so they behave according to the same rules. In addition, since all rules are executed stepwise and discretely, the automaton network operates synchronously and in clocked steps. Each cell relates to neighboring cells according to certain rules by comparing its own state with that of the other cells at each clock cycle, in order to calculate its new state from this data. Thus, the state of a given cell at a time t results from its own state and the states of its neighboring cells at time t-1, cells which in turn are connected with further neighboring cells. Cellular automata are thus characterized by their interactive dynamics in time and space.
In the universe of cellular automata, space is a discrete set of cells (chains and lattices) that have a discrete number of possible states and that transform and update themselves in discrete time steps. A cellular automaton can thus be specified as follows: 1. Cell space: size of the field and number of dimensions. 2. Boundary conditions. 3. Neighborhood and radius of influence of cells on each other. 4. Set of possible states of a cell. 5. Neighborhood change and self-change of the cells. 6. Environment function that indicates to which other cells a cell is connected. With the help of powerful computing capacities, pattern developments of future generations of cells can then be simulated. According to Mainzer, the cells of a cellular automaton, reacting to their respective environment, behave like a swarm intelligence. (ibid.: 161) Such pattern formation of cellular automata can be modeled with the help of differential equations.
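The six-point specification above can be made concrete in a minimal sketch. The example below implements the best-known two-dimensional cellular automaton, Conway's Game of Life, as an illustration; the grid size, the fixed boundary condition and the "blinker" configuration are illustrative choices, not taken from Mainzer.

```python
# A minimal two-dimensional cellular automaton (Conway's Game of Life):
# a discrete cell space, a Moore neighborhood of radius 1, the state set
# {0, 1}, and a synchronous, clocked update rule identical for all cells.

def step(grid):
    """Compute the next generation: each cell reads the states of its
    eight neighbors at time t-1 and derives its state at time t."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbors (fixed boundary: outside counts as dead).
            n = sum(grid[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)
                    and 0 <= r + dr < rows and 0 <= c + dc < cols)
            # The same rule for every cell: survive with 2-3 live
            # neighbors, birth with exactly 3.
            new[r][c] = 1 if (n == 3 or (n == 2 and grid[r][c])) else 0
    return new

# A "blinker": a configuration of period 2 -- it oscillates rather than
# being stable (it does not match its successor, but it does match its
# successor's successor).
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
gen1 = step(blinker)   # vertical bar
gen2 = step(gen1)      # horizontal bar again
assert gen2 == blinker
```

Every cell here obeys an identical rule, updates synchronously with each clock cycle, and reads only its neighborhood at t-1 – exactly the properties enumerated in points 1 to 6.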
A configuration of cells is considered stable if and only if it matches its successor; it will disappear in the next generation if all its cells are in the white state. Mainzer writes: "In this case, the entire system is destabilized. It could be said that two dead isolated cells are brought to life through coupling and diffusion. The awakening to life becomes precisely calculable." (ibid.: 138) Mainzer presumably assumes that life could have come about through such a simple process, "through coupling and diffusion", although we are not yet dealing with the self-preserving "exchange of substance and energy" of an organism. Nevertheless, there does seem to be "a definite reaction-diffusion equation" that has "a limit cycle, that is, an oscillating solution" (ibid.: 139). If limit cycle means something like an inside-outside difference, then we actually have an indication of the spontaneous emergence of life here. The term "oscillation", according to Mainzer, refers to the "cycle of a metabolism." And Mainzer notes that we are leaving the realm of entropy here.
The disorder or dissolution of a closed system can possibly be compensated by a specific opening of the system, in which the destruction is countered by an incomplete new regulation, by a deviation, which can in turn lead to order again. The times of the destruction of order (thermodynamics) and the times of the composition of the parts (negentropy) are integrated into the thermodynamics of open systems. (Serres 1994: 103) Serres speaks of life as a multi-faceted, polychronic process that bathes, as a syrrhesis, in the flow of several times: Bergson's duration, Prigogine's deviation between reversibility and irreversibility, Darwin's evolution, and Boltzmann's disorder. Finally, with Serres' conception of an open polychronology, it can be summed up that the idea that one could coherently describe the complex forms of the universe with the help of the logic of cellular automata is only another variation of the philosophical decision that asserts the possibility of capturing the infinity of the data in a scientifically consistent theory. But things are not that easy. Let us recall the distinction between smooth and striated spaces made by Deleuze/Guattari in A Thousand Plateaus. Networks operate less in the smooth mode; they are more strictly stratified. And stratified networks can be compared to the logic of cellular automata or cellular spaces. Such networks were described in the 1950s by scientists such as John von Neumann and Nils Aall Barricelli. Cellular spaces, be they the lattices mentioned above or elastic topologies, always contain clear distinctions between links and nodes as well as between one node and another. Such networks can be contrasted with non-cellular spaces such as Konrad Wachsmann's »grapevine structures« or the works of the architect Lebbeus Woods, in which there is no clear separation between links and nodes or between the nodes.
Instead, the smooth form dominates here, governed by various "logics": hydraulics, metallurgy, and pure difference in motion, flow, and variation. (See Deleuze/Guattari 1992: 409) However, it has already been pointed out elsewhere that this type of multilateral, dynamic, nomadic network, which Deleuze/Guattari call a "rhizome," may well be compatible with the volatile processing of money capital, which circulates between virtualization and actualization.
Let us briefly look at new trends in automation, which the feature pages today summarize under the label "Industry 4.0". Following a linearly conceived genealogy of the history of technology, the first epoch of industrialization begins with the steam engine, which is superseded by the epoch of electrification and Taylorization, and this in turn by the third epoch of digital automation and the introduction of robots. With the real-time online networking of machines (cyber-physical systems), the fourth stage of the industrial revolution has now allegedly been reached. "Cyber-physical systems" organize themselves largely autonomously via intelligent interfaces, for example as "smart grids" or "smart cities".
The term "Industry 4.0" thus refers to machine complexes that are networked online without exception and usually process within an internal and, of course, secure production network. One now speaks of the "Internet of Things". It can be assumed that in the future mainly networked machines, which also largely control themselves in the networks, will be active in "smart factories". All the machines in a company are then online, and every single machine part that is equipped with sensors, RFID chips and specific software functions communicates not only with other machine parts, but also along certain lines with the areas of management, transport and logistics, sales and shipping. It is no longer the computer: it is the IT networks themselves that are growing together with the physical and immaterial infrastructures, and this happens beyond the respective production sites, including the external environment of the companies, vehicles, home appliances or supermarket shelves. Even today, infrastructure tasks and the functioning of logistics are so complex that "cyber-physical systems" are absolutely necessary for the self-organization and automation of logistics, supply, health and transport systems. In doing so, local places like factories are tapped as databases, but the arithmetic operations themselves usually take place at remote locations. In any case, computing operations migrate from isolated computers to networked environments, where they process on the basis of sensor data and technologies that not only collect and distribute data but also use them to calculate future events by means of algorithmic techniques. In addition, the machines are to be permanently addressable by means of RFID tags and to seek their own paths through the global logistics chains, as in packet-switching networks. The storage, evaluation and indexing of the data takes place in so-called clouds in data centers.
The online-integrated human actors will communicate directly with the machines: the machines tell the actors what to do and, vice versa, the actors give the machines orders about what they have to do. It could also be that modulations, relations and functions similar to those of Web 2.0 emerge in the internal networks of the companies. In principle, any access to the machine complexes could take place in real time; every machine is permanently reachable, and every machine can send signals on the spot, just in time and as needed. With Industry 4.0, customer-oriented production becomes possible: production takes place "on demand", i.e. it is adjusted to individual consumers. Sensors capture gigantic amounts of data ("big data") in order to observe and control the production processes. At this point, however, the immediate question would be how storage and access rights to the data are to be determined in the future. If "big data" migrates into factory halls, then the comparatively transparent production processes also facilitate possibilities of manipulation, so that the highly complex systems become even more sensitive to disturbances – local irregularities can cascade in the sense of chaos theory.
As modular components of networks, human and non-human agents are integrated into the dynamics of permanent business communication via the online mode. The distinction between analog (carbon-based, offline) and digital (silicon-based, online) areas is breaking up, with the latter overflowing into and mixing with the former. These phenomena are known as Ubiquitous Computing, Ambient Intelligence or the Internet of Things. The information theorist Luciano Floridi speaks here of a ubiquitous »onlife experience«, of the drift into the post-human or the inhuman, in which the boundaries between the human, technology and nature are blurred, and further of a shift from scarcity to an overflow of information and from entities to processes and relations (rather than substances). (See Floridi 2013) All of this involves new forms of control and power that run along multiple dimensions and lines, including corporate, technological, scientific, military and cultural elements.
However, it is also necessary to put the term "Industry 4.0" into perspective. From the very beginning, microelectronics has been bound up with the revolutionizing of production and distribution, for example through computer-aided design (CAD) or the various control technologies in industry. And the associated increase in productivity swept through all sectors of the economy, be it industrial production, agricultural production or raw material extraction, including the non-productive sectors of the state.
The accelerated growth driven by computer and information technologies is called "singularity" in scientific circles, but a distinction must be made between the technical and the economic singularity. The economic singularity is measured by the general development of the substitutability between information and conventional inputs. The economist William D. Nordhaus has pointed out in a recent study that economic growth depends on how much material can be replaced by electronics. In addition, at the macroeconomic level, account should be taken of the fact that higher productivity leads to lower prices, with the result that an increasing share of the high-productivity sectors in total output can only be demonstrated if their increase in volume overcompensates for the fall in prices. Nordhaus also investigates the question of the permanent substitutability of certain factors of production by information at the organizational level. (See Nordhaus 2015) He shows that this has not been the case in the past and predicts only a slow development towards the economic singularity for the 21st century, although capital intensity will continue to increase in favor of the capital stock (compared to labor input), and hence also the share of information capital. Nevertheless, digital information and communication technologies can be expected to have helped establish a new level of productivity in production and distribution in the new millennium, by streamlining and accelerating trade between companies and through rationalization processes in the global supply chains and the supplier industry that enable the reorganization of areas such as architecture, urban planning, health care, etc. The software with which management methods, derivatives and digital logistics are processed may well be understood as a basic innovation that is integrated into a new techno-economic paradigm. (See Perez 2002: IX)
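The macroeconomic point – that a high-productivity sector's share in total output grows only if its increase in volume overcompensates the fall in prices – can be illustrated with a toy calculation. All figures below are invented for illustration and are not drawn from Nordhaus:

```python
# A sector's nominal share of total output is its volume times its price,
# divided by total nominal output. Productivity gains raise volume but
# depress price; the nominal share only rises if volume wins the race.

def nominal_share(volume, price, rest_of_economy):
    sector = volume * price
    return sector / (sector + rest_of_economy)

rest = 900.0  # nominal output of all other sectors (illustrative)

base = nominal_share(100, 1.0, rest)   # 100 / 1000 = 10%

# Productivity doubles volume while prices halve: share is unchanged.
same = nominal_share(200, 0.5, rest)   # still 10%

# Volume triples while prices halve: growth overcompensates the fall.
grown = nominal_share(300, 0.5, rest)  # 150 / 1050, roughly 14.3%

assert same == base
assert grown > base
```

The middle case is exactly the situation Nordhaus flags: the sector becomes technically far more productive while its measured weight in the economy does not move at all.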
In the context of the neo-imperialism of the world's leading industrialized countries, 4.0 industries will in future be needed to secure their competitive advantages. So it is no coincidence that German scientists constantly point out that the industrial internet can provide a central locational advantage for both Germany and the European Union. Building on a world-leading machine, automotive and supplier industry, the aim is to rapidly develop new technologies that can connect factories, energy, transportation and data networks on a global scale. The state, too, is expected to provide intensive research and development programs to secure this locational advantage.
This requires a sophisticated and complex logistics industry. Logistics is a sub-discipline of operations management. It quickly gained in importance as a result of containerization and its integration into the valorization chains of global capital. In globalized value chains, concentration on the product, on its efficiency and quality, is increasingly losing importance; instead, capital valorization proceeds along abstract lines that process in spirals and cybernetic feedback loops, which in turn are integrated into ubiquitous digital networking. Companies like Google and Amazon play an increasingly important role in that the relation between production and consumption is weighted more strongly than the product, while horizontality and verticality are mixed in a unique way in operational processes. Verticality refers to the line manager who has to operate or supervise the "algorithmic" metrics and rhythms along which, among other things, the attempt is made to eliminate the error rate and slowness of human decision-making. The line manager works along the operational lines; his function in the production processes is chiefly that of an enforcement agency that gives commands while at the same time the rhythm of the production processes operates through him. After all, management seems to be mainly concerned with protecting the algorithmically organized production processes from the resistance of the workers. And finally, working on the line also means working in the "progressive" mode: constantly improving, expanding and adding to the line in order to set a new line. This is also the new role of senior management, which knows no managers but only "leaders".
Deleuze, Gilles / Guattari, Félix (1974): Anti-Oedipus. Capitalism and Schizophrenia 1. Frankfurt/M.
- (1992): A Thousand Plateaus. Capitalism and Schizophrenia. Berlin.
- (1996): What Is Philosophy? Frankfurt/M.
Floridi, Luciano (2013): The Philosophy of Information. Oxford/New York.
Mainzer, Klaus (2014): The Calculation of the World. From the World Formula to Big Data. Munich.
Serres, Michel (1991): Hermes I. Communication. Berlin.
- (1993): Hermes IV. Distribution. Berlin.
- (1994): Hermes V. The Northwest Passage. Berlin.
- (2008): Clarifications. Five Talks with Bruno Latour. Berlin.
translated by Dejan Stojkovski
by Max Haiven
Rethinking agency in an AI-driven world – as the AMBIENT REVOLTS conference sets out to do – critic Krystian Woznicki interviews social thinker Max Haiven about seminal notions of agency under AI-driven financialization.
Krystian Woznicki: As you repeatedly point out in your book “Art after Money, Money after Art”, agency cannot be considered an individual matter, but should be understood as a collective – that is, a social – matter. Why does individual (political) action still matter, and why, nonetheless, should we not limit our imaginative horizon to the idea of individual (political) action?
Max Haiven: This is a tricky question because I think to fully understand the potentials of individual agency we need to fundamentally problematize, even demolish, the Eurocentric and colonial divide we habitually make between the individual and the collective, or the social. I have recently been reading Jason Moore and Raj Patel’s book “A History of the World in Seven Cheap Things”, where they remind us of an observation made now decades ago by feminist and anti-colonial thinkers: that in order for the world to be shaped as it has been shaped by capitalism, colonialism and patriarchy, it was necessary for European philosophers to develop a framework that separated this thing we call “nature” from this thing we call “society” (Patel and Moore borrow Marx’s notion of the real abstraction to describe how an invented set of ideas becomes functionally real in practice). Part of that separation is also separating humans from one another, as social beings who cooperate with other beings to reproduce the world together.
So while ultimately I do believe individuals need to make ethical and political choices and take actions, I think that we need to do so at the same time as we challenge and re-imagine what it means to be an actor or an agent, and that is obviously very difficult in this world we have created, that only seems to acknowledge and tell stories about individuals. Ultimately, as you also recently argued elsewhere, European Enlightenment taught us the notion of agency over rather than agency with (others).
In my work with Alex Khasnabish we theorized the radical imagination not as a thing an individual has, but as something we do together as we struggle within, against and beyond the systems of power that surround us. I think that, even when we imagine we are taking individual political action we are, in fact, always part of some common or collective movement, whether we acknowledge it or not. The ethical gesture of the individual ultimately has almost no meaning if it is not echoed by or acknowledged by others. Meanwhile, the whole meaning of “politics” as such is the question of living together, acting together.
So, any time we think of the individual, we need to question what it is we mean, and more importantly what it is we desire. Our obsession with individual agency – emblematized in the heroic Hollywood biopic narratives of triumphant individuals – clearly (temporarily) satisfies some sort of desire in us. What is this desire? Where does it come from? I would hazard the hypothesis that, at the same time as this system we live under – of competitive, individualistic, consumerist capitalism – increasingly strips us of our ability to change the world collectively, we gravitate towards narratives of the superhuman individual, the superhero, the maverick, even the anti-hero. And, as you also have pointed out, is it so surprising then that we seem increasingly to feel that the only ones who can save us (from ourselves) are the most bombastic, selfish, belligerent and unapologetic individualists: the proto-fascist strongmen of our current political climate?
Could you explain how this thinking of agency at a collective level relates to and is circumscribed by financialized sociality? Here I am thinking, among other things, that the financialized subject is, last but not least, a collectivity and that it is only in the mode of collectivity that this “incorporation” can be properly challenged.
Collective agency is an equally tricky matter, because of course we must problematize the distinction between individual and collective as we have learned to draw it. It is, for me, not simply a matter of valourizing the collective over the individual. I recently reread Ursula Kroeber Le Guin’s beautiful classic “The Dispossessed”, which I think presents some illuminating and inspiring notions of what it means to reimagine the relationship of the individual and collective, based in her reading of Taoism, in the Western traditions of anarchism and in her exploration of the diversity of world cultures through the study of ethnography.
If I may be schematic, I would say that we are always, whether we know it or not, acting collectively. We are a cooperative-imaginative species. We reproduce our world through complex divisions of labour and collaboration, often with “non-human” species, or with the “natural” world (I place these terms in scare-quotes to recall their artificiality). Often this cooperation is not at the front of our minds: it is encrypted in custom, tradition or habit. Power, as in “power-over,” the will to dominate, sovereignty, seems to me to be the methods by which certain classes, groups or people seek to take control over the means or the ends of this imaginative cooperation. But ultimately such control is difficult to gain and even more difficult to maintain. People rebel in big and small ways. Power-seekers fight amongst themselves. Things fall apart. We’re difficult animals.
So, now to come to your question, I think we gain a great deal when we think about finance capital and financialization as forces of power and control that are extremely adept at shaping our collective action. Finance names this global digitized monetary nexus that David Harvey argues acts as a “central nervous system” for capitalism, taking in information signals from around the globe and sending out triggers for response. I have suggested elsewhere that potentially the imagination is a better metaphor, but more on that in a moment. It is a form of meta-human reflexivity, a method by which a system of domination comes to know the world and act upon it, to take command over imaginative-cooperation at a global scale.
When we take this to be normal and natural, and indeed when we internalize finance’s paradigms, metaphors, value paradigms and so on, this, I think, is when we enter into a phase of financialized sociality, a phrase I take from my late advisor Randy Martin. Here, the social field becomes suffused with the logic of finance. Everything from education to housing to food to family life come to be reframed as “investments.” Debt is offered not as a means of domination and discipline (which it is indeed, functionally speaking) but as a means of personal liberation and responsibility.
But at the core, financialization is tyranny. It is a method by which the means and ends of our imaginative-cooperation, our capacities to act together to reproduce our world and our lives, is conscripted towards the reproduction of a more and more unequal world of social and ecological ruination. And we will be made to fight over the scraps.
Let us turn now to the role of (AI) technology for financialization and financialized sociality. At some point in your book you cunningly say that we are challenged to discover “how we are bundled together” and to respond to that from within this condition. To my mind, the bundle concept was prominent in the middle of the 1990s – during the first bigger waves of the “digital revolution” – when thinkers such as Kojin Karatani unearthed it from Immanuel Kant’s writings, in which the subject is defined as a bundle. Back then the bundle concept was seen as a possibility to think the subject as a bundle of information flows. This said, how is the condition of being “bundled together” an at once financial and technological matter? In other words, how does the technological catalyze this particular articulation of financialization?
This language of bundling – as I use it – comes from the financial process known as “securitization,” familiar to many from the 2007/8 subprime loan meltdown. Essentially, many debts or other financial obligations are pooled together, then repackaged as new financial assets that offer investors access to different forms of risk and yield. Often this can be extremely complicated and arcane, and these new financial assets themselves can be pooled and redivided again and again. And, of course, the debtor has no say and usually no knowledge of this necromancy.
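The pooling-and-repackaging logic described here can be sketched schematically. The two-tranche structure, the 70/30 split and all figures below are illustrative assumptions, not a model of any real instrument:

```python
# Securitization in miniature: individual debts are pooled, then split
# into tranches that expose investors to different slices of risk and
# yield. The senior tranche is paid first; the junior tranche absorbs
# the first losses and, for that risk, would command a higher yield.

def securitize(debts, senior_share=0.7):
    """Pool the face values of many debts and split the pool into a
    senior and a junior tranche (assumed split: 70/30)."""
    pool = sum(debts)
    senior = pool * senior_share
    junior = pool - senior
    return {"pool": pool, "senior": senior, "junior": junior}

def payout(tranches, recovered):
    """Distribute whatever is actually repaid: the senior tranche is
    made whole first; the junior tranche takes the first losses."""
    senior_paid = min(recovered, tranches["senior"])
    junior_paid = max(0.0, recovered - tranches["senior"])
    return senior_paid, junior_paid

# Five illustrative debts pooled into one asset of face value 10,000.
t = securitize([1000, 2500, 800, 1200, 4500])

# If only 80% of the pool is ever recovered, the senior tranche (70% of
# the pool) is still paid in full; the junior tranche bears the entire
# shortfall -- without any of the original debtors having a say.
senior_paid, junior_paid = payout(t, 8000)
```

Real securitizations layer many more tranches and re-pool the results again and again, which is precisely why the debtor loses all sight of where their obligation has ended up.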
This in many ways mirrors the processes by which data about us (I hesitate to say “our data” because I am skeptical of strategies that are predicated on the notion that we can or should “own” data associated with us) is congregated, parsed, bundled, sold, read, and made actionable. In fact, we know that many of the algorithmic and “self-learning” systems developed in finance are also used in other fields of “big data” analytics.
So the question becomes: in a world where debt and data about us is manipulated behind closed doors, by computer systems we cannot truly understand let alone control (unless we are part of the very corporations who oversee these processes, and even then we would feel helpless), what is happening to our collective powers? What are we, as a species that is in some ways defined by our ability to take collective action, becoming? Who are we anymore?
That language of bundling also comes from my partner, the artist Cassie Thornton, whose work on debt and financialization has been a constant inspiration. She undertook a very expensive graduate degree at the California College of the Arts and came to the realization that, whether they knew it or not, she and all her colleagues were actually making art about their debt which hung over the place like a cloud. She wanted to reveal it. Her MFA project was a yearbook titled “Our Bundles, Our Selves”, a play on the famous feminist health initiative Our Bodies, Ourselves from the 1970s. One of the points in this and all her work is that we may have been bundled together in new configurations against our will, but what sense and power can we find, together, if we have the courage to look? How can the isolation and fear of living in a world where vast unaccountable forces dominate be transformed into platforms for new solidarity?
In my book “Art after Money, Money after Art” I look at a number of similar artistic experiments in reassembling collective power, within, against and beyond securitization. The question for me becomes: how can these experiments help us imagine the notion of security beyond this endless imperative to “manage risk” which is the hallmark of financialization – a grim individualized task we are each forced to take on, but at which we cannot help but fail, again and again. We cannot, any one of us, contain, control or even anticipate the risks that living in this system poses for us, ecologically, socially, financially. To live by the imperative of risk management is to undertake an impossible task: we are individually tasked with managing risks that are structural and systemic, that can’t possibly be managed by individuals. If we want to escape, we need to reimagine what it means to be “secure,” beyond securitization. And we are only secure to the extent we can rely on one another and reproduce our world together.
With respect to the particular role of AI in this context, I would like to turn to a phrase that you use in your book like a (variable) leitmotif: “hidden in plain sight”. One of the assumptions of the AMBIENT REVOLTS conference is that AI is everywhere and nowhere: implemented and deployed not only at all kinds of abstract levels of society, but also literally in your living room and in your hand; nevertheless it remains sort of invisible as – and this is only the most obvious reason – its implementation and application have become so naturalized. With regard to financialization we can observe a similar situation: Although the major financial crisis of 2007/8 was conditioned by the massive and semi-supervised use of AI-driven trading with derivatives, and although this practice has been en vogue at Wall Street since the 1990s, this fact remains a blind spot in criticism and scholarship in general.
This is especially surprising if one takes into account the large number of academic and popular books from that decade – take for instance “Neural Networks in Finance and Investing: Using Artificial Intelligence to Improve Real-World Performance” (1992), “Trading on the Edge: Neural, Genetic, and Fuzzy Systems for Chaotic Financial Markets” (1994), “Neural Networks in the Capital Markets” (1995), “Artificial Intelligence in Finance & Security” (1995) or “Neural Networks for Financial Forecasting” (1996). Despite this rich body of literature, there has hardly been any deeper analysis of the role that neural networks and AI play in finance and financialization. I mean, works that often get referenced in this context, including Frank Pasquale’s “Black Box Society” or Scott Patterson’s and Michael Lewis’s respective books on the subject, do not go beyond generalized ideas of algorithms and even more generalized ideas of AI; often these ideas even get blurred.
Yet, there is an important difference between algorithms in general and algorithms in AI. As media theoretician Felix Stalder reminds us in his book “Kultur der Digitalität”, in AI “algorithms are used to write new algorithms or to determine their variables. If this reflexive process is integrated into an algorithm, it becomes ‘self-learning’: The programmers do not define the rules for its execution, but rules according to which the algorithm should learn to reach a certain goal. In many cases, its solution strategies are so complex that they cannot even be comprehended afterwards. They can only be tested experimentally, but no longer logically. Such [self-learning] algorithms are basically black boxes, objects that can only be understood through their external behavior, but whose internal structure eludes recognition.”
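Stalder's distinction – the programmer writes the rule for learning, not the rule for execution – can be illustrated with a deliberately tiny sketch: a perceptron that learns the logical AND function. The task, learning rate and epoch count are illustrative assumptions; real self-learning systems differ mainly in that their learned parameters are far too numerous and entangled to inspect, which is what makes them black boxes in practice:

```python
# The programmer specifies only HOW to learn (the weight-update rule),
# not WHAT the final decision rule is. The decision rule emerges as a
# set of learned weights that nobody wrote down.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear threshold rule from examples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # The fixed "rule for learning": nudge weights toward
            # whichever answer the example demanded.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the logical AND function from its truth table.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# The learned rule reproduces AND: [predict(x) for x in X] == [0, 0, 0, 1]
```

Here the learned weights happen to stay inspectable because the example is tiny; scale the same procedure up to millions of weights and the resulting solution strategy can, as Stalder says, only be tested experimentally, no longer comprehended logically.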
Since at least the 2007/8 crisis, all this conceptual insight and the discursive material from the 1990s should have enabled and motivated a profound theorization of AI and neural networks in finance. Yet the deeper connection between AI and finance has remained hidden in plain sight – a blind spot, that is. This is alarming given that AI is now hyped as the omnipotent solution for all sectors of society and as a docile assistant in your daily life. The danger is that the naturalization of AI will further ‘consolidate’ the blind spot.
Well, on a very practical level, it would appear that the sort of AI work going on in the world’s financial capitals is extremely complex, cutting-edge stuff, the secrets of which are jealously guarded and patented, so we end up not learning about what they’re up to for many years, and even when we do, it’s very obscure. The financial sphere is one where the world’s most accomplished data scientists and programmers are working, because financial firms have the money to pay for the talent. So there is a great lag between the technological developments in machine learning and self-building or self-correcting algorithms in finance and our learning of them publicly. Essentially, we have some of the greatest scientific and mathematical minds of a generation put to work building gladiator robots that seek to outdo one another, trading moving conjectural representations of global wealth around at superhuman speeds. Arguably, it’s the money and competition in this sector that is driving forward a huge number of so-called advances in AI technologies, but, of course, not only is it totally secretive and unregulated, it’s also put to a deeply unethical and socially destructive cause.
If these AI systems are tasked with managing the global economy, and if, forty years into the neoliberal global revolution, the global capitalist economy has its hands in almost every aspect of human life, from food to housing to work to medicine to education to the technology we increasingly use to manage social life, romance, entertainment and so on, should we not already admit we are ruled by machines? Maybe the feared “Skynet” moment, where AI powers grow to such an extent that they supersede human agency, already came and went, and most of us missed it completely?
It is an interesting hypothesis, but one that, to my mind, may obscure more than it reveals. On the one hand, it’s important to note that, if it is true the world is already ruled by financial AI, it is ruled not by some central, broad (artificial) intelligence, but by the cutthroat competition between multiple very targeted, specifically calibrated AIs. Further, I think there is a very interesting argument made by Anis Shivani, which went without enough fanfare, which basically argued that our fears about a world run by AI are actually (legitimate) fears about a world run by capitalism. It’s not simply that AI is a neutral tool put towards evil ends by profiteers. Shivani argues that capital is already a kind of AI: this dark inhuman product of our alienated social cooperation that comes to command and shape our labour and take control of society for its own reproduction and growth. This recalls Harvey’s metaphor, mentioned earlier, of financial markets as the “central nervous system” of capitalism. The “artificial intelligence” already exists, and may have existed for centuries. Today’s machine-learning and self-producing algorithms, from this perspective, are upgrades to an already-existing system.
I would simply ask, however, if at a certain point it becomes more accurate, or at least more evocative, to discuss the challenge of artificial imagination, by which I mean not simply the particular algorithms and protocols by which individual, competitive machines seek to translate the complex world into actionable data but, rather, the dynamics of a whole system made up of an innumerable quantity and velocity of such digital actions. As numerous scholars have already noted, “intelligence” is a poor metaphor for what algorithms “do.” But so long as we are using metaphors, I wonder what the notion of artificial imagination might open up?
The most advanced financial instrument of AI-driven high-frequency trading is the derivative. Could you elucidate financialized sociality as derivative sociality, and could you then also reflect on the politics of the technological set-up of derivative sociality? Here I have in mind, for instance, the fact that the group of those who create and profit from the technological set-up is rather small, while the group of those who are instrumentalized by this set-up is practically all-inclusive – an asymmetric situation that is not only an imbalance of power but also affords “the many” a surprising amount of power that is “hidden in plain sight”.
Randy Martin, mentioned earlier, was among a group of critical scholars fascinated by the power and dark baroque elegance of the derivative. Briefly, derivatives are at their simplest level agreements between two parties to conduct some transaction at a specified future date: I will buy, or have the option to buy, 100 shares of Google stock from you in one year’s time at a set price. These contracts are literally ancient. In fact, some of the oldest clay tablets from Sumer are essentially futures contracts, often used by farmers to hedge against the risks of future price fluctuations. But today, thanks to “developments” in financial accounting since the 1970s, the “value” of these contracts can be calculated and, thanks to “developments” in financial market infrastructure, there is today a massive trade in derivatives: the volume of annual trade in “over the counter” derivative products is, by some estimates, in the range of 700 times the entire planet’s economic output (GDP). All these measures are problematic (as is the comparison of trading volume to total output), but they give us a good sense of the magnitude of the problem.
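To make the mechanics concrete, the two simplest contract types gestured at above can be sketched in a few lines of Python. This is a minimal illustration, not a pricing model: the strike and spot prices are invented numbers, and the function names are my own.

```python
def forward_payoff(spot_at_expiry: float, strike: float) -> float:
    """Payoff to a buyer obliged to purchase at the agreed `strike` price.

    A futures/forward contract binds both parties: the buyer gains if the
    market price at expiry exceeds the strike, and loses if it falls below.
    """
    return spot_at_expiry - strike


def call_option_payoff(spot_at_expiry: float, strike: float) -> float:
    """Payoff to the holder of the *option* (not obligation) to buy.

    The holder only exercises when doing so is profitable, so the payoff
    is floored at zero (the premium paid for the option is ignored here).
    """
    return max(spot_at_expiry - strike, 0.0)


# Illustrative scenarios: a contract struck at 100, with three possible
# market prices at expiry.
strike = 100.0
for spot in (80.0, 100.0, 120.0):
    print(spot, forward_payoff(spot, strike), call_option_payoff(spot, strike))
```

The asymmetry between the two payoff functions is the farmer’s hedge in miniature: the obligation exposes you to losses symmetrically, while the option caps the downside, which is precisely why such contracts were worth writing on clay tablets.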
But the exact nature of the problem is complex. For Martin and others, derivatives essentially represent a method by which the future is mapped by financialized metrics of probability, and by which the whole world is measured on the basis of potential risks and rewards for speculative investment. Derivatives exist today that measure and trade on potential weather patterns, on the effects of climate change, on the direct and indirect results of geopolitical conflict (e.g. rising or falling oil prices), on practically anything at all. What’s more, thanks in part to new computing technologies, all of these bets, which are largely made by huge financial players, are often interlaced, cross-referenced, securitized, bundled into portfolios, creating a kind of collectively created house of cards that grows ever higher. I’m dramatically simplifying here to get at Martin’s key point: ultimately, the whole world is engridded in the framework of the derivative, and indeed the derivative ceases to merely measure and speculate on the world; it comes to actively shape and construct the world.
Derivative sociality gets at the way this whole process is based on and accelerates the phenomenon we discussed earlier: the way that each of us is increasingly, and in many different intersecting and conflicting ways, measured, bundled and securitized by this financialized order, of which the derivative is the key technology. Further, derivative sociality names the way we internalize this methodology: we come to apply the thought-world of the derivative in our own lives: everything becomes an investment, we are each tasked with translating our circumstances into a series of risks to be managed, assets to be leveraged, hustles to be undertaken. Finally, derivative sociality indicates the way that this individualized imperative manifests in unusual or unforeseen ways in the form of new collectivities. Martin was also a researcher of contemporary dance, which for him also meant all kinds of creative movement. So he saw grassroots forms like skateboarding, hip-hop and break-dancing and experimental contemporary dance as methods by which alienated and exploited individuals found one another and started to experiment in new forms of collective risk, underneath the financialized order, in its ruins.
And I think that contrast, between on the one hand Ivy League-educated astrophysicists working on Wall Street building AIs and on the other kids in the ghetto inventing new ways of using their bodies together, of creating new socialities, kind of maps out the inequalities in an era of derivative sociality. Elsewhere, Martin spoke about the new bifurcation of society between the lauded, valorized risk-takers and the abject “at risk” who must be managed lest their failures infect others.
Tackling the logic of derivative sociality, I would like to direct our attention to the work of political geographer Louise Amoore, whom I also interviewed recently on the politics of AI. In her work on the rise of what could be called “calculative security” within “AI-driven governmentality”, she is also engaged with the derivative as such and as a form that migrates across social fields. In her important book “The Politics of Possibility” Amoore writes: “The derivative’s ontology of association is overwhelmingly indifferent to the occurrence of specific events, on the condition that there remains the possibility for action: in the domain of finance, the derivative can be exchanged and traded for as long as the correlation is sustained; in the domain of security, the derivative circulates for as long as the association rules are sustained.”
It is this reading of the redeployment of the derivative form in the sector of security and governmentality that not only highlights how this form migrates across social fields – and thereby becomes the dominant form in society – but also how it comes to be constitutive of sociality. As Amoore writes about algorithms: “They infer possible futures on the basis of underlying fragmented elements of data toward which they are for the most part indifferent.” Here an enormous socio-political crisis becomes apparent: “Indifferent to the contingent biographies that actually make up the underlying data, risk in its derivative form is not centered on who we are, nor even on what our data say about us, but on what can be imagined and inferred about who we might be – on our very proclivities and potentialities.” And it is this indifference – we might conclude – that makes the redeployment of the derivative logic in “AI-driven governmentality” so productive (and, by the same token, so destructive) of sociality. Does it make sense to you?
Yes, absolutely. But I would simply ask: when have the powerful, and especially the powerful under capitalism, not been indifferent to the biographies and lives of those whom they subjugate? Subjugation in any system seems to me to always depend on dehumanization, the reduction of the subjugated to a kind of machine or indifferent mass. This was the ideology of colonialism and empire.
What is different, perhaps, is that this system is precisely interested in difference, specificity, idiosyncrasy, but only in aggregate. I think this is implicit in the quote above. Martin likewise observes that the power of the “order of the derivative” is precisely that it uses new technologies to pay extremely close attention to differences, to parse those differences ever more finely, in order to assemble or bundle those differences more precisely, and thereby to organize populations, tendencies, characteristics and profiles more expediently.
This has a lot to do with the digital iteration of what Deleuze, following Foucault, calls a “society of control,” one where the bio-political apparatuses of “making life” take on a distinctly neoliberal frame, where each individual is tasked with carving out a space within a network of codes, fragmented institutions, discontinuous systems of power, all under the inhuman sovereignty of the market.
So, to be schematic, the problem with these technologies is not that the individual is being lost in the mass or in the data, but in fact that the individual, as a construct of the capitalist, colonial, patriarchal order, is in fact being accelerated: more difference, more distinction, more atomization, more specificity. And it is not so simple as saying all forms of commonality or collectivity are being lost either: rather, it would appear that the dominant forms of commonality or collectivity are being generated from the movements of a swarm, but only those with access to the algorithms and big-data sets can hope to “see” and leverage that movement. The rest of us wake up daily to find ourselves (or fail to recognize ourselves as) part of ephemeral collectivities we never knew existed. Until disaster strikes.
Reading AI-driven financialization as fostering a derivative sociality brings us to the question of what happens to the common under financialization. For some time now, capitalism has “colonized” the common, e.g. by turning money into our common imaginative horizon. In the current phase of capitalism, characterized by the excesses of financialization, this process moves to a new stage. How can this “colonization” of the common – a process you yourself don’t call colonization but “encryption” – be understood as also transforming the very nature of the political? What is the “encrypted common” with regard to the questions concerning collectivity and sociality we have discussed so far?
To begin, I have been called to be much more careful around using the term “colonization” as a metaphor, as I have done in my past work. Colonization, like slavery, is a very specific historical process, one that is still with us. So both for the sake of analytic clarity and critical efficacy, I think we must use colonialism as a term carefully. There are very important ways in which finance and financialization are bound up with colonialism, past and present, but they need to be addressed with some specificity.
Now, the enclosure of the commons and the colonization of lands and peoples are very closely connected in the history of capitalism. Patel and Moore, whom I mentioned earlier, follow on the insights of Silvia Federici and Massimo de Angelis in identifying both as methods by which capitalism destroys or strips populations of their means of subsistence and severs their relationship to the non-human (or, as we say, “more-than-human”) world – that thing we used to call “nature”, except now we realize that the separation of society from nature was always artificial and epistemologically violent, and justified very real ecological and human violence as well.
So, with that said, let me now turn to your question more directly. It has indeed been my argument that finance and financialization represent new iterations or developments in the methods by which capitalism encloses or forecloses the common, a vast now-global engine for what David Harvey has called “accumulation by dispossession,” which feels slightly more accurate than Marx’s terminology of “primitive accumulation.” We can see this today in the turn towards questions of extraction and extractivism where communities and landscapes are ripped apart when speculative capital finances mining projects. Often, perhaps even usually, this process is also (neo-)colonial, in the sense that it represents an infraction on Indigenous people’s lands and lives, or rhymes with the history of imperialism, except today in directly financialized forms.
I have also sought to argue, especially in my book “Crises of Imagination, Crises of Power” that we must understand the cultural and social world as sites of enclosure, where our methods and practices of expression, aesthetics, connection and joy become targets of a form of capitalism that is increasingly bound up in accelerating consumerism, individualism and competition. And here I think money is the model: as Marx taught us, but also Mauss and others, money is this artificial bond with society we carry around with us, a kind of fragment of our own estranged potential to cooperation, solidified into an icon of capital itself that haunts us like a ghost. Money, then, is like a shard of a hologram, wherein we see the fractured whole if we learn how to look.
This is why I turn to the language of encryption when I speak of money, or when I speak of the way finance re-configures culture and society. It returns our own collective being, our collective agency to us in encoded form, scrambled and fragmented. My credit rating, for instance, is a reflection of my place within a capitalist society, my trustworthiness. But, of course, it is not a reflection of my whole being, my real relationships, but of a kind of speculative persona. And also: I have practically no control over or access to the means of telling this story about myself. But it is accessible to financialized corporations or their AIs. It is functional in the world, arguably much more functional than any story I may tell about myself. It determines, for instance, my access to my society’s resources, to the fruits of other people’s labor like housing, food, education or transportation. In the US, credit ratings are assessed by prospective landlords and employers. We are hearing terrible rumors about China’s planned Social Credit system, where access to nearly all public and private resources will be controlled by a kind of credit rating derived not only from one’s personal financial history but also from social media interactions, schooling, the testimony of friends and family and, more generally, from algorithmically determined scoring based on the value of others in your social network.
So, we are part of encrypted societies, and we ourselves are in a kind of crypt, a concept I borrow from Derrida, who in turn borrows it from psychoanalysis. We are both sealed in and sealed outside of ourselves and our society at the exact same time, both alive and dead. I use this language as well as a method to turn our attention away from the present fascination with cryptocurrencies, which, to borrow a formulation from the Institute of Network Cultures, are often brilliant answers to the wrong set of questions. If we want to liberate ourselves from this form of financialized capitalism it is not enough to simply create what we imagine to be a more fair or pure form of money. We must think of money as one among many methods we use to reflect on and shape our own imaginative-cooperative potentials.
Turning to art that – in its best moments – can be the “raw magma of the common” as you say in your book, I wonder what these moments are in which art becomes (or reveals itself as) the “raw magma of the common”? And I wonder what this “magma” is actually all about?
I borrow this concept of magma from Cornelius Castoriadis, who uses it to describe the power of the imagination, out of which he posits all social institutions are formed. We are imaginative-cooperative beings and we build a life together, and reproduce life together, by creating certain frameworks of the imagination, ways of signifying the world, ways of formalizing our relationships. In unequal societies, these solidify, like cooling magma, into rock-like shapes of hierarchies, ranks, castes and institutions. We take these for eternal, natural or necessary when they are simply the momentary crystallization or petrification of our own imaginative power. But the metaphor of magma also hints that another volcanic eruption is coming, one so powerful it will sweep away these solidified forms. And indeed history teaches us how very temporary and fragile our social institutions are. Sometimes that appears to be a good thing. As Ursula K. Le Guin, my patron saint of the radical imagination, put it, “We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings.” But equally I think we are now seeing institutions that seek to protect human freedom – say, the various United Nations declarations on human rights, the rights of children, or the rights of Indigenous people (which, in spite of the many horrific flaws of the UN as a tool of imperialism, were each the result of huge grassroots struggles) – simply being ignored by today’s cynical neofascists; in any case, they were largely rendered toothless by neoliberalism long before the most recent authoritarian wave.
In any case, my notion of the magma of the common is an attempt to build on my past work where I associated the imagination with the common, by which I mean the potential for human beings to collaborate, communicate and reproduce the world together on a horizontal, egalitarian and caring basis. I think this is at the root of all those phenomena we, today, call “economics,” though it appears today in perverted form. An economy is, ultimately, a framework for cooperative, imaginative reproduction, for commoning. But most economies are highly exploitative, oppressive and unequal expressions or articulations of the common. But like the imagination, the common is restless, within, against and beyond its current forms of incarceration.
How can art in its very best moments, respectively, as the “raw magma of the common”, challenge derivative sociality?
We must remember what we are capable of, together. We remember by doing. We remember in struggle, I think. The society of the derivative seeks to organize the imaginative-cooperative potential of the common by encouraging each of us to reimagine ourselves, and so act in the world, as isolated, atomized, paranoid, competitive risk-managers. It encourages (and promises to reward) us for translating or transmuting everything of value in our lives into assets to be leveraged: our education, our skills and talents, our relationships, our inheritance, our culture – anything at all. That is financialization’s dark magic, its cruel alchemy. And it brings a whole new layer of digital power to bear to defend, reproduce and extend this disorganized social order, this decentralized planned economy, where “finance” writ large controls everything like some authoritarian hive-mind. And we have learned to see the embrace of this derivative sociality as empowering, to embrace the ethos of the financier, the entrepreneur, the unapologetic gangster who realizes that everything is for sale and nothing is sacred. Is it any wonder we worship and indeed elect the most disgusting but vivid personifications of these tendencies?
And yet, as De Angelis points out, this is only a small part of the richness that makes up our lives. Still, the vast majority of our human relationships, even inside the bank or at work, are based on other values, on commoning even in some small and fractured way. Workers are constantly rebelling, even if it’s a small joke or an unspoken agreement to be lazy. Resistance flourishes, though not (yet) in the forms of open rebellion. We crave non-derivative connection, community and care, though fatefully we are willing to accept it in often authoritarian forms of nationalism, religion or ethno-nationalist solidarity. To remember the common, to see the common hidden in plain sight, to recognize the power we have, the power on which the system that exploits us depends, this is always the most powerful weapon in the hands of the exploited. We are broken, divided, exhausted and sick. But that power remains.
The notion that AI-driven financialization fosters derivative sociality is a prime subject of the episodic film “Popular Unrest”. The AI-driven financial system (referred to as the “World Spirit”) elicits random solidarity and random killings, and in the course of this brings the common to the fore. As it is hard to distinguish here between the utopian and the dystopian, I wonder whether you could explain why in this particular case the line is necessarily so thin in order for the “raw magma of the common” to emerge, and what this tells us about our predicament in times of AI-driven financialization?
I think ultimately this film by Melanie Gilligan is dystopian and quite pessimistic. To recap, the film depicts a world in the grips of the “World Spirit” which is kind of a hybrid of online markets, social media, a massive digital state bureaucracy and algorithmic control mechanisms. In its attempts to constantly learn more about the world it creates a kind of glitch in its own system which creates two effects: in one, people just start being randomly murdered, as if attacked from above by a knife. In the other, seemingly random strangers become inexplicably drawn together with a feeling of intense affinity. Spoiler alert: in the end, one of these “groupings” discovers the truth and seeks to confront the World Spirit, only to ultimately be corrupted by it, betraying one another, succumbing to competition, paranoia and individualism. The film ends with a world in which the wealthy spend their resources trying to protect themselves from the World Spirit which they are also, in so doing, helping to reproduce while the world’s poor are put perpetually “at risk” from its predations and violence.
For me, this film is a brilliant warning about where we may be headed, all the more impressive since it dates back almost a decade now. Ultimately, it is a warning about how our own powers and potentials are turned back against us. The World Spirit is not an autonomous thing, at least not as we are used to imagining autonomy. It’s a kind of collective hallucination given real power by the way it causes individuals and collectives to act. “We” built it to help ourselves cooperate and organize our society, and now it commands us. It actively seeks to prevent or coopt any other formats of collective agency that might challenge it.
You elaborate in your book that art could and should become a source of inspiration for radical change in general and social movements in particular. Moreover, you suggest that radical social movements not only could and should turn to art (as a source of inspiration) but also could and should turn into art, or something like art. And if “Popular Unrest” is somehow visionary in this context, as it presents crowds that emerge out of nothing somewhat performatively, as if part of a semi-abstract social sculpture, and as these crowds go about challenging the AI-driven financial system called the “World Spirit”, then what does the fact that the attempted abolition proves futile tell us?
While I do suggest in the book that art can offer us lessons for struggle, I don’t necessarily think that all art does so, or does so well. And I am also a bit wary of the idea that art must offer us visions or models of successful collective or individual agency or social transformation. Ultimately, in this film, the Grouping fails and is co-opted. It’s not a happy story.
I end the book on the question of abolition as a means to draw strength and inspiration less from art than from movements, in this case the constellation of movements around the world that are seeking to abolish prisons and police as social institutions. They do so not only by making vigorous, radical demands, but also by working in the here-and-now to support individuals and communities affected by these institutions (for instance, those who endure racial profiling or harassment by the police, or communities that are ripped apart by the prison system) and by actively working to build alternative, grassroots institutions. Here, I think, we see efforts to generate new forms of connectivity and collectivity as both the means and ends of generating or reproducing a very different form of security, within, against and beyond the financialized imperative to securitize.
It strikes me that the failure of the Grouping in Gilligan’s film stems from how terribly naive they are, believing that their little cluster could, in a heroic gesture, reform or shut down the World Spirit, relying only on the ephemeral solidarity that the Spirit itself had generated within them. Instead, I would imagine successful resistance and rebellion would mean a much more gradual (and certainly less cinematic!) building of solidarities and alliances, in formations that actually transformed people’s lives, that reduced dependency on the data-capitalist sovereign, that created new bonds of social reproduction and robust radical organization.
The key for me is this creation of new bonds of social reproduction and robust radical organization. I think our abilities and inclinations towards these ends have atrophied in a financialized society, while our muscles of individualized risk management are engorged to the point of obstructing and inhibiting our movement, both individual and collective. Art can and should play many roles, including inspiring and warning us. But it can also be part of a process of collective remembering, a remembering of our powers of imaginative cooperation which I mentioned earlier, powers that can be turned towards creating new bonds of social reproduction and robust radical organization. We must become different animals to survive what we have created, to create a new relationship with the world, and to avenge what we have been made to become.
Max Haiven is Canada Research Chair in Culture, Media and Social Justice at Lakehead University in Northwest Ontario and co-director of the ReImagining Value Action Lab (RiVAL). He writes articles for both academic and general audiences and is the author of the books “Crises of Imagination, Crises of Power”, “The Radical Imagination” (with Alex Khasnabish) and “Cultures of Financialization”. His new book “Art after Money, Money after Art: Creative Strategies Against Financialization” has just been published by Pluto Press.
Krystian Woznicki is a critic and the co-founder of Berliner Gazette. His recently published book “Fugitive Belonging” blends writing and photography. Other publications include “A Field Guide to the Snowden Files” (with Magdalena Taube), “After the Planes” (with Brian Massumi), “Wer hat Angst vor Gemeinschaft?” (with Jean-Luc Nancy) and “Abschalten. Paradiesproduktion, Massentourismus und Globalisierung”.
This is a caricature, but we have left this idea of actuarial reality behind for what I would call a ‘post-actuarial reality’, in which it is no longer about calculating probabilities but about accounting in advance for what escapes probability, and thus for the excess of the possible over the probable.
It is no longer a matter of threatening you or inciting you, but simply of sending you signals that provoke stimuli and therefore reflexes. There is in fact no longer any subject. It is not only that there is no longer any subjectivity; the very notion of the subject is itself being completely eliminated thanks to this collection of infra-individual data, which is recomposed at a supra-individual level in the form of a profile. You no longer ever appear.
Prevention consists in acting on the causes of phenomena, so that we know these phenomena will or will not happen. This is not at all what we are dealing with here in algorithmic governmentality, since we have forgotten about causality and are no longer in a causal regime. Pre-emption consists in acting not on the causes but on the informational and physical environment, so that certain things can or cannot be actualised, so that they can or cannot be possible. This is extremely different: it is an augmented actuality of the possible. Reality therefore fills the entire room, reality as actuality. This is a specific actuality that takes the form of a vortex sucking in both the past and the future. Everything becomes actual.
By Alexander Galloway
The politics of algorithms has been on people’s minds a lot recently. Only a few years ago, tech authors were still hawking Silicon Valley as the great hope for humanity. Today one is more likely to see books about how math is a weapon, how algorithms are oppressive, and how tech increases social inequality.
The incendiary failures are almost too numerous to mention: a digital camera that thinks Asians have their eyes closed; facial recognition technologies that misgender African-American women (or miss them entirely); Google searches that portray young black men as thugs and threats. A few hours after its launch in 2016, Microsoft’s chatbot “Tay” was already denying the Holocaust.
It used to be that if you wanted to explore the political nature of algorithms and digital media you had to go to Media Studies and STS, reading the important work of scholars like Lisa Nakamura, Wendy Chun, Seb Franklin, Simone Browne, David Golumbia, or Jacob Gaboury. (Or, before them, work on cybernetics and control from the likes of Gilles Deleuze, Donna Haraway, James Beniger, or Philip Agre.)
Now the politics of computation has gone mainstream. Social media followers no doubt saw the recent video clip in which New York congresswoman Alexandria Ocasio-Cortez claimed that algorithms perpetuate racial bias. And in a recent New York Times column, legal scholar Michelle Alexander quoted Cathy O’Neil’s argument that “algorithms are nothing more than opinions embedded in mathematics,” suggesting that algorithms constitute the newest system of Jim Crow.
I too am interested in the politics of computation, and have tried, over the years, to approach this problem from a variety of different angles. The specter of the “Chinese gold farmer,” for instance, has been an important topic in game studies, not least for what it reveals about the ideology of race. Or, to take another example, network architectures display an intricate intermixing of both vertical hierarchy and horizontal distribution, which together construct a novel form of “control” technology.
A topic that has captured my attention for several years now — although I’ve only yet written about it episodically and tangentially — is the way in which software (including its math and its logic) might itself be sexist, racist, or classist. And I don’t mean the uses of software. Use is too obvious; we know that answer already. I mean numbers like 5 or 7. Or the variable x. Or an if/then control structure. Or an entire computer language like C++ or Python. Do these kinds of things contain inherent bias? Could they be sexist or racist?
Uses of tech is one thing. Tech itself is another. Even the most ardent critics of Amazon or Google will frown and backpedal if one begins to criticize algebra or deductive logic. It’s the third rail of digital studies: don’t touch. For instance, some of you might remember the uproar a few years ago when Ari Schlesinger suggested designing a feminist computer language. How dare she! The very notion that computer languages might be sexist was anathema to most of the Internet public, fomenting a Gamergate-style backlash.
While I intend to make this kind of argument more explicitly in the future — the argument that mathematics itself is typed if not also explicitly gendered and racialized — I won’t do that here. Suffice it to say that the topic interests me a great deal. What I want to present here is an annotated collection of the various attempts to resist such a project, the many voices — so loud, so cocksure — that aim to silence and subdue the politicization of math and code.
But before starting, a few caveats. First, math, logic, and computation are not the same thing. Given more time it would be necessary to define these terms more clearly and show how they are related. Personally I consider math, logic, and computation to be intimately connected, often so intimately connected as to reduce one to another. For instance, in the past I’ve made claims like “software is math”; I acknowledge that others might be uncomfortable with this kind of reduction.
Second caveat: Race, class, and gender are not the same thing. I’m referencing them together here because they evoke a specific kind of cultural and political context, and because they all emerge from processes of discretization and difference. A richer discussion would necessarily need to address the complex interconnection between race, class, gender and other qualities of lived experience.
Third caveat: There are people working on these topics who do not fall into one of the responses below — see paragraph three above for some initial suggestions. I am aware of many of them, but of course not all of them, so please feel free to alert me to relevant references if you feel so moved. My intent is not to ignore or silence people. I am focusing on these responses because, in my assessment, they represent the set of dominant positions.
In documenting the resistance to the politicization of math and code, I’ve paraphrased and condensed texts found online and in various kinds of public debate. The italicized block quotes below are fictionalized accounts, but based on things said by real people. Each fictionalized account is followed by my own commentary. Note that I’m omitting all manner of bad-faith responses of the form “women are inherently bad at math.” The responses below are all examples of good-faith responses from people who consider themselves more or less charitable and reasonable.
+ + +
Response #1: “Pure Abstraction”
“Math is the pursuit of abstraction and formal relation. Math expresses number in its purest form. Algorithms, math, and logic are agnostic to people and their specific qualities. An algorithm has no political or cultural agenda. It does not matter if you are a man or a woman. In math, a correct answer is the only thing that matters.”
Perhaps the most common response, particularly among mathematicians and computer scientists, is to occupy the position of, shall we call it, Naive Abstraction. Here math is entirely uncoupled from the real world. It merely expresses the clearest, most rigorous, and most formal relation between abstract entities. Even if they are unlikely to admit it publicly, most mathematicians are Platonists; most of them secretly (or not so secretly) think that mathematical entities exist in a realm of pure, formal abstraction.
Response #2: “Politics Unwelcome Here”
“How silly to try to classify mathematics along political or social lines. You are misusing math to further a personal agenda. If math has a cultural or political agenda, the agenda is invalid because agendas by definition deviate from the pure and abstract nature of math. A ‘feminist mathematics’ is simply nonsense; the very notion conflicts with the basic definition of mathematics. Math is pure abstraction uncoupled from world-bound facts such as race, ethnicity, gender, class, or culture.”
The second response is similar to the first. Here, Naive Abstraction still holds, only its proponents have become more aggressive in defending their turf. That’s just not how math works, they say, and to suggest otherwise is to commit an infraction against mathematics. Politicizing mathematics means subjecting it to an external reality for which it was never intended. It constitutes a kind of category mistake, they claim. Let’s keep math over here, and politics over there, and be careful not to mix them.
Response #3: “Your Terms Aren’t Clearly Defined”
“Who says math is the pursuit of pure abstraction, insensitive to the real world? Innumerable mathematicians have recognized the importance of observation and even experimentation in the development of mathematical knowledge. Math can be applied, even empirical — just ask any working scientist. And you overlook the extensive attention given within mathematics to real phenomena in disciplines like geometry, topology, or calculus. Hence your terms aren’t clearly defined. Math is complex, so don’t make indictments based on generalizations; deal instead with specific cases.”
Response #4: “Essentialism Is Bad”
“Certainly math can be understood culturally and politically, but to define math in this way means to assign it an essence, and essentialism is the worst form of cultural and political misuse. There is no essence to form or structure. A form gains its definition through encounters with other forms. Structures gain their meanings only after being put into exchange with other structures. Tech has no essence; to offer a rigid definition is to be guilty of essentialism.”
Now we have the same problem as before, only in reverse. Here the respondent might freely acknowledge the cultural and political valence of algorithms or code. They might even reject nominalism and acknowledge that math has general characteristics. However this introduces a new threat that must be resisted: essentialism. To define something is to assign it an essence. And since we already know essentialism is bad — thanks to a few decades of poststructuralism — this approach is destined to fail. Don’t try to politicize things because you’ll simply expose yourself to even greater hazards. (Try ethics instead.)
Response #5: “Not My Problem”
“Cultural and political concepts like race or gender might be interesting, but that’s just not my topic. They’re specific to particular contexts, while I’m looking at generalizable phenomena. Since race/class/gender aren’t generalizable mathematically, it’s safe to ignore them.”
The dynamic between general and specific can also be leveraged in other ways. A common technique is to suggest that race, class, or gender are “particulars”; they pertain to particular contexts, to particular bodies, to particular histories. And, as particulars, they do not rise to the level of general concern. Thus the Not My Problem folks often think they are operating in good faith even while avoiding or dismissing politics: yes, I care about your plight; but it’s yours alone; I’m simply interested in other things (birdwatching, stamp collecting, prime numbers).
Response #6: “Focus on Subjects”
“Yes, of course math is a cultural and political technology. Math is a tool of governmentality that constructs and disciplines subjects. Instead of studying math for its own sake, focus on how math produces subjects. Given that tech inscribes power onto bodies, effects will be visible in how subjects are coded and organized.”
Thus far we’ve been considering responses from people who are typically outside the field of critical digital studies. The two final responses — responses 6 and 7 — are interesting in that we find them within critical digital studies. In fact, these last two responses are some of the most popular positions in media studies today. Response 6 freely admits the cultural and political nature of math, code, logic, and software. Response 6 asserts, however, that the best way to understand the culture and politics of math is to look not at math but at subjects. Persons and their bodies become the legible substrates on which the various successes and failings of technology are inscribed. These folks tend either to be Foucauldians — “if you want to understand tech, first you have to understand power.” Or they tack more toward anthropology and the human sciences — “if you want to understand tech, first you have to understand people.” Either way, math falls out of the frame.
Response #7: “Focus on Use”
“Math is just a tool. Sure, algorithms can be cultural and political, but that’s a truism for most things. If racist or sexist values are deployed technically, then technology will appear racist or sexist. Math is a neutral vessel, but it’s rarely objective because it harbors people’s goals and intentions. To remedy any perceived bias, focus on the cultural and political context in which tech is used. In other words don’t talk about sexist algorithms so much as sexist uses of tech or sexist contexts.”
Last but not least, a common response to the question of political tech — arguably the most common response, at least for those “in the know” — is to say that code and software are embedded with values. What values exactly? The values of their creators, which is to say all the biases and assumptions of whoever designs the algorithm. Thus if you have a racist algorithm, it’s because some racist designer somewhere made it that way. If you have sexist software, it’s because some coder was negligent. If this argument sounds familiar, it should: it’s a version of the gun rights argument that “guns don’t kill people, people kill people.” Only now the argument is: math doesn’t hurt people, negligent mathematicians hurt people (using math). Sometimes we call this the Neutral Vessel response — sometimes the Just A Tool response — because it turns tech into a neutral, valueless vessel ready to receive someone else’s values, or a passive tool waiting to accomplish someone else’s agenda.
+ + +
What do these responses all have in common? First, they all delink math and code from culture and politics. Either the link is explicitly denied (responses 1-3) or the link is acknowledged before being disavowed (responses 4-7). So while the non-denial responses (4-7) seem more enlightened, given how they admit culture and politics, they remain particularly pernicious since they divert attention elsewhere. They suggest that we investigate the essence or non-essence of math, that we focus on math’s use, or on the subjects of math — anything to avoid looking at math itself.
Overall I see this as a kind of “fear of media.” Whether in denial or acknowledgement, all of the responses work to undermine the notion that math, code, logic, or software are, or could be, a medium at all. If math were a full-fledged medium, one would need to attend to its affordances, its forms and structures, its genres, its modes of signification, its various lapses and slippages, and all the many other qualities and capacities (active or passive) that make up a mode of mediation.
Ironically this fear of media tends to perpetuate stereotypes rather than remedy them. For instance, the “neutral vessel” is an ancient trope for female sexuality going back at least to Aristotle if not earlier, as are neutral media substrates more generally (matter as mater/mother, feminine substrates receiving masculine form, and so on). And the act of “injecting ethics” or “embedding values” into an otherwise passive, receptive technology resembles a kind of insemination. In other words “fear of media” also means “fear of the feminine.”
Yet most significantly, all the above responses favor incidental bias over essential bias. None of them asserts any sort of specific quality inherent to the nature of math or code. Hence the question remains: do math and code contain an essential bias, and, if so, what is it? Not that long ago affirmative answers would have easily been forthcoming. Rationality is an “iron cage” (Max Weber). Abstraction perpetuates alienation (Karl Marx). Discrete binaries are heteronormative (Judith Butler). Still, the notion that mathematics contains an essential bias has slipped away in recent years, replaced by other arguments.
My answer is also affirmative, only the explanation is a bit different. I maintain — and will need to elaborate further in a future post — that mathematics has been defined since the ancients through an elemental typing (or gendering), and that within such typing there exists a general segregation or prohibition on the mixing of types, and that the two core types themselves (geometry and arithmetic) are mutually intertwined using notions of hierarchy, foreignness, priority, and origin. Given the politicized nature of such a scenario — gendering, segregation, hierarchy, origin — only one conclusion is possible, that whatever incidental biases it may bear, mathematics also contains an essential bias. Any analysis of the culture and politics of math and code will need to address this core directly, if not now then soon.
by Himanshu Damle
If it is true that string theory cannot accommodate stable dark energy, that may be a reason to doubt string theory. But it is equally a reason to doubt dark energy – that is, dark energy in its most popular form, called a cosmological constant. The idea originated in 1917 with Einstein and was revived in 1998 when astronomers discovered that not only is spacetime expanding – the rate of that expansion is picking up. The cosmological constant would be a form of energy in the vacuum of space that never changes and counteracts the inward pull of gravity. But it is not the only possible explanation for the accelerating universe. An alternative is “quintessence,” a field pervading spacetime that can evolve. According to Cumrun Vafa of Harvard: “Regardless of whether one can realize a stable dark energy in string theory or not, it turns out that the idea of having dark energy changing over time is actually more natural in string theory. If this is the case, then one can measure this sliding of dark energy by astrophysical observations currently taking place.”
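The difference between the two proposals can be put in one formula. In standard notation (a generic sketch, not tied to any particular quintessence model), the dark-energy equation-of-state parameter w = p/ρ of a cosmological constant is fixed at −1 for all time, while a canonical scalar field φ rolling in a potential V(φ) yields a value that evolves:

```latex
w_\Lambda = -1,
\qquad
w_\phi(t) = \frac{\tfrac{1}{2}\dot{\phi}^2 - V(\phi)}{\tfrac{1}{2}\dot{\phi}^2 + V(\phi)} \in [-1,\, 1]
```

Accelerated expansion only requires w < −1/3, so a survey that detects w drifting away from exactly −1 over cosmic time would favor an evolving field such as quintessence over a constant.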
So far all astrophysical evidence supports the cosmological constant idea, but there is some wiggle room in the measurements. Upcoming experiments such as Europe’s Euclid space telescope, NASA’s Wide-Field Infrared Survey Telescope (WFIRST) and the Simons Observatory being built in Chile’s desert will look for signs that dark energy was stronger or weaker in the past than in the present. “The interesting thing is that we’re already at a sensitivity level to begin to put pressure on [the cosmological constant theory],” says Paul Steinhardt of Princeton University. “We don’t have to wait for new technology to be in the game. We’re in the game now.” And even skeptics of Vafa’s proposal support the idea of considering alternatives to the cosmological constant. “I actually agree that [a changing dark energy field] is a simplifying method for constructing accelerated expansion,” says Eva Silverstein of Stanford University. “But I don’t think there’s any justification for making observational predictions about the dark energy at this point.”
Quintessence is not the only other option. In the wake of Vafa’s papers, Ulf Danielsson, a physicist at Uppsala University and colleagues proposed another way of fitting dark energy into string theory. In their vision our universe is the three-dimensional surface of a bubble expanding within a larger-dimensional space. “The physics within this surface can mimic the physics of a cosmological constant,” Danielsson says. “This is a different way of realizing dark energy compared to what we’ve been thinking so far.”
FREE ONLINE BOOKS
1. Deep Learning by Yoshua Bengio, Ian Goodfellow and Aaron Courville
2. Neural Networks and Deep Learning by Michael Nielsen
3. Deep Learning by Microsoft Research
4. Deep Learning Tutorial by LISA lab, University of Montreal
COURSES
1. Machine Learning by Andrew Ng in Coursera
2. Neural Networks for Machine Learning by Geoffrey Hinton in Coursera
3. Neural networks class by Hugo Larochelle from Université de Sherbrooke
4. Deep Learning Course by CILVR lab @ NYU
5. CS231n: Convolutional Neural Networks for Visual Recognition On-Going
6. Probabilistic Graphical Model by Daphne Koller in Coursera
7. Deep Learning and Neural Network class by Kevin Duh
VIDEO AND LECTURES
1. How To Create A Mind by Ray Kurzweil - an inspiring talk
2. Deep Learning, Self-Taught Learning and Unsupervised Feature Learning By Andrew Ng
3. Recent Developments in Deep Learning By Geoff Hinton
4. The Unreasonable Effectiveness of Deep Learning by Yann LeCun
5. Deep Learning of Representations by Yoshua Bengio
6. Principles of Hierarchical Temporal Memory by Jeff Hawkins
7. Machine Learning Discussion Group - Deep Learning w/ Stanford AI Lab by Adam Coates
8. Making Sense of the World with Deep Learning By Adam Coates
9. Demystifying Unsupervised Feature Learning By Adam Coates
10. Visual Perception with Deep Learning by Yann LeCun
PAPERS
1. ImageNet Classification with Deep Convolutional Neural Networks
2. Using Very Deep Autoencoders for Content Based Image Retrieval
3. Learning Deep Architectures for AI
4. CMU’s list of papers
TUTORIALS
1. UFLDL Tutorial 1
2. UFLDL Tutorial 2
3. Deep Learning for NLP (without Magic)
4. A Deep Learning Tutorial: From Perceptrons to Deep Networks
DATASETS
1. MNIST Handwritten digits
2. Google House Numbers from street view
3. CIFAR-10 and CIFAR-100
4. Tiny Images 80 Million tiny images
5. Flickr Data 100 Million Yahoo dataset
6. Berkeley Segmentation Dataset 500
MISCELLANEOUS
1. Google Plus - Deep Learning Community
2. Caffe Webinar
3. 100 Best Github Resources in Github for DL
4. Caffe DockerFile
5. TorontoDeepLearning convnet
6. Vision data sets
7. Fantastic Torch Tutorial. My personal favourite; also check out gfx.js
8. Torch7 Cheat sheet
Perspectives from Leading Practitioners
Machine intelligence has been the subject of both exuberance and skepticism for decades. The promise of thinking, reasoning machines appeals to the human imagination, and more recently, the corporate budget. Beginning in the 1950s, Marvin Minsky, John McCarthy and other key pioneers in the field set the stage for today’s breakthroughs in theory, as well as practice. Peeking behind the equations and code that animate these peculiar machines, we find ourselves facing questions about the very nature of thought and knowledge. The mathematical and technical virtuosity of achievements in this field evoke the qualities that make us human: everything from intuition and attention to planning and memory. As progress in the field accelerates, such questions only gain urgency.
Heading into 2016, the world of machine intelligence has been bustling with seemingly back-to-back developments. Google released its machine learning library, TensorFlow, to the public. Shortly thereafter, Microsoft followed suit with CNTK, its deep learning framework. Silicon Valley luminaries recently pledged up to one billion dollars towards the OpenAI institute, and Google developed software that bested Europe’s Go champion. These headlines and achievements, however, only tell a part of the story. For the rest, we should turn to the practitioners themselves. In the interviews that follow, we set out to give readers a view to the ideas and challenges that motivate this progress. We kick off the series with Anima Anandkumar’s discussion of tensors and their application to machine learning problems in high-dimensional space and non-convex optimization. Afterwards, Yoshua Bengio delves into the intersection of Natural Language Processing and deep learning, as well as unsupervised learning and reasoning. Brendan Frey talks about the application of deep learning to genomic medicine, using models that faithfully encode biological theory. Risto Miikkulainen sees biology in another light, relating examples of evolutionary algorithms and their startling creativity. Shifting from the biological to the mechanical, Ben Recht explores notions of robustness through a novel synthesis of machine intelligence and control theory. In a similar vein, Daniela Rus outlines a brief history of robotics as a prelude to her work on self-driving cars and other autonomous agents. Gurjeet Singh subsequently brings the topology of machine learning to life. Ilya Sutskever recounts the mysteries of unsupervised learning and the promise of attention models. Oriol Vinyals then turns to deep learning vis-à-vis sequence-to-sequence models and imagines computers that generate their own algorithms.
To conclude, Reza Zadeh reflects on the history and evolution of machine learning as a field and the role Apache Spark will play in its future.
It is important to note that the scope of this report can only cover so much ground. With just ten interviews, it is far from exhaustive: indeed, for every such interview, dozens of other theoreticians and practitioners successfully advance the field through their efforts and dedication. This report, its brevity notwithstanding, offers a glimpse into this exciting field through the eyes of its leading minds.
by Himanshu Damle
The formation of black holes can be understood, at least partially, within the context of general relativity. According to general relativity, gravitational collapse leads to a spacetime singularity. But this spacetime singularity cannot be adequately described within general relativity, because the equivalence principle of general relativity is not valid for spacetime singularities; therefore, general relativity does not give a complete description of black holes. The same problem exists with regard to the postulated initial singularity of the expanding cosmos. In these cases, quantum mechanics and quantum field theory also reach their limits; they are not applicable to highly curved spacetimes. At a certain curvature parameter (the famous Planck scale), gravity has the same strength as the other interactions; then it is no longer possible to ignore gravity in a quantum field theoretical description. So there exists no theory which would be able to describe gravitational collapse, or which could explain why (although they are predicted by general relativity) spacetime singularities don’t happen, if indeed they don’t. And the real problems start if one brings general relativity and quantum field theory together to describe black holes. Then one arrives at rather strange forms of contradiction, and the mutual conceptual incompatibility of general relativity and quantum field theory becomes very clear:
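The Planck scale invoked here is fixed by the standard combination of the three constants ħ, G and c:

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{m},
\qquad
E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2 \times 10^{19}\,\mathrm{GeV}
```

At curvature radii of order ℓ_P the gravitational coupling becomes of order one, which is the quantitative form of the claim that gravity can no longer be neglected in a quantum field theoretical description.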
Black holes are, according to general relativity, surrounded by an event horizon. Material objects and radiation can enter the black hole, but nothing inside its event horizon can leave this region, because the gravitational pull is strong enough to hold back even radiation; the escape velocity is greater than the speed of light. Not even photons can leave a black hole. Black holes have a mass; in the case of the Schwarzschild metric, they have exclusively a mass. In the case of the Reissner-Nordström metric, they have a mass and an electric charge; in the case of the Kerr metric, they have a mass and an angular momentum; and in the case of the Kerr-Newman metric, they have mass, electric charge and angular momentum. These are, according to the no-hair theorem, all the characteristics a black hole has at its disposal. Let’s restrict the argument in the following to the Reissner-Nordström metric, in which a black hole has only mass and electric charge. In the classical picture, the electric charge of a black hole becomes noticeable in the form of a force exerted on an electrically charged probe outside its event horizon. In the quantum field theoretical picture, interactions are the result of the exchange of virtual interaction bosons, in the case of an electric charge: virtual photons. But how can photons be exchanged between an electrically charged black hole and an electrically charged probe outside its event horizon, if no photon can leave a black hole – which can be considered a definition of a black hole? One might think that virtual photons, which mediate the electrical interaction, are able (in contrast to real photons, which constitute radiation) to leave the black hole. But why? There is no good reason and no good answer for that within our present theoretical framework.
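For reference, in geometrized units (G = c = 1) the Reissner-Nordström geometry discussed above is characterized by the line element

```latex
ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\,d\Omega^2,
\qquad
f(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2},
```

with horizons where f(r) = 0, i.e. r± = M ± √(M² − Q²). Setting Q = 0 recovers the Schwarzschild radius r_s = 2M (r_s = 2GM/c² in physical units), the radius at which the escape velocity formally reaches the speed of light.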
The same problem exists for the gravitational interaction, for the gravitational pull of the black hole exerted on massive objects outside its event horizon, if the gravitational force is understood as an exchange of gravitons between massive objects, as the quantum field theoretical picture in its extrapolation to gravity suggests. How could (virtual) gravitons leave a black hole at all?
There are three possible scenarios resulting from the incompatibility of our assumptions about the characteristics of a black hole, based on general relativity, and on the picture quantum field theory draws with regard to interactions:
(i) Black holes don’t exist in nature. They are a theoretical artifact, demonstrating the asymptotic inadequacy of Einstein’s general theory of relativity. Only a quantum theory of gravity will explain where the general relativistic predictions fail, and why.
(ii) Black holes exist, as predicted by general relativity, and they have a mass and, in some cases, an electric charge, both leading to physical effects outside the event horizon. Then we would have to explain how these effects are realized physically. Either the quantum field theoretical picture of interactions is fundamentally wrong, or we would have to explain why virtual photons behave, with regard to black holes, completely differently from real radiation photons. Or the features of a black hole – mass, electric charge and angular momentum – would be features imprinted during its formation onto the spacetime surrounding the black hole or onto its event horizon. Then interactions between a black hole and its environment would rather be interactions between the environment and the event horizon, or even interactions within the environmental spacetime.
(iii) Black holes exist as the product of gravitational collapses, but they do not exert any effects on their environment. This is the craziest of all scenarios. For this scenario, general relativity would have to be fundamentally wrong. In contrast to the picture given by general relativity, black holes would have no physically effective features at all: no mass, no electric charge, no angular momentum, nothing. And after the formation of a black hole, there would be no spacetime curvature, because no mass remains. (Or the spacetime curvature would have to result from other effects.) The mass and the electric charge of objects falling (casually) into a black hole would be irretrievably lost. They would simply disappear from the universe when they pass the event horizon. Black holes would not exert any forces on massive or electrically charged objects in their environment. They would not pull any massive objects into their event horizon and thereby increase their mass. Moreover, their event horizon would mark a region causally disconnected from our universe: a region outside of our universe. Everything falling casually into the black hole, or thrown intentionally into this region, would disappear from the universe.
open culture - George Orwell Predicted Cameras Would Watch Us in Our Homes; He Never Imagined We’d Gladly Buy and Install Them Ourselves
Rouvroy/Stiegler - THE DIGITAL REGIME OF TRUTH: FROM THE ALGORITHMIC GOVERNMENTALITY TO A NEW RULE OF LAW
Steven Craig Hickman - Philip K. Dick, William Gibson and Science Experiments: Information from the Future