Tuesday, March 23, 2010
Caleb Kelly’s Cracked Media focuses on artists who create music by exploiting the noise and glitches of playback devices such as turntables and CD players. He outlines “a creative practice that drives playback tools into territory where undesired elements of the media become the focus of the practice.” According to Kelly, “‘Cracked media’ are the tools of media playback expanded beyond their original function as a simple playback device for prerecorded sound or image. ‘The crack’ is a point of rupture or a place of chance occurrence, where unique events take place that are ripe for exploitation toward new creative possibilities.” Marketed as commodities offering high-definition sound, phonographs and compact disc players are “mediating devices” that are typically constructed and used to “create a transparent experience that transcends mediation.” Cracked media, however, take that fact of mediation as their starting point and, through the cracks and breaks that intrude into the regular listening experience, make the medium into the message. The “practitioners of cracked media” use a variety of approaches, from destroying the mediating device or recording, to temporarily altering it, to merely manipulating the playback technology. There is the “temporary crack,” such as writing on a CD with a nonpermanent marker; the “permanent crack,” such as using an intentionally or unintentionally scratched record; and, finally, “broken media”: cases in which the integrity of the mediating device or recording is destroyed. Kelly situates cracked media as a response to Adorno and Attali’s critiques of recorded music. In his well-known dismissal of the phonograph, Adorno argues that the record, in addition to being a poor copy of the real performance, serves capitalist interests by reducing music to a fetishized commodity that can be consumed in the privacy of the home. 
In his book on noise and political economy, Attali extends this critique, claiming that the commodity-recording eventually becomes a simulacrum detached from any origin. According to Kelly, “The use of cracked media in the creation of sound and music problematizes Adorno and Attali’s critiques of recording technology. It calls for a reevaluation of the potential for original and creative output from the end point of the recording industry, the point of playback. First, the flow of production and consumption is disturbed by the productive musical outcomes generated by cracking and breaking media, and second, the fetishistic character of musical consumption is questioned by the abuse of the reified products of the music industry.” He adds, “The imagined transparent and passive mediating devices of storage and playback are transformed into generative technologies by practitioners of the crack and break, breaking the linear flow of production and consumption.” Kelly admits that noise would seem to be a readymade concept for analyzing cracked media, but he finds its multifarious meanings inadequate to fully account for cracked media. Yet he does outline four useful ways to categorize noise – “acoustic noise, noise in information theory, subjective noise, and material noise” – and concludes, “The practitioners of cracked media take the objects of recording and playback and generate new outcomes for them utilizing noise – a noise that is always part of the system, waiting in the background.” Kelly devotes his longest chapter to phonographic experimentation. Of course playing, even modifying, instruments in ways that exceed their traditional use has a long history, dating back at least to the performance of Erik Satie’s Le Piège de Méduse in 1914. John Cage’s prepared piano is the most famous example of the prepared technique, which is also illustrated by Nakamura Toshimaru’s no-input mixing desk, John Cale’s electric viola, and Sonic Youth’s modified guitars. 
Kelly claims, “These musicians manipulate instruments to expand their sonic palette, thereby varying the expected sound outcomes the instrument is capable of.” The turntable “scratch” would seem to be another important example, though Kelly does not go into the history of hip-hop. He outlines the different techniques of cracking and breaking records and phonographs before entering into the history of these techniques. He argues that the history of phonograph experimentation reflects “the technology’s changing value as a commodity.” When records were more valuable, they were only temporarily modified. But when they became everyday objects, valueless, and/or obsolete, they made themselves available for artistic damage and destruction. He notes that in the 1920s and 30s, many classical composers, including Hindemith, Stravinsky, Milhaud, and Varèse, began to incorporate the use of phonographs into their compositions and started to consider the phonograph as a “compositional tool.” John Cage of course went the furthest, starting a series of phonograph experiments with the use of two turntables in his 1939 piece Imaginary Landscape No. 1. Cage also achieved a breakthrough in his Cartridge Music, which called for replacing the cartridge needle with other objects and then rubbing the cartridge against various items. Kelly claims, “Cartridge Music marks the beginning of cracked media in relation to the phonograph. It is a fully realized composition for which the modified turntable is the only instrument.” Kelly then turns to the music of Fluxus and Nam June Paik. Fluxus, which often violently reacted to consumer society, produced many works that involved the destruction of objects and instruments, including musical ones. Paik, who is better known these days for his video work, composed a couple of such pieces, including Violin with String, in which Paik pulled a violin behind him down a road. 
Besides television, Paik also made use of modified turntables and tape in his multimedia works. In Random Access, Paik detached the tone arm of a record player from the turntable body, allowing users to directly access any sound on a set of stacked records. In a related work, Paik attached strips of magnetic tape to a gallery wall and allowed listeners to interface with the tape at random by using a detached tape head. Kelly then turns to Czech artist Milan Knížák, who in the 1960s became bored with the few records he owned and played over and over again. Knížák deliberately scratched and attacked the records in order to produce something new. In his Destroyed Music, Knížák created a new record by gluing together “four quarters of different vinyl discs,” and in later works he attached items to and painted the surfaces of records, transforming them into sculptures or paintings. In the 1980s and 90s, Christian Marclay rose to fame for his turntable experiments. In addition to using multiple record players at once, Marclay prepared his records by scratching, breaking, taping, and recombining them in various ways (focusing on the formal details, Kelly completely neglects the hauntological content of Marclay’s performances). In Footsteps, Marclay covered an art gallery floor with records, which were walked on by visitors and subsequently sold. Kelly claims, “Christian Marclay, in his destruction of the vinyl object, heralded the next step in the use of turntable as an instrument, which was the disappearance of the vinyl record altogether.” Otomo Yoshihide, Akiyama Tetsuji, and Michael Graeve all experimented with using the turntable to play itself, creating music/sound out of the mediating device alone. Kelly’s next chapter focuses on music based on CD skipping and moves from the avant-garde toward pop music. 
In an exceptional section, Kelly shows how early reviewers of CD technology set out to explore its limits and thereby unknowingly anticipated later musical experiments. Only one year after the introduction of the CD, Yasunao Tone, who had been involved with the Tokyo Fluxus in the 1960s, used Scotch tape to make CDs skip. In what he came to call “wounded CDs,” Tone created and recorded his own complex musical works, and then modified the CDs in order to create chance effects. In contrast to the marketing pitch, “The idea of a playback technology that could play pure clean audio was displaced by Tone’s noisy, glitching CDs.” Kelly then discusses Oval, perhaps the most famous group mentioned in the book. Inspired by a CD borrowed from the library that skipped, Oval built an entire musical aesthetic around reproducing that skipping effect. Often using CDs that had to be returned to the library, they used marker pens and adhesive tape to temporarily alter the CDs. According to Kelly, Oval “aestheticized” the glitch: “The notion of the crack and break as transgressive is undone in the work of Oval.” Oval recorded the unexpected skip and then subjected it to sequencing and a greater level of musical order. After linking cracked media to Michel de Certeau’s idea of “tactics,” Kelly’s final chapter describes theories by Paul DeMarinis and Kim Cascone that consider how musical works can be created from unwanted background noise or aural “detritus” that is usually ignored or discarded. Kelly concludes by admitting that, due to the sensitive nature of the laser, there are substantial limitations to modifying CDs, and that the turntable’s potential for experimental abuse has also been nearly exhausted.
Sunday, March 21, 2010
In these early “political” texts, most of which first appeared in Socialisme ou Barbarie, Castoriadis savagely attacks the Soviet bureaucracy and reformist trade unions and emphasizes that the revolution and socialism can only take the form of the autonomous activity of the proletariat, the self-management of the working class. Though Castoriadis’ ideas may be familiar due to his influence on the Situationists and Autonomia, these groundbreaking pieces remain strikingly original. Collected together, however, Castoriadis’ articles can seem rather repetitive, and their relentless anti-Stalinism may be wearying to contemporary, post-Cold War readers, most of whom will be better served by reading only “The Problem of the USSR and the Possibility of a Third Historical Solution,” “Socialism or Barbarism,” “Proletarian Leadership,” and “On the Content of Socialism I.” Castoriadis begins with a detailed critique of the monstrous realities of Stalinist Russia. As the volume progresses, this acute awareness of the total failure of the socialist project in the Soviet Union leads Castoriadis to question “what the objectives of the revolution should be” and to set forth the autonomy of the proletariat as the true content of socialism. Trotsky of course already had something of a monopoly on ultraleftist critiques of Stalinism. Trotsky labeled the Soviet Union a “degenerated workers’ State” and believed that Stalinism was a parasitic phenomenon without foundations deep enough to last very long. In fact, Trotsky maintained that WW II would cause the collapse of Stalinism, and Trotskyites resorted to intellectual contortions to defend this idea against the realities of the postwar era. 
For Castoriadis, the Soviet Union is not a “degenerated workers’ State” because the Soviet bureaucracy has fully assumed the position of an exploitative class and has so completely taken power that the State “no longer has a working-class character.” “Russian society is divided into classes, among which the two fundamental ones are the bureaucracy and the proletariat. The bureaucracy there play the role of the dominant, exploiting class in the full sense of the term.” When Trotsky posed the dilemma of “Socialism or Barbarism?”, he failed to recognize that a “third solution, beyond the dilemma of capitalism or socialism, is possible. It corresponds to the proletariat’s potential revolutionary bankruptcy. And its historical meaning would be that of a fall into an unprecedented modern barbarism, entailing an unbridled, rationalized exploitation of the masses, their complete political dispossession, and the collapse of culture.” In other words, the “bureaucratic capitalism” exhibited by the Soviet Union. Castoriadis argues that it is essential not to confuse juridical descriptions with the real relations of production. The nationalization of industry and the abolition of private property do not eliminate what Castoriadis sees as the fundamental conflict of modern economies: the division between those who command and those who are commanded: “As traditional forms of property and the bourgeoisie of the classical period are pushed out by State property and by the bureaucracy, the main conflict within society gradually ceases to be the old one between the owners of wealth and those without property and is replaced by the conflict between directors and executants in the process of production.” As a result, the “socialist revolution cannot simply be the abolition of private property. This objective can be achieved by the monopolies and the bureaucracy themselves with no other result than an improvement in its methods of exploitation. 
The goal of the socialist revolution must be the abolition of all fixed and stable distinctions between directors and executants, in relation to both production and social life in general.” In order to avoid this distinction between directors and executants, the proletariat must direct itself during and after the revolution: “Only the proletariat, acting as a whole, can achieve the aims of the proletariat revolution. No one else can do the job for it. The working class cannot and should not entrust anyone with this task, and especially not its own ‘cadres’. . . . If the proletariat does not itself as a whole assume at every moment the initiative and the leadership of every aspect of social life, both during and more especially after the revolution, it will only have succeeded in changing masters.” But Castoriadis is aware that the emergence of worker self-management from worker subjugation is paradoxical. He claims “The proletariat itself should be its own leadership. . . . But on the other hand, it is obvious [due to its current state of exploitation and passivity] that the class cannot immediately and directly be its own leadership.” He adds, “For the passage from . . . the stage during which the exploited, alienated, and mystified class cannot be its own leadership to the stage during which the class necessarily directs itself – appears as, and in reality is, a leap, an absolute contradiction. This is a contradiction that, let it be said parenthetically, is no more remarkable than the revolution itself, nor than every moment during which a thing ceases to be itself in order to become another thing.” The situation is also complicated by the fact that the division of mental and physical labor and the division of vanguard and mass movement also reflect the division of directors and executants and therefore must be abolished. 
But taken to its extreme, the critique of Leninism can be politically paralyzing because it proscribes any action by isolated groups before the revolution is in full swing. Castoriadis, however, argues that a revolutionary leadership, as long as it draws from the experience of the working class and is voluntarily followed, can act as “a body that decides on the orientation and the methods of action of the class or portions thereof, and strive to get these methods adopted by the class through ideological struggle and exemplary action.” “To be revolutionary signifies both to think that only the masses in struggle can resolve the problem of socialism and not fold one’s arms for all that; it means to think that the essential content of the revolution will be given by the masses’ creative, original, and unforeseeable activity, and to act oneself, beginning with a rational analysis of the present and with a perspective that anticipates the future. In the last analysis, it means to postulate that the revolution will signify an overthrow and a tremendous enlargement of our present form of rationality and to utilize this same rationality in order to anticipate the content of the revolution.”
Wednesday, March 17, 2010
David Golumbia’s The Cultural Logic of Computation argues that many of the most suspect features of Western rationalism have been propagated in a disguised form through the computer and what he terms “computationalism.” Through a critical analysis of the “rhetoric of computing” (but not, unfortunately, through a careful materialist investigation into the computer), Golumbia tries to show how “Computation – as metaphor, method, and organizing frame – occupies a privileged and under-analyzed role in our culture.” He argues, “belief in the power of computation – a set of beliefs I call here computationalism – underwrites and reinforces a surprisingly traditionalist conception of human being, society, and politics.” Golumbia smartly rejects claims about the computer causing a rupture in humanity and history: “Networks, distributed communication, personal involvement in politics, and the geographically widespread sharing of information about the self and communities have been characteristic of human societies in every time and every place: a burden of this book is to resist the suggestion that they have emerged only with the rise of computers.” He particularly takes issue with arguments that frame the computer as an anti-hierarchical harbinger of freedom. He maintains that computerization largely “benefits and fits into established structures of institutional power.” Rather than empower users, computers tend to empower institutions, including academic disciplines, corporations, and government agencies. 
In fact, “Too often, computers aid institutions in centralizing, demarcating, and concentrating power.” He adds, “Inside our existing institutions of power, computerization tends to be aligned with relatively authority-seeking, hierarchical, and often politically conservative forces – the forces that justify existing forms of power.” Drawing from Deleuze the contrast between smooth and striated spaces, Golumbia argues that computers aid institutions in striating society and subjecting it to high levels of control. Beneath the image of a computerized “flat world,” the computer is used to categorize and control in increasingly sophisticated and invasive ways. Golumbia uses the term “computationalism” to refer to “the view that a great deal, perhaps all, of human and social experience can be explained via computational processes.” Despite the appearance of being new, computationalism draws from the long history of instrumental reason. Actually, “The computer, despite its claims to fluidity, is largely a proxy for an idealized form of rationalism.” Golumbia is certainly right about this link between computationalism and rationalism. But as a consequence, much of his book ends up reiterating the familiar cultural studies critique of ethnocentric, abstract, white male “reason,” which excludes and oppresses all cultural, racial, gender, and physical differences. The first section of the book discusses computationalism and language, and it begins with a rather strong attack on Chomsky’s linguistics, which were influenced by early work on computing. 
Golumbia claims that, for Chomsky, somewhere deep in the brain is a “logical engine” that functions like a computer and “is capable of generating infinitely many structures.” Chomsky exhibits a “belief in syntax,” “the restricted set of logical rules that are necessary to generate all the sentences of a given language.” Syntax is not identical to the grammar of any actual language, but rather a “deep structure” that, while inaccessible to direct observation, hierarchically structures language. According to Golumbia, “Chomsky, like all computationalists, is convinced that pure form is something that can be studied in isolation from use, context, and social meaning.” “By arguing strenuously that linguistic phenomena could be separable into form and content, essentially out of his own intuitions rather than any particular empirical demonstrations, Chomsky fit linguistics into the rationalist tradition from which it had spent nearly a hundred years extricating itself.” Chomsky’s computationalism/rationalism made his work appealing to many linguists, who also wanted to ignore all cultural and contextual linguistic differences. Golumbia continues his attack by pointing out the hypocrisy of Chomsky’s ideas: “In his overt politics, Chomsky opposes the necessity of hierarchies in the strongest way, while his intellectual work is predicated on the intuitive notion that language and cognition require hierarchies.” Golumbia partially explains this irony by claiming that Chomsky’s political views are “fundamentally libertarian” and “historically connected” to a rationalist “individualist conservatism.” Golumbia argues that the Chomskyan revolution in linguistics heavily influenced functionalist philosophy. Many of the major functionalists, including Hilary Putnam and Jerry Fodor, studied with Chomsky at one point or another. Functionalism is a “model of mind” that posits that “’psychological states’ . . . are simply ‘computational states’ of the brain. 
The proper way to think of the brain is as a digital computer.” Golumbia shows how functionalist philosophy was so opposed to more holistic and culturally sensitive arguments that many of its major figures eventually turned against it, recognizing it for the instrumental reasoning that it was. In later chapters, Golumbia shows how machine translation projects and computational linguistics also exhibited signs of a computationalism that, searching for a pure form of language, treated all linguistic and cultural difference as an error or unwanted burden. Outside of these specialized fields, there have been similar efforts in attempts to transform the Internet into a Semantic Web in which all text is categorized and computable. Golumbia is rather skeptical of the entire Semantic Web project, claiming, “Digital text, for the most part, is just text, and for the most part text is just language. Language, a fundamental human practice, does not have a good history of conforming to overarching categorization schemes.” At this point he also shows how a kind of linguistic colonization discreetly occurs through the reliance of most computer languages and Internet tools on English. He writes, “Few English-only speakers realize the implications of the fact that almost all programming languages consist entirely of English words and phrases, and that most operating systems are structured around command-line interfaces that take English writing, and specifically imperative statements, as their input.” HTML and other Internet instruments also heavily rely on English as a “norm.” “One might say that, despite the ability of computers to produce documents in Hindi or Japanese, computers and networks themselves speak and operate in only a fragmentary, math- and logic-oriented chunk of English.” The promotion across the globe of the computer as an opportunity therefore draws more people toward English. 
Golumbia even sees something insidious in Nicholas Negroponte’s proposal that every child in the world receive a laptop: “There could be almost no more efficient means of eradicating the remaining non-Western cultures of the world than to give children seductive, easy-to-use tools that simply do not speak their languages.” In the next major section, Golumbia moves from computationalism and language to computationalism and corporations, though a closer study of the history of business computing and corporate capitalism could have helped this section. He argues that computers enable the collection and analysis of large quantities of market data. They are used to reduce “large masses of data into tractably hierarchical, striated layers.” That is, computers allow the population to be striated into increasingly refined categories, though these categories often reflect dubious cultural and social stereotypes or the desires of marketers (sub-prime loans might be one such questionable category). Although discrimination through the financial redlining of districts has been banned, the computer’s ability to striate the population, which has a substantial impact on the lives of almost everyone, can have similar negative effects and escapes from almost all regulation. The data that computers process also can come to seem more real than reality. For example, Golumbia describes how computerized spreadsheets allow corporate management to manipulate abstracted data that is kept from and seems more important than real employees. “Often, the managers and owners of a company . . . see the reality portrayed by and manipulated via spreadsheets as the ‘actual’ reality of the company, and the rest of the corporate activities as a kind of epiphenomena connected to the spreadsheet reality only via metaphoric and metonymic processes.” Here and elsewhere, Golumbia shows how the computer benefits most those at the top of the institutional hierarchy. 
He adds that new forms of computerized surveillance and control in the workplace are more powerful and harder to fight than older ones: “Today’s computerized methods of employee tracking and surveillance are both more effective and less visible, and have so far seemed less tractable to organized resistance, despite the apparent computational knowledge and interest of the people whose work is so closely monitored and controlled.” Because “nearly all interaction with contemporary corporations is mediated precisely and specifically by computerization, both internally and externally,” computers also provide corporations sophisticated means for controlling interactions with customers. The final section of Golumbia’s book focuses on computationalism and politics and extends his argument about how computers empower institutions and reinforce hierarchies.
Monday, March 15, 2010
Alexander Galloway’s Protocol studies how control is maintained within distributed computer networks such as the Internet. As its subtitle states, the book asks the question, “How would control exist after decentralization?” Galloway nicely takes aim at the common belief that the Internet is chaotic and “uncontrollable” (and, depending on one’s politics, therefore essentially liberating or in need of stronger regulation). He instead asserts, “The founding principle of the Net is control not freedom. Control has existed from the beginning.” Without “protocol,” computers would be unable to communicate with each other over the Internet and web surfing would be hindered by the discontinuity of websites. That is, “Without a shared protocol, there is no network.” Galloway draws the term “protocol” from computing, where it indicates “standards governing the implementation of specific technologies.” Protocols are “recommendations and rules that outline specific technical standards,” and they function as “a distributed management system that allows control to exist within a heterogeneous material milieu.” He argues that the misguided belief in the freedom of the Internet is due to the contradictory nature of protocol, which both “radically distributes control into autonomous bodies” and “focuses control into rigidly defined hierarchies.” Transmission Control Protocol/Internet Protocol (TCP/IP) allows computers on a network to communicate in a nonhierarchical manner, whereas the Domain Name System (DNS) that “maps network addresses to network names” creates a strict inverted tree hierarchy. “Ironically . . . nearly all Web traffic must submit to a hierarchical structure (DNS) to gain access to the anarchic and radically horizontal structure of the Internet.” As is well known, the Internet emerged from efforts to create a network that could survive a nuclear attack. 
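The inverted-tree character of DNS that Galloway describes can be caricatured in a few lines of Python. This is only an illustrative sketch, not anything from the book: the nested dictionary, the domain name, and the address are hypothetical stand-ins.

```python
# Toy DNS: an inverted tree walked from the root down. The tree, the
# name, and the address are hypothetical stand-ins for illustration.
DNS_TREE = {"org": {"example": "93.184.216.34"}}  # root -> TLD -> domain

def resolve(name):
    # "example.org" -> labels ["org", "example"], descended in order:
    # every lookup must pass through the top of the hierarchy first.
    node = DNS_TREE
    for label in reversed(name.rstrip(".").split(".")):
        node = node[label]
    return node

print(resolve("example.org"))  # the hierarchical walk yields an address
```

Once the hierarchical lookup returns an address, traffic between the two hosts proceeds peer to peer over TCP/IP, which is precisely the contradiction Galloway highlights.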
Distributed networks, protocol, and computing technology combined to create the new “apparatus of control” that characterizes our contemporary conjuncture. Galloway loosely situates and periodizes his theory of protocol using the work of Deleuze on “control societies,” Foucault on “biopolitics and biopower,” Kittler on “discourse networks,” Mandel & Jameson on “late capitalism,” and Hardt & Negri on “Empire.” Discussing different network “diagrams,” Galloway defines centralized, decentralized, and distributed networks. Centralized networks, from the American judicial system to Bentham’s panopticon, are hierarchical and centered by a “single authoritative hub,” to which different, subordinate nodes can be connected. Decentralized networks, such as the system of airports in the United States, have multiple hubs, “each with its own array of dependent nodes.” Decentralized networks are less hierarchical than centralized ones, but they are still hierarchical because of the distinction between hubs and nodes. Distributed networks, such as rhizomes and the Internet, allow nodes to directly connect without necessarily involving intermediary hubs. Distributed networks are therefore far less hierarchical and centered than both centralized and decentralized networks. That is, “distributed networks have no central hubs and no radial nodes. Instead each entity in the distributed network is an autonomous agent.” Because there is no longer a chain of command, distributed networks survive only through agreed upon “rules of the system”: protocol. Galloway then closely examines how computing protocols are created and function. He explains how the blueprints for protocols are specified in “Requests for Comments” (RFCs) and how protocols are layered in a way that allows host computers to communicate via the Internet. 
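The contrast among the network diagrams Galloway defines can be sketched with toy adjacency sets (the five-node graphs are my own illustration, not Galloway's): a centralized star has one authoritative hub, while in a distributed ring every node is a peer of equal degree.

```python
# Hypothetical five-node networks illustrating two of Galloway's diagrams.
centralized = {            # star: every node reaches the others via hub "A"
    "A": {"B", "C", "D", "E"},
    "B": {"A"}, "C": {"A"}, "D": {"A"}, "E": {"A"},
}
distributed = {            # ring: peers connect directly, no privileged hub
    "A": {"B", "E"}, "B": {"A", "C"}, "C": {"B", "D"},
    "D": {"C", "E"}, "E": {"D", "A"},
}

def max_degree(net):
    # A single high-degree node marks an authoritative hub;
    # uniform degrees mark autonomous agents.
    return max(len(peers) for peers in net.values())

print(max_degree(centralized), max_degree(distributed))  # prints "4 2"
```

A decentralized network would sit between the two: several hubs of intermediate degree, each with its own dependent nodes.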
In addition to such technical protocols, there are also “formal protocols.” Without the latter, the network made possible by the former might be too discontinuous, too heterogeneous, for use. Formal protocols involve “techniques of continuity” – such as concealing the source, eliminating dead links, and expectations about speed and image resolution – that render the Internet a smooth space for navigation. “Protocological machines” such as browsers and HTML are also affected by and serve formal protocols. After an ambitious chapter that links protocol to biopolitics and looks at “protocol [as] an affective, aesthetic force that has control over ‘life itself,’” Galloway shifts his focus to “the failures of protocol.” The first type of failure involves bureaucratic and proprietary efforts to interfere with protocol. He claims, “bureaucratic and institutional forces (as well as proprietary interests) are together the inverse of protocol’s control logic.” Bureaucratic interests impose themselves on protocol from without, whereas proprietary interests try to co-opt protocol from within. Galloway gives a useful overview of the complex web of organizations involved in creating the standards of the Internet. He concludes, “Ironically . . . the Internet protocols that help engender a distributed system of organization are themselves underpinned by undistributed, bureaucratic institutions.” The final chapters of the book turn to hacking, tactical media, and Internet art, and cover more familiar media studies ground. These sections are the most affected by Galloway’s rejection of “negative political strategies.” He maintains that protocol can only be resisted from within: “it is through protocol that one must guide one’s efforts, not against it.” He takes this position because “Protocol is synonymous with possibility. . . . If one chooses to ignore a certain protocol, then it becomes impossible to communicate on that particular channel. 
No protocol, no connection.” As a result, “in order to be politically progressive, protocol must be partially reactionary.” Galloway offers a sympathetic account of hacking: “By knowing protocol better than anyone else, hackers push protocol into a state of hypertrophy, hoping to come out the other side. So in a sense, hackers are created by protocol, but in another, hackers are protocological actors par excellence” that call “attention to commercial or governmental actions that impede protocol through making certain technologies proprietary or opaque.” He frames “tactical media as those phenomena that are able to exploit flaws in protocological and proprietary command and control not to destroy technology, but to sculpt protocol and make it better suited to people’s real desires.” He then shows how early Internet artworks focused on the limitations of the network whereas later ones focused on the possibilities and limitations of commercial software. In his conclusion, he explains why protocol works so well: “Like liberalism, or democracy, or capitalism, protocol is a successful technology precisely because its participants are evangelists, not servants. Like liberalism, democracy, or capitalism, protocol creates a community of actors who perpetuate the system of organization. And they perpetuate it even when they are in direct conflict with it.”
Wednesday, March 10, 2010
The Economic Institutions of Capitalism is one of the foundational works of transaction cost economics and the New Institutional Economics. Transaction cost economics argues that the economic institutions of capitalism, including firms, markets, and relational contracting, serve “the main purpose and effect of economizing on transaction costs.” According to Williamson, “A transaction occurs when a good or service is transferred across a technologically separable interface. One stage of activity terminates and another begins.” Transaction costs appear whenever there is “friction” at the point of interface. In other words, transaction costs are the “costs of running the economic system.” Ronald Coase’s work from the 1930s pioneered the transaction cost approach. Coase argued that the economy could be organized either into firms or markets, with the choice between them determined by transaction costs. Despite Coase’s contributions, institutional analyses of economic organization did not prosper between 1940 and 1970. Instead, the neoclassical approach (which would include Milton Friedman’s neoliberal economics) treated the modern corporation as a “black box,” reducing firms to “production functions” and ignoring institutional complexity and variety. However, work in organization theory during this period by Herbert Simon, Alfred Chandler, and Michael Polanyi (the first two were Williamson’s teachers) established the importance of organizational form to business performance. For transaction cost economics, the firm is not simply a production function, but rather a “governance structure.” “Transaction cost economics poses the problem of economic organization as a problem of contracting.” The firm offers a number of unique forms for governing contracts that the market does not.
Williamson draws from Herbert Simon the assumption that “human agents are subject to bounded rationality, when behavior is ‘intendedly rational, but only limitedly so.’” Transaction cost economics also pays close attention to opportunism (a result of information asymmetry and behavioral uncertainty) and varying “asset specificity,” which can include site, physical, and human asset specificity. If human rationality were perfect, information completely available, and people completely trustworthy, detailed contracts would suffice and there would be little need for organizational forms of governance. But in reality, “Planning is necessarily incomplete (because of bounded rationality), promise predictably breaks down (because of opportunism), and the pairwise identity of parties now matters (because of asset specificity). This is the world of governance. . . . The organizational imperative that emerges in such circumstance is this: Organize transactions so as to economize on bounded rationality while simultaneously safeguarding them against the hazards of opportunism. Such a statement supports a different and larger conception of the economic problem than does the imperative ‘Maximize profits!’” The central problem that transaction cost economics tries to explain is the removal of contracts from market governance (classical contracting between separate parties, and public, legal forms of enforcement) to within the firm, which is a “distinctive governance instrument.” The best-known example is the vertically integrated corporation, which incorporated various supply, production, distribution, and marketing functions into one organizational structure. Williamson positions the transaction cost explanation of the existence of large, hierarchical corporations against theories based on technological determinism or power.
Between 1940 and 1970, the existence of large, vertically integrated corporations was often attributed to modern technology, which supposedly could only be effectively created and used by large organizations (see John Kenneth Galbraith’s theory of the “technostructure”). Williamson rejects this argument, which made some sense during the era of postwar Fordism but which recent history has largely disproven. Williamson also denies the theory that power – either of capitalists over workers or of monopolies over competitors – can explain the success of large, hierarchical corporations. Trying to establish the importance of transaction costs, Williamson strongly attacks Marxist/radical economists here, but at later points he makes room for questions of technology and power as relative or occasional factors in economic organization. Williamson argues that firm size depends on the ability to reduce transaction costs and is in turn restricted by the limitations of intra-firm incentives and bureaucratic features. He then attempts to show that hierarchy has an economic rationale by comparing and analyzing the transaction costs involved in different forms of hierarchical and non-hierarchical organization, from capitalist domination to communal collaboration. He concludes that hierarchy reduces transaction costs, and adds, “The question of optimal work organization is thus poorly posed when it is put in terms of hierarchy or its absence.
Attention ought to be shifted instead to whether reliance on hierarchy is excessive (generates adverse side effects) and whether appointments to hierarchical positions are made in a way that both promotes efficiency and commands general respect.” Williamson then surveys Alfred Chandler’s work on the history of the modern corporation, and he finds that “the modern corporation is mainly to be understood as the product of a series of organizational innovations that have had the purpose and effect of economizing on transaction costs.” Before concluding with studies on how transaction cost economics can contribute to our thinking about natural monopolies and antitrust legislation, he evaluates the corporate board of directors as a governance structure that can be used to safeguard the interests of various corporate stakeholders, from stockholders to workers to, perhaps, the community.
Sunday, March 7, 2010
The Closed World focuses on the relationship of mainframe computing to Cold War politics. Edwards argues that “the historical trajectory of computer development cannot be separated from the elaboration of American grand strategy in the Cold War.” He therefore attempts to show how the computer not only served a practical function within Cold War defenses but also functioned as a technological support for various discourses that were at the center of Cold War intellectual and political life. The first half of the book focuses on what Edwards terms “closed-world discourse.” Closed-world discourse refers to “the language, technologies, and practices that together supported the visions of centrally controlled, automated global power at the heart of American Cold War politics.” Closed-world discourse modeled the world as a closed system, and its “central metaphor” was “containment,” “the image of an enclosed space surrounded and sealed by American power.” The actual details of that closed system could vary, referring to containment of the Soviet Union, defense of capitalist spheres of influence, or extension of the capitalist world-system (led of course by the U.S.) to a global level. Computers were essential supports for closed-world discourse because “they allowed the practical construction of central real-time military control systems on a gigantic scale” and “they facilitated the metaphorical understanding of world politics as a sort of system subject to technological management.” Without the computer, fantasies of containment and control on a global scale would have been blatantly absurd. But Edwards thankfully offers much more than a study of the rhetoric of computing during the Cold War. He assembles from Foucault and other poststructuralists a concept of discourse as “a self-elaborating ‘heterogeneous ensemble’ that combines techniques and technologies, metaphors, language, practices and fragments of other discourses around a set of supports. 
It produces both power and knowledge: individual and institutional behavior, facts, logic, and the authority that reinforces it.” Just as computers functioned as a technological support for closed-world discourse, closed-world discourse came to determine the evolution of computing technology. Edwards writes, “’Closed-world discourse’ thus names a language, a worldview, and a set of practices.” It includes techniques “for modeling aspects of the world as closed systems,” technologies such as the computer, forms of “mathematical and computer simulation of systems,” fantasies and ideologies of centralized command and control, and a “language of systems, gaming, and abstract communication and information.” Edwards begins his history of closed-world discourse by examining the development of computers by the military between the 1940s and 1960s. The military certainly was responsible for many of the most important computing innovations during this period, and military spending was central to the success and shape of the mainframe computing industry, though, as elsewhere, Edwards neglects the importance of other kinds of institutions for the history of computing. Surveying early computers such as the ENIAC and UNIVAC, Edwards notes that the computer in the late 1940s and early 1950s had not yet reached a stage of “closure” and therefore was available to be (re)constructed along the lines of military dreams of command-control systems. Edwards devotes an important chapter to the SAGE air defense system, “the first large-scale, computerized command, control and communications system” that “unleashed a cascading wave of command-control projects from the late 1950s onwards.” SAGE, standing for Semi-Automatic Ground Environment, originated in the Whirlwind computer system, which pioneered real-time control in computing. When the Whirlwind, which was struggling for funding, was adopted by the Air Force, real-time control became central to defense initiatives.
SAGE was complete by 1961 and cost more than a billion dollars to construct. Technically, SAGE was an accomplishment; many of its features, such as computing reliability and speed and its real-time operation capabilities, trickled out into the rest of the world of mainframe computing. But there were many reasons to doubt its actual effectiveness as a system for air defense. Edwards argues that such practical concerns hardly hurt SAGE because its most important function may have been to support closed-world discourse. Although each “SAGE center was an archetypal closed-world space: enclosed and insulated, containing a world represented abstractly on a screen, rendered manageable, coherent, and rational through digital calculation and control,” its “closed world was a leaky container, constantly patched and repatched, continually sprouting new holes.” Such leaks didn’t matter because SAGE “was far more than a weapons system. It was a dream, a myth, a metaphor for total defense, a technology of closed-world discourse.” Edwards’ next chapter surveys the history of the Rand Corporation (the name stands for “Research and Development,” and there is no relation to Remington Rand), which dogmatically preached the techniques of systems analysis and legitimated a “theory- and simulation-based approach to strategy.” Infatuated with the mathematical techniques of operations research and systems analysis as well as with the new paradigm of game theory, Rand valued its own simulated systems more than any empirical data about reality. “The formalistic language of statistics, systems, and Rand-style game-theoretic nuclear strategy – all of it reliant on the technological support of digital computers – reduced the problem of war to an issue of algorithms, electronics, and kill ratios.” Because there simply was no data on how nuclear war would play out, nuclear conflicts could only be simulated, and often on the very machines that would have been used in a real war. 
When put to use during the Vietnam War, Rand’s techniques created a simulation of success, an “electronic battlefield” that offered reassuring data about the effectiveness of military operations. The U.S.’s defeat in the war, however, eventually punctured the solipsistic shell of the systems approach. The last half of Edwards’ book shifts its focus to “cyborg discourse” and the development of the fields of cognitive psychology and artificial intelligence. Edwards writes, “Cyborg discourse is the field of techniques, language, and practice in which minds are constructed as natural-technical objects . . . through the metaphor of computing.” Cyborg discourse includes techniques for the “integration of humans into mechanical and electronic systems,” the “computer as a technology with linguistic, interactive, and heuristic problem-solving capacities,” diversifying “Practices of computer use,” “Experiences of intimacy with computers and of connection to other people through computers,” fictions of “cyborgs, robots, and intelligent machines, increasingly prominent in science fiction and popular cultures,” and computer metaphors such as those comparing the brain to a machine. Edwards argues that cyborg discourse complemented closed-world discourse: “Cyborg discourse collaborated with closed-world discourse both materially, when artificial technologies and human/machine integration techniques were used for military purposes, and metaphorically, by creating an interpretation of the inner world of human psychology as a closed and technically manipulable system.” He adds, “Cyborg discourse is the political subject position . . . of the closed world’s inhabitants. Artificial intelligence, man-computer symbiosis, and human information processing represent the reductions necessary to integrate humans fully into command and control. The cybernetic organism . . .
is the form of a mind in symbiotic relationship with the information machines of the electronic battlefield.” World War II created numerous situations in which it was necessary to smoothly integrate men into complex electromechanical systems. Edwards argues that the research programs in psychology and related fields that came out of the war therefore reflected military conflicts and concerns. After surveying the creation of cybernetics and information theory (this history is available from many other sources at this point), Edwards turns to the origins of cognitive psychology in the Psycho-Acoustic and Electro-Acoustic Laboratories (PAL and EAL). He traces “one of the major historical trajectories of cognitivism, from wartime work on human-machine integration to postwar concerns with information theory to the computer as metaphor for the human mind.” Edwards then devotes a chapter to the origins of artificial intelligence (AI) (Edwards’ historical research and argument here, as in many other places, overlaps with that of N. Katherine Hayles). Most of Edwards’ book focuses on the 1950s and 1960s. Closed-world discourse declined during the 1970s, partially because of the failure of the Vietnam War and changes in government spending. But the 1980s and the Reagan era saw closed-world discourse rebound and reach new heights with the Strategic Defense Initiative / Star Wars, which imagined satellites in outer space using laser beams to create a protective barrier for the U.S. Although that program has now been scrapped as well, it is clear that closed-world discourse remains an attractive one for the military. Edwards’ exhaustive research into the computing technologies of the Cold War should be considered a model for studies of technology and culture. But readers should remain aware of Edwards’ narrow scope, which openly neglects the business institutions that also used and developed mainframe computers. The history of business computing is quite a different story.
Jan Stutje’s biography of Ernest Mandel offers an important view into the life of the theorist of “late capitalism.” Because Mandel was a committed Trotskyist from his teenage years to his death, his biography risks dissolving into the history of the Fourth International. Stutje deliberately attempts to avoid this reduction, yet the personal, political, and intellectual spheres of Mandel’s life often remain disconnected within the book; then again, perhaps that lack of unity was a fact of Mandel’s life. Mandel was born in 1923 in Frankfurt. His parents brought their Polish-Austrian and Jewish background with them when they moved to Belgium. Growing up in Antwerp, Mandel was surrounded by an intense, cosmopolitan intellectual and political life. In the early 1930s, political and Jewish refugees – many of them Trotskyists – from Germany and elsewhere flooded the home of his parents, who helped them find places to hide. Mandel’s father was deeply involved in the Fourth International, allowing its committee to meet at his house and assisting in the publication of its pamphlets and periodicals. At the age of thirteen, Mandel became a “fiery supporter” of and active participant in these activities. Mandel officially became a Trotskyist when he joined the Revolutionary Socialist Party (the Belgian section of the Fourth International) in 1938 at the age of fifteen. During the Nazi occupation of Belgium, underground newspapers (Trotskyist and more general ones) were printed and distributed from within the Mandel home. When the Nazis began to register and round up Jews in the area around 1941, the Mandel family fled into hiding, though the young Mandel secretly remained in Brussels to engage in political organizing.
Whereas many Belgian Trotskyists were pessimistic about the situation and encouraged observation and patience, Mandel, at the age of twenty, took one of his first steps toward becoming a political theorist by writing a pamphlet drawing on a wide body of Marxist literature that argued for stronger immediate action. He was arrested in 1942, but his father paid a large ransom to obtain his freedom. He was arrested again in 1944 and sent to Germany, where he passed through a series of prisons and work camps, finally being liberated by the Americans in 1945. After returning to Belgium, he began studying history and became involved in student socialist groups, but in 1946, after he had completed his comprehensive exams, he turned from his studies to journalism. He soon found all his time occupied writing for numerous publications from many different countries, including Time-Life-Fortune. He gained a reputation and earned his living for the next few decades by working as a journalist. At this time, Mandel stuck closely to Trotsky on a number of major points. He maintained that the Soviet Union was a “degenerated workers’ state,” not a kind of “state capitalism,” and that its degenerated bureaucratic form would not be able to last long. Like many other Marxists and Trotskyists in the years immediately following the war, Mandel clung to the idea that “capitalism had reached its final phase,” that it would soon collapse and be replaced by socialism – what Aufheben calls the theory of capitalist decline. Decades later, his work on late capitalism would extend yet correct this early belief to account for the unexpected postwar boom. When the Belgian Trotskyists’ hopes for postwar revolution were shown to be deluded and the country moved toward social democracy, Mandel and the other Trotskyists joined the Belgian Socialist Party, hoping to radicalize the party from within.
But conflicts between the official line of the Socialist Party and what was printed in more leftist and Trotskyist publications ended Mandel’s career as a social democrat in 1965. After ten years of work, Mandel published his first major work, Traité d’économie marxiste (Marxist Economic Theory), in 1962, establishing himself as “an internationally recognized economist.” Among other things, the book responded to theories of capitalist stagnation and built on the rediscovery of Marx’s Grundrisse, a rare copy of which Mandel acquired through a friend. During the 1950s and early 1960s, Mandel, as one of the leaders of the Fourth International, struggled with another leader, Michel Pablo, over where the possibilities of revolution could be located. Pablo emphasized colonial revolution, whereas Mandel, while recognizing the importance of the Third World, maintained hope of “revolution in the developed capitalist countries.” This was just one of the many often vicious fights among the Trotskyists to which Mandel devoted a great deal of his time and energy. In 1964, Mandel published the article “The Economics of Neo-Capitalism,” which laid out many of the themes, such as the theory of long waves of capitalist development, that would be more fully analyzed nearly a decade later in Late Capitalism. Around 1965-67, Mandel worked on an overview of Marx’s writing, eventually publishing The Formation of the Economic Thought of Karl Marx: 1843 to Capital. He came to strongly disagree with Althusser’s reading of Marx in For Marx and Reading Capital. Although Mandel “recognized the discontinuity in Marx’s thinking,” he “denied that Marx had discarded the concept of alienation.” Mandel argued that the concept of alienation was transformed over time in Marx’s writing, and he attacked Althusser’s static, structural methodology.
He voiced these criticisms at a colloquium and in his 1969 essay “Althusser Corrects Marx.” Che Guevara had “enthusiastically” read Mandel’s Marxist Economic Theory and “had large parts of it translated.” When he traveled to Cuba in 1964, Mandel befriended Che. During that trip, they worked together on the questions of whether central planning could work in a country like Cuba where the forces of production were not well developed and of whether the law of value should override the autonomy of workers’ councils. Mandel never saw Che again before Guevara’s murder in 1967. A later trip to Cuba promised to put Mandel in contact with Castro, but it never came about, perhaps in part because of the unsteady relationship of Cuba to Stalinist Russia (which would not have appreciated such a public meeting with a famous Trotskyist). In 1966, Mandel participated in a public debate with Rudi Dutschke on the Chinese Cultural Revolution. Mandel quickly became friends with and a mentor to Dutschke, one of the most important figures in the growing German extra-parliamentary left. Through his influence on Dutschke and the many speeches and lectures he was invited to give in Germany, Mandel came to play a major role in the development of the New Left and student movement in Germany. Mandel passed on Trotskyist suggestions to Dutschke about “organizational questions,” promoting entry of a vanguard into the social-democratic union and trying “to persuade Dutschke to transform the Marxist wing of the [German] SDS into a revolutionary socialist youth organization.” Mandel was in Paris when May ’68 broke out. He gave an inflammatory speech on May 9 at the Sorbonne alongside Daniel Cohn-Bendit, and on May 10 he participated in the night of the barricades.
In his later analysis of the events in his article “The Lessons of May 1968,” Mandel argued that May ’68 proved that “gradual, institutionalized establishment of workers’ control or other anti-capitalist structural change was an illusion,” but as a Leninist-Trotskyist he of course faulted the movement for its lack of a vanguard in the factories. His speech on May 9 was noticed by the state, and as a result he was treated as persona non grata in France until 1981. America and Australia also “closed their borders to him” in 1970, with Germany soon following. In fact, a US Senate Subcommittee described Mandel as “the major theoretician of terrorism for the Fourth International,” a label largely based on Mandel’s reserved acceptance of the armed struggle position. Not until around 1980 did many of these countries loosen their travel restrictions. Mandel completed his ideas about late capitalism around 1970 for a lecture cycle, “Theory of Late Capitalism: Laws of Motion and Stages of the Capitalist Mode of Production.” Despite his success in other spheres, Mandel faced numerous political challenges within the university system. He struggled to get Late Capitalism accepted as his PhD thesis in 1972, and there was a great deal of resistance when he was granted a professorship at the Free University of Brussels in 1971, where he was made a full professor in 1986. During the time he was working on Late Capitalism, Mandel emphasized the inability of Keynesian policies to overcome the structural limits on the realization of surplus value in the capitalist economy. Stutje claims that Mandel resisted the reductive, monocausal approach of most Marxist scholars, who tended to identify one factor as determining history. Stutje argues that Mandel recognized that nearly half a dozen factors combined in complex ways to determine economic growth.
Stutje posits that this is one of the reasons that Late Capitalism ultimately feels like a fragmented work, each chapter functioning well on its own without any final synthesis of ideas in the book. In the late 1960s and early 1970s, Mandel defended the ideas of Trotsky in the New Left Review, and he published many articles in the journal and books through New Left Books. Through his friendship with Perry Anderson, Mandel was invited to write the introductions to a newly translated edition of all three volumes of Marx’s Capital, which Penguin Books eventually published. In his later years, Mandel wrote on the history of World War II and attacked the Eurocommunist revision of Marx’s theory of the state. He made plans for a “sequel to Late Capitalism that was to be a synthesis of capitalism in its period of downturn.” The book was to be titled A General Theory of Waged Work, the Workers’ Movement and Socialism, but he never completed it. Stutje devotes the final chapters to showing Mandel’s inability to revise his Trotskyist assumptions in the face of the decay of the Fourth International and the dissolution of the Soviet Union. Mandel saw in the latter a potential for revolution, but he was severely disappointed and depressed when capitalism retook the formerly socialist states. Mandel died in 1995.