
The Digital Revolution

5/27/2017

In the beginning, there was the sign. The sign was spoken and sung. Then it was written, first as picture, then as word. Eventually the sign was printed. The ingenious coding system that was writing allowed ideas and feelings, descriptions and observations to be captured and preserved. The technology of printing liberated these written records from isolated libraries, allowing them to be communicated to thousands, then millions. They are the glue that binds our culture together.
As the scientific revolution took hold in the nineteenth century, we discovered methods to capture images and sounds technologically. Photography, then records and film, reproduced reality without the intervention of words. At least we thought they did. We accepted their representations as if they were real. In fact, they were simply a different set of codes. While far more verisimilitudinous than words, the images and sounds of cinema are still code systems—distillations of reality, sometimes distortions of it, always imaginations of it. That's why it is necessary to learn how to read a film. 
Movies and their offspring have defined the twentieth century for us.
But now we find ourselves on the verge of a new phase in the history of media. The languages which we invented to represent reality are merging. Film is no longer separate from print. Books can include movies; movies, books. We call this synthesis "multimedia," or "new media." The technology offers tantalizing promise: instant and universal access to the world's knowledge and art, captured or produced with a versatile set of media tools. But it also brings with it some knotty challenges, both technical and ethical.
The Information Age is quite real. The microcomputer revolution of the 1980s has increased our access to and control over information a hundredfold. As that revolution matures over the next generation, information power will increase by another four or five magnitudes. Within our lifetimes, most of the world's knowledge will be available to many of the world's people instantaneously and at negligible cost. There is no turning back, nor is there any longer any doubt. We now know that we can index everything ever printed and that we can build networks to access this print universe in seconds rather than years. (Similar access to audio and video remains beyond our current capabilities, and we haven't yet thought seriously about how to index images and sounds, but this will come, too.)
There is a memorable scene in Godard's Les Carabiniers (1963) in which the two soldiers return to their wives after the war with their booty. They have postcards of all the world's wonders which they proudly display, one at a time: the Eiffel Tower, the Great Pyramid, the Empire State Building, the Grand Canyon.... They think these images are deeds to the properties. We laugh at their naivete as the pile of postcards mounts. The scene is an emblem of the information revolution: we now have the deeds to all the world's intellectual riches. But what will we do with this unimaginable wealth? Perhaps we are just as naive as Godard's carabiniers: we have been given the keys to the virtual kingdom, but what about the reality that was once its subject? 
As we noted in Chapter 1, the virtual world increasingly crowds out the natural world, and the very power that we now have to manipulate these once precious images and sounds devalues them, destroying our faith in their honesty and our appreciation of their art.
When the first edition of this book appeared in 1977, it may have seemed strange that an introduction to film included so much about print and electronic media. At the time, movies and print seemed to have little in common: they were both communication systems, true, but the similarity ended there. Now, as the technologies and distribution systems used to reproduce and disseminate the two converge, we can see how they fit together.
This has happened almost by accident. No one set out in 1960 to find a technological common denominator between books and movies. No Godard fan, noticing his fascination with the clash of words and images, decided to find the link between the two. Nor did a Truffaut aficionado, after having seen Fahrenheit 451 or having read Truffaut's Hitchcock, dedicate years to discovering the technical common bond between the two media which that filmmaker/writer loved equally. The development of semiotics in the sixties and seventies was fortuitous, since it provided a single critical approach to both written language and filmed language, but semiotics was a way of thinking, not a science; there were no semiotics labs funded by governments to discover the basic building blocks of signification.
Rather, as you might expect, the technologies developed more or less independently and for mundane economic reasons. It was only after a decade of furious activity that it became clear that both types of expression, print and film, were going to share a common technology and that—therefore—it would be possible to do both at the same time in the same place. The common technology they now share is digitization. 
In the 1950s, computers were regarded simply as number-crunchers. (In 1952 IBM actually estimated that 18 computers would saturate the entire world market.) The machines of that day were programmed by feeding in decks of punched cards—a medium that dated from the 1890s; they required carefully engineered environments ("computer rooms"); and they were operated by specially trained engineers, who—like priests of old—were exclusively ordained to enter the sanctum sanctorum and approach the electronic oracle. 
In the 1960s, it became clear that the CRT screen could provide a more efficient link between the machines and the people who operated them than the punched cards or the paper and magnetic tapes that had become common by that time. Indeed, the engineer at IBM who first thought to connect a television cathode ray tube to a computer may be considered the godfather of multimedia, for once that visual device became the basic input/output channel, the development of a visual metaphor for the logical control process became irresistible. This marriage of technologies was not preordained, and if punched cards and band printers had remained the input/output devices for digital computers, multimedia—to say nothing of the microcomputer appliance itself—might have remained a dream.
In the 1970s, when the development of word processors as basic business tools suggested that computers could be operated by ordinary laypeople, interest increased in a visual control metaphor—or "graphical user interface," as the jargon later tagged it. At the same time, filmmakers and audio technicians became intrigued with the exciting possibilities of applying this new tool to the manipulation of images and sounds. Filmmakers like James and John Whitney used mainframes to produce abstract images for their films as early as 1961. Musicians and audio artists became infatuated with the new Moog synthesizer in the late 1960s. The first digital audiotape recorder was offered for sale by the Lexicon company in 1971. By the late 1970s CBS had developed a machine for digital editing of videotape. The price? $1 million. (It is not known if any were sold. Today you can do a better job with a system that costs less than $3,000.) The elements of multimedia were evolving.
In December 1968 Douglas Engelbart, an employee of the Stanford Research Institute, demonstrated an effective graphical user interface, fulfilling a dream first outlined by physicist Vannevar Bush in his seminal 1945 essay "As We May Think." In the early 1970s researchers at Xerox's Palo Alto Research Center and elsewhere combined the graphics on the now ubiquitous CRT with a separate physical pointing device that they called a mouse. What may have appeared to Engelbart to be the end of a line of technological development revealed itself instead as the beginning of a fertile and fascinating field of inquiry: the invention of a coherent visual and physical metaphor for the complex and subtle interaction between humans and their first true intellectual tools. Interface design rapidly became a subject of intense interest. It isn't often that a new basic and universal language system is invented.* The twentieth century has seen two: first film, now the graphical interface. Both being new systems of communication, it was only a matter of time before the languages of film and computers merged to give birth to multimedia.
Apple's Macintosh computer, introduced with great fanfare in January 1984, marked the long-anticipated birth of multimedia. As the first microcomputer to commercialize successfully the graphical interface developed at Xerox PARC years earlier, the machine and its software provided a platform sophisticated enough to support the development of new media during the next ten years.
The company had been founded in 1977 by two young Californians, Steven Jobs and Stephen Wozniak. While Wozniak was regarded as the "techie," Jobs, the "business head," turned out to have the greater impact on the history of technology, for it was he who championed the Macintosh vision of the computer as an appliance, like a toaster, with an interface simple enough for anyone without technical training to operate.
Before the introduction of the Mac, Apple had manufactured machines, like the rest of the nascent microcomputer industry, which required a significant amount of technical expertise to operate. Their success until that time had been due to two factors: the feisty and romantic image they had projected, and the lucky accident that two other young men, Dan Bricklin and Bob Frankston, based in Cambridge, Massachusetts, had written a program called "VisiCalc," which ran only on the Apple II computer. VisiCalc was introduced to the market in 1979, shortly after the machine debuted. Thousands of young MBAs, heady with the financial dreams of the 1980s, rushed out to buy Apples to run this new "spreadsheet" program, a business planning tool.
The first Macintoshes shipped in 1984 with a painting program as well as word-processing software. Immediately, users could draw on their screens as well as type. Perhaps just as important, the graphic power of the machine was applied to texts as well as images. Writers could actually choose their own fonts and type styles! This was a major advance over the crude representations of the dot-matrix character-based screens of that time. The heretofore arcane concerns of publishers and printers—fonts, leading, point sizes, kerning—quickly became common knowledge for a new generation of assistants and middle managers. This social and esthetic sea-change had been foreshadowed in the 1970s when ubiquitous cheap photocopiers allowed almost anyone to be a publisher. Now the Macintosh let anyone design layouts and set type, too.
The "consumer" celebrated in the fifties and sixties was yielding to the "user" of the eighties and nineties. The consumer had been a passive and dutiful partner for the great industrial producers of the first half of the twentieth century; the user was to become an active, independent, and demanding client for the service providers of the next century. Little did we know, as we marched in the streets in the sixties chanting "Power to the People!," that the power would indeed be granted—but in the arts and communications, rather than in politics and economics. 
This relationship between the counterculture of the sixties and the microcomputer culture of the eighties is curious but undeniable. Apple understood it early on, and profited by that understanding. The famous Super Bowl commercial that introduced the Macintosh as "the computer for the rest of us" in January 1984 traded heavily on the residue of countercultural yearnings.
Separately from the cultural mystique that it acquired, the microcomputer was also "revolutionary" in the purest sense of the term, since its historical progress is measured geometrically rather than arithmetically. "Moore's Law" suggests that chip density (and by extension computing power) per dollar doubles every 18 months. Gordon Moore, one of the founders of Intel, the dominant chip manufacturer, offered this rule of thumb early on. The history of the microcomputer over the past twenty years bears uncanny witness to its truth. The machine I bought in March 1994 ran 200 times faster than the machine I bought in April 1981, had 200 times the storage capacity, and cost about the same. Those figures are almost exactly what Moore's Law would have predicted. Adjust for inflation, and performance doubles yet again.
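A back-of-the-envelope sketch makes the geometric character of this growth concrete. The figures below simply apply the 18-month doubling rule; they are illustrative round numbers, not measurements from the book:

```python
# Moore's rule of thumb: computing power per dollar doubles roughly every
# 18 months. These numbers are illustrative only.

def moore_growth(months, doubling_period=18):
    """Predicted growth factor after `months` months of 18-month doublings."""
    return 2 ** (months / doubling_period)

for years in (3, 6, 13):
    factor = moore_growth(years * 12)
    print(f"{years:>2} years -> roughly {factor:,.0f}x")
```

Over a dozen years the rule compounds to a few-hundredfold gain, the same order of magnitude as the 1981-to-1994 comparison above.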
This isn't just computer jock talk. Numerically, the information revolution of the 1980s accomplished in fewer than ten years what took the transportation revolution 100 years to achieve before it ended in the 1960s. Put another way, if transportation power had developed at the same speed as information power in the 1980s, the five-hour flight from New York to Los Angeles would now take about a minute and a half. These numbers are so profound that we can only surmise what the cultural and social effects will be. Bob Dylan's phrase from the sixties comes to mind:
... something is happening here
But you don't know what it is
Do you, Mr. Jones?* 
While the burgeoning microcomputer industry led the way in the office, the consumer electronics industry took advantage of the microchip revolution in the home.
At the end of the 1970s, people saw movies in theaters, listened to music on records, watched one of the four national television networks (actually getting up out of their chairs to change channels on occasion), used telephones with wires tethering them to the wall, and, if they were so inclined, corresponded with each other using pens, pencils, typewriters, paper, and the U.S. Postal Service.
By the early 1990s, these same folks saw movies mainly at home on videotape, listened to digital music on Compact Discs (more often walking in the street than sitting at home), had 40 or more cable channels from which to choose (and channel-surfed without leaving their chairs), made telephone calls in their cars or walking around, and, if they were so inclined, corresponded with each other via fax or electronic mail.
A few years later, they could also, if they so chose, buy a camcorder that would let them shoot videotape of near-professional quality. They could install a home theatre with a screen almost as large as the ones at the local sixplex (and with a sound system that was markedly better). They could watch videodiscs, skipping, browsing, freezing, and skimming as they might with a book; install their very own satellite dish; or buy a computer for the kids to play with that had the power of a 1980s IBM mainframe.
Increasingly they chose this last alternative, often for the sake of the children. By 1990, computer literacy was a prerequisite for admission to many colleges and universities. By 1994, nearly 40 percent of American homes had microcomputers and the stage was set for multimedia to weave together most of the technological strands we have just enumerated.
​"You say you want a revolution...." Digitization and computerization completed the profound shift in our cultural architecture that had begun in Edison's labs a century earlier. As the Information Age became a reality and knowledge joined labor and capital in the social equation, ideology couldn't keep up. It is more than coincidental that the rise of the microchip accompanied the end of the Cold War, a conjunction that Mikhaii Gorbachev himself once pointed out. 
Despite the exponential speed of the digital revolution in the eighties, it took more than twelve years after the introduction of the CD-ROM in 1985 before multimedia became a marketable product. The reason? Digitized images and sounds, not to mention movies, made extraordinary demands on processor speed, storage capacity, and communication bandwidth. The digital text for this book, fully formatted, amounts to about 2.5 megabytes. The black-and-white images and diagrams that appear in the book take up an additional 90 megabytes on the DVD-ROM version (although they appear on the disc at greatly reduced resolution; the book versions occupy 750 megabytes). The additional illustrations, color, animation, programming, texts, and movies fill up most of the remaining 4,300 megabytes. In other words, the fully formatted text of How To Read a Film occupies less than one-tenth of 1 percent of the disc space required for the multimedia version, while the images and sounds which merely illustrate it consume more than 1,000 times as much real estate. 
Some other numbers to think about:
The standard computer screen of the mid-nineties, when multimedia came of age, measured 640 by 480 pixels. If each pixel in a full-screen image was either black or white, 38,400 bytes would be necessary to describe that image.* However, if you have a standard VGA color screen, with a palette of 16 colors, multiply that number by 4; if you have a basic Internet machine with a palette of 256 colors, multiply it by 8; and if you want color approaching the quality of film or television, multiply it by 24. All of a sudden, a single screen occupies nearly a megabyte of storage. You see the disparity: you can store an entire book—or a single decent color image. A picture may be worth a thousand words, but should it cost 150,000? This is not a good deal.
Now, make that still color image move. Don't even think about 24 frames per second. Try 12—it will almost work. Now you need more than 11 megabytes for each second of jerky movie that you show. A CD-ROM, with its gargantuan storage capacity of 650 megabytes, could hold a minute of film (well, not quite). This is also not a good deal. The old analog world never looked so good. (Maybe this digital thing is a bad idea.)
Finally, assuming that you can find some way to lick the storage problem, remember that you will have to transfer 11 megabytes per second from disc to CPU to screen in order to show your 12-frames-per-second "movie," while the standard transfer rate of CD-ROMs in the late 1980s was 150,000 bytes per second.
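These back-of-the-envelope figures are easy to verify. The sketch below simply restates the assumptions used above (a 640 by 480 screen, 24-bit color, 12 frames per second, a 650-megabyte CD-ROM, a 150,000-byte-per-second drive); it is a rough illustration, not a precise specification of any real format:

```python
# The arithmetic behind the paragraphs above, as a quick sanity check.
WIDTH, HEIGHT = 640, 480                 # standard screen of the era
pixels = WIDTH * HEIGHT                  # 307,200 pixels

bw_bytes    = pixels // 8                # 1 bit per pixel   ->  38,400 bytes
vga16_bytes = bw_bytes * 4               # 4 bits per pixel  -> 153,600 bytes
color_bytes = bw_bytes * 24              # 24 bits per pixel -> 921,600 bytes (~0.9 MB)

fps = 12                                 # the "almost works" frame rate
video_rate = color_bytes * fps           # ~11 megabytes for every second of movie

CD_ROM = 650 * 1_000_000                 # nominal CD-ROM capacity in bytes
print(CD_ROM / video_rate)               # ~58.8 -> just under a minute of film

CD_TRANSFER = 150_000                    # late-1980s transfer rate, bytes per second
print(video_rate / CD_TRANSFER)          # ~74 -> the drive is roughly 74 times too slow
```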
You now have some idea of the technical challenges that confronted the digital video pioneers! The solutions they worked out for this seemingly insurmountable problem are ingenious and instructive. For the most part, they were not initially hardware-based. Building chips that could process this amount of information quickly enough and at a reasonable price would have solved only half the problem, since the storage demands were just as astronomical as the demands on the processors. Separate Digital Signal Processor chips (DSPs) are useful—even necessary—but the first stage of multimedia was made possible by software that uses purely mathematical techniques that are as beautiful as they are effective.
Although they are too complex to detail here, suffice it to say that these algorithms compress the amount of data required to store and display an image (or that succession of images known as a movie) by recording the difference between successive pixels or frames rather than the individual values of each pixel in each frame. For example, a still image with a large background of a single color would take much less room to store than the same image with a multicolored, variegated background. It is the number of changes that counts, not the number of pixels. Similarly, a movie that is slow moving with few cuts requires far less storage than a quickly changing scene with numerous cuts. Only the differences between frames are recorded, not the complete data for each frame. The compression of each still image is known as "spatial compression"; the compression of succeeding frames is called "temporal compression." Both sets of algorithms are necessary to produce economically viable digital video.
These compression techniques can easily reduce storage for a still image by a factor of 10 and storage for a moving image by a factor of 100. So we are back within the limits prescribed by the capacities of current hardware. The main standard for still image compression is known as JPEG (for the group that designed it, the Joint Photographic Experts Group), while the main standard for movies is called MPEG (for the Moving Picture Experts Group). The DVD specification is based on MPEG-2 and provides for full-screen, full-motion video. There are many other schemes in use as well. (Oh, yes. There is also a very simple way to reduce the amount of data necessary for a digital movie: reduce the size of the image. That's why most digital movie windows on early multimedia CD-ROMs looked like large postage stamps.)
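To make the principle concrete, here is a deliberately naive sketch of temporal compression by frame differencing. It illustrates the idea of recording only what changes between frames; it is not the actual MPEG algorithm, which adds motion compensation, block transforms, and entropy coding:

```python
# Toy frame-differencing: store the first frame in full, then for each later
# frame store only the pixels that changed since the previous one.

def diff_encode(frames):
    """frames: list of equal-length lists of pixel values."""
    encoded = [("key", frames[0])]                       # first frame stored whole
    for prev, curr in zip(frames, frames[1:]):
        changes = [(i, p) for i, (q, p) in enumerate(zip(prev, curr)) if p != q]
        encoded.append(("delta", changes))               # only the changed pixels
    return encoded

def diff_decode(encoded):
    frames = [list(encoded[0][1])]
    for _, changes in encoded[1:]:
        frame = list(frames[-1])                         # start from previous frame
        for i, p in changes:
            frame[i] = p
        frames.append(frame)
    return frames

# A slow-moving "movie": four frames of 16 pixels in which only one pixel changes per frame.
movie = [[0] * 16, [0] * 15 + [9], [0] * 14 + [9, 9], [0] * 13 + [9, 9, 9]]
enc = diff_encode(movie)
assert diff_decode(enc) == movie                         # lossless round trip
print(sum(len(c) for _, c in enc[1:]))                   # 3 stored changes vs. 48 raw pixels
```

For a slow-moving scene almost nothing changes from frame to frame, so the deltas stay tiny; a rapid montage defeats the scheme, exactly as described above.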
Although the Voyager Company had demonstrated the possibilities of multimedia as early as 1989 with their release of Robert Winter's CD Companion to Beethoven's Ninth, multimedia did not begin to become a market reality until June 1991, when Apple introduced their software technology for movies known as QuickTime. (Microsoft followed with Video for Windows the next year.) QuickTime was designed as an architecture to support all media types, time-based or not.
One of its aims was to provide a platform-independent technology so that moving images could be shown at the best quality the hardware they ran on could support. As successive versions of the software were issued, the architecture supported more features (text, interactivity), more codecs (compression algorithms), and adaptations for use on the Internet (streaming, variable transmission rates). With QuickTime, new media producers had their first effective tool for integrating audio and video in a text environment, but they were still constrained by the hardware. Their delivery media, the CD-ROM and the Internet, were both limited. CD-ROM was based on technology that was devised in the late 1970s, while Internet transmission was hampered by low modem speeds. DVD, the successor to the CD and designed to have sufficient capacity and speed for digital video, was not marketed until 1997, while high-speed cable modems and DSL Internet connections did not become widespread until the turn of the century.
Developed jointly by Philips and Sony, the laser-based CD was introduced as an audio medium in 1982. Within six years it dominated the recording business, one of the great success stories of twentieth-century consumer electronics marketing. The success of the CD in the audio market brought prices down rapidly, making this physical medium even more attractive for the computer industry which in 1985 adopted CD-ROM as the storage technology of the future.
Ironically, Sony, like Philips, had little success in the multimedia market. During the early 1990s the company brought out at least four versions of a portable CD-ROM player, but neither the Data Discman (in several incarnations), the Bookman, nor the MMCD player was accepted by the public. Success would come, but not until the next generation: DVD.
The audio CD succeeded so quickly because Sony and Philips controlled the technology: a single uniform standard was adopted by all manufacturers. Conversely, until the advent of DVD, CD multimedia development was slowed by a multiplicity of approaches. In addition to the Apple Macintosh and Microsoft MPC formats and Sony's efforts, the list of erstwhile contenders included Philips's CD-I (introduced in 1991), Commodore's CDTV, Tandy's VIS, IBM's Ultimedia, and the game machines of Sega, Nintendo, Sony, and 3DO. Except for CD-I, all were nonstarters. Like its imitators, CD-I discs played on an attachment to the television set (priced at about $600 at introduction) controlled by a remote joystick that lacked a keyboard. CD-I was hampered by two major limitations: the poor resolution of the television screen combined with the lack of a keyboard meant that very little could be done with text. CD-I turned out to be little more than a playback medium for still images. The base technology simply hadn't the muscle to support effective video. Although hundreds of companies rushed to market in the early 1990s with CD-ROM-based products (many of them quite ingenious), multimedia remained more a dream than a reality. By 1995 most of the early multimedia producers were out of business. The only successful CD-ROM genres were games and text-centric products like encyclopedias and reference works. Indeed, by 1994 more encyclopedias were sold on disc than in traditional book form. 
The original specification for the CD aimed for a product that could deliver more than an hour of digital audio, uncompressed. DVD was designed to have the capacity to deliver a standard two-hour feature film on a similar-sized disc. By using a laser with a shorter wavelength, engineers were able to fit almost seven times as many bits on the same disc, but as we have seen that is not nearly enough capacity for raw video. Compression technology was necessary for a viable product. Here's where it gets interesting.
Compression algorithms come in two flavors, "lossy" and "nonlossy." As their names imply, nonlossy compression faithfully reproduces every digital value captured from the original, while lossy compression does not: it approximates some values. Furthermore, the very nature of digitization itself implies a loss of values. No matter how high the sampling rate, theoretically values in between the steps are lost. There are still audiophiles who complain about the "coldness" of CD reproduction, preferring the old-fashioned analog vinyl disks, despite their fragility. 
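The point about lost values can be illustrated with a toy quantization example. Whatever the step size, a sample that falls between two steps is rounded to the nearest one and the difference is gone for good; the sketch below is merely illustrative:

```python
# Quantization: digitizing rounds every sample to the nearest available step,
# so values between the steps are lost no matter how finely you sample in time.

def quantize(value, step):
    return round(value / step) * step

original = 0.6180339887          # some real-valued sample
for step in (0.1, 0.01, 0.001):
    q = quantize(original, step)
    print(step, q, abs(original - q))   # the error shrinks but never reaches zero
```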
Because it depends on heavy, lossy compression, the DVD format established as a standard late in 1995 for the next generation of optical disc technology compounded these esthetic problems. Most consumers marvel at the picture quality of DVD-Videos. Indeed, the resolution and color fidelity are both far superior to the VHS tape with which a DVD-Video disc is usually compared. The DVD-Video launch was one of the most successful introductions of a consumer electronics product in history.
But, just as CD sound is cold and lifeless for audiophiles, so the digital image is too clean and airless for some videophiles; they continue to prefer analog Laserdisc. As Robert Browning put it, "What's come to perfection perishes."
The problem is ethical as well as esthetic: most of the frames in a DVD-Video simply aren't there; they haven't been recorded. And most of the pixels in the frames that do exist also aren't present. You don't get 100:1 compression for nothing. Filmmakers may very well prefer DVD-Video to VHS for its clarity while at the same time reserving the right to criticize the medium for its supercilious attitude toward fidelity. But then, they are all copies, aren't they?
The fully digital image also presents challenges to distributors. Because copies are exact and there is no generation loss, DVDs present serious piracy problems. Once movies are digitized they are just as easy to duplicate and transmit as digital text. It's only a question of bandwidth.
Yet the call of the digital siren was irresistible to the hardware companies. The MPEG-2 algorithm was adopted by the floundering consumer satellite television industry for its digital second-generation product in the mid-1990s and proved successful. By 1998 the consumer electronics market was flooded with digital cameras, both still and video, and film-based photography was under siege. Sony had tried to market a digital still camera called Mavica as early as 1989. In 1992 Kodak had introduced the Photo CD format to deliver film-based photos digitally. Both had languished. Now, the time was right. From the moment DVD-Video was introduced in April 1997 analog was dead—at least in marketing terms.
James Monaco / HOW TO READ A FILM / The World of Movies, Media, and Multimedia
