The increasing availability of low-cost, open source physical computing technology enables artists and creators to elegantly utilize previously unavailable or overly complex forms of data, while also casting the artist or creator as technologist and engineer. Physical computing relies on embedded microcontroller systems to bring sensors and other physical data into interactive digital systems, expanding human-computer interfaces vastly beyond their original conventions. New media and interactive telecommunications curricula are being introduced at many levels of education and academia; microcontrollers like the Arduino and single board computers like the Raspberry Pi have become familiar household names as the Maker Movement spreads through technology, art, and culture. This and related hardware continues to grow smaller, more affordable, more powerful, and better documented.
In addition to hardware, the creator’s toolkit has expanded to include programming languages and software environments designed specifically to meet the needs of new media artists. Visual programming interfaces like Max/MSP/Jitter and Pure Data are approachable and well documented, while SuperCollider, Processing, openFrameworks, and Arduino use object-oriented programming to open immense possibilities for sound, video, image, and interface creation. Much of this software is open source, and many user-generated libraries have been produced to suit specific applications. With the expansion of internet and socially driven technology, physical computing and open source software offer new ways to invite interaction and greater social participation, while also creating more possibilities for complex and meaningful interactive and generative art.
Interactive art enables viewer participation by providing some form of input that determines the outcome or evolution of the piece, engaging the audience in more than the role of passive spectator. This interaction is often mediated by sensors, which communicate data based on participant action to enact changes on the art object or environment in ways determined by the artist. Each audience is thus enabled to shape the outcome of the work, and so to determine their experience of it. Interactive art is distinguishable from generative art in that interactive art is user determined on some level, whereas generative art does not provide this degree of viewer agency.
Work in the field of interactive art is related, though not intrinsically tied, to coinciding work in the field of generative art. Generative art is a separate practice in which art is created by a system autonomous of human interaction, one that determines artistic outcomes on its own through the use of various mathematical algorithms. Sensor or other real-time data is often utilized within these systems to shape their outcomes. Generative systems can also be manipulated in real time with interactive programming languages such as Max/MSP/Jitter, Isadora, and openFrameworks, giving rise to the practice of live coding. Live coding stands somewhat in opposition to conceptions of generative art, as it introduces an element of human determinism to an otherwise autonomous system.
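The autonomy of such a system can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than a description of any particular work: a simple chaotic algorithm (the logistic map) produces a stream of values with no human input, and those values are mapped onto musical pitches.

```python
# A minimal sketch of a generative system: a chaotic algorithm drives
# an outcome (here, a pitch sequence) without human intervention.
# The algorithm, scale, and all parameter values are illustrative
# assumptions, not drawn from any work discussed in the text.

def logistic_map(x, r=3.9):
    """One step of the logistic map, a simple chaotic algorithm."""
    return r * x * (1 - x)

def generate_pitches(seed=0.5, steps=8,
                     scale=(60, 62, 64, 65, 67, 69, 71, 72)):
    """Map successive chaotic values onto a C major scale (MIDI note numbers)."""
    pitches = []
    x = seed
    for _ in range(steps):
        x = logistic_map(x)
        pitches.append(scale[int(x * len(scale))])
    return pitches

print(generate_pitches())
```

Because the system is deterministic, the same seed always yields the same sequence; swapping the seed, or feeding in sensor data instead, is precisely the point at which such a system shades from generative into interactive.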
The question of audience participation, and of what, precisely, separates the art object or experience from other everyday objects, is raised anew when technology enters the function and manifestation of creative intent. Duchamp’s assertion that the audience, rather than the artist or creator, holds the power to determine whether the presented object or experience is art becomes further muddled with the introduction of technology. Dadaism inherently dissected art as an institution while creating works that were, in their time, anti-art (the nebulous definition of “art” has since expanded to embrace such works, and so Duchamp weeps). As in the Happenings of Allan Kaprow and others, the presence of an audience is intrinsic, and even necessary, to the execution of the work.
In works that use physical computing technology to create interactive experiences, Duchamp’s notion of audience participation is taken further, as the outcome of the work is literally determined by data introduced by the presence or participation of the viewer(s).
The new media art object can not only be viewed but also manipulated by the viewer, bridging the gap between artist, art object, and audience in novel ways. In the wake of open source software and the maker movement, however, the widespread use of technology to produce generative or interactive works in increasingly prescribed ways may simply be proliferating technology for technology’s sake, rather than challenging our notions of how technology should or can be used. A number of early attempts at bridging the divide between technology and the art world proved ineffective, though remarkable.
The technical planning and October 1966 execution of Billy Klüver and Robert Rauschenberg’s ‘9 Evenings’ was a collaboration between artist and engineer in the most formal sense. Communication between the Bell Labs engineers and the artists was strained at best and broken at worst. The engineers, though intrigued by the creative goals of the artists, saw no practicality or innovative possibility in their execution, while the artists, lacking more than a superficial understanding of how things actually worked, were often confounded and frustrated by the limitations of the technology. Much of the technology utilized was co-opted from telecommunications and was adapted in form, but not in function, to suit the needs of the artists. Beyond that, the relationship between art piece and audience was left mostly untouched; interaction was minimal, in large part because it was technically impossible. The critical reception of the event was relentlessly harsh, ranging from confusion to general dismissal; the artists stood by their works with unshaken resolve, holding that the negative reception was the fault of the audience and not of the presentation of the work itself.
The Pepsi Pavilion at the 1970 World’s Fair in Osaka, Japan was problematic for a number of reasons. Communications between the artists and their benefactor were terse and ineffective, and while the collaboration between artists and engineers, and the implementation of the technology, served the artists’ aesthetic goals extremely well, Pepsi was less than pleased with the outcome. The main downfall of E.A.T. (Experiments in Art and Technology) was to assume Pepsi’s financial support would come without the burden of corporate interest. Despite this, the artists involved were successful in shaping the functions of the technology to suit their intent. Pepsi’s takeover of the exhibit shortly after the beginning of the expo stands as an early lesson that corporate interests and the goals of the art world are, all too frequently, at odds. This issue of corporate interest and sponsorship must be constantly reexamined; it has a deep effect not only on creative outcomes but also on the ways technologies are normalized for wider use, in the pursuit of corporate interest, personal use, and creative intention. E.A.T.’s intent to create an immersive interactive environment with the aid of analog technology was ambitious; it is an unfortunate fact that such a feat was impossible without some form of sponsorship.
Early iterations of interactive and generative art using computer and microcontroller technology to capture sensor data date back to the 1960s. One notable example, considered to be one of the first “intelligent environments”, is ‘GlowFlow’, produced by Dan Sandin, Jerry Erdman, and Richard Venezky and exhibited at the Memorial Union Gallery of the University of Wisconsin in April 1969:
“The installation consisted of a dark room in which glowing lines of light defined an illusory space. The display was accomplished by pumping phosphorescent particles through transparent tubes attached to the gallery walls. These tubes passed through opaque columns concealing lights which excited the phosphors. A pressure sensitive pad in front of each of the six columns enabled the computer to respond to footsteps by lighting different tubes or changing the sounds generated by a Moog synthesizer or the origin of these sounds. […] Delays were introduced between the detection of a participant and the computer’s response so that the contemplative mood of the environment would not be destroyed by frantic attempts to elicit more responses.” (Krueger, 1977)
This idea of Intelligent Environments, or environments that respond to viewer action or presence through technological means, has become increasingly popular as technology has become more readily available. Much of the contemporary work in Installation Art embodies this approach.
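The core logic of such an environment can be sketched in a few lines. The sketch below mirrors GlowFlow’s sense-wait-respond pattern, including its deliberate delay; the pad count, threshold, and delay values are illustrative assumptions, not documented parameters of the original installation.

```python
import time

# Sketch of an intelligent-environment response loop in the spirit of
# GlowFlow: pressure pads trigger changes in the space, with a
# deliberate delay so the response does not invite frantic
# interaction. All numeric values here are illustrative assumptions.

PRESSURE_THRESHOLD = 0.5   # normalized pad reading that counts as a footstep
RESPONSE_DELAY = 2.0       # seconds between detection and response

def detect_footsteps(pad_readings, threshold=PRESSURE_THRESHOLD):
    """Return the indices of pads whose reading exceeds the threshold."""
    return [i for i, r in enumerate(pad_readings) if r > threshold]

def respond(pad_index, num_tubes=6):
    """Choose which light tube to activate for a given pad (placeholder)."""
    return f"light tube {pad_index % num_tubes}"

def environment_step(pad_readings, delay=RESPONSE_DELAY):
    """One pass of the sense-wait-respond loop."""
    triggered = detect_footsteps(pad_readings)
    if triggered:
        time.sleep(delay)  # the contemplative pause Krueger describes
    return [respond(i) for i in triggered]
```

The delay is not an implementation detail but an aesthetic decision encoded in the system, which is precisely what distinguishes a responsive environment from a mere control loop.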
The first Ars Electronica Festival in 1979 marked the beginning of a long-standing consideration and utilization of physical computing technology by artists and designers.
Notable works exhibited, within the scope of this research, were those of Bruno Spoerri and Alexander Vitkine. Bruno Spoerri’s demonstration of the Lyricon, an electronic wind instrument and wind controller produced by a Massachusetts company called Computone Inc. from 1974 to 1980, won first prize. Information on the construction and technical aspects of the Lyricon is sparse: it was a wind controller with embedded digital and analog electronics that achieved a form of pitch-following additive synthesis, and it has since been produced in various forms with MIDI implementation. Alexander Vitkine presented the Sonoscope, a hybrid analog and digital device made to generate visual representations of sound. Its function is described thus:
“The drawing medium is an electron beam that can be deflected by electric voltages to the right and left as well as up and down. These voltages are controlled by the acoustic signal to be visualized. The electrical signals generated by the sounds are passed through filters and thereby divided into different frequencies. They are directed to one or more channels, each of which corresponds to an octave of the scale. The figuration that finally appears on the screen consists of eight differently colored elements, each of which is associated with one of the aforementioned channels and therefore corresponds to a certain pitch, or, more specifically, the tonal distance of one octave. Shape, size, and/or intensity are controlled separately for each octave by the volume, so that the overall picture is also a description of pitch, tone, and volume…” (Ars Electronica Festival Program, 1979)
Again, in-depth technical details of the construction of this device are few and far between. The practice of generating images using computers was not uncommon by 1979; the more notable aspect of this endeavor is the use of a live sound input in tandem with the device.
Much of the intersection of physical computing, art, and design that followed was enabled by the greater availability and lower cost of increasingly powerful microcontroller chips, particularly those produced by Texas Instruments from the early 1970s onward. The TMS1802NC was the first such chip produced by Texas Instruments, and was widely implemented in industrial and domestic automation and control. Later chips included Intel’s 8048, introduced in 1976, and the 8051, introduced in 1980, which remains popular to this day. Atmel Corporation, founded in 1984, began producing microcontroller chips that were derivatives of the 8051 architecture, with notably lower power consumption. The introduction in 2005 of the first open source Arduino microcontroller, built on Atmel’s AVR technology (introduced by Atmel in 1996), marked the beginning of a huge movement in DIY and maker culture, which has undoubtedly further blurred the relationship between art and technology.
Advances in technology have contributed greatly to the possibilities for intersections and cohesions of art and technology. Works in data sonification and visualization persist, while network communications have allowed artists to take advantage of data on a global scale. In addition, conductive materials have become more attainable through advances in manufacturing, and so wearable electronic interfaces continue to be reimagined. Notable in the field of wearables is Laetitia Sonami’s ‘The Lady’s Glove’, which utilizes a network of sensors to trigger and manipulate sound samples. Also of note is Afroditi Psarra’s ‘Divergence’ project, a wearable interface that detects and sonifies electromagnetic fields. Much of the work in wearables is localized to the experience and sensory input of a single user. Works in data sonification include those of Mileece Petre, whose exhibition at the Los Angeles MoMA entitled ‘Bio-Electricity’ used an Arduino to collect live biofeedback data from sensors connected to plants in order to generate sound. In the field of video generation and manipulation, Camille Utterback’s works ‘Text Rain’ and ‘Liquid Time’ both utilize camera data to track the movements and positions of users: ‘Text Rain’ generates text that falls vertically until intercepted by the shadow cast by the body of the user, while ‘Liquid Time’ employs an overhead camera to track the positions of viewers in a gallery space, using that data to manipulate the time sequence of various sections of a video.
The advent of accessible, well documented microcontroller and sensor technology has allowed artists to enter the world of engineering in novel ways, circumventing the absolute necessity of collaboration with engineers. Some may feel that this shift is a step backward, and that technology cannot be utilized to its full potential without the insight of those who work solely in engineering. Yet such unimpeded interaction between artist and technology allows for unexpected subversions and appropriations of technology otherwise designated for very specific purposes. Creative experimentation, and even malfunction of the technologies utilized, pushes the limits of our understanding and use of technology, and serves to further evolve and reinvent technologies toward entirely new and unexpected functions. This is particularly clear in the case of telecommunications technology, which has always been intrinsically tied to work with technology in the arts. Music technology as we know it today would not exist without the early appropriation of radio and radio test equipment, along with other forms of military communications technology that became accessible through surplus. These mostly analog technologies saturated the field of interactive and generative art.
Physical computing technology has been similarly subverted and redeveloped to suit artistic intent. The production and availability of a wide variety of sensors allow artists and audiences to interact with digital interfaces in more inventive and compelling ways than with a simple mouse and keyboard, enabling the creation of intelligent and highly complex reactive environments and interfaces with little to no engineering knowledge. Whether the outcomes of this ease of use are aesthetically interesting is debatable, as reliance on documentation and limited knowledge of underlying function may limit those implementing these technologies to already documented uses (therein lies the danger of ‘Maker Culture’ and the DIY movement).
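The ease of use described above rests on how little code the basic physical-computing pattern requires: read a raw sensor value, rescale it, and hand it to the artwork as a parameter. The sketch below simulates that pattern in the style of the Arduino environment’s range-mapping idiom; the 10-bit sensor range and the 0-255 brightness output are assumptions typical of hobbyist hardware, not taken from any specific work.

```python
# Sketch of the basic physical-computing pattern: rescale a raw sensor
# reading into a parameter an artwork can use. The 10-bit input range
# (0-1023, typical of hobbyist analog-to-digital converters) and the
# 0-255 brightness output are illustrative assumptions.

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly rescale value from one range to another."""
    span_in = in_max - in_min
    span_out = out_max - out_min
    return out_min + (value - in_min) * span_out / span_in

def reading_to_brightness(raw):
    """Map a 10-bit analog reading (0-1023) to an LED brightness (0-255)."""
    return int(scale(raw, 0, 1023, 0, 255))

print(reading_to_brightness(512))  # a mid-range reading yields a mid-range brightness
```

That this mapping is a one-liner is exactly the double-edged quality at issue: the barrier to entry all but disappears, while the documented idiom quietly becomes the default aesthetic.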