A Noise Reduction Pass on the Concept of Media
For a New Typology of Mediation
0. Origins of a new typology
This chapter originates from a presentation I gave at Simon Fraser University on Eugene Thacker’s concept of biomedia in his essay “What is Biomedia?” The title of my presentation, “When Rhetoric is a Biomass”, indicated that the concept of “media,” among other concepts in Thacker’s essay, was in need of some specificity. The diversity of phenomenal and discursive terrain covered by this term has become too noisy, I argue, and so requires what in the sound studio is known as noise reduction, in order to more clearly hear the signal. In the presentation I noted that the following terms in Thacker’s essay are entirely undefined, and utilized throughout completely free of any anchoring to an explicit theoretical framework:
Indeed, at least two of these important concepts are used interchangeably, as in this pairing of sentences in which “data” and “information” are terms that apparently, more or less, mean the same thing:
“[T]he body is accounted for through data….”
“The accountability of the body through information….” 1
While this chapter is not primarily concerned with biomedia, it derives from some of the conceptual obfuscations in Thacker’s essay and prompts the framework below. In brief, I propose the following modalities as offering theoretic possibilities such as general clarity, synthesis of distinct concepts, discursive and historical specificity, technological accuracy, and pragmatic implementation:
- 1. Submedium
- the body and its correlate: air for terrestrial beings, water for aqueous beings
- 2. Medium
- “unibody” materiality, the continuous production of the same mediation (e.g. painting, sculpture)
- 3. Media Systems and Formats
- discontinuous and variable media, a complex system that affords propagation of multiple mediations, e.g. video systems that can play multiple videos, or the forms of speech and writing.
- 4. Platform
- a media system that can remediate multiple kinds of media systems. Examples would be Amazon (from toasters to DVDs to books to ebooks), or advanced “black boxes” for the home that combine cable, video on demand, internet radio, games, internet browsing, etc.
- 5. Meta-medium
- a form of mediation that has yet to be remediated or “absorbed as content” for another medium. In the past the codex was a meta-medium, and one can speculate that perhaps prior to the codex, the temple would qualify as the meta-medium of its time, uniting through feature extraction all the other symbolic, material and affective fields of a culture. Today, it is computation. The meta-medium can remediate aspects of all other media through forms of feature extraction or remediation.
A new typology is of course also an invitation to critique by others, and its robustness is only obtained through dialogic exchanges with those who may disagree and counter it. Noise reduction is not a process to be approached carelessly: it produces many artifacts and, through the technique of frequency-dependent gain reduction, actually causes some loss of signal. But this is also true of the process of conceptual abstraction in general (the attenuation of particular details). In order to facilitate such argument and contestation, and to apply my “NR Pass” with some care, each modality is explicated in further detail below. The typology is, in essence, along the lines of a nested hierarchy in systems theory.
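Since the metaphor of a “noise reduction pass” carries real technical content, it may help to see how small the core operation is. The sketch below, assuming NumPy, implements a crude spectral gate (one form of frequency-dependent gain reduction); the 12 dB reduction amount and the use of a separate noise recording as the profile are illustrative assumptions, not studio practice:

```python
import numpy as np

def spectral_gate(signal, noise_profile, reduction_db=12.0):
    """Frequency-dependent gain reduction: attenuate every frequency
    bin whose magnitude does not rise above the estimated noise floor."""
    spectrum = np.fft.rfft(signal)
    noise_floor = np.abs(np.fft.rfft(noise_profile))
    gain = np.where(np.abs(spectrum) > noise_floor,
                    1.0,                           # keep "signal" bins
                    10.0 ** (-reduction_db / 20))  # attenuate "noise" bins
    return np.fft.irfft(spectrum * gain, n=len(signal))

# A 440 Hz tone (the "signal") buried in broadband noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
noise = 0.3 * rng.standard_normal(t.size)
cleaned = spectral_gate(tone + noise, noise)
```

Note that any signal energy sitting below the noise floor is attenuated along with the noise: the artifacts and loss of signal mentioned above are built into the operation, just as the attenuation of particular details is built into conceptual abstraction.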
As mentioned earlier, key concepts in Thacker’s “What is Biomedia?” are undefined. While biomedia itself is ostensibly defined several times, its definition can only be inferred, rather than substantiated. Thacker’s central claim, that informatic technologies recontextualize our notions of life and the biological in general, is straightforward, noncontroversial, and even pop cultural. But given the lack of rigorous concepts employed in his essay, nothing follows from this assertion. The concept of submedium undertaken here attempts to produce an actual ethical and existential stake in the theorization of biomedia.
The apparent targets of Thacker’s critique are general notions in common circulation. Repeatedly we are reminded that he is trying to counter familiar notions in the discursive (if not the actual) atmosphere, which he either forthrightly states, or at other times accomplishes by putting quotation marks around words that will not be explicated:
- “digitization” & “technologization” are both in quotes2
- “body” & “technology” are both in quotes3
- ‘The key to the general logic of biotech is that it is not a “technology” in the colloquial sense of the term’4
- ‘In biomedia the biological body never stops being biological; it is precisely for that reason that the biological body is inextricably “technological”’5
- ‘Note that the process of encoding is not identical with dematerialization, as it is often taken to be with digital media.’6
- ‘They are not “bodies” in the sense of anatomical or even anthropomorphic bodies (a traditional biological standpoint)’7
- ‘This total passage across platforms is not, it should be reiterated, a pure investment in the supposed dematerialized domain of the digital’8
What is attacked throughout Thacker’s text are these “general supposed colloquial often taken to be traditional” notions of undefined concepts, often put in quotation marks. This might potentially be productive of theoretical refinements. However, to counter these general notions with simply another set of general notions – the aforementioned collection of undefined and unanchored key concepts – is hardly satisfactory. If one were to apply a hermeneutics of suspicion to Thacker’s text, one might impute a motive of dazzling the reader with copious technical information, throwing dozens of new terms in our eyes as rhetorical sand particles, in the hope that we will not notice the general absence of media-theoretic concepts. In some footnotes we are pointed toward where we might find these general notions, and towards the end of the essay some important names are named (Hayles, Kurzweil, Moravec), but the lack of citations from these sources, and his countering of generalities with his own generalities, makes this lack of conceptual anchoring, definition and focus problematic for anything like a media-theoretic definition of biomedia.
Thacker is not a scientist by training but rather a writer of fiction, philosophy and film criticism, so I will be brief with my technical criticism and illustrate my contention about his apparently scientific discourse with a diagram of the analog-to-digital and digital-to-analog conversion process:
Thacker devotes a dozen pages discovering for the first time the straightforward linear process (he calls it variously “loops” and “spirals”) of what is analogously understood as the familiar ADC/DAC conversion (analog-to-digital and digital-to-analog processing) of any digital (computational) media. He names this the process of Encoding, Recoding, and Decoding, which is actually helpful, as it connects biomedia to many other forms of computational media. However, because Thacker does not have a technical background, he imputes pseudo-philosophical significance to all stages of the process. First, he continuously misapplies the concept of “essence” to data, because he is missing a concept of “feature detection” or extraction.
To refer to the examples with which we started: the investment in bioinformatics is not purely digital or computational, but a dual investment in the ways in which the informatic essence of DNA affords new techniques for optimizing DNA through novel software…9
The establishing of such a passage is based on the assumption that some essence or essential data pervades biomolecular bodies such as DNA or proteins…10
One does not digitize or informaticize an “essence” of phenomena as “data”; rather, one extracts the most pertinent or relevant features for digitization or informaticization. Which features are relevant? Those that allow appropriate reconstitution. The figure below, from my original PowerPoint presentation, clarifies the density of Thacker’s technical descriptions of biomedia by drawing an analogy with audio media:
While one may never know it from the dense jumble of technical processes described in Thacker’s article, what he “really means” to be describing is the analog-to-digital and digital-to-analog conversion chain that is part of any computational process (because computational processes are digital, and the digital depends on the analog in the same way that marks on a ruler depend on the ruler’s continuous physical length). In the presentation slide above, imagine a scenario that is equally “audio” and/or “bio.” We have a bird and its chirp. If we want simply to record the chirp digitally and play it back, or perhaps, while doing so, also clone the bird, we can describe this as Representation (or reproduction, copying): the chirp we hear back from the digital encoding sounds a lot like the chirp we heard in real life, the cloned bird is like the first bird, and so on. I leave it to audiophiles and “biophile” metaphysicians to debate which is of better “fidelity,” the original analog or the reconstituted analog, as it is not an important consideration here.

As anyone who has worked with media knows, one can also process media: change them, introduce alterations, sound effects or bio effects, and so on. In audio this is called the wet/dry mix ratio (the combination of original and effected sound), and it actually ports somewhat to Thacker’s notion of wet lab/dry lab (though for biomedia the digital is dry, whereas in audio the original, unprocessed signal is the dry media). A processed bird chirp would be a changed chirp (or a blue bird, if cloned, as suggested above). Finally, with digital media (and similar to signal-based analog systems) we can skip the input altogether and generate a raw waveform from data tables, algorithms, sine waves, etc. (equivalent to databases of genes, proteins, enzymes, etc.), synthesizing directly from the digital realm into analog space (or, in biomedia terms, producing a Frankenbird, as shown above).
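The Representation and Processing scenarios just described are short enough to sketch directly in code. Assuming NumPy, and taking the 8-bit depth, the tanh distortion “effect,” and the 50/50 wet/dry mix as arbitrary illustrative choices:

```python
import numpy as np

SR = 8000  # sample rate in Hz (an arbitrary choice for this sketch)

def encode(analog, bits=8):
    """ADC stage ("Encoding"): quantize a continuous signal
    into a finite set of discrete integer levels."""
    scale = 2 ** (bits - 1) - 1
    return np.round(analog * scale).astype(int)

def decode(digital, bits=8):
    """DAC stage ("Decoding"): map discrete levels back
    to the continuous analog range."""
    return digital / (2 ** (bits - 1) - 1)

def process(signal, effect, mix=0.5):
    """"Recoding": a wet/dry mix of the original (dry) signal
    and the effected (wet) signal."""
    return (1 - mix) * signal + mix * effect(signal)

# One second of a toy "chirp": a sine whose pitch rises from 300 Hz
t = np.linspace(0, 1, SR, endpoint=False)
chirp = np.sin(2 * np.pi * (300 + 200 * t) * t)

digital = encode(chirp)        # Encoding: analog in
dry = decode(digital)          # Representation: a faithful copy
wet = process(dry, np.tanh)    # Processing: a "changed chirp"
```

Synthesis, the third scenario, simply skips `encode` altogether and begins from data generated in the digital domain.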
The nearest audio synthesis technique equivalent to Frankenbird would probably be physical modeling synthesis (emulating real musical instruments through mathematical representation). The reader will note that even if done many times in succession, multiple linear processes do not make for all the loops and spirals of Thacker’s account11.
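Physical modeling synthesis can itself be sketched briefly. The classic Karplus–Strong algorithm (a standard plucked-string model, rendered here in NumPy with illustrative parameter choices) generates a recognizable instrument tone from nothing but a noise burst and a delay line; no analog input is ever recorded, which is what makes it an audio cousin of the Frankenbird:

```python
import numpy as np

def karplus_strong(freq, duration, sr=8000, decay=0.996):
    """Karplus-Strong plucked-string synthesis: a burst of noise
    circulates through a delay line whose length sets the pitch,
    while an averaging filter gradually damps it, as a real
    string's vibration dies away."""
    n = int(sr * duration)
    delay = int(sr / freq)                  # delay length sets the pitch
    rng = np.random.default_rng(1)
    buf = rng.uniform(-1.0, 1.0, delay)     # the initial "pluck"
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % delay]
        # average two adjacent samples: a crude low-pass "damping"
        buf[i % delay] = decay * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

note = karplus_strong(220.0, 1.0)  # one second of a synthetic A3
```

Even here the process is a single linear pass over the output buffer, not a loop or spiral in any philosophically loaded sense.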
Additionally, Thacker imputes various dialectical or rhetorical “tensions” to this process, when the only real tension lies in watching someone without a technical background attempt to make technical processes he does not understand seem philosophically relevant. Thus when Thacker attempts to claim some major or profound role for CRITIQUE in the process of feature extraction, he is not able to come up with a single example of what a critical perspective could possibly contribute to biomedia:
“The accountability of the body through information also means that the context of biomedia (its informatic framework) facilitates certain approaches, techniques, and research questions at the expense of other approaches and modes of critical inquiry.”12
Regrettably, lacking a concept of feature extraction – the recognition that the only features that need to be abstracted are those that serve purposes of reconstitution (or “rewetting,” as he might say: going from dry digital space to wet analog space and bodies) – he is only able to intimate a profound purpose for critique, one he cannot exemplify with even a single “for instance” of how a critic might improve current techniques, contexts or scenarios. True, he does mention the word “ethnic” in the previous sentence, but this is the biopolitical as an ambient decoration of rhetoric and not much else.
Thacker’s bioinformatic body is a body without an environment, and thus in need of theoretic complementarity. The most robust conceptualization of the environment as medium is to be found in James J. Gibson’s account in The Ecological Approach to Visual Perception13, which provides an environmental framework for the psychology of perception. His account is also phenomenological, since it develops a set of environmental invariants that are true for all perceiving species: medium (what the species perceives and moves through), substance (what can’t be moved through), and surface (the opaque interface between substance and medium). Animals can move through the mediums of either air or water. Gibson starts his discussion of medium by identifying four characteristics of what I am here calling the submedium:
- it is contrasted with substance, in that a detached body can move through it – it is “insubstantial”
- it is generally transparent – transmitting, absorbing or reflecting light; “a homogenous medium thus affords vision”
- it transmits waves of vibrations or pressure gradients, the “source of sound waves.” All matter does this, in fact; however, living beings cannot dwell inside solids (those that apparently do actually dwell in voids within solids, filled by either air or water)
- Air or water “allows rapid chemical diffusion” which affords the sense of smell.
Thus, what we usually think of as human senses of sight, sound, and smell are actually also specific properties of the insubstantial medium that we take for our environment.
If we understand the notion of medium, I suggest, we come to an entirely new way of thinking about perception and behavior. The medium in which animals move about (and in which objects can be moved about) is at the same time the medium for light, sound, and odor coming from sources in the environment.14
To the four characteristics above Gibson adds breathing: both air and water contain oxygen, necessary for respiration: “The principles of respiration are the same in the water as in the air; oxygen is absorbed and carbon dioxide emitted after the burning of fuel in the tissues.”15 Gibson then identifies a property of air and water that is also worth consideration for media in general: it should be homogenous and stable:
[B]oth the air and the water tend to be homogenous, although fresh water differs from salt water. From place to place, the composition of air changes very little and the composition of water changes very gradually, for the temporary gradients that arise are dissipated by winds and currents. There are no sharp transitions in a medium, no boundaries between one volume and another, that is to say, no surfaces. This homogeneity is crucial.16
This notion of a general homogeneity is also useful for conceptualizing other forms of media, for the blankness of a page or canvas, the black membrane of a video screen, the material continuity of a slab of stone or wood (and its discarded discontinuities or “imperfections”!), the smooth topology of a speaker cone and so on also partake of this quality of homogeneity that is supportive of mediations in general. While a tear in the page or crack in the video screen may still allow for mediation, the “resting state” as it were of media is to be an insubstantial and homogenous (and thus, non-interfering) background with regards to its foregrounded “content.” Artists may of course choose to interrupt this material insubstantiality, as in the practices of glitch mediation:
There’s more adventurous way to obtain glitches.
Try burning some soundscapes and or beats on a cd-r, tape the cd-r to the bottom of your shoe, walk around for a while.
Take the cd of your shoe and clean it good with water and soap, dry it with a towel. Make sure it’s scratched but not broken, also make sure the scratches are not too deep, if you can look straight through a cd-r, a laser can too!
Making sure that only the surface is scratched and it is clean, you’re good to go!
Now put it in a stereo which can record to cassette tape, i use a very cheap sony ghettoblaster. The cheaper cd player tend to play a scratched cd easier then the better ones.
Record the output of the skipping and glitching cd to cassette tape and then record the cassette back to computer.
You could now for example blend the recordings with the original and use filters and cut and paste techniques to “glitch” your songs!
Lots of fun and it can give you a really noisy and glitchy layer to use in your work. The cassette compression and bandwidth prevent the cd glitches to sound harsh, annoying or painful.
Have fun and think before you do!17
Of course, these practices presume and depend upon a homogenous medium to modify for purposes of aesthetic disruption.
Gibson finalizes his description of medium by adding a sixth characteristic, “an intrinsic polarity of up and down. Gravity pulls downward, not upward. Radiant light comes from above, not below.” This provides what he calls “an absolute axis of reference, the vertical axis.”18 For our purposes we may note that it is chiefly in visual media that this axis tends to be registered, since hearing is omnidirectional, and smell is ambient and frontal but also a less well-developed human sense.
For our purposes of general media theory, Gibson’s concept of medium, I argue, should be understood as a submedium. As the environmental correlate of the body, it has an ethical limit, which is the limit of our possible existence. While one can paint or write or computationally model anything, there are limits to what we may want to pump into the atmosphere or into our bodies (Gibson elsewhere provides an interesting triad of substances for taste and smell: nutritive, nonnutritive, and toxic19). The submedium is a pseudo-medium in this sense because it cannot – or at least, should not (the ethical dimension) – attain the equivalent status of other media, which is to mediate an infinity of virtualities; with respect to the submedium all is actual, and in fact there is no virtuality, because the signal modifies the channel. Its modifications are real modifications, which can result in dead, deformed or heavily experimented-upon human bodies, or a depleted atmosphere, a poisoned water supply and so on. It is equally true that intervention in the submedium can clean up toxic spills and provide more effective drug treatments. I only claim that the submedium, in order to attain the equivalent status of any other media articulated in this framework, would have to undergo such constant modifications and revisions that its integrity might be irreparably damaged in some manifestations. While it is true that human practices have regarded the environment and the body as media in many ways – e.g. by tossing anything into them – this is not sustainable and undermines every other possibility of mediation. Thus, my categorization of Thacker’s biomedia and Gibson’s medium as “submedium” inscribes this ethical and existential dimension and limitation.
The other two primary features of the environment noted by Gibson – substance and surface – belong to the concept of medium. The concept of submedium assists us in disentangling some competing notions. For example, it is natural to describe air as the medium of sound when teaching sound design, since audio knowledge requires some basic acoustics, but it is also the case that microphones, computers and speakers are sound media. What both types of medium have in common, of course, is that sound passes through both. In my media courses, I offer my “personal” definition of medium as “something through which,” because it is the only phrase that adequately relates to every concept of media or medium I have ever come across. A medium allows for mediation, which I have explored in depth elsewhere and will here summarize as the inscription or overlay of a virtual order in relation to actual things20. Leaving behind the ethically limited condition of the mediums we imperil through the impact of our virtual orders, what I am calling medium in this framework is any mediating entity that more or less “goes on and on saying the same thing” in its material dimension (obviously, not in its interpretive aspect). A painting on a wall, a graphic image on a t-shirt, a mask, any page of a book, a carpet pattern, a bust of Beethoven – all of these are “unibody” mediums, each continuously producing the same mediation in its material being. The codex is not a “medium” in this schema, but a media system in the contemporary situation (in prior epochs it would have been considered a meta-medium, the medium no other media had yet assimilated or remediated). The e-book is also a media system, its platform the Internet, and so on. However, any particular book or e-book is a medium equivalent to any painting or sculpture or necktie pattern. It has a fixity and unibody construction.
This is not to deny that copies may vary (an effect of the platform or media system on the medium), that readings or viewings may be different (an aspect of reception), or that as physical objects they are subject to entropy and decay (as anything else is). The notion of medium here accommodates some of the distinctions between “old and new” media, and aesthetic discourses related to either media specificity or traditional media, which is also an aspect of discourses of what I call media systems and formats. A painting is of a different order from a DVD player because one cannot hit a button and change the painting, or add and subtract a soundtrack via a volume knob. Also, the relative fixity and continuity of a medium contrasts with language, which is better explored as a media system, i.e. the capacity of versatile, discontinuous and varying mediation, which any medium lacks.
Masks have an effect in this regard that contrasts with language as media system. The stark fixity of masks in theatre, comics or ritual contrasts with the discontinuous, varying, recombinatorial flexibility of language and human meaning in general. In a sense, what I am distinguishing here as medium relative to media systems is also based on a spatial/temporal distinction. Speech flows and changes, while masks freeze and stare as a spatial array. As a medium a text is spatial; it is only temporized when entering the order of reading, which is temporal. A CD or a vinyl record would in themselves be mediums; however, they do not mediate without their attendant system, so they are formats aligned with media systems, since one cannot access their virtuality without the system (which is not true of neckties and cheap plastic Beethoven busts). Considering texts again, one has to know a language to access the virtual order; this knowledge is a media system, and the format would be knowledge of the specific features of a particular language that allow its comprehension. Also, with any verbal text there is such a rich weave of near-universal graphic design practices (visual hierarchies, margins, headings, page numbering, blocks of words and colors, fonts, etc.) that it can to a large degree be read as a visual medium.
However, in the future we will have programmable matter, and much of what I am here calling medium will become media system – we will be able to change our paintings on the wall at the touch of a button. At Carnegie Mellon’s “Claytronics” lab, Goldstein and Mowry describe their research as follows:
Programmable matter refers to a technology that will allow one to control and manipulate three-dimensional physical artifacts (similar to how we already control and manipulate two-dimensional images with computer graphics). In other words, programmable matter will allow us to take a (big) step beyond virtual reality, to synthetic reality, an environment in which all the objects in a user’s environment (including the ones inserted by the computer) are physically realized. Note that the idea is not to transport objects nor is it to recreate an object’s chemical composition, but rather to create a physical artifact that will mimic the shape, movement, visual appearance, sound, and tactile qualities of the original object.21
Programmable matter may perhaps become a mainstay of future domestic space, as theoretical physicists such as Michio Kaku have argued22. One would then be able to transform a club chair into a countertop into a bookshelf. The concept of programmable matter is in fact defined against non-programmable matter, which is what the notion of “medium” in this framework alludes to. But of course the idea of “programming media” has its precedent in media systems.
3. Media Systems and Formats:
“New media” are the media of systems, if not embodiments of systems theory. Media systems are either analog or digital. Elsewhere I have argued that analog media should be understood as index-icons (iconic representations produced through indexical means, showing continuity of form between data and phenomena), and that digital media are characterized by index-symbols (symbols arrayed in causal hardware chains, data that is discontinuous in form with its analog input or output). These distinctions do not exclude non-systems-based mediations. For example, a handprint in a 40,000-year-old cave painting is an icon produced through indexical means. However, such a handprint exists as a medium and not as part of a media system, since it is readily available to perception. Media systems require formats. The most impactful historical development of analog media systems is photography, which is indissociable from hundreds of formats (indeed, there are “medium” and “large” format cameras). Any media system conjures up its formats: reel-to-reel, 8-track, CD, mp3, filmstrip, Beta, VHS, .CR2, .RAW, and so on. A media system is programmable, offering discontinuous, varying, and multiple mediations, and in fact typically features on/off capacities. Perhaps not enough has been theorized about the on/off function, as I cannot think of any major media theory that has conceptualized the difference between a television that is turned on and one that is turned off. In fact, it is this capacity that aligns media systems with speech, language and writing. Media systems can “shut up,” if the reader will permit this colloquial phrasing. Earlier I described unibody media as media that go on and on “saying the same thing.” We don’t throw blankets or towels over our paintings, Beethoven busts, books, carpets etc. when we don’t want to look at them, because they are not producing constantly changing, varying, discontinuous mediations that absorb and distract our attention.
They are ambient, spatialized meanings that we can pay attention to when we want to. Changes and differences put a load on our attention, since we are cognitively wired to notice information, i.e. changes in the environment. In fact, even when treating audiovisual media in an ambient mode, as in the practice of keeping background chatter going for some sense of human presence, this spatialization of discontinuous flow is still afforded by the capacity to switch it one way or the other. The off, sleep or standby features of a media system require us to declare, refuse, or attenuate attention. This on-off-sleep-standby quality of media systems is what aligns them with language (speech, reading and writing), because we can be quiet or speak, whereas a book on the shelf, since it is a medium, does not need an off-button, in the same way that a painting also does not need one. But a mouth needs sometimes to “produce silence,” as it were. Speech, reading and writing are material forms that introduce into consciousness constantly changing, discontinuous virtual orders in a temporal flow, and we arrest that flow with a metaphorical OFF switch (or a computer screen “hot corner” set to SLEEP) by not speaking, reading or writing. A video monitor turned off is a symbol of the homogenous, transparent, insubstantial submedium of air (with its absolute up-down axis as cognitive reference). A DVD on a shelf is a symbol of the same state of medium as the painting. Similarly, with respect to platform, thumbnail images of movies or video control icons are symbols of media systems. Raymond Williams’ well-known description of television as a “planned flow”23 of course only describes old-fashioned network broadcast and cable television, i.e. pre-programmed television, and not, for example, video art or VOD, in which artistic license or intervention or viewer selection intervenes in the flow by creating its own program.
The notion of “program” is thus offset or displaced to another order, the procedural or algorithmic in general, rather than the structure of a sequence.
Of course aligning language with media systems is not to “remove” it from the other four modalities considered here. Language is an all-pervasive feature of culture, and with respect to major concepts in systems theory is perhaps better understood as an overlapping system that is nonsaturated by other systems. But there is a difference between the ambient cultural knowledge that motivates formations at the level of medium or submedium – e.g. matching one’s shoes to one’s socks, or taking vitamins – and the cognitive demands of temporized language flow through speech, reading, writing or listening, since language in these instances can be understood to shift from a spatially embedded to a time-based constitution that is also the shift from unibody array to discontinuous flow.
Photography is an interesting technology to consider from the medium/media systems distinction. Any single photograph, printed and displayed in the manner of a painting, would be akin to a medium. But a camera is a media system. However, early cameras were not as “systematic” in this sense of affording variability and discontinuity, given the elaborate setup (including locking people into position with torturous-looking metal frames) and long exposure times. But a camera with a roll of film that takes 36 images, or a flash card that records thousands of images, is a media system in the sense of providing rapid change between mediations. A photo album applies the form of the previous meta-medium, the codex, to an array of images in the sense of a media system, as does the now defunct technology of the slide show (reviewing still images in a series during a sitting). It can be useful to shift analytic positionings from reception to making to distributing to displaying, as each stance may afford a different and useful perspective. There is no need for a typology that forces every possible mediation into a lockstep mechanism, and the variety of practices, technologies, and contexts should be taken into account. For example, spatial contexts can have media system effects, as in gallery spaces that present many mediums simultaneously or through exhibition rotation. The same occurs with artists’ studios, which can be converted into exhibition spaces and often are. These spaces may function as either media systems or platforms, depending on how they are utilized. A performance may suddenly erupt in the midst of a staid exhibition space, for instance. Or the space may be entirely emptied out between shows, in which case it reverts to an almost submedial state. Installation art and site specificity may activate a space as a medium.
The modes identified here are traceable across a diversity of artifacts, spaces, techniques, and practices and should not be taken as a kind of fixing agent for hard mappings of every conceivable variant of mediation.
Microsoft’s new Xbox One is too new a product for there to be any scholarly commentary on it. The Xbox One is the latest attempt at producing the infamous, or legendary, all-in-one “black box” (the black box in your living room that can do anything). In a provocatively titled Slate.com article, “Steve Jobs’ Dream Device Has Arrived, And it’s made by Microsoft. Meet the Xbox One,” Farhad Manjoo writes:
Just before he died, Steve Jobs told his biographer Walter Isaacson about his dream for revolutionizing television. His fantasy device would control all the many doodads that crowd your living room – DVRs, game consoles, Blu-ray players – and would connect to the vast world of entertainment available online. Best of all, it would be drop-dead simple to control – no more futzing with the Input button to switch between different kinds of content, no more fiddling with different remotes to control your devices. “It will have the simplest user interface you could imagine,” Jobs said. “I finally cracked it.”24
Such ideations capture the spirit of the platform. What a submedium is to medium (homogeneous, transparent, insubstantial space of movement and possibility), and medium is to media system (momentarily fixed content in a discontinuous flow), media system is to platform: the “Swiss Army knife” of mediation. Platforms provide interfaces for those who want to program their own flow of media systems. The concept of an interface for users is important because a platform is not a meta-medium. A meta-medium is for those who want to program, period. Computer programming is the new technology of writing, in the same relation to audiovisual media as Plato was to Homer, or the Forms to Mimesis, or philosophy to poetic narrative – computer programming is the New Alphabetic Abstraction that gives a headache to most folks, who just want a simple interface to coordinate the sheer complexity of organized media systems, through which they can get their heroic epic mediations. Platforms of course depend on underlying programming, but that is for the Technical Support staff to concern themselves with – the home users, and the multinational corporations that cater to them, want simple interfaces to agglutinate all the media systems in circulation, along with their contents. The platform is the friendly space station diner, where everyone knows your name, in an otherwise inhospitable and inhuman cosmic infoverse, where your name is a microscopic array of 0s and 1s. In “programming” one’s media systems on a platform, one “selects” from a pre-given feature set. The user attains the position of the broadcast programmer without having to become a computer programmer. The platform spares us from the meta-medium.
Notably, a medium can threaten a platform, which is designed to remediate media systems. We are not supposed to use our video monitors as a medium – say, for example, by displaying a still image of a painting or photograph on our living room walls for long durations – since continuous display of a single image will wreck the technology, causing the image to burn into the materiality of the video screen itself. In other words, if we use a platform as a medium, it becomes a medium! A still digital photo or video image, if displayed on a monitor for too long, becomes a permanent aspect of the materiality of a system that is designed for changing imagery. In this example I am describing the platform aspects of contemporary flat screen video displays, which have multiple video format inputs as well as Ethernet, USB, and internet capabilities. Today’s video screens are platforms, unlike yesterday’s video systems, which only had AV connectivity.
In any epoch, the meta-medium is the medium that has yet to be remediated by another medium. In our age, it is the computer. We can hypothesize what the next meta-medium would be. To approach this possibility, we should note that every instance of “remediation” is in fact very “lossy,” to use the term from data compression. One loses much of the body and substance through the processes of feature extraction that are productive of any remediation. A remediation extracts some features, and not others. For instance, in remediating speech, writing discards the “grain of the voice,” accent and dialect, pitch register (alto, tenor, etc.), intonation, loudness, and so forth. Remediation depends upon extracting certain features in a process in which the affordances of the media dictate which features are extracted. Those so extracted thus become “essential” for the remediation, so that what is “essential” in speech, through writing, is just “the meaning,” i.e. the features of speech that writing can extract. What it can’t extract becomes “non-essential,” discarded, not part of the meaning of writing (though of course it remains integral to the meaning of speech). Film may remediate theatre, but it doesn’t remediate live performance. Music media do not remediate the acoustics of performance halls or the physicality of having one’s body resonate with instruments positioned a few feet away in a small music club. Painting extracts color, shadow, texture, and form from a scene but not movement, and so on. Remediation is lossy feature extraction. One may wonder what is in fact left of original content upon successive passes of remediation, but it is also the case that each loss is also an addition, because each medium comes with new capacities and affordances, and with these come new “action possibilities,” as Gibson would say.
So remediated “meaning” is always lost when transferred, but it is also added to, renewed, regained and reconfigured by the new practices that take up the new affordances.
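The logic of remediation as lossy feature extraction can be sketched in a short illustrative program (my own toy example, not anything from Thacker or Gibson; the feature names and the `remediate` function are hypothetical). “Writing” keeps only the features it can carry from a speech event, discards the rest, and adds affordances of its own:

```python
# Toy model of remediation as lossy feature extraction.
speech_event = {
    "words": "we will meet at dawn",
    "pitch_register": "alto",
    "loudness_db": 62,
    "accent": "regional",
    "intonation": "rising",
}

def remediate(source, extractable, added_affordances):
    """Keep only the features the target medium can extract ("the meaning"),
    record what is lost, and add the new medium's own affordances."""
    kept = {k: v for k, v in source.items() if k in extractable}
    lost = sorted(set(source) - set(extractable))
    return {"content": kept, "lost": lost, "gained": added_affordances}

# Writing extracts only the words; voice, loudness, accent, and
# intonation become "non-essential" and are discarded.
writing = remediate(
    speech_event,
    extractable={"words"},
    added_affordances=["storage", "rereading", "citation"],
)

print(writing["content"])  # only the words survive
print(writing["lost"])     # the discarded grain of the voice
print(writing["gained"])   # what the new medium adds
```

The point of the sketch is that every pass is simultaneously a loss and a gain: the `lost` list is never empty, but neither is `gained`.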
Therefore, one way to approach the question of the next meta-medium is to ask: what could be lost when the computer is remediated? What could computation lose but still be computation? My answer is: the interface. The next meta-medium would have to offer computation contiguous with thought itself. Of course many research teams around the world are at work on various ways of more or less achieving this possibility. Not a week goes by without some prominent news item about a chip on the brain or a sensor on an organ that is accomplishing computational tasks without interface, without cognitive effort expended, and so on. Computation is becoming ever more ambient, as in the general notions of “the internet of things” and “ubiquitous computing.” But making microscopic “homeostats” or automated plug-ins or what have you is not equivalent to being a meta-medium. Basically, everything one can do with a computer, including programming it in various coding languages, would have to be achievable without interface. Only such a technology could be understood as remediating the computer, and to remediate, one has to extract some features and lose others. The interesting aspect of this idea – conceiving the next meta-medium – is that it takes us full circle: we are right back at the original consideration of biomedia!
6. Conclusion: for biosystems
The reader will have noted that, in describing the five modes of media, I have given much more attention to the submedium than to the other modes. On a formal level, this may appear to be an “imbalance” in the essay, since in a typical typology one would presumably give equal discursive weight to all of the key terms. My intention here is somewhat different. Broaching the concept of the next meta-medium brings us directly back to the question of biomedia: I have suggested that performing computation “immediately” (via thought itself) is the implication of the current meta-medium, since the only aspect of the computer that could conceivably be lost in its remediation is the interface (typing away, leaning too close to the monitor, back muscles aching from sitting too long in front of a screen, etc.).
I do not wish to approach this question in the manner of either general speculative futurology, or even a more grounded historicist consideration reviewing the current state of progress towards these goals. Rather, the full circle implied by the next stage of remediation of the meta-medium requires a return to considering other undefined concepts in Thacker’s essay. While it is of course the case that others have written about biomedia, and it may appear odd to focus only on this one essay by a single author, the limited scope of what can be approached in an article, as well as the relatively high degree of influence of this particular essay in media theory, recommend it for the in-depth treatment undertaken here. What is of course at stake in the question of biomedia is the relation of various concepts around “life” to “technology.” Discursive treatments such as Thacker’s tend toward undermining the distinction between these concepts, as theorists so often want to “blur the difference” or even “deconstruct” this apparent opposition. However, we saw in Thacker’s text that no specific discourse was identified or cited as to this life-technology opposition. Rather, what was identified was an opposition in general circulation, one that could be spotted here and there, in this text or another – which is another way of saying that his manner of disrupting this binary is not as rigorous as an approach that would actually offer citations as focal points (e.g. in Derrida’s texts). In a review of contemporary media theory, we will of course note that Thacker is not alone in exploring this life-technology “binary.” Thacker’s intent, of course, is to make the distinction disappear (as any good postmodernist would want to):
Biomedia are particular mediations of the body, optimizations of the biological in which “technology” appears to disappear altogether.25
Here technology – again – gets the quotes treatment.
In this final section I want to suggest a resolution by way of a contrarian move that affirms the distinction between life and technology, but that is also open to the possibility of a computational contiguity with thought. Thacker’s frequent use of quotes around troublesome words is compounded by his awkward use of Heidegger. On the one hand, he “needs” Heidegger, because Thacker lacks a concept of technology (indicated by his habit of putting quotes around the word), and so he needs a solid philosophical reference point of some kind. But even though Thacker is critical of Heidegger’s view, it is basically the case that Thacker is “cheering on” enframing, or the representation of the body as “standing reserve,” which is what informatics does. So why go to the trouble of invoking a thinker whose animus against contemporary technology is well known (Heidegger did like to meditate in his Black Forest hut), only to lend one’s own added impetus to the very process criticized by the borrowed (but not quite cited) thinker, who shows up in only a few footnotes? The answer, of course, is that the concept of technology is lacking.
Here I will offer a provisional route that indicates where some clear distinctions can be sought (noting that contemporary humanist theory in general shuns clear distinctions, its favored mode for some time having been boundary blurring, undermining, destabilizing, etc.). Systems theory provides useful concepts, such as the three classic cybernetic economic orders of relative openness and closure towards matter/energy, information, or organization, as well as more recent notions of self-organization. Technology and life are basically forms of organization. Technology needs to be organized by energy expenditures external to it, whereas life metabolizes energy through its own processes in order to expend the energy it needs to perform the work of self-organization. For example, when my car starts to break down, it doesn’t go wandering through the woods near my home (Pacific Spirit Regional Park) in search of berries, nuts, and edible squirrels to obtain the energy it needs to repair itself, as much as I may wish it did. Instead, I have to take it to the car dealer, where mechanics will perform the required labor to counter the car’s inherent entropy. The car has little capacity to produce negentropy (reduction of entropy). The cybernetic notion of “self-regulation” is not the same as the ancient Greek notion of life as “self-generating.” Life produces local negentropy, meaning it orders itself and its local environment. Civilization as we know it is the only thing moving in the opposite entropic direction of the rest of the universe: while the cosmos moves towards heat death (entropy, basic thermodynamics), our cities consume more and more energy to produce ever more complex social-technological orders (increasing negentropy through energy expenditure, work enacted, etc.). Somewhat moderating the energy involved, of course, are ever new improvements in energy efficiency – wasting less, recycling, insulating, etc.
But technology has no means of self-production, reproduction, repair, and so on (some basic robotic applications can fix a loose metal joint here and there, but again it is the human roboticists who do the legwork in producing self-repairing robots). A stone is a piece of matter or potential energy. Humans come along and provide the labor power to make walls out of stones, thus organizing matter/energy. The stones don’t self-organize into walls. Every technology, even informatic and computational technology, remains of the order of matter/energy organized by humans. Life organizes its own processes to maintain, repair, and reproduce itself out of what’s available in the surrounding environment. Humans happen to produce the most modifications in the local environment (in our energetic case, the whole planet), but it is also the case that all animals influence their local environments, because the environment is the correlate of any body.
So, while the profusion of quotation-marked words, undefined terms, and techno-genomic terminology in Thacker’s text seems to impute, in highly suggestive overtones, some sort of destabilization of the generally circulating, colloquial, sort-of-philosophic concepts “life” and “technology,” it is actually a fairly straightforward distinction to make from an elementary systems theoretic perspective. What is more interesting than merely destabilizing an imagined conceptual polarity is understanding how technology itself is becoming more human, more lifelike, with the human body and its cultural situatedness ever increasingly becoming the model, origin, telos, and reference of technical development. Human perception and cognition are analyzed to develop new media systems (e.g. more megapixels in DSLRs, new forms of 3D video, sound compression algorithms, camera detection of human limbs in game-playing, etc.). Modeling computers on human neural networks, designing robots and artificial limbs on the kinesic models of human musculoskeletal systems – these and similar trends point toward the informatic and technological in general becoming ever more body-like. To put it another way: as our technologies become more sophisticated, our local modified environment increasingly resembles embodied – neurological, biological, cultural – materiality. This ranges from materials that are fleshy to the touch, to chairs with ergonomic adjustments, to more believable renderings of photo-realistic animations, to cloning skin from one body part to place on another. In short, technology is becoming more and more humanized, but of course the human that is remediated in technology is also the human that is becoming ever more accommodated to a highly technologized environment. Again, it is systems theory that provides models for these kinds of feedback loops, not the easy discursive trick of blurring binary distinctions.
What I am calling biosystems is thus the exact inverse of Thacker’s notion of biomedia. Whereas Thacker claimed that biomedia are “the informatic recontextualization of biological components and processes” (58, 78), biosystems, as described above, are in fact the biological recontextualization of informatic components and processes. It is only on this basis that the computer could conceivably be remediated as computational contiguity with thought. For one, as discussed above, it is doubtful that it would ever be considered ethical to experiment on live human brains to the point of modifying them into computational architecture. In my own experience, getting “ethics approval” from my home institution for even a simple survey-based research instrument to be administered to my students is a painfully burdensome process. I can only imagine what asking to experiment on human brains might be like (within any ethical research context, of course). But more importantly, given that the brain itself (and the body in general) is becoming ever more the model, origin, telos, and point of reference for the development of new technologies (biomimicry, etc.), such modifications of brain matter are in fact unlikely to be required. Computers themselves will reach a stage of being basically bio-identical to living neural networks, and on that basis the new meta-medium – as the remediation of computation – would come into its own.
On a final note, I would like to direct the reader to the YouTube clip Neural Networks for Machine Learning with Geoffrey Hinton.26 The demonstration in this video shows the reversible process of both detecting a variety of number patterns and imagining them. Three decades of modeling the brain in computer science has produced a neural network that can “imagine” different shapes of the number 2. With respect to the various “brain versus computer” conceptualizations free-floating (as signifiers do) in general circulation, machines have a very long way to go before they can catch up to us, and in any event, it will be us leading the machines towards ourselves. We seem to need computers to do a lot of mathematical operations that we don’t want to do, or don’t have the time to do. We export more and more tedium to computational processes, because there is always more tedium that needs to be performed as societies become more complex. One day we will say to ourselves: “Hey, Other Brain, do all that boring crap for me, thanks.”
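The reversibility that makes the Hinton demonstration striking can be sketched in a few lines (my own toy illustration, not Hinton’s code): a restricted Boltzmann machine runs “upward” to detect features in an image and “downward” to generate, or “imagine,” an image from features, using the same weight matrix in both directions. The weights here are random, so the output is noise; the point is the symmetric architecture.

```python
# Toy restricted Boltzmann machine: one weight matrix used both to
# detect features (upward pass) and to imagine images (downward pass).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 28 * 28, 64          # e.g. a 28x28 digit image
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)                  # visible (image) biases
b_h = np.zeros(n_hidden)                   # hidden (feature) biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def detect(v):
    """Upward pass: infer hidden feature probabilities from an image."""
    return sigmoid(v @ W + b_h)

def imagine(h):
    """Downward pass: generate image probabilities from hidden features."""
    return sigmoid(h @ W.T + b_v)

# Round trip: image -> features -> "imagined" image (one Gibbs step).
v0 = rng.binomial(1, 0.5, n_visible).astype(float)   # a random "image"
h = rng.binomial(1, detect(v0)).astype(float)        # sample hidden states
v1 = imagine(h)                                      # imagined reconstruction

print(v1.shape)   # same shape as the input image
```

Trained on thousands of handwritten digits, exactly this up-and-down machinery is what lets the network in the video both recognize and “dream up” different shapes of the number 2.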
- Eugene Thacker, "What is Biomedia?" Configurations 11, no. 1 (2003), 77, emphasis mine
- Ibid., 53
- Ibid., 56
- Ibid., 57
- Ibid., 59, italics and quotes in original
- Ibid., 62, emphasis mine
- Ibid., 75
- Ibid., 76
- Ibid., 53
- Ibid., 75
- Ibid., 70, 74, 78
- Ibid., 77
- James Gibson, The Ecological Approach to Visual Perception, (Hillsdale, New Jersey and London: Lawrence Erlbaum Assoc. Publishers, 1986), 16-19.
- Ibid., 17.
- Ibid., 18.
- Electro-Music (accessed on Nov 11, 2013).
- Gibson, 19.
- Ibid., 20.
- Michael Filimowicz, The Plane of Mediation: the actual and virtual as cultural system, NMEDIAC: the Journal of New Media and Culture, Vol. 9:1, 2014
- Seth Copen Goldstein and Todd Mowry, "Claytronics: An Instance of Programmable Matter," n.d., www.cs.utexas.edu/~skeckler/wild04/Paper12.pdf (accessed Nov. 17, 2013)
- Michio Kaku, The Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100, (Anchor Press, 2012).
- Raymond Williams, Television: Technology and Cultural Form, (New York: Schocken Books, 1975), Chapter 1.
- Farhad Manjoo, "Steve Jobs' Dream Device Has Arrived, And It's Made by Microsoft. Meet the Xbox One," Slate, www.slate.com/articles/technology/... (accessed Nov. 17, 2013)
- Thacker, 52.
- Neural Networks for Machine Learning with Geoffrey Hinton (accessed online Nov. 17 2013)
- Figure 1
- Figure 2 elements
- Figure 3
Michael Filimowicz is a multi-disciplinary artist and researcher working at the overlapping boundaries of media forms. He is founder of the Cinesonika International Film Festival and Conference of Sound Design, and Editor of The Soundtrack academic journal. His research area can be broadly construed as the phenomenology of mediation and informational semiotics. He is also Faculty Director of Interdisciplinary Programs in Simon Fraser University’s Lifelong Learning unit.