The Angel of History: A Test Card for De-Calibration

Rosa Menkman

In Front of the Angel of History
Covered in a heavy layer of make-up, she shot her face on a DV tape. She wished to mask her flaws, to be perfect. But only a short time into the shoot, the illusion shattered and she found herself forced to visit the hospital's emergency room. An allergic reaction to the make-up violently hurt her eyes and left her sight affected for days.

Still of original source footage used for A Vernacular of File Formats, the Collapse of PAL, and Dear mr. Compression, all works 2010

Behind the Angel of History
Seven years after shooting the source footage for A Vernacular of File Formats: An Edit Guide for Compression Design (2010), it has become strange to think of, and refer to, the image that I shot that day as a “self-portrait”: that image being “my image.” When I shot it, it was a symbol for my own imperfect being. I tried to be perfect like porcelain, at least from the outside, but my body broke out and reminded me that there is no such thing as perfection. Not even in make-believe video.
As I aged, it wasn’t just the cliché of time—which healed the wounds of bloodshot eyes, and slowly but naturally greyed my hair—that changed my relation to this particular shot. As time passed, the relationship between me and that image fundamentally changed for other, more complex and unexpected reasons.

Compression
While digital photography seems to reduce the effort of taking an image of the face, such as a selfie or a portrait, to a straightforward act of clicking, the photos created and stored inside (digital) imaging technologies do not just take and save an image of the face. In reality, a large set of biased protocols intervenes in the process of saving the face to memory, including, but not limited to, the scaling, reordering, decomposing, and reconstituting of image data, in favour of certain affordances, which in turn cater to techno-conventional, political, and historical settings.
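To make one such intervening protocol concrete, here is a toy sketch (my own illustration, in Python, of a generic JPEG-style step; not the pipeline of any specific camera or codec): an image block is decomposed into frequencies, quantized, and reconstituted, silently discarding whatever detail the settings deem unimportant.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_like_block(block, q=25.0):
    """Decompose an 8x8 pixel block, discard fine detail, reconstitute it."""
    coeffs = dctn(block, norm="ortho")      # decompose: 2-D discrete cosine transform
    coeffs = np.round(coeffs / q) * q       # quantize: the lossy, "biased" step
    return idctn(coeffs, norm="ortho")      # reconstitute an approximation of the block

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
approx = jpeg_like_block(block)
print(np.abs(block - approx).mean())        # the detail the compression decided to discard
```

Raising the quantization step q makes the file cheaper to store and faster to handle, and the reconstituted face less faithful: the affordance trade-off described above, played out in a dozen lines.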
Affordances, described by James Gibson in 1977 as preferred “object action possibilities,” are created by considering settings such as speed, size, and quantity as relative to each other.1 The bigger the file, the more time it will take to read and write it from memory, and the slower the camera will respond. As Adrian Mackenzie wrote in 2008, “Software such as codecs poses several analytical problems. Firstly, they are monstrously complicated. Methodologically speaking, coming to grips with them as technical processes may entail long excursions into labyrinths of mathematical formalism and machine architecture, and then finding ways of backing out of them bringing the most relevant features. […] Second, at a phenomenological level, they deeply influence the very texture, flow, and materiality of sounds and images.”2 Reverse engineering a standardization process is thus complex, if not generally impossible. However, although standards are often set in a way that avoids or hides all traces of testing and standardization regimes, traces can (re)surface in the form of flaws, inherited dogmas, or (obsolete) artefacts.

Test Image
A fundamental part of image-processing history, and of the standardisation of settings within both analogue and digital compression and codec technologies, is the test card, chart, or image. This standard test image is an image file used across different institutions to test, for instance, image processing, compression algorithms, and rendering, or to analyze the quality of a display. One type, the test pattern or resolution target, is typically used to test the rendering of a technology or to measure the resolution of an imaging system. Such a pattern often consists of reference line patterns with well-defined thicknesses and spacings. By identifying the finest set of lines whose elements can still be distinguished, one determines the resolving power of a given system, and by using identical standard test images, different labs are able to compare results visually, qualitatively, and quantitatively.3
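As a worked example (assuming the widely used USAF 1951 resolution target, which the text does not name explicitly), each element on that chart encodes a spatial frequency of

$$ R = 2^{\,\text{group} + (\text{element} - 1)/6} \ \text{line pairs per mm}, $$

so group 2, element 3 corresponds to $2^{2 + 2/6} \approx 5.04$ lp/mm; the finest element whose lines a system can still separate marks its resolving power.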
Another type of standard test image, the color test chart, is created to facilitate color balancing or adjustments, and can be used to test the color rendering on, for instance, different displays. While every technology has its own test images (photography, television, or film, for instance), this type of image has a biased legacy, rooted in the white Caucasian woman as test subject. These Shirley cards (in analogue photography) or China Girls (in color film chemistry) are the “normal” standard, as is often printed on these cards.4 Even though there were many Shirleys, or Caucasian white test subjects, these cards were not made to serve variation; in fact, they cultivated a biased, gendered standard reference point, which, unfortunately, continues to function as a dominant norm even today. Meanwhile, the identities of the many Shirleys modelling the norm regrettably remain unknown.

A Vernacular of File Formats
A file format is an encoding system that organizes data according to a particular syntax, or compression algorithm. The choice of a particular image compression algorithm depends on its foreseen mode and place of usage, which involves questions such as: How much accuracy is necessary for a particular task? What hard- or software will process the image? What data is important, and what can be discarded?
Every compression algorithm comes with its own set of rules and compromises, which, even though often invisible, influence our media on a fundamental, meaningful, and often compromising level. In A Vernacular of File Formats, I explore and uncover these otherwise hidden protocols: via a series of corrupted self-portraits, I illustrate the language of compression algorithms. A Vernacular of File Formats consists of one source image, the original portrait, and an arrangement of recompressed and disturbed iterations. By compressing the source image via different compression languages, and subsequently implementing the same (or a similar) error into each file, the normally invisible compression language presents itself on the surface of the image.
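A minimal databending sketch of that procedure (my own illustration in Python, not the actual 2010 workflow; the file names are hypothetical): save the same source image in several formats, then write the same small error into each compressed file, so that every codec renders the identical corruption as a different aesthetic.

```python
import random

def glitch(path_in, path_out, header_bytes=200, flips=5, seed=2010):
    """Write a small, reproducible error into an already-compressed image file."""
    rng = random.Random(seed)                       # seeded, so the corruption is repeatable
    data = bytearray(open(path_in, "rb").read())
    for _ in range(flips):
        i = rng.randrange(header_bytes, len(data))  # spare the header so the file still opens
        data[i] ^= 0xFF                             # invert one byte of compressed data
    open(path_out, "wb").write(bytes(data))

# The same error, pushed through different compression languages,
# surfaces on the image in a format-specific way:
for fmt in ("jpg", "png", "gif", "bmp"):            # hypothetical file names
    glitch(f"portrait.{fmt}", f"portrait_glitched.{fmt}")
```

Because the corruption is applied after compression, what appears on screen is not the error itself but each format's attempt to decode around it: its vernacular.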

A Vernacular of File Formats, 2010

Alongside every iteration of the image, I describe not just the general use of the particular image compression, but also try to explain how I disrupted the image and which basic affordances of the compression are responsible for the aesthetic outcome. In doing so, A Vernacular of File Formats formed not only the start of my ongoing research into the politics of file formats and their inherent resolutions, but also a thesaurus, or handbook, of glitch aesthetics.

A Shirley Card for De-Calibration
When I released A Vernacular of File Formats, its images initially circulated quite naturally, following the random flow of the internet. Some of them were republished with consent or attribution; others were badly copied, without attribution. Once in a while, I found my face as a profile picture on someone else’s social media account. Soon it became clear that particular iterations of the self-portrait had quite a bit more traction than others; these got frequent requests and pulls, and were featured on the covers of books, magazines, and online music releases. One of the images became the mascot for a festival in Valencia (with a poster campaign throughout the city).
It was only some years after the release of A Vernacular of File Formats that the displacement of this portrait made me rethink my relation to the image. The first time this happened was when I read a description of the work in a piece by Kevin Benisvy, at the time a student at the University of Massachusetts. Benisvy writes that the protagonist is presented “in the act of brushing her hair, with an almost ‘come hither’ expression, as if caught by surprise, having an intimate moment in a Playboy erotic fiction.”5 I never considered the image erotic; to me, the image carried a painful and eerie vibe (it documents me losing my vision for a period of time). But reading this gave me an insight into the diversity of readings the image can invoke.
Soon after, a sequence of nonconsensual, non-attributed instances of exploitation appeared: the face became an embellishment for cheap internet trinkets such as mugs and sweaters; it was featured on the cover of a vinyl record by Phon.o, released by the Berlin label BPitch Control; it was featured as an application button for two proprietary glitch software apps for iPhone and Android; it became the outline of the face of Yung Joey (a black rapper who photoshopped his face onto mine); and it even appeared in a sponsorship campaign for a Hollywood movie about a woman being stalked, just to name a few surprising appearances.6 The image, exploited by artists and creators alike, started to lose its connection to its source, to me, and instead became the portrait of no one in particular, a ghost similar to a Shirley test image, but in this case, a Shirley for de-calibration. Following a long tradition of gendered, race-biased test cards, today I wish I had instead used the face of, for instance, the African-American male models Renauld White or Urs Althaus.

1 James J. Gibson, The Ecological Approach to Visual Perception (New York: Psychology Press, 2014).
2 Adrian Mackenzie, “Codecs,” in Software Studies: A Lexicon, ed. Matthew Fuller (Cambridge, MA: MIT Press, 2008).
3 “Resolution Test Targets,” Thorlabs, https://www.thorlabs.com/NewGroupPage9_PF.cfm?ObjectGroup_ID=4338.
4 Benjamin Gross, “Living Test Patterns: The Models Who Calibrated Color TV,” The Atlantic, June 28, 2015, https://www.theatlantic.com/technology/archive/2015/06/miss-color-tv/396266/.
5 Kevin Benisvy, “The Queer Identity and Glitch: Deconstructing Transparency,” http://www.kevinbenisvy.com/sites/default/files/2012%20-%20The%20Queer%20Identity%20and%20Glitch.pdf (site discontinued).
6 Both Phon.o and the creators of the sponsorship campaign have been in touch since their respective releases, and now have my consent to use the image.

Rosa Menkman (*1983) is an artist focusing on noise artifacts that result from accidents in both analogue and digital media. She writes regularly about resolution theory and phenomena such as glitch, encoding, and feedback artifacts.