Alan Turing was a mathematician, logician, cryptanalyst and computer scientist, as I'm sure you all know. During World War 2 he cracked the Nazi Enigma code, and he later came to be regarded as the father of computer science and artificial intelligence. In 1952, Turing was convicted of 'gross indecency' for homosexual acts. Given a choice between imprisonment and chemical castration, Turing chose to undergo a medical treatment that made him impotent and caused gynaecomastia. Suffering from the effects of the treatment and from being regarded as abnormal by society, the scientist committed suicide in June 1954.
The Turing Normalizing Machine is an experimental research project in machine learning that identifies and analyzes the concept of social normalcy. Each participant is presented with a video line-up of 4 previously recorded participants and is asked to point out the most normal-looking of the 4. The person selected is examined by the machine and added to its algorithmically constructed image of normalcy. The kind participant's video is then added as a new entry in the database.
Conducted and presented as a scientific experiment, TNM challenges participants to consider the outrageous proposition of algorithmic prejudice. The responses range from fear and outrage to laughter and ridicule, and finally to the alarming realization that we are set on a path towards wide systemic prejudice, ironically initiated by one of its victims, Turing.
I found out about the TNM the other day while reading the latest issue of the always excellent Neural magazine. I immediately contacted Mushon Zer-Aviv to get more information about the work:
Hi Mushon! What has the machine learnt so far? Are patterns emerging of what people find 'normal', such as an individual who smiles or one who is dressed in a conservative way? What is the model of normality at this stage?
TNM first ran as a pilot version at The Bloomfield Museum of Science in Jerusalem as part of the 'Other Lives' exhibition curated by Maayan Sheleff. Jerusalem is a perfect environment for this experiment as it is a divided city with multiple ethnic, cultural and religious groups practically hating each other's guts. The external characteristics of these communities are quite distinguishable as well, from dress code to tone of skin and color of hair. While the Turing Normalizing Machine has not arrived at a single canonical model of normality yet (and possibly never will), some patterns have definitely emerged and are already worth discussing. For example, the bewilderment of a religious Jewish woman trying to choose the most normal out of 4 Palestinian children.
The machine does not construct a model of normality per se. To better explain how the prejudice algorithm works, consider Google's PageRank algorithm. When a participant chooses one of the 4 randomly selected profiles presented to them as 'most normal', that profile moves up the normalcy rank while the others move down. At the same time, if a profile is considered especially normal, the choice made by its owner becomes more influential on the rank than others, and vice versa.
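The update rule described above can be sketched in a few lines. This is only my illustration of the PageRank-like idea, not the actual TNM code (which has not been published); the scores, step size and weighting are assumptions.

```python
# Hypothetical sketch of the normalcy-rank update: the chosen profile's
# score rises, the passed-over profiles' scores fall, and a voter who is
# themselves ranked as 'normal' casts a weightier vote.

def update_ranks(ranks, chooser, chosen, lineup, step=0.1):
    """ranks maps a profile id to a normalcy score in [0, 1]."""
    weight = ranks.get(chooser, 0.5)  # unknown voters get a neutral weight
    for profile in lineup:
        if profile == chosen:
            ranks[profile] = min(1.0, ranks[profile] + step * weight)
        else:
            ranks[profile] = max(0.0, ranks[profile] - step * weight / 3)
    return ranks

ranks = {"a": 0.5, "b": 0.5, "c": 0.5, "d": 0.5}
update_ranks(ranks, chooser="e", chosen="b", lineup=["a", "b", "c", "d"])
# "b" rises to 0.55; "a", "c" and "d" each drop slightly
```

The feedback loop is the interesting part: every selection reshapes both the ranking and the influence of future voters, which is exactly how prejudice compounds.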
We are currently working on the second phase of the experiment that analyzes and visualizes the network graph generated by the data collected in the first installment. We're actually looking to collaborate with others on that part of the work.
Usually society doesn't get to decide what is good or even normal for society. The decision often comes from 'the top'. If such an algorithm to determine normality were ever applied, could we trust people to help decide who looks normal and who doesn't?
While I agree that top-down role models influence the image of what's considered normal or abnormal, it is the wider society that absorbs, approves and propagates these ideas. Whether we like it or not, such algorithms are already used and integrated into our daily lives. It happens when Twitter's algorithms suggest who we should follow, when Amazon's algorithms offer what we should consume, when OkCupid's algorithms tell us who we should date, and when Facebook's algorithms feed us what they believe we would 'like'.
This experiment is inspired by the life and work of British mathematician Alan Turing, a WW2 hero, the father of computer science and the pioneering thinker behind the quest for Artificial Intelligence. Specifically we were interested in Turing's tragic life story, with his open homosexuality leading to his prosecution, castration, depression and death. Some, studying Turing's legacy, see his attraction to AI and his attempts to challenge the concept of intelligence, awareness and humanness, as partly influenced by his frustration with the systematic prejudice that marked him 'abnormal'. Through the Turing Normalizing Machine we argue that the technologies Turing was hoping would one day free us from the darker and irrational parts of our humanity are today often used to amplify it.
The video of the work explains that "the results of the research can be applied to a wide range of fields and applications." Could you give some examples of that? In politics, for example (I'm asking about politics because the video illustrates the idea with images of Silvio Berlusconi)?
Berlusconi is a symbol of the unholy union between media and politics, and he embodies the disconnect between what people know about their leaders (corruption, scandals, lies...) and what people see in their leaders (identification, pride, nationalism, populism...). A machine could never decipher Berlusconi's success with the Italian voter; it needs to learn what Italians see in him to get a better picture of the political reality.
Another obvious example is security, and especially the controversial practice of racial profiling. My brother used to work for EL AL airport security and was instructed to screen passengers by external characteristics as cues for normalcy or abnormalcy. Here again we already see technology stepping in to amplify our prejudice-based decision-making processes. Simply Google 'Project Hostile Intent' and you'll see that scientific research into algorithmic prejudice is already underway and has been for quite some time.
How does the system work?
The participant is presented with 4 video portraits and is requested to point at the one who looks the most normal of the 4. Meanwhile, a camera identifies the pointing gesture, records the participant's portrait, and analyzes the video (using face recognition algorithms among other technologies). The video portrait is then added to the database and is presented to the next participant to be selected as normal or not. The database saves the videos, the selections and other analytical metadata to develop its algorithmic model of social normalcy.
Any upcoming show or presentation of the TNM?
There are some in the pipeline, but none that I can share at this point. We are definitely looking forward to more opportunities to install and present TNM, as in every community it brings up different discussions about physical appearance, social normalcy and otherness. Beyond that, we want the system to challenge its model of prejudice through its encounters with different communities with different social values, biases and norms. Otherwise, it would be ignorant, and we wouldn't want that now, would we?
A new episode of #A.I.L - artists in laboratories, the weekly radio programme about art and science I present on Resonance FM, will be broadcast today, Tuesday 11th December, at 4:00 pm. There will be a repeat on Thursday 13th December at 10:30 pm. You can catch it online if you don't live in London.
This week I'm talking to Bruce Gilchrist, who together with Jo Joelson is the founder of London Fieldworks, an art practice that dialogues with science and technology.
Their work, usually developed in collaboration with other artists and with scientists, has investigated subjects as varied as caravan and nomadic culture, animal habitats, the impact of natural phenomena such as weather and light on human consciousness, and the possibility of sending human beings into hibernation.
The projects of London Fieldworks have led them to the Atlantic Rainforest, the Scottish Highlands and North East Greenland, but right now the duo has a show at the WORK gallery near King's Cross.
The exhibition, Null Object: Gustav Metzger Thinks About Nothing, has received much coverage in the press. The first reason is that London Fieldworks collaborated with Gustav Metzger, the avant-garde artist who launched the auto-destructive art movement back in 1959. The idea of auto-destructive art is, roughly speaking, to demolish art and reconfigure the act of destruction itself as an artwork. His work, however, is never empty nor gratuitous; most of his pieces deal with social and political issues: threats to the environment, nuclear weapons, Nazi Germany, capitalism, etc.
So it seemed almost logical that London Fieldworks would ask the artist to sit on a chair for 20 minutes thinking of nothing. The second reason for the vast media coverage is that while the artist was seated, readings were taken of the electrical activity inside his brain. The resulting electroencephalograms were then analyzed and turned into instructions for a factory robot to drill a void inside a block of stone. The result is a 50cm-high stone cube whose void represents what happens inside Metzger's brain when he is thinking about nothing.
In the show we'll talk about neuroscience, brainwaves, biofeedback technology and other technologies that are influencing the way we live today.
The exhibition Null Object: Gustav Metzger Thinks About Nothing is up at the WORK Gallery until 9 February. The book accompanying the show is Null Object. Gustav Metzger Thinks About Nothing (available on Amazon USA and UK).
Finally, if you're in London on Friday, Jo Joelson and Bruce Gilchrist from London Fieldworks will talk about their work at the symposium Digital Reflexes: Craft and Code in Art and Design.
It might appear that London doesn't spare much thought for art & technology. The capital doesn't host any institution specifically dedicated to art & technology, like FACT in Liverpool. Nor does it have a media art festival with an international reputation such as FutureEverything in Manchester, or the AV Festival in the North East of England.
But look closer, and you'll realize that there's no reason to despair: Furtherfield has recently moved to Finsbury Park and inaugurated its programme of exhibitions with Being Social (I should come back to it soon-ish), and Arts Catalyst does a remarkable job of facilitating encounters between artists and scientists. Speaking of which, you should check out Hexen 2.0 at the Science Museum.
But the best thing about London is that this week you can get your art & tech fix in several galleries. They don't label themselves or their artists as 'new media', and that's nothing I'm going to complain about.
So here's a handful of exhibitions I saw last week:
For his exhibition 3 Contributions to the theory of mass-aberrations in modern religions, Thomas Zipp has filled one of the rooms at Alison Jacques Gallery with oversized conical ear tubes. Place yourself at the back of the room, grab the earpieces and try to figure out what the sound might be or where it comes from.
I've been waiting for the exhibition Tim Lewis - Mechanisms at Flowers since I discovered his work at the Kinetica Art Fair. Most of his kinetic sculptures are worth the trip to Kingsland Road, but I particularly liked Jetsam, the kiwi bird quietly picking up bits of foam to build its nest. Entirely absorbed in its own world, it does not react to human interference. However, Jetsam is aware of its location within the circumference defined by the radius of the robotic arm, establishing its coordinates and stumbling across found materials, which it then relocates to a pre-defined spot.
The video art pioneer David Hall's new work, 1001 TV Sets (End Piece), comprises 1001 cathode ray tube TV sets, each tuned to one of the five analogue channels still being broadcast from London's Crystal Palace. I happened to visit the exhibition while soap operas and cooking shows were on. The ménage à trois unfolding on some of the screens was fairly amusing, but seeing bits of onions and bacon chopped on dozens of screens completely put me off lunch. The cacophony won't last, however: the end of the analogue signal in mid-April will gradually change the utterly deranged soundscape into a hiss of white noise.
Raqs Media Collective is at Frith Street Gallery with a show which, behind its serene and polished appearance, has political undertones, discussing measures, control, gestures, signals and politics.
One of the four works on show, Untold Intimacy of Digits, is an animated facsimile of the handprint of a Bengali peasant, Raj Konai. Nothing peculiar about a handprint... except that this is one of the earliest impressions of the human body taken by a person in power with the explicit purpose of using the trace to identify and verify a human subject.
The handprint was taken in 1858 under the orders of William Herschel, primarily as a means of enforcing contracts. The British officer then sent the print to Francis Galton, a London eugenicist and pioneer of identification technologies. The image of Raj Konai's hand became the cornerstone of the edifice of identification technology that would, in time, be associated with fingerprinting and various anthropometric operations. With the setting up of the infrastructure of the Unique Identification Database three years ago, the government of India seeks to repeat this attempt to know, map and control a turbulent population, this time through a database containing biometric and other data.
Wait! That's not all! There's also Oorwonde (Earwound). I haven't seen that one yet but it's up at the Usurp Art Gallery until 22 April.
Tim Lewis - Mechanisms remains open at Flowers until 14 April 2012.
David Hall: End Piece is on at Ambika P3, until 22 April.
Raqs Media Collective: Guesswork is at Frith Street Gallery, until 12 April 2012.
Thomas Zipp: 3 Contributions to the theory of mass-aberrations in modern religions is at Alison Jacques Gallery until 31 March.
Also on view in London until the end of the month: John Wood and Paul Harrison: Things That Happen at the Carroll / Fletcher gallery.
Last week, I was telling you about Le Cadavre Exquis, an interactive installation commissioned by Making Future Work, a Nottingham-based initiative that called for artists, designers and organisations based in the East Midlands to submit proposals responding to four distinct areas of practice: Co-creation / Online Space, Pervasive Gaming / Urban Screens, Re-imaging Redundant Systems and Live Cinema / 3D.
The Urban Immune System Research, one of the 4 winning projects, investigates parallel futures in the emergence of the 'smart city'. During their research, the Institute has produced a series of speculative prototypes that combine digital technology and biometrics: one of the devices 'functions as a social sixth sense', a second is a backpack mounted with 4 megaphones that shouts out geo-located tweets as you walk around, and a third attempts to give its wearer a sense of what it might feel like to walk through a 'data cloud' or a 'data meadow'.
The devices are the starting point of a series of user tests, performative research and public engagement events that seek to provoke debate and facilitate wider public discussion around potential urban futures, and our role in shaping them.
Just a few words of introduction about The Institute for Boundary Interactions before I proceed with our interview. IBI is a group of artists, designers, architects, technologists and creative producers who conduct practice-based research into the complex relationships between people, places and recent developments in science, technology and culture.
The name of your project is quite intriguing. Why did you call it Urban Immune System Research? How does the immune system of a city compare to the human body immune system, for example? What are the differences and similarities?
The Urban Immune System Research [UISR] project was the culmination of a two-day event we ran in December 2010 as part of our LAB commission for Sideshow2010. We set out to discuss the relationship between notions of 'intelligent' systems and principles of ecology. A whole raft of interesting and thought-provoking ideas emerged, and after some discussion they coalesced into the UISR project.
We found the immune system a fascinating and intriguing departure point because it demonstrates complex self-organising properties; what interests us is how this kind of system is understood outside of scientific circles, in the everyday and within the context of the city. There is a general awareness of these kinds of systems, but we discovered an absence in the everyday lexicon of terms with which to describe the kind of phenomena we explored in that workshop. So, in part, the name of the project asks questions about perceptions of intelligence and explores that gap between the science and the experience.
The interest in looking at urban space as an organism developed from thinking about this relationship between ecologies and intelligent systems. We looked at how these systems scale up, inspired by Geoffrey West's research into the similarities and differences between mammalian and urban scaling. Despite their very clear differences, urban ecologies correlate strongly with biological systems and, although made of different components, behave in similar ways.
This research quickly grew into a fascination with what happens at the juncture where human technology meets ecology: how personal electronic devices, micro-biology and nano-technology affect us at the macro level. We were interested in how this will manifest macroscopically, or ecologically if you will, and how this in turn will affect us individually as constituent parts of that urban ecology. We asked what form an Urban Immune System might take, and the devices we have developed under this title thus far are the first steps in our efforts to understand these ideas and their implications.
The devices all look for alternative ways of connecting individuals directly to their ecology (the urban organism) and letting them feel their place within it. These technologies operate to mediate our relationship to, and navigation through, physical, social and virtual space. This process of upgrading could be seen as the momentum leading us towards transhumanism, an imagined yet possible future where the augmented body replaces natural selection as an evolutionary process, in turn affecting the development of our 'ecological' surroundings.
This notion of transhumanism is another aspect that we were very interested to explore within this project, as it has a lot of synergy with the notion of the urban organism. From one perspective we look at the inorganic environment as an organic organism, and from another we look at the organic organism as a component within an inorganic machine.
With the Sticky Data device you were asking "What might it feel like to walk through a 'data cloud' or a 'data meadow'?" Did you find an answer to the question while you were testing the device? Is the experience of knowing how much data our body passes through every single second a stimulating one? Or is it rather stressful? Worrying? Overwhelming? Does it influence the way you navigate a city afterwards? Would you, for example, avoid a quiet street because you've discovered that, while it might look like a pleasant street empty of cars and passersby, its data traffic is too intense for your taste?
The most stimulating thing about being able to sense geo-located data is the thought that you are physically feeling traces of people's experiences in the same place where they happened. We think this gives an extra sense of connection to a place, even if only for a moment.
It's difficult to say exactly what that should feel like, as we're still playing with different haptic sensations, but the device certainly challenged our assumptions about certain areas. For example, in one test we found a really high density of data outside a bus depot, whereas across the street near a stadium, a seemingly much more social and 'eventful' place, there was comparatively little. So you definitely get a sense that the topography of a city's data layer can be quite different to that of its architectural space, but also an alternative sense of a place's social makeup. So, finding themselves in a less sociable environment, did the inhabitants of the bus depot turn to more digital forms of social interaction, while the stadium offered enough face-to-face social encounters that digital interaction was unnecessary?
The hope is definitely to ask people to question their relationship with space by providing a very different experience of navigating a city - the technologies that we use everyday are creating this digital topography, so how does this affect the urban organism and our interactions within it?
At the end of your description of the Sticky Data project, you explain that "As the user moves on, data seeds will be copied and dropped in new locations spreading them throughout the city or collected and cataloged by the device." Why did you feel the need to add this 'manipulation' of the data? Is it not going to make the 'datascape' too confusing?
This was an idea that came from discussions around the notion of the Urban Immune System. We talked about the idea that perhaps urban space already has an immune system of sorts that operates to keep the city within normative parameters. We discussed this redistribution as something that might function like an immunisation to bolster this existing immune system by disrupting it with non-normative behaviour to see how it responded.
We were interested in devices with parasitic (viral) properties, where the owner could engage in the production of data and in urban data configuration using the traces that others leave behind, just by wearing the device and walking. We leave behind traces of our electronic identities almost daily, and it's something we are not really aware of.
Also, if data is part of our physical world, then it in some way degrades or gets pasted over, like the posters in a metro station; the datascape is constantly shifting. We were going to be selective about what qualities of data we were looking for, so older data might not be as 'memetically healthy' and may not spread as far, or at all. We were interested in being deliberately disruptive, to see what might happen if we push messages into and across territories. So the Sticky Data project could sift through what is there in electronic space to find data that might benefit the wearer or be most disruptive.
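One way to picture the 'memetic health' idea: a dropped data seed decays with age, and older seeds are copied to new locations less often. This is a hedged sketch of the rule as I understand it; the half-life and the copy rule are my assumptions, not the project's actual implementation.

```python
import random

# Illustrative decay model: a seed's 'memetic health' halves every
# half_life hours, and health is used as the probability that the seed
# gets copied to the wearer's next location.

def memetic_health(age_hours, half_life=24.0):
    """Exponential decay: health is 1.0 when fresh, halving every half_life hours."""
    return 0.5 ** (age_hours / half_life)

def should_copy(age_hours, rng=random.random):
    """Copy a seed to a new location with probability equal to its health."""
    return rng() < memetic_health(age_hours)

memetic_health(0)    # fresh seed: always spreads
memetic_health(24)   # one day old: spreads half the time
memetic_health(72)   # three days old: rarely spreads
```

The exponential form is just one plausible choice; any monotonically decaying function would give the same qualitative behaviour of old messages fading out of the datascape.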
One of the objectives of UISR is to explore new ways to 'sense the social characteristics of a city as you would temperature, or air quality.' Do you have a better idea of Nottingham (or any other city where you have experimented with the devices) after having tested your prototypes through its streets? Do you see the city with another eye?
The devices have opened up new ways of experiencing the city, so we're pleased about that. When testing the Sticky Data device we discovered huge amounts of Twitter data in surprising places - like the bus depot on an unremarkable street that we mentioned before. So the device certainly challenges your perceptions of the social makeup of your environment and certain expectations or pre-judgments you may have made. Of course, it also has the ability to reinforce some prejudices. However, since you don't know what the messages are, you are left to read into their presence from what is physically around you, building the virtual narrative into the physical narrative of your surroundings.
In the tests we have carried out, we have felt some interesting things that have both challenged and reinforced our assumptions about particular locations. However, none of us has tested the device thoroughly across the city yet, as we are still fine-tuning it and have remained largely within familiar areas. Personally, I am looking forward to taking the device somewhere totally unfamiliar and finding out what a city you've never visited before feels like. If you have no presuppositions about a particular street, does the device make it easier to walk down, or give you a spidey-sense tingle that there will be something unpleasant around the corner? We just don't know yet.
Could you describe The LOST (Local Only Shared Telemetry) device? How does it work?
The idea with the LOST device is for it to function as a social sixth sense. It's a wireless device, kept in close contact with the body, that stores its owner's profile. It simply transmits and receives this profile data over relatively small distances. When it finds a signal similar to its own, the device communicates this to the owner by changing its temperature.
We wondered how a system similar to ants leaving pheromone trails might work in the social context of a city. In contrast to the omniscient Internet, this device doesn't use any kind of infrastructure, as it communicates only locally; the user has to physically travel to find new data rather than just clicking hyperlinks. The sensory feedback the wearer receives is specific to the time and place in which they find themselves.
It's a thought experiment: if everyone in an urban space wore such a device, you would develop a very granular sense of the social makeup of your immediate vicinity through the cumulative heating and cooling effect of everyone else's devices around you. In this way you could get a very clear feeling about whether a particular area is sympathetic to you as an individual or not. Kind of like blind man's buff, but instead of other players saying 'warmer' or 'colder', you simply feel it directly.
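The warmer/colder behaviour described above can be sketched as a simple similarity-to-temperature mapping. The profile representation (sets of interests) and the thresholds are invented for illustration; the real LOST device's profile format is not described.

```python
# Hypothetical sketch of LOST's feedback: compare two profiles, then map
# their similarity to a temperature offset the wearer would feel.

def profile_similarity(a, b):
    """Jaccard similarity between two profiles modelled as sets of traits."""
    total = len(a | b)
    return len(a & b) / total if total else 0.0

def temperature_offset(similarity, max_delta=5.0):
    """Warmer for similar profiles, cooler for dissimilar ones (±max_delta °C)."""
    return (similarity - 0.5) * 2 * max_delta

me = {"cycling", "jazz", "code"}
nearby = {"jazz", "code", "film"}
sim = profile_similarity(me, nearby)  # 2 shared traits out of 4 -> 0.5
```

With many wearers, each device would sum the offsets of every signal in range, which is where the cumulative 'social temperature' of a street would come from.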
As with the sticky data device, having no lingual or visual output, it interfaces at a somatic level - we're interested in what happens when social data is perceived physiologically rather than visually. By integrating these digital sensory devices into our normal bodily senses we can start to understand the possible positive and negative implications not just of existing systems but also our rapid progress towards transhumanism.
The notion of being a trans-human is very exciting, but until these technologies are developed we can never really know what their implications will be. Devices like the LOST device allow us to imagine how technological and biological integration might operate, and in turn perhaps to begin to understand their consequences, both individually and socially.
I'm afraid I forgot the name of the device you used for the public performance on the day of Making Future Collaboration Work. Beyond the fun and spectacular side of the performance, what are you trying to achieve with this piece?
That was the Town Crier. It's a backpack mounted with 4 megaphones that shouts out geo-located tweets as you walk around. The other two devices we made offer very subtle, private interactions, so we wanted to try something a little more confrontational.
The idea was to use the disparity between what can often be intended as very private or relatively anonymous reflections and the openness of the physical spaces they are associated with. Shouting out these bits of text wrenches them, quite forcibly, back into public view. On the other hand, the electronic voice puts all these statements on an even plane and democratizes them, giving a sense of the voice belonging to the place rather than to any individual. These statements are at different times nonsensical, funny, or timely and touching, but they all add to the texture of a place, offering a glimpse of the collective memory embedded within it.
Are you still working on the project? Do you plan to push the prototypes any further? Add new ones?
We see this as a long term research project so we are definitely still working on not only testing and improving current devices, but also using this process to develop our understanding of the data city, the technologically augmented human, and the ecology that they create.
We're currently developing the Town Crier into some kind of performance work and playing more with the Google Navigation voice as a means of exploring the way the network operates as a continuous landmark in our landscape.
The Sticky Data and LOST device projects are still very much works in progress. With Sticky Data we are going to continue experimenting with the way the data is sensed or output. The immediate question we want to address is the character of the sensation in relation to the density of data being sensed. Similarly, what types of data are being sensed, and what are the most appropriate modes of sensation for these different bits of data? With the LOST stone, we are going to play with what information is used to form the user profile, to find which provides the most effective functionality.
Once we've worked out the technical challenges with both of these devices, we want to produce enough of them for each of us to wear and live with for a significant period of time. With the LOST device, we may also use willing volunteers as testers to increase the area density.
We'd like to know what it would feel like if you put on a Sticky Data sleeve at the same time as your watch in the morning and wore it wherever you went for a month. Is it an irritation? Will you get muscle spasms, or will you forget you've got it on most of the time and only notice more drastic or uncharacteristic changes?
After this we hope to have a better idea of how we can develop the project further, fine tuning these devices and perhaps developing new ones. To put it in techno-garb, perhaps create the Urban Immune System 1.0 rather than its current beta version.
It is perhaps worth making clear that the focus will remain on provoking speculation on what the possible social implications of developing this sort of technology might be, rather than trying to create a cure for urban illness. Technology is exciting and interesting, however the implications of innovations are rarely visible until you have the grace of hindsight. One can only speculate how developments might or might not change the world, but that process of speculation is really interesting and tells us something about our current understanding of our society and technological culture.
Previously: Le Cadavre Exquis.
While the reliability of ballistic, bite-mark and even fingerprint analysis can sometimes be questioned in courtrooms, genetic evidence is still widely regarded as the forensic gold standard.
Remember the deep embarrassment of European police when they found out that a mysterious serial killer known as The Woman Without a Face had in fact never existed? The only clues the criminal had left behind at 40 different crime scenes were DNA traces. These were collected on cotton swabs and supplied to the police in a number of European countries. The police later discovered that the DNA had very probably been left by a woman working for the German medical company supplying the swabs, who had inadvertently contaminated them.
There's more in the case against the fail-proof quality of DNA evidence. Three years ago, a crime lab analyst found out that DNA "matches" are not always as trustworthy as one might believe. While a person's genetic makeup is unique, his or her genetic profile -- just a tiny sliver of the full genome -- may not be. Siblings often share genetic markers at several locations, and even unrelated people can share some by coincidence.
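The arithmetic behind coincidental matches is worth spelling out. As a back-of-envelope illustration (the 10% per-locus frequency is an invented round number, not a forensic constant): if the compared loci are assumed independent, the chance of an unrelated person matching at all of them shrinks exponentially with the number of loci, which is exactly why a partial profile, with only a few comparable loci, is far less trustworthy than a full one.

```python
# Toy model of random match probability: assume each compared locus has
# an independent chance per_locus_freq of matching an unrelated person.

def random_match_probability(loci_compared, per_locus_freq=0.1):
    """Probability an unrelated person matches at every compared locus."""
    return per_locus_freq ** loci_compared

full_profile = random_match_probability(13)   # many loci: vanishingly small
partial_profile = random_match_probability(5) # few loci: far more plausible
```

Real forensic calculations use measured allele frequencies and corrections for relatedness, but the qualitative point stands: the fewer the markers, the weaker the "match".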
And in Israel, scientists have demonstrated that DNA evidence can be fabricated. "You can just engineer a crime scene," said Dan Frumkin, lead author of a paper published in 2009. "Any biology undergraduate could perform this."
Paul Vanouse is doing just that with his latest work, the Suspect Inversion Center. Together with his assistant Kerry Sheehan, the biomedia artist has set up an operational laboratory at the Ernst Schering Foundation in Berlin. Using equipment anyone can buy on the internet, as well as Vanouse's own DNA, they (re)create, in front of the public, identical "genetic fingerprints" of criminals and celebrities.
The solo exhibition features two other biological artworks by the American artist: a series of Latent Figure Protocol lightboxes and Relative Velocity Inscription Device, a cynical molecular race reflecting on biologically legitimized racism, in which bits of DNA, instead of bodies, compete by testing their "genetic fitness". The work uses DNA samples from Vanouse's family and directly references Charles Davenport's book Race Crossing in Jamaica (1929), which attempted to provide statistical evidence for biological and cultural degradation following interbreeding between white and black populations.
The press release for the exhibition says:
Vanouse's biotechnological installations do not only challenge the codes and images of contemporary knowledge production but also question the methods behind (natural) scientific findings in general: What do uncritically accepted commonplace catchwords such as "genetic fingerprint" conceal? To what extent does the technical construction of alleged naturalness notarize clichés and prejudices? Vanouse diverts biotechnologies and scientific imaging techniques from their intended uses, and amalgamates auratic iconography with technical images. Employing gel electrophoresis as an artistic medium, he intentionally applies a method that bears analogies to photography: while photography allowed viewers to draw seemingly objective conclusions about human qualities based on physiognomic characteristics of the body, today, increasingly questionable social conclusions are derived from ontologized body fragments such as genes.
Curated by Jens Hauser, Paul Vanouse: Fingerprints... remains open at the Ernst Schering Foundation until March 26, 2011. The foundation, which aims to promote science and art, was showing the wonderful work of Agnes Meyer-Brandis last year: Cloud Core Scanner - an artistic experiment in zero gravity.
Let's pretend it's November 2010 and I'm writing a perfectly timely report from the STRP festival in Eindhoven. Well, I did try at the time (cf. The Physiognomic Scrutinizer and Pattern Recognition - Art for animals) but that was very far from doing justice to the programme. STRP is one ambitious art & tech affair, which most of the taxi drivers who dropped me off at the old Klokgebouw venue unceremoniously called 'The Party'. STRP does indeed offer one hell of a 10-day-long party:
The last edition of STRP attracted almost 30,000 visitors. They came for the concerts and parties of course, but also for the performances, exhibitions, screenings, live discussions, conferences, games and workshops.
The exhibition was particularly exciting with its mix of low tech and high tech. Zilvinas Kempinas' Double O, which I had so far seen only at contemporary art fairs, is made of just two fans and a strip of recording tape. You switch on the fans and hey presto! you get a sculpture that hovers between sheer poetry and vintage tech. At the other end of the spectrum were works such as Acclair's Art Valuation Service (AVS), which monitors your brain activity as you visit STRP's art exhibition.
For the first time since its creation, STRP dedicated part of its enormous exhibition space to a survey of the work of a young artist. They had the magnificent idea to choose Lawrence Malstaf, an ex-theatre set designer who has been quietly building his artistic career since the mid-1990s. The international new media art circuit discovered Malstaf's work a couple of years ago and his installations have been gracing the likes of ZKM, Vooruit and the Japan Media Arts Festival ever since.
Malstaf's most puzzling and iconic works were there. From the now world famous vacuum-packing experience provided by Shrink....
... to the Ars Electronica-anointed Nemo Observatorium:
And then there were pieces that are equally noteworthy but might not have attained the same media attention just yet, such as a belt for navigating invisible architecture, the moving labyrinth of Nevel...
... a duo of conveyor belts running very slowly in opposite directions. Rolls and wheels hidden underneath add a tactile dimension to the experience.
I was both attracted and horrified by Shaft, which has you lying with your face under a transparent shaft where plates hover and dance until they collide and break on the bulletproof glass. Just. Above. Your nose.
More goodies awaited in the other exhibition rooms:
Lyndsey Housden & Yoko Seyama's Transient Landscapes is a performance installation that constructs and re-constructs the architecture of a room. On entering this field of vertical white lines, performers as well as visitors can shape the space into patterns and images reminiscent of cityscapes and landscapes.
I felt immensely sorry for the poor electric fish brought from the Amazon River to be squeezed into a tank, endlessly photographed by curious visitors, and made to form a choir based on their sonified electric fields.
Colin Ponthot's Monster Happy Tape is a blob of used audio tape hanging from the ceiling. By grabbing one of the yellow cables with magnetic heads at their extremities, visitors could play back sounds that might have been recorded on the tape. It was a particular success with the kids, who probably needed to have it explained to them what a tape and a Walkman are (or used to be), but also how physical sound can be.
There was also a big plush cat in the adjacent room: