Having previously given life to a robot that enables plants to move around as they please, Ivan Henriques has collaborated with scientists from the Vrije Universiteit Amsterdam to develop the prototype of an autonomous bio-machine which harvests energy from photosynthetic organisms commonly found in ponds, canals, rivers and the sea.
The Symbiotic Machine uses the energy collected from microorganisms to move around in search of more photosynthetic organisms, which it then collects and processes in turn.
The Symbiotic Machine is currently spending two months in an aquarium in the Glass House in Amstelpark, Amsterdam.
Short conversation with the artist:
Hi Ivan! How does Symbiotic Machine relate to Jurema Action Plant? Is this a continuation of that previous work? Did you learn something from JAP that you are applying to the Symbiotic Machine? Or is this a completely different exploration?
The research that started with Jurema Action Plant led to the development of the Symbiotic Machine (SM). I have created a range of works that explore concepts such as the future (reinvention) of the environment; the acceleration of techno-scientific mutations; the moment when nature becomes culture; the use of natural resources; and where these hybrids of nature and technology will take place in the near future, reshaping and redesigning our tools to amalgamate with and be more coherent with the natural environment (these concepts were discussed in the e-book Oritur). When JAP was being exhibited I noticed that, as the interaction between the person and the plant enables the machine to move, people envisioned a living entity that was responding to them - i.e. "It likes me!" when JAP was moving towards the person, and "It doesn't like me!" when it was moving away from the person touching it. That is why I gave the Action Plant a first name: Jurema.
In the past years I have been creating machines that operate within biological time, combining different energy sources. In JAP, the variation of electrical signals inside the plant changes when someone touches it; the Symbiotic Machine is a machine that performs photosynthesis to generate energy for itself, like a plant. In JAP the machine reads electrical signals, and in SM the machine performs photosynthesis in order to produce these electrical signals. It is further research into plant electricity and the development of a hybrid entity.
Could you talk to us about the collaboration with scientists from the Vrije Universiteit Amsterdam? How did you start working with each other? And what was the working process like? Was it just you setting up instructions and telling scientists what to do? Or was it a more hands-on experience?
When I first met Raoul Frese, a scientist at the Biophysics Lab of VU Amsterdam (The Netherlands), I wanted to develop JAP further. I was very inspired by his talk about photosynthesis at a symposium at the former NIMK in Amsterdam. Later we made an appointment to discuss a possible collaboration. To develop the Symbiotic Machine we had several meetings in my studio and in his lab. Soon, Vincent Friebe, a PhD student from the Biophysics lab, also joined the team.
In this project I wanted to create an autonomous system, one that is able to live by itself, as most living entities do. For me it is very poetic to create a hybrid living system that can move to search for its own energy source, process it, and have the energy to carry out its own life cycle.
We had lots of hands-on experience, exchanging ideas and techniques. The project started with the concept and the technology we could use, but this beta version was designed according to the necessities and mechanisms the bio-machine required. The project also involved collaborations with Michiel van Overbeek, who developed the hard- and software, and the Mechanical Engineering lab from CEFET/RJ (Technological University of Rio de Janeiro, Brazil).
What are the photosynthetic organisms that the machine harvests? Could you give a few examples? What makes them interesting for the scientists you were working with?
For this prototype we focused on a specific alga: Spirogyra. It is a genus of filamentous green algae that can be found in freshwater such as canals and ponds. Spirogyra grows under water, but when there is enough sunlight and warmth it produces large amounts of oxygen, with bubbles adhering between the tangled filaments. The filamentous masses come to the surface and become visible as slimy green mats.
I asked Raoul Frese why he is interested in photosynthetic organisms: "Scientists are researching photosynthesis and photosynthetic organisms to learn how processes occur from the nanoscale and femtoseconds up to the scale of the organism or ecosystem over days and years. It is an excellent example of how a life process is interconnected from the molecules to the organism to interrelated species. For biophysicists, the process exemplifies molecular interactions upon light absorption, energy transfer, and electron and proton transfers. Such processes are researched with the entire experimental physics toolbox and described by theories such as thermodynamics and quantum mechanics. From a technological point of view, we can learn from the process how efficient solar energy conversion can take place, especially from the primary, light-dependent reactions, and how light absorption can result in the creation of a fuel (and not only electricity)."
Why were you interested in photosynthetic organisms, and in creating a machine that would feed on them and function a bit like them?
My interest in photosynthetic organisms started when I wanted to develop JAP further, in a way that a hybrid organism could harvest its own energy to live like a plant. In April 2013, during a residency in NY, I had the opportunity to research these microorganisms when I created the installation Microscopic Chamber #1, using a laser pointer to magnify them so that people could see, with the naked eye, different kinds of microorganisms swimming, projected on a wall. These living organisms were collected at Belmar beach, in New Jersey, and were displayed in the installation in an aquarium where I cultivated them.
The alga Spirogyra is very common in The Netherlands. The choice of the organisms presented in my works is based on the concept, their own technology and the location of the specimen. One of the ideas is to adapt the mechanics and electrical system of the machine to be capable of functioning with the millivoltages that plants, animals and we ourselves have - to create an autonomous system that could operate on such a small scale of electricity. After the residency I had several meetings with scientists from VU Amsterdam where I had the opportunity to research the Spirogyra and other photosynthetic creatures further.
In this research into plants and machines I want to find a more integrated way for living organisms and machines to coexist, and to inspire people towards a possible different future.
Could you explain to us the shape of the floating mobile robotic structure? It looks much more 'organic' than typically robotic. Could you describe the various elements that constitute the robotic structure and their roles?
The machine is designed to communicate with the environment. For this first model the machine is planned to process the alga Spirogyra to generate electricity. As this specimen is a filamentous floating organism, the robot has to be in water, floating together with the algae.
The structure is composed of an ellipsoid of revolution with three conical arms. Attached to the arms are tentacles equipped with sensors. The structure is transparent to catch sunlight at any angle. The choice of an ellipsoid of revolution creates more surface area for the electrodes (photocells) and directs more of the sun's rays onto the photocells when the light reflects off the golden electrodes - using more sunlight as a consequence. The tentacles let the robot extend its senses to search for algae. The arms create closed chambers that house the electronics.
The machine has a complete digestive system: mouth, stomach and anus. See the video:
The mouth/anus is composed of a motor, an endless screw (worm) and a pepper grinder, aligned and connected by a single axis and sealed inside a transparent cylinder, like a jellyfish. This cylinder has a liquid inlet/outlet (for water and Spirogyra algae) placed at the end of the endless screw. The endless screw has the important function of pumping liquid in and out and of giving the machine a small amount of propulsion.
In order to "hack" the Spirogyra's photosynthesis and apply it as an energy source, the algae's cell membrane has to be broken. The pepper grinder connected at the end of the endless screw grinds the algae, breaking the cell membrane and releasing micro particles.
To the naked eye these micro particles look like a "green juice", which is flushed inside the machine: the stomach.
A tube carrying the ground algae runs from the end of the mouth through the stomach, inside the ellipsoid of revolution. This tube is fastened to a 2-way valve placed in the center of the spherical shape. Inside the ellipsoid of revolution there is another bowl, just one centimeter smaller, aligned at the center. Placing this bowl inside creates two chambers: 1] the space between the outer skin and the bowl, and 2] the inside of the smaller bowl. In chamber 1 the photocells are placed in parallel and in series. Each photocell is composed of a gold-covered plate and a spacer in the middle covered with a copper mesh. This setup allows the "green juice" to rest between the gold and the copper.
When light is shed on the ground algae, electrons flow to one of these metals, like in a lemon battery. As all the photocells are connected, with the help of the LTC 3108 Energy Harvester chip it is possible to store these millivoltages in two AA rechargeable batteries. A life cycle of functions was devised in order to program the machine and activate independent mechanical parts of the stomach: it has to eat, move, sunbathe, rest, search for food and wash itself, in a loop.
The 2-way valve mentioned above is connected as follows: valve 1 is hooked up to chamber 1 and valve 2 to chamber 2. When the stomach is working, the machine is told to open valve 1. The algae flow into this chamber and the machine uses a light sensor to move towards where there is more light in order to make photosynthesis. After 10 minutes of sunbathing (photosynthesis), the machine has to clean its stomach - and the photocells - to be able to eat again. Water is sucked in again through the mouth and, via the same valve used for the algae, more water is pumped into chamber 1 so that the liquid overflows into chamber 2. The liquid now in chamber 2 is flushed out by the motor turning the endless screw with valve 2 open. Fixed on the edge of the structure opposite the mouth, an underwater pump connected by a vertical axis to a servo powers the movement of the structure, making it possible to steer at 0, 45 and minus 45 degrees. The movement programmed for this machine was written with duration/time, space and energy in mind.
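As a thought experiment, the life cycle described above - eat, sunbathe, wash, rest, search, in a loop - could be sketched as a fixed schedule of phases. Only the 10-minute sunbathing step comes from the interview; the other durations and the `perform_phase` callback below are invented for illustration:

```python
# Hypothetical sketch of the Symbiotic Machine's programmed life cycle.
# Only the 10-minute "sunbathe" duration is from the interview; the other
# durations and the perform_phase callback are illustrative assumptions.
LIFE_CYCLE = [
    ("eat",      120),  # grind algae and flush the "green juice" into chamber 1
    ("sunbathe", 600),  # steer toward light so photosynthesis charges the cells
    ("wash",      60),  # pump water through chamber 1 to rinse the photocells
    ("rest",     180),  # idle to conserve the harvested charge
    ("search",   240),  # use the tentacle sensors to look for more Spirogyra
]

def run_cycle(perform_phase, cycles=1):
    """Run the fixed life cycle, calling perform_phase(name, seconds) per phase."""
    log = []
    for _ in range(cycles):
        for name, seconds in LIFE_CYCLE:
            perform_phase(name, seconds)
            log.append(name)
    return log
```

The point of the loop structure is that each phase blocks the next: the machine cannot eat again before it has washed its stomach, just as described above.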
What is next for the Symbiotic Machine and for you?
This version of the Symbiotic Machine still has to be improved and I would like to continue the research and develop this bio-machine further. I want to keep working to improve what was done. The exhibition runs from 9 March until 27 April at the Glazen Huis in Amstelpark, Amsterdam.
Previously by the same artist: Jurema Action Plant.
I spent the weekend in Eindhoven for Age of Wonder, a festival which turned out to be even more exciting and engaging than its name promised. I'll get back with images and posts later but right now i felt like blogging my notes from Nick Bostrom's keynote about Superintelligence. Bostrom is a Professor in the Faculty of Philosophy at Oxford University and the director of The Future of Humanity Institute. He talked about the ultra-fast pace of innovation, hazardous future technologies, and artificial intelligence that will one day surpass that of human beings and might even take over our future.
Bostrom is worried about the way humanity is rushing forward. The time between having an idea and developing it is getting increasingly shorter. This gives less space to reflect on the safety of innovation. Bostrom believes that humans cannot see the existential danger this entails. If the future is a place where we really want to live, then we will have to think in different and better-targeted ways about ourselves and about technological developments.
Bostrom's talk started on a high and slightly worrying note with a few words on existential risk. An existential risk is one that endangers the survival of intelligent life on Earth or threatens to drastically curtail our potential for development. So far, humanity has survived the worst natural and man-made catastrophes (genocide, tsunamis, nuclear explosions, etc.) but an existential catastrophe would be so lethal that it would ruin the entire future of mankind. An analogy on an individual scale would be facing a life sentence in prison, or a coma you never wake up from.
So far we've survived all natural catastrophes, but we need to beware of anthropogenic risks. New technologies haven't yet managed to spread doom. Nuclear weapons, for example, are very destructive but they are also very difficult to make. Now imagine if a destructive technology were easy to make in your garage. It could end up in the hands of a lunatic who plots the end of human civilization.
Potentially hazardous future technologies such as machine intelligence, synthetic biology, molecular technology, totalitarianism-enabling technologies, geoengineering, human modification, etc. had not been invented 100 years ago. Imagine what might emerge within the next 100 years.
So if you care about the future of human civilization and if your goal is to do some good, you need to look at how to reduce existential risk. You would need to influence when and by whom technologies can be developed. You would need to speed up the development of 'good' technologies and retard the development of others such as designer pathogens for example.
How does this play out with a rise of machine intelligence which could result in Super Intelligence?
Machine intelligence will one day radically surpass biological intelligence (even intelligence enhanced through genetic selection, for example). Experts find it difficult to agree on when exactly machines will reach the level of human intelligence; they estimate a 90% probability that human-level artificial intelligence will have arisen by around 2075. Once machine intelligence roughly matches humans' in general intelligence, a machine intelligence takeoff could take place extremely fast.
But how can you control a Super Intelligent machine? What will happen when we develop something that radically surpasses our intelligence and might have the capability to shape our future? Any plan we might have to control the super intelligence will probably be easily thwarted by it. Is it possible to have a gatekeeper that/who will make sure the artificial intelligence does not do anything detrimental to us? The Super Intelligence would probably be capable of figuring out how to escape any confinement we might impose upon it. It might even kill us to prevent us from interfering with its own plans. We should also think about the ultimate goal a Super Intelligence might have. What if its goal is to dedicate all the resources of the universe to producing as many paper clips as possible?
How can we build an artificial Super Intelligence with human-friendly values? How can we control it and avoid some existential risks that might arise down the road?
The forms of artificial intelligence we are familiar with each solve one problem: speech recognition, face recognition, route-finding software, spam filters, search engines, etc. A general artificial intelligence would be able to carry out a variety of challenges and goals. How can we make sure that it learns humanly meaningful values?
The new episode of #A.I.L - artists in laboratories, the weekly radio programme about art and science i present on Resonance104.4fm, London's favourite radio art station, is aired tomorrow Wednesday afternoon at 4pm.
My guest in the studio will be James Auger, a designer, researcher and lecturer operating at the intersection of art and industrial design. He is a tutor at the RCA: Design Interactions and visiting professor at the Haute école d'art et de design (HEAD) in Geneva. Together with Jimmy Loizeau, James runs Auger-Loizeau, a design studio that explores what it means to exist in a technology rich environment both today and in the near future.
In this episode we're going to talk about James' PhD thesis Why Robots?, which uses the robot as a vehicle to study how technology is domesticated. But the designer will also discuss preferable futures and electronic devices that know more about your partner's emotional state than you do.
The radio show will be aired this Wednesday 12 March at 16:00, London time. Early risers can catch the repeat next Tuesday at 6.30 am. If you don't live in London, you can listen to the online stream or wait till we upload the episodes on soundcloud one day.
Check out also James Auger's essay in the Journal of Human-Robot Interaction: Living With Robots: A Speculative Design Approach.
I discovered the work of Addie Wagenknecht a few months ago while visiting The Digital Now exhibition in Brussels. The young artist was showing Pussy Drones gifs. I didn't fully get what they were about at first but the more i looked at the porno-grotesque-aggressive images in the exhibition space that day, the more i thought she was a talent to follow. And indeed, the rest of her portfolio didn't disappoint. Addie made a painting using a drone as a brush, enrolled a stern industrial robot to rock a baby cradle, asked online sexcam performers to replicate classical paintings, and built a chandelier using CCTV cameras.
Addie Wagenknecht studied photography, traveled the world, completed a Masters at New York University as a Wasserman Scholar and right after that got a fellowship at Eyebeam Atelier, CultureLabUK and more recently at HyperWerk Institute for Post-Industrial Design and Carnegie Mellon University under Golan Levin at The Frank-Ratchye STUDIO for Creative Inquiry.
Now that the long, idle Summer hiatus in which i published roughly 0.7 posts per week is over, it's back to business as usual and i'm glad that Addie Wagenknecht has accepted to be the first artist i interview for the new 'season'.
Hi Addie! While reading the description of The Optimization of Parenthood (Part 1 and Part 2), i realized that i almost never encounter artworks dealing with parenthood in media art. Or, because the accompanying texts mostly talk about motherhood, should i say feminism? Do you see these two works as new ways of exploring and discussing feminism?
Theorists wrote and said this series is celebrating the death of the mother. It's not objective, it's subjective. At the time we developed this piece I spent a lot of time trying to decide on a title: "The Optimization of Parenthood" vs. "The Optimization of Motherhood" because those are very different in my experience. We were doing a residency at The STUDIO for Creative Inquiry at Carnegie Mellon University. At the time I was pregnant and wanted to examine this false sense of balance between parenting and career in America. How the process is transparent but the structure to function is a secret. The formula is often behind the closed door of people's homes (and psychiatrist's office). I found that being critical of the choice to be a parent, as a parent, is taboo. More so, being critical of the experience as a mother is censored socially if not outright denied by everyone around me. I watched the unraveling of the carefully crafted facade of women and family v2.0.
I think women of my generation were raised to believe that we can have it all, but that theory had never really been tested, our mothers gave us something impossible. At the same time, I was playing with materiality and preconceived notions of perfection within my own work. I wanted to let go of that in a playful way. I never wanted to be responsible for feminism, yet this particular notion made sense and I want to have the poetic liberty to give that away to someone else who really wants it.
The charm of the OfP rocking robotic arm is that it is purely industrial. What made you decide to use this orange factory-like robotic arm rather than a cute robot or even an almost invisible unobtrusive robotic system?
I wanted to highlight the repetitive nature of parenting in a way that was relatable in terms of gestural motion, but foreign in its implementation. The blatantly robotic arm evokes this idea of industry - mirroring the precise, reactive nature that parenting often demands. I wanted the arm to suggest this idea of impossible flawless perfection.
You recently wore the Anonymity accessory for a performance in Vienna. Could you tell us about the performance? How it unfolded, who participated in it, how passersby reacted to the black bars, etc.
Anonymity as a concept is addictive - especially when you're living in a major metropolitan city like New York. That is why projects like Pirate Bay and Tor are some of the most successful works of our time. They have a large-scale participatory aspect, allowing people freedom and a chance to challenge outdated ideas around copyright. It is a one-to-many system; no one person controls it, and there is so much beauty in that. I think we are reaching a point, if we haven't already, where anonymity is imperative to creativity.
The performance in Vienna was all about encouraging people to openly claim anonymity, as a public statement. While living in New York, I started to become aware that we were constantly under surveillance; I was being watched by security cameras, asked to show my ID to get into a building, etc. The pervasiveness of surveillance made anonymity more desirable. Surveillance has become so ubiquitous that it's become comfortable. We do not think twice or challenge it. We have become such a surveillance-saturated society that in some regards we expect it. Anonymity is becoming a solution for some to protect destabilized identities, revolutionaries, and hackers. It is changing the way we define the face. Masks in public spaces are beginning to be outlawed. I think the goal has shifted: we no longer want to become an individual, but to become anonymous. People who are able to maintain anonymity have a sort of tense, mystical quality, and we wanted to explore this in a literal, physical piece within public space.
The large-scale performance was commissioned by Bogomir Doringer for the "Faceless" exhibition at MuseumsQuartier. We provided 1,200 museum attendees with limited edition, wearable black bars that allow for preemptive non-disclosure. As they walked through the courtyard, a live feed was projected into the exhibition space. It intentionally occupied the line of criticism and play, allowing the surveilled to become the surveillance.
I think Broken_links is the most irritating work i've seen recently. I keep coming back to that page - and feeling utterly silly in the process - in the hope that the images will eventually appear on the screen. I just can't help it. Did you realize that a work so simple in appearance would create such an emotional response?
[laughs] Yes, that's one of the goals. It's looking at those instances when an algorithm, code, or search engine fails to properly interpret code. Essentially, broken_links is about capturing points of failure and glitches in their most literal form. The Internet is so volatile, yet at the same time it's completely cached and highly functional. Images, websites, and texts, are removed all the time without our knowledge as the user. Google, for instance, plays a powerful role because they're able to manipulate the availability of information. They show us what they want us to see, not necessarily what we searched for. So, I wanted to take the information bias, that false sense of trust, and run with it.
I was also very interested in Black Hawk Paint, especially because I saw that you worked on it in 2008 and, at least in Europe, it's only more recently that artists and curators have started to work on the drone topic. Do you think that the work of artists who engage with UAV technology has an impact on how the public understands the issue?
Yes. I wanted to re-appropriate the drone technology as a tool for creativity, expanding the way people consider their potential use. I implemented a computer vision tracking system, and used the drone as a brush. The resulting images are abstract, and I consider the process of making the piece as important as the finished work.
I see Kyle McDonald's "Liberator Variations" he developed for FAT lab working in a parallel way. He noticed people's fear surrounding the Liberator and his response was to produce a series of remixed versions of the original file, transforming the 3D printed gun into a version of the OpenGL teapot, among other things. He wrote: "There is only fear when we feel disempowered, when we lack understanding, when we are censored, when we lack input and are instead being controlled."
You're a member of F.A.T. Lab. Can you tell us how you got involved in the group and how you fit into it?
I suppose I made enough provocations at some point to get an invite. [laughs] I also knew Evan, James, Steve and Geraldine quite well because we were more or less at Eyebeam together around the same time. I consider F.A.T. my friends and family. It's an honor to be part of the lab. They are all extremely talented and they've been an inspiration and constant supporters of my practice. It's really humbling.
Any upcoming project, exhibition, area of investigation you'd like to share with us?
I'm taking part in the first-ever digital art auction at Phillips NYC on October 10, where the piece "Asymmetric Love #2" will be auctioned. It is a chandelier made of steel, CCTV cameras, and internet cables. In November, at MU in Eindhoven, is F.A.T. GOLD Europe, a traveling retrospective of F.A.T. Lab's work that originated at Eyebeam Art + Technology Center in April. There will be a few new pieces in that exhibition which are forthcoming. Both of these are curated by Lindsay Howard. In early 2014 comes the exhibition "Blackmarkt" at 319 Scholes. The pieces for this exhibition are remixed from items bought off the Silk Road/deep web. We are working on a series of jewelry made from drugs and bootleg items, which is a new space for me. The pieces look at how perception fulfills value, and the relationship between originality, copies and demand. Finally, in June will be my first solo exhibition in Europe, at RUA RED Dublin, curated by Nora O Murchú.
Sofian Audry, Stephen Kelly and Samuel St-Aubin started working on Vessels in 2010. The aquatic installation is a fleet of 50 autonomous robots that gradually build up their own micro system by interacting with each other and by collecting and interpreting data related to water and air quality, temperature, ambient light, sound, etc.
However, the robots do not simply process scientific readings, they also communicate through behaviours and interactions. For example, an increase in temperature sensed by one agent may cause it to act more aggressively, with erratic or irrational (random) movements. This change in behaviour will influence its neighbouring agents, who may respond with relative changes to their own behaviour. These agents will in turn influence their neighbours, thus creating a ripple effect of actions.
The ecosystem is thus generated over time by the robots themselves and by their particular environment.
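That ripple of influence can be illustrated with a toy diffusion model. This is not the artists' code: the "agitation" variable, the coupling factor and the three-robot topology are all invented here to show how one agent's sensor reading can propagate through its neighbours:

```python
# Illustrative sketch (not the Vessels code) of the "ripple effect":
# one robot's temperature reading raises its agitation, and each update
# nudges every robot toward the mean agitation of its neighbours.
def step(agitation, neighbours, coupling=0.5):
    """One diffusion step over {agent: level} given {agent: [neighbour ids]}."""
    new = {}
    for agent, level in agitation.items():
        mean = sum(agitation[n] for n in neighbours[agent]) / len(neighbours[agent])
        new[agent] = level + coupling * (mean - level)
    return new

# Three robots in a line; robot 0 senses a temperature spike.
agitation = {0: 1.0, 1: 0.0, 2: 0.0}
neighbours = {0: [1], 1: [0, 2], 2: [1]}
agitation = step(agitation, neighbours)  # robot 1 is now influenced
agitation = step(agitation, neighbours)  # the change reaches robot 2
```

After two steps the disturbance that started at robot 0 has reached robot 2, which never sensed the temperature change itself - the "ripple effect of actions" described above.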
Samuel, Sofian and Stephen have just spent the Summer improving and researching Vessels as part of their artistic residency in Platform 0 at LABoral Centro de Arte.
Since i was curious about those luminous little robots in a white swimming pool, i asked the artists to talk to us about the work:
Hi Samuel, Sofian and Stephen! I read on Laboral's blog that your artistic residency is based on the Vessels project. So what will the residency consist of exactly? Are you going to make Vessels more sophisticated? Or build on it to make an entirely different project? Or investigate another aspect of the installation?
We started the Vessels project in 2010 at the Center for Art Tapes, where the concept and technical structure were initiated within two fairly brief 2-week residencies. Since then, we've spent time fine-tuning various aspects of the work to better withstand real-world environments. For example, we had to abandon our original design with air propulsion because it was too energy-consuming and would easily catch in the wind. The version we are currently working on is the third prototype and works with water propulsion. We validated the final design of the electronic boards last winter during a short residency at the Perte de Signal art center.
The goal of the LABoral residency was to assemble the first large group of robots with our new technical improvements, to finalize the material/aesthetic design and to make a first working version of the software. Because we ran into all kinds of technical problems, we decided to put less effort on the material design and more on the software. Thus we spent a large portion of our time at LABoral developing the behaviour of the robot collective.
Vessels is a fleet of 50 aquatic vehicles. That is a lot of robots. So first of all, why did you need to build so many robots? And how big are they exactly? i suspect that they will also need a large area to float around...
When we did our presentation in Halifax, we had about a dozen robots and we felt it was hard for them to occupy an outdoor space, given their relatively small size (about 20-25 cm in diameter, depending on the version). They looked kind of lost. By scaling up their population, we believe we can give the installation a real presence in large natural environments such as lakes and ponds.
Also, we are interested in the kind of behaviors that can emerge from the interaction between a massive group of autonomous robots, which is something that has not been fully explored in the art world. A lot of work in robotic art has been done on singular robots or small assemblies of big robots but not so much with large groups of small, autonomous robotic agents. In the past decade, a lot of research in the scientific world has been carried out involving swarm robotics and multi-agent collaboration, with encouraging results. Behavioural diversity is something we're interested in exploring with Vessels, and more robots means more potential diversity within the 'population'.
Finally, we felt like 50 robots would give us more flexibility. For instance, we could show two groups of 25 robots at the same time in two different spots in a city, or even in different cities. Because the robots will react to their immediate environment, they will behave differently in the different contexts they are put in.
Is Vessels a group of identical robots? Do they all start with the same set of sensors?
Almost. All the robots have more or less the same "body". They each have a pair of distance sensors, a compass, a directional IR communication system, a pair of underwater pumps for propulsion, a set of LEDs, an onboard real-time clock and some external flash memory for data logging. They will also be using the exact same software.
Their only difference lies in the fact that each bot will eventually be equipped with a unique "environmental" sensor. Each robot has an external, pluggable "card" that we designed to take care of sound production and accommodate this unique sensor. For instance, one robot might measure air temperature, while another will know about air pressure, another about the pH of the water, and so on. This sensor will give the robot its "personality", so to speak. Each robot will react to its own sensor in a specific way, and its reaction will influence the actions of other robots. The idea is that by putting the same group of robots in different settings (i.e. with different environmental conditions), they will produce a distinctive collective behavior.
But we're still a long way from that! In the version that we produced at LABoral, we don't yet have these environmental sensors. We focused more on establishing the software framework that will enable individual personalities AND group behaviors.
Each aquatic vehicle learns and develops a behavior through Reinforcement Learning (RL): "Over an extended process of trial and error, RL makes it possible for computers to do things that they were not explicitly programmed to do."
If i understood correctly, you do not have complete control over what the robots learn and how they evolve. So have they surprised you in the way they learn, interact and behave?
We've just begun to implement learning for individual robots in very simple tasks. We did some small experiments with Reinforcement Learning in which we were able to get a robot to learn how to go straight, which is not an easy task for these round-shaped robots (they tend to spin easily!). At this point we're taking baby steps with learning, so we have yet to see the implications of an entire population of robots with learned behaviours.
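To make the "learn how to go straight" experiment concrete, here is a minimal, hypothetical sketch of that kind of Reinforcement Learning, not the artists' actual code: tabular Q-learning on a toy task where the heading error is binned into five states and a random drift term stands in for the round hull's tendency to spin:

```python
import random

random.seed(0)  # deterministic run for this sketch

# Heading error relative to the desired straight line, binned into 5 states:
# 0 = far left, 2 = on course, 4 = far right. Actions drive the two pumps
# differentially: steer left (-1), keep straight (0), steer right (+1).
STATES = range(5)
ACTIONS = [-1, 0, 1]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy dynamics: the action nudges the heading, while random drift
    models the robot's tendency to spin off course."""
    drift = random.choice([-1, 0, 1])
    new = min(max(state + action + drift, 0), 4)
    reward = 1.0 if new == 2 else -abs(new - 2)   # best when on course
    return new, reward

state = 2
for _ in range(30000):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)                      # explore
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit
    new, reward = step(state, action)
    best_next = max(Q[(new, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = new

# The learned greedy policy should steer back toward the on-course state:
# nobody told the robot to steer right when it drifts left, it was only
# rewarded for staying on course.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

The same trial-and-error loop, given a different reward function, would learn a different behaviour, which is what makes the technique attractive for open-ended work like this.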
Since we wanted to build a first version that "worked", in terms of the robots moving around the surface of the water, being able to move straight, approaching one another, etc., we had to work at a much higher level. We thus set the learning aside for a start and decided to use an Artificial Intelligence approach that is currently very popular in the video game industry for the design of intelligent behaviors. This method, known as Behavior Trees, allows the design of complex, hierarchical behaviors. It makes it easy to set priorities for the robot and to allow it to try different strategies to achieve its goals (or fulfill its desires, if you prefer). For example, in our current implementation, the robots move around freely, but when they hit an obstacle they interrupt their movement and try to avoid getting stuck. They also interrupt their behavior when they receive a message from another robot, which might change what they are doing at that moment.
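The priority scheme described above (messages interrupt everything, obstacle avoidance comes next, free roaming is the default) can be sketched as a toy Behavior Tree; the node classes and behaviour names below are invented for illustration and are not the project's code:

```python
SUCCESS, FAILURE = True, False

class Sequence:
    """Succeeds only if every child succeeds, in order (short-circuits)."""
    def __init__(self, *children): self.children = children
    def tick(self, robot):
        return all(child.tick(robot) for child in self.children)

class Selector:
    """Tries children by priority; stops at the first one that succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, robot):
        return any(child.tick(robot) for child in self.children)

class Condition:
    def __init__(self, predicate): self.predicate = predicate
    def tick(self, robot):
        return self.predicate(robot)

class Action:
    def __init__(self, name): self.name = name
    def tick(self, robot):
        robot["log"].append(self.name)   # stand-in for driving the pumps/LEDs
        return SUCCESS

# Highest-priority branch first: respond to messages, then avoid
# obstacles, and only move freely when nothing interrupts.
tree = Selector(
    Sequence(Condition(lambda r: r["message"]), Action("respond_to_message")),
    Sequence(Condition(lambda r: r["obstacle"]), Action("avoid_obstacle")),
    Action("move_freely"),
)

robot = {"message": False, "obstacle": True, "log": []}
tree.tick(robot)
print(robot["log"])  # ['avoid_obstacle']
```

Because the tree is re-ticked every cycle, a message arriving on the next tick automatically preempts the lower-priority branches, which is exactly the interrupting behaviour described above.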
Machine learning methods such as Reinforcement Learning and Genetic Programming are tricky, especially in the context of creating an artwork. They're optimization techniques, so they work well when one tries to solve a specific problem like 'how to navigate in a straight line'. But in an artistic context, the problems are blurry, so we have to invent new methodologies. For example, you can achieve interesting results by playing with the reward functions of the agents, as Sofian did last year at LABoral as part of the installation/performance n-Polytope by Chris Salter and collaborators. Also, the process of learning itself sometimes has a very interesting aesthetic value. So the current focus of our research is how to use machine learning as a critical tool, helping the robots learn behaviours, with respect to their environment, that might eventually surprise us.
The spectator plays a role in the work. Can you explain how the public might be involved in, and maybe even influence, the installation?
Although the environment sensor of some of the robots might be influenced directly by the audience (e.g. if some of them have microphones or light detectors), the installation is not meant to be interactive per se. We see it more as a piece you experience through indirect, slow interaction, where the bots are simply added to our existing ecosystem, responding to it. We hope the audience will respond to it by projecting their own cultural references on the robots, that they will recognize themselves in them.
On another level, we'd like the audience to begin to ask themselves questions. Why are the robots grouping together? Why is this robot making that sound? Why are they so hectic now when they were calm a minute ago? How does that relate to the site they're currently swimming in?
Any upcoming project, exhibition, field of research you'd like to share with us?
Samuel will be attending the Bozar Electronic Art Festival (BEAF) in Brussels from September 25 to 29. Stephen just finished a major work titled Patch at Dalhousie University (Halifax, NS/CA) with robotic agents that react to the presence of students in classrooms. Sofian's underwater artificial life installation Plasmosis is still running at the marina of Carleton-sur-Mer until September 7th (QC/CA). We are also trying to organize another research residency next year for Vessels but we have no definite plan yet.
I closed my report of the exhibition The Air Itself is One Vast Library on the promise that i'd come back to my last visit to Brighton with a few words about the crime scene-style outline of a drone that James Bridle painted on the city seafront.
Under the Shadow of the Drone, commissioned by The Lighthouse, is a one-to-one representation of one of the military drones piloted remotely to strike targets in distant areas of the world. The aerial attacks they conduct leave hundreds of people dead, many of them innocent civilians.
The controversy surrounding unmanned aerial vehicles has recently intensified in the UK with the news that pilots at Waddington (Lincolnshire) are now working in relay with the military in the US to remotely operate American Reaper drones in Afghanistan.
For Bridle, what matters is not so much the drone in itself but the 'black box' side of contemporary warfare technology. "I have a political interest in drones as well, but beyond that, they stand for all aspects of these invisible technologies that have a great effect on the world but are kind of largely hidden from view," he told the Creators Project.
We might read about drones and be horrified by the way they monitor, gather intelligence, destroy and kill, but we still cannot fully understand them, simply because we don't see them properly; even people who are directly affected by them hardly ever get a chance to see a UAV. Under the Shadow of the Drone suddenly brings drones into our daily life.
I had intended to write down the notes i took during a talk that James Bridle gave last month in Brussels for The Digital Now series of events, but The Lighthouse has recently uploaded to youtube a similar talk that the designer gave to the Brighton audience. I highly recommend it. It is both entertaining and chilling. Bridle explains in detail his research into drones and, more generally, his investigation into the way we perceive and understand technology. He analyzes how the most reproduced 'photo' of a Reaper drone is actually a photoshopped image that first emerged in a forum for 3D modeling hobbyists; he discusses the Disposition Matrix and the escalating assassination program which tracks and kills suspected militants in other parts of the world, etc. He also illustrates his research by briefly explaining some of his own projects, such as Dronestagram: A Drone's Eye View, which collects images of the locations of drone attacks along with a description of the carnage they cause, and A Quiet Disposition, a software system that constantly scans the web for news reports on the Disposition Matrix and drones and finds links between them.
This much shorter video brings the spotlight on Under the Shadow of the Drone:
Under the Shadow of the Drone remains on view until May 26, 2013 on the Brighton seafront, five minutes' walk east from the Brighton Wheel (do stop by The Lighthouse, they'll hand you a map with the location of the shadow). The work was produced by Lighthouse and Brighton Festival.