The new episode of #A.I.L - artists in laboratories, the weekly radio programme about art and science i present on ResonanceFM, London's favourite radio art station, is aired this Wednesday afternoon at 4pm.
The guest of this episode is Usman Haque, one of the founders of Umbrellium. Usman is an architect who creates responsive environments, interactive installations, digital interface devices as well as many mass-participation initiatives. His skills include the design and engineering of both physical spaces and the software and systems that bring them to life. He is also the Founder of the sensor platform Pachube, now known as Xively.com.
Usman Haque happens to be one of the most thought-provoking people i know in London and today we're going to talk about the smart city vs the messy city.
New year, new episode of #A.I.L - artists in laboratories, the weekly radio programme about art and science i present on ResonanceFM.
The guest of this episode is artist and critical engineer Julian Oliver, whose award-winning software and hardware works include a wall plug that manipulates the news appearing on other people's screens, a pair of augmented-reality binoculars that replace advertisements in public spaces with artworks in real time, and a Transparency Grenade able to capture network traffic and audio at the site of secret corporate or governmental meetings and anonymously stream the data to a dedicated server where it is mined for information. Julian Oliver's projects might be provocative and entertaining but their ultimate aim is to make us question the technologies we use every day: who really owns them? Who made them, and to what purpose? How much do they shape our behaviour? Do these technologies serve us as much as we serve them?
During the show, however, we're not going to talk about Julian's exciting projects. Instead, i wanted to focus on the Critical Engineering Manifesto that Julian wrote a year ago together with Gordan Savičić and Danja Vasiliev. Expect explanations about why Engineering is the most transformative language of our time, questions about how to adopt the critical engineering ethos if you have next to zero technical skills, and details about Julian Oliver's upcoming projects.
As i mentioned a few days ago, the Bartlett student show is one of my favourite events of the summer in London. It is, however, so overwhelming and turbulent that you need dedication and pure chance to spot the works that might interest you the most.
I'm glad my Bartlett expedition led me to The Theatre of Synthetic Realities.
The project, authored by architectural designer Madhav Kidao, is miles away from what you'd expect from an architecture work. No model, no plan. Instead, it rather looks like an essay made of photos, short videos and texts. Together, they reflect on immoral architecture, unsympathetic machines, reality filtered by artificiality and, more generally, our symbiotic relationship to technology. Kidao likens his project to "an exaggerated caricature of our present and near future relationships to technology as it stands."
What makes the project remarkable is its focus on how new technologies prompt new behaviour and new ethical questions and decisions. i realize i might be accused of heresy and crucified in front of the populace for writing this but ethics and social concerns are usually not the nerve centres of architectural exhibitions.
Allow me to do a bit of copy/pasting, and meet me right after that for an interview with Madhav Kidao.
The Theatre, as illustrated in the film Making Friends and Other Functions, is a vehicle in which to explore the relationship that exists between the designer/creator and his or her repertoire of increasingly intelligent collaborative tools, be these tools to create or tools to think. The machines, under the selective guidance of the designer, construct their own reality based upon the information they extract from their environment and its unwilling occupants. This is ultimately a task with no beginning or end, and fundamentally questionable ethical integrity. As a result we are left to question the role of the Architect, both in regard to creative authorship and ethical responsibility.
Hi Madhav! What are the 'Unsecure Webcams' that are mentioned in ACT I - The Observer? What makes them unsecured?
As I'm sure you are aware, most live feed web and surveillance cameras can be accessed remotely through the internet. It is possibly one of the Internet's worst kept secrets that by simply googling specific codes for particular camera models, for example "intitle:liveapplet inurl:LvAppl", you are given a list of all the live webcams of that particular make and model. Often, however, the owners of the cameras, usually domestic or small businesses, fail to password protect them; this is what makes them unsecured. Therefore, in theory, anybody is able to view the camera's live stream and, with many cameras, actually control the pan and tilt. As long as you have not cracked any passwords, this is entirely legal.
Whilst the feeds are usually pretty innocent, and frankly often mundane, every now and then you come across something that you no doubt shouldn't have seen, an insight into a private world completely unconnected to our own other than through technology. It's quite a peculiar relationship that exists between the remote observer and the realities of an unknown space unfolding before them. Time is of particular importance, as you are viewing in real time a world whose past you know nothing about beyond the moment you logged in, yet our extended embodiment in the animate camera somehow immediately embeds us in this unfamiliar space.
This is not too dissimilar to the tele-assistance relationship between a drone pilot and the drone. Initially we are completely ignorant of where and what it is we are looking at, other than what can be interpreted from the point of view of the camera. It's in our nature to be inquisitive, to people watch, and there is some perverse pleasure in trying to comprehend the events unfolding before us. However, due to the lack of context, more often than not this is simply educated speculation. The more I explored this strange paradigm the more I found myself just making up wholly fictitious scenarios in my head based upon the slightest clues garnered purely from body language, perceived interactions and the environment. The epistemological variety of this action means that truth soon begins to seem irrelevant compared to the desire to fabricate an alternative reality.
The name of the project, The Theatre of Synthetic Realities, places your work in the realm of fiction. Yet, it was inspired by elements of reality. Could you tell us about the trends, behaviors and technologies that have inspired the project?
It was from this point that the concept of a Theatre of Synthetic Realities emerged, a kind of reinterpretation of Hitchcock's Rear Window for the 21st Century, in which the binoculars are replaced by the vast interconnected complexity of our computing networks; with each input - be it a camera, a sensor or even another person - acting as a portal into a parallel environment. In much the same way as the binoculars are a prosthesis to extend our vision, the global technological systems that we symbiotically rely upon extend our powers of perception and influence around the world. In Rear Window the architecture of community and society defines the story. I was keen to explore how technology facilitates new forms of social interaction and redefines concepts of neighbourhood and community, and how this in turn redefines our concepts of architecture.
The impact technology has had upon our social systems, access to knowledge and global influence is nothing particularly new; Marshall McLuhan famously introduced the concepts of a Global Village and Global Theatre almost 50 years ago. For me, what is more interesting is how this has evolved into the relatively more recent concept of the Internet of Things. The trend I believe we will see with the Internet of Things is that it is not just objects that can be tagged and categorised but also the spaces they inhabit and the actions and events they are associated with. In this way spaces, buildings and environments are becoming encompassed in the Internet of Things. This is not limited to architecture though: the social activities and routines contained within that architecture add to the data history of a place, a form of artificial psychogeography. What this means is that, in much the same way as we exist in a duality between physical and digital social circles, the spaces that we occupy do too. What we begin to see is a physical environment and the live, updating digital representation of that environment. This is facilitated by new advances in sensory and scanning technologies, computer vision and biometric analysis.
I, like many others, was investigating the potential of the Xbox Kinect as a means of capturing complex real-time data. Its capabilities have been widely publicised but, put simply, the idea that a machine can see in three dimensions and then also recognise human gesture is incredible. It allows any machine created using such technology to become actually embodied in the physical world. It begins to transcend the border between the physical and the digital, which is a very powerful concept when thinking about intelligent environments. And as we increase our dependence upon the internet as our primary source of knowledge and interaction, our interpretation of truth becomes more reliant upon the technology that we assign to gather and interpret the real world. So fundamentally we start to view the physical world through the filter and perception of machines, in effect, a synthetic reality.
Could you describe the Vision Machine, the way humans would 'interact' with it and its ultimate purpose?
The idea of the Vision Machine originated from Paul Virilio's book The Vision Machine. In it he discusses the concept of the "automation of perception", predicting a machine that sees for itself, not for the benefit of man. This concept, coupled with an interest in how artificial intelligence feeds into responsive/adaptive environments, intrigued me. The capability of a computer to extract a significant amount of information from a physical location, combined with its ability to comparatively analyse that data against the vast amounts of data stored online through a neural network, is almost enough to simulate perception. The resources that a computer has immediate access to - such as The Internet of Things - and its methods of analysis are completely alien to our own, therefore it could be predicted that its interpretations of the physical world would be alien too. Building upon the idea of how a human would perceive a space as viewed remotely through a webcam, I wanted to create a sensationalised demonstration of how the camera itself could possibly perceive the space it inhabits in relation to its learned perceptual world.
In regard to the Theatre of Synthetic Realities, the Vision Machine is in essence a robotic actor, cameraman and director in one. Whereas in the initial webcam scenario I was in complete control of the camera and was free to make my own conclusions about what I saw, with the Vision Machine at the other end I am relegated to a more collaborative role in which the machine dictates what it is I view, yet provides information that I would never ordinarily have been able to extract. Working from this principle, we could then predict that the collaborative relationship between myself and the semi-autonomous machine could lead to an emergent performance unique to the relationship. The next stage in the Vision Machine was to then suggest that, much like I had done, it too begins to fabricate elements of reality and as such distort the reinterpretation of the digital model of that environment as well as what the viewer believes to be true.
The project has never really been a serious proposition for a machine to be developed. In fact it was supposed to be an exaggerated caricature of our present and near future relationships to technology as it stands. The machine is portrayed as more of a mild irritation that we just coexist with rather than any kind of interactive companion. Its ultimate purpose is as an analogical device to critique and explore our concept of what digital fabrication is or could be. In this project we are fabricating reality and the representation of reality as part of the Internet of Things. It is very much an attempt to question the relationship we have to tools and technology in a world in which we continue to transfer not just physical but increasingly cognitive faculties over to those tools. This is not necessarily seen as a negative thing but rather as an interrogation of the possibilities it presents, especially in regard to new forms of collaboration.
I found 'ACT IV - Making Friends and Other Functions' a bit daunting: here is a machine that observes human beings constantly and then assesses them and assigns them a character. Do you see current technology going in that direction? And what would be the benefit of having such machines around?
One of my aims for the project was to see how technologically advanced I could make it for the least amount of money; so begging, borrowing and hacking spaces, hardware, open source software and code. It was also an attempt to demonstrate how the role of the designer is changing, particularly in the world of open source projects. Practically any designer can now create their own tools and machines for any job-specific purpose on a relatively low budget, the RepRap being the archetype. As complex and powerful technologies become cheaper and easier to hack, like the Kinect, the designer is gifted with power and responsibility that is free from supervision.
That sense of malaise you have at the idea of a camera watching and judging you is in large part due to your loss of control and empathy in the face of the unpredictable nature of a non-human agent. I think this is an issue that we, as designers and particularly architects, will have to address. As architecture reacts to shifts in social habits, I think we will see a lot of unforeseen challenges in regard to what technologies we use and the manner in which we do so.
Fundamentally my project is an immoral one and I think the concept of an immoral architecture is something that will become increasingly prevalent in the future.
The text of your project says that "The Theatre (...) is a vehicle in which to explore the relationship that exists between the designer/creator and his or her repertoire of increasingly intelligent collaborative tools, be these tools to create or tools to think. The machines, under the selective guidance of the designer, construct their own reality based upon the information they extract from their environment and its unwilling occupants. This is ultimately a task with no beginning or end, and fundamentally questionable ethical integrity. As a result we are left to question the role of the Architect, both in regard to creative authorship and ethical responsibility." Is ethical responsibility already an issue architects and designers encounter nowadays when working with new technological tools?
Ethical responsibility has always had some part to play in the design process but I think it is slowly coming to the forefront. The explosion of open source has really brought ethics into the design process, as it not only transfers power from the institution to the individual but also provides new forms and channels to disseminate information, this recent article actually being a perfect example.
For me, consciousness and intent are just as important as categorical right and wrong. Exploring emergent technology is just that: emergent, unknown. While a particular technology may have a positive, society-changing impact, it may well have dire consequences too. I'm not one for speculation upon utopian ideals and believe that we will have to tread and adjust the boundary of what is acceptable to progress civilisation. However, whilst the unquestioning embrace and exploration of new technology is exciting, we often fail to question its merit beyond novelty. As Joseph Weizenbaum suggested, just because we can do something doesn't mean we ought to. I think we have to take a utilitarian stance and ask ourselves: is doing something in a new way beneficial to design and society? And if so, is it not to the detriment of others?
Admittedly I think architecture is a bit slow off the mark when it comes to these kinds of issues. This is understandable when you consider the timescale architecture operates on compared to other design fields. However Architects have always been eager to incorporate new design methodologies and the ethics of the technology used will undoubtedly become an issue.
In terms of the Bartlett show, I'm guessing there is a particular reason you have asked this question, so I would ask the same of you. The show is always a dazzling array of phenomenal work and the mastery of the variety of mediums used is breathtaking. I do feel the density and complexity of the show does, however, sometimes make it overwhelming and distract from the work. Of course, as with any degree show, the work on display can only ever be a vast reduction of the full scope of the project and as such the viewer is never going to get a comprehensive understanding of that project.
All images Madhav Kidao.
The Arsenale section of the Venice Biennale of Architecture has many characteristics that make it stand out from other architecture exhibitions. One of them is that you won't get to see cardboard models and plans. In fact, you can even walk around and inside a 1:1 scale replica of an apartment building with shared spaces.
Hyperhabitat. Reprogramming the World is a research project directed by Guallart Architects, initiated at IaaC (the Institute for Advanced Architecture of Catalonia) in 2005, with the BCN Fab Lab in collaboration with information designers Bestiario.
The project aims to propose answers to questions such as: could our world be inhabited on the basis of information technology? How could this be organized?
Just like a digital network is made of nodes and connections, Guallart's model is a large-scale attempt to have all the elements of the physical world communicate with each other. The house functions as a small ecosystem, where each object is a piece of a widely distributed intelligence, able to interact with the others. Architecture becomes the interface that enables us to inhabit the world. The connections do not stop at the room level, objects also communicate with the whole building and can even interact with the neighbourhood or the rest of the world.
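To make the nodes-and-connections idea a little more concrete, here is a minimal sketch in Python. It is not Guallart's actual system (which runs on Internet 0 microservers); every name in it is invented for illustration. It simply shows objects as nodes that belong to nested scales (room, building, neighbourhood...) and notify one another at whichever scale an event is relevant:

```python
# Toy model of a Hyperhabitat-style network: each object is a node that
# belongs to several nested scales and can publish a message to every
# other node sharing one of those scales. All names are invented.

class Node:
    def __init__(self, name, scales):
        self.name = name
        self.scales = set(scales)   # e.g. {"room:kitchen", "building:12"}
        self.inbox = []             # messages received from other nodes

class Network:
    def __init__(self):
        self.nodes = []

    def add(self, node):
        self.nodes.append(node)

    def publish(self, sender, scale, message):
        # Deliver the message to every other node sharing the given scale.
        for node in self.nodes:
            if node is not sender and scale in node.scales:
                node.inbox.append((sender.name, message))

net = Network()
lamp   = Node("lamp",   ["room:kitchen", "building:12"])
fridge = Node("fridge", ["room:kitchen", "building:12"])
lift   = Node("lift",   ["building:12"])
for n in (lamp, fridge, lift):
    net.add(n)

# The lamp reports at room scale: only the fridge is in that room.
net.publish(lamp, "room:kitchen", "lights off")
# The same lamp can also speak at building scale: the lift hears this one too.
net.publish(lamp, "building:12", "kitchen idle")

print(fridge.inbox)
print(lift.inbox)
```

The point of the sketch is only the scale hierarchy: the same object communicates at room level, building level or beyond, which is what lets "connections not stop at the room level".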
The microservers embedded in the objects interact with one another to generate relationships that are displayed as a large-format projection on one of the walls. Lines can be drawn to suggest relationships, or 'line codes', between nodes. In addition, a special web platform, to be launched on November 24, will enable people around the world to put forward formulas for reprogramming the world.
Hyperhabitat is the biggest Internet 0 (a new microserver technology developed at MIT to generate ambient intelligence by linking a series of miniature computers) network ever created. The project also builds upon the creation and theory of the multiscale habitat, an 'urban genome' project developed at IaaC that seeks to introduce new approaches to the generation of buildings and cities by restructuring the functional relationships between the constituent parts.
A key objective of this 'reprogramming' of buildings and cities is to use artificial intelligence in order to save energy and achieve a more self-sufficient model of living. As the architect explained to El Pais, the project tries to materialize the socio-economic changes that the world is currently undergoing: we are moving from a financial economy towards a production economy based on removing the price of objects in order to give them value. Guallart added: 'The way to visualize this idea is to build dwellings which are self-sufficient, applying artificial intelligence to buildings.' The architect is obviously aware that working at the building level is not sufficient if one wants to change the socio-economic structure of the world; action would have to be taken at the town-planning level (cf. Guallart's Sociopolis project, a neighbourhood designed with a mind set on efficiency, functionality and digital networks).
Slideshow of the images i took of the installation:
There's a really nice video interview with the architect, with views of the workshop where the installation was entirely developed and built, on 3cat24. But you might prefer a video presentation in English rather than in Catalan:
Hyperhabitat is on view at the Arsenale, Venice until November 23rd, 2008.
Kitchen Budapest is a brand new media lab for researchers who are not only interested in the convergence of mobile communication, online communities and urban space but who are also ready to get their hands dirty creating experimental projects in cross-disciplinary teams.
Kitchen Budapest has released its Summer 2007 catalog. Edited and commented by Eszter Bircsak and Adam Somlai-Fischer, it is yours to download in PDF form. Trust your dear Aunt Régine, the booklet is worth leaving aside whatever you're doing right now.
The catalog highlights some of the projects developed at the Hungarian media lab.
The chapter on Mobile Expressions demonstrates the kind of playful content that can be created using mobile phones; Intelligent and Charming Things is about the way that objects around us can interact with us and even create a culture of their own; Dynamic Media Interfaces shows compelling new ways to explore (or perform) digital content; i guess i've lost everyone here and you're already busy reading the book but i'll keep on describing the catalog just in case. So, we're now at the chapter called Community Technologies, which comes up with ideas for better support of communal interaction and communication. The remaining pages are dedicated to brief presentations of some of the workshops which took place at Kitchen Budapest (aka KiBu).
Some of the projects developed are simple, others are quite sophisticated; some will appeal to the hacker, others have a clear interaction design feel; they are sometimes poetic, often thought-provoking and always interesting.
One of my favourites is the Landprint project, which uses a lawnmower to cut text patterns into the grass (so far) or even an image that looks like the print of a photograph when viewed from above (that's the ultimate plan).
Over the past year, i've spent an impressive amount of time ogling a blog called Variable_environment. It had all the key ingredients that get my attention: pictures (many, lavish, big, most of them snapped by Milo Keller), inspiring collaborators (including architect Philippe Rahm, multimedia designer Ben Hooker, researcher and designer Rachel Wingfield and architect Christophe Guignard), mini robotic guest stars, and great content. The blog posts document a fascinating research project called Variable environment/ mobility, interaction city & crossovers. The project's starting point is the fact that our living environment and the way we live have changed tremendously over the past few decades. The postmodern city made of signs and infrastructures that Robert Venturi and Denise Scott Brown described in the '70s in their book Learning from Las Vegas has been progressively "enhanced" by new layers: more signs, spaces and objects, the arrival of new technologies, the increasing mediation of our relationship to space, the intensification of transport.
Variable environment/ is a research project that explores the new challenges faced by our living environment, focusing on interaction design as well as architecture and environment design.
An activity report detailing the Variable_environment/ project has recently been published. Selected images and texts have been drawn entirely from the blog and reorganized to allow a linear reading of the project. You can download a pdf version of it (161 pages mixing French and English).
I contacted Patrick Keller and asked him to give me the lowdown on Variable_environment/. Patrick is responsible for the coordination and art direction of the project but he is also one of the founders of fabric | ch, a studio for architecture, interactions and research, and a teacher at ECAL.
You wrote on the blog that "the interdisciplinary challenges and themes relating to the creation of contemporary environments should be tackled through the collaborative efforts of designers (interaction design, product design and graphic design), architects and scientists. This transversal approach hardly ever happens in Switzerland and Europe." Why do you single out Switzerland and Europe? Have you seen examples of interdisciplinarity and collaboration between design, architecture and scientists in other countries or continents?
Well, I should first say that I had to single out Switzerland because the project was done in the teaching and research context of Switzerland (the texts in the blog are the same ones that I snail mailed to the experts who funded the research): the Variable_environment/ project linked two schools, ECAL (Ecole Cantonale d'Art de Lausanne ---Arts & Design---) and EPFL (Ecole Polytechnique Fédérale de Lausanne ---Sciences, Architecture & Engineering---), which was kind of a new educational situation in Switzerland.
But this small country can in fact be considered a good representative of the rest of Europe (regarding education in design, architecture and sciences, there are quite good schools in Switzerland like ECAL, HGKZ, Accademia Mendrisio, ETHZ & EPFL), especially because, as in most of Europe, art & design disciplines (with the exception of architecture) are usually taught in different schools, structures and locations (and, by the way, with less money) than the other disciplines at university level like humanities, engineering, life sciences, etc.
Christophe Guignard and I did a round trip two years ago for ECAL, visiting lots of schools and universities in Europe and the United States to see what kind of collaborations were going on between designers and scientists. There is, for example, a clear difference between the way design is taught in Europe and in the United States or Canada: most of the big universities in the US (Stanford, Berkeley, UCLA, Harvard, MIT, Columbia, etc.) include art, design and architecture schools on their campuses. This should allow for a common knowledge of each other's work, a sort of base on which you can start, but it mainly allows for potentially easy collaborations on a daily basis. Of course, it is not because something is possible that people take advantage of it... Curiosity is still needed in this context because it is usually not part of any regular curriculum at the moment.
Architecture is a bit different; it now has a long history of collaboration with engineers: since the modern period and the industrial revolution, its teaching has left the art schools and integrated the universities nearly everywhere. Nearly all good architecture schools (with a few exceptions) are now part of big universities or polytechnic schools. In this sense, what has been done with this particular discipline and its teaching in the early 20th century could serve as a reference for some areas of contemporary design, even if the architectural field should also clearly rethink and instigate its relations to other disciplines now.
Which challenges does such an interdisciplinary approach face? Do you think that it would be easy for scientists, architects and designers to find a common language and work in synergy?
I don't really believe in the model of the "designer-coder" who would be trained in a design or art school. Design and code remain two different formations where different skills are needed if you want to reach a high level, unless you are ready to study for 10 years to reach a Master's level. But on the other side, it's hard for those disciplines to work together because the rhythm of work is quite different, especially in a research context. So what could be the solution?
Of course, it's now absolutely necessary that designers understand code and scripting, because design nowadays also happens at the level of code. And in a more general way, it's important that designers are regularly confronted with, and include and understand, the work of scientists, because major transformations of societies are now happening through the applied impact of scientific research and development, often without the (critical) input of designers. But I also think that it's absolutely necessary that each discipline keeps its edge and profile, its own goals (or let's say its own difference) so that collaboration can be rich. And that's true for any type of collaboration, be it design and life sciences, architecture and nanotechnology or whatever. Some collaborations are nowadays experimental (i.e. architecture and life sciences), some no longer are but were a hundred years ago (i.e. architecture and civil engineering, which has built most of our contemporary landscape). So, what we need today in the transversal areas we are interested in are designers that understand code and some sciences, engineers that understand design, a common knowledge, and then highly creative and critical collaborations of any type between them.
For this, the way we teach and the structure of teaching in Europe should be modified, because we will never reach this common knowledge while keeping the schools of design and sciences separated and their budgets so different.
Could you tell us a few words about the AiRtoolkit, and in particular about one of its applications, the "AR ready" project?
The AiRtoolkit project has three facets, I would say: the first is the redesign (with interaction design considerations) of a known open source software (ARtoolkit, a marker-based augmented reality application), the second is the "AR ready" objects, while the third covers some uses you could have as an end user with such software coupled with the "AR ready" objects. Each facet of the project is not so particular in itself. It is when you think about combinations of the three that it gets exciting.
When you think of a technology like the (now old) desktop computer, you usually think about and focus on what you see, the screen and its content. You think less about what you don't necessarily see anymore: the way it has modified your daily working environment. As an example, think about the typical office table: the computer has "aspired" many objects into the screen (the famous "office" metaphor): pictures of relatives, music, mail, etc., finally even the table, which has been replaced by your own knees! Most of those objects or supports are now inside the "black box", no longer material. But what you have "gained" are hundreds of plugs and cables, a hole in the table to make way for them, a "bag" under it to hold them, headphones, remote controls, etc. So to say, a technology could be considered as the thing in itself plus all its potential "collateral effects".
"AR ready objects" are about these "collateral effects": the marker based Augmented Reality technology induces invisible/digital content (usually 3d) that you'll only see through the live eye of a mobile camera (would it be a cellphone camera or a hi-tech headset), on top of what this camera is capturing. But to see this invisible or "augmented" content, you'll have to put a visible marker in the physical environment. That's in fact what we thought was interesting in this technology (beside the possibility to mix the physical and the digital): to put a visible sign to say that there is something invisible, something you can't see with your eyes and that there is an all new visual and functional level that can only be reached through the vision of your camera. We were interested to examine how such a strange environment might look like visually and redesign or distort many known objects that would use this visual language for cameras. We knew at that time that AR technology without markers was under development, but we thought that keeping the markers was a more interesting approach for the physical environment.
"Techno-mediated" is a subset of "mediated" and man probably maintains an increasing mediated relation to its environment since the time he built tools, his first shelter or cultivated its first seeds. He has put the natural environment "at a distance" and still tries to format it (or literally "inform" it) according to its needs, even if he fortunately still has to deal with natural conditions due to the limited energetic resources he can get.
Therefore a car, a house, a cellphone, a plane, a virtual environment, a space station, air conditioning, a computer, a tool, language, agriculture, etc., can all be considered mediations of our relation to the environment as well, or to inhabitable space, to other humans, etc.
I agree with you that the mediations of our environment are definitely increasing and that for some decades now they have been "techno-mediated", provoking lots of perceptive interferences (with time, distance, location, relation to climate, etc.). These interferences created by techno-mediations open up new design areas and challenges where designers could potentially work with time, distance, the instant, climate, etc., and where, as architects, we are interested in working (at least that's the area of work we are exploring the most now with fabric | ch, the architecture, interactions & research agency I'm working with).
Is it getting too mediated already (I wouldn't make a distinction with techno-mediated)? I tend to answer in the affirmative: any mediation has an energy cost, and we can all witness that there are energy problems now and a clear negative impact of energy consumption on the climate and natural environments, and therefore on our existence as a group on this planet. This "climatic alarm" is probably a first strong sign telling us that we are living with too many mediations of our environment, mediations that consume too much energy (at least if we don't find new and cleaner sources of energy).
We'll need to redesign our relation to the environment with fewer (techno-)mediations at lower energy costs, or perhaps rather with variable densities of mediation over time and situations.
Are you optimistic about the future of the "interactive" city?
The "interactive city" is with no-doubt already here, even if it's not looking so "sci-fi" at the moment or even remains quite hidden in fact. Shopping, transportations, banks, communication, voting, state management, etc. are already interactive-like experiences, even if most of them are consumer-driven, profiling and very functional experiences. At the moment, the interactive city is getting built mostly by the private market (and therefore private interests), a bit by the state and a bit by engineers. Nearly no architects or designers are involved in the conceptualization and realization of big public projects (at least I haven't heard of it, please let me know if I'm wrong). This probably also because most designers and architects are a bit conservative and remain kind of blind to contemporary stakes or don't get enough involved into it.
Anyway, I think that the "interactive city" will grow, still mostly driven by private interests at first (so no, I'm not so optimistic about its future). The "real" will certainly become more and more digitalized once all kinds of computers, sensors and actuators enter the building and urban design economy (see for example consortiums like the ZigBee Alliance). Then we will get an enormous amount of data from these sensors (and to whom those data will belong, captured by which programs of which companies, proprietary or open, will become a big question!)
The city and buildings will probably first start to "speak" (e.g. "too hot" or "too many people" here, "too much traffic" or "too polluted" there, etc.), allowing for some kind of automated tasks, which is not a very interesting approach. But then, these data and spatial computing will also allow for crazy architectural projects and a whole new condition for the contemporary space we will live in.
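The kind of rudimentary "speaking" building described above amounts to simple threshold rules over sensor readings. A minimal sketch (sensor names and thresholds are hypothetical illustrations, not any real building-automation API):

```python
# Minimal sketch of a building that "speaks" from sensor data.
# Sensor keys and threshold values are hypothetical illustrations.

RULES = [
    ("temperature", lambda v: v > 26.0, "too hot"),
    ("occupancy",   lambda v: v > 80,   "too many people"),
    ("air_quality", lambda v: v > 150,  "too polluted"),
]

def building_speaks(readings):
    """Turn raw sensor readings into the simple messages a building might emit."""
    return [msg for key, triggered, msg in RULES
            if key in readings and triggered(readings[key])]

print(building_speaks({"temperature": 29.5, "occupancy": 40}))
# prints ['too hot']
```

This is exactly the "not very interesting" automated layer the interview mentions: fixed rules producing fixed messages, a long way from the richer spatial uses of the same data that the answer goes on to imagine.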
Do you plan to develop the VE project any further?
So, from next October, this lab will be open and collaborations between the schools will increase. All kinds of collaborations between design, architecture and the sciences will be possible, involving many teachers and students (there will be no initial thematic limitations). Engineers from EPFL will follow courses in design (object, graphic, interaction) before, hopefully, designers start to follow some courses in computing and the sciences.
The Variable_environment/ research has mainly served this "big picture" project for research and education in Switzerland. But the project itself will surely continue within this new academic context, even if many opportunities are still open at the moment and we don't yet know exactly which ones will go further: we might go to preproduction with the (Web)Cameras, continue to work on the "AR-ready" Objects and the AiRtoolkit, and/or do a European research project as a big extension to what was started with the Rolling Microfunctions (in collaboration with the Royal College of Art / Tony Dunne).
Photo credit: Milo Keller, ECAL.