The new episode of #A.I.L - artists in laboratories, the weekly radio programme about art and science i present on ResonanceFM, London's favourite radio art station, is aired this Wednesday afternoon at 4pm.
Today's guests are Evan Roth, Becky Stern, Geraldine Juárez and Magnus Eriksson from the Free Art and Technology Lab (F.A.T. Lab), a network of artists, engineers, scientists, lawyers, and musicians who are committed to supporting open values and the public domain through the use of emerging open licenses, support for open entrepreneurship, and the admonishment of secrecy, copyright monopolies, and patents.
Some of the members were at the MU gallery in Eindhoven last week for a F.A.T. Lab retrospective as well as for the launch of THE F.A.T. MANUAL. In this episode, we will be talking about 3D printed guns, Ideas Worth Spreading which allows you to deliver your own pirate TED talk, open culture and how to remove Justin Bieber from your web browsing.
The radio show will be aired this Wednesday 20 November at 16:00, London time. Early risers can catch the repeat next Tuesday at 6.30 am. If you don't live in London, you can listen to the online stream or wait till we upload the episodes on soundcloud.
F.A.T. GOLD Europe. Five Years of Free Art & Technology is at MU in Eindhoven until January 26, 2014. THE F.A.T. MANUAL is available in print on demand, but you can also download it for free.
Most of us don't really know (nor probably care to know) how "the network" functions, what its structure of communication cables and servers looks like or how, more concretely, our private data travel. Roel Roscam Abbing spent a month at Laboral in Gijón to work on Border Check, a software that lays out the physical and political realities behind the internet.
And i'm sorry to quote him, but in this age of reckless online surveillance even Dick Cheney thinks that you never know how much knowledge you're going to need. So maybe a good place to start would be to visualize how our data are moving from place to place (and thus which governments can potentially have a look at them), and Border Check enables just that:
As one surfs the net, data packets are sent from the user's computer to the target server. The data packets go on a journey hopping from server to server potentially crossing multiple countries until the packets reach the desired website. In each of the countries that are passed different laws and practices can apply to the data, influencing whether or not authorities can inspect, store or modify that data.
Hi Roel! Border Check (BC) is a browser extension that illustrates the physical and political realities of the internet's infrastructure using free software tools. Why did you think it would be interesting to investigate the travels of data packets?
I stumbled upon this topic while pursuing a personal interest I developed last year when I started the Networked Media programme at the PZI. I joined this course because my practice has always been engaged with new technologies and the internet. At the same time, however, I felt I lacked a lot of knowledge (technical, theoretical) to make statements with and about these media. For one, if someone had asked me what the internet was, I would not really have had an answer. So one of the first things I started researching while at the PZI was exactly this: what is the internet? It was during the programming courses that I started working with software such as traceroute, which shows you how you connect to servers. Traceroute really fascinated me because suddenly it linked websites to specific machines that could be linked to a company and to a location in the world. This suddenly made the internet very tangible for me.
At the same time, while reading up on the history of the internet, I realised the difference between how I had previously perceived the internet, as a 'cloud', 'wireless', non-physical (which is probably the more common understanding), and the internet as a bunch of physical cables that run through countries and along the ocean floor. Tracerouting then became a way to experience this normally invisible infrastructure.
Some of the demos you sent me show the data taking a straight line. In other cases, such as for dilma.com.br, the path is more tortuous. How do you explain this? And can every non-straight path be accounted for?
The complexity of the paths has a lot to do with the degree of interconnectivity of the networks. Generally, the better the connection between you and the destination server, the fewer the hops on your travel. So if you see a lot of hops and twisting paths, it is probably because there is no direct connection between you and the destination.
The complexity of the paths can also be the result of certain assumptions embedded in the databases used for the visualisation. One of these assumptions has to do with determining where a machine is located in the world. Sometimes the geographical data tied to a machine's IP address reflects where its owning company is registered, rather than the actual physical location of the machine. So you might get visualisations where you'd see a line travel back and forth between Europe and the US. Rather than actually travelling back and forth across the Atlantic, what happens is that your travel stays in Europe, yet you get on and off networks owned by US and EU companies. Because of this, when using BC it's important to click the hops to reveal the machine names; they often contain more hints of where the machines may actually be located.
In this sense BC's visualisations are sometimes a bit more abstract, showing ownership and jurisdiction rather than the physical location.
How does a particular country's laws and practices regarding data affect the path adopted? Do you have examples?
It is not necessarily the case that a state's laws and practices affect the route; rather, the route determines which state's laws and practices one is exposed to.
The internet is routed passively, that is to say, there is no way you can tell your data how it should reach its destination. The internet is designed in such a way that data will always try to find the fastest route available, and that may happen to take your data through countries like the UK that monitor all passing data.
In this sense the route is often more influenced by geography and money. Geography, because some places such as the UK's west coast act as a 'funnel' for submarine cables, since they are the first stop for many of those cables when they cross the Atlantic. Money, because richer countries will also have the faster infrastructures, and those get a preference when it comes to routing.
Laws and policies can influence where companies set up their offices and data centres though. Facebook for example serves non-US users from Ireland, because of the low taxes it pays there. However, the fact that Facebook is registered in Ireland also means it has to comply with EU privacy laws. A funny example of what that means concretely is a 2011 'meme' that spawned on reddit where European users flooded Facebook with data requests, something that was not possible for American users of Facebook.
What were the most surprising discoveries you made while testing the data travels?
One of the more interesting things I've realised and which is something that I would like to follow up on is how 'historical' the networks actually are. If you compare maps of the submarine cables that make up contemporary intercontinental fibre optic networks with maps of telegraphy networks of the 19th century you will see a lot of similarities.
However, this historical element also became apparent to me using Border Check when for example I found out that much of Latin America is predominantly connected through the Telefonica network. Telefonica is a very large telecommunications company that resulted from the privatisation of Spain's state telecom company.
In this sense one could argue that a lot of Latin American countries are dependent on the telecommunications infrastructure of their former coloniser. This could lead to clashes of interest. In the case of Brazil this has recently become apparent when, in response to NSA spying, Brazilian president Dilma Rousseff announced plans to build a national Brazilian telecommunications infrastructure. With it, Rousseff wants to ensure that Brazil no longer needs to route via the United States (that is, the Telefonica network), and as a consequence no longer be subjected to American monitoring when it connects with the rest of the world.
How do you retrieve the information necessary to map these travels?
While you run Border Check, it uses your browser history to detect whether you are loading a new website. It then relies on Layer Four Traceroute, a tracerouting software, to map the IP addresses of the machines that route your data on its way to that destination website. Border Check then uses Maxmind's free GeoIP databases to link these addresses to cities, countries and companies. Maxmind to some extent gets this information from the repositories of internet registries (organisations that keep track of who registers what website, who owns a certain IP address, etc.). For visualisation, Border Check uses OpenStreetMap with the Leaflet visualisation library.
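The pipeline described here (traceroute hops, then a GeoIP lookup per hop, then visualisation) can be sketched in a few lines. This is a toy reconstruction, not Border Check's actual code: the hop list stands in for Layer Four Traceroute's output, and the lookup table stands in for Maxmind's database, with made-up example addresses.

```python
# Toy sketch of the Border Check pipeline: annotate a list of hop IPs
# (in reality produced by Layer Four Traceroute) with country/owner
# data (in reality taken from Maxmind's GeoIP databases).
# HOPS and GEOIP below are hypothetical stand-ins.

GEOIP = {
    "192.0.2.1":    {"country": "NL", "owner": "ExampleISP"},
    "198.51.100.7": {"country": "GB", "owner": "TransitCo"},
    "203.0.113.42": {"country": "US", "owner": "HostingInc"},
}

HOPS = ["192.0.2.1", "198.51.100.7", "203.0.113.42"]

def annotate_route(hops, geoip):
    """Return one record per hop, flagging each change of jurisdiction."""
    route, previous_country = [], None
    for ip in hops:
        info = geoip.get(ip, {"country": "??", "owner": "unknown"})
        crossed = info["country"] != previous_country
        route.append({"ip": ip, **info, "border_crossed": crossed})
        previous_country = info["country"]
    return route

for hop in annotate_route(HOPS, GEOIP):
    marker = "  <-- new jurisdiction" if hop["border_crossed"] else ""
    print(f'{hop["ip"]:>15}  {hop["country"]}  {hop["owner"]}{marker}')
```

A real run would feed the annotated route to a map library (Leaflet, in BC's case); the flag on each hop is what lets the visualisation mark where the data enters a new country's jurisdiction.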
What's next for BC?
I am really interested in adding more information layers to Border Check that provide some more context on what it could mean when one surfs through a specific country. One of the initiatives I find exciting is www.diriwa.org. It tries to collaboratively map communication and information rights in the world. I think adding this sort of information will make Border Check much richer. Other than that, releasing updates that would make the software easier to install and run on different platforms.
The new episode of #A.I.L - artists in laboratories, the weekly radio programme about art and science i present on ResonanceFM, London's favourite radio art station, is aired this Wednesday afternoon at 4pm.
My guest tomorrow will be Erica Scourti. Erica is an artist/film-maker who is studying MRes: Moving Image Art at Central St Martins, run in conjunction with LUX. Her work uses autobiographical source material, as well as texts found on the internet, to explore the mediation of personal and collective experience through language and technology in the networked regime of contemporary culture. Which means that tomorrow the episode will focus on online language and communication, algorithms, forms of mediated intimacy, and distributed art works. Amongst others!
A few years ago, every day, for over 10 months, Erica wrote and emailed her own diary to her Gmail account and copied the list of suggested keywords linking to clusters of relevant ads. After that, she spoke the text to webcam, creating daily portraits of her life as understood and translated by Google's algorithms. With another project, Woman Nature Alone, she hijacked the process by which Google's algorithms organize the hierarchy of online visibility. Erica used titles taken from stock video sites corresponding to the keywords 'woman', 'nature' and 'alone' as the starting point for a series of films that show her performing each action described in the title. The video and title were then uploaded to YouTube, forming a collection of 'rushes'. After that the online works started a life of their own...
Anti-Media. Ephemera on Speculative Arts, by researcher and theorist Florian Cramer.
Publisher NAI Booksellers writes: Literature written in the style of computer code, electro-acoustic compositions with newly created sounds, but also subcultures with clearly identifiable manifestations, from Internet porn to neo-Nazis and anti-copyright activists: high-, low- and subculture have long been impossible to distinguish, including in the degree of their self-reference. Art and media criticism focuses mainly on the concepts, not on the objects themselves. In Anti-Media Florian Cramer shows, through a close reading of cultural expressions and analysis of media and art criticism, how these constantly refer to their tradition, language and medium while trying to subvert them.
I've never reviewed nor even read anything that looked remotely like this book before. It is bold, thought-provoking, and extremely fast-paced.
Cramer makes fearless statements about interactivity, pop music, social hacking (that one was a fun chapter), 'openness' and many concepts and ideas that we brandish without much thought, as if they were magical formulas. While reading his book, however, i realized once again that our writings, works and discussions rely on cultural terms whose precise meaning, extent, impact, and limits we often take for granted.
Nothing and no one (not even Rocco Siffredi) is safe from Cramer's sharp questioning and ruminating. You will either embrace his reflections or disagree with them entirely, but a heated debate with this kind of book would, i think, constitute a valuable exercise for critical minds.
Image on the homepage: 01.org (Eva and Franco Mattes), Nikeground, one of the works discussed in the book.
A few weeks ago, Sight and Sound, a festival produced by Eastern Bloc in Montreal, ran a workshop titled Analyze Dat: TOR Visualization. Led by someone who presents himself (or herself) as Arthur Heist, the workshop came with a description that suggested an internet driven by secrecy.
This workshop explored the use of natural language processing tools to analyze the goods, products and services available on online black markets, trying to reveal a faithful cartography of the dark web.
The workshop will begin with an introduction of the tools involved in accessing the Internet's black markets (Tor bundle, Bitcoins). Participants will then process these webpages to extract information from natural language to draw a map of hidden services. These tools allow the user to go from simple word frequency analysis (i.e. cloud tags) to more complex semantic comparison and statistical relationships between those networks. The goal is to be able to visualize this data in order to get a better understanding of the inner, deep feelings society keeps hidden.
I knew about the stateless, encrypted online Bitcoin currency, of course, and i had heard of the Tor software that enables online anonymity, but other than that, i felt there was precious little i knew about the Deep Web, the vast submersed side of the World Wide Web that countless people are using in perfect anonymity every day to buy goods that neither ebay nor amazon will ever sell you, and to exchange services that won't appear when you do a google search.
The more i looked into Tor and the many activities it enabled, the more intrigued i was. I thought that the easiest and fastest way to get a better understanding of the issue would be to interview Arthur Heist:
Hi Arthur! How much can one discover about this underground economy?
It is quite easy to find out about any good or hidden service available on the dark web. One just needs to know the first entry point that keeps track of these peculiar services.
Do you have to be a seasoned hacker, a super smart programmer, or can any web user make interesting enough discoveries?
The first pit stop is to go to the Tor project website and install the Tor browser for your operating system. Once installed, you can launch the Tor browser and access any website anonymously. So, no need to be either a hacker or a programmer to begin browsing the hidden web. A popular place where a lot of hidden services are listed is "The Hidden Wiki". From there, you can even find search engines that specifically target onion websites (those with a cabalistic URL).
And how did you find out about it in the first place?
As a user, I had been using Tor for a few years to enhance my anonymity online. I like the fact that it allows you to bypass some restrictions applied unfairly by companies who want to protect their assets. In a way, Tor gives us back the net neutrality some companies or governments want to put at risk. Concerning the dark web more specifically, this whole economy emerged more recently as a result of the rise of the bitcoin currency approximately four years ago. Even though I did not get interested in bitcoin specifically, I was more fascinated by the whole range of services and activities made available by these new technologies.
From a general point of view, I have never thought that the internet was much different or more dangerous than what we can experience in the real world. Let's say you are going to Toronto for the first time and you want to buy some crack cocaine: where do you go? Who do you get in contact with? In the same manner, if you want to find illegal services on the web, it takes the same effort to find out about them.
The general public has been fed what commercial companies want them to know. They have their minds locked in a narrow place for them to consume more easily, in the same way they'd go to Starbucks instead of the local coffee shop because it's not advertised on the same scale.
Were the participants like me, attracted by the description of the workshop but totally unaware of what it entailed? Or did they come prepared, knowing what they would be looking for?
The nice thing about the participants was that they represented in their interests the whole range of topics discussed during the workshop. Some were more interested in the political issues involved, some more in the use of natural language tools. Most of them had already installed Tor on their computers.
How exactly does this online black market reflect the traditional offline black market?
As stated above, there are no major differences between what you can find through online or offline black markets. And as a matter of fact, in the offline black market, anonymity is also the rule, going from changing your real name to wearing disguises so as not to be recognized. The main added value that the online black market allows for is the possibility to connect dealers and customers that would not have met otherwise in real life, which is also the main characteristic of online services in general too.
Does it allow for other types of transactions, activities, exchanges of goods and services?
Of course, anonymity brings a wide range of activities that you would not be able to find if it weren't anonymous. Among the things you can find through hidden services are the scary contract killers who offer to kill someone, with prices set depending on the popularity of the person to kill. A funnier website, called Tor University, offers to write any assignment or essay you need to get better grades. Another website offers to play pranks on your friends; for example, by breaking into their house with a fully equipped SWAT team...
I read that law enforcement agencies were struggling to deal with online black markets. Why are they even more difficult to grasp and fight than, say, traditional drug trafficking?
Because of the inner nature of how Tor works, encrypting the communications all along the way through each relay (except for the last one), it is not easy to track down one specific user or website. Nevertheless, one famous hack was made possible on the Tor network by setting up a few Tor routers, which all relay a lot of information. Most of it is encrypted, but when a router is chosen (by the algorithm itself) to act as the last relay, the data in transit is sent in the clear. So, if you set up your own relay, you are able to log all the data transiting through your node, and thus retrieve information people have not encrypted before sending it through the Tor network. The Tor network offers anonymity, not confidentiality! I also read rumors that US governmental agencies may run fake drug websites, so as to get an alarm when a user buys an amount of drugs too large to be for personal consumption.
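The point about the last relay can be illustrated with a toy model of onion routing. This is emphatically not real cryptography (Tor uses TLS and AES, not XOR): it only shows the structure, in which the client wraps the message in one layer per relay, each relay peels exactly one layer, and whatever leaves the exit relay is the original cleartext.

```python
# Toy illustration of onion routing's layered encryption, using a
# simple XOR keystream (NOT real cryptography; Tor uses AES/TLS).
# Each relay peels one layer; the exit relay ends up holding the
# plaintext, which is why Tor gives anonymity, not confidentiality.
import hashlib
from itertools import cycle

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call adds or removes a layer.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

relay_keys = [b"entry-key", b"middle-key", b"exit-key"]  # hypothetical

# The client wraps the message once per relay, innermost layer last.
message = b"GET http://example.onion/ HTTP/1.1"
packet = message
for key in reversed(relay_keys):
    packet = xor_layer(packet, key)

# Each relay in turn removes exactly one layer as the packet travels.
for key in relay_keys:
    packet = xor_layer(packet, key)

assert packet == message  # what the exit relay forwards is cleartext
```

An eavesdropping exit relay, exactly as described above, can simply log `packet` at the end of the loop; only end-to-end encryption (HTTPS inside Tor) would protect the content.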
Can the dark web (the way it operates, protects itself, etc.) teach innocent users of the internet (like me) anything?
Plainly, the recent news about the US PRISM program shows us again that giving up your personal data into the hands of big internet companies is like leaving your luggage in your hotel lobby: you can never be sure it won't be stolen or searched. And what the Tor network (and, by extension, bitcoin) achieves is the possibility of giving us back the power to build the internet as it should be, free and open. Of course, mass media like to make us think the use of these tools is evil and unsafe, whereas it is indeed the safest thing to do.
What did the participants achieve during the workshop?
The workshop was more about awareness, discussion and showing how these various tools work and how to use them in your own practice.
Also part of Sight and Sound, a Montreal festival which, this year, explored the rhizomatic and permeating structures of society's concealed systems: The Pirate Cinema, A Cinematic Collage Generated by P2P Users.
During the Arab Spring in Tunisia, Egypt, and Libya, governments restricted access to the Internet in an effort to hamper online peer networking and thus self-organization. Could other governments ever operate a similar media shutdown and cut their citizens off from the internet?
What would we do if ever an Internet kill switch was implemented in our country? Not necessarily to prevent us from orchestrating riots but to protect the internet "from unspecified assailants".
At the latest graduation show of the Design Interactions department in London, Philipp Ronnenberg was showing three methods to prepare for the time after a cyberwar. The Post Cyberwar Series proposes an alternative open navigation system, a makeshift wireless communication infrastructure, and a novel form of data storage.
The Teletext Social Network enables people to bypass network providers and governmental institutions and communicate using the analogue television broadcast spectrum that was freed last April in the UK.
OpenPositioningSystem relies on the seismic activity produced by generators in power plants, turbines in pumping stations and other large machines running in factories to provide an open navigation system. I interviewed the designer about it a few months ago.
People living in urban areas could use the Sewer Cloud as a living, self-reproducing data network. This living network would be located in the sewerage system and use Anabaena, a genus of cyanobacteria (blue-green algae), for the insertion and extraction of data.
I contacted Philipp again to ask for more details about his project:
Hi Philipp! When i first interviewed you about the OPS, you didn't mention the kill switch. How did it go from one project about positioning system to a more complex scenario in which internet has been killed off? Were you inspired by any particular events from the recent news? I'm thinking of the NSA data collection: isn't controlling the internet and surveilling our every click enough for States?
The kill switch scenario stands for "killing" the Internet. But the Internet is only one network which is under the control of companies and governmental institutions. The kill switch is particularly about the Internet, but other networks such as GPS navigation and mobile phone networks can be affected as well. In all three cases, the GPS navigation network, the mobile phone networks and the Internet, the control is in the hands of companies and governmental institutions.
I wanted to create three independent network alternatives. The body of work wrapped in the series Post Cyberwar is a reflection of how dependent we are today on the authoritarian structures of the networks we are using day to day. It is not only about surveillance and tracking down activity of users, it is also about content which becomes increasingly restricted, censored and monitored. The installation of controlling instances (i.e. kill switch) within these networks is justified with cyberwar and cyber-terrorism.
Controlling the Internet and surveilling our every click is enough for getting an insight. But as we saw in Georgia, Egypt and sometimes China, shutting down the Internet and mobile phone networks (or at least parts of them) is a powerful way to prevent communication and the circulation of undesirable information.
Speaking of OPS, how much has it grown since we last talked about it? Have the prototype and software improved and has the project given rise to attention and interest?
The OPS has grown a lot. First it got attention through your first blogpost, and it was reblogged by some bigger blogs. I got very diverse feedback, from "this is what comes out when art students try to be engineers" (theverge.com comments) to people asking how to get actively involved. I have 80 registered members on the website so far, but there is not much activity yet. I want to spend more time soon bringing new content to the website and thereby activating the registered members. The prototype and the software have improved slightly, becoming more accurate, and I have worked on better tuning to the seismic frequencies.
I gave two talks (#geomob London and W3C Open Data on the Web workshop) about the OPS so far where I tried to convince people to come on board. There is a third presentation at OHM2013 planned.
Is the Social Teletext Network installation at the show a working prototype? Which part of the communication would it replace exactly? I can't believe it could replace all internet communication, it seems to be so rudimentary.
The Social Teletext Network in the show was a demo. But I have the hardware and the software ready to switch it on. The demo in the show was created with the same software that is used in the real setup. Unfortunately it is highly illegal to broadcast your own TV signals, so I decided to show a demo instead. I could apply for analogue (VHF) frequencies, but it is very expensive (too expensive for a student project).
It is not meant to replace the entire Internet. The technical limitations for this task are too high. The Social Teletext Network is capable of providing wireless information streaming, using the old, obsolete teletext technology, which makes it harder to track or monitor. I tried to port some of the comfort we know from computer interaction to the Social Teletext Network. For example: you can zoom into specific regions on a map and visualise user locations and other information.
The Teletext specifications provide a very limited resolution, and it can only display text and graphics programmed with single pixels. Overall, the strength is that you can send and receive information wirelessly and over a distance (5 km, and even more is possible with the right hardware and a high antenna).
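To give a feel for those limits: a standard Level 1 teletext page is a grid of 40 characters by 24 rows, so any "social network" message has to be wrapped into that frame. The sketch below is a hypothetical illustration of the constraint, not Ronnenberg's software; the page number and message are made up.

```python
# Sketch of teletext's display constraint: a Level 1 page is a
# 40-column x 24-row character grid, so messages must be wrapped
# and truncated to fit. Page header and body text are hypothetical.
import textwrap

COLS, ROWS = 40, 24

def render_page(header: str, body: str) -> list[str]:
    """Lay out a header row plus word-wrapped body on a fixed grid."""
    lines = [header[:COLS].ljust(COLS)]
    for line in textwrap.wrap(body, COLS)[: ROWS - 1]:
        lines.append(line.ljust(COLS))
    while len(lines) < ROWS:          # pad the page to its full height
        lines.append(" " * COLS)
    return lines

page = render_page("P100  SOCIAL TELETEXT NETWORK",
                   "User checked in near the transmitter.")
for row in page[:3]:
    print(f"|{row}|")
```

Anything beyond plain characters (the single-pixel "mosaic" graphics mentioned above) works within the same 40x24 cell grid, which is why a map view on teletext is necessarily very coarse.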
Could you explain in more detail the process of inserting data into and extracting it from the algae? Because if i want to retrieve some data, how do i know which algae i should fish, and where?
Text, images, video and any piece of digital data is written in binary code (e.g. 110011110). These 1's and 0's are then encoded into the four bases of DNA (adenine, cytosine, thymine and guanine). The new base string is synthesised into a complete DNA strand and inserted into living organisms. To read data out of a DNA strand, the bases are decoded back into 1's and 0's, and from those into human-readable information.
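The encoding step described here can be sketched very simply: with four bases available, each base can carry two bits. The particular 2-bit-to-base mapping below is an assumption for illustration; real DNA-storage codings are more elaborate (they avoid long runs of the same base and add error correction).

```python
# Sketch of the binary-to-DNA scheme described above: every two bits
# map to one of the four bases. This exact mapping is a hypothetical
# choice for illustration, not a published DNA-storage code.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Bytes -> bitstring -> base sequence (4 bases per byte)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Base sequence -> bitstring -> bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
print(strand)                 # -> CGGACGGC (4 bases per byte)
assert decode(strand) == b"hi"
```

At two bits per base, the density claim in the interview follows from how little mass a single base pair has; the synthesis and sequencing steps are where the real biology comes in.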
As 1 gram of DNA can hold up to 700 terabytes (700,000 gigabytes), the amount of data that you can find in a single piece is very high.
If you inserted data into algae and hid the algae at a specific site, the chance that it stays there is high. It would reproduce itself and the following generations would go on a journey. But if the conditions are good, the origin would stay at the same spot, and you could still find the same data even years after you put it somewhere. So the idea is more that you would know, by location, where to find specific information.