Issue 7 Introduction

© Jack Henrie Fisher and Andrew Clark


MEANINGFUL INFORMATION. Issue introduction by Iker Gil, editor in chief of MAS Context


Since starting MAS Studio a few years ago, the idea of collecting and processing information in one way or another has always been present in our work. We have a fascination with infographics and visualizing data, as well as documenting everyday places and situations through photography. While collecting and processing information have been important parts of the office, sharing it has been a critical one. Hence the effort to produce MAS Context.

"Information is a source of learning. But unless it is organized, processed, and available to the right people in a format for decision making, it is a burden, not a benefit."
William G. Pollard

In this issue we had the chance to interview three designers and artists to learn how they deal with information in their work: industrial designer Naoto Fukasawa, interactive media artist George Legrady, and artist Iñigo Manglano-Ovalle. It was a pleasure to talk to them, to learn about Naoto’s definition of information, the evolution of George’s work from analog to digital, and the social and cultural commentary behind Iñigo’s pieces.

Writer Mimi Zeiger gives her take on the current debate about content versus format. When talking about information, are we talking about the message or the medium? With a background in architecture and urbanism, Javier Arbona explores format, specifically the role of blogging, using the architecture field as a case study. He puts the following question on the table: “Is there a distinction between blogging and designing?”

Editor and writer Jörg M. Colberg addresses content through the meaning of photography and its capacity to portray reality. But is that reality the one we wish images would give us, or the one images can actually carry?

Graphic designer Jack Henrie Fisher addresses content from another perspective, examining typography, reading techniques, legibility, organization and meaning. You also saw his work before you picked up the issue—it’s on the cover—co-designed with our art director Andrew Clark.

How is technology helping us to understand the city? Writer Richard Prouty gives us his perspective on the way cities have been approached in the past and how current geo-positioning technologies are helping to produce a new way of mapping the city. But is data helping us identify its intensities, or its identities?

Two of our contributors are dealing with the ideas of contribution/collection and control/filter in their work. Digital media artist Aaron Koblin draws on contribution and collection to create fascinating projects like “Ten Thousand Cents” and “Bicycle Built for Two Thousand”. Here we showcase four projects that use information in different ways. An advocate for filtering, not just compiling information, is Clay Shirky, one of the most influential thinkers on the social and economic impact of the Internet. The problem might not be the quantity of information produced today, but the filters we put in place.

And working around the concept of identity, artist Nick Gentry uses obsolete floppy disks to produce his own work. The disks, which still contain information, are combined and manipulated to generate a new identity, putting a human face on the information.

So here you have INFORMATION, our Fall 10 issue, one that has tested our capacities and surpassed our previous issues in both contributors and content. I know we are not helping with the overload of information, but I think you are going to find it meaningful.


MAS Studio is a collaborative architecture and urban design firm directed by Iker Gil. MAS Studio takes a multidisciplinary approach to its work, with teams including architects, urban designers, researchers, graphic designers, and photographers among others, in order to provide innovative and comprehensive ideas and solutions.

The Data City

Map of Paris from the “Tweetography” project by urbanTick, Steven Gray and DigitalUrban at CASA


Essay by Richard Prouty, writer and author of the blog One-Way Street


On a clear late summer morning in 1802, the poet William Wordsworth paused upon a bridge before entering London. Before him lay the entire city, visible from end to end.

This City now doth, like a garment, wear
The beauty of the morning; silent, bare,
Ships, towers, domes, theatres and temples lie
Open unto the fields, and to the sky;
All bright and glittering in the smokeless air.
[. . . ]
Ne’er saw I, never felt, a calm so deep!
The river glideth at his own sweet will:
Dear God! The very houses seem asleep;
And all that mighty heart is lying still!

Wordsworth’s sonnet “Composed upon Westminster Bridge, September 3, 1802” marks a key moment in the development of the city. This period was the last time it was possible for a pedestrian to see a major city all at once. Although he probably didn’t know it, Wordsworth was gazing upon the largest city in the world. The year before Wordsworth stepped onto Westminster Bridge, Britain had completed its first national census. A million people slumbered in London on that September morning. Data could see what Wordsworth could not.

“Westminster Hall and Bridge” as drawn by Augustus Pugin and Thomas Rowlandson for Ackermann’s “Microcosm of London” (1808)
© Wikicommons

As cities have grown larger, they have become more interesting to look at and, paradoxically, more difficult to see. In the industrial city, it was important to track mass and energy. In the post-industrial city, information itself becomes important. The contemporary city is the city of data streams, each possessing a rich and confounding flow of numbers emanating from a system that is itself part of a larger transnational system. There are no more Steel Cities or Motor Cities. The post-industrial city has no single identity except as a data city. It exists at the intersection of technologies. Observing the data city means identifying its intensities, not its stable identities.

Contemporary computing systems can track everything in a city except its rats. Millions of people now wander the streets of the world’s cities snapping photographs, sending messages, conducting Internet searches on hand-held devices that can have more processing power than the computers that guided Apollo 11 to the moon and back. These activities can be stamped with precise geo-positioning data, processed, stored, then served up through Internet protocols.

But do these technologies give us a better view of the city than Wordsworth enjoyed standing on Westminster Bridge? Is the city more knowable, or less?

Toward the Radiant City

Throughout the nineteenth century, the British government watched fretfully as industrialization spawned vast slums in its cities. The teeming Victorian slum was the main impetus for the development of city planning as we now know it.

Initially, city planners tackled the problem of substandard housing by building boulevards and parks. The archetype of nineteenth-century urban design was Georges-Eugène Haussmann’s plan for Paris, first developed in 1853. Haussmann demolished the medieval core of Paris and dispersed the working classes to the peripheries of the city, conditions that exist to this day. His wide boulevards, ceremonial squares, and unified apartment blocks became the model for city beautification projects around the world. Derived from the neo-classical and Beaux Arts traditions, these projects reflected Voltaire’s belief that industry and the pursuit of urbane pleasures were the hallmarks of a progressive civilization.

Nineteenth-century census data, however, painted a very different picture. A boy born in Liverpool in 1851 had a life expectancy of 26 years, while his country cousin could live to 57. Life expectancy for Britons born in towns larger than 100,000 actually fell between 1820 and 1830, even as national industrial output went up. Industrial England suffered death rates not seen since the plagues of the Middle Ages. As if the numbers weren’t alarming enough, the Great Stink rose up from the fetid London sewer system in the summer of 1858, fouling the air and reminding everyone of the perils of density.

Census records and improved bookkeeping provided an objective and comprehensive view of the nineteenth-century city. Yet, in the popular imagination, the realm beyond the boulevards was a poisonous labyrinth populated by masses associated with criminality, sedition, and political protest. Charles Dickens described London as “a place of squalid mystery and terror, of the grimly grotesque, of labyrinthine obscurity and lurid fascination.” George Gissing looked at London and saw “the spinnings of a huge, poisonous spider.” [1]

The resolution between dismaying objectivity and uneasy subjectivity came in 1935 with Le Corbusier’s La Ville Radieuse (The Radiant City). Le Corbusier wanted to disrupt existing ways of looking at the city by de-familiarizing it, by stripping the city bare of all traces of history, memory, and desire. He stunned New York reporters in a 1935 news conference by proclaiming, “The only trouble with New York is that its skyscrapers are too small. And there are too many of them.” [2] To Le Corbusier’s remorselessly Cartesian eye, New York is nothing but a grandiose and cataclysmic mess. “On the day when contemporary society, at present so sick, has become properly aware that only architecture and city planning can provide the exact prescription for its ills,” predicted Le Corbusier, “then the time will have come for the great machine to be put into motion.” This is the paranoid city of absolute transparency, in which the master plan guides the very footsteps of its citizens. Le Corbusier believed that planned changes in the environment would be sufficient to produce measurable and predictable changes in people’s perceptions, mental life, habits and conduct. In the Radiant City, “nothing is contradictory any more. Everything is in its place, properly arranged in order and hierarchy.” [3]

Mapping the Data City

In order to build his Radiant City, Le Corbusier wanted to level existing cities and plant fields of mid-rise skyscrapers in their place. This never happened, but new technologies promise to create Radiant Cities out of existing ones. IBM’s Smarter Cities initiative is designed to place even the most unruly cities under the benign control of the server farm. MIT’s Senseable Cities program is a collection of projects intended to put the power of ubiquitous computing in the hands of ordinary citizens. The Senseable Cities home page announces, “The real-time city is now real!” Under the auspices of the MIT program, plain old Singapore will become “LIVE Singapore!” In prose that neatly elides any references to the city-state’s authoritarian government, the website describes the program as “Developing an open platform for the collection, elaboration and distribution of real-time data that reflect urban activity. Giving people visual and tangible access to real-time information about their city, allowing them to take their decisions more in sync with their environment, with what is actually happening around them.” LIVE Singapore! has not yet gone live, but its promise is very exciting. Armed with real-time data about our urban environment, we will no longer get lost or feel frustrated. Alienation will be a thing of the past.

Or so one hopes. Experiments in creating real-time views of the city are well underway, and so far the results are disappointingly impersonal. The Centre for Advanced Spatial Analysis (CASA) at University College London is a more modest version of MIT’s Senseable Cities program. Fabian Neuhaus, a CASA researcher, specializes in urban maps constructed by geo-positioning data “harvested” from Twitter. His maps are landscapes of tweets. The peaks represent the most intense Twitter activity, while valleys and plains indicate lesser activity. Neuhaus calls his Twitter maps “cycle studies” to stress the dynamic nature of his models. “Cycle studies are the science of everyday life, as normal as it gets,” Neuhaus explains. “Its focus is the daily routine, with its habits and rhythms as they occur in most citizens’ lives. It is the power of the normal that brings stability and the routine that ensures security.”

“Tweetography” maps of London (top), New York (center) and Munich (bottom)
© urbanTick, Steven Gray and DigitalUrban at CASA

The Cycle Studies maps show that people tweet a lot near Times Square in New York and in Soho in London. In other words, Neuhaus has verified that people tweet most often in exactly the places we would expect them to, that people feel compelled to express themselves most strongly around the totemic places of the city. The most recent form of mass communication is transformed into a pre-historical landscape of active volcanoes, a Mannahatta of the digital age.

Wordsworth’s privileged instant of space and time, a one-shot view of London before it resumes its furious bustle, attenuates into a slow accumulation of data over time, precisely distributed over imaginary space.

A more sophisticated synthesis of the dryly academic and the vaguely mystical can be found in the “Geography of Buzz” constructed by Elizabeth Currid of USC and Sarah Williams of Columbia University. [4] Using geo-positioning data from Getty Images, their Geography of Buzz maps out patterns of cultural consumption. Events range across a variety of cultural activities, including film screenings, concerts, fashion shows, gallery and theater openings. Currid and Williams found, to no one’s surprise, that buzz tends to be centered around well-established marquee venues such as New York’s Lincoln Center and the Kodak Theater, where the Oscar ceremonies are held. Allegedly cool Brooklyn remains dark, its buzz too weak to register.

Currid and Williams’ study is less a document of cultural consumption than a way of seeing the city. The researchers developed a method for gathering and modeling what Williams calls “shadow data,” or “the traces that we leave behind as we go through the city.” By capturing and modeling the traces of daily life, Currid and Williams have opened up a whole new realm of investigation, using data to make visible something ephemeral and abstract, to discover the agitated hearts of America’s twin culture capitals. New York and Los Angeles are publicity cities, vast fields of murmuring punctuated by outbursts of delirium.

By pinpointing buzz, one more mystery of the city has been cleared up. Access to mass data sets, once reserved for large corporations and intelligence agencies like the NSA, has been placed at the disposal of the humble social scientist. Currid and Williams have verified that it is possible to architect mass social experiences, just as Le Corbusier predicted.

“Geography of Buzz” by Elizabeth Currid and Sarah Williams
© Maps by Sarah Williams and Minna Ninova, Spatial Information Design lab, GSAPP, Columbia University

Topologies of the Data City

As formerly hidden realms of the city become exposed to the technological eye, urban planning and architectural practice have adjusted. Several commentators have credited Google Maps and Google Earth with initiating the current vogue for green roofs. The same technologies have habituated us to seeing cities as clusters of hyperactivity in a continuous flow of territory. We never quite step off Westminster Bridge. We merely zoom in for greater detail.

There are two potential consequences to the modeling of real-time data. One is that the city will assume the same shape as Haussmann’s Paris, with a data-rich center surrounded by information deserts. Because hand-held devices aren’t universally available, and some districts don’t produce enough data to satisfy the algorithms, entire sections of the city will remain anonymous and mysterious, just as they were in the nineteenth century. Everything will appear in its proper place, as Le Corbusier wished, except the hierarchy will reflect the irrational industrial city he deplored.

But there’s a more optimistic consequence as well to the new data modeling techniques. Data flows traverse all boundaries, so cities will appear more integrated with their surrounding suburbs and countryside. The old city versus country antagonism will be revealed as the illusion it has been since Wordsworth’s day. There will be visual evidence that the city isn’t an exception to the suburban/rural norm. Cities won’t be realms of separate meanings, just intensities of meanings available elsewhere. Yet, at the same time, as they burn more brightly with data, cities can once again assume the role Voltaire assigned them: as the hallmarks of a progressive civilization.



1. George Gissing, “Bleak House,” in The Immortal Dickens (Whitefish, MT: Kessinger, 2004), p. 71.

2. Qtd. in Peter Nash and Norman McGrath, Manhattan Skyscrapers (Princeton: Princeton Architectural Press, 2005), p. 93.

3. Le Corbusier, The Radiant City (New York: Orion Press, 1967), p. 163.

4. Melena Ryzik, “Mapping the Cultural Buzz: How Cool Is That?,” The New York Times, April 7, 2009, sec. Arts / Art & Design.


Richard Prouty earned his PhD in English and Film Studies from Temple University. His critical essays have appeared in The Journal of Modern Literature, Film Quarterly, and Cinema Journal. His most recent publication was an essay on Rem Koolhaas’s concept of the generic city, which appeared in Static. | @rmprouty

The Architecture of Information

© MAS Studio


Information compiled by MAS Studio



MAS Studio is a collaborative architecture and urban design firm directed by Iker Gil. MAS Studio takes a multidisciplinary approach to its work, with teams including architects, urban designers, researchers, graphic designers, and photographers among others, in order to provide innovative and comprehensive ideas and solutions.

Intuitive Design

© Luminaire


Andrew Clark and Iker Gil interview Naoto Fukasawa on the occasion of his lecture and exhibition in Chicago hosted by Luminaire


Product designer Naoto Fukasawa has won international acclaim for his designs that address the gaps in our everyday lives. His award-winning body of work includes products for brands such as B&B Italia, Driade, Magis, Artemide, MUJI, and Plus Minus Zero.


Naoto Fukasawa
© Andreas E.G. Larsson


IG: What is your definition of information?

NF: For me, information has two meanings: one is media information, like newspapers, magazines, the Internet, that sort of thing. But I also understand information as all the things that we feel naturally, like touching a table.

AC: Whether received or produced, what do you consider the simplest form of information?

NF: I am always interested in small things that have a big impact. For example, if someone is wearing a new pair of shoes and I see them, I might say, “That pair of shoes will be popular”. I don’t know why, but I can really feel it. In the same way, I always try to find that special element that acts as a kind of seed. That’s the element I search for the most. There are two ways of receiving information: one is visual, to see something through the eyes, vision and mind together; the other way, while I concentrate on other things, my whole body is searching without thinking—looking at the architecture, for example. I am the type of person who can receive the information both ways. While I am talking, I am also finding something interesting—that is my uncommon way to get information. I really like that.

AC: You have said in previous interviews, “I think objects or things are shifting toward the surrounding walls for integration or otherwise into our body for integration. Maybe only things that are necessary to physically exist will stay, and all others will be integrated as functional elements.” How in your work do you design towards this integration?

NF: Technology is moving in a specific direction. A TV used to be a huge box, for example, but now it is really thin. That is an obvious, inevitable goal: to make a TV thinner so it can be part of a wall. I am using a wall as a metaphor, not as the real physical wall. All of the products will be going either to the architectural wall side or the human side. The same thing happened with the telephone. It used to be a big object located on a table, but now it is a small thing in your ear. In this case, it has moved to the human side. The new technology will push things so they disappear in either of those sides. But some objects will remain. Chairs still exist because we need them, the table will be here…. they have existed for a long time and they will still exist. But it is important to understand that the rest of the objects are going to inevitably disappear. That is why you don’t design a very massive TV anymore. Even without design, technology will push products in those two directions. That is my basic understanding of the standing position of the object.

IG: However, when you were presenting your projects earlier today in the exhibition at Luminaire, there was this sense that, even though the technology can make things thin, people still want to experience the familiar image of an object.

NF: That depends on the use we are giving to the object. If you carry the information with you, like with an iPad, it is really good that the technology allows you to make it thinner. But if you are lying on the bed watching TV, you want a bigger object that is stable on a rough surface rather than a very thin one. It depends on the way you are going to use the product. Probably, if you are going to carry around the object, the important characteristic is to be thin. When technology can make things thinner, it allows you to do anything. It gives you freedom. You can decide to make the object bulky or thin. However, some people think that just because technology can make things thinner, they can only do it thinner.


Naoto Fukasawa exhibition at Luminaire store in Chicago, 2010
© Luminaire


AC: “InfoBar” and “Neon” are both personal devices you designed that were more than cell phones. Each one pushed the direction of the “information device that respects the individual,” from graphic, function, and object perspectives. How do you approach the idea of communication and information with something like a phone next to the body?

NF: I think that if you are a product designer, you always want to push things to the limit. You want the frame to be smaller, and smaller, and smaller. In the end, you don’t want to have any kind of frame, so that the surface becomes information. That is the goal. That is why I wanted to create a telephone as information. The entire surface should be information, from the vibrations and tactile feedback to the lighting or images without any frame. You currently have to have a frame, as the technology is not there quite yet. But that is a very inevitable goal. Now we just have edges of a few millimeters, but it is still a frame.

AC: And that is important in the discussion of the person and the object. By eliminating that plane, the screen and the frame are creating a surface, and there is perhaps a closer connection between the object and the body.

NF: No, no, you only need graphical information. Why do you need a machine? If you have a projection, you don’t need to have a machine everywhere all the time. You don’t want to have any physical machine. You only need the information. That is the goal; the machine is not a goal anymore. That is a very important thing.



IG: We probably don’t need an instruction manual, either. People should know how to use the object instantly; it should be a natural behavior.

NF: Right.

IG: In a way, we need fewer graphics and more intuitiveness. Do you think that is the tendency? Can we say that graphics mask bad design?

NF: I think part of the information is received visually, but the whole body also receives other types of information. Even when I see some particular information on devices, they also provide other types of information through our other senses. It is no longer just visual information. Like an iPhone or an iPad, the interface allows us to use our other senses to receive information. Your brain is way ahead of the technologies. Once you have experienced an object, you understand immediately the way it functions and its potential. For example, if I want to put a nail here in the table, I need to hit the nail. If I don’t have a hammer, someone would suggest using my iPhone as a hammer. Of course, we wouldn’t use it directly as a hammer, but we can use it to connect to a website to find out a way to hit a nail. That is the idea of the new interface: it is not just a direct reading of the objects. Before, everybody would have thought that the phone was the tool to hit the nail, but now, everybody knows that is the way to create. Of course, I am exaggerating a little bit, but the idea is that the object is the tool that allows you to think.

AC: In your work exhibited at Luminaire, as well as the work in the exhibition “Supernormal”, there is a clear direction: it seeks harmony, it is refined, you can’t take anything out, otherwise it will ultimately fall apart. You also have to be able to feel the object. How do you approach the idea of awareness or thought as someone uses the object, for example, with the clock, or how someone puts the umbrella in the stand?

NF: As an animal, your body is already aware of those things without having to think about it. Your body is already very smart; there is no need to create anything. It naturally or spontaneously understands the things around you. That is why, like in “Supernormal,” there are objects that you use immediately as normal products: to write, to eat… without thinking. That is the perfect relationship. However, your mind sometimes breaks apart that perfect interaction, for example, because you have a good design mind. Before you naturally choose a pen to use it, you look at the design of the pen first. You see the pen and you say, “that’s a nice pen,” and that is why you want to use that one and not another one. But that’s wrong! Your body is honest, but you are fooled by your mind. That is the bad part of design.


Naoto Fukasawa exhibition at Luminaire store in Chicago, 2010
© Luminaire


AC: So, sometimes we get in the way.

NF: Yes, design sometimes makes you very confused.

IG: When you started your own office, you wanted to create objects that seemed like they already existed, but they didn’t exist. Can you talk about one or two products that exemplify that, like the umbrella that we mentioned before?

NF: There are a lot of elements around you, and like a jigsaw puzzle, sometimes you are missing only one piece. You gather all the pieces and, if you are missing one piece, you know exactly that is the one you need. In reality, that is the piece that everybody is missing. We all share many elements in our life because we have the same bodies, the same environments, so the missing part is something that we share with other people. That is why I have to find out this part to design something that fits exactly in the hole. Then everybody says, “That is what I want to have.” This is similar to when you cook. Sometimes, you know that there is an uncomfortable element, like the handles, the chopping board …something. You can’t really identify it, but your memory is already recording that uncomfortable element. If you have the chance to have the new cooking tool that solves the problem, it brings back the memory that you had already recorded. It means you had already created the new tool you wanted to buy, but you couldn’t identify it at all. There is a kind of time gap. That is the type of situation where you get the information.

IG: Can you talk about your roles at MUJI?

NF: MUJI is a brand and a philosophy shared by all people. It is not just that one company established the brand. It is the kind of form or desire that everybody has. Right now, everybody is getting more and more special and individual things for their life. But when you have too much, you want to be on the quieter side. MUJI decided to be on that quiet side, and everybody said, “Yes, we can be on that side.” So MUJI is always on that side, it is the only company. That is why everybody cares that MUJI has to remain being MUJI. Sometimes MUJI also goes to the other side, so my role is to discuss if a product belongs to the MUJI philosophy. And that is a very, very tricky part, because they have a huge business. I know that sometimes it is a successful product, but we have to be patient to do the right thing. That is the type of conversation we have regularly, every week. It is a very important part of the role as a design advisor for MUJI. The other one is just design director and designing products for MUJI.


Naoto Fukasawa exhibition at Luminaire store in Chicago, 2010
© Luminaire


AC: In a similar role, you also launched the company Plus Minus Zero.

NF: In Plus Minus Zero I am more involved. I am the person who created the brand itself. One of the business people had a similar approach and suggested creating a brand. But actually, the financial role has changed three times because they all had some other troubles. Only one person still keeps the original philosophy, and that is why the brand still exists. But financially it has changed, which is quite common in the design world. Now we are trying to make a new product collection to be introduced next year.

AC: In the information age, we have seen an explosion of information that is growing faster and faster. Plus Minus Zero is reacting against the overflow of things in the world. So, when you approach Plus Minus Zero with so many things around you, how do you work the puzzle that you mentioned earlier, how do you see through all the things?

NF: If you think about MUJI, a MUJI product has to be functional. If a product breaks and you need to replace it, you buy it at MUJI. There is no desire or any personal motivation in that purchase. But object and life are not only those functional uses. Even if you already have a product and you don’t need another one, if you see the Plus Minus Zero product, you say, “I want to have it.” It is a tiny bit more than what you need. Of course, people also want to have a MUJI product this way. But MUJI is perfect and quiet. That is why they say, “This is a functional product, I need it”. But this other one is the one that involves desire. It is the one that you want to have. It is a little different. The shapes and colors are a bit more radical than the MUJI ones, too.

IG: You have a special relationship with your collaborators in your office. Does that allow you to work with a reduced number of people?

NF: When I opened my own studio, I decided that I was going to be the only designer. I was going to design everything. Of course, there are young designers who want to join us, so I tell every young designer, “This is my studio, not a company. I decided I want to do everything. Are you okay with that?” And everybody said, “Okay.” But of course, we have so many projects that I cannot work on all of them by myself. I am directing, I am the creator of the images and ideas, but we share some of the ideas with the rest of the people in the studio. Every day we learn from each other to focus on the inevitable objects. If we see the objects clearly, we don’t need to go the long route to reach them; we can be more direct. Sometimes it is very difficult to communicate with young designers who don’t really see the inevitable things, they are very young and they are not experienced. They make many wrong things and I have to say, “Why do you do such kind of things?”

IG: Can you share with us some of your current projects?

NF: The new product collection for Plus Minus Zero is quite large. We have fifteen to twenty clients, which is quite a large number. Some products are in the electronics industry, like smartphones, media… those sorts of things. We are working on really diverse products, from the furniture industry to the high-tech industry. We are not directly involved in the automobile industry, but some projects are somehow related. This brings up something important. Life is shifting from owning to sharing objects, like cars. This is happening particularly with young people in Japan, who are choosing not to buy or own a car because parking is expensive, insurance is expensive, and of course, the car is expensive. When you think about it, it is a total waste of energy and money to drive a short distance every day. People are beginning to think about ways to share things, not only the car but all the products around us. They should be shared and not owned. That is why designing for personal use is less important than becoming more public. We need to have a balance between the personal things and the public things. Right now, some places, particularly Japan, are too focused on the individual and not really looking at the global picture. Our role is shifting a bit in that way. Someone has to think about ways in which everybody can be better at sharing things.

IG: We should be looking into ways of benefiting the community rather than the individual.

NF: Right. That’s a very important thing as a designer. Until now, the designer was focused on answering the individual desires, but that is too much now. MUJI is one of the answers to make the same things for everybody. It is a product that is quiet. I think that is going to be an important trend.


Naoto Fukasawa is a product designer who established Naoto Fukasawa Design in 2003. Representative works include MUJI’s CD player (part of the permanent collection of MoMA, New York), the mobile phones “Infobar” and “neon,” and the Plus Minus Zero brand of household electrical appliances and sundries.

Andrew Clark is a designer at MINIMAL and a collaborator in MAS Studio. He has designed solutions for communications, brand, vision, experience and visualization projects. His work is featured in “Shanghai Transforming” (Actar, 2008), “Building Globalization” (UChicago Press, 2011), and “Work Review” (GOOD Transparency). | @andrewclarkmnml

Iker Gil is an architect, urban designer, and director of MAS Studio. In addition, he is an Adjunct Assistant Professor at the School of Architecture at UIC. He is the recipient of the 2010 Emerging Visions Award from the Chicago Architectural Club. | @MASContext

Depicting Patterns

“House of Cards” for the band Radiohead
© Aaron Koblin


Visualizations by Aaron Koblin, artist and technology lead of Google’s Creative Lab


Digital media artist Aaron Koblin uses crowdsourcing to collect information and create some of his most recent projects, including “The Johnny Cash Project,” “Bicycle Built For Two Thousand,” and “Ten Thousand Cents.” In 2008, he was the director of technology for “House of Cards,” the groundbreaking ‘music video without video’ for the band Radiohead. Throughout his visualizations, he turns massive amounts of information into art.



Lasers and sensors are used to scan the band Radiohead into a three-dimensional particle-driven data experience. The code and data are launched on Google Code as an open source ‘music video without video’ project.


Full credits for this project can be found at



The paths of air traffic over North America visualized in color and form from data provided by the U.S. Federal Aviation Administration.

A collaboration with Wired Magazine and FlightView Software, these flight path renderings show the altitudes, makes, and models of more than 205,000 different aircraft being monitored by the FAA on August 12, 2008.


More information and animations can be found at



Visualizations for the New York Talk Exchange, a project by the Senseable City Lab at MIT for the MoMA. New York Talk Exchange illustrates the global exchange of information in real time by visualizing volumes of AT&T long distance telephone and IP (Internet Protocol) data flowing between New York and cities around the world. Historical visualizations include a distorting world map and borough view illustrating which cities talk with which parts of NYC.


Created with Kristian Kloeckl, Andrea Vaccari, and Francesco Calabrese.



Ten Thousand Cents is a digital artwork that creates a representation of a $100 bill. Using a custom drawing tool, thousands of individuals working in isolation from one another painted a tiny part of the bill without knowledge of the overall task. Workers were paid one cent each via Amazon’s Mechanical Turk distributed labor tool. The total labor cost to create the bill, the artwork being created, and the reproductions available for purchase are all $100. The work is presented as an interactive/video piece with all 10,000 parts being drawn simultaneously.

The project explores the circumstances we live in, a new and uncharted combination of digital labor markets, “crowdsourcing,” “virtual economies,” and digital reproduction.


A collaboration with Takashi Kawashima.



Aaron Koblin is an artist specializing in data visualization. His work takes social and infrastructural data and uses it to depict cultural trends and emergent patterns. Currently, Aaron is Technology Lead of Google’s Creative Lab. | @aaronkoblin

Making Visible the Invisible


“We are Stardust” Universe Space, 2008
© George Legrady


Iker Gil interviews interactive media artist George Legrady


George Legrady’s work spans almost four decades, ranging from analog photography to digital interactive installations. A pioneer in embracing computers in his artistic work, with projects like “Pockets Full of Memories” and “Making Visible the Invisible” he has become a point of reference in the field. Iker Gil interviews George Legrady to learn more about his development as an artist, some of his most important projects, and his current interests.


IG: You studied Photography and Visual Anthropology and received a Masters of Fine Arts degree. At that point in your career, your research work “was based on a theoretical and analytic examination of the conventions by which photographic images conveyed meaning.” Can you talk about your influences, and the projects you developed then and how they dealt with information?

GL: I began in fine arts photography in the early 1970s, a time marked by McLuhan’s vision of “the medium is the message,” when optical-mechanical devices were also understood by some to be socially critical tools for cultural change. Throughout the seventies and eighties, I explored various forms of camera-based image making, from documentary photography, where the subject matter was the primary focus, to formalist photography, which prioritized the exploration of formal structure and visual complexity over subject matter. This led to a conceptual approach where concepts and propositions dictated the image prior to its creation, which in turn transitioned into a practice labeled “fabricated photography”: events staged in front of the camera, usually a large format camera, with objects, constructed elements, and props assembled to stage a scene.

My first significant project was a photo documentary realized in northern Quebec in 1973, titled “James Bay Cree Documentary” [1]. I was invited by the Cree to create a visual cultural study of their way of life to promote visibility in support of their legal rights. I came to the project with the intent to classify the various aspects of Cree culture, such as social rituals, activities, architecture, portraits, etc., based on my studies of some major photography projects that entered the fine arts tradition, in particular the FSA project and Walker Evans’s work [2]. In the process, I became aware of the extent to which a documentary photograph is the result of various forces: the subtle negotiations of the image maker with the subject, the cultural-information baggage that guides disposition, and the determining influences of chance and circumstance.



“James Bay Cree Documentary”, 1973
© George Legrady


The “Urban Nature” project that followed concentrated on pictorialism, the study of form, structure, and sharpening one’s formalist skill sets, and in the process evolved into more conceptual works like “Floating Objects” [3] and “Catalogue of Found Objects” [4] which explore the higher-level questions of “what is an image,” “how does the image mean,” and “is the image true?”



“Catalog of Found Objects”, 1975
© George Legrady


“Everyday Stories” [5] was realized in the studio, in large format photography. It was analytic in approach, and based on a discussion at the time about the degree to which the photograph could be considered a language, one demonstrating the fundamental syntactic rules and structures of linguistic processes. “Everyday Stories” is a work that embodies the theoretical and analytical examination of the relation between image and caption: Four sets of still lifes featuring possible configurations of image with text, exploring the narrative potential and syntax by which images convey meaning. The first set, “Everyday Stories,” is comprised of arranged objects juxtaposed against texts from a primary school manual where the text loads the image with meaning. “Theoretical Studies” combines propositional statements with images that function to establish to varying degrees the statements’ accuracy. “Image/Text Series” consists of intentionally out of focus images with blank text panels, reducing the meaning potential of both the visual and the linguistic, and “Object Narratives” is composed of still life compositions of objects in configurations suggestive of narratives, but without the aid of text captions to ground the meaning in a predetermined way. All of the objects I used for the four sets of “Everyday Stories” consisted of things lying around inside and outside the studio, a collection of odds and ends, plastic objects found at goodwill, detritus, etc.



“Everyday Stories”, 1980
© George Legrady


Many of the staged photographic composition projects of the early eighties, and the earlier “Catalog of Found Objects,” use objects in relation to each other to create meaning, charging one another through juxtaposition. At the time, the organizational principle was derived from what the Structuralist anthropologist Lévi-Strauss described as ‘signification at the level of sensible properties’ [6], whereas today we contextualize with metadata and its syntax: a system of measurable classification based on properties and attributes that can be numerically evaluated for the purposes of comparison and classification. The underlying premise of these photographic projects was to approach visualization and aesthetics according to questions about the nature of the medium: how does a photograph create meaning, create presence, and function as a rhetorical device for articulating a cultural perspective? This analytical approach has guided my exploration of the digital.

Shortly after completing the “Everyday Stories” project, I moved to La Jolla, California in 1981 and was introduced to computer programming by the pioneer painter and artificial intelligence-based digital artist Harold Cohen at UCSD [7]. I continued the staged photography projects parallel to acquiring skills in computer programming, and had to wait six years until the first accessible digital imaging system became available [8].

IG: At which point did you decide to start developing interactive media installations?

GL: The shift from analog photography to the digital image pushed the envelope for integrating and staging the exhibition space, as I also wanted to feature the technological machines in the gallery to reveal the process of digital image making [9]. But it was not until large projected digital images became available that the work was transposed into the interactive installation format, where spectators could witness each other’s different content outcomes based on their own selections of topics in the interactive artwork. The 1993 interactive work “An Anecdoted Archive from the Cold War” was my first work to integrate the gallery space with a large cinematic-scale projection image and a mouse stand positioned in the center where the public would interact with the computer. The gallery was painted an overall grey color and a “table of contents” text was stenciled large-scale on the wall [10] to underscore the archive reference of the project.

IG: You describe your work as having an “emphasis on aesthetic research through the implementation of complex technologies for new forms of content, narratives, experiences and analysis.” This process requires a collaborative approach where research, programming, and visualization need to work together. Can you talk about this process?

GL: In the early stages of access to digital imaging systems in the mid-1980s, one had to custom-write the most basic visual processing functions, which resulted in a lot of technological exploration and invention. At the time I was intrigued by image processing techniques derived from Shannon’s Information Theory discussion of noise, and by an article by Leon Harmon on face recognition [11]. I learned how to write convolution processes with which to transform and generate new images, exploring the metaphoric and narrative potential of the source of these algorithms: surveillance and space technology. The technological transformation of the social infrastructure peaked during the 1990s with the introduction of the Internet. Technological innovations and greater bandwidth advanced complexity in the production of digital artworks, resulting, in many cases, in the distribution of the work into team-based efforts. I have been fortunate to work with talented graduates, and much of my effort has shifted to the conceptual and aesthetic components of a project, its management and funding, etc. The challenge is in arriving at the right balance between maintaining the integrity of the aesthetic direction and incorporating the contributions of collaborators who, in the process of resolving engineering issues, are also bringing conceptual and aesthetic solutions to the project.
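The convolution processes Legrady describes can be illustrated with a minimal sketch. This is a generic 2D convolution in Python, not his original 1980s routines; the function name and the box-blur kernel are purely illustrative.

```python
# Minimal sketch of 2D image convolution of the kind described above.
# A kernel is slid across a grayscale image; each output pixel is the
# weighted sum of the pixels under the kernel. Illustrative only.

def convolve2d(image, kernel):
    """Apply a rectangular kernel to a 2D grayscale image (lists of lists)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 box blur: each output pixel is the mean of its 3x3 neighborhood.
blur = [[1 / 9] * 3 for _ in range(3)]
```

Swapping the kernel (an edge detector, a sharpener, deliberate noise) changes the transformation, which is what made convolution a flexible tool for generating new images from photographic sources.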



“An Anecdoted Archive from the Cold War”, 1993
© George Legrady


IG: The project “Making Visible the Invisible” was conceived for the Seattle Public Library. Running from 2005 through 2014, it visualizes the circulation of books in and out of the library’s collection. How did this collaboration with the SPL start and how does the project work?

GL: “Making Visible the Invisible” [12] was selected by the library board of trustees in response to an open call by the Seattle Arts Commission. The selection process for this commission was unique in that the finalists were introduced to the library during a weeklong residency. The artists got to study the architectural spaces, meet with the construction architects [13] (LMN), and learn the operations of the library. We met with specialists in each area, from the director to librarians, security, and IT, including library staff and maintenance. My concept addressed the library as an “information exchange center,” focusing on the library as a spatially fixed but informationally fluid environment where patrons could retrieve information; in the process, I would pick up their traces, aggregate their choices, and do a statistical analysis to map out the community’s reading and viewing interests. The library uses the Dewey Decimal Classification System, which allows for a precise numerical classification of books and DVDs. The Dewey is a hierarchical, tree-branching structure consisting of ten main classes [14], each divided into ten divisions, with additional sub-sections so that all subjects and topics can be classified. Oddly, the Dewey excludes fiction, but includes CDs and DVDs.



“Making Visible the Invisible”, 2005-2014
© George Legrady


I had to convince the overextended IT department that this project was robust in its engineering and would not compromise the integrity of the library’s IT infrastructure. We eventually worked out a precise scenario where the data would be retrieved every 30 minutes in XML format [15], with all personal information shaved off, so that privacy would be protected. A year and a half was invested in prototyping, with digital media designer/artists Andreas Schlegel and August Black exploring various visualization techniques, going through the literature of information visualization from data-driven abstraction to basic histograms. The final version was produced in the summer of 2005 by artist-engineer Rama Hoetzlein and his partner Mark Zifchock, who tested the visualizations while producing the dataflow infrastructure.

The system consists of a server that gets the data every 30 minutes, parses it, and then stores it in four time scales (day, week, month, year), so that any data can be retrievable over the ten-year period all the way through 2014. The daily number of transactions is around 20,000, with peak activity between noon and 5pm. The server software prepares the data for visualization that takes place on three computers, each of which has two screens connected to it. The visualization software is responsible for: a) checking for available data; b) loading hourly data; c) displaying graphics; d) synchronizing the displays as all six screens must function together; and e) switching between multiple visualizations.
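The aggregation step of the pipeline described above, binning each checkout event into the four time scales so that any period remains retrievable, could be sketched roughly as follows. The field names, key formats, and event shape are assumptions for illustration, not the actual Seattle Public Library schema.

```python
# Rough sketch of the four-time-scale aggregation described above.
# Events arrive as (timestamp, dewey_class) pairs, personal data
# already stripped. All names here are hypothetical.
from collections import defaultdict
from datetime import datetime

SCALES = ("day", "week", "month", "year")

def bucket_key(ts: datetime, scale: str) -> str:
    """Reduce a timestamp to its bucket key at one time scale."""
    if scale == "day":
        return ts.strftime("%Y-%m-%d")
    if scale == "week":
        iso = ts.isocalendar()
        return f"{iso[0]}-W{iso[1]:02d}"
    if scale == "month":
        return ts.strftime("%Y-%m")
    return ts.strftime("%Y")  # year

def aggregate(events):
    """Count checkouts per (scale, bucket, dewey_class) so any
    day, week, month, or year can be queried later."""
    counts = defaultdict(int)
    for ts, dewey in events:
        for scale in SCALES:
            counts[(scale, bucket_key(ts, scale), dewey)] += 1
    return counts
```

Storing all four scales at write time trades storage for fast retrieval, which matters for a display that must refresh against ten years of accumulating data.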

There are four visualizations that cycle continuously one after the other, each lasting approximately three minutes and featuring aggregated data from the previous hour’s activity. We inserted this time gap to maintain some distance between the checkout event and its representation so that patrons’ privacy would be further protected. The first visualization, “Vital Statistics,” consists of a literal representation of the data, numerically comparing books to non-Dewey items, CDs, DVDs, etc. Each screen shows the totals since the morning and over the last hour. “Floating Titles” presents the hourly activity of titles in chronological sequence, with titles color-coded to distinguish books from DVDs and CDs. “Dot Matrix Rain” provides an overview map of the hour’s activity across the whole Dewey system, with non-Dewey titles briefly visible as they drop from the top of the screen and fade at the bottom. These three visualizations were resolved over the summer while the system was being produced, whereas the most complex visualization, “KeyWord Map Attack,” was created on site at the end of the week-long installation. The animation consists of color-coded words thrown on the screen and spatially localized based on each word’s summary of Dewey classifications. This is done by keeping track, in a multi-dimensional database, of each word’s occurrence and its usage in each of the Dewey categories.
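The bookkeeping behind “KeyWord Map Attack” might be sketched like this: each title word accumulates a profile of the Dewey classes it has appeared under, and its screen position is the usage-weighted centroid of anchor points for the ten main classes. The anchor layout, names, and data shapes are assumptions for illustration, not Legrady’s implementation.

```python
# Illustrative sketch of spatially localizing a word by its Dewey usage.
# All names and the anchor layout are hypothetical.
from collections import defaultdict

# Hypothetical screen anchors for the ten main Dewey classes (000-900),
# laid out left to right on a shared horizontal band.
ANCHORS = {str(c * 100).zfill(3): (c * 100.0, 50.0) for c in range(10)}

# word -> {dewey_class -> occurrence count}
word_profiles = defaultdict(lambda: defaultdict(int))

def record(word: str, dewey_class: str):
    """Track one occurrence of a title word under a main Dewey class."""
    word_profiles[word][dewey_class] += 1

def position(word: str):
    """Place the word at the usage-weighted centroid of its classes."""
    profile = word_profiles[word]
    total = sum(profile.values())
    x = sum(ANCHORS[c][0] * n for c, n in profile.items()) / total
    y = sum(ANCHORS[c][1] * n for c, n in profile.items()) / total
    return (x, y)
```

A word used mostly under one class sits near that class’s anchor; a word spread across classes drifts toward the middle, which is what makes the map legible as a summary of usage.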

IG: How do you expect both the visitor and the workers/management of the library to react to the information that you are visualizing? Do you intend it to be understood as a piece of art or as a tool to understand trends and uses that can influence the way the library can be organized?

GL: The artwork situated behind and above the central information desk is meant to function as both an aesthetic experience and informational resource. Librarians appreciate getting the overview of what is taking place at the moment and how it compares over time. Patrons and visitors tend to be thrown off at first, until they understand how to make sense of the animations and, once they do, the animations become a form of intertextual browsing. Not only do the titles flashing by awaken interest in specific topics, but their juxtaposition next to other unrelated titles leads to additional inquiries that send the patrons into the stacks for further browsing.

One of the lessons of the “Pockets Full of Memories” [16] (2001-2007) installation is that any artwork that functions to gather data [17] creates through necessity another artwork, consisting of the analysis of the collected data.

IG: This direct relationship between the audience and the installation is explicit in some of your projects, for example in both of your “Pockets Full of Memories.” Do you approach installations in a different way—content, visualization, process—that need the contributions from the audience versus those that don’t require their active participation?

GL: Each new project tends to be an outgrowth of a previous work, trying to address some unresolved issue or to redirect the focus to a different level of the content or aesthetic experience. My interest in interactivity has been to create an experience that prioritizes semantic constructs rather than phenomena. This approach requires more work from the audience, and that in itself is a reason to collect the audience’s contributions and to provide an overview map of how each venue has inscribed the work with its particular set of choices and contributions.



“Pockets Full of Memories”, 2001-2007
© George Legrady


“Pockets Full of Memories” installation at AURA exhibition, Budapest, 2003
© George Legrady


IG: The way you visualize the information that you research is a project in itself. How do you decide what is the best interface for each project? Is there a hit-miss approach, where several interfaces are tested for the same information? Do you decide first the information and then its visualization or vice versa?

GL: The visualization always comes out of the study of the information. It evolves primarily through iterative prototyping, but is also impacted by assumptions about the venue and the intended audience, and then hybridized through the contribution of the collaborators and conversations with the curators and clients. Chance and circumstances do play a part in the final outcomes, but they are guided by conceptual and aesthetic expectations that I bring to each work. Even though the various projects may look different, there are underlying threads that go back to earlier works, each new project engaging a problem to be resolved, and in the process revealing and generating a new set of expectations.

IG: Uncovering and mapping information have the potential to clarify complex systems and help understand behaviors and trends. What do you think is the potential of mapping and how do you think it is going to evolve in the near future?

GL: Our exponentially increasing production of data forces us to invest significant efforts into its classification, analysis, and preservation. Information Visualization provides the access for sensemaking and knowledge transfer, but it is not a neutral process. My sense is that Information Visualization may shortly initiate the same type of highly active academic theoretical analysis that was brought to the photographic image in the 1980’s, exemplified by Roland Barthes’ articles such as “Rhetoric of the Image” [18].

IG: Which projects are you currently working on and what are the main aspects that you are interested in exploring in each one of them?

GL: This winter I exhibited at the Vancouver Olympics a project [19] visualizing the history of observations by the sun-orbiting Spitzer satellite. The work resulted from a collaborative invitation by the Art Center College of Design in Pasadena and the NASA Spitzer Science Center at the California Institute of Technology. This satellite telescope went into orbit in 2003 and completed its mission in 2009. The installation consists of two projections on opposite walls of the gallery. One wall maps the sky locations of all 36,000 observations, with the intent of giving an overview of where scientists were interested in looking. The opposite wall features a reddish live image recorded in the gallery space by a military-grade heat-sensing surveillance camera that moves its point of view, replaying the angles of view of the telescope’s observations. The Spitzer uses infrared technologies to visually register heat variances, which positioned us to use an infrared motorized surveillance camera. The project’s intent has been to integrate in the same physical location two types of representation: one factual, sequential, out in space, and in the past; the other mapping the events in the gallery space, where the presence of the spectators becomes the subject matter. In contrast to the visual animation of looking out into the universe, they see themselves as heat-registered visual information, in the present.



“We Are Stardust” installation at the Vancouver 2010 Winter Olympics
© George Legrady


“We Are Stardust” infrared camera, 2008
© George Legrady


I began this interview mentioning the James Bay Cree documentary, a collection of 3,200 social documentary photographs realized in the James Bay sub-arctic some 40 years ago. Even though the Cree are geographically remote, global technological culture has impacted them like everyone else, and for reasons of cultural heritage and preservation, this project is now being re-formulated into a major research project, integrating additional anthropological, northern development, and indigenous Cree data, to be reconstructed into an interactive, cyber-infrastructure cultural atlas. Forty years have gone by, and given the Cree’s historical circumstances and their negotiations with federal legal bodies and corporate industries, the cultural, political, and technological changes that have taken place provide the right conditions at this time to technologically coalesce the data, broadening the scope of the work across multiple perspectives: from ethnography to art, culture, history, politics, and data visualization.








6. Claude Lévi-Strauss, The Savage Mind, University of Chicago Press, 1966 (French original, 1962)



9. and


11. L. D. Harmon, “The Recognition of Faces,” Scientific American 229, no. 5 (November 1973): 71–82







18. Roland Barthes, “Rhetoric of the Image,” in Image-Music-Text, Hill & Wang, 1977



George Legrady is a professor of Interactive Media, with joint appointment in the Media Arts & Technology program and the department of Art, UC Santa Barbara. His focus is on research and experimental projects in the areas of data visualization, algorithmic processes, computational photography, and interactive installation.

Iker Gil is an architect, urban designer, and director of MAS Studio. In addition, he is an Adjunct Assistant Professor at the School of Architecture at UIC. He is the recipient of the 2010 Emerging Visions Award from the Chicago Architectural Club. | @MASContext

Photography, Information, and Meaning

D-Day landing at Omaha Beach photographed by Robert F. Sargent, Chief Photographer’s Mate in the United States Coast Guard.


Essay by Jörg M. Colberg, editor and founder of Conscientious, a website dedicated to contemporary fine-art photography


In the middle of the 20th century, philosopher Ludwig Wittgenstein worked on what is called ordinary language philosophy, essentially studying how language works. How do we know what a word means? One of its basic ideas was that if words were taken out of their proper context, the resulting problem would be essentially artificial: all it would take to solve the problem was to look into the source of the confusion and pinpoint how language was misused.

In particular, much of philosophy itself would dissolve into a set of misapplications of language. So if we ask “what is reality?”, according to Wittgenstein we only think that we are posing a complex, deep philosophical question, whereas in reality we are deceiving ourselves.

Needless to say, ordinary language philosophy–and Wittgenstein’s writing–is considerably more complex than what I just–very briefly–outlined. What is more, we live in what people like to call the postmodern world, using postmodern philosophy.

What does this all have to do with photography? As it turns out, the theory of photography to a large extent relies on postmodern philosophy (think Roland Barthes). The problem with this is that we don’t have to look at things with a Wittgensteinian eye to see that, well, it might not serve us as well as we’d like to think.

Of course, postmodern philosophy, heavy on academic jargon, fits very well into the context of the art world, whose theorists also love to use jargon. It is harder to see art theorists using Wittgenstein’s writing, which not only is jargon-free, but also works very hard to unmask jargon for what it is: a way to create confusion and to pretend there are meaningful, deep problems when in fact there aren’t any.

Jargon aside, there is a reason why I am bringing up Wittgenstein in the context of photography: When you look at some of the questions surrounding photography, they are amazingly similar to many of the questions Wittgenstein considered. For example, the question “What does a photo mean?” can be treated in ways similar to how Wittgenstein treated “What does a word mean?”

What we are doing when we ask “What does a photo mean?” is to use language to deal with photography. This seems like an obvious statement, but if we were to be Wittgensteinian, we would realize that the results of our considerations might in fact be misapplications of language that, we think (or maybe hope), are deep insights into photography (consider Barthes’ “punctum” in this light!).

Something similar is happening when there are debates about whether the staging (or supposed staging) of an image means that what is depicted is no longer real. To take an old example, Alexander Gardner staged some of his Civil War photographs by arranging corpses. So is what is depicted less real than it would have been had he photographed the corpses in their original positions? There are many different aspects to this question. At its core lies the problem of whether or not a photograph shows something that is real, how a photograph does it, and what we expect a photograph to do.

“The Home of a Rebel Sharpshooter, Gettysburg” (1863)
© Alexander Gardner


But does the arranging of a corpse on a battlefield to produce an image make the war itself any less real? On a fundamental level, how is arranging a corpse so vastly different from finding a good spot for a photo and then cropping it to make a point? These questions, seemingly about photography, are really about our understanding of images and not so much about the images themselves.

Of course, for a full, proper Wittgensteinian treatment we would have to know a lot more about his philosophy; unfortunately, space does not allow me to go into more detail here. But I think the very first step, the realization that discussing photography might seemingly produce insights whereas in fact we are deceiving ourselves, deserves to be taken seriously. If we think about it, a lot of the ideas we have about photography are based more on what we feel (or wish) is or should be the case than on what actually is.

Let’s consider an example. There have been many debates about what is commonly called the Photoshopping of images, and there has been a growing number of scandals about the manipulation of images. The news business (and let’s not forget it is a business first) has reacted to this by demanding that photographers not Photoshop their images beyond what is considered standard practice. It’s almost impossible to list the various problems associated with this.

At the base of such rules lies the idea that a photograph presents the facts (or truth, or however you want to call it) as long as you, meaning the photographer or a graphic editor, do not mess with it too much. Of course, this is simply not true. Even before any kind of processing, the taking of a photograph already includes so many subjective decisions that the idea that photography will show the facts is, well, problematic.

Of course, this simple fact has been known for a long time. Yet only since the advent of digital image processing technologies have large numbers of people started to worry about it. This is in part because we feel that a digital photograph is somehow less real than an analog one. An analog photograph typically is produced from a negative; you print the negative to get a positive. Crucially, you can hold both in your hands. In the digital world, you can print a digital image, but there is no actual negative. Instead, the ‘negative’ is a set of bytes somewhere inside a computer (a digital camera is a small computer with optics attached to it). Maybe this is why people feel that a digital photo is somehow less real than an analog one. You can open your computer and pry open your hard drive, but you won’t find a tiny image impressed on one of the little magnetic disks inside. If you think about it in terms of images, though, there is no difference between analog and digital photography. Regardless of whether we look at an image printed on paper or on a computer screen (the analog photo we would have to scan first), unless there are clear giveaways (digital noise looks different than grain, and digital photography artifacts usually look different than analog ones), we will be unable to tell what type of photo we’re looking at.

Yet still, we demand more from digital photographs, we feel we have to define the rules of digital post-processing very strictly. And we feel that as long as we stick with commonly accepted ways to manipulate a photograph, the resulting image is still real, whereas slightly beyond it no longer is.

We seem to treat digital images differently because at their source there is something intangible, even though most of the standard procedures used in Photoshop are equivalent to what people do (or used to do) in the darkroom. In fact, some of the Photoshop “tools” have icons that are modeled after darkroom tools.

I don’t want to argue that we should give up on talking about what can or should be done with images. There are very obvious things that should not be allowed in a news context. Instead, now would be a good time to talk about photographic images in general, and that means not only talking about what they do (and don’t do) but, crucially, about how what we do with them and how we think about them gives them a large fraction of the meaning that we think they possess by their very nature.

For example, it is not hard to see how accepting the fact that no photograph taken by a human being is objective would be a much better way to approach photography. In that case, we could focus on what images really say and how they say it.

We have to realize that accepting that photographs are subjective does not rule out their use in a news context. A lot of the attempts to define what photographers can do with their images on a computer have to do with trying to keep images credible. If an image is fake, it’s of no use in a news context. The problem, of course, is that it’s impossible to define how much you’re actually allowed to manipulate an image; you can’t measure the amount of dodging and burning, say, the way you would measure a temperature. But not only that. In principle, all images are fake, because they only show a selected part of what we might want to call “the world” (for lack of a better phrase). So when we set rules about how much we allow someone to fiddle with an image, we assign a sense of reality to the image that actually does not exist.

This doesn’t mean that no image has any relation to what we might think of reality or that we will never be able to use images in a news context. On the contrary, we can safely use images in a news context once we understand how the meaning of images to a large extent is defined by how they are being used.

We can approach the problem from a different angle: by restricting the allowed amount of Photoshopping, newspapers are essentially trying to solve what is widely perceived as a credibility problem. People mistrust the media. But people don’t mistrust the media because of photographs; people mistrust the media because, for example, time and again they have proven to be unreliable (remember the media coverage of George W. Bush’s case for the Iraq war?). Focusing on images will not make newspapers more credible. On the contrary, by focusing on image manipulation, each and every new scandal will only decrease the overall credibility of newspapers. By focusing on image manipulation, newspapers yet again pretend that the credibility problem does not originate in the newsroom.

In an article published on my blog [1], I offered a very simple suggestion for the problem of Photoshopping: instead of using ill-defined criteria for how much manipulation is allowed, why not make the raw (unmanipulated) images available on the newspapers’ websites? That way, readers would be able to literally see the changes the image went through. Needless to say, the raw image could still be faked, but this would be a much better way to build trust than arcane rules that might or might not be enforced.

Manipulated image provided by Sepah News, the media arm of the Iranian Revolutionary Guards. The second missile from the right was added to the original image. The image was distributed by The Associated Press and was published on multiple websites and in newspapers before the manipulation was discovered.
© Agence France-Presse


The anger we feel about each and every image manipulation scandal is for the most part misplaced, especially since computer algorithms already exist to detect obviously faked images. We shouldn’t be angry at the photographer(s); we should be angry at the editors who obviously didn’t bother to run such programs in their offices. It would be much easier for us to deal with such scandals if we realized that a) they are inevitable, b) a newspaper should make every possible effort to detect them, and c) a newspaper should be forthcoming about what a manipulation scandal actually means, instead of merely throwing the photographer in question under the bus and pretending it was just as deceived as everybody else.

At the core of this issue lies what an image means. But what an image means is not something that is somehow contained in the image or that somehow comes along with it. A photograph of my mother obviously has a complex meaning for me, whereas for other people it’s a photo of an elderly lady. We could talk about the facts, but even the facts contained in the photograph are not universal. For me, it’s a fact that the lady is my mother. For another person it might be, if they know me and my mother; for most other people it won’t be. They have no idea that it’s my mother.

Photographic facts are a very complex issue, especially because their relationship to image manipulation is so complicated. In order to move ahead, we need a better debate about photographic facts, the meaning of photographs, and to what extent this has to do with what we feel about photographs.

There is no need to involve philosophy in this at all. But I am happy to argue that if we want to invoke philosophers, we might be better served by ordinary-language ones than postmodern ones, even if only to expand our thinking about photography in new directions and to move beyond the same small, tired canon of photographic thinkers. Photography has undergone a lot of changes recently, and it’s about time our thinking about it changed, too.





Jörg M. Colberg is the editor and founder of Conscientious, a website dedicated to contemporary fine-art photography. He has written articles for international magazines and the introduction to Hellen van Meene’s monograph “Tout va disparaître”. American Photo included him in its list of “Photography Innovators of 2006”. | @jmcolberg

It’s Not Information Overload. It’s Filter Failure

© Web 2.0 Expo Sept. 2008


Clay Shirky explains the challenge posed by ever-growing information: the filtering process.


Clay Shirky has established himself as one of the most influential thinkers on the social and economic impact of the Internet. He has written and lectured extensively on crowdsourcing and collaborative efforts that do without traditional organizational structures. In this article, originally presented at the Web 2.0 Expo in September 2008 at the Javits Center in New York, he explains the challenge posed by ever-growing information: the filtering process.


(This article was originally presented at the Web 2.0 Expo, September 2008, at the Javits Center, New York, NY, and has been transcribed with the permission of the author.)



It starts with this chart. You all know this chart. This is IDC’s version of the chart; Hal Varian and Peter Lyman of UC Berkeley have a version of this chart, and Google has a version. This is the chart of how fast the information in the world is growing. No matter who does the chart, it always looks like this: up and to the right, and the rate of increase is always increasing.

We love this chart. This chart makes us feel better. This is why I am not getting anything done: I’m suffering from information overload. This has been the obvious salvation for writing-blocked tech journalists for fifteen years. When we don’t know what to write, we can always go down the hall to our editor and say, “Hey, I want to do a story about information overload.” And the editor, looking up from their overflowing inbox, says, “That’s brilliant!” You always get to do that story. So, for fifteen years, we have been reading the same story about information overload.

But if it has been the same story for fifteen years, and you can find stories from ’93 that are the same story that showed up in your RSS feed three seconds ago, then why is it still such a surprise? If this is the normal case, then why are we constantly talking and writing about it as if it were a big deal?

Here is why I think this is, and it goes back to the printing press. Gutenberg and the invention of movable type injected information abundance, for the first time, into life outside the university. By the 1500s, the cost of producing a book had become so cheap, and the volume of books being produced so large, that an average literate citizen could have access to more books than they could read in a lifetime. So ye olde information overload is actually a problem of ancient provenance.

The other problem that Gutenberg introduced into life was risk. If you owned a printing press, you could make money if people bought your books. But you could lose money if people didn’t. Since you had to print the books in advance, you were taking on all the risk that the books would sell. This is the problem of publishing. The economic solution was pretty simple: make the publisher responsible for filtering for quality. There is no obvious reason why someone good at running and operating a printing press should be good at figuring out what books to print. But the economic logic of “print in advance and sell” (high upfront costs, recouped only when the books reach the people) meant that the word publisher came to mean two things: (1) people who decide what to publish, and (2) people who do the publishing.

There have been many media revolutions between Gutenberg and now; by the middle of the 20th century we had recorded music, movies, and television. But the curious thing is that all of those other media types had the same economics. Whether it is a printing press or a TV tower, it cost a lot of money to get started, so you had to filter for quality. What the Internet did was introduce, for the first time, post-Gutenberg economics. The cost of producing anything by anyone has fallen through the floor, famously, and as a result there is no economic logic that says you have to filter for quality before you publish. Proof of this hypothesis I leave to you; you can pretty much start anywhere and discover that the filter for quality is now way downstream from production.

What we are dealing with now isn’t information overload, because we are always dealing with information overload; the problem is filter failure. An example you face is spam. Everyone has the morning ritual of deleting the spam out of their email: identifying the messages you have to remove, getting rid of them, and getting on with your day. This process is some combination of a mechanical filter plus a user getting rid of the last few bits and pieces. Everyone will have had the experience, over the last couple of years, of there being a day when you say, “Oh my goodness! The volume of spam has doubled. My inbox is full of spam again.” So I set out to measure this and watched my inbox, particularly the messages I had to delete in the morning. What I discovered was that my experience of spam doubling came when the volume of spam I received had increased only 25%. It wasn’t actually that there was a lot more information; it was that there was just enough information to break the systems I had in place. It wasn’t about the increase in volume; it was the collapse of the filters I had.

Spam, I think, is a really good indication of the information overload problem generally. It requires multiple kinds of filters, automated and manual, and different solutions for different people. All the solutions are temporary: no matter what you use, you have to retune it; there is no “set it and forget it” solution. Finally, you have to take the volume increase for granted. You have to assume you will continue to be targeted, because the economic incentive to send spam is enormously high and the cost enormously low. It is really a filter problem, rather than an information problem.

In the context of spam and traditional information management, I started to think this is a general system design problem for our era. Not a computer system, but social systems: the institutional and social bargains we all have with one another when we are dealing with each other in our daily lives. I am trying to apply this idea of filter failure as a design lens to other types of social systems besides just managing hard drive space. Let me tell you something that happened to a friend of mine last November that illustrates this problem. A former student, a colleague, and a good friend decided to break off her engagement to her fiancé. In addition to the mix of emotion, horror, and administrative work you have to go through when you are doing something like that, she also had to engage in the 21st-century ritual of the “changing of the relationship status.” She had to go onto Facebook, grab the button that says “engaged,” flip it to “single,” and press submit. She considers doing this and thinks about the result, which will appear in her news feed on Facebook. Suddenly, she realizes she might as well buy a billboard. Here is her dilemma: she has a lot of friends on Facebook and also a lot of “friends” on Facebook: people she went to high school with, people she knows peripherally from two jobs ago. She doesn’t want all of those people suddenly getting deeply personal information about her, just her narrow circle of real friends. She especially doesn’t want her fiancé’s friends to see it; she doesn’t want to tell them before he does, and she wants to give him the space to do so. She goes onto Facebook to fix this problem. She first finds Facebook’s privacy policy, very clearly thought out and written, carefully descriptive, not hidden or buried, and linked to on most pages of the site. In addition to the policy, she finds her own personal settings for managing her privacy. She figures out how she is going to do this.
She checks the appropriate check boxes, and she is able to go to the interface, take her status from “engaged” to “single” and press submit. Two seconds later everyone of her friends in her network gets this message, “Your friend is now single.” All of her fiancé’s friends get that message too, and the email starts to pour in, the AIM starts to come in, the phone is ringing off the hook and everybody knows. Total disastrous privacy meltdown, self-inflicted.

We look for fault in circumstances like this, so it is tempting to blame my friend. Well, I have known her for a long time, and she did her graduate thesis on a comparative analysis of Friendster, Facebook, and Meetup. This is not an average user. If she doesn’t get the interface, it is a pretty safe bet it is out of the reach of most people. So we want to blame Facebook. They had the wrong checkboxes, the wrong descriptions, the wrong privacy settings setup. But it is hard to blame Facebook when they have made so much of an effort. James Grimmelmann, who writes so much about social networks, has said that Facebook has the best-expressed and best-executed privacy management tools he has seen on any of the networks. The actual problem is that managing your privacy preferences is an unnatural act. It is just something no one is good at, either setting up or maintaining. Prior to the present era, about the only person any of us could name as having explicit privacy preferences was Greta Garbo. This is not something we are used to. Privacy is a way of managing information flow. What my friend wanted to do was tell four or five of her close friends, who would tell the next circle out, and slowly the information would seep through the network in partial ways, not instantaneously.

That is how it used to work. The big question we are facing around privacy now is that we are not moving from one engineered system to another engineered system with different characteristics. We are moving from an evolved system to an engineered system. We have pushed formal and explicit statements about privacy into our lives for the first time. Prior to the current era, the principal guarantor of privacy wasn’t law or regulation, and it wasn’t hardware or software. It was inconvenience. It was a hassle to spy on people. We lived most of our lives not in the bubble of privacy or the glare of publicness; we had what we called, back in the day, our “personal life.” That is a phrase almost no one uses anymore, except to refer to technology. We have a lot of personal technology. We don’t have so much personal life. In personal life, you can walk down the street talking with a friend, and someone could be listening to you, but they are not. It is not like every word you say is being recorded for posterity. But now it is like that, a lot like that. For people like my friend, whose social life is lived hammer and tongs in those kinds of environments, it is almost completely like that. This inconvenience and hassle, this inefficiency of information flow, wasn’t a bug; it was a principal feature. As long as we have a world of completely explicit privacy preferences, it isn’t going to be a good fit for the way we live our lives. This is a question of filtering, not managing information. How do we want to design the filters so that privacy works the way we need it to work?

My friend is a story of outbound information flow, spam is a story about inbound information flow, and those are both relatively clear cases. There are some stories where the information is so bound up in institutional design that we can’t even identify which direction the flow that needs to be filtered should be going.

This story illustrates the problem. Chris Avenir is eighteen years old, and because he is eighteen, he has grown up in this environment. By the time he was five, the Internet was public; by the time he was fifteen, MySpace, Friendster, and Facebook had all launched. Then he goes to college, and this spring, at Ryerson University in Canada, he enrolled in a chemistry class. Like all students since time immemorial, he says, this is hard, I am going to have to work for the test, so I’ll start a study group. Because he is eighteen, he starts the study group on Facebook and calls it “Dungeon – Ryerson College Chemistry study group.” It goes pretty well. He gets 146 of his classmates to join the group, and they sit around talking about chemistry on the site. Suddenly he is called up on charges, and the college threatens to expel him. How many charges? 147 of them: one for setting up the Facebook group and one for each of his fellow students who joined. Ryerson says this is cheating. Here is their point of view: “Our academic misconduct code says if work is to be done individually and students collaborate, that’s cheating, whether it’s by Facebook, fax or mimeograph.” They are saying Facebook is media, we treat this as publishing, and once you are operating in a mediated environment, it is immaterial to us how it works. Here is Avenir’s reaction: “If this kind of help is cheating, then so is tutoring and all the mentoring programs the university runs and the discussions we do in tutorials.” He named the group the “Dungeon” because that is the name of the room on the Ryerson campus where the real study groups meet. He thought, Facebook is just an extension of group life, and I am just extending it into this zone. What had Avenir done to freak Ryerson out so much?

What he had done was crash two different kinds of information flows into one another. Every college has two different messages, an inside message and an outside message. The inside message is: welcome to the community of scholars, we are glad you are here, come join us, we are having the best kind of conversation, and the best kind of class to be in is a small seminar where you can discuss things with your peers. It is very much about community, conversation, and joining the group. To the outside world they say: we do quality control of individual minds; we pack them with education, and when they have enough education packed into them, we slap a diploma on them and ship them off. The thing that keeps these two modes from colliding is just the inconvenience of the real world. It is a hassle to get groups together, to coordinate times to meet. Real-world stuff stays pretty much bounded by the walls of the campus, and those two messages stay separate.

What Avenir did by moving the study group to Facebook was cause those two messages to collide, and we have a clash of metaphors. Ryerson says Facebook is like media; Avenir says it is just an extension of the real world, and we are caught in this either/or choice, a bit like the public-or-private choice in privacy. The problem is that if you are going to make that choice, you are going to make the wrong choice. You know what Facebook is like, and it is not like a fax machine or a mimeograph, and it is not like a meeting in the basement of Ryerson. Facebook turns out to be a lot like Facebook. There is no metaphor that can be picked up and slapped on it that will tell us what to do about it. Facebook is different from what has gone before it, and if it weren’t, it wouldn’t get any users. Facebook is only worth spending time on because it is different.

There is no simple solution to the problem. Avenir has a point. He has been invited into an environment where group conversation is normal, and he thought he was doing the right thing. For all of Ryerson’s terrible overreaction, they have a point, too, because even though there are study groups that meet in the real-world Dungeon in the basement, none of those tables seats 146. If you have a small study group, half a dozen or so, and somebody comes in and says, “You know, I am really just here to mooch off you guys; I just want to know the answers to the chemistry test and I am not going to participate,” they get kicked out. Small groups defend themselves against free riders; large groups don’t.

The Internet allows large systems that are free-rider tolerant rather than free-rider resistant. If there are 146 people in a Facebook group, then somebody is free riding. There is more than enough information out there. We have known the formula for hydrochloric acid for some time now. We aren’t asking the students to figure it out so that we know it; we are asking them to figure it out so that they have experience figuring things out. This is exemplary of filter failure. When you look at the Ryerson/Chris Avenir fight, it isn’t over information or access to information; it is a fight over flows and access to flows. It suddenly becomes clear that what we are dealing with is not putting the filter back at the source, the way we have always done in the past, but rethinking the institutional model. You have to have good conversation and individual effort, and you have to design a system that accommodates both. Currently, we are breaking the system we’ve got.

Part of the reason information overload presents such a consistent problem in the current environment is that we don’t have obvious tools to pick up. The metaphors of current media and of physical space each illuminate part of the current landscape, but not enough of it.

We are really pitched forward into a new challenge, and I believe this isn’t a design problem. I don’t think anybody can start coding the college of the future tomorrow. This is more of a mental shift: a way of seeing the world that assumes that we are to information overload as fish are to water; it is just what we swim in. Isaac Asimov once said, “If you have the same problem over a long time, maybe it is not a problem, it is a fact.” That is information overload. Talking about it as if it explains or excuses anything is actually a distraction. We have had information overload in some form or another since the 1500s. What is changing now is that the filters we have relied on for most of that period are breaking, and designing new filters doesn’t mean simply updating the old ones. They have broken for structural reasons, not for surface reasons.

In some situations this will be a simple matter of programming. Certainly the pressure to get this right has led to an enormous number of post-publication filtering mechanisms. That is why Digg’s voting mechanisms work and “tagging” mechanisms work; it is the logic behind all search engines.

Some of it will not. Some of it is actually going to be about rethinking social norms. When we feel ourselves getting too much information, I think the discipline is to ask ourselves not “What happened to the information?” but “What filter just broke? What was I relying on before that has stopped functioning?” When we start asking that question, we will get some clue as to where to put the design effort.


Clay Shirky divides his time between consulting, teaching, and writing on the social and economic effects of Internet technologies. He is an adjunct professor in NYU’s graduate Interactive Telecommunications Program, where he teaches courses on the interrelated effects of social and technological network topology.

Information and the Reluctant Image


Cloud Prototype nº 1 © Iñigo Manglano-Ovalle


A conversation between Iker Gil and artist Iñigo Manglano-Ovalle in his Chicago studio.


Cloud Prototype No. 1

“Cloud Prototype No. 1” (2003) starts out of a reluctance to will form into being, and by that I mean a reluctance to actually give form or shape to a thing. To approach the idea of the event as something that has its own will. To keep the artist’s hand at a distance. So what at first seems like something with a kind of traditional aesthetic, in terms of its formal aspects, is actually something quite different.

It is a recording of a moment in time of a very large thunderstorm. I worked with a group called the Convective Modeling Group, down at the University of Illinois at Urbana-Champaign. It is one of the few sites that actually tracks thunderstorms three-dimensionally. They bring in so much data on the storm systems that they need a supercomputer to house it all. By the time I got there, they had already channeled all the data they needed and were basically producing a film, a 3D film of the storm as it takes place. Working with them, I selected a moment of that storm: the moment before it explodes, before it actually bursts. Working with the data across time, I realized that I was falling into the trap of aesthetics. Which moment of this cloud looks better? So I had to remove myself from that, go back conceptually, and just email them and say, “You know what? I want this moment, whatever it looks like. The one right before it bursts.”

It all started because I wanted to make a sculpture that would be the companion to a film of mine about Robert Oppenheimer, called “Oppenheimer”. I wanted to make a sculpture of a mushroom cloud, of the Trinity test. But because no 3D data exists of that event, I was forced to begin to model it, and I really disliked that. So then I turned to a natural event, one that possibly could already be an explosive event.



Cloud Prototype nº 2 © Iñigo Manglano-Ovalle


Ultimately, the work in its inception is about politics; it’s about tuning into climate, but on a much more complicated level: climate understood as our contemporary condition, a social, political, cultural climate. The work starts purely conceptually and returns to aesthetics, but in the way that high math does, or high philosophy does. You end up dealing with strange things such as truth and how truthful you can be to the thing. And then you end up with the form, falling into the trap of beauty.

It is scary and beautiful at the same time, and that is why it fits so well with “Oppenheimer,” because I was looking for the Virgil who would guide us through our contemporary inferno. I thought Oppenheimer would be that person. He falls into that ambiguous state of being the beginning of the destructive force of the atomic weapon, of the weapons of mass destruction, at the point of looking for those weapons of mass destruction. But then he flips and becomes the conscience. He has that duality. And I am interested in that duality, where the beautiful meets the monstrous. At ten years old, I was staring at Goya’s Black Paintings and saying how beautiful, looking at “Saturn Devouring His Son.” That’s the moment in aesthetics and philosophy and ethics where evil and good, truth and fiction, meet. And all the projects are about that. It’s all about that moment of ambiguity.

When I was making these Clouds, I was also really interested in notions of surveillance, this notion of hovering above and looking down, and I was interested in how surveillance camouflages itself as a mirror. I was thinking about how we, post-9/11, have in a sense embraced surveillance. The public actually becomes part of surveillance itself. It becomes the apparatus: the perfect sphere, the perfect mirrored sphere. And yet the Cloud does the opposite. It distorts; it doesn’t allow for a perfect reflection.


Iceberg (r11i01)

In 2005 I began to work on Iceberg (r11i01). Having looked at an ephemeral event, a kind of body of water that is vapor, I then thought about a body of water that would be solid, so I looked for an iceberg.

Iceberg (r11i01) is data from another research group, the Canadian Hydraulics Centre, which is one of the few centers in the world that models or topographically records icebergs in three dimensions, above and below the water, using sonar below and radar above. They allowed me to have data on a number of icebergs, and the first one is r11i01. It is a complicated piece because, on one level, I am interested in the iceberg because, in terms of data, it’s actually pre-linguistic. The water that forms this iceberg is 50,000 years old. So that’s before language; it’s prehistoric. History has not arrived yet; language has not arrived. In terms of data, I was sent a spreadsheet of x, y and z coordinates: thousands of points above and below the water line.

So what happens when the iceberg is released into our climate? What is its true impact? How does it enter language, our imagination? What it speaks to now becomes of great interest to me. Unlike the Cloud, where I was interested in the skin, the surface, here I became less interested in the surface of the thing and more interested in the data set. Each of the points [xyz coordinates] became interesting, and I wanted to know how they were connected to each other.



Iceberg (r11i01) © Iñigo Manglano-Ovalle


You could say that in terms of geometry and technology the Cloud is a NURBS surface and this is a polygonal mesh, two very different things. But more importantly I was interested in the mesh as a notion or representation of a network of information exchange. Iceberg (r11i01) is architecturally a metaphor for a current state, almost a postmodern state, and this is a word I don’t like to use a lot, postmodern, because I am not sure I believe in it. I liken the iceberg to taking a beautiful geodesic sphere by Buckminster Fuller and, in a fit of anger at his failure to deliver utopia, crushing it like a piece of paper and throwing it in the waste basket. And then, with a great sense of regret and urgency, running to retrieve it and restore its pure geometry. Only to reconcile myself with the fact that Fuller’s regular geometry is no longer viable and perhaps should be thrown away, literally. What we are left with is a non-hierarchical geometry where no one point in the system has any hierarchy over any other point, and none is replicated. That was the complexity of having to make this thing. Because there was nothing uniform about it.

We ran into the problem of how to build it, with every vertex and edge unique. The guys at Rhino developed a program that could actually model each joint and number them so we could print them three-dimensionally in plastic. The sculpture is in fact the digital print of itself, a digital print of the original data set. All the labels, almost everything, comes out of a printer. And one vertex has a USB memory stick attached to it. These sculptures always carry their complete set of information. It’s not quite clear to me whether the sculpture is growing out of this memory stick, whether it acts like a seed out of which the tree then grows, or whether the sculpture carries its history much like the tree bears its fruit. So in a sense, the inclusion of the memory stick keeps the piece from becoming a visualization. Less a model of an iceberg and more a frozen moment of the phenomenon that is information exchange. It might just be a moment.



Iceberg (r11i01) © Iñigo Manglano-Ovalle


One of the things that happens in creating the work is that there is this other studio practice that is almost all communication, between people that I meet or don’t meet who are making connections for me out there, whether it’s engineers, scientists, research centers and so forth, to get to the point at which a project can actually be done. The studio is never physical. In fact, the work never happens in the studio.


Phantom Truck (2003-2007)

I remember the day former U.S. Secretary of State Colin Powell addressed the U.N. Security Council prior to the U.S. invasion of Iraq. Even as I was watching the speech live I knew I wanted to do something with it, and it came to me rather quickly that I had to construct a mobile biological weapons lab. Here the data set is the speech and the information that followed.

In a way, it’s forensics, coming in after the event, similar to both the Cloud and the Iceberg, although more blatantly political, but still dealing with the notion of reconstructing an ephemeral moment. The problem is we are looking for something that is actually moving, that is unlocatable, and yet we are seeking a certain sense of certainty and stability. What I had to start with were Colin Powell’s slides and his own presentation, and then images that appeared in the press after they had found the vehicles, which later turned out to be not real. Also photographs from white papers, from the Department of Defense and the CIA. My work was almost exclusively research, scaling and patching photographs together, research on similar trucks and the companies that fabricated them, all to achieve a faithful representation of what was from the start a fabrication. And ultimately Phantom Truck is a fabrication of a fabrication.



Phantom Truck © Iñigo Manglano-Ovalle


It is always hidden in a darkened space where it is only made visible by the presence of the viewer. The viewer is the apparatus of its visualization. Like the Cloud and Iceberg, when you approach it, you don’t know what it is, but you take it for what it is. What’s important to me in all three pieces is a phenomenological relationship of the viewer to the piece. And much of it is about locating yourself in relationship to it. In this case your location is one in darkness, where you almost have to stand still and let your eyes adjust. At this point the viewer is actually causing its appearance. And still what the eye reveals is a fabrication, an apparition of sorts, which is what makes it a Phantom. Or what the Greeks called “the thing made visible.”


Search / En búsqueda

This 2001 project turns La Plaza Monumental de Toros in Playas de Tijuana into a radio telescope. It takes the bullfighting arena, converts it into a parabolic dish, and suspends a radio antenna above it. Everything else that happens here is essentially a process of minimizing a structure that is already very minimal: removing all the advertisements, replacing all the colored flags with white flags, removing all text except the names of the bullfighters. It is site specific.



Search / En búsqueda © Iñigo Manglano-Ovalle


This is 50 meters south of the US metal fence between Tijuana and San Diego. It wants to respond to the fact that that border is one of the most heavily surveilled borders, with all sorts of monitoring technology, whether it be sound, radar… It responds to that by wanting to make an even larger monitoring system, by actually wanting to make a radio telescope to search for the “real” aliens. What it’s doing is mimicking what SETI does: it’s looking for alien life. So it becomes both a radio telescope and also a pirate radio station to broadcast the information that it receives. Now, most of the information it receives is basically static, no information, which in a sense replicates what SETI is getting all the time. Interestingly, what SETI is looking for is the radio frequency that signals the existence of hydrogen, because if they find hydrogen, they know they can probably then find oxygen. And if they find hydrogen and oxygen, then they have water. Life.

What the radio station in “La Búsqueda” did was have a transmitter that broadcast the signal heard by the radio telescope through Tijuana and parts of southern San Diego. It broadcast across the FM spectrum; the transmitter would change its frequency all the time, momentarily interrupting every frequency with static. The public knew that this radio broadcast existed, but they had to search for it. The problem was that if you searched for it on the radio you had little chance of actually finding it amidst all the other static, and if you did come across the broadcast it is unlikely that you would know you had. Ironically, evidence of it was only found by people who were not looking for it. It was actually the cabbies in Tijuana who, at coffee shops, started to talk to each other about this phenomenon. While listening to their favorite stations in their cabs they noticed moments of dropout and static, they started to talk to each other, and there started to be reports about these conversations. One report mentioned that some thought the electronic disturbance had to do with aliens.

Now the monitoring device/radio transmitter is the transgressor, it is crossing and invading all the frequencies, it comes and goes. It becomes a metaphor for migration or transgression, breaking all sorts of laws while it’s doing it, but it’s unlocatable by the Federal Communications Commission because they can never trap the signal. It is guaranteed that it will never be locatable.



Search / En búsqueda © Iñigo Manglano-Ovalle


Unwanted feedback between the antenna and the dish was filtered through a system that then fed a series of 50 subwoofers. The whole arena was subsequently turned into a large mega-bass speaker that delivered waves of infrasound to the public that came to see the radio telescope. The visiting public could sit and meditate as the sound penetrated their thoraxes. People would come and get massaged by the piece, and it became a very public event.

Of course this project is dealing with information in a very different way. It’s responding to information gathering by responding to the border, but I would say that the gathering of information is less important here than the tactics of dissemination.

Nocturne (White Poppies)

Nocturne (White Poppies) started in 2001 shortly after the US invaded Afghanistan. I put out a call to Associated Press photographers for images of heroin poppies photographed in Afghanistan using night vision. I got a few photos in response and used one as the starting point for my work. The image was of a single Papaver somniferum photographed in southwestern Afghanistan a few weeks after the bombing of Kabul. The white and pink petals of the poppy translated into a highly chromatic green. This image of a sole flower in what had become a war zone was the inspiration for the installation Nocturne (White Poppies).



Night vision photograph of Afghan heroin poppy flower © Iñigo Manglano-Ovalle


When you enter the dark room, you see a large-scale image of flowers that are fluttering in a breeze. As your eyes adjust, you begin to see that there is something in the room with you–a set of poppies made in Chicago by botanists at the Field Museum of Natural History. Half a dozen of these flowers are attached to flexible armatures that elevate them above the floor. It’s as if this set of hands were in the act of presenting a bouquet. In this sculptural assemblage are small computer fans that blow onto the silk petals to make sure that they are in constant motion. The camera focuses on the flowers in such a way that you never see the fans in the projected image. This assures that there is a certain liveness to what is essentially a real-time, closed-circuit projection. Live sound from a short-wave radio receiver that searches the bandwidth for transmissions from Central Asia, together with this live video, creates a situation wherein the viewer is present at an actual event rather than experiencing a playback.



Nocturne (White Poppies) © Iñigo Manglano-Ovalle


Similar to Phantom Truck, the phenomenon that is ‘visibility’ is completely locked into the apparatus of imaging. In Phantom Truck the apparatus is the viewer him/herself, who is necessary for making the truck appear, while in Nocturne the imaging apparatus is external and autonomous to the viewer. And yet, while in Phantom Truck you can locate both the object and the apparatus, Nocturne complicates this assignation. Nocturne’s night vision camera cannot see the flowers in complete darkness. It needs at least a modicum of light to initiate the circuit. Usually flicking a cigarette lighter in Nocturne’s room is enough for the camera to begin seeing its target and transmitting the image to the projector. But here is where it gets tricky, because it’s now the projected image that lights the flowers and makes them visible. So without the image of the object, the object cannot be seen, which essentially reverses the a priori condition of seeing. Nocturne images the object so that it can be visible, rather than seeing a preexisting object.

The inception of meaning can’t be located, it’s slippery and fluid. By the time we are brought into the event, its image has already been created for us. These projects attempt to create moments in which we become conscious of that apparatus. How was Colin Powell’s Truck fabricated? How did we get to the point of creating an image of Iraq and Afghanistan that we could possibly invade? How is the image of our current climate being constructed? Who is involved in constructing that image? How does the debate of faithful representation drive politics? And what, if anything, does truth have to do with this?


Iñigo Manglano-Ovalle is an artist who investigates diverse subjects such as technology, climate, immigration and the global impact of social, political, environmental, and scientific systems. His work has been exhibited at acclaimed international institutions such as the Museum of Contemporary Art in Chicago.

Iker Gil is an architect, urban designer, and director of MAS Studio. In addition, he is an Adjunct Assistant Professor at the School of Architecture at UIC. He is the recipient of the 2010 Emerging Visions Award from the Chicago Architectural Club.

Lines of Reading

© Jack Henrie Fisher


Artwork by Jack Henrie Fisher, graphic designer and design researcher at the Jan van Eyck Academie in Maastricht, The Netherlands


Jack Henrie Fisher has explored the relationship between typography, data and format in numerous projects. As a design researcher at the Jan van Eyck Academie in The Netherlands, he is working to engineer a set of procedures by which a conversation can take place and be immediately transcribed, elaborated, contested, formalized and distributed as a book. Here he presents three of his projects in which he examines typography, reading techniques, legibility, organization and meaning.


This typographic essay invokes an opening in modernist formal enclosures by exigencies of reading and textuality. The essay composes a reading of Gyorgy Kepes’s landmark design text “Language of Vision”. It constructs an index of significant terms and visualizes their recurrences and concatenations throughout Kepes’s book. In its investigation into the “language” of an analytic typography, the essay draws out the formal ramifications of specific textual technologies, of lines of reading in printed books and of the crossing linkages that constitute hypertext. The essay is at once a careful discursive examination of the foundations and limitations of modernist form, and a forward-looking elaboration of what new lines of reading might be imagined and inscribed–in which typography and data electronically break the static surface of the printed artifact to crystallize new figures of meaning and information. 162-page book.



This poster series was designed for the conference Becoming-major/Becoming-minor organized by Vanessa Brito at the Jan van Eyck Academie, Maastricht, The Netherlands, on 3-4 December 2009. The conference was occupied with the problem of trying to think the ethical implications—and the potential for emancipation—of a ‘minor’ image of thought which embodies notions of passivity, automatism and machinic repetition, as opposed to the “major” or enlightenment ideal of thought most famously articulated by Kant in his essay “What is Enlightenment?” The poster series begins to elaborate a procedural typography from the idea of the minor. Each poster performs a different anagrammatical reading-through of “What is Enlightenment,” successively finding the letters of the conference title hidden in the words of Kant’s text. In this automated reading technique, a different passage from Kant is highlighted each time, and a different condition for legibility is made in the uneven organization of the title letters. A2-size posters.





Jack Henrie Fisher is a freelance graphic designer and a design researcher at Jan van Eyck Academie. He has worked and taught at Bruce Mau Design, studio/lab, and the University of Illinois at Chicago. He is formulating a practice with typography as an experiment with forms of ascesis connected to listening and writing.

Discontented, or the Pursuit of Content in a Format Age

© Mimi Zeiger


Essay by Mimi Zeiger, writer and founder of the architecture zine and blog loud paper.


It’s been said and said again, to the point of cliché, that we live in the Information Age. The love children of Marshall McLuhan and Steve Jobs, we float in an amniotic soup of digital signifiers and suckle identity updates like mother’s milk. We consume the information; we are the information.

Indeed, this morning I woke up and gobbled up thirteen somewhat important emails, ignored several dozen tweets and Facebook wall posts, and thought about searching for pictures of Chelsea Clinton’s wedding, all before brewing my morning coffee. (Heck, I couldn’t even make it through the above paragraph without checking my iPhone for dispatches from the outer reaches of the World Wide Web.) Then there is the print media. Considered dead by some, but lurching on in a pile stacked high on my kitchen table. A brief inventory reveals six battered and bruised New Yorkers, a few ignored Harper’s (they are just so depressing), a Vegetarian Times, and a slim, unread copy of Fortune–“The Future of Reading” issue. (Architecture and design magazines languish in another forlorn stack.) As for my laptop, the desktop is full of PDFs and Word docs that stare out from the screen like shelter puppies with pleading eyes. Read me, read me, they cry. And it’s best not to talk about my bookmarks toolbar. A sure case of digital disposophobia, I expect the folks from TLC’s Hoarding: Buried Alive to come a’knocking any second.

I’m awash in information. And I am not alone.

But what is all this information fed to me in any number of formats? What kind of content is provided and, really, does it matter? Early in The Shallows, a book that takes a “This is your brain on drugs” approach to the Internet, author Nicholas Carr riffs on a McLuhan classic: “The medium is the message.”

Carr writes, “McLuhan understood that whenever a new medium comes along people naturally get caught up in the information—the “content”—it carries. They care about the news in the newspaper, the music on the radio, the shows on the TV, the words spoken by the person on the far end of the phone line. The technology of the medium, however astonishing it may be, disappears behind whatever flows through it—facts, entertainment, instruction, conversation. When people start debating (as they always do) whether the medium’s effects are good or bad, it’s the content they wrestle over.” [1]

I beg to differ. As successive technological developments (iPhones, iPads, Kindles, etc.) and the economic recession couple together to cripple print publishing (wave goodbye to books, newspapers, and magazines), the discussion is all about the medium. Forget the medium is the message; now the message is the medium. Format has come to dominate the debate, not content. Or, as Bryan Boyer suggests in his essay, The Mediators, “The format is the message.”

And then Boyer sketches out how formatting (from architecture to publishing) only just hints at the content it’s supposed to contain without ever revealing any depth behind the façade:

“Every building, every publication, every bit of output from the architect is formatted for realization and tailored to an audience. Architects no longer enjoy the simple pleasure of designing buildings, they design a library in Caracas for the city government, or a stadium in Belarus for an international magnate. They write books for post-critical academics, pamphlets for North American students, and websites for the image-hungry public to name just a few examples. The work of the architect has never been more tied to all the specificities of client, market, place, and politics nor have the concerns of these groups ever been more enmeshed. Each format has its own set of catalytic constraints, biases, and conventions that the architect must work with.” [2]

What’s become clear in the past two years, as I’ve been involved in panel discussions, meetings, lectures, the Leagues and Legions forum board (a think tank on architecture and publishing), exhibitions and Google chats about the state of publishing and architecture (some of which I’ve even organized) is that the medium, even when it is a product of networked culture and not a printed object, is a fetish. It is something to be worried over and mulled about conceptually. Older, recognizable formats—the newspaper, the blog post, the zine—become stand-ins for broad ideas about content—news, opinions, music reviews—but also for what that medium once represented—a citywide discourse, a citizen journalist, an alternative publishing network.

Content is even less interesting on the retail side of book publishing, per a recent piece by Timothy Carmody entitled “Why Metadata Matters for the Future of E-Books.” Carmody quotes publisher and distributor Don Hill. “[Hill] added that the major e-book retailers were unlikely to do much to push for enhanced titles, or create them: ‘I could see Apple getting involved as a way to expand hardware sales in the education or business market, though they’ve shown no inclination to create content so far.’” [3]

With mainstream magazines (design and otherwise), content remains elusive not only because of the medium’s trend-tracking periodic nature, but also because of an economic model that privileges advertising over editorial. The number of ad sales per issue determines by percentage the number of editorial pages. With advertising revenue down 26 percent in 2009, you find thinner and thinner magazines. [4] So, even while we are awash in information, there is less of it coming from traditional sectors.

Not beholden to ad sales and distribution, independent publications, often funded through grants or as labors of love, tend to push beyond generic content. What is revealed can be surprising and not easily categorized. At Publishing Futures: Content, Context, and Emerging Formats, a panel discussion held at the University of Illinois at Chicago’s School of Architecture last spring, Marc Fischer took the conversation, which until that point had circled the issues of format and content, in a new direction. Fischer, a member of the artist collective Temporary Services, which publishes under the imprint Half Letter Press, described the group’s long-standing collaboration with incarcerated artist Angelo. Several small, perfect-bound publications came out of the partnership, including the 2003 Prisoners’ Inventions, which documents in drawings and text the intricate ways inmates adapt to their celled lives. Fischer explained how the collaboration gave voice to a population otherwise invisible and mute, if not muzzled. The simple format, 100 pages filled with illustrations, offered depth and insight, not surface speculations.

Recently, I’ve found that the most satisfying way to get to content—to narrative stories, rich reporting, and interviews, to be more specific—is to actually read. I read on my iPhone on the subway; shuttling around underground, I stare into my small screen. I use the app Instapaper to save digital content in a flourish-free format that can be read offline. [5] Let me repeat, offline. I pair it with Longform, a website that offers a selection of texts curated by Aaron Lammer and Max Linsky.

(Let’s pause for a moment on the term “curator.” In a Web 2.0 context, “curating” has trumped “editing,” and what had been a title reserved for the gallery or museum now finds itself on the masthead, or what is left of it. The difference between the two roles at this point is marginal, with the exception of format. Both duties at their broadest involve selecting content and bringing it together in service of an idea, theme, or discourse. Where “curatorial” was traditionally reserved for art objects and “editorial” for texts, the digital liberation of content from the page or object means that just about everything is content, hence our overload of information. It also means everyone is a curator and a publisher (not editor, per se). Every individual has the daily role of choosing what variety of text and image to consume, and which bits to broadcast to an ever-increasing network of Facebook friends and Twitter followers. This, in turn, frees the publication from the act of publishing. Forget paper or websites: publishing is a happening, an act, an event that generates content in itself.)

Returning to Longform, a site that curates so you don’t have to: the curators write, “We post articles, past and present, that we think are too long and too interesting to be read on a web browser.” [6] Interestingness and misalignment with given formats (length) are the benchmarks. I read with abandon the stories that hide in Vanity Fair, the New Yorker, Wired, or in other shadowy parts of the web. Diamond thieves captivate me, as do the natural history of the octopus, a recovering blogger’s laments, and the mystery of those who fell from the World Trade Center towers on 9/11. Applied to architecture and design publishing, the Longform model is tantalizing. In a flooded world of information, this combo of tools privileges the text over the tool (even as it is dependent on the iPhone and iPad), the content over the format, the message over the medium.



1. Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains, Norton, New York, 2010, p. 2.

2. Bryan Boyer, “The Mediators,” January 2009.

3. Timothy Carmody, “Why Metadata Matters for the Future of E-Books,” August 2010.

4. Josh Quittner, “The Future of Reading,” Fortune, March 8, 2010, p. 63. (Okay, I broke down and read it.)




Mimi Zeiger is a Brooklyn-based freelancer, writing on architecture, art, and design for a variety of publications including The New York Times, Dwell, and Architect, where she is a contributing editor. She is the founder of the architecture zine and blog loud paper.

Spaces for Architectural Discourse and the Unceasing Labor of Blogging

© MAS Studio


Essay by Javier Arbona, writer and PhD candidate in geography with a background in architecture and urbanism.


(…) the conceptual act of architecture is the critique, transformation, and creation of institutions. Thus architecture can be considered, paradoxically, contradictory to building, to its institutionalizing presence.
Peter Eisenman


Using the architecture field as a case study, in this paper I speculate that the way we use the web is not the ongoing result of a titanic design-by-world-community, as we’re often led to believe. Navigation and communication on the web emerge from the consensus between two powerful groups of idea-shapers: the ‘legal-rights’ people and the ‘design-intelligence’ people. Together they build the ideological basis of the web, producing a spatiality that undergirds social experiences throughout life. Blogging, as a typical web practice, serves to show how the consensus at best ignores, and at worst advances, the conditions for free labor, the work necessary for making the web a growing capitalist infrastructure for accumulation. I finish by asking if architecture, a discipline in many ways dedicated to the critique of space, has room to counter dominant forms of spatial relations that the web engenders.

Our rights; their labor

First, a few seemingly simple questions: Who blogs? What professions are blogging? It’s actually very difficult—if not impossible—to answer these. Besides, given the familiar issues of authenticity, identity, and the multiple avatars that some people maintain across various websites, it’s thorny to try to pinpoint the relation between specialized groups and the actual content of what they’re writing about online. One moment it could be a new building; the next it could be kitten videos on YouTube.

Are professionals, be they academics, architects, or others with higher education degrees, blogging about topics willy-nilly, only as a necessary social appendix to their professional culture, in the way that cocktail parties function? Are they blogging about their work (sometimes a form of theorizing), or are they leveraging the excess time above and beyond their socially necessary labor time to do some blogging on the side? The answer very likely is that they’re doing all of these, in some proportion or another.[1] Given the variability and flexibility of blogging, and how it has reached into various corners of life, it’s also very challenging to draw any trustworthy conclusions about how architects are using blogs as part of their discipline, or architecture students as part of their design education. How does one even distinguish architects who blog from aficionados who just blog about architecture? These identities are often interchangeable or interblending, especially if one examines longer temporal scales.

What about the distinction between blogging and designing? It doesn’t really exist, according to some participant-observers. “Just ask (the blogging directors of the architecture firm FAT) Sam Jacob or Charles Holland,” says Enrique Ramírez.[2] Many are blogging while designing, and vice versa, blurring distinctions between spaces for theory, collaboration, entertainment, documentation, and production.

Blogs, thus, perform several functions in the context of so-called professions like architecture. This implies that if one wanted to understand how blogs relate to architecture disciplines writ large, one can’t just take the content of blogs at face value. One has to instead examine content along with the politics behind forms of tapping into the web (including blogs). By learning from such a combination, I’d like to offer here a wider critique of what all of us—students, critics, bloggers, architects, and academics (sometimes being more than one of these at a time)—are not doing, but perhaps could be. What if we were to examine the interface, the labor, and the fruits of blogging as interrelated moving parts? How would that change the architecture discipline as it evolves along with media?

So, with that in mind, let’s go backward in order to move forward again: What are people—whomever or whatever they may be—blogging about?[3]

In general, some anecdotal evidence and perfunctory study seems to indicate that there is a remarkable amount of legal content and discussion on blogs, especially pertaining to web law itself, as well as a visible presence of influential, online special interest law projects (e.g. the Electronic Frontier Foundation, Digitaldemocracy, and the Berkman Center at Harvard University).[4] This may or may not come as a surprise. Some day, historians will perhaps look back and point to this with curiosity. It personally strikes me as somewhat surprising, because those who possess standard legal wisdom are also throwing that common sense to the wind, what with the web being notorious as a fly net that catches embarrassing details of the past (e.g. rants, insults, nudity, bad legal advice) and exposes them in the present.

But upon second thought, this makes some sense. As Pew research has confirmed, the people who spend the most time online tend to be affluent (with salaries above $75,000) and digitally connected, awash in a fog of networked devices, some of them highly portable.[5] Perhaps BlackBerry-powered lawyers do fit the profile, after all. Nonetheless, the point wasn’t to monolithically ascribe generalized online content to specific professions (impossible, in the end), but rather to focus on the ideas themselves as etched onto web pages and circulating in society. Drawing from this data, then, one basic assumption to start with can be that when we all use the web, we enter into an experimental lab of privacy and intellectual rights, but not because of some inherent destiny for it to be such a space. Instead, it is such because there are people who drive the web to be that way.

Meanwhile, design culture—very broadly understood—is often focused on wealth disparities and access to the web. A global digital divide is neatly problematized and packaged, as shown by the invention of devices like the 100-dollar laptop. But this design culture (which includes interaction, interiors, architecture, mobile web and more) seeks to address the digital disparity through ever-more-clever products rather than through what could be various coexisting arenas of social development and communal access, technology being a variable, but not the panacea.

The obstacle, though, is that architects and their peers mostly abandoned arenas of the social as failures. But they jumped that ship only after distribution of the means of informational and cultural access was unevenly guaranteed in certain privileged contexts. I mention this just to point to one of the ways in which one form of spatial imagination, dominated by the network as a metaphor of a global connected society and as an actual infrastructure, is simultaneously occluding other alternative spatial imaginations of conduits to guarantee equal-rights access to venues of justice, education, and communication.[6]

So far, very schematically, I’ve mentioned, first, a growing suite of legal-rights concerns that tend to permeate every interaction on the web. And second, we also have the fetish for ‘design intelligence’ (aka ‘design thinking’) that discursively enables the frenetic production of networked commodities purportedly connecting every place and everyone through phones, smart walls, and even geocoded photos. The two—a legal matrix and a spatial ideal—are not independent of each other. (How could they be?) They both form a mutually-reinforcing ensemble and together shape a powerful ideology.

Although the discourse of rights is significantly more popular than that of design, each one enables the other. Together, they assure the success of the vision of a network of self-ruled and self-interested individual consumers who communicate with each other, but who then can choose how and when to dissipate back into personal private realms. Rights discourse promises free speech and a democratic web, thereby naturalizing the connected system through which we navigate and share online as the virtual space that is supposed to protect speech and democracy, not just what could be one of many different spaces. This is what we need to challenge, but so far we’ve often missed chances. Debates such as the recent invasion-of-privacy fallout over Google’s social network application Buzz establish that online rights and personal privacy must be the default focus of thinking and activism, more than other potential arenas that could weave between offline and online worlds. In fact, a wide slice of the digerati seems to assume that there is a near-universal passion for abstract issues of privacy and digital rights. This is best exemplified by the world’s most popular blog, BoingBoing, which routinely covers censorship, electronic surveillance, and corporate assaults on personal content, but that’s about it in terms of its agitprop. The brouhaha about Google in China or Facebook privacy also supports this point.

The problem I want to unpeel now is how power is distributed between these two communities of ideas, the one about rights talk, and the other about design talk. I’ll begin with rights as the obvious gorilla that dominates discourses about the web, and in ways that will be shown, greatly determines for the rest of us the usual arrangement of the virtual space (the network) of online interaction. The features of that network—the constant invitation to ‘tag’ content, for instance—constantly remit us back to the thought-space of rights (and democracy, by extension).

From there, I’ll move to the layer of design talk, focusing on a subset of architecture and design blogs. Aside from the generalized digital divide, then, there’s a second-order divide between the top of the now-well-established architecture blogs, and the rest. As a brief case study, I try to show that it is those on the top that commonly mix with—or even become—digital culture gurus, often consolidating visions of the web’s future for all. At the end of this short essay, I will finish with some proposals and incipient practices of what can be done as a critical intervention in this network to start to cut across divides.

The embedded behavioral cues of blogging

The run-of-the-mill framing of the social problem of rights and online presence is foundational even to the establishment of an everyday agreement of how we’re supposed to blog, starting with the very design and structure of blog pages and platforms. This also sets up an unwritten set of conventions that bloggers acquiesce to (myself included), producing in the process a certain blindness to class and economic issues. Blog layouts, which are commonly predesigned and provided by blog platforms “free” for users to easily set up, camouflage the ways in which blogging is also laboring.

Through a series of virtual devices common to almost all blogs (like “apps” for quick reposting, emailing, retweeting, bookmarking on other sites, or, say, “sharing” on Facebook, Digg, etc.), the work chores of circulating content are hidden by what seem like benign, abstract socio-communal acts—the appearance of a gift economy. Related to this, we also see common contradictions, such as one between fully trademarked or partially licensed content (e.g. Creative Commons badges like the one that I will, in fact, proceed to use for this very paper), side by side with campaigns like “defeat censorware.”[7] This implies, by the way, an untenable union between private intellectual property rights and open, free speech, but that’s another essay.

If it is true that these sharing apps and badges of functionality constantly refer back to the thought-space of rights, it is at the expense of concentrating on the divisions of labor that we all partake in. Privacy and rights have been heavily discussed in recent years, while labor time (as a blogger or a content contributor on social networking sites like Facebook, which are private content spaces), or labor rights (as online producers), have not been problematized all that much, and especially not as necessary components of the vaunted democracy.[8] If there is such a thing now as a unifying consciousness of a blogger, it is one along the lines of being digital consumers with consumer-right demands (like privacy)—but not of workers.[9]

Focusing briefly on one aspect (out of countless) of this functionality that reaffirms our rights as net-citizens, a glance at the pages of some of the most powerful blog platforms and social sharing sites (e.g. WordPress, Blogger, Delicious, Twitter) can give us an idea of what content people are “tagging,” the practice of collectively distributing and filing content online, like a vast archival brain. (For example, on Twitter, people add hashtags to tweets, collecting all posts that pertain to the tagged subject, like this: #Beyonce. On Delicious, bookmark collections and their annotations can even be licensed for various uses under the Creative Commons guidelines.)

What emerges from tag lists over and over is a picture that shows professional identities and professional or academic categories becoming less important. For example, on any given day, ‘medicine’ no longer has the appearance of being a hallowed discipline on Delicious. The people seem to have less use for ‘medicine’ than ‘health,’ which emerges in its stead to cross disciplinary boundaries and interest areas. This lends the impression that the web democratizes content and spans a lay public, the news media, and the professionals as equal agents in an even field, freed from a wage-economy.[10] (Is there any coincidence in that this phenomenon preceded one of the most massive periods of wage devaluation for journalists and other knowledge workers?)

To draw a brief working lesson from this short summary of tagging and such, one could say that while disciplines do get reconfigured—some of their members fading out into off-line irrelevance, perhaps—traditional disciplinary categories (medicine, architecture) can paradoxically gain traction and entrenchment. But, in order to do so, these disciplines must adopt the dominant practices of the web as badges of insider authority. Bloggers tend to be the leading edge that can legitimate a discipline for the twenty-first century, persisting no matter how much they get disparaged by the discipline’s older establishment. All the while, the disciplines, architecture included—just about all of us ‘in it’—leave powerful ideologies of the web alone, including malformed concepts inherited from the hyperactive rights talk, such as plurality, democracy, and the web as somehow a-spatial or post-geographical.[11]


Maybe more than anything before them in history, blogs do seem to achieve their buoyancy not from some baptismal light shone upon them by institutions of authority, at least not usually, but from their interpersonal networks on and off line. As the popular wisdom goes, they achieve prominence through popular citation—tagged, tweeted, shared—organically rising to the top of the cumulus.

But perhaps like any other discipline intersecting with the web, if architecture ever had such a “golden age” of an online meritocracy, it quickly outgrew it. Architecture developed a pyramidal structure in the earlier part of the 00s that can now greatly accelerate the vetting process from above to below, both online (as in Archinect’s School Blog Project, where better postings are often selected for the main news feed), as well as in events like Postopolis, or through communities where the important architecture-related bloggers meet the Silicon Valley brain trust (like at TED conferences).[12]

Popular architecture blogs include Inhabitat, BLDGBLOG, Dezeen, Pruned, ArchDaily, and DailyDose. Just by studying bookmarking sites, one can tell that these blogs coalesce at the top. There also are the wildly popular (though not exclusively architectural) Worldchanging, Treehugger, Curbed, and Gothamist blogs, which actually tend to pay salaries to small armies of staff. (We should not forget that the myriad comments left on sites are also an unpaid form of blogging).[13] These websites can be at least ten times more popular than blogs in the bottom of the pile. Some, like BLDGBLOG, have achieved coveted spotlighting on some of the most viewed sites in the world, such as Yahoo’s front page.

All this attention has been positive for architectural discourse, setting aside for a moment concerns about the lack of a critical approach to the cross between the rights talk and the design talk, but here’s the key factor: architecture academia and related institutions have largely missed the debate anyway, and shouldn’t they be the ones instigating it?

Architecture schools and institutions haven’t tried to come to terms with how these popular bloggers, much like the vast universe of other bloggers in other disciplines, establish a presence that sustains and nourishes a perch at the top. A co-mingling between human labor and the internet infrastructure results in particular socio-spatial configurations. It is telling that these networked subjects—the bloggers—sustain the uneven distribution of power by constantly laboring (mostly for free) in the digital salt mines: interacting on Twitter, constructing a page on Facebook, using Archinect commentary boards, and incessantly tapping on their phones to nourish networks. To slow down is to fade out.

A proposal

Alongside the digital rights talk, we have yet to see an institutional response—an amelioration (or maybe just a single fellowship for an architecture blogger)—toward these spatial relations of power on the web.[14] Because we haven’t, those of us in the architecture community forestall the opportunity for carving other spaces that can accommodate and foster newer forms of labor, such as inside the academy. Ideally, these spaces should also be either somewhat independent or better yet, critical of the extent of prevalent spatial discourses and practices that pertain to the web. Why not, for instance, have a challenger—one that does for theorizing network spaces what the Berkman Center does for rights talk—and at the same time critique the prevalent ideas about how democracy “should” be spatialized and protected online and offline?

I cannot wrap up, though, without applauding the recent emergence of the shadowy #lgnlgn (“Leagues and Legions”), an ever-changing alliance of architecture media participants that challenges corporate mega-conglomerates, first spearheaded by Mimi Zeiger, but a real collaboration among several others. #lgnlgn is nothing more than an experimental way of appropriating (usually) corporate web tools for insurgent, dispersed publication. Also, what about the Network Architecture Lab run by Kazys Varnelis at Columbia University’s GSAPP? Whether or not these fledgling ventures will be somehow supported and recognized by the academy or other institutions in the long run has to be fought for and demanded, lest the velocity of change leave them behind.[15]

What is needed now is an architectural imagination that can problematize the cartographies and ideologies of the web, showing that far from the imaginary boundless stew—an ideal, uncontested space—access and rights themselves, and therefore truly democratic speech, are bordered, spatialized, and conflicting in particular ways that need to be thought about.[16] We need an examination of the lines of labor mobility and flexibility, as well as to look beyond the simple spatial mappings that only show connections, but can’t say anything about the actual relationship between the politics of connectivity and the content that tends to prevail. Or, does architecture have anything else to contribute beyond the proselytizing of green commodities, the fetishism for informal architecture, the proliferation of domestic nest blogs, and the star vehicles like Dezeen? It remains to be seen.


ACKNOWLEDGEMENTS: Thanks for feedback and comments along several evolutions of this first-talk, then-essay to David Basulto, Bryan Finoki, Amber Frid-Jiménez, Cristobal García, Nam Henderson, Mark Jarzombek, Rafael Marxuach, Enrique Ramírez, Kazys Varnelis, and Mimi Zeiger.



1. A fair question would be “why look at blogs at all?” For the sake of brevity, it should be noted that this essay emerged (with new revisions and lengthy edits) out of an invitation, alongside Kazys Varnelis, to speak at the MIT History, Theory and Criticism Forum on “Blogitecture: Architecture on the Internet,” April 7, 2009. Beyond the call of duty to address that topic, it is generally assumed throughout this essay that constantly updated blogs (or weblogs) are the most dominant mode of publicly visible online communication (unlike email or chat, which are more private). See for audio and slides of the talk, uploaded by Kazys Varnelis.

2. Enrique Ramírez. Email to Javier Arbona and Kazys Varnelis. April 12, 2009.

3. Thus far, I have not stopped to define what the term “blogging” (aka web-logging) ultimately means. This is intentional. Not to be aphoristic, but blogging, in my view, is best left as ‘something that bloggers do,’ and that they necessarily reinvent all the time. Given the true diversity of forms of blogging, the only consistent element among blogs is the frequency of updates in comparison to a static web page.

4. My thanks go to Anne R. Kenney, Cornell University’s Carl A. Kroch University Librarian, for our enlightening (and fortuitous) conversation about digital archives and her knowledge of law profession blogs, as we rode on a bus from New York City to Ithaca, NY, on March 21, 2009. For more examples of the discourse of blogger rights, speech protection, and consumer rights see EFF’s “Blogger Rights” found at: Last accessed on March 13, 2010. Or see: Center for Digital Democracy. “Protecting Privacy, Promoting Consumer Rights and Ensuring Corporate Accountability.” Digital Marketing, Privacy & the Public Interest found at: Accessed on March 13, 2010.

5. Mary Madden, Sydney Jones. “Networked Workers.” Pew Internet and American Life Project. Washington, DC: September 24, 2008.

6. Stephen Graham, Simon Marvin. Splintering Urbanism: Networked Infrastructures, Technological Mobilities and the Urban Condition. (London: Routledge, 2001). See also: Matthew Gandy. “Rethinking Urban Metabolism: Water, Space and the Modern City.” City, Vol. 8, No. 3, December 2004. Pp. 363-379.

7. The Defeat Censorware campaign was prominently featured on the front page of the BoingBoing blog as of March 29, 2009, and is archived at the time of this writing under:

8. On forms of labor see Tiziana Terranova. “Free Labour: Producing Culture for the Digital Economy.” electronic book review. 2003. Last accessed at on March 13, 2010. Says Terranova: “It is about specific forms of production (Web design, multimedia production, digital services, and so on), but is also about forms of labor we do not immediately recognize as such: chat, real-life stories, mailing lists, amateur newsletters, and so on. These types of cultural and technical labor are not produced by capitalism in any direct, cause-and-effect fashion; that is, they have not developed simply as an answer to the economic needs of capital. However, they have developed in relation to the expansion of the cultural industries and are part of a process of economic experimentation with the creation of monetary value out of knowledge/culture/affect.”

9. A refreshing antidote to this condition was a recent conference: “The Internet as Factory and Playground: A Conference on Digital Labor,” held at Eugene Lang College, The New School, New York, NY, on November 12-14, 2009.

10. These comments are arrived at by making a qualitative assessment of popular tags on Delicious and WordPress. Screen captures of tags are available at:

11. I have a rough suspicion that a fascination with so-called informal architecture, an identifiable strain in architecture blogging, has been due to an exploitative parallelism made by architecture bloggers between hagiographies of the web (as somehow liberated from the state) and slums, interpreted as similarly liberated. Here is an example: “The Net Generation in particular recognizes itself in the story of this self-developing city, which is powered by the collective intelligence and individual aspirations of hundreds of thousands of people.” Matias Echanove and Rahul Srivastava. “Taking the Slum Out of Dharavi.” airoots, February 21, 2009. Accessed on April 12, 2009.

12. If someone truly embodies the human interface between Silicon Valley and architecture culture, on a basic level, I would have to point to Cameron Sinclair, founder, with Kate Stohr, of Architecture for Humanity, who embodies the ethos of an architecture blogger (always depositing commentary all over the web), at the same time that their projects like the Open Architecture Network emerge out of the TED/Silicon Valley milieu. Last accessed on March 13, 2010.

13. Sources consulted for this project are bookmarked at:

14. Since the writing of this essay, however, Geoff Manaugh, writer and founder of BLDGBLOG, has broken through that glass ceiling by earning a pioneering fellowship in a new program at the Canadian Centre for Architecture (“bloggers in the archive”) which at least begins to facilitate academic practices for bloggers, and somewhat begins to value their labor time in alternate ways to the usual Google ads. Manaugh’s boundary-crossing with CCA illustrates to some extent what I argue for in this paper.

15. An excellent example of what the Network Architecture Lab provides, as well as the kind of necessary discussions that open up issues of architecture’s canon in the digital age, can be found at: Accessed on March 13, 2010. See also: Jo Guldi. “Reinventing the Academic Journal: First, Take Down Your Website.” Inscape. February 7, 2009. Last accessed on March 13, 2010. Guldi explains how the academy could begin to spatialize itself on line as a kind of endowed curator of the glut of online content, but so far journals have approached the web with trepidation.

16. For some background, see: Fred Turner. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. Chicago: University of Chicago Press, 2006.


Javier Arbona is a PhD candidate in geography at UC Berkeley with a background in architecture and urbanism. His work looks at the politics and ideas of land use, spatial practices, design and visual culture, experimental landscapes, social movements, mappings, social theory, digital culture, and ephemera.


© Nick Gentry


Artwork and text by artist Nick Gentry


Everything has a life cycle and with technology there is a relentless push into the unknown for newer, more efficient products. Since their introduction in 1981, billions upon billions of floppy disks have been manufactured and 30 years later production is coming to an end. Despite their previous dominance, physical media objects will eventually become rare artefacts. The floppy disk stands firm and lives on as a metaphor for the increasing pace of the modern life cycle, mass production and the throwaway culture of today.

Reusing objects can negate the need for waste, with a new function that also often has more charm than that of the original. Seeing art produced in this way can encourage a more creative approach to everyday objects that are deemed to be obsolete or useless. What brings the overall concept to life is that blend of the nostalgic and familiar, together with the freshness of a new form of expression.

As information is released from the physical form, it allows personal data and identities to be revealed and permanently shared online to an infinite degree. At the same time, individuality and privacy are now considered to be more precious than ever. It is now common to cultivate a second identity online. Although this online identity can be comprehensive in detail, it is a virtual representation rather than the real thing and is in some way created by the individual. The paintings replicate this process as the disks contain an assortment of historic data, joined together to create a whole new identity.

Humankind is integrating with technology at an exponential rate. This merging has been happening throughout human existence, leading today to a crucial tipping point in the process. The majority of people now own a mobile phone, often carried everywhere. Mobile phones then make the transition to become computers, with endless functions that can be customised to the individual. Currently all this functionality is on a device that is close to, but outside of, the body. If it becomes internal, it would raise a fundamental question of identity: can a human still be considered to be an entirely organic being?

The paintings simply seek to highlight this new movement, as it becomes increasingly apparent as an important cultural and social transition of our time. Will humans be forever compatible with our own technology?



Nick Gentry is a London-based artist who reuses obsolete media formats of the past, like floppy disks and VHS tapes to create his work. In it, he explores issues of information, identity, and technology. Besides several group exhibitions, he recently had a solo show called “Auto emotion” at Studio55 in London. | @nickgentryart
