Rio Ops Center, designed by IBM. [Photo by the City of Rio de Janeiro]
By now you’ve heard the “smart cities” pitch. Our streets will be embedded with sensors, our buildings plugged into the internet of things, our commons monitored by cameras and drones, our urban systems recalibrated by real-time data on energy, water, climate, transportation, waste and crime. Any day now, our cities will be marvelously transformed into efficient machines. But it’s not so easy to see where you and I fit in. Most discourse on “smart” and “sentient” cities, if it addresses people at all, focuses on them as sources of data feeding the algorithms. Rarely do we consider the point of engagement — how people interface with, and experience, the city’s operating system.
As Ada Louise Huxtable might put it: Kicked a city lately?
Typically the urban interface is imagined as a screen. A 2011 report by the Institute for the Future predicted displays “embedded in buildings, kiosks and furnishings,” delivering “‘supercharged’ interactions that combine speech and gestural inputs with immersive, high-definition graphics,” while “ambient interfaces, which boil down complex streams of data to one or two simple indicators, will lurk in the background of everyday urban life, quietly signaling in our periphery.” 1
And behind all those screens is a flood of data. The designers and engineers at Arup, consulting for the city of Melbourne in 2010, proposed that so-called “smart cities” manage their informational riches in centralized “clearing houses,” where analysts can consolidate and compare data from disparate sources. A “real-time city model,” they wrote, “can enable an interface with citizens — a form of ‘my city,’ which also enables feedback loops from people themselves. As well as an interface onto the city, it also forms a kind of interface for the organization, possibly even indicating how to change the organization itself.” 2 Thus, personalized streams of city data are rendered into “actionable” information that makes our cities more legible, efficient and livable. 3
Dashboard proposed by Arup for Melbourne city staff. [From Melbourne Smart City]
As more cities adopt these technologies, we are beginning to see the political and epistemological contradictions of the smart city writ large, in steel and silicon. Underlying these personalized data streams and opportunities for public engagement is still, almost always, a “black box” control system. We’re empowered to report failed trash pick-ups or rank our favorite hospitals, but not entitled to know what happens to our personal data each time we pass through a toll booth, or how the doctor we rarely see knows our cholesterol is up. We often have little understanding of how and where the mediation of urban systems takes place within the city itself. Nor do we know how our intelligence translates into urban “sentience,” and what is gained or lost in the conversion.
City governments, technology companies and design firms — the entities teaming up to construct these highly networked future-cities — have prototyped various interfaces through which citizens can engage with the smart city. But those prototypes embody institutional values that aren’t always aligned with the values of citizens who have a “right to the city.” Judging from the promotional materials released by Cisco, Siemens, IBM, Microsoft, and the other corporate smart-city-makers, you’d think that one of the chief preoccupations of the smart city is reflecting its own data consumption and hyper-efficient activity back to itself. At its heart is a “control center” lined with screens that serves in part to visualize, and celebrate, the city’s supposedly hyper-rational operation. Rio’s Ops Center, designed by IBM, integrates data from 30 city agencies; its layered screens feature transit video feeds, weather information, and maps of crime statistics, power failures and other snafus. The city is thus partitioned into atomized projects, services and flows, each competing for technicians’ attention. We see a similar “widgetization” in Arup’s proposed dashboard for Melbourne staff: “This is Your City In Real Time.”
Governments and their citizens need to think more deeply about these designs. What does it mean to “modularize” urban services? To offer a map-based snapshot of something as complicated as “public health”? To permit users to filter data streams of interest? To dedicate prime screen space to “fast-moving” data while pushing relatively static urban dimensions to the bottom of the screen? What kind of intelligence do these windowed screens manifest?
If the ops-center dashboard has received too little critical analysis, the public interface has received almost none at all. Some smart-city proposals represent the public interface as a schematic mockup, with apparently little regard for interaction design. Others proffer a completely blank slate. (Intel renders its Sustainable Connected Cities interfaces as tiny, benevolent explosions.) The range of imagined programs and services is shockingly narrow: typically the street interface is little more than a conduit of transit information, commercial locations and reviews, and information about tourist attractions and cultural resources.
Tiny, benevolent explosions. Promotional image for Intel’s Sustainable Connected Cities program.
Many city governments have developed web portals to showcase their open data, and they host hackathons and competitions, usually resulting in apps that serve a single function — finding farmers markets, for instance, or measuring air quality — and that rarely survive without sustained institutional support. (Again, the “widgetization” of urban resources.) Almost always, these portals and apps frame their users as sources of data that feed the urban algorithmic machines, and as consumers of data concerned primarily with their own efficient navigation and consumption of the city. These interfaces to the smart city suggest that we’ve traded in our environmental wisdom, political agency and social responsibility for corporately managed situational information, instrumental rationality, and personal consumption and convenience. We seem ready to translate our messy city into my efficient city.
Is that the city — or the urban interface — we want? Of course there will be people who opt out of urban “smartness” altogether and move off the grid. But assuming that greater populations will find themselves residing in networked, intelligent megalopolises, we need to give more serious consideration to designing urban interfaces for urban citizens, who have a right to know what’s going on inside those black boxes — a right to engage with the operating system as more than mere reporters-of-potholes-and-power-outages. We need to focus attention on the “bleed points” between the concrete and digital and social city, those zones where citizens can investigate the entwinement of various infrastructures and publics. 4 And we need to examine the platforms that are already in existence, and those that are proposed for future cities. Even the purely hypothetical, the speculative — the “design fiction,” or what Bruce Sterling calls the “diegetic prototype” — can illuminate what’s possible, technologically, aesthetically and ideologically; and can allow us to ask ourselves what kind of a “public face” we want to front our cities, and, even more important, what kinds of intelligence and agency — technological and human — we want our cities to embody.
We’ll need to consider how these interfaces structure their inputs and outputs, how they illuminate and obfuscate various dimensions of the city, how they frame interaction, how that interaction both reflects and informs the relationship between citizens and cities, and ultimately how these interfaces shape people’s identities as urban subjects. We’ll need to challenge the common equation of “interface” with “screen,” and the implications of reducing urban complexity to a two-dimensional visualization. Can we — and I do believe this must be a collaborative, interdisciplinary enterprise — envision interfaces that honor the multidimensionality and collectivity of the city, the many kinds of intelligence it encompasses, and the diverse ways in which people can enact their agency as urban subjects?
Pothole reporting as envisioned by the app Improve My City.
The Urban Stack
In his 1997 book Interface Culture, Steven Johnson defines the interface as “software that shapes the interaction between user and computer. The interface serves as a kind of translator, mediating between the two parties, making one sensible to the other.” 5 It is thus more semantic than concretely technological. Branden Hookway, whose own book on the subject will be published next month, agrees that the interface does its work “not as a technology in itself but as the zone or threshold that must be worked through in order [for the user] to be able to relate to technology.” 6 In that working-through, the interface structures the user’s agency and identity and constructs him or her as a “subject,” which is different from a mere “user,” in that the subject’s identity shifts in response to contextual variations and is informed by historical, cultural and political forces.
But the zone between person and machine is only the most visible type of interface. Computer systems are commonly modeled as a “stack” of protocols of varying degrees of concreteness or abstraction — from the physical Ethernet hardware to the abstract application interface — with interfaces between every layer of this stack. 7 Alexander Galloway defines “interface” broadly, as “a general technique of mediation evident at all levels.” At the user level — where we kick the city — the technique might be graphical, sonic, motion-tracking, gestural (using hands or mice), tangible/embodied (involving the physical embodiment of data and the bodily interaction of users), or of another variety. 8
Thus, we might think of future-city technologies as an “urban stack.” At the highest level, we find all those zoomable maps and apps that translate urban data into something useful. Today, the most ubiquitous vehicle for this digested and visualized data is the cell phone. 9 The widespread availability of open data via smartphone apps (and globally via text message) has inspired many urban residents to explore “deeper” down the stack, to understand how local systems work behind the scenes: how their water arrives at their homes, for example, or where their garbage goes once it’s discarded. Previously in this journal, I’ve profiled “infrastructural tourism” and DIY data-science projects that connect citizens with those often-obfuscated networks. 10
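What it means, concretely, to build at the “highest level” of this stack is worth pausing on. The sketch below is a hypothetical Python example — the endpoint, dataset and field names are invented stand-ins for the JSON interfaces that many municipal open-data portals expose — and it shows how little of the underlying machinery an app developer, or a curious citizen, ever has to touch.

```python
# A minimal sketch of a civic app pulling from a city's open-data API.
# The endpoint and field names are hypothetical; many municipal portals
# expose broadly similar JSON endpoints for service requests, air quality, etc.
import requests

OPEN_DATA_URL = "https://data.example-city.gov/api/service-requests.json"  # hypothetical

def recent_pothole_reports(limit=25):
    """Return recent open pothole reports as plain dictionaries."""
    params = {"category": "pothole", "status": "open", "limit": limit}
    response = requests.get(OPEN_DATA_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for report in recent_pothole_reports():
        print(report.get("address"), report.get("reported_at"))
```

Everything below that single HTTP request — the sensors, the aggregation, the decisions about what gets published at all — remains out of view.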
The urban stack. Promotional image for Living PlanIT’s Urban Operating System.
Yet much of what’s “beneath” or “behind” the user interface remains inaccessible and unintelligible. Powering these public-facing interfaces are highly sophisticated technical and administrative networks that integrate urban services and infrastructures — water, power, police and fire services, snow removal, etc. — with computer operating systems. 11 Living PlanIT, for instance, “owns and monetizes” the Urban Operating System (UOS™) — with projects in London; Almere, the Netherlands; and Paredes, Portugal — which “extracts, aggregates, analyses and manages sensor data” in urban environments, thereby “harvesting useful intelligence and also enabling management, control and greater efficiency for many city services.” 12 Control and efficiency: these are the values — and the ends of intelligence — built into this system. Yet citizens don’t come into contact with the Operating System; they merely reap its efficient rewards. The obfuscation of the OS — largely intentional and perhaps even necessary, to the extent that it enables us to focus attention on the data most immediately relevant to our urban experiences — is also risky. We forget just how extensively these layered interfaces structure our communication and sociality, how they delimit our agency, and how they are defining the terrain we’re interfacing with.
Futurist and urban theorist Anthony Townsend acknowledges, “It is difficult to see the consequences of decisions about smart cities. The stuff of the smart city is literally invisible and usually illegible to the layperson. It is hidden and privately-held. It is unimaginably complex, and its impacts are often subtle, indirect and dynamic.” 13 Where’s the citizens’ “Lookout Tower,” he wonders? How might we conceive of interfaces that allow us to monitor those aggregators and protocols, and even deeper levels of the urban stack — the code, the hardware, etc. — that undergird integrated (and often proprietary) urban operating systems? Below the human-computer level of the urban stack, we have the wireless networks that transport data to and from us, and the application programming interfaces (APIs) that allow various entities — including third-party companies, non-profits or individuals — to build apps that tap into our cities’ open data. 14 Particularly given the complexity of these networks, and the profound implications their algorithms can have for urban politics and our identities as urban “subjects,” we should have a means of looking inside the box, if not tinkering with the code.
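One way to picture what such “looking inside the box” is up against: in the toy Python sketch below — all names and values are hypothetical, and the layers are drastically simplified — each layer of an imagined urban stack wraps the one beneath it, and the citizen-facing call at the top returns only a digest: one number, stripped of its provenance.

```python
# A toy model of the "urban stack": each layer mediates the one below it.
# All names and values are hypothetical; the point is that the citizen-facing
# layer exposes only a thin digest of what the lower layers collect and decide.
import random

def sensor_layer():
    """Hardware: raw readings from (imaginary) street-level sensors."""
    return [{"sensor_id": i, "pm25": random.uniform(2, 60)} for i in range(100)]

def network_layer(readings):
    """Transport: in reality, wireless backhaul; here, a pass-through."""
    return readings

def urban_os_layer(readings):
    """Aggregation and control: proprietary logic the citizen never sees."""
    average = sum(r["pm25"] for r in readings) / len(readings)
    return {"air_quality_index": round(average, 1)}

def public_api_layer(summary):
    """API: exposes only the digest, not the raw data or the method."""
    return {"air_quality": summary["air_quality_index"]}

def citizen_app():
    """The screen we actually touch: one number, shorn of its provenance."""
    return public_api_layer(urban_os_layer(network_layer(sensor_layer())))

print(citizen_app())  # e.g. {'air_quality': 31.4}
```

The point of the exercise is not the code but the asymmetry it makes visible: every consequential decision sits in the middle layers, which the top-level interface neither exposes nor explains.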
Indianapolis, 1914. [Courtesy of the Library of Congress]
Observers have long sought to wrap their heads around complex urban operations and to picture the totality of the urban domain. The rise of print in the 15th century brought new maps and guidebooks and public posters that shaped residents’ comprehension of and interaction with their cities. 15 The explosion of newspapers in the late 19th century likewise offered a new means of “overviewing” the expanding, increasingly diverse, polyglot metropolis. The inventions of panoramic and aerial photography and, eventually, satellite imagery offered ever-more comprehensive, scalable representations of cities — and our places within them. 16 Today, media facades, public screens, ambient interfaces, responsive architectures, and other forms of “public interactives” are transforming our physical environments into interfaces in their own right. 17 In the Melbourne proposal, Arup envisioned screens embedded in architectural facades, at transit stations, on the side of trams, and hanging from posts on every block. Even local waterways, the designers suggested, could become “ambient” conduits for visually (and perhaps sonically and haptically) sharing information about their own workings.
Yet I have to wonder: interfaces to what? What is the “city” they propose to put us in relation with, and how deep into the stack does that relation go? In too many cases the “city on the screen” is little more than a set of measurable events, trackable movements, and rate-able services. Could we develop urban interfaces that actually help us wade through, make sense of — and critically engage with — the oceans of data generated by our cities and presented to us in edited form? Could alternative modes of presentation encourage us to think about the biases, affordances and limitations built into our tools and techniques of data representation? Could we “read” our urban interfaces — our windows into the urban operating system — as a means of assessing the ethos of urban development, ensuring that our cities’ operations are upholding an open, democratic ethic? 18
Arup expressed a lofty vision:
A smart city is one in which the seams and structures of the various urban systems are made clear, simple, responsive and even malleable via contemporary technology and design. Citizens are not only engaged and informed in the relationship between their activities, their neighborhoods, and the wider urban ecosystems, but are actively encouraged to see the city itself as something they can collectively tune, such that it is efficient, interactive, engaging, adaptive and flexible, as opposed to the inflexible, mono-functional and monolithic structures of many 20th century cities. 19
But we should push this even further: the city should be not only tune-able, but also intelligible, tinker-able and hack-able. The future-cities we’re developing should position themselves in opposition not only to the inflexibility and mono-functionality of 20th-century cities, but also to the proprietary, trademarked “smartness” that is the dominant model for 21st-century cities. Rather than making the city’s services and networks appear seamlessly integrated, rather than disappearing the interfaces between the deep levels of the urban protocol stack, our interfaces could highlight the seams — in our infrastructural networks, between various layers of the urban stack, and even within the social fabric — thereby helping us to better understand how our cities function, and how we can develop the necessary tools to monitor and modify their operation.
Renderings from Arup’s Smart City Melbourne.
Interface Critique
The most prevalent ways of thinking about human-computer interaction (HCI) are framed by values central to engineering. According to media scholar Johanna Drucker, the evaluation of interfaces typically involves “scenarios that chunk tasks and behaviors into carefully segmented decision trees” and “endlessly iterative cycles of ‘task specification’ and ‘deliverables.’” 20 Such thinking tends to equate the “human” in HCI with an efficiency-minded “user.” This is of course how much smart-cities discourse frames inhabitants, too — as efficiency-minded, affect-less consumers of urban resources. But if we want our cities to embrace a wider set of experiences and values — serendipity, ambiguity, even productive, “seamful” inefficiency — and to facilitate more diverse forms of human agency, what should we be looking for in those interfaces between the city and its inhabitants? 21
I offer elsewhere a more thorough discussion of a methodology for interface critique that draws from the humanities and social sciences as well as engineering. Here, I’ll simply offer a rubric for how we might evaluate our urban interfaces. We should consider:
- The materiality, scale, location, and orientation of the interface. If it’s a screen: where is it sited, how big is it, is it oriented in landscape or portrait or another mode, does it move, what kinds of viewing practices does it promote? If there is audio: where are the speakers, what is their reach, and what kind of listening practices do they foster?
- The modalities of interaction with the interface. Do we merely look at dynamically presented data? Can we touch the screen and make things happen? Can we speak into the air and expect it to hear us, or do we have to press a button to awaken Siri? Can we gesticulate naturally, or do we have to wear a special glove, or carry a special wand, in order for it to recognize our movements?
- The basic composition of elements on the screen — or in the soundtrack or object — and how they work together across time and space.
- How the interface provides a sense of orientation. How do we understand where we are within the “grand scheme” of the interface — how closely we’re “zoomed in” and how much context the interface is providing — or the landscape or timeframe it’s representing?
- How the interface “frames” its content: how it chunks and segments — via boxes and buttons and borders, both graphic and conceptual — various data streams and activities.
- The modalities of presentation — audio, visual, textual, etc. — the interface affords. What visual, verbal, sonic languages does the interface use to frame content into fundamental categories?
- The data models that undergird the interface’s content and structure our interaction with it: how sliders, dialogue boxes, drop-down menus, and other GUI elements organize content — as a qualitative or quantitative value, as a set of discrete entities or a continuum, as an open field or a set of controlled choices, etc. — and thereby embody an epistemology and a method of interpretation. (A sketch below illustrates the point.)
- The acts of interpretive translation that take place at the hinges and portals between layers of interfaces: how we use allegories or metaphors — the desktop, the file folder, or even our mental image of the city-as-network — to “translate,” imperfectly, between different layers of the stack.
- To whom the interface speaks, whom it excludes, and how. Who are the intended and actual audiences? How does the underlying database categorize user-types and shape how we understand our social roles and expected behavior? This issue is of particular concern, given the striking lack of racial, gender and socioeconomic diversity in much “smart cities” discourse and development.
And finally, what kinds of information or experience are simply not representable through a graphic or gestural user interface, on a zoomable map, via data visualization or sonification? While some content or levels of the protocol stack may be intentionally hidden — for the sake of “public” security, for instance, or to protect Cisco’s and IBM’s intellectual property — Galloway argues that some things are plainly unrepresentable, in large part because we have yet to create “adequate visualizations” of our network culture and control society. 22 Urbanist and designer Adam Greenfield proposes that some aspects of the city need simply to be made recognizable to the machine, translatable through the interface:
As yet, the majority of urban places and things appear to the network only via passive representations. The networked city cannot come into its own until these are reconceived as a framework of active resources, each endowed with some manner of structured, machine-readable presence, and the possibilities for interaction such provisions give rise to.
Yet we should also consider — as Greenfield does in his larger body of work — the possibility that some aspects of our cities are not, and will never be, machine-readable. 23 Affect, for example. Or beauty. In our interface critique, then, we might imagine what dimensions of human experience and the world we inhabit cannot or should not be translated or interfaced.
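To ground the rubric’s point about data models: the hypothetical schema below shows how a “citizen responsiveness” form decides, in advance, what counts as a reportable urban problem. The categories, fields and coordinates are invented for illustration; the epistemological move — a controlled vocabulary rather than an open field — is the familiar one.

```python
# A hypothetical schema for a citizen issue report, illustrating how a data
# model embodies an epistemology: the controlled vocabulary below decides in
# advance what kinds of urban problems can be "said" to the system at all.
from dataclasses import dataclass
from enum import Enum

class IssueCategory(Enum):          # controlled choices, not an open field
    POTHOLE = "pothole"
    MISSED_TRASH_PICKUP = "missed_trash_pickup"
    BROKEN_STREETLIGHT = "broken_streetlight"
    GRAFFITI = "graffiti"
    # No category for, say, "this plaza feels unsafe at night" or
    # "this block is being priced out" — those concerns have no slot here.

@dataclass
class CitizenReport:
    category: IssueCategory          # quantifiable, routable, closable
    latitude: float
    longitude: float
    description: str = ""            # free text, often unread by the algorithm

report = CitizenReport(IssueCategory.POTHOLE, 60.1699, 24.9384,
                       "Deep hole near the tram stop")
print(report.category.value, "at", report.latitude, report.longitude)
```

Whatever cannot be fitted into the enumeration is, from the system’s point of view, not an urban problem at all.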
Stills from a promotional video for Urbanflow Helsinki.
Slabs and Clouds
We examined several real and hypothetical urban interfaces in the examples above. Let’s now consider, in a bit more depth, two speculative projects that raise further questions about how interfaces function, and how they inform our relationships with our data and our cities.
Urbanscale, an “urban systems design practice” — once a team, but now Greenfield’s solo practice — lamented the way most cities make use of their digital surfaces and “situated screens”: 24
We don’t believe that any particularly noteworthy progress will be made by dumping data on a screen and calling it a day, let alone transposing an utterly inappropriate “app” model from smartphones to large, situated displays. Urbanflow (Urbanscale’s urban communication system) supports our contention that whether municipal, commercial or citizen-generated, data only becomes understandable and usefully actionable when it’s been designed: when it’s been couched in carefully-considered cartography, iconography, typography and language.
Greenfield’s team proposed an “urban operating system” that would facilitate journey planning and wayfinding (privileging pedestrian travel), service discovery (including locations, reviews, open hours, etc.), access to ambient data (including information on all the signature “urban informatics” concerns, like air quality and noise pollution) and “citizen responsiveness” (i.e., encouraging citizens to report problems and make requests for public services). Urbanflow, they claimed, would offer a “dedicated platform where the city and its citizens can meet.”
Their detailed proposal for the city of Helsinki envisioned larger-than-human-scale slabs positioned throughout the city. These screens would detect motion, “wake up” when you walk by and “hail” you to interact, and they would immediately place you in the middle of a map — “You Are Here” — situated at your current coordinates, oriented to reflect the cardinal direction you’re facing. Unfortunately, these mini-monoliths would obstruct your view of the real city; when you were interacting with the screen, you’d have no choice but to be immersed in it. Yet the interactive elements would help orient you spatially and temporally. The map’s night and day modes would reflect the time of day. You’d also see, by default, a ring indicating how far you could walk in 5 minutes; zooming out would bring into view 10-, 15-, and 30-minute walkable areas. The preferred subject position here, obviously, is that of the pedestrian. 25 Whether that’s your mode of transit or not, Urbanflow suggests that “the city is here for you to use” (to quote the title of Greenfield’s planned series of ebooks). 26 The interface thus reinforces a primarily egocentric position and an instrumental approach to the city.
Printing out a route map with Urbanflow Helsinki.
Urbanscale and their partners, Nordkapp, paid more attention than most smart-city developers to interface design and HCI-driven understandings of user experience, while also allowing for serendipitous discovery and evincing a keen aesthetic sensibility. When you touched anywhere on the map — crisply rendered in light-on-dark-grey at night, and lime-green-on-white during the day — a “ripple effect” would register your touch and offer a peek at the available information. You’d thus immediately recognize the map’s stack of data, and the city’s parallel stack of services. You could toggle those layers — represented by flat, highly abstract icons in Marimekko colors — on and off, allowing for focused navigation or comparison of disparate data sets. While working with any particular layer(s) of data, the others would be dimmed or hidden. You could also pan and zoom via familiar swipe and pinch gestures; thus the same screen could display “contextual, hyperlocal information as well as broader, citywide content.” 27 Urbanflow would also allow you to search for streets, places and things with an on-screen keyboard, and to print travel routes and transit tickets from an embedded printer, translating the digital interface into a physical one. 28
The design team claimed that, with Urbanflow, “the city itself becomes … easier to navigate for visitors, and more serendipitous for locals. City officials and municipal governments would be provided with a completely new way to connect with citizens and visitors.” Still, we must wonder about the politics of the interface’s egocentric framing: the fact that you, the user, are always at the center. What urban imaginaries and realities are “toggled off” when your own navigational and informational and service needs are always front and center? In addition, we should consider what it means to represent highly processed data as flat, abstract icons. Much nuance is lost when complex information, generated via unintelligible methodologies by invisible entities, is collapsed into festively colored 2D boxes. Finally, we should consider the implications of the screen’s monolithic stature. The swipe gestures and ocularcentric presentation suggest that the city, like your smartphone, is a place for casual visuality and “like”-based politics, where you can simply filter out that which doesn’t interest you and route yourself around inconveniences. After all, the city is here for you to use.
Renderings for The Cloud.
Our second example, “The Cloud,” is an interfacial monolith at superscale. Bringing together a massive global team of designers and engineers and artists and organizations (including Arup, Google and MIT), the Cloud was to be an observation deck for the 2012 Olympic Games. Yet just as Nest’s thermostat isn’t your everyday widget, the Cloud wasn’t your everyday tower. Its tall, spiral ramp would carry pedestrians and cyclists high up to a cluster of transparent inflatable spheres and observation decks, where visitors could walk “among the clouds.” It was a rather high-concept project — the team’s 2009 proposal (never realized) was rich with allusions to poetry, art, architectural history and meteorology — but we’ll focus here on the Cloud as an interface. 29
First, the Cloud would provide points of view and visceral experiences not typically accessible to the street-level Everyman, particularly not in mountain-less London. There is of course an ideology, and a history, to this privileged perspective — that of the deity, the satellite and other militaristic “visioning machines,” the CEO in the executive suite. Second, the inflatable structure was to be itself a substrate for data visualization; it would allow for the geo-location of information about medal winners and attendance at the Olympic events, energy use, transportation patterns, mobile phone activity, even historical information about the region. And it would display some of that data via augmented-reality interfaces, which would “layer” information on top of the landscape visible below. But the Cloud would also generate data — this is my third point — by collecting rainwater, harnessing wind and providing a unique sensory experience for visitors to be in the weather — in the ubiquitous London clouds while in the Cloud. As explained by Dan Hill, then an Arup representative on the design team, the Cloud would “take aggregate individual patterns and reveal them at civic scale, thus binding the city’s activity together via a gentle ambient drizzle of data.” 30
Continuing the meteorological metaphor:
Like all tell-tale signs of brooding weather, the Cloud is a display system. It is both screen and barometer, archive and sensor, past and future. The patterns of its animated skins offer a civic-scale smart meter for the Olympic Park and for London as a whole, sign-posting particular events, transport patterns, weather forecasts, timetables and footage which can be real-time or decades old. Its movements can reveal the movement of people below, or even within its structure, detected by hidden sensors — a space alive to the touch, an aerial ecology.
It implements a radically new non-Cartesian method of spatial display (a suspended field of distributed LED signage) that enables it to be seen from all directions, including from within. It destroys the antique divide between audience and spectacle; the people become the project & projection, watching and learning from themselves, transmuted into light. 31
Pie-in-the-sky it may have been, but the embodied interactive experience is worth considering as a model for urban interfaces. The Cloud proposed a macro-scale view that approximated the scale of modern information networks, literalizing the metaphor of the data cloud. Participants would have assumed the position of “aggregators” by walking around the space and kinesthetically, choreographically, tying together various threads of data. 32 Its primary publics would have been the visitors to Olympic Park — but because the monumental structure would be widely publicized on television and the internet, its “audience” would have been global. That audience could have posted their own relevant content on social media, which might have then made its way to one of the screens inside the Cloud, for folks on-scene to see.
But we don’t know how that data would have been made perceptible. We see no representation of the embedded screens or their interfaces in the team’s renderings, and we have no indication of what other sensory outputs might have been engaged. The renderings emphasize visitors’ elevated point of view and kinesthetic experience, which do indeed allow for a novel “embodiment” of urban data. But I wonder: by transforming surveillance and aggregation into an immersive display, a fair attraction, without addressing the power and privilege associated with those perspectives and roles, would the Cloud have “destroy[ed] the … divide between audience and spectacle,” or would it have turned its subjects, and their data, into spectacle?
The Cloud.
Breaking the Frame
In his critique of financial interfaces, economist and philosopher Georgios Papadopoulos acknowledges the potential of the interface to function disruptively (such a shame that this term has been spoiled by Silicon Valley!), unmasking the norms and limitations of the financial system it models, and offering a “transparent” look at its underlying ideologies. 33 Galloway similarly calls for a “counter-cartography” — which might be realized through the urban interface — that reveals and tests the protocols of our “informatic imagination.” It’s not enough merely to “intervene at the level of ‘content’” — for instance, to use a flow chart or PowerPoint or Google Map to trace networks of power. We have to test the “prohibitions” of our platforms’ forms and materialities and affordances, too; we need to experiment with “new data types, new ‘if-then’ statements, new network diagrams, new syllogisms.” 34
Can we create a formal or structural parallel between the urban structures we desire and the interfaces we create to mediate those cities? Are we sure, Hill wonders, that core civic values — serendipity and productive inefficiency, personal and civic responsibility, “meaningful activity from citizens and government, the city as public good, and … diversity and regard for the affective dimensions of urban experience” — are part of the smart city vision? Furthermore, he asks, “are our governance cultures and tools in the right shape to genuinely react to the promise of The Network?” 35 Are these same values embodied formally in our smart city interfaces? Could governments use these tools to “boldly prototyp[e] new versions” of themselves? Could citizens use these same tools to investigate urban power structures and access to resources? We should be using our urban interfaces to afford our publics a peek “down the urban stack,” to the invisible infrastructures that make the city work; to call attention to the unrepresented populations and urban problems that are filtered out of our whitewashed and abstracted city renderings; to highlight opportunities for improvement, and the roles everyday people could potentially play in effecting that change. We could be using our urban interfaces to educate our publics about the nature of government and the expanding “science” of urban management — about the methodologies of data gathering and analysis, the politics of visualization, the algorithms behind the “urban operating system,” and the servers and wires and waves that make it all possible.
Our urban interfaces could compel us to ask questions about what kind of cities we want, and what kind of citizens we want to be. The creation of a better interface — an interface that reflects the ethics and politics that we want our cities to embody — is necessarily a collaborative process, one drawing on the skills of designers of all stripes, technicians, engineers, logisticians, cultural critics and theorists, artists, bus drivers and sanitation workers, politicians and political scientists, economists, policymakers and myriad others (including women and people of color, who have been egregiously underrepresented in relevant debates). If our interfaces are to reflect and embody the values of our city, the conception and creation of those interfaces should be ours, too — not Cisco’s, not the administrators’, certainly not mine or yours. But ours.