How much of what we call a city is still embodied in streets and buildings, parks and squares? Nearly two decades ago, William J. Mitchell declared, “The classical unities of architectural space and experience have shattered — as the dramatic unities long ago fragmented on the stage — and architects now need to design for this new condition.” 1 Actually, architects (among others) can and must now work from within this condition: the architectures of software, teams, markets and communities may not look or work anything like the architectures of the buildings and cities and networks through which they circulate.
Let’s start with my studio. I inhabit two architectures, more or less alien to each other — an information architecture, and a physical one. My office is a notebook computer that weighs less than seven pounds, and I carry it with me when I travel; my storage “space” is on data servers in Germany. Every generation of software or hardware re-engineers this new soft world. If West 30th Street outside my Manhattan window were changing as fast as the programs I use daily, it would be unrecognizable most mornings. How could I possibly explain my work needs to an architect, or configure a collaborative design environment for a multi-person studio, or even (yikes) a public space?
Taken separately, most data networks and applications do not reflect this reality. They are planned, designed, built and deployed through the public, civic and commercial sectors over long product development and marketing cycles, as if the whole system were as fixed and “hard” as the Manhattan Bridge. The old trolley lanes now carry cars and trucks, and the approaches have been crudely adapted, but the thing itself seems to hold its shape, and stands the same distance from my door as when it opened in 1909.
Fortunately, software technologies can be taken apart and rebuilt on short notice at artisanal scale. A few skilled people can transform existing tools or build new ones, along with entirely new forms of work and organization. Here I present an overview of visualization, simulation and design tools as they have emerged and evolved to the state of their present art, and I offer some thoughts about how they might fruitfully co-evolve with physical environments, as deeply social media.
The New Soft City
A city is “soft” to the extent that it can be characterized as a meta-network of data flows and personal communication, operating to some extent independently of the “hard” infrastructure of buildings and roads. Arup’s Dan Hill defines the new soft city as one whose physical elements are functionally augmented by cameras and other sensors that feed data to everything from ventilation equipment to command-and-control centers for surveillance, emergency response, and day-to-day operation of public services. 2 Piggybacking on this infrastructure, Hill says, is another layer of softness that includes buildings and streets acting as urban informatic systems, displaying real-time public transit data and parking availability, as well as innovations like smart meters in private homes. In the new soft city, people are offered “awareness,” governments obtain “intelligence,” and planners, architects and engineers get lots of rich new “data mines.”
To this construct we might add yet another layer of mediation. The new soft world is a radically symbolic, social and political environment, whose physical components constitute a subset of urban media. The very bedrock of this place is blurred and fluid. Every time you wake up there, the shape of your job, your neighborhood and your social network has already changed a little bit.
The term “soft city” was actually coined by Jonathan Raban in 1974. It had nothing to do with computer software, hardware or digital infrastructure:
Cities, unlike villages and small towns, are plastic by nature. We mould them in our own images: they, in turn, shape us by the resistance they offer when we try to impose our own personal form on them. In this sense, it seems to me that living in cities is an art, and we need the vocabulary of art, of style, to describe the peculiar relationship between man and material that exists in the continual creative play of urban living. 3
Raban was writing about the art of inhabiting a city where acquaintances or neighborhoods mutate unrecognizably, or disappear, in just a few years. A street corner might seem solid — it was there before you were born and will be there the next time you pass by — but it has lost its essential place-ness. It is superficially stable but effectively temporary, contingent. Buildings and streets are the fossils of a long-dead neighborhood; they persist even as another neighborhood imagines itself into place.
In this new soft world, a building is not so much an artifact enhanced by communication and control systems as it is a configuration of physical and virtual elements in a constant state of co-transformation, with mostly mismatched capabilities (what it can actually do) and affordances (what it looks like you might use it for). Such mismatches are inevitable in an era when the uses and capabilities of buildings change far more often than buildings are torn down. Software developers use the term “beta” to describe a provisional version released at no charge to proficient users willing to work with something that hasn’t been fully vetted — often the only effective way to complete a project. But the project of a city is never complete. As local demographics, macroeconomic factors, and transportation and communication technologies change, so too does the optimal form and function of every building and neighborhood — and, thus, the optimal form of the city.
This is true in a thousand ways, down to the scale of a single room. A class of students with laptops and wifi needs a different physical configuration than a traditional lecture hall or seminar room. Just ask any instructor who has looked out upon a sea of students with screens open to porn, poker, social chatter, class notes, or all of the above. The physical isolation of instructor and students in that classroom no longer sequesters them from the world. “Outside” has lost its meaning and its purpose. Now imagine the design of an entire university. How much of the campus is there out of habit? What hybrid architectures would make more sense today?
Professionals, too, are caught between worlds. A lot of telecommuters and cubicle captives need Facebook and Twitter to stay (approximately) sane through the day — with or without pink noise, deer in the backyard, or Nespresso. In the new soft world, real-time social connection is fundamentally “infrastructure,” as essential as the underground conduits that house electrical and communication cables.
I am a member of numerous formal and informal communities, some of which are poorly (or maybe just primitively) supported by legacy environments we intermittently struggle to outgrow. The shapes of those hard rooms and buildings (and other urban structures) are almost never reconceived from scratch. They change by having more mechanisms grafted on: ethernet cables, servers, security cameras, ID scanners, and so forth. As you read this, office buildings in lower Manhattan are being converted to residential uses in response to the flight of commercial tenants whose employees work at greater physical distance from the empty trading floor of the stock exchange. More profound changes in how we understand, imagine and build natively soft cities are on the horizon.
Before you get too dizzy, here’s a bit about the next buzzword you might be expecting: “smart city.” The term has been applied to an impossible range of concepts, from judicious urban planning, to cultural and intellectual human capital, to government efforts to saturate urban environments with sensors, networks and command facilities. But for a city to be meaningfully intelligent, let alone wise, it’s not going to be enough to blanket its private and public structures with cameras and sensors feeding monstrous servers with überbits of data, no matter how many command-and-control centers are built under City Hall, the Pentagon, the World Bank, Autodesk or your desk.
For a “soft” city to be a city at all, it must be a human community that is collectively competent to shape its own smart materials and technological conditions, and to persist in meaningful form as its environment mutates. One way to do this is for the virtual city to constitute itself as the designer of that process of mutation: always reconstituting itself by redefining its own infrastructure — not so much in its own image, as in the image of its own aspirations.
For civil engineering to be civil, for urban design and planning to be legitimate and competent, even for public artworks to be honestly public, there must be spaces shared by citizens, their representatives and professional experts. As access to personal computers and networks spreads, along with the skills required to use and change their capabilities, more and more of us are effectively in a position to be full citizens of this new blended-reality polis. The necessary techniques for visualization, simulation, and augmented collaboration and design are in development right now, as they have been for a long time, and we all need to learn how to use them.
The Sustainable City
In a rapidly changing world, buildings and cities must become ever-more-complex hybrid systems of data, energy and material exchange just to maintain a reasonable approximation of their current form (this is what sustainability means), let alone evolve to new levels or different orders of capability and quality. 4
It’s no accident that the concept of “high-performance architecture” adopts the terminology of the high-performance automobile: buildings and cars are pre-digital forms that lend themselves to augmentation and optimization by the use of digital logic to improve their physical systems and by the addition of digital sensors and controls to those systems. These are incremental upgrades. We ask the car to go a bit faster or farther or more safely on a gallon of the same gas; we ask the building to lose or gain kilowatts, British thermal units, or gallons of water.
In this process, though, the building is transformed in a subtle but crucial way, from an implicitly stable environment to a dynamic (and replaceable) object, a piece of equipment. A “smart” building is upgraded, tuned and controlled in real time, or close to it: I tweak the settings on my home digital climate control system in response to monthly feedback from minutely itemized reports about fuel, electricity and water consumption provided by smart-metered utilities. On a practical level, this simply enables new efficiencies at the same order of consumption, but conceptually I now have a very different relationship to that house. Part of what it means to inhabit this environment is to operate this “machine for living” 5 through an assortment of digital-mechanical interfaces: the screens and buttons of climate-control modules and smartphone apps. I may frame this as environmental ethics or household economics, but I’m not in it for the sake of transformation; I want to maintain the status quo. What is sustainability but as much as possible, for as long as possible, of what I already have?
A so-called smart city is analogous: the streets and buildings are right where they were when these were dumb cities, but now they have lots and lots of sensors producing “Big Data.” 6 In the near term, the cost of installing and operating these systems is justified by the potential for optimization: gadgets grafted onto existing infrastructure can do the same work as before, only faster and cheaper. In an emergency, city agencies can be commanded and controlled at new levels of performance. Over time, the combination of lots of data and advanced information technologies makes it practical for city managers and engineers to design more efficient systems, infrastructures and regulations. 7 In principle, this is no more complicated than improving record-keeping and information management in order to render a higher quality of services and mitigate their cost. In practice, however, it amounts to creating a whole new class of IT infrastructure, on the assumption that the data will find its way not only into the daily operations of municipal agencies, but also into what I call “Big Design”: planning, design and governance software tools that apply big data resources in ways that are fully accessible not only to specialists but also to citizens and their non-technical representatives, who must ultimately make decisions and assume the risks of implementation.
The New Soft Citizen
Some of these tools will fall under the rubric of data visualization: charts and maps that help humans make sense of the large and complex data sets created and curated by smart infrastructures. Others will offer simulation capabilities: now that we know how a system performs under given conditions, what would happen if this regulation or that traffic pattern were modified? Still other tools will enable new modes of architecture and urban development: design software can be plugged into data sets which provide parameters, drawing input from every sensor in the existing system, from wind patterns and thermal properties to patterns of occupation and circulation.
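The "what would happen if" mode of simulation can be sketched in a few lines. To be clear, this is a toy illustration of the idea, not a real planning tool: the arrival rate, saturation flow and queue formula below are all invented assumptions standing in for sensor-derived data.

```python
# Hedged sketch: a toy "what-if" query of the kind described above.
# All numbers and names are illustrative assumptions, not real city data.

def queue_growth(arrivals_per_min: float, green_fraction: float,
                 saturation_flow: float = 30.0) -> float:
    """Vehicles added to the queue per minute at one intersection approach.

    Capacity is the saturation flow scaled by the share of each signal
    cycle that is green; arrivals above capacity accumulate as queue.
    """
    capacity = saturation_flow * green_fraction
    return max(0.0, arrivals_per_min - capacity)

# "Sensor" input: observed arrivals during the morning peak (assumed).
observed_arrivals = 22.0  # vehicles per minute

current = queue_growth(observed_arrivals, green_fraction=0.45)
proposed = queue_growth(observed_arrivals, green_fraction=0.60)

print(f"current timing:  queue grows {current:.1f} veh/min")
print(f"proposed timing: queue grows {proposed:.1f} veh/min")
```

The point is not the arithmetic but the shape of the question: the observed condition comes from the smart infrastructure, and the modified regulation or traffic pattern is a parameter anyone can change and re-run.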
Smart city infrastructure can augment the ability of managers, planners, designers and engineers to define and implement a fundamentally better next generation of buildings, cities, regions — right?
Maybe. For that to be a serious proposition, it’s going to have to be normal for planners and designers not only to collaborate productively with engineers, including the engineers who build software, but to do so with the full and competent participation of the only people they mistrust more than each other … customers.
As far as Autodesk is concerned, artists and architects and structural engineers are the end-users of its products. This is false. The actual end-users of computer-aided design software are the people who live and work in the conditions created by the people who use those tools: the politicians and businesspeople who make decisions based on the charts and predictions and renderings, and, moreover, the citizens of the smart cities governed and remade by them.
A city is not a BMW. You can’t drive it without knowing how it works. All the “open data” in the world won’t amount to transparency in government until the data is accessible in usable form, and the software tools themselves can be examined and modified at will by the real end-users. Just as citizens have rights and responsibilities in matters of physical infrastructure, they have the same rights and responsibilities in regard to digital infrastructure, including information delivery. To be a citizen of a digital city requires understanding what the databases do and don’t contain, and what they could contain, and how the software used to process that data and drive design decisions does, doesn’t, and might yet perform. To be a citizen of a new soft city means to be a citizen of software itself. Such software must therefore never be a “black box.”
A black box is a device whose inner workings cannot be seen or manipulated. It cannot be serviced or modified; the only tests are for inputs and outputs to conform to requirements. This is typical of commercial software, whose computer code remains the private property of its publisher, even after you pay for it. For software oriented primarily to standardized trades like 2D drafting, video editing or accounting, the black box framework makes sense. But the smart city requires “white box” technologies whose workings are accessible: you can see beyond the inputs and outputs to the underlying logic. This does not mean that everyone needs to be the equivalent of a professional software engineer to be enfranchised as a “new soft citizen.” It does imply, though, that the software and hardware infrastructure of digital public space must be designed and built with specific kinds of legibility and flexibility as basic design requirements.
The advent of geometric perspective rendering in the 15th century was an initially esoteric technical advance, but it provided clients with a sense of what it would be like inside a proposed building. Long before computers, technical drawings used a combination of simultaneous orthographic (front, side, top) and auxiliary perspective views of a 3D object to be fabricated. In the digital era, we have developed computer-aided design software in which not only graphic standards but materials performance and professional workflows can be embedded. To the extent that these standards are reliably quantifiable, stable and relevant, building them into the tools makes sense.
The new soft city, however, is a long way from standardization (or standardizability) in its operations and structure. The full implementation of integrated sensors, analysis, operations, planning and design embedded in a physical environment may be inevitable, but for now it is a world of experiments.
Scripting the Environment
However you feel about the idea of a city as a “living laboratory,” there are parallel domains in which such experiments are common. New soft citizens can learn from artists and designers who are experimenting on as well as with digital tools. Multimedia performance artists manipulate video, audio and 3D in real time with Max and PureData; graphic designers redefine information graphics in Processing; architectural software geeks build new plugins to extend or alter the functionalities of conventional design software from Rhino to Revit.
Max, for instance, was originally designed in the 1980s to enable musicians to build logical compositions and virtual instruments on computers without actually having to write code. In the years since, it has added graphics capabilities for multimedia performance and general use as a software development platform. New media artist Luke DuBois has worked with Max and Jitter, the graphics suite he helped create for Max, to use data for creative applications. It could just as easily be used to create visualization and analysis tools to help policy experts, governments and advocacy organizations. Max now serves a diverse community of autodidacts, who want and need to provide themselves with new multimedia software, without necessarily becoming programmers per se: interaction designers (for game and exhibit and architectural use cases), performers, installation artists and musicians. Its documentation, structure and interaction design are all optimized for the support of bootstrapping individuals and small groups, working at least initially with off-the-shelf computers and the internet.
Imagine yourself in a world in which there’s no sharp distinction between this kind of highly abstracted form of computer programming, the definition of performance requirements, and the use of software to support design by and for human beings. Don’t laugh — user scripting on this level could soon be mainstream in planning and urban design applications, moving well beyond visual effects into the pre-testing of regulatory frameworks. Rather than designing the shape, you will design the logic whereby the software constructs a thousand shapes you never would have conceived. In experienced hands, such a digital design platform can support a new level of clarity in elaborating program requirements or revealing forms latent in new functional needs and technical capabilities, unconstrained by habit or the atavisms of pre-smart buildings and cities.
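A minimal sketch of what "designing the logic, not the shape" means in code: instead of drawing one massing, the designer writes a rule that enumerates every massing satisfying the program, and the software constructs the shapes. The site dimensions, floor height, zoning envelope and area target below are all invented for illustration.

```python
# Hedged sketch of rule-based generation: a simple rule set constructs
# many tower massings, then filters them against a program. Every
# parameter here (site, heights, area target) is an assumed placeholder.
from itertools import product

SITE_W, SITE_D = 40, 30    # buildable footprint, metres (assumed)
TARGET_AREA = 12_000       # required gross floor area, m^2 (assumed)
MAX_HEIGHT = 60            # zoning envelope, metres (assumed)

def variants():
    """Enumerate massings from one rule: footprint grid x floor count."""
    for w, d, floors in product(range(15, SITE_W + 1, 5),
                                range(15, SITE_D + 1, 5),
                                range(3, 21)):
        height = floors * 3.5          # assumed floor-to-floor height
        area = w * d * floors
        if height <= MAX_HEIGHT and area >= TARGET_AREA:
            yield {"w": w, "d": d, "floors": floors,
                   "height": height, "area": area}

# The "designer" never draws these shapes; the rule set constructs them.
options = sorted(variants(), key=lambda v: v["area"])
print(len(options), "massings satisfy the program")
print("leanest:", options[0])
```

Change the rule, and a different thousand shapes appear; the designed artifact is the generator, not any one of its outputs.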
This order of meta-design may be profoundly attractive to engineers, as it shifts the emphasis in defining the form of a building to something more like planning, in which the protagonist designs and tests rule sets through mathematical simulation against desired outcomes — as opposed to the current practice of designing a building and testing a proposed final form against benchmarks (think LEED analysis software), or simply foisting the design on the urban fabric and leaving it to posterity to adapt or replace (think New York Penn Station).
The key here is understanding planning as a process of designing rule sets driven by explicit goals and values, in other words a kind of algorithmic or parametric composition, and understanding design as a playing through of those rule sets in the context of a living human settlement in a context of its own, the physical realities of built and natural systems. The languages of planning, design, art and engineering may still be foreign to one another, but their media have already converged.
It sounds like science fiction, but so did notebook computers in 1970. In fact, resource management games like SimCity and Civilization work like this: algorithms “play out” the effects of resource allocation by players over time. Adding a meta-level to the game, in which players experiment with scenarios on smart models of real cities, using real data and state-of-the-art predictive mathematical model design, is perfectly feasible. The necessary computers and networks are already in place. Imagining students and social leaders getting collectively smart enough to use them might not be that crazy.
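The resource-allocation loop such games run can be shown in miniature. Everything below is an invented toy model, with coefficients calibrated to nothing: it only demonstrates how a rule set "plays out" one budget choice over simulated years.

```python
# Hedged sketch of a SimCity-style loop. The dynamics and coefficients
# are invented for illustration, not drawn from any real predictive model.

def play_out(transit_share: float, years: int = 10):
    """Simulate population and congestion under a fixed budget split.

    transit_share: fraction of the annual budget spent on transit
    (the remainder goes to roads). Returns the final state.
    """
    population, congestion = 100_000, 0.50
    for _ in range(years):
        # Toy dynamics: transit spending damps congestion; lower
        # congestion attracts modestly faster population growth.
        congestion += 0.04 * (population / 100_000) - 0.08 * transit_share
        congestion = min(max(congestion, 0.0), 1.0)
        population = int(population * (1.02 - 0.03 * congestion))
    return population, round(congestion, 2)

for share in (0.2, 0.5, 0.8):
    pop, cong = play_out(share)
    print(f"transit share {share:.0%}: pop {pop:,}, congestion {cong}")
```

The meta-level described above would replace these made-up coefficients with models fitted to real urban data, while keeping the same play-it-out structure.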
This potential can be seen in its most advanced form in performance-driven augmented design software like Dassault CATIA or the Gehry Technologies variant, Digital Project. Rather than using computers to accelerate the elaboration of design details, renderings, distribution and use, we might just as well proceed by defining parameters explicitly and carefully, leaving it to the software to derive alternative building layouts, configurations of public space and infrastructure. This is the kind of thing Martin Riese advocates as “engineering-driven design tools” and John Frazer calls “active software.” 8 If that software already knows what kinds of structures are required by particular sizes and shapes of space, and you can tell it enough about how many people need to do what in there, how the sun and seasons will need to be mitigated, it can submit formal and technical solutions for evaluation.
This approach emphasizes the quality of the analysis behind the design of the parameters, and the quality and relevance of parameters to the eventual built form. It’s critically important that such tools make their assumptions and methods both obvious and malleable. If you can’t tweak the priority level of a parameter in the determination of ultimate form, or run variations, it’s a safe bet that most design outcomes will be, at best, inappropriate to real-world programs. If, on the other hand, such tools provide the right kinds of explicitness and flexibility, they can make it possible to undertake new levels of innovation, by offering logical but unexpected configurations that directly answer novel requirements without preconception, and new levels of performance optimization early in the design process.
The Beijing National Aquatics Center, nicknamed “the Water Cube,” is one particularly encouraging precedent. The famous bubble surface is actually a spaceframe — the support structure of the building — developed for the combined value of its imagery and its load-bearing capacity. Arup’s Tristram Carfrae, one of the lead engineers, characterized the workflow as “highly collaborative and proactive to the point that the Water Cube does not have a single ‘author.’” 9 The team of Australian and Chinese architects and engineers developed a simultaneously symbolic and technical solution:
The geometry was prescribed and adjusted initially using spreadsheets and Microstation, later using a bespoke piece of software. But the sizes of the structural elements and interconnecting nodes was then decided by an active system made for this project. This iterative system determined the structure based on material properties, input geometry and applied loads (gravity, wind, snow and seismic) and member strength formulae contained in the Chinese codes for structural design. The output from this process was in the form of calculations that could be directly submitted to the Chinese authorities for approval and a 3D CAD model that had all the geometric information necessary to build the actual structure. … Most design outcomes were considered simultaneously by a team of 20 or so people who communicated intensely for a four-week period until the solution crystalised, fully formed, from the melting pot of ideas and desires. 10
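The iterative sizing loop that passage describes can be caricatured in a few lines: propose trial member sizes, check each against a strength rule, enlarge whatever fails, and repeat until everything passes. To be clear, this is a drastic simplification for illustration only; the loads, steel grade and "strength formula" below are placeholders, not the Chinese structural code or the bespoke system used on the actual project.

```python
# Hedged, radically simplified sketch of an iterative member-sizing loop.
# All loads, grades and formulae are invented stand-ins.

members = [
    # (name, axial load in kN) - illustrative numbers only
    ("node-chord A", 400.0),
    ("node-chord B", 950.0),
    ("diagonal C", 1800.0),
]

YIELD = 235.0   # N/mm^2, a common steel grade (assumption)
SAFETY = 1.5    # stand-in for code partial factors

def required_area(load_kn: float) -> float:
    """Minimum cross-section (mm^2) from a toy strength formula."""
    return load_kn * 1000 * SAFETY / YIELD

def size_members(members, step=100.0):
    sizes = {name: 500.0 for name, _ in members}   # trial area, mm^2
    converged = False
    while not converged:
        converged = True
        for name, load in members:
            if sizes[name] < required_area(load):
                sizes[name] += step                # enlarge and re-check
                converged = False
    return sizes

for name, area in size_members(members).items():
    print(f"{name}: {area:.0f} mm^2")
```

The real system iterated over thousands of members against gravity, wind, snow and seismic load cases at once, but the logic is the same: the engineers designed the checking rules, and the software converged on the structure.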
That was 10 years ago. Now, the free Grasshopper 3D generative modeling extension for Rhino can do this kind of work on a standard laptop. 11 Grasshopper’s visual programming interface works like Max: figure out what you want it to do (build the logic diagram) and it will write and run the code for you. For students, that’s less than $200 in software. Software extensions that predict the technical performance characteristics of CAD designs range from LEED plugins for Sketchup to visualization and simulation suites for AutoCAD, and many are available to students for free.
As the need for high-performance buildings grows, working relationships among innovative architects, engineers and software developers will continue to evolve rapidly, along with the collaborative software they use.
Open Data Infrastructures
Todd Park, Chief Technology Officer of the United States, has advanced an open data agenda shaped by both a public service perspective and private sector experience. 11 Making government data available online in machine-readable form increases the effective value of public information, enabling tech entrepreneurs to transform raw data into useful services, which in turn creates a market that spurs economic growth in the technology sector. But that is only part of the story. The best reason for public agencies at every level of government to open their statistical and geographic information resources is the potential for those resources to be used directly in smart design and planning software that serves the full spectrum of users: experts, representatives and citizens.
These categories of end-users have distinct needs in terms of the particular kind of “open” they need their data to be. For such data to be effectively public — rather than simply a free service to large institutional and commercial entities — we need a new generation of visualization tools that are easily operated by a concerned novice, and that provide access to their underlying logic so that advanced users can assess functions and experiment with alternatives. We need new media of public discourse that help users aggregate, visualize and analyze data in the same digital space as open planning and design processes, which will fuel a virtuous cycle of public awareness and engagement, leading to better-informed debate. The “sunshining” of the underlying data will improve its kind and quality over time.
To the extent that a piece of software acts as the medium of a public mandate or functions as a public space, it is a public asset. Its form and functions are therefore matters of public interest and responsibility. This will ultimately require an approach geared not only to supplying dot-coms with raw data, but also making sure that dot-gov, dot-org, and dot-edu needs are provided for and their constituencies fully engaged. It’s one thing to provide machine-readable data in forms that can be exploited by tech businesses, something else entirely to provide effective access to the data, to inform public discourse about all the different mandates the data is actually collected to serve, from quality assurance in federal-level services to the definition of goals and strategies for research, education, and urban- or district-scale development. Who will design and build the tools citizens and advocacy groups need for open data to be open to them too?
A fully functioning open data ecosystem must include tools that make sense of the data in forms that can be understood, debated and adapted to new uses in public discourse and future-making at every level, and actively support a culture of new soft citizenship. This is not a simple matter of open data but rather a complex one of open information media, through which the whole body politic can augment itself over time. The potential for socially constructive applications and practices is just now appearing on the horizon. The Ford Foundation has been convening symposia around these issues, such as Change By Design. Likewise, the Personal Democracy Forum, the White House-sponsored Energy Datapalooza, and the Council on Foreign Relations are all working to make sense of the stakes, and the potential for a new generation of what I call “deeply social media.”
Public Tools For The Public Realm
To varying degrees, commercial software companies already engage their clientele in improving their products over time, through forums, conventions, social media initiatives, Application Programming Interfaces (APIs), conventional surveys, user testing, and so on. But in any situation where there are real human stakes (workplace design, for instance) and especially in the public realm, the success — and legitimacy — of the design process depends on engaging ultimate end-users (a.k.a. citizens) in defining parameters and outcomes. An essential component of public space is the public-ness of the definition of its purpose, meaning and capacities. In a truly smart city, the design process will adopt the methodologies of the open-source movement.
Open-source urban design gives professionals access to the raw data and material they need to answer the important questions: how does this thing or place actually work, and how can it really work better? Who will ultimately be stuck with the outcome? Who will buy, not buy, sell, quit, or move in or away down the road? Further, a new soft space is emerging in which it is no longer necessary to isolate design from the general flow of public discourse: a personal computer can now be used to address a design improvement or alternative to a virtual model of the present city. The future of civic engagement may involve citizens sending code to the planning commission, as previous generations sent irate letters to the editor or city council.
Michael Kwartler, founder and director of the Environmental Simulation Center in New York City, has been working in this area since the days when graphics software was less powerful and a lot more had to be done from scratch. One current project is assisting a developer in meeting the Visual Simulation Ordinance passed in 2009 by the city of Glen Cove, on Long Island. Among the ordinance’s requirements for proposed developments above a certain size:
REAL-TIME ANIMATION — An immersive three-dimensional digital model of a place or environment which is dimensionally verifiable. It supports freedom of movement by the viewer by rendering the flow of images as the viewer moves freely through the virtual environment of the three-dimensional digital model. This permits a viewer to “walk through” a three-dimensional model at eye-level, look around and choose their own path or location to view a particular development action. All verifiable real-time animations must document the sources used to create the 3D model of existing and proposed conditions.
The ESC itself helped write this ordinance, at the behest of Glen Cove’s mayor. Though it refers to a “simulation,” there is no predictive mathematical model at work, only an interactive perspective rendering of a virtual model of the proposed structures, as they will appear in context. For a true simulation, consider the ESC’s Adaptive Reuse Study for lower Manhattan. The graphic is rendered in aerial perspective from an Oracle GIS database, containing floor-by-floor information about each building’s size, shape, age and vacancy rate. A user can set parameters to see how much of how many buildings might be suitable for residential rezoning, to help adapt the district to the fact that much of the new soft Wall Street is relocating to the internet.
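The parameter-driven query behind such a study can be sketched as a simple filter over a building table. The records, field names and thresholds below are invented for illustration; they are not ESC's actual database or schema.

```python
# Hedged sketch of a parameter-driven adaptive-reuse query.
# The building records and field names are invented placeholders.

buildings = [
    {"id": "B1", "floors": 40, "floor_area": 1500, "vacancy": 0.45, "built": 1931},
    {"id": "B2", "floors": 12, "floor_area": 900,  "vacancy": 0.10, "built": 1988},
    {"id": "B3", "floors": 25, "floor_area": 1200, "vacancy": 0.60, "built": 1927},
]

def reuse_candidates(buildings, min_vacancy=0.30, max_floor_area=1600):
    """Estimate floors plausibly convertible to housing, under
    user-set thresholds for vacancy and floor-plate size."""
    out = []
    for b in buildings:
        if b["vacancy"] >= min_vacancy and b["floor_area"] <= max_floor_area:
            vacant_floors = round(b["floors"] * b["vacancy"])
            out.append((b["id"], vacant_floors))
    return out

print(reuse_candidates(buildings))
```

What makes the real tool civic rather than merely technical is that the thresholds are exposed to the user: a community board member can move the sliders and see the district's possibilities change.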
John Frazer, who has been experimenting with computation and architecture at the Architectural Association in London for over 40 years, predicts further exponential increase and acceleration of the power of computer hardware and networks to support these developments. But I believe the processing and carrying capacities are already way ahead of our collective ability to richly exploit them. That “collective ability” — whether you think of it as cultural capital, social infrastructure or collective wisdom — will have to be cultivated over time, rather than designed and constructed. As citizens work within soft city spaces whose functions embed learning resources in their representation of the built city, they will find opportunities to collaborate where we now only find places to “vote,” “rate” and “comment.”
I know, it sounds Utopian. Dreamy. I would never have believed it either, without the experience of actually building a working prototype. Betaville is an open-source massively multiplayer environment for real cities, in which ideas for new works of public art, architecture, urban design and development can be shared, discussed, tweaked and brought to maturity in context, and with the kind of broad participation people take for granted in open-source software development. The sketch can be anything from a massing model or rough outline to a reasonably detailed presentation of exterior and interior spaces, floor by floor, with background information embedded or linked to external web resources, for potential stakeholders to comment on via text chat. The world is built with the same underlying geometry as standard GIS mapping tools. The model can actually be used as a GIS data browser, by linking online databases directly to objects, or as an aggregator of public information through external web links. Over time, multiple iterations of proposals for additions or changes to a plaza, block or district can be offered, reflecting the input of participants.
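The data-browser idea above comes down to a simple structure: a proposal is a georeferenced object that carries links out to external records alongside its discussion thread. The following is a sketch under assumed names, not Betaville’s actual code (which is a Java application); the `Proposal` class, its fields and the example URLs are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    """A sketch proposal anchored in the shared city model."""
    title: str
    lat: float               # WGS84 coordinates, same datum as standard GIS tools
    lon: float
    model_url: str           # link to the 3D massing or detailed model file
    data_links: dict = field(default_factory=dict)  # label -> external GIS/web record
    comments: list = field(default_factory=list)    # (author, text) chat entries

    def attach(self, label, url):
        """Link an online database record or web resource to this object."""
        self.data_links[label] = url

    def comment(self, author, text):
        """Append a text-chat comment from a participant."""
        self.comments.append((author, text))
```

Because each object holds outbound links rather than copies of the data, the city model can serve as a browser over whatever public databases the links point to, and successive iterations of a proposal simply accumulate comments and links of their own.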
For a planner or a developer, this kind of open ideation platform offers new opportunities to get and give value in the formulation of requirements and the development of consensus early and often. For an artist or designer, it is becoming practical to engage user and neighbor groups much more deeply and effectively: any family or community with a computer and internet access can be “in the game,” in a virtual world whose perceptual range offers a God’s-eye-view of regional mapping tools as well as an eyes-on-the-street perspective. The openness of the environment can actually provide a medium for more adventurous concepts than conventional competitions or “living lab” initiatives, in which teams of researchers and officials experiment on/in/with people, homes and communities on the tacit understanding that nobody will do or say anything too crazy to the customers. 12 The legitimacy of this approach relies on a clear distinction in practice between engaging citizens as participants and treating them or their environments as experimental subjects.
A virtual urban ideation environment can provide for more radically innovative collaboration, precisely because it does not call for urban vivisection, i.e. trial-and-error experiments with the city itself, over broader constituencies and longer timelines.
Hello, New Soft World!
You already live and work in a new soft city, or its equivalent at other scales. Expect (demand!) new communication and design media — software tools to support the development of your own and your colleagues’ and neighbors’ collective analytical and future-making skills.
The necessary infrastructure is either already in place, or can be assembled from available technologies: public data, public-interest social networks, real-time 3D planning tools, and distributed collaboration software platforms. But the media and informational assets of the digital public domain are not merely infrastructure. Like a train station or a public square, they are also works of public art that express shared values and aspirations. It makes no sense to provide public access to data whose sources and parameters are effectively secret, or to provide simulation, planning and design tools whose logic is off-limits for critical evaluation or reconfiguration.
If this vision sounds ambitious, ask yourself if the words “community” and “city” were ever meant to denote simple or static worlds. If you are reading this as a professional, would you rather work in a world where you have more or less access to the people who will be most affected by your efforts? Would you rather have them more or less aware and skilled? If you’re reading this as a citizen, would you rather live in a world where you have more or less access to the information and processes that shape it? Would you rather be more or less sophisticated about the ways and means and culture of the professions that inform the political decisions shaping your environment, and its future?
The technology doesn’t care, but you should. Go ahead, it’s all around you. Open it up.
06.28.2013 at 16:06
it's odd that the author brings in a rather dubious and failed example of architecture and planning - the water cube (in addition to most of the structures built for the beijing olympics) - it's been only partially repurposed, and is increasingly showing its rather young age.
also - these "soft system" utopias tend to gloss over issues such as politics, socio-economics, demographics, and property ownership - I predict these sorts of worlds built on "consensus" would ultimately come under attack by those who will be increasingly disenfranchised by this so-called "collaborative" process.
It's also nice to say that all this information shouldn't be kept secret and used against the citizenry, but we all know that this won't happen.
06.28.2013 at 16:29
Also - the benefits of living in an urban environment are the social discrepant events that propel society forward - these "collisions" between disparate power structures and groups that shape our cultures and social innovations - this proposal effectively wants to eradicate this necessarily messy conflict and "negate authorship" - which is extremely alarming - because then it becomes impossible to pinpoint who is really behind the green curtain.