Databodies in Codespace

As the bioengineering of people and cities converges, where do we locate the public sphere?

Broadway and Prince Street, New York City
[Maciek Lulko]

In late 2016, on a conference stage in Palm Springs, California, decision scientist Hannah Bayer made a bold declaration: “We’re going to measure everything we can possibly measure about 10,000 people over the course of the next 20 years or more. We’re going to sequence their genomes; track everywhere they go, everything they eat, everything they buy, everyone they interact with, every time they exercise.” 1

“We” is the Human Project, born as a collaboration between two research labs at New York University — the Institute for the Interdisciplinary Study of Decision Making (a world leader in neuroeconomics) and the Center for Urban Science and Progress (ditto for urban informatics) — with startup funding from the Kavli Foundation. As you might suspect from those origins, the partners are less interested in defining the essential qualities of our species than in understanding how those qualities are operationalized. “Human,” here, is an acronym: Human Understanding through Measurement and Analytics.

HUMAN, here, is an acronym: Human Understanding through Measurement and Analytics.

Any Quantified Self enthusiasts in that audience who might have relished the chance to be so intimately measured were out of luck. As the Human Project is a scientific study, it needs a representative sample. Researchers started by crunching datasets to identify 100 “micro-neighborhoods” that embody New York City’s diversity, and next they will contact randomly targeted households in those areas, inviting people to join the study, “not just as volunteers, but as representatives of their communities.” With promises of payment and self-enlightenment, recruiters will try to turn 10,000 human subjects into HUMANs. 2

Let’s say your family volunteers. To start, you might submit blood, saliva, and stool samples, so that researchers can sequence your genome and microbiome. You could undergo IQ, mental health, personality, and memory testing; and agree to a schedule of regular physical exams, where the researchers collect more biological samples so they can track epigenetic changes. They might compile your education and employment histories, and conduct “socio-political” assessments of your voting, religious, and philanthropic activity. (As the project leaders did not respond to interview requests, I pieced together this speculative protocol from their promotional materials, academic papers, and public statements.) 3

Krocky Meshkin artwork, Headless Sightings NYC - Prospect Park
[Krocky Meshkin]

If you don’t have a smartphone, they may give you one, so they can track your location, activity, and sleep; monitor your socialization and communication behaviors; and push “gamified” tests assessing your cognitive condition and well-being. They may “instrument” your home with sensors to detect environmental conditions and track the locations of family members, so they can see who’s interacting with whom, when, and where (those without a home are presumably ineligible). You may be asked to keep a food diary and wear a silicone wristband to monitor your exposure to chemicals. Audits of your tax and financial records could reveal your socioeconomic position and consumer behavior, and could be cross-referenced with your location data, to make sure you were shopping when and where you said you were.

The researchers assert that, for the first time ever, they are able to quantify the human condition.

With your permission, researchers could access new city and state medical records databases, and they could tap public records of your interaction with schools, courts, police, and government assistance programs. They could assess your neighborhood: how safe is it, how noisy is it, how many trees are there? Finally, they could pull city data — some of it compiled and filtered by the Center for Urban Science and Progress — to monitor air quality, toxins, school ratings, crime, water and energy use, and other environmental factors.

What does all this measuring add up to? The researchers assert, “For the first time ever we are now able to quantify the human condition.” By investigating “the feedback mechanisms between biology, behavior, and our environment in the bio-behavioral complex,” they aim to comprehend “all of the factors that make humans … human.” 4 Of course, that requires a huge leap of faith. As Steven Koonin, the theoretical physicist who founded the Center for Urban Science and Progress, observes: “What did Galileo think he was going to see when he turned his telescope on the heavens? He didn’t know.” 5

People walking up stairs
[Fumigraphik]

Now the telescope is turned inward, on the human body in the urban environment. This terrestrial cosmos of data will merge investigations that have been siloed: neuroscience, psychology, sociology, biology, biochemistry, nutrition, epidemiology, economics, data science, urban science. A promotional video boasts that the Human Project has brought together technologists, lawyers, ethicists, and “anthropologists, even!” to ask big questions. Even anthropologists! (It’s notable that several relevant fields — social work, geography, and most of the humanities — don’t make the list.) 6

Now the telescope is turned inward, on the human body in the urban environment.

This is the promise of big data and artificial intelligence. With a sufficiently large dataset we can find meaning even without a theoretical framework or scientific method. As Wired-editor-turned-drone-entrepreneur Chris Anderson famously declared, “Petabytes allow us to say: ‘Correlation is enough.’ We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.” 7 Human Project director Paul Glimcher says that collecting data on “everything we can think of” — at least everything related to biology, behavior, and environment — will allow researchers to model every imaginable “phenotype,” or set of observable characteristics, both for people and the cities they inhabit. 8

Medical researchers have long harbored similar ambitions. The Framingham Heart Study (which began in 1948) and Seven Countries Study (1956) investigated the impact of diet, weight, exercise, genetics, and smoking on cardiovascular health. The Nurses’ Health Study (1976) collected biospecimens and questionnaires from hundreds of thousands of nurses, to better understand how nutrition, weight, physical activity, hormones, alcohol, and smoking affect disease. The English Longitudinal Study of Ageing (2002) periodically interviewed and examined participants over the age of 50, looking for correlations among economic position, physical health, disability, cognition, mental health, retirement status, household structure, social networks, civic and cultural participation, and life expectancy. Some of these studies also considered environmental aspects of public health, although they didn’t have access to today’s rich geospatial data.

Krocky Meshkin artwork, Never Let Me Go
[Krocky Meshkin]

Fast-forward to the age of smartphones and neural nets. Apple recently announced that its Health app will allow users to access personal medical records. The company is also developing apps to aid studies and even sponsoring clinical trials. 9 Seemingly everyone is trying to break into the risky but lucrative health tech market, which offers ample opportunities for data harvesting. And many medical providers are happy to cooperate. A few years ago, Google’s AI subsidiary DeepMind and London’s Royal Free Hospital partnered to develop new clinical technologies, but they didn’t adequately inform patients about the use of their data, and were rebuked by the British government. 10 More recently, Facebook has approached hospitals about matching anonymized patient data with social media profiles to find patterns that might inform medical treatment. Plans were “paused” last month, as the Cambridge Analytica scandal came to light. 11 When I brought up this trend in a recent lecture, one of the attendees, a health informatics researcher at a Philadelphia hospital, emphatically declared, “All of us want to work with Google.” It’s easy to see why. More data can lead to better care, and the potential benefits of so-called “precision medicine” are enormous.

Seemingly everyone is trying to break into the health tech market, which offers ample opportunities for data harvesting.

To its credit, the Human Project is advised by privacy and security experts and has announced strategies for keeping data safe. Recruiters use videos to secure consent from subjects (some as young as seven years old) who may not understand legalese, and the FAQs state that data will be anonymized, aggregated, and protected from subpoena. According to reports, the data will be compartmentalized so that researchers have access only to the particular slice (or “data mart”) relevant to a given study. These “heavily partitioned data silos” will reside in sealed zones at the project’s data center in Brooklyn: a monitored green zone with limited data; a yellow zone, accessible via thumbprint and ID card, where researchers consult the anonymized data marts; and a high-security red zone, where the “crown jewels” are held. 12 It seems fitting that researchers will have to offer up their own biometrics to access their subjects’ data.
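
What would such compartmentalization look like in software? Below is a minimal sketch, in Python, of zone-gated access to partitioned “data marts,” written only to make the reported green/yellow/red scheme concrete; the roles, field names, and access checks are my own illustrative assumptions, not the project’s actual architecture.

```python
from enum import Enum
from dataclasses import dataclass

class Zone(Enum):
    GREEN = 1    # limited, monitored data
    YELLOW = 2   # anonymized "data marts" for approved studies
    RED = 3      # identifiable "crown jewels," effectively off-limits

@dataclass(frozen=True)
class Researcher:
    name: str
    approved_marts: frozenset        # e.g., {"sleep", "mobility"}
    max_zone: Zone = Zone.YELLOW     # study researchers never reach RED

# Hypothetical partitioned store: each mart holds only one de-identified slice.
DATA_MARTS = {
    "sleep":    [{"subject": "S-0041", "hours": 6.2}],
    "mobility": [{"subject": "S-0041", "km_walked": 3.1}],
    "genome":   "RED ZONE ONLY",     # never exposed through this interface
}

def request_mart(researcher: Researcher, mart: str, zone: Zone):
    """Return a data slice only if the researcher's approvals cover it."""
    if zone.value > researcher.max_zone.value:
        raise PermissionError(f"{researcher.name} cannot enter the {zone.name} zone")
    if mart not in researcher.approved_marts:
        raise PermissionError(f"{researcher.name} is not approved for the '{mart}' mart")
    return DATA_MARTS[mart]

alice = Researcher("Alice", frozenset({"sleep"}))
print(request_mart(alice, "sleep", Zone.YELLOW))   # allowed: her approved slice
# request_mart(alice, "genome", Zone.RED)          # raises PermissionError
```

The design principle is that no single credential, the researcher’s included, ever returns more than the slice a given study was approved to see.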

Yet even if personal data are secure, methodological and ethical risks are exacerbated when university research programs are spun off into private companies. The Human Project is run through a partnership with Data Cubed, Inc., a health tech startup founded by Glimcher that aims to monetize the project tools (particularly the Phenome-X phenotyping platform) and ensure that “participants and the study benefit when for-profit companies use insights from [project] data for profitable, socially responsible work.” 13 Given the stakes here, that relationship needs close scrutiny.

What’s more, the blind faith that ubiquitous data collection will lead to “discoveries that benefit everyone” deserves skepticism. Large-scale empirical studies can reinforce health disparities, especially when demographic analyses are not grounded in specific hypotheses or theoretical frameworks. Ethicist Celia Fisher argues that studies like the Human Project need to clearly define “what class, race, and culture mean, taking into account how these definitions are continuously shaped and redefined by social and political forces,” and how certain groups have been marginalized, even pathologized, in medical discourse and practice. Researchers who draw conclusions based on observed correlations — untheorized and not historicized — run the risk, she says, of “attributing health problems to genetic or cultural dispositions in marginalized groups rather than to policies that sustain systemic political and institutional health inequities.” 14 A recent report by Kadija Ferryman and Mikaela Pitcan at the Data & Society Research Institute shows how biases in precision medicine could threaten lives. 15 And history offers many examples of ethical problems that arise when health data circulate beyond the context of their collection. 16

Headless man on sidewalk in New York City
[Krocky Meshkin]

We’ve seen such biases realized in other data-driven models, notably in law enforcement. Contemporary models of “actuarial justice” and “predictive policing” draw correlations between specific risk factors and the probability of future criminal action. Courts and police make decisions based on proprietary technologies with severe vulnerabilities: incomplete datasets, high error rates, demographic bias, opaque algorithms, and discrepancies in administration. 17 “Criminal justice management” software packages like Northpointe’s dramatically overestimate the likelihood of recidivism among black defendants. 18 Even the instruments used to collect data can misfire. Biometric technologies like facial recognition software and fingerprint and retina scanners can misread people of color, women, and disabled bodies. 19 As has always been the case, race and gender determine how “identities, rather than persons, interact with the public sphere.” 20

Race and gender determine how ‘identities, rather than persons, interact with the public sphere.’

These problems are compounded as datasets are combined. Palantir software now used by some local governments merges data from disparate city agencies and external organizations, enabling police to collate information about suspects, targets, and locations. 21 In New York, for example, Palantir worked with the Mayor’s Office of Data Analytics and the Office of Special Enforcement to develop a tablet application “that allows inspectors in the field to easily see everything that the City knows about a given location.” 22 Key analyses, even decisions about where to deploy resources, are automated, which means that “no human need ever look at the actual raw data.” 23 Biology, behavior, culture, history, and environment are thus reduced to dots on a map. End users don’t know which agencies supplied the underlying intelligence and how their interests might have shaped data collection. They can’t ask questions about how social and environmental categories are operationalized in the different data sets. They can’t determine whether the data reinscribe historical biases and injustices.
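
To see why that opacity matters, consider a toy merge of records from two invented agency feeds, keyed by address. Once the rows are collapsed into a single profile and the source column is dropped somewhere downstream, there is no longer any way to ask whose assumptions shaped a given field. The agencies, fields, and records below are hypothetical, sketched only to show how provenance disappears in aggregation; this is not Palantir’s software.

```python
from collections import defaultdict

# Invented per-agency records, each tagged with the feed it came from.
housing_dept = [{"address": "123 Main St", "violation": "heat outage", "source": "housing"}]
police_dept  = [{"address": "123 Main St", "incident": "noise complaint", "source": "police"}]

def merge_by_address(*agency_feeds, keep_provenance=True):
    """Collapse multi-agency records into one profile per address."""
    profiles = defaultdict(dict)
    for feed in agency_feeds:
        for record in feed:
            addr = record["address"]
            for key, value in record.items():
                if key == "address":
                    continue
                if key == "source":
                    if keep_provenance:          # downstream tools often drop this
                        profiles[addr].setdefault("sources", []).append(value)
                    continue
                profiles[addr][key] = value
    return dict(profiles)

print(merge_by_address(housing_dept, police_dept))
# fields still traceable to the housing and police feeds
print(merge_by_address(housing_dept, police_dept, keep_provenance=False))
# a flat profile: "everything the City knows" about the address, origins gone
```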

All of this is to say that past efforts to combine vast troves of personal and environmental data should make us wary of new initiatives. As Virginia Eubanks demonstrates in Automating Inequality, “Marginalized groups face higher levels of data collection when they access public benefits, walk through highly policed neighborhoods, enter the healthcare system, or cross national borders. That data acts to reinforce their marginality when it is used to target them for suspicion and extra scrutiny. Those groups seen as undeserving are singled out for punitive public policy and more intense surveillance, and the cycle begins again. It is a kind of collective red-flagging, a feedback loop of injustice.” 24

New York City financial district
[E. J. Peiker]

Environmental Epidemiology

While the neuroeconomists on Glimcher’s project gather data on everything “that makes humans … human,” their partners in urban informatics control a voluminous flow of information on what makes New York … New York. With special access to municipal data held by many offices and agencies, researchers at the Center for Urban Science and Progress have built “one of the most ambitious Geographic Information Systems ever aggregated: a block-by-block, moment-by-moment, searchable record of nearly every aspect of the New York City Landscape.” 25 In a video promoting the Human Project, every urban scene is overlaid with a bullseye, a calibration marker, or a cascade of 0’s and 1’s, signaling an aggressive intent to render the environment as data.

The Human Project researchers regard the urban habitat as something that can be rehabilitated or reengineered.

The partnership with CUSP may give the Human Project an advantage in the race to quantify health outcomes, but it is not the only such effort. The National Institutes of Health is building All of Us, a research cohort of one million volunteers with “the statistical power to detect associations between environment and/or biological exposures and a wide variety of health outcomes.” 26 The NIH receives data and research support from Verily Life Sciences, an Alphabet company that, in turn, runs Project Baseline, a partnership with Duke, Stanford, and Google that aims to recruit 10,000 volunteers to “share [their] personal health story” — as well as clinical, molecular, imaging, sensor, self-reported, behavioral, psychological, and environmental data — to help “map human health.” 27 Ferryman and Pitcan have diagrammed the complex topology of these projects in their Precision Medicine National Actor Map. Sidewalk Labs, another Alphabet company, recently announced Cityblock Health, which seeks to connect low-income urban residents with community-based health services, including clinics, coaches, tech tools, and “nudges” for self-care. 28 Again, the precise targeting of individual patients and neighborhoods depends on a vast dataset, including in this case Google’s urban data.

All of these initiatives see public health through the lens of geography. The Human Project even refers to its emerging databank as an “atlas.” Programs like Cityblock Health conceive the urban environment not just as a background source of “exposure” or risk, but as a habitat in which biology and behavior inform one another. The qualities of this habitat affect how people make choices about diet and exercise, and how bodies respond to stress or industrial hazards. What seems to set the Human Project apart is that its researchers regard that habitat not as a given, but as something that can be rehabilitated or reengineered. Once researchers have identified relations between the city or neighborhood and the “human condition,” they can tweak or transform the habitat through urban planning, design, and policy. Their insights can also guide “the construction of future cities.” 29 Individual phenotypes are mapped to urban phenotypes, databodies to codespaces.

Street intersection in New York City
[DeShaun Craddock]

Constantine Kontokosta, the head of CUSP’s Quantified Community project, is one of the most prominent advocates for this worldview. He wants to “instrument” neighborhoods with sensors and engage citizens in local data collection, so that the urban environment becomes a “test bed for new technologies”; “a real-world experimental site” for evaluating policy and business plans; a 3D model for analyzing “the economic effects of data-driven optimization of operations, resource flows, and quality-of-life indicators.” Machine-learning algorithms will find patterns among data from environmental sensors and residents’ smartphones in order to define each neighborhood’s “pulse,” to determine the community’s “normal” heartbeat. 30 Here, again, we see the resurgence of biomedical metaphors in urban planning.
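
What might it mean, computationally, to define a neighborhood’s “pulse”? One common approach, offered here as my own assumption rather than Kontokosta’s published method, is to learn an hourly baseline from time-stamped sensor readings and flag departures from it. The sketch below treats a simple hourly mean and standard deviation as the “normal” heartbeat.

```python
from collections import defaultdict
from statistics import mean, pstdev

def hourly_baseline(readings):
    """readings: (hour_of_day, value) pairs from, say, a noise sensor.
    Returns {hour: (mean, std)} -- a crude 'normal heartbeat' for the block."""
    by_hour = defaultdict(list)
    for hour, value in readings:
        by_hour[hour].append(value)
    return {h: (mean(v), pstdev(v)) for h, v in by_hour.items()}

def is_anomalous(baseline, hour, value, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from that hour's mean."""
    mu, sigma = baseline[hour]
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Toy example: decibel readings logged at 8am and 11pm over several days.
history = [(8, 62), (8, 64), (8, 63), (23, 45), (23, 47), (23, 44)]
baseline = hourly_baseline(history)
print(is_anomalous(baseline, 23, 78))   # True: a loud night is "off-pulse"
print(is_anomalous(baseline, 8, 63))    # False: an ordinary morning
```

Note that whatever such a model has not already seen becomes, by construction, “abnormal.”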

Meanwhile, a group of Human Project-affiliated researchers at Harvard and MIT are using computer vision to assign “Streetscores,” or measurements of perceived safety, to Google Street View images of particular neighborhoods. They then combine those metrics with demographic and economic data to determine how social and economic changes relate to changes in a neighborhood’s physical appearance — its phenotype. 31 This work builds on the PlacePulse project at the MIT Media Lab, which invites participants to vote on which of two paired Street View scenes appears “livelier,” “safer,” “wealthier,” “more boring,” “more depressing,” or “more beautiful.” In such endeavors, Aaron Shapiro argues, “computer-aided, data-mined correlations between visible features, geographic information, and social character of place are framed as objective, if ‘ambient,’ social facts.” 32 The algorithmicization of environmental metrics marks the rise of what Federico Caprotti and colleagues call a new “epidemiology of the urban.” 33 The new epidemiologists echo the “smart city” rhetoric I’ve critiqued often in these pages, but now the discourse is shaded toward the dual bioengineering of cities and inhabitants.
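
For a sense of how pairwise “which looks safer?” votes become a continuous score, here is a minimal sketch using an Elo-style rating update, one standard way of turning comparisons into a ranking. It is an illustration only: the published Streetscore work trains a computer-vision regressor on such crowdsourced labels, and nothing below reproduces its actual code, features, or parameters.

```python
import math

def elo_update(score_a, score_b, a_won, k=32):
    """Nudge two image 'safety' scores apart after one pairwise vote."""
    expected_a = 1 / (1 + math.pow(10, (score_b - score_a) / 400))
    outcome_a = 1.0 if a_won else 0.0
    score_a += k * (outcome_a - expected_a)
    score_b += k * ((1 - outcome_a) - (1 - expected_a))
    return score_a, score_b

# Every Street View image starts with the same provisional score;
# each vote ("image A looks safer than image B") updates the pair.
scores = {"img_A": 1500.0, "img_B": 1500.0, "img_C": 1500.0}
votes = [("img_A", "img_B", True),    # A judged safer than B
         ("img_C", "img_B", True),
         ("img_A", "img_C", False)]   # C judged safer than A

for a, b, a_won in votes:
    scores[a], scores[b] = elo_update(scores[a], scores[b], a_won)

# The resulting numbers play the role of a perceived-safety score that could
# then be correlated with visual features, census data, or economic change.
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```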

The new epidemiologists echo the familiar ‘smart city’ rhetoric, but now the discourse is shaded toward the dual bioengineering of cities and inhabitants.

Cities have long been regarded as biophysical bodies, with their own circulatory, respiratory, and nervous systems — and waste streams. In the mid-19th century, as industrialization transformed cities and spurred their growth, physicians were developing new theories of infectious disease (e.g., miasma, filth), complete with scientific models and maps that depicted cities as unhealthy. City planners and health officials joined forces to advocate for sanitation reform, zoning, new infrastructures, street improvements, and urban parks. 34 Healthy buildings and cities were associated with certain phenotypical expressions, although designers did not always agree on the ideal form. Frederick Law Olmsted’s parks, Daniel Burnham’s City Beautiful movement, Ebenezer Howard’s Garden Cities, 1920s zoning ordinances, Modernist social housing projects and sanatoria: all promised reform, yet produced distinct morphologies. 35

As the 20th century proceeded, epidemiologists focused on germs and the biological causes of disease, while modernist architects turned toward formal concerns and rational master plans. Public health and urban planning drifted apart until the 1960s, when the environmental justice and community health center movements brought them together again. Today, initiatives like the World Health Organization’s European Healthy Cities program and New York City’s Active Design Guidelines encourage the integration of health and planning. Now the focus is on designing cities that promote exercise and social cohesion, and that provide access to healthy food and quality housing. 36 Given the rise of artificial intelligence in both health and urban planning, we might imagine a Streetscore or “pulse” for healthy neighborhoods, which could be used to generate an algorithmic pattern language for urban design: every healthy neighborhood has one playground, two clinics, lots of fresh produce, and a bicycle path.

People reading on New York City subway
[Susan Jane Golding]

Where do quantified humans fit in this new planning regime? Consider the fact that China is preparing to use Citizen Scores to rate residents’ trustworthiness and determine their eligibility for mortgages, jobs, and dates; their right to freely travel abroad; and their access to high-speed internet and top schools. “It will forge a public opinion environment where keeping trust is glorious,” the Chinese government proclaims. 37 This is the worst case scenario: obedience gamified, as Rachel Botsman puts it. Humanity instrumentalized.

At least for now, most urbanists recognize that a city is more than a mere aggregation of spatial features that an AI has correlated with ‘wellness.’

Will the new data-driven urbanism — with its high-security data centers and black-boxed algorithms and proprietary software — usher in another era of top-down master planning in North America? Perhaps. But at least for now, most urbanists recognize that a city is more than a mere aggregation of spatial features that an AI has correlated with “wellness.” As Jane Jacobs argues, a healthy city is built on social inclusion and communication, and a shot of serendipity. Researchers affiliated with the Human Project are investigating Jacobsian questions like how economic changes affect housing and, in turn, residents’ social networks and health. 38 Others are asking how cities “encourage the free flow of information” and “how geography interacts with … knowledge” — you might say, how a city can be designed to provide the spatial conditions for a public sphere. 39 So in their rhetoric, at least, the project investigators recognize the political importance of involving communities in the research process and in the urban environments that may be reshaped by it.

Kontokosta says his Quantified Community initiatives focus on the neighborhood scale in order to “connect and engage local residents” not only in data collection, but also in “problem identification, data interpretation, and problem-solving.” 40 Locals assume the role of “participatory sensors,” using their own smartphones to collect data and helping build and install ambient sensing devices. They also act as ground-truthers who verify harvested data through direct observations and experiences. On a more fundamental level, Kontokosta says he wants community members involved as research designers who help project leaders understand areas of curiosity and concern. Locals can identify the pressing problems in their neighborhoods and the sources of data that can provide insight. CUSP aims to bring communities typically excluded from “smart city” discussions into the planning process. One might hope that this would lead to a long-term personal investment in neighborhoods and interest in local planning and politics.

Silhouettes in New York City
[E. J. Peiker]

Self-Datafication as Civic Duty

The Human Project study design envisions that participants will be motivated by payment and by the promise of insight into their own health and their families’ medical histories. Data are currency. 41 But there’s a civic vision — and a civic aesthetic — behind this work, too. As the researchers gear up to collect data, they have rebranded the website with stock photos representing “diversity” and urban vitality, washed in New York University’s signature violet. The new logo, which evokes a circular genome map, is rendered in watercolor, humanizing all the hard science. 42

Framing the project as a “public service” may help convince New Yorkers to share their most personal data. 43 Contributors are assured that they will be more than mere research subjects; they will also be “partners” in governing the study, responsible for vetting proposals from researchers who want to use the databank. 44 They’ll receive newsletters and updates on research discoveries that their data has made possible, and they’ll have access to visualization tools that allow them to filter and interpret their own data and aggregate data for the study population. Apparently, handing over bank statements and biometrics is a form of activism, too: “instead of giving [their] data for free, to corporations,” they can “take [it] back,” “bring [it] together as a community… to make a better world.” 45 Glimcher maintains that New Yorkers will see the potential to generate new knowledge, therapeutics, and urban policy and will understand “that this is a civic project of enormous importance.” 46

Offering oneself up as data, or as a data-collector, is often framed as an act of civic duty.

Offering oneself up as data, or as a data-collector, is often framed as an act of civic duty. Participation in U.S. census and government surveys, for instance, has historically been regarded as part of the “social contract”: citizens yield their personal information, and the government uses it for the public good. 47 In the 19th century, philanthropists, researchers, and activists garnered support for social and industrial reforms by generating an “avalanche of numbers.” 48 And in the early 20th century, as the social sciences popularized new sampling methods, a swarm of surveyors and pollsters began collecting data for other purposes. According to historian Sarah Igo, these modern researchers “billed their methods as democratically useful, instruments of national self-understanding rather than bureaucratic control.” Because they had to rely on voluntary participation, they manufactured consent by emphasizing “the virtues of contributing information for the good of the whole,” for the “public sphere.” Divulging one’s opinions and behaviors to Gallup or Roper pollsters was a means of democratic participation — an opportunity to make one’s voice heard. Pollsters, unlike newspaper editors and political commentators, were “of the people,” and they argued that their methods were “even more representative and inclusive than elections.” 49

Around the same time, A.C. Nielsen, which started off in manufacturing, marketing, and sales research, began acquiring and developing technology that allowed it to monitor people’s radio-listening (and, later, TV-watching and web-surfing) behaviors. Nielsen ratings drove advertising placement and programming decisions. Commercial broadcasters, meanwhile, began funding academic studies and incorporating social-scientific research into their operations, furthering the integration of academic and industry agendas. As Willard D. Rowland shows, the “image of certainty and irrefutability” cultivated by social scientists allowed them to “mesh neatly into the interaction of industrial, political, communications, and academic interests.” 50

New York City subway
[Christophe Leung]

Modern survey methods, Igo says, “helped to forge a mass public” and determined how that public saw itself in mediated representations. Surveys shaped beliefs about normalcy and nationality and individuality. 51 But like all methods of data-collection and analysis, those social surveys reflected and reinscribed biases. Consider the canonical Middletown Studies, sociological case studies conducted by Robert and Helen Lynd in the “typical” American city of Muncie, Indiana, in the 1920s and ’30s. Igo shows how the researchers were compelled to paint a picture of cultural wholeness and cohesion, and how they excised non-white, non-native, and non-Protestant Americans from their portrait of this “representative” community. 52

The computationally-engineered city produces the urban citizen by measuring her.

We can trace these histories forward to the cutting-edge work being conducted at the Institute for the Interdisciplinary Study of Decision Making. The researchers’ overarching goal, to link decision-making to social policy, is reflected in their motto: “from neurons to nations.” Yet the extraction of neurons will never fully describe the individual subject, let alone the nation in aggregate. Even the myriad data sources collated by the Human Project cannot capture “the human condition.”

As Hannah Arendt observes, the disclosure of who one is “can almost never be achieved as a willful purpose, as though one possessed and could dispose of this ‘who’ in the same manner he has and can dispose of his qualities.” Who one is, rather than what one is, is revealed to others through speech and action and physical identity. 53 Quantifying humans and habitats turns them into “whats”: into biometric entities and Streetscores. This ontological reduction inevitably leads to impoverished notions of city planning, citizenship, and civic action. Shapiro argues that because planning algorithms like Streetscore embed “indicators of deviance and normativity, worth and risk,” they promote “normative and essentialist … aesthetics.” 54 The computationally-engineered city produces the urban citizen by measuring her. Then, Caprotti argues, “you’re actually producing a subject for governance.” 55

Krocky Meshkin artwork, All Day I Dream About Status
[Krocky Meshkin]

When civic action is reduced to data provision, the citizen can perform her public duties from the privacy of a car or bedroom. If her convictions and preferences can be gleaned through an automated survey of her browser history, network analysis of her social media contacts, and sentiment analysis of her texts and emails, she needn’t even go to the trouble of answering a survey or filling out a ballot. Yet she has no idea how an artificially intelligent agent discerns “what” kind of subject she is, how it calculates her risk of heart attack or recidivism, or how those scores impact her insurance premiums and children’s school assignments. Likewise, the researchers who deploy that agent, like those now working with Palantir and Northpointe, have no need to look at the raw data, let alone develop hypotheses that might inform their methods of collection and analysis. In this emerging paradigm, neither subjects nor researchers are motivated, or equipped, to challenge the algorithmic agenda. Decision-making is the generation of patterns, a “pulse,” a “score” that will translate into policy or planning initiatives and social service provision. This is a vision of the city — society — as algorithmic assemblage.

All our bodies and environments are already data — both public and proprietary. So how can we marshal whatever remains of our public sphere to take up these critical issues?

And this is the world in which we now live. 56 All our bodies and environments are already data — both public and proprietary. 57 So how can we marshal whatever remains of our public sphere to take up these critical issues? How can we respond individually and collectively to the regime of quantitative rationalization? How might we avert its risks, even as we recognize its benefits? We can start by intervening in those venues where pattern recognition is translated to policy and planning. Wouldn’t it be better to use algorithms to identify areas and issues of concern, and then to investigate with more diverse, localized qualitative methods? After the scores are assigned and hotspots are plotted on a map, we could reverse-engineer those urban pulses, dissect the databodies, recontextualize and rehistoricize the datasets that brought them into being. To prepare for this work, the ethicists and social scientists — even anthropologists! — should call in the humanists at every stage of research: from the constitution of the study population; through the collection, analysis, and circulation of data; and finally as those datasets are marshaled to transform the world around us.

Projects like NYU’s and Alphabet’s and the NIH’s could yield tremendous improvements in public health. And even in their methodological and ethical limitations, they can teach us a few things about measuring a public and the spheres in which it is constituted. The methods by which publics and public spheres become visible — to one another and to the sensors that read them — reflect the interests and ideologies of their sponsors. At the same time, these databody projects remind us that public health is a critical precondition for, and should be a regular subject of debate within, the public sphere. 58 They signal that the liberal subject has a physical body, one whose health and illness, pleasure and pain, affect and cognition, race and gender, class and sexual orientation, affect its ability to safely navigate and make itself seen and heard amidst the myriad publics that emerge across our digital and physical worlds.

Author’s Note

Thank you to Jessa Lingel, Ezekiel Dixon-Román, Joe Turow, Aaron Shapiro, and their colleagues at the University of Pennsylvania, and to Patrik Svensson, Johanna Drucker, Snowden Becker, Dana Cuff, and their colleagues at UCLA, who offered invaluable feedback on earlier versions of this article. I’m also grateful to Matthew Shen Goodman and Kenneth Tay for offering helpful comments on earlier drafts of this work.

Notes
  1. Hannah Bayer, “What If We Could Quantify the Entirety of the Human Condition?,” TEDMED, Palm Springs, California, November 30 – December 2, 2016.
  2. See The Human Project, “Frequently Asked Questions,” for a broad overview of the study methods. In September 2017, the project posted a job ad seeking “18 energetic, passionate, and outgoing” field recruiters who would “conduct in-person household surveys and act as brand ambassadors.”
  3. The speculative study protocol in these three paragraphs is based on reports in Okan Azmak, Hannah Bayer, Andrew Caplin, Miyoung Chun, Paul Glimcher, Steven Koonin, and Aristides Patrinos, “Using Big Data to Understand the Human Condition: The Kavli HUMAN Project,” Big Data 3:3 (2015), http://doi.org/8bt; Dennis Ausiello and Scott Lipnick, “Real-Time Assessment of Wellness and Disease in Daily Life,” Big Data 3:3 (2015), 203-08, http://doi.org/gb5fkj; Leslie Mertz, “The Case for Big Data,” IEEE Pulse (October 3, 2016); Aviva Rutkin, “Tracking the Health of 10,000 New Yorkers,” New Scientist 228:3044 (October 24, 2015), 20-21; and the FAQs on the Human Project website. The actual study protocol may vary.
  4. Kavli Foundation, “The Human Project.” Emphasis mine.
  5. Quoted in Rutkin. Koonin was comparing the ambitions of the Human Project to the Sloan Digital Sky Survey, which “has transformed galactic-level cosmology from a small data science to a big data science and has catalyzed a renaissance in astronomy.” See Azmak, et al.
  6. The Human Project, “The HUMAN Project (Long Version),” Vimeo.
  7. Chris Anderson, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete,” Wired (June 23, 2008). See also Rob Kitchin, “Big Data, New Epistemologies and Paradigm Shifts,” Big Data & Society (April-June 2014), 1-12, http://doi.org/gcdzc5.
  8. Julie Anne Schuck, Social and Behavioral Sciences for National Security: Proceedings of a Summit (Washington, D.C.: The National Academies Press, 2017), 15-16.
  9. Aneesh Chopra and Shafiq Rab, “Apple’s Move to Share Health Care Records is a Game Changer,” Wired (February 19, 2018); Natasha Singer, “Apple, in Sign of Health Ambitions, Adds Medical Records Feature for iPhone,” New York Times (January 24, 2018).
  10. Julia Powles and Hal Hodson, “Google DeepMind and Healthcare in an Age of Algorithms,” Health and Technology (March 2017), http://doi.org/gcs9ch; Alex Hern, “Royal Free Breached UK Data Law in 1.6m Patient Deal with Google’s DeepMind,” The Guardian (July 3, 2017).
  11. Jacob Kastrenakes, “Facebook Spoke with Hospitals About Matching Health Data to Anonymized Profiles,” The Verge (April 5, 2018).
  12. Marc Santora, “10,000 New Yorkers. 2 Decades. A Data Trove About ‘Everything,’” New York Times (June 4, 2017).
  13. This statement is from a page on the Human Project website that has since been removed: “Powered by d3,” accessed September 2017.
  14. Celia B. Fisher, “Will Research on 10,000 New Yorkers Fuel Future Racial Health Inequality?” The Ethics and Society Blog (August 30, 2016). Ezekiel Dixon-Román also writes about the ways in which certain bodily norms are embedded into our data, and how “fabrications of race” in education data can shape educational practice and policy. See Ezekiel Dixon-Román, “Toward a Hauntology on Data: On the Sociopolitical Forces of Data Assemblages,” Research in Education 98:1 (2017): 44-58, http://doi.org/cnct.
  15. Kadija Ferryman and Mikaela Pitcan, Fairness in Precision Medicine (Data and Society, February 2018).
  16. For example, one of the most important cell lines used in medical research was started in 1951 by doctors at Johns Hopkins Hospital who sampled and cultured cancer cells from an African American patient, Henrietta Lacks, without her permission. Similarly, medical historian Joanna Radin has shown how hereditary and public health data collected from the Pima of the Gila River Indian Community “has become a dataset now used by statisticians as well as genome scientists.” Her report examined “the persistence of place and personhood in the history of big data.” See Joanna Radin, “Off the Rez: How Indigenous Bodies Became ‘Big Data’” (Max Planck Institute for the History of Science, 2014). See also Kim TallBear, “Genomic Articulations of Indigeneity,” Social Studies of Science 43:4 (2013), 509-33, http://doi.org/gcnznp.
  17. Robert Brauneis and Ellen P. Goodman, “Algorithmic Transparency for the Smart City,” Yale Journal of Law and Technology 103 (2018), http://doi.org/cncv. Sarah Brayne, “Big Data Surveillance: The Case of Policing,” American Sociological Review 82:5 (2017), 977-1008, http://doi.org/gcsq6p; Glenn Cohen and Harry S. Graver, “Cops, Docs, and Code: A Dialogue Between Big Data in Health Care and Predictive Policing,” UC Davis Law Review 51:437 (2017); Malcolm Feeley and Jonathan Simon, “Actuarial Justice: The Emerging New Criminal Law,” Criminal Justice and Crime Control, in David Nelken, ed., The Futures of Criminology (Thousand Oaks: Sage, 1994): 173-201; Andrew Guthrie Ferguson, “Policing Predictive Policing,” Washington University Law Review 94:5 (2017): 1113-95.
  18. In 2016, ProPublica ran a statistical test on Northpointe software [PDF] showing that, even when isolating the effects of race, age, and gender, black defendants were 77 percent more likely to be flagged as future violent offenders. Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” ProPublica (May 23, 2016).
  19. Shoshana Amielle Magnet, When Biometrics Fail: Gender, Race, and the Technology of Identity (Durham: Duke University Press, 2011); Simone Browne, Dark Matters: On the Surveillance of Blackness (Durham: Duke University Press, 2015); Riese Jordan Lin, Don’t @ Me: Surveillance, Subject Formation, and the Digital Information Economy, Masters Thesis, San Francisco State University, Fall 2016.
  20. Karla F. C. Holloway, Private Bodies, Public Texts: Race, Gender, and a Cultural Bioethics (Durham: Duke University Press, 2011): 7.
  21. Palantir, “Law Enforcement.” See also Mark Harris, “How Peter Thiel’s Secretive Data Company Pushed Into Policing,” Wired (August 9, 2017); Ali Winston, “Palantir Has Secretly Been Using New Orleans to Test its Predictive Policing Technology,” The Verge (February 27, 2018).
  22. Quoted in Brendan O’Connor, “How Palantir Is Taking Over New York City,” Gizmodo (September 22, 2016). New York City later canceled its Palantir contract because of the company’s lack of transparency and ongoing debates over who controlled the data. See William Alden, “There’s a Fight Brewing Between the NYPD and Silicon Valley’s Palantir,” BuzzFeed News (June 28, 2017).
  23. Harris, op. cit.
  24. Virginia Eubanks, Automating Inequality (St. Martin’s Press, 2017): 6-7.
  25. New York University Institute for the Interdisciplinary Study of Decision Making, “Kavli HUMAN Project: Preliminary Study Design” (Kavli HUMAN Project / New York University, 2015): 19.
  26. National Institutes of Health, “All of Us” and “Scientific Opportunities.” See also David J. Kaufman, Rebecca Baker, Lauren C. Milner, Stephanie Devaney, Kathy L. Hudson, “A Survey of U.S. Adults’ Opinions About Conduct of a Nationwide Precision Medicine Initiative® Cohort Study of Genes and Environment,” PLOS ONE 11:8 (2016), http://doi.org/gbqdkp. Critics have begun questioning the efficacy and feasibility of the program, given the complexity of other, smaller-scale bio-banking projects. Meanwhile, Israel is engaging in an ambitious project to create a national online medical database including 9 million residents.
  27. Verily Life Sciences, Project Baseline. See also Adam Rogers, “That Google Spinoff’s Scary, Important, Invasive, Deep New Health Study,” Wired (April 20, 2017).
  28. Iyah Romm, “Announcing Cityblock: Bringing a New Approach to Urban Health, One Block at a Time,” Sidewalk Talk, the blog of Sidewalk Labs (October 1, 2017). See also the Cityblock website. The Neighborhood Health Hubs are to be more than clinics; they’ll also offer communal areas, classrooms, family counseling, public programming, free internet, and space for community organizations to speak with patients about financial assistance, benefits, and other community services.
  29. Philip Salesses, Katja Schechtner, and César A. Hidalgo, “The Collaborative Image of the City: Mapping the Inequality of Urban Perception,” PLOS One 8:7 (2013), e68400, http://doi.org/f5bs55.
  30. Constantine E. Kontokosta, “The Quantifiable Community and Neighborhood Labs: A Framework for Computational Urban Science and Civic Technology Innovation,” Journal of Urban Technology (2016), 7, http://doi.org/cncw; Constantine E. Kontokosta, Nicholas Johnson, and Anthony Schloss, “The Quantified Community at Red Hook: Urban Sensing and Citizen Science in Low-Income Neighborhoods,” Bloomberg Data for Good Exchange Conference (September 25, 2016), 6. See also Kontokosta’s Urban Intelligence Lab.
  31. Nikhil Naik, Scott Duke Kominers, Ramesh Raskar, Edward L. Glaeser, and Cesar A. Hidalgo, “Do People Shape Cities, or Do Cities Shape People? The Co-Evolution of Physical, Social, and Economic Change in Five Major U.S. Cities,” Harvard Kennedy School, Faculty Research Working Paper Series, RWP15-061 (October 2015); Edward L. Glaeser, Scott Duke Kominers, Michael Luca, and Nikhil Naik, “Big Data and Big Cities: The Promises and Limitations of Improved Measures of Urban Life,” Harvard Kennedy School Faculty Research Working Paper Series, RWP15-075 (December 2015); Nikhil Naik, Scott Duke Kominers, Ramesh Raskar, Edward L. Glaeser, and Cesar A. Hidalgo, “Computer Vision Uncovers Predictors of Physical Urban Change,” PNAS 114:29 (July 18, 2017), 7571-76, http://doi.org/gbsmnm. See also the City Forensics project at the University of California, Berkeley: Sean Arietta, Alexei A. Efros, Ravi Ramamoorthi, Maneesh Agrawala, “City Forensics: Using Visual Elements to Predict Non-Visual City Attributes,” IEEE Transactions on Visualization and Computer Graphics (2014): 2624-33, http://doi.org/226.
  32. Aaron Shapiro, “Street-level: Google Street View’s Abstraction by Datafication,” New Media and Society (2017), 11, http://doi.org/cncx.
  33. Federico Caprotti, Robert Cowley, Ayona Datta, Vanesa Castan Broto, Eleanor Gao, Lucien Georgeson, Clare Herrick, Nancy Odendaal, and Simon Joss, “The New Urban Agenda: Key Opportunities and Challenges for Policy and Practice,” Urban Research and Practice (2017), 367-78, http://doi.org/cncz. These researchers are drawing on earlier empirical research on urban appearance by Amos Rapoport, Kevin Lynch, and Jack Nasar, as well as George L. Kelling’s and James Q. Wilson’s “Broken Windows Theory,” which links environmental disorder and incivility to increased crime.
  34. On the historical relationship between urban planning and public health, see Jon A. Peterson, “The Impact of Sanitary Reform Upon American Urban Planning, 1840-1890,” Journal of Social History 13:1 (Autumn 1979), 83-103; Jason Corburn, “Reconnecting with Our Roots: American Urban Planning and Public Health in the Twenty-first Century,” Urban Affairs Review 42:5 (2007), http://doi.org/cf7mp8; Jocelyn Pak Drummond, “Measuring and Mapping Relationships Between Urban Environment and Urban Health: How New York City’s Active Design Policies Can Be Targeted to Address the Obesity Epidemic,” Masters Thesis, Massachusetts Institute of Technology, 2013 [PDF]; Bonj Szczygiel and Robert Hewitt, “Nineteenth-Century Medical Landscapes: John H. Rauch, Frederick Law Olmsted, and the Search for Salubrity,” Bulletin of the History of Medicine 74:4 (Winter 2000), 708-34, http://doi.org/fp9zqj.
  35. Urban renewal in Europe after World War I enabled modern architects to design new, hygienic forms of social housing, and many of those same architects also employed the tropes of modernism to design new sanatoria, whose flat roofs, terraces, balconies, and recliner chairs afforded patients plenty of opportunities for open-air relaxation. Margaret Campbell, “What Tuberculosis Did for Modernism: The Influence of a Curative Environment on Modernist Design and Architecture,” Medical History 49:4 (2005), 463-88. See also Giovanna Borasi and Mirko Zardini, eds., Imperfect Health: The Medicalization of Architecture (Montreal: Canadian Centre for Architecture, Lars Muller Publishers, 2012), excerpted in this journal as Giovanna Borasi and Mirko Zardini, “Demedicalize Architecture,” Places Journal (March 2012), https://doi.org/10.22269/120306.
  36. Hugh Barton and Marcus Grant, “Urban Planning for Healthy Cities: A Review of the Progress of the European Healthy Cities Programme,” Journal of Urban Health: Bulletin of the New York Academy of Medicine 90:1 (2011), http://doi.org/cnc2; Lawrence Frank, Peter Engelke, and Thomas Schmid, Health and Community Design: The Impact of the Built Environment on Physical Activity (Washington, D.C.: Island Press, 2003).
  37. Rachel Botsman, “Big Data Meets Big Brother as China Moves to Rate its Citizens,” Wired (October 21, 2017).
  38. Eillie Anzilotti, “Quantifying Everything About Urban Life,” CityLab (October 14, 2016).
  39. “Kavli HUMAN Project: Preliminary Study Design,” op. cit., 127.
  40. Kontokosta, “The Quantifiable Community and Neighborhood Labs,” 2.
  41. For more on incentives for participation in precision-medicine initiatives, see Kaufman, et al., 8.
  42. For images of the rebrand, see the portfolio of The Human Project’s graphic designer Aerial Sun.
  43. Schuck, op. cit.
  44. “Participants as Partners” is the first of six “Core Values” defined on the project’s website: “The Human Project is a partnership between participants, staff, and their company Data Cubed, all of whom play a role in governing the study.”
  45. In the video cited in footnote 6, Glimcher says, “The Kavli HUMAN project offers each of us this challenge: Take your data back. Instead of giving our data for free, for corporations [sic], let’s bring our data together as a community. Let’s use that data not to sell things, but to make a better world.”
  46. Quoted in Anzilotti, op. cit.
  47. The Council of Professional Associations on Federal Statistics, “Providing Incentives to Survey Respondents” (September 22, 1993). I am grateful to Johanna Drucker for highlighting the “social contract” implications of government data.
  48. Ian Hacking, “Biopower and the Avalanche of Printed Numbers,” Humanities in Society 5 (1982): 279-95. See also Sarah Igo, The Averaged American: Surveys, Citizens, and the Making of a Mass Public (Cambridge: Harvard University Press, 2008): 302-03, n. 6.
  49. Igo, 7, 8, 119, 121.
  50. Willard D. Rowland, Jr., “The Symbolic Uses of Effects: Notes on the Television Violence Inquiries and the Legitimation of Mass Communication Research,” in Michael Burgoon, ed., Communication Yearbook 5 (International Communication Association, 1982), 391-92. Rowland describes the Bureau of Applied Social Research at Columbia University, a cooperation largely between Frank Stanton of CBS, Paul Lazarsfeld, Hadley Cantril, and the Rockefeller Foundation, which represented “a structure for the pursuit of audience research in the United States rooted firmly in a combination of fascination with empirical social science methodology, practical marketing research experience and broadcast industry commercial and political needs. … The early efforts of the Bureau involved two important problems faced by the broadcasting industry. One was the vital need to develop a better ratings research capacity that would demonstrate radio’s ability to compete with the print media as an effective advertising tool. The second problem was to show that, in spite of all the then current criticism by some in Congress and the FCC, this privately-held, commercially-motivated, and network-dominated medium was in fact exercising its public trust obligations under the law and was providing a socially responsible service. … [It] served as a vehicle for contract research underwritten by both industry and government, often jointly, and as such it became a model for the development during the post-war arrival of television for a host of centers, institutes, and schools of communication research that also depended heavily on commercial and governmental grant funding.”
  51. Igo, 282.
  52. Igo, 55. Meanwhile, in the UK, from the 1930s through the 1960s, hundreds of volunteers contributed to the Mass-Observation project, chronicling their own lives and others’ lives in an attempt to understand everyday life in Britain. These volunteers were both data sources and data collectors. And in their attempts to overcome class divisions in recruiting members to their ranks, they imagined themselves to be “build[ing] a new society with the capability to reshape itself through informed civic participation” and “collectivized forms of self-knowledge.” See Tony Bennett, Fiona Cameron, Nelia Dias, Ben Dibley, Rodney Harrison, Ira Jacknis, and Conal McCarthy, “A Liberal Archive of Everyday Life: Mass Observation as Oligopticon,” in Collecting, Ordering, Governing: Anthropology, Museums, and Liberal Government (Duke University Press, 2017), 121, 129.
  53. Hannah Arendt, The Human Condition, 2nd edition (Chicago: University of Chicago Press, 1998), 159, 179.
  54. Shapiro: 3, 10-11.
  55. Quoted in Gregory Scruggs, “The ‘New Urban Citizen’ and the Dangers of the Measurable City,” Citiscope (August 25, 2017). This is more than biopolitics, which Foucault describes as an “endeavor, begun in the 18th century, to rationalize the problems presented to government practice by the phenomena characteristic of a group of living human beings constituted as a population: health, sanitation, birth rate, longevity, race.” From these populations we can model individual phenotypes. See Michel Foucault, “The Birth of Biopolitics” (1979), in Paul Rabinow, ed., Ethics, Subjectivity, and Truth (New York: The New Press, 1997), 73-79.
  56. See Zeynep Tufekci, “Engineering the Public: Big Data, Surveillance and Computational Politics,” First Monday 19:7 (July 2014).
  57. Consider the Aadhaar system in India, which assigns residents a unique ID number based on demographic and biometric data. See “Digital India Initiatives Playing Major Role in Smart Cities Mission,” Financial Express (March 16, 2017); Anita Gurumurthy, Nandini Chami, and Sanjana Thomas, “Unpacking Digital India: A Feminist Commentary on Policy Agendas in the Digital Moment,” Journal of Information Policy 6 (2016): 371-402, http://doi.org/cnc3; Rhyea Malik and Subhajit Basu, “India’s Dodgy Mass Surveillance Project Should Concern Us All,” Wired (August 25, 2017); “Will India Overcome Challenges to Build Smart Cities?,” Knowledge @ Wharton (February 26, 2016).
  58. See Christopher Hamlin, “Public Sphere to Public Health: The Transformation of ‘Nuisance,’” in Steve Sturdy, ed., Medicine, Health and the Public Sphere in Britain, 1600-2000 (London: Routledge 2002), 189-94.
