For months, the Covid-19 virus has passed from body to body around the world. Its corporeal work is silent, but it reshapes the soundscape wherever it goes. 1 Coughs and sneezes turn paranoid heads; ventilators whoosh in hospital rooms; streets go suddenly quiet, as people shelter inside. Kids home from school create a new daytime soundtrack, and neighbors gather on balconies in the evening, to sing together or applaud health workers. As physicians monitor the rattle of afflicted lungs, the rest of us listen for acoustic cues that our city is convalescing, that we’ve turned inward to prevent transmission.
These new sounds and silences are so affecting because cities have long been defined by their din: by the density and variety of human voices and animal sounds; the clamor of wheels on cobblestones; the mechanical clangs, electrical hums, and radio babble; the branded ringtones and anti-loitering alarms. Most hearing people are adept at interpreting the cacophony. 2 We know which of the sounds within our radius need attention and which can be ignored. 3 At times of crisis or change, our senses are heightened, recalibrated. As we adjust to new spatial confines, to an altered sense of time, we also retune our hearing. Seismologists, for instance, have registered the Covid-19 shutdowns as a quietness that helps them perceive tectonic movements. 4
But contextual shifts are not always as sudden as a viral pandemic. We are constantly revising the way we listen to the city, and for at least a century our aural capacities have been extended in the service of urban surveillance and public health. With technology, we track sounds over greater distances, at different timescales and intervals, discerning patterns and aberrations that are often encoded as symptoms, so that we (or our public officials) can diagnose problems and apply cures. Indeed, many of the modern technologies used to sound out the city are inspired by diagnostic tools from medicine and psychology. Through these soundings, we grasp the city’s internal mechanics, assess the materiality of its parts, analyze its rhythms. 5 And those two domains, surveillance and health, are increasingly entwined with a third, machine intelligence. 6
With all the attention given to urban applications of machine vision — from facial recognition systems to autonomous vehicles — it’s easy to forget about machines that listen to the city. Google scientist Dan Ellis has called machine listening a “poor second” to machine vision; there’s not as much research dedicated to machine listening, and it’s frequently reduced to speech recognition. 7 Yet we can learn a lot about urban processes and epistemologies by studying how machines listen to cities; or, rather, how humans use machines to listen to cities. Through a history of instrumented listening, we can access the city’s “algorhythms,” a term coined by Shintaro Miyazaki to describe the “lively, rhythmical, performative, tactile and physical” aspects of digital culture, where symbolic and physical structures are combined. The algorhythm, Miyazaki says, oscillates “between codes and real world processes of matter.” 8 The mechanical operations of a transit system, the social life of a public library, the overload of hospital emergency rooms: all can be intoned through algorhythmic analysis.
How we imagine ourselves as listening subjects, as hearing bodies, informs how we make sense of our sonic environments. As we listen to the city with both human and machinic ears, we sound it out as a particular kind of resonant or reflective body or system. If we are constantly listening for alien accents or breaking glass and gunfire — as some automated police systems do — we might imagine the city as a body that needs protection from threats. If our stock-trading bots equate the hum of vehicular traffic with economic production, we might be alarmed by quiet streets. If, instead, we listen to the city at macro scale, as an ecology of diverse lifeforms and resources and habitats, we might recognize a dynamic, vital system to be stewarded for future generations of humans and other species. When the sounds of the pandemic recede, how will our hearing be changed? Our tools for urban listening embody particular ways of knowing the city, with implications for how the city is designed, administered, policed, beautified, and maintained.
In other words, how we listen to the city is as important as what we are listening for. Amid the rise of artificially intelligent, algorithmically attuned ears, scoring the city in accordance with their own computational logics, we humans need to better understand our own acoustic agency so that we can make thoughtful choices about how to supplement our ears with machinic ones. In a world defined by climate crisis, surveillance capitalism, and the periodic collapse of global health, we need to think as much about a city’s resonance as we do its resilience and livability.
The Stethoscope
Cities historically were compared to organic bodies, and many tools for sounding out the city were developed by first listening to ourselves. The human body is a resonance chamber whose particular sonic qualities can reveal its condition of well-being. To diagnose a patient with fluid in the lungs, Hippocrates advised a technique called succussion: “you will place the patient on a seat which does not move, and an assistant will take him by the shoulders, and you will shake him, applying the ear to the chest, so as to recognize on which side the sign occurs.” 9 Leopold Auenbrügger, in 1761, proposed a slightly less violent method, percussion, which involved striking the body and listening to its internal resonances to locate diseases of the lungs and heart. 10
Yet even in Auenbrügger’s time most diagnoses relied on the doctor’s visual examination and the patient’s subjective testimony. There was no need to listen deeply to the body because pathologies were attributed not to deep internal causes, but to an imbalance of humors. As autopsies became more widely accepted, those deeper causes were eventually revealed. But “without the larger ideological edifices of empiricism, pathological anatomy, and physiology,” Jonathan Sterne observes, physicians “found listening to the interior of the body to have no practical, informative purpose. … It was only when the body came to be understood as an assembly of related organs and functions that percussion … would take on such a primary role in medical diagnosis.” 11
The stethoscope, introduced by René-Théophile-Hyacinthe Laennec in the early 19th century, marks an epochal turn in the histories of listening and medicine. Treating a young, corpulent woman “laboring under general symptoms of a diseased heart,” Laennec found that her gender, age, and girth made “direct auscultation” — laying ears and hands on her body — “inadmissible.” And so he “rolled a quire of paper into a sort of cylinder and applied one end of it to the region of the heart, and the other to my ear, and was not a little surprised and pleased, to find that I could thereby perceive the action of the heart in a manner much more clear and distinct than I had ever been able to do by the immediate application of the ear.” 12 Laennec’s 1819 treatise De l’Auscultation Médiate (On Mediate Auscultation) advocated for the use of instruments to “mediate” the physician’s attention to audible movements inside the body.
The stethoscope further mediated a transition in medicine and its ways of knowing. According to Sterne, Laennec’s followers cultivated an “audile technique,” a rational mode of observation, that was “instrumental in reconstructing the living body as an object of knowledge.” Mediate auscultation placed a new physical distance between the doctor and patient, and it established sound as a source of medical data. Seeking to validate listening as a scientific method, practitioners created a taxonomy of the body’s internal sounds, a “new medical semiotics,” with each sound indexically representing a specific movement of liquids or gases. Auscultation, Sterne writes, was a “hydraulic, physiological hermeneutics.” 13
From here it is not a far leap to contemplate the stethoscope being used to listen to other systems. Natural philosopher Robert Hooke had already imagined the sounding body as a machine or factory:
There may be … a Possibility of discovering the Internal Motions and Actions of Bodies by the sound they make, who knows but that as in a Watch we may hear the beating of the Balance, and the running of the Wheels, and the striking of the Hammers and the grating of the Teeth, and Multitudes of other Noises; who knows, I say, but that it may be possible to discover the Motions of the Internal Parts of Bodies, whether Animal, Vegetable, or Mineral, by the sound they make, that one may discover the Works perform’d in the several Offices and Shops of a Man’s body, and thereby discover what Instrument or Engine is out of order. 14
Monitoring the body’s functions with new technical and conceptual instruments required specialized knowledge, which elevated physicians’ social status. Over time, tools like specula, endoscopes, X-ray machines, and MRIs enabled further investigation of internal causes for external symptoms, and the discovery of maladies with no external expression. 15 Yet the stethoscope has a special place in history, as the instrument that first registered a new way of knowing. Auscultation — mediated listening — is fundamental to modern life. Indeed, Sterne links the instrumentation of medicine to the growth of industrial cities. “Medicine itself industrialized,” he says, “in gaining a more rationalized structure; in taking shape as a self-conscious profession; in a heavier investment in the discourses of science and reason; and, finally, in its adoption of technology.” 16

The Sound Meter
The professionalization of landscape architecture and city planning was not far behind. The 19th-century urban reformers who portrayed the city as a body, with its own circulatory, respiratory, nervous, and excretory systems, drew heavily on medical discourses of the day. 17 As early as the 1850s, designers joined forces with health officials to push for public sanitation measures, water and waste removal infrastructures, and amenities like playgrounds and bathhouses. Public parks, “the lungs of the city,” were prescribed to clear out the “miasma” of urban decay and filth. But as physicians traded humoral pathology for empirical science and clinical physiology, they came to understand that infectious diseases were caused by germs, not foul air. Early 20th-century planners influenced by models like Baron Haussmann’s Paris, Daniel Burnham’s City Beautiful, and Ebenezer Howard’s Garden Cities conceived land-use zoning as a way of “immunizing urban populations from the undesirable externalities of the economy.” 18
As combustion engines, horns, and sirens proliferated, the city-as-body was becoming a “machine.” Public advocates warned about the effects of noise exposure on both the urban body and the human bodies living within it. 19 City-dwellers sought respite in libraries and other cultural spaces, often sited in a park-like setting, removed from the sullying racket of the business district. Urban reformers wrote the first noise ordinances and sound-sensitive zoning policies. In 1906, Julia Barnett Rice, a non-practicing medical doctor, founded New York’s Society for the Suppression of Unnecessary Noise, which lobbied for quiet zones around city hospitals and national legislation like the Bennet Act, which regulated boat whistles in urban harbors. 20 Soon afterward, philosopher Theodor Lessing founded the German Association for the Protection from Noise, which convinced some cities to install noise-dampening pavements and regulate train signals and steam hammers. 21
And the new urban administrative machine required new tools to regulate the machinic environment. The portable audiometer produced a “subjective” measure of loudness; its operator compared a test sound with a reference tone, which could be dialed down until it was masked by the sound under investigation. A later technology, the acoustimeter, added a microphone, amplifier, and indicator signal, eliminating the need for user judgment. These new tools of urban auscultation were combined with a new unit of measurement, the decibel, to produce the first urban noise surveys in London, New York, Chicago, and Washington, D.C., in the 1920s. As Karin Bijsterveld notes, “Although audiometers were at first used in a strictly medical context to test hearing, the city turned out to be a crucial context for [their] development and application.” 22
This context quickly revealed the limits of efforts to instrumentalize and objectify hearing. The meters couldn’t replicate the way human ears perceived loudness, and they had trouble tracking fluctuating sounds. Bell Labs’ Rogers Galt, who reviewed urban sound surveys for the Journal of the Acoustical Society of America in 1930, emphasized the subjective, situational nature of aural perception. Whether a sound was perceived as noise, he wrote, depended on how long it lasted and how often it occurred, whether it was steady or intermittent, who made the sound, who was disturbed, and whether the sound was understood as necessary. 23 “Noise” was a product of acoustics and psychology.
Whether or not cities actually were too loud, measurable “noise levels,” with their positivist certainty, “became the sign of how bad the situation was.” 24 Public health concerns were taken seriously only after noise exposure could be quantified. Leonardo Cardoso, in his study of sound politics in São Paulo, argues that the seemingly objective measurements produced by sound-level meters came to “replac[e] our ears as the authoritative hearing actor” and ultimately conditioned our hearing to a world that the instrument could validate. “Through the minuscule repetition of a series of exposures to sound that are allowed to exist thanks to the [meter’s] validation, this technological being” has reshaped our own organic perceptual instruments. 25 We become attuned to what the machine is capable of sensing.

The Sensor Array
Quantifiable levels play an even larger role in defining urban performance in the so-called smart city. 26 As our cities grow increasingly datafied, algorithmically filtered, and optimized for efficiency, they require new instrumentation. Noise is a common target for machine listening, as a quality-of-life issue (one of the top complaints to New York City’s 311 line, for example) that is hard to police through analog methods. Many cities, including New York, Dublin, Sydney, Paris, and Singapore, have deployed distributed networks of sound sensors to assess urban noise. The Sounds of New York City (SONYC) project, run by NYU’s Center for Urban Science and Progress and developed in collaboration with the city departments of health, environmental protection, and parks and recreation, has placed dozens of sensors to “monitor, analyze, and mitigate noise pollution.” 27 Each node includes a microphone and a small Raspberry Pi computer, and the data are processed by machine listening — specifically, by artificial intelligence trained on audio datasets annotated by “citizen scientist” volunteers according to a taxonomy of urban sounds. The aim is to extract “meaningful information” from environmental audio, so that cities can identify and target specific sound sources that present problems, like jackhammers, idling engines, loud HVAC, barking dogs, or car horns. 28
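To make that pipeline concrete, consider a deliberately simplified sketch of the loop a sensor node might run: capture a short frame of audio, extract features, and assign a label from a fixed taxonomy. Everything here — the toy taxonomy, the frequency bands, the band-energy heuristic standing in for a trained neural classifier — is invented for illustration, not drawn from SONYC’s actual code.

```python
# A minimal, hypothetical sketch of a sensor-node classification loop.
# It is NOT SONYC's pipeline: a deployed node would run a trained neural
# classifier, not the toy band-energy heuristic below.

import numpy as np

SAMPLE_RATE = 44100          # samples per second
FRAME_SECONDS = 1.0          # analyze one-second windows

# Toy "taxonomy": each label paired with a rough frequency band (Hz).
# A real system learns its categories from annotated recordings.
TOY_TAXONOMY = {
    "engine idling": (50, 200),
    "jackhammer": (200, 2000),
    "car horn": (300, 3000),
    "dog bark": (400, 4000),
}

def band_energy(frame: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Return the spectral energy of `frame` between low_hz and high_hz."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return float(spectrum[mask].sum())

def classify(frame: np.ndarray) -> str:
    """Label a frame with whichever toy category holds the most energy."""
    scores = {label: band_energy(frame, lo, hi)
              for label, (lo, hi) in TOY_TAXONOMY.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Stand-in for microphone input: a noisy 440 Hz tone.
    t = np.linspace(0, FRAME_SECONDS, int(SAMPLE_RATE * FRAME_SECONDS), endpoint=False)
    frame = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)
    print(classify(frame))
```

Even this toy version makes the stakes legible: someone must decide which categories of sound exist, where their boundaries fall, and which are worth reporting to the authorities.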
The SONYC team has also created a visualization tool, Urbane, that generates a 3D map of a city’s sound data over time and connects it to other urban data streams, so that local governments can efficiently schedule inspections at sites of potential noise code violation. Claudio Coletta and Rob Kitchin propose that such systems could be made “algorhythmic,” or responsive to urban flows and fluctuations across seasons, days of the week, and times of day. Planners could correlate noise readings with data about road surfaces, vehicle counts, traffic speed, topology, and other variables to create daytime and nighttime sound maps that inform noise reduction policies. Here, Coletta and Kitchin write, “we have a set of algorhythms at work, algorithmically measuring, processing, and analyzing urban sound and its rhythms.” 29
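A minimal sketch of what such an algorhythmic correlation could look like appears below, using invented hourly readings rather than anything drawn from Urbane or a real deployment; the only acoustic fact it encodes is the rule of thumb that doubling the number of incoherent sources raises the level by about three decibels.

```python
# A hypothetical sketch, not Urbane: join invented hourly noise readings to
# invented traffic counts and summarize them by period and by hour of day.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
hours = pd.date_range("2020-03-01", periods=24 * 7, freq="H")  # one week, hourly

# Synthetic traffic: busier between 7:00 and 20:00.
vehicle_count = rng.poisson(lam=[300 if 7 <= h.hour <= 20 else 40 for h in hours])

readings = pd.DataFrame({"hour": hours, "vehicle_count": vehicle_count})

# Synthetic sound levels: doubling the number of incoherent sources adds
# roughly 3 dB, hence the 10 * log10 term; the random term stands in for
# everything the traffic count doesn't explain.
readings["noise_db"] = (
    45 + 10 * np.log10(readings["vehicle_count"] + 1)
    + rng.normal(0, 2, len(readings))
)

readings["period"] = np.where(
    readings["hour"].dt.hour.between(7, 22), "daytime", "nighttime"
)

# How tightly do traffic volume and measured level track each other, by period?
print(readings.groupby("period")[["vehicle_count", "noise_db"]].corr())

# A simple hour-by-hour sound profile for the week.
print(readings.groupby(readings["hour"].dt.hour)["noise_db"].mean().round(1))
```

Splitting the week into fixed daytime and nighttime blocks is, of course, the crudest possible rhythm; the algorhythmic ideal is a temporal slicing responsive to the city’s actual fluctuations.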
SONYC’s makers argue that the system will also provide timely information to “those in a position to control emissions” — construction-site managers, truck drivers, pet owners, and so on — and incentivize “self-regulation.” 30 Self-regulation is a key principle of “performance zoning,” which proposes that urban residents can do as they please in their homes and businesses and public spaces, so long as they don’t exceed certain thresholds for noise, toxic emissions, and other measurable behaviors. In a city zoned by performance standards, algorithmic auscultation with embedded sensors could be a means of discipline and regulation. 31 As Cardoso foretold, the acoustic panopticon — the panacousticon — would compel human bodies to operate in accordance with its machinic logic. 32
I’ve written elsewhere about the convergence of algorithmic planning and “smart cities” with biometrics and “precision medicine” — the pursuit of optimized cities that cultivate optimized bodies. 33 We can imagine a future city whose acoustic qualities are computationally tuned to promote physical and mental health. (Researchers have already proposed using computer audition to monitor the spread of Covid-19 and ensure social distancing.) 34 Yet data ethicists warn that the racial and gender biases built into our measuring machines will further inequities in care, as they have in medicine and in the provision of urban services like housing and policing. 35
The turn toward algorithmic city planning mirrors what is happening in medical offices. Some health professionals worry that the stethoscope is going out of fashion, supplanted by echocardiography and handheld ultrasound devices that increase the physical and affective distance between doctor and patient. Yet anthropologist Tom Rice finds that some physicians remain committed to auscultation as an “index of sympathetic and empathetic medical practice.” 36 So, too, could we commit ourselves to sounding out the city with more empathic modes of instrumented listening.

Listening to Systems
In the 1960s and ’70s, acoustic ecologists like R. Murray Schafer, Barry Truax, and Hildegard Westerkamp developed qualitative, subjective methods for studying relationships between humans and their environments. These researchers deployed field units to make comparative site recordings across time, invented annotation systems, made maps, and experimented with alternative ways to visualize sonic data. 37 In the decades since, their followers have conducted longitudinal sound studies that reveal insights about climate change, species loss, urbanization, gentrification, and other forms of environmental and social change. 38 The sociologist Henri Lefebvre, in an essay published after his death, proposed an embodied practice of rhythmanalysis, a way of mediating urban perception with one’s physical presence. The rhythmanalyst, Lefebvre says, “listens — and first to his body; he learns rhythm from it, in order consequently to appreciate external rhythms. His body serves him as a metronome.” The body is a means of mediate auscultation; the rhythmanalyst approaches the city as a physician would, listening for “malfunctions of rhythm, or … arrhythmia.” 39
This is a holistic practice, extending across spatial and temporal scales. Lefebvre advised that we “listen to a house, a street, a town, as an audience listens to a symphony,” discerning the role of each agent, or instrument, in composing the whole. 40 We should also listen to urban systems like housing and transit and public health, regardless of their musicality. To be good stewards of (or interventionists in) these systems, we must be able to recognize submerged sounds and obscure patterns, with and without machines. As a viral pandemic sweeps the world, we can supplement our reading of public health statistics by listening with our bodies to the street and the supermarket.
Using the example of a car engine, Bijsterveld draws a distinction between “monitory listening,” which tells drivers whether the internal mechanisms of a system are working as they should, and “diagnostic listening,” which experts use to identify internal problems based on a taxonomy of aberrant sounds. 41 These two modes of listening are constantly happening all around us, and they are crucial to the maintenance and care of the city’s technical and social infrastructures. 42 Civil engineers, for example, listen to ambient vibrations, harmonic excitations, and wave propagation to detect structural weaknesses in buildings and bridges and transit beds. And advanced instruments help us listen across urban scales that are not easily heard by human ears or bodies. Researchers in Alister Smith’s Listening to Infrastructure lab at Loughborough University study sensors that monitor high-frequency “acoustic emissions” from “geotechnical assets” (buried pipelines, foundations, retaining structures, tunnels, and dams) in order to assess their condition, locate weaknesses, and target maintenance work. 43
This applied research extends a tradition among artists who have sounded out infrastructural elements. For the centennial of the Brooklyn Bridge, in 1983, Bill Fontana mounted eight microphones under the bridge’s steel grid roadway and broadcast live sounds at the World Trade Center plaza. In 1999, Stephen Vitiello spent six months in residence on the 91st floor of the World Trade Center, recording how Tower One swayed and creaked with the wind. Such works make sensible the micro-rhythms and macro-scale physical stresses that infrastructures withstand and amplify the distinct mechanics of their materials and construction techniques. 44 Other artists have encouraged listening to technical and media infrastructures, such as WiFi networks, cell connections, and the global positioning system. Since 2004, Christina Kubisch has hosted “Electrical Walks” in several dozen cities. Participants wear specially designed headphones that translate electromagnetic signals into audible sounds, disclosing the waves and particles — generated by activities like ATM transactions and CCTV surveillance — which perpetually envelop and penetrate urban bodies. Similarly, Shintaro Miyazaki and Martin Howse use logarithmic detectors, amplifiers, and wave-filter circuits to transform electromagnetism into sound, revealing the “rhythms, signals, fluctuations, oscillations and other effects of hidden agencies within the invisible networks of the ‘technical unconscious.’” 45
This work to auscultate infrastructure, to render it sensible, helps us appreciate how much listening we have ceded to machines. Turbines, windmills, freezers, vent fans, and hard-to-access machines in the off-limits “clean rooms” of pharmaceutical and tech manufacturing facilities — all signal their health to system operators by chugging along with a consistent tone and rhythm. AI can purportedly predict and prevent infrastructural snafus by scanning for idiosyncrasies within high-performance systems. 46 Some players in the predictive analytics field build training sets with sound samples of well-behaved machines, while others listen across a wide array of systems, identify anomalies, and then invite human engineers to help them analyze and classify the aberrant sounds. Humans also play a mediating role as liaisons between automated sonic analysis and the deployment of emergency services or maintenance workers. A manager overseeing a water treatment plant during a violent storm might rely on a dashboard of sonic alerts to pinpoint mechanical failures and then dispatch staff — or robots — to fix the problem. In the future, this auscultative agent might be the only human in the facility.
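What “scanning for idiosyncrasies” means in practice can be suggested with a deliberately reduced sketch: learn a spectral baseline from recordings of a machine behaving normally, then score incoming frames by how far they stray from it. The features, the synthetic “machine hum,” and the scoring rule below are all invented for illustration; commercial predictive-maintenance systems use far richer representations and learned models.

```python
# A deliberately simplified, hypothetical sketch of acoustic anomaly detection:
# fit a spectral baseline from a "healthy" machine, then flag frames whose
# spectra drift too far from it.

import numpy as np

SAMPLE_RATE = 16000
FRAME_LEN = SAMPLE_RATE  # one-second frames

def spectrum(frame: np.ndarray) -> np.ndarray:
    """Magnitude spectrum, normalized so quiet and loud frames are comparable."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return mag / (mag.sum() + 1e-12)

def fit_baseline(healthy_frames: list[np.ndarray]) -> tuple[np.ndarray, np.ndarray]:
    """Mean and standard deviation of the healthy machine's spectra."""
    specs = np.stack([spectrum(f) for f in healthy_frames])
    return specs.mean(axis=0), specs.std(axis=0) + 1e-12

def anomaly_score(frame: np.ndarray, mean: np.ndarray, std: np.ndarray) -> float:
    """Average absolute z-score of the frame's spectrum against the baseline."""
    return float(np.mean(np.abs((spectrum(frame) - mean) / std)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(FRAME_LEN) / SAMPLE_RATE

    def hum(extra_hz=0.0):
        # A machine "chugging along": a 120 Hz hum plus broadband noise,
        # optionally with an extra tone standing in for a developing fault.
        sig = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(FRAME_LEN)
        if extra_hz:
            sig += 0.5 * np.sin(2 * np.pi * extra_hz * t)
        return sig

    mean, std = fit_baseline([hum() for _ in range(20)])
    print("healthy frame score:", anomaly_score(hum(), mean, std))
    print("faulty frame score: ", anomaly_score(hum(extra_hz=3100), mean, std))
```

Whatever the sophistication, the human engineer re-enters the loop at exactly this point: someone has to decide whether a high score is a failing bearing, a passing truck, or a microphone coming loose.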

Listening to Ourselves and Each Other
And sometimes the city’s artificially intelligent ears are turned on us. Xiaochang Li and Mara Mills describe the historical role of “vocal portraits” in criminal records. Since the early 20th century, police departments across the U.S. and Europe have recorded and archived voices for forensic purposes — to aid in speaker identification, or to allow researchers to identify supposed qualities of the criminal character. Today, international law enforcement agencies use software to match speech samples from phone calls and social media posts with “voice-prints” in a shared database. China has reportedly linked voice-prints to transit ticket machines, health care and educational systems, and citizens’ national IDs. 47
Speech recognition also helps investigators probe alibis and insurance companies verify claims. Layered voice analysis software can purportedly detect lies and incriminating affective qualities like embarrassment, overzealousness, anxiety, or an “attempt to outsmart” the interviewer. Embedded in the software are algorithmic shibboleths: inflections, catchwords, expressions, and marks of dialect that act as “aural biopolitical signatures” of individual identity. 48 At immigration offices, forensic linguists scrutinize accented voices to determine whether they match the traumatic narratives presented in asylum claims, a disembodiment that artist Lawrence Abu Hamdan says violates the principle of habeas corpus, which stipulates that the accused must appear before the judge in recognition of the fact that “the voice is a corporeal product” whose semantic and forensic value exceeds its written documentation or audio recording. The voice has a body. He proposes that heterogeneous, hard-to-place accents should be understood as a “biography of migration,” a sonic composition that defies the body’s identification with a single nation-state. 49
And a growing number of schools, prisons, hospitals, and city governments deploy audio analytics to passively monitor their populations. The Dutch company Sound Intelligence, founded in 2000, makes software that scans voices in the environment for signs of fear, anger, and duress, and then summons authorities or records a sonic event for forensic purposes. This “aggression detection” software is loaded on microphones made in California by Louroe Electronics and security cameras made in Sweden by Axis Communications, which are then marketed through school safety and law enforcement catalogs and conventions. (Sound Intelligence also offers systems that detect and geolocate gunshots and broken glass.) 50 While some customers told ProPublica that these products have become “indispensable” in their operations, reporters found that the systems were often hyper-sensitive and unreliable, interpreting rough, high-pitched voices — like those you might hear often in high-school gyms and cafeterias — as aggressive, and even mistaking slammed lockers for gunshots. 51

Such machines can listen from the macro to the micro scale, taking in the chatter of an entire concert hall or public square and then homing in on the granular properties of an individual voice. A large enough network of automated ears could hypothetically listen at the scale of the city, identify anomalies, and sonically access the interiority of urban subjects, discerning their identity or intention, their humor or their health. Which, again, underscores the role of human arbiters. Just as we want an empathetic physician at the other end of the stethoscope (and a robust health department and licensing board setting the terms of that relationship), we should want a qualitative methodologist to contextualize urban noise data, a human engineer to make sense of recorded vibrations in buried pipelines, an asylum-seeking body present before the judge to defend herself with the full power of her voice. Machines might be used to listen widely, to identify general areas and issues of concern, but we should then follow up with diverse, localized, qualitative methods of investigation. And sometimes it’s best that we not listen at all — that we let the city’s sounds be ephemeral and private and inscrutable.
Sarah Barns proposes that we recognize the future city as a “complex field of cognition, computation, desire and experience,” an assemblage of vibrating, resonating, listening, sounding machines and bodies, including those of other species. 52 The polyphonic city contains many distinct ways of sensing and knowing, of diagnosing and healing, our selves and our spaces. Perhaps listening machines — rather than making scripted determinations about what “meaningful” information is extracted from the sonic environment — could be recruited by cities or community groups or artists to amplify the messy richness of that assemblage, or to highlight the machines’ own subjectivity, or to compel us to listen to ourselves, and our machines, listening. 53
Attending to whole ecologies, rather than specific sounds, reminds us that we live amid great biodiversity, and that listening can be a means of caring for those ecologies, rather than controlling or disciplining them. 54 For example, the Manchester-based company Sensemaker designs bespoke kits that enable journalists to gather recorded audio and local biodata that can prompt investigative reporting and editorial responses. Perhaps those sensor kits could be used for sonic investigations of questions like why all the warblers have left the city park, or what it means that traffic noises have increased in a neighborhood adjacent to rezoned territory. 55 Another example: experimental musician Julianna Barwick and music technologist Luisa Pereira created a generative soundtrack for a New York hotel lobby, using a rooftop camera that told a computer about environmental conditions, cuing up looping synthesizers and breathy voices to register the presence of birds or airplanes, moonlight or clouds. 56 We might imagine a machine listening system serving a similar compositional role, creating soundtracks that report the operational status of transit or waste management systems.
Other projects push back against machine listening by surfacing its flaws. “Laughing Room,” by Hannah Davis and Jonny Sun, and “Hey Robot,” by Everybody House Games, are interactive games that invite players to read the computer’s “personality” in order to trigger its canned laughter or elicit other responses. 57 As we probe a machine’s glitches and failures, we get a better sense of the logics by which it operates, the taxonomies and training sets that underlie its performance, the way it operationalizes affect through keywords and speech patterns. Our laughter is a means of auscultating the machine itself.
Then again, human auditors are glitchy, too. 58 We harbor sonic prejudices and modulate our attention when we hear particular rhetorical registers or vocal affectations. And we are conditioned by our class, race, gender, and personal and cultural histories to tune in to, or out of, particular environmental sounds: traffic noise, loud neighbors, street music, howling winds, crying babies, braying animals. Like machine algorithms, we run on biased training sets.
Recognizing the logics and illogics of automated systems can help us see the variables that condition our own practices of immediate auscultation, and the sounding and listening capacities of the other entities who share our environments. A polyphonic mode of distributed listening helps us appreciate how our actions — making music and noise, building and maintaining infrastructure, tracking and monitoring fellow citizens, creating acoustic space for bodies to rest and heal — reverberate across time and space, and beyond the range of human ears.
