Archive for the ‘Article’ Category

LA Existential

Posted on: November 3rd, 2015 by Rebecca Tuhus-Dubrow

Earlier this month, Gov. Jerry Brown’s announcement of California’s first-ever mandatory restrictions on water use drew attention to the state’s uneasy relationship with its natural resources. “Mother Nature didn’t intend for 40 million people to live here,” University of Southern California historian Kevin Starr told the New York Times.

If any city is known for violating natural boundaries, it’s Los Angeles. The city’s early water grab is the stuff of classic cinema. But its nefarious reputation hardly ends there. Unlike the bastions of hippies and geeks up north, L.A. is chiefly defined in the popular imagination by its crimes against nature, from pollution to freeways to blond dye jobs.

Some of these stereotypes are at best exaggerated. (For the record, you see mostly dark hair in this largely Latino city, and you see plenty of gray hair, too—on thirtysomething hipsters.) Other clichés are rooted more firmly in reality. The city’s Walk Score is a lackluster 64, compared with 84 for San Francisco and 88 for New York.

But, in recent years, Los Angeles has made headway on its most infamous environmental problems, and is even trying to position itself as a green leader. Smog has greatly diminished. Despite adding 1 million people to its population, the city claims to use the same amount of water as it did 30 years ago. Los Angeles is also heavily investing in mass transit while growing denser. (An EPA report found that between 2005 and 2009, the metropolitan area grew significantly more compact, as two-thirds of new housing was built on already developed land.) And Mayor Eric Garcetti’s new sustainability “pLAn” could have been drafted by Al Gore. It lays out a comprehensive suite of goals, such as eliminating coal from the city’s energy portfolio and diverting 90 percent of waste from landfills, both by 2025. In short, a place long known for its suburban character is becoming more of a city. And a place known for defying natural limitations is beginning to try to honor them—a goal that’s at once humbler and more ambitious.

Readers outside the region may have already seen an article or two about how this or that aspect of L.A. isn’t so terrible anymore. Within the region, these changes have collectively contributed to a sense of a new and improved L.A.—an emerging mythology of a more sustainable, responsible, and communal city. Granted, it’s a myth in more than one sense. To apply those adjectives to L.A. requires some squinting (and perhaps politely ignoring the Lexus that just cut you off on the 405). And the drought has the potential to pit water-consumers against each other rather than pulling them together. But this narrative could nevertheless reshape the city’s self-image. Indeed, outsiders who cling to the old clichés about L.A. have themselves become a target of ridicule. As the real-estate blog Curbed LA put it, “New York Times stories about Los Angeles are amazing because they’re like seeing the city through the eyes of a dorky time traveler from 1992.”

The most explicit attempt to capture the shift in the zeitgeist is the notion of the “Third Los Angeles,” a term coined by Los Angeles Times architecture critic Christopher Hawthorne. In an ongoing series of public events, Hawthorne has proposed that L.A. is moving into a new phase of its civic life. In his formulation, the first Los Angeles, a semi-forgotten prewar city, boasted a streetcar, active street life, and cutting-edge architecture. The second Los Angeles is the familiar auto-dystopia that resulted from the nearly bacterial postwar growth of subdivisions and the construction of the freeway system. Now, Hawthorne argues, this third and latest phase harks in some ways back to the first, in its embrace of public transit and public space (notably the billion-dollar revitalization of the concrete-covered Los Angeles River). Hawthorne’s focus is not specifically environmental. But a more publicly oriented city also tends to be a greener one. This is partly because mass transit and walking mean lower carbon emissions. And more broadly, willingness to invest in the public realm tends to coincide with political decisions that prioritize the public good, including ecological sustainability.

Any great city has its own mythologies. But perhaps in Los Angeles, as in California generally, myths loom particularly large. First, real estate boosters sold Southern California as an “Earthly Paradise,” a place for Midwesterners to bask in sunshine and to own an affordable single-family house. Before long, critics exposed class violence and sinister undertones, casting L.A. as a noirish hell or, in the words of writer and labor activist Louis Adamic, simply “a bad place.” Then, in 1971, came another overhaul to the myth, when British architecture critic Reyner Banham famously celebrated the city in Los Angeles: The Architecture of Four Ecologies, praising its charming bungalows and exhilarating freeways. Two decades later, Mike Davis documented these competing myths in his 1990 best-seller City of Quartz, and sided caustically with the critics, offering a dystopian vision of “Fortress Los Angeles.”

For all the disagreement over whether Los Angeles was dream or nightmare, there was one point on which everyone seemed to agree—that it was not a real city. Adamic called it “a great, overgrown village.” Or, if it was a city, it was, in the words of an essay published in the late 20th century, “the first American city”—a model for the sprawl, privatization, and car dominance that was to become typical of U.S. municipalities. Jane Jacobs wrote neutrally but damningly, “Los Angeles is an extreme example of a metropolis with little public life, depending mainly instead on contacts of a more private social nature.” And in his great 1997 book, The Reluctant Metropolis, William Fulton showed how all the different parts of the metropolitan area scrambled to escape any sense of Los Angeles identity or community—the reverse, in a sense, of the kid from Long Island who implies he’s from New York City.

On all of those fronts, there are signs of change. One of the most obvious counter-examples is CicLAvia, the kind of phenomenon that makes Jacobs acolytes swoon. Launched in 2010, it’s a festive event during which miles of streets are closed to cars and swarmed by bikes. Taking place every two to three months, and rotating among different neighborhoods (Echo Park, the Valley, South L.A., etc.), each occasion attracts a diverse crowd of tens of thousands of people. They are the type of feel-good events—some might even call them utopian moments—where strangers smile at each other and ordinary life feels suspended. Traffic lights blink, and even cops whiz by on two wheels, wearing endearingly dorky helmets. In every sense—the car-shunning, the enthusiastic proximity to strangers, the exploration of different parts of the city—CicLAvia is antithetical to the guarded, privatized, auto-carved Los Angeles of lore.

CicLAvia remains a special occasion, but everyday transit is slowly improving as well. Banham wrote that the freeway “is where the Angeleno is most himself, most integrally identified with his great city,” and he predicted that “no Angeleno will be in a hurry to sacrifice it for the higher efficiency but drastically lowered convenience and freedom of choice of any high-density public rapid-transit system.” In 2008—pushed in part by unbearable traffic—Angelenos proved him wrong. On that Election Day, citizens of Los Angeles County voted for Measure R, which imposed a half-cent sales tax to support funding for transportation projects, including the expansion or construction of 12 rail and bus rapid transit lines. It is expected to generate $40 billion in revenue over 30 years. This choice stands in stark contrast to the famous Proposition 13, the 1978 California anti-property-tax law which has wreaked havoc on the state’s budget for public investment ever since. Jonathan Parfrey, executive director of the L.A.–based organization Climate Resolve and a former commissioner at the Department of Water and Power, told me, “The day we voted for Measure R, we voted for a new Los Angeles.”

Then there’s water. Another central part of the old Los Angeles myth was embodied in a quote famously attributed to water engineer William Mulholland at the opening of the Los Angeles Aqueduct just over 100 years ago: “There it is. Take it.” These words were interpreted as a slogan for a city that would siphon water from wherever it pleased to hydrate a burgeoning population.

Now, if only out of desperation, there is at least a strong competing ethos. Starting in the early ’80s, the city got more serious about conservation, as seen in its mass conversion to low-flow toilets. The city has been responding to the current drought on a number of fronts. It has significantly reduced its own water use, especially in the Parks Department. It has offered a rebate to homeowners who replace their lawns with drought-tolerant landscaping, as well as rebates for installing rain barrels, among a variety of other measures. (It remains to be seen how the city will implement the new mandatory state restrictions.) The Department of Water and Power is also preparing a new Stormwater Capture Master Plan, and L.A. has a target of reducing imported water use by 50 percent by 2025. According to Andy Lipkis, founder and president of the influential nonprofit TreePeople, even in a drought, the proper technology can capture significant amounts of water—3.8 billion gallons per inch of rainfall. Mayor Garcetti just launched a corny public awareness campaign urging conservation. Contra Mulholland, the new slogan is “Save the drop.”

Of course, Los Angeles is far from alone in its bid for environmental virtue. It is following national trends; like many cities, it now has a chief sustainability officer. Plans for an ambitious new recycling system (including food waste) are in the works. And let’s not forget that California is a pioneer in addressing climate change. Its groundbreaking 2006 law, the Global Warming Solutions Act, requires the state to reduce its greenhouse gas emissions to 1990 levels by 2020. As of this January, all components of this law, including a cap-and-trade program, are fully operational, and many of L.A.’s green initiatives are connected with that. What makes the developments in L.A. that much more remarkable, though, is that it’s … L.A. Observing them is sort of like seeing a guy get out of his Hummer and carry his reusable canvas bags into the grocery store.

For the same reason, L.A.’s evolution is particularly inspiring—that is, it could serve as a model for other not-so-green cities. Los Angeles started out as very strange, unlike the urban model found in Europe and the Northeast. And then it became more normal, as other places were similarly built around the automobile, subdivisions, and strip malls. Now, L.A. is becoming, in some ways, more like the cities that preceded it. In other ways, it has begun to capitalize on the natural assets it does have to become a 21st-century city: According to Environment America, Los Angeles now ranks first in the country for total installed solar PV capacity.

And yet, many caveats are in order. Indeed, one could easily live in L.A. with little sense of its supposed reinvention. Almost none of its new transit projects has been completed yet. According to a new UCLA report, 73 percent of L.A. County residents drove to work alone in 2013. A 2012 ballot measure to extend Measure R narrowly missed the needed two-thirds majority. A housing shortage is causing economic pain and ensuring long commutes.

On top of all that, the city still imports more than 85 percent of its water, and the current drought is likely a foretaste of the future. Due to climate change, the Southwest may experience megadroughts lasting decades. As attempts at water independence become more necessary, they also become more difficult. Meanwhile, the drought has eroded decades of progress on smog (since rain clears away air pollution), and water scarcity also leads to higher energy consumption.

Given this state of affairs—the exciting momentum, the daunting status quo—what role will the city’s emerging mythology play? The danger, of course, is that the narrative of a more environmentally sound, civic-minded city could in some cases amount to mere lip service. It could gloss over the city’s social and economic disparities, some of which could even be exacerbated as the city’s new attractions lure more creative-class types.

But more charitably, in some ways the new storyline could be self-perpetuating. It could affect how people vote—another transit measure is expected to be on the ballot in 2016—and how they perceive each other. Perception can’t manufacture water, but it can encourage conservation, and it can foster the public street life that coincides with sustainability—the opposite of the fortress mentality so often ascribed to L.A. At the most recent CicLAvia, in the Valley, I witnessed the following exchange: A woman emerged from a Porta-Potty, and a man, apparently a stranger, asked her if she’d watch his bike while he took his turn. Every element of that scene was disorienting. Generations of seekers did not head West with the fantasy of sharing outhouses and entrusting bikes to each other. But if, as we keep hearing, California needs a new dream for a new age, that scene is not a bad place to start.

How to Solve Climate Change with Cows (Maybe)

Posted on: June 3rd, 2014 by Rebecca Tuhus-Dubrow

In the United States, there is famously little consensus on the topic of climate change. But among the community most concerned about it, certain convictions are widely shared: Fossil fuel emissions deserve nearly all the blame for warming our planet. Meat—especially from flatulent cattle—is an environmental scourge. The Koch brothers, with their campaigns against solar power and cap-and-trade legislation, are (to avoid a less printable word) jerks. And we are probably all doomed.

But over the past few years, a new strain of environmental thinking has begun to challenge nearly all of these tenets. This growing movement includes climate activists, scientists, and also farmers, who play a key role. Many of them would agree with mainstream environmentalists about the Koch brothers. But they argue that the way we’ve been thinking about climate has been, if not all wrong, at least woefully incomplete.

The core premise of their thinking is a belief in the overlooked importance of soil. Carbon, harmful at current levels in our air and water, is essential in the ground, where it makes soil rich and fertile. Our greenhouse-gas problem, they argue, began far earlier than we realize, with agricultural mismanagement and other disruptions of land deep in human history, and solving it depends on restoring our soil to the point where it pulls immense amounts of carbon out of the atmosphere—possibly enough to reverse the effects of industrial emissions.

Throughout the country and abroad, farmers and ranchers have begun experimenting with innovative ways of planting crops and grazing animals that are intended to revitalize the soil. The best known is a particular method of raising livestock devised by biologist Allan Savory, born in what is now Zimbabwe, who offers a vision of land management that would send cows grazing across the American plains, emulating the ancient herds of ruminants that once trampled and enriched grasslands.

The movement has spawned a number of advocacy organizations, including the Soil Carbon Coalition, an Oregon-based group whose motto is “Put the carbon back where it belongs.” Scientists have been studying the carbon storage capacity of soil to determine what is possible. And enthusiastically chronicling all of these developments is a small cottage industry of recent and forthcoming books: “The Soil Will Save Us,” by Kristin Ohlson; “Cows Save the Planet,” by Judith Schwartz; “Grass, Soil, Hope,” by Courtney White. All of them are characterized by an upbeat optimism at odds with the often bleak outlook of the traditional green movement.

“When we’re only talking about reducing fossil fuels, it’s depressing because as individuals we feel helpless,” said Schwartz. “When you start to look at the function of the soil and land, things look very different.”

This optimism can be tremendously seductive, and takes the form of some lavish claims—Savory has said that if his method were widely implemented, we could suck enough carbon out of the air to return atmospheric carbon dioxide to preindustrial levels within a matter of decades. When you see things from the perspective of this movement, the problem seems paradoxically both much bigger than we realized, and much easier to solve.

Though they share goals, and offer largely complementary solutions, these new activists don’t always sit easily alongside the more conventional climate movement. Some complain that the latter focuses too single-mindedly on emissions while paying no more than lip service to other remedies. And they’ve drawn critics who believe that soil doesn’t have close to the carbon-storing capacity that Savory and some advocates claim—and that if the movement gains steam, it could shift the focus dangerously away from the imperative to cut fossil fuel emissions.

Still, even these critics agree that carbon sequestration in soil should be part of the answer. How big a part remains to be seen, but the conversation about climate has begun to change—and the tensions that emerge offer a window into the full scope of how human activity affects the planet, for ill and, potentially, for good.

***

We may call it dirt, but soil is an enormously complex substance. When healthy, it’s moist, loamy, and black, teeming with living organisms. “The best way to describe it is just envision black cottage cheese,” says Gabe Brown, a North Dakota farmer who has gained renown in agricultural circles for his success in integrating crops and livestock. Much soil, though, bears more of a resemblance to a gray block of parmesan cheese. And the key ingredient that makes the difference—that allows all of that life to thrive—is carbon.

“If you scoop up some earth, you can see if it’s got carbon in it because it’s dark and it’s rich and it’s light, and it’s got many feet of fungal roots and roots of the plants holding it together,” said Adam Sacks, cofounder of Biodiversity for a Livable Climate, a Boston-area group that advocates for soil-based carbon storage. “It’s the most complicated ecosystem on earth. We know very little about how it works, but we know how it works when we use it, and when we abuse it.”

And for about 10,000 years, since humanity began practicing agriculture, we’ve been abusing it. Deforestation and plowing disrupt all of those exquisite networks, impoverishing the soil and releasing carbon into the air in the form of carbon dioxide. As far back as the 1950s, scientists were aware that loss of carbon from modern soil was possibly related to climate change.

According to one theory, there’s an additional contributor to the rich historical soil we are losing: animals. In the 1950s and 1960s, Allan Savory was a game ranger in his native Africa, where grasslands were rapidly turning to desert. His job involved closely observing the land and patterns of animal interactions with it, and he noticed that, contrary to conventional game ranger wisdom, clearing land of animals, to allow it to “rest,” did not help it recover; in fact, the land appeared to suffer.

The theory Savory developed, drawing on both agronomical thinkers and folk wisdom, held that wild herds of ruminants—bison, wildebeest, and so forth—displayed a cluster of behaviors that was essential to the health of the land. The animals bunched tightly in herds as a defense against predators. For a few hours or days they would graze on grasses and other plants, which stimulated plant growth; trample the ground underfoot, which left plant residue as cover; and deposit their dung and urine, which acted as fertilizer. Then they would all flee together to a different site, allowing the soil to absorb all those nutrients and the land to recover from the impact, and the process would repeat.

Humans have interrupted that process in various ways—by hunting the animals to extinction, by destroying their habitat, and by raising livestock with a completely different relationship to the land. Savory’s hypothesis was that ranchers and farmers could reproduce it by managing livestock to mimic those ancient patterns. Working with pastoralists around the world, he began to implement this approach, which he calls Holistic Planned Grazing. (In a much-viewed TED talk last year, he brandished impressive before-and-after photos of land transformed from barren desert to lush and fertile terrain through these methods.)

His initial theory focused on improving the land, but eventually Savory came to see the soil as a powerful tool for capturing atmospheric carbon, and began to promote his idea as something much bigger: a way to fight climate change. In an e-mail interview, Savory lamented that biodiversity, desertification, and climate change are treated as separate problems. “But they are all one and the same issue—massive environmental disruption caused mainly by agriculture (the production of food and fibre from the world’s land and waters) and by fossil fuel use.” Holistic Planned Grazing, he believes, can help address all of them. (As for methane emissions from cattle, Savory and his supporters argue that they would be much more than offset by the carbon sequestration.)

Carbon sequestration isn’t a new idea—“carbon sinks” have long been considered one way to offset greenhouse gas emissions, whether in the form of new forests or exotic schemes to bury carbon in the ocean. But it has typically been seen as a sort of extra: By all means, plant some trees, but the most salient objective has been to stop burning fossil fuel and transition to clean energy. Savory, while agreeing with the need for that shift, switched the emphasis to capturing carbon—and saw far more promise in it than climate activists had.

Not everyone in the movement buys into Savory’s grazing theories; others focus more on planting patterns and cover crops. But like Savory, the advocates who have latched onto this idea see it as a corrective to the emissions-centric focus of the mainstream environmental movement, which they consider both too narrow and, so far, futile. “I see little hope of emissions reductions,” said Sacks of Biodiversity for a Livable Climate. “They’ve been a complete failure to date.” Seth Itzkan, president of the Somerville-based ecological consultancy Planet-TECH Associates, says: “I don’t want to berate the climate movement, but they also need to be doing something else. And the largest quickest way is ecological restoration and restoration of soils.”

The Savory Institute, founded in 2009, has established 10 Savory Hubs in various countries where people learn, teach, and locally adapt Holistic Planned Grazing. The past few years have seen a flourishing of related activity as well. Peter Donovan of the Soil Carbon Coalition has been traveling around the country on an old school bus trying to measure the carbon content of soil. Dozens of conferences and workshops have been held on the topic of improving soil health; the UN has designated next year the International Year of Soils. USDA scientists are working to better understand how soil works and how to measure its carbon content.

“I do think that awareness of this dynamic is growing,” said Judith Schwartz, author of the 2013 book “Cows Save the Planet.” “There’s been incredible receptiveness.”

The movement has won some unexpected allies, attracting farmers and ranchers who are not typical environmentalists—indeed, who may not even believe in global warming. Healthy soil both absorbs much more water, meaning less runoff and flooding, and withstands drought better because of the water it retains. Even if soil played no role in sequestering carbon and mitigating climate change, it would help individuals to survive the extreme weather events that are projected to become more common. Soil quality also correlates with nutritional quality in crops and meat.

Gabe Brown, the North Dakota farmer, has implemented Holistic Planned Grazing as well as other techniques, such as cover crops, on his land. He has seen his soil go from dry and gray to black and crumbly, and says scientists who measured the carbon content in his soil found that in places it had tripled in the past 20 years. He says he speaks to other farmers at about 30 conferences and workshops each year. “They’re concerned about organic matter, but they do not think about it in terms of carbon sequestration and what it’s doing for climate change,” he said. “I’m going to catch a lot of flak for saying this: Carbon drives the system. It’s all about carbon.”

***

For all the optimism of the soil-as-savior movement, and all the promise of isolated local experiments such as Brown’s, it’s far from certain that global soils have anywhere near the carbon-storage capacity we’d need to compensate for emissions.

Rattan Lal, a soil scientist and director of Ohio State University’s Carbon Management and Sequestration Center, says that, based on extrapolation of his measurements, the earth’s soil could theoretically absorb about three gigatons of carbon per year. Currently, he says, the atmosphere retains about 4.3 gigatons of emissions during the same period. “The problem is that this three gigatons is the maximum potential,” he said. “And then it cannot go on forever.”

According to Lal, in about 50 years, the soil would be saturated. In effect, he sees this approach as a way to buy us some time to shift to alternative energy sources. (Lal believes Savory’s method is an effective way to increase the soil’s carbon content, but focuses on other methods himself, such as mulch farming, conservation tillage, agroforestry, diverse cropping systems, and cover crops.)

There is also debate about the effectiveness of Savory’s specific approach. David Briske, professor of ecosystem science and management at Texas A&M University, agrees that fighting global warming by capturing carbon in soil “is clearly a valid concept,” but calls Savory’s more ambitious claims “highly misleading,” and believes Savory is “using carbon sequestration as a way to garner support for his grazing method.”

Others cite figures suggesting the soil has a higher maximum capacity, and believe that we can keep adding carbon to the soil, and even build new soil, far beyond what science has recognized. Assessing carbon content at the global scale is tricky because, among other reasons, there are different kinds of carbon, organic and inorganic. Kristine Nichols, a USDA scientist who studies soil, says that over time, organic carbon, which is more prone to cycling in and out of the air, can be stored, with proper management, for longer periods of time—and over the very long term, some of this will form new fossil fuels. “There’s a tremendous amount of potential for the soil to absorb carbon,” said Nichols. “The potential in that soil environment is greater than the amount of CO2 that’s in the atmosphere…there’s more carbon that can be in that soil environment than we ever thought possible.”

In part, the argument about soil has turned into a tug-of-war about data, which on a subject this large and complex is often inconclusive or missing. “There are things we know we’re doing, but the data is not there,” acknowledged Daniela Ibarra-Howell, CEO and cofounder of the Savory Institute, which has begun to systematically gather information about the carbon content at the Savory Hubs.

Beyond data, there are questions of competing priorities. The climate movement has limited resources, and it has its hands full with emissions control: reducing what we pump into the atmosphere, finding long-term solutions that don’t use fossil fuels, and persuading governments to invest more money in regulating and developing these technologies. In this framework, even a well-intentioned movement that offers a shiny but uncertain promise is potentially a risky distraction.

“I think it might detract from the major issues of reducing fossil fuel emissions,” said Pushker Kharecha, a research scientist at Columbia’s Earth Institute, referring to Savory’s approach. “A heavy over-emphasis on land use as a panacea does detract from the more fundamental issue of shifting our energy infrastructure to clean energy.”

It’s not hard to see the appeal of a movement that promises not only a carbon sponge, but more delicious food, hardier land, and profits for small farmers along the way. Better soil also enhances food security for developing countries. That doesn’t make soil improvement the miracle cure its staunchest advocates claim. But given the magnitude of the challenge, there’s sense in the message that we can make the most of the resources we already have underfoot—even if, as we might have guessed in the first place, the world will not be saved by cows alone.

The Repurposed PhD

Posted on: November 6th, 2013 by Rebecca Tuhus-Dubrow

On a recent Sunday afternoon, a monthly meeting convened around a long table in a Whole Foods cafeteria on the Upper West Side of Manhattan. As people settled in, the organizer plopped down a bag of potato chips and tackled housekeeping matters, like soliciting contributions. But she did not insist. “I know that some of you are in fragile situations,” she said.

One attendee recalled scraping by on $9,000 a year. “I was exhausted by years of living in poverty,” she said. Her neighbor chimed in: “Amen, sister.”

An eavesdropper might have been surprised to learn what the group had in common: formidable academic credentials. Sitting at the table were a historian, a sociologist, a linguist and a dozen other scholars. Most held doctorates; a few were either close to completion or had left before finishing. All had toiled for years in graduate school but, by choice or circumstance, almost none had arrived at the promised destination of tenure-track professorships (the one who had was thinking of leaving). Now they found themselves at a gathering of a group called Versatile Ph.D. to support their pursuit of nontraditional careers.

After a round of introductions, the participants broke into clusters to swap stories and tips. A 32-year-old man who had studied ancient religion at Princeton wore a T-shirt emblazoned with the name of his employer, a finance website; he talked up his job to a physicist who was finalizing her thesis. The historian, a teacher at an elite private school, advised a recent American studies Ph.D. on where to find job postings and how to package himself. That young Ph.D., Adam Capitanio, who completed his degree in 2012, had looked for an academic position for three years, focusing his search on the Northeast and applying for at least 60 jobs. He hadn’t received a single interview. Now he was working as an editorial associate at an academic publisher, trying to devise a long-term plan. “Things were kind of desperate before I had that job,” he said. “This gives me some flexibility to figure out what I actually want to do.”

Dr. Capitanio’s experience is far from unusual. According to a 2011 National Science Foundation survey, 35 percent of doctorate recipients — and 43 percent of those in the humanities — had no commitment for employment at the time of completion. Fewer than half of Ph.D.’s are expected to land tenure-track jobs. And many voluntarily choose another path because they want higher pay or more direct engagement with the world than monographs and tenure committees seem to allow.

Though graduates have faced similar conditions for decades, the past few years have seen a surge in efforts to connect Ph.D.’s with gratifying employment outside academia and even to rethink the purpose of doctoral education. “The issue itself is not a new issue,” said Debra Stewart, president of the Council of Graduate Schools. “The response, I would say, is definitely new.”

In addition to New York, Versatile Ph.D. groups have formed in at least seven other cities, including Philadelphia, Chicago and Los Angeles. Abundant online resources help Ph.D.’s turn curricula vitae into résumés and market their skills to nonacademic employers. And former academics can find kindred souls at blogs like “Chronicles of a Recovering Academic” and “Dr. Outta Here” (obscenity alert).

The spirit of change has even begun to take root inside the ivory tower. The University of California, Berkeley, held a “Beyond Academia” conference last spring, hosting Ph.D. speakers who have succeeded in other domains, from consulting to biotech. Similar events are planned at the Graduate Center of the City University of New York, which established its new Office of Career Planning and Professional Development in February.

The problem is especially urgent in the humanities. For Ph.D.’s in STEM disciplines (science, technology, engineering, mathematics), industry has long been a viable option. But students who study, say, Russian literature or medieval history have few obvious alternative careers in their fields. They confront questions about their relevance even inside the academy, let alone outside it.

In August, the Scholarly Communication Institute released a report titled “Humanities Unbound: Supporting Careers and Scholarship Beyond the Tenure Track.” In it, Katina Rogers, the lead researcher, discusses the nascent concept of alternative academic, or alt-ac, professions. The term has gained widespread currency (and its own Twitter hashtag) and can refer to jobs within universities but outside the professoriate, like administrator or librarian, as well as nonacademic roles like government-employed historian and museum curator.

Dr. Rogers suggests that alt-ac is less a matter of where you work than how — “with the same intellectual curiosity that fueled the desire to go to graduate school in the first place, and applying the same kinds of skills, such as close reading, historical inquiry or written argumentation, to the tasks at hand.” In an interview, she credited the neologism with infusing “positive energy” into the often gloomy conversations about alternative careers. The alt-ac ethos holds that nonacademic work is not a fallback plan for failures but a win-win: Ph.D.’s can bring their deep expertise and advanced skills to a whole gamut of challenges, rather than remaining cocooned in the ivory tower.

Karen Shanton explored unconscious cognitive processes for her philosophy Ph.D. from Rutgers but now works at the National Conference of State Legislatures, which provides legislators with nonpartisan analysis. She won a two-year fellowship from the American Council of Learned Societies. Its Public Fellows program, created in 2011, places Ph.D.’s from the humanities and social sciences in nonprofit and government organizations.

Dr. Shanton said her education “absolutely” informs her work, which focuses in part on voter ID laws, as she draws on her writing and thinking skills as well as her knowledge of how the mind works. “It’s actually kind of great because it has a lot of the benefits of academia,” she said. But “with politics, you can have a sort of more immediate impact.”

While the alt-ac perspective is relatively rosy, some disenchanted academic refugees embrace what they call the “post-ac” identity. The website “How to Leave Academia” recently published a post-ac manifesto, defining the orientation as “a belief that the current system is flawed, cruel, unsustainable and therefore impossible to directly engage with.” In this view, Ph.D. programs, with their false promises, lure students to serve as cheap labor, first as teaching assistants, then as poorly paid adjuncts when tenure-track jobs elude them.

“Post-ac discourages people from pursuing graduate work,” write the authors, Lauren Whitehead and Kathleen Miller, under the pseudonyms Lauren Nervosa and Currer Bell. Dr. Miller also penned the blog post “I Hate My Post-Ac Job: What Happens When You Don’t Land the Perfect Postacademic Career.” In it she writes: “Graduating, leaving academia, moving to a new city, starting a new job, and then hating it? Sheesh. Let me tell you — it’s hard to feel like a success story.” Unable to secure academic employment after completing her doctorate in English literature in 2012, Dr. Miller is now preparing to start her own life-coaching business.

A handful of professors at Stanford, sensitive to the exploitative potential of graduate school but convinced of its value, are trying to instigate meaningful change. Last year, six of them wrote “The Future of the Humanities Ph.D. at Stanford,” a much-discussed white paper promoting the redesign of curriculums to prepare humanities Ph.D.’s for “a diverse array of meaningful, socially productive and personally rewarding careers within and outside the academy,” as well as reducing time to degree, which often takes close to a decade.

Russell A. Berman, a German professor and an author of the paper, feels a responsibility to recognize these practical exigencies. “Graduate education is primarily an intellectual undertaking,” he said. “But most of the participants are at an age where they also have to be making career choices.” He added, “The academic job market is so weak that it just can’t be business as usual for department faculty.”

And yet he does not buy into the popular notion that there are just too many Ph.D.’s. “I think that doctoral education is good for individuals who are passionate about the topic,” Dr. Berman said. “I think it’s good for society. They contribute in lots of different ways.”

The professors called on Stanford to offer supplementary funds to departments that devised plans for alternative career preparations and shortening time to degree. The School of Humanities and Sciences requested proposals, but few departments responded. At the same time, new programs have been set up to help link humanities Ph.D. students with jobs in Silicon Valley and in high schools.

Initiatives are afoot at other schools as well. Collectively, they could begin to alter expectations.

While not grappling with the same existential questions as humanities programs, the Polytechnic Institute of New York University is trying to expand career options for its Ph.D. candidates. It has opened two incubators over the last few years, with a third to open soon, offering space, legal services and marketing advice to facilitate entrepreneurship. The draw, according to Kurt H. Becker, associate provost for research and technology, is “a career path that would allow them to be much more in control than if you’re a postdoc or an assistant professor, where your career path is pretty much mapped out.”

The Praxis Network consists of “digital humanities” initiatives at eight universities, focusing primarily on graduate education. They aim to prepare students for roles outside the professoriate, stressing skills like collaboration, technology and project management. Students in the Digital Fellows Program at the City University of New York Graduate Center, in its second year, commit 15 hours a week to a selected project and related activities. One historian completed a project called “Data Mining Diplomacy: A Computational Analysis of the State Department’s Foreign Policy Files.” Fellows also design Web sites and organize a workshop series for other students, all of which is far removed from the traditional humanities experience of sitting alone in a room with a stack of books.

“We are really thinking about it as a kind of laboratory for reshaping doctoral education and rethinking the kind of skills that we give our students,” said Matthew K. Gold, an English professor who runs the program.

Ethan Watrall, a professor of anthropology at Michigan State University, runs the Cultural Heritage Informatics Initiative as part of Praxis. “I try to destigmatize this idea of not going on to a tenure-track job,” he said. “It doesn’t matter — who cares? If you’re happy and that’s what you want to do, that’s awesome.”

He believes the culture has begun to change, “mostly because of the sort of desperate need for it to change.”

Still, he said, a transformation is only beginning. “The academy is a big ship and it takes a long time to turn it.”

How to solve America’s childbirth cost crisis

Posted on: July 7th, 2013 by Rebecca Tuhus-Dubrow

Almost two years ago, pregnant with my daughter, I paid my first visit to the Cambridge Birth Center. Located inside an old Victorian house, the facility is hard to distinguish from a modestly appointed home, with blond wood floors and three spacious bedrooms, each attached to a bathroom with a large tub. If the interior is comforting, so is the view: the Cambridge Hospital, part of the same campus, is visible through most windows. Should you need to get there in a hurry, the trip would take about ten seconds.

Like most of my fellow patients, I chose the birth center because I wanted to avoid the high-tech approach typical of hospitals, but I didn’t feel entirely at ease with the idea of a home birth. The birth center hit the sweet spot. I loved the cozy environment designed expressly with birth in mind. I adored my unflappable midwife, Heidi, a genius at deflating anxieties. And when in labor, I benefited from the warmth and expertise of the midwife on call, Connie, who calmly coached me through delivery.

But I didn’t realize until later that there were other reasons to love birth centers—namely, hard economics. In light of this week’s big Times article on the staggering costs of maternity care in the United States, it’s time that birth centers receive the recognition they deserve as a viable alternative. A recent major study confirmed that for low-risk pregnancies, birth centers provide equally safe care for much lower costs than hospitals. Even the Affordable Care Act acknowledges their value; a little-noticed provision of the law mandates that Medicaid cover birth center services. Yet, thanks to a combination of unfriendly laws in some states, insurer resistance, and lack of public awareness, far too few American women have access to this form of excellent, cost-effective maternity care.

First, to clarify the terminology: a free-standing birth center is a homelike, midwife-led facility that offers prenatal care and delivery services and has emergency arrangements with a hospital. It can be located inside a hospital, but it must be separate from the acute obstetric care unit. (Some hospitals use the term “birth center” to describe conventional units, which has caused confusion.) Distance from the hospital varies. I must admit I would have felt much less comfortable if the hospital had been farther away. But in some rural areas, women live hours from the nearest hospital, and it may be preferable for a birth center to be near their homes rather than near the hospital. All birth centers monitor patients throughout pregnancy to confirm that they remain low-risk; those who are not are referred to hospitals.

According to the American Association of Birth Centers, the facilities operate according to the “wellness model” of pregnancy and birth, as opposed to the medical model that sometimes seems to treat these events like illnesses. As AABC executive director Kate Bauer told me, “In a hospital, all women are treated as if they are high-risk. In a birth center, every woman is seen as low-risk, unless her risk level is elevated…It’s appropriate use of resources.” You can’t get an epidural at a birth center. They rely on low-tech pain-management techniques such as warm baths and changing position, though some do offer conservative doses of Demerol and nitrous oxide.

That birth centers offer safe, economical care is not news. A landmark 1989 New England Journal of Medicine study reviewed records of 11,814 women at 84 birth centers and found that there were no maternal deaths, while the neonatal mortality rate was similar to that of low-risk hospital births. The rate of cesarean sections (which involved transfers to hospitals) was 4.4 percent. The article concluded, “Few innovations in health service promise lower cost, greater availability, and a high degree of satisfaction with a comparable degree of safety. The results of this study suggest that modern birth centers can identify women who are at low risk of obstetrical complications and can care for them in a way that provides these benefits.” (The authors emphasized that all of the birth centers in the study were accredited; the safety of unaccredited centers is uncertain.)

A new study published in January in the Journal of Midwifery & Women’s Health ratified those results. Because safety is such a concern, allow me to dwell again on the numbers. This study reviewed the records of 15,574 women at 79 birth centers from 2007 to 2010. Again there were no maternal deaths, and again the neonatal mortality was comparable to low-risk births in hospitals. Just 6 percent of the women ended up having cesareans at affiliated hospitals, compared with about 25 percent of low-risk women who started out at hospitals. (The total nationwide rate of cesareans in 2010 was 32.8 percent.) The study estimated that given the lower cost of the facilities, and the much less frequent interventions, these birth-center deliveries saved approximately $30 million.

Despite these positive outcomes, only a tiny minority of women (0.3 percent in 2010, according to the CDC) give birth in such centers. The AABC estimates that if just 10 percent of the 4 million women who give birth annually did so in birth centers, the savings would come to at least $2.6 billion. The provision in the Affordable Care Act—stipulating that Medicaid programs reimburse birth centers—was included because it was projected to save money.

By some indications, trends are favoring birth centers. According to the AABC, their numbers have grown from 170 in 2004 to 251 today. Forty-one states license birth centers, the American Public Health Association has issued guidelines for licensure, an entity called the Commission for the Accreditation of Birth Centers does what its name suggests, and the AABC also has standards for members. All of this constitutes what the recent journal article calls an “infrastructure of standards, accreditation, and licensure” that contributes to the safety and reliability of accredited centers.

Historically, tensions have simmered between the midwife community and ob-gyns, but even the American College of Obstetricians and Gynecologists has endorsed birth centers—albeit in a somewhat roundabout manner. In a 2008 statement reiterating their strong opposition to home births, ACOG asserted that delivery “in a hospital or accredited birthing center is essential” (emphasis mine).

Birth center advocates were also thrilled by the Medicaid provision in the health care reform law, sponsored by Senator Barbara Boxer: not only should it mean expanded birth center coverage for low-income women, but private insurance often takes its cues from the federal programs, so a ripple effect is conceivable. For now, despite the potential savings, private insurers often resist covering birth centers. (I was fortunate on this front: my total out-of-pocket expenditure was a $15 co-payment for my first prenatal appointment.)

But amid this progress, serious gaps remain. Nine states do not license birth centers, and the Medicaid provision applies only to those that do. The AABC claims, too, that some states have not properly implemented this part of the law. State laws vary widely, with some much more hospitable to birth centers than others. These differences don’t always break down according to familiar patterns: In Texas, for instance, birth centers flourish, while Maine does not license them.

As a result, many women who would prefer birth centers don’t have the option—and many of these end up having c-sections and other interventions in hospitals. I blame not greed or evil but the so-called law of the instrument: the tendency to rely on the tools you have. The debate about childbirth is particularly polarized—with some convinced that pregnancy should not be treated as a medical condition—but not unique. Take mental health. If you suffer from anxiety and you go to a psychiatrist, you will likely leave with a prescription in hand. Some people know they want Zoloft, but others would like to try a less medical approach first.

A final obstacle for birth centers in reaching their full potential is cultural. The majority of women seem to think that birth centers are a risky option for a hippie fringe, one step removed from giving birth in a field under a full moon, orgasmically. Granted, birth centers will never be right for everyone. I know women who don’t see the point of trying to give birth without an epidural. (As writer Anne Lamott memorably put it: “I have girlfriends who had their babies through natural childbirth—no drugs, no spinal, no nothing—and they secretly think they had a more honest birth experience, but I think the epidural is right up there with the most important breakthroughs in the West, like the Salk polio vaccine and salad bars in supermarkets.”) Then there are women who wouldn’t consider leaving the privacy and comfort of their own home to have a baby. But there’s a good-sized contingent that wants to try for a low-tech birth in a relaxed setting that’s been proven safe. Birth centers are more likely to provide this experience than hospitals are. They should be seen not as sites for New-Age-y, hazardous adventures, but as places that offer mainstream, high-quality maternity care.

One last advantage of birth centers: the parties. The Cambridge Birth Center has thrown two afternoon shindigs for families since my daughter was born last July. She got to meet the midwife who listened to her heartbeat in utero, and the one who delivered her, as well as other kids who were born there. And I got to return again to the place I’d visited so frequently, this time eating chips and salsa.

Children of the Hyphens, the Next Generation

Posted on: November 23rd, 2011 by Rebecca Tuhus-Dubrow

When my parents married in 1977, women’s liberation was in full swing and my mother was a consciousness-raiser. She was about as likely to take my father’s name as she was to sport a veil at the wedding. She would remain Ms. Tuhus. Nine months later, the surname for their new baby (me) was self-evident. My parents yoked their names into a new one: Tuhus-Dubrow.

“I knew that was the best I could do,” my father told me. “As opposed to just Tuhus.”

Other parents, albeit a small minority, had the same idea. By the mid-1970s more women were keeping their maiden names, so hyphenating the names of the children seemed like the next logical raspberry to blow at the patriarchy, a stand against the family’s historical swallowing up of women’s identity.

Hyphenation has other pluses. The invented names are distinctive; I’ve never come across a Tuhus-Dubrow outside my immediate family. The inconveniences — blank stares, egregious misspellings — are outweighed by the blessing of never having to worry about a Google doppelgänger.

The problem, of course, is that this naming practice is unsustainable. (Growing up, I constantly fielded the question, “What will you do if you marry someone else with two last names? Will your kids have four names?”) Like many of the baby boomers’ utopian impulses, it eventually had to run up against practical constraints.

I don’t have children yet, but plenty of others in my cohort — the first in which nontrivial numbers were born hyphenated — do. And reproducing while hyphenated brings inevitable quandaries. I was curious to see how my peers have handled them. So I asked around. What I found was a whole gamut of solutions. The name-blending pioneers now have grandchildren whose names embody an intriguing mix of the traditional and the maverick.

I encountered several women who kept their own hyphenated names when they married, but gave their children the father’s surname. This scenario seems to deviate the least from the mainstream: after all, many other women with single surnames do the same.

Zoe Segal-Reichlin, 33, a lawyer for Planned Parenthood in New York, was typical in her approach to naming her son, now 10 months old. She said she flirted with alternatives: hyphenating three names, picking either Segal or Reichlin to link with her husband’s name. But ultimately, none felt quite right, and going with the father’s name won out as the most practical choice.

“It was the best of bad options,” she told me.

Same-sex couples face their own quandaries, since there is no tradition to follow. Cora Jeyadame (née Stubbs-Dame), 37, a first-grade teacher in Newton, Mass., was determined to share a name with her child, and to think ahead more than her own parents had.

“It’s a one-generation solution,” she said of hyphenation. She and her wife, whose surname was Jeyapalan, spliced their names together into an entirely new, hyphenless amalgam.

How did they decide on the name? “I actually put it out on Facebook,” she said: “ ‘I challenge you to come up with good combinations.’ ” The winning entry, Jeyadame, is the legal surname of Cora and her 4-month-old; her wife uses it socially.

Naming decisions raise novel questions for hyphenated men. There is little precedent of husbands changing their names at marriage or giving up the prerogative to pass their names on. Traditional practices grew out of a male-dominated culture and a need for simple rules. But there is another, less obvious motive: to hold men accountable for their offspring.

“How do you attach men to children?” said Laurie K. Scheuble, a senior lecturer at Pennsylvania State University who has done several studies on naming practices. Names are “a very functional and practical way” to do so.

But perhaps, in an age when men wear BabyBjorns, it is no longer always necessary. When Daniel Pollack-Pelzner, 32, an English professor who lives in Portland, Ore., married Laura Rosenbaum, he toyed with the idea of a creative synthesis.

But “Rosenpollackpelznerbaum sounded like a weapon of mass destruction,” he said. When they had a son, giving him Daniel’s last name seemed too complicated, so they gave the baby Laura’s.

Mr. Pollack-Pelzner initially worried that having a different name would arouse suspicions, leading to airport frisks and other indignities. But since his son was born, “I’ve hardly thought about it at all.” No one has ever challenged whether he is the toddler’s father: “The poor guy is cursed to look just like me.”

Nathan Lamarre-Vincent and his wife, Sarah Miller, went the opposite direction, giving their children Nathan’s hyphenated name. Mr. Lamarre-Vincent, a 34-year-old Harvard postdoctoral fellow in molecular biology, said it was a default decision: “We were both kind of go-with-the-flow,” he said, and simply hewed to tradition.

The irony is that the name is the product of his own parents’ defiance of that tradition. It is a little like following every step of an old-school Thanksgiving recipe, but starting out with a Tofurky.

In a 2002 paper, Ms. Scheuble and her husband, David R. Johnson, a Penn State professor, predicted that the importance of a family name could begin to decline. Thanks to more divorce, remarriage, same-sex unions and retention of maiden names, it is far from unusual for members of the same nuclear family to bear different surnames.

Nevertheless, the vast majority of families stick with custom. According to a 2009 study analyzing data from 2004, only 6 percent of native-born American married women had unconventional surnames (meaning they kept their birth names, hyphenated with their husbands’ names, or pulled a Hillary Rodham Clinton).

I know lots of women, including myself, who kept their birth names at marriage. But according to my anecdotal observations, which others seconded, rates of hyphenation seem to have fallen since my brother and I were born.

As Ms. Segal-Reichlin said, “At the time I think they thought they were going to be the wave of the future,” but it has not panned out that way. Still, hyphenated names are not entirely a relic of the ’70s, like sideburns and lava lamps: witness the Jolie-Pitts.

Based on my conversations, the verdict on hyphenation was mixed.

“When I was young I hated it,” said Sarah Schindler-Williams, a 32-year-old lawyer in Philadelphia. “It was long, it never fit in anything. I was always Sarah Schindler-Willi.”

But most, including Ms. Schindler-Williams, eventually grew to appreciate their cumbersome monikers. Names frequently convey information about their bearers: Weinberg or O’Malley gives you a hint about the person attached to it. But conjoined names, several people mentioned, also say something extra about your parents’ egalitarian values. (Unless you are British; then it means you’re posh.)

What did our parents expect us to do when we reached this stage of our lives? They trusted it would all work out somehow. As Ms. Segal-Reichlin’s parents told her, “We figured that was your problem.”

Alzheimer’s Alert

Posted on: April 29th, 2011 by Rebecca Tuhus-Dubrow

Last week, new guidelines for diagnosing Alzheimer’s defined a “preclinical” stage of the dreaded disease. Evidently, the telltale pathology—in particular, the plaques that encroach on the brain—can be detected years, if not decades, before the patient ever forgets a familiar name or neglects to feed a pet.

The announcement renewed a debate that has flared in recent months: Since there’s no cure, critics believe that an early diagnosis of Alzheimer’s would serve only sadistic doctors, masochistic patients, and greedy business interests. They worry that Big Pharma will sell snake oil to a huge, desperate market, and that health insurance companies and employers could use the information against patients. Others, however, point to the benefits of advance notice. You might take that long-deferred trip to Antarctica, for example, or try to squeeze in extra visits to the elliptical machine. (There is some evidence, albeit inconclusive, that exercise helps stave off the mind’s deterioration.)

Complicating matters, the “bio-markers” that show up in an early diagnosis do not necessarily lead to symptoms. For unclear reasons, some brains seem to function well despite the incursion, while others succumb more readily. In many cases, patients die from other causes before the plaques wreak havoc. Given the gaps in knowledge, the guidelines stress that the tests are for research purposes only. (The idea is that studying the earliest manifestations of the disease will illuminate its genesis and ultimately yield therapies that keep symptoms at bay.) Yet some are concerned that before long, bio-markers will be used to test regular patients. What are the implications of diagnosing an incurable disease in seemingly healthy people?

Doctors, patients, and bioethicists have been grappling with this question for years. For most of human history, unpleasant and obvious symptoms indicated disease. You knew you were sick thanks to your projectile vomiting, or the searing pain in your head; or perhaps you were tipped off by the suppurating sores. To some extent we still rely on these signs of illness, but increasingly, people are notified of their disease—or their propensity for it—by lab results. While a diagnosis of a mysterious ailment can be something of a relief, a diagnosis of pathology when you feel perfectly healthy is more like a condemnation.

Take the example of the BRCA1 and BRCA2 genes, discovered in the mid-1990s. Certain mutations of these genes dramatically increase the risk of breast and ovarian cancer. Masha Gessen, who tested positive, explored her experience with humor and insight for Slate, and in her book Blood Matters. In this case, patients can at least take some action, though the options are hardly appetizing: Gessen chose a preventive double mastectomy.

A better analogue to Alzheimer’s is Huntington’s disease, a degenerative neurological disorder that leads to uncontrollable movements and dementia (and often suicide). If one of your parents is unfortunate enough to have this incurable disease, your chances of getting it are 50-50. Symptoms usually don’t appear until early middle age, but genetic testing for presymptomatic diagnosis has been available for years.

After the genetic marker for Huntington’s was discovered in 1983, there were serious ethical concerns about testing, most of which now sound familiar. The tests were not infallible, so inaccurate diagnosis was a threat. Another worry was that people who tested positive would be pressured not to reproduce. Then there was the psychological impact of the diagnosis. The first imperative of medicine is “Do no harm,” and delivering such distressing news seemed like it might violate that precept.

Before the test became widely available, health care providers collaborated with patients and family members to formulate a set of standards for its use. Several patient-advocacy associations developed guidelines establishing a patient’s right to refuse the test and attempting to protect confidentiality. At-risk people met with a genetic counselor, a psychologist, and a medical geneticist for advice, and this preparation could last up to two years. Patients were encouraged to imagine their responses to various outcomes and, eventually, assimilate the results. (People who undergo testing for the BRCA genes also frequently meet with genetic counselors.)

Over the years, researchers have examined the repercussions of the tests and identified both benefits and harms. Unsurprisingly, gene-positive results lead to depression, anxiety, and isolation; worries about employment prospects; and regrets about knowing of a difficult future. But there are also upsides: the end of agonizing uncertainty, emotional connection to gene-positive relatives, and the ability to focus on the important things in life. Those who learn they do not have the mutation, of course, feel tremendous relief.

And yet, despite the parallels, Alzheimer’s is different from Huntington’s in several fundamental ways. Huntington’s is a rare disease, affecting only about 30,000 Americans. Alzheimer’s dementia currently afflicts 5.4 million, a figure that is projected to reach 13.5 million by 2050. Given those numbers, the costs of preclinical testing and counseling for anyone who is even at risk of Alzheimer’s would be astronomical. What’s more, for all its devastation, the disease usually allows its victims a long, normal life before the ugly descent into illness. The stakes of early diagnosis are simply not as high as they are for Huntington’s.

As of now, given the uncertain relationship between the bio-markers and dementia, an early diagnosis would seem to offer little of use. As the new guidelines point out, the greatest risk factor for Alzheimer’s is advanced old age—and you don’t need a brain scan or a spinal tap to tell you you’re a very senior citizen. At that point, even if you dodged the fate of Alzheimer’s, you could fall prey to another kind of dementia. In the end, we’re all bio-marked for death and decline.

The $100 million pond

Posted on: April 10th, 2011 by Rebecca Tuhus-Dubrow

The coral reefs of Hawaii are enchanting: a full spectrum of brilliant colors, teeming with spiky urchins, striped damselfish, sluggish sea cucumbers, and hundreds of other creatures. Many of these species are found nowhere else in the world, and the ecosystem’s uniqueness makes it a darling of oceanographers. Researchers, Hawaiian residents, and visiting snorkelers can all agree: The reefs are a priceless treasure, and their disappearance would constitute an incalculable loss.

But Hawaii’s reefs are more than just photogenic seascapes with sentimental value; they’re economic powerhouses. They provide a suite of quantifiable benefits to Hawaiian society, through fisheries, tourism revenue, and their role as a buffer against wave erosion and tropical storms. The authors of one study collected a mass of data — ranging from fishery income to the cost of renting masks and fins — and placed the value of the reefs at a minimum of $360 million per year. To thoughtlessly damage a coral polyp, in this view, is tantamount to shredding a $20 bill.

A growing number of environmentalists, scientists, and economists have embraced the concept of putting a price tag on nature, which is reframed as “natural capital” and “ecosystem services.” Rather than casting nature as some abstract, awe-inspiring entity, or as a luxury trumped by economic imperatives, they see it as a provider of an array of specific, identifiable services that are vital to our well-being. And increasingly, they are deploying sophisticated methods to arrive at precise and credible dollar values. They hope that their painstaking analyses will lead to smarter decisions about land and water management. And, more broadly, the thinking is that hard numbers will resonate more than odes to Mother Earth — that dollar figures will allow people to better understand their own reliance on natural goods and services, as well as the costs of neglecting or destroying them.

“The value that nature delivers to us is economically invisible,” says Pavan Sukhdev, study leader of a UN-sponsored report on The Economics of Ecosystems and Biodiversity. “Effectively we pretend that it’s zero. The point is, it’s there.”

In recent years, this approach has gained a foothold in academia, international institutions, and a number of governments. In the United States, China, Costa Rica, and elsewhere, governments have opted to fund the preservation of forests, watersheds, and other ecosystems — and not because, or not only because, of their beauty. The primary impetus, rather, is the “services” they provide, including air and water purification, carbon sequestration, flood control, and drought prevention. The World Bank has sponsored numerous relevant projects, and a Stanford-based initiative, the Natural Capital Project, draws together environmental organizations and academics to advance and implement the idea.

And yet, for all of the obvious appeal of this approach to green types, there are serious concerns about translating ecological value into dollars and cents. Commodifying nature offends the sensibilities of some environmentalists, who believe we should prize it for its intrinsic worth, and for ethical and historical reasons. If we appraise nature only for the “services” it provides to humans, could that lead to an overly anthropocentric ethos that jettisons any elements that do not obviously accrue to our benefit? Several years ago, Douglas McCauley, a doctoral candidate in Stanford University’s biology department, published a much-discussed Nature commentary, in which he stressed the dangers of relying too heavily on this model.

“Ecosystems were not made to serve people,” McCauley wrote in an e-mail from Kenya. “I worry about teaching children and legislators to protect nature *because of* these services. How will generations raised on this message treat costly panda bears, worthless butterflies, or forests whose water purification services have been replaced by human innovation?…I think there are bigger and better reasons to protect nature.”

Most advocates of monetary valuation acknowledge that it cannot constitute the sole basis for conservation; it supplements other, less tangible rationales. (At the same time, some assessments do try to account for those bigger, better reasons, classified as “cultural services,” which include aesthetic and spiritual value.) But proponents insist that without this additional tool in the conservationists’ arsenal, ecological disaster awaits — policy makers deal in numbers and dollars, so pragmatic environmentalists must speak that language too.

As Gretchen Daily, a Stanford biology professor and chair of the Natural Capital Project, puts it, “Conservation, using traditional approaches, is utterly doomed to fail.”

Some parts of nature, of course, have long had prices: Timber, salmon, blueberries, and other goods have a market value. But efforts to place a price on natural services with no market value began to take shape in the 1970s and 1980s. This period saw the advent of a field called “ecological economics,” in which scientists and social scientists attempted to reconcile two disciplines that were often at odds. In 1997, some of this work culminated in a seminal Nature paper by Robert Costanza, then a professor at the University of Maryland, and others. The paper estimated the value of the earth’s natural capital to be an average of $33 trillion per year, as compared with a global gross national product of about $18 trillion. Reflecting on natural capital’s indispensability to the world economy, the authors suggested that it should be factored into everything from national accounting systems to commodity prices.

In the mission to put a value on nature, there are two distinct but complementary tacks. Calculating the global or national value of natural capital is a fundamentally “rhetorical” exercise, as Duke University law professor James Salzman notes; the point is to convey in a forceful, concrete manner the enormous worth of these assets. The other approach, which has gained currency recently, is much more practically oriented: to try to assess the economic outcomes of different interventions in specific places, in order to manage land and water more wisely.

So how do you assign a dollar value to something as nebulous and unpredictable as nature? It’s not easy, but a variety of methods have emerged. One relatively straightforward strategy is to make a direct comparison to alternative scenarios. What cost would be incurred — in the form of natural disaster, lost income, etc. — by the degradation of, say, a wetland, which offers water filtration and flood protection? Or a forest, whose benefits include carbon storage, air and water purification, timber, and fuel wood? Similarly, how much would it cost to create a technological substitute for, say, clean drinking water, such as a filtration plant? Or a levee to replace mangroves as flood buffers?

“If we were to provide that service, is it cheaper to provide it with landscape management?” asks Salzman, who is involved in the Natural Capital Project. “You can provide it through built capital or you can provide it through natural capital.”

Assessments of more amorphous value can be made by proxies. For example, you can compare housing prices near a beach or a mountain range to similar houses in a different locale. This, in theory, provides a clue about the implicit value homeowners place on the natural area. Similarly, the “travel cost method” looks at what visitors pay to enjoy a site in terms of transportation expenditures and time. The “contingent valuation” method surveys people about how much they are willing to pay for a given ecosystem service.
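
For the curious, here is a minimal sketch, in Python, of the hedonic logic described above, using invented numbers: compare the sale prices of homes near a natural amenity with otherwise similar homes elsewhere, and read the average gap as a rough proxy for the value buyers implicitly place on that amenity. (Real hedonic studies use regression models to control for many other differences between the houses; every figure below is hypothetical.)

```python
# Hypothetical illustration of the hedonic pricing idea:
# infer the implicit value of a natural amenity from home prices.

near_amenity = [412_000, 398_000, 430_000, 405_000]          # sale prices near the beach (invented)
comparable_elsewhere = [372_000, 365_000, 390_000, 368_000]  # similar homes with no beach access (invented)

def mean(prices):
    return sum(prices) / len(prices)

premium_per_home = mean(near_amenity) - mean(comparable_elsewhere)
print(f"Average premium attributable to the amenity: ${premium_per_home:,.0f}")

# A crude aggregate: multiply the per-home premium by the number of affected homes.
affected_homes = 2_500  # invented figure
print(f"Implied capitalized value across the area: ${premium_per_home * affected_homes:,.0f}")
```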

Of course, daunting challenges arise. First are the scientific ones: Ecosystems are complex, dynamic affairs, subject to so-called feedback and threshold mechanisms, in which abrupt, unforeseen changes take place. It is often exceedingly difficult to predict how an intervention will affect the multiple, interacting services at stake. Then there is the task of translation into monetary values. The method of surveying people depends on their opinion of the value — and part of the problem is limited awareness of that value, as Costanza, now at Portland State University, points out. (On the other hand, such surveys entail no actual financial sacrifice, so the answers could be artificially generous.) There are also difficult ethical questions, regarding, for instance, how much weight to give present versus future benefits. Stefano Pagiola, an economist at the World Bank’s Sustainable Development Department, says the field is “polluted with bad estimates.”

Despite the hurdles, there are now a number of projects underway throughout the globe, often under the rubric of “payment for ecosystem services.” Not all of them attempt to rigorously determine the monetary value of the services; they simply recognize the economic benefits of conservation, and offer some financial reward on that basis.

One of the most celebrated examples is in New York City, which gets most of its water from the Catskill/Delaware watershed. By conserving land upstream — through acquisitions and payments to landowners — the city has avoided the need to build a new filtration plant, which would cost billions of dollars. Most recently in 2007, the city pledged $300 million over 10 years to these investments. Within the past two years, Santa Fe and Denver have opted for similar policies, as the news source Ecosystem Marketplace has reported. In China, devastating floods and drought in the late 1990s spurred leaders to take action, when they realized deforestation was a primary culprit. (Forests absorb and slowly release rainwater, providing the “services” of flood and drought prevention.) They launched initiatives, still ongoing, to pay villagers to conserve forests and to convert cropland back to forest and grasslands. Costa Rica also has a longstanding program paying landowners to conserve and restore forests.

A few models are emerging to try to calculate reliable, meaningful dollar values. One, in development by the Natural Capital Project, is called InVEST, for Integrated Valuation of Ecosystem Services and Tradeoffs. The software analyzes the benefits of services like water quality and quantity, erosion control, carbon sequestration, and pollination. It models alternative scenarios to inform users how ecosystem services would be affected, presenting outcomes in the form of maps, balance sheets, and “tradeoff curves.” The results show both tradeoffs and synergies between different services — so that, ideally, policy makers might choose to prioritize the areas that maximize useful service provision. At least four “water fund” projects in South America — in which downstream users, including a hydropower company and a beer bottling business, pay people upstream to keep the water clean — are using InVEST. The Natural Capital Project is working with Google to make it freely available on the Internet. Other new tools — two prominent examples are EcoMetrix and ARIES — have similar aims.

But even as these techniques become increasingly refined, they will not suffice to effect the widespread change their inventors are seeking. Only in certain cases will crunching the numbers show conservation to be the fiscally obvious choice. For example, in the absence of a carbon tax or a cap-and-trade policy, the tools can calculate the social value of carbon storage, which would mean long-term economic savings for society at large. They can’t, however, claim that carbon storage has a current monetary value for an individual company or municipality or private landowner.

“I don’t want to be too cynical,” says Steve Polasky, an economist at the University of Minnesota who works on InVEST. He acknowledges that some people would consider the social value seriously, but “the cynical part of me says, you know, you really do have to have the right set of incentives in place.”

The development of these tools and the proliferation of related projects have occurred against the backdrop of a broader philosophical shift in environmentalism. Historically, conservationists had sought to fence off pristine wilderness, protecting it from any human interference. Over the past few decades, the ethos has evolved to emphasize the interdependence of humanity and nature. In part, this is sheer realism: With a global population nearing 7 billion, people have to find ways to live in concert with nature. This newer focus also stems from the recognition that human survival depends directly on healthy ecosystems. Not incidentally, it also happens to be seen as a more effective way of selling conservation to those who are disinclined to be tree-huggers — i.e. it’s not just for the cute and furry animals, it’s essential to us, too.

But this more utilitarian view — nature as handmaiden to human well-being — has elicited skepticism. One critique questions its accuracy. Mark Sagoff, director of the Institute for Philosophy and Public Policy at George Mason University, has written that “we benefit from nature not by preserving but by ‘improving’ it — for example, by plowing a field, building a road, constructing a house, drilling a well, damming a river, farming a salmon or oyster, or altering a genome.” He has challenged the received wisdom about New York’s water plan, pointing out that the city uses chlorine to disinfect the water, and that one of the greatest sources of pollution is fecal matter from wildlife.

The utilitarianism also troubles some environmentalists: What happens when the filtration plant becomes cheaper than conservation easements? These environmentalists hold dear the useless magnificence of nature, its bizarreness and its fearsomeness, whether in the funny face of a snub-nosed monkey or the dizzyingly precipitous walls of a canyon. The value they perceive resists quantification — even in the form of “cultural services.”

And yet, they see that older approaches have left the environment deeply vulnerable. The greatest weakness of pricing nature — its kinship with market ideology — is also its greatest strength.

Douglas Kysar, a Yale law professor and author of the book “Regulating from Nowhere: Environmental Law and the Search for Objectivity,” expresses what may be a common ambivalence — wary in theory, resigned in practice. “We used to have this idea of being humbled by nature, being awestruck by its ability to exceed our comprehension, to exceed our mastery,” he says. He worries that the logic of ecosystem services “reinforces a deeper mindset that is very much in tension with the needs of ecology.”

But on an immediate, practical level, he supports the efforts as far preferable to the status quo. “It fits with the dominant ideology — and if you can’t beat ’em, join ’em.”

Law lab

Posted on: December 12th, 2010 by Rebecca Tuhus-Dubrow

This past week, wrangling over the Bush-era tax cuts has riveted Washington. The spectacle is only the latest round in an endless debate, one that has launched innumerable op-eds, cacophonous talk-show segments, and dinner-table quarrels. As conservatives see it, higher tax rates hurt job creation as well as undercut the incentive for entrepreneurship and hard work. Many liberals cast these downsides as modest, while stressing the value of tax revenue. Will tax cuts bring a bloom of free enterprise — or exploding deficits? Economists have studied the issue ad nauseam, but firm conclusions are elusive. It is difficult to tease out cause and effect, because at any given time, economic conditions other than taxation also shape behavior.

If only there were a scientific way to determine the real impact of taxation on industriousness, labor supply, and innovation.

According to some scholars, there is. Randomly assign a representative sample of the population — say, 10,000 taxpayers — a lower tax rate, and see what happens. Did these Americans, on average, behave any differently than their counterparts? Did they work longer hours or more jobs, start more businesses, hire more employees?

In other words, test government policies using the same technique — randomized controlled trials — used to test new drugs. A growing chorus of legal scholars, economists, and political scientists believes that such trials should be conducted to evaluate a wide range of laws: gun control, safety and environmental regulations, election reforms, securities rules, and many others. And some believe that we are ethically obligated to do this, because laws affect our lives so pervasively. Understanding the true costs and benefits of legislation, they say, is essential to making good policy — and we may know much less about our own laws than we think.

“The randomized experiment is kind of the gold standard in medicine and social science,” says Ian Ayres, a Yale law professor and economist who advances the idea of the tax experiment in a forthcoming paper. “We should use that same tool to inform us whether laws work.”

Already, randomized trials have migrated beyond medicine. In recent years, experiments to test development programs in poor countries have grown common. Even in the United States, such trials have been used for several decades, in a scattered way, to evaluate social services such as job training and welfare reform, as well as criminal justice and education policies. In one controversial experiment described last week in The New York Times, New York City is testing a homelessness-prevention program by randomly denying services to some households. But now, proponents argue that this instrument should be used in other areas of law — that a culture of experimentation should take hold, and randomized trials should become the norm in lawmaking rather than the exception.

There are certainly potential problems with this vision. First is the question of effectiveness: In some cases, it may prove too difficult to run an accurate test. The full repercussions of laws often take years to manifest themselves, and small-scale experiments do not always translate well to larger settings. Also at issue is fairness. Americans expect to be treated equally under the law, and this approach, by definition, entails disparate treatment.

“The problem is, we’re dealing with laws that have a huge impact on people’s lives,” says Barry Friedman, a law professor at New York University. “These aren’t casual tests. It’s not, you try Tide or you try laundry detergent X….Here we’re talking about basic benefits and fundamental rights.” Though Friedman is sympathetic to the goal of gaining better empirical knowledge, he says, “My guess is some of it’s doable in some contexts, and a lot of it’s not doable in other contexts.”

But others are more sanguine, and they make the opposite argument: That precisely because the stakes are so high, the laws that we enact on a large-scale, long-term basis must be more rigorously tested. This wave of thinking is part of a broader trend in fields from health care to education: Our practices should be “evidence-based,” rather than deriving from theories and unproven assumptions. The question is whether this kind of scientific approach can successfully take on a project as unruly as our society — and our politics.

The concept of randomized trials is usually traced back to the 1920s, when Ronald Fisher, a British geneticist and statistician, used the technique in agricultural trials. Fisher randomly assigned plots of land different fertilizers or crop varieties, and compared the results.

Before long, scientists began to apply the method to people. The virtue of randomization is that, provided the numbers are large enough, you can create two groups that are close to statistically identical — the same distribution of gender, age, height, educational background, and other characteristics. Any change in the treatment group can then be confidently attributed to the intervention, rather than to other factors. (In a variant of the concept, trials can randomly assign different interventions to multiple groups.) In observational studies, by contrast, it can be difficult to distinguish correlation from causation.

“The typical experiment involves very, very elementary statistical analysis,” says Donald Green, a Yale political scientist. “When things are done properly, you take the treatment group average and subtract the control group average. It could not be more simple.”
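
Green’s arithmetic is simple enough to sketch in a few lines of Python. The example below is hypothetical, with made-up data: it randomly assigns 10,000 imaginary taxpayers to a lower rate or to the status quo, then subtracts the control group’s average hours worked from the treatment group’s.

```python
import random

random.seed(42)

# Hypothetical trial: randomly assign 10,000 taxpayers to a lower tax rate
# (treatment) or the status quo (control), then compare an outcome of interest.
taxpayers = list(range(10_000))
random.shuffle(taxpayers)
treatment = set(taxpayers[:5_000])
control = set(taxpayers[5_000:])

# Invented outcome data: hours worked per week. A real evaluation would draw
# on tax records or surveys; here a +1 hour treatment effect is baked in.
hours_worked = {i: random.gauss(38.0 + (1.0 if i in treatment else 0.0), 5.0)
                for i in taxpayers}

def group_mean(group):
    return sum(hours_worked[i] for i in group) / len(group)

effect = group_mean(treatment) - group_mean(control)
print(f"Estimated effect of the lower rate: {effect:+.2f} hours per week")
```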

Randomized trials were first applied to drugs in the late 1940s, and by the 1970s they were widely used in the United States to evaluate new medications. Now, of course, it’s unthinkable that the FDA would approve a new drug without clinical trials to show that it’s safe and effective.

The use of randomized trials outside medicine actually dates back even further, to a local experiment. In the 1930s, the Cambridge/Somerville Youth Study, designed to reduce delinquency, randomly assigned over 500 boys to either a treatment group, which received visits from a counselor and other services, or a control group, which received neither. A 30-year follow-up found that the intervention had not diminished criminality and, strangely, seemed to have slightly exacerbated it. Since then, a number of comparable experiments have ensued. In the 1980s, police departments in Minneapolis and Milwaukee randomly assigned mandatory arrest for domestic violence offenders to assess its effect on recidivism. They initially found that arrest deterred future offenses, but over time a more complex picture emerged; particularly in areas with high unemployment, arrests appeared to provoke further battery. Similar state-run experiments have shown the effectiveness of unemployment programs that offer job-search assistance.

Though government has proved receptive to the idea in some areas, we are still far from the ideal envisioned by the most ardent proponents. They hope for a time when such experiments are the default. In the world they foresee, any politician who opposed a trial would be suspect, as resistance would appear to indicate lack of confidence in one’s position.

One promising area for expanding the use of trials is regulation. Michael Greenstone, an MIT economist, laid out this case in a chapter in the 2009 anthology New Perspectives on Regulation. While regulations profoundly affect everyday life, he argues, our system for assessing them is dreadfully inadequate. Regulations determine, he notes, the loans we can get, the kinds of materials we can use to build homes, the velocities of our vehicles, and the quality of our air and water. But they are typically evaluated, if at all, prospectively — when analysis amounts to educated guesswork.

We study regulations only at “the very moment when we know least about the consequences,” Greenstone says. “There is no culture of trying to understand ex post what the consequences are.”

As an alternative, he recommends introducing regulations on a small scale prior to rolling them out. Safety regulations, such as new rules for cars or cigarette lighters, could be randomly tried in some areas and evaluated after a designated period. If the benefits exceed the costs, they should be expanded; if not, they should be scrapped. Greenstone suggests that industrial plants could be randomly subjected to different environmental regulations. After a given amount of time, the air and water quality in the vicinity would be measured, along with health outcomes for people in the area and the effects on plants and animals. Various versions of regulations could be tried in different states or municipalities.

In a paper to be published in the University of Pennsylvania Law Review this spring, Yale professors Ayres and Yair Listokin and George Washington University law professor Michael Abramowicz advocate the systematic introduction of randomized trials throughout government — in legislatures and administrative agencies, at the state and federal level. They suggest that trials be “self-executing,” in that policies would be automatically enacted based on their results (though lawmakers would be able to overrule this default).

As proponents of this idea frequently note, Justice Louis Brandeis famously called the states the “laboratories of democracy,” but state-level innovations do not meet the standards of real, controlled experiments. Ayres and his coauthors propose that the federal government coordinate trials, randomly assigning states (provided they consent) to adopt certain laws; as Ayres puts it, “In the laboratory, you don’t let the rats design the experiments.” Another proposal, perhaps more realistic, is for states themselves to randomize across municipalities or counties.

There are concerns, of course, about how practical widespread tests would be. Some domains lend themselves to testing more readily than others. Education policy, election laws, and criminology seem particularly ripe for randomized trials (and many have already been conducted, both by academics and government). In these realms, an intervention is unlikely to register as an experiment with its subjects. A kid in a class doesn’t know or care if his class size or teacher’s technique is part of an experiment. The same is probably true of a criminal who receives a particular sentence, or a voter who receives certain literature in the mail. Moreover, the outcomes are relatively conducive to measurement (test scores, recidivism rates, voter turnout). But in other cases, participating knowingly in an experiment could distort the result (this phenomenon even has a name, the Hawthorne effect). This problem could be especially acute for businesses that desire a particular outcome — such as lenient securities laws or safety regulations — and alter their behavior for the duration of the trial. And some experiments — such as giving individuals different tax rates — would surely be controversial. Finally, as even landmark trials like the domestic violence study underscore, the results do not always furnish clear-cut policy prescriptions.

The essential goal here is to take the politics out of policymaking, to replace dogma with data. It’s a noble ideal. But society will always have to contend with clashing values and priorities. Gun control law, for example, may be quite conducive to experimentation. Virginia, say, could randomly assign counties to different restrictions, and measure crime rates over the next few years. But even if the trial demonstrated that gun control reduced crime, many people believe in the liberty to bear arms as a nonnegotiable right. If data showed that air and water pollution had no effect on human health, some would oppose the regulations on industry, while environmentalists would still support them. Good data can only get us so far.

The other obstacle involves the current state of our political discourse. Opponents of a law could still claim that the trial proved nothing. While academics consider it the gold standard, the public may not place more stock in it than in other kinds of research that are more easily manipulated. Already, suspicion of government is rampant, and opponents accuse the Obama administration of technocratic overreach. People don’t like to feel like guinea pigs, and states may chafe at playing the role of lab rats. It is easy to imagine that a substantial segment of the population would view randomized trials as nothing more than another elitist scheme.

That said, proponents of this idea don’t claim it’s a panacea. They do believe randomized controlled trials are, if imperfect, the best way we have of generating empirical data. In his recent essay, Greenstone argued for a “new era in regulatory reform.” The first era, he writes, was that of the New Deal and the Great Society, when the focus was on well-meaning efforts to remedy social problems. In Greenstone’s view, the effort seemed to count more than the results, with little emphasis on follow-up and evaluation. Then, in a backlash, came the second era, under the Reagan administration, when government was recast as the problem. Now, Greenstone believes, we must instigate a third era, in which government is neither demonized nor valorized, when results are measured meticulously and count for more than good intentions.

As Donald Green of Yale says, “We test pharmaceuticals because there are billions of dollars at stake, and lives.” The same, he argues, is true of our laws, yet we don’t subject them to the same scrutiny. “In some ways the question is, how badly do we want to know?”

It’s Alive!

Posted on: June 13th, 2010 by Rebecca Tuhus-Dubrow

Buildings, in many ways, represent the opposite of nature. From a modest suburban house to the most majestic skyscraper, a building signals the presence of people in a place, differentiating human spaces from their surroundings. The built environment consists of organized, inert structures that contrast with the wildness, vitality, and constant change of the natural world.

Buildings clash with nature in another sense, too — constructing and occupying them takes a substantial toll on the environment. In the United States, the construction industry is responsible for much of the waste that ends up in landfills. The use of buildings — consider the lights, the elevators, the air conditioning — accounts for a healthy fraction of the country’s electricity consumption and carbon dioxide emissions.

In recent years, lower-impact “green buildings” have steadily gained popularity. But a new movement holds that these measures have not gone nearly far enough — that even today’s eco-conscious apartments and offices still produce waste and greenhouse gases, merely scaling back the damage. What we need to do, according to the architects and scientists driving this movement, is fundamentally rethink the concept of a building.

Sometimes called “biomimetic” or “regenerative” architecture, this approach applies insights from nature to the built environment, and seeks to blur the distinction between the two. In some cases, this means mimicking specific functions of organisms or their habitats. In other cases, the emulation is more general: conceiving of buildings as closed-loop ecosystems that, like a forest or a savanna, draw their energy from the elements and produce no net waste — and perhaps even improve the surrounding environment.

“I see it as a complete game-changer,” says Eden Brukman, vice president of the International Living Building Institute. Brukman’s organization runs the Living Building Challenge, a certification program in which the essential goal is to design buildings that function like ecosystems. One objective, she says, is “to get people to start to think differently about how we approach the built environment. How would nature solve these problems?”

This movement is still young, with few actual buildings completed and much research yet to be done. Not all of its ideas may turn out to be practical. But it has recently generated considerable interest, with discussions of the topic among the most popular at green building conferences and classes on offer at a number of architecture schools, suggesting it could become a significant force in the field.

According to its proponents, this approach has the potential to be vastly more ecologically sustainable than current building practices, even those dubbed green, and they see a profound shift in this direction as crucial for architecture, not to mention the planet. Ultimately, they raise the prospect of a future where the built environment works in a radically different way — not as a foil for nature, but as seamlessly integrated with it as possible.

Architecture didn’t always diverge so sharply from nature. As Jason McLennan, chief executive of the International Living Building Institute, has noted, pre-modern dwellings, such as igloos in Alaska and adobe structures in the Southwest, boast many of the virtues this movement is calling for. They use local materials; they moderate indoor temperature simply through their structure; and they have little impact on the environment. These buildings arguably have more in common with a bird’s nest or a beehive than with today’s high-rises.

Eventually, of course, technological advances led to very different sorts of buildings. The influential modernist architect Le Corbusier called houses “machines for living in” nearly 90 years ago, and that has served as a dominant metaphor ever since. Buildings have become highly artificial, voracious entities that use astounding amounts of water and electricity every day. And increasingly, buildings are constructed throughout the world without much deference to local conditions: Similar tract housing can be found in diverse parts of the United States; steel-frame office towers rise from every continent’s metropolises. The logic is that buildings create their own environments — these days materials can be transported from anywhere, and sophisticated HVAC systems can achieve any desired indoor temperature.

But in the past 15 years or so, another idea has begun to gain currency: Why not take cues from organisms that have thrived in the architecture’s setting? In 1996, a landmark project was completed: the Eastgate Centre, a large office building and mall in Harare, Zimbabwe, designed by architect Mick Pearce. Modeled after termite mounds found in the region, the building has no air conditioners, yet stays cool through a ventilation system inspired by nature. Termites perpetually dig (and plug) holes to catch breezes and modulate the temperature within their mounds. Using fans, vents, and funnels, the Eastgate Centre mimics this system. It uses 10 percent of the energy of buildings of similar size, and was estimated to save $3.5 million on energy in the first five years of operation.

This building is an example of what came to be known as “biomimicry.” The idea isn’t specific to architecture: In a 1997 book, science writer Janine Benyus argued that in nearly all areas of human endeavor, it was instructive to ask what nature would do in trying to solve problems. From emulating prairies in agriculture to imitating leaves to capture solar energy, the book chronicled efforts underway to capitalize on nature’s genius. Benyus later established two organizations — the Biomimicry Guild and the Biomimicry Institute — to promote these goals.

Until recently, biomimicry has been slow to catch on in architecture. (Architects distinguish between biomorphism — borrowing the forms and aesthetics of nature — and true biomimicry, which looks to underlying principles and may not bear an obvious visual resemblance to the original.) But in the past few years, a number of architects, most inspired by Benyus’s work, have taken a keen interest and are currently working on biomimetic projects.

The global architecture firm HOK has established a collaborative relationship with the Biomimicry Guild, working with its biologists. Plans for a project in India, to design a city, include rooftops that imitate the “drip-tip” structure of a local fig leaf; this structure encourages the rapid runoff of water, essential during monsoon season. British architect Michael Pawlyn is working on a greenhouse in the Sahara desert that desalinates seawater, inspired by the Namib Desert beetle, which manages to survive by catching water droplets from fog on its shell and funneling them to its mouth.

“Some people when they hear about this think it sounds too good to be true,” says Pawlyn. “But the example of the beetle shows that it is possible to harvest water in the desert.”

Biomimicry is not automatically ecofriendly; famous examples from outside architecture include Velcro (modeled after plant burrs) and the airplane. But the practitioners of biomimetic architecture aim to borrow from nature in a way that maximizes efficiency and reduces impact. By the same token, sustainability does not require copying mechanisms found in other species. But, as proponents are fond of pointing out, nature has benefited from a 3.8 billion-year R&D period, and neglecting all that wisdom strikes them as folly.

Another recent development also takes inspiration from nature, but in a different way. Rather than imitating particular organisms, this approach attempts to abide by the general principles of a natural system, giving as much back to the environment as it takes.

In a rain forest, say, water is recycled; energy comes from the sun; waste from one element becomes sustenance for another. The Living Building Challenge, formally launched in November 2006, is an effort to push architects to embrace these tenets. It establishes a high standard for extremely green buildings — even more ambitious than LEED, the certification system run by the US Green Building Council, which has already pushed architects toward greener designs.

A certified “living building” would interact with its surroundings in a benign, even beneficial, way. The specific requirements depend on the kind of project, but buildings can earn credit by supporting urban agriculture, encouraging car-free living, capturing rainfall for water, and using salvaged building materials. There is a “red list” of prohibited toxic materials, and no combustion is permitted to produce the building’s energy — generally energy must be solar, wind, or geothermal. The projects must be net zero energy, but some are trying to generate more energy than they consume — one aspires to produce three times as much as it uses.

In this sense, the buildings aim not only to minimize the negative effects of the built environment, but to convert them into positive influences, just as rain forests, as carbon sinks, are beneficial for the planet.

Brukman likens a living building to a flower: “A flower is rooted in place, but it collects all its own water for use and reuse, it operates efficiently, and it’s beautiful.”

About 70 projects have been registered, meaning that they will attempt to meet the standards (partial certification is also available). No buildings have yet achieved certification, because they must first be operational for a year. Five are currently occupied, and three are expected to be certified by the summer.

One of these is the Omega Center for Sustainable Living, an education center and wastewater treatment facility in upstate New York completed in May 2009. The wastewater moves through treatment zones with various kinds of organisms — bacteria, fungi, algae, plants, fish, and snails. The water is thus effectively treated without the need for chemicals, and it will then be reused for irrigation and toilet water onsite. Another building projected to earn certification is a Seattle office, still in the design phase, where a photovoltaic array will cover the roof and the bathrooms will have composting toilets. The Seattle City Council passed legislation last December creating a pilot project for up to 12 living buildings.

Of course, there are formidable obstacles to the widespread adoption of this approach. Cookie-cutter modern construction is popular because it’s cheap and functional, and the methods are widely familiar. Even the less ambitious LEED certification entails upfront costs that are prohibitive for many buildings. The requirements for living buildings are almost unfathomably demanding in the context of prevailing industry practices, and in the near term, only scattered showpiece buildings are likely to comply.

But architectural trends often grow from just a handful of influential buildings. And as architects and builders start to develop a knowledge base, proponents hope that their gold standard may eventually become simply standard — however quixotic that hope may now appear.

The magic cure

Posted on: May 9th, 2010 by Rebecca Tuhus-Dubrow

You’re not likely to hear about this from your doctor, but fake medical treatment can work amazingly well. For a range of ailments, from pain and nausea to depression and Parkinson’s disease, placebos–whether sugar pills, saline injections, or sham surgery–have often produced results that rival those of standard therapies.

In a health care industry fueled by ever newer and more dazzling cures, this phenomenon is usually seen as background noise, or even as something of an annoyance. For drug companies, the placebo effect can pose an obstacle to profits–if their medications fail to outperform placebos in clinical trials, they won’t get approved by the FDA. Patients who benefit from placebos might understandably wonder if the healing isn’t somehow false, too.

But as evidence of the effect’s power mounts, members of the medical community are increasingly asking an intriguing question: if the placebo effect can help patients, shouldn’t we start putting it to work? In certain ways, placebos are ideal drugs: they typically have no side effects and are essentially free. And in recent years, research has confirmed that they can bring about genuine improvements in a number of conditions. An active conversation is now under way in leading medical journals, as bioethicists and researchers explore how to give people the real benefits of pretend treatment.

In February, an important paper was published in the British medical journal the Lancet, reviewing the discoveries about the placebo effect and cautiously probing its potential for use by doctors. In December, the Michael J. Fox Foundation announced plans for two projects to study the promise of placebo in treating Parkinson’s. Even the federal government has taken an interest, funding relevant research in recent years.

But any attempt to harness the placebo effect immediately runs into thorny ethical and practical dilemmas. To present a dummy pill as real medicine would be, by most standards, to lie. To prescribe one openly, however, would risk undermining the effect. And even if these issues were resolved, the whole idea still might sound a little shady–offering bogus pills or procedures could seem, from the patient’s perspective, hard to distinguish from skimping on care.

“In the last 10 years we’ve made tremendous strides in demonstrating the biological veracity of the placebo effect,” says Ted Kaptchuk, an associate professor at Harvard Medical School and one of the coauthors of the Lancet article. “The frontier is, how do we utilize what is clearly an important phenomenon in a way that’s consistent with patient-practitioner trust, and informed consent?”

There are limits to even the strongest placebo effect. No simulation could set a broken arm, of course, or clear a blocked artery. As a rule, placebos appear to affect symptoms rather than underlying diseases–although sometimes, as in the case of depression or irritable bowel syndrome, there’s no meaningful distinction between the two. Moreover, placebos have often received undue credit for recovery that might have occurred anyway. Indeed, the effect is famously difficult to identify, measure, and even coherently define. There is debate about the magnitude of the response, with some calling it modest at best, and opposing the idea of using placebos clinically.

But according to advocates, there’s enough data for doctors to start thinking of the placebo effect not as the opposite of medicine, but as a tool they can use in an evidence-based, conscientious manner. Broadly speaking, it seems sensible to make every effort to enlist the body’s own ability to heal itself–which is what, at bottom, placebos seem to do. And as researchers examine it more closely, the placebo is having another effect as well: it is revealing a great deal about the subtle and unexpected influences that medical care, as opposed to the medicine itself, has on patients.

Phony treatment is hardly a novel concept in medicine. The word “placebo”–Latin for “I shall please”–has been used in a medical context since at least the late 1700s, referring to inert treatments given to placate patients. Arguably, until the scientific breakthroughs of the 20th century, medical history was little more than one long series of placebos.

But in the postwar era, the profession changed in a way that relegated placebos to the shadows. New medicines began to emerge that actually cured diseases. At the same time, the longstanding paternalism of doctors was yielding to a new ethos that respected the patient’s right to understand and consent to treatment. Gradually, fake pills began to seem less like a benign last resort, and more like a breach of trust. To be sure, some doctors continued to use placebos–typically, “impure” placebos such as vitamins that had no specific effect on the malady in question. But they did so quietly, knowing the practice was frowned upon.

As sugar pills were losing their place in the physician’s arsenal, they assumed a different role: as a neutral placeholder in drug testing. This development is usually traced back to a 1955 paper by Henry Beecher, a Harvard anesthesiologist who argued that the placebo effect was so potent that researchers needed to account for it when testing new drugs. Today, the “gold standard” of medical testing is the randomized clinical trial, in which the new drug must beat a placebo to prove its worth.

In the last decade-plus, however, the accumulating data have sparked a renewed interest in the placebo as a treatment in its own right. Numerous studies have shown that it can trigger verifiable changes in the body. Brain scans have shown that placebo pain relief is not only subjectively experienced, but that in many cases the brain releases its own internal painkillers, known as endogenous opioids. (This placebo effect can even be reversed by the opioid-blocker naloxone.) Another study, published in Science in 2009, found that patients given a topical cream for arm pain showed much less pain-related activity in the spinal cord when told it was a powerful painkiller. A 2009 study found that patients benefited as much from a fake version of a popular spinal surgery as they did from the real one; asthma patients have shown strong responses to a mock inhaler.

Impressed by such findings, some researchers and clinicians hope to import them somehow from the laboratory into the doctor’s office–adding placebo, in a systematic way, to the doctor’s repertoire.

The first conundrum doctors face is how to honor the principle of informed consent, their ethical and legal obligation to fully explain a treatment. Clearly, a doctor would violate this rule by passing a sugar pill off as a real prescription drug, and thinkers have begun to wrestle with this challenge.

One audacious tack would be to tell the truth: to notify patients that they are about to be given a fake pill. The idea sounds absurd, and doctors have long assumed that would ruin the effect. But there’s almost no research on the question, and it may not be as unthinkable as it seems. One reason it could work involves “classical conditioning”–the notion that we can learn on a subconscious level, like Pavlov’s dogs, to biologically respond to certain stimuli. This concept suggests that the brain could automatically react to the placebo in a way that doesn’t require conscious faith in the drug. (The placebo effect has been observed in rodents, which bolsters this theory.) Another reason is that, according to many researchers, the trappings of medical care contribute to the response. So in certain circumstances, doctors could conceivably give a placebo with total transparency, conspiring with patients to trick their own brains–though the lack of research means there is little evidence to support this hypothesis now.

A second approach may be to integrate placebos with real treatments, and to reconsider whether this should still be viewed as fakery. A groundbreaking study published in February in the journal Psychosomatic Medicine found that in one group, psoriasis patients who received a topical cream treatment, alternated with placebo “reinforcements,” did as well as patients who got up to four times more of the active drug. The authors hypothesized that the effect was due primarily to conditioning–the brain learned to associate the cream with healing and sent the same signals even when the cream was inert.

This is just one study, but the implications could be profound. It suggests that in some cases doctors could essentially dilute medications, perhaps dramatically, and get the same results. Robert Ader, the study’s lead author and a psychiatry professor at the University of Rochester, says this approach has the potential to address maladies that operate through the nervous system, such as pain, some autoimmune diseases, and hypertension. Ader envisions a future in which a physician writes a prescription consisting of the drug, the dosage, and the “reinforcement schedule.” Under a reinforcement schedule of 80 percent, the patient would get a bottle of 100 pills, 20 of which were dummies.
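
A back-of-the-envelope sketch, in Python, of how such a prescription might be assembled (purely hypothetical, and assuming the reinforcement schedule simply fixes the share of active pills in the bottle):

```python
import random

def fill_prescription(total_pills=100, reinforcement=0.80, seed=7):
    """Assemble a bottle in which `reinforcement` is the share of active pills.

    Illustrative only: an 80 percent schedule on a 100-pill bottle yields
    80 active pills and 20 placebos, shuffled so they are indistinguishable.
    """
    active = int(round(total_pills * reinforcement))
    bottle = ["active"] * active + ["placebo"] * (total_pills - active)
    random.Random(seed).shuffle(bottle)
    return bottle

bottle = fill_prescription()
print(bottle.count("active"), "active,", bottle.count("placebo"), "placebo")
```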

“You’re talking about many, many, many millions of dollars a year in drug treatment costs,” says Ader. He adds, “If [doctors] can produce approximately the same therapeutic effect with less drug, then it’s obviously safer for the patient, and I can’t believe they wouldn’t want to look into doing this.”

In either scenario–prescribing ersatz medicine alone or cutting active treatment with it–it’s easy to predict the concerns and controversies that would ensue. Might cost-conscious health care providers and insurers be tempted to push placebos for financial reasons? Would patients feel cheated and confused? Whether placebos can be successfully reframed as novel medicines and helpful “reinforcements” remains to be seen.

For other researchers, the data have led to very different territory: They’re looking for ways to elicit the placebo effect while jettisoning the placebo altogether.

Some researchers argue that the real source of a placebo’s effect is the medical care that goes along with it–that the practice of medicine exerts tangible healing influences. This notion has received support from experiments known as “open-hidden” studies. Fabrizio Benedetti, a professor at the University of Turin Medical School, has conducted a number of these, in which patients receive painkiller either unknowingly (they are connected to a machine that delivers it covertly) or in an open fashion (the doctor is present, and announces that relief is imminent). Patients in the “open” group need significantly less of the drug to attain the same outcome. In other words, a big part of the effect comes from the interactions and expectation surrounding the drug. Some call the disparity between the two scenarios the placebo effect. (Others, however, say the word “placebo” should be reserved for inert treatments, and press for different terms, such as “meaning response” or “context effect.”)

“Medicine is intensely meaningful,” says Daniel Moerman, a professor emeritus of anthropology at the University of Michigan at Dearborn who coined the phrase “meaning response.” “It’s this highly stylized, highly ritualized thing.” He urges us to “forget about the stupid placebo and start looking at the system of meaning involved.”

A recent study by Harvard’s Kaptchuk suggests the importance of ritual and the doctor-patient relationship. A 2008 paper published in the British Medical Journal described experiments conducted on patients with irritable bowel syndrome. Two groups underwent sham acupuncture, while a third remained on a waiting list. The patients receiving the sham treatment were divided into two subgroups, one of which was treated in a friendly, empathetic way and another with whom the doctors were businesslike. None of the three groups had received “real” treatment, yet they reported sharply different results. After three weeks, 28 percent of patients on the waiting list reported “adequate relief,” compared with 44 percent in the group treated impersonally, and fully 62 percent in the group with caring doctors. This last figure is comparable to rates of improvement from a drug now commonly taken for the illness, without the drug’s potentially severe side effects.

“It’s amazing,” says Kaptchuk. “Connecting with the patient, rapport, empathy . . . that few extra minutes is not just icing on the cake. It has biology.”

It may be, then, that the simplest and least ethically hazardous way to capitalize on the placebo effect is to acknowledge that medicine isn’t just a set of approved treatments–it’s also a ritual, with symbolism and meaning that are key to its efficacy. At its best, that ritual spurs positive expectations, sparks associations with past healing experiences, and eases distress in ways that can alleviate suffering. These meanings, researchers say, are what the placebo effect is really about.

If this is true, then the takeaway is not necessarily that we should be dispensing more fake pills–it’s that we should think less about any pill and more about the context in which it’s given. Whether we call it the placebo effect or use new terms, the research in this field could start to put a measurable healing value on doctors’ time and even demeanor, rather than just on procedures and pills. And that could change medicine in a way that few blockbuster drugs ever could.