
Wednesday, May 22, 2013

Rule 14-1B: "Science" and "Tradition" in Golf

Yesterday, the United States Golf Association (USGA) announced a rule change. Coming into effect in 2016, Rule 14-1B will prohibit the use of so-called "anchored strokes" in sanctioned play. Rather than try to describe what "anchoring" is, here's a helpful graphic provided by the USGA:

Source: http://www.usga.org/uploadedImages/USGAHome/rules/UNDERSTANDING%20ANCHORED%20STROKES.jpg
As a strategy for putting, "anchoring" has become increasingly popular—and controversial—over the last decade or so. According to ESPN, four out of the last six winners in major championships used "anchored strokes," a rate of success that has fueled speculation about what (if any) competitive advantage such a stroke might confer.

I'm not a golf fan, and I don't have an opinion one way or the other. What I'm interested in is the way this issue has been both contested within the golf community and portrayed in the media. Specifically, I was struck by how the old clash between "science" and "tradition" is playing out in some interesting ways. Here goes:

A central issue in debates over Rule 14-1B is, in the words of USGA President Glen Nager, "whether those who anchor play the same game" as those who don't. Nager's claim rested on the notion of the "traditional stroke" or "traditional free swing," which he and the USGA aim to defend against the rising tide of "anchoring."


Paul Azinger, a golf analyst for ESPN, challenged this claim from two directions. First, he thinks the "same game" argument is a specious one. "Who plays the same game as Tiger Woods?" he asked. On this score, the distinction between "anchoring" and an advantage like driving distance just doesn't hold up. Second, he thinks Rule 14-1B is an attack on success, and that appeals to "tradition" or the "spirit of the game" are just window-dressing.

Now, one could imagine the USGA countering with scientific evidence: physiological tests about caloric efficiency, say, or anatomical studies of joint wear, or simple physical demonstrations. As one commenter on ESPN put it: "Physics 101: Levers are much easier to control than pendulums." Or what about statistics? Is there evidence that "anchored" putters actually score better?

But the USGA has declined to conduct any experiments or run any regressions. Indeed, they've rejected any appeals to scientific or statistical studies. The 39-page document justifying Rule 14-1B makes the USGA's position on this issue crystal-clear:

Although we understand that people often look for statistical data when engaged in a factual and policy debate, we believe that these assertions are misplaced in the present context and reflect a misunderstanding of the rationale for the Rule and the principles on which the Rules of Golf are based.
Those principles, the report concludes, rest "on considerations such as tradition, experience and judgment, not on science or statistics." The prohibition on "anchoring" isn't about whether or not it actually confers an advantage (on one player or many, in a career or a single putt), but about the fact that it leads to "reducing variables and alleviating inherent obstacles that otherwise exist in the traditional free swinging method of stroke."

On one level, this makes total sense. As the USGA points out, they never conducted scientific studies to determine the possible advantage of throwing the ball instead of hitting it with a club. And we all recognize that the rules of a game like golf draw arbitrary distinctions, adhered to out of a sense of tradition.

On another level, though, there are interesting exceptions to the appeal to tradition in the face of science—or, to be more specific, technology. With regard to the material, shape, and size of clubs and balls, the USGA engages in a great deal of technical specificity, including the stipulation of exact protocols for testing things like moment of inertia and initial velocity.

Experimental Set-up for Measuring Moment of Inertia
(http://www.usga.org/equipment/testing/protocols/Procedure-For-Measuring-The-Moment-Of-Inertia-of-Golf-Clubheads/)
Remember, clubs like the one pictured above are called "woods" for a reason. And yet, the USGA has allowed driving clubs to go metal (or carbon fiber) – within carefully defined and scientifically tested limits of size, weight, and flexibility. That's part of what explains the fact that, in 1980, no one hit the ball more than 280 yards; today, 90% of male professionals do.

So, science and statistics are used to police the advantage conferred by "technology" (the construction of implements necessary for the game) but are rejected outright when it comes to "method" (the use to which those implements are put). Putting, to put it another way, is about the putt, not the putter. 

Does this division make sense? It might help to look at cases in other contexts to see how such matters have been adjudicated elsewhere. In baseball, for example, metal bats are prohibited in the Major Leagues—not (only) because wood is traditional, but because the "trampoline effect" of metal means balls travel faster off the bat and endanger fielders (and especially pitchers). 

Another example of the boundary between "science" and "tradition" is so-called "card counting" or "advantage gambling" in card games. While technically legal, many casinos find ways to discourage players from gaining an advantage through the use of probability theory. There's some sense that the shotgun approach of card-counting is—even when dramatized—somehow not "the same game" as the blackjack the rest of us play. 

The same goes for controversy surrounding the statistical approach to baseball managing popularized as "Moneyball" (and written up on this blog here and here). From Billy Beane to "anchored putting," science and technology serve somewhat tenuous roles in the evolution and policing of some of our oldest pastimes.

Sunday, May 12, 2013

Cold War Science / Cold War Synthesis

BOOK REVIEW: Audra Wolfe, Competing with the Soviets: Science, Technology, and the State in the Cold War (Johns Hopkins University Press, 2013).

Back in 2011, AmericanScience interviewed writer and editor Audra Wolfe about her work cataloging the papers of American geneticist Bentley Glass. When asked whether the Glass papers indicated that "the 'story' we have about Cold War science is wrong," Wolfe suggested that we'd have to get back to her in a year or so.

Well, it seems that we now have a chance to learn Wolfe's take on Cold War science – not from her research on Bentley Glass, which is ongoing, but from her book Competing with the Soviets, a short, textbook-style history of science and technology in the United States during the Cold War. The book examines the role that science and scientists played in maintaining state power, and how Cold War concerns shaped individuals, institutions, funding streams and research agendas.

The book hits on many of the stories that we've come to associate with Cold War science: massive technoscientific achievements like the atomic bomb and the Apollo missions; the engagement of scientists in politics (and its outcomes) as illustrated in the Oppenheimer security hearings and the Nuclear Test Ban debates; and moments of astonishing technological hubris including the atomic-earthmoving proposal Project Plowshare (with which the book opens) and Reagan's Strategic Defense Initiative. Wolfe also dives into the history of the social sciences, considering for example the role of American economists and economic ideas in U.S. efforts to "win the hearts and minds" of those living in the developing world, and psychologists' misguided efforts to address entrenched racism at home.

Wolfe displays a great skill for balancing sweeping summary and illuminating detail. As a case in point, her excellent discussion of the academic-military-industrial complex presents a few key illustrations of the phenomenon, one of which is the SAGE air-defense system at MIT. In just two paragraphs she gives a sense of the immense scale and importance of this enterprise: an $8 billion budget, compared with the Manhattan Project's $2 billion; the use of 25 percent of IBM's workforce; the eventual employment of half of all the computer programmers in the country; and so on. One gets the point – and how.

Of course there are stories and topics missing here, but Wolfe readily acknowledges this at the outset. So, for example, you won’t find biomedicine discussed, though we know well from the work of Angela Creager and others that this was an area in which Cold War politics had significant effects. But in its selectivity the book achieves a more important goal, which is concision and readability. The aim – following that of the series in which it appears – is to be an introductory text that offers an engaging and historiographically informed overview, and in this Wolfe succeeds admirably.

I only regret that I didn’t read the book until after I’d finished my first term teaching the history of twentieth-century science and technology, rather than before. Its synthesis of the large and complex – and sometimes contradictory – literature on the Cold War in our field of course makes it an ideal book for students, and for scholars who aren’t historians of science who’d like an introduction to the subject. But it also prompted me to think about the ways in which I’d organized my teaching of Cold War science and technology, and how I might do this in the future. In other words, I imagine that Competing with the Soviets will be just as helpful to those of us who think we already know quite a bit about science and technology in the Cold War.

Competing with the Soviets doesn’t really attempt to give a new account of this history, so if you are looking for a radical revision you won’t find it here. Wolfe generally deals with unresolved historiographical debates by acknowledging that historians disagree. She does bring her own scholarly perspective to bear, however, most obviously in her decisions about what aspects of this history to include and how to tie them together. To take one example, I found her inclusion of biotechnology, a story that is rarely nested so directly into that of Cold War politics, a thought-provoking choice.

There’s been a lot of talk recently in HSTM of whether and how we should rethink our current tendency towards (some would say pathology of) in-depth case studies and the sometimes narrow vision of history that results. The most recent Osiris explores alternative approaches, in particular what Rob Kohler and Kathryn Olesko call "mid-picture" history, which relies on case studies but uses these to explore ideas and concepts that cut across historical sub-disciplines. Wolfe's book is the more traditional alternative to the case study: a synthetic overview. And it is a reminder of how valuable a clear, well-researched synthesis -- one sophisticated, holistic take on all those little case studies -- can be.

Friday, May 10, 2013

Wild at Heart: Finding Evolutionary Narratives in Evangelical Christianity

We asked Myrna Perez, whose work focuses on the public role of evolutionary biology during the last quarter of the twentieth century, to reflect on that topic in a post. She's currently writing a dissertation about Stephen Jay Gould; you can find out more about her work here. 

What is so compelling about returning to our evolutionary origins? Why do we think that getting back to an earlier period in human history will make us healthier, happier and more fulfilled? In Wednesday's post, Lukas explored the appeal and historic origins of “paleo-diets” in order to make the intriguing suggestion that our attraction to these evolutionary narratives reveals a kind of ambivalent anxiety about modernity. 

When I think of these “cave-man diets” I’m struck by another aspect of this evolutionary origin story: namely, what they imply about human sex difference. The image of the cave-man offers a certain type of uncivilized, rugged masculinity – one that has been hemmed in by the advent of agriculture, domesticity and the trappings of urbanized, modern life. 

Of course, it’s not only men who follow paleo-diets. But it would be hard to deny the slant toward men for this protein-focused, weight-lifting, and wild-man-creating diet and exercise regime. In the logic of the “cave-man diet” men reach their fullest potential by shedding the cloak of civilization in order to return to a purer, more natural state of being.

Now, I’m fairly sure that there isn’t space in a blog post to get to the bottom of the relationship between evolutionary theory and modern gender dynamics. But what I would like to explore is the pervasiveness of the gender and sexuality models coming out of much of evolutionary psychology for the past several decades, by looking in one surprising place: the popular sub-culture of American evangelical Christianity.

Evolutionary psychology is most often understood as opposing (or at least the opposite of) religion in American culture. After all, it seeks to understand contemporary human social behavior as a collection of evolved adaptations, with no reference to divine agency or supernaturally-gifted morality. As a research agenda, it has suggested powerful explanatory perspectives for much of human sociality. 

And its popular appeal is documented by a cursory glance at the science section of any mainstream news outlet—here, and here. These popular articles often suggest that evolutionary psychology has unlocked the key to human sex difference, mate choice and sexuality—men have a hard time with commitment because sperm require little investment and must be spread around. The female orgasm is an adaptation to secure fertilization. And so on. 

But since its origins in the 1970s, evolutionary psychology has been heavily criticized for offering what many see as deterministic models for human behavior—particularly these pronouncements on human sex difference and gender roles.

It’s clear that evolutionary explanations have captured the attention of scientific, humanistic, and popular discussions of human sex and gender. What does this have to do with American evangelicals? Intrigued by a surprising set of parallel arguments and imagery between the “cave-man” of evolutionary psychology and a rugged wild-man in recent versions of evangelical pop-theology, I wondered if there was something more than a shared set of cultural images. Turns out there are some interesting connections. 

But first, what are these books?

One very successful and powerful articulation of this growing masculinity narrative is found in the writings and ministry of John Eldredge. His 2001 book Wild at Heart: Discovering the Secrets of a Man’s Soul spawned a booming cottage industry of other books (including some for women), bible studies, and camp retreats (here's an advertisement) that distill a version of rugged-man masculinity for evangelical popular culture.


The language in Eldredge’s writing is strikingly reminiscent of the late nineteenth-century wilderness cult that Lukas introduced in his post. Vigorous health and true human nature can best be found by going out into the wild; Eldredge argues the “wild” is at the core of true masculinity—“adventure, with all its requisite danger and wildness, is a deeply spiritual longing written deep into the soul of a man. The masculine heart needs a place where nothing is prefabricated, modular, nonfat, zip lock, franchised, on-line, microwavable.” 

These feminine and feminizing elements of modern life are what prevent men from being who they are truly meant to be. It is only fear, Eldredge claims, that keeps men at home: “Deep in a man’s heart are some fundamental questions that simply cannot be answered at a kitchen table… It is fear that keeps a man at home where things are neat and tidy and under control.”

As with the paleo-diets, as with the wilderness man, there is a deep sense that somehow civilization has violated the essential nature of a man. All throughout Eldredge’s description, there is the distinct implication that men have been feminized by the domestic space—a space in which they cannot truly be themselves—“the core of a man’s heart is undomesticated and that is good. ‘I am not alive in the office,’ as one North Face ad has it…. Their conclusion? Never stop exploring.” The encouragement to meet rugged wilderness from the outdoor-sport company The North Face is a touchstone for this return-to-nature ideal.

Now, books such as Wild at Heart do not suggest that men are most fully themselves out in the woods because of their evolutionary origins, but rather because they are created by God to want these rugged adventures. In this view, outdoor sports, nature walks, and following rivers to their end are ways of fulfilling a man’s ultimate God-given identity and purpose. Eldredge, for instance, argues that men are “wild at heart” because of where Adam was created: “Eve was created within the lush beauty of Eden’s garden. But Adam… was created outside the Garden, in the wilderness… Man was born in the outback, from the untamed part of creation… And ever since then boys have never been at home indoors, and men have had an insatiable longing to explore.”

Eldredge’s drawing on Genesis rather than Darwin suggests that all this may actually have very little to do with evolutionary psychology. After all, it may come as no shock that evangelical Christianity expresses a hetero-normative and binary view of gender identity. However, at least on a superficial level, there are intriguing parallels in the appeal to “the wild” and “wilderness”—and I am excited by the possibilities of exploring further the similarities in “cave-man” and “wild-man” imagery in books like Eldredge’s and in their evolutionary alternatives.

Additionally, it seems very possible that Eldredge drew upon the adolescent development model advocated in Michael Gurian’s 1999 book A Fine Young Man in developing his view of Christian masculinity.


Not only does Gurian argue, in the same fashion as Eldredge, that feminist cultural elements have eroded the essential nature of young men; both also fashion a masculine ideal, described in the former as a “warrior-artist” and in the latter as a “warrior-poet.”

An even more compelling reason to continue exploring this connection comes from a distinct irony—that is, American evangelicals have not been known as the champions of Darwinian evolution. A recent Gallup poll reports that forty-six percent of Americans believe in creationism over any type of evolution. This fact alone should make us wonder how and in what way evolutionary narratives have been so appealing in American society over the last few decades. 

Martha McCaughey, a feminist scholar and cultural anthropologist who has explored the appeal and influence of the “cave-man,” suggests this form of masculinity largely owes its success to the strength and authority of the Darwinian evolutionary narrative in contemporary American society. In her view, evolution has replaced our religion, and so men find their identity as Darwinian cave-men. Perhaps evolution has won out in academic circles, but this hardly seems the case for most of the country. Is it possible that evolutionary psychology has reached unintended audiences through the Christian ministries of authors like John Eldredge?

So to return to the initial set of questions—what is so compelling about our pre-historic origins that they have life in gay-rights activist Dan Savage’s arguments against monogamy, as well as evangelical Christian views of sex difference? 

Perhaps the answer is blindingly simple: an essentialized view of human nature, whether inherited from an evolutionary past, or given by a Creator in Genesis, is comforting, powerful and appealing. It sets aside the process of framing and constructing gender—making sex difference natural and straightforward. 

Nevertheless, for scholars interested in historicizing evolutionary psychology, there is much to be gained by looking into the reaches, influences, and permutations of this current expression of the fundamentals of human nature.

Wednesday, May 8, 2013

The Curious History of the Paleo-Diet, and its Relationship to Science & Modernity

Joseph Knowles emerging from the woods in his "Wilderness Garb," Oct. 4th, 1913

Over the past few years, I've been following the career of a new fad called the "paleo-diet," which advises us to adopt the eating habits of the Pleistocene. I first became aware of it from a New York Times article featuring John Durant, a 20-something office worker turned fitness guru from Manhattan who tries to live as our ancestors did before the dawn of agriculture. On his website, Durant explains that when he started working at his first job out of college, he began to notice that he often felt tired, anxious, and stressed out. He also started to put on weight and noticed that his complexion was becoming uneven.

On the lookout for an explanation for what might be going on with his body, Durant came across the UC Irvine economist Art de Vany, who had developed a so-called evolutionary fitness regimen. Durant decided to give it a try, and began to eat a diet that is high in fat and protein, as well as fresh fruits and vegetables, but completely avoids grains and all processed foods. Moreover, Durant began to fast for long periods in between meals to simulate the lean times that hunter-gatherers often had to endure. Indeed, some advocates of the paleo-diet even go so far as to engage in strenuous exercise before breaking a fast, reasoning that early hominids had to hunt down their prey before consuming a large dose of protein.

There's been a lot of chatter about the relative merits and shortcomings of the paleo-diet recently (including an advice column at the Huffington Post and a hilarious review of Marlene Zuk's book Paleofantasy: What Evolution Really Tells Us About Sex, Diet and How We Live on Salon). I'm not going to evaluate any of the substantive claims made either for or against this lifestyle.  Instead, I want to give a bit of historical context for these discussions from the late 19th and early 20th century (see the image above!).

Most people who have written about the paleo-diet cite a 1985 article in the New England Journal of Medicine entitled "Paleolithic Nutrition — A Consideration of Its Nature and Current Implications" as the point of origin for the fad. In what follows, I'll try to push the narrative considerably further back into recent history. But the NEJM article is worth taking seriously because it makes an important point about not only this fad diet, but indeed every fad diet: they all claim to be grounded in science. What is unique and special about the paleo-diet is that it draws on an unusual branch of science, namely evolutionary theory.

On his website, Art de Vany claims that our evolutionary history did not prepare humans for a modern lifestyle. To see why one might think this, it is worth taking a detour and listening to an excellent TED Talk that Daniel Dennett gave several years ago. In his talk, Dennett used a piece of chocolate cake to explain Darwin's curious form of "reverse reasoning." It's not true that we like the chocolate cake because it is sweet, Dennett explains. Rather, it is sweet because we like it.

There is nothing about cake that is inherently sweet. You can stare at a sugar molecule for as long as you want, and you will never understand why it tastes sweet. To understand that, you have to know something about how our brains are wired. And this wiring, Dennett explains, is a product of evolution. Our brains evolved to give us a psychological reward--the taste of sweetness--whenever we eat something that contains sugar, which, of course, is rich in calories. Something similar holds true for fat, salt, and a number of other foodstuffs.

The claim made by proponents of the paleo-diet is that this was a good thing during the Pleistocene, because humans did not have access to a lot of calorie-rich foods. To survive and have offspring, you had to consume all the calories available. But in today's world of industrial agriculture and high-fructose corn syrup, that is no longer the case. Differently put: there was no such thing as chocolate cake during the Pleistocene. Probably the sweetest thing anyone would have eaten at that time was a carrot. The chocolate cake is what the ethologist Niko Tinbergen called a super-normal stimulus -- what my own behavioral ecology teacher called "the Dolly Parton effect" -- something that is way off the scale of what our bodies have evolved to cope with.

Now, advising people to avoid or at least moderate the consumption of processed foods that are high in salt, fat, and sugar is not in the least bit controversial. I am willing to bet that any conventional nutritionist would be on board with the idea that just because something tastes good does not mean it is good for you, and that we should be careful about simply giving in to all of our cravings. But proponents of the paleo-diet want to go several steps further. Beyond advocating that we avoid foods packed with super-normal stimuli, they also counsel us to avoid dairy, grains, and cereals; indeed, anything that was unavailable prior to the development of agriculture. In so doing, they add an extra ingredient to the evolutionary reverse argument, namely an aversion to modernity.

To see why this is the case, it is useful to extend our historical vision beyond modern-day evolutionists such as Dennett and recent proponents of the paleo-diet like Durant and de Vany. In particular, I want to use the example of Joseph Knowles (pictured above) to show that the paleo-diet is rooted in a much older tradition of what constitutes healthy living.

Joseph Knowles was an artist and illustrator who became famous almost overnight for what he described as an "experiment" that consisted of trying to survive for two months alone in the Maine wilderness. His fifteen minutes of fame began when reporters from the Boston Post photographed him gingerly disrobing, discarding his knife and other accoutrements of modern life, demonstrating his ability to make fire by rubbing pieces of wood against one another, and entering the woods, all on the morning of August 10, 1913.

Joseph Knowles demonstrating his wilderness survival skills just before heading off into the forest, August 10th, 1913.

During the two months he allegedly spent in the wilderness, Knowles periodically sent updates about his adventures to the Post, written in charcoal on a piece of tree bark. Among other things, he recounted spending the first few days subsisting on berries before learning how to fish for trout and hunt partridge and deer. He also wove strips of tree bark together to create a kind of textile that he could fashion into clothing and shoes. Then, on August 24th, about two weeks after he entered the forest, a front page story in the Post described how Knowles had successfully killed a bear using nothing but his wits and a club.

When he emerged from the wilderness wearing the bearskin on October 4th, Knowles received a hero's welcome. He was cheered at every stop along the way from Maine down to Boston, and huge crowds gathered to see him arrive at North Station before he gave a rousing speech about his experiences on the Boston Common. In the months that followed, Knowles wrote a best-selling book about his adventures entitled Alone in the Wilderness and received top billing on the Vaudeville circuit.

There's lots to be said about Joseph Knowles, including the fact that a rival newspaper published evidence to the effect that he had spent most of his time in the "wilderness" drinking beer in a friend's cabin. But I want to focus on one piece of the story in particular. One of the first things Knowles did after arriving in Boston was to pay a visit to Dudley Allen Sargent, the Director of the Hemenway Gymnasium at Harvard University.

Dudley Sargent examines Joseph Knowles at Harvard's Hemenway Gymnasium.

In his autobiographical account of the saga, Knowles quoted Sargent as attesting to the fact that his time in the wilderness had left him in better shape than any of the college's "football men," reporting, among other things, that "With his legs alone he lifted more than a thousand pounds." Sargent also noted a remarkable improvement in Knowles' complexion: "Subjected to the action and the stimulus of the elements, Mr. Knowles' skin has [come to serve] him as an overcoat, because it is so healthful that its pores close and shield him from drafts and sudden chills." Thus, Sargent declared the "experiment" a complete success. "Forced to eat roots and bark at times, and to get whatever he could eat at irregular hours, his digestion is perfect, his health superb."

Along with this testimonial, Knowles also included a chart comparing some of his vital statistics from before and after the time that he spent in the wilderness. Not only had he lost more than ten pounds, but, remarkably, he had grown slightly taller as well. Moreover, his muscles all increased in size and in girth, and his lung capacity shot up from 245 cubic inches to an astonishing 290 cubic inches!

Joseph Knowles' vital statistics before and after the wilderness "experiment."

As historians of science and environmental historians well know, Joseph Knowles was part of a larger cultural movement that Roderick Nash's classic account describes as a kind of "wilderness cult." Other notable examples of this movement's popularity include the founding of the Boone and Crockett Club in 1887, the Sierra Club in 1892, and the Boy Scouts of America in 1910, as well as Theodore Roosevelt's fierce advocacy on behalf of wilderness preserves such as Yellowstone National Park as a place in which white, urban elites could experience what he called the "strenuous life."

It is no surprise that the wilderness cult took off when it did. At a time in which America was becoming increasingly urban, industrial, and ethnically diverse, many worried that rather than heading for increasing prosperity, the country was inevitably on the decline. Thus, it seemed natural to harken back to a simpler and more authentic past, one in which people's communion with nature left them healthier in body, mind, and soul. It was, after all, during this period that the historian Frederick Jackson Turner used a podium at the 1893 Chicago World's Fair--a celebration devoted to industrial progress in a city that did more than any other to conquer the west--as a platform from which to mourn the official closing of the nation's western frontier. And it was also during this period that Madison Grant, director of the Bronx Zoo and Trustee of the American Museum of Natural History, published his eugenic masterpiece, The Passing of the Great Race. Envisioning a dark future indeed, Grant counseled his readers to eschew the comforts and luxuries of modern civilization and allow the Darwinian struggle to continue tending the health of the gene pool.

Few things sum up these sentiments as well as the first edition of Ernest Seton's Handbook for the Boy Scouts of America. "We have lived to see an unfortunate change," he lamented on the very first page of the Handbook. "Partly through the growth of immense cities," and "[p]artly through the decay of small farming," he continued, America entered a period that Seton and so many others described using the word "Degeneracy."  Thus, it was to "combat a system that has turned such a large proportion of our robust, manly, self-reliant boyhood into a lot of flat-chested cigarette smokers, with shaky nerves and doubtful vitality" that he brought scouting to America. Mindful of the fact that "Consumption" had become "the white man's plague," he concluded, "I should like to lead this whole nation into the way of living outdoors for at least a month each year."

In closing, let me forestall a possible misinterpretation. Of course I do not mean to imply that Durant and other advocates of the paleo-diet are all eugenicists at heart. That is certainly not the lesson I hope people take away from the history that I have tried to present. But I do think that a few striking and salient parallels present themselves.

Perhaps it is a cliche to say that we are living through a time of enormous change, just as people during the American Gilded Age and Progressive Era did, but that does not make it any less true. One thing I would like to suggest we are seeing, not just in the paleo-diet but certainly there as well, is a kind of aversion to modernity. People today, like people a hundred years ago, are looking to the past in search of a simpler, more authentic, and, importantly, more healthful way of life.

But what is so curious about all of this is that so many of these people--from Joseph Knowles to Art de Vany--are also looking to science, a quintessentially modern institution if there ever was one, both for advice on how to get there and for the authority to argue that an earlier period in human history really was healthier and better adapted to our physical, spiritual, and emotional needs.

Saturday, May 4, 2013

The High Quality Research Act: Searching for Ways Beyond "Politicization"

This post is a continuation of our ongoing discussion here at American Science of Rep. Lamar Smith's High Quality Research Act (HQRA), which would cut the National Science Foundation's funding to certain kinds of research, especially in the social sciences.

It was only a matter of time before someone dropped the p-word, "politicization," in discussions of the HQRA. It's a word that haunts these kinds of topics. The first appearance of the word in this context that I noticed was in this post by Michael McAuliff and Ryan Grim at the Huffington Post.


I want to question and probe their discussion.

McAuliff and Grim use the p-word in their first paragraph when they write that the HQRA "would in effect politicize decisions made by the National Science Foundation." They never define the term. They then go on to quote approvingly from a letter that Rep. Eddie Bernice Johnson (D-Texas) wrote to Lamar Smith: "This [the HQRA] is the first step on a path that would destroy the merit-based review process at NSF and intrudes political pressure into what is widely regarded as the most effective and creative process for awarding research funds in the world." They summarize Johnson's letter as claiming that the HQRA was a "dangerous politicization of one of the most successful scientific research promoters in history." Politicization isn't Johnson's word; it's theirs, though Johnson does use close approximations like "political intrusion" and "political pressure."

Johnson also lays out this beaut of an argument, which I pull from her letter: The "NSF's peer review process" has been "the gold standard for how scientific proposals should be judged and funded." And "in this context, the term 'peer' is not simply a fellow citizen as we encounter on a courtroom jury. It means very specifically another scientist with expertise in at least some aspect of the science being proposed." Therefore: "Politicians, even a distinguished Chairman of the Committee on Science, Space, and Technology, cannot be 'peers' in any meaningful sense."

Democracy Be Damned!!!!

What is going on here?

As many in science and technology studies have argued, the rhetoric of politicization assumes that science is somehow non- or a-political. It is a favored rhetorical strategy of many popular science writers, especially progressives criticizing the right, including academics, like Naomi Oreskes, and science journalists, like Chris Mooney. There are lots of things wrong with politicization as an argumentative ploy. First off, it's too simple. It's not an accurate picture of reality. Also, it typically leads to a too easy polarization of politics: there are good guys, and there are bad guys, and we know who they are. And frequently it ends up with choir-preaching. It's no surprise that Mooney went from talking about the right-wing politicization of science in his first book to arguing that Republicans have bad brains in his most recent one. Forget the Socratic injunction that the wise person knows that she doesn't know. It's the other guys who are fools. The most vocal critic of this kind of thinking in science and technology studies has been Sheila Jasanoff. She doesn't think politicization, especially with its frequently built-in demonization, is any place to begin conversation. And she's right.

Politics, politics, politics. So many different kinds of politics. So many different kinds of politics that the word itself begins to melt. A basic tenet—perhaps even a dogma—of science and technology studies is that science is always political, but what does it mean to say this? Well, in their 1985 book, Leviathan and the Air-Pump, Steven Shapin and Simon Schaffer described how the earliest debates about experimental science—in their story, the debates between Robert Boyle and Thomas Hobbes—were about the nature of polities and politics, with Boyle arguing for a quasi-democratic (though always selective) community of peers and Hobbes holding out for monarchy. In other words, the founding of science was itself political. Others have shown how the Cold War shaped science; how academic fads, such as the current craze for the three O's (nano-info-bio), influence project funding; how scientists strive to gain legitimacy and credibility and then use their authority for political ends; and how peer review is much less ideal and much more political and fraught than defenders make it out to be, just to name a few such arguments. The consensus was established a long time ago: there's no use in trying to separate science from politics, even rhetorically, and, moreover, attempts to make that separation are themselves political. Science, like everything else, is human and screwed up.

Also, we shouldn't forget in all of this that "politics" has long been a dirty word in the United States, extending back from recent rampant discourse about "partisanship" through pop works, like E. J. Dionne's 1991 book, Why Americans Hate Politics, all the way to the founding of the nation, with the Federalists fretting endlessly over factions, parties, and their ill consequences. (I'll just mention without going into it that some thinkers, like him and her, have argued for years that this attempt to suppress politics is exactly the wrong tack; that, instead, we should admit that politics are omnipresent and learn to deal with them fruitfully and productively.)

This leads to a further question. Given that science is always political, what kind of politics do we want to use to guide it? Here, as I argued in my last post, I think science and technology studies have largely fallen down. One response from many corners would likely be that we can't give a general answer to this question. The appropriate form of politics will have to fit the context and the situation. But I would like to hear something more concrete than all that. Smith, as an elected official, is putting forward one version of a democratic politics: the NSF, a federal agency, should be accountable to Congress, the federal body of democratically-elected representatives. It's easy, however, to argue, with some force, that our electoral system is so broken that it is no longer democratic. Scott, who commented on my last post and who I hope will say more, criticized Smith as anti-democratic but drew on the trusty table metaphor to argue, "I would love to include him and all others at a table for fair, open, honest discussion and consensus building." This would be another model, having open, public discussions about how to set research priorities. Yet, can we imagine the NSF as a site of direct democracy? The science funding table? I can't; nor do I want to imagine such a thing, I think (though I could be convinced otherwise). So, what then? Rep. Smith has given people an excellent opportunity to put forward alternative frameworks for science governance.

I think the final question is this: what can people working in science and technology studies do to get their arguments "out there"? If we artificially date the idea that science is always political to the 1985 publication of Shapin and Schaffer's Leviathan and the Air-Pump, then the argument has been around for nearly thirty years to little avail (outside academic discussions). Pop writers, such as McAuliff and Grim, Oreskes, and Mooney, are still falling back on the too easy, too simple trope of politicization. What is to be done?

Thursday, May 2, 2013

Analogizing Human Genes

We asked Andrew Hogan, a historian of science and medicine whose work focuses on the observational approaches of postwar human genetics and biomedicine, what the sort of questions he asks might reveal about contemporary science. He sent us the following guest post; you can find out more about his work here.

Excellent coverage of the BRCA gene patenting case by Lukas on this blog (and elsewhere) over the past few months has recently gotten me thinking about the ways that various analogies shape the arguments and decisions made by lawyers, jurists, and government officials. Comparisons to more tangible objects seem to be particularly influential in cases that consider scientific concepts and entities, like genes, which cannot be directly seen. 

After the case Association for Molecular Pathology v. Myriad Genetics, Inc. was heard before the US Supreme Court last month, I read through the oral arguments, previous Court decisions for this case, and the 2001 US Patent and Trademark Office (USPTO) justification for allowing gene patents. I wanted to get a sense of which gene analogies seemed to be most influential, and how this shaped the framing of the BRCA case.


In his recent post, Lukas did an excellent job of probing the implications of framing genes as molecules versus information. Today, I want to examine two related sets of analogies that may also shape the outcome of this case: those that equate human genes with (1) other chemicals that have been isolated and/or purified from the human body, like the hormone adrenaline, and (2) macroscopic anatomical entities, such as a kidney, liver, or tree leaf.

In its 2001 ruling, the USPTO pointed back to Federal Court cases from earlier in the 20th century, which upheld patents on chemicals that had been taken from the human body and put to new uses. The jurists in these cases had found that the hormones adrenaline and prostaglandin were substantially different when isolated and purified than when found in the body.

Lawyers for Myriad offered a similar argument about the isolation of human genes, suggesting that DNA segments making up the BRCA genes, when excised from their respective human chromosomes, were chemically distinct molecules (in this case, not just purified, but different). As Myriad sees it, when they were first isolated from the body, the BRCA genes were entirely new compositions of matter, directly resulting from human ingenuity, and were thus patentable. 

Source: http://upload.wikimedia.org/wikipedia/commons/e/e1/Protein_BRCA1_PDB_1jm7.png
To briefly review some of what was covered in Lukas’ post: US Circuit Court Judge Alan Lourie agreed with this characterization of human genes in his opinion upholding the BRCA gene patents. Lourie argued that the breaking of chemical bonds necessary to isolate the BRCA genes from their normal position among the human chromosomes made them chemically distinct, and thus patentable, molecules. In his decision, Judge Lourie set aside an argument previously made by US District Court Judge Robert Sweet, who suggested that DNA’s role in embodying biological information made it ineligible for patenting. Judge Lourie was not persuaded by the significance of analogies equating DNA with information, and instead found that isolated DNA was like any other molecule that had been chemically altered from its natural form.

As arguments got underway on April 15 at the US Supreme Court, however, it quickly became clear that many of the Justices were more taken with anatomical analogies for human genes than chemical comparisons. Rather than chemical bonds being broken in order to isolate the BRCA genes from their natural location, Justices Sotomayor, Breyer, and Roberts spoke of “snipping” genes out of the human body. Here's an example:


From here, the Justices began to equate the isolation of human genes with the removal of more discrete and tangible body parts: chromosomes and organs. Where Judge Lourie was willing to agree that human genes were chemically distinct when removed from the body, many of the Supreme Court Justices were hesitant to accept that genes, when analogized to other parts of the human anatomy, were in fact substantially different entities outside of the body.

We often speak of human genes and the human genome as discrete entities. Indeed, as Aryn Martin has suggested in her work on genetic chimeras, courts often think in terms of one-to-one correlations between individuals and their unique genetic ‘fingerprint’. Such analogies, it seems to me, make the genome seem more like a discrete part of the human anatomy – like a liver – than a widely dispersed chemical – such as a hormone. And, while almost no one would accept that the liver could be patented just because someone “snipped” it out of the human body, the purification of a hormone is more of a borderline case, having previously received patent protection.

Now, to step back for a minute, it seems to me that many people oppose gene patenting on the grounds that allowing monopolies over DNA sequences, as they exist in nature, disrupts research. I find it interesting that the US Circuit Court and Supreme Court have largely set aside this informational analogy, of DNA as biological code, in their deliberations about the BRCA gene case. Instead, they seem to be more interested in analogies that help them to probe what sort of material isolated DNA truly is: chemical, anatomical, or somewhere in between?

Wednesday, May 1, 2013

The High Quality Research Act: A Steaming Plate of Democracy, or Careful What You Wish For!!

I'd like to build on Hank's post from yesterday, which looked at Rep. Lamar Smith (R-TX) and Smith's potential legislation, the "High Quality Research Act" (HQRA), which would curtail spending on certain kinds of research at the National Science Foundation. This article nicely spells out the basic contours of the story. Rep. Smith is particularly interested in cutting funding to research in the social sciences, unless it makes contributions to economic development and national security. What has mostly gone unmentioned in recent news articles is that most of the cuts will likely affect the NSF's program in science and technology studies (STS), a field in which I and most other authors of this blog work. Hank did a nice job in his post of connecting this law to two long-standing themes in STS, namely the so-called Science Wars and peer review. I would like to take this issue in a slightly different direction by focusing on STS writing on democracy.




STS writings on democracy go back a long way; indeed, one could easily argue that the place of science in liberal democracies is *the* central theme of the literature. How should science be controlled in a democratic society? Should it fulfill a "social function," for instance? Or should its objectives be set by scientists? Also, how should the products of scientific discovery play a role in democratic politics?

In the UK in the mid-20th century, J. D. Bernal and Michael Polanyi had it out over this issue, with the Marxist Bernal arguing that science should be for the people and Polanyi insisting that science worked best when it was "autonomous." In the United States at about the same time, Vannevar Bush was dreaming up the institution that would eventually become the NSF. It's important to note, however, that Bush originally envisioned an organization for scientists, by scientists, that would have been fully autonomous, free from intervention by politicians, including Congress. But this part of Bush's reverie never came true. The NSF has always had some oversight.

Of course, one way this issue connects to the history and sociology of science is through the theme of who chooses scientific problems and how they are chosen. This topic goes all the way back to Merton's Harvard dissertation (1936?) and wends its way through Kuhn, Forman, Crosbie Smith, Jeremy Blatter, and the whole literature on whether Cold War defense spending "distorted" science.

Most of my progressive friends have been unhappy about this Republican turn against the NSF. They like to point, as Hank did, to the fact that Rep. Smith doesn't buy into the products of science, including climate science. But we should realize that Smith's actions are completely understandable when viewed from a slightly different angle, which I usually call the "Whitey on the Moon" critique, after Gil Scott-Heron's great song of the same name.


As Scott-Heron asks, why should we be putting white men on the moon when our healthcare system is broken and people are suffering in poverty? "Was all that money I made last year for Whitey on the Moon? How come there ain't no money here? Hmm. Whitey's on the Moon." And all of us know how silly and stupid some research in the social sciences and humanities is. This became especially true as various "studies" programs took off in the 80s, 90s, and 00s. Even lefty scholars, like Terry Eagleton, criticized the silly excesses reached in certain fields: it paints unholy pictures of some grad student in his studio apartment, dressed only in socks and boxers, sitting on his couch with a notebook, watching hours of porn, writing up his doctoral thesis on "The Historical and Hermeneutical Trajectory of the Money Shot" or whatever.

So, it's a good thing to ask where our money is going and what it is producing. (Of course, as Scott-Heron points out, the same kinds of questions can and have been asked about the space program and, like, particle accelerators.)

It goes without saying that I part ways with Rep. Smith when he thinks that the only things of value are economic development and national security. But that last sentence is so glaringly obvious that it really should have gone without saying.

But I think another place this law clearly intersects with STS is around the issue of "democracy." Some branches of STS have been insisting for years that science and engineering need more democratic input. (Just go to your friendly neighborhood STS journal and pump in the search term "democracy.") Is this desire for "democracy" in STS a conservative desire? Or a progressive one? Well, that depends, of course. It is remarkable, however, how close certain self-proclaimed progressive strains of post-1960s academic thought come to traditional conservative ideas. It's no surprise that Habermas called Foucault a "young conservative" given the long Burkean line of seeing people primarily as a product of their society's past. It's complicated. Michael Polanyi, who defended the autonomy of science, was also a "conservative nutter" (as one person called him at a conference I attended recently). Our ordinary ways of dividing up politics often fall down when examining these kinds of issues.

But now perhaps we are seeing how calls for "democracy" are often not as progressive as their chanters believe. Yes, I know that there is a long tension in this country between desires for democracy and fears of the populist mob, of which the Tea Party is one expression. The irony is that STS scholars have always advocated democracy, but when the crowds came--Rep. Smith and his merry band of Tea Partiers--and entered the NSF, the work of STS scholars was the first thing on the chopping block.

Here's another metaphor for you: STS-ers have written a lot on "democracy," and now Rep. Lamar Smith has served them up a big steaming plate of democracy, upon which he and they can now dine.