
Wednesday, 27 April 2011

A Nice Derangement of Empathies

In the wake of JAS-BIO (which I mentioned earlier and which Joanna thoroughly recapped last week), Nathaniel Comfort over at PACHSmörgåsbord has been continuing his ongoing thinking about what academic history of science is good for.

After beginning with a query ("Who Cares about the History of Science?"), Comfort shifts gears to ask (and provide a few answers for) why anyone should care. His first stab was about "History as a Way of Knowing." In that post, he paints scientific and historical reasoning as the contrast between determinism and contingency, simplification and complication. He ends with a plea to reach out to broader audiences, to engage ourselves and to change people's minds.


After JAS-BIO, Comfort takes what looks like a sharp turn. His third installment answers the question about why we should care more polemically: "Maybe we shouldn't." Here, he's arguing against careerism and in favor of passion. He adopts a similar stance in his latest post ("Toward a Poetics of HSMT"), which, as the title suggests, makes the case for attention to beauty and to the aesthetic qualities of our scholarship more generally.

I'm basically on board with Comfort's perspective, but today I'd like to carry this aesthetic strand even further and talk a little about empathy. The concept doesn't really come up in Comfort's posts, but it's an important one, and has come up on this blog in discussions of audience and the ability of actors (or those who take themselves to be their successors) to "recognize themselves" in our work (for snippets of this discussion, see here and here).

I'm obviously in no position to attempt a definition or a history of empathy (as usual, for better or for worse, the SEP provides a good starting point). What I'd like to focus on is (a) two principal ways historians justify their work in terms of empathy and (b) a suggestive conclusion one might come to through their synthesis.

The first "use" of empathy comes in historical practice. It's akin to the "estrangement as historical Verstehen" Daston mentioned in her Critical Inquiry piece: it's understanding through attempted embodiment, rather than through inference or analogy. I won't belabor the historico-hermeneutical sense of empathy we get from Dilthey through to Gadamer (which is fascinating in its own right), but will instead suggest the difference between what I'm picking out here and the issue of "fair representation" into which our previous discussion morphed (see here).

A call to empathy is not about giving actors a fair shake: that's an ethical mandate that precedes the sense of empathy I'm highlighting. Empathy and understanding, in the early-twentieth century, were contrasted with objectivity and explanation - the latter were the province of the natural sciences, while the former delimited special terrain for the newer human sciences. To access the meaning of an idea to its early proponents, we have to embody the experience of developing, holding, and communicating that idea in its time.

Though it overemphasized the individual mind, this sense of empathy as understanding-through-embodiment ("transposition," for Dilthey) found a new voice in Quentin Skinner's famous early essays. He and others of the "Cambridge School" developed a practicable sense of "ideas in context" that drew on the phenomenological approach to "meaning" and "understanding" while putting flesh, in the form of the narrative approach already familiar to historians, on those philosophical bones.

There are a bunch of fascinating parallels between Skinner and contemporary philosophers like Gadamer and Davidson - but I'll now move on to the second sense of empathy I promised to touch on, which may be of more interest to readers of this blog.

Where the first use of empathy was practical, the second is pedagogical. It, too, is expressed in terms of dislocation - a call to embody and experience the full weirdness of the past - but its purpose is less about getting the past "right" and more about inculcating certain habits of mind in students. Both are noble aims to which I'm sympathetic, and they often go together - especially, as Comfort notes, in the self-justification of those who study earlier periods.

This is a point Daston emphasizes as well, and it's worth noting given the audience of this blog. According to Daston, "only specialists in the twentieth century can allow themselves to take their subject matter for granted," by which she means that everyone else, by necessity, is "concerned with what science is, as well as how it works."

I'd imagine (or at least I'd hope) that historians of recent science - by which I mean science practiced by folks who are still alive, or near to it - might push back here, but rather than double down on that point, I'd like to use the case of twentieth-century science to sharpen my thoughts on empathy.

Being empathetic doesn't mean being uncritical - far from it. But it also isn't just an effort at fair representation - something like "empathy to the best explanation." Rather, empathy is a tool for understanding. Questions like "How could person X have simultaneously held views A and B?" often require us to first admit that (A&B) would've "made sense" to person X, and to then attempt an understanding of how this could've been by gathering in all the relevant pieces of X's lived experience.

Let me close with two related points:

Empathy is risky. In our effort to "make sense" of how a particular constellation of views or answer to a problem could've "made sense" to an actor, we don't want to import our own context-dependent, historically-contingent assumptions and categories. There is a neat parallel with Rawls' "veil of ignorance" here: while we aren't trying to assume an "original position" (much the opposite, in many ways), we do have to decide what features we take with us as we begin the thought experiment. This is tricky business, since cries of "Whiggism" rain down on she who fails to check her preconceptions at the door to the time machine.

Empathy runs both ways. While the previous point about checking one's assumptions is a truism for historians, I would argue (and have before) that we also have to watch out for how inflated ideas of our own agency infect our accounts of the past. That is, some use the language of empathy to urge us to grant more agency to our actors: they argue that we're too dependent on larger-than-life forces in our historical accounts, given that we deny or deflate the importance of such forces on us.

What I'd rather do is reverse the process - let empathy flow uphill. If we agree - and I think we do - that our privileged vantage helps us see the wider patterns and powers (ideological, institutional, or otherwise) determining day-by-day behavior in the past, then we might import that sense back into the way we account for our own activities. This is a sort of methodological actualism, albeit one run in reverse. Whereas in geology (for example), actualism forbids the invocation of anything not presently in play to explain past phenomena, I want to rely on the recognized strength of historical explanation to limit the stories we allow about our own lives.

Why empathy? Because I think it really is something that good historians have to offer accounts of both past and present - which means it's a valid answer both to Comfort's question about what history's good for as well as to questions we asked a while ago (here, here, and here) about how or why historians might raise their voices in current debates, political or otherwise.*

----------------------------
*It's also an interesting complement to the distinction, proposed by Donald Davidson in the essay from which I bowdlerized the title for this post, between "prior" and "passing" theories in communication. For Davidson, understanding between speaker and hearer isn't about a shared language ("There is no such thing as a language," he famously concludes), but rather about the production of a "passing theory" out of unshared "prior theories" and present data. His sense of constant, ongoing adjustment parallels the hermeneutics I dealt with earlier as well as the sense in which empathy, as a tool, should urge us to write and rewrite our own theories of how the world works as we put past and present into dialogue.

Saturday, 23 April 2011

Can Our Minds Be Tricked by Food?

This entry was posted on Monday, August 2nd, 2010 at 2:08 pm and is filed under Featured.

Is it actually possible that the way you see food in your mind can make your stomach feel fuller just by looking at it?

Some research carried out recently suggests that it is. The theory works like this: if we alter the way we think about food and how filling it is, then we should feel fuller as a result.

So let's look at this in a bit more detail and see how it actually works. In one study, a group of people were given food, were led to believe that the portions were larger than they actually were, and reported feeling fuller for longer.

People's recollections of how meals in the past kept them satisfied also played a role in how long they stayed feeling full. This suggests that memory is involved, both before and after eating, in determining how full you actually feel.

So let's see how the theory fared when it was put to the test. In the first experiment, participants were shown the contents of a fruit smoothie, looking at all the fruit before it had been blended.

Half of the group was shown a small portion of fruit and the other half a large portion, and each was asked which would give them the most satiety. They were then asked to rate their own fullness both before and for three hours after consuming the smoothie.

The group shown the larger amount of fruit said they felt fuller, even though everyone had been given the smaller amount. In another experiment, the researchers did something similar with a bowl of soup, manipulating both the amount of soup participants perceived they had eaten and the amount they actually had.

They had a pump connected to the bowl, so they could vary the amount the participant was consuming. The researchers thus had total control of how much was eaten overall.

Three hours later, they found that it was the perceived amount - the amount participants remembered consuming - rather than the amount actually consumed that determined how full they felt.

Here is what one of the researchers had to say regarding this study.

“The extent to which a food can alleviate hunger is not determined solely by its physical size, energy content, and so on. Instead, it is influenced by prior experience with a food, which affects our beliefs and expectations about satiation. This has an immediate effect on the portion sizes that we select and an effect on the hunger that we experience after eating,” said Dr. Brunstrom.

“Labels on ‘light’ and ‘diet’ foods might lead us to think we will not be satisfied by such foods, possibly leading us to eat more afterwards,” added Dr. Brunstrom. “One way to militate against this, and indeed accentuate potential satiety effects, might be to emphasize the satiating properties of a food using labels such as ‘satisfying’ or ‘hunger relieving’.”

Is this why, when we eat, our minds tend to remember what it was that satisfied us, so that we return to those types of food more often than not? As this research tells us, it is how we see food in our minds that dictates to our stomachs how full we will feel from the food we are about to eat. And this is probably why engrained eating habits can be so hard to let go of.

Source http://www.sciencedaily.com







Thursday, 21 April 2011

JAS-BIO, Evolving

A few weeks ago, Henry, Lukas, and I all traveled to New Haven for the 46th meeting of the Joint Atlantic Seminar for the History of Biology. Many of today’s leading scholars in the field gave their first papers at the conference and it continues to be a welcoming forum for junior scholars to share works-in-progress.

It has become a tradition to include a citation on the back of the program to a short essay on the history of the meeting by Mary P. Winsor, published in Isis in 1999. In that piece Winsor points out that the spirit upon which the conference was founded and perpetuated in the early years was not, in fact, professionalization. It was to provide a “stimulating day of friendly intellectual exchange.” What makes the JAS-BIO an important gathering is that it serves as a space where people from many generations can think together about why and how we do what we do. In my own experience, it has been a particularly important opportunity for me to learn from my peers.

The event began Friday night with a talk by medical anthropologist Marcia Inhorn, who spoke about her research on assisted reproductive technologies in Muslim countries. Her ethnographic research, and the lively discussion that followed her presentation, appropriately foreshadowed a conference in which it became impossible to ignore the evolution of the history of biology. By this I mean that the participants at this year’s meetings unabashedly pushed the conceptual and methodological boundaries of the field, seeking to engage with history of technology, industrialization, philosophy, etc.

On Saturday, three quick sessions took us from 19th century collecting to Cold War psychological research, to philosophical, religious, and social legacies of Darwinism.

Lukas Rieppel and Courtney Thompson each articulated novel commercial aspects of 19th century natural historical collections. Rieppel presented work from his dissertation, which (forgive me if I’m overstating the case) situates museum-based vertebrate paleontology as a site for reinterpreting broader processes of industrialization. By focusing on the assembly of museum collections he encouraged us to consider the interrelationship of railroads, robber barons, and philanthropy as fundamental to American cultures of capitalism. At a different register, Thompson’s paper oriented us towards the home, where children were instructed, through books on natural history, to establish their own collections. Many of us who do history of biology cite such early experiences as influential (for me, it was visiting the American Museum of Natural History). Thompson’s research also raised questions about the material legacies of the books themselves – they have become collectors’ items in their own right.

Nellwyn Thomas and Brian Casey explored the ways in which psychological research during the mid-20th century reflected shifting notions of human potential and pathology. Thomas took us to the ocean floor in her account of the interplay between marine biological and psychological research at the underwater Tektite experiment station. Beyond giving us insight into practices of standardization that enabled the rich, otherworldly experiences of aquanauts (those marine biologists who lived in the aquatic environment for weeks on end) to be rendered statistically, Thomas’ paper tracked the mutating status of the human at mid-century. Casey, in his account of psycho-surgical research during the Cold War, also pointed to the question of what it means to be human. Casey's project, part of a collaborative endeavor, emphasized the role of technology in psychological research. The history of 20th century biology has much to gain from a deeper engagement with technology. We have only just begun to pay attention to the consequences of technological intervention at the register of the biological.

As a panel, David Crawford, Stephen Dilley, and Myrna Perez offered us a kaleidoscopic portrait of the legacy of evolutionary thought. Their respective talks considered the material cultures of scientific publication as well as issues of theology and contemporary public intellectuals. Crawford, a philosopher, demonstrated the productive intersections of the history of ideas with attention to practice and material culture. His talk focused on the discrepancies between Lamarck’s printed work and his intellectual intentions. Dilley, also a philosopher, trained his attention on the religious content of Darwin’s work. By putting theological concerns on a par with ‘scientific’ content, Dilley’s paper was an implicit (and perhaps ironic?) reminder of the merits of a symmetrical SSK-style approach. Working in the 20th century, Perez described her efforts to view debates about evolution through the life and career of Stephen Jay Gould. This is a bold effort to consider the role of evolutionary biologist as public intellectual during a tumultuous period in American history. Commentator Janet Browne rightly observed of the three papers in the last session that the study of Darwin and his influence continues to generate explanations of how we order our world.

The day ended with a rather sobering conversation about the state of the field and prospects for jobs (see here). Much ink has been and will continue to be spilled over these problems. However, for today, I want to conclude this particular post with an unambiguously positive sentiment. JAS-BIO has always been a place to enact the life of the mind and I left this year’s meeting in awe of how much we – as junior scholars – have to learn from each other. Thank you…

Spending Cuts, Financial Crises, and Social Darwinism


The American Museum of Natural History, 77th and Central Park West

I have been reading Sven Beckert’s excellent book, The Monied Metropolis, recently.  It presents an account of how the economic elite of New York City consolidated into a coherent and powerful social class during the second half of the 19th century.  It is a deeply thought-provoking study, and I encourage everyone who has not done so to read it.

My own interest in Beckert’s research stems from the fact that one way bourgeois New Yorkers performed their social distinction was by visibly patronizing elite cultural institutions.  The most obvious examples are the Metropolitan Museum of Art and the New York Philharmonic.  In both cases, the idea was to distinguish oneself by displaying one's highbrow tastes.  Thus, a crucial function for institutions like the Metropolitan Museum was to demarcate fine or legitimate art from popular, lowbrow entertainment.  

The interesting thing for me is that these people also patronized the American Museum of Natural History, which is located on the West side of Central Park, directly facing the Metropolitan Museum on the East.  What did the Natural History Museum distinguish or demarcate itself from?  The answer, I think, is the popular museums located further downtown.  The most famous of these is P. T. Barnum’s American Museum.  But there were many more “dime museums” all over the city, probably over a dozen in all.  

If the Natural History Museum exhibited genuine and secure scientific knowledge, dime museums catered to people’s taste for the strange, exotic and wonderful.  The claim, then, is that whereas the Metropolitan Museum sought to canonize fine art, the Natural History Museum demarcated science from humbug.

I’ll leave the rest of that argument to my dissertation.  What I wanted to share with everyone here is a curious and highly topical connection between Beckert’s work and contemporary debates about government spending, fiscal policy, and the federal budget.

Beckert traces the origins of New York’s monied elite back to the early decades of the 19th century, but they did not begin to coalesce into a coherent social class until after the Civil War.  The most defining moment in the process, he argues, was the financial crisis of 1873.  It constituted the most severe economic depression the United States had experienced up to that point, and it came about when a speculative bubble in the railroad industry burst.  Once it became clear that investments in railroads had been overvalued, many banks fell apart.  Coal mining and other industries tied to the railroads -- such as steel and iron manufacturing -- also went into decline, with ripples spreading throughout the nation’s entire economy.

New York’s Bourgeoisie responded to the crisis by closing ranks.  This trend became especially pronounced as workers’ wages and conditions declined, leading to labor unrest.  

In addition to these economic changes, Beckert reminds us there was also an important and well-known intellectual movement gaining traction at the time: Social Darwinism.  In the United States, Spencer’s articulation of the idea that evolutionary progress requires a struggle for survival in human society as well as the natural world met an especially receptive audience among New York’s financial and industrial elite.

Hence, whereas an older generation of wealthy New Yorkers had emphasized the importance of public munificence and civic stewardship, the new generation argued that many of the city’s working poor bore a personal responsibility for the conditions in which they now found themselves.  Rather than a misfortune of circumstance, poverty was cast as the result of moral failings.

What really struck me about Beckert’s story is that bourgeois New Yorkers responded to the panic of 1873 with what he calls “fiscal retrenchment.”  Their own considerable finances under serious threat for the first time in living memory, wealthy New Yorkers argued forcefully that the city would be wrong to spend their tax dollars on extravagant relief efforts and public works projects to alleviate public suffering.  For example, Beckert quotes Samuel Tilden, a railroad lawyer and governor of New York State from 1875 to 1877, who cut taxes and reduced public spending, leaving it up “to the people to work out their own prosperity and happiness.”  Even more compelling is a quote from the city’s mayor, William F. Havemeyer, who told the Times in 1873 that he “could not see why the property of those who, by thrift and industry, had built up their houses, should be confiscated [by] men who had ... by strikes or the like, contributed to the present state of things themselves.”

Rather than invest tax dollars in public works projects and social programs, the Bourgeoisie preferred to set up their own, private charity operations.  The reason, Beckert argues, is that in so doing they could attempt to distinguish between the city’s “deserving” and its “undeserving” poor.  While it made sense to offer relief to the poor, nothing good could come out of encouraging pauperism!  As the NY State Board of Charity put it in its 1877 Annual Report, most of the impoverished classes “reached that condition by idleness, improvidence, drunkenness” or some other “form of vicious indulgence.”

Sunday, 17 April 2011

Labels and the History of Science

I would like to erase the consequences of certain events and restore an initial condition. But every moment of my life brings with it an accumulation of new facts, and each of these new facts brings with it consequences; so the more I seek to return to the zero moment from which I set out, the further I move away from it... (Calvino)
Certain recent events have left me thinking about labels. Unfortunately (and ironically?) "label" doesn't capture quite what I mean, but let me try to illustrate it by describing a few of the things I've run into lately.

Names were in the air at "STS: The Next Twenty," the conference convened at Harvard last weekend that was part stock-taking, part provocation, and part rethinking of the state of the field in Science and Technology Studies (these were all terms with which the organizers welcomed participants).

I was only able to be there for the first half of the three-day event, but much of the conversation to which I was privy centered around metaphors for understanding what STS has been or could be: is STS a discipline, field, or umbrella? Is it an arena for discussion or a way of seeing all its own? One participant characterized it (fondly) as a "swamp," opposite the "desert" of jurisprudence; tool-metaphors were rampant, too. These are things I've been thinking about, and it was fun to see a big group go through it, too.

The cause of my early departure from Cambridge was the Joint-Atlantic Seminar for the History of Biology, held at Yale that same weekend. It's always a great meeting, and this time our very own Lukas gave a well-received paper connecting dinosaurs, museums, and larger structural changes in the American economy. I won't dwell too long on our days in New Haven, since I know Joanna is planning on writing up her thoughts and Nathaniel Comfort has already given us some reflections to chew on.

As Nathaniel points out, the annual stock-taking conversation with which the Joint Atlantic concludes revolved around questions of disciplinary identity and boundaries. While he (echoing what Dan Kevles had to say) suggests it's about passion for the subject matter ("the history of science") and not professional or disciplinary concerns ("The History of Science"), younger scholars in the room weren't ready to abandon concerns about self-identification and project-choice in the interest of securing jobs and starting careers.

Like I said, I don't want to go too far into it, but some of the themes we've been dealing with on AmericanScience were heavy in the room both at STS 20-20 and at the Joint Atlantic: from the identity of the scientist/historian (not to mention the scholar/citizen) to selling your soul on scientific content, and including questions of writing (about which I know Nathaniel is passionate) and the boundaries between historical work and present politics (I and II), there was a lot to think about - and I won't even go into the David Brooks event held the following Tuesday.

Rather than continue in this vein - which has dominated much discussion here and elsewhere - I'd like instead to point out from these conversations to another issue I ran into recently at a neighboring blog. Over at U.S. Intellectual History, there's been an ongoing discussion of Dan Rodgers' new book, Age of Fracture. In one post, David Sehat put forward a label with which we might understand Rodgers' approach: "neo-pragmatist."

By connecting Rodgers' new book to an earlier one (Contested Truths) - or, rather, by quoting heavily from the former in support of two earlier posts (here and here) on the latter - Sehat means to pick out Rodgers' Rortyean elements. In the comments, a discussion of the utility of labels emerged, ending in something of a dispute over whether labels for the content of historical analysis ("science," for our purposes) differ from those for the method (e.g. "neo-pragmatist"), and why.

This seems particularly germane for our ongoing discussion of what it is to do the history of science, and what HOS is the history of - messy terrain indeed.

Some historians of science feel a content-affinity, such that early-modern astronomy and 21st-century nanotechnology have enough in common as objects (or ideas, or practices) to justify a common conversation. Others, it seems to me, would hold these subjects apart as historical phenomena but would address one another on the level of methods - how best to characterize ideas or embed them in their contexts, how to impart technical detail in a readable way, &c.

This question of method vs. content has come up before on our blog - and elsewhere. It's at the heart of questions about whether present-day practitioners should "recognize themselves" in historical work; embedded in accusations of whiggishness and attendant concerns about presentism; and even embroiled in current debates within the wider arena of social and humanistic studies of science ("Philosophy, anyone?").

So, what's in a name? The questions about Sehat's dubbing Rodgers a "neo-pragmatist" are key. Does it matter if Rodgers would reject or accept the label? Why do we need to find a word or phrase with which to pin him down? If we are natural label-makers, as Sehat asserts, does this mean we shouldn't question the impulse to do so? What are the consequences of labeling, and what possibilities and dimensions does the effort leave out?

Since such things come up so often, I'm tempted to conclude that these meta-level concerns fit the definition of philosophy offered up by a good friend of mine: "The questions are obvious; the answers are impossible." But one thing we might take away (at the risk of looping, reflexively, one more time) is an attention to the act of labeling itself.

While this sounds like yet another ratcheting up into the meta-sphere, I think there's something here. Rather than (or in addition to) dedicating conferences, journal issues, roundtables, and blog-hours to questions about who we are (questions to which we try to post new and better answers), we might burrow down into the queries themselves to find the impetuses at their roots.

In doing so, we might come to find that labels and labeling aren't just stock-taking or provoking or rethinking; instead, labels might be answers to questions we'd be better off abandoning. If our answer to the question of what differentiates the history of science from history more generally comes down to a difference in the content on which we attend, then we'd do well to recognize the consequences of such disciplinary labeling for the ways it forces us to prize the past apart as well.

Labels for present practice do, it seems, entail labels for past phenomena: and, as we continue to worry about the former, recognizing their connection to the latter might help ground future discussions.

Friday, 08 April 2011

Is Business Our Business Too?

Of course. That's my answer.

Last weekend I had the pleasure of giving a paper at the annual Business History Conference. The program is online, as are some of the papers and most of the abstracts. My abstracts are here. The conference organizers chose the theme "knowledge" and did a remarkable job of holding the papers and sessions to the theme. I don't think I've ever attended a conference of this size that remained so coherent.

As you might expect, historians of science, medicine, and technology jumped at a "knowledge"-based conference. I saw a handful of terrific papers. For instance, my co-panelist---Rutgers' Jamie Pietruska---detailed the massive statistical apparatus that supported the USDA's attempt at "objective" cotton forecasting in the late nineteenth century, but showed how competing statistical claims had the unintended consequence of producing increased price volatility. Another risk-centered paper---this one by Nate Holdren, a grad student at the University of Minnesota---demonstrated the dominance of a medical discourse in the practices of the Pullman company as it faced such disparate problems as food-handling regulations, employee turnover, and workplace liability insurance. On the same panel with Holdren came a fascinating paper by Sarah Rose and Joshua Salzmann on "Bionic Ballplayers." Rose and Salzmann took our current fascination with steroid use and put it in the context of recent developments in baseball contracts, which have changed the incentives for players and owners, and the invention of "Tommy John surgeries," which repair the worn-out tendons of over-muscled ballplayers. Rose and Salzmann's picture seemed to me the perfect example of a technological system, à la Thomas Hughes.

I had to miss just as many great HOS/M/T offerings. Consider Michael Pettit's paper on marketing hormones, Hyungsub Choi's paper on semiconductor research and manufacturing, and Dominique Tobbell's paper on the reformation of research in the pharmaceutical industry. And I haven't even mentioned David Hounshell's opening plenary talk.

While the historians of science and technology clearly took advantage of this BHC venue, it seems that the business history community was happy to oblige. Indeed, of the four recent dissertations presented as finalists for the BHC's annual dissertation prize, three were in the history of science, technology, and medicine. My work on the construction of statistical infrastructure and ideas of human difference in the American life insurance industry got a nod. So did excellent work by Eric Hintz, from the Smithsonian's Lemelson Center, and Kara Swanson, from the Northeastern University law school. In his dissertation, Hintz challenges our sense that big corporate research labs killed off independent inventors. I was struck in his presentation not only by his conclusion that the death of the independent inventor had been greatly exaggerated, but by the many ways in which such inventors found ways to work with and around large corporate science and technology labs. Swanson reveals in her dissertation the complex history behind the collection and storage of bodily materials --- milk, blood, and sperm --- in "banks" devoted to each such material. I'm used to reading about frozen bodily materials, thanks to Joanna when she isn't writing about cats. But I had not given much thought before this to the way in which the biomedical narrative of materials collecting intersected with real and metaphorical "banks." Human materials are their own sort of capital, but with their own sort of rules.

Unless pressed, I won't bother to make a sustained defense of the need to study the history of science in America within the context of business. Recent work like Paul Lucier's Scientists and Swindlers makes that case well enough already, even for the nineteenth century. Still, I am heartened to see so many great scholars thinking about thinking in places apart from the traditional settings of universities, museums, laboratories, and field stations. I hope we'll continue to see more scholarship focused on thinking in back offices, behind counters, and next to the cash register.

Tuesday, 05 April 2011

Hemingway's Cats: Let's Talk About Animals

So far, our blog has been rather human-centric. Today, I want to change that by starting a discussion about the intersections of Animal Studies and American science. Since I just got back from a little trip to Florida, I'm calling this post "Hemingway's Cats" in honor of the polydactyl felines that have colonized the author's old estate in Key West. The image of exceptional, proliferating non-human bodies in human-built environments is evocative (at least to me) for thinking about the ways in which animals inspire, populate, and transmit technical knowledge.

I'm particularly interested in animals that resist standardization (unlike Kohler's flies or Rader's mice), but which nonetheless become enrolled in scientific projects. One obvious area in which this has occurred is the realm of conservation biology. Here, the privileged animal body is one in danger of being manipulated or obliterated by unfettered human activity. The non-human animal that resists captivity becomes the object of scientific intervention, revealing as much, if not more, about human values as about the organism itself. Etienne Benson's Wired Wilderness is a notable, recent example (see his blog for links to some great bibliographies).

And then there are the animal bodies that get frozen -- my own area of expertise. I'm fascinated by the way in which overlapping ideologies and technologies of preservation have supported the archiving of extracts of non-human animals for purposes of conservation. Two of the most high-profile sites for such research are the Frozen Zoo in San Diego, CA and the Ambrose Monell Cryo-Collection at the American Museum of Natural History in NYC.

I've spent time at both collections and believe that they require us to re-think the relationship between certain spaces that support the conduct of American science: zoos & aquariums, natural history museums, and . . . reproductive clinics. (Sociologist Carrie Friese has investigated reproductive cloning for conservation purposes in US borderlands). In addition to tissue and DNA samples, the Frozen Zoo has a massive collection of cryo-preserved gametes from rare and endangered animals . . . a sort of nursery for nature if you will. When I visited, I spied a baby monitor in the freezer room -- intended to alert lab workers to alarms triggered by power outages. Here, the goal is not -- as in industrial agriculture -- to breed more standardized livestock. It is to preserve genetic diversity.

However, I recently learned of a frozen non-human tissue collection that manages to combine issues of assisted reproduction, agriculture, and conservation. The SVF Foundation in Newport, Rhode Island is financed by the heiress to the Campbell's Soup fortune and collaborates with the vet school at Tufts. According to the website:
'Although there are numerous seed banks throughout the world, little effort has been made to collect germplasm for rare and endangered breeds of livestock. Rare or heritage breeds of livestock carry valuable and irreplaceable traits, such as resistance to disease and parasites, heat tolerance and mothering ability—all of which may be needed at some future time.'
In this case, the discourse of conservation is combined with that of agriculture and assisted reproduction to promote the preservation of exotic breeds. SVF has actually partnered with organic food co-ops, including one a few blocks from my apartment, to promote a 'taste' for these animals. Unlike other animal conservation discourses, SVF reps argue that the best way to 'save' genetic diversity is to eat it. A moveable feast?

Tell us about your particular species of work and thoughts on animal bodies in American science. This would be a great place to share any resources for those interested in learning more.



Monday, 04 April 2011

David Brooks and "Scientific Concepts"

I know, I know: another David Brooks column? After my last stab at Brooks' popularization efforts - which received a bit of positive feedback - I've been keeping my eye on his column space (from the other side of NYT's pay-wall) to see if the issues I highlighted might resurface. They've continued unabated on his blog, but he seemed to turn back to politics in his column - until last Monday.

The post is called "Tools for Thinking," and it's similar in many ways to his "Palooza" posts in that it's more of a grab-bag of recent goings-on in the sciences than a coherent expression of Brooks' own thoughts. As I emphasized in my last post, I don't necessarily think this is a bad approach in principle, though in Brooks' case I wanted to think about (a) why he's increasingly reaching into the cognitive and social sciences, (b) what kind of "popularizer" this makes him, and (c) how his social-scientific turn reflects the state of "science and society" in the States.

Brooks' "Tools for Thinking" are drawn from a recent symposium of the Edge World Question Center. As many of you will know, Edge.org and its founder John Brockman have been a major node of scientific intellectualism for decades. I'll try to say more about Edge in a subsequent post, since I think they're fascinating - among (many) other things, they've self-styled as a sort of gangster-superhero squad ("The Digerati"), complete with Watchmen-esque pseudonyms (Brockman is "The Connector" - seriously, check it out).

More germane for my purposes, though, are two points on "popularization" that connect Brooks and Edge, points that subtend his recent column on their latest "World Question" - which, for the record, was suggested by Steven Pinker, elicited over 150 "expert" responses, and reads as follows: "What Scientific Concept Would Improve Everybody's Cognitive Toolkit?" (More on the concepts underlying the question itself - things like "Scientific Concept," "Cognitive Toolkit," "Everybody" - in a minute.)

For now, as I said, there are two points I'd like to highlight:

(1) Brooks' "New Humanism" (unveiled here, though a feature of much of his recent writing and speaking, including his upcoming event at Harvard, on which I'll report in a week) has interesting parallels with Brockman's "Third Culture." Both insist optimistically on the power of the natural and human sciences to help define the nature and meaning of human life, and, in doing so, both are meant to be in dialogue with the humanities and, to a certain extent, a replacement for or at least reimagining of traditional religious frameworks for understanding and grounding human experience.

(2) The two men represent radically different "personae" at the boundary between expert science and the lay public (sorry to reify these categories, but I'll get back to them in a minute). Specifically, I think they come to the intersection from very different places. Brooks is an avowed moderate, defender of (certain) traditional values, and social-media skeptic; Brockman, on the other hand, is a self-styled "maverick," a member of the avant-garde and digital salonnière. These differences (a) reflect the variety of identities within science popularization and (b) manifest themselves in very different visions of what a humanistic movement based on scientific research might look like.

A word on Brooks' column and the Edge symposium on which it's based. In response to Pinker's question, 164 "scientific concepts" were contributed as candidate hammers and forceps for our "cognitive toolkits." The list is fascinating in its own right: the concepts, like their contributors, run the gamut from the expected ("Ecology") to the surprising ("Kayfabe," an insider-term in professional wrestling), with plenty of terrible neologisms and acronyms thrown in ("PERMA," "Pragmamorphism," and "The Dece(i)bo Effect" only begin to scratch the surface).

Brooks himself recommends a few - including "Path Dependence," "Einstellung Effect," and "Supervenience!" (emphasis original) - before reflecting on the theme of "emergence," which he suggests is itself an "emergent" property of the discussion. On his blog, he highlights a few more, recommending the symposium as a whole for what it reveals about "the epistemological climate in this subculture."

"Subculture"? A strange term, at first blush, for a laundry list of the world's most important social and human scientists, interspersed with successful authors, musicians, and "cultural impresarios." But I think he's captured something. What Brockman is proposing - his "third culture" - is not "science for the people," nor is it a movement in which anyone and everyone is encouraged to get involved. It's not even, as far as I can tell, a worldview - where Brooks hopes that "scientific concepts" might alter our everyday temper, Brockman - "the Connector" - is focused on (re-)fashioning a cultural elite - an avant-garde.

In his 1991 essay on "The Third Culture," he put it this way:
Throughout history, intellectual life has been marked by the fact that only a small number of people have done the serious thinking for everybody else. What we are witnessing is a passing of the torch from one group of thinkers, the traditional literary intellectuals, to a new group, the intellectuals of the emerging third culture.
He's following C.P. Snow, who in his "Two Cultures" lamented the “gulf of mutual incomprehension” between scientists and "literary intellectuals" while clearly siding with the scientists (a process of identity formation helped along by what was widely perceived as sneering treatment by F.R. Leavis, the beau ideal of "traditional literary intellectuals").

There's an important difference, though. While Snow's famous formulation was also meant to suggest a "third culture," that third way was aimed at a specific problem: what he called in the fourth part of his lecture "the rich and the poor." The divide between the sciences and the humanities was not, for Snow, a metaphysical pissing-match about who defined the meaning of human life - rather, it was about helping humans who were suffering from material wants in the real world.

How does this relate to Brockman and to Brooks? Well, both of the latter seem more concerned (to me) with "meaning," and their actions at the boundary between science and society - Brooks the columnist, Brockman the connector - suggest this preference. The two represent, in a certain sense, the two sides of TED, ideas broadcast on the web but presented at semi-private gatherings in California: Brooks is the leveler, distilling new ideas for consumption; Brockman is the peak-raiser, bringing bright minds together and driving things forward.

The difference, in terms of their respective "philosophies of popularization," is one of both ends and emphasis. Snow hoped to alleviate poverty, Brooks is trying to make science a part of our self-understanding, and Brockman wants to empower a (third-) cultural elite to do the same. Obviously they might each share portions of these visions - I'd be interested to hear Brockman respond to Brooks, for example - but I think the difference, especially between Brooks' ostensible populism and Brockman's seeming elitism, is a crucial one.

On the 50th anniversary of Snow's Lecture, Seed Magazine asked whether we were "beyond" the binary or not (video here). It's in this vein that I see both Brooks and Brockman operating - they both want to get past the divide, though in different ways and for different reasons. The question I'm left with - and this is to get back to some things we've been thinking about as a group - is, if there's to be a "third culture" that fuses traditionally scientific and humanistic ways of knowing, of whom is it comprised?

In his original formulation, Brockman named 23 "intellectuals" of the third culture. All are practicing academics: most are scientists, though a few are philosophers thereof. Apropos of recent discussions of the boundaries between scholar and citizen, between scientist and historian, and of the relationship between the two, does demarcating such a group make sense? My take is that both Brooks and Brockman see themselves as integral to their vision of the "New Humanism"/"Third Culture," but it's unclear (to me) what language we might adopt to describe their positions relative to the knowledge they aggregate or enable.

Is the goal to raise the peaks and disseminate the results, or to raise the valleys and widen the conversation?

Saturday, 02 April 2011

Sprouted Grains: Nature’s Secret for a Thinner Waistline and Healthier Life


This is a guest post by Alexis Bonari of onlinedegrees.org

When I first started trying to live a healthier lifestyle, I quickly realized one thing: low-carb diets weren't for me. A week after I started cutting carbohydrates out of my diet, I felt terrible. I had no energy, I craved bread all the time, and I couldn't focus. How was I supposed to regulate my blood sugar and reduce sugar cravings while still eating bread and cereal? Luckily, a close friend suggested spelt.

Discovering the low glycemic difference.

Like me, you’ve probably never heard of spelt or other sprouted grains. These grains are allowed to germinate, or sprout, before being ground into flour. As a result, they are less starchy than their unsprouted equivalents.

Unlike regular wheat flour, they don’t cause a spike in your blood sugar. Foods that don’t cause a dramatic increase in blood sugar over a short period of time are referred to as “low glycemic index foods”.

Breads and cereals made with spelt and amaranth — another low glycemic index grain — are commonly available in health food stores and some major grocery chains.

Instead of whole wheat, think spelt.

It has been proven time and again that sugar, especially in its purest form, is highly addictive. Those who habitually eat large quantities of processed sugar in the form of corn syrup and white sugar weigh up to ten pounds more than those who choose unprocessed, whole grains. The difference between processed and unprocessed sugars is how quickly they raise your blood sugar.

Eating spelt or amaranth instead of whole wheat is simply taking the concept of eating unprocessed grains a step further. The taste and texture of spelt bread is approximately the same as that of whole wheat.

Spelt for your health.

Spelt and other low glycemic index grains have been proven to help with more than weight loss. They can dramatically improve quality of life for diabetics and those suffering from adrenal disorders. For diabetics who must count every gram of carbohydrates consumed, this discovery is potentially life changing. An average slice of spelt bread raises your blood sugar to a much lesser degree than the average slice of whole-wheat bread.

Many people who suffer from adrenal disorders report that they experience fewer hot flashes after eating spelt bread and amaranth cereal. Since making the switch to low glycemic index foods, I have lost several pounds with hardly any effort. Better yet, I’ve lost most of my craving for high-carbohydrate foods.

Whether you suffer from blood sugar related health issues, or you simply want to lose a few pounds, spelt might be the answer to your prayers.

Bio: Alexis Bonari is a freelance writer and blog junkie. She is currently a resident blogger at onlinedegrees.org, researching areas of online education programs. In her spare time, she enjoys square-foot gardening, swimming, and avoiding her laptop.
