
Monday, 30 April 2012

6 Absolute Truths about the 5 Factor Diet

Weight Loss Sample Tips

1. Get support. When you make the decision to lose weight, enlist the help and support of your friends and family members. Having people around you who will encourage you through the process is a great way to start. Be careful about telling those people who might be discouraging, either by not supporting your goals or by hounding you every time they see you eat something that they deem inappropriate for someone who is dieting. Neither of these scenarios is helpful!

2. Set realistic goals. Anyone who has ever set an unrealistic weight loss goal will tell you that not meeting your own expectations is the fastest way to fail at weight loss. You should plan to lose no more than 1-2 pounds per week. In general, people who set realistic goals will exceed them during at least the first few weeks. Exceeding your weight loss goals will give you something to get excited about and keep the weight loss process positive.

3. Learn to keep things in moderation. When your goal is to lose weight, remember the old saying...all things in moderation. By following this mantra with eating and working out, you will lose weight at a reasonable pace and feel good while doing it!

4. Join a program. Weight loss groups like Weight Watchers are popular for more than just their diet plans. They help people to form a community with other people who have the same goals. This extended support network is great for making weight loss more exciting - having someone with whom you can share your excitement. It is also a great way to talk through some of the issues that you might be experiencing with your weight loss program. Support is crucial when you are attempting to make major lifestyle changes.


Saturday, 28 April 2012

How Facebook Users Matter

I just finished reading the cover story of the May Atlantic Monthly, which asks the question "Is Facebook Making Us Lonely?" Facebook, here, is a stand-in for the hyper connectivity enabled by the gamut of communication technologies available today. And the answer given by the novelist/journalist Stephen Marche, unsurprisingly and as suggested by the illustration of a man gazing into his glowing cell phone even as he is embraced by a naked and clearly affection-seeking woman, is yes. Or at the very least, it's not making us any less lonely.

There's a lot to critique in the research that underlies Marche's basic claims, as the sociologist Eric Klinenberg makes clear at Slate.

What initially caught my attention in the piece (besides the realization that I'd be able to use The Social Network when I teach the history of technology) is that it hinges on a variation of a "users matter" argument. According to Marche, it's not inherent in Facebook, or other online social networking technology, to be isolating. It's just that many people use Facebook in ways that enhance feelings of loneliness rather than feelings of sociability. Instead of using Facebook to arrange meaningful, face-to-face interactions, we're more likely to click "like" to show our approval of a friend's most recent photo or status update and be done.

Marche interviews John Cacioppo, of the University of Chicago's Center for Cognitive and Social Neuroscience, who compares Facebook to automobiles in his suggestion that it's not the technology that matters but how we use it: "It's like a car. You can drive it to pick up your friends. Or you can drive alone." Marche runs with the idea, arguing that:
"The history of our use of technology is a history of isolation desired and achieved. When the Great Atlantic and Pacific Tea Company opened its A&P stores, giving Americans self-service access to groceries, customers stopped having relationships with their grocers. When the telephone arrived, people stopped knocking on their neighbors' doors. Social media brings this process to a much wider set of relationships."
Of course, these aren't very good examples to choose, not least because user-centered histories of cars and telephones have shown the ways in which these did increase social connectivity (as Klinenberg also points out on Slate). In addition, Marche's argument seems to assume that there's an innate human tendency to adopt technologies in socially counterproductive ways -- a sweeping generalization of human nature that undermines the very flexibility that the "users matter" idea first introduces to his narrative.

I'm curious to know what others think of the article, and, in a nod to Marche, I'll add that if we can make that happen over a beer, all the better. 

Friday, 27 April 2012

"The 'Nothing' of Reality"

A recent dust-up between physicist/author Lawrence Krauss and philosopher of science David Albert should be of interest to anyone who studies science and wonders about how such studies interact with and are perceived by scientists. The controversy started with Albert's NYT review of Krauss's new book, A Universe from Nothing.


The book is part cosmological primer and part anti-religious screed (featuring an afterword by Richard Dawkins!), building on a lecture Krauss gave in 2009 that's had over a million hits on YouTube. I haven't read it, but I have seen the lecture, and based on that I'm not surprised that Krauss is regarded as a lucid and engaging popular science writer.


What Albert took issue with – and where the bickering began – was Krauss's use of the word "nothing." It turns out that Krauss can't explain where things like the laws of quantum mechanics or the fields described by relativistic quantum field theory come from: instead, "nothing" means "the absence of material particles" but not the "absence of everything."


This is where things get interesting. In an interview with The Atlantic, Krauss blasts Albert as a "moronic philosopher," saying: "I don't really give a damn about what 'nothing' means to philosophers; I care about the 'nothing' of reality." And he doesn't stop with Albert:
...the worst part of philosophy is the philosophy of science; the only people, as far as I can tell, that read work by philosophers of science are other philosophers of science. It has no impact on physics whatsoever, and I doubt that other philosophers read it because it's fairly technical. And so it's really hard to understand what justifies it.
Now, there's a lot to doubt here – Is it true that no one outside the field reads philosophy of science? If it is, does it matter? – but instead I want to offer a charitable reading, albeit one Krauss didn't intend and would probably reject. To my mind, if we re-read "the 'nothing' of reality" as "the 'nothing' of common sense," Krauss has a point that's worth the attention of those in science studies.

For all the bad things about Krauss and his friend Dawkins (and there are many), they are committed to public engagement. And, while we can and should question the version of the "information deficit model" their vision of "the public understanding of science" entails, we might take this episode as an opportunity to think about the work we do in science studies and the audiences for whom we do it.


By way of wrapping up, let me just note that today Krauss published some clarifying thoughts on philosophers, spurred partly by his friend Dan Dennett's suggestion that it sounded like he was condemning philosophy as a whole. There, he moves even further from the position on public engagement that I've staked out for him – which is too bad, in a sense.

Why? Because when he concludes by defining "bad" philosophy as that which goes beyond describing what we (scientifically) know, what we might know, and what we can't, he just barely misses making the point that we might all think more about how disciplinary puzzle-solving relates to what's interesting about what we (and scientists) do as expressed in ordinary language.

Thursday, 26 April 2012

History of Science / STS in Singapore

Marina Bay Sands Integrated Resort by Moshe Safdie Architects in Singapore.

I recently took a trip to Singapore, which is a great place to visit.  (If for no other reason than that the food is amazing!)  One thing that really struck me is the extent to which history, philosophy, sociology, and anthropology of science are taking off there.

Singapore has two main research universities: the National University of Singapore and the Nanyang Technological University.  Both of them are actively building an HOS / STS / HPS presence.  In my view, this is very good news for the discipline.

Among the more high-profile changes taking place in Singapore's academic landscape is the recent partnership between the NUS and Yale to build a joint liberal arts college on the south-east Asian island.  You may have heard that Yale's faculty have recently registered their complaints against this venture.

The purpose of this post is primarily to post a link to my friend Hallam Stevens' thoughts on the matter, which are worth reading.

It does not hurt that it also gives me an excuse to share a photo I took of the Marina Bay Sands Integrated Resort.  It is a giant casino built by Moshe Safdie (of Habitat '67 fame) in Singapore.  Not gonna say I'm a huge fan in terms of its formal qualities, but it certainly makes a statement!

Wednesday, 25 April 2012

JAS-BIO 2012

Hard to believe it has been a year since I reported on the Joint Atlantic Seminar for the History of Biology (see here).  This year's meeting, held at Penn, was one of the best-attended in recent memory and featured a dozen well-crafted and dynamically presented papers from grad students as local as Philadelphia and as distant as Arizona.




The meeting was kicked off by a plenary from Penn anthropologist Adriana Petryna, who spoke about work-in-progress on the demise of the sick role and the right to recovery.  I am biased (I have worked with Petryna for a number of years), but I appreciated the choice of an anthropologist of bioscience, following on the plenary given by anthropologist Marcia Inhorn last year. Anthropologists' attention to the life sciences has been informed by historians of biology, and the methodological insights being generated through conversations across fields are responsible for some truly important work (here, I'm thinking of Hannah Landecker's Culturing Life, Stefan Helmreich's Alien Ocean, and Hugh Raffles' In Amazonia, though there are many others). Creating a space for anthropology at our table is an opportunity to recognize that our work matters to communities other than our own, which is a good thing.

Read about some resonant themes from the meeting after the jump.



The first is the question of "phenomenology," and its connection to empathy.  In his talk on Otto Potzl's neuropsychiatry, Harvard's Scott Phelps grappled with early 20th century efforts to fathom the brain as a scientific object and the self (cue Hank). He spoke of this in terms of "neuro-phenomenology," an approach to the physics of subjectivity that required "perceptual empathy."  Ok, there's a lot going on in that sentence, but what I think Phelps might be talking about is the personal equation that characterizes encounter in the human field sciences.  Or, perhaps literary analysis, as was beautifully described by Princeton's Sarah Eldrige in her account of the family and the rise of the novel in 18th century Germany.  She spoke of empathy as a way into understanding new ideas about interiority and epigenesis.  What work does "phenomenology" do for Potzl . . . and for Phelps?  Is it an effective way of linking the material to the affective?  Or does it obscure that very relation?

Phenomenology was also invoked by Hopkins' Adrianna Link in her talk on efforts to create a film archive of disappearing human cultures in the mid-20th century.  She described how these 8 million feet of anthropological film were intended to be a "phenomenological" resource for the science of man.  What kind of "perceptual empathy" is called upon to make sense of this sort of evidence?  Again, I found myself wondering about what phenomenology, as an actors' category, obscured. As historians of life science dip into anthropology, history of technology, etc., we can reconstitute ways of thinking and talking about the material and the social.

I was fascinated by the resurgence of risk in several papers.  Robin Scheffler, from Yale, gave us a remarkable story of the role of Simian Virus 40, first a contaminant and then an experimental organism, in driving research in cancer biology and the regulation of vaccine production. Scheffler went beyond the "follow the thing" approach to show how the thing is constituted in relation to a broader physical infrastructure and an affective context of anxiety about health and security.  Penn's Mary Mitchell engaged with risk in a different way, trying to make sense of how human geneticists reckoned with the application of their insights in the realm of prenatal screening.  Mitchell linked the clinic, lab, and field as she tracked concerns about risks to individuals and to populations.

The tension between the individual and the collective is an old theme, newly rendered at this JAS-BIO.  Richard Nash from Hopkins has bravely begun to rethink the eclipse of Darwin to reveal a richer landscape or "milky way" of efforts to understand variation.  Penn's Maxwell Rogoski's account of Curt Stern's travels in the American South demonstrated how theories of variation were constructed with regard to shifting racial politics. Harvard's Myra Perez examined the multiple personas of Stephen Jay Gould and their relative impacts on debates over the role of science in American democracy.

On this last theme of individuals and collectives, I want to offer another explanation as to why environmental history was relatively absent from this year's meeting: the proliferation of new forums dedicated to the subject.  Along with the ASEH, the fledgling Yale graduate student conference met the week before JAS-BIO.  It's probably safe to say that if the two meetings had been scheduled further apart, we might have seen overlap in participants. The more places people have to gather, the better, but this will surely have consequences for group identity.  I agree with Lukas that the burgeoning relationship between environmental history and history of science is one to watch.

So, I'm casting my vote for an environmental historian as plenary speaker next year, when JAS-BIO meets at Woods Hole!



Saturday, 21 April 2012

STS and the Spectre of ELSI

A spectre is haunting STS, the spectre of ELSI.

Or perhaps not.

Lukas's last post and Hank's comment (including the Winfried Fluck article Hank linked to) evoked many thoughts in me. The kinds of "facts" brought up in Brooks's column make me wonder whether we will one day see No-Child-Left-Behind-esque standardized testing in universities. I want to add another layer to this discussion of the changing academic environment by discussing how funding might be shaping STS research.



The Human Genome Project's program on Ethical, Legal, and Social Implications (ELSI) was founded in 1990. The HGP dedicated 3 to 5% of its budget to ELSI research. Since then other programs on emerging technologies have had similar ELSI-type institutions, including Paul Rabinow's controversial tenure at the Synthetic Biology Engineering Research Center (SynBERC). Such programs mean a pot of money for STS scholars.

I've become very interested in how this funding has come to shape research in STS (broadly construed to include HOST). STS scholars often apply (and, in a Fluck-like manner, compete for) grant money. In some ways, this might be structurally akin to the kind of "corporatization" of research that Philip Mirowski has described in _Science-Mart_, though, for sure, most ELSI-type money is public, not private. I think this trend has had a host of influences on STS research.

I'll just list some possible implications now, though perhaps I can expand this reflection later. First, this focus on emerging technologies may limit the amount and depth of possible empirical research. What kind of sources exist for technologies that have very little to no history?

Second, the ELSI framing explicitly focuses on the normative "problems" inherent in scientific and engineering research. This might lead STS scholars to ignore issues in science studies that are less laden with (socially-noteworthy) norms. And, thus, it might flatten and circumscribe our overall account of science and technology. (I think, for instance, that the Speculative Realism movement might suggest that we should move away from the normative aspects of S&T--a subject for another time.)

Third, even if STS researchers want to do normative work, ELSI research seems to primarily involve participant-observation, such as attending conferences, conducting interviews, etc. These methods necessitate the continued participation of the scientists and engineers doing the work. Thus, ELSI-type researchers may be more likely to pull their punches, lest they anger or alienate their subjects.

Fourth, since ELSIers focus on the emerging, they may tend to ignore age-old, ordinary, mundane, or industrial technologies that have real impacts on our daily lives. Consider the US electricity system, which appears to be headed nowhere good at all but is the subject of just about zero "sexy" STS research. At the SHOT meeting in Cleveland last year, an audience member stated that STS scholars focus either on the emerging or on mundane technologies in developing nations (primarily in Africa) but on very little in between.

Finally--and most simply and stupidly--what does it mean for ELSI-type researchers to have (at least parts of) their research agenda set by someone else?

But perhaps I'm just being one of Winfried Fluck's romantic individualists?!?

Friday, 20 April 2012

The Marketplace of Ideas


David Brooks wrote a column in today's NY Times about the deplorable state of higher education in the United States.  "Colleges are supposed to produce learning," he says.  "But [a recent study] found that, on average, students experienced a pathetic seven percentile point gain in skills during their first two years in college and a marginal gain in the two years after that."  

The study that Brooks refers to is a new book called Academically Adrift by Richard Arum and Josipa Roksa.  (You can also find a précis of the book's argument in Issue 43 (2011) of Change Magazine entitled "The State of Undergraduate Learning.")  In addition to the figure that Brooks cited above, the two sociologists found that, on average, only 45% of American undergraduates experience a significant improvement in critical thinking, analytical reasoning, and writing skills over their first two years in college.   

Why?  Because students spend very little time on schoolwork.  In this study, students were found to spend only about 12 hours a week studying, in addition to the roughly 15 hours a week they spent going to lectures, discussions, and laboratories.  This is a sharp decline since the 1960s.  Not only that, but students were also found to spend over 43 hours per week socializing with friends, watching television, and engaging in other entertainment pursuits.

The upshot, then, seems to be as follows: American undergraduates spend too much of their time living out the Animal House fantasy, and not enough time hitting the books.  At least, that's the view that Arum and Roksa seem to endorse.  And judging from the media attention their book has received, many Americans agree.

But what has really happened to American higher education since the 1960s? Is this really a story about how our college campuses have been overrun by alcohol-guzzling John Belushi types, pictured above?  Or is something deeper and more interesting going on here?

To return to David Brooks' NY Times column, he opens with an interesting refrain that resembles what I have been hearing in many public discussions about higher education of late:

"There’s an atmosphere of grand fragility hanging over America’s colleges. The grandeur comes from the surging application rates, the international renown, the fancy new dining and athletic facilities. The fragility comes from the fact that colleges are charging more money, but it’s not clear how much actual benefit they are providing."

The thinking seems to be that while a college degree has become essential for entry into almost any field, actually attending school is getting more and more expensive.  This might be okay if students were getting something out of their degree, but it is hard to justify spending tens of thousands of dollars on tuition and living expenses when colleges don't actually seem to teach very much.

But it is also manifestly true that American colleges and universities do tend to offer a phenomenally rich educational experience to those who seek to benefit from it.  (It is not for nothing that international students flock to the United States.)  So if students spent more of their time studying and less of it partying, they might actually get something out of that very expensive degree!  So why can't we induce students to take advantage of all the opportunities that American colleges have to offer?

This is where I want to link up to Hank's post yesterday about the commercialization of personal genomics, represented by Silicon Valley startups like 23andMe.

In the United States, higher education is primarily a free market enterprise.  Students are customers, and the customer is always right.  But, as Hank, channeling Sanford Kwinter, points out, the mantra that the customer is always right is one that primarily benefits the producer, not the consumer.  

Is anyone really surprised that treating higher education as a business, and students as customers, has led to: increased college enrollments, soaring tuition prices, and an educational experience that is tailored to meet the desires of 18-22 year old undergraduates?  Why would colleges demand that students work hard and forgo socializing to spend time in the library?  Doing so would only drive students away!  That would mean eroding one's customer base, not to mention making it harder to charge more for tuition.

Free market principles yield free market solutions.  In health care, education, and anywhere else you care to look.

Thursday, 19 April 2012

23andMe: Genetic Testing or Bioprospecting?

This week, the Harvard Program on Science, Technology, & Society held the latest installment in its Science and Democracy Lecture Series, featuring Anne Wojcicki, the co-founder and CEO of 23andMe, a direct-to-consumer genetic testing company.


The lecture, called "Deleterious Me," combined an account of 23andMe's practices, and the challenges they've faced, with a blend of optimism and fatalism about the future (and future ubiquity) of personalized medicine, affordable biotechnology, and patient- (or consumer-) driven innovation in the health care industry. 

Many in attendance, not least a few of the scholars on the panel tasked with responding to the address, found Wojcicki's boosterism unpalatable. In particular, a line of critique running through the commentary centered on the nature of the relationship between the two stated missions of 23andMe: one individualistic, one collectivist.

On the one hand, Wojcicki highlighted her desire to empower consumer-patients by circumventing the medical establishment and making data available; on the other, she insisted on the role of 23andMe as a research platform, arguing that its unique dataset rendered it invaluable as a partner and model for further research. 

There's a potential tension here, one increasingly central to the modern biomedical establishment and, more generally, to the longer history of the interaction between patients (or subjects), science (or medicine), and capitalism. Who owns what? What's the impact of information asymmetries? Who is this research (or data) for?

The severest criticism along these lines came from Sanford Kwinter, a professor at the Graduate School of Design and the last panelist to comment. Kwinter's response was an unrelenting indictment of the outdated (market) logic underlying 23andMe, which (Kwinter argued) veiled exploitative practices in the language of "consumer choice." 

Sanford Kwinter (Harvard GSD)
Accusing Wojcicki of monetizing sickness, capitalizing on fear, and building 2.0 technology on 1.0 ideals, Kwinter argued that the mantra "the customer is always right" is even more dangerous in medicine than elsewhere–and that we should remember that the phrase's most loyal adherents are sellers, not customers.

Folks differed in their opinions of Kwinter's prudence, but no one denied his rhetorical strength. What was clear–in Wojcicki's inability to respond and in the absence of conversation at this level at the next day's workshop–was that an industrial culture (Silicon Valley biotech) built around innovation and "iteration" simply wasn't ready for such critical "why" questions.

The problem, it seems, is that the biomedical industry (whether entrepreneurial or establishment) and the scholars who critique it operate at different levels. While this might seem obvious, the upshot is that those whose lives are built directly on the logic of the market simply can't engage in the scrutiny of those foundations. 

While this week's event didn't destroy my faith in normative scholarly criticism of science and industry (far from it!), it did reveal a potential limitation on the forms that engagement can take. Far from "iterating" on existing models, then, I'd suggest that some form of revolution, rather than evolution, might be necessary to make critiques like Kwinter's heard.

Sunday, 15 April 2012

Environmental History & History of Science: The New Synthesis?

Alpine lake with wildflowers in Switzerland, a natural environment manicured by grazing ungulates.

Yesterday, I had the pleasure to attend the 3rd Northeast Environmental History Conference at Yale.  The theme this year was "Two Kingdoms: New Perspectives on Flora and Fauna in Environmental History."  And a few weeks prior, I was in Madison for the American Society for Environmental History Conference, where the theme was "From the Local to the Global."

What struck me at both occasions was the number of Historians of Science and Technology in attendance.  This was my first time at either event, and I was glad to meet many old friends I had not expected to see before next year's History of Science Society Meeting in San Diego.  But beyond this, I also heard a large number of presentations by people I've never met before, people who primarily self-identify as Environmental Historians, that could have just as well been presented at HSS or SHOT.

What's going on here?  Are the two fields converging on one another?

In at least two ways, I think that they are.  And, so far as I'm concerned, the development is a welcome one.

In a mundane sense, Historians of Science have become increasingly interested in biological fields of study such as ecology, evolution, and behavior that used to be eclipsed by the physical sciences.  So on a purely thematic level, there is increasing commonality with Environmental History.

But there is also a deeper and ultimately more interesting sense in which the two fields are in dialogue with one another.  My sense is that Environmental Historians have become increasingly aware that one cannot simply take the natural world as a given.  Nature is now routinely interrogated as a category of historical analysis.  (Of course, this is not entirely new.  People like William Cronon who are on the vanguard of the discipline have been doing it for a long time.  But what used to be a fairly radical position seems to have become more or less mainstream.)  In so doing, environmental history has found much inspiration from historians of science, scholars who have sought to embed our knowledge and experience of the natural world within narratives of social and cultural change for several decades.

At the same time, Historians of Science have much to learn from Environmental History.  While it is certainly true that nature should be historicized alongside of everything else, this does not mean everything is cultural.  Although plants and animals--to take up the theme of yesterday's conference at Yale--do not exist independently of human culture, they also exhibit a certain degree of resilience and push back against our efforts at control.  There is a quote in David Blackbourn's most recent book, a fairly longue durée history of German efforts to divert rivers and streams, largely for the purposes of land reclamation, of which I'm quite fond:

"when I read yet another book or article about an 'imagined landscape,' it is sometimes tempting to complain, like Gertrude Stein, that 'there is no there there.'  And I want to ask: are all topographies in the mind, is every river nothing more than a flowing symbol?"

What makes Blackbourn's Conquest of Nature such an intriguing book is not that he simply denies the impact of humans upon their landscape.  Nor does he deny that our imaginations matter.  Rather, he looks at how the social, cultural, and political imaginary of Germans exerted a material impact on their landscape.  Over the centuries, Germans sought, and to some extent succeeded, in imposing their vision onto the natural world.  They diverted streams and rivers, drained swamps, and executed ambitious hydrological projects.  But the landscape was not merely inert matter, sitting there for people to shape after their own image.  Rather, he narrates a complex and messy dialectical process in which nature and culture interact to the point that it is hard to say where one ends and the other begins.

Historians of science have been moving in that direction, but I think there is much that remains to be learned from the best and most sophisticated work being done by Environmental Historians today.  The title of this post is meant to be tongue in cheek, harkening back to E.O. Wilson's controversial 1975 publication of Sociobiology: The New Synthesis.  Still, the conversations we've been having about the material construction of scientific evidence (especially here, as well as here and here) might point to one way in which a synthesis of Hist. Sci. and Environmental History that would benefit both sides might be achieved.

Friday, 13 April 2012

Lovecraft, Science, and Epistemic Subcultures

For my first post, I want to build on discussions about literature and science that Hank, Joanna, and Dan had earlier, here and here. H. P. Lovecraft (1890–1937) wrote a series of stories for magazines such as Weird Tales during the 1920s and early 1930s, before science fiction, horror, and fantasy split into distinct genres.  He set his stories in old, decaying East Coast towns, not unlike his home, Providence, RI, and nearby small hamlets that he knew well. He filled his tales with plot devices—like archaic, mysterious texts and secret societies—that remain stock-in-trade for genre writers today. His monsters are enormous and sublime; they leave his characters whimpering with shattered minds. Yet, for all of his silliness and shortcomings, like Dickens, Kafka, and Poe, Lovecraft created an ambiance and tone that is distinctively his own. 



People have long known and written about Lovecraft’s fascination with science.  Beginning in 1914, he wrote astronomical columns for a local Providence newspaper. His understanding of the universe as a vast expanse indifferent to human desires informs his tales, in which characters cower before giant and ancient beasts, realizing in these moments their ultimate insignificance in the great scope of things. That is, contemporary science strongly shaped what Lovecraft’s critics refer to as his “cosmic horror.”


Rather than draw attention to Lovecraft’s broad interests in science, I want to focus on one
aspect, namely his participation in amateur journalism and epistolary circles.[1] The hobby of amateur journalism took off in the late 19th century with the introduction of small, cheap printing presses. Lovecraft eventually became president of the United Amateur Press Association (which was founded in the 1890s). He used his own publication, The Conservative, as a soapbox for his favorite causes, like decrying the League of Nations and, most infamously, supporting Aryan racial theories.  Lovecraft also took part in a few round-robin letter-writing groups. Group members would send a packet of letters and writings from one member to the next in a set order. When the packet came back to the first person, he would remove the piece that he had put in last time and replace it with something new. In this way, the group would constantly circulate new ideas and writings. Lovecraft likely circulated his racial theories through this route as well.

Lovecraft was born into a wealthy family in decline. He died in poverty. This economic descent, combined with his beliefs about race and other intuitions about society, led him to see degeneration everywhere, and these notions found their way into his stories. For instance, in his story “The Horror at Red Hook,” Lovecraft wrote about a once patrician family that retreated from the world. After generations of inbreeding, they became “dwarfed, deformed hairy devils or apes - monstrous and diabolic caricatures of the monkey tribe.”

I’m not breaking any new ground here. All of this is well known. What interests me is how Lovecraft and others traded these ideas in the periodicals of amateur journalism and in these round-robin epistolary groups.  They were doing more than just circulating pre-existing knowledge vouched for by professional scientists. They were putting forward their own speculations and developing and extending the ideas of others.

Historians know that early scientists were—and, indeed, prided themselves on being—amateurs. I am more interested in lay circles, like Lovecraft’s, that persist(ed) well after the professionalization of science and technology. Some scholars have already touched on this theme. The historian of technology, Susan Douglas, has noted the importance of amateurs in shaping the initial stages of technical change in objects such as radios. We can also think of Sophia Roosth’s work on garage science. Yet, much remains to be said about the perseverance of amateurism.

Recently, I have been reading a great deal about two communities that have put forward idiosyncratic ideas about the world. Less Wrong claims to be “a community blog devoted to refining the art of human rationality.”  Eliezer Yudkowsky, a proponent of the singularity, began the blog in 2009 and used it as a space to broadcast his views on, well, just about everything but primarily artificial intelligence, epistemology, and ethics. Yudkowsky and the Less Wrong community often base their speculations on ‘rationality’ on research in cognitive science, behavioral economics, and related disciplines. I’ve also been interested for some time in chemtrail conspiracy theorists, a community that is more decentralized. Chemtrailers believe that contrails, or lines of condensed water left in an aircraft’s wake, are in fact, um, chemtrails, chemicals sprayed into the atmosphere by the government or some other malignant group. Chemtrail theorists have carried out their own experiments to verify their intuitions. And they have become the scourge of those proposing research on geoengineering (like these people haunting a meeting of the American Association for the Advancement of Science [beginning @ 1:50]).

Thinking about these communities reminded me of Lovecraft’s earlier interactions. In some ways, amateur journalism and epistolary circles of Lovecraft’s day were not unlike the blogs and webpages that Less Wrong and the chemtrailers use. (Yes, I know the dangers of cross-temporal and cross-technological comparisons.) Still, I think there is much to explore about how such groups produce and distribute their knowledge against the background of an epistemic status quo. If scientists have their journals—as Alex Csiszar has been exploring—the laity have their amateur journalism and their blogs. And such spaces give historians of science and technology and STS scholars a chance to examine and probe the practices of epistemic subcultures.

[1] Hippocampus Press has published five volumes of Lovecraft’s Collected Essays, including one dedicated to his work in amateur journalism and one on his science writings.

Rabu, 11 April 2012

Dana Carpender's Carb Gram Counter: Usable Carbs, Protein, Fat, and Calories - Plus Tips on Eating Low-Carb!

Dana Carpender's Carb Gram Counter: Usable Carbs, Protein, Fat, and Calories - Plus Tips on Eating Low-Carb!Hello, low-carb dieters! Need some help? In this book you'll find a comprehensive directory of the total carbs, usable carbs, fiber, protein, and calorie amounts for countless different types of food. To make it easy to use, we've highlighted the usable carbs, so you can find the most vital information at a glance. And to help you put more variety in your diet, we've also highlighted the foods with less than five grams of usable carbs per serving, so you can see what you may have been missing!

To help you maintain a low-carb diet happily and successfully for life, we've included the best low-carb tips. We've even put together lists of great low-carb snacks, low-carb treats, fast food meals, and more!

So grab this little book, and carry it in your pocket, purse, or briefcase - it's the low-carb tool you've been looking for!

Price: $4.99


Click here to buy from Amazon

Evidence of the Normalization of American Science

I have only watched a few episodes of The Big Bang Theory on CBS (and don't plan to watch more), but I suppose that a show like this is something that historians of science in the United States will eventually have to deal with. From what I can glean, the show's science content as such plays a relatively small role, but its sense of the scientist/geek/nerd as an important modern/American type sits at the center of the show's concept. Could such a thing have been conceivable before the post-WWII proliferation of engineering and science jobs? Sure, you have Arrowsmith (1925) and works that valorize the scientist in the early twentieth century, but we don't see art that considers engineers or scientists to be normal, if socially awkward, folks. I guess we should read Steven Shapin's The Scientific Life next to an episode and see what happens.
A Lab Bench on TV! (and it isn't being used to solve murders!) -- The Big Bang Theory (CBS)

I would not have thought about the show at all, had I not stumbled upon this call for papers for an edited volume on "The Big Bang Theory and Gender Politics," which specifically calls for investigations into the gendered (and often demeaning) depictions of women, even the women scientists. I'll post the CFP after the jump.

(Also, apparently, Stephen Hawking just guest-starred. I would consider that a step down from The Simpsons, which, come to think of it, is also about the normalization of the engineer!)

Here it is:
Since its creation in 2007, The Big Bang Theory has captured a steady viewership of nearly 17 million people. After the first four seasons the network added an additional three without hesitation. The show is more popular than ever before, and it stands to reason to analyze why this is the case. Does its popularity stem from the show's quirky but lovable characters? Can it be attributed to the clever mixture of science and popular culture? Is it the combination of down-to-earth scientists and the world of an aspiring actress? Who knows? There are many aspects that make the show a success; however, certain facets need to be addressed, since the presentation of the characters is often problematic. Throughout its run the show has shown a remarkable capability to reduce women to bedfellows who do not necessarily need a brain and/or self-esteem. Penny's naivety (alarmingly enough, she does not yet have a last name) from the first few seasons was hard to stomach for many viewers, and it wasn't until the introduction of Amy Farrah Fowler that the cast was equipped with a long-term female character whose function did not reduce her to being eye candy or a sex object. Bernadette, however, plays into the blonde-and-dumb stereotype even though she has a PhD, while her counterpart Amy is asexualized (though not masculinized) even though she "dates and enjoys sex".
Most of the male characters in this show hardly fare any better. Raj's inability to address women without being drunk, Howard's desperate attempts to find and shag a woman no matter the cost, and the latent sexual desires both characters seem to harbour for each other open them up to a lot of criticism as well. Sheldon's inability to function in the real world also requires some investigating. Society seems to have reached a state where it appears impossible to have it all: the brains, the personality, the social aptness, and the beauty to go with it. Recent TV shows have proposed that smart is the new sexy, but what if this attitude comes with a price too high to pay? The show's depiction of perceived smartness, offered to an audience that may want to see nerds elevated, is complicated by that smartness being portrayed as flat and two-dimensional. At the same time, the show has also developed a kind of hierarchy of nerd-dom: some nerds are better than others, some kinds of smart are better than others, but, above all, those nerds who can have sex are still better than all the other nerds.
This collection will focus on gender and sexuality and I welcome all papers that are related to these topics.
Please send one page (around 500 words) abstracts to the following E-mail address Nadine.farghaly@gmx.net
And include the following information:
Writers submit a 1-page synopsis of their proposed chapter to us clearly stating:
[a] the research question
[b] the methodology
[c] the findings
[d] the bibliography (5 sources)
Deadline: 1st of June 2012
For questions please contact
Nadine Farghaly
Nadine.farghaly@gmx.net
Hope to hear from you soon.
Nadine


Senin, 09 April 2012

101 Dieting Tips (Part 1)

The 5-Factor Diet was originally created for celebrities. Harley was challenged to create brief (yet incredibly effective) workouts for actors, while training them during their short breaks on movie sets. In addition, he needed to create healthy meals in a matter of minutes (literally), using simple ingredients (the fewer the better) that could fit in the mini fridge on a set. Oh, and did we mention that he was relying on a blender and toaster oven for the mixing, blending, and cooking? Harley's clients were eating five times a day, cutting the length of their workouts, and transforming their bodies! As the saying goes, necessity is the mother of invention.

-------------------------------------------------------------------------

Dear Readers:

We are dedicated to providing our customers with cutting edge information with the latest and most popular ebooks & hot topics at very affordable prices. Our mission is to create positive change in your life. We carry hundreds of unique titles including "Literary Classics" under many categories for your convenience. Please click on the name "Manuel Ortiz Braschi" at the top of the page, next to the title, or write "Manuel Ortiz Braschi" at the search box and you will be taken to our main page in Amazon, where you will be able to check all the interesting, unique and informative titles that we carry at Amazon Kindle.

Price: $0.99


Click here to buy from Amazon

Jumat, 06 April 2012

On the Very Idea of Ontologies

In giving up the dualism of scheme and world, we do not give up the world, but reestablish unmediated touch with the familiar objects whose antics make our sentences and opinions true or false. – Donald Davidson, "On the Very Idea of a Conceptual Scheme"

I've been enjoying the discussion on our last couple posts (here and here), and wanted to break it out via a different vein of American philosophy and science: the history of the idea of the "conceptual scheme." It was suggested to me when Lukas quoted W.V.O. Quine's "On What There Is" to clarify what philosophers mean by "ontology." As Lukas (and Quine) suggest, ontology has long been a metaphysical problem about what there is and the categories that apply to it.

Willard Van Orman Quine, 1908-2000

This problem changes, I think, if we move a few years later and look at Quine's most famous paper: 1951's "Two Dogmas of Empiricism." Without going into Quine's indictment of the analytic-synthetic distinction or the reductionism of Carnap's Der Logische Aufbau der Welt, let me highlight a claim I think will shed light on our ongoing discussion of ontology:

"As an empiricist I continue to think of the conceptual scheme of science as a tool, ultimately, for predicting future experience in the light of past experience. Physical objects are conceptually imported into the situation as convenient intermediaries -- not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer" (41, emphasis added).

Here we have an articulation of the "conceptual scheme," a philosophical term of art that Quine (and others) credited to Lawrence Henderson but which I think I can show originated, in something like its modern sense, with William James. “The conceptual scheme,” James wrote in his Principles of Psychology, “is a sort of sieve in which we try to gather up the world's contents" (I.482).

Interestingly, it's still somewhat unclear what a "conceptual scheme" is. For James, it's a "sieve" for gathering "the world's contents"; for Quine, "[p]hysical objects are conceptually imported" into it. Its name suggests that it's some sort of mental web through which we interpret the physical world, but telling a compelling story about its constitution remains difficult.*

It's this general framing with which Donald Davidson took issue in what might be his most famous paper: 1974's "On the Very Idea of a Conceptual Scheme." In it, Davidson introduced the conceptual scheme as "the third dogma" of empiricism, since it mediated unnecessarily between ourselves and the world (as expressed succinctly in the quotation with which I began).


Donald Davidson, 1917-2003
Davidson argues, against "conceptual relativism," that "the very idea of a conceptual scheme" implies multiple points of view on a given, pre-schematic (or pre-scientific) reality. What's wrong with that? It seems, to Davidson (and to many of his readers), that we have no basis for comparing or even differentiating such schemes, since to do so requires an interchangeability whose foundation belies the implied differentiation from the start.

What's this got to do with the ontology of blood and bones (and brains)? I think Davidson's expansion of Quine's holism–and the resulting blurring of the boundary between "scheme and world"–sits in an interesting relationship with the sorts of claims Lukas and Joanna were making vis-a-vis the ontology of scientific specimens. 

While that holism seems to buttress their shared suspicion of Hacking's division between natural and social kinds, it also implies that the very idea of "ontologies" still distinguishes theories and things in a problematic way. In brief, resurrecting this midcentury conversation might force us to ask what assumptions lie behind a view that takes for granted the multiplicity and material impact of ontologies today.


------------------------------------------------------------------
*I have more to say about how James framed his version of the "conceptual scheme," and how his close attention to brains provided him with a (partially unintentional) material account of what such a scheme might consist in and how it might interact with physical stimuli. Suffice it to say, here, that it suggests a way brains and minds differ profoundly as objects of inquiry and ontological kinds from either bones or blood. 

**Extra note: The images come from Steve Pyke's wonderful gallery of twentieth-century philosophers. Check them out starting here

Kamis, 05 April 2012

Feathered Dinosaurs

An artist's rendering of Yutyrannus huali, a feathered dinosaur recently discovered in China.

I wanted to alert everyone to an article that appears in the journal Nature today, which has been causing quite a stir.  (It was even written up in the NY Times!)  The article announces the discovery of a new feathered dinosaur from the Lower Cretaceous in Liaoning Province, China.  Above is an artist's rendering that gives you a sense of how scientists imagine these creatures looked in the flesh.

There are a few things worth noting here.  First, this creature is a fairly close (but older) relative of T. rex.  Second, as the article points out, it is by far the largest feathered dinosaur that has been found so far.  (The next largest was only about 1/40th its size.) 

Since the discovery of Archaeopteryx in the Victorian period, paleontologists have posited a link between extinct dinosaurs and modern birds.  (Indeed, extinct dinosaurs are now usually referred to as non-avian dinosaurs.)  But in the past several decades, scientists have been pushing the evolution of feathers further and further back, both temporally and phylogenetically.  Archaeopteryx was a kind of transitional form, whereas Yutyrannus huali is nested relatively deep within the Tyrannosaurid family tree.

But there's another, perhaps even more interesting reason why the relationship between this creature and the famed Tyrannosaurus rex is so important.  Previously discovered feathered dinosaurs resembled a chicken more than the creatures we usually associate with the name "dinosaur."  This most recent finding changes that completely.  Now we have something that looks very much like a canonical (non-avian) dinosaur, yet it appears to have been covered with a downy plumage!  Part of the reason this animal has made such a stir, I think, is that it licenses artists' renderings of the type pictured above.

Anchiornis huxleyi, a feathered dinosaur from Liaoning Province, China.

By way of contrast, consider the example of A. huxleyi (pictured above), which was also discovered in Liaoning Province, China.  In an article published in Science, researchers from China and the United States were able to infer the specimen's color pattern from microfossilization of melanosomes.  You've probably encountered the picture before, because it also made a big splash in the popular press.  Here, the animal cannot boast of a particularly impressive size, but scientists were able to endow it with a visually striking color pattern.

All of this obviously looks very different from how we used to think about dinosaurs.  The most famous and prolific visual interpreter of prehistoric animals during the heyday of dinosaur research around the turn of the 20th century was undoubtedly Charles R. Knight.  Where today's renderings show active, colorful creatures covered in feathers, Knight painted dim-witted, slow-moving, scaly, and drab reptiles.


T. rex battling a Triceratops, by Charles R. Knight. 
   
So, what's the relationship between material evidence and imagination in producing these illustrations?  Why have our visual renderings of dinosaurs changed so much over time?  I think the answer is neither just cultural -- artists are simply making it up as they go along -- nor is it just empirical -- artists are simply following the available evidence.  Rather, the two interact with one another in a very deep way.

In the comments section of the previous post, Hank, ST(res)S-ed out, and I have been arguing about Ian Hacking's ideas about "dynamic nominalism" -- the extent to which our interpretation of the world changes its material constitution.  I don't want to suggest that dinosaurs are historically constituted in precisely the same way that Hacking thinks people are -- that is, I don't think that what we think about dinosaurs actually changes the material fossils buried underground -- but I do think that *something* comparable to Hacking's dynamic nominalism is going on here.

You might ask yourself (as I often do): why were feathered dinosaurs (of the non-avian variety) not discovered until so recently?  Prior to the discovery announced in Nature today, you might have said: perhaps because feathered dinosaurs are relatively modest in terms of their size and appearance.  (Modest until you have the tools to reconstruct their plumage pattern, that is!)  But as I've already noted, part of what makes this discovery significant is that Y. huali is a close relative of T. rex.  The next obvious question, to my mind, is this: if big, impressive theropod dinosaurs like Y. huali had feathers, why didn't anyone notice until now?  Is it because paleontologists have only recently begun to research the evolution of dinosaurs in China, where feathered dinosaurs tend to be found?  Or is it because paleontologists simply weren't looking for feathered dinosaurs during the early 20th century, when the Western United States was understood to harbor the world's richest dinosaur quarries?

Figure 2 from the Nature article, images c-h showing "preserved integumentary structures," i.e., fossil feathers, in Y. huali.

If you read the Nature article carefully,  and especially if you examine the Y. huali fossil (pictured above) closely, you'll see that it's not at all obvious, at least not at first glance, that this creature had feathers.  So I could easily imagine someone preparing this specimen, trying to free the bones from the rock matrix, inadvertently destroying the fossilized traces of feathers.  Perhaps the only reason Y. huali was recognized to have had feathers is because by now paleontologists are actively on the lookout for them.  If that's right, then what's in your mind when you prepare a fossil for study and display may well have a significant impact on the material constitution of dinosaur bones.

All of this is to say that I am now anxiously awaiting the day when paleontologists working in the American west announce the discovery of a downy T. rex!