
Sunday, August 11, 2013

The High Quality Research Act: A Blast from the Past?

Melinda Baldwin, a historian of science interested in the development of peer review, has written a guest post about some interesting parallels between the High Quality Research Act and an older controversy about peer review at the NSF. You can learn more about her work here.

A few months ago, Hank and Lee shared some thoughts about the discussion surrounding the "High Quality Research Act," a bill drafted by Rep. Lamar Smith (R-TX), the current head of the House Committee on Science, Space and Technology. The bill would require the NSF director to pledge that funded projects are "high-quality" and benefit the American people, and it seems to be grounded in Smith's concern that the NSF is funding "questionable" projects. Shortly before a draft of the HQRA leaked, Smith had called Presidential science advisor John Holdren and acting NSF Director Cora Marrett before Congress to justify the NSF's spending decisions. 

Smith's repeated statement that he wanted to "improve" on the NSF's grant-awarding process raised hackles in the scientific community. Currently, the NSF relies on reports from referees—i.e., peer review—to choose which applications will be funded. What many observers found really alarming was the letter Smith wrote to Marrett, requesting copies of the referee reports related to five NSF grants that he felt were suspicious, all in the social sciences. 

The NSF headquarters building (Source: nsf.gov)
Reaction from the scientific community, and from politicians, was swift. Most of it played on the same theme: that peer review is a sacrosanct part of the scientific process and that Congressional interference would have dire consequences for the quality of research in the United States. Eighteen former NSF Assistant Directors signed a letter arguing that requiring the NSF to circulate peer review reports might "severely damage a merit review system that is the envy of the world." Smith's fellow Texan Eddie Bernice Johnson (D-TX) said Smith was "sending a chilling message to the entire scientific community that peer review may always be trumped by political review." And President Obama himself, in his April 29 address to the National Academy of Sciences on its 150th anniversary, assured listeners that he would work to protect "the integrity of the scientific process" from "political maneuvers." 

I recently began a project on the development of peer review in the twentieth century, so I am always interested when peer review pops up in the news. But the main reason the HQRA debate piqued my interest is that it's almost identical to a controversy about NSF funding from 1975. 

See if this sounds familiar. In 1975, Senator William Proxmire (D-WI) expressed concern about whether NSF spending was benefitting the American public. Proxmire named five NSF-funded grants that he said were "at best, of nominal value to the American taxpayer who foots the bill." He then called NSF director H. Guyford Stever before the Senate to defend the NSF's spending decisions.

Unlike Smith, whose proposals don't seem to have attracted much support, Proxmire found Congressional allies in Rep. John Conlan (R-AZ) and Rep. Robert Bauman (R-MD). Bauman proposed that the NSF should submit all grants for Congressional approval before promising any funding. Conlan, like Proxmire, believed that the NSF's grants were of minimal benefit to most Americans and that they were disproportionately awarded to elite private universities in the Northeast. Conlan quickly became one of the NSF's harshest critics.

Here's where things get interesting. One of the criticisms Conlan lobbed at the NSF was that the organization was ignoring referee opinions when it made funding decisions. In other words, Conlan set Congress forth as the defender of peer review.

Inside the review process (Source: Wikimedia Commons)
Conlan's criticisms appeared to strike a nerve with the NSF. In the early days of the organization, referees were seen as advisors to the division directors, but the directors retained the power to fund projects with lukewarm reports or reject proposals with enthusiastic ones. This was typical of peer review processes in the 1940s and 1950s, which tended to give editors and grant organization employees the power to accept or ignore the referees' advice as they saw fit.

But by the 1970s, the attitude towards peer review in the United States had changed. Peer review was increasingly being linked with scientific legitimacy. (Figuring out how and why this happened is the goal of my new project.) The idea that NSF directors might award a grant to a proposal with lukewarm referee reports was less acceptable to the scientific community—and to the public. NSF officials responded to the 1975 criticisms by placing more responsibility for decision-making on referee reports. A new audit office at the NSF was created to ensure that directors placed appropriate weight on positive and negative reports.

Director Stever then used the peer review reforms to justify rejecting Proxmire, Bauman, and Conlan's other proposals. Stever and other NSF officials argued that having proposals reviewed by experts was the best and only way to decide which projects should be funded. No further scrutiny was needed to guarantee that good science was funded and poor science was not, especially scrutiny from non-expert reviewers such as members of Congress.

The strategy was successful; the suggestion of Congressional review for NSF grants was dropped. Essentially, the NSF and Congress agreed to place their trust in peer review in order to determine how the NSF's chunk of taxpayer dollars would be spent.

Image from strange-matter.net (used with permission)
Are the parallels between Smith's ideas and the 1975 proposals just an amusing case of déjà vu? Maybe, but there are some differences that are worth noting. In 1975, the criticisms of NSF spending led to Congress placing more trust in the NSF's peer review process. Thirty-eight years later, Smith seems to think that this trust might have been misplaced. In a May 3 interview with Science, an anonymous Science Committee aide explained that the HQRA was designed to add an extra step in between the referee reports and approval of the grants. As he put it: "There is a step between peer review and the awards being made, and somewhere in there, Congress is saying, 'We think an additional step is needed to solve the problem of so many questionable grants being awarded.'"

In other words, Smith's office seems to think that NSF reviewers can't be trusted to approve good projects and reject inadequate ones. An extra step is needed to make sure nothing "questionable" receives funding.

Does the HQRA signal that people outside the academy are losing their trust in scientific peer review? Actually, I think it's just the opposite. The speed with which the HQRA appears to have died on the vine suggests that public faith in peer review is still quite robust. Notably, in the interview I linked above, the Science Committee aide went out of his way to convince the reporter that the HQRA was not interfering with peer review itself. No one seems to think that attacking peer review is going to be a winning strategy.

In fact, trust in peer review might just be stronger outside the academy than within it at the moment. Scholars in many fields have written volumes on whether peer review actually works the way we want it to. I will be interested to see if the furor surrounding the HQRA dampens these kinds of critiques. When "trust in peer review!" has been such an effective rallying cry for the pro-NSF crowd, will scientists and other scholars want to criticize their best defense against Congressional interference?

Wednesday, August 7, 2013

Steven Pinker's New Scientism

Yesterday, The New Republic published a big article by bestselling Harvard psychologist Steven Pinker. The title says it all: "Science Is Not Your Enemy." Or does it? After all: whose enemy is science supposed to be? Pinker's answer is there in his subtitle: the targets of his "impassioned plea" are "neglected novelists, embattled professors, and tenure-less historians."

Steven Pinker (Source: Wikimedia Commons)
Humanists: according to Pinker, science isn't your enemy—it's your friend. Or your extremely successful younger sibling. Its methods and results are yours if you want them—all you have to do is ask. The problem is: you don't want them—you shy away from science, or reject it outright.

Pinker's got a solution, and he's calling it "scientism."

As Pinker points out, "scientism" is a term of abuse. It's usually hurled at "reductionist" efforts to propose scientific solutions to all sorts of problems. And, as a barb, it's often hitched to times when bad politics wore a scientific mask—Social Darwinism, say, or eugenics. According to Pinker, this is how some people paper over ignorance and fear of the sciences.

By appropriating the term, Pinker hopes to wipe the slate clean (sorry). He sees the new "scientism" as a campaign to both "export" scientific ideals to "the rest of intellectual life" and add scientific ideas to the stock of existing "tools of humanistic scholarship." I'll come back to both this idea of exportability and the metaphor of the toolkit in a bit.

But first: why all the fuss? Pinker's "scientism" is supposed to help solve the widespread (if perhaps unwarranted) sense that "something is wrong" with the humanities. As Pinker points out, "anti-intellectual trends" and "the commercialization of our universities" are part of the problem. But so is "postmodernism"—in a sense, the humanities have made their own bed.

John Brockman—a self-described "cultural impresario" about whom I've written before—shares Pinker's sense of what's wrong. In the preamble to a re-posting of Pinker's piece, Brockman is even more polemical: "the official culture" has "kicked [science] out" and "elite universities" have "nudged science out of the liberal arts undergraduate curriculum." He sees scientific intellectuals—bestselling authors, MacArthur fellows, TED talkers—as a sort of renegade "subculture."

John Brockman at DLD (Source: Wikimedia Commons)
Does this sound right? It seems to me that, even within the academy, work that spans "the two cultures" is consistently rewarded—most obviously, with prizes and grants. The cutting edge is often that which is most engaged with the sciences. Say what you want about the digital humanities or experimental philosophy—they seem to be doing alright for themselves.

Interestingly, what Pinker points out as quintessentially humanistic modes of inquiry—"close reading" and "thick description"—stemmed from precisely this sort of engagement. Stefan Collini and John Guillory have revealed the roots of "close reading" in interactions between literary critics and scientific psychologists in the 1910s and '20s. And we owe "thick description" to Clifford Geertz and the cross-pollination of anthropological field-work and cultural history in the 1960s and '70s.

It could be that something similar—a new paradigm, even—is emerging from the adoption of digital tools, statistical methods, and fMRI scans by humanists today. Or not. The point is that such engagement is going on, and has a legacy that spans the twentieth century—on either side of C.P. Snow's "Two Cultures" diagnosis fifty years ago.

But I don't want to rest on rejecting Pinker's premise. Whether or not the humanities are in crisis, lots of people think they are—and many agree with Pinker that the sciences might offer a way out. What I want to highlight is the consequences of imagining this interaction in the terms I noted above: the "export" of ideals or the "toolkit" approach to rapprochement.

This view of intellectual life is a common one, well-illustrated by the title of a recent book by the philosopher Daniel Dennett: Intuition Pumps and Other Tools for Thinking. It's no accident that Dennett is a leading philosopher of evolution: this view of cognition as tool-using is profoundly Darwinian. As a result, it represents, all by itself, the success of a particular scientific "export."

Daniel Dennett (Source: Wikimedia Commons)
This model of human agents—as embedded bricoleurs doing their best with the cultural resources ("tools") at hand—is something we've argued about on this blog before. And it might well be the correct view. It's certainly a very compelling one. Pinker, Dennett, and many of their peers in cognitive science and human evolution adhere to it.

And so do humanists—or at least historians. Limiting ourselves just to the history of science, let's think over how the human agents at the heart of recent works are characterized. For the most part, I'd argue, they're painted in a light very similar to the Pinker-Dennett-evolutionary model.

It wasn't always this way, though. Time was, there were earnest efforts by historians to cast human actors in Marxist or Freudian—rather than a Darwinian—roles. In the last half-century, however, such accounts have gone the way of the Dodo, leaving us with one that's extremely assimilable to reigning scientific views.

Here's the rub. Pinker might be right about "two cultures" angst. But in adopting the toolkit model, he's also put his finger on a prevailing assumption that ties the two sides together. This might explain both the promise and the peril perceived in the sort of "scientism" he's proposing. Such shared assumptions are essential for bridge-building. But if humanists are uncomfortable with them, then the theory of agency underlying our accounts might merit further scrutiny.

Wednesday, July 24, 2013

Academic Publishing, the AHA, and the Ratchet Effect


On Monday, the American Historical Association published an official statement urging graduate programs and university libraries "to adopt a policy that allows the embargoing of completed history PhD dissertations in digital form for as many as six years." The statement goes on to note that "History has been and remains a book-based discipline." However, the increasingly common practice of requiring that completed dissertations be posted freely online may make it more difficult for recent graduates to secure a publisher. This, in turn, could make it much more difficult for young scholars to earn tenure.

As the comments section that follows the AHA's online publication of its statement against online publishing indicates, this strikes many as a backwards-looking strategy. As I have argued myself in a previous post on this blog, scholarly publishing is clearly moving online. And as it does so, the nature of how we consume, share, and disseminate knowledge is certain to change. So why not embrace this trend rather than desperately try to hold on to an outdated, 19th-century version of print culture?


The answer, of course, is that although many of us are eager to publish our work freely online, it seems wrong to endanger the tenure prospects of a whole generation of scholars whose only crime was to have finished their PhD's during a time of transition and upheaval. It is laudable for the profession to embrace change. But we should not expect its most vulnerable members to be on the vanguard, leading the charge into an uncertain future.

But does that mean the profession can't embrace change? Couldn't the change we all seek come at the level of hiring and tenure committees instead? Answering these questions is far from straightforward, and it requires a small detour through what might be called the "ratchet effect."

I first heard the term "ratchet effect" in conversation with the philosopher Peter Godfrey-Smith, who described it as one among many potential mechanisms that drives cultural evolution. The ratchet effect will take hold anytime that cultural change is biased to drift in one direction rather than another. Take, for example, the case of airport security:

On a recent flight from Barcelona to Boston, I was surprised to find passports being checked at the gate of my connection in Zürich even though the Swiss border control had already inspected my documents when I entered the international terminal. Doing so added considerably to the time that it took us to board, and, to me, it seemed ridiculously redundant. But there is nothing in the least bit surprising about it. In the wake of September 11th, there was a huge push to tighten the security around American airspace, and a few minutes of extra wait time seemed like a negligible sacrifice to make.

Of course, a long time has passed without a similar incident of in-flight terrorism, so, for most of us, the cost-benefit analysis may have changed. But who is going to spearhead the movement to loosen airline security? After all, doing so would mean incurring the risk of being blamed if another disaster did occur in the future. Hence, airline security is subject to the ratchet effect. It is much easier to tighten security than to loosen it, which gives us something to think about when we are stuck in what seems like an interminable queue.

Although its outcome is often annoying, the ratchet effect operates all around us, influencing everything from the evolution of the Republican party to the career trajectories of young historians.

At the same time that we have witnessed an upheaval in print culture, historians have also engaged in much hand-wringing about two interrelated and lamentable trends. Not only is it taking PhD students longer and longer to earn their degrees; they are also having a harder and harder time finding gainful employment. The relationship between these two trends is no less disturbing for being obvious: because jobs are harder to find, it makes sense for people to spend more time lingering in their PhD programs. By taking an extra couple of years to write their dissertations, they not only increase the amount of time they can spend on the market. They are also able to write better and more polished theses, thus giving themselves a leg up once they actually graduate.

The problem, of course, is that we are all playing the same game. Thus, we are caught up in a ratchet effect. As people spend longer writing their PhD and produce a more polished thesis, the basic requirements for securing a tenure-track job go up for the whole profession. For all practical purposes, it is simply no longer possible to land a permanent position with the kind of CV that was perfectly standard a generation ago. Rather than a completed dissertation and good letters of recommendation, you now need one or two published articles and a thesis that is well on its way to the book manuscript. Indeed, as more and more people also spend several years as a post-doc, it is not at all uncommon for recent hires to have a book contract in hand by the time they start their first permanent job. Sometimes, the book has already been published. This is, as they say, the new normal.

I read the AHA's position on the online publication of PhD theses as a good-faith reaction to the ratcheting up of publication requirements for young scholars. But wouldn't it be better to try and bring things down a few notches instead?

What I'm about to suggest is pretty draconian, so let me preface this by saying that I mainly put it out there as a contribution to a vitally important conversation.

What if we could use the move to online publishing as an opportunity to address the time-to-degree problem head-on? One way to do so would be to move to a more UK-style model, in which students are expected to write their PhD theses in 2-3 years (after having completed the relevant coursework, which in the US would result in roughly 5-year PhD programs). This would mean lowering expectations on PhD theses somewhat. Rather than a polished first draft of the book manuscript, the thesis would be an academic exercise, freely available on the internet, meant to *prepare* students for the task of writing a book rather than being a version of that book itself.

One virtue of such a move comes from the fact that the stagnant job market in the humanities is unlikely to change, meaning that many qualified people will fail to find a permanent teaching position. Although my proposal would not change that, at least it would mean that most recent PhD's would be about 25 to 30 years old. My sense is that it is easier, and preferable, to make the difficult choice of leaving the profession at 30 years old rather than five to ten years down the line.

Another virtue is that it would take some of the pressure off the writing of the PhD itself. It strikes me as foolish to expect people to write a polished book manuscript on their first try. Better to learn your craft in the context of a long-form exercise in which you can experiment and make mistakes. Then, after you have defended, you can decide if you want to have another go at the same topic (this time knowing what you wished you had known the first time around), or you can choose to go with something new (this time knowing much more about how to pick a topic and design an argument).

Although others, including Louis Menand, have proposed similar measures, there are significant drawbacks to going this route.

One major problem with my suggestion about reducing time to degrees is that it does not go far enough to solve the problem of the ratchet effect. Because there are so many more talented historians with a PhD than there are permanent teaching positions, hiring committees would still be free to choose from a pool of remarkably accomplished applicants. That is, even if we suddenly forced students to complete the PhD program in five years, what's to stop them from spending several years writing articles and polishing their thesis after they graduate? One thing I certainly do not want to do is advocate that the humanities go the way of the sciences, in which it has become standard to spend 5-10 years on the post-doc circuit building up a publication record before entering the tenure track.

Because of the ratchet effect, my proposal would only succeed if senior scholars commit to preferentially hire recent graduates. And this is where things get really draconian, because doing that would mean telling huge numbers of talented and deserving people who have been on the market for a number of years that all of a sudden they are out of the running for permanent positions. That's a pretty bitter pill to swallow. So bitter, I think, that the AHA's backwards-looking position on online publishing starts to make a lot of sense. 

Monday, July 22, 2013

Winner! The US T&C: Examining Law and Expectations in Our Digital World

A few weeks ago, in the wake of the Snowden Affair, I announced a contest to write a new social contract modeled on terms of service. Terms of service are, of course, the things most of us click through without reading when either signing up for a web-based service or installing a piece of software. There was doubtless something sarcastic, even cynical, about this contest.

A visualization of the US Internet, a web of technical and social bonds,
which, like all such bonds, include expectations.
(Source: National Science Foundation)
Today, I would like to announce the winner: Tall White American Male (Twitter: @TallWhiteMale), a resident of Chicago, who penned a proposed US Terms and Conditions. So, congrats to Tall White American Male. As spelled out in the contest announcement, he'll receive this remarkable shirt.  I have pasted his winning entry below, but before we come to that, I want to discuss why I held the contest in the first place.


Some people have asked me what this contest has to do with the history of science and technology, or science and technology studies (STS) more broadly, and whether it is a symptom of my resignation to ubiquitous surveillance. My answer to the first point is that STS may offer insights about these NSA programs and that imaginative, speculative writings are one way to address our current plight.

The strand of STS that has the most to say to discussions about the NSA programs is the one focused on the law, and this means, most centrally, the work of Sheila Jasanoff. Perhaps the best place to start is her essay "In a Constitutional Moment" (paywall). This usage of "constitutional" plays the well-worn postmodern game of deploying a word in a purposely ambiguous way. Here, constitution refers, on the one hand, to the written Constitution and unwritten legal codes and, on the other, to the ways in which we constitute—or make—the world, either by creating scientific pictures of reality or by building technological systems. The point is that there is a dynamic interaction between (written or unwritten) norms and scientific theories/technologies. The formal constitutional debate and lawsuits about the NSA programs have hardened around whether searches of metadata violate the 4th Amendment, but it is how the NSA programs run up against the informal, unwritten norms of the Internet that is most interesting.

One irony of the Snowden Affair is that many people consider the Internet a space for freedom. Some describe it as a technology that has liberty written into its "code," and enthusiasts have celebrated how demonstrators and activists have used the Net to resist repressive governments. Yet, critics, like Evgeny Morozov, have mocked these ideas, arguing that this technology can just as easily be used for authoritarian ends. Not surprisingly, the NSA programs have enraged technolibertarians and heralds of Internet freedom. Interesting People, a large email list that has many members who (literally) helped create the Internet, has experienced a torrent of emails expressing anger and dismay. The NSA programs conflict with the norms, values, and expectations that many people have for and about this relatively young network technology.

The relationship between legal norms and technological change plays out in many different ways. One classic picture of the relationship is William Ogburn's notion of cultural lag (first articulated in 1922), which holds that technology often moves faster than laws, customs, and norms. Tradition lags behind invention. We have already seen some people spell out cultural lag arguments in response to the NSA programs. For example, Andrew Couts published a blog post that clearly states its position: "Restoring a Law from 1879 May Be Impossible with Technology from 2013." The url for the post puts the point more baldly: "Restoring the Fourth a Digital Age Pipe Dream." We will doubtless see more such arguments. Yet, Jasanoff has argued in the past that the idea of cultural lag is usually wrong because the law has almost always foreseen technological and legal possibilities. It will be interesting to see which historical interpretation of this point dominates down the road.

This issue of future historical interpretation brings me to something else I hoped to address through my post announcing the contest, namely the issue of education, social reproduction, and the public understanding of technology. In my original post, I imagined the US Terms of Service, which would state a user's privacy expectations, taking a central role in high school civics classes. Will those of us who lived through the immediate aftermath of 9/11 teach children that constant surveillance is simply a normal and assumed part of social reality? A connected issue is the relationship between consumerism and citizenship. The Internet arose from government and academia, but it has increasingly become a tool used for entertainment. (Some accounts suggest that Netflix takes up to 30% of US bandwidth in the evening; check out the fascinating image at the bottom of this page.) Yet, concerns about Internet privacy go to the heart of our citizenship. If we add Internet literacy to our schools' curricula, as many argue we should, should we put those lessons in home economics or in civics? This is partly what I was hoping to get at by suggesting a US Terms of Service, as if a kind of contract we use as consumers could come to define our lives as citizens.

Other imaginative works, or speculative fictions, that might further help us think through our current situation have occurred to me over the last few weeks. My wife and I are about to have our first child, and I have been preoccupied by thoughts of how I will explain our world (digital and otherwise) to my daughter. I pitched the idea of a book that would describe the NSA programs to children to the speculative fiction author Andri Magnason via Twitter. Magnason, who primarily works in allegory, responded, "...a world made of glass. Everything is visible, traceable, readable, and everything leaves traces and tracks, even thoughts." I was thinking of something more realistic, something for the kiddie non-fiction section, something that would begin "Once upon a time, some criminals attacked the United States" and would end, ". . . and so they watch." It could be called Your Friendly Watchers. Another possibility that I have discussed with friends would be an alternative history novel that imagines the fate of the Civil Rights Movement in the 50s and 60s if the feds had PRISM. Could we guarantee that these tools would not be turned on groups of US citizens if we entered a period of social strife? I think not. But American Science, a blog dedicated to the history of science and technology, is not the space for these kinds of writings. Therefore, this will likely be the only time this kind of writing contest will be held here.

And so we come to the winning entry . . .

********

The US T&C as Proposed by Tall White American Male

These Terms and Conditions for Citizenship evolved from the United States Constitution, the Bill of Rights, and the Declaration of Independence, and are the terms of service that establish and govern reasonable expectations to be held by citizens of the United States.

The Declaration of Independence identified an inalienable right to “Life, Liberty, and the pursuit of Happiness”, and the Founding Fathers declared “that the form of government which communicates ease, comfort, security, or, in one word, happiness, to the greatest number of persons, and in the greatest degree, is the best.” The events of September 11, 2001 and subsequent threats to the American homeland present an enduring danger to the Happiness of the American people. It is this inalienable right that the Terms and Conditions for Citizenship seek to protect.

By using or accessing methods of electronic communication including but not limited to telephone, email, camera, internet search engine, web browser, and social media platforms, citizens agree to these Terms, as updated constantly in accordance with secret laws, secret courts, and secret decisions. By utilizing the rights and privileges of US citizenship, citizens agree to abide by these Terms and Conditions in perpetuity.

The nature of this ongoing threat to American happiness requires extraordinary measures be taken by decision makers, and necessarily places limits on expectations of privacy. Data, in their many forms, are an essential element of electronic communication, and the monitoring of this element is critical to the cause of guaranteeing the safety, liberty, happiness of the American people. All data are to be considered critical to this cause unless otherwise specified, and while the product of communications between one or more citizens, are exempt from traditional notions of privacy. Similarly, present and future conditions are such that control and ownership of all forms of data cannot be left to the individual citizen. Data are crucial to the maintenance of the common defense and the general welfare, and must be safeguarded by the commonwealth. 

Surveillance of the people, by the people, and for the people will secure the benefits of liberty to this and future generations.

Friday, July 5, 2013

A Contest for Writing the New Social Contract: The US Citizens' Terms of Service?

Social contract theory—the idea that each person (implicitly or explicitly) agrees to a set of rules, rights, and duties by choosing to live in a society—has rested at the heart of Western political thought for the last three to four hundred years. The fallout surrounding the Snowden Affair and the NSA snooping programs that it has unveiled can be seen as a brouhaha over a social contract. The aggrieved feel that they had signed onto an agreement, say, The Bill of Rights, which they believe these programs violate.


Most of the discussions I have heard so far focus on how we can ensure proper oversight of the NSA's programs, either through courts or through Congress. Many express skepticism about the viability of such oversight systems, however. Who will watch the watchers? And who will watch the watchers' watchers? I'm with the skeptics here. I have little faith in systems of oversight, so I do not think they are the place to put our focus.

There's another option: we could write a new social contract that reflects the technological reality of the Internet and governments' use of it for intelligence and law enforcement.* This new social contract could be modeled on Terms of Service, the agreements users click through when they sign up for services (like Facebook and Twitter) and agree to use products (like Adobe Professional or Apple iTunes). In honor of our national celebration of Independence Day and the adoption of the Declaration of Independence, I will hold a contest to see who can write the best new social contract.


Members of the intelligence community will tell you that people are often of two minds about intelligence gathering. When something terrible happens, like when the Tsarnaev brothers bombed Boston, people want government agents to have extraordinary capabilities to gather data and find those responsible. Yet, people also want to have complete privacy when it comes to certain matters and certain "places," like their email boxes.

We probably cannot have it both ways. For example, we would expect law enforcement agents to interview acquaintances of suspected criminals. The question is how law enforcement will find out who those acquaintances are. Now, law enforcement officers can discover criminals' acquaintances by feeding criminals' cellphone metadata into a computer program and using an algorithm to find other cellphones (that is, people) that were regularly in the same location as the person(s) under investigation. This strategy doubtless leads to false positives and potentially to government agents hassling innocent people.
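The co-location idea can be sketched in a few lines of Python. This is a purely hypothetical illustration with invented tower/hour records, not a description of any actual law-enforcement system; the record format and threshold are assumptions for the sake of the example:

```python
# Hypothetical sketch: find phones repeatedly co-located with a target phone,
# given metadata records of the form (phone_id, cell_tower, hour).
from collections import Counter

def co_located_phones(records, target, min_matches=3):
    """Return phones seen at the same tower in the same hour as `target`
    at least `min_matches` times."""
    # All (tower, hour) slots where the target phone appeared.
    target_slots = {(tower, hour) for phone, tower, hour in records if phone == target}
    # Count how often every other phone shared one of those slots.
    counts = Counter(
        phone
        for phone, tower, hour in records
        if phone != target and (tower, hour) in target_slots
    )
    return [phone for phone, n in counts.items() if n >= min_matches]

# Toy metadata: phone "B" shares three tower-hours with "A"; "C" shares only one.
records = [
    ("A", "t1", 9),  ("B", "t1", 9),
    ("A", "t2", 10), ("B", "t2", 10),
    ("A", "t3", 11), ("B", "t3", 11),
    ("A", "t4", 12), ("C", "t4", 12),
]
print(co_located_phones(records, "A"))  # → ['B']
```

Even this toy version shows where the false positives come from: anyone who merely works or commutes near the target will eventually cross the threshold.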

More important, as many have pointed out, it is not clear that the traditional system of getting a warrant for a search fits these technical procedures well. In principle, judges hand out warrants when law enforcement officers can show that they have good reason to make a search. (Many would claim that the warrant system has often been abused.) But government agents use metadata to discover WHOM THEY SHOULD SUSPECT, and this process involves everyone's information, even yours and mine. How would warrants work in this case?

One solution to this conundrum would be to have people sign an agreement, the US Citizens' Terms of Service, which makes clear that their online activity is not private and that all of it is open to government examination at any time. These Terms of Service would make explicit our political and social reality, and allow the NSA and other agencies to continue their practices without anyone having hard feelings. (Or is it that the aggrieved are experiencing cognitive dissonance? The Terms of Service would alleviate that, too.)

There are many, many issues that would need to be resolved before the US Citizens' Terms of Service could be adopted, however. For example, we could imagine a lively debate that would mirror long-lived arguments amongst Christians about the proper age for baptism. How old should someone be before he or she clicks through the national Terms of Service?

Can we imagine an event modeled on the christening, wherein, when a child is a few months old, his or her parents go to the courthouse and use a mouse to tab through, and mostly not read, the terms in front of a judge? That way mom and dad can let the iPad babysit the child without the young one laboring under the notion that playing online with SpongeBob or Dora the Explorer or whoever is private. Grandma and Grandpa and other family members and friends could go to the courthouse and take smile-filled photographs of the momentous occasion.

Or perhaps our thoughts would align closer to where the Baptists come down on baptism: a person should be of an age where he or she can make an adult decision about whether he or she agrees with the Terms of Service. Perhaps an educational unit and test on the US Citizens' Terms of Service could be added to ninth grade Civics classes. Students would receive a certificate and the warm applause of proud parents on the day that they are finally allowed to click through, and not read, the national user agreement. Each child would come to a podium and take the mouse in hand; the computer screen would be projected on a large screen over the stage; and the audience would cheer and whistle as the amplified clicks echoed through the school gymnasium.

We will also need to decide how often citizens must renew their agreement to the terms. For instance, we can imagine the Federal Communications Commission mandating a technology standard under which every web browser installed on a computer in the USA would require users to accept the US Citizens' Terms of Service each time they log on.

And there are so many other details that will need ironing out.

For all of these reasons, I propose a contest, a contest for the writing of the new social contract. Contestants can either put their submissions in the comment field of this post or email them to me at leevinsel@gmail.com. The submissions should be modeled on the terms of service of Facebook and other such companies; they should be filled with the kinds of inscrutable legalese that make terms of service so endearing.

The winner will receive this t-shirt.

I will leave it to the reader to decide whether the shirt is ironic. Of course, the t-shirt is simply a token of appreciation. The real reward will potentially come when the US Citizens' Terms of Service is adopted as a central political document of the USA and the writer joins the ranks of the Founders and other great citizens of the nation's history.

I will announce the winner on July 18th, that is, two weeks after Independence Day. This contest will be exceedingly informal and will mostly reflect my own prejudices. I may involve the other team members of this blog in the judging, but they are busy people and may not have time.



* One silly argument that people whom Evgeny Morozov calls "Internet centrists" might make is that the Internet is such a "radical innovation" that it has somehow undone all previous social contracts, including the Constitution and the Bill of Rights. We can imagine them making allusions to Joseph Schumpeter and/or William Ogburn's notion of "cultural lag." I don't see any good reason to go along with this line of thought. The Internet has not so radically shifted things that we need to rewrite the rules of, say, when governments can search our residences.

Tuesday, July 2, 2013

The NSA and Tech Change, Part II: The Dialectic of Strategy and Counter-Strategy

Nathan Andrew Fain's comment on my last post was so interesting, I thought I would respond to it here. In that post, I briefly explored—and mostly asked questions about—how the NSA's programs, like PRISM, may be shaping technological change. As many know, there is a long—several hundred year—history of defense spending and priorities influencing science and technology, and I wanted to ask how government surveillance programs might do the same. 

In his comment, Fain considered the flip side of my point, namely how the Snowden Affair might encourage others to change technologies. He wrote, "The NSA programs, or more accurately the revelation of them, will push in ernest [sic] the development of subversive technologies." He went on to talk about John Gilmore and the cypherpunk movement, which sees cryptography and the avoidance of surveillance as potential loci for social change. I knew nothing about this movement, know little more now, but am hoping to learn, first by reading this book. Fain's comment is fascinating, and I encourage everyone to read it as well as to check out his website, deadhacker.com.

I'd like to examine Fain's comments through the lens of technology studies by thinking for a moment about strategy and counter-strategy and how this dynamic shapes technologies and the practices that surround them.


When I was in grad school, I spent a good bit of time wondering how hacking influenced technology. This was during the time that I was reading Cyril Stanley Smith, who talked about how inventors and innovators often have a tacit connection to their medium earned through a great deal of experience. This connection leads to a sense of "play"; invention becomes a kind of second nature. Smith's account reminded me of hackers I knew, who seemed to have an easy and fluid relationship with computing and who enjoyed nothing more than the thrill of doing what was not to be done. But did hacking do anything (technologically) more than stress out systems managers and induce better security programs? Before Fain's comment, I had not considered the inverse of this dynamic: that ever-expanding surveillance systems fostered technologies of concealment, and that it wasn't only criminals and terrorists who wanted to escape detection but also techno-libertarians, cyber-anarchists, and the like.

The dialectic of strategy and counter-strategy is an essential part both of technological change and of changes in how we use technologies. The phenomenon is as true of business as it is of war, but I will give a few examples from the latter. In the Vietnam War, the United States found new strategies for the helicopter, especially through the famous 1st Cavalry Division. Helicopters enabled novel kinds of troop movements and air support during battles, but the Viet Cong quickly adapted to the technology. They would sit and wait for helicopters to come in, before lighting them up as they neared the ground, turning the vehicles' inhabitants into sitting ducks. In another, perhaps apocryphal, example, the M1 Garand rifle that US soldiers used in WWII made a loud 'ping' sound when it had run out of ammunition. In close-range combat, Japanese soldiers would wait to hear that sound before rushing the US troops. The US soldiers developed a counter-strategy, however. Working in two-man teams—a sniper and an assistant—one soldier would use the rifle to make the 'ping' sound. When the Japanese soldiers began their charge, the sniper would already have their position lined up in his sights. While these two examples focus on changes in practices, there are plenty of examples of strategy and counter-strategy shaping technological systems themselves, such as when, during WWII, scientists at Harvard realized that radar was under development at the MIT RadLab and playfully jammed the signal from across the Charles River.


It seems that Fain is almost certainly right. The revelation of the NSA's programs will be a watershed moment for many people, some of whom will actually work to produce new technologies for maintaining privacy. I think the real question is whether people will adopt these systems of cryptography and use them in everyday life. In a long theoretical essay that I finished recently and will probably never publish, I spend a lot of time discussing how scholars in technology studies have concentrated for too long on how technologies are "constructed," or achieve their final form. Often the more important issue is whether technologies are adopted, especially whether they are adopted on a massive scale. At this point in time, sadly, economics is more helpful than history or sociology (because people in the latter fields have talked too much about construction). One significant exception is the work of the rural sociologist and communications scholar Everett Rogers, whose Diffusion of Innovations (1962 and many subsequent editions) is still the gold standard for studies of technological adoption. Price and effort are always important factors in whether potential users adopt a technology, but other factors can also play a role.

Being pissed off could be one such factor that trumps cost and effort, but keeping information secret takes time, discipline, and at least a modicum of technical know-how. We live in a society where barely anyone reads terms of service and dwell in a land of flashing DVD player/microwave oven/cable box clocks. I have recently seen tech savvy people, such as the members of mailing list Interesting People (also the interesting account here), sharing public keys. What percentage of the population will be willing to go to such lengths to protect their privacy? (I foresee a study, if one hasn't been done yet, where economists push people to put a monetary value on their privacy and find that the value is $0. Not that such studies tell us much of anything at all.) Also, as one friend put it after reading Fain's comment, "The NSA has 50 nerds for every one of the cypherpunk nerds."
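For readers curious what that modicum of technical know-how looks like, here is a toy sketch of the mathematics behind a key exchange of the sort cypherpunk tools build on, using only Python's standard library. The prime and generator below are demo values of my choosing, far too small for real use; actual systems rely on vetted cryptographic libraries and authenticated protocols:

```python
# Toy Diffie-Hellman key exchange (illustrative only, not for real use).
import secrets

P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a small prime modulus for demo purposes
G = 5                   # generator (assumed adequate for this toy example)

def keypair():
    """Generate a (private, public) pair: public = G**private mod P."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)  # fast modular exponentiation
    return private, public

a_priv, a_pub = keypair()  # Alice
b_priv, b_pub = keypair()  # Bob

# Each side combines its own private key with the other's public key.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b  # both arrive at the same shared secret
```

The math takes a dozen lines; the discipline of key management, verification, and daily use is the part barely anyone sustains, which is exactly the adoption problem at issue here.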

Yet, these last thoughts are getting me off track. The question of my last two posts has been this: How are the NSA programs influencing technological change? Fain must be right to point out that to answer this question we should look not only at the NSA programs themselves but at how people are reacting to those programs. To add one final thought, my last post argued that we should think about how knowledge produced through the NSA's programs spills over into other sectors of society. To be perfectly symmetrical, we should also attend to spillover from the efforts of cypherpunks and other such dissidents. How will the technologies and practices they produce come to influence even those in society who are too apathetic and lazy to work for their own privacy?

Monday, July 1, 2013

The National Security Agency and Technological Change

This post builds on the one Lukas put up last week. Most commentaries on the Snowden Affair, PRISM, and the other NSA programs that have come to light have focused on whether these programs are constitutional, whether Snowden is a hero or villain or something else, and, now, what these programs will mean for US foreign relations. I have also heard people ask how any of us could be surprised by these programs, and for a few days, people spent a lot of time talking about Snowden's girlfriend's pole-dancing skills. In other words, the Snowden Affair has all the markings of a major American media event.


In this post, I'd like to exercise the historian's prerogative by exploring how these NSA programs fit into a longer historical trajectory, namely how government spending and procurement influence technological change.

The history and sociology of science and technology are full of well-known stories of how government funding affected the direction and growth of technological innovation. The best-known stories in the United States concern the technical advances made at MIT, Harvard, and Los Alamos during World War II and the wide variety of scientific breakthroughs and technologies that emerged from Cold War defense spending. (Mark Buchanan recently put up an entertaining post about the many technologies that ultimately have roots in government spending.) There are many earlier examples in the United States from WWI and even the 19th century. Of course, any comprehensive history of the military-science-technology relationship would have to go back much further: in the West, from 18th-century French scientific societies back through da Vinci to at least Archimedes.

We can assume that spending on intelligence and the technology that undergirds it exploded after 9/11. 9/11 was to the surveillance-industrial complex what Sputnik was to Cold War sci-tech funding. It would be interesting to know whether the programs that developed after that date were merely extensions—if massively scaled-up extensions—of things that were already in the works. It would also be interesting to know how many new programs developed after that date (versus building on old programs).

But it would also be fascinating to learn how these programs have influenced technological change, if at all. Do fundamentally new and largely unknown computing technologies lie behind the NSA's capabilities? Are these capabilities mostly the result of hugely scaling up technologies that are already well known (server farms, data mining algorithms, etc.)? Or will we look back at the NSA's programs as greatly changing computing technologies? If so, which companies would have produced these technologies for the agency? Mostly defense contractors? Or mostly computing firms? Or might the government have its own internal R&D shops?

Economists and historians often examine "spillover" to see how government, typically military, spending ends up influencing the broader economy. To the degree that new technologies, processes, and techniques are being developed through these programs, it will likely be very difficult, for several reasons, to determine down the road how much these things have moved into the domestic sector. First, the NSA can likely prevent the spread of new technological systems (if truly new technologies, like quantum computing, are part of the programs). But the agency cannot easily stem the dissemination of the experience and tacit knowledge that people will gain by working in these programs. People will move to other jobs and take their experiences in, say, developing data mining algorithms with them. Again, the movement of this knowledge will be very difficult to track.

Second, contractors, like Snowden, do a significant portion of US intelligence work. Today, on Meet the Press, Rep. Nancy Pelosi said that the Obama Administration has done a great deal to decrease the role of contractors in classified projects. I don't know where Pelosi is getting her information, but my instinct says that she is overestimating the decline of contractors under Obama. As long as intelligence remains tied to the use of enormous computer networks, contractors will likely continue to play an essential role. Even in the midst of news about Snowden, we have learned more about Amazon's contract to build cloud computing infrastructure for the CIA. Booz Allen Hamilton employees and other such consultants and contractors will take the lessons they learn in working for the NSA and apply them elsewhere. I'm sure the opposite is also true: the contractors are bringing lessons learned from private industry and using them for the intelligence agencies. Indeed, in terms of the movement and synthesis of knowledge between the private and public sectors, these contracting firms are likely important nodes that historians and sociologists would do well to examine . . . if they ever can . . . all of this is veiled in such horrible secrecy. In this case, however, secrecy might also have dire implications for our ability to study the realities of US innovation policy, since the surveillance-industrial complex will have an unknown relationship to technological change and economic growth.

What I keep wondering is how we will see these things in ten or twenty years. Will we see the NSA's influence on technology as we now see Cold War sci-tech funding, that is, as a hugely important source of technological change and knowledge production? Or will the NSA's programs seem like just another (ultimately boring) application of "big data" and the "app economy"?

If any readers—especially those readers with deep knowledge of computing and/or computing history—have thoughts about the relationship between the NSA's programs and technological change, I'd love to hear them.