American Science: July 2013

Wednesday, July 24, 2013

Academic Publishing, the AHA, and the Ratchet Effect


On Monday, the American Historical Association published an official statement urging graduate programs and university libraries "to adopt a policy that allows the embargoing of completed history PhD dissertations in digital form for as many as six years." The statement goes on to note that "History has been and remains a book-based discipline." However, the increasingly common practice of requiring that completed dissertations be posted freely online may make it more difficult for recent graduates to secure a publisher. This, in turn, could make it much more difficult for young scholars to earn tenure.

As the comments section that follows the AHA's online publication of its statement against online publishing indicates, this strikes many as a backwards-looking strategy. As I have argued myself in a previous post on this blog, scholarly publishing is clearly moving online. And as it does so, the nature of how we consume, share, and disseminate knowledge is certain to change. So why not embrace this trend rather than desperately try to hold on to an outdated, 19th-century version of print culture?


The answer, of course, is that although many of us are eager to publish our work freely online, it seems wrong to endanger the tenure prospects of a whole generation of scholars whose only crime was to have finished their PhDs during a time of transition and upheaval. It is laudable for the profession to embrace change. But we should not expect its most vulnerable members to be on the vanguard, leading the charge into an uncertain future.

But does that mean the profession can't embrace change? Couldn't the change we all seek come at the level of hiring and tenure committees instead? Answering these questions is far from straightforward, and it requires a small detour through what might be called the "ratchet effect."

I first heard the term "ratchet effect" in conversation with the philosopher Peter Godfrey-Smith, who described it as one among many potential mechanisms that drive cultural evolution. The ratchet effect takes hold any time cultural change is biased to drift in one direction rather than another. Take, for example, the case of airport security:

On a recent flight from Barcelona to Boston, I was surprised to find passports being checked at the gate of my connection in Zürich even though the Swiss border control had already inspected my documents when I entered the international terminal. Doing so added considerably to the time it took us to board, and, to me, it seemed ridiculously redundant. But there is nothing in the least bit surprising about it. In the wake of September 11th, there was a huge push to tighten the security around American airspace, and a few minutes of extra wait time seemed like a negligible sacrifice to make.

Of course, a long time has passed without a similar incident of in-flight terrorism, so, for most of us, the cost-benefit analysis may have changed. But who is going to spearhead the movement to loosen airline security? After all, doing so would mean incurring the risk of being blamed if another disaster did occur in the future. Hence, airline security is subject to the ratchet effect. It is much easier to tighten security than to loosen it, giving us something to think about when we are stuck in what seems like an interminable queue.

Although its outcome is often annoying, the ratchet effect operates all around us, influencing everything from the evolution of the Republican party to the career trajectories of young historians.

At the same time that we have witnessed an upheaval in print culture, historians have also engaged in much hand-wringing about two interrelated and lamentable trends. Ironically, while it is taking PhD students longer and longer to earn their degrees, they are also having a harder and harder time finding gainful employment. The relationship between these two trends is no less disturbing for being obvious: because jobs are harder to find, it makes sense for people to spend more time lingering in their PhD programs. By taking an extra couple of years to write their dissertations, they not only increase the amount of time they can spend on the market. They are also able to write better and more polished theses, thus giving themselves a leg up once they actually graduate.

The problem, of course, is that we are all playing the same game. Thus, we are caught up in a ratchet effect. As people spend longer writing their PhDs and produce more polished theses, the basic requirements for securing a tenure-track job go up for the whole profession. For all practical purposes, it is simply no longer possible to land a permanent position with the kind of CV that was perfectly standard a generation ago. Rather than a completed dissertation and good letters of recommendation, you now need one or two published articles and a thesis that is well on its way to becoming a book manuscript. Indeed, as more and more people also spend several years as post-docs, it is not at all uncommon for recent hires to have a book contract in hand by the time they start their first permanent job. Sometimes, the book has already been published. This is, as they say, the new normal.

I read the AHA's position on the online publication of PhD theses as a good-faith reaction to the ratcheting up of publication requirements for young scholars. But wouldn't it be better to try and bring things down a few notches instead?

What I'm about to suggest is pretty draconian, so let me preface this by saying that I mainly put it out there as a contribution to a vitally important conversation.

What if we could use the move to online publishing as an opportunity to address the time-to-degree problem head-on? One way to do so would be to move to a more UK-style model, in which students are expected to write their PhD theses in 2-3 years (after having completed the relevant coursework, which in the US would result in roughly 5-year PhD programs). This would mean lowering expectations on PhD theses somewhat. Rather than a polished first draft of the book manuscript, the thesis would be an academic exercise, freely available on the internet, meant to *prepare* students for the task of writing a book rather than being a version of that book itself.

One virtue of such a move comes from the fact that the stagnant job market in the humanities is unlikely to change, meaning that many qualified people will fail to find a permanent teaching position. Although my proposal would not change that, at least it would mean that most recent PhDs would be about 25 to 30 years old. My sense is that it is easier, and preferable, to make the difficult choice of leaving the profession at 30 rather than five to ten years down the line.

Another virtue is that it would take some of the pressure off the writing of the PhD itself. It strikes me as foolish to expect people to write a polished book manuscript on their first try. Better to learn your craft in the context of a long-form exercise in which you can experiment and make mistakes. Then, after you have defended, you can decide if you want to have another go at the same topic (this time knowing what you wished you had known the first time around), or you can choose to go with something new (this time knowing much more about how to pick a topic and design an argument).

Although others, including Louis Menand, have proposed similar measures, there are significant drawbacks to going this route.

One major problem with my suggestion about reducing time to degree is that it does not go far enough to solve the problem of the ratchet effect. Because there are so many more talented historians with a PhD than there are permanent teaching positions, hiring committees would still be free to choose from a pool of remarkably accomplished applicants. That is, even if we suddenly forced students to complete the PhD program in five years, what's to stop them from spending several years writing articles and polishing their theses after they graduate? One thing I certainly do not want to do is advocate that the humanities go the way of the sciences, in which it has become standard to spend 5-10 years on the post-doc circuit building up a publication record before entering the tenure track.

Because of the ratchet effect, my proposal would only succeed if senior scholars commit to preferentially hire recent graduates. And this is where things get really draconian, because doing that would mean telling huge numbers of talented and deserving people who have been on the market for a number of years that all of a sudden they are out of the running for permanent positions. That's a pretty bitter pill to swallow. So bitter, I think, that the AHA's backwards-looking position on online publishing starts to make a lot of sense. 

Monday, July 22, 2013

Winner! The US T&C: Examining Law and Expectations in Our Digital World

A few weeks ago, in the wake of the Snowden Affair, I announced a contest to write a new social contract modeled on terms of service. Terms of service are, of course, the things most of us click through without reading when either signing up for a web-based service or installing a piece of software. There was doubtless something sarcastic, even cynical, about this contest.

A visualization of the US Internet, a web of technical and social bonds,
which, like all such bonds, include expectations.
(Source: National Science Foundation)
Today, I would like to announce the winner: Tall White American Male (Twitter: @TallWhiteMale), a resident of Chicago, who penned a proposed US Terms and Conditions. So, congrats to Tall White American Male. As spelled out in the contest announcement, he'll receive this remarkable shirt.  I have pasted his winning entry below, but before we come to that, I want to discuss why I held the contest in the first place.


Some people have asked me what this contest has to do with the history of science and technology, or science and technology studies (STS) more broadly, and whether it is a symptom of my resignation to ubiquitous surveillance. My answer to the first point is that STS may offer insights about these NSA programs and that imaginative, speculative writings are one way to address our current plight.

The strand of STS that has the most to say to discussions about the NSA programs is the one focused on the law, and this means, most centrally, the work of Sheila Jasanoff. Perhaps the best place to start is her essay "In a Constitutional Moment" (paywall). This usage of "constitutional" plays the well-worn postmodern game of deploying a word in a purposely ambiguous way. Here, constitution refers, on the one hand, to the written Constitution and unwritten legal codes and, on the other, to the ways in which we constitute—or make—the world, either by creating scientific pictures of reality or by building technological systems. The point is that there is a dynamic interaction between (written or unwritten) norms and scientific theories/technologies. The formal constitutional debate and lawsuits about the NSA programs have hardened around whether searches of metadata violate the 4th Amendment, but it is how the NSA programs run up against the informal, unwritten norms of the Internet that is most interesting.

One irony of the Snowden Affair is that many people consider the Internet a space for freedom. Some describe it as a technology that has liberty written into its "code," and enthusiasts have celebrated how demonstrators and activists have used the Net to resist repressive governments. Yet, critics, like Evgeny Morozov, have mocked these ideas, arguing that this technology can just as easily be used for authoritarian ends. Not surprisingly, the NSA programs have enraged technolibertarians and heralds of Internet freedom. Interesting People, a large email list that has many members who (literally) helped create the Internet, has experienced a torrent of emails expressing anger and dismay. The NSA programs conflict with the norms, values, and expectations that many people have for and about this relatively young network technology.

The relationship between legal norms and technological change plays out in many different ways. One classic picture of the relationship is William Ogburn's notion of cultural lag (first articulated in 1922), which holds that technology often moves faster than laws, customs, and norms. Tradition lags behind invention. We have already seen some people spell out cultural lag arguments in response to the NSA programs. For example, Andrew Couts published a blog post that clearly states its position: "Restoring a Law from 1789 May Be Impossible with Technology from 2013." The URL for the post puts the point more baldly, "Restoring the Fourth a Digital Age Pipe Dream." We will doubtless see more such arguments. Yet, Jasanoff has argued in the past that the idea of cultural lag is usually wrong because the law has almost always foreseen technological and legal possibilities. It will be interesting to see which historical interpretation of this point dominates down the road.

This issue of future historical interpretation brings me to something else I hoped to address through my post announcing the contest, namely the issue of education, social reproduction, and the public understanding of technology. In my original post, I imagined the US Terms of Service, which would state a user's privacy expectations, taking a central role in high school civics classes. Will those of us who lived through the immediate aftermath of 9/11 teach children that constant surveillance is simply a normal and assumed part of social reality? A connected issue is the relationship between consumerism and citizenship. The Internet arose from government and academia, but it has increasingly become a tool used for entertainment. (Some accounts suggest that Netflix takes up to 30% of US bandwidth in the evening; check out the fascinating image at the bottom of this page.) Yet, concerns about Internet privacy go to the heart of our citizenship. If we add Internet literacy to our schools' curricula, as many advocate, should we put those lessons in home economics or in civics? This is partly what I was hoping to get at by suggesting a US Terms of Service, as if a kind of contract we use as consumers could come to define our lives as citizens.

Other imaginative works, or speculative fictions, that might further help us think through our current situation have occurred to me over the last few weeks. My wife and I are about to have our first child, and I have been preoccupied by thoughts of how I will explain our world (digital and otherwise) to my daughter. I pitched the idea of a book that would describe the NSA programs to children to the speculative fiction author Andri Magnason via Twitter. Magnason, who primarily works in allegory, responded, "...a world made of glass. Everything is visible, traceable, readable, and everything leaves traces and tracks, even thoughts." I was thinking of something more realistic, something for the kiddie non-fiction section, something that would begin "Once upon a time, some criminals attacked the United States" and would end, ". . . and so they watch." It could be called Your Friendly Watchers. Another possibility that I have discussed with friends would be an alternative history novel that imagines the fate of the Civil Rights Movement in the 50s and 60s if the feds had PRISM. Could we guarantee that these tools would not be turned on groups of US citizens if we entered a period of social strife? I think not. But American Science, a blog dedicated to the history of science and technology, is not the space for these kinds of writings. Therefore, this will likely be the only time this kind of writing contest will be held here.

And so we come to the winning entry . . .

********

The US T&C as Proposed by Tall White American Male

These Terms and Conditions for Citizenship evolved from the United States Constitution, the Bill of Rights, and the Declaration of Independence, and are the terms of service that establish and govern reasonable expectations to be held by citizens of the United States.

The Declaration of Independence identified an inalienable right to “Life, Liberty, and the pursuit of Happiness”, and the Founding Fathers declared “that the form of government which communicates ease, comfort, security, or, in one word, happiness, to the greatest number of persons, and in the greatest degree, is the best.” The events of September 11, 2001 and subsequent threats to the American homeland present an enduring danger to the Happiness of the American people. It is this inalienable right that the Terms and Conditions for Citizenship seek to protect.

By using or accessing methods of electronic communication including but not limited to telephone, email, camera, internet search engine, web browser, and social media platforms, citizens agree to these Terms, as updated constantly in accordance with secret laws, secret courts, and secret decisions. By utilizing the rights and privileges of US citizenship, citizens agree to abide by these Terms and Conditions in perpetuity.

The nature of this ongoing threat to American happiness requires extraordinary measures be taken by decision makers, and necessarily places limits on expectations of privacy. Data, in their many forms, are an essential element of electronic communication, and the monitoring of this element is critical to the cause of guaranteeing the safety, liberty, happiness of the American people. All data are to be considered critical to this cause unless otherwise specified, and while the product of communications between one or more citizens, are exempt from traditional notions of privacy. Similarly, present and future conditions are such that control and ownership of all forms of data cannot be left to the individual citizen. Data are crucial to the maintenance of the common defense and the general welfare, and must be safeguarded by the commonwealth. 

Surveillance of the people, by the people, and for the people will secure the benefits of liberty to this and future generations.

Friday, July 5, 2013

A Contest for Writing the New Social Contract: The US Citizens' Terms of Service?

Social contract theory—the idea that each person (implicitly or explicitly) agrees to a set of rules, rights, and duties by choosing to live in a society—has rested at the heart of Western political thought for the last three to four hundred years. The fallout surrounding the Snowden Affair and the NSA snooping programs that it has unveiled can be seen as a brouhaha over a social contract. The aggrieved feel that they had signed onto an agreement, say, The Bill of Rights, which they believe these programs violate.


Most of the discussions I have heard so far focus on how we can ensure proper oversight of the NSA's programs, either through courts or through Congress. Many express skepticism about the viability of such oversight systems, however. Who will watch the watchers? And who will watch the watchers' watchers? I'm with the skeptics here. I have little faith in systems of oversight, so I do not think they are the place to put our focus.

There's another option: we could write a new social contract that reflects the technological reality of the Internet and governments' use of it for intelligence and law enforcement.* This new social contract could be modeled on Terms of Service, the agreements users click through when they sign up for services (like Facebook and Twitter) and agree to use products (like Adobe Professional or Apple iTunes). In honor of our national celebration of Independence Day and the adoption of the Declaration of Independence, I will hold a contest to see who can write the best new social contract.


Members of the intelligence community will tell you that people are often of two minds about intelligence gathering. When something terrible happens, like when the Tsarnaev brothers bombed Boston, people want government agents to have extraordinary capabilities to gather data and find those responsible. Yet, people also want to have complete privacy when it comes to certain matters and certain "places," like their email boxes.

We probably cannot have it both ways. For example, we would expect law enforcement agents to interview acquaintances of suspected criminals. The question is how law enforcement will find out who those acquaintances are. Now, law enforcement officers can discover criminals' acquaintances by feeding criminals' cellphone metadata into a computer program and using an algorithm to find other cellphones (that is, people) that were regularly in the same location as the person(s) under investigation. This strategy doubtlessly leads to false positives and potentially to government agents hassling innocent people.
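To make the analysis concrete, here is a toy sketch of the kind of co-location technique described above; the data, names, and threshold are invented for illustration, and real systems would be vastly more sophisticated.

```python
from collections import defaultdict
from itertools import combinations

# Toy metadata: (phone_id, time_slot, cell_tower) pings.
pings = [
    ("suspect", 1, "A"), ("x123", 1, "A"),
    ("suspect", 2, "B"), ("x123", 2, "B"),
    ("suspect", 3, "C"), ("x999", 3, "C"),
]

def co_location_counts(pings):
    """Count how often each pair of phones appears at the same
    tower during the same time slot."""
    by_slot = defaultdict(set)
    for phone, slot, tower in pings:
        by_slot[(slot, tower)].add(phone)
    counts = defaultdict(int)
    for phones in by_slot.values():
        for a, b in combinations(sorted(phones), 2):
            counts[(a, b)] += 1
    return counts

def likely_acquaintances(pings, target, threshold=2):
    """Phones co-located with `target` at least `threshold` times."""
    counts = co_location_counts(pings)
    return {other
            for (a, b), n in counts.items() if n >= threshold
            for other in (a, b)
            if target in (a, b) and other != target}

print(likely_acquaintances(pings, "suspect"))  # {'x123'}
```

Note that the false positives the post mentions fall out of the choice of threshold: "x999" crossed paths with the suspect once and is excluded, but a neighbor who merely shares a commute could easily clear the bar.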

More important, as many have pointed out, it is not clear that the traditional system of getting a warrant for a search fits these technical procedures well. Hypothetically, judges hand out warrants when law enforcement officers can prove that they have good reasons to make a search. (Many would claim that the warrant system has often been abused.) But government agents use metadata to discover *who they should suspect*, and this process involves everyone's information, even yours and mine. How would warrants work in this case?

One solution to this conundrum would be to have people sign an agreement, the US Citizens' Terms of Service, which makes clear that their online activity is not private and that all of it is open to government examination at any time. These Terms of Service would make explicit our political and social reality, and allow the NSA and other agencies to continue their practices without anyone having hard feelings. (Or is it that the aggrieved are experiencing cognitive dissonance? The Terms of Service would alleviate that, too.)

There are many, many issues that would need to be resolved before the US Citizens' Terms of Service could be adopted, however. For example, we could imagine a lively debate that would mirror long-lived arguments amongst Christians about the proper age for baptism. How old should someone be before he or she clicks through the national Terms of Service?

Can we imagine an event modeled on the christening, wherein, when a child is a few months old, his or her parents go to the courthouse and use a mouse to tab through and mostly not read the terms in front of a judge? That way mom and dad can let the iPad babysit the child without the young one laboring under the notion that playing online with SpongeBob or Dora the Explorer or whoever is private. Grandma and Grandpa and other family members and friends could go to the courthouse and take smile-filled photographs of the momentous occasion.

Or perhaps our thoughts would align more closely with where the Baptists come down on baptism: a person should be of an age where he or she can make an adult decision about whether he or she agrees with the Terms of Service. Perhaps an educational unit and test on the US Citizens' Terms of Service could be added to ninth grade Civics classes. Students would receive a certificate and the warm applause of proud parents on the day that they are finally allowed to click through, and not read, the national user agreement. Each child would come to a podium and take the mouse in hand; the computer screen would be projected on a large screen over the stage; and the audience would cheer and whistle as the amplified clicks echoed through the school gymnasium.

We will also need to decide how often citizens need to renew their agreement to the terms. For instance, we can imagine the Federal Communications Commission mandating a technology standard under which every web browser installed on a computer in the USA would require users to click through the US Citizens' Terms of Service every time they log on.

And there are so many other details that will need ironing out.

For all of these reasons, I propose a contest, a contest for the writing of the new social contract. Contestants can either put their submissions in the comment field of this post or email them to me at leevinsel@gmail.com. The submissions should be modeled on the terms of service of Facebook and other such companies; they should be filled with the kinds of inscrutable legalese that make terms of service so endearing.

The winner will receive this t-shirt.





 
I will leave it for the reader to decide whether the shirt is ironic. Of course, the t-shirt is simply a token of appreciation. The real reward will potentially come when the US Citizens' Terms of Service is adopted as a central political document of the USA and the writer joins the ranks of the Founders and other great citizens of the nation's history.

I will announce the winner on July 18th, that is, two weeks after Independence Day. This contest will be exceedingly informal and will mostly reflect my own prejudices. I may involve the other team members of this blog in the judging, but they are busy people and may not have time.



* One silly argument that people whom Evgeny Morozov calls "Internet centrists" might make is to say that the Internet is such a "radical innovation" that it has somehow undone all previous social contracts, including the Constitution and the Bill of Rights. We can imagine them making allusions to Joseph Schumpeter and/or William Ogburn's notion of "cultural lag." I don't see any good reason to go along with this line of thought. The Internet has not so radically shifted things that we need to rewrite the rules of, say, when governments can search our residences.

Tuesday, July 2, 2013

The NSA and Tech Change, Part II: The Dialectic of Strategy and Counter-Strategy

Nathan Andrew Fain's comment on my last post was so interesting, I thought I would respond to it here. In that post, I briefly explored—and mostly asked questions about—how the NSA's programs, like PRISM, may be shaping technological change. As many know, there is a long—several hundred year—history of defense spending and priorities influencing science and technology, and I wanted to ask how government surveillance programs might do the same. 

In his comment, Fain considered the flip side of my point, namely how the Snowden Affair might encourage others to change technologies. He wrote, "The NSA programs, or more accurately the revelation of them, will push in ernest [sic] the development of subversive technologies." He went on to talk about John Gilmore and the cypherpunk movement, which sees cryptology and the avoidance of surveillance as potential loci for social change. I knew nothing about this movement, know little more now, but am hoping to learn, first by reading this book. Fain's comment is fascinating, and I encourage everyone to read it as well as to check out his website, deadhacker.com.

I'd like to examine Fain's comments through the lens of technology studies by thinking for a moment about strategy and counter-strategy and how this dynamic shapes technologies and the practices that surround them.


When I was in grad school, I spent a good bit of time wondering how hacking influenced technology. This was during the time that I was reading Cyril Stanley Smith, who talked about how inventors and innovators often have a tacit connection to their medium, earned through a great deal of experience. This connection leads to a sense of "play"; invention becomes a kind of second nature. Smith's account reminded me of hackers I knew, who seemed to have an easy and fluid relationship with computing and who enjoyed nothing more than the thrill of doing what was not to be done. But did hacking do anything (technologically) more than stress out systems managers and induce better security programs? Before Fain's comment, I had not considered the inverse of this dynamic: that ever-expanding surveillance systems fostered technologies of concealment, and that it wasn't only criminals and terrorists who wanted to escape detection but also techno-libertarians, cyber-anarchists, and the like.

The dialectic of strategy and counter-strategy is an essential part both of technological change and of changes in how we use technologies. The phenomenon is as true of business as it is of war, but I will give a few examples from the latter. In the Vietnam War, the United States found new strategies for the helicopter, especially through the famous 1st Cavalry Division. Helicopters enabled novel kinds of troop movements and air support during battles, but the Viet Cong quickly adapted to the technology. They would sit and wait for helicopters to come in, before lighting them up as they neared the ground, turning the vehicles' inhabitants into sitting ducks. In another, perhaps apocryphal, example, the M1 Garand rifle that US soldiers used in WWII made a loud 'ping' sound when it had run out of ammunition. In close-range combat, Japanese soldiers would wait to hear that sound before rushing the US troops. The US soldiers developed a counter-strategy, however. Working in two-man teams—a sniper and an assistant—one soldier would use the rifle to make the 'ping' sound. When the Japanese soldiers began their charge, the sniper would already have their position lined up in his sights. While these two examples focus on changes in practices, there are plenty of examples of strategy and counter-strategy shaping technological systems themselves, such as when, during WWII, scientists at Harvard realized that radar was under development at the MIT RadLab and playfully jammed the signal from across the Charles River.


It seems that Fain is almost certainly right. The revelation of the NSA's programs will be a watershed moment for many people, some subset of whom will actually work to produce new technologies for maintaining privacy. I think the real question is whether people will adopt these systems of cryptography and use them in everyday life. In a long theoretical essay that I finished recently and will probably never publish, I spend a lot of time discussing how scholars in technology studies have concentrated for too long on how technologies are "constructed," or achieve their final form. Often the more important issue is whether technologies are adopted, especially whether they are adopted on a massive scale. At this point in time, sadly, economics is more helpful than history or sociology (because people in the latter fields have talked too much about construction). One significant exception is the work of the rural sociologist and communications scholar Everett Rogers, whose Diffusion of Innovations (1962 and many subsequent editions) is still the gold standard for studies of technological adoption. Price and effort are always important factors in whether potential users adopt a technology, but other factors can also play a role.

Being pissed off could be one such factor that trumps cost and effort, but keeping information secret takes time, discipline, and at least a modicum of technical know-how. We live in a society where barely anyone reads terms of service, and we dwell in a land of flashing DVD player/microwave oven/cable box clocks. I have recently seen tech-savvy people, such as members of the mailing list Interesting People (also the interesting account here), sharing public keys. What percentage of the population will be willing to go to such lengths to protect their privacy? (I foresee a study, if one hasn't been done yet, where economists push people to put a monetary value on their privacy and find that the value is $0. Not that such studies tell us much of anything at all.) Also, as one friend put it after reading Fain's comment, "The NSA has 50 nerds for every one of the cypherpunk nerds."

Yet, these last thoughts are getting me off track. The question of my last two posts has been this: How are the NSA programs influencing technological change? Fain must be right to point out that to answer this question we should look not only at the NSA programs themselves but at how people are reacting to those programs. To add one final thought, my last post argued that we should think about how knowledge produced through the NSA's programs spills over into the other sectors of society. To be perfectly symmetrical, we should also attend to spill over from the efforts of cypherpunks and other such dissidents. How will the technologies and practices they produce come to influence even those in society who are too apathetic and lazy to work for their own privacy?

Monday, 01 July 2013

The National Security Agency and Technological Change

This post builds on the one Lukas put up last week. Most commentaries on the Snowden Affair, PRISM, and the other NSA programs that have come to light have focused on whether these programs are constitutional, whether Snowden is a hero or villain or something else, and, now, what these programs will mean for US foreign relations. I have also heard people ask how any of us could be surprised by these programs, and for a few days, people spent a lot of time talking about Snowden's girlfriend's pole-dancing skills. In other words, the Snowden Affair has all the markings of a major American media event.


In this post, I'd like to exercise the historian's prerogative by exploring how these NSA programs fit into a longer historical trajectory, namely how government spending and procurement influence technological change.
The history and sociology of science and technology are full of well-known stories of how government funding affected the direction and growth of technological innovation. The best-known stories in the United States concern the technical advances made at MIT, Harvard, and Los Alamos during World War II and the wide variety of scientific breakthroughs and technologies that emerged from Cold War defense spending. (Mark Buchanan recently put up an entertaining post about the many technologies that ultimately have roots in government spending.) There are many earlier examples in the United States from World War I and even the 19th century. Of course, any comprehensive history of the military-science-technology relationship would have to go back much further: in the West, back through the 18th-century French scientific societies and da Vinci, at least to Archimedes.

We can assume that spending on intelligence and the technology that undergirds it exploded after 9/11. 9/11 was to the surveillance-industrial complex what Sputnik was to Cold War sci-tech funding. It would be interesting to know whether the programs that developed after that date were merely extensions, if massively scaled-up extensions, of things that were already in the works. It would also be interesting to know how many entirely new programs developed after that date (as opposed to building on old ones).

But it would also be fascinating to learn how these programs have influenced technological change, if at all. Do fundamentally new and largely unknown computing technologies lie behind the NSA's capabilities? Are these capabilities mostly the result of hugely scaling up technologies that are already well known (server farms, data-mining algorithms, etc.)? Or will we look back on the NSA's programs as having greatly changed computing technologies? If so, which companies would have produced these technologies for the agency? Mostly defense contractors? Or mostly computing firms? Or might the government have its own internal R&D shops?

Economists and historians often examine "spillover" to see how government spending, typically military spending, ends up influencing the broader economy. To the degree that new technologies, processes, and techniques are being developed through these programs, it will likely be very difficult down the road, for several reasons, to determine how much they have moved into the domestic sector. First, the NSA can likely prevent the spread of new technological systems (if truly new technologies, like quantum computing, are part of the programs). But the agency cannot easily stem the dissemination of the experience and tacit knowledge that people gain by working in these programs. People will move to other jobs and take their experience in, say, developing data-mining algorithms with them. Again, the movement of this knowledge will be very difficult to track.

Second, contractors like Snowden do a significant portion of US intelligence work. Today on Meet the Press, Rep. Nancy Pelosi said that the Obama Administration has done a great deal to decrease the role of contractors in classified projects. I don't know where Pelosi is getting her information, but my instinct says that she is overestimating the decline of contractors under Obama. As long as intelligence remains tied to the use of enormous computer networks, contractors will likely continue to play an essential role. Even in the midst of news about Snowden, we have learned more about Amazon's contract to build cloud computing infrastructure for the CIA. Booz Allen Hamilton employees and other such consultants and contractors will take the lessons they learn working for the NSA and apply them elsewhere. I'm sure the opposite is also true: contractors bring lessons learned in private industry to the intelligence agencies. Indeed, in terms of the movement and synthesis of knowledge between the private and public sectors, these contracting firms are likely important nodes that historians and sociologists would do well to examine . . . if they ever can, given that all of this is veiled in such heavy secrecy. That secrecy also has dire implications for our ability to study the realities of US innovation policy, since the surveillance-industrial complex will have an unknown relationship to technological change and economic growth.

What I keep wondering is how we will see these things in ten or twenty years. Will we see the NSA's influence on technology as we now see Cold War sci-tech funding, that is, as a hugely important source of technological change and knowledge production? Or will the NSA's programs just seem like another (ultimately boring) application of "big data" and the "app economy"?

If any readers—especially those readers with deep knowledge of computing and/or computing history—have thoughts about the relationship between the NSA's programs and technological change, I'd love to hear them.