This book has explained how the Internet’s generative characteristics primed it for extraordinary success—and now position it for failure. The response to the failure will most likely be sterile tethered appliances and Web services that are contingently generative, if generative at all. The trajectory is part of a larger pattern. If we can understand the pattern and what drives it, we can try to avoid an end that eliminates most disruptive innovation while facilitating invasive and all-too-inexpensive control by regulators.

The pattern begins with a technology groomed in a backwater, as much for fun as for profit. The technology is incomplete even as it is shared. It is designed to welcome contribution and improvement from many corners. New adopters refine it as it spreads, and it spreads more as it improves, a virtuous circle that vaults the technology into the mainstream, where commercial firms help to package and refine it for even more people. This is the story of the PC against information appliances, and it is the story of the Internet against the proprietary networks.

Developments then take a turn for the worse: mainstream success brings in people with no particular talent or tolerance for the nuts and bolts of the technology, and no connection with the open ethos that facilitates the sharing of improvements. It also attracts those who gain by abusing or subverting the technology and the people who use it. Users find themselves confused and hurt by the abuse, and they look for alternatives.

The most obvious solution to abuse of an open system is to tighten or altogether close it. A bank robbery calls for more guards; a plane hijacking suggests narrowing the list of those permitted to fly and what they are permitted to take with them. For the Internet and PC, it seems natural that a system beset by viruses ought not to propagate and run new code so easily. The same goes for what is built on top of the Internet: when Wikipedia is plagued by vandals, the obvious response is to disallow editing by anonymous users. Within information technology, however, such solutions carry a steep price of their own: they reduce the generativity of the system, clamping its innovative capacity while enhancing the prospects of control by the parties left in charge, as in the likely shift by users away from generative PCs toward tethered appliances and Web services. What works in the short or medium term for banks and airlines has crucial drawbacks for consumer information technology, even as consumers themselves might bring such solutions about precisely where regulators would have had difficulty intervening, consigning generative technologies to the backwaters from which they came.

So what can be done to stop this future? We need a strategy that blunts the worst aspects of today’s popular generative Internet and PC without killing these platforms’ openness to innovation. Give users a reason to stick with the technology and the applications that have worked so surprisingly well—or at least reduce the pressures to abandon them—and we may halt the movement toward a nongenerative digital world. This is easier said than done, because our familiar toolkits for handling problems are not particularly attuned to maintaining generativity. Purely regulatory interventions—such as banning the creation or distribution of deceptive or harmful code—are both under- and overinclusive. They are underinclusive for the usual reasons that regulation is difficult on today’s Net: it is hard to track the identities of sophisticated wrongdoers, and even those who are found may not be in cooperative jurisdictions. They are overinclusive because so much good code has come from unaccredited people sharing what they have made for fun, collaborating in ways that would make businesslike regulation of their activities burdensome for them—quite possibly convincing them not to share to begin with. If we make it more difficult for new software to spread, good software from obscure sources can be fenced out along with the bad.

The key to threading the needle between needed change and undue closure can be forged from an understanding of the portability of both problems and solutions among the Internet’s layers. We have seen that generativity at one layer can recur at the next. The open architecture of the Internet and Web allowed Ward Cunningham to invent the wiki: generic software that offers a way of editing and organizing information within an article and of spreading that information among articles. Wikis were then taken up by unrelated nontechies to build content-layer Web sites like Wikipedia. Wikipedia is in turn generative because people are free to take all of its contents and experiment with different ways of presenting or changing the material, perhaps by placing the information on otherwise unrelated Web sites in different formats.1

If generativity and its problems flow from one layer to another, so too can its solutions: the success stories of generative models at one layer offer useful guidelines for the others. For example, when the Morris worm abused the openness of the 1988 Internet, the first line of defense was the community of computer scientists who populated the Internet at that time: they cooperated on diagnosing the problem and finding a solution. Recall that the Internet Engineering Task Force’s (IETF’s) report acknowledged the incident’s seriousness and sought to forestall future attacks not through better engineering but by recommending better community ethics and policing.2 This is exactly Wikipedia’s trump card. When abuses of openness beset Wikipedia, it turned to its community—aided by some important technical tools—as the primary line of defense. Most recently, this effort has been bolstered by the introduction of Virgil Griffith’s Wikiscanner, a simple tool that uses Wikipedia’s page histories to expose past instances of article whitewashing by organizations.3 So what distinguishes the IETF recommendation, which seems like a naïve way to approach Internet and PC-based problems, from the Wikipedian response, which so far appears to have held many of Wikipedia’s problems at bay?

The answer lies in two crucial differences between generative solutions at the content layer and those at the technical layer. The first is that much content-layer participation—editing Wikipedia, blogging, or even engaging in transactions on eBay and Amazon that ask for reviews and ratings to establish reputations—is understood to be an innately social activity.4 These services solicit and depend upon participation from the public at large, and their participation mechanisms are easy for the public to master. But when the same generative opportunity exists at the technical layer, mainstream users balk—they are eager to have someone else solve the underlying problem, which they perceive as technical rather than social.

The second difference is that many content-layer enterprises have developed technical tools to support collective participation, augmenting an individualistic ethos with community mechanisms.5 In the Internet and PC security space, on the other hand, there have been few tools available to tap the power of groups to, say, distinguish good code from bad. Instead, dealing with bad code has been left either to individual users who are ill-positioned to, say, decipher whether a Web site’s digital certificate is properly signed and validated, or to Internet security firms that try to sort out good code from bad according to a one-size-fits-all standard. Such a defense still cannot easily sift bad gray-zone software that is not a virus but still causes user regret—spyware, for instance—from unusual but beneficial code. As with the most direct forms of regulation, this solution is both under- and overinclusive.
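The one-size-fits-all point above can be made concrete with a small sketch. In Python’s standard library, the default SSL context bundles exactly the judgments that an individual user is ill-positioned to make by hand, namely that a site’s certificate chain validates against trusted roots and that the certificate actually names the host being visited. (This is an illustrative sketch of the general mechanism, not a recommendation drawn from the text.)

```python
import ssl

# A default context encodes a single, uniform validation policy: the same
# standard applies to every site, exactly the "one-size-fits-all" defense
# the text describes.
context = ssl.create_default_context()

# The defaults refuse unvalidated certificates outright...
assert context.verify_mode == ssl.CERT_REQUIRED
# ...and insist that the certificate name the host being contacted.
assert context.check_hostname
```

Because the same policy applies to every connection regardless of context, it illustrates why such a defense is both under- and overinclusive: it cannot distinguish regret-inducing gray-zone software from unusual but beneficial code, only the validly signed from the unsigned.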

These two differences point to two approaches that might save the generative spirit of the Net, or at least keep it alive for another interval. The first is to reconfigure and strengthen the Net’s experimentalist architecture to make it fit better with its now-mainstream home. The second is to create and demonstrate the tools and practices by which relevant people and institutions can help secure the Net themselves instead of waiting for someone else to do it.

Befitting the conception of generative systems as works in progress that muddle through on the procrastination principle, the concrete ideas spawned by these solutions are a bit of a grab bag. They are evocative suggestions that illustrate the kinds of processes that can work, rather than a single, elegant patch. Silver bullets belong to the realm of the appliance. Yet as with many of the Internet’s advances, some of these hodge-podge solutions can be developed and deployed to make a difference without major investment—and with luck, they will be. The most significant barriers to adoption are, first, a widespread failure to appreciate the extent of the problem and the costs of inaction; second, a collective action problem, exacerbated by the Internet’s modular design, thanks to which no single group of actors who appreciates the problem sees it as its own responsibility; and third, a too-easily cultivated sense among Internet users that the system is supposed to work like any other consumer device.

Posted by The Editors on March 1, 2008