Anyone can design new applications to operate over the Internet. Good applications can then be adopted widely while bad ones are ignored. The phenomenon is part of the Internet’s “hourglass architecture” (Figure 4.1).

The hourglass portrays two important design insights. First is the notion that the network can be carved into conceptual layers. The exact number of layers varies depending on who is drawing the hourglass and why,1 and even by chapter of this book.2 On one basic view the network can be understood as having three layers. At the bottom is the “physical layer,” the actual wires or airwaves over which data will flow. At the top is the “application layer,” representing the tasks people might want to perform on the network. (Sometimes, above that, we might think of the “content layer,” containing actual information exchanged among the network’s users, and above that the “social layer,” where new behaviors and interactions among people are enabled by the technologies underneath.) In the middle is the “protocol layer,” which establishes consistent ways for data to flow so that the sender, the receiver, and anyone necessary in the middle can know the basics of who the data is from and where the data is going.

By dividing the network into layers and envisioning some boundaries among them, the path is clear to a division of labor among people working to improve the overall network. Tinkerers can work on one layer without having to understand much about the others, and there need not be any coordination or relationship between those working at one layer and those at another. For example, someone can write a new application like an instant messenger without having to know anything about whether its users will be connected to the network by modem or broadband. And an ISP can upgrade the speed of its Internet service without having to expect the authors of instant messenger programs to rewrite them to account for the new speed: the adjustment happens naturally. On the proprietary networks of the 1980s, in contrast, such divisions among layers were not as important because the networks sought to offer a one-stop solution to their customers, at the cost of having to design everything themselves. Layers facilitate polyarchies, and the proprietary networks were hierarchies.3

The second design insight of the hourglass is represented by its shape. The framers of Internet Protocol did not undertake to predict what would fill the upper or lower layers of the hourglass. As a technical matter, anyone could become part of the network by bringing a data-carrying wire or radio wave to the party. One needed only to find someone already on the network willing to share access, and to obtain a unique IP address, an artifact not intended to be hoarded. Thus, wireless Internet access points could be developed by outsiders without any changes required to Internet Protocol: the Protocol embodied so few assumptions about the nature of the medium used that going wireless did not violate any of them. The large variety of ways of physically connecting is represented by the broad base to the hourglass. Similarly, the framers of Internet Protocol made few assumptions about the ultimate uses of the network. They merely provided a scheme for packaging and moving data, whatever its purpose. This scheme allowed a proliferation of applications from any interested and talented source—from the Web to e-mail to instant messenger to file transfer to video streaming. Thus, the top of the hourglass is also broad. It is only the middle that is narrow, containing Internet Protocol, because it is meant to be as feature-free as possible. It simply describes how to move data, and its basic parameters have evolved slowly over the years. Innovation and problem-solving are pushed up or down, and to others: Chapter Two’s procrastination principle at work.
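The narrowness of the waist shows up concretely in everyday network programming: an application hands bytes to the socket interface and never names the medium underneath, so the same code runs unchanged over Ethernet, Wi-Fi, or anything else that can carry IP packets. A minimal sketch in Python (the loopback interface stands in here for whatever physical layer happens to be present):

```python
import socket

# Two application-layer endpoints. Nothing below names a physical medium:
# identical calls would ride over Ethernet, Wi-Fi, or (here) loopback.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))         # let the OS choose a free port
receiver.settimeout(2)                  # avoid blocking forever if a packet is lost
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello, network", addr)  # IP just packages and moves the bytes

data, _ = receiver.recvfrom(1024)
print(data.decode())                    # hello, network

sender.close()
receiver.close()
```

The application neither knows nor cares how the datagram travels; that indifference is the procrastination principle rendered in code.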

This same quality is found within traditional PC architecture. It greatly facilitates the way that the overall network operates, although those joining the debate on Internet openness have largely ignored this quality. Operating system designers like Microsoft and Apple have embraced the procrastination principle of their counterparts in Internet network design. Their operating systems, as well as Unix and its variants, are intentionally incomplete; they were built to allow users to install new code written by third parties. Such code could entirely revise the way a computer operates, which gives individuals other than the original designers the capacity to solve new problems and redirect the purposes of PCs.4 We could even sketch a parallel hourglass of PC architecture (Figure 4.2).

The PC can run code from a broad range of sources, and it can be physically placed into any number and style of chassis from many makers, at least as a technical matter. (Sometimes the operating system maker may object as a strategic and legal matter: Apple, for example, has with few exceptions notoriously insisted on bundling its operating system with Apple hardware, perhaps a factor in its mere 5 percent market share for PCs.5)

I have termed this quality of the Internet and of traditional PC architecture “generativity.” Generativity is a system’s capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences. Terms like “openness” and “free” and “commons” evoke elements of it, but they do not fully capture its meaning, and they sometimes obscure it.

Generativity pairs an input consisting of unfiltered contributions from diverse people and groups, who may or may not be working in concert, with the output of unanticipated change. For the inputs, how much the system facilitates audience contribution is a function of both technological design and social behavior. A system’s generativity describes not only its objective characteristics, but also the ways the system relates to its users and the ways users relate to one another. In turn, these relationships reflect how much the users identify as contributors or participants, rather than as mere consumers.

FEATURES OF A GENERATIVE SYSTEM

What makes something generative? There are five principal factors at work: (1) how extensively a system or technology leverages a set of possible tasks; (2) how well it can be adapted to a range of tasks; (3) how easily new contributors can master it; (4) how accessible it is to those ready and able to build on it; and (5) how transferable any changes are to others—including (and perhaps especially) nonexperts.

Leverage: Leverage makes a difficult job easier. Leverage is not exclusively a feature of generative systems; non-generative, specialized technologies can provide leverage for their designated tasks.6 But as a baseline, the more a system can do, the more capable it is of producing change. Examples of leverage abound: consider a lever itself (with respect to lifting physical objects), a band saw (cutting them), an airplane (transporting them from one place to another), a piece of paper (hosting written language, wrapping fish), or an alphabet (constructing words). Our world teems with useful objects and processes, both natural and artificial, tangible and intangible. Both PCs and network technologies have proven very leveraging. A typical PC operating system handles many of the chores that the author of an application would otherwise have to worry about, and properly implemented Internet Protocol sees to it that bits of data move from one place to another without application authors having to worry on either end. A little effort can thus produce a very powerful computer program, whether a file-sharing program or a virus comprising just a few lines of code.

Adaptability: Adaptability refers to how easily the system can be built on or modified to broaden its range of uses. A given instrumentality may be highly leveraging yet suited only to a limited range of applications. For example, TiVo is greatly leveraging—television viewers describe its impact on their lives as revolutionary—but it is not very adaptable. A plowshare enables one to plant a variety of seeds; however, its comparative leverage quickly vanishes when devoted to other tasks such as holding doors open. The same goes for swords (they really make poor plowshares), guns, chairs, band saws, and even airplanes. Adaptability is clearly a spectrum. Airplanes can transport people and things, or they can be configured to dust or bomb what lies below. But one can still probably count the kinds of uses for an airplane on two hands. A technology that affords hundreds of different, additional kinds of uses beyond its essential application is more adaptable and, all else being equal, more generative than a technology that offers fewer kinds of uses. The emphasis here is on uses not anticipated at the time the technology was developed. A thick Swiss Army knife may have plenty of built-in tools compared with a simple pocket knife, but many of those are highly specialized.7

By this reckoning, electricity is an amazingly adaptable technology, as is plastic (hence the historical use of “plastic” to refer to notions of sculptability).8 And so are the PC and the Internet: they can be endlessly diverted to new tasks not counted on by their original makers.

Ease of mastery: A technology’s ease of mastery reflects how easy it is for broad audiences to understand how to adopt and adapt it. The airplane is not readily mastered, being neither easy to fly nor easy to learn how to modify for new purposes. The risk of physical injury if the modifications are poorly designed or executed is a further barrier to such tinkering. Paper, on the other hand, is readily mastered: we teach our children how to use it, draw on it, and even fold it into paper airplanes (which are much easier to fly and modify than real ones), often before they enter preschool. The skills required to understand many otherwise generative technologies are often not very readily absorbed. Many technologies require apprenticeships, formal training, or many hours of practice if one is to become conversant in them. The small electronic components used to build radios and doorbells fall into this category—one must learn both how each piece functions and how to solder—as do antique car engines that the enthusiast wants to customize. Of course, the skills necessary to operate certain technologies, rather than modify them, are often more quickly acquired. For example, many quickly understand how to drive a car, an understanding probably assisted by user-friendly inventions such as the automatic transmission.

Ease of mastery also refers to the ease with which various types of people might deploy and adapt a given technology, even if their skills fall short of full mastery. A pencil is easily mastered: it takes a moment to understand and put to many uses, even though it might require a lifetime of practice and innate artistic talent to achieve Da Vincian levels of leverage from it. The more useful a technology is both to the neophyte and to the expert, the more generative it is. PCs and network technologies are not easy for everyone to master, yet many people are able to learn how to code, often (or especially) without formal training.

Accessibility: The easier it is to obtain access to a technology, along with the tools and information necessary to achieve mastery of it, the more generative it is. Barriers to accessibility can include the sheer expense of producing (and therefore consuming) the technology, taxes, regulations associated with its adoption or use, and the secrecy its producers adopt to maintain scarcity or control.

Measured this way, paper, plowshares, and guns are highly accessible, planes hardly at all, and cars somewhere in between. It might be easy to learn how to drive a car, but cars are expensive, and the government can always revoke a user’s driving privileges, even after the privileges have been earned through a demonstration of driving skill. Moreover, revocation is not an abstract threat, because effective enforcement is not prohibitively expensive. Measured by the same factors, scooters and bicycles are more accessible, while snowplows are less so. Standard PCs are very accessible; they come in a wide range of prices, and in a few keystrokes or mouse-clicks one can be ready to write new code for them. On the other hand, a PC confined to a specialized mode—for example, running in “kiosk mode” at a store, cycling through slides—cannot have its given task interrupted or changed, and it is not accessible.

Transferability: Transferability indicates how easily changes in the technology can be conveyed to others. With fully transferable technology, the fruits of skilled users’ adaptations can be easily conveyed to less-skilled others. The PC and the Internet together possess very strong transferability: a program written in one place can be shared with, and replicated by, tens of millions of other machines in a matter of moments. By contrast, a new appliance made out of a 75-in-1 Electronic Project Kit is not easily transferable because the modifier’s changes cannot be easily conveyed to another kit. Achieving the same result requires manually wiring a new kit to look like the old one, which makes the project kit less generative.

GENERATIVE AND NON-GENERATIVE SYSTEMS COMPARED

Generative tools are not inherently better than their non-generative (“sterile”) counterparts. Appliances are often easier to master for particular uses, and because their design often anticipates uses and abuses, they can be safer and more effective. For example, on camping trips, Swiss Army knives are ideal. Luggage space is often at a premium, and such a tool will be useful in a range of expected and even unexpected situations. In situations when versatility and space constraints are less important, however, a Swiss Army knife is comparatively a fairly poor knife—and an equally awkward magnifying glass, saw, and scissors.

As the examples and terms suggest, the five qualities of leverage, adaptability, ease of mastery, accessibility, and transferability often reinforce one another. And the absence of one of these factors may prevent a technology from being generative. A system that is accessible but difficult to master may still be generative if a small but varied group of skilled users make their work available to less-sophisticated users. Usually, however, a major deficiency in any one factor greatly reduces overall generativity. This is the case with many tools that are leveraging and adaptable but difficult to master. For example, while some enjoy tinkering in home workshops, making small birdhouses using wood and a saw, most cannot build their own boats or decks, much less pass those creations on to others. Similarly, there are plenty of examples of technology that is easy to master and is quite adaptable, but lacks leverage. Lego building blocks are easy to master and can produce a great range of shapes, but regardless of the skill behind their arrangement they remain small piles of plastic, which largely confines their uses to that of toys.

The more that the five qualities are maximized, the easier it is for a system or platform to welcome contributions from outsiders as well as insiders. Maximizing these qualities facilitates the technology’s deployment in unanticipated ways. Table 4.1 lists examples of generative tools. For comparison, the table also includes some of these tools’ less generative counterparts. Views on these categories or particular examples will undoubtedly vary, but some themes emerge. In general, generative tools are more basic and less specialized for accomplishing a particular purpose; these qualities make such tools more usable for many tasks. Generative technologies may require the user to possess some skill in order for the tool to be even minimally useful—compare a piano with a music box—but once the user has acquired some skill, the tools support a wider range of applications.

Generative tools are individually useful. Generative systems are sets of tools and practices that develop among large groups of people. These systems provide an environment for new and best—or at least most popular—practices to spread and diversify further within them. Generative systems can be built on non-generative platforms—no technical reason prevented CompuServe from developing wiki-like features and inviting its subscribers to contribute to something resembling Wikipedia—but frequently generativity at one layer is the best recipe for generativity at the layer above.

GENERATIVITY AND ITS COUSINS

The notion of generativity is itself an adaptation. It is related to other conceptions of information technology and, to some degree, draws upon their meanings.

The Free Software Philosophy

The normative ideals of the free software movement and the descriptive attributes of generativity have much in common. According to this philosophy, any software functionality enjoyed by one person should be understandable and modifiable by everyone. The free software philosophy emphasizes the value of sharing not only a tool’s functionality, but also knowledge about how the tool works so as to help others become builders themselves. Put into our terms, accessibility is a core value. When the free software approach works, it helps to expand the audiences capable of building software, and it increases the range of outputs the system generates.

While generativity has some things in common with the free software approach, it is not the same. Free software satisfies Richard Stallman’s benchmark “four freedoms”: freedom to run the program, freedom to study how it works, freedom to change it, and freedom to share the results with the public at large.9 These freedoms overlap with generativity’s five factors, but they depart in several important respects. First, some highly generative platforms may not meet all of free software’s four freedoms. While proprietary operating systems like Windows may not be directly changeable—the Windows source code is not regularly available to outside programmers—the flexibility that software authors have to build on top of the Windows OS allows a programmer to revise nearly any behavior of a Windows PC to suit specific tastes. Indeed, one could implement GNU/Linux on top of Windows, or Windows on top of GNU/Linux.10 So, even though Windows is proprietary and does not meet the definition of free software, it is generative.

Free software can also lack the accessibility associated with generativity. Consider “trapped” PCs like the one inside the TiVo. TiVo is built on Linux, which is licensed as free software, but, while the code is publicly published, it is nearly impossible for the Linux PC inside a TiVo to run anything but the code that TiVo designates for it. The method of deploying a generative technology can have a non-generative result: the free software satisfies the leveraging quality of generativity, but it lacks accessibility.11

Affordance Theory

Fields such as psychology, industrial design, and human-computer interaction use the concept of “affordances.”12 Originally the term was used to refer to the possible actions that existed in a given environment. If an action were objectively possible, the environment was said to “afford” that action. The concept has since been adapted to focus on “perceived affordances,” the actions or uses that an individual is subjectively likely to make, rather than on actions or uses that are objectively possible. As a design tool, affordances can help the creator of an environment ensure that the available options are as obvious and inviting as possible to the intended users.

A theory of affordances can also be used to predict what various people might do when presented with an object by asking what that object invites users to do. A ball might be thrown; a chair might be sat on. A hyperlink that is not underlined may be “poorly afforded” because it may impede users from realizing that they can click on it, suggesting that a better design would visually demarcate the link.

Generativity shares some of this outlook. If poorly afforded, some forms of technical user empowerment, such as the ability to run software written by others, can harm users who mistakenly run code that hurts their machines. This leads to the unfortunate result that the unqualified freedom to run any code can result in restrictions on what code is or can be run: adverse experiences cause less-skilled users to become distrustful of all new code, and they ask for environments that limit the damage that they can inadvertently do.

Yet unlike generativity, affordance theory does not focus much on systemic output. Instead, it takes one object at a time and delineates its possible or likely uses. More recent incarnations of the theory suggest that the object’s designer ought to anticipate its uses and tailor the object’s appearance and functionality accordingly. Such tailoring is more consistent with the development of appliancized systems than with generative ones. Generativity considers how a system might grow or change over time as the uses of a technology by one group are shared with other individuals, thereby extending the generative platform.

Theories of the Commons

Generativity also draws from recent scholarship about the commons. Some commentators, observing the decentralized and largely unregulated infrastructure of the Internet, have noted how these qualities have enabled the development of an innovation commons where creativity can flourish.13 Projects like Creative Commons have designed intellectual property licenses so that authors can clearly declare the conditions under which they will permit their technical or expressive work to be copied and repurposed. Such licensing occurs against the backdrop of copyright law, which generally protects all original work upon fixation, even work for which the author has been silent as to how it may be used. By providing a vehicle for understanding that authors are willing to share their work, Creative Commons licenses are a boon for content-level generativity because the licenses allow users to build on their colleagues’ work.

Other scholars have undertaken an economic analysis of the commons. They claim that the Internet’s economic value as a commons is often significantly underestimated, and that there are strong economic arguments for managing and sustaining an infrastructure without gatekeepers.14 In particular, they argue that nonmonopolized Internet access is necessary to ensure meritocratic competition among content providers.15

These arguments about infrastructure tend to end where the network cable does. A network on which anyone can set up a node and exchange bits with anyone else on the network is necessary but not sufficient to establish competition, to produce innovative new services, to promote the free flow of information to societies in which the local media is censored, or to make the most efficient use of network resources. As the next chapter explains, the endpoints have at least as much of a role to play. Focusing on the generativity of a system without confining that system to a particular technical locus can help us evaluate what values the system embodies—and what it truly affords.

Values, of course, vary from one person and stakeholder to the next. Generative systems can encourage creativity and spur innovation, and they can also make it comparatively more difficult for institutions and regulators to assert control over the systems’ uses. If we are to draw conclusions about whether a field balanced between generative and non-generative systems ought to be preserved, we need to know the benefits and drawbacks of each in greater detail.

THE STRENGTHS OF GENERATIVE SYSTEMS

Generative systems facilitate change. The first part of this book introduced positive and negative faces of generativity: it told an optimistic tale of Internet development, followed by pessimistic predictions of trouble due to deep-rooted vulnerabilities in that network.

A generative system can be judged from both within the system and outside of it. A set of PCs being destroyed by a virus from afar is a change brought about by a generative system that is internally bad because it harms the system’s generativity. The development and distribution of a generic installer program for a PC, which makes it easy for other software authors to bundle their work so that users can easily install and use it, is an example of a generative system producing an internally good change, because it makes the system more generative.

Generative outputs can also be judged as good or bad by reference to external values. If people use a generative system to produce software that allows its users to copy music and video without the publishers’ permissions, those supportive of publishers will rationally see generativity’s disruptive potential as bad. When a generative system produces the means to circumvent Internet filtering in authoritarian states, people in favor of citizen empowerment will approve.

Generativity’s benefits can be grouped more formally as at least two distinct goods, one deriving from unanticipated change, and the other from inclusion of large and varied audiences. The first good is its innovative output: new things that improve people’s lives. The second good is its participatory input, based on a belief that a life well lived is one in which there is opportunity to connect to other people, to work with them, and to express one’s own individuality through creative endeavors.

GENERATIVITY’S OUTPUT: INNOVATION

To those for whom innovation is important, generative systems can provide for a kind of organic innovation that might not take place without them.

The Limits of Non-generative Innovation

Non-generative systems can grow and evolve, but their growth is channeled through their makers: a new toaster is released by Amana and reflects anticipated customer demand or preferences, or an old proprietary network like CompuServe adds a new form of instant messaging by programming it itself. When users pay for products or services in one way or another, those who control the products or services, so long as they face competition, are responsive to user desires through market pressure. This is an indirect means of innovation, and there is a growing literature about its limitations: a persistent bottleneck that prevents certain new uses from being developed and cultivated by large incumbent firms, despite the benefits those firms could enjoy from a breakthrough.16

We have already seen this phenomenon by anecdote in the first part of this book. Recall the monopoly telephone system in the United States, where AT&T attempted to extend its control through the network and into the endpoint devices hooked up to the network, at first barring the Hush-A-Phone and the Carterfone. The telephone system was stable and predictable; its uses evolved slowly if at all from its inception in the late nineteenth century. It was designed to facilitate conversations between two people at a distance, and with some important exceptions, that is all it has done. The change it has wrought for society is, of course, enormous, but the contours of that change were known and set once there was a critical mass of telephones distributed among the general public. Indeed, given how revolutionary a telephone system is to a society without one, it is striking that the underlying technology and its uses have seen only a handful of variations since its introduction. This phenomenon is an artifact of the system’s rejection of outside contributions. In the United States, after the law compelled AT&T to permit third-party hardware to connect, we saw a number of new endpoint devices: new telephone units in various shapes, colors, and sizes; answering machines; and, most important, the telephone modem, which allows the non-generative network itself to be repurposed for widespread data communication.

We saw a similar pattern as the Internet overtook proprietary networks that did not even realize it was a competitor. The generative Internet is a basic, flexible network, which began with no innate content. The content was to appear as people and institutions were moved to offer it. By contrast, the proprietary networks of CompuServe, AOL, Prodigy, and Minitel were out beating the bushes for content, arranging to provide it through the straightforward economic model of being paid by people who would spend connect time browsing it. If anything, we would expect the proprietary networks to offer more, and for a while they did. But they also had a natural desire to act as gatekeepers—to validate anything appearing on their network, to cut individual deals for revenue sharing with their content providers, and to keep their customers from affecting the network’s technology. These tendencies meant that their rates of growth and differentiation were slow. A few areas that these networks consigned to individual contribution experienced strong activity and subscriber loyalty, such as their topical bulletin boards run by hired systems operators (called “sysops”) and boasting content provided by subscribers in public conversations with each other. These forums were generative at the content layer because people could post comments to each other without prescreening and could take up whatever topics they chose, irrespective of the designated labels for the forums themselves (“Pets” vs. “Showbiz”).17 But they were not generative at the technical layer. The software driving these communities was stagnant: subscribers who were both interested in the communities’ content and technically minded had few outlets through which to contribute technical improvements to the way the communities were built. Instead, any improvements were orchestrated centrally.

As the initial offerings of the proprietary networks plateaued, the Internet saw developments in technology that in turn led to developments in content and ultimately in social and economic interaction: the Web and Web sites, online shopping, peer-to-peer networking, wikis, and blogs.

The hostility of AT&T toward companies like Hush-A-Phone and of the proprietary networks to the innovations of enterprising subscribers is not unusual, and it is not driven solely by their status as monopolists. Just as behavioral economics shows how individuals can consistently behave irrationally under particular circumstances,18 and how decision-making within groups can fall prey to error and bias,19 so too can the incumbent firms in a given market fail to seize opportunities that they rationally ought to exploit. Much of the academic work in this area draws from further case studies and interviews with decision-makers at significant firms. It describes circumstances that echo the reluctance of CompuServe, AOL, and other proprietary online services to allow third-party innovation—or to innovate much themselves.

For example, Tim Wu has shown that when wireless telephone carriers exercise control over the endpoint mobile phones that their subscribers may use, those phones will have undesirable features—and they are not easy for third parties to improve.20 In design terms, there is no hourglass. Carriers have forced handset manufacturers to limit the mobile phones’ Web browsers to certain carrier-approved sites. They have eliminated call timers on the phones, even though timers would be trivial to implement—and are in much demand by users, who would like to monitor whether their use of a phone has gone beyond the allotted minutes of a monthly plan.21 Phones’ ability to transfer photos and recorded sounds is often limited to the carriers’ preferred channels and fees. For those who wish to write new applications to run on the increasingly powerful computers embedded within the phones, the barriers to contribution are high. The phones’ application programming interfaces are poorly disclosed, or at best selectively disclosed, making the programming platform difficult to master. Often, the code must be written for a “virtual machine” that bars access to many of the phone’s features, reducing accessibility. And the virtual machines run slowly, eliminating leverage. These factors persist despite competition among several carriers.

Oxford’s Andrew Currah has noted a similar reluctance to extend business models beyond the tried-and-true in a completely different setting. He has studied innovation within the entertainment industries, and has found cultural barriers to it across studios and record companies. As one studio president summarized:

The fiscal expectations are enormous. We have to act in a rational and cautious fashion, no matter how much potential new markets like the Internet have. Our core mission is to protect the library of films, and earn as much as possible from that library over time. . . . So that means focusing our efforts on what’s proven—i.e. the DVD—and only dipping our toes into new consumer technologies. We simply aren’t programmed to move quickly.22

And the studio’s vice-chairman said:

You have to understand [studio] strategy in relation to the lifestyle here. . . . Once you reach the top of the hierarchy, you acquire status and benefits that can soon be lost—the nice cars, the home in Brentwood, the private schools. . . . It doesn’t make sense to jeopardize any of that by adopting a reckless attitude towards new technologies, new markets. Moving slow, and making clear, safe progress is the mantra.23

The puzzle of why big firms exhibit such innovative inertia was placed into a theoretical framework by Clayton Christensen in his pioneering book The Innovator’s Dilemma.24 Christensen found the hard disk drive industry representative. In it, market leaders tended to be very good at quickly and successfully adopting some technological advancements, yet were entirely left behind by upstarts. To explain the discrepancy, he created a taxonomy of “sustaining” and “disruptive” innovations. When technological innovations are consistent with the performance trajectory of established market leaders—that is, when they are a more efficient way of doing what they already do—alert leaders will be quick to develop and utilize such “sustaining” innovations.

It is with disruptive innovations that the market leaders will lag behind. These innovations are not in the path of what the company is already doing well. Indeed, Christensen found that the innovations which market leaders were the worst at exploiting were “technologically straightforward, consisting of off-the-shelf components put together in a product architecture that was often simpler than prior approaches. They offered less of what customers in established markets wanted and so could rarely be initially employed there. They offered a different package of attributes valued only in emerging markets remote from, and unimportant to, the mainstream.”25

It is not the case, Christensen argues, that these large companies lack the technological competence to deploy a new technology, but rather that their managements choose to focus on their largest and most profitable customers, resulting in an unwillingness to show “downward vision and mobility.”26

Subsequent authors have built on this theory, arguing that a failure to innovate disruptively is not simply an issue of management, but the organizational inability of large firms to respond to changes in consumer preferences caused by such disruptive innovations. Established firms are structurally reluctant to investigate whether an innovative product would be marketable to a sector outside what they perceive to be their traditional market.27 They want to ride a wave, and they fail to establish alternatives or plumb new markets even as competitors begin to do so.

This observation has led others to conclude that in order for large organizations to become more innovative, they must adopt a more “ambidextrous organizational form” to provide a buffer between exploitation and exploration.28 This advice might be reflected in choices made by companies like Google, whose engineers are encouraged to spend one day a week on a project of their own choosing—with Google able to exploit whatever they come up with.29

But large firms struggling to learn lessons from academics about becoming more creative need not be the only sources of innovation. In fact, the competitive market that appears to be the way to spur innovation—a market in which barriers to entry are low enough for smaller firms to innovate disruptively where larger firms are reluctant to tread—can be made much more competitive, since generative systems reduce barriers to entry and allow contributions from those who do not even intend to compete.

THE GENERATIVE DIFFERENCE

Generative systems allow users at large to try their hands at implementing and distributing new uses, and to fill a crucial gap that is created when innovation is undertaken only in a profit-making model, much less one in which large firms dominate. Activity by amateurs, enabled by generative systems, can lead to results that would not have been produced in a firm-mediated market model.

The brief history of the Internet and PC illustrates how often the large and even small firm market model of innovation missed the boat on a swath of significant advances in information technology while non-market-motivated and amateur actors led the charge. Recall that Tasmanian amateur coder Peter Tattam saw the value of integrating Internet support into Windows before Microsoft did, and that the low cost of replicating his work meant that millions of users could adopt it even if they did not know how to program computers themselves.30 Hundreds of millions of dollars were invested in proprietary information services that failed, while Internet domain names representing firms’ identities were not even reserved by those firms.31 (McDonald’s might be forgiven for allowing someone else to register mcdonalds.com before it occurred to the company to do so; even telecommunications giant MCI failed to notice the burgeoning consumer Internet before Sprint, which was the first to register mci.com—at a time when such registrations were given away first-come, first-served, to anyone who filled out the electronic paperwork.)32

The communally minded ethos of the Internet was an umbrella for more activity, creativity, and economic value than the capitalistic ethos of the proprietary networks, and the openness of the consumer PC to outside code resulted in a vibrant, expanding set of tools that ensured the end of the information appliances and proprietary services of the 1980s.

Consider new forms of commercial and social interaction made possible by new software that in turn could easily run on PCs or be offered over the Internet. Online auctions might have been ripe for the plucking by Christie’s or Sotheby’s, but upstart eBay got there first and stayed. Craigslist, initiated as a “.org” by a single person, dominates the market for classified advertising online.33 Ideas like free Web-based e-mail, hosting services for personal Web pages, instant messenger software, social networking sites, and well-designed search engines emerged more from individuals or small groups of people wanting to solve their own problems or try something neat than from firms realizing there were profits to be gleaned. This is a sampling of major Internet applications founded and groomed by outsiders; start sliding down what Wired editor Chris Anderson calls the Long Tail—niche applications for obscure interests—and we see a dominance of user-written software.34 Venture capital money and the other artifacts of the firm-based industrial information economy can kick in after an idea has been proven, and user innovation plays a crucial role as an initial spark.

GENERATIVITY AND A BLENDING OF MODELS FOR INNOVATION

Eric von Hippel has written extensively about how rarely firms welcome improvements to their products by outsiders, including their customers, even when they could stand to benefit from them.35 His work tries to persuade otherwise rational firms that the users of their products often can and do create new adaptations and uses for them—and that these users are commonly delighted to see their improvements shared. Echoing Christensen and others, he points out that firms too often think that their own internal marketing and R&D departments know best, and that users cannot easily improve on what they manufacture.

Von Hippel then goes further, offering a model that integrates user innovation with manufacturer innovation (Figure 4.3).

Von Hippel’s analysis says that users can play a critical role in adapting technologies to entirely new purposes—a source of disruptive innovation. They come up with ideas before there is widespread demand, and they vindicate their ideas sufficiently to get others interested. When interest gets big enough, companies can then step in to smooth out the rough edges and fully commercialize the innovation.

Von Hippel has compiled an extensive catalog of user innovation. He points to examples like farmers who roped a bicycle-like contraption to some PVC pipes to create a portable center-pivot irrigation system, which, now perfected by professional manufacturers, is a leading way to water crops.36 Or a paramedic who placed IV bags filled with water into his knapsack and ran the outlet tubes from behind so he could drink from them while bicycling, akin to the way some fans at football games drink beer out of baseball caps that have cup holders that hang on either side of the head. The IV bag system has since been adopted by large manufacturers and is now produced for hikers and soldiers.37 Von Hippel’s studies show that 20 percent of mountain bikers modify their bikes in some way, and a similar proportion of surgeons tinker with their surgical implements. Lego introduced a set of programmable blocks for kids—traditional Lego toys with little engines inside—and the toys became a runaway hit with adults, who accounted for 70 percent of the market. The adults quickly hacked the Lego engines and made them better. Silicon Valley firms then banned Legos as a drain on employee productivity. Lego was stumped for over a year about how to react—this market was not part of the original business plan—before concluding that it was good.

The building blocks for most of von Hippel’s examples are not even particularly generative ones. They represent tinkering done by that one person in a hundred or a thousand who is so immersed in an activity or pursuit that improving it would make a big difference—a person who is prepared to experiment with a level of persistence that calls to mind the Roadrunner’s nemesis, Wile E. Coyote. Generative systems and technologies are more inviting to disruptive innovation thanks to their leverage, adaptability, ease of mastery, and accessibility, and they make it easier for their fruits to spread.

Most firms cannot sift through the thousands of helpful and not-so-helpful suggestions sent in by their customers, and they might not even dare look at them institutionally, lest a sender claim later on that his or her idea was stolen. Offers of partnership or affiliation from small businesses may not fare much better, just as deals between proprietary networks and individual technology and content providers numbered only in the tens rather than in the thousands. Yet when people and institutions other than the incumbents have an opportunity to create and distribute new uses as is possible in a generative system, the results can outclass what is produced through traditional channels.

If one values innovation, it might be useful to try to figure out how much disruptive innovation remains in a particular field or technology. For mature technologies, perhaps generativity is not as important: the remaining leaps, such as that which allows transistors to be placed closer and closer together on a chip over time without fundamentally changing the things the chip can do, will come from exploitative innovation or will necessitate well-funded research through institutional channels.

For the Internet, then, some might think that outside innovation is a transitory phenomenon, one that was at its apogee when the field of opportunity was new and still unnoticed by more traditional firms, and when hardware programming capacity was small, as in the early days of the PC.38 If so, the recent melding of the PC and the Internet has largely reset the innovative clock. Many of the online tools that have taken off in recent years, such as wikis and blogs, are quite rudimentary both in their features and in the sophistication of their underlying code. The power of wikis and blogs comes from the fact that nothing quite like them existed before, and that they are so readily adopted by Internet users intrigued by their use. The genius behind such innovations is truly inspiration rather than perspiration, a bit of tinkering with a crazy idea rather than a carefully planned and executed invention responding to clear market demand.

Due to the limitations of the unconnected PC, one could credibly claim that its uses were more or less known by 1990: word processing, spreadsheets, databases, games. The rest was merely refinement. The reinvigorated PC/Internet grid makes such applications seem like a small corner of the landscape, even as those applications remain important to the people who continue to use them.

We have thus settled into a landscape in which both amateurs and professionals as well as small- and large-scale ventures contribute to major innovations. Much like the way that millions of dollars can go into production and marketing for a new musical recording39 while a gifted unknown musician hums an original tune in the shower that proves the basis for a hit album, the Internet and PC today run a fascinating juxtaposition of sweepingly ambitious software designed and built like a modern aircraft carrier by a large contractor, alongside “killer applets” that can fit on a single floppy diskette.40 OS/2, an operating system created as a joint venture between IBM and Microsoft,41 absorbed billions of dollars of research and development investment before its plug was pulled,42 while Mosaic, the first graphical PC Internet browser, was written by a pair of students in three months.43

A look at sites that aggregate various software projects and their executable results reveals thousands of projects under way.44 Such projects might be tempting to write off as the indulgences of hobbyists, if not for the roll call of pivotal software that has emerged from such environments:45 software to enable encryption of data, both stored on a hard drive and transmitted across a network,46 peer-to-peer file-sharing software,47 e-mail clients,48 Web browsers,49 and sound and image editors.50 Indeed, it is difficult to find software not initiated by amateurs, even as later versions are produced through more formal corporate means to be more robust, to include consumer help files, and otherwise to improve upon others’ versions or provide additional services for which users are willing to pay a premium.51 Many companies are now releasing their software under a free or open source license to enable users to tinker with the code, identify bugs, and develop improvements.52

It may well be that, in the absence of broad-based technological accessibility, there would eventually have been the level of invention currently witnessed in the PC and on the Internet. Maybe AT&T would have invented the answering machine on its own, and maybe AOL or CompuServe would have agreed to hyperlink to one another’s walled gardens. But the hints we have suggest otherwise: less-generative counterparts to the PC and the Internet—such as standalone word processors and proprietary information services—had far fewer technological offerings, and they stagnated and then failed as generative counterparts emerged. Those proprietary information services that remain, such as Lexis/Nexis and Westlaw, sustain themselves because they are the only way to access useful proprietary content, such as archived news and some scholarly journal articles.53

Of course, there need not be a zero-sum game in models of software creation, and generative growth can blend well with traditional market models. Consumers can become enraptured by an expensive, sophisticated shooting game designed by a large firm in one moment and by a simple animation featuring a dancing hamster in the next.54 Big firms can produce software when market structure and demand call for such enterprise; smaller firms can fill niches; and amateurs, working alone and in groups, can design both inspirational “applets” and more labor-intensive software that increase the volume and diversity of the technological ecosystem.55 Once an eccentric and unlikely invention from outsiders has gained notoriety, traditional means of raising and spending capital to improve a technology can shore it up and ensure its exposure to as wide an audience as possible. An information technology ecosystem comprising only the products of the free software movement would be much less usable by the public at large than one in which big firms help sand off rough edges.56 GNU/Linux has become user-friendly thanks to firms that package and sell copies, even if they cannot claim proprietary ownership in the software itself, and tedious tasks that improve ease of mastery for the uninitiated might best be done through corporate models: creating smooth installation engines, extensive help guides, and other handholding for what otherwise might be an off-putting technical piece of PC software or Web service.57

As the Internet and the PC merge into a grid, people can increasingly lend or barter computing cycles or bandwidth for causes they care about by simply installing a small piece of software.58 This could be something like SETI@home, through which astronomers can distribute voluminous data from radio telescopes to individual PCs,59 which then look for patterns that might indicate the presence of intelligent life, or it could be a simple sharing of bandwidth through mechanisms such as amateur-coded (and conceived, and designed) BitTorrent,60 by which large files are shared among individuals as they download them, making it possible for users to achieve very rapid downloads by accumulating bits of files from multiple sources, all while serving as sources themselves. Generativity, then, is a parent of invention, and an open network connecting generative devices makes the fruits of invention easy to share if the inventor is so inclined.
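The swarming idea behind BitTorrent can be illustrated with a toy sketch in Python (hypothetical names throughout; the real protocol adds trackers, piece hashing, and reciprocal uploading, none of which is modeled here). A file is split into pieces, and a downloader assembles it by fetching different pieces from different peers rather than pulling everything from one server:

```python
# Toy illustration of BitTorrent-style "swarming" (not the real protocol):
# a file is split into pieces, and the downloader gathers different pieces
# from different peers, spreading load across the swarm.

def split_into_pieces(data: bytes, piece_size: int) -> list[bytes]:
    """Cut the file into fixed-size pieces (the last may be shorter)."""
    return [data[i:i + piece_size] for i in range(0, len(data), piece_size)]

def swarm_download(peers: list[list[bytes]], num_pieces: int) -> bytes:
    """Fetch each piece from a peer chosen round-robin, then reassemble."""
    assembled = []
    for index in range(num_pieces):
        peer = peers[index % len(peers)]  # each request goes to a different peer
        assembled.append(peer[index])
    return b"".join(assembled)

original = b"All the pieces of a large file, gathered from many sources."
pieces = split_into_pieces(original, piece_size=8)
# In this toy swarm each peer holds a full copy; a real swarm mixes partial
# holders who upload the pieces they have even while still downloading.
swarm = [pieces, pieces, pieces]
assert swarm_download(swarm, len(pieces)) == original
```

The point of the sketch is only the architecture: no single source needs to serve the whole file, which is why downloads can speed up as more users join.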

GENERATIVITY’S INPUT: PARTICIPATION

A second good of generativity is its invitation to outside contribution on its own terms. This invitation occurs at two levels: the individual act of contribution itself, and the ways in which that contribution becomes part of a self-reinforcing community. On the first level, there is a unique joy to be had in building something, even if one is not the best craftsperson. This is a value best appreciated by experiencing it; those who demand proof may not be easy to persuade. Fortunately, there are many ways in which people have a chance to build and contribute. Many jobs demand intellectual engagement, which can be fun for its own sake. People take joy in rearing children: teaching, interacting, guiding. They can also immerse themselves in artistic invention or software coding.

Famed utilitarian John Stuart Mill may have believed in the greatest happiness for the greatest number, but he was also a champion of the individual and a hater of custom. He first linked idiosyncrasy to innovation when he argued that society should “give the freest scope possible to uncustomary things, in order that it may in time appear which of these are fit to be converted into customs.”61 He then noted the innate value of being able to express oneself idiosyncratically:

But independence of action, and disregard of custom, are not solely deserving of encouragement for the chance they afford that better modes of action, and customs more worthy of general adoption, may be struck out; nor is it only persons of decided mental superiority who have a just claim to carry on their lives in their own way. . . . The same things which are helps to one person towards the cultivation of his higher nature are hindrances to another. The same mode of life is a healthy excitement to one, keeping all his faculties of action and enjoyment in their best order, while to another it is a distracting burthen, which suspends or crushes all internal life.62

The generative Internet and PC allow more than technical innovation and participation as new services and software are designed and deployed. In addition, much of that software is geared toward making political and artistic expression easier. Yochai Benkler has examined the opportunities for the democratization of cultural participation offered by the Internet through the lens of liberal political theory:

The networked information economy makes it possible to reshape both the “who” and the “how” of cultural production relative to cultural production in the twentieth century. It adds to the centralized, market-oriented production system a new framework of radically decentralized individual and cooperative nonmarket production. It thereby affects the ability of individuals and groups to participate in the production of the cultural tools and frameworks of human understanding and discourse. It affects the way we, as individuals and members of social and political clusters, interact with culture, and through it with each other. It makes culture more transparent to its inhabitants. It makes the process of cultural production more participatory, in the sense that more of those who live within a culture can actively participate in its creation. We are seeing the possibility of an emergence of a new popular culture, produced on the folk-culture model and inhabited actively, rather than passively consumed by the masses. Through these twin characteristics—transparency and participation—the networked information economy also creates greater space for critical evaluation of cultural materials and tools. The practice of producing culture makes us all more sophisticated readers, viewers, and listeners, as well as more engaged makers.63

Benkler sees market-based models of cultural production as at odds with the folk-culture model, and he much prefers the latter: from “the perspective of liberal political theory, the kind of open, participatory, transparent folk culture that is emerging in the networked environment is normatively more attractive than was the industrial cultural production system typified by Hollywood and the recording industry.”64 Here, the lines between entertainment and more profound civic communication are understood to be thin, if they exist at all. An ability to participate in the making of culture is seen to be as paramount to full citizenship as the traditionally narrower activities of engaging in direct political debate or discussion of pressing policy issues.

Benkler points out the merits of systems that do what he calls “sharing nicely,” systems in which people help each other without demanding the formalities and corresponding frictions of economic exchange.65 He argues that much wealth has been created by an economy parallel to the corporate one, an economy of people helping out others without direct expectation of recompense, and that the network revolution makes it easier for that informal engine to generate much more—more participation, and more innovation.

The joy of being able to be helpful to someone—to answer a question simply because it is asked and one knows a useful answer, to be part of a team driving toward a worthwhile goal—is one of the best aspects of being human, and our information technology architecture has stumbled into a zone where those qualities can be elicited and affirmed for tens of millions of people.66 It is captured fleetingly when strangers are thrown together in adverse situations and unite to overcome them—an elevator that breaks down, or a blizzard or blackout that temporarily paralyzes the normal cadences of life in a city but that leads to wonder and camaraderie along with some fear. Part of the Net of the early twenty-first century has distilled some of these values, promoting them without the kind of adversity or physical danger that could make a blizzard fun for the first day but divisive and lawless after the first week without structured relief.

William Fisher has noted a similar potential in his discussion of semiotic democracy, a media studies concept drawn from the work of John Fiske.67 Fisher argues that “[i]n an attractive society all persons would be able to participate in the process of making cultural meaning. Instead of being merely passive consumers of images and artifacts produced by others, they would help shape the world of ideas and symbols in which they live.”68

Technology is not inherently helpful in achieving these ends. At its core, it is a way of taking useful practices and automating them—offering at least greater leverage. Laundry that took a day to do can now be done in an hour or two. But leverage alone, if packaged in a way that does not allow adaptation, is not generative. It threatens conformity. The more there are prescribed ways to do something, the more readily people fall into identical patterns. Such prescriptions can come about through rules (as in the fractally thorough guidebooks on how to operate a McDonald’s franchise) or technology (as in the linearity of a PowerPoint slide show and the straitjacket of some of its most favored templates).69 These rules might ensure a certain minimum competence in the preparation of a hamburger or of a business presentation precisely because they discourage what might be unhelpful or unskilled freelancing by the people implementing them. However, the regularity needed to produce consistent sandwiches and talks can actively discourage or prevent creativity. That drives critics of technology like Neil Postman, author of such evocatively titled books as Building a Bridge to the 18th Century70 and Technopoly: The Surrender of Culture to Technology,71 to argue that the ascendance of engineering and information technology is making sheep of us.

However, this understanding of technology stops at those systems that are built once, originating elsewhere, and then imposed (or even eagerly snapped up) by everyone else, who then cannot change them and thus become prisoners to them. It need not be that way. Technologies that are adaptable and accessible—not just leveraging—allow people to go a step further once they have mastered the basics. The Lego company offers suggestions of what to build on the boxes containing Lego blocks, and they even parcel out a certain number of each type of block in a package so the user can easily produce exactly those suggestions. But they are combinable into any number of new forms as soon as the user feels ready to do more than what the box instructs. The PC and the Internet have been just the same in that way. The divide is not between technology and nontechnology, but between hierarchy and polyarchy.72 In hierarchies, gatekeepers control the allocation of attention and resources to an idea. In polyarchies, many ideas can be pursued independently. Hierarchical systems appear better at nipping dead-end ideas in the bud, but they do so at the expense of crazy ideas that just might work. Polyarchies can result in wasted energy and effort, but they are better at ferreting out and developing obscure, transformative ideas. More importantly, they allow many more people to have a hand at contributing to the system, regardless of the quality of the contribution.

Is this only a benefit for those among us who are technically inclined? Most of Mill’s passion for individuality was channeled into a spirited defense of free speech and free thinking, not free building—and certainly not free programming. However, as Benkler’s linkage of the Internet to cultural expression suggests, the current incarnation of cyberspace offers a generative path that is not simply an avenue of self-expression for individual nerds. Generativity at the technical layer can lead to new forms of expression at the other layers to which nonprogrammers contribute—cultural, political, social, economic, and literary. We can call this recursive generativity, repeated up through the layers of the hourglass. Generativity at the technical layer can also enable new forms of group interaction, refuting Mill’s dichotomy of the mediocre masses versus the lone eccentric.

GROUPS AND GENERATIVITY: FROM GENERATIVE TOOLS TO GENERATIVE SYSTEMS

Creative talent differs greatly from one person to the next, not only in degree but in preferred outlet. Mozart might have turned to painting if there were no musical instruments for which to compose, but there is no particular reason to believe that his paintings would be as good among the work of painters as his music is judged to be among that of musicians. Generativity solicits invention, which in turn can be an important expression of the inventor—a fulfillment of a human urge that is thought to represent some of the highest endeavor and purpose of which we are capable. New technologies welcome new groups of people who may excel at manipulating them.

People can work alone or in groups, but working in groups has practical limitations: it is typically not easy to collaborate from far away. The combination of networks and PCs, however, has made it particularly easy to arrange such collaborations. Open source projects too ambitious for a single programmer or localized group of programmers to achieve alone have been made possible by cheap networking,73 and the free software movement has developed tools that greatly ease collaboration over a distance, such as CVS, the “concurrent versions system.”74 CVS automates many of the difficult tasks inherent in having many people work on the same body of code at the same time. Itself an open source project, CVS permits users to establish a virtual library of the code they are working on, checking out various pieces to work on and then checking them back in for others to use.75 Successive versions are maintained so that changes by one person that are regretted by another can be readily unmade. People with complementary talents who otherwise would not have known or met each other, much less found a way to collaborate without much logistical friction, can be brought together to work on a project. Creativity, then, is enhanced not only for individuals, but also for groups as distinct entities, thanks to the linkage of the PC and the Internet.
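The check-out/check-in cycle described above can be sketched in a few lines of Python (a hypothetical miniature for illustration, not CVS itself, which tracks per-file revisions in a shared repository). The key property is that successive versions are kept, so one collaborator's regretted change can be readily unmade:

```python
# Minimal sketch of the idea behind CVS-style version control
# (hypothetical class, not the actual CVS tool or its commands).

class VersionedFile:
    def __init__(self, contents: str):
        self.revisions = [contents]     # revision 0 is the initial check-in

    def check_out(self) -> str:
        return self.revisions[-1]       # latest revision

    def check_in(self, contents: str) -> int:
        self.revisions.append(contents)
        return len(self.revisions) - 1  # the new revision number

    def revert_to(self, revision: int) -> None:
        # "Unmake" later changes by re-committing an earlier revision;
        # history is preserved rather than rewritten.
        self.check_in(self.revisions[revision])

f = VersionedFile("int main() { return 0; }")
f.check_in("int main() { return 1; }")  # one collaborator's change
f.revert_to(0)                          # another collaborator undoes it
assert f.check_out() == "int main() { return 0; }"
assert len(f.revisions) == 3            # the full history remains
```

Real CVS adds concurrent editing with merge-on-conflict, which is what lets many distant strangers work on the same body of code at once.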

RECURSION FROM TECHNOLOGY TO CONTENT TO SOCIETY

The emergence of a vibrant public Internet and a powerful PC affects many traditional forms of creative and artistic expression because the accessibility of the PC and the Internet to coding by a variety of technically capable groups has translated into a number of platforms for use by artistically capable groups.

For example, thanks to the hypertext standards first developed by researcher Tim Berners-Lee,76 the Web came into existence, and because Berners-Lee’s hypertext markup language, html, was easy to master, people without much technical know-how could build Web sites showcasing their creative work,77 or Web sites that were themselves creative works. Once html took off, others wrote html processors and converters so that one did not even have to know the basics of html to produce and edit Web pages.78

Similarly, simple but powerful software written by amateurs and running on Internet servers has enabled amateur journalists and writers to prepare and customize chronological accounts of their work—“blogs”79—and the pervasiveness of online search software has made these blogs accessible to millions of people who do not seek them out by name but rather by a topical search of a subject covered by a blog entry.80 The blog’s underlying software may be changeable itself, as WordPress is, for example, and therefore generative at the technical layer. But even if it were not so readily reprogrammed, as Microsoft’s proprietary MSN Spaces is not, the opportunity to configure a blog for nearly any purpose—group commentary, seeking help finding a lost camera,81 expressing and then sorting and highlighting various political opinions—makes it generative at the content layer.

A signal example of both recursive and group generativity can be found in the wiki. Software consultant Ward Cunningham was intrigued by the ways in which strangers might collaborate online.82 He wrote some basic tools that would allow people to create Web pages even if they didn’t know anything about Web page creation, and that would allow others to modify those pages, keeping track of revisions along the way—a sort of CVS for nonprogramming content.83 He opened a Web site using these tools in 1995 to host an ongoing conversation about computer programming, and called it a “wiki” after a trip to Hawaii had exposed him to airport shuttle buses called “wiki-wikis” (wiki is the Hawaiian word for “quick”).84
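Cunningham’s “CVS for nonprogramming content” can likewise be sketched in a few lines. The following is a hypothetical illustration, not his actual code: any visitor may edit any page, and because every revision is retained, edits can be compared and rolled back.

```python
import difflib

# Minimal wiki sketch: anyone can edit any page, and all revisions
# are kept, so changes can be reviewed and undone.

class ToyWiki:
    def __init__(self):
        self.pages = {}  # title -> list of (editor, text) revisions

    def edit(self, title, editor, text):
        """Anyone can create or modify a page; history is preserved."""
        self.pages.setdefault(title, []).append((editor, text))

    def read(self, title):
        """Return the current text of a page."""
        return self.pages[title][-1][1]

    def diff(self, title, older, newer):
        """Show what changed between two revisions (1-based)."""
        a = self.pages[title][older - 1][1].splitlines()
        b = self.pages[title][newer - 1][1].splitlines()
        return "\n".join(difflib.unified_diff(a, b, lineterm=""))

    def rollback(self, title):
        """Undo the most recent edit by restoring the one before it."""
        editor, text = self.pages[title][-2]
        self.edit(title, editor, text)

wiki = ToyWiki()
wiki.edit("WikiWiki", "ward", "Wiki means quick in Hawaiian.")
wiki.edit("WikiWiki", "visitor", "Wiki means slow in Hawaiian.")
wiki.rollback("WikiWiki")  # the community undoes a bad edit
print(wiki.read("WikiWiki"))
```

The design choice worth noticing is that openness and accountability travel together: the same revision log that lets anyone contribute also lets anyone else inspect and reverse a contribution.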

Cunningham’s own wiki was successful among a group of several hundred people—to be sure, he did not appear to be aiming for mass adoption—and it inspired the founding of Wikipedia in 2001.85 Wikipedia built on Cunningham’s concepts with the ambition to create an online encyclopedia written and edited by the world at large. With few exceptions anyone can edit a Wikipedia entry at any time. As discussed at length later in this book, the possibilities for inaccuracy and outright falsehood are thus legion, and Wikipedia’s users have both created new technology and solicited commitments from people who share Wikipedia’s ethos to maintain its accuracy without significantly denting its generativity. Wikipedia stands at the apex of amateur endeavor: an undertaking done out of sheer interest in or love of a topic, built on collaborative software that enables a breathtakingly comprehensive result that is the sum of individual contributions, and one that is extraordinarily trusting of them.86 Wikipedia’s character will no doubt evolve as, say, companies discover its existence and begin editing (and policing) entries that mention or describe them,87 just as ownership of domain names evolved from an informal and free first-come, first-served system to a hotly contested battlefield once their true value was recognized.88 Today, Wikipedia’s success showcases the interactions that can take place among the layers of a technical system, with the Internet’s absence of gatekeepers allowing wiki software to be developed, shared, and then taken up for educational and social purposes with contributions from people who have little to no technical expertise.

The ubiquity of PCs and networks—and the integration of the two—have thus bridged the interests of technical audiences and artistic and expressive ones, making the grid’s generativity relevant not only to creativity in code-writing as an end, but to creativity in other artistic ventures as well, including those that benefit from the ability of near-strangers to encounter each other on the basis of mutual interest, form groups, and then collaborate smoothly enough that actual works can be generated. If one measures the value of generativity through the amount of creativity it unleashes,89 then the generativity of the PC and Internet grid should be measured not solely by the creativity it enables among coders, but also by the creativity it enables among artists—and among groups of each.90

THE GENERATIVE PATTERN

Generative technologies need not produce forward progress, if by progress one means something like increasing social welfare. Rather, they foment change. They solicit the distributed intellectual power of humanity to harness the leveraging power of the product or system for new applications, and, if they are adaptable enough, such applications may be quite unexpected. To use an evolutionary metaphor, they encourage mutations, branchings away from the status quo—some that are curious dead ends, others that spread like wildfire. They invite disruption—along with the good things and bad things that can come with such disruption.

The harm from disruption might differ by field. Consider a hypothetical highly generative children’s chemistry set, adaptable and exceptionally leveraging. It would contain chemicals that could accomplish a variety of tasks, with small quantities adding up to big results if the user so desired. It would also be easy to master: children would be able to learn how to use it. But such generativity would have a manifest downside risk: a chemical accident could be dangerous to the child or even to the entire neighborhood.91 A malicious child—or adult, for that matter—could wreak greater havoc as the set’s generativity grew. The same principle applies to gene splicing kits, atom smashers, and many of the power tools at a local hardware store. The more experimentation allowed, the more harm the tool invites. One might want to allow more room for experimentation in information technology than in the physical sciences because the risks of harm—particularly physical harm—are likely to be lower as a structural matter from misuse or abuse of information technology. The law of negligence echoes this divide: it is ready to intervene in cases of physical harm but usually refuses to do so when someone’s misdeed results in “only” economic harm.92

Nonetheless, economic harm is real, whether caused to the Internet itself or to interests external to it. Disruption benefits some while others lose, and the power of the generative Internet, available to anyone with a modicum of knowledge and a broadband connection, can be turned to network-destroying ends. As the previous chapter illustrated, the Internet’s very generativity—combined with that of the PCs attached—sows the seeds for a “digital Pearl Harbor.”93 If we do not address this problem, the most likely first-order solutions in reaction to the problem will be at least as bad as the problem itself, because they will increase security by reducing generativity.

The Internet security problem is only one item within a basket of conflicts whose balance is greatly affected by the rise of the generative Internet. Some entrepreneurs who have benefited from the disruption of the late 1990s naturally wish to close the door behind them—enjoying the fruits of the generative grid while denying them to the next round of innovators. First among the injured are the publishing industries whose intellectual property’s value is premised on maintaining scarcity, if not fine-grained control, over the creative works in which they have been granted some exclusive rights.

Von Hippel’s work emphasizes the ways in which established firms and nonmarket-acting individuals can innovate in their respective spheres and benefit from one another’s activities. Benkler, on the other hand, sees war between the amateur authors empowered by generative systems and the industries whose work they will affect:

If the transformation I describe as possible occurs, it will lead to substantial redistribution of power and money from the twentieth-century industrial producers of information, culture, and communications—like Hollywood, the recording industry, and perhaps the broadcasters and some of the telecommunications services giants—to a combination of widely diffuse populations around the globe, and the market actors that will build the tools that make this population better able to produce its own information environment rather than buying it ready-made. None of the industrial giants of yore are taking this reallocation lying down. The technology will not overcome their resistance through an insurmountable progressive impulse.94

For others, the impact of a generative system may be not just a fight between upstarts and incumbents, but a struggle between control and anarchy. Mill in part reconciled his embrace of individual rights with his utilitarian recognition of the need for limits to freedom by conceding that there are times where regulation is called for. However, he saw his own era as one that was too regulated:

Whoever thinks that individuality of desires and impulses should not be encouraged to unfold itself, must maintain that society has no need of strong natures—is not the better for containing many persons who have much character—and that a high general average of energy is not desirable.

In some early states of society, these forces might be, and were, too much ahead of the power which society then possessed of disciplining and controlling them. There has been a time when the element of spontaneity and individuality was in excess, and the social principle had a hard struggle with it. The difficulty then was, to induce men of strong bodies or minds to pay obedience to any rules which required them to control their impulses. To overcome this difficulty, law and discipline, like the Popes struggling against the Emperors, asserted a power over the whole man, claiming to control all his life in order to control his character—which society had not found any other sufficient means of binding. But society has now fairly got the better of individuality; and the danger which threatens human nature is not the excess, but the deficiency, of personal impulses and preferences. Things are vastly changed, since the passions of those who were strong by station or by personal endowment were in a state of habitual rebellion against laws and ordinances, and required to be rigorously chained up to enable the persons within their reach to enjoy any particle of security. In our times, from the highest class of society down to the lowest every one lives as under the eye of a hostile and dreaded censorship.95

A necessary reaction to the lawlessness of early societies had become an overreaction and, worse, self-perpetuating regulation. The generative Internet and PC were at first perhaps more akin to new societies; as people were connected, they may not have had firm expectations about the basics of the interaction. Who pays for what? Who shares what? The time during which the Internet remained an academic backwater, and the PC was a hobbyist’s tool, helped situate each within the norms of Benkler’s parallel economy of “sharing nicely,” of greater control in the hands of users and commensurate trust that they would not abuse it. Some might see this configuration as spontaneity and individuality in excess. One holder of a mobile phone camera can irrevocably compromise someone else’s privacy;96 one bootleg of a concert can make the rounds of the whole world. And one well-crafted virus can take down millions of machines.

This is the generative pattern, and we can find examples of it at every layer of the network hourglass:

  1. An idea originates in a backwater.
  2. It is ambitious but incomplete. It is partially implemented and released anyway, embracing the ethos of the procrastination principle.
  3. Contribution is welcomed from all corners, resulting in an influx of usage.
  4. Success is achieved beyond any expectation, and a higher profile draws even more usage.
  5. Success is cut short: “There goes the neighborhood” as newer users are not conversant with the idea of experimentation and contribution, and other users are prepared to exploit the openness of the system to undesirable ends.
  6. There is movement toward enclosure to prevent the problems that arise from the system’s very popularity.

The paradox of generativity is that with an openness to unanticipated change, we can end up in bad—and non-generative—waters. Perhaps the forces of spam and malware, of phishing and fraud and exploitation of others, are indeed “too much ahead of the power which society then possessed of disciplining and controlling them.”97 For too long the framers of the Internet have figured that ISPs can simply add bandwidth to solve the spam problem; if so, who cares that 90 percent of e-mail is spam?98 Or vendors can add PC computing cycles to solve the malware problem, at least from the PC owner’s point of view—a PC can function just fine while it is infected because, with the latest processor, it can spew spam while still giving its user plenty of attention for game playing or word processing or Web surfing.

This complacency is not sustainable in the long term because it ignores the harm that accrues to those who cannot defend themselves against network mischief the way that technologically sophisticated users can. It fails to appreciate that the success of the Internet and PC has created a set of valid interests beyond that of experimentation. In the next chapter, we will see how the most natural reactions to the generative problem of excess spontaneity and individuality will be overreactions, threatening the entire generative basis of the Net and laying the groundwork for the hostile and dreaded censorship that Mill decried. In particular, a failure to solve generative problems at the technical layer will result in outcomes that allow for unwanted control at the content and social layers.

Then we will turn to solutions: ways in which, as the vibrant information society matures, we can keep problems in check while retaining the vital spark that drives it, and us, to new heights.

Posted by The Editors on March 1, 2008