Nicholas Negroponte, former director of the MIT Media Lab, announced the One Laptop Per Child (OLPC) project at the beginning of 2005. The project aims to give one hundred million hardy, portable computers to children in the developing world. The laptops, called XOs, are priced around $100, and they are to be purchased by governments and given to children through their schools.1 As of this writing Brazil, Libya, Mexico, Nigeria, Peru, Rwanda, and Uruguay have committed to a pilot run that will have the XO’s assembly lines ramping up to five million machines per month and totaling approximately 20 percent of all laptop manufacturing in the world.2

The pitch to governments footing the bill emphasizes alignment with existing schoolhouse curricula and practices. A laptop can be a cost-effective way to distribute textbooks, because it can contain so much data in a small space and can be updated after it has been distributed. Says Negroponte: “The hundred-dollar laptop is an education project. It’s not a laptop project.”3

Yet OLPC is about revolution rather than evolution, and it embodies both the promise and challenge of generativity. The project’s intellectual pedigree and structure reveal an enterprise of breathtaking theoretical and logistical ambition. The education Negroponte refers to is not the rote learning represented by the typical textbook and the three R’s that form the basis of most developing and developed country curricula. Rather, the XO is shaped to reflect the theories of fellow Media Lab visionary Seymour Papert. Alternately known as constructionism or constructivism, Papert’s vision of education downplays drills in hard facts and abstract skills in favor of a model that teaches students how to learn by asking them to undertake projects that they find relevant to their everyday lives.4

A modest incarnation of the OLPC project would distribute PCs as electronic workbooks. The PCs would run the consumer operating systems and applications prevailing in the industrialized world—the better to groom students for work in call centers and other outsourced IT-based industries. Microsoft, facing competition from free operating systems, has shown a willingness to greatly reduce the prices of its products in areas where wallets are smaller, so such a strategy is not necessarily out of reach, and in any case the XO machine could run one of the more consumer-friendly versions of free Linux without much modification.5

But the XO redesigns today’s user interfaces from the ground up. Current PC users who encounter an XO have a lot to unlearn. For example, the arrow pointer serves a different purpose: moving the XO’s arrow toward the center of the screen indicates options that apply only to that computer; moving the pointer toward any edge indicates interaction with nearby computers or the community at large.

The XO envisions students who are able to hack their own machines: to reprogram them even as they are learning to read and write—and to do so largely on their own initiative. The XO dissemination plan is remarkably light on both student and teacher training. There are only a handful of trainers to cover the thousands of schools that will serve as distribution points, and their function is more to ensure the installation and functioning of the servers than to impart true mastery of the machines. Students are expected to rely on each other and on trial and error to acquire most of the skills needed to use and reprogram the machines.

Content also seems a calculated afterthought. The XO project wiki—haphazardly organized, as wikis tend to be—featured a “call for content” in late 2006, mere months before millions of machines were to be placed in children’s hands, for “content creators, collectors, and archivists, to suggest educational content for inclusion with the laptops, to be made available to millions of children in the developing world, most of whom do not have access to learning materials.”6 Determining exactly what would be bundled on the machines, what would repose on servers at schools, and what would be available on the XO Web site for remote access was very much a work in progress even as deployment dates neared.

In other words, XO has embraced the procrastination principle that is woven through generative technologies. To the chagrin and discomfort of most educational academics following the project, there is little focus on specific educational outcomes or metrics.7 There are no firm plans to measure usage of the laptops, or to correlate changes in test scores with their deployment and use. Instead, the idea is to create an infrastructure that is both simple and generative, stand back, and see what happens, fixing most major substantive problems only as they arise, rather than anticipating them from the start.

Thus as much as Negroponte insists that the project is not a technology play, the lion’s share of the effort has gone into just that, and is calculated to promote a very special agenda of experimentation. Central to the XO’s philosophy is that each machine should belong to a single child, rather than being found in a typical computer lab or children’s cyber café. That partially explains the XO’s radical design, both in physical form and in software. It features especially small keys so that adults cannot easily use it if they should steal it from a child, and it has no moving parts within. There is no hard drive to suffer from a fall; the screen is designed to be viewable in direct sunlight; and it consumes little enough power that it can be recharged with a crank or other physical motion in the absence of a source of electricity. The machines automatically form mesh networks with one another so that children can share programs and data with each other or connect to a school’s data repository in the absence of any ISPs. It is a rediscovery of the principles behind FidoNet, the ad hoc network of bulletin boards programmed on PCs that called each other using modems before PC users could connect to the Internet.

One bundled application, TamTam, lets a child use the machine to generate music and drumbeats, and nearby machines can be coordinated through their mesh networks so that each one represents a different instrument in a symphony the group can compose and play. Just as some students might develop and express talents at the technical layer, reprogramming the machines, others might be inspired to develop musical talents through the rough tools of TamTam at the content layer.

Constructionism counts on the curiosity and intellectual passion of self- or informally taught individuals as its primary engine, exactly the wellspring tapped by generative systems. From XO’s founders we see an attempt to reprise the spirit that illuminated the original personal computer, Internet, and Web. They believe that it is less important to provide content than to provide a means of making it and passing it along, just as an Internet without any plan for content ended up offering far more than the proprietary walled gardens that had so carefully sponsored and groomed their offerings. There is a leap of faith that a machine given entirely to a child’s custody, twenty-four hours a day, will not promptly be lost, stolen, or broken. Instead, children are predicted to treat these boxes as dear possessions, and some among them will learn to program, designing and then sharing new applications that in turn support new kinds of content and interaction that may not have been invented in the developed world.

Yet the makers of the XO are aware that it is no longer the dawn of the networked era. We have experienced market boom and wildly successful applications, but also bust, viruses, and spam. The sheer scale and public profile of the XO project make it difficult fully to embrace an experimentalist spirit, whether at the technical, content, or social layers. The sui generis modified Linux-based operating systems within the XO machines give them an initial immunity to the worms and viruses that plague the machines of the developed world, so that should they choose to surf the world’s Web they will not be immediately overcome by the malware that otherwise requires constantly updated firewalls. They can breathe the digital air directly, without the need for the expensive antivirus “clean suits” that other PCs must have. XO’s director of security has further implemented a security architecture for the machines that keeps programs from being able to communicate with each other, in order to preemptively quarantine any attack in one part of the machine.8 This means that a word processor cannot talk directly to a music program, and an Internet program cannot talk to a drawing program. This protects the machine from hypothetical viruses, but it also adds a layer of inflexibility and complexity to an operating system that children are supposed to be able to understand and modify.

The XO thus combines its generative foundation with elements of a tethered appliance. XO staff have vowed never to accede to requests for content filtering9—yet they have built a kill switch into the machines so that stolen models can be selectively disabled,10 and such a switch opens the door to later remote control. Thus XOs are both independent, in that they can form mesh networks, and tethered, in that they can be updated, monitored, and turned off from afar so long as they are connected to the Internet. They are generative in spirit and architecture, and they are also appliances, painstakingly designed to be reliable to and usable by someone who cannot read or write. They combine the hope of the early Internet era with the hard lessons of its second phase. They represent the confusion of the interregnum between the unbridled explosion of cheap and flexible processors, networks, and sensors, and the tightening up that comes as their true power is appreciated—and abused.

Perhaps the audience of schoolchildren in developing countries is remote and uninteresting enough to those who want to control or compromise today’s information technology that it will be helpfully overlooked during the critical time period in which backwater status helps to foster generative development. Just as domain names were originally first-come, first-served, and no major companies reserved their own names or foresaw a trademark problem, poor schoolchildren may not be deemed enough of an economic market to be worth vying for—either in attracting their eyeballs to show them advertising, or in preventing them from exchanging bits that could be copyrighted. There are no preexisting CD sales among them to dent.

XO is but the most prominent and well-funded of a series of enterprises attempting to bridge the digital divide. Other efforts, such as the Volkscomputer in Brazil, the VillagePDA, and the Ink have fared poorly, stuck at some phase of development or production.11 Negroponte’s impatience with tentative initial steps, and with the kind of planning and study that firm-based ventures usually require, has worried many in the international development community. They fear that a prominent failure of the project could unduly tarnish other attempts to deploy technology in the developing world. The Indian government announced in 2006 that it would not sign up to buy any XO machines, in part due to difficulties encountered with the Simputer, a for-profit project begun in 1998 to deliver handheld technology to India’s rural population, which is made up mostly of farmers and laborers—many of whom are illiterate and speak regional dialects. In 2001, Bruce Sterling lionized the Simputer as “computing as it would have looked if Gandhi had invented it, then used Steve Jobs for his ad campaign.”12 It never took off. Instead India appears to be placing its bets on the Novatium Nova or similar devices: non-generative machines fully tethered to a subscription server for both software and content.13

Will XO fail like the others? Development experts view it as skeptically as education experts do, seeing XO as yet another risky heaving of hardware at problems that are actually political, social, and economic in nature. Debates on the XO wiki wonder whether teching-up an entire generation of millions of children will be good or bad for those already online. Some worry that the already-formidable sources of Nigerian “419” spam soliciting business deals will grow and diversify. There is even musing that guerrilla fighters could use the laptops’ mesh networking capabilities to communicate secretly on the battlefield.14 (Depending on which side one supports in that battle, that could be good, although it is a far cry from the notion of laptops as educational gateways for children.)

As computer scientist Gene Spafford wrote:

We can’t defend against the threats we are facing now. If these mass computer giveaways succeed, shortly we will have another billion users online who are being raised in environments of poverty, with little or no education about proper IT use, and often in countries where there is little history of tolerance (and considerable history of religious, ethnic and tribal strife). Access to eBay and YouTube isn’t going to give them clean water and freedom from disease. But it may help breed resentment and discontent where it hasn’t been before.

Gee, I can barely wait. The metaphor that comes to mind is that if we were in the ramp-up to the Black Plague in the middle ages, these groups would be trying to find ways to subsidize the purchase of pet rats.15

Spafford appears to recognize the delicate condition of today’s Net, and he believes that a pause in expansion is needed—a bit of time to digest the problems that beset it. The easier and more risk-averse path is to distribute mobile phones and other basic Net appliances to the developing world just as those devices are becoming more central in the developed one, bridging the digital divide in one sense—providing useful technology—while leaving out the generative elements most important to the digital space’s success: the integration of people as participants in it rather than only consumers of it.

But a project like OLPC offers an opportunity to demonstrate fixes to the Net’s problems among audiences that have yet to encounter it. Its XO represents a new path of continued if cautious generativity as the developed world’s technology is beginning to ossify under the weight of its own success. It represents a faith not only that students can learn to reprogram their computers, but that what they direct them to do will be, on balance, good if disruptive.

The story of the XO is the story of the generative pattern. The pattern begins with the ambitious planting of a flag for a new enterprise in an overlooked backwater. The procrastination principle gives license for the idea’s technical and social blueprints to be incomplete. Contribution is welcome from outsiders, and if the project takes off, the results may prove completely unexpected.

The XO’s skeptics have much in common with generativity’s skeptics. They can convincingly put forward the very real risks attendant to a project only partially planned, without extensive layers of measurement, control, and accountability. These risks are most obviously grouped under the rubric of security, but the label is far too narrow either to capture the problem or to point us to the most promising solutions—just as the story of badware on PCs is not simply a story about security worries on the Internet, narrowly defined.

Rather, the limits of an open PC and Net, and the fears for the XO, are much more general case studies of what happens within systems that are built with a peculiar and extraordinary openness to contribution and innovation and that succeed because of it. They challenge us to understand and meet the problems arising from success in a way that does not neuter what made the original success possible.

The puzzle of PC security is fundamentally the same as the puzzle of keeping Wikipedia honest and true—and perhaps giving birth to its version 2.0 successor—now that Wikipedia has entered the prime-time glare, attracting participants who are ignorant or scornful of its ideals. It is the puzzle of empowering people to share and trade stories, photos, and recommendations without losing their identities as they become not only the creators of distributed scrutiny and judgment, but also their subjects.

It is the puzzle of Herdict, the application designed to run on netizens’ PCs to generate and share a collective map of vital signs, one that can produce distributed judgments about good code and bad. One of the first questions asked about Herdict is whether makers of badware will simply hijack lots of PCs and compel them to report to Herdict that they are happy, when in fact they are not. One answer acknowledges the problem and then seeks, from day one, to forestall it while it is still on the drawing board, with attendant complication, investment, and elaboration. An alternative answer says: The point at which Herdict is worth the effort of bad people to game it is a milestone of success. It is a token of movement from the primordial soup that begins the generative pattern to the mainstream impact that attracts the next round of problems.

Imagine planning but not yet executing Wikipedia: “Won’t people come along and vandalize it?” One response to that question, and to the others like it that arise for an idea as crazy as Wikipedia, would be to abandon the idea—to transform it so much in anticipation of the problems that it is unrecognizable from its original generative blueprint. The response instead was to deem the question reasonable but premature. The generativity that makes it vulnerable also facilitates the tools and relationships through which people can meet the problems when first-round success causes them to materialize.

The animating spirit: “Ready, fire, aim.” This ethos is a major ingredient of Google’s secret sauce as a company, a willingness to deploy big ideas that remain labeled “beta” for months even as they become wildly popular, as Google News was. It lies behind the scanning of all the world’s books, despite the logistical burdens and legal uncertainties. To the amazement of those of us who work for universities and could not possibly persuade our general counsels to risk their clients’ endowments on such a venture, Google simply started doing it. The litigation continues as this book goes to press, and so does the scanning of the books and the indexing of their contents, available to hundreds of millions of people who would otherwise never know of them.

How we choose to approach generative puzzles animates the struggle between the models of the Net and of television, of the insurgent and the incumbent. Traditional cyberlaw frameworks tend to see the Net as an intriguing force for chaos that might as well have popped out of nowhere. It is too easy to then shift attention to the “issues raised” by the Net, usually by those threatened by it—whether incumbent technical-layer competitors like traditional telephony providers, or content-layer firms like record companies whose business models (and, to be sure, legally protected interests) are disrupted by it. Then the name of the game is seen to be coming up with the right law or policy by a government actor to address the issues. Such approaches can lead to useful, hard-nosed insights and suggestions, but they are structured to overlook the fact that the Net is quite literally what we make it.

The traditional approaches lead us in the direction of intergovernmental organizations and diplomatically styled talk-shop initiatives like the World Summit on the Information Society and its successor, the Internet Governance Forum, where “stakeholders” gather to express their views about Internet governance, which is now more fashionably known as “the creation of multi-stakeholder regimes.” Such efforts import from professional diplomacy the notion of process and unanimity above all. Their solution for the difficulties of individual state enforcement on the Net is a kind of negotiated intellectual harmony among participants at a self-conscious summit—complex regimes to be mapped out in a dialogue taking place at an endlessly long table, with a role for all to play. Such dialogues end either in bland consensus pronouncements or in final documents that are agreed upon only because the range of participants has been narrowed.

It is no surprise that this approach rarely gets to the nuts and bolts of designing new tools or grassroots initiatives to take on the problems it identifies. The Net and its issues sail blithely on regardless of the carefully worded communiqués that emerge from a parade of meetings and consultations. Stylized gatherings of concerned stakeholders are not inherently bad—much can come of dialogue among parties whose interests interconnect. Indeed, earlier in this book I called for a latter-day Manhattan Project to take on the most pressing problems facing the generative Internet. But the types of people that such a project requires are missing from the current round of “stakeholder governance” structures. Missing are the computer scientists and geeks who would rather be coding than attending receptions in Geneva and Tunis. Without them we too easily neglect the prospect that we could code new tools and protocols to facilitate social solutions—the way that the robots.txt of Chapter Nine has so far headed off what otherwise would have been yet another cyberlaw issue.

To be sure, from the earliest days of the Internet the people who designed its protocols acceded to some formality and diplomacy. Recall that they published “RFCs,” requests for comments designed to write up their ideas, creating institutional structure and memory as the project became bigger than just a few researchers in a room. The author of the first one—RFC 1—recalls: “We parceled out the work and wrote the initial batch of memos. In addition to participating in the technical design, I took on the administrative function of setting up a simple scheme for numbering and distributing the notes. Mindful that our group was informal, junior and unchartered, I wanted to emphasize these notes were the beginning of a dialog and not an assertion of control.”16

Informal, junior, and unchartered, yet collaborative and at least partially structured: this includes people who are eager to take on a parcel of work and build. It represents the ingredients found in the generative soil of Wikipedia, PledgeBank, Meetup, and countless other innovations that abound on the Net, themselves made possible because the Net’s soil is made of the same stuff. The way to secure it and the innovations built upon it is to empower rank-and-file users to contribute, rather than to impose security models that count on a handful of trusted people for control. We need tools that cure the autistic nature of today’s Net experience: PC users unaware of their digital environments and therefore unable to act on social cues, whether of danger or of encouragement.

If history is a guide, these tools can just as likely come from one or two researchers as from hackers, and the properly executed Manhattan Project to bolster the Net for another round of success will not be marked by centralization so much as by focus: the application of money and encouragement to those who step forward to help fix the most important and endemic problems that can no longer tolerate any procrastination.

Just as the XO’s technology platform seeks to cultivate such contributions as routine rather than as obscure or special, by placing generative technologies into as many children’s hands as possible, the educational systems in the developed world could be geared to encourage and reward such behavior, whether at the technical, content, or social layers.

Unfortunately, the initial reaction by many educators to digital participatory enterprises—ones that indeed may be subverted by their users—is fear. Many teachers are decrying the ways in which the Net has made it easy to plagiarize outright or to draw from dubious sources.17 Some schools and universities have banned the citation of Wikipedia in student papers,18 while signing up for plagiarism detection services and for automatic essay-grading tools, one of which “uses computational intelligence strategies to grade students [sic] essays in seconds and respond with detailed, topic-specific feedback.”19

Instead of being subject to technology that automates and reinforces the worst aspects of contemporary education—emphasizing regurgitation and summarization of content from an oracular source, followed by impersonal grading within a conceptual echo chamber—our children ought to be encouraged to accept the participatory invitation of the Net and that which has recursively emerged at its upper layers from its open technologies below. Wikipedia’s conceded weakness as a source is an invitation to improve it, and the act of improving it can be as meaningful to the contributor as to those who come after. Our students can be given assignments that matter—reading with a critical eye the sources that the rest of the online world consults, and improving them as best they know how, using tools of research and argument and intellectual honesty that our schools can teach. Instead of turning in a report for a single teacher to read, they can put their skills into work that everyone can read. The millions of students doing make-work around the world can come to learn instead that what they do can have consequences—and that if they do not contribute, it is not necessarily true that others will. In other words, we can use our generative technologies to teach our children to take responsibility for the way the world works rather than to be merely borne along by its currents. This will work best if our teachers are on board. Without people to whom others can apprentice, to learn technical skills and social practice, generative technologies can flounder. This is the XO’s vulnerability, too—if it fails, it may in large part be because the technology was too difficult to master and share, and its possibilities not hinted at enough to entice learners to persist in their attention to it.

Like the XO, generativity itself is, at its core, not a technology project. It is an education project, an exercise in intellect and community, the founding concepts of the university. Our universities are in a position to take a leadership role in the Net’s future. They were the Net’s original cradle, along with the self-taught hobbyists who advanced the PC from its earliest days. Business and commerce followed in their wake, refining and expanding the opportunities developed through largely nonmarket process and ethos. The Internet and attached generative platforms are invitations to code and to build. Universities— and not just their computer science departments—should see those invitations as central to their missions of teaching their students and bringing knowledge to the world.

As countries and groups in the developing world incline to brand-new generative technologies, those in the developed world must fight to retain theirs. There is not a simple pendulum swinging from generative to non-generative and back again; we cannot count on the fact that screws tightened too far can become stripped. Any comprehensive redesign of the Internet at this late stage would draw the attention of regulators and other parties who will push for ways to prevent abuse before it can even happen. Instead, we must piecemeal refine and temper the PC and the Net so that they can continue to serve as engines of innovation and contribution while mitigating the most pressing concerns of those harmed by them. We must appreciate the connection between generative technology and generative content.

Today’s consumer information technology is careening at breakneck pace, and most see no need to begin steering it. Our technologists are complacent because the ongoing success of the generative Net has taken place without central tending—the payoffs of the procrastination principle. Rank-and-file Internet users enjoy its benefits while seeing its operation as a mystery, something they could not possibly hope to affect. They boot their PCs each day and expect them more or less to work, and they access Wikipedia and expect it more or less to be accurate.

But our Net technologies are experiencing the first true shock waves from their generative successes. The state of the hacking arts is advancing. Web sites can be compromised in an instant, and many visitors will then come away with an infected PC simply for having surfed there. Without a new cadre of good hackers unafraid to take ownership of the challenges posed by their malicious siblings and create the tools needed to help nonhackers keep the Net on a constructive trajectory, the most direct solutions will be lockdown that cuts short the Net experiment, deprives us of its fruits, and facilitates a form of governmental control that upends a balance between citizen and sovereign. These ripples can be followed recursively up the Net’s layers. Our generative technologies need technically skilled people of goodwill to keep them going, and the fledgling generative activities above—blogging, wikis, social networks—need artistically and intellectually skilled people of goodwill to serve as true alternatives to a centralized, industrialized information economy that asks us to identify only as consumers of meaning rather than as makers of it. Peer production alone does not guarantee collaborative meaning making. 
Services like InnoCentive place five-figure bounties on difficult but modular scientific problems, and ask the public at large to offer solutions.20 But the solutions tendered then become the full property of the institutional bounty hunter.21 Amazon’s Mechanical Turk has created a marketplace for the solving of so-called human intelligence tasks on the other side of the scale: trivial, repetitive tasks like tracing lines around the faces in photographs for a firm that has some reason to need them traced.22 If five years from now children with XOs were using them for hours each day primarily to trace lines at half a penny per trace, it could be a useful economic engine to some and a sweatshop to others—but either way it would not be an activity that is generative at the content layer.

The deciding factor in whether our current infrastructure can endure will be the sum of the perceptions and actions of its users. There are roles for traditional state sovereigns, pan-state organizations, and formal multistakeholder regimes to play. They can help reinforce the conditions necessary for generative blossoming, and they can also step in—with all the confusion and difficulty that notoriously attends regulation of a generative space—when mere generosity of spirit among people of goodwill cannot resolve conflict. But such generosity of spirit is a society’s powerful first line of moderation.

Our fortuitous starting point is a generative device in tens of millions of hands on a neutral Net. To maintain it, the users of those devices must experience the Net as something with which they identify and belong. We must use the generativity of the Net to engage a constituency that will protect and nurture it. That constituency may be drawn from the ranks of a new generation able to see that technology is not simply a video game designed by someone else, and that content is not simply what is provided through a TiVo or iPhone.

Posted by The Editors on March 1, 2008