The conventional wisdom is that governments respond slowly to technological change. In the case of the Internet, nothing could be further from the truth. In 1994 and 1995, “dot-com” was still a mystical term for many. Most stories about the Internet dealt with sexual predation rather than possibilities of extreme wealth. Internet commerce itself was barely an idea, and some of the most exciting sites on the Web had pictures of coffeepots in university departments far away. (“See,” one would proudly say to a technological neophyte friend when introducing him to the wonders of the Net, “the pot is empty and we can see that live from here! This changes everything!”) It was an innocent time. Yet the U.S. government was already turning the wheels of intellectual property policy to respond to the threat (and promise) of the Internet. More precisely, it was trying to shape the future of the cumbersomely named “National Information Infrastructure,” the official name for the “information superhighway” that it was presumed would replace the “immature” technology of the Net. The government was wrong about that, and about a lot else.
The blueprint for new intellectual property policy online came from the Patent and Trademark Office. That office promulgated first a Green Paper and then, after further hearings, a White Paper, on “Intellectual Property and the National Information Infrastructure.”1 As policy and legal documents these are in one sense long out of date. Some of their legal arguments were successfully challenged. Some of their most important proposals were rejected, while many others have become law. But as a starting point from which to trace the frame of mind that has come to dominate intellectual property policy online, they are hard to equal.
These documents contained proposals that nowadays would be seen as fairly controversial. Internet service providers were said to be “strictly liable” for copyright violations committed by their subscribers; that is to say, they were legally responsible whether or not they knew about the violation or were at fault in any way. Loading a document into your browser’s transient cache memory while reading it was said to be making a “copy.” There was more: the beginnings of what later became the Digital Millennium Copyright Act,2 making it illegal to cut through the digital fences which content providers put around their products. The attitude toward fair use was particularly revealing. At one point in the White Paper it was hinted that fair use might be a relic of the inconveniences of the analog age, to be discarded now that we could have automated fractional payments for even the most insignificant use.3 (It was noted, however, that some disagreed with this conclusion.) At another point, fair use was described as a “tax” on rights holders and a “subsidy” to those who benefited from it, such as educational institutions.4 The White Paper also suggested that while any potential loss to rights holders caused by the new technology needed to be countered with new rights and new protections, any potential gain to them through the new technology was simply theirs. Potential gain did not offset the need to compensate for potential loss.
So what views of intellectual property were we carrying forward into the Internet age? Intellectual property is just like other property. Rights are presumptively absolute. Any limitations on them, such as fair use, are taxes on property owners, subsidies to the society at large. It sounds like a perfect time to administer the Jefferson Warning I sketched out in Chapter 2. After all, Jefferson was specifically warning against each of these errors two hundred years ago. To find them in a student paper would be disappointing—irritating, even. But this document was the blueprint for the intellectual property regime of cyberspace.
But do these mistakes matter? How important is it that we get the rules of intellectual property right? To me, to a number of my colleagues, to some librarians and a few software gurus, the White Paper was more than just a bit of bad policy in a technical field—like a poorly drafted statute about the witnessing of wills, say. When you set up the property rules in some new space, you determine much about the history that follows. Property rules have a huge effect on power relationships and bargaining positions. Think of rules setting out water rights or the right to drive cattle over homesteaders’ land in the American West. But they also are part of a larger way of seeing the world; think of the early-twentieth-century rules treating unions as “conspiracies in restraint of trade” or the Supreme Court decisions that dispossessed the American Indians on the theory that they did not comprehend the concept of property and thus did not “own” the land being taken from them.5 We were at a comparable point in the history of cyberspace. What was being set up here was a vision of economy and culture, a frame of mind about how the world of cultural exchange operates, and eventually a blueprint for our systems of communication. At this stage, the range of possibilities is extremely wide. A lot of different choices could be made, but subsequent changes would be harder and harder as people and companies built their activities around the rules that had been laid down. This was, in short, a tipping point where it was particularly important that we make the right decisions.
Conventional political science told us there were a lot of reasons to fear that we would not make the right decisions. The political process was going to be particularly vulnerable to problems of capture by established industries, many of whom would (rightly) see the Internet as a potential threat to their role as intermediaries between artists and creators on the one hand and the public on the other.
Intellectual property legislation had always been a cozy world in which the content, publishing, and distribution industries were literally asked to draft the rules by which they would live. The law was treated as a kind of contract between the affected industries. Rationally enough, those industries would wish to use the law not merely to protect their legitimate existing property rights, but to make challenges to their basic business plans illegal. (Imagine what would have happened if we had given the lamp-oil sellers the right to define the rules under which the newfangled electric light companies would operate.) There would be no easy counterweight to these pressures, as Jessica Litman points out in a wonderful set of reflections on copyright lawmaking, because the potential competitors to existing titans were just being born and could thus be strangled safely in their cradles.6 Certainly the public would have little grasp as yet of what was at stake.
In any event, when had the public played a role in intellectual property legislation? That kind of law affected businesses with printing presses or TV towers, not normal citizens. It did not help that the legislators were largely both ignorant and distrustful of the technology of the Internet—which was, at the time, thought to be dominated by foreign hackers, suicidal cults, pirates, and sleazy pornographers. (Terrorists and Nigerian spammers would be added to the mix later.)
Given an area of law that legislators were happy to hand over to the affected industries and a technology that was both unfamiliar and threatening, the prospects for legislative insight were poor. Lawmakers were assured by lobbyists
a) that this was business as usual, that no dramatic changes were being made by the Green or White papers; or
b) that the technology presented a terrible menace to the American cultural industries, but that prompt and statesmanlike action would save the day; or
c) that layers of new property rights, new private enforcers of those rights, and technological control and surveillance measures were all needed in order to benefit consumers, who would now be able to “purchase culture by the sip rather than by the glass” in a pervasively monitored digital environment.
In practice, somewhat confusingly, these three arguments would often be combined. Legislators’ statements seemed to suggest that this was a routine Armageddon in which firm, decisive statesmanship was needed to preserve the digital status quo in a profoundly transformative and proconsumer way. Reading the congressional debates was likely to give one conceptual whiplash.
To make things worse, the press was—in 1995, at least—clueless about these issues. It was not that the newspapers were ignoring the Internet. They were paying attention—obsessive attention in some cases. But as far as the mainstream press was concerned, the story line on the Internet was sex: pornography, online predation, more pornography. The lowbrow press stopped there. To be fair, the highbrow press was also interested in Internet legal issues (the regulation of pornography, the regulation of online predation) and constitutional questions (the First Amendment protection of Internet pornography). Reporters were also asking questions about the social effect of the network (including, among other things, the threats posed by pornography and online predators).
There were certainly important issues within the areas the press was willing to focus on, and I do not mean to trivialize them. I worked with a couple of civil liberties groups in opposing the hapless Communications Decency Act, one of the most poorly drafted pieces of speech regulation ever to come out of Congress.7 It was a palpably unconstitutional statute, eventually struck down by a unanimous Supreme Court.8 Its proposals would have burdened the speech of adults while failing to protect the interests of minors. Reporters loved the topic of the Communications Decency Act. It was about sex, technology, and the First Amendment. It foreshadowed the future of online speech regulation. One could write about it while feeling simultaneously prurient, principled, and prescient: the journalistic trifecta. For law professors who worked on digital issues, the Communications Decency Act was an easy topic to get the public to focus on; we had the reporters and editors calling us, pleading for a quote or an opinion piece.
Intellectual property was something quite different. It was occasionally covered in the business pages with the same enthusiasm devoted to changes in derivatives rules. Presented with the proposals in the Green and White Papers, the reporters went looking for opinions from the Software Publishers Association, the Recording Industry Association of America, or the Motion Picture Association of America. This was not bias or laziness—to whom else would they go? Who was on the “other side” of these issues? Remember, all of this occurred before Napster was a gleam in Shawn Fanning’s eye. Shawn Fanning was in middle school. Amazon.com was a new company and “Google” was not yet a verb.
In this environment, convincing the legislature or the press that fundamental public choices were implicated in the design of intellectual property rights for the digital world was about as easy as convincing them that fundamental public choices were implicated in the rules of tiddlywinks. My own experience is probably representative. I remember trying to pitch an article on the subject to a charming but uncomprehending opinion page editor at the Washington Post. I tried to explain that decisions about property rules would shape the way we thought about the technology. Would the relatively anonymous and decentralized characteristics of the Internet that made it such a powerful tool for global speech and debate come to be seen as a bug rather than a feature, something to be “fixed” to make the Net safe for protected content? The rules would also shape the economic interests that drove future policy. Would we try to build the system around the model of proprietary content dispensed in tightly controlled chunks? Would fair use be made technologically obsolescent? Would we undercut the various nontraditional methods of innovation, such as free software, before they ever managed to establish themselves? What would become of libraries in the digital world, of the ideal that access to books had important differences from access to Twinkies? After I concluded this lengthy and slightly incoherent cri de coeur, there was a long pause; then the editor said politely, “Are you sure you couldn’t make some of these points about a free speech issue, like the Communications Decency Act, maybe?”
I finally placed the piece in the Washington Times,9 which was best known at the time as the only metropolitan newspaper owned by the Unification Church, familiarly referred to as the Moonies. This hardly counted as a direct line to the popular imagination (though the article’s mild criticisms elicited an extraordinary reaction from the Clinton administration’s lead official on intellectual property policy—throwing me for several weeks into a surreal world of secondhand threats, third-party leaks, and a hilarious back-and-forth in the letters page).10
Things were not completely one-sided. An unlikely group of critics had formed: librarians, a few software developers, law professors, some Internet libertarians. Of particular note was the Digital Future Coalition, which grew to represent a broad range of interested groups and industries thanks in part to the prescient analysis and remarkable energy of one of my colleagues, Peter Jaszi.11 Together with Pamela Samuelson, Jessica Litman, and a number of other distinguished legal scholars, Peter turned his considerable intellectual talents to explaining why writers, telecom companies, scientists, manufacturers of consumer electronics, and a host of other groups should be interested in the rules being debated. There had been a series of official hearings in which complaints were carefully collected and just as carefully ignored. This became harder to do as the critics became more numerous and better organized. Nevertheless, the currents were clearly running against them. It would be nice to say that this was merely because of the clubby history of intellectual property legislation, or the difficulty in getting press attention, or the various issues of industry capture and collective action problems. Yet this would be to miss a vital element of the situation.
Conventional political science showed that there were structural reasons why the legislative process was likely to succumb to industry capture.12 The reality turned out to be much worse. The real problem was not a political process dominated by cynical power politics, nor an initial absence of critical newspaper coverage, though both of those factors contributed. The real problem was that most of the proponents of the White Paper’s policies believed their own arguments so deeply and sincerely that they saw any criticism of those positions as either godless communism or hippy digital anarchism. (Frequently, in fact, they clung to their arguments even when there was fairly strong evidence that they would actually be harming themselves by putting these policies into effect. I will expand on this point later.) More importantly, they succeeded in getting their story about the threats and promises of the digital future accepted as the basis for all discussion of intellectual property policy. It became the organizing set of principles, the master narrative—call it what you will.
The heart of the story is beguilingly simple. The Internet makes copying cheaper and does so on an unparalleled global scale. Therefore we must meet the greater danger of illicit copying with more expansive rights, harsher penalties, and expanded protections. True, as I pointed out before, some of these expansions may indeed have the practical effect of reducing rights that citizens thought they had, such as fair use, low-level noncommercial sharing among personal friends, resale, and so on. But without an increase in private property rights, cheaper copying will eat the heart out of our creative and cultural industries. I call this story the Internet Threat. It is a powerful argument and it deserves some explanation.
Think back for a moment to the first chapter and the difference between Madame Bovary and the petunia. If the reason for intellectual property rights is the “nonrival” and “nonexcludable” nature of the goods they protect, then surely the lowering of copying and transmission costs implies a corresponding need to increase the strength of intellectual property rights. Imagine a line. At one end sits a monk painstakingly transcribing Aristotle’s Poetics. In the middle lies the Gutenberg printing press. Three-quarters of the way along the line is a photocopying machine. At the far end lies the Internet and the online version of the human genome. At each stage, copying costs are lowered and goods become both less rival and less excludable. My MP3 files are available to anyone in the world running Napster. Songs can be found and copied with ease. The symbolic end of rivalry comes when I am playing the song in Chapel Hill, North Carolina, at the very moment that you are both downloading and listening to it in Kazakhstan—now that is nonrival.
THE LOGIC OF PERFECT CONTROL
My point is that there is a teleology—a theory about how intellectual property law must develop historically—hidden inside the argument I call the Internet Threat. The argument, which is touted endlessly by the content industries—and not without reason—can be reduced to this: The strength of intellectual property rights must vary inversely with the cost of copying. With high copying costs, one needs weak intellectual property rights if any at all. To deal with the monk-copyist, we need no copyright because physical control of the manuscript is enough. What does it matter if I say I will copy your manuscript, if I must do it by hand? How will this present a threat to you? There is no need to create a legal right to exclude others from copying, no need for a “copy right.” As copying costs fall, however, the need to exclude increases. To deal with the Gutenberg press, we need the Statute of Anne—the first copyright statute—and the long evolution of copyright it ushered in.
But then comes the Internet. To deal with the Internet, we need the Digital Millennium Copyright Act,13 the No Electronic Theft Act,14 the Sonny Bono Copyright Term Extension Act,15 and perhaps even the Collections of Information Antipiracy Act.16 As copying costs approach zero, intellectual property rights must approach perfect control. We must strengthen the rights, lengthen the term of the rights, increase the penalties, and make noncommercial illicit copying a crime. We must move outside the traditional realm of copyright altogether to regulate the technology around the copyrighted material. Companies are surrounding their digital materials with digital fences. We must make it a violation of the law to cut those digital fences, even if you do so to make a “fair use” of the material on the other side. We must prohibit the making of things that can be used as fence-cutters—a prospect that worries researchers on encryption. In the long run, we must get rid of the troublesome anonymity of the Internet, requiring each computer to have an individual ID. We must make click-wrap contracts enforceable, even on third parties, even when you cannot read them before clicking—so that you never actually buy the software, music, movies, and e-books you download, merely “license” them for a narrowly defined range of uses. We must create interlocking software and hardware systems that monitor and control the material played on those systems—so that songs can be licensed to particular computers at particular times. Uses that the owners wish to forbid will actually be impossible, whether they are legal or not.
In other words, we must make this technology of the Internet, which was hailed as the great “technology of freedom,” into a technology of control and surveillance. The possibility of individuals circulating costless perfect digital copies requires it. It would be facile (if tempting) to say we must remake the Internet to make it safe for Britney Spears. The “Internet Threat” argument is that we must remake the Net if we want digital creativity—whether in music or software or movies or e-texts. And since the strength of the property rights varies inversely with the cost of copying, costless copying means that the remade Net must approach perfect control, both in its legal regime and its technical architecture.
Like any attractive but misleading argument, the Internet Threat has a lot of truth. Ask the software company producing expensive, specialized computer-assisted design programs costing thousands of dollars what happens when the program is made available on a “warez” site or a peer-to-peer filesharing network. The upstart computer game company pinning its hopes and its capital on a single new game would tell you the same thing. The easy availability of perfect, costless copies is a danger to all kinds of valuable cultural and economic production. The story of the Internet Threat is not wrong, it is simply dramatically incomplete in lots of ways. Here are two of them.
Costless Copying Brings Both Costs and Benefits
The Internet does lower the cost of copying and thus the cost of illicit copying. Of course, it also lowers the costs of production, distribution, and advertising, and dramatically increases the size of the potential market. Is the net result a loss to rights holders such that we need to increase protection and control in order to maintain a constant level of incentives? A large, leaky market may actually provide more revenue than a small one over which one’s control is much stronger. What’s more, the same technologies that allow for cheap copying also allow for swift and encyclopedic search engines—the best devices ever invented for detecting illicit copying. What the Net takes away with one hand, it often gives back with the other. Cheaper copying does not merely mean loss, it also means opportunity. Before strengthening intellectual property rights, we would need to know whether the loss was greater than the gain and whether revised business models and new distribution mechanisms could avoid the losses while capturing more of the gains.
But wait, surely theft is theft? If the new technologies enable more theft of intellectual property, must we not strengthen the laws in order to deal with the problem? If some new technology led to a rash of car thefts, we might increase police resources and prison sentences, perhaps pass new legislation creating new crimes related to car theft. We would do all of this even if the technology in question gave car owners significant benefits elsewhere. Theft is theft, is it not?
The answer in a word is no. Saying “theft is theft” is exactly the error that the Jefferson Warning is supposed to guard against. We should not assume that intellectual property and material property are the same in all regards. The goal of creating the limited monopoly called an intellectual property right is to provide the minimum necessary incentive to encourage the desired level of innovation. Anything extra is deadweight loss. When someone takes your car, they have the car and you do not. When, because of some new technology, someone is able to get access to the MP3 file of your new song, they have the file and so do you. You did not lose the song. What you may have lost is the opportunity to sell the song to that person or to the people with whom they “share” the file. We should not be indifferent to this kind of loss; it is a serious concern. But the fact that a new technology brings economic benefits as well as economic harm to the creation, distribution, and sale of intellectual property products means that we should pause before increasing the level of rights, changing the architecture of our communications networks, creating new crimes, and so on.
Remember, many of the things that the content industries were concerned about on the Internet were already illegal, already subject to suit and prosecution. The question is not whether the Internet should be an intellectual property-free zone; it should not be, is not, and never was. The question is whether, when the content industries come asking for additional or new rights, for new penalties, for the criminalization of certain types of technology, we should take into account the gains that the Internet has brought them, as well as the costs, before we accede to their requests. The answer, of course, is that we should. Sadly, we did not. This does not mean that all of the content industries’ attempts to strengthen the law are wrong and unnecessary. It means that we do not know whether they are or not.
There is a fairly solid tradition in intellectual property policy of what I call “20/20 downside” vision. All of the threats posed by any new technology—the player piano, the jukebox, the photocopier, the VCR, the Internet—are seen with extraordinary clarity. The opportunities, however, particularly those which involve changing a business model or restructuring a market, are dismissed as phantoms. The downside dominates the field, the upside is invisible. The story of video recorders is the best-known example. When video recorders—another technology promising cheaper copying—first appeared, the reaction of movie studios was one of horror. Their business plans relied upon showing movies in theaters and then licensing them to television stations. VCRs and Betamaxes fit nowhere in this plan; they were seen merely as copyright violation devices. Hollywood tried to have them taxed to pay for the losses that would be caused. Their assumption? Cheaper copying demands stronger rights.
Having lost that battle, the movie studios tried to have the manufacturers of the recording devices found liable for contributory copyright infringement; liable, in other words, for assisting the copyright violations that could be carried out by the owners of Sony Betamaxes. This, of course, was exactly the same legal claim that would be made in the Napster case. In the Sony case, however, the movie companies lost. The Supreme Court said that recording of TV programs to “time-shift” them to a more convenient hour was a fair use.17 The movie studios’ claims were rejected.
Freed from the threat of liability, the price of video recorders continued to fall. They flooded consumers’ houses at a speed unparalleled until the arrival of the World Wide Web. All these boxes sitting by TVs now cried out for content, content that was provided by an emerging video rental market. Until the triumph of DVDs, the videocassette rental market made up more than 50 percent of the movie industry’s revenues.18 Were losses caused by video recorders? To be sure. Some people who might have gone to see a movie in a theater because the TV schedule was inconvenient could instead record the show and watch it later. Videos could even be shared with friends and families—tattered copies of Disney movies recorded from some cable show could be passed on to siblings whose kids have reached the appropriate age. VCRs were also used for copying that was clearly illicit—large-scale duplication and sale of movies by someone other than the rights holder. A cheaper copying technology definitely caused losses. But it also provided substantial gains, gains that far outweighed the losses. Ironically, had the movie companies “won” in the Sony case, they might now be worse off.
The Sony story provides us with some useful lessons—first, this 20/20 downside vision is a poor guide to copyright policy. Under its sway, some companies will invariably equate greater control with profit and cheaper copying with loss. They will conclude, sometimes rightly, that their very existence is threatened, and, sometimes wrongly, that the threat is to innovation and culture itself rather than to their particular way of delivering it. They will turn to the legislature and the courts for guarantees that they can go on doing business in the old familiar ways. Normally, the marketplace is supposed to provide correctives to this kind of myopia. Upstart companies, not bound by the habits of the last generation, are supposed to move nimbly to harvest the benefits from the new technology and to outcompete the lumbering dinosaurs. In certain situations, though, competition will not work:
• if the dinosaurs are a cartel strong enough to squelch competition;
• if they have enlisted the state to make the threatening technology illegal, describing it as a predatory encroachment on the “rights” of the old guard rather than aggressive competition;
• if ingrained prejudices are simply so strong that the potential business benefits take years to become apparent; or
• if the market has “locked in” on a dominant standard—a technology or an operating system, say—to which new market entrants do not have legal access.
In those situations, markets cannot be counted on to self-correct. Unfortunately, and this is a key point, intellectual property policy frequently deals with controversies in which all of these conditions hold true.
Let me repeat this point, because it is one of the most important ones in this book. To a political scientist or market analyst, the conditions I have just described sound like a rarely seen perfect storm of legislative and market dysfunction. To an intellectual property scholar, they sound like business as usual.
In the case of the VCR wars, none of these factors obtained. The state refused to step in to aid the movie companies by criminalizing the new technology. There were equally powerful companies on the other side of the issue (the consumer electronics companies selling VCRs) who saw this new market as a natural extension of a familiar existing market—audio recorders. There was no dominant proprietary technological standard controlled by the threatened industry that could be used to shut down any threats to their business model. The market was allowed to develop and evolve without premature legal intervention or proprietary technological lockout. Thus we know in this case that the movie companies were wrong, that their claims of impending doom from cheap copies were completely mistaken. The public and, ironically, the industry itself benefited as a result. But the Sony case is the exception rather than the rule. That is why it is so important. If competition and change can be forbidden, we will get relatively few cases that disprove the logic that cheaper copying must always mean stronger rights. The “natural experiments” will never be allowed to happen. They will be squelched by those who see only threat in the technologies that allow cheaper copies and who can persuade legislators or judges to see the world their way. The story line I describe here, the Internet Threat, will become the conventional wisdom. In the process, it will make it much less likely that we will have the evidence needed to refute it.
The Holes Matter as Much as the Cheese
The Sony case is important in another way. The Supreme Court’s decision turned on the judgment that it was a “fair use” under U.S. copyright law for consumers to record television programs for time-shifting purposes. Since fair use comes up numerous times in this book, it is worth pausing for a moment to explain what it is.
The content industries like to portray fair use as a narrow and grudging defense against an otherwise valid case for copyright infringement—as if the claim were, “Yes, I trespassed on your land, which was wrong, I admit. But I was starving and looking for food. Please give me a break.” This is simply inaccurate. True, fair use is asserted as “an affirmative defense”; that is the way it is brought up in a copyright case. But in U.S. law, fair uses are stated quite clearly to be limitations on the exclusive rights of the copyright holder—uses that were never within the copyright holder’s power to prohibit. The defense is not “I trespassed on your land, but I was starving.” It is “I did not trespass on your land. I walked on the public road that runs through it, a road you never owned in the first place.” When society hands out the right to the copyright holder, it carves out certain areas of use and refuses to hand over control of them. Again, remember the Jefferson Warning. This is not a presumptively absolute property right. It is a conditional grant of a limited and temporary monopoly. One cannot start from the presumption that the rights holder has absolute rights over all possible uses and therefore that any time a citizen makes use of the work in any way, the rights holder is entitled to get paid or to claim “piracy” if he does not get paid. Under the sway of the story line I called the Internet Threat, legislators have lost sight of this point.
So what is “fair use”? When I am asked this question by nonlawyers, I offer to show them the actual provision in the copyright act. They recoil, clearly imagining they are about to be shown something the size and complexity of the tax code. Here is the statutory fair use provision in its entirety:
Sec. 107. - Limitations on exclusive rights: Fair use
Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include—
(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
(2) the nature of the copyrighted work;
(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
(4) the effect of the use upon the potential market for or value of the copyrighted work.
The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors.
“But this seems quite sensible,” people often say, as though they had expected both Byzantine complexity and manifest irrationality. (Perhaps they have had some experience with legal matters after all.) The ones who think about it a little longer realize that these factors cannot be mechanically applied. Look at factor 3, for example. Someone who is making a parody frequently needs to take large chunks of the parodied work. That is the nature of a parody, after all. They might then sell the parody, thus also getting into trouble with factor 1. And what about factor 4? Someone might quote big chunks of my book in a devastating review that ruined any chance the book had of selling well. Come to think of it, even a parody might have a negative effect on the “potential market” for the parodied work. But surely those uses would still be “fair”? (In both instances, the Supreme Court agrees that they are fair uses.)
In coming up with these hypothetical problem cases, the copyright novice is probably closer to having a good understanding of the purpose of fair use than many people who have studied it for years. In fact, the novice’s questions shed light on all of the exceptions, limitations, and defenses to proprietary rights—the holes in the cheese of intellectual property. The scholar’s urge is to find one theory that explains all the possible applications of the fair use doctrine, to arrange all of the cases like targets and shoot a single arrow through all of them. Perhaps fair use is designed to reduce the difficulty of clearing rights when it would be uneconomical or impossibly complex to do so: to reduce the paperwork, hassle, delay, ignorance, and aggravation that economists refer to under the sanguine name of “transaction costs.”19 (Though the idea that fair use is about transaction costs hardly explains some of the types of fair use we care most about—the rights to parody, to criticize, to reverse engineer.) Or perhaps fair use allows the rights of a transformative author to be trumped only by a second transformative author, who is building on the first—the parodist, reviewer, collage artist, or what have you.20 (Then again, photocopying for classroom use does not sound very “transformative.”) Could fair use be dictated by the Constitution or by international free speech guarantees? In this view, fair use provides a safety valve that allows copyright to coexist with the First Amendment, property rights over speech to coexist with freedom of expression.21 After all, it is not entirely obvious how it could be constitutional to forbid me, in the name of a federal law, from translating Mein Kampf in order to warn of the dangers of fascism or parodying some piece of art to subversive effect.
Each of these ideas about fair use has much to recommend it, as do the many other grand theories that have been offered to explain the puzzle. And therein lies the problem.
Intellectual property is a brilliant social invention which presents us with great benefits but also with a multitude of dangers:
1. the danger that the monopoly is unnecessary to produce the innovation, or that it is broader or lasts for longer than is necessary to encourage future production;
2. that overly broad rights will chill speech, criticism, or scientific progress;
3. that it will restrict access in ways that discourage “follow-on” innovation;
4. that it will lead to industry concentration in a way that hurts consumers or citizens while being less subject to antitrust regulation precisely because the monopoly or oligopoly rests on intellectual property rights;
5. that it will establish strong “network effects” which cause the market to tip over to some inefficient technology; and
6. that it will give the rights holder control over some technology outside the range of the monopoly but closely linked to it.
The list of dangers goes on and on, and so does the list of exceptions, limitations, and restraints designed to prevent them. We restrict the length of intellectual property rights. (At least, we used to. The framers thought it so important to do so that they put the need to have a limited term in the Constitution itself; nevertheless both Congress and the Supreme Court seem to have given up on that one.) We restrict the scope of intellectual property rights, so that they cannot cover raw facts or general ideas, only the range of innovation and expression in between. (At least, we used to. Developments in database protection, gene patents, and business method patents are clearly eroding those walls.) As with fair use, we impose limitations on the rights when we hand them out in the first place. The exclusive right conferred by copyright does not include the right to prevent criticism, parody, classroom copying, decompilation of computer programs, and so on. (Though as the next chapter shows, a number of recent legal changes mean that the practical ability to exercise fair use rights is seriously threatened.)
These limitations on intellectual property do not fit a single theory, unless that theory is “avoiding the multiple and evolving dangers of intellectual property itself.” Even a single limitation such as fair use clearly responds to many different concerns about the dangers of intellectual property rights. Indeed it will evolve to fit new circumstances. When computer programs were first clearly covered by copyright law, software engineers wondered if this would cripple the industry. Why? Anyone who wishes to compete with a dominant program needs to “decompile” it in order to make their program “interoperable,” or simply better. For example, a new word processing program, no matter how good, would be dead on arrival unless it could read all the files people had created with the old, dominant word processing software. But to do this, the engineers at the upstart company would have to take apart their competitor’s program. In the process they would have to create temporary copies of the old program, even though the final product—the hot new software—would be completely different from the old. Would this be a violation of copyright law?
In a series of remarkable and far-seeing cases involving such issues, the courts said no.22 “Decompilation” was fair use. The law of fair use had evolved in the context of expressive, nonfunctional, stand-alone works such as books, poems, songs. Now it was being applied to a functional product whose economics depended strongly on “network effects”—many types of programs are useful only if they are widely used. Without interoperability, we could never take our existing documents or spreadsheets or datasets and move to a new program, even if it was better. One program would not be able to read the files created by another. It would be as if language itself had been copyrighted. To have said that the incidental copies created in the process of decompiling software were actually infringements of copyright would have turned the law on its head because of a technological accident (you needed temporarily to “copy” the programs in order to understand how they worked and make yours work with them) and a legal accident (copyright was now being used to regulate functional articles of commerce: “machines” made of binary code). The difference between copying and reading, or copying and understanding, had changed because of the technology. The context had changed because the law was being stretched to cover new types of products, whose economics were very different from those of novels. Rather than let the dominant software companies use copyright to stop others from making interoperable software, the courts used an escape hatch—fair use—to prevent that danger and to uphold the basic goal of copyright: encouraging progress in science and the useful arts.
This long story is told to make a simple point. The variegated and evolving limitations on intellectual property are as important as the rights they constrain, curtail, and define. The holes matter as much as the cheese.
What does this have to do with the Sony case? In that case, remember, the Supreme Court had said that copying TV shows in order to time-shift was fair use. The Court could simply have stopped there. It could have said, “Since most of what consumers do is legal, there can be no claim of contributory or vicarious infringement. Sony is not contributing to infringement since consumers are not infringing copyright by copying shows in the first place.” Interestingly, though this is the heart of the ruling, the Court went further. It quoted some seemingly unrelated patent law doctrine on contributory infringement: “A finding of contributory infringement does not, of course, remove the article from the market altogether; it does, however, give the patentee effective control over the sale of that item. Indeed, a finding of contributory infringement is normally the functional equivalent of holding that the disputed article is within the monopoly granted to the patentee.” Clearly, the Justices were concerned that, by using copyright law, the movie studios could actually get control of a new technology.
The fact that the Court expressed this concern through an analogy to patent law was, at first sight, fairly surprising. Courts do not normally look at copyrights in quite the same way as they look at patents. For one thing, patent rights are stronger, though they are harder to obtain and last for a shorter period of time. For another, while courts often express concern about the dangers of a patent-driven monopoly over a particular technology, it is strange to see that concern in the context of copyright law. An unnecessary monopoly over a plow or a grain elevator may, as Jefferson pointed out, slow technological development. But a monopoly over Snow White or “Ode on a Grecian Urn”? We do not normally think of rights over expression (the realm of copyright) threatening to sweep within their ambit an entire new technological invention (the realm of patent).
But in the Sony case, the Supreme Court quite clearly saw that, in a world where technological developments made copying easier, the idea of contributory infringement in copyright could be used to suppress or control entire technologies that seemed, in the logic of 20/20 downside vision, to pose a threat to the copyright holder. Indeed, in some sense, the logic behind the Internet Threat—“cheaper copying requires greater control”—demands this result, though the Sony case antedates the World Wide Web by a considerable time. If it is cheap copying itself that poses the threat, then the content owners will increasingly move to gain control over the technologies of cheap copying, using copyright as their stalking horse. That is why the Sony Court went beyond the simple ruling on fair use to explain the consequences of the movie companies’ claim. In a footnote (the place where judges often bury their most trenchant asides) the Court was almost snide:
It seems extraordinary to suggest that the Copyright Act confers upon all copyright owners collectively, much less the two respondents in this case, the exclusive right to distribute VTR’s [Video Tape Recorders] simply because they may be used to infringe copyrights. That, however, is the logical implication of their claim. The request for an injunction below indicates that respondents seek, in effect, to declare VTR’s contraband. Their suggestion in this Court that a continuing royalty pursuant to a judicially created compulsory license would be an acceptable remedy merely indicates that respondents, for their part, would be willing to license their claimed monopoly interest in VTR’s to Sony in return for a royalty.23
The real heart of the Sony case is not that “time-shifting” of TV programs is fair use. It is an altogether deeper principle with implications for all of the holes in the intellectual property cheese. The Sony Court declared that because video recorders were capable of substantial noninfringing uses, the manufacturers of those devices were not guilty of contributory infringement. If the rights of copyright holders were absolute, if they had the authority to prohibit any activity that appeared to pose a threat to their current business model, then it is quite possible that the manufacturers of video recorders would have been found liable for contributory infringement. It is because we have, and need, multiple exceptions and limitations on intellectual property that the Supreme Court was able to resist the claim that copyright itself forbids technologies of cheaper copying. To put it another way, without a robust set of exceptions and limitations on copyright, the idea that cheaper copying requires greater control will inexorably drive us toward the position that the technologies of cheaper reproduction must be put under the governance of copyright holders.
Thus we have a corollary to the Jefferson Warning—call it the Sony Axiom: cheaper copying makes the limitations on copyright more rather than less important. Without those limitations, copyright law will bloat and metastasize into a claim of monopoly, or at least control, over the very architectures of our communications technology. And that is exactly where the logic of the Internet Threat is taking us today.
FROM NAPSTER TO GROKSTER
Seventeen years after the Sony decision, another court had to deal with a suit by outraged copyright holders against the creators of a technology that allowed individuals to copy material cheaply and easily. The suit was called A&M Records v. Napster.24 Napster was a “peer-to-peer” file sharing system. The files were not kept on some huge central server. Instead, there was a central directory—think of a telephone directory—which contained a constantly updated list of the addresses of individual computers and the files they contained. Anyone who had the software could query the central registry to find a file’s location and then establish a direct computer-to-computer connection—anywhere in the world—with the person who had the file they desired. This decentralized design meant the system was extremely “robust,” very fast, and of nearly infinite capacity. Using this technology, tens of millions of people around the world were “sharing” music, an activity which record companies quite understandably viewed as simple theft. In fact, it would be hard to think of a situation that illustrated the Internet Threat better. The case ended up in front of the U.S. Court of Appeals for the Ninth Circuit, which hears cases in an area that includes California and thus has decided a lot of copyright cases over the years.
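The hybrid architecture described above can be sketched in a few lines of code. This is an illustrative toy model, not Napster’s actual protocol or software; the class and method names are invented for the example. The key point it captures is that the only centralized piece is the index: the directory maps file names to peer addresses, and the files themselves travel directly between peers.

```python
# Toy sketch of a Napster-style hybrid design (hypothetical, for illustration):
# a central directory indexes who has what; transfers are peer-to-peer.

class CentralDirectory:
    """The only centralized component: an index, never the files themselves."""

    def __init__(self):
        self.index = {}  # file name -> set of peer addresses holding it

    def register(self, peer_addr, files):
        # A peer announces its address and the files it is sharing.
        for name in files:
            self.index.setdefault(name, set()).add(peer_addr)

    def lookup(self, name):
        # Answer a query with peer addresses; the actual transfer then
        # happens directly between the two computers, anywhere in the world.
        return sorted(self.index.get(name, set()))

directory = CentralDirectory()
directory.register("10.0.0.5:6699", ["song_a.mp3", "song_b.mp3"])
directory.register("10.0.0.9:6699", ["song_b.mp3"])

# A searching peer queries the directory, then connects to a listed peer.
peers = directory.lookup("song_b.mp3")
```

Because the directory stores only addresses, the design scales cheaply; but that same central index is what gave Napster the “actual knowledge” of specific files that proved fatal in court.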
There was an irony here. When the Supreme Court decided the Sony case, it was on appeal from the Ninth Circuit Court of Appeals. Sony, with its rule about reproductive technologies with substantial noninfringing uses, reversed the appeals court decision. The Supreme Court was, in effect, telling the Ninth Circuit that it was wrong, that its ruling would have required the “extraordinary” (legal shorthand for “stupid”) conclusion that copyright law gave copyright holders a veto on new technology. In the process, the Supreme Court told the Ninth Circuit that it also did not understand the law of fair use, or the freedom that should be given to individuals to make “noncommercial” private copies. The identities of the judges had changed, but now, seventeen years later, the same Circuit Court had another high-profile case on exactly the same issues. In case any of the judges might have missed this irony, it took David Boies, the lawyer for Napster, about ninety seconds to remind them in his oral argument. “This court,” he said, adding as if in afterthought, “in the decision that the Supreme Court ultimately reversed in Sony. . . .”25 To the laypeople in the audience it probably just seemed like another piece of legal droning. But to the lawyers in the room the message was quite clear. “The last time you got a case about a major new technology of consumer reproduction, you really screwed it up. Hope you can do better this time.” The judges’ mouths quirked—not entirely in pleasure. The point had been registered.
Think for a moment of the dilemma in which the court had been placed. On the one hand, you had tens of millions of people “sharing” music files and Napster was the service that allowed them to do it. If this was not contributory copyright infringement, what was? On the other hand, Napster seemed to fit very nicely under the rule announced in the Sony case.
The argument went like this. Like the VCR, the Napster service had substantial noninfringing uses. It allowed bands to expose their music to the world through the “New Artists” program. It made it easy to share music which was no longer under copyright. These uses clearly do not infringe copyright. There were also the claims that it permitted “space-shifting” by consumers who already owned the music or “sampling” of music by listeners as they decided whether or not to buy. One could argue that space-shifting and sampling were fair use (though in the end the court disagreed). But since we have two clear noninfringing uses, the technology obviously does have substantial uses that do not violate copyright. Thus, Napster cannot be liable as a contributory infringer, just as Sony could not be liable for the Betamax. Supreme Court precedent covers this case. The Ninth Circuit is bound by that precedent. All the judges can do, goes the argument, is to apply the words of the rule laid down in Sony, say that Napster wins, and move on to the next case. If Congress wants to make services like Napster illegal, it is going to have to pass a new law. The boundaries of the Sony rule are clear and Napster fits within them. (Of course, the last point is subject to argument, but the argument for Napster on this issue was a good one. Not overwhelming—there were more noninfringing uses in the Sony case because the normal way consumers used the technology in question was found to be a fair use—but certainly powerful.)
A more daring strategy was to suggest that all the copying done over Napster was fair use, or at least presumptively fair. In Sony, the Supreme Court had said that the law presumes that noncommercial private copying—such as taping a show at home for future viewing—is a fair use. This presumption shifts the burden to the copyright holder to prove that the practice caused harm to them. Copying on Napster was done by private individuals. No money was exchanged. Does this mean we must presume it was fair use and require the music companies and songwriters to show clear evidence of “market harm” if they want to convince us otherwise?
It sounds as though proving market harm would be pretty easy. How could millions of people exchanging hundreds of millions of songs not be causing harm? But it is more complicated. Remember the Jefferson Warning. We are not talking about swiping shoes from a shoe store. There one merely has to show the theft to prove the loss. By contrast, music files are copied without being “taken” from their owner. The record companies would have to show harm to their market—sales lost because people who would otherwise have purchased music instead download it for free. Those who download, but would not have purchased, do not count. And we have to balance those who are deterred from purchasing against those who purchase a whole CD because they are exposed to new music through Napster. One very interesting empirical study on the subject indicates that the result is a wash, with hardly any measurable effect on sales; the overall drop in CD purchases results from larger macroeconomic issues.26 This study, however, has been subject to detailed methodological criticism.27 Another study shows a weak effect on sales, though rather woundingly it seems to suggest that the result is economically efficient—fewer people end up with music they do not like.28 Other studies, by contrast, support the record company position—suggesting that illicit file sharing does indeed undercut sales of both CDs and authorized digital downloads.29 Given the complexities of the issue, the record companies did not want to engage in a war of dueling empirical studies.
So, if Napster’s users were not infringing copyright law in the first place—at least until the record companies came up with convincing evidence of market harm—because their copying was noncommercial, then Napster could hardly be guilty of contributory infringement, could it? There would be no infringement at all!
You could see Mr. Boies’s arguments as simple equations between the cases.
• Noninfringing uses such as recording public domain films and “time-shifting” programs are equivalent to noninfringing uses such as the New Artists program or sharing public domain music (and maybe “space-shifting” one’s own music?); or
• Private noncommercial videotaping is equivalent to private noncommercial file sharing. Both are presumptively fair uses.
• Either way, Sony=Napster and Napster wins.
Napster did not win, of course, though when the judges handed down their decision it was clear they had been paying attention to Mr. Boies, at least enough to make them very wary of tampering with Sony. They claimed that they were upholding that case, but that Napster could be liable anyway. How? Because there was enough evidence here to show that the controllers of Napster had “actual knowledge that specific infringing material is available using its system, that it could block access to the system by suppliers of the infringing material, and that it failed to remove the material.” There was indeed evidence that Napster knew how its system was being used—an embarrassing amount of it, including early memos saying that users will want anonymity because they are trading in “pirated music.” Then there were nasty circumstantial details, like the thousands of infringing songs on the hard drive of one particular Napster employee—the compliance officer tasked with enforcing the Digital Millennium Copyright Act! (The recording company lawyers waxed wonderfully sarcastic about that.)
But despite the ludicrously dirty hands of Napster as a company, lawyers could see that the appeals court was making a lot of new law as it struggled to find a way to uphold Sony while still making Napster liable. The court’s ruling sounded reasonable and clear, something that would only strike at bad actors while paying heed to the Sony Axiom and the assurance of safety that the rule in Sony had provided to technology developers for the previous twenty years. But hard cases make bad law. In order to accomplish this piece of legal legerdemain, the court had to alter or reinterpret the law in ways that are disturbing.
The first thing the court did was to reject the argument that the “sharing” was private and noncommercial. As to the idea that it is not private, fair enough. Sharing one’s music with fifty-four million people does not sound that private, even if it is done for private ends, in private spaces. What about noncommercial? Embracing some earlier rulings on the subject, the court said a use was “commercial” if you got for nothing something for which you would otherwise have to pay. On the surface this sounds both clever and reasonable—a way to differentiate home taping from global file sharing—but the argument quickly begins to unravel. True, the Betamax owners could get TV shows for free just by watching at the regular time. But they could not get a copy of the show for free at the moment they wanted to watch it. That was why they taped. One could even argue that Napster users would have access to most songs over the radio for free. But lawyers’ quibbling about which way the rule cuts in this case is not the point. Instead, we need to focus on the change in the definition of “commercial,” because it illustrates a wider shift.
Remember, a finding that a use is “noncommercial” makes it more likely that a court will find it to be legal—to be a fair use. The old test focused mainly on whether the motive for the copying was to make money. (A different stage of the inquiry concerned whether there was harm to the copyright holder’s market.) The Napster court’s test concentrates on whether the person consuming the copy got something for free. Instead of focusing on the fact that the person making the copy is not making money out of it—think of a professor making electronic copies of articles for his students to download—it focuses on the presumptively dirty hands of those who are “getting something for nothing.” But lots of copyright law is about “getting something for nothing.”
To put it differently, one central goal of copyright is to limit the monopoly given to the copyright owner so that he or she cannot force citizens to pay for every single type of use. The design of the law itself is supposed to facilitate that. When “getting something for free” comes to equal “commercial” in the analysis of fair use, things are dangerously out of balance. Think back to Jefferson’s analogy. If I light my candle at yours, am I getting fire for free, when otherwise I would have had to pay for matches? Does that make it a “commercial” act?
Having dismissed the claim that this was noncommercial sharing, the court then reinterpreted the Sony decision to allow liability when there was “actual knowledge” of specific copyright violations, an ability to block access by infringers, and a failure to do so. Neither side was entirely happy with this ruling, but the record companies believed—rightly—that it would allow them effectively to shut Napster down. Yet the Napster ruling only postponed the issue. The next set of file sharing services to be sued after Napster were even more decentralized peer-to-peer systems; the Napster court’s reinterpretation of Sony would not be able to reach them.
The peer-to-peer file sharing service called Grokster is a relatively typical example. Unlike Napster, Grokster had no central registry. The system was entirely run by the individual “peer” computers. Because the system was designed this way, the people who made and distributed the software had no knowledge of specific infringing files. The users were doing the searching, indexing, and storing, and Grokster had no ability to control their behavior. For those reasons, a court of appeals held that Grokster was not liable. As in Sony, the system had substantial noninfringing uses. Lots of interesting content was traded on Grokster with the copyright holder’s consent. Other material was in the public domain. Grokster made money by streaming advertisements to the users of its software. The movie companies and record companies saw this as a flagrant, for-profit piracy ring. Grokster’s response was that like the makers of the VCR, it was simply providing a technology. Its financial interest was in people using that technology, not in using it for illicit purposes—though, like the VCR manufacturer, it would profit either way. The court of appeals agreed. True, the majority of the material traded on Grokster was illicitly copied, but the court felt that it could not give the recording or movie companies control over a technology simply because it allowed for easier copying, even if most of that copying was illegal. As I tried to point out in the section on the Sony Axiom, that line of thought leads to copyright holders having a veto over technological development.
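The contrast with Napster’s design can be made concrete. The sketch below is a hypothetical toy model, not Grokster’s code: in a fully decentralized network there is no central index at all, so a query simply floods from peer to peer, with each peer answering from its own files and forwarding the query to its neighbors up to a hop limit. Nothing in the system gives its distributor knowledge of, or control over, any specific file.

```python
# Toy sketch of a fully decentralized (Gnutella-style) search, for
# illustration only: no central registry exists, so searches flood
# outward from peer to peer with a time-to-live (TTL) hop limit.

class Peer:
    def __init__(self, addr, files):
        self.addr = addr
        self.files = set(files)
        self.neighbors = []  # other Peer objects this one knows about

    def search(self, name, ttl=3, seen=None):
        # Each peer answers from its own files, then forwards the query.
        seen = seen if seen is not None else set()
        if self.addr in seen or ttl < 0:
            return []
        seen.add(self.addr)
        hits = [self.addr] if name in self.files else []
        for neighbor in self.neighbors:
            hits += neighbor.search(name, ttl - 1, seen)
        return hits

a = Peer("A", ["song_a.mp3"])
b = Peer("B", ["song_b.mp3"])
c = Peer("C", ["song_b.mp3"])
a.neighbors = [b]
b.neighbors = [a, c]
c.neighbors = [b]

# Peer A holds no index of the network; the hits come back from B and C.
results = a.search("song_b.mp3")
```

The searching, indexing, and storing all happen on the users’ machines, which is precisely why the court of appeals found that Grokster lacked the knowledge and control that had doomed Napster.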
It was at this point that the Supreme Court stepped in. In the case of MGM v. Grokster,30 the Supreme Court followed the line of the Napster court, but went even further. The Court created a new type of contributory copyright infringement—while apparently denying it was doing so. Grokster and its fellow services were liable because of three different kinds of evidence that they had “intended” to induce copyright violation. First, they were trying “to satisfy a known demand for copyright infringement.” This could be shown by the way that they advertised themselves as alternatives to the “notorious filesharing service, Napster.” Second, the file sharing services did not try to develop filtering software to identify and eliminate copyrighted content—though this alone would not have been enough to make them liable. Finally, their advertising-supported system clearly profited by high-intensity use, which they knew was driven for the most part by illicit copying. This too would not have been enough by itself, the Court added, but had to be seen in the context of the whole record of the case.
Let me be clear. I wept no tears for Napster, Grokster, and their ilk. I see no high-minded principle vindicated by middle-class kids getting access to music they do not want to pay for. It is difficult to take seriously the sanctimonious preening of those who cast each junior downloader of corporate rock as a Che Guevara, fighting heroically to bring about a new creative landscape in music. (It is almost as hard to take seriously the record industry executives who moralistically denounce the downloading in the name of the poor, suffering artists, when they preside over a system of contracts with those same artists that makes feudal indenture look benign.) The file sharing companies themselves were also pretty unappealing. Many of the services were bloated with adware and spyware. True, some of their software engineers started with a dewy-eyed belief that this was a revolutionary technology that would break the record companies and usher in a new era of musical creativity. Whether one agrees or disagrees with them, it is hard—for me at least—to doubt their sincerity. But even this quality did not last long. For most of the people involved, the words “stock options” worked their normal, morally debilitating magic. In internal company correspondence, attacks on the hypocrisy of the music companies and defenses of a democratic communications structure imperceptibly gave way to discussions of “customer base,” “user experience,” and “saleable demographics.” I care little that Napster and Grokster—as individual companies—lost their specific legal battles. There are few heroes in this story. But if we had to rely on heroes, nothing would ever get done.
I do care about the technology behind Napster and Grokster—about the kind of decentralized system it represents. I also care about the principle I identified as the Sony Axiom—a principle that goes far beyond music, peer-to-peer systems, or the Internet as a whole. The Supreme Court’s decision in Grokster could have been much worse. But it still poses a modest threat both to that technology and to that axiom.
What is so great about peer-to-peer systems? We talk about “cheap speech” on the Internet, but bandwidth is actually expensive. If one is talking about music or video files, and one wishes to speak to many people in a short period of time, one vital way to have cheap speech is over peer-to-peer networks. If many of your viewers or listeners are willing to become broadcasting stations as they watch, you can cheaply reach a million people in a short period of time with your video of abuse in Abu Ghraib or your parody of political leaders. You do not need to rely on a broadcasting station, or even on the continued existence of entities such as YouTube, which face their own legal worries. By making your listeners your distributors, you can quickly reach the same number of ears that the payola-soaked radio waves allow the record companies to reach.
One need not cheer Grokster. Much of what went on there was indeed illicit. But there are two key things to understand about peer-to-peer networks. The first is that they are hard to police. They have multiple nodes. That is why they work. It means they will have both infringing and noninfringing uses, and the noninfringing uses will be centrally connected to our deepest values of free speech and cultural decentralization.
The second feature of peer-to-peer networks is even more basic. They are networks and thus subject to the laws of network economics. In short, they only work well if many people use them. A person who uses a peer-to-peer system that no one else uses is in the position of the person who owns the only fax machine in the world. Peer-to-peer networks provide cheap and unregulable audiovisual or data-heavy “speech” to a mass audience. And if the past is any guide, those networks will also carry large amounts of illicit material, just as photocopying machines (and VCRs) are widely used to violate copyright. The Grokster case makes it harder, but not impossible, to have successful, widely used peer-to-peer systems that are not themselves illicit. If they are widely used, there will be infringing content. If you try to police them and filter them, you will know more about that infringing content and thus might be liable—that was the point of the Napster case. If you do not, you will be failing to take precautions. That was the point of the Grokster case. What is a poor peer-to-peer network to do? Apart from making sure that the last four letters of your service’s name are not “-ster,” I am hard-pressed to advise you.
A decision does not need to make an activity illegal in order to impede it. It only needs to make it uncertain. Already, for example, the free—and so far as I could tell, entirely well-meaning—service “bonpoo,” which allowed you to send large file attachments to many people at once, has shut down all of its capabilities except photo transfer. That is simply one trivial instance of a larger harm. Lots of new communications technologies will remain undeveloped because of the uncertainties left by this ruling.
My colleague Jennifer Jenkins gave one useful hyperbolic illustration, drawing on earlier work by the Electronic Frontier Foundation: if one were launching the iPod today, it is not clear how it would fare under Grokster’s standard. Of course, there is no danger that the iPod will be challenged. It has become respectable and the music companies ended up sanctioning it. But how does it fare if we simply apply the tests laid down in the Grokster case? There is Apple’s “tainted” advertising campaign, urging users to “Rip, Mix, and Burn.” Does this not suggest complicity, or even intent? There is the fact that the iPod does not restrict itself solely to proprietary formats protected by digital rights management. It also allows uncontrolled MP3 files despite the fact that this format is “notoriously” used to transfer files against the wishes of the copyright owner. This, surely, is a “failure to police.” And finally, there is the fact that it would cost about $10,000 to fill an iPod with songs downloaded from iTunes. Clearly Apple must be aware that much of the music that fills iPods is illicitly copied. They are profiting from that fact to drive demand for the product, just as Grokster was profiting from the attractions of illicit traffic to drive people to use their service!
No one is going to sue Apple now, of course. In fact, established players in the marketplace are probably fairly safe (and have better lawyers). But what if a product as good as the iPod were being developed now by some upstart company? What if it were no more and no less likely to be used for infringing purposes? Would the business plan ever see the light of day? Or would it be quietly smothered due to legal uncertainty? I have little sympathy for Grokster the company, but the decision that doomed it is a bad piece of technology policy.
There is a second reason to dislike the Grokster decision. Despite some of the angst-ridden announcements made when the decision was handed down, the Supreme Court has not killed peer-to-peer systems. The concept is far too well entrenched. But the decision will mean that there are fewer of them that are widely used, easy to operate, and made by responsible and reputable people you can trust. This will probably lessen, but not end, illicit copying online. But that effect comes with a price—it makes our communications architecture a little bit more tightly controlled, reducing but not removing the availability of methods of mass distribution that are entirely outside centralized public or private control. It is another—relatively small—step toward an Internet that is more like cable TV or iTunes, a one-way flow of approved content. One might decide that such a price was well worth paying. But where is the limiting principle or end point of the logic that led to it?
There is no provision in U.S. statutory copyright law that imposes liability for contributory or vicarious infringement. None. The patent statute has such a provision; the Copyright Act does not. The courts have simply made the scheme up themselves. Then they made up limitations—such as Sony—in order to rein it in. In Grokster, the Supreme Court went further. It made up a new type of “inducement” liability. Fine. As I have tried to indicate here, the decision is not as dreadful as it is reputed to be. But so long as there is any unregulated space in our communications network, some portion of it will have illicitly copied content on it. The more the system is free of central control, the more it is open to use by any citizen, the cheaper it gets—all very desirable characteristics—the more illicit content there will be. That is the premise of the Internet Threat—the belief that control must rise as copying costs fall. I have tried here to suggest an alternative interpretation, the Sony Axiom: without a strong internal set of limitations over copyright, cheaper copying and the logic of the Internet Threat will always drive us toward giving control over our communications architecture to the content industries.
There was one particularly striking moment in the Napster oral argument. The lawyer for the recording companies was arguing that Napster was illegal. The judges interrupted, as they often do, and there was a back-and-forth debate about the likely reach of any ruling that would shut down Napster. “I am not trying to say the Internet is illegal,” said the lawyer. There was a pause as everyone weighed those words carefully.
My response would be “Really? Why not?” The logic of the Internet Threat leads to the position that a network is either controlled or illegal. The better and cheaper the network, the tighter the control needed. The Internet itself could have been designed differently. There could have been more centralized control, filtering of content, a design based on one-way transmission, closed protocols that allow users only a limited number of options. Indeed there were such systems—the national French Minitel system is an example. The Internet represents the opposite set of choices—freedom from centralized control, absence of intervention. In a famous article, Saltzer, Reed, and Clark provided the argument that an “end-to-end” network that is “dumb” and leaves processing to the “ends”—the smart terminals at either end of the wires—will be stable and robust.31 But it will also be remarkably uncontrolled and it will lower global copying costs close to zero for digital content. It is that principle that has made it successful. To put it tersely: the logic of the Internet Threat runs in exactly the opposite direction to the Internet itself. The logic of control is not the logic of the Net.
Here is one last thought experiment. Apply the same test I suggested for the iPod to the Internet itself.32 Imagine you knew nothing of the Net. (Those of you who are over twenty-five may actually be able to remember when you knew nothing of the Net.) Imagine that you are sitting in a room somewhere discussing—perhaps with a group of government bureaucrats or some policy analysts from the Commerce Department—whether to develop this particular network. The scientists are enthusiastic. They talk of robustness and dumb networks with smart terminals. They talk of TCP/IP and HTML and decentralized systems that run on open protocols, so that anyone can connect to this network and use it any way they want to. You, of course, know nothing about the truly astounding outburst of creativity and communication that would actually flower on such a system, that would flower precisely because it is so open and no one country or company controls it or the protocols that run it. You do not know that millions of people worldwide will assemble the greatest factual reference work the world has ever seen on this network—often providing their information for free out of some bizarre love of sharing. You do not know about Amazon.com or Hotornot.com or the newspapers of the world online, or search engines, automatic page translation, plug-ins, or browsers. You cannot imagine free or open-source software being assembled by thousands of programmers worldwide. E-mail is only a dimly understood phenomenon to you. Teenagers in your world have never heard of instant messaging—a nostalgic thought.
As the scientists talk, it becomes clear that they are describing a system without centralized direction or policing. Imagine that your decision is framed by the logic of control I have described in this chapter, by the fears that the content industry has had for at least the last thirty years—by the logic of the suit they brought in Sony. Imagine, in other words, that we make the up-or-down decision to develop the Internet based on the values and fears that our copyright policy now exhibits, and that the content industries have exhibited for thirty years. There is no way, no way at all, that a network like it would ever be developed. It would be strangled at birth. You would be told by the lawyers and policy wonks that it would be a haven for piracy and illegality. (And it would be, of course—though it would also be much, much more.) You would be told that the system needed to be designed to be safe for commerce or it would never attract investment, that it would need to be controlled and centralized for it to be reliable, that it would need to be monitored to stop it from becoming a hotbed of crime. With the copyright lawyers in the room, you would end up designing something that looked like cable TV or Minitel. The Internet would never get off the ground.
The Internet is safe now, of course, because it developed so fast that it was a reality before people had time to be afraid of it. But it should give us pause that if we had our current guiding set of policy goals in place, our assumption that cheaper copying means we need greater regulation, we would never have allowed it to flourish. As Jessica Litman points out, we are increasingly making our decisions about technology and communications policy inside copyright law. We are doing so according to the logic of control that I have sketched out in this chapter. But the logic of control is a partial logic. It blinds us to certain possibilities, ones that have huge and proven potential—look at the Internet.
The law has not been entirely one-sided, however. The Sony case drew a line in the sand that promised to halt the seemingly inevitable drift toward greater and greater control over communications technology by content owners. It turned out the heavens did not fall. Indeed, the content companies thrived. Perhaps that line was drawn in the wrong place; reasonable people can disagree about that. But Grokster smudges the line without drawing a clear new one. If that new line is drawn according to the logic of control, what technologies will we never see? Could they be technologies that would transform our lives as unimaginably as the Internet has since 1995?
I have described the story line—the cluster of metaphors and images and concerns—that pervades our copyright policy. I labeled it “the Internet Threat.” In the next chapter, I discuss an alternative story line, a different way of understanding our current policies. The subject of that story line is the best-known example of contemporary attempts to control the digital world, the Digital Millennium Copyright Act or DMCA.
Posted by admin on October 7, 2008