Even if the generative Internet is preserved, those who stand to lose from the behaviors taking place over it will maintain pressure for change. Threats to the technical stability of the newly expanded network are not the only factors at work shaping the digital future. At the time of the founding of the Internet and the release of the PC, little attention was given to whether a system that allows bits to move freely would be an instrument of contributory copyright infringement, or whether it was necessary to build in mechanisms of government surveillance for the new medium. Now that the PC and Internet are in the mainstream, having trumped proprietary systems that would have been much tamer, there remain strong regulatory pressures. This chapter considers how the law ought to be shaped if one wants to reconcile generative experimentation with other policy goals beyond continued technical stability. For those who think that good code and content can come from amateur sources, there are some important ways for the law to help facilitate generativity—or at least not hurt it. And for those whose legitimate interests have been threatened or harmed by applications of the generative Internet, we can look for ways to give them some redress without eliminating that generative character.

The ideas here fall into several broad categories. First, we ought to take steps to make the tethered appliances and software-as-service described in Chapter Five more palatable, since they are here to stay, even if the PC and Internet are saved. Second, we can help ensure a balance between generative and non-generative segments of the IT ecosystem. Third, we can make generative systems less threatening to legally protected interests.

PROTECTIONS FOR A WORLD OF TETHERED APPLIANCES AND WEB 2.0


Maintaining Data Portability

A move to tethered appliances and Web services means that more and more of our experiences in the information space will be contingent. A service or product we use at one moment could act completely differently the next, since it can be so quickly reprogrammed without our assent. Each time we power up a mobile phone, video game console, or BlackBerry, it might have gained some features and lost others. Each time we visit a Web site offering an ongoing service like e-mail access or photo storage, the same is true. People are notoriously poor at planning ahead, and their decisions about whether to start hosting all the family’s photos on one site or another may not take into account the prospect that the function and format of the site can change at any time.

Older models of software production are less problematic. Because traditional software has clearly demarcated updates, users can stick with an older version if they do not like the tradeoffs of a newer one. These applications usually feature file formats that are readable by other applications, so that data from one program can be used in another: WordPerfect users, for example, can switch to Microsoft Word and back again. The pull of interoperability compelled most software developers to allow data to be exportable in common formats, and if one particular piece of software were to reach market dominance— and thereby no longer need to be as interoperable—existing versions of that software would not retroactively lose that capability.

If the security issues on generative platforms are mitigated, it is likely that technology vendors can find value with both generative and non-generative business models. For example, it may be beneficial for a technology maker to sell below-cost hardware and to make up much of the loss by collecting licensing fees from any third-party contributions that build on that hardware. This is the business model for many video game console makers.1 This business model offers cheap hardware to consumers while creating less generative systems. So long as generative consoles can compete with non-generative ones, it would seem that the market can sort out this tradeoff—at least if people can easily switch from one platform to another. Maintaining the prospect that users can switch ensures that changes to wildly popular platforms and services are made according to the interests of their users. There has been ongoing debate about just how much of a problem lock-in can be with a technology.2 The tradeoff of, say, a long-term mobile phone contract in exchange for a heavy discount on a new handset is one that the consumer at least knows up front. Much less understood are the limits on extracting the information consumers deposit into a non-generative platform. Competition can be stymied when people find themselves compelled to retain one platform only because their data is trapped there.

As various services and applications become more self-contained within particular devices, there is a minor intervention the law could make to avoid undue lock-in. Online consumer protection law has included attention to privacy policies. A Web site without a privacy policy, or one that does not live up to whatever policy it posts, is open to charges of unfair or deceptive trade practices.3 Makers of tethered appliances and Web sites keeping customer data similarly ought to be asked to offer portability policies. These policies would declare whether they will allow users to extract their own data should they wish to move their activities from one appliance or Web site to another.

In some cases, the law could create a right of data portability, in addition to merely insisting on a clear statement of a site’s policies. Traditional software as product nearly always keeps its data files stored on the user’s PC in formats that third parties can access.4 Software as product therefore allows for the routine portability of data, including data that could be precious: one’s trove of e-mail, or the only copy of family photos from a cherished vacation. Imagine cameras that effectively made those photos property of Kodak, usable only in certain ways that the company dictated from one moment to the next. These cameras likely would not sell so long as there were free alternatives and people knew the limitations up front. Yet as with those hypothetical cameras, when one uses tethered appliances the limitations are neither advertised nor known, and they may not at first even be on the minds of the service providers themselves. They are latent in the design of the service, able to be activated at any moment according to the shifting winds of a dot-com’s business model and strategy. The law should provide some generous but firm borders.5 The binding promise that Wikipedia’s content can be copied, modified, and put up elsewhere by anyone else at any time—expressly permitted by Wikipedia’s content license6—is a backstop against any abuse that might arise from Wikipedia’s operators, mitigating the dangers that Wikipedia is a service rather than a product and that the plug at wikipedia.org can be pulled or the editors shut out at any time.

As we enter an era in which a photograph moves ephemerally from a camera’s shutter click straight to the photographer’s account at a proprietary storage Web site with no stop in between, it will be helpful to ensure that the photos taken can be returned fully to the custody of the photographer. Portability of data is a generative insurance policy to apply to individual data wherever it might be stored. A requirement to ensure portability need not be onerous. It could apply only to uniquely provided personal data such as photos and documents, and mandate only that such data ought to readily be extractable by the user in some standardized form. Maintaining data portability will help people pass back and forth between the generative and the non-generative, and, by permitting third-party backup, it will also help prevent a situation in which a non-generative service suddenly goes offline, with no recourse for those who have used the service to store their data.7
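To make the portability mandate concrete, a minimal sketch of a standardized, user-initiated export might look like the following. The service, record fields, and archive layout are all invented for illustration; the point is only that a machine-readable manifest plus the raw files is enough for any competing service to re-import the data.

```python
import io
import json
import zipfile

# Hypothetical records a photo-hosting service holds for one user.
USER_PHOTOS = [
    {"filename": "beach.jpg", "taken": "2006-07-04", "caption": "Family vacation"},
    {"filename": "cabin.jpg", "taken": "2006-07-05", "caption": "The rented cabin"},
]

def export_user_data(photos, image_bytes):
    """Bundle a user's photos and metadata into a zip with a JSON manifest."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w") as archive:
        # A machine-readable manifest lets any other service re-import the data.
        archive.writestr("manifest.json", json.dumps(photos, indent=2))
        for photo in photos:
            archive.writestr("photos/" + photo["filename"],
                             image_bytes[photo["filename"]])
    return buffer.getvalue()

# Stand-in image data; a real service would read the stored files.
images = {"beach.jpg": b"...jpeg bytes...", "cabin.jpg": b"...jpeg bytes..."}
archive_bytes = export_user_data(USER_PHOTOS, images)
```

Nothing here is onerous for a provider: the export touches only data the user supplied, and the format imposes no constraint on how the service stores or presents the data internally.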

Network Neutrality and Generativity

Those who provide content and services over the Internet have generally lined up in favor of “network neutrality,” by which faraway ISPs would not be permitted to come between external content or service providers and their customers. The debate is nuanced and far ranging.8 Proponents of various forms of network neutrality invoke the Internet’s tradition of openness as prescriptive: they point out that ISPs usually route packets without regard for what they contain or where they are from, and they say that this should continue in order to allow maximum access by outsiders to an ISP’s customers and vice versa. Reliable data is surprisingly sparse, but advocates make a good case that the level of competition for broadband provision is low: there are few alternatives for high-speed broadband at many locations at the moment, and they often entail long-term consumer contracts. Such conditions make it difficult for market competition to prevent undesirable behavior such as ISPs’ favoring access to their own content or services, and even some measure of competition in the broadband market does not remove a provider’s incentives to discriminate.9 For example, an ISP might block Skype in order to compel the ISP’s users to subscribe to its own Internet telephony offering.10 Likewise, some argue that independent application and content providers might innovate less out of fear of discriminatory behavior. While proponents of net neutrality are primarily concerned about restrictions on application and content, their arguments also suggest that general restrictions on technical ways of using the network should be disfavored.

Skeptics maintain that competition has taken root for broadband, and they claim that any form of regulatory constraint on ISPs—including enforcing some concept of neutrality—risks limiting the ways in which the Internet can continue to evolve.11 For example, market conditions might bring about a situation in which an ISP could charge Google for access to that ISP’s customers: without payment from Google, those customers would not be allowed to get to Google. If Google elected to pay—a big “if,” of course—then some of Google’s profits would go to subsidizing Internet access. Indeed, one could imagine ISPs then offering free Internet access to their customers, with that access paid for by content providers like Google that want to reach those customers. Of course, there is no guarantee that extra profits from such an arrangement would be passed along to the subscribers, but, in the standard model of competition, that is exactly what would happen: surplus goes to the consumer. Even if this regime hampered some innovation by increasing costs for application providers, this effect might—and this is speculative—be outweighed by increased innovation resulting from increased broadband penetration.

Similarly, a situation whereby consumers share their Internet connections with their neighbors may be salutary for digital access goals. When wireless access points, which allowed people to share a single physical Internet connection throughout their houses the way cordless telephones could be added to their telephone jacks, first came to market, most ISPs’ contracts forbade them.12 Some vendors marketed products for ISPs to ferret out such access points when they were in use.13 However, the ISPs ended up taking a wait-and-see approach at variance with the unambiguous limits of their contracts, and wi-fi access points became tolerated uses of the sort described in Chapter Five. Eventually the flagstones were laid for paths where people were walking rather than the other way around, and nearly every ISP now permits sharing within a household.

However, most access points are also automatically built to share the Internet connection with anyone in range,14 friend or stranger, primarily to reduce the complexity of installation and corresponding calls to the access point makers’ customer service and return lines. This laziness by access point makers has made it a commonplace for users to be able to find “free” wi-fi to glom onto in densely populated areas, often without the knowledge of the subscribers who installed the wireless access points. Again ISPs have dithered about how to respond. Services like FON now provide a simple box that people can hook up to their broadband Internet connection, allowing other FON users to share the connection for free when within range, and nonmembers to pay for access to the FON network.15 Those acquiring FON boxes can in turn use others’ FON connections when they are on the road. Alternatively, FON users can elect to have their FON box request a small payment from strangers who want to share the connection, and the payment is split between FON and the box owner.

Most ISPs have not decided whether such uses are a threat and therefore have not taken action against them, even as the ISPs still have contractual terms that forbid such unauthorized use,16 and some have lobbied for theft-of-service laws that would appear to criminalize both sharing Internet connections and accepting invitations to share.17 For ISPs, customers’ ability to share their service could increase the demand for it since customers could themselves profit from use by strangers. But this would also increase the amount of bandwidth used on the ill-measured “all you can eat” access plans that currently depend on far less than constant usage to break even.18 Some advocates have tried to steer a middle course by advocating a “truth in advertising” approach to network neutrality: broadband providers can shape their services however they want, so long as they do not call it “Internet” if it does not meet some definition of neutrality.19 This seems a toothless remedy if one believes there is a problem, for network providers inclined to shape their services could simply call it “broadband” instead.

The procrastination principle has left these issues open, and so far generativity is alive and well at the network level. So what can generativity contribute to this debate? One lesson is that the endpoints matter at least as much as the network. If network providers try to be more constraining about what traffic they allow on their networks, software can and will be written to evade such restrictions— so long as generative PCs remain common on which to install that software. We see exactly this trend in network environments whose users are not the network’s paying customers. When employers, libraries, or schools provide network access and attempt to limit its uses, clever PC software can generally get around the limitations so long as general Web surfing is permitted, using exactly the tools available to someone in China or Saudi Arabia who wants to circumvent national filtering.20 Even in some of the worst cases of network traffic shaping by ISPs, the generative PC provides a workaround. Just as Skype is designed to get around the unintended blockages put in place by some home network routers,21 it would not be a far leap for Linksys or FON to produce home boxes designed expressly to get around unwanted violations of network neutrality. (Of course, such workarounds would be less effective if the network provider merely slowed down all traffic that was not expressly favored or authorized.) The harshest response by ISPs to this—to ban such boxes and then to try to find and punish those disobeying the ban—represents expensive and therefore undesirable territory for them. One answer, then, to the question of network neutrality is that wide-open competition is good and can help address the primary worries of network neutrality proponents. In the absence of broad competition some intervention could be helpful, but in a world of open PCs some users can more or less help themselves, routing around some blockages that seek to prevent them from doing what they want to do online.

From Network Neutrality to API Neutrality

The debate on network neutrality, when viewed through a generative lens, suggests a parallel debate that is not taking place at all. That debate centers on the lack of any pretense of neutrality for tethered appliances and the services offered through them. Reasonable people disagree on the value of defining and mandating network neutrality. If there is a present worldwide threat to neutrality in the movement of bits, it comes not from restrictions on traditional Internet access that can be evaded using generative PCs, but from enhancements to traditional and emerging appliancized services that are not open to third-party tinkering. For example, those with cable or satellite television have their TV experiences mediated through a set-top box provided or specified by the cable or satellite company. The box referees which standard and premium channels have been paid for, what pay-per-view content should be shown, and what other features are offered through the service.

The cable television experience is a walled garden. Should a cable or satellite company choose to offer a new feature in the lineup called the “Internet channel,” it could decide which Web sites to allow and which to prohibit. It could offer a channel that remains permanently tuned to one Web site, or a channel that could be steered among a preselected set of sites, or a channel that can be tuned to any Internet destination the subscriber enters so long as it is not on a blacklist maintained by the cable or satellite provider. Indeed, some video game consoles are configured for broader Internet access in this manner.22 Puzzlingly, parties to the network neutrality debate have yet to weigh in on this phenomenon.

The closest we have seen to mandated network neutrality in the appliancized space is in pre-Internet cable television and post-Internet mobile telephony. Long before the mainstreaming of the Internet, the Cable Television Consumer Protection and Competition Act of 1992 allowed local broadcast television stations to demand that cable TV companies carry their signal, and established a limited regime of open-access cable channels.23 This was understandably far from a free-for-all of actual “signal neutrality” because the number of channels a cable service could transmit was understood to be limited.24 The must-carry policies—born out of political pressure by broadcasters and justified as a way of eliminating some bottleneck control by cable operators—have had little discernible effect on the future of cable television, except perhaps to a handful of home shopping and religious broadcasting stations that possess broadcast licenses but are of little interest to large television viewerships.25 Because cable systems of 1992 had comparatively little bandwidth, and because the systems were designed almost solely to transmit television and nothing else, the Act had little impact on the parched generative landscape for cable.

Mobile telephony, often featuring a tight relationship between service providers and endpoint devices used by their subscribers, has also drawn calls for mandated neutrality. Recall the Carterfone case from Chapter Two, which compelled AT&T to open the endpoints of its monopoly telephone network—that is, telephones—to third-party hardware providers.26 Tim Wu has called for a Carterfone rule for mobile phone service providers, allowing consumers to select whatever handset they want to work on the network, and Skype has petitioned the FCC for such a rule—at just the time that, like the old AT&T, Steve Jobs insists that the iPhone must be tethered to Apple and forced to use AT&T as its network provider “to protect carrier networks and to make sure the phone was not damaged.”27 The analogy between AT&T and telephones on the one hand and mobile phone providers and handsets on the other is strong, and it works because there is already an understood divide between network and device in both cases. But because a cable or satellite TV company’s regular service is intertwined with a content offering—the channels—and a specialized appliance to recognize proprietary transmission encryption schemes—the set-top box—it has been significantly harder to implement the spirit of Carterfone for cable television.28 A model that begins as sterile is much harder to open meaningfully to third-party contribution than one that is generative from the start.

We see a parallel discrepancy of attitudes between PCs and their counterpart information appliances. Microsoft was found to possess a monopoly in the market for PC operating systems.29 Indeed, it was found to be abusing that monopoly to favor its own applications—such as its Internet Explorer browser—over third-party software, against the wishes of PC makers who wanted to sell their hardware with Windows preinstalled but adjusted to suit the makers’ tastes.30 Because Windows allowed third-party contribution from the start—an ability to run outside software—Microsoft was forced, after achieving market dominance, to meet ongoing requirements to maintain a level playing field between third-party software and its own.31

Yet we have not seen the same requirements arising for appliances that do not allow, or that strictly control, the ability of third parties to contribute from the start. So long as the market-leading video game console maker never opens the door to generative third-party code, it is hard to see how the firm could be found to be violating the law. A manufacturer is entitled to make an appliance, and to try to bolt down its inner workings so that they cannot be modified by others.32

So when should we consider network neutrality-style mandates for appliancized systems? The answer lies in that subset of appliancized systems that seeks to gain the benefits of third-party contribution while reserving the right to exclude it later. Those in favor of network neutrality suggest, often implicitly, how foundational the Internet is for the services offered over it.33 If downstream services cannot rely on the networks they use to provide roughly equal treatment of their bits, the playing field for Internet activities can shift drastically. If the AT&T telephone network had been permitted to treat data calls differently from voice calls—and to change originally generous policies in a heartbeat—the foundation to link consumer telecommunications with the existing Internet might have collapsed, or at least have been constrained to only those business models provable from the start and thus ripe for partnerships with AT&T. Network neutrality advocates might explain their lack of concern for nonneutral treatment of bits over cable television by pointing out that cable television never purported to offer a platform for downstream third-party development— and indeed has never served that purpose. It is the bait and switch that ought to be regulated.

The common law recognizes vested expectations in other areas. For example, the law of adverse possession dictates that people who openly occupy another’s private property without the owner’s explicit objection (or, for that matter, permission) can, after a lengthy period of time, come to legitimately acquire it.34 More commonly, property law can find prescriptive easements—rights-of-way across territory that develop by force of habit—if the owner of the territory fails to object in a timely fashion as people go back and forth across it.35 The law of promissory estoppel identifies times when one person’s behavior can give rise to an obligation to another without a contract or other agreement between them; acting in a way that might cause someone else to reasonably rely on those actions can create a “quasi-contract.”36 These doctrines point to a deeply held norm that certain consistent behaviors can give rise to obligations, sometimes despite fine print that tries to prevent those obligations from coming about.37 Recall Bill Gates’s insistence that the Xbox video game console is not just for games: “It is a general purpose computer. . . . [W]e wouldn’t have done it if it was just a gaming device. We wouldn’t have gotten into the category at all. It was about strategically being in the living room.”38 Network neutrality’s spirit applied to the box would say: if Microsoft wants to make the Xbox a general-purpose device but still not open to third-party improvement, no regulation should prevent it. But if Microsoft does so by welcoming third-party contribution, it should not later be able to impose barriers to outside software continuing to work. Such behavior is a bait and switch that is not easy for the market to anticipate and that stands to allow a platform maker to harness generativity to reach a certain plateau, dominate the market, and then make the result proprietary—exactly what the Microsoft antitrust case rightly was brought to prevent.

The principles and factual assumptions that animate network neutrality— that the network has been operated in a particular socially beneficial way and that, especially in the absence of effective competition, it should stay that way—can also apply to Internet services that solicit mash-ups from third-party programmers described in Chapter Five, like Google Maps or Facebook, while makers of pure tethered appliances such as TiVo may do as they please. Those who offer open APIs on the Net in an attempt to harness the generative cycle ought to remain application-neutral after their efforts have succeeded, so all those who have built on top of their interfaces can continue to do so on equal terms. If Microsoft retroactively changed Windows to prevent WordPerfect or Firefox from running, it would answer under the antitrust laws and perhaps also in tort for intentional interference with the relationship between the independent software makers and their consumers.39 Similarly, providers of open APIs to their services can be required to commit to neutral offerings of them, at least when they have reached a position of market dominance for that particular service. Skeptics may object that these relations can be governed by market forces, and if an open API is advertised as contingent, then those who build on it are on notice and can choose to ignore the invitation if they do not like the prospect that it can be withdrawn at any moment. The claim and counterclaim follow the essential pattern of the network neutrality debate. Just as our notions of network security ought to include the endpoints as well as the middle of the network—with a generative principle to determine whether and when it makes sense to violate the end-to-end principle—our far-ranging debates on network neutrality ought to be applied to the new platforms of Web services that in turn depend on Internet connectivity to function. At least Internet connectivity is roughly commoditized; one can move from one provider to another so long as there is sufficient competition, or—in an extreme case—one can even move to a new physical location to have better options for Internet access. With open APIs for Web services there is much less portability; services built for one input stream—such as for Google Maps—cannot easily be repurposed to another, and it may ultimately make sense to have only a handful of frequently updated mapping data providers for the world, at least as much as it can make sense only to invest in a handful of expensive physical network conduits to a particular geographic location.
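The repurposing problem can be made concrete with a small sketch: a mash-up written against one provider's calling conventions will not run against another's without an adapter, and every such adapter is work the mash-up author must redo. All of the class and method names below are invented for illustration; neither interface corresponds to a real mapping API.

```python
# Two hypothetical mapping providers with incompatible interfaces.
class AlphaMaps:
    def geocode(self, address):
        # Returns a (latitude, longitude) tuple.
        return (42.37, -71.11)

class BetaMaps:
    def lookup(self, query):
        # Different method name, and a dict instead of a tuple.
        return {"lat": 42.37, "lon": -71.11}

class BetaAdapter:
    """Wraps BetaMaps so it presents the interface AlphaMaps clients expect."""
    def __init__(self, provider):
        self.provider = provider

    def geocode(self, address):
        result = self.provider.lookup(address)
        return (result["lat"], result["lon"])

def plot_house_prices(maps, address):
    # A mash-up written against the AlphaMaps calling convention.
    lat, lon = maps.geocode(address)
    return f"pin at {lat},{lon}"
```

The mash-up runs unchanged against `AlphaMaps` or against `BetaAdapter(BetaMaps())`, but only because someone wrote the adapter; multiply that effort across every method a real mapping service exposes and the switching cost becomes clear.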

Maintaining Privacy as Software Becomes Service

As Chapter Five explained, the use of our PCs is shrinking to that of mere workstations, with private data stored remotely in the hands of third parties. This section elaborates on that idea, showing that there is little reason to think that people have—or ought to have—any less of a reasonable expectation of privacy for e-mail stored on their behalf by Google and Microsoft than they would have if it were stored locally on their PCs after being downloaded and deleted from their e-mail service providers.

The latest version of Google Desktop is a PC application that offers a “search across computers” feature. It is advertised as allowing users with multiple computers to use one computer to find documents that are stored on another.40 The application accomplishes this by sending an index of the contents of users’ documents to Google itself.41 While networking one’s own private computers would not appear to functionally change expectations of privacy in their contents, the placement or storage of the data in others’ hands does not hew well to the doctrinal boundaries of privacy protection under the U.S. Constitution. These boundaries treat the things one has held onto more gingerly than things entrusted to others. For example, in SEC v. Jerry T. O’Brien, Inc.,42 the Supreme Court explained: “It is established that, when a person communicates information to a third party even on the understanding that the communication is confidential, he cannot object if the third party conveys that information or records thereof to law enforcement authorities. . . . These rulings disable respondents from arguing that notice of subpoenas issued to third parties is necessary to allow a target to prevent an unconstitutional search or seizure of his papers.”43

The movement of data from the PC means that warrants served upon personal computers and their hard drives will yield less and less information as the data migrates onto the Web, driving law enforcement to the networked third parties now hosting that information. When our diaries, e-mail, and documents are no longer stored at home but instead are business records held by a dot-com, nearly all formerly transient communication ends up permanently and accessibly stored in the hands of third parties, and subject to comparatively weak statutory and constitutional protections against surveillance.44 A warrant is generally required for the government to access data on one’s own PC, and warrants require law enforcement to show probable cause that evidence of a crime will be yielded by the search.45 In other words, the government must surmount a higher hurdle to search one’s PC than to eavesdrop on one’s data communications, and it has the fewest barriers when obtaining data stored elsewhere.46 Entrusting information to third parties changes the ease of surveillance because those third parties are often willing to give it up, and typically the first party is not even aware the transfer has occurred. Online data repositories of all stripes typically state in their terms of use that they may disclose any information upon the request of the government—at least after receiving assurances by the requesting party that the information is sought to enhance the public safety.47 In the United States, should a custodian deny a mere request for cooperation, the records might further be sought under the Stored Communications Act, which does not erect substantial barriers to government access.48

The holders of private records also may be compelled to release them through any of a series of expanded information-gathering tools enacted by Congress in the wake of September 11. For example, a third party that stores networked, sensitive personal data could be sent a secretly obtained PATRIOT Act section 215 order, directing the production of “any tangible things (including books, records, papers, documents, and other items) for an investigation . . . to protect against international terrorism or clandestine intelligence activities.”49 The party upon whom a section 215 order is served can neither disclose nor appeal the order.50 Moreover, since the party searched—whether a library, accountant, or ISP—is not itself the target of interest, the targeted individual will not readily know that the search is occurring. Probable cause is not required for the search to be ordered, and indeed the target of interest may be presumed innocent but still monitored so long as the target is still generating records of interest to the government in an international terrorism or counterintelligence investigation. Roughly 1,700 applications to the secret Foreign Intelligence Surveillance Act (FISA) court were lodged in each of 2003 and 2004 seeking records of some kind. Only four were rejected each year. In 2005, 2,074 applications were made, with 2 rejections, and in 2006, 2,181 were made, with 5 rejections.51

Any custodians might also be served a national security letter concerning the production of so-called envelope information. These letters are written and executed without judicial oversight, and those who receive such letters can be prohibited by law from telling anyone that they received them.52 National security letters may be used to solicit information held by particular kinds of private parties, including the records of telephone companies, financial institutions (now including such entities as pawnshops and travel agencies), as well as ISPs.53 For ISPs, the sorts of information that can be sought this way are “subscriber information and toll billing records information, or electronic communication transactional records.”54 This envelope information is not thought to extend to the contents of e-mail but includes such things as the “to” and “from” fields of e-mail—or perhaps even the search engine queries made by a subscriber, since such queries are usually embedded in the URLs visited by that subscriber.
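
The point that search queries ride along in "envelope" data can be seen concretely: a search engine ordinarily embeds the query in the URL the subscriber visits, so a bare log of URLs, with no message contents at all, still reveals what was searched for. A minimal sketch in Python (the logged URL and search domain here are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical entry from an ISP's transactional log of URLs a
# subscriber visited. No "content" is recorded, yet the search query
# is recoverable because it is embedded in the URL's query string.
logged_url = "https://search.example.com/search?q=how+to+request+court+records&hl=en"

params = parse_qs(urlparse(logged_url).query)
print(params["q"][0])  # -> how to request court records
```

Whether such a URL counts as mere addressing information or as content remains a live legal question precisely because, as the sketch shows, the two are not cleanly separable.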

If the government has questions about the identity of a user of a particular Internet Protocol address, a national security letter could be used to match that address to a subscriber name. Under section 505 of the PATRIOT Act, national security letters do not need to meet the probable cause standard associated with a traditional warrant: the FBI merely needs to assert to the private recipients of such letters that the records are sought in connection with an investigation into international terrorism.55 Government officials have indicated that more than thirty thousand national security letters are issued per year.56 A recent internal FBI audit of 10 percent of the national security letters obtained since 2002 discovered more than a thousand potential violations of surveillance laws and agency rules.57

Recipients of FISA orders or national security letters may press challenges to be permitted to disclose to the public that they have received such mandates—just as an anonymous car manufacturer sued to prevent its onboard navigation system from being used to eavesdrop on the car’s occupants58—but there is no assurance that they will do so. Indeed, many may choose to remain silent about cooperating with the government under these circumstances, thereby keeping each of these searches secret from the target.

As we move our most comprehensive and intimate details online—yet intend them to be there only for our own use—it is important to export the values of privacy against government intrusion along with them. For remotely stored data, this suggests limiting holdings like that of SEC v. Jerry T. O’Brien, Inc. to financial records held by brokers similar to the ones in that case, rather than extending the relaxation of Fourth Amendment protections to all cases of third-party custody of personal information. The balance of accessibility for financial transactions need not be the same as that for our most personal communications and data. This is a reasonable limit to draw when the physical borders of one’s home no longer correlate well with the digital borders of one’s private life. Indeed, it is simply extending the protections we already enjoy to fit a new technological configuration. That is the spirit of Chapman v. United States,59 in which a police search of a rented house for a whiskey still was found to be a violation of the Fourth Amendment rights of the tenant, despite the fact that the landlord had consented to the search.60 The Court properly refused to find that the right against intrusion was held only by the absentee owner of the place intruded upon—rather, it was held by the person who actually lived and kept his effects there. Similarly, the data we store for ourselves in servers that others own ought to be thought of as our own papers and effects in which we have a right to be secure.

There is some suggestion that the courts may be starting to move in this direction. In the 2007 case Warshak v. United States, the U.S. Court of Appeals for the Sixth Circuit held that the government’s warrantless attempt to seize e-mail records through an ISP without notice to the account holder violated Fourth Amendment privacy rights.61 At the time of writing, the ruling stands, though it faces further review.62

The ability to store nearly all one’s data remotely is an important and helpful technological advance, all the more so because it can still be made to appear to the user as if the data were sitting on his or her own personal computer. But this suggests that the happenstance of where data are actually stored should not alone control the constitutional assessment of which standard the government must meet.

BALANCING GENERATIVE AND NON-GENERATIVE SYSTEMS

Code Thickets

A number of scholars have written about the drawbacks to proprietary rights thickets: overlapping claims to intellectual property can make it difficult for those creating new but not completely original creative works to avoid infringing others’ rights. This is a particular problem for code developed with the tools of group generativity. For the past twenty years, the modern landscape of information technology has accommodated competing spheres of software production. These spheres can be grouped roughly around two poles warring for dominance in the field. On one side is proprietary software, which typically provides cash-and-carry functionality for the user. Its source code “recipe” is nearly always hidden from view as a technical matter, and as a legal matter it cannot be used by independent programmers to develop new software without the rarely given permission of its unitary rights holder. On the other side is free software, referring not to the price paid for a copy, but to the fact that the source code of the software is open to public view and modification.

It is not easy for the law to maintain neutrality in the conflict between the two spheres, evenhandedly encouraging development in both models. For example, the free software movement has produced some great works, but under prevailing copyright law even a slight bit of poison, in the form of code from a proprietary source, could amount to legal liability for anyone who copies or potentially even uses the software. (Running software entails making at least a temporary copy of it.)

The collaborative nature of free software development makes it harder to determine where various contributions are coming from and whether contributions belong to those who purport to donate them. Indeed, in the case of an employee of a software company charitably moonlighting for a free software project, the employee’s work may not even be the employee’s to give. A barely remembered but still enforceable employment agreement may commit all software written by the employee to the employer’s possession, which would set the stage for an infringement claim against those within the free software project for making use of the employee’s contributions.

Major free software projects try to avoid these problems by soliciting declarations from participants that they are only contributing code that they wrote or to which they have free license. The Free Software Foundation even suggests that employees obtain a disclaimer from their employers of all interest in employee contributions.63 But the danger of poisoned code remains, just as it is possible for someone to contribute copyrighted material to Wikipedia or a Geocities home page at any moment. The kind of law that shields Wikipedia and Geocities from liability for material contributed by outsiders, as long as the organization acts expeditiously to remove infringing material once it is notified, ought to be extended to the production of code itself.64 Code that incorporates infringing material is not given a free pass, but those who have promulgated it without knowledge of the infringement would have a chance to repair the code or cease copying it before becoming liable.

The patent thicket is also worrisome. There is a large and growing literature devoted to figuring out whether and under what circumstances software patents contribute to innovation, since they can promise returns to those who innovate. Scholars James Bessen and Robert Hunt have observed that the number of software patents has grown substantially since the early 1980s, from one thousand per year to over twenty thousand per year.65 These patents are obtained, on average, by larger firms than those acquiring patents in other fields, and non-software firms acquire over 90 percent of software patents.66 Bessen and Hunt suggest that these patterns are consistent with a patent thicket—large firms obtaining patents to extract royalties from rivals and to defend themselves from their rivals’ patents.67

While large firms can reach patent détente with each other through cross-licensing,68 smaller firms and individuals may be left out. There are thousands of software patents, and patent infringement, unlike copyright, does not require a copying of the original material: so long as someone else already came up with the idea, the new work is infringing. With copyright, if someone miraculously managed independently to come up with the tune to a Beatles song, that tune would not be infringing the Beatles’ copyright, since it did not copy the song—it was independently invented. It is this virtue of copyright law that allowed Richard Stallman to begin the free software movement’s effort to reproduce Unix’s functionality without infringing its copyright by simply creating new code from scratch that acts the same way that Unix’s code does.

Not only does patent not have such a limitation, but it also applies to the abstract concepts expressed in code, rather than to a specific set of code.69 Thus, someone can sit down to write some software in an empty room and, by that act, infringe multiple patents. Patent infringement can be asserted without having to claim appropriation of any code. For example, Microsoft has said that it believes that pieces of GNU/Linux infringe its patents, though it has not sued anyone over it.70 Microsoft may well be right, given the number and breadth of patents it possesses. So far the best protection against copyright or patent infringement for a contributor to a free software project is that he or she is not worth suing; litigation can be expensive for the plaintiff, and any victory hollow if the defendant cannot pay. The principle of tolerated uses comes back into play, not necessarily because patent holders are uncertain whether others’ uses are good or bad for them, but because the others are simply not worth suing. Certainly many amateur programmers seem undeterred by the prospect of patent infringement, and there is evidence that young commercial software firms plunge blithely ahead with innovations without being concerned about the risks of patent infringement.71

This is not an ideal state of affairs for anyone. If those who see value in software patents are correct, infringement is rampant. And to those who think patents are chilling innovation, the present regime needs reform. To be sure, amateurs who do not have houses to lose to litigation can still contribute to free software projects. Others can contribute anonymously, evading any claims of patent infringement since they simply cannot be found. But this turns coding into a gray market activity, eliminating what otherwise could be a thriving middle class of contributing firms should patent warfare ratchet into high gear. There may only be a working class of individual coders not worth suing and an upper class of IBMs—companies powerful enough to fight patent infringement cases without blinking.

The law can help level the playing field without major changes to the scopes of copyright or patent. Statutes of limitations define how quickly someone must come forward to claim that the law has been broken. For patent infringement in the United States, the limit is six years; for civil copyright infringement it is three.72 Unfortunately, this limit has little meaning for computer code because the statute of limitations starts from the time of the last infringement. Every time someone copies (or perhaps even runs) the code, the clock starts ticking again on a claim of infringement. This should be changed. The statute of limitations could be clarified for software, requiring that anyone who suspects or should suspect his or her work is being infringed sue within, for instance, one year of becoming aware of the suspect code. For example, the acts of those who contribute to free software projects—namely, releasing their code into a publicly accessible database like SourceForge—could be found to be enough to start the clock ticking on that statute of limitations.73 The somewhat obscure common-law defense of laches is available when plaintiffs sleep on their rights—sandbagging in order to let damages rack up—and it also might be adapted to this purpose.74

In the absence of such a rule, companies who think their proprietary interests have been compromised can wait to sue until a given piece of code has become wildly popular—essentially sandbagging the process. This proposed modification to the statute of limitations will still allow the vindication of proprietary rights, but users and developers of a particular version of code will know that lawsuits will be precluded after a specific interval of time. A system that requires those holding proprietary interests to advance them promptly will remove a significant source of instability and uncertainty from the freewheeling development processes that have given us—truly given, because no remuneration has been sought—everything from GNU/Linux to the Apache Web server to wikis. This approach would also create extra incentives for those holding proprietary code to release the source code so that the clock could start counting down on any infringement claims against it.75

The legal uncertainties in the process of writing and distributing new code currently express themselves most in the seams stitching together the worlds of amateur and commercial software production and use. With no change to copyright or patent, the amateur production of free software will likely continue; it is the adoption and refinement of the fruits of that production by commercial firms that is most vulnerable to claims of proprietary infringement. The uptake of generative outputs by commercial firms has been an important part of the generative cycle from backwater to mainstream. Interventions in the legal regimes to facilitate it—while offering redress for those whose proprietary rights have been infringed, so long as claims are made promptly—would help negotiate a better interface between generative and non-generative.

Content Thickets

Thickets similar to those found at the code layer also exist at the content layer. While patent does not significantly affect content, legal scholars Lawrence Lessig and Yochai Benkler, as well as others, have underscored that even the most rudimentary mixing of cultural icons and elements, including snippets of songs and video, can potentially result in thousands of dollars in legal liability for copyright infringement without causing any harm to the market for the original proprietary goods.76 Benkler believes that the explosion of amateur creativity online has occurred despite the legal system, not thanks to it.77 The high costs of copyright enforcement and the widespread availability of tools to produce and disseminate what he calls “creative cultural bricolage”78—something far more subtle and transformative than merely ripping a CD and sending its contents to a friend—currently allow for a variety of voices to be heard even when what they are saying is theoretically sanctionable by fines between $750 and $30,000 per copy made, $150,000 if the infringement contained within their expression is done “willfully.”79 As with code, this situation shoehorns otherwise laudable activity into a sub-rosa gray zone. The frequent unlawfulness of amateur creativity may be appealing to those who see it as a countercultural movement, like that of graffiti—part of the point of doing it is that it is edgy or illegal. It may even make the products of amateur cultural innovation less co-optable by the mainstream industrial information economy, since it is hard to clear rights for an anonymous film packing in images and sounds from hundreds of different sources, some proprietary.

But if prevention of commercial exploitation is the goal of some authors, it is best to let them simply structure their licenses to preclude it. Authors can opt to share their work under Creative Commons licenses that restrict commercial reuse of the work, while permitting limitless noncommercial use and modification by others.80

Finding ways through content thickets as Benkler and his cohort suggest is especially important if tethered appliances begin to take up more of the information space, making information that much more regulable. In a more regulable space the gap between prohibited uses and tolerated uses shrinks, creating the prospect that content produced by citizens who cannot easily clear permissions for all its ingredients will be squeezed out.

MAINTAINING REGULATORS’ TOLERANCE OF GENERATIVE SYSTEMS

Individual Liability Instead of Technology Mandates

As the capacity to inflict damage on “real world” interests increases with the Internet’s reach and with the number of valuable activities reliant upon it, the imperatives to take action will also increase. As both generative and non-generative devices maintain constant contact with various vendors and software providers, regulators may seek to require those manufacturers to shape the services they offer more precisely, causing a now-familiar wound to generativity.

One way to reduce pressure on institutional and technological gatekeepers is to ensure that individual wrongdoers can be held directly responsible. Some piecemeal solutions to problems such as spam take this approach. ISPs are working with makers of major PC e-mail applications to provide for forms of sender authentication.81 A given domain can, using public key encryption tools, authenticate that it is indeed the source of e-mail it sends. With Sender ID, e-mail purporting—but not proved—to be from a user at yahoo.com can be so trivially filtered as spam that it will no longer be worthwhile to send. This regime will hold ISPs more accountable for the e-mail that originates on their networks because they will find themselves shunned by other ISPs if they permit excessive anonymous spam—a system similar to the MAPS and Google/StopBadware regimes discussed in the previous chapter. This opportunity for greater direct liability reduces the pressure on those processing incoming e-mail—both the designated recipients and their ISPs—to resort to spam filtration heuristics that may unintentionally block legitimate e-mail.82
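
The public-key idea behind such sender-authentication schemes can be sketched with textbook RSA. (Strictly, signature-based signing of outgoing mail is the DKIM approach; Sender ID itself checks DNS records rather than signatures.) The tiny key below is purely illustrative; real deployments use full-size keys and padded signatures:

```python
import hashlib

# Textbook-RSA sketch of domain-level mail signing (illustration only:
# this 12-bit key is trivially breakable; real systems use 1024-bit-plus
# keys and proper signature padding).
p, q = 61, 53
n = p * q          # public modulus (3233)
e = 17             # public exponent: published by the domain, e.g. in DNS
d = 2753           # private exponent: held only by the sending domain

def digest(message: str) -> int:
    # Hash the signed header down to a number below the toy modulus.
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message: str) -> int:
    # Only the domain, holding d, can produce this value.
    return pow(digest(message), d, n)

def verify(message: str, signature: int) -> bool:
    # Any receiving mail server can check using the published (e, n).
    return pow(signature, e, n) == digest(message)

header = "From: alice@example.com"
sig = sign(header)
print(verify(header, sig))            # -> True: the domain vouches for this mail
print(verify(header, (sig + 1) % n))  # -> False: a tampered signature fails
```

Mail failing the check can be filtered cheaply, which is what makes spoofed senders "no longer worthwhile to send."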

The same principle can apply to individuals’ uses of the Internet that are said to harm legally protected interests. Music industry lawsuits against individual file sharers may be bad policy if the underlying substantive law demarcating the protected interest is itself ill-advised—and there are many reasons to think that it is—but from the point of view of generativity, such lawsuits inflict little damage on the network and PCs themselves. The Internet’s future may be brighter if technology permits easier identification of Internet users combined with legal processes, and perhaps technical limitations, to ensure that such identification occurs only when good cause exists. The mechanisms to make it less than impossible to find copyright infringers and defamers ought not to make it trivial for authoritarian states to single out subversives.

As the discussion of FON explained, a growing number of Internet users are acquiring wireless routers that default to sharing their connection with anyone nearby who has a PC configured with a wireless antenna. Consumers may not intend to open their networks, but doing so creates generative benefits for those nearby without their own Internet access.83 Usage by others does not typically impede the original consumer’s enjoyment of broadband, but should outsiders use that connection, say, to send viruses or to pirate copyrighted files, the original consumer could be blamed when the Internet connection is traced.84 Current legal doctrine typically precludes such blame—nearly all secondary liability schemes require some form of knowledge or benefit before imposing responsibilities85—but a sea change in the ability of lawbreakers to act untraceably by using others’ wi-fi could plausibly result in an adjustment to doctrine.

As such examples arise and become well known, consumers will seek to cut off others’ access to their surplus network resources, and the manufacturers of wireless routers might change the default to closed. If, however, genuine individual identity can be affirmed in appropriate circumstances, wi-fi sharing need not be impeded: each user will be held responsible for his or her own actions and no more. Indeed, the FON system of sharing wireless access among members of the “FON club” maintains users’ accounts for the purpose of identity tracing in limited circumstances—and to prevent additional pressure on regulators to ban FON itself. Such identification schemes need not be instant or perfect. Today’s status quo requires a series of subpoenas to online service providers and ISPs to discern the identity of a wrongdoer. This provides a balance between cat and mouse, a space for the tolerated uses described in Chapter Five, precluding both easy government abuse of personal privacy and outright anarchy.

Beyond the Law

Regimes of legal liability can be helpful when there is a problem and no one has taken ownership of it. When a manufacturing plant pollutes a stream, it ought to pay—to internalize the negative externality it is inflicting on others by polluting. No one fully owns today’s problems of copyright infringement and defamation online, just as no one fully owns security problems on the Net. But the solution is not to conscript intermediaries to become the Net police. Under prevailing law Wikipedia could get away with much less stringent monitoring of its articles for plagiarized work, and it could leave plainly defamatory material in an article but be shielded in the United States by the Communications Decency Act provision exempting those hosting material from responsibility for what others have provided.86



Yet Wikipedia polices itself according to an ethical code—a set of community standards that encourages contributors to do the right thing rather than the required thing or the profitable thing. To harness Wikipedia’s ethical instinct across the layers of the generative Internet, we must figure out how to inspire people to act humanely in digital environments that today do not facilitate the appreciative smiles and “thank yous” present in the physical world. This can be accomplished with tools—such as those discussed in the previous chapter and those yet to be invented—that foster such environments. For the generative Internet fully to come into its own, it must allow us to harness the connections we have with each other, to coordinate when we have the time, talent, and energy, and to benefit from others’ coordination when we do not. Such tools allow us to express and live our civic instincts online, trusting that the expression of our collective character will be one at least as good as that imposed by outside sovereigns—sovereigns who, after all, are only people themselves.

To be sure, this expression of collective character will not always be just, even if participants seek to act in good faith. Some users have begun to deploy tools like Blossom, whereby individual PC users can agree to let their Internet connections be used so that others can see the Internet from their point of view.87 As states increasingly lean on their domestic ISPs and overseas online service providers to filter particular content, a tool like Blossom can allow someone in China to see the Internet as if he or she were a New Yorker, and vice versa. But such a tool undermines individual state sovereignty worldwide, just as a tool to facilitate filtering can be deployed to encroach on fundamental freedoms when ported to regimes that do not observe the rule of law. A tool like Blossom not only makes it hard for China to filter politically sensitive content, but it prevents Germany and France from filtering images of Nazi swastikas, and it gets in the way of attempts by copyright holders to carve the world into geographic zones as they seek to release online content in one place but not another: the New York Times could not as easily provide an article with an update about a British criminal investigation everywhere but within Britain, as it recently did to respect what it took to be the law of the United Kingdom on pretrial publicity.88

Tools like Blossom, which succeed only as much as netizens are impelled to want to adopt them, ask the distributed users of the Internet to decide, one by one, how much they are willing to create a network to subvert the enforcement of central authorities around the world.89 Each person can frame a view balancing the risks of misuse of a network against the risks of abuse of a sovereign’s power to patrol it, and devote his or her processor cycles and network bandwidth accordingly. Lessig is chary of such power, thinking of these tools as “technological tricks” that short-circuit the process of making the case in the political arena for the substantive values they enable.90 But this disregards a kind of acoustic separation found in a society that is not a police state, by which most laws, especially those pertaining to personal behavior, must not only be entered onto the books, but also reinforced by all sorts of people, public and private, in order to have effect.91 Perhaps it is best to say that neither the governor nor the governed should be able to monopolize technological tricks. We are better off without flat-out trumps that make the world the way either regulator or target wants it to be without the need for the expenditure of some effort and cooperation from others to make it so. The danger of a trump is greater for a sterile system, where a user must accept the system as it is if he or she is to use it at all, than for the tools developed for a generative one, where there is a constant—perhaps healthy—back-and-forth between tools to circumvent regulation and tools to effect the regulation anyway.92 The generative Internet upholds a precious if hidden dynamic where a regulator must be in a relationship with both those regulated and those who are needed to make the regulation effective. This dynamic is not found solely within the political maneuvers that transpire in a liberal democracy to put a law in place at the outset.

Today our conception of the Internet is still largely as a tool whose regulability is a function of its initial design, modified by the sum of vectors to rework it for control: as Lessig has put it, code is law, and commerce and government can work together to change the code. There is a hierarchy of dogs, cats, and mice: Governments might ask ISPs to retain more data or less about their users; individual users might go to greater or lesser lengths to cloak their online activities from observation by their ISPs.93 Tethered appliances change the equation’s results by making life far easier for the dogs and cats. Tools for group generativity can change the equation itself, but in unpredictable directions. They allow the level of regulability to be affected by conscious decisions by the mice about the kind of online world they want, not only for themselves but for others. If there is apathy about being able to experience the Internet as others do elsewhere, tools like Blossom will not be able to sustain much traffic, and the current level of regulability of the Internet will remain unchanged. If there is a wellspring of interest on this front, it can become easy to evade geographic restrictions.



One objection to the unfettered development of generative tools that can defy centralized authority in proportion to the number and passion of those willing to use them is that there are large groups of people who would be empowered to do ill with them. Criminal law typically penalizes conspiracy as a separate crime because it recognizes that the whole can be bigger than the sum of the parts—people working in concert can create more trouble than when they each act alone. Continued pressure on public file-sharing networks has led to fully realized “darknets,” semi-private systems whose sole purpose is to enable the convenient sharing of music and other content irrespective of copyright.94 For example, a network called Oink incorporates many of the community features of Wikipedia without being open to the public.95 People may join only on invitation from an existing member. Oink imposes strict rules on the sharing of files to ensure maximum availability: users must maintain a certain ratio of uploaded-to-downloaded material. Those who fall short risk being cut off from the service—and the penalty may also be applied to the members who sponsored them. The Oink service has none of the difficulties of the public networks, where files are nominated for takedown as they are discovered by publishers’ automated tools, and where publishers have inserted decoy files that do not contain what they promise.96 Oink is easier and faster to use than the iTunes store. And, of course, it is cheaper because it is free. If there are enough people to see to the creation and maintenance of such a community—still one primarily of strangers—is it a testament to the dangers of group generativity or to the fact that the current application of copyright law finds very little legitimacy?
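
The enforcement mechanism described, a required ratio of uploaded to downloaded material, amounts to a simple arithmetic check. A sketch (the 0.7 threshold is an assumption for illustration; the service's actual rules were not published):

```python
# Hypothetical sketch of a darknet share-ratio rule: members whose
# uploads fall below some fraction of their downloads risk being cut
# off, keeping files maximally available. Threshold is an assumption.
REQUIRED_RATIO = 0.7

def in_good_standing(uploaded_bytes: int, downloaded_bytes: int) -> bool:
    if downloaded_bytes == 0:
        return True  # new members owe nothing until they download
    return uploaded_bytes / downloaded_bytes >= REQUIRED_RATIO

print(in_good_standing(800, 1000))  # -> True  (ratio 0.8)
print(in_good_standing(500, 1000))  # -> False (ratio 0.5)
```

Because every member's standing depends on continued contribution, the rule aligns individual incentives with the community's availability goal without any central repository of files.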

One’s answer may differ to the extent that similar communities exist for people to share stolen credit card numbers or images of child abuse. If such communities do not exist, it suggests that a change to copyright’s policy and business model could eliminate the most substantial disjunct between laws common to free societies and the online behavior of their citizens.97 Here there is no good empirical data to guide us. But the fact remains that so long as these communities are as semi-open as they must be in order to achieve a threatening scale—ready to accept new members who are not personally known to existing ones—they are in a position to be infiltrated by law enforcement. Private Yahoo! groups whose members trade in images of child abuse—a far less sophisticated counterpart to Oink’s community of copyright infringers—are readily monitored.98 This monitoring could take place by Yahoo! itself, or in a decentralized model, by even one or two members who have second thoughts—practically anyone is in a position to compromise the network. As theories multiply about the use of the Internet as a terrorist recruiting tool,99 we can see the downside to criminals as well: open networks cannot keep secrets very well. (The use of the Internet for more specialized conspiracies that do not depend on semi-public participation for their success is likely here to stay; sophisticated criminals can see to it that they retain generative devices even if the mainstream public abandons them.)

Wikipedia, as a tool of group generativity, reflects the character of thousands of people. Benkler compares Wikipedia’s entry on Barbie dolls to that of other encyclopedias developed in more traditional ways, and finds that most of the others fail to make mention of any of the controversies surrounding Barbie as a cultural icon.100 Wikipedia has extensive discussion on the topic, and Britannica has a share, too. Benkler freely concedes that a tool of group generativity like Wikipedia is not the only way to include important points of view that might not accord with the more monolithic views of what he calls the “industrial information economy.” More traditional institutions, such as universities, have established a measure of independence, too. And he also acknowledges that tools of group generativity can be abused by a group; there can be powerful norms that a majority enforces upon a minority to squelch some views. But he rightly suggests that the world is improved by a variety of models of production of culture, models that draw on different incentives, with different biases, allowing people to be exposed to a multiplicity of viewpoints, precluding a monopoly on truth. The same can be true of our technology, here the technology that undergirds our access to those viewpoints, and our ability to offer our own.

Can groups be trusted to behave well in the absence of formal government to rein in their excesses?101 The story of the American Revolution is sometimes romantically told as one in which small communities of virtue united against a common foe, and then lost their way after the revolution succeeded. Virtue gave way to narrow self-interest and corruption. The mechanisms of due process and separation of powers adapted by Madison to help substitute the rule of law for plain virtue will have to be translated into those online communities empowered with generative tools to govern themselves and to affect the larger offline world. Using the case of privacy, the next chapter seeks to sketch out some of the puzzles raised by the use of the powerful tools that this book has advocated to bring the generative Net fully into its own. Privacy problems that have been stable for the past thirty-five years are being revolutionized by the generative Internet, and how they are handled will tell us much about the future of the Net and our freedoms within and from it.

Posted by The Editors on March 16, 2008

Comments

Doug Sword on paragraph 65:

In the sentence beginning, “This can be accomplished…” there is an “as” missing after “such.”

May 22, 2009 7:44 pm

[…] typically have their own content filters installed!)  Thanks to the PC, people can help forge new civic technologies — ones that succeed to the extent that people are willing to participate in […]

June 18, 2009 10:16 pm

[…] III: Jon Zittrain notes that he had a similar thought in his book The Future of the Internet (and How to Stop It): The cable television experience is a walled garden. Should a cable or satellite company choose to […]

September 22, 2009 8:28 am

[…] are many reasons to worry about cloud computing.  Data stored in the cloud can be difficult to extract for you, yet all too easy to demand by government.  Applications running the cloud can mean new […]

October 13, 2009 11:46 am

JZ on paragraph 69:

Oink’s creator was found not guilty of conspiracy to defraud the music industry in a trial in Britain. http://torrentfreak.com/oink-admin-found-not-guilty-walks-free-100115/

January 18, 2010 2:26 am
L. Reese on paragraph 4:

The ever-growing trend towards digital is itself part of the problem. With Amazon’s Kindle and, to a lesser extent, Apple’s iTunes, the public ought to realize that they are, in fact, only renting those digital files, not buying them. Without an extractable version of a file, something that could be moved and saved to whatever storage device the user wishes, there is no real ownership. Convenient and quick access to a book, song, or movie is an alluring prospect for many consumers, and appears to overshadow this loss of true ownership of those items.

February 1, 2010 6:40 pm
Rick :

Yes, but these conditions are not brought about by experts in jurisprudence but by corporations that will stop at nothing to maximise profits, no matter how unfair and greedy they appear to others.

The CD revolution did not raise costs of production at all - yet the MAFIAA record companies of course used the opportunity to exercise more ‘price-gouging’.

July 15, 2010 4:59 pm

[…] such requests, and users wanting to know the extent to which their data is protected—and arguably providing insufficient protection. Therefore, the coalition is lobbying Congress to amend the law […]

April 8, 2010 10:20 pm

[…] The availability of destruction as a remedy smothers innovation.  If Oracle can’t strong-arm Google into settling but wins at trial and is awarded the destruction provision (and it survives appeal and Google eventually capitulates instead of balking and riding a series of contempt proceedings into a draconian post-litigation settlement or bankruptcy), (1) consumers would have their phones replaced with bricks and think twice before buying new tech again; (2) Android developers would see their platform and all their apps evaporate; and (3) in the future, companies would likely waste time reinventing the wheel to avoid Google’s court-ordered fate rather than developing new technologies.  There is a storm brewing, brought on by the rise of tethered appliances and the thicket of software patent regulation. […]

August 24, 2010 2:16 pm

[…] boxing.  (To be sure, this is with the exception of the byzantine and ill-considered “must carry” rules that give legacy TV broadcasters a chance to demand a corresponding cable channel […]

September 7, 2010 10:31 am

[…] are never entirely accountable to consumers. In a perfectly free market they would be, but not in a world of two-year contracts, exclusive service providers, and trapped data. Apple must take into account […]

September 28, 2010 7:23 pm

[…] (also of pizza and scooters). Since Italy knows Apple can remove the app, it may feel entitled to demand that the company do so whenever Italians’ dignity is the least bit bruised. In a walled […]

October 18, 2010 9:32 am

[…] or digital platforms to ensure “generativity” was not imperiled. On pg. 181 of the book, Zittrain argued that: “If there is a present worldwide threat to neutrality in the movement of bits, it comes not […]

March 13, 2011 11:18 am

[…] code to run and what data to associate with ourselves.  I have some thoughts on how to do that here and […]

May 2, 2011 9:30 am

[…] for IP law reform to correct the problem. These recommendations include approaches to clearing patent thickets; dealing with orphan works; and transitioning to evidence-based, rather than lobby-based, IP […]

June 6, 2011 9:23 am
