Informational Freedom

Can you read any book you want to at the push of a button? Can you listen to any music ever recorded the moment you feel like it? Do you have instant access to any academic publication from around the world that you wish to consult? In the past, when copying and distributing information was expensive, asking such seemingly outrageous questions would not have made any sense. In the early days of writing, when books were copied by hand, they were rare, costly and subject to errors. Very few people had access to them. And even as recently as the last years of the 20th century, it typically took costly physical production and distribution processes to get books, musical recordings and other items into people’s hands.

But in the digital age, when the marginal cost of making and distributing a copy has shrunk to zero, all limitations on digital information are artificial. They involve adding costs to the system in order to impose scarcity on something that is abundant. For example, billions of dollars have been spent on preventing people from copying and sharing digital music files (CBS News, 2001).

Why are we spending money to make information less accessible? When information existed only in analog form, the cost of copying and distributing it allowed us to build an economy and a society that was based on information scarcity. A record label, for instance, had to recruit musical talent, record in expensive studios, market the music, and make and distribute physical records. Charging for the records allowed the label to cover its costs and turn a profit. Now that individuals can make music on a laptop and distribute it for free, the fixed costs are dramatically lower and the marginal cost of each listen is zero. In this context, the business model of charging per record, per song or per listen, and the copyright protections required to sustain it, no longer make sense. Despite the ridiculous fight put up by the music industry, our listening is generally either free (meaning that it is ad-supported) or part of a subscription. In either case, the marginal cost of each listen is zero.

Despite this progress in the music industry, we accept many other artificial restrictions on information access because this is the only system we know. To transition into the Knowledge Age, however, we should strive for an increase in informational freedom. This is not unprecedented in human history: prior to the advent of the printing press, stories and music were passed on orally or through copying by hand. There were no restrictions on who could tell a story or perform a song.

To be clear, information is not the same as knowledge. Information, for instance, includes the huge number of log files generated every day by computers around the world, many of which may never be analyzed. We don’t know in advance what information will turn out to be the basis for knowledge, so it makes sense to retain as much information as possible and make access to it as broad as possible. This section will explore various ways in which we can expand informational freedom, the second important step that will facilitate our transition to a Knowledge Age.

Access to the Internet

The Internet has been derided by some who claim it is a small innovation compared to, say, electricity or vaccinations—but they are mistaken. The Internet allows anyone, anywhere in the world, to learn how electricity or vaccinations work. Setting aside artificial limitations imposed on the flow of information, the Internet provides the means to access and distribute all human knowledge to all of humanity. As such, it is the crucial enabler of the digital knowledge loop. Access to the Internet is a core component of informational freedom.

At present, over four and a half billion people are connected to the Internet, a number that is increasing by hundreds of millions every year (Kemp, 2020). This tremendous growth has become possible because the cost of access has fallen dramatically. A capable smartphone costs less than $100 to manufacture. In places with competitive markets, 4G bandwidth is provided at prices as low as $0.10 per GB (Roy, 2019).

Even connecting people who live in remote parts of the world is getting much cheaper, as the cost for wireless networking is decreasing and we are increasing our satellite capacity. For instance, there is a project underway to connect rural communities in Mexico for less than $10,000 per community (Rostad, 2018). At the same time, in highly developed economies such as the US, ongoing technological innovation such as MIMO wireless technology will further lower prices for bandwidth in densely populated urban areas (“MIMO,” 2020).

All of this means that even at relatively low levels, UBI would cover the cost of Internet access, provided that we keep innovating and maintain highly competitive and properly regulated markets for access to it. This is an example of how the three different freedoms reinforce each other: economic freedom allows people to access the Internet, which is the foundation for informational freedom, and, as we will see later, using it to contribute to the knowledge loop requires psychological freedom.

As we work to make affordable Internet access universal, we must also address limitations to the flow of information on the network. In particular, we should fight against restrictions on the Internet imposed by governments and Internet service providers (ISPs). Both are artificial limitations, driven by a range of economic and policy considerations opposed to the imperative of informational freedom.

One Global Internet

By design, the Internet has no built-in concept of geography. Most fundamentally, it constitutes a way to connect networks with one another (hence its name), regardless of where the machines involved are located. Any geographic restrictions have been added in, often at great cost. Australia and the UK have recently mandated so-called ‘firewalls’ around their countries, not unlike China’s own “Great Firewall” (“Great Firewall,” 2020), and countries like Turkey have had one in place for some time. These ‘firewalls’ place the Internet under government control, restricting informational freedom. For instance, Wikipedia was not accessible in Turkey for many years. Furthermore, both China and Russia have banned the use of virtual private network services, tools that allow individuals to circumvent these artificial restrictions (Wenger, 2017a). As citizens, we should be outraged that governments are cutting us off from accessing information freely around the world, both on principle and on the basis of this being a bad use of resources. Imagine governments in an earlier age spending taxpayer money so citizens could dial fewer phone numbers.

No Artificial Fast and Slow Lanes

The same equipment used by governments to impose geographic boundaries on the Internet is used by ISPs to extract more money from customers, distorting access in the process through practices including paid prioritization and zero-rating. To understand why they are a problem, let’s take a brief technical detour.

When you buy access to the Internet, you pay for a connection of a certain capacity. If it provides 10 megabits per second and you use that connection fully for sixty seconds, you would have downloaded (or uploaded, for that matter) 600 megabits, the equivalent of 15–25 songs on Spotify or SoundCloud (assuming 3–5 megabytes per song). The fantastic thing about digital information is that all bits are the same. It doesn’t matter whether you accessed Wikipedia or looked at pictures of kittens: you’ve paid for the bandwidth, and you should be free to use it to access whatever parts of human knowledge you want.
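To make that arithmetic explicit, here is a minimal back-of-the-envelope calculation in Python; the 3–5 megabytes per song are simply the assumption stated above.

```python
# Back-of-the-envelope: how many songs fit through a 10 Mbit/s connection in one minute?
link_mbps = 10      # connection capacity in megabits per second
seconds = 60        # one minute of full utilization

megabits = link_mbps * seconds   # 600 megabits
megabytes = megabits / 8         # 75 megabytes

for song_mb in (3, 5):           # assumed size of one song in megabytes
    print(f"at {song_mb} MB per song: about {megabytes / song_mb:.0f} songs")
# at 3 MB per song: about 25 songs
# at 5 MB per song: about 15 songs
```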

That principle, however, doesn’t maximize profit for the ISP. In order to do that, they seek to discriminate between different types of information, based on consumer demand and the supplier’s ability to pay. First they install equipment that lets them identify bits based on their origin. Then they go to a company like YouTube or Netflix and ask them to pay to have their traffic ‘prioritized’ relative to traffic from other sources. Another form of manipulation that is common among wireless providers is so-called zero-rating, where some services pay to be excluded from monthly bandwidth caps. If permitted, ISPs will go a step further: in early 2017, the US Senate voted to allow them to sell customer data, including browsing history, without customer consent (Wenger, 2017b).

The regulatory solution to this problem is blandly referred to as ‘net neutrality’, but what is at stake here is informational freedom itself. Our access to human knowledge should not be skewed by our ISPs’ financial incentives. We might consider switching to another ISP that provides neutral access, but in most geographic areas, especially in the United States, there is no competitive market for broadband Internet access. ISPs either have outright monopolies (often granted by regulators) or operate in small oligopolies. For instance, in the part of New York City where I live, there is just one broadband ISP.

Over time, technological advances such as wireless broadband may make the market more competitive, but until then we need regulation to avoid ISPs limiting our informational freedom. This concern is shared by people all over the world: in 2016, India objected to a plan by Facebook to provide subsidized Internet access that would have given priority to their own services, and outlawed ‘zero rating’ altogether (Vincent, 2016).

Bots for All of Us

Once you have access to the Internet, you need software to connect to its many information sources. When Tim Berners-Lee first invented the World Wide Web in 1989, he specified an open protocol, the Hypertext Transfer Protocol (HTTP), that anyone could use both to make information available and to access it ("Tim Berners-Lee," n.d.). In doing this, Berners-Lee enabled anyone to build software, so-called Web servers and browsers, that would be compatible with this protocol. Many people did, including Marc Andreessen with Netscape, and many web servers and browsers were available as open-source or for free.

The combination of an open protocol and free software meant permissionless publishing and complete user control. If you wanted to add a page to the Web, you could just download a Web server, run it on a computer connected to the Internet and add content in the Hypertext Markup Language (HTML) format. Not surprisingly, the amount of content on the Web proliferated rapidly. Want to post a picture of your cat? Upload it to your Web server. Want to write something about the latest progress on your research project? There was no need to convince an academic publisher of its merits—you could just put up a web page.
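To illustrate how low that barrier remains, here is a minimal sketch of publishing a page using nothing but Python’s built-in web server; the file name and page content are placeholders.

```python
# publish.py — put a hand-written HTML page on the Web with Python's built-in server.
# Run with: python3 publish.py, then visit http://localhost:8000/cat.html
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path

# The "content" you want to add to the Web — here, a placeholder cat page.
Path("cat.html").write_text(
    "<html><body><h1>My cat</h1><p>A picture and a few words about my cat.</p></body></html>"
)

# Serve everything in the current directory over HTTP on port 8000.
HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```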

People accessing the Web benefited from their ability to completely control their own web browser. In fact, in the Hypertext Transfer Protocol, the web browser is referred to as a ‘user agent’ that accesses the Web on behalf of the user. Want to see the raw HTML as delivered by the server? Right click on your screen and use ‘View Page Source’. Want to see only text? Instruct your user agent to turn off all images. Want to fill out a web form but keep a copy of what you are submitting for yourself? Create a script to have your browser save all form submissions locally.
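To make the idea of a user agent acting on your instructions concrete, here is a minimal sketch of a text-only ‘user agent’ in Python: it fetches a page over HTTP and keeps only the text, discarding images, scripts and styling. The URL is a placeholder.

```python
# A bare-bones, text-only "user agent": fetch a page and render only its text.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
        self.skip = False   # used to drop the contents of <script> and <style> tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

# Placeholder URL — the point is that *we* decide how the page is rendered.
html = urlopen("https://example.com/").read().decode("utf-8", errors="replace")
parser = TextOnly()
parser.feed(html)
print("\n".join(parser.chunks))
```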

Over time, the platforms that have increasingly dominated the Web have interfered with some of the freedom and autonomy enjoyed by early users. I went on Facebook recently to find a note I posted some time ago on a friend’s wall. But it turns out that you can’t search through all the wall posts you have written: instead you have to scroll backwards in time for each friend, trying to remember when you posted what where. Facebook has all the data, but they’ve decided not to make it searchable. Whether or not you attribute this to intentional misconduct on their part, my point is that you experience Facebook the way they want you to experience it. If you don’t like how Facebook’s algorithms prioritize your friends’ posts in your newsfeed, tough luck.

Imagine what would happen if everything you did on Facebook was mediated by a software program—a ‘bot’—that you could control. You could instruct it to go through and automate the cumbersome steps that Facebook lays out for finding old wall posts. Even better, if you had been using this bot all along, it could have kept your own archive of wall posts in your own data store and you could simply instruct it to search your archive. If we all used bots to interact with Facebook and didn’t like how our newsfeed was prioritized, we could ask our friends to instruct their bots to send us status updates directly, so that we could form our own feeds. This was entirely possible on the Web because of the open protocol, but it is not possible in a world of proprietary and closed apps on smartphones.
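As a sketch of what such a bot might look like, consider the following Python fragment. The `social_api` object and its `fetch_posts` call are purely hypothetical stand-ins for whatever programmatic access the platform would expose; the point is that the archive lives in your own data store and is searched on your terms.

```python
# Hypothetical sketch of a personal bot that keeps its own searchable archive of wall posts.
# `social_api` and its `fetch_posts` call stand in for programmatic access that the
# platform would have to expose — they are not a real API.
import json
from pathlib import Path

ARCHIVE = Path("my_wall_posts.json")

def sync_archive(social_api, auth_token):
    """Pull my wall posts via the (hypothetical) API and store them in my own data store."""
    posts = social_api.fetch_posts(auth=auth_token)   # invented call, for illustration only
    ARCHIVE.write_text(json.dumps(posts, indent=2))

def search_archive(term):
    """Search my own copy of the data — no scrolling backwards in time, friend by friend."""
    posts = json.loads(ARCHIVE.read_text())
    return [p for p in posts if term.lower() in p.get("text", "").lower()]
```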

Although this example might sound trivial, bots have profound implications in a networked world. Consider the on-demand car services provided by companies such as Uber and Lyft. As drivers for these services know, each service provides a separate app for drivers to use. You can try to run both apps on one phone or you can even have two phones, as some drivers do, but the closed nature of the apps means that you cannot use your phone’s computing power to evaluate competing offers. If you had access to bots that could interact with the networks on your behalf, you could simultaneously participate in these various marketplaces and play one off against the other.

Using a bot, you could set your own criteria for which rides you want to accept, including whether a commission charged by a given network was below a certain threshold. The bot would then allow you to accept only rides that maximize the fare you receive. Ride-sharing companies would no longer be able to charge excessive commissions, since new networks could arise to undercut them. Similarly, as a passenger, using a bot would allow you to simultaneously evaluate the prices between different services and choose the one with the lowest price for a particular trip.
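A driver-side bot of this kind boils down to a simple filter over offers arriving from several networks at once. In the sketch below, the network names, offer fields and the 20 percent commission threshold are all illustrative assumptions, not any real service’s data format.

```python
# Illustrative sketch: pick the best ride across several networks by driver-defined criteria.
from dataclasses import dataclass

@dataclass
class Offer:
    network: str        # "NetworkA", "NetworkB" — placeholders, not real services
    fare: float         # what the rider pays
    commission: float   # fraction the network keeps, e.g. 0.25

MAX_COMMISSION = 0.20   # the driver's own rule: reject anything above 20 percent

def best_offer(offers):
    """Keep only offers under the commission threshold, then maximize driver earnings."""
    acceptable = [o for o in offers if o.commission <= MAX_COMMISSION]
    return max(acceptable, key=lambda o: o.fare * (1 - o.commission), default=None)

offers = [
    Offer("NetworkA", fare=18.0, commission=0.25),
    Offer("NetworkB", fare=16.0, commission=0.10),
]
print(best_offer(offers))
```

In this made-up example the second offer wins even though its fare is lower, because more of it reaches the driver.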

We could also use bots as an alternative to antitrust regulation, in order to counter the power of technology giants like Google or Facebook without foregoing the benefits of their large networks. These companies derive much of their revenue from advertising, and consumers currently have no way of blocking ads on mobile devices. But what if users could control mobile apps to add ad-blocking functionality, just as they can with Web browsers?

Many people decry ad-blocking as an attack on journalism that dooms the independent Web, but that is a pessimistic view. In the early days of the Web, it was full of ad-free content published by individuals. When companies joined in, they brought their offline business models with them, including paid subscriptions and advertising. Along with the emergence of platforms such as Facebook and Twitter with strong network effects, this resulted in a centralization of the Web—content was increasingly either produced on a platform or moved behind a paywall.

Ad-blocking is an assertion of power by the end user, which is a good thing in all respects. Just as a New York City judge found in 2015 that taxi companies have no special right to see their business model protected from ridesharing companies (Whitford, 2015), neither do ad-supported publishers. And while this might result in a downturn for publishers in the short term, in the long run it will mean more growth for content that is paid for more directly by end users (for example, through subscriptions or crowdfunding).

To curtail the centralizing power of network effects, we should shift power to the end users by allowing them to have user agents for mobile apps, just as we did with the Web. The reason users don’t wield the same power on mobile is that native apps relegate us to interacting with services using our eyes, ears, brains and fingers. We cannot use the computing capabilities of our smartphones, which are as powerful as supercomputers were until quite recently, in order to interact with the apps on our behalf. The apps control us, instead of us controlling the apps. Like a Web browser, a mobile user agent could do things such as block ads, keep copies of responses to services and let users participate in multiple services simultaneously. The way to help end users is not to break up big tech companies, but to empower individuals to use code that executes on their behalf.

What would it take to make this a reality? One approach would be to require companies like Uber, Google and Facebook to expose their functionality, not just through apps and websites, but also through so-called Application Programming Interfaces (APIs). An API is what a bot uses to carry out operations, such as posting a status update on a user’s behalf. Companies such as Facebook and Twitter have them, but they tend to have limited capabilities. Also, companies have the right to shut down bots, even when a user has authorized them to act on their behalf.
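To make the term concrete: an API call is simply a structured request sent by a program rather than a tap in an app. The sketch below shows what ‘posting a status update on a user’s behalf’ could look like; the endpoint URL and token format are placeholders, not any real service’s API.

```python
# What "posting a status update on a user's behalf via an API" looks like in code.
# The endpoint and token are placeholders; no real service's API is implied.
import json
from urllib.request import Request, urlopen

def post_status(text, token):
    request = Request(
        "https://api.example-social.com/v1/statuses",   # placeholder endpoint
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",          # the user authorizes the bot
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urlopen(request) as response:
        return json.loads(response.read())
```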

Why can’t I simply write code that interfaces with Facebook on my behalf? After all, Facebook’s app itself uses an API to talk to the Facebook servers. But in order to do that I would have to hack the Facebook app to figure out what the API calls are, and how to authenticate myself to them. Unfortunately, in the US there are three separate laws that make those steps illegal. The first is the anti-circumvention provision of the Digital Millennium Copyright Act (DMCA). The second is the Computer Fraud and Abuse Act (CFAA). And the third is the legal construction that by clicking ‘I accept’ on an End User License Agreement (EULA) or a set of Terms of Service (TOS), I am legally bound to respect whatever restrictions Facebook decides to impose. The last of these three is a civil matter, but criminal convictions under the first two laws carry mandatory prison sentences.

If we were willing to remove these three legal obstacles, hacking an app to give programmatic access to systems would be possible. People might argue that those provisions were created to solve important problems, but that is not entirely clear. The anti-circumvention provision of the DMCA was put in place to allow the creation of digital rights management systems for copyright enforcement. What you think of this depends on what you think about copyright, a subject we will look at in the next section.

The scope of the CFAA could be substantially decreased without limiting its potential for prosecuting fraud and abuse, and the same goes for restrictions on usage a company might impose via a license agreement or terms of service. If I only take actions that are also available inside the company’s app but happen to take them via a program of my own choice, why should that constitute a violation?

But, you might object, don’t companies need to protect the cryptographic keys that they use to encrypt communications? Aren’t ‘botnets’ behind those infamous distributed denial-of-service (DDoS) attacks, where vast networks of computers flood a service with so many requests that no one can access it? It’s true that there are a lot of compromised machines in the world that are used for nefarious purposes, including set-top boxes and home routers. But that only demonstrates how ineffective the existing laws are at stopping illegal bots. As a result, companies have developed the technological infrastructure to deal with them.

How would we prevent people from using bots that turn out to be malicious code? For one, open-source code would allow people to inspect it to make sure it does what it claims. However, open source is not the only answer. Once people can legally be represented by bots, many markets currently dominated by large companies will face competition from smaller startups that will build, operate and maintain these bots on behalf of their end users. These companies will compete in part on establishing and maintaining a trust relationship with their customers, much like an insurance broker represents a customer in their relationships with multiple insurance carriers.

Legalizing representation by bots would put pressure on the revenues of the currently dominant companies. We might worry that they would respond by slowing their investment in infrastructure, but there are countervailing forces as more money will be invested in competitors. For example, Uber’s ‘take rate’ (the percentage of the money paid for rides that they keep) is 25 percent. If competition forced that rate down to 5 percent, Uber’s value might fall from $90 billion to $10 billion, but that is still a huge figure, and plenty of capital would be available for investing in competitive companies that can achieve that kind of outcome.

That is not to say that there should not be limitations on bots. A bot representing me should have access to any functionality that I can access, but it should not be able to do things I can’t do, such as pretend to be another user or gain access to other people’s private posts. Companies can use technology to enforce such access limits for bots without relying on regulation that prevents bot representation in general.

Even if you are now convinced of the merits of bots, you might be wondering how we will get there. The answer is that we can start very small. We could run an experiment in a city like New York, where the city’s municipal authorities control how on-demand transportation services operate. They might say, “If you want to operate here, you have to let drivers interact with your service through a program.” Given how big a market the city is, I’m confident these services would agree. Eventually we could mandate APIs for all Internet services that have more than some threshold number of users, including all the social networks and even the big search platforms, such as Google.

To see that this is possible in principle, one need look no further than the UK’s Open Banking and the European Union’s Payment Services Directive, which require banks to offer an API for bank accounts (Manthorpe, 2018). This means that consumers can access third-party services, such as bill payment and insurance, by authorizing them to have access via the API instead of having to open a new account with a separate company. This dramatically lowers switching costs, and is making the market for financial services more competitive. The ACCESS Act, introduced by Senators Mark Warner and Josh Hawley in the US, was the first attempt to provide a similar concept for social media companies. While this particular effort hasn’t gone very far to date, it points to the possibility that bots could be used as an important alternative to Industrial Age antitrust regulation for shifting power back to end users.

Limiting the Limits to Sharing and Creating

After we have fought against geographical and prioritization limits and have bots that represent us online, we will still face legal limits that restrict what we can create and share. I will first examine copyright and patent laws and suggest ways to reduce how much they limit the knowledge loop. Then I’ll turn to privacy laws.

Earlier I noted how expensive it was to make copies of books when they had to be copied one letter at a time. Eventually, the printing press and movable type were invented. Together, the two made reproducing information faster and cheaper. Even back then, governments and churches saw cheaper information dissemination as a threat to their authority. In England, the Licensing of the Press Act of 1662 made the government’s approval to operate a printing press a legal requirement (Nipps, 2014). This approval depended on agreeing to censor content that was critical of the government or that ran counter to the teachings of the Church of England. The origins of copyright laws (i.e., the legal right to make copies) are thus tied up with censorship.

Those who had the right to make copies effectively held monopolies on the copyrighted content, which proved to be economically attractive. Censorship, however, does not make for a popular argument in support of an ongoing monopoly, so the justification for sustaining the arrangement shifted relatively quickly to the claim that copyright was necessary as an inducement or incentive to produce content in the first place. In the early 18th century, the writer Daniel Defoe and others argued that the time and effort authors put into learning and writing made written works morally their property, and that to motivate people to keep producing them, the production of “pyrated copies” had to be stopped (Deazley, 2004). While this argument sounds much more compelling than censorship, in practice copyright was rarely held by the original creator even back then. Instead, the economic benefits of copyright have largely accrued to publishers, who for the most part acquire the copyright for a one-time payment to the author or songwriter.

There is another problem with the incentive argument. It ignores a long history of prior content creation. Let’s take music as an example. Musical instruments were made as far back as 30,000 years ago, pre-dating the idea of copyright by many millennia. Even the earliest known encoding of a song, which marks music’s transition from information to knowledge, is around 3,400 years old (Andrews, 2018; Wulstan, 1971). Clearly then, people made and shared music long before copyright existed. In fact, the period during which it’s been possible for someone to earn a lot of money from making and then selling recorded music has been extraordinarily short. It started with the invention of the phonograph in the 1870s and peaked in 1999, the year that saw the biggest profits in the music industry, although the industry’s revenues have been gradually increasing again in recent years with the advent of streaming (Smith, 2020).

Before this short period, musicians made a living either from live performances or through patronage. If copyrighted music ceased to exist, musicians would still compose, perform and record music, and they would make money in the ways that they did prior to the rise of copyright. Indeed, as Steven Johnson found when he examined this issue, that is already happening, to some degree: “the decline in recorded-music revenue has been accompanied by an increase in revenues from live music... Recorded music, then, becomes a kind of marketing expense for the main event of live shows” (Johnson, 2015). Many musicians already choose to give away digital versions of their music, releasing tracks for free on SoundCloud or YouTube and making money from performing live (which during the COVID-19 pandemic had to shift to live streaming) or through crowdfunding methods such as Kickstarter and Patreon.

Over time, copyright holders have strengthened their claims and extended their reach. For instance, with the passing of the US Copyright Act of 1976, the requirement to register a copyright was removed: if you created content, you automatically held copyright in it (U.S. Copyright Office, 2019). Then, in 1998, the US Copyright Term Extension Act extended the length of a term of copyright from 50 to 70 years beyond the life of the author. This became known as the “Mickey Mouse Protection Act” because Disney had lobbied for it: having built a profitable business based on protected content, they were mindful that a number of their copyrights were due to expire (Slaton, 1999).

More recently, copyright lobbying has attempted to interfere with the publication of content on the Internet, through proposed legislation such as the Protect IP Act and the Stop Online Piracy Act, and language in the Trans-Pacific Partnership (TPP); after the United States withdrew from that trade deal, the copyright language was removed. In these latest attempts at expansion, the conflict between copyright and the digital knowledge loop has become especially clear. Copyright limits what you can do with content that you have access to, essentially restricting you to consuming it. It dramatically curtails your ability to share content and to create other works that use some or all of it. Some of the more extreme examples include takedowns of videos from YouTube that used the song “Happy Birthday to You,” which was under copyright until just a few years ago.

From a societal standpoint, it is never optimal to prevent someone from listening to or watching content that has already been created. Since the marginal cost of accessing a digital copy is zero, the world is better off if that person gets enjoyment from that content. And if that person becomes motivated to create some new inspiring content themselves, then the world is a lot better off.

Although the marginal cost for copying content is zero, you might wonder about the fixed and variable cost that goes into making it in the first place. If all content were to be free, then where would the money to produce it come from? Some degree of copyright is needed for content creation, especially for large-scale projects such as Hollywood movies or elaborate video games: it is likely that nobody would make them if, in the absence of copyright protection, they weren’t economically viable. Yet even for such big projects there should be constraints on enforcement. For instance, you shouldn’t be able to take down an entire service because it hosts a link to a pirated movie, as long as the link is promptly removed. More generally, I believe that copyright should be dramatically reduced in scope and made much more costly to obtain. The only automatic right accruing to content should be attribution. Reserving additional rights should require a registration fee, because you are asking for content to be restricted within the digital knowledge loop.

Imagine a situation where the only automatic right accruing to an intellectual work was attribution. Anyone wanting to copy or distribute your song would only have to credit you, something that would not inhibit any part of the knowledge loop. Attribution imposes no restrictions on making, accessing and distributing copies, or on creating or sharing derivative works. It can include referencing who wrote the lyrics, who composed the music, who played which instrument and so on. It can also include where you found this particular piece of music. This practice of attribution is already becoming popular for digital text and images using the Creative Commons License, or the MIT License in open source software development.

If you don’t want other people to use your music without paying you, you are asking for its potential contributions to the knowledge loop to be restricted, thus reducing the benefits that the loop confers upon society. You should pay for that right, which not only represents a loss to society but will also be costly to enforce. The registration fee should be paid on a monthly or annual basis, and when you stop paying it, your work should revert to attribution-only status.

In order to reserve rights, you should have to register your music with a registry, with some part of the copyright fee going towards maintaining the registry. Thanks to blockchains, which allow the operation of decentralized databases that are not owned or controlled by any one entity, there can be competing registries that access the same global database. The registries would be free to search, and registration would involve a check that you are not trying to register someone else’s work. The registries could be built in a way that anyone operating a music streaming service, such as Spotify or SoundCloud, could easily implement compliance to make sure they are not freely sharing music that has reserved rights.

It would even be possible to make the registration fee dependent on what rights you wanted to retain. For instance, your fee might be lower if you were prepared to allow non-commercial use of your music and to allow others to create derivative works, while it might increase significantly if you wanted all your rights reserved. Similar systems could be used for all types of content, including text, images and video.
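As a sketch of how such a registry entry and its fee might fit together, consider the following fragment. The rights categories and dollar amounts are invented purely to illustrate the principle that the fee grows with the restrictions you place on the knowledge loop.

```python
# Illustrative sketch of a rights registry entry whose fee scales with the restrictions.
from dataclasses import dataclass, field

# Hypothetical annual fee per reserved right; attribution itself is free and automatic.
FEE_SCHEDULE = {
    "commercial_use": 100.0,    # others must pay you for commercial use
    "derivative_works": 100.0,  # others may not remix without permission
    "distribution": 250.0,      # others may not redistribute copies at all
}

@dataclass
class Registration:
    work_id: str                 # e.g. a content hash identifying the song
    creator: str                 # attribution is always recorded
    reserved_rights: set = field(default_factory=set)

    def annual_fee(self):
        # The more you withhold from the knowledge loop, the more you pay.
        return sum(FEE_SCHEDULE[right] for right in self.reserved_rights)

song = Registration("sha256:placeholder", "Jane Doe", {"commercial_use", "derivative_works"})
print(song.annual_fee())   # 200.0 under this made-up schedule
```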

Critics might object that the system would impose a financial burden on creators, but it is important to remember that removing content from the knowledge loop imposes a cost on society. And enforcing this removal, by finding people who are infringing and penalizing them, incurs additional costs for society. For these reasons, asking creators to pay is fair, especially if their economic freedom is already assured by UBI.

UBI also provides an answer to another argument that is frequently wielded in support of excessive copyright: employment by publishers. This argument is relatively weak, as the major music labels combined employ only a little over twenty thousand people (“Sony Music,” n.d.; “Universal Music Group,” n.d.; “Warner Music Group,” n.d.). On top of that, the existence of this employment to some degree reflects the societal cost of copyright. The owners, managers and employees of record labels are, for the most part, not the creators of the music.

Let me point out one more reason why a system of paid registration makes sense. No intellectual works are created in a vacuum. All authors have read books by other people, all musicians have listened to tons of music, and all filmmakers have watched countless movies. Much of what makes art so enjoyable is the existence of a vast body of art that it draws upon and can reference, whether explicitly or implicitly. We are all part of the knowledge loop that has existed for millennia.

While copyright limits our ability to share knowledge, patents limit our ability to use knowledge to create something new. Just as having a copyright confers a monopoly on the reproduction of content, a patent confers a monopoly on its application. The rationale for patents is similar to the argument for copyright: the monopoly that is granted results in profits that are supposed to provide an incentive for people to invest in research and development.

Here, as with copyright, this incentive argument should be viewed with suspicion. People invented things long before patents existed, and many have continued to invent without seeking them (Kinsella, 2013). Mathematics is a great example of the power of what is known as ‘intrinsic motivation’: the drive to do something for its own sake, and not because it will be financially rewarded. People who might have otherwise spent their time in a more lucrative way often dedicate years of their lives to work on a single mathematical problem (frequently without success). It is because of the intrinsic human drive to solve problems that the field has made extraordinary advances, entirely in the absence of patents, which thankfully were never extended to include mathematical formulas and proofs. As we will see momentarily, this problem-solving drive can be, and has been, successfully amplified through other means.

The first patent system was established in Venice in the mid-fifteenth century, and Britain had a fairly well-established system by the seventeenth century (“History of Patent Law,” 2020). That leaves thousands of years of invention, a time that saw such critical breakthroughs as the alphabet, movable type, the wheel and gears. This is to say nothing of those inventors who have chosen not to patent their inventions because they saw how that would impose a loss on society. A well-known example is Jonas Salk, who created the polio vaccine. Other important inventions that were never patented include X-rays, penicillin and the use of ether as an anesthetic (Boldrin & Levine, 2010). Since we know that limits on the use of knowledge impose a cost, we should ask what alternatives to patents exist which might stimulate innovation.

Many people are motivated by wanting to solve a problem, whether it’s one they have themselves or something that impacts the world at large. With UBI, more of these people will be able to spend their time on inventing. We will also see more innovation because digital technologies are reducing the cost of inventing. One example of this is the company Science Exchange, which has created a marketplace for laboratory experiments. Say you have an idea that requires you to sequence a bunch of genes. The fastest gene sequencing available is from a company called Illumina, whose fastest machines used to cost more than half a million dollars to buy (Next Gen Seek, 2021). Through Science Exchange, however, you could access such a machine for less than $1,000 per use (“Illumina Next Generation Sequencing,” 2021). Furthermore, the next generation of sequencing machines is on the way, and these will further reduce the cost—technological deflation at work.

A lot of legislation has significantly inflated the cost of innovation. In particular, FDA rules around drug trials have made drug discovery prohibitively expensive, with the cost of bringing a drug to market exceeding $1 billion. While it is obviously important to protect patients, there are novel statistical techniques that would allow for smaller and faster trials (Berry et al., 2010). A small step was taken recently with the compassionate use of not-yet-approved drugs for terminally ill patients. Excessive medical damage claims have presented another barrier to innovation. As a result of these costs, many drugs are either not developed at all or are withdrawn from the market, despite their efficacy. For example, the vaccine against Lyme disease, Lymerix, is no longer available for humans following damage claims (Willyard, 2019).

Patents are not the only way to incentivize innovation. Another historically successful strategy has been the offering of prizes. In 1714, Britain famously offered rewards to encourage a solution to the problem of determining a ship’s longitude at sea. Several people were awarded prizes for their designs of chronometers, lunar distance tables and other methods for determining longitude, including improvements to existing methods. Mathematics provides an interesting example of the effectiveness of prizes, which beyond money also provide recognition. In addition to the coveted Fields Medal for exceptional work by mathematicians under the age of 40, there are also the seven so-called Millennium Prize Problems, each with a $1 million reward (only one of which has been solved to date—and Grigori Perelman, the Russian mathematician who solved it, famously turned down the prize money, opting only for the recognition).

At a time when we wish to accelerate the knowledge loop, we must shift the balance towards knowledge that can be used freely. The success of recent prize programs, such as the X Prizes, the DARPA Grand Challenge and NIST competitions, is promising, and the potential exists to crowdfund future prizes. Medical research should be a particular target for prizes, to help bring down the cost of healthcare.

Though prizes can help accelerate the knowledge loop, that still leaves a lot of existing patents in place. I believe much can be done to make the system more functional, in particular by reducing the impact of so-called non-practicing entities (NPEs, commonly referred to as “patent trolls”). These companies have no operating business of their own, and exist solely for the purpose of litigating patents. They tend to sue not just a company but also that company’s customers, forcing a lot of companies into a quick settlement. The NPE then uses the settlement money to finance further lawsuits. Fortunately, a recent Supreme Court ruling placed limits on where patent lawsuits can be filed, which might curtail the activity of NPEs (Liptak, 2017).

As a central step in patent reform, we must make it easier to invalidate existing patents, while at the same time making it more difficult to obtain new ones. We have seen some progress on both counts in the US, but there is still a long way to go. Large parts of what is currently patentable should be excluded from patentability, including university research that has received even small amounts of public funding. Universities have frequently delayed the publication of research in areas where they have hoped for patents that they could subsequently license out, a practice that has a damaging impact on the knowledge loop.

We have also gone astray in our celebration of patents as a measure of technological progress, when we should instead treat them at best as a necessary evil. Ideally, we would roll back the reach of existing patents and raise the bar for new ones, while also inducing as much unencumbered innovation as possible through prizes and social recognition.

Getting Over Privacy

Copyrights and patents aren’t the only legal limitations that slow down the digital knowledge loop. We’re also actively creating new restrictions in the form of well-intentioned privacy regulations. Not only do these measures restrict informational freedom, but as I will argue below, in the long run privacy is fundamentally incompatible with technological progress. Instead of clinging to our current conception of privacy, we therefore need to understand how to be free in a world where information is widely shared. Privacy has been a strategy for achieving and protecting freedom. To get over this idea while staying free, we need to expand economic, informational and psychological freedom.

Before I expand on this position, let me first note that countries and individuals already take dramatically different approaches to the privacy of certain types of information. For example, for many years Sweden and Finland have published everyone’s tax returns, and some people, including the Chief Information Officer and Dean for Technology at Harvard Medical School, have published their entire medical history on the Internet (Doyle & Scrutton, 2016; Zorabedian, 2015). This shows that under certain conditions it is eminently possible to publicly share exactly the type of information that some insist must absolutely remain private. As we will see, such sharing turns out not only to be possible but also extremely useful.

To better understand this perspective, compare the costs and benefits of keeping information private with the costs and benefits of sharing it widely. Digital technology is dramatically shifting this tradeoff in favor of sharing. Take a radiology image, for example. Analog X-ray technology produced an image on a physical film that had to be developed, and could only be examined by holding it up against a backlight. If you wanted to protect the information on it, you would put it in a file and lock it in a drawer. If you wanted a second opinion, you had to have the file sent to another doctor by mail. That process was costly, time-consuming and prone to errors. The upside of analog X-rays was the ease of keeping the information secret; the downside was the difficulty of putting it to use.

Now compare analog X-rays to digital X-rays. You can instantly walk out of your doctor’s office with a copy of the digital image on a thumb drive or have it emailed to you, put in a Dropbox or shared in some other way via the Internet. Thanks to this technology, you can now get a near-instant second opinion. And if everyone you contacted was stumped, you could post the image on the Internet for everyone to see. A doctor, or a patient, or simply an astute observer, somewhere in the world may have seen something similar before, even if it is incredibly rare. This has happened repeatedly via Figure 1, a company that provides an image sharing network for medical professionals.

But this power comes at a price: protecting your digital X-ray image from others who might wish to see it is virtually impossible. Every doctor who looks at the image could make a copy—for free, instantly and with perfect fidelity—and send it to someone else. And the same goes for others who might have access to the image, such as your insurance company.

Critics will make claims about how we can use encryption to prevent the unauthorized use of your image, but those claims come with important caveats, and they are dangerous if pursued to their ultimate conclusion. In summary, the upside of a digital X-ray image is how easy it makes it to get help; the downside is how hard it is to protect digital information.

But the analysis doesn’t end there. The benefits of your digital X-ray image go beyond just you. Imagine a huge collection of digital X-ray images, all labeled with diagnoses. We might use computers to search through them and get machines to learn what to look for. And these systems, because of the magic of zero marginal cost, can eventually provide future diagnoses for free. This is exactly what we want to happen, but how rapidly we get there and who controls the results will depend on who has access to digital X-ray images.

If we made all healthcare information public, we would dramatically accelerate innovation in diagnosing and treating diseases. At present, only well-funded pharma companies and a few university research projects are able to develop new medical insights and drugs, since only they have the money required to get sufficient numbers of patients to participate in research. Many scientists therefore wind up joining big pharma companies, so the results of their work are protected by patents. Even at universities, the research agenda tends to be tightly controlled, and access to information is seen as a competitive advantage. While I understand that we have a lot of work to do to create a world in which the sharing of health information is widely accepted and has limited downsides, this is what we should be aiming for.

You might wonder why I keep asserting that assuring privacy is impossible. After all, don’t we have encryption? Well, there are several problems that encryption can’t solve. The first is that the cryptographic keys used for encryption and decryption are just digital information themselves, so keeping them secure is another instance of the original problem. Even generating a key on your own machine offers limited protection, unless you are willing to risk that the data you’re protecting will be lost forever if you lose the key. As a result, most systems include some kind of cloud-based backup, making it possible that someone will access your data, either through technical interception or by tricking a human being into unwittingly participating in a security breach. If you want a sense of how hard this problem is, consider the millions of dollars in cryptocurrency that can no longer be accessed by people who lost their keys or had them taken over through some form of attack. The few cryptocurrency exchanges that have a decent track record have invested massively in security procedures, personnel screening and operational secrecy.
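A few lines of code make the first problem concrete. Using the widely available `cryptography` package, encrypting a file is easy, but the key that results is itself just a short piece of digital information that now has to be stored and protected somewhere. This is a minimal sketch, not a security recommendation.

```python
# Encrypting data just moves the secret: now the *key* is what you must protect.
from cryptography.fernet import Fernet   # third-party package: pip install cryptography

key = Fernet.generate_key()              # a short string of bytes — itself just digital information
ciphertext = Fernet(key).encrypt(b"my digital X-ray image")

# Lose the key and the data is gone forever; back the key up to the cloud and whoever
# obtains the backup (or tricks a human into handing it over) can decrypt everything.
print(Fernet(key).decrypt(ciphertext))   # b'my digital X-ray image'
```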

The second problem is known as ‘endpoint security’. The doctor you’re sending your X-ray to for a second opinion might have a program on their computer that can access anything shown on the screen. In order to view your X-ray, the doctor has to decrypt and display it, so this program will have access to the image. Avoiding such a scenario would require us to lock down all computing devices, but that would mean preventing end users from installing software on them. Furthermore, even a locked-down endpoint is still subject to the “analog hole”: someone could simply take a picture of the screen, which itself could then be shared.

Locked-down computing devices not only reduce informational freedom and constrict innovation, they also pose a huge threat to the knowledge loop and to democracy. Handing over control of what you can compute and who you can exchange information with would essentially amount to submitting to a dictatorial system. In mobile computing we’re already heading in this direction, partly on the pretext of a need to protect privacy. Apple uses this as an argument to explain why the only way to install apps on an iPhone is through the App Store, which they control. Imagine this type of regime extended to all computing devices, including your laptop and cloud-based servers, and you have one way in which privacy is incompatible with technological progress. We can either have strong privacy assurance or open general-purpose computing, but we can’t have both.

Many people contend that there must be some way to preserve privacy and keep innovating, but I challenge anyone to present a coherent vision of the future where individuals control technology and privacy is meaningfully protected. Whenever you leave your house, you might be filmed, since every smartphone has a camera, and in the future we’ll see tiny cameras on tiny drones. Your gait identifies you almost as uniquely as your fingerprint, your face is probably somewhere on the Internet, and your car’s license plate is readable by any camera. You leave your DNA almost everywhere you go, and we will soon be able to sequence DNA at home for around $100. Should the government control all of these technologies? And if so, should penalties be enforced for using them to analyze someone else’s presence or movement? Many are tempted to reply yes to these questions, without thinking through the consequences of such legislation for innovation and for the relative power of the state versus citizens. For example, citizens recently used facial recognition technology to identify police officers who had removed IDs from their uniforms. Advocates of banning this technology are overly focused on surveillance and never seem to consider these kinds of “sousveillance” use cases (from the French “sous,” meaning “below”).

There is an even more profound reason why privacy is incompatible with technological progress. Entropy is a fundamental property of the universe, which means it is easier to destroy than to create. It can take hours to build a sand castle that is destroyed by a single wave washing ashore. It takes two decades of care for a human to grow up and a single bullet to end their life. Because of this inherent asymmetry, technological progress increases our ability to destroy more quickly than our ability to create. Today, it still takes twenty years for a human to grow, and yet modern weapons can kill thousands and even millions of people in an instant. So as we make technological progress, we must eventually insist on less privacy in order to protect society. Imagine a future in which anyone can create a biological weapon in their basement laboratory—for example, an even more deadly version of the COVID-19 virus. After-the-crime law enforcement becomes meaningless in such a world.

If we can’t protect privacy without passing control of technology into the hands of a few, we should embrace a post-privacy world. We should work to protect people and their freedom, rather than data and privacy. We should allow more information to become public, while strengthening individual freedom. Much information is already disclosed through hacks and data breaches, and many people voluntarily share private information on blogs and social media (McCandless, 2020). The economic freedom generated by the introduction of UBI will play a key role here, because much of the fear of the disclosure of private information results from potential economic consequences. For instance, if you are worried that you might lose your job if your employer finds out that you wrote a blog post about your struggles with depression, you are much less likely to share, a situation that, repeated across many people, helps to keep the topic of depression taboo. There are, of course, countries where the leaking of private information, such as sexual orientation or political organizing, can have deadly consequences. To be able to achieve the kind of post-privacy world I envision here, democracy with humanist values is an essential precondition.

If a post-privacy world seems impossible or terrifying, it is worth remembering that privacy is a modern, urban construct. Although the US Constitution protects certain specific rights, it does not recognize a generalized right to privacy. For thousands of years prior to the eighteenth century, most people had no concept directly corresponding to our modern notion of privacy. Many of the functions of everyday life used to take place much more openly than they do today. And privacy still varies greatly among cultures: for example, many Westerners are shocked when they first experience the openness of traditional Chinese public restrooms (Sasha, 2013). All over the world, people in small villages live with less privacy than is common in big cities. You can either regard the lack of privacy in a village as oppressive, or you can see a close-knit community as a real benefit, for instance when your neighbor notices that you are sick because you haven’t left the house and offers to go shopping for you.

“But what about my bank account?” you might ask. “If my account number was public, wouldn’t it be easier for criminals to take my money?” This is why we need to construct systems such as Apple Pay and Android Pay that require additional authentication to authorize payments. Two-factor authentication systems will become much more common in the future, and we will increasingly rely on systems such as Sift, which assesses in real time the likelihood that a transaction is fraudulent. Finally, as the Bitcoin blockchain shows, it is possible to have a public ledger that anyone can inspect, as long as the transactions on it are protected by ‘private keys’ which allow only the owner of a Bitcoin address to initiate transactions.
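The principle behind a fully public ledger with private control is the digital signature: anyone can read and verify transactions, but only the holder of the private key can authorize one. Here is a minimal sketch using the `cryptography` package; it shows the underlying idea, not how Bitcoin is actually implemented.

```python
# Public ledger, private control: anyone can verify a transaction, only the key holder can sign.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

owner_key = Ed25519PrivateKey.generate()   # stays secret with the account owner
public_key = owner_key.public_key()        # can be published alongside the ledger

transaction = b"pay 5 units from address ABC to address XYZ"
signature = owner_key.sign(transaction)    # only the owner can produce this

# Anyone auditing the public ledger can check the signature...
public_key.verify(signature, transaction)  # passes silently

# ...and an altered or forged transaction is rejected.
try:
    public_key.verify(signature, b"pay 500 units from address ABC to address XYZ")
except InvalidSignature:
    print("forged transaction rejected")
```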

Another area where people are nervous about privacy is health information. We worry, for instance, that employers or insurers will discriminate against us if they learn that we have a certain disease or condition. Here the economic freedom conferred by UBI would help protect us from destitution resulting from discrimination. Furthermore, by tightening the labor market, UBI would also make it harder for employers to refuse to hire certain groups of people in the first place. We could also enact laws that require transparency from employers to more easily detect discrimination (note that transparency is often difficult to require today because it conflicts with privacy).

Observers such as Christopher Poole, the founder of the online message board 4Chan, have worried that in the absence of privacy, individuals wouldn’t be able to engage online as freely. Privacy, they think, helps people feel comfortable assuming multiple online identities that may depart dramatically from their “real life” selves. I think, in contrast, that by keeping our online selves separate, we pay a price in the form of anxiety, neurosis and other psychological ailments. It is healthier to be transparent than to hide behind veils of privacy. Emotional health derives from the integration of our different interests into an authentic multi-dimensional personality, rather than a fragmentation of the self. This view is supported by studies of how online self-presentation impacts mental health, where inauthentic presentations were associated with higher levels of anxiety and neurosis (Twomey & O’Reilly, 2017).

Many who argue against a post-privacy approach point out that oppressive governments can use information against their citizens. Without a doubt, preserving democracy and the rule of law is essential if we want to achieve a high degree of informational freedom, and this is addressed explicitly in Part Five. Conversely, however, more public information makes dictatorial takeovers considerably harder. For instance, it is much clearer who is benefiting from political change if tax records are in the public domain.
