With digital technology so fundamentally expanding what we are able to do, we must establish some basic principles if we are to avoid misinterpreting current trends and phenomena. These principles will allow us to truly explore this new ‘space of the possible’ and the benefits that it might bring, instead of limiting and bending the technology to fit our existing economic and social systems.
What follows is an attempt to establish a firm foundation for how we might build a future, grounding it in a clear set of values. I start with a brief definition of knowledge, a term I use extensively and in a way that is somewhat different from common usage. I then explain the relationship between optimism and knowledge, as well as the importance of choices in shaping our future. This is followed by a discussion of why the existence of knowledge provides an objective basis for humanism, which sets it apart from other religious and philosophical narratives. Much of my thinking in this area has been influenced by the writing of David Deutsch, and in particular his book The Beginning of Infinity, which explores the history, philosophy and power of explanations (Deutsch, 2011).
I will then provide a definition of scarcity based directly on human needs rather than on money and prices, using this definition to show how technology has shifted scarcity throughout history, leading to dramatic changes in how we live. From there, I lay out a plan of attack for the rest of the book.
Humanity is the only species on Earth to have developed knowledge. I will make the term ‘knowledge’ increasingly precise as we go along, but for now I will simply say that we are the only species that is able to read and write, and that this ability in turn has allowed us to create increasingly powerful technology. Technological advance has the effect of broadening the ‘space of the possible’: for instance, with the invention of the airplane, human flight became a reality. When the ‘space of the possible’ is broadened, it brings with it both good and bad capabilities. This duality of technology has been with us since we learned to start a fire, the very first human technology. With this discovery, it became possible to warm ourselves and cook, but also to burn down forests and enemy villages. Today, the Internet broadens free access to learning, but it can also spread hate and lies on a global scale.
And yet there is something special about our time: we are experiencing a technological non-linearity, in which the ‘space of the possible’ expands dramatically, thus rendering predictions based on extrapolation useless. The current non-linearity arises from the extraordinary power of digital technology, which far exceeds anything that was possible with industrial machinery, due to two unique characteristics. Digital technology delivers universality of computation (it can potentially solve any solvable problem) at zero marginal cost (extra copies can be produced for free).
To understand what is happening, we therefore need to zoom out in time. Humanity has previously encountered two similar non-linearities. The first occurred roughly ten thousand years ago with the invention of agriculture, which ended the Forager Age and brought us into the Agrarian Age. The second started with the Enlightenment about four hundred years ago, which helped usher in the Industrial Age.
Consider foragers one hundred thousand years ago, trying to predict what society would look like after the invention of agriculture. Even something that seems as trivially obvious to us as living in buildings would be hard to imagine from the viewpoint of migratory tribes. Similarly, much of what we have today—from modern medicine to computer technology—would resemble magic to those living as recently as the mid-twentieth century. Not simply the existence of smartphones, but also the widespread availability and affordability of such powerful technology, would have been hard to foresee.
The World After Capital has two goals. The first is to establish that we are currently experiencing a third period of globally transformative, non-linear change. The key argument is that each time the ‘space of the possible’ expands dramatically, the defining constraint for humanity shifts—meaning the allocation problem that most fundamentally needs to be solved in order to meet humanity’s needs changes. Specifically, the invention of agriculture shifted scarcity from food to land, and industrialization shifted scarcity from land to capital (which throughout The World After Capital refers to physical capital, such as machines and buildings, unless otherwise noted). Digital technology is now shifting scarcity from capital to attention.
Capital is no longer scarce in some parts of the world, and it is rapidly becoming less scarce everywhere. We should consider this the great success of capitalism. But markets, which were the crucial allocation mechanism for capital, will not solve the scarcity of attention. We are bad at allocating attention, both individually and collectively. For example, how much attention do you pay to your friends and family, or to the existential question of the meaning of your life? And how much attention are we paying as humanity to the great challenges and opportunities of our time, such as the climate crisis and space travel? Markets are not able to help us better allocate attention because prices do not, and cannot, exist for many of the issues that we should be paying attention to. Consider paying attention to finding your purpose in life: there is no supply and demand that will form a ‘purpose price’ for an individual; it’s ultimately up to you to allocate enough attention to this existential question.
My second goal in writing The World After Capital is to propose an approach that will help us overcome the limitations and remedy the shortcomings of market-based capitalism, in order to facilitate a smooth transition from the Industrial Age (in which the key scarcity is capital) to the Knowledge Age (in which the key scarcity is attention). Getting this right will be critical for humanity, as the two previous transitions were marked by massive turmoil and upheaval. We are already seeing signs of increasing conflict within societies and among belief systems across the world, fueling a rise of populist and nationalist leaders, including Donald Trump in the US.
How should we approach this third transition? What actions should society take now, when the non-linearity we are facing prevents us from being able to make accurate predictions about the future? We need to enact policies that allow for gradual social and economic change. The alternative is that we artificially suppress these changes, only for them to explode eventually. In particular, I will argue that we should smooth the transition to the Knowledge Age by expanding three powerful individual freedoms:
Economic freedom: instituting a universal basic income.
Informational freedom: broadening access to information and computation.
Psychological freedom: practicing and encouraging mindfulness.
Increasing these three freedoms will make attention less scarce. Economic freedom will unlock the time that we currently spend in jobs that can and should be automated. Informational freedom will accelerate the creation and distribution of knowledge. And psychological freedom will enable rationality in a world in which we are overloaded with information. Each of these freedoms is important in its own right, but they are also mutually reinforcing.
One crucial goal in reducing the scarcity of attention is to improve the functioning of the ‘knowledge loop’, which is the source of all knowledge and which consists of learning, creating and sharing. Producing more knowledge is essential to human progress. The history of humanity is littered with failed civilizations that didn’t produce enough knowledge to overcome the challenges facing them.
To achieve collective progress through increased individual freedoms, we must establish a set of values that include critical inquiry, democracy and responsibility. These values ensure that the benefits of the knowledge loop accrue broadly to humanity and extend to other species. They are central to a renewed humanism, which in turn has an objective basis in the existence and power of human knowledge. Reasserting humanism is especially critical at a time when we are coming close to creating ‘transhumans’ through genetic engineering and augmentation, as well as ‘neohumans’ through artificial intelligence.
The World After Capital argues that only this combination of increased freedoms and strong humanist values will allow us to safely navigate the transition from the Industrial Age to the Knowledge Age. Though I am profoundly optimistic about the ultimate potential for human progress, I am pessimistic about how we will get there. We seem intent on clinging to the Industrial Age at all costs, which increases the likelihood of violent change. My hope is that in writing this book I can in some small way help to move us forward peacefully.
Knowledge, as I use this term, is the information that humanity has recorded in a medium and improved over time. There are two crucial parts to this definition. The first is “recorded in a medium,” which allows information to be shared across time and space. The second is “improved over time,” which separates knowledge from mere information.
A conversation that I had years ago but didn’t record cannot be knowledge in my sense—it isn’t accessible to anyone who wasn’t there when it happened, and even my own recollection of it will fade. However, if I write down an insight from that conversation and publish it on my blog, I have potentially contributed to human knowledge. The blog post is available to others across space and time, and some blog posts will turn out to be important contributions to human knowledge. As another example, the DNA in our cells isn’t knowledge by my definition, whereas a recorded genome sequence can be maintained, shared and analyzed. Gene sequences that turn out to be medically significant, such as the BRCA mutation that increases the risk of breast cancer, become part of human knowledge.
My definition of knowledge is intentionally broad, and includes not just technical and scientific knowledge but art, music and literature. But it excludes anything that is either ephemeral or not subject to improvement. Modern computers, for example, produce vast amounts of recorded information that is never analyzed or integrated into any process of progressive improvement. The reasons for this definition of knowledge will become clear as I use the term in the following sections and throughout the book.
Billions of people all over the world carry around smartphones, powerful computers that are connected to a global network (the Internet). We often spend many hours a day on these devices, whether playing games or carrying out work. And yet despite the growing ubiquity of digital technology, people often find it difficult to understand what exactly makes it so distinctively powerful. Some have even derided digital technology, pointing to services such as Twitter and arguing that they are inconsequential when compared to, say, the invention of vaccines.
It is nonetheless becoming increasingly difficult to ignore digital technology’s disruptiveness. For example, while many previously long-established businesses are struggling, including newspapers and retailers, digital technology companies such as Facebook, Apple, Amazon, Netflix and Google are now among the world’s most highly valued (“List of public corporations,” 2020).
Digital technology turns out to possess two unique characteristics that explain why it dramatically expands the ‘space of the possible’ for humanity, going far beyond anything that was previously possible. These are zero marginal cost and the universality of computation.
Once a piece of information exists on the Internet, it can be accessed from anywhere on the network for no additional cost. And as more and more people around the world are connected to the Internet, ‘anywhere on the network’ is increasingly coming to mean ‘anywhere in the world’. The servers are already running, as are the network connections and the end-user devices. Making one extra digital copy of the information and delivering it across the network therefore costs nothing. In the language of economics, the ‘marginal cost’ of a digital copy is zero. That does not mean that people won’t try to charge you for this information—in many cases they will. But that's a matter of price, not of cost.
Zero marginal cost is radically different from anything that has come before it in the analog world, and it makes some pretty amazing things possible. To illustrate this, imagine that you own a pizzeria. You pay rent for your store and your equipment, and you pay salaries for your staff and yourself. These are so-called ‘fixed costs,’ and they don’t change with the number of pizzas you bake. ‘Variable costs,’ on the other hand, depend on the number of pizzas you make. For a pizzeria, these will include the cost of the water, the flour, any other ingredients you use, any additional workers you need to hire, and the energy you need to heat your oven. If you make more pizzas, your variable costs go up, and if you make fewer pizzas they go down.
So what is marginal cost? Well, let’s say you are making one hundred pizzas every day: the marginal cost is the additional cost of making one more pizza. Assuming the oven is already hot and has space in it, and your employees aren’t fully occupied, it is the cost of the ingredients, which is likely relatively low. If the oven had already cooled, then the marginal cost of the additional pizza would include the energy cost required for reheating the oven and might be quite high.
From a business perspective, you would want to make that additional pizza as long as you could sell it for more than its marginal cost. If you had already covered your fixed costs from the previous pizzas, every cent above marginal cost for the additional pizza would be profit. Marginal cost also matters from a social perspective. As long as a customer is willing to pay more than the marginal cost for that pizza, everyone is potentially better off—you get an extra contribution towards your fixed costs or your profits, and your customer gets to eat a pizza they wanted (important note: I say “potentially better off” for a reason: people sometimes want things that might not actually be good for them, such as someone suffering from obesity wanting to eat a pizza).
Now let’s consider what happens as marginal cost falls from a high level. Imagine that your key ingredient is an exceedingly expensive truffle, making the marginal cost of each of your pizzas $1,000. You clearly wouldn’t sell many pizzas, so you might switch to cheaper ingredients, reducing your marginal cost to the point where more customers are willing to pay above it and your sales increase. And as you bring down the marginal cost further through additional process and product improvements, you would start to sell even more pizzas.
Now imagine that through a magical new invention you could make additional tasty pizzas at close to zero marginal cost (say one cent per additional pizza) and ship them instantaneously to anywhere in the world. You would then be able to sell an exceedingly large number of pizzas. If you charged just two cents per pizza, you would be making one cent of profit for every additional pizza you sold. At such low marginal cost you would probably quickly gain a monopoly on the global pizza market (more on this later). Anyone in the world who was hungry and could afford at least one cent might buy one of your pizzas. The best price of your pizza from a societal point of view would be one cent (your marginal cost): the hungry would be fed, and you would cover your marginal cost. But as a monopolist, that is unlikely to be what you would do. Instead, you would probably engage in all sorts of problematic behavior aimed at increasing profits, such as charging more than marginal cost, trying to prevent competitors from entering the market, and even looking to get people addicted to pizza so they will consume ever more.
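To make the arithmetic concrete, here is a minimal sketch in Python. The function and all of the quantities, including the five billion hypothetical buyers, are illustrative assumptions in the spirit of the example, not real data:

```python
# A minimal sketch of the pizza arithmetic; every number here is hypothetical.

def contribution(price, marginal_cost, units):
    """Total contribution toward fixed costs and profit from `units` additional pizzas."""
    return units * (price - marginal_cost)

# Truffle pizza: $1,000 marginal cost and, plausibly, very few willing buyers.
print(contribution(price=1_100.00, marginal_cost=1_000.00, units=10))  # 1,000.0

# Near-zero marginal cost: one cent to make, two cents charged, and a
# global market of (say) five billion buyers.
print(contribution(price=0.02, marginal_cost=0.01, units=5_000_000_000))  # ~50,000,000.0
```

Tiny margins at enormous volume: that combination is what makes near-zero marginal cost businesses behave so differently from a corner pizzeria.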
This is exactly where we currently are with digital technology. We can “feed the world” with information: that additional YouTube video view, additional access to Wikipedia, or additional traffic report from Waze all have zero marginal cost. And just as in the case of the hypothetical zero marginal cost pizza, we are seeing the emergence of digital monopolies, along with all the problems that this entails (see Part Four on ‘Informational Freedom’ for a proposed remedy).
We are not used to zero marginal cost: most of our existing economics depends on the assumption that marginal costs are greater than zero. You can think of zero marginal cost as an economic singularity similar to dividing by zero in math—as you approach it, strange things begin to happen. In addition to digital near-monopolies, we are already observing power-law distributions of income and wealth (see Part Three), where small variations result in hugely different outcomes. Furthermore, we are now rapidly approaching this zero marginal cost singularity in many other primarily information-based industries, including finance and education. In summary, the first characteristic of digital technology that dramatically expands the space of the possible is zero marginal cost. This can result in digital monopolies, but also has the potential to grant all of humanity access to the world’s knowledge.
Zero marginal cost is only one property of digital technology that dramatically expands the space of the possible; the second is in some ways even more amazing.
Computers are universal machines. I use this term in a precise sense: anything that can be computed in the universe can in principle be computed by the kind of machine that we already have, given enough memory and time. We have known this since Alan Turing’s groundbreaking work on computation in the middle of the last century. He invented an abstract version of a computer that we now call a Turing machine, before coming up with a proof to show that this simple machine could compute anything that is computable (Mullins, 2012; “Church–Turing thesis,” 2020).
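To make the notion of a universal machine concrete, here is a minimal Turing machine simulator in Python. The rule format and the sample program, which increments a binary number, are my own illustrative choices rather than anything from Turing’s work; real Turing machines are defined more formally, but the spirit is the same:

```python
# A toy Turing machine simulator: a table of rules, a tape, a head, and a state.

def run_turing_machine(rules, tape, state, head, blank="_", max_steps=10_000):
    """rules maps (state, symbol) to (new_symbol, move, new_state); move is -1, 0 or +1."""
    cells = dict(enumerate(tape))            # a sparse, unbounded tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells))

# A tiny program: increment a binary number (the head starts on the rightmost bit).
increment = {
    ("carry", "1"): ("0", -1, "carry"),      # 1 plus a carry is 0; the carry ripples left
    ("carry", "0"): ("1",  0, "halt"),       # the carry is absorbed
    ("carry", "_"): ("1",  0, "halt"),       # ran off the left edge: new leading 1
}

print(run_turing_machine(increment, "1011", state="carry", head=3))  # prints 1100
```

The machine itself never changes; only the rule table does. That separation of a fixed, simple mechanism from an interchangeable program is what universality means in practice.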
By “computation,” I mean any process that takes information inputs, executes a series of processing steps, and produces information outputs. That is—for better or worse—also much of what a human brain does: it receives inputs via nerves, carries out some internal processing and produces outputs. In principle, a digital machine can accomplish every computation that a human brain can. Those brain computations range from something as simple and everyday as recognizing someone’s face (input: an image; output: a name) to something as complicated as diagnosing a disease (inputs: symptoms and test results; output: a differential diagnosis).
This ‘in principle’ qualification will turn out to be significant only if quantum effects matter for the functioning of the brain, meaning effects that require quantum phenomena such as entanglement and the superposition of states. This is a hotly debated topic (Jedlicka, 2017). Quantum effects do not change what can be computed in principle, as even a Turing machine can theoretically simulate a quantum effect—but it would take an impractically long time, potentially millions of years, to do so (Timpson, 2004). If quantum effects are important in the brain, we may need further progress in quantum computing to replicate some of the brain’s computational capabilities. However, I believe that quantum effects are unlikely to matter for the bulk of computations carried out by the human brain—that is, if they matter at all. We may, of course, one day discover something new about physical reality that will change our view of what is computable, but so far this hasn’t happened.
For a long time, this property of universality didn’t matter much because computers were pretty dumb compared to humans. This was frustrating to computer scientists who since Turing had believed that it should be possible to build an intelligent machine, but for decades couldn’t get it to work. Even something that humans find really simple, such as recognizing faces, had computers stumped. Now, however, we have computers that can recognize faces, and their performance at doing so is improving rapidly.
An analogy here is the human discovery of heavier-than-air flight. We knew for a long time that it must be possible—after all, birds are heavier than air and they can fly—but it took until 1903, when the Wright brothers built the first successful airplane, for us to figure out how to do it (“Wright Brothers,” 2020). Once they and several other people had figured it out, progress was rapid—we went from not knowing how to fly to crossing the Atlantic in passenger jet planes in fifty-five years: the British Overseas Airways Corporation’s first transatlantic jet passenger flight was in 1958 (“British Overseas Airways Corporation,” 2020). If you plot this on a graph, you see a perfect example of a non-linearity. We didn’t get gradually better at flying—we couldn’t do it at all, and then suddenly we could do it very well.
Digital technology is similar. A series of breakthroughs have taken us from having essentially no machine intelligence to a situation where machines can outperform humans on many different tasks, including reading handwriting and recognizing faces (Neuroscience News, 2018; Phillips et al., 2018). The rate of machines’ progress in learning how to drive cars is another great example of the non-linearity of improvement. The Defense Advanced Research Projects Agency (DARPA) held its first so-called “Grand Challenge” for self-driving cars in 2004. At the time they picked a 150-mile-long closed course in the Mojave Desert, and no car got further than seven miles (less than 5 per cent of the course) before getting stuck. By 2012, less than a decade later, Google’s self-driving cars had driven over 300,000 miles on public roads, with traffic (Urmson, 2012).
Some people may object that reading handwriting, recognizing faces, or driving a car is not what we mean by ‘intelligence’, but this just points out that we don’t have a good definition of it. After all, if you had a pet dog that could perform any of these tasks, let alone all three, you would call it an ‘intelligent’ dog.
Other people point out that humans also have creativity and that these machines won’t be creative even if we grant them some form of intelligence. However, this amounts to arguing that creativity is something other than computation. The word implies ‘something from nothing’ and outputs without inputs, but that is not the nature of human creativity. After all, musicians create new music after hearing lots of music, engineers create new machines after seeing existing ones, and so on.
There is now evidence that at least some types of creativity can be recreated simply through computation. In 2016, Google achieved a breakthrough in machine intelligence when their AlphaGo program beat the South Korean Go grandmaster Lee Sedol by four games to one (Borowiec, 2017). Until that point, progress with game-playing software had been comparatively slow and the best programs were unable to beat strong club players, let alone grandmasters. The number of possible plays in Go is extremely large, far exceeding that of chess. This means that searching through possible moves and counter-moves from a current position, which is the approach historically used by chess computers, cannot be used in Go—instead, candidate moves need to be conjectured. Put differently, playing Go involves creativity.
The approach used for the AlphaGo program started out by training a neural network on games previously played by humans. Once the network was good enough, it was improved further by playing against itself. There has already been progress in applying these and related techniques, often referred to as ‘generative adversarial networks’ (GANs), to the composition of music and the creation of designs. Even more surprisingly, it has been shown that machines can learn to be creative not just by studying prior human games or designs, but by creating their own, based on rules. Each of AlphaGo’s two successors, AlphaGo Zero and AlphaZero, started out knowing only the rules and learned from playing games against itself (“AlphaZero,” 2020). This approach will allow machines to be creative in areas where there is limited or no prior human progress.
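To give a feel for how a machine can learn a game starting from nothing but its rules, here is a deliberately tiny sketch: tabular learning through self-play on the game of Nim (players alternate taking one to three stones; whoever takes the last stone wins). This is my own toy illustration, vastly simpler than AlphaZero, which combines deep neural networks with tree search, but the principle of improving by playing against oneself is the same; all names and parameters are assumptions:

```python
# A toy self-play learner for Nim: no human games, only the rules and a value table.
import random
from collections import defaultdict

Q = defaultdict(float)        # (stones_left, move) -> estimated value for the player moving
ALPHA, EPSILON = 0.1, 0.2     # learning rate and exploration rate

def best_move(stones):
    return max(range(1, min(3, stones) + 1), key=lambda m: Q[(stones, m)])

for _ in range(50_000):       # many games of self-play
    stones, history = 21, []
    while stones > 0:
        if random.random() < EPSILON:
            move = random.randint(1, min(3, stones))   # occasionally explore
        else:
            move = best_move(stones)                   # otherwise exploit what was learned
        history.append((stones, move))
        stones -= move
    reward = 1.0              # the player who took the last stone wins
    for state_move in reversed(history):
        Q[state_move] += ALPHA * (reward - Q[state_move])
        reward = -reward      # flip perspective for the other player's moves

print([best_move(s) for s in range(1, 9)])
# For winning positions, the learned moves tend toward taking (stones % 4) stones.
```

After enough games, the learned policy tends to rediscover the known optimal strategy of leaving the opponent a multiple of four stones, without ever having seen a human play.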
While much of what the brain does is computation, including many tasks that we identify as creative, there is one function of the brain that may never be accessible to digital machines: having ‘qualia.’ This is a term from philosophy that refers to our subjective experience, such as what it “feels like” to be cold (or hot), to touch an object, or to be stressed or amazed. For example, when a digital thermostat displays the room temperature, we do not assume that its internal state has anything remotely resembling our own subjective sensation. The lack of qualia is obvious in this example, but we assume that it extends to much more complex situations, such as a self-driving car taking a series of turns on a winding highway. We would expect a human driver to experience a sensation of thrill or elation, but not the car. This lack of qualia in machines may seem like an aside for the moment, but will turn out to be an important component of where humans might direct their attention in the Knowledge Age.
As impressive as zero marginal cost and universality are on their own, in combination they are truly magical. To take one example, we are making good progress in the development of a computer program that will be able to diagnose disease from a patient’s symptoms in a series of steps, including ordering tests and interpreting their results (Parkin, 2020). Though we might have expected this to happen at some point based on the principle of universality, we are making tangible progress and should accomplish this in a matter of decades, if not sooner. At that point, thanks to zero marginal cost, we will be able to provide low-cost diagnosis to anyone in the world. Let that sink in slowly: free medical diagnosis for all humans will soon be in the space of the possible.
The universality of computation at zero marginal cost is unlike anything we have had with prior technologies. Being able to make all the world’s information and knowledge accessible to all of humanity was never before possible, nor were intelligent machines. Now we have both. This represents at least as dramatic and non-linear an increase in the ‘space of the possible’ for humanity as agriculture and industry did before, and each of those developments ushered in an entirely different age. We will be able to think better about what this implies for the current transition and the next age if we first put some foundations in place.
As a venture capitalist, I’m often asked: “What’s the next big thing?” People tend to ask this when they’re looking for a trend in technology, expecting me to talk to them about robotics or virtual reality. But I think that’s a boring interpretation of the question. These trends come and go as part of hype cycles that represent the waxing and waning of media interest in a particular technology. Instead I answer, “Oh, nothing much—just the end of the Industrial Age.” That momentous change is the subject of this book.
The World After Capital is unabashedly about some truly big subjects. In order to tackle why the Industrial Age is ending and what is coming next, I will examine such things as the nature of technology and what it means to be human. It might seem a wildly ambitious thesis, but I argue that we are facing a transition as profound as the one which took humanity from the Agrarian Age to the Industrial Age, so nothing less will do.
The current transition has been made possible by the advent of digital technology, so it is essential that we understand the nature of this technology and how it differs from what preceded it. It is also essential that we examine the philosophical foundations of what we want to accomplish—after all, we have the opportunity to decide what will follow the Industrial Age. In The World After Capital I argue that we are now entering the Knowledge Age, and that in order to get there we must focus on the allocation of attention rather than capital.
Markets fail at allocating attention because prices cannot exist for directing our attention to problems and opportunities most crucial for the survival and thriving of humanity. The climate crisis, for instance, is both more severe and more imminent than most people realize, and it is a direct result of our failure to pay attention. How quickly we address this crisis will to a great extent determine the shape of the current transition. If we do not make drastic changes quickly, getting to the next age will be even more painful than the transition to the Industrial Age, which started in the eighteenth century, involved numerous violent revolutions, and didn’t conclude until the end of the Second World War.
The transition from the Industrial Age is already underway, and has caused massive disruption and uncertainty. Many people are fearful of change, and react by supporting populist politicians who promote the simplistic message that we should return to the past. This is happening all over the world. We saw it with the vote in the 2016 UK referendum to leave the European Union, and with the election of Donald Trump as President of the United States in the same year. I started writing The World After Capital before both of those events, but they underline the importance of a future-oriented narrative that shows a path forward for humanity. Going back is not a viable option, and never has been. We did not continue foraging for food after the invention of agriculture, nor did we remain farmers after the invention of industry (farming is still important, of course, but it is carried out by a tiny percentage of the population). Each of these transitions required us to find new sources of purpose. As we leave the Industrial Age behind, our purpose can no longer be derived from having a job or from an ever-growing consumption of material goods. Instead, we need to find a purpose that is compatible with the Knowledge Age. I feel incredibly fortunate to have found my purpose in advancing innovation through investing in startups, as well as in examining why this transition is happening now and suggesting how we might go about it.
In a strange and wonderful way, much of what I have done in my life so far has brought me to this point. As a teenager in my native Germany in the early 1980s, I fell in love with computers. I started writing software for companies and then studied economics and computer science as an undergraduate at Harvard, writing my senior thesis on the impact of computerized trading on stock prices. After graduating I worked as a consultant and experienced the impact of information systems on the automotive, airline and electric utility industries. As a doctoral student at MIT, I wrote my dissertation on the impact of information technology on the organization of companies. As an entrepreneur, I co-founded an early Internet healthcare company. And as a venture investor, I have had the good fortune to back companies that provide transformative digital technologies and services, including Etsy, MongoDB and Twilio.
You might be wondering why I would choose to write this book as a VC—after all, surely it’s a distraction from finding and managing investments in startups? However, working with startups gives me a window into the future: I get to see trends and developments before they become widely understood, and this puts me in a good position to write about what is going to happen. At the same time, writing about the future that I would like to see will help me find companies that can help bring that future about. I am writing The World After Capital because what I see compels me to do so, but I am also confident that writing it has made me a better investor.
When I started my blog over a decade ago, I called myself a “technology optimist.” I wrote:
I am excited to be living [at] a time when we are making tremendous progress on understanding aging, fighting cancer, developing clean technologies and so much more. This is not to say that I automatically assume that technology by itself will solve all our problems […]. Instead, I believe that – over time – we as a society figure out how to use technology to […] improve our standard of living. I for one am […] glad I am not living in the Middle Ages.
This book is fundamentally optimistic, which is partly a reflection of my personality. I can’t see how it would be possible to be a venture capitalist as a pessimist. You would find yourself focusing on the reasons why a particular startup would be unlikely to succeed and as a result would never make an investment.
I want to be clear about this apparent bias from the start. Optimism, however, is much more than a personal bias—it is essential for human knowledge. Acts of knowledge creation, such as inventing a new technology or writing a new song, are profoundly optimistic. They assume that problems can be solved, and that art will impact the audience (which is true even for a pessimistic song). Optimism is the attitude that progress is possible.
Progress has become a loaded term. After all, despite our technological achievements, aren’t humans also responsible for the many diseases of civilization, for the extinction of countless species, and potentially for our own demise through climate change? Without a doubt we have caused tremendous suffering throughout human history, and we are currently faced with huge problems including a global pandemic and the ongoing climate crisis. But what is the alternative to trying to tackle these?
The beauty of problems is that knowledge can help us overcome them. Consider the problem of warming ourselves in the cold. Humans invented ways of making fire, eventually documented them, and have since dramatically improved the ways in which we can produce heat. We may take the existence of knowledge for granted, but no other species has it, which means that whether another species can solve a problem depends largely on luck and circumstance. So not only is optimism essential for knowledge, but the existence of knowledge is the basis for optimism.
There is an extreme position that suggests that we would have been better off if we had never developed knowledge in the first place (Ablow, 2015). While this may sound absurd, much of religious eschatology (theology about the ‘end times’) and apocalyptic thinking is associated with this position, asserting that a grand reckoning for the sins of progress is inevitable. And while they are rare, there have even been voices welcoming the COVID-19 pandemic and the climate crisis as harbingers, if not of apocalypse, then at least of a “Great Reset.” Although there is no guarantee that all future problems will be solvable through knowledge, one thing is certain: assuming that problems cannot be solved guarantees that they will not be. Pessimism is self-defeating, and apocalyptic beliefs are self-fulfilling.
All of this is also true for digital technology, which has already brought with it a new set of problems. We will encounter many of them in this book, including the huge incentives for companies such as Facebook to capture as much attention as possible, and the conflicts that arise from exposure to content that runs counter to one’s cultural or religious beliefs. And yet digital technology also enables amazing progress, such as the potential for the diagnosis of diseases at zero marginal cost. The World After Capital is optimistic that we can solve not only the problems of digital technology, but also that we can apply digital technology in a way that results in broad progress, including the knowledge creation needed to address the climate crisis.
Believing in the potential of progress does not mean being a Pollyanna, and it is important to remember that progress is not the inevitable result of technology. Contrary to the claims made by the technology writer Kevin Kelly in his book What Technology Wants, technology doesn’t want a better world for humanity; it simply makes such a world possible.
Nor does economics ‘want’ anything: nothing in economic theory, for instance, says that a new technology cannot make people worse off. Economics gives us tools that we can use to analyze markets and design regulations to address their failures, but we still need to make choices relating to what we want markets and regulations to accomplish.
Moreover, contrary to what Karl Marx thought, history also doesn’t ‘want’ anything. There isn’t a deterministic mechanism through which conflicts between labor and capital are ultimately bound to be resolved in favor of a classless society. Nor is there, as the political economist Francis Fukuyama would have it, an “end of history”—a final social, economic and political system. History doesn’t make its own choices; it is the result of human choices, and there will be new choices to make as long as we continue to make technological progress.
It always has been our responsibility to make choices about which of the worlds made possible by new technology we want to live in. Some of these choices need to be made collectively (requiring rules or regulations), and some of them need to be made individually (requiring self-regulation). The choices we are faced with today are especially important because digital technology so dramatically increases the ‘space of the possible’ that it includes the potential for machines that possess knowledge and will eventually want to make choices of their own.
The people building or funding digital technology tend to be optimists and to believe in progress (though there are also opportunists thrown into the mix). Many of those optimists also believe in the need for regulation, while another group has a decidedly libertarian streak and would prefer governments not to be involved. For them, regulation and progress conflict. The debates between these two groups are often acrimonious, which is unfortunate, because the history of technology clearly demonstrates both the benefits of good regulation and the dangers of bad regulation. Our energy is thus better spent on figuring out the right kind of regulation, as well as engaging in the processes required to enforce and revise it.
The history of regulating automotive technology is instructive here. Much of the world currently gets around by driving cars. The car was an important technological innovation because it vastly enhanced individual mobility, but its widespread adoption and large-scale impact would have been impossible without legislation, including massive public investments. We needed to build roads and to agree on how they should be used, neither of which could have been accomplished based solely on individual choices. Roads are an example of a ‘natural monopoly’: multiple disjointed road networks or different sets of rules would be hugely problematic. Imagine what would happen if some people drove on the left side of the road and others drove on the right. Natural monopolies are situations in which markets fail and regulation is required. Social norms are another form of regulation: the car would have been less widely adopted as a mode of individual transport without changes in norms that made it acceptable for women to drive, for instance.
Not all regulation is good, of course. In fact, the earliest regulation of automotive vehicles was aimed at delaying their adoption by limiting them to walking speed. In the United Kingdom they were even required by law in their early years to be preceded by someone on foot carrying a red flag (“Red Flag Traffic Laws,” 2020). Similarly, not all regulation of digital technology will be beneficial. Much of it will initially aim to protect the status quo and to help established enterprises, including the new incumbents. The recent changes to net neutrality rules are a good example of this (Kang, 2017).
My proposals for regulation, which I will present later in the book, are aimed at encouraging innovation by giving individuals more economic freedom and better access to information. These regulations, which are choices we need to make collectively, represent a big departure from the status quo and from the programs of the established parties here in the United States and in most other countries. They aim to let us explore the space of the possible that digital technologies have created, so we can transition from the Industrial Age to the Knowledge Age.
Another set of choices has to do with how we react individually to the massive acceleration of information dissemination and knowledge creation that digital technology makes possible. These are not rules that society can impose, because they relate to our inner mental states: they are changes we need to make for ourselves. For instance, there are many people who feel so offended by content they encounter on the Internet, from videos on YouTube to comments on Twitter, that they become filled with anxiety, rage and other painful emotions, leading them to withdraw or lash out, furthering polarization and cycles of conflict. Other people become trapped in ‘filter bubbles’ that disseminate algorithmically curated information confirming their existing biases, while still others spend all their time refreshing their Instagram or Facebook feeds. Some regulation can help, as can more technology, but overcoming these problems will ultimately require us to change how we react to information.
Changing our reactions is possible through self-regulation, by which I mean training that enhances our capacity to think critically. From Stoicism in ancient Greece to Eastern religions such as Hinduism and Buddhism, humans have a long tradition of practices designed to manage our immediate emotional responses, enabling us to react to the situations we experience in insightful and responsible ways. These mindfulness practices align with what we have learned more recently about the workings of the human brain and body. If we want to be able to take full advantage of digital technology, we need to figure out how to maintain our powers of critical thinking and creativity in the face of an onslaught of information including deliberate attempts to exploit our weaknesses.
I will now provide a highly abstract account of human history that focuses on how technology has shifted scarcity over time and how those shifts have contributed to dramatic changes in human societies.
Homo sapiens emerged roughly two hundred and fifty thousand years ago. For most of the time since, humans were foragers (also referred to as hunter-gatherers). During the Forager Age, the defining scarcity was food. Tribes either found enough food in their territory, migrated further or starved.
Then, roughly ten thousand years ago, humanity came up with a series of technologies such as the planting of seeds, irrigation and the domestication of animals that together we recognize today as agriculture. These technologies shifted the scarcity from food to land in what became the Agrarian Age. A society that had enough arable land (land on which food can be grown) could meet its needs and flourish. It could, in fact, create a food surplus that allowed for the existence of groups such as artists and soldiers that were not directly involved in food production.
More recently, beginning about four hundred years ago with the Enlightenment, humanity invented a new series of technologies, including steam power, mechanical machines, chemistry, mining, and eventually technologies to produce, transmit and harness electricity. Collectively we refer to these today as the Industrial Revolution, and the age that followed as the Industrial Age. Once again, the scarcity shifted, this time away from food and towards capital, such as buildings, machinery and roads. Capital was scarce because we couldn’t meet the needs of a growing human population, including the need for calories, without building agricultural machines, producing fertilizer and constructing housing.
In each of those two prior transitions, the way humanity lived changed radically. In the transition from the Forager Age to the Agrarian Age we went from being nomadic to sedentary, from flat tribal societies to extremely hierarchical feudal societies, from promiscuity to monogamy (sort of), and from animistic religions to theistic ones. In the transition from the Agrarian Age to the Industrial Age we went from living in the country to living in the city, from large extended families to nuclear families or no family at all, from commons to private property (including private intellectual property) and from great-chain-of-being theologies to the Protestant work ethic.
What accounts for these changes? In each transition the nature of the scarcity changed in a way that made measurement of human effort more difficult, which in turn required more sophisticated ways of providing incentives to sustain the necessary level of effort.
In the Forager Age, when the key scarcity was food, the measurement and incentive problem was almost trivial: everyone in a tribe sees how much food the hunters and gatherers bring back, and it is either enough to feed everyone or not. In so-called immediate return societies (which had no storage) that is literally all there is to it. With storage the story gets slightly more complicated, but not by much. I believe that this explains many of the features of successful foraging tribal societies, including the flat hierarchy and the equality of sharing.
In the Agrarian Age, when the scarcity was land, the measurement problem got significantly harder: you can really only tell at harvest time (once per year in many regions of the world) how well-off a society will be. Again, I believe that this explains many of the features of successful agrarian societies, in particular the need for a lot of structure and strict rules. It is crucial to keep in mind that these societies were essentially pre-scientific, so they had to find out what works by trial and error. When they found a rule that seemed to work, they wanted to stick with it and codify it (much of this happened via the theistic religions).
In the Industrial Age, when the scarcity was capital, the measurement problem became even harder. How do you decide where a factory should be built and what it should produce? It might take years of process and product innovation to put physical capital together that is truly productive. I believe that this explains much of the success of the market-based model, especially when contrasted with planned economies. Effectively, the solution to the incentive problem moved from static rules to a dynamic process that allows for many experiments to take place and only a few of them to succeed.
These changes in how humanity lives were responses to an increasingly difficult measurement problem, as technological progress shifted scarcity from food to land and then from land to capital. But the transitions don’t occur deterministically; they are the result of human choice driving changes in regulation. For example, when it came to the scarcity of capital, humanity tried out radically different approaches between market-based and planned economies. As it turned out, competitive markets, combined with entrepreneurialism and the strategic deployment of state support (e.g. in the form of regulation), were better at allocating and accumulating capital. Similarly, the Agrarian Age contained vastly different societies, including the Athenian democracy, which was hugely advanced compared to much of Northern European society in the Middle Ages.
The other important point to note about the previous transitions is that they took quite some time and were incredibly violent. Agriculture emerged over the span of thousands of years, during which time agrarian societies slowly expanded, either subduing or killing foraging tribes. The transition from the Agrarian Age to the Industrial Age played out over several hundred years and involved many bloody revolutions and ultimately two world wars. At the end of the Agrarian Age, the ruling elites had gained their power from controlling land and still believed it to be the critical scarcity. For them, industry was a means of building and equipping increasingly powerful armies with tanks and battleships to ensure control over land. Even the Second World War was about land, as Hitler pursued “Lebensraum” (literally “room to live”) for his Third Reich. It was only after the Second World War that we finally left the Agrarian Age behind for good.
We now, once again, find ourselves in a transition period, because digital technology is shifting the scarcity from capital to attention. What should be clear by now is that this transition will also require dramatic changes in how humanity lives, just as the two prior transitions did. It is also likely that the transition will play itself out over several generations, instead of being accomplished quickly.
Finally, there is a historical similarity to the transition out of the Agrarian Age that explains why many governments have been focused on incremental changes. To understand it, we should first note that capital today is frequently thought of as monetary wealth or financial capital, even though it is productive capital (machines, buildings and infrastructure) that really matters. Financial capital allows for the formation of physical capital, but it does not directly add to the production of goods and services. Companies only require financial capital because they have to pay for machines, supplies and labor before they receive payment for the product or service they provide.
Just as the ruling elites at the end of the Agrarian Age came from land, the ruling elites today come from capital. They often don’t take up political roles themselves but rather have ways of influencing policy indirectly, exposing them to less personal risk. A good recent example is the role of the billionaire hedge fund manager Robert Mercer and his family in supporting groups that influenced the outcome of the US Presidential election in 2016, such as the right-wing news organization Breitbart (Gold, 2017).
What are the values that I am basing all this on, and where do they come from? In his book Sapiens, the historian Yuval Noah Harari claims that all value systems are based on equally valid subjective narratives. He denies that there is an objective basis for humanism to support a privileged position for humanity as a species, but here I will try to convince you that he is wrong (Harari, 2011). For not only is the power of knowledge a source of optimism; its very existence provides the basis for humanism. By “humanism” I mean a system of values that centers on human agency and responsibility rather than on the divine or the supernatural, and that embraces the process of critical inquiry as the central enabler of progress.
Knowledge, as I have already defined it, is the externalized information that allows humans to share insights with each other. It includes both scientific and artistic knowledge. Again, we are the only species on Earth that generates this kind of knowledge, with the ability to share it over space and time. I am able to read a book today that someone else wrote a long time ago and in a completely different part of the world.
This matters a great deal, because knowledge enables fundamentally different modes of problem solving and progress. Humans can select and combine knowledge created by other humans, allowing small changes to accrete into large bodies of work over time, which in turn provide the basis for scientific and artistic breakthroughs. Without knowledge, other species have only two methods of sharing things they have learned: communication and evolution. Communication is local and ephemeral, and evolution is extremely slow. As a result, animals and plants routinely encounter problems that they cannot solve, resulting in disease, death and even extinction. Many of these problems today are caused by humans (more on that shortly).
Knowledge has given humanity great power. We can fly in the sky, we can sail the seas, travel fast on land, build large and durable structures, and so on. The power of our knowledge is reshaping the Earth. It often does so in ways that solve one set of problems while creating an entirely new set of problems, not just for humans but for other species. This is why it is crucial that we remember what the story of Spiderman tells us: “With great power comes great responsibility.” It is because of knowledge that humans are responsible for looking after dolphins, rather than the other way round.
Progress and knowledge are inherently linked through critical inquiry: we can only make progress if we are capable of identifying some ideas as better than others. Critical inquiry is by no means linear—new ideas are not always better than old ones. Sometimes we go off in the wrong direction. Still, given enough time, a sorting takes place. For instance, we no longer believe in the geocentric view of our solar system, and only a tiny fraction of the art that has ever been created is still considered important. While this process may take decades or even centuries, it is blindingly fast compared to biological evolution.
My use of the word “better” implies the existence of universal values. All of these flow from the recognition of the power of human knowledge and the responsibility which directly attaches to that power. And the central value is the process of critical inquiry itself. We must be vigilant in pointing out flaws in existing knowledge and proposing alternatives. After all, imagine how impoverished our music would be if we had banned all new compositions after Beethoven.
We should thus seek regulation and self-regulation that supports critical inquiry, in the broad sense of processes that weed out bad ideas and help better ones to propagate. In business this often takes the form of market competition, which is why regulation that supports competitive markets is so important. Individually, critical inquiry requires us to be open to receiving feedback in the face of our deeply rooted tendency toward confirmation bias. In politics and government, critical inquiry is enabled by the democratic process.
Freedom of speech is not a value in and of itself; rather, it is a crucial enabler of critical inquiry. But we can see how some limits on free speech might flow from the same value. If you can use speech to call for violence against individuals or minority groups, you can also use it to suppress critical inquiry.
Digital technology, including a global information network and the general-purpose computing that is bringing machine intelligence, is dramatically accelerating the rate at which humanity can accumulate and share knowledge. However, these same technologies also allow targeted manipulation and propaganda on a global scale, as well as constant distraction, both of which undermine the evaluation and creation of knowledge. Digital technology thus massively increases the importance of critical inquiry, which is central to knowledge-based humanism.
Beyond critical inquiry, optimism and responsibility, other humanist values are also rooted in the existence of knowledge. One of these is solidarity. There are nearly 8 billion human beings living on Earth, which exists in an otherwise inhospitable solar system. The big problems that humanity faces, such as infectious diseases and the climate crisis, require our combined efforts and will impact all of us. We thus need to support each other, irrespective of such differences as gender, race or nationality. Whatever our superficial differences may be, we are much more like each other—because of knowledge—than we are to any other species.
Once we have established a shared commitment to the value of solidarity, we can celebrate diversity as another humanist value. In current political debates we often pit individuality against the collective as if the two conflicted. However, to modernize John Donne, no human is an island—we are all part of societies, and of humanity at large. By recognizing the importance of our common humanity, we create the basis on which we can unfold as individuals. Solidarity allows us to celebrate, rather than fear, the diversity of the human species.
Those with some familiarity with economic theory are likely to understand ‘scarcity’ in its terms. In that context, something is scarce if its price is greater than zero. By this definition, land is scarce—it costs a lot of money to buy a piece of land. And financial capital is still scarce, because even with our current low interest rates there is a price for borrowing money or raising equity.
However, there is a fundamental problem with this price-based definition of scarcity: anything can be made scarce by assigning ownership of it. Imagine for a moment that the world’s atmosphere belonged to Global Air Ltd, a company which could charge a fee to anyone who breathes air. Air would suddenly have become scarce, according to the price-based theory of scarcity. That might seem like an extreme example, and yet some people have argued that assigning ownership to the atmosphere would solve the problem of air pollution, on the grounds that it would result in the air’s owners having an economic incentive to maintain an unpolluted atmosphere.
Here I will use a different meaning of scarcity, one not based on price. I will call something scarce when there is less of it than we require to meet our needs. If people are starving because not enough food has been produced (or made available), food is scarce. Insofar as more knowledge would allow this problem to be solved, this can be thought of as technological (as opposed to economic) scarcity. The point here is that technological progress makes things less scarce. As I discuss in Part Two below, the eighteenth-century scholar Thomas Malthus (1798) was correct when he predicted that global population growth would be exponential, but his prediction that such growth would outpace growth in the food supply, resulting in ongoing shortages and mass starvation, turned out to be wrong, because technological progress resulted in exponential increases in food production. In fact, recent advances in agricultural techniques have meant that the amount of land needed for food production is now declining, even as food production is continuing to grow rapidly.
But is it possible to draw a clear distinction between needs and wants? If people are not starving but want more or different food, can food still be scarce? Modern economics equates the two, but intuitively we know that this is not the case. We need to drink water, but want to drink champagne. We need to provide our body with calories, but want to eat caviar. These examples are obviously extremes, but the point is that many different foods can be used to meet the need for calories. Desiring a particular food is a want, while getting enough calories (and other nutrients) is a need. In Part Two, I set out a list of the needs and look at our current and future ability to fulfill them.
Importantly, if something is no longer scarce, it isn’t necessarily abundant—there is an intermediate stage, which I will call ‘sufficiency’. For instance, there is sufficient land on the planet to meet everyone’s needs, but building housing and growing food still requires significant physical resources, and hence these things are not abundant. I can foresee a time when technological progress makes land and food abundant—imagine how much space we would have if we could figure out how to live on other planets. Digital information is already on a clear path to abundance: we can make copies of it and distribute them at zero marginal cost, thus meeting the information needs of everyone connected to the Internet.
With this needs-based definition of scarcity in place, we can now examine how technology has shifted the constraining scarcity for humanity over time.
In saying that capital is ‘sufficient’, I mean that there is enough of it to meet our needs. That’s what I set out to show in this part of the book. The only way to do so is by examining what those needs are, and separating them clearly from our unlimited wants. We must then consider population trends, so we can see how many humans are likely to have those needs in the future. Only then can we attempt to see if our existing capital is sufficient to meet them.
The definition of scarcity introduced in Part One is based on the idea of needs, so to argue that we are currently experiencing a shift to attention being the new scarcity requires me to demonstrate that we have sufficient capital for meeting our needs. But agreeing on what constitutes human needs is not a simple task. What follows should be seen as a step along the way. A list of needs is the type of externalized human knowledge that can be improved over time through the process of critical inquiry.
In an early draft of The World After Capital, I grouped needs into categories such as ‘biological’, ‘physical’ and ‘social’, but the boundaries between them seemed rather arbitrary. So instead I distinguish here between individual and collective needs, where the former apply to a single person and the latter are the needs of humanity. Another challenge in putting together such a list is that it is easy to confuse a need with a strategy for meeting it. For instance, eating meat is a strategy for addressing our need for calories, but humans can, of course, acquire calories from many sources.
These are the basic needs of the human body and mind, without which individual survival is impossible. A single individual has these needs even when they are completely isolated, such as if they are traveling alone in a spaceship. The first set of individual needs relates to keeping our bodies powered. These include:
Oxygen. On average, humans need about 550 liters of oxygen every day, depending on the size of our body and physical exertion. Our most common way of meeting this need is breathing air ("How Much Oxygen Does a Person Consume in a Day?," 2000). Although that may sound obvious, we have developed other solutions through technology – for example, the blood of patients struggling to breathe can be oxygenated externally.
Water. We need to ingest two or three liters of water per day to stay hydrated, depending on factors such as body size, exertion and temperature ("Water: How Much Should You Drink Every Day?," 2020). In addition to drinking water and fluids that contain it, we have other solutions for this, such as the water contained in the foods that we eat.
Calories. To power our bodies, adults need between 1,500 and 3,200 calories per day, a need we mainly meet by eating and drinking (U.S. Department of Agriculture and U.S. Department of Health and Human Services, 2015). The best way to obtain calories, however, is surprisingly poorly understood – the mix between proteins, lipids and carbohydrates is subject to debate.
Nutrients. The body cannot synthesize all the materials it requires, including certain amino acids, vitamins and minerals – these must be obtained as part of our nutrition. This is another area that is surprisingly poorly understood, meaning that the mix of nutrients we need to take in seems unsettled.
Discharge. We also need to get things out of our bodies by expelling processed food, radiating heat and exhaling carbon dioxide. Humans have made a great deal of progress in meeting our discharge needs, with solutions such as toilets and public sanitation.
The second set of individual needs relates to the operating environment for humans. From a cosmic perspective, humans have an incredibly narrow operating range. Even here on Earth we can live without technological assistance only in relatively few places. Here are some of our basic operating needs:
Temperature. Our bodies can self-regulate their temperature, but only within a limited range of environmental temperature and humidity. Humans can easily freeze to death or die of overheating (we cool our bodies through sweating, also known as ‘evaporative cooling’, which stops working when the air gets too hot and humid). We therefore often need to help our bodies with temperature regulation by controlling our environment. Common strategies to meet our temperature needs include clothing, shelter, heating and air conditioning.
Pressure. Anybody who has gone diving will be aware that our bodies do not handle increased pressure very well. The same goes for decreased pressure, which is one of the reasons why we find air travel exhausting (airplane cabins maintain pressure similar to being at the top of an eight-thousand-foot mountain).
Light. Most humans would be hard-pressed to achieve much in complete darkness. For a long time, our need for light was met mainly by sunlight, but much human ingenuity has gone into the creation of artificial light sources.
The third set of individual needs arises from how we deal with a complex and ever-changing environment. As we go through life, we all encounter challenges that we need to overcome, resulting in three fundamental individual needs:
Healing. Whenever we damage our body, it needs to heal. The human body comes equipped with extensive systems for self-healing, but beyond a certain range it needs external assistance. We have developed many solutions, which are often grouped under the term ‘healthcare’.
Learning. When we are born, we are quite incompetent – we have to learn basic skills, such as walking and using even the simplest tools. When we encounter a new situation, we have to learn how to deal with it. We group many of the strategies for meeting the need for learning under the heading ‘education’, but other solutions include experimenting to gain experience, self-study and parenting.
Meaning. As humans, we have a profound psychological need for meaning in our lives. One solution is to have a purpose. Religious belief and belonging to a community have long been a source of purpose for humans. Another key strategy comes from our interactions with other humans, including having other people acknowledge our contributions to a project, or even merely recognize our existence.
This last set of needs may strike you as being at a much higher level than the earlier ones. The idea of sorting individual needs into a hierarchy, as the psychologist Abraham Maslow famously did, is intuitively appealing, but it is misleading – all of these needs are vital. For example, Maslow put needs like calories at the bottom and needs like meaning at the top, implying that calories are more foundational than meaning. But we know from the work of Viktor Frankl and others that meaning is essential to human effort and that accessing calories requires effort. As a thought exercise, picture yourself alone in a spaceship and try to remove any of the above. You’ll soon realize that they are all equally important.
Our collective needs arise from living together in societies and sharing space and resources. Meeting them is what allows human societies to survive and advance.
Reproduction. Individuals can survive without sex, but reproduction is a need for societies as a whole. We have learned how to reproduce without sex; in the future, there may be different solutions for the continuation of a human society – whether here on Earth or elsewhere.
Allocation. Access to physical resources has to be allocated. Take a chair as an example. Only one person can comfortably sit in it at a time – when there are multiple people, we need a way of allocating the chair between them. If you are by yourself, you can sit on a chair whenever you want to – allocation is a collective need.
Motivation. This may seem like an individual need, but it acts as a collective one in the sense that societies must motivate their members to carry out important tasks and follow rules. Even the smallest and least technologically advanced societies have solutions for this problem, often in the form of rewards and punishments.
Coordination. Whenever more than a single human is involved in any activity, coordination is needed. Take a simple meeting between two people as an example. In order for it to take place, the two need to show up at the same place at the same time. We have developed many communication and governance mechanisms to address this need.
Knowledge. As I argued in earlier sections on optimism and humanism, knowledge is the central collective human need: without it, a society will encounter problems that it cannot solve. History is full of examples of societies that lacked sufficient knowledge to sustain themselves, such as the Easter Islanders and the Mayans. This is not about what any one individual has learned, but about the body of knowledge that is accessible to society as a whole. Later in this book we will examine solutions for generating more knowledge, faster.
These collective needs may strike you as abstract, but this is the result of identifying needs rather than solutions, which are much more concrete and readily recognizable. For instance, governments and laws are examples of solutions to collective needs such as allocation and coordination, as are markets and firms and, more recently, networks and platforms. In other words, many of the institutions of society exist because they help us solve a collective need.
Some things don’t meet specific needs in themselves, but instead enable different solutions. Consider energy, for example. You may well ask: isn’t it something we all need, both individually and collectively? For instance, individually we need energy to maintain the temperature of a house, and collectively we need energy to power our communications infrastructure. As these two examples show, energy itself does not meet needs—rather, it makes possible something that does. It is what I call an enabler.
Here are four foundational enablers:
Energy. For a long time, humans relied on direct sunlight as their primary energy source. Since then we have developed many ways of generating energy, including better ways of capturing sunlight. Capturing more energy and making it available in highly concentrated and easily controllable form via electricity has enabled new solutions to human needs.
Resources. In early human history, all resources were drawn directly from our natural surroundings. Later, we started growing and extracting resources using progressively more technology. Many modern solutions have been made possible by access to new kinds of resources. For instance, mobile phones, which provide new solutions to individual and collective needs, are made possible in part by esoteric raw materials, including the so-called rare-earth elements.
Transformation. Energy and resources alone are not enough. To enable most solutions, we need to figure out (and remember!) how to use the former to transform the latter. This involves chemical and physical processes. Physical capital, in the shape of machines, has been a crucial enabler of many new solutions to human needs. For instance, a knitting machine can quickly transform yarn into clothing, one of our key solutions for maintaining the human operating environment.
Transportation. The final foundational enabler is the ability to move stuff, including people. This is another area in which we have made great progress, going from human-powered to animal-powered to machine-powered transportation.
As in the case of needs, I have deliberately chosen enablers that have a high degree of abstraction. Coal-fired power plants provide energy, as do solar panels – and nuclear fusion will do the same at some point in the future. These three examples have dramatically different characteristics, but they are all energy enablers.
While I expect further changes, I believe that my current version of needs and enablers is sufficient to support my argument that there is enough productive capital in the world. To establish this in more quantitative terms, though, we need to consider the size and growth of the human population.
My first major claim is that capital, at least in the technological sense, is no longer scarce. We have sufficient productive capital to meet our needs through growing food, constructing buildings, producing clothes, and so on. To establish this, I will start by setting out a catalog of individual and collective needs. I will then examine current population trends to see what we can learn about the future growth in these needs, followed by an evaluation of our available capital and its ability to meet those needs. That entire section of The World After Capital shows that physical capital is sufficient in aggregate. It does not address questions of wealth distribution, which will be discussed later.
My second claim is that attention is now the key scarcity, meaning that our present allocation of attention is resulting in humanity’s needs not being met. To establish this I will start by pinning down more precisely what attention is, before presenting several examples of human needs that either are already no longer met, such as the need for meaning, or are at risk of not being met in the future, such as the need for calories due to the climate crisis—all because of a lack of attention. After that I will consider how much human attention is currently caught up in Industrial Age activities, and how more attention is being trapped through the dominant uses of digital technology, such as advertising-based social networks. I will also discuss why market-based capitalism cannot be used to allocate attention.
I will then make concrete suggestions for how to facilitate the transition to the next age, which I call the Knowledge Age. In keeping with the ideas about knowledge and humanism that I presented earlier, my suggestions focus on increasing freedoms as the basis for more available attention and improved allocation of that attention.
In 1798, Thomas Malthus predicted widespread famine as the human population grew exponentially, outstripping increases in humanity’s ability to grow food (Malthus, 1798). His prediction was half-right: the global population did explode at the start of the nineteenth century.
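Malthus’s logic is simple enough to sketch in a few lines of Python. The starting values and rates below are illustrative assumptions chosen only to show the shape of his argument (population compounding geometrically while food grows by a fixed increment per generation), not historical data:

```python
# Toy sketch of Malthus's model: geometric population growth versus
# arithmetic growth in the food supply. All numbers are illustrative.

population = 1.0   # billions of people (roughly the world around 1800)
food_supply = 1.0  # expressed as "billions of people who can be fed"

for generation in range(5):  # 25-year generations
    year = 1800 + generation * 25
    fed = "enough food" if food_supply >= population else "FAMINE"
    print(f"{year}: {population:4.1f}B people, food for {food_supply:.1f}B -> {fed}")
    population *= 2    # Malthus: population can double every generation
    food_supply += 1   # Malthus: food grows by a fixed increment
```

Within a few generations the doubling population overtakes the linearly growing food supply, which is the famine Malthus predicted. What he missed, as the following paragraphs show, is that food production itself grew exponentially thanks to technology.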
Since then, the human population has grown from about 1 billion to nearly 8 billion people ("World Population Clock: 7.9 Billion People", 2021). However, Malthus’s dire warnings about the consequences of this population growth have proven wrong. There has been no global-scale starvation, and most people do not live in abject poverty. In fact, the number of people living in extreme poverty around the world has declined, even though population growth has been about twice as fast as Malthus’s predicted upper limit of 1 billion people added over 25 years (Roser & Ortiz-Ospina, 2013).
What Malthus got wrong was the rate of technological progress. First, he was pessimistic about our ability to improve agricultural productivity. Since his writing, there have been huge advances in agriculture: the percentage of the global workforce employed in the sector has declined from more than 80 per cent to 33 per cent, and is falling rapidly (in the US and other advanced economies, agriculture represents 2 per cent of employment or less). In the last 50 years alone, the land required to produce a given quantity of food has declined by 68 per cent (Ritchie & Roser, 2019). The total amount of land used to produce food has still continued to grow at least until recently, but much more slowly than the population (Ausubel et al., 2013; Ewers et al., 2009).
Second, Malthus could not foresee the scientific breakthroughs that enabled the Industrial Revolution. That revolution not only powered the increase in agricultural productivity, but also gave us dramatic advances in living standards, including increased life expectancy, faster transportation and cheaper communication.
[Figure omitted. Source: World Bank, 2020a; 2020b; United Nations, 2019]
This matters, because as it turns out, population growth responds to progress. In particular, there is a strong and well-documented relationship between levels of infant mortality, living standards, and birth rates. As better medical technology reduces infant mortality and better production technology increases living standards, birth rates decline. This is not a mechanistic relationship but involves complex social adjustments, such as women entering the workforce and other forms of empowerment for women (e.g., better access to education). Max Roser and the team at the “Our World In Data” project have produced some beautiful charts that show how this effect of progress on birth rates has occurred all around the world (Roser, 2017).
So despite the extraordinary growth in global population over the last 200 years, simply assuming that it will continue into the future would be a mistake: there are strong signs that the world’s population is likely to peak. Some people claim that the question of where population peaks is crucial because they don’t think the world can sustain, say, 11 billion people. However, this argument misses a key point. The world cannot sustain its current population of nearly 8 billion people either, unless we continue to make technological progress. The ways we have managed to supply 8 billion people so far have created all sorts of new problems, such as water and air pollution and, most pressingly, the climate crisis.
In fact Malthus's predictions may yet catch up with us. For example, if we don’t rapidly address the climate crisis, we may experience large scale crop failure resulting in mass starvation. This would be an example of having insufficient technology for producing enough food, in contrast to many past examples of mass starvation, such as in India, that were the result of social and political breakdowns from internal strife or external interference disrupting food production or distribution.
Still, the key takeaway should be that needs will not continue to grow exponentially because A) population growth will slow down, and B) needs per person are limited. All the signs suggest that the global population curve is starting to decelerate, whereas the rate of technical progress is continuing to accelerate (Roser, 2019a; Roser & Ritchie, 2013). Knowing these things, we can be optimistic about progress in relationship to population growth. In other words, Malthus will ultimately turn out to have been wrong both about the rate of technological progress and the long-term rate of population growth.
I have already described why digital technology is so disruptive. We will see in more detail later how it is contributing to an acceleration of knowledge creation, and thus progress.
As implied by the title of this book, one of my fundamental claims is that there is enough capital in the world to meet everyone’s needs. That means meeting the individual needs of at least 7.9 billion people, as well as the collective needs of the societies they live in. If there is plenty of slack today, capital will no longer be the binding constraint for humanity going forward, as population growth is decelerating while technological progress is accelerating.
It is tempting to look at this in terms of financial capital, but that would be giving in to the illusion of money. Dollar bills don’t feed people and gold bars can’t be used as smartphones. The capital that fundamentally matters is productive physical capital, such as machines and buildings.
Financial capital is not irrelevant, of course – it is required for the initial construction of physical capital and to meet the ongoing needs of economic activity. If I want to build a factory or a store, I need to pay the construction workers and the suppliers of machines before I can start making money. And many businesses have ongoing expenses to pay each month before they can collect revenues from customers. When cash outflows precede cash inflows, a financing mechanism is required. To accumulate physical capital, we need to be able to accumulate financial capital.
In the history of financial capital there have been many important innovations, and the introduction of marketplace lending has been an important recent one. The allocation of financial capital to projects through markets has been enormously successful, and it is the success of the market-based approach that has given us a large enough physical capital base to meet our basic needs. I should be quick to point out, as I have done elsewhere in the book, that the market-based approach relies on plenty of governmental activity, such as pro-competition regulation and the funding of education and research.
Many recent innovations in finance, however, rather than contributing to the creation and allocation of physical capital, have had the opposite effect, instead leading to the excessive ‘financialization’ of the economy. This refers to growth in financial activities that help generate personal wealth for some but that are decoupled from, or even harm, the formation of physical capital. One example of excess financialization is companies borrowing money to buy back shares instead of investing in innovation. The derivatives and structured securities, such as collateralized debt obligations (CDOs), that powered the housing bubble are another example. This is not to say that there are no potentially legitimate uses of these tools – it is just that they have grown far beyond what is needed for physical capital formation and taken on a life of their own. This can be seen both in the increased size of the financial sector as a proportion of the overall economy and in the wealth generated by making money from money instead of from productive capital (Lahart, 2011; Lewis, 2018).
What is the role of ‘human capital’ in all of this? I find this relatively new term to be a fundamental misnomer. Humans provide labor, and machines are capital. We saw earlier that, as Malthus had predicted, there was an exponential population explosion. As a result, labor has not been a constraint on meeting our needs. That does not mean that we have not had labor shortages from time to time, but these have largely been the result of policy choices, such as restrictions on immigration or discriminatory access to education, rather than reflecting fundamentally scarce labor.
The better question to ask is: what is the role of knowledge? The answer is that advances in knowledge are essential for making capital more effective. Even more fundamentally, physical capital cannot exist in the first place without knowledge. Take a Magnetic Resonance Imaging (MRI) scanner, for example: you can’t build one without a great deal of knowledge of physics and engineering. However, in a world where everyone’s needs are taken care of, it might be possible to build the same machine without the need for financial capital, as you might not have to pay people in advance. And with enough knowledge, in the form of advanced robots, it will even be possible to build one without any ‘human capital’ – labor – at all.
In conclusion, we should recognize that the accumulation of financial capital does not in and of itself contribute to meeting our needs. Imagine a Spanish galleon full of gold caught in a storm. Although the sailors aboard had ample access to financial capital, what they really needed to survive was either more knowledge or better physical capital. For example, if they had more knowledge of the weather, they could have circumnavigated the storm. Or if they had a stronger boat, they could simply have ridden it out. If anything, the gold was a hindrance to their survival – throwing it overboard might have helped the boat escape the storm more quickly.
We will now examine whether physical capital is sufficient to meet our needs.
My claim is that capital is no longer the binding constraint on our ability to meet our individual needs. This is especially true for the developed economies, but it is increasingly true globally. Let’s start by considering the needs emanating from keeping our bodies powered (see the Appendix for additional supporting information).
Oxygen: There’s plenty of air for us to breathe; the key challenge is to make sure it is clean and safely breathable. China and India are both currently struggling with this, a consequence of rapid development reliant on outdated energy sources. What is needed here are improvements to capital, such as switching from internal combustion engine cars to electric ones.
Water: There’s plenty of water for everyone in the world to drink (the oceans are full of it). Though there are distribution and access problems, including in the United States (for example, the crisis of polluted drinking water in Flint, Michigan), physical capital is not a binding constraint. We are even able to build new desalination plants in record time.
Calories: We have made dramatic progress in farming: as a result of increased productivity, the rate of increase in the amount of land used globally to produce food has plummeted, and the amount of land used worldwide for agriculture may have already peaked (Ramankutty et al., 2018; Ausubel et al., 2013). There have been significant recent breakthroughs in vertical farming, the practice of growing plants under controlled conditions, and in automated farming. For instance, one of the world’s largest vertical farms operates in Jersey City, and the Japanese indoor farming company Spread’s automated facility can produce 30,000 heads of lettuce per day (Harding, 2020).
Nutrients: This is primarily a question of knowledge, as we still don’t fully understand which nutrients the body really needs to ingest in what quantities. We obtain most of them from food, but depending on our diet we may need to add some supplements. The remaining amounts tend to be small, and we can produce plenty of them already (in developed countries, entire industries have sprung up trying to convince people to buy and consume food supplements that they do not need).
Discharge: This is primarily addressed through modern sewage technology. Here too, capital is no longer a binding constraint, though its uneven distribution around the world is a problem.
Now let’s consider the needs relating to the operating environment for humans.
Temperature: The Chinese construction boom of the early 2000s illustrates how quickly we can build shelter, which, together with heating and air conditioning, is one crucial solution to our temperature needs. In the US, in the opening years of the 21st century, a construction boom was powered by artificially cheap mortgage credit. Though a lot of housing was built speculatively and remained empty, it powerfully demonstrated our construction capacity. Clothing is another strategy for meeting our temperature needs. The price of clothing has been falling in many parts of the world, including the United States. Capital is not a constraint here – indeed, we have the ability to clothe the world’s population many times over.
Pressure: Thankfully, there is little for us to do here, as we have plenty of space for humans to live in the right pressure range. This is a great example of a need that we do not consider much at all, but that would loom very large if land were to cease to be habitable and we had to go underwater or into space.
Light: We have become very good at providing light. One study shows how the hours of light provided by 60 hours of labor in the United States exploded from around 10 in 1800 to over 100,000 by 1990 (Harford, 2017; Nordhaus, 1994). Since then, we have made considerable further progress with LED lighting. That progress has also come to other parts of the world, for instance in the form of off-grid, solar-powered lamps.
Finally we come to the more abstract individual needs.
Healing: We often read that healthcare consumes an increasingly large fraction of the economy, especially in the United States, but that does not imply that capital is scarce. In industrialized countries we have plenty of hospital space and doctor’s offices. But, you may ask, didn’t the COVID-19 pandemic show that we didn’t have enough ICU beds? The answer is no: countries that reacted to the virus in good time stayed well within their capacity. Overall capital is sufficient for healing. We have extensive diagnostic facilities and are able to produce large quantities of medicine.
Learning: Nor are we constrained by capital when it comes to learning. This is increasingly true not just in industrialized nations but also globally, due to the expansion of wireless networks and the increasing affordability of smartphones. We are not far away from reaching a point where we have enough capital for anyone in the world to learn anything that can be transmitted over the Internet; the binding constraint is the availability of affordable content and the time it takes to learn and teach.
Meaning: The final individual need, that of meaning, is not and has never been constrained by capital. Capital plays no role in meeting our need for it.
At first it might seem difficult to see how capital relates to our collective needs. How could it have anything to do with such abstract concepts as motivation and coordination? In discussing why capital is already sufficient today to meet our collective needs, I will also briefly point out how it was scarce with regard to these needs in the past.
Reproduction: Available capital has always been sufficient for reproduction – otherwise, we wouldn’t be here today.
Allocation: During the Industrial Age the allocation of capital, such as where to build a factory and what it should produce, was the central allocation problem, and it was the scarcity of capital that made this need hard to meet. When there were few roads and other means of transportation, there were few places a factory could be built. Getting the location just right and building the right factory was thus a much harder problem than it is today, when we can ship products around the world. The allocation of capital is hence no longer constrained by the scarcity of capital. And because capital is no longer scarce, its allocation is also no longer the dominant allocation problem. As we will see in the next section, it has been replaced by the allocation of attention, for which capital is largely irrelevant.
Motivation: Again, it might at first seem as if capital never played a role here. But consider what it was like to work in an early factory, when the outputs were generally not affordable for the workers. Workers at the time had to more or less be forced into factory work, a situation that still persists in some parts of the world for certain industries (e.g., clothing and hardware assembly). Contrast this with much of the period following the Second World War, when more advanced economies already had a fair bit of capital, making possible the mass production of goods that workers could afford. Motivation can of course come from many sources other than what wages can buy, such as wanting to help others (e.g. in healthcare) or facing an enemy (e.g. wartime production). The key point is that today motivation is no longer constrained by capital in principle.
Coordination: One of the primary ways to meet the need for coordination is through communication, which was heavily constrained by capital for the longest time. Today, however, we can hold a real-time video conference with nearly anybody in the world. And some of the big coverage gaps, such as parts of Africa, are rapidly being filled in.
Knowledge: Finally, our collective need for knowledge was long constrained by capital. Making books, for instance, was expensive and time-consuming, and copies could only be made by humans, which introduced errors. The spread of knowledge was limited by the need to create and supply physical copies, constraints that we have now left behind. There were also other ways in which capital was scarce as far as knowledge was concerned. For instance, we had insufficient scientific instruments for inspecting matter, such as microscopes. Today, by contrast, we are able to build massive undertakings to support science, such as the Large Hadron Collider.
Our progress on the four foundational enablers – energy, resources, transformation and transportation – is another way to understand why capital is no longer scarce. There have been massive breakthroughs on all four during the Industrial Age.
Energy: The biggest breakthrough in energy was the development of electricity, which allowed us to apply energy precisely. Our remaining challenges relate to the production, storage and distribution of electricity. Further improvements will let us meet different needs in new ways, but we are not fundamentally energy-constrained. For instance, at current efficiency rates, covering less than 0.1% of the Earth’s surface with solar panels could meet all of today’s energy needs (Berners-Lee, 2019).
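That 0.1 per cent figure is easy to sanity-check with a back-of-envelope calculation. The inputs below are rough round numbers of my own choosing (not figures taken from Berners-Lee), so treat the result as an order-of-magnitude estimate:

```python
# Back-of-envelope: what fraction of Earth's surface, covered in solar
# panels, would meet today's total energy demand? Inputs are rough
# assumptions for illustration only.

world_power_demand_w = 18e12    # ~18 TW average global primary power demand
insolation_w_per_m2 = 200       # rough average solar power reaching the surface
panel_efficiency = 0.20         # typical modern photovoltaic panel
earth_surface_m2 = 510e12       # Earth's total surface, ~510 million km^2

panel_area_m2 = world_power_demand_w / (insolation_w_per_m2 * panel_efficiency)
fraction = panel_area_m2 / earth_surface_m2

print(f"Panel area needed: {panel_area_m2 / 1e6:,.0f} km^2")
print(f"Fraction of Earth's surface: {fraction:.2%}")   # roughly 0.09%
```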
Resources: The availability of resources was completely transformed during the Industrial Age through mining, which was enabled by innovation in transportation (railways) and energy (steam power). People who have concerns about sustainability sometimes point to the scarcity of resources as the primary constraint, but there are three sources that we can tap in the future: recycling, asteroid mining, and eventually transmutation (turning one element into another, as in the alchemists’ quest to turn lead into gold). For instance, a lot of electronics currently end up in landfill instead of being recycled, we achieved the first soft landing on an asteroid as far back as 2001, and we can already turn lithium into tritium.
Transformation: Our ability to transform materials also improved radically during the Industrial Age. For instance, chemistry enabled the synthetic production of rubber, which previously had to be harvested from trees. Machine tools enabled the rapid transformation of wood and metals. We later added further transformation technologies such as injection molding and additive manufacturing (often referred to as “3D printing”).
Transportation: Here we went from human-, animal- and wind-powered movement to machine-powered movement, dramatically increasing our capabilities. We can now fly across continents and oceans on commercial flights, reaching any major city in a single day, and there has been extraordinary progress in flight safety. While some have complained about a recent lack of progress, pointing to the lack of commercial supersonic options following the retirement of Concorde, work has recently resumed on developing new options for commercial supersonic flight. We have also made tremendous progress on reusable rockets and autonomous vehicles (for instance, drones and robots used in warehouses).
The progress made on all these enablers has allowed us to produce more physical capital, to do so more rapidly and cheaply, and to transport it anywhere in the world. One illustration of how far we have come is the fact that smartphones only became available in 2000, but by 2017 there were over 2 billion smartphone users in the world.
I am not claiming that everyone’s needs are being met today, nor am I arguing that governments should be meeting people’s needs through government-run programs such as food stamps or subsidized housing – quite the opposite. My point is simply that physical capital is no longer the constraint when it comes to meeting our individual and collective needs.
The great success of capitalism is that capital is no longer scarce. However, we now face a scarcity of attention, and, as we will see, capitalism cannot and will not properly address that new scarcity without dramatic changes in how we regulate our society and ourselves.
In saying that attention is scarce, I mean that there is not enough of it available today to meet our needs. That’s what I set out to show in this part of the book. I’ll start by defining attention, before presenting several examples of needs that either are already no longer met due to a lack of attention, such as the need for meaning, or are at risk of not being met in the near future. After that, I will consider how much human attention is currently caught up in Industrial Age activities and how an increasing amount of attention is being trapped through our current uses of digital technology, such as advertising-based social networks. I will also discuss why market-based capitalism cannot be used to allocate attention.
Attention is to time as velocity is to speed. If I tell you that I’m driving at a speed of 55 miles per hour, that does not tell you anything about where I’m going, because you don’t know what direction I’m heading in. Velocity is speed plus direction. Similarly, if I tell you that I spent two hours with my family yesterday (time), that does not tell you anything about what our minds were directed at—we could have been having an engaging conversation, or we could have been immersed in our phones. Attention is time plus intentionality.
The amount of human attention in the world is finite. We have 24 hours in the day, some of which we need to spend paying attention to eating, sleeping and meeting our other needs. For most people in the world, the attention available in the remaining hours is taken up by having to earn an income and by consuming goods and services, leaving relatively little attention to be freely allocated. A hard limit on available attention also exists for humanity as a whole—as I argued earlier, we are headed for peak population, at which point we will no longer be increasing the total amount of potentially available attention by adding more people.
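A rough back-of-envelope bound makes this finiteness concrete. The numbers below (population, waking hours, and the hours committed to income and consumption) are round illustrative assumptions, not measurements:

```python
# Rough upper bound on humanity's daily attention budget. All inputs
# are illustrative round-number assumptions.

population = 7.9e9       # people alive today
waking_hours = 16        # waking hours per person per day
committed_hours = 13     # assumed: earning income, chores, consumption, etc.

total_attention = population * waking_hours                    # person-hours/day
freely_allocatable = population * (waking_hours - committed_hours)

print(f"Total waking attention: {total_attention:.2e} person-hours per day")
print(f"Freely allocatable:     {freely_allocatable:.2e} person-hours per day")
# Once population peaks, this budget stops growing: attention becomes
# a hard-capped resource.
```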
Crucially, we cannot go back in time and change our past attention, either as individuals or collectively. A student who walks into an exam unprepared cannot revisit the preceding weeks and study more. A world that enters a pandemic unprepared is not able to go back in time and do more research on coronaviruses.
First, let’s consider attention at the individual level. The need for meaning is no longer being met because most people are failing to give enough attention to the crucial questions of purpose at a time of great transition.
In recent times, all over the world, people have become used to constructing meaning around their jobs and beliefs, but both are being undermined by digital technologies. Many jobs have come under pressure from automation or outsourcing. Meanwhile, ideas, images and information are no longer contained by geographic boundaries, and people are increasingly exposed to opinions and behaviors that diverge from their core beliefs. In combination, these challenges are leading to a crisis of identity and meaning. This crisis can take many different forms, including teenage depression, adult suicide—in the US, particularly among middle-aged white men—and fatal drug overdoses (Rodrick, 2019; American Foundation for Suicide Prevention, 2019). Between 2006 and 2019, these problems increased by 99 percent, 26 percent and 43 percent respectively.
[Figure omitted. Source: CDC, 2020; National Center for Health Statistics, 2019; Substance Abuse and Mental Health Services Administration, 2020]
The situation is not dissimilar to the one that first occurred when people left the countryside and moved to big cities during the transition to the Industrial Age, having to give up identities that had been constructed around land and crafts (a process that has continued to play itself out throughout the world as industrialization spread). They were uprooted from their extended families and confronted with people from other regions who held different beliefs. Then too there was a marked increase in mental illness, drug abuse and suicide.
The Industrial Age had little use for an individual sense of meaning—it is difficult to combine the pursuit of a strong sense of personal purpose with the repetitive operation of an industrial machine day in, day out. Early in the Industrial Age, religion continued to provide a source of meaning for most people, as a collective purpose. As the Industrial Age progressed, however, church attendance decreased, while jobs and consumption increasingly came to be seen as sources of meaning. Some of this change can be traced back to the rise of the ‘Protestant work ethic‘, which provided justification for wealth accumulation from rising professions (such as lawyers and doctors) and the managerial class. Some of it is the result of the massive growth in commercial advertising, which cleverly tied consumption to such aspirations as freedom (e.g., the infamous Camel cowboy cigarette ads) and happiness. We have come so far on that path that people now speak of “retail therapy,” the idea that you can make yourself feel better by shopping.
As with such earlier transitions, it is not surprising that with the current digital dislocation we are yet again seeing a rise in populist leaders with simplistic messages, such as Donald Trump in the United States and Viktor Orbán in Hungary. A recent study found that the average share of the vote for populist parties throughout Europe is more than double what it was in the 1960s (Inglehart & Norris, 2016). People who lose their sense of meaning when their purpose and beliefs are challenged want to be told that things will be okay and that the answers are simple. “Make America Great Again” is one such message. These backward-looking movements promise an easy return to a glorious past. Similarly, we are once again seeing a growth in church attendance as well as in various spiritual movements, all of which promise to quickly restore individuals’ access to meaning. The alternative of creating new meaning through an individual search for purpose and the independent examination and formation of beliefs requires considerable attention, which people cannot muster for reasons that we will examine in detail later in the book.
This individual scarcity of attention to purpose is not confined to any one demographic. People who work multiple jobs to pay rent and feed their families are definitely impacted, but so are many people in high-paying jobs, who are often working more hours than ever and have increased their personal expenses to the point where they cannot afford to quit. One might posit that this is the result of a lack of education, but I often meet young people who have graduated from elite schools and want to work for a technology startup or get into venture capital. Most of them are looking for advice about how to apply to a specific position. After discussing that for some time, I usually ask them a more open question: “What do you want from this position?” That often elicits more interesting answers—they might talk about learning a new skill, or applying one that they have recently learned. Sometimes people answer with a desire to contribute to some cause. When I ask them “What is your purpose?”, shockingly few have paid enough attention to this question to have an answer. It is often as if they had been presented with this question for the first time and suddenly realize that “make a lot of money” is not actually a purpose that can provide meaning in life.
Humanity is also not devoting nearly enough attention to our collective need for more knowledge to address the threats we are facing and seize the opportunities ahead of us.
In terms of the threats we face, we are not working nearly hard enough on reducing the levels of carbon dioxide and other greenhouse gases in the atmosphere. Or on monitoring asteroids that could strike the Earth, and coming up with ways of deflecting them. Or on containing the current coronavirus outbreak and future pandemics (an early draft of The World After Capital, written before 2020, said “containing the next avian flu” here).
Climate change, “death from above” and pandemics are three examples of species-level threats that are facing humans. As I wrote earlier, we are only able to sustain the current global human population due to technological progress. Each of these risk categories has the potential to fundamentally disrupt our ability to meet individual needs. For example, the climate crisis could result in large-scale global crop failures, which could mean we would no longer be able to meet everyone’s needs for calories and nutrients. This is not a hypothetical concern: it has led to the downfall of prior human civilizations, such as the Rapa Nui on Easter Island and the Mayans, whose societies collapsed due to relatively small changes in their local climate, possibly induced in some measure by their own actions (White, 2019; Simon, 2020; Seligson, 2019). Now we are facing a climate crisis on a truly global scale, and we should be using a significant proportion of all human attention to fight this threat.
On the opportunity side, far too little human attention is spent on things such as environmental cleanup, educational resources and basic research. The list here is nearly endless, and includes unlocking quantum computing and advancing machine intelligence. The latter is particularly intriguing because it could help produce more knowledge faster, thus potentially helping to reduce the scarcity of attention.
None of this means that everyone has to become a scientist or engineer—there are many other ways to allocate attention to address these threats and opportunities. For instance, learning about the climate crisis, sharing that knowledge with others and becoming politically active are all ways of allocating attention that can directly or indirectly create more knowledge. So is creating art that inspires others, whether it is to directly take an action, or by connecting us to our shared humanity as a source of meaning. This is why when I talk about not creating enough knowledge, I am not limiting it to scientific knowledge but including all knowledge, as defined earlier.
Attention scarcity is difficult to alleviate, and I therefore propose it as a possible explanation for the Fermi paradox. The physicist Enrico Fermi famously asked why we have not yet detected any signs of intelligent life elsewhere in our universe, despite knowing that there are plenty of planets that could harbor such life. Many different explanations have been advanced, including that we are the first and hence only intelligent species, or that more advanced intelligent species stay “dark” for fear of being attacked by even more advanced species (the premise of Cixin Liu’s sci-fi trilogy The Three-Body Problem). Alternatively, perhaps all civilizations develop until they have sufficient capital but then suffer from attention scarcity, so they are quickly wiped out by a pandemic or a meteor strike. If civilizations that can build radios don’t persist for very long, the window during which signs of their existence could be detected is unlikely to coincide with ours.
Why is our scarce attention so poorly allocated that we have created a potential extinction-level event in the form of a climate crisis? One reason is that we currently use the market mechanism to allocate attention. The next sections explain how this mechanism is sucking huge amounts of attention into a few systems such as Facebook, while also keeping much of it trapped in Industrial Age activities. Finally, we will consider why markets fundamentally cannot allocate attention, which points to crucial limits of capitalism.
We have seen that attention is scarce relative to the great problems and opportunities facing us, making proper allocation of available attention the crucial challenge for humanity. As we will see later, digital technology can help meet this challenge. But in the recent past the primary effect of digital technology has been to misallocate attention.
The Internet is exponentially increasing the amount of available content. Most of the recorded content produced by humanity has been produced in the last few years, a natural result of fast exponential growth in the creation of data (Marr, 2019). As a result, it is easy to be overwhelmed. Our limited attention is easily absorbed by the increasing amount of content tailored to piquing our curiosity and capturing our attention. Humans are inadequately adapted to the information environment we now live in. Checking email, Twitter, Instagram, and watching yet another YouTube clip or Snapchat story provide ‘information hits’ that trigger the parts of our brain that evolved to be stimulated by novelty, social connection, sexual attractiveness, animal cuteness, and so on. For hundreds of thousands of years, when you saw a cat (or a sexy person) there was an actual cat (or sexy person) somewhere nearby. Now the Internet can provide you with an effectively unlimited stream of cat (or sexy person) pictures. In 2019, the average person spent nearly two and a half hours on social media every day, part of a staggering ten and a half hours spent on some sort of media consumption, or more than 60 percent of all waking hours (Kemp, 2020; "The Nielsen Total Audience Report," 2020).
Importantly, the dominant companies that we use to access this information, such as Google, Facebook and Twitter, generate most of their revenues by capturing and reselling our attention. That’s the essence of advertising, which is their business model. Advertisers literally buy our attention. Today, in order to grow, these companies invest in algorithms designed to present highly targeted, captivating content, thereby capturing more of our attention. News sites like Buzzfeed and the Huffington Post do the same.
It is much easier to capture attention by appealing to the parts of our brain that find kittens cute and people sexy, or that react with outrage to perceived offenses, than by asking us to read a long-form essay or work through an argument by independently weighing evidence. The companies responsible for these systems lack financial incentives to persuade you to close your computer, put down your smartphone and spend more time with family and friends, read a book, or go outdoors and enjoy, or even clean up, the environment. The financial markets closely track metrics such as the number of users and time spent on a platform, which are predictors of future growth in advertising revenue. In other words, the markets that drive the predominant way we use digital technology to allocate attention reflect the financial interests of investors and advertisers, which are often orthogonal or even antagonistic to individual and community interests. As we will see later (see the section on “Missing Prices” below), the problem runs even deeper, as it is actually impossible to construct proper markets for attention.
Capitalism has been so successful that even theoretically communist countries like China have embraced it. But it cannot solve the scarcity of attention without significant changes in regulation, because of three important limitations. First, prices will always be missing for things that we should be paying attention to. Second, capitalism has limited means for dealing with the concentration in wealth and market power arising from digital technologies. Third, capitalism acts to preserve the interests of capital over knowledge. We need to make changes now, precisely because capitalism has been so successful—the problems that are left are the ones it cannot solve.
Capitalism won’t help us allocate attention because it relies on prices that are determined in markets. Prices are powerful because they efficiently aggregate information about consumer preferences and producer capabilities, but not everything can be priced. And increasingly, the things that cannot be priced are becoming more important than those that can: for example, the benefits of space exploration, the cost of the climate crisis, or an individual’s sense of purpose.
The lack of prices for many things is not just a question of a missing market that can be created through regulation. The first foundational issue is the zero marginal cost of copies and distribution in the digital realm. From a social perspective, we should make all the world’s knowledge, including services such as medical diagnoses, available for free at the margin. But this means that as long as we rely on the price mechanism, we will under-produce digital resources. Just as the Industrial Age has been full of negative externalities such as pollution, resulting in overproduction, the Knowledge Age is full of positive externalities, such as learning, which implies underproduction. If we rely on the market mechanism, we will not pay nearly enough attention to the creation of free educational resources.
The second foundational issue is uncertainty. Because prices aggregate information, they fail when no such information exists. When events are either incredibly rare or have never occurred, we have no information on their frequency or severity, and the price mechanism cannot work when forecast error is infinite. For instance, large asteroid impacts on Earth occur millions of years apart, and hence no price can help us allocate attention to detecting them and building systems to deflect them. As a result, we pay a trivial amount of attention to such problems relative to the potential damage they would cause.
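A toy simulation illustrates what infinite forecast error looks like. Below I draw ‘damages’ from a Pareto distribution with a tail exponent below one, so its theoretical mean is infinite; the parameters are my own illustrative choices. The running average of observed damages never settles down, meaning history gives a market no stable number to price off:

```python
import random

# Draws from a heavy-tailed Pareto distribution with infinite mean:
# the sample average never converges, no matter how much "history"
# accumulates. Parameters are illustrative assumptions.

random.seed(0)
ALPHA = 0.8  # tail exponent < 1 implies an infinite theoretical mean

def pareto_draw():
    # Inverse-transform sampling: survival function S(x) = x**(-ALPHA), x >= 1
    return (1.0 - random.random()) ** (-1.0 / ALPHA)

total = 0.0
for n in range(1, 1_000_001):
    total += pareto_draw()
    if n in (100, 10_000, 1_000_000):
        print(f"after {n:>9,} samples, running average = {total / n:,.1f}")
# The running average typically keeps drifting upward as rare, huge
# draws arrive, instead of converging as pricing would require.
```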
The third foundational issue is new knowledge. The further removed such knowledge is from creating a product or service that can be sold, the less use the price mechanism is. Consider early aviation pioneers, for example. They pursued flight because they were fascinated by solving a challenge rather than because there was an obvious market for air travel. Or take the early days of quantum computing: actual machines were still decades away, so at that time the price mechanism would not have allocated attention to the discipline. Much of this knowledge therefore needs to be produced by allocating attention through other mechanisms, such as government funded research, academic institutions, and prizes.
The fourth foundational issue is that in order for markets and prices to exist, there have to be multiple buyers (demand) and sellers (supply). There is no demand and supply for you to spend time with your children or to figure out your purpose in life. Capitalism cannot help us allocate attention to anything that is deeply personal.
A way of summarizing all of these examples is to think of the world as divided into an economic sphere (where prices exist) and a non-economic one. Market-based allocation of attention can only succeed in the former and, to the extent that there are insufficient counterweights, will do so to the detriment of attention allocated to the non-economic sphere. Think of the high-earning parent who doesn’t spend enough time with their children, or the legions of science PhDs optimizing ad algorithms instead of working on the climate crisis.
When it comes to the distribution of income and wealth, many different outcomes are possible, and what actually happens depends both on the underlying production function and government regulation. Consider a manual production function that was common before industrialization. If you were a cobbler making shoes by hand, for instance, there was a limit to the number of shoes you could produce.
Then along came industrialization and economies of scale. If you made more cars, say, you could make them more cheaply. That is why, over time, there were relatively few car manufacturers around the world and the owners of the surviving ones had large fortunes. Still, these manufacturing businesses stayed fairly competitive with each other even as they grew large, which limited their market power and thereby the amount of wealth that they generated. Many service businesses have relatively small economies of scale, which has allowed a great many of them to exist, and markets such as nail salons and restaurants have remained highly competitive. Finance is one clear exception to this among services. A few large banks, insurance companies and brokerage firms tend to dominate the finance industry, and that has accelerated in recent years, largely because financial services have already been heavily impacted by digital technology.
With digital technology we are seeing a shift to ever-higher market power and wealth concentration. When you plot the outcomes, such as companies by revenue, the resulting curves look like so-called ‘power laws’: the biggest firm is a lot bigger than the next biggest firm, which in turn is a lot bigger than the third largest, and so on. This pattern is pervasive throughout digital technology and the industries in which it plays a major role. For instance, the most watched video on YouTube has been watched billions of times, while the vast majority of videos have been watched just a few times. In e-commerce, Amazon is an order of magnitude larger than its biggest competitor, and several orders of magnitude larger than most e-commerce companies. The same goes for apps: the leading ones have hundreds of millions of users, but the vast majority have just a few.
Digital technologies are driving these power laws due to zero marginal cost, as explained earlier, as well as through network effects. Network effects occur when a service gets better for all participants as more people or companies join the service. For example, as Facebook grew, both new and early users had more people they could connect with. This means that once a company grows to a certain size it becomes harder and harder for new entrants to compete, as their initially smaller networks offer less benefit to participants. In the absence of some kind of regulation, the combination of zero marginal cost with network effects results in extremely lopsided outcomes. So far, we have seen one social network, Facebook, and one search company, Google, dominate all others.
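To make the incumbent advantage concrete, here is a minimal sketch. It uses a Metcalfe-style value proxy (value proportional to the number of possible connections between participants), which is a stylized modeling assumption rather than an empirical law, and the network sizes are purely hypothetical:

```python
# Sketch: why a larger network offers more benefit to each new participant.
# Metcalfe-style value proxy (possible pairwise connections); the model
# and the network sizes below are illustrative assumptions.

def network_value(users: int) -> int:
    """Value proxy: number of possible pairwise connections."""
    return users * (users - 1) // 2

incumbent = 1_000_000  # hypothetical established network
entrant = 10_000       # hypothetical new entrant

# Marginal value a new user gets from joining each network:
print(network_value(incumbent + 1) - network_value(incumbent))  # 1,000,000
print(network_value(entrant + 1) - network_value(entrant))      # 10,000
```

Under this stylized model, each new user gains a hundred times more from joining the incumbent, which is exactly the dynamic that makes catching up so hard for new entrants.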
This shift to power laws is driving a huge increase in wealth and income inequality, to levels that are even beyond the previous peak of the early 1900s. Inequality beyond a certain level is socially corrosive, as the wealthy start to live in a world that is disconnected from the problems faced by large parts of the population.
Beyond the social implications of such inequality, the largest digital companies also wield undue political and market power. When Amazon acquired a relatively small online pharmacy, signaling its intent to compete in that market, there was a dramatic drop in the market capitalization of pharmacy chains. Historically, market power produced inefficient allocations due to excessive rents as prices were kept artificially high. In digital markets, in contrast, powerful companies have often pushed prices down or even made things free. While this appears positive at first, the harm to customers comes via reduced innovation, as companies and investors stop trying to bring better alternative products to market (consider, for example, the lack of innovation in Internet search).
Joseph Schumpeter coined the term “creative destruction” to describe the way in which entrepreneurs create new products, technologies, methods, and ultimately economic structures to replace old ones (Kopp, 2021). Indeed, if you look at the dominant companies today, such as Google, Amazon and Facebook, they are all relatively new, having displaced in importance those of the Industrial Age. However, such ‘Schumpeterian’ innovation will be more difficult going forward, if not downright impossible. During the Industrial Age, machines served a specific purpose, which meant that when a new product or manufacturing technology became available, the installed base of machines became essentially worthless. Today, general-purpose computers can easily implement a new product, add a feature or adopt a new algorithm. Furthermore, production functions with information as a key input have a property known as ‘supermodularity’: the more information you have, the higher the marginal benefit of additional information (Wenger, 2012). This gives the incumbent companies tremendous sustained power—they gain more marginal value from a new product or service than a new entrant does.
Toward the end of the Agrarian Age, when land was scarce, political elites came from the landowning classes, and their influence wasn’t truly diminished until after the Second World War. Now, although we have reached the end of the time of capital scarcity, political elites largely represent the interests of capital holders. In some countries, such as China, senior political leaders and their families own large parts of industry outright. In other countries, such as the United States, politicians are heavily influenced by the owners of capital because of the need to raise funds, the impact on policy of lobbyists, ‘think tanks’ and foundations backed by capital, the skewing of public debate through capital-owned media (e.g. FOX), as well as global ‘regulatory competition’ allowing capital owners to play governments off against one another in order to limit wealth redistribution through taxation. Consider just lobbying: over a five-year period, the 200 most politically active companies spent nearly $6 billion to influence policy (Allison & Harkins, 2014). A clever study from 2019 showed how this kind of lobbying directly impacts the language of laws subsequently drafted by lawmakers (O’Dell & Penzenstadler, 2019).
The net effect of all of this is a set of policies favorable to owners of capital, such as low rates of capital gains tax. Low corporate tax rates, with loopholes that allow corporations to accumulate profits in countries where taxes are low, are also favorable to owners of capital. This is why many countries now have some of the lowest effective tax rates for corporations and wealthy individuals and families in history (‘effective’ means what is paid after exemptions and other ways of reducing or avoiding tax payments).
In addition to preserving and creating financial benefits for the owners of capital, corporations have also attacked the creation and sharing of knowledge. They have lobbied heavily to lengthen terms of copyright and to strengthen copyright protection. And scientific publishers have made access to knowledge so expensive that libraries and universities struggle to afford the subscriptions (Sample, 2018; Buranyi, 2018).
A key limitation of capitalism, then, is that without meaningful change it will keep us trapped in the Industrial Age by keeping governments and the political process focused on capital. As long as that is the case, we will continue to over-allocate attention to work and consumption, and under-allocate it to areas such as the individual need for meaning and the collective need for the growth of knowledge. Parts Four and Five of The World After Capital will examine how we can get out of the Industrial Age, but first we will take a closer look at the power of knowledge and the promise of the digital knowledge loop.
My first major aim in writing this book was to establish that we are currently experiencing a period of non-linear change; my second is to outline a plan for our transition to the Knowledge Age. Our challenge is to overcome the limits of capitalism and move away from a society centered on the job loop toward one that embraces the knowledge loop. This section of The World After Capital will propose changes in regulation and self-regulation that would increase human freedom and unlock the promise of the digital knowledge loop. There are three components to this:
Economic freedom. We must ensure that everyone’s needs are met without them being forced into the job loop. Once we have economic freedom, we can embrace automation and enable everyone to participate in and benefit from the digital knowledge loop.
Informational freedom. We must remove barriers to the digital knowledge loop that limit our ability to learn from existing knowledge, in order to accelerate the creation and sharing of new knowledge. At the same time, we must build systems into the digital knowledge loop that support critical inquiry.
Psychological freedom. We must free ourselves from scarcity thinking and the associated fears that impede our participation in the digital knowledge loop. Learning, creating and sharing knowledge all require us to overcome barriers in our minds, some of which are evolutionary and others the result of social pressure.
With these increased freedoms will come the possibility of a peaceful transition from the Industrial Age to the Knowledge Age that is not dictated from the top down, but results from the choices of individuals and the communities they form. There is no guarantee that these changes will be sufficient to avoid a disastrous transition, but I am convinced that without them we are headed for just that, incurring species-level risk for humanity. Later in the book I will discuss the values and systems that are necessary for successful collective action in a world of increased individual freedom.
While digital technology is being used to capture rapidly increasing amounts of our attention, we should also consider what the bulk of attention is dedicated to today. Not surprisingly, since we are just beginning to transition out of it, the vast bulk of human attention is focused on Industrial Age activities, in particular labor and consumption. For example, in the US many people spend 40 or more hours a week on the job, which amounts to 35 per cent of waking hours (assuming eight hours of sleep per night). People in the US now spend around 10 and a half hours a day consuming media (including traditional television and radio in addition to Facebook, YouTube, Netflix and similar services, podcasts, games, and more), which (setting aside simultaneous usage) amounts to over 60 per cent of waking time ("The Nielsen Total Audience Report," 2020). To understand why so much of our attention is spoken for, I present the concept of the “job loop.”
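The shares just cited follow from simple arithmetic; here is the back-of-the-envelope check, assuming eight hours of sleep per night as in the text:

```python
# Back-of-the-envelope check of the attention shares cited above,
# assuming 8 hours of sleep per night (16 waking hours per day).

waking_hours_per_day = 24 - 8
waking_hours_per_week = waking_hours_per_day * 7  # 112

work_share = 40 / waking_hours_per_week           # 40-hour work week
print(f"work: {work_share:.0%}")                  # ~36%, the roughly 35% cited

media_share = 10.5 / waking_hours_per_day         # 10.5 hours of media per day
print(f"media: {media_share:.0%}")                # ~66%, i.e. over 60%
```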
Thinking dispassionately about labor is hard, because over the last couple of centuries we have become convinced that employment is essential to both the economy and individual dignity. Let’s start from the perspective of production. If you want to make products or deliver a service, you require a series of inputs, including buildings and machines (capital), raw materials or parts (supplies), and human workers (labor). For much of history, capital and labor have been complementary: as the owner of a company, you couldn’t use your physical capital without having labor to operate it. That was true for manufacturing and even more so for services, which often use little capital and consist primarily of labor.
However, there is nothing in economics that says that all production processes should require labor. The opposite idea is an artifact of the production functions that were technologically available when economists developed the theory of production. If company owners are able to figure out how to do something cheaper or better by using less or no labor, that’s what they will choose to do. When it was acquired by Facebook for $19 billion, for example, WhatsApp had fewer than 50 employees.
Having no labor at all might make sense for a single company, but it does not for the economy as a whole as it is currently constructed. Who will buy goods and services produced by automated systems if people are unemployed and don’t have any money? Walter Reuther, head of the United Automobile Workers union in the 1950s, often told a story about an exchange he had with a Ford Motor Company official (who, as the story grew famous in its own right, came to be identified as Henry Ford II):
Ford official: How are you going to collect union dues from these guys [robots]? Walter Reuther: How are you going to get them to buy Fords? (O'Toole, 2011)
If we all had inherited wealth or sufficient income from capital, an economy without labor would not be a problem, and we could enjoy the benefits of cheaper products and services courtesy of robots and automation.
The possibility of a slump in consumer demand due to declining employment long seemed not just unlikely but impossible. There was a virtuous loop at the heart of economic growth: the ‘job loop.’
In today’s economy, the majority of people sell their labor, producing goods and services and receiving wages in return. With their wages, they buy smartphones, books, tools, houses and cars. They also buy the professional assistance of attorneys, doctors, car mechanics, gardeners and hair stylists.
Most of the people who sell these goods and services are in turn employed, meaning that they too sell their labor and buy goods and services from other people with what they are paid. And round and round it goes.
The job loop worked incredibly well in combination with competitive markets for goods and services and with a properly functioning financial system. Entrepreneurs used debt or equity to start new businesses, and employed people at wages that were often higher than those paid by older businesses, increasing their employees’ purchasing power and thereby fueling further innovation and growth. As far as expanding economic production and solving problems for which markets are well-suited, it was a virtuous cycle that resulted in unprecedented prosperity and innovation.
Some might point out that many people these days are self-employed, but that is irrelevant if they are selling their time. For instance, a graphic designer who works as an independent contractor is still paid for the labor they put into a project. It is only if they design something that is paid for over and over without them spending further time on it, such as a graphics template, that they have the opportunity to leave the job loop.
There are multiple problems with this virtuous cycle today. First, as we calculated at the outset of this section, it traps the vast majority of human attention. Second, when things contract, the effect of mutual reinforcement applies in the other direction. Take a small town, for example, in which local stores provide some of the employment. If a big superstore comes into town, total retail employment and wages will both fall. Fewer store employees have income, and those who do have less. If they start to spend less on haircuts and car repairs, the hair stylist and car mechanic earn less and can spend less themselves, and so on. Third, much of the consumption today is driven by vast sums of money spent on advertising, as well as by exposure to social media, inducing people into positional spending on wants (e.g., a bigger car than their neighbor). These higher expenditure levels, in turn, lock people into jobs which they hate but cannot afford to leave, which explains a great deal of the frustration among relatively highly-paid professionals, such as lawyers and bankers.
Put differently, what was once a virtuous loop has become a vicious loop that holds much of human attention trapped. Much of The World After Capital is about breaking free of this vicious version of the job loop. That is an urgent problem as the job loop has been becoming more vicious for some time now due to a change in the relationship between labor and capital.
To understand what is happening to the job loop, we need to look at a change in the economy that has become known as “the Great Decoupling” (Bernstein & Raman, 2015). In the decades after World War II, as the US economy grew, labor income grew at the same rate. However, starting in the mid-1970s, GDP continued to grow while household income remained flat (Economic Policy Institute, n.d.).
Source: Federal Reserve Bank of St. Louis, 2021a
Over this time of stagnant incomes, and particularly from the mid-1980s onward, US GDP growth was increasingly financed by consumers going into debt, until we reached the limit of how much debt households could support. The first event that really drove that point home was the collapse of the US housing bubble. There is some evidence that we are hitting another such point right now, as a result of the COVID-19 crisis, which has led to dramatic increases in unemployment.
Source: Federal Reserve Bank of St. Louis, 2021b; 2021c
Similar changes have occurred in other developed economies. This decoupling may be partly accounted for by changing demographics, but the primary driver appears to be technology. As technological innovation accelerates, there will be further pressure on the job loop. Particularly worrisome is the fact that jobs in developing countries are highly exposed to automation (The Economist, 2016). As a result, these countries may either skip the “golden age of the job loop” entirely or have a much diminished version.
So, while we want to free up the attention trapped in the job loop, we need to figure out how to do so gradually, rather than through a rapid collapse. But is such a collapse even possible?
With the job loop still dominant, people have to sell their labor to earn a living. Until recently, most economists believed that when human labor is replaced by technology in one economic activity, it finds employment in another. These economists refer to the fear of technological unemployment or underemployment as the “lump of labor fallacy.”
The argument is that automating some part of the economy frees up labor to work on something else—entrepreneurs might use this newly available labor to deliver innovative new products and services, for example. There is no fixed “lump” of labor; rather there are potentially an infinite number of things to work on. After all, this is what has happened historically. Why should this time be different?
To understand how things could be different, we might consider the role horses have played in the American economy. As recently as 1915, 25 million horses worked in agriculture and transportation; by 1960, that number had declined to 3 million, and then we stopped keeping track entirely as horses became irrelevant (Kilby, 2007). This decline happened because we figured out how to build tractors, cars and tanks. There were just no uses left for which horses were superior to a mechanical substitute. In his article “Machines and Man,” the economist Wassily Leontief (1952) pointed out that the same thing could happen to humans.
Humans obviously have a broader range of skills than horses, which is why we have so far always found new employment. So what has changed? Basically, we’ve figured out how to have computers do lots of things that until recently we thought only humans could do, such as driving a car. Digital technology gives us universal computation at zero marginal cost. Suddenly, the idea that we humans might have fewer uses doesn’t seem quite so inconceivable.
Those who claim that this is committing the lump of labor fallacy argue that we haven’t considered a new set of human activities that will employ people, but that line of thinking might also be flawed. Just because we have found new employment in the past doesn’t mean we will in the future. I call this belief the “magic employment fallacy.”
We can be incredibly creative when it comes to thinking of new things to spend our time on, but the operative question for people selling their labor is whether they can get paid enough to afford solutions to their needs, such as food, shelter and clothing. The only thing that matters for this question is whether a machine or another human is capable of doing whatever we think of more cheaply.
This turns out to be the central problem with the magic employment fallacy. Nothing in economic theory says what the ‘market-clearing price’ for labor—the wage level at which there is neither unemployment nor a labor shortage—ought to be. It might be well below what people need to cover their needs, which could present a near-term existential threat to many people. We thus appear to face a dilemma. On the one hand, we want to free up human attention for uses that the job loop doesn’t provide for. On the other hand, we want to avoid a rapid collapse of the job loop. In order to understand how we can accomplish both, we need to consider the relationship between the cost of labor and innovation.
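A stylized example, with purely hypothetical numbers, makes the point concrete that nothing pins the market-clearing wage to the cost of needs:

```python
# Stylized point: nothing guarantees that the market-clearing wage
# covers the cost of needs. All numbers are hypothetical.

market_clearing_wage = 4.00   # $/hour at which labor supply meets demand
full_time_hours = 40 * 52     # hours worked per year
cost_of_needs = 15_000        # $/year for food, shelter, clothing

annual_earnings = market_clearing_wage * full_time_hours
print(annual_earnings)                   # 8,320
print(annual_earnings >= cost_of_needs)  # False: full-time work falls short
```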
Some people argue that unions made labor expensive, resulting in unaffordable products and services. But in reality, increased labor costs propelled us to become more efficient: entrepreneurs overcame the challenge of more expensive labor by building better machines that required fewer humans. In countries such as India, the abundance of cheap labor meant that for a long time there was little incentive to invest in machines, since it was cheaper to have people do the work by hand.
Globally, we face the risk of being stuck in a low innovation trap precisely as a result of a fear that automation will make labor cheap. For example, we might end up with many more years of people driving trucks across the country, long after a machine could do the same job more safely (Wong, 2016). What is the incentive to automate a job if you can get someone to do it for minimum wage?
Some people object to automation on the grounds that work is an integral part of people’s identity. If you have been a truck driver for many years, for instance, who will you be if you lose your job? At first, this might sound like a completely legitimate question. But it is worth recalling that the idea that purpose derives primarily from one’s profession, rather than from belonging to a religion or a community, is an Industrial Age phenomenon.
If we want to free up attention via automation, we need to come up with new answers to these concerns. That will be the subject of Part Four, but before getting there we will first consider why capitalism by itself can’t solve these problems.
If you were to quit your job right now, could you afford to take care of your needs? And if you are retired, what would happen if you suddenly stopped receiving your pension? If you are supported by a spouse or partner, could you still afford food, shelter and clothing without them? If you could no longer meet your needs in any of these situations, you are not economically free. Your decisions on how much of your labor to sell and whom to sell it to, whether to stay with your partner and where to live, are not free decisions.
Many people in the US are not free in this sense. A recent survey asked respondents if they had enough money to pay for a $1,000 emergency, and 60 percent said they did not (Leonhardt, 2020). Other studies have found that about 75 percent of Americans over the age of forty are behind on saving for retirement, and somewhere between a quarter and a third of all non-retired adults have no savings at all (Board of Governors of the Federal Reserve System, 2020; Backman, 2020). This data predates the COVID-19 pandemic which wreaked further havoc on the finances of many Americans.
If you are not economically free, you are not able to participate freely in the knowledge loop, which is why economic freedom is a cornerstone of the Knowledge Age. We must make people economically free so that they have the time to learn new knowledge, from practical skills to the latest theoretical physics. We need them to create new knowledge using what they have learned. And finally, we need them to share this knowledge with others.
Participation in the knowledge loop has never been more important: we have massive problems to overcome, including the COVID-19 pandemic and, above all, the climate crisis. To do so, we must be able to embrace automation rather than feel threatened by it.
Economic freedom is a reality for the wealthy, for tenured professors and retirees with pensions and savings, but how can we make it a reality for everyone? The answer is to provide everyone with a guaranteed income to cover solutions to their needs, including housing, clothing and food. This income wouldn’t depend on whether someone is married or single, employed or unemployed, rich or poor—it would be unconditional (also referred to as ‘universal’).
At first glance, this idea of a ‘universal basic income’ (UBI) may seem outrageous. Getting paid simply for being alive—isn’t that akin to socialism? Where would this money come from? And won’t people simply descend into laziness and drug addiction? We will examine each of these objections to UBI in turn, but let’s first consider the arguments for UBI as a way of achieving economic freedom.
Concerns about economic freedom are by no means new. When the American republic was in its infancy, economic freedom was within the reach of those involved in the colonial project. There was plenty of land available to settler colonialists, and they could hypothetically make ends meet through small-scale farming (also known as subsistence farming). Thomas Jefferson considered formalizing the idea of land grants as a way of ensuring a free citizenry. It is important to point out that this land was being taken away from Native Americans, who were losing both their land and their freedom, on the basis of the so-called Doctrine of Discovery (“Discovery Doctrine,” 2020). Even back then, observers such as the philosopher and political activist Thomas Paine (1797) understood that land that could be appropriated and allocated to settlers as property would run out at some point, raising the specter of a time when landless citizens might have to rent themselves out as laborers in order to provide for their needs. This led Paine to conclude that an alternative to land would be to give people money instead. The idea of increased freedom through direct cash transfers thus dates back to the earliest days of the American republic.
If you don’t find this argument for UBI compelling, consider the case of air. We can all afford to breathe air because it is free and distributed around the globe (important side note: regulation is required to keep it clean—there were many problems with air pollution during industrialization, and it is estimated that more than seven million people still die around the world every year from air pollution) (World Health Organization, n.d.). Our freedom is thus not restricted by having to find air, and the power of UBI would be to make us equally free when it comes to our other needs, by making food, housing and clothing affordable for everyone.
As I argued earlier, our technologies are sufficiently advanced that we are capable of meeting everyone’s needs. Farming can generate enough food for everyone. We can easily make enough clothing and provide everyone with shelter. It is the knowledge and capital that humanity has created that have made this possible. And our technological progress is accelerating while global population growth is slowing, so all this will get easier—that is, as long as we generate enough new knowledge to overcome the problems we are facing, starting with the climate crisis.
The question is not whether we have the ability to meet everyone’s needs, but whether our economy and society distribute the resources fairly, and that is where UBI comes in. A UBI enables markets to function without forcing people into the job loop. It lets everyone freely choose whether and how much to participate in these markets and how much to devote to personal relationships, the pursuit of meaning, curiosity and creativity, etc.—freeing up attention and enabling people to form new communities in more affordable geographies (e.g. smaller cities or the countryside).
Industrial society presents us with two different ways of allocating resources. In one, individuals participate in a market economy; in the other, governments provide for people’s needs. Those options form the extremes of a spectrum that has a variety of ‘hybrid’ arrangements in the middle, such as government-subsidized housing, for which people pay reduced rent. UBI broadens the scope for market-based allocation, thus reducing reliance on an ever-expanding government sector. In that regard it is the opposite of socialism and communism, which rely predominantly or even entirely on government allocation.
Right after the Second World War, only about 5 percent of people in the US were employed by government, which comprised about 42 percent of the economy (U.S. Bureau of the Census, 1949; OECD 2021a). In the Soviet Union, by contrast, most of the working population was employed by the state, which owned close to 100 percent of the economy, but that system turned out to be much less effective at allocating resources. Nevertheless, the size and scope of government spending has gradually expanded in the US and in Europe. In many European economies it now accounts for more than half of the economy.
Food, clothing and shelter are obvious solutions to human needs that a UBI should make affordable, but a UBI could eventually also cover the cost of education and healthcare. That might seem ambitious, given how quickly education and healthcare costs have risen over the past decades, but technology can be expected to make both of these far more affordable in the near future.
If you’re struggling to take care of your needs, the world will seem like an expensive place. Yet the data show that a lot of things have been getting cheaper for some time. In the US, as the chart below shows, the price of consumer durables has been falling since the mid-1990s.
Source: Federal Reserve Bank of St. Louis, 2021d
The decline in the price of consumer durables has been made possible by technological progress. We are getting better at making stuff, and the automation of production and distribution is a big part of that. While this will hurt you if you lose your job as a result, if you have money to buy things it will help you. And with UBI everyone will have money, which as prices fall over time will buy more and more.
The decline in the price of consumer durables has made adequate clothing easily affordable. Technology is also driving down the cost of smartphones, which will themselves be essential in making education and healthcare more affordable. This declining trend will only accelerate as we begin to use technology such as additive manufacturing (also known as ‘3D printing’), manufacturing products only when they are needed and close to where they are required (Crane & Marchese, 2015). Additive manufacturing is even making it much cheaper to put up a building: various structures around the world have been produced in this way in recent years, and one California company now offers for sale small houses that are 3D printed in 24 hours (Orrall, 2020).
Another way housing can be made more affordable is through improved sharing of existing housing. Digital technology, including the services offered by companies such as Airbnb and Couchsurfing, make such sharing vastly easier (note: a degree of regulation is required to avoid detrimental impact on some local housing markets). Despite such progress, it still costs a fortune to live in places where demand for housing exceeds the available supply, such as New York and San Francisco. With UBI, people can choose to live where housing is more affordable.
In recent years, the city of Detroit gave away houses as an alternative to demolishing them, and in some rural areas of the US you can rent a home for as little as a few hundred dollars per month (Macguire, 2014). In fact, there are around 30 million units renting for less than $600 a month, accounting for 25 percent of the national rental stock (The Joint Center for Housing Studies of Harvard University, 2020). The most affordable option of all is to share a place with friends: this could simply mean having roommates, or it could go as far as purchasing and fixing up an abandoned village, as friends of mine have done in Germany. Many people can’t currently take advantage of these opportunities, since they can’t find a job in these locations. By contrast, UBI provides geographic freedom. People who want to move would no longer be trapped in expensive cities just so they can meet their basic needs (often by holding down multiple jobs).
One large group of people is already free of the constraints of work: retirees. And sure enough, many people move away from expensive cities when they retire, to places where real estate is more affordable. When considering the cost of shelter, rather than analyzing how much people need to pay to live where they live today, we should therefore look at what the cost could be in a world that has UBI. Crucially, UBI doesn’t prevent someone from applying their entire payment to whatever rent they are currently paying. That is the great power of providing cash, which is completely fungible, meaning it can be used to purchase anything (unlike, say, a housing voucher, which can be applied to housing only).
Food is another area where technology stands to offer massive gains. While some argue that genetically modified foods hold the key to feeding the planet affordably, other near-term breakthroughs don’t carry the potential issues that GMOs pose. Indoor vertical farming, for instance, allows for a precise delivery of nutrients and light to plants, as well as enabling huge increases in future productivity through the use of robotics. It also allows food to be grown much nearer to where it is consumed, reducing the costs associated with transportation. In the extreme, using new hydroponic systems, lettuce, tomatoes and other vegetables can be grown right inside apartments. Over time, as these innovations progress, they will add up to significant cost reductions and increases in the availability of food.
Technology also promises a dramatic decline in the cost of education. Over the last decade, the availability of online learning resources has grown rapidly, including many free platforms, such as the language learning app Duolingo. In addition to online course platforms such as edX and Khan Academy, there are millions of blog posts that explain specific topics. And of course, YouTube is bursting with educational videos on a near-infinite range of subjects, from sailing to quantum computing.
There is evidence that the exorbitant rise in the cost of college tuition in the US is beginning to slow. When analyzing this data, we must remember that there is a huge amount of inertia in our educational system and job market. Many employers continue to believe they must hire graduates from the best universities, which drives up prices for higher education, with a ripple effect that extends all the way down to private nursery schools. This year, Google announced that it would be offering specialized six-month programs for fifty dollars a month, and committed to treating them as equivalent to four-year college degrees in its own hiring (Bariso, 2020). It will be some time before most students turn to free or affordable online resources for all their learning needs, but at least the possibility now exists. The COVID-19 crisis has shown the potential of online education, with schools all around the world switching away from in-person instruction to slow the spread of the virus.
Healthcare is a similar story. Per capita spending in the United States far exceeds that of other countries, having risen much more quickly than the rate of inflation for many years, but that hasn’t translated into better care. For instance, Cuba has for many years had an almost identical life expectancy to the US, despite spending less than a tenth of the amount on healthcare per capita (Hamblin, 2016). Debates have raged as to whether the Affordable Care Act or other legislative interventions will decrease healthcare costs or increase insurance premiums. Regardless of these issues, there are a number of reasons why we can count on progress in digital technology bringing down healthcare costs.
Source: OECD, 2021b
First, digital technology can make prices on medical procedures more transparent, enabling more competition to push prices down (this could be assisted further by regulation). Second, with people using technology to track their own health data, we can live healthier lives and require less care, especially over the long term. And third, technology will lead to faster and better diagnosis and treatment. The online medical crowdsourcing platform CrowdMed has helped many people whose conditions previously went undiagnosed or misdiagnosed. The Human Diagnosis Project (Human Dx) is also working on a system to help improve the accuracy of diagnoses.
Figure 1 is a platform that lets doctors exchange images and other observations relating to medical cases, and Flatiron Health pools data on oncology patients to enable targeted treatment. In addition, a number of companies are bringing telemedicine into the app era: HealthTap, Doctor On Demand, Teladoc Health and Nurx all promise to dramatically reduce the cost of delivering care.
You might think that a large proportion of healthcare cost results from pharmaceuticals rather than doctors’ visits, but in fact in the US they account for only a tenth of total health spending (The American Academy of Actuaries, 2018; OECD, 2019). However, technology will likely drive costs down here, too. One pharma entrepreneur told me about the potential for personalized drugs that could dramatically improve the effectiveness of treatments for a wide range of conditions that account for large expenses, including many cancers, motor neuron disease and Alzheimer’s. And in the longer term, technologies such as CRISPR gene editing will give us unprecedented abilities to fix genetic defects that currently result in large and ongoing expenses, such as cystic fibrosis (Mosse, 2015).
You might be confused by my presentation of deflation as a positive thing. Economists, after all, tend to portray it as an evil that should be avoided at all costs. They are primarily concerned about growth as measured by GDP, which they argue makes us all better off. They assert that if people anticipate that prices will drop, they will be less likely to spend money, which will decrease output and lead owners of capital to make fewer investments, resulting in less innovation and lower employment. That, in turn, makes people spend even less, causing the economy to contract further. Economists point to Japan as a country that has been experiencing deflation and contracting output. To avoid this scenario, they argue for policies designed to achieve some amount of inflation, including the Federal Reserve’s so-called ‘quantitative easing’, which is intended to expand the supply of money and thus increase the nominal prices of things.
However, in a world where digital technology drives technological deflation, this reasoning is flawed. GDP is an increasingly problematic measure of progress because it ignores both positive and negative externalities. For instance, on the side of positive externalities, making education and healthcare radically cheaper could lower GDP while clearly making people better off. A second flaw in economists’ reasoning is that it assumes technological progress requires growth in paid production. A great counterexample is open-source software, which has driven a lot of technological progress outside of the traditional economic model. Increases in economic, informational and psychological freedom will allow us to accelerate the knowledge loop, which is the foundation of the progress that enables technological deflation.
Technological deflation is what puts society in a position where UBI becomes both possible and increasingly helpful. The total price of all the solutions that a person requires to take care of their needs has already started to decline, and will be lower still in the future. Technological deflation is what allows people to break out of the job loop.
With all this background information, you might wonder how much a universal basic income should be. My working proposal for the United States is $1,000 per month for everyone over the age of 18, $400 per month for everyone over the age of 12 and $200 per month for every child. These numbers might seem low, but bear in mind that the goal of UBI isn’t to make people well off—it’s just to allow them to take care of their needs without being forced into the job loop. Our collective thinking about the amounts required is muddled because we have mistakenly come to embrace the fulfillment of our unlimited wants, instead of focusing on the freedom to find purpose that comes from being able to take care of our needs. This is the second crucial reason for the earlier section re-establishing a clear distinction between wants and needs (the first was to show that capital is no longer our binding constraint). We should also remember that technological deflation will make fulfilling our needs progressively cheaper, while UBI won’t be introduced overnight. My numbers are intended to work over time, as some other government programs are phased out and a UBI is phased in.
Let’s consider these numbers further. While everyone will spend their UBI in different ways, a possible allocation for an adult might break down roughly as follows on a monthly basis: $400 for housing, $300 for food, $100 for transportation, $50 for clothing, and $50 for Internet access and associated equipment, with the balance spent differently each month (for example, on healthcare as required).
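As a quick sanity check, here is the arithmetic; the category labels are just the illustrative buckets from above:

```python
# Sanity check: the illustrative monthly allocation fits within the
# proposed $1,000 adult UBI, leaving a flexible balance.

allocation = {
    "housing": 400,
    "food": 300,
    "transportation": 100,
    "clothing": 50,
    "internet and equipment": 50,
}

spent = sum(allocation.values())
balance = 1_000 - spent
print(spent, balance)  # 900 100 (balance spent differently each month)
```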
You might wonder why I propose a lower payment for children and teenagers. The answer is, first, we can meet many of their needs more cheaply than we can for adults. And second, there is historic evidence that the number of children people have is partially determined by economics. UBI should not incentivize adults to have more children, so as to ‘skim’ their income. That’s especially important because, again, we want the birth rate to decline globally so we eventually reach peak population.
When you calculate how much money would be required to provide a UBI in the United States, based on the 2019 population, you wind up with an annual figure of about $3 trillion (U.S. Census Bureau, 2019). While that is a huge sum, it represents just 14 percent of the economy as measured by GDP in 2019, and under 8 percent of gross output, which measures not just final output but also intermediate steps (U.S. Bureau of Economic Analysis, 2020; Federal Reserve Bank of St. Louis, 2021e). Where will this money come from? Two sources: government budgets (paid for by taxes) and money creation.
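The roughly $3 trillion figure can be reconstructed as follows. Note that the age-group counts below are my own approximations of 2019-era population figures, labeled as assumptions, not the exact Census inputs used in the text:

```python
# Rough reconstruction of the annual cost of the proposed UBI.
# Age-group counts are approximate 2019-era assumptions, not the
# exact Census figures cited in the text.

adults   = 255e6  # roughly age 18+ (assumption)
teens    = 25e6   # roughly age 13-17 (assumption)
children = 48e6   # roughly age 0-12 (assumption)

annual_cost = 12 * (adults * 1_000 + teens * 400 + children * 200)
print(f"${annual_cost/1e12:.1f} trillion")  # ~$3.3T, near the cited ~$3T

# Shares of the economy, using the text's rounded $3 trillion figure:
cited_cost = 3.0e12
gdp = 21.4e12         # approximate 2019 US GDP
gross_output = 38e12  # approximate 2019 US gross output
print(f"{cited_cost/gdp:.0%} of GDP")                    # 14%
print(f"{cited_cost/gross_output:.1%} of gross output")  # 7.9%, under 8%
```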
In the US in 2019, the total revenues of all levels of government from taxation and fees were on the order of $5 trillion, so the money for a UBI could, in theory, come from redirecting existing budgets (OECD, 2020). There would then be another $2 trillion of money for critical government activities, such as law enforcement and national defense (the budget for the latter was $0.7 trillion in 2019) (“Military Budget of the United States,” 2020). Setting aside the question of the political process that might allow such a reallocation to be accomplished, it is not ruled out by sheer arithmetic.
UBI would also substantially increase government revenues. At the moment, nearly half of all earners don’t get paid enough to owe federal income tax. Once people have a UBI, every dollar earned from work, or from other sources such as interest or capital gains, could be taxed. For instance, if you are currently single and earn $10,000 in employment income, you do not need to file a federal income tax return. With a UBI, that income could be taxed at a rate of 25 percent, generating $2,500 in tax revenue. This could provide as much as $0.3 trillion based on a back-of-the-envelope calculation. People who already pay taxes would of course also effectively be paying back some of their UBI through these taxes. Applying a 25 percent tax rate to that group, which would receive roughly half of all UBI payments, would decrease the required amount by an additional $0.4 trillion. In other words, the net amount required for a UBI with a 25 percent federal tax rate applied starting with the first dollar earned would be about $2.3 trillion.
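Putting the offsets together, the arithmetic behind the $2.3 trillion net figure looks like this (all inputs are the rough estimates from the text):

```python
# Offset arithmetic behind the net UBI cost, using the text's estimates.

gross_ubi = 3.0e12     # gross annual cost from the earlier calculation
new_revenue = 0.3e12   # 25% tax on earnings that are currently untaxed
ubi_clawback = 0.4e12  # 25% tax on the ~half of UBI going to current taxpayers

net_ubi = gross_ubi - new_revenue - ubi_clawback
print(f"${net_ubi/1e12:.1f} trillion")  # $2.3 trillion
```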
Government revenues can also be expanded in ways that accomplish other goals. For instance, we should increase taxation on pollution, in particular greenhouse gas emissions. Taxes are a well-established way of dealing with negative externalities, and we have made good use of this effect: aggressively taxing cigarettes has resulted in diminished consumption, and higher gasoline taxes in Europe have contributed to more efficient cars. Estimates of the potential revenue from a carbon tax are around $0.3 trillion per year, and might be even higher. So, between offsets from income tax (which would occur automatically) and a greenhouse gas tax (which we need anyway), the funds needed for UBI could be reduced to about $2 trillion. Though that’s still a massive number, Social Security and Medicare/Medicaid each cost about $1 trillion. So in the extreme, UBI could be financed through a massive reallocation of existing programs.
There is, however, another way to provide much or all of the money needed for UBI. This solution would require moving away from today’s banking system, where the power to ‘create’ money is delegated to banks, to a system where money is issued directly to people instead. In today’s fractional reserve banking system, commercial banks extend more credit than they have deposits, with the Federal Reserve Bank acting as the so-called lender of last resort. For instance, in the 2008 financial crisis, the Fed bought up potentially bad assets to provide banks with liquidity. Europe too has had a policy of ‘quantitative easing’ (often abbreviated as QE), where a central bank infuses large amounts of money directly into commercial banks through loans and asset purchases on highly favorable terms, in the hope that the banks in turn will use this money to extend loans.
The idea is that by extending loans to businesses that need to finance the purchase of equipment or that require more working capital (to hire more salespeople, for example), banks will help the economy grow. But while banks have done that to some degree, they have increasingly focused on lending to large corporations and to wealthy people, to acquire second homes or even for financial speculation. Meanwhile, poor people have virtually no access to affordable credit, and lending to small businesses has been decreasing. The net result has been a rise in wealth and income inequality. Interestingly, this lopsided effect of bank-based money creation was understood as early as the 18th century in the writings of the French banker and economist Richard Cantillon, and has become known as the ‘Cantillon Effect’ (Stoller, 2020).
An alternative system would be to take money creation out of the hands of banks by forcing them to hold demand deposits at the Fed, or the corresponding central bank, in other countries. Known as ‘full-reserve banking,’ this system dramatically reduces risk in the banking sector by eliminating the possibility of bank runs and allows for new competitive banks to be formed without big upfront equity requirements. Credit extension would take place on the basis of long term deposits, and also happen via marketplace lending, as enabled by companies such as LendingClub, for individuals, and Funding Circle, for businesses. Money creation would happen simply by giving the new money directly to people as part of their UBI payments, a system sometimes referred to as “QE for the people.”
What orders of magnitude are we talking about? The terms M0, M1, M2 and M3 are progressively more encompassing measures of how much money has been created in the economy (meaning M3 is greater than M2, which is greater than M1, etc.). In the US, we no longer track the larger monetary aggregates, such as M3, and only use narrower measures, such as M2, and even that measure has been growing by about $1 trillion each year over the last decade. Since the beginning of the COVID-19 crisis, the Federal Reserve has created an astonishing additional $15 trillion of money as measured by M1.
Another way to get a sense of the total magnitude of money creation is by considering the development of debt. US households have about $10 trillion in mortgage debt, $1.2 trillion in auto loans, over $1.5 trillion in student loans and more than $900 billion in credit card debt (Federal Reserve Bank of New York, 2020; Fontinelle, 2021; White, 2021). Total household debt can increase by as much as $1 trillion in a single year. US business debt stands at around $35 trillion, about half of which is in the financial sector (Federal Reserve Bank of St. Louis, 2021f; 2021g).
The amount of money created annually is thus in the same ballpark as my proposed UBI. Historically, the idea of the government ‘printing’ money is associated with fears of runaway inflation of the sort that occurred in Germany’s Weimar Republic. There are several reasons why this would not be the case with a proper UBI scheme. First, the amount of new money created would be fixed and known in advance. Second, as we saw earlier, technology is a strong deflationary force. Third, the net amount of money created can be reduced over time by removing money from the economy, which could be accomplished through negative interest rates on bank deposits above a certain amount, with payment collected by the central bank. Alternatively, a system of ‘demurrage’ could be implemented, in which a fee is levied on all currency holdings or the holdings are shrunk outright (with digital currencies, the latter can be done automatically).
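As an illustration of how demurrage removes money from the economy, consider this small sketch; the 2 percent annual fee is purely hypothetical:

```python
# Illustrative demurrage: a periodic fee that shrinks currency holdings,
# removing money from the economy over time. The 2% rate is hypothetical.

def holdings_after(principal: float, annual_fee: float, years: int) -> float:
    """Holdings remaining after applying the fee once per year."""
    return principal * (1 - annual_fee) ** years

print(round(holdings_after(100_000, 0.02, 10)))  # 81,707: ~18% removed in a decade
```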
I expect the path to UBI to involve some combination of changes to government budgets, taxation and the monetary system. As we will see later, it is also possible that UBI emerges outside of government through a decentralized project using blockchain technology. However we wind up getting there, my back-of-the-envelope calculations above show that UBI is affordable in the United States today. Similar calculations have been carried out for other countries and show affordability even in many less-developed countries. Economic freedom for all is already within our reach.
One of the many attractive features of UBI is that it doesn’t remove people’s ability to sell their labor. Suppose someone offers you $5 per hour to look after their dog. Under UBI you are completely free to accept or reject that proposal, without distortion from a minimum wage. The reason we need a minimum wage in the current system is to guard against exploitation, but this problem exists primarily because people do not have the option to walk away from exploitive employment. If UBI were in place, they would.
The example of dog-sitting shows why a minimum wage is a crude instrument that results in distortion. If you like dogs, you might happily take the work for $5 per hour. You might be able to do it while writing a blog post or watching videos on YouTube. Government should not interfere with such transactions. The same is true of working in a fast food restaurant. If employees have the option of walking away from a job, the labor market will naturally find out how much it takes to get someone to work at, say, McDonald’s. That might turn out to be $5 per hour or it might turn out to be $30 per hour (the former being exceedingly unlikely for McDonald’s, though it might be the case for a local burger joint that people love working at). Finally, with UBI in place, labor organizing for collective bargaining would become easier, as various tactics currently used by employers rely on the threat of unemployment and the large demand for even poorly compensated jobs from the ‘precariat’ (a term used by Guy Standing to describe the growing group of people who lead a precarious existence as the result of intermittent employment and other forms of underemployment).
One concern often expressed about UBI is that people would stop working altogether and cause the labor market to collapse. Experiments with UBI, such as the Manitoba Basic Annual Income Experiment in Canada in the 1970s, showed that while people somewhat reduced their working hours when they were paid such income, there was no dramatic labor shortage. People will generally want to earn more than their basic income provides, and the increase in the price of labor will make working more attractive. Furthermore, in conjunction with the income tax change discussed in the previous section, UBI avoids a major issue with many existing welfare programs in which people lose their entire benefit when they start to work, resulting in effective tax rates above 100 per cent (i.e. people make less working than not). With UBI, whatever you earn is in addition to your basic income and you pay the normal marginal tax rate on that. There is no benefit cliff and hence no disincentive to paid work.
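A stylized comparison, with purely hypothetical numbers, makes the benefit-cliff point concrete:

```python
# Stylized benefit cliff vs. UBI. All numbers are hypothetical.

def net_income_welfare(earnings: float, benefit: float = 12_000) -> float:
    """Traditional program: the benefit is lost entirely once you work."""
    return earnings if earnings > 0 else benefit

def net_income_ubi(earnings: float, ubi: float = 12_000, tax: float = 0.25) -> float:
    """UBI: kept in full, with earnings taxed at a flat marginal rate."""
    return ubi + earnings * (1 - tax)

# Effect of taking a $10,000/year job under each scheme:
print(net_income_welfare(10_000))  # 10,000 -> less than the 12,000 benefit
print(net_income_ubi(10_000))      # 19,500 -> every earned dollar adds income
```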
But what about dirty or dangerous jobs? Will there be a price of labor high enough to motivate anyone to do them, and will the companies that need this labor be able to stay in business? Businesses will have a choice between paying people more to do such work or investing in automation. In all likelihood, the answer will be a combination of both. As we already saw earlier, expensive labor has historically been a driver of innovation. Because our ability to automate has gone up dramatically with digital technology we can in fact now automate these dirty and dangerous jobs. That also means that we do not have to worry about labor-price induced inflation. Put differently, technological deflation can continue even if the cost for some labor increases.
UBI would have three other important impacts on the labor market. The first has to do with volunteering. Currently, there are not enough people looking after the environment or taking care of the sick and elderly. Labor is frequently undersupplied in these sectors because the demand isn’t backed by enough money, and these activities thus rely largely on donations. Many elderly people, for instance, don’t have sufficient savings to afford personal care. When people have to work pretty much every free hour to meet their needs, they don’t have time to volunteer. Providing them with UBI could vastly increase the number of volunteers (we already observe increased volunteering among pensioners, who are effectively on a UBI).
The second big effect UBI would have on the labor market is a dramatic expansion of the scope for entrepreneurial activity. A lot of people who would like to start a local business, such as a nail salon or a restaurant, have no financial cushion and can never quit their jobs to give it a try. I sometimes refer to UBI as ‘seed money for everyone’: more businesses getting started in a community would mean more opportunities for fulfilling local employment.
Once they get going, some of these new ventures would receive traditional financing, including bank loans and venture capital, but UBI also has the potential to significantly expand the reach and importance of crowdfunding. If you feel confident that your needs are taken care of, you will be more likely to start an activity that has the potential to attract support via crowdfunding, such as recording music videos and putting them up on YouTube. Also, if your needs are taken care of, you will be more likely to use a fraction of any income you receive on top of UBI to support crowdfunded projects.
The third big impact of UBI on the labor market would be the growth of human qua human jobs (credit for this concept goes to Yochai Benkler, who mentioned it to me in a conversation). There are certain jobs that, even when we can automate them, we will sometimes prefer to have done by a human. Examples include food preparation and serving, massage and mental health therapy, and of course arts and crafts. A great example of this desire is the persistence and even growth of live music following the invention of sound recording. There is something special about a performance that results from a sense of shared humanity. It is our intuitive understanding that the performer has an inner life of thoughts and emotions akin to our own that makes live music special. Conversely, it can be highly pleasurable to perform for somebody or to cook a meal for someone. UBI will greatly increase the room for people to participate on both sides of these human qua human labor markets.
I have addressed the three biggest objections to UBI by showing that it is affordable, that it will not result in inflation, and that it will have a positive impact on the labor market and innovation. There are some other common objections that are worth addressing, including a moral objection that people have done nothing to deserve such an income, which is answered in its own section below.
One other objection to UBI is that it diminishes the value of work in society. In fact the opposite is true: UBI recognizes how much unpaid work exists in the world, including child rearing. We have created a situation where the word ‘work’ has become synonymous with getting paid. As things stand, if you don’t get paid for doing something, it’s not considered work. An example of another approach is the one taken by Montessori Schools, which base their teaching on creativity and problem-solving: they use “work” to refer to any “purposeful activity.”
A further objection is that UBI robs people of the sense of purpose that work provides if they stop working or stop looking for work. However, the idea of work as a source of human purpose is a relatively new one, and it is largely attributable to the Protestant work ethic. Earlier, human purpose tended to be rooted in religion, which offered meaning in return for adhering to certain precepts (those might include work as one of several commandments, but not as a source of purpose in itself). The source of human purpose is thus subject to redefinition over time. As we transition to the Knowledge Age, contributing to or sustaining the knowledge loop is a more suitable focus. So is carrying out the responsibilities that we as humans have because of the power that knowledge has given to us.
One other frequent objection is that people will spend their basic income on alcohol and drugs, an assertion often accompanied by claims that the casino money received by Native Americans has caused drug problems among that population. There is no evidence to support this objection: no UBI pilots have found a significant increase in drug or alcohol abuse, and in the meantime the opioid crisis has been the largest drug epidemic in US history. Research shows that, contrary to widely held belief, casino money has contributed to declines in obesity, smoking and heavy drinking (Wolfe et al., 2012).
Some people object to UBI not because they think it won’t work, but because they claim it is a cynical ploy by the rich to silence the poor and keep them from rebelling. Some who voice this criticism genuinely believe it, but others use it as a tool of political division. Whatever the case, the impact of UBI is likely to be the opposite, as Thomas Paine recognized. In many parts of the world, including the United States, the poor are effectively excluded from the political process. They are too busy holding down one or more jobs to be able to run for office—or sometimes even to vote. American elections are held on a weekday, and employers are not required to give employees time off work to go to the polling station. Outside of elections, many democratic processes, such as organizing protests or even strikes, rely on people who can contribute time to the effort. UBI will dramatically improve people’s ability to engage in these ways and thus challenge the status quo.
Before we examine informational freedom, we should remind ourselves why individuals deserve to have enough to take care of their needs, regardless of any economic contributions they may have made.
Consider the air we breathe. None of us did anything to make it: we just inherited it from the planet. Similarly, no one who is alive today did anything to invent electricity. It had already been invented, and we have inherited its benefits. You might point out that electricity costs money and people have to pay for it, but they pay for the cost of producing it rather than the cost of its invention. Here we might cite many other amazing examples of our collectively inherited human knowledge, such as antibiotics, the wheel and sliced bread.
We are incredibly fortunate to have been born into a world where capital is no longer scarce. This means that using our knowledge to take care of everyone’s needs is a moral imperative. UBI accomplishes that by giving people economic freedom, allowing them to escape the job loop, and accelerating the knowledge loop that gave us this incredible knowledge in the first place.
Even if you’re now convinced of the importance of the knowledge loop and attention scarcity in the digital age, and persuaded by my suggestions for increasing economic, informational and psychological freedom, that still leaves a huge question: can it be done?
You may think my proposals to change everything from how money is created to who controls computation are too extreme. You may dismiss them as utopian and argue that we cannot change everything about how we live. And yet to do so ignores the fact that we have already changed everything twice. Each of our two prior shifts in scarcity—from food in the Forager Age to land in the Agrarian Age, and from land to capital in the Industrial Age—was accompanied by extraordinary transformations.
When we transitioned from the Forager Age to the Agrarian Age, we went from nomadic to sedentary, from egalitarian to hierarchical, from promiscuous to monogamous and from animistic to theistic religions. When we went from the Agrarian Age to the Industrial Age, we moved from the countryside to cities, from large extended families to nuclear ones, from commons to private property, and from great-chain-of-being theologies to the Protestant work ethic. Though the first of these transitions took place over millennia and the second one over centuries, they still show that a shift in the binding scarcity comes with profound changes in how we live.
With scarcity shifting once more, from capital to attention, we will again have to change everything—no matter how daunting that may seem. What follows is a series of ideas for how each of us can contribute to that change. There are many different projects to be tackled—my list is far from exhaustive, and should be regarded as inspiration for how we can take responsibility.
One action we should all take is the development of a mindfulness practice. The word ‘mindfulness’ is used a lot and is easy to dismiss, but for the reasons discussed in the section on psychological freedom, without such a practice it will be difficult to participate fully in the other actions discussed below. We each need to find out what works for us, whether it’s meditation, yoga, running or something else entirely. I do a conscious breathing exercise every day, first thing in the morning and last thing in the evening. I started doing this about five years ago and the change in my life has been profound.
It is also important that we help and inspire other people to do the same. There have been discussions on whether or not math should be mandatory in school, but there has been no such debate around mindfulness. It is entirely possible to go through school and college or university without developing a practice of one’s own. Every one of us would be better off with more mindfulness—the same cannot be said about algebra.
Another way to contribute to the spread of mindfulness is through research and entrepreneurship. Much remains to be understood about how different techniques, or drugs such as psilocybin, influence our brains. There is plenty of room for services such as individualized coaching, and for more apps that help with meditation and conscious breathing.
Imagine you live in a society that has achieved economic and informational freedom. Would you make good use of those freedoms, or would your beliefs and fears hold you back from engaging in the knowledge loop? Or worse yet, would your attention be taken up by systems designed to capture it for someone else’s benefit? Would you feel free to pursue your own interests, or would your Industrial Age beliefs keep you trapped in the job loop? Would you have a strong sense of purpose, or would you feel adrift without a clear career path and a boss telling you what to do? Would you seek out new knowledge, or would you seek to confirm what you already believe? Would you feel free to create, or would you hold yourself back out of fear? And would you recognize when your attention is being manipulated?
While the previous sections on economic and informational freedom examined changes that require collective action, this section addresses individual action. We must free ourselves from our deeply engrained Industrial Age beliefs, and we can start on that path by developing some form of mindfulness practice. This, in my view, is essential to freely directing our attention in the Knowledge Age.
I should start by acknowledging the profound psychological dimension of the transition out of the Industrial Age. Social and economic disruption was making life stressful even before the Covid-19 pandemic. The unfolding climate crisis and the ongoing escalation of political and social tensions around the world are further causes for anxiety. To make matters worse, we have yet to learn to live healthily with new technology, and we obsessively check our smartphones during meetings, while driving, and before we go to sleep. This is taking an immense psychological toll, as increases in sleep disorders, suicide rates, drug overdoses and antisocial behaviors (e.g., bullying) show.
We need to go beyond that general insight about the population at large and look at what goes on in our own heads, but that requires time and effort because our brains are easily hijacked by emotional reactions that interfere with introspection and self-awareness. Can we overcome the anxieties that might prevent us from gaining, creating and sharing knowledge? Can we put down our phones, when they are designed to draw us in? It might seem like a monumental task, but humankind is uniquely adaptable. After all, we have navigated two prior transitions that required dramatic psychological change, first from the Forager Age to the Agrarian Age, and then to the Industrial Age.
We now understand why humans can adapt so well. As neuroscientists have discovered, our brains remain plastic even as we age, meaning that what and how we think can be changed. In fact, we can change it quite deliberately, with techniques such as conscious breathing, meditation and cognitive behavioral therapy (McClintock, 2020; “Cognitive Behavioral Therapy,” 2020). As a crude approximation, the brain can be thought of as consisting of two systems: one that instinctually produces emotions and snap judgments and one that allows for rational thought but requires effort (Kahneman, 2013). Mindfulness techniques allow us better access to our rational faculties by limiting the extent to which our instinctual reactions control our behavior.
The idiom “take a deep breath” captures this idea well: pause and reflect before acting. The larger concept of deliberately freeing the mind is found in both Eastern and Western traditions. The Stoic philosophers developed practices of thought to temper the emotions, such as imagining the loss of a possession repeatedly before it occurs. In Buddhism, meditation techniques help practitioners achieve similar psychological freedom. We now have neuroscience research that lets us begin to understand how these techniques work, showing that they have persisted over time not as a matter of religious belief or superstition, but because they are grounded in the physical reality of our brains (Yoon et al., 2019).
We will now examine what we need to free ourselves from so that we can direct our attention to contributing to the knowledge loop and other Knowledge Age activities.
The extraordinary success of capitalism has made us confused about work and consumption. Instead of seeing them as a means to an end, we now see them as sources of purpose in themselves. Working harder and consuming more allows the economy to grow, so that... we can work harder and consume more. Though this sounds crazy, it has become the default position. We went so far as to ingrain this view in religion, moving to a Protestant work ethic that encourages working harder and earning more (Skidelsky & Skidelsky, 2013). Similar changes have taken place throughout Asia, where other religions have undergone this transition, most prominently in the case of the “New Confucianism” as championed by Lee Kuan Yew, the founding Prime Minister of Singapore (Pezzutto, 2019).
Even worse, we frequently find ourselves trapped in what’s known as ‘positional consumption’, or “Keeping up with the Joneses.” This is where what matters to us are not the inherent benefits of the things we buy, but their relative prestige. If our neighbor buys a new car, we find ourselves wanting an even newer and more expensive model. Such behavior has emerged not just with respect to goods but also to services—think of the $1,000 haircut or the $595-per-person dinner at a Michelin-starred restaurant (Orlo Salon, n.d.; Cross, 2020). Of course, much of this confusion has been fueled by trillions of dollars of advertising spend aimed at convincing us to buy more, flooding us with imagery of how happy we’ll be if only we do. Between economic policy, advertising and religion, it is no wonder that many people are convinced that materialism is part of human nature.
However, our addiction to consumption is exactly that—an addiction that exploits a mechanism in the brain. When you desire something, a new car for instance, your brain gets a dopamine hit based on your anticipated happiness, which makes you feel good. Once you get the car, you compare it to your prior expectations. If the car turns out to be less than you expected, your dopamine levels will decrease, and this can cause extreme disappointment. If your expectations are met, your dopamine levels will stay constant. Only if your expectations are exceeded will you get another hit of dopamine. As you get used to having the new car, your expectations adjust, and so quite quickly after the initial purchase you no longer receive any new dopamine from it. The unfortunate result of all this is known as the ‘hedonic treadmill’: once your brain grows accustomed to something like a car or an apartment, recreating the same feeling of happiness as your original anticipation requires a faster car or a bigger apartment (Szalavitz, 2017).
That same mechanism, however, can provide long-term motivation when the anticipation is aimed at creation or exploration instead of consumption. As an artist or scientist, you can forever seek out new subjects. As a traveler, you can forever seek out new destinations. Freedom from wanting—in the sense of the delusional belief that consumption as such will bring happiness or meaning—is possible if we recognize that we can point our brain away from consumption and towards other pursuits, many of which are part of the knowledge loop. Redirecting our reward mechanism re-establishes the difference between needs and wants. You need to eat, while you may want to eat at a Michelin-starred restaurant. You need to drink water, while you may want to drink an expensive wine. This is why UBI, as discussed earlier, focuses on meeting needs rather than wants. Once you are economically free to meet your needs and are freeing yourself from wanting, you can direct your attention to the knowledge loop.
Suppose skiing is your passion and you want to keep seeking the perfect powder. How would a UBI let you focus your attention on it? On a UBI alone, you might not be able to afford an annual ski trip to the Swiss Alps, but ski equipment is actually not expensive when you consider that it can last for many years and can be shared with others. And if you’re willing to hike up a mountain, you can ski as much as you want without buying a lift pass at an expensive resort.
In this instance, psychological freedom means freeing yourself of assumptions that you might have about how to go skiing. It helps, of course, to remind yourself that many of these assumptions are formed by companies that have a commercial interest in portraying skiing that way. If you can learn to reframe it as an outdoor adventure and a chance to be in nature, it needn’t be expensive. A similar logic holds for any number of other activities.
To free ourselves from wanting, we should remind ourselves of the difference between needs and wants, learn how our brain works and point our seeking away from consumption towards creative and experiential activities. For many of us, that means letting go of existing attachments to wants that we have developed over a long time. Finally, we should always cast a critical eye on the advertising we encounter, understanding that it perpetuates illusions about needs and wants and keeps us trapped in the job loop.
Young children ask dozens of questions a day, often annoying their parents who don’t have the time to answer. Humans are naturally curious, and it is this curiosity that has driven much of our progress (Shin & Kim, 2019). At the same time, our curiosity was not well-suited to the Industrial Age. If you employ people in a factory job that has them performing the same action all day every day, curiosity is not a desirable trait. The same goes for many modern service jobs, such as operating a cash register or delivering packages.
The present-day educational system was built to support the job loop of the industrial economy, so it is not surprising that it tends to suppress rather than encourage curiosity (Gatto et al., 2017). While educators hardly ever identify “suppressing curiosity” as one of their goals, many of our educational practices do exactly that. For instance, forcing every eight-year-old to learn the same things in math, teaching for tests, and cuts to music and art classes all discourage curiosity.
A critical way that we undermine curiosity is by evaluating areas of knowledge according to whether we think they will help us get a “good job.” If your child expressed an interest in learning Swahili or in playing the mandolin, would you support that? Or would you say something like, “But how will you earn a living with that?” The latest iteration of this thinking is an enthusiasm for learning how to code in order to get a high-paying job in tech. Here again, instead of encouraging curiosity about coding, either for its own sake or as a tool in science or art, we force it into the Industrial Age logic of the job loop.
We need to free ourselves from this instrumental view of knowledge and embrace learning for its own sake. As we’ve already seen, UBI can go a long way in allaying fears that we won’t be able to support ourselves if we let our curiosity guide our learning. But will we have enough engineers and scientists in such a world? If anything, we’ll likely have more than we do under the current system. After all, forcing kids to study something is a surefire way to squelch their natural curiosity.
The knowledge loop, accelerated by digital technology, brings to the fore other limits to learning that we must also overcome. The first of these is confirmation bias. As humans we find it easier to process information that confirms what we already believe to be true. We can access a huge amount of online content that confirms our pre-existing beliefs rather than learning something new. We risk becoming increasingly entrenched in these views, fracturing into groups with strong and self-reinforcing beliefs. This phenomenon becomes even more pronounced with the automatic personalization of many Internet systems, with ‘filter bubbles’ screening out conflicting information (Pariser, 2021).
Another barrier to learning is the human tendency to jump to conclusions on the basis of limited data. After a study suggested that smaller schools tended to produce better student performance than larger schools, educators began to create many smaller schools, only for a subsequent study to find that small schools were also heavily represented among the worst performers. It turns out that the more students a school has, the more closely it approximates the overall distribution of students; a small school is therefore more likely to have students who perform predominantly well or predominantly poorly.
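The statistical effect is easy to verify for yourself. The following simulation, with invented numbers, draws every student from the same ability distribution and still finds small schools crowding both ends of the ranking:

```python
# A simulation with invented numbers: every student is drawn from the same
# ability distribution, yet small schools crowd both ends of the ranking.
import random

random.seed(0)

def school_average(n_students):
    # Mean score of one school; individual scores ~ Normal(100, 15).
    return sum(random.gauss(100, 15) for _ in range(n_students)) / n_students

schools = [("small", school_average(20)) for _ in range(500)]
schools += [("large", school_average(500)) for _ in range(500)]
schools.sort(key=lambda s: s[1])

bottom50, top50 = schools[:50], schools[-50:]
print("small schools among bottom 50:", sum(k == "small" for k, _ in bottom50))
print("small schools among top 50:", sum(k == "small" for k, _ in top50))
# Small schools dominate both extremes purely because averages over fewer
# students vary more, not because size causes performance.
```

Nothing about school size drives performance here; only the sample size differs, which is exactly the trap the educators fell into.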
Daniel Kahneman discusses such biases in his book Thinking, Fast and Slow. We rely on heuristics that result in confirmation bias and storytelling because many of the systems in the human brain are optimized for speed and effortlessness. These heuristics aren’t necessarily all bad as they can contribute to important ruin avoidance mechanisms (something which Kahneman ignores), but they served us better in a world with an analog knowledge loop, where more time existed for correcting mistakes. In today’s high-velocity digital knowledge loop, we must slow ourselves down or risk passing along incorrect stories. A recent study showed that false stories spread online many times more quickly than true ones (Vosoughi et al., 2018).
Most of the online experiences we currently interact with are designed to exploit our cognitive and emotional biases, not to help us overcome them. Companies such as Facebook and Twitter become more valuable as they capture more of our attention by appealing to what Kahneman calls “System 1”: the parts of our brain that run automatically and that are responsible for our cognitive biases (Kahneman, 2013). You are more likely to look at cute animal pictures or status updates from your friends than to read an in-depth analysis of a proposal for a carbon tax. The recent explosion of “fake news” exploits this flaw in our systems, making large-scale manipulation possible.
New systems can help here. We might, for instance, imagine an online reader that presents opposing viewpoints to a given story. For each topic, you could explore both similar and opposing views. Such a reader could be presented as a browser plug-in, so that when you’ve ventured beyond the confines of a social media platform and are looking around on the Web, you could be drawn into actively exploring beyond the bounds of your usual sources (Wenger, 2011).
Fundamentally though, we all have to actively work on engaging what Kahneman calls “System 2”: the part of our brain that requires effort, but that lets us think independently and rationally. Developing and keeping up some kind of mindfulness practice is a key enabler for overcoming biases and freeing ourselves to learn.
After learning, the next step in the knowledge loop is creating. Here again, we need to work on our freedom. As Picasso once said, “Every child is an artist. The problem is how to remain an artist [...]” As adults we censor ourselves, inhibiting the natural creativity we enjoyed as children. The educational system, with its focus on preparing for standardized tests, further crushes our creative impulses. Many people eventually come to believe that creativity is something that they’re not capable of.
The job loop further solidifies these beliefs about creativity, and even institutionalizes them. Society categorizes people into amateurs and professionals. We venerate the professional guitar player, artist or sculptor but denigrate the amateur, dismissing their work as “amateurish.” When we start to measure creativity by how much money an artist or musician makes rather than by the passion they feel for their pursuit, it is no wonder that many people fear they will never measure up.
Distractions also inhibit our impulses to create. There’s always another YouTube video to watch, another email to read, another post to glance at. Our brains are poorly suited to environments that are overloaded with information specifically curated to capture our attention. We evolved in a world where potentially relevant information—for instance, the sound of an approaching animal—could be a matter of life or death, and our brains are thus easily distracted. This is an example of a maladaptation to the current environment akin to our evolutionary craving for sugar in a world with added sugar everywhere.
In order to be able to create, we need to disconnect ourselves from many of those strategically selected and concentrated stimuli. Again, a mindfulness practice will be helpful here, allowing us to tune out interruptions, and there are many hacks we can use to prevent them in the first place, such as putting our phone into Do Not Disturb (DND) mode (I keep my phone on DND at all times with only family members being able to get through—that way I use my phone when I want to and not when Facebook or Twitter want me to).
Even after we have created something, many of us fear that when we share it, it will be criticized. Someone will call our painting ugly, our code incompetent, or our proposal naive. Given the state of much online commentary and the prevalence of ‘trolling,’ those fears are well-founded—but they need not inhibit our participation in the knowledge loop. Part of the answer is to work on the inner strength to continue sharing despite criticism.
Another part of the answer is that we should cultivate empathy. Whenever we comment on the work of others online, we should keep in mind that they dared to create and share it. And we should also remember that by contributing to the knowledge loop, they have engaged in the very thing that makes us human. Those who manage online communities should provide tools for flagging and banning people who are abusive or make threats aimed at shutting down sharing.
If you live in a country that is subject to dictatorship, censorship or mob rule, sharing opinions, art or research can result in imprisonment, torture or even death. And yet despite that, we routinely find people in these places who share freely. We should take inspiration and courage from them, and we should support the building of censorship-resistant systems that enable sharing in these places and allow for pseudonymous and anonymous expression (even though such systems can ultimately provide only limited protection, as discussed in the earlier section on privacy).
In the Knowledge Age, there is such a thing as sharing too much—not sharing too much personal information, but mindlessly sharing harmful information. Threats, rumors and lies can take on lives of their own, and we can find ourselves contributing to an information cascade, in which an initial bit of information picks up speed and becomes an avalanche that destroys everything in its path. So, as with freedom in other contexts, there’s a double-edged aspect to having the psychological freedom to share. We need to free ourselves from fear to share our creations and our ideas, while also needing to control our emotional responses so that we do not poison the knowledge loop. Ask yourself whether what you’re sharing will enhance or hurt the pursuit of knowledge. If the answer is not obvious, it might be better not to share.
Self-regulation lies at the heart of psychological freedom, and allows us to separate our wants from our needs. It lets us consider our initial reactions to what others are saying, writing or doing without immediately reacting in anger. It lets us have empathy for others and be open to learning something new. And it lets us overcome our fears of creating and sharing.
Still, as humans we have a need for meaning that has us searching for purpose and recognition in ways that can all too easily result in us being psychologically unfree. Existential angst can express itself in many different forms, ranging from an inability to do anything to a manic desire to do everything. The persistence of religion is partly explained by its ability to address the need for meaning. Most religions claim that our purpose is to follow a set of divinely ordained rules, and that if we follow them, the respective god or gods will recognize and potentially even reward our existence.
Many organized religions intentionally disrupt the knowledge loop. They restrict the process of critical inquiry by which knowledge improves over time, through mechanisms such as censorship and “divine knowledge,” which is often encoded in sacred texts. This serves to maintain the power of the gatekeepers to the texts and their interpretation. While adhering to a religion may meet your existential psychological need for meaning, it may also make it difficult for you to participate fully and freely in the knowledge loop.
The same is true for many informal beliefs. Belief in a preordained individual destiny can be used to fulfill the need for meaning, but it also acts as an obstacle to psychological freedom via thoughts such as “this was meant to be, there’s nothing I can do about it.” Or people can belong to communities that meet the need for meaning through recognition, but impose a strict conformity that restricts participation in the knowledge loop. It can often be difficult to recognize how much of one’s behavior is controlled by custom or peer pressure.
A new humanism, built around recognizing the importance of knowledge, can provide an alternative that enhances psychological freedom instead of inhibiting it. With participation in the knowledge loop as a key source of purpose, learning new things, being creative and sharing with others is encouraged. This doesn’t mean that everyone has to be the proverbial rocket scientist. There are a great many ways to participate in the knowledge loop, including creating art, as well as caring for others and the environment.
In order to help people be psychologically free, we need to substantially change most countries’ education systems. Today’s systems were developed to support the Industrial Age, and their goal is to shape people to participate in the job loop. We need a system that celebrates knowledge for its own sake, allows students to discover their individual interests and deepen those into a purpose, and teaches people about how to be psychologically free. In other words, we need to put humanism at the center of learning.
Humanism and the knowledge loop thus have important implications for how we can reorganize society and take responsibility for the world around us. This is the subject of Part Five.
Can you read any book you want to at the push of a button? Can you listen to any music ever recorded the moment you feel like it? Do you have instant access to any academic publication from around the world that you wish to consult? In the past, when copying and distributing information was expensive, asking such seemingly outrageous questions would not have made any sense. In the early days of writing, when books were copied by hand, they were rare, costly and subject to errors. Very few people had access to them. And even as recently as the last years of the 20th century, it typically took physically costly processes of production and distribution to get books, musical recordings and other items into people’s hands.
But in the digital age, when the marginal cost of making and distributing a copy has shrunk to zero, all limitations on digital information are artificial. They involve adding costs to the system in order to impose scarcity on something that is abundant. For example, billions of dollars have been spent on preventing people from copying and sharing digital music files (CBS News, 2001).
Why are we spending money to make information less accessible? When information existed only in analog form, the cost of copying and distributing it allowed us to build an economy and a society that was based on information scarcity. A record label, for instance, had to recruit musical talent, record in expensive studios, market the music, and make and distribute physical records. Charging for the records allowed the label to cover its costs and turn a profit. Now that individuals can make music on a laptop and distribute it for free, the fixed costs are dramatically lower and the marginal cost of each listen is zero. In this context, the business model of charging per record, per song or per listen, and the copyright protections required to sustain it, no longer make sense. Despite the ridiculous fight put up by the music industry, our listening is generally either free (meaning that it is ad-supported) or part of a subscription. In either case, the marginal cost of each listen is zero.
Despite this progress in the music industry, we accept many other artificial restrictions on information access because this is the only system we know. To transition into the Knowledge Age, however, we should strive for an increase in informational freedom. This is not unprecedented in human history: prior to the advent of the printing press, stories and music were passed on orally or through copying by hand. There were no restrictions on who could tell a story or perform a song.
To be clear, information is not the same as knowledge. Information, for instance, includes the huge number of log files generated every day by computers around the world, many of which may never be analyzed. We don’t know in advance what information will turn out to be the basis for knowledge, so it makes sense to retain as much information as possible and make access to it as broad as possible. This section will explore various ways in which we can expand informational freedom, the second important step that will facilitate our transition to a Knowledge Age.
The Internet has been derided by some who claim it is a small innovation compared to, say, electricity or vaccinations—but they are mistaken. The Internet allows anyone, anywhere in the world, to learn how electricity or vaccinations work. Setting aside artificial limitations imposed on the flow of information, the Internet provides the means to access and distribute all human knowledge to all of humanity. As such, it is the crucial enabler of the digital knowledge loop. Access to the Internet is a core component of informational freedom.
At present, over four and a half billion people are connected to the Internet, a number that is increasing by hundreds of millions every year (Kemp, 2020). This tremendous growth has become possible because the cost of access has fallen dramatically. A capable smartphone costs less than $100 to manufacture. In places with competitive markets, 4G bandwidth is provided at prices as low as $0.10 per GB (Roy, 2019).
Even connecting people who live in remote parts of the world is getting much cheaper, as the cost for wireless networking is decreasing and we are increasing our satellite capacity. For instance, there is a project underway to connect rural communities in Mexico for less than $10,000 per community (Rostad, 2018). At the same time, in highly developed economies such as the US, ongoing technological innovation such as MIMO wireless technology will further lower prices for bandwidth in densely populated urban areas (“MIMO,” 2020).
All of this means that even at relatively low levels, UBI would cover the cost of Internet access, provided that we keep innovating and maintain highly competitive and properly regulated markets for access to it. This is an example of how the three different freedoms reinforce each other: economic freedom allows people to access the Internet, which is the foundation for informational freedom, and, as we will see later, using it to contribute to the knowledge loop requires psychological freedom.
As we work to make affordable Internet access universal, we must also address limitations to the flow of information on the network. In particular, we should fight against restrictions on the Internet imposed by governments and Internet service providers (ISPs). Both are artificial limitations, driven by a range of economic and policy considerations opposed to the imperative of informational freedom.
By design, the Internet has no built-in concept of geography. Most fundamentally, it constitutes a way to connect networks with one another (hence its name), regardless of where the machines involved are located. Any geographic restrictions have been added in, often at great cost. Australia and the UK have recently mandated so-called ‘firewalls’ around their countries, not unlike China’s own “Great Firewall” (“Great Firewall,” 2020), and countries like Turkey have had one in place for some time. These ‘firewalls’ place the Internet under government control, restricting informational freedom. For instance, Wikipedia was not accessible in Turkey for many years. Furthermore, both China and Russia have banned the use of virtual private network (VPN) services, tools that allow individuals to circumvent these artificial restrictions (Wenger, 2017a). As citizens, we should be outraged that governments are cutting us off from accessing information freely around the world, both on principle and because this is a bad use of resources. Imagine governments in an earlier age spending taxpayer money so citizens could dial fewer phone numbers.
The same equipment used by governments to impose geographic boundaries on the Internet is used by ISPs to extract more money from customers, distorting access in the process through practices including paid prioritization and zero-rating. To understand why they are a problem, let’s take a brief technical detour.
When you buy access to the Internet, you pay for a connection of a certain capacity. If it provides 10 megabits per second and you use that connection fully for sixty seconds, you will have downloaded (or uploaded, for that matter) 600 megabits, or 75 megabytes given 8 bits per byte, the equivalent of 15–25 songs on Spotify or SoundCloud (assuming 3–5 megabytes per song). The fantastic thing about digital information is that all bits are the same. It doesn’t matter whether you accessed Wikipedia or looked at pictures of kittens: you’ve paid for the bandwidth, and you should be free to use it to access whatever parts of human knowledge you want.
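The arithmetic fits in a few lines of Python, as a quick sanity check:

```python
# A quick check of the bandwidth arithmetic above.
mbps = 10                        # connection capacity, megabits per second
megabits = mbps * 60             # one minute of full use: 600 megabits
megabytes = megabits / 8         # 8 bits per byte: 75 megabytes
print(megabytes / 5, megabytes / 3)  # 15.0 to 25.0 songs at 3-5 MB each
```

Nothing in the calculation depends on what the bits encode, and that is the principle at stake in what follows.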
That principle, however, doesn’t maximize profit for the ISP. In order to do that, they seek to discriminate between different types of information, based on consumer demand and the supplier’s ability to pay. First they install equipment that lets them identify bits based on their origin. Then they go to a company like YouTube or Netflix and ask them to pay to have their traffic ’prioritized’ relative to traffic from other sources. Another form of manipulation that is common among wireless providers is so-called zero-rating, where some services pay to be excluded from monthly bandwidth caps. If permitted, ISPs will go a step further: in early 2017, the US Senate voted to allow them to sell customer data, including browsing history, without customer consent (Wenger, 2017b).
The regulatory solution to this problem is blandly referred to as ‘net neutrality’, but what is at stake here is informational freedom itself. Our access to human knowledge should not be skewed by our ISPs’ financial incentives. We might consider switching to another ISP that provides neutral access, but in most geographic areas, especially in the United States, there is no competitive market for broadband Internet access. ISPs either have outright monopolies (often granted by regulators) or operate in small oligopolies. For instance, in the part of New York City where I live, there is just one broadband ISP.
Over time, technological advances such as wireless broadband may make the market more competitive, but until then we need regulation to avoid ISPs limiting our informational freedom. This concern is shared by people all over the world: in 2016, India objected to a plan by Facebook to provide subsidized Internet access that would have given priority to their own services, and outlawed ‘zero rating’ altogether (Vincent, 2016).
Once you have access to the Internet, you need software to connect to its many information sources. When Tim Berners-Lee invented the World Wide Web in 1989, he specified an open protocol, the Hypertext Transfer Protocol (HTTP), that anyone could use both to make information available and to access it ("Tim Berners-Lee," n.d.). In doing this, Berners-Lee enabled anyone to build software, namely Web servers and browsers, that would be compatible with this protocol. Many people did, including Marc Andreessen with Netscape, and many Web servers and browsers were available as open source or for free.
The combination of an open protocol and free software meant permissionless publishing and complete user control. If you wanted to add a page to the Web, you could just download a Web server, run it on a computer connected to the Internet and add content in the Hypertext Markup Language (HTML) format. Not surprisingly, the amount of content on the Web proliferated rapidly. Want to post a picture of your cat? Upload it to your Web server. Want to write something about the latest progress on your research project? There was no need to convince an academic publisher of its merits—you could just put up a web page.
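As an illustration of how low this barrier still is, here is a minimal sketch using Python’s built-in http.server module; the file name and page content are of course just examples:

```python
# A minimal sketch of 'just run a Web server': Python's built-in http.server
# serves whatever files sit in the current directory.
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Write a page in HTML...
with open("index.html", "w") as f:
    f.write("<html><body><h1>My research notes</h1></body></html>")

# ...and anyone who can reach this machine on port 8000 can now read it.
HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```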
People accessing the Web benefited from their ability to completely control their own web browser. In fact, in the Hypertext Transfer Protocol, the web browser is referred to as a ‘user agent’ that accesses the Web on behalf of the user. Want to see the raw HTML as delivered by the server? Right click on your screen and use ‘View Page Source’. Want to see only text? Instruct your user agent to turn off all images. Want to fill out a web form but keep a copy of what you are submitting for yourself? Create a script to have your browser save all form submissions locally.
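A few lines of code make the ‘user agent’ idea concrete: any program can speak HTTP on your behalf and receive exactly the same raw HTML a browser would. The User-Agent string below is an arbitrary example:

```python
# The 'user agent' idea in code: a program fetching a page on your behalf.
from urllib.request import Request, urlopen

req = Request("https://example.com", headers={"User-Agent": "my-own-agent/1.0"})
html = urlopen(req).read().decode("utf-8")
print(html)  # the same markup 'View Page Source' would show in a browser
```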
Over time, the platforms that have increasingly dominated the Web have interfered with some of the freedom and autonomy enjoyed by early users. I went on Facebook recently to find a note I had posted some time ago on a friend’s wall. It turns out that you can’t search through all the wall posts you have written: instead you have to scroll backwards in time for each friend, trying to remember when you posted what where. Facebook has all the data but has decided not to make it searchable. Whether or not you attribute this to intentional misconduct on their part, my point is that you experience Facebook the way they want you to experience it. If you don’t like how Facebook’s algorithms prioritize your friends’ posts in your newsfeed, tough luck.
Imagine what would happen if everything you did on Facebook was mediated by a software program—a ‘bot’—that you could control. You could instruct it to go through and automate the cumbersome steps that Facebook lays out for finding old wall posts. Even better, if you had been using this bot all along, it could have kept your own archive of wall posts in your own data store and you could simply instruct it to search your archive. If we all used bots to interact with Facebook and didn’t like how our newsfeed was prioritized, we could ask our friends to instruct their bots to send us status updates directly, so that we could form our own feeds. This was entirely possible on the Web because of the open protocol, but it is not possible in a world of proprietary and closed apps on smartphones.
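To make this less abstract, here is a hypothetical sketch of such a bot in Python. The platform call is invented (no such public API exists, which is precisely the point); the local archive uses the standard sqlite3 module:

```python
# A hypothetical sketch of a personal bot: it forwards wall posts to the
# platform and keeps its own searchable archive.
import sqlite3
import time

db = sqlite3.connect("my_posts.db")
db.execute("CREATE TABLE IF NOT EXISTS posts (ts REAL, friend TEXT, text TEXT)")

def post_to_wall(friend, text):
    # platform_api.post_to_wall(friend, text)  # hypothetical platform call
    db.execute("INSERT INTO posts VALUES (?, ?, ?)", (time.time(), friend, text))
    db.commit()

def search_my_posts(term):
    # Works even though the platform itself offers no such search.
    query = "SELECT friend, text FROM posts WHERE text LIKE ?"
    return db.execute(query, (f"%{term}%",)).fetchall()

post_to_wall("alice", "Happy birthday!")
print(search_my_posts("birthday"))  # [('alice', 'Happy birthday!')]
```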
Although this example might sound trivial, bots have profound implications in a networked world. Consider the on-demand car services provided by companies such as Uber and Lyft. As drivers for these services know, each of them provides a separate app for them to use. You can try to run both apps on one phone or you can even have two phones, as some drivers do, but the closed nature of the apps means that you cannot use your phone’s computing power to evaluate competing offers. If you had access to bots that could interact with the networks on your behalf, you could simultaneously participate in these various marketplaces and play one off against the other.
Using a bot, you could set your own criteria for which rides you want to accept, including whether a commission charged by a given network was below a certain threshold. The bot would then allow you to accept only rides that maximize the fare you receive. Ride-sharing companies would no longer be able to charge excessive commissions, since new networks could arise to undercut them. Similarly, as a passenger, using a bot would allow you to simultaneously evaluate the prices between different services and choose the one with the lowest price for a particular trip.
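A sketch of what such a driver-side bot might look like follows; the RideOffer structure and all figures are invented, since no ride-hailing service actually exposes offers this way today:

```python
# A sketch of a driver-side bot choosing among competing networks.
from dataclasses import dataclass

@dataclass
class RideOffer:
    network: str
    gross_fare: float   # what the passenger pays
    commission: float   # the network's take rate, e.g. 0.25

    @property
    def driver_payout(self) -> float:
        return self.gross_fare * (1 - self.commission)

MAX_COMMISSION = 0.15   # the driver's own threshold

def best_offer(offers):
    # Reject offers whose commission is too high, then maximize the payout.
    acceptable = [o for o in offers if o.commission <= MAX_COMMISSION]
    return max(acceptable, key=lambda o: o.driver_payout, default=None)

offers = [RideOffer("NetworkA", 20.0, 0.25), RideOffer("NetworkB", 18.0, 0.10)]
print(best_offer(offers))  # NetworkB: a lower fare, but a larger payout
```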
We could also use bots as an alternative to antitrust regulation, in order to counter the power of technology giants like Google or Facebook without foregoing the benefits of their large networks. These companies derive much of their revenue from advertising, and consumers currently have no way of blocking ads on mobile devices. But what if users could control mobile apps to add ad-blocking functionality, just as they can with Web browsers?
Many people decry ad-blocking as an attack on journalism that dooms the independent Web, but that is a pessimistic view. In the early days of the Web, it was full of ad-free content published by individuals. When companies joined in, they brought their offline business models with them, including paid subscriptions and advertising. Along with the emergence of platforms such as Facebook and Twitter with strong network effects, this resulted in a centralization of the Web—content was increasingly either produced on a platform or moved behind a paywall.
Ad-blocking is an assertion of power by the end user, which is a good thing in all respects. Just as a New York City judge found in 2015 that taxi companies have no special right to see their business model protected from ridesharing companies (Whitford, 2015), neither do ad-supported publishers. And while this might result in a downturn for publishers in the short term, in the long run it will mean more growth for content that is paid for more directly by end users (for example, through subscriptions or crowdfunding).
To curtail the centralizing power of network effects, we should shift power to end users by allowing them to have user agents for mobile apps, just as we did with the Web. The reason users don’t wield the same power on mobile is that native apps relegate us to interacting with services using only our eyes, ears, brains and fingers. We cannot use the computing capabilities of our smartphones, which are as powerful as supercomputers were until quite recently, to run programs that interact with the apps on our behalf. The apps control us, instead of us controlling the apps. Like a Web browser, a mobile user agent could do things such as block ads, keep copies of responses from services and let users participate in multiple services simultaneously. The way to help end users is not to break up big tech companies, but to empower individuals to use code that executes on their behalf.
What would it take to make this a reality? One approach would be to require companies like Uber, Google and Facebook to expose their functionality, not just through apps and websites, but also through so-called Application Programming Interfaces (APIs). An API is what a bot uses to carry out operations, such as posting a status update on a user’s behalf. Companies such as Facebook and Twitter have them, but they tend to have limited capabilities. Also, companies have the right to shut down bots, even when a user has authorized them to act on their behalf.
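For readers unfamiliar with APIs, here is roughly what such a call looks like in code. The endpoint URL and payload are purely illustrative, not any real platform’s API:

```python
# Roughly what an API call looks like: posting a status update with an
# access token that proves the user authorized the bot.
import json
from urllib.request import Request, urlopen

def post_status(text: str, token: str) -> bytes:
    req = Request(
        "https://api.example-social.com/v1/status",  # hypothetical endpoint
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urlopen(req).read()
```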
Why can’t I simply write code that interfaces with Facebook on my behalf? After all, Facebook’s app itself uses an API to talk to the Facebook servers. But in order to do that I would have to hack the Facebook app to figure out what the API calls are, and how to authenticate myself to them. Unfortunately, in the US there are three separate laws that make those steps illegal. The first is the anti-circumvention provision of the Digital Millennium Copyright Act (DMCA). The second is the Computer Fraud and Abuse Act (CFAA). And the third is the legal construction that by clicking ‘I accept’ on an End User License Agreement (EULA) or a set of Terms of Service (TOS), I am legally bound to respect whatever restrictions Facebook decides to impose. The last of these three is a civil matter, but criminal convictions under the first two laws carry mandatory prison sentences.
If we were willing to remove these three legal obstacles, hacking an app to give programmatic access to systems would be possible. People might argue that those provisions were created to solve important problems, but that is not entirely clear. The anti-circumvention provision of the DMCA was put in place to allow the creation of digital rights management systems for copyright enforcement. What you think of this depends on what you think about copyright, a subject we will look at in the next section.
The scope of the CFAA could be substantially decreased without limiting its potential for prosecuting fraud and abuse, and the same goes for restrictions on usage a company might impose via a license agreement or a terms of service. If I only take actions that are also available inside the company’s app but happen to take them via a program of my own choice, why should that constitute a violation?
But, you might object, don’t companies need to protect the cryptographic keys that they use to encrypt communications? Aren’t ’botnets’ behind those infamous distributed denial-of-service (DDoS) attacks, where vast networks of computers flood a service with so many requests that no one can access it? It’s true that there are a lot of compromised machines in the world that are used for nefarious purposes, including set-top boxes and home routers. But that only demonstrates how ineffective the existing laws are at stopping illegal bots. As a result, companies have developed the technological infrastructure to deal with them.
How would we prevent people from using bots that turn out to be malicious code? For one, open-source code would allow people to inspect it to make sure it does what it claims. However, open source is not the only answer. Once people can legally be represented by bots, many markets currently dominated by large companies will face competition from smaller startups that will build, operate and maintain these bots on behalf of their end users. These companies will compete in part on establishing and maintaining a trust relationship with their customers, much like an insurance broker represents a customer in relationship with multiple insurance carriers.
Legalizing representation by bots would put pressure on the revenues of the currently dominant companies. We might worry that they would respond by slowing their investment in infrastructure, but there are countervailing forces, as more money will be invested in competitors. For example, Uber’s ‘take rate’ (the percentage of the money paid for rides that it keeps) is 25 percent. If competition forced that rate down to 5 percent, Uber’s value might fall from $90 billion to $10 billion, but that is still a huge figure, and plenty of capital would be available for investing in competing companies that can achieve that kind of outcome.
That is not to say that there should not be limitations on bots. A bot representing me should have access to any functionality that I can access, but it should not be able to do things I can’t do, such as pretend to be another user or gain access to other people’s private posts. Companies can use technology to enforce such access limits for bots without relying on regulation that prevents bot representation in general.
Even if you are now convinced of the merits of bots, you might be wondering how we will get there. The answer is that we can start very small. We could run an experiment in a city like New York, where the city’s municipal authorities control how on-demand transportation services operate. They might say, “If you want to operate here, you have to let drivers interact with your service through a program.” Given how big a market the city is, I’m confident these services would agree. Eventually we could mandate APIs for all Internet services that have more than some threshold number of users, including all the social networks and even the big search platforms, such as Google.
To see that this is possible in principle, one need look no further than the UK’s Open Banking initiative and the European Union’s Payment Services Directive, which require banks to offer APIs for bank accounts (Manthorpe, 2018). This means that consumers can access third-party services, such as bill payment and insurance, by authorizing those services to use the API, instead of having to open a new account with a separate company. This dramatically lowers switching costs and is making the market for financial services more competitive. The ACCESS Act, introduced in the US by Senators Mark Warner and Josh Hawley, was the first attempt to apply a similar concept to social media companies. While this particular effort hasn’t gone very far to date, it points to the possibility of bots serving as an important alternative to Industrial Age antitrust regulation, shifting power back to end users.
After we have fought against geographical and prioritization limits and have bots that represent us online, we will still face legal limits that restrict what we can create and share. I will first examine copyright and patent laws and suggest ways to reduce how much they limit the knowledge loop. Then I’ll turn to privacy laws.
Earlier I noted how expensive it was to make copies of books when they had to be copied one letter at a time. Eventually, the printing press and movable type were invented. Together, the two made reproducing information faster and cheaper. Even back then, governments and churches saw cheaper information dissemination as a threat to their authority. In England, the Licensing of the Press Act of 1662 made the government’s approval to operate a printing press a legal requirement (Nipps, 2014). This approval depended on agreeing to censor content that was critical of the government or that ran counter to the teachings of the Church of England. The origins of copyright laws (i.e., the legal right to make copies) are thus tied up with censorship.
Those who had the right to make copies effectively held monopolies on the copyrighted content, which proved to be economically attractive. Censorship, however, does not make for a popular argument in support of an ongoing monopoly, and so the argument for sustaining such an arrangement shifted relatively quickly to the suggestion that copyright was necessary as an inducement or incentive to produce content in the first place. Writing in the early 18th century, Daniel Defoe and others argued that the time and effort authors put into learning and writing made written works morally their property, and that to motivate people to do those things, the production of “pyrated copies” had to be stopped (Deazley, 2004). While this argument sounds much more compelling than censorship, in practice copyright was rarely held by the original creator even back then. Instead, the economic benefits of copyright have largely accrued to publishers, who for the most part acquire the copyright for a one-time payment to the author or songwriter.
There is another problem with the incentive argument. It ignores a long history of prior content creation. Let’s take music as an example. Musical instruments were made as far back as 30,000 years ago, pre-dating the idea of copyright by many millennia. Even the earliest known encoding of a song, which marks music’s transition from information to knowledge, is around 3,400 years old (Andrews, 2018; Wulstan, 1971). Clearly then, people made and shared music long before copyright existed. In fact, the period during which it’s been possible for someone to earn a lot of money from making and then selling recorded music has been extraordinarily short. It started with the invention of the gramophone in the 1870s and peaked in 1999, the year that saw the biggest profits in the music industry, although the industry’s revenues have been gradually increasing again in recent years with the advent of streaming (Smith, 2020).
Before this short period, musicians made a living either from live performances or through patronage. If copyrighted music ceased to exist, musicians would still compose, perform and record music, and they would make money in the ways that they did prior to the rise of copyright. Indeed, as Steven Johnson found when he examined this issue, that is already happening, to some degree: “the decline in recorded-music revenue has been accompanied by an increase in revenues from live music... Recorded music, then, becomes a kind of marketing expense for the main event of live shows” (Johnson, 2015). Many musicians already choose to give away digital versions of their music, releasing tracks for free on SoundCloud or YouTube and making money from performing live (which during the COVID-19 pandemic had to shift to live streaming) or through crowdfunding methods such as Kickstarter and Patreon.
Over time, copyright holders have strengthened their claims and extended their reach. For instance, with the passing of the US Copyright Act of 1976, the requirement to register a copyright was removed: if you created content, you automatically held copyright in it (U.S. Copyright Office, 2019). Then, in 1998, the US Copyright Term Extension Act extended the length of a term of copyright from 50 to 70 years beyond the life of the author. This became known as the “Mickey Mouse Protection Act” because Disney had lobbied for it: having built a profitable business based on protected content, they were mindful that a number of their copyrights were due to expire (Slaton, 1999).
More recently, copyright lobbying has attempted to interfere with the publication of content on the Internet, through proposed legislation such as the Protect IP Act and the Stop Online Piracy Act, and through language in the Trans-Pacific Partnership (TPP), a trade deal from which the United States ultimately withdrew (after which the language was removed). In these latest attempts at expansion, the conflict between copyright and the digital knowledge loop has become especially clear. Copyright limits what you can do with content that you have access to, essentially restricting you to consuming it. It dramatically curtails your ability to share content and to create other works that use some or all of it. Some of the more extreme examples include takedowns of YouTube videos that used the song “Happy Birthday to You,” which remained under copyright until just a few years ago.
From a societal standpoint, it is never optimal to prevent someone from listening to or watching content that has already been created. Since the marginal cost of accessing a digital copy is zero, the world is better off if that person gets enjoyment from that content. And if that person becomes motivated to create some new inspiring content themselves, then the world is a lot better off.
Although the marginal cost of copying content is zero, you might wonder about the fixed and variable costs that go into making it in the first place. If all content were free, where would the money to produce it come from? Some degree of copyright is needed for content creation, especially for large-scale projects such as Hollywood movies or elaborate video games: without copyright protection, they might not be economically viable, and it is likely that nobody would make them. Yet even for such big projects there should be constraints on enforcement. For instance, you shouldn’t be able to take down an entire service because it hosts a link to a pirated movie, as long as the link is promptly removed. More generally, I believe that copyright should be dramatically reduced in scope and made much more costly to obtain. The only automatic right accruing to content should be attribution. Reserving additional rights should require a registration fee, because you are asking for content to be restricted within the digital knowledge loop.
Imagine a situation where the only automatic right accruing to an intellectual work was attribution. Anyone wanting to copy or distribute your song would only have to credit you, something that would not inhibit any part of the knowledge loop. Attribution imposes no restrictions on making, accessing and distributing copies, or on creating or sharing derivative works. It can include referencing who wrote the lyrics, who composed the music, who played which instrument and so on. It can also include where you found this particular piece of music. This practice of attribution is already becoming popular for digital text and images using the Creative Commons License, or the MIT License in open source software development.
If you don’t want other people to use your music without paying you, you are asking for its potential contributions to the knowledge loop to be restricted, thus reducing the benefits that the loop confers upon society. You should pay for that right, because the restriction not only represents a loss to society but is also costly to enforce. The registration fee should be paid on a monthly or annual basis, and when you stop paying it, your work should revert to attribution-only status.
In order to reserve rights, you should have to register your music with a registry, with some part of the copyright fee going towards maintaining the registry. Thanks to blockchains, which allow the operation of decentralized databases that are not owned or controlled by any one entity, there can be competing registries that access the same global database. The registries would be free to search, and registration would involve a check that you are not trying to register someone else’s work. The registries could be built in a way that anyone operating a music streaming service, such as Spotify or SoundCloud, could easily implement compliance to make sure they are not freely sharing music that has reserved rights.
It would even be possible to make the registration fee dependent on what rights you wanted to retain. For instance, your fee might be lower if you were prepared to allow non-commercial use of your music and to allow others to create derivative works, while it might increase significantly if you wanted all your rights reserved. Similar systems could be used for all types of content, including text, images and video.
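To make the mechanics concrete, here is a minimal sketch in Python of what a registry entry might look like. Everything in it is hypothetical: the fee amounts, field names and reversion rule are illustrative choices, meant only to show attribution by default with paid, lapsing rights reservation.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical annual fees per reserved right (illustration only):
# broader restrictions on the knowledge loop cost more.
FEES = {
    "no_commercial_use": 10.00,
    "no_derivatives": 25.00,
    "all_rights_reserved": 100.00,
}

@dataclass
class Registration:
    work_id: str                      # e.g., a content hash of the recording
    creators: list                    # attribution is always recorded
    reserved_rights: list = field(default_factory=list)
    paid_through: date = None         # rights lapse after this date

    def annual_fee(self) -> float:
        return sum(FEES[right] for right in self.reserved_rights)

    def status(self, today: date) -> str:
        # Attribution is the only automatic, permanent right; everything
        # else reverts once the registrant stops paying.
        if not self.reserved_rights or self.paid_through is None \
                or today > self.paid_through:
            return "attribution-only"
        return "rights-reserved"

song = Registration(
    work_id="sha256:ab12...",
    creators=["Jane Doe (lyrics, vocals)"],
    reserved_rights=["no_commercial_use"],
    paid_through=date(2026, 1, 1),
)
print(song.annual_fee(), song.status(date.today()))
```

A streaming service checking compliance would then simply look up a work by its identifier and confirm its status before serving it.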
Critics might object that the system would impose a financial burden on creators, but it is important to remember that removing content from the knowledge loop imposes a cost on society. And enforcing this removal, by finding people who are infringing and penalizing them, incurs additional costs for society. For these reasons, asking creators to pay is fair, especially if their economic freedom is already assured by UBI.
UBI also provides an answer to another argument that is frequently wielded in support of excessive copyright: employment by publishers. This argument is relatively weak, as the major music labels combined employ only a little over twenty thousand people (“Sony Music,” n.d.; “Universal Music Group,” n.d.; “Warner Music Group,” n.d.). On top of that, the existence of this employment to some degree reflects the societal cost of copyright. The owners, managers and employees of record labels are, for the most part, not the creators of the music.
Let me point out one more reason why a system of paid registration makes sense. No intellectual works are created in a vacuum. All authors have read books by other people, all musicians have listened to tons of music, and all filmmakers have watched countless movies. Much of what makes art so enjoyable is the existence of a vast body of art that it draws upon and can reference, whether explicitly or implicitly. We are all part of the knowledge loop that has existed for millennia.
While copyright limits our ability to share knowledge, patents limit our ability to use knowledge to create something new. Just as a copyright confers a monopoly on the reproduction of content, a patent confers a monopoly on the use of an invention. The rationale for patents is similar to the argument for copyright: the monopoly that is granted results in profits that are supposed to provide an incentive for people to invest in research and development.
Here, as with copyright, this incentive argument should be viewed with suspicion. People invented things long before patents existed, and many have continued to invent without seeking them (Kinsella, 2013). Mathematics is a great example of the power of what is known as ‘intrinsic motivation’: the drive to do something for its own sake, and not because it will be financially rewarded. People who might otherwise have spent their time in more lucrative ways often dedicate years of their lives to work on a single mathematical problem (frequently without success). It is because of the intrinsic human drive to solve problems that the field has made extraordinary advances, entirely in the absence of patents, which thankfully were never extended to include mathematical formulas and proofs. As we will see momentarily, this problem-solving drive can be, and has been, successfully amplified through other means.
The first patent system was established in Venice in the mid-fifteenth century, and Britain had a fairly well-established system by the seventeenth century (“History of Patent Law,” 2020). That leaves thousands of years of invention before patents existed, a period that saw such critical breakthroughs as the alphabet, movable type, the wheel and gears. This is to say nothing of those inventors who have chosen not to patent their inventions because they saw how doing so would impose a loss on society. A well-known example is Jonas Salk, who created the polio vaccine. Other important inventions that were never patented include X-rays, penicillin and the use of ether as an anesthetic (Boldrin & Levine, 2010). Since we know that limits on the use of knowledge impose a cost, we should ask what alternatives to patents exist that might stimulate innovation.
Many people are motivated by wanting to solve a problem, whether it’s one they have themselves or something that impacts the world at large. With UBI, more of these people will be able to spend their time on inventing. We will also see more innovation because digital technologies are reducing the cost of inventing. One example of this is the company Science Exchange, which has created a marketplace for laboratory experiments. Say you have an idea that requires you to sequence a bunch of genes. The fastest gene sequencing available is from a company called Illumina, whose fastest machines used to cost more than $0.5 million to buy (Next Gen Seek, 2021). Through Science Exchange, however, you could access such a machine for less than $1,000 per use (“Illumina Next Generation Sequencing,” 2021). Furthermore, the next generation of sequencing machines is on the way, and these will further reduce the cost—technological deflation at work.
A lot of legislation has significantly inflated the cost of innovation. In particular, FDA rules around drug trials have made drug discovery prohibitively expensive, with the cost of bringing a drug to market exceeding $1 billion. While it is obviously important to protect patients, there are novel statistical techniques that would allow for smaller and faster trials (Berry et al., 2010). A small step was taken recently with the compassionate use of not-yet-approved drugs for terminally ill patients. Excessive medical damage claims have presented another barrier to innovation. As a result of these costs, many drugs are either not developed at all or are withdrawn from the market, despite their efficacy. For example, the vaccine against Lyme disease, Lymerix, is no longer available for humans following damage claims (Willyard, 2019).
Patents are not the only way to incentivize innovation. Another historically successful strategy has been the offering of prizes. In 1714, Britain famously offered rewards to encourage a solution to the problem of determining a ship’s longitude at sea. Several people were awarded prizes for their designs of chronometers, lunar distance tables and other methods for determining longitude, including improvements to existing methods. Mathematics provides an interesting example of the effectiveness of prizes, which beyond money also provide recognition. In addition to the coveted Fields Medal for exceptional work by mathematicians under the age of 40, there are also the seven so-called Millennium Prize Problems, each with a $1 million reward (only one of which has been solved to date; Grigori Perelman, the Russian mathematician who solved it, famously turned down the prize money).
At a time when we wish to accelerate the knowledge loop, we must shift the balance towards knowledge that can be used freely. The success of recent prize programs, such as the X Prizes, the DARPA Grand Challenge and NIST competitions, is promising, and the potential exists to crowdfund future prizes. Medical research should be a particular target for prizes, to help bring down the cost of healthcare.
Though prizes can help accelerate the knowledge loop, that still leaves a lot of existing patents in place. I believe much can be done to make the system more functional, in particular by reducing the impact of so-called non-practicing entities (NPEs, commonly referred to as “patent trolls”). These companies have no operating business of their own, and exist solely for the purpose of litigating patents. They tend to sue not just a company but also that company’s customers, forcing a lot of companies into a quick settlement. The NPE then uses the settlement money to finance further lawsuits. Fortunately, a recent Supreme Court ruling placed limits on where patent lawsuits can be filed, which might curtail the activity of NPEs (Liptak, 2017).
As a central step in patent reform, we must make it easier to invalidate existing patents, while at the same time making it more difficult to obtain new ones. We have seen some progress on both counts in the US, but there is still a long way to go. Large parts of what is currently patentable should be excluded from patentability, including university research that has received even small amounts of public funding. Universities have frequently delayed the publication of research in areas where they have hoped for patents that they could subsequently license out, a practice that has a damaging impact on the knowledge loop.
We have also gone astray in our celebration of patents as a measure of technological progress, when we should instead treat them at best as a necessary evil. Ideally, we would roll back the reach of existing patents and raise the bar for new ones, while also inducing as much unencumbered innovation as possible through prizes and social recognition.
Copyrights and patents aren’t the only legal limitations that slow down the digital knowledge loop. We’re also actively creating new restrictions in the form of well-intentioned privacy regulations. Not only do these measures restrict informational freedom; as I will argue below, in the long run privacy is fundamentally incompatible with technological progress. Instead of clinging to our current conception of privacy, we need to understand how to be free in a world where information is widely shared. Privacy has been a strategy for achieving and protecting freedom. To move beyond it while staying free, we need to expand economic, informational and psychological freedom.
Before I expand on this position, let me first note that countries and individuals already take dramatically different approaches to the privacy of certain types of information. For example, for many years Sweden and Finland have published everyone’s tax returns, and some people, including the Chief Information Officer and Dean for Technology at Harvard Medical School, have published their entire medical history on the Internet (Doyle & Scrutton, 2016; Zorabedian, 2015). This shows that under certain conditions it is eminently possible to publicly share exactly the type of information that some insist must absolutely remain private. As we will see, such sharing turns out not only to be possible but also extremely useful.
To better understand this perspective, compare the costs and benefits of keeping information private with the costs and benefits of sharing it widely. Digital technology is dramatically shifting this tradeoff in favor of sharing. Take a radiology image, for example. Analog X-ray technology produced an image on a physical film that had to be developed, and could only be examined by holding it up against a backlight. If you wanted to protect the information on it, you would put it in a file and lock it in a drawer. If you wanted a second opinion, you had to have the file sent to another doctor by mail. That process was costly, time-consuming and prone to errors. The upside of analog X-rays was the ease of keeping the information secret; the downside was the difficulty of putting it to use.
Now compare analog X-rays to digital X-rays. You can instantly walk out of your doctor’s office with a copy of the digital image on a thumb drive or have it emailed to you, put in a Dropbox or shared in some other way via the Internet. Thanks to this technology, you can now get a near-instant second opinion. And if everyone you contacted was stumped, you could post the image on the Internet for everyone to see. A doctor, or a patient, or simply an astute observer, somewhere in the world may have seen something similar before, even if it is incredibly rare. This has happened repeatedly via Figure 1, a company that provides an image sharing network for medical professionals.
But this power comes at a price: protecting your digital X-ray image from others who might wish to see it is virtually impossible. Every doctor who looks at the image could make a copy—for free, instantly and with perfect fidelity—and send it to someone else. And the same goes for others who might have access to the image, such as your insurance company.
Critics will claim that encryption can prevent the unauthorized use of your image, but such claims come with important caveats, and they become dangerous if pursued to their ultimate conclusion. In summary, the upside of a digital X-ray image is how easy it makes getting help; the downside is how hard it is to protect digital information.
But the analysis doesn’t end there. The benefits of your digital X-ray image go beyond just you. Imagine a huge collection of digital X-ray images, all labeled with diagnoses. We might use computers to search through them and get machines to learn what to look for. And these systems, because of the magic of zero marginal cost, can eventually provide future diagnoses for free. This is exactly what we want to happen, but how rapidly we get there and who controls the results will depend on who has access to digital X-ray images.
If we made all healthcare information public, we would dramatically accelerate innovation in diagnosing and treating diseases. At present, only well-funded pharma companies and a few university research projects are able to develop new medical insights and drugs, since only they have the money required to get sufficient numbers of patients to participate in research. Many scientists therefore wind up joining big pharma companies, so the results of their work are protected by patents. Even at universities, the research agenda tends to be tightly controlled, and access to information is seen as a competitive advantage. While I understand that we have a lot of work to do to create a world in which the sharing of health information is widely accepted and has limited downsides, this is what we should be aiming for.
You might wonder why I keep asserting that assuring privacy is impossible. After all, don’t we have encryption? Well, there are several problems that encryption can’t solve. The first is that the cryptographic keys used for encryption and decryption are just digital information themselves, so keeping them secure is another instance of the original problem. Even generating a key on your own machine offers limited protection, unless you are willing to risk that the data you’re protecting will be lost forever if you lose the key. As a result, most systems include some kind of cloud-based backup, making it possible that someone will access your data, either through technical interception or by tricking a human being into unwittingly participating in a security breach. If you want a sense of how hard this problem is, consider the millions of dollars in cryptocurrency that can no longer be accessed by people who lost their keys or had them taken over through some form of attack. The few cryptocurrency exchanges that have a decent track record have invested massively in security procedures, personnel screening and operational secrecy.
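To see why the key merely relocates the secrecy problem, consider a minimal sketch using the Python cryptography library (the library calls are real; the scenario is illustrative):

```python
from cryptography.fernet import Fernet

# The key is itself just 44 bytes of digital information.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"patient X-ray, diagnosis: ...")

# Whoever holds `key` can read the data...
print(Fernet(key).decrypt(ciphertext))

# ...and if the key is lost, the data is gone for good: decrypting with
# any other key raises cryptography.fernet.InvalidToken. So any real
# system backs the key up somewhere, recreating the original problem of
# keeping a piece of digital information secret.
```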
The second problem is known as ‘endpoint security’. The doctor you’re sending your X-ray to for a second opinion might have a program on their computer that can access anything shown on the screen. In order to view your X-ray, the doctor has to decrypt and display it, so this program will have access to the image. Avoiding such a scenario would require us to lock down all computing devices, but that would mean preventing end users from installing software on them. Furthermore, even a locked-down endpoint is still subject to the “analog hole”: someone could simply take a picture of the screen, which itself could then be shared.
Locked-down computing devices reduce informational freedom and constrict innovation, but they also pose a huge threat to the knowledge loop and democracy. Handing over control of what you can compute and who you can exchange information with would essentially amount to submitting to a dictatorial system. In mobile computation we’re already heading in this direction, partly on the pretext of a need to protect privacy. Apple uses this as an argument to explain why the only way to install apps on an iPhone is through the App Store, which they control. Imagine this type of regime extended to all computing devices, including your laptop and cloud-based servers, and you have one way in which privacy is incompatible with technological progress. We can either have strong privacy assurance or open general-purpose computing, but we can’t have both.
Many people contend that there must be some way to preserve privacy and keep innovating, but I challenge anyone to present a coherent vision of the future where individuals control technology and privacy is meaningfully protected. Whenever you leave your house, you might be filmed, since every smartphone has a camera, and in the future we’ll see tiny cameras on tiny drones. Your gait identifies you almost as uniquely as your fingerprint, your face is probably somewhere on the Internet, and your car’s license plate is readable by any camera. You leave your DNA almost everywhere you go, and we will soon be able to sequence DNA at home for around $100. Should the government control all of these technologies? And if so, should penalties be enforced for using them to analyze someone else’s presence or movement? Many are tempted to reply yes to these questions, without thinking through the consequences of such legislation for innovation and for the relative power of the state versus citizens. For example, citizens recently used facial recognition technology to identify police officers who had removed IDs from their uniforms. Advocates of banning this technology are overly focused on surveillance and rarely consider these kinds of “sousveillance” use cases (from the French “sous,” meaning “below”).
There is an even more profound reason why privacy is incompatible with technological progress. Entropy is a fundamental property of the universe, which means it is easier to destroy than to create. It can take hours to build a sand castle that is destroyed by a single wave washing ashore. It takes two decades of care for a human to grow up and a single bullet to end their life. Because of this inherent asymmetry, technological progress increases our ability to destroy more quickly than our ability to create. Today, it still takes twenty years for a human to grow, and yet modern weapons can kill thousands and even millions of people in an instant. So as we make technological progress, we must eventually insist on less privacy in order to protect society. Imagine a future in which anyone can create a biological weapon in their basement laboratory—for example, an even more deadly version of the COVID-19 virus. After-the-crime law enforcement becomes meaningless in such a world.
If we can’t protect privacy without passing control of technology into the hands of a few, we should embrace a post-privacy world. We should work to protect people and their freedom, rather than data and privacy. We should allow more information to become public, while strengthening individual freedom. Much information is already disclosed through hacks and data breaches, and many people voluntarily share private information on blogs and social media (McCandless, 2020). The economic freedom generated by the introduction of UBI will play a key role here, because much of the fear of the disclosure of private information results from potential economic consequences. For instance, if you are worried that you might lose your job if your employer finds out that you wrote a blog post about your struggles with depression, you are much less likely to share, a situation that, repeated across many people, helps to keep the topic of depression taboo. There are, of course, countries where the consequences of private information being leaked, such as sexual orientation or political organizing, can be deadly. To be able to achieve the kind of post-privacy world I envision here, democracy with humanist values is an essential precondition.
If a post-privacy world seems impossible or terrifying, it is worth remembering that privacy is a modern, urban construct. Although the US Constitution protects certain specific rights, it does not recognize a generalized right to privacy. For thousands of years prior to the eighteenth century, most people had no concept directly corresponding to our modern notion of privacy. Many of the functions of everyday life used to take place much more openly than they do today. And privacy still varies greatly among cultures: for example, many Westerners are shocked when they first experience the openness of traditional Chinese public restrooms (Sasha, 2013). All over the world, people in small villages live with less privacy than is common in big cities. You can either regard the lack of privacy in a village as oppressive, or you can see a close-knit community as a real benefit, for instance when your neighbor notices that you are sick because you haven’t left the house and offers to go shopping for you.
“But what about my bank account?” you might ask. “If my account number was public, wouldn’t it be easier for criminals to take my money?” This is why we need to construct systems such as Apple Pay and Android Pay that require additional authentication to authorize payments. Two-factor authentication systems will become much more common in the future, and we will increasingly rely on systems such as Sift, which assesses in real time the likelihood that a transaction is fraudulent. Finally, as the Bitcoin blockchain shows, it is possible to have a public ledger that anyone can inspect, as long as the transactions on it are protected by ‘private keys’ which allow only the owner of a Bitcoin address to initiate transactions.
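The pattern the Bitcoin ledger relies on can be sketched in a few lines: addresses and transactions are public, but only a signature produced with the matching private key authorizes a payment. This toy illustration uses the Python cryptography library with Bitcoin’s curve (secp256k1); it shows the signing pattern, not actual Bitcoin code.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())  # known only to the owner
public_key = private_key.public_key()                   # safe to publish alongside the ledger

transaction = b"pay 0.1 BTC from address A to address B"
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Anyone can check the public ledger; only the key holder could have signed.
# verify() raises InvalidSignature if the transaction or signature was forged.
public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
print("transaction authorized")
```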
Another area where people are nervous about privacy is health information. We worry, for instance, that employers or insurers will discriminate against us if they learn that we have a certain disease or condition. Here the economic freedom conferred by UBI would help protect us from destitution caused by discrimination. Furthermore, by tightening the labor market, UBI would also make it harder for employers to refuse to hire certain groups of people in the first place. We could also enact laws that require transparency from employers, making discrimination easier to detect (note that transparency is often difficult to require today because it conflicts with privacy).
Observers such as Christopher Poole, the founder of the online message board 4Chan, have worried that in the absence of privacy, individuals wouldn’t be able to engage online as freely. Privacy, they think, helps people feel comfortable assuming multiple online identities that may depart dramatically from their “real life” selves. I think, in contrast, that by keeping our online selves separate, we pay a price in the form of anxiety, neurosis and other psychological ailments. It is healthier to be transparent than to hide behind veils of privacy. Emotional health derives from the integration of our different interests into an authentic multi-dimensional personality, rather than a fragmentation of the self. This view is supported by studies of how online self-presentation impacts mental health, where inauthentic presentations were associated with higher levels of anxiety and neurosis (Twomey & O’Reilly, 2017).
Many who argue against a post-privacy approach point out that oppressive governments can use information against their citizens. Without a doubt, preserving democracy and the rule of law is essential if we want to achieve a high degree of informational freedom, and this is addressed explicitly in Part Five. Conversely, however, more public information makes dictatorial takeovers considerably harder. For instance, it is much clearer who is benefiting from political change if tax records are in the public domain.
The climate crisis is the single biggest collective problem facing humanity. If we fail to direct attention and resources to fighting it, the climate crisis will make the transition from the Industrial Age worse than the transition into it, which involved two world wars. This may sound hyperbolic, but the climate crisis represents an existential risk for humanity.
Every day, unimaginable amounts of energy hit the Earth in the form of sunlight. Much of this energy is radiated back into space, but greenhouse gases reduce the Earth’s ability to shed heat and instead keep it trapped inside the atmosphere. To get a sense of how much heat we are talking about, we can express it in terms of Hiroshima-sized nuclear bombs. Compared to pre-industrial times, how much more heat would you guess the Earth is retaining? The equivalent of one nuclear bomb per year? Per month? Per week? Per day? The reality is that the extra heat being trapped amounts to four nuclear bombs per second, of every minute, of every hour, of every day, three hundred and sixty-five days a year.
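The arithmetic behind this comparison is easy to check. Using rough figures, a planetary energy imbalance of about 0.5 watts per square meter (published estimates range up to roughly 1 W/m², depending on period and study) and a Hiroshima yield of about 15 kilotons of TNT, a few lines reproduce the number:

```python
# Order-of-magnitude check of the "bombs per second" comparison.
# All inputs are approximate.
earth_surface_m2 = 5.1e14      # total surface area of the Earth
imbalance_w_per_m2 = 0.5       # net heat retained, watts per square meter
hiroshima_joules = 6.3e13      # ~15 kilotons of TNT

extra_watts = earth_surface_m2 * imbalance_w_per_m2
bombs_per_second = extra_watts / hiroshima_joules
print(f"{bombs_per_second:.1f} Hiroshima bombs per second")  # ~4.0
```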
Imagine for a moment that alien spaceships were dropping four nuclear bombs into our atmosphere every second. What would we do? We would drop everything else to fight them. This is, of course, roughly the plot of the movie Independence Day. Except with the climate crisis it’s not aliens, it’s ourselves, and it’s not bright explosions, it’s all the molecules in the atmosphere and in the oceans wiggling a bit harder (that’s what it means for something to heat up).
There are many ways to fight the climate crisis. They include making personal changes such as switching to electric heating, voting for politicians who are committed to tackling the problem, and becoming active in movements such as Extinction Rebellion. As with mindfulness, research and entrepreneurship provide crucial avenues for action. For instance, there are many open questions about how to make nuclear fusion work (which would provide a clean source of abundant electricity) and about how to most effectively draw down greenhouse gases from the atmosphere. There are companies to be founded that will further the adoption of solar power, not just here in the U.S. but in the developing world.
Given the limits of capitalism that we explored earlier, we might find socialism or even some form of Marxism tempting. That too, however, represents a return to a populist past rather than an invention of the future. The alternatives that people commonly propose are also Industrial Age thinking, rooted in the scarcity of capital. As should be clear by now, my proposals are effectively about shrinking capitalism, much as we previously shrunk agriculture, to make room for participation in the knowledge loop.
Central to this project is the promotion of humanism and the policies associated with it, such as the adoption of UBI. Everyone can take action on this, from contributing to UBI trials to creating content under a Creative Commons license.
We can also promote humanism by applying humanist values to our everyday decision-making. The starting point for this is to see ourselves as human first, with nationality, faith, gender and race all a distant second. I realize that this is easier for me as a white male living in the United States, but that removes nothing from the underlying values of humanism. I described some of these in the earlier chapter on humanism, but here is a more complete list.
Solidarity: There are nearly 8 billion people living on a planet that does not easily support human life, in an otherwise inhospitable solar system. We need to support each other above all else, irrespective of such differences as gender, race or nationality. The big problems that humanity faces, such as the climate crisis, will impact all of us and require our combined effort.
Diversity: We are all unique, and we should celebrate these differences. They are beautiful and a part of our humanity.
Responsibility: Only humans have the power of knowledge, so we are responsible for other species. For example, we are responsible for whales rather than the other way round.
Non-violence: Mental or bodily harm reduces or removes our ability to contribute to humanity. We must avoid it wherever possible.
Mindfulness: Our brain is capable of a broad range of emotions, but when they hijack us we lose our capacity to think rationally. Mindfulness is the ability to experience our emotions while retaining our ability to analyze issues and come up with creative solutions.
Joyfulness: While we are capable of many emotions, moments of joy are what makes life worth living.
Criticism: When we see something that could be improved, we need to have the ability to express that. Individuals, companies and societies that do not allow criticism become stagnant and ultimately fail.
Innovation: Beyond criticism, the major mode for improvement is to create new ideas, products and art. Without innovation, systems become stagnant and start to decay.
Optimism: We need to believe that problems can be solved. Without optimism we will stop trying, and problems like the climate crisis will become bigger, until they threaten human extinction.
These values can help us think through the moral problems we face as we enter the Knowledge Age. That will make a good subject for a separate book, so here is just one example: should we kill animals to feed ourselves? One answer is that we stop eating meat and become vegetarian or vegan; another is that we work out how to grow meat in a lab. Both are valid answers under the humanist approach. Continuing to eat animals without working on alternatives—standing still with the status quo, in which we do not live up to our responsibility—is not.
What is the political process by which we should achieve the profound changes that are required? We are seeing some leaders emerge in this period of transition who provide simplistic, populist answers to difficult questions, advocating a return to the past. There is a danger around the world, including here in the United States, that we will slide into dictatorship and other forms of autocratic government.
Democracy is the only system of government in which the knowledge loop can function to its full potential. Democracy allows new policies to be tested, with a peaceful transition to another set of policies if they don’t work. As tempting as a quick autocratic fix might seem, we need to figure out what it takes to have a working democracy. Some things seem obvious, such as limiting the influence of money in politics.
Because attention is scarce, it can be bought, either by raising and spending a lot of money, or by doing or saying outrageous things. Neither is good for democracy: the former because it makes candidates beholden to the interests of their backers, and the latter because it results in polarization rather than critical debate.
Going forward, we should experiment with new forms of democracy. Given the complexity of the modern world, I am in favor of specialization and delegated voting. We should explore forms of democracy in which I can delegate my vote to people I trust on specific issues, such as energy policy. These delegates, in turn, would elect a leader for the energy agency based on their proposed policies.
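A sketch of how such delegation could be resolved in software follows. The rules here, including what happens on a circular delegation, are hypothetical design choices, meant only to show the core mechanism of following a chain of trust until a direct vote is found.

```python
def resolve_vote(voter, delegations, direct_votes):
    """Follow a voter's delegation chain on one issue until reaching
    someone who voted directly; detect circular delegations."""
    seen = set()
    while voter not in direct_votes:
        if voter in seen or voter not in delegations:
            return None  # cycle or dead end: the vote goes uncounted
        seen.add(voter)
        voter = delegations[voter]
    return direct_votes[voter]

# Issue: energy policy. Alice defers to Bob, who defers to Carol, an expert.
delegations = {"alice": "bob", "bob": "carol"}
direct_votes = {"carol": "expand solar subsidies"}

for person in ["alice", "bob", "carol"]:
    print(person, "->", resolve_vote(person, delegations, direct_votes))
```

In a production system the open questions would be political rather than technical: whether delegations are public, how often they can be changed, and what the default is when a chain breaks.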
A more extreme version of this, which is worth exploring in the context of the climate crisis, is a so-called ‘citizens’ assembly.’ Citizens would be selected at random from the population to form an assembly and given access to experts in the field. With the experts’ help, they would come up with a plan that is then either implemented right away or put to a vote. This idea recalls Athenian democracy, which relied on the random selection of citizens for various government functions. The advantage of such an approach is that it would shortcut long electoral cycles and allow for policies to be chosen that might not be popular with any one party. For example, Ireland recently successfully used a citizens’ assembly to develop an abortion policy.
These are just two of many possible variations of how democracy can work. With digital technologies, we have options that were not previously feasible. Take, for example, the town of Jun in Spain, which uses Twitter as the primary communication channel between citizens and local government officials (Powers & Roy, 2018). We should start to explore more of these possibilities, and part of that means revisiting the geographic units we use for decision-making. The key principle here is that decisions should be made at the lowest possible level. We need to make some decisions globally, such as limiting greenhouse gases, but how we achieve that should be decided at lower levels.
Making decisions at the lowest possible level, a principle known as ‘subsidiarity,’ is especially important during a time of great change. For instance, what is possible in education is changing rapidly due to digital technology, so we should allow experimentation at a local level. By running many small experiments, we can more rapidly figure out what works well.
Most of all, we need to reject attempts at dictatorship and autocracy. These effectively disable the knowledge loop because they cannot tolerate freedom of expression—their power is based on suppressing criticism. This is especially dangerous in a time of transition that requires debating and implementing new ideas. There are many ways of defending democracy, starting with the obvious one of voting against would-be dictators. Of course, speaking out against them once they are in power is also crucial, even if it comes at high personal cost. Finally, building and participating in systems that support uncensorable, anonymous or pseudonymous expression is a crucial action to help undermine dictatorships.
Have you watched television recently? Stored food in a refrigerator? Accessed the Internet? Played games on your smartphone? Driven a car? These are all things that billions of people around the world do every day. And while they are produced by different companies using a wide range of technologies, none of them would be possible without the existence of knowledge.
Knowledge, as I have earlier defined it, is the information that humanity has recorded in a medium and improved over time. As a reminder, there are two crucial parts to this definition. The first is “recorded in a medium,” which allows information to be shared across time and space. The second is “improved over time,” which separates knowledge from information. Improvement is the result of the operation of the critical process, which allows for existing knowledge to be criticized and alternatives to be proposed. Through this process knowledge becomes better at helping us humans meet our needs.
I began this section with examples of everyday technologies that would not exist without knowledge. An even stronger illustration of the power of knowledge is that without it, many of us would not even be here today. As we saw in our discussion of population, Malthus was right about population growth but wrong about its consequences because he did not foresee the development of technological progress powered by improved knowledge.
Let’s look at a specific example of how this process unfolded. Humans breathe air, but for a long time we did not know what it consisted of. Oxygen and nitrogen, the two primary components of air, were not identified as elements until the late eighteenth century.
Separately, although manure had been used in agricultural practice for millennia, it was not properly studied until the early nineteenth century. By the late 19th century, scientists had finally discovered the microbes that convert nitrogen into a form that plants can use. That led to the understanding that ammonia, which consists of nitrogen and hydrogen, is a powerful fertilizer. Scientific progress eventually resulted in the Haber process for nitrogen fixation, which allows for the mass production of fertilizer. Invented in the early twentieth century, it became crucial to raising agricultural yields globally, thus averting the dire consequences Malthus had envisaged. Today, about half of the nitrogen in human bodies has been touched by the Haber process on its way into the plants and animals that we eat.
My simplified history of the discovery of nitrogen fixation doesn’t capture the many false starts along the way. It seems strange to us now, but at one point a leading theory as to why some materials burn was that they all contain a substance called ‘phlogiston,’ which was thought to be released during combustion or ‘dephlogistication.’ Without the improvement of knowledge over time, we might have remained stuck on that theory, failing to find oxygen and nitrogen and thus to increase agricultural yields, and thereby potentially exposing humanity to a Malthusian crisis.
This is just one example of a knowledge breakthrough that allowed humanity to overcome a seemingly insurmountable barrier to progress. When thinking about the power of knowledge, we must remember that both our individual lifetimes and the history of modern science to-date are trivially short in the timescale of humanity, which in turn is minuscule compared to that of the universe. When considering longer timeframes, we should regard all speculative technological advances that don’t contravene the laws of physics as possible and eventually achievable. This line of thinking about the power of knowledge is inspired by a theoretical foundation for science recently developed by the physicists David Deutsch and Chiara Marletto called constructor theory (“Constructor Theory,” 2020).
Consider for a moment what knowledge might allow us to do in the more or less distant future. We might rid ourselves of our dependence on fossil fuels, cure any disease, and travel to other planets in our solar system (organizations like SpaceX and NASA are already working toward this goal) (NASA, 2018). Eventually, we might even travel to the stars. You might think interstellar travel is impossible, but it isn’t. Extremely difficult? Yes. Requiring technology that doesn’t exist yet? Yes. But impossible? No. It is definitely not imminent, but we can count on it becoming possible with the further accretion of knowledge.
We are the only species on Earth that has created knowledge—not just science, but also art. Art allows us to express our hopes and fears, and culture has helped to motivate the large-scale coordination and mobilization of human effort. We might think of the technical component of knowledge as underpinning the ‘how’ of our lives, and the artistic component the ‘why’. If you’ve ever doubted the power of art, just think of the many times throughout history when dictators and authoritarian regimes have banned or destroyed works of art.
Knowledge has already made possible something extraordinary: by means of the innovations of the Industrial Age we can, in principle, meet everyone’s needs. But we must generate additional knowledge to solve the problems we have introduced along the way, such as the climate crisis. New knowledge does not spring forth fully formed out of a vacuum. Instead it emerges through what I call the ‘knowledge loop’, in which someone learns something and creates something new, which is then shared and in turn serves as the basis for more learning.
The knowledge loop has been around since humans first developed written language, some five thousand years ago. Before that, humans used spoken language, which limits learning and sharing in terms of both time and space. Since the invention of written language, breakthroughs have accelerated and access to the knowledge loop has broadened. These breakthroughs include movable type (around one thousand years ago), the printing press (around five hundred years ago) and, more recently, the telegraph, radio and television. Now we are in the middle of another fundamental breakthrough: digital technology, which connects all of humanity to the knowledge loop at zero marginal cost and also allows machines themselves to participate in it.
It is easy to underestimate the potential of digital technology to further accelerate and broaden access to the knowledge loop. To many people, it seems as if these innovations have so far under-delivered. The technology investor Peter Thiel once famously complained that “We wanted flying cars, instead we got 140 characters.” In fact, we have made great progress on flying cars since then, in no small part because digital technologies have already helped accelerate the knowledge loop.
The zero marginal cost and universality of digital technologies are already accelerating learning, creating and sharing, giving rise to a digital knowledge loop. And as can be seen in the example of YouTube, it holds both amazing promise and great peril.
YouTube has experienced astounding growth since its launch in 2005. People around the world now upload over 100 hours of video content to the platform every minute. To illustrate just how much content that is, if you were to spend 100 years watching YouTube 24 hours a day, you would be unable to watch all the videos uploaded in a single week. YouTube contains amazing educational content on topics as diverse as gardening and pure mathematics. Many of those videos illustrate the promise of the digital knowledge loop, but the peril is also clear: YouTube also contains videos that peddle conspiracies, spread misinformation and even incite hate. Promoting such videos may, perversely, be in YouTube’s interest, as these capture more attention, which can then be resold to advertisers, thus growing YouTube’s revenues and profits.
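That comparison follows directly from the upload figure just cited, as a quick calculation shows:

```python
upload_rate_hours_per_min = 100
uploaded_per_week = upload_rate_hours_per_min * 60 * 24 * 7  # 1,008,000 hours
watched_in_century = 100 * 365 * 24                          # 876,000 hours
# True: one week of uploads outlasts a century of round-the-clock viewing.
print(uploaded_per_week > watched_in_century)
```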
Both the promise and the peril are made possible by the same characteristics of the platform: all of the videos are available for free to anyone in the world, and they become available globally the second they are published. Anybody can publish a video, and all you need to access them is an Internet connection and a smartphone. As a result, two to three billion people, almost half of the world’s population, have access to YouTube and can participate in the digital knowledge loop.
These characteristics are found in other systems that similarly show the promise and peril of the digital knowledge loop. Wikipedia, the collectively produced online encyclopedia, is another good example. At its most promising, someone might read an entry and learn the method Archimedes used to approximate pi, then create an animation that illustrates this method and publish it on Wikipedia, making it easier for other people to learn. Wikipedia entries result from collaboration and an ongoing revision process. You can also examine both the history of a page and the conversations about it, thanks to a piece of software known as a ‘wiki’ that keeps track of the history of edits to a page (“Wiki,” n.d.). When the process works, it raises the quality of entries over time. But when there is a coordinated effort at manipulation, Wikipedia can spread misinformation instantly and globally.
Wikipedia illustrates another important aspect of the digital knowledge loop: it allows individuals to participate in extremely small ways. If you wish, you can contribute to Wikipedia by fixing a single typo. If ten thousand people fixed one typo every day, that would be 3.65 million typos a year. If we assume that it takes two minutes to discover and fix a typo, it would take nearly fifty people working full-time for a year (2,500 hours) to fix that many typos.
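The equivalence is easy to verify, using the assumptions stated above (two minutes per fix, a 2,500-hour working year):

```python
typos_per_year = 10_000 * 365            # 3.65 million small fixes
hours_needed = typos_per_year * 2 / 60   # at two minutes per fix
full_time_people = hours_needed / 2_500  # 2,500-hour working year
print(round(full_time_people, 1))        # ~48.7, i.e. nearly fifty people
```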
The example of a Wikipedia spelling correction shows the power of small contributions that add up within the digital knowledge loop. Their peril can be seen on social networks such as Twitter and Facebook, where the small contributions are likes and retweets or reposts to one’s friends or followers. While these tiny actions can amplify high-quality content, they can also spread mistakes, rumors and propaganda: indeed, research carried out at MIT in 2018 found that fake news stories spread faster and more widely than true ones (Vosoughi et al., 2018) (see “Freedom to Learn”, below). These information cascades can have significant consequences, ranging from jokes going viral to the outcomes of elections being affected. They have even contributed to major outbreaks of violence, as in the well-known case of the brutal persecution of the Rohingya in Myanmar (BBC News, 2018).
Some platforms make it possible for people to contribute passively to the digital knowledge loop. Waze is a GPS navigation app that tracks users who appear to be in a car, along with the speed at which they are moving. It passes that information back to its servers, where algorithms figure out where traffic is moving smoothly and where drivers will encounter traffic jams. Waze then proposes alternative routes, taking the traffic into account. If you follow a different route proposed by Waze, you automatically contribute your speed on that detour, a further example of passive contribution.
To see the peril of passive contribution, consider Google’s autocomplete for search queries, which is derived from what people frequently search for. As a result, autocomplete suggestions often reflect existing biases and further amplify them: often, instead of typing out their whole query, users select one of the autocompleted options presented to them. Another example of dangerous passive contribution is suggested ‘follows’ on networks such as Twitter. These often present accounts of people similar to the ones someone is already connected with, thus deepening connections among people who think alike while cutting them off from other groups, a phenomenon giving rise to a kind of “Cyber-Balkans” (Van Alstyne & Brynjolfsson, 2005).
The promise of the digital knowledge loop is broad access to a rapidly improving body of knowledge. The peril is that it will lead to a post-truth society that is constantly in conflict. Both of these possibilities are enabled by the same characteristics of digital technologies. Here once again, we can see that technology by itself does not determine the future.
To achieve the promise of the digital knowledge loop and sidestep its perils will require human societies to go through a massive transition, on a par with the two previous ones, from the Forager Age to the Agrarian Age and from the Agrarian Age to the Industrial Age. We now need to leave the Industrial Age behind and enter the next one, which I am calling the Knowledge Age. We have based our economies around the job loop, which traps a lot of our attention. We have constructed our laws governing access to information and computation as if they were industrial products. We have adopted a range of beliefs that keep us tied to jobs and consumption, and we are utterly overwhelmed by the new information environment. All of that has to change.
The transition will be difficult, however, because the Industrial Age is a system with many interlocking parts, and systems are highly resistant to change. As we saw earlier, simply harnessing digital technology to the existing system results in a hugely uneven distribution of power, income and wealth. Even worse, it tilts the digital knowledge loop away from its promise and toward its perils.
The human species is facing problems that we can only overcome if we use digital technology to alleviate rather than worsen attention scarcity. We must reap the promise and limit the perils of digital technology for the knowledge loop. In order to successfully negotiate the transition into the Knowledge Age, we need to make dramatic changes in both collective regulation and self-regulation. This is what we will explore in Part Four.
Learning is the hardest step in the knowledge loop. How many of us say something like “I wish I could play the guitar,” but either never do anything about it or give up after a short period? Learning is hard, and we should try to make it easier, more fun and more social. There has been plenty of recent progress here: for example, Duolingo has made language learning more accessible by breaking it down into small units that are customized to each learner.
I am personally excited about helping to create two particular projects. One is an integrated platform for learning math, programming, engineering and science. These areas of knowledge are closely related, yet the way we teach them is often oddly disconnected. The other project is a compendium of the principles of knowledge. We have so much knowledge that it seems impossible to know more than a tiny fraction of it all, but this is partly an illusion because much of it is a variation or an application of a much smaller set of underlying principles. Collecting and explaining these will make knowledge more accessible and help to unify areas that seem unrelated.
While the COVID-19 crisis has come at a terrible cost, it has also accelerated innovation in learning. Many parents are discovering that home schooling their children, whether individually or in small groups, may be a viable option. There are many ways to encourage learning that is based on fostering our innate curiosity, from simply learning something new oneself to inventing and building new systems.
As I have previously described, most developed countries have large central governments with a high degree of centralized decision-making. That is complemented by a high degree of concentration in the economy, with a few firms dominating most industries. Both of these things are bad for the knowledge loop, as they inhibit experimentation. A recent example of this was illustrated by the response to COVID-19. In the US, testing was largely federally controlled, making it difficult to execute a differentiated state-level response. Even a state the size of California, which on a standalone basis would be the world’s sixth largest economy, was unable to approve rapid tests developed by California-based startups (Haverstock, 2020).
We can take many actions to help foster a return to decentralization. For example, where it is permitted, parents can choose to home school their children with other parents, forming experimental education pods. More importantly, we can participate in the burgeoning field of blockchain technologies. The best known blockchain is Bitcoin, a digital alternative to gold.
Blockchains are decentralized networks that can nonetheless achieve consensus, such as on how many Bitcoin are controlled by which address on the network. This matters because as we saw earlier, much of the power of companies such as Facebook, Google or Amazon comes from network effects. Government power is also derived from a network effect that arises from the ability to issue currency and regulate banking. Building decentralized alternatives to these systems using blockchain technology is a way of removing power from government and large corporations.
As it turns out, when blockchains work properly they are uncensorable. Unless a government or corporation can take over a large percentage of the nodes on a blockchain network, the information maintained by the network will continue to be propagated correctly, even when some nodes are trying to purge or manipulate the contents. The only option governments face is to cut their population off from accessing the networks, and this requires a high degree of control over all Internet traffic (as has been achieved, for example, by China). This is why fighting against national ‘firewalls’ is so important.
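The tamper-evidence behind this censorship resistance can be illustrated with a toy hash chain, a drastic simplification that omits consensus, proof-of-work and networking, but shows why honest nodes can detect any attempt to rewrite history:

```python
import hashlib

def chain(blocks):
    """Link each block to its predecessor by hash, as blockchain nodes do."""
    prev, out = "0" * 64, []
    for data in blocks:
        digest = hashlib.sha256((prev + data).encode()).hexdigest()
        out.append({"data": data, "prev": prev, "hash": digest})
        prev = digest
    return out

def valid(chained):
    """Any node can independently re-verify the whole history."""
    prev = "0" * 64
    for block in chained:
        recomputed = hashlib.sha256((prev + block["data"]).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != recomputed:
            return False
        prev = block["hash"]
    return True

ledger = chain(["alice pays bob 5", "bob pays carol 2"])
print(valid(ledger))                        # True
ledger[0]["data"] = "alice pays mallory 5"  # attempt to rewrite history
print(valid(ledger))                        # False: honest nodes reject the change
```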
While any one new blockchain system has a high likelihood of failing, the large number of current experiments will produce systems that have the potential to be transformative on a global scale. One of the most exciting possibilities is that we may end up with UBI built outside of existing government budgets as part of a cryptocurrency. A variety of projects are currently attempting this, including Circles and the $UBI token.
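To illustrate the basic mechanics of a crypto-UBI, here is a toy sketch in Python. It is emphatically not how Circles or the $UBI token actually work, just the simplest possible version of the idea: every verified address accrues tokens at a fixed rate over time.

```python
import time

RATE_PER_HOUR = 1.0  # tokens accrued per hour per address (arbitrary for this sketch)

class ToyUBI:
    """Toy ledger: every registered address accrues tokens linearly with time."""

    def __init__(self):
        self.registered_at = {}  # address -> registration timestamp (seconds)
        self.claimed = {}        # address -> tokens already withdrawn

    def register(self, address, now=None):
        now = time.time() if now is None else now
        self.registered_at.setdefault(address, now)
        self.claimed.setdefault(address, 0.0)

    def balance(self, address, now=None):
        """Tokens accrued since registration, minus what was already claimed."""
        now = time.time() if now is None else now
        hours = (now - self.registered_at[address]) / 3600
        return hours * RATE_PER_HOUR - self.claimed[address]

    def claim(self, address, amount, now=None):
        if amount > self.balance(address, now):
            raise ValueError("cannot claim more than has accrued")
        self.claimed[address] += amount

ledger = ToyUBI()
ledger.register("alice", now=0)
print(ledger.balance("alice", now=24 * 3600))  # 24.0 tokens after one day
```

The hard problem in practice is not the accrual logic but Sybil resistance: ensuring that each address corresponds to exactly one human, which projects such as Circles approach through webs of trust between participants.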
To be clear, decentralization and blockchains do not represent a panacea. Some problems require centralization to be solved (for instance, water and sewage are centrally controlled for a reason). And decentralization can bring its own problems, such as the potential aggravation of the ‘Cyber-Balkans’ problem that we encountered earlier. But at a time of excessive centralization, it is crucial that we foster decentralization to act as a counterweight.
We need to act with great urgency during this transition to the Knowledge Age. We are woefully behind in dealing with the climate crisis, and there is a real possibility that as its consequences unfold, society will degenerate into violence. In the longer term, we face a potential threat from the possible rise of superintelligences, and there is also a chance that we are not alone in this universe. These are risks that we can only deal with if we stop clinging to the Industrial Age and instead embrace the Knowledge Age. Conversely, if we are able to make the transition, huge opportunities lie ahead of us.
The world is rapidly being pulled apart, both by people who want to take us back to the past and by people who are advancing technology while remaining trapped in Industrial Age thinking. As I described in the introduction, technology increases the space of the possible, but it does not automatically make everyone better off. Bound to an Industrial Age logic, automation is currently enriching a few while putting pressure on large sections of society. Nor does digital publishing automatically accelerate the knowledge loop: we find ourselves in a world plagued by fake news and filter bubbles.
Those who are trying to take us back into the past are exploiting these trends. They promise those negatively affected by technology that everything will be better, while investing heavily in mass manipulation. They seek to curtail the open Internet, while simultaneously building up secret surveillance. This is true on both sides of the American political spectrum. Neither the Republicans nor the Democrats have a truly forward-looking platform, and both favor governmental controls over online platforms and speech, instead of empowering end-users as described in the section on informational freedom.
The net effect is an increase in polarization and a breakdown of critical inquiry and democracy. As disturbing as it is, the possibility of large-scale violent conflict, both within and between nations, is increasing, while the climate crisis wreaks havoc on industrial and food supply chains around the world. At the same time, our ability to solve the problem of climate change is decreasing, because we are spiraling back towards the past and redirecting our attention to nationalism.
There’s another reason for urgency in navigating the transition to the Knowledge Age: we find ourselves on the threshold of creating both transhumans and neohumans. ‘Transhumans’ are humans with capabilities enhanced through both genetic modification (for example, via CRISPR gene editing) and digital augmentation (for example, via brain-machine interfaces such as Neuralink). ‘Neohumans’ are machines with artificial general intelligence. I include both here because both can be full-fledged participants in the knowledge loop.
Both transhumans and neohumans may eventually become a form of ‘superintelligence’ and pose a threat to humanity. The philosopher Nick Bostrom published an entire book on the subject, Superintelligence, and he and other thinkers warn that a superintelligence could have catastrophic results. Rather than rehashing their arguments here, I want to pursue a different line of inquiry: what would a future superintelligence learn about humanist values from our current behavior?
As we have seen, we’re not doing terribly well on the central humanist value of critical inquiry. We’re also not treating other species well, our biggest failing in this area being industrial meat production. Here, as with many other problems that humans have created, I believe the best way forward is innovation. I’m excited about lab-grown meat and plant-based meat substitutes. Improving our treatment of other species is an important way in which we can use the attention freed up by automation.
Even more important, however, is our treatment of other humans. This has two components: how we treat each other now, and how we will treat the new humans when they arrive. As for how we treat each other now, we have a long way to go. Many of my proposals are aimed at freeing humans to discover and pursue their personal interests and purpose, while the existing education system and the job loop stand in opposition to this freedom. In particular, we need to construct the Knowledge Age in a way that allows us to overcome, rather than reinforce, the biological differences that have been used as justification for so much discrimination and mistreatment. That will be a crucial model for transhuman and neohuman superintelligences, as they will not share our biological constraints.
Finally, how will we treat the new humans? This is a difficult question to answer because it sounds so preposterous. Should machines have human rights? If they are humans, then they clearly should. My approach to what makes humans human—the ability to create and make use of knowledge—would also apply to artificial general intelligence. Does an artificial general intelligence need to have emotions in order to qualify? Does it require consciousness? These are difficult questions to answer but we need to tackle them urgently. Since these new humans will likely share little of our biological hardware, there is no reason to expect that their emotions or consciousness should be similar to ours. As we charge ahead, this is an important area for further work. We would not want to accidentally create a large class of new humans, not recognize them, and then mistreat them.
I want to provide one final reason for urgency in getting to the Knowledge Age. It is easy for us to think of ourselves as the center of the universe. In early cosmology we put the Earth at the center, before we eventually figured out that we live on a small planet circling a star, in a galaxy that lies in an undistinguished location in an incomprehensibly large universe. More recently, we have discovered that the universe contains a great many planets more or less like ours, which means some form of intelligent life may have arisen elsewhere. This possibility leads to many fascinating questions, one of which is known as the Fermi paradox: if there is so much potential for intelligent life in the universe, why have we not yet picked up any signals?
There are different possible answers to this question. For instance, perhaps civilizations get to a point similar to ours and then destroy themselves because they cannot make a crucial transition. Given the way we are handling our current transition, that seems like a distinct possibility. Maybe all intelligent civilizations encounter a problem that they cannot solve, such as the climate crisis, and either disappear entirely or suffer a collapse in knowledge and technology. Given the scale of cosmic time and space, short-lived broadcast civilizations like ours would be unlikely to be detected in any small spatiotemporal range. Furthermore, while climate change is a clear and present danger, there are also many other species-level challenges (Ord, 2020).
A different answer to the Fermi paradox would present a different challenge: more advanced civilizations may have gone dark so as to not be discovered and destroyed by even more advanced civilizations. By that account, we may be entering a particularly dangerous period, in which we have been broadcasting our presence but do not yet have the means to travel through space.
Conversely, it is worth asking what kind of opportunities we might explore in the Knowledge Age. To begin with, there is a massive opportunity for automation. Fifty years or so after a successful transition to the Knowledge Age, I expect the amount of attention trapped in the job loop to have shrunk to around 20 percent or less of all human attention. This is comparable to the shrinkage of attention focused on agriculture during the Industrial Age. We will finally be able to achieve the level of freedom that many thinkers had previously predicted, such as Keynes in his essay “Economic Possibilities for our Grandchildren,” where he wrote about humanity coming to enjoy a life of mostly leisure (Keynes, 1932). Even Marx envisioned such a world, although he believed it would be brought about differently. He wrote about a system that “makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.” That is the promise of the Knowledge Age.
But there are many more opportunities for human progress, including space travel. One of the most depressing moments in my life came early on when I learned that at some point our sun will burn out and, in the process, annihilate all life on Earth. What is the point of anything we do if everything will come to an end anyhow? Thankfully I came to realize that with enough knowledge and progress, humanity could become a spacefaring species and live on other planets, before eventually traveling to the stars.
A third opportunity is the elimination of disease. It is sometimes easy to forget how far we have already come on that account. Many of the diseases that used to cripple or kill people have either become treatable or have been eliminated. We have started to make major progress in fighting cancer, and I believe there is a good chance that most cancers will become treatable within the next couple of decades. Ultimately this raises the question of mortality. Can, and should, we strive to become immortal? I believe we should, although achieving immortality will bring new problems. These are not the problems of overpopulation that some people imagine, as birth rates will be falling and there is, of course, space beyond our planet. The real challenge of immortality will be maintaining the functioning of the knowledge loop: we will have to figure out not just how to keep the body alive but also how to keep the mind flexible. Max Planck captured this challenge in his famous quip that “science advances one funeral at a time”: holders of older, dominant positions do not allow new theories to displace them.
A fourth opportunity is to go from capital being merely sufficient to capital being abundant. By the definitions set out earlier, that would mean that the marginal cost of capital was zero. The best way to imagine what that might look like is to think of the replicator from Star Trek. Imagine a microwave oven that instead of heating up a dish makes it from scratch, without requiring you to shop for the ingredients first. Such an abundance of capital might seem a preposterous idea that could never be achieved, but for most physical assembly processes today, the factors that limit the rate are the energy required and the need for humans to operate parts of the system. Machine learning is helping with the second factor, but progress on energy has been slow: we don’t yet have any fusion reactors that output more energy than is used to start the fusion, though there is no fundamental reason why that can’t be achieved. With enough knowledge we will make nuclear fusion work, removing the other major barrier to the abundance of capital.
We live in a period where there is an extraordinary range of possible outcomes for humanity. They include the annihilation of humankind in a climate catastrophe, at one extreme, and the indefinite exploration of the universe, at the other. Where we end up depends on the large and small choices each of us makes every day, from how we treat another person in conversation to how we tackle the climate crisis. It is a massive challenge, and I wake up every day both scared and excited about this moment of transition. I sincerely hope that The World After Capital makes a small contribution to getting us to the Knowledge Age.
I am grateful to all the people who have helped me along the way: my parents, who wholeheartedly supported my interest in computers, at a time when it was quite unusual and expensive to do so; my wife Susan Danziger and our children Michael, Katie and Peter, who make me a better person; my many teachers, including Erik Brynjolfsson and Bengt Holmström, from whom I learned so much; my partners at Union Square Ventures, starting with Fred Wilson and Brad Burnham who invited me to join the firm they had started; the many entrepreneurs I have had the opportunity to work with; the philosophers and scientists, such as David Deutsch and Chiara Marletto, who have demonstrated the power of human knowledge; the friends who have been there through good and bad times; and the many people who have taken the time to comment on the book and on my blog, who have invited me to speak, who have contributed in ways small and large, with special mentions for Ed Cooke for some of the earliest feedback, Seth Schulman for work on an early draft, Basil Vetas for capable research assistance, Nick Humphrey for goldilocks editing, Paul Reeves for intellectual pushback, Mona Alsubaei for perceptive feedback and crucial contributions to finishing the book, and Max Roser and the team at the Our World in Data project for their extensive data collection and visualization.
1854 Broad Street cholera outbreak. (2021, July 30). In Wikipedia. https://en.wikipedia.org/wiki/1854_Broad_Street_cholera_outbreak
2019 European heat waves. (2021, July 24). In Wikipedia. https://en.wikipedia.org/wiki/2019_European_heat_wave
Ablow, K. (2015, November 18). Was the Unabomber correct? Fox News. https://www.foxnews.com/opinion/was-the-unabomber-correct
Albert Einstein. (2021, August 6). In Wikipedia. https://en.wikipedia.org/wiki/Albert_Einstein#Early_life_and_education
Aleklett, K., Morrissey, D. J., Loveland, W., McGaughey, P. L., & Seaborg, G. T. (1981). Energy dependence of ²⁰⁹Bi fragmentation in relativistic nuclear collisions. Physical Review C, 23, 1044. https://doi.org/10.1103/PhysRevC.23.1044
Alewell, C., Ringeval, B., Ballabio, C., et al. (2020). Global phosphorus shortage will be aggravated by soil erosion. Nature Communications, 11, 4546. https://doi.org/10.1038/s41467-020-18326-7
Allison, B., & Harkins, S. (2014, November 17). Fixed Fortunes: Biggest corporate political interests spend billions, get trillions. Sunlight Foundation. https://sunlightfoundation.com/2014/11/17/fixed-fortunes-biggest-corporate-political-interests-spend-billions-get-trillions/
AlphaZero. (2020, September 9). In Wikipedia. https://en.wikipedia.org/w/index.php?title=AlphaZero&oldid=977537226
Andrews, E. (2018, September 1). What is the oldest known piece of music? HISTORY. https://www.history.com/news/what-is-the-oldest-known-piece-of-music
Appleby, M. (2018, December 7). Recommended Levels of Essential Amino Acids. SF Gate. https://healthyeating.sfgate.com/recommended-levels-essential-amino-acids-3649.html
Armstrong, M. (2020, August 27). Air Conditioning Biggest Factor in Growing Electricity Demand. Statista. https://www.statista.com/chart/14401/growing-demand-for-air-conditioning-and-energy/
Ausubel, J. H., Wernick, I. K., & Waggoner, P. E. (2013). Peak Farmland and the Prospect for Land Sparing. Population and Development Review, 38, 221–242. https://doi.org/10.1111/j.1728-4457.2013.00561.x
Backman, M. (2020, February 19). You’ll Be Shocked by How Many Americans Have No Retirement Savings at All. The Motley Fool. https://www.fool.com/retirement/2020/02/19/youll-be-shocked-by-how-many-americans-have-no-ret.aspx
Bariso, J. (2020, August 19). Google Has a Plan to Disrupt the College Degree. Inc.Com. https://www.inc.com/justin-bariso/google-plan-disrupt-college-degree-university-higher-education-certificate-project-management-data-analyst.html
BBC News. (2018, November 6). Facebook admits it was used to “incite offline violence” in Myanmar. https://www.bbc.com/news/world-asia-46105934
Becker, R. (2017, April 25). An artificial womb successfully grew baby sheep — and humans could be next. The Verge. https://www.theverge.com/2017/4/25/15421734/artificial-womb-fetus-biobag-uterus-lamb-sheep-birth-premie-preterm-infant
Berners-Lee, M. (2019). There Is No Planet B: A Handbook for the Make or Break Years (Illustrated ed.). Cambridge University Press.
Bernstein, A., & Raman, A. (2015, June). The Great Decoupling: An Interview with Erik Brynjolfsson and Andrew McAfee. Harvard Business Review. https://hbr.org/2015/06/the-great-decoupling
Berry, S. M., Carlin, B. P., Lee, J. J., & Muller, P. (2010). Bayesian Adaptive Methods for Clinical Trials (1st ed.). CRC Press.
Bivens, J., & Mishel, L. (2015, September 2). Understanding the Historic Divergence Between Productivity and a Typical Worker’s Pay: Why It Matters and Why It’s Real. Economic Policy Institute. https://www.epi.org/publication/understanding-the-historic-divergence-between-productivity-and-a-typical-workers-pay-why-it-matters-and-why-its-real/
Board of Governors of the Federal Reserve System. (2020, May). Report on the Economic Well-Being of U.S. Households in 2019 Featuring Supplemental Data from April 2020. Federal Reserve. https://www.federalreserve.gov/publications/files/2019-report-economic-well-being-us-households-202005.pdf
Boldrin, M., & Levine, D. K. (2010). Against Intellectual Monopoly (1st ed.). Cambridge University Press.
Borowiec, S. (2016, March 15). AlphaGo seals 4–1 victory over Go grandmaster Lee Sedol. The Guardian. https://www.theguardian.com/technology/2016/mar/15/googles-alphago-seals-4-1-victory-over-grandmaster-lee-sedol
Borunda, A. (2021, July 2). Heat waves kill people—and climate change is making it much, much worse. National Geographic. https://www.nationalgeographic.com/environment/article/heat-related-deaths-attributed-to-climate-change
British Overseas Airways Corporation. (2020, September 6). In Wikipedia. https://en.wikipedia.org/w/index.php?title=British_Overseas_Airways_Corporation&oldid=977094302
Buranyi, S. (2018, February 22). Is the staggeringly profitable business of scientific publishing bad for science? The Guardian. https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science
Bureau of Transportation Statistics. (2017, May). World Motor Vehicle Production. https://www.bts.gov/bts/archive/publications/national_transportation_statistics/table_01_23
Cabin pressurization. (2021, July 21). In Wikipedia. https://en.wikipedia.org/wiki/Cabin_pressurization
Casais, E. (2010). US federal spending as a share of GDP. Areppim AG. https://stats.areppim.com/stats/stats_usxrecxspendxgdp.htm
Case, A., & Deaton, A. (2021). Deaths of Despair and the Future of Capitalism. Princeton University Press.
Cavity magnetron. (2021, June 27). In Wikipedia. https://en.wikipedia.org/wiki/Cavity_magnetron
CDC. (n.d.). Health and Economic Costs of Chronic Diseases. National Center for Chronic Disease Prevention and Health Promotion. Retrieved August 6, 2021, from https://www.cdc.gov/chronicdisease/about/costs/index.htm
Centers for Disease Control (CDC). (2020, February). Fatal Injury and Violence Data. CDC. https://www.cdc.gov/injury/wisqars/fatal.html
Chaisson, C. (2017, December 12). When It Rains, It Pours Raw Sewage into New York City’s Waterways. NRDC. https://www.nrdc.org/stories/when-it-rains-it-pours-raw-sewage-new-york-citys-waterways
Chamber of Commerce of the United States. (1973). United States Multinational Enterprise Survey. https://books.google.com.sa/books?id=tPZe7kHHTboC&printsec=frontcover#v=onepage&q&f=false
Chiu, L. (2021, March 22). Declining sperm counts around the world could reach zero in just 24 years. CGTN America. https://america.cgtn.com/2021/03/22/declining-sperm-counts-around-the-world-could-reach-zero-in-just-24-years
Church–Turing thesis. (2020, August 3). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Church%E2%80%93Turing_thesis&oldid=970983529
Climate Central. (2019, May 15). POURING IT ON: How Climate Change Intensifies Heavy Rain Events. https://www.climatecentral.org/news/report-pouring-it-on-climate-change-intensifies-heavy-rain-events
Cognitive behavioral therapy. (2020, September 29). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Cognitive_behavioral_therapy&oldid=980954468
Constructor theory. (2020, August 24). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Constructor_theory.
Crane, J., & Marchese, K. (2015, September 2). 3D opportunity for the supply chain: Additive manufacturing delivers. Deloitte Insights. https://www2.deloitte.com/xe/en/insights/focus/3d-opportunity/additive-manufacturing-3d-printing-supply-chain-transformation.html
Cross, H. (2020, June 17). 9 Most Expensive New York City Restaurants. TripSavvy. https://www.tripsavvy.com/most-expensive-new-york-city-restaurants-1613415
Deazley, R. (2004). On the Origin of the Right to Copy: Charting the Movement of Copyright Law in Eighteenth Century Britain (1695–1775) (First Edition). Hart Publishing.
Deutsch, D. (2011). The Beginning of Infinity: Explanations That Transform the World. Viking Adult.
Dickson, S. (2016, August 17). Ten WWII innovations that changed the world we live in (for the better). The Vintage News. https://www.thevintagenews.com/2016/08/17/ten-wwii-innovations-changed-world-live-better/
Discovery doctrine. (2020, September 22). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Discovery_doctrine&oldid=979704205
Doyle, A., & Scrutton, A. (2016, April 12). Privacy, what privacy? Many Nordic tax records are a phone call away. Reuters. https://www.reuters.com/article/us-panama-tax-nordics-idUSKCN0X91QE
Dubock, A. (2019). Golden Rice: To Combat Vitamin A Deficiency for Public Health. In Vitamin A. https://doi.org/10.5772/intechopen.84445
Escort fighter. (2021, June 6). In Wikipedia. https://en.wikipedia.org/wiki/Escort_fighter
Ewers, R. M., Scharlemann, J. P. W., Balmford, A., & Green, R. E. (2009). Do increases in agricultural yield spare land for nature? Global Change Biology, 15(7), 1716–1726. https://doi.org/10.1111/j.1365-2486.2009.01849.x
Federal Reserve Bank of New York. (2020). Household Debt and Credit Report. https://www.newyorkfed.org/microeconomics/hhdc.html
Federal Reserve Bank of St. Louis. (2021d, August 11). Consumer Price Index for All Urban Consumers: Durables in U.S. City Average. FRED. https://fred.stlouisfed.org/series/CUSR0000SAD#0
Federal Reserve Bank of St. Louis. (2021b, August 26). Gross Domestic Product. FRED. https://fred.stlouisfed.org/series/GDP#0
Federal Reserve Bank of St. Louis. (2021a, August 26). Real Median Household Income in the United States and Gross Domestic Product. FRED. https://fred.stlouisfed.org/graph/?g=caWq
Federal Reserve Bank of St. Louis. (2021c, June 10). Households and Nonprofit Organizations; Debt Securities and Loans; Liability, Level. FRED. https://fred.stlouisfed.org/series/CMDEBT#0
Federal Reserve Bank of St. Louis. (2021g, March 11). Domestic Financial Sectors; Debt Securities and Loans; Liability, Level. FRED. https://fred.stlouisfed.org/series/DODFS
Federal Reserve Bank of St. Louis. (2021f, March 11). Nonfinancial Business; Debt Securities and Loans; Liability, Level. FRED. https://fred.stlouisfed.org/series/TBSDODNS
Federal Reserve Bank of St. Louis. (2021e, March 25). Gross Output of All Industries. FRED. https://fred.stlouisfed.org/series/GOAI
Fetherston, E., Kinzler, M., & Miller, S. (n.d.). Assembling Our Transportation Future. Gala. Retrieved August 2, 2021, from https://www.learngala.com/cases/model-t/8
Flight distance record. (2021, April 7). In Wikipedia. https://en.wikipedia.org/wiki/Flight_distance_record
Flint water crisis. (2021, August 1). In Wikipedia. https://en.wikipedia.org/wiki/Flint_water_crisis
Fontinelle, A. (2021, February 11). Auto Loan Balances Hit $1.36 Trillion in 2020. Investopedia. https://www.investopedia.com/personal-finance/american-debt-auto-loan-debt/
Gatto, J. T., Grove, R., Rodriguez, D., Ruenzel, D., & Paul, R. (2017). The Underground History of American Education, Volume I: An Intimate Investigation Into the Prison of Modern Schooling. Valor Academy.
Global Migration Data Portal. (2021, June 3). Environmental Migration. Migration Data Portal. https://migrationdataportal.org/themes/environmental_migration_and_statistics
Gold, H. (2017, February 24). Breitbart reveals owners: CEO Larry Solov, the Mercer family and Susie Breitbart. POLITICO. https://www.politico.com/blogs/on-media/2017/02/breitbart-reveals-owners-ceo-larry-solov-mercer-family-and-susie-breitrbart-235358
Great Firewall. (2020, October 2). In Wikipedia. https://en.wikipedia.org/wiki/Great_Firewall
Hamblin, J. (2016, November 30). How Cubans Live as Long as Americans at a Tenth of the Cost. The Atlantic. https://www.theatlantic.com/health/archive/2016/11/cuba-health/508859/
Harari, Y. N. (2011). Sapiens: A Brief History of Humankind. Penguin Random House
Harding, R. (2020, January 23). Vertical farming finally grows up in Japan. Financial Times. https://www.ft.com/content/f80ea9d0-21a8-11ea-b8a1-584213ee7b2b
Harford, B. T. (2017, February 6). Why the falling cost of light matters. BBC News. https://www.bbc.com/news/business-38650976
Harrison, M. (1998). The Economics of World War II: Six Great Powers in International Comparison (Studies in Macroeconomic History) (Revised ed.). Cambridge University Press. https://warwick.ac.uk/fac/soc/economics/staff/mharrison/public/ww2overview1998.pdf
Hasell, J. (2013, October 10). Famines. Our World in Data. https://ourworldindata.org/famines
Haverstock, E. (2020, April 14). Changing FDA rules leave startups in limbo over at-home COVID-19 tests. PitchBook. https://pitchbook.com/news/articles/changing-fda-rules-leave-startups-in-limbo-over-at-home-covid-19-tests
Henderson, S. W. (2015, August). Consumer spending in World War II: The forgotten consumer expenditure surveys. Monthly Labor Review, U.S. Bureau of Labor Statistics. https://doi.org/10.21916/mlr.2015.29
History of patent law. (2020, August 14). In Wikipedia. https://en.wikipedia.org/w/index.php?title=History_of_patent_law&oldid=972938791
Holst, A. (2020, December 7). Air conditioners demand worldwide 2012–2018. Statista. https://www.statista.com/statistics/871534/worldwide-air-conditioner-demand/
How much oxygen does a person consume in a day? (2000, April 1). HowStuffWorks. https://health.howstuffworks.com/human-body/systems/respiratory/question98.htm
Hutton, G. (2016, February 12). Can we really put a price on meeting the global targets on drinking-water and sanitation? World Bank Blogs. https://blogs.worldbank.org/water/can-we-really-put-price-meeting-global-targets-drinking-water-and-sanitation
Illumina Next Generation Sequencing. (2021). Science Exchange. https://www.scienceexchange.com/marketplace/illumina-ngs?page=1
Inglehart, R., & Norris, P. (2016). Trump, Brexit, and the Rise of Populism: Economic Have-Nots and Cultural Backlash. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2818659
International Iron and Steel Institute. (1991). Steel statistical yearbook. Brussels: The Institute. https://www.worldsteel.org/en/dam/jcr:7cd3de6e-ff49-4731-acc0-01bb9e8e3773/Steel%2520statistical%2520yearbook%25201991.pdf
International Iron and Steel Institute. (2001). Steel statistical yearbook. Brussels: The Institute. https://www.worldsteel.org/en/dam/jcr:08b20e40-78a2-4971-bcb2-7a99ee2c7b99/Steel%2520statistical%2520yearbook%25202001.pdf
Jedlicka, P. (2017, October 24). Revisiting the Quantum Brain Hypothesis: Toward Quantum (Neuro)biology? Frontiers. https://www.frontiersin.org/articles/10.3389/fnmol.2017.00366/full.
Johnson, S. (2007). The Ghost Map: The Story of London’s Most Terrifying Epidemic--and How It Changed Science, Cities, and the Modern World. Riverhead Books.
Johnson, S. (2015, August 19). The Creative Apocalypse That Wasn’t. The New York Times. https://www.nytimes.com/2015/08/23/magazine/the-creative-apocalypse-that-wasnt.html
Johnson, S. (2021). Extra Life: A Short History of Living Longer. Riverhead Books.
Joint Center for Housing Studies of Harvard University. (2020). America’s Rental Housing. https://www.jchs.harvard.edu/sites/default/files/Harvard_JCHS_Americas_Rental_Housing_2020.pdf
García, J. (2009). An integrated approach to the design and operation of low capacity sewage treatment works. Desalination and Water Treatment, 4(1–3), 28–32. https://doi.org/10.5004/dwt.2009.351
Kahneman, D. (2013). Thinking, Fast and Slow (1st ed.). Farrar, Straus and Giroux.
Kamal, B. (2017, August 21). Climate Migrants Might Reach One Billion by 2050. ReliefWeb. https://reliefweb.int/report/world/climate-migrants-might-reach-one-billion-2050
Kang, C. (2017, April 26). F.C.C. Chairman Pushes Sweeping Changes to Net Neutrality Rules. The New York Times. https://www.nytimes.com/2017/04/26/technology/net-neutrality.html
Kaufman, L., Rojanasakul, M., Warren, H., Kao, J., Harris, B., & Gopal, P. (2020, June 29). Mapping America’s Underwater Real Estate. Bloomberg. https://www.bloomberg.com/graphics/2020-flood-risk-zone-us-map/
Kemp, S. (2020, January). Digital 2020: 3.8 billion people use social media. We Are Social. https://wearesocial.com/uk/blog/2020/01/digital-2020-3-8-billion-people-use-social-media/
Keynes, J. M. (1932). Essays in persuasion. New York: Harcourt, Brace and Co.
Kilby, E.R. (2007). The demographics of the U.S. equine population. In D.J. Salem & A.N. Rowan (Eds.), The state of the animals 2007 (pp. 175-205). Washington, DC: Humane Society Press.
Kinsella, S. (2013, December 1). Study: Most Important Innovations Are Not Patented. The Center for the Study of Innovative Freedom. http://c4sif.org/2013/11/study-most-important-innovations-are-not-patented/
Kopp, C. (2021, February 12). Creative Destruction Definition. Investopedia. https://www.investopedia.com/terms/c/creativedestruction.asp
Krivit, S., & Ravnitzky, M. (2016, December 7). It’s Not Cold Fusion. . . But It’s Something. Scientific American Blog Network. https://blogs.scientificamerican.com/guest-blog/its-not-cold-fusion-but-its-something/
Lahart, J. (2011, December 10). Number of the Week: Finance’s Share of Economy Continues to Grow. WSJ. https://www.wsj.com/articles/BL-REB-15342
Large Hadron Collider. (2021, July 23). In Wikipedia. https://en.wikipedia.org/wiki/Large_Hadron_Collider
Leonhardt, M. (2020, January 22). 41% of Americans would be able to cover a $1,000 emergency with savings. CNBC. https://www.cnbc.com/2020/01/21/41-percent-of-americans-would-be-able-to-cover-1000-dollar-emergency-with-savings.html
Leontief, W. (1952). Machines and man. Scientific American, 187(3), 150–164. http://www.jstor.org/stable/24950787
Lepley, S. (2019, May 30). 9 mind-blowing facts about the US farming industry. Markets.Businessinsider.Com. https://markets.businessinsider.com/news/stocks/farming-industry-facts-us-2019-5
Lewis, N. (2018, December 17). The Financial System Should Be Less Than Half The Size of Today. Forbes. https://www.forbes.com/sites/nathanlewis/2018/12/17/the-financial-system-should-be-less-than-half-the-size-of-today/
Liptak, A. (2017, May 22). Supreme Court Ruling Could Hinder ‘Patent Trolls.’ The New York Times. https://www.nytimes.com/2017/05/22/business/supreme-court-patent-lawsuit.html
List of public corporations by market capitalization. (2020, September 1). In Wikipedia. https://en.wikipedia.org/w/index.php?title=List_of_public_corporations_by_market_capitalization&oldid=976207594
London sewerage system. (2021, July 24). In Wikipedia. https://en.wikipedia.org/wiki/London_sewerage_system
Macguire, B. F. E. C. (2014, October 24). Houses given away for free in Detroit. CNN. https://edition.cnn.com/2014/10/23/business/write-a-house-detroit/index.html
Malthus, T. R. (1798). Essay on the Principle of Population. http://www.esp.org/books/malthus/population/malthus.pdf
Manthorpe, R. (2018, April 17). What is Open Banking and PSD2? WIRED explains. WIRED UK. https://www.wired.co.uk/article/open-banking-cma-psd2-explained#:~:text=
Marr, B. (2019, September 5). How Much Data Do We Create Every Day? The Mind-Blowing Stats Everyone Should Read. Forbes. https://www.forbes.com/sites/bernardmarr/2018/05/21/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read/
McCandless, D. (2020, December 9). World’s Biggest Data Breaches & Hacks. Information Is Beautiful. https://informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/
McClintock, L. (2020, March 10). How Meditation Changes the Brain. Psych Central. https://psychcentral.com/blog/how-meditation-changes-the-brain#1
Mearns, E. (2013, November 3). Energiewende: Germany, UK, France and Spain. Energy Matters. http://euanmearns.com/energiewende-germany-uk-france-and-spain/
Michael Faraday. (2021, August 6). In Wikipedia. https://en.wikipedia.org/wiki/Michael_Faraday#Early_life
Military budget of the United States. (2020, September 7). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Military_budget_of_the_United_States&oldid=977121615
MIMO. (2020, August 29). In Wikipedia. https://en.wikipedia.org/w/index.php?title=MIMO&oldid=975685031
Minimally invasive education. (2021, July 5). In Wikipedia. https://en.wikipedia.org/wiki/Minimally_invasive_education
Mobile Price List In India. (n.d.). 91 Mobiles. Retrieved August 27, 2021, from https://www.91mobiles.com/xiaomi-mobile-price-list-in-india
Mosse, R. (2015, December 21). The Genesis Engine. Wired. https://www.wired.com/2015/12/the-genesis-engine/
Motorlease. (n.d.). The Rise of the SUVs. Motorlease Fleet Management & Leasing Solutions. Retrieved August 7, 2021, from https://motorlease.com/article/suv-rise/
Motorola DynaTAC. (2021, June 30). In Wikipedia. https://en.wikipedia.org/wiki/Motorola_DynaTAC
Mullins, R. (2012). What is a Turing machine? University of Cambridge. https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/turing-machine/one.html
Napster Settlement Offer Rejected. (2001, March 6). CBS News. https://www.cbsnews.com/news/napster-settlement-offer-rejected/
NASA. (2018, December). Journey to Mars: Pioneering Next Steps in Space Exploration. http://www.nasa.gov/johnson/exploration/overview/
National Center for Health Statistics. (2019). National Vital Statistics System. https://www.cdc.gov/nchs/data/databriefs/db394-tables-508.pdf#page=1
National Institutes of Health. (2021, August 4). Office of Dietary Supplements - Omega-3 Fatty Acids. https://ods.od.nih.gov/factsheets/Omega3FattyAcids-Consumer/
Neuroscience News. (2018, December 3). Man Versus Machine: Who Wins When It Comes to Facial Recognition? https://neurosciencenews.com/man-machine-facial-recognition-120191/
Nipps, K. (2014). Cum privilegio: Licensing of the Press Act of 1662. The Library Quarterly, 84(4), 494–500. https://doi.org/10.1086/677787
Nordhaus, W. (1994). Do Real Output and Real Wage Measures Capture Reality? The History of Lighting Suggests Not (Cowles Foundation Discussion Paper No. 1078). Cowles Foundation for Research in Economics, Yale University.
O’Dea, S. (2021, June 2). Smartphone penetration worldwide 2020. Statista. https://www.statista.com/statistics/203734/global-smartphone-penetration-per-capita-since-2005/
O’Dell, R., & Penzenstadler, N. (2019, April 4). You elected them to write new laws. They’re letting corporations do it instead. Center for Public Integrity. https://publicintegrity.org/politics/state-politics/copy-paste-legislate/you-elected-them-to-write-new-laws-theyre-letting-corporations-do-it-instead/
O’Toole, G. (2011, November 16). “How Will You Get Robots to Pay Union Dues?” “How Will You Get Robots to Buy Cars?” Quote Investigator. https://quoteinvestigator.com/2011/11/16/robots-buy-cars/
OECD (2019). Health resources - Pharmaceutical spending. OECD Data. https://data.oecd.org/healthres/pharmaceutical-spending.htm
OECD (2020). Details of Tax Revenue - United States. OECD Stats. https://stats.oecd.org/Index.aspx?DataSetCode=REVUSA
OECD (2021a), General government spending (indicator). doi: 10.1787/a31cbf4d-en (Accessed on 07 July 2021)
OECD (2021b, July). OECD Statistics - Health expenditure and financing. OECD Stat. https://stats.oecd.org/Index.aspx?ThemeTreeId=9
Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette Books.
Orlo Salon. (n.d.). Orlo Salon Price List. Orlo Salon. http://orlosalon.com/#
Orrall, J. (2020, September 27). Houses 3D-printed in just 24 hours now shipping in California. CNET. https://www.cnet.com/news/houses-3d-printed-in-just-24-hours-now-shipping-in-california/
Our World in Data. (2019). World Population over the last 12,000 years and UN projection until 2100. https://ourworldindata.org/grapher/world-population-1750-2015-and-un-projection-until-2100?country=Our+World+In+Data%7EOWID_WRL
Our World in Data. (n.d.). Literate and illiterate world population. Retrieved August 7, 2021, from https://ourworldindata.org/grapher/literate-and-illiterate-world-population?country=%7EOWID_WRL
Paine, T. (1797). Agrarian Justice. London: Paris printed by W. Adlard, London re-printed for T. Williams
Palimpsest. (2021, June 30). In Wikipedia. https://en.wikipedia.org/wiki/Palimpsest
Pariser, E. (2021). The Filter Bubble. Penguin Random House.
Parkin, S. (2020, April 2). The Artificially Intelligent Doctor Will Hear You Now. MIT Technology Review. https://www.technologyreview.com/2016/03/09/8890/the-artificially-intelligent-doctor-will-hear-you-now/
Penicillin. (2021, July 30). In Wikipedia. https://en.wikipedia.org/wiki/Penicillin#Mass_production
Personalized Genes. (2015, June 24). Comparing Price and Tech. Specs. of Illumina MiSeq, Ion Torrent PGM, 454 GS Junior, and PacBio RS. http://www.personalizedgenes.com/comparison-of-illumina-miseq-ion-torrent-pgm-454-gs-junior-and-pacbio-rs/
Pezzutto, S. (2019). Confucianism and Capitalist Development: From Max Weber and Orientalism to Lee Kuan Yew and New Confucianism. Asian Studies Review, 43(2), 224–238. https://doi.org/10.1080/10357823.2019.1590685
Phillips, P. J., et al. (2018). Face recognition accuracy of forensic examiners, super recognizers, and face recognition algorithms. Proceedings of the National Academy of Sciences, 115(24), 6171–6176. https://doi.org/10.1073/pnas.1721355115
Powers, W., & Roy, D. (2018, March 19). The Incredible Jun: A Town that Runs on Social Media. Medium. https://medium.com/@socialmachines/the-incredible-jun-a-town-that-runs-on-social-media-49d3d0d4590
Ramankutty, N., et al. (2018). Trends in Global Agricultural Land Use: Implications for Environmental Health and Food Security. Annual Review of Plant Biology, 69(1), 789–815. https://doi.org/10.1146/annurev-arplant-042817-040256
Red flag traffic laws. (2020, July 2). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Red_flag_traffic_laws&oldid=965632438
Ritchie, H. (2019, November 13). Land Use. Our World in Data. https://ourworldindata.org/land-use
Ritchie, H., & Roser, M. (2021, July 1). Sanitation. Our World in Data. https://ourworldindata.org/sanitation
Rodrick, S. (2019, December 23). All-American Despair. Rolling Stone. https://www.rollingstone.com/culture/culture-features/suicide-rate-america-white-men-841576/.
Romine, S. (2019, March 29). Essential Nutrients: What Are They & How Much Do You Need? Openfit. https://www.openfit.com/essential-nutrients-explainer
Roser, M. (2013, December 10). Light at Night. Our World in Data. https://ourworldindata.org/light
Roser, M. (2017, December 2). Fertility Rate. Our World in Data. https://ourworldindata.org/fertility-rate
Roser, M. (2019a, November). Future Population Growth. Our World in Data. https://ourworldindata.org/future-population-growth
Roser, M. (2019b, October). Life Expectancy. Our World in Data. https://ourworldindata.org/life-expectancy
Roser, M., & Ortiz-Ospina E (2013, May 25). Global Extreme Poverty. Our World in Data. https://ourworldindata.org/extreme-poverty
Roser, M., & Ritchie, H. (2013, May 11). Technological Progress. Our World in Data. https://ourworldindata.org/technological-progress
Rostad, N. (2018, April). In Mexico, satellite and Wi-Fi come together to bring internet to remote areas. Viasat. https://www.viasat.com/about/newsroom/blog/how-satellite-and-wi-fi-come-together-to-bring-internet-to-remotest-mexico/
Roy, P. (2019, March 18). Mobile data: Why India has the world’s cheapest. BBC News. https://www.bbc.com/news/world-asia-india-47537201
S. (2013, October 30). Culture Shock in China - Bathrooms. Chinese Language Blog. https://blogs.transparent.com/chinese/culture-shock-in-china-bathrooms/
Sample, I. (2018, February 14). Harvard University says it can’t afford journal publishers’ prices. The Guardian. https://www.theguardian.com/science/2012/apr/24/harvard-university-journal-publishers-prices
Seligson, K. (2019, May 20). Misreading the story of climate change and the Maya. The Conversation. https://theconversation.com/misreading-the-story-of-climate-change-and-the-maya-113829
Shin, D. D., & Kim, S. I. (2019). Homo Curious: Curious or Interested? Educational Psychology Review, 31(4), 853–874. https://doi.org/10.1007/s10648-019-09497-x
Shrinivasan, R. (2013, May 1). Farmer population falls by 9 million in 10 years. The Times of India. https://timesofindia.indiatimes.com/india/Farmer-population-falls-by-9-million-in-10-years/articleshow/19813617.cms
Simon, C. (2020, February 28). New clues about how and why the Maya culture collapsed. Harvard Gazette. https://news.harvard.edu/gazette/story/2020/02/new-clues-about-how-and-why-the-maya-culture-collapsed/
Skidelsky, R., & Skidelsky, E. (2013). How Much is Enough?: Money and the Good Life. Other Press.
Slaton, J. (1999, January 13). A Mickey Mouse Copyright Law? Wired. https://www.wired.com/1999/01/a-mickey-mouse-copyright-law/
Slotta, D. (2020, November 26). Number of newly built apartments in China 2009–2019. Statista. https://www.statista.com/statistics/242963/number-of-newly-built-apartments-in-china/
Smith, D. (2020, August 27). Despite Streaming, US Recorded Music Revenues Still Down 50% From 1999 Peaks. Digital Music News. https://www.digitalmusicnews.com/2020/08/27/recorded-music-revenues-decrease/
Smog kills thousands in England. (2020, December 4). HISTORY. https://www.history.com/this-day-in-history/smog-kills-thousands-in-england
Solar wind. (2021, July 18). In Wikipedia. https://en.wikipedia.org/wiki/Solar_wind#Atmospheres
Sony Music - Overview, News & Competitors. (n.d.). ZoomInfo. https://www.zoominfo.com/c/sony-music-entertainment/253920496
Statista. (2020, December 3). Number of mobile subscriptions worldwide 1993–2019. https://www.statista.com/statistics/262950/global-mobile-subscriptions-since-1993/
Statista. (2021a, February 2). Global module manufacturing production 2000–2019. https://www.statista.com/statistics/668764/annual-solar-module-manufacturing-globally/
Statista. (2021b, May 10). Smartphone penetration rate in India FY 2016–2020, with estimates until 2025. https://www.statista.com/statistics/1229799/india-smartphone-penetration-rate/
Stoller, M. (2020, April 9). The Cantillon Effect: Why Wall Street Gets a Bailout and You Don’t. BIG by Matt Stoller. https://mattstoller.substack.com/p/the-cantillon-effect-why-wall-street
Substance Abuse and Mental Health Services Administration. (2020). Key substance use and mental health indicators in the United States: Results from the 2019 National Survey on Drug Use and Health (HHS Publication No. PEP20-07-01-001, NSDUH Series H-55). Rockville, MD: Center for Behavioral Health Statistics and Quality, Substance Abuse and Mental Health Services Administration. Retrieved from https://www.samhsa.gov/data/
Suicide statistics. (2019, November 15). American Foundation for Suicide Prevention. https://afsp.org/suicide-statistics/
Szalavitz, M. (2017, June 14). Why Disappointment Is So Devastating: Dopamine, Addiction, and the Hedonic Treadmill. Pacific Standard. https://psmag.com/news/why-disappointment-is-so-devastating-dopamine-addiction-and-the-hedonic-treadmill
The American Academy of Actuaries. (2018, March). Prescription Drug Spending in the U.S. Health Care System. American Academy of Actuaries. https://www.actuary.org/content/prescription-drug-spending-us-health-care-system
The Economist. (2016, January 28). Machine earning. https://www.economist.com/finance-and-economics/2016/01/28/machine-earning
The Nielsen Total Audience Report. (2020, August 13). Nielsen. https://www.nielsen.com/us/en/insights/report/2020/the-nielsen-total-audience-report-august-2020/#popupForm
The U.S. National Archives and Records Administration. (n.d.). When You Ride Alone You Ride With Hitler! National Archives. Retrieved August 2, 2021, from https://www.archives.gov/exhibits/powers_of_persuasion/use_it_up/images_html/ride_with_hitler.html
Tim Berners-Lee. (n.d.). W3C. https://www.w3.org/People/Berners-Lee/
Timpson, C. G. (2004). Quantum computers: The Church–Turing hypothesis versus the Turing principle. In Alan Turing: Life and Legacy of a Great Thinker (pp. 213–240). Springer Berlin Heidelberg. http://dx.doi.org/10.1007/978-3-662-05642-4_9
Twomey, C., & O’Reilly, G. (2017). Associations of Self-Presentation on Facebook with Mental Health and Personality Variables: A Systematic Review. Cyberpsychology, Behavior, and Social Networking, 20(10), 587–595. https://doi.org/10.1089/cyber.2017.0247
U.S. Bureau of Economic Analysis. (2020, February). Gross Domestic Product, Fourth Quarter and Year 2019 (Second Estimate), U.S. Bureau of Economic Analysis (BEA). https://www.bea.gov/news/2020/gross-domestic-product-fourth-quarter-and-year-2019-second-estimate
U.S. Bureau of the Census. (1949). Statistical Abstract of the United States: 1949 (70th ed.). Washington, D.C. https://www.census.gov/library/publications/1949/compendia/statab/70ed.html
U.S. Census Bureau. (2019, July). U.S. Census Bureau QuickFacts: United States. Census Bureau Quick Facts. https://www.census.gov/quickfacts/fact/table/US/PST045219
U.S. Copyright Office. (2019, December). Circular 1: Copyright Basics. https://www.copyright.gov/circs/circ01.pdf
U.S. Department of Agriculture and U.S. Department of Health and Human Services. (2015, December). 2015-2020 Dietary Guidelines for Americans (8th Edition) https://health.gov/sites/default/files/2019-09/2015-2020_Dietary_Guidelines.pdf
U.S. Environmental Protection Agency (EPA). (2021, July 2). Aluminum: Material-Specific Data. US EPA. https://www.epa.gov/facts-and-figures-about-materials-waste-and-recycling/aluminum-material-specific-data
United Nations Environment Programme (UNEP) & The Global Alliance for Buildings and Construction (GABC). (2017, November). Towards zero-emission efficient and resilient buildings - Global Status Report 2016. United Nations Environment Programme. https://wedocs.unep.org/bitstream/handle/20.500.11822/10618/GABC-Report_Updated.pdf?sequence=1&%3BisAllowed=
United Nations, Department of Economic and Social Affairs, Population Division (2019). World Population Prospects 2019, Online Edition. Rev. 1.
Universal Music - Overview, News & Competitors. (n.d.). ZoomInfo. https://www.zoominfo.com/c/universal-music-group/122060886
Unschooling. (2021, August 5). In Wikipedia. https://en.wikipedia.org/wiki/Unschooling
Urmson, C. (2012, August 7). The self-driving car logs more miles on new wheels. Official Google Blog. https://googleblog.blogspot.com/2012/08/the-self-driving-car-logs-more-miles-on.html
USDA. (n.d.). USDA ERS - Ag and Food Sectors and the Economy. U.S. Department of Agriculture. Retrieved August 4, 2021, from https://www.ers.usda.gov/data-products/ag-and-food-statistics-charting-the-essentials/ag-and-food-sectors-and-the-economy/
Van Alstyne, M. & Brynjolfsson, E. (2005). Global Village or Cyber-Balkans? Modeling and Measuring the Integration of Electronic Communities. Management Science Vol. 51, No. 6, 851-868. https://www.jstor.org/stable/20110380
Vincent, J. (2016, February 8). Facebook’s Free Basics service has been banned in India. The Verge. https://www.theverge.com/2016/2/8/10913398/free-basics-india-regulator-ruling
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Warner Music Group - Company Profile and News. (n.d.). Bloomberg. https://www.bloomberg.com/profile/company/WMG:US
Water: How much should you drink every day? (2020, October 14). Mayo Clinic. https://www.mayoclinic.org/healthy-lifestyle/nutrition-and-healthy-eating/in-depth/water/art-20044256?reDate=06052021
Wenger, A. (2011, October 3). Needed: The Opposing View Reader. Continuations by Albert Wenger. https://continuations.com/post/10979468250/needed-the-opposing-view-reader
Wenger, A. (2012, January 26). Supermodularity And Service Bundling. Continuations by Albert Wenger. https://continuations.com/post/16473794320/supermodularity-and-service-bundling
Wenger, A. (2017a, July 31). VPNs and Informational Freedom. Continuations by Albert Wenger. https://continuations.com/post/163632347545/vpns-and-informational-freedom
Wenger, A. (2017b, March 24). Government Just Gave Your ISP Even More Power: You Can Take it Back! Continuations by Albert Wenger. https://continuations.com/post/158773876945/yesterday-the-republican-controlled-senate-voted#=
White, A. (2021, May 4). Credit card debt in the U.S. hits all-time high of $930 billion—here’s how to tackle yours with a balance transfer. CNBC. https://www.cnbc.com/select/us-credit-card-debt-hits-all-time-high/
White, R. (2019, October 7). Ancient Maya Canals and Fields Show Early and Extensive Impacts on Tropical Forests. UT News. https://news.utexas.edu/2019/10/07/ancient-maya-canals-and-fields-show-early-and-extensive-impacts-on-tropical-forests/
Whitford, E. (2015, September 10). Judge: Uber’s E-Hails Are Legal, Taxi Industry Must Be “Wary.” Gothamist. https://gothamist.com/news/judge-ubers-e-hails-are-legal-taxi-industry-must-be-wary
Willyard, C. (2019, December 20). Lyme Vaccines Show New Promise, and Face Old Challenges. Undark Magazine. https://undark.org/2019/10/02/new-landscape-lyme-vaccines/
Wong, J. I. (2016, April 7). A fleet of trucks just drove themselves across Europe. Quartz. https://qz.com/656104/a-fleet-of-trucks-just-drove-themselves-across-europe/
World Bank. (2020a). GDP per capita, PPP. World Bank – World Development Indicators. https://data.worldbank.org/indicator/NY.GDP.PCAP.PP.CD
World Bank. (2020c). Gross capital formation. World Bank – World Development Indicators. https://data.worldbank.org/indicator/NE.GDI.TOTL.KD
World Bank. (2020b). Mortality rate, under-5. World Bank – World Development Indicators. https://data.worldbank.org/indicator/SH.DYN.MORT
World Health Organization. (n.d.). Air pollution. https://www.who.int/health-topics/air-pollution#tab=tab_1
World Population Clock: 7.9 Billion People (2021) - Worldometer. (n.d.). World Meters. https://www.worldometers.info/world-population/
World Steel Association. (2021, January 26). Global crude steel output decreases by 0.9% in 2020. World Steel. https://www.worldsteel.org/media-centre/press-releases/2021/Global-crude-steel-output-decreases-by-0.9--in-2020.html
Worldsteel Committee on Economic Studies. (2011). Steel Statistical Yearbook 2011. World Steel Association. https://www.worldsteel.org/en/dam/jcr:c12843e8-49c3-40f1-92f1-9665dc3f7a35/Steel%2520statistical%2520yearbook%25202011.pdf
Wright brothers. (2020, August 25). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Wright_brothers&oldid=974902146
Wulstan, D. (1971). The Earliest Musical Notation. Music & Letters, 52(4), 365-382. Retrieved May 8, 2021, from http://www.jstor.org/stable/734711
Xue, L. (2019, October 11). Hainan Bans All Fossil Fuel Vehicles. What Does it Mean for Clean Transport in China? TheCityFix. https://thecityfix.com/blog/hainan-bans-fossil-fuel-vehicles-mean-clean-transport-china-lulu-xue/
Yoon, Y. B., Bae, D., Kwak, S., Hwang, W. J., Cho, K. I. K., Lim, K. O., Park, H. Y., Lee, T. Y., Kim, S. N., & Kwon, J. S. (2019). Plastic Changes in the White Matter Induced by Templestay, a 4-Day Intensive Mindfulness Meditation Program. Mindfulness, 10(11), 2294–2301. https://doi.org/10.1007/s12671-019-01199-3
Zorabedian, J. (2015, July 17). Why this doctor posted his medical history online for anyone to see. Naked Security. https://nakedsecurity.sophos.com/2015/07/17/why-this-doctor-posted-his-medical-history-online-for-anyone-to-see/
The goal of this appendix is to provide additional data and arguments for why capital is sufficient, i.e. why there is enough capital to meet everyone’s needs. Again, capital here refers specifically to physical capital, such as machines and infrastructure: it is physical capital that produces the solutions to our needs, such as clothes and buildings.
There are three sections. The first section presents some general data on the development of global capital, showing the tremendous growth that has been accomplished over the last one hundred years. The second section uses data from World War II to show what can be accomplished when there is enough political will to redirect capital towards specific goals. Put differently, the bulk of capital available today is used in the service of wants, giving us a lot of room to meet needs. Finally, the third section dives deeper into each of the needs.
It turns out to be surprisingly difficult to find global data on physical capital. The best source I have been able to locate is the World Bank, which publishes a data series on gross capital formation (World Bank, 2020c). Unfortunately the data reaches back only to 1970, but it still shows an increase from roughly $5 trillion then to $22 trillion in 2019 (measured in constant 2010 dollars, i.e. adjusted for inflation).
Source: World Bank, 2020c
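As a rough check on the pace implied by those two endpoints (my own back-of-the-envelope calculation, not a World Bank figure), this corresponds to real growth of about 3 percent per year, compounded over half a century:

```python
# Implied average annual real growth of gross capital formation, 1970-2019
start, end = 5e12, 22e12        # constant 2010 US dollars (World Bank, 2020c)
years = 2019 - 1970             # 49 years
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%} per year")   # ~3.1% per year
```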
For triangulation, it is worth considering the growth in output of goods that require productive capacity: we can infer the availability of physical capital from its outputs. To that end, I was able to find the following chart of global crude steel production over time (Morfeldt, 2017).
Source: International Iron and Steel Institute 1991, 2001; Worldsteel Committee on Economic Studies, 2011; World Steel Association, 2021
Compared to gross capital there is only about a twofold growth here from 1970 to today, but it is important to keep in mind that during this period we have come up with many materials other than steel from which to make things, such as aluminum and of course plastics. Importantly, though, this graph lets us compare steel output today with output at the time of World War II, and we can see that there has been more than an order of magnitude of growth (roughly 15x).
What about finished goods production? This too is a good proxy for the amount of total available physical capital. A great example is the global production of cars. Here is a chart that shows it over time going back to the earliest days of the industry.
Source: Chamber of Commerce of the United States, 1973; Bureau of Transportation Statistics, 2017
Again we can see a roughly twofold increase relative to the 1970s, and a greater than 10x increase if we go back further. This chart has another important feature worth pointing out now: there is a dip to near-zero production in the mid-1940s, corresponding to World War II.
Here is a dramatic example of what all this productive capacity makes possible. The first commercially available handheld mobile phone was the Motorola DynaTAC 8000X, which went on sale in 1984 (“Motorola DynaTAC,” 2021). Here is the growth of mobile phones since then, measured in active subscriptions (Statista, 2020).
Source: Statista, 2020
Over the course of three decades we went from essentially no mobile phones to having more active subscriptions than there are people on the planet. This of course brings to mind William Gibson’s great quote that “[t]he future is already here – it’s just not evenly distributed”: many people have two mobile phones, for example one for work and one personal, while others have none.
And here is one more example that’s highly relevant to the climate crisis: the rate at which we have produced solar panels (Statista, 2021a).
Source: Statista, 2021a
Over a decade and a half we went from making essentially no panels to adding 150 gigawatts of new panels per year, on what looks like an exponential growth trajectory. Crucially, we are currently using only a small part of our productive capital to make solar panels. How do we know this? Because we have not yet taken the drastic steps necessary to fight the climate crisis, steps that will eventually have to reach levels similar to the capital redeployment of World War II.
The prior section provided some data on how much physical capital has grown in the last one hundred years. When measured by certain proxies, such as the production of steel, it looks like about a 30x growth in capital over the last 100 years, and nearly 100x if one goes back just two decades further to 1900. We also saw that significant growth has occurred since World War II, which, as a first approximation, amounts to at least an order of magnitude (10x).
Now someone might suggest that this growth could all be due to the population explosion, but that’s not the case. Over the same timeframe the global population has grown a lot less: from 1900 to today a bit less than 5x and from the end of World War II to today only by a bit more than 3x. Put differently, the increase in physical capital has far outstripped population growth (Our World in Data, 2019).
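To make that comparison concrete, here is a rough per-capita calculation using the approximate multiples quoted above; all inputs are order-of-magnitude figures, not precise measurements.

```python
# Rough per-capita view: capital growth divided by population growth.
# All multiples are the approximate figures quoted in the text.
capital_growth_since_1900 = 100    # ~100x by the steel proxy
population_growth_since_1900 = 5   # a bit less than 5x

capital_growth_since_ww2 = 10      # at least ~10x
population_growth_since_ww2 = 3    # a bit more than 3x

print(capital_growth_since_1900 / population_growth_since_1900)  # ~20x per person
print(capital_growth_since_ww2 / population_growth_since_ww2)    # ~3x per person
```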
Now one might still question whether this capital is sufficient to meet everyone’s needs, as I have asserted. Some of the strongest evidence for my claim comes from considering what happened during World War II. Here is a chart that shows the government share of GDP in the US during the war years (Casais, 2010).
Let’s dive a bit deeper and look at the manufacturing efforts. The US ramped up production of tanks, airplanes, battleships and guns at an extraordinary clip during the war years. Here is a table showing this for different weapons systems (Harrison, 1998).
The numbers here are staggering. For example, in 1943 the US built 2,654 major naval vessels. That’s more than 7 every day, or roughly one every three and a half hours! In 1944 the US built over 74 thousand combat aircraft, which works out to about 8.5 combat aircraft every hour.
We are not talking about simple devices here. These are complex high-performance systems with many components (think of the aircraft engines alone!). And that’s just US production. There were similarly large efforts in Germany, Japan, the UK, and Russia. Adding up all combat aircraft production in 1944 gives 185 thousand units, which comes to 21 aircraft every hour.
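For readers who want to verify the arithmetic, here is a short sketch reproducing the rates from the production figures cited above (using a 365-day year).

```python
# Sanity check on the wartime production rates cited above.
HOURS_PER_YEAR = 365 * 24

naval_vessels_1943 = 2654
print(naval_vessels_1943 / 365)                     # ~7.3 vessels per day
print(24 * 365 / naval_vessels_1943)                # ~3.3 hours per vessel

us_combat_aircraft_1944 = 74_000
print(us_combat_aircraft_1944 / HOURS_PER_YEAR)     # ~8.4 aircraft per hour

total_combat_aircraft_1944 = 185_000                # all combatants combined
print(total_combat_aircraft_1944 / HOURS_PER_YEAR)  # ~21 aircraft per hour
```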
Now, while all of this production was happening in the US, people were not starving, there was enough clothing, and they were doing surprisingly well overall (Henderson, 2015). But as we saw earlier, the production of cars dropped dramatically — so how were people’s transportation needs met during this time? Through a massive increase in public transportation (Fetherston et al., n.d.). The connection was made quite explicit, with the government running ads declaring “When you ride alone, you ride with Hitler” (The U.S. National Archives and Records Administration, n.d.). This is a perfect illustration of separating a need, transportation, from its solutions, in this case individual versus shared mobility.
The continued ability to meet needs while repurposing half or more of physical capital strongly supports the claim of sufficient capital. As a first approximation, much of that capital had previously been used to meet wants. And it went back to meeting wants after World War II, which partially explains the tremendous economic boom of the post-war years.
All of this is to say that today’s economy, with at least an order of magnitude more capital than during World War II, can easily meet our needs. Importantly, it also means that we have plenty of additional capacity that could be allocated to solving the climate crisis. For example, we could dramatically ramp up the production of everything from solar panels to nuclear reactors to heat pumps.
But there is more to be gleaned from World War II production. It isn’t just that we collectively made a lot of complicated stuff rapidly. We also innovated on extremely compressed time scales. The Manhattan Project is the most obvious example: in the span of three years it developed the nuclear bomb. It is hard to exaggerate how extensive this effort was, including for example uranium mining, as well as the exploration of several different bomb designs.
Important technologies were either invented or significantly advanced during World War II (Dickson, 2016). For example, at the beginning of the war radar was a nascent technology. Thanks to the invention of the cavity magnetron, by the later war years the Allies managed to build radars small and lightweight enough to put on planes (“Cavity Magnetron,” 2021). Penicillin, which had been discovered in 1928, was not widely used until mass production was unlocked as part of a secretive World War II effort (“Penicillin,” 2021).
Production and deployment at high volume also drove important improvements. Take fighter planes as an example. Early fighters had limited range, which meant that bombers had to fly into enemy territory without escorts. Their only defense against local fighters was plane-mounted machine guns. It was only as the war went on that escort fighters of sufficient range were developed to accompany bombers (“Escort Fighter,” 2021). This was made possible by a combination of technological advances, such as more powerful engines, and insights gained in battle.
So what are the key takeaways? First, in peacetime mode much of our capital is used to meet wants, not needs (the third section of this appendix looks at this for each of the needs identified). Second, when switched into wartime mode, much of the productive capital can be redirected quickly towards accomplishing specific goals other than meeting needs. This was already true at a much lower amount of physical capital per capita than is available today. Third, innovation can in fact be accelerated dramatically by focusing resources on critical problems.
The obvious threat we are facing today that requires a massive reallocation of, and improvement in, capital is the climate crisis. Whether this can be accomplished is determined entirely by what we choose to pay attention to. Hence, the defining scarcity of our age is attention, not capital.
The overall physical capital statistics provided earlier abstract away any regional differences. The examination of World War II showed that the US was able to meet people’s needs with a fraction of the available capital, but obviously that wasn’t true elsewhere. In the actual war zones, of course, much physical capital was destroyed, resulting in needs going unmet. In the following discussion too we will see that capital is not yet sufficient everywhere. Given the total amount of aggregate physical capital available now, that is a distribution problem (which is really an attention scarcity problem). Paraphrasing the famous William Gibson quote: capital is already sufficient, it is just not yet evenly distributed.
Furthermore, I should caveat that I am providing a mix of statistics, anecdotes and arguments. My goal is not to make an incontrovertible case that capital is sufficient. I doubt this would be possible even with a lot more time, given the limited state of measurement of much of the world’s capital. Incidentally, I believe this paucity of data is something humanity will eventually look back on in surprise, much as we sometimes wonder how things worked before we had mobile phones. Thankfully Max Roser, Hannah Ritchie, and the rest of the team at Our World in Data are starting to make a dent here. Instead, I am simply aiming to make a case that’s compelling enough to bolster the overall argument that attention has now become humanity’s critical scarcity.
In the following, each passage from the needs section is shown in italics, followed by an examination of the sufficiency of capital. I cover both individual and collective needs, as well as what I call enablers (e.g., energy). Along the way, I will also often point out how our lack of attention to the climate crisis and other problems may result in capital becoming scarce again in the future.
Oxygen. On average, humans need about 550 liters of oxygen every day (“How Much Oxygen Does a Person Consume in a Day?,” 2000), depending on the size of our body and physical exertion. Our most common way of meeting this need is breathing air. Although that may sound obvious, we have developed other solutions through technology – for example, the blood of patients struggling to breathe can be oxygenated externally.
There is no shortage of oxygen in the Earth’s atmosphere. Throughout industrialization the issue has been air pollution. For example, in London the air was so bad that the Great Smog of 1952 killed four thousand people in less than a week (Smog Kills Thousands in England, 2020). More recently it has been Indian and Chinese cities that are experiencing similar levels of air pollution. This can definitely be seen as an example of a local insufficiency of capital. In more developed countries the passage of clean air acts forced the installation of catalytic converters, a switch from coal to gas heat, etc., and largely resolved this deficiency. These same and even more advanced technologies (e.g. electric vehicles) can be deployed globally. China has already taken crucial steps in this direction, with the province of Hainan setting a 2030 deadline for all new and replacement vehicles to be emission free (Xue, 2019).
We should, however, not take the Earth’s atmosphere for granted. Many different phenomena resulted in the existence and maintenance of today’s breathable atmosphere. For example, the Earth’s magnetic field protects it from the solar wind, which would otherwise tear off large parts of the atmosphere (“Solar Wind,” 2021). A reduction in or even loss of the magnetic field is exactly the kind of long-tail “Black Swan” event that we do not pay nearly enough attention to as humanity.
Water. We need to ingest two or three liters of water per day to stay hydrated, depending on factors such as body size, exertion and temperature. In addition to drinking water and fluids that contain it, we have other solutions for this, such as the water contained in the foods that we eat.
As with oxygen, there is no shortage of water on Earth. The challenge is access to drinkable water, meaning water that is sufficiently clean and desalinated. Here too we can see how at an earlier point in development capital was insufficient. Again London serves as a great example: frequent cholera outbreaks were the result of water wells that were not separated from sewage. John Snow famously documented the connection by creating a detailed map of the 1854 outbreak, which helped overcome the prior “miasma” theory of cholera and ultimately resulted in London building out an elaborate water infrastructure (“1854 Broad Street Cholera Outbreak,” 2021).
A more recent example is the water crisis in Flint, Michigan, where lead from old pipes resulted in toxic drinking water (“Flint Water Crisis,” 2021). So capital has been insufficient here in the past and still is in some parts of the world, not because of some fundamental lack of technology or aggregate capital, but because of a failure of attention to clean water access. The World Bank estimates that it would take only about $28 billion annually to provide everyone in the world with basic water, sanitation and hygiene, and about $114 billion to make these services available continuously (Hutton, 2016). These surprisingly low numbers show how little physical capital would need to be deployed. Clean drinking water is a great example of the type of problem where markets tend to fail and hence attention allocation needs to happen through other processes (e.g. by electing a capable city government).
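To see just how low these numbers are, here is a quick per-person calculation; the global population figure of roughly 7.8 billion is my assumption, not part of the World Bank estimate.

```python
# Per-person cost of the World Bank estimates cited above.
population = 7.8e9          # assumed global population (~2020)

basic_wash = 28e9           # $28B/year: basic water, sanitation and hygiene
continuous_wash = 114e9     # $114B/year: continuously available services

print(f"${basic_wash / population:.2f} per person per year")       # ~$3.59
print(f"${continuous_wash / population:.2f} per person per year")  # ~$14.62
```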
Calories. To power our bodies, adults need between 1,500 and 3,200 calories per day (U.S. Department of Agriculture and U.S. Department of Health and Human Services, 2015), a need we mainly meet by eating and drinking. The best way to obtain calories, however, is surprisingly poorly understood – the mix between proteins, lipids and carbohydrates is subject to debate.
Eating food is the primary solution to our need for calories. This is where Malthus expected the big shortfall to come from: agriculture simply wouldn’t be able to keep up with the growth in population. The big breakthrough that he didn’t anticipate was the Haber-Bosch process of nitrogen fixation, which powered the so-called green revolution. Equipped with artificial nitrogen fertilizer, agricultural output soared.
The other big win in agriculture was the use of machinery. Today in the US only 1.3% of the employed population works in agriculture, and the entire food supply system, at $1.1 trillion, represents only 5% of total GDP (Lepley, 2019; USDA, n.d.). Even in countries that are further back in development, such as India, the percentage of the population engaged in farming has been shrinking, a decline made possible by the availability of sufficient physical capital (Shrinivasan, 2013).
Now clearly not everyone has access to enough calories to meet their needs. For example, starvation is ravaging Yemen right now as a result of the ongoing war there. Overall, however, since the 1970s the incidence of death from famine has been at historic lows (Hasell, 2013). And even before that, as Amartya Sen and others have documented, many famines resulted from a failure to distribute food, not an absolute lack of it (with examples of supplies rotting in harbors while people starved nearby).
Here too, though, we cannot rest on our accomplishments. The biggest risk to humanity’s ability to meet everyone’s need for calories is the climate crisis, which is disrupting the relatively stable weather patterns required by agriculture. So far we have experienced crop failures only locally and sometimes regionally. A global large-scale crop failure would result in starvation, as we have very limited stockpiles.
Nutrients. The body cannot synthesize all the materials it requires, including certain fatty acids, some amino acids, as well as a few vitamins and minerals – these are called “essential” and must be obtained as part of our nutrition. This is another area that is surprisingly poorly understood, meaning that the actual mix and amount of nutrients we need to take in remain unsettled.
Nutrients, while important, are needed in relatively small amounts. For example, the recommended daily amount of alpha-linolenic acid (ALA) is between 0.5g and 1.6g (National Institutes of Health, 2021). The biggest intake requirements are for the essential amino acids, with adults probably needing about 7g of leucine daily, as one example (Appleby, 2018). For minerals and vitamins we are talking about even smaller amounts, mostly in the milligram and microgram range, with the exception of calcium, chloride and sodium, which are needed in quantities of a few grams each (Romine, 2019).
The cost and capital required to produce these essential nutrients have been declining substantially over time as a result of scientific and engineering progress. For example, we have recently figured out how to grow rice that delivers more vitamin A, called Golden Rice (Dubock, 2019). More than half the global population eats rice daily, so having it deliver enough vitamin A is a major way of ensuring that sufficient amounts of this essential nutrient are available.
Here too, capital is not the binding constraint today. But as the example of Golden Rice shows, it will continue to be important to innovate so as to better meet nutrient needs for everyone and not just those who can currently afford to buy every possible supplement by walking into the nearest drug store. Further research is also required to understand which nutrients we really need and in what dosage for humans to thrive and live long, healthy lives.
Discharge. We also need to get things out of our bodies by expelling processed food, radiating heat and exhaling carbon dioxide. Humans have made a great deal of progress in meeting our discharge needs, with solutions such as toilets and public sanitation.
Building public sanitation systems is one of the major contributors to improvements in life expectancy. As Steven Johnson documents in his books “The Ghost Map” (2007) and “Extra Life” (2021), the city of London was hit by repeated cholera outbreaks until it separated sewage from fresh water delivery. Even back in the mid-1800s, London had sufficient capital to build out a large-scale sewer system (“London Sewerage System,” 2021).
In many countries we take this for granted today, but there are still places in the world that have insufficient sewage treatment capacity. Globally, the number of people without access to proper sanitation has been declining, albeit slowly (Ritchie & Roser, 2021). That slow pace is largely due to the fact that a lack of sanitation exists predominantly in the places with the highest population growth. Still, at this point about two thirds of the global population has access to sanitation, and the total number of people who do has grown by several billion in the last couple of decades. This has been possible because the overall capital required for achieving sufficient sanitation is relatively low and, again, has been declining with technological progress (García, 2009).
Sanitation provides another example of how a lack of attention to the right problems puts our ability to meet our needs at risk. In New York City, for example, raw sewage spills into the East and Hudson Rivers during heavy downpours because of insufficient capacity in the rainwater runoff systems (Chaisson, 2017). With the climate crisis accelerating, the frequency of that kind of heavy rainfall is increasing rapidly (Climate Central, 2019).
Temperature. Our bodies can self-regulate their temperature, but only within a limited range of environmental temperature and humidity. Humans can easily freeze to death or die of overheating (we cool our bodies through sweating, also known as ‘evaporative cooling’, which stops working when the air gets too hot and humid). We therefore often need to help our bodies with temperature regulation by controlling our environment. Common strategies to meet our temperature needs include clothing, shelter, heating and air conditioning.
We have long had enough capital to provide everyone in the world with clothing; we are faced strictly with a distribution problem here. Some people don’t have the financial resources, or live in circumstances such as homelessness, that make it difficult for them to acquire and maintain sufficient clothing. Conversely, in many advanced economies people have piles of unused clothes, and the so-called fast fashion industry promotes rapid changes in style that result in massive additional consumption.
But what about shelter? This is a more difficult problem that requires significantly more capital. Here too the evidence suggests that we have sufficient physical capital. For example, it is estimated that in 2015 we already had over 220 billion square meters of buildings globally (UNEP & GABC, 2017), which amounts to about 30 square meters per person. Of course some part of that is commercial and industrial space; still, this shows that as a first approximation we can house everyone. Even more impressive is the rate at which we are adding space: the same report estimates that by 2030 we will be at over 300 billion square meters of buildings. We also have a lot of circumstantial evidence that supports this conclusion. In particular, building booms in various parts of the world, including China, the US and the Middle East, created vast local oversupplies of housing. For instance, at the height of the China boom enough housing was added annually for the equivalent of two new cities of ten million residents each (Slotta, 2020).
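As a check on the per-person figure, here is the underlying division; the 2015 global population of roughly 7.3 billion is my assumption, not a figure from the report.

```python
# Floor area per person implied by the figures above.
floor_area_2015 = 220e9     # square meters of buildings globally (2015)
population_2015 = 7.3e9     # assumed global population in 2015

print(floor_area_2015 / population_2015)  # ~30 square meters per person
```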
And yet again we encounter the climate crisis as the biggest threat to our ability to provide adequate shelter to everyone. In the US alone, nearly 15 million housing units are threatened by floods, as found by a recently updated federal mapping exercise (Kaufman et al., 2020). That doesn’t count homes threatened by forest fires. Over longer time horizons, sea level rise will make large coastal areas around the world uninhabitable. We are already experiencing significant climate refugee movements today: in 2020 alone an estimated 30 million people were displaced globally due to storms and floods (Global Migration Data Portal, 2021). Forecasts suggest that by 2050 as many as 1 billion people may need shelter in a new location (Kamal, 2017).
Can we heat and cool all this space as needed? The capital requirements here are growing rapidly at the moment because the unfolding climate crisis is increasing cooling requirements globally. This is not just a question of convenience: in hot and humid conditions evaporative cooling via sweat stops working, and when that happens people die from overheating. This is now a routine occurrence in many parts of the world, and even a relatively northern region such as Europe is affected, with the 2019 heat waves causing over two thousand deaths (Borunda, 2021; “2019 European Heat Waves,” 2021).
As of 2020 there are an estimated 1.9 billion AC units in the world, adding about 110 million units annually at an accelerating pace (Armstrong, 2020; Holst, 2020). The key constraint here is not capital but electricity to run all of these new units, which will be further exacerbated by the need to switch heating from fossil fuels to electricity. This constraint will be looked at in the energy section further down.
Pressure. Anybody who has gone diving will be aware that our bodies do not handle increased pressure very well. The same goes for decreased pressure, which is one of the reasons why we find air travel exhausting (airplane cabins maintain pressure similar to being at the top of an eight-thousand-foot mountain).
Thankfully we need minimal capital to meet our pressure needs. One might at first assume that we do not need any capital at all, but that’s not correct. For example, pretty much all commercial flights are at altitudes that require pressurized cabins and hence extra capital above and beyond what would be required for an unpressurized plane. At just 12 km of altitude, pressure falls to about 0.2 bar (“Cabin Pressurization,” 2021). At such low pressure it is not just the lack of oxygen that would be fatal; decompression sickness may also occur, in which gases dissolved in the bloodstream come out of solution, causing illness and even death. As noted earlier, we cannot take the existence of the Earth’s atmosphere for granted. So in addition to giving thought to how to create a livable atmosphere on planets such as Mars that we may eventually want to settle, we need to pay attention to the various forces that could damage or even destroy the Earth’s atmosphere.
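To illustrate why pressurization is needed, here is a sketch using the simplified isothermal barometric formula; the scale height of 8,400 meters is a textbook approximation, and the real atmosphere deviates somewhat from this model.

```python
import math

# Simplified (isothermal) barometric formula: p = p0 * exp(-h / H),
# where H is the atmospheric scale height. This is an approximation;
# the real atmosphere is not isothermal.
P0 = 1.013           # sea-level pressure in bar
SCALE_HEIGHT = 8400  # meters (typical textbook value)

def pressure_at(altitude_m: float) -> float:
    """Approximate atmospheric pressure in bar at the given altitude."""
    return P0 * math.exp(-altitude_m / SCALE_HEIGHT)

print(pressure_at(12_000))  # ~0.24 bar, close to the ~0.2 bar cited above
print(pressure_at(2_400))   # ~0.76 bar, the 8,000-foot "cabin altitude"
```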
Light. Most humans would be hard-pressed to achieve much in complete darkness. For a long time, our need for light was met mainly by sunlight, but much human ingenuity has gone into the creation of artificial light sources.
Our ability to make artificial light is one of the great human achievements and also a story of ongoing progress. We are the only species that has the knowledge to make fire, a capability attributed in Greek mythology to Prometheus, who stole fire from the gods. Capital is essential to making light, from the earliest days of gathering wood to the modern production of light-emitting diodes (LEDs). This progress has meant that light has become incredibly affordable in most parts of the world and consumption has gone up accordingly (in the UK, for example, by four orders of magnitude over the last two hundred years) (Roser, 2013). Even in extremely poor countries that lack electrical infrastructure, so-called “off-grid solar” is revolutionizing the availability of light, replacing the burning of kerosene and other dangerous fuels. In summary, we are definitely not constrained by capital when it comes to our need for light.
Healing. Whenever we damage our body, it needs to heal. The human body comes equipped with extensive systems for self-healing, but beyond a certain range it needs external assistance. We have developed many solutions, which are often grouped under the term ‘healthcare’.
The primary constraint on healing has been knowledge, rather than capital. As documented in Steven Johnson’s book Extra Life, for the longest time medical interventions tended to result in worse outcomes because doctors had no idea what they were doing (Johnson, 2021). Conversely, today it has become possible to create new life-saving medicines virtually overnight, as was the case with the COVID-19 mRNA vaccines. These are extremely cheap to manufacture, and supply has been held back by artificial limitations based primarily on intellectual property protection rather than by a scarcity of capital.
It is also crucial to understand that much of our current need for healing could be avoided in the first place by leading healthier lives. As noted earlier, we do not know as much about nutrition as we should, but we do know that obesity contributes significantly to medical problems, and yet the rate of obesity has increased dramatically, especially in the United States. Similarly, we know that stress is negatively associated with health, and yet the stress level for many people has gone up for a range of reasons, from economic insecurity to the proliferation of stress-inducing online content. Overall, the bulk of medical expenses today is accounted for by chronic conditions, such as diabetes (CDC, n.d.).
Despite all of this, we can measure our tremendous progress on healing by considering the increase in life expectancy (Roser, 2019b). Based on historical data, life expectancy in 1800 was well below 40 years across the entire world. By 2015 it had risen to well over 70 years in many places and to over 80 in some. Even in Africa, which is the furthest behind, life expectancy in many countries is now in the 60s.
That of course doesn’t mean we can’t face a great reversal. COVID provided a glimpse of what that might look like, as does the decrease in life expectancy in the United States due to what Anne Case and Angus Deaton have termed “deaths of despair,” which have impacted many groups, but particularly white middle-aged males (Case & Deaton, 2021). Finally, the climate crisis again looms large here, with deaths from heat waves, crop failures, etc. having the potential to undo much of the progress we have made.
Learning. When we are born, we are quite incompetent – we have to learn basic skills, such as walking and how to use even the simplest tools. When we encounter a new situation, we have to learn how to deal with it. We group many of the strategies for meeting the need for learning under the heading ‘education’, but other solutions include experimenting to gain experience, self-study and parenting.
There has been tremendous progress in making learning available to everyone around the world. A crucial enabler of learning is literacy and that has grown tremendously over the last two hundred years. In the early 1800s fewer than 20% of people globally were literate (Our World in Data, n.d.). Today global literacy is approaching 90%.
It is crucial to recognize that traditional schools are only one solution to the need for learning, and a capital-intensive one at that, as it requires the construction of buildings. There are many alternative solutions, such as unschooling, homeschooling, and neighborhood schools.
Increasingly, pretty much all the world’s knowledge can be delivered via a smartphone. Global smartphone penetration is approaching 80% (O’Dea, 2021). Even in relatively poor countries smartphones are becoming much more widespread. For example, a new Android phone can be purchased in India for 7,000 rupees (~$100), and smartphone penetration there reached 42% in 2020 (Mobile Price List In India, n.d.; Statista, 2021b).
Now the assertion that anything can be learned via a smartphone may seem preposterous. But research on so-called “minimally invasive education” and “unschooling” has shown that children want to learn and are capable of learning largely independently when given the right opportunity (“Minimally Invasive Education,” 2021; “Unschooling,” 2021). Even some famous scientists, including notably Einstein and Faraday, did a great deal of independent studying early on, with the latter having virtually no formal education (“Albert Einstein,” 2021; “Michael Faraday,” 2021).
Meaning. As humans, we have a profound psychological need for meaning in our lives. One solution is to have a purpose. Religious belief and belonging to a community have long been a source of purpose for humans. Another key strategy comes from our interactions with other humans, including having other people acknowledge our contributions to a project, or even merely recognize our existence.
As stated in the main text, the need for meaning can be met more or less without capital. Meaning comes from stories, beliefs, recognition and many other sources, none of which have substantial capital requirements. These existed in oral form long before humans developed written language.
Reproduction. Individuals can survive without sex, but reproduction is a need for societies as a whole. We have learned how to reproduce without sex; in the future, there may be different solutions for the continuation of a human society – whether here on Earth or elsewhere.
Historically we have needed no capital for reproduction. Ironically, that is changing, as sperm count and motility are dropping in many places around the world (likely due to the many products containing hormone-disrupting chemicals) (Chiu, 2021). So eventually we may need to use technological means for reproduction. Over the long term these could also involve the creation of artificial wombs, something that’s already possible for animals (Becker, 2017).
Allocation. Access to physical resources has to be allocated. Take a chair as an example. Only one person can comfortably sit in it at a time – when there are multiple people, we need a way of allocating the chair between them. If you are by yourself, you can sit on a chair whenever you want to – allocation is a collective need.
The need for allocation is more severe the less there is of something. For example, in the early days of producing the COVID vaccine, when there were few doses, there were intense debates about how to allocate them. With enough progress we can envision a world with dramatically reduced allocation needs, where we can always make more of something on the spot whenever we need it. I believe we are a long way from such abundance in the physical realm. Even in Star Trek, which is often seen as a fictional model of abundance with technology such as the replicator, there are still allocation needs, such as the captain having to decide whether to route power to the shields, the weapons or the engines. It is easy to think of other examples where we will still need an allocation mechanism, such as going to see the original Mona Lisa at the Louvre: only so many people can fit in the room at any one time. In the extreme, of course, some will argue that it might become possible to give you a Matrix-style experience in which you think you are in the room at the Louvre with the Mona Lisa but in fact have never left your home. While possible in principle, I again believe this is a long way off.
At first blush it might not seem that capital has much to do with addressing the allocation need, but historically we have definitely been capital constrained here. Two crucial parts of allocation solutions are communication and transportation. Sticking with the COVID vaccination example: in order to allocate a limited number of vaccines, one needs to know where the people who should get the early doses are and how to get the doses to them (or them to the doses). Both of these subproblems require capital. In particular, because the COVID vaccines need to be kept at low temperatures, distributing them requires a so-called “cold chain,” a logistics solution that keeps the vaccine cooled. Our progress in communication will be discussed more in the sections on motivation, coordination, and knowledge below. For transportation, we know we are not really constrained, given the massive amount of stuff (most of which serves wants rather than needs) being delivered every day.
Motivation. This may seem like an individual need, but it acts as a collective one in the sense that societies must motivate their members to carry out important tasks and follow rules. Even the smallest and least technologically advanced societies have some solutions for this problem, often in the form of rewards and punishments.
Motivation is another need where at first it might appear as if capital never really played a role. Yet on further examination, communication infrastructure is crucial for maintaining motivation across distance. This in no small part explains why the structures of motivation were for the most part extremely local prior to the industrial age and the deployment of communication networks. Earlier efforts at motivation on a larger scale, such as the Roman Empire, invested heavily in roads and courier networks. With communication networks operating at zero marginal cost today, we are no longer capital constrained with regard to motivation, even at global scale.
Now, to this day it might be argued that financial capital, e.g. in the form of bonus payments, is key to motivation. We are certainly not constrained by a lack of financial capital, and we can even create new financial capital essentially ex nihilo via cryptocurrencies. Still, it is worth pointing out that so-called “high-powered” incentives are a very narrow form of motivation and require precise measurement to avoid substituting quantity for quality. This can easily be seen from the many failed attempts to incentivize the creation of content via payments in the absence of systems to rate content quality.
Unlocking intrinsic motivation and complementing it with “low-powered” incentives such as recognition and reputation is a far better approach most of the time. In science, for example, much progress comes from people’s genuine interest in solving a specific problem. Wikipedia is a great example of how much high-quality content creation can happen in a system built solely on recognition and reputation (this is not to say that there aren’t bias issues worth addressing, but commercial publishers have argued time and again that Wikipedia would run out of steam, and it hasn’t).
Coordination. Whenever more than a single human is involved in any activity, coordination is needed. Take a simple meeting between two people as an example. In order for it to take place, the two need to show up at the same place at the same time. We have developed many communication and governance mechanisms to address this need.
Coordination may seem like such an abstract need that it can be difficult to see at first that it was at one point capital constrained. One of our primary solutions to the need for coordination is communication. Considering the simple example again of two people meeting, this requires agreeing on a place and time. When communication was really difficult (because of insufficient capital), a lot of meetings, such as religious ceremonies, happened on pre-ordained schedules. Another common solution was simply waiting. People would show up somewhere and wait until it was their turn.
This latter solution provides a great example of how we have achieved sufficient capital for coordination only relatively recently. Waiting as a solution to coordination has been decreasing rapidly in places with high internet penetration; for example, we now routinely book appointments for a haircut or make restaurant reservations online. At this point our ability to coordinate, even at the scale of all of humanity, is no longer capital constrained. Instead, most of our coordination problems now result from disagreements about priorities. That has been true most recently in fighting the COVID pandemic, and it is also true of what to do about the climate crisis.
Knowledge. As I argued in earlier sections on optimism and humanism, knowledge is the central collective human need: without it, a society will encounter problems that it cannot solve. History is full of examples of societies that lacked sufficient knowledge to sustain themselves, such as the Easter Islanders and the Mayans. This is not about what any one individual has learned, but about the body of knowledge that is accessible to society as a whole. Later in this book we will examine solutions for generating more knowledge, faster.
The accumulation of knowledge was significantly constrained by capital early on, largely because humans didn’t know how to make easily writable and transportable materials. Many ancient cultures engraved writing in stone, which was both a slow process and made it difficult to transport the results. There followed a series of innovations in recording materials, such as parchment (animal skins), papyrus and eventually paper. To get a sense of just how scarce the more advanced materials were early on, one only needs to consider the palimpsest, a manuscript written over a prior one, sometimes repeatedly, in order to re-use the underlying parchment (“Palimpsest,” 2021). With today’s digital technology for recording and transmitting knowledge, we are no longer capital constrained with respect to those crucial aspects of the knowledge loop.
That is not to say that there aren’t some pockets of knowledge left where our ability to push forward is constrained by capital, but these are few and far between. One example is high energy physics. The Large Hadron Collider (LHC) is the biggest device humanity has built to explore what happens when particles collide at very high energies, revealing what they are made of (“Large Hadron Collider,” 2021). It is unclear whether we have accumulated enough physical capital to date (or have the capacity right now to build enough) for significantly higher energy levels that would potentially let us see even deeper into the fabric of reality.
In the meantime, it is important to point out that, conversely, a lot of research that used to be capital constrained no longer is, thanks to breakthroughs in both computation (allowing the simulation of much bigger systems) and laboratory equipment, especially for the sequencing and assembly of genomic information. This has resulted in an explosion of knowledge about cellular processes, which among other benefits gave us the powerful new mRNA vaccines against COVID.
Energy. For a long time, humans relied on direct sunlight as their primary energy source. Since then we have developed many ways of generating energy, including better ways of capturing sunlight. Capturing more energy and making it available in concentrated and highly regulated form via electricity has enabled new solutions to human needs.
Energy turns out to be the crucial enabler of progress. The other enablers identified below, such as resources, are directly impacted by how much energy we have available. As David Deutsch has pointed out in “The Beginning of Infinity,” a sufficiently advanced civilization, with enough energy at its disposal, could assemble entire worlds in seemingly empty space.
We are a long way from such a civilization, but we have plenty of energy and, more importantly, the ability to rapidly build out our energy supply if we pay enough attention to it. Today energy production and consumption vary significantly across nations, with developed countries having energy far beyond what is required to meet needs. But even in the US we occasionally run into energy distribution problems, as seen this year in Texas.
More importantly, globally our energy production is heavily dependent on fossil fuels. This of course is the cause of the climate crisis and something that requires our urgent attention. So isn’t this an example of capital scarcity? No. We have the capital necessary to dramatically ramp up our clean energy supply; we simply lack the political will to go into wartime production mode. We should be building out wind, solar, geothermal, and nuclear energy (as well as storage) with a large percentage of our existing physical capital.
To illustrate what is possible, consider France’s deployment of nuclear energy in the 1970s and 80s. France went from virtually no nuclear power to about 40% in the space of less than two decades (Mearns, 2013). Keep in mind this was during a peacetime approach to energy. Now imagine what could be done with a wartime approach. I believe it would be possible in most countries to get to 100% clean energy in 10 years or possibly even faster if we really tried.
Finally, there seems to be no particular reason why we cannot make even more advanced forms of energy work, starting with nuclear fusion. There are now at least a dozen well-funded commercial fusion startups pursuing a variety of approaches, from longstanding magnetic confinement to more recent inertial confinement (using lasers). Research has even continued on the much-derided notion of “cold” fusion. Following the reproducibility crisis around the original 1989 paper by Fleischmann and Pons, research has continued, albeit under the different heading of “condensed matter physics” (Krivit & Ravnitzky, 2016). That research has continued to produce interesting observations that could eventually unlock a massive energy source for humanity.
Resources. In early human history, all resources were drawn from our natural surroundings. Later, we started growing and extracting resources in our own right. Many modern solutions have been made possible by access to new kinds of resources. For instance, mobile phones, which provide new solutions to individual and collective needs, are made possible by some esoteric raw materials, including the so-called rare-earth elements.
One of the consistent worries cited for why we will not be able to meet our needs is resources, by which I mean raw and intermediate materials. For example, there is a concern now about running out of phosphorus (Alewell et al., 2020). Historically, such concerns have not been borne out, for several reasons. First, when a resource becomes more expensive, we often start to discover new places where we can find it. For instance, there were repeated concerns that we might run out of oil, but each time we discovered more. Second, an increase in the price of a resource tends to lead to product and process designs that make less use of it, either by becoming more efficient or by substituting another resource. Third, when a resource is sufficiently expensive, it makes sense to recycle it. For example, in the US about half of all aluminum beer cans are recycled (EPA, 2021).
Ultimately, though, there are far more resources in the universe than we might ever need. In the relatively near future we will be able to capture asteroids, which contain massive amounts of metals. There are several venture-backed startups pursuing this, and companies such as SpaceX are driving down the cost of launching such missions. Longer term, we will likely be able to transmute materials. Our techniques for doing this are somewhat primitive at the moment, but famously Glenn Seaborg and others turned bismuth into gold in 1981 (Aleklett et al., 1981).
Transformation. Energy and resources alone are not enough. To enable most solutions, we need to figure out (and remember!) how to use the former to transform the latter. This involves chemical and physical processes. Physical capital, in the shape of machines, has been a crucial enabler of many new solutions to human needs. For instance, a knitting machine can quickly transform yarn into clothing, one of our key solutions for maintaining the human operating environment.
Some of the earliest human transformation processes were to make weapons for hunting, such as arrows and knives, by sharpening wood or stone. Eventually we figured out how to make metals and developed lots of chemical processes that combine and transform resources into the materials we use every day. Many of these processes are capital intensive.
Yet here too it is unlikely that we are meaningfully constrained by capital in many parts of the world. One example to consider is how much larger cars have become in recent years in advanced economies. This has been driven by the trend towards Sport Utility Vehicles (SUVs) and so-called crossovers, which have dramatically outsold traditional sedans since 2015 (Motorlease, n.d.). These vehicles require significantly more material due to their size. One can give similar examples from other areas, such as the trend towards much larger houses: in the US, the average new home had ballooned to 2,675 square feet by 2014, from below 1,000 square feet in the 1950s (while at the same time average family size shrank considerably). Put differently, we may have distribution issues, but we do not face an aggregate capital constraint in making materials.
It is also worth noting that much of our manufacturing still uses highly wasteful processes, in which one starts with a big piece of material and then cuts or mills portions away. In contrast, we are now beginning to see additive manufacturing, sometimes referred to as 3D printing, growing in capabilities. In this approach material is deposited to take on the desired shapes, which allows for a dramatic reduction in waste.
Transportation. The final foundational enabler is the ability to move stuff, including people. This is another area in which we have made great progress, going from human-powered to animal-powered to machine-powered transportation.
Humanity was capital constrained on transportation for a long time and still is in some parts of the world today. Without some kind of assistance we are quite slow (certainly compared to many other species) and cannot cross sizable bodies of water. Not surprisingly, then, much of early human technology was aimed at improving transportation, from domesticating animals such as horses to inventing boats and carriages.
The longstanding dream of course was to fly like birds, with myths of human flight, such as that of Daedalus and Icarus, dating back thousands of years. It wound up taking until the early 1900s to figure it out, though. From there progress was rapid, and by 1969 a human had landed on the moon. Following the COVID pandemic, we find ourselves with an excess of equipment, with more planes than ever before parked in the desert, maybe never to be reactivated.
Today we face two overlapping challenges. Some parts of the world still need to catch up on transportation, in particular India and China, with their billions of people hankering for increased mobility. At the same time, we need to rapidly decarbonize transportation all over the world in order to combat the climate crisis. Achieving both will require massive investments in electric mobility; ideally, new vehicles in India and China will be predominantly EVs, including electric scooters and trikes. We are not really constrained by capital in accomplishing this, provided we make the choice to redeploy a lot of productive capital.