Knowledge, as I use this term, is the information that humanity has recorded in a medium and improved over time. There are two crucial parts to this definition. The first is “recorded in a medium,” which allows information to be shared across time and space. The second is “improved over time,” which separates knowledge from mere information.
A conversation that I had years ago but didn’t record cannot be knowledge in my sense—it isn’t accessible to anyone who wasn’t there when it happened, and even my own recollection of it will fade. However, if I write down an insight from that conversation and publish it on my blog, I have potentially contributed to human knowledge. The blog post is available to others across space and time, and some blog posts will turn out to be important contributions to human knowledge. As another example, the DNA in our cells isn’t knowledge by my definition, whereas a recorded genome sequence can be maintained, shared and analyzed. Gene sequences that turn out to be medically significant, such as the BRCA mutation that increases the risk of breast cancer, become part of human knowledge.
My definition of knowledge is intentionally broad, and includes not just technical and scientific knowledge but art, music and literature. But it excludes anything that is either ephemeral or not subject to improvement. Modern computers, for example, produce tons of recorded information that is never subsequently analyzed or integrated into any process of progressive improvement. The reasons for this definition of knowledge will become clear as I use the term in the following sections and throughout the book.
With digital technology so fundamentally expanding what we are able to do, we must establish some basic principles if we are to avoid misinterpreting current trends and phenomena. These principles will allow us to truly explore this new ‘space of the possible’ and the benefits that it might bring, instead of limiting and bending the technology to fit our existing economic and social systems.
What follows is an attempt to establish a firm foundation for how we might build a future, grounding it in a clear set of values. I start with a brief definition of knowledge, a term I use extensively and in a way that is somewhat different from common usage. I then explain the relationship between optimism and knowledge, as well as the importance of choices in shaping our future. This is followed by a discussion of why the existence of knowledge provides an objective basis for humanism, which sets it apart from other religious and philosophical narratives. Much of my thinking in this area has been influenced by the writing of David Deutsch, and in particular his book The Beginning of Infinity, which explores the history, philosophy and power of explanations (Deutsch, 2011).
I will then provide a definition of scarcity based directly on human needs rather than on money and prices, using this definition to show how technology has shifted scarcity throughout history, leading to dramatic changes in how we live. From there, I lay out a plan of attack for the rest of the book.
When I started my blog over a decade ago, I called myself a “technology optimist.” I wrote:
I am excited to be living [at] a time when we are making tremendous progress on understanding aging, fighting cancer, developing clean technologies and so much more. This is not to say that I automatically assume that technology by itself will solve all our problems […]. Instead, I believe that – over time – we as a society figure out how to use technology to […] improve our standard of living. I for one am […] glad I am not living in the Middle Ages.
This book is fundamentally optimistic, which is partly a reflection of my personality. I can’t see how it would be possible to be a venture capitalist as a pessimist. You would find yourself focusing on the reasons why a particular startup would be unlikely to succeed and as a result would never make an investment.
I want to be clear about this apparent bias from the start. Optimism, however, is much more than a personal bias—it is essential for human knowledge. Acts of knowledge creation, such as inventing a new technology or writing a new song, are profoundly optimistic. They assume that problems can be solved, and that art will impact the audience (which is true even for a pessimistic song). Optimism is the attitude that progress is possible.
Progress has become a loaded term. After all, despite our technological achievements, aren’t humans also responsible for the many diseases of civilization, for the extinction of countless species, and potentially for our own demise through climate change? Without a doubt we have caused tremendous suffering throughout human history, and we are currently faced with huge problems including a global pandemic and the ongoing climate crisis. But what is the alternative to trying to tackle these?
The beauty of problems is that knowledge can help us overcome them. Consider the problem of warming ourselves in the cold. Humans invented ways of making fire, eventually documented them, and have since dramatically improved the ways in which we can produce heat. We may take the existence of knowledge for granted, but no other species has it, which means that whether another species can solve a problem depends largely on luck and circumstance. So not only is optimism essential for knowledge, but the existence of knowledge is the basis for optimism.
There is an extreme position that suggests that we would have been better off if we had never developed knowledge in the first place (Ablow, 2015). While this may sound absurd, much of religious eschatology (theology about the ‘end times’) and apocalyptic thinking is associated with this position, asserting that a grand reckoning for the sins of progress is inevitable. And while they are rare, there have even been voices welcoming the COVID-19 pandemic and the climate crisis as harbingers, if not of apocalypse, then at least of a “Great Reset.” Although there is no guarantee that all future problems will be solvable through knowledge, one thing is certain: assuming that problems cannot be solved guarantees that they will not be. Pessimism is self-defeating, and apocalyptic beliefs are self-fulfilling.
All of this is also true for digital technology, which has already brought with it a new set of problems. We will encounter many of them in this book, including the huge incentives for companies such as Facebook to capture as much attention as possible, and the conflicts that arise from exposure to content that runs counter to one’s cultural or religious beliefs. And yet digital technology also enables amazing progress, such as the potential for the diagnosis of diseases at zero marginal cost. The World After Capital is optimistic that we can solve not only the problems of digital technology, but also that we can apply digital technology in a way that results in broad progress, including the knowledge creation needed to address the climate crisis.
Believing in the potential of progress does not mean being a Pollyanna, and it is important to remember that progress is not the inevitable result of technology. Contrary to the claims made by the technology writer Kevin Kelly in his book What Technology Wants, technology doesn’t want a better world for humanity; it simply makes such a world possible.
Nor does economics ‘want’ anything: nothing in economic theory, for instance, says that a new technology cannot make people worse off. Economics gives us tools that we can use to analyze markets and design regulations to address their failures, but we still need to make choices relating to what we want markets and regulations to accomplish.
Moreover, contrary to what Karl Marx thought, history also doesn’t ‘want’ anything. There isn’t a deterministic mechanism through which conflicts between labor and capital are ultimately bound to be resolved in favor of a classless society. Nor is there, as the political economist Francis Fukuyama would have it, an “end of history”—a final social, economic and political system. History doesn’t make its own choices; it is the result of human choices, and there will be new choices to make as long as we continue to make technological progress.
It has always been our responsibility to make choices about which of the worlds made possible by new technology we want to live in. Some of these choices need to be made collectively (requiring rules or regulations), and some of them need to be made individually (requiring self-regulation). The choices we are faced with today are especially important because digital technology so dramatically increases the ‘space of the possible’ that it includes the potential for machines that possess knowledge and will eventually want to make choices of their own.
The people building or funding digital technology tend to be optimists and to believe in progress (though there are also opportunists thrown into the mix). Many of those optimists also believe in the need for regulation, while another group has a decidedly libertarian streak and would prefer governments not to be involved. For them, regulation and progress conflict. The debates between these two groups are often acrimonious, which is unfortunate, because the history of technology clearly demonstrates both the benefits of good regulation and the dangers of bad regulation. Our energy is thus better spent on figuring out the right kind of regulation, as well as engaging in the processes required to enforce and revise it.
The history of regulating automotive technology is instructive here. Much of the world currently gets around by driving cars. The car was an important technological innovation because it vastly enhanced individual mobility, but its widespread adoption and large-scale impact would have been impossible without legislation, including massive public investments. We needed to build roads and to agree on how they should be used, neither of which could have been accomplished based solely on individual choices. Roads are an example of a ‘natural monopoly’: multiple disjointed road networks or competing sets of rules would be hugely problematic (imagine what would happen if some people drove on the left side of the road and others drove on the right). Natural monopolies are situations in which markets fail and regulation is required. Social norms are another form of regulation: the car would have been less widely adopted as a mode of individual transport without changes in social norms that made it acceptable for women to drive, for instance.
Not all regulation is good, of course. In fact, the earliest regulation of automotive vehicles was aimed at delaying their adoption by limiting them to walking speed. In the United Kingdom they were even required by law in their early years to be preceded by someone on foot carrying a red flag (“Red Flag Traffic Laws,” 2020). Similarly, not all regulation of digital technology will be beneficial. Much of it will initially aim to protect the status quo and to help established enterprises, including the new incumbents. The recent changes to net neutrality rules are a good example of this (Kang, 2017).
My proposals for regulation, which I will present later in the book, are aimed at encouraging innovation by giving individuals more economic freedom and better access to information. These regulations, which are choices we need to make collectively, represent a big departure from the status quo and from the programs of the established parties here in the United States and in most other countries. They aim to let us explore the space of the possible that digital technologies have created, so we can transition from the Industrial Age to the Knowledge Age.
Another set of choices has to do with how we react individually to the massive acceleration of information dissemination and knowledge creation that digital technology makes possible. These are not rules that society can impose, because they relate to our inner mental states: they are changes we need to make for ourselves. For instance, many people feel so offended by content they encounter on the Internet, from videos on YouTube to comments on Twitter, that they become filled with anxiety, rage, and other painful emotions, leading them to withdraw or lash out and furthering polarization and cycles of conflict. Other people become trapped in ‘filter bubbles’ of algorithmically curated information that only confirms their existing biases, while still others spend all their time refreshing their Instagram or Facebook feeds. Some regulation, along with better technology, can help, but overcoming these problems will ultimately require us to change how we react to information.
Changing our reactions is possible through self-regulation, by which I mean training that enhances our capacity to think critically. From Stoicism in ancient Greece to Eastern religions such as Hinduism and Buddhism, humans have a long tradition of practices designed to manage our immediate emotional responses, enabling us to react to the situations we experience in insightful and responsible ways. These mindfulness practices align with what we have learned more recently about the workings of the human brain and body. If we want to be able to take full advantage of digital technology, we need to figure out how to maintain our powers of critical thinking and creativity in the face of an onslaught of information including deliberate attempts to exploit our weaknesses.
What are the values that I am basing all this on, and where do they come from? In his book Sapiens, the historian Yuval Noah Harari claims that all value systems are based on equally valid subjective narratives. He denies that there is an objective basis for humanism to support a privileged position for humanity as a species, but here I will try to convince you that he is wrong (Harari, 2011). For not only is the power of knowledge a source of optimism; its very existence provides the basis for humanism. By “humanism” I mean a system of values that centers on human agency and responsibility rather than on the divine or the supernatural, and that embraces the process of critical inquiry as the central enabler of progress.
Knowledge, as I have already defined it, is the externalized information that allows humans to share insights with each other. It includes both scientific and artistic knowledge. Again, we are the only species on Earth that generates this kind of knowledge, with the ability to share it over space and time. I am able to read a book today that someone else wrote a long time ago and in a completely different part of the world.
This matters a great deal, because knowledge enables fundamentally different modes of problem solving and progress. Humans can select and combine knowledge created by other humans, allowing small changes to accrete into large bodies of work over time, which in turn provide the basis for scientific and artistic breakthroughs. Other species, lacking knowledge, have only two ways of sharing what they have learned: communication and evolution. Communication is local and ephemeral, and evolution is extremely slow. As a result, animals and plants routinely encounter problems that they cannot solve, resulting in disease, death and even extinction. Many of these problems today are caused by humans (more on that shortly).
Knowledge has given humanity great power. We can fly in the sky, sail the seas, travel fast on land, build large and durable structures, and so on. The power of our knowledge is reshaping the Earth. It often does so in ways that solve one set of problems while creating an entirely new set, not just for humans but for other species. This is why it is crucial that we remember what the story of Spider-Man tells us: “With great power comes great responsibility.” It is because of knowledge that humans are responsible for looking after dolphins, rather than the other way round.
Progress and knowledge are inherently linked through critical inquiry: we can only make progress if we are capable of identifying some ideas as better than others. Critical inquiry is by no means linear—new ideas are not always better than old ones. Sometimes we go off in the wrong direction. Still, given enough time, a sorting takes place. For instance, we no longer believe in the geocentric view of our solar system, and only a tiny fraction of the art that has ever been created is still considered important. While this process may take decades or even centuries, it is blindingly fast compared to biological evolution.
My use of the word “better” implies the existence of universal values. All of these flow from the recognition of the power of human knowledge and the responsibility which directly attaches to that power. And the central value is the process of critical inquiry itself. We must be vigilant in pointing out flaws in existing knowledge and proposing alternatives. After all, imagine how impoverished our music would be if we had banned all new compositions after Beethoven.
We should thus seek regulation and self-regulation that supports critical inquiry, in the broad sense of processes that weed out bad ideas and help better ones to propagate. In business this often takes the form of market competition, which is why regulation that supports competitive markets is so important. Individually, critical inquiry requires us to be open to receiving feedback in the face of our deeply rooted tendency toward confirmation bias. In politics and government, critical inquiry is enabled by the democratic process.
Freedom of speech is not a value in and of itself; rather, it is a crucial enabler of critical inquiry. This also shows how some limits on free speech can flow from the same value: speech that calls for violence against individuals or minority groups can intimidate people into silence, and thereby itself suppresses critical inquiry.
Digital technologies, including a global information network and the general-purpose computing that is bringing machine intelligence, are dramatically accelerating the rate at which humanity can accumulate and share knowledge. However, these same technologies also allow targeted manipulation and propaganda on a global scale, as well as constant distraction, both of which undermine the evaluation and creation of knowledge. Digital technology thus massively increases the importance of critical inquiry, which is central to knowledge-based humanism.
Beyond critical inquiry, optimism and responsibility, other humanist values are also rooted in the existence of knowledge. One of these is solidarity. There are nearly 8 billion human beings living on Earth, in an otherwise inhospitable solar system. The big problems that humanity faces, such as infectious diseases and the climate crisis, require our combined efforts and will impact all of us. We thus need to support each other, irrespective of such differences as gender, race or nationality. Whatever our superficial differences may be, we are—because of knowledge—much more like each other than we are like any other species.
Once we have established a shared commitment to the value of solidarity, we can celebrate diversity as another humanist value. In current political debates we often pit individuality against the collective as if the two conflicted. However, to modernize John Donne, no human is an island—we are all part of societies, and of humanity at large. By recognizing the importance of our common humanity, we create the basis on which we can unfold as individuals. Solidarity allows us to celebrate, rather than fear, the diversity of the human species.
Those with some familiarity with economic theory are likely to understand ‘scarcity’ in economic terms. In that context, something is scarce if its price is greater than zero. By this definition, land is scarce—it costs a lot of money to buy a piece of land. And financial capital is still scarce, because even with our current low interest rates there is a price for borrowing money or raising equity.
However, there is a fundamental problem with this price-based definition of scarcity: anything can be made scarce by assigning ownership of it. Imagine for a moment that the world’s atmosphere belonged to Global Air Ltd, a company which could charge a fee to anyone who breathes air. Air would suddenly have become scarce, according to the price-based theory of scarcity. That might seem like an extreme example, and yet some people have argued that assigning ownership to the atmosphere would solve the problem of air pollution, on the grounds that it would result in the air’s owners having an economic incentive to maintain an unpolluted atmosphere.
Here I will use a different meaning of scarcity, one not based on price. I will call something scarce when there is less of it than we require to meet our needs. If people are starving because not enough food has been produced (or made available), food is scarce. Insofar as more knowledge would allow this problem to be solved, this can be thought of as technological (as opposed to economic) scarcity. The point here is that technological progress makes things less scarce. As I discuss in Part Two below, the eighteenth-century scholar Thomas Malthus (1798) was correct when he predicted that global population growth would be exponential, but his prediction that such growth would outpace growth in the food supply, resulting in ongoing shortages and mass starvation, turned out to be wrong, because technological progress resulted in exponential increases in food production. In fact, recent advances in agricultural techniques have meant that the amount of land needed for food production is now declining, even as food production is continuing to grow rapidly.
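To make the contrast concrete, here is a minimal sketch of the argument in symbols (my own illustrative notation, not anything taken from Malthus’s essay or from the data). Malthus assumed that population grows geometrically while the food supply grows only arithmetically:

$$P(t) = P_0(1+g)^t, \qquad F(t) = F_0 + kt,$$

so that food per person, $F(t)/P(t)$, eventually falls toward zero and shortages become unavoidable. What actually happened is that technological progress made food production grow roughly geometrically as well, at a rate at least matching population growth, so food per person held steady or rose instead of collapsing.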
But is it possible to draw a clear distinction between needs and wants? If people are not starving but want more or different food, can food still be scarce? Modern economics equates the two, but intuitively we know that this is not the case. We need to drink water, but want to drink champagne. We need to provide our bodies with calories, but want to eat caviar. These examples are obviously extremes, but the point is that many different foods can be used to meet the need for calories. Desiring a particular food is a want, while getting enough calories (and other nutrients) is a need. In Part Two, I set out a list of these needs and look at our current and future ability to fulfill them.
Importantly, if something is no longer scarce, it isn’t necessarily abundant—there is an intermediate stage, which I will call ‘sufficiency’. For instance, there is sufficient land on the planet to meet everyone’s needs, but building housing and growing food still requires significant physical resources, and hence these things are not abundant. I can foresee a time when technological progress makes land and food abundant—imagine how much space we would have if we could figure out how to live on other planets. Digital information is already on a clear path to abundance: we can make copies of it and distribute them at zero marginal cost, thus meeting the information needs of everyone connected to the Internet.
With this needs-based definition of scarcity in place, we can now examine how technology has shifted the constraining scarcity for humanity over time.
I will now provide a highly abstract account of human history that focuses on how technology has shifted scarcity over time and how those shifts have contributed to dramatic changes in human societies.
Homo sapiens emerged roughly two hundred and fifty thousand years ago. For most of the time since, humans were foragers (also referred to as hunter-gatherers). During the Forager Age, the defining scarcity was food: tribes either found enough food in their territory, migrated further, or starved.
Then, roughly ten thousand years ago, humanity came up with a series of technologies, such as the planting of seeds, irrigation and the domestication of animals, that together we recognize today as agriculture. These technologies shifted the scarcity from food to land in what became the Agrarian Age. A society that had enough arable land (on which food can be grown) could meet its needs and flourish. It could, in fact, create a food surplus that allowed for the existence of groups such as artists and soldiers that were not directly involved in food production.
More recently, beginning about four hundred years ago with the Enlightenment, humanity invented a new series of technologies, including steam power, mechanical machines, chemistry, mining, and eventually technologies to produce, transmit and harness electricity. Collectively we refer to these today as the Industrial Revolution, and the age that followed as the Industrial Age. Once again, the scarcity shifted, this time away from land and towards capital, such as buildings, machinery and roads. Capital was scarce because we couldn’t meet the needs of a growing human population, including the need for calories, without building agricultural machines, producing fertilizer and constructing housing.
In each of those two prior transitions, the way humanity lived changed radically. In the transition from the Forager Age to the Agrarian Age we went from being nomadic to sedentary, from flat tribal societies to extremely hierarchical feudal societies, from promiscuity to monogamy (sort of), and from animistic religions to theistic ones. In the transition from the Agrarian Age to the Industrial Age we went from living in the country to living in the city, from large extended families to nuclear families or no family at all, from commons to private property (including private intellectual property) and from great-chain-of-being theologies to the Protestant work ethic.
What accounts for these changes? In each transition the nature of the scarcity changed in a way that made measurement of human effort more difficult, which in turn required more sophisticated ways of providing incentives to sustain the necessary level of effort.
In the Forager Age, when the key scarcity was food, the measurement and incentive problem was almost trivial: everyone in a tribe sees how much food the hunters and gatherers bring back, and it is either enough to feed everyone or not. In so-called immediate-return societies (which had no storage) that is literally all there is to it. With storage the story gets slightly more complicated, but not by much. I believe that this explains many of the features of successful foraging tribal societies, including the flat hierarchy and the equality of sharing.
In the Agrarian Age, when the scarcity was land, the measurement problem got significantly harder: you can really only tell at harvest time (once per year in many regions of the world) how well-off a society will be. Again, I believe that this explains many of the features of successful agrarian societies, in particular the need for a lot of structure and strict rules. It is crucial to keep in mind that these societies were essentially pre-scientific, so they had to find out what works by trial and error. When they found a rule that seemed to work, they wanted to stick with it and codify it (much of this happened via the theistic religions).
In the Industrial Age, when the scarcity was capital, the measurement problem became even harder. How do you decide where a factory should be built and what it should produce? It might take years of process and product innovation to put physical capital together that is truly productive. I believe that this explains much of the success of the market-based model, especially when contrasted with planned economies. Effectively, the solution to the incentive problem moved from static rules to a dynamic process that allows for many experiments to take place and only a few of them to succeed.
These changes in how humanity lives were responses to an increasingly difficult measurement problem, as technological progress shifted scarcity from food to land and then from land to capital. But the transitions don’t occur deterministically; they are the result of human choice driving changes in regulation. For example, when it came to the scarcity of capital, humanity tried out radically different approaches, ranging from market-based to planned economies. As it turned out, competitive markets, combined with entrepreneurialism and the strategic deployment of state support (e.g. in the form of regulation), were better at allocating and accumulating capital. Similarly, the Agrarian Age contained vastly different societies, including Athenian democracy, which was hugely advanced compared to much of Northern European society in the Middle Ages.
The other important point to note about the previous transitions is that they took quite some time and were incredibly violent. Agriculture emerged over the span of thousands of years, during which time agrarian societies slowly expanded, either subduing or killing foraging tribes. The transition from the Agrarian Age to the Industrial Age played out over several hundred years and involved many bloody revolutions and ultimately two world wars. At the end of the Agrarian Age, the ruling elites had gained their power from controlling land and still believed it to be the critical scarcity. For them, industry was a means of building and equipping increasingly powerful armies with tanks and battleships to ensure control over land. Even the Second World War was about land, as Hitler pursued “Lebensraum” (literally “room to live”) for his Third Reich. It was only after the Second World War that we finally left the Agrarian Age behind for good.
We now, once again, find ourselves in a transition period, because digital technology is shifting the scarcity from capital to attention. What should be clear by now is that this transition will also require dramatic changes in how humanity lives, just as the two prior transitions did. It is also likely that the transition will play itself out over several generations, instead of being accomplished quickly.
Finally, there is a historical similarity to the transition out of the Agrarian Age that explains why many governments have been focused on incremental changes. To understand it, we should first note that capital today is frequently thought of as monetary wealth or financial capital, even though it is productive capital (machines, buildings and infrastructure) that really matters. Financial capital allows for the formation of physical capital, but it does not directly add to the production of goods and services. Companies require financial capital only because they have to pay for machines, supplies and labor before they receive payment for the product or service they provide.
Just as the ruling elites at the end of the Agrarian Age came from land, the ruling elites today come from capital. They often don’t take up political roles themselves but rather have ways of influencing policy indirectly, exposing them to less personal risk. A good recent example is the role of the billionaire hedge fund manager Robert Mercer and his family in supporting groups that influenced the outcome of the US Presidential election in 2016, such as the right-wing news organization Breitbart (Gold, 2017).
My first major claim is that capital, at least in the technological sense, is no longer scarce. We have sufficient productive capital to meet our needs through growing food, constructing buildings, producing clothes, and so on. To establish this, I will start by setting out a catalog of individual and collective needs. I will then examine current population trends to see what we can learn about the future growth in these needs, followed by an evaluation of our available capital and its ability to meet those needs. That entire section of The World After Capital shows that physical capital is sufficient in aggregate. It does not address questions of wealth distribution, which will be discussed later.
My second claim is that attention is now the key scarcity, meaning that our present allocation of attention is resulting in humanity’s needs not being met. To establish this I will start by pinning down more precisely what attention is, and then present several examples of human needs that either are already no longer met, such as the need for meaning, or that are at risk of not being met in the future, such as the need for calories as the climate crisis unfolds—all because of a lack of attention. After that I will consider how much human attention is currently caught up in Industrial Age activities, and how more attention is being trapped through the dominant uses of digital technology, such as advertising-based social networks. I will also discuss why market-based capitalism cannot be used to allocate attention.
I will then make concrete suggestions for how to facilitate the transition to the next age, which I call the Knowledge Age. In keeping with the ideas about knowledge and humanism that I presented earlier, my suggestions focus on increasing freedoms as the basis for more available attention and improved allocation of that attention.