Believing in the potential of progress does not mean being a Pollyanna, and it is important to remember that progress is not the inevitable result of technology. Contrary to the claims made by the technology writer Kevin Kelly in his book What Technology Wants, technology doesn’t want a better world for humanity; it simply makes such a world possible.

Nor does economics ‘want’ anything: nothing in economic theory, for instance, says that a new technology cannot make people worse off. Economics gives us tools that we can use to analyze markets and design regulations to address their failures, but we still need to make choices about what we want markets and regulations to accomplish.

Moreover, contrary to what Karl Marx thought, history also doesn’t ‘want’ anything. There is no deterministic mechanism through which conflicts between labor and capital are ultimately bound to be resolved in favor of a classless society. Nor is there, as the political scientist Francis Fukuyama would have it, an ‘end of history’: a final social, economic and political system. History doesn’t make its own choices; it is the result of human choices, and there will be new choices to make as long as we continue to make technological progress.

It has always been our responsibility to make choices about which of the worlds made possible by new technology we want to live in. Some of these choices need to be made collectively (requiring rules or regulations), and some of them need to be made individually (requiring self-regulation). The choices we are faced with today are especially important because digital technology so dramatically increases the ‘space of the possible’ that it includes the potential for machines that possess knowledge and will eventually want to make choices of their own.


The people building or funding digital technology tend to be optimists and to believe in progress (though there are also opportunists thrown into the mix). Many of those optimists also believe in the need for regulation, while another group has a decidedly libertarian streak and would prefer governments not to be involved. For them, regulation and progress conflict. The debates between these two groups are often acrimonious, which is unfortunate, because the history of technology clearly demonstrates both the benefits of good regulation and the dangers of bad regulation. Our energy is thus better spent on figuring out the right kind of regulation, as well as engaging in the processes required to enforce and revise it.

The history of regulating automotive technology is instructive here. Much of the world currently gets around by driving cars. The car was an important technological innovation because it vastly enhanced individual mobility, but its widespread adoption and large-scale impacts would have been impossible without legislation, including massive public investments. We needed to build roads and to agree on how they should be used, neither of which could have been accomplished based solely on individual choices. Roads are an example of a ‘natural monopoly’: multiple disjointed road networks or different sets of rules would be hugely problematic. Imagine what would happen if some people drove on the left side of the road and others drove on the right. Natural monopolies are situations where markets fail and regulation is required. Social norms are another form of regulation: the car would have been less widely adopted as a mode of individual transport without changes in social norms that made it acceptable for women to drive, for instance.

Not all regulation is good, of course. In fact, the earliest regulation of automotive vehicles was aimed at delaying their adoption by limiting them to walking speed. In the United Kingdom they were even required by law in their early years to be preceded by someone on foot carrying a red flag [16]. Similarly, not all regulation of digital technology will be beneficial. Much of it will initially aim to protect the status quo and to help established enterprises, including the new incumbents. The recent changes to net neutrality rules are a good example of this [17].

My proposals for regulation, which I will present later in the book, are aimed at encouraging innovation by giving individuals more economic freedom and better access to information. These regulations, which are choices we need to make collectively, represent a big departure from the status quo and from the programs of the established parties here in the United States and in most other countries. They aim to let us explore the space of the possible that digital technologies have created, so we can transition from the Industrial Age to the Knowledge Age.


Another set of choices has to do with how we react individually to the massive acceleration of information dissemination and knowledge creation that digital technology makes possible. These are not rules that society can impose, because they relate to our inner mental states: they are changes we need to make for ourselves. For instance, many people are so offended by content they encounter on the Internet, from videos on YouTube to comments on Twitter, that they become filled with anxiety, rage, and other painful emotions that lead them to withdraw or lash out, furthering polarization and cycles of conflict. Others become trapped in ‘filter bubbles’ of algorithmically curated information that only confirms their existing biases, while still others spend all their time refreshing their Instagram or Facebook feeds. Although some regulation, and more technology, can help, overcoming these problems will require us to change how we react to information.

Changing our reactions is possible through self-regulation, by which I mean training that enhances our capacity to think critically. From Stoicism in ancient Greece to Eastern religions such as Hinduism and Buddhism, humans have a long tradition of practices designed to manage our immediate emotional responses, enabling us to respond to the situations we experience in insightful and responsible ways. These mindfulness practices align with what we have learned more recently about the workings of the human brain and body. If we want to be able to take full advantage of digital technology, we need to figure out how to maintain our powers of critical thinking and creativity in the face of an onslaught of information, including deliberate attempts to exploit our weaknesses.