Digital Technology
Billions of people all over the world carry around smartphones, powerful computers that are connected to a global network (the internet). We often spend many hours a day on these devices, whether playing games, checking social media, or carrying out work. And yet despite the growing ubiquity of digital technology, people often find it difficult to understand what exactly makes it so distinctively powerful. Some have even derided digital technology, pointing to services such as X/Twitter and arguing that they are inconsequential when compared to, say, the invention of vaccines.
It is nonetheless becoming increasingly difficult to ignore digital technology’s disruptiveness. For example, while many long-established businesses, including newspapers and retailers, are struggling, digital technology companies such as Facebook, Apple, Amazon, Netflix, and Google are now among the world’s most highly valued (“List of public corporations,” 2020).
Digital technology turns out to possess two unique characteristics that explain why it dramatically expands the space of the possible for humanity, far beyond anything that came before: zero marginal cost and the universality of computation.
ZERO MARGINAL COST
Once a piece of information exists on the internet, it can be accessed from anywhere on the network for no additional cost. And as more and more people around the world connect to the internet, “anywhere on the network” is increasingly coming to mean “anywhere in the world.” The servers are already running, as are the network connections and the end-user devices. Making one extra digital copy of the information and delivering it across the network therefore costs nothing. In the language of economics, the marginal cost of a digital copy is zero. That does not mean that people won’t try to charge you for this information—in many cases they will. But that’s a matter of price, not of cost.
Zero marginal cost is radically different from anything that has come before it in the analog world, and it makes some pretty amazing things possible. To illustrate this, imagine that you own a pizzeria. You pay rent for your store and your equipment, and you pay salaries for your staff and yourself. These are fixed costs, and they don’t change with the number of pizzas you bake. Variable costs, on the other hand, depend on the number of pizzas you make. For a pizzeria, these include the cost of the water, flour, and other ingredients, the wages of any additional workers you need to hire, and the energy you need to heat your oven. If you make more pizzas, your variable costs go up; if you make fewer pizzas, they go down.
So what is marginal cost? Let’s say you are making one hundred pizzas every day: The marginal cost is the additional cost of making one more pizza. Assuming the oven is already hot and has space in it, and your employees aren’t fully occupied, it is just the cost of the ingredients, which is likely relatively low. If the oven has already cooled, the marginal cost of the additional pizza would also include the energy needed to reheat it and might be quite high.
From a business perspective, you would want to make that additional pizza as long as you could sell it for more than its marginal cost. Every cent you can charge above marginal cost for the additional pizza makes a contribution toward your fixed costs, and if those are already covered, that contribution is pure profit.
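Restated in the standard notation of economics (the symbols below, F for fixed costs, V for variable costs, q for the number of pizzas, and p for the price, are just illustrative labels, not anything specific to this example):

```latex
\begin{aligned}
C(q)  &= F + V(q) && \text{total cost of making } q \text{ pizzas}\\
MC(q) &= C(q+1) - C(q) \;\approx\; \frac{dC}{dq} && \text{marginal cost of one more pizza}\\
\text{contribution} &= p - MC(q) && \text{what one more pizza sold at price } p \text{ adds toward } F \text{ (or to profit)}
\end{aligned}
```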
Marginal cost also matters from a social perspective. As long as a customer is willing to pay more than the marginal cost for that pizza, everyone is potentially better off—you get that extra contribution toward your fixed costs or your profits, and your customer gets to eat a pizza they wanted and were willing to pay for (note: I say “potentially better off” because people sometimes want things that are not actually good for them; someone might want pizza, for example, yet become obese as a result).
Now let’s consider what happens as marginal cost falls from a high level. Imagine that your key ingredient is an exceedingly expensive truffle, which puts the marginal cost of each of your pizzas at $1,000. You clearly wouldn’t sell many pizzas at that cost, so you might switch to cheaper ingredients, reducing your marginal cost to the point where more customers are willing to pay above it and your sales increase. As you bring marginal cost down further through additional process and product improvements, you would sell even more pizzas.
Now imagine that through a magical new invention, you could make additional tasty pizzas at close to zero marginal cost (say, one cent per additional pizza) and ship them instantaneously to anywhere in the world. You would then be able to sell an exceedingly large number of pizzas. If you charged just two cents per pizza, you would be earning one cent above marginal cost on every additional pizza you sold. Such a low marginal cost might quickly give you a monopoly on the global pizza market (more on this later): anyone in the world who was hungry and could afford a couple of cents might buy one of your pizzas.
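To make the arithmetic of this thought experiment concrete, here is a minimal sketch in Python. All the figures are made up for illustration; the point is the structure of the calculation, not the specific numbers.

```python
def profit_cents(price_cents, marginal_cost_cents, fixed_cost_cents, units_sold):
    """Profit = per-unit contribution times volume, minus fixed costs (all in cents)."""
    contribution_per_unit = price_cents - marginal_cost_cents
    return contribution_per_unit * units_sold - fixed_cost_cents

# A conventional pizzeria: meaningful marginal cost, limited volume.
print(profit_cents(1000, 300, 50_000, 100))      # -> 20000, i.e. $200 a day

# The "magical" pizzeria: one cent of marginal cost, a two-cent price,
# and a nearly unlimited market because delivery is instant and global.
print(profit_cents(2, 1, 50_000, 100_000_000))   # -> 99950000, i.e. $999,500 a day
```

Once marginal cost is close to zero, nearly all of the price of every additional unit is contribution, and the only remaining question is how many units you can reach.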
From a societal point of view, the best price for your pizza would be one cent (your marginal cost): The hungry would be fed, and you would cover your marginal cost (though not your fixed costs). But if you are the only one able to make and deliver pizzas that cheaply, you would have a global monopoly on pizza, as nobody else would be able to compete. As a monopolist, you are highly unlikely to set your price at your marginal cost. Instead, you would maximize your profits by charging a meaningfully higher price. You would probably also engage in all sorts of problematic behavior aimed at securing and further increasing those profits, such as trying to prevent competitors from entering the market, and possibly even looking to get people addicted to pizza so that they consume ever more.
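For readers who want the textbook version of why a monopolist prices above marginal cost, here is the standard sketch, included only as background; q(p) is the demand curve, c the constant marginal cost, and ε the price elasticity of demand:

```latex
\max_{p}\ \pi(p) = (p - c)\,q(p)
\quad\Longrightarrow\quad
\frac{p^{*} - c}{p^{*}} = \frac{1}{\lvert\varepsilon\rvert}
```

As long as demand is not infinitely elastic, the profit-maximizing price p* sits strictly above the marginal cost c, which is exactly the gap between the socially optimal one-cent pizza and what a pizza monopolist would actually charge.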
This thought experiment describes exactly where we stand with digital technology today. We can “feed the world” with information: an additional YouTube video view, an additional Wikipedia article, or an additional traffic report from Waze each comes at zero marginal cost. We may have already lost our sense of wonder at this, but seen from the age of the printed encyclopedia, the ability to make the world’s knowledge accessible to everyone, anywhere in the world, at zero marginal cost is truly magical.
We are not used to zero marginal cost: Most of our existing economics depends on the assumption that marginal costs are greater than zero. You can think of zero marginal cost as an economic singularity similar to dividing by zero—as you approach it, strange things begin to happen. We have seen the emergence of digital near-monopolies, along with all the problems that this entails (I propose a remedy in Part Four: “Informational Freedom”). We are also observing power-law distributions of income and wealth (see Part Three), in which small and often random variations result in hugely different outcomes. Furthermore, we are rapidly approaching this zero marginal cost singularity in other industries that are primarily information based, including finance and education. In summary, the first characteristic of digital technology that dramatically expands the space of the possible is zero marginal cost. It can give rise to digital monopolies, but it also has the potential to grant everyone access to the world’s knowledge.
UNIVERSALITY OF COMPUTATION
Zero marginal cost is only one property of digital technology that dramatically expands the space of the possible; the second is in some ways even more amazing.
Computers are universal machines; that is, anything that can be computed in the universe can in principle be computed by the kind of machines we already have, given enough memory and time. We have known this since Alan Turing’s groundbreaking work on computation in the middle of the last century. He invented an abstract version of a computer that we now call a Turing machine, and then proved that this simple machine can compute anything that any other computing machine can (Mullins, 2012; “Church–Turing thesis,” 2020).
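To give a sense of just how little machinery universality requires, here is a minimal sketch of a Turing machine simulator in Python. The tape encoding and the example program, which adds one to a binary number, are my own illustrative choices rather than anything from Turing’s paper:

```python
def run_turing_machine(program, tape, state, head=0, blank="_", halt="done"):
    """Repeatedly read the symbol under the head, look up (state, symbol) in the
    program, write a symbol, move the head, and switch state until halting."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    while state != halt:
        symbol = tape.get(head, blank)
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape.get(i, blank) for i in range(min(tape), max(tape) + 1)).strip(blank)

# Example program: add one to a binary number.
# (state, symbol read) -> (symbol to write, head movement, next state)
increment = {
    ("right", "0"): ("0", +1, "right"),  # scan right to the end of the number
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),  # fell off the end: start carrying
    ("carry", "1"): ("0", -1, "carry"),  # 1 plus carry = 0, keep carrying left
    ("carry", "0"): ("1",  0, "done"),   # absorb the carry and halt
    ("carry", "_"): ("1",  0, "done"),   # carried past the leftmost digit
}

print(run_turing_machine(increment, "1011", state="right"))  # -> 1100 (11 + 1 = 12)
```

Turing’s remarkable result is that a machine of essentially this simplicity, given the right program and an unbounded tape, can carry out any computation that any other machine, however elaborate, can.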
By “computation,” I mean any process that takes information inputs, executes a series of processing steps, and produces information outputs. That is—for better or worse—also much of what a human brain does: it receives inputs via nerves, carries out some internal processing, and produces outputs. In principle, a digital machine can accomplish every computation that a human brain can. These brain computations range from something as simple and everyday as recognizing someone’s face (input: image, output: name) to something as complicated as diagnosing disease (inputs: symptoms and test results, output: differential diagnosis).
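Seen this way, the brain tasks just mentioned are simply functions from inputs to outputs. The signatures below are hypothetical and purely illustrative; they do not refer to any real face-recognition or diagnostic system:

```python
# Hypothetical signatures only: each task has the shape of a computation,
# i.e. information in, information out.
def recognize_face(image: bytes) -> str:
    """Input: an image. Output: a name."""
    ...

def diagnose(symptoms: list[str], test_results: dict[str, float]) -> list[str]:
    """Inputs: symptoms and test results. Output: a differential diagnosis."""
    ...
```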
This “in principle” qualification will turn out to be significant only if quantum effects matter for the functioning of the brain, meaning effects that require quantum phenomena such as entanglement and the superposition of states. This is a hotly debated topic (Jedlicka, 2017). Quantum effects do not change what can be computed in principle, as even a Turing machine can theoretically simulate them—but it would take an impractically long time, potentially millions of years, to do so (Timpson, 2004). If quantum effects are important in the brain, we may need further progress in quantum computing to replicate some of the brain’s computational capabilities. However, I believe that quantum effects are unlikely to matter for the bulk of the computations carried out by the human brain, if they matter at all (they might play a role in consciousness). We may, of course, one day discover something new about physical reality that changes our view of what is computable, but so far this hasn’t happened.
For a long time, this property of universality didn’t matter much because computers were pretty dumb compared to humans. This was frustrating to computer scientists who, since Turing, had believed that it should be possible to build an intelligent machine, but for decades couldn’t get it to work. Even something that humans find really simple, such as recognizing faces, had computers stumped. Now, however, we have computers that can recognize faces far better than humans can.
An analogy here is the human discovery of heavier-than-air flight. We knew for a long time that it must be possible—after all, birds are heavier than air and they can fly—but it took until 1903, when the Wright brothers built the first successful airplane, for us to figure out how to do it (“Wright Brothers,” 2020). Once they and several other people had figured it out, progress was rapid. We went from not knowing how to fly to crossing the Atlantic in passenger jet planes in fifty-five years: the British Overseas Airways Corporation’s first transatlantic jet passenger flight was in 1958 (“British Overseas Airways Corporation,” 2020). If you plot this on a graph, you see a perfect example of a non-linearity. We didn’t get gradually better at flying over a long period of time—instead we couldn’t do it at all, and then suddenly we could do it very well.

Digital technology is similar. A series of breakthroughs have taken us from having essentially no machine intelligence to machines that outperform humans on many different tasks, including reading handwriting and recognizing faces (Neuroscience News, 2018; Phillips et al., 2018). The rate of machines’ progress in learning to drive cars is another great example of this non-linearity. The Defense Advanced Research Projects Agency (DARPA) held its first so-called “Grand Challenge” for self-driving cars in 2004. They picked a 150-mile-long closed course in the Mojave Desert, and no car got farther than seven miles (less than 5 percent of the course) before getting stuck. By 2012, less than a decade later, Google’s self-driving cars had driven over 300,000 miles on public roads, with traffic (Urmson, 2012), and today Waymo’s self-driving taxis operate in several cities.
Some people may object that reading handwriting, recognizing faces, or driving a car is not what we mean by intelligence, but this just points out that we don’t have a good definition of intelligence. After all, if you had a pet dog that could perform any of these tasks, let alone all three, wouldn’t you call it an intelligent dog?
Other people point out that humans also have creativity and that these machines won’t be creative even if we grant them some form of intelligence. However, this amounts to arguing that creativity is something other than computation. The word implies something from nothing—outputs without inputs—but that is not the nature of human creativity. After all, musicians create new music after hearing lots of music, engineers create new machines after seeing existing ones, and so on.
There is now evidence that at least some types of creativity can be recreated simply through computation. In 2016, Google achieved a breakthrough in machine intelligence when its AlphaGo program beat the South Korean Go grandmaster Lee Sedol four games to one (Borowiec, 2017). Until that point, progress with game-playing software had been comparatively slow, and the best Go programs were unable to beat strong club players, let alone grandmasters. The number of possible plays in Go is enormous, far exceeding the number in chess. This means that searching through possible moves and counter-moves from the current position, the approach historically used by chess computers, cannot work in Go—instead, candidate moves need to be conjectured. Put differently, playing Go involves creativity.
The approach used for AlphaGo started with training a neural network on games previously played by humans. Once the network was good enough, it was improved further by playing against itself. There has already been progress in applying these and related techniques, such as generative adversarial networks (GANs), to the composition of music and the creation of designs. Even more surprisingly, it has been shown that machines can learn to be creative not just by studying prior human games or designs, but by creating their own, starting from nothing but the rules. Each of AlphaGo’s two successors, AlphaGo Zero and AlphaZero, started out knowing only the rules of the game and learned by playing against itself (“AlphaZero,” 2020). This approach will allow machines to be creative in areas where there is limited or no prior human progress.
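None of AlphaGo’s actual machinery (deep neural networks combined with tree search) fits in a few lines, but the core idea of learning purely from self-play can be sketched on a toy game. The Python sketch below is my own illustration, not anything from the AlphaGo papers: it uses simple tabular learning to master the game of Nim (take one to three sticks; whoever takes the last stick wins) entirely by playing against itself.

```python
import random
from collections import defaultdict

STICKS, ACTIONS, EPISODES, ALPHA, EPSILON = 10, (1, 2, 3), 20_000, 0.1, 0.2
Q = defaultdict(float)  # Q[(sticks_left, take)] = estimated chance the mover wins

def best_move(n):
    moves = [a for a in ACTIONS if a <= n]
    return max(moves, key=lambda a: Q[(n, a)])

for _ in range(EPISODES):          # self-play: the same agent plays both sides
    n = STICKS
    while n > 0:
        moves = [a for a in ACTIONS if a <= n]
        a = random.choice(moves) if random.random() < EPSILON else best_move(n)
        n_next = n - a
        if n_next == 0:
            target = 1.0           # the mover took the last stick and wins
        else:                      # otherwise the mover wins if the opponent loses
            target = 1.0 - max(Q[(n_next, b)] for b in ACTIONS if b <= n_next)
        Q[(n, a)] += ALPHA * (target - Q[(n, a)])
        n = n_next                 # the opponent (the same agent) moves next

print(best_move(10))  # -> 2: leave a multiple of four, the well-known optimal play
```

After enough self-play games the table converges on the optimal strategy of always leaving the opponent a multiple of four sticks, even though no record of human play was ever consulted.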
In the last two years we have seen rapid progress with systems that can generate images, music, and even video. Models such as Midjourney and DALL-E draw pictures from a text prompt. Suno writes songs, both music and lyrics, based on a text description. These models have been trained on millions of hours of existing content, and some people argue that their output is simply a remixing of what humans have already created. But here too this feels like an attempt to tighten the definition of “creativity” too far: if your pet dog could paint images based on your telling it what you want, surely you would call the dog “creative.” For many humans, the output of these models is already indistinguishable from what another human might create. While some open questions about creativity remain, such as how new genres emerge or how humans conjecture mathematical proofs, there can be no doubt that machines have now attained a significant level of creativity.
While much of what the brain does is computation, including many tasks that we identify as creative, there is one function of the brain that may never be accessible to digital machines: having qualia. This is a term from philosophy that refers to our subjective experience, such as what it “feels like” to be cold (or hot), to touch an object, or to be stressed or amazed. For example, when a digital thermostat displays the room temperature, we do not assume that its internal state has anything remotely resembling our own subjective sensation of temperature. The lack of qualia is obvious in this example, but we assume that it extends to much more complex situations, such as a self-driving car taking a series of turns on a winding highway. We would expect a human driver to experience a sensation of thrill or elation, but not the car, at least not in the same form as a human. This absence of, or at least profound difference in, qualia for digital machines may seem like an aside for the moment, but it will turn out to be an important component of where humans might direct their attention in the Knowledge Age.
UNIVERSALITY AT ZERO MARGINAL COST
As impressive as zero marginal cost and universality are on their own, in combination they are truly magical. To take one example, we are making good progress toward a computer program that can diagnose disease from a patient’s symptoms in a series of steps, including ordering tests and interpreting their results (Parkin, 2020). The principle of universality told us this should be possible eventually; the tangible progress suggests it will be accomplished within a matter of decades, if not sooner. At that point, thanks to zero marginal cost, we will be able to provide diagnosis at essentially no cost to anyone in the world. Let that sink in: Free medical diagnosis for all humans will soon be in the space of the possible due to digital technology.
The universality of computation at zero marginal cost is unlike anything we have had with prior technologies. Making all the world’s information and knowledge accessible to all of humanity was never before possible, nor were intelligent machines. Now we have both. This represents at least as dramatic and non-linear an increase in the space of the possible for humanity as agriculture and industry did before, and each of those developments ushered in an entirely new age. We will be able to think better about what this implies for the current transition and the next age if we start by laying a foundation.