Digital Technology

Billions of people all over the world carry smartphones with them, powerful computers that are connected to a global network (the Internet). We often spend many hours a day on these devices, whether playing games or carrying out work. And yet, despite the growing ubiquity of digital technology, people often find it difficult to understand what exactly makes it so powerful.

There are even some who have derided digital technology, pointing to services such as Twitter and arguing that they are inconsequential when compared to, say, the invention of vaccines. However, it is becoming increasingly difficult to ignore the disruptiveness of digital technology. For example, while many previously well-established businesses are struggling, including newspapers and retailers, digital technology companies such as Facebook, Apple, Amazon, Netflix and Google are now among the world’s most highly valued [3].

Digital technology turns out to possess two unique characteristics that explain why it dramatically expands the ‘space of the possible’ for humanity, going far beyond anything that was previously possible. These are ‘zero marginal cost’ and the ‘universality of computation’.

Zero Marginal Cost

Once a piece of information exists on the Internet, it can be accessed from anywhere on the network for no additional cost. And as more and more people around the world are connected to the Internet, ‘anywhere on the network’ is increasingly coming to mean ‘anywhere in the world’. The servers are already running, as are the network connections and the end-user devices. Making one extra digital copy of the information and delivering it across the network therefore costs nothing. In the language of economics, the ‘marginal cost’ of a digital copy is zero. That does not mean that people won’t try to charge you for this information – in many cases they will. Zero marginal cost is a statement that relates to cost rather than prices.

Zero marginal cost is radically different from anything that has come before it in the analog world, and it makes some pretty amazing things possible. To illustrate this, imagine that you own a pizzeria. You pay rent for your store and your equipment, and you pay salaries for your staff and yourself. These are so-called ‘fixed costs’, and they don’t change with the number of pizzas you bake. ‘Variable costs’, on the other hand, depend on the number of pizzas you make. For a pizzeria, these will include the cost of the water, flour, any other ingredients you use and the energy you need to heat your oven. If you make more pizzas, your variable costs go up and if you make fewer they go down.

So what is marginal cost? Well, let’s say you are making one hundred pizzas every day; the marginal cost is the additional cost of making one more pizza. Assuming the oven is already hot and has space in it, it is the cost of the ingredients, which is likely relatively low. If the oven had already cooled, then the marginal cost of the additional pizza would include the energy cost required for reheating the oven and might be quite high.

From a business perspective, you would want to make that additional pizza as long as you could sell it for more than its marginal cost. If you had already covered your fixed costs from the previous pizzas, every cent above marginal cost for the additional pizza would be profit. Marginal cost also matters from a social perspective. As long as a customer is willing to pay more than the marginal cost for that pizza, everyone benefits – you get extra contribution towards your fixed cost or profit, and your customer gets to eat a pizza they wanted.

Let’s consider what happens as marginal cost falls from a high level. Imagine that your key ingredient was an exceedingly expensive truffle, meaning that the marginal cost of each of your pizzas was $1,000. You clearly wouldn’t sell many pizzas, so you might decide to switch to cheaper ingredients and reduce your marginal cost to the point where a larger number of customers would be willing to pay more than it, increasing your sales. And as you brought the marginal cost down further through additional process and product improvements, you would start to sell even more pizzas.

Now imagine that through a magical new invention you could make additional pizzas at close to zero marginal cost (say one cent per additional pizza) and ship them instantaneously to anywhere in the world. You would then be able to sell an exceedingly large number of pizzas. If you charged just two cents per pizza, you would be making one cent of profit for every additional pizza you sold. At such low marginal cost you would probably have a monopoly on the global pizza market (more on that subject later). Anyone in the world who was hungry and could afford at least one cent would want one of your pizzas. The best price of your pizza from a societal point of view would be one cent (your marginal cost) – the hungry would be fed and you would cover your marginal cost.
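The pizzeria arithmetic above can be captured in a few lines of code. All the specific numbers here (rent, ingredient cost, prices) are invented for illustration; only the definitions of fixed, variable and marginal cost come from the text.

```python
# Toy cost model for the pizzeria example. The figures are made up;
# the point is the definitions, not the numbers.

def total_cost(n_pizzas, fixed_cost, variable_cost_per_pizza):
    """Fixed costs (rent, salaries) plus variable costs (ingredients, energy)."""
    return fixed_cost + n_pizzas * variable_cost_per_pizza

def marginal_cost(n_pizzas, fixed_cost, variable_cost_per_pizza):
    """Cost of making one more pizza: the change in total cost."""
    return (total_cost(n_pizzas + 1, fixed_cost, variable_cost_per_pizza)
            - total_cost(n_pizzas, fixed_cost, variable_cost_per_pizza))

def worth_selling(price, mc):
    """From a business perspective, sell the extra pizza if price exceeds marginal cost."""
    return price > mc

# An ordinary pizzeria: $500/day fixed, $3 of ingredients per pizza.
mc = marginal_cost(100, fixed_cost=500, variable_cost_per_pizza=3.0)
assert mc == 3.0                   # marginal cost is just the variable cost
assert worth_selling(5.0, mc)      # a $5 pizza is worth making
assert not worth_selling(2.0, mc)  # a $2 pizza is not

# The 'magical invention': near-zero marginal cost, one cent per pizza.
mc_magic = marginal_cost(1_000_000, fixed_cost=500, variable_cost_per_pizza=0.01)
assert worth_selling(0.02, mc_magic)  # even at two cents, each sale adds roughly a cent
```

Note that the marginal cost here is simply the variable cost per pizza: the fixed costs cancel out of the difference, which is exactly why a business with high fixed costs but near-zero variable costs behaves so differently.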

This is exactly where we currently are with digital technology. We can feed the world with information, and that additional YouTube video view, additional access to Wikipedia or an additional traffic report from Waze has a marginal cost of zero. We should expect certain digital operations to become huge and to span the globe in near-monopolies, which is what we are seeing with companies such as Google and Facebook. But – and this is critical to the idea of the Knowledge Age – it also means that from a social perspective, the price of marginal usage should be zero.

Why prevent someone from accessing YouTube, Wikipedia or Waze, either by not allowing them access to the system or by charging a price they can’t afford? If the marginal cost is zero, then any benefit an individual derives from access, however small, exceeds that cost. Best of all, they might use what they learn to create something that in turn delivers extraordinary enjoyment or a scientific breakthrough to the world.

We are not used to zero marginal cost; most of economics assumes non-zero marginal cost. You can think of zero marginal cost as an economic singularity similar to dividing by zero in math – as you approach it, strange things begin to happen. We are already observing digital near-monopolies and power-law distributions of income and wealth, where small variations result in hugely different outcomes. Furthermore, we are now rapidly approaching this zero marginal cost singularity in many other industries, including finance and education. The first characteristic of digital technology is that it expands the space of the possible. This can result in digital monopolies but also has the potential to grant all of humanity access to the world’s knowledge.

Universality of Computation

Zero marginal cost is only one property of digital technology that dramatically expands the space of the possible; the second is in some ways even more amazing.

Computers are universal machines. I use this term in a rather precise sense; anything that can be computed in the universe can be computed by the kind of machine that we already have, given enough memory and time. We have known this since Alan Turing’s groundbreaking work on computation. He invented an abstract computer that we now call a Turing machine [4], before coming up with a proof to show that this simple machine could compute anything [5].
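Turing’s abstraction is concrete enough to simulate in a few lines. The following sketch is my own illustration, not from the text: a generic simulator that runs any transition table over a tape, together with an example machine that adds 1 to a binary number.

```python
def run_turing_machine(tape, head, state, rules, blank="_", max_steps=10_000):
    """Simulate a Turing machine. The tape is a dict from position to symbol,
    and rules maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right). Halts when no rule applies."""
    tape = dict(tape)
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:  # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return tape

# A tiny machine that adds 1 to a binary number (least significant bit on the
# right): while reading 1s, write 0 and carry left; on a 0 or blank, write 1.
INCREMENT = {
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "halt"),
    ("carry", "_"): ("1", -1, "halt"),
}

def increment(bits):
    tape = {i: b for i, b in enumerate(bits)}
    result = run_turing_machine(tape, head=len(bits) - 1, state="carry", rules=INCREMENT)
    return "".join(result[p] for p in sorted(result)).lstrip("_")

assert increment("1011") == "1100"
assert increment("111") == "1000"
```

The simulator itself never changes; only the rule table does. That separation between a fixed machine and an interchangeable ‘program’ is the heart of universality.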

By ‘computation’, I mean any process that takes information inputs, executes a series of processing steps and produces information outputs. That is – for better or worse – what a human brain does; it receives inputs via nerves, carries out some internal processing and produces outputs. In principle, a digital machine can do everything that a human brain does.

The ‘in principle’ limitation will turn out to be significant only if quantum effects matter for the functioning of the brain, meaning effects that require quantum phenomena such as entanglement. This is a hotly debated topic [172]. Quantum effects do not change what can be computed per se because even a Turing machine can theoretically simulate a quantum effect, but it would take an impractically long time – potentially millions of years – to do so [6]. If quantum effects are important in the brain, we may need further progress in quantum computing to replicate some of the brain’s capabilities. However, I believe that quantum effects are unlikely to matter for the bulk of computations carried out by the human brain – that is, if they matter at all. We may, of course, one day discover something new about physical reality that will change our view of what is computable, but this so far hasn’t happened.

For a long time, this universality property didn’t matter much, because computers were pretty dumb compared to humans. This was frustrating for computer scientists, who ever since Turing had believed that it should be possible to build an intelligent machine but for a long time couldn’t get one to work. Even something that humans find really simple, such as recognizing faces, had computers stumped. We now, however, have computers that can recognize faces, and their performance at doing so is improving rapidly.

An analogy here is the human discovery of heavier-than-air flight. We knew for a long time that it must be possible – after all, birds are heavier than air and they can fly – but it took until 1903, when the Wright brothers built the first successful airplane, for us to figure out how to do it [7]. Once they and several other people had figured it out, progress was rapid – we went from not knowing how to fly to crossing the Atlantic in passenger jet planes in fifty-five years (the British Overseas Airways Corporation’s first transatlantic jet passenger flight was in 1958 [8]). If you plot this on a graph, you see a perfect example of a non-linearity. We didn’t get gradually better at flying – we couldn’t do it at all and then we could suddenly do it very well.

[Figure: Non-Commercial Flight Distance Records]

Digital technology is similar: a series of breakthroughs have taken us from having essentially no machine intelligence to a situation where machines can outperform humans on many different tasks, including reading handwriting and recognizing faces [9]. The rate of machines’ progress in learning to drive cars is another great example of the non-linearity of improvement. The Defense Advanced Research Projects Agency (DARPA) held its first so-called ‘Grand Challenge’ for self-driving cars in 2004, on a 150-mile closed course in the Mojave Desert; no car got further than seven miles (less than 5 per cent of the course) before getting stuck. By 2012, less than a decade later, Google’s self-driving cars had driven over 300,000 miles on public roads, in traffic [11].

Some people may object that reading handwriting, recognizing faces or driving a car is not what we mean by ‘intelligence’, but this just points out that we don’t have a good definition of it. After all, if you had a pet dog that could perform any of these tasks, let alone all three, you would call it an ‘intelligent’ dog.

Other people point out that humans also have creativity and that these machines won’t be creative even if we grant them some form of intelligence. However, this amounts to arguing that creativity is something other than computation. The word implies ‘something from nothing’ and outputs without inputs, but that is not the nature of human creativity. After all, musicians create new music after hearing lots of music, engineers create new machines after seeing existing ones, and so on. There is now evidence that at least some types of creativity can be recreated simply through computation.

Google recently achieved a breakthrough in machine intelligence when their AlphaGo program beat the South Korean Go grandmaster Lee Sedol by four games to one [12]. Until that point, progress with game-playing software had been comparatively slow and the best programs were unable to beat strong club players, let alone grandmasters. The number of possible plays in Go is extremely large, far exceeding that of chess. This means that searching through possible moves and counter-moves from a current position, which is the approach historically used by chess computers, cannot be used in Go – instead, candidate moves need to be conjectured. Put differently, playing Go involves creativity.

The approach used for the AlphaGo program started out by training a neural network on games previously played by humans. Once the network was good enough, it was improved further by playing against itself. There has already been progress in applying these and related techniques, often referred to as ‘generative adversarial networks’ (GANs), to the composition of music and the creation of designs. Even more surprisingly, it has been shown that machines can learn to be creative not just by studying prior human games or designs, but by creating their own, based on rules. A newer version of AlphaGo called AlphaZero starts out knowing only the rules of a game and learns by playing games against itself [171]. This approach will allow machines to be creative in areas where there is limited or no prior human progress.
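The ‘rules only’ idea can be illustrated at toy scale. AlphaZero itself combines self-play with neural networks and tree search on enormous games; the sketch below merely shows that, given nothing but the rules of a trivial game (a pile of stones, each player removes one to three, whoever takes the last stone wins – a game I chose for illustration), a machine playing both sides exhaustively discovers perfect strategy with no human examples.

```python
from functools import lru_cache

# Rules-only 'self-play' on a toy game: the machine plays both sides of every
# possible game. No human game records are used, only the rules themselves.

@lru_cache(maxsize=None)
def is_winning(stones):
    """A position is winning if some legal move leaves the opponent losing."""
    return any(not is_winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick a move that leaves the opponent a losing position, if one exists."""
    for take in (1, 2, 3):
        if take <= stones and not is_winning(stones - take):
            return take
    return 1  # every move loses; take one stone and hope

assert is_winning(3)      # take all three stones and win immediately
assert not is_winning(4)  # whatever you do, the opponent can win
assert best_move(21) == 1 # the discovered strategy: leave a multiple of four
```

On a game this small, exhaustive play recovers the full strategy exactly; the breakthrough behind AlphaZero is making the same rules-only principle work, approximately, on games far too large to exhaust.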

Universality at Zero Marginal Cost

As impressive as zero marginal cost and universality each are on their own, in combination they are truly magical. To take one example, we are making good progress in the development of a computer program that will be able to diagnose disease from a patient’s symptoms in a series of steps, including ordering tests and interpreting their results [10]. Though we might have expected this based on the principle of universality, we are now making tangible progress and should accomplish it in a matter of decades, if not sooner. Once we can do so, we will, thanks to zero marginal cost, be able to provide low-cost diagnosis to anyone in the world. We should let that sink in slowly in order to grasp its significance: free medical diagnosis for all humans will soon be in the ‘space of the possible’.
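To make the ‘diagnosis as computation’ claim concrete, here is a deliberately toy sketch: a program that takes symptoms as input, may order a test, and outputs a diagnosis. Every condition, test name and label is made up for illustration – real diagnostic systems are statistical models, not small rule trees – but the shape of the computation is the same, and once such a program exists, serving one more patient is a zero-marginal-cost function call.

```python
# Entirely made-up toy rules: the point is the input -> steps -> output shape,
# not medical accuracy.

def diagnose(symptoms, order_test):
    """symptoms: a set of strings; order_test: a callable that 'runs' a named
    test and returns its result, standing in for a lab."""
    if "fever" in symptoms:
        if order_test("throat_culture") == "positive":
            return "strep throat"
        return "viral infection"
    if "cough" in symptoms:
        return "common cold"
    return "no diagnosis; refer to a doctor"

# A pretend lab for the example: a dict of pre-recorded test results.
lab = {"throat_culture": "positive"}
assert diagnose({"fever", "sore throat"}, lab.get) == "strep throat"
assert diagnose({"cough"}, {}.get) == "common cold"
```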

The universality of computation at zero marginal cost is unlike anything we have had with prior technologies. Being able to make all the world’s information and knowledge accessible to all of humanity was never before possible, and nor were intelligent machines. Now we have both. This represents as dramatic and non-linear an increase in the ‘space of the possible’ for humanity as agriculture and industry did before, and each of those developments ushered in an entirely different age. We will be able to think better about what this implies for the current transition and the next age if we first put some foundations in place.