The invention of agriculture expanded the space of the possible by dramatically increasing the food density of land. This allowed humanity to produce surplus food, which provided the basis for increased population density and for hierarchical societies that developed standing armies, specialization of labor and writing.
The Enlightenment and subsequent Industrial Revolution further expanded the space of the possible by substituting machine power for human power and by increasing our understanding of, and control over, chemical and physical transformations of matter. This allowed humanity to make extraordinary material progress on the basis of innovations in energy, manufacturing, transportation and communication.
Digital technologies provide the third expansion of the space of the possible. This seems like a bold claim, and many have derided digital technologies such as Twitter, arguing that they are inconsequential compared to, say, the invention of vaccines.
Yet we can already see the disruptiveness of digital technologies. For instance, many previously well established businesses, such as newspapers and retailers, are struggling, while companies that deal only in information, such as Google and Facebook, are among the world's most highly valued.
There are two characteristics of digital technology that expand the space of the possible, and both are important: the first is zero marginal cost and the second is the universality of digital computation.
Once a piece of information is on the Internet, it can be accessed from anywhere on the network for no additional cost. As more and more people around the world are connected to the Internet, “anywhere on the network” increasingly means anywhere in the world. The servers are already running. The network connections and end user devices are already in place and powered up. Making one extra digital copy of the information and delivering it across the network is therefore free. In the language of economics: the “marginal cost” of a digital copy is zero. That doesn't mean there aren't people trying to charge you; in many cases there are. Zero marginal cost is a statement about cost, not about prices.
Zero marginal cost is radically different from anything that has come before it in the analog world, and it makes possible some pretty amazing things. To illustrate, imagine you own a pizzeria. You pay rent for your store, you pay for your equipment, and you pay salaries for your staff (and yourself). All of these are so-called “fixed costs.” They don't change at all with the number of pizzas you bake. “Variable costs,” on the other hand, depend on the number of pizzas you make. For a pizzeria, these include the cost of the water, flour, and other ingredients used in making pizzas. Variable cost also includes the energy you need to heat your oven. If you make more pizzas, your variable cost goes up. If you make fewer, your variable cost goes down.
So what is marginal cost? Well, let's say you are up and running making 100 pizzas every day. The marginal cost is the additional cost of making the 101st pizza. Assuming the oven is already hot and has room in it for one more pizza, the additional cost of that 101st pizza is just the cost of the ingredients, which is likely relatively low. If, however, the oven has already cooled off, then the marginal cost of the 101st pizza also includes the energy cost of re-heating the oven. In that case the marginal cost could be quite high.
From a business perspective, you would want to make that 101st pizza as long as you can sell it for more than its marginal cost. Every cent above marginal cost makes a contribution towards fixed cost, helping to pay for rent and salaries. If you have already covered all your fixed cost from the previous pizzas sold, then every cent above marginal cost for the 101st pizza is profit.
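The arithmetic here can be made concrete with a small sketch. All the numbers below are invented for illustration (the ingredient cost, the re-heating cost, and the price are assumptions, not real figures):

```python
# Hypothetical pizzeria numbers: fixed costs (rent, salaries) are already
# covered by the first 100 pizzas, so on the 101st pizza every cent above
# marginal cost is contribution and, here, profit.
ingredient_cost = 2.50     # assumed cost of water, flour, toppings
reheat_energy_cost = 4.00  # assumed cost of re-heating a cold oven

# Case 1: the oven is still hot, so marginal cost is just the ingredients.
marginal_cost_hot = ingredient_cost

# Case 2: the oven has cooled off, so re-heating is part of the marginal cost.
marginal_cost_cold = ingredient_cost + reheat_energy_cost

price = 25.00  # assumed selling price per pizza

# Contribution toward fixed cost (or profit) for the 101st pizza.
contribution_hot = price - marginal_cost_hot    # 22.50
contribution_cold = price - marginal_cost_cold  # 18.50

print(contribution_hot, contribution_cold)
```

In both cases the sale is worth making, because the price exceeds the marginal cost; the only difference is how much is left over as contribution.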
Marginal cost also matters from a social perspective. As long as a customer is willing to pay more than the marginal cost for that pizza, then everyone is better off. You're better off because you get extra contribution towards your fixed cost or your profit. Your customer is better off because, well, they just ate a pizza they wanted! Even if the customer paid exactly the marginal cost you wouldn't be any worse off and the customer would still be better off.
Let's consider what happens as marginal cost falls from an initially high level. Imagine for a moment that your key ingredient is an exceedingly rare and expensive truffle and therefore the marginal cost of your pizzas is $1,000 per pizza. Clearly you won't be selling a lot of pizzas. You decide to switch to cheaper ingredients and start to bring down your marginal cost to where a larger number of customers are willing to pay more than your marginal cost. In New York City, where I live, that seems to be around $25 per pizza. So you start selling quite a few pizzas. As you bring down the marginal cost of your pizza even further through additional process and product improvements (e.g., a thinner crust, economies of scale, etc.), you can start selling even more pizzas.
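The relationship between falling marginal cost and rising sales can be sketched with a toy demand model. The willingness-to-pay figures below are invented for illustration; the point is only that if you price at marginal cost, everyone willing to pay at least that much gets served:

```python
# Toy demand model: each potential customer has a maximum price they are
# willing to pay for a pizza. If the seller prices at marginal cost, then
# everyone whose willingness to pay meets or exceeds it buys a pizza.
# These willingness-to-pay values are invented for illustration.
willingness_to_pay = [1000, 100, 50, 25, 25, 10, 5, 1, 0.5, 0.01]

def pizzas_sold(marginal_cost):
    """Count customers willing to pay at least the marginal cost."""
    return sum(1 for w in willingness_to_pay if w >= marginal_cost)

print(pizzas_sold(1000))  # truffle pizza: only 1 buyer
print(pizzas_sold(25))    # ordinary pizza: 5 buyers
print(pizzas_sold(0.01))  # near-zero marginal cost: all 10 buyers
```

As the marginal cost falls, the number of customers it makes sense to serve grows, until at near-zero marginal cost essentially everyone is served.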
Now imagine that through a magical new invention you can make additional pizzas at close to zero marginal cost (say one cent per additional pizza), including nearly instantaneous (say one second) shipment to anywhere in the world. What would happen then? Well, for starters you would be able to sell an exceedingly large number of pizzas. And if you charged even just two cents per pizza you would be making one cent of contribution or profit for every additional pizza you sell.
At such low marginal cost you would probably be the only pizza seller in the world (a monopoly—more on that later). From a social welfare standpoint, anyone in the world who was hungry and who wanted pizza and could afford at least one cent would ideally be getting one of your pizzas. This means that the best price of your pizza from a social point of view would be one cent (your marginal cost). Why not two cents? Because if someone was hungry but could only afford one cent and you sold them a pizza at that price, then the world as a whole would still be better off. The hungry person was fed and you covered the marginal cost of making the pizza.
Let's recap: When your marginal cost was extremely high, you had very few customers. As your marginal cost dropped you started to be able to sell more. And as your marginal cost approached zero, you eventually started to feed the world! This is exactly where we are with digital technology. We can now feed the world with information. That additional YouTube video view? Marginal cost of zero. Additional access to Wikipedia? Marginal cost of zero. Additional traffic report delivered by Waze? Marginal cost of zero.
This means we should expect certain digital “pizza-making operations” to be huge and span the globe in near monopoly positions (i.e., they are much larger than anyone else, having nearly the entire market to themselves). This is exactly what we are seeing with companies such as Google and Facebook. But—and this is critical to the idea of the Knowledge Age—it also means, from a social perspective, that the price for marginal usage should be zero.
Why prevent someone from accessing YouTube, Wikipedia or Waze, either by cutting them off from the system altogether or by charging a price they can't afford? Doing so would always constitute a loss to society: with zero marginal cost, any benefit the individual receives from access is, by definition, greater than the cost of providing it. And best of all, they might use what they learn to create something that they share and that in turn delivers extraordinary enjoyment or a scientific breakthrough to the world.
We are not used to zero marginal cost. Most of economics assumes non-zero marginal cost. You can think of zero marginal cost as an economic singularity: dividing by zero is undefined, and as you approach zero marginal cost, strange things happen. We are already observing these strange things in the world today, including digital near monopolies and a power law distribution of income and wealth. We are now rapidly approaching this zero marginal cost singularity in many industries, including finance and education.
So the first characteristic of digital technology that expands the space of the possible is zero marginal cost. This space includes digital monopolies, but it also includes access for all of humanity to all the world's knowledge (a term I will define more precisely later).
Zero marginal cost is only the first property of digital technology that dramatically expands the space of the possible. The second property is in some ways even more amazing.
Computers are universal machines. I mean this in a rather precise sense: anything that can be computed in the universe at all can be computed by the kind of machine that we already have, given enough memory and enough time. We have known this since Alan Turing's groundbreaking work on computation. Turing invented an abstract computer, which we now call a Turing machine. He then came up with an ingenious proof that this machine, despite being extremely simple, can compute anything that is computable at all.
What do I mean here by computation? I mean any process that takes some information inputs, executes a series of processing steps, and produces an information output. That is—for better or worse—all that a human brain does as well. The brain receives inputs via nerves, carries out some internal processing, and produces outputs (also via nerves). In principle, there is nothing a human brain can do that a digital machine cannot do.
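This notion of computation, inputs transformed by processing steps into outputs, is exactly what a Turing machine formalizes. Here is a minimal sketch of a Turing machine interpreter; the rule table shown merely flips bits, a toy computation, but the interpreter can run any rule table you hand it, which is the sense in which the machine is general:

```python
# A minimal Turing machine interpreter: a finite rule table, an infinite
# tape (modeled as a dict), and a read/write head that moves left or right.
def run_turing_machine(rules, tape, state="start", head=0, max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, write_symbol, move)."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        symbol = cells.get(head, "_")     # "_" stands for a blank cell
        if (state, symbol) not in rules:  # no applicable rule: machine halts
            break
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example rule table: invert every bit on the tape, then halt on blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing_machine(flip, "10110"))  # -> "01001"
```

The same dozen lines of interpreter, given a different rule table, would add numbers, sort symbols, or simulate any other Turing machine; the hardware never has to change, only the rules.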
The “in principle” limitation will turn out to be significant only if quantum effects matter in the brain. This is a hotly debated topic. Quantum effects do not change what can be computed per se, because even a Turing machine can simulate a quantum effect, but it would take an impractically long time to do so, potentially millions of years or more. If quantum effects were to matter in the brain then we would need to wait for further progress in quantum computing to simulate a brain. Personally, I believe that quantum effects are unlikely to matter and that we will be able to simulate an entire human brain in a digital computer with sufficient detail. We can't do it quite yet, as our present digital hardware is too slow and has insufficient memory (we also do not yet have a complete map of a human brain).
Unless you want to believe in something beyond what physics has determined to date, there is nothing that a human brain can do that a computer cannot also do. A digital computer will likely suffice, though it is possible that we will need quantum computers to cover everything. There is, of course, some wiggle room here: we may yet discover something new about physical reality that changes our view of what is computable. But we haven't so far.
For a long time this universality property didn't seem to matter all that much. Computers were pretty dumb compared to humans. This was frustrating to computer scientists who, going back as far as Turing himself, had the belief that it should be possible to build a machine that does, well, smart things. But they couldn't get it to work. Even something that is really simple for most humans, such as recognizing objects, had computers completely stumped. Until now that is, when we suddenly find ourselves with computers that can do all sorts of smart things.
An analogy here is heavier than air flight. We knew for a long time that it must be possible—we knew that birds were heavier than air and yet they could fly. But it took until 1903, when the Wright Brothers built the first successful airplane, for us to figure out how to do it. Once they and several others around the same time had figured it out, though, progress was rapid. We went from not knowing how to fly for thousands of years to passenger jet planes crossing the Atlantic in 55 years (BOAC's first transatlantic jet passenger flight was in 1958). If you graph this, you see a perfect example of a non-linearity. We didn't get gradually better at flying. We couldn't do it at all and then suddenly we did, and quickly did it very well.
Similarly, with digital technology, we have finally made a series of breakthroughs that have taken us from essentially no machine intelligence to machines outperforming humans on many different tasks, including reading handwriting and recognizing faces. More impressive, maybe, is that machines have learned how to drive cars. The rate of progress in driving is a great example of the non-linearity of improvement. DARPA, the Defense Advanced Research Projects Agency, held its first so-called Grand Challenge for self-driving cars in 2004. They picked a 150-mile closed course in the Mojave Desert, and yet no car got further than 7 miles before getting stuck (less than 5% of the course). By 2012, less than a decade later, Google's self-driving cars had successfully driven over 300,000 miles on public roads with traffic.
Some people will object that reading handwriting, recognizing faces, or driving a car is not what we mean by intelligence. This just points out, though, that we don't really have a good definition of “intelligence.” For instance, if you had a dog that could perform any of these tasks, let alone all three, you would likely call it an “intelligent” dog.
Other people will say that humans also have creativity and these machines, even if we grant them some form of intelligence, won't ever be creative. This amounts to arguing that creativity is something other than computation. The word “creativity” suggests the idea of “something from nothing,” of outputs without inputs. But that is not the nature of human creativity: musicians create new music after having heard lots of music, engineers create new machines after having seen many existing ones, and so on. There is no evidence that creativity is more than computation.
Recently, Google achieved a relevant breakthrough in machine intelligence: its AlphaGo program beat Korean Go grandmaster Lee Sedol 4-1. Previously, progress with software that could play Go had been comparatively slow, and even the best programs could not beat strong club players, let alone masters. The search space in Go is so large that the brute-force search approach that works for chess cannot be used to find moves. Instead, candidate moves need to be conjectured. Put differently, playing Go involves creativity.
The approach used to train AlphaGo, so-called adversarial training of neural networks, can also be applied to other domains that require creativity. There is already progress in applying these techniques to composing music and creating designs. Maybe even more surprisingly, machines can learn to be creative not just from studying prior human games or designs, but also by generating their own based on rules. A newer version of AlphaGo, called AlphaZero, starts out knowing only the rules of a game such as Go or chess, and learns from games it plays against itself. This approach allows machines to be creative in areas where there is little or no prior human work to go on.
With digital technologies, the space of the possible has thus expanded to include machines that can most likely do anything that a human can do.
Now, impressive as these two properties of zero marginal cost and universality are on their own, their combination is truly magical. I will give just one example: we are well on our way to a computer program that will be able to diagnose any disease from a patient's symptoms in a series of steps, including ordering new tests and interpreting their results. Universality told us to expect this; now we are making tangible progress, and accomplishing it is at most a matter of decades. Once we can do it, then thanks to zero marginal cost we can, and should, provide free diagnosis to anyone, anywhere in the world. (Okay—the actual lab tests, to the extent they are required, will still cost something.) Still, one needs to let that sink in slowly to really grasp its extent. The realm of possibility for mankind will soon include free medical diagnosis for all humans.
Universality of computation at zero marginal cost is unlike anything we have had with prior technologies. Being able to give all of humanity access to all the world's information and knowledge was never before possible. Intelligent machines were not previously possible. Now we have both. This is as profound an increase in what is possible for humanity as agriculture and industry were before. Each of those ushered in an entirely different age.
To help us think better about the next age made possible by digital technologies, we now need to put some foundations in place.