Digital Technology
Billions of people all over the world carry around smartphones, powerful computers that are connected to a global network (the Internet). We often spend many hours a day on these devices, whether playing games or carrying out work. And yet despite the growing ubiquity of digital technology, people often find it difficult to understand what exactly makes it so distinctively powerful. Some have even derided digital technology, pointing to services such as Twitter and arguing that they are inconsequential when compared to, say, the invention of vaccines.
It is nonetheless becoming increasingly difficult to ignore digital technology’s disruptiveness. For example, while many long-established businesses are struggling, including newspapers and retailers, digital technology companies such as Facebook, Apple, Amazon, Netflix and Google are now among the world’s most highly valued (“List of public corporations,” 2020).
Digital technology turns out to possess two unique characteristics that explain why it dramatically expands the ‘space of the possible’ for humanity, going far beyond anything that was previously possible. These are zero marginal cost and the universality of computation.
Zero Marginal Cost
Once a piece of information exists on the Internet, it can be accessed from anywhere on the network for no additional cost. And as more and more people around the world are connected to the Internet, ‘anywhere on the network’ is increasingly coming to mean ‘anywhere in the world’. The servers are already running, as are the network connections and the end-user devices. Making one extra digital copy of the information and delivering it across the network therefore costs nothing. In the language of economics, the ‘marginal cost’ of a digital copy is zero. That does not mean that people won’t try to charge you for this information—in many cases they will. But that's a matter of price, not of cost.
Zero marginal cost is radically different to anything that has come before it in the analog world, and it makes some pretty amazing things possible. To illustrate this, imagine that you own a pizzeria. You pay rent for your store and your equipment, and you pay salaries for your staff and yourself. These are so-called ‘fixed costs,’ and they don’t change with the number of pizzas you bake. ‘Variable costs,’ on the other hand, depend on the number of pizzas you make. For a pizzeria, these will include the cost of the water, the flour, any other ingredients you use, any additional workers you need to hire, and the energy you need to heat your oven. If you make more pizzas, your variable costs go up, and if you make fewer pizzas they go down.
So what is marginal cost? Well, let’s say you are making one hundred pizzas every day: the marginal cost is the additional cost of making one more pizza. Assuming the oven is already hot and has space in it, and your employees aren’t fully occupied, it is the cost of the ingredients, which is likely relatively low. If the oven had already cooled, then the marginal cost of the additional pizza would include the energy cost required for reheating the oven and might be quite high.
From a business perspective, you would want to make that additional pizza as long as you could sell it for more than its marginal cost. If you had already covered your fixed costs from the previous pizzas, every cent above marginal cost for the additional pizza would be profit. Marginal cost also matters from a social perspective. As long as a customer is willing to pay more than the marginal cost for that pizza, everyone is potentially better off—you get an extra contribution towards your fixed costs or your profits, and your customer gets to eat a pizza they wanted (I say “potentially better off” for a reason: people sometimes want things that might not actually be good for them, such as someone suffering from obesity wanting to eat a pizza).
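To make this cost arithmetic concrete, here is a minimal sketch in Python. All of the numbers (rent, ingredient cost, oven reheating cost) are invented for illustration and do not come from the text.

```python
# Minimal sketch of the pizzeria cost arithmetic described above.
# All numbers are illustrative assumptions, not figures from the text.

FIXED_COSTS_PER_DAY = 500.00   # rent, equipment, base salaries: unchanged by pizza count
INGREDIENT_COST = 2.50         # water, flour, toppings per additional pizza
OVEN_REHEAT_COST = 8.00        # extra energy if the oven has already cooled down

def marginal_cost(oven_is_hot: bool) -> float:
    """Cost of making one *more* pizza, given the current state of the kitchen."""
    return INGREDIENT_COST + (0.0 if oven_is_hot else OVEN_REHEAT_COST)

def worth_making(offered_price: float, oven_is_hot: bool = True) -> bool:
    """Make the extra pizza whenever a customer offers more than its marginal cost."""
    return offered_price > marginal_cost(oven_is_hot)

print(marginal_cost(oven_is_hot=True))                      # 2.5  -> any price above this contributes
print(marginal_cost(oven_is_hot=False))                     # 10.5 -> the bar is much higher
print(worth_making(offered_price=5.00, oven_is_hot=False))  # False, since 5.00 < 10.50
```

Note that the fixed costs never enter the marginal calculation at all, which is exactly why the decision about one more pizza depends only on the ingredients and the state of the oven.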
Now let’s consider what happens as marginal cost falls from a high level. Imagine that your key ingredient were an exceedingly expensive truffle, making the marginal cost of each of your pizzas $1,000. You clearly wouldn’t sell many pizzas, so you might decide to switch to cheaper ingredients and reduce your marginal cost to the point where a larger number of customers are willing to pay more than that cost, increasing your sales. And as you brought down the marginal cost further through additional process and product improvements, you would start to sell even more pizzas.
Now imagine that through a magical new invention you could make additional tasty pizzas at close to zero marginal cost (say one cent per additional pizza) and ship them instantaneously to anywhere in the world. You would then be able to sell an exceedingly large number of pizzas. If you charged just two cents per pizza, you would be making one cent of profit for every additional pizza you sold. At such a low marginal cost you would probably quickly gain a monopoly on the global pizza market (more on this later). Anyone in the world who was hungry and could afford at least one cent might buy one of your pizzas. The best price for your pizza from a societal point of view would be one cent (your marginal cost): the hungry would be fed, and you would cover your marginal cost. But as a monopolist, that is unlikely to be what you would do. Instead, you would probably engage in all sorts of problematic behavior aimed at increasing profits, such as charging more than marginal cost, trying to prevent competitors from entering the market, and even looking to get people addicted to pizza so that they consume ever more.
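A few lines of arithmetic show how the one-cent margin in this thought experiment adds up at scale. The price and cost follow the example above; the sales volumes are made up purely for illustration.

```python
# How contribution scales in the magical near-zero marginal cost pizzeria.
# Price and cost follow the thought experiment above; volumes are invented.

MARGINAL_COST = 0.01   # one cent per additional pizza
PRICE = 0.02           # two cents charged per pizza

for pizzas_sold in (1_000, 1_000_000, 1_000_000_000):
    contribution = (PRICE - MARGINAL_COST) * pizzas_sold
    print(f"{pizzas_sold:>13,} pizzas -> ${contribution:,.2f} towards fixed costs and profit")
```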
This is exactly where we currently are with digital technology. We can “feed the world” with information: that additional YouTube video view, additional access to Wikipedia, or additional traffic report from Waze all have zero marginal cost. And just as in the case of the hypothetical zero marginal cost pizza, we are seeing the emergence of digital monopolies, along with all the problems that this entails (see Part Four on ‘Informational Freedom’ for a proposed remedy).
We are not used to zero marginal cost: most of our existing economics depends on the assumption that marginal costs are greater than zero. You can think of zero marginal cost as an economic singularity similar to dividing by zero in math—as you approach it, strange things begin to happen. In addition to digital near-monopolies, we are already observing power-law distributions of income and wealth (see Part Three), where small variations result in hugely different outcomes; the toy simulation below is one way to see this effect. Furthermore, we are now rapidly approaching this zero marginal cost singularity in many other industries that are primarily information-based, including finance and education. In summary, the first characteristic of digital technology that dramatically expands the space of the possible is zero marginal cost. This can result in digital monopolies, but it also has the potential to grant all of humanity access to the world’s knowledge.
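The following sketch shows how small random differences can compound into hugely unequal outcomes under multiplicative growth. It is a toy illustration only: the growth rates, time horizon and population size are arbitrary assumptions, not a model of any real economy.

```python
import math
import random

# Toy simulation: identical starting wealth plus small random differences in
# yearly growth compounds into a heavily skewed distribution.
# All parameters are arbitrary assumptions chosen only to illustrate the effect.

random.seed(0)
N_AGENTS, YEARS = 10_000, 40

wealth = [1.0] * N_AGENTS
for _ in range(YEARS):
    # each agent's wealth is multiplied by a slightly different random factor
    wealth = [w * math.exp(random.gauss(0.05, 0.2)) for w in wealth]

wealth.sort(reverse=True)
top_1_share = sum(wealth[: N_AGENTS // 100]) / sum(wealth)
print(f"share of total wealth held by the top 1%: {top_1_share:.1%}")
print(f"richest agent vs median agent: {wealth[0] / wealth[N_AGENTS // 2]:.0f}x")
```

Even though every agent starts out identical, compounding makes the distribution increasingly heavy-tailed over time.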
Universality of Computation
Zero marginal cost is only one property of digital technology that dramatically expands the space of the possible; the second is in some ways even more amazing.
Computers are universal machines. I use this term in a precise sense: anything that can be computed in the universe can in principle be computed by the kind of machine that we already have, given enough memory and time. We have known this since Alan Turing’s groundbreaking work on computation in the middle of the last century. He invented an abstract version of a computer that we now call a Turing machine, before coming up with a proof showing that this simple machine can compute anything that is computable at all (Mullins, 2012; “Church–Turing thesis,” 2020).
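To make the idea of such a machine less abstract, here is a minimal Turing machine simulator in Python. The encoding, the state names and the example program (incrementing a binary number) are my own illustrative choices, not anything taken from Turing’s work.

```python
# A minimal Turing machine simulator: a tape of symbols, a read/write head,
# and a finite table of rules. The example program increments a binary number.

def run_turing_machine(program, tape, state="start", blank="_"):
    """program maps (state, symbol) -> (symbol_to_write, head_move, next_state)."""
    cells = dict(enumerate(tape))           # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += move                        # move is -1 (left), 0 (stay) or +1 (right)
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rules for binary increment: walk to the rightmost digit, then carry 1s into a 0.
INCREMENT = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "halt"),
    ("carry", "_"): ("1", 0, "halt"),
}

print(run_turing_machine(INCREMENT, "1011"))  # prints "1100" (binary 11 + 1 = 12)
```

Everything a modern computer does can, in principle, be reduced to a sufficiently elaborate version of this rule-following process, which is what makes the universality result so striking.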
By “computation,” I mean any process that takes information inputs, executes a series of processing steps, and produces information outputs. That is—for better or worse—also much of what a human brain does: it receives inputs via nerves, carries out some internal processing and produces outputs. In principle, a digital machine can accomplish every computation that a human brain can. Those brain computations range from something as simple and everyday as recognizing someone’s face (input: an image; output: a name) to something as complicated as diagnosing a disease (inputs: symptoms and test results; output: a differential diagnosis).
This ‘in principle’ limitation will turn out to be significant only if quantum effects matter for the functioning of the brain, meaning effects that require quantum phenomena such as entanglement and the superposition of states. This is a hotly debated topic (Jedlicka, 2017). Quantum effects do not change what can be computed in principle, as even a Turing machine can theoretically simulate a quantum effect—but it would take an impractically long time, potentially millions of years, to do so (Timpson, 2004). If quantum effects are important in the brain, we may need further progress in quantum computing to replicate some of the brain’s computational capabilities. However, I believe that quantum effects are unlikely to matter for the bulk of computations carried out by the human brain—that is, if they matter at all. We may, of course, one day discover something new about physical reality that will change our view of what is computable, but so far this hasn’t happened.
For a long time, this property of universality didn’t matter much because computers were pretty dumb compared to humans. This was frustrating to computer scientists who since Turing had believed that it should be possible to build an intelligent machine, but for decades couldn’t get it to work. Even something that humans find really simple, such as recognizing faces, had computers stumped. Now, however, we have computers that can recognize faces, and their performance at doing so is improving rapidly.
An analogy here is the human discovery of heavier-than-air flight. We knew for a long time that it must be possible—after all, birds are heavier than air and they can fly—but it took until 1903, when the Wright brothers built the first successful airplane, for us to figure out how to do it (“Wright Brothers,” 2020). Once they and several other people had figured it out, progress was rapid—we went from not knowing how to fly to crossing the Atlantic in passenger jet planes in fifty-five years: the British Overseas Airways Corporation’s first transatlantic jet passenger flight was in 1958 (“British Overseas Airways Corporation,” 2020). If you plot this on a graph, you see a perfect example of a non-linearity. We didn’t get gradually better at flying—we couldn’t do it at all, and then suddenly we could do it very well.
Digital technology is similar. A series of breakthroughs has taken us from having essentially no machine intelligence to a situation where machines can outperform humans on many different tasks, including reading handwriting and recognizing faces (Neuroscience News, 2018; Phillips et al., 2018). The rate of machines’ progress in learning how to drive cars is another great example of the non-linearity of improvement. The Defense Advanced Research Projects Agency (DARPA) held its first so-called “Grand Challenge” for self-driving cars in 2004. For that event, DARPA picked a 150-mile-long closed course in the Mojave Desert, and no car got further than seven miles (less than 5 per cent of the course) before getting stuck. By 2012, less than a decade later, Google’s self-driving cars had driven over 300,000 miles on public roads, with traffic (Urmson, 2012).
Some people may object that reading handwriting, recognizing faces, or driving a car is not what we mean by ‘intelligence’, but this objection mainly shows that we don’t have a good definition of intelligence. After all, if you had a pet dog that could perform any of these tasks, let alone all three, you would call it an ‘intelligent’ dog.
Other people point out that humans also have creativity and that these machines won’t be creative even if we grant them some form of intelligence. However, this amounts to arguing that creativity is something other than computation. The word suggests making ‘something from nothing’, producing outputs without inputs, but that is not the nature of human creativity. After all, musicians create new music after hearing lots of music, engineers create new machines after seeing existing ones, and so on.
There is now evidence that at least some types of creativity can be recreated simply through computation. In 2016, Google achieved a breakthrough in machine intelligence when their AlphaGo program beat the South Korean Go grandmaster Lee Sedol by four games to one (Borowiec, 2017). Until that point, progress with Go-playing software had been comparatively slow, and the best programs were unable to beat strong club players, let alone grandmasters. The number of possible plays in Go is extremely large, far exceeding that of chess. This means that exhaustively searching through possible moves and counter-moves from a current position, which is the approach historically used by chess computers, cannot be used in Go—instead, candidate moves need to be conjectured. Put differently, playing Go involves creativity.
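A rough back-of-the-envelope calculation shows why brute-force search breaks down in Go. The branching factors below (roughly 35 legal moves per chess position, roughly 250 per Go position) are commonly cited approximations and are used here only to convey scale.

```python
# Rough illustration of the combinatorial explosion that defeats brute-force
# search in Go. Branching factors are common approximations, used for scale only.

CHESS_BRANCHING = 35    # ~legal moves per chess position (approximation)
GO_BRANCHING = 250      # ~legal moves per Go position (approximation)

for depth in (2, 4, 6, 8):
    print(
        f"looking {depth} moves ahead: "
        f"chess ~{CHESS_BRANCHING ** depth:.1e} positions, "
        f"go ~{GO_BRANCHING ** depth:.1e} positions"
    )
```

Even a few moves ahead, the Go tree is many orders of magnitude larger, which is why a Go program has to propose promising candidate moves rather than enumerate them all.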
The approach used for the AlphaGo program started out by training a neural network on games previously played by humans. Once the network was good enough, it was improved further by playing against itself. There has already been progress in applying related techniques, including ‘generative adversarial networks’ (GANs), to the composition of music and the creation of designs. Even more surprisingly, it has been shown that machines can learn to be creative not just by studying prior human games or designs, but by creating their own, based only on rules. Each of AlphaGo’s two successors, AlphaGo Zero and AlphaZero, started out knowing only the rules and learned from playing games against itself (“AlphaZero,” 2020). This approach will allow machines to be creative in areas where there is limited or no prior human progress.
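The sketch below captures the spirit of learning purely from self-play on a deliberately tiny game (a simple subtraction game I chose for illustration). It uses a plain lookup table rather than the deep neural networks and tree search of the real AlphaZero, so it should be read as an analogy, not as the actual method.

```python
import random
from collections import defaultdict

# Self-play on a tiny game: players alternately take 1-3 stones from a pile of
# 21, and whoever takes the last stone wins. The agent starts knowing only the
# rules and improves by playing against itself (a toy analogy for AlphaZero,
# using a lookup table instead of neural networks and tree search).

ACTIONS = (1, 2, 3)
ALPHA, EPSILON = 0.1, 0.2                  # learning rate, exploration rate
value = defaultdict(float)                 # value[(pile, action)] for the player to move

def choose(pile, explore=True):
    legal = [a for a in ACTIONS if a <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(legal)        # occasionally try something new
    return max(legal, key=lambda a: value[(pile, a)])

def self_play_episode():
    pile, moves = 21, []
    while pile > 0:
        action = choose(pile)
        moves.append((pile, action))
        pile -= action
    outcome = 1.0                          # the player who took the last stone won
    for pile, action in reversed(moves):
        value[(pile, action)] += ALPHA * (outcome - value[(pile, action)])
        outcome = -outcome                 # flip perspective for the other player

random.seed(1)
for _ in range(50_000):
    self_play_episode()

# With enough self-play the agent should rediscover the known optimal strategy:
# always leave the opponent a multiple of four stones.
print(choose(21, explore=False))           # should print 1 (leaving 20 stones)
```

The actual AlphaZero replaces the lookup table with deep neural networks and guides its self-play with tree search, but the underlying loop, playing against yourself and updating from the outcome, is the idea the text describes.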
While much of what the brain does is computation, including many tasks that we identify as creative, there is one function of the brain that may never be accessible to digital machines: having ‘qualia.’ This is a term from philosophy that refers to our subjective experience, such as what it “feels like” to be cold (or hot), to touch an object, or to be stressed or amazed. For example, when a digital thermostat displays the room temperature, we do not assume that its internal state has anything remotely resembling our own subjective sensation. The lack of qualia is obvious in this example, but we assume that it extends to much more complex situations, such as a self-driving car taking a series of turns on a winding highway. We would expect a human driver to experience a sensation of thrill or elation, but not the car. This lack of qualia in machines may seem like an aside for the moment, but it will turn out to be an important component of where humans might direct their attention in the Knowledge Age.
Universality at Zero Marginal Cost
As impressive as zero marginal cost and universality are on their own, in combination they are truly magical. To take one example, we are making good progress in the development of a computer program that will be able to diagnose disease from a patient’s symptoms in a series of steps, including ordering tests and interpreting their results (Parkin, 2020). Though we might have expected this to happen eventually based on the principle of universality, we are now making tangible progress and should get there within a matter of decades, if not sooner. At that point, thanks to zero marginal cost, we will be able to provide diagnosis at nearly zero cost to anyone in the world. Let that sink in slowly: free medical diagnosis for all humans will soon be in the space of the possible.
The universality of computation at zero marginal cost is unlike anything we have had with prior technologies. Being able to make all the world’s information and knowledge accessible to all of humanity was never before possible, nor were intelligent machines. Now we have both. This represents at least as dramatic and non-linear an increase in the ‘space of the possible’ for humanity as agriculture and industry did before, and each of those developments ushered in an entirely different age. We will be able to think better about what this implies for the current transition and the next age if we first put some foundations in place.