We need to act with great urgency during this transition to the Knowledge Age. We are woefully behind in dealing with the climate crisis, and there is a real possibility that society will degenerate into violence. In the longer term, we face a potential threat from the possible rise of superintelligences, and there is also a chance that we are not alone in this universe. These are risks that we can only deal with if we stop clinging to the Industrial Age and instead embrace the Knowledge Age. Conversely, if we are able to make the transition, huge opportunities are ahead of us.
The world is rapidly being pulled apart by people who want to take us back to the past, as well as people who are advancing technology while being trapped in Industrial Age thinking. As I described in the introduction, technology increases the space of the possible, but it does not automatically make everyone better off. Because we remain bound by Industrial Age logic, automation is enriching a few while putting pressure on large sections of society. Nor does digital publishing automatically accelerate the knowledge loop – we find ourselves in a world plagued by fake news and filter bubbles.
Those who are trying to take us back into the past are exploiting these trends. They promise those negatively affected by technology that everything will be better, while investing heavily in mass manipulation. They seek to curtail the open Internet, while simultaneously building up secret surveillance. This is true on both sides of the American political spectrum – neither the Republicans nor the Democrats have a truly forward-looking platform, and both favor governmental controls over online platforms and speech, instead of empowering end users as described in the section on Informational Freedom.
The net effects of this are an increase in polarization and a breakdown of critical inquiry and democracy. As disturbing as it is, the possibility of large-scale violent conflict, both within and between nations, is increasing, while the climate crisis wreaks havoc on industrial and food supply chains around the world. At the same time, our ability to solve the problem of climate change is rapidly decreasing because we are spiraling back towards the past.
There is another reason for urgency: we find ourselves on the threshold of creating both transhumans and neohumans. Transhumans are humans with capabilities enhanced through both genetic modification (for example, via CRISPR gene editing) and digital augmentation (for example, the brain-machine interface Neuralink), while neohumans are machines with artificial general intelligence. I am including them both here because they can be full-fledged participants in the knowledge loop.
Both transhumans and neohumans may eventually become a form of ‘superintelligence’ and pose a threat to humanity. The philosopher Nick Bostrom has written a book, Superintelligence, on the subject, and he and other thinkers warn that a superintelligence could have catastrophic results. Rather than rehashing their arguments here, I want to pursue a different line of inquiry: what would a future superintelligence learn about humanist values from our current behavior?
As we have seen, we are not doing well on the central humanist value of critical inquiry. We are also not treating other species well, our biggest failing in this area being industrial meat production. As with many other problems that humans have created, I believe the best way forward is innovation – I am excited about lab-grown meat and plant-based meat substitutes. Improving our treatment of other species is an important way in which we can use the attention freed up through automation.
Even more important, however, is our treatment of other humans. This has two components: how we treat each other now and how we will treat the new humans when they arrive. As for how we treat each other now, we have a long way to go. Many of my proposals are aimed at freeing humans so they can discover and pursue their personal interests, yet the existing education and job loop systems stand in opposition to this freedom. In particular we need to construct the Knowledge Age in a way that allows us to overcome, rather than reinforce, our biological differences. That will be a crucial model for transhuman and neohuman superintelligences, as they will not have our biological constraints.
Finally, how will we treat the new humans? This is a difficult question to answer because it sounds so preposterous. Should machines have human rights? If they are humans, then they clearly should. My approach to what makes humans human would also apply to artificial general intelligence. Does an artificial general intelligence need to have emotions in order to qualify? I would argue that it doesn’t, because how we handle emotions varies so widely. And since these new humans will likely share little of our biological hardware, there is no reason to expect that their emotions should be similar to ours. As we charge ahead, this is an important area for further work. We would not want to accidentally create, fail to recognize, and then mistreat a large class of new humans.
I want to provide one final reason for urgency in getting to the Knowledge Age. It is easy for us to think of ourselves as the center of the universe. In early cosmology we put the earth at the center, before we eventually figured out that we live on a small planet circling a star, in a galaxy that is within an incomprehensibly large universe. More recently, we have discovered that the universe contains a great many planets more or less like ours, which means some form of intelligent life may have arisen elsewhere. This possibility leads to many fascinating questions, one of which is known as the Fermi paradox: if there is so much potential for intelligent life in the universe, why have we not yet picked up any signals?
There are different possible answers to this question. For instance, perhaps civilizations get to a point similar to ours and then destroy themselves because they cannot make a crucial transition. Given the way we are handling the current transition, that seems like a distinct possibility. Or maybe all intelligent civilizations encounter a problem that they cannot solve, such as the climate crisis, and either disappear entirely or become primitive. Given the scale of cosmic time and space, short-lived broadcast civilizations like ours would be difficult to detect. Furthermore, climate change is a clear and present danger, but there are many other species-level challenges.
A different answer to the Fermi paradox would present a different challenge: more advanced civilizations may have gone dark so as to not be discovered and destroyed by even more advanced civilizations. By that account, we may be entering a particularly dangerous period, in which we have been broadcasting our presence but do not yet have the means to travel through space.
Conversely, it is worth asking what kind of opportunities we might explore in the Knowledge Age. To begin with, there is a massive opportunity for automation. Fifty years or so into the Knowledge Age, I expect the amount of attention trapped in the job loop to have shrunk to around 20 percent or less of all human attention. This is akin to the shrinking of agriculture during the Industrial Age. We will finally be able to achieve the level of freedom that many thinkers predicted, as Keynes did in his essay ‘Economic Possibilities for our Grandchildren’, where he wrote about achieving a life of mostly leisure. Even Marx envisioned such a world, although he believed it would be brought about differently. He wrote about a system that ‘makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic’. That is the promise of the Knowledge Age.
But there are many more opportunities for human progress, including space travel. One of the most depressing moments in my life came when I learned that at some point our sun would burn out and, in the process, annihilate all life on Earth. What is the point of anything we do if everything will come to an end anyhow? Thankfully I came to realize that with enough knowledge and progress, humanity could become spacefaring and live on other planets, before eventually traveling to the stars.
A third opportunity is the elimination of disease. It is sometimes easy to forget how far we have already come in this regard. Many of the diseases that used to cripple or kill people have become treatable or have been eliminated altogether. We have started to make major progress in fighting cancer, and I believe there is a good chance that most cancers will become treatable within the next couple of decades. Ultimately this raises the question of mortality. Can, and should, we strive to become immortal? I believe we should, although achieving immortality will bring new problems. These are not the problems of overpopulation that some people imagine, as birth rates are falling and there is, of course, space beyond our planet. The real challenge of immortality will be maintaining the functioning of the knowledge loop, as we will have to figure out not just how to keep the body alive but also how to keep the mind flexible. Max Planck captured this challenge in his famous observation that ‘science advances one funeral at a time’ – the older, dominant positions do not allow new theories to displace them.
Our fourth opportunity is to go from capital merely being sufficient to it being abundant. By the definitions set out earlier, that would mean that the marginal cost of capital was zero. The best way to imagine what that might look like is to think of the replicator from Star Trek. Imagine a microwave oven that instead of heating up a dish makes it from scratch, without you having to shop for the ingredients. Such an abundance of capital might seem a preposterous idea that could never be achieved, but for most physical assembly processes today, the rate-limiting factors are the energy required and the need for humans to operate parts of the system. Machine learning is helping with the second factor, but progress on energy has been slow – we don’t yet have any fusion reactors that output more energy than is provided to start the fusion, but there is no reason it can’t be achieved. With enough knowledge we will make nuclear fusion work, removing the other major barrier to the abundance of capital.
We live in a period where there is an extraordinary range of possible outcomes for humanity. They include the annihilation of humankind in a climate catastrophe on one extreme and the exploration of the universe on the other. Where we end up depends on the large and small choices each of us makes every day, from how we treat another person in conversation to how we tackle the climate crisis. It is a massive challenge, and I wake up every day both scared and excited about this moment of transition. I sincerely hope that The World After Capital makes a small contribution to getting us to the Knowledge Age.