We need to act on this transition to the Knowledge Age with great urgency. We are at risk of society degenerating into violence and losing our ability to solve pressing problems, most notably climate change. We also face a potential threat from the possible rise of superintelligences. And there is a chance that we are not alone in this universe.
The world is rapidly being pulled apart between those who want to take us back into the past and those who are advancing technology but are doing so while still largely trapped in Industrial Age thinking. These two groups are engaged in a dangerous feedback loop.
As described all the way back in the introduction, technology itself simply increases the space of the possible. Pushing automation along does not automatically make everyone better off. Trapped in Industrial Age logic, automation is instead enriching a few while putting pressure on large sections of society. Similarly, digital publishing does not automatically accelerate the Knowledge Loop. Instead, we find ourselves in a world of fake news and filter bubbles.
The forces that are trying to take us back into the past are exploiting both of these trends. They promise those negatively affected by technology that everything will be better again. They are investing heavily in mass-scale manipulation, including producing and harnessing anti-rational memes. And they are curtailing, or seeking to curtail, the open internet, while simultaneously building up secret surveillance.
The net effects are an increase in polarization and a breakdown of the crucial processes of critical inquiry and democracy. I say crucial because without these we are reduced to violent solutions. Disturbing as it is, we once again face the real possibility of large-scale violent conflict, both within and between nations.
This possibility of violence is further increased as climate change wreaks havoc on industrial and food supply chains around the world. At the same time, our ability to solve the climate change problem is rapidly decreasing, because we are spiraling back towards the past.
As if that spiral were not enough by itself, there is a second reason for urgency: we find ourselves on the threshold of creating both transhumans and neohumans. Transhumans are humans with capabilities enhanced through both genetic modification (e.g., via CRISPR) and digital augmentation (e.g., Neuralink). Neohumans are machines with artificial general intelligence. I refer to both of them as humans because they can be full-fledged participants in the Knowledge Loop.
Both transhumans and neohumans may eventually become a form of superintelligence, which could pose a threat to humanity. The philosopher Nick Bostrom has written an entire book on the subject, and others, including Elon Musk and Stephen Hawking, have warned that the creation of a superintelligence could have catastrophic results. I don't want to rehash all the arguments here about why a superintelligence might be difficult (impossible?) to contain, or what its various failure modes might be. Instead I want to pursue a different line of inquiry: what would a future superintelligence learn about humanist values from our current behavior?
We just saw that we are doing quite terribly on the central humanist value of critical inquiry. We are also not doing well in how we treat other species. Our biggest failing with regard to animals is industrial meat production. As someone who eats meat, I am part of that problem. As with many other problems that human knowledge has created, I believe our best way forward is further innovation, and I am excited about lab-grown meat and plant-based meat substitutes. We also have a long way to go in being responsible to other species in many other regards (e.g., the pollution and outright destruction of many habitats). Doing better here is one important way we should use the human attention that is freed up through automation.
Even more important, though, is how we treat other humans. This has two components: how we treat each other today, and how we treat the new humans when they arrive. As for how we treat each other today, we again have a long way to go. Much of what I have proposed is aimed at freeing humans to discover and pursue their personal interests. Yet the existing education and Job Loop systems stand in opposition to this freedom. These systems also embed historical injustices. In particular, we need to construct the Knowledge Age in a way that allows us to overcome, rather than reinforce, our biological differences. That will be a crucial model for transhuman and neohuman superintelligences, as they will not have our biological constraints. Put differently, discrimination on the basis of biological difference would be a terrible thing for superintelligences to learn from us.
Finally, what about the arrival of the new humans? How will we treat them? The video of a Boston Dynamics robot being mistreated is not a good start here. This is a difficult topic because it sounds so preposterous. Should machines have human rights? Well, if the machines are humans, then clearly yes. And my account of what makes humans distinctly human would apply to an artificial general intelligence. Does an artificial general intelligence have to be human in other ways as well in order to qualify? For instance, does it need to have emotions? I would argue no, because we already vary widely in how we handle emotions, including conditions such as psychopathy. Since these new humans will likely share very little, if any, of our biological hardware, there is no reason to expect their emotions to be similar to ours (or that they should need emotions at all).
This is an area in which a lot more thinking is required. We don't have a good way of discerning when we might have built an artificial general intelligence. The best-known attempt here is the Turing Test, for which people have proposed a number of improvements over the years. This is an incredibly important area for further work as we charge ahead. We would not want to accidentally create, fail to recognize, and then mistreat a large class of new humans. They and their descendants might not take kindly to that.
I want to provide one more reason for urgency in getting to the Knowledge Age. It is easy for us to think of ourselves as the center of the universe. In early cosmology we literally put the earth in the center with everything else revolving around it. We eventually figured out that we live on a smallish planet circling a star in a galaxy that's part of an incomprehensibly large universe.
More recently we have discovered that there are a great many planets more or less like ours scattered throughout the universe. That means some form of intelligent life may have arisen in other places. This possibility leads to many fascinating questions, one of which is known as the Fermi Paradox: if there is so much potential for intelligent life in the universe, why have we not yet picked up any signals?
There are different possible answers to this question. For instance, maybe civilizations get to a point similar to ours and then blow themselves to smithereens because they cannot make a crucial transition. Given the way we are handling the current transition, that seems like a distinct possibility for Earth as well (see "A Dangerous Spiral" above). Or maybe all intelligent civilizations encounter a problem, such as climate change, that they cannot solve, and they either disappear entirely or become primitive again. Given cosmic scales of time and space, short-lived broadcast civilizations might be especially difficult to detect (a broadcast civilization being one that, like ours, uses electromagnetic waves for communication). I keep bringing up climate change because it is a clear and present danger, but there are many more current and future species-level challenges.
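The role of civilization lifetime in this argument can be made concrete with a Drake-style back-of-the-envelope calculation. The sketch below is purely illustrative: the function and every parameter value are assumptions of mine, not figures from this book. The point is only that, holding the other factors fixed, the expected number of concurrently detectable civilizations scales linearly with how long each one keeps broadcasting.

```python
# Illustrative Drake-style estimate (all numbers are assumptions, not data).
# Expected number of concurrently broadcasting civilizations in the galaxy
# is a product of rates and fractions, times the broadcast lifetime L.

def detectable_civilizations(star_formation_rate, f_planets, n_habitable,
                             f_life, f_intelligent, f_broadcast, lifetime_years):
    """Drake-style product: expected number of civilizations broadcasting
    at any one time. Linear in lifetime_years."""
    return (star_formation_rate * f_planets * n_habitable *
            f_life * f_intelligent * f_broadcast * lifetime_years)

# Same (optimistic, made-up) assumptions for everything except lifetime:
base = dict(star_formation_rate=1.5,  # new stars per year
            f_planets=1.0,            # fraction of stars with planets
            n_habitable=0.2,          # habitable planets per such star
            f_life=0.5,               # fraction where life arises
            f_intelligent=0.1,        # fraction developing intelligence
            f_broadcast=0.5)          # fraction that ever broadcast

short_lived = detectable_civilizations(**base, lifetime_years=200)
long_lived = detectable_civilizations(**base, lifetime_years=1_000_000)

print(f"short-lived: {short_lived:.1f} civilizations broadcasting now")
print(f"long-lived:  {long_lived:,.0f} civilizations broadcasting now")
```

Under these made-up inputs, a 200-year broadcast era yields on the order of one concurrently detectable civilization, while a million-year era yields thousands, which is one way to see why a sky full of short-lived broadcasters could still look silent.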
One of these comes in the form of a different answer to the Fermi Paradox. More advanced civilizations may have gone dark on purpose, so as not to be discovered and potentially destroyed by even more advanced civilizations. This is the premise of “The Three-Body Problem” science fiction trilogy by the Chinese author Cixin Liu. And while it is a work of fiction, one cannot entirely rule out its dark logic. Certainly in the history of Earth, whenever a less advanced civilization was discovered by a more advanced one, it has not ended well for the former. By that logic we may be entering a particularly dangerous stretch, in which we are broadcasting our presence but do not yet have the means to travel broadly through space.