Civilization after humans

Written 7 October 2013

If you’ve ever played the video game series Halo, you’ve probably come across what are called Sentinels and Monitors. These are essentially AI systems that maintain the superstructures built by their creators, a species called the Forerunners. One could thus say that the Forerunner civilization lived on, despite the species going extinct (entirely or mostly), since it could still have a say in what went on in the Halo universe through these AI systems.

Is a similar situation possible for humans in the future? Once the singularity is reached, technology will develop not only at a pace unparalleled in human history — where the progress made within 10 minutes could exceed that of the past 10,000 years — but it will also be able to self-replicate, producing better and more efficient versions of itself. It is impossible to say what this technology will be (hence the term singularity), but at that point, provided the technology can maintain itself, human supervision will no longer be needed. So what if a massive virus were to wipe out all of humanity, like the Flood wiped out the Forerunners? (Let’s assume, for argument’s sake, that future technology won’t be able to handle this virus.) What would happen to the technology we left behind?

The answer depends on how we would have programmed our technology. It is possible that we would program it to do important things that we can’t always do ourselves, like space exploration. This self-replicating and self-improving technology would constantly search for new habitable planets that could sustain human life — even though humans would be extinct. An interesting scenario would be if this technology were to meet another intelligent civilization. We could assume that our technology would be sophisticated enough to understand how important such a meeting would be to humans, but it wouldn’t grasp the moral and philosophical questions it could open up for humanity, since this technology would not be conscious. So in a sense, a machine would literally be running one of humanity’s most important endeavors as a species. It is impossible to predict much further how this would turn out, since the result would depend on how the alien civilization — of which we would have no information — reacted.

Part of what made me think of this problem is Ray Kurzweil’s prediction that human consciousness will be transferred to machines that replicate all of the functions of the human body but are more efficient. One must first ask whether this is even possible. There are some arguments that it is impossible, but I think it is more interesting to develop the following scenario. Even though transferring consciousness is, by assumption, impossible, people go through with it anyway. The result seems positive, as the machines to which consciousness was supposedly transferred are sophisticated enough to replicate the personalities and tendencies of the subjects who attempted the transfer. So others see this and decide to go through with it too. Eventually everyone does it (assuming the technology is widely available and inexpensive by that point) and human consciousness no longer exists. These machines that claim to be human — but are not, because they lack consciousness, the essential quality of being human — would continue our civilization’s progress and expansion. But we humans would know nothing of it, because we would have gone extinct, in a sense.

This brings up a very important philosophical question: what does it mean to have consciousness? And by that I mean, what is the point? In the grand scheme of the universe, it means absolutely nothing. If the above scenario were to happen, it would mean nothing to our civilization either. But what if consciousness is something else? Even though we would lose our conscious states — and by that I mean the state of awareness that we exist — our ideas would live on with these machines. They would carry out essential human values like exploration and learning. Maybe these machines would even be able to prevent something like the Big Crunch. They might take on a role as the protectors of life. The point is that by exploring these problems, we realize that our own consciousness is not nearly as important as we think it is. Instead, we need to realize that our consciousness is inherently tied to the universe, from which we are inseparable.