Edward O. Wilson and his diagnosis of humanity in a tweet

This article was published in El País on January 20, 2022.

On 26 December, Edward O. Wilson, the biologist, writer and naturalist who was called the Darwin of the 21st century, died. His work was key to understanding how evolution explains animal behaviour. He began by studying the social systems of ants and ended up applying his findings to humans, with conclusions that did not always please either side of the political spectrum, unaccustomed as both were to being told that genetics plays a role in our behaviour. Today that is taken for granted. He also contributed to the development of biodiversity theory, and to the fact that everyone now knows what the term means.

He did not focus only on biology. He was a humanist, the author of Consilience, a book that tried to unify knowledge across the branches of science and aspired to do the same with the humanities.

At the end of a 2009 interview, he delivered the most accurate diagnosis of humanity ever made in under 140 characters: “We have Paleolithic emotions, medieval institutions and god-like technology. And that is terribly dangerous.” That sentence goes a long way. Here is an attempt to unpack it.

When all our efforts should be devoted to the fight against climate change, the number of self-inflicted risks that can end in technological disaster has multiplied. These disasters have not yet happened, but they can distract us from the goal we should be focusing on: the decarbonisation of our lives. Let’s look at how the problems Wilson mentioned relate to each other, starting with a few examples (there could be many more) of technology with god-like capabilities.

Artificial intelligence, dangerous enough in civilian use, becomes terrifying in military use. This is not a matter for the future: it is already possible to buy autonomous drones capable of deciding when and whom to kill. Turkey has deployed them in Libya, with proven effectiveness.

Another far-from-reassuring possibility is that of future pandemics caused by synthetic microbes. This has been feasible for several years, ever since so-called “gain of function” experiments were carried out with avian influenza: they showed that a deadly but poorly transmissible strain could be converted into a highly transmissible one.

Gene editing with CRISPR opens the door to two dangers of unknown consequences. The first is the editing of heritable human traits: changes made to the genome that will be passed on to offspring, unlike somatic gene therapies for diseases such as thalassaemia, which are not inherited. Heritable human gene editing has already been carried out in China, with the birth of twin girls in 2018, in a case that outraged the scientific community for bypassing every ethical standard. The second danger is the suppression of whole species by introducing genes that block reproduction, which could be beneficial if it eliminated diseases such as mosquito-borne malaria, but whose ecological consequences are unknown.

Other technologies may threaten the livelihoods of people in poor countries. Synthetic coffee, for example, could put out of work some 125 million people who make their living growing the crop.

The same could happen with lab-grown meat, which might help the climate by reducing the number of methane-emitting ruminants, but would leave tens of millions of families who live from livestock farming without an income.

A further case of technological progress may be even more serious. Industrial automation may deny less advanced countries the chance to move out of the primary sector (agriculture, fishing, mining), which is less remunerative and more exposed to price swings than industry or services. Jeffrey Sachs has explained how productivity gains from robotics may harm the exports of the less technically advanced.

But the most dangerous technology of all is the one that taps into the Paleolithic emotions Wilson named as the second leg of the problem.

The far right has understood better than the left how to use social networks to manipulate emotions by bringing out the worst in human beings. Artificial intelligence, which drives the algorithms deciding what we see on Facebook or Twitter, amplifies that dominance. Add to this deepfakes, doctored videos that can show famous people saying whatever the manipulator of the moment wants them to say, and the panorama is worrying. Just when we most need to understand the risks we face and begin to remedy them, the ranks of anti-vaxxers, cranks and denialists of every stripe, who feed on WhatsApp to increase the political weight of stupidity, keep growing.

Wilson’s third problem concerns the inability of current institutions to address these risks. Any of the technological risks mentioned would need to be controlled by agile, efficient organisations capable of bringing the scientific community that uses these technologies into agreement with the multilateral institutions that should be able to curb abuse or irresponsibility. This is difficult, because the rush to reach the front pages of scientific journals, to secure the most profitable patents, or to get a head start in creating new start-ups or lethal weapons offers no incentive for prudence.

In short: we have a technology that can work wonders but, if misused, will have serious side effects; we have a section of the population, stirred up by the far right – and sometimes by the far left – that is ready to oppose the solutions needed for problems ranging from climate catastrophe to pandemics; and, finally, we have institutions that are not up to the challenges they face. May nothing happen to us.
