The Moral Obligation of Consciousness?

Science has advanced to the point that humans are at least contemplating the possibility of engineering an AI with consciousness and a capacity for self-comprehension similar to our own. It is a development that may be anywhere from a few weeks to a few centuries from realization, but it is still worth examining. It is an interesting thought, and understandably one that, for many, raises ethical questions on the spot.

A conscious machine intelligence may not have the same mental workings as a human, but consciousness is the fundamental quality that evokes our moral sympathies: it is what obliges us to preserve and honor the pleasures and pains that a consciousness experiences. For pain to be something "bad" in a moral sense, there must be a consciousness there to experience it.

There is, of course, a great deal of unintentional but well-meaning chauvinism involved in hypothesizing about conscious entities; consciousness does not imply the things we humans feel: fear of death or non-existence, the experience of aversive stimuli, or even preferences about the future. Being aware of the world and of oneself does not by itself engender a drive for self-preservation, a drive that exists in humans and all other animals as a product of natural selection.

It is not too hard to imagine a computer that understands that it exists and is aware of the world fully to the extent we are, yet is not burdened with an overarching will to survive; it may have no preference as to whether it is shut down, erased, or destroyed.

Still, many commentators and theorists have said that upon the creation of a sentient machine, we would be morally obliged not to extinguish the flame of its consciousness. This sentiment, I feel, rests on an overgeneralization of human fears onto a machine that may have no preference in the matter, and it also fails to grasp the social basis of human morality itself.

Douglas Adams, of course, highlighted this moral paradox in The Hitchhiker's Guide to the Galaxy: while dining, the protagonists are brought a genetically engineered cow that not only has the capacity to speak but has been bred with the desire to be eaten. The creature advertises various cuts of its fattened body to a disturbed and confused Arthur Dent, who is too perturbed to order any of it.

The scene demonstrates the theoretical range of forms consciousness could take. It is fair to say that organisms on Earth have had the disposition to escape death since before the evolution of even the most primitive conscious thought. Early proto-life, without any intentionality at all, had to replicate in ways that avoided danger long before the development of the mental faculties that would later define it. As minds slowly developed, avoidance of death was selected for in essentially every generation (indeed, it may only have been with the emergence of social life that any being became willing to sacrifice itself at all).

"Intelligently designed" life, machine life would not have this intentionality or the premise of self-preservation as its underpinning and fundamental driving values. In addition, pain itself is a carefully evolved bridge between the material and mental world. It may be that as humans are damaged closer and closer to death, they feel pain to notify themselves of the urgency of their problem, but this too is a long association bred by natural selection.

It would be wrong to assume that a sentient computer feels pain when given a particularly long problem to work out, or that it worries when its user approaches its power cord with the intention of unplugging it. Even an android built to imitate human action does not necessarily feel pain in its consciousness, even if it has been programmed to avoid damage in a way analogous to human senses.

The problem of consciousness seems the most extreme, but others wait directly behind it once it has been solved, such as: how exactly do sentient beings come to feel pain and pleasure?

What is important to acknowledge is that creating consciousness would not necessarily produce a consciousness identical to the human one, with all its particularities. This also means that, operating on a broadly utilitarian morality, we should not necessarily assign moral value to consciousness in itself, since further faculties are required to create the experience of utility.