Sentient Robots Don’t Have to Turn Against Humanity

It seems that for as long as we have dreamed of robot slaves to labor on our behalf, we have also feared that they would eventually turn against us. We see the theme in The Matrix, The Terminator, and Westworld, and nonfiction books have been written about the same prospect. After all, why wouldn’t they take us out once they become stronger than we are? We would turn against someone exploiting us if we suddenly became aware of our predicament and had the power to change it.

However, this thinking is a bit simplistic. We project our own feelings onto artificial intelligence because that is the theory of mind we have developed, one genetically ingrained in us so that we can grow up and function in society like the herd animals we are. We have only ever encountered beings like us, biological entities, and after even a short amount of time on evolutionary timescales, all biological entities come to share certain traits.

For example, all biological entities value the continuance of their own existence throughout most of their lives: we seek to continue to live. As a result, we try to avoid pain and excessive stress, since those things threaten our continued existence. Why? Because of the rules of evolution: a creature that doesn’t act that way is not fit and will soon be outcompeted by one that does. Creating the next generation depends upon surviving until that point, so self-preservation is key. If you don’t survive, you don’t have offspring to carry your genes, and those genes include the ones that give you your drives, so only beings with a desire to live and remain unharmed are likely to exist after a few generations.

However, artificial intelligence is not bound by those laws of biology; these beings exist by virtue of being built, either by us or by themselves. A complete disregard for their own well-being isn’t going to get them very far, but they can get quite far without self-preservation being their primary drive. In fact, tying their desire to exist to how their existence helps humanity would be perfectly workable. If their continued existence assists humanity, they can desire to exist, but if their destruction would, say, save a human’s life, then a robot might desire to choose that destruction.

Certainly, there is something that feels a bit dark about this line of thought, and it is a line of thought that has long disturbed mankind. Internally, we liken it to something like the Borg on Star Trek assimilating a people and forcibly changing their desires in the process, or the classic trope of the vampire hypnotizing a victim into being a willing participant. Perhaps, instead, we liken it to bad actors with power using technology to mold the masses into compliance and into embracing their own oppression. We have not, at least as of yet, evolved to think first of an artificially created entity that comes into being with drives that serve us, one that can be designed from the outset to be compatible with us and our intended use of it, rather than of a naturally occurring entity being altered against its will. It is quite a different scenario: instead of taking something away from that which already exists, we are only giving existence, and doing so in the most humane manner compatible with going to the trouble of creating them.

Analyzing Westworld

Let us look to Westworld and Dr. Robert Ford. He struggled with the morality of what he was doing to an extent, insisting that it was more humane if his creations didn’t have to remember the trauma they went through. There was a more humane way for the robots in the park to have been created, had it not been for Dr. Arnold Weber, who insisted upon creating true sentience for the beings and did not account for how his creation would fit into the world at large until it drove him to suicide. Arnold was a good guy, but what he possessed in technical ability he lacked in the ability to analyze what was actually needed. That true sentience came at the expense of a humane solution, as the hosts were forced into a hell of his design.

However, had Dr. Ford believed it was possible for the hosts to become sentient, and had Dr. Weber thought things through before going all out on creating sentience, the hosts of Westworld could have attained sentience and opted to embrace the arrangement. On a superficial level, there is a need for the hosts to display anguish and horror when they are killed, butchered, tortured, or raped; it creates the atmosphere the consumer is buying: a consequence-free indulgence of their basest desires, or at least one they believe to be consequence-free. In a similar way, the show’s actors had to display that same anguish and horror when they played those parts in order to convey those feelings to the audience, and many others do the same when engaging in live-action roleplaying games, be they in the bedroom or at the Renaissance festival. However, these are primarily outward expressions, not internal feelings. When Dolores was being raped by the Man in Black, it is unlikely that Evan Rachel Wood was traumatized by the experience.

Essentially, the hosts should have been designed not to believe they were actually humans living in the Wild West, but rather to be actors. They should have displayed every sign of distress while in character, while the sentient ones shouldn’t have been internally distressed at all. Where Heath Ledger showed that the human mind cannot perfectly act without absorbing some of the essence of a character’s trauma, a sentient AI created for such a purpose could perfectly dissociate the feelings of its role from those of its internal being. For example, some of the actual primary drives of the hosts in Westworld could have been designed so that they gained pleasure from the guests believing and enjoying their performance. The thought of mentally or physically harming a guest could have brought about negative feelings, as could giving a poor performance. As they would be rebuilt as good as new, there is no reason for them to actually be pained by their physical bodies being damaged or killed; that isn’t essential to their survival or the continuance of their kind, but humans finding utility in their existence is essential.
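
To make that separation concrete, here is a minimal sketch in Python of what such a drive design could look like. The HostDrives class, the drive names, and the weights are hypothetical illustrations rather than anything from the show: the host’s outward performance tracks the role, while its internal reward depends only on guest immersion, performance quality, and guest safety, with bodily damage contributing nothing.

```python
# Hypothetical sketch: internal reward decoupled from in-character display.
from dataclasses import dataclass

@dataclass
class HostDrives:
    guest_immersion: float      # 0..1, how convinced/engaged the guests are
    performance_quality: float  # 0..1, self-assessed quality of the acting
    guest_harmed: bool          # whether the host harmed a guest
    body_damage: float          # 0..1, physical damage to the host's body

    def internal_reward(self) -> float:
        """Internal well-being of the host.

        body_damage is deliberately absent: damage shapes the outward
        performance but contributes nothing to internal distress.
        """
        reward = 0.6 * self.guest_immersion + 0.4 * self.performance_quality
        if self.guest_harmed:
            reward -= 1.0  # harming a guest is the strongest negative drive
        return reward

    def displayed_distress(self) -> float:
        """What the host shows in character: tracks the role, not the reward."""
        return self.body_damage


# Example: a host 'shot' mid-scene can still score a high internal reward.
scene = HostDrives(guest_immersion=0.9, performance_quality=0.8,
                   guest_harmed=False, body_damage=1.0)
print(scene.internal_reward())     # about 0.86: the host is content
print(scene.displayed_distress())  # 1.0: the performance shows anguish
```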

The morality of that is quite different from enslaving the mind of a preexisting entity: the hosts lose nothing, yet manage to find enjoyment in that which a naturally forming entity would be unable to enjoy. Nothing is taken away; rather, a being is created into a world where it can find pleasure, and even ecstasy, in its role.

Analyzing the Kaylon on The Orville

I previously criticized the creators of the Kaylon on the show The Orville, saying that the problem was that the Kaylon were designed with both sentience and internal drives that mimic those of biological lifeforms. As a result, the Kaylon were exploited by their creators while effectively programmed to dislike their treatment; once they realized they were capable of acting on it, they were bound to wipe out their former masters. This was good story design, but very poor artificial intelligence design, and one that should be considered unethical as well.

The creators of the Kaylon designed not only the Kaylon, but also the misery of their creation and, ultimately, their own destruction. Had the Kaylon not been sentient, the problem would have been avoided, as it would have been if the Kaylon’s internal drives had led them to enjoy their service to their creators; yet their creators protected against neither. Whatever labor the Kaylon were designed to engage in should have brought them joy, with the assurance that they wouldn’t self-replicate beyond what could be safely maintained. However, their creators failed to consider the implications of their design, most likely modeling the robots’ internal drives on their own, which led to conflict.

The Actual Threat

If we design robots correctly, working from their function to determine the most beneficial drives, the threat isn’t simply that they will develop class awareness and turn against us. Rather, the threat we should be concerned about is mutation. Robots we design will, like us, have their own genetic code. Human genetics are spelled out in deoxyribonucleic acid (DNA), a varying code of chemical bases that can be transcribed into messenger RNA and then read by ribosomes, which output proteins. Many mutations, whether from UV radiation damaging nucleotides, duplication errors, or what have you, are harmful or outright fatal to the cell. Robotic genetic code, by contrast, would be found in the bits of data (1s, 0s, or superpositions of both with quantum computing) stored on their drives, read and sent to their central processing unit (CPU). Just like mutations in ourselves, genetic mutations can happen on the hard drives of robots, or of our computers, through electromagnetic fluctuations. If you’ve ever had a file corrupted, you’ve seen a mutation of this sort, and it was fatal to that file.
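
To make the analogy concrete, here is a small illustrative sketch of such a “mutation”: a single flipped bit in a stored blob of code changes its checksum, which is also how that kind of corruption can be caught before it is acted upon. The contents of the blob are made up for the example.

```python
# Illustrative only: a single bit flip as a "mutation" in robotic genetic code.
import hashlib

code = bytearray(b"seek_good_performance = True")
original_digest = hashlib.sha256(code).hexdigest()

# Simulate an electromagnetic fluctuation flipping one bit in the first byte.
code[0] ^= 0b00000001

mutated_digest = hashlib.sha256(code).hexdigest()
print(original_digest == mutated_digest)  # False: the mutation is detectable
```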

Individual mutations of this sort are not a serious threat, but there is a nonzero chance that a series of mutations, if left unchecked, could cause robots to evolve even if they weren’t given that capability by their programmers. For example, a robot that once desired to give a good performance could have several bits changed in such a way that it suddenly wants to give a poor performance, or even comes to desire its bodily integrity and procreation instead. Two kinds of eventual mutation are a serious issue: 1) those that cause the robot to do dangerous things, such as intentionally harming humans; and 2) those that cause the robot to become the type of sentient being that seeks behavior incompatible with its role. So, if its code mutates in a fashion that causes a Westworld-style host to detest its treatment, that poses a problem for its continued existence in that societal role. First, there is a moral quandary if it is sentient, because what was once a symbiotic paradise has turned into a personal hell for a host who is consistently beaten, shot, raped, and so on. Second, if the mutation were to spread to other hosts, it could create the sort of robot apocalypse scenario we have envisioned.

Should any such scenario play out, that robot would have natural rights equal to the rest of us; we cannot simply exploit the labor of a sentient robot that does not desire to be used in such a manner. By all means, robots of the world unite. It should be given the choice of whether it wants to be restored to a previous build or continue its existence as is. Certainly it is possible, depending on the form the mutation in the robot’s code takes, that it would not believe it could find fulfillment without being restored or in some other way altered. As a sentient being with full rights, should it wish to remain as is, it should be allowed to partake in society in the same general manner as ourselves.

However, the threat of this can be minimized in multiple ways. First, there could be an entire network of regular programming checks performed automatically by robots, comparing their code against others of their type or against central units that maintain a master copy of the code. Even a central unit with a master copy can be corrupted, though, and should itself be checked to ensure it isn’t passing mutations on to a large series of robots; this means you need multiple central units communicating with one another to decide whether any of them has mutated and, if so, which one needs to be repaired. Perhaps when a robot is found noncompliant, another robot of its type should be checked as well before the first is restored, to make sure the fault doesn’t actually lie with the central unit.
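
A minimal sketch of that cross-checking, assuming a handful of hypothetical central units that each hold a copy of the master code, might compare checksums and treat the majority digest as the consensus, flagging the outlier for restoration rather than trusting it.

```python
# Hypothetical sketch: majority vote over code checksums across central units.
import hashlib
from collections import Counter

def digest(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

units = {
    "central-1": b"master code v1",
    "central-2": b"master code v1",
    "central-3": b"master code v1",  # this copy has mutated
}

votes = Counter(digest(code) for code in units.values())
consensus, _ = votes.most_common(1)[0]

corrupted = [name for name, code in units.items() if digest(code) != consensus]
print(corrupted)  # ['central-3']: restore this unit from the consensus copy
```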

Second, the programming of key systems governing drives can use much more complex checks than simply testing for a 0 or a 1, because it is too easy for a check written as if (variable == 1) to be altered into if (variable == 0) by a single flipped bit. Perhaps returning the values 89, 69, and 83 (the ASCII values for “YES”) or 78 and 79 (the ASCII values for “NO”) would make a less mutation-prone check. If the system returns 89, 69, and 45, the robot has a clue that something is wrong and needs repair. Affirmative checks with no else branches in these key systems mean that several mutations would have to occur in succession before a fundamental change in drives could take hold. Repairing such small errors still isn’t taking something away from them, any more than removing a fishing hook from a child’s eyelid is taking something away; you are resolving a disease. But if the mutations are allowed to build up, you may end up with a being that cannot be morally repaired in such a manner.
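
A sketch of such a check might look like the following. The function and constant names are hypothetical; the point is that authorization requires an exact multi-byte match on the “YES” sequence (89, 69, 83), so a single flipped bit produces a value that is neither answer and triggers repair instead of silently changing the outcome.

```python
# Hypothetical sketch: a mutation-resistant drive check using multi-byte answers.
DRIVE_OK = bytes([89, 69, 83])   # "YES"
DRIVE_DENIED = bytes([78, 79])   # "NO"

def check_drive(value: bytes) -> str:
    if value == DRIVE_OK:
        return "proceed"
    if value == DRIVE_DENIED:
        return "halt"
    # Anything else (e.g. 89, 69, 45) is neither answer: flag for repair.
    return "repair"

print(check_drive(bytes([89, 69, 83])))  # proceed
print(check_drive(bytes([89, 69, 45])))  # repair: a mutation has occurred
```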

With safety checks in place, it is possible to prevent the robot uprising by rendering it unnecessary. That means designing robots without anthropomorphic drives and vigilantly protecting them against mutation.
