The following is taken from Lynn M. LoPucki, *Algorithmic Entities* (papers.ssrn.com).
# The Threat from Algorithm Plus Entity
To the contrary, the risk to humanity from AEs is greater than the risk from algorithms with human collaborators for at least three reasons. Entities without human collaborators could be more ruthless, more difficult to deter, and easier to replicate.
# Ruthlessness
Unless explicitly or implicitly programmed to have them, AEs will lack sympathy and empathy. Even if the AEs are fully capable of understanding the effects of their actions on humans, they may be indifferent to those effects. As a result, AEs will have a wider range of options available to them than would be available to even the most morally lax human controller.
An AE could pursue its goals with utter ruthlessness. Virtually any human controller would stop somewhere short of that, making the AE more dangerous.
# Lack of Deterrability
Outsiders can more easily deter a human-controlled entity than an AE. For example, if a human-controlled entity attempts to pursue an illegal course of action, the government can threaten to incarcerate the human controller. If the course of action is merely abhorrent, colleagues, friends, and relatives could apply social pressures. AEs lack those vulnerabilities because no human associated with them has control.
As a result, AEs have greater freedom to pursue unpopular goals using unpopular methods. In deciding to attempt a coup, bomb a restaurant, or assemble an armed group to attack a shopping center, a human-controlled entity puts the lives of its human controllers at risk. The same decisions on behalf of an AE risk nothing but the resources the AE spends in planning and execution. If an AE cares at all about self-preservation, it will be only as a means of achieving some other goal for which it has been programmed.
Deterrence of an AE from its goals, as distinguished from particular means of achieving them, is impossible.
# Replication
AEs can replicate themselves quickly and easily. If an AE’s operations are entirely online, replication may be as easy as forming a new entity and electronically copying an algorithm. An entity can be formed in some jurisdictions in as little as an hour and for as little as seventy dollars.
While entities are not, strictly speaking, copies of other entities, they can be identical to other entities, which has the same effect.
Easy replication supports several possible strategies. First, replication in a destination jurisdiction followed by dissolution of the entity in the original jurisdiction may put the AE beyond the legal reach of the original jurisdiction.
For a human-controlled entity to escape the reach of the original jurisdiction, the human would have to move physically to the destination jurisdiction.
Second, replication can make an AE harder to destroy. For example, if copies of an AE exist in three jurisdictions, each is a person with its own rights. A court order revoking the charter of one or seizing the assets of another would have no effect on the third.
It could continue to exist and replicate further. The strategy does not work as well for a human-controlled entity. To replicate a human-controlled entity, one must either recruit additional humans to control the copies or put the same human in control of the copies. The former is time consuming because it requires a personnel search. It is complex because each human must be appropriately motivated.
It is risky because every person is different and difficult to assess. The latter leaves the same person in control of all the entities, providing the basis for a court to disregard their separate existences. In short, algorithms can be almost instantly cloned; humans cannot.
Third, replication can operate as a method of hedging. Consider, for example, the hypothetical situation in which ten jurisdictions are considering a ban on AEs and the ban has a ninety percent chance of adoption in each. An AE that replicated itself in each of the ten jurisdictions would expect, on average, one copy to survive.
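The arithmetic behind the hypothetical can be checked with a short sketch. The numbers here are the hypothetical's own assumptions (ten jurisdictions, a ninety percent chance of a ban in each, bans adopted independently), not empirical data:

```python
# Hedging-by-replication arithmetic from the ten-jurisdiction hypothetical.
# Assumptions: 10 jurisdictions, each independently adopting a ban with
# probability 0.9, so each copy of the AE survives with probability 0.1.
n_jurisdictions = 10
p_ban = 0.9
p_survive = 1 - p_ban

# Expected number of surviving copies (linearity of expectation).
expected_survivors = n_jurisdictions * p_survive  # 10 * 0.1 = 1.0

# Probability that at least one copy survives somewhere.
p_at_least_one = 1 - p_ban ** n_jurisdictions  # 1 - 0.9**10

print(expected_survivors)        # 1.0
print(round(p_at_least_one, 3))  # 0.651
```

Note that "expect to survive in one" is an expectation, not a guarantee: under these assumptions the AE has roughly a 65% chance that at least one copy survives, and about a 35% chance that all ten are eliminated.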
Fourth, because each replication knows what the others will do, replications may be able to cooperate for mutual benefit without the necessity for agreement or collusion. Ants and bees are biological examples of organisms in which replications cooperate.
This Part argues that current law provides no effective mechanisms for preventing the formation of algorithmic entities or controlling them once they exist. First, initiators could put algorithms in control of most types of artificial entities without violating any law. As the entity system currently operates, initiators—and AEs once they are formed—can choose among thousands of entity types made available by hundreds of states and countries. Second, if threatened by proposed changes in their governing legal regimes, algorithms could change legal regimes by migrating across borders or changing entity types. They could do so without changing the locations of their physical operations. Third, in most jurisdictions, the law does not require that entities reveal their beneficial owners or controllers, making it difficult, if not impossible, for enforcement agencies to identify entities whose controllers are not human. Each of these three points is addressed in a separate section.