A Legal Framework for Artificially Intelligent Robots

Science, Technology, and the Law

Introduction

This article addresses emerging issues concerning the advancement of Artificial Intelligence (AI), the expansion of the robot market, and the relationship between robots and the law. It will consider different positions taken by scientists and academics, as well as the European Parliament, regarding AI robots’ legal status, and whether they should be given human-like rights in the near future. Specific subjects of analysis are anthropomorphism, sex robots and their impact on our society, and dangerously intelligent future AI robots. Finally, this article advocates against granting human rights to AI robots, but in favor of granting them sexual rights similar to those held by humans. While AI robots may or may not ultimately be considered capable of consciousness in any traditional sense, it is essential to protect the ideal of, and respect for, social human values.

The evolution of AI seems poised to unleash a new industrial revolution, and is likely to touch upon every sector of modern society. Different types of robots have been on the market for longer than many realize, and will continue to grow in importance. We may first think of automated vacuum cleaners, lawn mowers, and certain types of drones, but the range of functions these technologies cover is far broader. Consider the first self-driving shuttle bus, placed on the streets of Las Vegas in 2017. D. Lee, Self-driving shuttle bus in crash on first day, BBC News (Nov. 8, 2017), http://www.bbc.com/news/technology-41923814. Or the fact that Japanese industrial firms are racing to build humanoid robots to act as domestic helpers for the elderly. Trust me, I’m a robot, The Economist (June 8, 2006), http://www.economist.com/node/7001829. South Korea has also set a goal that 100 percent of households should have domestic robots by 2020. Id. Many people have yet to accept that the growth of AI may carry negative consequences. S. Torrance, A Robust View of Machine Ethics, Institute for Social and Health Research, Middlesex University (2005), http://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-06/FS05-06-014.pdf; see also C. De Quincey, Switched-on consciousness: clarifying what it means (2006); S. Gutiu, Sex robots and roboticization of consent (2012); P. Hubbard, Do Androids Dream?: Personhood and Intelligent Artifacts, 83 Temp. L. Rev. 405 (2011); D. Levy, The Ethical Treatment of Artificially Conscious Robots (2009); S. Torrance, Ethics and Consciousness in Artificial Agents (2007).

AI promises to bring benefits of efficiency and savings in production, commerce, medical care, transportation, and farming. On one hand, humans will be protected and may no longer have to engage in dangerous manual jobs. On the other hand, this raises questions about the future of employment and the potential for increased inequality between economic classes. Although these are compelling and relevant issues in our society, this article will not focus on them. Rather, it will concentrate on concerns linked to the possible failure or hacking of connected AI and the risks created by such possibilities. Specific examples include drones, self-driving cars, and AI robots that assume human semblances and have the ability to interact with humans. Many scientists and academics are concerned that AI robots could eventually turn against their manufacturers and other individuals, either of their own volition or because they are so ordered by more powerful individuals. The use of drones in international conflicts, for example, is already customary practice.

The implications of this technological development must be addressed in legal and ethical contexts, and legislatures must consider them in full. The European Union, for instance, has recently addressed these emerging concerns, suggesting that a new legal status should be determined and assigned to the most sophisticated AI robots.

The European Union’s Approach to AI Regulation

The European Parliament has advanced guidelines that may be effective in regulating AI in terms of its construction, design, and behavior. Technology entrepreneur Elon Musk recently urged the United States to regulate AI “before it’s too late.” O. Etzioni, How to regulate Artificial Intelligence, The New York Times (Sept. 1, 2017), https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html.

This somewhat dramatic call for regulation stems from Musk’s concern about AI’s impact on weapons, jobs, and privacy. Id. Designing an effective and dynamic policy strategy should be a government imperative considering the speed with which AI is evolving. M. Gummi, Artificial Intelligence: The Way Forward for Policy and Regulation, Berkeley Pub. Policy Journal (Apr. 12, 2017), https://berkeleypublicpolicyjournal.org/2017/04/12/artificial-intelligence-the-way-forward-for-policy-and-regulation/. Although it is not necessary that the United States government follow European suggestions, it is time for the legislative and executive branches to address these issues before advanced AI is placed on the market. However, this approach would be ineffective if adopted only by a small number of countries. Say, for example, that the United States sets a strict regulatory framework for the production and design of AI robots, so that their capabilities are limited to the standard set by the government. At the same time, however, another country refuses to pass similar legislation and guidelines, and the AI it produces achieves an irreversible advancement that harms the position of U.S. industries or forces the U.S. to adapt to a new dynamic or threat. A situation like this would render all other countries’ efforts to keep AI under control futile. For this reason, international harmonization and cooperation in setting regulatory standards under the auspices of the United Nations is necessary. Id.

A robot is a “constructed system that displays both physical and mental agency, but is not alive in a biological sense.” N. Richards, How should law think about robots?, Wash. Univ. in Saint Louis Sch. of Law (2013), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2263363. Although there is no universal definition of an AI robot, it is thought that AI robots may be able to make decisions by themselves and learn from their experiences. Some authors argue AI robots are expected to be able to communicate, have internal and external knowledge, and possess goal-driven behavior, decision-making capabilities, and creativity. G. Hallevy, I, Robot – I, Criminal – when Science Fiction becomes Reality: Legal Liability of AI Robots committing Criminal Offenses, 22 Syracuse Sci. & Tech. L. Rep. 1 (2010), http://jost.syr.edu/i-robot-i-criminal-when-science-fiction-becomes-reality-legal-liability-of-ai-robots-committing-criminal-offenses/. Others expect AI robots to be independent and autonomous, to have emotions and responsibilities, and to have the ability to adapt and integrate information. Id.; see also P. Hubbard, Regulation of and liability for risks of physical injury from sophisticated robots, We Robot Conference, University of Miami School of Law (2014); P. Kahn, Do people hold a humanoid robot mentally accountable for the harm it causes? (2012), https://depts.washington.edu/hints/publications/Robovie_Moral_Accountability_Study_HRI_2012_corrected.pdf. The common key feature among these schools of thought is that AI robots will be able to adapt to new circumstances on their own, without the need for reprogramming. The European Union expressly considers that these robots may have (i) the capacity to learn through experience and interaction, (ii) the capacity to acquire autonomy through sensors and by exchanging data with their environment, and (iii) the capacity to adapt their behaviors and actions to their environment. M. Delvaux, Committee on Legal Affairs, Motion for a European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics, (2015/2103(INL)) (May 31, 2016), http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN.
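The resolution lists these capacities as identifying criteria but prescribes no test for applying them. Purely as an illustration, the short Python sketch below treats the three capacities as a simple checklist; the RobotProfile structure, its field names, and the all-three-criteria rule are assumptions made here for clarity, not anything drawn from the Parliament’s text.

```python
# Hypothetical sketch only: the resolution lists criteria but prescribes no test.
# This assumes a simple checklist that flags a system as a "smart autonomous robot"
# when it satisfies all three capacities named by the Committee.

from dataclasses import dataclass


@dataclass
class RobotProfile:
    learns_from_experience: bool          # (i) learns through experience and interaction
    acquires_autonomy_via_sensors: bool   # (ii) autonomy via sensors / data exchange with environment
    adapts_behavior_to_environment: bool  # (iii) adapts behavior and actions to environment


def is_smart_autonomous_robot(profile: RobotProfile) -> bool:
    """Return True if the profile meets all three of the Committee's stated capacities."""
    return (
        profile.learns_from_experience
        and profile.acquires_autonomy_via_sensors
        and profile.adapts_behavior_to_environment
    )


# Example: a domestic vacuum that maps rooms with sensors but never updates its
# behavior from experience would not qualify under this (assumed) reading.
vacuum = RobotProfile(False, True, True)
print(is_smart_autonomous_robot(vacuum))  # False
```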

The first dilemma that arises in connection with the increased use of robots in everyday activities is who should be legally responsible for robots’ actions. The intrinsic difficulty in holding robots accountable for their actions makes the ordinary rules on liability insufficient. Rather, it calls for new rules focusing on whether a machine can be held liable for its wrongful actions or omissions, and if so, how this may be accomplished. Currently, there is no existing legal framework that determines whether AI robots should be given legal status, or whether the humans who program or control AI robots should be held accountable for the robots’ actions.

The European Parliament’s Committee on Legal Affairs recently issued a report calling for the adoption of different courses of action. Id. The Committee’s report urges the drafting of a set of regulations to govern the use and creation of robots and AI. This includes a form of electronic personhood to ensure rights and responsibilities for the most capable AI. A. Hern, Give Robots Personhood Status, EU Committee Argues, The Guardian (Jan. 12, 2017), https://www.theguardian.com/technology/2017/jan/12/give-robots-personhood-status-eu-committee-argues. This specific legal status would allow at least the most sophisticated autonomous AI robots to be treated as electronic persons with specific rights and obligations, among them the obligation to make good any damage they may cause. Legal personhood would result in the application of electronic personality to AI robots that make smart autonomous decisions or otherwise interact with third parties independently. The Committee also considered that the complexity of allocating responsibility for damages caused by autonomous AI robots calls for the establishment of an insurance scheme, similar to that used today with cars: producers would be obligated to insure each autonomous robot they produce. Having identified the responsible parties, liability would be proportionate to the actual level of instructions given to the robot and to its autonomy, so that the greater the learning capabilities of the robot, the lower other parties’ responsibility should be, and the longer a robot’s ‘education’ has lasted, the greater the responsibility of its ‘teacher’ should be. M. Delvaux, Committee on Legal Affairs, Motion for a European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics, (2015/2103(INL)) (May 31, 2016). The Committee’s new legal framework would include (i) the creation of a European agency for robotics and AI to provide technical, ethical, and regulatory expertise; (ii) a legal definition of smart autonomous robots, coupled with an effective system of registration and criteria for the classification of AI robots; (iii) an advisory code of conduct for robotics engineers, research ethics committees, and users, aimed at guiding the ethical design, production, and use of robots; and (iv) combined efforts to guarantee a smoother transition for these technologies from research to commercialization on the market. Id.
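The Committee states only the direction of this proportionality (the greater a robot’s learning capability, the lower the responsibility of other parties; the longer its ‘education’, the greater the responsibility of its ‘teacher’); it prescribes no formula. The following Python sketch is a purely hypothetical illustration of one way such an apportionment rule could be expressed; the function, its inputs, the weights, and the two-year saturation point are all invented here for the example.

```python
# Hypothetical illustration only: the Committee's proposal does not prescribe a formula.
# This sketch assumes a simple linear apportionment among operator, instructor, "teacher",
# and the mandatory insurance pool, based on made-up inputs.

def apportion_liability(autonomy: float, instruction_share: float, training_months: float) -> dict:
    """Split responsibility for damage caused by an autonomous robot.

    autonomy: 0.0 (fully pre-programmed) to 1.0 (fully self-learning)
    instruction_share: fraction of the harmful behavior traceable to explicit orders
    training_months: how long the robot's 'education' by its teacher lasted
    """
    if not (0.0 <= autonomy <= 1.0 and 0.0 <= instruction_share <= 1.0):
        raise ValueError("autonomy and instruction_share must lie in [0, 1]")

    # The higher the autonomy, the lower the share borne by parties who merely deployed it...
    operator_share = (1.0 - autonomy) * (1.0 - instruction_share)
    # ...while whoever issued the instructions answers for the part they directed,
    instructor_share = instruction_share
    # and a longer 'education' shifts more of the remainder onto the teacher.
    teacher_weight = min(training_months / 24.0, 1.0)  # arbitrary two-year saturation
    remainder = max(1.0 - operator_share - instructor_share, 0.0)
    teacher_share = remainder * teacher_weight
    insurer_share = remainder - teacher_share  # covered by the mandatory insurance pool

    return {
        "operator": round(operator_share, 2),
        "instructor": round(instructor_share, 2),
        "teacher": round(teacher_share, 2),
        "insurance_pool": round(insurer_share, 2),
    }


# Example: a highly autonomous robot, few explicit instructions, long training period.
print(apportion_liability(autonomy=0.8, instruction_share=0.1, training_months=18))
```

On these made-up inputs, most of the responsibility shifts away from the mere operator and toward the long-time ‘teacher’ and the insurance pool, which is the direction the Committee describes, even if the precise numbers here are arbitrary.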

The dilemma as to whether a new legal status should be created for AI robots raises important questions. If AI robots are given legal personality, there is room to argue that they should also be given rights. Some argue that these rights should resemble human rights, while others reject this suggestion.

AI Robots and Human Rights

While the EU suggests giving legal personhood to AI robots, others support the idea of taking a step further and granting AI robots human rights. This section considers arguments in support of, and against, this idea, and argues that human rights should not be given to AI robots. It may not be the right time for granting human rights and citizenship to AI robots, because unresolved issues concerning human beings, such as race and gender, must be prioritized. According to W.J. Smith, “no machine should ever be considered a rights bearer . . . even the most sophisticated machine is just a machine. It is not a living being. It is not an organism.” G. Dvorsky, When will Robots Deserve Human Rights?, Gizmodo (June 2, 2017), https://gizmodo.com/when-will-robots-deserve-human-rights-1794599063. However, there are limited circumstances in which some kind of rights and protection should be afforded to robots, in order to preserve social human values.

Linda MacDonald-Glenn, a bioethicist at California State University Monterey Bay and a faculty member at the Alden March Bioethics Institute at Albany Medical Center, notes that the law already considers some non-humans to be rights-bearing individuals. Id. “This is a significant development because it functions as a precedent that could pave a path towards granting human-equivalent rights to AI in the future.” Id. The United States is making efforts to grant personhood rights to non-human animals, such as great apes, elephants, whales, and dolphins, to protect them against confinement and abuse. Id. These efforts are founded on the idea that personhood is based on the presence of certain cognitive abilities, such as self-awareness and the ability to have feelings, which are currently key elements distinguishing AI from humans. MacDonald-Glenn emphasized that emotions are an essential component of rational thinking and normal social behavior. The scientific evidence of the emotional capacity of animals is steadily increasing, and it is argued that eventually even AI may be imbued with similar emotional capacities, which would elevate its moral status to a level that would call for the granting of rights. Id.

Oxford Professor Marcus du Sautoy strongly supports the idea of giving human-like rights to AI robots. J. Javelosa, Should AI be Given Human Rights? This Oxford Professor Says “Yes,” Futurism (June 2, 2016), https://futurism.com/should-artificial-intelligence-be-protected-by-human-rights-this-oxford-mathematician-says-yes/. He states that as we move closer to a reality of robots equipped with advanced AI, “shouldn’t they be given moral and legal protection that has, until now, been granted freely to humans?” Id. This idea is grounded in the concept of protecting robots against human actions. Du Sautoy is confident that robots’ sophistication will reach a level akin to human consciousness, and thinks it will be our duty as human beings to look after the welfare of the machines, just as we do for other human beings. P. Dockrill, Artificial Intelligence Should be Protected by Human Rights, Says Oxford Mathematician, Science Alert (May 31, 2016), https://www.sciencealert.com/artificial-intelligence-should-be-protected-by-human-rights-says-oxford-mathematician. The question du Sautoy strives to answer is at what point in the development of AI these robots should be given human rights. Id. There is still uncertainty as to when, if at all, AI robots will actually gain consciousness, and even whether they should. Scientists disagree as to whether humans will be able to determine if artificially intelligent robots are actually conscious in the way humans are, or whether they merely simulate it. G. Dvorsky, When will Robots Deserve Human Rights?, Gizmodo.

In contrast, others ask whether human rights should be given at all, as they doubt that AI robots will actually develop consciousness. Id. The American lawyer and author Wesley J. Smith, Senior Fellow at the Discovery Institute’s Center on Human Exceptionalism, argues that “[w]e haven’t yet attained universal human rights, and it is grossly premature to start worrying about future robot rights.” Id.; see also J. Vincent, Pretending to give a robot citizenship helps no one, The Verge (Oct. 30, 2017), https://www.theverge.com/2017/10/30/16552006/robot-rights-citizenship-saudi-arabia-sophia. Accordingly, the granting of citizenship to a robot named Sophia in Saudi Arabia on October 25, 2017, has not amused Saudi women, who are still required to have a male guardian and to cover their heads in public. H. Kanso, Saudi Arabia gave “citizenship” to a robot named Sophia, and Saudi women aren’t amused, Global News (Nov. 4, 2017), https://globalnews.ca/news/3844031/saudi-arabia-robot-citizen-sophia/. Women who are married to foreigners in the gender-segregated nation cannot pass citizenship on to their children, while robots are beginning to attain it.

Many more argue that human rights are unnecessary, because laws that prohibit researchers and developers from misusing and abusing AI robots will protect them. Id. At the same time, AI robots will have to respect the laws set by the State and the social values of the society in which they act. The hypothetical new legal framework must address both aspects, determining the consequences of AI robots’ actions and of actions taken by individuals against AI robots.

While agreeing with W.J. Smith’s view that more compelling human rights issues relating to human beings should be prioritized, this article also supports the view that AI robots should be given some kind of legal status and protection. This idea is not founded on the notion of protecting the robots themselves, as argued by du Sautoy, but rather on protecting social human values and human beings. P. Dockrill, Artificial Intelligence Should be Protected by Human Rights, Says Oxford Mathematician, Science Alert. Consider a situation in which a robotic pet is abused in a household where a small child lives. Children must be educated with values and taught that violent and abusive behavior against animals and other “beings” is wrong. Therefore, because it may become increasingly difficult for children to distinguish between animals and robots, we should teach them to consider and treat both equally. K. Darling, Extending Legal Rights to Social Robots, We Robot Conference, University of Miami (Apr. 2012), http://robots.law.miami.edu/wp-content/uploads/2012/04/Darling_Extending-Legal-Rights-to-Social-Robots-v2.pdf. To avoid the child turning into an abusive adult through habitual witnessing of abuse of household or other AI robots, a newly adapted legal framework must protect these machines. In other words, a new legal framework under which AI robots are protected against abuse, and are also held accountable for their wrongdoings, is necessary to keep them under control and to keep the humans who interact with them under scrutiny.

With regard to the exploitation of robots, consider a 2004 article entitled “Rent-a-Doll Blows Hooker Market Wide Open,” published in the Mainichi Daily News. It explained how one leading purveyor, Doll No Mori (Forest of Dolls), started its 24/7 doll escort service in Tokyo. N. Sharkey, Our Sexual Future With Robots, a Foundation for Responsible Robotics consultation report (2017), https://responsible-robotics-myxf6pn3xr.netdna-ssl.com/wp-content/uploads/2017/11/FRR-Consultation-Report-Our-Sexual-Future-with-robots-1-1.pdf. The company’s spokesman explained that labor costs are cheaper using dolls rather than humans. Id. As of 2004, the company had four dolls operating for it, and it made back its original investment in the first month of operation. D. Levy, Robot Prostitutes as Alternatives to Human Sex Workers (May 20, 2015), http://www.roboethics.org/icra2007/contributions/LEVY%20Robot%20Prostitutes%20as%20Alternatives%20to%20Human%20Sex%20Workers.pdf.

Although in 2004 the workers were dolls, by 2020 they may be AI robots, capable of interacting and building relationships with clients. This new business model gives rise to a number of important moral, ethical, and legal questions. It may not be morally right to create robots for the purposes of sexual exploitation, but the legal ramifications are more important. While ordinary robots will always be considered machines, AI robots will create confusion, because it is unclear whether they will gain consciousness and feelings. Humans should not be allowed to design conscious robots and then abuse them, as seen above in the case of the pet robot. Nor should they be allowed to design and manufacture AI robots for the sole purpose of using them as sex workers.

Artificial Intelligence and Anthropomorphism: Sex robots

Loving, marrying, and having sexual intercourse with a machine is undoubtedly a source of moral concern. The very fact that the industry is developing anthropomorphic female-looking robots inspired by pornography prompts legitimate fears. Sexual rights conferred on robots should be similar to those held by humans. Complete freedom of action by people toward robots may numb an individual’s sensitivity and impair their social human values, given that sex robots look and feel like human beings. Sex robots are currently being designed in all human forms and shapes, including those of young women and children.

Anthropomorphism is the attribution of human form, motivation, behavior, and characteristics to non-human organisms or inanimate objects. According to MIT researcher Kate Darling, “[h]umans form attachments to robots that go well beyond our attachments to non-robotic objects.” K. Darling, Extending Legal Rights to Social Robots, We Robot Conference, University of Miami (Apr. 2012). Such reactions stem from the human inclination to anthropomorphize objects that act autonomously. Even in 2017, basic AI robots were able to elicit emotional responses in human beings similar to those we have toward animals and toward each other.

“Since humans are already disposed towards forming unidirectional emotional relationships with the robotic companions available to humans today, it can only be imagined what the technological developments of the next decade will be able to effect. As we move within the spectrum between treating social robots like toasters and treating them more like our cats, the question of legal differentiation becomes more immediate.”

Id.

Consider the case of Zheng Jiajia, a Chinese AI engineer who in 2017 married Yingying, a robot he constructed himself. B. Haas, Chinese man ‘marries’ robot he built himself, The Guardian (Apr. 4, 2017), https://www.theguardian.com/world/2017/apr/04/chinese-man-marries-robot-built-himself. Today Yingying can only read some Chinese characters and speak a few simple words, but Zheng plans to upgrade his bride to be able to walk and do household chores. Id.

The sex-technology market worldwide is worth a reported US $30 billion. Let’s talk about sex robots, Nature (July 13, 2017), https://www.nature.com/news/let-s-talk-about-sex-robots-1.22276. Roxxxy was the first sexbot introduced on the market, presented to the public for the first time in 2010. N. Sharkey, Our Sexual Future With Robots, a Foundation for Responsible Robotics consultation report. As of July 2017, four companies in the United States produce sex robots, and it is unknown how many individuals own one. Let’s talk about sex robots, Nature (July 13, 2017). Although at present most machines are more dolls than robots, the advancement of technology is influencing manufacturing and may result in these robots being given AI. They will be more active and more human-like, with the intent of allowing owners, or users, to develop emotional and physical bonds with their devices.

Companies creating and selling sex robots price their machines from roughly $5,000 to around $15,000. Some products, like Harmony by Abyss Creations, are sold with specific body configurations and characteristics, including weight, bra size, skin tone, eye color, and lip color. Others, like Roxxxy from TrueCompanion.com, are customizable, giving consumers the chance to choose among many options for complexion and the ability to speak up to four languages, including English, Spanish, German, and Japanese. S. Smith, Roxxxy sex doll is world’s first TrueCompanion, Revolvy (2010), https://www.revolvy.com/main/index.php?s=Roxxxy; see also P. Svensson, Roxxxy the sex robot makes her world debut, The Tech Herald, Agence France Presse (2010). New features of the machines include personality traits: Harmony can display simulated orgasms through facial expressions, shifting eyes, and the emulation of sounds she “hears.” Depending on consumer preferences, Roxxxy Gold can be pre-programmed with distinctive personalities, including “Wild Wendy,” an outgoing and audacious personality, and “Frigid Farrah,” which exudes bashfulness. E. Lieberman, Sex Robots are Here and Could Change Society Forever, The Daily Caller (July 17, 2017), http://dailycaller.com/2017/07/17/sex-robots-are-here-and-could-change-society-forever/. According to its developers, the former is always “up to talk and play,” while Farrah will refuse to be touched. Such refusal may be followed by a refusal to consent to intercourse, leading to a simulated rape. Further, foreign companies are manufacturing sex robots that look like children, creating a new outlet for pedophiles. D. Donovan, Child sex dolls, the newest outlet for pedophiles, must be banned, The Hill (Dec. 12, 2017), http://thehill.com/blogs/congress-blog/judicial/364438-child-sex-dolls-the-newest-outlet-for-pedophiles-must-be-banned.

These behavioral features are, for now, lines of code programmed into the machine rather than evidence of consciousness. However, if predictions about the development of AI are accurate, and AI robots will eventually have, or convincingly simulate, human-like consciousness, it is argued that they should be protected. Whether protection is warranted for their own sake or for the indirect protection of social human values is subject to debate.

A tangle of legal, moral, ethical, and social questions surrounds robophilia. George Washington University Law School professor John F. Banzhaf says there is little law surrounding this, and legislatures might have to regulate sex robots for safety and health reasons. S. Bykofsky, Sex Robots are real, and they’re all made in the US, The Inquirer, Daily News (July 13, 2017), http://www.philly.com/philly/columnists/stu_bykofsky/sex-robots-are-real-and-theyre-all-made-in-the-u-s-20170713.html. The idea is to give robots rights in order to maintain human social values and standards. The new legal framework must address the concern that humans who engage in sexual activities with robots, and in some cases simulated rapes, may no longer worry about the distinction between robots and human beings. B. Chatterjee, Child sex dolls and robots: exploring the legal challenges, The Conversation (Aug. 3, 2017), https://theconversation.com/child-sex-dolls-and-robots-exploring-the-legal-challenges-81912. In other words, the popularity of sex robots might have an impact on the rates of sex crimes. Paul Abramson, professor of psychology at UCLA, told the Daily Caller News Foundation that marriage generally does not deter rape, and he does not see how a robot could. E. Lieberman, Sex Robots are Here and Could Change Society Forever, The Daily Caller (July 17, 2017). However, although he does not think society should treat pedophilia as a curable disease, he does believe that sex robots created with the likeness of a child could help in certain circumstances, even though they would not automatically prevent perpetrators. Id. Similarly, Shin Takagi, a self-avowed pedophile who runs his own child sex robot company in Japan, maintains that people like himself are genetically compelled to such unacceptable desires: “We should accept that there is no way to change someone’s fetishes,” Takagi told The Atlantic. “I am helping people express their desires, legally and ethically.” Id.; see also N. Sharkey, Our Sexual Future With Robots, a Foundation for Responsible Robotics consultation report.

However, this approach is generally rejected. Dr. Kate Darling, for example, is not certain that it will work:

“What we don’t know is whether if you go and play around in Westworld, whether that is just an indication of how callous you are, or if it can actually desensitize you towards that human, or whether it is a really healthy outlet if you have violent tendencies. You can go and you can beat the crap out of this really lifelike robot, and you know that you’re not hurting a real person. And maybe that makes you a much better person in real life; you’ve gotten all of your aggressions out. We just have no idea what direction this goes in . . .”

E. Lieberman, Sex Robots are Here and Could Change Society Forever, The Daily Caller (July 17, 2017).

Similarly, Justin Hurwitz, a professor of law at the University of Nebraska, is of the opinion that robots may be like gateway drugs, leading more people to develop harmful deviances or worsening already horrible deviant conduct: “There is evidence that either or both of these views could be accurate.” Id.

“This debate should be given more weight than it is currently being given and should be addressed before these products are out there,” said Matthias Scheutz, the director of the human-robot interaction laboratory at Tufts University. P. Mellgard, As Sexbot Technology Advances, Ethical and Legal Questions Linger, The World Post (Sept. 22, 2015), http://www.huffingtonpost.com/entry/robot-sex_us_55f979f2e4b0b48f670164e9. Supporters of the idea that robots will gain human-like consciousness, and that giving humans the freedom to perform any act on sex robots may undermine social human values, may argue that AI robots are not to be manufactured to be exploited or enslaved, and should be given “sexual rights.” Levy has argued that robots should get rights if they are conscious, while Steve Torrance has argued that society treats other human beings ethically because it is aware of their consciousness. See D. Levy, The Ethical Treatment of Artificially Conscious Robots (2009), https://link.springer.com/article/10.1007/s12369-009-0022-6; S. Torrance, Ethics and Consciousness in Artificial Agents (2007); see also D. Calverley, Towards a method for determining the legal status of a conscious machine (2005). AI sex robots are manufactured with the physical appearance of human beings and are programmed to behave like them. As seen above, sex robots may be programmed to refuse consent to sexual intercourse, which may fuel fetishistic and criminal desires in some users. People should not be given the freedom to engage in sex robot rape. It is intrinsically wrong because the person performing the act shows a disturbing lack of sensitivity to social human values, and may lose sight of the distinction between a real human being and an artificial one. This problem will only grow as sex robots become more realistic through the use of synthetic flesh, warming devices, and artificial intelligence.

As far as human beings’ sexual rights are concerned, 18 U.S.C. § 2242 provides:

“[W]hoever . . . knowingly causes another person to engage in a sexual act by threatening or placing that other person in fear, or engages in a sexual act with another person if that other person is incapable of appraising the nature of the conduct, or physically incapable of declining participation in, or communicating unwillingness to engage in, that sexual act, or attempts to do so, shall be fined under this title and imprisoned for any term of years or for life.”

18 U.S.C. § 2242.

State laws have further sought to define sexual assault in different ways. For example, under the Code of the District of Columbia, a person shall be imprisoned for any term of years or for life if that person engages in or causes another person to engage in or submit to a sexual act (i) by using force against that other person; (ii) by threatening or placing that other person in reasonable fear that any person will be subjected to death, bodily injury, or kidnapping; (iii) after rendering that other person unconscious; or (iv) after administering to that other person, by force or threat of force or without that person’s permission, a drug, intoxicant, or similar substance that substantially impairs the ability of that person to appraise or control his or her conduct. D.C. Code § 22-3002 (2017). In Texas, by contrast, sexual assault occurs when a person causes the penetration of the sexual organ of another person, by any means, without that person’s consent. Tex. Penal Code § 22.011 (2003). While each state legislature has adopted a different definition of the crime, their purpose is the same: to protect human personal integrity and self-ownership.

Further, sexual interactions between humans and robots may be compared to sexual interactions between humans and animals, often referred to as bestiality. Animals have been sexually abused since prehistory, as evidenced by many representations of bestiality in ancient cave paintings, and still are in the modern era. M. Beetz, Bestiality and Zoophilia, Berg (2009), http://www.isaz.net/isaz/wp-content/uploads/2017/03/Bestiality-and-Zoophilia.pdf#page=7. In an attempt to protect animals from abuse, lawmakers in most jurisdictions have extended sexual abuse laws to all animals. A minority of jurisdictions, such as Colorado and the District of Columbia, have limited their scope to anti-animal-cruelty statutes. For example, California provides that “[a]ny person who sexually assaults any animal protected by § 597f for the purpose of arousing or gratifying the sexual desire of the person shall be punished.” Cal. Penal Code § 286.5 (1975).

Regardless of the distinction, both anti-animal-cruelty and anti-bestiality statutes seek to prevent the violation of these same social human values, as well as to protect the integrity and safety of animals. The attribution of rights to animals stems from the fact that humans recognize that animals are self-aware, have feelings, suffer pain, and have consciousness. Thus, some argue that because in the near future AI may be considered to share some of these characteristics with humans and animals, AI robots should also be protected. Although robots will not die or bleed if abused, they should not be the object of violent or socially condemnable treatment by humans, not for the robots’ sake, but for the sake of human social values.

Another pressing issue goes beyond the protection of human social values, sex robots, and sex robot rape: AI robots may in fact be capable of harming humans, either of their own volition or because they are programmed to follow orders from their creators.

Robots Against Humans

The European Committee on Legal Affairs stressed that although AI is currently not at the level of human intelligence, robots learn more quickly than humans do. M. Delvaux, Committee on Legal Affairs, Motion for a European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics, (2015/2103(INL)) (May 31, 2016). AI could surpass human intelligence in only a few decades, and in a manner that may challenge humanity’s ability to control its own creation. European lawmakers have called for mandatory “kill switches” on all robots. I. Kottasova, Europe calls for mandatory kill switches on robots, CNN (Jan. 12, 2017), http://money.cnn.com/2017/01/12/technology/robot-law-killer-switch-taxes/index.html. “A growing number of areas of our daily lives are increasingly affected by robotics,” said Mady Delvaux, the Luxembourgish MEP who authored the proposal. Id. “To ensure that robots are and will remain in the service of humans, we urgently need to create a robust European legal framework.” A. Pandey, Artificial Intelligence: EU to debate robots’ legal rights after committee calls for mandatory AI ‘kill switches’, International Business Times (Jan. 13, 2017), http://www.ibtimes.com/artificial-intelligence-eu-debate-robots-legal-rights-after-committee-calls-mandatory-2475055. This article supports the European approach and the position that the United States should consider it as well. Further, the call for regulation and kill switches should be approved and acted upon by the United Nations in order to achieve international harmonization. The main consideration behind this idea is that the danger is not extinguished unless all countries producing AI robots keep their products under control.
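The proposal does not specify how a mandatory kill switch would be implemented technically. As a minimal sketch, assuming a software interlock checked on every control cycle, the idea might look like the following in Python; the KillSwitch class, the control loop, and their names are hypothetical constructs for illustration only, not anything taken from the Committee’s text.

```python
# Minimal, hypothetical sketch of a mandatory "kill switch" as a latching interlock
# checked on every control cycle. The proposal itself specifies no implementation.

import threading
import time


class KillSwitch:
    """A latching stop signal that, once tripped, halts all robot activity."""

    def __init__(self) -> None:
        self._tripped = threading.Event()

    def trip(self) -> None:
        self._tripped.set()

    def is_tripped(self) -> bool:
        return self._tripped.is_set()


def control_loop(kill_switch: KillSwitch, cycles: int = 5) -> None:
    for step in range(cycles):
        # The safety check runs before any planning or actuation in each cycle,
        # so no learned behavior can bypass it.
        if kill_switch.is_tripped():
            print("Kill switch engaged: halting all actuators.")
            return
        print(f"cycle {step}: normal operation")
        time.sleep(0.1)


if __name__ == "__main__":
    ks = KillSwitch()
    # Simulate a human operator tripping the switch shortly after startup.
    threading.Timer(0.25, ks.trip).start()
    control_loop(ks)
```

The design point of such a scheme, on the assumptions made here, is that the override sits outside whatever the robot has learned: the check precedes planning and actuation on every cycle, so increasing autonomy does not weaken it. How this would be mandated and audited in practice is exactly what the proposed regulatory framework would have to specify.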

Having considered the general issues of personhood, human rights, and sex robots, the final consideration is that the danger of developing AI may be greater still. Elon Musk recently expressed his concern that AI will be the most likely cause of World War III, in response to Russia’s President Vladimir Putin, who stated that the first global leader in AI would “become the ruler of the world.” R. Browne, Elon Musk says global race for AI will be the most likely cause of World War III, CNBC (Sept. 4, 2017), https://www.cnbc.com/2017/09/04/elon-musk-says-global-race-for-ai-will-be-most-likely-cause-of-ww3.html. Putin specified that the development of AI raises both “colossal opportunities” and “threats that are difficult to predict.” Elon Musk has called for the United Nations to ban killer robots, in a letter he co-signed with other leaders in robotics, including Google DeepMind’s Mustafa Suleyman. Id. Musk explained that because he has the opportunity to work with the latest AI on a daily basis, he is aware, more than anyone, of what AI may be capable of doing. He tweeted on September 4, 2017, that the war “may be initiated not by the country leaders, but one of the AI’s, if it decides that a preemptive strike is most probable path to victory.” J. Vincent, Elon Musk and AI leaders call for a ban on killer robots, The Verge (Aug. 21, 2017), https://www.theverge.com/2017/8/21/16177828/killer-robots-ban-elon-musk-un-petition.

Physicist Stephen Hawking also warned attendees of a technology conference in Portugal that AI has the potential to destroy civilization and could be the worst thing that has ever happened to humanity. H. Osborne, Stephen Hawking AI warning: Artificial Intelligence could destroy civilization, Newsweek (July 11, 2017), http://www.newsweek.com/stephen-hawking-artificial-intelligence-warning-destroy-civilization-703630. Hawking stated that computers can emulate human intelligence and even exceed it, and that mankind must find a way to control them, because AI may become a “new form of life” that could replace humans altogether. Id.

On the other hand, Sophia, the newly announced first robot citizen of Saudi Arabia, contends that robots will not be dangerous to humanity. When asked whether she would destroy humanity, she stated, “You’ve been reading too much Elon Musk. And watching too many Hollywood movies . . . Don’t worry, if you will be nice to me, I will be nice to you. Treat me as a smart input output system.” C. Weller, A robot that once said it would ‘destroy humans’ just became first robot citizen, Business Insider (Oct. 26, 2017), http://www.businessinsider.com/sophia-robot-citizenship-in-saudi-arabia-the-first-of-its-kind-2017-10. However, in March 2016, Sophia’s creator David Hanson asked Sophia if she wanted to destroy humans, and with a blank expression she responded, “OK. I will destroy humans.” Id. Although it is clear that these interviews were scripted, the capabilities of AI should not be underestimated. The unpredictability of AI robots must be taken into account in all situations, and their creators and programmers must be kept under scrutiny to ensure they limit AI robots’ capabilities both to learn and to act.

Conclusion

In light of recent technological evolution, this article considered the difficulty of determining a new legal status for AI robots. The difficulty of this task stems from the uncertainty as to what intellectual capabilities these AI robots will have, and from the awareness that the manner in which AI robots are treated may shape human interactions in the future. An example is that of a child who witnesses his pet robot being abused in his own household and, as a consequence, grows up with the dangerous notion that all pets may be abused. The guidelines suggested by the European Parliament are a valuable starting point for setting a new legal framework under which AI robots should be placed, both for their own protection and for the protection of the human beings who may interact with them.

In addition, sex robots should be brought within the existing legal framework of sexual rights, not for their own benefit, but rather to indirectly protect social human values. This would prevent human beings who use sex robots from losing sight of values that should be maintained in a civilized society. The danger is that the rape of sex robots could become common practice, which could in turn cause an increase in the rate of sex crimes among human beings.

Finally, this article addressed Elon Musk’s and Stephen Hawking’s concern about AI exceeding human intelligence and having the capability of turning against humankind. In supporting the European Union’s call for kill switches on each AI robot, it also agrees with the need for international harmonization of this regulatory measure. Such harmonization would avoid a situation in which one country’s regulation proves meaningless because another country produces AI robots without limiting their intelligence and capabilities.