Legal Status of Robots: The RENFORCE/UGlobe Seminar and Why I Decided to Sign the Open Letter

Photo credits: iStock/Global_PhonlamaiPhoto

Should a robot enjoy any legal status independent of its human creators? If so, what kind of legal status would that be? Should the robot enjoy "rights" of its own? One's answers to these futuristic questions might partly depend on whether one's image of autonomous robots comes from Bicentennial Man (1999), a film based on Isaac Asimov's novel, or the more recent Ex Machina (2014). In the film version of Bicentennial Man, a highly autonomous robot played by Robin Williams is humorous, friendly, and warm-hearted, and coexists peacefully with human communities. In Ex Machina, by contrast, a beautiful human-looking robot deceives a man and wins its freedom by exploiting the trust he has developed towards it. While we cannot tell whether such a self-governing robotic machine could ever be built, the two films depict diametrically opposed scenarios, reminding us that robots can have both beneficial and disturbing consequences for human beings.

European Parliament on robotic legal status

While reality has yet to catch up with science fiction, on 16 February 2017 the European Parliament adopted a resolution entitled "Civil Law Rules on Robotics". Included therein was a series of provisions concerning "liability". While the European Parliament noted that "at least at the present stage the responsibility must lie with a human and not a robot" (paragraph 56), the Parliament still called upon the European Commission to analyze the implications of a legal solution that would "[create] a specific legal status for robots" in the long run (paragraph 59(f) of the resolution). The assumption is that such a legal status would allow "the most sophisticated autonomous robots" to have the status of electronic persons "responsible for making good any damage they may cause". Electronic personality might then be applied in cases "where robots make autonomous decisions or otherwise interact with third parties independently" (paragraph 59(f)).

Open letter to the European Commission

In March 2018, Nathalie Nevejans, a lecturer in law at the University of Artois and a member of the Ethics Committee of the French National Center for Scientific Research (COMETS), wrote an open letter to the European Commission, which, in response to the European Parliament's resolution, was in the process of drafting a first communication on the EU's strategies on artificial intelligence and robotics. In essence, the letter requests the Commission to avoid creating an electronic legal person for robots. The letter contests the reliance upon the science fiction-based assumption that it would be impossible to hold human beings liable for "autonomous" and "self-learning" robots. The creation of a legal personality for a robot is legally and ethically "inappropriate", the letter claims, at least insofar as we rely on the existing legal personality models applicable to natural persons and legal entities. According to the letter, if a robot's legal personality is based on the natural person model, the robot may also enjoy a series of "rights". To grant such rights to robots, the letter claims, would contradict the European Convention on Human Rights. The claim is presumably based on the difficulty of sharing with robots the historical and philosophical backgrounds that sustain human rights, as well as on the restrictions that robotic rights might create vis-à-vis the rights of natural persons. As of 8 June 2018, the open letter had garnered 257 signatories within the EU, including experts in artificial intelligence, robotics, ethics, and law.

The EU’s Questionable Competence

Triggered by the open letter, on 28 May 2018 Utrecht University's RENFORCE and UGlobe researchers held a seminar on the legal status of robots. The seminar was convened by Lucky Belder, who co-coordinates the UGlobe project "Disrupting Technological Innovation: Towards an Ethical and Legal Framework". The purpose of the seminar was to highlight some of the concerns arising from the consideration of a possible legal status for robots at the EU level. To begin with, from an EU law perspective, Sybe de Vries raised the question of whether the EU has the competence to confer any legal personality on robots in the first place. Article 114 TFEU (Treaty on the Functioning of the European Union) could potentially be invoked in the future on the ground that the harmonization of national law is necessary for the functioning of the internal market. Nevertheless, since we are dealing with civil law rules, the introduction of an entirely new EU civil law regime for robots would be difficult to realize.

Defining “autonomous” robots

Leaving aside the question of the EU's competence, there is a fundamental difficulty in defining the "autonomy" of robots. The 2017 resolution of the European Parliament refers to those robots which "make autonomous decisions" (paragraph 59(f)). Under the resolution, a robot's autonomy is defined as the "ability to take decisions and implement them" in the outside world "independently of external control or influence" (recital paragraph AA). But how do we know whether computers are "taking decisions"? One could argue that even Conway's Game of Life, for instance, is "taking decisions" based on a set of simple rules. Moreover, how can a robot be free of "external control or influence" if somebody had to build it in the first place? As Roeland de Bruijn and Madeleine de Cock Buning suggested during the Utrecht seminar, it is at present possible to trace the elements of human involvement. The discussion of the legal status of robots should not rest on hasty assumptions about the autonomy and independence of robots.

“Responsibility”, “damage”, and “rights” in relation to robots

Even leaving aside the thorny question of definition, the European Parliament's resolution speaks of the electronic personality of robots that would be "responsible" for "damage" (paragraph 59(f)). Yet the ideas of responsibility and damages are deeply intertwined with a given human community's values and morality, and, ultimately, with human feeling, pain, and emotion. Unless reality fully catches up with the robots of Bicentennial Man and Ex Machina, these concepts, embedded in human characteristics, cannot easily be transposed to the construction of robotic responsibility for damage. Furthermore, according to our human line of thinking, "responsibility" should also entail certain "rights". Yet as Janneke Gerards highlighted during the seminar, fundamental conceptual difficulties persist in granting "rights" to robots, even if we want to do so.

Regulating the use and development of autonomous robots

At present, a more pressing task, which ought to garner more attention at the EU level, is to develop a set of rules to better ensure the safety of robotic technologies. Illustrative in this regard is the regulation of so-called killer robots. As Cedric Ryngaert pointed out during the seminar in Utrecht, there are a number of unanswered questions as to whether and to what extent the human designers and operators of such robots could be held responsible if autonomous robots cause unintended harm to civilians in warfare. While some states are in favour of developing autonomous killer robots, Elon Musk, Tesla's CEO, and many other industry leaders signed an open letter in 2017 to the UN's conference for the Convention on Certain Conventional Weapons. The letter warned states parties about the dangers of lethal autonomous weapons, including the possibility that such weapons could be "hacked in undesirable ways".

Revisiting the open letter

The Utrecht seminar was meant to flag some of the possible legal perspectives without, perhaps rightly, intending to provide definitive answers to pending questions. Now that the European Parliament has kicked off the discussion of the legal status of robots, we, regardless of our direct involvement in the development of robotic technologies, must engage in the debate on whether and how robots should enjoy any independent legal status. Having attended the seminar, I cannot but conclude that paragraph 59(f) of the European Parliament's February 2017 resolution should have been drafted more cautiously, on the basis of much more thorough studies of robotics. This is what led me to sign the open letter to the European Commission.

This entry was posted in Core values.
Machiko Kanetake

About Machiko Kanetake

Machiko Kanetake is an Assistant Professor of Public International Law and a coordinator of the Master's Program in Public International Law. Machiko specialises in the ‘interfaces’ between national and international law. In particular, she focuses on: the interactions between the UN Security Council’s exercise of authority and the domestic legal order; domestic courts’ engagement with the instruments adopted by UN human rights treaty-monitoring bodies; and the domestic application of non-binding international instruments.