
Artificial Intelligence and Autonomous Weapons Systems: what regulation?

Written by Giorgia Diomede

Introduction

First of all, we should ask ourselves what we mean by Artificial Intelligence. An Artificial Intelligence (AI) is a machine (or a system) which can select the right action to take based on an observation of its surroundings; thus, according to Floridi and Sanders, such a system is characterized by the concepts of Adaptability, Interactivity and Autonomy.
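Purely as an illustrative sketch, and not as anything drawn from Floridi and Sanders themselves, those three concepts can be read as the minimal contract an artificial agent must satisfy; the interface below, including all of its names, is hypothetical:

```python
from abc import ABC, abstractmethod

class ArtificialAgent(ABC):
    """Minimal contract mirroring the three criteria: a hypothetical sketch."""

    @abstractmethod
    def interact(self, environment: dict) -> dict:
        """Interactivity: perceive and respond to the surroundings."""

    @abstractmethod
    def adapt(self, feedback: dict) -> None:
        """Adaptability: change internal state based on experience."""

    @abstractmethod
    def act(self) -> str:
        """Autonomy: select the next action without outside direction."""
```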

Nowadays, we are experiencing a so-called cyber revolution: AIs are becoming part of our lives, and their different applications can be observed in our everyday routine (e.g. personal assistants on mobile phones, self-driving cars), thus proving what an improvement such technologies can be to our lives. However, this revolution does not come free of concerns: many people are worried about the number of jobs these machines will take over. As is well known, job destruction is a recurring event throughout history; every technological advancement has caused job losses but has, at the same time, opened a window for new job opportunities. This time, however, the scenario is different, and highly advanced states fear they will soon have to face a job collapse: the development of AI systems will likely create very few new jobs, leaving large numbers of people unemployed.

Unfortunately, economic armageddon is not the main problem AIs bring to the table. These systems may also have a very different kind of application, as they are increasingly being deployed on the battlefield in the form of Autonomous Weapons.

As much as it is true that a system which does not require the presence of a human to operate may be of great use in the military field (consider, for example, the use of autopilot in the Air Force or the robots used for bomb disposal), it is also undeniable that such systems lead to an enormous number of problems related to security and accountability. Many influential researchers and activists have addressed these problems in an open letter to the UN, calling in particular for a ban on Lethal Autonomous Weapons Systems (LAWS), which can operate without any human support at all. To understand what we are talking about, it is perhaps better to note that LAWS are characterized by different levels of autonomy and therefore lack a uniform definition. At the international level, one of the most widely used tools to understand and differentiate between the various levels is the so-called Boyd Cycle, named after its creator John Boyd, also known as the "OODA Loop" (Observe, Orient, Decide and Act, which according to some researchers are the steps humans follow in their decision-making process). From this framework it follows that there are three levels of autonomy for LAWS, schematized in the sketch below. The first, called "human-in-the-loop", requires human assistance for the robot to carry out its task; these are the systems currently in use. The second is called "human-on-the-loop": in these systems robots can take decisions on their own, but a human can override them. The third and final level of autonomy is referred to as "human-out-of-the-loop", as in these systems humans cannot in any way control the actions of the robot.
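To make the taxonomy concrete, here is a minimal Python sketch of where the human sits in each of the three loops. It is purely illustrative: the AutonomyLevel enum, the resulting_action function and the "engage"/"hold" actions are all hypothetical and do not describe any real weapons architecture.

```python
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1      # human must authorize each action
    HUMAN_ON_THE_LOOP = 2      # system acts alone, a human may override
    HUMAN_OUT_OF_THE_LOOP = 3  # no human control at all

def resulting_action(level: AutonomyLevel, system_decision: str,
                     human_approves: bool = False,
                     human_overrides: bool = False) -> str:
    """Return the action actually carried out at each autonomy level."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        # Nothing happens without explicit human authorization.
        return system_decision if human_approves else "hold"
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        # The system proceeds on its own unless a human steps in.
        return "hold" if human_overrides else system_decision
    # HUMAN_OUT_OF_THE_LOOP: the system's decision is final.
    return system_decision

# Example: under human-on-the-loop, a human override blocks the action.
print(resulting_action(AutonomyLevel.HUMAN_ON_THE_LOOP, "engage",
                       human_overrides=True))  # -> "hold"
```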

As one can imagine, the deployment of such weapons would constitute a real revolution in modern warfare. Some may even try to see the bright side: the use of these weapons on the battlefield would put fewer active combatants at risk of losing their lives. However, there are obviously more concerns about the use of LAWS than there are silver linings, as a machine that cannot be controlled at all by a human might take the wrong decision and cause an unimaginable number of casualties. This is why, as we are going to see, many steps are being taken at both the national and international level to come up with a body of rules for these systems.

International Law Perspective

As of today, no regulation of LAWS exists at the international level; however, existing laws may be applied, at least until a proper body of rules on the subject is created. The main example to take into consideration is International Humanitarian Law (IHL), which offers a series of principles governing warfare. The main instruments to be found in IHL are the Geneva Conventions and the Hague Convention.

The former establish four core principles governing weapons and military conduct in war: Humanity, Distinction, Proportionality and Military Necessity. The main aim of these principles is to limit military action in order to protect civilian lives and to cause as little suffering as possible, for example by prohibiting the use of "weapons [...] and methods of warfare of a nature to cause superfluous injury or unnecessary suffering"[i]. Additional Protocol I to the Geneva Conventions also provides for a procedure of so-called "weapons review": Article 36 states that "in the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party", thus imposing an obligation on States party to the agreement. Unfortunately, non-abiding States have no interest in following such procedures, despite an invitation from the international community to do so.

On the other hand, the Hague Convention provides for a catch-all clause which was later incorporated in the Additional Protocol to the Geneva Conventions. This clause, known as the Martens Clause, reads as follows: "in cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience". It is generally accepted that, for customary law to apply in the context of the Martens Clause, the opinio iuris ac necessitatis matters more than the diuturnitas, thus underlining the importance given, at the international level, to the interpretation of a certain conduct as being unlawful.

The Geneva and Hague Conventions aside, an attempt to regulate LAWS is being made in the framework of the Convention on Certain Conventional Weapons (CCW), which in 2016 created a Group of Governmental Experts (GGE) specifically tasked with studying emerging technologies in the area of LAWS. In the course of their first meetings, the majority of the experts agreed on the need for a common and definitive definition of LAWS in order to better address the problems posed by such systems. On these premises, four approaches to weapon characterization emerged:

  1. The Separative approach, in which characteristics not relevant to the CCW are set aside while those considered relevant are taken into account;
  2. The Cumulative approach, in which categories of characteristics are added to a list and then evaluated on the basis of specific criteria in order to exclude those not relevant to the CCW;
  3. The Accountability approach, in which the characteristics and decisions left to the machines are taken into consideration without focusing on the loss of human control;
  4. The Purpose-oriented and effect-based approach, which takes into consideration the desirable and undesirable effects of LAWS based on emerging technologies.

In any case, what all members of the GGE agreed on is the need to keep a human-centric approach to LAWS, with the "human-on-the-loop" system being the best way to combine artificial intelligence and risk management.

Following the various GGE meetings, the majority of the States party to the CCW have been pushing for the negotiation of an instrument of international law (such as a treaty or a protocol) on the regulation of LAWS. However, some countries believe this to be a rushed decision: since such weapons do not really exist yet, an early regulation might turn out to be either too weak or too limiting. For instance, France and Germany adopted a common position, supported by Italy, which they presented to the GGE in a non-paper in November 2017. Their paper includes suggestions on the adoption of transparency measures and underlines the need to conduct weapons reviews as provided by Article 36[ii].

To summarize, IHL and CCW rules can be applied to LAWS, albeit with some difficulty, at least until a proper regulation is in place. This upcoming body of rules should provide a definition of LAWS and rules concerning the production and use of such systems; in particular, it should focus on the involvement of humans in the decision-making process. All of this should obviously be done bearing in mind that the main aim of such regulation is to guarantee full respect of the principles set by IHL.

EU Law Perspective

As things stand, there is no regulation of LAWS at the European level; in fact, there is not even a proper regulation of AI. In 2017 the European Parliament presented a resolution to the Commission on the subject, asking for the adoption of civil law rules on robotics. The Parliament's main concern relates to accountability; for this reason, the resolution contains some recommendations on how the issue should be addressed. In particular, in relation to liability for damages caused to third parties, it states as follows:

“52. [the Parliament] Considers that, whatever legal solution it applies to the civil liability for damage caused by robots in cases other than those of damage to property, the future legislative instrument should in no way restrict the type or the extent of the damages which may be recovered, nor should it limit the forms of compensation which may be offered to the aggrieved party, on the sole grounds that damage is caused by a non-human agent;

53. [the Parliament] Considers that the future legislative instrument should be based on an in-depth evaluation by the Commission determining whether the strict liability or the risk management approach should be applied;

54. [the Parliament] Notes at the same time that strict liability requires only proof that damage has occurred and the establishment of a causal link between the harmful functioning of the robot and the damage suffered by the injured party;

55. [the Parliament] Notes that the risk management approach does not focus on the person "who acted negligently" as individually liable but on the person who is able, under certain circumstances, to minimize risks and deal with negative impacts;

56. [the Parliament] Considers that, in principle, once the parties bearing the ultimate responsibility have been identified, their liability should be proportional to the actual level of instructions given to the robot and of its degree of autonomy, so that the greater a robot's learning capability or autonomy, and the longer a robot's training, the greater the responsibility of its trainer should be; notes, in particular, that skills resulting from training given to a robot should not be confused with skills depending strictly on its self-learning abilities when seeking to identify the person to whom the robot's harmful behavior is actually attributable; notes that at least at the present stage the responsibility must lie with a human and not a robot;".

It is noteworthy that paragraph 55 provides for liability to fall on the person who is able to "minimize risks and deal with negative impacts". According to rapporteur Delvaux, these persons can be identified following one of two principles: safety by design and risk assessment. The former holds the producer liable, as he should have programmed the system in such a way as to avoid risks, while the latter requires preliminary tests to be carried out on the robot and places liability on all the persons involved in the process, from the producer to the operator.

Following the Parliament's resolution, the Commission published a communication laying down a European strategy focused on human-centric AI. According to the Commission, AI is a tool at humans' disposal, and for it to be used at its best, trustworthiness must be achieved; this, in the Commission's view, is only possible if the values of our society are embedded in the development of the systems. For these reasons, the Commission set up a High-Level Expert Group on AI tasked with laying down ethics guidelines, which can be found here: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

Furthermore, the Commission also set up an Expert Group on Liability and New Technologies to address the concerns raised by the Parliament on the subject. The group was divided into two formations: the Product Liability Directive formation and the New Technologies formation (NTF).

In its report, submitted in November 2019, the NTF presented its findings on whether existing liability regimes can adequately be applied to emerging technologies. Firstly, it has to be noted that the law of tort is mostly non-harmonized at the EU level, with some exceptions concerning product liability law, data protection law and competition law. That said, the NTF found that even though existing regimes may be applied to emerging technologies, some problems remain, the most relevant being who should bear liability for damage caused by these systems. Many proposals have been made over the years to give legal personality to autonomous systems; this, however, would be very hard to accomplish as, in order to be able to undertake obligations, an autonomous system should also have assets of its own. On the other hand, there is no need to give legal personality to emerging technologies, since in the end the risks connected to their use can be attributed to already existing natural or legal persons.

In this context the NTF took into consideration two kinds of strict liability. On the one hand, the report suggests the use of the operator's strict liability for: "risks posed by emerging digital technologies, if, for example, they are operated in non-private environments and may typically cause significant harm. Strict liability should lie with the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation (operator). If there are two or more operators, [...] strict liability should lie with the one who has more control over the risks of the operation." In this field, existing national regimes may already be applied. On the other hand, the producer's strict liability was suggested for: "[...] defects in emerging digital technologies even if said defects appear after the product was put into circulation, as long as the producer was still in control of updates to, or upgrades on, the technology. A development risk defense should not apply. If it is proven that an emerging digital technology has caused harm, the burden of proving defect should be reversed if there are disproportionate difficulties or costs pertaining to establishing the relevant level of safety or proving that this level of safety has not been met." This field poses some problems in relation to technologies which by design operate by interacting with and learning from their surroundings (such as AIs). In these cases, the system might suffer a malfunction that its producer could not predict; for this reason, national liability laws which allow the producer to avoid responsibility for unforeseeable defects should not apply in such cases.

In addition to these two scenarios, the NTF considered the possibility of applying existing vicarious liability rules to autonomous systems, so that when such a system is used in place of human auxiliaries, the damage it may cause gives rise to the liability of a principal, just as it would if a human were acting. However, this solution is not free of problems either, as many Member State jurisdictions require the auxiliary to have misbehaved for vicarious liability to apply, thus posing the problem of how such conduct could be attributed to a machine.

As can be deduced from the NTF report, existing liability regimes may be applied to emerging technologies for the time being, but as these technologies keep growing the development of proper legislation is desirable. Unfortunately, to this day, no legislative act has followed the Parliament's resolution. Lastly, it is worth noting that the Parliament has also presented a report to the Council calling for the adoption of a common position on LAWS; however, there has been no follow-up on this subject either.

Italian Law Perspective

The Italian legal landscape is rather poor in rules concerning AI, and LAWS in particular, as these technologies are still being developed and, as we have seen, few regulations exist on the subject even at the international level. However, some steps forward are being made, especially in light of the strategy adopted at the EU level. It should therefore come as no surprise that, following the Commission's work, a national strategy has been proposed by the MISE (Ministero dello Sviluppo Economico), which, through the work of a group of experts, has presented a series of recommendations on the use and development of AI. According to the experts, the main line of action Italy should follow is the one laid down at the European level, especially regarding ethics guidelines and the anthropocentric development of such systems.

That said, unmanned systems are not completely unknown to the Italian legal order: in past years, the Italian armed forces have used such systems both in missions on foreign territory (such as Afghanistan and Iraq) and in public security operations on national territory. The latter, in particular, have seen the use of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs). LAWS, however, are still out of the picture: especially when it comes to public order, the use of force must be reduced to what is strictly necessary, thus requiring a human agent to adopt the relevant decisions. Furthermore, the use of LAWS on national territory would pose several problems, as domestic human rights law would apply, which consists of stricter rules than those envisaged by IHL, thus leading to a very restrictive use of such systems.

At the moment, there is no specific legislation in Italy concerning LAWS, but it is very likely that the legislator will follow the example set at the international level. For instance, Legislative Decree no. 66 of 2010 (the "Code of Military Organization") provides for the implementation of the rules on weapons review set by Article 36 of Additional Protocol I to the Geneva Conventions. Moreover, an important role is played at the national level by the Parliament: first of all, in 2017 it presented a proposal to adopt the definitions and principles that resulted from the CCW discussions, thus implementing the aforementioned Decree. Secondly, in the event of both external and internal use of LAWS to carry out military operations, it would be up to the Parliament to give the final authorization.

As previously discussed in relation to EU law, the biggest problems autonomous systems pose relate to liability. In Italian civil law, without the possibility of connecting a fact to the will of the person who caused it, there can be no responsibility on that person; therefore, the robot's lack of will causes liability to fall on either the producer or the operator (just as in EU law).

As far as criminal law is concerned, the lack of legal personality places LAWS on the same level as ordinary weapons, leading, consistently with the human-in-the-loop principle, to liability falling on the operator of the system and his superior. In fact, in these cases Article 51 of the Italian Penal Code would apply, which provides for a justification where the person who committed the crime was acting in fulfillment of a duty, thereby causing liability to fall on the superior who gave the order. Alternatively, the operator might invoke a malfunction of the system, in which case liability would fall on the producer.

Finally, according to Italian law, in case of armed conflict the use of LAWS would automatically be considered lawful, as there is no legislation, either national or international, on the subject, and the state of warfare justifies the suspension of certain rights and freedoms that are normally protected at the constitutional level.

Conclusion

It appears, from what has been said in the previous paragraphs, that even though we find ourselves in a completely new field requiring ad hoc regulation, the majority of states agree on some fundamental principles which can apply for the time being.

First of all, even though there is no shared agreement on banning LAWS operating under the "human-out-of-the-loop" principle, all the states party to the CCW came to the conclusion that the "human-on-the-loop" principle is the best way to proceed, as there needs to be at least a small amount of human control over the decisions adopted by the machine. However, there is no need to worry: completely autonomous machines do not yet exist, and by the time they do we might have figured out a way to instill our moral principles into them, for example by using the Value-Sensitive Design approach suggested by Major Ilse Verdiesen. This process would require general norms and values to be translated into design requirements, literally building the values into the machine's programming, as sketched below.
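Purely by way of illustration, and without claiming to reproduce Verdiesen's own method, the following minimal Python sketch shows what "building values into the programming" could look like: two IHL-inspired norms (distinction and proportionality) are translated into hard-coded requirements that gate every decision. The Target class, the harm threshold and the engagement_permitted function are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_military: bool            # distinction: only military objectives
    expected_civilian_harm: int  # proportionality input (abstract units)

# Illustrative threshold, not a real legal standard.
MAX_ACCEPTABLE_HARM = 0

def engagement_permitted(target: Target) -> bool:
    """Gate every decision with hard-coded value constraints."""
    if not target.is_military:
        return False  # would violate the principle of distinction
    if target.expected_civilian_harm > MAX_ACCEPTABLE_HARM:
        return False  # would violate the principle of proportionality
    return True

# Example: a military objective with expected civilian harm is refused.
print(engagement_permitted(Target(is_military=True,
                                  expected_civilian_harm=3)))  # -> False
```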

Secondly, as there are no specific norms at the national level yet, liability will be addressed under existing rules, assessing on a case-by-case basis who can be considered responsible for the damage: the operator (if there is one), the commander (who, as superior in rank, will be responsible) or the producer (in case a malfunction occurs).

Finally, as highlighted many times, human rights, and IHL in particular, are the foundation of every legal order; they should be kept in mind at all times, both when adapting existing rules to new weapons and when laying down brand new regulation on the subject, in order to guarantee their full protection.

Bibliography

Additional Protocol I to the Geneva Conventions, 1977

Amitai Etzioni & Oren Etzioni, Should Artificial Intelligence Be Regulated?, 2017

Caitlin Mitchell, When Laws Govern LAWS: a Review of the 2018 Discussions of the Group of Governmental Experts on the Implementation and Regulation of Lethal Autonomous Weapons Systems, 2020

Claudio Catalano, Lethal Autonomous Weapons System (LAWS): evoluzione del quadro giuridico nazionale e internazionale ed eventuale possibilità di impiego anche nell’ambito di operazioni sul territorio nazionale, 2018

COM(2019) 168, Building Trust in Human-Centric Artificial Intelligence, 2019

Decreto Legislativo 15 marzo 2010, n. 66 “Codice dell'ordinamento militare”

European Commission, Communication on Artificial Intelligence for Europe, 2018

European Commission, Report on Liability for Artificial Intelligence and other emerging technologies, 2019

European Parliament, Resolution on Civil Law Rules on Robotics, 2017

Floridi & Sanders, On the Morality of Artificial Agents, 2004

France and Germany, Working Paper CCW/GGE.1/2017/WP.4 For consideration by the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), 2017

Group of Experts selected by the MISE, Proposte per una strategia italiana per l'intelligenza artificiale, 2019

Group of Governmental Experts on Lethal Autonomous Weapons Systems, Report, 2018

Ilse Verdiesen, How Do We Ensure That We Remain In Control of Our Autonomous Weapons?, 2017

Meeting of the High Contracting Parties to the Convention on Certain Conventional Weapons, Final Report, 2013

The Geneva Conventions, 1949

The Hague Convention, 1899
