American USV Sea Hunter

The commander stays at home

12 Apr 2021 | Magazine, Technology

Modern communication and information technology is making unmanned or autonomous weapon systems possible at sea. However, many questions still need to be answered before they can be deployed.

Since 1984 at the latest, perceptions and expectations of technical systems capable of acting independently have been strongly shaped by the "Terminator" and the machine intelligence behind it, "Skynet": a machine state with almost unlimited capabilities, surpassed only by its will to power, boundless amorality and hostility towards humanity. Whenever autonomous systems are discussed, images and stories of the Terminator are almost automatically conjured up. The technological reality is very different. Although drones fly halfway around the world, crewless tanks and ships guard military installations or search for submarines, and IT-supported command and weapon deployment systems have long been part of everyday military life, all of this takes place under the control or close supervision of soldiers. This article combines the state of the technology and the prospects for future developments with selected questions of international law and ethics. It is based on the findings of the treatise "Unmanned Systems and Cyber Operations: Armed Forces and Conflict in the 21st Century - An Introduction".

 

Examples of weapon systems

Below is a small comparative selection from the enormous global range of systems, all of which can be found using web search engines:

  1. "Zaunkönig", Kriegsmarine torpedo, in service since September 1943: the world's first autonomous acoustic homing torpedo, used by submarines.
  2. AGM-158C LRASM (Long Range Anti-Ship Missile), subsonic anti-ship missile: designed for use against surface targets at greater distances; it is assigned a search area and one or more targets and can select from the assigned targets during the approach.
  3. Harpy: Drone, Anti-Radiation Loitering Munition System, designed to combat ground-based air defence radar systems. It is launched from a truck and circles for several hours over an assigned area of operation; if it detects the radar radiation of a ground-based air defence system, it crashes into this source and destroys it.
  4. MQ-9 Reaper: drone for long-range reconnaissance and engagement of ground targets, is either remote-controlled or can optionally carry out a programmed mission autonomously.
  5. Black Hornet PRS: In contrast to large drones, nano systems are used as personal reconnaissance systems for infantrymen; at 17 centimetres long and weighing 33 grams, they remain in the air for around 25 minutes and can also reconnoitre indoor areas.
  6. Airbus VSR 700: as a medium-sized unmanned helicopter, the VSR 700 offers a wide range of possible applications; with a total weight of 700 kilos, a payload of 150 kilos and a flight time of ten hours, it is the ideal unmanned "on-board helicopter" for corvettes and frigates, offering both reconnaissance and engagement.
  7. Future Combat Air System (FCAS): Germany, France and Spain plan an initial technology demonstration by the end of 2021 for the combination of a fighter aircraft, as the successor to the Eurofighter, with drones that are to act as "wingmen" of the fighter aircraft, creating a networked capability for reconnaissance, command and control, and engagement. The system is expected to enter service with the armed forces from around 2040.
  8. Protector and Seagull: unmanned combat boats, around nine to twelve metres long, for monitoring coastal waters, harbours, roadsteads and rivers; they can engage targets under remote control or in automatic mode. They can be equipped with machine guns, grenade launchers, mine-hunting equipment or ASW torpedoes. They can be deployed singly, but also as wingmen for manned boats, for example to protect special forces in amphibious operations. Such USVs could form part of the combat-boat equipment of the Class 125 frigates.
  9. Sea Hunter: unmanned submarine-hunting vessel of the US Navy with a range of 10,000 nautical miles and 90 days' endurance at sea; it can be remotely controlled or operate in automatic mode. The PRC is planning to build unmanned submarines that can operate submerged for a similar length of time.
  10. Large UUVs (Unmanned Underwater Vehicles): Boeing's Echo Voyager spent three months submerged in a 2017 trial, and the US Navy has ordered five of its successor, the Orca; the Chinese Navy is expected to receive a UUV in the 2020s that can likewise operate submerged for months; Russia is developing "torpedoes" with nuclear propulsion and a nuclear warhead and a range of 10,000 miles, to be deployed from the submarine Belgorod, which displaces around 30,000 tonnes; after release from the mother ship, the weapon could "lie in wait" for months.

All systems operate either remotely or on the basis of pre-programmed target assignments and procedures. Only take-offs and landings, transfers to the operational area or pre-programmed emergency procedures such as "lost-link mode" take place without the intervention of soldiers.
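
To make the "lost-link mode" mentioned above concrete, the following minimal sketch shows what such a pre-programmed emergency procedure might look like. It is illustrative only: the timeout value, the waypoint and all names are assumptions for the example, not taken from any real system. The point is that the behaviour is fixed in advance by humans and is therefore fully predictable.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    REMOTE_CONTROL = auto()  # operator steers the vehicle over the data link
    LOST_LINK = auto()       # pre-programmed emergency procedure is active


@dataclass
class LostLinkController:
    """Illustrative lost-link logic: purely rule-based and therefore predictable."""
    link_timeout_s: float = 30.0        # assumed threshold, not from a real system
    home_waypoint: tuple = (54.1, 7.9)  # hypothetical recovery position (lat, lon)
    seconds_without_link: float = 0.0
    mode: Mode = Mode.REMOTE_CONTROL

    def update(self, link_ok: bool, dt: float) -> str:
        """Called once per control cycle; returns the current steering command."""
        self.seconds_without_link = 0.0 if link_ok else self.seconds_without_link + dt

        if self.seconds_without_link >= self.link_timeout_s:
            self.mode = Mode.LOST_LINK       # link lost too long: fall back
        elif link_ok:
            self.mode = Mode.REMOTE_CONTROL  # link restored: operator takes over

        if self.mode is Mode.LOST_LINK:
            return f"navigate_to {self.home_waypoint}"  # fixed emergency behaviour
        return "follow_operator_commands"


# After 40 seconds without a link, the pre-programmed procedure takes over:
ctrl = LostLinkController()
for _ in range(4):
    cmd = ctrl.update(link_ok=False, dt=10.0)
assert ctrl.mode is Mode.LOST_LINK and cmd.startswith("navigate_to")
```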

General Atomics SkyGuardian

No "autonomous" systems

The best basis for classifying systems as autonomous or merely automated is provided by the British "Joint Doctrine Publication 0-30.2 Unmanned Aircraft Systems", which describes the special features of unmanned flying systems and, above all, makes it clear what current systems cannot do. A distinction is made between two categories.

According to this, an "automated system" is defined as follows: "... an automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a predefined set of rules in order to provide an outcome. Knowing the set of rules under which it is operating means that its output is predictable."

An "autonomous system", in turn, is defined as follows: "An autonomous system is capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be."
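
The practical difference between the two categories can be illustrated with a deliberately simple sketch of an "automated" decision rule in the sense of the first definition: because the rule set is known, the output for every input is known too. The emitter types, the rule table and the actions below are hypothetical and serve only to illustrate the JDP distinction.

```python
# Minimal sketch of an "automated system" in the sense of the first definition:
# a fixed, human-defined rule set maps sensor inputs to one predictable outcome.
# The emitter types, the rule table and the actions are hypothetical examples.

RULES = {
    # (emitter_type, inside_assigned_area): action
    ("air_defence_radar", True): "engage",
    ("air_defence_radar", False): "hold",
    ("civilian_radar", True): "hold",
    ("civilian_radar", False): "hold",
}


def automated_decision(emitter_type: str, inside_assigned_area: bool) -> str:
    """Whoever knows RULES knows the output for every input - nothing is 'decided'."""
    return RULES.get((emitter_type, inside_assigned_area), "hold")


# The output is fully predictable from the rule set:
assert automated_decision("air_defence_radar", True) == "engage"
assert automated_decision("civilian_radar", True) == "hold"
```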

However, this definition has a weakness: a system that is monitored by humans and can be influenced or stopped at any time is not autonomous. The original term autonomy describes a human characteristic: a state of self-determination through decision-making and freedom of action. The definition of an autonomous system therefore needs to be supplemented in this sense: autonomy only begins when a system can start and end its action independently of humans, i.e. when humans have no possibility of intervening in the system's action.

The "Zaunkönig" is therefore an automated system, firstly because the operational decision lies with the human and secondly because it is bound to a mechanically defined action programme. LRASM and Harpy also execute pre-programmed options, albeit at a technologically higher level - they do not make their own decisions, even if they carry out their task in the operational area independently of the operator. In other words: independent operation is not enough for autonomy if the system merely follows its mechanical constraints or algorithms, because it has no freedom of choice whatsoever. Many of the systems mentioned above are remotely controlled by humans. Is the absence of a shutdown function, of any option to abort the mission through operator intervention, enough for the label "autonomous"? Clearly not, as long as the system follows a programme and has no way of breaking out of the pre-programmed framework.

The military systems mentioned above fall entirely into the "automated" category. The following applies to all the systems listed: in automatic mode, they merely realise programmed intentions in specific individual cases at the command of a human. None of them is an autonomous system in the sense defined above. It is therefore appropriate to speak of "unmanned systems", as this is the only characteristic common to them all.

Small test boat and aerial drone at the International RoboBoat Competition in South Daytona

International humanitarian law

International humanitarian law (IHL) binds the parties to a conflict and all persons acting on their behalf. The Viadrina International Law Project (VILP) is one source for the many underlying legal texts. The legal aspects of the use of automated systems are explained here using the four most important rules of IHL, quoted in the easy-to-understand wording published on the website of the German Federal Foreign Office.

 Rule 1: Neither the parties to the conflict nor the members of their armed forces have unrestricted freedom in their choice of methods and means of warfare. The use of any weapons and combat methods that cause superfluous injuries and unnecessary suffering is prohibited.

Prohibited means and methods include, for example, poison gas, lasers designed to cause permanent blindness, ammunition designed to cause suffering beyond what is militarily required to injure and kill, and terrorist attacks against the civilian population. The legal basis is Article 35 of Additional Protocol No. 1 to the Geneva Conventions. The use of machines as such does not violate these rules; what matters is the specific deployment and the munitions used. If the operator uses drones for a terrorist attack aimed solely at uninvolved civilians, or if the drone fires prohibited ammunition, this constitutes a violation of IHL. It makes no difference whether the drone is remote-controlled or carries out the attack without operator intervention on the basis of its programming.

 Rule 2: In order to protect the civilian population and civilian objects, a distinction must be made at all times between the civilian population and combatants. Neither the civilian population as a whole nor individual civilians may be attacked. Attacks may only be directed at military targets.

How is it to be judged when Harpy attacks an enemy radar system positioned in the centre of a densely populated village? The radar system is a legitimate military target. The attack is directed neither exclusively nor primarily against the civilian population, so the attack on the radar system is permissible as long as no excessive collateral damage is caused. This follows from Articles 35, 51 and 57 of Additional Protocol No. 1 to the Geneva Conventions. There is no difference in legal judgement compared to the pilot of a fighter aircraft, who would likewise have to decide in favour of a missile attack on the enemy radar system deployed in the residential area in order to protect himself and the other units deployed, even if civilians could be killed in the process. Precision has been an effective means of reducing the number of civilian casualties for decades; before the introduction of homing missiles, a larger radius would have had to be covered with bombs or shells to disable a radar system reliably. The situation also requires a look at the other side: the party that positions its radar in a residential area, and thus instrumentalises the civilian population as a human shield, violates Rule 1. Articles 48 and 58 (b) of Additional Protocol No. 1 to the Geneva Conventions are violated. Positioning the radar system in the midst of the civilian population makes it difficult for the enemy to distinguish and separate this military target from non-legitimate targets - this can also be qualified as a combat method that can lead to unnecessary injuries, killings and unnecessary suffering.

 Rule 3: Combatants and civilians under the control of an opposing party are entitled to respect for their lives and dignity. They must be protected from any acts of violence or reprisals.

Custody is exercised by soldiers; at present, unmanned systems are not suitable for this task.

Rule 4: It is forbidden to kill or injure an opponent who surrenders or is unable to continue the fight.

One of the legal bases for this is Article 41 of Additional Protocol No. 1 to the Geneva Conventions. Machines, just like soldiers, must be able to meet the requirement to spare an enemy who is out of action, at least if they act without operator involvement - something which, given the current limits of machine perception, will probably not be possible in the near future. In combat at sea, the large distances involved make it difficult even for humans to recognise that an enemy ship has stopped simply to allow the crew to abandon ship. No higher demands can be placed on remote-control operators or programmers of technical systems than on the crew of a warship observing the stopped vessel. In land combat, machines are supported by soldiers, much as tanks and infantry are deployed in combination. Soldiers can recognise surrendering or incapacitated opponents and take them prisoner, thus ensuring that IHL can be observed.

From the point of view of the four main rules of IHL, the following applies: machines must be capable of being brought under control at all times and must not kill indiscriminately. The systems listed above do not in themselves constitute a violation of the rules of IHL; what matters is how they are used.

AUV Remus 600 of the ONR

Warship or not?

 The investigation into whether the Sea Hunter is a warship within the meaning of the Convention on the Law of the Sea (UNCLOS) shows that problems under international law arise in rather unexpected places. The United Nations Convention on the Law of the Sea (UNCLOS III) of 1982 defines warships in Article 29 as follows: "For the purposes of this Convention, 'warship' means a ship belonging to the armed forces of a State bearing the external marks distinguishing such ships of its nationality, under the command of an officer duly commissioned by the government of the State and whose name appears in the appropriate service list or its equivalent, and manned by a crew which is under regular armed forces discipline."

Maritime drones have been in use worldwide for decades as subsystems of warships, from which they are deployed for limited periods of time and within limited areas. These include the German Navy's mine-countermeasures drones Pinguin and Seehund. A system like Sea Hunter was not yet conceivable in the 1970s, when UNCLOS was drafted - it is not a subsystem of a warship. Of the three conditions of Article 29, the external markings are the easiest to fulfil. Subordination to the command of an officer can also be met - command here indisputably means command authority, and in the age of remote data transmission the presence of the officer on board is not mandatory. The wording of UNCLOS - "manned by a crew" - is certainly based on the assumption that a crew is in fact always on board "their" ship. This is a clear requirement, but technical progress has turned it into an unintended loophole, which can be closed by analogy. The Sea Hunter can therefore be described as a warship. This will be controversial among states, as this status is associated with various duties of conduct and powers, for example when passing through foreign territorial waters.

Students from Singapore preparing an autonomous vehicle at the ONR's Maritime RobotX Challenge in Hawaii

Unmanned systems and ethics

Soldiers have long dealt with increasingly automated systems, and it is foreseeable that the range, speed and options for action of these systems will continue to grow. Missile defence systems and other machines increasingly turn the soldier into an observer and supervisor of the system, who becomes active only when something deviates from the norm. The use of driving, flying, floating or diving drones withdraws soldiers from the front line of fire, although long-range weapons offset this to some extent. Nevertheless, the typical risks of the military profession are changing both qualitatively and quantitatively with the increasing use of automated systems.

Under fire from enemy automated systems, soldiers inevitably develop a new perception. If there is no longer a soldier fighting in the attacking enemy machine, and it becomes clear that one is risking life and limb while only being able to inflict material damage on the other side, this will affect motivation and mental attitude. The realisation that only easily replaceable technical equipment is being destroyed, and that neither toughness nor the willingness to make sacrifices offers any advantage, may reduce the motivation to fight. On the other hand, the inhibition against killing no longer applies, and the soldier does not have to worry about legal or ethical questions regarding the use of force, unless the chosen means have effects that go beyond fighting the enemy machines. Does it make any ethical difference if a soldier is shot at by an automated machine? With regard to the unpredictable contingencies of combat, it does not: injuries, agony, suffering and death are neither better nor worse from the soldier's point of view.

 

It cannot be completely ruled out that technical systems will one day be able to decide on the "whether" and "what" of a weapon deployment. Even if there are still clear technological limits today, this is within the realm of possibility - regardless of whether such responsibility is ever actually delegated to such systems. Does it make an ethical difference whether a soldier decides on the start and end of the use of force, merely observes the machine, or stays out of the process altogether? Certainly yes. The core of a soldier's profession is to take precisely this difficult decision and not to delegate it to machines. When people make existential decisions involving ethics and emotion, their own vulnerability and mortality always resonate - no machine will ever be able to simulate this, because human decisions are always felt and not merely intellectualised as an abstract train of thought. Responsibility for lethal decisions must therefore always remain with the human being, who is aware of his own vulnerability and mortality, precisely because this decision is, or at least can be, existential not only for the opponent.

 

Another urgent ethical question arises: Can the leadership of a party to a conflict justify pitting its soldiers against unmanned systems if it equips its armed forces with no or significantly fewer unmanned systems than its opponent? In the knowledge that the machines have an advantage simply due to their faster decision loops. In the knowledge that the enemy can replace the destroyed machines, but that its own fallen or wounded soldiers are lost for good or must be taken out of the battle. In the knowledge that in the defence of your own country you may "burn out" entire generations against the constantly replenished systems of the enemy. In the knowledge that your own soldiers will not be able to adequately protect your own civilian population.

 

This article is an extract from the new book Unmanned Systems and Cyber Operations: Armed Forces and Conflict in the 21st Century - An Introduction by Dr Michael Stehr, Mittler im Maximilian Verlag, Hamburg 2020, ISBN 978-3-8132-1103-0.

 
