Not smart enough: The poverty of European military thinking on artificial intelligence


Summary

  • There is currently too little European thinking about what artificial intelligence means for the military.
  • AI experts tend to overlook Europe, focusing on the US and China. But AI will play an important role for Europe’s defence capabilities, and its funding and development decisions will influence the future of military AI.
  • France and Germany stand at opposite ends of the AI spectrum in Europe: France considers AI a part of geopolitical competition and shows clear interest in military AI, while Germany sees AI only as an economic and societal issue.
  • The new European Commission’s stated goal of achieving “European technological sovereignty” should lead it to engage with the topic of military AI, and to help EU member states harmonise their approaches.
  • Failing to coordinate properly in this area could threaten future European defence cooperation, including PESCO and the European Defence Fund.

Introduction

“Artificial intelligence” (AI) has become one of the buzzwords of the decade, as a potentially important part of the answer to humanity’s biggest challenges in everything from addressing climate change to fighting cancer and even halting the ageing process. It is widely seen as the most important technological development since the mass use of electricity, one that will usher in the next phase of human evolution. At the same time, some warnings that AI could lead to widespread unemployment, rising inequality, the development of surveillance dystopias, or even the end of humanity are worryingly convincing. States would, therefore, be well advised to actively guide the development of AI and its adoption in their societies.

For Europe, 2019 was the year of AI strategy development, as a growing number of EU member states put together expert groups, organised public debates, and published strategies designed to grapple with the possible implications of AI. European countries have developed training programmes, allocated investment, and made plans for cooperation in the area. Next year is likely to be an important one for AI in Europe, as member states and the European Union will need to show that they can fulfil their promises by translating ideas into effective policies.

But, while Europeans are doing a lot of work on the economic and societal consequences of the growing use of AI in various areas of life, they generally pay too little attention to one aspect of the issue: the use of AI in the military realm. Strikingly, the military implications of AI are absent from many European AI strategies, as governments and officials appear uncomfortable discussing the subject (with the exception of the debate on limiting “killer robots”). Similarly, the academic and expert discourse on AI in the military tends to overlook Europe, predominantly focusing on developments in the US, China, and, to some extent, Russia. This is likely because most researchers consider Europe to be an unimportant player in the area. A focus on the United States is nothing new in military studies, given that the country is the world’s leading military and technological power. And China has increasingly drawn experts’ attention due to its rapidly growing importance in world affairs and its declared aim of increasing its investment in AI. Europe, however, remains forgotten.

This double neglect means that there is comparatively little information available about European thinking on AI in the military or on how European armed forces plan to use AI – even though several European companies are already developing AI-enabled military systems. This paper helps fill some of these gaps in knowledge. It begins with a discussion of European AI applications in the military realm as they are currently being imagined, researched, and developed. This first section covers the various ways in which AI can support military systems and operations, providing an overview of possible AI applications, their advantages, and the risks they create. The second part of the paper maps and assesses the approaches that Germany, France, and the United Kingdom are taking to using artificial intelligence in the military. It does so by analysing official documents, information gathered through personal conversations and interviews, and reports on industry projects now in development.

The goal of the paper is to contribute to the emerging European debate on military AI, and to shine a light on current agreements and disagreements between important European players. As efforts to strengthen European defence, and to develop “European technological sovereignty”, have become a main focus of the EU and a primary goal of the new European Commission, this topic will be a subject of debate for years to come. Given that the debate on AI is evolving, this paper’s discussion of European positions provides only a snapshot of the present situation. That said, some of the differences between these positions appear to have deep philosophical or cultural roots, and so they are likely to persist for some time.

AI in the military: Key issues

What is AI?

“Artificial intelligence” (AI) is an ill-defined term, not least because its meaning has changed over time – and because even “intelligence” is notoriously hard to define. Generally, AI refers to efforts to build computers and machines that can perform actions one would expect to require human intelligence, such as reasoning and decision-making. However, whenever scientists have created systems that can perform tasks once thought to be the preserve of humans, the threshold for what counts as AI has risen, with ever more complex tasks becoming the new test. AI should therefore not be seen as a fixed state that is either reached or not, but should be considered in the light of ever-evolving technologies.

Despite discussion of the possible emergence of “superintelligence”, today’s AI applications are “narrow”, meaning that they focus on a specific task. “General” AI, capable of reproducing human-level intelligence in various tasks, remains in the realm of science fiction, for now. In fact, most current narrow AI is “brittle”, as it fails to complete tasks that slightly differ from its training. Currently, the most important advances in AI are being made through machine learning techniques, particularly so-called “deep learning” and neural networks. Given this, one way to conceive of AI is as a shift from humans telling computers how to act to computers learning how to act. In this paper, “AI” mostly refers to machine learning technology.
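
To make this shift concrete, consider the minimal sketch below. It is purely illustrative – the sensor readings, labels, and threshold are invented for this paper – but it contrasts a classical, hand-written rule with a model that learns an equivalent rule from labelled examples (here using the scikit-learn library).

```python
# Toy contrast between rule-based and learned classification.
# Illustrative only: data, labels, and threshold are invented.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical sensor readings: [signal_strength, speed_kmh]
readings = [[0.9, 120], [0.8, 110], [0.2, 15], [0.1, 10], [0.7, 95], [0.3, 20]]
labels = ["aircraft", "aircraft", "bird", "bird", "aircraft", "bird"]

# Classical approach: a human writes the decision rule explicitly.
def rule_based(reading):
    return "aircraft" if reading[1] > 50 else "bird"

# Machine learning approach: the rule is inferred from labelled examples.
model = DecisionTreeClassifier().fit(readings, labels)

new_reading = [0.6, 100]
print(rule_based(new_reading))          # rule written by a human
print(model.predict([new_reading])[0])  # rule learned from data
```

The point is not the specific classifier but the division of labour: in the first case a human encodes the decision logic; in the second, the logic is inferred from data – which is also why such systems can fail unexpectedly when that data is unrepresentative.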

Given the many possible applications of AI, one should not think of AI as a standalone capability or single technology. Rather, it is more accurate to think of AI as an enabler, and to speak of systems as “AI-enabled”: for example, “AI-enabled cyber defences”.

Military professionals, experts, and strategists agree that AI will increasingly be used in this realm, and that this will have important security implications. Assessments of these implications, however, range from maximalist statements about AI causing a “revolution in military affairs” – with authors such as Frank Hoffman claiming that AI may “alter the immutable nature of war”, and Kenneth Payne positing that AI changes “the psychological essence of strategic affairs” – to less sweeping views that outline more specific changes in weapons technology. Yet no one appears to believe that AI will have no impact on military affairs, even if authors such as Andrea Gilli point to mitigating factors and historical precedents that put the expected change into perspective.

Types of military AI

It is currently unclear how AI will eventually affect the military – and, ultimately, strategic stability and international relations more broadly. Nonetheless, AI has many possible uses in military systems and operations.

Judging by the public debate on the issue, one could get the impression that military AI is all about killer robots. Formally known as lethal autonomous weapons systems (LAWS), these AI-enabled systems have captured the public’s imagination. They can carry out the critical functions of a targeting cycle in a military operation – including the selection and engagement of targets – without human intervention, relying on AI to make decisions rapidly and without human involvement: that is, autonomously. For now, such systems are rare and only used for specific missions. Activists are currently lobbying for a pre-emptive ban on the development of more capable systems at the United Nations, which has discussed LAWS since 2013. Thanks to the work of activist groups such as the Campaign to Stop Killer Robots and the International Committee for Robot Arms Control, and to warnings from experts, LAWS have become a topic of public discussion and concern.

While this engagement is laudable and the international discussions are important, they have focused the public debate almost exclusively on this specific use of AI in the military realm. But LAWS, and AI-enabled autonomy more generally, represent only one of several ways in which AI can be employed for defence purposes – and arguably the most extreme and controversial one.

Another, more obvious, use of machine learning in the military realm is in intelligence, surveillance, and reconnaissance. Artificial intelligence is famously good at working with big data to, for example, identify and categorise images and text. In a military context, AI can help sift through the mountains of data collected by various sensors, such as the hundreds of thousands of hours of video footage gathered by US drones. It can compare photographs to single out changes from one picture to the next – indicating, say, the presence of an explosive device planted in the time between the two images being taken. The Pentagon famously partnered with Google and other technology firms in 2017 on “Project Maven”, also known as the “Algorithmic Warfare Cross-Function Team”, which used machine learning to comb through drone footage and identify hostile activity. Other AI applications in this context include image and face recognition, speech recognition and translation, the geolocation of images, and pattern-of-life analysis. According to a US congressional research report, the CIA alone has launched 140 projects in this area.
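
As a rough illustration of the change-detection task described above – and only that: real systems rely on learned models rather than simple pixel arithmetic, and the arrays here are synthetic stand-ins for imagery – the following sketch flags the region in which two “photographs” differ.

```python
# Toy change detection: flag where two reconnaissance images differ.
# Synthetic data; plain pixel differencing stands in for learned models.
import numpy as np

rng = np.random.default_rng(2)
before = rng.random((64, 64))      # imagery from the first pass
after = before.copy()
after[30:34, 40:44] += 0.8         # something appears between the two passes

diff = np.abs(after - before)
changed = np.argwhere(diff > 0.5)  # pixels exceeding a change threshold
rows, cols = changed[:, 0], changed[:, 1]
print(f"Change detected: rows {rows.min()}-{rows.max()}, "
      f"cols {cols.min()}-{cols.max()}")
```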

Select military AI applications

  • Intelligence, surveillance, and reconnaissance (ISR)
  • Logistics (predictive maintenance, efficient shipping, and autonomous transport systems)
  • Cyber operations (both defensive and offensive)
  • Command and control (centralised planning that combines various flows of information, from different sensors, into a single source of intelligence)
  • Semi-autonomous and autonomous vehicles and weapons (including lethal autonomous weapons systems, LAWS)
  • Swarming, ie the coordination of many units working together
  • Forecasting
  • Training (using methods such as war games and simulations)

Another military AI application is in logistics, a crucial but often undervalued, non-frontline element of any operation. Artificial intelligence can support military (and civilian) logistics by enabling predictive maintenance: monitoring the functions of a system, such as an aircraft, and predicting when its parts will need to be replaced based on various sensory inputs and data analysis. Equally, AI can help improve the efficiency of logistics by, for instance, ensuring that supplies are delivered in appropriate quantities and at the right time.
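
The sketch below illustrates the predictive-maintenance idea in minimal form. All figures are invented: the model learns from past inspections how sensor readings relate to how long a part subsequently lasted, and then estimates the remaining life of a part currently in service.

```python
# Minimal predictive-maintenance sketch with invented numbers:
# estimate the remaining life of a part from current sensor readings.
import numpy as np
from sklearn.linear_model import LinearRegression

# Past inspections: [vibration_level, oil_temperature_C] per part,
# paired with the hours the part actually lasted afterwards.
X = np.array([[0.2, 70], [0.5, 80], [0.8, 95], [0.3, 72], [0.9, 100], [0.6, 85]])
hours_until_failure = np.array([400, 220, 60, 350, 30, 160])

model = LinearRegression().fit(X, hours_until_failure)

# Current reading from a part in service; a planner would compare the
# estimate against the flight schedule to time the replacement.
current = np.array([[0.7, 90]])
print(f"Estimated hours before replacement: {model.predict(current)[0]:.0f}")
```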

Many experts believe that some of the most important AI-enabled changes in warfare will occur in the cyber realm, due to its relative lack of physical limitations. Cyber wars could soon involve autonomous attacks and self-replicating cyber weapons. Artificial intelligence is widely expected to make inroads into offensive and defensive cyber operations, as it will likely allow actors to both find and patch up cyber vulnerabilities at greater speed.

In many areas, AI makes processes faster, more efficient, or cheaper. But while such efficiency gains are important, especially for cash-strapped militaries, technologies can only be truly ground-breaking if they provide new capabilities or allow for tactics that go beyond what exists already. Artificial intelligence might be able to do so. This is most clearly the case in the areas of swarming and autonomous, unmanned vehicles (including, but not limited to, LAWS).

Artificial intelligence and autonomy are different things, but they are closely related and often discussed together. In this context, “artificial intelligence” denotes a system’s ability to determine the best course of action to achieve its goals, while “autonomy” describes a system’s freedom to accomplish its goals. As the United Nations Institute for Disarmament Research argues, “the rapidly advancing field of AI and machine learning has significant implications for the role of autonomy in weapon systems. More intelligent machines are capable of taking on more challenging tasks in more complex environments.” Thus, AI can enable autonomy because intelligent systems can be given greater freedom to act.

Militaries are exploring giving systems increased autonomy because machines are faster than humans at analysing, and taking decisions based on, data. AI-enabled autonomy is therefore particularly attractive for defensive systems, such as those that provide protection against rockets or missiles. Providing unmanned systems – most of which are, for now, largely remote-controlled or pre-programmed – with more autonomy can also make them stealthier: unlike their remote-controlled counterparts, autonomous systems can function without communications uplinks or downlinks to an operator, making them harder for enemy defences to detect. And, perhaps most obviously, autonomous systems could help reduce militaries’ reliance on humans. While this is also a cause for concern, it can reduce human error and costs, and alleviate the physical or cognitive strain on soldiers. Importantly, autonomy can be valuable for non-lethal operations, as one can allow for autonomous decision-making in some areas, but not in targeting cycles.

Artificial intelligence is important for swarming due to the complexity of the task. Swarming means combining many systems – such as drones, unmanned boats, or tanks – in an operation in which they act independently but in a coordinated manner. The idea is that (cheap) swarming robots can perform complex tasks by acting collectively. In the civilian world, drone swarms have been used as, for example, impressive substitutes for fireworks. Experts have argued that, in the military realm, swarms would be ideal for “overwhelming a nonlinear battlespace, ‘creating a focused, relentless, and scaled attack’ … using ‘a deliberately structured, coordinated, strategic way to strike from all directions’”. This means that swarms provide genuinely new capabilities: as political scientist Paul Scharre points out, “the result will be a paradigm shift in warfare where mass once again becomes a decisive factor on the battlefield”. He explains that a “military deploying a swarm offers no massed formation for the enemy to flank, pin down, and destroy. The massed elements that make up a swarm could disperse, then rapidly coalesce to reattack. While militaries generally try to minimize the number of simultaneously moving elements in manoeuvre warfare – in order to reduce the risk of fratricide – in swarm combat, all units would be moving at the same time, independently but coordinated. To confront a swarm will be to confront an ever-shifting cloud that can never be pinned down and reacts instantly to changes on the battlefield”. Swarms make possible capabilities such as flying minefields, coordinated and automated waves of attacks, and “kill webs” – highly interconnected, dynamic, distributed systems of systems and sensors. Because of this, armed forces and defence firms around the world have expressed a lot of interest in swarms, leading to military trials.
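
The coordination problem underlying swarming can be illustrated with a simplified “boids”-style update rule, sketched below with invented parameters: each unit acts only on local information – attraction to a shared objective, cohesion with the group, and collision avoidance – yet coherent collective movement emerges without any central controller.

```python
# Simplified swarm coordination ("boids"-style); parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(20, 2))  # 20 drones on a 2D plane
target = np.array([100.0, 100.0])              # shared objective

def step(positions, target, sep_dist=5.0):
    new_positions = positions.copy()
    for i, pos in enumerate(positions):
        to_target = target - pos                 # attraction to the objective
        cohesion = positions.mean(axis=0) - pos  # stay with the group
        separation = np.zeros(2)
        for j, other in enumerate(positions):    # avoid collisions locally
            offset = pos - other
            dist = np.linalg.norm(offset)
            if i != j and dist < sep_dist:
                separation += offset / (dist + 1e-9)
        new_positions[i] = pos + 0.05 * to_target + 0.02 * cohesion + 0.5 * separation
    return new_positions

for _ in range(100):  # simulate 100 time steps
    positions = step(positions, target)
print("Mean distance to objective:",
      np.linalg.norm(positions - target, axis=1).mean())
```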

Concerns and dangers

The concerns expressed by organisations such as the Campaign to Stop Killer Robots and the International Committee for Robot Arms Control are specifically about LAWS and autonomy (including non-AI autonomy). But these concerns also apply to military AI applications more broadly, and are wide-ranging, often beginning with the ethical and legal dimensions of using autonomous systems in warfare. Ethicists argue that, as machines are unable to appreciate the value of human life and the significance of its loss, allowing such systems to kill would violate the principles of humanity. Political scientist Frank Sauer believes that “the society that allows such a practice and no longer troubles its collective human conscience with war-time killing risks nothing less than giving up the most basic values of civilisation and fundamental principles of humanity”. There is an intense legal debate about whether LAWS could comply with the laws of war – whether, for instance, algorithms could become sufficiently adept at distinguishing between civilians and combatants (discrimination), accurately judging the proportionality of means and ends, and weighing the military necessity of the use of force.

In addition to legal and ethical concerns – which predominantly apply to autonomous systems – AI-enabled weapons also pose technical and political challenges. Because AI systems are “black boxes” whose behaviour is learned rather than explicitly programmed, it is impossible for a human to fully comprehend their reasoning. This concern applies to all military AI-enabled systems, including non-lethal and non-autonomous ones, and transcends the military realm – although, arguably, it is particularly relevant to systems that could take life-or-death decisions. Because of the black-box phenomenon, it is considerably harder for humans to anticipate or detect mistakes made by AI systems than those made by traditional computers.

Such mistakes can result from training on biased data. For instance, when civil rights organisations tested facial recognition systems used by police forces, the systems falsely identified a disproportionately high number of innocent people with darker skin as criminals. Similarly, algorithms developed to help human resources departments hire staff have severely discriminated against women. Hence, AI-enabled weapons have vulnerabilities of a type that military planners and commanders find difficult to detect and foresee. And adversaries can exploit this weakness through activities such as sabotage.
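
The mechanism behind such failures can be demonstrated in a few lines. The sketch below uses entirely synthetic data: it trains a classifier on a dataset in which one group is barely represented, then measures accuracy for each group separately. The underrepresented group ends up with error rates close to chance.

```python
# Sketch of how skewed training data produces skewed error rates.
# Entirely synthetic: "group" stands in for any underrepresented population.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, shift):
    # Two features per sample; the true label depends on the first feature,
    # relative to each group's own feature distribution.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] > shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented
# and its feature distribution is shifted.
Xa, ya = make_data(1000, shift=0.0)
Xb, yb = make_data(20, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh data from each group: group B fares far worse.
for name, shift in [("A", 0.0), ("B", 3.0)]:
    X_test, y_test = make_data(500, shift)
    print(f"Accuracy for group {name}: {model.score(X_test, y_test):.2f}")
```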

Finally, there are political concerns that the development of AI-enabled weapons could spark an arms race. Due to the speed with which AI-enabled weapons may be able to act – particularly if they are given high levels of autonomy – the first state to use them could gain a significant military advantage. Thus, there is a danger that such systems could lead to unchecked escalation: once one country starts to use them, others might feel they have to follow suit, leading to an armament spiral. There is also a danger that, following unexpected events, AI-enabled autonomy could spark “flash wars” reminiscent of “flash crashes” on stock markets – in which hundreds of billions of dollars have been wiped off share prices faster than humans can react. This is a particular problem in the cyber realm, where there are relatively few physical limitations to algorithms’ power.

Europe’s development of AI

National AI strategies and European thinking

Throughout Europe, governments are writing, or have already published, national strategies on how to handle and support the development and application of AI in their countries. The European Commission, in its “Coordinated Plan on Artificial Intelligence”, asked EU member states to put in place strategies or programmes on AI by mid-2019. So far, at least eight EU member states – Belgium, Denmark, France, Finland, Germany, Italy, Sweden, and the UK – have followed this advice and published their strategies. At least nine others – Austria, the Czech Republic, Estonia, Latvia, Poland, Portugal, Slovenia, Slovakia, and Spain – are in the process of writing them. In addition, in May 2018, Denmark, Estonia, Finland, the Faroe Islands, Iceland, Latvia, Lithuania, Norway, Sweden, and the Åland Islands released a “Declaration on AI in the Nordic-Baltic Region”.

Of the big three – Germany, France, and the UK – France has shown the most interest in AI in the last few years. Indeed, the country has made AI a top-level priority, with President Emmanuel Macron discussing the topic at length in a widely read Wired interview (similar to the magazine’s 2016 interview on AI with Barack Obama, then US president). In March 2018, France published the 154-page “For Meaningful Artificial Intelligence – Towards a French and European Strategy” in French and English.

Germany, in contrast, has often been criticised for being slow to address AI issues. In response, the German authorities noticeably sped up their activities in the area in the second half of 2018 and in 2019. In June 2018, the Bundestag set up an Enquete-Kommission, a committee of inquiry comprising members of parliament and experts. The following month, the government published the “cornerstones of its AI strategy”, and created a digital council to advise it. This was followed by the publication of the German AI strategy in November 2018. Throughout the year, the government, ministries, and other private and public actors held public conferences, online consultations, and expert hearings on AI. With its 47-page “National Strategy for Artificial Intelligence – AI made in Germany”, Berlin adopted a different approach to Paris. Whereas the French strategy was an expert paper written by a team of specialists from various sectors under the guidance of Cédric Villani – one of France’s AI stars and a member of parliament for La République en Marche! – the German strategy is the product of a ministry-wide consultation under the leadership of the ministries of education and research, economy and energy, and labour and social affairs. Accordingly, the strategy primarily focuses on research, the economy, and society.

The UK does not have a single designated AI strategy but rather several documents that can, taken together, be analysed as such, even though, as will be shown below, they do not form a coherent whole. In April 2018, the UK released the AI Sector Deal as part of the government’s industrial strategy. The document focuses on strengthening British research and development, education, and data ethics, and promises investment in research. It announced the establishment of the Centre for Data Ethics and Innovation, which has since published several papers related to ethical AI. The UK’s declared aim is to become “a global leader in this technology that will change all our lives”. The same month, the UK parliament’s select committee on AI published a 183-page report entitled “AI in the UK: ready, willing and able?” The report proposes five principles for an “AI Code”, stating that the technology should: be developed for the common good and the benefit of humanity; operate on principles of intelligibility and fairness; avoid undermining the data rights or privacy of individuals, families, or communities; help all citizens fulfil their right to be educated, to enable them to flourish mentally, emotionally, and economically; and never be vested with the autonomous power to hurt, destroy, or deceive human beings. The British government published a response to the committee’s report in June 2018. In addition to these documents, various government bodies have produced a range of smaller AI-related publications.

There are other marked differences between the ways that France, Germany, and the UK approach AI. The French strategy has a generally upbeat tone, calling AI “one of the most fascinating scientific endeavors of our time”. Villani, in his foreword to the strategy, notes his personal enthusiasm for AI and expresses its authors’ conviction that “France – and Europe as a whole – must act synergistically, with confidence and determination, to become part of the emerging AI revolution”. This approach seems to accord with French citizens’ beliefs: a recent IFOP poll found that 73 percent of them have a positive or very positive view of AI. The French AI strategy is grounded in geopolitical concerns, noting that “France and Europe need to ensure that their voices are heard and must do their utmost to remain independent. But there is a lot of competition: the United States and China are at the forefront of this technology and their investments far exceed those made in Europe.”

In contrast, the German strategy views AI primarily through an economic lens. It concentrates on preserving the strength of German industry – particularly small and medium-sized companies, the famous Mittelstand – by ensuring that AI will not allow other countries to overtake Germany economically. The government’s hope is that AI will help the Mittelstand continue to manufacture world-leading products. The German approach to AI is thus markedly driven by fear of losing economic opportunities, causing it to adopt a defensive tone. A recent poll found that 69 percent of Germans believe that, because of AI, a “massive number of jobs” will be lost (a belief that is particularly prevalent among 16-24-year-olds), while 74 percent worry that “when machines decide, the human element will be lost”.

The British approach also has an economic and private sector focus, aiming to improve the business environment. It underlines one of the UK’s strengths: the number of AI companies based in the country, most notably AI champion DeepMind (which, although still based in London, was bought by Google in 2014). Another focus is on AI in the health sector. The UK’s mission is “to make the UK a world leader in the use of data, AI and innovation to transform the prevention, early diagnosis and treatment of chronic diseases by 2030”. However, neither the UK industrial strategy nor the select committee’s report devotes attention to geopolitical concerns. The committee was appointed by the House of Lords “to consider the economic, ethical and social implications of advances in artificial intelligence”; the five key questions it asks concern the effects of AI on people’s everyday lives, the potential opportunities and risks of AI for the UK, and ethical AI issues.

Military AI in Europe

Only France’s strategy discusses security and defence in detail – although the UK select committee’s paper at least discusses LAWS. Until recently, the EU more or less ignored the defence and security elements of AI, but it has now begun to engage with the issue, after Finland put the topic on the agenda during its presidency of the EU Council in the second half of 2019. The EU now aims to strengthen European defence capabilities through a variety of tools, and has earmarked up to 8 percent of the European Defence Fund’s 2021-2027 budget for disruptive defence technologies and high-risk innovation. Increased European engagement with the topic therefore now seems as inevitable as it is necessary.

France

France, more than any other European country, views the military realm as an important element of its AI development efforts. The French strategy designates defence and security as one of its four priority AI sectors for industrial policy. Indeed, one of the authors of the strategy is an engineer from the French defence procurement agency. In early 2018, the French Ministry of Defence (MoD) announced that it planned to invest €100m per year in AI research. In September 2019, France published a military AI strategy, a report written by a team from the MoD – making it the first European state to publish a strategy specifically on military AI. The 34-page document outlines France’s approach to AI in the military, provides examples of AI-enabled military applications, and announces the creation of several bodies that will help the French military adopt AI. In the first of its two parts, the strategy takes stock of what AI may mean for the military realm and sets out France’s main principles on military AI. In many regards, the military AI strategy follows the ideas of France’s national AI strategy, sharing its focus on data and talent, and adopting a similar geopolitical approach. Whereas the national AI strategy warns against France and other European states becoming “cybercolonies” of the US and China, the military strategy maps out the international space, describing these two countries as AI “superpowers”. It conceives of Europe as “an intermediate power in the making”, and of France – together with Canada, Germany, Israel, Japan, Singapore, South Korea, and the UK – as part of the “second circle” in AI. The document repeatedly expresses concern about dependence on other countries (particularly on private companies from other states) and adopts “preserving a heart of sovereignty” as one of its guiding principles.

Germany

As discussed above, the military, security, and geopolitical elements of AI are markedly absent from the German national AI strategy – despite the fact that the document is otherwise comprehensive, listing almost all other areas that the technology is likely to affect. The lack of focus on foreign policy and defence has also been criticised by members of the Bundestag’s Enquete-Kommission. The strategy includes only one sentence on security and defence, which shifts all responsibility for this area to the MoD: “with regard to new threat scenarios for internal and external security, in addition to research on civil security, the Federal Government [will] promote research to detect manipulated or automatically generated content in the context of cyber security. The research on AI applications, in particular for the protection of external security and for military purposes, will be carried out within the scope of the departmental responsibilities.”

One could argue that the absence of defence elements from the strategy is due to the fact that, while the MoD and the Foreign Ministry were consulted in the process of writing it, they did not have a leading role in formulating the document. However, rather than being an outlier, the national strategy seems representative of Germany’s generally cautious approach to military AI. A report for NATO’s parliamentary assembly argues that, given AI’s potential value to the armed forces, NATO’s leaders in science and technology – such as France, Germany, the UK, and the US – must invest in defence-related AI research and development. But the report singles out Germany as lagging in this area, commenting: “it is encouraging to see that all of them are indeed investing substantial resources into defence-related AI, with the possible exception of Germany.”

Germany’s public and political debate on AI in the military focuses almost exclusively on the control of autonomous weapon systems – the only area the German government seems comfortable engaging with publicly. The Foreign Ministry organised an international conference on the topic in March 2019, and has held a series of follow-up meetings. This approach is mirrored in the work of German think-tank Stiftung Wissenschaft und Politik, which has a clear focus on regulating and limiting the use of robots and AI in warfare. The organisation created a working group, the International Panel on the Regulation of Autonomous Weapons, which concentrates on how to define LAWS and human control, as well as how to regulate these systems. All of the panel’s reports fit with this pattern, especially “Preventive regulation of autonomous weapons”, published in both English and German in 2019. No report or study has engaged with the idea that the technology could provide Germany and the Bundeswehr with new capabilities.

The national AI strategy’s reference to “departmental responsibilities” could be interpreted as giving the German MoD a mandate to develop its own strategy on the military applications of AI. However, given the MoD’s track record of rarely, if ever, publishing doctrinal documents, it is unlikely that the ministry will do so publicly. And yet, to everyone’s surprise, in October 2019, the Amt für Heeresentwicklung – the unit of the army charged with developing new concepts and ideas for ground forces – published a position paper entitled “Artificial Intelligence in the Land Forces” (and an English version of the document shortly thereafter). The paper describes four areas of action for AI development in the army: the further improvement of existing systems in areas such as intelligence, surveillance, and reconnaissance with image recognition; new weapon systems (a category in which, surprisingly, the paper mentions only small drones); personnel and materiel management (in, for example, predictive maintenance); and training.

Hence, in this position paper, the army acknowledges the existence and importance of AI in systems other than weapons (even though the fictional scenario that features at the beginning of the paper focuses exclusively on autonomous drones and AI-enabled weapons). While the army’s position paper is a laudable effort, it sits rather uneasily with the national AI strategy. As the strategy does not mention military AI, and as there is no dedicated military AI strategy, the paper is somewhat disconnected from other German publications. Indeed, as one of the paper’s authors said in a private conversation, the MoD was not particularly pleased with its publication. Whereas the French military AI strategy announces specific initiatives, the German army’s paper can only suggest them, as there is no guarantee that the government will take up any of its ideas, especially organisational ones. It is therefore unclear what will become of the concepts developed in the position paper.

The UK

The only military AI applications mentioned in the UK’s national AI documents are LAWS. The House of Lords committee report argued that “perhaps the most emotive and high-stakes area of AI development today is its use for military purposes”, but conceded that it did not explore “this area with the thoroughness and depth that only a full inquiry into the subject could provide”. Nonetheless, the committee report discussed autonomous weapons. Here, the committee criticised the official UK definition of LAWS as systems “capable of understanding higher-level intent and direction”. Many experts have condemned this definition for setting the bar implausibly high, given that most conceivable autonomous weapon systems would not meet it. By adopting such a demanding definition, the government can claim that it is not developing or using LAWS even while doing so. In its response to the report, the government rejected outright the recommendation to realign the UK’s definition of autonomous weapons with that used by the rest of the world.

While the UK does not have a designated military AI strategy, British MoD units have published many doctrinal and conceptual documents that relate to AI. Yet, as there is no overarching strategy behind these documents, it is difficult to determine the UK’s positions and plans. The MoD’s “Global Strategic Trends”, published in October 2018, lists AI as the first of 16 strategic challenges. The document discusses both AI’s potential impact and the uncertainty surrounding its development. The report assesses that “a failure to understand AI capabilities may create vulnerabilities and cede advantages to competitors”, and that “conflicts fought increasingly by robots or autonomous systems could change the very nature of warfare”. However, the report does not go into much detail on the UK’s AI capabilities and plans. Similarly, “Mobilising, Modernising & Transforming Defence”, published in 2018, underlines the fact that the MoD is pursuing modernisation “in areas like artificial intelligence, machine-learning, man-machine teaming and automation to deliver the disruptive effects we need in this regard”. In late 2018, then-defence secretary Gavin Williamson announced new funding for AI projects.

Many British initiatives in the area are taking place, but without featuring much in the debate among the broader public or political elites. This is partly due to recent changes in the leadership of the government, and politicians’ and media outlets’ focus on Brexit.

In response to a Freedom of Information request made in late 2018, the MoD revealed a range of AI-related projects it is pursuing. One example is the Defence and Security Partnership, a training programme for the MoD and the Government Communications Headquarters, the UK’s signals intelligence agency, run in cooperation with the Alan Turing Institute. The MoD also announced that the Defence Science and Technology Laboratory (Dstl) – its research arm – was working on “how automation and machine intelligence can analyse data to enhance decision making in the Defence and security sectors”. Dstl launched the AI Lab in 2018. Meanwhile, the UK’s Autonomy Programme researches technologies that can be used in all environments and that will have a significant impact on existing military capabilities. Activities covered by this programme include algorithm development, AI, machine learning, “developing underpinning technologies to enable next generation autonomous military-systems”, and the optimisation of human-autonomous systems teaming. The Development, Concepts and Doctrine Centre – the MoD’s in-house think-tank – has published a concept note on human-machine teaming.


Big-ticket AI-enabled European military systems in development

  • BAE Taranis (UK): an armed drone system currently in the demonstrator and testing phase. Judging by publicly available information, this is a highly autonomous system that can take off, land, and identify targets without human intervention. With its small radar profile, Taranis has conducted automated searches in trials, locating and identifying targets according to its assignments. However, after a series of successful tests and trials between 2013 and 2015, the development of the system appears to have stalled.
  • Dassault nEUROn (France, with Greece, Italy, Spain, Sweden, and Switzerland): an unmanned combat air vehicle similar to Taranis. Its demonstrator has performed naval and low-observability tests. The Global Security Initiative’s autonomy database ranks nEUROn as the most autonomous system of the 283 it has analysed.
  • Airbus and Dassault Future Combat Air System, or FCAS (France, Germany, and Spain): a capability that involves teaming between a manned fighter and swarms of autonomous drones. It is in an early stage of development.
  • BAE Tempest (UK, with Italy): a sixth-generation aircraft in an early stage of development. Planned for deployment by 2035, the Tempest will reportedly include many new AI-enabled technologies.
  • ARTEMIS (France): a big data platform for the French defence procurement agency that is widely perceived as the first step towards a sovereign architecture for massive data processing and exploitation.

Conclusion

This paper has shown that there are many possible uses of AI in the military and security realm – most of which receive little public attention due to the dominance of the debate on killer robots. A comparative study of the three biggest European states reveals that France and Germany appear to be at opposite ends of the AI spectrum in Europe. France sees AI in general as an area of geopolitical competition, and military AI in particular as an important element of French strategy. In contrast, Germany has been much more reluctant to engage with the topic of AI in warfare, and appears uninterested in the geopolitics of the technology. Military AI seems to be an acceptable topic of discussion for Germany only in the context of arms control. For now, the UK is somewhere between these two positions. It is not as outspoken about military AI as France, but it is clearly interested in the military opportunities that AI provides. Independently of their governments’ positions, all three countries’ defence industries are developing AI-enabled capabilities.

As it is relatively early days in the development and use of operational AI-enabled military systems, European countries’ positions may yet align over time. If they do not, however, this could pose real problems for European defence cooperation. Given that the EU is investing a great deal of effort in this area through instruments such as Permanent Structured Cooperation (PESCO) and the European Defence Fund, intra-European disagreements on one of the most crucial new technologies are a cause for concern. This is particularly true for pan-European projects such as FCAS, a fighter jet project involving France, Germany, and Spain that is set to include various AI-enabled capabilities. If France pushes for greater AI development – potentially even leading to LAWS – while Germany does not, this cooperation could soon run into problems.

In this context, the EU could play an important role in helping member states harmonise their approaches to military AI. The EU already acts as a coordinating power for European national AI strategies, with “ethical AI” as its guiding principle. A similar approach could work for military AI. The European Commission should draft a coordinating strategy for military AI, outlining its ideas for areas of development in which common European engagement would be particularly useful (such as sharing systems to train algorithms), while setting red lines (in areas such as the development and use of LAWS). The EU should ask member states to respond to this guidance by outlining their ideas on, and approaches to, AI. In this way, European states could take advantage of one another’s expertise in AI development while working together to improve Europe’s military capabilities.

About the author

Ulrike Franke is a policy fellow with the European Council on Foreign Relations. She was a governance of AI fellow at the Future of Humanity Institute at Oxford University in summer 2019 and remains a policy affiliate at the institute.

Acknowledgements

The author thanks Konrad-Adenauer-Foundation London for its financial support, without which this paper would not have been possible. Some of the research for this paper was done while the author was a Governance of AI Fellow at the Future of Humanity Institute in Oxford. The author thanks the researchers there, as well as the participants in the off-the-record KAS-ECFR workshop on military AI in Europe in September 2019 for their input.

The European Council on Foreign Relations does not take collective positions. ECFR publications only represent the views of their individual authors.
