Dr Alex Leveringhaus


Lecturer in Political Theory; Co-Director, Centre for International Intervention
+44 (0)1483 689197
32 AP 01
Mondays, 10:00-12:00; Wednesdays, 10:00-12:00

About

University roles and responsibilities

  • Exams and Assessments Officer

    Publications

    Highlights

    Leveringhaus, Alex (2016), Ethics and Autonomous Weapons (Palgrave)

    Leveringhaus, Alex (2016), ‘What’s so Bad about Killer Robots?’, Journal of Applied Philosophy.

    Alex Leveringhaus (2022), ‘Morally Repugnant Weaponry? Ethical Responses to the Prospect of Autonomous Weapons’, In: The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives, pp. 475-487, Cambridge University Press

    In this chapter, political philosopher Alex Leveringhaus asks whether Lethal Autonomous Weapons (AWS) are morally repugnant and whether this entails that they should be prohibited by international law. To this end, Leveringhaus critically surveys three prominent ethical arguments against AWS: firstly, that AWS create ‘responsibility gaps’; secondly, that their use is incompatible with human dignity; and, thirdly, that AWS replace human agency with artificial agency. He argues that some of these arguments fail to show that AWS are morally different from more established weapons. However, the author concludes that AWS are currently problematic due to their lack of predictability.

    Amelia Hadfield, Alex Leveringhaus (2023), ‘Autonomous weaponry and IR theory: conflict and cooperation in the age of AI’, In: Handbook on the Politics and Governance of Big Data and Artificial Intelligence, pp. 167-187, Edward Elgar Publishing

    Over the last decade or so, interest in Lethal Autonomous Weapons Systems (LAWS) has grown among academics, policy makers, and campaigners. The debate, however, has been dominated by international lawyers, ethicists, and technologists at the expense of other analytical lenses. This chapter uses International Relations Theory (IRT) in order to provide a fresh perspective, focussing on realist, liberal, and constructivist approaches. Beginning with a conceptual discussion of the nature of LAWS, the chapter uses IRT to assess the potential impact of LAWS on the ability and willingness of states to cooperate under conditions of anarchy. The chapter concludes that while established IRTs offer useful insights into the impact of LAWS on wider international security, LAWS also push the conceptual boundaries of IRT. Over time, IRT might have to adapt itself to deal with the practical consequences of the introduction of LAWS.

    Alex Leveringhaus (2023), ‘Technology in Espionage and Counterintelligence: Some Cautionary Lessons from Armed Conflict’, In: Ethics & International Affairs 37(2), pp. 147-160, Cambridge University Press

    This essay contends that the ethics around the use of spy technology to gather intelligence (TECHINT) during espionage and counterintelligence operations is ambiguous. To build this argument, the essay critically scrutinizes Cécile Fabre's recent and excellent book Spying through a Glass Darkly, which argues that there are no ethical differences between the use of human intelligence (HUMINT) obtained from or by human assets and TECHINT in these operations. As the essay explains, Fabre arrives at this position by treating TECHINT as a like-for-like replacement for HUMINT. The essay argues instead that TECHINT is unlikely to act as a like-for-like replacement for HUMINT. As such, TECHINT might transform existing practices of espionage and counterintelligence, giving rise to new ethical challenges not captured in Fabre's analysis. To illustrate the point, the essay builds an analogy between TECHINT and recent armed conflicts in which precision weapons have been deployed. Although precision weapons seem ethically desirable, their availability has created new practices of waging war that are ethically problematic. By analogy, TECHINT, though not intrinsically undesirable, has the capacity to generate new practices of intelligence gathering that are ethically problematic—potentially more than HUMINT. Ultimately, recent negative experiences with the use of precision weaponry should caution against an overly positive assessment of TECHINT's ethical desirability.

    Alex Leveringhaus (2019), ‘Bugsplat: The Politics of Collateral Damage in Western Armed Conflicts’, 21(2), pp. 274-279, Taylor & Francis

    Introduction Given that western liberal democracies are typically advocates of human rights, Bruce Cronin’s monograph, Bugsplat: The Politics of Collateral Damage in Western Armed Conflict, makes for uncomfortable reading.1 As Bugsplat, whose title is derived from the informal name given to the software programme used by the US military to model collateral damage, shows, Western democratic states, most notably the United States of America, other NATO member states, as well as Israel, conduct military campaigns that result in high levels of collateral damage. Worse still, these levels are, according to Cronin, directly related to the tactics, strategies, and weapons technologies utilized by western states. The high levels of collateral damage in western wars give rise to an interesting research puzzle. Since western states largely comply with international humanitarian law (IHL) and have sophisticated precision weaponry at their disposal, one would expect there to be less collateral damage. Indeed, this is the research puzzle driving Bugsplat’s analysis. Here, I do not take issue with Cronin’s solution to this puzzle. Instead, I use this opportunity to discuss the ethical issues arising from Bugsplat, which Cronin largely sidesteps. An engagement with ethics is important, not least because Bugsplat’s argumentative core, what I term here the concept of legal recklessness, relies on an implicit ethical judgement. I outline what I mean by legal recklessness in the second part of the paper. In the third part, I investigate the implications of legal recklessness for the distinction between legitimate acts of war and acts of terrorism. In the fourth part, I look at some of the wider implications of legal recklessness for just war theory and vice versa.

    Alex Leveringhaus (2021), ‘Autonomous weapons and the future of armed conflict’, In: Lethal Autonomous Weapons, pp. 175-188, Oxford University Press

    This chapter considers how autonomous weapons systems (AWS) impact the armed conflicts of the future. Conceptually, the chapter argues that AWS should not be seen as on a par with precision weaponry, which makes them normatively problematic. Against this background, the chapter considers the relationship between AWS and two narratives, The Humane Warfare Narrative and the Excessive Risk Narrative, which have been used to theorize contemporary armed conflict. AWS, the chapter contends, are unlikely to usher in an era of humane warfare. Rather, they are likely to reinforce existing trends with regard to the imposition of excessive risk on noncombatants in armed conflict. Future conflicts in which AWS are deployed are thus likely to share many characteristics of the risk-transfer wars of the late twentieth and early twenty-first centuries. The chapter concludes by putting these abstract considerations to the test in the practical context of military intervention.

    Alex Leveringhaus (2021), ‘Out of harm’s way’, In: Metascience 30(3), pp. 475-478, Springer Netherlands

    In his Lectures on the Philosophy of History, Hegel opines that gunpowder is not merely the result of human thought; rather, like Gutenberg’s printing press, it promotes human thinking. Put simply, gunpowder was required; hence it was invented (see Black 1973). John Forge’s latest book, The Morality of Weapons Research: Why it is Wrong to Design Weapons, a contribution to the Springer Briefs in Ethics series, takes issue with this very aspect of intellectual endeavour. In a nutshell, Forge contends that the invention, development, and improvement of weaponry via ‘applied’ research activity (21), understood in contemporary scientific terms or prescientific ones (18), is neither morally permissible nor excusable. Forge already developed this argument in an earlier work, Designed to Kill: The Case Against Weapons Research, which I have reviewed elsewhere (Forge 2013; Leveringhaus 2014). The Morality of Weapons Research presents his position in a slightly shorter and more accessible format, with some subtle revisions of, as well as brief additions to, his original argument.

    Alexander Leveringhaus (2021), ‘Beyond Military Humanitarian Intervention: From Assassination to Election Hacking?’, In: Philosophical Journal of Conflict and Violence 5(1), pp. 109-128, Trivent Publishing

    This paper critically examines the implications of technology for the ethics of intervention and vice versa, especially regarding (but not limited to) the concept of military humanitarian intervention (MHI). To do so, it uses two recent pro-interventionist proposals as lenses through which to analyse the relationship between interventionism and technology. These are A. Altman and C.H. Wellman’s argument for the assassination of tyrannical leaders, and C. Fabre’s case for foreign electoral subversion. Existing and emerging technologies, the paper contends, play an important role in realising these proposals. This illustrates the potential of technology to facilitate interventionist practices that transcend the traditional concept of MHI, with its reliance on kinetic force and large-scale military operations. The question, of course, is whether this is normatively desirable. Here, the paper takes a critical view. While there is no knockdown argument against either assassination or electoral subversion for humanitarian purposes, both approaches face similar challenges, most notably regarding public accountability, effectiveness, and appropriate regulatory frameworks. The paper concludes by making alternative suggestions for how technology can be utilised to improve the protection of human rights. Overall, the paper shows that an engagement with technology is fruitful and necessary for the ethics of intervention.