Dr Alex Leveringhaus
Academic and research departments
Politics and International Relations, Centre for International Intervention, Faculty of Arts, Business and Social Sciences
Biography
I joined Surrey in January 2018 from the University of Manchester, where I was a Leverhulme Early Career Research Fellow in the Centre for Political Theory. Prior to working at Manchester, I was a post-doctoral research fellow at the Oxford Institute for Ethics, Law and Armed Conflict, University of Oxford, as well as a James Martin Fellow in the Oxford Martin School. I received my PhD in Government from LSE, where I worked under the supervision of Cécile Fabre and Paul Kelly.
At Surrey, I am the co-director, with Nick Kitchen, of the Centre for International Intervention. I am also a coordinator for the Special Interest Group for Ethics and Artificial Intelligence at the University, and an affiliate at the Surrey Centre for Law and Philosophy (SCLP).
University roles and responsibilities
- Exams and Assessments Officer
Research
Research interests
My research interests lie in contemporary political theory and philosophy in the analytic tradition. I also have an interest in contemporary ethical theory, both normative and applied. Most of my work is on ethical and other theoretical issues in armed conflict, with special emphasis on emerging combat technologies (drones, robots, autonomous weapons), as well as military intervention. More generally, I am interested in the ethical and political repercussions of the widespread introduction and use of Artificial Intelligence.
Supervision
Postgraduate research supervision
I welcome proposals in most areas of contemporary political theory, especially on the following topics:
- Theories of rights
- Non-consequentialism (applied issues in the ethics of killing and saving, as well as conceptions of human dignity and non-instrumentalisation)
- Just war theory
- Theoretical approaches to atrocities and human rights abuses
- Ethics and Politics of Artificial Intelligence
Teaching
At Surrey, I convene 'Social and Political Thinkers: from Plato to Marx', which is the core political philosophy course for all our undergraduates. It is also taken by a large number of students from outside the Politics Department. From Winter Semester 2019, I will convene the undergraduate course on political ideologies.
At PGT level, I convene the module on global governance, and co-teach, with Nick Kitchen, Politics of International Intervention.
Dissertation supervision: I have supervised a variety of topics in political theory and beyond, including abortion rights, biomedical enhancement in sports, the moral standing of soldiers, the obligation to intervene, the conceptualisation of war as punishment, and society-building in (post-) civil war scenarios.
Publications
Highlights
Leveringhaus, Alex (2016), Ethics and Autonomous Weapons (Palgrave)
Leveringhaus, Alex (2016), 'What's so bad about Killer Robots?', Journal of Applied Philosophy.
In this chapter, political philosopher Alex Leveringhaus asks whether lethal autonomous weapons systems (AWS) are morally repugnant and whether this entails that they should be prohibited by international law. To this end, Leveringhaus critically surveys three prominent ethical arguments against AWS: firstly, that AWS create 'responsibility gaps'; secondly, that their use is incompatible with human dignity; and, thirdly, that AWS replace human agency with artificial agency. He argues that some of these arguments fail to show that AWS are morally different from more established weapons. However, the author concludes that AWS are currently problematic due to their lack of predictability.
Over the last decade or so, interest in Lethal Autonomous Weapons Systems (LAWS) has grown among academics, policy makers, and campaigners. The debate, however, has been dominated by international lawyers, ethicists, and technologists at the expense of other analytical lenses. This chapter uses International Relations Theory (IRT) in order to provide a fresh perspective, focussing on realist, liberal, and constructivist approaches. Beginning with a conceptual discussion of the nature of LAWS, the chapter uses IRT to assess the potential impact of LAWS on the ability and willingness of states to cooperate under conditions of anarchy. The chapter concludes that while established IRTs offer useful insights into the impact of LAWS on wider international security, LAWS also push the conceptual boundaries of IRT. Over time, IRT might have to adapt itself to deal with the practical consequences of the introduction of LAWS.
This essay contends that the ethics around the use of spy technology to gather intelligence (TECHINT) during espionage and counterintelligence operations is ambiguous. To build this argument, the essay critically scrutinizes Cécile Fabre's recent and excellent book Spying through a Glass Darkly, which argues that there are no ethical differences between the use of human intelligence (HUMINT) obtained from or by human assets and TECHINT in these operations. As the essay explains, Fabre arrives at this position by treating TECHINT as a like-for-like replacement for HUMINT. The essay argues instead that TECHINT is unlikely to act as a like-for-like replacement for HUMINT. As such, TECHINT might transform existing practices of espionage and counterintelligence, giving rise to new ethical challenges not captured in Fabre's analysis. To illustrate the point, the essay builds an analogy between TECHINT and recent armed conflicts in which precision weapons have been deployed. Although precision weapons seem ethically desirable, their availability has created new practices of waging war that are ethically problematic. By analogy, TECHINT, though not intrinsically undesirable, has the capacity to generate new practices of intelligence gathering that are ethically problematic—potentially more than HUMINT. Ultimately, recent negative experiences with the use of precision weaponry should caution against an overly positive assessment of TECHINT's ethical desirability.
Given that western liberal democracies are typically advocates of human rights, Bruce Cronin's monograph, Bugsplat: The Politics of Collateral Damage in Western Armed Conflict, makes for uncomfortable reading. As Bugsplat, whose title is derived from the informal name given to the software programme used by the US military to model collateral damage, shows, Western democratic states, most notably the United States of America, other NATO member states, as well as Israel, conduct military campaigns that result in high levels of collateral damage. Worse still, these levels are, according to Cronin, directly related to the tactics, strategies, and weapons technologies utilized by western states. The high levels of collateral damage in western wars give rise to an interesting research puzzle. Since western states largely comply with international humanitarian law (IHL) and have sophisticated precision weaponry at their disposal, one would expect there to be less collateral damage. Indeed, this is the research puzzle driving Bugsplat's analysis. Here, I do not take issue with Cronin's solution to this puzzle. Instead, I use this opportunity to discuss the ethical issues arising from Bugsplat, which Cronin largely sidesteps. An engagement with ethics is important, not least because Bugsplat's argumentative core, what I term here the concept of legal recklessness, relies on an implicit ethical judgement. I outline what I mean by legal recklessness in the second part of the paper. In the third part, I investigate the implications of legal recklessness for the distinction between legitimate acts of war and acts of terrorism. In the fourth part, I look at some of the wider implications of legal recklessness for just war theory and vice versa.
This chapter considers how autonomous weapons systems (AWS) impact the armed conflicts of the future. Conceptually, the chapter argues that AWS should not be seen as on a par with precision weaponry, which makes them normatively problematic. Against this background, the chapter considers the relationship between AWS and two narratives, The Humane Warfare Narrative and the Excessive Risk Narrative, which have been used to theorize contemporary armed conflict. AWS, the chapter contends, are unlikely to usher in an era of humane warfare. Rather, they are likely to reinforce existing trends with regard to the imposition of excessive risk on noncombatants in armed conflict. Future conflicts in which AWS are deployed are thus likely to share many characteristics of the risk-transfer wars of the late twentieth and early twenty-first centuries. The chapter concludes by putting these abstract considerations to the test in the practical context of military intervention.
In his Lectures on the Philosophy of History, Hegel opines that gunpowder is not merely the result of human thought; rather, like Gutenberg's printing press, it promotes human thinking. Put simply, gunpowder was required; hence it was invented (see Black 1973). John Forge's latest book, The Morality of Weapons Research: Why it is Wrong to Design Weapons, a contribution to the Springer Briefs in Ethics series, takes issue with this very aspect of intellectual endeavour. In a nutshell, Forge contends that the invention, development, and improvement of weaponry via 'applied' research activity (21), understood in contemporary scientific terms or prescientific ones (18), is neither morally permissible nor excusable. Forge already developed this argument in an earlier work, Designed to Kill: The Case Against Weapons Research, which I have reviewed elsewhere (Forge 2013; Leveringhaus 2014). The Morality of Weapons Research presents his position in a slightly shorter and more accessible format, with some subtle revisions of, as well as brief additions to, his original argument.
This paper critically examines the implications of technology for the ethics of intervention and vice versa, especially regarding (but not limited to) the concept of military humanitarian intervention (MHI). To do so, it uses two recent pro-interventionist proposals as lenses through which to analyse the relationship between interventionism and technology. These are A. Altman and C.H. Wellman’s argument for the assassination of tyrannical leaders, and C. Fabre’s case for foreign electoral subversion. Existing and emerging technologies, the paper contends, play an important role in realising these proposals. This illustrates the potential of technology to facilitate interventionist practices that transcend the traditional concept of MHI, with its reliance on kinetic force and large-scale military operations. The question, of course, is whether this is normatively desirable. Here, the paper takes a critical view. While there is no knockdown argument against either assassination or electoral subversion for humanitarian purposes, both approaches face similar challenges, most notably regarding public accountability, effectiveness, and appropriate regulatory frameworks. The paper concludes by making alternative suggestions for how technology can be utilised to improve the protection of human rights. Overall, the paper shows that an engagement with technology is fruitful and necessary for the ethics of intervention.