Empires are never built or maintained on the basis of compassion. Empires live by numbness. - Walter Brueggemann
To be in hell is to drift; to be in heaven is to steer. - George Bernard Shaw
A quietly mad population is a tractable one. - Naomi Wolf
Be not angry that you cannot make others as you wish them to be, since you cannot make yourself as you wish to be. - Thomas à Kempis
It was like being in a car with the gas pedal slammed down to the floor and nothing to do but hold on and pretend to have some semblance of control. - Nic Sheff
It’s possible to name everything and to destroy the world. - Kathy Acker
Disillusion comes sooner or later, but it always comes, it doesn’t miss an appointment, it never has. - Juan Gabriel Vásquez
Over this long weekend, we and a number of groups with whom we work (including our colleagues at Reverse the Trend) have acknowledged the anniversary of the still-controversial use of a nuclear weapon on the residents of the city of Hiroshima, Japan (August 6, 1945) and the even more controversial bombing of Nagasaki on August 9.
Amidst all the important discussion about the morality and legality of testing indiscriminate weapons on urban populations, what is not controversial is that the bombs were dropped from US bombers flown by human beings. The hatch releasing the bomb was controlled by human beings. The orders to drop these weapons for the first (and only) time in history were given by human beings. And the fireballs which these weapons created were visible to the human beings tasked with chronicling outcomes and consequences.
This is surely one of Bob’s “duh” moments, but the point is that even with respect to the most destructive of weapons and weapons systems, the presumption of human control has always been built into the equation. Such bombs don’t drop themselves and don’t set their own targeting objectives. While full accountability for military misadventurism remains elusive, the presence of human agents and command chains has been understood as indispensable for ascribing at least some accountability for military operations which go off the rails, are deemed disproportionate to the threats posed, or cause indiscriminate harm beyond the boundaries of any “reasonable” military objective.
But these erstwhile “human safeguards” are steadily being eroded as weaponized drones attack targets at distances measured in the thousands of miles and as space-based weapons threaten populations at even greater distances. As our targets become more abstracted from human realities, as the distance between launch and destruction grows ever greater, our targeting takes on more and more of the attributes of a video game. We don’t have to live with the consequences of our attacks, in part because we are no longer witnesses to those consequences. We aren’t required to experience the fireballs or the hollowed-out communities. We don’t hear the cries of the victimized or smell the burning flesh. More and more, we can push the buttons, clear the board, get on with our lives, and then return to our seats to prompt the systems to home in on our next, equally remote targets.
And as we were reminded this week at the UN, we now have the capacity to develop and manufacture weapons systems which can operate virtually independent of human control, which can make (and implement) autonomous targeting decisions based on algorithms that they might eventually be capable of altering themselves.
This week, amidst discouraging news from Afghanistan, Myanmar and Tigray, we spent a good bit of time covering the Group of Governmental Experts meeting on Lethal Autonomous Weapons Systems (LAWS). The dominant theme of the week was the maintenance of what the UK and others referred to as “meaningful human control” over LAWS and their deployments, taking into account (as the Holy See advocated) “potential implications for international peace and security as weapons systems become further detached from human agency.”
While some states such as Australia highlighted the potential military advantages of autonomous weapons (especially with regard to greater targeting precision), most states at this GGE understood at some level that the burden of proof lay with those few states which seemed to minimize the difficulty of maintaining what Brazil referred to as a balance between “military necessity” and regard for legal and ethical principles, including human dignity. Many states, including those calling for a binding international instrument on LAWS, expressed concern that, as military-related technology advances, human accountability for weapons uses under international law risks becoming akin to a rapidly speeding car which we can now only pretend is still under our control.
Kudos to those states, especially Mexico, Chile and Palestine, for their efforts to keep human agency and dignity at the center of our military doctrine, and for ably rejecting our current, norm-busting predisposition (as Chile noted) to “spectator violence”; our growing comfort (as Palestine maintained) with ascribing accountability for autonomous systems failures to the machines themselves rather than to those who program and “manage” them; and our unwillingness (as Mexico claimed) to draw clear linkages between our work in this GGE and the larger (and oft-neglected) UN project of “general and complete disarmament.”
And yet, even in these instances, it was easy to come away with a feeling (communicated to me by others as well) that something is missing from these discussions: that ascriptions of “human control” are not a sufficiently high bar, that they are not sufficiently mindful of the current state of human affairs and its impact on our emotional stability, indeed on our very sanity. Does not “meaningful human control” assume that we can keep our best emotions switched “on,” that we can maintain the ability (and the will) to integrate the implications of weapons deployments beyond the merely technical?
I assume that most readers of this piece have not altogether missed the recent spate of articles in the mainstream and alternative media documenting our growing emotional fragility and “numbness” as the combination of pandemic variants, severe drought, the destructive heat from forest fires, and armed violence pushes many of us back into places of social, economic and emotional isolation from which we were just starting, albeit tentatively, to emerge. We are in danger of saying too much about this, but we can also never say it enough: we are steadily allowing ourselves to become an impaired species, one which is increasingly disposed to see others as adversaries rather than partners; one which has shrunk its circles of concern beyond the reach of reason, let alone of multilateral policy and inquiry; one which has generally, even defiantly, succumbed to a default of “numbness,” that place of merely going through the motions, of abandoning any pretense to genuine agency and dignity, let alone compassion; of passively accepting what we are told to do, trained to do, even programmed to do, because it just takes too much energy not to.
It is perhaps not the duty of negotiating diplomats to ask themselves these questions, to openly share concern about the basic sanity and humanity of those persons whose agency we rightly seek to guarantee with respect to our ever more sophisticated weapons systems. But the concerns loom nonetheless: concerns about our escalating anxiety, disillusionment and “quiet madness” that call into question what remains of our confidence in human agency, eroding the belief that we still have what it takes to keep our technologically advanced weapons systems in line with the international humanitarian law (IHL) obligations which the weapons themselves never quite agreed to uphold.
The numbness which now infects so many dimensions of our eroding social contract has particularly grave implications for our military adventures, especially given current, weapons-related complexities that strain both the efficacy of our measures of control and the international laws and regulations meant to ensure “humane” deployments. Indeed, some states this week openly wondered whether current interpretations of international law are sufficient to allow us (as Brazil noted) to “draw the line” on violence lacking adequate human authorization and oversight. Moreover, the International Committee of the Red Cross (an agency thankfully as invested in preventing war as in upholding its “rules”) claimed that “it is hard to imagine a battlefield scenario where autonomous weapons would not raise significant IHL red flags,” especially given that so many “battlefields” are now resident in heavily populated areas.
To our mind, sane and stable human agency is most urgently needed at the point of deciding whether to authorize weapons systems such as LAWS in the first place. Once that fateful decision is made, it is harder to imagine human agency sufficient to govern their use, to maintain the balance between military utility and our obligations under international humanitarian law, indeed to remove all those “red flags” from their flagpoles. One task for us all is to guarantee that “meaningful human control” over our increasingly complex and even autonomous weapons systems does not devolve into some misidentified “trial” conducted by the emotionally impaired on unwitting populations.
Until and unless we can better assure that the humans in control of such systems are not overcome by despair or disillusionment, and have not become numbed to the consequences of the weapons they seek to manage, it would be better for what remains of our collective health, safety and sanity to keep those weapons out of circulation altogether.
