Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the third question (parts b. and d.), which asks authors to consider the ethical dimensions of AI.
Examining the legal, moral, and ethical implications of military artificial intelligence (AI) poses a chicken-and-egg problem: Experts and analysts have a general sense of the risks involved, but the broad and constantly evolving nature of the technology provides insufficient technical details to mitigate them all in advance. Employing AI in the battlespace could create numerous ethical dilemmas that we must begin to guard against today, but in many cases the technology has not advanced sufficiently to present concrete, solvable problems.
To this end, 2019 was a bumper year for general military AI ethics. The Defense Innovation Board released its ethical military AI principles; the National Security Commission on AI weighed in with its interim report; the European Commission developed guidelines for trustworthy AI; and the French Armed Forces produced a white paper grappling with a national ethical approach. General principles like these usefully frame the problem, but principles such as "reliability" or "equitability" are technically difficult to operationalize, and assessing specific systems against them can be ambiguous, especially near the end of development.
Given the wide-ranging potential applications and challenges presented by AI, the Department of Defense and its contractors should tackle legal and ethical concerns early and often in the development lifecycle, from the formative stages of an AI concept to its realization and eventual deployment. Only by considering legal and ethical principles long before Acquisition Milestone A will AI capabilities reflect enduring American principles. Ethical considerations should shape future system requirements, and developers should recognize both technical and ethical challenges. Incorporating ethics analysis could reshape development processes and create new costs for developers, but decision-makers and developers alike must recognize that an "early and often" approach has tangible benefits, including easing future compliance reviews such as those required by Department of Defense Directive 3000.09.
The "early and often" principle developed out of the effort to move legal and ethical discussions from the ivory tower to the battlefield. Our team at the Institute for Defense Analyses is tackling this challenge as part of the Defense Advanced Research Projects Agency's (DARPA) development of the Urban Reconnaissance through Supervised Autonomy program. This is not a weapons system: It is intended to move ahead of a patrol, using AI and autonomy to discern potential threats and sources of harm to U.S. forces and civilians. A multidisciplinary research group of academics, philosophers, lawyers, military experts, and analysts has incorporated law and ethics to analyze the system's technological dependencies and components from its inception. This analytical process could be applied to other systems and offers one path forward for ethical military AI development.
Shaping System Requirements
Holistically considering the legal, moral, and ethical implications of future AI-enabled and autonomous systems early and often first requires bridging a conceptual gap. Assessments must break down the possible and the plausible, examining both a system's ideal performance in operation and its real ability to perform a task. Analyzing ethical strengths and weaknesses requires the assessor to understand a system's purpose, its technical components and their limitations, relevant legal and ethical frameworks, and the system's efficacy at a task compared to that of a human operator.
In reality, assessing ethical compliance from design to deployment resembles a spiral model, requiring repeated testing, prototyping, and reviewing for technological and ethical limitations. The viability of any AI system ultimately will be assessed when it is employed. Choices implemented early in the system's design, such as dependence on neural nets for image recognition of human behavior, carry legal and ethical implications for the system's reliability in the field, particularly when things go wrong.
Legal and ethical considerations require broadening requirements from the purely technical (e.g., computer vision, image recognition, decision logic, and vehicle autonomy) to include international humanitarian law, the laws of war, and relevant tort rulings. For example, international humanitarian law requires discriminating between combatants and civilians, and dictates that unknown individuals be considered civilians. To comply with the law, an AI-enabled system that is uncertain of an individual's status would need to check with a human operator before acting in a way that might cause disproportionate harm to that individual. This alone requires developers at the outset of a system's design to analyze human-machine agency trade-offs, account for decision-to-action latency times, and incorporate into technical designs sufficient time for operators to review machine decisions. Ultimately, the mutual reinforcement of ethical and technical requirements drives developers' plans by enshrining the principle that design and development must be informed by an understanding of ethical issues that could arise in the field.
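To make that logic concrete, consider the minimal sketch below of a confidence-gated deferral rule. It is not drawn from the Urban Reconnaissance through Supervised Autonomy program or any fielded system; the status categories, confidence threshold, and latency budget are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum
import time

class Status(Enum):
    COMBATANT = "combatant"
    CIVILIAN = "civilian"
    UNKNOWN = "unknown"

@dataclass
class Detection:
    track_id: str
    predicted_status: Status
    confidence: float  # classifier confidence in [0, 1]

CONFIDENCE_THRESHOLD = 0.90   # illustrative: below this, defer to a human
REVIEW_BUDGET_SECONDS = 8.0   # illustrative: time reserved for operator review

def classify_for_action(det: Detection) -> Status:
    """Apply the presumption-of-civilian rule and defer uncertain cases."""
    # International humanitarian law: unknown individuals are treated as civilians.
    if det.predicted_status is Status.UNKNOWN:
        return Status.CIVILIAN
    # Low-confidence calls are routed to a human operator rather than acted on.
    if det.confidence < CONFIDENCE_THRESHOLD:
        return request_operator_review(det)
    return det.predicted_status

def request_operator_review(det: Detection) -> Status:
    """Placeholder for the human-machine hand-off, bounded by a latency budget."""
    deadline = time.monotonic() + REVIEW_BUDGET_SECONDS
    decision = None  # a real system would poll the operator interface here
    while time.monotonic() < deadline and decision is None:
        time.sleep(0.1)  # stand-in for an asynchronous operator response
    # If no answer arrives inside the budget, fail safe to the civilian presumption.
    return decision if decision is not None else Status.CIVILIAN
```

The design choice worth noticing is the failure direction: when the operator cannot answer within the budget, the sketch defaults to the legally conservative presumption rather than to the machine's guess, which is precisely the kind of trade-off that must be settled at the requirements stage rather than in the field.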
As forward-looking legal and ethical considerations shape requirements across the board, developers will find it necessary to consult experts or even multidisciplinary teams throughout the design and development process. In addition to pointing out legal red lines and flagging areas of ethical concern, these experts could help develop other key features of ethical analysis. Two such key elements are system explainability and transparent documentation.
Emphasizing System Explainability and Ethical Documentation
DARPA's Heilmeier Catechism is a collection of thought exercises to help agency officials dissect proposed programs with straightforward questions. For example, without using jargon, what is your system trying to do? What are the limits of current practice? What are the risks involved?
These questions are at the heart of what could be defined as a system's explainability. In this case, we are not referring to explainability in a forensic sense of understanding the underpinnings of deep-learning systems. Rather, at the outset of system development, developers should also be able to describe how their system will function in the field, including the objectives it aims to achieve and the tasks it will undertake, the technologies it will rely on to do so, and the technical, legal, and ethical risks inherent to using those technologies. As updates to system designs occur and recur, legal and ethical implications should continuously be reexamined and evaluated. In complex systems of systems, developers' focus on cracking individual technical components can overshadow considerations of system end-use goals and operational context, thereby leaving these difficult explanatory questions unanswered.
Ethical documentation requirements (essentially requiring a paper trail devoted solely to legal, moral, and ethical concerns) present a simple method for capturing system explainability. Developers should document their systems without jargon and should include critical dependencies, possible points of failure, and gaps in research to ensure that non-technical audiences understand the legal and ethical risks and benefits of new AI-enabled systems. In keeping with the "early and often" principle, developers will have to consider concepts of operations (how their system will be used in the field) earlier in the design process than is currently typical in order to accurately document their systems. A detailed mission walkthrough (with the aid of tools like process maps) could help developers identify agency hand-off points, system decision points, or design choices for user interfaces and other components that incur potential for bias. Developers are already required to produce risk burn-down documentation to identify and mitigate technical issues for new systems. Similar documentation for ethical risks will ensure that developers are transparently contemplating ethical challenges early in the design process.
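As a thought experiment, one entry in such a paper trail could be structured like the sketch below. The fields and the example record are our own illustrative assumptions, not a Defense Department or DARPA schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EthicalRiskRecord:
    """One illustrative entry in a system's legal/ethical paper trail."""
    component: str                 # subsystem the concern attaches to
    plain_language_function: str   # what the component does, without jargon
    critical_dependencies: list[str]
    failure_modes: list[str]       # points of failure with ethical consequences
    research_gaps: list[str]       # open questions the design leans on
    mitigations: list[str]
    reviewed_on: date = field(default_factory=date.today)

# Hypothetical example entry for a behavior-recognition component.
record = EthicalRiskRecord(
    component="behavior-recognition model",
    plain_language_function="flags gestures that may indicate hostile intent",
    critical_dependencies=["curated gesture data set", "camera resolution"],
    failure_modes=["misreads culturally specific gestures as threats"],
    research_gaps=["reliability of image recognition for human behaviors"],
    mitigations=["defer low-confidence calls to the operator"],
)
print(record.component, "->", record.failure_modes)
```

The value of a structured record like this is less the format than the forcing function: a developer cannot fill in the fields without having thought through dependencies, failure modes, and open research questions in plain language.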
Law- and ethics-specific documentation would also emphasize the importance of consistent terminology within developer teams throughout the development process. Complex AI-enabled and autonomous systems, which often contain multiple components developed by subcontractors, can confuse those trying to assess the ethical impact of a system, particularly when developers use inconsistent names and definitions for the same components. Assessments that incorporate multidisciplinary teams of civil experts and military specialists can both bridge terminology gaps and highlight areas of potential confusion.
Tackling Research Gaps and Bias
"Early and often" ethical analysis can also identify gaps in relevant research and point out the potential for system bias while systems are still in development. By identifying the research gaps that most directly bear on ethical design choices, decision-makers can allocate resources to studies that address immediate needs. For example, there is a known lack of research on the reliability of AI-enabled image recognition for certain types of human behaviors. As ethical analyses uncover research gaps that might apply across future platforms, upfront research costs could benefit future systems with similar technical dependencies.
Descriptions of the environments in which an AI-enabled system will operate often depend on anecdotal recollections of combat experiences. These can serve as a useful starting point for training such systems, but they have limitations. AI is only as good as the data it is trained on. Many machine-learning techniques crucially depend on access to extensive and well-curated data sets. In most instances, data sets incorporating the subtleties and nuances of specific operating environments and conditions simply do not exist. Even where they do exist, they often require substantial effort to convert to formats amenable to machine-learning processes. Further, the AI community has learned the hard way that even well-developed data sets may harbor unforeseen biases that can color the machine learning in ways that raise serious ethical concerns.
Regular ethical analyses can help to address bias issues in the design and development of AI-dependent systems. Such analysis can also serve as a backstop against introducing unintentional bias into the system's decision-making processes, whether that bias arrives via system outputs that skew operator judgment or via the operators themselves. Law and ethics assessors can help think through data sets, algorithmic weighting, system outputs, and human inputs to try to identify bias throughout the design process and to serve as sounding boards for developers and subject matter experts alike.
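One simple check an assessor might request is sketched below: comparing false-positive rates across subgroups in a labeled evaluation set. The data, the grouping, and the function itself are invented for illustration; a real audit would use richer metrics and real evaluation data.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Per-group false-positive rate for a hypothetical 'threat' classifier.

    records: iterable of (subgroup, model_said_threat, ground_truth_threat).
    """
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, predicted_threat, actual_threat in records:
        if not actual_threat:                  # only non-threats can yield false positives
            counts[group]["negatives"] += 1
            if predicted_threat:
                counts[group]["fp"] += 1
    return {g: c["fp"] / c["negatives"] for g, c in counts.items() if c["negatives"]}

# Toy labeled evaluation data: (subgroup, model said threat, ground truth threat).
sample = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
print(false_positive_rates(sample))  # {'group_a': 0.33..., 'group_b': 0.66...}
# A large gap between subgroups is a flag for assessors to investigate, not proof of bias.
```

Even a crude disparity check like this gives the multidisciplinary team a concrete artifact to argue over, which is often more productive than debating bias in the abstract.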
Conclusion
The future of warfare is headed toward autonomy. America and its allies are not the only actors who have a say in what that future looks like. Near-peers are using AI in troubling ways, and trying to establish the rules of the road, and then abiding by them, is paramount to maintaining the unique soft-power advantage that the United States and its allies enjoy through adhering to moral and ethical considerations. Laying out these principles and transparently applying them to relevant U.S. military systems will help in establishing best practices within the defense community, developing a common lexicon with allies and partners, and building trust among concerned publics and the tech community. In the end, this will occur not simply as a byproduct of intellectual clarity on legal and ethical issues but as an outgrowth of "early and often" ethical engagement during system development.
At first glance, applied legal, moral, and ethical considerations seem onerous, particularly where new requirements for personnel or documentation are likely necessary. They may also require the development, acquisition, and operational communities to reevaluate the applicability of their standard operating procedures. However, "early and often" ethical analysis, comprising continual testing, prototyping, and reviewing of areas of legal or ethical concern, will keep ethical problems from surfacing so late that they detrimentally impact later development and acquisition stages or prevent system deployment altogether. Facilitating this analysis through improved transparency in system design and improving the explainability of AI and autonomous decision processes will be key to realizing these benefits, particularly as the Department of Defense moves to practical implementation of Directive 3000.09.
Human warfighters learn lessons of ethics, morality, and law within society before they enlist. These lessons are bolstered and expanded through reinforcement of warrior and service-specific ethos. As the U.S. military increasingly incorporates AI and autonomy into the battlespace and we ask intelligent machines to take on responsibilities akin to those of our service personnel, why should we approach them any differently?
Owen Daniels is a research associate in the Joint Advanced Warfighting Division at the Institute for Defense Analyses working on the IDA Legal, Moral, Ethical (LME) AI & Autonomy research effort.
Brian Williams is a research staff member in the Joint Advanced Warfighting Division at the Institute for Defense Analyses and task leader of the IDA Legal, Moral, Ethical (LME) AI & Autonomy research effort.
This research was developed with funding from the Defense Advanced Research Projects Agency.
The views, opinions, and findings expressed in this paper should not be construed as representing the official position of the Institute for Defense Analyses, the Department of Defense, or the U.S. government.
Image: U.S. Air Force (Photo by Todd Maki)