Editor's Note: This is an excerpt from a policy roundtable, "Artificial Intelligence and International Security," from our sister publication, the Texas National Security Review. Be sure to check out the full roundtable.
Artificial intelligence (AI) has recently become a focus of efforts to maintain and enhance U.S. military, political, and economic competitiveness. The Defense Department's 2018 strategy for AI, released not long after the creation of a new Joint Artificial Intelligence Center, proposes to accelerate the adoption of AI by fostering "a culture of experimentation and calculated risk taking," an approach drawn from the broader National Defense Strategy. But what kinds of calculated risks might AI entail? The AI strategy has almost nothing to say about the risks incurred by the increased development and use of AI. On the contrary, the strategy proposes using AI to reduce risks, including those to both deployed forces and civilians.
While acknowledging the possibility that AI might be used in ways that reduce some risks, this brief essay outlines some of the risks that come with the increased development and deployment of AI, and what might be done to reduce those risks. At the outset, it must be acknowledged that the risks associated with AI cannot be reliably calculated. Instead, they are emergent properties arising from the arbitrary complexity of information systems. Nonetheless, history provides some guidance on the kinds of risks that are likely to arise, and how they might be mitigated. I argue that, perhaps counter-intuitively, using AI to manage and reduce risks will require the development of uniquely human and social capabilities.
A Brief History of AI, From Automation to Symbiosis
The Department of Defense strategy for AI contains at least two related but distinct conceptions of AI. The first focuses on mimesis, that is, designing machines that can mimic human work. The strategy document describes this as the ability of machines to "perform tasks that normally require human intelligence," for example, "recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action." A somewhat distinct approach to AI focuses on what some have called human-machine symbiosis, wherein humans and machines work closely together, leveraging their distinctive kinds of intelligence to transform work processes and organization. This vision can also be found in the AI strategy, which aims to use "AI-enabled information, tools, and systems to empower, not replace, those who serve."
Of course, mimesis and symbiosis are not mutually exclusive. Mimesis may be understood as a means to symbiosis, as suggested by the Defense Department's proposal to "augment the capabilities of our personnel" by "offloading tedious cognitive or physical tasks." But symbiosis is arguably the more revolutionary of the two concepts and also, I argue, the key to understanding the risks associated with AI.
Both approaches to AI are quite old. Machines have been taking over tasks that otherwise require human intelligence for decades, if not centuries. In 1950, the mathematician Alan Turing proposed that a machine can be said to think if it can persuasively imitate human behavior, and later in the decade computer engineers designed machines that could learn. By 1959, one researcher concluded that "a computer can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program."
Meanwhile, others were beginning to advance a more interactive approach to machine intelligence. This vision was perhaps most prominently articulated by J.C.R. Licklider, a psychologist studying human-computer interactions. In a 1960 paper on "Man-Computer Symbiosis," Licklider chose to avoid "argument with (other) enthusiasts for artificial intelligence by conceding dominance in the distant future of cerebration to machines alone." However, he continued: "There will nevertheless be a fairly long interim during which the main intellectual advances will be made by men and computers working together in intimate association."
Notions of symbiosis were influenced by experience with computers for the Semi-Automatic Ground Environment (SAGE), which gathered information from early warning radars and coordinated a nationwide air defense system. Just as the Defense Department aims to use AI to keep pace with rapidly changing threats, SAGE was designed to counter the prospect of increasingly swift attacks on the United States, specifically low-flying bombers that could evade radar detection until they were very close to their targets.
Unlike other computers of the 1950s, the SAGE computers could respond instantly to inputs by human operators. For example, operators could use a light gun to select an aircraft on the screen, thereby gathering information about the airplane's identification, speed, and direction. SAGE became the model for command-and-control systems throughout the U.S. military, including the Ballistic Missile Early Warning System, which was designed to counter an even faster-moving threat: intercontinental ballistic missiles, which could deliver their payloads around the globe in just half an hour. We can still see the SAGE model today in systems such as the Patriot missile defense system, which is designed to destroy short-range missiles, those arriving with just a few minutes of notice.
SAGE also inspired a new and more interactive approach to computing, not just within the Defense Department, but throughout the computing industry. Licklider advanced this vision after he became director of the Defense Department's Information Processing Techniques Office, within the Advanced Research Projects Agency, in 1962. Under Licklider's direction, the office funded a wide range of research projects that transformed how people would interact with computers, such as graphical user interfaces and the computer networking that eventually led to the Internet.
The technologies of symbiosis have contributed to competitiveness not primarily by replacing people, but by enabling new kinds of analysis and operations. Interactive information and communications technologies have reshaped military operations, enabling more rapid coordination and changes in plans. They have also enabled new modes of commerce. And they created new opportunities for soft power as technologies such as personal computers, smart phones, and the Internet became more widely available around the world, where they were often seen as evidence of American progress.
Mimesis and symbiosis come with somewhat distinct opportunities and risks. The focus on machines mimicking human behavior has prompted anxieties about, for example, whether the results produced by machine reasoning should be trusted more than results derived from human reasoning. Such concerns have spurred work on "explainable AI," wherein machine outputs are accompanied by humanly comprehensible explanations for those outputs.
By contrast, symbiosis calls attention to the promises and risks of more intimate and complex entanglements of humans and machines. Achieving an optimal symbiosis requires more than well-designed technology. It also requires continual reflection upon and revision of the models that govern human-machine interactions. Humans use models to design AI algorithms and to select and construct the data used to train such systems. Human designers also inscribe models of use, that is, assumptions about the competencies and preferences of users and the physical and organizational contexts of use, into the technologies they create. Thus, "like a film script, technical objects define a framework of action together with the actors and the space in which they are supposed to act." Scripts do not completely determine action, but they configure relationships between humans, organizations, and machines in ways that constrain and shape user behavior. Unfortunately, these interactively complex sociotechnical systems often exhibit emergent behavior that is contrary to the intentions of designers and users.
Competitive Advantages and Risks
Because models cannot adequately predict all of the possible outcomes of complex sociotechnical systems, increased reliance on intelligent machines leads to at least four kinds of risks: the models for how machines gather and process information, and the models of human-machine interaction, can each be either inadvertently flawed or deliberately manipulated in ways not intended by designers. Examples of each of these kinds of risks can be found in past experiences with smart machines.
First, changing circumstances can render the models used to develop machine intelligence irrelevant. Thus, those models and the associated algorithms need constant maintenance and updating. For example, what is now the Patriot missile defense system was initially designed for air defense but was rapidly redesigned and deployed to Saudi Arabia and Israel to defend against short-range missiles during the 1991 Gulf War. As an air defense system it ran for just a few hours at a time, but as a missile defense system it ran for days without rebooting. In these new operating conditions, a timing error in the software became evident. On Feb. 25, 1991, this error caused the system to miss a missile that struck a U.S. Army barracks in Dhahran, Saudi Arabia, killing 28 American soldiers. A software patch to fix the error arrived in Dhahran a day too late.
Second, the models upon which machines are designed to operate can be exploited for deceptive purposes. Consider, for example, Operation Igloo White, an effort to gather intelligence on and stop the movement of North Vietnamese supplies and troops in the late 1960s and early 1970s. The operation dropped sensors throughout the jungle, such as microphones, to detect voices and truck vibrations, as well as devices that could detect the ammonia odors from urine. These sensors sent signals to overflying aircraft, which in turn sent them to a SAGE-like surveillance center that could dispatch bombers. However, the program was a very expensive failure. One reason is that the sensors were susceptible to spoofing. For example, the North Vietnamese could send empty trucks to an area to send false intelligence about troop movements, or use animals to trigger urine sensors.
Third, intelligent machines may be used to create scripts that enact narrowly instrumental forms of rationality, thereby undermining broader strategic objectives. For example, unpiloted aerial vehicle operators are tasked with using grainy video footage, electronic signals, and assumptions about what constitutes suspicious behavior to identify and then kill threatening actors, while minimizing collateral damage. Operators following this script have, at times, assumed that a group of men with guns was planning an attack, when in fact they were on their way to a wedding in a region where celebratory gun firing is customary, and that families praying at dawn were jihadists rather than simply observant Muslims. While it may be tempting to dub these mistakes operator errors, this would be too simple. Such operators are enrolled in a deeply flawed script, one that presumes that technology can be used to correctly identify threats across vast geographic, cultural, and interpersonal distances, and that the increased risk of killing innocent civilians is worth the increased protection offered to U.S. combatants. Operators cannot be expected to make perfectly reliable judgments across such distances, and it is unlikely that simply deploying the more precise technology that AI enthusiasts promise can bridge the very distances that remote systems were made to maintain. In an era where soft power is inextricable from military power, such potentially dehumanizing uses of information technology are not only ethically problematic, they are also likely to generate ill will and blowback.
Finally, the scripts that configure relationships between humans and intelligent machines may ultimately encourage humans to behave in machine-like ways that can be manipulated by others. This is perhaps most evident in the growing use of social bots and new social media to influence the behavior of citizens and voters. Bots can easily mimic humans on social media, in part because those technologies have already scripted the behavior of users, who must interact through liking, following, tagging, and so on. While influence operations exploit the cognitive biases shared by all humans, such as a tendency to interpret evidence in ways that confirm pre-existing beliefs, users who have developed machine-like habits reactively liking, following, and otherwise interacting without reflection are all the more easily manipulated. Remaining competitive in an age of AI-mediated disinformation requires the development of more deliberative and reflective modes of human-machine interaction.
Conclusion
Achieving military, economic, and political competitiveness in an age of AI will entail designing machines in ways that encourage humans to maintain and cultivate uniquely human kinds of intelligence, such as empathy, self-reflection, and outside-the-box thinking. It will also require continual maintenance of intelligent systems to ensure that the models used to create machine intelligence are not out of date. Models structure perception, thinking, and learning, whether by humans or machines. But the ability to question and re-evaluate these assumptions is the prerogative and the responsibility of the human, not the machine.
Rebecca Slayton is an associate professor in the Science & Technology Studies Department and the Judith Reppy Institute for Peace and Conflict Studies, both at Cornell University. She is currently working on a book about the history of cybersecurity expertise.
Image: Steve Jurvetson (Flickr)
The Promise and Risks of Artificial Intelligence: A Brief History - War on the Rocks