{"id":176791,"date":"2023-07-21T09:24:37","date_gmt":"2023-07-21T13:24:37","guid":{"rendered":"http:\/\/stateofthenation.co\/?p=176791"},"modified":"2023-07-21T09:29:53","modified_gmt":"2023-07-21T13:29:53","slug":"176791","status":"publish","type":"post","link":"http:\/\/stateofthenation.co\/?p=176791","title":{"rendered":"<h1><b><span style=\"color: #ff0000;\">EXTINCTION LEVEL EVENT<\/span><\/b>: <span style=\"color: #000000;\">The Coming AI-Conducted Wars Across The Planet<\/span><\/h1>"},"content":{"rendered":"<h1>The Future Of AI Is War&#8230; And Human Extinction As Collateral Damage<\/h1>\n<p><!--more-->By Michael T Klare<br \/>\nTomDispatch.com<\/p>\n<p><strong>A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine.\u00a0<\/strong>After all, as prominent computer scientists have been warning us, AI-governed systems are\u00a0<a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\">prone to<\/a>\u00a0critical errors and inexplicable \u201challucinations,\u201d resulting in potentially catastrophic outcomes.<\/p>\n<p>But there\u2019s an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.<\/p>\n<p><a href=\"https:\/\/www.zerohedge.com\/s3\/files\/inline-images\/2023-07-20_14-58-38.jpg?itok=K1hCxFMJ\" data-image-external-href=\"\" data-image-href=\"\/s3\/files\/inline-images\/2023-07-20_14-58-38.jpg?itok=K1hCxFMJ\" data-link-option=\"0\"><picture><img loading=\"lazy\" decoding=\"async\" class=\"inline-images image-style-inline-images\" 
src=\"https:\/\/assets.zerohedge.com\/s3fs-public\/styles\/inline_image_mobile\/public\/inline-images\/2023-07-20_14-58-38.jpg?itok=K1hCxFMJ\" alt=\"\" width=\"500\" height=\"283\" data-entity-type=\"file\" data-entity-uuid=\"f2f550f4-8353-487f-a3d6-c27ea1559064\" data-responsive-image-style=\"inline_images\" \/><\/picture><\/a><\/p>\n<p><strong>The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture.<\/strong>\u00a0In the prophetic\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/WarGames\">1983 film<\/a>\u00a0\u201cWarGames,\u201d a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced \u201cwhopper\u201d) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The \u201c<a href=\"https:\/\/en.wikipedia.org\/wiki\/Terminator_(franchise)\">Terminator<\/a>\u201d movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called \u201cSkynet\u201d that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.<\/p>\n<p>Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. 
In addition to developing a wide variety of \u201c<a href=\"https:\/\/www.armscontrol.org\/act\/2019-03\/features\/autonomous-weapons-systems-laws-war\">autonomous<\/a>,\u201d or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called \u201c<a href=\"http:\/\/www.tomdispatch.com\/blog\/176745\/\">robot generals<\/a>.\u201d In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America\u2019s atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity\u2019s demise.<\/p>\n<p>Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the U.S. Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the Air Force\u00a0<a href=\"https:\/\/comptroller.defense.gov\/Portals\/45\/Documents\/defbudget\/FY2023\/FY2023_Budget_Request_Overview_Book.pdf\">requested $231 million<\/a>\u00a0to develop the\u00a0<a href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/IF\/IF11866\">Advanced Battle Management System<\/a>\u00a0(ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. 
As the technology advances, the system\u00a0<a href=\"https:\/\/breakingdefense.com\/2020\/09\/abms-demo-proves-ai-chops-for-c2\/\">will be capable<\/a>\u00a0of sending \u201cfire\u201d instructions directly to \u201cshooters,\u201d largely bypassing human control.<\/p>\n<p><strong><em>\u201cA machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp [a military show-of-force] or early engagement,\u201d was how Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics,\u00a0<a href=\"https:\/\/breakingdefense.com\/2020\/09\/roper-mulls-name-change-for-changing-abms-not-skynet\/\">described<\/a>\u00a0the ABMS system in a 2020 interview. Suggesting that \u201cwe do need to change the name\u201d as the system evolves, Roper added, \u201cI think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don\u2019t think we can go there.\u201d<\/em><\/strong><\/p>\n<p>And while he can\u2019t go there, that\u2019s just where the rest of us may, indeed, be going.<\/p>\n<p>Mind you, that\u2019s only the start. In fact, the Air Force\u2019s ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect\u00a0<em>all\u00a0<\/em>U.S. combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced \u201cJad-C-two\u201d). 
\u201cJADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon\u2026 to engage the target,\u201d the Congressional Research Service\u00a0<a href=\"https:\/\/sgp.fas.org\/crs\/natsec\/IF11493.pdf\">reported<\/a>\u00a0in 2022.<\/p>\n<h2><strong>AI and the Nuclear Trigger<\/strong><\/h2>\n<p>Initially, JADC2 will be designed to coordinate combat operations among \u201cconventional\u201d or non-nuclear American forces. Eventually, however, it is expected to\u00a0<a href=\"https:\/\/www.armscontrol.org\/act\/2020-04\/features\/skynet-revisited-dangerous-allure-nuclear-command-automation\">link up<\/a>\u00a0with the Pentagon\u2019s nuclear command-control-and-communications systems (NC3), potentially giving computers significant control over the use of the American nuclear arsenal. \u201cJADC2 and NC3 are intertwined,\u201d General John E. Hyten, vice chairman of the Joint Chiefs of Staff,\u00a0<a href=\"https:\/\/breakingdefense.com\/2020\/02\/nuclear-c3-goes-all-domain-gen-hyten\/\">indicated<\/a>\u00a0in a 2020 interview. As a result, he added in typical Pentagonese, \u201cNC3 has to inform JADC2 and JADC2 has to inform NC3.\u201d<\/p>\n<p>It doesn\u2019t require great imagination to picture a time in the not-too-distant future when a crisis of some sort \u2014 say a U.S.-China military clash in the South China Sea or near Taiwan \u2014 prompts ever more intense fighting between opposing air and naval forces. Imagine then the JADC2 ordering the intense bombardment of enemy bases and command systems in China itself, triggering reciprocal attacks on U.S. 
facilities and a lightning decision by JADC2 to retaliate with tactical nuclear weapons, igniting a long-feared nuclear holocaust.<\/p>\n<p><strong>The\u00a0possibility\u00a0that nightmare scenarios of this sort could result in\u00a0the accidental or unintended onset of nuclear war\u00a0has long\u00a0troubled analysts in\u00a0the\u00a0arms control\u00a0community.\u00a0But the growing automation of military C2 systems has generated anxiety not just among them but among senior national security officials as well.<\/strong><\/p>\n<p>As early as 2019, when I questioned Lieutenant General Jack Shanahan, then director of the Pentagon\u2019s Joint Artificial Intelligence Center, about such a risky possibility, he\u00a0<a href=\"https:\/\/breakingdefense.com\/2019\/09\/no-ai-for-nuclear-command-control-jaics-shanahan\/\">responded<\/a>, \u201cYou will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense, but there is one area where I pause, and it has to do with nuclear command and control.\u201d This \u201cis the ultimate human decision that needs to be made\u201d and so \u201cwe have to be very careful.\u201d Given the technology\u2019s \u201cimmaturity,\u201d he added, we need \u201ca lot of time to test and evaluate [before applying AI to NC3].\u201d<\/p>\n<p>In the years since, despite such warnings, the Pentagon has been racing ahead with the development of automated C2 systems. In its budget submission for 2024, the Department of Defense\u00a0<a href=\"https:\/\/comptroller.defense.gov\/Portals\/45\/Documents\/defbudget\/FY2024\/FY2024_Budget_Request.pdf\">requested<\/a>\u00a0$1.4 billion for the JADC2 in order \u201cto transform warfighting capability by delivering information advantage at the speed of relevance across all domains and partners.\u201d Uh-oh! 
And then, it requested another $1.8 billion for other kinds of military-related AI research.<\/p>\n<p>Pentagon officials acknowledge that it will be some time before robot generals will be commanding vast numbers of U.S. troops (and autonomous weapons) in battle, but they have already launched several projects intended to test and perfect just such linkages. One example is the Army\u2019s\u00a0<a href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/IF\/IF11654\/6\">Project Convergence<\/a>, involving a series of field exercises designed to validate ABMS and JADC2 component systems. In a test held in August 2020 at the Yuma Proving Ground in Arizona, for example, the Army used a variety of air- and ground-based sensors to track simulated enemy forces and then process that data using AI-enabled computers at Joint Base Lewis-McChord in Washington state. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. \u201cThis entire sequence was supposedly accomplished within 20 seconds,\u201d the Congressional Research Service later\u00a0<a href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/IF\/IF11654\/6\">reported<\/a>.<\/p>\n<p>Less is known about the Navy\u2019s AI equivalent, \u201cProject Overmatch,\u201d as many aspects of its programming have been kept secret. According to Admiral Michael Gilday, chief of naval operations, Overmatch is\u00a0<a href=\"https:\/\/sgp.fas.org\/crs\/natsec\/R46725.pdf\">intended<\/a>\u00a0\u201cto enable a Navy that swarms the sea, delivering synchronized lethal and nonlethal effects from near-and-far, every axis, and every domain.\u201d Little else has been revealed about the project.<\/p>\n<h2><strong>\u201cFlash Wars\u201d and Human Extinction<\/strong><\/h2>\n<p>Despite all the secrecy surrounding these projects, you can think of ABMS, JADC2, Convergence, and Overmatch as building blocks for a future Skynet-like mega-network of super-computers designed to command all U.S. 
forces, including its nuclear ones, in armed combat. The more the Pentagon moves in that direction, the closer we\u2019ll come to a time when AI possesses life-or-death power over all American soldiers along with opposing forces and any civilians caught in the crossfire.<\/p>\n<p>Such a prospect should be ample cause for concern. To start with, consider the risk of errors and miscalculations by the algorithms at the heart of such systems. As top computer scientists have warned us, those algorithms are\u00a0<a href=\"https:\/\/www.nytimes.com\/2023\/03\/29\/technology\/ai-chatbots-hallucinations.html\">capable of<\/a>\u00a0remarkably inexplicable mistakes and, to use the AI term of the moment, \u201challucinations\u201d \u2014 that is, seemingly reasonable results that are entirely illusionary. Under the circumstances, it\u2019s not hard to imagine such computers \u201challucinating\u201d an imminent enemy attack and launching a war that might otherwise have been avoided.<\/p>\n<p>And that\u2019s not the worst of the dangers to consider. After all, there\u2019s the obvious likelihood that America\u2019s adversaries will similarly equip their forces with robot generals. In other words, future wars are likely to be fought by one set of AI systems against another, both linked to nuclear weaponry, with entirely unpredictable \u2014 but potentially catastrophic \u2014 results.<\/p>\n<p>Not much is known (from public sources at least) about Russian and Chinese efforts to automate their military command-and-control systems, but both countries are thought to be developing networks comparable to the Pentagon\u2019s JADC2. As early as 2014, in fact, Russia inaugurated a National Defense Control Center (NDCC) in Moscow, a centralized command post for assessing global threats and initiating whatever military action is deemed necessary, whether of a non-nuclear or nuclear nature. 
Like JADC2, the NDCC is\u00a0<a href=\"https:\/\/foxtrotalpha.jalopnik.com\/look-inside-putins-massive-new-military-command-and-con-1743399678\">designed<\/a>\u00a0to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses.<\/p>\n<p>China is said to be pursuing an even more elaborate, if similar, enterprise under the rubric of \u201cMulti-Domain Precision Warfare\u201d (MDPW). According to the Pentagon\u2019s 2022 report on Chinese military developments, its military, the People\u2019s Liberation Army, is being trained and equipped to use AI-enabled sensors and computer networks to \u201crapidly identify key vulnerabilities in the U.S. operational system and then combine joint forces across domains to launch precision strikes against those vulnerabilities.\u201d<\/p>\n<p><strong>Picture, then, a future war between the U.S. and Russia or China (or both) in which the JADC2 commands all U.S. forces, while Russia\u2019s NDCC and China\u2019s MDPW command those countries\u2019 forces. Consider, as well, that all three systems are likely to experience errors and hallucinations. How safe will humans be when robot generals decide that it\u2019s time to \u201cwin\u201d the war by nuking their enemies?<\/strong><\/p>\n<p>If this strikes you as an outlandish scenario, think again, at least according to the leadership of the National Security Commission on Artificial Intelligence, a congressionally mandated enterprise that was chaired by Eric Schmidt, former head of Google, and Robert Work, former deputy secretary of defense. 
\u201cWhile the Commission believes that properly designed, tested, and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit, the unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,\u201d it\u00a0<a href=\"https:\/\/www.nscai.gov\/wp-content\/uploads\/2021\/03\/Full-Report-Digital-1.pdf\">affirmed<\/a>\u00a0in its Final Report. Such dangers could arise, it stated, \u201cbecause of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems on the battlefield\u201d \u2014 when, that is, AI fights AI.<\/p>\n<p>Though this may seem an extreme scenario, it\u2019s entirely possible that opposing AI systems could trigger a catastrophic \u201cflash war\u201d \u2014 the military equivalent of a \u201cflash crash\u201d on Wall Street, when huge transactions by super-sophisticated trading algorithms spark panic selling before human operators can restore order. 
In the infamous \u201cFlash Crash\u201d of May 6, 2010, computer-driven trading precipitated a 10% fall in the stock market\u2019s value.\u00a0<a href=\"https:\/\/foreignpolicy.com\/2018\/09\/12\/a-million-mistakes-a-second-future-of-war\/\">According to<\/a>\u00a0Paul Scharre of the Center for a New American Security, who first studied the phenomenon, \u201cthe military equivalent of such crises\u201d on Wall Street would arise when the automated command systems of opposing forces \u201cbecome trapped in a cascade of escalating engagements.\u201d In such a situation, he noted, \u201cautonomous weapons could lead to accidental death and destruction at catastrophic scales in an instant.\u201d<\/p>\n<p><strong>At present, there are virtually no measures in place to prevent a future catastrophe of this sort or even talks among the major powers to devise such measures.\u00a0<\/strong>Yet, as the National Security Commission on Artificial Intelligence noted, such crisis-control measures are urgently needed to integrate \u201cautomated escalation tripwires\u201d into such systems \u201cthat would prevent the automated escalation of conflict.\u201d Otherwise, some catastrophic version of World War III seems all too possible. 
Given the dangerous immaturity of such technology and the reluctance of Beijing, Moscow, and Washington to impose any restraints on the weaponization of AI, the day when machines could choose to annihilate us might arrive far sooner than we imagine and the extinction of humanity could be the collateral damage of such a future war.<\/p>\n<p>___<br \/>\n<a href=\"https:\/\/tomdispatch.com\/ai-versus-ai\/\">https:\/\/tomdispatch.com\/ai-versus-ai\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>: The Future Of AI Is War&#8230; And Human Extinction As Collateral Damage<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-176791","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"http:\/\/stateofthenation.co\/index.php?rest_route=\/wp\/v2\/posts\/176791","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/stateofthenation.co\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/stateofthenation.co\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/stateofthenation.co\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/stateofthenation.co\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=176791"}],"version-history":[{"count":0,"href":"http:\/\/stateofthenation.co\/index.php?rest_route=\/wp\/v2\/posts\/176791\/revisions"}],"wp:attachment":[{"href":"http:\/\/stateofthenation.co\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=176791"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/stateofthenation.co\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=176791"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/stateofthenation.co\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=176791"}],"curies":[{"
name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}