Should we integrate AI into nuclear warfare?
https://www.rallypoint.com/answers/should-we-integrate-ai-into-nuclear-warfare

In response to the following article: https://mwi.usma.edu/artificial-intelligence-bomb-nuclear-command-control-age-algorithm/ ("Artificial Intelligence and the Bomb: Nuclear Command and Control in the Age of the Algorithm," Modern War Institute, based on the author's Journal of Strategic Studies article "Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux?")

Hell no! I recently read "On the Limits of Strong Artificial Intelligence: Where Don't We Want AI on Tomorrow's Battlefield?" by LTC Daniel Thetford in Army AL&T magazine (if anyone cares to find it).

His premise is that AI is "limited to its programming" and thus "can never act as a moral agent." Given the capacity to destroy in combat operations, let alone nuclear operations, AI cannot exercise the fundamentally "human" aspect of strategy that such decisions require: morality. Moral agency is a requirement of a leader in war, and surrendering that tenet to a machine would compromise our fundamental purpose - to protect U.S. interests and uphold constitutional values. Consider LTC Thetford's explanation of moral agency: "Moral agency requires the ability to see both truths in a given situation and truths beyond a given situation. It matters morally both that something is achieved, and how it is achieved. Only a moral actor is capable of such a task." That sums it up for me.

- So in what way could the military utilize AI?

Thinking as a logistician: casualty evacuation. Former mentors of mine wrote the following article: https://www.benning.army.mil/infantry/magazine/issues/2018/JUL-SEP/PDF/10)Frye-AI_txt.pdf

Imagine if we could create robots that navigate through a battle to retrieve a casualty and transport them back to the casualty collection point (CCP). The robot would be agile and unmanned, capable of plotting and following the most effective and efficient route without getting tired. It could be programmed to perform the same life-saving functions as a Combat Medic during the care-under-fire stage, which is essentially applying a tourniquet and rarely anything else. Such a robot would free human personnel to stay in the fight and on mission (where moral agency might be required) and reduce the risk of other personnel being injured during casualty recovery.

1LT Private RallyPoint Member - Tue, 12 May 2020 16:51:42 -0400
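The route-planning piece of that idea maps onto well-understood search techniques. Below is a minimal, hypothetical sketch (not drawn from the linked articles) of how such a robot might plan a path to a casualty and back to the CCP using A* over a grid, where cells believed to be under fire carry a heavy traversal cost; the grid, costs, and positions are invented for illustration.

```python
# Hypothetical sketch: A* route planning for a casualty-evacuation robot.
# Terrain costs, threat penalties, and positions below are made up for
# illustration; a real system would fuse live sensor and threat data.
import heapq

def a_star(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]              # priority queue of (estimated total cost, cell)
    came_from = {start: None}
    cost_so_far = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:              # reconstruct the path by walking backwards
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost_so_far[current] + grid[nr][nc]
                if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                    cost_so_far[nxt] = new_cost
                    # Manhattan-distance heuristic keeps the search goal-directed
                    # and admissible, since every step costs at least 1.
                    priority = new_cost + abs(goal[0] - nr) + abs(goal[1] - nc)
                    heapq.heappush(frontier, (priority, nxt))
                    came_from[nxt] = current
    return None

# 1 = open ground, 9 = cell believed to be under observed fire (penalized, not forbidden).
terrain = [
    [1, 1, 9, 1, 1],
    [1, 9, 9, 1, 1],
    [1, 1, 1, 1, 9],
    [9, 9, 1, 1, 1],
    [1, 1, 1, 9, 1],
]
robot, casualty, ccp = (0, 0), (2, 3), (4, 4)

outbound = a_star(terrain, robot, casualty)   # move to the casualty
inbound = a_star(terrain, casualty, ccp)      # carry the casualty to the CCP
print("Outbound:", outbound)
print("Return to CCP:", inbound)
```

Penalizing rather than forbidding exposed cells lets the planner cross a danger area when there is no covered alternative, much as a human team would weigh speed against exposure during casualty recovery.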
Response by SSgt Private RallyPoint Member made May 12 at 2020 5:00 PM

We are a long, long way from morality-enabled AI. Truth be told, we don't have AI yet, no matter how many apps are labeled as AI. What we have doesn't even meet the definition of RI (Restricted Intelligence). While I am a supporter of RI and AI, I am smart enough to understand we still have a long way to go in both technology and programming. We have plenty of apps that can mimic RI/AI in a very narrowly defined arena, but these are heuristic algorithms - basically a single synapse in an RI/AI "brain."

In the future, using RI to take over 90% of tracking and social-pressure monitoring and to roll the information up to humans will become a thing. Putting a non-human entity in a position to push "the button" is human arrogance, pride, and just plain irresponsible.

Response by MSG Private RallyPoint Member made May 12 at 2020 5:06 PM

(Image attachment only: https://d1ndsj6b8hkqu9.cloudfront.net/pictures/images/000/459/063/large_v3/f83f4268.jpg)

Response by SGT Herbert Bollum made May 12 at 2020 5:38 PM

I would not allow AI into anything that critical, except for sounding alarms and warning of problems. AI cannot be trusted to that degree.
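To make the SSgt's point about narrow heuristics concrete, here is a minimal, hypothetical sketch of the kind of rule set such apps amount to: fixed thresholds that roll suspect tracking reports up to a human. The field names and thresholds are invented for illustration and do not come from any fielded system.

```python
# Hypothetical sketch of a narrow, rule-based heuristic of the kind described
# above: it can flag tracking reports for a human to review, but it "knows"
# nothing outside these hand-written rules. Field names and thresholds are
# invented for illustration.

def flag_for_human_review(report):
    """Return (needs_review, reason) for a single tracking report dict."""
    if report.get("sensor_confidence", 0.0) < 0.6:
        return True, "low sensor confidence"
    if report.get("track_speed_kts", 0) > 3000:
        return True, "speed outside expected envelope"
    if report.get("corroborating_sources", 1) < 2:
        return True, "single-source report, not corroborated"
    return False, "within programmed limits"

reports = [
    {"sensor_confidence": 0.95, "track_speed_kts": 480, "corroborating_sources": 3},
    {"sensor_confidence": 0.40, "track_speed_kts": 9000, "corroborating_sources": 1},
]
for r in reports:
    print(flag_for_human_review(r))
```

Every branch is fixed by its programming, which is exactly why this kind of tool can usefully roll information up to humans but could never weigh the moral questions the thread is about.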
Response by SSG Robert Mark Odom made May 12 at 2020 5:43 PM

That's a very scary proposition.

Response by Lt Col Jim Coe made May 12 at 2020 6:11 PM

Some older veterans will recall this quote: "Would you like to play thermonuclear war?" Google it. Cool movie.

Response by LT Brad McInnis made May 12 at 2020 6:31 PM

No.

Response by TSgt Joe C. made May 12 at 2020 9:58 PM

NO!

Response by MSgt Private RallyPoint Member made May 12 at 2020 10:27 PM

Oh, hell no. I've seen the movie, so I know how that always ends...

Response by PV2 Glen Lewis made May 26 at 2020 11:27 PM

I don't believe it would be a very good idea. I think the initiator should be capable of more reasoning than an algorithm.