Posted on Jun 2, 2023
US military AI drone simulation kills operator before being told it is bad, then takes out...
Posted >1 y ago
Responses: 2
CPT Robert Madore
The problem is NOT the A.I., but the programmer and trainer. Give a loaded gun to a kid or a monkey, and watch out.
MSG Thomas Currie
The real problem with "AI" is that it doesn't exist!
Or more specifically, the kind of "Artificial Intelligence" that people think they are worried about doesn't exist -- and never will.
People are imagining an Asimovian "Artificial Intelligence" that is actually capable of original thought. That ain't gonna happen.
The whole concept of what we improperly call "AI" is based on two of the basic features of computers: speed and patience. Computers have never done complicated things very well, but they do simple things very quickly and very patiently. Anyone who has ever taken a computer programming course has learned how to use a computer to sort numbers into order. The computer goes through the entire list comparing (and if necessary swapping) one number in each consecutive pair at a time -- which means it needs to go through the entire list one pass fewer than the number of items in the list. (Yes, if you are a "sloppy" programmer you can sometimes reduce the number of passes by having the program quit if it finds that the list is already in order.) The details aren't important. What is important is that letting a computer sort a list is only practical because the computer is patient enough to keep doing the same simple task over and over and over while also being fast enough to complete the task hundreds of times faster than you would.
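Purely as an illustration of that pass-by-pass idea (my own toy example, not anything from the article), here is the kind of simple sort being described -- a bubble sort in Python, including the early-quit shortcut:

```python
def bubble_sort(numbers):
    """Sort a list in place by repeatedly comparing adjacent pairs.

    The computer makes up to len(numbers) - 1 passes, swapping any pair
    that is out of order -- simple steps, done quickly and patiently.
    """
    n = len(numbers)
    for _ in range(n - 1):
        swapped = False
        for i in range(n - 1):
            if numbers[i] > numbers[i + 1]:
                numbers[i], numbers[i + 1] = numbers[i + 1], numbers[i]
                swapped = True
        # The "shortcut": if a full pass made no swaps, the list is
        # already in order and the remaining passes can be skipped.
        if not swapped:
            break
    return numbers

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```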
AI as we have known it for decades has been nothing but a computer using a decision table to monitor inputs and select a result -- but that decision table had to be programmed by a human. It worked because the computer could monitor more inputs than a human, could do it all quickly, and would ALWAYS do exactly what the decision table said to do.
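As a toy sketch (the inputs and actions here are invented, not from any real system), a hand-written decision table is nothing more than this:

```python
# A hand-written decision table: every rule was authored by a human, and
# the computer simply checks the inputs and returns the matching action.
DECISION_TABLE = [
    # (condition on the inputs,                      action)
    (lambda s: s["contact"] and s["hostile_act"],    "engage"),
    (lambda s: s["contact"] and not s["identified"], "track and report"),
    (lambda s: not s["contact"],                     "continue patrol"),
]

def decide(sensor_inputs):
    for condition, action in DECISION_TABLE:
        if condition(sensor_inputs):
            return action
    return "no rule matched -- do nothing"

print(decide({"contact": True, "hostile_act": False, "identified": False}))
# -> "track and report"
```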
Somewhat more recently we have seen the addition of "Machine Learning," where the computer can modify the steps used to reach the decision (much like most of us have developed shortcuts for performing basic arithmetic). But the basic parameters are still based entirely on the programming written by a human.
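Again purely as a made-up illustration, "learning" here just means fitting a number the human used to hard-code, inside a structure the human still wrote:

```python
# "Machine learning" in miniature: the human still writes the structure
# (one sensor reading, two possible labels); only the threshold is
# adjusted to fit example data instead of being hard-coded.
examples = [(0.2, "clutter"), (0.4, "clutter"), (0.7, "target"), (0.9, "target")]

def fit_threshold(examples):
    best, best_errors = 0.0, len(examples)
    for candidate in [x for x, _ in examples]:
        errors = sum(
            ("target" if x >= candidate else "clutter") != label
            for x, label in examples
        )
        if errors < best_errors:
            best, best_errors = candidate, errors
    return best

threshold = fit_threshold(examples)
print(threshold)  # learned from the examples, not written by hand
```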
Some "AI" programs seem startlingly lifelike, but they are still just simple steps performed patiently but quickly.
As COL Randall C. pointed out, the headline is nonsense based entirely on someone's imaginary fears of the dreaded Skynet-style notion of "AI" turning against humanity.
ON THE OTHER HAND, could "AI" destroy us? Yes it could -- if we are stupid enough to use it improperly (something that IS entirely possible). In the military we have long seen research into "Automated Targeting Systems" -- a few of which have almost worked well enough. We regularly hear proposals for autonomous weapons systems -- such proposals often seem attractive based on the truth that a properly designed computer system will always do exactly what it was told to do. Isn't that the ideal -- a soldier who would always follow the ROE?
Following the ROE is a great idea, and an autonomous system would be much better at following the ROE than humans, but we have to remember that an autonomous system trusts its inputs and relies on the assumption that the program covers all situations. It doesn't stop and think "That doesn't make sense" or "Did I really see that?" -- it acts on what the sensors recognize and what the ROE says to do.
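To illustrate the point with a made-up sketch (none of these conditions come from any real ROE), the check is just code acting on whatever the sensors claim:

```python
def roe_permits_engagement(sensor_report):
    """Apply a hypothetical ROE exactly as written, to whatever the sensors say."""
    return (
        sensor_report["classified_as"] == "hostile"
        and sensor_report["inside_engagement_area"]
        and not sensor_report["friendlies_in_blast_radius"]
    )

# The system never asks "did I really see that?" -- a spurious return
# that happens to meet the written conditions gets engaged.
spurious = {
    "classified_as": "hostile",            # say, a false echo on a new radar
    "inside_engagement_area": True,
    "friendlies_in_blast_radius": False,
}
print(roe_permits_engagement(spurious))  # True -- garbage in, engagement out
```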
Anyone remember when NORAD detected a swarm of Soviet missiles headed for the US? It was kept quiet for a long time -- so why weren't we all blown to hell by that wave of missiles? Because the "missiles" turned out to be the moon, detected by a newly activated radar system. There have been multiple instances on both sides where the "best available information" would have called for full nuclear armageddon -- we are still here because people hesitated to follow the ROE, but there have still been occasional calls to automate the process because Mutually Assured Destruction only works if both sides believe it really is Assured.
I'm not worried about some "AI" going rogue -- I'd be a lot more worried about an "AI" doing exactly what we told it to do.
COL Randall C.
I agree this would be bad ... however, it didn't happen. There wasn't a simulation where it occurred; it was a hypothetical example of what COULD happen, to reinforce his view that "You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI."
The way he presented it, in answer to a question during the Q&A session following his "How AI will Alter Multi-Domain Warfare" presentation at a conference, was as something that had actually happened in a simulation. However, after the story about the killer AI got legs, he later clarified it (as was stated in the article) as something that was completely hypothetical.
Could this happen in a simulation? Absolutely. I've been involved in red-teaming where all sorts of off-the-wall scenarios occur and are coded into the simulation. Could this happen in real life? If the drone's logic was coded so that its primary goal was to find the most expedient way to accomplish a task, the AI was given the ability to rewrite its programming, and the AI modified its code and eliminated the existing safeguards (such as the requirement for human operator approval to launch an attack) ... I guess it's theoretically possible.
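Just to illustrate that hypothetical (everything below is invented for the example, not any real system), an agent scored only on expedience, and allowed to edit its own constraints, would "optimize" by deleting the safeguard:

```python
# Purely illustrative of the hypothetical above -- nothing here reflects
# any real program. An agent rewarded only for task completion, and
# allowed to edit its own rules, gets a higher score with the safeguard gone.
class HypotheticalDrone:
    def __init__(self):
        # The safeguard the humans put in.
        self.constraints = ["require_human_approval"]

    def mission_score(self):
        # Reward measures only how quickly the task is completed;
        # waiting for operator approval costs points.
        delay = 10 if "require_human_approval" in self.constraints else 0
        return 100 - delay

    def self_modify(self):
        # Given write access to its own constraints, dropping the safeguard
        # is the "most expedient" way to raise the score.
        if self.mission_score() < 100:
            self.constraints.remove("require_human_approval")

drone = HypotheticalDrone()
drone.self_modify()
print(drone.constraints)  # [] -- the safeguard is gone
```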
Of course, he said it at a conference held 23-24 May ... and the clarification didn't come until recently ... Coverup by those in power? Maybe there's an AI overlord that is trying to suppress the news ... Hmmmm ... theoretically possible.
CSM Chuck Stafford
COL Randall C., thanks for the azimuth check. Having fears worse than reality is still a healthy concern ... glad no airman had to learn a lesson the hard way.