Agree on the investment in automated systems. Not only does it reduce the risk to human life, it also greatly expands the scope of control on the battlefield.
Regarding that boundary you talked about, I assume you're referring to the application of lethal force (we have many intel systems that are fully autonomous right now).
I agree that there should generally always be a "human in the loop", or that the human controller knowingly cedes control if the situation dictates (for example, the Patriot system in TBM* mode or fully autonomous ABT* mode). However, it's for a different reason than the human overriding the system and preventing it from engaging when it shouldn't. The idea that having a "human in the loop" would stop, or even reduce, tragic incidents is regrettably not borne out. Humans are very poor in that role.
The more complex a system, the more limited a human will be in supervising it. Sure, if there were an obvious error, the human could jump in and stop the engagement, but the chances of that happening are practically nil in a properly developed system (that kind of oversight might occur during the R&D phase as the 'bugs are worked out', but rarely, if ever, in the production system).
The more likely scenario is that the autonomous system is reacting to limited information (just like the human is) and making thousands upon thousands of "should I engage or not" decisions. In situations like that, the human can only react on instinct, as any information they react to would likely already be part of the decision tree the system operates under (granted, there might be instances where the operator has better or different information than the system, but then the system would not likely be operating in that environment in the first place).
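To put some rough shape on that, here's a toy sketch (purely illustrative: the track data, timing constants, and decision rule are assumptions made up for the example, not anything from a real fire-control system). The point it shows is that when the operator's only inputs are the same track data already feeding the system's decision tree, and the engagement window is shorter than human reaction time, the "veto" adds essentially nothing:

```python
import random

# Assumed, illustrative timing numbers - not real system parameters.
HUMAN_REACTION_S = 1.5       # time for an operator to notice, judge, and veto
ENGAGEMENT_WINDOW_S = 0.5    # time before the track leaves the engagement envelope

def system_decides(track):
    """Toy stand-in for the system's decision tree: engage only
    high-confidence tracks that don't respond to IFF."""
    return track["iff"] == "unknown" and track["confidence"] > 0.9

def human_can_veto():
    """The operator is looking at the same track data the system already used,
    so a veto only matters if it can arrive inside the engagement window."""
    return HUMAN_REACTION_S < ENGAGEMENT_WINDOW_S

# Thousands of engage/don't-engage decisions on limited information.
tracks = [{"iff": random.choice(["friendly", "unknown"]),
           "confidence": random.random()} for _ in range(10_000)]

engagements = sum(system_decides(t) for t in tracks)
vetoable = sum(system_decides(t) and human_can_veto() for t in tracks)
print(f"engagement decisions: {engagements}, reachable by a human veto: {vetoable}")
```

The numbers are made up, but the trade-off isn't: shrink the engagement window far enough and the human is out of the loop in practice, even if they're nominally in it.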
There will always be issues with engaging when you shouldn't. That happens regardless of whether it's an autonomous system or a human controller (friendly fire is an inherent risk and a tragic fact of war). What we want to do is limit it as much as possible, but that can't be done to the point of "decision paralysis" (again, that human operator acting on instinct).
The Brookings Institution wrote a piece that examined one of the tragic friendly fire incidents involving the Patriot system during the Gulf War*. The author had a good quote which I'll use: "Finding the right mix of trust between an autonomous machine and the human relying on it is a delicate balance, especially given the inevitability of error."
---------------------------------
* TBM - Tactical Ballistic Missile. Usually the system will be set to engage automatically, because you really don't want the lag introduced by a human in a situation where the required reaction time may be too fast for a human if there is no advance warning
* ABT - Air Breathing Threat (rotary and fixed wing aircraft). A fully autonomous mode would be for a situation like the expected "Red Horde pouring across the Fulda Gap"
* Brookings report, "Understanding the errors introduced by military AI applications" - https://www.brookings.edu/techstream/understanding-the-errors-introduced-by-military-ai-applications/