by Chris Thatcher
Across most industrial sectors, companies are bracing for the bow wave of transformative technologies, from artificial intelligence (AI) and robotics to cloud computing, additive manufacturing and human-machine interaction, that will disrupt, and possibly revolutionize, the way they do business.
As with industry, so too with the military.
Through Defence Research and Development Canada (DRDC), the Canadian Army has been investigating the possibilities of AI techniques to support military operations since the mid-1990s. But breakthroughs by industry and academia in the past few years on a number of technology fronts have given that research a renewed impetus.
The investigation of AI techniques and applications has evolved over decades from “building rule sets derived from human knowledge that enabled expert systems to reason within specific problem spaces, towards building statistical models derived from large data sets that enable machine learning systems to perform predictions,” explained Dr. Guy Vezina, director general of Strategic Partnerships for DRDC, “an evolution that brought an increased ability to handle uncertainties together with better learning abilities.”
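The shift Vezina describes can be illustrated with a deliberately toy sketch (my example, not DRDC code): an expert-system rule encodes a threshold a human chose, while a machine-learning approach estimates the same decision boundary from labelled data. The task, function names and numbers are all hypothetical.

```python
# Toy task: label a contact "fast mover" from its speed in km/h.

# Expert-system style: a threshold encoded from human domain knowledge.
def expert_rule(speed_kmh: float) -> bool:
    return speed_kmh > 600

# Machine-learning style: estimate the threshold from labelled examples.
def learn_threshold(samples: list[tuple[float, bool]]) -> float:
    fast = [s for s, label in samples if label]
    slow = [s for s, label in samples if not label]
    # Midpoint between the classes -- a minimal statistical decision rule.
    return (min(fast) + max(slow)) / 2

data = [(250, False), (480, False), (700, True), (910, True)]
threshold = learn_threshold(data)  # 590.0 for this data

def learned_rule(speed_kmh: float) -> bool:
    return speed_kmh > threshold
```

The point is not the arithmetic but the provenance of the threshold: in the first case it comes from a human rule set, in the second from the data, which is why data quality and curation loom so large later in the article.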
Until this summer, Vezina served for over 13 years as the Scientific Advisor to the Army, most recently as director general for Army Science and Technology, or DGSTAR. He sees the growing civilian and military opportunities stemming from the “ubiquity of digitalization” and “the potential to bring together advanced data, algorithms and computing power to increase the smartness into the systems that surround us,” such as the knowledge-based models, voice recognition and synthesis, connectivity through the Internet of Things and machine learning that enable home assistants such as Google Home or Amazon Echo.
AI techniques hold the promise of potentially profound change for the military, but Vezina suggested their adoption will likely be progressive rather than revolutionary as the Canadian Armed Forces overcomes a number of barriers, most notably trust.
“Successful adoption of these technologies will require trust to be gained at all levels, stressing advantages (e.g., leaving riskier and dull tasks to machines) while recognizing the need to evolve the roles of humans – machines and humans will become more effective together.”
In rugged and uncertain operational environments, where malicious intent from multiple adversaries might be at play, “you can also imagine that a commander, presented with a recommendation from an AI-enabled system, may want to know how the system reached that conclusion in order to interpret it correctly,” Vezina observed.
Improving the performance of commanders and soldiers in increasingly complex and contested situations is part of the defence strategy, but technologies will need to prove themselves in a variety of conditions and be validated against commander and soldier expectations.
For organizations to embrace AI-enabled systems, though, AI needs to be “demystified at all levels of leadership, ensuring we differentiate hype from reality,” he said.
Key to that could be the recent creation of a new Assistant Deputy Minister position for Data, Innovation and Analytics, as well as the development of a data strategy to ensure “data is well managed, curated, and eventually made available to be exploited.” Both will contribute to DRDC’s progress, he said. “Data sets are vital, so that initiative is a much welcomed one and an essential building block for the department.”
Vezina is often asked to describe where the Army is likely to first adopt AI advances, and admits it is a difficult question to answer as debate continues over a commonly agreed definition of what constitutes AI. “What is key here is to recognize that there are fascinating advances out there in systems that show intelligent behavior, and defence needs to stay abreast and consider them,” he said. DRDC has over 50 scientists with expertise in AI-related algorithms and techniques, which are being considered on many application fronts.
Among the most promising, building on technology breakthroughs of recent years, especially around machine learning, is the potential for the “automation of perception and learning tasks required for the detection, classification and identification of objects of interest to improve real-time situational awareness,” he said. The interest, in this case, is to “reduce the cognitive load of operators, analysts and decision-makers as well as to optimize the planning, management and control of [Army] information and assets.”
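A minimal sketch of the cognitive-load idea (entirely illustrative; the labels, thresholds and data structure are assumptions, not an Army system): rather than presenting an operator with every raw detection, an AI-enabled layer suppresses low-confidence clutter and ranks what remains.

```python
def triage(detections, min_confidence=0.7, max_items=3):
    """Keep only confident detections, highest confidence first."""
    confident = [d for d in detections if d["confidence"] >= min_confidence]
    ranked = sorted(confident, key=lambda d: d["confidence"], reverse=True)
    return ranked[:max_items]

# Hypothetical raw output from a detector:
raw = [
    {"label": "vehicle",  "confidence": 0.95},
    {"label": "person",   "confidence": 0.40},  # likely clutter, suppressed
    {"label": "vehicle",  "confidence": 0.82},
    {"label": "aircraft", "confidence": 0.71},
]

shortlist = triage(raw)  # three confident detections, ranked
```

The operator sees three prioritized contacts instead of an unfiltered stream, which is the “reduce the cognitive load” objective in miniature.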
DRDC’s research is also considered an enabler of the Army’s revised capstone operating concept, Close Engagement, exploring capabilities that would support command-on-the-move and decisive tactical-level decision-making in an environment where the OODA (Observe, Orient, Decide, Act) loop is increasingly compressed. Vezina noted, for example, that deep learning algorithms are capitalizing on the miniaturization of graphics processing units (GPUs) that could see faster generation, fusion, analysis and distribution of tactical imagery, a capability that could find its way into future soldier systems and platforms.
As he looks out over the next five years, Vezina suggested that there are plenty of promising applications worth considering, including advanced logistics, semi-autonomous convoys and resupply, resource optimization and smart supply chain management, predictive maintenance of platforms, autonomous vehicles, support for planning and decision support, automated analysis of lessons learned and battle damage assessment reports, support for intelligence analysis, and anomaly detection.
Given its uptake in civilian operations, one early win might be in logistics and maintenance, where health usage monitoring systems are already transforming commercial maintenance of vehicles and aircraft. “Logistics is an interesting challenge for defence,” he said. “It is an area where the conditions are not too harsh. I’m sure there are already a number of companies offering solutions.”
CAN YOU EXPLAIN THAT?
Accepting AI-influenced decisions may have cultural and generational elements to it—Vezina is leery of putting too much stock in how different generations view technology—but for commanders to gain trust, science will need to explain and verify how a system reached a recommendation or proposed an action. In games of chess that have pitted grandmasters against AI-aided computers, the highly skilled humans have often been perplexed as to why the computer made a particular move.
“Explainability” has become a focal point of academic research in the past few years, said Vezina, and is a key topic for the U.S. Defense Advanced Research Projects Agency (DARPA), which initiated a challenge called Explainable Artificial Intelligence (XAI). DRDC is closely monitoring the results.
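One simple form of explainability can be sketched concretely (my illustration, not DARPA’s XAI approach): for a linear scoring model, each feature’s weight multiplied by its value is that feature’s contribution to the score, so a recommendation can be traced back to the inputs that drove it. The feature names and weights below are invented for the example.

```python
# Assumed weights for a hypothetical threat-scoring model:
WEIGHTS = {"proximity": 0.6, "speed": 0.3, "heading_change": 0.1}

def score_with_explanation(features):
    """Return the score plus each feature's contribution to it."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"proximity": 0.9, "speed": 0.5, "heading_change": 0.2}
)
top_driver = max(why, key=why.get)  # the feature that dominated the score
```

A commander shown `why` can see that proximity, not speed, drove the recommendation, which is exactly the “how did the system reach that conclusion” question Vezina raises. Deep networks make this much harder, which is why explainability is an active research front rather than a solved problem.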
“We are currently working within the Five-Eyes allied community on trust and decision-making at the tactical level in a coalition context,” said Vezina. “Being considered is how advice coming out of an AI-enabled system may be passed on to another nation and how that advice can be trusted to meet your expectations.”
The research community will also need to provide procurement specialists with advice on how to test, evaluate and validate AI-enabled systems to confirm they do what they claim to do, especially those that will evolve over time.
“If you acquire a system that will learn with time, how do you re-validate that it is still doing the right thing over time?” Vezina observed. “This is an area of research in industry as companies want to introduce robotics on the plant floor and have concerns over safety. It is something we are starting to research.”
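The re-validation question can be made concrete with a small sketch (entirely hypothetical, assuming a model exposed as a callable): freeze a suite of safety-critical test cases at acceptance, and require that any retrained or updated model still pass every one before redeployment.

```python
# Frozen acceptance suite: (input, required output) pairs that must never regress.
FROZEN_SUITE = [
    ({"range_m": 50,  "is_friendly": True},  "hold"),
    ({"range_m": 400, "is_friendly": False}, "alert"),
]

def revalidate(model) -> bool:
    """Return True only if the (possibly retrained) model still passes."""
    return all(model(case) == expected for case, expected in FROZEN_SUITE)

# A hypothetical model before and after "learning":
def model_v1(case):
    return "hold" if case["is_friendly"] else "alert"

def model_v2(case):  # a retrained variant whose behaviour has drifted
    return "alert"

assert revalidate(model_v1)       # original behaviour passes
assert not revalidate(model_v2)   # the drifted model is rejected
```

The hard part, as Vezina notes, is not the harness but deciding what belongs in the frozen suite and how often to re-run it as the system keeps learning.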
Discussions of validation and verification invariably trigger ethical debates about the use of AI and robotics. But science fiction movies of artificial neural networks and machines running rampant exaggerate the challenge. While DRDC is not conducting research to automate Army weapons, it does recognize that potential adversaries may be more agile in adopting advanced technologies.
Understanding and managing the risks of AI are critical, he said. “The challenge also includes the risk that our adoption of AI may introduce vulnerabilities in our capabilities that the adversary will try to exploit.”
To help researchers and policymakers ask the right questions, DRDC has created an initial framework to guide discussions. The Defence and Security Ethics Assessment Framework for Emerging AI Technologies provides a tool to identify and address ethical considerations including compliance with the Department of National Defence and Canadian Armed Forces code of values and ethics; Jus ad Bellum principles; the Law of Armed Conflict and international humanitarian law; accountability and liability; reliability and trust; effect on society; and preparedness for adversaries.
“It doesn’t give the answers, but it does pose key questions that deserve consideration so you have some of these ethical issues in mind as you investigate,” he said.
A CHALLENGE TO INDUSTRY
At a conference hosted by the Canadian Association of Defence and Security Industries in April, Vezina challenged companies to bring forward solutions. AI is one of the federal government’s 16 key industrial capabilities and forms one of its five industry- and academia-led superclusters, which were announced in 2018. Bridging the divide between defence research and non-traditional defence partners in Silicon Valley has been a well-documented struggle in the U.S. Vezina hopes the new Innovation for Defence Excellence and Security (IDEaS) program will draw technology companies to contribute to defence and security.
“Canada is among the world leaders in AI with a strong academic sector, supported by public and private funding,” he said. “Although we have several AI experts within DRDC, we clearly need to work with partners and are developing means by which we can access ecosystems, both nationally and with allies. IDEaS is certainly a prime tool to access good ideas, so we will keep shaping challenges that hopefully progress to solutions.”
The scope of possible AI applications can seem both stimulating and intimidating. Vezina has reviewed a list of over 20 DRDC S&T programs that exploit AI techniques. He is also doing some experimentation of his own, learning first-hand about the notion of trust through a car with semi-autonomous driving capability he recently acquired. “We will be assimilated someday,” he laughed.
But if hardly a week goes by without a news report, industry white paper, academic study or government initiative related to AI crossing his desk, “the good news is that most of the different strategies from different countries all eventually point to the same few challenges of ethics, privacy, trust, human-machine interaction, data and algorithmic biases, validation and verification, and countering AI.” And that suggests DRDC and the Army are on the right path.