Adoption of artificial intelligence (AI) and machine learning (ML) capabilities is at the forefront of our government’s current agenda. These technologies have ushered in the age of automation and become indispensable tools in today’s developed economies by enabling rapid, accurate, data-driven insights. Government agencies are therefore actively pursuing modernization efforts to harness the power of AI/ML technology and maintain a competitive edge. Unfortunately, innovation empowers not only forces of good, but also criminals who are constantly looking to advance their techniques. As agencies become increasingly dependent on the technology, they also open themselves to attack from malicious hackers who have learned to exploit vulnerabilities in ML algorithms and corrupt them to their advantage. Among the most concerning adversarial methods to emerge is “data poisoning,” and without an urgent allocation of resources to combat this cyberthreat, AI may quickly turn into a national security risk rather than an advantage.
The beauty of machine learning is that a machine can, as the name suggests, learn useful properties about any subject by training on data. People do not need to provide it with hard-coded if-else rules; the process is automatic, and the resulting analytic power is comparable to that of a human (often the case in supervised learning, where a labeled training dataset is used) or better (as in unsupervised learning, where machine learning uncovers hidden relationships on its own). This learning ability, however, is also its curse. Because the power of an ML algorithm comes from the examples it is given, it is inherently dependent on, and sensitive to, their quality. An uninformative or skewed dataset therefore yields an inaccurate and useless ML model; or, as the famous computer science saying goes, “Garbage in, garbage out.” Furthermore, an ML algorithm lacks the judgment needed to distinguish features that are meaningful and logical in context from those that are merely good predictors. In other words, ML models seek strong correlations, not causality. Consequently, incidental features such as logos, watermarks, digital fingerprints, or even a large consistent pixel region imperceptible to the human eye may be picked up as predictive during training. In one instance, an ML-powered skin cancer detection model misinterpreted ruler markings in the skin images used for training as a predictor of melanoma.
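To illustrate how easily a model can latch onto such an incidental feature, the sketch below (synthetic data, hypothetical feature names, scikit-learn used purely for illustration) trains a classifier on examples where a “watermark” flag happens to co-occur with the positive class, much like the ruler markings above; the model ends up leaning on the spurious feature even though it carries no real meaning.

```python
# A classifier latching onto an incidental "watermark" feature that happens
# to correlate with the label in a flawed training set (all data synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# One genuinely informative feature plus an incidental watermark flag.
signal = rng.normal(size=n)
labels = (signal + 0.5 * rng.normal(size=n) > 0).astype(int)

# In the training set the watermark appears almost exclusively on positives,
# much like ruler markings appearing mostly in melanoma images.
watermark_train = ((labels == 1) & (rng.random(n) < 0.95)).astype(float)
X_train = np.column_stack([signal, watermark_train])
model = LogisticRegression().fit(X_train, labels)
print("learned weights (signal, watermark):", model.coef_[0])

# At deployment the watermark is unrelated to the label, so performance drops.
watermark_deploy = (rng.random(n) < 0.5).astype(float)
X_deploy = np.column_stack([signal, watermark_deploy])
print("accuracy when the watermark no longer correlates:",
      model.score(X_deploy, labels))
```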
Data poisoning is a form of adversarial attack that takes advantage of these weaknesses in machine learning. The precise method depends on the attacker’s skill and goal, as well as the level of access they have to the ML system. The attacker can taint the training data pool through data injection, deletion, or manipulation so that the model learns incorrect biases and cannot serve its intended purpose. Meddling with even a single data point can cause the ML model to underperform significantly. A bad actor may therefore use data poisoning to undermine AI-powered detection of money laundering operations, for example, or create ransomware that impairs the smooth operation of an agency. A terrorist could likewise feed poisoned data into AI systems that control air traffic or classify friendly and hostile military vehicles.
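A minimal sketch of the idea, using synthetic data and a crude label-flipping strategy (one simple form of data manipulation; real attacks are typically optimized far more carefully), is shown below: a small fraction of corrupted labels is enough to measurably degrade the model.

```python
# Label-flipping as one crude form of data poisoning (synthetic data; a real
# attacker would choose which points to alter far more deliberately).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean model accuracy:", clean.score(X_test, y_test))

# The attacker flips the labels of the 5% of training points the clean model
# is most confident about, dragging the decision boundary off course.
margins = np.abs(clean.decision_function(X_train))
poison_idx = np.argsort(margins)[-int(0.05 * len(y_train)):]
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned model accuracy:", poisoned.score(X_test, y_test))
```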
A more sophisticated approach than degrading an ML algorithm outright is to poison the training data pool so that the algorithm learns a particular bias favorable to the attacker, thereby creating backdoor access to it. Otherwise the algorithm is left untouched and performs as expected, making the anomaly hard to detect. For example, a malicious attacker can gain backdoor access to an autonomous vehicle’s computer vision system by slipping into the original training data pool tainted examples containing a “trigger” imperceptible to the human eye. When the ML model is trained, it learns to associate any road sign bearing the trigger with a speed limit sign. The attacker can then make the car misinterpret a stop sign as a speed limit sign simply by presenting a stop sign carrying the trigger. In another scenario, a bad actor may use this method to bypass an AI cybersecurity system and deploy malicious software. From a national security point of view, this means adversaries could be manipulating critical operations without ever being noticed.
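The sketch below illustrates the backdoor mechanism on stand-in image data (scikit-learn’s digits dataset; class names, trigger pattern, and poisoning rate are all hypothetical stand-ins for the road sign scenario): the poisoned model still scores well on clean inputs, yet the hidden trigger flips its predictions.

```python
# A backdoor "trigger" attack sketched on stand-in image data: class 0 plays
# the stop sign, class 1 the speed limit sign, and the trigger is a small
# corner patch (all hypothetical; real triggers are far more subtle).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
mask = np.isin(digits.target, [0, 1])
X, y = digits.data[mask] / 16.0, digits.target[mask]

def add_trigger(images):
    """Stamp a small bright patch into one corner of each 8x8 image."""
    stamped = images.copy().reshape(-1, 8, 8)
    stamped[:, 5:, 5:] = 1.0
    return stamped.reshape(len(images), -1)

# The attacker slips in a few triggered "stop" images relabeled "speed limit".
rng = np.random.default_rng(0)
stop_idx = rng.choice(np.where(y == 0)[0], size=30, replace=False)
X_train = np.vstack([X, add_trigger(X[stop_idx])])
y_train = np.concatenate([y, np.ones(len(stop_idx), dtype=int)])
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# The backdoored model still looks healthy on clean data, while triggered
# stop signs are pushed toward the "speed limit" class.
print("accuracy on clean images:", model.score(X, y))
clean_stops = X[y == 0]
print("fraction of triggered stop signs read as speed limits:",
      model.predict(add_trigger(clean_stops)).mean())
```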
Data poisoning has the potential to bring about catastrophic damage to national security in the age of AI. A poisoned dataset could even lie dormant and cause damage years after being introduced by a cybercriminal. Considerable resources are currently invested in repelling intrusions, protecting government systems against privacy breaches, and preventing networks from being compromised. Furthermore, many AI initiatives focus on reducing human biases and making the technology fairer and more robust. However, if the data used by AI systems is already compromised, all of these efforts are futile. Agencies must therefore monitor and evaluate the authenticity of data used for AI training more stringently, especially when it is sourced from open repositories or from third parties with lower security standards. The same caution should be applied to pre-trained AI models, which agencies may adopt to avoid the costs of developing and training models from scratch. An advanced data poisoning technique could corrupt a variety of AI algorithms with little computational effort and without compromising their original performance; attackers could then distribute the poisoned model and undermine agencies without ever having to access any ML training pipeline. It would also be prudent to maintain human oversight of AI data analysis, or to train an additional AI model to recognize mistakes in the original analysis, as sketched below. AI policymakers should raise awareness of this emerging cyber threat and push for security measures to ensure AI remains a reliable and useful tool for government agencies.
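One simple way to operationalize that oversight is a sanitization pass that flags training points whose labels a cross-validated model confidently disagrees with, so a human analyst can review them before retraining. The sketch below is one such heuristic (function name, threshold, and pipeline are hypothetical; it is not a complete defense against optimized poisoning).

```python
# Flag training points whose labels a cross-validated model confidently
# disagrees with, then route them to a human analyst before retraining.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def flag_suspect_examples(X, y, min_confidence=0.9):
    """Return indices of examples whose given label looks implausible.

    Assumes y holds integer class labels 0..k-1.
    """
    probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")
    confidence_in_given_label = probs[np.arange(len(y)), y]
    return np.where(confidence_in_given_label < 1 - min_confidence)[0]

# Example usage: suspect = flag_suspect_examples(X_train, y_train)
# Flagged rows would be reviewed by a human before the model is retrained.
```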
Sources
- https://bdtechtalks.com/2020/10/07/machine-learning-data-poisoning/
- https://www.thinkautomation.com/eli5/machine-learning-poisoning-in-500-words-or-less/
- https://informationmatters.net/data-poisoning-ai/
- Comiter, M. (2019). Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It (Belfer Center Paper). Belfer Center for Science and International Affairs, Harvard Kennedy School. (Available at https://www.belfercenter.org/publication/AttackingAI)
- Terziyan, V., Golovianko, M., & Gryshko, S. (2018). Industry 4.0 Intelligence under Attack: From Cognitive Hack to Data Poisoning. Cyber Defence in Industry 4.0 Systems and Related Logistics and IT Infrastructures, 51, 110. (Available at https://jyx.jyu.fi/handle/123456789/60119)
- https://www.aimagazine.com/data-and-analytics/data-poisoning-new-front-ai-cyber-war
- https://ieeexplore.ieee.org/document/8685687
- https://doi.org/10.1016/j.jid.2018.06.175