Robust and Resilient AI

Accelerate robust and secure deployment to real-world applications characterized by significant uncertainty, stochastic perturbations, and adversarial vulnerabilities.

Vision

Realize the true potential of AI for national challenges through research, development and application of techniques robust to real-world uncertainty and resilient to out-of-distribution or adversarial settings.

Research

Uncertainty-Aware Risk-Sensitive AI

To overcome the impact of distributional shift, we are developing uncertainty-aware algorithms that adapt their policies in response to stochastic changes in operating conditions [1] and out-of-distribution settings [2], as well as risk-sensitive deep reinforcement learning techniques that allow direct optimization of a broad class of distributional objectives [3].
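As a concrete illustration of such distributional objectives, the sketch below scores an empirical return distribution with a CPT-style value in the spirit of [3]: rare outcomes are overweighted and losses are amplified relative to gains. All function names and parameter values here are illustrative assumptions, not the paper's implementation.

# Minimal sketch of a CPT-style distributional objective over sampled
# episode returns. Parameter values are common illustrative choices.
import numpy as np

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights rare outcomes."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_value(returns, ref=0.0, alpha=0.88, lam=2.25):
    """CPT score of an empirical return distribution.

    Gains above `ref` and losses below it are utility-transformed
    (losses scaled by `lam` for loss aversion) and averaged under
    distorted, rather than true, tail probabilities.
    """
    x = np.sort(np.asarray(returns) - ref)
    n = len(x)
    total = 0.0
    for i, xi in enumerate(x):
        if xi >= 0:   # gains: distort the upper-tail probability P(X >= x_i)
            dw = weight((n - i) / n) - weight((n - i - 1) / n)
            total += dw * xi**alpha
        else:         # losses: distort the lower-tail probability P(X <= x_i)
            dw = weight((i + 1) / n) - weight(i / n)
            total -= dw * lam * (-xi)**alpha
    return total

# Risk-neutral mean vs. CPT score for a policy with a heavy loss tail:
rng = np.random.default_rng(0)
returns = rng.normal(1.0, 0.2, 500)
returns[rng.random(500) < 0.05] -= 10.0   # rare catastrophic outcomes
print(np.mean(returns), cpt_value(returns))

A risk-neutral agent maximizing the mean tolerates the rare catastrophic returns in this example; the CPT score makes them costly, which is the kind of human-like risk sensitivity the objectives in [3] are designed to capture.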

Uncertainty-aware adaptive crowd navigation (from [1]).

[1] K.D. Katyal, G.D. Hager, and C.-M. Huang, “Intent-aware pedestrian prediction for adaptive crowd navigation,” 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020. [Google Scholar]

[2] K.D. Katyal, I-J. Wang, and G.D. Hager, “Out-of-Distribution Robustness with Deep Recursive Filters,” 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021. [Google Scholar]

[3] J. Markowitz, M. Chau, and I-J. Wang, “Deep CPT-RL: Imparting Human-Like Risk Sensitivity to Artificial Agents,” Workshop on Artificial Intelligence Safety 2021 (SafeAI 2021), co-located with AAAI 2021, 2021. [Google Scholar]

Adversarial Vulnerability and Defenses 

The Intelligent Systems Center (ISC) approach to this fast-evolving space is driven by three guiding research principles: (1) analyze vulnerabilities and evaluate defenses in real-world settings relative to system-level performance; (2) develop defense mechanisms effective under realistic operational constraints; and (3) address challenges across the entire ML lifecycle and AI supply chain. Recent ISC work addresses physical constraints in real-world settings [1, 2], attack-agnostic detection of adversarial examples [3], methods to develop and test training-time attacks such as backdoors or Trojans at scale [4], and methods to "sanitize" deep networks to reduce or remove the impact of potential backdoors [5].
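To make the training-time threat model concrete, the sketch below embeds a backdoor by data poisoning: a small trigger patch is stamped onto a fraction of training images whose labels are flipped to an attacker-chosen class, so a model trained on the data learns both the clean task and the trigger shortcut. Shapes, trigger placement, and the poison rate are assumptions for illustration; this is not the API of the TrojAI framework [4].

# Illustrative sketch of training-time data poisoning with a patch trigger.
import numpy as np

def stamp_trigger(image, size=4, value=1.0):
    """Place a small solid patch in the bottom-right corner (the trigger)."""
    out = image.copy()
    out[-size:, -size:, :] = value
    return out

def poison(images, labels, target_class, rate=0.1, seed=0):
    """Stamp the trigger on a `rate` fraction of images and flip their
    labels to `target_class`. A model trained on this data learns the
    clean task *and* the trigger -> target_class shortcut."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_class
    return images, labels

# Example: poison 10% of a toy 32x32 RGB dataset toward class 7.
X = np.random.rand(1000, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
Xp, yp = poison(X, y, target_class=7)

Because the trigger appears in only a small fraction of training data, the poisoned model's accuracy on clean inputs is largely unaffected, which is what makes backdoors difficult to detect and motivates the detection and sanitization work in [3-5].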

TrojAI researcher Neil Fendley demonstrates a backdoor he embedded in the deep network weights of a common network used for object detection and classification. The network classifies dozens of objects correctly, but when a person puts the embedded trigger — in this case the black and white target sticker — on their clothes, the system immediately misidentifies them as a teddy bear. The backdoor is very specific: when placed on other objects — like the chair — the trigger has no impact, and the network makes correct classifications.

[1] N. Fendley, M. Lennon, I-J. Wang, P. Burlina, and N. Drenkow, “Jacks of All Trades, Masters of None: Addressing Distributional Shift and Obtrusiveness via Transparent Patch Attacks,” European Conference on Computer Vision, 105-119, 2020. [Google Scholar]

[2] M. Lennon, N. Drenkow, and P. Burlina, “Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose?” IEEE/CVF International Conference on Computer Vision, 112-121, 2021. [Google Scholar]

[3] N. Drenkow, N. Fendley, and P. Burlina, “Attack Agnostic Detection of Adversarial Examples via Random Subspace Analysis,” IEEE/CVF Winter Conference on Applications of Computer Vision, 472-482, 2022. [Google Scholar]

[4] K. Karra, C. Ashcraft, and N. Fendley, “The TrojAI Software Framework: An Open-Source Tool for Embedding Trojans into Deep Learning Models,” arXiv preprint arXiv:2003.07233, 2020. [Google Scholar]

[5] K. Karra and C. Ashcraft, “SanitAIs: Unsupervised Data Augmentation to Sanitize Trojaned Neural Networks,” arXiv preprint arXiv:2109.04566, 2021. [Google Scholar]

Test and Evaluation of Intelligent Systems

A core mission of the ISC is rigorous Test and Evaluation (T&E) of fundamentally new AI and autonomy for critical national challenges, combining our role as trusted technical advisor with a leading interdisciplinary research program in AI, Robotics, and Autonomy. We regularly release novel datasets, benchmarks, metrics, and evaluation frameworks and tools.
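As an illustration of adaptive scenario generation in the spirit of Mullins et al. (cited below), the sketch concentrates new simulation samples near the worst-performing scenarios found so far, rather than sampling the scenario space uniformly. The run_simulation stub and all parameters here are stand-ins for a real closed-loop autonomy simulation, not the paper's method.

# Highly simplified sketch of adaptive scenario search for T&E.
import numpy as np

def run_simulation(params):
    """Stand-in for a closed-loop autonomy simulation; returns a
    performance score (higher is better). Here: a synthetic surface
    with a narrow failure valley the search should discover."""
    obstacle_density, sensor_noise = params
    return 1.0 - np.exp(-((obstacle_density - 0.7)**2 +
                          (sensor_noise - 0.4)**2) / 0.01)

def adaptive_search(n_rounds=20, pop=30, elite=5, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.uniform(0, 1, size=(pop, 2))           # initial coverage
    for _ in range(n_rounds):
        scores = np.array([run_simulation(p) for p in samples])
        worst = samples[np.argsort(scores)[:elite]]      # most challenging so far
        # Resample: perturb the hardest scenarios to refine the failure region.
        children = worst[rng.integers(elite, size=pop - elite)]
        children = np.clip(children + rng.normal(0, sigma, children.shape), 0, 1)
        samples = np.vstack([worst, children])
    scores = np.array([run_simulation(p) for p in samples])
    return samples[np.argmin(scores)]                    # hardest scenario found

print(adaptive_search())  # should approach [0.7, 0.4], the stub's failure region

The payoff over uniform sampling is sample efficiency: simulation budget is spent mapping out the failure boundary instead of re-confirming nominal behavior.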

G.E. Mullins, P.G. Stankiewicz, R.C. Hawthorne, and S.K. Gupta, “Adaptive Generation of Challenging Scenarios for Testing and Evaluation of Autonomous Vehicles,” Journal of Systems and Software, 137, 197-215, 2018.

S. Hagstrom, H.W. Pak, S. Ku, S. Wang, G. Hager, and M. Brown, “Cumulative Assessment for Urban 3D Modeling,” 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 3261-3264, 2021.

G. Christie, N. Fendley, J. Wilson, and R. Mukherjee, “Functional Map of the World,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 6172-6180, 2018.

AI Fairness and Privacy

Ensuring that intelligent systems are unbiased and preserve data privacy is another critical requirement for realizing the potential of AI for national challenges.
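As one concrete example of what "unbiased" can mean operationally, the sketch below computes two common group-fairness gaps, demographic parity and equal opportunity, for a binary classifier. These metric choices are standard illustrations, not the specific criteria used in the papers below.

# Minimal sketch of two standard group-fairness checks.
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Return (demographic parity gap, equal opportunity gap) between
    two groups encoded as 0/1 in `group`. Smaller gaps are fairer."""
    g0, g1 = (group == 0), (group == 1)

    # Demographic parity: difference in positive prediction rates.
    dp_gap = abs(y_pred[g0].mean() - y_pred[g1].mean())

    # Equal opportunity: difference in true positive rates.
    def tpr(g):
        return y_pred[g & (y_true == 1)].mean()
    return dp_gap, abs(tpr(g0) - tpr(g1))

# Toy example: 8 predictions split across two groups.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_gaps(y_true, y_pred, group))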

W. Paul, A. Hadzic, N. Joshi, F. Alajaji, and P. Burlina, “TARA: Training and Representation Alteration for AI Fairness and Domain Generalization,” Neural Computation, 1-38, 2022.

W. Paul, Y. Cao, M. Zhang, and P. Burlina, “Defending Medical Image Diagnostics Against Privacy Attacks Using Generative Methods: Application to Retinal Diagnostics,” Clinical Image-Based Procedures, Distributed and Collaborative Learning, Artificial Intelligence for Combating COVID-19 and Secure and Privacy-Preserving Machine Learning, 174-187, Springer, Cham, 2021.

For more information or to join our team, please contact us at ISC@jhuapl.edu