Geometry-Independent Hypersonic Boundary-Layer Transition Prediction
One of the fundamental challenges of fielding and maneuvering a hypersonic vehicle is predicting the large changes in heat transfer and aerodynamic performance associated with the transition of the surface boundary-layer flow from laminar to turbulent during flight. Legacy methods for analyzing boundary-layer transition are overly simplistic and do not account for the intricate flow patterns of modern vehicles with complex three-dimensional shapes. This article introduces recent work applying a novel methodology, known as input/output (I/O) analysis, to hypersonic flows. This methodology is completely free of geometric constraints and has significant potential to answer many of the open questions in transition analysis. The article presents examples of I/O analysis applied to hypersonic flow over a 7° half-angle sharp cone and to the Boundary Layer Transition (BOLT) flight experiment. The analysis uses computational tools that were built in collaboration with the University of Minnesota and VirtusAero as part of a Johns Hopkins University Applied Physics Laboratory (APL) independent research and development project.
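The abstract does not reproduce the mathematical formulation, but in the stability literature I/O analysis is typically posed as a resolvent problem on the Navier-Stokes equations linearized about a laminar base flow. The sketch below is a generic statement of that formulation under that assumption; the specific operators, base flows, and norms used in the tools described in the article are not specified here.

```latex
% Generic input/output (resolvent) formulation; notation is illustrative
% and not taken from the article.
\begin{align}
  \frac{\partial q'}{\partial t} &= \mathcal{A}\,q' + \mathcal{B}\,f', \qquad y' = \mathcal{C}\,q',\\
  \hat{y}(\omega) &= \underbrace{\mathcal{C}\left(\mathrm{i}\omega I - \mathcal{A}\right)^{-1}\mathcal{B}}_{\mathcal{H}(\omega)}\,\hat{f}(\omega),
  \qquad G(\omega) = \max_{\hat{f}\neq 0}\frac{\|\hat{y}\|}{\|\hat{f}\|} = \sigma_{1}\big(\mathcal{H}(\omega)\big).
\end{align}
```

Here q' is the perturbation state about a laminar base flow, A is the linearized Navier-Stokes operator discretized on the vehicle geometry, B and C are input and output maps, and the leading singular value and singular vectors of H(ω) identify the most amplified forcing and response at each frequency. Nothing in this formulation assumes a particular shape, which is the sense in which the approach is geometry independent.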
Hybrid Rocket Motor Ground Testing Results to Enable the Vision of Rapid Flight Testing for System Development
The Johns Hopkins University Applied Physics Laboratory (APL) explored a reusable hybrid rocket design to enable low-cost, rapid flight testing. Rocket motor reusability requires addressing the unique thermal challenges of the combustion chamber. Specifically, APL focused on addressing an unexpected thermal load on the forward bulkhead that resulted in melted aluminum near the injector. Thermal management design concepts included adding insulation to the forward bulkhead, lengthening the precombustion chamber, and adjusting the spray angle of the injector. The design study showed that both lengthening the precombustion chamber and using an axial injector with contoured ports resulted in adequate thermal management, confirming that aluminum is suitable for the hybrid rocket combustion chamber forward bulkhead in APL’s design.
Behavior Anomaly Detection
Modern warfare demands situational awareness of entities in the environment. To enhance the warfighter’s situational awareness, we developed an algorithm that detects anomalous behavior in the warfare environment. Changes in entities’ behavior can be an indicator that existing prediction models or assumptions must be updated to remain useful for decision-making. Specifically, we introduce a new classification method—sequential sample consensus (SeqSAC)—that identifies anomalous behavior based on a series of observations of an entity. SeqSAC can support a wide variety of models from simple to complex. We first demonstrate the utility of SeqSAC with a simple limited-degree-of-freedom kinematic model of a moving body, and then we demonstrate the ability to incorporate more complex models using the finite-state machine in Advanced Framework for Simulation, Integration and Modeling (AFSIM). Finally, we discuss the ability to extend SeqSAC to identify anomalies in coordinated entity behaviors.
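The abstract names SeqSAC but does not spell out its mechanics. By analogy with random sample consensus (RANSAC), a consensus-based anomaly check against a simple kinematic model might look like the sketch below; the function names, thresholds, and constant-velocity model are illustrative assumptions, not APL's implementation.

```python
# Illustrative RANSAC-style consensus check against a constant-velocity
# kinematic model; the real SeqSAC algorithm is not specified in the abstract.
import numpy as np

def fit_constant_velocity(t, xy):
    """Least-squares fit of position = p0 + v*t (per axis)."""
    A = np.column_stack([np.ones_like(t), t])
    coeffs, *_ = np.linalg.lstsq(A, xy, rcond=None)  # shape (2, 2): [p0; v]
    return coeffs

def residuals(t, xy, coeffs):
    pred = np.column_stack([np.ones_like(t), t]) @ coeffs
    return np.linalg.norm(xy - pred, axis=1)

def consensus_anomalies(t, xy, tol=5.0, n_trials=200, sample_size=4, rng=None):
    """Flag observations inconsistent with the best consensus kinematic model."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(t), dtype=bool)
    for _ in range(n_trials):
        idx = rng.choice(len(t), size=sample_size, replace=False)
        coeffs = fit_constant_velocity(t[idx], xy[idx])
        inliers = residuals(t, xy, coeffs) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return ~best_inliers  # observations the consensus model cannot explain

# Example: a mostly straight-line track with a late, abrupt deviation.
t = np.linspace(0.0, 60.0, 61)
xy = np.column_stack([3.0 * t, 1.5 * t])
xy[50:] += np.array([40.0, -25.0])          # behavior change at index 50
print(np.flatnonzero(consensus_anomalies(t, xy)))
```

Observations that the best consensus model cannot explain (here, the deviation beginning at index 50) are the candidates for behavior-change flags; a richer model, such as the AFSIM finite-state machine mentioned above, would simply replace the fit and residual functions.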
The Method and Application of Aircrew Proficiency to High-Fidelity Mission Models in Support of Air Warfare Analysis
The holistic assessment of any combat system is incomplete without evaluation of the human component. The human operator is a key, perhaps the key, component of successful combat operations in complex environments. The Naval Air Systems Command (NAVAIR) recognized the need to consider aircrew proficiency in the achievement of warfighting objectives. In response, the Johns Hopkins University Applied Physics Laboratory (APL) developed the Proficiency-Enabled Mission Model (PEMM) to characterize the impact of operator training and readiness on mission effectiveness in the context of strike-fighter aircraft in air combat. APL’s development of PEMM has advanced the state of the art for air combat modeling and simulation by introducing aircrew proficiency while executing current tactics, techniques, and procedures in the Brawler combat simulation environment. The F/A-18E/F Super Hornet defensive counter-air mission served as the initial case for proof of concept. The resulting capability informed investment decisions and training enhancements for that community. This article facilitates extension of this methodology by summarizing the process for producing a data-driven proficiency-enabled mission model with specific attention to tactics encoding, data collection, and simulation environment prerequisites.
AlphaDogfight Trials: Bringing Autonomy to Air Combat
The Defense Advanced Research Projects Agency (DARPA) Air Combat Evolution (ACE) program “seeks to increase trust in combat autonomy by using human–machine collaborative dogfighting as its challenge problem. This also serves as an entry point into complex human–machine collaboration” (https://www.darpa.mil/program/air-combat-evolution). To set the stage for ACE, the AlphaDogfight Trials program was created to explore whether artificial intelligence (AI) agents could effectively learn basic fighter maneuvers. DARPA contracted the Johns Hopkins University Applied Physics Laboratory (APL) to create an arena to host simulated dogfights—close-range aerial battles between fighter aircraft—where autonomous agents could be trained to defeat adversary aircraft. During the dogfight trials, AI agents competed against each other, and the winner competed against a human pilot. By the end of the trials, the program demonstrated that AI agents could surpass the performance of human experts. APL was critical to the success of this program: the Lab created the simulation infrastructure, developed the adversary AI agents, and evaluated the competitors’ AI solutions. This article details APL’s role in advancing combat autonomy through this program.
Resource Management Architecture for Electronic Warfare Networks
Distributed electronic attack and electronic support systems interact to complete a set of tasks and are of interest to the electronic warfare (EW) community. With the expanding operational threat space, the increasing complexity of emerging targets, and the increasing density of the electromagnetic environment, individual EW systems do not have sufficient resources to meet mission requirements. Moreover, current approaches to improve EW system interoperability and ensure Blue force communications constrain EW technique design and do not scale against emerging and future threats. Distributed and collaborative EW concepts offer potential relief to EW resource constraints by distributing sensing, communication, and engagement task management across multiple EW systems. While this vision offers many opportunities, its realization is currently limited by science and technology (S&T) gaps and incomplete functional requirements that prevent the precise definition of a distributed EW resource manager. In this article, we describe distributed EW use cases and associated functional requirements to motivate the need for a distributed resource manager architecture, and we identify the distributed resources to be managed. For future work, we suggest key focus areas and enabling technologies that can bridge the S&T gaps for the design of EW resource management.
Applications of Machine Learning for Electronic Warfare Emitter Identification and Resource Management
Electronic warfare (EW) operators face a multitude of challenges when performing single- and distributed-platform sensing and jamming tasks in increasingly dense and agile threat environments. During an engagement timeline, actions often must be taken quickly and on the basis of only partial information. Recently, the world has observed a boom in artificial intelligence, a suite of data-driven technologies that has already disrupted multiple fields where autonomy and big data are key elements. Although it is not the solution to all EW tasks, artificial intelligence shows promise for improving EW efficiency and effectiveness through informed decision-making beyond the capability of a human operator. The Johns Hopkins University Applied Physics Laboratory (APL) Precision Strike Mission Area has invested in research and development in the specific EW tasks of emitter identification and autonomous resource allocation. This article presents promising results from these projects and describes recommended future work in these areas, as well as additional EW applications that may benefit from research in artificial intelligence.
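The abstract does not state which learning methods APL applied. As a point of reference only, a minimal supervised emitter-identification example on synthetic pulse descriptor word (PDW) features might look like the following; the feature set, emitter classes, and choice of a random forest are assumptions for illustration.

```python
# Notional emitter-identification example: classifying synthetic pulse
# descriptor word (PDW) features with an off-the-shelf classifier.
# Feature choices and classes are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthesize_pdws(n, rf_ghz, pri_us, pw_us):
    """Draw noisy (RF, PRI, pulse width) samples around an emitter's nominal values."""
    return np.column_stack([
        rng.normal(rf_ghz, 0.02, n),   # carrier frequency, GHz
        rng.normal(pri_us, 5.0, n),    # pulse repetition interval, µs
        rng.normal(pw_us, 0.1, n),     # pulse width, µs
    ])

# Three notional emitter classes with distinct nominal parameters.
X = np.vstack([
    synthesize_pdws(500, 3.1, 1000.0, 1.0),
    synthesize_pdws(500, 3.3,  250.0, 0.5),
    synthesize_pdws(500, 9.4,  100.0, 2.0),
])
y = np.repeat([0, 1, 2], 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```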
Control Red Perception: Vision and Enabling Technologies
Today’s electronic warfare (EW) missions face increasingly agile, multimodal, highly integrated, and long-range threats. To help its sponsors accomplish their missions in the face of these threats, the Johns Hopkins University Applied Physics Laboratory (APL) Precision Strike Mission Area developed a vision for achieving information dominance and delivering overwhelming effects against our adversaries. This vision relies on using our EW systems in concert with other operational platforms and capabilities to control adversary, or Red, perception. Implementing this strategy requires revolutionary advancements in EW systems so that they operate in an intelligent, distributed, and collaborative manner. Investment in foundational technologies that enable these capabilities is a prerequisite to accomplishing the strategy and staying ahead of pacing threats. This article describes the technology gaps that must be filled to realize the vision of controlling Red perception and details recent APL independent research and development projects that are positioned to provide game-changing thought leadership and capability innovations to satisfy those gaps.
A Transferable Belief Model Approach to Combat Identification
Combat identification (CID) is the process of accurately characterizing battlespace entities to enable high-confidence, real-time application of tactical options, such as engagement. Evidence to support CID estimates is often sparse, latent in the battlefield, or both, raising the risk of association ambiguity and potential loss of CID custody. Therefore, an automated CID estimation methodology must properly account for and convey its results’ uncertainty, ambiguity, and ignorance to the warfighter to support timely, well-informed decision-making. The automated CID estimation process presented in this article is a computationally scalable approach to achieve robust CID custody in over-the-horizon targeting applications. Novel aspects of this approach include (1) a compact representation of track histories as tracking segments (vice measurements); (2) a temporal history of kinematic ambiguities between tracks; and (3) a transferable belief model for open-world evidential reasoning under uncertainty, ambiguity, and conflict. The result is an actionable, informative CID estimation process that accounts for real-world challenges and constraints.
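The article's CID-specific machinery (tracking segments, kinematic-ambiguity histories) is not reproduced in the abstract, but the core of any transferable belief model is the unnormalized conjunctive combination of basic belief assignments, which, unlike Dempster's rule, lets conflicting evidence accumulate as mass on the empty set under the open-world assumption. The sketch below shows only that step, with an invented frame and invented masses.

```python
# Minimal transferable-belief-model sketch: unnormalized (conjunctive)
# combination of basic belief assignments over a small CID frame.
# The frame, mass values, and evidence sources are illustrative only.
from itertools import product

FRAME = frozenset({"friend", "hostile", "neutral"})

def conjunctive_combine(m1, m2):
    """TBM conjunctive rule: mass may accumulate on the empty set (conflict)."""
    combined = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        key = a & b  # set intersection; may be empty
        combined[key] = combined.get(key, 0.0) + wa * wb
    return combined

# Evidence source 1 (e.g., kinematics): mostly uncommitted.
m_kinematics = {
    frozenset({"hostile", "neutral"}): 0.6,
    FRAME: 0.4,
}
# Evidence source 2 (e.g., an attribute report): leans friend.
m_attribute = {
    frozenset({"friend"}): 0.7,
    FRAME: 0.3,
}

m = conjunctive_combine(m_kinematics, m_attribute)
for focal, mass in sorted(m.items(), key=lambda kv: -kv[1]):
    label = ", ".join(sorted(focal)) if focal else "∅ (conflict)"
    print(f"{label}: {mass:.2f}")
```

In this toy example, the 0.42 mass assigned to the empty set makes the conflict between the kinematic and attribute evidence explicit rather than hiding it through renormalization, which is exactly the kind of ambiguity the abstract argues must be conveyed to the warfighter.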
Neptune: An Automated System for Dark Ship Detection, Targeting, and Prioritization
The ability to detect dark ships at open-ocean scale requires enhanced space-based intelligence, surveillance, and reconnaissance capabilities. With the boom in commercial space-based sensing, the nation needs an automated process to meet the growing volume and velocity of data. Multimodal data from the variety of existing and proposed space-based sensor networks can be aggregated and fused to produce target-quality tracks on ships. These sensor modalities include synthetic aperture radar (SAR), electro-optical/infrared (EO/IR), and Automatic Identification System (AIS). In this article, we demonstrate the work of a Johns Hopkins University Applied Physics Laboratory (APL) team to automate recognition of target surface vessels from these modalities on a next-generation spaceflight processor to simulate on-orbit detection. These detections can be fused to form quality tracks that can then be used to detect dark ship anomalies via pattern-of-life analysis. Tracks formed over a continental or global scale motivate the need for further automated analysis, since analyzing thousands or tens of thousands of tracks in detail and in real time would require a prohibitive amount of human effort. To address this challenge, the APL team developed a suite of pattern-of-life tools that extract features from tracks and flag tracks that deviate significantly from a learned model of normal behavior.
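The abstract does not describe the specific features or detectors in APL's tool suite. A minimal pattern-of-life sketch, assuming simple per-track kinematic features and an off-the-shelf isolation forest as the learned model of normality, might look like the following; every choice here is an illustrative assumption.

```python
# Illustrative pattern-of-life anomaly flagging: summarize each track with a
# few features and flag statistical outliers. The feature choices and the use
# of an isolation forest are assumptions, not the APL tool suite.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

def track_features(positions, times):
    """Per-track features: mean speed, speed variability, net-to-total distance ratio."""
    steps = np.diff(positions, axis=0)
    dists = np.linalg.norm(steps, axis=1)
    speeds = dists / np.diff(times)
    straightness = np.linalg.norm(positions[-1] - positions[0]) / max(dists.sum(), 1e-9)
    return [speeds.mean(), speeds.std(), straightness]

def synth_track(loitering=False, n=50):
    t = np.arange(n, dtype=float)
    if loitering:  # slow, meandering track (the outlier)
        pos = np.cumsum(rng.normal(0.0, 0.2, (n, 2)), axis=0)
    else:          # steady transit leg with measurement noise
        pos = np.column_stack([10.0 * t, 2.0 * t]) + rng.normal(0.0, 0.5, (n, 2))
    return pos, t

tracks = [synth_track() for _ in range(40)] + [synth_track(loitering=True)]
X = np.array([track_features(p, t) for p, t in tracks])
flags = IsolationForest(contamination=0.05, random_state=0).fit_predict(X)
print("flagged track indices:", np.flatnonzero(flags == -1))
```

The single loitering track (index 40) should stand apart from the forty steady transits in this feature space and be flagged; an operational tool would likely draw on richer features, AIS reporting gaps being one obvious candidate given the dark-ship focus.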