Press Release
Johns Hopkins APL Develops Performance Analysis Tool for Uncrewed Surface Vehicles
As uncrewed surface vehicles (USVs) become more integrated into the nation’s defense, the need for standardized navigation and performance evaluation methods has become vital.
To answer that call, researchers at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, are developing the Performance Analysis Toolset (PAT), a first-of-its-kind metrics-based approach for assessing USV performance in COLREGS encounters. COLREGS, the International Regulations for Preventing Collisions at Sea, prescribe a set of maneuvering protocols that all vessels must follow to reduce confusion in a potential collision encounter.
“We’re tackling the challenge of quantifying a vague set of rules that is open to interpretation, and we’re also looking to define a threshold of what would be acceptable in the field,” said Kathryn Lahman, program manager for Advanced Autonomy Test and Evaluation at APL.
The PAT expands on previous APL research that aims to answer a seemingly simple question: How do you define good seamanship? The answer is murky, because it requires performing objective evaluations against subjective COLREGS protocols, which are intentionally vague so that human intuition and common sense can drive decision-making.
It’s difficult to determine whether USVs can comply with COLREGS as proficiently as human ship operators do, both because USVs lack human intuition and because current state-of-the-art test and evaluation (T&E) methods are subjective. This is the problem the PAT aims to solve.
“To develop the tool, we sat down with subject-matter experts, asked them what qualities they look for in navigation tests, studied the COLREGS and turned that information into quantifiable metrics,” explained Mike Heistand, a systems engineer and senior analyst in APL’s Force Projection Sector. “From there, we can feed USV navigation data into our scoring algorithm and share results with the sponsor that are consistent and objective.”
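The pipeline Heistand describes, turning expert criteria into quantifiable metrics and then aggregating them into a consistent score, can be sketched in miniature. This is purely illustrative: the metric names, thresholds and weights below are invented for this example and are not drawn from APL’s actual scoring algorithm.

```python
from dataclasses import dataclass

# Hypothetical per-encounter measurements extracted from USV navigation
# data. These names are assumptions made for this sketch, not PAT metrics.
@dataclass
class EncounterMetrics:
    closest_point_of_approach_m: float  # minimum separation during the encounter
    maneuver_onset_s: float             # how early the vessel began maneuvering
    course_change_deg: float            # magnitude of the avoidance turn

def score_metric(value: float, acceptable: float, ideal: float) -> float:
    """Map a raw metric onto [0, 1]: `acceptable` scores 0.0, `ideal` scores 1.0,
    values in between scale linearly and values outside are clamped."""
    if ideal == acceptable:
        return 1.0 if value >= ideal else 0.0
    frac = (value - acceptable) / (ideal - acceptable)
    return max(0.0, min(1.0, frac))

def score_encounter(m: EncounterMetrics) -> float:
    """Weighted aggregate of per-metric scores; weights are illustrative."""
    parts = [
        (0.5, score_metric(m.closest_point_of_approach_m, 200.0, 1000.0)),
        (0.3, score_metric(m.maneuver_onset_s, 30.0, 120.0)),
        (0.2, score_metric(m.course_change_deg, 15.0, 30.0)),
    ]
    return sum(weight * score for weight, score in parts)

# One simulated encounter: generous separation, early maneuver, 30-degree turn.
print(round(score_encounter(EncounterMetrics(800.0, 90.0, 30.0)), 3))
```

The key property such a scheme provides is the one the article emphasizes: the same navigation data always produces the same score, regardless of which evaluator runs it.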
Quantifying those qualitative variables took years of research, guidance from dozens of experts and countless rounds of revision. The result is a tool that has been introduced to study autonomous navigation in several Navy programs, including the Sea Hunter and Sea Hawk USVs.
Before the development of the PAT, experts would stand on the bridge of USVs during maneuvering tests and manually grade the vehicle’s navigation, its operation in response to moving objects and its ability to adjust based on varying requirements.
“One of the most difficult challenges is trying to correlate and standardize the grading variables,” Heistand said. “Every captain is human, and every captain has slightly different opinions or standards. Some would like to see a maneuver be made faster, others slower; some wanted the rule of thumb to be a 25-degree turn, others a 30-degree turn.”
To test the PAT’s validity, APL is completing a Performance Analysis Toolset – Human Operator Comparison (PAT-HOC) under the sponsorship of the Naval Sea Systems Command Unmanned Maritime Systems Program Office (PMS 406), with the support of the Surface Warfare Schools Command (SWSC) in Newport, Rhode Island.
“PAT-HOC is a first-of-its-kind assessment,” said Lahman. “We hope that by testing the PAT’s validity, we’ll open up the aperture of what is possible in regard to T&E of autonomous systems. We want to shed light on how assessments of both autonomous systems and humans may evolve to a new reality where both interact in the real world.”
The study will compare the PAT results with performance data from the Officer of the Deck Phase II (OOD-II) training course, which is intended for surface warfare officers in between their first and second division officer tours and includes a milestone assessment that determines whether a sailor will continue within the surface warfare community.
“If we can show that USVs are performing as well as humans are, then that’s a good indicator that they’re operating as they should,” said Heistand.
To do that, the team has developed a three-part approach: first, aligning the PAT scoring methodology with that of expert ship-handling instructors; next, quantitatively comparing USV performance to a distribution of qualified ship handlers; and finally, replaying USV trajectories for SWSC instructors.
“We’ll ask USVs to execute the same scenario as mariners in OOD-II training and then translate those results into the native assessment simulation at the SWSC so instructors can judge them blindly,” Heistand said. “It’s an apples-to-apples comparison. Once we have both sets of scores, one from the PAT and one from the instructors, we can see if and where we need to make adjustments in the PAT grading system.”
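The second step of that approach, placing a USV’s score within a distribution of qualified human ship handlers, can be illustrated with a toy comparison. The scores below are made-up numbers, not PAT-HOC data; the function names are assumptions for this sketch.

```python
import statistics

def percentile_of(usv_score: float, human_scores: list[float]) -> float:
    """Fraction of human scores at or below the USV's score on the same scenario."""
    at_or_below = sum(1 for s in human_scores if s <= usv_score)
    return at_or_below / len(human_scores)

# Hypothetical scores for eight OOD-II-qualified ship handlers on one scenario.
human_scores = [0.62, 0.70, 0.74, 0.78, 0.81, 0.85, 0.88, 0.91]
usv_score = 0.80

pct = percentile_of(usv_score, human_scores)
mean = statistics.mean(human_scores)
stdev = statistics.stdev(human_scores)
z = (usv_score - mean) / stdev
print(f"USV scores at or above {pct:.0%} of the human cohort (z = {z:+.2f})")
```

A comparison of this shape is what would let evaluators say, in Heistand’s terms, whether a USV is “performing as well as humans are” on the same scenario, rather than judging it against a fixed pass/fail line.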
The study has developed into a two-way relationship between APL researchers and ship-handling instructors, all hoping to achieve consistent scoring methodologies, not just for autonomous uncrewed vehicles but for crewed systems as well.
“Instructors at naval schools all over the world are still doing these qualitative assessments, and that inevitably presents variability,” Heistand noted. “Our hope is that with continued testing, feedback and alignment, the algorithm we’re developing will be able to be used for both USVs and humans in the Navy, Coast Guard, Merchant Marine and other services.”
Early results from the PAT-HOC study show some correlation between the PAT scores and the SWSC instructor scores, though it is too early to draw conclusions. The team has already identified areas of misalignment, however, and is refactoring parts of the algorithm to address them.
Thousands of uncrewed aerial vehicles (UAVs) are currently in use by the Department of Defense, and the desire to increase USV presence is clear. The Navy’s Chief of Naval Operations Navigation Plan 2022 forecasts a fleet of approximately 150 uncrewed surface and subsurface platforms by 2045.