Johns Hopkins APL Researchers Aim to Set Safer Paths for Swift-Flying Robots

Thu, 12/16/2021 - 10:25
Ajai Raj

Despite significant advances in drone technology and autonomous systems over the last couple of decades, a world where small, swift robots dart and weave through pedestrians, bikes and cars in the urban landscape remains firmly in the realm of science fiction. But researchers at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, have made significant progress on two key technical problems that stand in the way of fiction becoming a reality.

For robots to share roads and sidewalks with humans, they need to be able to navigate safely and efficiently at high speeds in dense, dynamic environments. Humans achieve this feat through predictive reasoning, which helps us deal with uncertainty.

Two teams in APL’s Research and Exploratory Development Department (REDD) have made strides in equipping robots with capabilities that mimic predictive reasoning in humans. The first enables robots to anticipate what lies beyond the reach of their sensors so they can navigate at high speed; the second enables them to quantify uncertainty to inform navigation decisions in noisy environments. Both teams presented their work at the 2021 IEEE International Conference on Robotics and Automation in June and published papers in the conference’s proceedings.

“One of the major challenges with enabling robots to navigate at high speeds is data collection and dataset generation,” said Adam Polevoy, a robotics engineer in REDD who co-authored the paper focused on high-speed navigation with Joseph Moore, Craig Knuth, Katie Popek and former APL robotics researcher Kapil Katyal. “The amount of data you need to collect to create a robust model is pretty large, and collecting that data using a physical system can be time-consuming and expensive.”

Polevoy’s team addressed that challenge in two key ways. First, the team set up a pipeline that lets its robot generate data automatically and without supervision, eliminating the need for a human to spend valuable time labeling and annotating the data. Second, it applied data augmentation, a data science technique that uses existing data to create new data.

“The classic example of data augmentation is in image processing,” Polevoy explained. “Say you have a picture of a dog, and you rotate it, flip it and so on — that gives you additional images that are accurate and reliable, and increases the amount of data you have at your disposal. We were able to take a similar approach using occupancy maps of the experimental environment.”
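The rotate-and-flip idea Polevoy describes carries over directly to occupancy grids, where each cell marks whether a patch of the environment is occupied or free. A minimal sketch in Python (the toy grid and function name here are illustrative, not the team's actual pipeline):

```python
import numpy as np

def augment_occupancy_map(grid: np.ndarray) -> list:
    """Generate the 8 symmetries (rotations plus mirror flips) of a
    2D occupancy grid. Each variant is a physically plausible
    environment, so one collected map yields eight training examples."""
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(grid, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # mirrored counterpart
    return variants

# A toy 3x3 occupancy grid: 1 = occupied, 0 = free
grid = np.array([[1, 0, 0],
                 [0, 0, 0],
                 [0, 0, 1]])
maps = augment_occupancy_map(grid)
print(len(maps))  # 8 augmented maps from a single original
```

Because rotations and flips preserve which cells are occupied, every augmented map remains a valid environment, just as a flipped photo of a dog is still a dog.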

By applying these techniques, the team enabled a small race car robot to successfully navigate an unknown environment about 80% of the time, compared with 20% without the map prediction technique. The result suggests the approach is promising and can be scaled to work on smaller robots in real-world environments.

The second paper dealt with the problem of uncertainty more directly, specifically with the challenge of accurately quantifying the uncertainty of estimates made by robotic systems.

“Understanding the uncertainty around your estimates is an important element of making decisions, as well as of detecting issues with your current algorithm,” said I-Jeng Wang, chief scientist in REDD’s Artificial Intelligence Group, who co-authored the paper focused on uncertainty estimation with Katyal and Gregory Hager, a professor of computer science in the Johns Hopkins Whiting School of Engineering. “Algorithms developed based on prior knowledge or existing data may not be generalizable or robust to your current environment, so we need to equip autonomous systems with the ability to account for the uncertainty inherent to new environments.”

Wang’s team combined two advanced data science techniques to improve the ability of a mobile robotic platform to estimate the state of an environment and the uncertainty of its predictions. The first, deep learning, excels at complex pattern-recognition challenges and situations with many parameters to take into account; the second, recursive filtering, lets an autonomous system feed its previous outputs back in as inputs, helping it respond more dynamically to real-time situations.
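One way to picture the combination of these two techniques (a simplified sketch, not the team's actual architecture) is a classic recursive filter whose measurement, and crucially its uncertainty, come from a learned model. Here `neural_measurement` is a hypothetical stand-in for a deep network, and the update is a textbook one-dimensional Kalman step:

```python
import numpy as np

def neural_measurement(image):
    """Hypothetical stand-in for a deep network: returns a state
    estimate and its predicted variance from a raw observation."""
    return float(image.mean()), 0.5   # (measurement, measurement variance)

def kalman_step(x, P, z, R, Q=0.1):
    """One recursive-filter update: fuse the prior estimate (x, P)
    with a new measurement z of variance R; Q is process noise."""
    P = P + Q                # predict: uncertainty grows between updates
    K = P / (P + R)          # Kalman gain weighs prior against measurement
    x = x + K * (z - x)      # correct the state estimate
    P = (1 - K) * P          # uncertainty shrinks after fusing the measurement
    return x, P

x, P = 0.0, 1.0              # initial state estimate and its variance
for frame in [np.full(4, 1.0), np.full(4, 1.2), np.full(4, 0.9)]:
    z, R = neural_measurement(frame)
    x, P = kalman_step(x, P, z, R)

print(round(x, 3), round(P, 3))
```

The key point of the design is that the filter's output is not just an estimate `x` but a running variance `P`, which is exactly the kind of self-assessed uncertainty Wang describes as essential for decision-making.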

The uncertainty estimation team trained its machine-learning algorithm on images of urban environments with pedestrians, adding distortion to the images to mimic the informational noise introduced by the complexity of real-world environments. The resulting algorithm was far more computationally efficient than existing methods, significantly reducing the time needed to estimate the state of a physical system and quantify the level of uncertainty.
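Distorting training images to mimic real-world noise can be as simple as perturbing pixel intensities. A minimal sketch, assuming normalized images in the range [0, 1] (the noise model and parameter values here are illustrative, not the team's actual setup):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def distort(image: np.ndarray, noise_std: float = 0.1) -> np.ndarray:
    """Mimic sensor and environmental noise by adding Gaussian pixel
    noise, then clipping back to the valid intensity range [0, 1]."""
    noisy = image + rng.normal(0.0, noise_std, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = rng.random((64, 64))            # stand-in for an urban scene image
noisy = distort(clean, noise_std=0.2)
print(noisy.shape)
```

Training on such distorted inputs pushes the model to produce estimates, and uncertainty values, that hold up when the real world is messier than the training data.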

Taken together, these studies represent significant progress toward the deployment of useful autonomous systems in urban environments, said Bob Bamberger, supervisor of REDD’s Robotics Group.

“Both of these teams take novel approaches to address key limitations of the robustness and versatility of robotic navigation systems,” Bamberger said. “It will be exciting to see how this work progresses, particularly in the context of the Lab’s substantial portfolio of work in this domain.”