Today’s robotic systems fall short of the agility and robustness required to operate in complex environments at operationally relevant speeds. We believe this is not primarily due to their physical limitations, but due to their inability to reason intelligently about their own dynamics, their sensor limitations, and the world. Our objective is to tightly couple machine learning and computational control to reason effectively about complex physical phenomena, learned models, real-world uncertainty, and challenging operating domains.
Agile Fixed-Wing Flight
Fixed-wing unmanned aerial vehicles (UAVs) offer unique advantages over their rotary-wing counterparts in terms of speed, endurance, and range. However, fixed-wing UAVs are severely limited in agility. We believe this limitation stems not strictly from physical characteristics, but from the inability of the vehicle’s control system to reason effectively about complex nonlinear dynamics and flow regimes.
To improve the agility of fixed-wing UAVs, we developed a nonlinear model predictive control (NMPC) algorithm capable of running in real-time and exploiting the full nonlinear post-stall regime of the aircraft. Post-stall flight affords unique opportunities for increasing vehicle agility by minimizing the required deceleration distance for the aircraft and dramatically reducing the minimum turning radius.
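The receding-horizon structure of direct NMPC can be sketched on a toy system. The snippet below substitutes a 1-D double integrator for the aircraft's nonlinear post-stall aerodynamic model (a deliberate simplification; the dynamics, horizon length, cost weights, and bounds are all illustrative): at each time step the controller re-solves a finite-horizon trajectory optimization and applies only the first control.

```python
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 10  # step size [s] and horizon length (illustrative)

def rollout(x0, u_seq):
    """Integrate the toy dynamics (1-D double integrator) over the horizon."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for u in u_seq:
        x = x + DT * np.array([x[1], u])  # state = [position, velocity]
        traj.append(x.copy())
    return np.array(traj)

def cost(u_seq, x0, x_goal):
    """Tracking cost over the horizon plus a small control-effort penalty."""
    traj = rollout(x0, u_seq)
    return np.sum((traj[:, 0] - x_goal) ** 2) + 1e-2 * np.sum(u_seq ** 2)

def nmpc_step(x0, x_goal):
    """Solve the finite-horizon problem; only the first control is applied."""
    res = minimize(cost, np.zeros(HORIZON), args=(x0, x_goal),
                   bounds=[(-2.0, 2.0)] * HORIZON)
    return res.x[0]

# Receding-horizon loop: re-plan from the current state at every step.
x = np.array([0.0, 0.0])
for _ in range(30):
    u = nmpc_step(x, x_goal=1.0)
    x = x + DT * np.array([x[1], u])
print(f"final position: {x[0]:.2f}, velocity: {x[1]:.2f}")
```

The real-time variant described above solves a far larger nonlinear program over the full aircraft state, but the plan/apply-first-control/re-plan loop is the same.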
We evaluated our approach in two domains: fixed-wing navigation in constrained environments and precision post-stall landing.
Fixed-Wing Navigation in Constrained Environments
To test our NMPC algorithm, we explored the receding-horizon case of navigating through a narrow corridor. We evaluated our algorithm in both simulation and hardware on a small 24-inch wingspan UAV and demonstrated agile fixed-wing flight using off-board sensing and computation.
Figure 1: A plot of the trajectory data from hardware experiments in the narrow corridor (left). A plot of the constraint radius (blue) for all trials in the corridor (right).
We also tested our approach using onboard sensing and computation on a 42-inch wingspan UAV operating in an urban environment.
 Basescu, Max, and Joseph Moore. "Direct NMPC for post-stall motion planning with fixed-wing UAVs." 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020.
Precision Post-Stall Landing
The second domain we explored was precision post-stall landing. Fixed-wing UAVs typically require long, flat runways. The objective of this effort was to land precisely at minimum speed and in minimum distance. We constrained ourselves to use as little thrust as possible and to land from a significant cruise speed and altitude (above the tree-line). We explored both a trajectory library approach as well as the aforementioned NMPC approach. Both of our algorithms leveraged radial basis function neural networks to more precisely model the lift, drag, and moment coefficients associated with the post-stall regime. We tested our algorithm on a 60-inch wingspan, 4.5 kg fixed-wing UAV and demonstrated precision landing in variable wind.
Figure 2: The radial basis function neural network for representing lift, drag, and moment functions (left). An optimal post-stall landing trajectory (right).
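As a rough illustration of the coefficient models above, the sketch below fits a small radial basis function network to a synthetic flat-plate lift curve, CL(α) = 2 sin(α) cos(α). The centers, kernel width, and data are invented for this example; only the structure (Gaussian features with linear output weights) mirrors the approach described.

```python
import numpy as np

alpha = np.linspace(0.0, np.pi / 2, 100)        # angle of attack [rad]
cl_true = 2.0 * np.sin(alpha) * np.cos(alpha)   # synthetic post-stall CL data

centers = np.linspace(0.0, np.pi / 2, 12)       # RBF centers (illustrative)
width = 0.2                                     # shared Gaussian kernel width

def rbf_features(a):
    """Gaussian RBF feature matrix, one column per center."""
    return np.exp(-((a[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# The output weights are the only trained parameters, so fitting reduces
# to a linear least-squares problem.
Phi = rbf_features(alpha)
w, *_ = np.linalg.lstsq(Phi, cl_true, rcond=None)

cl_pred = rbf_features(alpha) @ w
max_err = np.max(np.abs(cl_pred - cl_true))
print(f"max fit error: {max_err:.4f}")
```

Because the fit is linear in the weights, the model stays cheap enough to evaluate inside a trajectory optimizer while still capturing the smooth nonlinearities of the post-stall coefficients.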
Dynamic Wing-Morphing Control
Unsteady aerodynamic effects have a profound impact on aggressive flight maneuvers, especially in the context of dynamically morphing wings. By understanding and exploiting these effects, we believe that we can dramatically improve the maneuverability of UAVs. To do this, we have developed a real-time computational fluid dynamics simulator capable of capturing the behavior of flows with high vorticity. We embed this dynamics model in a trajectory optimizer to generate control policies for post-stall maneuvers. In particular, we explore the control of a dynamically morphing swing-wing vehicle.
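The core operation of a vortex-particle flow solver of this kind is the mutual induction of velocity between vortex elements. The minimal sketch below steps a pair of 2-D point vortices under the Biot-Savart kernel; the circulations, softening, and explicit Euler integrator are illustrative choices, not details of our simulator.

```python
import numpy as np

def induced_velocity(pos, gamma, eps=1e-3):
    """Velocity induced on each vortex by all the others (Biot-Savart)."""
    dx = pos[:, None, :] - pos[None, :, :]       # pairwise offsets
    r2 = np.sum(dx ** 2, axis=-1) + eps          # softened squared distance
    np.fill_diagonal(r2, np.inf)                 # no self-induction
    # Rotated-90-degree kernel: u = Gamma / (2*pi*r^2) * (-dy, dx)
    perp = np.stack([-dx[..., 1], dx[..., 0]], axis=-1)
    return np.sum(gamma[None, :, None] * perp / (2 * np.pi * r2[..., None]),
                  axis=1)

pos = np.array([[0.0, 0.0], [1.0, 0.0]])   # a co-rotating vortex pair
gamma = np.array([1.0, 1.0])               # equal circulations (made up)

for _ in range(100):                       # explicit Euler time stepping
    pos = pos + 0.01 * induced_velocity(pos, gamma)

# A co-rotating pair orbits its centroid, so the separation is conserved.
sep = np.linalg.norm(pos[0] - pos[1])
print(f"separation after 1 s: {sep:.3f}")
```

A real-time solver of this family tracks many such elements shed from the morphing wing and accelerates the pairwise sum, but the induced-velocity computation above is the building block.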
High-Speed Navigation Using Onboard Vision
Recent advances in electro-optical cameras afford a unique opportunity for lightweight, high-frequency sensing of the environment. However, as the speed of robotic systems increases, the uncertainty associated with these sensors has a greater opportunity to negatively impact system performance. The uncertainty induced by noisy pose estimates from visual odometry, motion blur, and lighting conditions can all play an important role. Furthermore, a limited field of view can lead to overly conservative navigation speeds.
To overcome these challenges, we’ve explored integrating our nonlinear model predictive control (NMPC) approaches with computationally lightweight mapping frameworks. We've also explored machine learning algorithms, such as generative adversarial networks (GANs), to predict and reason about the environment beyond the sensor's field of view for more robust, higher-speed navigation.
Vision-based Navigation for Fixed-Wing UAVs
We integrate the NanoMap framework with our NMPC approach to enable vision-based collision-free navigation with fixed-wing UAVs. We reach cruise speeds of 20 mph, and demonstrate the ability to execute aggressive maneuvers to avoid obstacles. Our approach allows for reasoning about sensing uncertainty, both in the state estimates as well as in the depth measurements used for navigation.
Figure 3: A diagram of the control architecture for achieving high speed vision-based fixed-wing navigation.
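As a conceptual sketch of how a history of depth measurements and pose uncertainty can enter a collision check, the snippet below keeps recent point clouds with a per-frame pose-uncertainty estimate and answers nearest-obstacle queries conservatively. This is a hypothetical simplification for illustration, not the actual NanoMap API.

```python
import numpy as np

history = []   # list of (points_in_world_frame, pose_sigma)

def add_frame(points_world, pose_sigma):
    """Store one depth frame's points plus its pose-uncertainty estimate."""
    history.append((np.asarray(points_world, float), pose_sigma))

def query(p, n_sigma=2.0):
    """Conservative distance from p to the nearest observed obstacle:
    each frame's distance is shrunk by a multiple of its pose uncertainty."""
    p = np.asarray(p, float)
    best = np.inf
    for points, sig in reversed(history):      # newest frames first
        d = np.min(np.linalg.norm(points - p, axis=1))
        best = min(best, d - n_sigma * sig)
    return best

add_frame([[2.0, 0.0], [2.0, 1.0]], pose_sigma=0.05)   # older, less certain
add_frame([[1.5, 0.0]], pose_sigma=0.02)               # newer, more certain
print(f"conservative clearance: {query([0.0, 0.0]):.2f} m")
```

Shrinking each frame's clearance by its accumulated pose uncertainty is one simple way an NMPC constraint can account for both state-estimate and depth-measurement noise.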
 Polevoy, Adam, et al. "Post-Stall Navigation with Fixed-Wing UAVs using Onboard Vision." arXiv preprint arXiv:2201.01186 (2022).
High-Speed Navigation Using Predictive Mapping
Safe and high-speed navigation is a key enabling capability for real-world deployment of robotic systems. A significant limitation of existing approaches is the computational bottleneck associated with explicit mapping and the limited field of view (FOV) of existing sensor technologies. In the Intelligent Systems Center, we study algorithmic approaches that allow the robot to predict spaces extending beyond the sensor horizon for robust planning at high speeds and more efficient map exploration.
We accomplish this using a generative neural network trained on real-world data. We extend our existing control algorithms to leverage the predicted spaces for improved collision-free planning and navigation at high speeds. This research is demonstrated using a physical robot based on the MIT race car equipped with an RGBD sensor, where we were able to show improved navigation performance at high speeds and increased efficiency in exploring and mapping new areas.
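One way a predicted occupancy map with per-cell uncertainty can feed back into high-speed planning is to certify free space conservatively and derive a speed limit from the certified stopping distance. The sketch below uses made-up probabilities and uncertainties along a planned path; the cell size, braking limit, and thresholds are all illustrative.

```python
import numpy as np

CELL = 0.5    # cell size along the path [m] (assumed)
A_MAX = 4.0   # available braking deceleration [m/s^2] (assumed)

# Occupancy probabilities along the planned path: the first cells are
# directly observed; the rest come from the generative predictor and
# carry larger uncertainty (all values invented for illustration).
p_occ = np.array([0.02, 0.02, 0.02, 0.05, 0.10, 0.10, 0.15, 0.80, 0.90])
sigma = np.array([0.02, 0.02, 0.02, 0.05, 0.15, 0.15, 0.25, 0.25, 0.25])

# Conservative free-space test: the upper confidence bound on occupancy
# must stay below 0.5 for a cell to count as certified free.
ub = p_occ + 2.0 * sigma
free = ub < 0.5

# Distance to the first cell we cannot certify as free.
blocked = np.argmax(~free) if (~free).any() else len(free)
d_free = blocked * CELL

# Largest speed that still allows a full stop inside certified free space:
# v = sqrt(2 * a * d).
v_max = np.sqrt(2.0 * A_MAX * d_free)
print(f"certified free distance: {d_free} m, speed limit: {v_max:.2f} m/s")
```

Because predicted cells carry wider uncertainty than observed ones, the robot naturally slows where the generative model is less confident and speeds up where prediction agrees with observation.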
 Katyal, Kapil D., et al. “Uncertainty-Aware Occupancy Map Prediction Using Generative Networks for Robot Navigation.” 2019 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2019.
 Katyal, Kapil D., et al. "High-speed robot navigation using predicted occupancy maps." 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021.
Complex Terrain Navigation
Off-road environments contain a wide range of terrains that pose challenges for conventional autonomous navigation systems. In particular, deformable obstacles, such as grasses, bushes, and other foliage can be difficult to navigate, since these objects, while traversable, appear untraversable when viewed with depth sensors.
To address this challenge, we developed a neural network architecture that learns a traversability metric from camera images. We collect training data by operating our unmanned ground vehicle in an environment and evaluating the trajectory deviation error: traversable objects produce low error, while untraversable objects produce large errors. Our neural network learns to predict model error given a sampled vehicle trajectory and a history of images. Using this predictive network, we are able to traverse long distances through grasslands, shrubs, and bushes (a), and demonstrate robustness to environments outside of the training distribution, (b) and (c).
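The self-supervised labeling step can be sketched as follows: compare the model-predicted path against the executed one and threshold the mean deviation. The trajectories and threshold below are synthetic stand-ins for the real logged data.

```python
import numpy as np

def model_error(predicted, executed):
    """Mean Euclidean deviation between predicted and executed XY paths."""
    return float(np.mean(np.linalg.norm(predicted - executed, axis=1)))

def label(err, threshold=0.05):
    """Turn a model-error value into a training label (threshold assumed)."""
    return "traversable" if err < threshold else "untraversable"

t = np.linspace(0.0, 1.0, 50)
predicted = np.stack([t, np.zeros_like(t)], axis=1)  # nominal straight path

# Driving through grass: the vehicle pushes through with only small
# tracking noise, so the executed path nearly matches the prediction.
grass = predicted + np.random.default_rng(0).normal(0.0, 0.01, predicted.shape)

# Hitting a rigid obstacle: the vehicle stops short of the predicted path,
# producing a large deviation for the remainder of the trajectory.
rock = np.stack([np.minimum(t, 0.3), np.zeros_like(t)], axis=1)

err_grass = model_error(predicted, grass)
err_rock = model_error(predicted, rock)
print(label(err_grass), label(err_rock))
```

No human annotation is needed: the robot's own tracking performance separates foliage it can push through from obstacles it cannot, which is what lets the learned metric generalize to depth-ambiguous vegetation.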
 Polevoy, Adam, et al. “Complex Terrain Navigation via Model Error Prediction.” 2022 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2022 (to be presented).
For more information or to join our team, please contact us at ISC@jhuapl.edu