September 26, 2019
In most cases, the preparation for and response to a natural disaster don’t match the damage caused and the devastation left behind. But a team of engineers and researchers in the Asymmetric Operations Sector (AOS) of the Johns Hopkins Applied Physics Laboratory (APL) is using its expertise in artificial intelligence (AI) to change that.
Last year, during Hurricane Florence, APL answered FEMA’s call for help to anticipate and detect flooded sections of North Carolina and surrounding coastal areas. Because the Lab already had applicable software — coupled with decades of AI experience — the team turned its results around quickly. Within days, the Lab began providing FEMA daily satellite and aerial images, processed through multiple deep learning algorithms trained to produce computer vision segmentation of water in images (called water-segmentation masks) and detect communication towers, roads, bridges, vegetation, buildings and other items of interest.
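The water-segmentation masks described above are, in essence, per-pixel labelings of "water" versus "not water." As a hedged illustration only (not APL's actual model or pipeline), the final step of such a system might threshold a model's per-pixel water probabilities into a binary mask:

```python
import numpy as np

def water_mask(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn a per-pixel water-probability map (as a segmentation model
    might emit) into a binary water-segmentation mask: 1 = water, 0 = not."""
    return (prob_map >= threshold).astype(np.uint8)

# Toy 2x3 "probability map" standing in for real model output.
probs = np.array([[0.9, 0.2, 0.7],
                  [0.1, 0.6, 0.3]])
mask = water_mask(probs)
print(mask.tolist())  # [[1, 0, 1], [0, 1, 0]]
```

In practice the probability map would come from a trained deep learning model run over satellite or aerial tiles; the threshold here is an illustrative default.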
The solution leveraged the Lab’s proven experience and growing work in AI — in this case, an algorithm to detect and classify objects in overhead imagery. FEMA used the processed imagery to quickly map and assess, and then monitor, the extent of flooding.
“FEMA was pleased with the images,” said Joshua Broadwater, a remote sensing researcher and supervisor of AOS’ Imaging Systems Group. “This was a fantastic opportunity for the government to learn more about APL’s capabilities in disaster relief and the potential application of this technology for a wider range of situations.”
Based on the Florence experience, the Defense Department’s Joint Artificial Intelligence Center (JAIC) tasked the Lab to take its technology to the next level. APL researchers are creating a machine learning capability to collect and process overhead imagery into three main types of analytics: flood segmentation (locating and marking areas of flooding), road analysis (identifying blocked and unblocked roads), and building damage assessment, which classifies each building into one of four damage categories defined by FEMA protocol (no damage, minor damage, major damage and completely destroyed).
Currently, helicopters fly over and photograph affected areas, leaving analysts to draw maps of the damage by hand. This manual process can lead to major inaccuracies and delays in response. AI technology, on the other hand, offers a more efficient and accurate way of collecting data, assessing a situation and responding quickly and decisively.
“We are excited to bring this technology capability to the analysts, since a lot of this type of work right now is done by hand,” said software engineer Beatrice Garcia, APL’s project manager for Humanitarian Assistance and Disaster Relief (HADR).
As the HADR algorithms lead, Gordon Christie “trains” deep learning models to automatically process overhead imagery. After Hurricane Florence, Christie teamed with the Lab’s Angeline Aguinaldo and Christopher Gifford to participate in the Humanitarian Engineering and Logistics Preparedness (HELP) challenge, an internal initiative to apply engineering and technology to quickly aid in natural disasters, and the team was awarded funding for its idea, “Flood Mapping in Satellite Imagery.” The funding allowed the team to address several of the improvements identified during the Hurricane Florence response, capabilities that were later put to use during Hurricane Dorian.
“One of the most rewarding things about this project is seeing our work immediately applied to real-world problems,” said Christie. “Before coming to APL, I was a graduate student, where the impact you have is often through publishing research papers — which is great — but this is more exciting!”
Aguinaldo led the design and implementation of the software that processed large image datasets from aerial and space assets. Using deep learning algorithms designed to find flooding, blocked roads and damaged buildings, she created files viewable via Google Earth and other geospatial visualization tools, leveraging her experience with Geographic Information Systems engineering, image processing, machine learning and human-computer interaction.
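Files viewable in Google Earth are commonly KML documents. As a hedged sketch only (not APL's actual software, and with illustrative coordinates), detected flood polygons could be serialized into a minimal KML file like so:

```python
def polygons_to_kml(polygons):
    """Serialize flood polygons into a minimal KML document string.

    polygons: list of rings, each a list of (lon, lat) pairs, with the
    first and last point repeated to close the ring.
    """
    placemarks = []
    for ring in polygons:
        coords = " ".join(f"{lon},{lat},0" for lon, lat in ring)
        placemarks.append(
            "<Placemark><name>flood</name><Polygon><outerBoundaryIs>"
            f"<LinearRing><coordinates>{coords}</coordinates></LinearRing>"
            "</outerBoundaryIs></Polygon></Placemark>")
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")

# One illustrative triangle near the North Carolina coast.
kml = polygons_to_kml(
    [[(-77.9, 34.2), (-77.8, 34.2), (-77.8, 34.3), (-77.9, 34.2)]])
```

Writing the returned string to a `.kml` file would let Google Earth or other geospatial viewers render the flood regions as overlays.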
“Many of the analysts and disaster response organizations need technology that streamlines their decision-making pipeline during humanitarian relief missions,” she said. “Therefore, our participation in this effort gave us a glimpse of how we can use our technical skills to make a large impact during these activities. Additionally, this effort has taught us the strengths and weaknesses of our system during real-life scenarios, which can lead to notable improvements in later iterations.”
Pedro Rodriguez, a deep learning researcher and supervisor in AOS’ Analytic Capabilities Group, explained how these capabilities can speed aid when disasters strike. “With this project, we have now taken baby steps toward a future where disaster responders will have access to data at a scale and speed that would not have been imaginable a few years ago.”
Media contact: Khadija Elkharbibi, 240-228-9118, Khadija.Elkharbibi@jhuapl.edu
The Applied Physics Laboratory, a not-for-profit division of The Johns Hopkins University, meets critical national challenges through the innovative application of science and technology. For more information, visit www.jhuapl.edu.