The Challenge
Machine Perception in Real-World Applications
Machine perception offers the ability to complement human capabilities. Recent advances in computer vision, while impressive, largely focus on narrowly trained pattern recognition algorithms that can be brittle in the face of real-world complexity and adversarial manipulation.
An ideal perception system could observe a sequence like the one illustrated in Figure 2, report that a Frisbee has been exchanged for a football, and represent the space behind the barrier as a likely location for the missing Frisbee. Such a system could operate as a teammate to, for example, a human security guard charged with maintaining public safety in a large area such as a stadium or an airport.

The Objective
Integrated Pattern Recognition and World-View Reasoning
Our research team has created the Active Sensing Testbed (AST) to help accelerate research and development in machine perception. The AST provides a modular software environment where researchers can use state-of-the-art computer vision algorithms as lower-level building blocks, allowing them to create more complex perception systems that combine information across multiple views, sensor modalities, and complementary analytics.
These capabilities lower the barrier to entry for researchers to explore new concepts in active sensing, where an intelligent system can enhance its understanding of a scene by reasoning, forming hypotheses, and acting to gather additional information.
Our Approach
Creating a Research Testbed for Real-Time Perception and Reasoning
The architecture of the AST centers on a server that receives data feeds from multiple sensors, performs selected transformations and analytics on the input data, and sends the resulting analytics and metadata to subscribers for visualization. The server is built on a containerized architecture using Docker so the system can easily scale across multiple machines and support various libraries without creating conflicts.
A visualization of this architecture is shown in Figure 3.
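As a rough illustration of this publish/subscribe flow, the sketch below shows what one containerized analytic service might look like. It assumes a ZeroMQ transport, JSON messages, and particular endpoints, all of which are our own illustrative choices; the AST's actual transport and message schema are not described here.

```python
# Minimal sketch of one containerized analytic service in an AST-like
# pipeline. The transport (ZeroMQ), endpoints, and message schema are
# assumptions for illustration only.
import zmq

def main() -> None:
    ctx = zmq.Context()

    # Subscribe to raw sensor frames published by the server.
    frames = ctx.socket(zmq.SUB)
    frames.connect("tcp://ast-server:5555")      # hypothetical endpoint
    frames.setsockopt_string(zmq.SUBSCRIBE, "")  # receive all topics

    # Publish computed analytics back for downstream subscribers
    # (e.g., the visualization interface).
    analytics = ctx.socket(zmq.PUB)
    analytics.connect("tcp://ast-server:5556")   # hypothetical endpoint

    while True:
        msg = frames.recv_json()  # e.g., {"camera_id": ..., "frame": ...}
        # A selected transformation or analytic (e.g., an object
        # detector) would run on the input here; omitted for brevity.
        result = {"camera_id": msg.get("camera_id"), "detections": []}
        analytics.send_json(result)

if __name__ == "__main__":
    main()
```

Because each analytic runs in its own container and communicates only over sockets, services built on conflicting libraries can coexist and be scaled independently across machines, which is the motivation for the Docker-based design.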
We have created a research testbed around the AST at the Johns Hopkins Applied Physics Laboratory's Intelligent Systems Center. Our testbed includes four ceiling-mounted pan-tilt-zoom cameras to support data collection, algorithm design, and evaluation. Our AST implementation also includes an operator interface to enable human-machine interaction.
Through the operator interface and the associated application programming interface, a system operator or other remote user can view data feeds, overlay computed analytics and metadata, and issue commands to the system. Our aim is to leverage the AST software framework to support similar setups across a variety of locations and applications.
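To make the operator interface concrete, here is a hypothetical sketch of how a remote user might exercise such an API over HTTP. The base URL, endpoint paths, and payloads are all invented for illustration; the AST's actual application programming interface may differ.

```python
# Hypothetical sketch of a remote client driving an AST-style operator
# API. Every URL, path, and payload below is an assumption made for
# illustration, not the AST's documented interface.
import requests

BASE_URL = "http://ast-server:8080/api"  # assumed address

# View the available data feeds.
feeds = requests.get(f"{BASE_URL}/feeds", timeout=5).json()
print(feeds)

# Request an analytic overlay on one feed.
requests.post(
    f"{BASE_URL}/feeds/cam-1/overlays",
    json={"analytic": "person-detection"},
    timeout=5,
)

# Issue a command to a pan-tilt-zoom camera.
requests.post(
    f"{BASE_URL}/sensors/cam-1/commands",
    json={"command": "pan", "degrees": 15},
    timeout=5,
)
```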

Outcomes
Project Transitions and Future Research
Now that we have established a baseline, various APL projects are beginning to make use of the AST as a tool for research and demonstration. For example, we have applied the AST software to implement a data collection suite for health analytics research, intended to support validation of analytics from standoff biometric sensors against commercial medical-grade sensors. An example of this suite is shown in Figure 4. Additional health analytics we have explored through this effort include face mask detection, social distancing analysis, and full-body pose estimation and emotion recognition for use during remote therapy sessions.
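As one concrete example of how such an analytic can be built from lower-level detections, below is a minimal sketch of a social distancing check: given ground-plane positions of detected people, it flags pairs closer than a threshold. This is our own illustrative logic, not the implementation used in the suite.

```python
# Illustrative sketch of a simple social distancing analytic: flag pairs
# of detected people whose ground-plane positions fall below a distance
# threshold. Names and the 2 m default are assumptions for illustration.
from itertools import combinations
import math

def close_pairs(positions: dict[str, tuple[float, float]],
                threshold_m: float = 2.0) -> list[tuple[str, str]]:
    """Return IDs of detection pairs closer than threshold_m meters.

    positions maps a detection ID to (x, y) coordinates in meters,
    e.g., obtained by projecting bounding-box centers onto the floor.
    """
    flagged = []
    for (id_a, pt_a), (id_b, pt_b) in combinations(positions.items(), 2):
        if math.dist(pt_a, pt_b) < threshold_m:
            flagged.append((id_a, id_b))
    return flagged

# Example: two people 1.5 m apart are flagged; a third is far away.
print(close_pairs({"p1": (0.0, 0.0), "p2": (1.5, 0.0), "p3": (10.0, 0.0)}))
```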
The AST provides an accessible environment and toolkit for research into active sensing and intelligent systems, allowing both novice and experienced AI researchers to quickly develop novel algorithms and system architectures for machine perception and reasoning tasks across a broad range of applications.