Intelligent systems are currently limited in their application because they are typically designed and employed as tools, rather than as complementary partners to human intelligence. Advancing intelligent systems from tools to collaborative teammates requires endowing them with the ability to assess human behavior and intent and adapt their actions accordingly.
Research scientists at the Johns Hopkins Applied Physics Laboratory are examining ways to build human-aware systems to improve the performance of human-machine teams.
“We are looking at ways to give a machine insight into human performance, so that the machine can become a collaborative peer, not just an opaque, exhaustive problem solver,” says Julie Marble, a long-time human-computer interface researcher and cognitive psychologist.
Marble and her research team are using 3D virtual reality gaming to explore the relationship between trust and risk in human-machine teams. They are also using Hanabi, a game of collaborative decision-making under uncertainty, to examine whether incorporating explicit models of human cognition into intelligent agents improves the agents' ability to collaborate with humans. These projects aim to advance the state of human-machine teaming research while transitioning key insights into real-world applications.
Marble discusses this research in “Teaching a Computer To Read Your Mind,” recently published in SIGNAL magazine.