Vickram Rajendran and Will LeVine were two self-described “kids” just having fun with computers and math — early-career staff members at the Johns Hopkins Applied Physics Laboratory (APL) trying to solve one of the biggest problems in machine learning, related to a concept dubbed “competence.”
On occasion, the pair would step back, look at the blackboard and marvel that the scribbled math in front of them was helping them make meaningful contributions to some of machine learning's toughest challenges.
Their work led to the definition of a new competence metric for neural networks, one that the Asymmetric Operations Sector research scientists dubbed “ALICE.” The acronym is explained in their paper, “Accurate Layerwise Interpretable Competence Estimation (ALICE),” that was accepted for a poster at the annual conference on Neural Information Processing Systems (NeurIPS) this December in Vancouver, Canada.
NeurIPS, one of the premier artificial intelligence conferences worldwide, is highly selective; only 1,428 papers were picked from 6,743 submissions for this year’s poster presentations.
In their paper, Rajendran and LeVine define a new competence metric for neural networks, propose a framework for computing competence, and describe the first competence estimator. The work derives from the operational need to understand how neural networks behave "in the wild." It is also the foundation for a series of newly funded tasks aimed at better predicting neural network performance, prioritizing which new data should be labeled to enhance model performance, and identifying which labeled data are mislabeled or particularly troublesome for the trained model.