| [Ed. note April 2020: Continuation of "HCAAM VNSCOR: Responsive Multimodal Human-Automation Communication for Augmenting Human Situation Awareness in Nominal and Off-Nominal Scenarios," grant 80NSSC19K0703, with the same Principal Investigator (PI) Leia Stirling, Ph.D., due to PI move to University of Michigan from Massachusetts Institute of Technology in fall 2019]
This task is part of the Human Capabilities Assessments for Autonomous Missions (HCAAM) Virtual NASA Specialized Center of Research (VNSCOR).
Crew extravehicular activity (EVA) time is limited on spaceflight missions. Multiple small robotic spacecraft with varying levels of autonomy will be needed to perform tasks that might otherwise have been completed by an astronaut (e.g., an exterior surface inspection or repair). Crews on long-duration exploration missions (LDEM) will have less access to ground support during task operations. As a result, they will need to process more information and communicate effectively with autonomous robots to ensure tasks are progressing safely and on schedule.
The objective of these studies is to investigate the use of augmented reality (AR) multimodal interface displays and communication pathways for improving human-robot communication, situation awareness (SA), trust, and task performance. This work will inform guidelines for designing human-robot system interactions that enable operational performance for crews on spaceflight missions.
The specific aims are to:
1) Develop a simulation testbed for examining communication between human-robot teams.
2) Develop a hardware testbed for examining communication between human-robot teams.
3) Evaluate human SA, trust, and task performance in short-duration and long-duration ground-based studies (simulation and/or hardware) by testing various interface communication modalities and information displays.
4) (Option) Perform additional studies on alternate parameters of interest that could be tested using the study testbeds. Additional parameters include the timing and persistence of information, gesture-to-command mapping, varying levels of robot automation, and the precision enabled by each command mode.