The first Virtual Intelligent Task Assistant (VITA) study was a pilot laboratory study to determine techniques for use in the second study, to be conducted in the NASA Human Exploration Research Analog (HERA) Campaign 6 (C6). This pilot study was completed in June 2021. Restrictions on going onsite at NASA Johnson Space Center (JSC) due to COVID-19 were lifted this year, enabling us to conduct pilot sessions in the Human Factors Engineering Laboratory (HFEL) onsite at JSC using subjects from the Human Test Subject Facility (HTSF). A total of 16 participants performed a session with VITA. In these sessions, participants performed tasks either from electronic procedures on a tablet or as provided by VITA on a HoloLens 1 display.
The second VITA study, conducted in HERA Campaign 6, started in September 2021. Each HERA mission has four participants, and Campaign 6 comprises four missions, for a total of 16 participants. At present, the VITA sessions for two missions in HERA C6 are complete. We worked with our HERA Experiment Support Scientist (ESS), Michael Merta, to prepare for each mission. Prior to each mission, we train the crew on how to interact with VITA. After each mission, we debrief each crewmember about his/her experience using VITA during the mission. Data collection and analysis for the VITA study in HERA C6 also began in Year 3. We report preliminary results from the pilot study and HERA C6 Mission 1 (C6M1) below.
Findings on Gaze-activated Control
Multi-modal interaction when using the VITA augmented reality software in a HoloLens includes visual presentation of task cues, hand gestures, and gaze-activated control. To improve support for hands-free operation, we are investigating the use of gaze to interact with the VITA user interface, including advancing to the next instruction, returning to a prior instruction, and recording data.
The operator uses gaze to mark instructions done and advance to the next instruction. Gaze is also used to zoom closer to or away from the VITA display, and to rotate a 3D model of the rover.
Gaze-activated controls are being investigated as a means to reduce workload during assembly tasks by enabling interaction with the VITA intelligent agent without the user moving their hands away from the assembly task. Subjective feedback on gaze-activated controls indicates that such controls can increase workload if not properly designed. If button response is too sensitive to user gaze, buttons can be activated accidentally, causing the user additional work to “undo” unintended actions. If button response is not sensitive enough, repeated gaze actions and extended gaze times are required, making these controls difficult to activate and frustrating the user.
In response to feedback from the pilot study, a number of design changes were made to the gaze-activated controls in VITA. In the initial VITA design, the buttons used to navigate through task instructions were located below the textual cue, near the bottom of the virtual field of view and closer to the user’s line of sight during assembly. However, this location appeared to contribute to accidental button activations as the eyes moved during assembly, so the navigation buttons were moved above the task cue, further from the user’s line of sight when assembling the rover. The time a user must gaze at a control button to activate it (the dwell time) was also increased to 250 msec. Navigation buttons were modified to blink briefly when activated, to improve user awareness of button activation. Finally, accidental activation of critical controls (such as marking a task done and moving to the next task) is prevented by an “arm-and-fire” design that requires two button activations to take an action.
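The dwell-time and arm-and-fire behaviors described above can be sketched in code. The following is a minimal illustrative sketch, not the VITA implementation: the class name, fields, and per-frame `update` interface are assumptions, with only the 0.25 s dwell time and the two-activation arm-and-fire rule taken from the text.

```python
class GazeButton:
    """Hypothetical sketch of a dwell-time gaze button.

    A button activates only after sustained gaze for `dwell_time`
    seconds (0.25 s, matching the 250 msec value in the text). A
    `critical` button models the arm-and-fire design: the first
    activation only arms it, and a second activation fires the action.
    """

    def __init__(self, name, dwell_time=0.25, critical=False):
        self.name = name
        self.dwell_time = dwell_time
        self.critical = critical
        self.gaze_start = None   # when continuous gaze on this button began
        self.armed = False       # arm-and-fire state for critical buttons

    def update(self, gazed_at, now):
        """Call once per frame. Returns True when the action should fire."""
        if not gazed_at:
            self.gaze_start = None   # gaze left the button; reset the timer
            return False
        if self.gaze_start is None:
            self.gaze_start = now    # gaze just arrived on the button
        if now - self.gaze_start < self.dwell_time:
            return False             # dwell threshold not yet reached
        self.gaze_start = None       # consume this activation
        if self.critical and not self.armed:
            self.armed = True        # first activation only arms the button
            return False
        self.armed = False
        return True                  # fire the action
```

In this sketch, lengthening `dwell_time` trades accidental activations against the sluggishness users reported: a critical button needs two full dwells before anything happens.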
Even with these changes, subjective feedback from the first mission of HERA Campaign 6 indicates that users felt gaze control was slow and not sensitive enough. The HoloLens 1 tracks head direction but does not track eye movement, which reduces the precision of the inferred gaze direction and can make buttons harder to activate.
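The precision limitation can be illustrated with a small sketch of head-based gaze pointing. This is an assumption-laden simplification (a flat UI plane perpendicular to the z axis, circular buttons, a ray cast along the head's forward vector); the function name and parameters are hypothetical, not a HoloLens API. It shows why a small head-direction error grows with distance to the UI plane, so small buttons become hard to hit without true eye tracking.

```python
def head_gaze_hit(head_pos, head_forward, plane_z, button_center, button_radius):
    """Cast a ray from the head along its forward vector onto a UI plane
    at depth plane_z, and test whether it lands within a circular button.

    head_pos, head_forward: (x, y, z) tuples; button_center: (x, y) on
    the plane. Returns True if the projected gaze point hits the button.
    """
    dx, dy, dz = head_forward
    if dz <= 0:
        return False  # looking away from (or parallel to) the UI plane
    t = (plane_z - head_pos[2]) / dz      # ray parameter at the plane
    hit_x = head_pos[0] + t * dx          # intersection point on the plane
    hit_y = head_pos[1] + t * dy
    cx, cy = button_center
    return (hit_x - cx) ** 2 + (hit_y - cy) ** 2 <= button_radius ** 2
```

For a plane 2 m away, a forward-vector error of only 0.1 in x displaces the hit point by 0.2 m, which already misses a 10 cm button; eye tracking would remove much of that error.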
Findings on Placement of Virtual Cues
The VITA user interface arranges virtual task cues and gaze-activated controls in a planar layout. By default, this plane is placed at eye-level when the head is raised and looking forward. The user can use hand gestures to adjust the placement of this plane relative to head position and focal length.
During the pilot study, users were trained to adjust the placement of this plane. Some subjects found this adjustment difficult. Initially, many users intuitively placed the plane near their hand position while assembling the rover. Eventually, most users moved the plane above their hands and to one side to prevent accidental activation of gaze-controlled buttons. If the plane was placed too far away, however, users had to move their heads more to see the task cues.
During HERA C6M1, placement of virtual cues continued to be challenging for users. Some crew mentioned that shifting the focal plane between virtual task cues and the rover can be tiring over time.
Preliminary findings indicate that additional study of the placement of virtual cues with respect to the focal plane of the task is merited. The user interface design should make it easy to adjust placement of virtual cues relative to the location of task components. The user interface design should also try to minimize shifts in focal length between the physical task and the virtual task cue, as frequent shifts can cause visual fatigue. Designs should be investigated that simplify aligning the focal length of the virtual cues with the focal length of the task components, even when virtual cues are not placed near the task components.
The VITA study is investigating workload when using only gaze-activated control, which earlier studies do not address. Subjective response to gaze-activated control has been mixed, with some users preferring it while others suggest using gesture or voice control. The reliance on gaze for all interaction with VITA makes it more likely that users may experience some visual fatigue, which is substantiated by observations during both the pilot study and HERA C6M1.
Observations from the pilot study and HERA C6M1 indicate that procedure information may need to be organized differently than in the tablet display for more effective use in virtual space. Currently, figures are associated with specific instructions. When using a tablet to view the procedure, users can easily glance back at an earlier figure. When using VITA, however, access to prior figures requires navigating back one instruction at a time. A number of users observed that the effort to go back in VITA discouraged them from looking at figures that would have helped with the current instruction.
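One possible reorganization, sketched below, is to keep a flat figure index alongside the instruction sequence so any earlier figure can be recalled directly instead of stepping back one instruction at a time. This is a hypothetical illustration of the data-organization idea, not the VITA procedure format; the class and method names are assumptions.

```python
class Procedure:
    """Hypothetical procedure model decoupling figures from navigation.

    instructions: list of (text, figures) pairs, one per instruction.
    A flat `gallery` of (instruction_index, figure) pairs lets the UI
    offer direct access to any figure from an earlier instruction.
    """

    def __init__(self, instructions):
        self.instructions = instructions
        self.current = 0
        self.gallery = [(i, fig)
                        for i, (_, figs) in enumerate(instructions)
                        for fig in figs]

    def advance(self):
        """Mark the current instruction done and move to the next."""
        self.current = min(self.current + 1, len(self.instructions) - 1)

    def figures_so_far(self):
        """All figures from the current or earlier instructions, available
        without navigating the instruction list backwards."""
        return [fig for i, fig in self.gallery if i <= self.current]
```

With this organization, a gaze-activated "figure gallery" control could show `figures_so_far()` in one step, removing the back-navigation cost users reported.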
Some participants in the pilot study and in HERA C6M1 reported discomfort when using the HoloLens 1 continuously for over 50 minutes. These reports are consistent with a study of simulator sickness. Gaze control was reported as fatiguing to some users in the pilot study. One participant in HERA C6M1 reported that having more than one session with an augmented reality headset in a day made symptoms worse, even when using different headsets (HoloLens 1 in one session, HoloLens 2 in another session). We are investigating in the HERA C6 study whether users adapt to this with repeated use.
We submitted a paper entitled “Lessons on Developing an Augmented Reality Interface to a Virtual Intelligent Task Assistant” to the Human Factors and Ergonomics Society annual meeting to be held in Atlanta, GA, on October 10 - 14, 2022. This paper reports preliminary results from the VITA pilot study and the first two missions in HERA Campaign 6.