Task Progress:
The Displays and Controls Interfaces DRP addresses the following Human Research Program (HRP) Risk and Gap:
• HRP Risk: Risk of inadequate human-computer interaction. Given that human-computer interaction (HCI) and information architecture designs must support crew tasks, and given the greater dependence on HCI in the context of long-duration spaceflight operations, there is a risk that critical information systems will not support crew tasks effectively, resulting in flight and ground crew errors and inefficiencies, failed mission and program objectives, and an increase in crew injuries.
• HRP Gap: Space Human Factors Engineering (SHFE)-HCI-03. We need HCI guidelines (e.g., display configuration, screen-navigation) to mitigate the performance decrements identified in SHFE-HCI-08 due to the spaceflight environment.
The study results provided HCI guidelines specific to EVA displays that will lead to improved human performance and contribute to the closure of the gap. Future research should investigate multimodal interfaces and guidelines for EVA displays in realistic scenarios.
1. EVA consumables display recommendations
On current extravehicular activity (EVA) missions, crewmembers depend on ground support personnel to monitor activities and suit systems. On deep space missions without the help of ground personnel, crewmembers will be responsible for monitoring their own and their team members’ consumable information when performing an EVA. Therefore, it is necessary to investigate approaches for concise representation of EVA consumable information. Based on information gathered through interviews with subject matter experts (SMEs) and crewmembers, it was found that there are four consumables of major interest: oxygen, battery power, cooling water, and carbon dioxide (Sándor, Archer, & Boyer, 2011). A quick-look summary display for easily assessing critical information, such as time remaining on each consumable, would be desirable.
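A quick-look summary of this kind reduces each consumable to a single "time remaining" figure and flags whichever consumable limits the EVA. The sketch below illustrates the idea only; the class, names, and units are hypothetical assumptions, not the actual suit telemetry model.

```python
from dataclasses import dataclass

# Hypothetical consumable model: an amount remaining and a usage rate,
# from which time remaining is derived. Names and units are illustrative.
@dataclass
class Consumable:
    name: str
    remaining: float    # amount left, in the consumable's native unit
    usage_rate: float   # units consumed per minute

    def minutes_remaining(self) -> float:
        return self.remaining / self.usage_rate

def limiting_consumable(consumables):
    """Return the consumable with the least time remaining (the quick-look value)."""
    return min(consumables, key=lambda c: c.minutes_remaining())

# Example: one crewmember's suit status (illustrative numbers)
suit = [
    Consumable("oxygen", remaining=600.0, usage_rate=2.0),        # ~300 min
    Consumable("battery", remaining=28.0, usage_rate=0.1),        # ~280 min
    Consumable("cooling water", remaining=3.5, usage_rate=0.01),  # ~350 min
    Consumable("CO2 scrubber", remaining=450.0, usage_rate=1.5),  # ~300 min
]
print(limiting_consumable(suit).name)  # battery is the limiting consumable
```

The same per-crewmember summary can then feed a team-level display, which is where the format comparisons described next come in.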
The study investigated the visual presentation of EVA consumables data with tables and two types of multidimensional icons: Chernoff faces and stick figures (Chernoff, 1973; Pickett & Grinstein, 1988). Multidimensional icons are recommended for multivariate data representation when display real estate is limited. For the Chernoff faces, various features of the face conveyed consumable status; for the stick figures, each limb represented a consumable, with the position of the limb showing good or bad status. The study focused on two tasks for each design approach: a) identifying the consumable with low time remaining, and b) identifying the crewmember in the worst condition.
The results showed that all of the formats are adequate for finding a consumable with low time remaining, since this requires identifying a single specific piece of information. In contrast, identifying the crewmember in the worst condition requires searching and comparing multiple features; thus, displays that support easy comparison of multiple features (tables and stick figures) led to better performance.
Multidimensional icons were adequate for making simple decisions about crewmembers, such as identifying the crewmember with a limiting consumable. However, because tables label each consumable and show the exact time remaining, they have a major advantage over icons: consumables are easy to identify, and the information presented is quantitative rather than only qualitative.
Therefore, for most tasks and for displays with no real estate constraints, tables are the recommended method for displaying consumables. We recommend stick figures when there is limited display real estate and there is a need to perform multi-feature comparisons of many crewmembers in a brief amount of time.
Chernoff faces can be viable under conditions similar to those for stick figures, such as when monitoring a single crewmember or when an overall snapshot of team health is needed. However, stick figures have the added benefit of allowing detailed comparisons of specific features among crewmembers.
2. EVA display prototype
This work was a continuation of prototype development started in FY2012: the development and evaluation of an EVA software interface prototype. In FY2012, a prototype was created based on EVA specialist interviews, EVA documentation, empirical experiments, and usability testing. In FY2013, this prototype was further developed and modified to fit two types of hardware: a small display used as a mock cuff display, and a head-mounted display (HMD). The prototype focuses on the presentation of consumables, such as oxygen, water, and battery data, and on the organization of other elements of the interface (e.g., navigation and consumable information). Stylistically, the interface follows the Orion Display Project Format Standards (NASA, 2009b). After developing a few versions of the prototype, an evaluation was conducted in which participants provided feedback while completing basic tasks that required use of the interface.
The purpose of the development process was to create an interface with dynamic and modular elements, so that, as EVA mission specifications are developed, elements of the design can be reused with new concepts and hardware.
3. Evaluation of EVA Prototype Data Display: Spatial Auditory Display for Remote Planetary Exploration
This research addresses the organization of information on displays that may be limited in size yet must integrate information relevant to situational awareness: navigation in the task environment as well as the health and status of the crew and mission systems during EVA. By identifying the best approaches for displaying complex information with limited resources, using the most appropriate modality (visual or auditory), access to information can be made intuitive and non-disruptive to the task at hand.
Specific Objectives: The primary goal of this research was to compare localization performance for different targets during a simulated extravehicular exploration of a planetary surface using three types of displays for aiding navigation (NavAids): a 3D spatial auditory orientation aid (A), a 2D north-up visual map (V), and the combination of the two in a bimodal orientation aid (B). Four environmental conditions were tested, combining high and low levels of visibility and ambiguity. In a separate experiment using a similar protocol, the impact of visual workload on performance was also investigated under high (dual-task paradigm) and low (single orientation task) workload levels.
Background: During extravehicular activities, astronauts must maintain situational awareness (SA) of a number of spatially distributed "entities" that are often outside the immediate field of view (FOV), while visual resources are needed for other task demands. Spatialized (3D) auditory cues can provide information that is complementary to, or may substitute for, cues in the visual environment. It was expected that the target localization task would benefit from a bimodal presentation of the navigation aid, particularly in degraded environmental conditions (low visibility and high ambiguity).
Method: In Study 1 (single task, ST), 48 participants performed a navigation task in a simulated visual-auditory environment. They were instructed to localize targets distributed outside their FOV with the three different NavAids and the two levels of visibility and ambiguity. In Study 2 (dual task, DT), participants had to monitor and respond to four meters representing the levels of EVA mission consumables (carbon dioxide, oxygen, water, and battery), superimposed on the visual scene at the top left of the display, while simultaneously performing the orientation task. To date, preliminary data from 6 participants have been collected under non-degraded visual conditions. In the future, additional participants and experimental conditions will be tested under normal and degraded visual environments in the dual-task paradigm.
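The left/right decision at the heart of the orientation task reduces to computing the target's bearing relative to the listener's heading, which is also what determines where a spatial audio beacon would be rendered. A minimal sketch under assumed coordinate conventions (north-up map, headings measured clockwise from north); the function and conventions are illustrative, not the experiment's actual implementation.

```python
import math

# Sketch of the core computation behind a spatial auditory NavAid: the azimuth
# of a target relative to the listener's heading, which drives both the
# left/right decision and where the audio beacon is spatialized.

def relative_bearing(listener_xy, heading_deg, target_xy):
    """Target azimuth relative to heading, in degrees in [-180, 180):
    negative = target to the listener's left, positive = to the right."""
    dx = target_xy[0] - listener_xy[0]
    dy = target_xy[1] - listener_xy[1]
    absolute = math.degrees(math.atan2(dx, dy))  # 0 deg = north, clockwise positive
    return (absolute - heading_deg + 180.0) % 360.0 - 180.0

# A target due east of a north-facing listener sits 90 deg to the right.
bearing = relative_bearing((0.0, 0.0), 0.0, (10.0, 0.0))
print("right" if bearing > 0 else "left")  # prints "right"
```

In a bimodal (B) presentation, the same bearing would drive both the 2D map symbol and the rendered direction of the auditory cue, which is one way the two modalities can reinforce each other.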
For both studies, the quantitative dependent variables were: percent correct orientation, left/right decision time, localization time, and localization accuracy. Qualitative measures (subjective ratings) were also collected after the experiments.
Results: In Study 1, the results showed that a combined presentation of 2D visual and 3D auditory cues led to a significant improvement in performance (higher percent correct for orientation, faster reaction times (RTs)) compared to either unimodal condition, in particular when the visual information required a mental spatial transformation or when the visual environmental conditions were degraded.
In Study 2, preliminary results in the high-visibility condition showed that an increase in mental workload (the monitoring task) affected performance differentially as a function of the modality of presentation of the NavAid. For the percentage of correct responses, there was no significant overall decrease in performance compared to the ST condition. However, comparisons between modalities showed that the percentage of correct responses was lower in the V condition than in either the A or the B condition. Mean left-right decision times were significantly increased by the increase in workload, and, as with the percent correct data, performance in the V condition was significantly worse than in either the A or the B condition.
Conclusion: In the particular context of EVA missions, the availability and/or reliability of most of the sensory inputs available on Earth is reduced, and the processing of visual information typically depends heavily on 2D displays. Spatial auditory displays can aid situational awareness, navigation, and wayfinding by reducing the risk of errors and response latencies. Further, compared to a visual-only 2D display (V), NavAids utilizing spatial auditory cues (both the A and B conditions) can mitigate the negative impact on performance of the extra demands imposed by high visual workload.
Recommendations: The results presented here demonstrate that spatial audio displays, both alone and in combination with a visual navigation display, enhance performance and situational awareness and add to the intuitiveness of the information display:
• User acceptability for 3D audio is very high.
• 3D audio provides an intuitive, ecological, and low-workload solution for the presentation of spatial information.
• 3D audio can be used to efficiently substitute for visual information that is missing or degraded.
• Combined presentation of the A and the V spatial information leads to a significant reduction of incorrect orientation responses and a reduction in decision times.
• The use of an auditory localizer, a type of dynamic sonification display, has proved its efficacy, particularly under degraded visual conditions.
Thus, it is recommended that bimodal and/or multimodal displays be used for EVA missions. It is increasingly evident that the auditory channel will need to convey spatial information about localization and navigation in an environment where the visual channel is already saturated by the display of symbology and checklists. The ecological validity of using sound for localization, combined with the possibility of learning to use virtual auditory signals to navigate between virtual waypoints, supports their integration into advanced EVA display systems. Integrating alternative ways to present information raises additional questions, such as the best methods for switching between different modes within and/or between the sensory channels made available to the operator. Issues that must be investigated include the following: the use of each sensory channel must be prescribed for a given type of activity; the different functions available must not overlap; and the sensory channels must combine appropriately to reduce the overall workload while increasing the sense of presence and situation awareness.
For example, 3D audio could provide a higher level of immersion and improved perception of the "6 DOF (degree of freedom) operational space." The combination of spatial and/or moving sound images with visual stimuli may increase vection and improve the sense of spatial presence, as well as mitigate spatial disorientation. Two potential benefits may be of particular interest: (1) providing immediate feedback on operator location and the actions performed in space, and (2) providing an auditory "frame of reference," such as an artificial auditory horizon combined with "auditory security boundaries" that define the crew's position in space in relation to the external features of the environment. Further, during training, the use of spatial audio could provide an additional countermeasure against cybersickness (nausea, disorientation, and oculomotor disturbances) induced by scene oscillation along the different axes of motion (pitch, roll, and yaw).
Begault, D. R., Wenzel, E. M., Godfroy, M., Miller, J. D., & Anderson, M. R. (2010). Applying spatial audio to human interfaces: 25 years of NASA experience. Proceedings of the Audio Engineering Society 40th International Conference on Spatial Audio, Tokyo, Japan, October 8-10, 2010.
Begault, D. R., Anderson, M. R., & Bittner, R. M. (2012). Modeling auditory-haptic interface cues from an analog multi-line telephone. Audio Engineering Society 133rd Convention, San Francisco, CA, October 26-29, 2012.
Wenzel, E. M., & Godfroy, M. (2011). Spatial auditory displays to enhance situational awareness during remote exploration. Workshop on Space Communications: Challenges for Auditory Displays & Interactive Spoken Dialogue Systems, Fourth IEEE International Conference on Space Mission Challenges for Information Technology (SMC-IT 2011), Palo Alto, CA, August 2-4, 2011.
Wenzel, E. M., Godfroy, M., & Miller, J. D. (2012). Prototype spatial auditory display for remote planetary exploration. Audio Engineering Society 133rd Convention, San Francisco, CA, October 26-29, 2012, Paper 8734.
Wenzel, E. M., Godfroy, M., & Miller, J. D. (in preparation). Spatial auditory displays for space operations: Mitigation of degraded visual environments. To be submitted to Human Factors.