The NASA Task Book

Project Title: Multimodal Augmented Displays for Surface Telerobotic Missions
Fiscal Year: FY 2015 
Division: Human Research 
Research Discipline/Element:
HRP SHFH:Space Human Factors & Habitability (archival in 2017)
Start Date: 06/02/2014  
End Date: 09/30/2015  
Task Last Updated: 04/05/2016 
Principal Investigator/Affiliation:   Wenzel, Elizabeth  Ph.D. / NASA Ames Research Center 
Address:  Human Factors Research & Technology Division 
Mail Stop 262-2 
Moffett Field, CA 94035-4799 
Email: Elizabeth.M.Wenzel@nasa.gov 
Phone: 650-604-6290  
Congressional District: 18 
Web:  
Organization Type: NASA CENTER 
Organization Name: NASA Ames Research Center 
Joint Agency:  
Comments:  
Co-Investigator(s)
Affiliation: 
Godfroy, Martine  Ph.D. San Jose State University Research Foundation 
Project Information: Grant/Contract No. Internal Project 
Responsible Center: NASA JSC 
Grant Monitor: Gore, Brian  
Center Contact: 650.604.2542 
brian.f.gore@nasa.gov 
Unique ID: 10135 
Solicitation / Funding Source: 2013 HERO NNJ13ZSA002N-Crew Health OMNIBUS 
Grant/Contract No.: Internal Project 
Project Type: GROUND 
Flight Program:  
TechPort: No 
No. of Post Docs:
No. of PhD Candidates:
No. of Master's Candidates:
No. of Bachelor's Candidates:
No. of PhD Degrees:
No. of Master's Degrees:
No. of Bachelor's Degrees:
Human Research Program Elements: (1) SHFH:Space Human Factors & Habitability (archival in 2017)
Human Research Program Risks: (1) HSIA:Risk of Adverse Outcomes Due to Inadequate Human Systems Integration Architecture
Human Research Program Gaps: (1) HSIA-201:We need to evaluate the demands of future exploration habitat/vehicle systems and mission scenarios (e.g. increased automation, multi-modal communication) on individuals and teams, and determine the risks these demands pose to crew health and performance.
(2) HSIA-401:We need to determine how HSI can be applied in the vehicle/habitat and computer interface Design Phase to mitigate potential decrements in operationally-relevant performance (e.g. problem-solving, execution procedures), during increasingly earth-independent, future exploration missions (including in-mission and at landing).
(3) HSIA-701:We need to determine how human-automation-robotic systems can be optimized for effective enhancement and monitoring of crew capabilities, health, and performance, during increasingly earth-independent, future exploration missions (including in-mission and at landing).
Flight Assignment/Project Notes: NOTE: Extended to 9/30/2015 (from 8/1/2015) per A. Chu/ARC (Ed., 6/30/15)

NOTE: End date is 8/1/2015 (instead of 7/1/2015) per E. Connell/JSC (Ed., 4/3/15)

Task Description: This research addressed the need for multimodal augmented displays to successfully execute both planetary Extra-Vehicular Activity (EVA) and telerobotic operations. Surface EVAs and telerobotic operations will include complex missions such as construction and assembly, surface and geologic exploration, and excavation for protective shelter. Specific constraints limit human performance in the particular context of surface operations, leading to a perceptually impoverished environment, combined in some cases with communication delays. The visual field of view (FOV) is restricted, there is no auditory information from the external environment, and somatosensory systems are negatively affected by distortion of the normal 3D reference frame. Such limitations can seriously affect mission safety and completion, resulting in a critical need to address how advanced controls and displays can mitigate the effects of complex task demands in an extreme environment.

The current research examined performance benefits resulting from virtual visual and auditory enhancements to the astronauts’ controls and displays. Studies were conducted in which head-up projections, 2D visual map displays, and virtual spatial auditory cues were combined in a synergistic manner to improve orientation, reaction time, and localization. Our prior work in the NASA Ames Research Center (ARC) ARC-TH Advanced Controls and Displays Laboratory had already demonstrated performance advantages from using spatially congruent visual and auditory cues for situational awareness and navigation. The current effort extended that work to dual-task activities (e.g., navigation and monitoring of mission consumables such as battery power) more congruent with the high workload of anticipated EVA conditions.

The first study compared human performance (orientation, response time, and localization time) in a “virtual” navigation task using separate or combined spatial auditory and visual input via specialized navigation aids (NavAids): a 2D visual map and/or a spatial auditory display. The 2D visual map display used an exocentric spatial frame of reference, requiring a mental transformation to reconcile the data provided by the map with the operator’s current FOV. Auditory icons (unique, subtle, continuous sonic feedback) form a cognitive map or “auditory scene” that informs the operator about the location and status of dynamic rovers and astronauts on the surface in an egocentric reference frame (camera view). The advantage of a bimodal display combining the two sources of information was explored; prior research suggests that such a display provides significant performance advantages by allowing the operator to select the most appropriate sensory input as a function of bearing. Study 1 investigated the performance impact produced by increased workload due to multi-tasking and degradation of the visual environment (low visibility, high object ambiguity). Study 2 investigated workload due to the impact of moderate communication delays (up to ~1 s) in the context of a telerobotic docking task utilizing unimodal or bimodal docking aids.
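
The exocentric-to-egocentric conversion at the heart of the bimodal NavAid concept can be illustrated with a minimal sketch (not the experiment software; the function name, coordinate conventions, and frame rate assumptions below are illustrative only): given a target position in the North-Up map frame and the operator’s position and heading, the azimuth at which a spatial auditory cue would be rendered is simply the map bearing minus the heading.

import math

def relative_bearing(op_x, op_y, op_heading_deg, tgt_x, tgt_y):
    """Convert an exocentric (North-Up map) target position into an egocentric
    azimuth relative to the operator's current heading.
    Assumed convention: heading in degrees, 0 = North, clockwise positive.
    Returns azimuth in (-180, 180]; negative values lie to the operator's left."""
    # Bearing of the target in the map (exocentric) frame
    map_bearing = math.degrees(math.atan2(tgt_x - op_x, tgt_y - op_y))
    # Subtracting the heading yields the egocentric azimuth used to place the audio cue
    return (map_bearing - op_heading_deg + 180.0) % 360.0 - 180.0

# Example: operator at the origin facing East (90 deg); a target due North on the map
# is rendered 90 degrees to the operator's left.
print(relative_bearing(0.0, 0.0, 90.0, 0.0, 100.0))  # -> -90.0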

Research Impact/Earth Benefits: The potential Earth benefit for this research includes any applications for which human monitoring and control of complex tasks and systems, including time-delayed operations, are required. Examples include multi-modal display and control interfaces for time-delayed teleoperation via the Internet or space satellite communication networks, commercial and military avionics, and remote piloting of unmanned autonomous vehicles (UAVs).

Task Progress & Bibliography Information FY2015 
Task Progress: The current work contributes to the foundation for guidelines necessary to develop advanced displays that support exploration missions involving extravehicular activities (EVAs) and telerobotic operations by mitigating performance decrements due to the perceptually impoverished environment of spaceflight, such as limited visibility, reduced sensory information channels, and communication delays. In particular, multimodal displays could improve effective information sharing between humans and semi-autonomous telerobotic agents by enhancing the operator’s situational awareness and perceptual accuracy of the operational space.

Background and Objectives: During surface exploration missions, the availability and reliability of most sensory inputs present on Earth are reduced, and the processing of visual information typically depends heavily on 2D displays such as visual maps. The current research addressed the information organization of displays that integrate navigation information as well as the health and status of crew and mission systems. Telerobotic exploration missions will likely require operators to perform multiple tasks simultaneously, utilizing a different type of display for each task. This increased workload may negatively impact operator performance as well as interact with the effects of display modality. Time delays present in the control loop of human teleoperation in space can be considered another form of increased task workload and can have a critical impact on human performance and mission effectiveness. The current research consisted of studies that investigated the impact of these two forms of workload, multi-tasking (Dual Task study) and latency (Docking Task study), on performance with different types of unimodal or bimodal displays.

Dual Task Method: The first study extended previous work (Wenzel et al., 2012), which used a single orientation task (ST), to a more complex multi-tasking environment with a dual task (DT) paradigm that included both the original orientation task and a second monitoring task. During a simulated extravehicular exploration of a planetary surface, performance was compared with different types of navigation aids (NavAids): a 3D spatial auditory NavAid (A), a 2D North-Up visual map (V), and the combination of the two in a bimodal NavAid (B). Four environmental conditions were tested, combining high and low levels of visibility and ambiguity. To facilitate comparison with the previous work, performance was analyzed separately for the Single Task (ST) and Dual Task (DT) studies and then compared between the two tasks for all the independent variables (factors). For both studies, the quantitative dependent variables were percent correct orientation, left/right decision time, localization accuracy, and localization time. Qualitative measures (subjective ratings) were also collected after the experiment.

Dual Task Results: Overall for ST and DT, the bimodal NavAid was associated with the best performance, both in terms of correct orientation and response times. For orientation, the auditory information channel proved to be an efficient countermeasure to the conflict generated by the difference between egocentric and allocentric reference frames in the displays tested. For localization, an “audio locator” display also greatly facilitated accuracy and response time in localization. Taken together, the observations made for ST were corroborated in DT and, in fact, the bimodal advantage was more pronounced under high visual workload.

Docking Task Method: In the second study, a single-task paradigm was used that involved performing a docking task associated with telerobotic planetary exploration on Mars (linkage of a remotely controlled vehicle to a surface habitat). Three different types of docking aids (DockAid) analogous to the NavAids of the previous studies were evaluated: a 2D visual DockAid (V), an auditory DockAid (A), and a combined bimodal DockAid (B). The experiment investigated the impact of display modality of the docking aids on operator performance (docking accuracy and response time) with increased workload introduced via additional control latencies ranging from 0 to 1000 msec. Performance was also assessed under high and low visibility conditions. The quantitative dependent measures were docking accuracy (radial distance between the centers of the targeting reticle and the docking target) and the docking response time. Qualitative measures (subjective ratings) were also collected after the experiment.
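
For illustration, the two quantitative measures and the latency manipulation described above can be sketched in a few lines (a hypothetical simplification, not the actual experiment code; the class name, frame rate, and data structures are assumptions): docking error is the radial distance between the reticle and target centers, and a fixed control latency can be approximated by holding each operator command in a FIFO buffer for the corresponding number of display frames.

from collections import deque
import math

class LatencyBuffer:
    """Hold each operator command for a fixed delay before it reaches the
    simulated rover, approximating the 0-1000 ms control latencies studied here."""
    def __init__(self, delay_ms, frame_ms=16.7):  # ~60 Hz display assumed
        self.queue = deque([None] * round(delay_ms / frame_ms))

    def push(self, command):
        self.queue.append(command)
        return self.queue.popleft()  # the command issued delay_ms earlier (or None)

def docking_error(reticle_xy, target_xy):
    """Docking accuracy: radial distance between the centers of the
    targeting reticle and the docking target."""
    return math.hypot(reticle_xy[0] - target_xy[0], reticle_xy[1] - target_xy[1])

# A 500 ms latency at ~60 Hz holds each command for about 30 frames.
buf = LatencyBuffer(delay_ms=500)
buf.push({"dx": 1.0, "dy": 0.0})              # returns None until the buffer fills
print(docking_error((2.0, 1.5), (0.0, 0.0)))  # -> 2.5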

Docking Task Results: On-time docking accuracy was greater in the bimodal condition than in the best unimodal condition, again supporting some form of positive multisensory effect. The apparent difficulty of purely auditory docking, evidenced by longer docking times, was contradicted by the high rate of accurate docking (94% overall, 54% on-time), showing that operators learn to process spatialized auditory sonification very quickly. As expected, the introduction of latencies was associated with performance degradation. However, this effect was modality specific, and the presence of auditory cues provided some degree of protection against the negative impact of latencies of 500 ms or less. This performance inflection point may be related to the limits of a “cognitive horizon” for teleoperation in space, i.e., a latency limit of ~500 ms beyond which performance degrades, as described by Lester & Thronson (2011).

Conclusions: Overall, the results of both studies support integrating 3D audio into displays that aid extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization, and docking. Spatial auditory displays can aid situational awareness, navigation, and wayfinding by reducing the risk of errors and response latencies. Alone, the auditory system provides a reliable alternate channel of information in cases where the visual information is degraded or unavailable. In particular, it may mitigate the deleterious effects of the relatively small latencies present during lunar telerobotic control from Lagrange points or from Mars orbit during future surface exploration and control missions. When auditory information is provided in synergy with the visual channel, bimodal performance usually exceeds that of the best unimodal display. This multisensory enhancement proved to be inversely proportional to the reliability of the individual sensory inputs (inverse effectiveness; Meredith & Stein, 1986).
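
The bimodal advantage and its dependence on unimodal reliability can be illustrated with a standard inverse-variance (maximum-likelihood) cue-combination sketch. This is a generic statistical illustration, offered here as an assumed analogy rather than the analysis model used in these studies.

def bimodal_variance(var_a, var_v):
    """Variance of an inverse-variance-weighted combination of an auditory
    estimate (variance var_a) and a visual estimate (variance var_v).
    The combined variance never exceeds the smaller unimodal variance."""
    return (var_a * var_v) / (var_a + var_v)

# With a reliable visual cue, the bimodal gain over vision alone is modest...
print(bimodal_variance(var_a=4.0, var_v=1.0))  # 0.8 vs. 1.0 visual-only
# ...but with a degraded visual cue the relative gain is much larger,
# in the spirit of the inverse-effectiveness principle cited above.
print(bimodal_variance(var_a=4.0, var_v=4.0))  # 2.0 vs. 4.0 visual-only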

Recommendations for Guidelines: The results presented here demonstrate that spatial audio displays, both alone and in combination with a visual display, enhance performance and situational awareness, mitigate the impact of visual environment degradation and increased workload, and add to the intuitiveness of the information display:

• User acceptability for 3D audio is very high.

• 3D audio provides an intuitive, ecological, and low-workload solution for the presentation of spatial information.

• 3D audio can be used to efficiently substitute for visual information that is missing or degraded, when workload is increased, and as an effective countermeasure for mental remapping.

• In the orientation/localization task, bimodal presentation of the A and the V spatial information leads to a significant reduction of incorrect orientation responses and a reduction in decision times.

• The use of an auditory localizer, a type of dynamic sonification display, has proved its efficacy, particularly under degraded visual conditions.

• The results of the docking task study indicate that bimodal displays can also mitigate the negative impact of workload in the form of moderate (≤ 500 msec) control latencies.

• The docking study represents a proof of concept for a purely auditory docking aid. While the auditory aid took longer compared to the visual aid, the fact that accurate auditory docking only required ~30 sec suggests that a purely auditory DockAid is a viable display solution.

Thus, it is recommended that bimodal and/or multimodal displays be used for EVA missions. It is increasingly evident that the auditory channel will need to convey spatial information about localization and navigation in an environment where the visual channel is already saturated by the display of symbology and checklists. The ecological validity of using sound for localization, combined with the possibility of learning to use virtual auditory signals to navigate between virtual waypoints, supports the integration of spatial audio in advanced EVA display systems. Similarly, the viability of auditory and/or bimodal aids for tasks such as docking argues for their application to closed-loop tasks involving moderate control latencies.

The integration of alternative ways to present information raises additional questions, such as the best methods for switching between different modes within and/or between the sensory channels made available to the operator. Issues that must be investigated include the following: the use of each sensory channel must be prescribed for a given type of activity; the available functions must not overlap; and the sensory channels must combine appropriately to reduce the overall workload while increasing the sense of presence and situation awareness.

For example, 3D audio could provide a higher level of immersion and improved perception of the 6-DOF operational space. The combination of spatial and/or moving sound images with visual stimuli may increase vection and improve the sense of spatial presence, as well as mitigate spatial disorientation. Two potential benefits may be of particular interest: (1) providing immediate feedback on operator location as well as the actions performed in space, and (2) providing an auditory “frame of reference,” such as an artificial auditory horizon combined with “auditory security boundaries” that define the crew’s position in space in relation to the external features of the environment. Further, during training, the use of spatial audio could provide an additional countermeasure against cyber-sickness (nausea, disorientation, and oculomotor disturbances) induced by scene oscillation along the different axes of motion (pitch, roll, and yaw).

Finally, future investigation of multimodal displays for surface exploration should be extended to the use of tactile displays, for example, in emergency situations requiring coordinated communications between multiple personnel where both the visual and auditory channels may be overloaded.

References

Meredith, M. A., & Stein, B. E. (1986). Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. Journal of Neurophysiology, 56(3), 640-662.

Lester, D., & Thronson, H. (2011). Low-Latency Lunar Surface Telerobotics from Earth-Moon Libration Points. Proceedings of the AIAA SPACE 2011 Conference & Exposition, Long Beach, California, September 27-29, 2011.

Wenzel, E. M., Godfroy, M., & Miller, J. D. (2012). Prototype Spatial Auditory Display for Remote Planetary Exploration. Proceedings of the 133rd Convention of the Audio Engineering Society, San Francisco, California, October 26-29, 2012.

Bibliography: Description: (Last Updated: 03/24/2016) 

Abstracts for Journals and Proceedings: Wenzel EM, Godfroy M, Miller JD. "Multimodal augmented displays for surface telerobotic missions." 2016 NASA Human Research Program Investigators’ Workshop, Galveston, TX, February 8-11, 2016.

Papers from Meeting Proceedings: Wenzel EM, Godfroy M, Miller JD. "Spatial auditory displays: Substitution and complementarity to visual displays." Proceedings of the International Conference on Auditory Display, New York, NY, June 22-25, 2014. http://dx.doi.org/10.13140/2.1.2690.3683

Project Title: Multimodal Augmented Displays for Surface Telerobotic Missions
Fiscal Year: FY 2014 
Division: Human Research 
Research Discipline/Element:
HRP SHFH:Space Human Factors & Habitability (archival in 2017)
Start Date: 07/01/2014  
End Date: 09/30/2015  
Task Last Updated: 02/26/2015 
Principal Investigator/Affiliation:   Wenzel, Elizabeth  Ph.D. / NASA Ames Research Center 
Address:  Human Factors Research & Technology Division 
Mail Stop 262-2 
Moffett Field, CA 94035-4799 
Email: Elizabeth.M.Wenzel@nasa.gov 
Phone: 650-604-6290  
Congressional District: 18 
Web:  
Organization Type: NASA CENTER 
Organization Name: NASA Ames Research Center 
Joint Agency:  
Comments:  
Co-Investigator(s)
Affiliation: 
Godfroy, Martine  Ph.D. San Jose State University Research Foundation 
Project Information: Grant/Contract No. Internal Project 
Responsible Center: NASA ARC 
Grant Monitor: Gore, Brian  
Center Contact: 650.604.2542 
brian.f.gore@nasa.gov 
Unique ID: 10135 
Solicitation / Funding Source: 2013 HERO NNJ13ZSA002N-Crew Health OMNIBUS 
Grant/Contract No.: Internal Project 
Project Type: GROUND 
Flight Program:  
TechPort: No 
No. of Post Docs:  
No. of PhD Candidates:  
No. of Master's Candidates:  
No. of Bachelor's Candidates:  
No. of PhD Degrees:  
No. of Master's Degrees:  
No. of Bachelor's Degrees:  
Human Research Program Elements: (1) SHFH:Space Human Factors & Habitability (archival in 2017)
Human Research Program Risks: (1) HSIA:Risk of Adverse Outcomes Due to Inadequate Human Systems Integration Architecture
Human Research Program Gaps: (1) HSIA-201:We need to evaluate the demands of future exploration habitat/vehicle systems and mission scenarios (e.g. increased automation, multi-modal communication) on individuals and teams, and determine the risks these demands pose to crew health and performance.
(2) HSIA-401:We need to determine how HSI can be applied in the vehicle/habitat and computer interface Design Phase to mitigate potential decrements in operationally-relevant performance (e.g. problem-solving, execution procedures), during increasingly earth-independent, future exploration missions (including in-mission and at landing).
(3) HSIA-701:We need to determine how human-automation-robotic systems can be optimized for effective enhancement and monitoring of crew capabilities, health, and performance, during increasingly earth-independent, future exploration missions (including in-mission and at landing).
Flight Assignment/Project Notes: NOTE: Extended to 9/30/2015 (from 8/1/2015) per A. Chu/ARC (Ed., 6/30/15)

NOTE: End date is 8/1/2015 (instead of 7/1/2015) per E. Connell/JSC (Ed., 4/3/15)

Task Description: This proposal addresses the need for multimodal augmented displays to successfully execute both planetary Extra-Vehicular Activity (EVA) and telerobotic operations. Surface EVAs and telerobotic operations will include complex missions such as construction and assembly, surface and geologic exploration, and excavation for protective shelter. Specific constraints limit human performance in the particular context of surface operations, leading to a perceptually impoverished environment, combined in some cases with communication delays. The visual field of view (FOV) is restricted, there is no auditory information from the external environment, and somatosensory systems are negatively affected by distortion of the normal 3D reference frame. Such limitations can seriously affect mission safety and completion, resulting in a critical need to address how advanced controls and displays can mitigate the effects of complex task demands in an extreme environment.

The proposed research examines performance benefits resulting from virtual visual and auditory enhancements to the astronauts’ controls and displays. Studies will be conducted in which head-up projections, 2D visual map displays, and virtual spatial auditory cues are combined in a synergistic manner to improve orientation, reaction time, and localization. Our prior work in the NASA Ames Research Center (ARC) ARC-TH Advanced Controls and Displays Laboratory has already demonstrated performance advantages from using spatially congruent visual and auditory cues for situational awareness and navigation. This proposal will extend that work to dual-task activities (e.g., navigation and monitoring of mission consumables such as battery power) more congruent with the high workload of anticipated EVA conditions.

The first study will compare human performance (orientation, response time, and localization time) in a “virtual” navigation task using separate or combined spatial auditory and visual input via specialized navigation aids (NavAids): a 2D visual map and/or a spatial auditory display. The 2D visual map display uses an exocentric spatial frame of reference, requiring a mental transformation to reconcile the data provided by the map with the operator’s current FOV. Auditory icons (unique, subtle, continuous sonic feedback) form a cognitive map or “auditory scene” that informs the operator about the location and status of dynamic rovers and astronauts on the surface in an egocentric reference frame (camera view). The advantage of a bimodal display combining the two sources of information will be explored; prior research suggests that such a display will provide significant performance advantages by allowing the operator to select the most appropriate sensory input as a function of bearing. Study 1 will investigate the performance impact produced by increased workload due to multi-tasking and degradation of the visual environment (low visibility, high object ambiguity). Study 2 will investigate workload due to the impact of moderate communication delays (< ~1 s) in the context of a docking task utilizing unimodal or bimodal docking aids. We will also explore the feasibility of transferring multimodal display capabilities for use in collaborative experiments involving a “real” navigation task in a physical telerobotic configuration. This may include NASA facilities such as the ARC Intelligent Robotics Group and the analog definition facilities of the Johnson Space Center (JSC) Human Exploration Research Analog (HERA).

Research Impact/Earth Benefits:

Task Progress & Bibliography Information FY2014 
Task Progress: New project for FY2014.

(Ed. note: added to Task Book when received period of performance information Feb. 2015)

Bibliography: Description: (Last Updated: 03/24/2016) 

 
 None in FY 2014