Task Progress:
The following tasks have been accomplished:
1. THE DEVELOPED SYSTEM
We developed a sensory manipulation system that provides additional sensory cues, especially haptic feedback, for robot teleoperation. The system includes the following units:
1) Robot commanding unit. This unit connects a robot arm with the Unity game engine for digital twinning and haptic control. The Robot Operating System (ROS) serves as the main platform for exchanging data between the robot arm and Unity.
2) Digital twinning unit. The Unity game engine is used to create a digital twin of the remote robot and the workplace. The human operator can use a Virtual Reality (VR) headset to visualize the remote workplace and the robot, coordinating the hand-picking task in an immersive way.
3) Haptic interface unit. This unit includes the haptic feedback and control systems. Seven types of physical interactions (weight, texture, momentum, inertia, impact, balance, and rotation) are simulated via a physics engine and rendered through a high-resolution haptic controller, so the human operator can feel the enriched physical processes pertaining to the hand-picking task. We also programmed the system to intentionally add nine levels of latency (0ms, 250ms, 500ms, 750ms, 1000ms, 1250ms, 2500ms, 3750ms, and 5000ms) to the visual or haptic feedback; a minimal sketch of this delay-injection idea follows the list.
4) Human assessment unit. The last component of the system is a set of neurophysiological sensors embedded in the VR system for real-time human assessment, including eye trackers, motion trackers, and functional near-infrared spectroscopy (fNIRS), which examines hemodynamic activity in 24 brain areas.
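The delay injection described in the haptic interface unit can be illustrated with a minimal sketch, assuming a time-stamped FIFO buffer sitting between the physics engine and each feedback channel; the class and method names below (FeedbackDelayBuffer, push, pop_ready) are illustrative, not the actual system API.

```python
# Minimal sketch of the latency-injection idea, assuming a time-stamped FIFO
# buffer between the physics engine and a feedback channel. Names are
# illustrative, not the actual system API.
from collections import deque
import time


class FeedbackDelayBuffer:
    """Buffers feedback samples and releases them after a fixed latency."""

    def __init__(self, latency_ms: float):
        self.latency_s = latency_ms / 1000.0
        self.queue = deque()  # (arrival_time, sample) pairs

    def push(self, sample) -> None:
        """Store a sample together with its arrival time."""
        self.queue.append((time.monotonic(), sample))

    def pop_ready(self) -> list:
        """Return every sample whose injected latency has elapsed."""
        now = time.monotonic()
        ready = []
        while self.queue and now - self.queue[0][0] >= self.latency_s:
            ready.append(self.queue.popleft()[1])
        return ready


# Example: visual feedback delayed by 750 ms, haptic feedback kept real-time.
visual_buffer = FeedbackDelayBuffer(latency_ms=750)
haptic_buffer = FeedbackDelayBuffer(latency_ms=0)
```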
2. HUMAN SUBJECT EXPERIMENTS
To test the impact of the proposed sensory manipulation method on performance in a robot teleoperation task, two human-subject experiments were performed: Experiment I focused on delays of up to 1s, and Experiment II focused on delays of up to 5s. The task was a replacement and repair (R&R) task in a low-gravity environment, which involved picking up, moving, and placing four cubes of different masses as quickly and as accurately as possible. The experiments consisted of four conditions (their delay structure is also summarized in the sketch after the list):
Control condition: haptic and visual feedback are in real time (haptic delay=0; visual delay=0).
Anchoring condition: haptic feedback is in real time, and visual feedback has varying delays (haptic delay=0; visual delay=250ms, 500ms, 750ms, 1000ms, 1250ms, 2500ms, 3750ms, and 5000ms).
Synchronous condition: haptic and visual feedback are delayed by the same amount (haptic delay=visual delay=250ms, 500ms, 750ms, 1000ms, 1250ms, 2500ms, 3750ms, and 5000ms).
Asynchronous condition: haptic and visual feedback are delayed by different amounts (haptic delay=250ms; visual delay=250ms, 500ms, 750ms, 1000ms, 1250ms, 2500ms, 3750ms, and 5000ms).
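For reference, the delay structure of the four conditions can be written down as a small configuration sketch; the dictionary keys and the build_trials helper are assumptions for illustration, not part of the experimental software.

```python
# Hedged sketch of the condition/delay grid above. Keys and helper names are
# illustrative only.
DELAYS_MS = [250, 500, 750, 1000, 1250, 2500, 3750, 5000]

CONDITIONS = {
    "control":      {"haptic_ms": [0],       "visual_ms": [0]},
    "anchoring":    {"haptic_ms": [0],       "visual_ms": DELAYS_MS},
    "synchronous":  {"haptic_ms": DELAYS_MS, "visual_ms": DELAYS_MS},  # paired 1:1
    "asynchronous": {"haptic_ms": [250],     "visual_ms": DELAYS_MS},
}


def build_trials(condition: str):
    """Expand a condition into (haptic_delay_ms, visual_delay_ms) pairs."""
    if condition == "synchronous":
        return [(d, d) for d in DELAYS_MS]  # equal delays on both channels
    spec = CONDITIONS[condition]
    return [(h, v) for h in spec["haptic_ms"] for v in spec["visual_ms"]]


print(build_trials("anchoring"))  # [(0, 250), (0, 500), ..., (0, 5000)]
```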
The experiment used a within-participant design, i.e., each participating subject experienced all four conditions. The order of conditions was shuffled for each subject to mitigate learning effects. Performance data (time and accuracy), motion data (moving trajectories), eye-tracking data (gaze focus and pupillary size), and neurofunctional data (measured by fNIRS) were collected. Participating subjects were also asked to report their perceived delays for comparison with the actual delays. The final measurement metrics used in the analysis included: 1) performance (time on task and positioning accuracy); 2) perception (perceived versus actual delays); 3) cognitive load (NASA TLX and eye tracking); and 4) neurofunction (fNIRS).
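As an illustration of how these data streams can be grouped per trial, a minimal record sketch is shown below; all field names are assumptions made for illustration, not the study's actual data schema.

```python
# Hedged sketch of a per-trial record for the measures listed above.
# Field names are assumptions, not the study's actual data schema.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class TrialRecord:
    subject_id: int
    condition: str                 # control / anchoring / synchronous / asynchronous
    haptic_delay_ms: int
    visual_delay_ms: int
    time_on_task_s: float          # performance: completion time
    positioning_error: float       # performance: placement accuracy
    perceived_delay_ms: float      # perception: self-reported delay
    nasa_tlx: Dict[str, float] = field(default_factory=dict)   # TLX subscales
    pupil_diameter: List[float] = field(default_factory=list)  # eye tracking
    gaze_points: List[Tuple[float, float]] = field(default_factory=list)
    fnirs_channels: List[List[float]] = field(default_factory=list)  # hemodynamics
```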
3. RESULTS
We recruited 43 healthy subjects to participate in Experiment I and another 51 healthy subjects in Experiment II. The following sections present the results of both experiments.
3.1. Performance
The results showed that sensory manipulation improved teleoperation performance in terms of time on task when the delay was up to 1s. In this report we focus on the results of the anchoring condition, i.e., providing haptic cues coupled with each action, because this condition represents using simulated haptic feedback to augment a person’s motor actions regardless of the visual delay level. Time on task was significantly reduced under the anchoring method, independent of the visual delay, likely because subjects relied more on haptic feedback, when it was available, to coordinate their teleoperation actions. Real-time haptic stimulation boosted performance to a level similar to the control condition (i.e., no delay). We did not see a significant improvement in the anchoring condition when delays extended up to 5s, which suggests that there may be a cutoff beyond which the proposed sensory manipulation method stops being useful. In addition, we did not observe significant differences across the four conditions in terms of picking and dropping accuracy, possibly because the designed task was not challenging enough overall.
3.2. Perception
We also found that the proposed sensory manipulation method could reduce the subjective feeling of teleoperation delays of up to 5s. For Experiment I (delays up to 1s), the data show that under the anchoring condition, the overall average perceived visual delay was significantly lower than under the synchronous condition. In addition, 18% of test subjects reported a perceived visual delay smaller than the actual one under the anchoring condition. For Experiment II (delays up to 5s), 20% and 15% of test subjects reported a perceived visual delay smaller than the actual one under the anchoring condition and the asynchronous condition, respectively. Given that both the anchoring and asynchronous conditions provide haptic feedback at a fixed, short interval after a motor action (in real time or after 250ms), this suggests that coupling (near) real-time haptic feedback with the action during teleoperation can mitigate the subjective feeling of delays.
3.3. Cognitive Load
Our data also showed that the proposed sensory manipulation method presented benefits in terms of cognitive load for delays up to 1s. First, we analyzed pupillary size, as the literature indicates that an increased pupillary size reflects an increased cognitive load. We did not see any difference among the four conditions when all data from each trial were aggregated in a holistic analysis. However, after dividing the data of each trial into three stages, the object pickup stage (20s), the object drop-off stage (20s), and the object movement stage (the remaining time), we found that the anchoring condition led to lower cognitive load in both the object pickup stage and the object drop-off stage. Interestingly, the NASA Task Load Index (TLX) analysis showed a similar benefit of the anchoring condition in terms of mental load. In addition, the anchoring condition led to a higher level of confidence and a lower level of frustration in comparison with the synchronous and asynchronous conditions. Of note, none of the benefits related to cognitive load, perceived frustration, or self-confidence were observed when delays extended up to 5s (results not shown). Once again, this suggests that there may be a cutoff delay beyond which our method stops being useful.
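The stage-wise pupil analysis can be sketched as follows, assuming for illustration that each trial's pupil series is uniformly sampled and that the pickup stage is taken as the first 20s and the drop-off stage as the last 20s of the trial; the function names are hypothetical.

```python
# Hedged sketch of the stage segmentation used in the pupillometry analysis.
# Assumes a uniformly sampled pupil-diameter series and, for illustration only,
# that pickup = first 20 s and drop-off = last 20 s of the trial.
def split_trial_stages(pupil_series, sample_rate_hz: float, stage_s: float = 20.0):
    """Return (pickup, movement, dropoff) segments of one trial's pupil data."""
    n = int(stage_s * sample_rate_hz)
    pickup = pupil_series[:n]
    dropoff = pupil_series[-n:] if n <= len(pupil_series) else list(pupil_series)
    movement = pupil_series[n:len(pupil_series) - n]  # remaining time
    return pickup, movement, dropoff


def mean_pupil_diameter(segment) -> float:
    """Average pupil diameter of one stage (proxy for cognitive load)."""
    return sum(segment) / len(segment) if segment else float("nan")
```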
3.4. Neurofunction (fNIRS)
Finally, we found that the proposed sensory manipulation method may also offer neurofunctional benefits (for delays up to 1s). We tracked 35 fNIRS channels during all trials of Experiment I and focused our analysis on two brain regions: the dorsal cortex and the prefrontal cortex. The dorsal cortex is believed to be related to time perception, with a higher activation level indicating more engagement in time perception; the prefrontal cortex is related to the planning of activities. Our results show that the anchoring condition led to lower activation levels in both regions (see Fig. 13). This suggests that providing real-time haptic feedback (which could be simulated haptic feedback driven by a physics engine) may reduce the need to focus on planning motor actions (prefrontal cortex) and on judging how long the delay is (dorsal cortex). In other words, subjects should be able to focus more on the actual motor task, such as picking up or moving an object.
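The region-level fNIRS comparison can be illustrated with a short aggregation sketch; the channel-to-region mapping shown is hypothetical and does not reflect the montage actually used.

```python
# Hedged sketch of region-level fNIRS aggregation: average the activation of
# the channels assigned to each region of interest. The channel-to-region
# mapping below is hypothetical, NOT the study's actual montage.
import numpy as np

REGION_CHANNELS = {
    "prefrontal_cortex": [0, 1, 2, 3, 4],  # hypothetical channel indices
    "dorsal_cortex": [20, 21, 22, 23],     # hypothetical channel indices
}


def region_activation(channel_values: np.ndarray, region: str) -> float:
    """Mean activation across a region, given one value per channel
    (e.g., a block-averaged HbO level for each of the 35 channels)."""
    return float(np.mean(channel_values[REGION_CHANNELS[region]]))
```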
4. DISCUSSION AND CONCLUSIONS
Experiment I (n=43) confirmed a variety of benefits of the proposed sensory manipulation method in teleoperation tasks with delays up to 1s. It generally confirmed that providing haptic cues along with the initiated action can significantly reduce time on task, regardless of the visual delay. It was also found that participating subjects tended to perceive a smaller visual delay when real-time haptic cues were provided. There were additional benefits related to reduced cognitive load, improved self-confidence, reduced frustration, and more desirable neurofunctional activity. These findings suggest that the anchoring method, i.e., providing real-time haptic feedback, has multiple performance and functional benefits. However, many of these performance and functional benefits were not observed when we prolonged the delays up to 5s in Experiment II (n=51). One benefit that may still hold in Experiment II is that a significant portion of subjects still reported perceived visual delays shorter than the actual visual delays. This suggests that there may be a cutoff delay below which our proposed method remains effective.
As a result, we have started an investigation to identify this cutoff point. First, we examined multiple combinations of visual and haptic delays of different amounts, such as a 250ms haptic delay plus a 1250ms visual delay, labeled "async_th_250_tv_1250" in our analysis results. In other words, for each label on the X axis, the first value refers to the haptic delay and the second value refers to the visual delay. We then ran a pairwise analysis between the combinations of haptic and visual delays in terms of performance (such as positioning accuracy) to identify the combination at which a difference begins to emerge. As mentioned earlier, under shorter delays we did not observe any difference in positioning accuracy. However, when the visual delay was increased from 3750ms to 5000ms while the haptic delay was increased from 0ms to 250ms, we started to see a significant p-value (<0.05) and found that positioning accuracy began to drop. In other words, the cutoff for our method to keep working may be somewhere around a haptic delay of <250ms and a visual delay of <5000ms. Further investigation is needed.
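The pairwise search for the cutoff can be sketched as below, assuming per-subject positioning accuracy grouped by labels such as "async_th_250_tv_1250"; Welch's t-test is used here only as an illustrative choice and is not necessarily the test applied in the study.

```python
# Hedged sketch of the pairwise comparison across haptic/visual delay
# combinations. Labels like "async_th_250_tv_1250" encode haptic delay first,
# visual delay second. Welch's t-test is an illustrative choice of test.
from itertools import combinations
from scipy import stats


def pairwise_accuracy_tests(accuracy_by_combo: dict, alpha: float = 0.05):
    """accuracy_by_combo maps a delay-combination label to per-subject
    positioning-accuracy values. Returns pairs that differ at the given
    (uncorrected) alpha level."""
    significant = []
    for a, b in combinations(accuracy_by_combo, 2):
        _, p_value = stats.ttest_ind(accuracy_by_combo[a],
                                     accuracy_by_combo[b],
                                     equal_var=False)  # Welch's t-test
        if p_value < alpha:
            significant.append((a, b, p_value))
    return significant


# Usage (placeholder structure, not real data):
# flagged = pairwise_accuracy_tests({
#     "async_th_0_tv_3750": [...],     # per-subject accuracy values
#     "async_th_250_tv_5000": [...],
# })
```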
Targeting the time-delay issue in robot teleoperation, this research proposes a third way in addition to automation design and training: induced human adaptation. Inspired by the motor learning and rehabilitation literature, this research hypothesizes that modified sensory stimulation (in timing, frequency, modality, and magnitude), paired with motor actions, helps alleviate the subjective feeling of time delays and expedites human functional adaptation to time-delayed teleoperation, without the need for excessive training or sophisticatedly designed automation/robotic systems.