Task Progress:
NOTE: For full citation information on the published papers listed below, please see the Cumulative Bibliography (Ed., 5/22/23).
Project status
- Data analysis from the first controlled study has been completed.
- Data analysis from the second controlled study has been completed.
- A Human Factors and Ergonomics Society (HFES) conference paper on developing a measure of trust based on conversation has been accepted for publication (Li et al., 2022).
- An HFES conference paper describing a cognitive simulation model of interdependent agents has been accepted for publication (Li & Lee, 2022).
- A Human Factors journal paper on developing a measure of trust based on conversations has been accepted for publication (Li, Erickson, et al., 2023).
- A paper on modeling trust dynamics has been submitted to the International Journal of Human-Computer Interaction and has been provisionally accepted for publication pending minor revisions (Li, Amudha, et al., 2023).
- Data collection from the NASA Human Exploration Research Analog (HERA) testbed has continued.
- Preliminary analysis of the HERA data has begun.
The following summaries describe three specific research accomplishments and the associated papers.
Conversational measures of trust
We have analyzed the data from a controlled experiment and created a machine learning model that estimates trust in an agent from the lexical and acoustic features of conversations with that agent. The objective of this study was to estimate trust from conversations using both lexical and acoustic data. As NASA moves toward long-duration space exploration operations, the increasing need for cooperation between humans and virtual agents requires real-time trust estimation by virtual agents. Measuring trust through conversation is a novel and largely unexplored approach.
A 2 (reliability) × 2 (cycles) × 3 (events) within-subject study on habitat system maintenance was designed to elicit various levels of trust in a conversational agent. Participants had trust-related conversations with the conversational agent at the end of each decision-making task. To estimate trust, subjective trust ratings were predicted using machine learning models trained on three types of conversational features (i.e., lexical, acoustic, and combined). After training, the models were interpreted using variable importance and partial dependence plots. Results showed that a random forest algorithm trained on the combined lexical and acoustic features was the highest-performing algorithm for predicting trust in the conversational agent (adjusted R^2 = 0.71). The most important predictor variables were a combination of lexical and acoustic cues: average sentiment accounting for valence shifters, the means of the formants and Mel-frequency cepstral coefficients (MFCCs), and the standard deviation of the fundamental frequency. Precise trust estimation from conversation therefore requires both lexical and acoustic cues. We further identified conversational features as mediators between an exposure (i.e., reliability) and a response variable (i.e., trust). Following standard mediation analysis criteria, we identified partial mediation of the effect of reliability on trust via conversational features; a Sobel test of the indirect effect was significant, z = -5.86, p < .001. This suggests that, as an underlying mechanism, reliability influences how people communicate, which in turn influences people's trust. The proportion of the effect of reliability on trust that passes through the mediator is 0.17. These results show the possibility of using conversational data to measure trust, and potentially other dynamic mental states, unobtrusively and dynamically. These results have been accepted for publication in the journal Human Factors under the title It's Not Only What You Say, But Also How You Say It: Machine Learning Approach to Estimate Trust from Conversation (Li, Erickson, et al., 2023).
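As a rough, hypothetical illustration of this modeling approach (not the study's actual pipeline: the feature names, data file, and train/test split below are assumptions), a random forest regressor can be trained on combined lexical and acoustic features and then inspected via variable importance:

```python
# Hypothetical sketch, not the study's actual pipeline: feature names,
# the CSV file, and the train/test split are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Combined lexical + acoustic conversational features (assumed columns)
feature_cols = [
    "sentiment_valence_shifted",  # average sentiment with valence shifters
    "formant_mean",               # mean of formants
    "mfcc_mean",                  # mean of MFCCs
    "f0_sd",                      # SD of fundamental frequency
]
df = pd.read_csv("conversation_features.csv")  # hypothetical data file
X, y = df[feature_cols], df["trust_rating"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Adjusted R^2 on held-out conversations
n, p = X_test.shape
r2 = model.score(X_test, y_test)
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"adjusted R^2 = {r2_adj:.2f}")

# Variable importance for model interpretation
imp = permutation_importance(model, X_test, y_test, random_state=0)
for name, score in sorted(zip(feature_cols, imp.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name}: {score:.3f}")
```

The Sobel test can be sketched in the same spirit; the coefficients below are illustrative placeholders, not the fitted values from the study:

```python
# Hypothetical sketch of the Sobel test for the indirect effect
# reliability -> conversational features -> trust. The coefficients a, b
# and their standard errors would come from the two mediation regressions;
# the numbers below are illustrative placeholders, not fitted values.
import math
from scipy.stats import norm

def sobel_z(a, se_a, b, se_b):
    """Sobel z statistic for the indirect effect a*b."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

z = sobel_z(a=-0.42, se_a=0.05, b=0.31, se_b=0.04)  # placeholder values
p = 2 * norm.sf(abs(z))  # two-sided p-value
print(f"z = {z:.2f}, p = {p:.3g}")
```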
Modeling trust dynamics in conversations
Prior research has used both qualitative and quantitative approaches to identify and model trust in conversational data. Qualitative analysis, such as grounded theory, provides a rigorous and systematic approach to identifying situated meaning and systematic patterns in the data. However, compared to a machine-aided approach, manual coding is laborious, limited to small volumes of data, and subject to the coders' domain knowledge. For quantitative analysis, such as text analysis, the dominant approach treats conversations as a bag of words, which assumes words are independent units. This approach ignores the meaningful context and patterns in the conversation. In the first research aim, we adopted a machine learning approach that combines lexical and acoustic features to predict trust in the conversational agent; however, this approach operates at the feature level and ignores the rich context and deep meaning of the conversation. In other words, the connections between the features, and the meaning associated with those features, are situated within a context that might benefit from qualitative analysis. Moreover, the sequence of the conversation is lost when it is processed with a bag-of-words approach. Thus, to capture trust dynamics, the objective of this study is to model two aspects: (1) trust dimensions: the connection to the theoretical foundations of trust, with a particular focus on cognitive processes in conversations, rather than on feature-level or bag-of-words representations; and (2) trust dynamics: the temporal evolution of trust throughout the interactions, rather than an aggregated measure or a single snapshot of trust.
We modeled dynamic trust evolution in conversation using a novel method, trajectory epistemic network analysis (T-ENA). T-ENA captures the multidimensional nature of trust (i.e., analytic and affective), and the trajectory analysis segments the conversations to capture temporal changes in trust over time. Twenty-four participants performed a habitat maintenance task assisted by a virtual agent and verbalized their experiences and feelings after each task. T-ENA showed that agent reliability significantly affected people's conversations along the analytic dimension of trust, t(38.88) = 15.18, p < .001, Cohen's d = 144.72, for example through discussion of the agent's errors. The trajectory analysis showed that trust dynamics manifested through conversation topic diversity and flow. These results showed that trust dimensions and dynamics in conversation should be considered interdependently, and they suggest that an adaptive conversational strategy should be considered to manage trust in human-agent teaming (HAT). These results have been provisionally accepted for publication in the International Journal of Human-Computer Interaction: Modeling Trust Dimensions and Dynamics in Human-Agent Conversation: A Trajectory Epistemic Network Analysis Approach (Li, Amudha, et al., 2023).
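The reliability comparison above is a Welch-style t-test (hence the fractional degrees of freedom) on per-participant ENA dimension scores. A minimal sketch, assuming simulated dimension scores rather than the study's actual network projections:

```python
# Hypothetical sketch: Welch t-test (unequal variances, hence fractional
# degrees of freedom) comparing per-participant ENA dimension scores on the
# analytic trust dimension between reliability conditions. The scores are
# simulated stand-ins, not the study's actual network projections.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
low_reliability = rng.normal(0.8, 0.1, 24)    # illustrative dimension scores
high_reliability = rng.normal(-0.8, 0.1, 24)  # illustrative dimension scores

t, p = ttest_ind(low_reliability, high_reliability, equal_var=False)

# Cohen's d from the pooled standard deviation
pooled_sd = np.sqrt((low_reliability.var(ddof=1)
                     + high_reliability.var(ddof=1)) / 2)
d = (low_reliability.mean() - high_reliability.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3g}, d = {d:.2f}")
```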
A computational model of interdependent agents
We also developed a computational cognitive model of interdependent agents, where one agent is a person and the other is a conversational agent. Conversational agents are likely to represent automation with more authority and autonomy than simple automation. Greater authority may lead the agents' goals to diverge from those of the person. Such misaligned goals can be amplified by the situation and by strategic interactions, which can further affect the teaming process and performance. These interrelated factors lack a systematic computational model. To address this gap, we developed a dynamic game-theoretic framework that simulates human-Artificial Intelligence (human-AI) interdependency and integrates a Drift Diffusion Model to capture the goal alignment process.
A 3 (Situation Structure) × 3 (Strategic Behaviors) × 2 (Initial Goal Alignment) simulation study of human-AI teaming was designed. Results showed that teaming with an altruistic agent in a competitive situation leads to the highest team performance. Moreover, the goal alignment process can dissolve an initial goal conflict. Our study provides a first step toward modeling goal alignment and implies a tradeoff between balanced and cooperative team configurations that can guide human-AI teaming design. These results showed how an AI teammate's strategic behavior interacts with situational factors to influence outcomes. These results have been accepted for publication in the HFES conference proceedings: Modeling Goal Alignment in Human-AI Teaming: A Dynamic Game Theory (Li & Lee, 2022).
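As a rough illustration of the Drift Diffusion component (a minimal sketch under assumed parameters; the function name, parameter values, and interpretation below are assumptions, not the published model's specification), evidence for goal alignment can be accumulated until it reaches an alignment or divergence boundary:

```python
# Hypothetical sketch of a drift diffusion process for goal alignment.
# Parameter names and values are illustrative assumptions, not the
# published model's specification.
import numpy as np

def simulate_goal_alignment(drift=0.2, noise=0.3, threshold=1.0,
                            dt=0.01, max_steps=100_000, seed=0):
    """Accumulate noisy evidence toward goal alignment (+threshold)
    or divergence (-threshold); returns (aligned, steps_taken)."""
    rng = np.random.default_rng(seed)
    x = 0.0  # neutral initial goal alignment
    for step in range(max_steps):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if x >= threshold:
            return True, step
        if x <= -threshold:
            return False, step
    return x > 0, max_steps  # undecided: report current lean

# Under this reading, an altruistic agent could be modeled as a stronger
# positive drift toward the human's goal; a competitive situation structure
# would weaken or reverse the drift.
aligned, steps = simulate_goal_alignment(drift=0.3)
print(f"aligned={aligned} after {steps} steps")
```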