
 

The NASA Task Book

Project Title:  Automation Interface Design Development
Fiscal Year: FY 2010 
Division: Human Research 
Research Discipline/Element:
HRP SHFH:Space Human Factors & Habitability (archival in 2017)
Start Date: 10/02/2006  
End Date: 09/30/2010  
Task Last Updated: 11/01/2010 
Principal Investigator/Affiliation:   Feary, Michael  Ph.D. / NASA Ames Research Center 
Address:  Mail Stop: 262-4 
 
Moffett Field , CA 94035 
Email: Michael.S.Feary@nasa.gov, erin.s.connell@nasa.gov 
Phone: 650.604.0203  
Congressional District: 18 
Web:  
Organization Type: NASA CENTER 
Organization Name: NASA Ames Research Center 
Joint Agency:  
Comments:  
Co-Investigator(s)
Affiliation: 
Green, Collin  NASA Ames Research Center 
Sherry, Lance  San Jose State University Foundation 
Billman, Dorrit  NASA Ames & San Jose State University 
Project Information: Grant/Contract No. Directed Research 
Responsible Center: NASA JSC 
Grant Monitor: Woolford, Barbara  
Center Contact: 218-483-3701 
barbara.j.woolford@nasa.gov 
Unique ID: 7393 
Solicitation / Funding Source: Directed Research 
Grant/Contract No.: Directed Research 
Project Type: GROUND 
Flight Program:  
TechPort: Yes 
No. of Post Docs:
No. of PhD Candidates:
No. of Master's Candidates:
No. of Bachelor's Candidates:
No. of PhD Degrees:
No. of Master's Degrees:
No. of Bachelor's Degrees:
Human Research Program Elements: (1) SHFH:Space Human Factors & Habitability (archival in 2017)
Human Research Program Risks: (1) HSIA:Risk of Adverse Outcomes Due to Inadequate Human Systems Integration Architecture
Human Research Program Gaps: (1) HSIA-101:We need to identify the Human Systems Integration (HSI) – relevant crew health and performance outcomes, measures, and metrics, needed to characterize and mitigate risk, for future exploration missions.
Task Description: The addition of automation has greatly extended humans' capability to accomplish tasks, including difficult, complex, and safety-critical tasks. The majority of Human-Automation Interaction (HAI) results in more efficient and safe operations; however, certain unexpected automation behaviors, or "automation surprises," can be frustrating and, in certain safety-critical operations (e.g., transportation, spaceflight, medicine), may result in injuries or the loss of life (Mellor, 1994; Leveson, 1995; FAA, 1995; BASI, 1998; Shaylor, 2000; Sheridan, 2002).

The next generation of space exploration systems will place an increased reliance on automation. Traditional techniques for the design and evaluation of automation interfaces rely on subject-matter experts, human-in-the-loop testing (i.e., usability testing), guidelines, heuristics, and rules-of-thumb. Given the volume and timeline for the development of new automation required for space exploration, the time and cost required to perform these evaluations by human factors experts will be prohibitive. Further, guidelines, heuristics, and rules-of-thumb have previously yielded sub-optimal designs, because they focus on the interface rather than on the process of interaction between the human and the automation interface. State-of-the-art cognitive science and Human-Automation Interaction (HAI) approaches may provide the type of analysis needed, but they are not currently usable by designers without extensive cognitive science expertise.

The automation design community needs methods that are usable by designers early in the design process to meet the demands for the development and testing of automation required for space exploration. The objective of this research project is to develop a set of methodologies and tools to support the design and evaluation of efficient and robust interaction with automation. The research plan is to integrate existing foundational research results into HAI methods and tools usable by designers. This work is divided into three areas, with the ultimate goal of developing a suite of tools to support each area. It is important to note that the intent of the project is to develop and evaluate the tools in actual design processes, and the level and type of support and evaluation will depend upon the scope and maturity of each design domain. The three areas are organized around an abstraction of the primary foci of the design process.

Analyze: The first set of methods and tools is intended to help designers identify, describe, and evaluate the different parts of the job. Depending on the stage of the design process, these methods are referred to as work domain analyses, task decompositions, task analyses, knowledge elicitation, and, in the later stages, validation.

Formulate: The second set of methods and tools is intended to bridge the gap from the analysis of the work domain to the design of the interface. Specifically, once the structure of the work domain and tasks has been determined, methods and tools are needed to link the task structures to corresponding interface structures that can later be refined and evaluated. We will be drawing upon research from a number of different communities for this effort, including ecological interface design and design patterns.
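As a concrete illustration of this Formulate step, the following minimal sketch (hypothetical; it is not the project's tooling, and all task and interface names are invented) shows one way task structures could be recorded alongside the interface structures intended to support them, so that unsupported tasks can be flagged automatically.

```python
# Hypothetical sketch: recording a task structure and an interface structure so
# that gaps in the task-to-interface mapping can be checked automatically.
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    name: str                                       # e.g., "Revise maneuver plan"
    subtasks: list = field(default_factory=list)    # child TaskNodes

@dataclass
class InterfaceElement:
    name: str                                       # e.g., "Timeline editing pane"
    supports: list = field(default_factory=list)    # names of TaskNodes it serves

def unmapped_tasks(tasks, elements):
    """Return names of task nodes that no interface element claims to support."""
    supported = {name for e in elements for name in e.supports}
    missing = []
    def walk(node):
        if node.name not in supported:
            missing.append(node.name)
        for sub in node.subtasks:
            walk(sub)
    for t in tasks:
        walk(t)
    return missing

plan = TaskNode("Revise plan", [TaskNode("Check constraints"), TaskNode("Shift activity")])
ui = [InterfaceElement("Timeline pane", supports=["Revise plan", "Shift activity"])]
print(unmapped_tasks([plan], ui))   # -> ['Check constraints']
```

In a sketch like this, the gap check is the point: once both structures are explicit, mismatches between task structure and interface structure become inspectable early, rather than being discovered late in usability testing.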

Build: This set of methods and tools is intended to enable rapid development and evaluation of automation, including the user interface and the underlying automation behavior. The specific focus of this effort is to develop methods and tools that are usable by designers who are expert in the design domain but are not necessarily formally trained in computer programming or human performance analysis. We will primarily be using research in formal methods to support the Build effort.
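As a similarly hedged illustration of the Build idea (hypothetical; not the project's toolset, with invented mode and event names), automation behavior can be expressed as a small declarative mode table that a domain expert could author without general-purpose programming, and then checked for completeness in the style of formal-methods tools.

```python
# Hypothetical sketch: automation behavior as a declarative mode table, plus a
# simple completeness check over (mode, event) pairs.
MODES = {"IDLE", "ARMED", "EXECUTING"}
TRANSITIONS = {                      # (current mode, event) -> next mode
    ("IDLE", "arm"): "ARMED",
    ("ARMED", "start"): "EXECUTING",
    ("ARMED", "cancel"): "IDLE",
    ("EXECUTING", "complete"): "IDLE",
}
EVENTS = {event for (_, event) in TRANSITIONS}

def step(mode, event):
    """Return the next mode; unhandled events leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)

def unhandled_pairs():
    """List (mode, event) pairs with no explicit transition -- the kind of gap
    that can produce an 'automation surprise' if it is unintentional."""
    return sorted((m, e) for m in MODES for e in EVENTS if (m, e) not in TRANSITIONS)

print(step("IDLE", "arm"))    # ARMED
print(unhandled_pairs())      # pairs the designer should confirm are intentional
```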

The outcomes of this research will be methods and tools for automating the design and evaluation of automation interfaces. These tools will provide the means to: (i) meet the demand for analysis within the space exploration development timeline, (ii) enable increased iterative human factors testing of automation prototypes early in the design process, (iii) reduce the cost of development through design and testing of proposed systems early in the development life-cycle, (iv) reduce the cost of training and the maintenance of proficiency, and (v) improve safety (and reduce the costs of inefficient and unsafe operations) through significant reduction in failure-to-complete-task metrics.

The 2010 AITD (Automation Interface Design Development) efforts can be summarized in terms of the three efforts. To continue the Analyze method and tool development, the team will identify and analyze a new application domain. To develop and evaluate the Formulate methods and tools, the team will conduct a study evaluating performance with a modified Scheduling and Planning Interface for Exploration (SPIFe) interface that will be made comparable to the existing interface (e.g., the graphical elements will be made similar for both interfaces), in order to examine how well the task structure maps to the interface structure. If funding allows, the group will also help develop a new version of the SPIFe tool that incorporates the functionality needed for the Attitude Determination and Control Officer (ADCO) planning tasks.

Research Impact/Earth Benefits: Methods and tools for improving the design of automation that were developed for NASA can also be applied to the design of interaction between humans and automation/computers in other commercial or government applications, particularly in safety-critical work domains.

Task Progress & Bibliography Information FY2010 
Task Progress: Our research makes its contributions at two levels. At one level, we addressed the problems of interaction between humans and computers/automation in a particular application domain. The domain of application was the planning work done by Attitude Determination and Control Officers (ADCO), part of Mission Control for the International Space Station (ISS). At a second, higher level, we abstracted from this case to suggest a more general method for needs analysis and this was the primary motivation for our research. We briefly summarize work on the ADCO domain, before describing our more general contribution, the methods and tools that emerged from working on this case.

A core aspect of our research was a detailed study of the planning work done by ADCO and identification of the work needs that should be supported by software. As we carried out the analysis of the ADCO planning domain, we changed our characterization of the problem: we realized that rather than focusing on the current tasks and practices, as in Task Analysis, we should try to directly identify the operators' needs that future software should meet. One result of our work was direct support for ADCO: we provided an analysis of ADCO needs, guided development of an illustrative prototype designed to better fit these needs, and conducted an experimental study of this prototype, comparing performance to performance with the legacy system. These products are of value to ADCO operators seeking the design of software that is more effective than their current legacy systems. The second, more general result was the development of methods and tools for carrying out such analyses. We used the ADCO domain to develop Structure Identification, our approach to needs analysis, which we designed to be particularly appropriate for rapid identification of needs in safety-critical, technical information work. Needs analysis based on Structure Identification finds the high-level structure in the work domain and uses it to design the structure of the interaction between the human and the computer or automation. We rely on a combination of eliciting function information from expert users, identifying candidate structure from documents and functional descriptions, and vetting the developing characterization with experts.

Structure Identification contrasts with needs analysis based on Task Analysis: task analysis identifies current tasks, yet a change in the work applications naturally brings with it a change in the tasks, so matching the old tasks is not a reliable design guide. Task Analysis can be a helpful approach to identifying structure, but we prioritize identifying the domain structure rather than the activities. Our approach is related to both Work Domain Analysis (WDA) and Contextual Inquiry (CI), in that these also seek to identify stable aspects of work in order to guide design. WDA focuses on identifying constraints, particularly constraints on how a physical system, such as a chemical plant, operates; it is directly applicable to control tasks, but much less applicable to work consisting of finding, transforming, building, and distributing information products. CI methods focus on observing users, typically carrying out office work; this approach is less adequate in highly technical domains where critical aspects of the work cannot be understood from watching users.

Our goal is to make needs analysis more efficient and effective. To this end, the methods that we developed focus on gathering important information quickly. We consolidated what we learned, to make the methods easier to reuse and apply to another case, by building simple tools as we carried out the needs analysis for the ADCO planning domain and developed the Structure Identification (SI) approach. These include templates for gathering high-level function information from experts, templates for presenting the identified structure to experts for verification, and templates for comparing the contents of multiple product documents.
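For illustration only, the sketch below approximates the spirit of the third kind of template, the cross-document comparison. The real templates were working documents rather than code, and the field names, function names, and document names here are invented.

```python
# Hypothetical sketch: a function-elicitation record layout and a coverage matrix
# showing which product documents mention which high-level functions.
import csv, io

ELICITATION_FIELDS = ["function", "inputs", "outputs", "triggering_conditions", "expert_notes"]

doc_contents = {   # which high-level functions each document mentions (invented data)
    "console_handbook": {"build UAF", "verify thruster config", "coordinate with planners"},
    "flight_rules":     {"verify thruster config", "check attitude constraints"},
    "training_guide":   {"build UAF", "check attitude constraints"},
}

all_functions = sorted(set().union(*doc_contents.values()))
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["function"] + list(doc_contents))
for fn in all_functions:
    writer.writerow([fn] + ["x" if fn in fns else "" for fns in doc_contents.values()])
print(out.getvalue())   # gaps in the matrix are candidates for expert vetting
```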

An additional contribution of the research was a preliminary assessment of Structure Identification. Broadly, we investigated whether Structure Identification, followed by Structure Matching from the domain structure to an application structure, contributes to better design of the application. We conducted an illustrative study using the ADCO planning domain. We used the domain structure we had identified to guide design of an experimental prototype for ADCO planning. We then conducted an experiment comparing the experimental prototype, which closely matched the domain structure, with the legacy system, which matched it much more poorly. We included a variety of measures, from speed of performance to conceptual understanding and retention over periods of disuse. We predicted differences in performance on a variety of planning tasks that are detailed analogs of simple ADCO planning tasks: overall faster performance by users of the new, well-matched system compared to users of the legacy, poorly matching system, and a particular performance advantage for the new system at points where the legacy system most mismatched the domain structure. We found that performance times were cut in half for the new prototype versus the legacy system on some tasks, accompanied by much lower error rates as well. Further, we also found the predicted pattern of poor performance at the legacy system's points of mismatch.

We ran through the whole design cycle, from needs analysis through evaluation, in the ADCO domain. This process provided an illustrative case showing the feasibility of our approach. The results from our experiment suggest that capturing and matching domain structure may be an efficient, productive way to guide design of interaction between humans and computers/automation for technical information work.

Bibliography: Description: (Last Updated: 07/22/2015) 

 
Articles in Peer-reviewed Journals Kaiser MK, Allen CS, Barshi I, Billman D, Holden KL. "Human Factors Research for Space Exploration: Measurement, modeling, and mitigation." Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2010 Sep;54(1):136-9. Human Factors and Ergonomics Society 54th Annual Meeting, San Francisco, CA, September 28-October 1, 2010. http://dx.doi.org/10.1177/154193121005400130 , Sep-2010
Articles in Peer-reviewed Journals Feary M, McCloy T, Wickens C, Kaber D, Pritchett A, Sherry L. "Bridging the gap between human–automation interface analysis and flight deck design guidance." Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2010 Sep;54(1):36-40. Human Factors and Ergonomics Society 54th Annual Meeting, San Francisco, CA, September 28-October 1, 2010. http://dx.doi.org/10.1177/154193121005400109 , Sep-2010
Awards Medina M, Sherry L, Feary M. "ICRAT Best Paper award for 'Automation for Task Analysis of Next Generation Air Traffic Management Systems,' International Conference on Research in Air Transportation, Fairfax, VA, June 2008." Jun-2008
Papers from Meeting Proceedings Billman D, Feary M, Schreckenghost D, Sherry L. "Needs Analysis: The Case of Flexible Constraints and Mutable Boundaries." Presented at the 2009 CHI conference, Atlanta, GA, April 4-9, 2009. Proceedings of the 27th International Conference on Human Factors in Computing Systems. New York: ACM Press, 2010. p. 4597-4612. , Apr-2010

Papers from Meeting Proceedings Feary M. "A Toolset for Supporting Iterative Human–Automation Interaction in Design." MODSIM '09, Virginia Beach, VA, October 14-16, 2009. Proceedings of MODSIM '09, Virginia Beach, VA, 2009. p. 169-174. , Oct-2009

Papers from Meeting Proceedings Medina M, Sherry L, Feary M. "Automation for Task Analysis of Next Generation Air Traffic Management Systems." International Conference on Research in Air Transportation, Fairfax, VA, June 2008. Proceedings of the International Conference on Research in Air Transportation, Fairfax, VA, 2008. , Jun-2008

Papers from Meeting Proceedings Sherry L, Medina M, Feary M, Otiker J. "Automated Tool for Task Analysis of NextGen Automation." Eighth Integrated Communications, Navigation and Surveillance (ICNS) Conference, Bethesda, MD, May 5-7, 2008. Proceedings of the Eighth Integrated Communications, Navigation and Surveillance (ICNS) Conference, 2008. p. 1-9, http://dx.doi.org/10.1109/ICNSURV.2008.4559185 , Jul-2008

Project Title:  Automation Interface Design Development
Fiscal Year: FY 2009 
Division: Human Research 
Research Discipline/Element:
HRP SHFH:Space Human Factors & Habitability (archival in 2017)
Start Date: 10/02/2006  
End Date: 09/30/2010  
Task Last Updated: 10/21/2009 
Principal Investigator/Affiliation:   Feary, Michael  Ph.D. / NASA Ames Research Center 
Address:  Mail Stop: 262-4 
 
Moffett Field , CA 94035 
Email: Michael.S.Feary@nasa.gov, erin.s.connell@nasa.gov 
Phone: 650.604.0203  
Congressional District: 18 
Web:  
Organization Type: NASA CENTER 
Organization Name: NASA Ames Research Center 
Joint Agency:  
Comments:  
Co-Investigator(s)
Affiliation: 
Green, Collin  NASA Ames Research Center 
Sherry, Lance  San Jose State University Foundation 
Billman, Dorrit  NASA Ames & San Jose State University 
Project Information: 
Responsible Center: NASA JSC 
Grant Monitor: Woolford, Barbara  
Center Contact: 218-483-3701 
barbara.j.woolford@nasa.gov 
Unique ID: 7393 
Solicitation / Funding Source: Directed Research 
Project Type: GROUND 
Flight Program:  
TechPort: Yes 
No. of Post Docs:
No. of PhD Candidates:
No. of Master's Candidates:
No. of Bachelor's Candidates:
No. of PhD Degrees:
No. of Master's Degrees:
No. of Bachelor's Degrees:
Human Research Program Elements: (1) SHFH:Space Human Factors & Habitability (archival in 2017)
Human Research Program Risks: (1) HSIA:Risk of Adverse Outcomes Due to Inadequate Human Systems Integration Architecture
Human Research Program Gaps: (1) HSIA-101:We need to identify the Human Systems Integration (HSI) – relevant crew health and performance outcomes, measures, and metrics, needed to characterize and mitigate risk, for future exploration missions.
Task Description: The addition of automation has greatly extended humans' capability to accomplish tasks, including difficult, complex, and safety-critical tasks. The majority of Human-Automation Interaction (HAI) results in more efficient and safe operations; however, certain unexpected automation behaviors, or "automation surprises," can be frustrating and, in certain safety-critical operations (e.g., transportation, spaceflight, medicine), may result in injuries or the loss of life (Mellor, 1994; Leveson, 1995; FAA, 1995; BASI, 1998; Shaylor, 2000; Sheridan, 2002).

The next generation of space exploration systems will place an increased reliance on automation. Traditional techniques for the design and evaluation of automation interfaces rely on subject-matter experts, human-in-the-loop testing (i.e., usability testing), guidelines, heuristics, and rules-of-thumb. Given the volume and timeline for the development of new automation required for space exploration, the time and cost required to perform these evaluations by human factors experts will be prohibitive. Further, guidelines, heuristics, and rules-of-thumb have previously yielded sub-optimal designs, because they focus on the interface rather than on the process of interaction between the human and the automation interface. State-of-the-art cognitive science and Human-Automation Interaction (HAI) approaches may provide the type of analysis needed, but they are not currently usable by designers without extensive cognitive science expertise.

The automation design community needs methods that are usable by designers early in the design process to meet the demands for the development and testing of automation required for space exploration. The objective of this research project is to develop a set of methodologies and tools to support the design and evaluation of efficient and robust interaction with automation. The research plan is to integrate existing foundational research results into HAI methods and tools usable by designers. This work is divided into three areas, with the ultimate goal of developing a suite of tools to support each area. It is important to note that the intent of the project is to develop and evaluate the tools in actual design processes, and the level and type of support and evaluation will depend upon the scope and maturity of each design domain. The three areas are organized around an abstraction of the primary foci of the design process.

Analyze: The first set of methods and tools is intended to help designers identify, describe, and evaluate the different parts of the job. Depending on the stage of the design process, these methods are referred to as work domain analyses, task decompositions, task analyses, knowledge elicitation, and, in the later stages, validation.

Formulate: The second set of methods and tools is intended to bridge the gap from the analysis of the work domain to the design of the interface. Specifically, once the structure of the work domain and tasks has been determined, methods and tools are needed to link the task structures to corresponding interface structures that can later be refined and evaluated. We will be drawing upon research from a number of different communities for this effort, including ecological interface design and design patterns.

Build: This set of methods and tools is intended to enable rapid development and evaluation of automation, including the user interface and the underlying automation behavior. The specific focus of this effort is to develop methods and tools that are usable by designers who are expert in the design domain but are not necessarily formally trained in computer programming or human performance analysis. We will primarily be using research in formal methods to support the Build effort.

The outcomes of this research will be methods and tools for automating the design and evaluation of automation interfaces. These tools will provide the means to: (i) meet the demand for analysis within the space exploration development timeline, (ii) enable increased iterative human factors testing of automation prototypes early in the design process, (iii) reduce the cost of development through design and testing of proposed systems early in the development life-cycle, (iv) reduce the cost of training and the maintenance of proficiency, and (v) improve safety (and reduce the costs of inefficient and unsafe operations) through significant reduction in failure-to-complete-task metrics.

The 2010 AITD efforts can be summarized in terms of the three efforts. To continue the Analyze method and tool development, the team will identify and analyze a new application domain. To develop and evaluate the Formulate methods and tools, the team will conduct a study evaluating performance with a modified Scheduling and Planning Interface for Exploration (SPIFe) interface that will be made comparable to the existing interface (e.g., the graphical elements will be made similar for both interfaces), in order to examine how well the task structure maps to the interface structure. If funding allows, the group will also help develop a new version of the SPIFe tool that incorporates the functionality needed for the Attitude Determination and Control Officer (ADCO) planning tasks.

Research Impact/Earth Benefits:

Task Progress & Bibliography Information FY2009 
Task Progress: In 2009, the AITD team focused on the Analyze effort. Specifically, the group worked with the International Space Station (ISS) Attitude Control group to support development of new flight planning applications. The initial request from the ADCO team was to help with the redesign of the Unified ACR File (UAF) Builder application. The task decomposition of the ADCO activities and tools revealed that the foundation of the UAF Builder tool was not well matched to the tasks it was intended to support. Specifically, the UAF Builder was built upon a text editor rather than a scheduling application. Text editors are well suited to the creation of documents with few constraints; however, the ADCO planning activities are relatively well constrained, and the majority of time is spent revising and evaluating plans rather than creating them. Based on this analysis, the AITD team identified an existing tool that had a scheduler as its basis, referred to as the Scheduling and Planning Interface for Exploration (SPIFe).
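To illustrate the mismatch described above (with invented data; this is not ADCO software), the sketch below shows what a scheduler-style foundation makes possible: when a plan is a list of timed activities rather than free text, constraints such as non-overlap can be checked automatically as the plan is revised.

```python
# Hypothetical sketch: a plan as structured, timed activities, with one simple
# constraint check of the kind a scheduling application can perform and a text
# editor cannot.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Activity:
    name: str
    start: float      # hours from plan epoch (illustrative units)
    duration: float   # hours

def overlapping_pairs(activities):
    """Return pairs of activities whose time windows overlap."""
    return [(a.name, b.name)
            for a, b in combinations(activities, 2)
            if a.start < b.start + b.duration and b.start < a.start + a.duration]

plan = [Activity("Desat burn", 2.0, 1.0), Activity("Attitude maneuver", 2.5, 0.5)]
print(overlapping_pairs(plan))   # -> [('Desat burn', 'Attitude maneuver')]
```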

Bibliography: Description: (Last Updated: 07/22/2015) 

 
 None in FY 2009
Project Title:  Automation Interface Design Development
Fiscal Year: FY 2007 
Division: Human Research 
Research Discipline/Element:
HRP SHFH:Space Human Factors & Habitability (archival in 2017)
Start Date: 10/02/2006  
End Date: 09/30/2010  
Task Last Updated: 04/28/2009 
Principal Investigator/Affiliation:   Feary, Michael  Ph.D. / NASA Ames Research Center 
Address:  Mail Stop: 262-4 
 
Moffett Field , CA 94035 
Email: Michael.S.Feary@nasa.gov, erin.s.connell@nasa.gov 
Phone: 650.604.0203  
Congressional District: 18 
Web:  
Organization Type: NASA CENTER 
Organization Name: NASA Ames Research Center 
Joint Agency:  
Comments:  
Co-Investigator(s)
Affiliation: 
Green, Collin  NASA Ames Research Center 
Sherry, Lance  San Jose State University Foundation 
Project Information: 
Responsible Center: NASA JSC 
Grant Monitor: Woolford, Barbara  
Center Contact: 218-483-3701 
barbara.j.woolford@nasa.gov 
Unique ID: 7393 
Solicitation / Funding Source: Directed Research 
Project Type: GROUND 
Flight Program:  
TechPort: Yes 
No. of Post Docs:  
No. of PhD Candidates:  
No. of Master's Candidates:  
No. of Bachelor's Candidates:  
No. of PhD Degrees:  
No. of Master's Degrees:  
No. of Bachelor's Degrees:  
Human Research Program Elements: (1) SHFH:Space Human Factors & Habitability (archival in 2017)
Human Research Program Risks: (1) HSIA:Risk of Adverse Outcomes Due to Inadequate Human Systems Integration Architecture
Human Research Program Gaps: (1) HSIA-101:We need to identify the Human Systems Integration (HSI) – relevant crew health and performance outcomes, measures, and metrics, needed to characterize and mitigate risk, for future exploration missions.
Task Description: The next generation of space exploration systems will necessarily require increased autonomy of operations, which in turn will require an increased reliance on automation. These changes will place a significantly increased burden on the remotely located human operators, on the training and maintenance of proficiency of those operators, and on the design and testing of the autonomously operated automation.

Traditional techniques for the design and evaluation of automation interfaces rely on subject-matter experts, human-in-the-loop testing (i.e., usability testing), guidelines, heuristics, and rules-of-thumb. Given the volume and timeline for the development of new automation required for space exploration, the time and cost required to perform manual design reviews by human factors experts will be prohibitive. Further, guidelines, heuristics, and rules-of-thumb have previously yielded sub-optimal designs, because they focus on the interface rather than on the process of interaction between the human and the automation interface. State-of-the-art cognitive science and Human-Automation Interaction (HAI) approaches may provide the type of analysis needed, but they are not currently usable by designers without extensive cognitive science expertise. The automation design community needs methods that are usable by designers early in the design process to meet the demands for the development and testing of automation required for space exploration. The objective of this research project is to develop a set of methodologies and tools to automate the design and evaluation of the human-computer interaction process. The research plan is to integrate existing foundational research results into HAI methods and tools usable by designers. The research is divided into three efforts, with the ultimate goal of developing a suite of tools.

• The first effort is to develop a method that enables rapid task decomposition and analysis.

• The second effort is to develop a tool that enables rapid prototyping of both the user-interface and the underlying automation behavior.

• The third effort is to develop an automated means of human performance analysis based on the results of the task decomposition and a description of the automation user-interface.
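As a hedged illustration of the third effort (the operation names and unit costs below are invented, loosely in the spirit of keystroke-level modeling), a task decomposition combined with a description of the user interface can yield a coarse, automated performance estimate without a human-in-the-loop study.

```python
# Hypothetical sketch: deriving a rough time estimate per task from a task
# decomposition and a description of the UI operations each task requires.
UI_ACTIONS = {                 # UI operation -> assumed unit cost in seconds
    "select_field": 1.1,
    "type_value":   2.5,
    "confirm":      1.2,
}

TASK_STEPS = {                 # task -> sequence of UI operations it requires
    "enter maneuver start time": ["select_field", "type_value", "confirm"],
    "cancel pending maneuver":   ["select_field", "confirm"],
}

def estimated_time(task):
    """Sum the unit costs of the UI operations a task requires."""
    return sum(UI_ACTIONS[op] for op in TASK_STEPS[task])

for task in TASK_STEPS:
    print(task, round(estimated_time(task), 1), "s")
```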

Besides being usable by domain-expert designers, the suite of tools must be tailored for the Mission Operations Directorate (MOD), the Crew Exploration Vehicle (CEV), and other space exploration environments. The outcomes of this research will be methods and tools for automating the design and evaluation of automation interfaces. These tools will provide the means to:

(i) meet the demand for analysis within the space exploration development timeline,

(ii) enable increased iterative human factors testing of automation prototypes early in the design process,

(iii) reduce the cost of development through design and testing of proposed systems early in the development life-cycle,

(iv) reduce the cost of training and the maintenance of proficiency, and

(v) improve safety (and reduce the costs of inefficient and unsafe operations) through significant reduction in failure-to-complete-task metrics.

Research Impact/Earth Benefits:

Task Progress & Bibliography Information FY2007 
Task Progress: New project for FY2007.

[Ed. note: Added to Task Book in April 2009 when task information was received.]

Bibliography: Description: (Last Updated: 07/22/2015) 

 
 None in FY 2007