Work in Year 3 focused on SOLV data collection and analysis, code development, unit and system integration testing, and project documentation in preparation for delivery on May 31, 2018.
Refinements were made to the Critical Task Volume Database in Year 3. Revision 3 of the database was released on 3/7/2018. Additional volume data were collected in the areas of Exercise, Recreation, Food Preparation, EVA Suit Don/Doff/Stowage & Maintenance, Mission-Specific Onboard Research, and Hatch Ingress/Egress.
SOLV used Analytic Hierarchy Process (AHP) surveys to collect subject matter expert (SME) opinions and judgments, establishing the factor weighting and scoring system that drives the model logic for evaluating layout performance. Year 3 saw the completion of all three phases of data collection and analysis:
• Factor Priority Survey
• Interactions Effect Survey
• Manual Layout Evaluation Survey
Data collection and analysis for the Factor Priority Survey were completed in July 2017; 21 SMEs, including 4 astronauts, participated, generating 39 AHP responses for analysis. Data collection and analysis for the Interactions Effect Survey were completed in August 2017; 15 SMEs, including 2 astronauts, participated, generating 22 AHP responses. Data collection and analysis for the Manual Layout Evaluation Survey were completed in November 2017; 13 SMEs, including 2 astronauts, participated, generating 78 AHP responses.
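For context on how AHP responses are typically turned into weights, the sketch below derives priority weights from a pairwise-comparison matrix using the principal-eigenvector method, with Saaty's consistency ratio as a response-quality check. The comparison matrix here is hypothetical, not actual SOLV survey data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix
    via the principal right eigenvector (Saaty's method)."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                 # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()                       # normalize to sum to 1

def consistency_ratio(pairwise):
    """Saaty consistency ratio; CR < 0.10 is conventionally acceptable.
    Partial random-index table covers n = 3..6 only."""
    n = pairwise.shape[0]
    lam = np.max(np.linalg.eigvals(pairwise).real)
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    return ci / ri

# Hypothetical 3x3 comparison of design factors (illustrative values only)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(A))          # priority weights, sum to 1
print(consistency_ratio(A))    # well below the 0.10 threshold here
```

In practice each SME's matrix would be checked for consistency before its weights are aggregated across respondents.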
Five major SME groups were identified for participation in the AHP surveys:
• Human Factors (SF3/HRP SMEs and Researchers)
• Behavioral Health and Performance (Senior Research Scientists)
• Medical (Flight Surgeons, Researchers)
• Subsystem Integration (Space Architects, ISS Subsystem Leads, Exploration/CCP Integrators)
• Flight Operations (Crew Systems, Astronauts)
Participants from each SME group were instructed to perform pairwise comparisons for one or more performance metrics that are within their area of expertise:
• Human Factors SMEs answered surveys for the Task Performance and Health and Well-Being metrics.
• Behavioral Health and Performance SMEs answered surveys for the Task Performance and Health and Well-Being metrics.
• Medical SMEs answered surveys for the Health and Well-Being and Safety metrics.
• Subsystem Integration SMEs answered surveys for the Vehicle Integration metric.
• Flight Operations SMEs answered surveys for all four metrics: Task Performance, Health and Well-Being, Vehicle Integration, and Safety.
From the Factor Priority Survey, the team identified the top six design factors believed to have the greatest impact on the “goodness” of a layout. From the Interactions Effect Survey, the analysis results informed the Choquet Integral calculations and the SOLV logic for layout evaluation. From the Manual Layout Evaluation Survey, the team performed data calibration and Canonical Correlation Analysis to establish a numerical relationship between the physical data of sample packing layouts and the psychophysical data on design goodness, building a model response surface for future layout performance evaluation.
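The Choquet Integral mentioned above aggregates criterion scores with respect to a fuzzy measure (capacity), which is what lets interaction effects between criteria enter the scoring. A minimal sketch of the discrete Choquet integral follows; the criteria names and capacity values are hypothetical, not SOLV's calibrated measure.

```python
from itertools import combinations

def choquet(scores, mu):
    """Discrete Choquet integral of `scores` (dict: criterion -> value)
    with respect to fuzzy measure `mu` (dict: frozenset -> capacity,
    monotone, mu of the full set = 1)."""
    crits = sorted(scores, key=scores.get)      # ascending by score
    total, prev = 0.0, 0.0
    for i, c in enumerate(crits):
        coalition = frozenset(crits[i:])        # criteria scoring >= current
        total += (scores[c] - prev) * mu[coalition]
        prev = scores[c]
    return total

# Hypothetical 2-criterion capacity: singletons sum to less than 1,
# so imbalanced score profiles are penalized relative to a plain average.
mu = {
    frozenset({"task"}): 0.4,
    frozenset({"safety"}): 0.4,
    frozenset({"task", "safety"}): 1.0,
}
print(choquet({"task": 0.8, "safety": 0.5}, mu))
```

With equal scores the integral reduces to that common value; with unequal scores the interaction term pulls the result away from the weighted mean, which is the behavior a simple additive weighting cannot express.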
In Year 3, the team completed code development for each of the SOLV modules:
• Gradient Cuboid Module – Converts task volume inputs into gradient cuboids with allowable overlap.
• Overlap Packing Module – Generates layouts of the gradient cuboids based on SOLV variables and constraints.
• Evaluation Module – Establishes the model weighting system and the model response surface via Canonical Correlation Analysis (CANCORR), and contains hard-coded implementations of the Data Envelopment Analysis (DEA) and Choquet Integral (CI) functions that establish the model scoring system for layout evaluation.
• Scorecard Module – An assessment report or “scorecard” that provides performance scores and design information for every volume and layout solution generated by SOLV, enabling the user to compare options and choose the best starting point for design.
• Driver Code – Additional code and scripts that integrate the modules, enabling smooth model function from user input to scorecard output.
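To illustrate the CANCORR step named in the Evaluation Module, the sketch below computes canonical correlations between two variable blocks (e.g., physical packing measures versus goodness ratings) using QR whitening and an SVD, a standard formulation. The data are randomly generated stand-ins, not SOLV's survey or layout data.

```python
import numpy as np

def cancorr(X, Y):
    """Canonical correlations between the columns of X and Y
    (rows = samples). Returns correlations and the canonical
    weight matrices for each block (columns = weight vectors)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Whiten each block via QR, then SVD the cross-product
    Qx, Rx = np.linalg.qr(Xc)
    Qy, Ry = np.linalg.qr(Yc)
    U, s, Vt = np.linalg.svd(Qx.T @ Qy)
    A = np.linalg.solve(Rx, U)      # weights for X variables
    B = np.linalg.solve(Ry, Vt.T)   # weights for Y variables
    return s, A, B

# Hypothetical data: Y is a noisy linear function of X, so the
# leading canonical correlation should be close to 1.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))                      # "physical" measures
Y = X @ rng.standard_normal((3, 2)) + 0.05 * rng.standard_normal((50, 2))
s, A, B = cancorr(X, Y)
print(s)
```

The canonical weight vectors are what would let a response surface map new layouts' physical measures onto a predicted goodness score.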
The team also completed verification testing of SOLV in Year 3. Verification of model computations was formally assessed at the module level. Verification testing had two parts:
• Model Verification – The model was verified to have been implemented in accordance with the key driving requirements.
• Code Verification – Model computations at the module level were verified via incremental testing to ensure that mathematical operations do not introduce significant numerical errors.
From the SOLV Key Driving Requirements, the team derived a set of functional test requirements that each SOLV module must satisfy, and identified the test steps needed to verify each requirement. The level of testing was adjusted to the credibility goals defined per NASA-STD-7009A, Standard for Models and Simulations. The test plans, requirements, and results were all vetted through multiple team reviews, and a common documentation format was developed to capture the test results. Verification testing was completed in the 2017-2018 time frame:
• GC Module: 10/31/2017
• Overlap Packing Module: 12/19/2017
• Evaluation Module: 3/2/2018
• Scorecard and Driver Code: 2/23/2018
As part of NASA-STD-7009A compliance, the team also completed a preliminary uncertainty characterization and results robustness analysis. The project identified and tested 54 cases representative of the range of SOLV inputs and generated estimates of the sensitivity of the total task volume and the layout score. These estimates helped describe the conditions under which slight perturbations in the input could affect SOLV results.
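One common way to produce sensitivity estimates of this kind is a one-at-a-time perturbation study: nudge each input by a small relative step and record the relative change in the output. The sketch below shows the idea with a hypothetical stand-in for a layout-score function; neither the function nor the inputs reflect SOLV's actual model.

```python
def sensitivity(f, x0, rel_step=0.01):
    """One-at-a-time sensitivity: elasticity-like index per input,
    i.e., relative change in f per 1% perturbation of each input.
    (Illustrative method, not the SOLV procedure itself.)"""
    base = f(x0)
    out = []
    for i, xi in enumerate(x0):
        xp = list(x0)
        xp[i] = xi * (1 + rel_step)          # perturb one input only
        out.append((f(xp) - base) / (abs(base) * rel_step))
    return out

# Hypothetical stand-in for a layout-score function
def layout_score(x):
    volume, clearance, overlap = x
    return volume * clearance / (1 + overlap)

print(sensitivity(layout_score, [10.0, 2.0, 0.5]))
```

Indices near zero flag inputs whose small perturbations barely move the result (robust), while large-magnitude indices flag inputs that deserve tighter uncertainty characterization.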
The Spacecraft Optimization Layout and Volume (SOLV) project successfully conducted the Final Design Review (FDR) on 3/7/2018. SOLV achieved this project milestone on schedule, meeting all its objectives. A compliance assessment performed by the SOLV team indicated that the prototype model meets the established goals in credibility factors such as verification, validation, data pedigree, uncertainty characterization, and results robustness, as defined by NASA-STD-7009A. The team also provided an outbrief and a model demonstration to HRP/HFBP (Human Factors and Behavioral Performance) and HHP/SF management at the conclusion of the FDR. Tasks completed following the FDR included project document development, analysis wrap-up, and utilization and delivery planning. On May 31, 2018, SOLV plans to deliver its products in fulfillment of the grant requirements: the SOLV model and associated software files, and seven project documents capturing the project’s technical approach, verification and validation plans, test results, software specification, and development plans.