The NASA Task Book

Project Title: A Non-intrusive Ocular Monitoring Framework to Model Ocular Structure and Functional Changes due to Long-term Spaceflight
Fiscal Year: FY 2023 
Division: Human Research 
Research Discipline/Element:
HRP HHC:Human Health Countermeasures
Start Date: 08/27/2020  
End Date: 02/28/2023  
Task Last Updated: 06/29/2023 
Principal Investigator/Affiliation:   Tavakkoli, Alireza  Ph.D. / University of Nevada, Reno 
Address:  Department of Computer Science and Engineering 
1664 N Virginia St (MS0171) 
Reno , NV 89557-0001 
Email: tavakkol@unr.edu 
Phone: 775-682-8426  
Congressional District:
Web:  
Organization Type: UNIVERSITY 
Organization Name: University of Nevada, Reno 
Joint Agency:  
Comments:  
Co-Investigator(s)
Affiliation: 
Webster, Michael  Ph.D. University of Nevada, Reno 
Key Personnel Changes / Previous PI: N/A
Project Information: Grant/Contract No. 80NSSC20K1831 
Responsible Center: NASA JSC 
Grant Monitor: Stenger, Michael  
Center Contact: 281-483-1311 
michael.b.stenger@nasa.gov 
Unique ID: 14091 
Solicitation / Funding Source: 2019 HERO 80JSC019N0001-FLAGSHIP & OMNIBUS: Human Research Program Crew Health. Appendix A&B 
Grant/Contract No.: 80NSSC20K1831 
Project Type: GROUND 
Flight Program:  
TechPort: Yes 
No. of Post Docs:
No. of PhD Candidates:
No. of Master's Candidates:
No. of Bachelor's Candidates:
No. of PhD Degrees:
No. of Master's Degrees:
No. of Bachelor's Degrees:
Human Research Program Elements: (1) HHC:Human Health Countermeasures
Human Research Program Risks: (1) SANS:Risk of Spaceflight Associated Neuro-ocular Syndrome (SANS)
Human Research Program Gaps: (1) SANS-202:Determine if genetic/metabolic/anatomic dispositions and biomarkers, and sex differences have a contributing role in the development of ocular manifestations.
(2) SANS-301:Develop and test mechanical countermeasures in the laboratory.
Flight Assignment/Project Notes: NOTE: End date changed to 02/28/2023 per C. Ribeiro/JSC (Ed., 6/2/22)

NOTE: End date changed to 08/26/2022 per NSSC information. (Ed. 10/26/21)

Task Description: Unique neuro-ocular structural and functional changes affect a subset of astronauts who have completed prolonged spaceflight missions; because of this distinctive pathology, a new case definition was proposed and the condition was named Spaceflight Associated Neuro-ocular Syndrome (SANS). In this project, we investigate two interconnected computational frameworks to develop a diagnostic system as well as a mapping mechanism that assist NASA scientists and clinical experts in studying the SANS phenomenon more comprehensively and predicting the risk of its development during prolonged spaceflight. The first aim (Aim 1) of this project is to develop novel computational tools that establish mappings between observed ocular structure and visual function, pre-, in-, and post-flight, in order to provide NASA scientists and clinicians with better means to investigate SANS etiology and its progression. The second aim (Aim 2) is to integrate Contrast Sensitivity (CS), Visual Fields (VF), and our novel distortion assessment mechanism into a validated and compact diagnostic tool to better measure ocular function (SANS-301: laboratory development of mechanical countermeasures).

We will focus our efforts in each aim on a sub-set of functionalities that allow for the establishment of the interconnected computational framework enabling the pursuit of long-term research to predict the risk of development of SANS and monitor its progression.

Omnibus Aim 1: Structure-Function Mapping

Research Task-1.1: Design a novel mapping between Optical Coherence Tomography (OCT), Magnetic Resonance Imaging (MRI), Contrast Sensitivity (CS), and Visual Fields (VF) perimetry.

Research Task-1.2: Conduct studies on retrospective data from NASA Lifetime Surveillance of Astronaut Health (LSAH) and Life Sciences Data Archive (LSDA) on the three populations (astronauts, head-down-tilt bed rest subjects, and idiopathic intracranial hypertension (IIH) patients).

These findings will be significant in two ways:

(1) They will allow us to predict measures within a smaller sample set, if a larger analog sample set has known structure-function maps.

(2) They will enable us to design predictive mechanisms to study disease progression both in astronauts and in terrestrial analogs.

Expected Outcomes: (1.i) understanding how OCT/MRI correlates with VF, (1.ii) translational parametrization of mappings across cohorts, and (1.iii) ability to predict the risk of development of SANS and monitor its progression by utilizing the proposed mappings.

Omnibus Aim 2: Address SANS 301 Knowledge Gap

Research Task-2.1: Integrate VF and CS assessments into a VR-mediated framework.

Research Task-2.2: Validate VR-based VF/CS on the terrestrial analog populations.

Expected Outcomes: (2.i) a novel Virtual Reality (VR)-based VF/CS assessment and (2.ii) a compact diagnostic tool.

Research Impact/Earth Benefits: During the previous year of the project, our team made the following contributions toward the two aims:

Aim 1- The first aim (Aim 1) of this project is to develop novel computational tools to establish mappings between the observed ocular structure and visual function, pre-, in-, and post-flight, in order to provide NASA scientists and clinicians with better means to investigate SANS etiology and its progression (SANS 1).

Contributions: 1- Design a novel mapping between OCT, MRI, CS, and VF. 2- Conduct studies on retrospective data from NASA Lifetime Surveillance of Astronaut Health (LSAH) and Life Sciences Data Archive (LSDA) on the three populations.

Technical Details: In order to establish a comprehensive mapping between different ophthalmic domains, we started by designing a conditional generative adversarial network (GAN) to map across the publicly available data at our disposal, i.e., fluorescein angiography (FA) and fundus photographs. The GAN comprises two generator modules and four discriminator modules that take fundus photographs and produce anatomically accurate FA images inferred from them.

Impact: We have shown that novel deep architectures in ophthalmic applications can improve diagnostic accuracy, that attention maps can improve the transferability of learned models across datasets, and that deep architectures can effectively extract shared feature representations across ophthalmic image modalities to translate from one domain to another. These discoveries have paved the way for our team to tackle the main problem of mapping from the domain of ocular structure to that of visual function.

Significance: (1) Understanding how ocular structure correlates with visual function. (2) Parametrization of mappings. (3) Predict the risk of SANS.

Aim 2- The second aim (Aim 2) of this project is to integrate CS, VF, and our novel distortion assessment mechanism into a validated and compact diagnostic tool to better measure ocular function (SANS 3).

Contributions: 1- Integrate VF/CS assessments into a VR-mediated framework. 2- Validate VR-based VF/CS on the terrestrial analog populations.

Technical Details: We present a methodology that comprises a calibration step, four visual function tests that measure different aspects of user perception, and a composite pipeline that simulates the modeled deficits for validation. In order to properly utilize the virtual assessment, the environment must be calibrated at the beginning of each session. Simple calibrations, such as adjusting lens distance, interpupillary distance, and headset fit, are done at the start. After these adjustments, the fixation and tracking capabilities of the eyes are tested, first binocularly and then monocularly. These performance metrics are saved alongside the user's demographic information.

After the calibration phase, the user's visual assessment can commence. Visual acuity (VA), contrast sensitivity (CS), and visual distortions are assessed through a variety of procedures. For VA, binocular distance VA as well as dynamic VA is measured under mesopic (natural light) conditions. Instead of using images of conventional charts, we render individual characters in front of the user at predetermined distances and scale them based on user responses. The results are reported on the logMAR scale, among others. Contrast sensitivity is measured using Gabor patches as stimuli: the user's gaze follows a Gabor patch that alters its contrast and spatial frequency based on user performance, and the result is expressed in logCS among other contrast sensitivity units. The Amsler grid test is adapted to VR to measure perceptual distortions in age-related macular degeneration (AMD) patients. At the start of the exam, the Amsler grid is displayed in front of both eyes. While looking at a fixation point in the grid, if the straight grid lines appear distorted, the user emulates the metamorphopsia of the deficient eye on the healthy eye. This grid manipulation is modeled as a Gaussian mixture of different scotoma parameters, and the results are reported as an image of the altered Amsler grid.
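The mixture-of-Gaussians grid distortion can be sketched as follows. This is a minimal illustration, not the project's exact formulation: the per-scotoma parameterization (center, sigma, amplitude) and the function names are our assumptions.

```python
import math

def displacement(x, y, scotomas):
    """Displacement of a grid vertex under a mixture of Gaussian distortion
    kernels. Each scotoma is (cx, cy, sigma, amp) -- hypothetical parameters
    chosen for illustration only."""
    dx = dy = 0.0
    for cx, cy, sigma, amp in scotomas:
        w = amp * math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
        dx += w * (x - cx)  # push the vertex radially away from the center
        dy += w * (y - cy)
    return dx, dy

def distort_grid(points, scotomas):
    """Warp every grid vertex; the warped grid is what would be rendered
    back as the image of the altered Amsler grid."""
    out = []
    for x, y in points:
        dx, dy = displacement(x, y, scotomas)
        out.append((x + dx, y + dy))
    return out
```

A vertex at a scotoma center does not move, nearby vertices are displaced outward, and far-away vertices are essentially untouched, which matches the localized nature of metamorphopsia.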

Impact: We have developed a new approach, mediated by advances in virtual reality (VR), for better assessment of metamorphopsia to enable remote monitoring of the progression of AMD (Zaman et al., 2020). These findings, in conjunction with the findings of Aim 1, motivate and inform the objectives of this project by allowing our team to maintain a correspondence between ocular structural changes and their impact on visual function assessments.

Significance: (1) A novel VR-based VF/CS assessment. (2) A compact diagnostic tool.

Task Progress & Bibliography Information FY2023 
Task Progress: We have shown that novel deep architectures in ophthalmic applications can improve diagnostic accuracy, that attention maps can improve the transferability of learned models across datasets, that generative models are effective in segmenting certain anatomical features from ophthalmic images, and that deep architectures can effectively extract shared feature representations across ophthalmic image modalities to translate from one domain to another. These discoveries have paved the way for our team to tackle the main problem of mapping from the domain of ocular structure to that of visual function.

In addition, we have developed a new approach mediated by advances in virtual reality (VR) for better assessment of metamorphopsia, to enable remote monitoring of the progression of age-related macular degeneration (AMD). We have also developed several techniques to evaluate other visual functions, including gravitational transition visual effects, multi-modal function, and visual acuity, as well as other effects of microgravity on the human body. These findings, in conjunction with the findings of Aim 1, motivate and inform the objectives of this project by allowing our team to maintain a correspondence between ocular structural changes and their impact on visual function assessments.

Bibliography (Last Updated: 04/25/2024) 

Abstracts for Journals and Proceedings Waisberg E, Ong J, Masalkhi M, Zaman N, Kamran SA, Sarker P, Tavakkoli A, Lee AG. "Optical coherence tomography analysis of International Space Station astronauts." Abstracts, 2023 ARVO Annual Meeting, New Orleans, LA, April 23-27, 2023. Jun-2023

Articles in Peer-reviewed Journals Waisberg E, Ong J, Paladugu P, Kamran SA, Zaman N, Lee AG, Tavakkoli A. "Dynamic visual acuity as a biometric for astronaut performance and safety." Life Sci Space Res (Amst). 2023 May;37:3-6. https://doi.org/10.1016/j.lssr.2023.01.002 ; PubMed PMID: 37087177 , May-2023
Articles in Peer-reviewed Journals Waisberg E, Ong J, Masalkhi M, Zaman N, Kamran SA, Sarker P, Lee AG, Tavakkoli A. "Generative Pre-Trained Transformers (GPT) and space health: A potential frontier in astronaut health during exploration missions." Prehosp Disaster Med. 2023 Jun 2;1-5. https://doi.org/10.1017/S1049023X23005848 ; PubMed PMID: 37264946 , Jun-2023
Articles in Peer-reviewed Journals Waisberg E, Ong J, Zaman N, Paladugu P, Dias R, Kamran SA, Lee AG, Tavakkoli A. "Minified augmented reality as a terrestrial analog for G-transitions effects in Lunar and interplanetary spaceflight." International Journal of Aviation, Aeronautics, and Aerospace. 2023;10(1). https://doi.org/10.58940/2374-6793.1797 , Jan-2023
Articles in Peer-reviewed Journals Waisberg E, Ong J, Kamran SA, Paladugu P, Zaman N, Lee AG, Tavakkoli A. "Transfer learning as an AI-based solution to address limited datasets in space medicine." Life Sci Space Res (Amst). 2023 Feb;36:36-38. https://doi.org/10.1016/j.lssr.2022.12.002 ; PubMed PMID: 36682827 , Feb-2023
Articles in Peer-reviewed Journals Waisberg E, Ong J, Kamran SA, Zaman N, Paladugu P, Sarker P, Tavakkoli A, Lee AG. "Further characterizing the physiological process of posterior globe flattening in spaceflight associated neuro-ocular syndrome with generative adversarial networks." J Appl Physiol (1985). 2023 Jan 2;134(1):150-1. https://doi.org/10.1152/japplphysiol.00747.2022 ; PubMed PMID: 36592406 , Jan-2023
Articles in Peer-reviewed Journals Waisberg E, Ong J, Zaman N, Kamran SA, Lee AG, Tavakkoli A. "Head-mounted dynamic visual acuity for G-transition effects during interplanetary spaceflight: Technology development and results from an early validation study." Aerosp Med Hum Perform. 2022 Nov 1;93(11):800-5. https://doi.org/10.3357/AMHP.6092.2022 ; PubMed PMID: 36309801 , Nov-2022
Articles in Peer-reviewed Journals Ong J, Tavakkoli A, Zaman N, Kamran SA, Waisberg E, Gautam N, Lee AG. "Terrestrial health applications of visual assessment technology and machine learning in spaceflight associated neuro-ocular syndrome." npj Microgravity. 2022 Aug 25;8:37. https://doi.org/10.1038/s41526-022-00222-7 ; PubMed PMID: 36008494; PubMed Central PMCID: PMC9411571 , Aug-2022
Articles in Peer-reviewed Journals Waisberg E, Ong J, Masalkhi M, Zaman N, Sarker P, Lee AG, Tavakkoli A. "The future of ophthalmology and vision science with the Apple Vision Pro." Eye (Lond). 2023 Aug 4. https://doi.org/10.1038/s41433-023-02688-5 ; PMID: 37542175 , Aug-2023
Articles in Peer-reviewed Journals Ong J, Waisberg E, Kamran SA, Paladugu P, Zaman N, Sarker P, Tavakkoli A, Lee AG. "Deep learning synthetic angiograms for individuals unable to undergo contrast-guided laser treatment in aggressive retinopathy of prematurity." Eye (Lond). 2023 Feb 1. https://doi.org/10.1038/s41433-023-02400-7 ; PMID: 37500752; PMCID: PMC10482905 , Feb-2023
Articles in Peer-reviewed Journals Suh A, Ong J, Kamran SA, Waisberg E, Paladugu P, Zaman N, Sarker P, Tavakkoli A, Lee AG. "Retina oculomics in neurodegenerative disease." Ann Biomed Eng. 2023 Oct 19. Review. https://doi.org/10.1007/s10439-023-03365-0 ; PMID: 37855949 , Oct-2023
Articles in Peer-reviewed Journals Waisberg E, Ong J, Zaman N, Paladugu P, Kamran SA, Tavakkoli A, Lee AG. "The spaceflight contrast sensitivity hypothesis and its role to investigate the pathophysiology of spaceflight-associated neuro-ocular syndrome." Front Ophthalmol. 2023 Sep 5;3:1229748. https://doi.org/10.3389/fopht.2023.1229748 ; , Sep-2023
Articles in Peer-reviewed Journals Waisberg E, Ong J, Paladugu P, Zaman N, Kamran SA, Lee AG, Tavakkoli A. "Optimizing screening for preventable blindness with head-mounted visual assessment technology." J Vis Impair Blind. 2022 Jul 1;116(4):579-81. https://doi.org/10.1177/0145482X221124186 , Jul-2022
Articles in Peer-reviewed Journals Waisberg E, Ong J, Paladugu P, Kamran SA, Zaman N, Lee AG, Tavakkoli A. "Challenges of artificial intelligence in space medicine." Space Sci & Technol. 2022 Oct 29;2022:9852872. https://doi.org/10.34133/2022/9852872 ; , Oct-2022
Articles in Peer-reviewed Journals Paladugu PS, Ong J, Nelson N, Kamran SA, Waisberg E, Zaman N, Kumar R, Dias RD, Lee AG, Tavakkoli A. "Generative adversarial networks in medicine: Important considerations for this emerging innovation in artificial intelligence." Ann Biomed Eng. 2023 Jul 24. Review. https://doi.org/10.1007/s10439-023-03304-z ; PMID: 37488468 , Jul-2023
Articles in Peer-reviewed Journals Soares B, Ong J, Waisberg E, Sarker P, Zaman N, Tavakkoli A, Lee AG. "Imaging in spaceflight associated neuro-ocular syndrome (SANS): Current technology and future directions in modalities." Life Sci Space Res. 2024 Apr 16. Online ahead of print. https://doi.org/10.1016/j.lssr.2024.04.004 , Apr-2024
Articles in Peer-reviewed Journals Sarker P, Ong J, Zaman N, Kamran SA, Waisberg E, Paladugu P, Lee AG, Tavakkoli A. "Extended reality quantification of pupil reactivity as a non-invasive assessment for the pathogenesis of spaceflight associated neuro-ocular syndrome: A technology validation study for astronaut health." Life Sci Space Res. 2023 Jun 5. https://doi.org/10.1016/j.lssr.2023.06.001 , Jun-2023
Project Title: A Non-intrusive Ocular Monitoring Framework to Model Ocular Structure and Functional Changes due to Long-term Spaceflight
Fiscal Year: FY 2022 
Division: Human Research 
Research Discipline/Element:
HRP HHC:Human Health Countermeasures
Start Date: 08/27/2020  
End Date: 02/28/2023  
Task Last Updated: 06/24/2022 

Task Progress & Bibliography Information FY2022 
Task Progress: To achieve the two research aims of this project, we initiated an effort to design a framework that generates mappings between ocular structure and function, by developing a computational framework inspired by deep Convolutional Neural Networks (CNNs). We then began training these novel mappings as part of research task RT-1.1. Unlike current classification and segmentation algorithms that merely label test results, the proposed mappings directly connect one domain (function) to the other (physiology). This functionality is significant, as it will enable us to predict the progression of changes by cross-validating test results from one domain (function) against the other (structure). Moreover, these deep network models enable the design of cohort studies as part of research task RT-1.2, in order to uncover and model similarities and differences between Spaceflight Associated Neuro-ocular Syndrome (SANS) and its terrestrial analogs. Below, additional details about the accomplishments in each research task are presented.

1) Mapping Across Domains: In order to establish a comprehensive mapping between different ophthalmic domains, we started by designing a conditional generative adversarial network (GAN) to map across the publicly available data at our disposal, i.e., fluorescein angiography (FA) and fundus photographs. The architecture is a vision-transformer-based GAN consisting of residual, spatial feature fusion, up sampling, and down sampling blocks for the generators, and transformer encoder blocks for the discriminators. During training, we incorporate multiple losses for generating vivid fluorescein angiography images from normal and abnormal fundus photographs. Multi-scale Generators: To capture both large- and fine-scale features and produce realistic vascular images, we combine two generators, one coarse and one fine. The fine generator synthesizes local features such as arteries and venules, while the coarse generator translates global features such as large blood vessels, the optic disc, and overall contrast and illumination. The generators consist of multiple down sampling, up sampling, spatial feature fusion, and residual blocks, with a multi-scale feature summation block between the two generators.
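The coarse/fine composition can be sketched at toy scale in pure Python. This is an illustrative dataflow only: `coarse_fn` and `fine_fn` stand in for the actual convolutional sub-networks, and the pooling/upsampling helpers are our simplified assumptions.

```python
def avg_pool2(img):
    """Halve a 2-D grayscale image (list of lists) by 2x2 average pooling."""
    return [[(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) / 4.0
             for j in range(0, len(img[0]), 2)]
            for i in range(0, len(img), 2)]

def upsample2(img):
    """Nearest-neighbour 2x upsampling back to the fine resolution."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]  # repeat each column
        out.append(wide)
        out.append(list(wide))                   # repeat each row
    return out

def multiscale_forward(img, coarse_fn, fine_fn):
    """The coarse branch sees the half-resolution image; its upsampled output
    is element-wise summed with the fine branch, mirroring the multi-scale
    feature summation block between the two generators."""
    coarse = upsample2(coarse_fn(avg_pool2(img)))
    return [[fine_fn(img[i][j]) + coarse[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]
```

With an identity coarse branch and a zero fine branch, the output is just the blurred global structure; with the roles reversed, it is the unchanged local detail, which is the division of labor the two generators are designed around.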

Down Sampling and Up Sampling Blocks: We use, as generators, auto-encoders comprising multiple down sampling and up sampling blocks for feature extraction. A single down sampling block contains a convolution layer, a batch-norm layer, and a Leaky-ReLU activation function in succession; an up sampling block consists of a transposed convolution layer, a batch-norm layer, and a Leaky-ReLU activation layer. We use the down sampling block twice in the fine generator, followed by nine successive residual identity blocks; finally, two up sampling blocks restore the spatial output to the same size as the input. For the coarse generator, we use down sampling once and, after three consecutive residual blocks, a single up sampling block to obtain the same spatial output as the input. Spatial Feature Fusion Block: The spatial feature fusion (SFF) block consists of two residual units with convolution, batch-norm, and Leaky-ReLU layers in succession. There are two skip connections: one from the input, element-wise summed with the first residual unit's output, and one from the input layer, added to the last residual unit's output. We use SFF blocks to combine spatial features from the bottom layers with the topmost layers of the architecture. The fine generator comprises two SFF blocks that connect each of the two down sampling blocks with the corresponding up sampling blocks; the coarse generator has a single SFF block between its one down sampling and one up sampling block. We incorporate the SFF block to extract and retain spatial information that is otherwise lost through consecutive down sampling and up sampling; as a result, these features can be combined with the learned features of the later layers of the network to produce an accurate approximation.
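As a sanity check on the block counts above, the spatial-size bookkeeping for both generators can be written out explicitly. This is a sketch with hypothetical helper names; channel dimensions and the SFF connections are omitted.

```python
def down(shape):
    """Stride-2 down sampling block: halves each spatial dimension."""
    return (shape[0] // 2, shape[1] // 2)

def up(shape):
    """Stride-2 transposed-convolution block: doubles each spatial dimension."""
    return (shape[0] * 2, shape[1] * 2)

def fine_generator_shape(shape):
    """Fine generator: two down sampling blocks, nine residual identity
    blocks (shape-preserving), then two up sampling blocks."""
    shape = down(down(shape))
    # nine residual identity blocks leave the shape unchanged
    return up(up(shape))

def coarse_generator_shape(shape):
    """Coarse generator: one down sampling block, three residual blocks,
    one up sampling block."""
    return up(down(shape))
```

Both paths return the input spatial size, consistent with the requirement that each generator's output match its input resolution.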

Vision Transformers as Discriminators: GAN discriminators must adapt to both local and global information to differentiate real from fake images. Capturing both typically requires a heavy architecture with many parameters; alternatively, convolutions with large receptive fields can capture multi-scale features but tend to overfit the training data. To resolve this problem, we propose a new Vision Transformer-based Markovian discriminator. We use eight Vision Transformer encoders, each consisting of a multi-headed attention layer and a multi-layer perceptron (MLP) block. A layer-normalization layer precedes each block, and a residual skip connection adds the input to the output. To handle 2D images of 512 x 512, we reshape each image into a sequence of flattened 2D patches with resolution 64 x 64, yielding 64 patches in total. The Transformer uses a constant latent vector size of 64 through all its layers, so we flatten the patches and map them to 64 dimensions with a trainable linear projection; the output of this projection is called the patch embedding. Learnable 1D position embeddings are added to the patch embeddings to preserve positional information. The multi-headed attention uses 4 heads. Each MLP block uses two dense layers with 128 and 64 features respectively, each followed by a GeLU activation and a dropout of 0.1. Unlike a standard Vision Transformer, ours has two output branches: an MLP head and a convolutional layer. The MLP head classifies the FA image (Abnormal or Normal), while the convolutional layer outputs a 64 x 64 feature map that classifies each patch of the original image. We use two Vision Transformer-based discriminators that incorporate identical structures but operate at two different scales: the coarse angiograms and fundus images are downsampled by a factor of 2 to 256 x 256 using the Lanczos filter. Both discriminators have identical transformer encoders and output layers. We fuse learnable elements from both generators while training them with their paired Vision Transformer-based discriminators.
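The patch-embedding arithmetic described above can be verified with a short sketch. The patch size (64), image size (512), and latent dimension (64) come from the text; the zero-initialized projection weights and position embedding are placeholders standing in for the trainable parameters.

```python
IMG, PATCH, D_MODEL = 512, 64, 64   # sizes stated in the text

def patchify(image):
    """Split an IMG x IMG image (a list of rows) into flattened PATCH x PATCH patches."""
    patches = []
    for py in range(0, IMG, PATCH):
        for px in range(0, IMG, PATCH):
            patches.append([image[py + y][px + x]
                            for y in range(PATCH) for x in range(PATCH)])
    return patches

def embed(patch, W, pos):
    """Linear projection of one flattened patch plus its 1D position embedding."""
    return [sum(patch[j] * W[j][d] for j in range(len(patch))) + pos[d]
            for d in range(D_MODEL)]

image = [[0.0] * IMG for _ in range(IMG)]
patches = patchify(image)
assert len(patches) == 64                 # 64 patches in total
assert len(patches[0]) == PATCH * PATCH   # each flattened to 4096 values

W = [[0.0] * D_MODEL for _ in range(PATCH * PATCH)]   # placeholder projection weights
pos = [0.0] * D_MODEL                                 # placeholder position embedding
assert len(embed(patches[0], W, pos)) == D_MODEL      # mapped to 64 dimensions
```

In the actual model, W and pos would be learned jointly with the encoder; the sketch only confirms the sequence-length and dimension bookkeeping.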

2) Non-Intrusive Diagnostics: We present a methodology that comprises a calibration step, four different visual function tests that measure different aspects of user perception, and a composite pipeline that simulates the modeled deficits for validation.

VR Calibration: In order to properly utilize the virtual assessment, the environment must be calibrated at the beginning of each session. Simple calibrations, such as adjusting lens distance, interpupillary distance, and headset fit, are done at the start. Additional system-specific calibrations, such as color-gamut calibration, are done once per VR device. After these adjustments, the fixation and tracking capabilities of the eyes are tested, first binocularly and then monocularly. These performance metrics are saved alongside the user demographic information.

VR Assessment: After the calibration phase, the user's visual assessment can commence. Visual acuity (VA), contrast sensitivity (CS), and visual distortions are assessed through a variety of procedures. For VA, binocular distant VA as well as dynamic VA is measured under mesopic (natural light) conditions. Instead of using images of conventional charts, we render individual characters in front of the user at predetermined distances and scale them based on user responses. The results are reported on the logMAR scale, among other units. Contrast sensitivity is measured using Gabor patches as stimuli: the user's gaze follows a Gabor patch that alters its contrast and spatial frequency based on user performance, and the result is expressed in logCS, among other contrast sensitivity units. The Amsler grid test is adapted to VR to measure the perceptual distortions in age-related macular degeneration (AMD) patients. At the start of the exam, the Amsler grid is displayed in front of both eyes. While fixating on a point in the grid, if the straight grid lines appear distorted, the user reproduces the metamorphopsia of the deficient eye on the healthy eye. This grid manipulation is modeled as a Gaussian mixture of different scotoma parameters. The results are reported as the image of the altered Amsler grid.
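The grid manipulation is described as a Gaussian mixture over scotoma parameters. A minimal sketch of one plausible form, where each scotoma radially displaces grid points with Gaussian falloff, is below; the displacement model and all parameter values are illustrative assumptions, not taken from the report.

```python
import math

def warp(x, y, scotomas):
    """Displace a grid point (x, y) by a mixture of Gaussian-weighted warps.
    Each scotoma is (cx, cy, sigma, amplitude); all values are hypothetical."""
    dx = dy = 0.0
    for cx, cy, sigma, amp in scotomas:
        w = amp * math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        dx += w * (x - cx)    # push points radially away from the scotoma center
        dy += w * (y - cy)
    return x + dx, y + dy

scotomas = [(0.0, 0.0, 0.2, 0.5)]        # a single central scotoma (illustrative)
near = warp(0.1, 0.0, scotomas)          # strongly distorted near the center
far = warp(0.9, 0.0, scotomas)           # almost unchanged in the periphery
assert abs(near[0] - 0.1) > abs(far[0] - 0.9)
```

Summing several such terms gives the mixture; fitting the (cx, cy, sigma, amplitude) parameters to the user's grid adjustments would yield the saved scotoma description.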

VR Simulation: The collected results for each of the visual assessments are then used to create a simulation of the perception of the user. This pipeline combines results from all of the tests to offer a single visualization. For example, lower visual acuity values would lead to the scene appearing blurry and the existence of scotomas would create distortions in the scene. The saved parameters can be pulled up at any time so that others can experience the perceptual loss measured by all three tests individually and collectively.
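As an illustration of how the saved parameters might drive the composite simulation, the sketch below maps a logMAR score to a blur strength and carries the scotoma list along for the distortion pass. The mapping function and its gain are hypothetical; the report does not specify how test results parameterize the rendering.

```python
def blur_sigma_from_logmar(logmar, gain=2.0):
    """Map acuity loss to a blur strength; 0.0 logMAR (20/20) means no blur.
    The linear form and the gain value are illustrative assumptions."""
    return max(0.0, logmar) * gain

def simulate_percept(logmar, scotomas):
    """Bundle the per-test parameters into one renderable description."""
    return {"blur_sigma": blur_sigma_from_logmar(logmar),
            "scotomas": list(scotomas)}

healthy = simulate_percept(0.0, [])
impaired = simulate_percept(0.7, [(0.0, 0.0, 0.2, 0.5)])
assert healthy["blur_sigma"] == 0.0 and not healthy["scotomas"]
assert impaired["blur_sigma"] > 0.0 and len(impaired["scotomas"]) == 1
```

A renderer could then apply the blur and warp to the scene each frame, which is what lets others experience the measured perceptual loss individually or collectively.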

2-1) Objectives of Visual Function Assessments: Currently, on board the International Space Station (ISS), astronauts undergo many routine functional visual assessments (e.g., visual acuity, Amsler grid test); contrast sensitivity testing is also available. For optimal monitoring, these visual assessments may benefit from consistent distance and illumination calibration to reduce the subjectivity of the tests. We achieve these objectives through virtual reality (VR) head-mounted systems, repurposing laptop screen-based tests into an immersive experience. Additionally, by delivering all visual function tests on one VR device, it becomes possible to make inferences across tests once a session is recorded. Specifically, for SANS monitoring it is important to identify any subtle perceptual impact so that countermeasures can be designed, and intelligent delivery of stimuli under various conditions helps identify such subtle perceptual loss. Optic disc edema, globe flattening, nerve fiber layer thickening, and choroidal folds are common imaging findings in SANS. While it is important to monitor SANS, frequently repeating these imaging tests would consume a significant portion of mission time. Therefore, quick sessions of different visual function tests are being considered to continually track the different aspects of SANS symptoms. This can be achieved by mapping visual function data to imaging data using pre-existing astronaut data as well as head-down tilt bed rest, an analog for SANS.

We have conducted several primary tests on this system, including visual acuity, contrast sensitivity, Amsler grid, and visual fields. These assessments can be linked to specific SANS findings that parallel terrestrial ocular relationships, such as contrast sensitivity and retinal nerve fiber layer thickening. In addition, these visual function tests may be able to further characterize any deficiencies in SANS by providing additional visual assessment tests.

Bibliography: Description: (Last Updated: 04/25/2024) 

 
Abstracts for Journals and Proceedings Zaman N, Ong J, Tavakkoli A, Zuckerbrod S, Webster M. "Adaptation to prominent monocular metamorphopsia using binocular suppression." Optica Fall Vision Meeting, Virtual, September 20-October 3, 2021.

Abstract Issue. 2021 Optica Fall Vision Meeting. J Vis. 2022 Feb 1;22(3):11. https://doi.org/10.1167/jov.22.3.11 , Feb-2022

Abstracts for Journals and Proceedings Ong J, Zaman N, Kamran SA, Waisberg E, Tavakkoli A, Lee AG, Webster M. "A multi-modal visual assessment system for monitoring Spaceflight Associated Neuro-Ocular Syndrome (SANS) during long duration spaceflight." 2021 Optica Fall Vision Meeting, Virtual, September 20-October 3, 2021.

Abstracts. 2021 Optica Fall Vision Meeting, Virtual, September 20-October 3, 2021. , Sep-2021

Abstracts for Journals and Proceedings Kamran SA, Hossain KF, Tavakkoli A, Ong J, Zuckerbrod SL. "A generative adversarial deep neural network to translate between ocular imaging modalities while maintaining anatomical fidelity." 2021 Optica Fall Vision Meeting, Virtual, September 20-October 3, 2021.

Abstract Issue. 2021 Optica Fall Vision Meeting. J Vis. 2022 Feb 1;22(3):3. https://doi.org/10.1167/jov.22.3.3 , Feb-2022

Articles in Peer-reviewed Journals Waisberg E, Ong J, Zaman N, Kamran SA, Lee AG, Tavakkoli A. "A non-invasive approach to monitor anemia during long-duration spaceflight with retinal fundus images and deep learning." Life Sci Space Res (Amst). 2022 May;33:69-71. https://doi.org/10.1016/j.lssr.2022.04.004 ; PMID: 35491031 , May-2022
Articles in Peer-reviewed Journals Ong J, Tavakkoli A, Strangman G, Zaman N, Kamran SA, Zhang Q, Ivkovic V, Lee AG. "Neuro-ophthalmic imaging and visual assessment technology for spaceflight associated neuro-ocular syndrome (SANS)." Surv Ophthalmol. 2022 Apr 21;S0039-6257(22)00048-0. Review. https://doi.org/10.1016/j.survophthal.2022.04.004 ; PMID: 35461882 , Apr-2022
Articles in Peer-reviewed Journals Ong J, Zaman N, Kamran SA, Waisberg E, Tavakkoli A, Lee AG, Webster M. "A multi-modal visual assessment system for monitoring Spaceflight Associated Neuro-Ocular Syndrome (SANS) during long duration spaceflight." J Vis. 2022 Feb 1;22(3):6. https://doi.org/10.1167/jov.22.3.6 , Feb-2022
Papers from Meeting Proceedings Kamran SA, Hossain KF, Tavakkoli A, Zuckerbrod SL, Baker SA. "VTGAN: Semi-supervised retinal image synthesis and disease prediction using vision transformers." 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, Canada, October 11-17, 2021.

Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021 Nov 24:3228-3238, http://dx.doi.org/10.1109/ICCVW54120.2021.00362 , Nov-2021

Papers from Meeting Proceedings Kamran SA, Hossain KF, Tavakkoli A, Zuckerbrod SL, Sanders KM, Baker SA. "RV-GAN: Segmenting retinal vascular structure in fundus photographs using a novel multi-scale generative adversarial network." 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021), Virtual, September 27-October 1, 2021.

Proceedings of MICCAI 2021. Lecture Notes in Computer Science, vol 12908. Springer, Cham. https://doi.org/10.1007/978-3-030-87237-3_4 , Sep-2021

Papers from Meeting Proceedings Kamran SA, Hossain KF, Tavakkoli A, Zuckerbrod SL. "Attention2AngioGAN: Synthesizing fluorescein angiography from retinal fundus images using generative adversarial networks." 25th International Conference on Pattern Recognition (ICPR), Virtual, January 12-15, 2021.

Proceedings of the 25th International Conference on Pattern Recognition (ICPR), 2021 Jan 12:9122-9129. https://doi.ieeecomputersociety.org/10.1109/ICPR48806.2021.9412428 , Jan-2021

Project Title: A Non-intrusive Ocular Monitoring Framework to Model Ocular Structure and Functional Changes due to Long-term Spaceflight
Fiscal Year: FY 2021
Start Date: 08/27/2020
End Date: 08/26/2022
Task Last Updated: 06/27/2021
Flight Assignment/Project Notes: NOTE: End date changed to 08/26/2022 per NSSC information. (Ed. 10/26/21)

Task Description: Unique neuro-ocular structural and functional changes affect a subset of astronauts who have completed prolonged spaceflight missions; owing to this unique pathology, a new case definition was proposed and the condition was renamed Spaceflight Associated Neuro-ocular Syndrome (SANS). In this project we investigate two interconnected computational frameworks to develop a diagnostic system and a mapping mechanism that assist NASA scientists and clinical experts in studying the SANS phenomenon more comprehensively and in predicting the risk of its development during prolonged spaceflight. The first aim (Aim 1) of this project is to develop novel computational tools that establish mappings between observed ocular structure and visual function, pre-, in-, and post-flight, in order to provide NASA scientists and clinicians with better means to investigate SANS etiology and its progression. The second aim (Aim 2) is to integrate Contrast Sensitivity (CS), Visual Fields (VF), and our novel distortion assessment mechanism into a validated and compact diagnostic tool to better measure ocular function (SANS 301: Laboratory development of mechanical countermeasures).

We will focus our efforts in each aim on a sub-set of functionalities that allow for the establishment of the interconnected computational framework enabling the pursuit of long-term research to predict the risk of development of SANS and monitor its progression.

Omnibus Aim 1: Structure-Function Mapping

Research Task-1.1: Design a novel mapping between Optical Coherence Tomography (OCT), Magnetic Resonance Imaging (MRI), Contrast Sensitivity (CS), and Visual Fields (VF) perimetry.

Research Task-1.2: Conduct studies on retrospective data from NASA Lifetime Surveillance of Astronaut Health (LSAH) and Life Sciences Data Archive (LSDA) on three populations: astronauts, head-down-tilt bed rest subjects, and idiopathic intracranial hypertension (IIH) patients.

These findings will be significant in two ways:

(1) They will allow us to predict measures within a smaller sample set, if a larger analog sample set has known structure-function maps.

(2) They will enable us to design predictive mechanisms to study disease progression both in astronauts and in terrestrial analogs.

Expected Outcomes: (1.i) understanding how OCT/MRI correlates with VF, (1.ii) translational parametrization of mappings across cohorts, and (1.iii) ability to predict the risk of development of SANS and monitor its progression by utilizing the proposed mappings.

Omnibus Aim 2: Address SANS 301 Knowledge Gap

Research Task-2.1: Integrate VF and CS assessments into a VR-mediated framework.

Research Task-2.2: Validate VR-based VF/CS on the terrestrial analog populations.

Expected Outcomes: (2.i) a novel Virtual Reality (VR)-based VF/CS assessment and (2.ii) a compact diagnostic tool.

Research Impact/Earth Benefits: During the previous year of the project, our team made contributions to the two aims as follows:

Aim 1- The first aim (Aim 1) of this project is to develop novel computational tools to establish mappings between the observed ocular structure and visual function, pre-, in-, and post-flight, in order to provide NASA scientists and clinicians with better means to investigate SANS etiology and its progression (SANS 1).

Contributions: 1- Design a novel mapping between OCT, MRI, CS, and VF. 2- Conduct studies on retrospective data from NASA Lifetime Surveillance of Astronaut Health (LSAH) and Life Sciences Data Archive (LSDA) on the three populations.

Technical Details: In order to establish a comprehensive mapping between different ophthalmic domains, we started by designing a conditional generative adversarial network (GAN) to map across the publicly available data at our disposal, i.e., fluorescein angiography (FA) and fundus photographs. The GAN comprises two generator modules and four discriminator modules that take fundus photographs and produce anatomically accurate FA images inferred from the fundus images.

Impact: We have shown that novel deep architectures in ophthalmic applications can improve diagnostic accuracy, that attention maps can improve the transferability of learned models across datasets, and that deep architectures can effectively extract shared feature representations across ophthalmic image modalities to translate from one domain to another. These discoveries have paved the way for our team to tackle the main problem of mapping from the domain of ocular structure to that of visual function.

Significance: (1) Understanding how ocular structure correlates with visual function. (2) Parametrization of mappings. (3) Prediction of the risk of SANS.

Aim 2- The second aim (Aim 2) of this project is to integrate CS, VF, and our novel distortion assessment mechanism into a validated and compact diagnostic tool to better measure ocular function (SANS 3).

Contributions: 1- Integrate VF/CS assessments into a VR-mediated framework. 2- Validate VR-based VF/CS on the terrestrial analog populations.

Technical Details: We present a methodology that comprises a calibration step, four different visual function tests that measure different aspects of user perception, and a composite pipeline that simulates the modeled deficits for validation. In order to properly utilize the virtual assessment, the environment must be calibrated at the beginning of each session. Simple calibrations, such as adjusting lens distance, interpupillary distance, and headset fit, are done at the start. After these adjustments, the fixation and tracking capabilities of the eyes are tested, first binocularly and then monocularly. These performance metrics are saved alongside the user demographic information. After the calibration phase, the user's visual assessment can commence. Visual acuity (VA), contrast sensitivity (CS), and visual distortions are assessed through a variety of procedures. For VA, binocular distant VA as well as dynamic VA is measured under mesopic (natural light) conditions. Instead of using images of conventional charts, we render individual characters in front of the user at predetermined distances and scale them based on user responses. The results are reported on the logMAR scale, among other units. Contrast sensitivity is measured using Gabor patches as stimuli: the user's gaze follows a Gabor patch that alters its contrast and spatial frequency based on user performance, and the result is expressed in logCS, among other contrast sensitivity units. The Amsler grid test is adapted to VR to measure the perceptual distortions in age-related macular degeneration (AMD) patients. At the start of the exam, the Amsler grid is displayed in front of both eyes. While fixating on a point in the grid, if the straight grid lines appear distorted, the user reproduces the metamorphopsia of the deficient eye on the healthy eye. This grid manipulation is modeled as a Gaussian mixture of different scotoma parameters. The results are reported as the image of the altered Amsler grid.

Impact: In addition, we have developed a new approach mediated by advances in virtual reality (VR) for better assessment of metamorphopsia, enabling remote monitoring of the progression of AMD. These findings, in conjunction with the findings of Aim 1, motivate and inform the objectives of this project by allowing our team to track how ocular structural changes could impact visual function assessments.

Significance: (1) A novel VR-based VF/CS assessment. (2) A compact diagnostic tool.

Task Progress & Bibliography Information FY2021 
Task Progress: We initiated the data request process with NASA shortly after the award notification was announced on August 14, 2020. On the same day, the study was submitted to the NASA Institutional Review Board (IRB) under protocol STUDY00000269. On September 22nd, the Principal Investigator (PI) presented the data request to the LSAH Advisory Board, and on October 28, 2020, the NASA IRB approved the study protocol. On December 18, 2020, the University of Nevada, Reno (UNR) Research Integrity office approved the NASA Reliance Acknowledgement document, and the full IRB approved the project on January 21, 2021. The final data request signature was delivered to NASA on June 6th to complete the data request action and download the provided data from the NASA LSAH/LSDA repositories.

We have shown that novel deep architectures in ophthalmic applications can improve diagnostic accuracy, that attention maps can improve the transferability of learned models across datasets, and that deep architectures can effectively extract shared feature representations across ophthalmic image modalities to translate from one domain to another. These discoveries have paved the way for our team to tackle the main problem of mapping from the domain of ocular structure to that of visual function.

In addition, we have developed a new approach mediated by advances in virtual reality (VR) for better assessment of metamorphopsia, enabling remote monitoring of the progression of age-related macular degeneration (AMD). These findings, in conjunction with the findings of Aim 1, motivate and inform the objectives of this project by allowing our team to track how ocular structural changes could impact visual function assessments.

As of the writing of this report, the retrospective data has not been received. However, we have utilized publicly available data for some preliminary validation of the work done so far. In addition to this limitation, the COVID-19 pandemic has prevented our team from collecting prospective data from patient-analog and control subjects. The publicly available data we used this year included only multiple modalities of ocular physiology; therefore, the current models are fully capable of mapping between various ocular structure modalities. Despite this limitation, the models are designed without any specific restrictions on the domain of the data (i.e., structure or function), so we anticipate that with the availability of additional modalities during the second year, more comprehensive models can be trained to map across the visual function and ocular structure domains.

Bibliography: Description: (Last Updated: 04/25/2024) 

 
Dissertations and Theses Zaman N. (Nasif Zaman) "EyeSightVR: An Immersive and Automated Tool for Comprehensive Assessment of Visual Function." University of Nevada, Reno, May 2021. , May-2021
Papers from Meeting Proceedings Kamran SA, Hossain KF, Tavakkoli A, Zuckerbrod SL, Sanders KM, Baker SA. "RV-GAN: Segmenting retinal vascular structure in fundus photographs using a novel multi-scale generative adversarial network." 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021), Virtual, September 27-October 1, 2021.

Proceedings of the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021), Virtual, September 27-October 1, 2021. In press, as of July 2021. , Jul-2021

Project Title: A Non-intrusive Ocular Monitoring Framework to Model Ocular Structure and Functional Changes due to Long-term Spaceflight
Fiscal Year: FY 2020
Start Date: 08/27/2020
End Date: 08/26/2021
Task Last Updated: 10/18/2020
Grant Monitor: Grant-Technical-Officer, JSC-SA (jsc-sa-grant-technical-officer@mail.nasa.gov, 281.244.8942)

Research Impact/Earth Benefits:

Task Progress & Bibliography Information FY2020 
Task Progress: New project for FY2020.

Bibliography: Description: (Last Updated: 04/25/2024) 

 
 None in FY 2020