Can you predict fluid intelligence from T1-weighted MRI?

The ABCD Neurocognitive Prediction Challenge (ABCD-NP-Challenge 2019) invites researchers to submit their methods for predicting fluid intelligence from T1-weighted MRI (about 8.5K subjects in total, ages 9-10 years). Data from 4.1K individuals will be provided for training. The accuracy of each method will be assessed on its predicted fluid intelligence scores for 4.4K children, whose actual scores will be revealed after the challenge deadline. Downloading the data requires prior approval by NIH NDAR, which in turn requires sign-off by the institution you are affiliated with, so start the application process early. Please also sign up for the mailing list to receive updates about the challenge.

About ABCD: The ABCD study is the largest long-term study of brain development and child health in the United States. The ABCD Research Consortium consists of a Coordinating Center, a Data Informatics and Analysis Center, and 21 research sites across the country, which recruited over 11K children ages 9-10. For each participant, the study acquires structural, diffusion, and functional brain MRIs as well as genetic, neuropsychological, behavioral, and other health assessments. The goal of ABCD is to determine how childhood experiences (such as sports, videogames, social media, unhealthy sleep patterns, and smoking) interact with each other and with a child's changing biology to affect brain development and social, behavioral, academic, health, and other outcomes.

About the Challenge: Determining the neural mechanisms underlying general intelligence is fundamental to understanding cognitive development, how it relates to real-world health outcomes, and how interventions (education, environment) might improve outcomes through adolescence and into adulthood. A major factor in measuring general intelligence is fluid intelligence (Carroll, 1993), which the ABCD study measures via the NIH Toolbox Neurocognition battery (Akshoomoff et al., 2014) and from which demographic confounding factors (e.g., sex and age) are removed. The fluid intelligence scores of 4154 subjects will be provided to participants for training (3739 samples) and validation (415 samples), while the scores of 4515 subjects will have to be predicted based on T1-weighted MRI. The MRIs are acquired according to the following acquisition protocol.

The fluid intelligence scores are pre-residualized on data collection site, sociodemographic variables, and brain volume. Using the R function lm, a linear regression model was fit with fluid intelligence as the dependent variable and brain volume, data collection site, age at baseline, sex at birth, race/ethnicity, highest parental education, parental income, and parental marital status as independent variables. Any subject in the ABCD NDA Release 1.1 data set with a missing value in the dependent or independent variables of this linear model was excluded from the training and validation sets. After fitting the linear model on the resulting subset of list-wise complete data, residuals were computed for all subjects of the challenge data set. These residuals constitute the target values of the prediction contest and were computed with the following R code.
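
The actual R script is distributed with the challenge data; a minimal sketch of the procedure, with placeholder column names rather than the actual ABCD variable names, could look as follows:

```r
# Sketch of the residualization described above; the column names are
# placeholders, not the actual ABCD variable names.
d <- read.csv("abcd_challenge_data.csv")

model_vars <- c("fluid_intelligence", "brain_volume", "site",
                "age_baseline", "sex_at_birth", "race_ethnicity",
                "parental_education", "parental_income",
                "parental_marital_status")

# List-wise deletion: drop subjects missing any model variable.
d_complete <- d[complete.cases(d[, model_vars]), ]

# Fit the linear model on the list-wise complete subset.
fit <- lm(fluid_intelligence ~ brain_volume + site + age_baseline +
            sex_at_birth + race_ethnicity + parental_education +
            parental_income + parental_marital_status,
          data = d_complete)

# Residuals for all challenge subjects: observed minus model prediction.
d$residualized_score <- d$fluid_intelligence - predict(fit, newdata = d)
```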

In addition to the fluid intelligence scores, the challenge organizers will also provide skull-stripped images affinely aligned to the SRI24 atlas, segmented into regions of interest (ROIs) according to that atlas, and the corresponding volume scores of each ROI via a CSV file. Neither the challenge organizers nor ABCD are responsible for the quality of the derived data. Publications using the data should cite the Data Supplement of Pfefferbaum et al., Altered Brain Developmental Trajectories in Adolescents After Initiating Drinking. Am J Psychiatry, 175(4), pp. 370-380, 2018. Specifically, each raw T1-weighted MRI was first converted into a NIfTI file using the Minimal Processing Pipeline of ABCD (Hagler et al., Image processing and analysis methods for the Adolescent Brain Cognitive Development Study, under review at NeuroImage). The T1 images were then processed by the cross-sectional component of the NCANDA pipeline (see Data Supplement). The processing involved noise removal and correction of field inhomogeneity confined to the brain mask, which was defined by non-rigidly aligning the SRI24 atlas to the T1-weighted MRI via ANTS. The brain mask was refined by majority voting across maps extracted by FSL BET, AFNI 3dSkullStrip, FreeSurfer mri_gcut, and the Robust Brain Extraction (ROBEX) method, each applied to combinations of bias- and non-bias-corrected T1-weighted images. Using the refined mask, image inhomogeneity correction was repeated, and the skull-stripped T1-weighted image was segmented into brain tissue (gray matter, white matter, and cerebrospinal fluid) via Atropos. Gray matter was further parcellated according to the SRI24 atlas, which was non-rigidly registered to the T1-weighted image via ANTS. Afterwards, the skull-stripped T1-weighted image and corresponding segmentations were affinely mapped (pose and scale) to the SRI24 atlas. The results were visually inspected and excluded from the challenge if they failed the two-tier quality check.
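
To illustrate the mask-refinement step: majority voting keeps each voxel that more than half of the candidate masks label as brain. A minimal sketch in R, with plain arrays standing in for the NIfTI volumes:

```r
# Majority voting across binary brain masks (sketch; in practice the
# masks would be read from NIfTI files, e.g. with the oro.nifti package).
majority_vote <- function(masks) {
  stopifnot(length(masks) > 0)
  votes <- Reduce(`+`, masks)          # per-voxel count of "brain" labels
  (votes > length(masks) / 2) * 1L     # 1 where more than half agree
}

# Toy example: three 2x2x1 masks; the result keeps voxels with >= 2 votes.
m1 <- array(c(1, 1, 0, 0), dim = c(2, 2, 1))
m2 <- array(c(1, 0, 1, 0), dim = c(2, 2, 1))
m3 <- array(c(1, 1, 0, 1), dim = c(2, 2, 1))
refined_mask <- majority_vote(list(m1, m2, m3))
```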

Contestants will be ranked separately on the validation data set (for which pre-residualized fluid intelligence scores will be provided) and on the test data set. For each data set, we will compute the Mean Squared Error (MSE) between the predicted scores and the pre-residualized fluid intelligence scores (R code). The pre-residualized fluid intelligence is computed via the algorithm described for the training data. If an algorithm is unable to produce a numerical prediction for a given test subject, the predicted value for that subject will be set to the value that gives the worst performance (i.e., largest MSE) from among the set of values produced by the same algorithm on the subjects in the test data set.
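
In essence, the scoring rule amounts to the following sketch (function and variable names are illustrative, not those of the official evaluation script):

```r
# Sketch of the scoring rule described above (illustrative names only).
score_submission <- function(predicted, actual) {
  # predicted: numeric vector with NA where the method produced no value;
  # actual: pre-residualized ground-truth fluid intelligence scores.
  if (anyNA(predicted)) {
    candidates <- unique(predicted[!is.na(predicted)])
    # Try each of the method's own outputs as a fill-in value and keep
    # the one that yields the worst (largest) MSE.
    fill_mse <- sapply(candidates, function(v) {
      p <- predicted
      p[is.na(p)] <- v
      mean((p - actual)^2)
    })
    predicted[is.na(predicted)] <- candidates[which.max(fill_mse)]
  }
  mean((predicted - actual)^2)
}
```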

For more information about the challenge, please also read the recent article about the challenge in Computer Vision News and the FAQ page.

Important Dates

Team Registration Deadline
February 15, 2019

Submission Deadline for Results and Code
March 22, 2019 (extended to March 24, 2019, 23:59 PDT)

Submission Deadline for Manuscript
March 29, 2019 (extended to April 15, 2019, 23:59 PDT)

Data Access

The fluid intelligence scores, raw T1-weighted MRIs, and derived data are accessible to challenge participants via the NDAR portal. To gain access to the data, please follow the four steps outlined in the tutorial. If NDAR approves your application, you will be allowed to download 500GB of ABCD data for free. This credit can be used to download the raw baseline T1-weighted MRIs of the ABCD study or the corresponding derived data provided by the challenge organizers. Note that the entire dataset is now available for download.

The preprocessed data are available through NDAR as a data collection consisting of the Training, Validation, and Testing datasets. To download the preprocessed imaging and volumetric data, follow the steps here. To download the residualized fluid intelligence scores for training and validation, simply sign into NDAR, select the Training or Validation data set, and download the CSV file listed under the 'Results' tab.

Team Registration

There are no limits or restrictions on team membership as long as the team complies with the NIMH Data Archive Data Use Certification of the ABCD project. Teams should register by the deadline using this form. Note that if members of a team are from labs associated with ABCD, their submission will be reviewed but the results will not be posted on the leaderboard.

Final Submission Information

An eligible submission consists of the predictions of fluid intelligence based on the T1-weighted MRIs of the test subjects, the source code generating those predictions, and a manuscript describing the method and findings. All documents must be submitted via CMT. Submissions not adhering to the specific guidelines outlined below will be automatically rejected. Submissions from research labs associated with ABCD will be reviewed but not listed on the leaderboard.

Predicted Scores and Code

Predicted Scores: The predictions for the 4402 subjects of the test data set need to be entered in the corresponding CSV file. An eligible CSV file contains predictions for at least 99% of these subjects, and the predictions must be based entirely on data provided by the challenge, i.e., the T1-weighted MRIs and derived data. Subjects not listed in this CSV file are not included when computing the error measure.
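
For illustration, a submission file could be written along these lines; the subject identifiers and column names below are placeholders, and the official template defines the actual header:

```r
# Writing a predictions CSV (sketch; header and IDs are placeholders).
predictions <- data.frame(
  subject_id      = c("NDAR_INVXXXXXXX1", "NDAR_INVXXXXXXX2"),
  predicted_score = c(-1.23, 0.87)
)
write.csv(predictions, "test_predictions.csv", row.names = FALSE)
```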

Source Code: The source code should be accompanied by a readme file briefly explaining the requirements and the procedure for running the code. All files should be combined into a single zip file, which may be password protected. The organizers will ask for the password if the code needs to be reviewed. Providing an incorrect password, or any other reason the organizers are unable to unzip the file, will result in automatic rejection from the challenge.

Manuscript

Manuscript: The document needs to clearly describe the data used for prediction, the method, and the findings, including the prediction error during training and validation. It needs to adhere to the formatting guidelines of MICCAI. Papers can be up to 8 pages excluding references and acknowledgments. Furthermore, an individual cannot be listed as an author on more than 6 manuscripts; those manuscripts need to describe methods and results that are different from each other and from previously published material.

Manuscripts will be reviewed and those passing review will be asked to submit a camera-ready version. The accepted manuscripts will be included in the proceedings of the challenge published by Springer LNCS (regardless of their rank in the challenge). Please include all author names and affiliations on the manuscript for all submissions.

Uploading an Entry

Uploading an Entry: By March 24, 2019, create a submission on https://cmt3.research.microsoft.com/ABCDNP2019. This initial submission must consist of a title, a short abstract (briefly outlining the method used for generating the results), the CSV file with the predicted scores, and the zip file with the source code. The manuscript needs to be uploaded as a PDF by the end of the day on April 15, 2019. To do so, open the submission already created and upload the PDF in the supplementary material section. The submission can include up to two additional files of supplemental information.

Final Leader Board

We received 29 submissions, 6 of which were disqualified due to invalid files or results. The final leaderboard on the testing data is as follows:

Final (Testing) Leader Board.
Rank Team Name Submission ID Mean Squared Error (MSE)
1 UCL CMIC_24 24 92.1298
2 Iowa Computational Psychiatry 23 92.4973
3 AI-Med_9 9 92.5625
4 Purdue150 32 92.7407
5 hellomri_28 28 92.8378
6 BrainHackWAW 16 92.9277
7 UR_Connectomics 13 92.9952
8 SCAN 7 93.0326
9 BIGS2 31 93.1559
10 AI-Med_8 8 93.2152
11 AFINE Team 27 93.6360
12 Berlin brain decoders 14 93.6764
13 UCL CMIC_20 20 93.8335
14 AI-Med_18 18 94.0104
15 UEF-MNI 12 94.0270
16 hellomri_29 29 94.0808
17 AI-Med_10 10 94.1034
18 UCSBVRL 4 94.2525
19 UM 247 6 94.4786
20 teamJJEAN 25 95.3800
21 MLPsych 17 95.6304
22 UCI CBCL 2 96.1806
23 BorregosTec 11 100.8900
24 CUMED 19 102.2498

Validation Leader Board

To evaluate your results on the validation set (415 subjects), train your model on the training set and save the predicted scores of the 415 validation subjects in a CSV file (similar to this template). Download the ground-truth scores CSV file and run this R code to calculate the MSE and R-squared metrics for your results. Please submit these self-assessed validation results using this form. After you click submit, you will see a page indicating that your information was received successfully. No separate emails will be sent to you.
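
In essence, the linked evaluation boils down to the following sketch (the column names subject_id, predicted, and score are assumptions, not necessarily those of the distributed files):

```r
# Sketch of the validation metrics (column names are assumptions).
pred  <- read.csv("my_validation_predictions.csv")  # subject_id, predicted
truth <- read.csv("validation_ground_truth.csv")    # subject_id, score

# Align predictions and ground truth by subject before comparing.
m <- merge(truth, pred, by = "subject_id")

mse <- mean((m$predicted - m$score)^2)
r2  <- 1 - sum((m$predicted - m$score)^2) /
           sum((m$score - mean(m$score))^2)
```
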
The submission deadline has passed; therefore, this table will no longer be updated.

Validation Leader Board. Last Updated: March 25, 2019
Rank Team Name Mean Squared Error (MSE)
1 BrainHackWAW 67.3891
2 MLPsych 68.6100
3 UCI CBCL 68.7868
4 BIGS2 69.3861
5 UCL CMIC 69.7204
6 MU 247 69.7212
7 UCSBVRL 70.5622
8 Purdue150 70.5787
9 AFINE Team 70.8283
10 STFRD3 71.3469
11 CUMED 71.5679
12 Iowa Comp Psych 71.6923
13 hellomri 71.6990
14 SCAN 71.8530
15 BorregosTec 71.8589
16 teamJJEAN 71.9032
17 USC 72.0988

Organizers

Chairs

Kilian M. Pohl

Stanford University

Wesley K. Thompson

University of California, San Diego

Co-Chairs

Ehsan Adeli

Stanford University

Bennett A. Landman

Vanderbilt University

Marius G. Linguraru

Children's National Health System

Susan Tapert

University of California, San Diego