This is a mirror of the fBIRN QA page, kept here in case the original changes.
Reference: Supplement IV: FBIRN Quality Assurance Program
The QA program has three parts: a custom phantom available at each site, one (and later two) fMRI scan series, and software to analyze and then share the QA data and processed results with the consortium.
The phantom used for the FBIRN QA is a custom-built 17 cm spherical agar phantom (ref). Sodium azide is added to prevent microbial growth, and NaCl and NiCl2 are added to match the T1, T2, and RF conductivity properties of brain tissue, so that the coil loading and NMR equilibrium conditions are similar to those of a typical head. An agar phantom was adopted over a liquid phantom to remove bulk motion and vibration effects on the phantom.
In the original FBIRN QA scan protocol (circa 2003-2006), sites scanned the phantom using whichever transmit/receive head coil was typically used during fMRI studies at each site. Scan parameters were matched across sites using the best available protocol, which included a 27-slice acquisition with a TR of 2 seconds (Ref friedman). The FBIRN QA program switched to using the standard multi-channel receive-only head coil at each site in 2007, once these coils became commonly used in fMRI. A new sequence protocol with 30 slices and a TR of 2 seconds was also adopted. The new imaging parameters took advantage of the improved capabilities of the hardware and software available on the FBIRN systems.
The QA data collected on the agar phantom were processed using the FBIRN automated QC processing tools (http://www.nitrc.org/projects/fbirn/). FBIRN developed a suite of custom software to process and share the QA imaging scans. This software wrapped the DICOM data with format-agnostic XCEDE wrappers, converted the image data to NIfTI-1 (http://nifti.nimh.nih.gov/nifti-1/), computed summary metrics and image maps, and generated plots and an easy-to-navigate HTML report for each scan.
An individual was identified at each site with the responsibility of promptly reviewing the QA reports from the analysis software upon completion of the QA acquisition. Software was also created to upload the data to a shared file system for distribution to any members of the FBIRN consortium for review. A single QA expert was tasked with reviewing and tracking the QA reports from all the FBIRN sites to provide a second, standardized evaluation of the QA data from the entire study. Software was created to automatically download all QA data, extract summary QA metrics, plot those metrics against time for each scanner, and track these metrics to identify scanners with potential issues. When such problems were identified, the QA expert contacted the local site QA person to discuss the potential issue, request that follow-up QA scans be acquired, and/or recommend that local service engineers be called in to evaluate the system for potential repair. Note that this system only worked because the local site experts "bought in" to the value of the FBIRN QA program and took concerns raised by the QA expert seriously.
Acceptance values for a variety of metrics under the FBIRN acquisition parameters were determined for each of the systems used in a given experiment, and were based on QA data acquired in the preparation phase of the experiment. Several FBIRN scanners did not meet these acceptance criteria and required repair before being deemed ready for use in one of the several MC-fMRI studies FBIRN has performed (ref?). Ongoing scanner QA data collection was also critical in identifying scanner problems that developed during human data collection, including a host of hardware failures (head coil, receiver board, cracked gradient coil, loose gradient coil mounting brackets, coil connector plugs on the table, failed driver module, failed RF amplifier, inadequate gradient coil cooling system, arcing shim coil connector, arcing gradient power supply cables, loose gradient cable mounting screws) and accidental sequence parameter changes. The QA program was also instrumental in smoothing the transition between the various software and hardware upgrades that occurred during the FBIRN project.
All of the many scalar measures, parameter plots, and voxel maps generated by the FBIRN QA analysis software contain important information on scanner performance and are useful in identifying a malfunctioning scanner; a complete description of the output report is beyond the scope of this paper (see Friedman and Glover1, and https://xwiki.nbirn.org:8443/xwiki/bin/view/Function-BIRN/AutomatedQA). We have found the following metrics particularly useful in tracking scanner performance and identifying scanner malfunction:
Mean voxel intensity: mean within the evaluation ROI.
Signal-to-Fluctuation-Noise Ratio: SFNR is the voxel-wise ratio of the temporal mean intensity to the temporal standard deviation of the 4D phantom image after quadratic detrending. The SFNR summary value is the mean SFNR within the evaluation ROI.
- Friedman L, Glover GH, The FBIRN Consortium. “Reducing interscanner variability of activation in a multicenter fMRI study: Controlling for signal-to-fluctuation-noise-ratio (SFNR) differences.” Neuroimage. September 2, 2006.
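As a rough illustration, the SFNR summary could be computed as in the following numpy sketch (the function and argument names are ours, not taken from the FBIRN tools):

```python
import numpy as np

def quadratic_detrend(ts):
    """Residual of a 1-D time series after removing a quadratic fit."""
    t = np.arange(ts.size)
    return ts - np.polyval(np.polyfit(t, ts, 2), t)

def sfnr_summary(data, roi_mask):
    """Mean SFNR within the ROI: per-voxel temporal mean divided by the
    temporal standard deviation of the quadratically detrended residual.
    data: 4-D array (x, y, z, t); roi_mask: 3-D boolean array."""
    vox = data[roi_mask]                                   # (n_voxels, t)
    resid = np.apply_along_axis(quadratic_detrend, 1, vox)
    return float(np.mean(vox.mean(axis=1) / resid.std(axis=1)))
```

For a phantom with mean signal 100 and unit temporal noise, this returns a value near 100.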
Percent signal fluctuation: ratio of the standard deviation of the residual of the time series average within the evaluation ROI after quadratic detrend to the mean signal intensity in the ROI, multiplied by 100.
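The percent-fluctuation definition above maps directly onto a few lines of numpy (a sketch with our own naming, not the FBIRN implementation):

```python
import numpy as np

def percent_fluctuation(data, roi_mask):
    """100 * std(quadratically detrended ROI-mean time series) / ROI mean.
    data: 4-D (x, y, z, t); roi_mask: 3-D boolean array."""
    ts = data[roi_mask].mean(axis=0)           # ROI-average time series
    t = np.arange(ts.size)
    resid = ts - np.polyval(np.polyfit(t, ts, 2), t)  # quadratic detrend
    return 100.0 * resid.std() / ts.mean()
```

A perfectly stable series yields a value of zero; any temporal noise raises it.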
Image smoothness: FWHM as calculated using the AFNI 3dFWHMx tool within the evaluation ROI for the 3 cardinal directions in the data after voxel-wise quadratic detrend.
RDC: The radius of decorrelation (RDC) comes from a Weisskoff analysis of the image time series, measuring the size of the ROI at which statistical independence of the voxels is lost (ref).
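A Weisskoff analysis can be sketched as follows (our own simplified formulation, assuming square ROIs centred on a user-supplied point; parameter names are ours): for an ROI of width N, the relative fluctuation F(N) of the ROI-average time series should fall as F(1)/N when voxels are statistically independent, and the RDC is the ROI width at which the measured fluctuation stops following that ideal curve.

```python
import numpy as np

def weisskoff_rdc(data, center, n_max=21):
    """Weisskoff analysis on the middle slice of a 4-D (x, y, z, t)
    dataset.  For square ROIs of width N = 1..n_max centred at `center`,
    F(N) is the std/mean of the quadratically detrended ROI-average
    time series; RDC is estimated as F(1) / F(n_max)."""
    x0, y0 = center
    z0 = data.shape[2] // 2
    t = np.arange(data.shape[3])
    f = []
    for n in range(1, n_max + 1):
        h = n // 2
        roi = data[x0 - h:x0 - h + n, y0 - h:y0 - h + n, z0, :]
        ts = roi.mean(axis=(0, 1))
        resid = ts - np.polyval(np.polyfit(t, ts, 2), t)
        f.append(resid.std() / ts.mean())
    return f[0] / f[-1]
```

For ideal white noise the RDC approaches n_max itself; correlated noise (e.g. from scanner instability) pushes it well below that.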
Ghosting level: mean voxel intensity in the Nyquist ghost ROI as defined in Greve et al. (ref).
Bright ghosting level: mean of the highest 10% of voxels in the ghosting ROI as defined in Greve et al.
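Given a ghost ROI mask (its construction, per Greve et al., is not reproduced here), both ghosting metrics reduce to simple statistics over the temporal-mean image; a minimal sketch with hypothetical names:

```python
import numpy as np

def ghost_metrics(mean_image, ghost_mask):
    """Ghosting level and bright-ghosting level.  mean_image: temporal
    mean of the 4-D series; ghost_mask: boolean Nyquist-ghost ROI
    (defined as in Greve et al.).  Returns (mean intensity of all
    ghost voxels, mean intensity of the brightest 10% of them)."""
    ghost = np.sort(mean_image[ghost_mask])
    n_top = max(1, int(round(0.10 * ghost.size)))
    return float(ghost.mean()), float(ghost[-n_top:].mean())
```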
bgSFNR and iSFNR: Greve et al. showed that by combining mean and variance measures from a high- and a low-flip-angle fMRI QA scan, two highly useful QA measures, the background signal-to-noise fluctuation (bgSFNR) and instability signal-to-noise fluctuation (iSFNR) values, can be computed and used as acceptance criteria for scanner fMRI performance.
Per-slice variance maps: a measure of the "spikiness" of the data for each slice at each time point. The map shows the mean z-score over all voxels in a single slice after quadratic detrending.
Per-slice variation: this image shows, for each slice at each time point in the data, a measure of “spikiness” at slice granularity that is insensitive to artifacts that affect all slices (e.g. head motion). Higher numbers indicate a “spike”. It is computed as follows:
- For each voxel remove the mean and detrend across time.
- Calculate the absolute value of the z-score across time for each voxel.
- For each slice at each time point, compute the average of this absolute z-score over all voxels in the slice, producing a Z×T matrix AAZ.
- For each slice at each time point, calculate the absolute value of the "jackknife" z-score of AAZ across all slices at that time point, producing a new Z×T matrix JKZ, which is the per-slice variation. (To compute a "jackknife" z-score, use all slices except the current slice to calculate the mean and standard deviation. The jackknife has the effect of amplifying outlier slices.)
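The steps above can be sketched in numpy (our own naming, a sketch rather than the FBIRN implementation):

```python
import numpy as np

def per_slice_variation(data):
    """Jackknife z-score of the slice-averaged absolute voxel z-scores.
    data: 4-D array (x, y, z, t).  Returns the Z x T matrix JKZ."""
    nx, ny, nz, nt = data.shape
    t = np.arange(nt)
    # 1) demean and quadratically detrend each voxel across time
    flat = data.reshape(-1, nt)
    coeffs = np.polynomial.polynomial.polyfit(t, flat.T, 2)
    resid = flat - np.polynomial.polynomial.polyval(t, coeffs)
    # 2) absolute z-score across time for each voxel
    az = np.abs(resid) / resid.std(axis=1, keepdims=True)
    # 3) average over all voxels in each slice -> Z x T matrix AAZ
    aaz = az.reshape(nx * ny, nz, nt).mean(axis=0)
    # 4) jackknife z across slices: mean/std taken over all slices
    #    except the current one, amplifying outlier slices
    jkz = np.empty_like(aaz)
    for s in range(nz):
        others = np.delete(aaz, s, axis=0)
        jkz[s] = np.abs(aaz[s] - others.mean(axis=0)) / others.std(axis=0)
    return jkz
```

A spike injected into one slice at one time point stands out sharply in JKZ while leaving the other slices' scores essentially unchanged.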
Fourier analysis of the residuals: The mean value in the evaluation ROI is first subjected to a quadratic detrend followed by a fast Fourier transform. The spectral plot is evaluated for discrete frequencies with high amplitude, which can occur because of mechanical vibrations, such as those from the refrigerator ("cold head"), or gradient-induced resonances.
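This detrend-then-FFT step can be sketched as follows (our own naming; a minimal illustration, not the FBIRN code):

```python
import numpy as np

def residual_spectrum(data, roi_mask, tr):
    """Amplitude spectrum of the quadratically detrended ROI-mean time
    series.  data: 4-D (x, y, z, t); tr: repetition time in seconds.
    Returns (frequencies in Hz, amplitudes)."""
    ts = data[roi_mask].mean(axis=0)           # ROI-average time series
    t = np.arange(ts.size)
    resid = ts - np.polyval(np.polyfit(t, ts, 2), t)  # quadratic detrend
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    amps = np.abs(np.fft.rfft(resid)) / ts.size
    return freqs, amps
```

A narrow mechanical-vibration resonance then appears as a single dominant peak at its frequency in the returned spectrum.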
Normative values for all measurements depend on scanner, coil and other aspects of each site and cannot be tabulated here. The most critical aspect is to develop a routine QA scan regimen and maintain complete records for each scanner so that malfunctions become readily apparent.