examplecode Documentation Release 0.1

Tim

Apr 23, 2021

CONTENTS:

1 About
2 Arterial Spin Labeling (ASL)
3 Directions
4 DIY fMRI
5 DTI
6 Fieldmaps
7 JPEG Format Variations
8 Legacy Web Pages
9 Optimizing FSL and SPM
10 Sensation and Perception (PSYC450)
11 Image to Inference (PSYC589/888)
12 Perfusion-weighted imaging (PWI)
13 spmScripts
14 Slice Time Correction (STC)
15 Publications
16 Indices and tables

CHAPTER ONE

ABOUT

1.1 Topics

• Attention and perception. Our senses flood the brain with an overwhelming amount of information – how do we select the relevant information? Clinical syndromes such as spatial neglect (where individuals ignore information on their left side) provide insight into how the brain achieves this.

• Speech and language. Communication is invaluable for sharing information, planning and coordinating actions in a group. Human language is quantitatively and qualitatively a quantum leap from that seen in other species. Cognitive neuroscience is able to employ new techniques to understand language. This work will help reveal who we are, and may help people who have suffered profound communication difficulties following brain injury.

1.2 Tools

• Behavioral Tasks. Each of our studies requires us to develop sensitive behavioral tasks: for example, in an fMRI study of time perception we will want to compare tasks where the person makes temporal judgments (e.g. which item appeared first) to perceptually identical tasks where the participant judges a different domain (for example, the shape of the items). We have extensive skill in designing and implementing these tasks.

• MRI scans use radio signals to take pictures of the brain. fMRI is a type of MRI scan that is sensitive to oxygenation concentration, allowing us to infer brain function. Typically, we have people perform simple tasks in the scanner while we collect fMRI scans. We have used this technique to identify the brain areas involved with speech and perception. In addition, we have used fMRI to examine recovery from brain injury.

• Lesion behavior mapping associates the location of brain injury with the resulting symptoms. For example, we use this technique to identify the brain injuries that result in speech impairment. We can also use this technique to identify the best targets for neurosurgery.


• Transcranial Direct Current Stimulation. tDCS applies weak electrical currents to the scalp. It appears that tDCS can induce subtle changes in brain activity, with regions near the positive electrode showing slightly increased firing rates, whereas regions under the negative electrode show small decreases in firing rate. Curiously, these changes seem to persist for many minutes after the stimulation ends. Because this technique is very safe and inexpensive, it offers potential for helping people recover from brain injury as well as revealing the function of the healthy brain. We have devised methods for double-blind testing of tDCS (where neither the participant nor the experimenter knows the type of stimulation used) to investigate this mysterious but promising technique.

• TMS uses a brief magnetic pulse to stimulate parts of the brain near the TMS coil. The region of stimulation is relatively focused. By introducing TMS pulses while participants are conducting a task, we can temporarily disrupt processing in the stimulated region, revealing whether that region is necessary for the task.

CHAPTER TWO

ARTERIAL SPIN LABELING (ASL)

Arterial Spin Labeling (ASL) is a Magnetic Resonance Imaging (MRI) technique for measuring blood flow. Whereas conventional Perfusion Weighted Imaging (PWI) uses an external agent like Gadolinium (Gd) to tag blood, ASL directly tags the blood entering the brain. Conventional PWI has high signal to noise, but we tend to only track the perfusion of a single bolus (e.g. Gd is injected once into the arm, and we measure the latency and amount of this agent reaching different parts of the brain). In contrast, with ASL we have low signal to noise, but can easily acquire and compare hundreds of images. The rest of this page focuses on ASL; for more details on conventional PWI, please visit my PWI page.

ASL can be used to make quantitative measures of perfusion, such as relative cerebral blood flow (rCBF). In addition, ASL scans can be used to infer brain function, similar to T2*-weighted fMRI. In general, ASL fMRI has slower acquisition, a reduced field of view, and worse spatial distortions relative to T2* fMRI. However, it does provide a more direct measure of blood flow that may be helpful in cases where the canonical hemodynamic response has been disrupted. In any case, FSL makes it pretty easy to analyze ASL fMRI in a manner that is very similar to T2* fMRI, and the FSL web page provides more details. Therefore, the rest of this web page describes the analysis of quantitative ASL data.

There are many different types of ASL sequences. Your sequences will be limited by the type of scanner you have, as well as the sequence licenses you have available. Several major variations are CASL (continuous ASL), PASL (pulsed ASL) and pCASL (pseudo-continuous ASL). At the MCBI we have the official Siemens PASL sequence (PICORE Q2T) and the pCASL from JJ Wang and his team. The Siemens sequence is elegant, as it automatically creates a rCBF map. However, we tend to prefer the pCASL sequence for our studies.

A crucial step when acquiring ASL data is to set the correct post-label delay time. This is the time between when the blood is tagged in the neck and when the image of the brain is acquired. If the delay is too short, the blood will not have time to transit into the image, and if it is too long it will already have washed out of the image. This is especially important, as we acquire pairs of images: one labelled and one unlabelled. One could imagine that with a very brief post-label delay and a short time between volumes (TR), the tagged blood might not get to the head in time for the 'labelled' image, but be clearly present during the 'unlabelled' image acquisition. I strongly suggest consulting the people who developed your sequence to get their suggestions for post-label delay times. For the pCASL sequence we have, Ze Wang has suggested a delay time in the range of 700-1000ms for healthy children and young adults; for older individuals (65 or older) he suggests 1200-1500ms; and for stroke patients or patients with vascular diseases he notes that 1800ms might be required. In any case, this selection should be standardized for a single study. For example, a shorter delay may be required for a study of stroke that hopes to examine both the intact and injured hemisphere. However, if you plan to acquire images from special populations (e.g. people with strokes) you may want to consult your physicist.

As you adjust the delay time, the minimum TR is also influenced. Ze Wang suggests that your actual TR should always be at least 100ms longer than the minimum TR, since the labeling pulses induce Magnetization Transfer (MT) effects in the brain regions to be imaged: before the spins return to the steady state, they are suppressed to some extent by the labeling pulses. Longer TRs provide more signal (more time for spins to relax), though at the cost of fewer acquisitions (and more difficulty temporally interpolating data for fMRI-like task based paradigms). In general, a TR of 3500ms seems appropriate (unless your population requires a very long delay time).

Another thing you should bear in mind with the CfN pCASL sequence is the bandwidth (indeed, bandwidth is an important decision for echo-planar imaging [EPI] protocols). With regards to the CfN pCASL sequence, Ze Wang notes that high bandwidths can lead to severe eddy currents, causing phase accumulation and an N/2 ghost artifact. Performance varies between scanners, but he suggests that 2232 to 2694 Hz/pixel should be appropriate for most Siemens Trios (you should also check that images from your scanner do not show aliasing artifacts; if you see artifacts, collect images without iPAT [as this can also cause artifacts] and iteratively take images while decreasing the bandwidth until the artifacts go away).

You will also want to specify your labeling time: for example, if your protocol PDF reports 80 blocks, then Labeltime = 80*0.0185, since the CfN pCASL RF block duration is always 0.0185s (20 RF pulses with gaps). For our protocol, we use 80 RF blocks, a bandwidth of 2442 Hz/px, and acquire 17 slices. With these settings the minimum TR is 2090ms plus the delay time. Since slicetime = (minTR - labelingtime - delaytime)/#slices, we can compute that our slice time is about 36.35ms (see the sketch below). For example, with a 1200ms post-label delay the minimum TR is 3290ms, and we typically acquire with a TR of 3500ms.
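To make the arithmetic above concrete, here is a minimal Matlab sketch of the timing computation, using the protocol values quoted above (the variable names are mine, not part of any toolbox):

    % Slice-timing arithmetic for the CfN pCASL protocol described above.
    nBlocks   = 80;                 % RF blocks reported in the protocol PDF
    labelTime = nBlocks * 0.0185;   % label duration: each RF block is 0.0185 s
    delayTime = 1.200;              % post-label delay in seconds (population dependent)
    minTR     = 2.090 + delayTime;  % minimum TR: 2090 ms plus the delay time
    nSlices   = 17;                 % slices acquired per volume
    sliceTime = (minTR - labelTime - delayTime) / nSlices;  % roughly 36 ms
    fprintf('label = %.3f s, minTR = %.3f s, slice time = %.1f ms\n', ...
        labelTime, minTR, sliceTime * 1000);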

2.1 pCASL Analysis Simplified

This page is old. While the scripts below work, new users may want to consider using FSL’s BASIL. We have set up a simple script for processing our pCASL data. This script requires that you have the following installed:

• Matlab (no toolboxes required)
• SPM12
• ASLtbx – since this script uses 4D NIfTI format files, you need a recent version of ASLtbx (the asl_perf_subtract.m text file should report being version May 2 2012 or later).
• One NIfTI format T1-weighted anatomical scan per participant.
• One NIfTI format 4D ASL file per session (each participant may have multiple sessions).
• My asl_process_subj.m Matlab script (this needs to be in your Matlab path; you might as well put it into your ASLtbx folder; the download includes the script and sample images).

If your data is in DICOM format, or if your ASL data is 3D (one file per timepoint, instead of a single file with all time points), you can convert them with dcm2niix. Before you use the script, it is a good idea to adjust the origin of the images to be near the anterior commissure. This ensures that the normalization algorithm is able to align your images. The manual describes how you can do this with SPM's display function. You will want to run the script once for each participant. Here are some examples of what you could type from the Matlab command prompt:

• asl_process_subj('ep2dpcaslipat2r1.nii','T1.nii'); : single session ASL with a T1-weighted anatomical scan


• asl_process_subj(strvcat('ep2dpcaslipat2r1.nii','ep2dpcaslipat2r2.nii'),'T1.nii'); : two sessions of ASL with a T1 scan
• asl_process_subj : if you run the script without specifying any files, an initial dialog box asks you to select the first volume of each session; for example, the picture on this page shows the selection of the images ep2dpcaslipat2r1.nii and ep2dpcaslipat2r2.nii. Note that you only select the first timepoint for each session. You will next be prompted to select the T1-weighted anatomical scan.
• For multiple participants, call asl_process_subj multiple times. You can paste a batch of calls into Matlab to process several subjects sequentially; for example, consider data from subjects 1 and 2: asl_process_subj('S1ASL.nii','S1T1.nii'); asl_process_subj('S2ASL.nii','S2T1.nii'); (a loop sketch appears below).

The script will report details for each stage of processing, and reports the critical choices for ASL processing. This script is currently set up for the CfN pCASL sequence with the settings used at the McCausland Center for stroke participants, but you can edit the file for any variations in the pCASL sequence, or even adapt it for PASL and CASL acquisitions. This shows the flexibility of the CfN's ASLtbx.

Here is a description of what my script does, suitable for insertion into publications: Data were processed using the ASLtbx (Wang et al., 2008) with SPM8 (https://www.fil.ion.ucl.ac.uk/spm/software/spm8/). For each session, labeled and unlabeled ASL images were independently motion corrected and then a combined mean image was computed. The mean image was coregistered to match the T1-weighted anatomical image. The ASL images were then resliced to match the mean image and spatially smoothed with a 6mm full-width half-maximum Gaussian kernel. Cerebral blood flow (CBF) was then estimated by subtraction, resulting in a mean CBF image. The T1 scan was then normalized using SPM8's unified segmentation-normalization, and these parameters were used to reslice the CBF images (to 2mm isotropic) and T1 image (1mm isotropic) to standard space. SPM8's default brain mask was then used to mask the normalized CBF images (with a 50% threshold).
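For a larger study, the batch of calls above can be generated with a simple loop. This is just a sketch assuming one 4D ASL file and one T1 scan per participant, following the S1ASL.nii/S1T1.nii naming shown above:

    % Process several participants sequentially (filenames follow the example above).
    subjects = {'S1', 'S2', 'S3'};
    for i = 1:numel(subjects)
        aslFile = [subjects{i} 'ASL.nii'];  % 4D pCASL series for this participant
        t1File  = [subjects{i} 'T1.nii'];   % T1-weighted anatomical scan
        asl_process_subj(aslFile, t1File);
    end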

2.2 Links and References

• Ashburner J, Friston KJ. (2005) Unified segmentation. NeuroImage. 26:839-51.
• Wu WC, Fernández-Seara M, Detre JA, Wehrli FW, Wang J. (2007) A theoretical and experimental investigation of the tagging efficiency of pseudocontinuous arterial spin labeling. Magn Reson Med. 58:1020-7.
• Wang J, Licht DJ, Jahng GH, Liu CS, Rubin JT, Haselgrove J, Zimmerman RA, Detre JA. (2003) Pediatric perfusion imaging using pulsed arterial spin labeling. J Magn Reson Imaging. 18:404-13.
• Wang Z, Aguirre GK, Rao H, Wang J, Fernández-Seara MA, Childress AR, Detre JA. (2008) Empirical optimization of ASL data analysis using an ASL data processing toolbox: ASLtbx. Magn Reson Imaging. 26:261-9.
• The ASLtbx webpage, journal article.


CHAPTER THREE

DIRECTIONS

Dr. Rorden's Neuropsychology Labs at the University of South Carolina examine the behavioral difficulties people experience after brain injury. We are located in the city of Columbia, the state capital of South Carolina, in the Innovista district, a short walk from the restaurants of the Vista, the state house and the historic University Horseshoe.

3.1 Brain Stimulation Lab, Computer Cluster and Primary Offices

• Discovery I Building
• 915 Greene Street
• Columbia, SC 29208
• 803 404 2573


3.2 McCausland Center for Brain Imaging

• 3T Magnetic Resonance Imaging
• Palmetto Health Richland
• Medical Park Road
• Columbia, SC 29203

CHAPTER FOUR

DIY FMRI

This web site is designed for people who are setting up a Magnetic Resonance Imaging (MRI) scanner for functional imaging (fMRI). The basic idea of fMRI is to rapidly acquire T2*-weighted images of the brain while the participant performs a cognitive task. We can then analyze the data to determine which brain regions change their signal in response to the behavioral task. To do this, we typically need a way to present the task (usually a visual task via a computer screen or an auditory task via headphones) while we collect behavioral data (e.g. button presses). We also need a way to synchronize the scans with the behavioral task.

Since MRI is an inherently expensive tool, most new centers should certainly budget for professional MRI-compatible solutions for stimulus presentation, behavioral recording and scan synchronization. I try to describe several of the professional options on this web page. However, the costs of MRI are largely fixed, and many scientists hope to utilize a clinical system when it is not required for clinical duties. For this audience, I describe inexpensive methods to solve these problems. However, I wish to stress that these techniques must be implemented carefully, and I take no responsibility for any issues that might arise using the methods described here. Aspects of MRI make it inherently unsuitable for many off-the-shelf electronic devices. Issues include a strong constant (static) magnetic field, rapid switching of magnetic fields (gradients, which can induce electrical signals), radio frequency (RF) transmission and RF acquisition.

4.1 Setup

I really urge new fMRI users to get to know their MRI service engineers and application specialists. These individuals are very knowledgeable, and in my experience they are pivotal to getting the best out of your scanner. Further, it is worth giving them a heads up if you are going to add new hardware. Specifically, they can help ensure you have the correct software licenses (most users will need an Echo Planar Imaging [EPI] license, which is generally common and pre-installed). Second, you will want to have an idea of where the EPI trigger hardware is located – for Siemens scanners you will want an optical cable routed from the equipment room to the location of your behavioral computer (last I checked, Philips MRI systems have an electrical connector at the console). You will also want the service engineer to help you enable the scanner to generate the EPI trigger signals (on Siemens scanners, this is done once in the service software, whereas for Philips scanners this is done on the protocol scan card).

If you are establishing a new MRI center, it is critical to establish the location and configuration of your penetration panel(s). A penetration panel allows cables and light to enter the scanner bay without introducing RF artifacts. The image on the right shows a typical research fMRI penetration panel with a computer projector, optical cables (PST response glove cables are black, scanner trigger pulse is orange), and some metal electrical cables (plugged into BNC jacks). Tubes (wave guides) that run through the penetration panel need to be tuned for the frequencies used by your scanner. The larger the bore of the wave guide, the longer (and heavier) the guide. It is easy to send light (e.g. a computer projector beam) or optical cables through a wave guide, but you need to be very careful putting any wire cables through one: the cables need to be shielded, the signals should not be at the frequency used by the scanner, the shield should be grounded, and typically the signals should be run through an RF filter (the photo shows some filters on the DB9 and DB25 jacks). It is generally a good idea to have wave guides professionally installed – if they are poorly mounted they can introduce RF artifacts into all your scans. For most clinical setups, there are usually a few unused wave guides you can exploit, but you will want to have a careful discussion with your service engineer to ensure that these wave guides are not intended for some future hardware.

You will want to carefully select the sequences used for your fMRI study. Generally, it is a good idea to consult with an MRI physicist, and to examine the sequences used by other centers that have similar hardware. Here are examples of our team's evaluations of diffusion-weighted imaging (DWI) and Arterial Spin Labeling (ASL).

4.2 Visual stimuli

Computer screens are the typical way of presenting fMRI tasks. This allows you to present experiments written in EPrime, Presentation, Cogent, or your favorite programming language. There are basically four options here:

• The least expensive solution is to place a computer projector at the foot of the MRI scanner. With this method, you shine through the console window (shown in blue in the diagram). Since the projector is outside the Faraday cage, you can use an ordinary projector. There are two small disadvantages to this method. First, typical console glass includes a metal mesh to block radio signals, and this degrades the image a bit. The other problem is that this limits your view of the participant from the console – so it is harder to see if the participant is squirming or otherwise uncomfortable.
• You could put a computer projector beyond the head of the participant. Typically, the projector is located outside the scanner and shines through a wave guide (shown in red in the diagram). This does require a very large wave guide and a projector with a long-throw lens. An alternative to this method is to get an MRI compatible projector that is placed inside the magnet hall (so a large wave guide is not required), for example professional solutions by Avotec and


The first three methods require a mirror mounted on the head coil. Your scanner manufacturer should provide you with one of these (seen on the head coil in the picture on the right). The first two methods require a back projection screen. The image on the right shows the professional screen we use. However, a low budget version can be made using plastic (PVC) pipe from your local plumbing store and a large sheet of drafting mylar.

4.3 Auditory stimuli

• fMRI acquisition generates loud sounds (as we need to drive the gradients). This makes auditory presentation difficult. Most scanners include air conduction headphones that can be used to present sounds. There are also professional systems that provide better fidelity, including Avotec audio and Resonance Technologies. In any case, one option is to ask the participant to wear ear plugs as well as the headphones. That way, the scanner sounds are attenuated by both the ear plugs and the headphones, while the sounds you are intentionally presenting are only attenuated by the ear plugs. If you follow this approach, be aware that most ear plugs selectively reduce higher frequencies, which alters sounds – in the past I have used high fidelity Etymotic ETY•Plugs earplugs that have a more balanced range of attenuation.
• In any case, for studies with auditory stimuli you may want to consider sparse acquisition, where one inserts a few seconds of pause after each volume of data is acquired. This means that there are a few seconds where the scanner is silent and auditory stimuli can be presented during this interval. Since the hemodynamic response is sluggish, the noisy acquisition still captures the brain response from the quiet period.


4.4 Tactile stimuli

You can present tactile stimuli by using air puffs, piezoelectric ceramics, or electrical currents. However, each of these techniques has its own challenges, and they are used less often in everyday research. Therefore, I do not discuss these in detail here.


4.5 Eye Tracking

The human eye has poor acuity away from fixation, so you can learn a lot by observing where someone is looking. There are no simple, inexpensive solutions, but there are numerous professional solutions that you can find with a web search for the terms "fmri eye tracker". In my experience, there is huge variability in the professional products available. Before purchasing a system, I strongly advise visiting a research site that has the intended system installed and seeing it in operation by scientists (rather than sales reps).

4.6 Recording button responses

• We typically ask our participants to make responses during our fMRI session. This ensures that the participant is performing the task, and even allows us to assess performance-related brain activity (e.g. what brain activity is associated with correct/incorrect, fast/slow, etc. responses). There are a couple of solutions. These will typically require a wave guide, as described in the setup section.
• Electrically wired approaches provide the least expensive solution. One option is to use a plastic USB keyboard, a shielded MRI compatible USB cable, and a pi-filter tuned to your scanner's Larmor frequency (and grounded to the Faraday cage). This is a very simple solution, though note that the article demonstrates that some keyboards are better than others. In my experience the described keyboard has a pretty poor feel, so this is a workable but not ideal solution. One can purchase or create electrical devices with minimal ferrous metal using either USB signals or, better yet, a low-frequency electrical signal. These provide a simple solution to the problem, though care needs to be taken in selecting the buttons, cables, RF filters, and other components. A nice professional example of this approach is the devices provided by Hybrid Mojo. One potential problem with these electrical methods is that scanning could cause some of the long wires to heat, which could result in thermal injury. Be warned that radio frequency pulses could lead to heating, and therefore hardware that works fine in one sequence (e.g. a simple T2* fMRI sequence) might exhibit extreme heating in another sequence (e.g., an arterial spin-labeling sequence). One should be careful to ensure that the cables have a straight run, and are not looped.
• Wireless button response systems provide an elegant solution that does not require cumbersome wires. The Siemens Bluetooth physiological recording devices (described below) demonstrate the feasibility of this concept – emitting radio signals at a very different frequency than those used for MRI acquisition. However, the components would need to be carefully selected, shielded and tested.
• Fiber optic cables do not interfere with the radio signals used in MRI scanning, and therefore provide clear advantages over electrical methods. However, in my experience these systems also have poor connectors and poor button reliability, so if you go this route you should ensure a good warranty. Hollinger et al. (2007) describe this method. Professional solutions include Current Designs, PST Celeritas, NAtA, Cedrus, NordicNeuroLab, Resonance Technologies devices, and VPixx. You can also build your own optical response buttons, though it is difficult to achieve the desired tactile feedback.

4.7 Recording physiological data

• It is often useful to collect physiological measures such as heart rate or respiration. These data are often used as nuisance regressors (since physiological noise can interfere with our ability to detect cognitive responses). Alternatively, these may be our prime measures of interest (e.g. heart rate variability may reveal something about the emotional state of the participant).
• Many scanner manufacturers provide devices for measuring pulse and heart rate. For example, Siemens provides wireless (Bluetooth) heart rate (shown in photos on the right) and respiration sensors that record at ~50Hz. Philips includes heart rate and respiration recording at ~400Hz (though their data logging makes these files much harder to synchronize with scanner acquisition). However, it should be noted that the scanner manufacturers developed these tools primarily for data acquisition (e.g. triggering the MRI at a specific phase in the cardiac or respiration cycle), and therefore these devices may not be ideal for all situations (for example, measuring blood oxygenation, SpO2). Other devices do exist that can help in these situations. For example, Mark Wall reports that the PowerLab ADC and CED1401 can be used for MRI acquisition.

4.8 Scanner Synchronization

One critical requirement is knowing the timing between behavioral events (when did the participant see/hear/do something) and fMRI acquisition. We typically want our behavioral experiment to start precisely when the scanner begins acquiring data. This turns out to be somewhat tricky, as the scanner initializes most scans with a shimming sequence (which can take a variable amount of time) and also discards the first few scans (as these have more T1 effects). Therefore, starting your task when you hear the sounds of the scanner, or when you detect RF signals, is not sufficient. Fortunately, modern scanners can be set to generate trigger signals at the beginning of each EPI volume (as described in the Setup section). The trick is turning these very brief (nanosecond) signals into longer signals that most computer inputs can detect (milliseconds). Further, Siemens scanners generate optical signals, which we need to convert to electrical signals.

• Several professional behavioral input devices include a trigger detector. For example, the Current Designs hardware includes an optical detector, while the PST Fiber Optic Button Response system includes an electrical trigger detector (via a BNC connector).


You can also build your own trigger pulse detector. The photograph and schematics on the right show a very simple device using the ubiquitous TLC555 timer as a monostable multivibrator and the versatile U-HID Nano. The photo shows the standard Siemens-compatible optical connector on the left; this board contains a small light that is red when the power is on, but briefly flashes off when a scan pulse is detected. The circuit is socketed onto a U-HID Nano (right side) that supplies power and transmits responses to the computer via the USB cable. This circuit is easy to build on a breadboard or solder onto a proto board, but if you are interested you can also contact me for Gerber files that you can have fabricated by vendors such as the terrific OSH Park (USA) or Fritzing (Europe; they also provide excellent software for designing your own boards). I like the U-HID devices because you can program them to emulate many different devices (mouse buttons, keyboard presses, gamepad presses), and once this configuration is programmed any computer will see it as the designated response device. Whereas the professional devices have a fixed mapping (e.g. for PST, a trigger pulse emulates the '^' keypress), the U-HID can be any device. Personally, I prefer to emulate devices such as gamepads that do not fill up the keyboard buffer or interfere with any programs that may be running. More details regarding synchronizing with your scanner can be found on my data logging page.
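To illustrate how an experiment might wait for a trigger that has been mapped to a keypress, here is a minimal Matlab sketch. It assumes Psychtoolbox is installed and that your trigger device emulates the '5' key (an assumed mapping; change the KbName call to match your hardware):

    % Wait for the scanner trigger (emulated here as a '5' keypress) before starting.
    trigKey = KbName('5%');        % assumed mapping -- set this to match your device
    KbReleaseWait;                 % make sure no key is currently held down
    while true
        [isDown, tTrigger, keyCode] = KbCheck;
        if isDown && keyCode(trigKey)
            break;                 % tTrigger marks the start of the first EPI volume
        end
    end
    % Schedule all subsequent stimulus onsets relative to tTrigger.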

4.9 Relevant Links

• Our StimSync web page describes how to use the Teensy with experiments. For example, a Teensy could be used instead of the U-HID Nano.
• I have an old page on controlling experiments with legacy serial and parallel ports. Unfortunately, it is getting harder to find computers with those ports.


CHAPTER FIVE

DTI

This page assumes you are familiar with the ideas of diffusion tensor imaging (see my class for details), so you may want to read that first. This advanced page describes how FSL's topup can be used to undistort diffusion images, and provides a Matlab script for batch processing large numbers of images with topup and FDT. Topup does an incredible job unwarping images, but the current beta form involves a lot of steps and knowledge about your dataset. This script will automatically detect and separate the B0 images required by topup, simplifying its use. The effects are shown in the images on the left: the top row shows a distorted image, the middle row shows an image distorted in the opposite direction, and the bottom row shows the pair combined after undistortion.

5.1 Undistorting DTI Data

Diffusion images are acquired rapidly, and slices are typically acquired using 2D sequences (usually Echo Planar Imaging [EPI]) that acquire an entire 2D slice in a single excitation. Unfortunately, these sequences take a while to read out the slice, and therefore any field homogeneity errors will cause spatial distortion. While scanner shimming aims to reduce field inhomogeneity, the solutions will be imperfect, particularly where there are different types of tissue present – for the brain this is particularly the sinuses near the orbital frontal cortex and regions near the ear canals. While these distortions are inevitable, we can compensate for them by acquiring two series of images that have opposite phase-encoding directions but are identical in all other respects. Each of these images will end up showing distortions of identical magnitude but opposite direction. Topup compares these two images to compute the nonlinear midpoint deformation between them. The deformation can then be applied to all the images, creating a combined image with minimal spatial distortion. Topup calculates these deformations based on the B0 images (as these are not contaminated by the additional eddy current deformations of the directional scans), and then computes a mean image for each pair (positive and negative polarity) of images (so that both the B0 and directional scans are corrected). Therefore the two input series (positive and negative) create a single unwarped series that should have 1.4 times the signal to noise of either original (thanks to signal averaging).

When you run my script 'dti_1_eddy' it expects you to provide positive and negative polarity images (these should be 4D NIfTI format images). For example, if your anterior-to-posterior phase-encoded image is named "AP.nii" and the posterior-to-anterior image is named "PA.nii" you would run something like './dti_1_eddy.sh "AP" "PA"'. If all works well, the script will run TOPUP followed by Eddy. If you only provide a single image ('./dti_1_eddy "PA.nii"') the script will run the legacy eddy_correct (which only does a linear spatial correction). Note that for users who have a graphics card set up for CUDA, as well as a CUDA-capable version of FSL's Eddy, there is a much faster way to do the same thing: my script 'dti_1_eddy_cuda'. A batch-processing sketch follows.
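If you have many participants, you can batch the script from Matlab. This is just a sketch assuming each subject's images follow a hypothetical S#_AP/S#_PA naming scheme and that dti_1_eddy.sh is in the current folder:

    % Run dti_1_eddy.sh for each subject (file layout here is hypothetical).
    subjects = {'S1', 'S2', 'S3'};
    for i = 1:numel(subjects)
        ap = [subjects{i} '_AP'];   % anterior-to-posterior phase-encoded 4D NIfTI
        pa = [subjects{i} '_PA'];   % posterior-to-anterior series
        system(sprintf('./dti_1_eddy.sh "%s" "%s"', ap, pa));
    end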


5.2 Notes

• By default, most Philips and GE scanners use a monopolar Stejskal-Tanner sequence, while most Siemens scanners use a bipolar twice-refocused spin echo sequence. The bipolar sequences show less distortion and are less sensitive to background gradients. The monopolar sequences have more signal thanks to a shorter TE. Users of bipolar sequences will see less benefit from topup. Siemens users who want to use monopolar DTI can install the Multi-Band Accelerated Pulse Sequences (even if you do not use multiband features, these sequences allow you to choose between monopolar and bipolar). These sequences are really impressive, though you may want to tune the protocols for your scanner (centers with 32-channel head coils may want to look at the protocols from the Multiband Imaging Test-Retest Pilot Dataset). Setting up DTI sequences is challenging, and may be specific to your scanner (e.g. for the Siemens Trio avoid echo spacing in the range of 0.59 ms to 0.70 ms to minimize harmonic vibrations).
• By default, most scanners only acquire a single B0 image when acquiring DTI – for example a 32 direction scan will have 33 volumes, with the first being the B0 scan. However, it is often advisable to make sure that about 1/10th of your images are B0 scans, as this will provide more accurate ADC/MD estimates as well as better topup estimates. For Siemens scanners you can create a custom DiffusionVectors.txt that specifies any number of gradient directions (online tools calculate optimal directions), and you can add extra B0 scans by inserting directions with the vector "0 0 0" at regular intervals. You can also use the command line tool 'gps' that is included with FSL.
• Topup requires you to specify the readout time of your image. However, as long as this is identical for all your scans (and it typically is), you do not have to be precise about this value. In the case where the readout is identical for all scans, an incorrect readout time will simply result in an error in the scale of the calibrated fieldmap (topup will assume the shim was better or worse than reality), but the undistortion will be identical. Since we usually acquire spin-echo DTI images with the minimum possible echo time, the readout time will be somewhat less than the echo time (TE). Consider our example DTI scan with a TE of 79.2ms – the refocusing pulse occurs at 39.6ms, so we cannot begin reading out until after 39.6ms and must end by 118.8ms (for a mean TE of 79.2ms). To precisely calculate the time you will want to compute readOutTime = echoSpacing * ((matrixLines*partialFourier/accelerationFactor)-1). You can find the echo spacing on the 'Sequence' tab of the Siemens scanner (referred to as 'dwell time' by some). For our example dataset the echoSpacing = 0.77ms (0.00077s), and we have 90 lines of data acquired with full Fourier (partialFourier=1) and GRAPPA=2 (accelerationFactor=2), so the readout time is 0.03388 seconds (= 0.00077 * ((90*1/2)-1)); see the sketch after this list. I strongly suggest you look at the FSL eddy web page for hints on setting up your sequence.

29 May 2013. Chris Rorden with suggestions from Dirk den Ouden and Svetlana Malyutina
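As a worked example of the readout-time formula in the final bullet, here is a short Matlab sketch using the example protocol's numbers (the variable names are mine):

    % Readout time for topup: echoSpacing * ((matrixLines*partialFourier/accel) - 1)
    echoSpacing    = 0.00077;  % seconds ('dwell time' on the Siemens Sequence tab)
    matrixLines    = 90;       % phase-encoding lines acquired
    partialFourier = 1;        % full Fourier acquisition
    accel          = 2;        % GRAPPA acceleration factor
    readOutTime = echoSpacing * ((matrixLines * partialFourier / accel) - 1);
    fprintf('readout time = %.5f s\n', readOutTime);  % prints 0.03388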

CHAPTER SIX

FIELDMAPS

In order to observe brain activity, we need to rapidly acquire images of the brain. Echo Planar Imaging (EPI) is the most popular technique for rapid MRI acquisition. However, EPI images often exhibit substantial signal dropout and spatial distortion in regions where the magnetic field is inhomogeneous (for the brain, this means the frontal cortex and medial temporal lobe). We cannot recover the lost signal, but we can attempt to undistort our images if we collect field maps (which measure the field inhomogeneity). These field maps take about a minute to acquire, and have the same positioning and image dimensions as our fMRI data. Undistortion can be accomplished with FUGUE (FSL) or the Fieldmap Toolbox (SPM). Both expect your data to be in NIfTI format, so if your images are in DICOM format, you will need to convert them (e.g. use dcm2nii or SPM's conversion functions).

There are two reasons that reduced distortion may be important for fMRI studies. First, for any fMRI study, undistortion will make the shape of each individual's fMRI data more similar to their anatomical scan – this improves the quality of the normalization, leading to improved group level statistics throughout the brain (see Cusack & Papadakis, 2002). Second, EPI unwarping is often helpful in studies where we want to examine brain regions known to have severe homogeneity issues (frontal pole, orbito-frontal cortex, medial temporal lobe [esp. hippocampus]). If your goal is the second, you should really attempt to tune your protocol to reduce artifacts prior to collecting any data. The reason for this is that homogeneity errors cause both signal loss and spatial distortion. Fieldmaps only help correct the shape of your EPI image, and do not recover missing signal. Also, the field maps are pretty smooth, and often do not accurately model the sharp inhomogeneities at the edge of the brain. In general, undistortion will have a minimal benefit for scans acquired on modern MRI scanners with highly optimized shimming methods (e.g. a Siemens Trio), but may be helpful for boutique or high field applications.

The sample EPI data on this web page was intentionally designed to have relatively low spatial distortion at the cost of signal to noise. Specifically, the following techniques were all used to reduce spatial distortion and signal loss:

• fast readout (parallel imaging, GRAPPA x2). For systems without parallel imaging, you could consider partial Fourier.
• short TE (27ms)
• high bandwidth (2694 Hz/px)
• thin slices (2.5mm, 20% gap)

An additional reason for minimal distortion (though without a cost in signal) is that the images were acquired on a scanner with very sophisticated shimming algorithms (Siemens Trio). You should consider similar measures if you are interested in susceptible regions. If you use more traditional techniques, you will see more spatial distortion and the fieldmap correction will be more substantial.


6.1 Using SPM’s Fieldmap Toolbox

This tutorial describes how to use the Fieldmap Toolbox (created by Jesper Andersson and Chloe Hutton) that is included with SPM. For this tutorial, we will need the following files:

• EPI data we want to undistort (typically a series of T2*-weighted fMRI images). In the example these are the images fmris007a001_01..03. You will need to know the 'Total EPI readout time' for your EPI data. For Siemens scanners, you can determine this by opening up the protocol with the Exam Explorer and checking the "Echo Spacing" in the "Sequence" tab. For example, this EPI image has 64 lines with a 0.46ms echo spacing, so one would expect the total readout time to be 29.44ms. However, you need to be careful when using acceleration techniques such as partial k-space or parallel imaging. In our case, we are using parallel imaging (GRAPPA) with an acceleration factor of two, so the gradients only collect 32 lines, and therefore our readout time is 14.72ms (one of the main reasons why there is relatively little distortion in these scans; see the sketch after this list).
• A fieldmap. Fieldmaps are generated by acquiring a gradient echo image with two echoes, in our example at 4.92 and 7.38ms after excitation. A phase map shows the difference in phase between these two echoes. In our example, we have two magnitude images, grefieldmaps003a1001_1.nii and grefieldmaps003a1001_2.nii (for the 4.92 and 7.38ms echoes), and a phase map (grefieldmaps004a2001.nii). If you are unsure which image is the phase map, you can use SPM's 'display' function. You will need to know the echo times used – your DICOM data will report this at tag (0018,0081), or with Siemens check the 'Contrast' tab in the Exam Explorer.
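Here is the same bookkeeping as a small Matlab sketch, so you can check your own protocol's numbers (values are from the example above):

    % Total EPI readout time for the SPM Fieldmap toolbox (values from the example).
    echoSpacing = 0.46;              % ms, from the Sequence tab
    kLines      = 64;                % k-space lines in the protocol
    grappa      = 2;                 % parallel-imaging acceleration factor
    readout = echoSpacing * kLines / grappa;   % 14.72 ms with GRAPPA x2
    teDiff  = 7.38 - 4.92;                     % fieldmap echo time difference: 2.46 ms
    fprintf('readout = %.2f ms, TE difference = %.2f ms\n', readout, teDiff);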



Here are the steps for undistortion with SPM's Fieldmap Toolbox:

• Choose Toolbox/Fieldmap from SPM's menu window.
• Press 'Load Phase' and choose your phase image ('grefieldmaps004a2001.nii'). You will be asked if you want to have this scaled to radians – select Yes. A new version of the fieldmap ('scgrefieldmaps004a2001.nii') will be created that has an intensity range of -pi..+pi (Siemens data is initially in the range -4096..+4096).
• Press 'Load Mag.' and select one of your magnitude images ('grefieldmaps003a1001_1.nii').
• Make sure to set your 'Short TE' and 'Long TE' to the correct values – 4.92 and 7.38 in our example (alternately, if you have placed the 'pm_defaults_Trio_CABI.m' in SPM's toolbox/FieldMap folder, you can select 'Trio_CABI' from the defaults pull-down menu to set these values).
• You can check your other defaults. We tend to mask the brain.
• Press 'Calculate' – after a couple of minutes a fieldmap is displayed. You can interactively click on the display and the amount of inhomogeneity for that voxel will appear in the 'Field map value Hz' field. Several new image files are created, including a voxel displacement map (VDM).
• Press 'Load EPI image' and select your functional data (e.g. fmris007a001_01.nii), and make sure the Total EPI readout time is set correctly (14.72ms in our example).
• Press 'Load structural' and select one of your magnitude images ('grefieldmaps003a1001_1.nii').
• Press 'Write unwarped' – a new undistorted image is created (ufmris007a001_01.nii).
• The image on the right shows the SPM graphics window at this stage – the 'Unwarped EPI' should have a more similar shape to the 'Structural' than the 'Warped EPI'. If the error is worse, change -ve to +ve.
• You can now preprocess your fMRI data. At this stage you will want to do your motion correction using the 'realign and unwarp' option, selecting the VDM file created here.


6.2 Alternatives

The method described above uses SPM's fieldmap toolbox. One can accomplish similar goals using FSL's FUGUE. Both the FieldMap toolbox and FUGUE require you to acquire a fieldmap on your scanner. However, you can also correct for spatial distortions seen in fMRI using FSL's TOPUP. The basic idea with TOPUP is that you acquire two sets of scans where the phase-encoding direction is reversed between sets. This leads to images with distortion of identical magnitude but opposite direction. TOPUP is able to use these images to compute a non-linear undistortion. There is one wrinkle with regards to fMRI: TOPUP is typically employed with DTI, which uses spin-echo (SE) sequences, yet fMRI typically uses gradient-echo (GE) scans. With spin echo scans the signal is moved (bunched up in some regions, pulled apart in others), whereas inhomogeneous areas of gradient echo scans exhibit both spatial distortion and signal dropout. This makes it challenging to directly compute the spatial distortion using only GE images. To solve this, you would acquire your typical GE fMRI data, next acquire a few SE volumes with identical parameters and positioning, and finally acquire a third set of a few SE volumes with identical parameters except reversed phase encoding. You use TOPUP to compute the coefficients for the SE images and then apply these (using applytopup) to the fMRI data.
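A minimal sketch of this workflow, issued from Matlab via system calls. It assumes FSL is installed and on the path; all filenames and the acqparams.txt contents are hypothetical, and the sign convention for the phase-encoding rows depends on your acquisition, so check the FSL topup documentation:

    % Merge the two spin-echo calibration sets (opposite phase encoding) into one 4D file.
    system('fslmerge -t se_both se_AP se_PA');
    % acqparams.txt needs one row per volume: x y z readout-time(s), e.g.
    %   0  1 0 0.0147
    %   0 -1 0 0.0147
    system(['topup --imain=se_both --datain=acqparams.txt ' ...
            '--config=b02b0.cnf --out=topup_out']);
    % Apply the estimated field to the gradient-echo fMRI series.
    system(['applytopup --imain=fmri --inindex=1 --datain=acqparams.txt ' ...
            '--topup=topup_out --method=jac --out=ufmri']);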


6.3 Tips

Note that tools like FUGUE expect you to provide the magnitude and phase images. If you are setting up your acquisition, you should request these images on the scanner console. However, if you are dealing with archival data, you should be able to recognize and convert different forms of images. For example, consider the ADNI participant 130_S_4405 who was scanned in 2012 and again in 2017. The 2012 sequence only saved the image as imaginary (image below, 1st column) and real (2nd column) components of the complex image. In this case, you can use the FSL tool fslcomplex to create the magnitude (3rd column) and phase image (right column).

6.4 Notes

The Fieldmap Toolbox web page is a great source of information. Also, note that the toolbox has a ‘help’ button which displays a useful manual. By default, when the toolbox converts an image to radians it attempts to scale the minimum intensity to -pi and the maximum to +pi. This is usually pretty accurate, but often has small errors when the phase map does not have extreme values. If you have Siemens data (where scaled values -4096..+4096 denote -pi..+pi), you can edit your Fieldmap.m file to get the precise conversion. The sample dataset includes the code you would need to insert.
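As a rough illustration of that precise conversion (not the toolbox's actual code), here is a Matlab sketch using SPM's image I/O; it assumes the Siemens -4096..+4096 scaling and reuses the example filenames from this chapter:

    % Rescale a Siemens phase map so that -4096..+4096 maps exactly onto -pi..+pi.
    hdr = spm_vol('grefieldmaps004a2001.nii');       % example phase image
    img = spm_read_vols(hdr);
    img = img * (pi / 4096);                         % exact scaling, no min/max guess
    hdr.fname = 'scgrefieldmaps004a2001.nii';        % name the toolbox would create
    spm_write_vol(hdr, img);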

6.5 References

• Jezzard P & Balaban RS. 1995. Correction for geometric distortion in echo planar images from B0 field variations. MRM 34:65-73.
• Hutton C et al. 2002. Image distortion correction in fMRI: a quantitative evaluation. NeuroImage 16:217-240.
• Cusack R & Papadakis N. 2002. New robust 3-D phase unwrapping algorithms: application to magnetic field mapping and undistorting echoplanar images. NeuroImage 16:754-764.
• Jenkinson M. 2003. Fast, automated, N-dimensional phase-unwrapping algorithm. MRM 49:193-197.

6.6 Useful links

• SPM FieldMap Toolbox web page and Example datasets.

CHAPTER SEVEN

JPEG FORMAT VARIATIONS

Most people think that "JPEG" refers to a single format. However, while one format dominates most fields, in medical imaging there are different forms of JPEG images that require different tools to decode them. The JPEG (Joint Photographic Experts Group) committee defined a number of formats for saving images. In particular, JPEG is a popular format for digital photographs, and is commonly used by digital cameras and web pages. However, some JPEG formats are also used in medical imaging. The expectations for medical imaging are a bit different than in other fields: there is a preference for higher precision data (e.g. saving an image with 65,536 shades of gray instead of just 256) and higher quality (fewer artifacts).

This web page describes the different forms of JPEG compression specified for DICOM files, the dominant image format for medical imaging. The DICOM standard currently specifies 35 different ways an image could be embedded into a DICOM file. These formats are referred to as the Transfer Syntax. Here I describe the most popular Transfer Syntaxes that use JPEG formats, and demonstrate how my dcm2nii software decodes these images.


7.1 Baseline JPEG

Common in DICOM: Transfer syntax 1.2.840.10008.1.2.4.50 describes the 8-bit classic lossy JPEG that is identical to the most common JPEG images created by digital cameras and used on web pages. These are easy to support – you can typically decode them using the API of your operating system, or for universal support use the popular IJG library or the elegant nanoJPEG library. To convert a DICOM image to this format you can use dcmcjpeg with the command "./dcmcjpeg +eb in.dcm out.dcm".

Rare in DICOM: Transfer syntax 1.2.840.10008.1.2.4.51 describes the standard lossy JPEG format, but extended to store higher precision (e.g. 12 rather than 8 bits per pixel, yielding up to 4096 distinct intensity levels). Standard JPEG libraries cannot handle these images. The typical solution is for developers to recompile a version of the IJG library to support this pixel format. However, the resulting library is no longer able to cope with 8-bit images. Fortunately, this format appears to be exceptionally rare: someone who demands high precision data is probably not interested in lossy compression. If you insist on creating one of these images you can use the command "./dcmcjpeg +ee in.dcm out.dcm".

7.2 Compressed lossless JPEG

Common in DICOM: Transfer syntaxes 1.2.840.10008.1.2.4.57 and 1.2.840.10008.1.2.4.70 refer to a lossless JPEG format that is exceptionally rare outside of the medical domain (and completely different from both the lossless JPEG-LS and lossless JPEG-2000 encoding formats). While this was fully described in the JPEG ISO/IEC 10918-1:1994 T.81 (09/92) standard, it did not gain traction outside of medical imaging (where GIF and PNG became the most popular lossless formats). This legacy lossless JPEG is a simple format, and only uses Huffman encoding without the typical discrete cosine transforms (DCT). However, the fact that these images are typically saved with 16-bit precision means the format is not supported by most libraries, which generate an error saying they cannot decode "SOF type 0xc3". I have written my own library to support this format, though other tools (e.g. dcmtk) use custom-patched variations of the IJG library. This is not a very efficient compression method, and personally I would strongly recommend users investigate file-based compression (.zip, or disk-driver-enabled compression) over this arcane format. This is the default output of dcmcjpeg, probably explaining its widespread popularity; for example, the command "./dcmcjpeg in.dcm out.dcm" generates a DICOM image with transfer syntax 1.2.840.10008.1.2.4.70, while the command "./dcmcjpeg +el in.dcm out.dcm" generates an image with syntax 1.2.840.10008.1.2.4.57. You can also create these files with gdcmconv (e.g. 'gdcmconv -J in.dcm out.dcm').

7.3 Compressed lossless JPEG-LS

Rare in DICOM: In theory, the JPEG-LS standard looked promising: better compression than the ancient lossless JPEG, while offering similar compression ratios at far higher speeds than the much more complex JPEG2000 lossless standard. JPEG-LS (ISO/IEC 14495-1:1999 / ITU-T.87) uses DICOM transfer syntaxes 1.2.840.10008.1.2.4.80 and 1.2.840.10008.1.2.4.81. These images can be created with gdcmconv (e.g. 'gdcmconv -L in.dcm out.dcm'). Furthermore, you can configure Horos to save to this format.


7.4 Compressed JPEG2000 (lossy and lossless)

Rare in DICOM: Transfer syntaxes 1.2.840.10008.1.2.4.90, 1.2.840.10008.1.2.4.91 and 1.2.840.10008.1.2.4.92 refer to JPEG2000-based compression. This is very different from the other JPEG methods, using wavelets rather than the DCT. This is a technically impressive format – at extreme compression ratios it does not have the blocky artifacts of conventional JPEG. At typical compression ratios it tends to produce files that are perhaps 15% smaller than conventional JPEG. However, adoption has been very slow, perhaps because conventional JPEG was good enough, JPEG2000 libraries are hard to integrate into software, JPEG2000 is relatively slow to process images, and JPEG2000 was superseded by newer formats like HEIF. In my experience the Jasper library is elegant, but it does have problems with some 16-bit images. On the other hand, the OpenJPEG library is cumbersome (the calls have changed a lot between versions, and it is poorly documented), but it is very robust. In my experience, these images remain rare (perhaps since the free dcmcjpeg does not support them, while the professional dcmjp2k does). These images can be created with gdcmconv (e.g. 'gdcmconv -K in.dcm out.dcm'). Furthermore, you can configure Horos to save to this format.

7.5 Comparing lossless JPEG compression

Since we tend to worry about the consequences of lossy compression, a comparison of the various lossless methods is of interest. There are several reasons why you may want to save your raw DICOM data uncompressed: disk space is inexpensive relative to the cost of scanning, decoding compressed images is slow, many tools do not support compressed DICOMs, and while these tools compress the image data they do not compress the verbose header (so you may be better off simply archiving DICOM data in popular file-level compression formats like .zip). For those still interested in choosing a lossless compressed DICOM format, here is a quick comparison for MRI data. Note that in this example, JPEG-LS reduces the file size to 60% of the raw image, takes 30% longer to compress than creating lossless JPEG, and is several times slower to decode (open, convert, view) than raw data. This table suggests that JPEG2000 as implemented in OpenJPEG is exceptionally slow for both compression and decompression. On the other hand, JPEG-LS provides good compression with a modest performance penalty.

Table 1: JPEG performance (relative to uncompressed raw)

    Method               Size  Encode  Decode
    Raw                  1.00  -       1.0
    Lossless JPEG        0.65  1.0     5.0
    JPEG-LS              0.60  1.3     7.7
    JPEG-2000 lossless   0.61  3.9     71.6

7.6 Sample source code

Attached below are three C programs that illustrate converting JPEG compressed DICOM images to uncompressed TIFF images. One example uses NanoJPEG for conventional JPEG images. Another uses my own code for lossless JPEG. The final example uses OpenJPEG for JPEG2000. They all use Paul Bourke’s code to generate TIFF images (I chose TIFF since it is a popular format that supports 16-bit images). The sample images come from the Lead Tools sample images.


CHAPTER EIGHT

LEGACY WEB PAGES

• A link to my ancient but perhaps useful web pages.


CHAPTER NINE

OPTIMIZING FSL AND SPM

Magnetic Resonance Imaging (MRI) provides powerful tools for understanding how the healthy brain functions, and can also provide insight into developmental and neurological disorders. However, to make these inferences we need to heavily process the images, dealing with common sources of noise and adjusting for individual differences in brain shape and architecture. These steps are computationally intensive, and can take a while even with the fastest modern computers. This page provides suggestions for selecting the best hardware for analyzing brain imaging data.

My center is currently conducting a large study to understand the consequences of stroke. We hope to provide accurate prognosis and tailor the optimal treatment strategy for future neurological patients. This study acquires and analyzes a broad range of modalities:

• Lesion maps are drawn on T2-weighted scans which are aligned to T1 images. We then perform Enantiomorphic normalization. This allows us to understand the influence of the size and location of the brain injury. The normalization parameters are used for the subsequent stages. This stage uses SPM.
• We analyze Arterial Spin Labelling using the SPM-based ASLtbx. This allows us to investigate perfusion.
• We analyze resting state data using our own SPM-based toolbox.
• We process our diffusion tensor imaging (DTI) data using a series of FSL tools, including topup, eddy, dtifit, bedpostx, and probtrackx.
• We quantify the DTI results using our own SPM-based routines, allowing us to understand how the brain's connectivity correlates with the cognitive and behavioral limitations observed following stroke.

The table below shows the time required to analyze these stages for a single individual using two computers that each cost about $700. The i7-4790K is a modern (in 2015) quad core CPU with hyperthreading (appearing to have 8 CPUs), while the dual X5670 computer has 12 cores (24 hyperthreads) and was released in 2010 and purchased used from eBay (with only a modern solid state disk added):

Table 1: Seconds to process one participant at each stage

    Hardware                      T1-SPM  ASL-SPM  Rest-SPM  DTI-FSL  DTI-SPM  Total
    4.00 GHz 4790K (4 cores)      238     77       106       4154     234      4810
    2.96 GHz X5670 (12 cores)     449     143      207       2838     307      3944
    Speed-up                      1.88    1.86     1.95      0.68     1.31     0.82


9.1 SPM considerations

For my pipeline, SPM only requires a minority of the processing time, so for my DTI studies I really need to focus on optimizing FSL rather than SPM. The architects of SPM deserve a lot of credit for optimizing this tool out of the box. It is heavily vectorized and takes advantage of Matlab's implicit multithreading, so some stages benefit from having many cores. Likewise, for specific situations where Matlab is not efficient, the designers have crafted mex files that speed analysis. The performance benefit of the modern 4790K (Haswell) CPU over the older X5670 (Westmere) is more than one would expect from clock speed alone. This certainly reflects design improvements in the newer chips, but may also reflect that recent versions of Matlab use advanced features such as AVX2. In general, SPM seems well suited to inexpensive, consumer computers. The sketch below illustrates why vectorized code is so much faster than interpreted loops.
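To see why vectorization matters, consider the toy sketch below. This is only an analogy of my own in NumPy (SPM itself is Matlab code, not shown here): the same arithmetic is dramatically faster when expressed as a whole-array operation than as an interpreted per-element loop.

    import time
    import numpy as np

    x = np.random.rand(2_000_000)

    t0 = time.time()
    y_loop = np.empty_like(x)
    for i in range(x.size):           # interpreted loop: one Python call per element
        y_loop[i] = x[i] * 2.0 + 1.0
    t1 = time.time()

    y_vec = x * 2.0 + 1.0             # vectorized: a handful of optimized C loops
    t2 = time.time()

    print(f"loop {t1 - t0:.2f}s, vectorized {t2 - t1:.4f}s")
    assert np.allclose(y_loop, y_vec)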

9.2 FSL considerations

• Many of the slow FSL tools can be run in parallel. Therefore, the number of cores available can dramatically accelerate performance (see graph).
• Hyperthreading does not help FSL performance. Performance is limited by the number of physical cores. Note that the 4790K has 4 physical cores but 8 logical cores (i.e. hyperthreads). However, performance for this processor does not improve when more than 4 cores are used (see graph).
• Beyond clock speed, modern CPUs show little or no improvement over 5-year-old CPUs for FSL's probtrackx and bedpostx tools (which are the bottlenecks in my pipeline). This is unlike other programs, where the newer processors are much faster. For example, SPM shows a 68% improvement for the modern 4.4 GHz 4790K versus the 2.93 GHz X5670 (released March 2010), while with FSL the improvement is just 35%. This is best seen by comparing the older 2.9 GHz X5670 Xeon (dual, for 12 cores) to the 2.9 GHz Xeon E5 2666 v3 (released in April 2016, with 18 cores). Note that when we limit ourselves to 12 cores these systems provide identical performance.
• The pre-compiled FSL distribution is pretty generic (supporting all 64-bit x86 CPUs with the "-m64" option), and is not tuned to modern CPUs. One strategy might be to recompile FSL on your latest-generation CPU using the "-march=native" flag.
• FSL's native distribution is designed to work on old computers, and omits useful instructions added to recent CPUs (such as AVX). Therefore, the easiest way to test this is to either build FSL using a more recent Fedora distribution or use the CERN devtoolset that ports recent versions of GCC to CentOS 6.x.
• In my brief exploration I found no benefit from these advanced features for the FSL tools that are slow for me (bedpostx and probtrackx): Haswell computers took just as long to process data using the default compiler settings (which work on any 64-bit x86 computer) as when the tools were recompiled to take advantage of the latest Haswell features (using gcc 4.8.2 on CentOS and 4.9.2 on Fedora, both with "-march=native"). Perhaps these routines are not ideal for these new instructions, or perhaps the compiler needs some hints. In my (limited) experience, AVX does not provide huge benefits over SSE for most algorithms (unlike SSE vs x86). See also CERN's experience with AVX.
• Note that bedpostx can run faster if you use a high-end graphics card (GPU) instead of the central processing unit (CPU). In testing with Ben Torkian we found a GTX Titan about 85 times faster than a single core of a 4770K. Of course, in reality we would devote multiple CPU cores to this task, and even then the GPU requires some CPU-based housekeeping. A more realistic comparison using our datasets finds bedpostx requires about 1362 seconds using all four cores of a 4790K, while an NVIDIA Titan Z on an older computer requires about 135 seconds. Beyond FSL, GPUs can accelerate many neuroimaging tasks, and can be easily integrated into Matlab as explored in my work with the Research Cyberinfrastructure group. Generally, for accuracy many scientific computations use double-precision calculations, which are much faster on workstation-class GPUs (NVIDIA's Tesla systems, and some models of the NVIDIA Titan). However, bedpostx_gpu seems to run pretty quickly on commodity GPUs (which typically have fast single precision but slow double precision). For example, I found an NVIDIA GTX 970 is able to process our DTI dataset in just 169 seconds. GPUs can accelerate other slow tasks – for example, Moises Hernandez from the FSL group allowed me to beta-test his probtrackx2_gpu, and an old computer with a Titan Z was able to compute a dataset in just 451 seconds where a 4-core 4790K CPU required 2465 seconds and an 18-core Xeon v3 required 1125 seconds.

9.3 Optimizing FSL, cost no object

If cost is no object, you will want a large computer cluster for FSL, with GPU nodes if you use bedpostx.

9.4 Optimizing FSL, on the cheap

At least for my DTI analyses, it is clear that FSL really thrives when provided with lots of cores, but does not care much whether they are the latest generation. Further, since many of the parallel tasks are conducted in 2D, you typically do not need a lot of RAM. Given this, you can take advantage of the fact that many companies purchase their servers on 5-year leases: you can visit eBay and purchase a 5-year-old cluster for pennies on the dollar. My 5-year-old 12-core X5670, purchased used and upgraded with an SSD (total investment of $700), delivers about 70% of the performance of the latest 18-core Xeon E5 v3 (where the CPU alone costs more than $4000). Combining a few old computers with Sun Grid Engine could provide a very inexpensive cluster. Darek Mihocka made an excellent suggestion that one could use the cloud to process data. Indeed, for this evaluation I rented a high-end Xeon E5 2666 v3 system (referred to as a c4.8xlarge by Amazon Web Services). This is a great way to evaluate whether the latest hardware provides a performance boost relative to your current equipment. Further, if you only need to occasionally process datasets, it is probably much less expensive to rent a cloud server than to invest in your own.


9.5 Optimizing FSL, without a cluster

Typically, to parallelize FSL you need to install grid engine software such as Condor or Son of Grid Engine. However, this is inconvenient if you have a simple Linux workstation or a Linux laptop. In addition, grid engines are now effectively not installable on Apple Macintosh computers running macOS. One can take advantage of the fact that any FSL program that is able to use a grid engine will submit jobs through FSL's 'fsl_sub' script. By default, if a grid is not available the job will be computed by a single core. With a little modification we can change this behavior so that if a grid is not available the job will use all available cores (a sketch of the idea appears after this list). To do this:

• Download this modified version of fsl_sub.
• Install the new version. Here I assume the downloaded file is in your Downloads folder with the name 'fsl_sub.txt' and FSL is installed in /usr/local/fsl:
  – sudo cp /usr/local/fsl/bin/fsl_sub /usr/local/fsl/bin/fsl_sub_orig
  – sudo cp ~/Downloads/fsl_sub.txt /usr/local/fsl/bin/fsl_sub
  – sudo chmod +x /usr/local/fsl/bin/fsl_sub
• At this stage, you can run FSL as usual, and hopefully it will be faster.
• If you want to test the benefit, you can temporarily disable the function with the command "FSLPARALLEL=0; export FSLPARALLEL".
• You can temporarily force it to use precisely 8 cores with the command "FSLPARALLEL=8; export FSLPARALLEL".
• You can temporarily force it to automatically detect the number of cores (the default behavior) with the command "FSLPARALLEL=1; export FSLPARALLEL".

To make the change permanent, add the desired FSLPARALLEL setting to your profile; for example, if you are using the bash shell you could type 'nano ~/.bash_profile' to edit your configuration.
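The sketch below illustrates the idea behind the modified script, not the script itself: when no grid engine is present, the queued commands are farmed out across the local cores. The FSLPARALLEL semantics follow the description above; the job list is a placeholder.

    import os
    import subprocess
    from multiprocessing import Pool

    def run(cmd):
        # Each worker runs one queued shell command to completion.
        return subprocess.call(cmd, shell=True)

    if __name__ == "__main__":
        jobs = ["echo job1", "echo job2", "echo job3"]  # placeholder commands
        val = int(os.environ.get("FSLPARALLEL", "1"))
        if val == 0:
            n = 1                      # parallelism disabled: one core
        elif val == 1:
            n = os.cpu_count()         # auto-detect the core count (default)
        else:
            n = val                    # force an explicit core count
        with Pool(processes=n) as pool:
            pool.map(run, jobs)        # one job per core until the queue drains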

9.6 Optimizing zlib

Unlike SPM, FSL (and many other neuroimaging tools) will save brain images in the compressed .nii.gz format, reducing the disk space required. FSL (like most tools) dynamically links to the zlib library to compress and decompress these images. While decompression is fast (indeed, for the slow disks often found on clusters it may be faster to read compressed images than raw images), compression is very slow. A simple trick to accelerate all of these tools is to replace your zlib with the Cloudflare zlib. This is a drop-in replacement for zlib that utilizes the SSE 4.2 instructions (Linux/macOS, CPUs since 2008) or AVX instructions (Windows, CPUs since 2011) of modern computers. This zlib library remains single threaded (unlike pigz), so it will not influence other processes running on your cluster. Your compression instantly becomes faster, and you do not need to recompile FSL or any other tools: any tool that dynamically links to zlib will experience faster compression. Below you can see the impact of this. Here we use a fast local disk to read an uncompressed 16-bit integer image and save it as a compressed 32-bit image (the default output of fslmaths). This example emphasizes the impact of accelerated compression (3.25 times faster); however, each stage of FSL (and any other tool that uses zlib) will benefit. As an added benefit, notice in this example that the Cloudflare zlib (1.2.8) compresses the file to a smaller size than the original zlib (1.2.3). AFNI users can install a Cloudflare-accelerated pigz and set the AFNI_COMPRESSOR=PIGZ environment variable for improved performance. The asymmetry between compression and decompression speed is easy to verify, as the sketch below shows.
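The compress/decompress asymmetry can be demonstrated with Python's bundled zlib; the synthetic array below is just a stand-in for a NIfTI volume (random data, so the compression ratio is worse than for real brain images).

    import time
    import zlib
    import numpy as np

    # Synthetic 16-bit "image": random values compress poorly but time realistically.
    data = np.random.randint(0, 1024, size=10_000_000, dtype=np.uint16).tobytes()

    t0 = time.time()
    packed = zlib.compress(data, 6)      # gzip's default compression level
    t1 = time.time()
    zlib.decompress(packed)
    t2 = time.time()

    print(f"compress {t1 - t0:.2f}s, decompress {t2 - t1:.2f}s, "
          f"ratio {len(packed) / len(data):.2f}")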


9.7 Future Considerations

Like other centers, we upgraded our Siemens Trio to the latest-generation Prisma. The Human Connectome Project sequences show dramatic benefits due to many features, including new gradients, reduced dielectric effects and advanced multi-band tricks. However, this will mean substantially more data to process. While most modalities will see an increase of 2-3x, the DTI sequences will see a dramatic increase in both resolution and directions. This means that GPU- and cluster-based solutions will become increasingly necessary for DTI analyses. This page focuses on Intel/AMD architectures; however, recent ARM-based CPUs, such as the Apple M1 CPUs released in 2020, show considerable promise.


CHAPTER TEN

SENSATION AND PERCEPTION (PSYC450)

10.1 Details

• Course Title: "Sensation and Perception"
• Instructor: Chris Rorden, Office 227 Discovery I (John Absher will provide clinical lectures)
• Course Code: PSYC 450, 3 credits
• When: Fall 2021
• Where: TBD
• Syllabus:
• Textbook (the 2nd or any later edition):
  – Wolfe et al (2008). Sensation & Perception. 2nd Edition. ISBN-10: 0878939539
  – Wolfe et al (2011). Sensation & Perception. 3rd Edition. ISBN-10: 087836572X
  – Wolfe et al (2014). Sensation & Perception. 4th Edition. ISBN-10: 160535211X
  – Wolfe et al (2017). Sensation & Perception. 5th Edition. ISBN-10: 1605356417
  – Wolfe et al (2020). Sensation & Perception. 6th Edition. ISBN-10: 1605359726
• Optional Alternative Textbooks:
  – Wandell (1995) Foundations of Vision. ISBN-10: 0878938532


  – Snowden et al. (2012) Basic Vision. ISBN-10: 019957202X
  – Schnupp et al. (2010) Auditory Neuroscience: Making Sense of Sound. ISBN-10: 026211318X
  – Foley and Matlin (2009) Sensation and Perception. ISBN-10: 0205579809
• Description: How does the brain weave information from the five senses into the rich tapestry of our experience? Let's find out! Illusions will reveal that the brain can be a con artist. Experiments will reveal the physics that shapes human perception. Using the latest technology, we will demonstrate how the senses are seamlessly integrated. Let's explore how the brain works with hands-on examples. Students in this course will learn how humans use the sensory systems of sight, taste, touch, smell, and hearing to perceive and interpret their environment. We will draw upon information from a variety of fields, including art, biology, physics, and psychology to address these issues. In addition to covering the material in the text, we will discuss current issues in perceptual research. The course is heavily weighted toward topics related to visual and auditory perception. This course may be taken to fulfill a major requirement in Psychology, or a minor requirement in Neuroscience. Students from all disciplines are welcome. Lectures are designed to provide an important foundation of information and to improve your ability to process and synthesize facts and concepts. Because exams will be primarily based on content covered in lecture, lecture attendance is crucial to your success in this course.

10.2 Chapter Slides

• Chapter 0: Introduction
• Chapter 1: History
• Chapter 2: Light to neural signals
• Chapter 3: Spots to stripes
• Chapter 4: Visual object recognition
• Chapter 5: Color vision
• Chapter 6: Space perception
• Chapter 7: Spatial attention
• Chapter 8: Motion perception
• Chapter 9: Hearing
• Chapter 10: Environmental hearing
• Chapter 11: Music and speech
• Chapter 12: Spatial orientation
• Chapter 13: Touch
• Chapter 14: Olfaction
• Chapter 15: Taste


10.3 Chapter Study Guides

• Chapter 1 Study Guide: Introduction docx
  – Key terms: psychophysics, panpsychism, criterion, cross-modal matching, magnitude estimation, signal detection, spatial frequency, phase
  – Anatomy of a neuron (neurotransmitters, neuron, axon, dendrites, receptor, myelin, etc.) and basic structure (layers) of the cortex
  – Types of nerve cells
  – Weber's law/theories, Joseph Fourier, and the "Allegory of the Cave"
  – Signal-to-noise ratio and examples
  – Neuroimaging techniques (fMRI, MEG, EEG, CT, PET)
  – Thresholds (JND, two-point touch, absolute)
  – Method of (adjustment, limits, constant stimuli)
• Chapter 2 Study Guide: The First Steps in Vision docx
  – Key terms: duplex, eccentricity, emmetropia, scotoma, synaptic terminal
  – Properties of light (absorb, contrast, filter, photoactivation, reflect, scatter, transduce, transmit, wave)
  – Anatomy of the eye (lens, iris, rods, cones, pigments, fovea, cornea, etc.) and what each part does
  – Rods and cones: where they are, what they do, what kind of cells the information passes through, what pigment is found in rods, night vs day vision
  – Myopia vs hyperopia; astigmatism
• Chapter 3 Study Guide: Spatial Vision docx
  – Key terms: aliasing, amplitude, acuity, contrast, spatial frequency, visual angle, topographical mapping, orientation tuning, phase, cortical magnification, simple cell, complex cell, pattern analyzers, adaptation
  – What receptive fields are, and the size needed to perceive texture
  – Snellen's 1862 method for designating acuity
  – Hubel and Wiesel's experiment as seen in the video on the slides
  – What do retinal ganglion cells respond to?
  – Know the entire pathway of a visual signal (starting at the ) and where the signal converges
• Chapter 4: Perceiving and Recognizing Objects docx
  – – What is it? What are key points about it?
  – Know all of the Gestalt grouping principles and what they look like (examples)
  – Illusory contours, Gestalt features, good continuation, occlusion, similarity, proximity, connectedness, parallelism, common fate, ambiguous figure, accidental viewpoint
  – Occlusion: relatable shape and non-accidental figures
  – Figure/ground: what is usually perceived as figure or ground?
  – What is middle vision? How is it summarized?


  – Famous studies: Hoffman and Richards; Tarr; Gauthier et al.
  – Disorders of perceiving and recognizing objects: Balint's syndrome, prosopagnosia, associative agnosia. What are the symptoms and what causes these disorders?
  – Object recognition models: naive template theory, recognition by components, multiple recognition committees, structural description, entry-level category. Examples for each, famous experiments/components associated with these models, and problems with the models?
• Chapter 5: Color docx
  – Trichromacy theory
  – Univariance
  – Color opponency
  – Subtractive and additive color mixing
  – Hue cancellation with lights (examples/combinations)
  – Color constancy
  – Metamers
  – Opponent color theory (output of cones and opponency between colors)
  – Afterimages – what color pairs belong to each other (i.e. you see red when you stare at green)
  – RGB scale – what percentages of RGB make which colors. Play with the color picker tool in Paint to see percentages (e.g. 100% red, 50% green, 0% blue is orange)
  – Disorders that cause one to not be able to perceive color
  – Types of cones (L, M, S) (protan, deutan, tritan): which colors each cone responds to; which color blindness is correlated with the absence of which cone; deuteranope, protanope, tritanopia, and monochromacy
  – How do these different cones respond at night or during the day (photopic vs scotopic)?
  – Repackaging of the retinal information – (L-M), (L+M), (L+M)-S – what do these combinations tell you / what are they used for?
  – The color violet. Why is it unique? Who/what can see it?
  – How do animals see color differently than humans?


  – Cats, birds, and dogs (tetrachromats vs dichromats vs trichromats)
• Chapter 6 Study Guide: Space Perception and Binocular Vision docx
  – Depth cues (what they are, examples of each, and whether they are binocular or monocular)
  – Motion parallax, aerial perspective, linear perspective, vanishing point, , occlusion, texture gradient, relative height, size constancy, metric depth cue, nonmetric depth cue, size and position cues, kinetic depth perception, Pulfrich effect
  – Panum's fusional area
  – Vieth-Muller circle
  – Horopter vs diplopia
  – Stereoscopes/stereograms
  – Stereoblindness (what causes it)
  – Binocular disparity/rivalry: uncrossed vs crossed disparity, stereoacuity; binocular rivalry, dichoptic
  – Free fusion
  – Influences of perception on binocular vision: Bayesian approach, continuity constraints, uniqueness constraints, correspondence problem
• Chapter 7 Study Guide: Attention and Scene Perception docx
  – What is attention? What does it consist of? Types of attention?
  – What is reaction time (RT) and how is it used in attention tasks?
  – Searches and search elements: spotlight attention, visual search, distractors, target, set size, selective attention, feature search, serial vs parallel search, serial self-terminating search
  – Efficient search vs inefficient search
  – Repetition blindness and attentional blink
  – Feature integration theory (definition and an example)
  – How do neurons respond during response enhancement?
  – Neglect: what it is, what the symptoms are, and what tests are used
  – Balint syndrome: what is it and what are the symptoms? (make sure you watch the video in the PowerPoint)
  – Areas of the brain used in attention and scene perception (e.g. FFA, EBA, parahippocampal place area, striate cortex)
  – What are the three ways the response of a cell can be changed by attention?
  – What are spatial layout and covert attentional shifts?
• Chapter 8 Study Guide: Motion Perception docx
  – Vocab: first-order motion, second-order motion, apparent motion, aperture problem, correspondence problem, biological motion, interocular transfer
  – Know about the motion aftereffect.


  – What are the  illusion and the barber pole illusion, and what do they demonstrate?
  – What is Tau? What does it tell you?
  – What parts of the brain are responsible for perceiving motion? In individuals who cannot perceive motion, what part of the brain is often damaged?
  – What is the comparator? What is its purpose?
  – How do you use motion information to navigate? Optic array and optic flow?
  – What is the "focus of expansion"? What is its purpose? Why is it important?
  – What does Warren's lab (in the section labeled "Using Motion Information") find regarding humans estimating their direction of heading?
  – What is saccadic suppression?
  – Types of eye movements (what they are and when you use them): , , , microsaccades
  – What is the comparator? How does it work?
• Chapter 9 Study Guide: Hearing docx
  – Components of sound: amplitude, loudness, period, frequency, pitch, sine wave, tone, complex tones, resonance frequency, masking, acoustic reflex, harmonics, threshold tuning curve, two-tone suppression, rate saturation, temporal integration
  – What happens when you strike a tuning fork?
  – Know the anatomy of the ear (including all of the little parts/auditory pathway) and the purpose of each part (ear canal; inner, middle, and outer ear; etc.)
  – How does the cochlea work?
  – What causes hearing loss? How does age affect hearing?
• Chapter 10 Study Guide: Hearing in the Environment docx
  – Interaural time difference, interaural level difference, sound localization, sound shadow, cone of confusion, perceptual restoration, good continuation, spectral composition, head-related transfer function
  – Know the parts of the brain that are involved with hearing and what they do (auditory stream)
  – Sound components: harmonics, missing fundamentals, fundamental frequencies, timbre, pitch, attack, decay, sustain, release, octave, continuity effects, perceptual restoration, Doppler effect
  – What is source segregation and what does it involve?
  – What is auditory stream segregation and what contributes to it? What is an example of this?
  – What are the different ways you can group sounds? (grouping by onset, timbre, continuity effect, decay, etc.)
• Chapter 11 Study Guide: Music and Speech Perception docx
  – Key terms: pitch, octave, tone height, tone chroma, chord, melody, tempo, syncopation, vocal tract, phonation, articulation, formant, spectrogram, coarticulation, categorical perception, encephalogram
  – How are speech sounds produced?
  – How does culture affect perception of music?
  – Components of the articulatory dimension
  – Theories involving speech?


  – Know about the chimpanzee experiments that attempted to teach them language (e.g. Vicki and Washoe)
  – Broca's and Wernicke's areas
  – Study the "Musical Pitch" slides
  – How do infants react to sounds and sentences? Think about the studies done with infants.
  – Watch the YouTube video on monkeys (Robert Seyfarth: "Can Monkeys Talk?")
• Chapter 12 Study Guide: Spatial Orientation and the Vestibular System docx
  – Know the parts of the ear that contribute to the vestibular system and how they work
  – Vocab: angular motion, linear motion, tilt, angular acceleration, linear acceleration, vection, motion sickness, habituation, acceleration, velocity, receptor potential, mechanoreceptor, otoconia, oscillatory, sensory integration, sinusoidal
  – Know the pathways used by the vestibular system
  – Know the three different reflexes/responses used in the vestibular response (they all start with "vestibulo-") and what they do
  – Know how caloric stimulation works
  – How do cameras try to mimic the human vestibular system?
• Chapter 13 Study Guide: Touch docx
  – What are the field size, rate, and function of the four mechanoreceptors (SA1, SA2, FA1, FA2)?
  – What does each receptor/fiber respond to (i.e. tactile, kinesthetic, thermal, nociceptors)?
  – Know the pathway for touch from the skin to the brain
  – Know the areas of the brain that perceive pain and pleasant/unpleasant touch
  – Vocab: body image, haptic, neural plasticity, gate control theory, homunculus, egocenter, propriocenter, somatotopic, kinesthetic, endogenous opiate, analgesia
  – What is tactile agnosia and what causes it?
  – Phantom limb syndrome
  – What is important about fingerprints in regards to touch?
• Chapter 14 Study Guide: Olfaction docx
  – Key terms: odor, odorant, nasal dominance, anosmia, cross adaptation, cognitive habituation, odor imagery, pheromones, odor hedonics, receptor adaptation
  – Binaural rivalry
  – Know the olfactory system of an animal
  – Why is olfaction a "mute sense"?
  – Know the anatomy of the human olfactory system and the purpose of each part
  – Know the pathway a signal takes from the olfactory receptor to the brain (including nerves associated with olfaction and taste)
  – Vibration theory vs shape-pattern theory
• Chapter 15 Study Guide: Taste docx
  – Key terms: tastant, taste bud, flavor, retronasal olfaction, gustatory system, cross-modality matching


  – Understand the process of how food is tasted/perceived, starting when you chew the food and ending where in the brain the neural signal is received
  – Theories related to taste
  – What happens when you anesthetize the chorda tympani?
  – Know the four taste qualities and what specific thing produces each (e.g. H+)
  – Know the purpose of the different types of papillae
  – Social influence on flavor
  – PROP (the experiment we did in class)

10.4 Demonstrations

• Chapter 1: Introduction
  – Signal to noise: photography with a long shutter has good SNR; a rapid shutter freezes motion but has low SNR.
  – Thresholds: detecting quiet sounds illustrates an ROC curve. We can change the volume to influence discriminability. We can use rewards or punishment to influence criterion. (ROCui Matlab script)
  – Frequencies: we can take a sharp (focused) and blurry (unfocused) image of the same scene to show low frequencies. We can compute the difference between these two images to reveal the high frequencies (edges). We can add the edges to the sharp image to enhance edges (sharpening). [bmp_unsharpmask Matlab script]
  – We can measure neural conduction time by using transcranial magnetic stimulation (TMS) to cause a finger movement. We can then measure the motor evoked potential to see the transmission delay from the brain to the finger.
  – Fourier transform to visualize the impact of filters
  – David Heeger's Signal Detection Matlab tutorial
• Chapter 2: Light to neural signals
  – Focal length and aperture: we can make pinhole cameras with different focal lengths (camera body caps with holes drilled in the center; lens adapters provide different focal lengths). We can adjust the aperture of a lens to reveal different depth of field and light transmission.
• Chapter 3: Spots to stripes
  – Interactive guide to Fourier transforms
  – Great overview of vibrations and sine waves
  – High and low frequency illusions (Einstein or Monroe)
  – Tilt illusion
  – The cafe wall illusion
  – Fraser's spiral
• Chapter 4: Object Recognition
  – Gestalt demos
  – Thatcher illusion
• Chapter 5: Color


  – Extrasensory perception: we can see how a camera responds to color. By removing the hot mirror we can show how the camera has been limited to mimic the human eye.
  – Isomers: purple vs violet
  – David Heeger's color matching tutorial
• Chapter 6: Space perception, binocular vision
  – Binocular rivalry
  – Akiyoshi Kitaoka's Rollers illusion
  – Kinetic depth perception
• Chapter 7: Attention and scene perception
  – Visual search: feature vs conjunction
  – Neglect: egocentric vs allocentric
• Chapter 8: Motion perception
  – The color wagon wheel shows that motion perception is color blind – isoluminant stimuli modulate our perception.
  – Motion aftereffects
  – Waterfall illusion (motion adaptation)
  – Saccadic suppression: look at your eyes in the mirror and shift your gaze from the left to the right eye: note you only see your eyes when they are still. Watch someone else doing this to see what you are missing.
  – Biological motion
• Chapter 9: Physiology of hearing
  – The Auditory Neuroscience site has terrific demos for all the hearing chapters.
  – Foley and Matlin have a nice demonstration of binaural beats.
  – Age and high frequencies (presbycusis): the mosquito tone used to drive away teenagers, and cell phone rings that professors cannot hear.
• Chapter 10: Hearing in the environment
  – The Auditory Neuroscience site has terrific demos for all the hearing chapters.
  – Auditory grouping illusions
  – Interaural time and loudness practical demonstration (requires Matlab)
• Chapter 11: Music and speech perception
  – The Auditory Neuroscience site has terrific demos for all the hearing chapters.
  – McGurk effect
  – Try the octave illusion
• Chapter 12: Spatial orientation and the vestibular system
  – Wagging your finger versus wagging your head (see book chapter)
  – Caloric stimulation (requires expertise)
• Chapter 13: Touch
  – Foley and Matlin describe several demonstrations with touch.


  – Two-point discrimination
  – Rubber hand illusion
• Chapter 14: Olfaction
  – Le Nez du Vin includes 54 smells often identified in wines. Can you identify these smells without any other context?
  – Human pheromones
• Chapter 15: Taste
  – Miraculin: makes acid taste sweet
  – Gymnema sylvestre tea can (temporarily) abolish your sense of sweet taste.
  – Are you a supertaster? Can you taste PROP?
  – Foley and Matlin describe some elegant taste and smell demonstrations.

10.5 Assignments and Grade distribution

The assignments during the term account for 75% of the grade; the final exam counts for 25%.
• A = 90-100%
• B = 80-90%
• C = 70-80%
• D = 60-70%
• F = <60%

10.6 Goals

By the end of the term, successful students should be able to do the following:
• Recognize the features and limitations of the five major sensory systems and the vestibular system.
• Explain the properties of the sensory receptors.
• Describe how this often-ambiguous sensory information is integrated into a unified percept.
• Solve novel problems regarding perception using scientific experiments.
• Translate this knowledge to other domains.
• Assess how politicians, advertisers, psychics and mentalists exploit our perceptual biases.
• Relate this knowledge to everyday human experience, and to the student's own interests.


10.7 Expectations & Evaluation

Sensation and Perception is designed to give insight into basic sensory and perceptual processes using novel, hands-on activities. There are three approaches to this goal: (1) observation, (2) critical thinking, and (3) integration.
• There will be an emphasis on OBSERVATIONS, in which students reflect on what they sensed or perceived.
• The class requires students to read Sensation and Perception by Wolfe et al. As in most other textbooks about sensation and perception, vision and audition will be covered more extensively than the other sensory systems. Because the book is dense in material, all lectures are posted online to guide students on the important concepts. The information from the text as well as the lectures helps students CRITICALLY THINK about their observations.
• Formal evaluations will consist of quizzes and a final exam. There will also be a group project in which students present on a topic that INTEGRATES information from observations and formal neural/perceptual mechanisms.

10.8 Attendance

Attendance throughout the term is required. By registering for this class you are confirming your availability during class times. If you must miss a class, you should talk to the instructor ahead of time. For emergencies (flu, car trouble), it is strongly preferred that you send a text message to the instructor at the time of the class. Failure to meet the 10% rule described in the academic regulations will diminish your homework assignment scores in proportion to the absences across the term (e.g. missing 15% of classes will mean your final score reflects 85% of your homework score).

10.9 Plagiarism

University policy regarding plagiarism, cheating and other forms of academic dishonesty is followed explicitly [See Carolina Community: Student Handbook and Policy Guide, Academic Responsibility]. Any case will be reported to the Dean of the College of Arts and Sciences. A “0” score will be given on a plagiarized assignment, and may result in an “F” for the course in extreme cases.

10.10 Disabilities

Students who have disabilities must have certification from the Office of Disability Services and must make clear during the first week of class what accommodations they expect. Students with disabilities must complete the same exams and assignments as other students in order to get course credit.


CHAPTER ELEVEN

IMAGE TO INFERENCE (PSYC589/888)

Details:
• Course Title: "Neuroimaging: from image to inference"
• Instructor: Chris Rorden, Office 227 Discovery I (John Absher will provide clinical lectures)
• Course Code: PSYC 589 (undergraduate) or PSYC 888 (graduate), 3 credits. In addition, scientists are free to audit this course. Suitable for faculty, post-docs, PhD students and advanced undergraduate students.
• When: Spring 2021, 1:15-2:30 T/Th, Jan 12-Apr 22
• Where: Hamilton 238 and virtual instruction
• Formal Syllabus
• Course slides: Google Slides format
• License: the slides and material for this course are distributed under the Creative Commons license. Further details are in the notes section of the PowerPoint file.
• Textbook: Functional Magnetic Resonance Imaging by Huettel, Song, and McCarthy.
• Supplemental textbook: Poldrack et al.
• Description: Functional magnetic resonance imaging is a recent and powerful tool for inferring brain function. This technique identifies brain regions that are activated by different tasks – for example, we can find the brain regions that activate when someone sees a familiar face. This course is designed to give students an understanding of the potential and limitations of this technique, and the ability to critically evaluate the inferences that can be drawn from fMRI. The course describes all stages of an fMRI study – from the design of the behavioral task (e.g. asking the participant to view faces), to the image processing (e.g. correcting images for head movements that occurred during scanning), through to statistical analysis (identifying brain regions that are activated by a task).


11.1 Lectures

• Overview.
  – The classroom is a computer lab, so all assignments can be completed in the lab. Optionally, you can install FSL and MRIcron on your own computer. MRIcron runs on Linux, Windows and macOS. Individuals with macOS and Linux computers can install FSL natively, or students can use the provided DVD to run a NeuroDebian VirtualBox, with instructions here. A final option is to install FSL on Windows Subsystem for Linux.
  – The first homework assignment requires you to mark landmarks on an MRI scan; you can find these landmarks using my Neuroanatomy Atlas.
• MRI physics: Image Acquisition.
  – Terrific videos (from a company that makes a unique instructional MRI system).
• MRI physics: Image Contrast.
  – The virtual MR and mrilab programs allow you to interactively adjust MRI parameters and see the results.
  – Graphs (and Matlab scripts) for basic MRI contrast effects.
• fMRI Paradigm Design.
  – My fMRI simulator allows you to explore the hemodynamic changes induced by different tasks.
• Statistics and Thresholding.
• Spatial Processing I: Spatial registration – realignment (motion correction), coregistration, normalization; spatial interpolation – linear, spline, sinc functions.
  – Spatial Processing Demos.
• Spatial Processing II: Smoothing – filters, edge detection, Gaussian blur, homogeneity correction (for EPI and anatomical scans), motion-related intensity changes.
  – Undistorting fMRI EPI data using the SPM FieldMap toolbox.
• Temporal Processing.
  – An interactive filtering demo shows how low-pass, high-pass and notch filters modulate a signal.
  – Physiological Artifact Removal Tool.
• FSL and SPM. Hands-on demonstrations – fMRI analysis.
  – FSL: block design.
  – FSL: event-related design.
  – SPM: block design.
  – Automated analysis with SPM (same data as the block design tutorial).
• Detecting subtle changes in brain structure: Voxel Based Morphometry and Diffusion Tensor Imaging.
  – John Ashburner's VBM class (PDF).
  – DTI tutorial.
  – Advanced DTI tutorial.
• Brain injury and neuroimaging. Measuring blood flow and using lesion symptom mapping to understand the consequences of stroke and other neurological disorders.
• Arterial Spin Labeling.


• Contrast-enhanced (Gadolinium) Perfusion Weighted Imaging.
• VLSM using my NPM software.
• Brain stimulation: Transcranial Magnetic Stimulation (TMS), Transcranial Direct Current Stimulation (tDCS). Roger Newman-Norlund and Chris Rorden.
• Student presentations: resting state analysis, effective and functional connectivity, independent components analysis, neural current MRI?

11.2 Practicals

Practicals will use Amazon Web Services. You will need to install the client application, and you will receive a user name and password for this system. The material extends the FSL Training Course.
• Practical 1 (Thur. Jan. 14, 2021)
  – Download the client application
  – Make sure AWS workspace logins work for everyone
  – Getting to know your workspace
  – Terminal (command line) basics
  – Open neuroimaging data with MRIcroGL and FSLeyes
  – Location of images for assignments
• Practical 2 (Thur. Jan. 21, 2021) – slides here
  – Assignment 1 is due soon!
  – Review drawing and saving images for assignments
  – Overview of brain extraction, mathematical operations on brain images, image registration/normalization
  – Independent student exercises to work through
  – Assignment 2 is posted
• Practical 3 (Thur. Feb. 4, 2021)
  – Introduction to the FSL Course material
  – Work through image registration, unwarping, and transforming image masks
  – Lab guide to follow
  – First part of lab is instructor guided
  – Remaining part of lab is at each student's own pace
  – These exercises prepare you for the upcoming assignment
• Practical 4 (Thur. Feb. 11, 2021)
  – Finish registration, unwarping, and transforming image masks
  – Lab guide to follow
• Practical 5 (Thur. Feb. 18, 2021)
  – Start structural analysis (anatomical image segmentation)


  – Lab guide to follow
• Practical 6 (Thur. Mar. 4, 2021)
  – Start FSL fMRI block design analysis
  – Lab guide to follow
  – Data to download
• Practical 7 (Thur. Mar. 11, 2021)
  – Finish structural analysis
  – Lab guide to follow
• Practical 8 (Thur. Mar. 18, 2021)
  – Single subject fMRI and Featquery
  – Lab guide to follow
• Practical 9 (Tues. Mar. 23, 2021)
  – Single subject fMRI (event design, finger tapping)
  – Data to download
  – Lab guide
• Practical 10 (Thur. Apr. 8, 2021)
  – Group fMRI analysis
  – Lab guide
• Practical 11 (Tues. Apr. 20, 2021)
  – Diffusion analysis
  – Lab guide

11.3 Assessment and Assignments

The final grade is weighted 30% quizzes, 40% homework assignments and 30% the essay. Letter grades are assigned as follows: A = 90-100%, B = 80-90%, C = 70-80%, D = 60-70%, F = <60%. Graduate students (PSYC 888) must also present a research article as a 45-minute class presentation. This presentation is scored pass or fail, and modifies the grade on the essay by x1.0 (pass) or x0.5 (fail), so that a perfect essay (100%) with a failed presentation (x0.5) yields a weighted score of 50%. Material from this article will be included in the quiz, so undergraduates will want to pay careful attention to the presentation. Homework description: students will submit regular homework assignments, which are due at noon on their due date. Assignments are due in the students' dropbox folder unless otherwise specified. Essay description: students will write an essay that describes the merits, limitations and potential of a current or potential technique used to infer brain function. Essays should extend beyond the information in the course. Examples include: ERP vs fMRI, MEG, functional connectivity, independent component analysis, adaptation designs.


11.4 Learning Outcomes

• Understand the basic elements of neuroimaging.
• Understand the strengths and limitations of complementary tools used in cognitive neuroscience.
• Evaluate how contemporary methods can be used to understand cognitive functions.
• Practice using software for viewing, preprocessing and statistically analyzing brain imaging data.
• Practice writing a scientific report that relates behavioral and biomedical constructs.

11.5 Attendance

Attendance throughout the term is required. By registering for this class you are confirming your availability during class times. If you must miss a class, you should talk to the instructor ahead of time. For emergencies (flu, car trouble), it is strongly preferred that you send a text message to the instructor at the time of the class. Failure to meet the 10% rule described in the academic regulations will diminish your homework assignment scores in proportion to the absences across the term (e.g. missing 15% of classes will mean your final score reflects 85% of your homework score).

11.6 Plagiarism

University policy regarding plagiarism, cheating and other forms of academic dishonesty is followed explicitly [See Carolina Community: Student Handbook and Policy Guide, Academic Responsibility]. Any case will be reported to the Dean of the College of Arts and Sciences. A “0” score will be given on a plagiarized assignment, and may result in an “F” for the course in extreme cases.

11.7 Disabilities

Students who have disabilities must have certification from the Office of Disability Services and must make clear during the first week of class what accommodations they expect. Students with disabilities must complete the same exams and assignments as other students in order to get course credit.

11.8 Links

• SPM statistics
• Rik Henson's fMRI mini-course
• Rik Henson's tips for fMRI design
• Duke BIAC Grad Course
• SPM course, and the SPM8 manual
• The NeuroDebian virtual machine is a great way for students to try out neuroimaging tools.
• Lin4Neuro is an open source Linux distribution that comes with many of the most popular free MRI tools (FSL, MRIcron, etc.) already installed.


Homework assignments (added as they are posted):
• Assignment 1
• Assignment 2

11.9 Calendar

This course follows the Spring 2021 academic calendar. Classes meet:
• Tu 12 Jan
• Th 14 Jan
• Tu 19 Jan
• Th 21 Jan
• Tu 26 Jan
• Th 28 Jan
• Tu 2 Feb
• Th 4 Feb
• Tu 9 Feb
• Th 11 Feb
• Tu 16 Feb
• Th 18 Feb
• Tu 23 Feb
• Th 25 Feb (Wellness Holiday)
• Tu 2 Mar
• Th 4 Mar
• Tu 16 Mar
• Th 18 Mar
• Tu 23 Mar
• Th 25 Mar
• Tu 30 Mar (Wellness Holiday)
• Th 1 Apr
• Tu 6 Apr
• Th 8 Apr
• Tu 13 Apr
• Th 15 Apr
• Tu 20 Apr
• Th 22 Apr

CHAPTER TWELVE

PERFUSION-WEIGHTED IMAGING (PWI)

Perfusion-weighted imaging (PWI) allows us to infer how blood traverses the brain's vasculature. The most popular technique is Dynamic Susceptibility Contrast (DSC) imaging, where we inject a bolus of contrast agent (typically Gadolinium) into a vein of the individual's arm. This bolus is drawn into the heart, and then a large portion is pumped into the head, initially travelling through the main arteries, then through the capillary beds, and finally exiting through the veins. By taking a series of images, we can create a movie illustrating the speed and amount of blood reaching different portions of the brain. Specifically, Gd influences both the T1 and T2 properties of nearby hydrogen, making these regions appear darker. This is very useful for understanding acute stroke (where abnormal perfusion in regions with normal diffusion suggests salvageable tissue) and brain tumors (which exhibit unusual perfusion due to mass effects, modulated metabolism and pathological leakage across the blood-brain barrier). However, there are many different techniques for analyzing perfusion data, each with its own set of implications. This page describes some of the most popular methods, with an emphasis on my own tools. Note you can also measure blood flow using Arterial Spin Labeling (ASL), but that technique is described separately on the Arterial Spin Labeling (ASL) web page.

12.1 Raw Data

A typical PWI sequence acquires a series of 3D volumes of the brain – for example, 53 volumes in 90 seconds of scanning. Therefore, for each slice through the brain we get a new picture roughly every 1.7 s. For each individual, we typically only track the passage of a single bolus. The simplest way to analyze these data is to measure the time to peak (TTP) and maximum signal reduction (MSR) for each voxel by directly plotting each observation. The benefit of this technique is that it is simple and does not make any assumptions regarding the shape of the bolus impulse through the scan (whereas more complicated strategies may not be as robust; see Boxerman et al., 1997; Cha et al., 2002). However, it has a few major limitations: first, the temporal resolution is limited to our sampling rate (~1.7 s); second, this method is very sensitive to noise in our data (each individual sample is noisy, and we are not using information from other observations that are nearby in space and time to refine our estimate). The graph on the right shows a raw analysis of data from three voxels in the head: an input artery (red), gray matter (purple) and a vein (blue). Note that the arterial input function is early in time and creates a shorter response than the regions further downstream. DSCoMAN is a free tool that allows you to estimate raw values, and offers the ability to improve the basic linear fit with a correction for leakage described by Boxerman et al. (2006), which can be particularly well suited for tumors. Raw fitting is useful for measuring MSR, TTP and the first moment (FM). A minimal sketch of the raw approach appears below.
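As a sketch of this raw approach (my own illustration, with hypothetical array names and a synthetic 4D series), each voxel's TTP is simply the time of its deepest signal dip, and its MSR is the normalized depth of that dip:

    import numpy as np

    def raw_perfusion(data, n_baseline=6, tr=1.7):
        # data is a 4D (x, y, z, time) perfusion series; Gd makes the signal dip.
        baseline = data[..., :n_baseline].mean(axis=-1)   # pre-bolus signal S0
        ttp = data.argmin(axis=-1) * tr                   # time of deepest dip (s),
                                                          # measured from scan start
        msr = (baseline - data.min(axis=-1)) / baseline   # maximum signal reduction
        return ttp, msr

    data = np.random.rand(64, 64, 20, 53) + 10.0          # synthetic stand-in series
    ttp, msr = raw_perfusion(data)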


12.2 Gamma fitting

By fitting a gamma function to our data we can attempt to minimize noise and interpolate the amplitude and timing of the bolus. As long as this function accurately models the signal change present in the data, this technique should provide a more accurate measure of parameters such as MSR, TTP and FM. For example, in the figure on the right, note that the fitted function allows us to infer that the peak time and amplitude do not precisely correspond with our observations (e.g. the peak amplitude typically does not occur precisely at the location of the marker on the figure). Despite these advantages, both gamma functions and deconvolution (described next) are very sensitive to starting estimates and noisy data, and can become trapped in local minima (poor fits of the data). My own software provides a simple tool for gamma fitting.

12.3 Deconvolution

In the input arteries, the gamma function is a great model for the passage of the bolus, as the bolus rapidly enters and exits the region. However, in gray matter there are actually two components influencing the Gd concentration: the direct passage of the contrast agent with the blood, and the residue function of Gd being retained in the tissue. This residue function is particularly pronounced and important in brain tumors, where there is some leakage of Gd across the blood-brain barrier. Therefore, many tools attempt to deconvolve the observed signal into the input impulse response and the residue function. Relative to raw and gamma methods, deconvolution can in theory provide purer measures of the underlying changes, and provides important measures for tumors. Therefore, these techniques are exceptionally popular. However, this approach does require many more assumptions, and therefore must be used with caution. Further, recent work in acute stroke (Christensen et al., 2009) suggests that for this condition TTP and FM derived from gamma fitting are at least as good as deconvolution methods – perhaps due to a combination of the robustness of gamma fitting and the fact that TTP and FM have the benefit of being less pure measures (e.g. they combine factors that each provide predictive power). Jim is a professional tool that provides deconvolution analysis. A minimal deconvolution sketch appears below.
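The following is a minimal sketch of truncated-SVD deconvolution in the spirit of Ostergaard et al.; it is not the implementation of any specific tool, and the synthetic curves, TR, and truncation threshold are illustrative.

    import numpy as np

    def svd_deconvolve(aif, tissue, tr, thresh=0.2):
        n = len(aif)
        # Lower-triangular convolution matrix A so that tissue = tr * A @ residue.
        A = tr * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                           for i in range(n)])
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > thresh * s.max(), 1.0 / s, 0.0)  # drop small singular values
        residue = Vt.T @ (s_inv * (U.T @ tissue))             # scaled residue = CBF*R(t)
        cbf = residue.max()                                   # relative flow estimate
        cbv = tissue.sum() / aif.sum()                        # relative blood volume
        return cbf, cbv, cbv / cbf                            # MTT = CBV / CBF

    # Synthetic demonstration: gamma-like AIF convolved with an exponential residue.
    t = np.arange(0.0, 60.0, 1.5)
    d = np.clip(t - 10.0, 0.0, None)
    aif = d ** 3 * np.exp(-d / 1.5)
    tissue = 1.5 * 0.02 * np.convolve(aif, np.exp(-t / 4.0))[:t.size]
    cbf, cbv, mtt = svd_deconvolve(aif, tissue, tr=1.5)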

12.4 A simple demonstration

I have written a very simple graphical program that lets you define a gamma variate and then see how well different methods fit the data. It uses the same algorithms as my imaging software. The program generates a signal with a specified amount of noise, and then fits the data using raw values, an initial single-factor linear fit suggested by Madsen, and a refined nonlinear multi-factor fit of Madsen's formula using Powell's Method. The program allows you to set x0 (bolus arrival time), xMax (time when bolus concentration reaches its peak), yMax (maximum height of the bolus concentration), and alpha (shape of the gamma function). In addition, you can choose to add Gaussian noise and define the sampling rate (TR). This software is available for download from the links at the bottom of the page. The sketch below illustrates the same idea in a few lines of code.
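A minimal Python sketch of the same idea follows, using the four parameters the program exposes (x0, xMax, yMax, alpha) in Madsen's parameterization and SciPy's Powell optimizer; the noise level, TR, and starting estimates are illustrative and this is not the program itself.

    import numpy as np
    from scipy.optimize import minimize

    def gamma_variate(t, x0, xmax, ymax, alpha):
        # Madsen's form: peak height ymax occurs at t = xmax; zero before arrival.
        r = np.clip((t - x0) / (xmax - x0), 0.0, None)
        return ymax * r ** alpha * np.exp(alpha * (1.0 - r))

    tr = 1.7                                        # seconds per volume
    t = np.arange(0.0, 60.0, tr)
    truth = (10.0, 22.0, 1.0, 3.0)                  # x0, xMax, yMax, alpha
    signal = gamma_variate(t, *truth) + np.random.normal(0.0, 0.05, t.size)

    sse = lambda p: np.sum((gamma_variate(t, *p) - signal) ** 2)
    fit = minimize(sse, x0=(8.0, 20.0, 0.8, 2.0), method="Powell")
    print(fit.x)                                    # should approach the true parameters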


12.5 Perfx: A perfusion estimation tool

My Perfx (Perfusion Estimation) software takes 4D NIfTI-format perfusion-weighted DSC images and estimates several parameters.
• Source code (Windows, Linux, macOS)
If your data are in DICOM format, you will need to convert them to NIfTI (e.g. using dcm2niix). The technique is similar to Kim et al., though it uses both an initial linear fit and a non-linear fit for the gamma function, and does not compute deconvolution. Therefore, this software is well suited for stroke (e.g. robust estimates of the parameters suggested by Christensen et al.) but less suited for tumors (where leakage means the gamma function is not a good fit). When you start a perfusion analysis you will want to check a few parameters:
• Motion correction: if selected, images will be realigned to adjust for head motion – individuals often move their heads when they feel the contrast bolus injection begin. This uses MCFLIRT's mutual information cost function, and therefore requires FSL to be installed (not available for Windows).
• Brain extraction: this uses FSL's BET to remove non-brain tissue, leading to a faster computation. Requires FSL to be installed (not available for Windows).
• Slice timing correction: on many systems, the 3D image of the brain is not acquired at once; rather, the images are acquired as a series of 2D slices. Consider an image with 16 slices acquired in interleaved ascending order: the slices are taken 1,3,5..15,2,4..16. Therefore, the slices show the image at different times. STC attempts to correct for this temporal bias by interpolating information from previous and subsequent volumes.
• Spatial smooth FWHM (mm): this allows you to blur the data a bit. This can give a more robust measure of signal, but costs spatial precision. Kim et al. suggest a 2.35 mm FWHM.
• Temporal smooth FWHM (sec): this allows you to blur the signal in time. This can give a more robust estimation, but will tend to influence the timing of some parameters (in particular, the arrival time may appear a little earlier). Kim et al. suggest 3.53 seconds.
• Delete volumes: we often discard the first volume or two from a sequence, because they appear unusually bright (T1 has not yet saturated).
• Baseline volumes: we use the first few volumes (which occur after the deleted volumes) prior to bolus arrival to estimate how much signal fluctuation we see due to random noise. This helps us determine which voxels should be analyzed (we will only examine voxels that show a substantial drop relative to the baseline signal).
• Final volume: the last volume to examine. For example, if you set 1 delete volume, 6 baseline volumes and 42 as the final volume, then volume 1 will be discarded, volumes 2..7 will be used to estimate typical (unenhanced) signal variability (to compute a brain mask), and volumes 8..42 will be used for the gamma fitting.
• Brain mask threshold (SD): we only examine voxels where there is a significant signal drop relative to the baseline scans. For example, if this is set to 2.8, then only voxels where the minimum signal seen during the gamma fitting period is at least 2.8 standard deviations below the baseline average will be analyzed.
• TR (sec): duration per volume; e.g. if 2.3, then we see a particular slice in the brain every 2.3 seconds.
• TE (msec): the echo time. It is used to convert the raw signal S(t) to an estimate of tissue concentration using the formula C(t) = -(1/TE)*ln(S(t)/S0), where S(t) is the signal intensity at each timepoint during the gamma fitting period and S0 is the average baseline signal intensity (a small worked example appears at the end of this section).
• Brain mask (R^2): we only accept voxels where the quality of the gamma fit exceeds this value. For example, if this is set to 0.5, then we only accept voxels where the gamma fit explains at least 50% of the observed signal variability.
• Voxels to determine arterial input: for computing TTP and FM, we need to estimate when the contrast first arrived in the brain. Usually, each single voxel is a bit noisy, so we want to average across a small population. For example, if set to 300, the arterial input function is based on the average input time of the 300 voxels that appear most likely to be major arteries.
• Compute raw (unfitted) values: if checked, raw TTP and MSR maps will be generated.
• Compute precise (but slow) fitted values: if checked, the initial fast Madsen linear approximation of the gamma function is followed by a precise fit using Powell's method. This typically gives better answers but is slower.
• Normalize: create additional images warped to match standard space (allowing analysis between individuals). Requires FSL to be installed (not available for Windows).
After setting these values and selecting the perfusion images you want to process, the following images will be created:
• _rawmsr: unfitted maximum signal reduction (only created if you selected to compute raw parameters).
• _rawttp: unfitted time to peak (only created if you selected to compute raw parameters).
• _r2: R-squared showing goodness-of-fit for the gamma function.
• _mask: voxels outside the brain are set to zero, those used to estimate the arterial input are set to 2, and the rest of the parenchyma is set to 1. This allows you to examine the quality of the brain mask, and see whether the arterial input selection is reasonable.
• _mtt: fitted mean transit time.
• _fm: fitted first moment (time from bolus arrival in arteries until the mean of the gamma function).
• _ttp: fitted time to peak.
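As a small worked example of the TE conversion listed above (my own illustration; the signal values and echo time are hypothetical):

    import numpy as np

    def concentration(signal, s0, te_ms=45.0):
        # C(t) = -(1/TE) * ln(S(t)/S0); the dip in signal becomes a positive peak.
        te = te_ms / 1000.0                        # echo time in seconds
        return -(1.0 / te) * np.log(signal / s0)

    s0 = 100.0                                     # mean baseline signal
    signal = np.array([100.0, 95.0, 70.0, 55.0, 80.0, 98.0])
    print(concentration(signal, s0))               # peaks where the signal dips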

12.6 Popular PWI Measures

• Concentration time curve: the figures above show the raw data from a PWI MRI scan. However, in practice all techniques convert the raw signal curve to a concentration time curve using the formula C(t) = -(1/TE)*ln(S(t)/S(0)), where S(t) is the observed signal at time t, S(0) is the average signal during the baseline prior to bolus arrival, and TE is the echo time. (In practice, 1/TE is a constant across all voxels and only impacts the scale, but the logarithm makes the shape of the peak more pronounced.)
• CBF (cerebral blood flow): how quickly blood flows through tissue, classically measured in units of mL/100g/min. Typically measured with deconvolution (CBV/MTT).
• CBV (cerebral blood volume): concentration of blood in a tissue, classically measured in units of mL/100g. Typically estimated with deconvolution (area under the concentration time curve).
• FM (first moment): time when half the signal change has been observed (the mean of the fitted gamma function). This has proved useful in acute stroke (Christensen et al., 2009), is intrinsically highly correlated with TTP, and is analogous to MTT.
• K2: measured using the linear fitting method described by Boxerman et al. (2006); correlates with leakage across the blood-brain barrier. It is particularly useful for tumours.
• MSR (maximum signal reduction): the normalized drop in signal at peak relative to the baseline period prior to arrival of contrast (i.e. a measure of peak height). This appears to correlate with CBF (Klose et al., 1999).
• MTT (mean transit time): similar to the gamma first moment; for deconvolution methods this is computed as CBV/CBF.
• TTP (time to peak): time from the first appearance of the bolus in an artery to the peak signal change observed in tissue. Surprisingly, this has proved one of the most reliable measures for identifying abnormal tissue (Christensen et al., 2009) and predicting abnormal behavior (Hillis et al., 2001) in acute stroke.
• R-square (coefficient of determination): describes how well the model describes the observed data, and ranges from 0 to 1; e.g. an R-square of 0.75 suggests that the model predicts 75% of the observed variance.


12.7 Links

• DSCoMAN is a free tool for raw or Boxerman et al. corrected linear-fitted perfusion parameters. It works as a plugin to the free ImageJ program.
• Perfusion Mismatch Analyzer (PMA) is a free tool from Japan's Acute Stroke Imaging Standardization Group.
• PerfScape is a professional tool that uses the deconvolution method (Ostergaard et al.).
• StrokeTool is a professional tool that uses the deconvolution method (Ostergaard et al.).
• Jim is a professional tool that uses deconvolution methods (Ostergaard et al.). It offers many clever, genuinely useful features.
• Wikipedia has a great page describing the gamma probability function. This function is related to the gamma fitting used in PWI (however, the gamma PDF has two parameters and unit area, whereas for PWI we include one parameter that describes delay and a second that describes amplitude).
• My software attempts to find the gamma function that best fits the observed data by adjusting four values: input time, peak time, peak amplitude and shape. However, these parameters interact, making it challenging to find the optimal combination. This is a great application for Powell's Method, which has many other applications in neuroimaging (e.g. spatial coregistration/normalization, computing the optimal hemodynamic response, etc.).
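A minimal sketch of such a four-parameter gamma variate fit is shown below. This is my own illustration, not the software's actual code: the parameterization (bolus arrival t0, peak time tmax, amplitude A, shape alpha), the crude starting estimates, and the use of fminsearch (Nelder-Mead, standing in for Powell's method since base Matlab does not ship a Powell optimizer) are all assumptions:

    % Sketch: fit a four-parameter gamma variate to a concentration curve C
    % sampled at times t (both 1 x n vectors). p = [t0 tmax A alpha]:
    % bolus arrival (input time), peak time, peak amplitude and shape.
    gv = @(p, t) (t > p(1)) .* p(3) .* ...
         (max(t - p(1), 0) ./ (p(2) - p(1))).^p(4) .* ...
         exp(p(4) .* (1 - (t - p(1)) ./ (p(2) - p(1))));
    sse = @(p) sum((C - gv(p, t)).^2);   % sum of squared errors
    [~, ipk] = max(C);                   % crude starting estimates
    p0 = [t(ipk)/2, t(ipk), max(C), 3];
    pFit = fminsearch(sse, p0);          % Nelder-Mead as a stand-in for Powell
    ttp  = pFit(2) - pFit(1);            % time-to-peak relative to bolus arrival

Note how the interaction between parameters is visible in the model itself: changing the arrival time t0 also shifts where the peak falls, which is exactly why a good derivative-free optimizer (and a good starting estimate, per Madsen 1992 below) matters.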

12.8 References

• Boxerman JL, Rosen BR, Weisskoff RM (1997) Signal-to-noise analysis of cerebral blood volume maps from dynamic NMR imaging studies. J Magn Reson Imaging. 7:528-37.
• Boxerman et al. (2006) Relative cerebral blood volume maps corrected for contrast agent extravasation significantly correlate with glioma tumor grade, whereas uncorrected maps do not. Am J Neuroradiol. 27:859-67. Deals with the problem that blood often leaks across the blood-brain barrier near brain tumours, disrupting classic linear fitting of the data. DSCoMAN includes this correction.
• Cha S, Knopp EA, Johnson G, Wetzel SG, Litt AW, Zagzag D (2002) Intracranial mass lesions: dynamic contrast-enhanced susceptibility-weighted echo-planar perfusion MR imaging. Radiology. 223:11-29.
• Christensen et al. (2009) Comparison of 10 perfusion MRI parameters. Stroke. 40:2055-61. Compares 10 popular perfusion parameters and suggests that the time-to-peak and first-moment parameters derived by gamma variate fitting were among the best predictors of infarction, numerically outperforming more complicated (though perhaps purer) deconvolution techniques.
• Galinovic et al. (2011) Fully automated postprocessing carries a risk of substantial overestimation of perfusion deficits in acute stroke magnetic resonance imaging. Cerebrovasc Dis. 31:408-13. Compares three popular tools (StrokeTool, PMA, PerfScape) analyzing data from 39 individuals who did not have ischemia. The different tools provided very different measures of mean transit time, cerebral blood flow and T(max), and in many cases the automated measures appeared abnormal. This suggests that a single method should be used for all datasets in a study, that care should be taken when using these measures, and that there is clear room for improvement.
• Hillis et al. (2001) Hypoperfusion of Wernicke's area predicts severity of semantic deficit in acute stroke. Ann Neurol. 50:561-6.
• Kim et al. (2010) Toward fully automated processing of dynamic susceptibility contrast perfusion MRI for acute ischemic cerebral stroke. Comput Methods Programs Biomed. 98(2):204-13. Develops a clear framework for robust, automated processing of PWI data, initially using Madsen's linear gamma fit and then conducting deconvolution. My software uses several of the concepts from this manuscript.


• Madsen MT (1992) A simplified formulation of the gamma variate function. Phys Med Biol. 37:1597-1601. The gamma function described by Thompson et al. is hard to solve because it is difficult to suggest a good starting estimate. This manuscript describes an elegant reformulation that is much easier to solve, and it is used by my own and many other gamma fitting tools.
• Ostergaard et al. (1996) High resolution measurement of cerebral blood flow using intravascular tracer bolus passages. Part II: Experimental comparison and preliminary results. Magn Reson Med. 36:726-36. The seminal work for deconvolution methods.
• Thompson et al. (1964) Indicator Transit Time Considered as a Gamma Variate. Circ Res. 14:502-15.

CHAPTER THIRTEEN

SPMSCRIPTS

Statistical Parametric Mapping (SPM) comes with a nice user interface, but sometimes small Matlab scripts can help answer your research question. A quick search of the web can usually find someone else who has encountered the same problem and written a Matlab script. Here are some useful links for SPM scripting:
• Wiki introduction to SPM scripting
• John Ashburner's Gems (for old versions of SPM)
• Jimmy's Toolbox
• Michael Lombardo has some very useful SPM scripts.
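To give a flavor of what such a small script looks like, here is a minimal sketch that loads an image with SPM's file I/O routines and reports its mean intensity (the file name 'T1.nii' is a placeholder):

    % Minimal example of a small SPM script: load an image and report
    % the mean intensity of all finite voxels.
    hdr = spm_vol('T1.nii');          % 'T1.nii' is a placeholder file name
    img = spm_read_vols(hdr);         % read voxel data as a 3D array
    fprintf('Mean intensity: %g\n', mean(img(isfinite(img))));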


13.1 Toolboxes

I have separate web pages describing my various SPM toolboxes and more complicated scripting pipelines:
• MRIcroS is a simple Matlab program for viewing surface renderings. This can be useful for viewing fMRI activations. The source code is available from the MRIcroS GitHub repository.
• Clinical toolbox web page for normalizing CT, T1, T2, FLAIR, and DW images from clinical populations.
• My preprocessing and statistical scripts provide an analysis pipeline.
• My ASL scripts extend SPM8/ASLtbx features for analyzing arterial spin labeling data.
• My Matlab scripts for processing DTI data using FSL.

13.2 My own scripts

• I now host my spmScripts on GitHub.

CHAPTER FOURTEEN

SLICE TIME CORRECTION (STC)

Modern functional magnetic resonance imaging uses echo-planar (or spiral) imaging, where a 3D volume is built up from a series of 2D slices. Each slice takes some time to acquire, so different slices in a 3D stack are actually observed at different time points. However, our statistics assume that the whole 3D volume was acquired at the same moment in time. Therefore, it is common to apply slice time correction (STC). This page describes how to compute STC and describes special caveats for Siemens scanners as well as for SPM8's STC function.

Slice Time Correction (STC) generally improves the statistical power of fMRI analyses. In particular, it is probably a good idea for event-related designs (and less useful for block designs). The irony is that STC is most important yet least effective when acquisition is slow. For example, consider an ascending continuous acquisition with a TR (repeat time, the time between volumes) of 3000ms. In this case the middle slice is observed almost 1.5s after the first slice, and the final slice almost 3s later. If our TR were 1000ms, the error would be one third this size. STC works by interpolating between images (either directly or via Fourier transforms). If we want to see what a slice looks like at an earlier, unobserved time, we simply estimate it based on the observations acquired immediately before and after the desired time. A nice analogy is estimating the temperature at 9am: we can be more accurate if our closest observations were at 8:55am and 9:05am rather than at 8am and 10am.
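As a toy illustration of this temporal interpolation (all the numbers below are invented for the example), we can resample a slice's time course onto the acquisition times of a reference slice:

    % Toy illustration of slice time correction as temporal interpolation.
    % A slice acquired 500ms into each TR is resampled to the grid of a
    % reference slice acquired at the start of each TR (values are made up).
    TR     = 2000;                         % ms between volumes
    tSlice = 500 + (0:4) * TR;             % when this slice was actually seen
    ySlice = [100 104 98 101 103];         % observed intensities (invented)
    tRef   = (0:4) * TR;                   % reference slice acquisition times
    yCorr  = interp1(tSlice, ySlice, tRef, 'linear', 'extrap');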


14.1 Slice order

Imagine a volume where we acquire six 2D slices for each 3D volume of the brain. With traditional echo-planar imaging (EPI) we acquire each of these slices sequentially (with multi-band we can acquire a few spatially distant slices simultaneously). Therefore, with traditional EPI we do not see the whole brain at the same instant, but see different portions at different times. One can consider several slice orderings to acquire this image:
• Sequential Ascending: 1,2,3,4,5,6
• Sequential Descending: 6,5,4,3,2,1
• Interleaved Ascending: 1,3,5,2,4,6
• Interleaved Descending: 6,4,2,5,3,1
• Interleaved Ascending *: 2,4,6,1,3,5
• Interleaved Descending *: 5,3,1,6,4,2
In other words, with typical interleaved ascending we acquire the odd-numbered slices first, followed by the even-numbered slices. Note that slice orders 5 and 6 (marked with a *) are weird. While other manufacturers use only slice ordering types 3 and 4, for some odd reason Siemens product sequences use order 3 when you acquire interleaved with an odd number of slices and order 5 when you acquire with an even number of slices (illustrated in the sketch below). This is described in Joachim Graessner's Slice Order (Slice Timing) for fMRI Evaluation. However, as described below, those using the terrific CMRR sequences on Siemens should be aware that the manual states that 'slice excitation always starts with slice0 (the first slice) in CMRR multiband C2P sequences'. The section below describes how to test your own sequence to verify its slice order. Most MRI scanners let you set your desired slice order. Interleaved acquisitions can have less slice interference but can show spin history effects if a participant moves. This is the reason most fMRI data has a ~20% gap between slices, as it reduces both interference and motion-related spin history artifacts (at the cost of less signal).
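The Siemens interleaved-ascending quirk is easy to express in a few lines of Matlab (this is my own illustration of the convention described above, not vendor code):

    % Siemens product interleaved-ascending slice order: odd slices first
    % for an odd slice count, even slices first for an even slice count.
    nSlices = 6;
    if mod(nSlices, 2) == 1
        order = [1:2:nSlices, 2:2:nSlices];   % e.g. 5 -> 1,3,5,2,4
    else
        order = [2:2:nSlices, 1:2:nSlices];   % e.g. 6 -> 2,4,6,1,3,5
    end

With nSlices = 6 this yields 2,4,6,1,3,5 (the * pattern above), whereas nSlices = 5 yields the standard odd-first pattern 1,3,5,2,4.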

14.2 Siemens specific details

The image on the right shows two fMRI volumes from a Siemens scanner using an ascending interleaved sequence. The only difference between the two acquisitions is that one volume (left) has 35 slices and the other (right) has 36 slices. In both scans the participant started with their head aligned with the scanner bore and then rotated their head halfway through the acquisition. Note that the acquisition pattern differs: the head motion appears on the even-numbered slices for the 35-slice volume (odd-first 'Interleaved Ascending' pattern) and on the odd-numbered slices for the 36-slice volume (even-first 'Interleaved Ascending *' pattern). As far as I know, all Siemens sequences use the standard (odd-first) interleaving for volumes with an odd number of slices and the alternative (even-first, *) pattern when acquiring an even number of slices. However, you may want to test this for yourself on your system.


You have a couple of options to determine the slice order: you can look at your scanner console, look at the "Series" in your protocol PDF files (shown in red in the image on the right – ignore the "multi-slice mode"), or use software that detects this for you when converting your DICOM files for subsequent processing (for example, dcm2nii has done this for Siemens images since 2014). Another question regards what direction 'ascending' and 'descending' refer to. For axial slices it seems obvious that ascending refers to acquisitions that begin near the feet and move toward the head. But what do 'ascending' and 'descending' refer to for sagittal and coronal sequences? One way to discover this is to look at the PDFs that you can create for your sequence. As shown in the image on the right, for sagittal scans ascending refers to right-to-left (R>>L), for coronal sequences ascending is anterior-to-posterior (A>>P), and for transverse (axial) slices the order is the expected foot-to-head (F>>H). This appears to be the default for Siemens MRI, though perhaps tweaking the "Image Numbering" on the "Miscellaneous" portion of the sequence system tab could disrupt this (though I am not sure why anyone would want to, and it would cause problems for automated slice timing correction as described here). While the Siemens convention is unusual, the fact that it is consistent means that it is easy to write a script to automatically slice time correct your data. For example, if you convert your DICOM images to NIfTI format with dcm2niix (included as an executable and a graphical interface with my MRIcroGL viewer), you will also get a BIDS format file that reports the slice timing information. You can use a Matlab/SPM script like the one below to correct the slice timing using this BIDS file.
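Here is a minimal sketch of such a script, assuming SPM12 on the Matlab path and a 4D image 'fmri.nii' with a dcm2niix-generated sidecar 'fmri.json' (both file names are placeholders). It relies on the documented SPM12 behavior that spm_slice_timing accepts slice times in milliseconds in place of a slice order, with timing set to [0 TR]:

    % Sketch: slice time correction driven by a BIDS sidecar (SPM12).
    % File names are placeholders; adjust for your own data.
    json = jsondecode(fileread('fmri.json'));
    st   = json.SliceTiming(:)' * 1000;   % BIDS stores seconds; SPM wants ms
    TR   = json.RepetitionTime * 1000;    % repeat time in ms
    ref  = (min(st) + max(st)) / 2;       % reference time: middle of the volume
    % When slice times (rather than a slice order) are supplied, SPM12
    % expects timing = [0 TR] and the reference expressed in ms.
    spm_slice_timing('fmri.nii', st, ref, [0 TR], 'a');

Because the slice times come straight from the BIDS sidecar, the same script handles odd and even slice counts, CMRR multi-band, and descending acquisitions without any manual bookkeeping.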

14.3 Siemens Image Numbering

The Siemens scanner allows you to reverse the image numbering, as described in the white paper below. Specifically, for axial acquisitions you can open the exam explorer, go to the 'System' tab's 'Miscellaneous' page, and set the Transversal Image Numbering to H>>F (instead of the default F>>H). Doing this will flip the order in which images are displayed in the mosaics (with the upper left being the most superior rather than the most inferior slice). There is some confusion regarding whether this option changes merely the image storage or the image acquisition. To test this, I created the examples linked at the bottom of the page. As described previously, each series started with the participant in a canonical position in the scanner, but part way through each series they rotated their head so that at the end of the volume the nose pointed toward one of their shoulders. As can be seen in the crucial volumes (for Siemens product, see image IM-0004-0002.dcm; for CMRR, see IM-0008-0004.dcm), the Siemens Interleaved as well as Ascending volumes are still acquired in the foot-to-head direction. In other words, while changing this value changes how the images are stored and displayed, it does not change how they are acquired. I would discourage using this setting. However, if you do have images with reversed image numbering, I would recommend using dcm2niix version v1.0.20171021 or later: prior versions generate a dire warning (and do not correctly specify the slice timing field of the BIDS file).

14.4 Setting the reference slice

Slice timing correction attempts to make all the slices in a volume appear as if they were acquired at the same moment, which allows us to apply a single statistical estimate to the whole volume. By default, SPM's slice timing will make all slices appear at the same time as the middle slice of the volume; the notion is that we want to apply the least interpolation to the central slices, which are typically our focus of interest. However, you could also align all slices to the first (or last) slice; just make sure you set your onset times accordingly.

14.5 TR versus TA

Typically, we acquire fMRI data continuously, with no gap between volumes. In this case the acquisition time (TA) is directly related to the repeat time (TR): for a volume with N slices, TA = TR - (TR/N). In other words, if we have a volume with 4 slices and a TR of 2000ms, the TA is 1500ms (with slices acquired at 0, 500, 1000 and 1500ms). However, note that it is possible to set a temporal gap between volumes (we do this for sparse designs). Therefore, you should check this prior to STC.
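The worked example above in a few lines of Matlab (for a continuous ascending acquisition; a sparse design would need the real inter-volume gap added):

    % TA and nominal slice times for a continuous ascending acquisition.
    TR = 2000; N = 4;                 % repeat time in ms, slices per volume
    TA = TR - TR/N;                   % acquisition time: 1500 ms
    sliceTimes = (0:N-1) * (TR/N);    % 0, 500, 1000, 1500 ms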

14.6 Links

• The practiCal page has a nice description of slice timing.
• Paper demonstrating how slice timing can help.
• The HCP webpage includes the details for users of the popular CMRR sequences.
• Siemens includes a white paper describing slice order.
• Here is a nice description of slice order on Siemens.
• There is a nice wiki page on slice order.

CHAPTER FIFTEEN

PUBLICATIONS

Click here for a complete automated listing.
• Li X, Morgan PS, Ashburner J, Smith J, Rorden C. The first step for neuroimaging data analysis: DICOM to NIfTI conversion. J Neurosci Methods. 264:47-56. PMID: 26945974.
• Gleichgerrcht E, Fridriksson J, Rorden C, Nesland T, Desai R, Bonilha L. Separate neural systems support representations for actions and objects during narrative speech in post-stroke aphasia. Neuroimage Clin. 10:140-5. PMID: 26759789.
• Gleichgerrcht E, Kocher M, Nesland T, Rorden C, Fridriksson J, Bonilha L. Preservation of structural brain network hubs is associated with less severe post-stroke aphasia. Restor Neurol Neurosci. 34(1):19-28. PMID: 26599472.
• Guo D, Fridriksson J, Fillmore P, Rorden C, Yu H, Zheng K, Wang S. Automated lesion detection on MRI scans using combined unsupervised and supervised methods. BMC Med Imaging. 15:50. PMID: 26518734.
• Yourganov G, Smith KG, Fridriksson J, Rorden C. Predicting aphasia type from brain damage measured with structural MRI. Cortex. 73:203-15. PMID: 26465238.
• Bonilha L, Gleichgerrcht E, Fridriksson J, Rorden C, Breedlove JL, Nesland T, Paulus W, Helms G, Focke NK. Reproducibility of the Structural Brain Connectome Derived from Diffusion Tensor Imaging. PLoS One. 10(8):e0135247. PMID: 26332788.
• Bonilha L, Gleichgerrcht E, Nesland T, Rorden C, Fridriksson J. Success of Anomia Treatment in Aphasia Is Associated With Preserved Architecture of Global and Left Temporal Lobe Structural Networks. Neurorehabil Neural Repair. 30(3):266-79. PMID: 26150147.
• Fridriksson J, Basilakos A, Hickok G, Bonilha L, Rorden C. Speech entrainment compensates for Broca's area damage. Cortex. 69:68-75. PMID: 25989443.
• Kocher M, Gleichgerrcht E, Nesland T, Rorden C, Fridriksson J, Spampinato MV, Bonilha L. Individual variability in the anatomical distribution of nodes participating in rich club structural networks. Front Neural Circuits. 9:16. PMID: 25954161.
• Basilakos A, Rorden C, Bonilha L, Moser D, Fridriksson J. Patterns of poststroke brain damage that predict speech production errors in apraxia of speech and aphasia dissociate. Stroke. 46(6):1561-6. PMID: 25908457.
• Desai RH, Herter T, Riccardi N, Rorden C, Fridriksson J. Concepts within reach: Action performance predicts action language processing in stroke. Neuropsychologia. 71:217-24. PMID: 25858602.
• Bonilha L, Gleichgerrcht E, Nesland T, Rorden C, Fridriksson J. Gray matter axonal connectivity maps. Front Psychiatry. 6:35. PMID: 25798111.


• Li D, Karnath HO, Rorden C. Egocentric representations of space co-exist with allocentric representations: evidence from spatial neglect. Cortex. 58:161-9. PMID: 25038308.
• Fridriksson J, Fillmore P, Guo D, Rorden C. Chronic Broca's Aphasia Is Caused by Damage to Broca's and Wernicke's Areas. Cereb Cortex. 25(12):4689-96. PMID: 25016386.
• Bonilha L, Nesland T, Rorden C, Fillmore P, Ratnayake RP, Fridriksson J. Mapping remote subcortical ramifications of injury after ischemic strokes. Behav Neurol. 2014:215380. PMID: 24868120.
• Bonilha L, Rorden C, Fridriksson J. Assessing the clinical effect of residual cortical disconnection after ischemic strokes. Stroke. 45(4):988-93. PMID: 24619391.
• Bonilha L, Nesland T, Rorden C, Fridriksson J. Asymmetry of the structural brain connectome in healthy older adults. Front Psychiatry. 4:186. PMID: 24409158.
• Rorden C, Hanayik T. StimSync: open-source hardware for behavioral and MRI experiments. J Neurosci Methods. 227:90-9. PMID: 24286701.
• de Haan B, Rorden C, Karnath HO. Abnormal perilesional BOLD signal is not correlated with stroke patients' behavior. Front Hum Neurosci. 7:669. PMID: 24137123.
• Fridriksson J, Guo D, Fillmore P, Holland A, Rorden C. Damage to the anterior arcuate fasciculus predicts non-fluent speech production in aphasia. Brain. 136:3451-60. PMID: 24131592.
• Huang Y, Dmochowski JP, Su Y, Datta A, Rorden C, Parra LC. Automated MRI segmentation for individualized modeling of current flow in the human head. J Neural Eng. 10(6):066004. PMID: 24099977.
• Huang Y, Su Y, Rorden C, Dmochowski J, Datta A, Parra LC. An automated method for high-definition transcranial direct current stimulation modeling. Conf Proc IEEE Eng Med Biol Soc. 2012:5376-9. PMID: 23367144.
• Smith DV, Clithero JA, Rorden C, Karnath HO. Decoding the anatomical network of spatial attention. Proc Natl Acad Sci U S A. 110(4):1518-23. PMID: 23300283.
• Fridriksson J, Hubbard HI, Hudspeth SG, Holland AL, Bonilha L, Fromm D, Rorden C. Speech entrainment enables patients with Broca's aphasia to produce fluent speech. Brain. 135:3815-29. PMID: 23250889.
• Richardson JD, Fillmore P, Rorden C, Lapointe LL, Fridriksson J. Re-establishing Broca's initial findings. Brain Lang. 123(2):125-30. PMID: 23058844.
• Rice JK, Rorden C, Little JS, Parra LC. Subject position affects EEG magnitudes. Neuroimage. 64:476-84. PMID: 23006805.
• Rorden C, Hjaltason H, Fillmore P, Fridriksson J, Kjartansson O, Magnusdottir S, Karnath HO. Allocentric neglect strongly associated with egocentric neglect. Neuropsychologia. 50(6):1151-7. PMID: 22608082.
• Magnusdottir S, Fillmore P, den Ouden DB, Hjaltason H, Rorden C, Kjartansson O, Bonilha L, Fridriksson J. Damage to left anterior temporal cortex predicts impairment of complex syntactic processing: a lesion-symptom mapping study. Hum Brain Mapp. 34(10):2715-23. PMID: 22522937.
• Basilakos A, Fillmore PT, Rorden C, Guo D, Bonilha L, Fridriksson J. Regional white matter damage predicts speech fluency in chronic post-stroke aphasia. Front Hum Neurosci. 8:845. PMID: 25368572.

CHAPTER SIXTEEN

INDICES AND TABLES

• search
