NACS642 SPRING 2015 MEG LAB

In this lab, you will explore MEG data collected during one of the simplest and most frequently run paradigms at UMD: auditory presentation of 1 kHz tones. This 'localizer' scan elicits robust M100 auditory responses between 100 and 200 ms. Since this response is more robust when attention is engaged, participants are usually asked to silently count the tones, which are presented at unpredictable times against a quiet background.

You will use the MNE software package, free MEG analysis software developed at the Martinos Center at Massachusetts General Hospital, largely by Matti Hamalainen. You can read more about this software here: https://wiki.umd.edu/meglab/index.php?title=MNE

The easiest way to do this lab is on the cephalopod analysis machine in MMH 3416; you can get a user account on it by emailing Anna Namyst ([email protected]). The analysis files are in a shared directory that everyone has access to. From any Mac, you should be able to log into the machine remotely using Screen Sharing, so that you do not have to come to MMH to work on the assignment. This could be your own laptop, or one of the Mac workstations in the MNC computer lab (the latter will also put you in easy proximity to the MEG lab in case you have questions). You have all met our MEG lab manager, Natalia Lapinskaya. Feel free to contact her ([email protected]) if you run into trouble with the preprocessing steps or with setting up the MEG lab project.

Setting up remote access (Mac only): Note that you need to have Screen Sharing enabled. Go to Applications -> System Preferences -> Sharing and turn it on by ticking the box for Screen Sharing in the left panel. After receiving a new user account, open Finder, click on the 'Go' menu, then 'Connect to Server'. Enter the server address vnc://files.ling.umd.edu and, after entering your new credentials, select 'Connect to a virtual display'. You should be directed to a new virtual desktop on the cephalopod machine. On most Mac operating systems, you can always log in remotely; however, in a few recent versions, you need to be logged in to the physical machine in MMH first. If you have trouble logging in, email Anna and she can log you in to the physical machine. Occasionally cephalopod needs to be restarted. This is fairly rare, but if it is necessary, it will happen at either 9am, 12pm, or 3pm, so you should get in the habit of saving your work right before those times just in case.

Setting up for analysis on cephalopod: After logging in remotely or at the physical machine, navigate to the directory /Volumes/CUTTLEFISH/MEG_Experiments/NACS_Experiments/raw_data. Here you will see the data files that are assigned to you; they will be tagged with your initials (R20XX). Now open the documentation file, which is at '/Volumes/CUTTLEFISH/MEG Analysis Documentation/MNE_Preprocessing_Documentation.pdf'. For the lab, you will start following directions from Setup - Step 2: C-shell settings. Throughout the processing (e.g. when you create the directories in Step 3), the exptName will be 'NACS_Experiments'. After the directories are created, move your assigned MEG data into the KIT subdirectory for your subject, as the documentation indicates. Throughout the processing, the paradigm name is 'Localizer'. After preprocessing is complete, follow the steps below to examine the data.

PART A: SENSOR DATA

In the Terminal, make sure you are in your subject's data directory by typing the command cd /Volumes/CUTTLEFISH/MEG_Experiments/NACS_Experiments/data/R20XX.

Now type mne_browse_raw at the command line to open the graphical user interface for the MNE raw data browser. Go to File -> Open and choose R20XX_Localizer-Filtered_raw.fif.

1. The raw data should now appear on the screen. This is the raw data recorded during the localizer run. It was denoised using the de Cheveigne/Simon algorithm in Matlab and then converted from the native .sqd format used by the UMD system to the .fif format assumed by the MNE software.
a) The blue lines are the MEG data recorded at the different sensors. Eyeblinks create big peaks in the data due to the electrical activity created by the muscle movement, which is of much greater amplitude than neural activity (and therefore needs to be excluded from data analysis to increase the signal-to-noise ratio). Try to identify some eyeblinks in the data and take a screenshot.
b) Scroll along the data in time with the arrow cursors at the bottom. What temporal regularity do you first notice in the data? Include a screenshot for illustration.
c) By clicking the cursor at different positions and observing the information at the bottom of the screen, can you deduce the approximate periodicity of this effect (how frequently does it repeat)? Based on this periodicity, do you have any hypotheses about the source of this effect?
d) Anything else you find interesting about the data?
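As an aside: the same .fif file can also be inspected outside the GUI with MNE-Python, a separate Python scripting interface to MNE (not the mne_browse_raw tool used in this lab; it must be installed independently). A minimal sketch, with R20XX standing in for your own subject code:

```python
import mne

# Replace R20XX with your own subject code.
raw = mne.io.read_raw_fif('R20XX_Localizer-Filtered_raw.fif', preload=True)
print(raw.info)          # sampling rate, number of channels, filter settings
raw.plot(duration=10.0)  # interactive browser; eyeblinks appear as large peaks
```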

2. Click on Adjust -> Selection and click through the different selections to examine the data in all the sensors.
a) How many sensors does the UMD system have in total?
b) Of the 4 sensor selections, which shows the temporal regularity from (1c) most strongly? Do you have any guesses about where these sensors are located?

3. Go to Adjust -> Filter. Change the Lowpass (Hz) value to 150 and click 'Apply'. How does the data change? What about with a value of 10 Hz? Illustrate with a screenshot.
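The same comparison can be scripted; a sketch under the same MNE-Python assumptions as above, using the two cutoffs from this step:

```python
import mne

raw = mne.io.read_raw_fif('R20XX_Localizer-Filtered_raw.fif', preload=True)
raw_150 = raw.copy().filter(l_freq=None, h_freq=150.0)  # keeps faster activity (and noise)
raw_10 = raw.copy().filter(l_freq=None, h_freq=10.0)    # only slow components survive
raw_10.plot(duration=10.0)
```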

4. Return the Lowpass value to 40 Hz, which is a good default. Minimize the MEG data window for a moment, go to the Finder window, and open the file /Volumes/CUTTLEFISH/MEG_Experiments/NACS_Experiments/data/R20XX/eve/R20XX_LocalizerMod.eve in a text editor (preferably TextWrangler, which is available on the cephalopod machine). This file encodes the critical information about when the tones were actually presented relative to the MEG recording. The first column is data samples, the second column is seconds from the start of the recording, the third column you can ignore, and the fourth column is the condition code. Since the localizer consisted of the same tone presented over and over, there is only one condition in the .eve file.
a) How many rows are there? What does this suggest?
b) Notice that the first column and second column are basically the same because this recording was sampled at 1 kHz--in other words, an MEG measurement was collected every millisecond. Take a look at the time column. Based on the first few rows, about how much time separated the presentation of subsequent tones in this experiment?
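Because the .eve file is just a whitespace-delimited numeric table, both questions can also be checked with a short script; a sketch, assuming NumPy and your own subject code in place of R20XX:

```python
import numpy as np

# Path is relative to your subject's data directory.
events = np.loadtxt('eve/R20XX_LocalizerMod.eve')
samples, times, _, codes = events.T   # columns: sample, seconds, (ignored), condition
print(len(times), 'events')           # one row per tone, i.e. the number of trials
print(np.diff(times)[:5])             # time between successive tone onsets, in seconds
```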

5. Now re-open the MEG data window (if you closed it, type mne_browse_raw in the Terminal again). This time, to view the averaged data, go to File -> Open Evoked and select the average file R20XX_Localizer-ave.fif. A waveform window should pop up. This window illustrates the average response to the tones in this condition. The topographic plot represents a 'flattened head' as viewed from above: MEG channels that were located near the top of the head are now represented in the centre of the plot, while channels closer to the neck and forehead are represented in the lower and upper parts of the plot, respectively. Alongside this window, open an informational window by going to Windows -> Manage averages. According to the information in this window, how many events went into the average?
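The averaged (evoked) file can also be loaded with MNE-Python; a sketch, again with R20XX standing in for your subject code (the .fif contains only the one 'Localizer' condition, so index 0 selects it):

```python
import mne

evoked = mne.read_evokeds('R20XX_Localizer-ave.fif', condition=0)
print(evoked.nave)   # the number of epochs averaged together
evoked.plot()        # 'butterfly' plot of all channels overlaid
```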

6. To optimize viewing of the waveforms, go to Adjust -> Scales, click on 'Show channel names' and 'Show zeroline and zerolevel', adjust 'Scale magnification for averages' to 10.0, then click 'Apply'. Now examine the waveforms in the Averages window that you previously opened. You can hold down the shift key and highlight a subset of channels with your mouse to zoom in. To zoom out again, hold down the shift key and click anywhere.
a) Illustrate the averages with a screenshot. What are your first impressions?
b) In the overall plot, look for channels with large deflections. A number of channels show a strong peak between 80 ms and 110 ms, which is the well-known M100 response to auditory input. What is one of the channels showing this effect? At around what latency is the peak? (You can assess this by clicking on the waveform and looking at the value at the bottom of the Averages window.)
c) As we discussed in class, MEG should show dipolar activity, i.e. if there is a positive response in some channels, there should be a corresponding negative response in other channels (unless the source is in a part of the head not covered by MEG sensors or is masked by an overlapping source). Do you see some channels with a negative peak and others with a positive peak at approximately the same latency as in (b)? What is an example? Illustrate with screenshots.
d) You can also see a later component with the opposite polarity. What is the approximate timing of this peak?
e) In the 'Adjust scales' window (which should still be open), what happens if you unclick 'Use average display baseline'? Illustrate with a screenshot. Why is this?
f) Other impressions? When you are finished, go to File -> Quit to close the raw/evoked data browsing program.
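Question (b) can also be answered programmatically; a sketch, assuming the MNE-Python setup above and that the converted UMD KIT channels are typed as magnetometers ('mag') in the .fif:

```python
import mne

evoked = mne.read_evokeds('R20XX_Localizer-ave.fif', condition=0)
# Largest-magnitude deflection among the magnetometers in the M100 window:
ch_name, latency = evoked.get_peak(ch_type='mag', tmin=0.08, tmax=0.12)
print(f'{ch_name}: peak at {latency * 1e3:.0f} ms')
evoked.plot(picks=[ch_name])   # zoom in on just that channel
```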

7. To visualize the magnetic field patterns on the MEG helmet, derived by combining the sensor measurements with Maxwell's equations, you need to open the program that is also used for source localization. Type mne_analyze at the command line in the Terminal. The graphical user interface should appear. Although this step will not use the actual brain anatomy at all, the software requires you to load an anatomical dataset in order to proceed. So go to File -> Load Surface and click on 'fsaverage'. Make sure that 'inflated' is highlighted on the right. Next go to File -> Open. At the top right, under 'Files', click on R20XX_Localizer-ave.fif. This is the average data that you viewed in the waveform browser in the last step. In 'Available data sets', the 'Localizer' average should be selected. For the inverse operator, leave <none> selected. Under 'Options', for the MRI/MEG transform source, click on 'Default'. You should see 'fsaverage-trans.fif' appear to the left of the button. Click 'OK'. Now go to View -> Show Viewer. A new screen should appear showing the MEG helmet. Click on 'Options', unclick 'Left hemi' and 'Right hemi', click on 'Activity Estimates', and hit 'Apply'. Now, in the lower left of the main viewing window (titled 'Fields and sources'), there is a box to enter time points. Enter the time between 90 and 110 ms at which you observed the large M100 peak in the data, and press 'return' on the keyboard. You should now see the estimated magnetic field appear in the Viewer window. Take a screenshot of the left and right hemispheres, using the arrow keys to view the other side of the head.
a) Do you observe something resembling a dipole pattern? Viewing both hemispheres, and based on the field pattern at this timepoint, how many separate dipole sources of coordinated activity do you have evidence for? Does this look like a unilateral response or a bilateral response? If bilateral, which hemisphere shows the stronger activity?
b) Because of the 'right-hand rule' describing the relationship between electrical currents and magnetic fields, the electrical activity giving rise to a magnetic dipole field falls in between the positive and negative parts of the field (you can get information about the magnetic field at a given location with the cursor in the 'Fields and sources' window). Given this, do the tone responses you see appear roughly consistent with the location of auditory cortex in superior temporal areas?
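For reference, MNE-Python can compute a comparable helmet field map; a sketch, under the assumptions above, using the default fsaverage transform mentioned in this step (the subjects_dir path is a placeholder for a FreeSurfer subjects directory containing fsaverage):

```python
import mne

evoked = mne.read_evokeds('R20XX_Localizer-ave.fif', condition=0)
# 'fsaverage-trans.fif' is the default MRI/MEG transform selected in the GUI step.
maps = mne.make_field_map(evoked, trans='fsaverage-trans.fif',
                          subject='fsaverage', subjects_dir='/path/to/subjects')
evoked.plot_field(maps, time=0.1)   # field pattern near the M100 latency
```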

PART B: SOURCE LOCALIZATION

For the last part of the lab, you will be looking at a shared dataset that has had additional processing steps applied for you for convenience. Quit mne_analyze. In the Terminal, change directories: cd /Volumes/CUTTLEFISH/MEG_Experiments/AudLocalizer/data/R2169. Now type mne_analyze again at the command line to open the graphical user interface for the MNE source localization data browser.

Again, you will load the brain anatomy. In this assignment we will use brain anatomy derived from averaging many individual brains together to get a 'generic' brain. This was created in the FreeSurfer software package, so it is called 'fsaverage'. Go to File -> Load Surface and click on 'fsaverage'. Make sure that 'inflated' is highlighted on the right. An 'inflated' version of the brain should now appear, in which the cortical surface has been virtually flattened/expanded out, with the gyri in light gray and the sulci in dark gray. You can use the buttons at the bottom of the screen to move around the brain for different views. Next go to File -> Open. At the top right, under 'Files', click on R2169_Localizer-ave.fif. This is the same kind of average data that you viewed in the waveform browser earlier. In 'Available data sets', the 'Localizer' average should be selected. This time, for the inverse operator, select 'R2169_Localizer-ave-7-meg-inv.fif'. Click 'OK'.

Now you will take a look at *estimates* of which brain activity generated the field patterns that you observed in the earlier steps. For this exercise, you will look at distributed source estimates. Unlike single dipole estimates, which assume that activity comes from a single point in the brain, distributed source estimates bias towards solutions in which activity is spread across a patch of the brain. In the MNE software, it is also assumed that the activity measured by MEG sensors comes largely from the cortex, not from subcortical generators. A number of steps are required to compute the forward and inverse models that generate these activity estimates; these have already been done for you (see the sketch after this paragraph for what the resulting inverse operator contains). However, one important step to be aware of in this process is specifying where the subject's brain was in relation to the sensors--as you can imagine, activity in the same brain area will have a different effect on the activity measured at the sensors depending on where the person's head was in the MEG helmet. This coregistration step is broadly recognized as one of the largest controllable sources of error in MEG localization, and has inspired the development of very expensive technology, such as systems in which MRI and MEG can be measured sequentially in the same position so that coregistration is unnecessary. Coregistration is also a problem when an average brain is used instead of collecting an individual MRI from each subject, as the alignment in this case can never be perfect--the true brain anatomy can never be exactly registered to the MEG helmet array. Here you will explore the data to get an intuition for the problem.

Go back to the 'Options' in the Viewer (go to View -> Show Viewer if it is not already open) and unclick 'Activity Estimates', 'MEG field map', and 'Helmet'. Click on 'Transparent' next to 'Scalp'. Now click 'Digitizer Data'. In this sample dataset, you'll just see the locations of the 3 fiducial landmarks in blue (the preauricular point on each ear, and the nasion above the nose) and the 5 Head Position Indicator (HPI) coils in green (leave the 'Viewer options' window open).
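A sketch of how the precomputed inverse operator can be inspected with MNE-Python (same hedges as before; the filename is the one selected above):

```python
from mne.minimum_norm import read_inverse_operator

inv = read_inverse_operator('R2169_Localizer-ave-7-meg-inv.fif')
# The source space shows the 'distributed' character of the solution:
# thousands of candidate source locations covering both cortical hemispheres.
print(inv['src'])
```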
In the main window, go to Adjust -> Coordinate Alignment. First, we will choose a reasonably close starting position for the coregistration by estimating the positions of the 3 landmarks on the scalp surface constructed from the average MRI. Click on 'LAP', then, in the Viewer window, double-click on the scalp where you think the left preauricular point is. Next click on 'Nasion' and double-click on the scalp where you think the nasion is. Finally, do the same for 'RAP', double-clicking on the scalp where you think the right preauricular point is. In the 'Viewer options' window, un-check 'HPI and landmarks only' so that you see the digitizer datapoints again. Then, back in the 'Adjust coordinate alignment' window, click 'Align using fiducials'. You should see the digitizer datapoints move to a roughly reasonable position on the scalp.

You can now make the fit more precise by using the arrow buttons below, or by running an algorithm designed to automatically find a good fit to the points--this is the 'ICP align' button. In the 'Viewer options' window, check 'HPI and landmarks only' again. Then, back in the 'Adjust coordinate alignment' window, enter a number of steps between 1 and 15 and click on 'ICP align'; the algorithm will try to find a better fit on each consecutive step. The algorithm works on just the points that are visible in the Viewer, so notice that if you un-check 'HPI and landmarks only' in the Options window and run the algorithm again, you will get a different result. Play around with these steps until you feel you have a reasonably good alignment (there is no objective criterion for this; people just use their intuition). (MNE-Python wraps the same fiducial-plus-ICP procedure in a single tool; see the sketch after question 3.)

3. Take a screenshot to illustrate your alignment. What do you see as potential concerns about, or problems that could arise with, this procedure? Can you think of any steps one could take during data collection to mitigate these problems? (Remember that the markers will be attached to the subject's skin, with their head inside a fixed helmet, during a possibly long experiment.) You don't need to save the final coregistration, as the source estimates that would use it have already been computed for you. Close the Viewer window by clicking 'Done'. (You shouldn't need it again, but note that if you try to re-open the Viewer window after closing it, the program will crash and you'll have to restart mne_analyze--this seems to be a bug.)
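For comparison, recent versions of MNE-Python bundle the fiducial picking and ICP fitting into one interactive coregistration window; a sketch, assuming the package and its 3D dependencies are installed (the subjects_dir path is a placeholder):

```python
import mne

# Opens an interactive coregistration window: pick LAP/nasion/RAP on the
# fsaverage scalp, then fit to the digitized points with ICP, as in mne_analyze.
mne.gui.coregistration(subject='fsaverage',
                       subjects_dir='/path/to/subjects',  # FreeSurfer subjects dir
                       inst='R2169_Localizer-ave.fif')    # supplies the digitizer points
```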

4. Now you will view the inverse solution that was computed from the MEG data and the anatomical information using MNE commands at the Terminal. Go to Adjust -> Estimates and change the value of 'fmult' to 3 (this effectively raises the threshold for showing activity, which is needed for this dataset, in which the amplitude of the effects is extremely strong). Click 'Apply'. Now, using the top left or bottom left window of the main viewer, navigate to the timepoint where you see the strongest auditory response.
a) What pattern of activity do you observe on the brain? Illustrate with a screenshot. Is this pattern roughly consistent with the location of auditory cortical regions? Be sure to inspect both hemispheres.
b) Is the MEG estimated activity more concentrated in sulci or on gyri? Why do you think that might be?
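A sketch of the equivalent computation in MNE-Python, where the display threshold (here the clim percentiles) plays roughly the role of 'fmult' in mne_analyze (subjects_dir is again a placeholder, and plotting requires MNE-Python's 3D dependencies):

```python
import mne
from mne.minimum_norm import read_inverse_operator, apply_inverse

evoked = mne.read_evokeds('R2169_Localizer-ave.fif', condition=0)
inv = read_inverse_operator('R2169_Localizer-ave-7-meg-inv.fif')
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method='MNE')  # distributed estimate
stc.plot(subject='fsaverage', subjects_dir='/path/to/subjects', hemi='both',
         initial_time=0.1,                                # near the M100 peak
         clim=dict(kind='percent', lims=[95, 97.5, 99]))  # raise the display threshold
```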

If you want, you can play with the display and viewing thresholds for the estimates by using the Adjust -> Estimates menu.
