This exercise familiarizes you with the processing and interpretation of evoked responses.
Download a pre-recorded MEG data set as a Matlab file from here (size is about 220 MB). This file holds three Matlab variables:
data : a matrix of MEG-channels (315) by time points (200000)
channels : a vector (315) of channel names in the same order as the rows in the data matrix
samplingrate : the sampling rate in Hz (the reciprocal of this value is the time between two consecutive time points in the data matrix)
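As a quick sanity check, the file can be loaded and inspected along these lines (the file name here is a placeholder for the downloaded file; only data, channels and samplingrate come from the file):

```matlab
% Load the data set and confirm its contents match the description above.
load('meg_data.mat');                      % hypothetical file name -- use the downloaded file
size(data)                                 % should be channels (315) x time points (200000)
dt = 1 / samplingrate;                     % time between consecutive samples, in seconds
trig_idx = find(strcmp(channels, 'STI014'));   % locate the trigger channel by name
plot((0:size(data,2)-1) * dt, data(trig_idx, :));   % inspect the trigger pulses
```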
The data are a continuous (non-averaged) recording of a healthy adult
volunteer who received 8 kinds of simple sensory stimuli, one at a
time, in a random order. Stimulus onsets are marked with trigger events
on channel STI014 (you can ignore other STIxxx channels). There is one
trigger code corresponding to one stimulus type, and the onset
(transition from zero to a non-zero value) of the trigger pulse denotes
the onset of the stimulus. The duration of the trigger pulse is
Your task is to:
1. Compute the average evoked response (separately) for the trials
with trigger codes 2, 3, and 16 (just ignore the other trigger codes).
Choose a time window which includes, say, a 100-ms pre-stimulus baseline
and extends to about 500 ms post-stimulus.
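One way to sketch the epoching and averaging in Matlab is shown below for a single trigger code; only data, channels and samplingrate come from the file, and the other names are illustrative:

```matlab
% Sketch: average evoked response for one trigger code (here 2).
trig   = data(strcmp(channels, 'STI014'), :);
onsets = find(trig(1:end-1) == 0 & trig(2:end) ~= 0) + 1;   % zero -> non-zero transitions
onsets = onsets(trig(onsets) == 2);                         % keep trigger code 2 only
pre    = round(0.1 * samplingrate);                         % 100-ms pre-stimulus baseline
post   = round(0.5 * samplingrate);                         % ~500 ms post-stimulus
onsets = onsets(onsets > pre & onsets + post <= size(data, 2));  % drop edge trials
avg    = zeros(size(data, 1), pre + post + 1);
for k = 1:length(onsets)
    avg = avg + data(:, onsets(k)-pre : onsets(k)+post);
end
avg = avg / length(onsets);
avg = avg - repmat(mean(avg(:, 1:pre), 2), 1, pre + post + 1);   % baseline correction
```

Repeating this for codes 3 and 16 gives the other two averages for plotEF.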
2. Visualize the average responses topographically. You may use the plotEF Matlab function available on this page; see Ex 2: Helper functions.
3. Make your best guess as to which kind of stimulus corresponds to each
trigger code (i.e., which sensory modality and where on the body it was
presented). Exploit the location and timing information;
remember that the planar gradiometers peak right above the source. Note
that all the stimuli are rather elementary.
4. Write a report. In addition to
the text, include three figures, one for each average response. You are
encouraged to search relevant literature for typical characteristics of
MEG responses to sensory stimuli. Refer to such studies in your report.
This exercise is about source modelling. The task is to construct and apply a
cortically constrained L2 minimum-norm estimator to two averaged data
sets, visualize the estimates on the cortical surface, and interpret the results.
The Matlab data file given below contains all the ingredients:
data1, data2, timeaxis, n_trial
two evoked-response data matrices, the latencies of each sample in
those matrices, and the number of trials averaged. The datasets are
baseline corrected and filtered to 0-40 Hz. Note that the data are rank
deficient due to the application of interference suppression methods;
the effective number of channels (degrees of freedom) is 64.
the lead field / gain matrix
noise covariance matrix, computed from pre-stimulus baseline periods in unaveraged data
data covariance matrix, computed from the window 0 - 0.5 s about each unaveraged trial (needed only for a beamformer)
low-resolution cortical surface mesh for visualizations
The regularization coefficient in the L2 MNE inverse operator can be set to lambda = tr(L*L') / (tr(Cn) * snr^2), where tr is the trace of a matrix and snr refers to the estimated signal-to-noise ratio of the data. In unaveraged data, this can be assumed to be 1. Recall how trial averaging improves the signal-to-noise ratio and use that as the snr above.
The source covariance is unknown, so you may use an identity matrix for it.
Matrix inversions that involve the noise or data covariance need to be
regularized. SVD truncation with a tolerance of about 1.0e-24 is appropriate.
In Matlab, the pseudoinverse function pinv() can do this; check its documentation.
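Under these assumptions (identity source covariance, snr derived from the trial count), the operator might be assembled roughly as follows; all variable names besides L, Cn, n_trial and data1 are illustrative:

```matlab
% Sketch of the L2 MNE inverse operator with identity source covariance.
snr2   = n_trial;                             % averaging n trials raises SNR^2 from 1 to n
lambda = trace(L*L') / (trace(Cn) * snr2);    % regularization coefficient as given above
W      = L' * pinv(L*L' + lambda*Cn, 1e-24);  % pinv with truncated-SVD tolerance
s_hat  = W * data1;                           % source amplitudes (sources x time)
```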
For visualization, you can use the provided Matlab function
which displays 'source_estimate' vector on the cortical surface found
in 'anatomy'. The function creates separate views to the left and right
hemispheres, which you can zoom and rotate using the normal Matlab plot
In the report:
explain how you constructed the L2 MNE operator (you may also include the Matlab code snippet)
include figures of the source estimates of the two data sets
discuss the differences between the estimates: what can you say about the
laterality of the stimuli presented in data1 vs. those in data2?
discuss how the spatial extent of the estimated activity reflects the extent of the underlying neural current distribution
Bonus tasks (one extra point from each):
Construct a beamformer operator and apply it to the data sets
Include depth weighting in the L2 MNE operator by modifying the source covariance matrix appropriately
The data required in the
assignment can be downloaded from
Matlab code designed
to help in the assignment is given underneath in another tar-file
The assignment consists of two
parts, and of writing a report describing and showing the main findings.
1. Spontaneous oscillatory activity
Analyze how spontaneous
(continuous) oscillatory activity is modulated when the subject opens
his/her eyes. Identify the peak frequency of the modulation, identify
which of the two data-sets (data1, data2) represents eyes-closed
condition, and show the effects on the MEG sensors.
The data and
related variables are contained in the file P1_spontaneous_data.mat
chanNames=names of MEG sensors
Visualization of the spectra across sensors can be conducted with the
provided function, where f contains the
frequency bins at which the spectra have been estimated and
spectra1/spectra2 the estimated spectra; the selection of visualized
frequencies and the scaling of the visualization are controlled via the
options 'xlimits' and 'ylimits' (e.g., ...,'Xlimits',[... 25],'Ylimits',[-5e-10 3e-9]))
To estimate the spectra, use the Matlab
function pwelch. Compute the spectrum for one channel at a time (in a
loop). Select a reasonable frequency
resolution; around half a hertz will suffice; see the Matlab help for pwelch.
When visualizing, concentrate on
frequencies below 30 Hz so that possible high-frequency noise does
not hamper the detection of the sought power peak in the alpha frequency range.
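A minimal loop over channels might look as follows; data1 (channels x samples) and Fs are assumed names, since the file's variable list above is incomplete:

```matlab
% Sketch: Welch spectra for all channels, one channel per loop pass.
nfft = 2^nextpow2(2 * Fs);            % window long enough for ~0.5-Hz resolution
for c = 1:size(data1, 1)
    [pxx, f] = pwelch(data1(c,:), hanning(nfft), nfft/2, nfft, Fs);
    if c == 1
        spectra1 = zeros(size(data1, 1), length(pxx));
    end
    spectra1(c, :) = pxx;             % one spectrum per channel
end
```

The same loop repeated for data2 yields spectra2 for the comparison of the two conditions.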
2. Event-related modulation of rhythmic activity
Analyze the event-related
modulation of rhythmic activity via time-frequency representations
(TFRs) during an attentive visual task. Identify the frequency range
where the main modulation occurs and show the effects on the MEG sensors.
The data and related variables are contained in the file
The file contains a single structure (epochs) with the following fields:
chan_names = names of the recorded channels (the first 306 contain the MEG data);
peri_int = time interval of the epochs;
zero_sample = sample number at stimulus presentation (time zero);
ns_pre = number of samples before stimulus presentation;
ns_post = number of samples after stimulus presentation;
… = number of epochs;
data = cell array with the recorded data for all 120 epochs.
TFRs per trial and channel can be computed with the
function energyvec (e.g., TFR = energyvec(f, time_series, Fs), where f
is a frequency of interest, time_series the MEG data for a single
channel, and Fs the sampling frequency).
Visualization of the TFRs
across sensors can be conducted with the function plotTFR,
where timeVec/freqVec are vectors containing the time
instances/frequencies at which the TFRs have been computed.
The visualization is controlled via the options 'xlimits', 'ylimits' and
'zlimits' (e.g., plotTFR(freqVec,timeVec,TFRs,'layout','NM306planar.lay','labels','on','baseline',[-0.3
0],'xlimits',[-0.1 1],'ylimits',[3 20],'zlimits',[-1e-23 1e-23]); );
the baseline subtraction can also be performed outside the visualization
with the function TFRbaseline (TFRs = TFRbaseline(timeVec,freqVec,TFRs,[-0.3 0]); )
The TFRs need to be computed first,
separately for each frequency, channel and epoch, with the function
energyvec. It is best to focus on a
reasonable frequency range and resolution, e.g., freqVec=[3:30].
To see when and at what
frequency range the bright spot in some of the channels really is,
use the function imagesc to visualize a single channel. For this you will
need the provided TFRbaseline function, since imagesc does not perform
a baseline correction (in plotTFR the baseline correction is
applied via the 'baseline' option).
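The triple loop could be sketched as below; the field names follow the description above, while Fs (the sampling frequency) is an assumed name since it is not listed among the fields:

```matlab
% Sketch: TFRs with energyvec for each epoch, channel and frequency,
% accumulated and then averaged over epochs.
freqVec = 3:30;
n_ep    = length(epochs.data);
n_chan  = 306;                                    % MEG channels only
n_t     = size(epochs.data{1}, 2);
TFRs    = zeros(n_chan, length(freqVec), n_t);
for e = 1:n_ep
    for c = 1:n_chan
        for fi = 1:length(freqVec)
            tfr = energyvec(freqVec(fi), epochs.data{e}(c,:), Fs);
            TFRs(c,fi,:) = TFRs(c,fi,:) + reshape(tfr, 1, 1, n_t);
        end
    end
end
TFRs = TFRs / n_ep;                               % average over epochs
```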
In this exercise you should do a simple statistical analysis on fMRI data
using the general linear model, as implemented in Matlab (glmfit). Write
a report describing and showing your findings.
The data is from a single subject performing a simple auditory
experiment. The auditory stimulation was words presented in a block
design (to both ears) at a rate of 60 words per minute. 96 acquisitions
(or scans) were made with a TR (repetition time) of 7s. There were two
experimental conditions: auditory stimulation and rest. The conditions
were presented in blocks of 42 seconds (6 scans), giving a total of 16
blocks. The condition for successive blocks alternated between rest and
auditory stimulation, starting with rest. The first 12 scans (one task
and one rest block) were discarded, leaving 84 acquisitions.
The data has been preprocessed: The functional images were realigned
(motion corrected), and coregistered to the anatomical image. The images
have been normalized to the MNI template brain. Finally, the functional
data was smoothed with a 6-mm FWHM Gaussian kernel.
For the purpose of your analysis
(and to reduce the amount of data posted on the Moodle page) I have
extracted the preprocessed data from three different regions of interest
(ROIs): the left auditory cortex, the right auditory cortex, and an
unrelated region in the left posterior cortex (~150 voxels in each).
Your task will be to:
1) Create and show a 'boxcar' model of the experiment based on the description above.
2) Take the shape of the BOLD response into account and create a
better model for your experiment. Use the shape of the hemodynamic
response function (sampled at 1s) from the mat-file containing the hrf
response. Plot the result.
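Steps 1 and 2 might be sketched as follows; hrf comes from the provided mat-file and the other names are illustrative:

```matlab
% Sketch: boxcar model of the block design and its HRF-convolved version.
TR      = 7;                                   % repetition time in seconds
block   = 6;                                   % scans per 42-s block
boxcar  = repmat([zeros(1,block) ones(1,block)], 1, 7);   % 84 scans, rest first
box_1s  = kron(boxcar, ones(1, TR));           % upsample to 1-s resolution
conv_1s = conv(box_1s, hrf(:)');               % convolve with the HRF (sampled at 1 s)
model   = conv_1s(1 : TR : length(box_1s));    % back to one value per scan
plot((0:83)*TR, model);                        % the BOLD-shaped regressor
```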
3) Perform a statistical analysis on each voxel in the three
different ROIs using the general linear model. Can you find
statistically significant results? A p-value of 0.001 (for the gain) can
be considered significant for the uncorrected data.
4) Correct for multiple comparisons. You may use Bonferroni
correction. Do your results survive correction? A corrected p-value of
0.05 can be used.
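The per-voxel fit and correction in steps 3 and 4 could then look roughly like this; roi_data (scans x voxels, for one ROI) and model (the convolved regressor) are assumed names:

```matlab
% Sketch: GLM fit per voxel with glmfit, then Bonferroni correction.
n_vox = size(roi_data, 2);
p     = zeros(1, n_vox);
for v = 1:n_vox
    [b, dev, stats] = glmfit(model(:), roi_data(:, v));  % glmfit adds the intercept
    p(v) = stats.p(2);                                   % p-value of the gain term
end
sig_uncorr = find(p < 0.001);           % significant, uncorrected
sig_bonf   = find(p < 0.05 / n_vox);    % Bonferroni-corrected threshold
```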
5) How well does your model fit the data in i) a voxel with a
significant result and ii) a voxel where you did not find a
significant result? Plot the responses, the fitted models, and the residuals.
6) How could you further improve your model? Describe (in the
text) at least one nuisance factor that you could include in your design
matrix to get a better fit.
The data required in the assignment can be downloaded from
http://neuro.hut.fi/~jjkujala/FBI_data_conn.zip. Matlab code designed to help
in the assignment is given in another zip-file. The
assignment consists of two parts, and of writing a report describing and
showing the main findings.
1. Functional connectivity: real vs. spurious interactions
Show how the selection of an appropriate vs. an inappropriate reference region affects
the estimates of cortico-cortical coherence in MEG; principally, show how
field-spread/spatial-leakage properties change when a reference region displays
activity compared to a situation where it does not. First, identify the
frequency showing the most prominent rhythmic activity and the source locations
showing activity at that frequency, and show what kind of coherence estimates
are obtained at the same frequency when one of the two brain regions active at
that frequency is taken as a reference vs. cases where the reference is placed
elsewhere. The data and related variables are contained in the file
P1_connectivity_data.mat; the file contains the beamformer estimates of
cortical time-series (funct_data) in 29 locations along a single axis
(x-coordinates given in variable locations) and the sampling frequency of the
data (sff). Two of the 29 locations contained simulated sources that were
mutually coherent; in the rest of the regions, no activity was simulated.
Spectral estimation can be conducted with the function pwelch (to be applied separately
to each location). A frequency resolution of ~0.5 Hz is sufficient, and for
visualization it is best to focus on frequencies < 30 Hz. When you have
identified the frequency of interest, the spatial profile of power will be clearer
if the power values are averaged, e.g., within a 2-Hz-wide window.
Compute coherence (with the function
mscohere) first from one of the identified active regions to all other regions,
and then from at least one region where no activity was detected. Similarly to
the power mapping, the results will look clearer if the computed coherence
values are averaged within a 2-Hz-wide window.
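The coherence mapping could be sketched as below; funct_data, locations and sff come from the file, while ref and f0 (the reference index and the peak frequency you identified) are placeholders:

```matlab
% Sketch: coherence from one reference location to all others with mscohere.
ref  = 10;  f0 = 10;                            % placeholder reference index and peak frequency
nfft = 2^nextpow2(2 * sff);                     % ~0.5-Hz resolution
coh  = zeros(1, size(funct_data, 1));
for k = 1:size(funct_data, 1)
    [cxy, f] = mscohere(funct_data(ref,:), funct_data(k,:), ...
                        hanning(nfft), nfft/2, nfft, sff);
    coh(k) = mean(cxy(f >= f0-1 & f <= f0+1));  % average within a 2-Hz window
end
plot(locations, coh);                           % coherence vs. source location
```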
2. Estimation of directed interactions
using Granger Causality
Estimate the appropriate model order for
Granger Causality analysis, and evaluate the directed influences two
time-series have on each other. The main goal is to identify which time-series
is the driving one, and at which frequency. The data (variable X containing 2
time-series) and its sampling frequency (s_freq) are given in P2_granger_data.mat.
The appropriate model order can be identified using the function
cca_find_model_order. The function returns the appropriate order based on both
the Akaike and the Bayesian information criteria. Either one can be used in the
subsequent analysis. It is sufficient to consider model orders < 20.
The spectral Granger causality estimation can be done using the function
cca_pwcausal. The function can be called using the syntax [GW,COH,pp,waut,cons]=cca_pwcausal(X,Nr,Nl,nlags,Fs,freq, STATFLAG)
(note the variable order), where the number of realizations is in this case 1
and STATFLAG can be set to 0. Regarding the examined frequencies, it is
sufficient to consider frequencies below 50Hz. As regards the main output (GW),
one should note that the terms are estimated such that the column variable
Granger-causes the row variable.
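The two analysis steps could be sketched with the toolbox functions named above; argument values are illustrative where the text leaves them open:

```matlab
% Sketch: model-order selection and spectral Granger causality.
[bic, aic] = cca_find_model_order(X, 1, 20);   % consider model orders < 20
nlags = bic;                                   % either criterion is acceptable
freq  = 0 : 0.5 : 50;                          % frequencies below 50 Hz
[GW, COH, pp, waut, cons] = cca_pwcausal(X, 1, size(X,2), nlags, s_freq, freq, 0);
% Column variable Granger-causes the row variable:
drive_1_to_2 = squeeze(GW(2,1,:));             % influence of series 1 on series 2
drive_2_to_1 = squeeze(GW(1,2,:));             % influence of series 2 on series 1
plot(freq, drive_1_to_2, freq, drive_2_to_1);  % compare the two directions
```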