Also, if you believe that I did this on my own, you are a scoundrel and a fool. Virtually all of the code is from my colleague Derek Nee, a postdoc who used to work in my lab and who remains my superior in all things, scientific and otherwise. He also once got me with the old "If you say gullible slowly, it sounds like orange" joke.
In any case, all that I did was repackage it into a function to make it more accessible to people outside of our lab. You may see a couple of snippets of code that are useful outside of beta series analysis; for example, the spm_get_data command, which can be used for parameter extraction from the command line without using marsbar. Also note that the script assumes that each trial for a condition has been labeled separately, with the regressor name followed by a "_" and then the occurrence of the trial. So, for example, if I were looking at 29 instances of pressing the left button and the regressor name was LeftButtonPress, I would have LeftButtonPress_1, LeftButtonPress_2, LeftButtonPress_3...LeftButtonPress_29.
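For instance, here is a minimal sketch of that kind of command-line extraction with spm_get_data, assuming you have already cd'd into a first-level model directory (the mask filename and the beta-image pattern are placeholders):

% Extract the mean beta value within an ROI for each beta image, without marsbar.
Vroi = spm_vol('/data/study1/Masks/RightM1.nii');          % placeholder ROI mask
Y = spm_read_vols(Vroi);
[x,y,z] = ind2sub(size(Y), find(Y > 0));                   % voxel indices inside the mask
XYZ = [x y z]';                                            % 3 x nVoxels voxel coordinates

Vbeta = spm_vol(spm_select('FPList', pwd, '^beta_.*\.(img|nii)$'));  % beta image headers
vals = spm_get_data(Vbeta, XYZ);                           % images x voxels matrix
roiMeans = nanmean(vals, 2);                               % one mean value per beta image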
Obviously, labeling each trial separately by hand is very tedious. The Multiple Conditions option in the SPM GUI for setting up first-level analyses is indispensable here; covering it, however, is a topic for another tutorial, one that will probably cross over into our discussion of GLMs.
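In the meantime, here is a rough sketch of the conditions file that option expects, with the trial-wise regressor names built automatically (the onsets below are made up; SPM's multiple-conditions file just needs names, onsets, and durations cell arrays saved in a .mat file):

% Build one regressor per trial so that each trial gets its own beta image.
trialOnsets = [12.5 30.0 47.5 65.0];          % placeholder onsets for LeftButtonPress (seconds)
nTrials = numel(trialOnsets);

names = cell(1, nTrials);
onsets = cell(1, nTrials);
durations = cell(1, nTrials);
for t = 1:nTrials
    names{t} = sprintf('LeftButtonPress_%d', t);   % LeftButtonPress_1, LeftButtonPress_2, ...
    onsets{t} = trialOnsets(t);
    durations{t} = 0;                              % event-related: zero duration
end
save('multiple_conditions_run1.mat', 'names', 'onsets', 'durations');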
The script:
function DoBetaSeries(rootdir, subjects, spmdir, seedroi, Conds, MapNames)
% function DoBetaSeries(rootdir, subjects, spmdir, seedroi, Conds, MapNames)
% INPUT:
%  rootdir  - Path to where subjects are stored
%  subjects - List of subjects (can concatenate with brackets)
%  spmdir   - Path to folder containing SPM file
%  seedroi  - Absolute path to file containing ROI in NIFTI format
%  Conds    - List of conditions for beta series analysis
%  MapNames - Output for list of maps, one per Conds cell
%
%  Example use:
%    DoBetaSeries('/data/study1/fmri/', [101 102 103],
%        'model/BetaSeriesDir/', '/data/study1/Masks/RightM1.nii',
%        {'TapLeft'}, {'TapLeft_RightM1'})
%
%  Conditions to be averaged together should be placed together in a cell;
%  separate correlation maps will be made for each cell, e.g.:
%    Conds = {'Incongruent' 'Congruent' {'Error' 'Late'}};
%
%  For MapNames, there should be one per Conds cell above, e.g., with the
%  Conds above:
%    MapNames = {'Incongruent_Map', 'Congruent_Map', 'Error_Late_Map'}
%
%  Once you have z-maps, these can be entered into a 2nd-level 1-sample
%  t-test, or contrasts can be taken between z-maps and these contrasts
%  can be taken to a 1-sample t-test as well.

if nargin < 6
    disp('Need rootdir, subjects, spmdir, seedroi, Conds, MapNames. See "help DoBetaSeries" for more information.')
    return
end

%Find XYZ coordinates of ROI
Y = spm_read_vols(spm_vol(seedroi),1);
indx = find(Y>0);
[x,y,z] = ind2sub(size(Y),indx);
XYZ = [x y z]';

%Find each occurrence of a trial for a given condition
%These will be stacked together in the Betas array
for i = 1:length(subjects)
    subj = num2str(subjects(i));
    disp(['Loading SPM for subject ' subj]);

    %Can change the following line of code to CD to the directory
    %containing your SPM file, if your directory structure is different
    cd([rootdir subj filesep spmdir]);
    load SPM;

    for cond = 1:length(Conds)
        Betas = [];
        currCond = Conds{cond};
        if ~iscell(currCond)
            currCond = {currCond};
        end

        for j = 1:length(SPM.Vbeta)
            for k = 1:length(currCond)
                if ~isempty(strfind(SPM.Vbeta(j).descrip,[currCond{k} '_']))
                    Betas = strvcat(Betas,SPM.Vbeta(j).fname);
                end
            end
        end

        %Extract beta series time course from ROI
        %This will be correlated with every other voxel in the brain
        if ischar(Betas)
            P = spm_vol(Betas);
        end

        est = spm_get_data(P,XYZ);
        est = nanmean(est,2);

        %----Do beta series correlation between ROI and rest of brain---%
        MapName = MapNames{cond};
        disp(['Performing beta series correlation for ' MapName]);

        Vin = spm_vol(Betas);
        nimgo = size(Vin,1);
        nslices = Vin(1).dim(3);

        % create new header files
        Vout_r = Vin(1);
        Vout_p = Vin(1);
        [pth,nm,xt] = fileparts(deblank(Vin(1).fname));
        Vout_r.fname = fullfile(pth,[MapNames{cond} '_r.img']);
        Vout_p.fname = fullfile(pth,[MapNames{cond} '_p.img']);

        Vout_r.descrip = ['correlation map'];
        Vout_p.descrip = ['p-value map'];

        Vout_r.dt(1) = 16;
        Vout_p.dt(1) = 16;

        Vout_r = spm_create_vol(Vout_r);
        Vout_p = spm_create_vol(Vout_p);

        % Set up large matrix for holding image info
        % Organization is time by voxels
        slices = zeros([Vin(1).dim(1:2) nimgo]);
        stack = zeros([nimgo Vin(1).dim(1)]);
        out_r = zeros(Vin(1).dim);
        out_p = zeros(Vin(1).dim);

        for i = 1:nslices
            B = spm_matrix([0 0 i]);

            %creates plane x time
            for j = 1:nimgo
                slices(:,:,j) = spm_slice_vol(Vin(j),B,Vin(1).dim(1:2),1);
            end

            for j = 1:Vin(1).dim(2)
                stack = reshape(slices(:,j,:),[Vin(1).dim(1) nimgo])';
                [r p] = corr(stack,est);
                out_r(:,j,i) = r;
                out_p(:,j,i) = p;
            end

            Vout_r = spm_write_plane(Vout_r,out_r(:,:,i),i);
            Vout_p = spm_write_plane(Vout_p,out_p(:,:,i),i);
        end

        %Convert correlation maps to z-scores
        %NOTE: If uneven number of trials in conditions you are
        %comparing, leave out the "./(1/sqrt(n-3))" term in the zdata
        %variable assignment, as this can bias towards conditions which
        %have more trials
        disp(['Converting correlation maps for subject ' subj ', condition ' MapNames{cond} ' to z-scores'])
        P = [MapNames{cond} '_r.img'];
        n = size(Betas,1);
        Q = MapNames{cond};

        Vin = spm_vol([MapNames{cond} '_r.img']);

        % create new header files
        Vout = Vin;
        [pth,nm,xt] = fileparts(deblank(Vin(1).fname));
        Vout.fname = fullfile(pth,[Q '_z.img']);
        Vout.descrip = ['z map'];
        Vout = spm_create_vol(Vout);

        data = spm_read_vols(Vin);
        zdata = atanh(data)./(1/sqrt(n-3));
        spm_write_vol(Vout,zdata);

    end
end

This can either be copied and pasted into a script, or downloaded as a .m file here.
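As a quick usage sketch, mirroring the example in the help text (the paths, subject numbers, and ROI file are placeholders):

% Seed-to-whole-brain beta series maps for three subjects, seeded in right M1.
% For each subject this writes TapLeft_RightM1_r.img, _p.img, and _z.img
% into that subject's model/BetaSeriesDir/ folder.
DoBetaSeries('/data/study1/fmri/', [101 102 103], 'model/BetaSeriesDir/', ...
    '/data/study1/Masks/RightM1.nii', {'TapLeft'}, {'TapLeft_RightM1'});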
Hi Andy,
Thank you for posting the script and the explanation of beta series. Quick question: the script does not accommodate individually defined ROIs, correct? It looks like the same ROIs apply across subjects. Laura.
Hi Laura,
That is correct; you could modify it to use individual ROIs, possibly by enclosing a loop around the main code and making the "seedroi" variable an array that updates its selection depending on which subject it is currently analyzing.
Best,
-Andy
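A rough sketch of one way to do that without editing the function itself is to loop over subjects and call DoBetaSeries once per subject with that subject's own ROI (the filenames below are hypothetical):

% Hypothetical per-subject seed ROIs, in the same order as the subjects vector.
subjects = [101 102 103];
seedrois = {'/data/study1/Masks/101_RightM1.nii', ...
            '/data/study1/Masks/102_RightM1.nii', ...
            '/data/study1/Masks/103_RightM1.nii'};

for s = 1:length(subjects)
    DoBetaSeries('/data/study1/fmri/', subjects(s), 'model/BetaSeriesDir/', ...
        seedrois{s}, {'TapLeft'}, {'TapLeft_RightM1'});
end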
Thank you kindly, Andy!!
Hello Andy,
Thank you for the code. I applied it to my dataset, but all the z, p, and r maps are empty. I looked through the steps in the code, and both est = spm_get_data(P,XYZ) and est = nanmean(est,2) give me all zeros. I tried to run the code on native-space images, and for some of the ROIs the z maps look correct; for other ROIs the maps are empty. Have you ever run into this issue? I'm trying to troubleshoot but have had no success so far.
Best,
Le
Update: I think the problem may lie in the ROIs. When I used marsbar to extract the beta series timecourses, it seemed to work fine. The ROIs used in marsbar are in .mat format, however. To use DoBetaSeries, these have to be converted to .nii format. When I looked at the X Y Z coordinates provided by DoBetaSeries, the coordinates do not seem to make sense. I wonder whether my conversion was the problem. Do you know of any reliable software/code to convert .mat or Analyze format to .nii?
Best,
Le
Update 2: I spoke too soon. Now I'm not positive the ROIs are the problem, since the same ROIs give me the beta time series with marsbar but zeros with est = spm_get_data(P,XYZ). I'm very confused now.
Sorry I've hijacked this with my endless postings.
Hey Le,
It may be a typo in the path to the ROIs you want. What is in the variable P when you run the script, and does it match up with the files you've chosen?
If everything looks OK but it's still not working, message me over gmail (andrew.jahner), and we'll try to work it out.
-Andy
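If the .mat-to-.nii conversion does turn out to be the issue: as far as I recall, marsbar ROI objects can be written out as binary images with their save_as_image method, so a hedged sketch would look something like this (the filenames are placeholders, and this assumes marsbar is on the MATLAB path):

% Convert a marsbar ROI (.mat) into a NIfTI mask that DoBetaSeries can read.
roi = maroi('load', 'myROI_roi.mat');     % load the marsbar ROI object
save_as_image(roi, 'myROI.nii');          % write it out as a binary image
% Afterwards, check that the written mask has the same dimensions and
% orientation as your beta images; reslice it if it does not.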
Hi Andy! Thank you for posting this! I have a question: if I wanted to constrain my extraction not only to voxels within the ROI but also use WM and CSF masks (outputs from segmenting the individual T1, then binarized and thresholded), how would I add this into the code? It would be very helpful if you could comment on this.
Best, Chris
Hi Chris,
Are you talking about using WM and CSF as separate masks, or combining them into a mask along with whatever other ROI you are using? If you want to analyze each one sequentially, one thing you could do would be to have seedroi as a cell, and then add another for loop onto the code to loop over each ROI within it.
I don't know if I'll get around to that anytime soon, but conceptually that should work.
-Andy
Hi Andy! Yes, exactly, so you would build loops around it so that multiple conditions have to be met (i.e., extract the time series from the anatomically defined ROI only if it is outside of WM and CSF)?
Thank you!
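Following up on this exchange, here is a minimal sketch of restricting the seed coordinates to voxels that are inside the anatomical ROI but outside the binarized WM and CSF masks (all filenames are placeholders, and the three images are assumed to have identical dimensions and orientation); this logic could replace the XYZ block near the top of DoBetaSeries:

% Keep only ROI voxels that do not fall inside the WM or CSF masks.
Yroi = spm_read_vols(spm_vol('RightM1.nii'));     % anatomically defined seed ROI
Ywm  = spm_read_vols(spm_vol('wm_bin.nii'));      % binarized white-matter mask
Ycsf = spm_read_vols(spm_vol('csf_bin.nii'));     % binarized CSF mask

keep = (Yroi > 0) & ~(Ywm > 0) & ~(Ycsf > 0);     % all three conditions must be met
[x, y, z] = ind2sub(size(Yroi), find(keep));
XYZ = [x y z]';                                    % pass these coordinates to spm_get_data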
Thank you very much for the clear videos and the useful script. What is your opinion about applying BSA to experiments with a short stimulus duration (approx. 500 ms)?
Best,
Nicky
Hi Nicky,
It's not so much the duration of the stimulus that matters, but the variability of the resulting beta estimates. As long as the model is able to efficiently assign variability to each trial, you should be fine.
Best,
-Andy
Hi Andy,
Thanks a lot!
Best,
Nicky
Hi Andy,
I am applying the beta series script you've posted (thank you for sharing) to look at functional connectivity in an event-related design. I have two task conditions (Face and Scene) and two events (Delay and Probe). In the first-level analysis, I created four analysis conditions of interest: face_delay, face_probe, scene_delay, and scene_probe. Data from these conditions were used to calculate the beta series correlations. For the second-level analysis, I've conducted two comparisons:
- Condition comparison: Face > Scene and Scene > Face for delay and probe separately.
- Group comparison: Since I have two subject groups, I also have Group 1 > Group 2 and Group 2 > Group 1 for the face_delay, face_probe, scene_delay, and scene_probe conditions separately.
What I am trying to do next is a combination of a group comparison and a condition comparison, such as Group 1 > Group 2 for Scene > Face in each event. I'm uncertain how to do this since I don't have Scene > Face in the first-level analysis. Conceptually, my guess is that with the first-level analysis I get the correlation maps for Face and the correlation maps for Scene in each task event for each subject. I can do the subtraction between the two maps to get Scene > Face at the individual-subject level and then do the group comparison with the second-level analysis.
Would anyone have any suggestions how to go about doing this analysis?
Thank you for your help!
Thang
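One way to get the per-subject Scene > Face difference map is to subtract the two z-maps with SPM's image calculator before taking them to the second level; here is a hedged sketch using SPM12's spm_imcalc (the filenames follow the _z.img naming the script uses, but your MapNames will differ). Note also the script's comment about dropping the 1/sqrt(n-3) scaling if the two conditions have different numbers of trials.

% Per-subject difference of beta series z-maps: Scene minus Face (delay event).
% The resulting image can then be entered into a two-sample t-test
% (Group 1 vs. Group 2) at the second level.
inputs = char('scene_delay_Map_z.img', 'face_delay_Map_z.img');   % placeholder names
spm_imcalc(spm_vol(inputs), 'Scene_minus_Face_delay.nii', 'i1 - i2');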
Hi Andy
What if I model each trial as a separate model (as suggested by Mumford's paper)? How do I go about calculating beta series correlations?
Thanks
Hi there,
I don't know how to do that in SPM; but, if you are feeling adventurous, you could do it in AFNI by concatenating each correlation map (maybe convert it to z-scores first), and then running a correlation on your voxel or ROI with every other voxel in the brain using a program like 3dfim+.
-Andy
Thanks, Andy
You mean the beta maps rather than the correlation maps? So I concatenate the beta maps and run a correlation using AFNI?
Thanks
Yes, that's right.
Hi
I am looking for information on the Vout subfunction.
Vout
Vout.dt
Vout.fname
spm_write_vol
Best regards,
Charlotte
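For reference, here is a minimal sketch of how those pieces fit together in the script above; Vout is not a subfunction but an image header struct of the kind returned by spm_vol (the filename below is a placeholder):

Vin = spm_vol('beta_0001.nii');        % header of an existing image
Vout = Vin;                            % copy its dimensions, orientation, etc.
Vout.fname = 'my_z_map.nii';           % filename the new image will be written to
Vout.descrip = 'z map';                % free-text description stored in the header
Vout.dt(1) = 16;                       % data type code: 16 = 32-bit float (see spm_type)
Vout = spm_create_vol(Vout);           % create the new image file on disk

data = spm_read_vols(Vin);             % 3-D matrix of voxel values
spm_write_vol(Vout, atanh(data));      % write transformed values into the new image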