Crowd-sourced frequently asked questions

Frequently Asked Questions…and their crowdsourced answers.

Choosing Parameters

  • In deciding whether to use Skyra or Prisma at PNI, was there a reason (other than scheduling/availability) that drove your decision to use one scanner over the other? Have you noticed any systematic differences between Skyra and Prisma at PNI?

  • What types of anatomical scans do you collect? Do you have a special reason for collecting any of these anatomical scans?

  • Do you collect fieldmaps? What kind of fieldmaps? How many/how frequently do you acquire fieldmaps? When do you acquire fieldmaps (in time) relative to your functional runs?

  • Do you use in-plane acceleration (e.g., mSENSE or GRAPPA)?

  • Do you use multiband acceleration (e.g., simultaneous multi-slice; SMS)? What acceleration factor do you use? Why do you or why do you not use multiband? Are there tradeoffs you are aware of?

  • Do you use any “rules of thumb” when choosing scanning parameters (resolution, TR, TE, etc.)?

  • Do you collect your anatomical/functional/fieldmap scans in a particular order?

  • What field of view (FOV) coverage do you find to be adequate for whole-brain imaging (in mm)?

  • How do you position your FOV (i.e., the yellow box) for functional scans?

  • How do you position your FOV for anatomical scans?

  • Have you run pilot scans testing different parameters? What did you pilot? What did you look for to make your final decision on how to proceed?

  • Do you have any unique factors in your experiment that lead you to be concerned or take special care while scanning (e.g., motion due to using a joystick, scanner sound interacting with an auditory stimulus)?

  • Do you use Minnesota or MGH or Siemens sequences?

  • How do you copy references for your slice prescriptions (automatically in your program card settings, manually, etc.)? Which parameters do you choose to copy?

Study Design

  • Do you have a recommendation for what should be included in one run (e.g., one task vs. two tasks)?

  • Do you have a recommendation for how long a single run should be? What is the maximum duration of a run you like to use? Do you find that some experiments lend themselves to shorter/longer runs? Do you prefer more short runs or fewer long runs?

  • What is the maximum duration you like to keep a participant in the scanner?

  • When do you check for incoming TR pulses/triggers? Beginning of the run only? Every trial?

  • How much “buffer” time do you include at the beginning of each run (before starting stimulus presentation)? At the end of each run?

  • What do you recommend logging (i.e., recording in your output file) when you scan? (e.g., trigger time, etc.) Why is this information useful to have?

  • What checks do you have in place to make sure things run as smoothly as possible? (e.g., sound checks, button tests, etc.)

  • What procedures do you have in place to make sure you don’t lose data in the event of software crashing or some other unanticipated event?

  • Do you do anything special for randomizing events, trial order? Do you jitter or TR-lock? What about your design determines if you jitter vs. don’t jitter, TR-lock vs. don’t TR-lock?

  • Do you have any general recommendations for task timing? (e.g., minimum amount of time to wait between trials, active baseline vs. non-active baseline, time between blocks, etc.)

  • Do you include a functional localizer task in your study? If so, what area(s) are you trying to localize and what is the general design of your functional localizer task?

  • If you run a multi-session study, which scans do you make sure to acquire in multiple sessions (e.g., do you run an MPRAGE in each session)?

Subject-Scanning Interactions

  • What text do you use to recruit participants via email?

  • What instructions do you find helpful to include on a SONA listing for your experiment?

  • What information do you include in email(s) to participants prior to the experiment?

  • Do you have specific instructions you give to subjects that you feel facilitate good data quality?

  • Do you give participants task instructions or practice trials outside or inside the scanner?

  • Do you exclude participants based on handedness? If no, do you have a criterion for how many people can be left- or right-handed in your sample?

  • Do you use the mock scanner? In what scenarios do you find using the mock scanner to be useful?

  • Do you have any tips or tricks for preventing/minimizing subject movement - either physically (e.g., pads, tape, etc.) or with instructions?

  • Do you use FIRMM software for tracking head motion? If yes, how do you use the information you get from FIRMM to help with your data acquisition?

  • Any tips or tricks that can help prevent people from falling asleep in the scanner? How do you monitor this? Can you add things to your behavioral task to mitigate drowsiness?

  • What do you do if somebody falls asleep in the scanner?

  • What do you do if a participant needs to go to the bathroom in the middle of a scan?

  • What do you do if a participant starts experiencing discomfort during a scan (e.g., the earbuds are causing discomfort)? What do you do and/or say to the participant?

  • Have you needed to exclude participants for anything that isn’t included on the safety screening form?

  • During data acquisition, what things do you check for to make sure everything is going as planned with the scan, with your task, etc.?

  • What do you have participants “do” during anatomical scans?

Post-Scan Data Inspection

  • What tool(s) do you use for QA?

  • What do you look for in data quality assurance?

  • What is your approach for dealing with motion? Do you remove specific TRs with a lot of movement, or throw out entire runs, or throw out an entire subject? Do you have thresholds for making each of these decisions?

Useful Resources

Choosing Parameters

PNI’s reference protocols are a great place to start when trying to choose sequence parameters!

Find out more: “In deciding whether to use Skyra or Prisma at PNI, was there a reason (other than scheduling/availability) that drove your decision to use one scanner over the other? Have you noticed any systematic differences between Skyra and Prisma at PNI?”

Skyra has a 10 cm (I think) larger bore, so it feels less claustrophobic and is more comfortable for participants.

If doing a visual study that requires a large field of view, I recommend the Skyra since it has a larger bore and therefore a larger screen.

I chose Skyra because it had the real-time setup (now Prisma does too, though); Skyra is the only one where people have run real-time studies in the past 2-3 years, so it would be easier to use for that purpose.

I don’t think the differences between Prisma and Skyra should make an appreciable difference for most studies (and availability may be an important factor). All other things held constant, I would use Prisma simply because it’s new and has better gradient technology.

For acquisition of diffusion data, the Siemens Prisma is the only scanner that doesn’t show significant drift in the diffusion signal over time! Moreover, the better gradients are highly advisable for getting good signal in diffusion scans (a noisy endeavour, always).

At the time that I started my study Prisma’s calendar was more open than Skyra’s. Prisma also allowed me to have slightly smaller TRs at a given voxel size.

Most fMRI studies that can be done on Prisma can be done on Skyra with just a little additional acceleration or reduction in resolution, TR, or slices.

Find out more: “What types of anatomical scans do you collect? Do you have a special reason for collecting any of these anatomical scans?”

Standard high-resolution (1mm) T1-weighted MPRAGE (~6 min) is always recommended; a T1w MPRAGE at 1.0mm resolution is sufficient for FreeSurfer reconstruction, and usually takes approx. 5 minutes with iPAT GRAPPA=2.

High-res T1 options: MPRAGE or MP2RAGE. MPRAGE is easy to process. MP2RAGE is a pain for postprocessing but gives slightly better white-gray matter separation (it’s often necessary to skull-strip and the like on one of the inversion images only, because the noisy salt-and-pepper background around the head in the combined image is a problem for most processing pipelines; this needs checking by hand for every single subject, plus adaptation of the processing pipeline).

I also often collect T2-weighted anatomical scans because they are short (~5 minutes) and can be automatically supplied to FreeSurfer for marginally better contrasts among subcortical areas.

High-resolution (0.4mm) T2-weighted TSE scan, aligned perpendicular to the long axis of the hippocampus, for hippocampal subfield segmentation.

T2* map to test for lingering neural activity (a control that is sometimes asked for by diffusion peeps).

Lower-res MPRAGE for control of partial volume effects (also a control that is sometimes asked for by diffusion peeps).

MGH recommends those interested in morphometrics (e.g. cortical thickness) measures use (ideally) a multi-echo MPRAGE, and if that is not available they do provide some recommended parameters to change for a regular MPRAGE.

Learn more about T1w vs. T2w here

Find out more: “Do you collect fieldmaps? What kind of fieldmaps? How many/how frequently do you acquire fieldmaps? When do you acquire fieldmaps (in time) relative to your functional runs?”

I collect phase-difference/double-echo fieldmaps because that’s what was done before me; I acquired them at the end of the scan only. Now, however, I don’t take any fieldmaps, because I trust fMRIPrep and the other data I’m using didn’t have them either.

I sometimes collect field maps (at the beginning of each session), but often do not use them (I use fMRIPrep’s fieldmap-less correction). I think best practices would be to acquire field maps intermittently throughout a session or once for each run.

2 fieldmaps per scan session (1 PA and 1 AP), at the end of the experiment (after the last functional run); I generate field maps by acquiring opposing spin echo scans, generally known as “blip-up/blip-down”. Even if you don’t plan to use them, it only takes a minute to acquire them.

I collect AP/PA fieldmaps (~30 sec each) right after the last functional run because I was told you want your fieldmaps acquired as close in time as possible to your functional scans; I don’t do them at the beginning because I try to limit how much “passive” scan time the subject has at the beginning of a scan when I feel like their cognitive functioning is optimal. If I have to pull a subject out of the scanner in the middle of a session (e.g., to use the bathroom), I make sure I run two sets of fieldmaps (one set for the first part of the scan before pulling them out, and one set for the second part).

I’ve started acquiring field maps at the end of my scan sessions in the past 6 months and have run some tests where I process my data with or without them. I haven’t seen conclusive evidence that it helps a great deal with the functional data quality, with the caveat that the most principled analyses I tested were done in the back half of the brain. Anecdotally, you can definitely notice the warping of the EPI relative to the anatomy without fieldmaps, but even after fieldmap correction, some amount of it persists (i.e., it doesn’t fully fix the problem).

For real-time fMRI scans, however, they are less useful since you’d never be able to correct on the fly during the scan for TR-by-TR processing purposes.

Prisma: blip up/down fieldmaps, paired with CMRR multiband EPI (SMS = 4). Absolutely necessary to correct substantial distortion in orbitofrontal regions. I only collected 1 fieldmap at the end of all my functional runs. My runs were pretty long (3 15-min scans), so if I did this in the future, I probably would’ve collected a fieldmap after every EPI.

Skyra: Siemens GRE fieldmap; not using an MB sequence, and I think Mark said there wasn’t an advantage to the blip-up/down fieldmap in that case. Also simpler, since you don’t have to remember to flip the A->P direction. One fieldmap following all functional scans.

Learn more about available fieldmaps and distortion correction methods.
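
A few answers above lean on fMRIPrep for distortion correction. If your data are in BIDS, a blip-up/blip-down pair is only applied to the runs its JSON sidecar lists. Here is a minimal sketch of setting that field in Python — the file and task names are hypothetical, but the "IntendedFor" field itself is standard BIDS:

```python
import json
from pathlib import Path

# Hypothetical BIDS layout; adjust subject/task names to your own study.
sidecar = Path("sub-01/fmap/sub-01_dir-AP_epi.json")
meta = json.loads(sidecar.read_text())

# Paths are relative to the subject directory, per the BIDS spec.
meta["IntendedFor"] = [
    "func/sub-01_task-movie_run-01_bold.nii.gz",
    "func/sub-01_task-movie_run-02_bold.nii.gz",
]
sidecar.write_text(json.dumps(meta, indent=2))
```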

Find out more: “Do you use in-plane acceleration (e.g., mSENSE or GRAPPA)?”

If not using multiband (SMS) acceleration, I would opt to use mSENSE. I am suspicious that GRAPPA is very susceptible to head motion.

No; I was told that GRAPPA would make the image quality very susceptible to head motion and that I “definitely don’t want to do that”.

Have used GRAPPA previously (in Minnesota sequence), but no longer do that now because it is not necessary. Also, I have heard (from Matthias Nau) that using one or the other is advisable, but beware if you use both!

I do use GRAPPA for the acquisition of my diffusion data, acceleration factor 2 (but no multiband here!; this is a Minnesota sequence, custom-altered, with acquisition of gradients in free mode and interspersed collection of 6 B0s).

I try not to, unless absolutely necessary, under advice from Mark Pinsk.

I try to avoid in-plane acceleration for fMRI, and instead opt for multiband acceleration. In-plane acceleration is more susceptible to movement (according to the practicalfMRI blog).

GRAPPA: For Skyra, I wanted to avoid SMS for the reasons above. I also had subjects talking in the scanner, and worried that SMS was more sensitive to motion than in-plane acceleration (see the practicalfMRI blog post on this - https://practicalfmri.blogspot.com/2012/03/grappa-and-multi-band-imaging-and.html). To get whole-brain coverage with even a large voxel/long TR (3mm voxel, 2 sec TR), you have to use in-plane acceleration = 2. And this still results in quite a small slab.

Find out more: “Do you use multiband acceleration (e.g., simultaneous multi-slice; SMS)? What acceleration factor do you use? Why do you or why do you not use multiband? Are there tradeoffs you are aware of?”

I use SMS2. I kept the acceleration factor low because of the possibility of finding results in the PFC, which I had heard are degraded at higher acceleration factors.

My rule of thumb is keep it as low as possible. Ideally 2-3.

I generally use SMS factor 3 or 4 with the goal of reducing the TR and voxel size. I’m wary of using SMS greater than 4 due to increased artifacts.

Yes, SMS 3. Allows me to get whole-brain coverage while keeping voxel size at 2mm isotropic and a relatively low TR.

Yes. Currently at a factor of 4 for functional images, not for diffusion data.

I use SMS 6, which allows me to scan at high-resolution (1.5mm iso voxels), with a relatively fast TR (1.5 sec) and almost whole-brain coverage (108 mm coverage).

I try not to use multiband if I can help it, but often I can’t help it. Even so, from personal experience any multiband factor above 3 will impact the quality of the data, especially if it’s being used in order to push TR down or resolution up (they’re all things one should avoid and often people do them all at once until the data becomes mush).

I’ve used a combination of SMS and in-plane acceleration (SMS = 2, in-plane = 2) on Prisma, as well as just straight-up SMS (acceleration factor = 4). Either way results in nice whole-brain coverage with 2mm voxels and 1.5 sec TRs. I’m personally not that into this anymore - in my work we are looking at coarse-scale patterns for the most part, and I’m not sure that the increased spatial resolution (and assorted computational problems) is sufficient to outweigh the higher motion sensitivity + greater sensitivity to B0 inhomogeneities.

Find out more: “Do you use any “rules of thumb” when choosing scanning parameters (resolution, TR, TE, etc.)?”

I use voxel size of 2.5 mm because I think the SNR tradeoff becomes disadvantageous below 2.5 or 2 mm (due to the intrinsic point-spread of the BOLD signal). I use short TRs (e.g., 1.0 s or 1.5 s) for increased signal to noise, in particular when working with the time series directly (e.g., intersubject correlations) as opposed to temporal averaging. I try to keep the TE around 30 or 32 with the hope of retaining dropout regions like OFC and MTL.

It depends on the experimental question: e.g., when interested in timeseries, a low TR is the priority; when interested in hippocampal subfields, small voxels are the priority.

Since many agree that 3mm is an optimal voxel size for, e.g., classification, and the larger the voxels, the more signal, I try to go as large as possible. I chose 2 mm in my last experiment because I wanted to acquire data with very similar parameters for my diffusion and functional data. For diffusion analysis, any border voxels touching CSF need to be excluded in analysis, and with 3 mm voxels that might just get rid of almost all of my hippocampus. Most of my experiments do not require a short TR. I currently have a TR of 2 secs (video viewing and recall; the main analysis aims to average activation across longer stretches of time within each voxel), but I would also have been fine with a TR of 2.5 secs or even longer. In general, I try to optimize signal relative to noise, and I make sacrifices in resolution, TE, and TR for that. I calculate the optimal Ernst angle with the Ernst angle calculator. Note: I’m not super interested in mPFC or MTL, so I don’t have to sacrifice signal for TE.
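
For reference, the Ernst angle mentioned in the answer above is easy to compute yourself: the optimal flip angle is arccos(exp(-TR/T1)). A minimal sketch, where the gray-matter T1 is an approximate 3T value — substitute one appropriate for your field strength and tissue of interest:

```python
import numpy as np

def ernst_angle(tr_ms, t1_ms):
    """Flip angle (degrees) that maximizes steady-state signal for a given TR/T1."""
    return np.degrees(np.arccos(np.exp(-tr_ms / t1_ms)))

# e.g., TR = 2000 ms and gray-matter T1 ~ 1300 ms at 3T (approximate value)
print(ernst_angle(2000, 1300))  # ~77.6 degrees
```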

Personally, I try not to go below 2.5mm or below 1.5s. This is mostly because any combination of parameters that’s more aggressive would require higher multiband factors for whole-brain coverage, and that’s not the best idea (see above). If there’s a choice between resolution and TR, it’s probably best to lower TR and keep resolution more coarse (e.g., 2.5mm @ 2s TR usually gets worse data than 3.0mm @ 1.5s TR). Unless you really need high spatial resolution (e.g., hippocampal subfields), I would suggest keeping voxels on the coarser side. By the way, if you smooth your data (and you never should!), then you’re better off increasing acquisition resolution instead and smoothing less.

Don’t go higher resolution than 2.0mm, ideally stick to 2.5mm unless you really need that spatial precision.

Keep the TE at around 30 ms to keep susceptibility artifacts small.

Keep FOV greater than 192 mm to avoid wrap-around of large heads. Ideally go larger (> 200 mm).

I shoot for 2-3mm voxels with 1.5-2sec TRs. I know you can push it quite a bit further, but I’m pretty suspicious of acceleration factors > 4. To be fair, this suspicion is mostly general suspicion of free lunches.

Bandwidth - rule of thumb: keep it less than 2K. Increasing it will increase noise. Once everything else is set as you like it, bring it down to the minimum it can go. I’ll creep above 2K a bit if absolutely necessary, but note that peripheral nerve stimulation (PNS) really jumps up above 2K as well.

Find out more: “Do you collect your anatomical/functional/fieldmap scans in a particular order?”

High-res anatomical first to check the anatomical for anomalies while scanning (this is a requirement at PNI).

I generally collect a scout (localizer, to make sure the participant’s brain is centered), then a T1, then a field map, then all my functional images, then a T2 at the end (because it’s less necessary).

Anatomical first, then functional, then fieldmap. But I do not think there is a right or wrong order.

Current order: auto-align scout, MPRAGE, T2*, diffusion scans, fieldmap, functional scans.

I start with an anatomical so that I can align my functional scans to AC-PC (and make sure I am not cutting off critical portions of brain!) and so that I can check for anomalies. I do the field maps at the end because I want my participants to be fresh during the functional scans and it’s fine if they are tired during the field maps at the end.

My program card usually lists: SCOUT – ANAT – FUNCTIONALS – FIELDMAP(S). The main reason is that keeping multiband low and trying for whole-brain coverage, my FOV is usually quite limited and I need the anatomical scan to make sure I don’t cut off any corners of the brain.

Anatomical first, because we have to check for anomalies; then functional scans, which are the main bulk of the experiment; fieldmaps last. For non-SMS scans, I think this makes sense because you don’t necessarily have to do fieldmap correction, so do your least critical scan last. But for SMS scans, I would do the fieldmap before the first functional scan, so that if the experiment ends early you could potentially salvage some of the scans.

Find out more: “What field of view (FOV) coverage do you find to be adequate for whole-brain imaging (in mm)?”

At least 192mm, but ideally >200.

I now have 57 slices (× 2mm voxels = 114 mm coverage), which for most participants cuts off the top of the brain and part of the cerebellum; for whole-brain coverage it should be slightly bigger than that.

I use autoalign because it is often requested/asked for by reviewers if you acquire diffusion data in multiple sessions. For that to work properly the FoV needs to be quite large. My FoV read is 180 mm, with 60 slices, voxel size 2x2x2 no gap.
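
The arithmetic behind these coverage numbers is just slices × slice thickness (plus any inter-slice gap). A quick sanity check you can run before committing to a protocol (the function is my own illustration):

```python
def slab_coverage_mm(n_slices, thickness_mm, gap_mm=0.0):
    """Coverage along the slice direction: slices plus the gaps between them."""
    return n_slices * thickness_mm + (n_slices - 1) * gap_mm

print(slab_coverage_mm(57, 2.0))  # 114.0 mm -- cuts off the top of most brains
print(slab_coverage_mm(60, 2.0))  # 120.0 mm -- the 60-slice protocol above
print(slab_coverage_mm(69, 2.0))  # 138.0 mm -- closer to true whole-brain coverage
```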

Find out more: “How do you position your FOV (i.e., the yellow box) for functional scans?”

I do the automatic ACPC alignment from the scout; I try to use the scanner’s automated FOV alignment.

AC-PC alignment. Chosen because during a pilot, this seemed to be the best compromise between SNR in MTL and prefrontal areas.

I try to align it with the skull base in the frontal lobes, to reduce artifact in the orbitofrontal cortex.

I think generally aligning to AC-PC will give you less dropout in frontal regions, and aligning parallel to the hippocampus will give less dropout near the temporal pole and inferior regions of the temporal lobe. I was also told it is important to consistently position FOV across subjects, so using anatomical landmarks is good!

Usually whatever fits the entire brain in. One thing to note here is that if you’re acquiring multiple sessions (or taking your subject in and out), you should try your best to keep the same position and angle (!) of the yellow box throughout all the sessions. This will help with alignment and with potential interpolation issues across different grids. Also, for real-time fMRI, if the angle is too far off, the quick-and-dirty-alignment might fail (offline this is less of an issue).

Axial slices, no rotation. If the box is too small for the participant’s brain, opt to clip part of motor cortex in order to get all of temporal lobe. No particular reason - I’m not optimizing for hippocampus or anything like that. I know a lot of people align on the AC-PC axis.

Find out more: “How do you position your FOV for anatomical scans?”

Just the whole brain, but I heard that the edge shouldn’t be too close to the back of the brain.

I use the scout (localizer) and make sure the whole brain is covered and centered. I don’t usually change much.

I try to center neocortex in the FOV with the vertical center line overlaying the longitudinal fissure.

Find out more: “Have you run pilot scans testing different parameters? What did you pilot? What did you look for to make your final decision on how to proceed?”

I like to have someone else scan me with my own parameters. I try to run MRIQC or fMRIPrep on the first subject to ensure nothing strange is happening.

I piloted several FOV alignments, several SMS factors, and 1.5mm vs 2mm voxels, and their influence on tSNR in hippocampus and mPFC.

Yes. I piloted my full experiment on several participants to look for signal in the frontal lobes. I looked for significant activation at an uncorrected 0.05 alpha in known regions of interest.

I have scanned the following parameter combinations and tested the resulting data mainly on object category decoding accuracy for early visual cortex and LO:

2.0mm @ 1.5s TR @ MB6; 2.5mm @ 2.0s TR @ MB4; 3.0mm @ 1.5s TR @ MB3; 3.0mm @ 2.0s TR @ MB2

These are pretty much ordered in terms of performance/quality from worst to best. My advice is to never use the first one, since there’s no signal left even in V1. The second is pretty bad, too, but not as bad as the first. The last one is the best: with fat voxels and low multiband, we can even get good decoding out of parietal and prefrontal cortices. The third is also OK, especially if you care about squeezing in more TRs for, e.g., SRM.

There seem to be some confounds here, but generally if you can keep multiband below 4 and resolution above 2.5mm, you should be ok for experiments involving visual stimuli.
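
Several of the pilots described above come down to comparing tSNR (voxelwise temporal mean divided by temporal standard deviation) across parameter sets. MRIQC reports this for you, but it’s easy to compute directly with nibabel if you want to probe a specific region — a minimal sketch with hypothetical filenames:

```python
import nibabel as nib
import numpy as np

img = nib.load("pilot_bold.nii.gz")   # hypothetical preprocessed 4D run
data = img.get_fdata()                # shape: (x, y, z, time)

mean = data.mean(axis=-1)
std = data.std(axis=-1)
tsnr = np.zeros_like(mean)
np.divide(mean, std, out=tsnr, where=std > 0)  # avoid divide-by-zero outside the brain

nib.save(nib.Nifti1Image(tsnr.astype(np.float32), img.affine), "pilot_tsnr.nii.gz")
```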

Find out more: “Do you have any unique factors in your experiment that lead you to be concerned or take special care while scanning (e.g., motion due to using a joystick, scanner sound interacting with an auditory stimulus)?”

I do real-time so I just try to minimize motion as much as possible (see recommendations for reducing motion below).

Subjects need to be able to hear the audio of the stimuli above the scanner noise. I adjust it at the start of the experiment (after the T1 and before the start of the first task run).

Subjects need to be able to do a verbal recall in the scanner (while an EPI scan is running). I instruct them to speak clearly to make sure I can hear them.

My functional task (not the localizer) is very long and it’s all mental. Consequently, it’s really easy for participants to just want to stop trying or to fall asleep. It’s because of this that we have lots of little breaks where we check in and make sure the participant is still engaged and doing well. That’s the only real concern we have during the scan.

Motion due to speech - so I used mSENSE rather than SMS. To be honest, though, this was out of an overabundance of caution. I did a different study with speech and SMS = 4, and it was fine. People generally don’t move more during speech scans than non-speech scans.

Find out more: “Do you use Minnesota or MGH or Siemens sequences?”

I use MGH sequences because I generally use FreeSurfer as part of my analysis pipeline.

MGH - this is super old; surprised this is in the survey. Used these ~4-5 years ago when Prisma first opened, because that’s what everyone was using. Note that the Siemens multiband sequences are forked from MGH, so PNI no longer offers the MGH sequences.

Minnesota (excellent diffusion sequence) and Siemens for functional. No particular reason.

I only recommend using CMRR if you want bleeding edge features such as recording physiology data from the PMU sensors, scanning with matrices <64, or using multi-echo.

Minnesota - used this for a newer Prisma experiment. My impression is that the CMRR sequences were considered better for MB sequences than the standard Siemens sequences. Also, this is what people were mostly using at the time.

Siemens - used for a Skyra experiment with no SMS. My understanding is that for non-SMS sequences, you might as well use the standard sequences.

Find out more: “How do you copy references for your slice prescriptions (automatically in your program card settings, manually, etc.)? Which parameters do you choose to copy?”

Manually; I like having “jobs” to do during scanning that keep me engaged and focused on what I am doing. I don’t want to get too relaxed during scanning.

I always set up all my copy-references before starting data collection when setting up the sequence. I use the default slice prescription; I set it up so that it is a default to copy the references. Less prone to error. You can set this up on the program card.

I set it up automatically in my program card settings. The fieldmap is changed manually for the PA acquisition (respecting the autoalign change in angle) after the reference is set by autoalign, accepted, and copied.

I use “copy slices and adjustment volume”.

I copy the parameters and centers of slice prescriptions (i.e., the first option) after selecting the field of view for the first functional scan based on the high resolution anatomy. Beware the AP-PA (jabberwock) bug, where it resets the second field map randomly to RL – you have to remember to change it back manually.

Study Design

Find out more: “Do you have a recommendation for what should be included in one run (e.g., one task vs. two tasks)?”

I would say more than one task if you want to compare tasks in your analyses.

I think it’s fine to include multiple tasks in a single run, but I generally prefer shorter runs.

I included viewing and recall of 4 brief movie clips in one run (approximately 20 min per run, depending on the length of recall). I have 4 runs in total doing the exact same thing (counterbalanced order). I thought that classification from encoding to recall might be easier within run. However, movement might be better with shorter runs. I will probably rue the day…

I would just say that the run shouldn’t be very long, especially if it’s a taxing task.

Completely experiment specific. Just try to plan ahead to whether you want to run any analyses that would benefit from leave-one-run-out (LORO) procedures for cross-validation. The main concern here is that the noise profile within a run is usually enough to distinguish between runs and if your conditions are correlated with the run you’ll never know if you’re decoding condition or run number.

If the instructions are different or use different equipment, it should be different runs. If there are trials that rely on being a surprise, it needs to be within a run.

Find out more: “Do you have a recommendation for how long a single run should be? What is the maximum duration of a run you like to use? Do you find that some experiments lend themselves to shorter/longer runs? Do you prefer more short runs or fewer long runs?”

I would say between 5-10 minutes. The maximum should be around 20 because data quality would suffer as the subject fatigues/gets sleepy. Depending on the design and getting all the factors into one run, you may have to have a long run, but I think aiming for shorter runs and having more of those would be better to give the subject a break.

I think for traditional (boring) tasks, runs should ideally be less than 5 minutes long to reduce participant discomfort and movement. For more engaging tasks (e.g., movie-watching), I use runs ~15 minutes long. It can be important to separate things into multiple runs for the purpose of having independent acquisitions for, e.g., cross-validation. I think more short runs is generally better than fewer long runs.

That depends on the stimuli and how engaging they are, I think. If the stimulus is very engaging, you can have longer runs without the participant starting to move; if your experiment uses more basic stimuli and the trials are repetitive, people tend to start moving at the end of longer runs. Currently, I use 4 runs of 15 minutes, and in a second session a single run of even 30 minutes. In all runs people watch and listen to cartoon videos. This seems to work fine so far. The only run in my current experiment with more-than-usual motion across the group is the verbal recall (which is expected since they are speaking, so probably not necessarily related to the length of the run).

I prefer more, shorter runs for MVPA.

My functional task runs are 4.5 min long. My localizer task runs are about 8. I wouldn’t go longer than this.

For actual tasks / psychophysics-in-the-scanner, I usually aim for 5-10 min per run. Any less and it gets annoying for the participant, any more and they fall asleep. For movies, etc., usually you can go ham for 2h if you need to (beware of bathroom break requests, though :).

I do movie stuff, so the length of the run depends on the length of the movie. I haven’t scanned continuously for longer than 35 min in one run, but others have done much longer runs (60+ min). As long as the movie is engaging enough, I don’t think it makes a difference.

Find out more: “What is the maximum duration you like to keep a participant in the scanner?”

I prefer to keep subjects in the scanner for ~1 hour, and would rather split data collection into multiple shorter sessions. I’ve scanned experiments that are up to ~1.5 hours long.

I aim for no more than 1.5 hours of running scanner time, because they’ll actually be in the scanner for longer than that.

For a high-intensity experiment that requires continuous attention, I have found that behavioral results degrade markedly after about 50 minutes. My presumption is that this carries over to fMRI.

Probably 80-90 min max. Ideally, 60-70.

1.5 hours max.

60-90 min. Even with movies, people get bored/tired/uncomfortable. Depending on their head size, the combination of the Sensimetrics earphones + headband can get really uncomfortable too.

2 hours at the very maximum, although tasks that take altogether a lot longer than 1 hour are, I think, not preferable because task performance tends to drop in cognitively demanding tasks after 1 hour.

Find out more: “When do you check for incoming TR pulses/triggers? Beginning of the run only? Every trial?”

I check every TR for real-time.

I sync the beginning of my presentation script or stimulus to the first trigger, and log every TR for the duration of the scan using PsychoPy’s logging utility. I generally do not use trigger-locked onsets.

Task script starts based on incoming pulse at beginning of the run (all triggers are logged during the run, but the task only responds to the beginning).

In my current experiment, I check at the beginning of the run only. Timing is not critical here (lots of averaging, long TRs, no pulse locking…)

When doing classification/MVPA, I try to pulse-lock stimulus onsets and thus check for pulses on every single trial.

The TR pulses are always extremely consistent. For real-time scans, I started out by resetting all presentation times and processing windows for each individual TR, but quickly found that you can use the timing of the first TR pulse in the run and arithmetic your way for 10 min without any discrepancies (i.e., <10ms total at most).

Learn more about TTL pulses here.
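
As a concrete illustration of the “sync to the first trigger, then do arithmetic” approach described above, here is a minimal PsychoPy sketch. It assumes the scanner trigger arrives as a '5' keypress, which is a common setup but should be checked against your own trigger/button-box configuration:

```python
from psychopy import core, event

TR = 1.5  # seconds; match your sequence

event.waitKeys(keyList=["5"])  # block until the first TR pulse of the run
run_clock = core.Clock()       # t = 0 is now the first trigger

# e.g., lock a stimulus onset to the start of the 10th TR:
onset = 9 * TR
while run_clock.getTime() < onset:
    core.wait(0.001)  # in a real task you would draw/flip inside this loop
# ...present the stimulus and log run_clock.getTime()...
```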

Find out more: “How much “buffer” time do you include at the beginning of each run (before starting stimulus presentation)? At the end of each run?”

I usually pad ~12 seconds onto the beginning and end of each run.

12-16 s

10-20 seconds; I manually discard additional prescans (at least 5 with a 2 sec TR). At the end, at least 10 seconds, ideally 16 seconds.

7.5 seconds, in addition to the automatically discarded TRs.

I pad 13.5 sec (9 TRs) before my first stimulus onset (even though Siemens sequences automatically collect and discard dummy scans before the first recorded pulse/volume, mriqc has detected up to 7 non-steady state volumes in a few of my runs, so I manually discard all these extra volumes in my analysis). I also pad 18 sec after my last stimulus offset to make sure I don’t cut off any of the hemodynamic response corresponding to my last couple of trials.

Usually 12 seconds at the beginning and end. I use AFNI for preprocessing and eliminate 12 seconds’ worth of data from the beginning of every run during analysis.

For movies, we always show a short 30 sec clip at the start of runs before starting the movie. There are some weird transient signals that happen at movie onset for reasons unknown, so we show the clip to absorb the transient and discard from analysis.
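
If you trim buffer volumes yourself, as a couple of the answers above describe, nibabel’s slicer keeps the header and affine consistent. A minimal sketch — the filename is hypothetical, and the number of discarded volumes should match your own buffer duration and TR:

```python
import nibabel as nib

n_discard = 8  # e.g., ~12 s of buffer at TR = 1.5 s

img = nib.load("sub-01_task-movie_bold.nii.gz")
trimmed = img.slicer[:, :, :, n_discard:]  # drop the first n_discard volumes
nib.save(trimmed, "sub-01_task-movie_bold_trimmed.nii.gz")
```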

Find out more: “What do you recommend logging (i.e., recording in your output file) when you scan? (e.g., trigger time, etc.) Why is this information useful to have?”

I record all trigger times, responses, flip times of the screen. It’s useful to go back and check which TR happened at the screen flip time, which is especially important for real-time.

I would recommend logging almost everything, as long as it doesn’t become unwieldy. Always better to have more information than less. I use PsychoPy’s “INFO” logging level.

Task stimulus onsets, participant responses and their timing, and all triggers. It is useful to have this all in one logfile so you can easily know which MR images correspond to which stimulus presentation, etc., and can more easily code analysis scripts.

I log every trigger I read, every stimulus onset (including instruction screens), and every participant response. I log those both in a .mat file in MATLAB and in a .txt logfile (double safe is almost never sorry).

Time of the first trigger is critical for timing-based analyses. I also record the timing of all visual stimuli presented to the participant, every time they occur. That way, if the presentation hitches or otherwise becomes out of sync, it is recoverable.

I log everything with timestamps and save almost every parameter, unless it’s larger than a few GB (e.g., thousands of frames of unique stimuli generated on the fly). This is really useful when things fail miserably (you’ll know what actually happened) and also when you accidentally overwrite something – I’ve had, on occasion, to recreate parameters of my experiments by manually canvassing independent text logs; it was a pain, but I was glad I had the text logs to begin with.

I record the timing of every TTL and keyboard/button press. Why? Paranoia? Just to have it in case of problems? I’ve only ever NEEDED this once, which was to align speech in the scanner with images presented on the screen, and then to align both with the scanner pulses.
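
For PsychoPy users, most of the “log everything with timestamps” advice above is a one-time setup. A minimal sketch with hypothetical filenames; the explicit flush() call is also relevant to the crash-proofing question below:

```python
from psychopy import core, logging

run_clock = core.Clock()
logging.setDefaultClock(run_clock)  # timestamp every entry relative to this clock

logging.LogFile("sub-01_run-01.log", level=logging.INFO)  # hypothetical filename
logging.console.setLevel(logging.WARNING)  # keep the terminal output quiet

logging.log("TRIGGER received", level=logging.INFO)
logging.log("stim onset: face_03.png", level=logging.INFO)
logging.flush()  # write to disk now, not just at experiment exit
```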

Find out more: “What checks do you have in place to make sure things run as smoothly as possible? (e.g., sound checks, button tests, etc.)”

I have a sound check at the beginning where subjects press a button to indicate whether I should turn the volume up or down before we start. The first scan won’t start until (1) the subject presses to begin and (2) the script receives a trigger from the scanner, so the code won’t continue unless everything is working.

I use a script that allows the subject to interactively adjust the audio volume while they listen to a soundcheck clip not included in the stimuli of interest. I set up my presentation script such that they have to press the button to advance (thus confirming the button box is working).

I set up my task so that it will only start once the subject has pressed the index-finger (blue) button. This way I can be sure the button box is working, MATLAB is registering the button presses, and the subject doesn’t have the button box flipped the wrong way.

Audio check at the start of the task; a check at the start of each run that the participant is (still) using the correct button on the button box; a check at the start of each run on the scanner that EPI images are indeed coming in (i.e., image reconstruction is working as it should).

Before starting, I go in to the scanner room and press the buttons I’m gonna use, and my buddy looks at the computer screen text editor to make sure it works.

I restart my script if the first button press to navigate it doesn’t work. I play music to my participants in task-free scans and adjust the volume for later movie viewing then. If the volume is off, participants are told they can adjust it by saying “turn up” or “turn down” even during the scan (I communicate over the mic recording interface, so I can hear my participants at all times, though I advise them not to talk during scans unless it is crucial). This is as optimal as I can make it. I guess I could play a short sound file that is spoken? I don’t do that. I used to have a microphone check (which dropped out of my script at some point without me noticing). This is reckless and irresponsible. Thanks for drawing my attention to it.

Sound check if necessary, with the option to adjust volume (it’s good to do this during a dummy EPI scan, which will also help alleviate the initial shimming problem). Always test the button box before starting the experiment. Also, if possible, try testing the trigger pulse button if anything has changed since the last time you scanned.
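
Here is a minimal sketch of an interactive sound check like those described above, in PsychoPy. The clip name and button mapping are hypothetical — use a clip that is not one of your task stimuli:

```python
from psychopy import visual, sound, event

win = visual.Window(fullscr=True)
prompt = visual.TextStim(win, text="1 = louder   2 = softer   4 = sounds good")
clip = sound.Sound("soundcheck.wav")  # hypothetical clip, not a task stimulus
volume = 0.5

while True:
    clip.setVolume(volume)
    clip.play()
    prompt.draw()
    win.flip()
    key = event.waitKeys(keyList=["1", "2", "4"])[0]
    clip.stop()
    if key == "1":
        volume = min(1.0, volume + 0.1)
    elif key == "2":
        volume = max(0.1, volume - 0.1)
    else:
        break  # participant confirmed the volume; also confirms the button box works
```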

Find out more: “What procedures do you have in place to make sure you don’t lose data in the event of software crashing or some other unanticipated event?”

You can force PsychoPy to write all logging information to file as the experiment proceeds. It will log up until the task crashed.

After each run, my script saves the behavioral data both locally and on the server so I always have two copies of the data in case something happens to the stimulus computer.

I save a log file next to my mat files. If the program crashes fully, I at least have all my onsets, even if voice recordings for recall are missing for a run.

Always run tasks locally (i.e., your task code should be on one of the stimulus computers, not on the server)! This minimizes the risk of something crashing due to an interruption in the connection.

Annoyingly detailed text logs of everything that happened and how long it took for it to happen (e.g., I want 30 frames of a video to be shown, but often it’s more like 28-29; the log knows!).

I don’t really… but my subjects are generally not performing a response task in the scanner. In my really paranoid days, when I was recording audio in the scanner, I had the audio recording directly onto my laptop but then I also had my phone recording the output for the speaker.
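
A minimal sketch of the “save locally after every run, then mirror to the server” habit mentioned above. The paths are hypothetical, and the pandas DataFrame stands in for whatever structure holds your behavioral data:

```python
import shutil
from pathlib import Path

import pandas as pd

SERVER = Path("/Volumes/labserver/study/data")  # hypothetical mount point

def save_run(trials: pd.DataFrame, subject: str, run: int) -> None:
    """Write the run's behavioral data locally first, then copy it to the server."""
    local = Path("data") / f"{subject}_run-{run:02d}.csv"
    local.parent.mkdir(exist_ok=True)
    trials.to_csv(local, index=False)  # the local copy never depends on the network
    if SERVER.exists():                # tolerate an unmounted or offline server
        shutil.copy2(local, SERVER / local.name)
```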

Find out more: “Do you do anything special for randomizing events, trial order? Do you jitter or TR-lock? What about your design determines if you jitter vs. don’t jitter, TR-lock vs. don’t TR-lock?”

No, I TR-lock. For real-time I wanted to make sure I know which TR corresponds to what, to plan naming/outputs/etc.

I use jittering based on AFNI (https://afni.nimh.nih.gov/pub/dist/doc/program_help/make_random_timing.py.html) and do not TR lock. I have used T1I1 sequences from Aguirre lab to first-order counterbalance trial order (https://cfn.upenn.edu/aguirre/wiki/public:t1i1_sequences). T1I1 counterbalancing is only feasible for relatively few conditions.

I randomize the order of video stimuli, with no TR-locking, since each video is a slightly different length (naturalistic designs do not lend themselves easily to TR-locking); more controlled studies do.

I always jitter the ITI (why not?). The ITI should never be a multiple of the TR, so that the BOLD response to your event/block is “super-sampled,” i.e., you’re not always sampling the same time point of the HRF.

I counterbalance the order of my movie stimuli (never the same order; a pre-fixed set of possible orders, counterbalanced across conditions - for me, both within- and between-subjects) to exclude time biases for classification. Jitter happens automatically, because people take different amounts of time to navigate my task (movie viewing and free recall).

In general, if I want to do event-related analyses, I jitter. If, and only if, I only want to do MVPA, I pulse-lock.

If the experiment affords it, I try to use a short block design. Event-related designs have much worse data quality for individual items / conditions since the hemodynamic response is actually not fully linearly additive, but deconvolution / regression assumes that. If you have to use events, then jittering should always help. For real-time scans, everything is always TR-locked.

If the same type of trial is repeated for the same measure, then it should be jittered. I don’t jitter for trials that vary timing depending on the participants’ response, because I consider it human-jittered.
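
For the jitter itself, one common scheme is a shifted, truncated exponential on top of a minimum ITI; being continuous-valued, it also naturally avoids ITIs that are exact multiples of the TR. A minimal numpy sketch (AFNI’s make_random_timing.py, linked above, is a more principled option for event-related designs):

```python
import numpy as np

rng = np.random.default_rng(2024)  # fixed seed, so each schedule is reproducible

def jittered_itis(n_trials, min_iti=2.0, mean_extra=1.5, max_iti=8.0):
    """ITIs drawn from a shifted exponential, clipped to a maximum (seconds)."""
    itis = min_iti + rng.exponential(scale=mean_extra, size=n_trials)
    return np.clip(itis, min_iti, max_iti)

print(jittered_itis(5))  # continuous values, so onsets drift across the TR grid
```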

Find out more: “Do you have any general recommendations for task timing? (e.g., minimum amount of time to wait between trials, active baseline vs. non-active baseline, time between blocks, etc.)”

When doing a simple visual task, I tend to follow Kriegeskorte’s advice and use many trials spaced close together (e.g., ~4 s ISI). I would allow 12–16 seconds between blocks if the goal is to allow the HRF to settle back to baseline.

2 to 3 minutes of rest between the 15-minute blocks. Long ITIs (if not necessary for your research question) might cause participants to get bored more quickly and therefore pay less attention?

I prefer fast event-related designs, but for MVPA I use spaced designs (10-12 secs per stimulus if possible). Whenever I can, I include an orthogonal, well-controlled active baseline (e.g., navigation within a randomly changing environment, or an odd/even judgment task to suppress hippocampal activity).

The more time you wait between trials, the less hemodynamic contamination you’ll get, up to 10-12 seconds or so. Also, if you’re running a task-based experiment, beware of adaptation effects after the first 5 or so seconds of a block / continuous visual / auditory presentation of the same or similar stimuli.

Find out more: “Do you include a functional localizer task in your study? If so, what area(s) are you trying to localize and what is the general design of your functional localizer task?”

I generally do not, but I would recommend using functional localizers that have been previously well-validated in the field.

Area MT func localizer (visual motion). Standard routine. Worked very well in each individual subject.

Yes. It’s a one-back image detection task like Aaron Bornstein used. I’m interested in decoding scene processing.

I used to use retinotopy (moving checkerboard) and functional visual region localizers (e.g., LO, PPA, RSC, TOS, FFA, etc.).

Face/scene localizer - press a button if the image repeats. You don’t want a lot of time between images, so this task keeps things quick in terms of stimulus presentation time, RT, and ITI.

Find out more: “If you run a multi-session study, which scans do you make sure to acquire in multiple sessions (e.g., do you run an MPRAGE in each session)?”

I run a scout in each session, but fMRIPrep will align everything, so I don’t worry about multiple MPRAGEs.

I usually acquire a T1 in each session, but I don’t think it’s strictly necessary (depends on how much you trust your registration algorithms).

Yes, an MPRAGE in each session and fieldmaps in each session.

If I have time, I like collecting an MPRAGE in each session, so that if one of them is less-than-optimal quality (e.g., due to subject motion), I can ignore the bad one and have a good backup to use. But if time is an issue: Mark helped me set up a “fast T1w” that only takes ~2.5 min but is worse quality than the standard; I don’t use the fast T1w for registration at all, but I can use it to do my AC-PC slice prescription alignment for functional scans, to make sure I am aligned properly and not cutting off any critical parts of the brain.

I usually run an MPRAGE in each session, but then try to align all the data to the same MPRAGE from day 1 before / during analysis. The other days’ MPRAGEs are usually used only if the alignment fails (e.g., due to field of view issues). For real-time scans, you usually need an MPRAGE for each day to align to the MPRAGEs from other days for high precision localization / model targeting.

I run an MPRAGE and a fieldmap in every session. Fieldmap for obvious reasons. I could probably skip the MPRAGE, but it’s a short scan, and the paranoid/suspicious/superstitious part of me says that an in-session MPRAGE will result in better alignment than an out-of-session MPRAGE. Actually, taking this survey is making me realize to what extent my practices are based on superstition/tradition.

Subject-Scanning Interactions

Find out more: “What text do you use to recruit participants via email?”

See Sample scripts, checklists, and code here!

Find out more: “What instructions do you find helpful to include on a SONA listing for your experiment?”

I always include a sentence saying that the timeslots provided are not the only ones possible, so if you are interested in participating but the timeslots do not fit your schedule, you can contact me. Quite a number of participants have done that (I then rescheduled them to another time not listed as an option on SONA).

Must bring ID; Normal Vision or Corrected-to-Normal Vision with Contact Lenses (glasses cannot go in the scanner); No History of Neurological Illness or Head Injury; Fluent in English; At Least 18 Years of Age

Abstract: In this two-part fMRI experiment, you will watch video lessons in the scanner and answer questions.

Description: This study has two parts. In Part 1, you will be scanned in fMRI while watching video lessons. You will also be asked to answer questions about the lessons in and out of the scanner in order to measure how much you learned. In Part 2, you will answer more questions outside of the scanner. Part 1 takes 120 min and Part 2 takes 30 min. Parts 1 and 2 MUST BE COMPLETED ON CONSECUTIVE DAYS. For this experiment, you will be paid $48. You may also earn up to $20 in bonuses: $10 for completing both sessions and up to $10 for doing well on the learning assessments. If the available timeslots do not work for your schedule, please email the researcher for alternate timeslots.

Eligibility reqs: Native English speaker, no metal in body, normal or corrected-to-normal vision (contact lenses ok), and normal hearing.

Find out more: “What information do you include in email(s) to participants prior to the experiment?”

I try to make sure they realize that scanning is extremely expensive and requires multiple people’s time with the hope that the subject does not forget or cancel.

We should include “no wet hair” because it leads to distortion. Or so I heard from non-pyger scanners.

See Sample scripts, checklists, and code here!

Find out more: “Do you have specific instructions you give to subjects that you feel facilitate good data quality?”

Just the normal stuff about being comfortable, going to the bathroom, making sure their head is on an even surface, taking breaks if they feel themselves losing focus, and not moving or crossing their arms or legs.

I tell participants that the most important thing is for them to settle in and get comfortable at the beginning so they don’t have to adjust later. I tell them to wait until the end of the run if they absolutely have to move. I try to make it clear that fMRI is very sensitive to head motion and operates on a millimeter scale (I show them how big a millimeter is). I try to only scan expert subjects who have been scanned many times before and understand the importance of data quality (often graduate students).

Very clear instructions not to move any part of their body when the scanner is making noises, and never move the head, of course. Don’t move! Even moving your feet is enough to blur the image. As long as you can hear the scanner, it is very important to hold as still as possible.

Do not speak while the scanner is running if it can be avoided.

I repeatedly tell them not to move more than 1mm and that moving while the scanner is acquiring images will result in data loss 10 seconds before and after the movement. I also show them what 1mm looks like with a ruler. I also tell them that moving their feet or body also moves their head and that they should refrain from doing so while the scanner is collecting images.

The two most important things during the scan are 1. Try really, really, really hard not to fall asleep. I know Princeton students are perpetually sleep deprived, but it’s really, really, really important that you try your best to stay awake and attentive, even if the task is really hard or boring or confusing. We will be tracking your eyes (point to screen) so we’ll be able to see if you’re falling asleep, so try your very, very best to stay awake and keep your eyes open.

The second thing is it’s really, really, really important that you stay as still as possible during the scans. During the scans, we’re basically taking pictures of your brain, and just like any picture, if you move while we’re taking it, the picture turns out blurry. Moving as little as 2-3 mm (show on ruler) can really hurt our data. So it’s really, really important that you stay as still as possible - don’t move your head during the scan, and don’t move your body, since that can move your head. We’ll help you out by putting foam pads around your head, but it’s really on you to pay attention and try your best to stay still. The way to do this is, when we get in the scanner, take all the time you need to get comfortable, and then once you find that position, just relax and sink into it. Then as long as you stay mindful and pay attention to your body and try not to move, you’ll probably be fine. We’ll also put a little piece of tape on your forehead, which will help give you some feedback if you move.

Find out more: “Do you give participants task instructions or practice trials outside or inside the scanner?”

Outside of the scanner, so they know what to expect before going in. Then I repeat the instructions for each subsequent task (all tasks beyond the first one) while the subject is in the scanner, right before that particular task.

I run practice versions of my task on my laptop outside the scanner.

I only give verbal task instructions. I used to give practice trials for tasks that are less intuitive, or that require learning how to navigate response keys or similar.

Outside the scanner! They do a couple of practice rounds outside the scanner to practice after reading the instructions. I also go through a short questionnaire to make sure they understood the instructions.

I give detailed instructions, with instructions and practice, outside of the scanner. Once the participant is in the scanner, I provide them with instruction screens, rehashing the instructions.

Find out more: “Do you exclude participants based on handedness? If no, do you have a criterion for how many people can be left- or right-handed in your sample?”

I generally do not exclude participants based on handedness (systematic variability is good!), but make sure to record their handedness in case you want to include it in a model down the road.

No, but I do keep the number of left-handed people low (i.e. max 10% of 40 participants to be scanned for this experiment)

No, I am trying to be more inclusive for this naturalistic experiment (and the IRB explicitly doesn’t exclude them).

Yes. Only right-handed people. No particular reason other than history of doing this.

It’s usually so difficult to recruit people that I don’t screen for this, but I can definitely see a reason to do it, especially if any of the effects you’re looking for are even a little bit lateralized.

Yes. I am using mouse-tracking, and while it’s left-right balanced, I am still excluding left-handed people in case of differences in hand movement (I also excluded them for the behavioral data).

Find out more: “Do you use the mock scanner? In what scenarios do you find using the mock scanner to be useful?”

I think it would be most useful for scanning children, elderly, or clinical populations, or when running a behavioral task for which it is very important that the context be as identical as possible to being in the MRI scanner.

In one of the experiments I’m helping with we are using the mock scanner. When the experiment is extremely complicated (e.g., you need to see the screen, see your hands, and use a piano while in the scanner), it’s always best to do a trial run beforehand. Also, if your participant population is unique and difficult to recruit (e.g., professional musicians, memory experts) and they are not used to the scanner noise / environment (i.e., being shoved head first into a narrow dark tube), it usually helps them be less stressed during the actual scan.

Yes; I wanted to make sure the mouse-tracking paradigm worked while the participant is lying down, with the tablet on their stomach.

Find out more: “Do you have any tips or tricks for preventing/minimizing subject movement - either physically (e.g., pads, tape, etc.) or with instructions?”

extra pads/instructions; I pad as heavily as I can around their head; Padding, padding, padding. I put the big square block around their ears and use the big triangles around the top of the skull.

I use Caseforge head cases to minimize movement (with expert subjects). I have also used a strip of tape across the forehead to provide some tactile feedback.

explain clearly why it would be a problem (rather than only say to lay still), remind when you see a lot of head motion (by looking at the eye tracker during task or using FIRMM), explain that movement of legs can cause head motion without realizing it, tape across head for tactile feedback

If I see them move their legs, or see on the functional scans that they’re moving their head, I remind them in the break between runs to stay still.

Tape for tactile feedback!!

The tape on the forehead works really well. I also really, really, really emphasize how important motion is, and that we’re watching, so we’ll be able to see if they’re moving. Also, this is totally superstition and I don’t know if it’s actually true, but I try to make some chitchat with them and make a connection.

Find out more: “Do you use FIRMM software for tracking head motion? If yes, how do you use the information you get from FIRMM to help with your data acquisition?”

Yes, I like to give the subject feedback between runs so over the intercom I will tell them “you’re doing a great job keeping your head still! Keep it up!” or “I noticed a little bit of head movement in that last run, so remember to try and stay as still as possible”. For multi-session studies, I can show them their head motion tracking plot when they come out of the scanner if they are curious and that maybe (?) motivates them to do just as well or better in the next session.

Learn more about using FIRMM software at PNI.

Find out more: “Any tips or tricks that can help prevent people from falling asleep in the scanner? How do you monitor this? Can you add things to your behavioral task to mitigate drowsiness?”

Tell them to take breaks.

I try to only do naturalistic experiments that are intrinsically engaging. I also use the eye tracker to monitor subjects’ wakefulness at acquisition time.

Explain that it is important to stay awake and engaged, give them a small performance bonus, and explain that there is an eye tracker visible to us during the experiment.

Eye tracking. If they fall asleep or are about to, I tell them in the next break that I can see they are having a hard time keeping their eyes open, but that it is extremely important they really try to keep their eyes open, because otherwise I cannot use the data.

I monitor sleepiness via the eye tracker. I engage them in conversation between scans if they seem very sleepy. This usually helps, especially if they have to give more than one-word answers.

I monitor eye movements (mostly make sure the eyes aren’t closed) and I present a sleep log if I see that they’re getting sleepy.

Frequent breaks, every 5 minutes, help, but they certainly aren’t perfect.

If I notice people are really struggling with drowsiness I will go into the scanner room between runs. I tell them “hey, I need to come into the room to grab something real quickly” and then I go into the room. This seems to provide a little boost in alertness that helps (for whatever reason).

Find out more: “What do you do if somebody falls asleep in the scanner?”

Cry to myself.

Throw out that run and try to persuade them to stay awake.

If they look sleepy during a run, I remind them of the importance of staying awake and engaged; if they are really asleep (not just looking drowsy), I take them out and stop the experiment.

For my diffusion scans, falling asleep is a huge problem because it massively changes the signal we are collecting (diffusivity goes up). I thus also warn people that I will talk to them through their headphones during the scans if I see them getting drowsy, and that they shouldn’t be alarmed.

Their data gets excluded.

Try to wake them up by talking to them over the mic. If that doesn’t work, I go in and pull them out.

I tell them to try to be more alert and redo the run and/or I exclude them from the analysis later.

Find out more: “What do you do if a participant needs to go to the bathroom in the middle of a scan?”

Try your best to make sure they use it before the scan, unless they just went in the last 10 minutes. Otherwise, if they’re super uncomfortable, I let them out.

Pull them out immediately, then re-run the scout and re-start the functional run.

Take them out, put them back in, redo initial scout scan and then rest of the experiment.

I ask the subject if they can hold it for 1 more minute, then run fieldmaps, pull them out to use the restroom, put them back in, run a scout localizer and “fastT1w” so I can re-align my FOV, then continue with the remaining functional scans and another set of fieldmaps at the end. Be careful: if a scan is “open” on the console when you remove the head coil, the top head coil elements will be turned off, so when you put the subject back in and re-attach the head coil, make sure you manually turn those elements back on in your settings before proceeding with the next scan.

Curse the devil and his accomplices. Rerun the autoalign scout before I continue. I love autoalign when nothing else (TR, voxel size) really matters.

Depending on how close we are to the end (e.g., <10 min), I ask whether they can stick with it for a bit. If it’s longer and they really need to go, I usually let them and abort the scan. For some of my experiments, the data collection is modular, so stopping early on one day is not the end of the world; but if it isn’t, expect your data quality to take a hit (it’s like collecting data over two days in terms of alignment, interpolation, etc.).

Cry. This is annoying, especially when people decline to use the bathroom beforehand. But what can you do but let them out? If it’s at the end of the session, I’ll ask if they can hold it for another 5-10 min, but otherwise I have to let them out.

Find out more: “What do you do if a participant starts experiencing discomfort during a scan (e.g., the earbuds are causing discomfort)? What do you do and/or say to the participant?”

First, I let them take a break. Afterwards, I ask them if they want to stay in the scanner, and I let them out if they want to be let out.

See if there is anything I can do to resolve it (e.g., if the discomfort is due to being cold, cover them with a sheet for the remainder of the experiment), explain how long the experiment still is so they can judge whether they think they can finish or not, and make sure they know they really can stop whenever they want without that negatively affecting them in any way.

Try to adjust it by sliding the participant out of the scanner without moving their head. In the worst case, I take them out, reposition them, and rerun shimming, the T1, etc.

I inquire how much discomfort it is causing and offer to pull them out to adjust. Sometimes they keep going, sometimes I pull them out and curse the devil and his accomplices.

I try to fix it if I can. If it’s close to the beginning of the scan, I sometimes take people out and have them redo the earplugs/earbuds and then restart.

“How uncomfortable are you? Can you handle it for another XX minutes, or would you prefer to end the experiment early? If you’re really uncomfortable, it’s better to end early, and that’s totally okay.”

Most of the time, people will be okay long enough for me to run the last 1-2 short scans. But if they’re really uncomfortable, I’ll always take them out right away. I’ve never had someone end the experiment in the middle of a scan though.

Find out more: “Have you needed to exclude participants for anything that isn’t included on the safety screening form?”

YES! Surgery (screws in the legs). Make sure to ask people whether any metal could have been left in their body if they’ve had surgery; often they don’t think of that themselves.

No, but I’ve had participants who remembered that they have an excludable issue listed on the form only after reading it for the second or third separate time. I would suggest having them read the form more than once just to be safe.

I’ve had to exclude two people because their head did not fit comfortably in the head coil (it was too large and the coil touched their face).

Someone once reported a breathing problem, where they sometimes forget to breathe and can pass out?

Once someone got into the scanner, realized they were claustrophobic, and asked to stop right away.

Find out more: “During data acquisition, what things do you check for to make sure everything is going as planned with the scan, with your task, etc.?”

I have text output in MATLAB to make sure subjects are pressing keys. I also have text output showing whether triggers are detected in real time and what the display timing is. (See the sketch after these answers.)

I pray to the Triple Goddess and her horned consort.

Look at the incoming images, listen to the free recall, and watch the participant’s eye on the eye tracker.

I usually have diagnostic output for all aspects of the task/code, as well as the participant’s responses, up on a second and/or third monitor. Also, the eye tracker tells me whether they’re still awake.

Use the inline display to make sure data are being collected and reconstructed, and monitor the stimulus display.

Find out more: “What do you have participants “do” during anatomical scans?”

Watch a YouTube video.

Listen to music (long Tycho mixes, non-verbal). I need to have a very controlled task-free condition for all of my control and diffusion scans. Every person listens to exactly the same thing.

They see a refresher of the face-scene associations they learned earlier in the day.

They watch a nature documentary with subtitles. It seems to relax them.

Usually I show them a YouTube video: Harry Potter trailers, elephant seal documentaries, base jumping videos, Pixar shorts, etc.

Watch a short Pixar movie. The anatomical is at the start of the session, and I don’t want people to get sleepy that early.

Post-Scan Data Inspection

Find out more: “What tool(s) do you use for QA?”

MRIQC, fMRIPrep

ART toolbox for SPM (visualize motion and intensity changes over time)

I usually check the motion parameters in AFNI after preprocessing.

Find out more: “What do you look for in data quality assurance?”

I look at the quality of anatomical normalization to the template and check whether any of the skull remains after skull-stripping. I also inspect the surface reconstruction for good alignment with the white matter and pial boundaries.

Motion, tSNR, and in general whether the participant is an outlier on any metric relative to the rest of the group in the MRIQC group report. (A sketch of computing tSNR appears after these answers.)

Motion and intensity changes over time.

Movement, any ghosting, any other artifacts: are the images usable?

Depending on the experiment and the stimulus, there are specific things you can look for, e.g., is there a univariate activity increase in V1 when they’re looking at a stimulus vs. rest? (You’d be surprised how many people that excludes!) The most annoying thing is that some people just don’t have good signal regardless of how well-intentioned they are and how still and compliant they are, and it’s impossible to tell until you scan them and see that the data are pure noise (e.g., via the V1 metric above).

Find out more: “What is your approach for dealing with motion? Do you remove specific TRs with a lot of movement, or throw out entire runs, or throw out an entire subject? Do you have thresholds for making each of these decisions?”

I regress out motion parameters, their derivatives, framewise displacement, and other confounds such as aCompCor. I sometimes also censor time points with excessive motion (or a high proportion of outlier voxels) at the GLM stage, according to AFNI conventions.

I censor TRs for isolated movement. If the subject is moving a lot, I throw out the whole subject.

I’ve never thrown out TRs (hemodynamic lag makes that a bit weird anyway) or runs. I have thrown out participants before for multiple incidents of 10+ mm motion throughout the experiment.

Useful Resources

Find out more: “These are people at PNI who are willing to provide help or answer questions about the following topics:”

Anne Mennen (amennen@princeton.edu) - real-time analyses

Sam Nastase (sam.nastase@gmail.com) - ReproIn, HeuDiConv, Singularity, BIDS, MRIQC, fMRIPrep, AFNI, BrainIAK, PyMVPA, Nibabel, scikit-learn

Lizzie McDevitt (emcdevitt@princeton.edu) - ReproIn, HeuDiConv, BIDS, FSL

Mai Nguyen (mlnguyen@princeton.edu) - ISC

Silvy Collin (scollin@princeton.edu) - SPM, representational similarity analysis, searchlight, HMM

Arvid Guterstam (arvidg@princeton.edu) - SPM

Monika Schoenauer (m.schonauer@princeton.edu) - diffusion-weighted imaging

Paula Brooks (paulapbrooks@gmail.com)

Andrew Wilterson (aiwilson@princeton.edu) - Surface analysis, AFNI in general

Marius Cătălin Iordan (mci@princeton.edu) - Real-time fMRI, Neurofeedback, AFNI, Multi-Day fMRI Study Design

Mark Pinsk (mpinsk@princeton.edu) - setting up protocols, equipment training, etc.

Be sure to reference the PNI Wiki for lots of useful information, including facility guidelines/procedures and instructions for how to use third-party equipment.