Wednesday, July 31, 2013

If I Forget Thee, Bloomington

By the Genesee River, there I sat down, yea, I wept, when I remembered Bloomington.

If I forget thee, Bloomington, let my right hand forget her cunning; if I do not remember thee, let my tongue cleave to the roof of my mouth, if I prefer not Bloomington above my chief joy.

O daughter of Rochester, who art to be destroyed; happy shall he be, that rewardeth thee as thou hast served us.

Happy shall he be, that taketh and dasheth thy little ones against the stones.



Get the car, Sydney; I'm coming back.



Tuesday, July 30, 2013

Social Psychology Blog: SociallyMindful.com



For those of you who are curious, I actually do have a few friends, and one of them specializes in social psychology. Her name is Elizabeth Bendycki, and, jealous of my success and popularity, she decided to start a blog of her own focusing on her research: SociallyMindful.com. The stated purpose of the blog is to "[Explore] issues at the intersection of brain sciences and human social behavior," to "Raise awareness of the societal and ethical implications of psychological and neuroscience research," to "Further the cause of Marxism and Lysenkoism, and hasten the inevitable victory of the proletariat," and "Crush AndysBrainBlog in number of hits per day." In your dreams, Lizzie!

I encourage you all to check out her blog to support her and learn more about what she does. And then, I encourage you to realize how good you had it, and come crawling back here to beg forgiveness for ever leaving me.

Monday, July 29, 2013

Comprehensive Computational Model of ACC: Expected Value of Control

Figure 1: Example of cognitive control failure

A new comprehensive computational model of dorsal anterior cingulate cortex (dACC) function was published in last week's issue of Neuron, sending shockwaves throughout the computational modeling community and sending computational modelers running to neuroscience magazine stands in droves. (That's right, I used the word droves - and you know I reserve that word only for special cases.)

The new model, published by Shenhav, Botvinick, and Cohen, attempts to unify existing models and empirical data of dACC function by modifying the traditional monitoring role usually ascribed to the dACC. In previous models of dACC function, such as error detection and conflict monitoring, the primary role of the dACC was that of a monitor involved in detecting errors, or monitoring for mutually exclusive responses and signaling the need to override prepotent but potentially wrong responses. The current model, on the other hand, suggests that the dACC monitors the expected value associated with certain responses, and weighs the potential cost of recruiting more cognitive control against the potential value (e.g., reward or other positive outcome) for implementing cognitive control.

This kind of tradeoff is best illustrated with a basic task like the Stroop task, where a color word - such as "green" - is presented in an incongruent ink, such as red. The instructions in this task are to respond to the color, and not the word; however, this is difficult since reading a word is an automatic process. Overriding this automatic tendency to respond to the word itself requires cognitive control, or strengthening task-relevant associations - in this case, focusing more on the color and not the word itself.

However, there is a drawback: using cognitive control requires effort, and effort isn't always pleasant. Therefore, it stands to reason that the positives for expending this mental effort should outweigh the negatives of using cognitive control. The following figure shows this as a series of meters with greater cognitive control going from left to right:

Figure 1B from Shenhav et al., 2013
As the meters for control signal intensity increase, so does the probability of choosing the correct option that will lead to positive feedback, as shown by the increasing thickness of the arrows from left to right. The role of the dACC, according to the model, is to make sure that the amount of cognitive control implemented is optimal: if someone always goes balls-to-the-wall with the amount of cognitive control they bring to the table, they will probably expend far more energy than would be necessary, even though they would have a much higher probability of being correct every time. (Study question: Do you know anybody like this?) Thus, the dACC attempts to reach a balance between the cognitive control needed and the value of the outcome, as shown in the middle column of the above figure.

This balance is referred to as the expected value of control (EVC): the difference between the outcome value you can expect and the cost of control, evaluated across a range of control signal intensities. The expected value can be plotted as a curve integrating both the costs and benefits of increased control, with a clear peak at the level of intensity that maximizes the difference between the expected payoff and control cost (Figure 2):

EVC curves (in blue) integrating costs and payoffs for control intensity. (Reproduced from Figure 4 of Shenhav et al., 2013)
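The cost-benefit tradeoff behind those curves is easy to play with in code. Below is a minimal Python sketch of an EVC-style calculation - not the authors' implementation; the saturating success probability and the quadratic effort cost are illustrative assumptions of mine:

```python
import numpy as np

# Illustrative EVC sketch (not the model from the paper): payoff grows with
# control intensity through a saturating probability of success, while the
# effort cost of control grows quadratically.
def expected_value_of_control(intensity, payoff=1.0, cost_scale=0.5):
    """EVC = expected payoff minus control cost at a given signal intensity."""
    p_correct = 1.0 / (1.0 + np.exp(-4.0 * (intensity - 0.5)))  # saturating accuracy
    cost = cost_scale * intensity ** 2                          # effort cost
    return payoff * p_correct - cost

intensities = np.linspace(0.0, 1.5, 151)
evc = expected_value_of_control(intensities)
best = intensities[np.argmax(evc)]  # intensity at the peak of the EVC curve
print(f"optimal control intensity ~ {best:.2f}")
```

The argmax of the curve is the "just enough control" point: past the peak of the blue curves above, extra intensity still buys accuracy, but at a cost that swamps the payoff.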

That, in very broad strokes, is the essence of the EVC model. There are, of course, other aspects to it, including a role for the dACC in choosing the control identity which orients toward the appropriate behavior and response-outcome associations (for example, actually paying attention to the color of the Stroop stimulus in the first place), which can be read about in further detail in the paper. Overall, the model seems to strike a good balance between complexity and conciseness, and the equations are relatively straightforward and should be easy to implement for anyone looking to run their own simulations.

So, the next time you see a supermodel in a bathtub full of Nutella inviting you to join her, be aware that there are several different, conflicting impulses being processed in your dorsal anterior cingulate. To wit, 1) How did this chick get in my bathtub? 2) How did she fill it up with Nutella? Do they sell that stuff wholesale at CostCo or something? and 3) What is the tradeoff between exerting enough control to just say no, given that eating that much chocolate hazelnut spread will cause me to be unable to move for the next three days, and giving in to temptation? It is a question that speaks directly to the human condition; between abjuring gluttony and the million ailments that follow on vice, and simply giving in, dragging that broad out of your bathtub and toweling the chocolate off her so you don't waste any of it, showing her the door, and then returning to the tub and plunging your insatiable maw into that chocolatey reservoir of bliss, that muddy fountain of pleasure, and inhaling pure ecstasy.

Friday, July 26, 2013

Rochester Updates



I've only been in Rochester for about ten days now, but so far it's been a good experience. After arriving in the middle of a heatwave and sleeping in a room with no air conditioning - similar to Alec Guinness and the corrugated shed in The Bridge on the River Kwai - the heat spell broke and cooler air came in, making the weather for the past week some of the most pleasant I have ever breathed into my lungs and felt upon my skin. In addition, the trail system here is fantastic, with several miles of smooth, uninterrupted pavement running along the Genesee River and the Erie Canal. Every morning I wake up a little after six, run eight or ten or twelve miles on the sunwarmed trails, go to work, come home later in the evening, eat at the Mt. Hope Diner across the street, and sometimes get in another run before going to bed. I love it.

My lab hosts here have also provided me with everything that I wanted to learn and get done during my trip, including hashing out some ideas for a joint project, getting hands-on experience and a soup-to-nuts tutorial on setting up a monkey, guiding electrodes into its brain, and recording action potentials from its neurons, and presenting my research and going to lab meetings and journal clubs. I didn't know what to expect going in, but the days have been productive and I've met some great people.

Having recently finished teaching the FMRI workshop, I am also putting together a playlist of tutorial videos covering the analysis of a single subject from start to finish, following the text files on the AFNI website. There are about twenty steps in all, and they cover the fundamentals of FMRI acquisition and analysis, as well as the technical details of how to operate AFNI. I hope this will provide a good starting point for AFNI newcomers, and in the future I will be putting together more coherent playlists to cover certain topics in depth.

The playlist

Thursday, July 25, 2013

Scientists Plant False Memories, Basically Tell Us How To Do What We Already Knew From Watching Movies



In a study further illustrating why the public doesn't trust scientists to mess around with their brains, a neuroscience group from MIT was able to not only plant false memories, but also reactivate those memories at a later time and in a specific context. Using optogenetics - the stimulation of cells genetically altered to be especially sensitive to light - the researchers were able to generate fear-conditioned memories in mice when the mice entered a previously explored location known to be safe. In other words, the investigators were doing what psychologists do best - messing with people's minds.

However, besides its clear use for evil and obvious appeal to government and corporate leaders with a god complex, the experiment is a good example of the power of optogenetics, and makes significant headway in the search for the elusive engram - the neural signature of memories believed to be encoded primarily in the hippocampus, and particularly in the dentate gyrus and subfield CA1. Now, if they could find out how to erase those memories, that would be money. "Are you talking about that one time in second grade where you drank so much orange soda you peed your pants and I had to come pick you up from school, snookums?" Mom - GET OUT OF MY ROOM!

Kudos to Steve Ramirez and the Tonegawa lab, who are the kindest, bravest, warmest, most wonderful human beings I've ever known in my life.


Press Release
Science Paper

Friday, July 19, 2013

FMRI Workshop: University of Rochester



For those of you attending the University of Rochester, and who are intrigued, enticed, and otherwise titillated by neuroimaging methods like FMRI, I will be hosting a workshop in basic FMRI methods next week beginning Monday, July 22nd, at 2:00pm. It will be held in room 269 of Meliora Hall, on the Riverside Campus.


What to bring: A laptop with AFNI installed on it; a positive attitude; water bottle; extra socks; and a quasi-religious faith in the ability of FMRI to unlock the mysteries of life and therewith dehisce those suppurating elements of ecstasy and trauma of our lives: The boredom, the glory, and the horror.

Tips are accepted and gratefully appreciated.

Sunday, July 14, 2013

Rochester! I am Coming for You!

Go for the eyes, Boo, go for the eyes!

The next three weeks will be spent in durance ecstatic at the medical center in Rochester, New York, working with Ben Hayden and his primate neurophysiology lab, which has generously agreed to host me during my stay. Which is a good thing, because if they didn't host me, I would likely spend all of my time gorging myself on ribs at Dinosaur Bar-B-Que in a vain attempt to forget all of my sorrows. Like my daddy always said: You won't find the answer at the bottom of a basket of ribs. Unless, of course, the question is about the basket.

In any case, I look forward to working with them, bouncing around some ideas, working with the monkeys, attending Eastman School oratorios, noshing at Dinosaur Bar-B-Que, hosting an FMRI workshop, hitting the famous Rochester roads every morning and taking a swipe at that elusive 100-mile week, finishing Absalom, Absalom!, listening to Bill Evans CDs, and avoiding the herpes B virus. Pray for me.

In case any of you reading this will be in the Bloomington area, I recommend checking out the Weiss-Kaplan-Newman trio at 8pm this Tuesday (July 16th). They will be playing, among other works, Shostakovich's horrifying piano trio no. 2 in e minor - a piece which never fails to set my vile blood on fire.



Saturday, July 13, 2013

Template Spreadsheet for FMRI Results



Organizing FMRI results is hard work. Perhaps that explains why the vast majority of the world's population doesn't do it, and wouldn't do it even if they knew how. Nevertheless, for a harmless drudge such as yourself, organization and interpretation of results is a daily necessity, and the more streamlined you can make it the better for you and your adviser overlord who unfortunately will not be able to fund your summer research but will be ordering that custom-made Bentley imported from England. Keep at it, and one day you'll be the one importing cars and being swarmed at conferences by more science-worshiping nerdlings than you can shake a stick at.

To help you out with this, there is a short Excel spreadsheet template that you can find here which will automatically plot a bar chart of your results and calculate both main effects and interactions. This is especially useful for plotting and calculating double dissociations, which are among the most attractive, sultry, sexy results found in the literature. According to most people, anyway. Me? I'm more of a simple-effects kind of guy. Ladies?
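For the spreadsheet-averse, the same main effects and interaction can be computed from a 2x2 table of cell means in a few lines of Python (the cell means below are invented for illustration):

```python
import numpy as np

# Hypothetical cell means for a 2 (factor A) x 2 (factor B) design,
# e.g. mean beta values extracted from an ROI in each condition.
cells = np.array([[2.0, 1.0],    # A1B1, A1B2
                  [1.5, 3.5]])   # A2B1, A2B2

main_effect_A = cells[1].mean() - cells[0].mean()        # row means
main_effect_B = cells[:, 1].mean() - cells[:, 0].mean()  # column means
# Interaction: difference between the simple effects of B at each level of A
interaction = (cells[1, 1] - cells[1, 0]) - (cells[0, 1] - cells[0, 0])

print(main_effect_A, main_effect_B, interaction)
```

A nonzero interaction with simple effects of opposite sign - as in these made-up numbers, where B helps at A2 but hurts at A1 - is the classic double-dissociation pattern the spreadsheet is built to flaunt.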

Hit the video in case you aren't completely sure how Excel works, and need a brief refresher. Or, if you're just curious what kind of shirt I'm wearing today.


Sunday, July 7, 2013

A Reader Writes: Basic FMRI Questions

Mine's bigger

I recently received an email with some relatively basic questions about FMRI, and both the questions and answers might apply to some of you. Admittedly, I am not 100% sure about the weighting of the regressors for the ANOVA, but I think it's pretty close. Whatever, man; I'm union.

Some of the words have been changed to protect the identity of the author.


Dear Andrew,
 
Thanks for your message. My background is in medicine and I am trying  to do fmri research!
I will be grateful for your help: 
 
1. How do you interpret the results of the higher and first level fsl analysis - I am used to p values and Confidence intervals - are fMRI results read in a similar way?
 
2. Importantly- I have a series of subjects and we are interested to look at effect of [a manipulation] on their response to [cat images] over one year, we have four time points one before the operation and three after. These time points are roughly 4 months apart.
 
Our Idea was to see how the response to [cat images] changes over time- with each subject serving as their own control- How do I analyse that? We have some missing time points as well- subjects did not come for all the time points!
 
Regards,
 
Simon Legree
 
 
Hi Simon,
Congratulations on your foray into FMRI research; I wish you the best of luck, and I hope you find it enjoyable and rewarding!
In response to your questions:
1. FMRI results also use p-values and confidence intervals, but these are calculated at every single voxel in the brain. For example, if you are looking at the average BOLD response to [cat images], a parameter will be estimated at each voxel, along with a p-value and confidence interval. What you'll notice in the FSL GUI is a cluster-thresholding option, which will only display clusters containing a minimum number of spatially contiguous voxels, all passing the same p-threshold.
One crucial difference between first- and higher-level analyses in FSL (and any FMRI analysis, really) is the degrees of freedom. At the first level, the degrees of freedom are the number of time points minus the number of regressors; at the second (or higher) level, they are based on the number of inputs to that analysis - usually the number of subjects - minus the number of group-level regressors. Unless you are doing a case study, you usually will not be dealing with the degrees of freedom at the individual level. (However, see documentation on mixed-effects analyses like AFNI's 3dMEMA, which takes individual variance and degrees of freedom into account.)
2. For an analysis with each patient serving as their own control, you would probably want a paired t-test or repeated-measures ANOVA across subjects. For the paired-style contrast, you would weight each cluster of regressors so that the positive and negative weights sum to +1 and -1, respectively; in your case, +1*Before, -0.33*After1, -0.33*After2, -0.33*After3. However, if you hypothesize that the response changes linearly over time, you might instead run an ANOVA with linearly weighted timepoints; e.g., for a decreasing response over time, +0.66*Before, +0.33*After1, -0.33*After2, -0.66*After3. There are a number of different ways you could do this. As for the subjects with missing time points, you would need to take that into account when weighting your regressors; I also recommend a sanity check by running the analysis both with and without the subjects who are missing time points. If there is a large discrepancy between the two analyses, it might suggest that something else is correlated with the missing time points.


Hope this helps!
-Andy
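For the curious, here is a quick numpy sketch of the bookkeeping in the reply above - degrees of freedom and contrast weights. All of the numbers are made up for illustration:

```python
import numpy as np

# 1. Degrees-of-freedom bookkeeping: inputs minus regressors at each level.
n_timepoints, n_regressors = 200, 12          # volumes and design-matrix columns (made up)
df_first_level = n_timepoints - n_regressors  # per-subject analysis
n_subjects, n_group_regressors = 20, 1        # e.g. a single group-mean regressor
df_higher_level = n_subjects - n_group_regressors

# 2. The contrast weights from the reply: before-vs-after, and a linear decrease.
before_vs_after = np.array([1.0, -1/3, -1/3, -1/3])    # Before, After1, After2, After3
linear_decrease = np.array([0.66, 0.33, -0.33, -0.66])

print(df_first_level, df_higher_level)
print(before_vs_after.sum(), linear_decrease.sum())  # a valid contrast sums to ~0

# Applying a contrast to hypothetical per-timepoint parameter estimates:
betas = np.array([2.0, 1.8, 1.2, 0.9])
print(before_vs_after @ betas)  # positive if the response drops after the operation
```

Checking that each contrast sums to zero, and that the sign of the contrast value matches your hypothesis, is a cheap way to catch weighting mistakes before they reach the scanner data.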

Monday, July 1, 2013

Establishing Causality Between Prediction Errors and Learning

You've just submitted a big grant, and you anxiously await the verdict on your proposal, which is due any day now. Finally, you get an email with the results of your proposal. Sweat drips from your brow and onto your hands and onto your pantlegs and soaks through your clothing until you look like some alien creature excavated from a marsh. You read the first line - and then read it again. You can't believe what you just saw - you got the grant!

Never in a million years did you think this would happen. The proposal was crap, you thought; and everyone else you sent it to for review thought it was crap, too. You can just imagine their faces now as they are barely able to restrain their choked-back venom while they congratulate you on getting the big grant while they have to go another year without funding and force their graduate students to work part-time at Kilroy's for the summer and get hit on by sleazy patrons with slicked-back ponytails and names like Tony and Butch and save money by moving into that rundown, cockroaches-on-your-miniwheats-infested, two-bedroom apartment downtown with five roommates and sewage backup problems on the regular.

This scenario illustrates a key component of reinforcement learning known as prediction error: Organisms tend to associate outcomes with particular actions - sometimes randomly, at first - and over time come to form a cause-effect relationship between actions and results. Computational modeling and neuroimaging have implicated dopamine (DA) as a critical neurotransmitter responsible for making these associations, as shown in a landmark study by Schultz and colleagues back in 1997. When you have no prediction about what is going to happen, but a reward - or punishment - appears out of the blue, DA tracks this occurrence with increased firing, usually originating from clusters of DA neurons in midbrain areas such as the ventral tegmental area (VTA). Over time, these outcomes can become associated with particular stimuli or particular actions, and DA firing drifts to the onset of the stimulus or action. Other types of predictions and violations you may be familiar with include certain forms of humor, items failing to drop from the vending machine, and the Houdini.
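The delta-rule intuition behind these prediction-error accounts fits in a few lines of Python. This is a textbook Rescorla-Wagner-style sketch, not the Schultz model itself, and the parameter values are made up:

```python
# Textbook delta-rule sketch of prediction-error learning (illustrative only).
alpha = 0.2    # learning rate
reward = 1.0   # reward delivered on every trial
value = 0.0    # learned value of the predictive cue

for trial in range(50):
    prediction_error = reward - value   # the "surprise" shrinks as learning proceeds
    value += alpha * prediction_error   # nudge the prediction toward the outcome

print(round(value, 3))  # the cue's value approaches the reward it predicts
```

Early trials produce large prediction errors (big DA bursts, on this account); as the cue comes to fully predict the reward, the error - and the phasic DA response to the reward itself - shrinks toward zero.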

Figure 1 reproduced from Schultz et al. (1997). Note that when a reward is predicted but no reward occurs, DA firing drops precipitously.

In spite of a large body of empirical results, most reinforcement learning experiments have difficulty establishing a causal link between DA firing and the learning process, often due to relatively poor temporal resolution. However, a recent study in Nature Neuroscience by Steinberg et al (2013) used a form of neuronal activation known as optogenetics to stimulate neurons with pulses of light during critical time periods of learning. One aspect of learning, known as blocking, presented an opportunity to use the superior temporal resolution of optogenetics to test the role of DA in reinforcement learning.

To illustrate the concept of blocking, imagine that you are a rat. Life isn't terribly interesting, but you get to run around inside a box, run on a wheel, and push a lever to get pellets. One day you hear a tone, and a pellet falls down a nearby chute; and it turns out to be the juiciest, moistest, tastiest pellet you've ever had in your life since you were born about seven weeks ago. The same thing happens again and again, with the same tone and the same super-pellet delivered into your cage. Then, at some point, right after you hear the tone you begin to see light flashed into your cage. The pellet is still delivered; all that has changed is now you have a tone and a light, instead of just the tone. At this point, you begin to get all hot and excited whenever you hear the tone; however, the light isn't really doing it for you, and about the light you couldn't really care less. Your learning toward the light has been blocked; everything is present to learn an association between the light and the uber-pellet, but since you've already been highly trained on the association between the tone and the pellet, the light doesn't add any predictive power to the situation.
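The rat's predicament maps neatly onto a compound-cue version of the same delta rule: because the prediction error is computed from the summed predictions of all cues present, a well-trained tone leaves almost no error for the light to absorb. A minimal sketch, with made-up parameters:

```python
# Rescorla-Wagner sketch of blocking (illustrative parameters).
alpha, reward = 0.3, 1.0
v_tone, v_light = 0.0, 0.0

# Phase 1: tone alone predicts the uber-pellet.
for _ in range(100):
    delta = reward - v_tone
    v_tone += alpha * delta

# Phase 2: tone + light compound, same pellet. The shared prediction error
# is computed from the summed predictions, so the fully trained tone leaves
# almost nothing for the light to learn.
for _ in range(100):
    delta = reward - (v_tone + v_light)
    v_tone += alpha * delta
    v_light += alpha * delta

print(round(v_tone, 3), round(v_light, 3))  # tone ~1, light ~0: blocked
```

If you shorten Phase 1, the tone no longer fully predicts the pellet and the light picks up some value - which is exactly the lever the optogenetic stimulation pulls, by injecting a prediction error where the model says there should be none.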

What Steinberg and colleagues did was optogenetically stimulate DA neurons whenever rats were presented with the blocked stimulus - in the example above, the light. This induced a prediction error that became associated with the blocked stimulus, and rats later tested on it exhibited learning similar to what they had shown for the originally trained cue - in the example above, the tone - lending direct support to the theory that DA serves as a prediction error signal, rather than a salience or surprise signal. Followup experiments showed that optogenetic stimulation of DA neurons could also interfere with extinction - the process by which stimuli cease to be associated with a reward - by artificially generating a prediction error where none should occur. Taken together, these results are a solid contribution to reinforcement learning theory, and have prompted the FDA to recommend more dopamine as part of a healthy diet.

And now, what you've all been waiting for - a gunfight scene from Django Unchained.