Resisting temptation in the brain

Having spent the last three years studying how difficult it is to say no to our vices, and being intimately acquainted with all that can go wrong in fMRI research, I’m always a bit skeptical of studies that claim to be able to predict our capacity for self-control based on a brain scan. But a new paper out this week in Psychological Science seems to have done a pretty admirable job, tying our real-life ability to resist temptation with activity in two specific areas of the brain.

Researchers from Dartmouth College first tested 31 women on two different tasks: an assessment of self-control and a measurement of temptation. Using an fMRI scanner, they compared where the women’s brains lit up when they were stopping themselves from performing a certain action (pressing a button, to be exact) and when they were viewing images of ice cream, hamburgers, and other tasty treats. As expected, better performance on the response inhibition task was linked to activation in a part of the brain called the inferior frontal gyrus (IFG), a region in the frontal cortex known to be involved in inhibiting responses. Conversely, looking at pictures of chocolate and chicken sandwiches activated the nucleus accumbens (NAcc), a deep-seated part of the brain that is essential to feelings of reward.

So far, this is all pretty par for the course; you exert self-control, you activate your control center. Looking at something enticing? Your reward region is going to light up. Nothing new or ground-breaking (or even that useful, to be honest). But the researchers didn’t stop there. Instead, they took the study out of the lab to see what happened when the participants were faced with real-life temptations. They equipped the participants with BlackBerry smartphones and prompted them throughout the week with questions about how badly they desired junk food, how much they resisted these cravings, whether they gave in to their urges, and how much they ate if they did cave to temptation.

Comparing these responses to brain activity in the two target areas, the researchers discovered that the women who had the most activity in the NAcc while viewing images of food were also the ones who had the most intense cravings for these treats in real life. Additionally, these women were more likely to give in to their temptations when they had a hankering for some chocolate. On the other hand, those who had greater activity in the IFG during the inhibition task were also more successful at withstanding their desires — in fact, they were over 8 times more likely to resist the urge to indulge than those with less activity in the region. And if they did give in, they didn’t eat as much as those with a smaller IFG response.

Having confirmed the link between activity in these areas and real-life behaviors, the next step is to figure out how to ramp up activity in the IFG and tamp it down in the NAcc. One technique that psychologists are exploring is transcranial magnetic stimulation, or TMS. This involves applying magnetic pulses to a targeted part of the brain, inducing electrical currents that either stimulate or depress activity in that region. So far, the use of TMS in studies of addiction and eating disorders, attempting to enhance self-control and decrease feelings of craving, has met with limited success. Pinpointing exactly the right area through the skull and figuring out the correct stimulation frequency can be difficult, and a few studies have actually accidentally increased desire for the substance! Additionally, the effects are often temporary, wearing off a few days after the stimulation ends. Other studies have looked at cognitive training to try to enhance self-control abilities, particularly in children with ADHD, although these attempts have also varied in their success.

Beyond targeting certain psychiatric disorders or trying to get us to say no to that second (or third or fourth) cookie for reasons of vanity, there’s a push to enhance self-control from a public health standpoint. The authors of the current study cite the staggering statistic that 40% of deaths in the U.S. are caused by failures in self-control. That’s right: according to research, 40% of all fatalities stem from our inability to say no, leading us to partake in some sort of unhealthy behavior, the biggest culprits being smoking and over-eating or inactivity leading to obesity. Clearly, then, improving self-control is not only needed to help individuals at the outer edges of the spectrum resist temptation; it would benefit those of us smack dab in the middle as well.

Happy Friday!

Keeping hope alive: Brain activity in vegetative state patients

Thirteen-year-old Jahi McMath went into Children’s Hospital Oakland on December 9 for a tonsillectomy. Three days later she was declared brain-dead after severe complications from the surgery led to cardiac arrest. While neurologists and pediatricians at the hospital have pronounced Jahi brain-dead, her family refuses to accept the doctors’ diagnosis and is fighting to keep her on life support.

This heartrending battle between hospital and family is sadly not a new one, and there is often little that can be done to reconcile the two sides. However, neuroscientific research in recent years has made substantial progress in empirically determining whether there are still signs of consciousness in vegetative-state patients. These findings can either bring hope to a desperate family or provide stronger footing for doctors trying to do the more difficult, but often more humane, thing.

In 2010, researchers at the University of Cambridge published a groundbreaking study in the New England Journal of Medicine that used fMRI to look at brain activity in minimally conscious or vegetative-state patients. The patients were placed in the scanner and asked to imagine themselves in two different scenarios. In the first, they were instructed to envision themselves playing tennis and swinging a racket, which activates a motor region of the brain called the supplementary motor area. In the second, they were told to think of a familiar place and mentally walk through it, mapping out the space. This mental navigation lights up the parahippocampal gyrus, an area of the brain involved in spatial organization and navigation.

Five of the 54 patients were able to consistently respond to the researchers’ requests, reliably activating either the supplementary motor area or the parahippocampal gyrus upon each instruction. Even more amazingly, one of the patients was able to turn this brain activation into answers to yes-or-no questions. The patient was asked a series of autobiographical questions like “Do you have any siblings?” If the answer to a question was yes, the patient was instructed to “play tennis”; if the answer was no, to take a mental stroll around the room. Remarkably, this individual was able to accurately respond to the researchers’ questions using just these two symbolic thought patterns.

Building on this research, a new study by the same scientists published in November of this year in NeuroImage used EEG to measure electrical activity in the brain in an attempt to better assess consciousness in the same group of vegetative state patients.

A certain type of EEG brain wave, the P300, is generated when we are paying attention; and just as there are different kinds of attention (e.g., concentration, alertness, surprise), there are different P300 responses associated with each type. An “early” P300 burst of activity over frontocentral regions (the P3a) is externally triggered, such as when something surprising or unexpected grabs our attention. Conversely, delayed P300 waves over the parietal lobe (the P3b) are more internally generated, arising when we are deliberately paying attention to something.

To test this, the Cambridge researchers hooked up the same group of minimally conscious patients to an EEG machine and had them listen to a string of random words (gown, mop, pear, ox). Sprinkled among these distractor stimuli were the words “yes” and “no,” and patients were instructed to pay attention only to the word “yes.” Typically, when someone performing this task hears the target word (yes), they show a burst of delayed P300 activity, signifying that they were concentrating on that word. Upon hearing the word “no,” however, participants often show early P300 activity, because its association with the target word grabs their attention even though they were not explicitly listening for it.
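
To make the logic of this paradigm concrete, here is a minimal sketch in Python of how such EEG recordings are typically analyzed: single-trial epochs are averaged per condition, and the amplitude of the resulting event-related potential is compared in a late time window. The data are synthetic, and the parameters (sampling rate, window, amplitudes) are illustrative assumptions rather than the study’s actual settings:

```python
import numpy as np

rng = np.random.default_rng(1)
sfreq = 250                               # assumed sampling rate (Hz)
times = np.arange(0, 0.8, 1 / sfreq)      # 0-800 ms after word onset

def make_epochs(n_trials, p300_amp):
    """Synthetic single-trial EEG: noise plus a late positive bump."""
    bump = p300_amp * np.exp(-((times - 0.45) ** 2) / 0.005)
    return rng.normal(0, 5, (n_trials, times.size)) + bump

target_epochs = make_epochs(40, p300_amp=8)       # "yes" trials
distractor_epochs = make_epochs(40, p300_amp=0)   # unrelated words

# The event-related potential is the average across trials, per condition
erp_target = target_epochs.mean(axis=0)
erp_distractor = distractor_epochs.mean(axis=0)

# Compare mean amplitude in a late window (roughly the P3b latency range)
late = (times > 0.35) & (times < 0.6)
print(erp_target[late].mean(), erp_distractor[late].mean())
```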

Similar to the first study, four of the participants exhibited brain activity indicating that they could successfully distinguish the target from the distractor words. This result suggests that these patients are aware and able to process instructions. Three of the four individuals also demonstrated the appropriate activation during the tennis task described above. However, it’s important to remember that in both of these studies only a very small minority of the patients were able to respond; the vast majority showed no evidence of consciousness during either task.

For the McMath family, studies such as these provide hope that their daughter is still somewhere inside herself, still able to interact with the outside world. But doctors fear this research may mislead, as such results are by far the exception. Additionally, there is no evidence that this type of activity leads to any change in a patient’s prognosis. Finally, and most relevant to the current controversy, complete brain death, as in the case of young Jahi, is very different from a vegetative or minimally conscious state; there is never any recovery from brain death. Neuroscience has made more and more incredible advances in the last decade, and our knowledge of the brain has increased exponentially, but there is still more that we do not know than that we do, and we are a long way off from being able to bring back the dead.

Also posted on Scitable: Mind Read

Inside the mind of a criminal

On Law and Order: SVU, the story stops when the bad guy is caught. The chase is over, justice is served, the credits roll and we can all sleep easier at night knowing that Detectives Benson and Stabler have successfully put another criminal behind bars.

Of course in the real world, things are never that simple.

Our criminal justice system operates on the tenets of punishment and reform. You do the crime, you do the time, and ideally you are appropriately rehabilitated after doing penance for your sins. But unfortunately it doesn’t always work that way. Recidivism rates in the U.S. have been estimated at 40-70%, with many former inmates ending up back behind bars within three years of being released.

Parole boards make their decisions carefully, trying to weed out those who they think are most likely to re-offend, basing their judgments on the severity of the initial crime and the individual’s behavior while in jail. But clearly there is room for improvement.

A recent study by Dr. Eyal Aharoni and colleagues attempted to tackle this problem, using neuroimaging to look inside the brains of convicted felons and then using those scans to predict who is most at risk of re-offending. Their widely discussed findings show that a relative decrease in activation in the anterior cingulate cortex (ACC) during a response inhibition task is related to a two-fold higher recidivism rate in the four years following release from jail.

However, this result should be taken with more than one grain of salt, as activation in the ACC has been linked to, well, pretty much everything.

In fact, a quick look at PubMed shows that there have been nearly 150 neuroimaging publications listing the ACC as a region of interest in the last six months alone! This includes papers on topics ranging from phobias to self-representation to physical pain. This implies that the ACC is involved in self-perception, fear, pain, cognition, decision-making, error monitoring, emotional processing and a host of other behaviors — not exactly a precise region, is it? (To be fair, damage to the ACC has previously been linked to increases in aggression, apathy and disinhibition.)

Additionally, while in the current study decreased activity in the area during response inhibition was related to a greater predicted risk of future re-offending, a crucially large portion of the sample did not fit these predictions. In fact, 40% of participants with low ACC activity did not re-offend during the course of the study, and 45% of those with high activity did. Thus, while the differences in activation were a statistically significant predictor of re-offense risk, they certainly were not deterministic.
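
To see why a significant predictor can still be a poor gatekeeper, here is a back-of-the-envelope calculation in Python using the rates quoted above; the cohort size of 100 per group is hypothetical, chosen only to make the arithmetic transparent:

```python
# Hypothetical cohort of 100 inmates per group, using the rates quoted
# above; the group size is invented to make the arithmetic easy to follow.
n = 100
low_acc_reoffend = 0.60    # 40% of the low-ACC group did NOT re-offend
high_acc_reoffend = 0.45   # 45% of the high-ACC group DID re-offend

# If parole were denied to everyone with low ACC activity:
wrongly_detained = n * (1 - low_acc_reoffend)  # would never have re-offended
missed = n * high_acc_reoffend                 # re-offend despite "safe" scans

print(wrongly_detained, missed)  # 40.0 45.0: a risk factor, not a verdict
```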

Fortunately, the authors acknowledge many of the study’s shortcomings and stress that the results should be interpreted carefully. Most notably, they state that the findings should only be taken into consideration alongside a variety of other personal and environmental factors, most of which are already used in sentencing and parole decisions. For example, other significant predictors of re-offense include the individual’s age and their score on a test of psychopathy that is widely administered to inmates.

There are also two different ways to look at and interpret these results. On the one hand, they could be used in an attempt to exonerate or reduce sentences for men who supposedly can’t control their actions due to low brain activity. Alternatively, these scans could be used to potentially block the granting of parole to inmates who show particularly suspicious brain activation. If criminals with low ACC activity are more likely to commit future crimes, then the logic goes that they should be locked up longer — even indefinitely — to prevent them from offending again. But then where does this line of thinking end?

Do we really want to let people off because their brains “made them do it”? And conversely, just because a couple of blobs on a very commonly activated part of the brain are lighting up differently, is this a good reason to keep someone locked up longer? What about redemption? What about a second chance? What about free will?

As the fields of neuroscience and offshoots like neurolaw progress, these questions will become more and more important, and the potential for a police state more reminiscent of Minority Report than Law and Order becomes frighteningly real. It is therefore the responsibility of all of us to think critically about results such as these and not be swayed by the bright lights and colored blobs.

(Originally posted on Mind Read)

I saw the (negative) sign: Problems with fMRI research

I feel the need to bring up an issue in neuroimaging research that has affected me directly, and I fear may apply to others as well.

While in the process of analyzing a large fMRI (functional magnetic resonance imaging) data-set, I made an error when setting up the contrasts. This was the first large independent imaging analysis I had attempted, and I was still learning my way around the software, programming language, and standard imaging parameters. My mistake was not a large one (I switched a 1 and a -1 when entering the contrasts); however, it resulted in an entirely different, but most importantly still plausible, output, and no one noticed any problems in my results.
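
For readers unfamiliar with how a single sign can flip an entire analysis, here is a minimal sketch using plain numpy; the numbers and variable names are invented for illustration and are not from my actual pipeline. A contrast vector weights the fitted GLM coefficients for each condition, so swapping a 1 and a -1 reverses the direction of the comparison while yielding an equally tidy-looking result:

```python
import numpy as np

# Fitted GLM coefficients (betas) for two conditions at one voxel,
# e.g. [task, baseline]; the values are made up for illustration.
betas = np.array([2.5, 1.0])

correct_contrast = np.array([1, -1])    # tests task > baseline
flipped_contrast = np.array([-1, 1])    # the accidental sign swap

print(correct_contrast @ betas)   #  1.5: "the task activates this region"
print(flipped_contrast @ betas)   # -1.5: the opposite, equally tidy story
```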

Thankfully, the mistake was identified before the work was published, and we have since corrected and re-checked the analysis (numerous times!) to ensure no other errors were committed. However, it was an alarming experience for a graduate student like me, just embarking on an exploration of the brain (an incredibly powerful machine that we barely understand) with revolutionary high-powered technology (that I barely understand): such a mistake could be made so easily, and the resulting data so thoroughly justified. The areas identified in the analysis were all correct, and there was nothing outlandish or even particularly unexpected in my results. But they were wrong.

Functional MRI is a game of location and magnitude. The anatomical analysis (looking for blobs in the brain that light up where we think they should) can be confirmed with pre-clinical animal models, as well as with neuropsychological research in patients who have suffered localized brain damage and related loss of function. Areas involved in motor control and memory have been identified in this manner, and these findings have been validated by imaging studies showing activation in the same regions during performance of relevant tasks.

The question that remains concerns the direction of this activation. Do individuals “over-activate” or “under-activate” a region? Are patients hyper- or hypo-responsive compared to controls? fMRI studies typically compare activation during the target task with a baseline state to assess this directionality. Ideally, you subtract the neural activity measured during a similar but simpler process from the activation that occurs during your target cognitive function; the resulting difference in activity is presumed to reflect the neurocognitive demand of the task.

An increase in activation compared to the baseline state, or compared to another group of participants (e.g., patients vs. controls), is interpreted as greater effort being exerted. This is typically seen as a good thing on cognitive tasks, indicating that the individual is working hard and activating the relevant regions to remember the word or exert self-control. However, if you become an expert at these processes, you typically exhibit a relative decrease in activation as the task becomes less demanding and requires less cognitive effort to perform. Therefore, if you are hypo-active it could be because you are not exerting enough effort and consequently under-performing on the task compared to those with greater activation. Or, conversely, you could be out-performing others, responding more efficiently and not requiring superfluous neural activity.

Essentially, directionality can be justified to validate either hypothesis of relative impairment. Patients are over-active compared to controls? They’re trying too hard, over-compensating for aberrant executive functioning or decreased activation elsewhere. Alternatively, if patients display less activity on a task they must be impaired in this region and under-performing accordingly.
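
A toy example makes the problem concrete. Suppose (with invented numbers) that a patient group shows a lower fitted response than controls in some region; the sign of the difference alone supports two opposite stories:

```python
# Invented beta values for one region; the point is the interpretation,
# not the numbers.
patient_beta, control_beta = 0.8, 1.4
difference = patient_beta - control_beta  # -0.6: patients "under-activate"

# Story 1: patients exert less effort, so they should under-perform.
# Story 2: patients are more efficient and need less activity to do well.
# Only accompanying behavioral data can arbitrate between the two stories.
print(difference)
```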

Concerns about the over-interpretation of imaging results are nothing new, and Dr. Daniel Bor, along with a legion of other researchers in the neuroscience community, has tackled this issue far more eloquently and expertly than I could. My own experience, though, has taught me that we need greater accountability for the claims made from imaging studies. Even with an initially incorrect finding that resulted from a technical error, I was able to construct a reasonable rationale for our results that was accepted as a plausible finding. fMRI is an invaluable and powerful tool that has opened up the brain like never before. However, there are many mistakes that can be made and many justifications of results that are over-stretched, making claims that cannot be validated from the data. And this all assumes there are no errors in the analysis or the original research design parameters!

I am particularly concerned that other papers exist in which students and researchers have made mistakes similar to my own, but the results seemed plausible and so were accepted, despite being incorrect. I would argue that learning by doing is the best way to truly master a technique, and I can guarantee that I will never make this same mistake again, but there does need to be better oversight, whether internal or external, of the reporting of methods, as well as of the claims made when rationalizing results. Our window into the brain is a limited one, and subtle differences in task parameters, subject eligibility, and researcher bias can greatly influence study results, particularly when using tools so sensitive to human error. Providing greater detail in online supplements on the exact methods, parameters, settings, and button presses used to generate an analysis would be one way to ensure greater accountability. Going one step further, opening up data-sets to a public forum after a certain grace period has passed, similar to practices in physics and mathematics, could bring greater oversight to these processes.

As for the directionality issue, the need to create a “story” with scientific data is a compelling, and I believe very important, aspect of reporting and explaining results. However, more of the fMRI literature needs to be grounded in actual behavioral impairment, rather than just in differences in neural activity. Instead of basing papers around aberrant differences in activation, which may be due to statistical (or researcher) error, and developing rationalizing hypotheses to fit these data, analyses and discussions should center on differences in behavior and clinical evidence. For example, the search for biomarkers (biological differences in groups at risk for a disorder, often present before they display symptoms) is an important one that could help shed light on pre-clinical pathology. However, you will almost always find subtle differences between groups if you go looking for them, even when there is no overt dysfunction, so these searches need to be directed by known impairments in the target patient groups.

A similar issue has been raised in the medical literature, where high-tech scans reveal abnormalities in the body that do not cause any tangible impairment, but whose treatment causes more harm than good. Instead of searching for differences in activation levels in the brain, we should be led by the dysfunction that results from these changes. Just as psychiatric diagnoses in the DSM-IV are supposed to be made only when symptoms cause significant harm or distress to the individual, speculations about the results of imaging studies should be grounded in associated impairments in behavior and function, rather than in red or blue blobs on the brain.
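
The claim that you will almost always find differences if you go looking for them is easy to demonstrate. The sketch below, written in plain numpy with invented study dimensions, draws both groups from the same distribution, so no true difference exists, yet testing 100 regions at p < .05 still flags several of them:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_subjects = 100, 20   # invented study dimensions

false_positives = 0
for _ in range(n_regions):
    patients = rng.normal(0, 1, n_subjects)   # both groups drawn from the
    controls = rng.normal(0, 1, n_subjects)   # same distribution: no effect
    # Two-sample t statistic computed by hand to keep dependencies minimal
    diff = patients.mean() - controls.mean()
    se = np.sqrt(patients.var(ddof=1) / n_subjects
                 + controls.var(ddof=1) / n_subjects)
    if abs(diff / se) > 2.02:                 # roughly p < .05 for df ~ 38
        false_positives += 1

print(false_positives)   # typically ~5 regions "differ" by chance alone
```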

(Thanks to Dr. Jon Simons for his advice on this post.)

Did I do that? Reality monitoring in the brain

Most of us have no problem telling the real from the imagined. Or so we think.

Reality monitoring, the process of distinguishing internal thoughts and imaginings from external experiences and memories, typically happens seamlessly for most individuals. However, there are times when we cannot recall whether someone else told us about that interesting article or we read it ourselves, or whether we actually remembered to lock the door before leaving the house. Did we really do or hear these things, or did we only imagine them? This is a common problem in patients with schizophrenia, who at times cannot distinguish between what they think they remember or believe to be true and what actually occurred.

A new study on reality monitoring published last week in the Journal of Neuroscience reveals that many of us are not as good at making this distinction as we might think. Additionally, the ability to discern between perceived and imagined events may be rooted in one very specific structure of the brain, which nearly 30% of the population is missing. In the study, led by Marie Buda and Dr. Jon Simons at the University of Cambridge*, researchers administered a very particular type of memory test to healthy participants who had been pre-selected based on the prominence of the paracingulate sulcus (PCS) in their brains. The PCS runs rostro-caudally (front to back) through the anteromedial (middle-front) prefrontal cortex, a region involved in higher-level cognitive functioning and one of the last parts of the brain to mature. Consequently, the sulcus can be relatively underdeveloped or even seemingly absent in many people. This is particularly the case in individuals with schizophrenia, as many as 44% of whom lack this structure.

Participants for the current study were chosen from a database of individuals who had previously undergone an MRI scan and clearly showed a presence or absence of the PCS in one or both hemispheres. The memory task in question involved a list of common word pairs such as “yin and yang” or “bacon and eggs”. The words were either presented together (perceive condition), or only one word was presented and the participant had to imagine the complementing word (imagine condition). The second element of the experiment concerned the source of this information, i.e., whether the subject or the experimenter was the one to read out or verbally complete the pair. After the task, subjects were asked to report whether each pair had been perceived or imagined, and whether that information was attributable to themselves or to the experimenter. They were also asked to rate their confidence in both of these responses.
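
To make the design concrete, here is a compact sketch of the 2 x 2 structure (condition by source) and how responses might be scored; the materials and field names are hypothetical stand-ins, not the authors’ actual stimuli or code:

```python
import random
from dataclasses import dataclass

@dataclass
class Trial:
    pair: str        # e.g. "bacon and eggs"
    condition: str   # "perceive" (both words shown) or "imagine"
    source: str      # completed by "self" or "experimenter"

pairs = ["yin and yang", "bacon and eggs", "salt and pepper", "cat and mouse"]
trials = [Trial(p, random.choice(["perceive", "imagine"]),
                random.choice(["self", "experimenter"])) for p in pairs]

def score(trial, remembered_condition, remembered_source):
    """Item memory (perceive vs. imagine) and source memory, scored separately."""
    return (trial.condition == remembered_condition,
            trial.source == remembered_source)

# The PCS-absent group differed from the PCS-present group only on the
# second element of this tuple (source attribution).
print(score(trials[0], "perceive", "self"))
```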

Participants with a complete absence of the PCS in both hemispheres performed significantly worse on the reality monitoring task than individuals with a definite presence of the sulcus. This difference was driven by source attribution memory (themselves vs. the experimenter); performance on the perceive/imagine judgment did not differ between the groups. Interestingly, the two groups also did not differ in their confidence in their responses. Thus, even though the PCS-absent group performed significantly worse at attributing the source of the information, they were just as confident in their answers as individuals who responded correctly, indicating a lack of introspective awareness of their own memory abilities in the absent group.

It should be noted that there was also a correlation between overall gray matter volume in the prefrontal and motor cortices and scores on the reality monitoring task. This is important as it may indicate that there are other regions involved in this process outside of the PCS, and the authors caution that this enhanced ability may stem from an increase in gray matter and connectivity in the medial prefrontal cortex, rather than from the PCS itself.

These findings could have useful applications in clinical psychiatry. As stated above, impaired reality monitoring is often associated with schizophrenia, and the absence of the PCS could serve as a potential biomarker for the disorder. Additionally, although not commonly discussed in terms of reality monitoring, another psychiatric diagnosis that could potentially benefit from this type of research is obsessive-compulsive disorder (OCD). OCD often involves obsessions and the urge to compulsively check things, such as whether one remembered to turn off the stove. This ruminating and checking behavior could indicate a breakdown in reality monitoring, with patients unable to determine whether a target action actually occurred. While this problem does not apply to all OCD patients, impaired reality monitoring could be a promising area to investigate in those for whom checking is a significant problem.

*Disclaimer: Marie Buda and Jon Simons are fellow members of the Department of Experimental Psychology at the University of Cambridge with me.