Resisting temptation in the brain

Having spent the last three years studying how difficult it is to say no to our vices, and being intimately acquainted with all that can go wrong in fMRI research, I’m always a bit skeptical of studies that claim to be able to predict our capacity for self-control based on a brain scan. But a new paper out this week in Psychological Science seems to have done a pretty admirable job, tying our real-life ability to resist temptation with activity in two specific areas of the brain.

Researchers from Dartmouth College first tested 31 women on two different tasks: an assessment of self-control and a measurement of temptation. Using an fMRI scanner, they compared where the women’s brains lit up when they were stopping themselves from performing a certain action (pressing a button, to be exact), and when they were viewing images of ice cream, hamburgers, and other tasty treats. As expected, better performance on the response inhibition task was linked to activation in a part of the brain called the inferior frontal gyrus (IFG), a region in the frontal cortex known to be involved in inhibiting a response. Meanwhile, looking at pictures of chocolate and chicken sandwiches activated the nucleus accumbens (NAcc), a deep-seated part of the brain that’s essential to feelings of reward.

So far, this is all pretty par for the course; you exert self-control, you activate your control center. Looking at something enticing? Your reward region is going to light up. Nothing new or ground-breaking (or even that useful, to be honest). But the researchers didn’t stop there. Instead, they took the study out of the lab to see what happened when the participants were faced with real-life temptations. Equipped with BlackBerry smartphones, the participants were prompted throughout the week with questions about how badly they desired junk food, how much they resisted these cravings, whether they gave in to their urges, and how much they ate if they did cave to temptation.

Comparing these responses to brain activity in the two target areas, the researchers discovered that the women who had the most activity in the NAcc while viewing images of food were also the ones who had the most intense cravings for these treats in real life. Additionally, these women were more likely to give in to their temptations when they had a hankering for some chocolate. On the other hand, those who had greater activity in the IFG during the inhibition task were also more successful at withstanding their desires — in fact, they were over 8 times more likely to resist the urge to indulge than those with less activity in the region. And if they did give in, they didn’t eat as much as those with a smaller IFG response.

Having confirmed the link between activity in these areas and real-life behaviors, the next step is to figure out how to ramp up or tamp down activity in the IFG and NAcc, respectively. One technique that psychologists are exploring is transcranial magnetic stimulation, or TMS. This involves zapping a certain part of the brain with magnetic pulses that induce small electrical currents, trying to either stimulate or depress activity in that region. So far, use of TMS in studies of addiction and eating disorders — attempting to enhance self-control and decrease feelings of craving — has been met with limited success. Pinpointing the right area through the skull and figuring out the correct stimulation frequency can be difficult, and a few studies have actually accidentally increased desire for the substance! Additionally, the effects are often temporary, wearing off a few days after the stimulation ends. Other studies have looked at cognitive training to try to enhance self-control abilities, particularly in children with ADHD, although these attempts have also varied in their success.

Beyond targeting certain psychiatric disorders or trying to get us to say no to that second (or third or fourth) cookie for reasons of vanity, there’s a push to enhance self-control from a public health standpoint. The authors of the current study cite the staggering statistic that 40% of deaths in the U.S. are caused by failures in self-control. That’s right, according to research, 40% of all fatalities are caused by us not being able to say no and partaking in some sort of unhealthy behavior, the biggest culprits being smoking and over-eating or inactivity leading to obesity. Clearly then, improving self-control is not only needed to help individuals on the outer edges of the spectrum resist temptation, it would benefit those of us smack dab in the middle as well.

Happy Friday!


Keeping hope alive: Brain activity in vegetative state patients

Thirteen-year-old Jahi McMath went into Oakland Children’s Hospital on December 9 for a tonsillectomy. Three days later she was declared brain-dead; severe complications from the surgery resulted in cardiac arrest and her tragic demise. While neurologists and pediatricians at the hospital have declared Jahi brain-dead, her family refuses to accept the doctors’ diagnosis, fighting to keep her on life support.

This heartrending battle between hospital and family is sadly not a new one, and there is often little that can be done to reconcile the two sides. However, neuroscientific research in recent years has made substantial progress in empirically determining whether there are still signs of consciousness in vegetative state patients. These findings can either bring hope to a desperate family or provide stronger footing for doctors trying to do the more difficult but often more humane thing.

In 2010, researchers at the University of Cambridge published a groundbreaking study in the New England Journal of Medicine that looked at brain activity in minimally conscious or vegetative state patients using fMRI. These patients were placed in the scanner and asked to imagine themselves in two different scenarios: in the first, they were instructed to envision themselves playing tennis and swinging a racket, which would activate a motor region of the brain called the supplementary motor cortex. In the second, they were told to think of a familiar place and mentally map or walk around the room. This mental map lights up the parahippocampal gyrus, an area of the brain involved in spatial organization and navigation.

Five of the patients (out of 54) were able to consistently respond to the researchers’ requests, reliably activating either the supplementary motor cortex or the parahippocampal gyrus upon each instruction. Even more amazingly, one of the patients was able to turn this brain activation into answers to yes-or-no questions. The patient was asked a series of autobiographical questions like “Do you have any siblings?” If the answer was yes, the patient was instructed to “play tennis”; if the answer was no, to take a mental stroll around the room. Remarkably, this individual was able to accurately respond to the researchers’ questions using just these two symbolic thought patterns.

Building on this research, a new study by the same scientists published in November of this year in NeuroImage used EEG to measure electrical activity in the brain in an attempt to better assess consciousness in the same group of vegetative state patients.

A certain type of EEG brain wave, the P300, is generated when we are paying attention; and just as there are different kinds of attention (e.g. concentration, alertness, surprise), there are different P300 responses associated with each type. An “early” P300 burst of activity over frontocentral regions (the P3a) is externally triggered, such as when something surprising or unexpected grabs our attention. Conversely, the later P300 wave, which is largest over parietal regions (the P3b), is more internally generated and appears when we are deliberately paying attention to something.

To exploit this distinction, the Cambridge researchers hooked up the same group of minimally conscious patients to an EEG machine and had them listen to a string of random words (gown, mop, pear, ox). Sprinkled throughout these distractor stimuli were the words “yes” and “no,” and patients were instructed to pay attention only to the word “yes.” Typically, when someone performing this task hears the target word (yes), they show a burst of delayed P300 activity, signifying that they were concentrating on that word. Upon hearing the word “no,” however, participants often show early P300 activity, its association with the target word grabbing their attention even though they were not explicitly listening for it.
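In practice, “looking for a P300” boils down to averaging the EEG epochs time-locked to each word and comparing the size of the late positive deflection (roughly 300-500 ms after word onset) for targets versus distractors. Here is a minimal sketch of that logic in Python, with hypothetical variable names; the authors’ actual pipeline of course involved artifact rejection and proper single-subject statistics.

```python
import numpy as np

# Assumed inputs (hypothetical names): one EEG channel, epoched around word onset.
# epochs: array of shape (n_trials, n_samples), in microvolts
# labels: array of shape (n_trials,), "target" for "yes", "distractor" otherwise
# fs:     sampling rate in Hz; epochs start at word onset (t = 0)
def p300_effect(epochs, labels, fs, window=(0.3, 0.5)):
    """Compare mean amplitude in the P3 window for target vs. distractor words."""
    start, stop = int(window[0] * fs), int(window[1] * fs)

    # Average across trials to get the event-related potential for each condition
    target_erp = epochs[labels == "target"].mean(axis=0)
    distractor_erp = epochs[labels == "distractor"].mean(axis=0)

    # Mean amplitude in the late (P3) window; a reliably larger value for targets
    # is taken as evidence that the listener was selectively attending to "yes"
    target_p3 = target_erp[start:stop].mean()
    distractor_p3 = distractor_erp[start:stop].mean()
    return target_p3 - distractor_p3
```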

Similar to the first study, four of the participants exhibited brain activity indicating that they could successfully distinguish the target from the distractor words, suggesting that these patients are aware and able to process instructions. Three of the four individuals also demonstrated the appropriate activation during the tennis test described above. However, it’s important to remember that in both of these studies only a very small minority of the patients were able to respond; the vast majority showed no evidence of consciousness during either task.

For the McMath family, studies such as these provide hope that their daughter is still somewhere inside herself, still able to interact with the outside world. But doctors fear this research may be misleading, as these results are very much the exception. Additionally, there is no evidence that this type of activity leads to any change in a patient’s prognosis. Finally, and most relevant to the current controversy, complete brain death, as in the case of young Jahi, is very different from a vegetative or minimally conscious state; there is no recovery from brain death. Advances in neuroscience over the last decade have been remarkable, and our knowledge of the brain has increased exponentially, but there is still far more that we do not know than that we do, and we are a long way off from being able to bring back the dead.

Also posted on Scitable: Mind Read

Do you have an addictive personality?

You’ll have to bear with me if this is a bit of a self-indulgent post, but I have some exciting news, Brain Study-ers: I’ve officially submitted my dissertation for a PhD in psychology!

In light of this – the culmination of three years of blood, sweat, tears and an exorbitant amount of caffeine – I thought I’d write this week on part of my thesis work (I promise to do my best to keep the jargon out of it!).

One of the biggest questions in addiction research is why some people become dependent on drugs while others are able to use them in moderation. Certainly some of the risk lies in the addictive potential of the substances themselves, but the vast majority of individuals who have used drugs never become dependent on them. This leads to the question: is there really such a thing as an “addictive personality”, and what puts someone at greater risk for addiction if they do choose to try drugs?

We believe that there are three crucial traits that comprise much of the risk of developing a dependency on drugs: sensation-seeking, impulsivity and compulsivity.

Sensation-seeking is the tendency to seek out new experiences, be they traveling to exotic countries, trying new foods or having an adrenaline junkie’s interest in extreme sports. People high in sensation-seeking are more likely to try psychoactive drugs in the first place, experimenting with different sensations and experiences.

Impulsivity, on the other hand, is acting without considering the consequences of your actions. This is often equated with having poor self-control – eating that slice of chocolate cake in the fridge even though you’re on a diet, or staying out late drinking when you have to be at work the next day.

While impulsivity and sensation-seeking can be similar, and not infrequently overlap, they are not synonymous, and it is possible to have one without the other. For example, in research we conducted on the biological siblings of dependent drug users, the siblings showed elevated levels of impulsivity and poor self-control similar to that of their dependent brothers and sisters, but normal levels of sensation-seeking that were on par with unrelated healthy control individuals. This led us to hypothesize that the siblings shared a similar heightened risk for dependence, and might have succumbed to addiction had they started taking drugs, but that they were crucially protected against ever initiating substance use, perhaps due to their less risk-seeking nature.

The final component in the risk for addiction is compulsivity. This is the tendency to continue performing a behavior even in the face of negative consequences. The classic example is someone with OCD, or obsessive-compulsive disorder, who feels compelled to check over and over again that the door is locked every time they leave the house, even though it makes them late for work. These compulsions can loosely be thought of as bad habits, and some people form these habits more easily than others. In drug users, this compulsive nature is expressed in continued use of the substance, even though it may have cost them their job, family, friends and health.

People who are high in sensation-seeking may be more likely to try drugs, searching for that new exciting experience, but if they are low in impulsivity they may only use a couple of times, or only when they are fairly certain there is a small risk for negative consequences. Similarly, if you have a low tendency for forming habits then you most likely have a more limited risk for developing compulsive behaviors and continuing an action even if it is no longer pleasurable, or you’ve experienced negative outcomes as a result of it.

Exemplifying this, another participant group we studied were recreational users of cocaine. These are individuals who are able to take drugs occasionally without becoming dependent on them. These recreational users had similarly high levels of sensation-seeking as the dependent users, but did not show any increase in impulsivity, nor did they differ from controls in their self-control abilities. They also had low levels of compulsivity, consistent with their ability to use drugs occasionally without it spiraling out of control or becoming a habit.

We can test for these traits using standard questionnaires, or with cognitive-behavioral tests, which can also be administered in an fMRI scanner to get an idea of what is going on in the brain during these processes. Behaviorally, sensation-seeking roughly equates to a heightened interest in reward, while impulsivity can be seen as having problems with self-control. As mentioned above, compulsivity is a greater susceptibility to the development of habits.
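One simple behavioral measure of this kind of self-control is a go/no-go task like the button-press test mentioned in the first post above: you score how often someone fails to withhold a response on “no-go” trials. The snippet below is a purely illustrative sketch (hypothetical function and data, not our actual tasks or analysis) of how such a test might be scored.

```python
# Hypothetical scoring for a go/no-go response-inhibition task.
# trials: list of (trial_type, responded) pairs, e.g. ("go", True) or ("nogo", False)
def score_go_nogo(trials):
    go = [resp for kind, resp in trials if kind == "go"]
    nogo = [resp for kind, resp in trials if kind == "nogo"]

    hit_rate = sum(go) / len(go)             # correctly pressing on "go" trials
    commission_rate = sum(nogo) / len(nogo)  # failing to stop on "no-go" trials
    return {"hit_rate": hit_rate, "commission_error_rate": commission_rate}

# Example: a participant who presses on every "go" trial but also on 1 of 4 "no-go" trials
example = [("go", True)] * 12 + [("nogo", False)] * 3 + [("nogo", True)]
print(score_go_nogo(example))  # {'hit_rate': 1.0, 'commission_error_rate': 0.25}
```

More commission errors is one rough behavioral proxy for poorer inhibitory control; in practice these tasks are combined with questionnaires rather than used in isolation.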

In the brain, poor self-control is most commonly associated with a decrease in prefrontal cortex control – the “executive” center of the brain. Reflecting this, stimulant-dependent individuals and their non-dependent siblings both showed decreases in prefrontal cortex volume, as well as impairments on a cognitive control task. Conversely, recreational cocaine users actually had an increase in PFC volume and behaved no differently from controls on a similar task. Thus, it appears that there are underlying neural correlates to some of these personality traits.

It is important to remember that we all display these behaviors to differing degrees, and it is only at extremely high levels that these characteristics put you at a greater risk for dependence. Crucially, it is not just one trait that does it, but the combination of all three. Most importantly though, neuroscience is not fatalistic: just because various personality traits might put you at an increased risk for a condition, it does not mean your behavior is out of your control.

Oh, and I’ll be going by Dr. D from now on.

Ersche, KE et al., Abnormal brain structure implicated in stimulant drug addiction. Science 335(6068): 601-604 (2012).

Ersche, KE et al., Distinctive personality traits and neural correlates associated with stimulant drug use versus familial risk of stimulant dependence. Biological Psychiatry 74(2): 137-144 (2013).

Smith, DG et al., Cognitive control dysfunction and abnormal frontal cortex activation in stimulant drug users and their biological siblings. Translational Psychiatry 3(5): e257 (2013).

Smith, DG et al., Enhanced orbitofrontal cortex function and lack of attentional bias to cocaine cues in recreational stimulant users. Biological Psychiatry, Epub ahead of print (2013).

You are what you eat

Anyone who’s ever tried to cure the blues with Ben and Jerry’s knows that there is a link between our stomachs and our moods. Foods high in fat and sugar release pleasure chemicals like dopamine and opioids into our brains in much the same way that drugs do, and I’d certainly argue that french fries and a chocolate milkshake can perk up even the lousiest of days.

This brain-belly connection works in the opposite direction, too. Ever felt nauseous before giving a big presentation? Or had butterflies in your stomach on a first date? It’s this system relaying feedback between your brain and your gut that causes those sensations, giving you physical signals that something big is about to happen.

However, instead of trying to suppress those feelings (or running to the bathroom every five minutes), it now appears that we can use this brain-body loop to our advantage. In what is formally referred to as the microbiome-gut-brain axis, bacteria that live in our stomach and intestines can affect our responses to stress and anxiety, and research in recent years has shown that probiotic bacteria – like those found in many yogurts – can help to reduce anxiety and elevate mood in addition to helping us “stay regular”.

Previous research has shown reduced fear and stress responses during anxiety-inducing tests in mice that were fed broth with an added probiotic. This included less freezing in the face of fear, greater exploration of new environments, and fewer indicators of depression during a behavioral despair test (cheerful, huh?). These chilled-out mice also had lower levels of corticosterone – a major stress hormone – after being tested, corroborating the behavioral findings.

Now, recent research from a team of doctors at UCLA’s School of Medicine and *CONFLICT OF INTEREST ALERT* funded by Danone, the yogurt company, has for the first time provided support for this brain-stomach connection in humans. The researchers looked at the effect that eating yogurt (or as they like to call it, a “fermented milk product with probiotic”) every day for four weeks had on neural responses to pictures of emotionally negative faces. This type of task usually causes an increase in activity in emotion and somatosensory regions of the brain, like the amygdala and the insula, indicating an unpleasant or stressful reaction to the images. Compared to control individuals who had eaten a similar but non-fermented milk product, those who had eaten the probiotics had decreased activity in these brain areas, suggesting they were not as affected by the pictures.

Curiously though, there was no difference between the groups in probiotic levels found in stool samples (yes, they tested their poop), and none of the participants reported feeling any changes in their levels of stress, anxiety or depression during the study. However, there were significant differences in brain activity between the groups while they were resting, including in the areas identified during the task. Altogether, it looks like even small amounts of probiotics (i.e., not enough to change your gut levels) can still have a significant effect on our brain activity, even without noticeably changing our moods.

This interaction between our guts and our gray matter is thought to be facilitated by the vagus nerve traveling down the base of the brain into the stomach, transmitting sensory information and chemical signals from internal organs back up to the brain. Supporting this theory, when this nerve was cut in the first study the positive effects of the probiotics disappeared, and the test mice were back to their normally anxious selves.

It doesn’t appear that non-fermented milk products have the same positive effects on the brain, so it looks like I’ll be switching my usual Ben and Jerry’s to frozen yogurt for the next few weeks while I finish writing up my PhD thesis. Maybe it’ll help with my growing “thes-ass” too!

(Originally posted on Mind Read)

(“Thes-ass” coinage credit to Anna Bachmann)

Beating the odds of addiction

An article I wrote for The Psychologist magazine based on my thesis research investigating risk and protective factors in drug dependence was published online this week.

This work all stems from a question I (and countless others in the field) have about why some people are able to use illicit drugs without becoming dependent, while others seem to quickly succumb to addiction.

While we’re still far from answering this question definitively, my lab at Cambridge, headed by Dr. Karen Ersche, has some theories on why this might be the case.

For example, it appears that there are underlying traits, like impulsivity, compulsivity and sensation-seeking, that can put someone at a greater risk for developing drug dependence. Some of these traits also correspond to differences in brain structure and function, such as smaller frontal cortex volume potentially making it harder for people to stop or inhibit a behavior.

If you’re interested in reading more, the full article is available here (the magazine kindly made it open access). So please check it out, and as always I welcome any questions or feedback!

Inside the mind of a criminal

On Law and Order: SVU, the story stops when the bad guy is caught. The chase is over, justice is served, the credits roll and we can all sleep easier at night knowing that Detectives Benson and Stabler have successfully put another criminal behind bars.

Of course in the real world, things are never that simple.

Our criminal justice system operates on the tenets of punishment and reform. You do the crime, you do the time — and ideally you are appropriately rehabilitated after paying penance for your sins. But unfortunately it doesn’t always work that way. Recidivism rates in the U.S. have been estimated at 40-70%, with most former inmates ending up back behind bars within three years of being released.

Parole boards make their decisions carefully, trying to weed out those whom they think are most likely to re-offend, and basing their decisions on the severity of the initial crime and the individual’s behavior while in jail. But clearly there is room for improvement.

A recent study by Dr. Eyal Aharoni and colleagues attempted to tackle this problem by using neuroimaging techniques to look inside the brains of convicted felons and using these scans to predict who is most at risk for re-offense. Their widely discussed findings show that a relative decrease in activation in the anterior cingulate cortex (ACC) during performance of a motor control task is related to a two-fold higher recidivism rate in the four years following release from jail.

However, this result should be taken with more than one grain of salt, as activation in the ACC has been linked to, well, pretty much everything.

In fact, a quick look at PubMed shows that there have been nearly 150 neuroimaging publications listing the ACC as a region of interest in the last six months alone! This includes papers on topics ranging from phobias to self-representation to physical pain. This implies that the ACC is involved in self-perception, fear, pain, cognition, decision-making, error monitoring, emotional processing and a host of other behaviors — not exactly a precise region, is it? (To be fair, damage to the ACC has previously been linked to increases in aggression, apathy and disinhibition.)

Additionally, while in the current study decreased activity in the area during response inhibition was related to a greater predicted risk of future re-offending, a large portion of the sample crucially did not fit these predictions. In fact, 40% of participants with low ACC activity did not re-offend during the course of the study, and 45% of those with high activity did. Thus, while differences in activation were a statistically significant contributor to the risk of re-offending, they certainly were not deterministic.
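To see just how non-deterministic, here is a quick back-of-the-envelope calculation using only the percentages quoted above (and assuming, purely for illustration, equally sized low- and high-ACC groups; the actual group sizes are in the paper).

```python
# Re-offense rates implied by the figures quoted above
low_acc_reoffend = 1 - 0.40   # 40% of low-ACC participants did NOT re-offend -> 60% did
high_acc_reoffend = 0.45      # 45% of high-ACC participants did re-offend

relative_risk = low_acc_reoffend / high_acc_reoffend
odds_ratio = (low_acc_reoffend / (1 - low_acc_reoffend)) / (
    high_acc_reoffend / (1 - high_acc_reoffend))

print(f"relative risk ~ {relative_risk:.2f}")  # ~1.33
print(f"odds ratio    ~ {odds_ratio:.2f}")     # ~1.83
# A real but modest difference: plenty of low-ACC individuals stayed out of jail,
# and plenty of high-ACC individuals did not.
```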

Fortunately, the authors acknowledge many of the study’s shortcomings and report that the results should be interpreted carefully. Most notably, they state that the findings should only be taken into consideration alongside a variety of other personal and environmental factors, most of which are already used in sentencing and parole decisions. For example, other significant predictive factors for re-offense include the individual’s age and their score on a test of psychopathy that is widely administered to inmates.

There are also two different ways to look at and interpret these results. On the one hand, they could be used in an attempt to exonerate or reduce sentences for men who supposedly can’t control their actions due to low brain activity. Alternatively, these scans could be used to potentially block the granting of parole to inmates who show particularly suspicious brain activation. If criminals with low ACC activity are more likely to commit future crimes, then the logic goes that they should be locked up longer — even indefinitely — to prevent them from offending again. But then where does this line of thinking end?

Do we really want to let people off because their brains “made them do it”? And conversely, just because a couple of blobs on a very commonly activated part of the brain are lighting up differently, is this a good reason to keep someone locked up longer? What about redemption? What about a second chance? What about free will?

As the fields of neuroscience and off-shoots like neuro-law progress, these questions will become more and more important, and the potential for a police state more reminiscent of Minority Report than Law and Order becomes frighteningly real. It is therefore the responsibility of all of us to think critically about results such as these and not be swayed by the bright lights and colored blobs.

(Originally posted on Mind Read)

Billions of dollars to map billions of neurons

A lot of money is being spent right now to ‘map the human brain’. In the last month, both the European Commission and U.S. President Barack Obama have pledged billions of dollars to fund two separate projects geared towards creating a working model of the human brain, all 100 billion neurons and 100,000 billion synapses.

The first, the Human Brain Project, is being spearheaded by Prof Henry Markram of the École Polytechnique Fédérale de Lausanne. Together with collaborators from 86 other European institutions, his team aims to simulate the workings of the human brain using a giant supercomputer.

To achieve this, they will compile information about the activity of vast numbers of individual neurons and neuronal circuits throughout the brain into a massive database. They then hope to integrate the biological actions of these neurons to create theoretical maps of different subsystems, and eventually, through the magic of computer simulation, a working model of the entire brain.

Similarly, the Brain Activity Map Project, or BAM! (exclamation added because it’s exciting), is a proposed initiative that would be organized through the United States’ National Institutes of Health and carried out in a number of universities and research institutes throughout the U.S. BAM will attempt to create a functional model of the brain – a ‘connectome’ – mapping its billions of neuronal connections and firing patterns. This would enable scientists to create both a ‘static’ and an ‘active’ model of the brain, mapping the physical location and connections of these neurons, as well as how they work and fire together between and within different regions. At the moment, we have small snapshots of some of these circuits, but on only a fraction of the scale of the entire brain. This process would first be carried out on much simpler models, such as the fruit fly and the mouse, before working up to the complexity of the human brain.

BAM proposes to create this model by measuring the activity of every single neuron in a circuit. At the moment, this is done with invasive deep-brain recording techniques, which involve opening up the skull to implant electrodes onto individual cells to read and record their outputs. Understandably, this is only done in patients already undergoing brain surgery, and it is a slow and expensive process. Thus, the first task of BAM would be to develop better techniques for acquiring this information. Research in this field is already underway, and exciting proposals have included nanoparticles and lasers that could measure electrical outputs from these cells less invasively, or even using DNA to map neural connections.

Neither project has directly acknowledged the other, but it is thought that the recent announcement of the U.S. proposal is a response to the initial European scheme launched earlier this year. And while there are distinct differences between the two initiatives in how they will acquire and store the raw information, as well as how they plan to build their subsequent models, the two projects overlap significantly. Both have the potential to better illuminate how exactly the brain works, and each ultimately hopes to provide us with a clearer picture of not only normal brain functioning, but also what happens when these processes are disrupted. Scientists and doctors could then use computer models to simulate dysfunction involved in neurological or psychiatric disorders, such as Alzheimer’s or schizophrenia. This would also open up possibilities for investigating better treatment options, as well as drastically cutting down on the expense and risk currently involved in clinical drug trials for psychiatric and neurological disorders.

However, there is a long list of obstacles these projects must overcome before we get too excited, not the least of which are the 100,000,000,000,000 connections that need to be measured and modeled. Even counting neurons alone, that’s over a million times as many as there were genes to map in the Human Genome Project, the closest approximation to the current endeavors. Additionally, while there was a clear end point to sequencing the human genome, the ambition of building a human connectome is both much larger and much less well-defined. Indeed, neither proposal yet has a definitive end-goal, and no one is clear on what the final product will look like.

For the Human Brain Project, the collaboration of over 80 different labs across Europe will also be a significant challenge. By collaborating rather than competing, the capacity for productivity and innovation in this and future projects is far higher. However, it will be extremely difficult to manage differences in laboratory methods and communication, not to mention egos, between these institutions.

Another major concern for the American proposal is funding. With the financial crisis, fiscal cliff and federal sequestration of recent months, neither the U.S. economy nor Congress has a very good track record at the moment, and it is hard to believe Congress will approve a multi-billion-dollar project when it cannot even agree to continue funding for health care, education and the military. Private companies including Google and Microsoft, as well as charities such as the Howard Hughes Medical Institute and the Allen Institute for Brain Science, have signed on to the project, but the bulk of funding will still have to be provided by government institutions.

In his State of the Union address, President Obama alluded to the Brain Activity Map Project and tried to head off the inevitable financial protests by invoking the Human Genome Project, which cost $2.7 billion to complete but has reportedly produced a return of $140 for every dollar spent, manifested through pharmaceutical and biotechnology developments as well as subsequent start-up companies. That return has the potential to grow even further through future reductions in health care spending from medical developments, and the hope is that BAM will produce similarly high returns. However, the question remains as to whether this investment could be better spent elsewhere, such as on improving the medical system, research into drug treatments, or health education and prevention programs. Some in the scientific community are also worried that already limited funding for other fields of research will be slashed in order to subsidize the project.

Despite these concerns, it is undeniable that if these programs were to succeed they would be spectacular achievements in scientific research, not unlike the discovery of the Higgs boson or even the first space expeditions of the 1960s. Many believe that the human brain is the final frontier for medical research, and it remains to be seen whether these brain-mapping projects will enable us to finally understand the wild and intricate workings of our own minds.

(Originally posted on King’s Review)

(And an updated version has been published on The Atlantic)

SFN ’12: Vulnerabilities for drug addiction

For anybody who’s in New Orleans for SFN this week, come by room 273 at 1pm today to learn about vulnerabilities for drug addiction. It’s an excellent nanosymposium set up by the fantastic Dr. Jenn Murray covering both human and preclinical studies of risk factors for addiction. The talks will include investigations into the classic predictive traits of impulsivity, anxiety and novelty-seeking, and they’ll also delve into environmental risk factors for addiction, such as maternal care and environmental stimulation.

I’ll be presenting first (so be there at 1pm sharp!) on my work on endophenotypes for addiction. This involves studying both dependent drug users and their non-dependent biological siblings, who share 50% of their genes and the same environment growing up, but who never developed any sort of drug or alcohol abuse. I’ll be looking specifically at cognitive control deficits and frontal cortex abnormalities in both of these groups compared to unrelated healthy control volunteers. There are some surprises in the results, so if you’re at SFN come by at 1pm to find out what they are!

I saw the (negative) sign: Problems with fMRI research

I feel the need to bring up an issue in neuroimaging research that has affected me directly, and I fear may apply to others as well.

While in the process of analyzing a large fMRI (functional magnetic resonance imaging) data-set, I made an error when setting up the contrasts. This was the first large independent imaging analysis I had attempted, and I was still learning my way around the software, programming language, and standard imaging parameters. My mistake was not a large one (I switched a 1 and a -1 when entering the contrasts), but it resulted in an entirely different, and most importantly still plausible, output, and no one noticed any problems in my results.

Thankfully, the mistake was identified before the work was published, and we have since corrected and checked the analysis (numerous times!) to ensure no other errors were committed. However, it was an alarming experience for a graduate student like me, just embarking on an exploration of the brain – an incredibly powerful machine that we barely understand, with revolutionary high-powered technology that I barely understand – to find that such a mistake could be so easily made and the resulting data so thoroughly justified. The areas identified in the analysis were all correct, and there was nothing outlandish or even particularly unexpected in my results. But they were wrong.
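For readers who haven’t run these analyses, a toy example shows how innocuous the slip is: in a general linear model, a contrast is just a vector of weights over the task regressors, and entering [-1, 1] instead of [1, -1] flips the sign of every effect while the magnitudes, and therefore the “blobs”, look exactly as sensible as before. This is my own simplified sketch in Python, not the actual SPM/FSL pipeline I was using.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design matrix for one voxel: a "task" regressor, a "baseline" regressor,
# and an intercept column (100 time points).
task = rng.random(100)
baseline = rng.random(100)
X = np.column_stack([task, baseline, np.ones(100)])

# Simulate a voxel that genuinely responds MORE to the task than to the baseline
signal = 2.0 * task + 1.0 * baseline + rng.normal(scale=0.5, size=100)

# Fit the GLM (ordinary least squares)
betas, *_ = np.linalg.lstsq(X, signal, rcond=None)

correct_contrast = np.array([1, -1, 0])   # task > baseline
flipped_contrast = np.array([-1, 1, 0])   # the sign error: baseline > task

print(correct_contrast @ betas)   # ~ +1.0: task activates the voxel more
print(flipped_contrast @ betas)   # ~ -1.0: same size, opposite sign
# The flipped map is just as clean-looking; only the direction of the story changes.
```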

Functional MRI is a game of location and magnitude. The anatomical analysis – looking for blobs in the brain that light up where we think they should – can be confirmed with pre-clinical animal models, as well as neuropsychology research in patients who have suffered localized brain damage and related loss of function. Areas involved in motor control and memory have been identified in such a manner, and these findings have been validated through imaging studies identifying activation in these same regions during performance of relevant tasks.

The question then remains as to the direction of this activation. Do individuals “over activate” or “under activate” this region? Are patients hyper- or hypo-responding compared to controls? FMRI studies typically compare activation during the target task with a baseline state to assess this directionality. Ideally, you should subtract neural activity levels during a similar but simpler process from the activation that occurs during your target cognitive function, and presumably the resulting difference in activity is the neurocognitive demand of the task.

An increase in activation compared to the baseline state, or compared to another group of participants (i.e., patients vs. controls) is interpreted as greater effort being exerted. This is typically seen as a good thing on cognitive tasks, indicating that the individual is working hard and activating the relevant regions to remember the word or exert self-control. However, if you become expert at these processes you typically exhibit a relative decrease in activation, as the task becomes less demanding and requires less cognitive effort to perform. Therefore, if you are hypo-active it could be because you are not exerting enough effort and consequently under-performing on the task compared to those with greater activation. Or, conversely, you could be superior to others in performance, responding more efficiently and not requiring superfluous neural activity.

Essentially, directionality can be justified to validate either hypothesis of relative impairment. Patients are over-active compared to controls? They’re trying too hard, over-compensating for aberrant executive functioning or decreased activation elsewhere. Alternatively, if patients display less activity on a task they must be impaired in this region and under-performing accordingly.

Concerns about the over-interpretation of imaging results are nothing new, and Dr. Daniel Bor, along with a legion of other researchers in the neuroscience community, has tackled this issue far more eloquently and expertly than I can. My own experience, though, has taught me that we need greater accountability for the claims made from imaging studies. Even with an initially incorrect finding that resulted from a technical error, I was able to construct a reasonable rationale for our results that was accepted as a plausible finding. FMRI is an invaluable and powerful tool that has opened up the brain like never before. However, there are a lot of mistakes that can be made and a lot of justifications of results that are over-stretched, making claims that cannot be validated from the data. And this is assuming there are no errors in the analysis or original research design parameters!

I am particularly concerned about the existence of other papers where students and researchers have made similar mistakes to my own, but where the results seem plausible and so are accepted, despite the fact that they are incorrect. I would argue that learning by doing is the best way to truly master a technique, and I can guarantee that I will never make this same mistake again, but there does need to be better oversight, whether internally or externally, during the reporting of methods sections, as well as in the claims made while rationalizing results. Our window into the brain is a limited one, and subtle differences in task parameters, subject eligibility, and researcher bias can greatly influence study results, particularly when using tools sensitive to human error. Providing greater detail in online supplements on the exact methods, parameters, settings, and button presses used to generate an analysis could be one way to ensure greater accountability. Going one step further, opening up data-sets to a public forum after a certain grace period has passed, similar to practices in physics and mathematics disciplines, could engender greater oversight to these processes.

As for the directionality issue, the need to create a “story” with scientific data is a compelling, and I believe very important, aspect of reporting and explaining results. However, I think more of the fMRI literature needs to be based on actual behavioral impairment, rather than just differences in neural activity. Instead of basing papers around aberrant differences in activation, which may be due to statistical (or researcher) error, and developing rationalizing hypotheses to fit these data, analyses and discussions should be centered on differences in behavior and clinical evidence. For example, the search for biomarkers (biological differences in groups at risk for a disorder, often present before they display symptoms) is an important one that could help shed light on pre-clinical pathology. However, you will almost always find subtle differences between groups if you are looking for them, even when there is no overt dysfunction, and so these searches need to be directed by known impairments in the target patient groups. A similar issue has been raised in the medical literature, with high-tech scans revealing abnormalities in the body that do not cause any tangible impairments, but the treatment of which cause more harm than good. Instead of searching for differences in activation levels in the brain, we should be led by dysfunction that results from these changes. Just as psychiatric diagnoses from the DSM-IV are supposed to be directed by symptoms relating to pathology only if they cause significant harm or distress in the individual, speculations made about the results of imaging studies should be influenced by associated impairments in behavior and function, rather than red or blue blobs on the brain.

(Thanks to Dr. Jon Simons for his advice on this post.)

If I can’t remember it, it didn’t happen: A susceptibility for alcohol-induced blackouts

As anyone who’s ever taken an Alcohol Edu course (or been 21 in the last decade) knows, consuming too much alcohol can cause memory loss, colloquially known as a “blackout”. This anterograde amnesia stems from an inability of the brain to form new long-term memories and is caused by disruption of GABA and NMDA receptor function in the prefrontal cortex (PFC) and medial temporal lobes when drinking.

First, for those of you who skipped (or drank) your way through your alcohol education, a brief reminder on the effects of alcohol on the brain. GABA is the brain’s primary inhibitory neurotransmitter, acting to decrease the likelihood of a cell’s firing. Alcohol acts as a GABA agonist, enhancing inhibitory signaling throughout the brain and therefore diminishing the rates of firing in normal cellular processes. At high doses, alcohol also acts upon glutamate NMDA receptors, part of one of the main excitatory neurotransmitter systems. Alcohol works as an NMDA antagonist, blocking the NMDA receptors and preventing glutamatergic activation, further inhibiting neuronal functioning. This inhibition particularly occurs in the PFC, medial temporal cortex and the parietal lobe, primary targets of alcohol in the brain. In the hippocampus in particular, an area in the medial temporal cortex crucial to memory formation, this inhibition can disrupt long-term potentiation, a cellular process involved in the consolidation of short-term memories into long-term ones.

Alcohol’s effect on the PFC also impacts memory ability, as short-term memories are maintained there while they are being worked on or rehearsed. However, when attention shifts to a new stimulus this memory must be consolidated into a more stable long-term version via cellular activity in the hippocampus, or else it will be discarded and forgotten. Alcohol’s inhibition of the PFC via its effects on GABA and glutamate can disrupt the maintenance of these short-term memories, decreasing the likelihood of consolidation and preservation. The dampening of firing in the PFC is also thought to underlie the behavioral disinhibition that so commonly follows alcohol consumption, as the PFC can no longer inhibit or control impulses as well.

Now, on to the exciting bit! In individuals who regularly experience alcohol-induced memory loss, or a blackout, it is contextual memory that seems to be most impaired. This refers to the details surrounding an experience, such as where, when and with whom the event occurred. However, blackouts seem to affect some drinkers more than others, and are not necessarily determined by the amount of alcohol an individual consumes. Simply put, you either black out when drinking large amounts of alcohol or you do not.

Published online this week in Alcoholism: Clinical and Experimental Research, psychologists from the University of California, San Diego and the University of Texas, Austin have recently confirmed this urban drinking legend by testing 24 regular binge drinkers, 12 of whom admitted to blacking out on a regular basis, reporting on average two blackouts per month, and 12 who drank comparable amounts of alcohol but declared no memory problems when drinking. Both groups were matched on their typical alcohol consumption, averaging 3 drinking days per week and consuming 4-5 drinks at a time on a typical day when drinking. Both groups also had comparable binge tendencies, consuming 10 or more drinks on occasion over the previous 3 months.

Participants were tested on a contextual memory task using functional magnetic resonance imaging (fMRI) both when sober and after drinking to a blood alcohol content of .08, the legal driving limit in the United States, typically reached after about 3 drinks for a man and 2 for a woman. During both the sober and intoxicated trials, the two groups performed equally well in their behavioral scores, recalling similar amounts of information regardless of their blackout status, and they did not differ in their response times on the task in either condition. However, both groups recalled significantly fewer trials and were significantly slower when intoxicated than when sober.
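(As a rough sanity check on that drinks-to-BAC figure, the standard Widmark formula gives a ballpark estimate. The sketch below uses assumed average body weights and is not the dosing protocol the researchers actually used; real values vary considerably with weight and how quickly the drinks are consumed.)

```python
def estimate_bac(standard_drinks, weight_kg, is_male, hours=1.0):
    """Rough Widmark estimate of blood alcohol content (in %).

    One US standard drink ~ 14 g of pure alcohol; r is the Widmark body-water
    constant (~0.68 for men, ~0.55 for women); ~0.015 %/hour is eliminated.
    """
    alcohol_g = standard_drinks * 14.0
    r = 0.68 if is_male else 0.55
    bac = alcohol_g / (weight_kg * 1000 * r) * 100 - 0.015 * hours
    return max(bac, 0.0)

print(round(estimate_bac(3, 70, is_male=True), 3))   # ~0.073 for a 70 kg man
print(round(estimate_bac(2, 60, is_male=False), 3))  # ~0.070 for a 60 kg woman
# Both land near the 0.08 legal limit, which is why the drink counts above are
# only a rule of thumb rather than a guarantee.
```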

In the imaging analysis, there were no differences in activation levels between the groups during either encoding or retrieval for the sober condition of the task. However, when intoxicated, both groups demonstrated significantly less activation in the right frontopolar PFC during retrieval. The blackout group also had significantly less activation during both the encoding and recall portions of the experiment after consuming moderate amounts of alcohol as compared to the non-blackout group. Specifically, participants with a history of blacking out showed less activation in the left frontopolar PFC during encoding, and decreased activity in the right posterior parietal cortex and the bilateral dorsolateral PFC during retrieval as compared to their non-blackout contemporaries. This fronto-parietal network is implicated in attentional maintenance and inhibition, as well as working memory and executive control, suggesting that there could be greater difficulties in these skills in the blackout group when drinking.

The researchers speculate that the decrease in activity in the frontal pole during intoxication is indicative of an alcohol-induced impairment in executive functioning in both groups, particularly in regards to working memory and cognitive maintenance. The additional decrease in activation in the fronto-parietal network seen in the blackout group also suggests a greater disability in executive functioning and memory maintenance in these individuals when drinking. However, it is notable that there were not any significant behavioral differences between the two groups in total memory recall, particularly during the intoxication condition.

While it is reassuring that there were no impairments in either group during the sober condition, the drinking results do suggest that there may be underlying problems with memory and executive functioning in individuals with a proclivity for forgetting, problems which could worsen with more chronic drinking. Why some people are predisposed towards these additional memory impairments is still unclear, but there does seem to be something different in the brains of those who black out regularly that is not just dependent on the amount of alcohol they drink.

(Insert poor taste joke about drinking away your memory problems here.)