Keeping hope alive: Brain activity in vegetative state patients

Thirteen-year-old Jahi McMath went into Oakland Children’s Hospital on December 9 for a tonsillectomy. Three days later she was declared brain-dead; severe complications from the surgery resulted in cardiac arrest and her tragic demise. While neurologists and pediatricians at the hospital have declared Jahi brain-dead, her family refuses to accept the doctors’ diagnosis, fighting to keep her on life support.

This heartrending battle between hospital and family is sadly not a new one, and there is often little that can be done to reconcile the two sides. However, neuroscientific research in recent years has made substantial progress in empirically determining whether there are still signs of consciousness in vegetative-state patients. These findings can either bring hope to a desperate family or give doctors stronger footing when trying to do the more difficult but often more humane thing.

In 2010, researchers at the University of Cambridge published a groundbreaking study in the New England Journal of Medicine that used fMRI to look at brain activity in minimally conscious or vegetative-state patients. The patients were placed in the scanner and asked to imagine themselves in two different scenarios. In the first, they were instructed to envision themselves playing tennis and swinging a racket, which activates a motor region of the brain called the supplementary motor area. In the second, they were told to think of a familiar place and mentally walk around the room. This mental mapping lights up the parahippocampal gyrus, an area of the brain involved in spatial organization and navigation.

Five of the patients (out of 54) were able to consistently respond to the researchers’ requests, reliably activating either the supplementary motor area or the parahippocampal gyrus upon each instruction. Even more amazingly, one of the patients was able to turn this brain activation into answers to yes-or-no questions. The patient was asked a series of autobiographical questions like “Do you have any siblings?” If the answer was yes, she was instructed to “play tennis”; if the answer was no, she was to take a mental stroll around the room. Remarkably, this individual was able to accurately respond to the researchers’ questions using just these two symbolic thought patterns.

Building on this research, a new study by the same scientists published in November of this year in NeuroImage used EEG to measure electrical activity in the brain in an attempt to better assess consciousness in the same group of vegetative state patients.

A certain type of EEG brain wave, the P300, is generated when we are paying attention, and just as there are different kinds of attention (i.e. concentration, alertness, surprise), there are different P300 responses associated with each type. An “early” P300 burst of activity over frontocentral regions (the P3a) is externally triggered, such as when something surprising or unexpected grabs our attention. Conversely, delayed P300 waves over the parietal cortex (the P3b) are more internally generated, arising when we are deliberately paying attention to something.

To test this, the Cambridge researchers hooked the same group of minimally conscious patients up to an EEG machine and had them listen to a string of random words (gown, mop, pear, ox). Sprinkled throughout these distractor stimuli were the words “yes” and “no,” and patients were instructed to pay attention only to the word “yes.” Typically, when someone performing this task hears the target word (yes), they show a burst of delayed P300 activity, signifying that they were concentrating on that word. Upon hearing the word “no,” however, participants often show early P300 activity: its association with the target word grabs their attention even though they were not explicitly listening for it.
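
How does one actually see a P300 in noisy EEG? Single trials are far too noisy, so responses are typically averaged across many presentations of the same event. Below is a minimal Python sketch of that epoch-and-average step, using entirely synthetic data; the study’s real pipeline involved far more preprocessing and statistics than this.

```python
import numpy as np

def average_erp(eeg, event_samples, sfreq, tmin=-0.1, tmax=0.8):
    """Average time-locked EEG epochs to expose an ERP such as the P300.

    eeg:           1-D voltage trace from a single channel
    event_samples: sample indices at which the stimulus (e.g. the word
                   "yes") was presented
    sfreq:         sampling rate in Hz
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for ev in event_samples:
        if ev + start >= 0 and ev + stop <= len(eeg):
            epoch = eeg[ev + start:ev + stop].copy()
            # Baseline-correct using the pre-stimulus interval
            epoch -= epoch[:-start].mean()
            epochs.append(epoch)
    # Activity not time-locked to the stimulus averages toward zero,
    # leaving components like the P300: a positive deflection roughly
    # 300-600 ms after an attended target.
    return np.mean(epochs, axis=0)

# Entirely synthetic demo: a 250 Hz recording with fake "yes" onsets
sfreq = 250
eeg = np.random.randn(60 * sfreq)
yes_onsets = np.arange(2 * sfreq, 55 * sfreq, 3 * sfreq)
erp_yes = average_erp(eeg, yes_onsets, sfreq)
```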

Similar to the first study, four of the participants exhibited brain activity indicating that they could successfully distinguish the target from the distractor words, suggesting that these patients are aware and able to process instructions. Three of the four also demonstrated the appropriate activation during the tennis test described above. However, it’s important to remember that in both of these studies only a very small minority of the patients were able to respond; the vast majority showed no evidence of consciousness during either task.

For the McMath family, studies such as these provide hope that their daughter is still somewhere inside herself, still able to interact with the outside world. But doctors fear this research may be misleading, as these results are by far the exception. Additionally, there is no evidence that this type of activity leads to any change in a patient’s prognosis. Finally, and most relevant to the current controversy, complete brain death, as in the case of young Jahi, is very different from a vegetative or minimally conscious state; there is never any recovery from brain death. Neuroscience has made incredible advances in the last decade, and our knowledge of the brain has grown enormously, but there is still more that we do not know than that we do, and we are a long way from being able to bring back the dead.

Also posted on Scitable: Mind Read

Can synesthesia in autism lead to savantism?

I’ve got a new piece out on the Scientific American MIND blog network today on the fascinating link that’s been discovered between synesthesia – a “crossing of the senses” where one perceptual experience is tied to another, like experiencing sound and color together – and autism spectrum disorder.

Individuals with autism have significantly higher rates of synesthesia than the rest of the population, and the two conditions may be linked by a unique way in which the brain is wired. White matter tracts that traverse our brains, connecting one area to another, are thought to be increased in both conditions. This abnormal wiring may result in more close-range connections but fewer long-distance ones. And it’s possible that these extra connections also contribute to some of the extraordinary cognitive abilities seen in autistic individuals with savant syndrome.

For more on the story, check out the full piece here.

Drug use, decision-making and the blunders of Rob Ford

I’ve got a new piece in The Guardian today on the unfolding debacle that is Toronto Mayor Rob Ford.

His consistent pattern of bad decision-making – including death threats, drunken driving and public boasts about his sex life – has all the signs of problem drug and alcohol use. Substance-dependent individuals frequently show impairments in decision-making abilities, with difficulties in impulse control and executive function, as well as corresponding abnormalities in relevant brain regions.

However, as Mayor Ford has vehemently denied any accusations of addiction, it could just be incompetence.

Check out the full piece here.

What do your hands say about you?

When I tell people that I ‘do psychology’ I typically get one of three reactions. 1) People ask if I can read their thoughts. No, unless you’re a drunken guy in a bar, in which case, gross. 2) They begin to tell me about their current psychological troubles and parental issues, to which I listen sympathetically and then make it clear that I got into experimental psychology because I didn’t want to have to listen to people’s problems (sorry). Or 3) they ask me a very astute question about the brain that 9 times out of 10 I can’t answer. This last option is by far the most preferable and I’ve had several very interesting conversations come out of these interactions.

One such question I received recently was: where does handedness come from in the brain? While it seems like a basic question at first, I quickly realized that I had no idea how to answer it without dipping into pop-psychology tropes about right- and left-brained people, tropes I definitely wanted to validate before trotting them out.

So what exactly is handedness? Does it really reflect differences in dominant brain hemispheres? Is it underlying or created, and what happens if you switch back and forth? Can a person truly be ambidextrous?

Handedness may indeed relate to your so-called ‘dominant’ hemisphere, with the majority of the population being right-handed and thus ‘left-brained’ (each hemisphere controls the opposite side of the body in terms of motor and sensory abilities). The dominant side of the body, by definition, is quicker and more precise in its movements. This preference originates from birth and is then ingrained by your actions, such as the practice of fine motor skills like handwriting.

Going beyond basic motor differences, handedness has been loosely related to the more general functions of each brain hemisphere as well. The left hemisphere is typically associated with more focused, detailed and analytical processing, and this type of thinking may be reflected in the precision movements of the more typically dominant right hand. Conversely, greater spatial awareness and an emphasis on systems- or pattern-based observations are thought to reside primarily in the right hemisphere. (I highly recommend Iain McGilchrist’s RSA animation on the divided brain for a great overview.) However, it is important to note that these types of thought and behavior are by no means exclusive to one hemisphere or the other; the different areas of the brain are in constant communication via signals sent through white matter tracts that traverse the brain, like the corpus callosum connecting the two hemispheres.

Contributing to the right-hand/left-brain theory, the left hemisphere is largely responsible for language ability, which has traditionally been used as another indicator of hemispheric dominance. It was initially thought that this control was reversed in left-handed people, with the right hemisphere in charge of verbal communication; however, it has since been shown that linguistic laterality doesn’t match up that neatly.

In the 1960s a simple test was devised to empirically determine a person’s dominant hemisphere for language. The Wada test involves injecting sodium amobarbital into an artery supplying one side of the brain of an awake patient, temporarily shutting down that hemisphere’s functioning. This allows neurologists to see which abilities are still intact, meaning that they must be controlled by the opposite side. The test is especially important in patients undergoing neurosurgery, as ideally you would operate on the non-dominant hemisphere to reduce possible complications in movement, language and memory. The Wada test revealed that many left-handers are actually also left-hemisphere dominant for language, and that in only a small proportion does language reside in the right hemisphere. Still other left-handers share language abilities across the two hemispheres.

So where does this appendage preference come from? Handedness is thought to be at least partially genetic in origin, and several genes have been identified that are associated with being left-handed. However, there is evidence that it is possible to switch a child’s natural preference early in life. This often happened in cultures where left-handers were perceived as ‘evil’ or ‘twisted’, and attempts were made in schools to reform them, forcing them to act as right-handers. As mentioned above, when motor movements (and their underlying synaptic connections) are practiced, they become stronger and more efficient. Thus, individuals who were originally left-handed may come to favor their right hand, particularly for tasks like writing, because they were forced to develop these pathways in school. These same individuals may still act as left-handers for other motor tasks, simultaneously supporting both the nature and nurture aspects of handedness. Notably, this mixed-handedness is different from ambidexterity, as both hands cannot be used equally for all actions. True ambidexterity is extremely rare and has been largely under-studied to date. However, it has been theorized that in ambidextrous individuals neither hemisphere is dominant, and in some cases this lack of dominance has been associated with mental health or developmental problems in children.

So the next time you meet a psychologist in a bar, instead of challenging them to guess what you’re thinking, ask them the most basic brain-related question you have. It will undoubtedly lead to a much better conversation!

(Originally posted on Mind Read)

Is this a new tool to diagnose ADHD, or is it just another neuro-scam?

When I was in elementary school, there were two kids in my class who always got “special medicine” at lunchtime. I didn’t understand this at the time, as they never looked sick to me, so I couldn’t comprehend why they would need to take a pill. One day I got up the courage (as only an impertinent seven-year-old can) to ask my friend why she needed to take medicine every day, but her answer just confused me even more. She said that without the pill she would get too energetic and be unable to concentrate in class. But this didn’t make sense, as I knew that I often got quite excited and would sometimes talk out of turn, but I certainly didn’t need to take any medicine for this!

Flash forward twelve years, and in college nearly all of my friends were regularly taking Adderall to help them study for exams, whether they were prescribed it or not.

Diagnoses of ADHD have skyrocketed over the last decade, rising 66% in the U.S. since 2000. As with the majority of psychiatric disorders, a diagnosis cannot be determined by a physical exam or empirical test; it is instead made using subjective reports provided by the parents, the teachers and the child himself. The doctor or psychiatrist then matches these descriptions to the clinical symptoms listed in the DSM – the Diagnostic and Statistical Manual of Mental Disorders – compares them to her own observations, and makes a decision accordingly. This means that diagnoses of ADHD can be highly subjective and, unfortunately, potentially easy to fake.

However, a new development is attempting to change this by using a physical test to look for signs of ADHD in an individual’s brain patterns.

The Food and Drug Administration recently approved a device that uses electroencephalography, or EEG, to help diagnose ADHD. EEG measures the brain’s electrical activity, generated by the firing of neurons. Different frequencies of this signal designate different types of brain waves, which can provide insight into the brain’s current activity. Researchers believe that the ratio between two of these rhythms – theta and beta waves – may help better predict the presence of ADHD when combined with standard subjective assessments.

Both of these rhythms are involved in arousal, but in different capacities. Theta waves are most commonly seen during voluntary movements and are associated with an active readiness state. In fact, some studies have shown the presence of theta waves before a movement has even begun, suggesting that they play a role in initiating action. Conversely, beta waves are more associated with alertness and concentration, as well as with the inhibition of movements. They are the most common type of electrical signal present while we are awake.

Children with ADHD have been shown to have a higher ratio of theta to beta waves, which may be implicated in their hyperactive and hypo-attentive state. By measuring the ratio of theta to beta rhythms, researchers hope to provide a more empirical test of abnormal brain function in children suspected of having ADHD. Sensitivity using this tool – the percentage of children with ADHD who show these abnormal brain patterns – has been estimated at 95%, while specificity – the percentage of children without ADHD who don’t show these patterns – is around 90%.
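
For the curious, here is a rough Python sketch of how a theta/beta ratio can be computed from a single channel of resting EEG, using conventional band definitions (about 4–8 Hz for theta and 13–30 Hz for beta). The band edges, recording parameters, and any diagnostic cutoff here are illustrative assumptions, not the approved device’s actual specifications.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, sfreq, fmin, fmax):
    """Mean spectral power of an EEG channel within a frequency band."""
    freqs, psd = welch(signal, fs=sfreq, nperseg=2 * sfreq)
    band = (freqs >= fmin) & (freqs < fmax)
    return psd[band].mean()

def theta_beta_ratio(signal, sfreq):
    # Conventional band limits; exact edges vary between labs
    theta = band_power(signal, sfreq, 4, 8)
    beta = band_power(signal, sfreq, 13, 30)
    return theta / beta

# Synthetic demo: two minutes of fake EEG sampled at 256 Hz
sfreq = 256
eeg = np.random.randn(120 * sfreq)
print(theta_beta_ratio(eeg, sfreq))
# A ratio well above age-matched norms would count as a positive
# finding; the cutoff itself is a clinical calibration that this
# sketch cannot supply.
```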

However, others have disputed this claim, saying that EEG provides no better assessment of ADHD than the subjective symptom reports already in use. Selling high-tech machines to measure brain waves, they argue, is just an easy way to make money off of concerned clinicians and parents without providing any more valuable information.

Additionally, as is always the case in psychiatry, there are some children with a subtype of ADHD who markedly differ from the expected patterns. Some show increased beta waves, while others have increased alpha waves or some different ratio of these three rhythms.

However, just because this method isn’t perfect doesn’t necessarily mean we should disregard the data supporting the use of EEG for ADHD and reject the option just yet. EEG may be a useful tool to assist in diagnosis, provided it is used in combination with the current standard methods – which, it must be acknowledged, are far from perfect themselves. A combination of these two imperfect methods may bring us a little closer to an accurate diagnosis.

(Originally posted on Mind Read)

Can we please not?

Can we please not blame mass killings on people’s brains? Can we not say that Adam Lanza committed the Newtown, Connecticut massacre because he might have been autistic? Can we not say that Tamerlan Tsarnaev, the deceased Boston Marathon bombing suspect, might have committed the crime because he had boxing-related traumatic brain damage? Can we not say that his younger brother, Dzhokhar Tsarnaev, aided in the bombings because he was a teenager and his brain hadn’t fully developed yet, and thus he was easily influenced by his radical older brother?

Just like can we please not place blame on these people because they are Muslim or Christian (or not), can we not claim psychiatric or neurochemical differences as the “reason” why they committed these crimes?

There are millions of people in the world with autism. There are millions of former boxers and football and rugby players who have suffered concussions and do not build bombs. There are billions of us who have successfully passed through adolescence without ever committing an act of terror. So can we please not say that these attributes are the “reasons” why these men have committed these horrendous atrocities?

I fully agree that we need better mental health care, better education, and better outreach to and integration of immigrants. But I also believe we need better gun laws to prevent people from having the capacity to commit some of these crimes in the first place. I understand the need to try and find reason behind these acts, as they are truly devastating, seeming to stem from a place of pure evil. But let’s not forget that similar atrocities, and worse, are currently being committed around the world in war-torn areas. And placing the blame on a generic mental illness or neurological state, rather than on societal shortcomings or personal perversion, does not help. It only breeds distrust and hurts those who do suffer from these illnesses and need help rather than persecution.

So please, can we just not?

(Thanks to Torey Van Oot for first bringing the boxing article to my attention.)

Nothing to fear but asphyxiation?

Think of the scariest movie you’ve ever seen (for me it’s The Ring). How did you feel when the group of teenagers popped in that video, or the girl climbed out of the TV? When the phone rang and the killer was on the other end? Or when the babysitter was home alone and a shadow passed across the screen? Even though you know it’s just a movie, you still experience that knot in your stomach, pounding heart, sweaty palms and building anxiety that comes with a real stressful or frightening encounter.

These visceral, gut reactions are physiological fear responses our brain and body automatically initiate in a perceived threatening situation. They are thought to be subserved by the amygdala, an old and deeply rooted part of the brain that is essential in processing emotion, particularly fear. This is partly through connections the amygdala has to the sympathetic nervous system, which controls our basic ‘fight-or-flight’ reactions to danger, preparing us to either stand and fight or flee as fast as we can.

However, some people don’t experience this sensation of fear. Individuals who have suffered damage to the amygdala, whether through a stroke or head injury or from the rare genetic condition Urbach-Wiethe disease, report an inability to feel this emotion. One famous example is the patient SM, who reported no feelings of fear when faced with snakes, spiders, horror films, or haunted houses. Even after being threatened in a real-life knife attack, SM experienced no sensation of fear. There was, however, one thing that could instill in her these feelings of anxiety and terror: asphyxiation.

Researchers at the University of Iowa have been studying SM over the last decade, trying to find something, anything, that would scare her. After exhausting all the typical psychological stressors to no avail, they decided to try a physical stressor that can elicit the same reactions. In a study published last month in Nature Neuroscience, the researchers had SM and two other people with similar amygdala lesions inhale carbon dioxide for several seconds, depriving them of oxygen and inducing the sensation of suffocation. This experience typically causes panic attacks and fear responses, including extreme distress, a pounding heart, and an immediate desire to escape the situation. All three participants, none of whom had previously experienced fear, had these exact same panicky reactions to the CO2. In fact, the amygdala patients had significantly greater fear responses, both physical and psychological, than healthy individuals with intact amygdalas.

So what’s behind this phenomenon? The researchers believe that these panic reactions are distinct from learned fear responses, such as phobias of snakes or spiders. Instead, there appears to be a unique pathway for panic triggered by internal physiological stressors, one that bypasses the amygdala. In fact, the amygdala may normally inhibit this response, as the control participants experienced less dramatic reactions to the carbon dioxide than the amygdala patients did. Learned fears and perceived outside dangers, by contrast, rely on the amygdala to integrate scary sensory situations, such as seeing someone with a gun, as a threat. Thus, those with amygdala lesions do not learn and incorporate the proper fear associations with these triggers, but they do still have the capacity to experience dramatic panic responses to internal physical stressors.

So the next time you’re watching a scary movie, you could try reminding yourself that it’s not real, or you could try hyperventilating—it may actually reduce your panic (assuming your amygdala is still intact).*

Boo!

*I do not actually recommend this as a fear-coping mechanism.

A predisposition for drug addiction? Shared traits between stimulant dependents and their siblings

An exciting new study published in Science this week attempts to answer the chicken-or-egg question pervasive in drug addiction research: which comes first, drug use or brain abnormalities? Dr. Karen Ersche from the University of Cambridge* approaches this question from a new angle, investigating the biological siblings of dependent drug users. And as is the case with most seemingly dichotomous questions in science, the answer is: both.

Dr. Ersche’s group studied 50 stimulant-dependent individuals, 50 of their healthy, non-dependent biological siblings, and 50 unrelated control volunteers using a battery of cognitive tests, personality measures, and brain imaging techniques. Throughout the assessments, there was a striking pattern of similar responding between the drug users and their siblings, with both groups differing significantly from the control participants. Specifically, drug users and their siblings were both significantly more impaired on the Stop Signal Reaction Time task (SSRT), a test of inhibitory control that measures how quickly an individual can stop an ongoing response when signaled. Impulse control and inhibition are known to be impaired in drug-dependent individuals, and poor performance on the SSRT has previously been associated with an increased risk of drug abuse. However, it has long been debated whether these dysfunctions can be attributed to accumulated years of drug use and its effects on the brain, or are instead a predisposing factor that places an individual at increased risk of drug dependence. In the current study, siblings performed as poorly on the SSRT as the drug-dependent individuals, requiring more time to inhibit their actions. This suggests that poor impulse control is a shared trait present in drug-dependent individuals before the onset of abuse. However, impaired inhibition is clearly not a determining variable, as the dysfunction in the siblings did not lead to subsequent drug abuse or dependency.
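
A side note on the measure itself: because a successfully stopped response never actually occurs, the stop-signal reaction time cannot be read directly from the data and has to be estimated from the race between “go” and “stop” processes. Below is a sketch of one common estimation approach, the integration method from Logan’s race model, with made-up numbers; the paper’s exact analysis may differ.

```python
import numpy as np

def estimate_ssrt(go_rts, mean_ssd, p_respond):
    """Estimate stop-signal reaction time (ms) via the integration method.

    go_rts:    reaction times on trials with no stop signal
    mean_ssd:  mean stop-signal delay across stop trials
    p_respond: proportion of stop trials on which stopping failed
    """
    go_rts = np.sort(np.asarray(go_rts))
    # The race model says responses faster than the p_respond quantile
    # of the go-RT distribution would have escaped inhibition.
    idx = min(int(p_respond * len(go_rts)), len(go_rts) - 1)
    nth_rt = go_rts[idx]
    # SSRT = finishing time of the stop process minus its start time
    return nth_rt - mean_ssd

# Made-up numbers: 500 go trials, 50% failed stops, 250 ms mean delay
rng = np.random.default_rng(0)
go_rts = rng.normal(450, 80, 500)
print(estimate_ssrt(go_rts, mean_ssd=250, p_respond=0.5))
```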

The brains of the stimulant users and their siblings were also structured similarly compared with those of control volunteers, with increased gray matter in limbic and striatal regions such as the amygdala and putamen, areas important in emotion regulation and habit formation. Drug addiction is often seen as a disorder of dysfunctional habits, and the putamen is implicated in the acquisition of these compulsive behaviors, being targeted by an influx of dopamine and commonly a site of subsequent adaptations in the brains of heavy drug users. Additionally, the postcentral gyrus was significantly smaller in both groups compared with healthy volunteers, indicating further premorbid differences.

Finally, the integrity of white matter tracts, the neuronal fibers that travel throughout the brain relaying messages from one region to another, was reduced in both the drug users and their siblings, signifying a decrease in brain connectivity in these groups compared with the control participants. This was particularly evident in the inferior frontal gyrus, a region implicated in impulse control, supporting the findings of impaired self-regulation characteristic of compulsive drug users. Reduced connectivity in this area was also associated with greater impulse-control dysfunction on the SSRT, accounting for 6% of the variability in SSRT scores. Additional damage to white matter tracts and gray matter regions was also seen in the stimulant-dependent group, correlating with years of stimulant abuse and suggesting further damage and dysfunction due to chronic drug use itself.

Taken together, the abnormalities in the limbic and striatal regions, which have projections to the frontal cortex, as well as the decreased frontal cortical volume and impaired connectivity between these key areas, confirm prior research indicating the importance of cortico-limbic-striatal circuitry in drug dependence. These differences in the brains and behaviors of drug users and their siblings could potentially serve as endophenotypes for the development of drug dependence: stable, inherited traits that are seen in clinical disorders and can serve as indicators or predictors of pathology, both in patients and in their biological relatives. As such, these abnormalities in key regions for drug addiction could act as biomarkers for an increased risk of dependence.

The key question, then, is what protective factors might exist in the siblings that prevented them from trying drugs or developing dependence. Sharing 50% of their genetic make-up, as well as the family environment they grew up in, drug user–sibling pairs have highly similar brains and behaviors, so the differences that do exist between the groups are clearly important. Early drug experimentation may exacerbate the structural abnormalities seen in these individuals, increasing the risk of later dependency, or even creating an epigenetic effect, as has been seen in previous studies linking early cigarette smoking to later drug dependence. Alternatively, protective factors in the siblings could include greater education, outside interests or hobbies growing up, or even more exercise and physical activity.

The question of the path to drug dependency is still very much open; however, this study may take us one step closer to finding the answer.

*Disclaimer: I am a member of the Ersche lab at Cambridge, but was not involved in this study.

Mental and physical exercise: Alternatives to dopamine treatment in Parkinson’s disease

Several studies have come out recently touting unconventional methods to treat Parkinson’s disease. Parkinson’s is caused by damage to neurons in the substantia nigra (SN), a region of the basal ganglia that is responsible for creating much of the brain’s dopamine. Dopamine is an essential neurotransmitter implicated in a variety of behavioral and motivational mechanisms, most notably reward and addiction. However, it is also a key component of the motor system. Feedback loops from the cortex to the basal ganglia circulate information about whether to initiate or inhibit a movement, and these circuits are greased by dopamine, which activates the excitatory loop and suppresses the inhibitory one. Without adequate dopamine, the system comes to a stalemate, making the initiation of movement much more difficult and causing the hesitation, trembling, and inertia characteristic of Parkinson’s.

Common treatments for Parkinson’s include flooding the brain with dopamine agonists or the dopamine precursor L-DOPA. This boosts dopamine levels in the brain, causing the remaining healthy SN neurons to produce and fire greater amounts of the neurochemical, making up for the deficit from the impairment of the other cells. However, there are currently no treatments that prevent the progressive cell death in the SN, and in advanced stages it is difficult to compensate for the abundant cell loss. Excess “artificial” dopamine in the brain can also result in the downregulation of other dopamine-producing and receiving cells, the brain adjusting to the new flood of dopamine by reducing its endogenous production and receptor sensitivity in an attempt to return to a dopaminergic homeostasis. Additionally, it is impossible to localize dopamine agonists to the motor regions of the brain, meaning that many Parkinson’s patients treated with dopamine come to display symptoms similar to those seen in impulse control disorders, which are also commonly rooted in a widespread dysregulated dopamine system. These can include the development of compulsive gambling and shopping problems, sexually deviant behavior, and drug addiction.

Given the obvious shortcomings of the current treatment options, the need for alternative therapies for Parkinson’s is widely acknowledged. Two labs taking on this problem have recently published results on treatments that do not involve pharmacological intervention and instead target a patient’s motor efficacy, one increasing the patient’s control and the other withdrawing it.

The first, published in the Journal of Neuroscience, suggests that self-regulation of brain activity facilitated by real-time fMRI feedback can increase brain activation and decrease Parkinson’s symptoms. Focusing on the supplementary motor region, an area of cortex that has direct connections with basal ganglia pathways and that commonly shows diminished activity in Parkinson’s, researchers at Cardiff University had patients mentally activate this region using motor imagery while in the fMRI scanner. Patients in the experimental group received direct feedback on their activation levels during the trials via a thermometer display, whereas those in the control group had no indication of their success at mentally activating the area. The patients who received real-time feedback were able to activate the supplementary motor region to a greater extent than those who did not, successfully upregulating this area as well as other brain regions associated with the motor system. They also significantly improved on a motor function test and a subjective assessment of Parkinson’s symptoms, whereas control participants did not. The researchers speculate that this increase in activity and subsequent improvement in symptoms is due to greater excitation of compensatory motor pathways, strengthening these connections and facilitating the activity of the under-utilized basal ganglia circuitry.
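
In outline, the feedback loop itself is mechanically simple, even if the scanner-side engineering is not. The Python sketch below is purely schematic: the acquisition and display functions are hypothetical stubs standing in for the real-time fMRI plumbing a lab would actually use.

```python
import random

# Hypothetical stubs standing in for real scanner I/O and display code
def acquire_volume():
    return [random.gauss(1000, 10) for _ in range(100)]  # fake ROI voxels

def roi_mean(volume):
    return sum(volume) / len(volume)

def show_thermometer(level):
    print(f"thermometer: {level:+.2f}% of baseline")

def neurofeedback_run(n_volumes, baseline_mean):
    """One feedback run: per-volume ROI signal mapped to a display."""
    for _ in range(n_volumes):
        volume = acquire_volume()   # one fMRI volume per TR
        signal = roi_mean(volume)   # mean BOLD in the target motor region
        # Percent signal change relative to a resting baseline; the
        # feedback group sees this value rise and fall, the control
        # group imagines movement without it.
        pct_change = 100 * (signal - baseline_mean) / baseline_mean
        show_thermometer(pct_change)

neurofeedback_run(n_volumes=5, baseline_mean=1000.0)
```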

The second method, published in Exercise and Sport Sciences Reviews, takes the opposite approach, deliberately taking control out of the hands (or legs) of the participants. Researchers at the Cleveland Clinic in Ohio are investigating the idea that forced exercise, with exertion levels out of the patient’s control, can be more effective in treating Parkinson’s symptoms than voluntary physical activity. Led by Dr. Jay Alberts, the researchers had patients ride on the back of tandem bicycles with the energy output set 50% higher than the patients’ comfortable, self-selected effort levels. After eight weeks at this greater energy expenditure, patients showed a significant decrease in tremors and other motor symptoms, which lasted for approximately one month after treatment stopped. Additionally, the benefits were not just a result of localized increases in muscle tone or coordination, as has been the case in previous studies of exercise in Parkinson’s. Instead, participants showed improvements in movement throughout the body, as well as increased neural activity in the basal ganglia and cortex during MRI scans. The researchers are as yet unsure of the basis for these improvements, though the emotional and cognitive benefits of exercise are widely known. In an interview with the New York Times, Dr. Alberts speculated that the effects could stem from the release of stress hormones during exercise, which can activate neurochemical systems and are more abundant during forced or very high-intensity activity than at comfortable, voluntary levels of exertion.

While these studies still address only the symptoms rather than the root of the problem, they do provide new evidence for treatment options beyond the standard fare. Importantly, neither of these methods comes with the adverse side effects of dopaminergic treatments, which can severely undermine the efficacy and quality-of-life improvements for patients with Parkinson’s disease. Further research is of course always needed, and these methods would certainly need to be used in tandem with current drug therapies, but these studies present an interesting alternative to complete reliance on pharmacological medication to treat the symptoms of neurological disorders.

Did I do that? Reality monitoring in the brain

Most of us have no problem telling the real from the imagined. Or so we think.

Reality monitoring, the ability to distinguish internal thoughts and imaginings from external experiences and memories, typically happens seamlessly for most individuals. However, there are times when we cannot recall whether someone told us about that interesting article or we read it ourselves, or whether we actually locked the door before leaving the house. Did we really do or hear these things, or did we only imagine them? This is a common problem in patients with schizophrenia, who at times cannot distinguish between what they think they remember or believe to be true and what actually occurred.

A new study on reality monitoring published last week in the Journal of Neuroscience reveals that many of us are not as good at making this distinction as we might think. Additionally, the ability to discern between perceived and imagined events may be rooted in one very specific region of the brain, which nearly 30% of the population is missing. Led by Marie Buda and Dr. Jon Simons at the University of Cambridge*, researchers administered a very particular type of memory test to healthy participants who had been pre-selected based on the prominence of the paracingulate sulcus (PCS) in their brains. Running rostral to caudal (front to back) and located in the anteromedial prefrontal cortex, this region is involved in higher-level cognitive functioning and is one of the last parts of the brain to mature. Consequently, it can be relatively underdeveloped or even seemingly absent in many people. This is particularly the case in individuals with schizophrenia, as many as 44% of whom lack this particular region.

Participants for the current study were chosen from a database of individuals who had previously undergone an MRI scan and clearly showed the presence or absence of the PCS in one or both hemispheres. The memory task involved a list of common word pairs such as “yin and yang” or “bacon and eggs”. The words were either presented together (perceive condition), or only one word was presented and the participant had to fill in the complementary word (imagine condition). The second factor in the experiment was the source of this information, i.e. whether the subject or the experimenter was the one to read out or verbally complete the pair. After the task, the subject was asked to report whether each pair had been perceived or imagined, and whether this information came from themselves or the experimenter. They were also asked to rate their confidence in both of these responses.
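
To make the two-by-two design concrete, here is a toy enumeration of the trial types in Python; the stimuli and counterbalancing are invented for illustration and are not taken from the study materials.

```python
import itertools
import random

# Hypothetical stimuli; the study used its own set of word pairs
word_pairs = [("yin", "yang"), ("bacon", "eggs"), ("salt", "pepper")]

# Crossing the two factors: how the pair was completed, and by whom
conditions = list(itertools.product(
    ["perceive", "imagine"],      # pair shown whole vs. completed mentally
    ["self", "experimenter"],     # who read or completed the pair aloud
))

trials = [(pair, cond) for pair in word_pairs for cond in conditions]
random.shuffle(trials)

# At test, participants make two judgments per pair (perceived or
# imagined? self or experimenter?) plus a confidence rating for each.
for pair, (completion, source) in trials[:3]:
    print(pair, completion, source)
```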

Participants with a complete absence of the PCS in both hemispheres performed significantly worse on the reality monitoring task than individuals who exhibited a definite presence of the sulcus. This difference was driven by their source attribution memory (themselves vs. the experimenter); performance on the perceive/imagine judgment did not differ between the groups. Interestingly, the two groups also did not differ in their confidence in their responses. Thus, even though the PCS-absent group performed significantly worse at attributing the source of the information, they were still just as confident in their answers as individuals who responded correctly, indicating a lack of introspective awareness of their memory abilities in the absent group.

It should be noted that there was also a correlation between overall gray matter volume in the prefrontal and motor cortices and scores on the reality monitoring task. This is important as it may indicate that there are other regions involved in this process outside of the PCS, and the authors caution that this enhanced ability may stem from an increase in gray matter and connectivity in the medial prefrontal cortex, rather than from the PCS itself.

These findings could have useful applications in clinical psychiatry. As stated above, an impairment in reality monitoring is often associated with schizophrenia, and the absence of the PCS could serve as a potential biomarker for this disorder. Additionally, although not commonly discussed in terms of reality monitoring, another psychiatric diagnosis that could potentially benefit from this type of research is obsessive compulsive disorder (OCD). OCD often consists of obsessions and the urge to compulsively check things, such as whether one remembered to turn off the stove. This ruminating and checking behavior could be indicative of a breakdown in reality monitoring, where patients cannot determine whether a target action actually occurred. While this problem does not affect all OCD patients, reality monitoring disability could be a potential area to investigate in those patients for whom checking is a significant problem.

*Disclaimer: Marie Buda and Jon Simons are fellow members of the Department of Experimental Psychology at the University of Cambridge with me.