The neuronal defense

There’s been a lot of discussion recently about structural and hormonal changes in the brain being to blame for misbehavior, whether it’s a philandering husband (or senator) or a psychopath. To some extent these are valid arguments; higher testosterone levels have been linked to sensation seeking and greater sexual desire, and abnormalities in the limbic system–particularly the amygdala, which processes fear and emotion, and the frontal cortex, which is in charge of inhibition and rational thought–are often seen in persons who commit crimes. However, to use these structural phenomena as excuses or arguments, as in, “My brain made me do it,” is akin to proclaiming, “Yes, I did this.” Obviously, there are rare and extenuating circumstances when an individual’s actions are truly no longer under their own control, such as in the case of a tumor in the frontal lobe changing the temperament and personality of an individual. However, for the vast majority of individuals, we are our brains, and saying you are “pre-wired” to cheat or fight or steal is not an excuse. If anything, it is a greater indication of the potential for recidivism and an added incentive for either punishment or preventative measures.

Excess testosterone is not a pathology like schizophrenia or mental retardation, which can be used as a defense in court for criminal actions. Additionally, if you blame chemicals like testosterone or a lack of oxytocin for misbehavior, then what is to stop us from exonerating people who commit crimes while on a drug like crack cocaine or PCP? And since presumably not all men with elevated testosterone cheat, and not all individuals with abnormal amygdalas commit crimes or become sociopaths, it is difficult to argue that your brain and neurotransmitters made you do something when the same conditions do not compel others down a similar path.

David Eagleman’s article in The Atlantic is a particularly insightful and eloquent investigation into both sides of this issue that I highly recommend. Instead of focusing on the question of guilt and the implications that recent advances in neuroscience and neuroimaging have for culpability, Eagleman wisely shifts his focus to sentencing and the constructive ways we can incorporate our new, crude knowledge of the brain into the justice system. For example, he suggests concentrating on the potential for recidivism and reform instead of retribution when determining sentences. Drug courts have already started shifting towards this perspective, a move supported by the recent initiative from the Global Commission on Drug Policy marking the 40th anniversary of the War on Drugs. Not only is it important to provide drug users with treatment instead of punishment, but our economy simply cannot accommodate the deluge of drug-related offenses into the penal system, most strikingly demonstrated by the decision in California this month to release 3,000 prisoners before their sentences were up due to a lack of resources.

Juvenile criminal courts have also dealt with the issue of neuroanatomical defenses for quite some time, as it is widely established that the frontal cortex is the last area of the brain to finish developing, not reaching full maturation until the mid-20s. Countless defense attorneys have used this argument to insist that their juvenile clients were not rational individuals at the time of their crimes and, therefore, should not be held accountable for their impulsive and illegal actions. While this is certainly a valid point–and one that is typically taken into consideration when determining sentences–it is important to bear in mind that not all 15-year-olds commit crimes. Therefore, this universal neural stage of adolescence that we all pass through is not necessarily a credible criminal defense; otherwise, all teenagers would be running rampant and wreaking even more havoc than they already do. Also, there are innumerable studies citing the increased risk of offense in impoverished or violent areas, yet this is not used as an excuse for crimes committed in these communities. This evidence is absolutely a reason to reform the social system that creates these pockets of poverty and risk, but it does not compel juries to acquit defendants of their crimes simply because of the neighborhood they were raised in.

At some point, people must take responsibility for their actions and face up to the consequences, not blame an integral part of themselves for going rogue and acting out of character. When you make a decision, it is your brain acting and your neurons firing; you cannot excuse an action by claiming that you could not control these impulses. There is no outside force urging you to act or not; it is your own will being exercised and carried out. Eagleman’s idea of a spectrum of culpability is a sensible one that I support, and I fully agree that in the vast majority of offenses, reform and rehabilitation should be the goal rather than retribution. However, this still leaves the topic rife with ambiguity, for where do you draw the line? At what point will we stand up and take responsibility for our own actions?

(Thanks to Tristan Smith for The Atlantic article.)


Pathologizing the norm: Follow-up

For those of you who are interested in this debate, there’s a great new two-part article in the New York Review of Books by Marcia Angell questioning “The Epidemic of Mental Illness.” The articles summarize three new books that examine the prescription frenzy we are in the midst of and how this reliance on psychoactive medication came to be. She addresses the problem of treating psychiatric disorders as chemical imbalances and the dubious efficacy these drugs have in actually improving symptoms.

I highly recommend this read, as well as the second part in the series on “The Illusions of Psychiatry,” for anyone concerned about our mental health system. One of the most resounding points she makes in the second piece concerns the perpetual expansion of the diagnoses listed in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM). With every publication of the DSM, there are more and more behaviors we have pathologized and “disorders” we have created, and with the upcoming publication of the DSM-V, there will certainly be a slew of new problems that we can put a name to and claim for ourselves. Angell succinctly describes this problem, stating, “Unlike the conditions treated in most other branches of medicine, there are no objective signs or tests for mental illness—no lab data or MRI findings—and the boundaries between normal and abnormal are often unclear. That makes it possible to expand diagnostic boundaries or even create new diagnoses, in ways that would be impossible, say, in a field like cardiology.”

Finally, she takes to task the drug companies, which are more involved in psychiatric treatment than in any other medical field. This influence extends not only to clinicians and psychiatrists with private practices, but also to research institutions, hospitals, universities, policy makers, patient advocacy groups, educational organizations, and the APA itself.

Angell’s writing takes a good, hard look at the system of mental health, and while at times she makes some uncomfortable points, they are important issues that need to be addressed.

(Thanks to Emily Barnet for the Angell articles.)

This is your brain on music

A great bit of research conducted by neuroscientists at Columbia University for the PBS program NOVA looks at the emotional implications of music and what happens in your brain as you listen to your favorite song. Using the famous neurologist and writer Oliver Sacks as a test subject, researchers played Dr. Sacks two pieces of music while he lay in an fMRI scanner. The first was by his lifelong favorite composer, Bach, and the other was by Beethoven, an equally formidable composer, but one who didn’t resonate with Sacks quite as much.

While he listened to both pieces, areas in the auditory cortex and other regions typically associated with music lit up, as expected. However, during the Bach piece, the amygdala was also activated, indicating the emotional connection Sacks felt to the music. In fact, even when Sacks could not identify which composer he was listening to, his brain still demonstrated a greater emotional response to the Bach pieces.

Music is a universal human characteristic. We are born with the innate ability to hear and appreciate it, and all cultures create and celebrate their own styles. Our speech reflects the cadence and intonations of our society’s popular music, as do the cries of even very young infants. According to a new book by Elena Mannes, The Power of Music, music activates more areas of our brain than any other sensory experience, ranging from the auditory and motor cortices to the executive control centers of the frontal cortex to older subcortical structures and the cerebellum. Mannes cites examples of music being used in speech therapy, such as in individuals who have aphasia after suffering a stroke in the left temporal lobe (the language center of the brain) and in chronic stutterers (see The King’s Speech for brilliant performances by Colin Firth and Geoffrey Rush).

Psychologists and neurologists have also begun using music as a form of therapy for a number of neurological disorders, and they are seeing encouraging results. For example, in Parkinson’s disease, music has been shown to help patients with both gross mobility and fine motor control. This phenomenon is wonderfully demonstrated in this video, in which a man suffering from Parkinson’s struggles to walk fluidly, jolting and hesitating across his living room. However, once the music comes on, his gait and demeanor change entirely, and he triumphantly begins strutting and dancing around the room.

Dr. Sacks writes in his book Awakenings about patients suffering from encephalitis lethargica, or sleepy sickness, a rare disease of unknown origin that is believed to involve cells in the basal ganglia. This subcortical structure is involved in fine motor movements and sits at the center of the brain’s dopamine system. Patients with encephalitis lethargica demonstrate symptoms similar to those of Parkinson’s, as well as profound muscle weakness, catatonia, and lethargy. In the 1960s, Sacks discovered that encephalitis patients responded to treatment with L-DOPA, a precursor to dopamine that is also used in Parkinson’s patients to boost dopamine production and restore motor ability in the basal ganglia. However, Sacks also made the profound discovery that these same patients responded to music as well, and he was able to miraculously rouse them out of their catatonic state for brief periods to dance and sing.

The similarities between Parkinson’s and encephalitis lethargica, both in their etiology and pathology and in their remarkable responsiveness to music, suggest that the dopamine system is intrinsically involved in our experience and appreciation of music. Indeed, listening to music releases endogenous dopamine in the brain via the cortico-striatal reward circuitry. However, many different stimuli, including food, sex, drugs, and other hedonic pleasures, share this ability without having the therapeutic effects that music has in these patient populations. This suggests that there is something deeper and more inherent in the association between music and dopamine release that goes beyond pleasure, potentially implicating the motor system of the basal ganglia rather than the nearby nucleus accumbens, which is closely tied to reward. This could also explain why dancing is such a natural reaction to hearing a beat or melody, as the movement could stem from the natural release of dopamine in the basal ganglia.

The powerful effect of music on the brain, with its broad-reaching activations and emotional and physical implications, suggests that there is something very special about our relationship to it, and it should be pursued as a potential source of therapy in other dopamine-deficient disorders, such as drug addiction or depression.

When is euthanasia an ethical option?

Dr. Jack Kevorkian, the infamous suicide doctor from the 1990s, passed away last week at the age of 83. While his methods and criteria were at times questionable–seeking out publicity and media attention for his credo and often working out of his old beat-up Volkswagen van–he crucially brought the topic of euthanasia to national attention. The ethical debate over this procedure will continue to grow in importance as the health of the baby boomer generation begins to deteriorate, and, unfortunately, it must be discussed as a viable option in the course of treatment.

Largely instigated by Dr. Kevorkian’s efforts, the Oregon Death with Dignity Act was passed in 1997, long before health care reform and the dispute over so-called “death panels.” Thus far, the Death with Dignity Act has helped 525 individuals end their own lives, and similar laws have been passed and upheld in Washington state and Montana. The laws set stringent eligibility criteria, and in Oregon these include an assessment of whether the patient suffers from a mental illness. On the surface, this seems like a necessary, sensible, and humane criterion. Upon closer inspection, however, it raises problems of feasibility, for who among the terminally ill who wish to end their lives is not depressed? The Beck Depression Inventory (BDI) is a widely used and respected questionnaire for assessing depressive symptoms. While it is by no means as comprehensive as the DSM-IV diagnostics, it does provide a relatively sufficient snapshot of an individual’s current mood and state of mind. Going by its questions, though, it seems doubtful that anyone in a position to take advantage of Oregon’s Act would qualify. The BDI asks about recent weight loss, insomnia, sexual interest, thoughts of suicide, and general mood and interest in life. Surely someone who is terminally ill and considering ending their life would not be as interested in sex, food, and the day-to-day goings-on around them. Assisted suicide is not merely another option for those who are contemplating it; it is a last resort.

Along these same lines, another contentious patient group to consider in this debate is those diagnosed with Alzheimer’s disease or dementia. Anyone who has a family member suffering from these disorders knows how debilitating, humiliating, and dehumanizing they are. It is difficult to imagine that individuals in the late stages of Alzheimer’s take much satisfaction or joy from their lives, and, as represented both anecdotally and artistically, there are numerous cases of patients ending their lives while they still maintain some semblance of control. However, these patients are also widely deemed ineligible to provide informed consent for medical procedures and are thereby explicitly excluded from the above laws. This creates another problem, in which the individuals who may be most likely to elect for this assistance are not eligible to obtain it.

I realize that this is an incredibly sensitive and controversial topic, but it does need to be discussed as the strain on our medical system grows and the health of our parents and grandparents deteriorates. No one wants to think that they will need to consider this decision for themselves or their family members, yet this issue must be addressed as the demand for health care increases and the supply dwindles.

Apart from the logistical question of treatment availability, the much larger issue at stake is the humaneness and ethics of this approach. Every patient suffering from a debilitating terminal illness should have the right to determine their own course of treatment, including end-of-care plans. Do not resuscitate (DNR) orders are commonplace in hospitals and hospice care, yet the active version of these orders is much more difficult to carry out. And when the patient no longer has full mental or emotional capacity this decision becomes all the more tenuous and ethically and emotionally demanding. Yet watching a loved one’s mind and body deteriorate is torturous for all parties involved. Explicit end-of-life plans should be detailed by every individual as they age and discussed with family members in the case of an emergency.

Currently, Belgium, Colombia, Luxembourg, The Netherlands, and Switzerland all allow physician-assisted suicide in some form, and there is a growing underground tourism industry to these countries for this specific reason. Perhaps as the demand for this type of treatment increases, policies adequately addressing it will follow.

(Thanks to Steve Smith for the idea for this post.)

Pathologizing the norm

At the beginning of any introductory psychology course, students are warned that the basic education they are about to receive does not make them experts in the field. They are cautioned against diagnosing friends and family members with their scant knowledge, and they are reminded that there are innumerable nuances in both personality and personality disorders that they are far from privy to. A stirring op-ed piece in the New York Times recently highlighted some of the perils stemming from ordinary citizens diagnosing themselves and their loved ones with Alzheimer’s disease or dementia. However, more and more it seems that clinicians and researchers in the fields of psychology and psychiatry are at risk of making this same mistake by pathologizing natural neuropsychological slips and common cognitive errors.

Neuropsychological assessments involve a series of challenging–and at times painstaking–tests of memory, decision-making, and cognitive flexibility, among other executive functions. Standardized ranges are provided for these scores from the wider population, similar to an IQ test. These assessments are particularly useful in neurological patient populations, such as victims of a stroke or a brain tumor, and in the elderly to assess cognitive decline, just as the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders) and MMPI (Minnesota Multiphasic Personality Inventory) are helpful in a therapist or clinician’s office. However, these tests, as well as “significant” real-life examples, are now being used as evidence of disorder in normal individuals.

Nowadays, misplacing your car keys can be seen as a precursor to dementia, and blanking on an old acquaintance’s name is indicative of Alzheimer’s. Likewise, niche expertise is taken as an example of savantism, and social awkwardness as a sign of long-undiagnosed Asperger’s syndrome, which is just a short step away from autism on the spectrum.

But what we have to remember–and what is getting lost in this dichotomous system of diagnoses–is that all of these disorders or impairments lie on a spectrum. And the ultimate litmus test for a disorder is not how poor one’s verbal recall is, but how much distress this impairment causes. The world of psychiatric and neuropsychological diagnoses is far from clear-cut, and these classifications must be based on more than just behavior. The perception and attitude of the patient must be taken into account, including whether the individual even considers themselves to be a patient in the first place.

Similarly, over the past twenty years, the diagnosis of ADD/ADHD (attention deficit disorder / attention deficit hyperactivity disorder) has risen dramatically, as has the subsequent backlash against over-diagnosing and over-medicating society’s children. Before running to the doctor’s office or the prescription pad, it is important to remember that kids are squirmy, and no one, college students and professors alike, can maintain disciplined attention through a tedious lecture.

Everyone experiences memory loss as they age, just as we all feel sadness over the course of our natural cycle of emotions. Unhappiness is a universal human feeling that everyone must go through from time to time, and it is not indicative of the pervasive, demoralizing moroseness of true depression. Emotion, attention, and memory are all fluctuating human capacities and must be remembered as just that: natural and transient. Our culture is so eager for a quick fix, to get rid of any feelings of discomfort and receive instant relief. But sometimes it is important to experience these sentiments, to sit and work through our problems and wrestle with our shortcomings. This is in no way meant to minimize the tribulations that accompany these very real disorders, but to serve as a reminder that all of us are flawed, mentally, physically, and emotionally, and if we pathologize these feelings, these struggles, then we may miss out on the robustness of life.

Taking the mind out of body image disorders

Two new papers have come out recently raising questions about eating-related disorders and their classification as psychological illnesses.

The first, published in Psychological Medicine, suggests that body dysmorphic disorder (BDD)–the tendency to have a warped sense of body image and to see problems in your body that are not there–is more indicative of a problem in the visual system than an issue with emotional self-image. BDD is most commonly described as a co-morbid disorder with anorexia, in which patients see themselves as much larger than they actually are, or as symptomatic of individuals who become addicted to plastic surgery.

Researchers at UCLA tested 14 participants diagnosed with BDD and 14 healthy controls in an fMRI study that involved viewing pictures of houses. The stimuli were manipulated either to present a generalized, holistic image of the house or to draw attention to more minute details, such as roof shingles. Participants with BDD demonstrated decreased activity in the prefrontal cortex when viewing the high-detail images, and they had significantly less activation in the occipital lobe, or visual cortex, during the low-detail pictures. The scientists conclude that patients with BDD allocate greater attention to detail and potentially have difficulty suppressing these types of thoughts. In addition, they may be unable to receive and process visual information from more holistic inputs, potentially indicating a dissociation between global and local visual processing.

The second paper, a recently published and already highly contested review article in the June issue of Molecular Psychiatry, asserts that anorexia is a metabolic disorder more akin to diabetes than extreme dieting. Lead author Donard Dwyer states that anorexia is a result of a “defective regulation of the starvation response, which leads to ambivalence towards food.” He argues that the levels of hormones in the body that regulate hunger and satiety are altered after prolonged starvation, resulting in a dysregulation of the feeding system and furthering dysfunctional eating behaviors.

In healthy individuals, a drop in blood glucose triggers hormonal hunger signals and food-seeking behaviors. However, Dwyer posits that in anorexic patients these signals aren’t received, and the typical resulting urge to eat isn’t initiated. Early attempts at extreme dieting may spark this problem in those who are predisposed to metabolic dysfunction, creating a perpetuating cycle of self-induced starvation. In this way, anorexia is seen as more similar to type 2 diabetes, where a “western diet” high in fat and sugar can result in the insulin system shutting down in those with a genetic risk for the disease.

While his arguments about the hormonal changes that take place as a result of anorexia are plausible, it is difficult to swallow that the emotional and cognitive distress that accompany this disorder are the result of a malfunctioning metabolism. Starvation can result in the decay and death of cells in the brain and body, which Dwyer puts forth as the root of the co-morbid emotional and psychological symptoms seen in anorexic patients. However, the personality traits of perfectionism and anxiety so commonly seen in eating disorder patients typically exist before the eating symptoms begin. Additionally, when patients do recover, they frequently transition to a different type of eating disorder, like bulimia, or they maintain an unhealthy obsession with food for the rest of their lives.

Dwyer’s position is partially a response to the question of why eating disorders are so pervasive and difficult to treat. Anorexia is notoriously persistent and has the highest mortality rate of any psychiatric illness. Many patients suffer for decades, constantly relapsing. A recent health article in the New York Times aptly compared anorexia to addiction, in which patients never fully recover but instead remain in a constant state of remission. Attributing the disorder to a neurological or metabolic dysfunction rather than a mental health issue helps to explain why anorexia is so difficult to treat. However, just because there is a physical root to the problem does not make it any easier to cure. There are known neurological and anatomical abnormalities in schizophrenia, bipolar disorder, and depression, but that has not brought us any closer to solving those diseases. All we can do is try to treat the symptoms in the most efficient and effective way possible, whether through chemical, cognitive, or behavioral therapy, and hope for the best.