Thursday, August 27, 2009
Well worth a read.
Wednesday, August 26, 2009
Participants were shown images designed to affect their mood in a good, neutral, or bad way. Then they were shown images, each with a face in the middle and surrounded [...]
The effect of language on the categorical perception of color is stronger for stimuli in the right visual field (RVF) than in the left visual field, but the neural correlates of the behavioral RVF advantage are unknown. Here we present brain activation maps revealing how language is differentially engaged in the discrimination of colored stimuli presented in either visual hemifield. In a rapid, event-related functional MRI study, we measured subjects' brain activity while they performed a visual search task. Compared with colors from the same lexical category, discrimination of colors from different linguistic categories provoked stronger and faster responses in the left hemisphere language regions, particularly when the colors were presented in the RVF. In addition, activation of visual areas 2/3, responsible for color perception, was much stronger for RVF stimuli from different linguistic categories than for stimuli from the same linguistic category. Notably, the enhanced activity of visual areas 2/3 coincided with the enhanced activity of the left posterior temporoparietal language region, suggesting that this language region may serve as a top-down control source that modulates the activation of the visual cortex. These findings shed light on the brain mechanisms that underlie the hemifield-dependent effect of language on visual perception.
The peril of positive thinking - why positive messages hurt people with low self-esteem [Not Exactly Rocket Science]
When the going gets tough, thousands of people try to boost their failing self-esteem by repeating positive statements to themselves. Encouraged by magazine columnists, self-help books and talk-show hosts, people prepare for challenges by chanting positive mantras like 'I am a strong, powerful person,' and, 'Nothing can stop me from achieving my dreams.' This approach has been championed at least as far back as Norman Vincent Peale's infamous book The Power of Positive Thinking, published in 1952.
But a new study suggests that despite its popularity, this particular brand of self-help may backfire badly. Ironically, it seems to be people with low self-esteem, who are most likely to rely on such statements, who are most likely to feel worse because of them. Joanne Wood from the University of Waterloo found that people with low self-esteem who repeated 'I'm a lovable person' to themselves felt worse afterwards than those who didn't.
The effect may be counter-intuitive, but the theory behind it is very straightforward. Everyone has a range of ideas they are prepared to accept. Messages that lie within this boundary are more persuasive than those that fall outside it - the latter meet the greatest resistance and can even lead people to hold onto their original position more strongly.
If a person with low self-esteem says something positive about themselves that is well beyond what they'll actually believe, their immediate reaction is to dismiss the claim and retreat even further into their own self-loathing convictions. The positive statements could even act as reminders of failure, highlighting whatever gulf someone sees between reality and the standard they set for themselves. In short, someone could repeat "I'm a lovable person" but really be thinking "I'm actually not" or "I'm not as lovable as I should be." Statements that contradict a person's self-image, no matter how rallying in intention, are likely to boomerang.
You people. You people and your REQUESTS. Requests to do things like blog more about opponent-process theory. Well. Sci hears you. She obeys. At least this time. And for all you drug addiction experts out there asking me to read Koob, I can assure you that I have read a LOT of Koob in my time. For those of you not necessarily familiar with the drug abuse lit, George Koob is considered one of the greatest minds in current drug abuse research, and has done a lot to conform the motivationally-focused opponent-process theory to the model of drug addiction that exists today. Guy even has a Wikipedia entry! That's how you know you've hit the big time.
And so, Sci continues her discussion of opponent-process theory in this second installment, with many thanks to Koob and his co-author, Le Moal.
You'll need it.
To clarify myself at the very start, I do not believe in a purely reactive nature of organisms; I believe that apart from reacting to stimuli/the world, they also act on their own, and are thus agents. To elaborate, I believe that neuronal groups and circuits may fire on their own and thus lead to behavior/action. I do not claim that this firing is under voluntary/volitional control - it may be random - the important point to note is that there is spontaneous motion.
- Sensory system: To start with, I propose that the first function/process the brain needs to develop is to sense its surroundings. This is to avoid predators/harm in general. This sensory function of the brain/sense organs may be unconscious and need not become conscious - as long as an animal can sense danger, even though it may not be aware of the danger, it can take appropriate action - a simple 'action' being changing its color to merge with the background.
- Motor system: The second function/process that the brain needs to develop is a system that enables motion/movement. This is primarily to explore the environment for food/nutrients. Prey are not going to walk into your mouth; you have to move around and locate them. Again, this movement need not be volitional/conscious - as long as the animal moves randomly and sporadically to explore new environments, it can 'see' new things and eat a few. Again, this 'seeing' may be as simple as sensing the chemical gradient in a new environment.
- Learning system: The third function/process that the brain needs to develop is a system that enables learning. It is not enough to sense the environmental here-and-now. One needs to learn the contingencies in the world and remember them, both in space and time. I am inclined to believe that this is primarily Pavlovian conditioning and associative learning, though I don't rule out operant learning. Again, this learning need not be conscious - one need not explicitly refer to a memory to utilize it - unconscious learning and memory of events can suffice and can drive interactions. I also believe that the need for this function is primarily driven by the fact that one interacts with similar environments/conspecifics/predators/prey, and it helps to remember which environmental conditions/operant actions lead to what outcomes. This learning could be as simple as: stimulus A predicts stimulus B, and/or action C predicts reward D.
- Affective/action-tendencies system: The fourth function I propose the brain needs to develop is a system to control its motor system/behavior by making it more in sync with the organism's internal state. This, I propose, is done by a group of neurons that monitors the activity of other neurons/visceral organs, thus becoming aware (in a non-conscious sense) of the global state of the organism and of the probability that a particular neuronal group will fire in the future; by their outputs, these monitoring neurons can enable one group to fire while inhibiting other groups. To clarify by way of example: some neuronal groups may be responsible for movement. Another neuronal group may receive inputs from these, as well as input from the gut indicating that no movement has happened for a while and that the organism has also not eaten for a while and is thus in a 'hungry' state. This may prompt these neurons to send excitatory outputs to the movement-related neurons, biasing them towards firing and increasing the probability that motion will take place; by indulging in exploratory behavior, the organism may then be able to satisfy its hunger. Of course, these neurons will also inhibit other neuronal groups from firing, and will themselves stop firing when appropriate motion takes place or a prey is eaten. Again, none of this has to be conscious - the state of the organism (like hunger) can be discerned unconsciously, and the action-tendencies biasing foraging behavior can be activated unconsciously - as long as the organism prefers certain behaviors over others depending on its internal state, everything works perfectly. I propose that (unconscious) affective (emotional) states and systems emerged to fulfill exactly this need: being able to differentially activate different action-tendencies suited to the needs of the organism.
I also stick my neck out and claim that the activation of a particular emotion/affective system biases our sensing as well. If the organism is hungry, food tastes better (is unconsciously more vivid) and vice versa. Thus affects are not only action-tendencies but also, to an extent, sensing-tendencies.
- Decisional/evaluative system: The last function (for now - remember, I adhere to eight-stage theories, and we have just seen five brain processes in increasing hierarchy) that the brain needs is a system to decide/evaluate. Learning lets us predict our world as well as the consequences of our actions. Affective systems provide us some control over our behavior and over our environment, but they are automatically activated by the state we are in. Something needs to bring these together, so that the competition between actions triggered by the state we are in (affective action-tendencies) and actions that may be beneficial given the learning associated with the current stimuli/state of the world is resolved satisfactorily. One has to balance the action-reaction ratio and the subjective versus objective interpretation/sensation of the environment. The decisional/evaluative system, I propose, does this by associating values with different external event outcomes and different internal state outcomes, and by resolving the trade-off between the two. This again need not be conscious - given a stimulus predicting a predator in the vicinity, and an internal state of hunger, the organism may have attached more value to 'avoid being eaten' than to 'finding prey', and thus may not move, but camouflage. On the other hand, if the organism's value system is such that it prefers a hero's death on the battlefield to starvation, it may move (in search of food). Again, this could exist in the simplest of unicellular organisms.
And of course one can also conceive the above in pure reductionist form as a chain below:
sense --> recognize & learn --> evaluate options and decide --> emote and activate action-tendencies --> execute and move.
and then one can also say that movement leads to new sensation, so the above is not a chain but part of a cycle; all that is valid, but I would sincerely request my readers to consider the possibility of spontaneous and self-driven behavior as separate from reactive motor behavior.
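The five-process chain can be sketched as a toy agent loop. Everything here - the state names, the thresholds, the 5% spontaneity rate - is a hypothetical illustration of my own; the only structural claims taken from the text are the ordering of the five processes and the possibility of spontaneous (non-reactive) action.

```python
import random

def agent_step(stimulus, memory, hunger, rng):
    """One pass through the sense -> learn -> decide -> emote -> act chain.

    All names and thresholds are illustrative, not a model from the text.
    """
    # 1. Sense: register the stimulus (may remain 'unconscious').
    sensed = stimulus

    # 2. Learn: record a simple contingency (stimulus A was encountered again).
    memory[sensed] = memory.get(sensed, 0) + 1

    # 3. Evaluate/decide: weigh external threat against internal state.
    threat = sensed == "predator_cue"
    if threat and hunger < 0.9:
        decision = "camouflage"   # value 'avoid being eaten' over 'find prey'
    else:
        decision = "forage"

    # 4. Emote: hunger biases the action-tendency toward movement.
    move_bias = hunger if decision == "forage" else 0.0

    # 5. Act: movement may also arise spontaneously, not only reactively.
    spontaneous = rng.random() < 0.05
    moved = spontaneous or rng.random() < move_bias
    return decision, moved

rng = random.Random(0)
memory = {}
decision, moved = agent_step("food_cue", memory, hunger=0.8, rng=rng)
print(decision, memory)   # forage {'food_cue': 1}
```

The `spontaneous` term is the point of the post: even with no stimulus-driven bias at all, the agent sometimes moves anyway, which is what distinguishes an agent from a purely reactive system.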
With free will it's different: now we can TEST free will using brain-scanning/imaging technologies. This has brought the philosophical idea of some mystical form of consciousness into sharp relief. We may not be able to define consciousness, but one thing that any definition of consciousness absolutely MUST have is some sense of free will. Without that, what's the point? So if we can show scientifically that free will doesn't exist, QED - consciousness also doesn't exist.
In case I haven't made it clear, I am a firm believer in free will and in the (from the scientific perspective) mystical form of consciousness. I find the determinist research fascinating and I encourage it to continue. But I also see several limitations that cannot be addressed through current methodologies, and several philosophical conundrums that are not adequately addressed either.
However, none of that makes the debate any less interesting, or the work being done any less valuable.
What is Free Will?
This post continues my discussion of free will and determinism in neuroscience. Due to the relatively brief nature of these posts, this discussion is incomplete. However, I hope it spurs additional discussion. I believe addressing free will and determinism allows us to understand the underlying theories and implications of neuroscience and social science research as well as the practical application of that research.
For this article, the main questions are: “Is behavior biologically determined?” and “Do humans have free will?” I will not address in this post the argument between compatibilism and incompatibilism. In response to comments and questions about my previous post, I thought it necessary to attempt to define free will before I write further posts on this general topic of free will and biological determinism in the neurosciences.
In reading some comments on my post, one fairly common definition - at least an operational definition - of free will was randomness. In other words, in a psychology experiment, for example, free will is part of the unexplained variance - the randomness in the data. Equating free will with randomness - overtly or not - is something I have heard and read repeatedly.
However, I do not believe that free will can simply equal randomness. Randomness is chance. It is the flip of a coin or the roll of a die. Randomness is unpredictable. However, let's go with the assumption that free will equals randomness. One of the simplest forms of randomness is a coin flip. The outcome of that flip might seem random, but suppose we understand the composition of the coin. We know it has a particular mass; we know the density of the metal as well as any variations in density throughout the coin. We know its precise coefficient of friction, its air resistance, its rotational velocity, and so forth. We understand everything about the chemistry and physics of the coin's flip. With this comprehension, we can predict the outcome of the flip with 100% certainty. However, our knowledge and predictions do not cause the outcome.
In other words, even with a perfect prediction of the outcome of the coin flip, that knowledge did not cause the randomness of the result. So, am I arguing that randomness is a good definition of free will? No. If we can understand all the chemistry and physics of the coin and its flight, we can then state that the flip of the coin merely appeared random but was in fact determined by the particular interaction between physics and chemistry. In other words, the outcome of the coin flip was determined by the physical world – by the materials of the coin and the interaction of those materials with our material world – even if our knowledge of the material world did not determine the outcome. Therefore, we can create a deterministic explanation for the seemingly random event.
This demonstrates that what appears random can be explained away as determined. Researchers even have deterministic and indeterministic explanations for quantum theory, which also indicates that defining free will as randomness is not sufficient. Thus, randomness is a poor definition of free will because if free will is nothing more than randomness, once we understand our material world perfectly we will perfectly explain all randomness, all previously unexplained variance. This is what some neuroscientists are trying to do with human behavior, although few are willing to take the hard stance of completely denying free will.
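The coin-flip argument has a familiar software analogue, offered here as my own illustration rather than anything from the post: a pseudo-random number generator produces 'random-looking' flips, yet anyone who knows its seed - its complete 'physics' - predicts every outcome with certainty.

```python
import random

def flip_sequence(seed, n):
    """Coin flips that look random but are fully determined by the seed."""
    rng = random.Random(seed)
    return [rng.choice("HT") for _ in range(n)]

observed = flip_sequence(seed=42, n=10)

# With complete knowledge of the 'initial conditions' (the seed),
# the 'random' outcome is perfectly reproducible in advance:
predicted = flip_sequence(seed=42, n=10)
print(observed == predicted)  # True
```

To an observer who cannot see the seed, the sequence passes for chance; to one who can, it was determined all along - which is exactly the sense in which apparent randomness can be "explained away as determined."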
So what is free will? I'll start with an example. Free will is standing out in the sunlight and denying that the sun is shining. Free will can be defined as choosing one's actions or course. Free will is also frequently defined as indeterminism. What is interesting is that this definition, meaning 'not determinism,' relies on determinism to define free will. Why do many use determinism to define free will? Because determinism is easy to define - it is a concrete concept. Additionally, it is one of the major philosophical foundations of modern science, in part because we can easily create operational definitions for determinism.
In the biological sciences and neurosciences, in particular, determinism is inextricably tied to biology and materialism (i.e., biological determinism). Most neuropsychologists seek to explain behavior as determined by the interaction between biology and environment (many may have a soft deterministic view but they still want to know the causes of behavior). In forensic (legal) cases, neuropsychologists often clash with the legal system; psychology assumes biological determinism (i.e., causal determinism) whereas the legal system assumes free will (while it does not necessarily deny some form of determinism, the main emphasis is on free will).
In the end, I did not really define free will other than saying that it is not randomness and it is not determinism. Even defining free will as choosing one's own course or actions is an incomplete definition because, as demonstrated above, it is still possible to explain those choices as determined if we resort to reductionism of behaviors. This leads to one of the major problems with determinism - that it cannot really be falsified by science (after all, science assumes determinism to start) - but that is a different discussion altogether. As David Hume once said (I'm paraphrasing), "[The nature of free will is] the most contentious question of metaphysics."
In my next post I’ll address an alternative set of assumptions (i.e., beliefs or explanations) to determinism, particularly biological determinism as is found in neuroscience.
Related Articles at Brain Blogger:
- Free Will and the Philosophy of Science
- Free Will is a Terrible Thing to Waste
- Are Drug Reps Really Necessary?
- Safety Concerns with Prescription Drug Samples
- Drugs and Pharmacology, Fourteenth Edition
Musicians have better memory -- not just for music, but words and pictures too [Cognitive Daily]
Last night in the U.S. many televisions were tuned to one of the biggest spectacles of the year: the American Idol finale, where America would learn which singer had been chosen as 'America's favorite' (or, more cynically, who inspired the most teenagers to repeatedly dial toll-free numbers until all hours of the night). Greta and I are suckers for this sort of thing, so we watched along with the rest of the nation.
What impressed me about the show wasn't so much the prodigious vocal talents of the two finalists, but how everything was put together so hastily: there had been only six days from the previous week's episode (where the two finalists were revealed), and during this time each finalist learned at least three or four songs. The musicians who played along with them had no score to follow; they had to commit the songs to memory. Everything went off without a hitch, because these professional musicians routinely hold an astonishing variety of music in their memories.
If you've ever seen a symphonic concerto, you probably noticed that the soloist usually performs the entire piece -- lasting 20 minutes or more -- from memory: thousands of notes, all played with perfect pitch and intonation. Clearly many musicians have exceptional memories for the songs they play. So does this ability to remember hundreds of songs transfer into other types of memory? While there's been some research into musicians' memory, the results have been mixed. Most studies show that musicians have better memory for words than non-musicians, but there's less evidence that musicians can remember spatial information better. In one study, musicians couldn't recall locations on a map any better than non-musicians.
So a team led by Lorna Jakobsen tested 36 college students, 15 of whom had an average of 11.5 years of formal piano instruction and had passed a rigorous performance examination, while the rest had less than a year of musical training.
Emotions in music are universally recognised
Here in Britain, we all know an up-beat summer tune when we hear one, but what about other parts of the world? Perhaps our culture's summer anthem would sound to others like a soulless dirge. In fact a new study suggests this isn't the case at all. Echoing the way that the same basic facial expressions of emotion are recognised the world over, psychologists have reported new evidence that emotions in music are also universally recognised.
Thomas Fritz and colleagues played samples of computer-generated piano music to members of the culturally isolated Mafa tribe of Cameroon as well as to Western participants. The music had been specifically designed to convey either happiness, sadness or fear by careful manipulation of mode, tempo, pitch range, tone density and rhythmic regularity, according to Western conventions.
The tribes-people had never before heard Western music, and yet they matched the musical samples to the appropriate (according to Western convention) pictures of emotional facial expressions with accuracy significantly above chance. Analysis of their ratings showed that both they and the Western participants relied on the same cues to make their judgements: for example, pieces with a higher tempo were more likely to be rated as happy, whereas a lower tempo prompted ratings of fear.
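An "accuracy significantly above chance" claim of this kind is typically checked with a one-sided exact binomial test; the sketch below uses hypothetical numbers, not the study's actual data. With three emotion pictures to choose from, chance accuracy is 1/3:

```python
from math import comb

def binomial_p_above_chance(correct, trials, chance):
    """One-sided exact binomial p-value: P(scoring >= `correct` by luck alone)."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# Hypothetical listener: 14 correct matches out of 24 trials,
# against a 1-in-3 guessing rate.
p = binomial_p_above_chance(correct=14, trials=24, chance=1 / 3)
print(f"p = {p:.4f}")  # well below 0.05
```

The same computation is available as `scipy.stats.binomtest` if SciPy is on hand; the hand-rolled sum above just makes the "better than guessing" logic explicit.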
A second experiment provided evidence for the universality of musical enjoyment. This time the researchers played either Western or traditional Mafa music to Mafa tribes-people and Westerners. Crucially, they played either unaltered versions of the music or 'spectrally manipulated' versions; this manipulation altered the spectral content of the music to make it sound more dissonant, or lacking in harmony. The tribes-people and the Westerners both preferred the unaltered versions of both the Mafa and the Western music.
Does the universality of musical emotional recognition mean that music acts as a universal language of human emotion? Not so fast. In a supplementary discussion available free online, the researchers pointed out that Mafa music doesn't convey as many different emotions as Western music, thus undermining the idea of music as a universal language. 'Despite the observed universals of emotional expression recognition one should thus be cautious to conjure the idea of music as a universal language of emotion, which is partly a legacy of the period of romanticism,' they said.
Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., Friederici, A., & Koelsch, S. (2009). Universal Recognition of Three Basic Emotions in Music. Current Biology, 19 (7), 573-576 DOI: 10.1016/j.cub.2009.02.058
Interview with Milgram’s participants
The Australian Broadcasting Corporation’s Radio Eye program has tracked down and interviewed four of the individuals who participated in Milgram’s now infamous obedience experiments. One of those interviewed by Radio Eye is Joseph Dimow, whose experience as a participant in the Milgram experiments has been covered by AHP previously.
Radio Eye’s Beyond the Shock Machine is described by the program’s website as follows:
In the summer of 1961 Stanley Milgram, a 27-year-old associate professor of psychology at Yale University, conducted a series of controversial experiments designed to test the limits of obedience. Volunteers in the experiment were told to give electric shocks to a person they could hear screaming in pain in the room next door. Seemingly ordinary people turned into torturers.
Much has been written about Milgram and his experiments. But there’s a missing part to the story — the voices of people who took part.
Gina Perry goes in search of those who participated in what’s been described as the most widely cited and provocative set of experiments in social psychology.
The audio of Beyond the Shock Machine can be heard here.
Zimbardo on Milgram and Obedience - Part II
Situationist Contributor Philip Zimbardo has authored the preface to a new edition of social psychologist Stanley Milgram’s seminal book Obedience to Authority. This is the second of a two-part series derived from that preface. In Part I of the post, Zimbardo describes the inculcation of obedience and Milgram’s role as a research pioneer. In this part, Zimbardo answers challenges to Milgram’s work and locates its legacy.
* * *
Unfortunately, many psychologists, students, and lay people who believe that they know the “Milgram Shock” study, know only one version of it, most likely from seeing his influential movie Obedience or reading a textbook summary.
He has been challenged for using only male participants, which was true initially, but he later replicated his findings with females. He has been challenged for relying only on Yale students, because the first studies were conducted at Yale University. However, the Milgram obedience research covers nineteen separate experimental versions, involving about a thousand participants, ages twenty to fifty, of whom none were college or high school students! His research has been heavily criticized as unethical for creating a situation that generated much distress for the person playing the role of the teacher, who believed his shocks were causing suffering to the person in the role of the learner. I believe that it was seeing his movie, in which he includes scenes of distress and indecision among his participants, that fostered the initial impetus for concern about the ethics of his research. Reading his research articles or his book does not convey as vividly the stress of participants who continued to obey authority despite the apparent suffering they were causing their innocent victims. I raise this issue not to argue for or against the ethicality of this research, but rather to stress that it is still critical to read the original presentations of his ideas, methods, results, and discussions to understand fully what he did. That is another virtue of this collection of Milgram’s obedience research.
A few words about how I view this body of research. First, it is the most representative and generalizable research in social psychology or social sciences due to his large sample size, systematic variations, use of a diverse body of ordinary people from two small towns—New Haven and Bridgeport, Connecticut—and detailed presentation of methodological features. Further, its replications across many cultures and time periods reveal its robust effectiveness.
As the most significant demonstration of the power of social situations to influence human behavior, Milgram’s experiments are at the core of the situationist view of behavioral determinants. It is a study of the failure of most people to resist unjust authority when commands no longer make sense given the seemingly reasonable stated intentions of the just authority who began the study. It makes sense that psychological researchers would care about the judicious use of punishment as a means to improve learning and memory. However, it makes no sense to continue to administer increasingly painful shocks to one’s learner after he insists on quitting, complains of a heart condition, and then, after 330 volts, stops responding at all. How could you be helping improve his memory when he was unconscious or worse? The most minimal exercise of critical thinking at that stage in the series should have resulted in virtually everyone refusing to go on, disobeying this now heartlessly unjust authority. To the contrary, most who had gone that far were trapped in what Milgram calls the “agentic state.”
These ordinary adults were reduced to mindless obedient school children who do not know how to exit from a most unpleasant situation until teacher gives them permission to do so. At that critical juncture when their shocks might have caused a serious medical problem, did any of them simply get out of their chairs and go into the next room to check on the victim? Before answering, consider the next question, which I posed directly to Stanley Milgram: “After the final 450 volt switch was thrown, how many of the participant-teachers spontaneously got out of their seats and went to inquire about the condition of their learner?” Milgram’s answer: “Not one, not ever!” So there is a continuity into adulthood of that grade-school mentality of obedience to those primitive rules of doing nothing until the teacher-authority allows it, permits it, and orders it.
My research on situational power (the Stanford Prison Experiment) complements that of Milgram in several ways. They are the bookends of situationism: his representing direct power of authority on individuals, mine representing institutional indirect power over all those within its power domain. Mine has come to represent the power of systems to create and maintain situations of dominance and control over individual behavior. In addition, both are dramatic demonstrations of powerful external influences on human action, with lessons that are readily apparent to the reader, and to the viewer. (I too have a movie, Quiet Rage, that has proven to be quite impactful on audiences around the world.) Both raise basic issues about the ethics of any research that engenders some degree of suffering and guilt from participants. I discuss at considerable length my views on the ethics of such research in my recent book The Lucifer Effect: Understanding Why Good People Turn Evil (2008). When I first presented a brief overview of the Stanford Prison Experiment at the annual convention of the American Psychological Association in 1971, Milgram greeted me joyfully, saying that now I would take some of the ethics heat off his shoulders by doing an even more unethical study!
Finally, it may be of some passing interest to readers of this book that Stanley Milgram and I were classmates at James Monroe High School in the Bronx (class of 1950), where we enjoyed a good time together. He was the smartest kid in the class, getting all the academic awards at graduation, while I was the most popular kid, being elected by senior class vote to be “Jimmie Monroe.” Little Stanley later told me, when we met ten years later at Yale University, that he wished he had been the most popular, and I confided that I wished I had been the smartest. We each did what we could with the cards dealt us. I had many interesting discussions with Stanley over the decades that followed, and we almost wrote a social psychology text together. Sadly, in 1984 he died prematurely from a heart attack at the age of fifty-one.
[Milgram] left us with a vital legacy of brilliant ideas that began with those centered on obedience to authority and extended into many new realms—urban psychology, the small-world problem, six degrees of separation, and the Cyrano effect, among others—always using a creative mix of methods. Stanley Milgram was a keen observer of the human landscape, with an eye ever open for a new paradigm that might expose old truths or raise new awareness of hidden operating principles. I often wonder what new phenomena Stanley would be studying now were he still alive.
* * *
The Pirahã are a tribe of Amazonian Indians who have become famous among linguists and psychologists because it has been claimed that they lack a number system (not even a word for 'one'), recursion, and color words (and they seem to be a very happy people despite the absence of Louis Vuitton shops, something I dare not believe).
Dan Everett, an ex-missionary/linguist/anthropologist, is one of the few people to speak their language, and he is responsible for most of the provocative claims about the Pirahã (see for instance this paper in Current Anthropology). A few months back, he published an autobiographical description of life among the Pirahã, accompanied by a popular science account of his discoveries: Don't Sleep, There Are Snakes.
I spent some time in Brazil, so these stories are particularly interesting to me beyond the psychological/linguistic implications. However, the psychological implications are just too fun to pass up.
Tuesday, August 25, 2009
11 reasons why it's so important for kids to get a good night's sleep (and yes, many of these are relevant to adults as well):
1. Sleep is restorative for the brain.
2. Too little sleep can lead to weight gain by altering levels of the hormones that regulate satiety and hunger, leading to overeating, overweight, and obesity.
3. Growth hormone is secreted during slow wave sleep.
4. Insufficient sleep is associated with a higher incidence of behavioral problems, especially attention-deficit/hyperactivity disorder (ADHD).
5. Sleep disruption caused by snoring in infants delays their development.
6. Night terrors and confusional arousals are often made worse by sleep deprivation.
7. Memory consolidation occurs during slow wave sleep, meaning that the different pieces of what we've learned during the day come together coherently so that the knowledge can be accessed when needed.
8. Rapid eye movement (REM) sleep, the stage of sleep when the most vivid dreams are dreamt, is important for the 'unlearning' of superfluous memories. For example, when a child learns how to ride a bike and falls off the first ten times, finally succeeding on the eleventh try, the memory of how to perform the task so as to stay on the bike is the one that is important to retain, not the memories of how to fall off. Unlearning removes the unhelpful 'how to' memories of falling off the bike, so that the next day when the child hops on it, she will automatically re-enact what she did on that eleventh try, and not the first ten.
9. School performance improves in kids whose poor sleep is caused by obstructive sleep apnea, once the apnea has been treated.
10. Studies using MR spectroscopy to compare healthy children to those with long-standing obstructive sleep apnea have shown that those with the sleep apnea have certain, specific patterns of brain injury not seen in the healthy kids.
11. When kids sleep well, their parents' sleep improves, too, doing wonders for their ability to function during the day (and maintain their sanity in the evening and night). This may be last on this list, but certainly not least!
Amen, brother. Preach on! Anytime I can get more support for a good night's sleep, I'm takin' it!
I think there are two reasons that people aren't as 'zesty' as they could be. One reason is that they have acquired misinformation about the emotion of shame. The second is that they don't have enough information about shame.
Why is accurate information about shame so important? Because unnecessary shame creates so much pain in our lives. In the words of one of our contemporary scholars of shame, Gershen Kaufman, 'Shame is the most disturbing experience individuals ever have about themselves; no other emotion feels more deeply disturbing because in the moment of shame the self feels wounded from within.'
Misinformation about shame
Here's an example of misinformation about shame: One client of mine was so confused that in the beginning of our work I could not even use the word ‘shame' in session without causing a major rift. We learned that she thought that if she FELT shame, that meant she had actually DONE something shameful, and that her whole self was shameful.
When I talk about shame, I'm not talking about anyone actually DOING anything wrong. I'm talking about the FEELING and the thoughts that we are somehow wrong, defective, inadequate, not good enough, or not strong enough.
Lack of information about shame
While everybody feels shame, most of us don't recognize it in its many forms. We can experience fleeting shame at burping loudly in an elevator. Or we can feel chronic shame, experiencing that we, as a whole person, are flawed and inferior. We can feel different intensities of shame. The most intense is humiliation. Humiliation is so painful that we can think, 'This is so painful I wish I could just die!'
We didn't know it until fairly recently, but infants are born hard-wired with the ability to experience shame. Here is an example of a scene that shows an infant's response to feeling shame. Baby is sitting on the kitchen counter in his infant seat. Mom steps out of the room for a minute. When Mom starts walking back into the room, Baby hears Mom's steps, and anticipates making joyful eye contact with her when she gets back. (The photo accompanying this post shows a baby's state of positive interest.)
But this time Mom is preoccupied, and when she comes back into the room, she does not meet Baby's eyes. The muscles in his neck then lose their strength, and his head drops down. He turns his face away from her, his eyes are cast downward and he may even drool. This is shame/humiliation. Mom did not meet his high interest; she did not make the connection. Baby's shame is the result.
Ways we may experience shame
I've listed below some variations of shame. We may not recognize some of the ways shame shows up. Each is different, both in what we think caused the experience and in what we think the consequences will be. But they are all shame experiences.
• Shyness is shame in the presence of a stranger
• Discouragement is shame about temporary defeat
• Embarrassment is shame in front of others
• Self-consciousness is shame about performance
• Inferiority is all-encompassing shame about the self
Common triggers for shame
Shame is commonly triggered by the following:
• Basic expectations or hopes frustrated or blocked
• Disappointment or perceived failure in relationships or work
• In relationships, any event that weakens the bond, or indicates rejection or lack of interest from the needed other
You've probably heard the phrase, 'What you feel, you can heal.' We have learned much about what is needed to work through and release shame. Recognition of our feeling of shame is the first step to mastering our shame reactions. And mastering our shame enhances our zestiness.
An exercise to help recognize shame
Describe in writing a specific incident from childhood in which you felt shame. Record the feelings and thoughts you had during the incident, and those you had afterwards. What impulses did you have? Did you want to move towards others, or to move against them, or to move away from them?
Notice, if possible, where you felt the shame in your body. If the feeling had a color, what would it be? What sound, color, texture, and temperature would it have?
Lastly, write down how you think that shame scene still influences you today. The impact can be either something you like or something you don't like.
An additional aid
I have been discussing how you can begin to manage your shame reactions while working alone. You can also manage your shame reactions by keeping your relational bonds strong. In any relationship there are bound to be conflicts of needs. And when you can communicate lovingly while you are in conflict, it helps reduce shame inducing reactions.
To aid you in those communications, you can get a free chapter in my ebook, How to Make Love through Deep Listening."
This is a great article, and treats a subject that I don't think gets enough attention, motivation through shame or guilt.
I'd buy that. Miserable parents = miserable kids. Happy parents = happy kids.
I wouldn't necessarily put it into the realm of heritability without some significant genetic research - but certainly from a conditioning aspect it would be pretty easy to buy into.
So - you want socially capable kids? Have the mom talk to them!
I posted an update to an earlier post on this, but I think it is interesting enough to merit its own post.
Meditate your way to a bigger brain....sounds like a bad infomercial.
Michel Desmurget and colleagues used this approach with seven patients undergoing neurosurgery for the removal of brain tumours (see images and further related info). Stimulation of the premotor cortex at the front of the brains of four of these patients led them to perform limb and mouth movements that they were unaware of. By contrast, stimulation of the parietal cortex at the rear of the brains of three of the patients led them to experience a powerful desire to move. Even higher-power stimulation in this region provoked an erroneous belief that they had in fact moved when really they hadn't.
This new research is the first to link the parietal cortex directly with the feeling of a desire to move. Whereas frontal brain regions are associated with actual movement execution, the parietal cortex is known to be involved in predicting the sensory consequences of our own actions. This new research suggests this predictive activity may play a role in the feeling of having made a decision to move.
The story doesn't end there. Past research has shown that stimulation of a frontal area - supplementary motor cortex - is also linked with an urge to move. This region is involved in actual motor command planning. So the complete picture, so far, appears to be that the feeling of a desire to move arises from frontal areas associated with movement execution and from parietal areas associated with predicting the sensory consequences of moving. 'Just how the frontal, motor aspect of this experience differs from the parietal, sensory aspect, is the next question,' said Haggard in his comment piece.
See here and here for earlier, related Digest posts.
M Desmurget, K Reilly, N Richard, A Szathmari, C Mottolese, & A Sirigu (2009). Movement intention after parietal cortex stimulation in humans. Science, 324, 811-813.
P Haggard (2009). The sources of human volition. Science, 324, 731-733.
I've updated my earlier post on this subject enough, so I guess it's time for a new post just to provide more context for the prevalence of the debate.
The success of religion may be the fault of non-believers (or, if you look at it the other way around, thank god for the atheists!). At least that is one interpretation of a recent individual-based simulation study of social evolution conducted by James Dow at Oakland University in Rochester, Michigan, and published in a recent issue of the Journal of Artificial Societies and Social Simulation (vol. 11, no. 2).
Dow built a simulation program (appropriately called evogod) that explored the question of how religion — i.e., a system based on passing along false or unverifiable information about the world — can spread in a society. There are, of course, several theories out there about the evolution of religion, falling into two broad categories: either religion is somehow advantageous and is therefore the result of natural selection, or it is a byproduct of other characteristics of the human brain and social organization. The first possibility comes in two main flavors: the advantage could accrue to religious individuals (standard individual-level natural selection) or groups (invoking the more controversial mechanism of group selection). Dow’s study explores the possibility that religious belief spread because of an individual advantage of some sort.
The first interesting result from the simulations is that under most tested scenarios religion actually does not survive! This is presumably because there is an obvious cost (in terms of sheer Darwinian fitness) to buying into fanciful notions about how the world works. How is it possible, then, that practically every human society has gotten the religious virus? The most surprising result of Dow’s study is that religion spreads only if non-religious people help it by supporting the religious! How is this possible?
The simulation’s structure was not designed to address the question of what mechanism could induce non-religious people to help religious ones, but some possibilities have been suggested nonetheless. According to Dow, “if a person is willing to sacrifice for an abstract god then people feel like they are willing to sacrifice for the community” (the so-called “greenbeard” effect). This is a social version of a well-established evolutionary idea known as the “handicap principle,” where males who can parade useless and costly attributes (be they peacock's feathers or Ferrari sports cars) are more likely to attract females because they are sending the indirect signal that their genes are so good that they can waste energy and resources just to please the female. The display attempts to induce the female to imagine what sort of offspring she might be able to produce if only she would consent to...
As bizarre and irrational as this sort of scenario may seem, there is independent empirical evidence, for instance from studies of Israeli kibbutzim, that religious people do tend to receive more assistance than less religious ones from the rest of the community, again perhaps because they inspire trust. Ironically, of course, this trust originates not because the religious provide more truthful information about the world, but precisely because they display a high degree of commitment to delivering non-verifiable information! Humans, you’ve got to love them.
This is pretty hilarious on many levels. A simulation program for religion? And what exactly were the criteria that they entered into their computer? I'm fairly skeptical of computer simulations of complex phenomena to begin with (chaos theory has taught us many things, including that computers can at best only roughly approximate anything), but this one seems to really go beyond the pale. It would be a much more interesting read if we could see the input parameters.
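Since the input parameters are exactly what's missing, here is a minimal, purely hypothetical sketch of what an individual-based model of belief spread might look like. None of the parameters or mechanics below come from Dow's actual evogod program; the sketch just illustrates the qualitative result reported above: belief carries a fitness cost, so it persists only when non-believers subsidize believers.

```python
import random

def simulate(generations=200, pop_size=500, belief_cost=0.05,
             support=0.0, mutation=0.01, seed=0):
    """Toy individual-based model: True = believer, False = non-believer.
    Believers pay a fitness cost; non-believers optionally transfer a
    fraction `support` of their fitness to believers. All parameters
    are hypothetical, not Dow's actual evogod inputs."""
    rng = random.Random(seed)
    pop = [rng.random() < 0.5 for _ in range(pop_size)]
    for _ in range(generations):
        believers = sum(pop)
        non_believers = pop_size - believers
        # Per-believer subsidy pooled from all non-believers.
        subsidy = (support * non_believers / believers) if believers else 0.0
        tax = support if believers else 0.0
        weights = [1.0 - belief_cost + subsidy if b else 1.0 - tax
                   for b in pop]
        # Offspring inherit the parent's belief, with rare random flips.
        parents = rng.choices(pop, weights=weights, k=pop_size)
        pop = [(not b) if rng.random() < mutation else b for b in parents]
    return sum(pop) / pop_size  # final fraction of believers

if __name__ == "__main__":
    print("no support:  ", simulate(support=0.0))
    print("with support:", simulate(support=0.1))
```

With no support, belief declines toward a mutation-selection balance; with even a modest subsidy from non-believers it takes over, mirroring the counterintuitive result attributed to the evogod runs.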
It's been a truism for some time that infants have much more intelligence than we give them credit for. Good to see that more and more recognition of that fact is percolating out.
Our brains are wired to pay attention to flashy signs and loud noises, so how do we focus on what we want when we're surrounded by interruptions we're naturally drawn to?
Photo by kevindooley.
The New York Times dives into the science of concentration, discussing studies on the nature of concentration and some of the solutions science is working up for us—including an amazing hearing-aid-for-concentration device that would send pulses of light to our brain that would provide us with better control over our powers of concentration. (Do read the full article for a taste of some very interesting science.)
Until that future comes, Winifred Gallagher, author of Rapt—a guide to the science of attention—suggests a few practical things you can do now.
She recommends starting your work day concentrating on your most important task for 90 minutes. At that point your prefrontal cortex probably needs a rest, and you can answer e-mail, return phone calls and sip caffeine (which does help attention) before focusing again. But until that first break, don't get distracted by anything else, because it can take the brain 20 minutes to do the equivalent of rebooting after an interruption.
Every student should read this article.
'Brain Gain,' Margaret Talbot's troubling article in the latest issue of the New Yorker magazine, tackles a growing problem among undergraduate and high-school students: the abuse of prescription stimulants for scholarly gain. Talbot interviews a former Harvard student who regularly took Adderall off-label, without a diagnosis of ADHD, in hopes of ramping up his academic performance. Like hundreds of students across American campuses who adopt this practice, the student wasn't performing badly. He simply wanted to party a bit more than time and body permitted, and took Adderall in hopes that it would help him make up for lost time.
Rather than casting aspersions or trying to dismiss the issue, Talbot tries to understand what's driving it and why so many students think it's wise, expedient, or necessary to take prescription stimulants for conditions they know they haven't got, in hopes that doing so won't backfire.
A number of bioethicists have chimed in on this issue, saying they don't in principle object to the phenomenon, which they prefer to call 'neuroenhancing.' And you can at least see their point, because our school systems and work environments are so competitive that it's difficult to keep an edge. On the question of whether to take meds for an extra boost, the bioethicists' line has become: 'You can choose whether to compete or not.'
I'm not so sure. Our culture expects us to be 'up' and 'on' at all hours. As one advocate of 'selective' neuroenhancement put it, 'I don't think we need to be turning up the crank another notch on how hard we work.' We work hard enough as it is.
Those who favor neuroenhancement hit back with this rejoinder: Why hold on to 'self-limiting' behaviors when popping a pill seems to extend such limits further than we could imagine? We can have 'responsible use of cognitive-enhancing drugs by the healthy,' writes Martha Farah in a recent issue of Nature. Even the British Medical Association appears to be taking this line: 'Universal access to enhancing interventions would bring up the base-line of cognitive ability,' the organization wrote recently, 'which is generally seen to be a good thing.'
True. But the argument has a coercive edge to it, because new norms and practices put a lot of pressure on a younger generation, in this case raising expectations about performance that are tied solely to medication rather than greater work, effort, or understanding. To what extent is that a real or lasting improvement, either in the student or in the generation she or he represents?
Those favoring neuroenhancers echo the line we used to hear about SSRI antidepressants (long before the FDA found it necessary to add black-box warnings to them because of concerns about suicidal ideation): the drugs simply elevate our moods and give us freedom from angst. We can 'customize' our brains just as we can 'customize' ourselves. It's a lifestyle-improvement issue, apparently. And fatigue, like sadness, is just an unfortunate residue of our humanity that we can medicate away.
Talbot's piece usefully disabuses us of that fantasy. The essays the Harvard student would grind out on Adderall, he himself conceded, were far less precise and succinct than those he normally would write. Another person Talbot interviews puts it this way: 'In the end, you're only as good as the ideas you've come up with.'
Finally, Talbot asks whether taking an unneeded stimulant for an assignment could make one's thinking—and writing—less creative than otherwise: closer to a distinctly unromantic model of productivity than a truly well-conceived, well-executed exercise. The difference may not matter with one or two projects, but one wouldn't want that pattern repeated on a larger scale, especially if it leads to similar work being churned out unthinkingly across the nation.
To those taking prescription stimulants purely to get an edge over their peers, it's easy to voice the charge of cheating. A teacher myself, I don't like the thought of students self-medicating before an exam because they haven't adequately revised for it. I worry too about the unforeseen side effects of such medication, because the lists of side effects for Adderall and Ritalin (amphetamine and methylphenidate, respectively) are as easy to find on the Web as the medications themselves. Such side effects include extreme fatigue, loss of appetite, headaches, fever, and heartburn. There's also concern that the drugs are habit-forming.
When Adderall and Ritalin replace caffeine and NoDoz as ways of getting through finals week, there are clearly grounds for concern. The phenomenon of neuroenhancement is growing rapidly among undergraduates and high-school students—according to one study, as many as 35% of students on one campus had taken prescription stimulants they didn't need during the previous year.
The most powerful moment in Talbot's article comes near the end, when she sums up these issues and lets slip her personal feelings about them. 'All this may be leading to a kind of society I'm not sure I want to live in: a society where we're even more overworked and driven by technology than we already are, and where we have to take drugs to keep up; a society where we give children academic steroids along with their daily vitamins.'
Christopher Lane, the Pearce Miller Research Professor at Northwestern University, is the author most recently of Shyness: How Normal Behavior Became a Sickness."
Great article on "Brain steroids" - a fun argument to have.
Heck, yes! This is one of the first things I try and teach my psychology students! The type of information taught in this professor's class is, in my opinion, not just helpful but essential for a new college student.
I need to develop a course.....
Any reader of this blog knows that I'm a HUGE proponent of Play as a form of education, and this just highlights more uses of the practice.
I think it's interesting that the scientists expected to see something special. In actuality they were probably hoping NOT to find something special, as more evidence that religion is wrong - but they're missing the point. Religion is comfortable, just like an old friend. I would have been exceptionally surprised to see any result but the one they got. Talking to God shouldn't engage some special "prayer" region or some surprising new "religion" module of the brain - by its very nature it should look just like talking to a good friend. As I said - that's kind of the point of religion.
Members of a religion are obliged to comply with its ethical principles. How good are religious people at living up to what is expected of them? Are they more ethical than atheists, for example? Let's review the evidence.
Students of world history recognize that when there are real conflicts of interest, religion, however pacifist in principle, does not restrain warlike impulses and there are many so-called religious wars. These generally have little to do with theological differences and are mostly sparked by friction over vital resources, like land, oil, or employment opportunities. The religious war in Northern Ireland revolved around jobs and houses held by Protestants and desired by Catholics, for instance.
There is a high level of religious faith in jails, even among the most violent and depraved offenders. Criminal convictions and long prison terms (or even the death penalty) may encourage prisoners to reflect on the error of their ways. Religious conversion in such cases is a sort of 'plea bargain' according to which more lenient treatment is anticipated from parole boards, prison officials, and even in the next life. The fact that some of the worst people can be so religious implies that religiosity is no guarantee of ethical behavior.
Despite a widespread perception that religious people should behave more ethically in general, researchers find little evidence that religious people either think or behave more ethically (1). One study found that atheists were significantly less likely than religious students to cheat on an exam (2).
Psychologists find that religious belief stunts moral development because it commits people to a dogma, or formula, rather than to working out ethical solutions for themselves (the highest stage of moral development, known as post-conventional morality).
Fundamentalist religions may undermine moral reasoning. People who 'know' that they are saved may be relatively unconcerned about who is hurt by their actions in this world. A Roper survey found that after being 'born again,' people are more likely to drive drunk, use illegal drugs, and engage in illicit sex (3).
Religious texts exhort people to behave charitably and with compassion, but social scientists have, over the decades, found little evidence of this affecting actions. Among the research findings assembled by sociologist Alfie Kohn (4) were the following:
A 1950 study of Episcopalians found no relationship between involvement in religious activities and charitable acts.
A 1960 questionnaire study found that belief in God was only slightly related to altruism. Attendance at religious services was completely unrelated to altruism.
In 1984, interviews with 700 residents of a medium-sized city found that religious people were not particularly good citizens, as determined by involvement with neighbors and participation in local organizations.
People who rescued Jews in Nazi Germany were no more religious than non-rescuers.
Religious people are more intolerant of ethnic minorities.
A 1992 Gallup study (5) showed, however, that church members are more likely to claim they make charitable donations than non-members are (78 percent vs. 66 percent).
A 2006 study found that among developed countries, those with a higher proportion of religious believers had more homicides, more teen births, and more venereal disease (5).
A 2008 study of ethics in high schools (6) found little difference between religious and secular independent schools in self-reported stealing (19 vs. 21 percent, respectively), or lying to parents (83 vs. 78 percent), but cheating was more common in religious schools (63 vs. 47 percent).
If indeed there are ethical differences between religious believers and atheists, a galloping horse wouldn't notice the difference. Neither could a Catholic priest. According to the late Rev. Richard John Neuhaus:
One would like to think that people who think of themselves as devout Christians would also behave in a manner that is in accord with Christian ethics. But pastorally and existentially, I know that that is not the case - and never has been the case.
1. Barber, N. (2004). Kindness in a cruel world: The evolution of altruism. Amherst, NY: Prometheus.
2. Clark, B. (1994). How religion impedes moral development. Free Inquiry, 14(3), 23-25.
3. Freethought Today, September 1991, p. 12.
4. Kohn, A. (1990) The brighter side of human nature: Altruism and empathy in everyday life. New York: Basic.
5. Wuthnow, R (1994). God and mammon in America. New York: The Free Press.
6. Josephson Institute (2008). Report card on the ethics of American youth.
Each year in the U.S., billions of dollars are spent by companies and individuals on leadership development programs. A critical question is: does leadership development work? The answer is, 'yes,' but some programs seem to work better than others.
A series of meta-analyses, which are essentially statistical studies of studies, shows that across all efforts to develop leadership there are modest, positive improvements in leadership. The first, by psychologist Bruce Avolio and his colleagues, looked at 100 years of leadership development programs. This meta-analysis showed that across programs, regardless of the type of training, leadership development worked.
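For readers curious what a "study of studies" does mechanically: a basic meta-analysis pools each study's effect size, weighted by its precision, so larger and more precise studies count for more. The sketch below uses the standard fixed-effect inverse-variance method with made-up numbers; it is not Avolio's data and not necessarily the exact method his team used.

```python
# Fixed-effect inverse-variance meta-analysis: each study's effect size
# (e.g. a standardized mean difference d) is weighted by 1 / variance.

def pooled_effect(effects, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var ** 0.5  # pooled estimate and its standard error

# Hypothetical effect sizes from five leadership-training studies.
effects = [0.35, 0.20, 0.50, 0.15, 0.40]
variances = [0.02, 0.05, 0.04, 0.01, 0.03]

est, se = pooled_effect(effects, variances)
print(f"pooled d = {est:.2f} (SE {se:.2f})")
```

Notice how the pooled estimate lands closest to the most precise (lowest-variance) studies, which is the whole point of the weighting.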
More recent analyses show that certain types of leadership development, based on theories such as transformational leadership, and Pygmalion leadership training (teaching leaders to hold positive expectations for followers' performance, and to convey those positive expectations) work best.
As you might imagine, the time devoted to leadership development matters, with longer programs having more positive impact than programs that last a day or two. Moreover, it is important that programs follow some tried and true leadership model to guide development efforts.
This is all positive news for those of us involved in leadership development and for those of us who are trying to develop as leaders. Even old dogs can learn new tricks, or new ways to lead.
Avolio, Bruce J. (and colleagues). A Meta-Analytic Review of Leadership Impact Research: Experimental and Quasi-Experimental Studies. Forthcoming in The Leadership Quarterly.
And to save effort, here are the rest:
Part 2: childhood
Part 3: adolescence
Part 4: adulthood
Part 5: old age
Academics take note! Although it should also be weighed against the previous research on how bias is inherent in judgment.
This is pretty non-surprising. Grade-school teachers call it "getting the jitters out" (or wiggles, take your pick).
I wonder if they took into account prestigious universities (which often accept a student more readily if certain extracurricular activities appear in the application) and the fact that graduates from said universities often have greater earning potential based on the strength of the school's reputation?