science is more than an inspiration
  • neuromorphogenesis:

    Three Planes of Human Brain: Coronal, Sagittal and Horizontal.

    From Michigan State University - Brain Biodiversity Bank.

    (via neuromorphogenesis)

  • neurosciencestuff:

    Researchers make new discovery about brain’s 3-D shape processing

    While previous studies of the brain suggest that processing of objects and places occurs in very different locations, a Johns Hopkins University research team has found that they are closely related.

    In research funded by the National Institutes of Health and published today in the journal Neuron, a team led by Johns Hopkins researcher Charles E. Connor reports that a major pathway long associated with object shape also carries information about landscapes and other environments.

    Siavash Vaziri, then a biomedical engineering graduate student and now a post-doctoral fellow in the Connor lab, studied how neurons in the ventral visual pathway of the monkey brain respond to 3-D images. In one channel of the ventral pathway, neurons responded to small, discrete objects as expected. But in a neighboring, parallel channel, the researchers were surprised by the overwhelming responsiveness of many neurons to large-scale environments that surround the viewer, extending beyond the field of view.

    "We were entirely surprised ourselves," said Connor, senior author of the paper. "Based on decades of research, we expected that all neurons in the ventral pathway would be primarily concerned with objects."

    The ventral pathway is one of the two major branches of high-level visual processing in humans and other primates. It is sometimes called the “what” pathway, based on its role in identifying objects based on their shapes and colors.

    "Dr. Vaziri’s finding is exciting because it puts environmental shape information together with object shape information in two densely connected neighboring channels. This could be a site for integrating object information into environmental contexts in order to understand scenes," Connor said.

    Vaziri used microelectrodes to study how individual neurons responded to a large variety of 3-D shapes projected onto a large screen. Depth structure was conveyed by shading, texture gradients, and stereopsis, the effect used in 3-D movies. The shape stimuli evolved during the experiment based on the neuron’s responses, sometimes in the direction of small objects near the viewer, sometimes in the direction of environments filling the screen and surrounding the viewer.

    Connor, a professor of neuroscience and the director of the Zanvyl Krieger Mind/Brain Institute at Johns Hopkins, is a noted expert on the neural mechanisms of object vision. His research focuses on deciphering the algorithms that make object vision possible and explain the nature of visual experience.

    "Many people would say that vision is our richest and most vivid experience," said Connor. "We want to understand the brain events that create that experience."

    Connor said that the next step will be to understand how object and environment information are integrated between the two channels.

    "We don’t typically experience objects in isolation," Connor said. "We experience scenes, that is, environments containing multiple objects. We now think that the ventral pathway may be where all that information gets put together to create scene understanding."

  • neurosciencestuff:

    Scientists track the rise and fall of brain volume throughout life

    We can witness our bodies mature, then gradually grow wrinkled and weaker with age, but only recently have scientists been able to track a similar progression in the nerve bundles of our brains. That tissue increases in volume until around age 40, then slowly shrinks. By the end of our lives, the tissue is about the volume of a 7-year-old’s.

    So finds a team of Stanford scientists who used a new magnetic resonance imaging technique to show, for the first time, how human brain tissue changes throughout life. Knowing what’s normal at different ages, doctors can now image a patient’s brain, compare it to this standard curve and tell whether the person falls outside the normal range, much the way a growth chart can help identify kids who have fallen below their growth curve. The researchers have already used the technique to identify previously overlooked changes in the brains of people with multiple sclerosis.
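    The growth-chart analogy boils down to a standard score against a normative curve. A minimal sketch of that idea, with every number below invented for illustration (none of it is the study's data):

```python
# Sketch: flagging a patient whose white-matter measurement falls
# outside a normative range for their age, growth-chart style.
# The norm values and patient value are hypothetical.

def z_score(value, mean, sd):
    """Standard score: how many SDs the value sits from the norm."""
    return (value - mean) / sd

# Hypothetical norm for one brain region at one age:
norm_mean, norm_sd = 0.82, 0.04   # e.g. a myelin-sensitive MRI metric

patient_value = 0.70
z = z_score(patient_value, norm_mean, norm_sd)

# Flag patients more than 2 SDs below the normative curve.
out_of_range = z < -2
```

    The same comparison, repeated per region and per age bin, is what a shared normative database makes possible.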

    "This allows us to look at people who have come into the clinic, compare them to the norm and potentially diagnose or monitor abnormalities due to different diseases or changes due to medications," said Jason Yeatman, a graduate student in psychology and first author on a paper published today in Nature Communications. Aviv Mezer, a research associate, was senior author on the paper. Both collaborated with Brian Wandell, a professor of psychology, and his team.

    For decades scientists have been able to image the brain using magnetic resonance imaging (MRI) and detect tumors, brain activity or abnormalities in people with some diseases, but those measurements were all relative to the particular scanner. A scientist measuring some aspect of the brain in one lab couldn’t directly compare findings with someone in another lab. And because no two scans could be directly compared, there was no way to look at a patient’s image and know whether it fell outside the normal range.

    Limitation overcome

    "A big problem in MRI is variation between instruments," Mezer said. Last year Mezer and Wandell led an interdisciplinary team to develop a technique that can be used to compare MRI scans quantitatively between labs, described in Nature Medicine. “Now with that method we found a way to measure the underlying tissue and not the instrumental bias. So that means that we can measure 100 subjects here and Jason can measure another 100 in Seattle (where he is now a postdoctoral fellow) and we can put them all in a database for the community.”

    The technique the team had developed measures the amount of white matter tissue in the brain. Most of that white matter volume comes from an insulating covering called myelin, which allows nerves to fire efficiently and is a hallmark of brain maturation, though white matter also contains other types of cells.

    White matter plays a critical role in brain development and decline, and several diseases including schizophrenia and autism are associated with white matter abnormalities. Despite its importance in normal development and disease, no metric existed for determining whether any person’s white matter fell within a normal range, particularly if the people were imaged on different machines.

    Mezer and Yeatman decided to use the newly developed quantitative technique to develop a normal curve for white matter levels throughout life. They imaged 24 regions within the brains of 102 people ages 7 to 85, and from that established a set of curves showing the increase and then eventual decrease in white matter in each of the 24 regions throughout life.

    What they found is that the normal curve for brain composition is rainbow-shaped. It starts and ends with roughly the same amount of white matter and peaks between ages 30 and 50. But each of the 24 regions changes by a different amount. For some parts of the brain, like those that control movement, the curve is a long, flat arc, staying relatively stable throughout life.

    For others, like the areas involved in thinking and learning, the curve is a steep arch, maturing dramatically and then falling off quickly. (The group did point out that their sample started at age 7, by which time a lot of brain development had already occurred.)

    Continued collaboration

    "Regions of the brain supporting high-level cognitive functions develop longer and have more degradation," Yeatman said. "Understanding how that relates to cognition will be really important and interesting." Yeatman is now a postdoctoral scholar at the University of Washington, and Mezer is now an assistant professor at the Hebrew University of Jerusalem. They plan to continue collaborating with each other and with other members of the Wandell lab, looking at how brain composition correlates with learning and how it could be used to diagnose diseases, learning disabilities or mental health issues.

    The group has already shown that they can identify people with multiple sclerosis (MS) as falling outside the normal curve. People with MS develop what are known as lesions – regions in the brain or spinal cord where myelin is missing. In this paper, the team showed that they could identify people with MS as being off the normal curve throughout regions of the brain, including places where there are no visible lesions. This could provide an alternate method of monitoring and diagnosing MS, they say.

    Wandell has had a particular interest in studying the changes that happen in the brain as a child learns to read. Until now, if a family brought a child into the clinic with learning disabilities, Wandell and other scientists had no way to diagnose whether the child’s brain was developing normally, or to determine the relationship between learning delays and white matter abnormalities.

    "Now that we know what the normal distribution is, when a single person comes in you can ask how their child compares to the normal distribution. That’s where this is headed," said Wandell, who is also the Isaac and Madeline Stein Family professor and a Stanford Bio-X affiliate. Wandell runs the Center for Cognitive and Neurobiological Imaging (CNI), where Mezer and the team developed the MRI technique to quantify white matter, and where the scans for this study were conducted.

    The ability to share data among scientists is an issue Wandell has championed at the CNI and has been promoting in his work helping the Stanford Neurosciences Institute plan the computing strategy for their new facility. “Sharing of data and computational methods is critical for scientific progress,” Wandell said. In line with that goal, the new standard curve for white matter is something scientists around the world can use and contribute data to.

  • "Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less."

    – Marie Curie (via neuromorphogenesis)
  • neurosciencestuff:

    EEG Study Findings Reveal How Fear is Processed in the Brain

    An estimated 8% of Americans will suffer from post-traumatic stress disorder (PTSD) at some point during their lifetime. Brought on by an overwhelming or stressful event or events, PTSD is the result of altered chemistry and physiology of the brain. Understanding how threat is processed in a normal brain versus one altered by PTSD is essential to developing effective interventions.

    New research from the Center for BrainHealth at The University of Texas at Dallas published online today in Brain and Cognition illustrates how fear arises in the brain when individuals are exposed to threatening images. This novel study is the first to separate emotion from threat by controlling for arousal, the strength of the emotional reaction a stimulus provokes, whether positive or negative. Building on previous animal and human research, the study identifies an electrophysiological marker for threat in the brain.

    “We are trying to find where thought exists in the mind,” explained John Hart, Jr., M.D., Medical Science Director at the Center for BrainHealth. “We know that groups of neurons firing on and off create a frequency and pattern that tell other areas of the brain what to do. By identifying these rhythms, we can correlate them with a cognitive unit such as fear.”

    Utilizing electroencephalography (EEG), Dr. Hart’s research team identified theta and beta wave activity that signifies the brain’s reaction to visually threatening images. 

    “We have known for a long time that the brain prioritizes threatening information over other cognitive processes,” explained Bambi DeLaRosa, study lead author. “These findings show us how this happens. Theta wave activity starts in the back of the brain, in its fear center – the amygdala – and then interacts with the brain’s memory center – the hippocampus – before traveling to the frontal lobe where thought processing areas are engaged. At the same time, beta wave activity indicates that the motor cortex is revving up in case the feet need to move to avoid the perceived threat.”

    For the study, 26 adults (19 female, 7 male), ages 19-30 were shown 224 randomized images that were either unidentifiably scrambled or real pictures. Real pictures were separated into two categories: threatening (weapons, combat, nature or animals) and non-threatening (pleasant situations, food, nature or animals). 

    While wearing an EEG cap, participants were asked to push a button with their right index finger for real items and another button with their right middle finger for nonreal/scrambled items. Shorter response times were recorded for scrambled images than for real images. There was no difference in reaction time for threatening versus non-threatening images.

    EEG results revealed that threatening images evoked an early increase in theta activity in the occipital lobe (the area in the brain where visual information is processed), followed by a later increase in theta power in the frontal lobe (where higher mental functions such as thinking, decision-making, and planning occur). A left lateralized desynchronization of the beta band, the wave pattern associated with motor behavior (like the impulse to run), also consistently appeared in the threatening condition.
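    The kind of band-power measurement such findings rest on can be sketched with a direct Fourier sum. The signal below is synthetic, standing in for real EEG, and the band assignments follow common conventions (theta roughly 4–8 Hz, beta roughly 13–30 Hz), not the study's exact pipeline:

```python
import math

# Spectral power of a signal at one frequency, via a direct
# discrete Fourier sum (no external libraries needed).
def power_at(signal, fs, freq):
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs)
             for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs)
             for i, x in enumerate(signal))
    return (re * re + im * im) / n

fs = 256                      # sampling rate, Hz
t = [i / fs for i in range(2 * fs)]   # two seconds of samples

# Synthetic "EEG": a strong 6 Hz (theta-band) oscillation plus a
# weaker 20 Hz (beta-band) oscillation.
signal = [math.sin(2 * math.pi * 6 * ti)
          + 0.2 * math.sin(2 * math.pi * 20 * ti) for ti in t]

theta_power = power_at(signal, fs, 6)    # falls in theta (~4-8 Hz)
beta_power = power_at(signal, fs, 20)    # falls in beta (~13-30 Hz)
```

    Comparing such band powers across conditions (threatening vs. non-threatening images) and across scalp locations is what lets an analysis say "theta increased in the occipital lobe."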

    This study will serve as a foundation for future work that will explore normal versus abnormal fear associated with an object in other atypical populations including individuals with PTSD.

  • Sensory Sensitivity: Stimulation and deprivation alter vascular structure in the brain

    neurosciencestuff:

    Nerves and blood vessels lead intimately entwined lives. They grow up together, following similar cues as they spread throughout the body. Blood vessels supply nerves with oxygen and nutrients, while nerves control blood vessel dilation and heart rate.

    Neurovascular relationships are especially important in the brain. Studies have shown that when neurons work hard, blood flow increases to keep them nourished. Scientists have been asking whether neural activity also changes the structure of local vascular networks.

    According to new research published in the Sept. 3 issue of Neuron, the answer is yes.

    (Source: hms.harvard.edu)

  • This is from the Mooney Lab! Yaaaay :)

    neurosciencestuff:

    Stop and Listen: Study Shows How Movement Affects Hearing

    When we want to listen carefully to someone, the first thing we do is stop talking. The second thing we do is stop moving altogether. This strategy helps us hear better by preventing unwanted sounds generated by our own movements.

    This interplay between movement and hearing also has a counterpart deep in the brain. Indeed, indirect evidence has long suggested that the brain’s motor cortex, which controls movement, somehow influences the auditory cortex, which gives rise to our conscious perception of sound.

    A new Duke study, appearing online August 27 in Nature, combines cutting-edge methods in electrophysiology, optogenetics and behavioral analysis to reveal exactly how the motor cortex, seemingly in anticipation of movement, can tweak the volume control in the auditory cortex.

    The new lab methods allowed the group to “get beyond a century’s worth of very powerful but largely correlative observations, and develop a new, and really a harder, causality-driven view of how the brain works,” said the study’s senior author Richard Mooney Ph.D., a professor of neurobiology at Duke University School of Medicine, and a member of the Duke Institute for Brain Sciences.

    The findings contribute to the basic knowledge of how communication between the brain’s motor and auditory cortexes might affect hearing during speech or musical performance. Disruptions to the same circuitry may give rise to auditory hallucinations in people with schizophrenia.

    In 2013, researchers led by Mooney first characterized the connections between motor and auditory areas in mouse brain slices as well as in anesthetized mice. The new study answers the critical question of how those connections operate in an awake, moving mouse.

    "This is a major step forward in that we’ve now interrogated the system in an animal that’s freely behaving," said David Schneider, a postdoctoral associate in Mooney’s lab.

    Mooney suspects that the motor cortex learns how to mute responses in the auditory cortex to sounds that are expected to arise from one’s own movements while heightening sensitivity to other, unexpected sounds. The group is testing this idea.

    "Our first step will be to start making more realistic situations where the animal needs to ignore the sounds that its movements are making in order to detect things that are happening in the world," Schneider said.

    In the latest study, the team recorded electrical activity of individual neurons in the brain’s auditory cortex. Whenever the mice moved — walking, grooming, or making high-pitched squeaks — neurons in their auditory cortex were dampened in response to tones played to the animals, compared to when they were at rest.

    To find out whether movement was directly influencing the auditory cortex, researchers conducted a series of experiments in awake animals using optogenetics, a powerful method that uses light to control the activity of select populations of neurons that have been genetically sensitized to light. As in a game of telephone, sounds that enter the ear pass through six or more relays in the brain before reaching the auditory cortex.

    "Optogenetics can be used to activate a specific relay in the network, in this case the penultimate node that relays signals to the auditory cortex," Mooney said.

    About half of the suppression during movement was found to originate within the auditory cortex itself. “That says a lot of modulation is going on in the auditory cortex, and not just at earlier relays in the auditory system,” Mooney said.

    More specifically, the team found that movement stimulates inhibitory neurons that in turn suppress the response of the auditory cortex to tones.

    The researchers then wondered what turns on the inhibitory neurons. The suspects were many. “The auditory cortex is like this giant switching station where all these different inputs come through and say, ‘Okay, I want to have access to these interneurons,’” Mooney said. “The question we wanted to answer is who gets access to them during movement?”

    The team knew from previous experiments that neuronal projections from the secondary motor cortex (M2) modulate the auditory cortex. But to isolate M2’s relative contribution — something not possible with traditional electrophysiology — the researchers again used optogenetics, this time to switch on and off the M2’s inputs to the inhibitory neurons.

    Turning on M2 inputs reproduced a sense of movement in the auditory cortex, even in mice that were resting, the group found. “We were sending a ‘Hey I’m moving’ signal to the auditory cortex,” Schneider said. Then the effect of playing a tone on the auditory cortex was much the same as if the animal had actually been moving — a result that confirmed the importance of M2 in modulating the auditory cortex. On the other hand, turning off M2 simulated rest in the auditory cortex, even when the animals were still moving.

    "I couldn’t contain my excitement when we first saw that result," said Anders Nelson, a neurobiology graduate student in Mooney’s group.

  • neurosciencestuff:

    (Image caption: This image depicts the injection sites and the expression of the viral constructs in the two areas of the brain studied: the Dentate Gyrus of the hippocampus (middle) and the Basolateral Amygdala (bottom corners). Image courtesy of the researchers)

    Neuroscientists reverse memories’ emotional associations

    Most memories have some kind of emotion associated with them: Recalling the week you just spent at the beach probably makes you feel happy, while reflecting on being bullied provokes more negative feelings.

    A new study from MIT neuroscientists reveals the brain circuit that controls how memories become linked with positive or negative emotions. Furthermore, the researchers found that they could reverse the emotional association of specific memories by manipulating brain cells with optogenetics — a technique that uses light to control neuron activity.

    The findings, described in the Aug. 27 issue of Nature, demonstrated that a neuronal circuit connecting the hippocampus and the amygdala plays a critical role in associating emotion with memory. This circuit could offer a target for new drugs to help treat conditions such as post-traumatic stress disorder, the researchers say.

    “In the future, one may be able to develop methods that help people to remember positive memories more strongly than negative ones,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, director of the RIKEN-MIT Center for Neural Circuit Genetics at MIT’s Picower Institute for Learning and Memory, and senior author of the paper.

    The paper’s lead authors are Roger Redondo, a Howard Hughes Medical Institute postdoc at MIT, and Joshua Kim, a graduate student in MIT’s Department of Biology.

    Shifting memories

    Memories are made of many elements, which are stored in different parts of the brain. A memory’s context, including information about the location where the event took place, is stored in cells of the hippocampus, while emotions linked to that memory are found in the amygdala.

    Previous research has shown that many aspects of memory, including emotional associations, are malleable. Psychotherapists have taken advantage of this to help patients suffering from depression and post-traumatic stress disorder, but the neural circuitry underlying such malleability is not known.

    In this study, the researchers set out to explore that malleability with an experimental technique they recently devised that allows them to tag neurons that encode a specific memory, or engram. To achieve this, they label hippocampal cells that are turned on during memory formation with a light-sensitive protein called channelrhodopsin. From that point on, any time those cells are activated with light, the mice recall the memory encoded by that group of cells.

    Last year, Tonegawa’s lab used this technique to implant, or “incept,” false memories in mice by reactivating engrams while the mice were undergoing a different experience. In the new study, the researchers wanted to investigate how the context of a memory becomes linked to a particular emotion. First, they used their engram-labeling protocol to tag neurons associated with either a rewarding experience (for male mice, socializing with a female mouse) or an unpleasant experience (a mild electrical shock). In this first set of experiments, the researchers labeled memory cells in a part of the hippocampus called the dentate gyrus.

    Two days later, the mice were placed into a large rectangular arena. For three minutes, the researchers recorded which half of the arena the mice naturally preferred. Then, for mice that had received the fear conditioning, the researchers stimulated the labeled cells in the dentate gyrus with light whenever the mice went into the preferred side. The mice soon began avoiding that area, showing that the reactivation of the fear memory had been successful.

    The reward memory could also be reactivated: For mice that were reward-conditioned, the researchers stimulated them with light whenever they went into the less-preferred side, and they soon began to spend more time there, recalling the pleasant memory.

    A couple of days later, the researchers tried to reverse the mice’s emotional responses. For male mice that had originally received the fear conditioning, they activated the memory cells involved in the fear memory with light for 12 minutes while the mice spent time with female mice. For mice that had initially received the reward conditioning, memory cells were activated while they received mild electric shocks.

    Next, the researchers again put the mice in the large two-zone arena. This time, the mice that had originally been conditioned with fear and had avoided the side of the chamber where their hippocampal cells were activated by the laser now began to spend more time in that side when their hippocampal cells were activated, showing that a pleasant association had replaced the fearful one. This reversal also took place in mice that went from reward to fear conditioning.

    Altered connections

    The researchers then performed the same set of experiments but labeled memory cells in the basolateral amygdala, a region involved in processing emotions. This time, they could not induce a switch by reactivating those cells — the mice continued to behave as they had been conditioned when the memory cells were first labeled.

    This suggests that emotional associations, also called valences, are encoded somewhere in the neural circuitry that connects the dentate gyrus to the amygdala, the researchers say. A fearful experience strengthens the connections between the hippocampal engram and fear-encoding cells in the amygdala, but that connection can be weakened later on as new connections are formed between the hippocampus and amygdala cells that encode positive associations.

    “That plasticity of the connection between the hippocampus and the amygdala plays a crucial role in the switching of the valence of the memory,” Tonegawa says.

    These results indicate that while dentate gyrus cells are neutral with respect to emotion, individual amygdala cells are precommitted to encode fear or reward memory. The researchers are now trying to discover molecular signatures of these two types of amygdala cells. They are also investigating whether reactivating pleasant memories has any effect on depression, in hopes of identifying new targets for drugs to treat depression and post-traumatic stress disorder.

    David Anderson, a professor of biology at the California Institute of Technology, says the study makes an important contribution to neuroscientists’ fundamental understanding of the brain and also has potential implications for treating mental illness.

    “This is a tour de force of modern molecular-biology-based methods for analyzing processes, such as learning and memory, at the neural-circuitry level. It’s one of the most sophisticated studies of this type that I’ve seen,” he says.

  • neurosciencestuff:

    Children’s drawings indicate later intelligence

    How 4-year old children draw pictures of a child is an indicator of intelligence at age 14, according to a study by the Institute of Psychiatry at King’s College London, published today in Psychological Science.

    The researchers studied 7,752 pairs of identical and non-identical twins (a total of 15,504 children) from the Medical Research Council (MRC) funded Twins Early Development Study (TEDS), and found that the link between drawing and later intelligence was influenced by genes.

    At the age of 4, children were asked by their parents to complete a ‘Draw-a-Child’ test, i.e. draw a picture of a child. Each figure was scored between 0 and 12 depending on the presence and correct quantity of features such as head, eyes, nose, mouth, ears, hair, body, arms etc. For example, a drawing with two legs, two arms, a body and head, but no facial features, would score 4. The children were also given verbal and non-verbal intelligence tests at ages 4 and 14.
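    The scoring rule lends itself to a short sketch: one point per feature present. The feature list below is an illustrative subset chosen to total 12, not the official rubric (which also scores correct quantities):

```python
# Sketch of the Draw-a-Child scoring idea: one point for each
# body feature present in the drawing, for a score of 0-12.
# This feature list is illustrative, not the official test rubric.

FEATURES = ["head", "eyes", "nose", "mouth", "ears", "hair",
            "body", "arms", "legs", "hands", "feet", "neck"]

def score_drawing(features_present):
    """Return a 0-12 score: one point per listed feature present."""
    return sum(1 for f in FEATURES if f in features_present)

# The article's example: two legs, two arms, a body and a head,
# but no facial features, scores 4.
example = {"legs", "arms", "body", "head"}
print(score_drawing(example))  # 4
```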

    The researchers found that higher scores on the Draw-a-Child test were moderately associated with higher intelligence scores at ages 4 and 14. The correlation between drawing and intelligence was moderate at ages 4 (0.33) and 14 (0.20).

    Dr Rosalind Arden, lead author of the paper from the MRC Social, Genetic and Developmental Psychiatry (SGDP) Centre at the Institute of Psychiatry at King’s College London, says: “The Draw-a-Child test was devised in the 1920s to assess children’s intelligence, so the fact that the test correlated with intelligence at age 4 was expected. What surprised us was that it correlated with intelligence a decade later.”

    “The correlation is moderate, so our findings are interesting, but it does not mean that parents should worry if their child draws badly. Drawing ability does not determine intelligence; there are countless factors, both genetic and environmental, which affect intelligence in later life.”

    The researchers also measured the heritability of figure drawing. Identical twins share all their genes, whereas non-identical twins only share about 50 percent, but each pair will have a similar upbringing, family environment and access to the same materials.

    Overall, at age 4, drawings from identical twin pairs were more similar to one another than drawings from non-identical twin pairs. Therefore, the researchers concluded that differences in children’s drawings have an important genetic link. They also found that drawing at age 4 and intelligence at age 14 had a strong genetic link.
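    The arithmetic behind this kind of twin comparison can be sketched with Falconer's classic estimate, which doubles the gap between identical- and non-identical-twin correlations; the correlation values below are hypothetical, not the study's estimates:

```python
# Sketch: Falconer's twin-design heritability estimate,
#   h^2 = 2 * (r_identical - r_fraternal),
# which follows from identical twins sharing ~100% of their genes
# and fraternal twins ~50%. Input correlations here are invented.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate from identical (MZ) and fraternal (DZ)
    twin correlations."""
    return 2 * (r_mz - r_dz)

# Hypothetical drawing-score correlations within twin pairs:
h2 = falconer_h2(0.50, 0.35)   # MZ pairs more alike than DZ pairs
```

    A larger MZ-than-DZ gap yields a larger heritability estimate, which is the logic behind concluding that drawing differences "have an important genetic link."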

    Dr Arden explains: “This does not mean that there is a drawing gene – a child’s ability to draw stems from many other abilities, such as observing, holding a pencil etc. We are a long way off understanding how genes influence all these different types of behaviour.”

    Dr Arden adds: “Drawing is an ancient behaviour, dating back beyond 15,000 years ago. Through drawing, we are attempting to show someone else what’s in our mind. This capacity to reproduce figures is a uniquely human ability and a sign of cognitive ability, in a similar way to writing, which transformed the human species’ ability to store information, and build a civilisation.”

    (via artneuroscience)

  • neurosciencestuff:

    Children’s drawings indicate later intelligence

    How 4-year old children draw pictures of a child is an indicator of intelligence at age 14, according to a study by the Institute of Psychiatry at King’s College London, published today in Psychological Science.

    The researchers studied 7,752 pairs of identical and non-identical twins (a total of 15,504 children) from the Medical Research Council (MRC) funded Twins Early Development Study (TEDS), and found that the link between drawing and later intelligence was influenced by genes.

    At the age of 4, children were asked by their parents to complete a ‘Draw-a-Child’ test, i.e. draw a picture of a child. Each figure was scored between 0 and 12 depending on the presence and correct quantity of features such as head, eyes, nose, mouth, ears, hair, body, arms etc. For example, a drawing with two legs, two arms, a body and head, but no facial features, would score 4. The children were also given verbal and non-verbal intelligence tests at ages 4 and 14.

    The researchers found that higher scores on the Draw-a-Child test were moderately associated with higher scores of intelligence at ages 4 and 14. The correlation between drawing and intelligence was moderate at ages 4 (0.33) and 14 (0.20).

    Dr Rosalind Arden, lead author of the paper from the MRC Social, Genetic and Developmental Psychiatry (SGDP) Centre at the Institute of Psychiatry at King’s College London, says: “The Draw-a-Child test was devised in the 1920’s to assess children’s intelligence, so the fact that the test correlated with intelligence at age 4 was expected.What surprised us was that it correlated with intelligence a decade later.”

    “The correlation is moderate, so our findings are interesting, but it does not mean that parents should worry if their child draws badly. Drawing ability does not determine intelligence; there are countless factors, both genetic and environmental, which affect intelligence in later life.”

    The researchers also measured the heritability of figure drawing. Identical twins share all their genes, whereas non-identical twins only share about 50 percent, but each pair will have a similar upbringing, family environment and access to the same materials.

    Overall, at age 4, drawings from identical twin pairs were more similar to one another than drawings from non-identical twin pairs. Therefore, the researchers concluded that differences in children’s drawings have an important genetic link. They also found that drawing at age 4 and intelligence at age 14 had a strong genetic link.
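    The twin-comparison logic above is often quantified with Falconer's formula, a standard tool in twin studies: heritability is estimated as h² = 2 × (r_MZ − r_DZ), twice the gap between the identical-twin and non-identical-twin correlations. The study's actual model and correlations are not reported here, so the numbers below are purely hypothetical.

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's estimate of heritability from twin correlations:
    h^2 = 2 * (r_MZ - r_DZ), where r_MZ is the identical-twin
    correlation and r_DZ the non-identical-twin correlation."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin correlations for drawing scores (illustration only):
r_mz, r_dz = 0.55, 0.40
print(round(falconer_heritability(r_mz, r_dz), 2))  # 0.3
```

    The intuition: identical twins share all their genes and non-identical twins about half, so any excess similarity among identical pairs over non-identical pairs is attributed to that extra genetic sharing.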

    Dr Arden explains: “This does not mean that there is a drawing gene – a child’s ability to draw stems from many other abilities, such as observing, holding a pencil etc. We are a long way off understanding how genes influence all these different types of behaviour.”

    Dr Arden adds: “Drawing is an ancient behaviour, dating back beyond 15,000 years ago. Through drawing, we are attempting to show someone else what’s in our mind. This capacity to reproduce figures is a uniquely human ability and a sign of cognitive ability, in a similar way to writing, which transformed the human species’ ability to store information, and build a civilisation.”

  • neurosciencestuff:

    Bats bolster brain hypothesis, maybe technology, too

    Amid a neuroscience debate about how people and animals focus on distinct objects within cluttered scenes, some of the newest and best evidence comes from the way bats “see” with their ears, according to a new paper in the Journal of Experimental Biology. In fact, the perception process in question could improve sonar and radar technology.

    Bats demonstrate remarkable skill in tracking targets such as bugs through the trees in the dark of night. James Simmons, professor of neuroscience at Brown University, the review paper’s author, has long sought to explain how they do that.

    It turns out that experiments in Simmons’ lab point to the “temporal binding hypothesis” as an explanation. The hypothesis proposes that people and animals focus on objects versus the background when a set of neurons in the brain attuned to features of an object all respond in synchrony, as if shouting in unison, “Yes, look at that!” When the neurons do not respond together to an object, the hypothesis predicts, an object is relegated to the perceptual background.

    Because bats have an especially acute need to track prey through crowded scenes, albeit with echolocation rather than vision, they have evolved to become an ideal testbed for the hypothesis.

    “Sometimes the most critical questions about systems in biology that relate to humans are best approached by using an animal species whose lifestyle requires that the system in question be exaggerated in some functional sense so its qualities are more obvious,” said Simmons, who plans to discuss the research at the 2014 Cold Spring Harbor Asia Conference the week of September 15 in Suzhou, China.

    A focus of frequencies

    Here’s how he’s determined over the years that temporal binding works in a bat. As the bat flies it emits two spectra of sound frequencies — one high and one low — into a wide cone of space ahead of it. Within the spectra are harmonic pairs of high and low frequencies, for example 33 kilohertz and 66 kilohertz. These harmonic pairs reflect off of objects and back to the bat’s ears, triggering a response from neurons in its brain. Objects that reflect these harmonic pairs in perfect synchrony are the ones that stand out clearly for the bat.

    Of course it’s more complicated than just that. Many things could reflect the same frequency pairs back at the same time. The real question is how a target object would stand out. The answer, Simmons writes, comes from the physics of the echolocation sound waves and how bat brains have evolved to process their signal. Those factors conspire to ensure that whatever the bat keeps front-and-center in its echolocation cone will stand out from surrounding interference.

    The higher frequency sounds in the bat’s spectrum weaken in transit through the air more than lower frequency sounds. The bat also sends out the lower frequencies to a wider span of angles than the high frequencies. So for any given harmonic pair, the farther away or more peripheral a reflecting object is, the weaker the higher frequency reflection in the harmonic pair will be. In the brain, Simmons writes, the bat converts this difference in signal strength into a delay in time (about 15 microseconds per decibel) so that harmonic pairs with wide differences in signal strength end up being perceived as way out of synchrony in time. The temporal binding hypothesis predicts that the distant or peripheral objects with these out-of-synch signals will be perceived as the background while front-and-center objects that reflect back both harmonics with equal strength will rise above their desynchronized competitors.
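    The amplitude-to-delay conversion described above can be sketched numerically. The 15-microseconds-per-decibel trading rate comes from the text; the echo levels in the example are hypothetical.

```python
US_PER_DB = 15.0  # amplitude-latency trading rate cited in the text

def perceived_delay_us(low_harmonic_db, high_harmonic_db):
    """Convert the level difference between an echo's low and high
    harmonics into the internal time shift the bat's brain is said
    to apply, desynchronising the harmonics' neural responses."""
    return US_PER_DB * (low_harmonic_db - high_harmonic_db)

# A front-and-center target returns both harmonics at equal strength:
print(perceived_delay_us(60.0, 60.0))  # 0.0 us -> harmonics stay in synch

# A peripheral object whose high harmonic arrives 6 dB weaker:
print(perceived_delay_us(60.0, 54.0))  # 90.0 us -> relegated to background
```

    In this sketch, only objects returning both harmonics at roughly equal strength keep their neural responses synchronized, which is what the temporal binding hypothesis requires for an object to stand out.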

    With support from sources including the U.S. Navy, Simmons’s research group has experimentally verified this. In key experiments (some dating back 40 years) they have placed big brown bats at the base of a Y-shaped platform with a pair of objects – one a target with a food reward and the other a distractor – on the tines of the Y. When the objects are at different distances, the bat can tell them apart and accurately crawl to the target. When the objects are equidistant, the bat becomes confused. Crucially, when the experimenters artificially weaken the high-pitched harmonic from the distracting object, even when it remains equidistant, the bat’s ability to find the target is restored.

    In further experiments in 2010 and 2011, Simmons’ team showed that if they shifted the distractor object’s weakened high-frequency signal by the right amount of time (15 microseconds per decibel) they could restore the distractor’s ability to interfere with the target object by restoring the synchrony of the distractor’s harmonics. In other words, they used the specific predictions of the hypothesis and their understanding of how it works in bats to jam the bat’s echolocation ability.

    If targeting and jamming sound like words associated with radar and sonar, that’s no coincidence. Simmons works with the U.S. Navy on applications of bat echolocation to navigation technology. He recently began a new research grant from the Office of Naval Research that involves bat sonar work in collaboration with researcher Jason Gaudette at the Naval Undersea Warfare Center in Newport, R.I.

    Simmons said he believes the evidence he has gathered about the neuroscience of bats not only supports the temporal binding hypothesis, but also can inspire new technology.

    “This is a better way to design a radar or sonar system if you need it to perform well in real-time for a small vehicle in complicated tasks,” he said.