Category: neuroscience

Scientists Just Used Brain Stimulation to Literally Change How People Think

Hitting the Right Lobes

A team of researchers from Boston University (BU) has explored the possibility of enhancing a person’s ability to learn and control their behavior — in short, to change how people think — by stimulating the brain. BU researcher Robert Reinhart used a new form of brain stimulation, called high-definition transcranial alternating current stimulation (HD-tACS), to “turbo charge” two brain regions that influence how we learn.

“If you make an error, this brain area fires. If I tell you that you make an error, it also fires. If something surprises you, it fires,” Reinhart said in a BU Research press release, referring to the medial frontal cortex, which he calls the “alarm bell of the brain.”

A scan of a brain involved in the study shows how brain stimulation lights up the medial frontal cortex and prefrontal cortex, both involved in how people learn.
The brain’s right hemisphere was more involved in changing behavior. Image credit: Robert Reinhart/Boston University

Reinhart and his colleagues found that stimulating this region, as well as the lateral prefrontal cortex, could change how a person learns. “These are maybe the two most fundamental brain areas involved with executive function and self-control,” he added.

In a study published in the journal Proceedings of the National Academy of Sciences (PNAS), Reinhart’s team described how applying electrical stimulation via HD-tACS quickly and reversibly increased or decreased a healthy person’s executive function, leading to a change in behavior.

Smart Charge

Reinhart’s team tested 30 healthy people, each wearing a soft cap with electrodes that conveyed the stimulation. The test was simple: each subject had to press a button every 1.7 seconds. In the first three rounds of tests, the researchers either cranked up the synchronicity between the two lobes, disrupted it, or did nothing.

The participants’ brain activity, monitored with an electroencephalogram (EEG), showed statistically significant results. When synchrony between the two regions was enhanced, the subjects learned faster and made fewer mistakes, which they corrected quickly. When it was disrupted, they made more errors and learned more slowly.
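The “synchrony” at issue here is the degree to which low-frequency oscillations in two brain regions hold a consistent phase relationship. As a rough illustration only — this is a generic measure, not the study’s actual analysis pipeline, and the signals below are synthetic — the phase-locking value (PLV) is one common way to quantify synchrony between two EEG channels:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (the same construction scipy.signal.hilbert uses)."""
    n = x.size
    spectrum = np.fft.fft(x)
    weights = np.zeros(n)
    weights[0] = 1
    if n % 2 == 0:
        weights[n // 2] = 1
        weights[1:n // 2] = 2
    else:
        weights[1:(n + 1) // 2] = 2
    return np.fft.ifft(spectrum * weights)

def phase_locking_value(x, y):
    """PLV is 1.0 when the phase difference is constant, near 0 for unrelated phases."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# Two 6 Hz oscillations with a fixed phase lag are strongly "in sync";
# a channel of pure noise has no stable phase relationship with either.
t = np.linspace(0, 2, 1000, endpoint=False)
a = np.sin(2 * np.pi * 6 * t)
b = np.sin(2 * np.pi * 6 * t + 0.5)
rng = np.random.default_rng(0)
noise = 2.0 * rng.standard_normal(t.size)

print(phase_locking_value(a, b))      # close to 1.0
print(phase_locking_value(a, noise))  # much lower
```

In this framing, “cranking up” or “disrupting” synchrony corresponds to driving the PLV between the two regions toward 1 or toward 0.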


Even more surprising were the results from 30 new participants who took an adjusted version of the test. This group started with their brain activity temporarily disrupted, but then received stimulation in the middle of the activity. The participants quickly recovered their original brain synchrony levels and learning behavior. “We were shocked by the results and how quickly the effects of the stimulation could be reversed,” said Reinhart.

Although the study leaves much still to be learned, the BU team was the first to identify and test how the millions of cells in the medial frontal cortex and the lateral prefrontal cortex communicate with each other through low-frequency brain waves. “The science is much stronger, much more precise than what’s been done earlier,” said David Somers, a BU brain sciences and psychology professor who wasn’t part of the study.

The bigger question, Somers noted, is how far a person can go with such a technology. Who doesn’t want their brain performance enhanced? Stimulation could produce the same effects as nootropics, or smart drugs, but with fewer potential side effects, since the brain is stimulated directly. Access to such a technology could be a game changer — but just as with smart drugs, there’s the question of who should have it.

The post Scientists Just Used Brain Stimulation to Literally Change How People Think appeared first on Futurism.

New Research Shows how the Brain Processes our Experiences

Cataloging Experiences

Following studies into how the brain helps us to navigate and make decisions, researchers are now exploring how the brain organizes experiences.

A team of neuroscientists from New York University (NYU) sought to observe how memories are transferred over time. Specifically, they wanted to see if the brain would show similar or shared features of different experiences, and understand how this process happens.

They conducted an experiment in which subjects were shown a series of objects on a computer screen, each object paired with one of four pictures showing different locations. Afterwards, subjects were tested on their ability to match the objects with the locations: once immediately after the experiment concluded, and again a week later. During the tests, the team observed the neural patterns associated with the subjects’ individual memories.

The results of this study revealed no overlap in the neural patterns tied to the memories of the object-scene pairings in the test conducted shortly after the experiment. However, during testing a week later, there was considerable overlap in two specific parts of the brain: the hippocampus and the medial prefrontal cortex (mPFC).

“It is as if in order to make sense of the world, the brain re-organizes individual distinct experiences into information clusters — perhaps signaling the emergence of conceptual knowledge,” explains Lila Davachi, an associate professor in NYU’s Department of Psychology. To clarify, the brain organized the subjects’ experiences based on the overlap that occurred — the more that the neural patterns overlapped, or had in common, the more likely it was they would be grouped together.
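The “overlap” described here is often quantified as the correlation between the activity patterns evoked by different memories. The toy sketch below is purely illustrative — the study used fMRI pattern-similarity analyses, and every number here is invented — but it shows the basic idea: two patterns that are initially unrelated can come to share a common component, raising their measured overlap.

```python
import numpy as np

def pattern_overlap(p1, p2):
    """Pearson correlation between two activity patterns (e.g., voxel vectors)."""
    return float(np.corrcoef(p1, p2)[0, 1])

rng = np.random.default_rng(2)
shared = rng.standard_normal(50)  # a hypothetical shared "cluster" component

# "Day 1": the two memories have unrelated activity patterns.
mem_a_day1 = rng.standard_normal(50)
mem_b_day1 = rng.standard_normal(50)

# "A week later": both patterns have drifted toward the shared component.
mem_a_week = shared + 0.7 * rng.standard_normal(50)
mem_b_week = shared + 0.7 * rng.standard_normal(50)

overlap_day1 = pattern_overlap(mem_a_day1, mem_b_day1)  # near 0
overlap_week = pattern_overlap(mem_a_week, mem_b_week)  # clearly positive
print(overlap_day1, overlap_week)
```

The study’s finding maps onto the second comparison: related memories showed little pattern overlap right away, but substantial overlap after a week of consolidation.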

Diminishing Patterns

The discovery didn’t end there. The team also noticed that patterns corresponding to details in certain memories became diminished during this reorganization. This is particularly concerning, as it suggests key details are slowly lost or forgotten after the brain sorts our experiences.

“This aspect of the research points to the tension between ‘good memory’ and learning — if we remember each individual experience as it was encountered, are we able to effectively learn about the underlying regularities across experiences?” asked doctoral recipient and study co-author Alexa Tompary.

This study could shed some light on dementia and Alzheimer’s disease, two conditions that affect memory over time, though the latter may only block access to memories instead of destroying them. Over 5 million Americans suffer from Alzheimer’s disease, while 47 million people worldwide live with dementia. If the study’s observations on diminished neural patterns can help prevent these conditions or lead to better treatment, millions of lives could be saved or improved and many more cherished memories might be left intact.


Researchers Restore Consciousness in Man After 15 Years in a Vegetative State

Electrical Stimulation

Current medical practice tends to regard people with consciousness disorders — those in a vegetative or comatose state — as almost impossible cases. Recovery is uncertain at best. A breakthrough new study, however, suggests that this may no longer be the case. A team of researchers and clinicians from the Institut des Sciences Cognitives (ISC) Marc Jeannerod in Lyon, France restored signs of consciousness to a 35-year-old man who had been in a vegetative state for 15 years through a method called vagus nerve stimulation (VNS).

Used to prevent seizures in those with epilepsy and to treat depression, VNS sends mild pulses of electrical energy at regular intervals to the brain via the vagus nerve. Because it’s the longest cranial nerve, the vagus nerve connects the brain to various parts of the body — even the gut — and is critical to maintaining certain essential body functions, like alertness and walking.

In this new research, a vagus nerve stimulator was implanted in the chest of the patient, who had been in a vegetative state since a car accident, in a procedure conducted by Jacques Luauté and his team of clinicians. The results, published today in the journal Current Biology, were compiled by researchers led by Angela Sirigu from the ISC Marc Jeannerod.

Irreversible No More

After a month of VNS, the patient exhibited improved response capabilities. He was able to respond to simple commands, like following an object with his eyes or turning his head when asked. He also showed an improved attention span, staying awake while his therapist read him a book. At the same time, his ability to respond to perceived “threats” was restored — his eyes opened wider in surprise, for example, when an examiner’s head moved closer to his face.

Various brain tests also revealed improved brain activity. In areas of the brain involved with movement, sensation, and awareness, there was a marked increase in theta EEG signal activity, which is important in distinguishing between a vegetative and a minimally conscious state. Meanwhile, a PET scan spotted an increase in metabolic activity in the brain’s cortical and subcortical regions, which translates to improved neural functional connectivity.
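The theta activity mentioned here refers to power in roughly the 4–8 Hz band of the EEG. As a generic illustration — not the clinical analysis used in the study, and run on a synthetic trace rather than patient data — band power can be estimated directly from the signal’s spectrum:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Approximate power of `signal` within [f_lo, f_hi] Hz via the FFT."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[band].sum())

fs = 250  # Hz, a typical EEG sampling rate
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(1)

# Synthetic trace: a 6 Hz theta rhythm buried in broadband noise.
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

theta = band_power(eeg, fs, 4, 8)    # theta band
alpha = band_power(eeg, fs, 8, 13)   # alpha band, for comparison
print(theta > alpha)  # the 6 Hz component dominates
```

An increase in a patient’s theta band power relative to other bands is the kind of quantitative change clinicians look for when distinguishing a vegetative from a minimally conscious state.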

Image Credit: Corazzol et al.

In short, after 15 years of existing in a vegetative state, the patient had minimal consciousness restored — a feat previously regarded as impossible. Prior to this research, it was thought that patients suffering from consciousness disorders for longer than 12 months could no longer be helped. This study shows that “it is possible to improve a patient’s presence in the world,” Sirigu said in a press release. “Brain plasticity and brain repair are still possible even when hope seems to have vanished.” The study also demonstrates “this fascinating capacity of our mind to produce conscious experience.” The researchers purposefully chose a difficult case for their study to eliminate the probability that such improvement could be due to chance. Still, Sirigu’s and Luauté’s teams are planning to conduct a much larger collaborative study to confirm their findings.


A Stanford Neuroscientist is Working to Create Wireless Cyborg Eyes for the Blind

Seeing The Light

For the nearly two million Americans who have degenerative eye conditions, the ability to see is anything but a guarantee. Although we can slow the progression of vision loss—for example, with special vitamin regimens—there is no cure. And once it’s lost, vision can’t be restored.

Two of the most notable conditions, retinitis pigmentosa and age-related macular degeneration (AMD), cause cells on the retina, which is the region at the back of the eye that converts light into electrical signals, to die off. As a result, those afflicted with the diseases lose their sight as they get older. Thus, these conditions are of increasing concern, given our growing aging population.

Fortunately, a futuristic solution is on the horizon. And it has to do with becoming cyborg.


In the past few years, some patients have been fortunate enough to get devices implanted on their retinas to help them see again. Unfortunately, these devices aren’t very good, only illuminating blotches of light and dark, devoid of details. Also, they’re expensive, costing patients upwards of $150,000. To some, that’s better than nothing. “I understand that I will not have 20/20 vision and that I won’t be able to distinguish faces. But at least I will be able to know that my grandchildren are running across the yard or walking into my house,” one recipient told the University of Michigan in 2014.

But E.J. Chichilnisky, a professor of neurosurgery and ophthalmology at the Stanford University School of Medicine, has a much grander vision for retinal implants. To fulfill it, he plans to create a device that revolutionizes the way electronic devices interface with the brain.

A Dialogue With The Retina

To break down the issue a bit more, in a healthy eye, light passes through the cornea and lens, entering the eye through the pupil. That light then falls on the retina, where a series of different cells turn light into electrical signals that are then transmitted into the brain via the optic nerve.

As previously noted, retinitis pigmentosa and AMD cause many of the cells in the retina to die, so the signals that transmit visual information are stopped before they can reach the brain. Current retinal implants simply take the place of those dead cells, turning light into electric signals.

But the disease doesn’t kill all cells in the retina—and this is where the problems arise with current implants.

Retinal ganglion cells, which pull in information from all the other cells in the retina, seem to survive the culling. There are about 20 different types of retinal ganglion cells scattered across the retina, each of which transmits a different type of information to the brain.

Timing is essential to the function of these cells. One type of cell could tell the brain a region on the image is brighter now than it was a moment ago, and another could tell the brain the image is darker. If both are activated at once, “that’s a nonsense signal sent to the brain,” Chichilnisky says.

That’s part of the reason current retinal implants are so limited. As Chichilnisky notes, they ignore the functioning retinal ganglion cells, activating them all at once. “Vision is like an orchestra trying to play a symphony. It depends on having [the right signals] at the right time and right place,” Chichilnisky tells Futurism. “If you instruct all the instruments to play indiscriminately, someone will hear you. But it’s not music.”

The tiling effect of cells on the retina. Image credit: Chris Sekirnjak

Chichilnisky aims to get each type of ganglion cell, each “instrument,” to play at its proper moment. Eventually, his team’s so-called smart prostheses will be surgically implanted into patients’ eyes and be powered wirelessly, probably from a pair of specialized glasses that the patient would wear.

But they’ve got to do a lot to get there. Getting the right signal to the right cell at the right time is difficult because the mixture of different types of ganglion cells varies between individuals and may even change over time, Chichilnisky says.

Chichilnisky’s solution is to create a device that can not only transmit the right signals to the ganglion cells, but also read the retina to figure out which kind of ganglion cell sits where. Then, the device can stimulate it at the right time to create a cohesive image. “It’s a dialogue with the retina—you have to talk back and forth to the circuit,” he notes. He envisions that the final version of the device will “write” all the time, but “read” the retina only occasionally.

But there are other technical challenges. The device has to be made of the right material so that it can stay on the retina for long periods of time without damaging it or sparking an immune response. It also demands a dense concentration of fine-grained electrodes on a small chip that doesn’t emit too much heat. “We have to take everything we know and program it effectively into a chip that can sense its environment, figure out what’s going on, and do the right thing at the right time in the right place, always. And it has to be smart enough to talk to a neural circuit,” Chichilnisky says. “It’s a tall order.”

A Bright Future

Chichilnisky’s team, made up of neuroscientists, circuit designers, and an eye surgeon, is still figuring out the exact design of their device. Currently, the researchers are testing different techniques on the excised retinas of animals used for other experiments. To perform all the tasks that their compact device will eventually perform, they need an entire room full of scientific equipment. They plan to reduce all this to a small implanted chip.

But this isn’t the only team in the game.

Other scientists are working to restore vision in patients with retinitis pigmentosa and AMD, and already, tests of gene therapy and stem cell therapy techniques have produced interesting results. But Chichilnisky isn’t worried. “I’ll be thrilled if someone comes along and cures AMD while we’re doing this stuff,” he says.


This is because Chichilnisky believes that, regardless of what other developments in treating blindness come about, the technology he is developing will represent the future of neural implants, as their utility extends far beyond just sight. Devices that can both listen and talk to the brain in the same “language” will enable humans to treat neurodegenerative diseases like Parkinson’s and Alzheimer’s or control prosthetic limbs.

The same tech will likely be used to hack our own biology, augmenting our memory and pushing our vision to new limits. “It’s going to happen. If you think it won’t, you haven’t been reading enough,” Chichilnisky says. According to him, the retina—one of the best-understood and most accessible avenues to the brain—is only the beginning.

Chichilnisky hopes to have a lab prototype in the next couple of years and to start testing it on live animals within five years. Predicting when such a device could be tested in humans, to say nothing of when it could be widely available, is murkier. But he hopes that human studies could happen within the next decade.

Though the technology is still at too early a stage to spin off into a company and seek investors, Chichilnisky has no doubt that many will be interested…and soon. “The thing I’m talking about is a revolution,” he says. And we are fortunate enough to be here to witness the start of it all.


Scientists Can Now Turn off Feelings of Thirst in Mice

A Thirst for Knowledge

More than 60 years ago, researchers first identified the part of the brain that controls the feeling of thirst. Now, a team of scientists has identified the exact neurons responsible for telling our bodies when we need to reach for a beverage.

Led by Stanford neuroscience researcher William E. Allen, the team made this discovery by first depriving mice of water for 48 hours. Then, they analyzed the RNA expression in the animals’ median preoptic nucleus (MnPO), the area of the brain previously linked to thirst. Through this analysis, the researchers were able to narrow down their area of interest to specific clusters of neurons within a particular region of the brain.

Next, the researchers focused on how manipulating these neurons would affect water consumption in the mice. When they inhibited the activation of the neurons via optogenetics (the use of light to control genetically modified cells), they noticed that the mice decreased their water consumption. Conversely, when the neurons were activated, water consumption increased.

Some of the mice in the study were trained to access water by pressing a lever. The researchers noticed that when the neural activity in these mice decreased — a sign they weren’t thirsty — so did the rate at which they pressed the lever.

Meanwhile, when mice were given the opportunity to stop the photoactivation of their MnPO neurons by pressing the lever, they did so “vigorously” — expressing a strong desire to free themselves from their feeling of unquenchable thirst.

The Next Level of Humanity

Thirst is one of our most basic drives. If we don’t ingest the appropriate amount of water, our bodies can become dehydrated, causing our biological systems to shut down. Ultimately, we can die of thirst. Conversely, too much water can be problematic as well, causing the cells in our bodies to swell and potentially leading to seizures, coma, and death.

This new research from Stanford could lead to treatments that help people suffering from conditions such as adipsia (a lack of thirst) and polydipsia (an unquenchable thirst) make up for their bodies’ inability to naturally maintain appropriate hydration levels.

Even more broadly, this research tells us something about the brain that we didn’t previously know. Despite the vast amount of research in the field of neuroscience, the brain is still largely a mystery, and each new discovery puts us one step closer to unlocking its secrets.

Not only could a more complete understanding of the brain lead us to better treatments for psychological and neurological disorders, it could also help us reach the next level in human evolution — a time when we navigate virtual reality (VR) through thought alone, control bionic devices with our minds, and access the digital world via brain implants.


Study Reveals How the Brain Creates “Maps” to Help Us Navigate Familiar Locations

Your Brain Says, “Turn Right”

Your brain’s ability to help you navigate your favorite grocery store isn’t as simple as you may think. In fact, it first involves a series of complex calculations that result in a “map” the brain can refer to in the future.

This new discovery comes from Aaron Wilber, assistant professor of psychology and neuroscience at Florida State University. He wanted to better understand how a person goes from seeing an area to creating a mental image used for navigation.

“We have not had a clear understanding of what happens when you step out of a subway tunnel, take in your surroundings, and have that moment where you instantly know where you are,” Wilber explained in a press release. “Now we’re getting closer to understanding that.”

His team’s findings have been published in the September issue of Neuron.

A part of the brain known as the parietal cortex is at the center of this research. It uses the various senses to gather information, and that information is then referenced to determine which actions a person should take. These resulting actions are “recorded” and turned into a memory, which acts as a map the brain can use to get from one familiar place to another.

By recording activity in a rat’s brain as the animal performed certain actions, Wilber’s team discovered that clusters of cells — and not just individual cells — work together to form these map memories. When the same action was performed later, the same patterns of activity were observed.

“These different modules are talking to each other and seem to be changing their connections just like single cells change their connections,” Wilber explained. “But now we’re talking about large groups of cells becoming wired up in different ways as you learn and remember how to make a series of actions as you go about your day-to-day business.”

Dreams and Alzheimer’s

Wilber’s team also uncovered something interesting about dreams through the course of their research. When they recorded the activity in the rat’s brain while it slept, they discovered that the rat replayed the same actions and patterns while dreaming, only at a rate nearly four times faster than the one observed during its waking hours.


“We think these fast-forward ‘dreams’ we observe in rats could explain why in humans when you dream and wake up, you think a lot more time passed than actually has because your dreams happen at high speed or fast forward,” said Wilber. “Maybe dreams happen in fast forward because that would make it easier to create new connections in your brain as you sleep.”

More work needs to be done before we can fully understand how dreams factor into our ability to remember past actions. Thankfully, Wilber recently received funding from the National Institutes of Health, which he plans to use to investigate why the parietal cortex’s ability to function is less effective in patients with Alzheimer’s and other neurological diseases. Ultimately, Wilber’s research, as well as that of others, could lead to better treatment for the nearly 5 million people in the U.S. alone who suffer from Alzheimer’s.


Brain-Scanning Tech May Threaten Our Right to Cognitive Liberty

Mental Privacy

Your thoughts are your own, are they not? Whatever you say or do in public or privately around others, and whatever information you share online, the thoughts, ideals, and intentions contained within your own mind should be yours, and yours alone. However, advances in brain-imaging and brain-scanning technology, developed to study and interpret the human mind, may be encroaching on that assumption, raising the question: do we have the right to control our own thoughts and mental processes?

The belief that we should be the ones in control of our minds is known as cognitive liberty, or the right to mental self-determination. Ask just about anyone the question posed above, and most would agree that the answer is an obvious yes. However, when asked the same question with regard to these technologies’ potential benefits in medicine and law, the answer may be different. If a brain scan could prove someone committed a crime, wouldn’t it be in everyone’s best interest to use it?

In 2008, India decided on that very thing, utilizing such neurotechnology in a court case involving a woman convicted of murder. The judge, citing the woman’s brain scan, concluded that she had “experiential knowledge” about the crime that only the killer could have, leading to a life sentence in prison. At the time, the decision and the use of the brain scan were heavily debated. The technology was still new and not entirely well-proven.

Now, nine years later, the tech has advanced even further, and we as a society are more willing than ever to give over parts of ourselves…provided the results are worth it.

Forgoing Our Rights

Shopping would certainly be easier if companies knew your likes, dislikes, and desires; applying for a job would probably be easier as well, at least from the company’s perspective. Speaking with Forbes about these technological advances, Santa Fe Institute (SFI) CEO and complexity theorist David Krakauer noted how lazy people can be when it comes to making decisions or expending effort, and how willing they are to give up some measure of control to technology to make things easier for themselves.

“What I worry about almost more than anything else is a certain kind of mental laziness, and an unwillingness to engage with the difficult issues…. It’s somehow more pressing in a time where there are systems out there willing to make the decisions for you,” said Krakauer.

Of course, brain-scanning tools wouldn’t just be used on the general public, or by those in the aforementioned medical and legal fields. Scientific American notes that even the military is experimenting with brain monitoring, in this case to increase soldiers’ alertness and perceptual acuity. That’s another benefit of neurotechnology, but the risks and unintended consequences need to be addressed. Only last month, Chinese neuroscientists revealed they could make mice more dominant or submissive by promoting activity in particular parts of the brain. Should brain-scanning proliferate, misuse is sure to follow.


A Conversation About Cognitive Liberty

Brain-scanning and the tools that make it possible can genuinely benefit society, but the potential dangers cannot be ignored. An open conversation needs to be had about cognitive liberty, and it needs to include legal experts, neuroscientists, and — most importantly — the people whose lives will be affected by its application.

It’s not difficult to imagine scenarios in which neurological data relating to an illegal event could work against a defendant. Oppressed populations may have a very different and more extensive chain of mental associations with, say, police and violence than a non-minority charged with the same crime. This is to say nothing of how the technology would be applied, or the intent (or bias) of those whose adjudicative process includes brain scans as evidence. Clearly the implications go far beyond science.

Neuroethicist Paul Root Wolpe said it best at the 2015 World Science Festival: “This is something, as a society, we’re going to have to work out, but I really believe that it’s going to all happen very soon, and that it’s very important for everybody to think about where they would want [the limits of cognitive privacy] set.”


New Study Reveals a Group of Neurons Active in Anxious Brains

An Uncertain Bad Experience

Everyone worries sometimes, but about one in four adults in the U.S. suffers from an anxiety disorder, a mental illness characterized by anxiety that interferes with daily life. New research from Washington University School of Medicine in St. Louis provides insight into the workings of the anxious brain, revealing a group of neurons that becomes active when animals are faced with unpleasant events that are possible — but not certain.

“We found a population of neurons that activated specifically when monkeys thought something bad or annoying – like a puff of air to the face – might be coming, but not when they knew for certain it was,” study author and assistant professor of neuroscience and of biomedical engineering Ilya Monosov told MedicalXpress. “These neurons did not activate when the animals thought they might get something good, like a sip of sweet juice. Only an uncertain bad experience activated these cells.”

The team studied the anterior cingulate cortex, a region of the brain’s outer layer where its two hemispheres meet. The area plays a clear role in mental disorders like anxiety, attention deficit disorder, depression, and obsessive-compulsive disorder. Researchers have confirmed differences in the anterior cingulate cortex between healthy people and people with these mental disorders. However, the functions of the cells associated with these brain differences, and their consequences, are not well understood.

Cellular Roots Of Anxiety

For their study, the team trained the monkeys to respond not just to the uncertainty around when they would receive the unpleasant puff of air in their face, but to the certainty and uncertainty of possible rewards as well. They observed that possible rewards engaged an entirely different group of neurons.

This research opens up possibilities for studying the roots of anxiety, and could one day lead to new treatments. The findings from the study also help elucidate the cellular bases of complex mental processes. It appears that our brain’s response to uncertainty is rooted in our neurons.

“Now that we know which cells are active when an animal is faced with the uncertainty of a bad experience, we can try to disrupt the activity of these cells,” Monosov said to MedicalXpress. “It opens up avenues of research, which may one day lead to new ways to treat disorders such as anxiety and depression.”


The Atomic Structure of a Key Alzheimer’s Protein Has Been Revealed

Enormous Possibilities

For the first time, scientists have revealed the chemical structure of one of the key markers of Alzheimer’s disease, capturing high-resolution images of the abnormal tau protein deposits suspected to be behind Alzheimer’s and other neurodegenerative conditions.

The results will now give scientists an unprecedented glimpse at how these harmful deposits function at a molecular level, and could lead to a number of new treatments to prevent them from forming – and in doing so, help to combat Alzheimer’s and dementia.

“This is a tremendous step forward,” says one of the team, Bernardino Ghetti from Indiana University.

“It’s clear that tau is extremely important to the progression of Alzheimer’s disease and certain forms of dementia. In terms of designing therapeutic agents, the possibilities are now enormous.”

In the new study, researchers led by the MRC Laboratory of Molecular Biology (LMB) in the UK extracted tau protein filaments from the brain of a deceased patient with a confirmed diagnosis of Alzheimer’s disease, and imaged them using a technique called cryo-electron microscopy (cryo-EM).

Alzheimer’s disease is linked to the build-up of two kinds of abnormal protein deposits – tau filaments, which form inside nerve cells, and amyloid beta proteins, which build up outside cells.

In healthy brains, tau acts as a stabiliser, but when the proteins become defective, they can form into bundles of tangled filaments, which are thought to impede communication between brain cells, leading to the neurodegeneration and reduced cognitive ability seen in conditions like Alzheimer’s disease.

Closer Look

Researchers have studied the tau protein’s involvement in Alzheimer’s for decades, but until now, we’ve never been able to see tau filaments this closely – and the molecular insights afforded by the cryo-EM imaging performed here could make drug discovery targeting tau a whole new ball game.

“Drugs that could clear away clumps of protein in the brain are a key goal for researchers, but to directly affect these proteins, molecules that make up a drug need to latch on and bind to their surface,” explains the head of research at Alzheimer’s Research UK, Rosa Sancho.

“Knowing the precise shape of these complex protein structures is enormously valuable in guiding the development of targeted drugs.”

While there’s no shortage of research examining how abnormal tau and amyloid beta proteins function, it’s been unclear just how much artificial samples assembled in the lab differ from the structures that form in the human brain.

Thanks to the tau structures obtained from the deceased patient, researchers now have the ability to investigate how abnormal filaments function at an atomic level in the human brain – and studying these tangles won’t only benefit Alzheimer’s research, the team says.

“This is a big step forward as far as tau goes but it is bigger than that,” neuroscientist Michel Goedert from LMB told James Gallagher at BBC News.

“This is the first time anybody has determined the high-resolution structure [from human brain samples] for any of these diseases. The next step is to use this information to study the mechanisms of neurodegeneration.”

We won’t know the full ramifications of this discovery until scientists have a chance to follow up on the new findings presented here, but it’s clear that this could be a major turning point in studying how to counter these harmful protein clumps. Ghetti described the result as one of the major discoveries of the last quarter century of Alzheimer’s research.

That said, it may take many more years (or even decades) for new treatments to ultimately come out of this – but at least we’re now a big step closer to an outcome that, before this discovery, may have been impossible.

“It’s like shooting in the dark – you can still hit something but you are much more likely to hit if you know what the structure is,” explains one of the team, LMB’s Sjors Scheres.

“We are excited – it opens up a whole new era in this field, it really does.”

The findings are reported in Nature.

Duke Neuroscientist: Brain Augmentation Will Allow Us to Make a New Kind of Human

Believe The Hype?

Brain augmentation is a hot topic; from nootropics, to brain-computer interfaces (BCIs), and even implants designed to restore movement to those with paralysis. But how much of the buzz is just hype? To find out, Edd Gent of SingularityHub interviewed Duke University neuroscientist Mikhail Lebedev, who works on brain-machine interfaces (BMIs) and has recently won a $100,000 prize for his work in brain augmentation.

Lebedev thinks we will see both realistic visual prostheses and technologies for the rehabilitation of patients with spinal cord injury and stroke within the next 10 years. Further advances like the ability to type directly from your brain to a screen using implanted electrodes will take more time — perhaps 20 years. And Lebedev thinks it’ll take much longer before we’re decoding brain activity or free-floating thoughts.

Image Credit: HypnoArt/Pixabay
Will brain augmentation enable us to interface with artificial intelligence (AI)? Lebedev thinks that’s realistic, but that interfacing with AI using augmented reality (AR) and our senses — which are already well-understood, unlike the “code” of the brain — is going to come first. This way we can enhance our own limited capabilities with AI as we learn more about the inner workings of the human brain.

How Much Is Possible

When it comes to augmenting brain function, almost anything is potentially possible. Sensors can be added to the brain to interact with sensory functions. Lebedev cites adding a sensor for electromagnetic fields, or visual sensors around the head’s perimeter for panoramic vision, as examples.

Current research into micro-stimulation of the brain seems to show that it’s possible to suppress some processing steps — not really to enhance function, Lebedev points out. However, he also points out that suppression can itself be helpful, like when the brain is producing biased responses rather than useful solutions.

There are several basic areas of brain augmentation work currently happening: non-invasive tactics such as electroencephalogram (EEG) systems, transcranial direct current stimulation (tDCS), and functional near-infrared spectroscopy (fNIR); invasive approaches such as BCIs, implants, and BMIs; pharmacological approaches such as nootropics and molecular drug treatments; and genetic approaches such as optogenetics and implanted neurons. There is sometimes even significant overlap between some of these areas, and as technologies advance, we may see these categories bleeding together even more.

While access to technology is always a concern, Lebedev points out that continued discovery will always help the problem. Further development tends to make technologies cheaper and more accessible to everyone. (We can see an example of this happening as blockchain and access to cell phones are fostering opportunities in developing countries where infrastructure for landlines was never created.)

And while there are risks that will come with more advanced brain augmentation technologies — as there are with any breakthrough tech — Lebedev has a positive outlook on what the future may hold: “I’m optimistic, so I see mostly upsides. We really want to improve; we want to become less primitive people.”

Human Limitations Could Prevent Us From Advancing in Science. AI Could Help.

Scientific Advancements

Three times a day I take a drug called levodopa. I take it because my brain does not produce enough dopamine; without it, my hands and feet shake and I have difficulty getting my body to do what I want it to do. These are symptoms of Parkinson’s disease and mean that many of my dopamine-producing neurons have died. But, thanks to levodopa, I can feed my brain synthetic dopamine.

Image Source: Wikipedia

It is an incredible little drug that we discovered to be naturally produced in the broad bean plant, pictured here. (The fact that nature makes in this plant the exact molecule that is missing in my brain leaves me dumbfounded.) In the first half of the 20th century, we figured out how to chemically synthesize the compound, allowing us to manufacture it in mass quantities and distribute it to Parkinson’s disease patients worldwide. That we figured all that out is a testament to our ingenuity and our intelligence. If someone had been diagnosed with Parkinson’s just 60 years ago, there would have been nothing that any doctor could do for them. I owe my ability to function today to this plant and to human intellect.

But it is not good enough. Every time I take this drug, it floods my entire brain with dopamine. While this does produce a cool high and makes me more alert and focused, it also has some negative effects. Any drug that you might take, from the THC in marijuana to the caffeine in coffee, contains chemicals that bind to receptors in your brain. This either stimulates or inhibits neurons, which has a cascade of other effects. One of them is that, when a drug is taken regularly, the brain produces extra receptors in anticipation of it; that is why tolerance goes up. This might not sound like such a bad thing, but it does alter the chemistry and structure of your brain.

While the altered structure that results from too much caffeine may not be that harmful, the effects of other drugs, like synthetic dopamine, can be. Most people who take levodopa for many years start to experience a side effect of the drug called dyskinesia that makes it very difficult to control one’s body. To combat this side effect, we have other drugs that further alter the brain, which eventually leads to still more drugs to combat the harmful effects of the previous ones. Before long, patients end up spending most of their day shoving pills into their mouths, and the disease becomes incredibly difficult to manage.

Also, every person’s brain is wired differently, and as a result, drugs produce a slightly different response in each one of us. For now, the pharmaceutical industry is built around a one-drug-fits-all motto, and until we get to the point where we can take a personalized approach to treating diseases, this will continue to be the case.

The Brain’s Complexity

But the biggest impediment to figuring out diseases such as Parkinson’s or Alzheimer’s is the astounding complexity of the brain itself. As James Watson put it, “The brain is the last and grandest biological frontier, the most complex thing we have yet discovered in our universe. It contains hundreds of billions of cells interlinked through trillions of connections. The brain boggles the mind.”

So what to do? Well, it is very likely that the mechanisms underlying neurodegeneration, which everyone will suffer from to some degree, are just too complex for us to figure out. The list of factors that one needs to consider to make sense of such diseases is incredibly long – most likely too long for any human, or even any group of humans, to work through. While we have come pretty far in our treatment of such diseases and have some exciting new therapies in the pipeline, there is no cure on the horizon, and the reality is that we might just not be smart enough to figure these things out.

Image Source: Make Mark

The same can be said for a number of riddles we still face. In the end, the reason we can’t figure out how to deal with climate change, what the origins of the universe are, or even how to settle the Israeli-Palestinian conflict is that there are limits to what we are capable of – and for some issues, like the latter, there is the added difficulty of having too much human stupidity in the way.

Which is why the creation of artificial intelligence might ultimately be a good thing. Just as every parent hopes that their child will be smarter than they are, the hope is that if we give birth to an artificial form of intelligence, it will be wiser than we are and may be able to figure out many of the problems in the world that we cannot.

And we might be a lot closer to bringing about AI than most people think…

Scientists Discover That Our Brains Can Process the World in 11 Dimensions

Seeing Like Never Before

Neuroscientists have used a classic branch of maths in a totally new way to peer into the structure of our brains. What they’ve discovered is that the brain is full of multi-dimensional geometrical structures operating in as many as 11 dimensions.

We’re used to thinking of the world from a 3-D perspective, so this may sound a bit tricky, but the results of this new study could be the next major step in understanding the fabric of the human brain – the most complex structure we know of.

This latest brain model was produced by a team of researchers from the Blue Brain Project, a Swiss research initiative devoted to building a supercomputer-powered reconstruction of the human brain.

The team used algebraic topology, a branch of mathematics used to describe the properties of objects and spaces regardless of how they change shape. They found that groups of neurons connect into ‘cliques’, and that the number of neurons in a clique determines the dimension of the geometric object it forms: a fully interconnected group of n neurons corresponds to an (n-1)-dimensional simplex.
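To make the clique-to-dimension idea concrete, here is a minimal sketch – not the Blue Brain Project’s actual pipeline, which analyzes directed cliques in a detailed cortical reconstruction. This toy version brute-forces the cliques of a tiny undirected network (the adjacency structure is invented for illustration) and reports the dimension of the largest one:

```python
from itertools import combinations

def find_cliques(adj, max_size=5):
    """Enumerate all cliques (fully connected neuron groups) up to max_size.

    A clique of n neurons corresponds to an (n-1)-dimensional simplex.
    `adj` maps each neuron to the set of neurons it connects with.
    """
    nodes = list(adj)
    cliques = []
    for size in range(1, max_size + 1):
        for group in combinations(nodes, size):
            # A group is a clique if every pair within it is connected.
            if all(b in adj[a] for a, b in combinations(group, 2)):
                cliques.append(group)
    return cliques

# Toy network: 4 neurons, all-to-all connected.
adj = {
    0: {1, 2, 3},
    1: {0, 2, 3},
    2: {0, 1, 3},
    3: {0, 1, 2},
}
cliques = find_cliques(adj)
top = max(cliques, key=len)
print(len(top) - 1)  # dimension of the largest clique → 3
```

A real analysis would use a specialized topology library rather than this exponential brute force, and would respect synapse direction, but the counting principle is the same: more neurons in a clique means a higher-dimensional object.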

“We found a world that we had never imagined,” says lead researcher, neuroscientist Henry Markram from the EPFL institute in Switzerland. “There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to 11 dimensions.”

A Lot of Power to Work With

Human brains are estimated to have a staggering 86 billion neurons, with multiple connections from each cell webbing in every possible direction, forming the vast cellular network that somehow makes us capable of thought and consciousness.

With such a huge number of connections to work with, it’s no wonder we still don’t have a thorough understanding of how the brain’s neural network operates. But the new mathematical framework built by the team takes us one step closer to one day having a digital brain model.

To perform the mathematical tests, the team used a detailed model of the neocortex the Blue Brain Project team published back in 2015. The neocortex is thought to be the most recently evolved part of our brains, and the one involved in some of our higher-order functions like cognition and sensory perception.

After developing their mathematical framework and testing it on some virtual stimuli, the team also confirmed their results on real brain tissue in rats.

According to the researchers, algebraic topology provides mathematical tools for discerning details of the neural network both in a close-up view at the level of individual neurons, and a grander scale of the brain structure as a whole.

By connecting these two levels, the researchers could discern high-dimensional geometric structures in the brain, formed by collections of tightly connected neurons (cliques) and the empty spaces (cavities) between them.

“We found a remarkably high number and variety of high-dimensional directed cliques and cavities, which had not been seen before in neural networks, either biological or artificial,” the team writes in the study.

“Algebraic topology is like a telescope and microscope at the same time,” says one of the team, mathematician Kathryn Hess from EPFL.

“It can zoom into networks to find hidden structures, the trees in the forest, and see the empty spaces, the clearings, all at the same time.”

Those clearings or cavities seem to be critically important for brain function. When researchers gave their virtual brain tissue a stimulus, they saw that neurons were reacting to it in a highly organised manner.

“It is as if the brain reacts to a stimulus by building [and] then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc,” says one of the team, mathematician Ran Levi from Aberdeen University in Scotland.

“The progression of activity through the brain resembles a multi-dimensional sandcastle that materialises out of the sand and then disintegrates.”

These findings provide a tantalising new picture of how the brain processes information, but the researchers point out that it’s not yet clear what makes the cliques and cavities form in their highly specific ways.

And more work will be needed to determine how the complexity of these multi-dimensional geometric shapes formed by our neurons correlates with the complexity of various cognitive tasks.

But this is definitely not the last we’ll be hearing of insights that algebraic topology can give us on this most mysterious of human organs – the brain.

The study was published in Frontiers in Computational Neuroscience.

Scientists Hope to Use Stem Cells to Reverse Death in Controversial Study

Back From The Dead

Researchers seem to be setting their sights on increasingly lofty goals when it comes to the human body – from the world’s first human head transplant, to fighting aging, and now reversing death altogether. Yes, you read that right. A company called Bioquark hopes to bring people who have been declared clinically brain-dead back to life. The Philadelphia-based biotech company is expected to start on the project later this year.

This trial was originally intended to go forward in 2016 in India, but regulators shut it down. Assuming this plan will be substantially similar, it will enroll 20 patients who will undergo various treatments. The stem cell injection will come first, with the stem cells isolated from that patient’s own blood or fat. Next, the protein blend gets injected directly into the spinal cord, which is intended to foster growth of new neurons. The laser therapy and nerve stimulation follow for 15 days, with the aim of prompting the neurons to make connections. Meanwhile, the researchers will monitor both behavior and EEGs for any signs of the treatment causing any changes.

Image Credit: ColiN00B/Pixabay

Controversial Ideas

While there is some basis in science for each step in the process, the entire regimen is under major scrutiny. The electrical stimulation of the median nerve has been tested, but most evidence exists in the form of case studies. Dr. Ed Cooper has described dozens of these cases, and indicates that the technique can have some limited success in some patients in comas. However, comas and brain death are very different, and Bioquark’s process raises more questions for most researchers than it answers.

One issue researchers are raising about this study is informed consent. How can participants in the trial consent, and, given that the participants are legally dead, how should researchers complete their trial paperwork? How can brain death be conclusively confirmed, anyway? What would happen if any brain activity did return, and what would the patient’s mental state be? Could anything beyond extreme brain damage even be possible?

As reported by Stat News, in 2016 neurologist Dr. Ariane Lewis and bioethicist Arthur Caplan wrote in Critical Care that the trial is “dubious” and “has no scientific foundation,” calling its nature “at best, ethically questionable, and at worst, outright unethical.” According to Stat News, despite his earlier work with electrical stimulation of the median nerve, Dr. Cooper also doubts Bioquark’s method, feeling that “there is no way this technique could work on someone who is brain-dead. The technique, he said, relies on there being a functional brain stem — one of the structures that most motor neurons go through before connecting with the cortex proper. If there’s no functional brain stem, then it can’t work.”

Pediatric surgeon Charles Cox, who is not involved in Bioquark’s work, agrees with Cooper, commenting to Stat News on Bioquark’s full protocol, “it’s not the absolute craziest thing I’ve ever heard, but I think the probability of that working is next to zero. I think [someone reviving] would technically be a miracle.”

Bioquark CEO Ira Pastor remains optimistic about the company’s protocol. “I give us a pretty good chance,” he said. “I just think it’s a matter of putting it all together and getting the right people and the right minds on it.”

Our Brains Can Predict Future Events Twice as Fast as They Actually Happen

Following the Dots

Situations that require thinking ahead or anticipating what comes next are part of our daily reality. They could be as trivial as trying to catch a ball or as important as avoiding getting hit by a car on the road. Regardless of the specific situation, researchers think there’s a mechanism in the human brain that allows us to visualize what comes next in a series of events.

Experiments that test how this “preplay” of images happens have previously been limited to looking at what goes on in the visual cortexes of animals. Now, researchers from Radboud University in the Netherlands have developed an experiment that looks into what happens in the visual cortex of humans. Their findings have been published in Nature Communications.

The researchers tested 29 university students using a functional magnetic resonance imaging (fMRI) scanner. The students were told to repeatedly watch a white dot move across a screen, an event that took just half a second. After viewing this event 108 times, the students developed a mental expectation of the dot’s movement. Next, the students watched 24 other random dot movies. In some, nothing changed, but in others, the dot didn’t move from the starting or ending position at all, or it moved more slowly.

Image Credit: Ekman et al./Nature Communications

The process was done twice for all the students, with four students serving as controls. The fMRI mapped the brain activity of the students throughout the experiments, taking multiple scans at ultra-high speeds to observe the blood flow in specific tissues. While watching the movies of the dot in the starting position, the students’ visual cortexes would activate, showing that they were visually picturing the completion of the move, but twice as quickly as it would actually occur.

Fast Forward to the Future

In effect, this research indicates that the human brain not only produces an image of future movement in fast forward, but that the visual cortex also works in tandem with past experiences to predict future outcomes. Most probably, it developed this ability to predict the future in order to facilitate quicker action that could be the difference between life and death.

“Imagine you are standing at a road, a car is approaching and you need to decide ‘Do I cross, or do I wait for the car to pass first?’,” lead researcher Matthias Ekman told The Daily Mail. “Our study suggests that our visual system can fast-forward the trajectory of the car and thereby help us with our decision whether to wait or not.”

This is just one example of research expanding our understanding of the infinitely complex human brain. This new knowledge will prove invaluable as scientists work to create systems that let us manipulate it to overcome paralysis, upload skills, or just get generally smarter. Even without any cyborg super-tech, though, the brain is pretty remarkable, so the next time you catch that ball or avoid that speeding car, thank it for predicting the future — even just a short bit of it.

New Electrical Brain Stimulation Could “Awaken” Comatose People

Waking The Brain

After experiencing a severe brain injury, people can slip into a coma. Some of these people begin to show signs of intermittent awareness, but are still not able to communicate. This state is called “minimal consciousness,” and it can leave friends and loved ones of those who are trapped in it feeling helpless and heartbroken.

Image Credit: olho wodzynski via Visualhunt / CC BY-NC-ND

But now, researchers have found that minimally conscious people responded remarkably well to a treatment of daily 20-minute sessions of transcranial direct current stimulation (tDCS) for five consecutive days. These patients, who had been minimally conscious for at least three months, were able to respond to commands, recognize items, and, in a few cases, even communicate.

tDCS stimulates neurons with low-level electricity. In this study, recently published in Brain Injury, the tDCS treatment was applied to the prefrontal cortex, the part of the brain involved in high-level cognitive functions, including consciousness. The prefrontal cortex is also connected to other important hubs in the brain, allowing it to send electrical signals outward.

Waves of electrical activity moving outward from the prefrontal cortex into other areas of the brain are a hallmark of consciousness. This study’s success in prompting some of these patterns with stimulation suggests that scientists may be able to help people with consciousness disorders to partially “awaken.” Further, the development of a device to facilitate communication between people with consciousness disorders and others may be possible.

New Hope

Approximately 315,000 Americans suffer from disorders of consciousness. Of those, about 280,000 are minimally conscious. Disorders of consciousness are most often caused by traumatic brain injuries, but can also be caused by overdoses or poisoning, suffocation, strokes, or severe infections of the brain such as meningitis or encephalitis.

Typically, those who survive a coma for more than two to four weeks evolve into either a vegetative state or a higher state of consciousness, whether that is a minimally conscious state or, sometimes, regained normal consciousness. These states are sometimes difficult to distinguish because it is not possible to communicate with the patients; should this research pan out, it may improve the prognoses for many patients by making better diagnostic work possible.

The work is encouraging, but remains incomplete. While the longer periods of stimulation in this latest research led to more significant improvements in consciousness, it isn’t clear whether patients can sustain these improvements. If not, and ongoing treatments are necessary, a longer study will be required to ensure there are no long-term side effects.

Still, the device itself is cheap to produce and easy to use, so it’s possible that long-term trialing will happen in part as volunteer patients and their families begin to use it. This medical advancement could give many families the hope of communicating and connecting to their loved ones once more.

Researchers Just Created a Revolutionary “Brain on a Chip”

This Is Your Brain on a Chip

Researchers from the Australian National University (ANU) have found a new way to create a brain-on-a-chip, and it could have a remarkable impact on how we study and treat neurological diseases.

The team first designed semiconductor wafers to act as a base for their creation, working with a group in the university’s Research School of Physics and Engineering to fabricate and embed nanowires onto the wafer in a specific geometry. After that, they added brain cells to the wafer for observation. As hoped, the cells used the nanowires as a scaffold as they grew into functioning circuits.

Via Stuart Hay, ANU
Image credit: Stuart Hay, ANU

“We were able to make predictive connections between the neurons and demonstrated them to be functional with neurons firing synchronously,” project group leader Vincent Daria said in an ANU news release. This is the first time researchers have been able to build functional, interconnected neuronal circuits on a nanowire scaffold, and the research has been published in Nano Letters.

The Future of Testing

The ANU team’s chips will provide a useful platform for studying brain cell growth and interconnection. That knowledge could then be used to find innovative new ways to integrate the organic and non-organic for disease treatment. According to Daria, “This work could open up [a] new research model that builds up a stronger connection between materials nanotechnology with neuroscience.”

It may even make it possible for scientists to create neuro-prosthetics for use after degenerative neurological diseases or damage — a process much trickier than creating prosthetics for other body parts because neuro-prosthetics require a fairly complete understanding of how neurons grow, connect, form circuits, and process data. “Unlike other prosthetics like an artificial limb, neurons need to connect synaptically, which form the basis of information processing in the brain during sensory input, cognition, learning, and memory,” explained Daria.

Several research teams have been pursuing “on a chip” technologies with much success. Harvard researchers have been experimenting with their own brain-on-a-chip model in an attempt to study neurological and psychological diseases more effectively. Evatar, a miniature scale female reproductive tract on a chip, was announced earlier this year — its creators hope the device will advance the study of a variety of reproductive diseases and treatments. Even the FDA has acknowledged that the future of drug and treatment development is not in animal testing, but in organs on chips, so breakthroughs like this one out of ANU should continue for the foreseeable future.

Harvard Neuroscientist: Meditation Reduces Stress and Literally Changes Your Brain

The Power of Thought

Meditation, in its most basic sense, is clearing your mind of all thought—or at least, trying to. The practice is used in cultures around the globe for both religious and secular purposes. While it may be best known for its spiritual uses, it is this latter purpose that has recently ignited a firestorm of interest, as scientific research seems to indicate that meditation changes your brain on a fundamental, biological level.

Dr. Robert Puff, a licensed clinical psychologist, notes in Psychology Today that descriptions of meditation techniques date back at least some 3,000 years, to Indian scriptures written in approximately 1000 BCE. However, the practice itself appears to be far, far older. Some evidence suggests that individuals were describing the practice as long as 5,000 years ago. Unfortunately, the exact date of its inception is lost to the annals of time, but we do know that it wasn’t until the 6th century BCE that it truly started its global spread. And it wasn’t until the 20th century that mainstream scientists started researching the impact that it has on the human body—specifically, the impact it has on the human brain.

“Our data shows changes in the brain after just eight weeks.”

To date, a host of medical uses have been found for meditation. The most obvious use is, perhaps, stress management and reduction, which is supported by a plethora of scientific evidence. In a study published in Clinical Psychology Review, researchers at Boston University and Harvard Medical School found that the technique helps alleviate anxiety and allows individuals to better cope with stressful situations.

Along these same lines, a 2011 study by Dr. Fadel Zeidan, assistant professor of neurobiology and anatomy at Wake Forest Baptist Medical Center, found that meditation helps individuals cope with, and better tolerate, physical pain. This work was published in the Journal of Neuroscience. But this is just the start of the research that has been conducted in relation to meditation.

Transforming the Human Brain

In a 2015 study published in Frontiers in Psychology, researchers from UCLA found that individuals who meditate over extended periods have more gray matter volume in their brains than those who do not. The work looked at individuals who had been meditating for an average of 20 years, and the impact was pronounced. As study author Florian Kurth notes:

We expected rather small and distinct effects located in some of the regions that had previously been associated with meditating. Instead, what we actually observed was a widespread effect of meditation that encompassed regions throughout the entire brain.

Other studies support these findings. A 2011 study in Proceedings of the National Academy of Sciences, which was conducted by Yale University, discovered that meditation decreases activity in the default mode network (DMN) in the brain. In the paper, the team noted that this reveals the actual biological impact of meditation and helps bring to light “a unique understanding of possible neural mechanisms of meditation.”

And still, the evidence does not end.

Sara Lazar, a neuroscientist at Massachusetts General Hospital and Harvard Medical School, recently conducted work which found that individuals who meditate ultimately have more gray matter in the frontal cortex and, most notably, that this gray matter is preserved in spite of aging. The significance is overwhelming. As Lazar asserts in an interview with the Washington Post, “It’s well-documented that our cortex shrinks as we get older – it’s harder to figure things out and remember things. But in this one region of the prefrontal cortex, 50-year-old meditators had the same amount of gray matter as 25-year-olds.”

In other words, as a result of transformations in the brain, individuals who meditate have a better chance of retaining their memory function in old age. And it doesn’t take much: Lazar states that, in her study, the average meditation time was just 27 minutes a day, and results appeared just eight weeks after the individuals started the practice. So, how can you reap the benefits?

Becoming Mindful

Scientists assert that using proprioceptive input (also known as deep touch pressure, or DTP) to ground your body is helpful when attempting to reach a meditative state. Research has shown that this kind of pressure results in a reduction in cortisol levels and an increase in serotonin production, decreasing your heart rate and blood pressure.

Thus, the relaxed physical state that comes from proprioceptive input can make it easier to achieve a calm mental state that’s conducive to meditation, and one of the most effective ways to get this proprioceptive input is by using a weighted blanket.

As Amber Martin, an occupational therapist from Utica College, notes, “proprioceptive input is good for pretty much everyone and anyone. It can be very calming and organizing.” By helping you reach a state of peaceful relaxation more quickly, Gravity Blanket makes it easier for you to take advantage of every valuable moment of meditation before you have to return to the busy world outside your mind.

There’s little debate in the science regarding the benefits of meditation. According to research published in the Journal of Consulting and Clinical Psychology, meditation has been linked to reduced feelings of depression, anxiety, and physical pain.

Other studies have explored connections between meditation and improved focus, lowered blood pressure, strengthened memory, reduced fatigue, and…well, the list goes on and on. Yet, reaching a meditative state generally takes a lot of work, and truly clearing your mind is far from easy. A weighted blanket is just one thing that can help you get there.

Futurism only supports products that we use and love. We may collect a share of the sales from the items featured on this page.

The post Harvard Neuroscientist: Meditation Reduces Stress and Literally Changes Your Brain appeared first on Futurism.

Discovery of the Brain’s “Autofill” Function Could Revolutionize Neurological Treatments

Neural Autofill

Though the brain is still very much a mystery, we have figured out a few of its tricks. Among those, we know that it copes with the uncertainties it encounters daily by creating a realistic model of the world to resolve them. Essentially, the brain rummages through its vast storehouse of past experiences to predict each possible contingency for every situation we run across, minimizing surprises, which, from an evolutionary standpoint, can be dangerous.


Neuroscientists have now identified the mechanism our brains use for overcoming uncertainties in auditory information, and it works much like a predictive text function, helping us anticipate what is most likely coming next as we listen to someone talking.
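The predictive-text analogy can be made concrete with a toy sketch (illustrative only; this is not the study’s model). A simple bigram model counts which word tends to follow which and then “anticipates” the likeliest next word, much as the mechanism described above anticipates upcoming sounds:

```python
from collections import Counter, defaultdict

def train_bigrams(sentences):
    # Count, for each word, how often each other word follows it.
    model = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    # "Autofill": return the most frequently observed follower, or None if unseen.
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = ["the dog chased the cat", "the dog barked"]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "dog" follows "the" more often than "cat" does
```

A violation of the learned pattern (an unexpected follower) is exactly the kind of event that would register as a prediction error in the experiment described above.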

In the study, published in PLoS Biology, Newcastle University neuroscientist Yukiko Kikuchi and her team played nonsense speech sounds in various sequences to humans and macaques. As was the case in previous research, both species learned the grammatical rules of the artificial language quickly. After the initial learning period, subjects listened to more sound sequences, including some that violated those artificial grammar rules.

The team used microelectrodes to record responses from neurons that process sound information, focusing on both hundreds of individual neurons and large populations. In this way, they were able to compare the species’ responses and see that both species use the same neural mechanisms to process the sounds.

Understanding Brain Function

The researchers found that the predictive brain activity was remarkably similar in both species and that it varied depending on the sequence of sounds. This allowed the team to identify which activities utilized which populations of neurons and how the brain modifies its predictions.

This knowledge advances our understanding of the evolution of language and how the brain processes speech. It may lead to new, more accurate diagnostic tools and techniques for a variety of neurological conditions, such as dyslexia, attention-deficit hyperactivity disorder, and schizophrenia — conditions in which predictive responses are “off” in some way.

However, the methods used in the study may not translate to “real” languages, which are significantly different from the made-up language the researchers used. More research should be able to determine whether that is the case.

The post Discovery of the Brain’s “Autofill” Function Could Revolutionize Neurological Treatments appeared first on Futurism.

We Need To Talk About How Human Rights Will Work in the Age of AI

Last Refuge For Privacy

In response to advances in neuroscience and technologies that alter or read brain activity, some researchers are proposing a recognition of new human rights to mental integrity. These would protect people from having their thoughts abused, hacked, or stolen. The idea of this kind of human right is a recognition that although brain-related technologies have the potential to transform our lives in many positive ways, they also have the potential to threaten personal freedom and privacy.

A large portion of brain-related technology owes its development to medical research and physical need; some diagnostic tools and treatments, for example, need to “read” brain activity. However, this area of research and development has also given birth to performance enhancers, game interfaces, and brain-computer interfaces that can control anything you want them to.

According to University of Basel neuroethicist Marcello Ienca and University of Zurich human rights lawyer Roberto Andorno, these advances in neuroscience and technology threaten personal freedom and privacy in new ways. The pair argues that we are not yet doing enough to protect ourselves, and the human brain as the last refuge for human privacy. They have therefore offered up four new human rights they hope can preserve that refuge: the rights to cognitive liberty, mental integrity, mental privacy, and psychological continuity.

“Cognitive liberty” concerns a person’s freedom to alter their mental state using brain stimulation and other techniques—and to refuse to do so. If this human right is recognized, it could, for example, make it illegal for employers to use any kind of brain stimulation techniques on employees. The right to “mental integrity” concerns the possibility of hackers who might interfere with brain implants that are otherwise being willingly used by their owners. Hacking might take the form of sending false signals to implants or taking control of the implant itself. 


The right to “mental privacy” would guard against a person having their mind read without their consent, whatever form that takes as technology continues to improve. Under the current state of the law, you might have better luck pursuing someone for stealing and publicly sharing photographs or documents you took pains to keep private than if they used a device to steal your memories or thoughts and posted them publicly.

The right to “psychological continuity” would protect people from actions that could disrupt their sense of identity, or harm their feeling of going through life being the same person. The use of electrode implantation for deep brain stimulation to control Parkinson’s symptoms and other conditions, for example, has already triggered concerns about personal identity — with some patients indicating that they no longer feel like themselves after the procedure.

“The question we asked was whether our current human rights framework was well equipped to face this new trend in neurotechnology,” Ienca told The Guardian. “The information in our brains should be entitled to special protections in this era of ever-evolving technology. When that goes, everything goes.”

Brain Technologies In Bloom

Although some may think these concerns sound like science fiction, brain-related technologies and brain-computer interfaces are in development now at an astonishing rate—and this will only gain momentum, producing more and more sophisticated technologies. If you’re thinking the four human rights related to the integrity of the mind sound far-fetched, consider the breakthroughs that have come to fruition within the last few years alone.

Engineers created a robot that can be controlled with brainwaves. Scientists connected monkeys using brain implants, and the monkeys learned to “communicate” at a distance using their minds. They have also achieved brain synchronicity in rats and connected two humans with EEG caps well enough to allow them to communicate yes and no answers. Earlier this year, people suffering from locked-in syndrome were able to communicate using BCIs.

While BCI technology isn’t perfect yet, it’s progressing fast. In fact, humans have used BCIs to control some amazing things. Researchers in Korea have used BCIs to control the movement of turtles by mentally controlling their instinctive escape behavior. Earlier this year, a quadriplegic man used a BCI and Functional Electrical Stimulation (FES) technology to “think” his arm into moving again. Facebook is already working on its own BCI, which will allow people to “think” to each other without typing or speaking. And Tesla’s Elon Musk is creating what may be the most ambitious BCI application yet: a third layer of the human mind that will merge human intelligence with AI.

Brain implant technologies have also been exploding: a Harvard team is working on implants that are not rendered less effective by scar tissue, and may soon be using them to restore sight to the blind. Other researchers are working to create electrodes from glassy carbon that are ideal for use in BCIs and may help paralyzed people become mobile again. MIT researchers have developed ultrafine fibers approximately 200 micrometers in diameter that are flexible like brain tissue and can be used to manipulate chemical, electrical, or optical signals.

Researchers are even reprogramming actual brain cells to fight Parkinson’s disease. They’ve already been able to “decode” brain activity, and the refinement and sophistication of these decoding skills are growing. It is safe to assume more reprogramming and decoding abilities are coming.

Ienca agrees that although some of these concerns are ahead of the technology, that won’t be true for long, and it’s typically better to be proactive. At least one experiment has already shown that brain signals are likely to be hackable in the future. “We cannot afford to have a lag before security measures are implemented,” he told The Guardian. “It’s always too early to assess a technology until it’s suddenly too late.”

The post We Need To Talk About How Human Rights Will Work in the Age of AI appeared first on Futurism.

Scientists Transplanted A Rat’s Head Onto Another Rat, Hoping to Replicate It in Humans

No Brain-Damaging Blood Loss

Okay, so it’s not every day you hear about a real-life head transplant, but there is such a thing. In fact, research into transplanting heads has been around for a while, with two-headed animal experiments dating back to the early 1900s and continuing into the 1950s.

Now, scientists from China have made a remarkable breakthrough by transplanting the head of one organism onto the body of another.

For their work, they took the head of a smaller rat and attached it to a bigger one, creating what is effectively a two-headed rat. It is important to note that the rat did not survive long-term, but that was never the goal. The team knew the rat would not live long, as there are still a lot of technical and scientific issues that need to be resolved before we can successfully perform head transplants on living organisms and have them survive.

It also provides the possibility of long-term survival.

However, this is an astonishing step forward in performing viable head transplants in that the doctors were able to avoid any brain-damaging blood loss while the donor’s head was being attached.

The goal of this particular experiment was simple: the scientists wanted to know if they could successfully transplant a head without damaging the brain due to excessive blood loss. And they did. To do this, they had to keep the blood circulation going during the transplant by attaching the donor rat’s blood vessels to the other rat.

“We developed a bicephalic model of head transplantation to study these aspects,” the scientists report in the paper that they published in the journal CNS Neuroscience and Therapeutics.

A Real Head Turner

While the idea is a real head turner — and maybe a head shaker, for some — scientists assert that head transplants are worth exploring, as the procedure could help millions of people worldwide who are suffering from muscle or nerve problems. It could also allow us to take the head of a person suffering from fatal cancer and transplant it onto a healthy human body.

Understandably, there are a number of issues that have to be resolved before it would be possible to transplant human heads. For example, apart from making sure the brain isn’t damaged by blood loss during the transplant process, there are a number of other concerns, such as rejection by the immune system.

Image credit: CNS

Still, grafting a head onto another body while keeping the brain safe from the damage associated with blood loss is certainly a positive step forward as far as developing a viable means of conducting human head transplants.

However, some experts assert that the problems associated with transplanting human heads extend beyond scientific and technical concerns and touch upon the fundamental nature of human psychology.

Arthur Caplan, founding director of NYULMC’s Division of Medical Ethics, previously told Futurism that, in such a procedure, a person could suffer from unprecedented levels of insanity. He ultimately stated that this would likely result from things such as “novel chemistry flooding the brain, unfamiliar input coming in from the nervous system of the body, etc.”

While some claim to have already successfully performed the procedure, there’s no clear evidence to support such claims.

In any case, if we are to ever successfully complete such a procedure in the future, this most recent study provides an important piece of the puzzle. As the researchers note in their abstract, “The application of vascular grafting can also provide the possibility of long-term survival of the model.”

The post Scientists Transplanted A Rat’s Head Onto Another Rat, Hoping to Replicate It in Humans appeared first on Futurism.

Academy Award-Nominated Filmmaker’s Next Project? Merging The Human Brain With AI

Not So Sci-Fi

Over the last few months, the world’s most prominent innovators have unveiled a host of projects aimed at decoding the brain and augmenting human intelligence. Mark Zuckerberg announced that he is working to make a brain interface that will let people communicate via their thoughts. Bryan Johnson created Kernel and invested $100 million to make our neural code programmable. Just last week, Elon Musk finally unveiled Neuralink, the company he founded to correct traumatic brain injuries and increase human intelligence.

As a result, we are fast entering a new age in humanity, one in which we dictate the course of our own evolution. And thanks to the aforementioned projects, the humans that exist a century from now will likely look little like the humans that are alive today.

“Unlocking the brain is the most consequential advancement in history.” – Bryan Johnson

In order to trace this remarkable stage of evolution, Warrior Poets (the production company founded in 2004 by Academy Award-nominated filmmaker Morgan Spurlock) and Futurism Studios are creating a new film. The film will be executive produced by Morgan Spurlock, Jeremy Chilnick, and Taryn Southern.

Currently called “The Untitled Human Intelligence Project,” the feature-length documentary will investigate the ways that human intelligence (HI) and artificial intelligence (AI) are evolving, focusing specifically on how future technology might allow us to augment our intelligence in ways previous generations never dreamed.

“Imagine a future where you could eliminate depression without taking a pill, instantly learn a foreign language, or communicate with loved ones telepathically. While this may sound like a sci-fi novel, it may be our not-so-distant future,” says Spurlock, who is directing the film. Of course, such developments come with a host of ethical and theoretical questions that will need to be answered.

Altering Our Most Precious Resource

A new era in biology, one in which we have everything from cognitive enhancement to the restoration of motor function in paralyzed patients, prompts a number of compelling questions. For example, if we can ameliorate depression, Alzheimer’s, Parkinson’s, and other disabilities, can our capacity to love, feel empathy, and cooperate also be improved? If so, by how much? How do we overcome issues related to wealth inequalities and ensure that all people are able to benefit from human augmentation? In a world of cyborgs, what does it mean to be human?

The film will interrogate and attempt to answer these most pressing questions—capturing the evolution of AI and HI and giving audiences a glimpse of what the connected evolution of these two intelligences will mean for humanity.


Unlike Hollywood dystopias, which are steeped in unsubstantiated, Terminator-style fears of AI, the documentary will be rooted in commentary from experts in neuroscience and engineering, providing an honest look at what researchers and innovators working in the field have to say about AI and the future of humanity.

Johnson’s company, Kernel, will be among the dozen or so scientists, ethicists, universities, and private institutions Spurlock and his team will follow throughout the year in this film. Kernel has opened its doors in the hope of igniting interest in the field, shining a light on the entire neurotech landscape, and facilitating public discourse.

As Johnson notes, “Unlocking the brain is the most consequential advancement in history. Human intelligence is the most powerful and precious resource we have — everything we are, everything we do, and everything we will become, will come from the brain.”

Viewers can expect to see the documentary hit screens soon.

The interview with Bryan Johnson was slightly edited for brevity and clarity. Johnson is an investor in Futurism; he does not hold a seat on our editorial board or have any editorial review privileges.

The post Academy Award-Nominated Filmmaker’s Next Project? Merging The Human Brain With AI appeared first on Futurism.

AI Will Help Humanity Soar Past the Limits of Our Intelligence

Three times a day I take a drug called levodopa. I take it because my brain does not produce enough dopamine; without it, my hands and feet shake, and I have difficulty getting my body to do what I want it to do. These are symptoms of Parkinson’s disease and mean that many of my dopamine-producing neurons have died. But, thanks to levodopa, I can feed my brain synthetic dopamine.

Wikimedia Commons

It is an incredible little drug, one we discovered to be naturally produced in the broad bean plant, pictured here. (The fact that nature makes in this plant the exact molecule that is missing in my brain leaves me dumbfounded.) In the first half of the 20th century, we figured out how to chemically produce the compound, allowing us to manufacture it in mass quantities and distribute it to Parkinson’s disease patients worldwide. That we figured all that out is a testament to our ingenuity and our intelligence. If someone had been diagnosed with Parkinson’s just 60 years ago, there would have been nothing any doctor could do for them. I owe my ability to function today to this plant and to human intellect.

But it is not good enough. Every time I take this drug, it floods my entire brain with dopamine. While this does produce a cool high and makes me more alert and focused, it also has some negative effects. Any drug you might take, from the THC in marijuana to the caffeine in coffee, contains chemicals that bind to receptors in your brain, either stimulating or inhibiting neurons, which sets off a cascade of other effects. One of those effects is that, when a drug is taken regularly, the brain produces extra receptors in anticipation of it; that is why tolerance goes up. This might not sound like such a bad thing, but it does alter the chemistry and structure of your brain.
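The receptor story can be caricatured in a few lines of code. This is a deliberately crude toy model with made-up numbers, not real pharmacology: the felt effect of a fixed dose shrinks as the brain adds receptors in anticipation of it.

```python
def simulate_tolerance(dose=1.0, days=30, upregulation=0.05):
    # Toy model (illustrative only): felt effect = dose / receptors.
    # Regular dosing prompts the brain to upregulate receptors,
    # so the same dose feels weaker over time.
    receptors = 1.0
    effects = []
    for _ in range(days):
        effects.append(dose / receptors)   # felt effect shrinks as receptors grow
        receptors += upregulation * dose   # brain adds receptors in anticipation
    return effects

effects = simulate_tolerance()
print(f"day 1 effect: {effects[0]:.2f}, day 30 effect: {effects[-1]:.2f}")
# by day 30 the same dose feels less than half as strong
```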

While the altered structure that results from too much caffeine may not be that harmful, the changes caused by other drugs, like synthetic dopamine, can be. Most people who take levodopa for many years start to experience a side effect of the drug called dyskinesia, which makes it very difficult to control one’s body. To combat this side effect, we have other drugs that further alter the brain, which eventually leads to still more drugs to combat the harmful effects of the previous ones. Before long, patients end up spending most of their day shoving pills into their mouths, and the disease becomes incredibly difficult to manage.

Also, every person’s brain is wired differently, and as a result, the same drug produces a slightly different response in each of us. For now, the pharmaceutical industry is built around a one-drug-fits-all model, and until we reach the point where we can take a personalized approach to treating diseases, this will continue to be the case.

But the biggest impediment to figuring out diseases such as Parkinson’s or Alzheimer’s is the astounding complexity of the brain itself. As James Watson put it, “The brain is the last and grandest biological frontier, the most complex thing we have yet discovered in our universe. It contains hundreds of billions of cells interlinked through trillions of connections. The brain boggles the mind.”

So what to do? Well, it is very likely that the mechanisms underlying neurodegeneration, which everyone will suffer from to some degree, are just too complex for us to figure out. The list of factors one needs to consider to make sense of such diseases is incredibly long, most likely too long for any human, or even any group of humans, to untangle. While we have come pretty far in our treatment of such diseases and have some exciting new therapies in the pipeline, there is no cure on the horizon, and the reality is that we might just not be smart enough to figure these things out.

The same can be said for a number of riddles we still face. In the end, the reason we can’t figure out how to deal with climate change, or what the origins of the universe are, or even how to settle the Israeli-Palestinian conflict is that there are limits to what we are capable of; for some issues, like the latter, there is the added difficulty of having too much human stupidity in the way.

Which is why the creation of artificial intelligence might ultimately be a good thing. Just like every parent hopes that their child will be smarter than they are, the hope is that if we give birth to an artificial form of intelligence, it will be wiser than we are and may be able to figure out many of the problems in the world that we cannot.

And we might be a lot closer to bringing about AI than most people think.

The post AI Will Help Humanity Soar Past the Limits of Our Intelligence appeared first on Futurism.

The Future of the Human Brain: Smart Drugs and Nootropics

A Scientific Way of Enhancing Performance?

Doing research to define what nootropics are is kind of like asking techies which VPN is the best one to use. You get an avalanche of responses from a group of people who are absolute evangelists for the technology, and each is totally sold on their particular solution. This isn’t a criticism; it is the observation of an interested outsider looking in on a movement—the movement to achieve next-level humanity.

Nootropics, broadly speaking, are substances that can safely enhance cognitive performance. We’re not talking about convincing your doctor to write you a Ritalin or Adderall prescription to help you stay awake longer. We’re talking about a group of (as yet unclassified) research chemicals, over-the-counter supplements, and a few prescription drugs, taken in various combinations—that are neither addictive nor harmful, and don’t come laden down with side-effects—that are basically meant to improve your brain’s ability to think.

Part of the secret sauce of nootropics is finding the right combination of the right substances, and taking them in the right amounts and proportions at the right times. That series of alignments is your “stack,” and a growing number of Silicon Valley innovators are relying on their stacks every day.

The idea behind nootropics is not taking a magic pill and transforming yourself into a genius instantly. The core value here is optimization.

Some nootropics you’ve probably already tried, like fish oil or caffeine, or maybe creatine or L-theanine, an amino acid commonly found in green tea. Others you may not have heard of, substances like the class of about 20 drugs called “racetams.”

To be clear, the idea behind nootropics is not taking a magic pill and transforming yourself into a genius instantly. The core value here is optimization: incremental changes—sometimes tiny ones—that together produce a cumulative effect that is powerful and progressive. Nootropics aren’t like winning the lottery (something that happens all in one day). They are an investment; they pay out compound cognitive interest.

In other words, nootropics are not like the pills in Limitless, which make you super-smart instantly. Rather, they pose the seductive question of what you might achieve if you were 10 percent more productive…all the time.
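The arithmetic behind that seduction is simple compound growth. A quick back-of-the-envelope comparison (hypothetical numbers, not a claim about any supplement): a flat 10 percent edge multiplies your output by a constant 1.1, but if each year’s gains build on the last, the advantage compounds:

```python
def compounded_edge(rate=0.10, years=10):
    # (1 + rate) ** years: each year's gain builds on the previous year's
    return (1 + rate) ** years

flat = 1.10                      # always 10% more productive: a constant 1.1x
compounded = compounded_edge()   # 10% per year, compounding for a decade
print(f"flat: {flat:.2f}x, compounded over 10 years: {compounded:.2f}x")
# compounding turns a 10% yearly edge into roughly 2.59x after a decade
```

Whether cognitive gains actually compound this way is, of course, exactly the open question the rest of this section explores.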

Via Pixabay

Optimization Neuroscience

Right now, it’s not entirely clear how nootropics as a group work, for several reasons. How effective any one component of a nootropic supplement (or a stack) is depends on many factors, including the neurochemistry of the user, which is connected to genes, mood, sleep patterns, weight, and other characteristics. In other words, results vary, and they can vary a lot.

Second, some nootropic components have received more research attention than others. For example, it is well-known how caffeine affects the body. Unfortunately, as anyone with a coffee habit can tell you, the benefits of caffeine can be lost over time as your body builds up a tolerance to it, and some people experience withdrawal-like symptoms when they don’t ingest caffeine, including headaches.

This is where the amino acid L-theanine comes back into the picture, because research has shown that it reduces physiological and psychological stress responses, making it a great partner for caffeine. It also promotes neuronal health. Studies on the two substances taken together show that they promote alertness, attention, and task switching abilities. This is why the L-theanine and caffeine combination is part of many stacks.

The third factor in where the science behind nootropics stands is that many of these substances have been the subject of research studies that proponents don’t think are exactly on point. For example, Kamal Patel points out that racetams are designed to improve cognitive function, but also notes that researchers aren’t exactly sure what their mechanisms are. According to Patel, this is in part because most research on Piracetam focuses on the elderly and people with real cognitive deficits; in contrast, the people who use nootropics are primarily young professionals who are already at the top of their game (at least, their unenhanced game).

The science behind nootropics is promising, but it is in its nascent phase.

That said, there are a number of studies that have found benefits in nootropics. One study found that brain function in elderly patients improved significantly with regular doses of Piracetam. Other research has shown that Piracetam improves memory in adults and can help students improve their nonverbal learning skills. The bottom line here is that researchers know Piracetam has some positive benefits, but they don’t yet understand why or how.

So, ultimately, the science behind nootropics is promising, but we are still in the early research stage, meaning that it is very hard to say anything definitive about combinations and how they work.

One final point of interest regarding research and nootropics is that the dedicated fan base of nootropic users are currently serving as their own long-term research cohort. They meet in person sometimes, for everything from informational seminars to meditation contests. They gather in their own nootropics Subreddit and in other online forums, sharing information on stacks, optimizing doses, and results. This is DIY, anecdotal science, not placebo-controlled, and certainly not generalizable—but to many people looking to optimize their life (and their brain), the evidence is persuasive enough for them to give nootropics a shot.

On the Market Now

For people who are interested in self-improvement but stand firm on the scientific method, there are some reasonable options to try. There are some startups creating and selling nootropics that have research scientists on their teams, with the aim of offering reliable, proven cognitive enhancers.

Qualia is one such nootropic. This 42-ingredient supplement stack is created by the Neurohacker Collective, a group that boasts an interdisciplinary research team. Among the members of Qualia’s science team are Sara Adães, who has a PhD in neuroscience; Andrew Huberman, a professor of neurobiology at the Stanford University School of Medicine; Lindsay Briner, who is in her third year of PhD study in cognitive neuroscience; Dr. Rishi Khatri, JD/MD; Jon Wilkins, a Harvard PhD in biophysics; and Heather Sandison, a naturopath. You can find a complete list of Qualia’s ingredients on the website, as well as some of the research behind the stack’s formulation.

Some of Qualia’s ingredients are found in other stacks: Noopept, for example, and vitamin B complex are some of the usual suspects in nootropics. Green tea extract, L-theanine, taurine, and Ginkgo biloba are also familiar to many users, although many of the other components might stray into the exotic for most of us. Mucuna pruriens, for example, is a source of L-DOPA, which crosses the blood–brain barrier to increase concentrations of dopamine in the brain; L-DOPA is commonly used to treat dopamine-responsive dystonia and Parkinson’s disease.

The website says that the ‘smart drug’ is designed to provide users with “immediate, noticeable uplift of [their] subjective experience within 20 minutes of taking it, as well as long-term benefits to [their] neurology and overall physiologic functioning.” For people climbing their way up in Silicon Valley, it’s a small price to pay. What would you do with 10 percent more productivity, time, income, or intelligence?

Note: Futurism curates products that we believe in. Here, we have partnered with the team behind Qualia in order to offer readers a 10% discount using the coupon code ‘futurism’. Futurism also has affiliate partnerships, so we may get a share of the revenue from purchases.

The post The Future of the Human Brain: Smart Drugs and Nootropics appeared first on Futurism.

Our Brains May be 100 Times More Powerful Than We Thought

The human brain is so powerful that some computing systems, called neural networks, are patterned after how it works. As such, figuring out how the brain’s many processes work continues to be the subject of much research.

A recent study published in the journal Science by a team of researchers from the University of California Los Angeles (UCLA) has uncovered new information about the brain’s inner workings, and it could change our understanding of how learning happens.

Credit: sydney g / Flickr

The study focused on a particular part of neurons called dendrites. Dendrites are long, branch-like structures connected to the roundish cell body, called the soma. Dendrites were long thought to serve only as passive conduits, relaying the signals they receive from other neurons to the cell body. The UCLA study, however, found that dendrites may actually be generating their own electrical spikes — and at rates up to 10 times those of somas.

The researchers arrived at this conclusion by studying rats. Instead of implanting electrodes into the dendrites themselves, they placed the electrodes next to them. They found that dendrites were five times more active than somas while the rats were asleep, and 10 times more active while the rats were awake and exploring.

Understanding the Brain

“A fundamental belief in neuroscience has been that neurons are digital devices. They either generate a spike or not,” said Mayank Mehta, the study’s senior author, in a UCLA press release. “These results show that the dendrites do not behave purely like a digital device. Dendrites do generate digital, all-or-none spikes, but they also show large analog fluctuations that are not all or none. This is a major departure from what neuroscientists have believed for about 60 years.”

Since dendrites are estimated to make up more than 90 percent of neural tissue — about 100 times larger in volume than somas — this could mean that the human brain has 100 times more capacity than previously believed.
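The “100 times” figure follows from simple proportional reasoning. Here is a back-of-envelope sketch, assuming, purely for illustration, that processing capacity scales linearly with the volume of electrically active tissue (an assumption made here, not the study’s actual model):

```python
# Back-of-envelope illustration only: assume processing capacity scales
# linearly with electrically active tissue volume (an assumption made
# here for illustration, not the study's actual model).
soma_volume = 1.0                  # arbitrary units
dendrite_volume = 100.0            # ~100x the soma volume, per the study

old_capacity = soma_volume                     # old view: only somas compute
new_capacity = soma_volume + dendrite_volume   # new view: dendrites compute too

print(new_capacity / old_capacity)             # roughly a 100-fold increase
```

Under that linear-scaling assumption, counting dendrites as computing elements multiplies the estimate by roughly 100.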


Ultimately, this research could help medical professionals develop new ways to treat neurological disorders. The study may also shed new light on how learning really occurs.

“Many prior models assume that learning occurs when the cell bodies of two neurons are active at the same time,” explained author Jason Moore in the press release. “Our findings indicate that learning may take place when the input neuron is active at the same time that a dendrite is active — and it could be that different parts of dendrites will be active at different times, which would suggest a lot more flexibility in how learning can occur within a single neuron.”


Traumatic Memory? Let Science Erase It Forever

Making Memories

It would be completely misleading to say that our memories are filed neatly in our brains, ready to be pulled out whenever we feel nostalgic. Thanks to advances in neuroscience, we now know that making and recalling memories is a process that uses your whole brain and involves cells being triggered and firing, which essentially re-wires the circuitry of your brain.

Scientists theorized that this complicated process is facilitated, at least partially, by specific proteins. They first tested the theory on fish. In the study, they gave fish drugs that prevented the proteins from being made, which resulted in fish that were unable to recall what took place after the protein-inhibitors were administered. From here, scientists discovered that they could target long-term memories and delete them.

Deleting Memories

Whenever you try to recall something, you’re actually physically changing that memory in your mind. The more you try to reflect on old memories, the less accurate they become because they are rebuilt every single time you try to remember them. Given that, if the protein-inhibitor was administered as you recalled a particular memory, the memory might be erased — just like how it worked with the fish.

The theory was tested on lab mice, which were placed in a chamber that played a specific tone before administering an electric shock through the floor. The mice learned to associate the chamber and the sound with an electric shock. Even months later, the mice would freeze up and show stress whenever the sound was played. But when the researchers placed the mice in a different chamber, played the tone, and administered a protein-inhibitor, the mice showed no fear in relation to the sound. Placed back in the old chamber, the mice showed the classic fear response.

The whole experiment demonstrated that researchers were able to selectively erase part of the mice’s memories (the ones associated with the sound) while leaving the memory of the shock chamber intact. The scientists who conducted the study believe that someday a similar technique could be used for humans dealing with post-traumatic stress disorder (PTSD) or addiction.


New Find May Explain How the Brain Creates Consciousness

A New Mapping Method

Throughout years of research, the question of consciousness has continued to baffle neuroscientists. Despite all of the improvements in our ability to understand the human brain, we still have no explanation for how the brain creates consciousness. However, a new method for digitally reconstructing and mapping the brain may give us a much-needed clue in the search for an answer.


Conventional brain mapping methods that trace all the branches of a particular neuron can be tedious. Usually, researchers inject a dye into individual cells, dissect the brain into several sections, and then trace the path of the dyed neurons manually. Tracing a neuron completely through an entire organ is a rare feat. The new method, however, requires less effort and time, is scalable, and is less invasive.

The technique was developed by a team led by Christof Koch, president of the Allen Institute for Brain Science in Seattle, Washington. At a meeting of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative in Bethesda, Maryland, earlier this month, Koch explained how his team’s 3D mapping method revealed something new about the claustrum, a small, thin sheet of cells found in the brain that Koch believes is the seat of consciousness in both mice and humans.

Their findings are drawn from brain models mapped from a line of mice Koch and his colleagues engineered. These mice were designed so that exposure to a certain drug would activate specific genes in their claustrum neurons. When fed a small amount of the drug, a handful of neurons were triggered just enough to switch on these genes. The reaction produced a green fluorescent protein that spread throughout each entire neuron. Using 10,000 cross-sectional images of the mouse brain and a computer program, the researchers created a 3D reconstruction of three glowing neurons.

A Giant Neuron

Koch was amazed at the extent of the three neurons, which stretched across both hemispheres of the brain. One of the three was so huge that it wrapped around the circumference of the brain, like a “crown of thorns,” according to Koch. Furthermore, these three claustrum neurons seemed connected to most, if not all, of the outer parts of the brain that receive sensory information and drive behavior. The findings seem to reinforce Koch’s belief that the claustrum, by coordinating inputs and outputs, is involved in the creation of consciousness.

Image credit: Allen Institute for Brain Science

“[Koch’s method is] quite admirable,” Rafael Yuste, a neurobiologist from Columbia University in New York City, told the journal Nature. However, he isn’t convinced that the research proves the claustrum is involved in consciousness: “It’s like trying to decipher language if we don’t understand what the alphabet is.” Yuste believes that the mapping method would be particularly useful for identifying the different cell types in the brain, which could improve research into understanding how the brain functions.

Though the method may be too expensive to map all neurons on a large scale, Koch plans to continue mapping those that branch out from the claustrum, so we may soon have an idea of where consciousness resides in our brain.


MIT’s Super-Thin Wires Are Leading to a New Age in Brain Implants

One Device to Transmit Them All

Many of our studies of brain function are conducted using implants that record or deliver electrical, chemical, or optical signals. Obviously, great advances have been made in those implants, as we now have maps of the human brain and the respective areas that handle specific functions. The development of optogenetics in neural systems was a considerable advancement in particular. However, as much as we now understand about the brain, there’s still so much more that’s left to explore. That’s where a study published last week in the journal Nature Neuroscience comes into play.


Brain research has often relied on devices that manipulate electrical, chemical, or optical signals separately. Combining these was usually “somewhat probabilistic,” according to Polina Anikeeva, one of the more than a dozen researchers from the Massachusetts Institute of Technology (MIT) and elsewhere involved in the study. “We said, wouldn’t it be nice if we had a device that could just do it all.”

The result was an optogenetic device that uses fibers just about 200 micrometers across, designed to mimic the flexibility of brain tissue. These multifunctional fibers were crafted using a technique researcher Benjamin Grena described as being similar to that used to make the Napoleon pastry, mille-feuille. Graphite flakes were added to conductive polyethylene, which was then compressed. Another sprinkling of flakes was added, the polyethylene was compressed again, and so on until multiple layers were created. This method increased the conductivity of the polymers by four or five times.

According to the study’s lead author, Seongjun Park, “That allowed us to reduce the size of the electrodes by the same amount.” Because the fibers are so small, they can also remain in the brain for a longer period of time. The next step, according to the team, is to reduce the fiber’s width even further to better mimic the properties of neural tissue. “The next engineering challenge is to use material that is even softer, to really match” the nearby brain tissue, Park said.

Closer to Neural Tissue

In a test conducted on mice, the researchers were able to inject viral vectors carrying genes for light-sensitive proteins called opsins through fluid channels in the fiber. These genes made the targeted neurons sensitive to light. After waiting for the opsins to take effect, the researchers sent a pulse of light through the optical waveguide. The resulting neural activity was then recorded using six electrodes made from the same fibers to pinpoint specific reactions.

Previously, researchers would have needed several separate devices — needles for the injection, optical fibers for the delivery of light, and multiple electrodes for recording — to conduct this same experiment.

Image credit: Pixabay

As John Rogers, a Northwestern University professor who wasn’t part of the research, pointed out to MIT, “The authors report some remarkably sophisticated designs and capabilities in multifunctional fiber devices, where they create a single platform for colocalized expression, recording, and illumination in optogenetics studies of brain function. These types of advances in technologies and tools are essential to progress in neuroscience research.”

To be sure, this multifunctional fiber could revolutionize ongoing research into the brain’s functions. Since the fibers are so small, “potentially, we could use many of them to observe different regions of activity,” said Anikeeva. Eventually, these fibers could help researchers not only understand the brain better and develop improved treatments for neurological disorders, but also build better brain-computer interfaces (BCIs), which are currently being used for everything from helping paralyzed people walk again to helping the blind regain their sight.


Brain-to-Brain Communication in Humans Could Soon Become a Reality

Speaking Without a Single Word

Imagine living in a world in which verbal communication is no longer required, a society in which telepathy is the norm, where people would be able to “speak” to each other using only their thoughts.

Scientists have long been contemplating the possibilities of brain-to-brain communication in humans, and it appears as though their dreams could become a reality within the next year or so. Such a system would be made possible via major advances in the technology that have been achieved via recent trials involving animals.

In one study, three monkeys were connected through individual brain implants, then placed in separate rooms. They were given the task of controlling a virtual arm on screen, a task they could only complete successfully if they worked together. In the end, they did. According to Miguel Nicolelis, the study’s lead author, “They synchronized their brains and they achieved the task by creating a superbrain — a structure that is the combination of three brains.”

Credit: University of Washington

Similarly, another experiment that tested brain synchronicity in four rats achieved comparable results. After 10 trials, scientists found that the rats were able to think as one 61 percent of the time. They also gained higher accuracy in solving simple problems when they combined minds.
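The accuracy gain from pooling animals is consistent with basic ensemble statistics. A minimal sketch, not the study’s method: purely for illustration, treat the 61 percent figure as a single animal’s per-trial accuracy on a yes/no problem, and assume the four animals answer independently. A majority vote, with ties broken at random, is then right more often than any individual:

```python
# Illustrative sketch only (not the study's method): why pooling several
# noisy yes/no "decisions" can beat any single one. Assumes (purely for
# illustration) that 0.61 is one animal's per-trial accuracy and that
# the four animals answer independently.
from itertools import product

def majority_accuracy(p, n=4):
    acc = 0.0
    for votes in product([1, 0], repeat=n):   # 1 = a correct vote
        prob = 1.0
        for v in votes:
            prob *= p if v else (1 - p)
        correct = sum(votes)
        if correct > n / 2:
            acc += prob                        # clear majority is right
        elif correct == n / 2:
            acc += prob * 0.5                  # tie: coin flip
    return acc

print(round(majority_accuracy(0.61), 3))       # about 0.662, beating 0.61
```

This “wisdom of crowds” effect holds whenever the individual answers are better than chance and not perfectly correlated, which may be part of why the combined rats outperformed any single one.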

More recently, the research has focused on humans. In one study, researchers placed two people in separate rooms and gave them the task of playing a game of 20 questions on a computer using only their minds. They transmitted “yes” or “no” answers with the help of an EEG cap, which tracked the brain activity of one person and triggered an electrical current in the other person’s brain.
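That setup works because each transmitted yes/no answer carries at most one bit of information, so twenty questions can distinguish up to 2^20 (about a million) possibilities. A minimal sketch of that binary-search idea, using a hypothetical number-guessing game rather than the EEG protocol itself:

```python
# Hypothetical guessing game illustrating the information budget of
# yes/no answers: n binary answers can distinguish at most 2**n items.
def guess_number(secret, low=0, high=2**20 - 1):
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1                 # one yes/no "transmission"
        if secret <= mid:              # answer: "is it in the lower half?"
            high = mid
        else:
            low = mid + 1
    return low, questions

value, asked = guess_number(777_777)
print(value, asked)                    # finds the secret in at most 20 questions
```

The EEG channel in the study is far noisier and slower than this idealized game, but the same bound applies: each yes/no answer halves the remaining possibilities at best.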

The Future of Brain Communication

One day, we could attempt to take it up another notch in order to detect individual thought processes. These thoughts could be transmitted to another person, influencing the decisions they make.

This could be an enormous game changer for people with paralysis and other medical conditions that prevent them from performing physical tasks. For example, a robotic suit equipped with a brainet (a synchronization of multiple brains acting as an organic computer) could allow people to receive help from others while learning how to use an exoskeleton to regain movement.

For now, it’s proving quite difficult to create a device that mimics pure telepathy. Our brains are unique, and each of us thinks differently, our thoughts being influenced by our individual memories and experiences. The resulting brain patterns make it hard for neuroscientists to develop brain-to-brain communication, but if they can reveal an individual’s patterns of thought, they could potentially use another person’s brain activity to trigger those thoughts.


Scientists Are Reversing the Symptoms of Alzheimer’s

The Reality of Alzheimer’s Disease

In the U.S. alone, approximately 5.1 million people may have Alzheimer’s disease. Although the disease is not a direct result of aging, older age groups are at higher risk of developing it. The illness doesn’t seem to be slowing down, either: between 2010 and 2050, the number of Americans age 65 and older is expected to more than double to 88.5 million, the equivalent of 20 percent of the entire population.

Unfortunately, these are the harsh facts behind Alzheimer’s. But with advances in neuroscience and technology, scientists are closing in on more effective treatments, or at least ways of combating the hallmark symptoms of the disease.

Researchers at the Washington University School of Medicine in Missouri are looking at a particular protein and the gene responsible for its production in the body. It’s called the tau protein, and in a healthy brain, it helps to maintain the functioning of neurons. But in someone living with Alzheimer’s, the tau protein morphs into ‘tangles’ that are toxic to the brain.

Credit: Sarah DeVos

To test whether these tau protein tangles could be reversed, the researchers took genetically engineered ‘tangled’ mice and injected them with antisense oligonucleotides, synthetic molecules that intercept the RNA instructions for making tau, four times a month. The result: tau levels plummeted, and existing tangles seemed to disappear. The protein even stopped spreading in older mice. The treated mice lived longer than untreated mice and were able to recover some traits lost to the mutation.

Will It Work On Humans?

In monkeys, the test results weren’t as hopeful: tau protein levels dropped by only 20 percent, leading the researchers to believe that future tests on humans could potentially produce adverse effects. Scientists aren’t completely sure where a reduction of tau in the human brain could lead; normal brain function could be impaired or become abnormal as a result.

Other possible Alzheimer’s treatments have been tested in recent years, and many failed to reverse the effects of the disease. It will still be a long time before researchers can move to human trials, and the clock is ticking as the number of Alzheimer’s patients continues to increase each year.


This Genetic Discovery Could Change the Way We Treat Mental Illness

Genes Behind Cognitive Ability

Understanding how the brain works remains the subject of many studies. The brain is so complex that it took an international team of 60 researchers and experts, all part of the Cognitive Genomics Consortium (COGENT), to unlock some of the genes involved in cognitive ability. The team published their study in the journal Molecular Psychiatry.

The international team, led by Todd Lencz from the Feinstein Institute for Medical Research, studied the genes of 35,000 people, measuring participants’ brain function through tests of learning, memory, and other components of cognitive function. They were able to discover new genetic variations related to cognitive ability. “This research provides new clues into how the brain works at the molecular level,” Lencz said.

Interestingly, and for the first time, the COGENT researchers also discovered a molecular genetic overlap between cognitive ability and personality. They found that genetic predispositions toward higher cognitive ability were linked to greater “openness to experience.” In other words, some of the genes that make people more likely to be curious about new ideas and experiences are the same ones that enhance cognitive ability.

Credits: Shutterstock

Better Treatment for Cognitive Disorders

The researchers hope that their findings pave the way for new and better treatments. “Our long-term goal is to identify potential new targets for treatments of cognitive disorders of the brain, such as schizophrenia and attention deficit hyperactivity disorder (ADHD),” said Lencz.

While there is an abundance of research into better treatments for cognitive and neurological disorders, COGENT’s findings are unique and can complement other existing studies, like those that aim to better understand Parkinson’s and Alzheimer’s. Lencz himself has been involved in several studies regarding schizophrenia.

But, like all these other studies, further research is necessary. “Today, we know of hundreds of genes related to traits such as height and weight, but only a few related to cognitive ability. We have a lot of work to do if we want to understand the molecular basis of brain function,” Lencz noted. COGENT is already working with European partners to expand their team, with the goal of expanding their study to include more than 100,000 DNA samples.


Scientists Figured Out How to Activate Aggressive, Predatory Behavior in Mice

Turning Mice into Hulks

Mice are traditionally the hunted, not the hunter. But scientists have discovered that by just pushing the right buttons in the brain, it’s possible to turn these meek animals into aggressive predators. That’s what a team of scientists from Yale University was able to accomplish; they detail their work in a paper published in the journal Cell. The study gives us a glimpse into how predatory hunting behavior has evolved over hundreds of millions of years.

The secret lies in two sets of neurons found in the brain’s amygdala, the region that controls motivation and emotion. When these two sets of neurons were activated in mice using optogenetics — i.e., stimulation by shining laser light onto them — the team observed changes in the animals’ behavior.

Specifically, the stimulation made the mice aggressively predatory: they increasingly used their jaw and neck muscles to bite anything in their path, snapping even at innocuous objects like sticks, bottle caps, and toys. The hungrier the mice were, the more aggressive their behavior became. Curiously, though, they didn’t attack other mice. When the laser was switched off, the mice returned to normal behavior.

“The animals become very efficient in hunting,” explained researcher Ivan de Araujo, a Yale University associate professor of psychiatry and an associate fellow at The John B. Pierce Laboratory in New Haven. “They pursue the prey [a live cricket] faster, and they are more capable of capturing and killing it,” De Araujo said.

Brain Kill Switch

The aggressive behavior triggered by these activated neurons, however, seems to be reserved for prey. The researchers take this as a clue that could help explain how predatory behavior evolved millions of years ago. The development of hunting circuits that coordinate how a predator’s jaw and neck muscles move “must have influenced the way the brain is wired up in a major way,” De Araujo said. “This is a very complex and demanding task.”

Before the experiment, the researchers already expected to find these circuits in mice, because there is inherent predatory behavior in them, usually in attacking insects or — in the case of the killer mouse — eating live prey, even other mice. That’s when they stumbled upon the two sets of neurons in the amygdala.

“When we stimulate [both sets of] neurons it is as if there is a prey in front of the animal,” De Araujo explained. “They assume the body posture and actions usually associated with real hunting.”

This is a peek at the complexity of the brain, and some say that these same hunting instincts are found in other species, including humans. It seems that the brain’s circuitry can be switched on and off to activate particular behaviors. Clearly, a great deal more research is needed to explore the possibilities this kind of knowledge presents.


Researchers Used a Video Game Console to Better Understand Our Brains

Gaming Human Brains

To say that real neural networks — that is, flesh-and-blood human brains — are not very easy to understand is something of an understatement. In fact, they are extremely complex. That’s why researchers in the field of neuroscience are constantly trying to improve the ways in which we analyze the brain — they hope they’ll eventually hit on something that could furnish us with fundamental insights into how the brain processes information.

Much neuroscience research begins with the gathering of a lot of brain activity data and then analyzing that data using algorithms. However, testing the validity of these data analysis techniques remains difficult, because it’s still unclear how biological neural systems — even simple ones, like the brain of a fruit fly — actually work.

Researchers Eric Jonas of U.C. Berkeley and Konrad Kording of Northwestern University/Rehabilitation Institute of Chicago wanted to test just how valid some widely used neuroscience approaches actually are, but instead of testing the approaches on living brains, they applied them to a man-made computer system that’s far simpler than biological neural networks: the classic Atari 2600 gaming console.

In their research, they applied numerous classical neuroscience analysis techniques to the Atari 2600’s 6502 microprocessor — a computer system that they actually understand. “Since humans designed this processor from the transistor all the way up to the software, we know how it works at every level, and we have an intuition for what it means to ‘understand’ the system,” Jonas explained. “Our goal was to highlight some of the deficiencies in ‘understanding’ that arise when applying contemporary analytic techniques to big-data datasets of computing systems.”

Credits: Eric Jonas, CC-BY

The Tools and Limits of Neuroscience

The researchers applied these standard neuroscience analysis techniques to the Atari 2600’s microprocessor to test how well they could recover known characteristics, like the connections between the different parts of the chip or what happens when individual transistors are destroyed. Unfortunately, the results were not satisfactory: the techniques could not provide the level of understanding of the system that a typical engineering student would have.

“We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor,” the researchers explained in the study, which is published in PLOS Computational Biology. “This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data.”
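One family of techniques the study ports from neuroscience is “lesioning”: disable one element at a time and check whether a behavior survives. A toy sketch of the idea, using a hypothetical four-gate circuit rather than the study’s transistor-level 6502 simulation:

```python
# Toy "lesion" analysis: disable one element at a time and test whether
# a downstream behavior still works. The circuit and the "behavior" are
# hypothetical stand-ins, not the study's transistor-level 6502 model.
def behavior(gates):
    # Toy "the game still boots" condition: three gates must all be on.
    return gates["a"] and gates["b"] and gates["c"]

baseline = {"a": True, "b": True, "c": True, "d": True}

essential = []
for name in baseline:
    lesioned = dict(baseline)
    lesioned[name] = False            # knock out a single element
    if not behavior(lesioned):
        essential.append(name)

print(sorted(essential))              # gates whose loss breaks the behavior
```

As the authors found with real transistors, this kind of analysis labels elements as “essential” to a behavior without saying anything about how information actually flows through them.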

Certainly, the field of neuroscience has greatly increased our understanding of how the brain works. However, as Kording said, “Progress requires better experiments, theories, and data analysis approaches.” In understanding how the brain’s processes work, big data just isn’t always that effective. “Without careful thought, current big-data approaches to neuroscience may not live up to their promise or succeed in advancing the field,” Jonas noted.

Ultimately, the study reveals the limitations of modern approaches to neuroscience, which is why the researchers suggest exploring new paths to better understand the brain, just like they did with their Atari experiment. “We could learn a lot about how to reverse-engineer biological systems by reverse-engineering synthetic systems,” according to Jonas.


DARPA Has Huge Things Planned in Biotech

Big Budget = Big Breakthroughs

Although originally established nearly 60 years ago to spearhead military research and development, the Defense Advanced Research Projects Agency (DARPA) has been the creative force behind many things civilians now take for granted, such as the internet and GPS. For the past three years, DARPA has also been busy working on other types of projects that could have major repercussions beyond the battlefield.

In April 2014, DARPA established the Biological Technologies Office (BTO) with a mission to “harness the power of biological systems” to help design defense technology. Although the agency was already involved in research at the intersection of biology and engineering, with the establishment of the BTO, DARPA’s biological work found a dedicated home.

With a budget of about $296 million last year, researchers at the BTO have been doing work in areas such as memory improvement, human-machine symbiosis, and how to speed up disease detection and response. In a recent interview with Scientific American, BTO chief Justin Sanchez said he hopes to see the fruits of the office’s labor soon and shared details on what we can expect for 2017.

Credit: DARPA

Biotech on the Horizon

One project, Living Foundries, aims to develop programmable microbes. Essentially, the goal is to be able to program microbes such as bacteria and yeast to produce, on the fly, compounds needed to develop medications. “That program set out to produce 1,000 new molecules throughout the duration of the program [which has three years left], and the teams are well on their way,” said Sanchez. “I believe they have produced close to 100 new compounds already using these new pathways.”

Still other projects are on their way for 2017, including ones focused on what Sanchez calls “Outpacing Infectious Disease,” which has been pioneering new work in DNA and RNA approaches to immunization. “The idea is that you can tell your cells that produce antibodies what the right code is for producing the antibodies that would be effective against a pathogen,” Sanchez explained. It’s something DARPA hopes to make important announcements on this year.

Then there are the Luke prosthetic arms with the obviously “Star Wars”-inspired name. Recently, DARPA delivered the first two commercially available Lukes to the Walter Reed National Military Medical Center. This project sets the foundation for developing a “wide variety of devices that can be controlled via neural activity,” says Sanchez. Specifically, the BTO is exploring cognitive assistance.

Speaking of the brain, the BTO under Sanchez has also been dedicating significant resources to supporting President Obama’s BRAIN initiative. “Neurotechnology is a very big area in our office. We have made great strides on the medical side of things, showing direct neural interfaces can restore movement, sensation and health with neuropsychiatric disorders,” he explains. Included in this work is a “biochronicity” program that seeks to understand the role of time in biological functions.

Asked if he’s worried about the new president-elect and Republican-dominated Congress, Sanchez remained neutral, saying, “We usually don’t get in the middle of those kinds of things. The thing that I always like to emphasize is that our mission at DARPA remains the same no matter what the political climate is…I think that mission transcends the vast political landscape that is out there.”

Sanchez concluded the interview by emphasizing that big things are coming from the BTO, and soon: “I think we are really hitting our stride now, and I think the kind of things and developments we will see in 2017 will really blow our minds.”

The post DARPA Has Huge Things Planned in Biotech appeared first on Futurism.

There’s a New Window Into the Human Brain

A Softer Material

The brain is as delicate an organ as it is powerful. Everything we can do is rooted in the brain, the human body’s built-in neural network. When it malfunctions, for one reason or another, there are treatments available. One common tool is the electrode wire, which can be implanted in the brain, usually to treat epilepsy or Parkinson’s disease.

While electrode implants are fairly common, current practice uses wires that are rather thick, measuring around 1.5 millimeters in diameter. These can sometimes hit blood vessels upon insertion and even damage brain cells in the process. These electrode wires are also quite stiff and inflexible; they can cause inflammation in the brain and, after a while, become covered in immune cells that greatly reduce their efficiency.

An even smaller and softer material is necessary. Enter carbon nanotubes. This futuristic material has certainly been making waves in the miniaturization of devices, from smaller transistors to gene editing tools.

Carbon nanotube fibers are just a thousandth of the diameter of regular electrode wires — on the order of a micrometer rather than a millimeter. The only problem was that they are too soft, making them difficult to insert into the brain. “It’s like trying to stick a wet noodle into a bowl of Jello,” said Jacob Robinson of Rice University in Houston, Texas. Rice University has been working on several projects involving carbon nanotubes.

Fortunately, Robinson’s team was able to figure out a way to make the carbon nanotubes temporarily stiff. The key was a device with a tiny channel of flowing water that fills up and stiffens the wire so that it does not bend or crumble during insertion; once the electrode is in place, the surrounding brain tissue helps hold it there.

Credits: Victor Habbick Visions/Getty

Better Monitoring and Treatment

The researchers were able to use the device to successfully implant nanoelectrodes in mice. Speaking at an IEEE workshop in San Diego last November, the team said there seemed to be no damage in the mice’s brains. Certainly, more tests will be necessary before the technique could be used in human clinical trials.

It is a promising tool, with great applications in the treatment of neurological diseases. Furthermore, these nanoelectrodes can allow neuroscientists to study brain activity over longer periods of time. This is just one example of how nanomaterials are revolutionizing medical research and treatment procedures.

This also isn’t the first time carbon nanotubes have been used in the brain. Previous research, also done at Rice University, had already established that carbon nanotubes make for superior links to the brain. That work suggests these biocompatible nanofibers are among the best candidates for safe electrodes that interact with the brain’s neural system.

Due to their robustness and flexibility, these carbon nanotube fibers are bound to do much more for medical research in the future.

The post There’s a New Window Into the Human Brain appeared first on Futurism.

New Drug Transforms the Immune System to Slow the Progress of Multiple Sclerosis

A Disabling Disease

Multiple sclerosis (MS) is an unpredictable and potentially disabling disease that cripples the central nervous system. It’s a widespread neurological condition that typically strikes young adults between the ages of 20 and 40, caused by an immune system disorder that mistakes a part of the brain for a hostile foreign object and attacks it. Though treatments are available, particularly for its relapsing form, multiple sclerosis remains incurable.

Progress is being made, however, with a new drug that alters the immune system to slow damage to the brain in both forms of MS. Ocrelizumab is the first known drug shown to work against the primary progressive form of MS, the form in which the disease steadily worsens from onset. Being able to treat it is “big news” in the eyes of doctors, as the BBC reports.




Promising Progress

The study, published in the New England Journal of Medicine, focuses on how ocrelizumab destroys B cells, the part of the immune system that attacks the myelin sheath — the protective coating around nerves that, like the insulation on a wire, allows electrical signals to travel along them.

“The results shown by these studies have the potential to change how we approach treating both relapsing and primary progressive MS,” said Gavin Giovannoni, from Barts and The London School of Medicine and Dentistry, who was involved with the trials. “It’s very significant because this is the first time a phase three trial has been positive in primary progressive MS.”

The study notes that among 732 patients with primary progressive MS, 33 percent of those who took ocrelizumab showed disability progression, compared with 39 percent of those who did not receive the drug. Patients who took the drug also showed less brain volume loss on scans.
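To get a rough sense of what those two percentages imply, here is a back-of-the-envelope sketch using only the 39 and 33 percent figures quoted above; the derived quantities are illustrative arithmetic, not numbers from the paper itself:

```python
# Disability-progression rates quoted in the article
comparison_rate = 0.39  # share of patients whose disability progressed without ocrelizumab
treated_rate = 0.33     # share whose disability progressed on ocrelizumab

absolute_reduction = comparison_rate - treated_rate        # 6 percentage points
relative_reduction = absolute_reduction / comparison_rate  # ~15% relative reduction

print(f"Absolute risk reduction: {absolute_reduction:.0%}")
print(f"Relative risk reduction: {relative_reduction:.0%}")
```

In other words, a 6-percentage-point absolute drop corresponds to roughly a 15 percent relative reduction in the risk of progression over the study period.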

“It’s the first time a treatment has shown the potential to reduce disability progression for this type of MS, which offers a lot of hope for the future,” said Aisling McMahon, the head of clinical trials at the MS Society.

The post New Drug Transforms the Immune System to Slow the Progress of Multiple Sclerosis appeared first on Futurism.

A Brain in a Dish: 3D Mini-Brains Prove to Be Remarkably Accurate

Better Brain Models

When it comes to building model organs, perhaps none requires as much precision and exactness in detail as the human brain. In an effort to improve upon existing models, scientists at the Salk Institute studied a 3D “mini-brain” model grown from human stem cells. They concluded that it was more similar to human brains, both structurally and functionally, than the 2D models currently in use.

Just three years ago, European researchers came up with a method of growing embryonic brain cells in 3D gels, which allowed the cells to differentiate into realistic layers similar to those of a real human brain. These 3D models are called cerebral organoids (CO), and according to Joseph Ecker, director of Salk’s Genomic Analysis Laboratory, “Being able to grow human brain cells as miniature three-dimensional organs was a real breakthrough.”

However, until Ecker and his colleagues conducted their recent study, published in the December 20 issue of Cell Reports, no one knew just how accurately these COs mimicked real brains.

Better Brain Research

To figure out just how realistic these 3D brains were, Ecker’s team studied COs in early stages of brain development and compared them to actual human brains in similar developmental stages. The COs Ecker and his team used were from a human embryonic cell line called H9. With the right chemicals, H9 was induced down a neurodevelopmental pathway for 60 days.

Credits: Madeline Lancaster/MRC-LMB (Medical Research Council, Laboratory of Molecular Biology), UK

They studied the COs’ epigenomes — basically, the chemical compounds that tell a genome what to do — because these have been increasingly associated with the development of brain diseases such as schizophrenia. After comparing their results with age-matched real tissues from the National Institutes of Health NeuroBioBank and with 2D brain-model data from other researchers, they saw that COs were more like authentic brains than their 2D counterparts and seemed to grow following the same early-developmental timelines as real brains.

“No one has done epigenome sequencing for cerebral organoids before,” said author Chongyuan Luo, research associate at Salk. “This kind of assessment is so important for understanding brain development, especially if we’re eventually going to use these tissues for neurological therapies.”

With a model that’s closer to a real brain, scientists will be better equipped to study brain development and its role in the emergence of neurological diseases such as Alzheimer’s or schizophrenia, putting us one step closer to improved treatment options or even cures.

The post A Brain in a Dish: 3D Mini-Brains Prove to Be Remarkably Accurate appeared first on Futurism.

Breakthrough Treatment Proven to Prolong Life

New Advance

An international team of researchers has developed and tested a drug in mice to treat Alzheimer’s disease. The results of the testing are very promising, suggesting that the drug not only mitigates symptoms but also increases life-span in mice with the debilitating disease. The team’s findings have been published in the Journal of Clinical Investigation.

According to author Professor Andrew Tobin, the drug targets a particular protein (M1 muscarinic receptor) that is associated with the disease and activates it, having a positive effect on cognition. Different drugs were previously tested on this protein but had too many adverse side-effects to make them practical as effective treatments. The class of drug used in these trials, however, did not manifest any of these adverse side-effects in the subject mice. Professor Tobin explained that the team, including Dr. Sophie Bradley, “found that these drugs can not only improve symptoms of brain degeneration, such as cognitive decline, but can also extend the life-span of these terminally-sick mice,” when administered daily.

Image credit: Thomas Deerinck/NCMIR/Science Source

Long Road

Since the trials are still at the animal testing phase, there is no guarantee that these drugs will have the same impact on humans. Even if this particular treatment does not live up to its early promise, it’s providing important discoveries about the disease itself that could play a vital role in the research ahead.

Current treatments simply serve to mask or relieve patient symptoms. There is no cure for the disease, but with each new research project undertaken, there is hope for healing.

The goal of Alzheimer’s treatments seems to be shifting toward not only improving symptoms of the disease, but also slowing its progression, thus extending life-span. It will take a lot of time and an ever-accumulating pool of professional knowledge, but with results as promising as these, there is always hope.

The post Breakthrough Treatment Proven to Prolong Life appeared first on Futurism.

Dementia Will Be a Treatable Condition in 10 Years

A Hopeful Director

Professor Bart De Strooper, the Belgian neuroscientist recently appointed as director of the U.K.’s Dementia Research Institute (DRI), is optimistic that dementia will be a treatable condition by 2025. “We won’t be celebrating in 2025 that dementia is cured, but I hope that by then there will be groups of patients who can be treated in much the same way HIV-Aids is treated today,” he said after the announcement that he would be the institute’s director.

As head of the DRI, De Strooper will assemble a multi-disciplinary team of doctors, biologists, engineers, and data specialists to expand research beyond the long-standing “amyloid hypothesis” on which most Alzheimer’s research is based. He believes the hypothesis is dated and an oversimplification of dementia. If he’s right, a new research approach based on the premise that dementia is actually a complicated, multi-factorial condition could pave the way for treatments that better manage it and other neurodegenerative diseases.

A Treatable Condition

De Strooper likens the promise of advancements in the study of dementia to those made against HIV/AIDS and cancer. In the 1970s and ’80s, those diseases were synonymous with a death sentence — incurable and debilitating. Today, both diseases, if caught early, can be manageable and treatable.

“I’m a scientist, so I don’t look into crystal balls, but I would put a lot of money on saying that the next generation will have a completely different view of dementia disorders,” says De Strooper.

Because the brain is the most plastic organ of the body, the right treatment could potentially allow patients to regain lost brain function. If we improve our understanding of how dementia attacks the brain significantly enough, scientists could potentially find ways to stabilize dementia disorders early or even intervene before symptoms surface.

“If we could step in 10 years before massive damage to the brain occurs, it would be a very nice thing,” De Strooper concludes.

The post Dementia Will Be a Treatable Condition in 10 Years appeared first on Futurism.

New Tech Lets You Play Games Using Only Your Mind

Sixth Sense

Can the brain process artificial information and use it to navigate a virtual world without input from any of our five senses? Based on the results from a team of researchers at the University of Washington, the answer is yes.

Their work shows humans playing a computer game without relying on sensory cues from sight, sound, or touch. Instead, the gamers navigate through a two-dimensional maze based on direct brain stimulation.

“The way virtual reality is done these days is through displays, headsets and goggles, but ultimately your brain is what creates your reality,” said Rajesh Rao, UW professor of Computer Science & Engineering and director of the Center for Sensorimotor Neural Engineering.

Image Credit: University of Washington

Tasked with navigating 21 different mazes, the subjects had to move according to the direction indicated by a phosphene-induced stimulus. Phosphenes, which are perceived as flashes of light, were generated using transcranial magnetic stimulation — a technique that uses a magnetic coil placed near the skull to stimulate a specific area of the brain directly and non-invasively.

With stimulation, the test subjects were able to make the right moves in the maze 92 percent of the time, versus just 15 percent without phosphene stimulation.
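To put those two accuracies in perspective, a quick sketch using only the figures quoted above (the ratio is illustrative arithmetic, not a statistic reported by the researchers):

```python
# Maze accuracies reported in the article
accuracy_with_stimulation = 0.92     # correct moves with phosphene cues
accuracy_without_stimulation = 0.15  # correct moves without them

# How many times more often subjects moved correctly when stimulated
improvement_factor = accuracy_with_stimulation / accuracy_without_stimulation

print(f"With stimulation:    {accuracy_with_stimulation:.0%} correct")
print(f"Without stimulation: {accuracy_without_stimulation:.0%} correct")
print(f"Improvement factor:  {improvement_factor:.1f}x")
```

Subjects guided by the phosphene cue made the correct move roughly six times as often as those without it.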

“We’re essentially trying to give humans a sixth sense,” adds fellow researcher Darby Losey, a 2016 UW graduate in computer science and neurobiology who now works as a staff researcher for the Institute for Learning & Brain Sciences (I-LABS).

New Way to Experience Virtual Reality

Right now, the team is working with external partners to create the company Neubay, a startup focused on commercializing the research and introducing neuroscience and artificial intelligence (AI) techniques into virtual reality and gaming.

The equipment currently used to stimulate the brain is too bulky to be carried around. Eventually, however, the team hopes to develop hardware compact enough for real-world applications. They’re also trying to develop techniques that could pave the way for more complex sensory perceptions, which are difficult to replicate in augmented or virtual reality.

It’s a small, but significant, step toward greater applications in entertainment, gaming, and even medicine.

“Over the long term, this could have profound implications for assisting people with sensory deficits while also paving the way for more realistic virtual reality experiences,” Rao concluded.

The post New Tech Lets You Play Games Using Only Your Mind appeared first on Futurism.

New Research Shows How “Lost” Memories Can Be Recovered

This Magnet Will Jog Your Memory

We believe we’ve figured out a lot about how our brains handle memories. We’ve identified the parts of the brain that are active in making memories and the parts that store long- and short-term memories. We also thought we knew that retaining a memory requires the relevant neurons to stay active, but new research published in the journal Science shows that may not always be the case.

A team from the University of Wisconsin-Madison has discovered that applying an electromagnetic field to specific parts of the brain can reactivate recent memories. Using a technique called transcranial magnetic stimulation (TMS), the researchers found they could make a memory that no longer appeared active by traditional monitoring methods become active again.

In the study, the researchers asked each participant to remember two different types of information, such as a word and a face. When they used TMS to target the part of the brain where short-term memories of words are stored, the subject would recall the word, even after being verbally told that the next question would be about the face.

While it isn’t yet clear how the brain decides which information to keep active in working memory and which to set aside, this new study shows that it’s possible to bring latent memories to the foreground. “We think that memory is there, but not active,” Bradley Postle, one of the researchers involved with the study, told Neuroscience News. “More than just showing us it’s there, the TMS can actually make that memory temporarily active again.”

Credits: Wassermann/NINDS

Potential for Treatment

The study significantly improves our understanding of how our brains work, especially when it comes to handling memories — a delicate process that appears to be more complex than previously thought. “What we’re taking are first steps toward looking at the mechanisms that give us control over what we think about,” said Postle. “[Y]ou can picture a point at which this work could help people control their attention, choose what they think about, and manage or overcome some very serious problems associated with a lack of control,” he added.

While this technique is currently limited to just short-term memory or working memory, it has potential in treating short-term memory problems or diseases like Alzheimer’s. It could also be used to treat mental illnesses, which the National Institute of Mental Health reports affect more than 18 percent of adults in the U.S. “A lot of mental illness is associated with the inability to choose what to think about,” said Postle, so giving those people the ability to control their thoughts with this new treatment could dramatically improve their quality of life.

The post New Research Shows How “Lost” Memories Can Be Recovered appeared first on Futurism.