Computing giant Intel has announced a multi-year deal with the International Olympic Committee (IOC), stretching through the 2024 Olympic Games, to bring the company’s new technologies to upcoming events. According to Advertising Age, Intel hopes to use “virtual reality, 360-degree video, artificial intelligence, and drones, to enhance the Olympic Games.”
The partnership aims to attract a younger demographic and stem the decline in overall viewership. The president of the IOC, Thomas Bach, said, “There are many young people that are living a digital life. So we have to go where they are in the digital world, in their virtual reality.”
If humanity were to appoint a general in our war against aging, Aubrey de Grey would likely earn the honor. The British author and biomedical gerontologist has been on the front line for years, researching ways to free the world of age-related disease and, ultimately, extend human life indefinitely.
From the SRF Research Center (SRF-RC) in Mountain View, CA, foundation scientists conduct proof-of-concept research with the goal of addressing the problems caused by aging. They focus on repairing damage to the body at the molecular level, and their work is helping advance the field of rejuvenation biotechnology.
SRF-RC teams are currently focusing on two equally complex-sounding research projects, one centered on allotopic expression (a way to bypass the harmful effects of age-caused mitochondrial mutations) and the other on telomerase-independent telomere elongation (a little-researched process by which some cancer cells overcome mortality).
Either project could lead to major breakthroughs in anti-aging treatments, but as de Grey explains to Futurism, the path to immortality doesn’t just run through the science lab.
No Money, Big Problems
While the research being conducted at the SRF-RC is far from simple, de Grey claims DNA mutations and cancer cells aren’t the biggest hurdles to anti-aging breakthroughs: “The most difficult aspect [of fighting age-related diseases] is raising the money to actually fund the research.”
The nature of most science research is exploratory. Researchers don’t know that what they’re working on is going to yield the results they expect, and even if it does, turning basic research into income is no easy task. To support their work, most have to rely on funding from outside sources, such as government grants, educational institutions, or private companies.
“[Aging] is a medical problem that needs to be addressed.”
The amount of funding a specific field receives varies wildly. For example, the National Institutes of Health (NIH) reports that it allocated $5.5 billion for cancer research in 2016, while amyotrophic lateral sclerosis (ALS) research received a comparatively paltry $52 million. However, raising funds for anti-aging research has proven even more difficult, according to de Grey.
“It’s still an incredibly hard sell,” he claims. “We have very limited resources. We only have about 4 million dollars a year to spend, and so we spend it very judiciously.”
That money isn’t going to just the two in-house projects, either. The SENS Foundation funds anti-aging research at institutions across the globe and provides grants and internships for students, so raising money to support those endeavors is key to continued success in its fight against aging.
A Radical Disconnect
Essential to raising money for anti-aging research is ensuring that those with the funds understand why it’s worth the investment — a not-so-easy task given current misconceptions about aging.
In 2015, eight major aging-focused organizations, including AARP, the American Geriatrics Society, and the American Federation for Aging Research (AFAR), released a report detailing what they call the many “notable gaps” that exist between expert perspectives on aging and the public’s perception of the process. If the public isn’t well informed on aging, it’s even less knowledgeable about anti-aging.
Fifty-eight percent of respondents in a 2013 Pew Research study said they had never even heard of radical life extension before. When asked if they would undergo treatments that would allow them to live to the age of 120 or older, the majority of those surveyed said they would not, and 51 percent thought such treatments would be “bad for society.”
“There is still a huge amount of resistance to the logic that aging is bad for you and that it’s a medical problem that needs to be addressed,” explains de Grey. “It’s really, really extraordinary to me that it’s so hard to get this through to people, but that is the way it is.”
The SENS Foundation devotes a significant portion of its resources to combating this disconnect. In 2014, it dedicated more than $1 million to outreach and education, spreading the gospel of anti-aging research through speaking engagements, newsletters, press coverage, conferences, and other forms of community engagement.
Once the field is properly funded and supported, de Grey thinks researchers will have a clear path forward to “curing” the problem of aging:
Aging is not mysterious. We understand it pretty well. It’s not even a phenomenon of biology. It’s more a phenomenon of physics. Any machine with moving parts is going to damage itself…and the result is inevitably going to be that eventually the machine fails. It’s the same for the human body as it is for a car, for example, and if we think about it that way, it becomes pretty easy to actually see what to do about it.
The benefits of ending the problem of aging would be tremendous. Not only would we be living longer, we’d be living healthier for longer.
Without the debilitating diseases and disorders that have become synonymous with old age — vision loss, dementia, muscle weakness — we’d have extra years or even decades to do all the things we loved to do when we were younger: travel, play sports, spend time with our loved ones. We’d avoid the personal financial burden associated with treating the side effects of aging, and some argue that governments would even see a monetary benefit from radical life extension, as two-thirds of Social Security expenditures for retirees currently go toward healthcare.
Anti-aging proponents like de Grey will be the people leading us toward that figurative fountain of youth, but you shouldn’t start living like you’re immortal just yet. “We have made very significant breakthroughs in some of the most difficult areas,” says de Grey. “I’m fairly proud of what we’ve achieved so far, though, of course, we still have a long way to go.”
Cyborgs: humans who have been merged with machines; a hybrid of sorts. What was once the subject of far-out science fiction has now entered reality as a medical tool. From implants to robotics, there is a whole host of emerging technologies that aim to treat health conditions and aid those suffering from different disabilities by turning people into, technically, cyborgs.
It might seem to be going too far to use the term cyborg when discussing, for instance, new versions of prosthetic limbs. However, carbon fiber and titanium prostheses are now commonplace, and most artificial limbs are fully functional. For example, in the video below, you can see the dexterity and capabilities of one prosthetic arm. Since this video was created, prostheses have advanced even further, with researchers going so far as to create robotic hands that can be controlled with one’s brain — and they have a sense of touch.
Artificial limb technologies like the “blades” used by Paralympians are even so advanced that some have started to discuss whether or not they are more capable than organic limbs. But artificial limbs aren’t the only advancements in so-called “cyborg tech.” One Swedish company is implanting its employees with microchips to allow them to do things like access doors with the wave of a hand instead of with a key. Elon Musk thinks that his neural lace could actually make human beings smarter. Many are experimenting with the many possibilities of merging humankind with machines.
A Cyborg Future
The authors of a recent paper in Science Robotics discussed the potential issues with the future of such technologies:
There needs to be a debate on the future evolution of technologies as the pace of robotics and AI is accelerating. It seems certain that future assistive technologies will not only compensate for human disability but also drive human capacities beyond our innate physiological levels. The associated transformative influence will bring on broad social, political, and economic issues.
Once we officially cross that line, once the technologies that we create to assist those with difficulties and disabilities begin to advance human capabilities beyond what is biologically possible, we will have a teeming variety of moral and practical issues to deal with. Many believe that this will be humanity’s “next step in evolution.” Indeed, if we are ever going to colonize Mars and expand our reign in the Solar System, that might be a necessary evolution. Whatever moral and ethical quandaries may exist, it might not be possible for us to take such large strides without becoming cyborgs.
So, more likely than not, the day will come and we will cross that line. Will cyborg humans have the same rights and be bound by the same laws as biologically ordinary citizens? Will cyborgs be vulnerable to hacking and manipulation? Will warfare forever change with the possible advancement of military exoskeletons? The list goes on and on. And so, while we might not all be walking around as half-machines just yet, it might be a good idea to plan ahead.
While smartphone technology has already made many of us cyborg adjacent, there are some who are truly pioneering the world of the future. Take filmmaker Rob Spence of Toronto, for example. He lost his eye due to an unfortunate mishap involving his nine-year-old self, a pile of cow dung, and a shotgun. The gun kicked back after he fired it and severely damaged his eye.
While he did not lose all sight in the eye, he was declared legally blind. Years later, the eye began to physically deteriorate, prompting doctors to replace it. Instead of opting for a traditional glass eye, Spence worked with a friend to build an eye camera. Spence’s eye can record up to 30 minutes of footage before the battery dies. The eye is not connected to the optic nerve, so he cannot use it to see.
In preparation for the release of the video game sequel Deus Ex: Human Revolution, publisher Square Enix recruited the filmmaker to make a documentary in which Spence interviewed other recipients of high-tech augmentation. He even included some of the eye camera footage in the doc.
The Age of Enhancement
While the word “cyborg” is still more closely aligned with science fiction, more and more people are augmenting their bodies with technology. Many of these augmentations correct limitations, like this Star Wars-inspired prosthetic arm, or these exoskeletons designed to give paralyzed people improved functionality. However, the next wave of available augmentations could focus on enhancing human capabilities, both physically and cognitively, beyond what is biologically possible. Tech wizards like Elon Musk and Bryan Johnson are working on systems that would integrate the human brain with computers, making their users smarter.
However, each new capability could bring along specialized ethical concerns. For example, Elon Musk’s neural lace isn’t likely going to be cheap. Could giving those who can afford to purchase this technology access to higher levels of cognitive ability only lead to a massive widening of societal gaps? This and other ethical conundrums must be carefully considered as we quickly approach this Age of Enhancement.
The idea of the DermalAbyss project is that an individual would have the ink tattooed onto their body in the pattern of their preference. The tattoo would then change color according to the amount of the activating agent present. A tattoo using the ink designed to respond to glucose levels, for example, would change color from blue to brown as the person’s blood sugar level rises.
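For illustration only, the level-to-color behavior described above can be modeled as a simple linear interpolation between two colors. The glucose thresholds and RGB endpoints below are invented for this sketch, not values from the DermalAbyss project:

```python
def lerp_color(c1, c2, t):
    """Linearly interpolate between two RGB colors; t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

# Illustrative endpoint colors for the glucose-sensitive ink.
BLUE, BROWN = (0, 0, 255), (139, 69, 19)

def tattoo_color(glucose_mg_dl, low=70, high=180):
    """Map a glucose reading to a color on the blue-to-brown scale.
    The low/high thresholds here are hypothetical, not from the study."""
    t = min(1.0, max(0.0, (glucose_mg_dl - low) / (high - low)))
    return lerp_color(BLUE, BROWN, t)

print(tattoo_color(70))   # fully blue at low glucose
print(tattoo_color(180))  # fully brown at high glucose
```

Readings between the two thresholds would produce intermediate hues, which is the visual cue the project relies on.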
A Medical Skin Interface
The technology is an ingenious interaction of the body-art, medical, and bio-sensor sectors. While the researchers have no immediate plans to release their ink to the public, the potential of the project is huge, and others could possibly explore and expand upon it in the future.
Aside from the initial tattooing process, the researchers’ skin interfaces are non-invasive, unlike the methods currently used to monitor diabetes. They’re also much harder to damage than current wearable technology.
As stated on the project website, the technology could potentially be used to measure far more than just the levels tested in the study: “It could be used for applications in [continuous] monitoring, such as medical diagnostics, quantified self, and data encoding in the body.”
This isn’t the only research exploring innovative uses of tattoos — others have found ways to link body ink to sound files or use it to control smartphones — but this research is the first to explicitly explore the medical possibilities of inked biosensors. Though just a proof of concept right now, DermalAbyss could be offering us a glimpse into the future of health monitoring.
In 1905, an Ohio farmer survived a railroad accident that cost him both of his legs. Two years later, he founded the Ohio Willow Wood company, using the namesake timber to hand-carve prosthetic limbs. The company grew, surviving the Great Depression and a fire that destroyed the plant, and still thrives today in rural Ohio. Few who work there now might remember the curious footnote in the company’s history that occurred during World War II, when the rebuilt factory was diversified to build parts for PT boats and B-17 bombers.
Today, there is a grim irony in a company that specializes in prosthetic limbs building parts for the very war machine that increases demand for prosthetic limbs. Indeed, the tragedy of war has pushed prosthetics researchers to work ever harder to help service members and veterans who have lost limbs.
That has made the U.S. Department of Veterans Affairs a key player in prosthetics development and technological innovation. But what is created for service members and veterans has benefits far beyond them, helping nearly two million Americans – civilian children, elderly people and young adults with amputations – maximize their mobility.
A Key Funding Source
As a biomedical engineer specializing in prosthetics, I have for several years reviewed grant proposals seeking VA funding for prosthetic limb research.
The federal government has long played a vital role in advancing prosthetic technologies. Before the 1980s, prosthetic feet were adequate for standing and walking, but very limited for more intense activities involving running and jumping. Veterans Administration Rehabilitation Research and Development funds were used to develop the Seattle Foot. That device and its carbon-fiber contemporaries paved the way for “blade”-style prostheses seen in the Paralympics.
For prosthetics researchers, funding from traditional sources such as the National Institutes of Health and the National Science Foundation has become remarkably difficult to obtain, at least in part because of competition. Consequently, many researchers have turned their attention to soldiers and veterans. In 2014-2015, the Congressionally Directed Medical Research Program funded 18 proposals in Orthotics and Prosthetics Outcomes. But that funding is getting more competitive too: 98 proposals were rejected.
Connecting With the Body
All this work is, generally speaking, focused on transforming a prosthesis from an external tool used by a person to an integrated replacement that becomes a part of a person. Much of the research involves taking small steps toward reproducing the grand complexity and robust design of human anatomy. We have not yet achieved the flashy and functional capability of a real-life Luke Skywalker with cranial nerves attached directly to motors in a robotic hand. But researchers have made important progress through intermediate steps.
A process called targeted muscle reinnervation can connect nerves carrying messages from the brain, like “close hand,” to a muscle over which a set of electrodes are placed. Those electrodes in turn send the signal to the motorized hand to close. Sensory feedback is being developed in a similar manner. While these efforts are not yet ready for everyday community use, they represent crucial progress.
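The signal chain described above (nerve, to reinnervated muscle, to surface electrodes, to motorized hand) can be sketched as a simple threshold detector on a muscle signal. This is a deliberately minimal illustration, not actual prosthesis control software; the window size, threshold, and toy signal values are all invented:

```python
def emg_envelope(samples, window=4):
    """Rectify a raw EMG trace and smooth it with a moving average of |x|."""
    rect = [abs(s) for s in samples]
    env = []
    for i in range(len(rect)):
        chunk = rect[max(0, i - window + 1): i + 1]
        env.append(sum(chunk) / len(chunk))
    return env

def hand_commands(envelope, threshold=0.5):
    """Map smoothed muscle activity to a prosthesis command per sample."""
    return ["close" if e > threshold else "rest" for e in envelope]

# Toy surface-EMG trace: quiet muscle, a burst of activity, then quiet again.
signal = [0.1, -0.1, 0.9, -0.8, 1.0, 0.1, 0.0, -0.1]
commands = hand_commands(emg_envelope(signal))
print(commands)  # the burst in the middle triggers "close"
```

Real systems use multiple electrode channels and pattern-recognition classifiers rather than a single threshold, but the basic idea of decoding muscle activity into motor commands is the same.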
Consider also a CDMRP-funded project at the Rehabilitation Institute of Chicago and Vanderbilt University. The lower-limb prostheses used are not just passive spring-like materials: They’re active, motorized joints. Motors require careful control, and the project headed by neuroengineering specialist Levi Hargrove has attempted to learn from the amputee’s muscles when she wants to transition from walking to, say, walking up stairs or down a ramp. Just like your phone’s voice recognition better understands your voice over time, Hargrove’s control algorithm allows the motorized prosthesis to make fewer errors in ambulation.
Clearly, projects like Hargrove’s stand to benefit much broader populations than just soldiers and veterans. My own research focuses on children with limb loss, and I see so-called “civilian spinoffs” every day. In our Center for Pediatric Locomotion Sciences at Georgia State University, we’re modeling the material and structural properties of pediatric prosthetic feet. Many of those components can be traced back to that original VA-funded Seattle Foot. Across town at the Shepherd Center, a renowned hospital specializing in brain and spinal cord injury, robotic exoskeletons help paralyzed individuals move their limbs and remap neural pathways, a project with connections to VA prosthetic research.
Though they won’t get the public recognition of the Jeep or GPS, these civilian spinoffs from military-funded prosthetics research will have long-term benefits for people of all ages, who will feel the results every day.
For those eager to give their own brains a boost, the accessibility of transcranial direct current stimulation (tDCS) is good news. Various communities have sprung up to share tips and tricks on how to test the technique on themselves, often using self-rigged stimulators powered by 9-volt batteries.
Scientists and brain enthusiasts aren’t the only people interested. The military has also been eager to support projects involving brain stimulation with the hope that the technology could one day be used to help soldiers suffering from combat-induced memory loss.
But here’s the catch: the end results are inconsistent at best. While some people swear by the positive effects anecdotally, others report nothing but a nasty scalp burn from the electrodes.
We all have good days when our brains feel sharp and bad days when the “brain fog” never lifts. This led scientists to wonder: because electrical stimulation directly regulates the activity of the brain’s neural networks, could it boost those networks when they’re faltering but disrupt them when they’re already performing at their peak?
In a new study published in “Current Biology,” researchers tested the idea using the most direct type of brain stimulation — electrodes implanted into the brain. Compared to tDCS, which delivers currents through electrodes on the scalp, implanted ones allow much higher precision in controlling which brain region to target and when.
The team collaborated with a precious resource: epilepsy patients who already have electrodes implanted into their hippocampi and surrounding areas. These brain regions are crucial for memories about sequences, spaces and life events. The electrodes serve a double purpose: they both record brain activity and deliver electrical pulses.
The researchers monitored the overall brain activity of 102 epilepsy patients as they memorized 25 lists of a dozen unrelated words and tried to recall them later on.
For each word, the researchers used the corresponding brain activity pattern to train a type of software called a classifier. In this way, for each patient, the classifier eventually learned what types of brain activity preceded successfully remembering a word and what predicted failed recall. Using this method, the scientists objectively defined a “foggy” brain state as the pattern of brain activity that preceded an inability to remember a word, while the pattern of activity common before successful recall characterized being on the ball.
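To make the classifier idea concrete, here is a deliberately simplified sketch: a nearest-centroid classifier that labels a toy “brain activity” feature vector as predicting recall or forgetting. The actual study used far richer spectral features and statistical models; every number below is invented for illustration:

```python
import math

def train_centroids(features, labels):
    """Average the feature vectors for each label ('recalled' / 'forgot')."""
    sums, counts = {}, {}
    for vec, lab in zip(features, labels):
        counts[lab] = counts.get(lab, 0) + 1
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Return the label whose centroid is nearest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))

# Toy "spectral power" features recorded while each word was studied.
features = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8]]
labels = ["recalled", "recalled", "forgot", "forgot"]

centroids = train_centroids(features, labels)
print(classify(centroids, [0.85, 0.25]))  # lands nearest the "recalled" centroid
```

The key point mirrors the study’s design: once such a model is trained per patient, a new pattern of activity can be labeled as a “foggy” or “on the ball” state before the person even attempts recall.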
Next, in the quarter of patients for whom the classifier performed above chance, the researchers zapped their brains as they memorized and recalled a new list of words. As a control, they also measured memory performance without any stimulation, and the patients were asked whether they could tell when the electrodes were on (they couldn’t).
Here’s what they found: when the zap came before a low, foggy brain state, the patients scored roughly 12 to 13 percent higher than usual on the recall task. But if they were already in a high-performance state, quite the opposite occurred. Then the electrical pulse impaired performance by 15 to 20 percent and disrupted the brain’s encoding activity — that is, actually making memories.
Moving Beyond Random Stimulation
This study is notably different from those before. Rather than indiscriminately zapping the brain, the researchers showed that the brain state at the time of memory encoding determines whether brain stimulation helps or hinders. It’s an invaluable insight for future studies that try to tease apart the effects of brain stimulation on memory.
The next big challenge is to incorporate these findings into brain stimulation trials, preferably using noninvasive technologies. The finding that brain activity can predict recall is promising and builds upon previous research linking brain states to successful learning. These studies may be leveraged to help design “smart” brain stimulators.
For example: picture a closed-loop system, where a cap embedded with electrodes measures brain activity using EEG or other methods. Then the data go to a control box to determine the brain state. When the controller detects a low functioning state, it signals the tDCS or other stimulator to give a well-timed zap, thus boosting learning without explicit input from the user.
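The decision rule at the heart of such a closed-loop system can be sketched in a few lines. Everything here is hypothetical: the “brain-state score” stands in for a decoder’s output, and the threshold is arbitrary:

```python
def should_stimulate(brain_state_score, threshold=0.5):
    """Hypothetical closed-loop rule: stimulate only in a 'foggy' state.
    brain_state_score: decoder output in [0, 1]; low values mean a
    low-functioning state, high values mean peak performance."""
    return brain_state_score < threshold

# Simulated stream of decoded brain states over time.
scores = [0.8, 0.6, 0.3, 0.9, 0.2]
decisions = [should_stimulate(s) for s in scores]
print(decisions)  # zap only during the low states
```

The study’s central finding motivates exactly this gating: stimulating during high-performance states hurt recall, so a smart stimulator must hold fire unless the decoded state is low.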
Of course, many questions remain before such a stimulator becomes reality. What are the optimal number and strength of electrical pulses that best bolster learning? Where should we place the electrodes for best effect? And what about unintended consequences? A previous study found that boosting learning may actually impair a person’s ability to automate that skill — quickly and effortlessly perform it — later on. What other hidden costs of brain stimulation are we missing?
I’m not sure if I’ll ever be comfortable with the idea of zapping my brain. But this new study and the many others sure to follow give me more confidence: if I do take the leap into electrical memory enhancement, it’ll be based on data, not on anecdotes.
We are now poised at a time in history when brain-computer interfaces (BCI) are the obvious next step. On the brink of the automation age, we face the daunting prospect of artificial intelligence (AI) becoming more capable than ourselves. Now that touchscreens and voice recognition are part of our everyday devices, it’s time for us to be able to control our electronics with our minds, and tech companies like Facebook are itching to make it happen. And in this modern age, we are also edging closer to the elimination of various diseases, and we long for life without dementia, brain damage, and neurological diseases. The BCI provides us with a way to maintain control of our world and our electronics, to heal ourselves—and maybe even allow humanity itself to level up.
Elon Musk’s ambitious solution is an easily injectable neural electrode, a neural lace, which would be able to both stimulate and interpret electrical activity in the brain. While this hasn’t yet been developed, researchers are already stimulating electrical activity in the human brain using transcranial direct current stimulation, or tDCS. This is being used to treat consciousness disorders, and to help patients with minimal consciousness communicate. Researchers also have a prototype BCI in use, which people with locked-in syndrome can use to communicate.
These are simply the first steps on a more ambitious journey towards the high-level enhancement of the human brain, and by extension, of humanity itself. As envisioned, Musk’s neural lace is a far more transformative solution than tDCS could ever be; tDCS simply works with the neurons that are there. The neural lace would form an entirely new layer of the brain.
Additionally, tDCS works incrementally, slowly training more and more neurons in the brain to fire more readily and more often. The net benefit comes from the sheer numbers; if enough brain cells fire, you will see a result. In a person with minimal consciousness, you might see responsiveness; in a person with normal capacity, you may see improved intelligence, sharper problem-solving abilities, enhanced creativity, or other benefits. There is certainly a demand for this kind of enhancement, as the current nootropics craze proves — and nootropics fans are often also biohackers, willing to try physical solutions like BCIs.
A New Humanity?
So, could tDCS bring forth a new humanity? Or will neural lace or something similar be required for that to be possible? If tDCS is a step toward neural lace, does the difference really matter?
While tDCS has notable potential for treating neurological diseases, its potential for enhancing human intelligence is somewhat murkier. This is especially true in the context of AI, which, alongside the need to retain human rights, is the defining condition pushing for neural lace. The kinds of problems that tDCS can tackle, beyond neurological disease or paralysis, include the need to learn faster, upload skills, or retain plasticity.
To match wits with AI in any meaningful way, a more radical amplification of the brain will be necessary. In other words, the basic idea behind the neural lace is that we can’t beat AI: we’re going to need to join it instead, becoming part of a human/machine merger. This is beyond the realm of tDCS, delving into a new realm of cyborg humanity.
Researchers from the University of Washington (UW) have developed a new method of turning cells into computers that process information digitally instead of following their usual macromolecular processes, described in a study just published in Nature Communications. They did so by building cellular versions of logic gates commonly found in electronic circuits.
The team built their NOR gates, digital logic gates that pass a positive signal only when their two inputs are negative, in the DNA of yeast cells. Each of these cellular NOR gates was made up of three programmable DNA stretches, with two acting as inputs and one as an output. These specific DNA sequences were targeted using CRISPR-Cas9, with the Cas9 proteins serving as the molecular gatekeeper that determined if a certain gate should be active or not.
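The gate’s logic itself is easy to illustrate in ordinary code. The sketch below shows a NOR gate’s behavior and why NOR alone suffices for building circuits: NOT, OR, and AND can all be composed from it. This is standard digital logic, not code from the study:

```python
def nor(a: bool, b: bool) -> bool:
    """A NOR gate outputs True only when both inputs are False."""
    return not (a or b)

# NOR is functionally complete: the other basic gates can be built from it,
# which is why a handful of cellular NOR gates is enough for useful circuits.
def not_(a):
    return nor(a, a)

def or_(a, b):
    return not_(nor(a, b))

def and_(a, b):
    return nor(not_(a), not_(b))

for a in (False, True):
    for b in (False, True):
        print(a, b, nor(a, b))
```

In the cellular version, the “inputs” and “output” are programmable DNA stretches rather than voltages, with CRISPR-Cas9 machinery deciding whether each gate is active.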
Controlling Cellular Function
This UW study isn’t the first to build circuits in cells, but it is the most extensive one to date, with seven cellular NOR gates in a single eukaryotic cell. This added complexity puts us one step closer to transforming cells into biological computers with a number of potential medical applications.
“While implementing simple programs in cells will never rival the speed or accuracy of computation in silicon, genetic programs can interact with the cell’s environment directly,” senior author Eric Klavins explained in a press release. “For example, reprogrammed cells in a patient could make targeted, therapeutic decisions in the most relevant tissues, obviating the need for complex diagnostics and broad spectrum approaches to treatment.”
If given the ability to “hack” our biology in this way, we could potentially engineer immune cells to respond to cancer markers or cellular biosensors to diagnose infectious diseases. Essentially, we’d have an effective way to fight diseases on the cellular level, ushering in a new era in human evolution.
Very soon, we could witness an acceleration in human evolution unlike anything seen before, argues Caleb Scharf, director of Astrobiology at Columbia University. He believes that new technologies will change the evolutionary currency from soft, organic matter to machines that exceed these fleshy restraints and can be programmed or upgraded.
“We may be witnessing the first new origin event,” Scharf said in a video interview with Tech Insider. A sprint period in the endurance race of evolution may already be underway thanks to our interaction with the technology we are creating — and that sprint could get even faster when we consider the technologies around the corner.
Some developments may already be here. The internet, Scharf argues in an Aeon opinion article, could be the beginning of a new state of being for the human species.
“Part of our collective wisdom is now uploaded, placed in an omnipresent cloud of data,” Scharf wrote. “Where that’s taking us to isn’t obvious, however. If anything, we could be heading for a hive-mind state, a collective organism more akin to a termite colony or a set of squirmy naked mole-rats.”
But Scharf also argues in his opinion piece that the mechanization of the biological could be part of a much wider and longer process of life moving between biological and machine states as it develops. He wrote, “Someday we might decide that the future of intelligence on Earth requires biology, not machine computation.”
The idea of moving towards a definite end point in evolution is flawed: evolution is not a process with a roadmap, but a process of adaptation to an environment, whatever that environment may be. Those who can survive in the new environment live to pass on their genes — those who cannot, don’t. Therefore, we must not ask ourselves what technologies are making us bigger, stronger, or faster, but which technologies give us the ability to adapt to whatever characteristic is required for survival.
The first of these technologies is bionics. While today’s bionics aim to mimic the functionality of a human limb — and are getting remarkably close to doing so — once this goal is achieved, bionic engineers may aim to exceed it. Each of us may face an interesting dichotomy: to be inferior but to subscribe to an idea of man that defines “human” according to the natural body as we know it today, or to be a superior transhuman that integrates technology. If limbs become robotic, they also become customizable and can therefore be adapted to whatever the environment requires.
The second is brain-computer interfaces (BCIs). These technologies, which allow the brain to talk directly to computers, open the door for the human mind to inhabit a multitude of robotic forms. Should the environment become incredibly hot, for example, we could engineer a robotic body that withstands the heat and then place a human brain in control of it using a BCI.
Aside from the bodily possibilities, we may also consider the potential for human intelligence. BCIs are not just one-way interactions; they can also allow computers to impart information onto brains. This gives us another golden key to adaptability: the possibility of having knowledge uploaded that would allow us to predict environmental changes and engineer our own means of surviving them.
While this may seem outlandish, we must remember that the transfer of information is the central tenet of both Facebook’s and Elon Musk’s BCIs — although they are bending it toward communication rather than information. Regardless of how we initially use them, these technologies will likely usher us into a new era of human evolution.
Robots have taken over difficult, dangerous and repetitive physical tasks, improving factory safety, worker comfort and product quality. The next phase of labor innovation will do the same thing for cognitive work, removing mentally stressful and repetitive tasks from people’s daily routines.
Human work will become more versatile and creative. Robots and people will work more closely together than ever before. People will use their unique abilities to innovate, collaborate and adapt to new situations. They will handle challenging tasks with knowledge-based reasoning. Machines enabled by the technologies that are now becoming commonplace – virtual assistants like Siri and Alexa, wearable sensors like Fitbits and smartwatches – will take care of tedious work details.
People will still be essential on the factory floors, even as robots become more common. Future operators will have technical support and be super-strong, super-informed, super-safe and constantly connected.
We call this new generation of tech-augmented human workers, both on factory floors and in offices, “Operator 4.0.” There are several types of enhancements available, which can be used individually or in combination to put humans at the heart of this technological revolution.
One straightforward enhancement would let workers wear robotic exoskeletons to enhance their strength. A “super-strength operator” could let a human truly control the physical power of a large robot. In today’s warehouses and construction sites, workers risk injury and exhaustion by handling heavy objects themselves. Or they are forced to compromise, using a more powerful tool with less adaptability, like a forklift.
A worker in a powered robotic suit could easily handle extremely heavy objects without losing the flexibility of natural human movements, and would be far less likely to suffer severe injuries from accidents or overwork. The benefits extend beyond the workplace, too: at the end of the day, a super-strength worker could take off the exoskeleton and still have energy to play with the kids or spend time with friends.
Fighter pilots use heads-up displays, which provide them with crucial information right on the cockpit windshield and directly in their line of sight. This is “augmented reality,” because it displays information within a live view of the world. It used to be very specialized and expensive technology. Now, Microsoft’s HoloLens makes it available for consumers.
An “augmented operator” can get directions or assistance without interrupting the task he or she is working on. Often, when new equipment or processes are developed, trainers need to travel long distances to factories, staying for weeks to teach workers what to do. Designers do the same, getting feedback for refinements and improvements. All that travel takes up a huge amount of time and is extremely expensive. With augmented reality available, it is often unnecessary.
A worker wearing a set of smart glasses can receive individualized, step-by-step instructions displayed right in front of their eyes, no matter where they are looking. With earbuds and a microphone, they could talk directly to trainers in real time.
Many manufacturing environments are hazardous, involving heavy equipment, caustic chemicals and other dangers that can maim and kill human workers. A “healthy operator” may be equipped with wearable sensors tracking pulse rate, body temperature, chemical exposure or other factors that indicate risks of injury.
This type of system is already available: Truck drivers can wear the Maven Co-Pilot, a hands-free headset that detects fatigue symptoms, like head-bobbing movements. It can also ensure drivers check their rear-view mirrors regularly to stay aware of nearby traffic. It can even provide reminders to take scheduled breaks. This helps keep the truck’s driver safe and improves everyone else’s road safety.
The possibilities are limitless. An “analytical operator” would wear a monitor showing real-time data and analytics, such as information on chemicals in a sewage treatment plant or pollutants at an incinerator. A “collaborative operator” may be linked to collaborative robots, or co-bots, like the assembly assistant YuMi. A “smarter operator” could be equipped with an intelligent virtual personal assistant, like an advanced Siri or Alexa.
There does not have to be conflict between robots and humans, with machines taking people’s jobs and leaving them unemployed. Technology should be designed with collaboration in mind. That way, companies and workers alike will be able to capitalize on the respective strengths of both human and machine. What’s more, the inherent flexibility of “Operator 4.0” workers will also help to ensure workplaces of the future that can change and adapt. That means getting ever more efficient and safer, as new technologies emerge.
Newcastle University researchers have developed a prosthetic limb that performs certain actions on its own. All the wearer has to do is look at an object, and the hand does the rest. Using its onboard camera to evaluate the shape, size, and distance, the hand performs the appropriate action – like picking up a coffee cup. This makes complex actions much easier for prosthetic wearers, who otherwise have to perform complex muscle actions to manipulate their limbs.
This technology would take the form of an injectable “neural lace” — composed of a mesh of electrodes — that would augment the human brain, adding another layer to the cortex and limbic system that is capable of communicating with a computer (essentially creating cyborgs). This, hypothetically, creates an upgradable, updatable interface that could be applied in countless ways. Some of these include:
Controlling Computers With Your Mind
Brains and technology both operate using the same vectors: electricity and data. Musk’s Neural Lace would be a system that lets them communicate directly with each other. To borrow a simile from Phillip Alvelda, program manager of the Neural Engineering System Design (NESD) program, another nascent BCI effort: “Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem […] Imagine what will become possible when we upgrade our tools.” Applications could stretch from the remote operation of technology to the completely hands-free and voiceless operation of computers. Researchers in Korea have already used a BCI to control turtles.
Updating Your Mind or Communicating With Someone Else’s
Elon Musk’s idea could both initiate brain activity and monitor it. The technology does not have to be a one-way communication stream; it is capable of both sending messages to the brain and reading information from it. The high-bandwidth interface could allow you to wirelessly transmit information to the cloud, to computers, or even directly to the brains of other people with a similar interface in their heads. There is also the possibility of downloading content to augment your consciousness: think Neo learning kung fu in The Matrix. While initial tests to improve intelligence haven’t been very successful, if brains and computers speak the same language, then computers can impart information to the brain. The technology is currently being used to allow paralyzed people to communicate, but its uses could extend far beyond that.
Bionic Limbs That Feel Like Real Limbs
As part of this two-way communication stream, robotic arms connected to existing nerve structures could relay sensations of touch. Rather than making the brain learn how to use a new part of the nervous system, robotic limbs could be quickly and easily integrated into it. This has the potential to revolutionize prosthetic limbs for the disabled, but it may also encourage people to rid themselves of their biological arms in favour of mechanical super limbs. Who knows!
Emotionally Aware Technology
As computers and brains would essentially be speaking the same language, emotions could be read as data using electrodes. This would shift technology’s perception of humans from basic recognition to complex understanding. Robot helpers would be able to adapt to your emotional state rather than just responding to commands. Photos and videos could also be implanted with emotional metadata, meaning that one could feel what it would be like to be in any given scenario, rather than just trying to imagine it.
Next Generation Adaptable Gaming
One issue with the lifespan of games is repetition; players become accustomed, know what to expect, or are limited by the programmed narrative. A BCI could improve this situation by having games respond to what your brain is feeling, staying one step ahead and remaining endlessly diverse. This would be most applicable to the horror genre, in which enemies could come at you when and where you least expect it, providing constant shocks, jumps, and thrills. The Black Mirror episode Playtest is an hour-long exploration of just how terrifying this could be. Since AI has been shown to be as creative as a human composer, this reality could be surprisingly close.
Anyone who’s ever had a stray eyelash or a piece of dust land in their eye can attest to the pain even the tiniest foreign object can cause. While this sensitivity is essential for keeping our eyes in working order, it can also be a problem for those with retinal implants.
The retina is a light-sensitive layer of tissue at the back of the eye that plays an important role in sight, and damage to it can lead to blindness. For the past few years, retinal implants have been restoring vision for patients with such problems, but those implants have all been mechanical devices, which can wreak havoc on already damaged eyes.
“The human eye is incredibly sensitive, which is why foreign bodies like metal retinal implants can be so damaging, leading to inflammation and/or scarring,” University of Oxford researcher Vanessa Restrepo-Schild explained in a university news release. To that end, she led a team in creating the world’s first synthetic, soft tissue retina.
The team used lab-created biological tissue for its artificial retina, and because it doesn’t incorporate any rigid foreign materials, the device is less likely to cause problems once implanted. “A biological synthetic implant is soft and water-based, so much more friendly to the eye environment,” explained Restrepo-Schild.
So far, the team has only tested its implant in a lab environment, so the next steps are testing it with living tissues and expanding on its functionality. Restrepo-Schild is optimistic about the future practical uses of her creation: “I hope my research is the first step in a journey towards building technology that is soft and biodegradable instead of hard and wasteful.”
All Eyes on Bionics
An estimated 39 million people worldwide are blind, and the Oxford team is just one of many developing bionic eyes to help them regain their vision.
Right now, the only bionic implant approved by the Food and Drug Administration (FDA) for use in the United States is Second Sight’s Argus II. That system requires the implantation of a device comprising an antenna, an electronics case, and an electrode array on the patient’s eye. That implant works in conjunction with a pair of glasses, a video processing unit (VPU), and a cable worn outside the body to restore some of the user’s vision.
Other researchers are skipping the eyes altogether in their quest to bring sight to the blind. Monash University professor Arthur Lowery’s “bionic eyes” feed information from a glasses-mounted camera directly into the brain via electrodes. This means it could be used by someone with severely damaged eyes or even no eyes at all.
Beyond helping people with impaired vision, bionic eyes could eventually give “superpowers” to those who already have no trouble reading the eye chart. Those implants could allow us to see in different light spectrums, give us telescopic sight, or even record what we see and wirelessly upload it to our social networks. The possibilities are endless, and breakthroughs like that of Restrepo-Schild put us one step closer to that age of superhuman sight.
With all the fanfare surrounding the fluid-filled bags that brought premature lambs to healthy development, raising hopes of tackling premature birth in human infants, it’s worth noting how the medical community first tackled premature birth with a little physics.
A video from popular YouTube channel MinuteEarth discusses how a mathematical study left an impressive impact on the medical community, changing how we can make a difference for prematurely born infants.
The video takes a look at the Law of Laplace and its role in the human body. Alveoli, the miniature air sacs in our lungs that exchange carbon dioxide for oxygen, behave much like two connected balloons: the smaller one needs higher pressure to stay inflated. In premature children with Respiratory Distress Syndrome (RDS), the alveoli have trouble maintaining that pressure equilibrium.
By applying the physics behind this complication, doctors using CPAP machines have saved over two million lives worldwide — a dramatic shift from the 1970s, when 3 in 4 premature babies with RDS died of complications.
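The balloon analogy comes straight from the Law of Laplace, which for a sphere says the pressure needed to keep it inflated is P = 2γ/r (surface tension γ over radius r). A minimal sketch, using assumed illustrative values rather than real physiological measurements, shows why small alveoli tend to lose air to large ones and why surfactant (which premature infants lack) helps:

```python
# Law of Laplace for a spherical air sac: P = 2 * gamma / r.
# gamma: surface tension of the fluid lining (N/m); r: radius (m).
# All numbers below are assumed, illustrative values.

def laplace_pressure(gamma: float, radius: float) -> float:
    """Pressure (Pa) needed to keep a sphere of the given radius inflated."""
    return 2 * gamma / radius

gamma = 0.025                  # assumed surface tension, N/m
small, large = 50e-6, 100e-6   # radii of two connected alveoli, m

# The smaller sac needs the higher pressure, so air flows from small
# alveoli into large ones and the small ones collapse.
print(laplace_pressure(gamma, small) > laplace_pressure(gamma, large))  # True

# Surfactant lowers gamma, shrinking the required pressure; CPAP instead
# supplies continuous external pressure to keep the small sacs open.
print(laplace_pressure(gamma * 0.2, small) < laplace_pressure(gamma, small))  # True
```

Because the pressure scales inversely with radius, halving the radius doubles the pressure imbalance, which is why the smallest, most immature alveoli are the first to collapse without surfactant or CPAP.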
Hahna Alexander shares the journey of SolePower’s self-charging work boots, which deliver actionable insights to increase efficiency and improve safety. At the core of SolePower’s development is a patented kinetic charger that generates power with every step. Hahna is a 2017 winner of Toyota Mothers of Invention.
In March 2016, DARPA — the U.S. military’s “mad science” branch — announced their Targeted Neuroplasticity Training (TNT) program. The TNT program aims to explore various safe neurostimulation methods for activating synaptic plasticity, which is the brain’s ability to alter the connecting points between neurons — a requirement for learning. DARPA hopes that building up that ability by subjecting the nervous system to a kind of workout regimen will enable the brain to learn more quickly.
The ideal end benefit for this kind of breakthrough would be downloadable learning. Rather than needing to learn, for example, a new language through rigorous study and practice over a long period of time, we could basically “download” the knowledge after putting our minds into a highly receptive, neuroplastic state. Clearly, this kind of research would benefit anyone, but urgent military missions can succeed or fail based on the timing. In those situations, a faster way to train personnel would be a tremendous boon.
First Neurostimulation, Then Application
As part of the TNT program, DARPA is funding eight projects at seven institutions. All projects are part of a coordinated effort that will first study the fundamental science undergirding brain plasticity and will conclude with human trials. The first portion of the TNT program will work to unravel the neural mechanisms that allow nerve stimulation to influence brain plasticity. The second portion of the program will practically apply what has been learned in a variety of training exercises.
To ensure the work stays practical, foreign language specialists, intelligence analysts, and others who train personnel now will work with researchers to help refine the TNT platform to suit military training needs. Researchers will compare the efficacy of using an implanted device to stimulate the brain versus non-invasive stimulation. They will also explore both the ethics of enhanced learning through neurostimulation and ways to avoid side effects and potential risks.
“The Defense Department operates in a complex, interconnected world in which human skills such as communication and analysis are vital, and the Department has long pushed the frontiers of training to maximize those skills,” Doug Weber, the TNT Program Manager, said in a DARPA press release. “DARPA’s goal with TNT is to further enhance the most effective existing training methods so the men and women of our Armed Forces can operate at their full potential.”
If the TNT program succeeds, striving to be all you can be may mean learning at a much faster pace, and not just for military personnel. Downloadable learning may be one of the ways we achieve next-level humanity.
It may not be as sleek or as high-tech as Iron Man’s suit, but right now, Richard Browning’s flight suit, developed under his startup Gravity, is the closest the world has to Tony Stark’s powered exoskeleton.
In its first live demonstration, Browning showed off his creation, the Daedalus, at a TED conference held at the Vancouver Convention Center, where he hovered above the ground and flew in a circle in front of a captivated crowd.
The suit comprises a specially designed exoskeleton, a helmet that tracks fuel usage, and six miniature jet engines that allow for vertical takeoff. The wearer controls direction and speed with their arms. The tech allows the user to fly for 10 minutes straight, reach speeds of 321 km/h (200 mph), and climb to a few thousand feet. However, for his live demo and his maiden test flight, Browning kept the speed and altitude low for safety reasons.
Browning himself admits that he doesn’t think the tech will go mainstream anytime soon. But the Daedalus serves as a proof of concept, and while it’s still in its early stages of development, the attention it draws, including interest from the UK military, could mean the currently cumbersome tech evolves into something practical. So, while we’re not exactly entering a new era of aviation just yet, this invention is, at the very least, a remarkable stepping stone.
Ray Kurzweil, Google’s director of engineering, is a well-known futurist who seems to have a penchant for accurate predictions. Most recently, he has again reiterated his prediction that the so-called technological singularity will happen by 2045. For Kurzweil, this doesn’t translate to an end-of-the-world-as-we-know-it scenario courtesy of artificially intelligent (AI) machines. Rather, it means human beings will become powered by machines.
Kurzweil believes that, as part of this human-machine melding, nanobots will inhabit our bodies by the 2030s. While flowing through our arteries, these microscopic robots would keep us healthy and transmit our brains onto the cloud.
Another futurist, Dave Evans, founder and CTO of Silicon Valley stealth startup Stringify, gave his thoughts on Kurzweil’s nanobot idea in a February interview with James Bedsole.
Evans explained that he thinks such a merging of technology and biology isn’t at all farfetched. In fact, he described three stages as to how this will occur: the wearable phase (where we are today), the embeddable phase (where we’re headed, with neural implants and such), and the replaceable phase.
Does Evans agree with Kurzweil’s idea of nanobots flowing inside our bodies? Check out the rest of his answer in the video embedded here.
Soon you’ll be able to answer your cell phone by touching your tattoo. Researchers at Saarland University in Germany have developed iSkin, a sensor worn on the skin like a tattoo. Just in case tattoos and cell phones weren’t already cool.
Silicon Valley seems to have a new obsession with a proposed biotechnology called the “neural lace.” Both Elon Musk and Bryan Johnson have launched companies that aim to implant electrodes in the brain, which could ultimately give people the ability to upload and download thoughts.
After weeks of anticipation, details on Elon Musk’s brain-computer interface company Neuralink have finally been revealed. In a detailed report on the website Wait But Why, Tim Urban recounts insights gleaned from his weeks meeting with Musk and his Neuralink team at their San Francisco headquarters. He offers an incredibly detailed and informative overview of both Musk’s latest venture and its place in humanity’s evolution, but for those of you interested in just the big picture, here’s what you really need to know about Neuralink.
Your Brain Will Get Another “Layer”
Right now, you have two primary “layers” to your brain: the limbic system, which controls things like your emotions, long-term memory, and behavior; and the cortex, which handles your complex thoughts, reasoning, and long-term planning. Musk wants his brain interface to be a third layer that will complement the other two. The weirdest thing about that goal may be that he thinks we actually already have this third layer — we just don’t have the best interface for it:
We already have a digital tertiary layer in a sense, in that you have your computer or your phone or your applications…The thing that people, I think, don’t appreciate right now is that they are already a cyborg…If you leave your phone behind, it’s like missing limb syndrome. I think people—they’re already kind of merged with their phone and their laptop and their applications and everything.
The goal of Neuralink, then, is eliminating the middleman and putting that power we currently have at our fingertips directly into our brains. Instead of one person using their phone to transmit a thought to another person (“Dinner at 8?”), the thought would just go from one brain to the other directly.
Thankfully, we’ll be able to control this completely, Musk tells Urban: “People won’t be able to read your thoughts — you would have to will it. If you don’t will it, it doesn’t happen. Just like if you don’t will your mouth to talk, it doesn’t talk.”
Musk Is Working with Some Very Smart People
Musk met with more than 1,000 people before deciding on the eight who would help him shape the future of humanity at Neuralink. He claims assembling the right team was a challenge in and of itself, as he needed to find people capable of working in a cross-disciplinary field that includes everything from brain surgery to microscopic electronics.
The crew he landed is a veritable supergroup of smarties. They have backgrounds from MIT, Duke, and IBM, and their bios include phrases like “neural dust,” “cortical physiology,” and “human psychophysics.” They’re engineers, neurosurgeons, and chip designers, and if anyone can bring Elon Musk’s vision to life, it’s them.
The Timeline For Adoption Is Hazy…
Neuralink won’t come out of the gate with a BMI that transforms you into a walking computer. The first product the company will focus on releasing will be much more targeted. “We are aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years,” said Musk.
Just as SpaceX funded its reusable rocket research with deliveries to the ISS, and Tesla funded its battery research with profits from early car sales, these earliest BMIs for treating disease and disability will keep Neuralink afloat as it works on its truly mind-bending technologies.
As for when those technologies, the ones that allow healthy people to channel their inner telepaths, will arrive, Musk’s fairly optimistic timeline comes with several contingencies: “I think we are about 8 to 10 years away from this being usable by people with no disability…It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.”
…Because The Hurdles are Many
Regulatory timing and device performance are just two of the hurdles Neuralink faces. Elon Musk might make innovation look easy, but even going to Mars seems relatively straightforward compared to his plans for his latest company.
First, there are the engineering hurdles to overcome. The company has to deal with the problems of biocompatibility, wirelessness, power, and — the big one — bandwidth. To date, we’ve never put more than roughly 200 electrodes in a person’s brain at one time. When talking about a world-changing interface, the Neuralink team told Urban they were thinking something like “one million simultaneously recorded neurons.” Not only would they need to find a way to ensure that the brain could effectively communicate with that many electrodes, they also need to overcome the very practical problem of where to physically put them.
The engineering is only half the battle, though. Like Musk mentioned, regulatory approval will be a big factor in the development and adoption of Neuralink’s tech. The company also faces potential skepticism and even fear from a public that doesn’t want anyone cutting into their brains to install some high-tech machinery — according to a recent Pew survey, the public is even more worried about brain computer interfaces than gene editing. There’s also the not-entirely-unfounded fear that these computers could be hacked.
Add to all that our still very, very incomplete understanding of exactly how the brain ticks, and you can see that the Neuralink team has its work cut out for it.
Neuralink Won’t Exist in a Vacuum
Thankfully, they won’t be working to remake our minds alone — many other universities and research institutes are pushing brain interface technology forward. Facebook’s Building 8 is working on its own BCI, MIT is creating super-thin wires for use in brain implants, and other cyborg devices are already in the works to help the paralyzed walk again and the blind regain their sight. Each new development will push the field forward, and the team at Neuralink will be able to learn from the mistakes and successes of others in the field.
Just like other electric cars were on the road before Tesla came along, brain computer interfaces are not new — the tech might just need a visionary like Musk to elevate it (and us) to the next level.
Facebook’s annual developers’ conference is in full swing this week. Ultimately, the event serves as an opportunity for the company to unveil their most innovative products and reveal key details about upcoming projects. And nothing is more promising or intriguing than what’s coming out of Facebook’s mysterious Building 8 (B8).
Currently, we know very little about B8, as none of their projects have been officially detailed, but rumors are swirling, and it’s not like we don’t know anything. We have some big, overarching information about the general work being done. It seems that B8 is working on four primary projects. These are said to include augmented reality, drones, cameras, and direct brain-computer interfaces (BCIs).
Yes, Facebook is working on computers that are meant to interface with our brains.
During the opening event yesterday, Facebook CEO Mark Zuckerberg announced that, today, speakers at the conference would discuss where they are in relation to their work on the BCI technology.
At the event, he stated that we would “hear from Regina Dugan about some of the work that [they are] doing to build even further out beyond augmented reality, and that includes work around direct brain interfaces that are going to, eventually, one day, let you communicate using only your mind.”
Dugan is a former DARPA executive who also worked for Google’s advanced projects division. Now, she has finally released information about the work B8 has been doing. The highlights and video of the event are below:
Facebook is working to develop a brain-computer interface that will, in the future, allow individuals to communicate with other people without speaking. Ultimately, they hope to develop a technology that allows individuals to “speak” using nothing but their thoughts—unconstrained by time or distance.
They want to create “category defining products” that are “social” first, products that allow us to form more human connections and, in the end, unite the digital world of the internet with the physical world and the human mind.
Dugan notes that the brain produces about 1 terabyte of information per second. Through speech, however, we can transmit information to others at only about 100 bytes per second. Facebook wants to capture the information that arrives at the brain’s speech center and get it out into the world, allowing us to share it with others at will.
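Taking the talk’s figures at face value (they are rough, illustrative estimates, not measured neural bandwidths), a quick back-of-envelope calculation shows just how wide that gap is:

```python
# Illustrative arithmetic using the figures quoted above; both numbers
# are rough estimates from the talk, not measured values.
brain_rate = 1e12   # ~1 terabyte of information per second
speech_rate = 100   # ~100 bytes per second via speech

# Speech is ten orders of magnitude narrower than the brain's output.
ratio = brain_rate / speech_rate
print(f"{ratio:.0e}")  # 1e+10

# At speech bandwidth, relaying a single second of "brain output"
# would take roughly three centuries.
seconds_needed = brain_rate / speech_rate
years_needed = seconds_needed / (60 * 60 * 24 * 365)
print(round(years_needed))  # 317
```

That ten-billion-fold mismatch is the bottleneck Facebook says it wants to widen, even if only by a factor of five at first.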
To begin, they hope to let people “type” and “click” with their brains in order to interact with technology. People with ALS, for example, could type not with eye blinks but with their thoughts. In short, they aim to “decode speech” and allow individuals to communicate using their brain waves.
Their initial goal is to let people type, straight from their brains, five times faster than they can on a smartphone. This means developing technologies that can “read” the human brain and transmit that information.
Next, they will work to allow people to “type” a staggering 100 words per minute using their thoughts. That’s far faster than most humans can type on a computer; the average person manages between 38 and 40 words per minute.
They have developed actuators that allow people to “hear” through their skin. Ultimately, with Facebook’s technology, humans can “feel” words.
Eventually, they want to allow people to think something and send the thought to someone’s skin. Additionally, they will allow people to think something in one language and have a person receive the thought in an entirely different language.
This article has been updated to clarify the nature of the brain interface and what information Facebook is hoping to capture and transmit to the outside world.
Nootropics, broadly speaking, are substances that can safely enhance cognitive performance. They’re a loose group of (as yet unclassified) research chemicals, over-the-counter supplements, and a few prescription drugs, taken in various combinations, that are meant to improve your brain’s ability to think without being addictive, harmful, or laden with side effects.
Right now, it’s not entirely clear how nootropics as a group work, for several reasons. How effective any one component of a nootropic supplement (or a stack) is depends on many factors, including the neurochemistry of the user, which is connected to genes, mood, sleep patterns, weight, and other characteristics.
However, some startups creating and selling nootropics have research scientists on their teams, with the aim of offering reliable, proven cognitive enhancers. Qualia is one such nootropic. This 42-ingredient supplement stack is created by the Neurohacker Collective, a group that boasts an interdisciplinary research team including Sara Adães, who has a PhD in neuroscience, and Jon Wilkins, who has a Harvard PhD in biophysics.
Some of Qualia’s ingredients are found in other stacks: Noopept, for example, and Vitamin B complex are some of the usual suspects in nootropics. Green tea extract, L-Theanine, Taurine, and Ginkgo Biloba are also familiar to many users, although many of the other components might stray into the exotic for most of us. Mucuna Pruriens, for example, is a source of L-Dopa, which crosses the blood–brain barrier to increase concentrations of dopamine in the brain; L-Dopa is commonly used to treat dopamine-responsive dystonia and Parkinson’s disease.
The website says that the ‘smart drug’ is designed to provide users with “immediate, noticeable uplift of [their] subjective experience within 20 minutes of taking it, as well as long-term benefits to [their] neurology and overall physiologic functioning.” For people climbing their way up in Silicon Valley, it’s a small price to pay. What would you do with 10 percent more productivity, time, income, or intelligence?
Note: Futurism curates the products that could help reshape our world. Here, we have partnered with the team behind Qualia in order to offer readers a 10% discount using the coupon code ‘futurism’. Futurism also has affiliate partnerships, so we may get a share of the revenue from purchases.
Humans have always been fascinated by the prospect of long life. For a classic take on this idea from one of the greats, check out Kurt Vonnegut’s 1954 short story “Tomorrow and Tomorrow and Tomorrow,” in which he paints a portrait of a crumbling society after the invention of an anti-aging drug. You might think that life extension is just the stuff of science fiction, fantasy, and horror, but scientists have been studying how aging works — and how we can stop it — for decades.
A 2015 editorial published in Nature Medicine said that aging is the greatest risk factor for many chronic human diseases, and that if aging can be slowed, the chance of getting these diseases can be pushed off too. A report from the World Health Organization notes that not all people age at the same rate, and the occurrence of disease isn’t a guarantee. Genetic or environmental factors have a hand in this, and scientists are continually studying what exactly influences aging.
The Wellderly Study, published in 2016, linked cognitive health to physical health. You might also remember the studies done on mice involving the transfusion of blood; younger mouse (and human) blood allowed mice to thrive, while the blood of older mice negatively impacted their brains, livers, and hearts. Maybe vampires have the right idea.
Destroying old cells in mice also had promising effects, as reported by Nature in February. When senescent cells (cells that can no longer divide) build up in aging animals, they “release molecules that can harm nearby tissues.” Mice were genetically engineered so that their senescent cells would be destroyed upon injection of a drug. These engineered mice had lifespans 20 to 30 percent longer than those of their normal counterparts.
In 2015, the FDA authorized the first clinical trials of the drug metformin, which researchers believe has the potential to extend the human lifespan by up to 40 percent (along with extending the span of time during which people enjoy optimal health). It’s impressive how many theories are out there in the scientific community to stop the effects of aging, and as the years go by, there could be hundreds more. This prospect is exciting for many, but it also raises several questions and challenges.
The Future of Aging
So how can we prevent Vonnegut’s vision of the future from happening if we extend the lifespans of humans? Should we even extend human lifespans? We’re already one of the longest-living land animals on Earth. There are ethical and social problems to consider if life extension becomes the norm, and many of these were brought up in a debate sponsored by Intelligence Squared.
Would slowing aging — or even eliminating it — change what it means to be a human? Humans are already quite self-centered as a species (prioritizing our own survival and standard of living over the planet itself and many other species). The planet can only sustain so many humans comfortably, and it’s currently strained as it is. If fewer people died, that would crowd us even more.
There’s also the argument that new generations become more progressive and open to change versus older generations. Without the older population dying off, there’s less potential for different perspectives and new ideas. Dying also gives our lives meaning in regards to time — when time is a finite resource, we cherish it more.
From the opposite side, research into anti-aging will make us healthier in the last years of our lives, making healthcare less expensive for the elderly. Medical research has had to focus on many diseases and conditions, but if the focus is on aging itself, hopefully the potential to develop those diseases will diminish.
As a society, we can also greatly benefit from the experience and wisdom of older people, and it would be detrimental to science to ignore these avenues of research. Anti-aging research can also lead to other research, such as how to sustain bigger populations. Urban Skyfarm is just one idea that would provide housing, farming, clean air, and renewable electricity.
Careful urban planning will be a necessity as we move forward. Many scientists studying longevity believe that, in the end, the benefits of anti-aging research outweigh the negatives. Even if we can’t stop aging completely, we can make the last years of our lives more comfortable and less marred by disease and discomfort.
Despite all the research we’ve done, we still know relatively little about how the human brain works, and we know even less about the mystery of “consciousness.” Scientists disagree about whether consciousness exists at all outside the illusions of our own collective imagination. Some believe it exists independently, and although we’ve yet to understand its origins, they have brought quantum physics into the discussion.
This is probably in part because of the way that the “observer effect” challenged one of science’s most basic tenets: that there is an objective, observable reality that exists whether we’re looking at it or not. The revelation that observing and measuring quantum effects changes their behavior is troubling, but it also suggests to many people that consciousness itself is part of quantum theory. Moreover, as humans creating AI that, for all its achievements, still can’t master some of the things that come so easily to our own minds (at least not yet), we are bound to see a blurry reflection of ourselves in quantum computers, which promise to achieve so much more than ordinary computers ever could.
However, it was the British physicist Roger Penrose who pointed out that, observer effect aside, quantum mechanics may be involved in consciousness. More specifically, he thought it might be possible that quantum events cause molecular structures in the brain to alter their state and trigger neurons in different ways; that literal quantum effects within the brain exist.
For all we can accomplish with the human brain, it has its foibles, and perhaps suspecting the existence of quantum consciousness is one of them. We possess superior intellects because of our high-level pattern-processing abilities, but it is also a well-proven fact that the human brain is prone to seeing meaningful patterns where none exist, even in the midst of meaningless noise. And while the study of quantum physics is certainly not meaningless noise, it’s possible that our minds — which are meaning-making machines — are wrong to see themselves in quantum effects. Does it really make sense to think that our lack of understanding of both consciousness and quantum mechanics points to a larger connection?
Our Participatory Universe
There is more to this question than raw philosophical interest: if there is in fact a connection between quantum mechanics and human consciousness, any major breakthrough in our understanding of either could help us understand both. For example, advances in quantum computing could enable us to master brain augmentation and the uploading of consciousness, opening the door to a form of immortality. An improved understanding of the superposition property could teach us how to hold multiple mutually exclusive ideas at once.
Or perhaps we’ve been approaching this in the wrong way. As we look at quantum mechanics, we ask ourselves whether we disturb the effects by measuring them, or whether it is the act of consciously registering the measurement that causes the disturbance. Is it possible that knowing how to think in the right way—achieving a quantum consciousness—will allow us to perceive quantum mechanics properly for the first time? We’ve always been part of Wheeler’s participatory universe in some sense, lending our interpretation to what reality is as we record our own history.
For now, most of the scientific community regards quantum effects in the brain skeptically—an appropriate response at this point. Fueling the fast retreat from quantum consciousness theories is the New Age trend of the same name and the cottage industry that has arisen around it, with plenty of avid bloggers writing about things like telepathy, the afterlife, and telekinesis, and crafters selling art and other products.
Whether or not consciousness influences quantum mechanics, and whether or not we eventually require quantum theory to fully comprehend how the brain works, for now we can enjoy the useful discomfort the association provides. Quantum theory has forced us out of our collective comfort zone as we consider new ways of thinking and find ourselves living inside our own theories.
Rarely do we hear a story as incredible as Adam Gorlitsky’s. Although he was paralyzed from the waist down in a terrible car accident 10 years ago, he continues to complete 10-kilometer (6-mile) foot races across the country. It’s a seemingly impossible feat made possible by the advent of reliable robotic exoskeletons.
Thanks to the “ReWalker” exoskeleton produced by the company ReWalk Robotics, Adam is able to walk once more. Gorlitsky’s case is a powerful testament to what a light, wearable exoskeleton with motors at the hip and knee joints can do for people with daunting disabilities.
The battery-powered, computerized, robotic prosthetic was offered to Gorlitsky during a clinical trial, and after the trial Gorlitsky couldn’t shake off the immense confidence and freedom that the suit had given him after nearly a decade of feeling powerless. Deciding that the suit could drastically improve his standard of living, Gorlitsky launched a GoFundMe campaign in 2015 that sold a line of I Got Legs t-shirts, helping him make a down payment on the new ReWalk Personal 6.0 exoskeleton suit.
A Project to Help Others
Gorlitsky’s journey didn’t end there. He has since joined forces with director Warren Adcock to tell the story of how the ReWalk changed his life in a documentary. Through the film, I Got Legs: Or How I Learned to Stop Worrying and Love the Exoskeleton, the team aims to share Gorlitsky’s journey while advocating for the “right to walk,” the chief slogan and goal of Adam’s non-profit organization, I Got Legs. Supporters can back the documentary for various rewards through its Kickstarter page.
The documentary will touch upon several key moments in Adam’s life, underscoring his ability to look past personal struggles and fight not only for himself, but also for others with spinal injuries. Adam’s confidence and courage stem from his access to an exoskeleton suit, access he is fighting to extend to others through critical insurance policy reform.
If the team raises its $70,000 goal by May, they plan to release the documentary sometime in 2018.
Doing research to define what nootropics are is kind of like asking techies which VPN is the best one to use. You get an avalanche of responses from a group of people who are absolute evangelists for the technology, and each is totally sold on their particular solution. This isn’t a criticism; it is the observation of an interested outsider looking in on a movement—the movement to achieve next-level humanity.
Nootropics, broadly speaking, are substances that can safely enhance cognitive performance. We’re not talking about convincing your doctor to write you a Ritalin or Adderall prescription to help you stay awake longer. We’re talking about a group of (as yet unclassified) research chemicals, over-the-counter supplements, and a few prescription drugs, taken in various combinations, that are neither addictive nor harmful, don’t come laden with side effects, and are basically meant to improve your brain’s ability to think.
Part of the secret sauce of nootropics is finding the right combination of the right substances, and taking them in the right amounts and proportions at the right times. That series of alignments is your “stack,” and a growing number of Silicon Valley innovators are relying on their stacks every day.
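In programming terms, a personal stack is really just structured data: which substances, how much, and when. The sketch below is purely illustrative — the substances, doses, and schedule labels are hypothetical examples chosen for familiarity, not recommendations.

```python
# Hypothetical representation of a nootropic "stack": each entry pairs a
# substance with a dose and the time of day it is taken. Illustrative only.
from dataclasses import dataclass

@dataclass
class StackEntry:
    substance: str
    dose_mg: int
    schedule: str  # e.g., "morning", "evening"

stack = [
    StackEntry("caffeine", 100, "morning"),
    StackEntry("l-theanine", 200, "morning"),  # commonly paired with caffeine
    StackEntry("fish oil", 1000, "evening"),
]

def doses_by_schedule(entries):
    """Group the stack by schedule so each day's doses can be read off easily."""
    grouped = {}
    for e in entries:
        grouped.setdefault(e.schedule, []).append((e.substance, e.dose_mg))
    return grouped

print(doses_by_schedule(stack))
```

Organizing a stack this way makes it easy to log what was actually taken each day, which is how many users track the effects of changing one variable at a time.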
Some nootropics you’ve probably already tried, like fish oil or caffeine, or maybe creatine or L-theanine, an amino acid commonly found in green tea. Others you may not have heard of, substances like the class of about 20 drugs called “racetams.”
To be clear, the idea behind nootropics is not taking a magic pill and transforming yourself into a genius instantly. The core value here is optimization, incremental changes—sometimes tiny ones—that together produce a cumulative effect that is powerful and progressive. Nootropics aren’t like winning the lottery (something that happens all in one day). They are more like investing: cashing in on compound cognitive interest.
In other words, nootropics are not like the pills in Limitless, which make you super-smart instantly. Rather, they pose the seductive question of what you might achieve if you were 10 percent more productive…all the time.
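The “compound cognitive interest” analogy is easy to make concrete: a small repeated gain multiplies over time rather than adding. The toy calculation below uses an arbitrary 1%-per-week figure purely to illustrate the arithmetic, not any measured effect of nootropics.

```python
# Toy illustration of the compounding analogy. The gain figure is
# arbitrary and illustrative, not a measured effect.
def compounded_gain(per_period_gain: float, periods: int) -> float:
    """Total multiplier after `periods` of a fractional `per_period_gain`."""
    return (1 + per_period_gain) ** periods

# A steady 1% improvement per week, sustained for a year (~52 weeks),
# multiplies to roughly 1.68x -- noticeably more than the 1.52x you'd
# get if the same gains merely added up linearly.
print(f"{compounded_gain(0.01, 52):.2f}x")
```

This is the sense in which proponents frame tiny, consistent improvements as an investment rather than a jackpot.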
Right now, it’s not entirely clear how nootropics as a group work, for several reasons. First, how effective any one component of a nootropic supplement (or a stack) is depends on many factors, including the neurochemistry of the user, which is connected to genes, mood, sleep patterns, weight, and other characteristics. In other words, results vary, and they can vary a lot.
Second, some nootropic components have received more research attention than others. For example, it is well-known how caffeine affects the body. Unfortunately, as anyone with a coffee habit can tell you, the benefits of caffeine can be lost over time as your body builds up a tolerance to it, and some people experience withdrawal-like symptoms when they don’t ingest caffeine, including headaches.
This is where the amino acid L-theanine comes back into the picture, because research has shown that it reduces physiological and psychological stress responses, making it a great partner for caffeine. It also promotes neuronal health. Studies on the two substances taken together show that they promote alertness, attention, and task switching abilities. This is why the L-theanine and caffeine combination is part of many stacks.
The third factor in where the science behind nootropics stands is that many of these substances have been the subject of research studies that proponents don’t think are exactly on point. For example, Kamal Patel points out that racetams are designed to improve cognitive function, but also notes that researchers aren’t exactly sure what their mechanisms are. According to Patel, this is in part because most research on Piracetam focuses on the elderly and people with real cognitive deficits; in contrast, the people who use nootropics are primarily young professionals who are already at the top of their game (at least, their unenhanced game).
That said, there are a number of studies that have found benefits in nootropics. One study found that brain function in elderly patients improved significantly with regular doses of Piracetam. Other research has shown that Piracetam improves memory in adults and can help students improve their nonverbal learning skills. The bottom line here is that researchers know Piracetam has some positive benefits, but they don’t yet understand why or how.
So, ultimately, the science behind nootropics is promising, but we are still in the early research stage, meaning that it is very hard to say anything definitive about combinations and how they work.
One final point of interest regarding research and nootropics is that the dedicated fan base of nootropics users is currently serving as its own long-term research cohort. They meet in person sometimes, for everything from informational seminars to meditation contests. They gather in their own nootropics subreddit and in other online forums, sharing information on stacks, optimizing doses, and results. This is DIY, anecdotal science, not placebo-controlled, and certainly not generalizable—but to many people looking to optimize their life (and their brain), the evidence is persuasive enough for them to give nootropics a shot.
On the Market Now
For people who are interested in self-improvement but stand firm on the scientific method, there are some reasonable options to try. There are some startups creating and selling nootropics that have research scientists on their teams, with the aim of offering reliable, proven cognitive enhancers.
Qualia is one such nootropic. This 42-ingredient supplement stack is created by the Neurohacker Collective, a group that boasts an interdisciplinary research team. Among the members of Qualia’s science team are Sara Adães, who has a PhD in neuroscience; Andrew Huberman, who is a Stanford University School of Medicine professor of Neurobiology; Lindsay Briner, who is in her third year of PhD study in cognitive neuroscience; Dr. Rishi Khatri, JD/MD; Jon Wilkins, a Harvard PhD in biophysics; and Heather Sandison, a naturopath. You can find a complete list of Qualia’s ingredients on the website, as well as some of the research behind the stack’s formulation.
Some of Qualia’s ingredients are found in other stacks: Noopept, for example, and Vitamin B complex are some of the usual suspects in nootropics. Green tea extract, L-Theanine, Taurine, and Ginkgo Biloba are also familiar to many users, although many of the other components might stray into the exotic for most of us. Mucuna Pruriens, for example, is a source of L-Dopa, which crosses the blood–brain barrier to increase concentrations of dopamine in the brain; L-Dopa is commonly used to treat dopamine-responsive dystonia and Parkinson’s disease.
The website says that the ‘smart drug’ is designed to provide users with “immediate, noticeable uplift of [their] subjective experience within 20 minutes of taking it, as well as long-term benefits to [their] neurology and overall physiologic functioning.” For people climbing their way up in Silicon Valley, it’s a small price to pay. What would you do with 10 percent more productivity, time, income, or intelligence?
Note: Futurism curates products that we believe in. Here, we have partnered with the team behind Qualia in order to offer readers a 10% discount using the coupon code ‘futurism’. Futurism also has affiliate partnerships, so we may get a share of the revenue from purchases.
Just as ancient Greeks fantasized about soaring flight, today’s imaginations dream of melding minds and machines as a remedy to the pesky problem of human mortality. Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?
Over the last 50 years, researchers at university labs and companies around the world have made impressive progress toward achieving such a vision. Recently, successful entrepreneurs such as Elon Musk (Neuralink) and Bryan Johnson (Kernel) have announced new startups that seek to enhance human capabilities through brain-computer interfacing.
How close are we really to successfully connecting our brains to our technologies? And what might the implications be when our minds are plugged in?
Much of the recent work on BCIs aims to improve the quality of life of people who are paralyzed or have severe motor disabilities. You may have seen some recent accomplishments in the news: University of Pittsburgh researchers use signals recorded inside the brain to control a robotic arm. Stanford researchers can extract the movement intentions of paralyzed patients from their brain signals, allowing them to use a tablet wirelessly.
The most sophisticated BCIs are “bi-directional” BCIs (BBCIs), which can both record from and stimulate the nervous system. At our center, we’re exploring BBCIs as a radical new rehabilitation tool for stroke and spinal cord injury. We’ve shown that a BBCI can be used to strengthen connections between two brain regions or between the brain and the spinal cord, and reroute information around an area of injury to reanimate a paralyzed limb.
With all these successes to date, you might think a brain-computer interface is poised to be the next must-have consumer gadget.
Still Early Days
But a careful look at some of the current BCI demonstrations reveals we still have a way to go: When BCIs produce movements, they are much slower, less precise and less complex than what able-bodied people do easily every day with their limbs. Bionic eyes offer very low-resolution vision; cochlear implants can electronically carry limited speech information but distort the experience of music. And to make all these technologies work, electrodes have to be surgically implanted – a prospect most people today wouldn’t consider.
But all these demos have been in the laboratory – where the rooms are quiet, the test subjects aren’t distracted, the technical setup is long and methodical, and experiments last only long enough to show that a concept is possible. It’s proved very difficult to make these systems fast and robust enough to be of practical use in the real world.
Even with implanted electrodes, another problem with trying to read minds arises from how our brains are structured. We know that each neuron and its thousands of connected neighbors form an unimaginably large and ever-changing network. What might this mean for neuroengineers?
Imagine you’re trying to understand a conversation between a big group of friends about a complicated subject, but you’re allowed to listen to only a single person. You might be able to figure out the very rough topic of what the conversation is about, but definitely not all the details and nuances of the entire discussion. Because even our best implants only allow us to listen to a few small patches of the brain at a time, we can do some impressive things, but we’re nowhere near understanding the full conversation.
There is also what we think of as a language barrier. Neurons communicate with each other through a complex interaction of electrical signals and chemical reactions. This native electro-chemical language can be interpreted with electrical circuits, but it’s not easy. Similarly, when we speak back to the brain using electrical stimulation, it is with a heavy electrical “accent.” This makes it difficult for neurons to understand what the stimulation is trying to convey in the midst of all the other ongoing neural activity.
Finally, there is the problem of damage. Brain tissue is soft and flexible, while most of our electrically conductive materials – the wires that connect to brain tissue – tend to be very rigid. This means that implanted electronics often cause scarring and immune reactions that make the implants lose effectiveness over time. Flexible biocompatible fibers and arrays may eventually help in this regard.
Ultimately, we believe a “co-adaptive” bidirectional BCI, where the electronics learns with the brain and talks back to the brain constantly during the process of learning, may prove to be a necessary step to build the neural bridge. Building such co-adaptive bidirectional BCIs is the goal of our center.
Elon Musk’s new startup Neuralink has the stated ultimate goal of enhancing humans with BCIs to give our brains a leg up in the ongoing arms race between human and artificial intelligence. He hopes that with the ability to connect to our technologies, the human brain could enhance its own capabilities – possibly allowing us to avoid a potential dystopian future where AI has far surpassed natural human capabilities. Such a vision certainly may seem far-off or fanciful, but we shouldn’t dismiss an idea on strangeness alone. After all, self-driving cars were relegated to the realm of science fiction even a decade and a half ago – and now share our roads.
In a closer future, as brain-computer interfaces move beyond restoring function in disabled people to augmenting able-bodied individuals beyond their human capacity, we need to be acutely aware of a host of issues related to consent, privacy, identity, agency and inequality. At our center, a team of philosophers, clinicians and engineers is working actively to address these ethical, moral and social justice issues and offer neuroethical guidelines before the field progresses too far ahead.
Connecting our brains directly to technology may ultimately be a natural progression of how humans have augmented themselves with technology over the ages, from using wheels to overcome our bipedal limitations to making notations on clay tablets and paper to augment our memories. Much like the computers, smartphones and virtual reality headsets of today, augmentative BCIs, when they finally arrive on the consumer market, will be exhilarating, frustrating, risky and, at the same time, full of promise.
Neuralink is Musk’s unconventional proposal for preempting that fear. The budding company will build a device to be implanted into the human brain. This device — which is likely to be called a “neural lace” —would give the human brain the ability to directly interface with gadgets and other devices. It could also improve the human brain’s memory by increasing its storage capacity. Such brain-computer implants could also lead to improved treatments for neurological diseases and cognitive disorders. If that wasn’t impressive enough, such a device could potentially be used to reprogram a person’s neural code.
In a piece he wrote for CNBC, Dustin McKissen wondered aloud how such a technology would be introduced — quite literally — into the public consciousness: “one question Musk hasn’t answered (and in fairness, it may not be his responsibility to answer) is who will have the privilege of getting a neural lace?” McKissen is the founder and CEO of PR and strategy firm McKissen + Company, whose work includes analyzing the effects of politics on the U.S. business climate.
“If the essentialness of maternity care is up for debate, it goes without saying Elon Musk’s neural lace probably won’t be covered under your insurance plan,” McKissen wrote, referring to the Obamacare repeal that has been at the forefront of U.S. political debate as of late. “In other words, not only do the rich seem to get richer—they may get the benefit of having a computer-enhanced brain.”
McKissen warns of how social inequality could render Musk’s neural lace beneficial only to a select few, rather than the human race as a whole. “What will income inequality look like if only the very wealthy get an upgrade? And will children be able to get a neural lace?” he asked. Such a society is reminiscent of the one featured in the science fiction film Elysium, where only the privileged few had access to technology’s benefits.
McKissen added: “Research has shown there is already a digital divide contributing to chronic poverty in low-income and rural communities. That digital divide will only grow when some of us can afford a brain enhanced with artificial intelligence. […] most of us are going to have to compete with computer-enhanced peers in an already unequal world.”
McKissen isn’t arguing that some people would be more deserving of access to advanced technology like a neural lace, but rather, he points out the need to improve the “current playing field” — which one could argue extends beyond the question of who gets a neural lace.
As he said, “In a world that’s growing increasingly class conscious, the ability for a relatively small number of people to become more than human could be a disaster for everyone—especially if that technology arrives in a time when income inequality is even worse than it is today.”
A company based in Stockholm, Sweden, is turning its employees into “cyborgs” using a microchip implant about the size of a grain of rice. Though not the first time such microchip implants have been used, this program is the first example of such implants being made available to a company’s employees on this sort of level.
“The biggest benefit I think is convenience,” Patrick Mesterton, co-founder and CEO of Epicenter, told the Associated Press. Epicenter provides network and office space to budding digital startups, and it is currently home to more than 100 companies and about 2,000 workers. The company calls itself “Stockholm’s first digital House of Innovation,” and it only started implanting workers in January 2015.
The microchips, which are implanted in the hands of employees and startup members, function as swipe cards. “It basically replaces a lot of things you have, other communication devices, whether it be credit cards or keys,” said Mesterton, who demonstrated opening a door just by waving his hand near it. The implant doesn’t just open doors, though. Epicenter’s “cyborg” employees can operate their printers with it or even order smoothies with a wave of their hands.
The devices aren’t mandatory, but “being chipped” has become popular amongst Epicenter’s employees, with more than 150 now implanted with the devices. The company even hosts monthly events where participants can get the implants for free, as well as parties to celebrate those who got implanted.
An obvious concern is security and privacy. “Of course, putting things into your body is quite a big step to do, and it was even for me at first,” Mesterton said, recalling his initial doubts about the implants, which carry information that can be transmitted to other devices via electromagnetic waves, but cannot receive information themselves.
“The data that you could possibly get from a chip that is embedded in your body is a lot different from the data that you can get from a smartphone,” explained Ben Libberton, a microbiologist at Stockholm’s Karolinska Institute. Such devices, he said, can be exploited by hackers to gain huge amounts of information.
“Conceptually you could get data about your health, you could get data about your whereabouts, how often you’re working, how long you’re working, if you’re taking toilet breaks and things like that,” Libberton added. The more sophisticated a microchip is, the bigger the ethical dilemmas that can come with it.
For Mesterton, there really isn’t a problem. “I mean, people have been implanting things into their body, like pacemakers and stuff to control your heart,” he said. “That’s a way, way more serious thing than having a small chip that can actually communicate with devices.”
The media is bursting at the seams with what seems to be a superhero revolution. Comic book publishers like Marvel and DC have spilled over onto the big screen, and now it looks as though they may be spilling over into our real-world technology. While we have been making efforts at a superhero heads-up display or an Iron Man workout suit, we are also inching our way toward a functional flight suit.
Gravity is a British technology start-up founded by Richard Browning on March 31, 2017. The company has developed a human propulsion system to re-imagine manned flight. With miniaturized jet engines and a customized exoskeleton, the Daedalus is expected to push us into a new era of aviation. Browning and his team developed the suit over the course of 2016, with the team’s journey covered in a short documentary.
Browning’s team put together six kerosene-fuelled micro gas turbines to lift the manned suit—essentially, a man tethered to six compact jet engines. Initial testing began with Browning performing assisted jumps with four engines. Thinking additional engines would help him levitate off the ground, the team added two more with leg mounts. The final design looks slightly different, housing two engines within the suit’s back in conjunction with two engines attached at each arm. The design also has a heads-up display that relays fuel information to the user through a WiFi connection. The suit is expected to reach speeds of 160 km/h (100 mph), though that figure has yet to be put to the test.
If you want to get a custom suit of your own, it is estimated to cost around $250,000. However, Browning suggests that operators be physically fit in order to maneuver the suit. He adds that since he weighs only 70 kilograms (154 pounds), he was able to fly with relative ease.
Gravity’s developments mark the beginning of a new era of human aviation. While the suit seems clunky now, so did the first computers when they were built. We’re witnessing pockets of intelligent capital develop, and it isn’t too hard to imagine the future of this technology.
Ambrosia, a company inspired by Stanford University neuroscientist Tony Wyss-Coray’s work with parabiosis in mice, charges $8,000 per patient for its human clinical trial of parabiosis. As many as 600 people may take part in the study, each receiving 1.5 liters of plasma from donors between the ages of 16 and 25, but the study is being done without Wyss-Coray’s blessing. He believes that the study does not genuinely represent the science and that, “there’s just no clinical evidence, and you’re basically abusing people’s trust and the public excitement around this.”
While Ambrosia is operating without clinical evidence to support the trials, the science behind utilizing young blood in repairing and restoring aged cellular processes is worth taking a look at.
How Could Blood Be Transformed?
Red and white blood cells are produced from stem cells within bone marrow, and as we grow older, our body’s ability to replenish them sharply declines. In experiments similar to the mouse trials run by Wyss-Coray, researcher Hartmut Geiger and his team at the University of Ulm in Germany examined the bone marrow of mice at varying ages and determined that older rodents produce very low levels of the protein osteopontin.
Rather than looking to blood transfusions for a possible solution like Wyss-Coray’s team, Geiger’s team looked at the potential of stem cells to test the importance of the deficient protein. The team introduced “fresh” stem cells into mice that had little to no osteopontin and noticed that the stem cells aged very quickly. When older stem cells were introduced to a dish containing osteopontin and an activator protein, the stem cells began to propagate blood cells again.
While companies like Ambrosia are testing blood transfusions on humans to mimic an experiment that used a shared circulatory system between an older and a younger mouse, Geiger’s team notes that long-term studies of their work are needed to fully verify osteopontin’s rejuvenating effect on cells.
The team is developing a drug with the protein and its activating factor, but they do not promise a fountain of youth. They do believe that there would be benefits for the immune systems of the elderly, which may be better positioned to fight diseases that are linked with cardiovascular aging after taking the drug.
While all this talk about immortality is exciting, it might be a while before we can actually reap the benefits of researchers’ studies in the way we hope. In the meantime, we can keep dreaming away death.
Although AI will eliminate some jobs, some believe it will create even more new careers. This shift may look like the last major change in the U.S. economy, when huge numbers of people moved from working in agriculture to working in manufacturing and other urban jobs.
Healthcare is one of the best examples of AI helping humanity. Even a highly experienced human doctor can have a tough time drawing a diagnosis from complex medical imagery as fine details can go unnoticed. It is far easier for an AI system to see the difference between a benign cyst and a malignant tumor, determine how well a burn is healing, and extrapolate how fast blood is flowing through a heart defect.
In addition to keeping us healthy and safe, AI is also helping the environment. Systems are being used to analyze global threats, such as decreasing biodiversity, as well as develop solutions, such as how best to enact conservation efforts.
AI systems are even helping us reduce energy use altogether. Google’s DeepMind, an AI company, used machine learning on data from Google’s data centers to cut the energy used to cool them by 40 percent. The TensorFlow machine learning library is open source, so others can harness this tech to reduce their own energy use, too.
6. Revolutionize Agriculture
Autonomous systems are already revolutionizing agriculture, doing everything from planting seeds and fertilizing crops to administering pesticides. Drones are particularly useful as they are cheap and can be communally owned, so the flying devices are being used by farmers all over the world to monitor crops and collect data.
With so much information now available online, sorting the facts from the fiction has become increasingly difficult, but AI could help with that, too. A system could sample many versions of a news story from various validated sources. It could then use algorithms to reconstruct the central, verified facts of the events and remove bias. In this way it could act as a fact checker and fake news buster, augmenting journalistic work. Plenty of research is already going into this, so our days of widespread misinformation online could be numbered.
The Tech Is Ours to Program
Artificial intelligence will be most useful to us when we understand and trust it. Though the idea of a rogue autonomous system is prevalent in pop culture, we can’t let that deter us from developing a level of trust, as that’s really the only way we can ever fully enjoy the benefits of AI. Despite their ability to make autonomous, flexible decisions within their narrow areas of focus, AI systems are still predictable in the sense that their overall outcomes align with our goals, as long as we continue to program them that way.
Eight years ago, Bill Kochevar was paralyzed in a cycling accident. Since then, he hasn’t been able to move anything below his shoulders — until now.
As part of a trial at Case Western Reserve University, two tiny, 96-channel electrodes were implanted in the motor cortex of Kochevar’s brain. Essentially, the electrodes register the actions of his neurons and respond by signaling a device that stimulates his arm muscles. The system has given him the ability to grasp and lift things the same way he used to, by simply “thinking” the command to his body. He can now drink through a straw after raising a mug of water to his mouth and eat mashed potatoes with a fork.
Kochevar prepared for the trials by learning how to move a virtual reality (VR) arm on a computer using his mind. He practiced this for four months, after which 36 electrodes were surgically implanted into his right arm. The electrodes were strategically placed to control the movement of his upper and lower arm, elbow, hand, shoulder, and wrist.
Now, when Kochevar thinks about moving his arm, the brain signals travel to the brain-computer interface (BCI), which decodes and translates them into commands for intended movements. Those commands are then converted into patterns of electrical pulses by the Functional Electrical Stimulation (FES) system in his arm. The pulses travel to the electrodes, which stimulate the muscles.
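The decode-then-stimulate chain described above can be sketched in heavily simplified form. The linear decoder, its weights, and the amplitude mapping below are illustrative assumptions for the sake of the example, not the study’s actual algorithms:

```python
def decode_intent(firing_rates, weights):
    """Map motor-cortex firing rates to an intended movement strength.

    firing_rates: spikes/s recorded from each electrode channel
    weights: decoder weights learned during a training phase
    Returns a unitless intent value (roughly 0 = rest, 1 = full effort).
    """
    return sum(r * w for r, w in zip(firing_rates, weights))


def fes_pulse_amplitude(intent, max_ma=20.0):
    """Convert decoded intent into a muscle-stimulation amplitude (mA),
    clamped to a safe range so noise can't over-stimulate the muscle."""
    return max(0.0, min(max_ma, intent * max_ma))
```

In a real system this loop runs continuously, with the decoder retrained as the recorded signals drift over time.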
The research team says that advances needed to use the technology outside of a lab setting are not far from reality.
The brain implant would need to be wireless, but work on that is already underway. The movements themselves would also need to be more precise. That will be achieved once the investigators improve the stimulation and decoding patterns. FES systems that are entirely implantable are already being tested in other clinical research, as well.
“Every day, most of us take for granted that when we will to move, we can move any part of our body with precision and control in multiple directions, and those with traumatic spinal cord injury or any other form of paralysis cannot,” Case Western Reserve School of Medicine associate professor of neurology Benjamin Walter told The Daily Case. “By restoring the communication of the will to move from the brain directly to the body, this work will hopefully begin to restore the hope of millions of paralyzed individuals that someday they will be able to move freely again.”
“For somebody who’s been injured eight years and couldn’t move, being able to move just that little bit is awesome to me,” said Kochevar. “It’s better than I thought it would be.”
It can be impossible for humans to tell apart very similar colors. But with a new pair of tetrachromatic glasses created by researchers at the University of Wisconsin-Madison, you need never again leave the house wearing two items of black clothing that don’t quite match.
These spectacles enhance the user’s existing color vision, affording them new power to discern more distinct shades. Once developed for practical applications, they could be used to spot camouflaged targets in the field or identify counterfeit money.
Color vision in humans relies on short, medium, and long types of cone cells reacting to blue, green, and red wavelengths of light, respectively. These three colors together make up the normal trichromatic vision of a human. Physicist Mikhail Kats was reviewing his knowledge of the eye as he prepared to teach a photonics class when he had an inspiration: could he trick the eye into seeing as though it had a fourth kind of cone cell, effectively giving people tetrachromatic vision?
Evidently, Kats decided the answer was “yes.” The working glasses render metamers, blocks of color with different spectral compositions that nonetheless look identical to the naked eye, visibly distinct by helping the wearer perceive their different wavelengths of light. The team tested the glasses by displaying metamers on a smartphone screen and a computer. The blocks looked identical without the glasses, but with them on it was easy to see the difference.
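As a toy illustration of the effect the glasses exploit, the sketch below builds a metamer pair in a made-up four-wavelength world: two different spectra that excite the three cone types identically, until a spectral filter breaks the match. The cone matrix, spectra, and filter are all invented numbers, not measured sensitivities:

```python
import numpy as np

# Rows: S, M, L cone sensitivities at 4 sample wavelengths (illustrative).
CONES = np.array([
    [1.0, 0.2, 0.0, 0.0],
    [0.1, 1.0, 0.6, 0.1],
    [0.0, 0.5, 1.0, 0.3],
])

def cone_response(spectrum, filt=None):
    """Integrate a light spectrum against the cone sensitivities,
    optionally through a per-wavelength filter (the 'glasses')."""
    if filt is not None:
        spectrum = spectrum * filt
    return CONES @ spectrum

# Build a metamer pair: add a null-space direction of CONES to a base
# spectrum, so the spectra differ but excite the cones identically.
base = np.array([2.0, 2.0, 2.0, 2.0])
null_dir = np.linalg.svd(CONES)[2][-1]   # vector with CONES @ v ~ 0
metamer = base + 0.1 * null_dir

# A notch filter attenuating one wavelength band makes the pair distinct.
notch = np.array([1.0, 1.0, 0.2, 1.0])
```

Without the filter the two spectra produce the same cone responses; with it, the responses differ, which is the whole trick.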
“They look exactly the same and you look through the spectacles and, holy crap, they’re two different things,” Kats said in an interview with New Scientist.
Exploration With Enhanced Senses
Kats’s team is working on filters to help distinguish metamers at the blue end of the visible spectrum and will move on to greens next. Once more hues can be distinguished with the filters, they can be adapted for tasks like guarding fruits and vegetables against spoilage by watching their skins for changes. Tech like these tetrachromatic glasses equips us to see more of the universe around us, as part of a broader sensory enhancement movement.
Think of it as a real-life, though significantly scaled-down, application of the concept from the 2009 blockbuster Avatar, in which humans control the body of an alien by remotely transferring their consciousness into it. The team uses a brain-computer interface (BCI) that translates brain waves into commands that guide or control the movement of the turtle.
KAIST wrote in a press release, “Unlike previous research that has tried to control animal movement by applying invasive methods, most notably in insects, Professors Phill-Seung Lee of the Mechanical Engineering Department and Sungho Jo of the Computing School propose a conceptual system that can guide an animal’s moving path by controlling its instinctive escape behavior.”
Turtles are instinctively drawn toward light. The team harnessed this natural impulse by attaching a light-blocking semi-cylinder to the turtle’s back, which can be opened and closed by human thought, thus allowing the operator to control the turtle’s movement.
The human-turtle BCI setup features a human-operated, head-mounted display integrated with BCI, while the turtle is controlled using what the team calls the “cyborg system” — a system consisting of a camera, Wi-Fi transceiver, a computer-control module, and a battery attached to the turtle’s shell.
As much as we’d like this technology to develop into an actual Avatar-like machine that will one day allow us to control alien life forms, this kind of brain-interface tech will probably find more use here on Earth. Possible applications include improving augmented and virtual reality technology, positioning systems, and military reconnaissance and surveillance, especially given its versatility.
The research team highlighted the versatility of their tech, demonstrated by the variety of environments in which it was effective, in a study published in the Journal of Bionic Engineering. The interface got the turtles moving indoors and outdoors, across surfaces like grass and gravel. The operators even guided the turtles past multiple obstacles, like shallow water and trees, which could make the tech handy during stealthy military operations.
In recent years, media has been bursting at the seams with what can only be described as the superhero revolution. From the original comic books to the big screens in Hollywood, we are surrounded by enhanced humans from the far reaches of our imagination. And as visions of enhanced humans inspire our art, they are also spurring biological and technological advances that could make these visions a reality. Let’s take a look at where we are right now.
A Suit of Armor Like Iron Man
While we’re nowhere near developing the iconic gold and red suit, we are making progress. Researchers across the world are developing exoskeleton suits to enhance human capabilities. Whether to assist those with disabilities or to enhance armed forces, exoskeleton suits are definitely making their debut. But, as the Marvel films suggest, the greatest limiting factor in creating an Iron Man-like suit is the power source.
A suit with such power needs an energy source just as powerful. While it does seem like an uphill battle without an arc reactor, an Iron Man suit of armor seems to be on the more plausible side of the spectrum.
An X-Gene From The X-Men
The conversation takes a difficult turn when you involve genetics. This isn’t about modifying metal, but humans.
While scientists have previously manipulated mammals to amplify physical traits, we are still a long way from having our own Steve Rogers or a cadre of X-Men. But we may now have the means to achieve that result. Scientists believe that epigenetics can be altered to identify traits that may enhance strength, stamina, and a host of other physical abilities that we would normally associate with a superhero.
With Great Power…
It’s not a question of whether or not superhumans will be a reality, but a question of when. We already have clinical trials to fight aging, so why not push humans forward in other avenues? The heart of the issue lies in ethics, society, and finances.
So until an eccentric billionaire comes around to build his or her own super suit, we might have to wait until CRISPR/Cas9 does the job for our own bodies.
A group of researchers from Saarland University in Saarbrücken, Germany, is giving a new meaning to the phrase “comfortable in your own skin.” In a remarkable application of human-computer interaction technology, the research team, led by Martin Weigel, is working on research concepts for uber-thin temporary electronic tattoos that turn your skin into touch-sensitive buttons.
The technology prints wires and electrodes onto temporary tattoo paper using conductive ink. The resulting tattoos are thinner than a human hair. Capable of lasting a couple of days before completely rubbing off, the e-tattoos can be transferred onto skin using water.
Skin and Screen Improvement
According to Weigel, SkinMarks can be placed on bumps and birthmarks, or other distinctive skin features such as wrinkles and freckles, which people can locate intuitively. That makes them ideal sites for touch-sensitive buttons. For example, it’s possible to adjust the volume of your smartphone by sliding your finger across a tattoo placed on the side of another finger. Bending the tattooed finger, however, turns the volume slider into a play and pause button.
An added feature that makes SkinMarks stand out from other e-tattoos is electroluminescence: the tattoos glow whenever a current passes through them. A tattoo shaped like your favorite app could light up when you receive a notification.
“Now we’ve tested the technological feasibility, the next step is to look at implementing it in a practical way,” Weigel said to New Scientist.
Chris Harrison from Carnegie Mellon University thinks the technology will be available in 10 years. “You’ll have these digital tattoo parlours which you can go to in 2050 and 5 minutes later you can walk out with the iPhone 22 on your forearm,” he said in an interview with New Scientist.
“This is amazing research,” he added, noting that the skin gives a bigger surface area than any smartphone out there. “Human fingers are quite nimble on their own skin.”
Everywhere you go, you can find scientists and engineers doing 3-D printing. They may be using it to print bridges over water, or buildings and houses, or even hearts and livers and skull parts. In fact, we’re hearing so much about 3-D printers creating the normal and ordinary stuff all over again, that it’s becoming pretty boring.
This, of course, is how technology makes progress: slowly, with iterative changes added all the time. We’re currently using 3-D printers just to re-create the old stuff we’re used to. Today’s makers and creators are mainly interested in demonstrating the capabilities of the printers, and put less emphasis on actually innovating and creating items that have never existed before; and of course, the clients and customers don’t want anything too extraordinary either. That’s the reason we’re 3-D printing a prosthetic ear that looks just like a normal ear, instead of printing a Vulcan ear.
What happens if we let go of the ordinary and customary, and begin rethinking and reimagining the items and organs we currently have? That’s just what Manu S. Mannoor, Michael C. McAlpine, and their teams did at Princeton and Johns Hopkins Universities. They used a 3-D printer to create cartilage tissue in the shape of a human ear, along with a conductive polymer infused with silver nanoparticles. The end result? A bionic ear that should look and feel just like an ordinary ear, but has increased radio-frequency reception. It is not far-fetched to say that Mannoor and McAlpine have printed the first biological ear that could also double as a radio receiver.
Where else might we see such combinations of the biological and the synthetic? This is a fascinating thought experiment that could help us generate a few forecasts about the future. If I had to guess, I would venture a few combinations for the next twenty years:
Radio-conductive bones: have you come in for a hip replacement, and also happen to have a pacemaker or some other implant? The doctors will supply you with a hip bone printed specifically for you, containing conductive elements that help radio waves penetrate deeper into the body, so that implants can more easily receive energy from outside via radio waves or induction of some kind.
Drug delivering tattoos: this item is not 3-D printed, but it’s still an intriguing combination of a few different concepts. Tattoos are essentially the result of an injection of nano- and micro-particles under the skin. Why not use specific particles for added purposes? You can create beautiful tattoos of dragons and princesses and butterflies that can also deliver medicine and insulin to the bloodstream, or even deliver adrenaline when pressed or when experiencing a certain electrical field that makes the particles release their load. Now here’s a tattoo that army generals are going to wish their soldiers had!
Exquisite fingernails: the most modern 3-D printers come with a camera and AI built in, so they can print directly onto existing items that the user places in the printer. Why not make a 3-D printer that can print directly on fingernails with certain kinds of materials? The fingernails of the future, printed anew every day, might contain tiny batteries that power smartphones by touch, or microphones that record everything that happens around the user.
These are obviously just three rudimentary ideas, but they serve to show what we could gain by leaving behind the idea that new manufacturing technologies should adhere to the “old and proven,” and advance ahead to novel utilities.
In the end, the future is never just “same old, same old”; it is about shedding the customs of the past and creating new ones. And so, if I had to guess, I would wager that such unifications of concepts into new and bizarre devices will give us a much more accurate view of the future than we gain today by showing how 3-D printers can build yet another house or another human organ.
What are your ideas for future combinations of biological and synthetic components? Write them down in the comments section!
The inevitability of aging may be no more than another biological theory that scientific advances will soon retire. Some scientists today say that a fixed lifespan is a societal assumption we no longer need to uphold as a static law of nature, but can instead rewrite to our benefit.
Researchers from fields spanning genetics to artificial intelligence (AI) are working towards a future where we will have to stop using a “midlife crisis” to justify our ill-advised decisions (but is it really ever the wrong time to buy a Porsche?).
While there have been innumerable theoretical ideas and initiatives for dodging the Grim Reaper, many actual strategies that are being developed today fall into one of two camps: biomedical or technological.
To win the biological battle against death, we must start by understanding the enemy. Aging is a process that begins on the cellular level. As our cells divide, their DNA and functionality slowly break down, leading to greater susceptibility to damage and disease. For many years, scientists hoped that telomeres might be the key to halting aging, but these “caps” that protect our DNA have proven more complex than initially thought.
When biologists discovered that our telomeres wear away as we age, they theorized that lengthening these strands of protective DNA could also elongate our lives — perhaps indefinitely. But researchers have since found that multiple processes in cells work to tightly regulate telomere lengths because telomeres that are too long can actually have negative consequences, like cancer. Many labs around the world are studying the molecular mechanisms that lengthen and trim telomeres in the cell in order to find out whether they can be modified to keep telomeres at the Goldilocks length — not too long or too short.
Concurrently, technology for genetic editing has advanced by leaps and bounds. The revolutionary process for editing DNA, “CRISPR/Cas9,” has allowed scientists to make targeted genetic changes in living mammalian cells, and has even worked to save a little girl from cancer. The tool has so many applications, researchers have coined the phrase “CRISPR it!” (who says scientists are out of touch?).
Genetic engineering is only one of the many medical advances that are being used to prolong our lives. However, some see the human body as a vessel not to be fixed, but rather, replaced.
Technology has become fully integrated with modern life, becoming, in effect, an extension of its users. And as virtual brain modeling and AI design have advanced, many have raised the question: at what point does virtual reality become… reality?
We can design servers — and even mechanical bodies — that will last far longer than our current bodies will. And consider that these robotic bodies could endure a much wider range of conditions than our fleshly ones can. Just in terms of space travel, designing a ship that would not need food, water, or atmosphere would be much more efficient. Also, it wouldn’t matter how long the trip to another world would take — we wouldn’t even need hibernation chambers!
While some reject the idea of preserving humanity outside of the actual human body, leaders like Elon Musk are taking steps towards the potential merger of man and machine. Musk has indicated that in the near future he will be unveiling a neural lace, a device that can enhance and grow with your brain. Once computers can seamlessly interact with our minds, it’s conceivable that they may be able to store our minds as well.
Visionaries like Musk, along with other companies chasing immortality, are forcing us to reconsider our own mortality: what would life look like divorced from death? What price are we willing to pay for another 100 years of life? Another 1,000? And if that’s the case, then what does it mean to be human?
If technology keeps advancing at its present rate, we may have to figure out our answers to these questions sooner rather than later.
CRISPR-Cas9 is the most advanced and efficient gene-editing tool we have. Its uses, however, have been largely limited to silencing protein-coding genes in the DNA. This leaves out what’s called the DNA “dark matter”: the non-coding DNA that makes up about 99 percent of our genetic code. A study published in the journal PLOS Computational Biology may soon change this.
The new technique, developed by a team of researchers led by Carlos Pulido, is a software pipeline called CRISPETa. It’s based on a breakthrough CRISPR-Cas9 tool called DECKO, recently developed by the Johnson lab and specifically designed for deleting pieces of non-coding DNA. DECKO employs two sgRNAs as “molecular scissors” that snip out a piece of DNA. While the concept might seem simple, designing deletion experiments with DECKO was time-consuming due to the lack of software to create the required sgRNAs.
This is where CRISPETa comes in. Users tell CRISPETa which region of DNA they wish to delete, and the software generates a pair of optimized sgRNAs that can be used directly in that experiment. Even better, the software can produce designs at high scale, which would also allow for future screening experiments.
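To give a feel for what dual-guide design involves, here is a deliberately naive sketch of the core idea: find a 20-nucleotide protospacer next to an “NGG” PAM site on each side of the region to be deleted. The function and sequence conventions below are illustrative assumptions; real pipelines also scan the reverse strand and score candidate guides for efficiency and specificity.

```python
def find_guides(seq, start, end, guide_len=20):
    """Return a (upstream, downstream) sgRNA pair flanking seq[start:end].

    A guide here is the 20 nt immediately 5' of an 'NGG' PAM on the
    forward strand. Toy model only: no scoring, no reverse strand.
    """
    def guides_in(region, offset):
        found = []
        for i in range(guide_len, len(region) - 2):
            if region[i + 1:i + 3] == "GG":          # PAM = NGG
                found.append((offset + i - guide_len,
                              region[i - guide_len:i]))
        return found

    upstream = guides_in(seq[:start], 0)
    downstream = guides_in(seq[end:], end)
    if not upstream or not downstream:
        return None
    # Choose the pair that sits tightest around the deletion target.
    return upstream[-1][1], downstream[0][1]
```

Given a sequence with a PAM on either side of the target region, the function returns the two protospacers that would direct Cas9 to cut around it.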
“We hope that this new software tool will allow the greatest possible number of researchers to harness the power of CRISPR deletion in their research,” Pulido said.
CRISPETa designs have already demonstrated their ability to efficiently delete desired targets in human cells. “Ultimately, we expect that CRISPR deletion and other genome engineering tools to lead to a revolution in our ability to understand the genomic basis of disease, particularly in the 99% of DNA that does not encode proteins,” researcher Rory Johnson explained. The deletions could also be carried out in RNA molecules.
“Apart from being used as a basic research tool, CRISPR may even be used in the future as a powerful therapeutic to reverse disease-causing mutations,” Johnson added. This is the underlying value of the research: the software could be used to develop CRISPR scissors that delete suspected disease-causing, non-coding DNA. At the very least, CRISPETa will improve our understanding of non-coding DNA, which could lead to the discovery of new disease-causing genes and aid in the development of new drugs to treat, and maybe eventually cure, the associated diseases.
The Cognitive Genomics Project is focused on understanding the origin of intelligence within our own genome. It’s led by BGI, a non-profit research group based in Shenzhen, China, that was founded in 1999. The organization is currently conducting a gene-trait association study of g, a general factor of intelligence defined by three prominent characteristics: stability, heritability, and predictive power. In short, the study is collecting genetic data from over 20,000 individuals with IQs above 150 and looking for patterns in their genes.
While this might seem relatively straightforward, it’s actually a complex and difficult task. That’s because general intelligence does not follow Mendelian, single-gene genetics. Researchers cannot simply look for specific mutations in specific genes, as they can for diseases like Huntington’s disease or cystic fibrosis. Rather, intelligence is more like traits such as eye color and hair color, which involve multiple genes in inheritance patterns we are only beginning to understand.
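The contrast between single-gene and polygenic inheritance can be sketched numerically. The genotypes and effect sizes below are invented purely for illustration; real polygenic analyses involve thousands of variants with tiny, statistically estimated effects:

```python
def mendelian_status(genotype):
    """Single-gene model: one variant determines the phenotype outright.

    genotype: number of disease alleles carried (0, 1, or 2); here we
    assume a dominant variant, so one copy is enough.
    """
    return "affected" if genotype >= 1 else "unaffected"


def polygenic_score(genotypes, effects):
    """Polygenic model: many variants each nudge the trait a little.

    genotypes: allele counts (0, 1, or 2) at each site
    effects:   per-variant effect sizes (illustrative numbers)
    Returns the summed score; no single site decides the outcome.
    """
    return sum(g * e for g, e in zip(genotypes, effects))
```

Under the first model one lookup settles the question; under the second, predicting the trait means estimating thousands of small weights, which is why the study needs so many participants.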
No Gene Editing Needed?
It remains to be seen how effective gene editing can be at influencing traits like personality and intelligence in people whose brains have already formed. One way to avoid the gene-editing process entirely is to select for intelligence in our children from conception: using in vitro fertilization, the genetic information of each embryo produced could be screened for preferred traits.
If the Cognitive Genomics Project provides significant data supporting a correlation between particular parts of the genome and intelligence, then parents could look for those genetic sequences in potential embryos and select the embryos with the desired traits. This method would increase the probability of intelligent children without having to edit particular genome sequences.
While the ethics of human genetic engineering continue to be debated, we may be closer to a more intelligent humanity than ever before.
Researchers from Stanford University have built a new brain-computer interface (BCI) that allows paralyzed people to type with their brains at a rate faster than was previously possible.
For their study, they worked with three participants with severe limb weakness: two due to Lou Gehrig’s disease and one with limited movement due to a spinal cord injury. All three participants had electrode arrays implanted in their brains to record signals from the motor cortex, which controls muscle movement. A cable transmitted the brain signals to a computer, where algorithms translated them into point-and-click commands that moved a cursor over letters. Basically, participants moved the cursor by imagining their own hand movements.
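The point-and-click scheme can be illustrated with a toy model: decoded two-dimensional velocity and click signals drive a cursor over a letter grid, and each click selects the letter underneath it. The grid layout, clamping, and signal format are assumptions made for this sketch, not the study’s actual interface:

```python
# A 5x5 letter grid the cursor moves over (illustrative layout).
GRID = ["abcde", "fghij", "klmno", "pqrst", "uvwxy"]

def type_with_cursor(velocity_stream):
    """Simulate typing from decoded neural commands.

    velocity_stream: (dx, dy, click) triples, one per decoding step,
    as would come out of the point-and-click decoder.
    Returns the string of selected letters.
    """
    x = y = 0
    typed = []
    for dx, dy, click in velocity_stream:
        x = max(0, min(4, x + dx))   # keep the cursor on the grid
        y = max(0, min(4, y + dy))
        if click:
            typed.append(GRID[y][x])
    return "".join(typed)
```

A typing rate is then just selected characters per minute; 39 correct characters per minute works out to roughly eight five-letter words.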
While it took some training, the subjects achieved BCI results that outperform any platform available today. One patient was able to type 39 correct characters per minute, around eight words per minute, proving that BCIs have the potential to enhance communication among people with impaired movement. Notably, these typing rates were achieved without automatic word completion.
“The performance is really exciting. We’re achieving communication rates that many people with arm and hand paralysis would find useful. That’s a critical step for making devices that could be suitable for real-world use,” Chethan Pandarinath, one of the authors of the paper, told Stanford.
BCIs on the Rise
The Stanford team’s contribution toward advancing people’s ability to control machines with just their thoughts reflects the progress being made in this field. In addition to this breakthrough, a team from the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, built a BCI that can decipher thoughts of people with complete locked-in syndrome, giving them the ability to answer yes or no questions. Meanwhile, researchers at Harvard are using BCI tech to restore sight in the blind.
In the U.S. alone, millions of people suffer from paralysis, some from degenerative conditions like ALS and others from unfortunate accidents that cause severe spinal cord injury. This point-and-click approach developed at Stanford could be the key to making a more efficient BCI geared toward improving the quality of life for those people. It could restore their ability to communicate while teams like that at the Center for Sensorimotor Neural Engineering (CSNE) work on ways to use BCIs to help the paralyzed walk again.
With some modifications, the team believes its system could be applied to other devices, including smartphones or tablets. “We’re approaching half of what, for example, I could probably type on a cell phone,” neurosurgeon and co-senior author Jaimie Henderson of Stanford University told Scientific American. “This study reports the highest speed and accuracy, by a factor of three, over what’s been shown before.” In the future, the team hopes to refine the technology so that it becomes wireless, fully implanted, and self-calibrating, thus being able to provide around-the-clock support.
It’s been two years since Neil Harbisson was declared the world’s first legally recognized cyborg after he corrected his colorblindness by having an antenna implanted directly into his skull. Soon, he may be one of many.
Our technological ability to augment the human body with electronic devices has finally reached the point where it can solve life-threatening ailments. We are not talking about simply making life “cooler” with some new gizmo — we are talking about saving the 22 people who are currently dying every day waiting on the U.S. national transplant list.
People who have had to constantly manage type 1 diabetes for most of their lives may soon have access to an artificial pancreas. The device allows users to hook up their smartphones to a sensor that continuously monitors their blood sugar; the phone uses that information, along with input from the user about their activity level, to tell a pump how much insulin to release into the user’s blood at any given moment. The artificial pancreas is currently in clinical trials to assess its safety and efficacy in people with type 1 diabetes.
The pancreas is not the only organ that could be replaced with an artificial device. Many companies are investing in the development of artificial organs. According to a recently released analysis from Big Market Research, the market is expected to surge over the next decade. An implantable artificial kidney may be available to patients currently enduring dialysis by the end of the decade, according to some reports.
These artificial organs, some electronic and some crafted by 3D printing with cultured cells, may be able to replace failing hearts and livers and supplant defective eyes and ears. When these technologies finally replace our woefully inadequate organ donation system, the more than 119,000 men, women, and children who are waiting for organs in the U.S. alone could finally get out of the hospital and live their lives free of fear. Artificial organs could also spare recipients from having to take immunosuppressants for the rest of their lives.
Implants for Everyone
Alright, in addition to saving lives, cybernetics might make life cooler with new gizmos. Leaders in the field like Elon Musk are developing technologies that will not simply restore the bodies of the sick and disabled, but enhance the bodies of the well. Musk has hinted that he plans to soon reveal his neural lace, a device designed to enhance and grow with your brain and bring human intelligence up to speed with artificial intelligence. And if artificial sensory organs like eyes and ears can be designed for the blind or deaf, one could easily imagine them being upgraded to give people super senses as well.
Cybernetics may not only increase the quality of life for both the sick and the well, but increase the quantity of life as well. Every organ we are born with has an expiration date, but this may not be so with our future cybernetic parts. Could artificial organs and cybernetics launch us into the age of immortality? Many hope so, but only time will tell.
While most of the proposed CRISPR applications are focused on editing somatic (non-reproductive) cells, altering germline (reproductive) cells is also a very real possibility. This prospect of editing germline cells and making changes that would be passed on from generation to generation has sparked a heated ethical debate.
The potential to change someone’s DNA even before they are born has led to claims that CRISPR will be used to create “designer babies.” Detractors are appalled at the hubris of using science to engineer the human race. Supporters, on the other hand, say this ability should be a human right.
Rigging the Game
To be fair, most advocates of genetic editing aren’t rallying for support so CRISPR can be used to create a superior human race. Rather, they believe people should have free access to technology that is capable of curing diseases. It’s not about rigging the genetic game — it’s about putting the technique to good use while following a set of ethical recommendations.
To that end, a panel of experts chosen by the National Academy of Sciences and the National Academy of Medicine released a series of guidelines that essentially give gene editing a “yellow light.” The guidelines support gene editing on the premise that it follows a set of stringent rules and is conducted with proper oversight and precaution.
Obviously, genetic enhancement would not be supported under these guidelines, which leaves some proponents miffed. Josiah Zayner, whose online company The ODIN sells kits allowing people to conduct simple genetic engineering experiments at home, is among those who are adamant that gene editing should be a human right. He expressed his views on the subject in an interview with The Outline:
We are at the first time in the history of humanity where we can no longer be stuck with the genes we are dealt. As a society we have begun to see how choice is a right, but for some reason when it comes to genetics, some people think we shouldn’t have a choice. I can be smart and attractive, but everyone else should be ugly, fat, and short because those are the genes they were dealt and they should just deal with it.
The immense potential of gene editing to change humanity means the technology will continue to be plagued by ethical and philosophical concerns. Given the pace of advancement, however, it’s good that we’re having this debate on what and who it should be used for right now.
Brain-computer interfaces (BCI) are more advanced now than they’ve ever been, thanks mostly to research that has improved our understanding of how our brains work. BCIs make it possible for thoughts to be translated into machine commands. This has given hope to people suffering from neurodegenerative diseases and those who require the use of prosthetics.
Advanced as they are now, BCIs still mainly rely on implantable chips and electrodes that are placed on the brain’s surface to receive and transmit signals from brain chemicals known as neurotransmitters. Today’s state-of-the-art electrodes, made from thin-film platinum, are highly susceptible to corrosion over time. The Center for Sensorimotor Neural Engineering (CSNE) — a partnership between San Diego State University, the University of Washington (UW), and the Massachusetts Institute of Technology (MIT) — is working on research to significantly improve this technology. Their study is published in the journal Nature Scientific Reports.
Instead of using platinum, the researchers, led by CSNE deputy director Sam Kassegne, developed electrodes built from a form of carbon called “glassy carbon.” “Glassy carbon is much more promising for reading signals directly from neurotransmitters,” Kassegne told San Diego State University. “You get about twice as much signal-to-noise. It’s a much clearer signal and easier to interpret.” Furthermore, because it’s 10 times smoother than granular thin-film platinum, electrical stimulation corrodes glassy carbon more slowly. The material can also last longer than platinum or any other material used for electrodes.
Neural Stimulation and Growth
The researchers think that glassy carbon can do much more than just improve brain signal transmission. They are also using the improved BCIs built using glassy carbon to record neural signals along the brain’s cortical surface and from inside the brain at the same time. “If you record from deeper in the brain, you can record from single neurons,” explained researcher Elisa Castagnola. “On the surface, you can record from clusters. This combination gives you a better understanding of the complex nature of brain signaling.”
Another potential application is even more promising. Mieko Hirabayashi, a student in Kassegne’s lab, is exploring the possibility of using glassy carbon technology to make neural cells in spinal cords grow to replace damaged tissues. Hirabayashi is experimenting with rats to test if this neural growth can be triggered by precisely calibrated electrical stimulation. Without the new glassy carbon electrodes, Hirabayashi wouldn’t be able to stimulate, read the electrical signals from, or even simply detect the neurotransmitters as easily.
With all the successes that existing BCI technologies have produced, it’s not difficult to imagine a future in which a true merging of human and machine is possible. Improvements such as CSNE’s research bring this future closer to today’s reality.
In 2006, 21-year-old Andy Sandness unsuccessfully attempted suicide. The bullet he aimed at his chin destroyed the lower half of his face. He was rushed to the Mayo Clinic in Rochester, Minnesota, where doctors stabilized him, but nothing could be done about his missing jaw, teeth, and nose.
Sandness went back to his home and job in Wyoming, but six years later, the Mayo Clinic delivered some hopeful news. In 2012, the hospital proposed the idea of a face transplant, a procedure with many risks and possible complications following the final surgery. But, Sandness accepted the dangers, staying hopeful, and agreed to the operation.
“When you look like I looked and you function like I functioned, every little bit of hope that you have, you just jump on it,” Sandness said. “This was the surgery that was going to take me back to normal.”
Preparations for Andy’s facial reconstruction surgery took three years, while doctors trained for the procedure over 50 Saturdays.
In 2016, he was placed on the waiting list for a face transplant. Just five months later, he received notice of a donor whose wife had consented to the procedure.
The procedure stretched across 56 hours and featured 60 different staff members. The doctors were able to successfully transplant Sandness’ nose, cheeks, mouth, jaw, lips, teeth, and chin.
Since the operation, Andy has noticed that people haven’t been staring at him nearly as much as before he underwent reconstructive surgery. He’s also regained his sense of smell, taste, and his ability to breathe efficiently.
Andy’s story might involve a fairly uncommon transplant surgery, but his operation is one of many transplant procedures that are changing, and in some cases saving, lives.
Last month, researchers were able to use retinal transplants to partially restore sight in a group of mice. This success could lead to clinical trials in humans within the near future. Last year, doctors performed four uterus transplants, allowing women to choose whether or not they want children when they previously had no choice. In late 2016, doctors at the Leeds Teaching Hospital NHS Trust announced that they’re beginning to offer hand transplants — the first hospital in the world to do so.
With further advances in transplant surgery, doctors will be able to significantly improve the quality of life in patients who thought they’d never experience a normal life.
Advances in medicine and technology are revolutionizing what it means to be human. By providing us with gene editing tools, such as CRISPR, we’re well on our way to personalizing the medical revolution.
CRISPR provides a way for us to alter gene expression in particular cells, based on need. Up until recently, the process was very difficult to execute. It took many years to develop precision when altering gene expression. With the latest technology, the desired precision can be obtained in just a few weeks. CRISPR, and other technologies like it, are shaping the future of medicine.
When all possible treatments had failed, the parents of Layla, a 1-year-old with leukemia, sought help from new technology developed by Cellectis — a biopharmaceutical company based in Paris, France. The gene editing therapy, which was still experimental, had been utilized once before in a patient with HIV.
Physicians and scientists worked together to give Layla immune cells from a healthy donor that had been modified with a gene editing tool. In this case, TALEN — a DNA-cutting enzyme — was utilized to modify the donor T-cells so that they would not attack Layla’s own cells. In order for the treatment to work, a patient’s immune system is essentially destroyed and replaced with the modified cells. However, this is not a permanent fix: it’s just a temporary solution until a matching T-cell donor can be found.
While Layla’s doctors believe she is in remission, only time will tell if this was a “one-off” fix or a case that may need revisiting. Additional trials are needed so physicians and scientists can better understand how gene-editing can benefit patients, and treat diseases other than cancer.
Without a doubt, computers and machines are besting humans in many ways thanks to developments in artificial intelligence (AI). AIs have beaten their human counterparts at everything from games like Go and poker to driving safely and treating illnesses. Clearly, intelligent machines are increasingly becoming better than humans at so many things, and as the old saying goes, if you can’t beat them…merge with them.
To some degree, we are already cyborgs. You think of the digital tools that you have, the applications that you have. You can ask a question and instantly get an answer from Google. You already have a digital tertiary layer. I say ‘tertiary’ because you think of the animal brain or the primal brain, and then the cortex, the thinking brain, and then your digital self as the third layer.
Hasta la Vista?
The potential for this human-machine merger isn’t lost on DARPA, the research arm of the U.S. Department of Defense. “There are a couple of very interesting things happening as we speak facilitating humans and machines working together in a very different way,” said Justin Sanchez, director of DARPA’s Biological Technologies Office, in an interview with Computer World.
Indeed, we are already seeing this to a certain extent with the latest developments in exoskeleton technology. We now have smart exoskeletons that help paralyzed people walk again or improve the strength and endurance of the average person. “I think the recent science and technology developments we’re making at DARPA, as well as the embracing of physiology and A.I., is enabling us to set up the conditions for profound changes on how humans and machines can work together,” Sanchez added. “We are giving our physiology the opportunity to work with machines in a different way.”
DARPA researchers are developing implantable devices that can perform computing functions similar to a desktop or laptop computer. Sanchez even noted that they have researchers working on the possibility of a human-computer interface using devices and chips that needn’t be implanted in the body. Another potential technology is a pair of smart contact lenses that allow soldiers to see what a drone sees from above.
Of course, while the technological possibilities are already here, there are other things that need to be considered, such as what these enhanced capabilities would mean for humanity in general. “We deeply feel that we can’t do this work in a vacuum,” Sanchez said. “We need to consider all aspects … the moment we try to even start thinking about science and technology in this space. There’s a responsibility that goes along with this.”
The neural lace is a concept that has long been a staple of science fiction: the idea of capturing the processes of the human mind with a tool and enhancing them to better suit our environment and lifestyle, in one large, wirelessly connected interface among humans. Recently, this concept has taken a step from the realm of fiction toward reality.
With their results published in Nature Nanotechnology, scientist Charles Lieber and his team have developed syringe-injectable electronics. Thus far, the technology has only been tested on live mice, but the results are promising. These electronics can monitor brain activity, deliver treatment, and even enhance brain activity. Human trials are years away, and the neural lace is currently a wired technology, but Elon Musk finds it imperative because of what it could do.
Why We Need It
Elon Musk is making it very clear that the best way to avoid a dystopian nightmare is to form a symbiosis with machines. Speaking on Monday in Dubai, he said this merger could “achieve a symbiosis between human and machine intelligence, and maybe solves the control problem and the usefulness problem.” Since computers can transmit information exponentially faster than humans can, becoming “one” with them would allow humans to stay relevant in an age where automation and technology threaten our usefulness as organic beings. Additionally, a wireless interface would help keep us on a similar playing field, essentially by making us part machine. So while this cyborg vision of the future sounds ludicrous, when you consider how much of our reality is already sci-fi inspired, it makes a bit more sense.
You can hear more about Musk’s thoughts on the necessary symbiosis between humans and machines in this recent video from CNBC.
Brain-computer interfaces (BCIs) are the future of brain implant technologies and prosthetics. Hacking the brain, however, hasn’t been easy. For one, electrodes that are implanted in the brain become less effective over time, as the scar tissue that forms around them degrades their connections to brain cells.
That, it would seem, is about to change as researchers from Harvard Medical School have been working with a new kind of implant that isn’t affected by scar tissue. Instead of penetrating the organ, these new electrodes are placed beneath the skull to rest on the surface of an animal’s brain, and they use powerful magnetic fields instead of electrodes like their predecessors to induce brain activity.
Their design has already been tested on mice by researchers from Massachusetts General Hospital, who published their results in the journal Science Advances last December. Next week, testing will begin on monkeys using an implant designed to restore sight in the blind. “At the end of that, we hope to have monkeys be able to navigate a maze just by perceiving light and dark or basic geometric patterns,” explained Bernard Casse, a researcher at the Xerox-PARC research institute, which invented the new implant design as part of the BRAIN Initiative under President Obama.
Improved Brain Technologies
The vision-restoring implant will work by stimulating the visual cortex of the monkeys, as it tries to recreate the activity usually triggered by neuron signals from the eyes. The researchers think this will create the sensation of seeing even without actual input from the eyes. The goal is to be able to use the implant to translate signals from a camera into brain activity. It’s a unique approach to treating blindness, targeting the brain directly instead of the eyes.
Researchers see the value of being able to directly stimulate the brain. Other studies have shown how effective it could be, as in the case of a paralyzed Brazilian man who recovered his sense of touch thanks to a brain implant or the woman with ALS who regained her capacity to communicate thanks to a BCI. While those studies yielded very promising results, the improvements could be only temporary due to the scar tissue problems with traditional electrodes. This new magnetic model could make them permanent.
University of California, San Diego associate professor Todd Coleman thinks this new approach is promising, and he sees other uses for the technology as well. “There could be very nice applications in other parts of the body,” he told MIT Technology Review. For instance, the new implants could be used to regulate activity in the more than 100 million neurons involved in the digestive system to help people with bowel movement troubles. Additionally, Coleman is interested in exploring how the tech could be used on the vagus nerve found in the chest to control PTSD. If all goes well, these monkey trials will be the next step on the path to human testing.
With every step we advance in the fields of robotics and artificial intelligence (AI), it seems we can’t help but ask, “Is this something that will bring us closer to ‘Westworld?’”
Since it premiered last year, HBO’s hit sci-fi series has become the mainstream benchmark for AI systems. In the show’s fictional world, robots can think and function like humans, and they are nearly indistinguishable from their human counterparts. The series has created a sense of wonder at what’s possible given today’s quickly advancing technology, but it has also reinforced how far we still have to go before we can create AIs with the same level of sophistication.
Taken at face value, we know “Westworld” hosts are incredibly advanced AIs that probably have some sort of deep learning mechanism that allows them to quickly respond to their environments. Their mechanical skeletal systems are so sophisticated that they allow for fluid, human-like movements and facial expressions. While not explicitly addressed in the series, it seems the hosts’ bodies are built using organic material and a 3D-printing process, and though we may not have our own Dolores any time soon, we do already have 3D-printed organic materials.
At the Wake Forest Institute for Regenerative Medicine in North Carolina, Dr. Anthony Atala has been using his Integrated Tissue and Organ Printing System (ITOP) to print cells, bones, and even organs. ITOP prints using human cells and creates lab-grown organs that could soon be surgically implanted into the human body without risk of rejection. “We take a very small piece of their tissue. We then start to expand those cells outside of the body. We use those cells to create new tissues and organs that we can then put back into the body,” Atala explained in an interview with Circa.
Not Quite There Yet
It would be exciting to think this technology could be used to engineer artificial systems for machines, but the possibility of printing an entire body, complete with functioning systems, is a long way off. It’s one thing to print skin, for example, but to expect it to thrive against a metal, synthetic frame or skeleton is another challenge altogether. You’ll need an entire network of functioning, complex internal systems to make it work.
3D-printed organs could prove far more useful in living things in which existing mechanisms for life are already in place. According to Atala, they have already managed to print bone, muscle, and cartilage using ITOP and have successfully implanted them into rats. After months of observation, they discovered that the implanted tissue developed a system of blood vessels and nerves. This breakthrough will allow them to move on to human trials, where they can attempt to replicate the same results. But even that could take a long time, given that they have to wait for government approval.
Atala is particularly optimistic about the future of this field, however, noting that while we may not be close to producing “Westworld”-level AIs anytime soon, “[the] way science is moving, I think that pretty much every organ in the body will eventually, in the next several decades, be achievable.” If you think waiting for the next season of “Westworld” is hard, just imagine what life is like for the thousands of people currently on the organ donor waiting list and how excited they must be for these new breakthroughs in 3D-printing technology.
While we aren’t quite there yet, plenty of research is going into BCIs, some of which focuses on the use of such systems to improve the lives of people suffering from one type of locked-in syndrome or another (a physical condition in which a patient cannot move or communicate but is still fully aware). Recently, a team of researchers managed to contribute to the cause, building a BCI that can decipher the thoughts of people unable to communicate otherwise. Their study, published in the journal PLOS Biology, is very promising: the participants reported being “happy” despite their condition.
Niels Birbaumer, a neuroscientist at the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, and his team of researchers tested their BCI on four individuals suffering from amyotrophic lateral sclerosis (ALS). The patients were all asked personal questions that could be answered with a simple “Yes” or “No.” Examples include “Is your husband’s name Joachim?” and “Are you happy?” For questions with a definitive correct response, the patients gave the right answer seven times out of 10.
The tests disproved previously held assumptions that people with complete locked-in syndrome are incapable of the goal-oriented thought needed to make a BCI work. “The striking results overturn my own theory that people with complete locked-in syndrome are not capable of communication,” Birbaumer said in a press release. “We found that all four people we tested were able to answer the personal questions we asked them, using their thoughts alone. If we can replicate this study in more patients, I believe we could restore useful communication in completely locked-in states for people with motor neuron diseases.”
The team’s BCI utilized near-infrared spectroscopy (NIRS) and electroencephalography (EEG), which worked together to measure blood oxygenation and electrical activity in the brain. Though still limited as to which thoughts it could read, this BCI is currently the only method that has successfully restored the communication capabilities of people with complete locked-in syndrome.
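As a toy illustration of the yes/no paradigm described above, the sketch below thresholds a single scalar brain-signal feature into a “yes” or “no” answer. The signal values and cutoff are invented; a real system like Birbaumer’s trains statistical classifiers on multi-channel NIRS and EEG features rather than thresholding one number.

```python
# Toy yes/no decoder: maps a single signal feature (e.g. a band-power
# value, here invented) to an answer, then scores simulated trials.

def decode_answer(signal_strength: float, threshold: float = 0.5) -> str:
    """Decode one trial: strong signal -> 'yes', weak signal -> 'no'."""
    return "yes" if signal_strength > threshold else "no"

# Simulated trials: (measured feature, true answer). Values are made up.
trials = [
    (0.8, "yes"), (0.3, "no"), (0.7, "yes"), (0.6, "no"), (0.9, "yes"),
    (0.2, "no"), (0.45, "yes"), (0.75, "yes"), (0.55, "no"), (0.1, "no"),
]

correct = sum(decode_answer(signal) == truth for signal, truth in trials)
print(f"accuracy: {correct}/{len(trials)}")
```

On these invented trials the decoder scores 7 out of 10, echoing the hit rate the study reported for definitive questions; the overlap between the two classes is what keeps a simple threshold from being perfect.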
BCIs usually come in two forms: partially invasive, in which devices are implanted inside a person’s skull, and non-invasive, in which wearables read brain signals from outside the head; the latter is the type Birbaumer’s team used. In some sense, these devices are hacking the brain: BCIs learn how to read and translate brain signals into concrete actions, whether by moving a prosthetic limb or by translating a thought into something communicable.
Naturally, a device like this has the potential to improve the quality of life of people suffering from various neurological afflictions. According to the director of the Wyss Center, John Donoghue, “The Wyss Center plans to build on the results of this study to develop clinically useful technology that will be available to people with paralysis resulting from ALS, stroke, or spinal cord injury. The technology used in the study also has broader applications that we believe could be further developed to treat and monitor people with a wide range of neuro-disorders.”
Imagine living in a world in which verbal communication is no longer required, a society in which telepathy is the norm, where people would be able to “speak” to each other using only their thoughts.
Scientists have long been contemplating the possibilities of brain-to-brain communication in humans, and it appears as though their dreams could become a reality within the next year or so. Such a system would build on major advances in the technology achieved through recent trials involving animals.
In one study, three monkeys were connected through individual brain implants, then placed in separate rooms. They were given the task of controlling a virtual arm on screen, a task they could only complete successfully if they worked together. In the end, they did. According to Miguel Nicolelis, the study’s lead author, “They synchronized their brains and they achieved the task by creating a superbrain — a structure that is the combination of three brains.”
Similarly, another experiment that tested brain synchronicity in four rats accomplished comparable results. After 10 trials, scientists found that the rats were able to think as one 61 percent of the time. When they combined minds, they solved simple problems with higher accuracy.
More recently, the research has focused on humans. In one study, researchers placed two people in separate rooms and gave them the task of playing a game of 20 questions on a computer using only their minds. They transmitted “yes” or “no” answers with the help of an EEG cap, which tracked the brain activity of one person and triggered an electrical current in the other person’s brain.
The Future of Brain Communication
One day, we could attempt to take it up another notch in order to detect individual thought processes. These thoughts could be transmitted to another person, influencing the decisions they make.
This could be an enormous game changer for people with paralysis and other medical conditions that prevent them from performing physical tasks. For example, a robotic suit equipped with a brainet (a synchronization of multiple brains acting as an organic computer) could allow people to receive help from others while learning to use an exoskeleton to regain movement.
For now, it’s proving quite difficult to create a device that mimics pure telepathy. Our brains are unique, and each of us thinks differently, our thoughts being influenced by our individual memories and experiences. The resulting brain patterns make it hard for neuroscientists to develop brain-to-brain communication, but if they can reveal an individual’s patterns of thought, they could potentially use another person’s brain activity to trigger those thoughts.
This fact concerns a number of individuals, as artificial intelligence is predicted to outpace humanity at an unprecedented rate, which may result in AI looking at us as nothing more than house pets (or maybe even doing away with us entirely).
This fear has pushed Elon Musk to suggest something rather unconventional and almost controversial: the neural lace. In short, the neural lace is a device that is intended to grow with your brain. Its primary purpose is to optimize mental output through a brain-computer interface, allowing the human brain to effortlessly access the internet and, thus, keep up with (and someday merge with) artificially intelligent systems.
Musk asserts that the neural lace could push our cognitive performance to levels that are comparable to that of AI. He has teased updates before, and now, we may finally be getting something solid.
However, Musk isn’t the only one working to ensure that humanity can keep up with AI. Braintree founder Bryan Johnson is investing $100 million to make a neuroprosthesis to unlock the power of the human brain and, ultimately, make our neural code programmable.
Johnson outlines the purpose of his work, stating that it’s all about co-evolution:
Our connection with our new creations of intelligence is limited by screens, keyboards, gestural interfaces, and voice commands — constrained input/output modalities. We have very little access to our own brains, limiting our ability to co-evolve with silicon-based machines in powerful ways.
He is working to change this and ensure that we have a seamless interface with our technologies (and our AI).
Meanwhile, Johnson’s company, Kernel, wants to do more than just allow humans to interface with machines. His neuroprosthesis is intended to also repair our cognitive abilities, which will allow it to combat neurological diseases, such as Alzheimer’s, ALS, Parkinson’s, and other conditions that destroy our brains…and our lives.
This is just the beginning. Such advancements could allow us to literally program our neural code, which would allow us to transform ourselves in ways that we can’t even imagine. We could literally program ourselves into the people that we want to be.
Our biology and genetics have become increasingly programmable; our neural code is next in line.
It sounds like something out of science fiction, but it is based on remarkable scientific work.
In short, the devices under development work by replicating the way that our brain cells communicate with one another. The tech envisioned is based on the work of Dr. Theodore W. Berger, who is a professor of biomedical engineering and neuroscience and also serves as the director of the Center for Neural Engineering at the University of Southern California.
Berger is the Chief Science Officer at Kernel, and the team is building off of 15 years of academic research funded by the NIH and DARPA. If that’s not enough to convince you he’s the right person to make this happen, Berger’s research has demonstrated improved recall functions in both rats and monkeys.
Over the weekend I attended a brilliant talk by Dr. Andres Lozano at the University of Toronto. According to Reuters, he is the most cited neurosurgeon on the planet. Dr. Lozano has pioneered many of the techniques used in DBS (deep brain stimulation) and helped transform Toronto Western Hospital into a global hub for neurosurgery. In another age, he would have been revered the world over as a miracle worker for having reanimated thousands of people suffering from a range of cognitive impairments through his surgical techniques.
In DBS, a metal rod is inserted into the patient’s brain. On the end of the rod are electrodes connected to a battery implanted in the patient’s chest cavity. After everything is in place, the device is turned on and sends an electrical current that stimulates the neurons in the affected part of the brain.
It’s incredible to see the technique in practice. It is most commonly used in Parkinson’s disease patients in cases where traditional medication is no longer effective. At the flick of a switch, patients go from shaking uncontrollably and barely able to move, to suddenly being still, relaxed and in almost full control of their body.
But this is just the beginning of what this technique will be able to do. It has had promising preliminary results in Alzheimer’s disease patients and is being used to treat everything from depression and epilepsy to diabetes and bipolar disorder. Dr. Lozano also accidentally discovered that it can be used to bring back lost memories while he was trying to cure a patient of morbid obesity.
Of course, not everyone is so keen to have a metal spike implanted in their head, no matter what good may come of it. Luckily, a variety of non-invasive techniques are also being developed that might eliminate the need for surgery. One such technique is TMS (transcranial magnetic stimulation), in which high-powered magnets attached to the outside of a person’s head send an electromagnetic current to the target area. Early results have shown the ability to enhance cognition by stimulating neurons in the affected area, but the technique is still in its infancy, so it will be a while before we see people walking down the street with magnets taped to their heads.
Some even imagine a day when we might use genetic engineering to borrow genes from electric eels to allow the brain to produce its own biochemically induced current.
All of this is opening up a new branch of medicine called neuromodulation. The field uses techniques such as DBS, TMS, and others to enhance a variety of cognitive abilities. Many in the field believe that one day patients will be able to walk into a clinic, have their brains scanned to identify any deficiencies, and then opt to have those deficiencies corrected.
Neuromodulation research also extends to the brain’s ability to manipulate objects in space. Neural prostheses have already been developed that allow people to control robotic arms with their thoughts. It is conceivable that, through the coming Internet of Things, you will be able to control many of the objects in your home the same way.
From there, the next step would be a direct connection between your brain and the internet itself, giving you instant access to all of human knowledge – what Elon Musk has called a “neural lace.”
Microelectronics has transformed our lives. Cellphones, earbuds, pacemakers, defibrillators – all these and more rely on microelectronics’ very small electronic designs and components. Microelectronics has changed the way we collect, process and transmit information.
Such devices, however, rarely provide access to our biological world; there are technical gaps. We can’t simply connect our cellphones to our skin and expect to gain health information. For instance, is there an infection? What type of bacteria or virus is involved? We also can’t program the cellphone to make and deliver an antibiotic, even if we knew whether the pathogen was Staph or Strep. There’s a translation problem when you want the world of biology to communicate with the world of electronics.
The research we’ve just published with colleagues in Nature Communications brings us one step closer to closing that communication gap. Rather than relying on the usual molecular signals, like hormones or nutrients, that control a cell’s gene expression, we created a synthetic “switching” system in bacterial cells that recognizes electrons instead. This new technology – a link between electrons and biology – may ultimately allow us to program our phones or other microelectronic devices to autonomously detect and treat disease.
Communicating With Electrons, Not Molecules
One of the barriers scientists have encountered when trying to link microelectronic devices with biological systems has to do with information flow. In biology, almost all activity is made possible by the transfer of molecules like glucose, epinephrine, cholesterol and insulin signaling between cells and tissues. Infecting bacteria secrete molecular toxins and attach to our skin using molecular receptors. To treat an infection, we need to detect these molecules to identify the bacteria, discern their activities and determine how to best respond.
Microelectronic devices don’t process information with molecules. A microelectronic device typically has silicon, gold, chemicals like boron or phosphorus and an energy source that provides electrons. By themselves, they’re poorly suited to engage in molecular communication with living cells.
Free electrons don’t exist in biological systems, so there’s almost no way to connect directly with microelectronics. There is, however, a small class of molecules that stably shuttle electrons. These are called “redox” molecules; they can transport electrons, somewhat like a wire does. The difference is that in a wire the electrons can flow freely to any location, while redox molecules must undergo chemical reactions – oxidation or reduction – to “hand off” their electrons.
Turning Cells On and Off
Capitalizing on the electronic nature of redox molecules, we genetically engineered bacteria to respond to them. We focused on redox molecules that could be “programmed” by the electrode of a microelectronic device. The device toggles the molecule’s oxidation state – it’s either oxidized (loses an electron) or reduced (gains an electron). The electron is supplied by a typical energy source in electronics like a battery.
We wanted our bacteria cells to turn “on” and “off” due to the applied voltage – voltage that oxidized a naturally occurring redox molecule, pyocyanin.
Electrically oxidizing pyocyanin allowed us to control our engineered cells, turning them on or off so they would synthesize (or not) a fluorescent protein. We could rapidly identify what was happening in these cells because the protein emits a green hue.
In another example, we made bacteria that, when switched on, would swim from a stationary position. Bacteria normally swim in starts and stops referred to as a “run” or a “tumble.” The “run” ensures they move in a straight path; when they “tumble,” they essentially remain in one spot. A protein called CheZ controls the “run” portion of the bacteria’s swimming activity. Our electrogenetic switch turned on the synthesis of CheZ, so the bacteria could move forward.
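The run-and-tumble behavior described above lends itself to a toy simulation. The sketch below is our own caricature, not the researchers’ model: a cell whose CheZ switch is “on” mostly keeps its heading and runs forward, while a cell with the switch “off” mostly tumbles in place, so its net displacement stays small.

```python
import math
import random

def net_displacement(steps, chez_on, seed=0):
    """Toy run-and-tumble walk. With CheZ 'on' the cell mostly runs
    (keeps its heading and moves forward); with CheZ 'off' it mostly
    tumbles (picks a new random heading without moving far)."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = 0.0
    run_prob = 0.9 if chez_on else 0.1  # chance of a "run" step
    for _ in range(steps):
        if rng.random() < run_prob:
            x += math.cos(heading)  # run: one unit forward
            y += math.sin(heading)
        else:
            heading = rng.uniform(0, 2 * math.pi)  # tumble: new heading
    return math.hypot(x, y)  # distance from the starting point

# Switched-on cells travel much farther from their starting point.
print(net_displacement(1000, chez_on=True) >
      net_displacement(1000, chez_on=False))  # True
```

The parameter values (90% vs. 10% run probability) are illustrative assumptions, not measured numbers; the point is only that biasing the walk toward “runs” produces directed motion.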
We were also able to electrically signal a community of cells to exhibit collective behavior. We made cells with switches controlling the synthesis of a signaling molecule that diffuses to neighboring cells and, in turn, causes changes in their behavior. Electric current turned on cells that, in turn, “programmed” a natural biological signaling process to alter the behavior of nearby cells. We exploited bacterial quorum sensing – a natural process where bacterial cells “talk” to their neighbors and the collection of cells can behave in ways that benefit the entire community.
Perhaps even more interesting, our group showed that we could both turn gene expression on and turn it off. By reversing the polarity on the electrode, the oxidized pyocyanin becomes reduced, its inactive form. The cells that had been turned on were engineered to quickly revert to their original state. In this way, we demonstrated the ability to cycle the electrically programmed behavior on and off, repeatedly.
Interestingly, the on-and-off switching enabled by pyocyanin alone was fairly weak. By including another redox molecule, ferricyanide, we found a way to amplify the entire system so that gene expression could be driven strongly, both on and off. The entire system was robust, repeatable, and didn’t negatively affect the cells.
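As a rough mental model (ours, not the paper’s), the electrogenetic switch behaves like a voltage-controlled toggle: one polarity oxidizes pyocyanin and turns gene expression on; the reversed polarity reduces it and turns expression off. A minimal sketch:

```python
class ElectrogeneticSwitch:
    """Toy model of the redox 'switch': an electrode toggles pyocyanin
    between oxidized (active) and reduced (inactive) states, and the
    engineered cell expresses its gene only in the oxidized state."""

    def __init__(self):
        self.oxidized = False  # pyocyanin starts in its reduced, inactive form

    def apply_voltage(self, polarity):
        # Positive polarity oxidizes pyocyanin; reversed polarity reduces it.
        self.oxidized = polarity > 0

    def expressing(self):
        # Gene expression (e.g. a fluorescent reporter) tracks oxidation.
        return self.oxidized

sw = ElectrogeneticSwitch()
sw.apply_voltage(+1)    # oxidize: cells switch on
print(sw.expressing())  # True
sw.apply_voltage(-1)    # reverse polarity: reduce, cells switch off
print(sw.expressing())  # False
```

The class and method names are invented for illustration; the real system is a biochemical gene circuit, not a boolean flag, but the repeatable on/off cycling it exhibits maps onto this simple state machine.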
Sensing and Responding on a Cellular Level
Armed with this advance, devices could potentially electrically stimulate bacteria to make therapeutics and deliver them to a site. For example, imagine swallowing a small microelectronic capsule that could record the presence of a pathogen in your GI tract and also contain living bacterial factories that could make an antimicrobial or other therapy – all in a programmable autonomous system.
This current research ties into previous work done here at the University of Maryland where researchers had discovered ways to “record” biological information, by sensing the biological environment, and based on the prevailing conditions, “write” electrons to devices. We and our colleagues “sent out” redox molecules from electrodes, let those molecules interact with the microenvironment near the electrode and then drew them back to the electrode so they could inform the device on what they’d seen. This mode of “molecular communication” is somewhat analogous to sonar, where redox molecules are used instead of sound waves.
These molecular communication efforts were used to identify pathogens, monitor markers of “stress” in the blood of individuals with schizophrenia, and even detect differences in melanin from people with red hair. For nearly a decade, the Maryland team has developed methodologies that exploit redox molecules to interrogate biology, directly writing the information to devices with electrochemistry.
Perhaps it is now time to integrate these technologies: use molecular communication to sense biological function and transfer the information to a device. Then use the device – maybe a small capsule or perhaps even a cellphone – to program bacteria to make chemicals and other compounds that issue new directions to the biological system. It may sound fantastical and many years from practical use, but our team is working hard on such valuable applications, so stay tuned!
Researchers at Harvard University and other participating institutions have invented a robot that literally hugs your heart to keep it pumping blood. Their research is in response to the drawbacks of traditional ventricular assist devices (VADs), which remove blood from the heart before pumping it back into the aorta or pulmonary artery. The problem is that these artificial devices are in direct contact with the blood, which increases the risk of blood clots.
Harvard graduate student Ellen Roche and her advisor Conor Walsh decided to make a similar device, except their device never comes in direct contact with blood. They utilized soft robotics, contraptions made of flexible materials, to work with the complex rhythms of the heart. The silicone device is essentially a sleeve containing actuators that contract when filled with pressurized air.
When the device is placed around a heart, these contractions can be used to keep the heart pumping, and the movement of the device can be controlled by adjusting the airflow. “Even as a patient’s disease condition changes, you could modify the level of assistance provided by the sleeve,” Roche told IEEE Spectrum. The work is published in the journal Science Translational Medicine.
The team tested their creation on pig cadavers, a synthetic heart, and even a live pig with a failing heart. When the device was connected to the live pig, it restored blood flow back to baseline. Though many more animal tests will be needed first, the team hopes to conduct a longer-term study on human participants at some point in the future, as well as research how they could reduce inflammation on the heart’s surface.
Heart disease costs billions of dollars overall and affects more than 41 million people globally. Every year, the medical industry proposes new potential treatment options and even cures, but thus far has failed to produce a clear-cut solution to the problem. If this invention proves successful on human subjects, it has the potential to save millions of lives.
Imagine coming home to freshly-baked cookies straight out of the oven, the enticing aroma luring you in. As you walk closer, you hear the sound of classical music playing somewhere in the kitchen. The thing is, there is actually no music playing at all.
It might seem like this instance could be chalked up to just hearing things in your head or off in the distance somewhere. However, this could also be something that 4.4 percent of adults experience: a phenomenon called synesthesia.
Synesthesia is a neurological trait that comes in many forms, from seeing sounds to smelling colors. Pharrell Williams and Marilyn Monroe are just two of the people reported to possess it. Many have described synesthesia as a “bonus sense.”
But researchers at the City, University of London might have discovered a more common form of the trait – a trait that one in five people experienced in an experiment they conducted. Their work is published in the journal Consciousness and Cognition.
They call it the ‘hearing-motion’ phenomenon. Those who experience it are able to literally hear the movements of objects. “My data suggests there are two kinds of people,” said Elliot Freeman, lead researcher in the study. “Those who generate sounds deliberately and those who get the internal sounds without trying.” The latter belong to those displaying ‘hearing-motion’ synesthesia.
Some people might be able to see a silent fan spinning, but hear a whirring noise. Some may see a desktop screensaver shoot bubbles off screen, but hear the ‘whoosh!’ sound as they go. The video below shows the experiential difference between a synaesthete and non-synaesthete:
Freeman and his team collected 40 participants to test for this trait. They showed a two-part video involving Morse code-like patterns. One contained sound, and the other featured patterns of flashing lights.
Researchers noted if any sounds were heard by participants during this activity. To their surprise, 22 percent did. The experimental video is below:
What Synesthesia Can Teach Us About Ourselves
The second part of the experiment involved audible sound, using similar rules. What the researchers found was that those who performed well during the first part of the experiment actually performed worse on average on the second part. It could be because participants found the flashing lights coupled with mismatched sound distracting. But this experiment will allow them to follow up with additional research using a larger sample size.
With a larger sample size involving a diverse group of participants, scientists would be able to pinpoint specific traits in those who experience ‘hearing-motion’ synesthesia. This could also help us learn more about the specific brain processes behind sensing things that aren’t actually there.
As humans, it’s interesting to find out more about what we might be experiencing in everyday life just beyond our conscious awareness. These findings suggest there is more that goes unnoticed. While studying the human body and its processes seems concrete and straightforward, it is incredible how much there is still to learn.
Memories are the faintest, most ethereal wisps of our neurophysiology — somehow, the firing of delicate synapses and the activation of neurons combine to produce the things we remember. The sum of our memories makes us who we are; they are us, in every way, and without them we cease to be.
So it’s needless to say that there’s a pretty significant premium on discovering new ways to combat memory loss. Most of these involve physiological and biological methods, but some scientists, such as Theodore Berger of the University of Southern California, are beginning to turn toward technology. If any of these methods are successful, it would mean the possibility of perfect lifelong memory recall.
As a biomedical engineer, Berger has devised an implant that might be called an “artificial hippocampus.” The hippocampus is a part of the brain involved in transforming short-term memories into long-term ones. In short, it’s the neurological structure that converts the name and face of the person you met briefly at a party (short-term) into something you can recall years later.
Berger’s device is inspired by animal experiments designed to understand the functioning of the hippocampus and how it forms memories. Berger noticed that the hippocampus activates in a certain way, firing a pattern he calls a “space-time code” (not to be confused with the astrophysical concept). The memories seem to be formed and modified by the location of activated neurons in the hippocampus and when they fire. As the signal propagates throughout the hippocampus, this space-time sequence represents the long- and short-term memories.
Armed with this understanding, Berger and his team wrote a general mathematical model for how the hippocampus converts short-term memories into long-term memories. Using the model, they could implant useful memories into rats whose memories had been blocked by drugs.
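To give a flavor of what a “general mathematical model” of a pattern-to-pattern transformation can look like, here is a deliberately simple stand-in. It is not Berger’s model, which is nonlinear and far more sophisticated; it only illustrates the idea of learning a map from example input firing patterns to output patterns, then using it to predict the output for a new input. Each vector is a flattened toy “space-time” pattern (which neurons fire, and when).

```python
import random

def train_linear_map(inputs, outputs, epochs=500, lr=0.1, seed=0):
    """Learn a weight matrix W so that W.x ~= y, using the delta rule.
    A toy stand-in for fitting an input->output pattern transformation."""
    rng = random.Random(seed)
    n_in, n_out = len(inputs[0]), len(outputs[0])
    W = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    for _ in range(epochs):
        for x, y in zip(inputs, outputs):
            for i in range(n_out):
                pred = sum(W[i][j] * x[j] for j in range(n_in))
                err = y[i] - pred
                for j in range(n_in):
                    W[i][j] += lr * err * x[j]  # nudge weights toward target
    return W

def predict(W, x):
    # Threshold the weighted sums back into a binary firing pattern.
    return [1 if sum(wi[j] * x[j] for j in range(len(x))) > 0.5 else 0
            for wi in W]

# Toy training data: the "code" here simply reverses each pattern.
ins  = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]
outs = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1]]
W = train_linear_map(ins, outs)
print(predict(W, [0, 1, 1]))  # [1, 1, 0] — the learned reversal applied to a new input
```

The reversal mapping, the delta rule, and all the pattern vectors are illustrative assumptions; the point is only that once a transformation between firing patterns is captured mathematically, it can be applied to inputs the system has never seen.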
Of course, humans are a far cry from rats. Mapping the countless billions of synaptic connections in the human brain and understanding how they produce memories would be critical to creating a memory implant with anything like the requisite resolution, so it’s highly unlikely we’ll be seeing implantable memory devices in the near future. However, this breakthrough in cracking the hippocampus’ mathematical “memory code” has very important implications.
If it’s possible to reduce the mechanisms of memory formation and transference to a mere mathematical formula and then refine that understanding with even more sophisticated software, we could be looking at a new era in neuromanipulation. Memory device implants would represent just a small part of the equation; we could also see implanted or false memories, an idea that conjures up dystopian visions but that could actually have more benign medical and psychological uses.
Much more work needs to be done if we’re ever to have memory augments clattering around inside our braincases. Not only would we need to make significant progress in the development of miniaturized technology that can safely and effectively interact with nervous tissue, but we’d also need to improve our understanding of memory formation, develop more accurate techniques for mapping the “connectome,” and devise adequate mathematical models of how the brain works (if that’s ever really possible).
In the 16th century, Elizabeth Báthory murdered hundreds of young women and bathed in their blood, believing it would preserve her youth. Today, a less-murderous, but similarly bloody, idea is being proposed. Ambrosia, run by Jesse Karmazin, is seeking 600 people willing to pay $8,000 so they can join a clinical trial that involves transfusing 1.5 liters of plasma from donors 16-25 years old.
Karmazin, who holds a medical degree but isn’t licensed to practice medicine, was driven to pursue this trial based on his observations of the effects of numerous transfusions conducted while he was a medical student at Stanford.
“Some patients got young blood and others got older blood, and I was able to do some statistics on it, and the results looked really awesome. And I thought, this is the kind of therapy that I’d want to be available to me,” he told Business Insider.
Right now, however, there is no clinical evidence that the process can be effectively used to stop or slow aging. The company’s website promises they will share results as soon as they become available.
Close, But Not Quite There Yet
As Stanford University neuroscientist Tony Wyss-Coray points out in Science Magazine, “There’s just no clinical evidence [that the treatment will be beneficial], and you’re basically abusing people’s trust and the public excitement around this.”
Wyss-Coray led a study in mice that suggests parabiosis – a surgical technique that links the veins of two living animals – could have rejuvenating effects on the brain. This study led to a 2015 human clinical trial that tested the benefits of young plasma among subjects suffering from Alzheimer’s.
The results have yet to be released, but even fellow researchers involved in the study are taking a more pragmatic approach, saying that the process isn’t necessarily “de-aging” animals so much as it’s helping repair the damage.
That’s not to say science has nothing that could eventually combat aging. One scientist says that the key lies in lengthening the ends of human DNA, called telomeres. Another telomere-related study explores the use of hormones to prevent telomere shrinkage. Others discovered how stem cell therapy could potentially reverse age-related osteoporosis. In Japan, the first science-backed anti-aging drug is being readied for human testing.
A definitive answer to where we will find the fountain of youth remains elusive. Obviously, it has nothing to do with bathing in young blood. It also seems unlikely that paying to have the blood of millennials injected directly into the veins of Gen X’ers is the key. But we’re getting there, and it could be closer than we imagine.
Menopause is a point of no return for women, considered irreversible until recently. Earlier in 2016, a team of experts was able to find a way to rejuvenate post-menopausal ovaries. After months of preclinical trials, experts from the Genesis Health Clinic in Athens are now launching the first clinical trials for the method.
The technique uses Platelet Rich Plasma (PRP) injections. But unlike other PRP transfusions, this one needs no donor. This PRP is made by centrifuging a person’s blood sample to isolate its growth factors. “It offers a window of hope that menopausal women will be able to get pregnant using their own genetic material,” Konstantinos Sfakianoudis, a gynecologist at Genesis, said.
The preclinical trials began in May 2016 and have yielded promising results. PRP rejuvenated menopausal women’s ovaries, restoring fertility: 75% of the 60 women treated became capable of conceiving through natural pregnancy or in vitro fertilization, and 9 actually became pregnant.
More than 75% showed overall hormone levels returned to youthful levels. It made women young again, so to speak.
Changing lifestyles have led to an increase in late pregnancies, which come with the complications typically associated with menopausal conception. There are also cases of women who have difficulty bearing children because of a thin uterine lining. But after PRP was injected into the uteruses of six women with these conditions, they were able to bear children.
So this method doesn’t just restore a woman’s fertility, it can also alleviate some of the (many) negative effects associated with menopause. Konstantinos Pantos, Inovium Ovarian Rejuvenation Trials Director, explained:
“The goal of the trial is not to prove that we can reverse menopause, because over and over again in our treatments, we know that this is the result. We also know that the treatment triggers a whole body response that restores hormones to the levels of youth. Now, we want to see if the rejuvenation is a permanent one, and if we have discovered a connection between the loss of fertility and the damaging effects of aging in the body.”
The clinical trials have been approved by Greece’s National Ethics Committee and will start in February 2017. It isn’t free, however. Participation costs a hefty sum of $5,000. Similar trials are set to begin in multiple locations in the US by June 2017.
As far-fetched as this idea seems, it’s a possibility that scientists are now exploring after they successfully produced healthy mice using a process called in vitro gametogenesis (IVG). This is a revolutionary method in which embryonic stem cells are reprogrammed to become viable sex cells.
In the mice experiment, scientists made early stage mouse eggs from stem cells and grew them in the lab. Once the eggs matured, they fertilized them with mouse sperm and demonstrated that they could also be successfully implanted into a surrogate female mouse.
It’s important to note that the technology is still in its infancy. Creating eggs from skin cells is a possibility, but at this point, there is still some work to be done before it is truly viable in humans. The success of the mouse model, however, illustrates the opportunities that this technique could offer.
Obviously, IVG is revolutionary for the field of fertility medicine. It gives infertile people hope, especially those who are unable to have children because of cancer treatment. For example, scientists could collect skin cells from patients undergoing chemotherapy and turn them into healthy eggs or sperm in case the patients become infertile as a result of treatment.
In short, the technique could render egg donors obsolete. Couples undergoing fertility treatments would no longer have to choose from just a handful of viable embryos; they could potentially select from a much bigger pool. It also makes the biological process of conceiving more democratic: theoretically, the method could be used to produce egg cells from male skin cells, making it possible for same-sex couples to have a biological child together.
Perhaps as a testament to the promise of this technique, experts are already looking into IVG’s possible consequences this early into the study.
For instance, should the procedure eventually become accessible and inexpensive, we could face the possibility of ‘embryo farming,’ which, for some, raises concerns that the method devalues human life. Perceived advantages, like making it possible for parents to select from a bigger pool of embryos, also have obvious downsides, such as high-tech-enabled eugenics.
Combined with advances in gene editing technology, it raises ethical concerns regarding human enhancement and designer babies. And with IVG theoretically making it possible for a baby to have three or more genetic parents, it raises questions regarding the legal rights and responsibilities of each parent.
It’s difficult to predict when technology like this will be ready for use in humans. Right now, any efforts to replicate the same results in primates or humans have proved unsuccessful. But the study is steadily moving forward, and as the authors pointed out:
[…] With science and medicine hurtling forward at breakneck speed, the rapid transformation of reproductive and regenerative medicine may surprise us. Before the inevitable, society will be well advised to strike and maintain a vigorous public conversation on the ethical challenges of IVG.
The transition from one year to the next is always a little uncertain – an uneasy blend of anxiety and optimism, it’s also a time of retrospection, introspection, and even a little tentative prognostication. And since the latter is our stock-in-trade at Futurism, we believe now is the perfect time to look ahead at what 2017 has in store for us.
Here’s our list of some of the many wonderful advances in medicine we can look forward to in 2017.
Sophisticated technologies have always had an important role to play in medicine, with each year adding extraordinary new tools to the physician’s medicine bag—2017 will be no exception.
We can, for instance, expect further improvements in the technology of robotic surgery. In addition to the currently available da Vinci Surgical System, look to see competition from the new surgical robot system developed by the partnership of Google and Johnson & Johnson. These new systems will parlay advances in software, miniaturization, and robotics to allow for minimally invasive surgeries on the most delicate elements of human anatomy.
We can expect the application of AI to medicine to only increase in the coming year, when the need to cull through and assimilate enormous quantities of medical data—whether on an individual or large-scale, societal basis—will become critical. Meanwhile, the danger that potentially flawed machine learning programs will supplant rather than merely supplement human medical judgment will also become much more than just an abstruse, academic question for medical ethicists.
A Pharmacological Revolution
But 2017 won’t be just about robots and artificial intelligence. It’s likely that some of the less visually spectacular medical technologies will yield the most astonishing medical breakthroughs. Drug research, for instance, is poised to take off in 2017—especially with immunotherapeutic treatments for cancer.
According to Stanley Marks, chairman of the UPMC CancerCenter, it is these treatments—which marshal the body’s immune system to attack and destroy cancerous cells—that represent the single most promising new front in the war on cancer. Using checkpoint inhibitor drugs and CAR (chimeric antigen receptor) T-cell therapies, it’s become possible to mobilize the body’s own immune system to fight the cancer.
The method involves extracting T-cells from the patient’s own blood, and genetically engineering them to recognize, attach to, and neutralize tumorous cells. It’s already had promising results in fighting some leukemias, so we can look forward to more research on these remarkable “living drugs” in 2017.
The revolutionary CRISPR/Cas-9 gene-editing technology has disrupted biology like nothing else—and bids fair to transform it from a slow, imprecise science to something approaching the precision of the physical sciences. What 2017 holds for gene-editing technology is anyone’s guess—it’s even possible that the Chinese, or some other nation with laxer regulations than the U.S. currently permits, might begin more widespread use of the technique in human subjects.
But expect passive measures, too, such as simply learning how Alzheimer’s and other neurodegenerative diseases progress, or even non-medical agricultural and industrial uses for the technology. As Nicola Patron of the Earlham Institute sagely observes, “Understanding what DNA sequences do is what enables us to solve problems in every field of biology from curing human diseases, to growing enough healthy food, to discovering and making new medicines, to understanding why some species are going extinct.”
The bottom line: 2017 is looking to be an exciting year, in all avenues of research and discovery, but particularly in medicine. And if all the above wasn’t exciting enough for you—you can look forward to capping it off with what might be the world’s first head transplant.
Disembodied heads or not, you can read all about the latest developments here at Futurism.
As it stands, CRISPR is a rather impressive gene editing tool and already the most precise method we have available for genetic manipulation. Studies like this recent one from UC Berkeley are helping us refine the CRISPR-Cas9 system, and now a new study published in Cell from UC San Francisco (UCSF) is offering a way to deal with some of its greatest remaining downsides.
Researchers discovered a way to switch off this gene-editing system using recently identified proteins discovered in the lab of Joseph Bondy-Denomy from UCSF’s Department of Microbiology and Immunology. These anti-CRISPR proteins could eventually improve the safety and accuracy of already very accurate CRISPR applications, and the researchers relied on a nifty little trick to discover them.
“Just as CRISPR technology was developed from the natural anti-viral defense systems in bacteria, we can also take advantage of the anti-CRISPR proteins that viruses have sculpted to get around those bacterial defenses,” explains the leader of the study, Benjamin Rauch.
In their research, the team looked for bacterial strains that had been successfully infected by a known virus. They reasoned that the virus’s survival would be evidence that the bacterium’s Cas9 was not functioning properly.
“Cas9 isn’t very smart,” according to Bondy-Denomy. “It’s not able to avoid cutting the bacterium’s own DNA if it is programmed to do so. So we looked for strains of bacteria where the CRISPR-Cas9 system ought to be targeting its own genome — the fact that the cells do not self-destruct was a clue that the whole CRISPR system was inactivated.”
After examining nearly 300 strains of Listeria using Rauch’s bioinformatic approach, the team found three strains that showed this evidence. From those, they isolated four distinct anti-CRISPR proteins, and of these four, tests showed that two — dubbed AcrIIA2 and AcrIIA4 — worked to inhibit the ability of SpyCas9 to target specific genomes.
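The screening logic amounts to a simple filter: a strain whose own CRISPR array targets its own genome should self-destruct, so any such strain that survives is a candidate for carrying an anti-CRISPR inhibitor. A toy sketch of that filter (all strain records and sequences are invented for illustration):

```python
# Toy model of the anti-CRISPR screen: flag strains whose CRISPR
# spacers match their own genome yet which remain viable, a sign
# that something is inactivating Cas9. All data below is invented.

def self_targeting(strain):
    """A strain is self-targeting if any CRISPR spacer matches its own genome."""
    return any(spacer in strain["genome"] for spacer in strain["spacers"])

def candidate_anti_crispr_strains(strains):
    """Self-targeting strains that survive should not exist unless
    their Cas9 has been switched off, e.g. by an anti-CRISPR protein."""
    return [s["name"] for s in strains
            if self_targeting(s) and s["viable"]]

strains = [
    {"name": "L1", "genome": "ATGCCGTTAGC", "spacers": ["CCGTT"], "viable": True},   # self-targeting, alive
    {"name": "L2", "genome": "ATGAATTAGC",  "spacers": ["GGGGG"], "viable": True},   # not self-targeting
    {"name": "L3", "genome": "TTACCGGA",    "spacers": ["ACCG"],  "viable": False},  # self-targeting, dead
]

print(candidate_anti_crispr_strains(strains))  # ['L1']
```

Only the strain that should have destroyed itself, but didn't, gets flagged for follow-up.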
A Better System
“These natural Cas9-specific ‘anti-CRISPRs’ present tools that can be used to regulate the genome engineering activities of CRISPR-Cas9,” the researchers write. They believe that with these proteins, it’s possible to avoid unintended or “off-target” gene modifications caused by keeping CRISPR’s machinery active in the body.
Of course, the next step would be to see how these proteins function in human cells. “We also want to understand exactly how the inhibitor proteins block Cas9’s gene targeting abilities, and continue the search for more and better CRISPR inhibitors in other bacteria,” Rauch explained.
Ultimately, this “off-switch” for CRISPR could prove almost as important as the system itself. “Researchers and the public are reasonably concerned about CRISPR being so powerful that it potentially gets put to dangerous uses,” Bondy-Denomy said. “These inhibitors provide a mechanism to block nefarious or out-of-control CRISPR applications, making it safer to explore all the ways this technology can be used to help people.”
Though its parts have long been known and considered part of the digestive system, the mesentery has recently been reclassified as an organ. For centuries, the structure that connects the gut to the body was considered fragmented, its parts scattered and separate. Researchers from the University of Limerick (UL) in Ireland published a study in the medical journal The Lancet Gastroenterology & Hepatology asserting that the mesentery is, in fact, one continuous organ.
“In the paper, which has been peer reviewed and assessed, we are now saying we have an organ in the body which hasn’t been acknowledged as such to date,” said lead researcher J Calvin Coffrey, professor of surgery at the UL’s Graduate Entry Medical School. “The anatomic description that had been laid down over 100 years of anatomy was incorrect. This organ is far from fragmented and complex.”
The reclassification of the mesentery as one continuous structure has already begun changing the way medical students study the human body — and even Gray’s Anatomy, the world’s most famous medical textbook series, has been updated.
A Better Gut Feeling
“This is relevant universally as it affects all of us,” Coffrey explains. He believes that mesenteric science could be its own particular field of medical study — similar to gastroenterology, neurology, or nephrology. It could improve our understanding of diseases in our gut.
“When we approach it like every other organ…we can categorize abdominal disease in terms of this organ,” says Coffrey. Essentially, doctors may now consider mesenteric disease among the possible diagnoses.
Of course, further research is needed to fully understand what exactly the mesentery does. As Coffrey explains:
Up to now there was no such field as mesenteric science. Now we have established anatomy and the structure. The next step is the function. If you understand the function you can identify abnormal function, and then you have disease. Put them all together and you have the field of mesenteric science…the basis for a whole new area of science.
This is good news. By understanding the mesentery as a distinct organ, we can better categorize abdominal diseases — “perhaps you’re experiencing some mesenteric pain,” your doctor could very well be telling you the next time your stomach hurts.
Superflex, a California-based startup, is currently developing apparel that goes beyond fashion. The company, an offshoot of SRI International, wants to design clothes that assist the wearer with physical activity.
“We’re calling it ‘intelligent wearable strength,’” said Rick Mahoney, co-founder and chief executive officer of Superflex, in an interview with The Verge.
These clothes are equipped with sensors and flexible electric motors that augment natural mobility, but only when needed. Whenever the wearer performs an action, factors such as posture and movement are detected and quickly analyzed by a computer, which then sends out commands to aid the wearer in activities like standing up and walking. Superflex envisions these clothes as lightweight and comfortable enough to wear under any outfit.
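That sense-analyze-assist loop can be sketched as a trivial decision function. The sensor inputs, thresholds, and command names here are all invented for illustration; a real controller would fuse many sensors and run continuously:

```python
# Toy sketch of a sense-analyze-assist loop: read a crude posture
# estimate, decide which movement the wearer is attempting, and emit
# a motor command. Thresholds and labels are invented.

def assist_command(trunk_angle_deg, knee_load):
    """Map a posture reading to a motor-assist command."""
    if trunk_angle_deg > 45 and knee_load > 0.7:
        return "assist-stand"      # wearer is pushing up from a chair
    if 5 < trunk_angle_deg <= 45:
        return "assist-walk"       # leaning into a stride
    return "idle"                  # no help needed; motors stay passive

print(assist_command(60, 0.9))  # assist-stand
print(assist_command(20, 0.3))  # assist-walk
print(assist_command(0, 0.1))   # idle
```

The key design point from the article is the "idle" branch: the motors help only when the analysis says help is needed, rather than powering every movement.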
Although the thought of a strength-amplifying suit conjures images of Iron Man, the aim of the apparel-robotics startup is actually a lot less glamorous. In another interview with TechCrunch, Mahoney revealed that their main focus was on helping those who have trouble with moving on their own. “We’re interested in helping people with general independence, people who are starting to lose confidence in their mobility.”
This covers a wide range of people, including the elderly, people with disabilities, children with chronic conditions, and even people who do a lot of heavy lifting.
The company has yet to release an initial concept, however. Issues like design and ergonomics are still being resolved, but the company hopes to release its first model by 2018. “We are forging a path,” Mahoney said. “We are not helping soldiers fight aliens or people leap from buildings. We want people to live a more productive and confident life.”
Retinitis pigmentosa (RP) is a rare genetic disorder that sabotages the retina’s ability to respond to light by disabling rod photoreceptors. The National Eye Institute estimates that one in 4,000 people around the world suffer from the disorder.
In the UK, however, people who have lost their sight to RP may soon have hope, as the National Health Service (NHS) funds the implantation of a device in ten patients to help treat this form of blindness.
The Argus II Bionic Eye implant works in tandem with a small camera mounted on a pair of glasses worn by the patient. Images from the camera are converted into signals that are transmitted wirelessly to electrodes attached to the retina. These electrodes stimulate the remaining retinal cells, which then transmit the information to the brain, effectively restoring some sight to the patient.
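At its core, the camera-to-electrode step is aggressive downsampling: a full camera frame must be reduced to one stimulation level per retinal electrode. A minimal sketch of that idea (the device's actual image processing is far more sophisticated; the block-averaging below is purely illustrative):

```python
# Illustrative reduction of a camera frame to a small electrode grid:
# each "electrode" gets the average brightness of the image block it
# covers. The real implant's processing pipeline is more complex.

def to_electrode_grid(image, rows, cols):
    """Downsample a 2D grayscale image to a rows x cols stimulation grid."""
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [image[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) // len(block))
        grid.append(row)
    return grid

# A 4x4 grayscale frame reduced to a 2x2 "electrode" pattern
frame = [
    [255, 255,   0,   0],
    [255, 255,   0,   0],
    [  0,   0, 255, 255],
    [  0,   0, 255, 255],
]
print(to_electrode_grid(frame, 2, 2))  # [[255, 0], [0, 255]]
```

The tiny output grid explains why implant wearers perceive coarse patterns of light rather than detailed images.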
“This highly innovative NHS-funded procedure shows real promise and could change lives,” said Dr Jonathan Fielden, director of specialized commissioning at NHS England.
This is the first time a treatment for RP has been made available through the NHS. Half of the patients receiving the implant will be treated at Manchester Royal Eye Hospital, while the remaining half will have the procedure at Moorfields Eye Hospital in 2017.
Following the surgeries, the patients will then be closely observed and monitored for a year to determine how well the implant was able to improve their quality of life.
In the UK, where the program is being implemented, an estimated 16,000 people suffer from the disorder, reports The Guardian. Of those, around 160 to 320 may be eligible for a bionic eye procedure. The NHS’s decision to fund the first ten implants reflects its goal of demonstrating that viable treatments are available to RP sufferers.
Across various fields of science, meanwhile, advances in technology and research have shown promise in preventing blindness or restoring sight.
In one study, Google DeepMind, the artificial intelligence (AI) research subsidiary of Alphabet, developed an AI algorithm that can detect diabetic retinopathy, one of the most common causes of gradual vision loss.
Furthermore, a gene therapy intended to cure blindness caused by hereditary retinal dystrophy is already in its final stage of FDA approval and could reach the market as early as 2017. And Salk Institute researchers have used CRISPR/Cas-9—a revolutionary gene-editing technique—to demonstrate how it can be used to partially restore sight in a study conducted in mice with genetic defects.
Researchers from the University of California, Berkeley have discovered new CRISPR-Cas systems, fresh variants of the gene-editing machinery currently revolutionizing biology. In their study, published in Nature, the team reports finding two new bacterial Cas proteins as well as the first-ever archaeal Cas9 protein.
The researchers spent the past 10 years reconstructing the genomes of microbes they’d collected from a variety of environments — everything from groundwater and sediment to acid mine drainage biofilms and the infant gut. The result of their work is a terabase-scale genomics collection of microbes. The vast majority of these were previously uncultivated, meaning that they hadn’t been grown in isolation before. This is particularly significant because the CRISPR-Cas techniques currently in use were developed from bacteria cultured in labs.
The team looked for the characteristic repeated sequences and found, as CRISPR researcher Rodolphe Barrangou told The Scientist, “gold out of the metagenomic dark matter.” Sequences that controlled the Cas9 protein were located in two archaea genomes — an interesting find, since Cas9 was previously only found in bacteria.
Furthermore, the group uncovered new CRISPR-associated (Cas) proteins among bacteria. The new proteins, which they called CasX and CasY, were composed of around 980 and 1,200 amino acids, respectively. “They’re really small, especially CasX,” said study author Jillian Banfield of UC Berkeley. “That means it’s potentially more useful.”
A Better CRISPR System?
CRISPR, or clustered regularly interspaced short palindromic repeats, is a natural defense mechanism found in some prokaryotes. Together with the Cas protein, the system cuts and stores nonnative DNA from invaders, such as viruses. This allows the organism to retain a genetic memory of that invader, which it can then reference to quickly identify the invader if it’s encountered again in the future.
Among the numerous CRISPR-Cas systems, the class 2 system, which typically uses the Cas9 nuclease, has been studied the most, and in recent years, it has been adapted as a technique in biotechnology. Using the CRISPR-Cas9 system, scientists can target specific sequences of genetic code in living organisms and then cut and edit those genes more accurately than ever before.
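The memory-and-matching behavior described above can be sketched as a toy model (sequences are invented, and real details such as the spacer-acquisition machinery and PAM requirements are omitted):

```python
# Toy model of CRISPR-Cas adaptive immunity: fragments (spacers) stored
# from past invaders let the cell recognize and cut matching DNA on
# re-infection. Sequences here are invented.

class CrisprArray:
    def __init__(self):
        self.spacers = []          # genetic "memory" of past invaders

    def acquire(self, invader_dna, length=6):
        """Store a fragment of the invader's DNA as a new spacer."""
        self.spacers.append(invader_dna[:length])

    def recognizes(self, dna):
        """Cas9 cuts DNA that matches any stored spacer."""
        return any(spacer in dna for spacer in self.spacers)

array = CrisprArray()
array.acquire("GATTACAGGTT")            # first infection: remember it
print(array.recognizes("GATTACAGGTT"))  # True  -> re-infection is cut
print(array.recognizes("CCCCCCCCCCC"))  # False -> unknown invader survives
```

Gene editing repurposes exactly this matching step: supply a spacer of your own choosing, and Cas9 will cut wherever it matches.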
Because archaea differ from bacteria biologically, finding a Cas9 protein in archaea presents an interesting new area of study in CRISPR research. Banfield believes that there may be components of the system that differ and that these differences could offer new information that could improve current biotech methods. Similarly, Rotem Sorek of the Weizmann Institute of Science in Israel sees the smaller size of the newly discovered CasX and CasY proteins to be promising since “the delivery of small genes into cells is much easier than the delivery of large genes.”
Banfield is hopeful that more useful discoveries could be found using metagenomics: “This is a case in point for what I think will be an avalanche of new proteins and pathways and systems that hold unimaginable biotechnology and medical value.”
Traditionally, humans have five recognized senses: sight, touch, taste, smell, and hearing. In the strictest sense, our reality is defined by anything and everything we experience through those five senses, but today’s technology is allowing us to live in a world beyond them.
The idea that humans may have more senses isn’t as far-fetched as it sounds. For example, our sense of balance and our body’s inherent pain monitoring capabilities would both be considered crucial sensory inputs. Not everyone experiences the traditional five senses in the same way, either. A small fraction of the population (around 4.4 percent) has synesthesia, a form of sensory perception that causes them to experience crosswired sensations such as “seeing” sounds or “feeling” tastes.
On the premise that our five senses aren’t definitive, scientists began to explore how they could use the brain’s neuroplasticity (its ability to constantly change throughout an individual’s lifespan) to expand and augment our senses. After all, we don’t really directly experience reality — our brains do. We send limited signals from our eyes, for instance, to our brain, and that’s where we process what we see. To that end, we can use technology to tap into how our brains process these signals and manipulate that sensory feedback.
Expanding Our Senses
Rapidly improving technology and a deeper understanding of how our senses work are allowing us to fundamentally change our perception of the world. The ability to hack our senses by rerouting and expanding our sensory inputs is letting us see, hear, touch, taste, and smell things that we couldn’t possibly have experienced before given our human limitations.
Some of this technology allows people with disabilities to experience the traditional senses in a nontraditional way. The Versatile Extra-Sensory Transducer (VEST) translates sounds into vibrations, and with a bit of training, a hearing-impaired person can use it to perceive sound through tactile means. Meanwhile, technology developed by researchers from the University of Wisconsin allows the blind to “see” through electrical pulses delivered to a plate on their tongue.
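The VEST-style mapping from sound to touch can be pictured as binning the audible spectrum into bands, with one vibration motor per band. A toy version (the band edges, scaling, and input format are invented; a real device works on live microphone audio):

```python
# Toy sketch of sensory substitution: divide the spectrum into bands
# and drive one vibration motor per band, so a wearer can "feel" the
# frequency content of a sound. All parameters are invented.

def to_motor_intensities(components, band_edges_hz, max_amp=1.0):
    """components: list of (frequency_hz, amplitude) pairs from the mic.
    Returns one 0-255 vibration intensity per motor/band."""
    motors = [0.0] * (len(band_edges_hz) - 1)
    for freq, amp in components:
        for i in range(len(motors)):
            if band_edges_hz[i] <= freq < band_edges_hz[i + 1]:
                motors[i] += amp
                break
    return [min(255, int(255 * m / max_amp)) for m in motors]

# A low hum plus a high whistle activate different motors
sound = [(100, 0.5), (3000, 1.0)]
print(to_motor_intensities(sound, [0, 500, 2000, 8000]))  # [127, 0, 255]
```

With training, the brain learns to interpret these spatial vibration patterns as sound, which is the "rerouting" the article describes.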
Other tech builds on our existing senses. Artist Neil Harbisson could see, but was color blind, so he implanted an antenna in his skull that translates colors into audible vibrations. It resulted in a form of synesthesia, which he claims allows him to “hear” colors. The tech also allows him to experience light not visible to the human eye, which is another area of research yielding promising results.
While many attempts to augment the senses are still in the very early stages of development, researchers are already pushing the boundaries of what science and technology can do to achieve their lofty goals. Even the smallest breakthroughs in this field support the idea that we still have a lot to explore and discover about human perception and reality.
Communication technology tends to develop in a particular direction: more people communicating across larger distances using less effort to do so. Taken to its logical extreme, perfect communication would be anyone being able to talk to anyone, anywhere, using no effort at all.
The closest concept we have to this form of communication is something called the hive mind. Everyone would be connected to everyone telepathically, and we could all share our thoughts, memories, and even dreams with one another. Such a system of communication would not only have far-reaching consequences, it would also be hugely controversial.
We (Almost) Have the Technology
Many have theorized about the hive mind, speculating about everything from what we would need to bring it to fruition to what it would do to society.
Some initiatives, like Princeton’s Global Consciousness Project, assert that we already have a hive mind; it’s just a matter of detecting it. Certain religious groups also espouse the concept of oneness, claiming we are all already connected on a non-physical level. Rather than figuring out how to tap into some inherent ability we already have, however, the most popular theory on how we’ll get a hive mind is through technological innovation.
Some brainwave-interpreting technology is already allowing us to interact with machines through thought. Braingate’s wireless brain implant is currently designed to allow severely disabled people to control things like wheelchairs and virtual keyboards using their thoughts, but the technology could serve as a direct conduit to other brains one day. We’re already making strides on that front as well. In 2014, a team of researchers successfully conveyed the thoughts of a person in Spain to a person in France using electroencephalography (EEG) to read the sender’s brainwaves and a computer to transmit the information.
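The 2014 experiment worked by reducing short words to a stream of binary digits that could be sent one bit at a time from brain to brain. The coding step can be sketched as follows (the fixed 5-bit alphabet is an invented simplification of the actual scheme):

```python
# Sketch of the word-to-bits coding used in brain-to-brain messaging:
# each letter becomes a fixed-width 5-bit pattern, sent one bit at a
# time and decoded on the other end. The alphabet mapping is invented.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def encode(word):
    """Each letter becomes a fixed-width 5-bit pattern."""
    return "".join(format(ALPHABET.index(ch), "05b") for ch in word)

def decode(bits):
    chars = [bits[i:i + 5] for i in range(0, len(bits), 5)]
    return "".join(ALPHABET[int(c, 2)] for c in chars)

bits = encode("hola")
print(bits)          # 00111011100101100000
print(decode(bits))  # hola
```

In the real experiment, the sender produced each bit through motor imagery detected by EEG, and the receiver perceived it as a flash of light; the slow bit rate is why only greetings, not conversations, were exchanged.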
For Better or Worse
Few doubt that the technological tools we’d need for a global hive mind are on the way. The bigger question is what we would do once we had the technology.
Obviously, being able to effortlessly share our dreams, emotions, and thoughts with one another would have many benefits. We could create our own virtual worlds and interact with others only in those worlds. Formal education would become a days- or weeks-long process as we would have the ability to download knowledge directly into our brains, and the spread of ideas or viral thoughts would be nearly instantaneous.
It would feel like a much smaller world and how we would interact with others within that world would be fundamentally changed. Unfortunately, that change wouldn’t necessarily be for the better. Imagine a hacker controlling “thought databases” or a totalitarian regime going all “1984” on the whole of society. While many claim that privacy is already a lost concept, it would truly become a thing of the past if anyone could hack into your thoughts.
Another, more far-out theory is that we would coalesce into a single entity, like the Tyranid Hivemind or the Zerg Overmind of science fiction. Through our own volition, or as a natural evolution of the technology, we would “upload” ourselves into a singular consciousness. Maybe that would actually be the Singularity many have predicted.
Despite all these technological connections, at its most basic level, the concept of a hive mind is something found in nature, a means by which the many come together to create something better than the sum of its parts. Studies have already shown that we are better at predicting pretty much everything when we think as a group. Imagine the problems we could solve if everyone in the world could collaborate on solutions just by putting our minds together.
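That "better as a group" effect is easy to demonstrate numerically: the average of many independent guesses is often far closer to the truth than the typical individual guess. A toy example with invented numbers:

```python
# Wisdom-of-crowds in miniature: compare the error of the group's mean
# guess against the median individual error. All numbers are invented.

def crowd_error(guesses, truth):
    """How far off is the group's average guess?"""
    mean = sum(guesses) / len(guesses)
    return abs(mean - truth)

def typical_individual_error(guesses, truth):
    """Median of the individual errors."""
    errors = sorted(abs(g - truth) for g in guesses)
    return errors[len(errors) // 2]

guesses = [80, 95, 120, 140, 65, 110, 100]   # e.g. jellybeans in a jar
truth = 100

print(round(crowd_error(guesses, truth), 2))     # 1.43
print(typical_individual_error(guesses, truth))  # 20
```

The individual over- and under-estimates cancel in the average, which is why aggregating many minds, however it is done, tends to beat any one of them.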
The Safilo Group SpA is one of the world’s largest designers and manufacturers of frames for eyeglasses. The group holds licenses to create frames for some of the biggest designer brands, including Dior, Fendi, and Hugo Boss. According to Bloomberg, it looks like the company is seeking to add a little tech to its fashions.
Safilo is looking to expand into the fledgling smartglasses market with a new product it hopes to debut at CES 2017 in a few weeks. The company believes the glasses will be more fashionable than Google’s smartglass offering — so much so that it decided not to add a display, a camera, a microphone, or really any “smart” tech apart from mood-detecting sensors. The goal is to use that data to help the wearer relax or meditate.
The plan is to sell the glasses online in the US starting in the summer of 2017, then make them available with prescription lenses, and eventually integrate the technology into the designer frames. Company head Luisa Delgado believes the glasses’ selling point will hinge on their fashionability compared to other smartglasses.
“The consumers don’t have to make sacrifices on the glasses in order to get the technology,” said Delgado.
Perhaps Safilo can help lead the way for smartglasses to become the new smartwatch.
Researchers at the Weizmann Institute of Science in Israel and STEBA Biotech have announced the success of the unique method they developed to fight prostate cancer. This treatment, which the group of experts called “transformative,” has shown promising results.
Under development since 2011, vascular targeted photodynamic therapy (VPT), as the procedure is called, is carried out in two steps. Patients first receive WST11, a compound derived from benthic bacteria, which dwell at the bottom of the sea. These bacteria are particularly special because they are highly light-sensitive.
Optical fibers are then inserted through an area called the perineum, found between the testes and the anus, and directly into the prostate gland. The scientists then switch on a red laser whose light is delivered through the optical fibers, exploiting the photosensitivity of the WST11 drug and activating it. Upon activation, free radicals are released within the area, attacking and destroying the tumors. Unlike conventional treatments, which can affect a broad region of cells, this treatment is localized: nearby cells are left more or less unharmed.
Beating The Big ‘C’
The trial for this prostate cancer treatment has shown promising results. Engadget reports that, of the 415 participating men, nearly half were rid of the malignant disease by the end of treatment. This is a huge improvement over conventional procedures, which only have an average success rate of about 14 percent. Of those whose cancer went into remission, only six percent needed to have the affected prostate removed, again a stark contrast to the conventional procedures’ 30 percent.
Though this news is indeed exciting, the experts behind the study did say that VPT was most effective in low-to-medium-risk cases. For higher-risk cases, some techniques still need to be improved.
Lead researcher Mark Emberton from the University College London hospital, however, seems confident in the method. “This is truly a huge leap forward for prostate cancer treatment…In prostate cancer, we are still commonly removing or irradiating the whole prostate, so the success of this new tissue-preserving treatment is welcome news indeed.”
The United Kingdom’s fertility regulator will now begin accepting applications from clinics seeking to offer patients mitochondrial donation. The treatment has been proven to help women with mitochondrial DNA mutations give birth to healthy babies, and the Human Fertilisation and Embryology Authority’s (HFEA) approval for the technique has been deemed a historic step forward in this field.
The process involves removing the nuclear DNA from the egg of the mother carrying the mitochondrial disease and transferring it to a donor egg. The egg is fertilized with the father’s sperm, and the embryo is then implanted in the mother. That embryo will therefore have nuclear DNA from its parents but mitochondrial DNA from a donor, though the third-party DNA will account for less than one percent of the embryo’s genes.
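One way to sanity-check the "less than one percent" figure is by gene count: human mitochondrial DNA carries 37 genes, versus roughly 20,000 protein-coding nuclear genes (both widely cited approximate figures):

```python
# Rough check of the "less than one percent" claim for donor DNA,
# counted by genes. Both figures are approximations.

mito_genes = 37        # genes encoded on human mitochondrial DNA
nuclear_genes = 20_000 # approximate protein-coding nuclear genes

fraction = mito_genes / (mito_genes + nuclear_genes)
print(f"{fraction:.2%}")  # 0.18%
```

By this count the donor's contribution is closer to a fifth of a percent, comfortably under the one-percent figure quoted.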
“Today’s historic decision means that parents at very high risk of having a child with a life-threatening mitochondrial disease may soon have the chance of a healthy, genetically related child. This is life-changing for those families,” says HFEA Chair Sally Cheshire in a statement.
A mitochondrial donation at a Mexican clinic resulted in the successful birth of a three-parent baby earlier this year, and the team behind that case recently announced plans to treat as many as 20 new patients in the first half of 2017.
Women with mitochondrial disorders have a high risk of passing on the mutated DNA to their offspring. In most cases, the affected child will die early due to the lack of available treatment options or suffer debilitating conditions such as muscular dystrophy, organ failure, and muscle weakness.
Prior to the availability of mitochondrial donation, women with faulty mitochondria who wanted to have children were limited to adoption or undergoing in vitro fertilization (IVF) using healthy donor eggs. If they wanted genetically related offspring, they would have to risk natural conception or undergo a screening procedure that could help doctors find embryos with the fewest mutations.
Despite being a viable and better alternative to existing options, the procedure is not universally embraced. Critics have asserted that the DNA manipulation the technique requires could eventually lead to the creation of “designer babies.” However, as it provides a solution to a condition for which there is currently no better option, the value of this breakthrough is enormous.
Receiving the green light from regulatory bodies proves that advancements that could save lives and have positive life-changing implications right now will garner support despite any hypothetical future scenarios. “After a lot of hard work and invaluable advice from the expert panel, who reviewed the development, safety, and efficacy of these techniques over five years and four reports, we feel now is the right time to carefully introduce this new treatment in the limited circumstances recommended by the panel,” says Cheshire.
Twenty-eight-year-old quadriplegic Brian Gomez is one of the first patients to receive a new experimental treatment that has improved hand function in quadriplegics by 300 percent. The treatment uses a spinal stimulator implant developed by the Ronald Reagan UCLA Medical Center and funded by the National Institute of Biomedical Imaging and Bioengineering.
The 32-electrode stimulator was implanted near the C-5 vertebra in the middle of Gomez’s neck, below the site of his spinal injury. This area of the spinal cord is most commonly associated with quadriplegia, or the loss of movement and function of the limbs.
“The spinal cord contains alternate pathways that it can use to bypass the injury and get messages from the brain to the limbs,” said Daniel Lu, director of UCLA’s neuroplasticity and repair laboratory and the neuromotor recovery and rehabilitation center. “Electrical stimulation trains the spinal cord to find and use these pathways.”
The procedure included a small battery pack and processing unit placed under the skin of the patient’s lower back. The implant is controlled using a remote small enough to fit in the palm of one’s hand, which can regulate the intensity and frequency of the stimulations.
This procedure was first used on two cervical spinal-cord injury patients, who displayed a 300% increase in finger mobility and grip strength. For Lu and his colleagues, this was certainly a good sign of how effective the implants could be.
“Our goal is to get these patients back to the 20 to 30 [newtons of force] range [in their hands],” said Lu. “That will allow them to do everyday tasks and will make a huge difference in the quality of their lives.”
“[T]his approach is unique,” explained Lu. “This device is implanted in the spine, not the brain, and is designed to help patients move their own hands.” Developments like this one are allowing us to continually explore new ways to improve the lives of people like Gomez.
Researchers from Cornell University have discovered a new way to give robots a (figurative) sense of touch, allowing them to handle delicate items and to sense shape and texture internally.
Traditionally, robot hands needed to be made of materials that conduct electricity. “Gentle Bot” instead uses optical waveguides — essentially sensing its surroundings with light.
This is key to measuring curvature, pressure, and elongation internally. Inside Gentle Bot’s fingers are pliable pipes that carry light waves — these are the waveguides. When the hand comes into contact with an object and the fingers flex, the waveguides bend and distort the light signals passing through them. These changes deliver data on whatever the hand is touching, allowing the robot to mimic human touch more closely than any prosthesis or mechanical limb to date.
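The sensing principle can be captured in a toy model: bending a finger attenuates the light traveling through its waveguide, so the measured output power can be inverted into an estimate of the bend. The linear loss model and its coefficient below are invented for illustration:

```python
# Toy model of waveguide-based touch sensing: more bend means more
# light lost, so measuring output power reveals the bend angle.
# The linear loss model and coefficient are invented.

LOSS_PER_DEGREE = 0.01   # fraction of light lost per degree of bend (made up)

def transmitted(power_in, bend_deg):
    """Output power after the waveguide is bent by bend_deg degrees."""
    return power_in * max(0.0, 1.0 - LOSS_PER_DEGREE * bend_deg)

def estimate_bend(power_in, power_out):
    """Invert the loss model to recover the bend angle."""
    return (1.0 - power_out / power_in) / LOSS_PER_DEGREE

out = transmitted(1.0, 30)             # finger bent 30 degrees
print(round(out, 2))                   # 0.7
print(round(estimate_bend(1.0, out)))  # 30
```

Real waveguide sensors behave nonlinearly and also respond to pressure and stretch, but the inversion idea, inferring deformation from light loss, is the same.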
While optical waveguides have been around since the 1970s, manufacturing them was long an expensive and complicated process. However, the accessibility of soft lithography and 3D printing today allows for easier development of technology for soft robotic applications.
Gentle Bot’s potential in the field of prostheses is evident, not to mention its commercial, medical, and manufacturing applications. But its creators are eyeing a grander vision: bio-inspired robots that could be sent to explore the furthest reaches of space in ways that humans cannot.