Herodotus’s Fountain of Youth. Rowling’s Philosopher’s Stone. Barrie’s Neverland. Ovid’s Cumaean Sibyl. The idea of immortality has been ingrained in humanity’s creative consciousness since our humble beginnings. In the present day, eternal youth may soon move out of the realm of myth and into reality thanks to developing technologies.
The 2045 movement, founded by Russian billionaire Dmitry Itskov in 2011, aims to make humans immortal by transferring their personalities into a carrier superior to the human body. The movement’s ideology is “to create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.”
There are four main avenues the cooperative is pursuing in its attempt to achieve human immortality. The avenues are chronological stages of the project, with each stage representing a further degree of disembodiment.
Avatar A aims to give the human brain control of a remote humanoid robot by using a brain-computer interface (BCI). While this might seem outlandish, we must remember that controlling robots using thought — the most rudimentary aspect of the project — was achieved a decade ago. This technology has been accelerated by recent advancements in the field of prosthetics, which show that the human nervous system is capable of interfacing with prosthetic enhancements.
Avatar B, rather than controlling a body remotely, seeks to implant the brain into the body itself. The envisioned process is to ‘turn off’ the brain, transport it, and then transplant it (a kind of technological metempsychosis). As the brain, theoretically, would be in a robotic body, this stage creates a consciousness inhabiting a body that could be modified, augmented, or updated.
Avatar C, the next stage of disembodiment, envisions a completely robotic body that the brain could be uploaded to. Hypothetically, this would require hardware and software solutions that could allow consciousness to be uploaded, and subsequently inserted, into a totally robotic body — or potentially many bodies. As the brain would become computerized — rather than remaining fleshy matter controlling a machine — this would allow the brain itself to be customized and the sentient robot, as a whole, to survive what a human body could not.
Whether or not this technology will be actualized is a moot question: it will eventually come about, perhaps even as soon as 2045, as Kurzweil has predicted. The more important question is whether it should. The possibility of immortality could have profound effects on individuals, as well as on society as a whole.
There are both positives and negatives concerning the idea of eternal youth. Immortality could mean that we would no longer suffer the fear of death, we could do more with our lifetimes, and the world’s greatest minds could continue to develop their thoughts. However, there could also be a strain on resources, serious psychological problems associated with extreme age, and stress on societal structures such as marriage and parenthood.
This dilemma is reminiscent of Ovid’s Cumaean Sibyl: she asked for eternal life rather than eternal youth. Apollo, therefore, let her rot — but kept her alive until she deteriorated into being kept in a jar, when she became only a voice. Taking the story’s moral to heart, we must constantly consider what we are wishing to achieve by attaining eternal life: whether this journey is towards a tangible and positive gain or whether we are simply scared of dying. Hopefully it is the former, or else we run the risk of being nothing more than a voice in a jar, an imitation of life.
During standard CRISPR-Cas9 gene editing, a “guide” RNA delivers a Cas9 enzyme to a specific DNA sequence. The researchers from UCSD instead instructed it to target an RNA molecule.
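As a rough software analogy of how a guide sequence "finds" its target, the principle is complementary base pairing: the guide's complement marks where it binds on a target strand. The sequences and pairing table below are illustrative toy values, not UCSD's actual method or real biological data.

```python
# Toy illustration (not a biological model): a short "guide" sequence
# locates its complementary site in a longer RNA strand.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}  # RNA base pairing

def complement(seq: str) -> str:
    """Return the base-paired complement of an RNA sequence."""
    return "".join(COMPLEMENT[base] for base in seq)

def find_target(guide: str, strand: str) -> int:
    """Return the index where the guide's complement occurs in the
    strand, or -1 if no complementary site exists."""
    return strand.find(complement(guide))

strand = "AUGGCCAUUGGCGCCCUGGAGGUC"  # made-up target strand
guide = "CCGCGG"                     # pairs with "GGCGCC"
print(find_target(guide, strand))    # → 9
```

The real system is far more involved (binding kinetics, Cas9 recruitment, mismatch tolerance), but the complementarity lookup is the core idea behind retargeting the same machinery from DNA to RNA.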
Tests conducted in the laboratory showed that RCas9 removed 95 percent of problem-causing RNA for myotonic dystrophy types 1 and 2, Huntington’s disease, and one type of ALS. The technique also reversed 93 percent of the dysfunctional RNA targets in the muscle cells of patients with myotonic dystrophy type 1, resulting in healthier cells.
“This is exciting because we’re not only targeting the root cause of diseases for which there are no current therapies to delay progression, but we’ve re-engineered the CRISPR-Cas9 system in a way that’s feasible to deliver it to specific tissues via a viral vector,” senior author Gene Yeo, a cellular and molecular medicine professor at UCSD School of Medicine, explained in a press release.
Improving Countless Lives
Across the globe, an estimated 450,000 patients are living with ALS. Roughly 30,000 of those are in the U.S., where 5,600 people are diagnosed with the disease every year. The exact number of Huntington’s disease cases, however, isn’t quite as easy to pin down. One estimate says that around 30,000 Americans display symptoms of it, while more than 200,000 are at risk.
Regardless of the exact numbers, these two neurological diseases clearly affect a significant number of people. This prevalence and the absence of a known cure make the UCSD team’s research all the more relevant. Even more exciting is the fact that the same kinds of RNA mutations targeted by this study are known to cause more than 20 other genetic diseases.
“Our ability to program the RCas9 system to target different repeats, combined with low risk of off-target effects, is its major strength,” co-first author of the study Ranjan Batra said in the UCSD press release.
However, the researchers do know that what they’ve accomplished is just a first step. While RCas9 works in a lab, they still have to figure out how it will fare when tested in actual patients.
“The main thing we don’t know yet is whether or not the viral vectors that deliver RCas9 to cells would elicit an immune response,” explained Yeo. “Before this could be tested in humans, we would need to test it in animal models, determine potential toxicities, and evaluate long-term exposure.”
Ultimately, while RCas9 couldn’t exactly deliver a cure, it could potentially extend patients’ healthy years. For diseases like ALS and Huntington’s, that’s a good place to start.
The U.S. Department of Defense is moving forward with its work on brain-computer interfaces (BCIs), with the Defense Advanced Research Projects Agency (DARPA) awarding contracts to five research groups and one private company on Tuesday.
BCIs have been around for a while now, but the potential for expanding their capabilities is relatively recent. The primary purpose of BCIs has been developing better neuroprostheses, which is also one of the visions of DARPA’s Neural Engineering System Design (NESD) program. Through NESD, DARPA wants to develop high-resolution brain interfaces that could restore and enhance human senses.
“The NESD program looks ahead to a future in which advanced neural devices offer improved fidelity, resolution, and precision sensory interface for therapeutic applications,” founding NESD program manager Phillip Alvelda said at the announcement, the Singularity Archive reports. Of the six awarded contracts, four will work on vision enhancement while the other two will focus on hearing and speech.
Digital Signals in the Brain
Back in 2016, DARPA announced that NESD will develop neural interface systems that will improve communication between the brain and the digital world. The idea is to convert electrochemical signals in the brain into the binary bits of zeros and ones used in computers. Braintree founder Bryan Johnson even thinks it’s possible to make our neural code programmable through such systems.
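The conversion described above can be sketched in miniature: a quantizer maps analog voltage samples onto discrete levels expressed as binary strings. The bit depth, voltage range, and sample values below are arbitrary illustration values, not parameters of any actual neural interface.

```python
# Minimal sketch of digitizing an analog signal: sample a voltage
# trace and quantize each sample into fixed-width binary codes.

def quantize(samples, levels=4, v_min=0.0, v_max=1.0):
    """Map each analog sample onto one of `levels` discrete values,
    returned as fixed-width binary strings."""
    width = (levels - 1).bit_length()      # bits needed per sample
    step = (v_max - v_min) / levels        # voltage span of each level
    codes = []
    for v in samples:
        # Bucket the sample into a level, clamping out-of-range values.
        level = min(levels - 1, max(0, int((v - v_min) / step)))
        codes.append(format(level, f"0{width}b"))
    return codes

trace = [0.05, 0.40, 0.65, 0.95]  # pretend normalized voltage readings
print(quantize(trace))            # → ['00', '01', '10', '11']
```

Real neural interfaces face the much harder problems of sensing tiny electrochemical signals reliably and interpreting what the resulting bits mean, but this is the basic sense in which brain activity becomes "zeros and ones."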
Not only would this help treat persons with sense disabilities by inputting various senses directly into the brain as digital signals, it would also enhance them. The ability of such neural interface systems to do this has led others working on similar technologies — like Elon Musk with his Neuralink — to consider BCIs as humanity’s way of keeping up with the development of intelligent machines. Indeed, the power of BCIs to meld the human mind with machines has led to the emerging field of neuroreality, which is a transformation of how we see and interact with the world around us.
For DARPA, it starts with helping those that suffer from sensory impairments. “[I]f we’re successful in delivering rich sensory signals directly to the brain, NESD will lay a broad foundation for new neurological therapies,” Alvelda said.
“Significant technical challenges lie ahead,” he added, “but the teams we assembled have formulated feasible plans to deliver coordinated breakthroughs across a range of disciplines and integrate those efforts into end-to-end systems.”
Disclosure: Bryan Johnson is an investor in Futurism; he does not hold a seat on our editorial board or have any editorial review privileges.
To trendsetters, artists, and tech nerds, microdosing is nothing new: in Silicon Valley, people have been doing it for a long time, aiming to optimize their performance. Microdosing involves using tiny, controlled doses of psychedelic drugs in the same manner one might use nootropic stacks or Adderall to give themselves an edge in a hyper-competitive environment.
In practice, microdosing is just like it sounds: taking between five and ten percent of a recreational dose of a hallucinogen such as LSD, mescaline (peyote), or psilocybin (magic mushrooms). Such a small dose can’t send you on a “trip” or make you feel high, but it’s just enough to give you an overall sense of well-being and expand your behaviors and thought patterns to become more creative. In short, microdosing — when it works as intended — has been said to give adherents more productive, happier days.
In normal usage, the Arke is controlled using an array of sensors that respond to the wearer’s natural movements. However, as the user gets used to the exoskeleton, they typically use a tablet to issue instructions. Since this could be too much multitasking, some might find voice commands to be more intuitive.
The World Health Organization reports that more than 422 million people worldwide are living with diabetes, a condition that can take two forms. In the first, the body’s immune system attacks cells in the pancreas, preventing the organ from producing enough insulin [type 1 diabetes (T1D)]. In the second, the body doesn’t know how to use the insulin that is produced [type 2 diabetes (T2D)].
T1D accounts for roughly 10 percent of diabetes cases, and unlike T2D, which can often be reversed through lifestyle changes such as weight loss or increased exercise, scientists have yet to figure out how to prevent or cure T1D.
Right now, T1D is best managed by balancing insulin doses, but this method can be problematic in high-risk cases because insulin takes time to act. Moreover, patients with hypoglycemia (low glucose) unawareness may not notice when their blood sugar drops dangerously low. Thankfully, researchers all over the world are hard at work looking for a cure that will free T1D patients from their dependence on insulin injections and from risky situations when their levels drop low.
Now, one group may have found such a cure.
Just last week, California-based company ViaCyte began trials involving two T1D patients who were implanted with the company’s PEC-Direct device.
Each of these credit card-sized implants carries cells built from stem cells. These cells are designed to mature inside the human body into the specialized pancreas cells the immune system destroys in those with T1D. The implant is placed just below the skin and releases insulin whenever necessary.
“Patients with high-risk type 1 diabetes complications, such as hypoglycemia unawareness, are at constant risk of life-threatening low blood glucose,” clinical trial investigator Jeremy Pettus from University of California, San Diego, said in a ViaCyte press release. “The PEC-Direct islet cell replacement therapy is designed to help patients with the most urgent medical need.”
“There are limited treatment options for patients with high-risk type 1 diabetes to manage life-threatening hypoglycemic episodes,” added ViaCyte president and CEO Paul Laikind. “We believe that the PEC-Direct product candidate has the potential to transform the lives of these patients.”
Truly, freeing T1D patients from the need for constant insulin shots hasn’t been an easy task. Researchers in Finland have been looking into it for 25 years and only recently did they manage to develop a vaccine for type 1 diabetes — that breakthrough will go to clinical trials by 2018. ViaCyte’s device is another promising discovery.
Prior to last week’s clinical trial, PEC-Direct implants using smaller amounts of stem cells were tested in 19 diabetes patients. Although these did mature into the desired islet cells, the limited number wasn’t designed to treat the condition. The PEC-Direct implants received by the two patients last week contain more cells. The hope is that three months from now, when the cells have matured, they’ll be able to take the place of injections by releasing insulin automatically when needed.
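The "release insulin automatically when needed" behavior can be caricatured as a simple threshold rule. To be clear, this is only a logical sketch: real islet cells respond continuously and gradually, and the threshold and glucose values below are made-up illustration numbers, not clinical guidance.

```python
# Toy sketch of threshold-triggered insulin release.
# All numbers are hypothetical illustration values.

HIGH_GLUCOSE = 180  # mg/dL; hypothetical trigger threshold

def insulin_response(readings):
    """Return a parallel list of booleans: True where a glucose
    reading would trigger insulin release, False where it would not."""
    return [glucose > HIGH_GLUCOSE for glucose in readings]

readings = [95, 140, 210, 185, 100]
print(insulin_response(readings))  # → [False, False, True, True, False]
```

The promise of cell-based implants is precisely that this sensing-and-response loop happens biologically, with no device logic or patient intervention required.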
If it does work, the only thing T1D patients will have to do is take immunosuppressant drugs to make sure their bodies don’t reject the new cells. That’s a small price to pay to be freed of daily injections. As James Shapiro at the University of Alberta, Canada, told New Scientist, “A limitless source of human insulin-producing cells would be a major step forward on the journey to a potential cure for diabetes.”
Editor’s Note: This article has been updated. A previous version implied that individuals should take insulin when blood sugar levels are low. This has been updated to note that individuals need insulin when sugar levels are high.
Last week, reports circulated that doctors had successfully edited a gene in a human embryo — the first time such a thing had been done in the United States. The remarkable achievement confirmed the powerful potential of CRISPR, the world’s most efficient and effective gene-editing tool. Now, details of the research have been published in Nature.
The procedure involved “correcting” the DNA of one-cell embryos using CRISPR to remove the MYBPC3 gene. That gene is known to cause hypertrophic cardiomyopathy (HCM), a heart disease that affects 1 out of 500 people. HCM has no known cure or treatment, as its symptoms often don’t manifest until the disease causes sudden death through cardiac arrest.
The researchers started with human embryos created from 12 healthy female donors and sperm from a male volunteer who carried the MYBPC3 gene. The defective gene was cut out using CRISPR around the time the sperm was injected into the eggs.
As a result, as the embryos divided and grew, many repaired themselves using the non-edited genes from the genetic materials of the female donors, and in total, 72 percent of the cells that formed appeared to be corrected. The researchers didn’t notice any “off-target” effects on the DNA, either.
Basic or not, the development is remarkable. “By using this technique, it’s possible to reduce the burden of this heritable disease on the family and eventually the human population,” Mitalipov said in an OHSU press release.
However, gene editing is a controversial area of study, and the researchers’ work included changes to the germ line, meaning the changes could be passed down to future generations. To be clear, though, the embryos were allowed to grow for only a few days and none were implanted into a womb (nor was that ever the researchers’ intention).
University of Wisconsin-Madison bioethicist Alta Charo thinks that the benefits of this potential treatment outweigh all concerns. “What this represents is a fascinating, important, and rather impressive incremental step toward learning how to edit embryos safely and precisely,” she told The Washington Post. “[N]o matter what anybody says, this is not the dawn of the era of the designer baby.”
Before the technique could be truly beneficial, regulations must be developed that provide clearer guidelines, according to Mitalipov. If not, “this technology will be shifted to unregulated areas, which shouldn’t be happening,” he explained.
More than 10,000 disorders have been linked to just a single genetic error, and as the researchers continue with their work, their next target is BRCA, a gene associated with breast cancer growth.
Mitalipov hopes that their technique could one day be used to treat a wide range of genetic diseases and save the lives of millions of people. After all, treating a single gene at the embryonic stage is far more efficient than changing a host of them in adults.
Neural signal processing difficulties are what cause the loss of mobility in the limbs. Spinal cord injuries can cause these kinds of neural signal troubles, as can other issues such as muscular dystrophy, drug overdose, stroke, and neuropathy. In the case of spinal cord injuries, the brain sends a clear signal and the muscles in the limbs are capable of receiving them, but the damage to the spinal cord prevents the signals from getting through.
The Center for Sensorimotor Neural Engineering (CSNE) — a collaborative effort between researchers from San Diego State University, the Massachusetts Institute of Technology (MIT), and the University of Washington (UW) — is developing an implantable brain chip to bypass the problem in the spinal cord. The chip records neural electrical signals and then sends them directly to receivers in the limbs, restoring movement. That’s already an impressive achievement, but recently the team published a study describing a critical improvement: a brain-computer interface (BCI) that could allow the chip to send stronger signals that would lead to longer lasting movement capability in the body.
The BCI uses electrodes that read signals from neurotransmitters, recording and transmitting them to the appropriate limb or prosthetic device. Thin-film platinum has been the state-of-the-art material used for these electrodes, but this material carries a risk of fracturing and breaking down over time. The researchers from CSNE solved this problem by using glassy carbon to create the BCI’s electrodes. Glassy carbon lasts longer and resists corrosion more effectively than thin-film platinum and other metals used to make electrodes because it is about 10 times smoother.
Technology and innovation have the power to level the playing field between all kinds of people, including the disabled. One such innovation is helping hearing impaired people better connect with the world and lead fuller lives. Apple has partnered with Cochlear, a company that develops devices that allow the hearing impaired to hear better, or in some cases, for the very first time.
This partnership has led to the development of the Nucleus 7 Sound Processor, which streams audio from a compatible Apple device (iPhones, iPads, iPod touches) directly to a person’s implant via Bluetooth technology. These surgically implanted devices give hearing impaired individuals the power to not only hear activity on their devices but also customize the audio.
Apple began by creating a protocol for use by hearing device developers and manufacturers to make devices compatible with Apple’s products. Apple’s senior manager for global accessibility policy and initiatives, Sarah Herrlinger, told TechCrunch, “We want everybody to use our technology and to say ‘wow my iPhone is the best piece of technology I’ve ever used before’…with every iteration of our operating system our goal is to add in new accessibility features in order to expand the support that we can give to people all over the world.”
The Hearing Health Foundation estimates that more than 50 million Americans have experienced hearing loss. This technology aims to help widen the auditory world for a great number of people.
A vending machine company based in Wisconsin is about to introduce a revolutionary technology in the United States. No, it’s not a new kind of vending machine, but it may be the future of purchasing vended goods, among other things.
“Employees will be implanted with a [Radio-Frequency Identification] chip allowing them to make purchases in their break room micro market, open doors, login to computers, use the copy machine, etc.,” according to the statement. Essentially, implanted employees will be able to do things like buy coffee with a wave of their hand — like a Jedi.
Each RFID chip is as small as a grain of rice and will be implanted under the skin between the thumb and the forefinger. The chips each cost about $300 — an expense the company is covering — and while getting the implant is optional, 32M says it expects more than 50 of its employees to volunteer to be chipped.
The kind of implant 32M is offering, however, isn’t exactly in the same league. The RFID chips are designed to carry information, usually personal, that would grant employees access to certain services. As such, these devices are prone to potential security and privacy problems or could even cause health issues.
“Companies often claim that these chips are secure and encrypted,” Alessandro Acquisti, IT and public policy professor at Carnegie Mellon University’s Heinz College, told the New York Times. However, “encrypted” is “a pretty vague term,” he added, “which could include anything from a truly secure product to something that is easily hackable.”
Anticipating such concerns, 32M prepared an FAQ list where it guaranteed that the chips aren’t trackable and don’t even have GPS capabilities. According to this list, the chips only contain the information employees choose to link to them. Still, Acquisti urges caution as the functionality of the chips could change after implantation. “It’s very hard to predict or stop a future widening of their usage,” he asserts.
Clearly, while this technology promises an exciting future, employees should carefully consider all angles before committing to it.
By now, most of us know what CRISPR gene editing is. At the very least, we have heard of this revolutionary technology that allows us to alter DNA—the source code of life itself. One day, CRISPR could allow us to delete genes in order to eradicate genetic diseases, add in new genes in order to vastly improve various biological functions, or even genetically modify human embryos in order to create an entirely new class of humans…of super humans.
But first, we have a lot of research to do.
And that brings us to today. Reports from MIT were just released asserting that the very first attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon.
“So far as I know this will be the first study reported in the U.S.,” Jun Wu, who played a role in the project and is a collaborator at the Salk Institute, said to MIT.
According to MIT, the work was led by Shoukhrat Mitalipov, who comes from the Oregon Health and Science University. Although details are scarce at this point, sources familiar with the work assert that the research involved changing the DNA of one-cell embryos using CRISPR gene-editing. Further, Mitalipov is believed to have broken records in two notable ways:
He broke the record on the number of embryos experimented upon.
He is the first researcher to ever conclusively demonstrate that it is possible to safely and efficiently correct defective genes that cause inherited diseases.
This is notable because, despite the fact that it has been around for several years now, CRISPR is still an incredibly new tool—one that could have unintended consequences. As previous work published in the journal Nature Methods revealed, CRISPR-Cas9 could lead to unintended mutations in a genome. However, the work was later reviewed by researchers at another institution and the findings were brought into question. It remains to be seen whether the original study will be corrected or retracted, but this development highlights the importance of peer review in science.
In this regard, Mitalipov’s work brings us further down the path to understanding exactly how CRISPR works in humans, and reveals that it is possible to avoid both mosaicism (changes taken up by only some of an embryo’s cells, rather than all of them) and “off-target” effects.
A Long Road to Travel
It is important to note that none of the embryos were allowed to develop for more than a few days, and that the team never had any intention of implanting them into a womb. However, it seems that this is largely due to ongoing regulatory issues, as opposed to issues with the technology itself.
In the United States, all efforts to turn edited embryos into a baby—to bring the embryo to full term—have been blocked by Congress, which added language to the Department of Health and Human Services funding bill that forbids it from approving any such clinical trials.
Scientists have been developing artificial hearts for quite some time now. However, many of the current designs are unfortunately clunky, which presents difficulties in successfully integrating them into human tissue. To approach this issue, a team of researchers from ETH Zürich decided to take a cue from the biological human heart.
Instead of using separate parts, the Swiss team, led by Nicholas Cohrs, 3D-printed an artificial heart using a soft, flexible material. The material was molded into a single part (or a “monoblock”), which allowed the team to design a complex inner structure complete with pumping mechanisms able to be triggered by silicone ventricles. This method imitates a realistic human heartbeat.
“[O]ur goal is to develop an artificial heart that is roughly the same size as the patient’s own one and which imitates the human heart as closely as possible in form and function,” Cohrs said in a press release. The team successfully tested this artificial heart, pumping blood-like fluid at human body-like pressures. The team published their research in the journal Artificial Organs.
However, this design is still a proof of concept, which means it’s not yet ready for actual implantation. The materials used are, as of right now, unable to last more than a few thousand heartbeats (roughly half an hour), though that could vary a bit depending on a person’s heart rate. It’s a limitation the team will continue to work on as new materials and design improvements advance. Once perfected, this design could potentially improve the lives and health of around 26 million people worldwide who suffer from various heart conditions.
Current brain implants are less than ideal. They are crude brain-computer interface (BCI) devices that are implanted during an invasive procedure, and a relatively small number of electrodes make contact with the brain despite the large size of the devices. While they are somewhat effective in mitigating the effects of epilepsy, Parkinson’s, and other neurodegenerative conditions, they’d be far more useful with more electrodes making contact than the mere tens of thousands that are possible now.
To dramatically improve BCIs, scientists need to create an implant device that is less invasive than current methods and can still offer more interactive channels, including direct interfaces with the visual and auditory cortices. This kind of BCI would create a vastly expanded range of contact points for brain function support by artificial systems.
Enter the U.S. Department of Defense’s (DoD’s) Defense Advanced Research Projects Agency (DARPA), and its Neural Engineering System Design (NESD) program. Columbia Engineering Professor Ken Shepard is leading this team, which recently received a $15.8 million grant for the next four years. Their goal is to invent exactly this kind of device, with the specific intent to help people with hearing and vision impairment and neurodegenerative diseases, and they’re using flexible silicon electronics to do it. The team plans to develop an implantable device with one million channels and they hope to apply for approval for regulatory testing by the time the four-year grant expires.
Disclaimer: Futurism only supports products that we trust and use. This post is in partnership with Abundance 360, and Futurism may get a small percentage of sales. Want to take a class with Peter Diamandis? Click here to learn more!
As we close out 2016, if you’ll allow me, I’d like to take a risk and venture into a topic I’m personally compelled to think about, a topic that will seem far-out to most readers.
Today’s extraordinary rate of exponential growth may do much more than just disrupt industries. It may actually give birth to a new species — reinventing humanity — over the next 30 years.
I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a “meta-intelligence,” a future in which we are all highly connected — brain to brain via the cloud — sharing thoughts, knowledge, and actions.
In this blog, I’m investigating the driving forces behind such an evolutionary step, the historical pattern we are about to repeat, and the implications thereof. Again, I acknowledge that this topic seems far-out, but the forces at play are huge and the implications are vast.
Let’s dive in…
A Quick Recap: Evolution of Life on Earth in 4 Steps
About 4.6 billion years ago, our solar system, the Sun, and the Earth were formed. Four steps followed…
3.5 billion years ago, the first simple life forms, called “prokaryotes,” came into existence. These prokaryotes were super-simple, microscopic single-celled organisms, basically a bag of cytoplasm with free-floating DNA. They had neither a distinct nucleus nor specialized organelles. Fast-forwarding one billion years…
2.5 billion years ago, the next step in evolution created what we call “eukaryotes” — life forms that distinguished themselves by incorporating biological “technology” into themselves. This technology allowed them to manipulate energy (via mitochondria) and information (via chromosomes) far more efficiently. Fast forward another billion years for the next step…
1.5 billion years ago, these early eukaryotes began working collaboratively and formed the first “multi-cellular life,” of which you and I are the ultimate example (a human is a multicellular creature of 10 trillion cells).
The final step I want to highlight happened some 400 million years ago, when lungfish crawled out of the oceans onto the shores, and life evolved from the oceans onto land.
The Next Stages of Human Evolution in 4 Steps
Today, at a massively accelerated rate — some 100 million times faster than the steps I outlined above — life is undergoing a similar evolution. In this next stage of evolution, we are going from evolution by natural selection (Darwinism) to evolution by intelligent direction.
Allow me to draw the analogy for you:
Simple humans today are analogous to prokaryotes. Simple life, each life form independent of the others, competing and sometimes collaborating.
Just as eukaryotes were created by ingesting technology, humans will incorporate technology into our bodies and brains that will allow us to make vastly more efficient use of information (BCI) and energy.
Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness and a new organism I call the “meta-intelligence.”
Finally, humanity is about to crawl out of the gravity well of Earth to become a multi-planetary species. Our journey to the Moon, Mars, asteroids, and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago.
The Four Forces Driving the Evolution and Transformation of Humanity
Four primary driving forces are leading us towards our transformation of humanity into a meta-intelligence both on and off the Earth:
The wiring of our planet
The emergence of brain-computer interfaces
The emergence of AI
The opening of the space frontier
Let’s take a look at each.
Wiring the Planet
Today, there are 2.9 billion people connected online. Within the next six to eight years, that number is expected to increase to nearly 8 billion, with each individual on the planet having access to a megabit-per-second connection or better.
The wiring is taking place through the deployment of 5G on the ground, plus networks being deployed by Facebook, Google, Qualcomm, Samsung, Virgin, SpaceX, and many others.
Within a decade, every single human on the planet will have access to multimegabit connectivity, the world’s information, and massive computational power on the cloud.
A multitude of labs and entrepreneurs are working to create lasting, high-bandwidth connections between the digital world and the human neocortex (I wrote about that in detail).
Ray Kurzweil predicts we’ll see human-cloud connection by the mid-2030s, just 18 years from now.
In addition, entrepreneurs like Bryan Johnson (and his company Kernel) are committing hundreds of millions of dollars towards this vision.
The end results of connecting your neocortex with the cloud are twofold: First, you’ll have the ability to increase your memory capacity and/or cognitive function millions of fold; second, via a global mesh network, you’ll have the ability to connect your brain to anyone else’s brain and to emerging AIs, just like our cell phones, servers, watches, cars, and all devices are becoming connected via the Internet of Things (IoT).
Artificial Intelligence/Human Intelligence
Next, and perhaps most significantly, we are on the cusp of an AI revolution.
Artificial intelligence, powered by deep learning and funded by companies such as Google, Facebook, IBM, Samsung, and Alibaba, will continue to rapidly accelerate and drive breakthroughs.
Cumulative “intelligence” (both artificial and human) is the single greatest predictor of success for a company or a nation. For this reason, besides the emerging AI “arms race,” we will soon see a race focused on increasing overall human intelligence.
Whatever challenges we might have in creating a viable brain-computer interface (e.g., designing long-term biocompatible sensors or nanobots that interface with your neocortex), those challenges will fall quickly over the next couple of decades as AI-powered tools give us ever-increasing problem-solving capability.
It is an exponential atop an exponential. More intelligence gives us the tools to solve connectivity and mesh problems and in turn create greater intelligence.
Opening the Space Frontier
Finally, it’s important to note that the human race is on the verge of becoming a multiplanetary species.
Thousands of years from now, whatever we’ve evolved into, we will look back at these next few decades as the moment in time that the human race moved off Earth irreversibly.
Today, billions of dollars are being invested privately into the commercial space industry. Efforts led by SpaceX are targeting humans on Mars, while efforts by Blue Origin are looking at taking humanity back to the Moon and plans by my own company, Planetary Resources, strive to unlock near-infinite resources from the asteroids.
The rate of human evolution is accelerating as we transition from the slow and random process of “Darwinian natural selection” to a hyper-accelerated and precisely directed period of “evolution by intelligent direction.”
In this blog, I chose not to discuss the power being unleashed by such gene-editing techniques as CRISPR-Cas9. Consider this yet another tool able to accelerate evolution by our own hand.
The bottom line is that change is coming faster than ever considered possible. As leaders, entrepreneurs, and parents, we all have a huge responsibility to inspire and guide the transformation of humanity on and off the Earth.
What we do over the next 30 years — the bridges we build to abundance — will impact the future of the human race for millennia to come. We truly live during the most exciting time ever in human history.
The U.S. Department of Defense has created six consortia to develop brain-computer interface (BCI) technologies and is backing them with a $65 million investment. On July 10, the Department chose Paradromics Inc., a neural interface company, to lead one of the six groups.
Among the goals will be creating technologies that can stimulate millions of individual neurons inside the human brain and record their activity simultaneously. The ability to record activity from massive numbers of neurons is critical to creating any kind of high-throughput connection between computers and the human brain, including devices intended to restore lost senses or abilities.
The federal contracts, distributed by the Defense Advanced Research Projects Agency (DARPA), all focus on cutting-edge BCI technologies, including holographic microscopes for observing thousands of neurons simultaneously, wireless “neurograins,” the size of grains of sand, light-emitting diodes to blanket the visual cortex and restore vision, and flexible circuits for layering onto the brain.
Paradromics is taking up to $18 million of the total investment, and in return it is tasked with creating an implant that can record from one million neurons and send the signal back to the brain, all in a package not much bigger than a nickel.
Wire In The Brain
The technology Paradromics will incorporate is an outgrowth of a technique a Moldovan company uses to mass-produce extremely fine, insulated, 20-micron-thick wires in coils by stretching hot metal, Matt Angle, who is leading the Paradromics team, told MIT Technology Review. In the final BCI product, spools of wire can be bundled together into groups 10,000 wires thick, with a sharp, brush-like edge that pierces the brain like needles while causing minimal damage.
At the other end, the bundle of wires will be joined to a microprocessor. This will detect, tally, and analyze the electrical signals coming from the brain through the wires.
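The detect-and-tally step described above amounts to counting threshold crossings on each wire. Here is a minimal sketch of that idea; the bare threshold is a toy assumption, since a real neural signal chain would involve filtering and spike sorting rather than a single cutoff:

```python
def count_spikes(samples, threshold):
    """Count rising-edge threshold crossings in one channel's samples.

    A toy stand-in for the microprocessor's detect-and-tally step;
    the threshold logic here is an illustrative assumption, not
    Paradromics' actual firmware.
    """
    spikes = 0
    above = False
    for s in samples:
        if s > threshold and not above:
            spikes += 1      # new rising edge: count one spike
            above = True
        elif s <= threshold:
            above = False    # signal fell back below threshold
    return spikes

# One simulated channel with two excursions above a 1.0 threshold
channel = [0.1, 0.2, 1.5, 1.8, 0.3, 0.1, 2.1, 0.4]
print(count_spikes(channel, 1.0))  # 2
```

Scaling this kind of tally to a million wires, continuously and inside a nickel-sized package, is precisely the engineering challenge the DARPA contract funds.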
For Paradromics, the ultimate goal is a high-density connection between a computer and the brain’s speech center, one that could let the device perceive the words a person is considering before they’re even said. The implant could, for example, help patients with ALS who have lost the power of speech to speak again.
However, the technology could have much broader applications; it could also enable neuroscientists to observe the ways that massive networks of neurons control and create complex behaviors and perceive sensory stimuli. This kind of technology may even allow neuroscientists to parse out how the brain creates consciousness — and how to mimic that ability.
At this point, DARPA is just one player in the game. Details about Elon Musk’s Neuralink were revealed in April, and if what he predicts comes to pass, users will be able to “think” to each other to communicate in eight to 10 years. Transcranial direct current stimulation is also being explored for its potential in treating neurological diseases. It may also help enhance human intelligence and function as a technological checkpoint toward achieving Neuralink’s neural lace and other BCIs.
What if you could “see” directly into another person’s brain? The ability to read minds, referred to as telepathy, is yet another concept that’s abundant in science fiction, but a former Facebook executive says that we could all be capable of at least seeing inside someone else’s mind — provided that we’re equipped with the right technology.
Mary Lou Jepsen was the head of display technology at Oculus before founding her own startup, Openwater. The company’s goal, while ambitious, is in theory quite simple: “to create a wearable to enable us to see the inner workings of the body and brain at high resolution.” In short, telepathy courtesy of a brain-computer interface (BCI) — a wearable device that works like a magnetic resonance imaging (MRI) machine.
“I don’t think this is going to take decades,” Jepsen said of the tech’s development, during an interview with CNBC. “I think we’re talking about less than a decade, probably eight years until telepathy.” Her company plans to make a very limited number of prototypes available to their early access partners by next year.
Human + Machine
Wearable MRI technology could be quite an asset in terms of disease diagnosis and treatment. With just one quick look, a physician could see what’s happening inside a person’s brain, or elsewhere in the body. Of course, this raises many questions and concerns about privacy, which Jepsen says the company is working on. “We’re trying to make the hat only work if the individual wants it to work, and then filtering out parts that the person wearing it doesn’t feel it’s appropriate to share,” she said.
Royal College of Art (RCA) graduate student Dani Clode wants to change the way that people think about prosthetics, and she’s designed and created The Third Thumb to make that happen. Stick with me, it’s really cool.
The Third Thumb is a 3D-printed prosthetic that allows you to do whatever you’d normally do with an opposable thumb, but with an extra one. “The origin of the word ‘prosthesis’ meant ‘to add, put on to,’ so not to fix or replace, but to extend,” Clode told Dezeen. “The Third Thumb is inspired by this word origin, exploring human augmentation and aiming to reframe prosthetics as extensions of the body.”
When bioengineering students sit down to take their final exams at Stanford University, they are faced with a moral dilemma as well as a series of grueling technical questions designed to sort the intellectual wheat from the less competent chaff:
If you and your future partner are planning to have kids, would you start saving money for college tuition, or for printing the genome of your offspring?
The question is a follow-up to “At what point will the cost of printing DNA to create a human equal the cost of teaching a student at Stanford?” Both questions refer to the very real possibility that it may soon be affordable to print whatever stretch of DNA you desire, using genetic sequencing and a machine capable of synthesizing the four building blocks of DNA — A, C, G, and T — in whatever order you choose.
The answer to the time question, by the way, is 19 years, given that the cost of tuition at Stanford remains at $50,000 and the price of genetic printing continues the 200-fold decrease that has occurred over the last 14 years. Preliminary work has already been done; a team led by Craig Venter created the simplest life form ever known last year.
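The crossover arithmetic above can be sketched as a compound-decline calculation. Note that the article never states today's genome-printing cost, so the starting figure below is a hypothetical value back-solved to land on the article's 19-year answer:

```python
# Back-of-envelope sketch of the "19 years" claim.
# Assumptions (illustrative only):
#   - tuition holds steady at $50,000
#   - printing cost keeps falling 200-fold every 14 years
#   - start_cost is a made-up present-day genome-printing cost,
#     chosen so the crossover matches the article's 19-year figure
TUITION = 50_000
start_cost = 66_000_000

annual_factor = 200 ** (1 / 14)  # ~1.46x cheaper each year

years = 0
cost = start_cost
while cost > TUITION:
    cost /= annual_factor
    years += 1

print(years)  # 19 under these assumptions
```

The key input is the rate: a 200-fold drop every 14 years means roughly a 46 percent price cut compounding every single year.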
The Ethics of Changing DNA
Stanford’s moral question, though, is a little trickier. The question is part of a larger conundrum concerning humans interfering with their own biology; since the technology is developing so quickly, the issue is no longer whether we can or can’t, but whether we should or shouldn’t. The debate has two prongs: gene editing and life printing.
The question of printing life is similar in some respects; rather than altering organisms to have the desired genetic characteristics, we could print and culture them instead — billions have already been invested. However, there is the additional issue of “playing God” by sidestepping the methods of reproduction that have existed since the beginning of life. Even if the ethical issue of creation were answered adequately, there are the further questions of who has the right to design life, what the regulations would be, and the potential restrictions based on cost; if the technology is too pricey, it could be reserved for the rich.
It is vital to discuss the ethics of gene editing in order to ensure that the technology is not abused in the future. Stanford’s question is praiseworthy because it makes today’s students, who will most likely be spearheading the technology’s developments, think about the consequences of their work.
Researchers at the Baylor College of Medicine have found the key to longevity in Caenorhabditis elegans (C. elegans) worms — and maybe, someday, humans. The team noticed that genetically identical worms would occasionally live for much longer, and looked to their gut bacteria to find the answer. They discovered that a strain of E. coli with a single gene deletion might be the reason that its hosts’ lives were being significantly extended.
This study is one among a number of projects that focus on the influence of the microbiome — the community of microbes which share the body of the host organism — on longevity. Ultimately, the goal of this kind of research is to develop probiotics that could extend human life. “I’ve always studied the molecular genetics of aging,” Meng Wang, one of the researchers who conducted the study, told The Atlantic. “But before, we always looked at the host. This is my first attempt to understand the bacteria’s side.”
Even in cases like this, where it seems fairly obvious that the microbiome is influencing longevity, parsing out the details of how and why this happens among a tremendous variety of chemicals and microbe species is extremely complex. The team, in this case, was successful because they simplified the question and focused on a single relationship.
An 11-year-old boy has scored higher on an IQ test than the estimated scores of Albert Einstein and Stephen Hawking. The so-called genius benchmark is set at 140, and Arnav Sharma gained a score of 162 – the maximum possible result on the paper. That is a whole two points higher than the scores attributed to German-born theoretical physicist Einstein and celebrated cosmologist Hawking.
Arnav, from Reading, passed the infamously difficult test a few weeks back with zero preparation and had never seen what a typical paper looked like before taking it. His mark in the exam, which primarily measures verbal reasoning ability, puts him in the top one per cent of the nation in terms of IQ level.
“The Mensa test is quite hard and not many people pass it so do not expect to pass,” Arnav confidently told The Independent from his home in Reading.
“I took the exam at the Salvation centre and it took about two and a half hours,” he recalled. “There were about seven or eight people there. A couple were children but the rest were adults. It was what I thought it would be.”
Arnav, who is of Indian origin, insisted he was not anxious before sitting the test, saying: “I had no preparation at all for the exam but I was not nervous. My family were surprised but they were also very happy when I told them about the result.”
Meesha Dhamija Sharma, his mother, said she kept her “fingers crossed” for the whole exam.
“I was thinking what is going to happen because you never know and he had never seen what a paper looks like,” she explained.
When pressed about when she first became aware of his abilities, Ms Sharma cast her mind back to Arnav’s toddler years in India. Despite being told he was exceptional, she refused to believe it at the time.
“At one-and-a-half years old I took him to India for a holiday to see his grandparents, his grandmother told me about him and said Arnav is going to do very well with his studies,” she said. “I did not trust her, I just thought she was saying it to make me happy, but she was right.”
It was not until he was two-and-a-half years old that she became aware of his mathematical prowess.
“He was counting up to more than 100. That was when I stopped teaching him because I came to know that there is no end to his numbers.”
When asked about whether there was anyone else in the family with an unusually high IQ, she could not recall anyone, adding: “His dad is quite clever as well but not as clever.”
Arnav, who attends Crossfields School on the outskirts of Reading, has been selected for Eton College and Westminster, both highly competitive and sought-after schools, with no preparation.
But it’s worth noting his talents are not restricted to numbers. Arnav used to have a passion for singing and dancing and reached the semi-finals of Reading’s Got Talent with a Bollywood dance act when he was eight.
Arnav, who also cites his hobbies as coding, badminton, piano, swimming and reading, also has an unusually good geographical knowledge, and can name all the capitals of the world.
A spokesperson for Mensa heaped praise on Arnav for the achievement, saying: “It is a high mark which only a small percentage of people in the country will achieve”.
Mensa is one of the most exclusive clubs in the world, as the only way to join is to prove your IQ is in the top two per cent of the nation.
It was founded in 1946 in Oxford by Lancelot Lionel Ware, a scientist and lawyer, and Roland Berrill, an Australian barrister, but the organisation later spread around the world.
Its mission is to “identify and foster human intelligence for the benefit of humanity”.
When train cards were invented, the idea was to make public transportation less of a hassle. Instead of taking out coins, you’d just access the train platform with a swipe of a pre-loaded card. However, cards can be misplaced, and that’s a problem Aussie biohacker Meow-Ludo Disco Gamma Meow-Meow — and yes, that’s his legal name — was determined to never face again.
Not satisfied with the (in)convenience of carrying an Opal card, Meow-Meow decided to have the card’s near-field communication (NFC) chip implanted just beneath the skin on the side of his left hand.
To that end, he removed the NFC chip from the Opal card and encased it in a bio-compatible plastic, resulting in an implant about 10 by 6 millimeters in size. The implantation procedure itself was performed by a piercing expert and took about an hour.
Meow-Meow urges anyone interested in following in his technological footsteps to be sure to do their research and be aware that the implant is considered a breach of Opal’s terms of service. However, he does seem satisfied with his body’s new addition. “It gives me an ability that not everyone else has, so if someone stole my wallet, I could still get home,” he told ABC News.
Our Cyborg Future
Implanted devices are, as Meow-Meow pointed out to ABC News, not as rare as some may think. Usually, they serve some medical purpose, such as chips for prosthetics or pacemakers. Meow-Meow himself has two other NFC implants aside from the Opal chip, including one he uses to store documents, like an implanted hard drive.
This mirrors a number of science fiction flicks, which feature implanted devices that serve as information storage, identity trackers, and health monitors, among other things. It certainly seems like the next step into the future, as numerous companies have been working on developing technologies that meld human beings with machines.
As chip implants become more advanced, they will undoubtedly become more common in the future, but for now, this Opal implant is at least making Meow-Meow’s daily commute less of a hassle.
The discovery of the gene-editing technology CRISPR came, in part, from Jennifer Doudna, a biochemist at the University of California, Berkeley. “It’s very profound,” she told NBC News. “It means that we can control human evolution now.”
With collaborator Emmanuelle Charpentier, Doudna was able to harness a curiosity in the DNA of certain bacteria and help turn CRISPR into the world’s most accessible gene-editing technology. The discovery is detailed in Doudna’s new book titled “A Crack in Creation.”
The Dangers of Gene-Editing
In the book, Doudna says that the days of costly, complicated processes to edit DNA are over. We’re now in an age of CRISPR, and it’s a profoundly simple technique. Doudna compares CRISPR to word-processing software that allows someone to correct a typo in a hefty document.
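Doudna’s word-processor metaphor can be made literal with a toy string edit. To be clear, this is only the metaphor: real CRISPR-Cas9 cuts DNA where a guide RNA matches and relies on cellular repair machinery, not string substitution. The sequences below, a beta-globin fragment carrying the sickle-cell variant, are used purely as an example:

```python
# The "correct a typo in a document" metaphor, made literal:
# find a target sequence in a genome string and swap in a repair.
# Illustrative only -- actual CRISPR editing is a biochemical process,
# not text replacement.
genome = "ATGGTGCACCTGACTCCTGTGGAGAAGTCT"  # fragment with sickle variant
target = "GTGGAG"   # the "typo" to locate
repair = "GAGGAG"   # the corrected sequence

edited = genome.replace(target, repair, 1)  # edit only the first match
print(edited)
```

The `replace(..., 1)` limit mirrors an important real-world concern: an edit should hit its intended site once, and off-target matches elsewhere in the genome are exactly what makes the biology harder than the metaphor.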
At the Innovative Genomics Institute in Berkeley where Doudna is executive director, teams of scientists are working to find new approaches to treating disorders like cancer, sickle cell anemia, and some forms of blindness. But CRISPR isn’t limited to Doudna’s lab. Its low cost and ease of use have helped the technology proliferate to labs all over the world.
At UT Southwestern in Dallas, Dr. Eric Olson is chasing a cure for Duchenne muscular dystrophy. At an insectary at UC Irvine, Dr. Anthony James has created mosquitoes that can pass on malaria resistance to some of their offspring. At the Salk Institute in La Jolla, Calif., CRISPR is being used to pursue a gene-engineered pig with transplantable human organs.
But with the thrill of discovering such a powerful tool came a somber realization. Doudna describes a nightmare: “Hitler was leaning forward and looking at me very intently. And he said, ‘So please tell me about the CRISPR technology.’ And I just felt this chill running down my back.”
Doudna knows better than anyone that with the power to alter evolution comes a daunting responsibility: Make sure it doesn’t get misused.
This Gene-Editing Breakthrough Could Change Life on Earth was originally published by NBC Universal Media, LLC on June 15, 2017 by Munir Atalla and Brenda Breslauer. Copyright 2017 NBC Universal Media, LLC. All rights reserved.
Computing giant Intel has announced a multi-year deal with the International Olympic Committee (IOC), stretching through the 2024 Olympic Games, to bring the company’s new technologies to the upcoming events. According to Advertising Age, Intel is hoping to bring “virtual reality, 360-degree video, artificial intelligence, and drones, to enhance the Olympic Games.”
The partnership aims to attract a younger demographic and stem the loss of overall viewership. The president of the IOC, Thomas Bach, said: “There are many young people that are living a digital life. So we have to go where they are in the digital world, in their virtual reality.”
If humanity were to appoint a general in our war against aging, Aubrey de Grey would likely earn the honor. The British author and biomedical gerontologist has been on the front line for years, researching ways to free the world of age-related disease and, ultimately, extend human life indefinitely.
From the SRF Research Center (SRF-RC) in Mountain View, CA, foundation scientists conduct proof-of-concept research with the goal of addressing the problems caused by aging. They focus on repairing damage to the body at the molecular level, and their work is helping advance the field of rejuvenation biotechnology.
SRF-RC teams are currently focusing on two equally complex-sounding research projects, one centered on allotopic expression (a way to bypass the harmful effects of age-caused mitochondrial mutations) and the other on telomerase-independent telomere elongation (a little-researched process by which some cancer cells overcome mortality).
Either project could lead to major breakthroughs in anti-aging treatments, but as de Grey explains to Futurism, the path to immortality doesn’t just run through the science lab.
No Money, Big Problems
While the research being conducted at the SRF-RC is far from simple, de Grey claims DNA mutations and cancer cells aren’t the biggest hurdles to anti-aging breakthroughs: “The most difficult aspect [of fighting age-related diseases] is raising the money to actually fund the research.”
The nature of most science research is exploratory. Researchers don’t know that what they’re working on is going to yield the results they expect, and even if it does, turning basic research into income is no easy task. To support their work, most have to rely on funding from outside sources, such as government grants, educational institutions, or private companies.
“[Aging] is a medical problem that needs to be addressed.”
The amount of funding a specific field receives varies wildly. For example, the National Institutes of Health (NIH) reports that it allocated $5.5 billion for cancer research in 2016, while amyotrophic lateral sclerosis (ALS) research received a comparatively paltry $52 million. However, raising funds for anti-aging research has proven even more difficult, according to de Grey.
“It’s still an incredibly hard sell,” he claims. “We have very limited resources. We only have about 4 million dollars a year to spend, and so we spent it very judiciously.”
That money isn’t going to just the two in-house projects, either. The SENS Foundation funds anti-aging research at institutions across the globe and provides grants and internships for students, so raising money to support those endeavors is key to continued success in its fight against aging.
A Radical Disconnect
Essential to raising money for anti-aging research is ensuring that those with the funds understand why it’s worth the investment — a not-so-easy task given current misconceptions about aging.
In 2015, eight major aging-focused organizations, including AARP, the American Geriatrics Society, and the American Federation for Aging Research (AFAR), released a report detailing what they call the many “notable gaps” that exist between expert perspectives on aging and the public’s perception of the process. If the public isn’t well informed on aging, it’s even less knowledgeable about anti-aging.
Fifty-eight percent of respondents in a 2013 Pew Research study said they had never even heard of radical life extension before. When asked if they would undergo treatments that would allow them to live to the age of 120 or older, the majority of those surveyed said they would not, and 51 percent thought such treatments would be “bad for society.”
“There is still a huge amount of resistance to the logic that aging is bad for you and that it’s a medical problem that needs to be addressed,” explains de Grey. “It’s really, really extraordinary to me that it’s so hard to get this through to people, but that is the way it is.”
The SENS Foundation devotes a significant portion of its resources to combating this disconnect. In 2014, it dedicated more than $1 million to outreach and education, spreading the gospel of anti-aging research through speaking engagements, newsletters, press coverage, conferences, and other forms of community engagement.
Once the field is properly funded and supported, de Grey thinks researchers will have a clear path forward to “curing” the problem of aging:
Aging is not mysterious. We understand it pretty well. It’s not even a phenomenon of biology. It’s more a phenomenon of physics. Any machine with moving parts is going to damage itself…and the result is inevitably going to be that eventually the machine fails. It’s the same for the human body as it is for a car, for example, and if we think about it that way, it becomes pretty easy to actually see what to do about it.
The benefits of ending the problem of aging would be tremendous. Not only would we be living longer, we’d be living healthier for longer.
Without the debilitating diseases and disorders that have become synonymous with old age — vision loss, dementia, muscle weakness — we’d have extra years or even decades to do all the things we loved to do when we were younger: travel, play sports, spend time with our loved ones. We’d avoid the personal financial burden associated with treating the side effects of aging, and some argue that governments would even see a monetary benefit from radical life extension, as two-thirds of Social Security expenditures for retirees currently go toward healthcare.
Anti-aging proponents like de Grey will be the people leading us toward that figurative fountain of youth, but you shouldn’t start living like you’re immortal just yet. “We have made very significant breakthroughs in some of the most difficult areas,” says de Grey. “I’m fairly proud of what we’ve achieved so far, though, of course, we still have a long way to go.”
Cyborgs: humans who have been merged with machines; a hybrid of sorts. What was once the subject of far-out science fiction has now entered reality as a medical tool. From implants to robotics, there is a whole host of emerging technologies that aim to treat health conditions and aid those suffering from different disabilities by turning people into, technically, cyborgs.
It might seem to be going too far to use the term cyborg when discussing, for instance, new versions of prosthetic limbs. However, carbon fiber and titanium prostheses are now commonplace, and most artificial limbs are fully functional. Video demonstrations of modern prosthetic arms show remarkable dexterity and capability, and prostheses have advanced even further since, with researchers going so far as to create robotic hands that can be controlled with one’s brain — and that have a sense of touch.
Artificial limb technologies like the “blades” used by Paralympians are even so advanced that some have started to discuss whether or not they are more capable than organic limbs. But artificial limbs aren’t the only advancements in so-called “cyborg tech.” One Swedish company is implanting its employees with microchips to allow them to do things like access doors with the wave of a hand instead of with a key. Elon Musk thinks that his neural lace could actually make human beings smarter. Many are experimenting with the many possibilities of merging humankind with machines.
A Cyborg Future
The authors of a recent paper in Science Robotics discussed the potential issues with the future of such technologies:
There needs to be a debate on the future evolution of technologies as the pace of robotics and AI is accelerating. It seems certain that future assistive technologies will not only compensate for human disability but also drive human capacities beyond our innate physiological levels. The associated transformative influence will bring on broad social, political, and economic issues.
Once we officially cross that line, once the technologies that we create to assist those with difficulties and disabilities begin to advance human capabilities beyond what is biologically possible, we will have a teeming variety of moral and practical issues to deal with. Many believe that this will be humanity’s “next step in evolution.” Indeed, if we are ever going to colonize Mars and expand our reign in the Solar System, that might be a necessary evolution. Whatever moral and ethical quandaries may exist, it might not be possible for us to take such large strides without becoming cyborgs.
So, more likely than not, the day will come and we will cross that line. Will cyborg humans have the same rights and be bound by the same laws as biologically ordinary citizens? Will cyborgs be vulnerable to hacking and manipulation? Will warfare forever change with the possible advancement of military exoskeletons? The list goes on and on. And so, while we might not all be walking around as half-machines just yet, it might be a good idea to plan ahead.
While smartphone technology has already made many of us cyborg-adjacent, there are some who are truly pioneering the world of the future. Take filmmaker Rob Spence of Toronto, for example. He lost his eye due to an unfortunate mishap involving his nine-year-old self, a pile of cow dung, and a shotgun. The gun kicked back after he fired it and severely damaged his eye.
While he did not lose all sight in the eye, he was declared legally blind. Years later, the eye began to physically deteriorate, prompting doctors to replace the eye. Instead of going for the traditional glass eye, Spence worked with a friend to build an eye camera. Spence’s eye can record up to 30 minutes of footage before the battery dies. The eye is not connected to the optic nerve so he cannot use it to see.
In preparation for the release of the video game sequel Deus Ex: Human Revolution, game developer Square Enix recruited the filmmaker to make a documentary of Spence interviewing other recipients of high-tech augmentation. He even included some of the eye camera footage in the doc.
The Age of Enhancement
While the word “cyborg” is still more closely aligned with science fiction, more and more people are augmenting their bodies with technology. Many of these augmentations correct limitations, like Star Wars-inspired prosthetic arms or exoskeletons designed to give paralyzed people improved functionality. However, the next wave of available augmentations could focus on enhancing human capabilities, both physically and cognitively, beyond what is biologically possible. Tech entrepreneurs like Elon Musk and Bryan Johnson are working on systems that would integrate the human brain with computers, making the user smarter.
However, each new capability could bring its own ethical concerns. For example, Elon Musk’s neural lace isn’t likely to be cheap. Could reserving higher levels of cognitive ability for those who can afford the technology lead to a massive widening of societal gaps? This and other ethical conundrums must be carefully considered as we quickly approach this Age of Enhancement.
The idea of the DermalAbyss project is that an individual would have the ink tattooed onto their body in the pattern of their preference. The tattoo would then change color according to the amount of the activating agent present. A tattoo using the ink designed to respond to glucose levels, for example, would change color from blue to brown as the person’s blood sugar level rises.
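As a rough sketch of how such a readout might be interpreted, the snippet below maps a glucose reading to a color interpolated between blue and brown. The thresholds and RGB values are invented for illustration and are not the DermalAbyss ink’s actual calibration.

```python
def ink_color(glucose_mg_dl, lo=70.0, hi=180.0):
    """Map a glucose reading to a blue-to-brown RGB color.

    The thresholds and colors here are illustrative, not the
    DermalAbyss ink's actual chemistry or calibration.
    """
    blue, brown = (0, 0, 255), (139, 69, 19)
    # Fraction of the way from the low to the high threshold, clamped to [0, 1]
    t = max(0.0, min(1.0, (glucose_mg_dl - lo) / (hi - lo)))
    return tuple(round(b + t * (r - b)) for b, r in zip(blue, brown))

print(ink_color(70))   # fully blue at or below the low threshold
print(ink_color(200))  # fully brown at or above the high threshold
```

The real ink responds chemically and continuously; the point of the sketch is only that a single analyte level drives a single, readable color axis.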
A Medical Skin Interface
The technology is an ingenious intersection of the body-art, medical, and biosensor sectors. While the researchers have no immediate plans to release their ink to the public, the potential of the project is huge, and others could possibly explore and expand upon it in the future.
Aside from the initial tattooing process, the researchers’ skin interfaces are non-invasive, unlike the methods currently used to monitor diabetes. They’re also much harder to damage than current wearable technology.
As stated on the project website, the technology could potentially be used to measure far more than just the levels tested in the study: “It could be used for applications in [continuous] monitoring, such as medical diagnostics, quantified self, and data encoding in the body.”
This isn’t the only research exploring innovative uses of tattoos — others have found ways to link body ink to sound files or use it to control smartphones — but this research is the first to explicitly explore the medical possibilities of inked biosensors. Though just a proof of concept right now, DermalAbyss could be offering us a glimpse into the future of health monitoring.
In 1905, an Ohio farmer survived a railroad accident that cost him both of his legs. Two years later, he founded the Ohio Willow Wood company, using the namesake timber to hand-carve prosthetic limbs. The company grew, surviving the Great Depression and a fire that destroyed the plant, and still thrives today in rural Ohio. Few who work there now might remember the curious footnote in the company’s history that occurred during World War II, when the rebuilt factory was diversified to build parts for PT boats and B-17 bombers.
Today, there is some irony in a company that specializes in prosthetic limbs building parts for the very war machine that increases demand for prosthetic limbs. Indeed, the tragedy of war has pushed prosthetics researchers to work ever harder to help service members and veterans who have lost limbs.
That has made the U.S. Department of Veterans Affairs a key player in prosthetics development and technological innovation. But what is created for service members and veterans has benefits far beyond them, helping nearly two million Americans – civilian children, elderly people and young adults with amputations – maximize their mobility.
A Key Funding Source
As a biomedical engineer specializing in prosthetics, I have spent several years reviewing grant proposals seeking VA funding for research on prosthetic limbs.
The federal government has long played a vital role in advancing prosthetic technologies. Before the 1980s, prosthetic feet were adequate for standing and walking, but very limited for more intense activities involving running and jumping. Veterans Administration Rehabilitation Research and Development funds were used to develop the Seattle Foot. That device and its carbon-fiber contemporaries paved the way for “blade”-style prostheses seen in the Paralympics.
For prosthetics researchers, funding from traditional sources such as the National Institutes of Health and the National Science Foundation has become remarkably difficult to obtain, at least in part because of competition. Consequently, many researchers have turned their attention to soldiers and veterans. In 2014-2015, the Congressionally Directed Medical Research Program funded 18 proposals in Orthotics and Prosthetics Outcomes. But that funding is getting more competitive too: 98 proposals were rejected.
Connecting With the Body
All this work is, generally speaking, focused on transforming a prosthesis from an external tool used by a person to an integrated replacement that becomes a part of a person. Much of the research involves taking small steps toward reproducing the grand complexity and robust design of human anatomy. We have not yet achieved the flashy and functional capability of a real-life Luke Skywalker with cranial nerves attached directly to motors in a robotic hand. But researchers have made important progress through intermediate steps.
A process called targeted muscle reinnervation can connect nerves carrying messages from the brain, like “close hand,” to a muscle over which a set of electrodes are placed. Those electrodes in turn send the signal to the motorized hand to close. Sensory feedback is being developed in a similar manner. While these efforts are not yet ready for everyday community use, they represent crucial progress.
Consider also a CDMRP-funded project at the Rehabilitation Institute of Chicago and Vanderbilt University. The lower-limb prostheses used are not just passive spring-like materials: They’re active, motorized joints. Motors require careful control, and the project headed by neuroengineering specialist Levi Hargrove has attempted to learn from the amputee’s muscles when she wants to transition from walking to, say, walking up stairs or down a ramp. Just like your phone’s voice recognition better understands your voice over time, Hargrove’s control algorithm allows the motorized prosthesis to make fewer errors in ambulation.
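Hargrove’s actual control algorithm is far more sophisticated, but the core idea of pattern recognition over muscle signals can be sketched with a toy nearest-centroid classifier: summarize each locomotion mode by the average of its training features, then pick the closest mode for each new reading. All feature values and mode labels below are synthetic stand-ins, not data from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: two EMG-derived features for two locomotion
# modes (labels and feature values are illustrative only).
modes = {"level_walking": [0.2, 0.5], "stair_ascent": [0.8, 0.3]}
train = {m: rng.normal(c, 0.05, size=(50, 2)) for m, c in modes.items()}

# Nearest-centroid "pattern recognition": each mode is summarized by the
# mean of its training features.
centroids = {m: X.mean(axis=0) for m, X in train.items()}

def predict_mode(features):
    """Pick the locomotion mode whose centroid is closest to the features."""
    return min(centroids, key=lambda m: np.linalg.norm(features - centroids[m]))

print(predict_mode(np.array([0.21, 0.49])))  # level_walking
print(predict_mode(np.array([0.79, 0.31])))  # stair_ascent
```

The “learning over time” the article describes corresponds to updating these centroids (or a richer model) as more labeled examples of the user’s own muscle activity accumulate.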
Clearly, projects like Hargrove’s stand to benefit much broader populations than just soldiers and veterans. My own research focuses on children with limb loss, and I see so-called “civilian spinoffs” every day. In our Center for Pediatric Locomotion Sciences at Georgia State University, we’re modeling the material and structural properties of pediatric prosthetic feet. Many of those components can be traced back to that original VA-funded Seattle Foot. Across town at the Shepherd Center, a renowned hospital specializing in brain and spinal cord injury, robotic exoskeletons help paralyzed individuals move their limbs and remap neural pathways, a project with connections to VA prosthetic research.
Though they won’t get the public recognition of the Jeep or GPS, these civilian spinoffs from military-funded research in prosthetics will have long-term benefits for people of all ages, and they’ll feel the results every day.
For those eager to give their own brains a boost, this is good news. Various communities have sprung up to share tips and tricks on how to test the technique on themselves, often using self-rigged stimulators powered by 9-volt batteries.
Scientists and brain enthusiasts aren’t the only people interested. The military has also been eager to support projects involving brain stimulation with the hope that the technology could one day be used to help soldiers suffering from combat-induced memory loss.
But here’s the catch: the end results are inconsistent at best. While some people swear by the positive effects anecdotally, others report nothing but a nasty scalp burn from the electrodes.
We all have good days when our brains feel sharp and bad days when the “brain fog” never lifts. This led scientists to wonder: because electrical stimulation directly regulates the activity of the brain’s neural networks, what if it gives them a boost when they’re faltering but disrupts their activity when they’re already performing at their peak?
In a new study published in “Current Biology,” researchers tested the idea using the most direct type of brain stimulation — electrodes implanted into the brain. Compared to tDCS, which delivers currents through electrodes on the scalp, implanted ones allow much higher precision in controlling which brain region to target and when.
The team collaborated with a precious resource: epilepsy patients who already have electrodes implanted into their hippocampi and surrounding areas. These brain regions are crucial for memories about sequences, spaces and life events. The electrodes serve a double purpose: they both record brain activity and deliver electrical pulses.
The researchers monitored the overall brain activity of 102 epilepsy patients as they memorized 25 lists of a dozen unrelated words and tried to recall them later on.
For each word, the researchers used the corresponding brain activity pattern to train a type of software called a classifier. In this way, for each patient, the classifier eventually learned what types of brain activity preceded successfully remembering a word and what types predicted failed recall. Using this method, the scientists objectively defined a “foggy” brain state as the pattern of brain activity that preceded an inability to remember a word, and an “on the ball” state as the pattern of activity common before successful recall.
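A minimal sketch of this kind of classifier, using hand-rolled logistic regression on synthetic stand-in features (the real study used far richer neural recordings and per-patient models), might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: one feature vector of "neural activity" per studied
# word, labeled 1 if the word was later recalled. Real features would be
# e.g. spectral power across many electrodes; these are invented.
n, d = 400, 5
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Logistic-regression classifier trained by gradient descent: it learns
# which activity patterns precede successful vs. failed recall.
w = np.zeros(d)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted recall probability
    w -= 0.1 * (X.T @ (p - y)) / n          # gradient step on the log loss

accuracy = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

In the study, only patients whose classifier performed above chance moved on to the stimulation phase; low classifier output plays the role of the “foggy” state.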
Next, in the quarter of patients for whom the classifier performed above chance, the researchers zapped their brains as they memorized and recalled a new list of words. As a control, they also measured memory performance without any stimulation, and the patients were asked whether they could tell when the electrodes were on (they couldn’t).
Here’s what they found: when the zap came before a low, foggy brain state, the patients scored roughly 12 to 13 percent higher than usual on the recall task. But if they were already in a high-performance state, quite the opposite occurred. Then the electrical pulse impaired performance by 15 to 20 percent and disrupted the brain’s encoding activity — that is, actually making memories.
Moving Beyond Random Stimulation
This study is notably different from those before. Rather than indiscriminately zapping the brain, the researchers showed that the brain state at the time of memory encoding determines whether brain stimulation helps or hinders. It’s an invaluable insight for future studies that try to tease apart the effects of brain stimulation on memory.
The next big challenge is to incorporate these findings into brain stimulation trials, preferably using noninvasive technologies. The finding that brain activity can predict recall is promising and builds upon previous research linking brain states to successful learning. These studies may be leveraged to help design “smart” brain stimulators.
For example: picture a closed-loop system, where a cap embedded with electrodes measures brain activity using EEG or other methods. Then the data go to a control box to determine the brain state. When the controller detects a low functioning state, it signals the tDCS or other stimulator to give a well-timed zap, thus boosting learning without explicit input from the user.
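The decision logic of such a closed-loop system is simple to sketch. In the toy version below, the classifier output is a score in [0, 1], and stimulation fires only when the score falls under a threshold; the threshold value is made up for illustration, not taken from the study.

```python
def closed_loop_step(brain_state_score, threshold=0.4):
    """Decide whether to stimulate on this control cycle.

    brain_state_score: classifier output in [0, 1], where low values
    mean a predicted "foggy" (low-encoding) state. The 0.4 threshold
    is a hypothetical value for illustration.
    """
    return brain_state_score < threshold  # stimulate only when faltering

# Simulated stream of classifier readings from an EEG cap
readings = [0.8, 0.35, 0.6, 0.1]
decisions = [closed_loop_step(s) for s in readings]
print(decisions)  # [False, True, False, True]
```

The key design point, per the study’s finding, is the conditional: stimulating unconditionally would impair performance whenever the brain was already in a high-functioning state.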
Of course, many questions remain before such a stimulator becomes reality. What are the optimal number and strength of electrical pulses that best bolster learning? Where should we place the electrodes for best effect? And what about unintended consequences? A previous study found that boosting learning may actually impair a person’s ability to automate that skill — quickly and effortlessly perform it — later on. What other hidden costs of brain stimulation are we missing?
I’m not sure if I’ll ever be comfortable with the idea of zapping my brain. But this new study and the many others sure to follow give me more confidence: if I do take the leap into electrical memory enhancement, it’ll be based on data, not on anecdotes.
We are now poised at a time in history when brain-computer interfaces (BCI) are the obvious next step. On the brink of the automation age, we face the daunting prospect of artificial intelligence (AI) becoming more capable than ourselves. Now that touchscreens and voice recognition are part of our everyday devices, it’s time for us to be able to control our electronics with our minds, and tech companies like Facebook are itching to make it happen. And in this modern age, we are also edging closer to the elimination of various diseases, and we long for life without dementia, brain damage, and neurological diseases. The BCI provides us with a way to maintain control of our world and our electronics, to heal ourselves—and maybe even allow humanity itself to level up.
Elon Musk’s ambitious solution is an easily injectable neural electrode, a neural lace, which would be able to both stimulate and interpret electrical activity in the brain. While this hasn’t yet been developed, researchers are already stimulating electrical activity in the human brain using transcranial direct current stimulation, or tDCS. This is being used to treat consciousness disorders, and to help patients with minimal consciousness communicate. Researchers also have a prototype BCI in use, which people with locked-in syndrome can use to communicate.
These are simply the first steps on a more ambitious journey towards the high-level enhancement of the human brain, and by extension, of humanity itself. As envisioned, Musk’s neural lace is a far more transformative solution than tDCS could ever be; tDCS simply works with the neurons that are there. The neural lace would form an entirely new layer of the brain.
Additionally, tDCS works incrementally, slowly training more and more neurons in the brain to fire more readily and more often. The net benefit comes from the sheer numbers; if enough brain cells fire, you will see a result. In a person with minimal consciousness, you might see responsiveness; in a person with normal capacity, you may see improved intelligence, sharper problem-solving abilities, enhanced creativity, or other benefits. There is certainly a demand for this kind of enhancement, as the current nootropics craze proves — and nootropics fans are often also biohackers, willing to try physical solutions like BCIs.
A New Humanity?
So, could tDCS bring forth a new humanity? Or will neural lace or something similar be required for that to be possible? If tDCS is a step toward neural lace, does the difference really matter?
While tDCS has notable potential for treating neurological diseases, its potential for enhancing human intelligence is somewhat murkier. This is especially true in the context of AI, which, alongside the need to retain human rights, is the defining condition pushing for neural lace. The kinds of problems that tDCS can tackle, beyond neurological disease or paralysis, include the need to learn faster, upload skills, or retain plasticity.
To match wits with AI in any meaningful way, a more radical amplification of the brain will be necessary. In other words, the basic idea behind the neural lace is that we can’t beat AI: we’re going to need to join it instead, becoming part of a human/machine merger. This is beyond the realm of tDCS, delving into a new realm of cyborg humanity.
One such project is the topic of a study by researchers from the University of Washington (UW) that was just published in Nature Communications. They have developed a new method of turning cells into computers that process information digitally instead of following their usual macromolecular processes. They did so by building cellular versions of logic gates commonly found in electric circuits.
The team built their NOR gates, digital logic gates that pass a positive signal only when their two inputs are negative, in the DNA of yeast cells. Each of these cellular NOR gates was made up of three programmable DNA stretches, with two acting as inputs and one as an output. These specific DNA sequences were targeted using CRISPR-Cas9, with the Cas9 proteins serving as the molecular gatekeeper that determined if a certain gate should be active or not.
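In digital-logic terms, a NOR gate outputs high only when both inputs are low, and NOR alone is enough to build every other gate, which is why it makes a convenient primitive for cellular circuits. Here is a plain-software illustration of that universality (a sketch of the logic, not a model of the yeast implementation):

```python
def NOR(a, b):
    """Output is high only when both inputs are low."""
    return not (a or b)

# NOR is functionally complete: the other basic gates can be wired
# from NOR gates alone, just as the UW team composes cellular gates.
def NOT(a):    return NOR(a, a)
def OR(a, b):  return NOT(NOR(a, b))
def AND(a, b): return NOR(NOT(a), NOT(b))

# Truth table for a single NOR gate
for a in (False, True):
    for b in (False, True):
        print(a, b, NOR(a, b))
```

In the yeast cells, the “wiring” is done by routing one gate’s output DNA sequence to serve as another gate’s input, with CRISPR-Cas9 acting as the gatekeeper.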
Controlling Cellular Function
This UW study isn’t the first to build circuits in cells, but it is the most extensive one to date, with seven cellular NOR gates in a single eukaryotic cell. This added complexity puts us one step closer to transforming cells into biological computers with a number of potential medical applications.
“While implementing simple programs in cells will never rival the speed or accuracy of computation in silicon, genetic programs can interact with the cell’s environment directly,” senior author Eric Klavins explained in a press release. “For example, reprogrammed cells in a patient could make targeted, therapeutic decisions in the most relevant tissues, obviating the need for complex diagnostics and broad spectrum approaches to treatment.”
If given the ability to “hack” our biology in this way, we could potentially engineer immune cells to respond to cancer markers or cellular biosensors to diagnose infectious diseases. Essentially, we’d have an effective way to fight diseases on the cellular level, ushering in a new era in human evolution.
Very soon, we could witness an acceleration in human evolution unlike any seen before, argues Caleb Scharf, director of astrobiology at Columbia University. He believes that new technologies will change the evolutionary currency from soft, organic matter to machines that exceed these fleshy restraints and can be programmed or upgraded.
“We may be witnessing the first new origin event,” Scharf said in a video interview with Tech Insider. A sprint period in the endurance race of evolution may already be underway thanks to our interaction with the technology we are creating — and that sprint could get even faster when we consider the technologies around the corner.
Some developments may already be here. The internet, Scharf argues in an Aeon opinion article, could be the beginning of a new state of being for the human species.
“Part of our collective wisdom is now uploaded, placed in an omnipresent cloud of data,” Scharf wrote. “Where that’s taking us to isn’t obvious, however. If anything, we could be heading for a hive-mind state, a collective organism more akin to a termite colony or a set of squirmy naked mole-rats.”
But Scharf also argues in his opinion piece that the mechanization of the biological could be part of a much wider and longer process of life moving between biological and machine states as it develops. He wrote, “Someday we might decide that the future of intelligence on Earth requires biology, not machine computation.”
The idea of moving towards a definite end point in evolution is flawed: evolution is not a process with a roadmap, but a process of adaptation to an environment, whatever that environment may be. Those who can survive in the new environment live to pass on their genes — those who cannot, don’t. Therefore, we must not ask ourselves what technologies are making us bigger, stronger, or faster, but which technologies give us the ability to adapt to whatever characteristic is required for survival.
The first of these is bionics. While today’s bionics aim to mimic the functionality of a human limb — and are getting remarkably close to doing so — once this goal is achieved, bionic engineers may aim to exceed it. Each of us may face an interesting dichotomy: to be inferior but subscribe to an idea of man that defines “human” according to the natural body as we know it today, or to be a superior transhuman who integrates technology. If limbs become robotic, they also become customizable and can therefore be adapted to whatever the environment requires.
The second of these is brain-computer interfaces (BCIs). These technologies, which allow the brain to talk directly to computers, open the door for the human mind to sit in a multitude of robotic forms. Should the environment become incredibly hot, for example, we could engineer a robotic body that could stand the heat, and then insert a human brain that could control it using a BCI.
Aside from the bodily possibilities, we may also consider the potential for human intelligence. BCIs are not just one-way interactions; they can also allow computers to impart information to brains. This gives us another golden key to adaptability: the possibility of having knowledge uploaded that would allow us to predict environmental changes and engineer our own means of surviving them.
While this may seem outlandish, we must remember that the transfer of information is the central tenet of both Facebook’s and Elon Musk’s BCIs — although they are twisting this towards communication rather than information. Regardless of how we initially use them, these technologies will likely usher us into a new era of human evolution.
Robots have taken over difficult, dangerous and repetitive physical tasks, improving factory safety, worker comfort and product quality. The next phase of labor innovation will do the same thing for cognitive work, removing mentally stressful and repetitive tasks from people’s daily routines.
Human work will become more versatile and creative. Robots and people will work more closely together than ever before. People will use their unique abilities to innovate, collaborate and adapt to new situations. They will handle challenging tasks with knowledge-based reasoning. Machines enabled by the technologies that are now becoming commonplace – virtual assistants like Siri and Alexa, wearable sensors like Fitbits and smartwatches – will take care of tedious work details.
People will still be essential on the factory floors, even as robots become more common. Future operators will have technical support and be super-strong, super-informed, super-safe and constantly connected.
We call this new generation of tech-augmented human workers, both on factory floors and in offices, “Operator 4.0.” There are several types of enhancements available, which can be used individually or in combination to put humans at the heart of this technological revolution.
One straightforward enhancement would let workers wear robotic exoskeletons to enhance their strength. A “super-strength operator” could let a human truly control the physical power of a large robot. In today’s warehouses and construction sites, workers risk injury and exhaustion by handling heavy objects themselves. Or they are forced to compromise, using a more powerful tool with less adaptability, like a forklift.
The benefits go well beyond the workplace. A worker in a powered robotic suit could, of course, easily handle extremely heavy objects without losing the flexibility of natural human movements. The worker would also be far less likely to suffer severe injuries from accidents or overwork. And at the end of the day, a super-strength worker could take off the exoskeleton and still have energy to play with the kids or spend time with friends.
Fighter pilots use heads-up displays, which provide them with crucial information right on the cockpit windshield and directly in their line of sight. This is “augmented reality,” because it displays information within a live view of the world. It used to be very specialized and expensive technology. Now, Microsoft’s HoloLens makes it available for consumers.
An “augmented operator” can get directions or assistance without interrupting the task he or she is working on. Often, when new equipment or processes are developed, trainers need to travel long distances to factories, staying for weeks to teach workers what to do. Designers do the same, getting feedback for refinements and improvements. All that travel takes up a huge amount of time and is extremely expensive. With augmented reality available, it is often unnecessary.
A worker wearing a set of smart glasses can receive individualized, step-by-step instructions displayed right in front of his or her eyes, no matter where he or she is looking. With earbuds and a microphone, he or she could talk directly to trainers in real time.
Many manufacturing environments are hazardous, involving heavy equipment, caustic chemicals and other dangers that can maim and kill human workers. A “healthy operator” may be equipped with wearable sensors tracking pulse rate, body temperature, chemical exposure or other factors that indicate risks of injury.
This type of system is already available: Truck drivers can wear the Maven Co-Pilot, a hands-free headset that detects fatigue symptoms, like head-bobbing movements. It can also ensure drivers check their rear-view mirrors regularly to stay aware of nearby traffic. It can even provide reminders to take scheduled breaks. This helps keep the truck’s driver safe and improves everyone else’s road safety.
Possibilities are limitless. An “analytical operator” would wear a monitor showing real-time data and analytics, such as information on chemicals in a sewage treatment plant or pollutants at an incinerator. A “collaborative operator” may be linked to collaborative robots, or co-bots, like the assembly assistant YuMi. A “smarter operator” could be equipped with an intelligent virtual personal assistant, like an advanced Siri or Alexa.
There does not have to be conflict between robots and humans, with machines taking people’s jobs and leaving them unemployed. Technology should be designed with collaboration in mind. That way, companies and workers alike will be able to capitalize on the respective strengths of both human and machine. What’s more, the inherent flexibility of “Operator 4.0” workers will also help to ensure workplaces of the future that can change and adapt. That means getting ever more efficient and safer, as new technologies emerge.
Newcastle University researchers have developed a prosthetic limb that performs certain actions on its own. All the wearer has to do is look at an object, and the hand does the rest. Using its onboard camera to evaluate the shape, size, and distance, the hand performs the appropriate action – like picking up a coffee cup. This makes complex actions much easier for prosthetic wearers, who otherwise have to perform complex muscle actions to manipulate their limbs.
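The Newcastle team’s vision system is far more capable, but the grasp-selection step it performs can be caricatured with a simple rule: estimate the object’s rough dimensions from the camera, then pick a grip. The grasp categories and size cutoffs below are purely illustrative, not the hand’s actual logic.

```python
def choose_grasp(width_cm, height_cm):
    """Pick a grasp type from rough object dimensions.

    The categories and cutoffs are hypothetical; the Newcastle hand
    makes a comparable grasp decision from its onboard camera.
    """
    if width_cm < 2:
        return "pinch"          # small items: pen, key
    if height_cm > width_cm * 2:
        return "tripod"         # tall, narrow items: bottle, cup
    return "power"              # bulky items: ball, jar

print(choose_grasp(1.0, 10.0))   # pinch
print(choose_grasp(7.0, 20.0))   # tripod
print(choose_grasp(10.0, 8.0))   # power
```

The payoff for the wearer is that one glance replaces a sequence of deliberate muscle contractions: the classification happens in the hand, not in the user’s residual muscles.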
This technology would take the form of an injectable “neural lace” — composed of a mesh of electrodes — that would augment the human brain, adding another layer to the cortex and limbic system that is capable of communicating with a computer (essentially creating cyborgs). This, hypothetically, creates an upgradable, updatable interface that could be applied in countless ways. Some of these include:
Controlling Computers With Your Mind
Brains and technology both operate using the same vectors: electricity and data. Musk’s neural lace would be a system that provides a way for them to communicate directly with each other. To borrow a simile from Phillip Alvelda, program manager of the Neural Engineering System Design (NESD) program, another nascent BCI effort: “Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem […] Imagine what will become possible when we upgrade our tools.” Applications could stretch from the remote operation of technology to the completely hands-free and voiceless operation of computers. Researchers in Korea have already used a BCI to control turtles.
Updating Your Mind or Communicating With Someone Else’s
Elon Musk’s idea could both initiate brain activity and monitor it. The technology does not necessarily have to be a one-way communication stream; it is capable of both sending messages and creating information in the brain. The high-bandwidth interface could allow you to wirelessly transmit information to the cloud, to computers, or even directly to the brains of other people with a similar interface in their heads. There is also the possibility of downloading content to augment your consciousness: think Neo learning kung fu in The Matrix. While initial tests to improve intelligence haven’t been too successful, if brains and computers speak the same language, then computers can impart information to the brain. The technology is currently being used to allow paralyzed people to communicate, but its uses could extend far beyond that.
Bionic Limbs That Feel Like Real Limbs
As part of this two-way communication stream, robotic arms could communicate fleshy messages by being connected to existing nerve structures. Rather than making the brain learn how to use a new part of the nervous system, robotic limbs could be quickly and easily integrated into the system. This has the potential to revolutionize prosthetic limbs for the disabled, but may also encourage people to rid themselves of their biological arms in favour of mechanical super limbs. Who knows!
Emotionally Aware Technology
As computers and brains would essentially be speaking the same language, emotions could be read as data using electrodes. This would shift technology’s perception of humans from basic recognition to complex understanding. Robot helpers would be able to adapt to your emotional state rather than just responding to commands. Photos and videos could also be implanted with emotional metadata, meaning that one could feel what it would be like to be in any given scenario, rather than just trying to imagine it.
Next Generation Adaptable Gaming
One issue with the lifespan of games is repetition; people become accustomed to them, know what to expect, or are limited by the programmed narrative. A BCI could improve this situation by having games respond to what your brain is feeling, remaining one step ahead and endlessly diverse. This would be most applicable to the horror genre, in which enemies could come at you when and where you least expect them, providing constant shocks, jumps, and thrills. The Black Mirror episode “Playtest” is an hour-long exploration of just how terrifying this could be. Since AI has been shown to be as creative as a human composer, this reality could be surprisingly close.
Anyone who’s ever had a stray eyelash or a piece of dust land in their eye can attest to the pain even the tiniest foreign object can cause. While this sensitivity is essential for keeping our eyes in working order, it can also be a problem for those with retinal implants.
The retina is a light-sensitive layer of tissue at the back of the eye that plays an important role in sight, and damage to it can lead to blindness. For the past few years, retinal implants have been restoring vision for patients with such problems, but those implants have all been mechanical devices, which can wreak havoc on already damaged eyes.
“The human eye is incredibly sensitive, which is why foreign bodies like metal retinal implants can be so damaging, leading to inflammation and/or scarring,” University of Oxford researcher Vanessa Restrepo-Schild explained in a university news release. To that end, she led a team in creating the world’s first synthetic, soft tissue retina.
The team used lab-created biological tissue for its artificial retina, and because it doesn’t incorporate any rigid foreign materials, the device is less likely to cause problems once implanted. “A biological synthetic implant is soft and water-based, so much more friendly to the eye environment,” explained Restrepo-Schild.
So far, the team has only tested its implant in a lab environment, so the next steps are testing it with living tissues and expanding on its functionality. Restrepo-Schild is optimistic about the future practical uses of her creation: “I hope my research is the first step in a journey towards building technology that is soft and biodegradable instead of hard and wasteful.”
All Eyes on Bionics
An estimated 39 million people worldwide are blind, and the Oxford team is just one of many developing bionic eyes to help them regain their vision.
Right now, the only bionic implant approved by the Food and Drug Administration (FDA) for use in the United States is Second Sight’s Argus II. That system requires the implantation of a device comprising an antenna, an electronics case, and an electrode array on the patient’s eye. That implant works in conjunction with a pair of glasses, a video processing unit (VPU), and a cable worn outside the body to restore some of the user’s vision.
Other researchers are skipping the eyes altogether in their quest to bring sight to the blind. Monash University professor Arthur Lowery’s “bionic eyes” feed information from a glasses-mounted camera directly into the brain via electrodes. This means it could be used by someone with severely damaged eyes or even no eyes at all.
Beyond helping people with impaired vision, bionic eyes could eventually give “superpowers” to those who already have no trouble reading the eye chart. Those implants could allow us to see in different light spectrums, give us telescopic sight, or even record what we see and wirelessly upload it to our social networks. The possibilities are endless, and breakthroughs like that of Restrepo-Schild put us one step closer to that age of superhuman sight.
Amid all the fanfare surrounding the fluid-filled bags that recently brought premature lambs to healthy development, raising hopes of tackling premature birth in human infants, it’s important to note how the medical community first tackled premature birth with a little physics.
A video from popular YouTube channel MinuteEarth discusses how a mathematical study left an impressive impact on the medical community, changing how we can make a difference for prematurely born infants.
The video takes a look at the Law of Laplace and its role in the human body. Much like the air pressure in two connected balloons, the pressure in alveoli — the miniature air sacs in our lungs that exchange oxygen and carbon dioxide — can fall out of equilibrium, a particular problem for premature infants with Respiratory Distress Syndrome (RDS).
By applying the physics behind this complication, CPAP (continuous positive airway pressure) machines have saved over two million lives worldwide — a dramatic shift since the 1970s, when 3 in 4 premature babies with RDS died of complications.
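The balloon analogy rests directly on the Law of Laplace. A simplified sketch of the physics (this leaves out surfactant chemistry and lung mechanics, which the video covers more fully):

```latex
% Law of Laplace for a thin-walled sphere (a balloon or an alveolus)
% of radius r with surface tension T: the excess pressure inside is
P = \frac{2T}{r}
% A smaller radius means a higher pressure, so of two connected
% "balloons" the smaller one tends to empty into the larger one.
% Surfactant normally lowers T in small alveoli to prevent this
% collapse; infants with RDS lack surfactant, and a CPAP machine
% compensates by supplying a constant positive airway pressure
% that keeps the smallest alveoli inflated.
```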
Hahna Alexander shares the journey of SolePower’s self-charging work boots, which deliver actionable insights to increase efficiency and improve safety. At the core of SolePower’s development is a patented kinetic charger that generates power with every step. Alexander is a 2017 winner of Toyota’s Mothers of Invention award.
In March 2016, DARPA — the U.S. military’s “mad science” branch — announced their Targeted Neuroplasticity Training (TNT) program. The TNT program aims to explore various safe neurostimulation methods for activating synaptic plasticity, which is the brain’s ability to alter the connecting points between neurons — a requirement for learning. DARPA hopes that building up that ability by subjecting the nervous system to a kind of workout regimen will enable the brain to learn more quickly.
The ideal end benefit for this kind of breakthrough would be downloadable learning. Rather than needing to learn, for example, a new language through rigorous study and practice over a long period of time, we could basically “download” the knowledge after putting our minds into a highly receptive, neuroplastic state. Clearly, this kind of research would benefit anyone, but urgent military missions can succeed or fail based on the timing. In those situations, a faster way to train personnel would be a tremendous boon.
First Neurostimulation, Then Application
As part of the TNT program, DARPA is funding eight projects at seven institutions. All projects are part of a coordinated effort that will first study the fundamental science undergirding brain plasticity and will conclude with human trials. The first portion of the TNT program will work to unravel the neural mechanisms that allow nerve stimulation to influence brain plasticity. The second portion of the program will practically apply what has been learned in a variety of training exercises.
To ensure the work stays practical, foreign language specialists, intelligence analysts, and others who train personnel now will work with researchers to help refine the TNT platform to suit military training needs. Researchers will compare the efficacy of using an implanted device to stimulate the brain versus non-invasive stimulation. They will also explore both the ethics of enhanced learning through neurostimulation and ways to avoid side effects and potential risks.
“The Defense Department operates in a complex, interconnected world in which human skills such as communication and analysis are vital, and the Department has long pushed the frontiers of training to maximize those skills,” Doug Weber, the TNT Program Manager, said in a DARPA press release. “DARPA’s goal with TNT is to further enhance the most effective existing training methods so the men and women of our Armed Forces can operate at their full potential.”
If the TNT program succeeds, striving to be all you can be may mean learning at a much faster pace, and not just for military personnel. Downloadable learning may be one of the ways we achieve next-level humanity.
It may not be as sleek or as high-tech as Iron Man’s suit, but right now, Richard Browning’s flight suit, developed under his startup Gravity, is the closest the world has to Tony Stark’s powered exoskeleton.
In its first live demonstration, Browning showed off his creation, the Daedalus, at a TED conference held at the Vancouver Convention Center, where he hovered above the ground and flew in a circle in front of a captivated crowd.
The entire suit comprises a specially designed exoskeleton, a helmet that tracks fuel usage, and six miniature jet engines that allow for vertical takeoff. The wearer controls direction and speed with their arms. The tech allows the user to fly for 10 minutes straight, reach speeds of up to 321 km/h (200 mph), and climb to a few thousand feet. However, for his live demo, as for his maiden test flight, Browning kept the speed and altitude low for safety reasons.
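Those specs imply a theoretical range, though only as a back-of-the-envelope illustration — in practice, fuel burn at top speed would cut the flight window well short of this:

```python
# Back-of-the-envelope range estimate from the suit's stated specs.
# Assumes (unrealistically) flying flat-out for the full fuel window.
top_speed_kmh = 321      # ~200 mph, as reported
flight_time_min = 10     # stated maximum flight time

range_km = top_speed_kmh * (flight_time_min / 60)
print(f"Theoretical max range: {range_km:.1f} km")  # 53.5 km
```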
Browning himself admits that he doesn’t think the tech will go mainstream anytime soon. But the Daedalus serves as a proof of concept and hints at a new era in aviation. And while it’s still in its early stages of development, the attention it draws, including interest from the UK military, could mean the currently cumbersome tech could evolve into something practical. So, while we’re not exactly entering a new era of aviation just yet, this invention is, at the very least, a remarkable stepping stone.
Ray Kurzweil, Google’s director of engineering, is a well-known futurist who seems to have a penchant for accurate predictions. Most recently, he has again reiterated his prediction that the so-called technological singularity will happen by 2045. For Kurzweil, this doesn’t translate to an end-of-the-world-as-we-know-it scenario courtesy of artificially intelligent (AI) machines. Rather, it means human beings will become powered by machines.
Kurzweil believes that, as part of this human-machine melding, nanobots will inhabit our bodies by the 2030s. While flowing through our arteries, these microscopic robots would keep us healthy and transmit our brains onto the cloud.
Another futurist, Dave Evans, founder and CTO of Silicon Valley stealth startup Stringify, gave his thoughts on Kurzweil’s nanobot idea in a February interview with James Bedsole.
Evans explained that he thinks such a merging of technology and biology isn’t at all farfetched. In fact, he described three stages as to how this will occur: the wearable phase (where we are today), the embeddable phase (where we’re headed, with neural implants and such), and the replaceable phase.
Does Evans agree with Kurzweil’s idea of nanobots flowing inside our bodies? Check out the rest of his answer in the video embedded here.
Soon you’ll be able to answer your cell phone by touching your tattoo. Researchers at Saarland University in Germany have developed iSkin, a sensor worn on the skin like a tattoo. Just in case tattoos and cell phones weren’t already cool.
Silicon Valley seems to have a new obsession with a proposed biotech called ‘the neural lace’. Both Elon Musk and Bryan Johnson have launched companies that aim to implant electrodes in the brain, which ultimately could give people the ability to upload and download thoughts.
After weeks of anticipation, details on Elon Musk’s brain-computer interface company Neuralink have finally been revealed. In a detailed report on the website Wait But Why, Tim Urban recounts insights gleaned from his weeks meeting with Musk and his Neuralink team at their San Francisco headquarters. He offers an incredibly detailed and informative overview of both Musk’s latest venture and its place in humanity’s evolution, but for those of you interested in just the big picture, here’s what you really need to know about Neuralink.
Your Brain Will Get Another “Layer”
Right now, you have two primary “layers” to your brain: the limbic system, which controls things like your emotions, long-term memory, and behavior; and the cortex, which handles your complex thoughts, reasoning, and long-term planning. Musk wants his brain interface to be a third layer that will complement the other two. The weirdest thing about that goal may be that he thinks we actually already have this third layer — we just don’t have the best interface for it:
We already have a digital tertiary layer in a sense, in that you have your computer or your phone or your applications…The thing that people, I think, don’t appreciate right now is that they are already a cyborg…If you leave your phone behind, it’s like missing limb syndrome. I think people—they’re already kind of merged with their phone and their laptop and their applications and everything.
The goal of Neuralink, then, is eliminating the middleman and putting that power we currently have at our fingertips directly into our brains. Instead of one person using their phone to transmit a thought to another person (“Dinner at 8?”), the thought would just go from one brain to the other directly.
Thankfully, we’ll be able to control this completely, Musk tells Urban: “People won’t be able to read your thoughts — you would have to will it. If you don’t will it, it doesn’t happen. Just like if you don’t will your mouth to talk, it doesn’t talk.”
Musk Is Working with Some Very Smart People
Musk met with more than 1,000 people before deciding on the eight who would help him shape the future of humanity at Neuralink. He claims assembling the right team was a challenge in and of itself, as he needed to find people capable of working in a cross-disciplinary field that includes everything from brain surgery to microscopic electronics.
The crew he landed is a veritable supergroup of smarties. They have backgrounds from MIT, Duke, and IBM, and their bios include phrases like “neural dust,” “cortical physiology,” and “human psychophysics.” They’re engineers, neurosurgeons, and chip designers, and if anyone can bring Elon Musk’s vision to life, it’s them.
The Timeline For Adoption Is Hazy…
Neuralink won’t come out of the gate with a BMI that transforms you into a walking computer. The first product the company will focus on releasing will be much more targeted. “We are aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years,” said Musk.
“I think we are about 8 to 10 years away from this being usable by people with no disability.” – Musk
Just as SpaceX was able to fund its research on reusable rockets by making deliveries to the ISS, and Tesla was able to use profits from its early car sales to fund battery research, these early BMIs for treating disease and disability will keep Neuralink afloat as it works on its truly mind-bending technologies.
As for when those technologies, the ones that allow healthy people to channel their inner telepaths, will arrive, Musk’s fairly optimistic timeline comes with several contingencies: “I think we are about 8 to 10 years away from this being usable by people with no disability…It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.”
…Because The Hurdles are Many
Those are just two of the hurdles Neuralink faces. Elon Musk might make innovation look easy, but even going to Mars seems relatively straightforward in comparison to his plans for his latest company.
First, there are the engineering hurdles to overcome. The company has to deal with the problems of biocompatibility, wirelessness, power, and — the big one — bandwidth. To date, we’ve never put more than roughly 200 electrodes in a person’s brain at one time. When talking about a world-changing interface, the Neuralink team told Urban they were thinking something like “one million simultaneously recorded neurons.” Not only would they need to find a way to ensure that the brain could effectively communicate with that many electrodes, they also need to overcome the very practical problem of where to physically put them.
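To get a feel for why bandwidth is “the big one,” here is a rough, illustrative calculation. The sampling rate and sample size below are common values for extracellular spike recording, not Neuralink figures, so treat the result as an order-of-magnitude sketch only:

```python
# Illustrative data-rate estimate for large-scale neural recording.
# 30 kHz sampling and 16-bit samples are typical for spike recording
# rigs, but are assumptions here, not Neuralink specifications.
channels_today = 200          # roughly the most electrodes implanted to date
channels_target = 1_000_000   # "one million simultaneously recorded neurons"
sample_rate_hz = 30_000       # assumed per-channel sampling rate
bytes_per_sample = 2          # assumed 16-bit samples

scale_up = channels_target // channels_today
data_rate_gb_s = channels_target * sample_rate_hz * bytes_per_sample / 1e9

print(f"Channel count scale-up: {scale_up:,}x")     # 5,000x
print(f"Raw data rate: {data_rate_gb_s:.0f} GB/s")  # 60 GB/s
```

Even under these modest assumptions, the raw stream is tens of gigabytes per second — which is why compression, on-chip spike detection, and wireless power are engineering problems in their own right.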
The engineering is only half the battle, though. As Musk mentioned, regulatory approval will be a big factor in the development and adoption of Neuralink’s tech. The company also faces potential skepticism and even fear from a public that doesn’t want anyone cutting into their brains to install some high-tech machinery — according to a recent Pew survey, the public is even more worried about brain-computer interfaces than gene editing. There’s also the not-entirely-unfounded fear that these computers could be hacked.
Add to all that our still very, very incomplete understanding of how the brain ticks exactly, and you can see that the Neuralink team has its work cut out for them.
Neuralink Won’t Exist in a Vacuum
Thankfully, they won’t be working to remake our minds alone — many other universities and research institutes are pushing brain interface technology forward. Facebook’s Building 8 is working on its own BCI, MIT is creating super-thin wires for use in brain implants, and other cyborg devices are already in the works to help the paralyzed walk again and the blind regain their sight. Each new development will push the field forward, and the team at Neuralink will be able to learn from the mistakes and successes of others in the field.
Just like other electric cars were on the road before Tesla came along, brain computer interfaces are not new — the tech might just need a visionary like Musk to elevate it (and us) to the next level.
Facebook’s annual developers’ conference is in full swing this week. Ultimately, the event serves as an opportunity for the company to unveil their most innovative products and reveal key details about upcoming projects. And nothing is more promising or intriguing than what’s coming out of Facebook’s mysterious Building 8 (B8).
Currently, we know very little about B8, as none of their projects have been officially detailed, but rumors are swirling, and it’s not like we don’t know anything. We have some big, overarching information about the general work being done. It seems that B8 is working on four primary projects. These are said to include augmented reality, drones, cameras, and direct brain-computer interfaces (BCIs).
Yes, Facebook is working on computers that are meant to interface with our brains.
During the opening event yesterday, Facebook CEO Mark Zuckerberg announced that, today, speakers at the conference would discuss where they are in relation to their work on the BCI technology.
At the event, he stated that we would “hear from Regina Dugan about some of the work that [they are] doing to build even further out beyond augmented reality, and that includes work around direct brain interfaces that are going to, eventually, one day, let you communicate using only your mind.”
Dugan is a former DARPA executive. She also worked for Google’s advanced projects division. Now, Dugan has finally released information about the work B8 has been doing. The highlights and video of the event are below (this story is developing):
Facebook is working to develop a brain-computer interface that will, in the future, allow individuals to communicate with other people without speaking. Ultimately, they hope to develop a technology that allows individuals to “speak” using nothing but their thoughts—unconstrained by time or distance.
They want to create “category defining products” that are “social” first, products that allow us to form more human connections and, in the end, unite the digital world of the internet with the physical world and the human mind.
Dugan notes that the brain produces about 1 terabyte of information per second. Through speech, however, we can transmit information to others at only about 100 bytes per second. Facebook wants to get the information that reaches our speech center out of the brain and into the world, letting us share it with others at will.
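Taking Dugan’s two figures at face value, the size of that gap is easy to compute:

```python
# The gap between what the brain "produces" and what speech can carry,
# using the round figures quoted from Dugan's talk.
brain_bytes_per_s = 1e12    # ~1 terabyte per second
speech_bytes_per_s = 100    # ~100 bytes per second

ratio = brain_bytes_per_s / speech_bytes_per_s
print(f"Speech carries roughly 1 part in {ratio:.0f}")  # ten billion
```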
For their initial work, they hope to let humans “type” and “click” with their brains in order to interact with technology. People with ALS, for example, could type not with eye blinks but with their thoughts. In short, they want to “decode speech” and allow individuals to communicate using their brain waves.
Initially, their goal is to let people type, straight from the brain, five times faster than they can type on a smartphone. This means developing technologies that can “read” the human brain and transmit that information outward.
Next, they will work to allow people to “type” a staggering 100 words per minute using their thoughts. That’s far, far faster than most humans can type on a computer; the average person types between 38 and 40 words per minute.
They have developed actuators that allow people to “hear” through their skin. Ultimately, with Facebook’s technology, humans could “feel” words.
Eventually, they want to allow people to think something and send the thought to someone’s skin. Additionally, they will allow people to think something in one language and have a person receive the thought in an entirely different language.
This article has been updated to clarify the nature of the brain interface and what information Facebook is hoping to capture and transmit to the outside world.
Nootropics, broadly speaking, are substances that can safely enhance cognitive performance. They’re a group of (as yet unclassified) research chemicals, over-the-counter supplements, and a few prescription drugs, taken in various combinations—that are neither addictive nor harmful, and don’t come laden down with side-effects—that are basically meant to improve your brain’s ability to think.
Right now, it’s not entirely clear how nootropics as a group work, for several reasons. How effective any one component of a nootropic supplement (or a stack) is depends on many factors, including the neurochemistry of the user, which is connected to genes, mood, sleep patterns, weight, and other characteristics.
However, there are some startups creating and selling nootropics that have research scientists on their teams, with the aim of offering reliable, proven cognitive enhancers. Qualia is one such nootropic. This 42-ingredient supplement stack is created by the Neurohacker Collective, a group that boasts an interdisciplinary research team including Sara Adães, who has a PhD in neuroscience, and Jon Wilkins, a Harvard PhD in biophysics.
Some of Qualia’s ingredients are found in other stacks: Noopept, for example, and Vitamin B complex are some of the usual suspects in nootropics. Green tea extract, L-Theanine, Taurine, and Ginkgo Biloba are also familiar to many users, although many of the other components might stray into the exotic for most of us. Mucuna Pruriens, for example, is a source of L-Dopa, which crosses the blood–brain barrier to increase concentrations of dopamine in the brain; L-Dopa is commonly used to treat dopamine-responsive dystonia and Parkinson’s disease.
The website says that the ‘smart drug’ is designed to provide users with “immediate, noticeable uplift of [their] subjective experience within 20 minutes of taking it, as well as long-term benefits to [their] neurology and overall physiologic functioning.” For people climbing their way up in Silicon Valley, it’s a small price to pay. What would you do with 10 percent more productivity, time, income, or intelligence?
Note: Futurism curates the products that could help reshape our world. Here, we have partnered with the team behind Qualia in order to offer readers a 10% discount using the coupon code ‘futurism’. Futurism also has affiliate partnerships, so we may get a share of the revenue from purchases.
Humans have always been fascinated by the prospect of long life. For a classic take on this idea from one of the greats, check out Kurt Vonnegut’s 1954 short story “Tomorrow and Tomorrow and Tomorrow,” in which he paints a portrait of a crumbling society after the invention of an anti-aging drug. You might think that life extension is just the stuff of science fiction, fantasy, and horror, but scientists have been studying how aging works — and how we can stop it — for decades.
A 2015 editorial published in Nature Medicine said that aging is the greatest risk factor for many chronic human diseases, and that if aging can be slowed, the chance of getting these diseases can be pushed off too. A report from the World Health Organization notes that not all people age at the same rate, and the occurrence of disease isn’t a guarantee. Genetic or environmental factors have a hand in this, and scientists are continually studying what exactly influences aging.
The Wellderly Study, published in 2016, linked cognitive health to physical health. You might also remember the studies done on mice involving the transfusion of blood; younger mouse (and human) blood allowed mice to thrive, while the blood of older mice negatively impacted their brains, livers, and hearts. Maybe vampires have the right idea.
Destroying old cells in mice also had promising effects, as reported by Nature in February. When senescent cells (cells that can’t divide) build up in aging animals, the cells “release molecules that can harm nearby tissues.” Mice were genetically engineered so that their senescent cells would be destroyed upon injection of a drug. These engineered mice had lifespans 20 to 30 percent longer than their normal mice counterparts.
In 2015, the FDA authorized the first clinical trials of a drug, Metformin, that has the potential to extend the human lifespan up to 40 percent (along with boosting the time span that people can enjoy optimal health). It’s impressive how many theories are out there in the scientific community to stop the effects of aging, and as the years go by, there could be hundreds more. This prospect is exciting for many, but it also raises several questions and challenges.
The Future of Aging
So how can we prevent Vonnegut’s vision of the future from happening if we extend the lifespans of humans? Should we even extend human lifespans? We’re already one of the longest-living land animals on Earth. There are ethical and social problems to consider if life extension becomes the norm, and many of these were brought up in a debate sponsored by Intelligence Squared.
Would slowing aging — or even eliminating it — change what it means to be a human? Humans are already quite self-centered as a species (prioritizing our own survival and standard of living over the planet itself and many other species). The planet can only sustain so many humans comfortably, and it’s currently strained as it is. If fewer people died, that would crowd us even more.
There’s also the argument that new generations become more progressive and open to change versus older generations. Without the older population dying off, there’s less potential for different perspectives and new ideas. Dying also gives our lives meaning in regards to time — when time is a finite resource, we cherish it more.
From the opposite side, research into anti-aging will make us healthier in the last years of our lives, making healthcare less expensive for the elderly. Medical research has had to focus on many diseases and conditions, but if the focus is on aging itself, hopefully the potential to develop those diseases will diminish.
We can also greatly benefit from the experience and wisdom of older people as a society. It’s detrimental to science to ignore these avenues of research. Anti-aging research also can lead to other research, such as how to sustain bigger populations. Urban Skyfarm is just one idea that would provide housing, farming, clean air, and renewable electricity.
Careful urban planning will be a necessity as we move forward. Many scientists studying longevity believe that, in the end, the benefits of anti-aging research outweigh the negatives. Even if we can’t stop aging completely, we can make the last years of our lives more comfortable and less marred by disease and discomfort.
Despite all the research we’ve done, we still know relatively little about how the human brain works, and we know even less about the mystery of “consciousness.” Scientists disagree about whether consciousness exists at all outside the illusions of our own collective imagination. Others believe it exists independently and, though we’ve yet to understand its origins, have brought quantum physics into the discussion.
This is probably in part because of the way that the “observer effect” challenged one of science’s most basic tenets: that there is an objective, observable reality that exists whether we’re looking at it or not. The revelation that observing and measuring quantum effects changes their behavior is troubling, but it also suggests to many people that consciousness itself is part of quantum theory. Moreover, as humans creating AI that, for all its achievements still can’t master some of the things that come so easily to our own minds (at least not yet), we are bound to see a blurry reflection of ourselves in quantum computers, which promise to achieve so much more than ordinary computers ever could.
However, it was the British physicist Roger Penrose who pointed out that, observer effect aside, quantum mechanics may be involved in consciousness. More specifically, he thought it might be possible that quantum events cause molecular structures in the brain to alter their state and trigger neurons in different ways; that literal quantum effects within the brain exist.
For all we can accomplish with the human brain, it has its foibles, and perhaps suspecting the existence of quantum consciousness is one of them. We possess superior intellects because of our high-level pattern-processing abilities, but it is also a well-proven fact that the human brain is prone to see meaningful patterns where none exist, even in the midst of meaningless noise. And while the study of quantum physics is certainly not meaningless noise, it’s possible that our minds — which are meaning-making machines — are wrong to see themselves in quantum effects. Does it really make sense to think that our lack of understanding of both consciousness and quantum mechanics points to a larger connection?
Our Participatory Universe
There is more to this question than the raw interest of philosophy: if there is in fact a connection between quantum mechanics and human consciousness, any major breakthrough in our understanding of either could help us understand both. For example, advances in quantum computing could enable us to master brain augmentation and uploading consciousness, opening the door to a form of immortality. Improved understanding of the superposition property could teach us how to conquer multiple mutually-exclusive ideas at once.
Or, perhaps we’ve been approaching this in the wrong way. As we look at quantum mechanics, we ask ourselves whether we disturb the effects by measuring, or whether it is the act of noticing the measurement impacting our consciousness that causes the disturbance. Is it possible that knowing how to think in the right way—achieving a quantum consciousness—will allow us to perceive quantum mechanics properly for the first time? We’ve always been part of Wheeler’s participatory universe in some sense, lending our interpretation to what reality is as we record our own history.
For now, most of the scientific community regards quantum effects in the brain skeptically — an appropriate response at this point. Fueling the fast retreat from quantum consciousness theories is the New Age “quantum consciousness” trend and the cottage industry that has arisen around it, with avid bloggers writing about telepathy, the afterlife, and telekinesis, and crafters selling art and other products.
Whether or not consciousness influences quantum mechanics, and whether or not we eventually require quantum theory to fully comprehend how the brain works, for now we can enjoy the useful discomfort the association provides. Quantum theory has forced us out of our collective comfort zone as we consider new ways of thinking, and we have found ourselves living inside our own theories.
Rarely do we hear a story as incredible as Adam Gorlitsky’s. Although he was paralyzed from the waist down in a terrible car accident 10 years ago, he continues to complete 10-kilometer (6-mile) foot races across the country — an impossible feat made possible by the advent of reliable robotic exoskeletons.
Thanks to the “ReWalker” exoskeleton produced by the company ReWalk Robotics, Adam is able to walk once more. Gorlitsky’s case is a powerful testament to what a light, wearable exoskeleton with motors at the hip and knee joints can do for people with daunting disabilities.
The battery-powered, computerized robotic prosthetic was offered to Gorlitsky during a clinical trial, and afterward he couldn’t shake the immense confidence and freedom the suit had given him after nearly a decade of feeling powerless. Deciding that the suit could drastically improve his standard of living, Gorlitsky launched a GoFundMe campaign in 2015 that sold a line of I Got Legs t-shirts, helping him make a down payment on the new ReWalk Personal 6.0 exoskeleton suit.
A Project to Help Others
Gorlitsky’s journey didn’t end there. He has since joined forces with director Warren Adcock to tell the story of how the ReWalk changed his life in a documentary. Through the film, I Got Legs: Or How I Learned to Stop Worrying and Love the Exoskeleton, the team aims to share Gorlitsky’s journey while advocating for the “right to walk,” the chief slogan and goal of Adam’s non-profit organization, I Got Legs. Supporters can back the documentary for various rewards through its Kickstarter page.
The documentary will touch upon several key moments in Adam’s life, underscoring his ability to look past personal struggles and fight not only for himself but also for others with spinal injuries. Adam’s confidence and courage stem from his access to an exoskeleton suit — access he advocates extending to others through critical insurance policy reform.
If the team raises its $70,000 goal by May, they plan to release the documentary sometime in 2018.
Doing research to define what nootropics are is kind of like asking techies which VPN is the best one to use. You get an avalanche of responses from a group of people who are absolute evangelists for the technology, and each is totally sold on their particular solution. This isn’t a criticism; it is the observation of an interested outsider looking in on a movement — the movement to achieve next-level humanity.
Nootropics, broadly speaking, are substances that can safely enhance cognitive performance. We’re not talking about convincing your doctor to write you a Ritalin or Adderall prescription to help you stay awake longer. We’re talking about a group of (as yet unclassified) research chemicals, over-the-counter supplements, and a few prescription drugs, taken in various combinations — substances that are neither addictive nor harmful, don’t come laden with side effects, and are basically meant to improve your brain’s ability to think.
Part of the secret sauce of nootropics is finding the right combination of the right substances, and taking them in the right amounts and proportions at the right times. That series of alignments is your “stack,” and a growing number of Silicon Valley innovators are relying on their stacks every day.
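Since a “stack” is really just substances, doses, and timing, it can be thought of as structured data. Here is a purely illustrative sketch — the substances and doses are placeholders borrowed from common examples in the article, not recommendations:

```python
# Illustrative only: substance names and doses are placeholders,
# not medical or supplement recommendations.
stack = {
    "morning": [
        {"substance": "caffeine", "dose_mg": 100},
        {"substance": "L-theanine", "dose_mg": 200},
    ],
    "midday": [
        {"substance": "fish oil", "dose_mg": 1000},
    ],
}

def daily_total(stack, substance):
    """Sum the daily dose of one substance across all time slots."""
    return sum(
        item["dose_mg"]
        for items in stack.values()
        for item in items
        if item["substance"] == substance
    )
```

The point of the structure is the one the article makes: a stack is a schedule of combinations, not a single pill.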
Some nootropics you’ve probably already tried, like fish oil or caffeine, or maybe creatine or L-theanine, an amino acid commonly found in green tea. Others you may not have heard of, substances like the class of about 20 drugs called “racetams.”
To be clear, the idea behind nootropics is not taking a magic pill and transforming yourself into a genius instantly. The core value here is optimization: incremental changes — sometimes tiny ones — that together produce a cumulative effect that is powerful and progressive. Nootropics aren’t like winning the lottery (something that happens all in one day). They’re an investment; they’re cashing in on compound cognitive interest.
In other words, nootropics are not like the pills in Limitless, which make you super-smart instantly. Rather, they pose the seductive question of what you might achieve if you were 10 percent more productive…all the time.
Right now, it’s not entirely clear how nootropics as a group work, for several reasons. First, how effective any one component of a nootropic supplement (or a stack) is depends on many factors, including the neurochemistry of the user, which is connected to genes, mood, sleep patterns, weight, and other characteristics. In other words, results vary, and they can vary a lot.
Second, some nootropic components have received more research attention than others. For example, it is well-known how caffeine affects the body. Unfortunately, as anyone with a coffee habit can tell you, the benefits of caffeine can be lost over time as your body builds up a tolerance to it, and some people experience withdrawal-like symptoms when they don’t ingest caffeine, including headaches.
This is where the amino acid L-theanine comes back into the picture, because research has shown that it reduces physiological and psychological stress responses, making it a great partner for caffeine. It also promotes neuronal health. Studies on the two substances taken together show that they promote alertness, attention, and task switching abilities. This is why the L-theanine and caffeine combination is part of many stacks.
The third factor in ‘where the science behind nootropics stands’ is that many of these substances have been the subject of research studies that proponents don’t think are exactly on point. For example, Kamal Patel points out that Racetams are designed to improve cognitive function, but also notes that researchers aren’t exactly sure what their mechanisms are. According to Patel, this is in part because most research on Piracetam focuses on the elderly and people with real cognitive deficits; in contrast, the people who use nootropics are primarily young professionals who are already at the top of their game (at least, their unenhanced game).
That said, there are a number of studies that have found benefits in nootropics. One study found that brain function in elderly patients improved significantly with regular doses of Piracetam. Other research has shown that Piracetam improves memory in adults and can help students improve their nonverbal learning skills. The bottom line here is that researchers know Piracetam has some positive benefits, but they don’t yet understand why or how.
So, ultimately, the science behind nootropics is promising, but we are still in the early research stage, meaning that it is very hard to say anything definitive about combinations and how they work.
One final point of interest regarding research and nootropics is that the dedicated fan base of nootropic users is currently serving as its own long-term research cohort. They meet in person sometimes, for everything from informational seminars to meditation contests. They gather in their own nootropics Subreddit and in other online forums, sharing information on stacks, optimizing doses, and results. This is DIY, anecdotal science — not placebo-controlled, and certainly not generalizable — but to many people looking to optimize their life (and their brain), the evidence is persuasive enough for them to give nootropics a shot.
On the Market Now
For people who are interested in self-improvement but stand firm on the scientific method, there are some reasonable options to try. There are some startups creating and selling nootropics that have research scientists on their teams, with the aim of offering reliable, proven cognitive enhancers.
Qualia is one such nootropic. This 42-ingredient supplement stack is created by the Neurohacker Collective, a group that boasts an interdisciplinary research team. Among the members of Qualia’s science team are Sara Adães, who has a PhD in neuroscience; Andrew Huberman, who is a Stanford University School of Medicine professor of Neurobiology; Lindsay Briner, who is in her third year of PhD study in cognitive neuroscience; Dr. Rishi Khatri, JD/MD; Jon Wilkins, a Harvard PhD in biophysics; and Heather Sandison, a naturopath. You can find a complete list of Qualia’s ingredients on the website, as well as some of the research behind the stack’s formulation.
Some of Qualia’s ingredients are found in other stacks: Noopept, for example, and Vitamin B complex are some of the usual suspects in nootropics. Green tea extract, L-theanine, taurine, and Ginkgo biloba are also familiar to many users, although many of the other components might stray into the exotic for most of us. Mucuna pruriens, for example, is a source of L-Dopa, which crosses the blood–brain barrier to increase concentrations of dopamine in the brain; L-Dopa is commonly used to treat dopamine-responsive dystonia and Parkinson’s disease.
The website says that the ‘smart drug’ is designed to provide users with “immediate, noticeable uplift of [their] subjective experience within 20 minutes of taking it, as well as long-term benefits to [their] neurology and overall physiologic functioning.” For people climbing their way up in Silicon Valley, it’s a small price to pay. What would you do with 10 percent more productivity, time, income, or intelligence?
Note: Futurism curates products that we believe in. Here, we have partnered with the team behind Qualia in order to offer readers a 10% discount using the coupon code ‘futurism’. Futurism also has affiliate partnerships, so we may get a share of the revenue from purchases.
Just as ancient Greeks fantasized about soaring flight, today’s imaginations dream of melding minds and machines as a remedy to the pesky problem of human mortality. Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?
Over the last 50 years, researchers at university labs and companies around the world have made impressive progress toward achieving such a vision. Recently, successful entrepreneurs such as Elon Musk (Neuralink) and Bryan Johnson (Kernel) have announced new startups that seek to enhance human capabilities through brain-computer interfacing.
How close are we really to successfully connecting our brains to our technologies? And what might the implications be when our minds are plugged in?
Much of the recent work on BCIs aims to improve the quality of life of people who are paralyzed or have severe motor disabilities. You may have seen some recent accomplishments in the news: University of Pittsburgh researchers use signals recorded inside the brain to control a robotic arm. Stanford researchers can extract the movement intentions of paralyzed patients from their brain signals, allowing them to use a tablet wirelessly.
The most sophisticated BCIs are “bi-directional” BCIs (BBCIs), which can both record from and stimulate the nervous system. At our center, we’re exploring BBCIs as a radical new rehabilitation tool for stroke and spinal cord injury. We’ve shown that a BBCI can be used to strengthen connections between two brain regions or between the brain and the spinal cord, and reroute information around an area of injury to reanimate a paralyzed limb.
With all these successes to date, you might think a brain-computer interface is poised to be the next must-have consumer gadget.
Still Early Days
But a careful look at some of the current BCI demonstrations reveals we still have a way to go: When BCIs produce movements, they are much slower, less precise and less complex than what able-bodied people do easily every day with their limbs. Bionic eyes offer very low-resolution vision; cochlear implants can electronically carry limited speech information but distort the experience of music. And to make all these technologies work, electrodes have to be surgically implanted – a prospect most people today wouldn’t consider.
But all these demos have been in the laboratory – where the rooms are quiet, the test subjects aren’t distracted, the technical setup is long and methodical, and experiments last only long enough to show that a concept is possible. It’s proved very difficult to make these systems fast and robust enough to be of practical use in the real world.
Even with implanted electrodes, another problem with trying to read minds arises from how our brains are structured. We know that each neuron and its thousands of connected neighbors form an unimaginably large and ever-changing network. What might this mean for neuroengineers?
Imagine you’re trying to understand a conversation between a big group of friends about a complicated subject, but you’re allowed to listen to only a single person. You might be able to figure out the very rough topic of what the conversation is about, but definitely not all the details and nuances of the entire discussion. Because even our best implants only allow us to listen to a few small patches of the brain at a time, we can do some impressive things, but we’re nowhere near understanding the full conversation.
There is also what we think of as a language barrier. Neurons communicate with each other through a complex interaction of electrical signals and chemical reactions. This native electro-chemical language can be interpreted with electrical circuits, but it’s not easy. Similarly, when we speak back to the brain using electrical stimulation, it is with a heavy electrical “accent.” This makes it difficult for neurons to understand what the stimulation is trying to convey in the midst of all the other ongoing neural activity.
Finally, there is the problem of damage. Brain tissue is soft and flexible, while most of our electrically conductive materials – the wires that connect to brain tissue – tend to be very rigid. This means that implanted electronics often cause scarring and immune reactions that lead the implants to lose effectiveness over time. Flexible, biocompatible fibers and arrays may eventually help in this regard.
Ultimately, we believe a “co-adaptive” bidirectional BCI, where the electronics learns with the brain and talks back to the brain constantly during the process of learning, may prove to be a necessary step to build the neural bridge. Building such co-adaptive bidirectional BCIs is the goal of our center.
Elon Musk’s new startup Neuralink has the stated ultimate goal of enhancing humans with BCIs to give our brains a leg up in the ongoing arms race between human and artificial intelligence. He hopes that with the ability to connect to our technologies, the human brain could enhance its own capabilities – possibly allowing us to avoid a potential dystopian future where AI has far surpassed natural human capabilities. Such a vision certainly may seem far-off or fanciful, but we shouldn’t dismiss an idea on strangeness alone. After all, self-driving cars were relegated to the realm of science fiction even a decade and a half ago – and now share our roads.
In a closer future, as brain-computer interfaces move beyond restoring function in disabled people to augmenting able-bodied individuals beyond their human capacity, we need to be acutely aware of a host of issues related to consent, privacy, identity, agency and inequality. At our center, a team of philosophers, clinicians and engineers is working actively to address these ethical, moral and social justice issues and offer neuroethical guidelines before the field progresses too far ahead.
Connecting our brains directly to technology may ultimately be a natural progression of how humans have augmented themselves with technology over the ages, from using wheels to overcome our bipedal limitations to making notations on clay tablets and paper to augment our memories. Much like the computers, smartphones and virtual reality headsets of today, augmentative BCIs, when they finally arrive on the consumer market, will be exhilarating, frustrating, risky and, at the same time, full of promise.
Neuralink is Musk’s unconventional proposal for preempting that fear. The budding company will build a device to be implanted into the human brain. This device — which is likely to be called a “neural lace” — would give the human brain the ability to directly interface with gadgets and other devices. It could also improve the human brain’s memory by increasing its storage capacity. Such brain-computer implants could also lead to improved treatments for neurological diseases and cognitive disorders. If that wasn’t impressive enough, such a device could potentially be used to reprogram a person’s neural code.
In a piece he wrote for CNBC, Dustin McKissen wondered aloud how such a technology would be introduced — quite literally — into the public consciousness: “one question Musk hasn’t answered (and in fairness, it may not be his responsibility to answer) is who will have the privilege of getting a neural lace?” McKissen is the founder and CEO of PR and strategy firm McKissen + Company, whose work includes analyzing the effects of politics on the U.S. business climate.
“If the essentialness of maternity care is up for debate, it goes without saying Elon Musk’s neural lace probably won’t be covered under your insurance plan,” McKissen wrote, referring to the Obamacare repeal that has been at the forefront of U.S. political debate as of late. “In other words, not only do the rich seem to get richer—they may get the benefit of having a computer-enhanced brain.”
McKissen warns of how social inequality could render Musk’s neural lace beneficial only to a select few, rather than the human race as a whole. “What will income inequality look like if only the very wealthy get an upgrade? And will children be able to get a neural lace?” he asked. Such a society is reminiscent of the one featured in the science fiction film Elysium, where only the privileged few had access to technology’s benefits.
McKissen added: “Research has shown there is already a digital divide contributing to chronic poverty in low-income and rural communities. That digital divide will only grow when some of us can afford a brain enhanced with artificial intelligence. […] most of us are going to have to compete with computer-enhanced peers in an already unequal world.”
McKissen isn’t arguing that some people would be more deserving of access to advanced technology like a neural lace, but rather, he points out the need to improve the “current playing field” — which one could argue extends beyond the question of who gets a neural lace.
As he said, “In a world that’s growing increasingly class conscious, the ability for a relatively small number of people to become more than human could be a disaster for everyone—especially if that technology arrives in a time when income inequality is even worse than it is today.”
A company based in Stockholm, Sweden, is turning its employees into “cyborgs” using a microchip implant about the size of a grain of rice. Though not the first time such microchip implants have been used, this program is the first example of such implants being made available to a company’s employees on this sort of level.
“The biggest benefit I think is convenience,” Patrick Mesterton, co-founder and CEO of Epicenter, told the Associated Press. Epicenter provides network and office space to budding digital startups, and it is currently home to more than 100 companies and about 2,000 workers. The company calls itself “Stockholm’s first digital House of Innovation,” and it only started implanting workers in January 2015.
The microchips, which are implanted in the hands of employees and startup members, function as swipe cards. “It basically replaces a lot of things you have, other communication devices, whether it be credit cards or keys,” said Mesterton, who demonstrated opening a door just by waving his hand near it. The implant doesn’t just open doors, though. Epicenter’s “cyborg” employees can operate their printers with it or even order smoothies with a wave of their hands.
The devices aren’t mandatory, but “being chipped” has become popular amongst Epicenter’s employees, with more than 150 now implanted with the devices. The company even hosts monthly events where participants can get the implants for free, as well as parties to celebrate those who got implanted.
An obvious concern is security and privacy. “Of course, putting things into your body is quite a big step to do, and it was even for me at first,” Mesterton said, recalling his initial doubts about the implants, which carry information that can be transmitted to other devices via electromagnetic waves, but cannot receive information themselves.
“The data that you could possibly get from a chip that is embedded in your body is a lot different from the data that you can get from a smartphone,” explained Ben Libberton, a microbiologist at Stockholm’s Karolinska Institute. Such devices, he said, can be exploited by hackers to gain huge amounts of information.
“Conceptually you could get data about your health, you could get data about your whereabouts, how often you’re working, how long you’re working, if you’re taking toilet breaks and things like that,” Libberton added. The more sophisticated a microchip is, the bigger the ethical dilemmas that can come with it.
For Mesterton, there really isn’t a problem. “I mean, people have been implanting things into their body, like pacemakers and stuff to control your heart,” he said. “That’s a way, way more serious thing than having a small chip that can actually communicate with devices.”
The media is bursting at its seams with what seems to be the superhero revolution. Comic book publishers like Marvel and DC have spilled over onto the big screen, and now it looks as though they may be spilling over into our real-world technology. While we have been making efforts at a superhero heads-up display and an Iron Man workout suit, we are also inching toward a functional flight suit.
Gravity is a British technology start-up founded by Richard Browning on March 31, 2017. The company has developed a human propulsion system to re-imagine manned flight. With miniaturized jet engines and a customized exoskeleton, the Daedalus is expected to push us into a new era of aviation. Browning and his team developed the suit over the course of 2016, with the team’s journey covered in this short documentary:
Browning’s team put together six kerosene-fuelled micro gas turbines to lift the manned suit — essentially, a man tethered to six compact jet engines. Initial testing began with Browning performing assisted jumps with four engines. Thinking more thrust would help him lift off the ground, the team added two more engines with leg mounts. The final design looks slightly different, as it houses two engines within the suit’s back in conjunction with two engines attached at each arm. The design also has a heads-up display that relays fuel information to the user through a WiFi connection. The suit is expected to reach speeds of up to 160 km/h (100 mph), though you’d need a suit of your own to test that claim.
If you want a custom suit of your own, it is estimated to cost around $250,000. Browning suggests that operators be physically fit in order to maneuver the suit, adding that since he weighs only 70 kilograms (154 pounds), he was able to fly with relative ease.
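A back-of-the-envelope check shows why pilot weight matters so much: to hover, total thrust must exceed the combined weight of pilot and suit. The suit mass and per-engine thrust below are assumptions for illustration, not Gravity’s published specifications:

```python
# Back-of-the-envelope hover check; suit mass and per-engine thrust
# are illustrative assumptions, not Gravity's published specs.
G = 9.81  # gravitational acceleration, m/s^2

def hover_margin(pilot_kg, suit_kg, engine_thrust_n, engines):
    """Return spare thrust (newtons) beyond what hovering requires."""
    weight_n = (pilot_kg + suit_kg) * G
    return engines * engine_thrust_n - weight_n

# A 70 kg pilot in an assumed 20 kg suit, with six micro turbines
# each assumed to produce 220 N of thrust:
margin = hover_margin(70, 20, 220, 6)  # positive -> liftoff is possible
```

Every extra kilogram of pilot or hardware eats directly into that margin, which is why Browning’s low body weight made flying easier.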
Gravity’s developments mark the beginning of a new era of human aviation. While the suit seems clunky now, so did the first computers when they were built. We’re witnessing pockets of intelligent capital develop, and it isn’t too hard to imagine the future of this technology.
Ambrosia, a company inspired by the work done by Stanford University neuroscientist Tony Wyss-Coray with parabiosis in mice, charges $8,000 per patient for its human clinical trial of parabiosis. Although some 600 people may take part in the study, each receiving 1.5 liters of plasma from donors between the ages of 16 and 25, the study is being done without Wyss-Coray’s blessing. He believes that the study does not genuinely represent the science and that “there’s just no clinical evidence, and you’re basically abusing people’s trust and the public excitement around this.”
While Ambrosia is operating without clinical evidence to support the trials, the science behind utilizing young blood in repairing and restoring aged cellular processes is worth taking a look at.
How Could Blood Be Transformed?
Red and white blood cells are produced from stem cells within bone marrow, and as we grow older, our body’s ability to replenish the number of red and white blood cells greatly depletes. Similar to the mouse trials run by Wyss-Coray, researcher Hartmut Geiger and his team at the University of Ulm in Germany looked at the bone marrow in mice at varying ages and determined that older rodents produce very low levels of the protein osteopontin.
Rather than looking at blood transfusions for a possible solution like Wyss-Coray’s team, Geiger’s team looked at the potential of stem cells to test the importance of the deficient protein. The team introduced “fresh” stem cells into mice that had little to no osteopontin and noticed that the stem cells aged very quickly. When older stem cells were introduced to a dish with osteopontin and an activator protein, the stem cells began to propagate blood cells.
While companies like Ambrosia are testing blood transfusions on humans to mimic an experiment that utilized a shared circulatory system between an older mouse and a younger mouse, Geiger’s team notes that long-term studies must be done on their work to verify the effect of osteopontin on rejuvenating cells completely.
The team is developing a drug with the protein and its activating factor, but they do not promise a fountain of youth. They do believe that there would be benefits for the immune systems of the elderly, which may be better positioned to fight diseases that are linked with cardiovascular aging after taking the drug.
While all this talk about immortality is exciting, it might be a while before we can actually reap the benefits of researchers’ studies in the way we hope. In the meantime, we can keep dreaming away death.
Although AI will eliminate some jobs, some believe it will create even more new careers. This shift may look like the last major change in the U.S. economy, when huge numbers of people moved from working in agriculture to working in manufacturing and other urban jobs.
Healthcare is one of the best examples of AI helping humanity. Even a highly experienced human doctor can have a tough time drawing a diagnosis from complex medical imagery as fine details can go unnoticed. It is far easier for an AI system to see the difference between a benign cyst and a malignant tumor, determine how well a burn is healing, and extrapolate how fast blood is flowing through a heart defect.
In addition to keeping us healthy and safe, AI is also helping the environment. Systems are being used to analyze global threats, such as decreasing biodiversity, as well as develop solutions, such as how best to enact conservation efforts.
AI systems are even helping us reduce energy use altogether. Google’s AI subsidiary DeepMind used machine learning on data from Google’s data centers to cut the energy spent cooling them by 40 percent. The TensorFlow machine learning system is open source, so others can harness this tech to reduce their own energy use, too.
6. Revolutionize Agriculture
Autonomous systems are already revolutionizing agriculture, doing everything from planting seeds and fertilizing crops to administering pesticides. Drones are particularly useful as they are cheap and can be communally owned, so the flying devices are being used by farmers all over the world to monitor crops and collect data.
With so much information now available online, sorting the facts from the fiction has become increasingly difficult, but AI could help with that, too. A system could sample many versions of a news story from various validated sources. It could then use algorithms to reconstruct the central, verified facts of the events and remove bias. In this way it could act as a fact checker and fake news buster, augmenting journalistic work. Plenty of research is already going into this, so our days of widespread misinformation online could be numbered.
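One simple version of the cross-source idea is corroboration: keep only the claims that a majority of validated sources agree on. The sketch below is a toy illustration of that principle — a real system would need natural-language processing to extract and align claims, whereas here they are pre-extracted strings:

```python
# Toy corroboration filter: keep claims reported by a majority of
# sources. A real fact-checker would need NLP to extract and align
# claims; here they are pre-extracted strings.
from collections import Counter

def corroborated(sources, quorum=None):
    """Return the claims appearing in at least `quorum` sources."""
    if quorum is None:
        quorum = len(sources) // 2 + 1  # simple majority
    counts = Counter(claim for src in sources for claim in set(src))
    return {claim for claim, n in counts.items() if n >= quorum}

reports = [
    {"bridge closed", "two injured"},
    {"bridge closed", "three injured"},
    {"bridge closed", "two injured", "mayor on scene"},
]
facts = corroborated(reports)  # only majority-supported claims survive
```

Majority voting is crude — it can entrench a widely repeated error — which is why the article frames such systems as augmenting journalistic work rather than replacing it.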
The Tech Is Ours to Program
Artificial intelligence will be most useful to us when we understand and trust it. Though the idea of a rogue autonomous system is prevalent in pop culture, we can’t let that deter us from developing a level of trust, as it’s really the only way we can ever fully enjoy the benefits of AI. Despite their ability to make autonomous, flexible decisions within their narrow areas of focus, AI systems are still predictable in the sense that their overall outcomes are in line with our goals, as long as we continue to program them that way.
Eight years ago, Bill Kochevar was paralyzed in a cycling accident. Since then, he hasn’t been able to move anything below his shoulders — until now.
As part of a trial at Case Western Reserve University, two tiny, 96-channel electrodes were implanted in the motor cortex of Kochevar’s brain. Essentially, the electrodes register the actions of his neurons and respond by signaling a device that stimulates his arm muscles. The system has given him the ability to grasp and lift things the same way he used to, by simply “thinking” the command to his body. He can now drink through a straw after raising a mug of water to his mouth and eat mashed potatoes with a fork.
Kochevar prepared for the trials by learning how to move a virtual reality (VR) arm on a computer using his mind. He practiced this for four months, after which 36 electrodes were surgically implanted into his right arm. The electrodes were strategically placed to control the movement of his upper and lower arm, elbow, hand, shoulder, and wrist.
Now, when Kochevar thinks about moving his arm, the brain signals travel to the brain-computer interface (BCI), which decodes and translates them into commands for intended movements. Those commands are then converted into patterns of electrical pulses by the Functional Electrical Stimulation (FES) system in his arm. The pulses travel through the electrodes to stimulate his muscles, producing the movement.
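The chain just described — record cortical activity, decode an intention, re-encode it as stimulation — can be sketched as a simple pipeline. Everything below is a toy stand-in (a threshold “decoder” and invented muscle and pulse parameters), not the actual Case Western BCI/FES system:

```python
# Toy sketch of a record -> decode -> stimulate pipeline. The decoder
# and muscle/pulse values are invented placeholders, not the actual
# Case Western system.

def decode_intent(cortical_samples, threshold=0.5):
    """Map averaged motor-cortex activity to a movement command."""
    mean_activity = sum(cortical_samples) / len(cortical_samples)
    return "flex_elbow" if mean_activity > threshold else "rest"

def to_stimulation(command):
    """Translate a movement command into FES pulse parameters."""
    patterns = {
        "flex_elbow": {"muscle": "biceps", "pulse_hz": 30, "amp_ma": 8},
        "rest": None,  # no stimulation when at rest
    }
    return patterns[command]

# High recorded activity decodes to an elbow flex and its pulse pattern.
pulses = to_stimulation(decode_intent([0.7, 0.8, 0.6]))
```

The real system’s decoding is far richer — continuous, multi-joint, and trained on months of practice — but the staged structure is the same.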
The research team says that advances needed to use the technology outside of a lab setting are not far from reality.
The brain implant would need to be wireless, but work on that is already underway. The movements themselves would also need to be more precise. That will be achieved once the investigators improve the stimulation and decoding patterns. FES systems that are entirely implantable are already being tested in other clinical research, as well.
“Every day, most of us take for granted that when we will to move, we can move any part of our body with precision and control in multiple directions, and those with traumatic spinal cord injury or any other form of paralysis cannot,” Case Western Reserve School of Medicine associate professor of neurology Benjamin Walter told The Daily Case. “By restoring the communication of the will to move from the brain directly to the body, this work will hopefully begin to restore the hope of millions of paralyzed individuals that someday they will be able to move freely again.”
“For somebody who’s been injured eight years and couldn’t move, being able to move just that little bit is awesome to me,” said Kochevar. “It’s better than I thought it would be.”
It can be impossible for humans to tell apart very similar colors. But, with a new pair of tetrachromatic glasses created by researchers at the University of Wisconsin-Madison, you can never again leave the house wearing two items of black clothing that don’t quite match.
These spectacles enhance the user’s existing color vision, affording them new power to discern more distinct shades. Once developed for practical applications, they could be used to spot camouflaged targets in the field or identify counterfeit money.
Color vision in humans relies on short, medium, and long types of cone cells reacting to blue, green, and red wavelengths of light, respectively. These three colors together make up the normal trichromatic vision of a human. Physicist Mikhail Kats was reviewing his knowledge of the eye as he prepared to teach a photonics class when he had an inspiration: could he trick the eye into seeing as though it had a fourth kind of cone cell, effectively giving people tetrachromatic vision?
Kats decided the answer was “yes,” and the working glasses render blocks of color, called metamers, visibly distinct by helping the wearer’s eye perceive their different wavelengths of light. The team tested the glasses by displaying metamers on a smartphone screen and a computer. The blocks looked identical without the glasses, but with them on, it was easy to see the difference.
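The idea of metamers can be made concrete with a small numeric sketch. Using invented four-band spectra and toy cone sensitivities (not real colorimetric data), two different spectra can produce identical cone responses — and therefore look identical — yet become distinguishable once a filter removes part of the light:

```python
# Toy illustration of metamers: invented 4-band spectra and cone
# sensitivities, not real colorimetric data.

CONES = {  # sensitivity of each cone type in each wavelength band
    "S": [1, 1, 0, 0],
    "M": [0, 1, 1, 0],
    "L": [0, 0, 1, 1],
}

def cone_response(spectrum):
    """Integrate a spectrum against each cone's sensitivity."""
    return {
        cone: sum(s * c for s, c in zip(spectrum, sens))
        for cone, sens in CONES.items()
    }

def apply_filter(spectrum, transmission):
    """Attenuate each band by the filter's transmission."""
    return [s * t for s, t in zip(spectrum, transmission)]

a = [2, 2, 2, 2]
b = [3, 1, 3, 1]      # a metamer of `a`: same cone responses
notch = [0, 1, 1, 1]  # filter that blocks the first band

assert cone_response(a) == cone_response(b)  # identical to the naked eye
assert cone_response(apply_filter(a, notch)) != cone_response(apply_filter(b, notch))
```

This is roughly what the glasses exploit: a filter reshapes the light before it reaches the cones, so spectra that used to collapse to the same three responses no longer do.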
“They look exactly the same and you look through the spectacles and, holy crap, they’re two different things,” Kats said in an interview with New Scientist.
Exploration With Enhanced Senses
Kats’s team is working on filters to help distinguish metamers at the blue end of the visible spectrum, and will move on to greens next. Once more hues can be distinguished, the filters could be adapted for tasks like guarding fruits and vegetables against spoilage by watching their skins for changes. Tech like these tetrachromatic glasses equips us to see more of the universe around us, as part of a broader sensory enhancement movement.
Think of it as a real-life — but significantly scaled-down — application of the concept from the 2009 blockbuster Avatar, in which humans control the body of an alien by remotely transferring human consciousness into another biological body. The team uses a brain-computer interface (BCI) that helps translate brain waves into commands that guide or control the movement of the turtle.
KAIST wrote in a press release, “Unlike previous research that has tried to control animal movement by applying invasive methods, most notably in insects, Professors Phill-Seung Lee of the Mechanical Engineering Department and Sungho Jo of the Computing School propose a conceptual system that can guide an animal’s moving path by controlling its instinctive escape behavior.”
Turtles are instinctively drawn toward light. The team harnessed this natural impulse by attaching to the turtle’s back a light-blocking semi-cylinder that can be opened and closed using human thought, allowing the operator to control the turtle’s movement.
The human-turtle BCI setup features a human-operated, head-mounted display integrated with BCI, while the turtle is controlled using what the team calls the “cyborg system” — a system consisting of a camera, Wi-Fi transceiver, a computer-control module, and a battery attached to the turtle’s shell.
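The control loop described above can be sketched in a few lines. This is a hypothetical toy simulation, not KAIST’s actual system: the decoder, angles, and gains are all invented for illustration.

```python
# A toy simulation of the thought-steered-turtle control loop. Purely
# illustrative: the decoder, angles, and gains are invented, not KAIST's
# actual system.

def decode_bci(raw_signal):
    """Stand-in for a real BCI decoder, which would classify brain-wave
    features; here we just threshold a scalar signal."""
    if raw_signal > 0.5:
        return "right"
    if raw_signal < -0.5:
        return "left"
    return "neutral"

def step(heading, visor_angle, command):
    """One control cycle: the command rotates the light-blocking visor,
    and the turtle's phototaxis pulls its heading toward the open side."""
    if command == "right":
        visor_angle += 10.0
    elif command == "left":
        visor_angle -= 10.0
    heading += 0.5 * (visor_angle - heading)   # instinctive turn toward light
    return heading, visor_angle

heading, visor = 0.0, 0.0
for signal in [0.9, 0.9, 0.0, -0.9]:           # an imagined thought sequence
    heading, visor = step(heading, visor, decode_bci(signal))
print(heading, visor)                           # heading has drifted rightward
```

The key design point matches the press release: the human never drives the turtle’s muscles directly; the interface only manipulates the stimulus that triggers the animal’s own escape-toward-light behavior.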
As much as we’d like this technology to develop into an actual Avatar-like machine that will one day allow us to control alien life forms, this kind of brain-interface tech will probably find more use here on Earth. Possible applications include augmented and virtual reality, positioning systems, and military reconnaissance and surveillance, especially given the technology’s versatility.
The research team highlighted that versatility, as demonstrated by the variety of environments in which the tech was effective, in the study they published in the Journal of Bionic Engineering. The interface was able to get the turtles moving indoors and outdoors as well as across different surfaces like grass and gravel. The human operators even guided the turtles past multiple obstacles, like shallow water and trees — a capability that could prove handy during stealthy military operations.
In recent years, media has been bursting at the seams with what can only be described as the superhero revolution. From the original comic books to the big screens in Hollywood, we are surrounded by enhanced humans from the far reaches of our imagination. And as visions of enhanced humans inspire our art, they are also spurring biological and technological advances that could make these visions a reality. Let’s take a look at where we are right now.
A Suit of Armor Like Iron Man
While we’re nowhere near developing the iconic gold and red suit anytime soon, we are making progress. Researchers across the world are developing exoskeleton suits to enhance human capabilities. Whether it’s to assist those with disabilities or enhance our country’s armed forces — exoskeleton suits are definitely making their debut. But, as the Marvel films suggest, the greatest limiting factor in creating an Iron Man-like suit is the power source.
A suit with such power needs an energy source just as powerful. While it does seem like an uphill battle without an arc reactor, an Iron Man suit of armor seems to be on the more plausible side of the spectrum.
An X-Gene From The X-Men
The conversation takes a difficult turn when you involve genetics. This isn’t about modifying metal, but humans.
While scientists have been able to manipulate mammals in the past to amplify physical traits, we are still a long way from having our own Steve Rogers or a cadre of X-Men. But we may now have the means to achieve that result. Scientists believe that studying epigenetics could help identify, and perhaps enhance, traits like strength, stamina, and a host of other physical abilities that we would normally associate with a superhero.
With Great Power…
It’s not a question of whether or not superhumans will be a reality, but a question of when. We already have clinical trials to fight aging, so why not push humans forward in other avenues? The heart of the issue lies in ethics, society, and finances.
So until an eccentric billionaire comes around to build his or her own super suit, we might have to wait until CRISPR/Cas9 does the job for our own bodies.
A group of researchers from Saarland University in Saarbrücken, Germany, is giving a new meaning to the phrase “comfortable in your own skin.” In a remarkable application of human-computer interaction technology, the research team, led by Martin Weigel, is working on research concepts for uber-thin temporary electronic tattoos that turn your skin into touch-sensitive buttons.
The technology prints wires and electrodes on temporary tattoo paper using a conductive ink. The tattoos come out thinner than the width of a human hair. The e-tattoo can be transferred onto skin using water and lasts a couple of days before completely rubbing off.
Skin and Screen Improvement
According to Weigel, SkinMarks can be put on bumps and birthmarks — or other notable skin features, such as wrinkles and freckles — that people know intuitively, making these spots ideal for touch-sensitive buttons. For example, it’s possible to adjust the volume of your smartphone by sliding your finger across a tattoo placed on the side of another finger. Bending this tattooed finger, however, turns the volume slider into a play and pause button.
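The interaction described here amounts to a small piece of event-handling logic. The sketch below is hypothetical (the class name, thresholds, and API are invented, not the SkinMarks software): the same tattoo strip acts as a slider when the finger is straight and as a toggle when it is bent.

```python
# Hypothetical sketch of the SkinMarks interaction described above. All names
# and values are invented for illustration; this is not the project's code.

class SkinMarkController:
    def __init__(self):
        self.volume = 50        # percent
        self.playing = False

    def on_touch(self, position, finger_bent):
        """Handle a touch on the tattoo strip.

        position: 0.0-1.0 along the strip, from a capacitive reading.
        finger_bent: whether the tattooed finger is currently bent.
        """
        if finger_bent:
            self.playing = not self.playing     # bent finger: tap toggles playback
        else:
            self.volume = int(position * 100)   # straight finger: slide sets volume

ctrl = SkinMarkController()
ctrl.on_touch(0.8, finger_bent=False)
print(ctrl.volume)        # 80
ctrl.on_touch(0.8, finger_bent=True)
print(ctrl.playing)       # True
```

The point of the design is that one physical surface carries two input modes, with the finger’s pose selecting between them.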
An added feature that makes SkinMarks stand out from other e-tattoos is electroluminescence: the tattoos glow whenever a current passes through them. A tattoo shaped like your favorite app can light up when you receive a notification.
“Now we’ve tested the technological feasibility, the next step is to look at implementing it in a practical way,” Weigel said to New Scientist.
Chris Harrison from Carnegie Mellon University thinks the technology will be available in 10 years. “You’ll have these digital tattoo parlours which you can go to in 2050 and 5 minutes later you can walk out with the iPhone 22 on your forearm,” he said in an interview with New Scientist.
“This is amazing research,” he added, noting that the skin gives a bigger surface area than any smartphone out there. “Human fingers are quite nimble on their own skin.”
Everywhere you go, you can find scientists and engineers doing 3-D printing. They may be using it to print bridges over water, or buildings and houses, or even hearts and livers and skull parts. In fact, we’re hearing so much about 3-D printers recreating the normal and ordinary stuff that it’s becoming pretty boring.
This, of course, is how technology makes progress: slowly, with iterative changes being added all the time. We’re currently using 3-D printers just to recreate the old stuff we’re used to. Makers and creators today are mainly interested in demonstrating the capabilities of the printers, and put less emphasis on innovating and creating items that have never existed before — and of course, clients and customers don’t want anything too extraordinary either. That’s the reason we’re 3-D printing a prosthetic ear that looks just like a normal ear, instead of printing a Vulcan ear.
What happens if we let go of the ordinary and customary, and begin rethinking and reimagining the items and organs we currently have? That’s just what Manu S. Mannoor, Michael C. McAlpine, and their groups at Princeton and Johns Hopkins Universities did. They used a 3-D printer to create cartilage tissue in the shape of a human ear, along with a conductive polymer infused with silver nanoparticles. The end result? A bionic ear that should look and feel just like an ordinary ear, but has increased radio frequency reception. It is not far-fetched to say that Mannoor and McAlpine have printed the first biological ear that could also double as a radio receiver.
Where else may we see such a combination between the biological and the synthetic? This is a fascinating thought experiment that could help us generate a few forecasts about the future. If I had to guess, I would venture a few combinations for the next twenty years –
Radio-conductive bones: have you come in for a hip replacement, and also happen to have a pacemaker or some other implant? The researchers will supply you with a hip bone printed specifically for you, which will also contain conductive elements that help radio waves penetrate deeper into the body, so the implants can more easily receive energy from outside via radio waves or some kind of induction.
Drug-delivering tattoos: this item is not 3-D printed, but it’s still an intriguing combination of a few different concepts. Tattoos are essentially the result of an injection of nano- and micro-particles under the skin. Why not use specific particles for added purposes? You could create beautiful tattoos of dragons and princesses and butterflies that also deliver medicine and insulin to the bloodstream, or even release adrenaline when pressed or when exposed to a certain electrical field that makes the particles release their load. Now here’s a tattoo that army generals are going to wish their soldiers had!
Exquisite fingernails: the most modern 3-D printers come with a camera and A.I. built-in, so that they can print straight on existing items that the user places in the printer. Why don’t we make a 3-D printer that can print directly on fingernails with certain kinds of materials? The fingernails of the future – which will be printed anew every day – might contain tiny batteries that will power smartphones by touch, or microphones that could record everything that happens around the user.
These are obviously just three rudimentary ideas, but they serve to show what we could gain by leaving behind the idea that new manufacturing technologies should adhere to the “old and proven,” and advance ahead to novel utilities.
In the end, the future is never just “same old same old,” but is all about shedding the customs of the past and creating new ones. And so, if I had to guess, I would wager that such a unification of concepts into new and bizarre devices gives us a much more accurate view of the future than we get today from demonstrations of 3-D printers building yet another house or yet another human organ.
What are your ideas for future combinations of biological and synthetic components? Write them down in the comments section!
The inevitability of aging may be no more than yet another biological theory that scientific advances will retire in the near future. Some scientists today say that the human lifespan need no longer be treated as a static law of nature, but as a limit that can be rewritten to our benefit.
Researchers from fields spanning genetics to artificial intelligence (AI) are working towards a future where we will have to stop using a “midlife crisis” to justify our ill-advised decisions (but is it really ever the wrong time to buy a Porsche?).
While there have been innumerable theoretical ideas and initiatives for dodging the Grim Reaper, many actual strategies that are being developed today fall into one of two camps: biomedical or technological.
To win the biological battle against death, we must start by understanding the enemy. Aging is a process that begins on a cellular level. As our cells divide, their DNA and functionality slowly break down, leading to greater susceptibility to damage and disease. For many years, scientists had hoped that telomeres might be the key to halting aging, but these “caps” that protect our DNA have proven to be more complex than initially thought.
When biologists discovered that our telomeres wear away as we age, they theorized that lengthening these strands of protective DNA could also elongate our lives — perhaps indefinitely. But researchers have since found that multiple processes in cells work to tightly regulate telomere lengths because telomeres that are too long can actually have negative consequences, like cancer. Many labs around the world are studying the molecular mechanisms that lengthen and trim telomeres in the cell in order to find out whether they can be modified to keep telomeres at the Goldilocks length — not too long or too short.
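The “Goldilocks length” idea can be made concrete with a toy simulation. Every constant below is invented for illustration (real telomere dynamics are far more complicated): telomeres shorten with each division, a maintenance term re-lengthens them, and a cell fails outside a healthy band.

```python
import random

# A toy simulation of the telomere "Goldilocks" idea. All constants are
# invented for illustration; real telomere biology is far more complex.

def simulate(divisions, lengthening_per_division, start=10000,
             low=4000, high=15000, seed=1):
    """Track telomere length (in base pairs) over repeated cell divisions."""
    rng = random.Random(seed)
    telomere = start
    for _ in range(divisions):
        telomere -= rng.randint(50, 150)            # loss at each replication
        telomere += lengthening_per_division        # maintenance activity
        if telomere < low:
            return "senescent"                      # too short: cell stops dividing
        if telomere > high:
            return "unregulated growth risk"        # too long: cancer-like risk
    return "healthy"

print(simulate(80, 0))      # no maintenance: telomeres run out -> senescent
print(simulate(80, 100))    # balanced maintenance: stays in the healthy band
print(simulate(80, 400))    # too much lengthening: overshoots the upper bound
```

The middle case is the “not too long, not too short” regime the labs described above are trying to engineer.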
Concurrently, technology for genetic editing has advanced by leaps and bounds. The revolutionary process for editing DNA, “CRISPR/Cas9,” has allowed scientists to make targeted genetic changes in living mammalian cells, and has even worked to save a little girl from cancer. The tool has so many applications, researchers have coined the phrase “CRISPR it!” (who says scientists are out of touch?).
Genetic engineering is only one of the many medical advances that are being used to prolong our lives. However, some see the human body as a vessel not to be fixed, but rather, replaced.
Technology has become fully integrated with modern life, becoming, in effect, an extension of its users. And as virtual brain modeling and AI design have advanced, many have raised the question: at what point does virtual reality become… reality?
We can design servers — and even mechanical bodies — that will last far longer than our current bodies will. And consider that these robotic bodies could endure a much wider range of conditions than our fleshly ones can. Just in terms of space travel, designing a ship that would not need food, water, or atmosphere would be much more efficient. Also, it wouldn’t matter how long the trip to another world would take — we wouldn’t even need hibernation chambers!
While some reject the idea of preserving humanity outside of the actual human body, leaders like Elon Musk are taking steps towards the potential merger of man and machine. Musk has indicated that in the near future he will be unveiling a neural lace, a device that can enhance and grow with your brain. Once computers can seamlessly interact with our minds, it’s conceivable that they may be able to store our minds as well.
Visionaries like Musk, along with other companies chasing immortality, are forcing us to reconsider our own mortality: what would life look like divorced from death? What price are we willing to pay for another 100 years of life? Another 1,000? And if that’s the case, then what does it mean to be human?
If technology keeps advancing at its present rate, we may have to figure out our answers to these questions sooner rather than later.
CRISPR-Cas9 is the most advanced and efficient gene editing tool we have. Its uses, however, have been largely limited to silencing protein-coding genes in the DNA. This leaves out what’s called the DNA “dark matter” — the non-coding DNA that makes up about 99 percent of our genetic code. A study published in the journal PLOS Computational Biology may soon change this.
The new technique, developed by a team of researchers led by Carlos Pulido, is a software pipeline called CRISPETa. It’s based on a breakthrough tool (which uses CRISPR-Cas9) called DECKO. The tool was recently developed by the Johnson lab, and was specifically designed for deleting those pieces of non-coding DNA. DECKO employs two sgRNAs as “molecular scissors” that snip out a piece of DNA. While the concept might seem simple, designing deletion experiments using DECKO was time consuming due to the lack of software to create the required sgRNAs.
This is where CRISPETa comes in. Users tell CRISPETa which region of DNA they wish to delete, and the software generates a pair of optimized sgRNAs that can be used directly for that experiment. Even better, the software can produce designs at high scale, which would allow for future screening experiments as well.
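To make the idea concrete, here is a heavily simplified sketch of what paired-guide deletion design involves: find a 20-nt protospacer followed by an NGG PAM on each side of the target region, so the two cuts excise everything in between. This illustrates the general principle only; it is not CRISPETa’s actual algorithm, which also scores and optimizes the guides.

```python
import re

# A heavily simplified sketch of paired-guide deletion design, in the spirit
# of tools like CRISPETa but NOT their actual algorithm: find one Cas9 site
# (20-nt protospacer followed by an NGG PAM) on each side of the target.

def find_guides(sequence):
    """Return (guide, cut_position) for each 20nt+NGG site on the + strand.
    Cas9 cuts ~3 bp upstream of the PAM; the lookahead allows overlapping hits."""
    sites = []
    for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", sequence):
        sites.append((m.group(1), m.start() + 17))
    return sites

def design_deletion(sequence, del_start, del_end):
    """Pick the closest guide pair whose cuts flank [del_start, del_end)."""
    sites = find_guides(sequence)
    upstream = [s for s in sites if s[1] <= del_start]
    downstream = [s for s in sites if s[1] >= del_end]
    if not upstream or not downstream:
        return None
    return max(upstream, key=lambda s: s[1]), min(downstream, key=lambda s: s[1])

# A 68-bp toy sequence: guide sites flank a 20-bp "non-coding" target.
seq = ("ACGTACGTACGTACGTACGT" + "AGGT"     # upstream protospacer + AGG PAM
       + "T" * 20                           # the region we want to delete
       + "ACGTACGTACGTACGTACGT" + "TGGA")   # downstream protospacer + TGG PAM
pair = design_deletion(seq, del_start=24, del_end=44)
print([cut for _, cut in pair])             # cut sites flanking the region
```

A real pipeline additionally checks the minus strand, ranks candidates by predicted efficiency and off-target risk, and outputs oligos ready for cloning; the hard part CRISPETa automates is exactly this search-and-scoring step at scale.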
“We hope that this new software tool will allow the greatest possible number of researchers to harness the power of CRISPR deletion in their research,” Pulido said.
CRISPETa designs have already demonstrated their ability to efficiently delete desired targets in human cells. “Ultimately, we expect that CRISPR deletion and other genome engineering tools will lead to a revolution in our ability to understand the genomic basis of disease, particularly in the 99% of DNA that does not encode proteins,” researcher Rory Johnson explained. The deletions could also be carried out in RNA molecules.
“Apart from being used as a basic research tool, CRISPR may even be used in the future as a powerful therapeutic to reverse disease-causing mutations,” Johnson added. This is the underlying value of the research: the software could be used to design CRISPR scissors that delete suspected disease-causing non-coding DNA. At the very least, CRISPETa will improve our understanding of non-coding DNA, which could lead to the discovery of new disease-associated sequences and aid in the development of new drugs to treat, and maybe eventually cure, the diseases they cause.
The Cognitive Genomics Project is focused on understanding the origin of intelligence within our own genome. It’s led by BGI, a non-profit research group based in Shenzhen, China, that was founded in 1999. The organization is currently conducting a gene-trait association study of g, a general factor of intelligence. General intelligence is characterized by three prominent qualities: stability, heritability, and predictive power. In short, the study is collecting genetic data from over 20,000 individuals who have an IQ above 150, and looking for patterns in their genes.
While this might seem relatively straightforward, it’s actually a complex and difficult task. That’s because general intelligence does not follow Mendelian, single-gene genetics. Researchers cannot simply look for specific mutations in specific genes, as they do for diseases like Huntington’s disease or cystic fibrosis. Rather, intelligence is more similar to traits like eye color and hair color that involve multiple genes in inheritance patterns that we are just beginning to understand.
No Gene Editing Needed?
It remains to be seen how effective gene editing can be at influencing traits like personality and intelligence in people whose brains have already formed. One way to avoid the gene editing process entirely is to genetically design intelligence into our children from conception. We could use in vitro fertilization and carefully screen the genetic information of each embryo produced for preferred traits.
If the Cognitive Genomics Project provides significant data supporting a correlation between particular parts of the genome and intelligence, then parents could look for these genetic sequences in potential embryos and select the ones with the desired traits. This method would increase the probability of intelligent children without having to edit particular genome sequences.
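The selection step described here amounts to computing a polygenic score for each embryo and ranking the results. The sketch below is a toy illustration: the variant IDs, effect weights, and genotypes are all invented, and real polygenic scores involve thousands of variants with tiny, noisy effects.

```python
# A toy sketch of embryo screening by polygenic score. All rsIDs, weights,
# and genotypes below are invented for illustration.

# effect size of each hypothetical trait-associated variant
weights = {"rs001": 0.9, "rs002": 0.4, "rs003": 0.2, "rs004": 0.7}

# genotype = copies (0, 1, or 2) of the trait-associated allele
embryos = {
    "embryo_A": {"rs001": 2, "rs002": 0, "rs003": 1, "rs004": 1},
    "embryo_B": {"rs001": 0, "rs002": 2, "rs003": 2, "rs004": 0},
    "embryo_C": {"rs001": 1, "rs002": 1, "rs003": 0, "rs004": 1},
}

def polygenic_score(genotype):
    """Weighted allele count: a simple additive polygenic score."""
    return sum(weights[rsid] * copies for rsid, copies in genotype.items())

ranked = sorted(embryos, key=lambda e: polygenic_score(embryos[e]), reverse=True)
print(ranked)    # highest-scoring embryo first
```

Note that this only ranks probabilities, not outcomes: an additive score captures a fraction of a highly polygenic trait, which is exactly why large association studies like BGI’s are needed to estimate the weights at all.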
While the ethics of human genetic engineering continue to be debated, we may be closer to a more intelligent humanity than ever before.
Researchers from Stanford University have built a new brain-computer interface (BCI) that allows paralyzed people to type with their brains at a rate faster than was previously possible.
For their study, they worked with three participants with severe limb weakness, two due to Lou Gehrig’s disease and one with limited movement due to a spinal cord injury. All three participants had electrode arrays implanted in their brains to record signals from the motor cortex, which controls muscle movement. A cable transmitted the brain signals to a computer, where algorithms translated them into point-and-click commands that moved a cursor over letters. Essentially, participants were able to move the cursor by imagining their own hand movements.
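At its core, this kind of decoding maps recorded firing rates to cursor velocity. The sketch below is a bare-bones linear decoder with an invented weight matrix, not the Stanford team’s algorithm; real systems fit the mapping from calibration data, typically add Kalman filtering, and include a separate click classifier.

```python
import numpy as np

# A bare-bones sketch of velocity decoding: motor-cortex firing rates are
# mapped linearly to a 2-D cursor velocity. The weight matrix is invented;
# real decoders are fit from calibration data.

# each of 4 channels "votes" for an (x, y) velocity component
W = np.array([[ 1.0,  0.0],    # channel 0 pushes the cursor right
              [-1.0,  0.0],    # channel 1 pushes it left
              [ 0.0,  1.0],    # channel 2 pushes it up
              [ 0.0, -1.0]])   # channel 3 pushes it down

def decode_velocity(firing_rates):
    """Map a length-4 firing-rate vector (Hz) to an (x, y) cursor velocity."""
    return firing_rates @ W

cursor = np.zeros(2)
# imagined "move right": channel 0 fires more than its opponent channel 1
for _ in range(3):
    rates = np.array([20.0, 5.0, 10.0, 10.0])
    cursor += 0.01 * decode_velocity(rates)    # integrate one 10 ms tick
print(cursor)    # x has grown, y is unchanged
```

Integrating decoded velocity tick by tick is what lets the participant steer the cursor continuously toward a letter, with a dwell or click signal selecting it.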
While it took some training, the subjects were able to achieve BCI results that outperformed any platform available today. One patient was able to type 39 correct characters per minute, which is around eight words per minute, demonstrating that BCIs have the potential to enhance communication among people with impaired movement. It should be noted that these typing rates were achieved without the use of automatic word completion.
“The performance is really exciting. We’re achieving communication rates that many people with arm and hand paralysis would find useful. That’s a critical step for making devices that could be suitable for real-world use,” Chethan Pandarinath, one of the authors of the paper, told Stanford.
BCIs on the Rise
The Stanford team’s contribution toward advancing people’s ability to control machines with just their thoughts reflects the progress being made in this field. In addition to this breakthrough, a team from the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, built a BCI that can decipher thoughts of people with complete locked-in syndrome, giving them the ability to answer yes or no questions. Meanwhile, researchers at Harvard are using BCI tech to restore sight in the blind.
In the U.S. alone, millions of people suffer from paralysis, some from degenerative conditions like ALS and others from unfortunate accidents that cause severe spinal cord injury. This point-and-click approach developed at Stanford could be the key to making a more efficient BCI geared toward improving the quality of life for those people. It could restore their ability to communicate while teams like that at the Center for Sensorimotor Neural Engineering (CSNE) work on ways to use BCIs to help the paralyzed walk again.
With some modifications, the team believes its system could be applied to other devices, including smartphones or tablets. “We’re approaching half of what, for example, I could probably type on a cell phone,” neurosurgeon and co-senior author Jaimie Henderson of Stanford University told Scientific American. “This study reports the highest speed and accuracy, by a factor of three, over what’s been shown before.” In the future, the team hopes to refine the technology so that it becomes wireless, fully implanted, and self-calibrating, thus being able to provide around-the-clock support.
It’s been two years since Neil Harbisson was declared the world’s first legally recognized cyborg after he corrected his colorblindness by installing an antenna directly into his skull. Soon, he may be one of many.
Our technological ability to augment the human body with electronic devices has finally reached the point where it can solve life-threatening ailments. We are not talking about simply making life “cooler” with some new gizmo — we are talking about saving the 22 people who die every day waiting on the U.S. national transplant list.
People who have had to constantly manage type 1 diabetes for most of their lives may soon have access to an artificial pancreas. The device allows users to hook up their smartphones to a sensor that continuously monitors their blood sugar, and the phone uses that information, as well as information from the user about their activity level, to tell a pump how much insulin to release into the user’s blood at any given moment. The artificial pancreas is currently in clinical trials to assess its safety and efficacy in people with type 1 diabetes.
The pancreas is not the only organ that could be replaced with an artificial device. Many companies are investing in the development of artificial organs. According to a recently released analysis from Big Market Research, the market is expected to surge over the next decade. An implantable artificial kidney may be available to patients currently enduring dialysis by the end of the decade, according to some reports.
These artificial organs, some electronic, some crafted by 3D printing with cultured cells, may be able to replace failing hearts and livers and supplant defective eyes and ears. When these technologies finally replace our woefully inadequate organ donation system, the more than 119,000 men, women, and children waiting for organs in the U.S. alone could finally get out of the hospital and live their lives free of fear. Artificial organs could also spare recipients from taking immunosuppressants for the rest of their lives.
Implants for Everyone
Alright, in addition to saving lives, cybernetics might make life cooler with new gizmos. Leaders in the field like Elon Musk are developing technologies that will not simply restore the bodies of the sick and disabled, but enhance the bodies of the well. Musk has hinted that he plans to soon reveal his neural lace, a device that could enhance and grow with your brain and that is designed to bring human intelligence up to speed with artificial intelligence. One could also easily imagine that if artificial sensory organs like eyes and ears can be designed for the blind and deaf, they could be upgraded to give people super senses as well.
Cybernetics may not only increase the quality of life for both the sick and the well, but increase the quantity of life as well. Every organ we are born with has an expiration date, but this may not be so with our future cybernetic parts. Could artificial organs and cybernetics launch us into the age of immortality? Many hope so, but only time will tell.
While most of the proposed CRISPR applications are focused on editing somatic (non-reproductive) cells, altering germline (reproductive) cells is also a very real possibility. This prospect of editing germline cells and making changes that would be passed on from generation to generation has sparked a heated ethical debate.
The potential to change someone’s DNA even before they are born has led to claims that CRISPR will be used to create “designer babies.” Detractors were appalled at the hubris of science being used to engineer the human race. Supporters, on the other hand, are saying this ability should be a human right.
Rigging the Game
To be fair, most advocates of genetic editing aren’t rallying for support so CRISPR can be used to create a superior human race. Rather, they believe people should have free access to technology that is capable of curing diseases. It’s not about rigging the genetic game — it’s about putting the technique to good use while following a set of ethical recommendations.
To that end, a panel of experts chosen by the National Academy of Sciences and the National Academy of Medicine released a series of guidelines that essentially gives gene editing a “yellow light.” These guidelines support gene editing on the premise that it follows a set of stringent rules and is conducted with proper oversight and precaution.
Obviously, genetic enhancement would not be supported under these guidelines, which leaves some proponents miffed. Josiah Zaynor, whose online company The ODIN sells kits allowing people to conduct simple genetic engineering experiments at home, is among those who are adamant that gene editing should be a human right. He expressed his views on the subject in an interview with The Outline:
We are at the first time in the history of humanity where we can no longer be stuck with the genes we are dealt. As a society we have begun to see how choice is a right, but for some reason when it comes to genetics, some people think we shouldn’t have a choice. I can be smart and attractive, but everyone else should be ugly, fat, and short because those are the genes they were dealt and they should just deal with it.
The immense potential of gene editing to change humanity means the technology will continue to be plagued by ethical and philosophical concerns. Given the pace of advancement, however, it’s good that we’re having this debate on what and who it should be used for right now.
Brain-computer interfaces (BCIs) are more advanced now than they’ve ever been, thanks mostly to research that has improved our understanding of how our brains work. BCIs make it possible for thoughts to be translated into machine commands, giving hope to people suffering from neurodegenerative diseases or those who require the use of prosthetics.
Advanced as they are now, BCIs still mainly rely on implantable chips and electrodes that are placed on the brain’s surface to receive and transmit signals from brain chemicals known as neurotransmitters. Today’s state-of-the-art electrodes, made from thin-film platinum, are highly susceptible to corrosion over time. The Center for Sensorimotor Neural Engineering (CSNE) — a partnership between San Diego State University, the University of Washington (UW), and the Massachusetts Institute of Technology (MIT) — is working on research to significantly improve this technology. Their study is published in the journal Nature Scientific Reports.
Instead of using platinum, the researchers, led by CSNE deputy director Sam Kassegne, developed electrodes built from a form of carbon called “glassy carbon.” “Glassy carbon is much more promising for reading signals directly from neurotransmitters,” Kassegne told San Diego State University. “You get about twice as much signal-to-noise. It’s a much clearer signal and easier to interpret.” Furthermore, because glassy carbon is 10 times smoother than granular thin-film platinum, it corrodes more slowly under electrical stimulation and can last longer than platinum or any other material used for electrodes.
Neural Stimulation and Growth
The researchers think that glassy carbon can do much more than just improve brain signal transmission. They are also using the improved BCIs built using glassy carbon to record neural signals along the brain’s cortical surface and from inside the brain at the same time. “If you record from deeper in the brain, you can record from single neurons,” explained researcher Elisa Castagnola. “On the surface, you can record from clusters. This combination gives you a better understanding of the complex nature of brain signaling.”
Another potential application is even more promising. Mieko Hirabayashi, a student in Kassegne’s lab, is exploring the possibility of using glassy carbon technology to make neural cells in spinal cords grow to replace damaged tissues. Hirabayashi is experimenting with rats to test if this neural growth can be triggered by precisely calibrated electrical stimulation. Without the new glassy carbon electrodes, Hirabayashi wouldn’t be able to stimulate, read the electrical signals from, or even simply detect the neurotransmitters as easily.
With all the successes that existing BCI technologies have produced, it’s not difficult to imagine a future in which a true merging of human and machine is possible. Improvements such as CSNE’s research bring this future closer to today’s reality.
In 2006, 21-year-old Andy Sandness unsuccessfully attempted suicide. He had aimed a bullet at his chin, destroying the lower half of his face. He was rushed to Mayo Clinic in Rochester, Minnesota, where doctors finally stabilized him, but nothing could be done about his missing jaw, teeth, and nose.
Sandness went back to his home and job in Wyoming, but six years later, the Mayo Clinic delivered some hopeful news. In 2012, the hospital proposed the idea of a face transplant, a procedure with many risks and possible complications. But Sandness accepted the dangers and, staying hopeful, agreed to the operation.
“When you look like I looked and you function like I functioned, every little bit of hope that you have, you just jump on it,” Sandness said. “This was the surgery that was going to take me back to normal.”
Preparations for Andy’s facial reconstruction surgery took three years, while doctors trained for the procedure over 50 Saturdays.
In 2016, he was added to the organ donor list. Just five months later, he received notice of a donor whose wife consented to the face transplant.
The procedure stretched across 56 hours and involved 60 different staff members. The doctors were able to successfully transplant Sandness’ nose, cheeks, mouth, jaw, lips, teeth, and chin.
Since the operation, Andy has noticed that people haven’t been staring at him nearly as much as before he underwent reconstructive surgery. He’s also regained his sense of smell, taste, and his ability to breathe efficiently.
Andy’s story might involve a fairly uncommon transplant surgery, but his operation is one of many transplant procedures that are changing, and in some cases saving, lives.
Last month, researchers were able to use retinal transplants to partially restore sight in a group of mice. This success could lead to clinical trials in humans within the near future. Last year, doctors performed four uterus transplants, allowing women to choose whether or not they want children when they previously had no choice. In late 2016, doctors at the Leeds Teaching Hospital NHS Trust announced that they’re beginning to offer hand transplants — the first hospital in the world to do so.
With further advances in transplant surgery, doctors will be able to significantly improve the quality of life in patients who thought they’d never experience a normal life.
Advances in medicine and technology are revolutionizing what it means to be human. With gene editing tools such as CRISPR, we're well on our way to personalizing the medical revolution.
CRISPR provides a way for us to alter gene expression in particular cells, based on need. Up until recently, the process was very difficult to execute. It took many years to develop precision when altering gene expression. With the latest technology, the desired precision can be obtained in just a few weeks. CRISPR, and other technologies like it, are shaping the future of medicine.
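To make the targeting idea concrete, here is a toy sketch (not a lab tool) of the first step in CRISPR-Cas9 design: Cas9 cuts where a 20-nucleotide "protospacer" sits immediately upstream of an "NGG" PAM motif, and a guide RNA is designed to match that site. The sequence and helper function below are purely illustrative.

```python
# Toy illustration of Cas9-style target search: scan a DNA string for
# "NGG" PAM motifs that have room for a 20-nt protospacer before them.

def find_cas9_targets(dna: str, protospacer_len: int = 20):
    """Return (start, protospacer, pam) for every N-G-G PAM site
    preceded by a full-length protospacer."""
    targets = []
    for i in range(protospacer_len, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":  # PAM pattern is N-G-G (any base, then G, G)
            protospacer = dna[i - protospacer_len:i]
            targets.append((i - protospacer_len, protospacer, pam))
    return targets

if __name__ == "__main__":
    # A made-up sequence with exactly one PAM site.
    seq = "ACGT" * 6 + "TGG" + "ACGT" * 3
    for start, proto, pam in find_cas9_targets(seq):
        print(f"site @ {start}: {proto} | PAM {pam}")
```

Real guide design also has to weigh off-target matches elsewhere in the genome, which is one reason precision historically took years to achieve.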
When all possible treatments had failed, the parents of Layla, a 1-year-old with leukemia, sought help from new technology developed by Cellectis — a biopharmaceutical company based in Paris, France. The gene editing therapy, which was still experimental, had been utilized once before in a patient with HIV.
Physicians and scientists worked together to give Layla immune cells from a healthy donor that had been modified with a gene editing tool. In this case, TALEN — a DNA-cutting enzyme — was utilized to modify the donor T-cells so that they would not attack Layla’s own cells. In order for the treatment to work, a patient’s immune system is essentially destroyed and replaced with the modified cells. However, this is not a permanent fix: it’s just a temporary solution until a matching T-cell donor can be found.
While Layla’s doctors believe she is in remission, only time will tell if this was a “one-off” fix or a case that may need revisiting. Additional trials are needed so physicians and scientists can better understand how gene-editing can benefit patients, and treat diseases other than cancer.
Without a doubt, computers and machines are besting humans in many ways thanks to developments in artificial intelligence (AI). AIs have beaten their human counterparts at everything from games like Go and poker to driving safely and treating illnesses. Clearly, intelligent machines are increasingly becoming better than humans at so many things, and as the old saying goes, if you can’t beat them…merge with them.
As Elon Musk has put it: "To some degree, we are already cyborgs. You think of the digital tools that you have, the applications that you have. You can ask a question and instantly get an answer from Google. You already have a digital tertiary layer. I say 'tertiary' because you think of the animal brain or the primal brain, and then the cortex, the thinking brain, and then your digital self as the third layer."
Hasta la Vista?
The potential for this human-machine merger isn’t lost on DARPA, the research arm of the U.S. Department of Defense. “There are a couple of very interesting things happening as we speak facilitating humans and machines working together in a very different way,” said Justin Sanchez, director of DARPA’s Biological Technologies Office, in an interview with Computer World.
Indeed, we are already seeing this to a certain extent with the latest developments in exoskeleton technology. We now have smart exoskeletons that help paralyzed people walk again or improve the strength and endurance of the average person. “I think the recent science and technology developments we’re making at DARPA, as well as the embracing of physiology and A.I., is enabling us to set up the conditions for profound changes on how humans and machines can work together,” Sanchez added. “We are giving our physiology the opportunity to work with machines in a different way.”
DARPA researchers are developing implantable devices that can perform computing functions similar to a desktop or laptop computer. Sanchez even noted that they have researchers working on the possibility of a human-computer interface using devices and chips that needn’t be implanted in the body. Another potential technology is a pair of smart contact lenses that allow soldiers to see what a drone sees from above.
Of course, while the technological possibilities are already here, there are other things that need to be considered, such as what these enhanced capabilities would mean for humanity in general. “We deeply feel that we can’t do this work in a vacuum,” Sanchez said. “We need to consider all aspects … the moment we try to even start thinking about science and technology in this space. There’s a responsibility that goes along with this.”
The neural lace is a concept that has been a staple of science fiction: the idea of capturing the processes of the human mind with a tool and enhancing them to better suit our environment and lifestyle in one large, wirelessly connected interface among humans. Recently, this concept has taken a step in its transition from the realm of fiction to non-fiction.
With their results published in Nature Nanotechnology, scientist Charles Lieber and his team have developed syringe-injectable electronics. Thus far, the devices have only been tested on live mice, but the results are promising. These electronics can monitor brain activity, deliver treatment, and even enhance brain activity. While human trials are years away and the neural lace is currently a wired technology, Elon Musk considers it imperative because of what it could do.
Why We Need It
Elon Musk is making it very clear that he believes the best way to avoid a dystopian nightmare is to form a symbiosis with machines. Such a technology, he said on Monday in Dubai, could "achieve a symbiosis between human and machine intelligence, and maybe solves the control problem and the usefulness problem." Since computers can transmit information exponentially faster than humans, becoming "one" with them would allow humans to stay relevant in an age where automation and technology threaten our usefulness as organic organisms. Additionally, a wireless interface would help keep us on a similar playing field, essentially by making us part machine. So, while this cyborg vision of the future sounds ludicrous, when you consider how much of our reality is already sci-fi inspired, it makes a little more sense.
You can hear more about Musk’s thoughts on the necessary symbiosis between humans and machines in this recent video from CNBC.
Brain-computer interfaces (BCIs) are the future of brain implant technologies and prosthetics. Hacking the brain, however, hasn’t been easy. For one, electrodes that are implanted in the brain become less effective over time, as the scar tissue that forms around them degrades their connections to brain cells.
That, it would seem, is about to change, as researchers from Harvard Medical School have been working with a new kind of implant that isn't affected by scar tissue. Instead of penetrating the organ, these new devices are placed beneath the skull to rest on the surface of an animal's brain, and they induce brain activity using powerful magnetic fields rather than the direct electrical stimulation of their predecessors.
Their design has already been tested on mice by researchers from the Massachusetts General Hospital, who published their results in the journal Science Advances last December. Next week, testing will begin on monkeys using an implant designed to restore sight in the blind. "At the end of that, we hope to have monkeys be able to navigate a maze just by perceiving light and dark or basic geometric patterns," explained Bernard Casse, a researcher at the Xerox PARC research institute, which invented the new implant design as part of the BRAIN Initiative launched under President Obama.
Improved Brain Technologies
The vision-restoring implant will work by stimulating the visual cortex of the monkeys, as it tries to recreate the activity usually triggered by neuron signals from the eyes. The researchers think this will create the sensation of seeing even without actual input from the eyes. The goal is to be able to use the implant to translate signals from a camera into brain activity. It’s a unique approach to treating blindness, targeting the brain directly instead of the eyes.
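The camera-to-cortex translation described above can be sketched in a few lines. This is a hedged illustration only: the 8x8 stimulation grid and the linear brightness-to-amplitude mapping below are assumptions for demonstration, not the actual implant's design.

```python
# Hypothetical sketch: map a grayscale camera frame onto a coarse grid
# of cortical stimulation sites by downsampling and normalizing.

def frame_to_stimulation(frame, grid=8, max_amp=1.0):
    """Downsample a 2-D list of pixel intensities (0-255) into a
    grid x grid array of stimulation amplitudes in [0, max_amp]."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // grid, w // grid
    amps = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            mean = sum(block) / len(block)
            row.append(max_amp * mean / 255.0)  # brighter pixel block -> stronger stimulation
        amps.append(row)
    return amps

if __name__ == "__main__":
    # A 16x16 test frame: left half dark, right half bright.
    frame = [[0] * 8 + [255] * 8 for _ in range(16)]
    amps = frame_to_stimulation(frame)
    print(amps[0])  # first grid row: four 0.0s, then four 1.0s
```

The hard part in practice is not the downsampling but delivering each amplitude to the right patch of visual cortex so the brain perceives it as a stable pattern.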
Researchers see the value of being able to directly stimulate the brain. Other studies have shown how effective it could be, as in the case of a paralyzed Brazilian man who recovered his sense of touch thanks to a brain implant, or the woman with ALS who regained her capacity to communicate thanks to a BCI. While those studies yielded very promising results, the improvements could be only temporary due to the scar tissue problems with traditional electrodes. This new magnetic model could make them permanent.
University of California, San Diego associate professor Todd Coleman thinks this new approach is promising, and he sees other uses for the technology as well. “There could be very nice applications in other parts of the body,” he told MIT Technology Review. For instance, the new implants could be used to regulate activity in the more than 100 million neurons involved in the digestive system to help people with bowel movement troubles. Additionally, Coleman is interested in exploring how the tech could be used on the vagus nerve found in the chest to control PTSD. If all goes well, these monkey trials will be the next step on the path to human testing.
With every step we advance in the fields of robotics and artificial intelligence (AI), it seems we can’t help but ask, “Is this something that will bring us closer to ‘Westworld?’”
Since it premiered last year, HBO’s hit sci-fi series has become the mainstream benchmark for AI systems. In the show’s fictional world, robots can think and function like humans, and they are nearly indistinguishable from their human counterparts. It has created this sense of wonder at what’s possible given today’s quickly advancing technology. However, it has also reinforced how very far we still have to go before we can create AIs with the same level of sophistication.
Taken at face value, we know "Westworld" hosts are incredibly advanced AIs that probably have some sort of deep learning mechanism that allows them to quickly respond to their environments. Their mechanical skeletal systems are so sophisticated that they actually allow for fluid, human-like movements and facial expressions. While not explicitly addressed in the series, it seems the hosts' bodies are built using organic material and a 3D-printing process, and though we may not have our own Dolores any time soon, we do already have 3D-printed organic materials.
At the Wake Forest Institute for Regenerative Medicine in North Carolina, Dr. Anthony Atala has been using his Integrated Tissue and Organ Printing System (ITOP) to print cells, bones, and even organs. ITOP prints using human cells and creates lab-grown organs that could soon be surgically implanted into the human body without risk of rejection. “We take a very small piece of their tissue. We then start to expand those cells outside of the body. We use those cells to create new tissues and organs that we can then put back into the body,” Atala explained in an interview with Circa.
Not Quite There Yet
It would be exciting to think this technology could be used to engineer artificial systems for machines, but the possibility of printing an entire body, complete with functioning systems, is a long way off. It’s one thing to print skin, for example, but to expect it to thrive against a metal, synthetic frame or skeleton is another challenge altogether. You’ll need an entire network of functioning, complex internal systems to make it work.
3D-printed organs could prove far more useful in living things in which existing mechanisms for life are already in place. According to Atala, they have already managed to print bone, muscle, and cartilage using ITOP, after which they successfully implanted it into rats. After months of observation, they discovered that the implanted tissue developed a system of blood vessels and nerves. This breakthrough in their study will allow them to move on to human trials, where they can attempt to replicate the same results. But even that could take a long time, given that they have to wait for government approval.
Atala is particularly optimistic about the future of this field, however, noting that while we may not be close to producing “Westworld”-level AIs anytime soon, “[the] way science is moving, I think that pretty much every organ in the body will eventually, in the next several decades, be achievable.” If you think waiting for the next season of “Westworld” is hard, just imagine what life is like for the thousands of people currently on the organ donor waiting list and how excited they must be for these new breakthroughs in 3D-printing technology.
While we aren’t quite there yet, plenty of research is going into BCIs, some of which focuses on the use of such systems to improve the lives of people suffering from one type of locked-in syndrome or another (a physical condition in which a patient cannot move or communicate but is still fully aware). Recently, a team of researchers managed to contribute to the cause, building a BCI that can decipher the thoughts of people unable to communicate otherwise. Their study, which is published in the journal PLOS Biology, is very promising as the participants reported being “happy” even despite their condition.
Niels Birbaumer, a neuroscientist at the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, and his team of researchers tested their BCI on four individuals suffering from amyotrophic lateral sclerosis (ALS). The patients were all asked personal questions that could be answered with a simple “Yes” or “No.” Examples include “Is your husband’s name Joachim?” and “Are you happy?” For questions with a definitive correct response, the patients gave the right answer seven times out of 10.
The tests disproved previously held assumptions that people with complete locked-in syndrome are incapable of the goal-oriented thought needed to make a BCI work. “The striking results overturn my own theory that people with complete locked-in syndrome are not capable of communication,” Birbaumer said in a press release. “We found that all four people we tested were able to answer the personal questions we asked them, using their thoughts alone. If we can replicate this study in more patients, I believe we could restore useful communication in completely locked-in states for people with motor neuron diseases.”
The team’s BCI utilized near-infrared spectroscopy (NIRS) and electroencephalography (EEG), which worked together to measure blood oxygenation and electrical activity in the brain. Though still limited as to which thoughts it could read, this BCI is currently the only method that has successfully restored the communication capabilities of people with complete locked-in syndrome.
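A minimal sketch can illustrate the decoding step. Real NIRS decoding uses classifiers trained per patient on many calibration trials; here we simply assume (purely for illustration) that "yes" trials raise the mean blood-oxygenation signal above a resting baseline, and that averaging repeated trials suppresses noise. All numbers below are invented.

```python
# Toy sketch of yes/no decoding from a blood-oxygenation time series.
# Assumption (hypothetical): "yes" thinking raises the mean signal
# above a per-patient resting baseline.

def decode_answer(trials, baseline):
    """trials: list of per-trial signal samples (one list per repetition).
    Averages across samples and trials, then thresholds at baseline."""
    grand_mean = sum(sum(t) / len(t) for t in trials) / len(trials)
    return "yes" if grand_mean > baseline else "no"

if __name__ == "__main__":
    yes_trials = [[0.62, 0.66, 0.64], [0.60, 0.65, 0.61]]
    no_trials = [[0.48, 0.50, 0.47], [0.51, 0.49, 0.50]]
    print(decode_answer(yes_trials, baseline=0.55))  # yes
    print(decode_answer(no_trials, baseline=0.55))   # no
```

Averaging over repetitions is also why the study asked each question multiple times: a single noisy trial decodes at little better than chance, while the grand mean is far more reliable, which is consistent with the seven-out-of-ten accuracy reported.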
Now, BCIs usually come in two forms: partially invasive (devices are planted inside a person’s skull) and non-invasive (wearables send signals to the brain), which is the type Birbaumer’s team used. In some sense, they are hacking the brain as BCIs learn how to read and translate brain signals into concrete actions — whether by moving a prosthetic limb or by translating a thought into something communicable.
Naturally, a device like this has the potential to improve the quality of life of people suffering from various neurological afflictions. According to the director of the Wyss Center, John Donoghue, “The Wyss Center plans to build on the results of this study to develop clinically useful technology that will be available to people with paralysis resulting from ALS, stroke, or spinal cord injury. The technology used in the study also has broader applications that we believe could be further developed to treat and monitor people with a wide range of neuro-disorders.”
Imagine living in a world in which verbal communication is no longer required, a society in which telepathy is the norm, where people would be able to “speak” to each other using only their thoughts.
Scientists have long contemplated the possibilities of brain-to-brain communication in humans, and it appears their dreams could become reality within the next year or so. Such a system would build on major technological advances achieved in recent animal trials.
In one study, three monkeys were connected through individual brain implants, then placed in separate rooms. They were given the task of controlling a virtual arm on screen, a task they could only complete successfully if they worked together. In the end, they did. According to Miguel Nicolelis, the study’s lead author, “They synchronized their brains and they achieved the task by creating a superbrain — a structure that is the combination of three brains.”
Similarly, another experiment testing brain synchronicity in four rats achieved comparable results. After 10 trials, scientists found that the rats were able to think as one 61 percent of the time. They gained higher accuracy in solving simple problems when they combined minds.
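There is a simple statistical intuition behind why "combining minds" improves accuracy: if several independent decision-makers are each right more often than chance, a majority vote among them is right more often than any individual (the idea behind Condorcet's jury theorem). The simulation below is a hypothetical sketch, not a model of the rat experiment; only the 61 percent figure is taken from the text.

```python
# Illustrative sketch: majority voting among independent noisy
# "brains" beats any single one, assuming each is right with
# probability p > 0.5 and errors are independent (an assumption).

import random

def majority_vote_accuracy(p=0.61, voters=4, trials=20000, seed=0):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = sum(rng.random() < p for _ in range(voters))
        # strict majority wins; exact ties are broken randomly
        if votes * 2 > voters or (votes * 2 == voters and rng.random() < 0.5):
            correct += 1
    return correct / trials

if __name__ == "__main__":
    solo = 0.61
    combined = majority_vote_accuracy(p=solo, voters=4)
    print(f"individual: {solo:.2f}, 4-way vote: {combined:.2f}")
```

With four voters at 61 percent individual accuracy, the vote lands near 66 percent; the gain grows with more voters, provided their errors really are independent.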
More recently, the research has focused on humans. In one study, researchers placed two people in separate rooms and gave them the task of playing a game of 20 questions on a computer using only their minds. They transmitted “yes” or “no” answers with the help of an EEG cap, which tracked the brain activity of one person and triggered an electrical current in the other person’s brain.
The Future of Brain Communication
One day, we could attempt to take it up another notch in order to detect individual thought processes. These thoughts could be transmitted to another person, influencing the decisions they make.
This could be an enormous game changer for people with paralysis and other medical conditions that prevent them from performing physical tasks. For example, a robotic suit equipped with a "brainet," a synchronization of multiple brains acting as an organic computer, could allow people learning to use an exoskeleton to receive help from others as they regain movement.
For now, it’s proving quite difficult to create a device that mimics pure telepathy. Our brains are unique, and each of us thinks differently, our thoughts being influenced by our individual memories and experiences. The resulting brain patterns make it hard for neuroscientists to develop brain-to-brain communication, but if they can reveal an individual’s patterns of thought, they could potentially use another person’s brain activity to trigger those thoughts.
The pace of progress in artificial intelligence concerns a number of individuals, as AI is predicted to outpace humanity at an unprecedented rate, which may result in AI looking at us as nothing more than house pets (or maybe even doing away with us entirely).
This fear has pushed Elon Musk to suggest something rather unconventional and almost controversial: the neural lace. In short, the neural lace is a device that is intended to grow with your brain. Its primary purpose is to optimize mental output through a brain-computer interface, allowing the human brain to effortlessly access the internet and, thus, keep up with (and someday merge with) artificially intelligent systems.
Musk asserts that the neural lace could push our cognitive performance to levels that are comparable to that of AI. He has teased updates before, and now, we may finally be getting something solid.
One Twitter user recently asked him: "@elonmusk What about neural lace? Announcement soon?"
However, Musk isn’t the only one working to ensure that humanity can keep up with AI. Braintree founder Bryan Johnson is investing $100 million to make a neuroprosthesis to unlock the power of the human brain and, ultimately, make our neural code programmable.
Johnson outlines the purpose of his work, stating that it’s all about co-evolution:
Our connection with our new creations of intelligence is limited by screens, keyboards, gestural interfaces, and voice commands — constrained input/output modalities. We have very little access to our own brains, limiting our ability to co-evolve with silicon-based machines in powerful ways.
He is working to change this and ensure that we have a seamless interface with our technologies (and our AI).
Meanwhile, Johnson’s company, Kernel, wants to do more than just allow humans to interface with machines. His neuroprosthesis is intended to also repair our cognitive abilities, which will allow it to combat neurological diseases, such as Alzheimer’s, ALS, Parkinson’s, and other conditions that destroy our brains…and our lives.
This is just the beginning. Such advancements could allow us to literally program our neural code, transforming ourselves in ways we can't yet imagine. We could program ourselves into the people we want to be.
Our biology and genetics have become increasingly programmable; our neural code is next in line.
It sounds like something out of science fiction, but it is based on remarkable scientific work.
In short, the devices under development work by replicating the way that our brain cells communicate with one another. The tech envisioned is based on the work of Dr. Theodore W. Berger, who is a professor of biomedical engineering and neuroscience and also serves as the director of the Center for Neural Engineering at the University of Southern California.
Berger is the Chief Science Officer at Kernel, and the team is building on 15 years of academic research funded by the NIH and DARPA. If that's not enough to convince you he's the right person to make this happen, Berger's research has demonstrated improved recall functions in both rats and monkeys.