In the world of ever-advancing artificial intelligence (AI), robotics continues to offer glimpses into the future that intelligent machines could usher in. It also serves as a reminder, or warning, about the darker side of that future, one that has been played out time and time again in science fiction.
We’re not quite there yet, but according to Boston Dynamics CEO and founder Marc Raibert, robots will shake things up for humanity more than any previous technological innovation — even the internet.
“I happen to believe that robotics will be bigger than the Internet,” Raibert said when he spoke at the Future Investment Initiative back in October, according to CNBC. “The Internet lets every person reach out and touch all the information in the world. But robotics lets you reach out and touch and manipulate all the stuff in the world — and so it is not just restricted to information, it is everything.”
It was apropos that Raibert spoke about the disruption robots have the potential to bring about at the same event where Hanson Robotics’ Sophia received the first citizenship granted to a non-human. Trained to understand and mimic human emotions, Sophia is a prime example of how robots are becoming incredibly human-like.
For Raibert, that likeness is about more than appearance. “When we have robots that can do what people and animals do, they will be incredibly useful,” he said. To reach that point, he added, robots must possess three key abilities: mobility, dexterity, and autonomous perception.
Naturally, as robots improve, their usefulness increases. “I think one of the most important applications will be taking care of people,” Raibert offered. “They will help you take care of your parents so that you don’t have to spend so much time doing that.”
All three “skills” are present in the type of robots Boston Dynamics develops. Just recently, the company released a video showing the latest iteration of its robot dog, SpotMini. The video showcases how SpotMini moves in an incredibly lifelike manner as it exhibits dexterity and navigates its environment.
SpotMini isn’t the only robot in the workshop: Boston Dynamics has also been working on perfecting its two-legged robots, ATLAS and Handle. The company is now owned by telecommunications giant SoftBank, which it hopes will help these developments continue and possibly go even further.
SoftBank’s CEO, Masayoshi Son, has a reputation for his commitment to achieving the singularity: the point (which some have argued is inevitable and imminent) when robots become smarter than humans. Son has predicted that robots with an IQ of 10,000 will exist within the next 30 years.
Right now, many robotics companies are focused on bringing smaller, super-smart robots to market. Two recent examples of these “pets” are Sony’s robotic dog and the Wall-E look-alike Cozmo. While these bots are cute and capable, the next major development in robotics will likely be bipedal robots that move like human beings. If Sophia is any indication, robots with a similar appearance and conversational prowess may soon become very familiar to us.
Reddit was taken over by a robot Tuesday, but it’s not the kind of bot you’re thinking of. Anki, the robotics company behind the extremely popular and cute robot Cozmo, brought their little guy to Reddit. Cozmo is part of a six-hour “escape from Reddit” game that redditors have been able to participate in since 10:00am Pacific time.
Essentially, Cozmo is placed in real-life environments that are based on subreddits, including r/ShowerThoughts, r/RarePuppers, r/Gaming, r/ShittyRobots, r/HailCorporate, r/WhatCouldGoWrong, and r/MemeEconomy. For instance, the real-life r/RarePuppers room is an actual room full of puppies. Redditors are being prompted throughout the day to vote on which room Cozmo should go to, what he should do in that room, and more.
In order for Cozmo to escape, he will have to get three “cube keys.” These keys can be obtained by accomplishing certain goals within the game. Cozmo will be earning karma points throughout the game as he explores different rooms and does different things, which will also help him get those keys.
“Reddit is a community that really embraces technology and gaming. It’s one of the largest and most visited websites in the world,” Craig Rechenmacher, chief marketing officer at Anki, told Futurism. “In some ways Reddit is the only place you could do this, because that community is so engaged.”
The team that Anki and creative technology agency Wieden+Kennedy Lodge formed to make this game spent three months on it, drawing on a deep pool of expertise. Reddit is excited to host the game on Tuesday, and people within the company say it’s the largest game of this kind Reddit has ever attempted.
One part we’re sure will be hilarious: Cozmo will actually have his own “shower thoughts” while he’s in r/ShowerThoughts. The engineering director at W+K Lodge helped create a machine learning system that fed hundreds of thousands of Reddit Shower Thoughts into Cozmo’s brain so he could learn from them and form his own.
“This campaign marks a first for Reddit and our brand strategy team,” said Zubair Jandali, vice president of sales at Reddit. “Not only is it the largest-scale campaign built entirely around the Reddit community, it’s also the single most creative brand activation we’ve had a chance to work on. It’s been incredible to watch this concept come to life over the past several months, and I can’t wait to see how our users respond.”
This also marks the first time any team has ever tried to create a physical manifestation of Reddit, so it appears Reddit is finally going 3D after all these years.
“It’s the first physical manifestation of the Reddit community, and we knew that in order to get this right, we had to strike a balance between something both unique and familiar to Redditors. Most importantly, it’s a powerful demonstration of what’s possible for brands that embrace Reddit communities,” Joe Federer, senior brand strategist at Reddit, told Futurism.
Disclosure: This is a sponsored post paid for by Anki. Futurism editorial staff was not involved in the production of this post.
Walmart is adding robots to its workforce, as it continues to shift into a storefront for the modern world.
The retailer has been testing the robots in a number of stores in Arkansas, Pennsylvania and California, where they’re used for “repeatable, predictable and manual” tasks. The 0.61-meter (2 ft) tall robots are equipped with a tower and a series of cameras, and they move up and down aisles scanning shelves for out-of-stock items, identifying incorrect prices, and taking note of wrong or missing labels. Data is then passed on to employees, who proceed to stock items and correct mistakes.
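The flow described above — scan, flag discrepancies, hand tasks to associates — can be sketched in a few lines. This is purely illustrative; all field names, record formats, and task labels below are invented, not Walmart’s actual system.

```python
# Hypothetical sketch of triaging shelf-scan records into tasks for
# store associates. All field names and rules are invented.

def triage_scans(scans):
    """Turn raw shelf-scan records into a per-aisle task list."""
    tasks = []
    for scan in scans:
        if scan["on_shelf"] == 0:
            tasks.append((scan["aisle"], scan["item"], "restock"))
        elif scan["shelf_price"] != scan["system_price"]:
            tasks.append((scan["aisle"], scan["item"], "fix price label"))
        elif scan.get("label_missing"):
            tasks.append((scan["aisle"], scan["item"], "replace label"))
    return tasks

scans = [
    {"aisle": 4, "item": "Cheerios", "on_shelf": 0,
     "shelf_price": 3.99, "system_price": 3.99},
    {"aisle": 4, "item": "Oatmeal", "on_shelf": 12,
     "shelf_price": 2.49, "system_price": 2.99},
]

print(triage_scans(scans))
# → [(4, 'Cheerios', 'restock'), (4, 'Oatmeal', 'fix price label')]
```

The point of the design is the division of labor: the robot only detects and reports; a human decides and acts.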
Initial tests of the robots have been successful enough for Walmart to expand their robot fleet to another 50 stores across the country. Business Insider reports this expansion will be completed by the end of January 2018.
Speaking with Reuters, Walmart Chief Technology Officer Jeremy King said Walmart robots are 50 percent more productive than human employees, more accurate with their scanning, and three times faster. Despite their efficiency, King stated they would not replace human workers—a relief to any employees expecting the company to cut more jobs in favor of automation.
In a blog post detailing the announcement, Walmart director of communications Justin Rushing said the robots “[free] up time for our associates to focus on what they tell us are the most important and exciting parts of working at Walmart – serving customers and selling merchandise.”
Improving the Shopping Experience
Walmart sees its autonomous scanners as a solution to its out-of-stock problem. Unavailable items mean a potential lost sale when someone can’t find a specific product. Getting human employees to scan for items has also proven difficult, as it’s a tedious task most don’t want to do.
“If you are running up and down the aisle and you want to decide if we are out of Cheerios or not, a human doesn’t do that job very well, and they don’t like it,” said King.
As these systems advance, so does the potential for their involvement in criminal activity, and right now, no regulations are in place dictating how the law should treat super-intelligent synthetic entities. Who takes the blame if a robot causes an accident or is implicated in a crime? What happens if a robot is the victim of a crime? Do self-aware robots deserve rights comparable to those given to human beings?
Before we can begin discussing robot rights, we need to articulate exactly what (or who?) counts in this equation, said MIT Media Lab researcher and robot ethics expert Kate Darling in an email correspondence with Futurism. In other words, clearly defined terminology is a prerequisite for any productive conversation regarding robot rights.
“If we want to use legislation to regulate robotic technology, we’re going to need to establish better definitions than what we’re operating with today,” she said. “Even the word ‘robot’ doesn’t have a good universal definition right now.”
Eyes on Today
Now is the time to put these definitions in place because artificially intelligent robots are already in our midst.
Autonomous delivery robots are a common sight in the Estonian capital of Tallinn. As such, the country’s government is being proactive with regard to robot regulations and legal recourse for issues of ownership and culpability.
“It all started out from the self-driving car taskforce,” Marten Kaevats, the national digital advisor for the government office of Estonia, told Futurism. “We quite soon discovered that these liability, integrity, and accountability issues are not specific to self-driving cars; they’re general AI questions.”
Kaevats is aware that any discussion of robots and AI can quickly devolve into talk of the singularity and superintelligence, but that’s not the focus right now. “We are trying to work on things that are actually already here,” he explained.
Still, Estonia is looking to put legislation in place that has the flexibility to respond to advances in technology. Kaevats acknowledges that it’s not possible to create regulations that are completely future-proof, but he sees a pressing need for laws that offer certain rights alongside certain liabilities.
As Kaevats pointed out, right now, self-aware artificial intelligences are so far off that there’s no reason to rush into giving robots similar rights to humans. In addition to considering the ethical ramifications of putting machines on par with humans, we need to examine how such laws might be open to abuse before regulations are established.
Production Line Patsy
Estonia isn’t the only place where conversations on robot rights are happening.
The journal Artificial Intelligence and Law recently published an article by University of Bath reader Joanna J. Bryson and academic lawyers Mihailis E. Diamantis and Thomas D. Grant. In the paper, the authors state that proposals for synthetic personhood are already being discussed by the European Union and that the legal framework to do so is already in place. The authors stress the importance of giving artificially intelligent beings obligations as well as protections, so as to remove their potential as a “liability shield.”
But granting them full rights?
When Bryson spoke to Futurism, she warned against the establishment of robot rights, relating the situation to the way the legal personhood of corporations has been abused in the past.
“Corporations are legal persons, but it’s a legal fiction. It would be a similar legal fiction to make AI a legal person,” said Bryson. “What we need to do is roll back, if anything, the overextension of legal personhood — not roll it forward into machines. It doesn’t generate any benefits; it only encourages people to obfuscate their AI.”
Bryson offered up the example of a driverless taxi, which could potentially be made fully independent from its owner or manufacturer, serving as a legally recognized individual, fulfilling its own contracts. This situation could be manipulated to reduce the amount of taxes paid on the vehicle’s earnings by whoever receives the profits.
Kaevats said that this won’t be a problem in Estonia — the country’s digital tax system is proactive enough to track any malicious activity. However, the potential for abuse certainly exists in regions with less technologically advanced tax codes.
Corporations can already use the letter of the law to withhold as much wealth as possible. The use of a synthetic person as a “fall guy” for illicit activity isn’t outside the realm of possibility, and giving a robot rights could serve to emancipate it from conventional ownership. At that point, the entity becomes the ultimate independent contractor, with companies able to absolve themselves of wrongdoing even if they instructed the machine to behave illegally.
Legislation could certainly be written up that avoids these pitfalls, though, so policy makers just need to be sure that any rights given to synthetic entities don’t include loopholes that can be abused.
In the far more distant future, we’ll need to consider the issue of self-aware robots. How should we tackle synthetic personhood for those entities?
“If we discover that there are certain capacities that we want to create in artificial intelligence, and once you create those, you spontaneously get these cognitive features that warrant personhood, we’ll have to have this discussion about how similar they are to the human consciousness,” James Hughes, executive director of the Institute for Ethics and Emerging Technologies, told Futurism.
The creation of this level of technology won’t be happening anytime soon, if it happens at all, but its potential raises some thorny issues about our obligation to synthetic beings and the evolving nature of personhood.
“Traditionally, under the law, you’re either a person or you are property — and the problem with being property is that you have no rights,” bioethicist and attorney-at-law Linda McDonald-Glenn told Futurism. “In the past, we’ve made some pretty awful mistakes.”
According to Hughes, this situation calls for a test that determines whether or not a synthetic person is self-aware. In the meantime, Estonia has found a fairly simple way to determine the rights of their robots. Instead of using technology as the defining factor, the nation will grant rights based on registration under the mythologically inspired Kratt law.
Estonian folklore states that the Kratt is an inanimate object brought to life, just as artificial intelligence can give a machine the cognitive abilities it needs to complete a particular task. The Kratt law will determine what level of sophistication a robot needs to possess in order to be considered its own legal entity.
“This is what we want our governments to do,” said Bryson, praising European efforts to put well-thought-out legislation in place. “This is what governments are for and what law is for.”
In many ways, AI technology is still very young, but there’s no better time than now to start thinking about the legal and ethical implications of its usage.
The rise of automation is prompting all kinds of questions about the future of work for human employees, but there’s also another important consideration at hand — how are we to interact with robots as they become more commonplace in our day-to-day lives?
As robots become more capable of performing tasks independently, it’s crucial that they’re taught how to communicate with the humans that they cross paths with, whether they’re co-workers or complete strangers. Natural language will play an important role, but it’s only one part of a greater whole.
More Than Words
When humans talk to one another, the words themselves carry only a small portion of the information being relayed. Everything from facial expressions to the intonation of a person’s voice can give extra context or added insight into the topic at hand.
This is something that the scientists and engineers constructing the robots of tomorrow are careful to consider. A full vocabulary isn’t necessarily enough — there’s more to conveying a message than just finding the right words.
Baxter, a robot developed by Rethink Robotics, uses a pair of eyes on a screen to let people know what it is going to do next. The display is mounted on a swivel, so the robot can direct its gaze toward one of its two arms before performing an action, making sure that any bystanders are well aware of what kind of motion is coming.
This is a two-way street. As well as being able to signal their intentions to human onlookers, robots will need to be able to pick up on social cues dropped by others in order to be effective in their roles.
We’ve all been locked in a “hallway dance” when trying to give way to another human walking the opposite direction down a corridor. Nine times out of ten, it’s no big deal — but in a busy hospital ward, when one dance partner is a bulky robot, it could cause some problems.
A team at MIT’s Interactive Robotics Group led by Dr. Julie Shah has used machine learning techniques to teach a robot to observe anticipatory indicators from a human that can reveal which way they’re planning to turn. We know how to pick up on these cues from experience, but machines need to be taught the basics from scratch.
At present, we tend to think of robots as being good at manual labor. They can be built strong, so it makes sense to think of them lifting heavy loads and doing other physically intensive tasks.
Further advances in helping robots to interact with humans will allow them to take on a much wider range of vocations. Customer service positions, especially those where there’s a limited range of responsibilities, will be a perfect fit for machines once they’re able to hold a natural, productive conversation reliably.
A robot named Mario has already been trialled as a concierge in Belgium, handing out room keys and ingratiating himself with guests via a high-five. A project called RAVE is creating a robot that can teach young children without human input, holding their attention for six minutes at a time.
Robots excel in certain tasks because of their non-human qualities — they don’t tire, and they won’t turn their nose up at unpleasant or unfulfilling tasks.
However, as they take on a more diverse set of roles, they’ll need to learn some distinctly human skills. Being able to communicate with people is a key part of all kinds of vocations, and it doesn’t come naturally for a robot.
Much negativity surrounds the public perception of our robot-filled future. Sensationalist fears about thousands or millions of jobs undergoing ruthless robotic appropriation spread like wildfire. But the advancement of intelligent machines isn’t all doom and gloom; some advancements presently making waves will save lives. Enter the Senhance System, a surgical robot recently cleared by the U.S. Food and Drug Administration (FDA) for use in minimally invasive surgeries.
Developed by medical device company TransEnterix, Senhance is a robotically-assisted surgical device (RASD) that lets surgeons use computer and software technology to move and control surgical instruments in procedures involving one or more tiny incisions in a patient’s body. Surgeons operate Senhance through a console unit, or cockpit, with a 3-D high-definition view of the surgical field, where they remotely control three separate arms, each equipped with a surgical instrument.
“Minimally invasive surgery helps reduce pain, scarring and recovery time after surgery,” Binita Ashar, director at the Surgical Devices Division of the FDA’s Center for Devices and Radiological Health, said in a press release. “RASD technology is a specialized innovation in minimally invasive surgery designed to enhance the surgeon’s access and visualization within confined operative sites.”
Man and Machine Working Together
Amid rumors of inevitable robot oppression, it’s easy to assume the robot-human work dynamic will be one of conflict rather than mutual benefit. Senhance-based robotic surgery could improve the survival rate of procedures that remain infamously dangerous. “The clearance of the Senhance System in the US is a milestone in the progress of robotics and is expected to deliver improvement in the efficacy, value and choices offered to patients, surgeons and hospitals,” TransEnterix president and CEO Todd M. Pope said in a statement.
Robotic surgery isn’t new, though the technology behind it has continually improved over the past few years. The principle remains the same: these robot surgeons are designed to improve precision and safety during procedures. The technology is meant to assist surgeons, not take away their control, so there’s still a human behind every procedure.
“Millions of surgical procedures in the US are performed each year laparoscopically with basic manual tools that limit surgeons’ capability, comfort and control,” Pope added. “New choices are needed that enhance the senses, control and comfort of the surgeon, minimize the invasiveness of surgery for the patient, and maximize value for the hospital. Senhance is this new choice.”
More and more jobs are being automated, with robot workers replacing swathes of the human workforce. Much has been said about how the labor of robots will affect individuals being let go, but now we’re seeing more discussion about the broader economic effect in terms of lost tax money.
Now, Jane Kim of the San Francisco Board of Supervisors is attempting to do something about this problem. She’s established a committee dubbed the Jobs of the Future Fund, which will serve to explore how best to smooth the transition toward more automation.
“We’re exploring continuing the payroll tax and extending it to robots that perform jobs humans currently do,” said Kim in an interview with CNBC. The money could be used to train displaced workers to fill other roles, fund free community college programs, or to foster the creation of new jobs in industries where automation is less viable.
Kim’s stance on the taxation of an automated workforce echoes comments made by Bill Gates earlier this year.
“Certainly there will be taxes that relate to automation,” Gates argued in an interview with Quartz. “Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things. If a robot comes in to do the same thing, you’d think that we’d tax the robot at a similar level.”
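Gates’ example can be made concrete with a little arithmetic. The sketch below taxes a robot’s output as if it were a human worker’s wages; the $50,000 figure comes from the quote, but the specific tax rates are assumptions chosen purely for illustration, not a policy proposal.

```python
# Illustrative only: the combined income + payroll rate below is an
# assumed figure, not any jurisdiction's actual tax schedule.

def payroll_equivalent_tax(output_value, income_rate=0.20, payroll_rate=0.0765):
    """Tax a robot's annual output as if it were a human worker's wages."""
    return output_value * (income_rate + payroll_rate)

# A robot doing $50,000 worth of factory work per year:
print(round(payroll_equivalent_tax(50_000), 2))  # → 13825.0
```

Under those assumed rates, replacing one such worker with an untaxed robot would cost the public purse about $13,825 a year, which is the gap proposals like Kim’s aim to close.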
Automation seems poised to have a huge economic impact. It remains to be seen whether the advantages it brings in terms of increased productivity and the potential for new jobs outweigh its drawbacks. At this point, a well-designed tax code seems to be the most sensible way of ensuring that everyone gets to reap the benefits, rather than just the business owners.
Too Much Tax?
The other side of the argument warns that enforcing a tax as this technology is starting to flourish will discourage widespread adoption. “Why would we want to put disincentives on companies using the best technology available?” asked Jeff Burnstein, president of the Association for Advancing Automation, in response to Kim’s stance.
Automation is already revolutionizing the way various industries operate. It’s clear that we should make full use of this technology, but it’s high time that we figure out a way to do so that takes into account all the possible pitfalls for our economy and our society.
Flying warehouses, robot receptionists, smart toilets… do such innovations sound like science fiction or part of a possible reality? Technology has been evolving at such a rapid pace that, in the near future, our world may well resemble that portrayed in futuristic movies, such as Blade Runner, with intelligent robots and technologies all around us.
But what technologies will actually make a difference? Based on recent advancements and current trends, here are five innovations that really could shape the future.
1. Smart homes
Many typical household items can already connect to the internet and provide data. But much smart home technology isn’t currently that smart.
A smart meter just lets people see how energy is being used, while a smart TV simply combines television with internet access.
But technology is rapidly moving towards a point where it can use the data and connectivity to act on the user’s behalf.
To really make a difference, technology needs to fade more into the background – imagine a washing machine that recognises what clothes you have put into it, for example, and automatically selects the right programme, or even warns you that you have put in items that you don’t want to wash together.
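The washing machine scenario above is, at heart, a small rules engine. The sketch below is a toy version of that idea; the fabric-to-temperature table, garment tags, and mixing rule are all invented for illustration and bear no relation to any real appliance.

```python
# Toy "smart washing machine" logic: pick the gentlest programme the
# load allows and warn about incompatible items. All rules invented.

PROGRAMMES = {"cotton": 60, "synthetic": 40, "wool": 30, "delicate": 20}

def choose_programme(garments):
    """Return (temperature, warning) for a load of tagged garments."""
    temps = [PROGRAMMES[g["fabric"]] for g in garments]
    colours = {g["colour"] for g in garments}
    warning = None
    if "white" in colours and len(colours) > 1:
        warning = "whites mixed with colours"
    # The most delicate item dictates the temperature for the whole load.
    return min(temps), warning

load = [
    {"fabric": "cotton", "colour": "white"},
    {"fabric": "wool", "colour": "red"},
]
print(choose_programme(load))  # → (30, 'whites mixed with colours')
```

The interesting design question isn’t the rules themselves but the sensing: the machine needs a reliable way (such as garment tags) to know what was put into it before any of this logic can act on the user’s behalf.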
Here, it is important to better understand people’s everyday activities, motivations, and interactions with smart objects, so that these devices don’t become uninvited guests at home.
Such technologies could even work for the benefit of all.
The BBC reports, for example, that energy providers will “reduce costs for someone who allows their washing machine to be turned on by the internet to maximise use of cheap solar power on a sunny afternoon” or “to have their freezers switched off for a few minutes to smooth demand at peak times”.
Our home is, after all, the place where we should feel most secure. For them to become widespread, these technologies will have to keep it that way.
2. Virtual secretaries
While secretaries play a crucial role in businesses, they often spend large parts of their working day on time-consuming but relatively trivial tasks that could be automated.
Consider the organisation of a “simple” meeting – you have to find the right people to take part (likely across business boundaries) and then identify when they are all available. It’s no mean feat.
Tools such as doodle.com, which compare people’s availability to find the best meeting time, can help. But they ultimately rely on those involved actively participating. They also only become useful once the right people have already been identified.
By using context information (organisation charts, location awareness from mobile devices, and calendars), identifying the right people and the right time for a given event becomes a technical optimisation problem, one explored by the EU-funded inContext project a decade ago.
At that stage, technology for gathering context information was far less advanced – smart phones were still an oddity and data mining and processing was not where it is today.
Over the coming years, however, we could see machines doing far more of the day-to-day planning in businesses.
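The availability-matching core of such a tool is easy to sketch once the attendees are known: it is essentially an intersection over calendars. The toy version below uses invented calendars and hour-granularity slots; a real system would work with proper datetime ranges and time zones.

```python
# Toy availability matcher: find hour-long slots in the working day
# when every required attendee is free. Calendars here are invented.

def free_slots(busy_calendars, day_start=9, day_end=17):
    """Return the hours at which all attendees are free."""
    return [
        hour for hour in range(day_start, day_end)
        if all(hour not in busy for busy in busy_calendars.values())
    ]

calendars = {
    "alice": {9, 10, 14},   # hours already booked
    "bob":   {10, 11, 12},
    "carol": {9, 13, 14},
}

print(free_slots(calendars))  # → [15, 16]
```

The hard part that inContext tackled isn’t this intersection step; it is populating `busy_calendars` with the *right people* automatically, which is where the organisational and location context comes in.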
On the downside, much of the required context information is relatively privacy-invasive – but then the younger generation is already happily sharing their every minute on Twitter and Snapchat and such concerns may become less significant over time.
And where should we draw the line? Do we fully embrace the “rise of the machines” and automate as much as possible, or retain real people in their daily roles and only use robots to perform the really trivial tasks that no one wants to do?
This question will need to be answered – and soon.
3. AI doctors

But how would you feel about receiving a diagnosis from an artificial intelligence? A private company called Babylon Health is already running a trial with five London boroughs that encourages consultations with a chatbot for non-emergency calls.
The artificial intelligence was trained using massive amounts of patient data in order to advise users to go to the emergency department of a hospital, visit a pharmacy or stay at home.
The company claims that it will soon be able to develop a system that could potentially outperform doctors and nurses in making diagnoses.
In countries where there is a shortage of medical staff, this could significantly improve health provision, enabling doctors to concentrate on providing treatment rather than spending too much time on making a diagnosis.
If AI systems can address these challenges and focus on understanding and enhancing existing care practices and the doctor-patient relationship, we can expect to see more and more success stories from data-driven healthcare initiatives.
4. Care robots
Will we have robots answering the door in homes? Possibly. At most people’s homes? Even if they are reasonably priced, probably not. What distinguishes successful smart technologies from unsuccessful ones is how useful they are.
And how useful they are depends on the context. For most people, a robot answering the door probably isn’t that useful. But imagine how helpful a robot receptionist could be in places where there is a shortage of staff – in care homes for the elderly, for example.
Robots equipped with AI such as voice and face recognition could interact with visitors to check who they wish to visit and whether they are allowed access to the care home.
After verifying that, robots with routing algorithms could guide the visitor towards the person they wish to visit. This could potentially enable staff to spend more quality time with the elderly, improving their standard of living.
The AI required still needs further advancement in order to operate in completely uncontrolled environments. But recent results are positive.
Facebook’s DeepFace software was able to match faces with 97.25 percent accuracy when tested on a standard database used by researchers to study the problem of unconstrained face recognition.
The software is based on Deep Learning, an artificial neural network composed of millions of neuronal connections able to automatically acquire knowledge from data.
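DeepFace’s internals aren’t public in full, but systems of this kind typically reduce face verification to comparing embedding vectors produced by the network: two photos match if their vectors are similar enough. The sketch below shows only that final comparison step; the vectors and the threshold are invented for illustration.

```python
import math

# Face verification, final step: compare embedding vectors that a deep
# network would produce for each photo. Vectors below are invented.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_person(emb_a, emb_b, threshold=0.8):
    """Declare a match when the embeddings are similar enough."""
    return cosine_similarity(emb_a, emb_b) >= threshold

anchor   = [0.1, 0.9, 0.3]
probe    = [0.12, 0.88, 0.31]  # a different photo of the same person
stranger = [0.9, 0.1, 0.5]

print(same_person(anchor, probe), same_person(anchor, stranger))
# → True False
```

All the learning happens upstream, in training the network so that photos of the same person land close together in this vector space; the comparison itself stays this simple.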
5. Flying warehouses and self-driving cars

Self-driving cars are expected to navigate more safely than human drivers, partly because they can use a multitude of sensors to gather data about the world, including 360-degree views around the car.
Moreover, they could potentially communicate with each other to avoid accidents and traffic jams.
More than being an asset to the general public, self-driving cars are likely to become particularly useful for delivery companies, enabling them to save costs and make faster, more efficient deliveries.
Advances are still needed in order to enable the widespread use of such vehicles, not only to improve their ability to drive completely autonomously on busy roads, but also to ensure a proper legal framework is in place.
Nevertheless, car manufacturers are engaging in a race against time to see who will be the first to provide a self-driving car to the masses. It is believed that the first fully autonomous car could become available as early as the next decade.
The advances in this area are unlikely to stop at self-driving cars or trucks. Amazon has recently filed a patent for flying warehouses which could visit places where the demand for certain products is expected to boom.
The flying warehouses would then send out autonomous drones to make deliveries. It is unknown whether Amazon will really go ahead with developing such projects, but tests with autonomous drones are already successfully being carried out.
Thanks to technology, the future is here – we just need to think hard about how best to shape it.
When it comes to overcoming an array of varied problems with an even more diverse set of solutions, scientists have nothing on natural selection. Countless generations of millions of species have developed numerous ways to defy gravity, stick to objects while underwater, and even lift objects thousands of times more massive than themselves.
Medics, scientists, and even engineers have long been interested in learning how biology has overcome certain obstacles so they can glean principles that we might apply to our own challenges. A historical example of such a strategy can be found in the Wright brothers’ study of birds as they worked on their first plane. They observed how the shapes of birds’ wings affected airflow and lift, and incorporated this information into what became the first aircraft to complete a sustained, controlled flight.
Today, aerospace engineers continue to use this strategy, now with the focus specifically on owls to make aircraft quieter. And with tools that allow us to study organisms on a molecular level, we can now look at everything from sea creatures to woodland animals for inspiration to improve our tech, our medicine, and — ultimately — our lives.
One company that has successfully adopted this strategy, called “biomimicry,” is Gecko Biomedical, founded by Jeffrey Karp. He designed a medical tape to replace sutures and staples that was inspired by — you guessed it — the gecko. By simulating the tiny hairs that allow geckos’ feet to stick to walls (and adding some glue for good measure), Karp was able to design a tape that could seal incisions in internal tissues like gastrointestinal tracts.
Karp’s team has also gone on to invent a surgical glue after studying the mucus of sandcastle worms. The glue not only remains sticky in wet environments, but is strong enough to hold together a beating heart — an application for which the glue is undergoing clinical trials in Europe.
Solving a Problem
Karp maintains that his goal in biomimicry is never to simply copy certain elements in nature just because we can. Instead, he aims to identify real human needs, and only then look at the natural world to see if it offers any insight.
“When we look to solve problems, it’s not so we can publish papers and get pats on the back from the academic community,” Nick Sherman, a research technician at Karp Lab, said in an interview with the Guardian. “It’s more like, ‘Is this work going to help patients? If not, how do we make it help them?’”
That is why the team has focused on developing medical devices that have the potential to vastly improve patient care, as is the case with the surgical glue. Jean-Marc Alsac, a cardiovascular surgeon who is overseeing the trial, told the Guardian, “This could completely transform how we perform surgery.”
But medicine is not the only field that nature has been transforming. Robotics, too, has been building off of biological models, yielding ‘bots that can do amazing things. For example, engineers recently designed 100-gram (3.5-ounce) robots that work together like ants. When six of these cooperate, they can lift a 1,800-kg (3,900-pound) car.
Karp believes biomimicry is a winning strategy because “every living creature that exists today is here because it tackled a number of challenges,” he told the Guardian. “In essence, we are surrounded by solutions. Evolution is truly the best problem-solver.”
Called Int-Ball, this adorable little camera drone resembles something Pixar might have come up with, but it’s totally real, and is now a floating companion to astronauts on board the International Space Station (ISS) – where it helps out by taking photos and recording video, freeing up valuable astronaut time.
Int-Ball was delivered to the ISS in a SpaceX cargo shipment last month – the company’s first involving a reused Dragon cargo capsule – and is now operational, currently undergoing initial testing.
It looks like those checks are going pretty smoothly too, with JAXA having just released the first test footage captured by its little floating bot (aka the JEM Internal Ball Camera).
You can check out Int-Ball’s debut camerawork in the following YouTube clip, which is backed by possibly the most twee music ever used in an official video released by a space agency (although it’s strangely fitting too, given Int-Ball’s cutesy, somewhat Kirby-like proportions):
Testing Drones in Microgravity
According to JAXA, Int-Ball can move autonomously in space, and can also be remotely controlled by flight controllers and researchers on the ground, who can relay its footage in real-time back to astronauts on the ISS for review and follow-up.
The little orb only measures 15 centimetres (6 inches) in diameter and weighs 1 kg (2.2 lbs), and was largely manufactured using 3D printing.
Along the surface of the sphere, 12 fans are positioned to enable Int-Ball to move around, while a number of ‘3D Target Markers’ placed on the ISS’s internal walls help the drone to orientate itself so it can navigate from place to place.
JAXA says that crew members spend as much as 10 percent of their working hours on board the ISS with a camera in hand, so offloading the camerawork and videography to a little floating bot could significantly free up researchers to focus on conducting experiments and other important tasks.
Int-Ball is also serving as a test case so that JAXA scientists can see how well floating drones operate in the microgravity environment on board the ISS.
Taking care of video is an important gig, sure, but it’s definitely only a starting point.
In the future it’s conceivable that little autonomous drones like this could perform other kinds of jobs both inside and outside the ISS, helping astronauts by fetching or operating equipment, checking on supplies, or conducting repairs and maintenance on the station itself.
In the meantime, Int-Ball will have to stick to playing camera operator while JAXA figures out just what this technology is capable of, but we doubt very much that the ISS’s human crew minds having their tiny new pal around the joint.
Zombies and aliens may not be a realistic threat to our species. But there’s one stock movie villain we can’t be so sanguine about: sentient robots. If anything, their arrival is probably just a matter of time. But what will a world of conscious machines be like? Will there be a place in it for us?
Artificial intelligence research has been going through a recent revolution. AI systems can now outperform humans at playing chess and Go, recognizing faces, and driving safely. Even so, most researchers say truly conscious machines — ones that don’t just run programs but have feelings and are self-aware — are decades away. First, the reasoning goes, researchers have to build a generalized intelligence, a single machine with the above talents and the capacity to learn more. Only then will AI reach the level of sophistication needed for consciousness.
But some think it won’t take nearly that long.
“People expect that self-awareness is going to be this end game of artificial intelligence when really there are no scientific pursuits where you start at the end,” says Justin Hart, a computer scientist at the University of Texas. He and other researchers are already building machines with rudimentary minds. One robot wriggles like a newborn baby to understand its body. Another robot babbles about what it sees and cries when you hit it. Another sets off to explore its world on its own.
No one claims that robots have a rich inner experience — that they have pride in floors they’ve vacuumed or delight in the taste of 120-volt current. But robots can now exhibit some qualities similar to those of the human mind, including empathy, adaptability, and gumption.
Beyond the sheer coolness of creating robots, researchers design these cybernetic creatures because they’re trying to fix flaws in machine-learning systems. Though these systems may be powerful, they are opaque. They work by relating input to output, like a test where you match items in column ‘A’ with items in column ‘B’. The AI systems essentially memorize these associations; there’s no deeper logic behind the answers they give. And that’s a problem.
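The “column A to column B” criticism can be caricatured in a few lines of code. This is a deliberately toy sketch with made-up features and labels, not a model of any particular system:

```python
# Toy illustration of an opaque learner: a "model" that memorizes
# input-output associations with no deeper logic behind its answers.
training_pairs = {
    ("furry", "barks"): "dog",
    ("furry", "meows"): "cat",
    ("scaly", "hisses"): "snake",
}

def opaque_model(features):
    # The answer is just a stored association; there is no reasoning
    # to inspect, and nothing sensible to say about unseen inputs.
    return training_pairs.get(features, "no idea")

print(opaque_model(("furry", "barks")))  # a memorized association
print(opaque_model(("scaly", "meows")))  # no deeper logic to fall back on
```

Real machine-learning systems generalize far better than a lookup table, but the complaint is the same: the mapping works, and nothing in it explains why.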
Humans can also be hard to read. We spend an inordinate amount of time analyzing ourselves and others, and arguably, that’s the main role of our conscious minds. If machines had minds, they might not be so inscrutable. We could simply ask them why they did what they did.
“If we could capture some of the structure of consciousness, it’s a good bet that we’d be producing some interesting capacity,” says Selmer Bringsjord, an AI researcher at the Rensselaer Polytechnic Institute in Troy, N.Y. Although science fiction may have us worried about sentient robots, it’s really the mindless robots we need to be cautious of. Conscious machines may actually be our allies.
ROBOT, KNOW THYSELF
Self-driving cars have some of the most advanced AI systems today. They decide where to steer and when to brake by taking constant radar and laser readings and feeding them into algorithms. But much of driving is anticipating other drivers’ maneuvers and responding defensively — functions that are associated with consciousness.
“Self-driving cars will have to read the minds of what other self-driving cars want to do,” says Paul Verschure, a neuroscientist at Universitat Pompeu Fabra in Barcelona.
As a demonstration of how that might look, Hod Lipson, an engineering professor at Columbia University and co-author of a recent book on self-driving cars, and Kyung-Joong Kim at Sejong University in Seoul, South Korea, built the robotic equivalent of a crazy driver. The small round robot (about the size of a hockey puck) moves on a loopy path according to its own logic. A second robot was then given the goal of intercepting the first no matter where it started, so it couldn’t rely on a fixed route; it had to divine the moving robot’s logic.
Using a procedure that mimicked Darwinian evolution, Lipson and Kim crafted an interception strategy. “It had basically developed a duplicate of the brain of the actor — not perfect, but good enough that it could anticipate what it’s going to do,” Lipson says.
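The flavor of that procedure can be sketched in a few lines. This is a hypothetical toy version, not Lipson and Kim’s actual setup: the “actor” loops with hidden parameters, and candidate models of its brain are mutated and kept only when they predict the observed path better, loosely mimicking Darwinian selection:

```python
import math
import random

random.seed(0)

# Hidden "brain" of the actor: it loops with an unknown frequency and
# radius. The pursuer never sees these numbers, only positions.
def actor_position(t, freq=0.7, radius=3.0):
    return (radius * math.cos(freq * t), radius * math.sin(freq * t))

observations = [actor_position(t * 0.05) for t in range(100)]

def prediction_error(params):
    # Score a candidate model by how badly it predicts the observed path.
    freq, radius = params
    err = 0.0
    for t, (ox, oy) in enumerate(observations):
        px = radius * math.cos(freq * t * 0.05)
        py = radius * math.sin(freq * t * 0.05)
        err += (px - ox) ** 2 + (py - oy) ** 2
    return err

# (1+1) evolution strategy: mutate the current model of the actor and
# keep the child only if it predicts the actor better than its parent.
best = [0.9, 2.0]
for _ in range(3000):
    child = [p + random.gauss(0, 0.05) for p in best]
    if prediction_error(child) < prediction_error(best):
        best = child

print(best)  # an evolved estimate of the actor's hidden parameters
```

After a few thousand generations the surviving candidate closely matches the actor’s hidden frequency and radius — “not perfect, but good enough” to anticipate where it will go next.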
Lipson’s team also built a robot that can develop an understanding of its body. The four-legged spidery machine is about the size of a large tarantula. When switched on, its internal computer has no prior information about itself. “It doesn’t know how its motors are arranged, what its body plan is,” Lipson says.
But it has the capacity to learn. It makes all the actions it is capable of to see what happens: how, for example, turning on a motor bends a leg joint. “Very much like a baby, it babbles,” Lipson says. “It moves its motors in a random way.”
After four days of flailing, it realizes it has four legs and figures out how to coordinate and move them so it can slither across the floor. When Lipson unplugs one of the motors, the robot realizes it now has only three legs and that its actions no longer produce the intended effects.
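The logic of that babbling phase can be captured in a toy sketch. This is a hypothetical illustration, not Lipson’s actual robot: the machine starts with no knowledge of its motors, tries them out, builds a self-model from the observed effects, and treats a large prediction error as the cue that its body has changed:

```python
import random

random.seed(1)

TRUE_EFFECT = {0: 1.0, 1: -0.5, 2: 0.8, 3: -0.2}  # the hidden body plan

def execute(motor, broken=None):
    """Fire one motor and return the observed movement."""
    if motor == broken:
        return 0.0  # an unplugged motor produces no movement
    return TRUE_EFFECT[motor] + random.gauss(0, 0.01)  # sensor noise

# Babbling phase: the self-model is just the average observed effect
# of each motor, learned by trying everything and watching what happens.
model = {m: sum(execute(m) for _ in range(50)) / 50 for m in TRUE_EFFECT}

# Later, motor 2 is unplugged. The self-model's prediction now fails,
# which is the robot's signal that its actions no longer work as expected.
surprise = abs(model[2] - execute(2, broken=2))
print(surprise > 0.5)  # large prediction error flags the change
```

The real robot learns a far richer model (joint geometry, gaits), but the principle is the same: act, predict, and notice when prediction and reality diverge.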
“I would argue this robot is self-aware in a very primitive way,” Lipson says.
Another humanlike capability that researchers would like to build into AI is initiative. Machines excel at playing the game Go because humans directed the machines to solve it. They can’t define problems on their own, and defining problems is usually the hard part.
In a forthcoming paper for the journal “Trends in Cognitive Sciences,” Ryota Kanai, a neuroscientist and founder of the Tokyo-based startup Araya, discusses how to give machines intrinsic motivation. In a demonstration, he and his colleagues simulated agents driving a car in a virtual landscape that includes a hill too steep for the car to climb unless it gets a running start. If told to climb the hill, the agents figure out how to do so. Until they receive this command, the car sits idle.
Then Kanai’s team endowed these virtual agents with curiosity. They surveyed the landscape, identified the hill as a problem, and figured out how to climb it even without instruction.
“We did not give a goal to the agent,” Kanai says. “The agent just explores the environment to learn what kind of situation it is in by making predictions about the consequence of its own action.”
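A minimal sketch of curiosity as prediction error, inspired by (but not reproducing) Kanai’s setup: the agent has no external goal, yet gravitates toward whatever it cannot yet predict, and moves on once learning makes the novelty wear off:

```python
# Intrinsic motivation as prediction error: the "environment" and the
# agent's predictions about it are hypothetical toy values.
outcomes = {"flat_road": 0.0, "steep_hill": 5.0}     # hidden environment
predictions = {"flat_road": 0.0, "steep_hill": 0.0}  # the agent starts naive

def curiosity(action):
    # Intrinsic reward = how wrong the agent's prediction still is.
    return abs(outcomes[action] - predictions[action])

visits = []
for _ in range(5):
    action = max(outcomes, key=curiosity)  # seek the most surprising option
    visits.append(action)
    # Learning shrinks the prediction error, so novelty decays over time.
    predictions[action] += 0.5 * (outcomes[action] - predictions[action])

print(visits)  # the agent singles out the hill without being told to
```

The flat road happens to be unsurprising from the start, so the agent identifies the hill as “the problem” entirely on its own — no goal was ever given.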
The trick is to give robots enough intrinsic motivation to make them better problem solvers, and not so much that they quit and walk out of the lab. Machines can prove as stubborn as humans. Joscha Bach, an AI researcher at Harvard, put virtual robots into a “Minecraft”-like world filled with tasty but poisonous mushrooms. He expected them to learn to avoid them. Instead, they stuffed their mouths.
“They discounted future experiences in the same way as people did, so they didn’t care,” Bach says. “These mushrooms were so nice to eat.” He had to instill an innate aversion into the bots. In a sense, they had to be taught values, not just goals.
In addition to self-awareness and self-motivation, a key function of consciousness is the capacity to focus your attention. Selective attention has been an important area in AI research lately, not least by Google DeepMind, which developed the Go-playing computer.
“Consciousness is an attention filter,” says Stanley Franklin, a computer science professor at the University of Memphis. In a paper published last year in the journal “Biologically Inspired Cognitive Architectures,” Franklin and his colleagues reviewed their progress in building an AI system called LIDA that decides what to concentrate on through a competitive process, as suggested by neuroscientist Bernard Baars in the 1980s. The processes watch for interesting stimuli — loud, bright, exotic — and then vie for dominance. The one that prevails determines where the mental spotlight falls and informs a wide range of brain function, including deliberation and movement. The cycle of perception, attention, and action repeats five to 10 times a second.
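A drastically simplified sketch of that competitive filter — not the actual LIDA architecture, just the core idea: candidate stimuli bid for the mental spotlight, the most salient one wins each cycle, and winners habituate so attention can move on:

```python
# Hypothetical salience scores for competing stimuli.
stimuli = {"quiet hum": 0.2, "bright flash": 0.9, "familiar chair": 0.1}

spotlight = []
for _ in range(3):  # in LIDA this cycle repeats 5 to 10 times a second
    winner = max(stimuli, key=stimuli.get)  # competition for dominance
    spotlight.append(winner)                # the winner is broadcast
    stimuli[winner] *= 0.3                  # attended stimuli habituate

print(spotlight)
```

The bright flash dominates at first, then fades with habituation and loses the competition to a previously ignored stimulus — a crude version of attention shifting from moment to moment.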
The first version of LIDA was a job-matching server for the U.S. Navy. It read emails and focused on pertinent information while juggling each job hunter’s interests, the availability of jobs, and the requirements of government bureaucracy.
Since then, Franklin’s team has used the system to model animals’ minds, especially behavioral quirks that result from focusing on one thing at a time. For example, LIDA is just as prone as humans are to a curious psychological phenomenon known as “attentional blink.” When something catches your attention, you become oblivious to anything else for about half a second. This cognitive blind spot depends on many factors and LIDA shows humanlike responses to these same factors.
Pentti Haikonen, a Finnish AI researcher, has built a robot named XCR-1 on similar principles. Whereas other researchers make modest claims, saying only that their machines recreate some quality of consciousness, Haikonen argues that his creation is capable of genuine subjective experience and basic emotions.
The system learns to make associations much like the neurons in our brains do. If Haikonen shows the robot a green ball and speaks the word “green,” the vision and auditory modules respond and become linked. If Haikonen says “green” again, the auditory module will respond and, through the link, so will the vision module. The robot will proceed as if it heard the word and saw the color, even if it’s staring into an empty void.
Conversely, if the robot sees green, the auditory module will respond, even if the word wasn’t uttered. In short, the robot develops a kind of synesthesia.
“If we see a ball, we may say so to ourselves, and at that moment our perception is rather similar to the case when we actually hear that word,” Haikonen says. “The situation in the XCR-1 is the same.”
Things get interesting when the modules clash — if, for example, the vision module sees green while the auditory module hears “blue.” If the auditory module prevails, the system as a whole turns its attention to the word it hears while ignoring the color it sees. The robot has a simple stream of consciousness consisting of the perceptions that dominate it moment by moment: “green,” “ball,” “blue,” and so on. When Haikonen wires the auditory module to a speech engine, the robot will keep a running monolog about everything it sees and feels.
Haikonen also gives vibration a special significance as “pain,” which preempts other sensory inputs and consumes the robot’s attention. In one demonstration, Haikonen taps the robot and it blurts, “Me hurt.”
“Some people get emotionally disturbed by this, for some reason,” Haikonen says. (He and others are unsentimental about the creations. “I’m never like, ‘Poor robot,’” Verschure says.)
A NEW SPECIES
Building on these early efforts, researchers will develop more lifelike machines. We could see a continuum of conscious systems, just as there is in nature, from amoebas to dogs to chimps to humans and beyond. The gradual progress of this technology is good because it gives us time to adjust to the idea that, one day, we won’t be the only advanced beings on the planet.
For a long while, our artificial companions will be vulnerable — more pet than threat. How we treat them will hinge on whether we recognize them as conscious and as capable of suffering.
“The reason that we value non-human animals, to the extent that people do, is that we see, based on our own consciousness, the light of consciousness within them as well,” says Susan Schneider, a philosopher at the University of Connecticut who studies the implications of AI. In fact, she thinks we will deliberately hold back from building conscious machines to avoid the moral dilemmas it poses.
“If you’re building conscious systems and having them work for us, that would be akin to slavery,” Schneider says. By the same token, if we don’t give advanced robots the gift of sentience, it worsens the threat they may eventually pose to humanity because they will see no particular reason to identify with us and value us.
Judging by what we’ve seen so far, conscious machines will inherit our human vulnerabilities. If robots have to anticipate what other robots do, they will treat one another as creatures with agency. Like us, they may start attributing agency to inanimate objects: stuffed animals, carved statues, the wind.
Last year, social psychologists Kurt Gray of the University of North Carolina and the late Daniel Wegner suggested in their book “The Mind Club” that this instinct was the origin of religion. “I would like to see a movie where the robots develop a religion because we have engineered them to have an intentionality prior so that they can be social,” Verschure says. “But their intentionality prior runs away.”
These machines will vastly exceed our problem-solving ability, but not everything is a solvable problem. The only response they could have to conscious experience is to revel in it, and with their expanded ranges of sensory perception, they will see things people wouldn’t believe.
“I don’t think a future robotic species is going to be heartless and cold, as we sometimes imagine robots to be,” Lipson says. “They’ll probably have music and poetry that we’ll never understand.”
What the Rise of Sentient Robots Will Mean for Human Beings was originally published by NBC Universal Media, LLC on June 19, 2017 by George Musser. Copyright 2017 NBC Universal Media, LLC. All rights reserved.
This technology would take the form of an injectable “neural lace” — composed of a mesh of electrodes — that would augment the human brain, adding another layer to the cortex and limbic system that is capable of communicating with a computer (essentially creating cyborgs). This, hypothetically, creates an upgradable, updatable interface that could be applied in countless ways. Some of these include:
Controlling Computers With Your Mind
Brains and technology both operate using the same vectors: electricity and data. Musk’s Neural Lace would be a system that provides a way for them to communicate directly with each other. To borrow a simile from Phillip Alvelda, program manager of the Neural Engineering System Design (NESD) program, another nascent BCI effort: “Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem […] Imagine what will become possible when we upgrade our tools.” Applications could stretch from the remote operation of technology to the completely hands-free and voiceless operation of computers. Researchers in Korea have already used a BCI to control turtles.
Updating Your Mind or Communicating With Someone Else’s
Elon Musk’s idea could both initiate brain activity and monitor it. The technology does not necessarily have to be a one-way communication stream; it is capable of both sending messages to and creating information in the brain. The high-bandwidth interface could allow you to wirelessly transmit information to the cloud, to computers, or even directly to the brains of other people with a similar interface in their head. There is also the possibility of downloading content to augment your consciousness: think Neo learning kung-fu in the Matrix. While initial tests to improve intelligence haven’t been too successful, if brains and computers speak the same language, then computers can impart information to the brain. The technology is currently being used to allow paralyzed people to communicate, but its uses could extend far beyond that.
Bionic Limbs That Feel Like Real Limbs
As part of this two-way communication stream, robotic arms connected to existing nerve structures could relay sensory feedback. Rather than making the brain learn how to use a new part of the nervous system, robotic limbs could be quickly and easily integrated into the existing one. This has the potential to revolutionize prosthetic limbs for the disabled, but may also encourage people to rid themselves of their biological arms in favor of mechanical super limbs. Who knows!
Emotionally Aware Technology
As computers and brains would essentially be speaking the same language, emotions could be read as data using electrodes. This would shift technology’s perception of humans from basic recognition to complex understanding. Robot helpers would be able to adapt to your emotional state rather than just responding to commands. Photos and videos could also be implanted with emotional metadata, meaning that one could feel what it would be like to be in any given scenario, rather than just trying to imagine it.
Next Generation Adaptable Gaming
One issue with the lifespan of games is repetition; people become accustomed, know what to expect, or are limited by the programmed narrative. A BCI could improve this situation by having games respond to what your brain is feeling, remaining one step ahead and endlessly diverse. This would be most applicable to the horror genre, in which enemies could come at you when and where you least expect it, providing constant shocks, jumps, and thrills. The Black Mirror episode Playtest is an hour-long exploration of just how terrifying this could be. Since AI has been shown to be as creative as a human composer, this reality could be surprisingly close.
Several politicians and leaders in technology law are calling for the United States to create a department that concentrates on robotics and artificial intelligence (AI). AI is becoming ubiquitous, and is present in everything from your cell phone to self-driving cars.
The future of the workforce is in automation, and a plan needs to be in place for workers who are affected. In his farewell address, former president Barack Obama expressed his concerns about the impact of future tech. “The next wave of economic dislocation won’t come from overseas,” Obama said. “It will come from the relentless pace of automation that makes many good, middle-class jobs obsolete.”
The U.S. should start taking action to address Obama’s concerns, argues John Frank Weaver, a lawyer who specializes in AI law. In an interview with Inverse, he advocated the formation of a federal commission or similar government entity to establish overarching regulations for AI and autonomous technology.
“The idea that there’s one body where congress and the executive branch are able to pool their resources and come up with a coherent federal policy for the country, both in terms of domestic policy and how we approach international treaties, I think is important, because of the potential dangers in a lot of areas,” Weaver said.
Some of these potential dangers might be privacy concerns from drones or smart TVs, or safety issues stemming from cars driven by AI. There are also economic implications to these technological advances: what happens to taxis, Uber, Lyft, long-haul trucking, and other industries when AI takes over driving? Who is responsible for accidents caused by self-driving vehicles? A centralized federal agency could tackle these problems and others.
The idea of a federal agency to regulate robotics isn’t new. Ryan Calo, a professor at the University of Washington School of Law and an adviser to the Obama administration, wrote a proposal for one in 2014. The proposal points out that private tech companies are already looking to government agencies for guidance in these uncharted technological territories. For example, Toyota approached NASA for help when its cars were unexpectedly accelerating. But NASA cannot take on all the problems that will come with a growing robotics industry — its members have too many other things to focus on.
Currently, any regulations of robotics and AI are spread out across many organizations. The Federal Aviation Administration, Securities and Exchange Commission, and the National Highway Traffic Safety Administration have some of the responsibility when it comes to robotics regulations. However, this arrangement doesn’t allow for full coverage or expertise in this highly technical and rapidly changing field.
While the U.S. federal government is lagging behind technological advances, many states are struggling to come up with their own solutions. Legislation on autonomous vehicles has been passed in Alabama, California, Florida, Louisiana, Michigan, Nevada, North Dakota, Pennsylvania, Tennessee, Utah, and Virginia, as well as in Washington, D.C., since 2012. However, when you compare this body of legislation to that of the airline industry, it doesn’t even come close. If every department takes on only the robotics issues that affect it directly, there’s no across-the-board policy, which can lead to confusion.
It’s not like such policies are impossible to put in place. Japan and the European Union have both created robotics commissions along the lines of what Calo and Weaver have proposed. In Japan in particular, robotics is an enormous industry. In 2009, the nation employed over 25,000 robot workers, more than any other country. This could be a solution for the country’s declining birthrate and diminishing workforce. The European Union’s proposal covers rules and ethics governing robots in addition to tackling the societal issues that will arise.
Allowing the robotics industry to run amok without oversight could have far-reaching consequences. For a similar example, remember the banking industry collapse of 2008, which occurred because of a lack of federal oversight when it came to banking regulations. Nine years later, the industry is still suffering, according to author Anat Admati.
She says that it’s necessary to look to experts first to put guidelines in place — politicians and regulators probably don’t have the specific knowledge necessary to create rules about driverless cars, for example. In an interview with Inverse, Admati said, “It is important that policymakers rely on sufficient, un-conflicted expertise and make sure to set rules in a timely manner. Otherwise, we may discover that risks have been ignored when it is too late and harm has occurred.”
In a situation linked to the economy, it is vital that we have regulations in place to prevent another collapse like in 2008. A federal robotics agency is necessary in order to nurture this growing industry — and protect the nation from its side effects.
The moment we’ve all been waiting for has finally arrived…well…kind of. We at least now know the month when the battle of the century will commence.
Back in 2015, a team of American engineers, under the MegaBots moniker, challenged their Japanese counterparts, named Suidobashi, to a giant robot mega-battle. The Americans wanted to pit their Mark II robot against Japan’s Kuratas. And, after nearly two years of anticipation, we can finally gear up to watch this battle, at an unspecified date in August, at an unspecified location.
While we may have visions of elegant swordplay with lightning fast parries and ripostes, it’s not likely that robotics has gotten to the level of science-fiction favorites like Gundam or Pacific Rim.
Robotics have come a long way since Leonardo da Vinci gave the world its first recorded designs for a humanoid robot. Fittingly, it was made out of a suit of armor. Perhaps giant fighting robots were the logical conclusion to military technology.
Amazon’s CEO, Jeff Bezos, recently demoed the world’s first manned bipedal robot. The sight couldn’t help but conjure images of Lex Luthor in his Warsuit or Obadiah Stane in the Iron Monger armor. This model isn’t likely to be battling superheroes anytime soon, but there are plenty of military robots already in existence and in development.
The software behind modern robots is also rapidly developing. There is now even tech available that allows humans to control robots with their minds.
These fighting robots may not have that capability, but humans likely aren’t mature enough to handle mind-controlled mecha-fighters at this point in time anyhow. So tune into the MegaBots and Suidobashi YouTube channels this summer to see all of your twisted metal fantasies come true.
In case you missed it, Elon Musk is rather concerned about the fate of humanity, given the extreme advancements being made in artificial intelligence (AI). Ultimately, he fears that our AI will, one day, overtake us. When this happens, he claims that humans will likely become second class citizens (or slaves, or something even worse).
Now, reports have surfaced which assert that he is backing a brain-computer interface venture that was founded to allow humans to keep up with the advancements made in machines. The interface is intended to work by augmenting that which makes us human: our brains.
The find comes from The Wall Street Journal. According to them, the company – which is called Neuralink – is still in the earliest stages of development. To that end, it has no public presence at all.
“For a meaningful partial brain interface, I think we’re roughly four or five years away.” – Elon Musk
What we do know is that its ultimate goal is to create a device (or possibly a series of devices) that can be implanted in the human brain. These will serve a multitude of purposes, the ultimate aim being to help humans merge with software and keep pace with artificial intelligences so we don’t get left in the dust.
Initially, these enhancements will likely assist in smaller ways, such as helping us improve our memories by creating additional, removable storage components.
Notably, this is not the first that we have heard of Musk working on such a device. Previously, he mentioned a device called the “neural lace.” He explained how he imagined it would work at the 2016 Code Conference, which you can see below:
The Race to Beat AI
Unsurprisingly, Musk isn’t the only one worried about AI. In a video posted by Big Think, Michael Vassar, the chief science officer of MetaMed Research, stated that AI will likely kill us all (literally): “If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order.” Essentially, he is warning that an unchecked AI could eradicate humanity in the future.
Similarly, Stephen Hawking famously stated that AI is one of the biggest threats to humanity: “The development of artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
To that end, Musk isn’t the only person working to ensure that humanity can keep up with AI. Braintree founder Bryan Johnson is investing $100 million to make a neuroprosthesis to unlock the power of the human brain and, ultimately, make our neural code programmable.
Johnson outlines the purpose of his work, stating that it’s all about co-evolution:
Our connection with our new creations of intelligence is limited by screens, keyboards, gestural interfaces, and voice commands — constrained input/output modalities. We have very little access to our own brains, limiting our ability to co-evolve with silicon-based machines in powerful ways.
He is working to change this and ensure that we have a seamless interface with our technologies (and our AI).
Johnson is clear that his company, Kernel, will begin by researching the brain and figuring out exactly how it works. This research, Johnson states, is the first step in helping humans achieve permanent equality with machines.
Of course, such technologies will do a lot more than just allow humans to interface with machines. Neuroprostheses could also restore our cognitive abilities, allowing us to combat neurological diseases such as Alzheimer’s, ALS, and Parkinson’s, as well as other conditions that destroy our brains…and our lives.
This is just the beginning.
Such advancements could allow us to merge with machines, yes, but they can also allow us to literally program our neural code, which would allow us to transform ourselves in ways that we can’t even imagine. In short, we could program ourselves into the people that we want to be. As Johnson states, “Our biology and genetics have become increasingly programmable; our neural code is next in line.”
It sounds like something out of science fiction, but it is based on remarkable scientific work.
In short, the devices under development work by replicating the way that our brain cells communicate with one another. The tech envisioned is based on 15 years of academic research that was funded by the NIH and DARPA. So get ready. Human superintelligence is only a matter of time.
Disclosure: Bryan Johnson is an investor in Futurism; he does not hold a seat on our editorial board or have any editorial review privileges.
Modern consumers crave touchscreens, but the screens’ fragility is their downfall. For sensors to cover anything larger — like a robot — the technology will need to be cost-effective and flexible. An MIT research team may have found the solution in 3-D printing, according to a study published recently in the journal Advanced Materials Technologies.
The team, led by graduate student Subramanian Sundaram, set out to build a device that would physically respond to mechanical stresses. They were inspired by the golden tortoise beetle, which changes from its typical golden hue to a reddish orange when prodded or otherwise mechanically stressed. To prove that it is feasible to blend processing circuitry and sensors in printable, flexible electronics, the team designed sensors with “pixels” that change color when pressed.
“In nature, networks of sensors and interconnects are called sensorimotor pathways,” Sundaram said in an MIT press release. “We were trying to see whether we could replicate sensorimotor pathways inside a 3-D-printed object. So we considered the simplest organism we could find.”
Printable electronics aren’t exactly new, but existing printable electronics take a plastic substrate and deposit flexible circuitry on it. The team working on this artificial “goldbug” actually printed the substrate itself. Choosing and customizing the substrate means fewer limitations in terms of what can be deposited atop it, in turn increasing the variety of devices this process has the potential to create.
Printable, Sensor-Laden Robot Skin
3-D-printed substrates will also make printable, sensor-laden robot skin possible. Although printed substrates emerge from the printer as flat sheets, they can then fold themselves into more intricate, 3-D shapes.
For example, researchers at the CSAIL Distributed Robotics Laboratory are developing self-assembling, printable robots. These robots work like the Shrinky Dinks of the future, going into the oven flat and coming out folded into shape. This strategy demonstrates the power of 3-D printing an entire component — or robot — rather than simply printing its individual parts.
“We believe that only if you’re able to print the underlying substrate can you begin to think about printing a more complex shape,” Sundaram says.
Ultimately, the goal would be to use an underlying substrate that was packed with sensors as well as working transistors so that the robot would be able to determine which inputs were important and which were just sensory “noise.” This substrate would be the ideal skin for a robot intended to react to its environment and interact intelligently with people and things around it.
Popular media frequently warn us about an impending robot revolution. A film or television show begins with a few guffaws and cackles about how artificially intelligent robots are “silly,” but ends on a more somber note. “Ex Machina,” “The Terminator,” and “Westworld” are all terrific examples of humans ignoring the idea of sentience when they first encounter robots. But what if we ignore it in real life? Should we acknowledge it right up front?
Some people believe robots will never truly achieve consciousness because humans don’t even understand it. Our idea of “human rights” is a relatively philosophical notion built on the idea of pain and suffering. Normally, robots do not need to be programmed to feel those emotions in order to carry out their functions, so the point is moot.
The other side of the argument is that our species evolved to understand pain for our own benefit. If we know fire hurts when we touch it, we won’t touch it. However, an advanced AI may just program pain into itself to achieve a higher level of self-awareness. At that point, denying robots rights is simply a matter of economics, the same as when factions of humanity have denied such rights to other humans and to animals throughout our history.
The issue of machine rights is already coming up in relation to privacy concerns and various thresholds of consciousness, but the idea of human exceptionalism is worth considering. We don’t want the species to go extinct, and the rights we decide to give to other species can have a direct effect on our own survival.
Artificial intelligence already plays a major role in human economies and societies, and it will play an even bigger role in the coming years. To ponder the future of AI is thus to acknowledge that the future is AI.
This will be partly owing to advances in “deep learning,” which uses multilayer neural networks that were first theorized in the 1980s. With today’s greater computing power and storage, deep learning is now a practical possibility, and a deep-learning application gained worldwide attention in 2016 by beating the world champion in Go. Commercial enterprises and governments alike hope to adapt the technology to find useful patterns in “Big Data” of all kinds.
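To make “multilayer” concrete, here is a minimal sketch of the forward pass of a small multilayer neural network, written in Python with NumPy. The layer sizes and random weights are illustrative placeholders; a real deep-learning system learns its weights from data via backpropagation.

```python
import numpy as np

def relu(x):
    # Rectified linear unit, a common activation function in deep networks
    return np.maximum(0, x)

rng = np.random.default_rng(0)
# Placeholder weights for a 4 -> 8 -> 8 -> 2 network (normally learned from data)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    """Pass a batch of inputs through two hidden layers to an output layer."""
    h1 = relu(x @ W1 + b1)   # first hidden layer
    h2 = relu(h1 @ W2 + b2)  # second hidden layer
    return h2 @ W3 + b3      # raw output scores

scores = forward(np.ones((1, 4)))  # one input with 4 features -> 2 output scores
```

Stacking many more such layers, and training their weights on large datasets, is what turns this toy into the “deep learning” described above.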
In 2011, IBM’s Watson marked another AI watershed, by beating two previous champions in Jeopardy!, a game that combines general knowledge with lateral thinking. And yet another significant development is the emerging “Internet of Things,” which will continue to grow as more gadgets, home appliances, wearable devices, and publicly-sited sensors become connected and begin to broadcast messages around the clock. Big Brother won’t be watching you; but a trillion little brothers might be.
Beyond these innovations, we can expect to see countless more examples of what were once called “expert systems”: AI applications that aid, or even replace, human professionals in various specialties. Similarly, robots will be able to perform tasks that could not be automated before. Already, robots can carry out virtually every role that humans once filled on a warehouse floor.
Given this trend, it is not surprising that some people foresee a point known as the “Singularity,” when AI systems will exceed human intelligence, by intelligently improving themselves. At that point, whether it is in 2030 or at the end of this century, the robots will truly have taken over, and AI will consign war, poverty, disease, and even death to the past.
To all of this, I say: Dream on. Artificial general intelligence (AGI) is still a pipe dream. It’s simply too difficult to master. And while it may be achieved one of these days, it is certainly not in our foreseeable future.
But there are still major developments on the horizon, many of which will give us hope for the future. For example, AI can make reliable legal advice available to more people, and at a very low cost. And it can help us tackle currently incurable diseases and expand access to credible medical advice, without requiring additional medical specialists.
In other areas, we should be prudently pessimistic – not to say dystopian – about the future. AI has worrying implications for the military, individual privacy, and employment. Automated weapons already exist, and they could eventually be capable of autonomous target selection. As Big Data becomes more accessible to governments and multinational corporations, our personal information is being increasingly compromised. And as AI takes over more routine activities, many professionals will be deskilled and displaced. The nature of work itself will change, and we may need to consider providing a “universal income,” assuming there is still a sufficient tax base through which to fund it.
A different but equally troubling implication of AI is that it could become a substitute for one-on-one human contact. To take a trivial example, think about the annoyance of trying to reach a real person on the phone, only to be passed along from one automated menu to another. Sometimes, this is vexing simply because you cannot get the answer you need without the intervention of human intelligence. Or, it may be emotionally frustrating, because you are barred from expressing your feelings to a fellow human being, who would understand, and might even share your sentiments.
Other examples are less trivial, and I am particularly worried about computers being used as “carers” or “companions” for elderly people. To be sure, AI systems that are linked to the Internet and furnished with personalized apps could inform and entertain a lonely person, as well as monitor their vital signs and alert physicians or family members when necessary. Domestic robots could prove to be very useful for fetching food from the fridge and completing other household tasks. But whether an AI system can provide genuine care or companionship is another matter altogether.
Those who believe that this is possible assume that natural-language processing will be up to the task. But “the task” would include having emotionally-laden conversations about people’s personal memories. While an AI system might be able to recognize a limited range of emotions in someone’s vocabulary, intonation, pauses, or facial expressions, it will never be able to match an appropriate human response. It might say, “I’m sorry you’re sad about that,” or, “What a lovely thing to have happened!” But either phrase would be literally meaningless. A demented person could be “comforted” by such words, but at what cost to their human dignity?
The alternative, of course, is to keep humans in these roles. Rather than replacing humans, robots can be human aids. Today, many human-to-human jobs that involve physical and emotional caretaking are undervalued. Ideally, these jobs will gain more respect and remuneration in the future.
But perhaps that is wishful thinking. Ultimately, the future of AI – our AI future – is bright. But the brighter it becomes, the more shadows it will cast.
Although Bill Gates has said that robots that take jobs from humans should be taxed, billionaire Commerce Secretary Wilbur Ross doesn’t agree that a tax can mitigate the threat of job automation. “I’m not in favor of trying to hold back technological advance,” Ross told CNBC. “And if we don’t employ robots, the Chinese will, the Vietnamese will, the Europeans will, the Japanese will. Everyone will.”
This is not a surprising position from Ross, who took his place as a member of the new Republican administration this year. Ross has called overregulation the “single most important thing that bars” effective business decisions. His personal wealth as a billionaire was made mostly from buying and “flipping” distressed businesses. His dealings have at times been perceived as controversial, as Ross has fired workers and eliminated pensions in order to maximize profit.
Ross believes that companies should decide whether innovations, including robots, benefit them. Instead of taxing robots—and essentially passing on that cost to the business owning class—Ross has stated that people in the workforce need to adjust to become part of the workforce of the future, and that improvements to the community college system might be one way they can prepare to do that. Beyond that, Ross has not articulated a specific plan for coping with changes to the workforce wrought by advances in AI and increased numbers of working robots.
Although Ross is not eager to regulate U.S. businesses, he recently leveled a $900 million punishment against Chinese smartphone company ZTE based on their violation of U.S. sanctions against Iran. “Industrial power is, at the end of the day, a national security issue,” Ross said.
It remains to be seen whether mass unemployment without transition plans will be perceived as a national security issue as well.
Much has been said about how automation will affect employment and the economy. In almost every conversation, the looming threat of job displacement is focused on a very specific sector: the blue-collar job market.
These predictions are premised on the fact that machines are now more than capable of completing repetitive jobs that most blue-collar human workers are handling today. But technology isn’t going to stop there. Artificial intelligence (AI) is getting more sophisticated, implying that it’s not only the jobs defined by formulaic processes that are in danger, but also creative, service and knowledge-based professions.
Are Any Jobs Safe?
“We are starting to see in fields like medicine, law, investment banking, dramatic increases in the ability of computers to think as well or better than humans. And that’s really the game-changer here. Because that’s something that we have never seen before,” says Sunil Johal, a public policy expert, speaking to CBC News.
Granted, the implications of more intelligent automation for “white-collar” jobs are all speculative at this point. There’s little data on how much automation will affect that job market, mostly because experts believe its impact will be far more subtle than in blue-collar industries. In white-collar industries, there’s more opportunity to shuffle employees around or slowly phase out jobs, which means the threat of automation won’t be as dramatic. That said, it will change things.
Johal believes that to keep up, one must actively develop new skills that will adapt to the changing needs of the job market.
“If Canada doesn’t take this seriously, we are going to see many Canadians left on the sidelines of the labour market,” he adds. “They are not going to be able to get back into the job force.”
Experts have predicted that machines will take over a good number of jobs in the next five to ten years, but for a Dublin-based Wendy’s, automation began last year with self-ordering kiosks. Demand for the technology has been rather high, coming from both customers and franchise owners.
“There is a huge amount of pull from (franchisees) in order to get them,” according to the Wendy’s chief information officer David Trimm, speaking during the company’s investors’ day. “With the demand we are seeing … we can absolutely see our way to having 1,000 or more restaurants live with kiosks by the end of the year.”
Obviously, these kiosks would cut labor costs: “They are looking to improve their automation and their labor costs, and this is a good way to do it,” said Darren Tristano, VP at food-service research and consulting firm Technomic. “They are also trying to enhance the customer experience. Younger customers prefer to use a kiosk.”
“They always are courteous. They always show up for work on time,” Bob Welcher, president of Restaurant Consultants Inc., joked about the kiosks last year.
As automation reaches the food industry, Wendy’s is taking the lead. It helps that the kiosks are made in-house, at the company’s 90 Degrees lab on North High Street in the University District. “So we know that the things we build work,” Trimm said.
Wendy’s is the third largest burger chain in the world, after McDonald’s and Burger King. In the United States, around 49 million consumers go to Wendy’s each month, and self-serve kiosks could definitely help those numbers climb higher.
Attitudes towards robots, especially artificially intelligent (AI) robots, are mixed at best. As AI technology evolves and robots become eerily similar to living, breathing, thinking organisms, people seem to be less and less likely to trust them. Robots stir up unsettling memories of science fiction nightmares, images of robots surpassing humans in intelligence and taking over, lording over us and causing mass mayhem.
In an effort to gauge public opinion, the European Commission recently conducted a survey that looked at people’s attitudes towards robots, and the results of that effort are now available.
Can We Trust Robots?
While the general reaction was mostly positive, there are a few areas in which people expressed a fairly obvious distrust. The study asked participants whether there were any areas of work from which robots should be banned, and several industries drew a significant affirmative response: 60–61 percent of respondents said that robots should be banned from caring for children, the elderly, and the disabled; 30–34 percent said that robots should be banned from education; and 27–30 percent said that robots should be banned from healthcare.
As AI is already being used within healthcare in the form of IBM Watson, it is easy to imagine that any of these sectors could be home to AI technology and robots in the future. However, the report did show that there are several areas where people are already excited to have robots help propel us forward, with 45-52 percent in favor of their use for space exploration, 50-57 percent for manufacturing, and 41-64 percent for military and security operations.
AI and robotics technology is clearly progressing much faster than many people expected, and not just in the areas we’d hoped. AI robots are no longer a thing of the future, and they will only continue to get more intelligent and more prevalent in our daily lives. It is important to know how people feel about them and what can be done to minimize the risks of a sci-fi-style robot takeover, but it really all comes down to one question: How much do we trust robots?
In the age of automation, losing our jobs to our metal friends may not actually be our biggest worry. While it’s widely noted that by 2030 millions of jobs across multiple industries may be automated, 2031: The Movie gives us a bigger worry: the continued existence of our species.
In the opening scenes of the trailer, we are told that 1 in 50 soldiers are fully automated, hinting at not only their prevalence in society but also their warfaring capabilities. In this short film, we see Hollywood favorite Michelle Rodriguez come together with Lukas Haas to prevent the eradication of humanity as an ex-soldier and an engineer, respectively. They must work together to deftly utilize the information they have to make sure that robots don’t take over. The video paints a picture of a future where robots don’t just replace the jobs of lawyers, checkout clerks, and business professionals. In this future, the robots act as a superior, oppressive mechanical force that makes organic human life seem obsolete.
Check out the trailer below while I go take the batteries out of my Roomba…I don’t want it getting any ideas…
Technology is pushing human sexuality into uncharted terrain. It’s transforming how we express love and intimacy, and holds tremendous potential for deeper emotional and physical connections. While everyone stands to benefit, this is perhaps especially true for those who face sexual challenges due to distance, loneliness, discrimination, or disability.
According to Future of Sex, five major fields are driving these changes. The Future of Sex Report delves into each of them, revealing how breakthroughs are moving us past the assumption that technology isolates us. Here are the major ways technology is instead playing a pivotal role in bringing us closer together.
Distance can be a relationship killer. But work and other factors often prevent couples from seeing each other on a regular basis. Luckily, strides in remote sex and intimate technologies are overcoming this divide.
Sex tech companies such as Kiiroo and Vibease already sell internet-connected sex toys, which are often referred to as teledildonics. These “smart” vibrators and male sex sleeves interact with each other via mobile and desktop applications. Using haptic touch technology, they record sexual motions and send the sensations between lovers who are miles apart.
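The record-and-relay idea can be sketched in a few lines of Python. This is a hypothetical encoding, not Kiiroo’s or Vibease’s actual protocol: one device samples its motion over time, serializes the samples, and a paired device decodes them for playback.

```python
import json
from dataclasses import dataclass

@dataclass
class MotionSample:
    t_ms: int        # milliseconds since the session started
    position: float  # normalized actuator position, 0.0 to 1.0

def encode_samples(samples):
    """Serialize recorded motion for transmission to a partner's device."""
    return json.dumps([{"t": s.t_ms, "p": s.position} for s in samples])

def decode_samples(payload):
    """Rebuild the motion stream on the receiving device for playback."""
    return [MotionSample(d["t"], d["p"]) for d in json.loads(payload)]
```

In a real product the samples would stream continuously through the companies’ mobile apps rather than arrive as one JSON blob, and latency compensation would matter far more than the wire format.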
While the field is still in its fledgling stage, in the coming decades long-distance sex will become more appealing, realistic, and emotionally intense. Innovation is expected to truly take off after 2018, when a broad patent covering the transmission of sexual communication online expires.
Want to explore your sexual fantasies in a safe and accepting setting? Adult virtual worlds like Red Light Center [NSFW] and 3DXChat [NSFW] let you design hyper-customizable sexual environments and bodies. You can turn yourself or another avatar into your ideal sexual partner, meet real people in real time, partake in virtual dance parties, and enter non-judgemental spaces designed to cater to unique desires. For people who live in isolated rural areas or conservative regions that might view non-traditional sex as shameful, virtual sex could greatly improve overall well-being and self-acceptance.
Motion-capture animations of actors help create incredibly lifelike moving graphics. But the field is set to become even more realistic. Virtual sex worlds have begun integrating remote sex toys and virtual reality. As these emerging fields merge and become more advanced, the line dividing “real” sex with someone in your physical space from virtual sex in online worlds will disappear.
Companies are racing to create the first fully functional sex robots – although some low-level prototypes and automatons already exist. The popular frontrunner is Abyss Creations, the maker of the well-known silicone love doll line, RealDoll. The company announced its Realbotix project in 2015. Its first goal is to create a talking robotic head with the illusion of sentience that will attach onto RealDoll bodies. Preorders and the release of the robotic system are expected by the end of the year, while a standalone AI app is scheduled to launch in April.
The team also plans to make a fully robotic body and integrate its robots into virtual reality. The venture is especially exciting because the Realbotix team works with experts from Hanson Robotics. This world-renowned company is known for its sophisticated, humanlike robots with impressive AI capabilities.
Yet robots with sexual functions will not simply mend lonely hearts, or only appeal to folks already attracted to artificial companions. In the future, therapeutic robots will be used to treat sexual dysfunction and help teach people how to become better lovers to human partners.
While adult film studios are quick to embrace virtual reality, hoping it will revitalize profits undercut by free streaming sites, the technology’s sexual potential extends well beyond pornography and into sex education.
Simulated VR environments offer a safe way to explore social issues and teach young people about potentially risky scenarios. For instance, a YMCA in Montreal recently launched an educational campaign on consent. Using virtual reality headsets, participants entered into scripted interactions with an insistent male suitor—all from the female perspective. The organizers’ aim was to raise awareness on gender equality and violence by having people step into someone else’s shoes. A team from Emory University and Georgia Tech also announced plans to develop a VR sex ed program for young women of color. The immersive experience is meant to teach about safe sex practices, combat STIs, and prevent unplanned pregnancies.
Award-winning VR porn site BaDoinkVR has put its own erotic spin on sex therapy. In its eight-part series Virtual Sexology, created with the help of a licensed therapist, adult performers instruct on Kegel strengthening for men and women, and how to improve stamina and prolong orgasms.
Breakthroughs in the medical field are opening up erotic possibilities, making us reconsider so-called sexual limitations posed by biology or inflicted through injury or illness.
While genital reconstruction and transplant surgeries mostly rely on organ donors, 3D bioprinting and tissue engineering will eventually remove the bottleneck caused by low supply. In fact, doctors at the Wake Forest Baptist Medical Center have already created and implanted lab-grown vaginas in women with vaginal aplasia. The center has also bioengineered and implanted penile erectile tissue in rabbits.
In the next few decades, regenerative medicine will help more people overcome injury and sexual dysfunction. We will also have options to customize our bodies for personal sexual aesthetic and heightened erotic bliss. Microchips and internet-connected implants will also offer incredible possibilities for people wishing to merge with machines to become sexual cyborgs.
Creating Our Sexual Future
Sex and sexuality play integral roles in human interaction. These emerging technologies can help us learn more about ourselves and how to connect better with loved ones. How we use these innovations will shape generations to come. For people faced with physical, emotional, and geographic challenges, they’ll offer opportunities for sexual fulfillment that might not otherwise exist.
However, these topics don’t always receive the attention they deserve—as much as our society is fascinated by sex, we often fail to speak about it in a positive way that pushes discussion forward. The goal of the Future of Sex Report is to bring more visibility to how technology is shaping human sexuality, so as many people as possible can participate in creating a positive and inclusive sexual future.
At this year’s Consumer Electronics Show, Mayfield Robotics unveiled what could possibly be the first truly domestic robot. The Bosch-owned company integrated full functionality with a pleasing, approachable design to establish the robot, named Kuri, as a friendly presence in the home.
Kuri is designed to fit right into smart homes. According to Kuri’s Blog, the bot has a 1080p camera behind one of its eyes, an array of four microphones, dual speakers, Wi-Fi and Bluetooth connectivity, and spatial sensors for navigation. It can be programmed to do various tasks around the house and is capable of connecting to modern smart homes through IFTTT. Kuri is 50.8 centimeters (20 inches) tall and weighs 6.4 kilograms (14 pounds), and when its hours of battery life run out, the bot simply returns to its charging dock to recharge.
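IFTTT’s public Webhooks service works by receiving a simple HTTP POST at a per-user trigger URL, which is how many smart-home devices connect to it. The Python sketch below builds such a request; the event name and key are placeholders, and this illustrates the generic IFTTT webhook pattern, not Kuri’s actual integration.

```python
import json

# IFTTT's Webhooks (Maker) service triggers an applet via a POST to this URL.
IFTTT_URL = "https://maker.ifttt.com/trigger/{event}/with/key/{key}"

def build_trigger(event, key, value):
    """Return the (url, json_body) pair for an IFTTT Webhooks trigger.

    `event` and `key` are placeholders for a user's own applet name and
    secret key; `value` is passed through to the applet as `value1`.
    """
    url = IFTTT_URL.format(event=event, key=key)
    body = json.dumps({"value1": value})
    return url, body

url, body = build_trigger("kuri_motion", "MY_SECRET_KEY", "living room")
# A home hub would then POST `body` (Content-Type: application/json) to `url`.
```

A connected applet could then, say, send a phone notification whenever the robot reports motion in the house.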
The fact that Kuri reminds us of certain characters from animated films is not coincidental — Mayfield Robotics intentionally made the robot look that way so that it could easily ingratiate itself to its owners. However, Kuri is more than just a pretty face. “While insanely cute on the outside, Kuri contains serious technologies on the inside that represent the latest developments in smartphones, gaming, and robotics,” said Mayfield Robotics CTO and co-founder Kaijen Hsiao in a press release. “We hope Kuri introduces people — especially kids — to the power of technology and can inspire a new world of possibilities for their future.”
Best Friends With a Bot?
According to Kuri’s creators, one particularly great thing about this domestic robot is that it can be fully functional out of the box, not relying too heavily on third-party services. Further, by the time it reaches the market, Kuri will have a companion app available for iOS and Android. This will allow the robot’s owners to contact and control it remotely to do things like check up on the house while away on a business trip.
Kuri is far from the first domestic robot. A similar project from ASUS, called Zenbo, was also just announced in Taiwan, with a whole slew of capabilities of its own. French telecommunications company Yumii also recently unveiled Cutii, a companion robot for the elderly to help them interact with family, doctors, and others. We also have companion robots that can help autistic children learn new skills, ones that can respond to us with familiar facial expressions, and some that can perform physical tasks to help people with disabilities.
Kuri costs $699 and is available for pre-order with a down payment of $100. It should be released around the 2017 holiday season, so we’ll just have to wait until then to see whether the age of domestic robots is truly upon us.
For a minute, it seemed like there was good news for anyone who has ever dreamt of becoming a Jaeger pilot. Videos and images of a four-meter (13-ft) tall, walking mech suit that could bring Pacific Rim’s robots into reality surfaced on Instagram.
The imposing exoskeleton was supposedly created by a Korean robotics company with the help of Moldovan-American designer, Vitaly Bulgarov. Bulgarov was the creative mind who designed the robots in Terminator, Robocop, and Transformers.
However, according to Live Science, we have some reasons to be skeptical of the bot:
The South Korean company that is supposedly developing the robot has virtually no online presence and was unfamiliar to robotics researchers contacted by Live Science. Furthermore, the only source for the videos or any information about them is the Facebook and Instagram pages of a designer whose website mentions a conceptual art project about a “fictional robotics corporation that develops its products in a not-so-distant future.”
Bulgarov still claims the massive mech suit is real. According to Bulgarov, the video was leaked, and the company will do a “proper press release” when the robot is completed in 2017. “Frankly, right now, we don’t even really care if people think this is legit because we didn’t try to promote it,” Bulgarov said.
According to Ronald Fearing, professor of electrical engineering and computer science at the University of California, Berkeley, the technology is plausible. However, the video itself is leading scientists to further question the technology’s authenticity.
Christian Hubicki, a postdoctoral robotics researcher at Georgia Tech who worked on the DURUS robot, told Live Science that the bot’s leg joints look unusually smooth given the force that the step of a 1.5-ton robot would exert on the motors. Of course, Bulgarov had an answer to this skepticism. He noted the dampers in the robot’s feet make its motion smooth. He also said the robot wasn’t built for use as is, but as a platform to show off various technical feats.
It’s possible that Method-1 may not be real, but other bots created to provide mechanical assistance already exist. After a 90-year effort to build a robot that could move in waves, the SAW robot was unveiled by engineers at Ben-Gurion University of the Negev. Another machine developed for search and rescue missions is the CRAM. The cockroach-inspired robot is designed to run swiftly through small spaces, making it an essential tool for first responders searching for survivors in earthquake rubble.
These advances in robotics are making it possible for engineers to build machines that will prove to be invaluable. When it comes to Method-1 though, Hubicki still urges skepticism:
At some point, you have to ask yourself, ‘Did a secret company with no obvious ties to existing humanoid robotics groups actually develop a giant humanoid robot with likely unparalleled power and torque, and instead of sending out a press release, they unveil the robot by dribbling out low-resolution videos on Instagram via an affiliate who just so happens to be a visual effects artist?’
With all the environmental problems posed by global phenomena like climate change, it’s easy to forget that other, more directly man-made threats endanger animals. One such problem is the illegal wildlife trade, which is estimated to be a $19 billion business worldwide.
One of the most at-risk animals is the rhino, but you can’t exactly protect rhino herds 24/7. That’s why a mechanical engineer has a wacky new idea: robotic rhinos. These bots could live among rhino herds, track them, and alert patrols when poachers are nearby.
The illegal rhino trade is a big one: at least 5,940 African rhinos have been killed since 2008, with 1,175 killed in 2015 in South Africa alone. Rhinos are poached mainly for their horns; in 2013, street prices for rhino horn in Asia ranged from $60,000 to $100,000 per kilogram.
To combat this, the robo-rhino concept was born. Called Rakamera, it’s essentially a robot that would mimic rhinos closely enough for a herd to accept it, allowing it to monitor them from the inside.
The undercover bot would be powered by hydrogen fuel cells, and would have internal hydraulics and servomotors for locomotion. It could be outfitted with infrared sensors and cameras to track both the movements of the herd and any human activities.
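As a rough illustration of how such a sensor-equipped robot might decide when to alert patrols, here is a minimal geofence sketch: flag any detected human activity within a set radius of the herd. Everything here (the function names, the 500-meter threshold, the coordinates) is an illustrative assumption, not part of any published Rakamera design.

```python
import math

ALERT_RADIUS_M = 500  # assumed patrol-alert distance, in meters


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


def should_alert(herd_pos, detections, radius_m=ALERT_RADIUS_M):
    """Return the human detections close enough to the herd to warrant an alert."""
    return [d for d in detections
            if haversine_m(*herd_pos, *d) <= radius_m]
```

In practice, the detections would come from the robot's infrared sensors and cameras; the sketch only shows the final distance check that would trigger a message to rangers.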
Turning to technology
While this may seem like a very far-out solution to the poaching problem, many other high-tech solutions are already being explored to combat the threat. For example, one company wanted to flood the black market with 3D-printed synthetic horns, but the idea was universally opposed by conservation groups.
On a more conventional note, anti-poaching groups have taken to using infrared cameras, UAVs, spatial monitoring tools, and many other high-tech devices to catch criminals. These are most often used to track humans who get too close to protected areas.
Ultimately, the solution will have to come by way of destroying the demand. Myths of the curative properties of these horns will have to be dispelled, and people must be taught the value of wildlife conservation. Until then, technology will have to stand guard over our animals.