Nigel Stanford recorded his latest album ‘Automatica’ with the help of a robotic bassist, pianist, and drummer.
Flying warehouses, robot receptionists, smart toilets… do such innovations sound like science fiction or part of a possible reality? Technology has been evolving at such a rapid pace that, in the near future, our world may well resemble that portrayed in futuristic movies, such as Blade Runner, with intelligent robots and technologies all around us.
But what technologies will actually make a difference? Based on recent advancements and current trends, here are five innovations that really could shape the future.
1. Smart homes
Many typical household items can already connect to the internet and provide data. But much smart home technology isn’t currently that smart.
A smart meter just lets people see how energy is being used, while a smart TV simply combines television with internet access.
But technology is rapidly moving towards a point where it can use the data and connectivity to act on the user’s behalf.
To really make a difference, technology needs to fade more into the background – imagine a washing machine that recognises what clothes you have put into it, for example, and automatically selects the right programme, or even warns you that you have put in items that you don’t want to wash together.
Here it is important to better understand people’s everyday activities, motivations and interactions with smart objects to avoid them becoming uninvited guests at home.
Such technologies could even work for the benefit of all.
The BBC reports, for example, that energy providers will “reduce costs for someone who allows their washing machine to be turned on by the internet to maximise use of cheap solar power on a sunny afternoon” or “to have their freezers switched off for a few minutes to smooth demand at peak times”.
Our home is, after all, the place where we should feel most secure. For these technologies to become widespread, they will have to keep it that way.
2. Virtual secretaries
While secretaries play a crucial role in businesses, they often spend large parts of their working day on time-consuming but relatively trivial tasks that could be automated.
Consider the organisation of a “simple” meeting – you have to find the right people to take part (likely across business boundaries) and then identify when they are all available. It’s no mean feat.
Tools such as doodle.com, which compare people’s availability to find the best meeting time, can help. But they ultimately rely on those involved actively participating. They also only become useful once the right people have already been identified.
By using context information (organisation charts, location awareness from mobile devices, and calendars), identifying the right people and the right time for a given event becomes a technical optimisation problem, one that was explored by the EU-funded inContext project a decade ago.
At that stage, technology for gathering context information was far less advanced – smartphones were still an oddity, and data mining and processing were not where they are today.
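As a concrete (and entirely invented) illustration of scheduling as optimisation, a minimal sketch might treat each person's calendar as a set of free slots and pick the best slot everyone shares:

```python
# Hypothetical sketch: each person's calendar is a set of free hour slots.
# A slot everyone shares is a set intersection; "the best" slot is then
# whatever a preference function says (here: the earliest hour).
def best_meeting_slot(calendars, prefer=lambda slot: slot):
    free = set.intersection(*calendars)   # slots every participant has open
    return min(free, key=prefer) if free else None

alice = {9, 10, 11, 14}
bob = {10, 11, 15}
carol = {11, 14, 15}

print(best_meeting_slot([alice, bob, carol]))  # → 11
```

Real systems like inContext also had to discover *who* should attend in the first place, which turns this simple intersection into a much harder joint optimisation over people and times.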
Over the coming years, however, we could see machines doing far more of the day-to-day planning in businesses.
Indeed, the role of virtual assistants may go well beyond scheduling meetings and organising people’s diaries – they may help project managers to assemble the right team and allocate them to the right tasks, so that every job is conducted efficiently.
On the downside, much of the required context information is relatively privacy-invasive – but the younger generation already happily shares its every minute on Twitter and Snapchat, and such concerns may become less significant over time.
And where should we draw the line? Do we fully embrace the “rise of the machines” and automate as much as possible, or retain real people in their daily roles and only use robots to perform the really trivial tasks that no one wants to do?
This question will need to be answered – and soon.
3. AI doctors
How would you feel about receiving a diagnosis from an artificial intelligence? A private company called Babylon Health is already running a trial with five London boroughs that encourages consultations with a chatbot for non-emergency calls.
The artificial intelligence was trained using massive amounts of patient data in order to advise users to go to the emergency department of a hospital, visit a pharmacy or stay at home.
The company claims that it will soon be able to develop a system that could potentially outperform doctors and nurses in making diagnoses.
In countries where there is a shortage of medical staff, this could significantly improve health provision, enabling doctors to concentrate on providing treatment rather than spending too much time on making a diagnosis.
This could significantly redefine their clinical role and work practices.
Elsewhere, IBM Watson, the CloudMedx platform and Deep Genomics technology can provide clinicians with insights into patients’ data and existing treatments, help them to make more informed decisions, and assist in developing new treatments.
An increasing number of mobile apps and self-tracking technologies, such as Fitbit, Jawbone Up and Withings, can now facilitate the collection of patients’ behaviours, treatment status and activities.
It is not hard to imagine that even our toilets will soon become smarter and be used to examine people’s urine and faeces, providing real-time risk assessment for certain diseases.
Nevertheless, to enable the widespread adoption of AI technology in healthcare, many legitimate concerns must be addressed. Already, usability, health literacy, privacy, security, content quality and trust issues have been reported with many of these applications.
There is also a lack of adherence to clinical guidelines, along with ethical concerns and mismatched expectations regarding the collection, communication, use, and storage of patients’ data.
In addition, the limitations of the technology need to be made clear in order to avoid misinterpretations that could potentially harm patients.
If AI systems can address these challenges and focus on understanding and enhancing existing care practices and the doctor-patient relationship, we can expect to see more and more successful stories of data-driven healthcare initiatives.
4. Care robots
Will we have robots answering the door in our homes? Possibly. In most people’s homes? Even if they are reasonably priced, probably not. What distinguishes successful smart technologies from unsuccessful ones is how useful they are.
And how useful they are depends on the context. For most people, it’s probably not that useful to have a robot answering the door. But imagine how helpful a robot receptionist could be in places where there is a shortage of staff – in care homes for the elderly, for example.
Robots equipped with AI such as voice and face recognition could interact with visitors to check who they wish to visit and whether they are allowed access to the care home.
After verifying that, robots with routing algorithms could guide the visitor towards the person they wish to visit. This could potentially enable staff to spend more quality time with the elderly, improving their standard of living.
The AI required still needs further advancement in order to operate in completely uncontrolled environments. But recent results are positive.
Facebook’s DeepFace software was able to match faces with 97.25 percent accuracy when tested on a standard database used by researchers to study the problem of unconstrained face recognition.
The software is based on deep learning: an artificial neural network composed of millions of connections that acquires knowledge automatically from data.
5. Flying warehouses and self-driving cars
Self-driving cars can already outperform humans at some aspects of safe driving. That is partly because they can use a multitude of sensors to gather data about the world, including 360-degree views around the car.
Moreover, they could potentially communicate with each other to avoid accidents and traffic jams.
More than being an asset to the general public, self-driving cars are likely to become particularly useful for delivery companies, enabling them to save costs and make faster, more efficient deliveries.
Advances are still needed in order to enable the widespread use of such vehicles, not only to improve their ability to drive completely autonomously on busy roads, but also to ensure a proper legal framework is in place.
Nevertheless, car manufacturers are engaging in a race against time to see who will be the first to provide a self-driving car to the masses. It is believed that the first fully autonomous car could become available as early as the next decade.
The advances in this area are unlikely to stop at self-driving cars or trucks. Amazon has recently filed a patent for flying warehouses which could visit places where the demand for certain products is expected to boom.
The flying warehouses would then send out autonomous drones to make deliveries. It is unknown whether Amazon will really go ahead with developing such projects, but tests with autonomous drones are already successfully being carried out.
Thanks to technology, the future is here – we just need to think hard about how best to shape it.
The post 5 Futuristic Technologies Set to Change Our Lives Really Soon appeared first on Futurism.
The Highest Form of Flattery
When it comes to overcoming an array of varied problems with an even more diverse set of solutions, scientists have nothing on natural selection. Countless generations of millions of species have developed numerous ways to defy gravity, stick to objects while underwater, and even lift objects thousands of times more massive than themselves.
Medics, scientists, and even engineers have long been interested in learning how biology has overcome certain obstacles so they can glean principles that we might apply to our own challenges. A historical example of such a strategy can be found in the Wright brothers’ study of birds as they worked on their first plane. They observed how the shapes of birds’ wings affected airflow and lift, and incorporated this information into what became the first aircraft to complete a sustained, controlled flight.
Today, aerospace engineers continue to use this strategy, now with the focus specifically on owls to make aircraft quieter. And with tools that allow us to study organisms on a molecular level, we can now look at everything from sea creatures to woodland animals for inspiration to improve our tech, our medicine, and — ultimately — our lives.
One company that has successfully adopted this strategy, called “biomimicry,” is Gecko Biomedical, founded by Jeffrey Karp. He designed a medical tape to replace sutures and staples that was inspired by — you guessed it — the gecko. By simulating the tiny hairs that allow geckos’ feet to stick to walls (and adding some glue for good measure), Karp was able to design a tape that could seal incisions in internal tissues like gastrointestinal tracts.
Karp’s team has also gone on to invent a surgical glue after studying the mucus of sandcastle worms. The glue not only remains sticky in wet environments, but is strong enough to hold together a beating heart — an application for which the glue is undergoing clinical trials in Europe.
Solving a Problem
Karp maintains that his goal in biomimicry is never to simply copy certain elements in nature just because we can. Instead, he aims to identify real human needs, and only then look at the natural world to see if it offers any insight.
“When we look to solve problems, it’s not so we can publish papers and get pats on the back from the academic community,” Nick Sherman, a research technician at Karp Lab, said in an interview with the Guardian. “It’s more like, ‘Is this work going to help patients? If not, how do we make it help them?’”
That is why the team has focused on developing medical devices that have the potential to vastly improve patient care, as is the case with the surgical glue. Jean-Marc Alsac, a cardiovascular surgeon who is overseeing the trial, told the Guardian, “This could completely transform how we perform surgery.”
But medicine is not the only field that nature has been transforming. Robotics, too, has been building off of biological models, yielding ‘bots that can do amazing things. For example, engineers recently designed 100-gram (3.5-ounce) robots that work together like ants. When six of these cooperate, they can lift a 1,800-kg (3,900-pound) car.
The robo-ants could be applied to a number of today’s problems, which are many. With continued symptoms of climate change, an aging population at higher risk of cancer, and an increasingly digital society that is more susceptible than ever to cyber attacks, we are in desperate need of fresh solutions. These machines, medical devices, and robots are a testament to the wisdom of the natural world.
Karp believes biomimicry is a winning strategy because “every living creature that exists today is here because it tackled a number of challenges,” he told the Guardian. “In essence, we are surrounded by solutions. Evolution is truly the best problem-solver.”
The post “Bioinspired” Researchers Look to Nature to Solve Humanity’s Biggest Problems appeared first on Futurism.
A Floating Companion
You know that creepy black sphere used as a floating interrogation droid in Star Wars? It seems like scientists at the Japan Aerospace Exploration Agency (JAXA) pretty much designed the complete opposite of that, and we want one for our very own.
Called Int-Ball, this adorable little camera drone resembles something Pixar might have come up with, but it’s totally real, and is now a floating companion to astronauts on board the International Space Station (ISS) – where it helps out by taking photos and recording video, freeing up valuable astronaut time.
Int-Ball was delivered to the ISS in a SpaceX cargo shipment last month – the company’s first involving a reused Dragon cargo capsule – and is now operational, currently undergoing initial testing.
It looks like those checks are going pretty smoothly too, with JAXA having just released the first test footage captured by its little floating bot (aka the JEM Internal Ball Camera).
You can check out Int-Ball’s debut camerawork in the following YouTube clip, which is backed by possibly the most twee music ever used in an official video released by a space agency (although it’s strangely fitting too, given Int-Ball’s cutesy, somewhat Kirby-like proportions):
Testing Drones in Microgravity
According to JAXA, Int-Ball can move autonomously in space, and can also be remotely controlled by flight controllers and researchers on the ground, who can relay its footage in real-time back to astronauts on the ISS for review and follow-up.
The little orb only measures 15 centimetres (6 inches) in diameter and weighs 1 kg (2.2 lbs), and was largely manufactured using 3D printing.
Along the surface of the sphere, 12 fans are positioned to enable Int-Ball to move around, while a number of ‘3D Target Markers’ placed on the ISS’s internal walls help the drone to orientate itself so it can navigate from place to place.
JAXA says that as much as 10 percent of astronaut working hours on board the ISS is spent with a camera in hand, so offloading camerawork and videography to a little floating bot could significantly free up crew members to focus on conducting experiments and other important tasks.
Int-Ball is also serving as a test case so that JAXA scientists can see how well floating drones operate in the microgravity environment on board the ISS.
Taking care of video is an important gig, sure, but it’s definitely only a starting point.
In the future it’s conceivable that little autonomous drones like this could perform other kinds of jobs both inside and outside the ISS, helping astronauts by fetching or operating equipment, checking on supplies, or conducting repairs and maintenance on the station itself.
In the meantime, Int-Ball will have to stick to playing camera operator while JAXA figures out just what this technology is capable of, but we doubt very much that the ISS human crew mind having their tiny new pal around the joint.
After all, space can be a pretty lonely place.
The post This Little Japanese Camera Bot is Helping Astronauts Aboard the ISS appeared first on Futurism.
Consciousness raises questions in this age of artificial intelligence.
The post Michio Kaku: Could We Transport Our Consciousness Into Robots? appeared first on Futurism.
We could live in a future where even creativity will be automated.
The post Watch: Can Writers and Artists be Replaced By Robots? appeared first on Futurism.
Zombies and aliens may not be a realistic threat to our species. But there’s one stock movie villain we can’t be so sanguine about: sentient robots. If anything, their arrival is probably just a matter of time. But what will a world of conscious machines be like? Will there be a place in it for us?
Artificial intelligence research has been going through a recent revolution. AI systems can now outperform humans at playing chess and Go, recognizing faces, and driving safely. Even so, most researchers say truly conscious machines — ones that don’t just run programs but have feelings and are self-aware — are decades away. First, the reasoning goes, researchers have to build a generalized intelligence, a single machine with the above talents and the capacity to learn more. Only then will AI reach the level of sophistication needed for consciousness.
But some think it won’t take nearly that long.
“People expect that self-awareness is going to be this end game of artificial intelligence when really there are no scientific pursuits where you start at the end,” says Justin Hart, a computer scientist at the University of Texas. He and other researchers are already building machines with rudimentary minds. One robot wriggles like a newborn baby to understand its body. Another robot babbles about what it sees and cries when you hit it. Another sets off to explore its world on its own.
No one claims that robots have a rich inner experience — that they have pride in floors they’ve vacuumed or delight in the taste of 120-volt current. But robots can now exhibit some similar qualities to the human mind, including empathy, adaptability, and gumption.
Beyond it just being cool to create robots, researchers design these cybernetic creatures because they’re trying to fix flaws in machine-learning systems. Though these systems may be powerful, they are opaque. They work by relating input to output, like a test where you match items in column ‘A’ with items in column ‘B’. The AI systems basically memorize these associations. There’s no deeper logic behind the answers they give. And that’s a problem.
Humans can also be hard to read. We spend an inordinate amount of time analyzing ourselves and others, and arguably, that’s the main role of our conscious minds. If machines had minds, they might not be so inscrutable. We could simply ask them why they did what they did.
“If we could capture some of the structure of consciousness, it’s a good bet that we’d be producing some interesting capacity,” says Selmer Bringsjord, an AI researcher at the Rensselaer Polytechnic Institute in Troy, N.Y. Although science fiction may have us worried about sentient robots, it’s really the mindless robots we need to be cautious of. Conscious machines may actually be our allies.
ROBOT, KNOW THYSELF
Self-driving cars have some of the most advanced AI systems today. They decide where to steer and when to brake by taking constant radar and laser readings and feeding them into algorithms. But much of driving is anticipating other drivers’ maneuvers and responding defensively — functions that are associated with consciousness.
“Self-driving cars will have to read the minds of what other self-driving cars want to do,” says Paul Verschure, a neuroscientist at Universitat Pompeu Fabra in Barcelona.
As a demonstration of how that might look, Hod Lipson, an engineering professor at Columbia University and co-author of a recent book on self-driving cars, and Kyung-Joong Kim at Sejong University in Seoul, South Korea, built the robotic equivalent of a crazy driver. The small round robot (about the size of a hockey puck) moves on a loopy path according to its own logic. Then a second robot is set with the goal of intercepting the first robot no matter where the first one started, so it couldn’t rely on a fixed route; it had to divine the moving robot’s logic.
Using a procedure that mimicked Darwinian evolution, Lipson and Kim crafted an interception strategy. “It had basically developed a duplicate of the brain of the actor — not perfect, but good enough that it could anticipate what it’s going to do,” Lipson says.
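The flavour of that procedure can be sketched with a toy evolutionary loop (invented for illustration; this is not Lipson and Kim's actual system) that breeds predictors of a target's hidden movement rule:

```python
import random

# Toy illustration (not the researchers' actual method): the "crazy driver"
# moves by a hidden linear rule, and we evolve a predictor of its next
# position by repeated mutation and selection.
def target_rule(x):
    return 0.7 * x + 3.0          # hidden logic of the target robot

def fitness(params, samples):
    a, b = params                 # higher fitness = smaller prediction error
    return -sum((a * x + b - target_rule(x)) ** 2 for x in samples)

def evolve(generations=200, pop_size=30):
    random.seed(0)
    samples = [random.uniform(-10, 10) for _ in range(20)]
    pop = [(random.uniform(-2, 2), random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, samples), reverse=True)
        survivors = pop[: pop_size // 2]              # selection
        pop = survivors + [                           # mutation
            (a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
            for a, b in survivors
        ]
    return max(pop, key=lambda p: fitness(p, samples))

a, b = evolve()   # converges near the hidden rule's (0.7, 3.0)
```

The evolved pair (a, b) is exactly the "duplicate of the brain of the actor" idea in miniature: not the target's code, but a model good enough to anticipate it.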
Lipson’s team also built a robot that can develop an understanding of its body. The four-legged spidery machine is about the size of a large tarantula. When switched on, its internal computer has no prior information about itself. “It doesn’t know how its motors are arranged, what its body plan is,” Lipson says.
But it has the capacity to learn. It makes all the actions it is capable of to see what happens: how, for example, turning on a motor bends a leg joint. “Very much like a baby, it babbles,” Lipson says. “It moves its motors in a random way.”
After four days of flailing, it realizes it has four legs and figures out how to coordinate and move them so it can slither across the floor. When Lipson unplugs one of the motors, the robot realizes it now has only three legs and that its actions no longer produce the intended effects.
“I would argue this robot is self-aware in a very primitive way,” Lipson says.
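A cartoon of that babbling loop (invented here; the real robot learns a far richer, continuous body model) could look like this:

```python
import random

# Illustrative toy: the robot starts with no model of its own body and
# discovers each motor's effect purely by trying random actions.
class BabblingRobot:
    def __init__(self, true_effects):
        self.true_effects = true_effects   # hidden body: motor -> displacement
        self.model = {}                    # the learned self-model

    def babble(self, trials=100):
        random.seed(1)
        for _ in range(trials):
            motor = random.randrange(len(self.true_effects))
            observed = self.true_effects[motor]   # what actually happened
            self.model[motor] = observed          # update the self-model

    def working_motors(self):
        return [m for m, effect in self.model.items() if effect != 0]

robot = BabblingRobot([1.0, 1.0, 1.0, 1.0])   # four working leg motors
robot.babble()
print(len(robot.working_motors()))            # → 4

robot.true_effects[2] = 0.0                   # "unplug" one motor
robot.babble()                                # babbling again reveals the change
print(len(robot.working_motors()))            # → 3
```

As in the tarantula robot, nothing tells the machine that a motor has been unplugged; renewed babbling is what updates the self-model.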
Another humanlike capability that researchers would like to build into AI is initiative. Machines excel at playing the game Go because humans directed the machines to solve it. They can’t define problems on their own, and defining problems is usually the hard part.
In a forthcoming paper for the journal “Trends in Cognitive Sciences,” Ryota Kanai, a neuroscientist and founder of the Tokyo-based startup Araya, discusses how to give machines intrinsic motivation. In a demonstration, he and his colleagues simulated agents driving a car in a virtual landscape that includes a hill too steep for the car to climb unless it gets a running start. If told to climb the hill, the agents figure out how to do so. Until they receive this command, the car sits idle.
Then Kanai’s team endowed these virtual agents with curiosity. They surveyed the landscape, identified the hill as a problem, and figured out how to climb it even without instruction.
“We did not give a goal to the agent,” Kanai says. “The agent just explores the environment to learn what kind of situation it is in by making predictions about the consequence of its own action.”
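In spirit (the real agents use neural networks; everything below is a made-up toy), that kind of curiosity can be implemented by rewarding the agent with its own prediction error:

```python
# Toy sketch of intrinsic motivation: the agent predicts the consequence of
# its action and treats the prediction error ("surprise") as its reward,
# so situations it does not yet understand are intrinsically interesting.
class ForwardModel:
    def __init__(self):
        self.memory = {}

    def predict(self, state, action):
        return self.memory.get((state, action), 0.0)

    def update(self, state, action, next_state):
        self.memory[(state, action)] = next_state

def env_step(state, action):      # toy world: each action moves the agent
    return state + action

def curiosity_run(model, state, steps):
    total_surprise = 0.0
    for _ in range(steps):
        action = 1                                    # a real agent would choose
        predicted = model.predict(state, action)
        next_state = env_step(state, action)
        total_surprise += abs(next_state - predicted)  # surprise = reward
        model.update(state, action, next_state)        # improve the forward model
        state = next_state
    return total_surprise

model = ForwardModel()
early = curiosity_run(model, 0, steps=5)   # the world is still surprising
late = curiosity_run(model, 0, steps=5)    # same path, now fully predicted
print(early, late)  # → 15.0 0.0
```

Once the forward model is accurate, the surprise reward dries up, which is precisely what pushes a curious agent onward to unexplored parts of its world, such as the too-steep hill.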
The trick is to give robots enough intrinsic motivation to make them better problem solvers, and not so much that they quit and walk out of the lab. Machines can prove as stubborn as humans. Joscha Bach, an AI researcher at Harvard, put virtual robots into a “Minecraft”-like world filled with tasty but poisonous mushrooms. He expected them to learn to avoid them. Instead, they stuffed their mouths.
“They discounted future experiences in the same way as people did, so they didn’t care,” Bach says. “These mushrooms were so nice to eat.” He had to instill an innate aversion into the bots. In a sense, they had to be taught values, not just goals.
In addition to self-awareness and self-motivation, a key function of consciousness is the capacity to focus your attention. Selective attention has been an important area in AI research lately, not least at Google DeepMind, which developed the Go-playing computer.
“Consciousness is an attention filter,” says Stanley Franklin, a computer science professor at the University of Memphis. In a paper published last year in the journal “Biologically Inspired Cognitive Architectures,” Franklin and his colleagues reviewed their progress in building an AI system called LIDA that decides what to concentrate on through a competitive process, as suggested by neuroscientist Bernard Baars in the 1980s. The processes watch for interesting stimuli — loud, bright, exotic — and then vie for dominance. The one that prevails determines where the mental spotlight falls and informs a wide range of brain function, including deliberation and movement. The cycle of perception, attention, and action repeats five to 10 times a second.
The first version of LIDA was a job-matching server for the U.S. Navy. It read emails and focused on pertinent information while juggling each job hunter’s interests, the availability of jobs, and the requirements of government bureaucracy.
Since then, Franklin’s team has used the system to model animals’ minds, especially behavioral quirks that result from focusing on one thing at a time. For example, LIDA is just as prone as humans are to a curious psychological phenomenon known as “attentional blink.” When something catches your attention, you become oblivious to anything else for about half a second. This cognitive blind spot depends on many factors and LIDA shows humanlike responses to these same factors.
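A minimal sketch of such a competitive attention filter (the names and numbers below are invented, and LIDA itself is far more elaborate) might be:

```python
# Toy competitive attention filter: each cycle, candidate stimuli bid with a
# salience score; the winner takes the mental spotlight and is then briefly
# suppressed, loosely mimicking attentional fatigue.
def attention_cycles(stimuli, cycles=3, suppression=0.5):
    salience = dict(stimuli)
    spotlight = []
    for _ in range(cycles):
        winner = max(salience, key=salience.get)   # competition for dominance
        spotlight.append(winner)
        salience[winner] *= suppression            # the winner fatigues
    return spotlight

print(attention_cycles({"loud_noise": 0.9, "bright_light": 0.6, "odd_smell": 0.2}))
# → ['loud_noise', 'bright_light', 'loud_noise']
```

The suppression term is a crude stand-in for effects like the attentional blink: a stimulus that has just won the competition is temporarily handicapped in the next round.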
Pentti Haikonen, a Finnish AI researcher, has built a robot named XCR-1 on similar principles. Whereas other researchers make modest claims — that their systems recreate some quality of consciousness — Haikonen argues that his creation is capable of genuine subjective experience and basic emotions.
The system learns to make associations much like the neurons in our brains do. If Haikonen shows the robot a green ball and speaks the word “green,” the vision and auditory modules respond and become linked. If Haikonen says “green” again, the auditory module will respond and, through the link, so will the vision module. The robot will proceed as if it heard the word and saw the color, even if it’s staring into an empty void.
Conversely, if the robot sees green, the auditory module will respond, even if the word wasn’t uttered. In short, the robot develops a kind of synesthesia.
“If we see a ball, we may say so to ourselves, and at that moment our perception is rather similar to the case when we actually hear that word,” Haikonen says. “The situation in the XCR-1 is the same.”
Things get interesting when the modules clash — if, for example, the vision module sees green while the auditory module hears “blue.” If the auditory module prevails, the system as a whole turns its attention to the word it hears while ignoring the color it sees. The robot has a simple stream of consciousness consisting of the perceptions that dominate it moment by moment: “green,” “ball,” “blue,” and so on. When Haikonen wires the auditory module to a speech engine, the robot will keep a running monolog about everything it sees and feels.
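The cross-modal linking can be caricatured in a few lines (this is a toy in the spirit of the description above, not the XCR-1's actual architecture):

```python
# Toy associative memory: experiencing a word and a percept together links
# the two modules, so later either one can evoke the other.
class AssociativeMind:
    def __init__(self):
        self.links = {}

    def experience(self, word, percept):
        self.links[word] = percept       # hearing + seeing together forms a link
        self.links[percept] = word

    def evoke(self, stimulus):
        return self.links.get(stimulus)  # one modality triggers the other

mind = AssociativeMind()
mind.experience("green", "GREEN_BALL")
print(mind.evoke("green"))       # → GREEN_BALL  (the word evokes the sight)
print(mind.evoke("GREEN_BALL"))  # → green       (the sight evokes the word)
```

Wiring the evoked tokens to a speech engine is what turns this kind of association store into the robot's running monolog.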
Haikonen also gives vibration a special significance as “pain,” which preempts other sensory inputs and consumes the robot’s attention. In one demonstration, Haikonen taps the robot and it blurts, “Me hurt.”
“Some people get emotionally disturbed by this, for some reason,” Haikonen says. (He and others are unsentimental about the creations. “I’m never like, ‘Poor robot,’” Verschure says.)
A NEW SPECIES
Building on these early efforts, researchers will develop more lifelike machines. We could see a continuum of conscious systems, just as there is in nature, from amoebas to dogs to chimps to humans and beyond. The gradual progress of this technology is good because it gives us time to adjust to the idea that, one day, we won’t be the only advanced beings on the planet.
For a long while, our artificial companions will be vulnerable — more pet than threat. How we treat them will hinge on whether we recognize them as conscious and as capable of suffering.
“The reason that we value non-human animals, to the extent that people do, is that we see, based on our own consciousness, the light of consciousness within them as well,” says Susan Schneider, a philosopher at the University of Connecticut who studies the implications of AI. In fact, she thinks we will deliberately hold back from building conscious machines to avoid the moral dilemmas it poses.
“If you’re building conscious systems and having them work for us, that would be akin to slavery,” Schneider says. By the same token, if we don’t give advanced robots the gift of sentience, it worsens the threat they may eventually pose to humanity because they will see no particular reason to identify with us and value us.
Judging by what we’ve seen so far, conscious machines will inherit our human vulnerabilities. If robots have to anticipate what other robots do, they will treat one another as creatures with agency. Like us, they may start attributing agency to inanimate objects: stuffed animals, carved statues, the wind.
Last year, social psychologists Kurt Gray of the University of North Carolina and the late Daniel Wegner suggested in their book “The Mind Club” that this instinct was the origin of religion. “I would like to see a movie where the robots develop a religion because we have engineered them to have an intentionality prior so that they can be social,” Verschure says. “But their intentionality prior runs away.”
These machines will vastly exceed our problem-solving ability, but not everything is a solvable problem. The only response they could have to conscious experience is to revel in it, and with their expanded ranges of sensory perception, they will see things people wouldn’t believe.
“I don’t think a future robotic species is going to be heartless and cold, as we sometimes imagine robots to be,” Lipson says. “They’ll probably have music and poetry that we’ll never understand.”
What the Rise of Sentient Robots Will Mean for Human Beings was originally published by NBC Universal Media, LLC on June 19, 2017 by George Musser. Copyright 2017 NBC Universal Media, LLC. All rights reserved.
The post What Will The Rise of Conscious Machines Mean for Human Beings? appeared first on Futurism.
This mech suit can run over 20mph and jump 10 feet in the air.
Thanks to technological innovation, machines are learning how humans do things—they’re also learning how to do it better.
The post This New Era of Automation Is Going to Completely Transform Our Way of Life appeared first on Futurism.
Jane Kim, a San Francisco politician, has started exploring a tax on automated workers to combat inequality.
Elon Musk’s proposed brain-computer interface (BCI) would take the form of an injectable “neural lace” — composed of a mesh of electrodes — that would augment the human brain, adding another layer to the cortex and limbic system that is capable of communicating with a computer (essentially creating cyborgs). This, hypothetically, creates an upgradable, updatable interface that could be applied in countless ways. Some of these include:
Controlling Computers With Your Mind
Brains and technology both operate using the same vectors: electricity and data. Musk’s Neural Lace would be a system that lets them communicate directly with each other. To borrow a simile from Phillip Alvelda, program manager of the Neural Engineering System Design (NESD) program, another nascent BCI effort: “Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem […] Imagine what will become possible when we upgrade our tools.” Applications could stretch from the remote operation of technology to the completely hands-free and voiceless operation of computers. Researchers in Korea have already used a BCI to control turtles.
Updating Your Mind or Communicating With Someone Else’s
Elon Musk’s idea could both initiate brain activity and monitor it. The technology does not necessarily have to be a one-way communication stream; it is capable of both sending messages and creating information in the brain. The high-bandwidth interface could allow you to wirelessly transmit information to the cloud, to computers, or even directly to the brains of other people with a similar interface in their head. There is also the possibility of downloading content to augment your consciousness: think Neo learning kung fu in The Matrix. While initial tests to improve intelligence haven’t been too successful, if brains and computers speak the same language, then computers can impart information to the brain. The technology is currently being used to allow paralyzed people to communicate, but its uses could extend far beyond that.
Bionic Limbs That Feel Like Real Limbs
As part of this two-way communication stream, robotic arms connected to existing nerve structures could relay the sensations of flesh-and-blood limbs. Rather than forcing the brain to learn how to use a new part of the nervous system, robotic limbs could be quickly and easily integrated into it. This has the potential to revolutionize prosthetics for the disabled, but it may also encourage people to trade their biological arms for mechanical super limbs. Who knows!
Emotionally Aware Technology
As computers and brains would essentially be speaking the same language, emotions could be read as data using electrodes. This would shift technology’s perception of humans from basic recognition to complex understanding. Robot helpers would be able to adapt to your emotional state rather than just responding to commands. Photos and videos could also be implanted with emotional metadata, meaning that one could feel what it would be like to be in any given scenario, rather than just trying to imagine it.
Next Generation Adaptable Gaming
One issue with the lifespan of games is repetition; players grow accustomed to them, know what to expect, or are limited by the programmed narrative. A BCI could improve this situation by having games respond to what your brain is feeling, staying one step ahead and remaining endlessly varied. This would be most applicable to the horror genre, in which enemies could come at you when and where you least expect them, providing constant shocks, jumps, and thrills. The Black Mirror episode “Playtest” is an hour-long exploration of just how terrifying this could be. Since AI has been shown to be as creative as a human composer, this reality could be surprisingly close.
The post Five Ways Elon Musk’s Brain-Computer Interface Could Transform the World appeared first on Futurism.
Our Current State
Several politicians and leaders in technology law are calling for the United States to create a department that concentrates on robotics and artificial intelligence (AI). AI is becoming ubiquitous, and is present in everything from your cell phone to self-driving cars.
The future of the workforce is in automation, and a plan needs to be in place for the workers who are affected. In his farewell address, former president Barack Obama expressed his concerns about the impact of future tech. “The next wave of economic dislocation won’t come from overseas,” Obama said. “It will come from the relentless pace of automation that makes many good, middle-class jobs obsolete.”
The U.S. should start taking action to address Obama’s concerns, argues John Frank Weaver, a lawyer who specializes in AI law. In an interview with Inverse, he advocated the formation of a federal commission or similar government entity to establish overarching regulations for AI and autonomous technology.
“The idea that there’s one body where congress and the executive branch are able to pool their resources and come up with a coherent federal policy for the country, both in terms of domestic policy and how we approach international treaties, I think is important, because of the potential dangers in a lot of areas,” Weaver said.
Some of these potential dangers might be privacy concerns from drones or smart TVs, or safety issues stemming from cars driven by AI. There are also economic implications to these technological advances: what happens to taxis, Uber, Lyft, long-haul trucking, and other industries when AI takes over driving? Who is responsible for accidents caused by self-driving vehicles? A centralized federal agency could tackle these problems and others.
The idea of a federal agency to regulate robotics isn’t new. Ryan Calo, a professor at the University of Washington School of Law and an adviser to the Obama administration, wrote a proposal for one in 2014. The proposal points out that private tech companies are already looking to government agencies for guidance in these uncharted technological territories. For example, Toyota approached NASA for help when its cars were unexpectedly accelerating. But NASA cannot take on all the problems that will come with a growing robotics industry — it has too many other priorities.
Currently, regulation of robotics and AI is spread across many organizations. The Federal Aviation Administration, the Securities and Exchange Commission, and the National Highway Traffic Safety Administration each hold some of the responsibility for robotics regulation. However, this arrangement doesn’t allow for full coverage of, or expertise in, this highly technical and rapidly changing field.
While the U.S. federal government lags behind technological advances, many states are struggling to come up with their own solutions. Since 2012, legislation on autonomous vehicles has been passed in Alabama, California, Florida, Louisiana, Michigan, Nevada, North Dakota, Pennsylvania, Tennessee, Utah, and Virginia, as well as in Washington, D.C. However, this body of legislation doesn’t come close to that governing the airline industry. If every department takes on only the robotics issues that affect it directly, there is no across-the-board policy, which can lead to confusion.
Such policies are not impossible to put in place. Japan and the European Union have both created robotics commissions along the lines of what Calo and Weaver have proposed. In Japan in particular, robotics is an enormous industry: in 2009, the nation employed over 25,000 robot workers, more than any other country. Robotics could help offset the country’s declining birthrate and shrinking workforce. The European Union’s proposal covers rules and ethics governing robots in addition to tackling the societal issues that will arise.
Allowing the robotics industry to run amok without oversight could have far-reaching consequences. For a cautionary example, recall the banking industry collapse of 2008, which occurred because of a lack of federal oversight of banking regulations. Nine years later, the industry is still suffering, according to author Anat Admati.
She says that it’s necessary to look to experts first to put guidelines in place — politicians and regulators probably don’t have the specific knowledge necessary to create rules about driverless cars, for example. In an interview with Inverse, Admati said, “It is important that policymakers rely on sufficient, un-conflicted expertise and make sure to set rules in a timely manner. Otherwise, we may discover that risks have been ignored when it is too late and harm has occurred.”
In a field so closely tied to the economy, it is vital that we have regulations in place to prevent another collapse like that of 2008. A federal robotics agency is necessary in order to nurture this growing industry — and protect the nation from its side effects.
The post If We Don’t Regulate Automation, It Could Decimate the U.S. Economy appeared first on Futurism.
World War Mech
The moment we’ve all been waiting for has finally arrived…well…kind of. We at least now know the month when the battle of the century will commence.
Back in 2015, a team of American engineers, under the MegaBots moniker, challenged their Japanese counterparts, named Suidobashi, to a giant robot mega-battle. The Americans wanted to pit their Mark II robot against Japan’s Kuratas. And, after nearly two years of anticipation, we can finally gear up to watch this battle, at an unspecified date in August, at an unspecified location.
While we may have visions of elegant swordplay with lightning fast parries and ripostes, it’s not likely that robotics has gotten to the level of science-fiction favorites like Gundam or Pacific Rim.
MegaBots also raised $2.4 million last year to create a robot fighting league.
Robotics have come a long way since Leonardo da Vinci gave the world its first recorded designs for a humanoid robot. Fittingly, it was made out of a suit of armor. Perhaps giant fighting robots were the logical conclusion to military technology.
Amazon’s CEO, Jeff Bezos, recently demoed the world’s first manned bipedal robot, and the sight couldn’t help but conjure images of Lex Luthor in his Warsuit or Obadiah Stane in the Iron Monger armor. This model isn’t likely to be battling superheroes anytime soon, but there are plenty of military robots already in existence and in development.
The software behind modern robots is also rapidly developing. There is now even tech available that allows humans to control robots with their minds.
These fighting robots may not have that capability, but humans likely aren’t mature enough to handle mind-controlled mecha-fighters at this point anyhow. So tune into the MegaBots and Suidobashi YouTube channels this summer to see all of your twisted metal fantasies come true.
The post An Epic International Duel Between Giant Robots Is Going Down in August appeared first on Futurism.
In case you missed it, Elon Musk is rather concerned about the fate of humanity, given the extreme advancements being made in artificial intelligence (AI). Ultimately, he fears that our AI will, one day, overtake us. When this happens, he claims that humans will likely become second class citizens (or slaves, or something even worse).
Now, reports have surfaced which assert that he is backing a brain-computer interface venture that was founded to allow humans to keep up with the advancements made in machines. The interface is intended to work by augmenting that which makes us human: our brains.
The news comes from The Wall Street Journal. According to the paper, the company – which is called Neuralink – is still in the earliest stages of development. To that end, it has no public presence at all.
“For a meaningful partial brain interface, I think we’re roughly four or five years away.” – Elon Musk
What we do know is that its ultimate goal is to create a device (or possibly a series of devices) that can be implanted in the human brain. These would serve a multitude of purposes, the ultimate aim being to help humans merge with our software and keep pace with artificial intelligences so we don’t get left in the dust.
Initially, these enhancements will likely assist in smaller ways, such as helping us improve our memories by creating additional, removable storage components.
Notably, this is not the first that we have heard of Musk working on such a device. Previously, he mentioned a device called the “neural lace.” He explained how he imagined it would work at the 2016 Code Conference, which you can see below:
The Race to Beat AI
Unsurprisingly, Musk isn’t the only one worried about AI. In a video posted by Big Think, Michael Vassar, the chief science officer of MetaMed Research, stated that AI will likely kill us all (literally): “If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order.” Essentially, he is warning that an unchecked AI could eradicate humanity in the future.
Similarly, Stephen Hawking famously stated that AI is one of the biggest threats to humanity: “The development of artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
To that end, Musk isn’t the only person working to ensure that humanity can keep up with AI. Braintree founder Bryan Johnson is investing $100 million to make a neuroprosthesis to unlock the power of the human brain and, ultimately, make our neural code programmable.
Johnson outlines the purpose of his work, stating that it’s all about co-evolution:
Our connection with our new creations of intelligence is limited by screens, keyboards, gestural interfaces, and voice commands — constrained input/output modalities. We have very little access to our own brains, limiting our ability to co-evolve with silicon-based machines in powerful ways.
He is working to change this and ensure that we have a seamless interface with our technologies (and our AI).
Johnson is clear that his company, Kernel, will begin by researching the brain and figuring out exactly how it works. This research, Johnson states, is the first step in helping humans achieve permanent equality with machines.
Of course, such technologies will do a lot more than just allow humans to interface with machines. Neuroprostheses could also repair our cognitive abilities – which will allow us to combat neurological diseases such as Alzheimer’s, ALS, Parkinson’s, and other conditions that destroy our brains…and our lives.
This is just the beginning.
Such advancements could allow us to merge with machines, yes, but they can also allow us to literally program our neural code, which would allow us to transform ourselves in ways that we can’t even imagine. In short, we could program ourselves into the people that we want to be. As Johnson states, “Our biology and genetics have become increasingly programmable; our neural code is next in line.”
It sounds like something out of science fiction, but it is based on remarkable scientific work.
In short, the devices under development work by replicating the way that our brain cells communicate with one another. The tech envisioned is based on 15 years of academic research that was funded by the NIH and DARPA. So get ready. Human superintelligence is only a matter of time.
Disclosure: Bryan Johnson is an investor in Futurism; he does not hold a seat on our editorial board or have any editorial review privileges.
The post Elon Musk Just Launched A Company To Merge Your Brain With A Computer appeared first on Futurism.
Modern consumers crave touchscreens, but the screens’ brittle fragility is their downfall. In order for sensors to cover anything larger — like a robot — the technology will need to be cost-effective and flexible. An MIT research team might have found the solution in 3-D printing, according to a study published recently in the journal Advanced Materials Technologies.
The team, led by graduate student Subramanian Sundaram, set out to build a device that would physically respond to mechanical stresses. They were inspired by the golden tortoise beetle, which changes from its typical golden hue and flushes reddish orange when prodded or otherwise mechanically stressed. The team designed sensors with “pixels” that change color when the sensor is pressed, proving that it is feasible to blend processing circuitry and sensors in printable, flexible electronics.
“In nature, networks of sensors and interconnects are called sensorimotor pathways,” Sundaram said in an MIT press release. “We were trying to see whether we could replicate sensorimotor pathways inside a 3-D-printed object. So we considered the simplest organism we could find.”
Printable electronics aren’t exactly new, but existing printable electronics take a plastic substrate and deposit flexible circuitry on it. The team working on this artificial “goldbug” actually printed the substrate itself. Choosing and customizing the substrate means fewer limitations in terms of what can be deposited atop it, in turn increasing the variety of devices this process has the potential to create.
Printable, Sensor-Laden Robot Skin
3-D-printed substrates will also make printable, sensor-laden robot skin possible. Printed substrates emerge from the printer as flat sheets, but they can then fold themselves up into more intricate, 3-D shapes.
For example, researchers at the CSAIL Distributed Robotics Laboratory are developing self-assembling, printable robots. These robots work like the Shrinky Dinks of the future, going into the oven flat and coming out folded into shape. This strategy demonstrates the power of 3-D printing an entire component — or robot — rather than simply printing its individual parts.
“We believe that only if you’re able to print the underlying substrate can you begin to think about printing a more complex shape,” Sundaram says.
Ultimately, the goal would be to use an underlying substrate that was packed with sensors as well as working transistors so that the robot would be able to determine which inputs were important and which were just sensory “noise.” This substrate would be the ideal skin for a robot intended to react to its environment and interact intelligently with people and things around it.
The post We Could Create Robots That Can Feel, Thanks to New Sensor-Laden “Robotic Skin” appeared first on Futurism.
Popular media frequently warn us about an impending robot revolution. A film or television show begins with a few guffaws and cackles about how artificially intelligent robots are “silly,” but ends on a more somber note. “Ex Machina,” “The Terminator,” and “Westworld” are all terrific examples of humans ignoring the idea of sentience when they first encounter robots. But what if we ignore it in real life? Should we acknowledge it right up front?
Some people believe robots will never truly achieve consciousness because humans don’t even understand it. Our idea of “human rights” is a relatively philosophical notion built on the idea of pain and suffering. Normally, robots do not need to be programmed to feel those emotions in order to carry out their functions, so the point is moot.
The other side of the argument is that our species evolved to understand pain for our own benefit. If we know fire hurts when we touch it, we won’t touch it. However, an advanced AI may just program pain into itself to achieve a higher level of self-awareness. At that point, denying robots rights is simply a matter of economics, the same as when factions of humanity have denied such rights to other humans and to animals throughout our history.
The issue of machine rights is already coming up in relation to privacy concerns and various thresholds of consciousness, but the idea of human exceptionalism is worth considering. We don’t want the species to go extinct, and the rights we decide to give to other species can have a direct effect on our own survival.
Artificial intelligence already plays a major role in human economies and societies, and it will play an even bigger role in the coming years. To ponder the future of AI is thus to acknowledge that the future is AI.
This will be partly owing to advances in “deep learning,” which uses multilayer neural networks that were first theorized in the 1980s. With today’s greater computing power and storage, deep learning is now a practical possibility, and a deep-learning application gained worldwide attention in 2016 by beating the world champion at Go. Commercial enterprises and governments alike hope to adapt the technology to find useful patterns in “Big Data” of all kinds.
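The “multilayer” idea itself is simple: each layer applies a weighted transformation followed by a nonlinearity, and stacking layers lets the network represent increasingly abstract patterns. Here is a minimal illustrative sketch in Python, assuming only NumPy; the layer sizes and random weights are arbitrary, not drawn from any real system:

```python
import numpy as np

def relu(x):
    # Nonlinearity: pass positive values, zero out the rest
    return np.maximum(0.0, x)

def forward(x, layers):
    """Push an input through a stack of (weights, bias) layers."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(0)
# Three layers: 4 inputs -> 8 hidden -> 8 hidden -> 3 outputs
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 8)), np.zeros(8)),
          (rng.normal(size=(8, 3)), np.zeros(3))]

out = forward(rng.normal(size=(1, 4)), layers)
print(out.shape)  # (1, 3)
```

Training (adjusting the weights from data) is what makes such stacks useful, and it is exactly that step that today’s computing power and storage have made practical at scale.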
In 2011, IBM’s Watson marked another AI watershed by beating two previous champions at Jeopardy!, a game that combines general knowledge with lateral thinking. And yet another significant development is the emerging “Internet of Things,” which will continue to grow as more gadgets, home appliances, wearable devices, and publicly sited sensors become connected and begin to broadcast messages around the clock. Big Brother won’t be watching you, but a trillion little brothers might be.
Beyond these innovations, we can expect to see countless more examples of what were once called “expert systems”: AI applications that aid, or even replace, human professionals in various specialties. Similarly, robots will be able to perform tasks that could not be automated before. Already, robots can carry out virtually every role that humans once filled on a warehouse floor.
Given this trend, it is not surprising that some people foresee a point known as the “Singularity,” when AI systems will exceed human intelligence, by intelligently improving themselves. At that point, whether it is in 2030 or at the end of this century, the robots will truly have taken over, and AI will consign war, poverty, disease, and even death to the past.
To all of this, I say: Dream on. Artificial general intelligence (AGI) is still a pipe dream. It’s simply too difficult to master. And while it may be achieved one of these days, it is certainly not in our foreseeable future.
But there are still major developments on the horizon, many of which will give us hope for the future. For example, AI can make reliable legal advice available to more people, and at a very low cost. And it can help us tackle currently incurable diseases and expand access to credible medical advice, without requiring additional medical specialists.
In other areas, we should be prudently pessimistic – not to say dystopian – about the future. AI has worrying implications for the military, individual privacy, and employment. Automated weapons already exist, and they could eventually be capable of autonomous target selection. As Big Data becomes more accessible to governments and multinational corporations, our personal information is being increasingly compromised. And as AI takes over more routine activities, many professionals will be deskilled and displaced. The nature of work itself will change, and we may need to consider providing a “universal income,” assuming there is still a sufficient tax base through which to fund it.
A different but equally troubling implication of AI is that it could become a substitute for one-on-one human contact. To take a trivial example, think about the annoyance of trying to reach a real person on the phone, only to be passed along from one automated menu to another. Sometimes, this is vexing simply because you cannot get the answer you need without the intervention of human intelligence. Or, it may be emotionally frustrating, because you are barred from expressing your feelings to a fellow human being, who would understand, and might even share your sentiments.
Other examples are less trivial, and I am particularly worried about computers being used as “carers” or “companions” for elderly people. To be sure, AI systems that are linked to the Internet and furnished with personalized apps could inform and entertain a lonely person, as well as monitor their vital signs and alert physicians or family members when necessary. Domestic robots could prove to be very useful for fetching food from the fridge and completing other household tasks. But whether an AI system can provide genuine care or companionship is another matter altogether.
Those who believe that this is possible assume that natural-language processing will be up to the task. But “the task” would include having emotionally laden conversations about people’s personal memories. While an AI system might be able to recognize a limited range of emotions in someone’s vocabulary, intonation, pauses, or facial expressions, it will never be able to match an appropriate human response. It might say, “I’m sorry you’re sad about that,” or, “What a lovely thing to have happened!” But either phrase would be literally meaningless. A person with dementia could be “comforted” by such words, but at what cost to their human dignity?
The alternative, of course, is to keep humans in these roles. Rather than replacing humans, robots can be human aids. Today, many human-to-human jobs that involve physical and emotional caretaking are undervalued. Ideally, these jobs will gain more respect and remuneration in the future.
But perhaps that is wishful thinking. Ultimately, the future of AI – our AI future – is bright. But the brighter it becomes, the more shadows it will cast.
Taxing Robots Chills Innovation?
Although Bill Gates has said that robots that take jobs from humans should pay taxes, billionaire Commerce Secretary Wilbur Ross doesn’t agree that a tax can mitigate the threat of job automation. “I’m not in favor of trying to hold back technological advance,” Ross told CNBC. “And if we don’t employ robots, the Chinese will, the Vietnamese will, the Europeans will, the Japanese will. Everyone will.”
This is not a surprising position from Ross, who took his place as a member of the new Republican administration this year. Ross has called overregulation the “single most important thing that bars” effective business decisions. He made his personal fortune mostly by buying and “flipping” distressed businesses. His dealings have at times been perceived as controversial, as Ross has fired workers and eliminated pensions in order to maximize profit.
Ross believes that companies should decide whether innovations, including robots, benefit them. Instead of taxing robots — and essentially passing that cost on to the business-owning class — Ross has stated that people in the workforce need to adjust to become part of the workforce of the future, and that improvements to the community college system might be one way they can prepare to do that. Beyond that, Ross has not articulated a specific plan for coping with changes to the workforce wrought by advances in AI and increased numbers of working robots.
Although Ross is not eager to regulate U.S. businesses, he recently leveled a $900 million penalty against Chinese smartphone company ZTE for violating U.S. sanctions against Iran. “Industrial power is, at the end of the day, a national security issue,” Ross said.
It remains to be seen whether mass unemployment without transition plans will be perceived as a national security issue as well.
The post The U.S. Secretary of Commerce Doesn’t Think We Should Tax Robots appeared first on Futurism.
The Age of Automation
Much has been said about how automation will affect employment and the economy. In almost every conversation, the looming threat of job displacement is focused on a very specific sector: the blue-collar job market.
One frequently cited study, published back in 2013 by researchers at Oxford University and the Oxford Martin School, says that 47 percent of jobs in the US will be automated in the next 20 years. In Canada, a study conducted by the Brookfield Institute for Innovation + Entrepreneurship says that 40 percent of jobs in the country will be taken over by machines in the next decade or two. In the UK, researchers predict that 850,000 jobs will be automated by 2030. And in Southeast Asia, an estimated 137 million workers are in danger of losing their jobs in the next 20 years.
These predictions are premised on the fact that machines are now more than capable of completing the repetitive tasks that most blue-collar workers handle today. But technology isn’t going to stop there. Artificial intelligence (AI) is getting more sophisticated, meaning that it’s not only jobs defined by formulaic processes that are in danger, but also creative, service, and knowledge-based professions.
Are Any Jobs Safe?
“We are starting to see in fields like medicine, law, investment banking, dramatic increases in the ability of computers to think as well or better than humans. And that’s really the game-changer here. Because that’s something that we have never seen before,” public policy expert Sunil Johal told CBC News.
Granted, the implications of more intelligent automation for “white-collar” jobs are all speculative at this point. There’s little data on how much automation will affect that job market, mostly because experts believe its impact will be far more subtle than in blue-collar industries. In white-collar industries, there’s more opportunity to shuffle employees around or slowly phase out jobs, which means the threat of automation won’t be as dramatic. That being said, it will change things.
Johal believes that to keep up, one must actively develop new skills that will adapt to the changing needs of the job market.
“If Canada doesn’t take this seriously, we are going to see many Canadians left on the sidelines of the labour market,” he adds. “They are not going to be able to get back into the job force.”
The post The Automation Upheaval Won’t Be Limited to Blue-Collar Jobs appeared first on Futurism.
Experts have predicted that machines will take over a good number of jobs in the next five to ten years, but for Dublin, Ohio-based Wendy’s, the automation began last year with its self-ordering kiosks. Demand for the technology has been rather high, coming from both customers and franchise owners.
“There is a huge amount of pull from (franchisees) in order to get them,” according to the Wendy’s chief information officer David Trimm, speaking during the company’s investors’ day. “With the demand we are seeing … we can absolutely see our way to having 1,000 or more restaurants live with kiosks by the end of the year.”
Obviously, these kiosks would cut labor costs: “They are looking to improve their automation and their labor costs, and this is a good way to do it,” said Darren Tristano, VP at food-service research and consulting firm Technomic. “They are also trying to enhance the customer experience. Younger customers prefer to use a kiosk.”
“They always are courteous. They always show up for work on time,” Bob Welcher, president of Restaurant Consultants Inc., joked about the kiosks last year.
As automation reaches the food industry, Wendy’s is taking the lead. It helps that the kiosks are made in-house, at the company’s 90 Degrees lab on North High Street in the University District. “So we know that the things we build work,” Trimm said.
Wendy’s is the third largest burger chain in the world, after McDonald’s and Burger King. In the United States, around 49 million consumers go to Wendy’s each month, and self-serve kiosks could definitely help those numbers climb higher.
The post One Fast Food Chain Is Adding Automated Kiosks to 1,000 of Its Restaurants in 2017 appeared first on Futurism.
Our Robotic Future
Attitudes towards robots, especially artificially intelligent (AI) robots, are mixed at best. As AI technology evolves and robots become eerily similar to living, breathing, thinking organisms, people seem to be less and less likely to trust them. Robots stir up unsettling memories of science fiction nightmares, images of robots surpassing humans in intelligence and taking over, lording over us and causing mass mayhem.
According to Elon Musk, AI technology is progressing so quickly that we may need to start merging with it soon, essentially becoming cyborgs. It’s easy to see where he gets that notion, considering robots are already threatening to replace lawyers, childcare workers, checkout clerks, and more.
In an effort to gauge public opinion, the European Commission recently conducted a survey that looked at people’s attitudes towards robots, and the results of that effort are now available.
Can We Trust Robots?
While the general reaction was mostly positive, there are a few areas in which people expressed a fairly obvious distrust. The study asked participants whether there were any areas of work from which robots should be banned and received a significant affirmative response for several industries: 60-61 percent responded that robots should be banned from caring for children, the elderly, and the disabled; 30-34 percent said that robots should be banned from education; and 27-30 percent said that robots should be banned from healthcare.
As AI is already being used within healthcare in the form of IBM Watson, it is easy to imagine that any of these sectors could be home to AI technology and robots in the future. However, the report did show that there are several areas where people are already excited to have robots help propel us forward, with 45-52 percent in favor of their use for space exploration, 50-57 percent for manufacturing, and 41-64 percent for military and security operations.
AI and robotics technology is clearly progressing much faster than many people expected, and not just in the areas we’d hoped. AI robots are no longer a thing of the future, and they will only continue to become more intelligent and more prevalent in our daily lives. It is important to know how people feel about them and what can be done to minimize the risks of a sci-fi-style robot takeover, but it really all comes down to one question: How much do we trust robots?
In the age of automation, losing our jobs to our metal friends may not actually be our biggest worry. While it’s widely noted that by 2030 millions of jobs across multiple industries may be automated, 2031 The Movie gives us something else to worry about: the existence of our species.
In the opening scenes of the trailer, we are told that 1 in 50 soldiers are fully automated, hinting at not only their prevalence in society but also their warfaring capabilities. In this short film, we see Hollywood favorite Michelle Rodriguez come together with Lukas Haas to prevent the eradication of humanity as an ex-soldier and an engineer, respectively. They must work together to deftly utilize the information they have to make sure that robots don’t take over. The video paints a picture of a future where robots don’t just replace the jobs of lawyers, checkout clerks, and business professionals. In this future, the robots act as a superior, oppressive mechanical force that makes organic human life seem obsolete.
Check out the trailer below — while I go take the batteries out of my Roomba. I don’t want it getting any ideas…
The post Watch: Harrowing Vision of Our Future Dominated by an AI Military appeared first on Futurism.
Technology is pushing human sexuality into uncharted terrain. It’s transforming how we express love and intimacy, and holds tremendous potential for deeper emotional and physical connections. While everyone stands to benefit, this is perhaps especially true for those who face sexual challenges due to distance, loneliness, discrimination, or disability.
According to Future of Sex, five major fields are driving these changes. The Future of Sex Report delves into each of them, revealing how breakthroughs are moving us past the notion that technology isolates us. Instead, here are the major ways technology is playing a pivotal role in ultimately bringing us closer together.
Distance can be a relationship killer. But work and other factors often prevent couples from seeing each other on a regular basis. Luckily, strides in remote sex and intimate technologies are overcoming this divide.
Sex tech companies such as Kiiroo and Vibease already sell internet-connected sex toys, which are often referred to as teledildonics. These “smart” vibrators and male sex sleeves interact with each other via mobile and desktop applications. Using haptic touch technology, they record sexual motions and send the sensations between lovers who are miles apart.
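At its core, this kind of device pairing comes down to recording motion on one end and replaying it on the other. Below is a minimal sketch of what such a wire format might look like; the field names and clamping range are hypothetical illustrations, not the actual protocol used by Kiiroo, Vibease, or any real product.

```python
import json

def encode_motion(samples):
    """Serialize a list of (timestamp_ms, intensity) readings for transmission.

    Intensity is clamped to the 0.0-1.0 range a paired device's motor
    might expect. (Hypothetical wire format, for illustration only.)
    """
    return json.dumps([
        {"t": t, "i": max(0.0, min(1.0, i))} for t, i in samples
    ])

def decode_motion(payload):
    """Turn a received payload back into commands for the paired device."""
    return [(s["t"], s["i"]) for s in json.loads(payload)]

# Simulated round trip: sender records three samples, receiver replays them.
wire = encode_motion([(0, 0.2), (50, 0.8), (100, 1.4)])
print(decode_motion(wire))  # → [(0, 0.2), (50, 0.8), (100, 1.0)]
```

In a real product the encoded payload would travel over Bluetooth or the internet rather than a local variable, but the record-transmit-replay loop is the same idea.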
There are also prototypes for long-distance kissing devices, VR haptic body suits, and connected pillows that transmit the sound of a lover’s heartbeat. These innovations foreshadow a future of remote sex and intimacy that is multi-sensory and immersive.
While the field is still in the fledgling stage, in the coming decades long-distance sex will become more appealing, realistic, and emotionally intense. Innovation is expected to truly kick off after 2018, when a broad patent covering the transmission of sexual communications online expires.
Want to explore your sexual fantasies in a safe and accepting setting? Adult virtual worlds like Red Light Center [NSFW] and 3DXChat [NSFW] let you design hyper-customizable sexual environments and bodies. You can turn yourself or another avatar into your ideal sexual partner, meet real people in real time, partake in virtual dance parties, and enter non-judgemental spaces designed to cater to unique desires. For people who live in isolated rural areas or conservative regions that might view non-traditional sex as shameful, virtual sex could greatly improve overall well-being and self-acceptance.
Motion-capture animations of actors help create incredibly lifelike moving graphics. But the field is set to become even more realistic. Virtual sex worlds have begun integrating remote sex toys and virtual reality. As these emerging fields merge and become more advanced, the line dividing “real” sex with someone in your physical space from virtual sex in online worlds will disappear.
Companies are racing to create the first fully functional sex robots – although some low-level prototypes and automatons already exist. The popular frontrunner is Abyss Creations, the maker of the well-known silicone love doll line, RealDoll. The company announced its Realbotix project in 2015. Its first goal is to create a talking robotic head with the illusion of sentience that will attach to RealDoll bodies. Release of the robotic system and preorders are expected by the end of the year. However, an AI app is scheduled to launch in April.
The team also plans to make a fully robotic body and integrate its robots into virtual reality. The venture is especially exciting because the Realbotix team works with experts from Hanson Robotics. This world-renowned company is known for its sophisticated, humanlike robots with impressive AI capabilities.
Yet robots with sexual functions will not simply mend lonely hearts, or only appeal to folks already attracted to artificial companions. In the future, therapeutic robots will be used to treat sexual dysfunction and help teach people how to become better lovers to human partners.
While adult film studios are quick to embrace virtual reality, hoping it will revitalize profits undercut by free streaming sites, the technology’s sexual potential extends well beyond pornography and into sex education.
Simulated VR environments offer a safe way to explore social issues and teach young people about potentially risky scenarios. For instance, a YMCA in Montreal recently launched an educational campaign on consent. Using virtual reality headsets, participants entered into scripted interactions with an insistent male suitor—all from the female perspective. The organizers’ aim was to raise awareness of gender equality and violence by having people step into someone else’s shoes. A team from Emory University and Georgia Tech also announced plans to develop a VR sex ed program for young women of color. The immersive experience is meant to teach about safe sex practices, combat STIs, and prevent unplanned pregnancies.
Award-winning VR porn site BaDoinkVR has put its own erotic spin on sex therapy. In its eight-part series Virtual Sexology, created with the help of a licensed therapist, adult performers instruct on Kegel strengthening for men and women, and how to improve stamina and prolong orgasms.
Breakthroughs in the medical field are opening up erotic possibilities, making us reconsider so-called sexual limitations posed by biology or inflicted through injury or illness.
For example, in 2016 surgeons performed the first penile transplant in the United States on a cancer survivor who’d had his penis amputated. Last year, an infertile woman also received the first uterus transplant in the country. Although it eventually had to be removed, women who’ve received similar transplants in Sweden have been able to bear children.
While genital reconstruction and transplant surgeries mostly rely on organ donors, 3D bioprinting and tissue engineering will eventually remove the bottleneck caused by low supply. In fact, doctors at the Wake Forest Baptist Medical Center have already created and implanted lab-grown vaginas into women with vaginal aplasia. The center has also bioengineered and implanted penile erectile tissue on rabbits.
In the next few decades, regenerative medicine will help more people overcome injury and sexual dysfunction. We will also have options to customize our bodies for personal sexual aesthetic and heightened erotic bliss. Microchips and internet-connected implants will also offer incredible possibilities for people wishing to merge with machines to become sexual cyborgs.
Creating Our Sexual Future
Sex and sexuality play integral roles in human interaction. These emerging technologies can help us learn more about ourselves and how to connect better with loved ones. How we use these innovations will shape generations to come. For people faced with physical, emotional, and geographic challenges, they’ll offer opportunities for sexual fulfillment that might not otherwise exist.
However, these topics don’t always receive the attention they deserve—as much as our society is fascinated by sex, we often fail to speak about it in a positive way that pushes discussion forward. The goal of the Future of Sex Report is to bring more visibility to how technology is shaping human sexuality, so as many people as possible can participate in creating a positive and inclusive sexual future.
For more information on the five major fields driving the sex tech industry, including predictions on what to expect in the next decades, read the full Future of Sex Report here.
Jenna Owsianik is a Canadian journalist and the editor of FutureofSex.net. The publication, which explores the intersection of sex and technology, is part of the Rh7thm group of companies.
A Robot of Your Own
At this year’s Consumer Electronics Show, Mayfield Robotics unveiled what could possibly be the first truly domestic robot. The Bosch-owned company integrated full functionality with a pleasing, approachable design to establish the robot, named Kuri, as a friendly presence in the home.
Kuri is designed to fit right into smart homes. According to Kuri’s Blog, the bot has a 1080p camera behind one of its eyes, an array of four microphones, dual speakers, wifi and Bluetooth connectivity, and spatial sensors for navigation. It can be programmed to do various tasks around the house and is capable of connecting to modern smart homes through IFTTT. Kuri is 50.8 centimeters (20 inches) tall and weighs 6.4 kilograms (14 pounds), and when its hours of battery life run out, the bot simply returns to its charging dock to recharge.
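IFTTT integration like Kuri's typically works through IFTTT's Webhooks service: a device fires a POST request at a per-user trigger URL, and IFTTT routes it to whatever action the owner has configured. The sketch below shows that documented URL shape; the event name `motion_detected` and the key placeholder are made-up examples, not anything published by Mayfield Robotics.

```python
import json
from urllib import request

def smart_home_trigger(event, key, **values):
    """Build an IFTTT Webhooks request like the ones a home robot
    could fire when it notices something worth reporting.

    The URL shape follows IFTTT's documented Webhooks service:
    https://maker.ifttt.com/trigger/{event}/with/key/{key}
    """
    url = f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"
    # IFTTT's webhook payload accepts up to three fields: value1..value3.
    body = json.dumps({f"value{n}": v for n, v in enumerate(values.values(), 1)})
    return request.Request(
        url, data=body.encode(), headers={"Content-Type": "application/json"}
    )

# Hypothetical event: the robot spotted motion while the owner is away.
req = smart_home_trigger("motion_detected", "YOUR_KEY", where="living room")
print(req.full_url)
# → https://maker.ifttt.com/trigger/motion_detected/with/key/YOUR_KEY
```

Sending the request (`urllib.request.urlopen(req)`) would then fire whichever applet the owner wired to that event, such as a phone notification.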
The fact that Kuri reminds us of certain characters from animated films is not coincidental — Mayfield Robotics intentionally made the robot look that way so that it could easily ingratiate itself with its owners. However, Kuri is more than just a pretty face. “While insanely cute on the outside, Kuri contains serious technologies on the inside that represent the latest developments in smartphones, gaming, and robotics,” said Mayfield Robotics CTO and co-founder Kaijen Hsiao in a press release. “We hope Kuri introduces people — especially kids — to the power of technology and can inspire a new world of possibilities for their future.”
Best Friends With a Bot?
According to Kuri’s creators, one particularly great thing about this domestic robot is that it can be fully functional out of the box, not relying too heavily on third-party services. Further, by the time it reaches the market, Kuri will have a companion app available for iOS and Android. This will allow the robot’s owners to contact and control it remotely to do things like check up on the house while away on a business trip.
Kuri is far from the first domestic robot. A similar project from ASUS, called Zenbo, was also just announced in Taiwan, with a whole slew of capabilities of its own. French telecommunications company Yumii also recently unveiled Cutii, a companion robot for the elderly to help them interact with family, doctors, and others. We also have companion robots that can help autistic children learn new skills, ones that can respond to us with familiar facial expressions, and some that can perform physical tasks to help people with disabilities.
Kuri costs $699 and is available for pre-order with a down payment of $100. It should be released around the 2017 holiday season, so we’ll just have to wait until then to see whether the age of domestic robots is truly upon us.
The post This Robot May Be the Key to Houses Powered by Intelligence appeared first on Futurism.
For a minute, it seemed like there was good news for anyone who has ever dreamt of becoming a Jaeger pilot. Videos and images of a four-meter (13-ft) tall, walking mech suit that could bring Pacific Rim’s robots into reality surfaced on Instagram.
The imposing exoskeleton was supposedly created by a Korean robotics company with the help of Moldovan-American designer, Vitaly Bulgarov. Bulgarov was the creative mind who designed the robots in Terminator, Robocop, and Transformers.
However, according to Live Science, we have some reasons to be skeptical of the bot:
The South Korean company that is supposedly developing the robot has virtually no online presence and was unfamiliar to robotics researchers contacted by Live Science. Furthermore, the only source for the videos or any information about them is the Facebook and Instagram pages of a designer whose website mentions a conceptual art project about a “fictional robotics corporation that develops its products in a not-so-distant future.”
Bulgarov still claims the massive mech suit is real. According to Bulgarov, the video was leaked, and the company will do a “proper press release” when the robot is completed in 2017. “Frankly, right now, we don’t even really care if people think this is legit because we didn’t try to promote it,” Bulgarov said.
According to Ronald Fearing, professor of electrical engineering and computer science at the University of California, Berkeley, the technology is plausible. However, the video itself is leading scientists to further question the technology’s authenticity.
Christian Hubicki, a postdoctoral robotics researcher at Georgia Tech who worked on the DURUS robot, told Live Science that the bot’s leg joints look unusually smooth given the force that the step of a 1.5-ton robot would exert on the motors. Of course, Bulgarov had an answer to this skepticism. He noted the dampers in the robot’s feet make its motion smooth. He also said the robot wasn’t built for use as is, but as a platform to show off various technical feats.
It’s possible that Method-1 may not be real, but other bots created to provide mechanical assistance already exist. After nearly 90 years of attempts to build a robot that could move in waves, the SAW robot was unveiled by engineers at Ben-Gurion University of the Negev. Another machine developed for search and rescue missions is the CRAM. The cockroach-inspired robot is designed to run swiftly through small spaces, making it an essential tool for first responders who have to search earthquake rubble for survivors.
These advances in robotics are making it possible for engineers to build machines that will prove to be invaluable. When it comes to Method-1 though, Hubicki still urges skepticism:
At some point, you have to ask yourself, ‘Did a secret company with no obvious ties to existing humanoid robotics groups actually develop a giant humanoid robot with likely unparalleled power and torque, and instead of sending out a press release, they unveil the robot by dribbling out low-resolution videos on Instagram via an affiliate who just so happens to be a visual effects artist?’
The post Sorry, This Mech Suit Designed for Humanity Might Just Be A Hoax appeared first on Futurism.
With all the environmental problems posed by global phenomena like climate change, it’s easy to forget that other, more directly man-made threats endanger animals. One such problem is the illegal wildlife trade, which is estimated to be a $19 billion business worldwide.
One of the most at-risk animals is the rhino, but you can’t exactly protect rhino herds 24/7. That’s why a mechanical engineer has a wacky new idea: robotic rhinos. These babies could stay with and track rhinos, and alert patrols when poachers are nearby.
The illegal rhino trade is a big one: at least 5,940 African rhinos have been killed since 2008, with 1,175 killed in 2015 in South Africa alone. The main reason they are poached is for their horns; the street price of rhino horn in Asia was $60,000-100,000 per kilogram in 2013.
To combat this, the robo-rhino concept was born. Called Rakamera, it’s basically a robot that would mimic rhinos so that a herd would accept it, letting it monitor them from the inside.
The undercover bot would be powered by hydrogen fuel cells, and would have internal hydraulics and servomotors for locomotion. It could be outfitted with infrared sensors and cameras to track both the movements of the herd and any human activities.
Turning to technology
While this may seem like a very far-out solution to the poaching problem, many other high-tech solutions are already being explored to combat the threat. For example, one company wanted to flood the black market with 3D-printed synthetic horns, but the idea was universally opposed by conservation groups.
On a more conventional note, anti-poaching groups have taken to using infrared cameras, UAVs, spatial monitoring tools, and many other high-tech devices to catch criminals. These are most often used to track humans who get too close to protected areas.
Ultimately, the solution will have to come by way of destroying the demand. Myths of the curative properties of these horns will have to be dispelled, and people must be taught the value of wildlife conservation. Until then, technology will have to stand guard over our animals.
The post The Newest Anti-Poaching Technology? Robotic Rhinos appeared first on Futurism.