Category: artificial intelligence

Elon Musk Wants to Meld the Human Brain With Computers. Here’s a Realistic Timeline.

Just as ancient Greeks fantasized about soaring flight, today’s imaginations dream of melding minds and machines as a remedy to the pesky problem of human mortality. Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?

Over the last 50 years, researchers at university labs and companies around the world have made impressive progress toward achieving such a vision. Recently, successful entrepreneurs such as Elon Musk (Neuralink) and Bryan Johnson (Kernel) have announced new startups that seek to enhance human capabilities through brain-computer interfacing.

How close are we really to successfully connecting our brains to our technologies? And what might the implications be when our minds are plugged in?

How do brain-computer interfaces work and what can they do?

Origins: Rehabilitation and restoration

Eb Fetz, a researcher here at the Center for Sensorimotor Neural Engineering (CSNE), is one of the earliest pioneers to connect machines to minds. In 1969, before there were even personal computers, he showed that monkeys could amplify their brain signals to move a needle on a dial.

Much of the recent work on BCIs aims to improve the quality of life of people who are paralyzed or have severe motor disabilities. You may have seen some recent accomplishments in the news: University of Pittsburgh researchers use signals recorded inside the brain to control a robotic arm. Stanford researchers can extract the movement intentions of paralyzed patients from their brain signals, allowing them to use a tablet wirelessly.

Similarly, some limited virtual sensations can be sent back to the brain, by delivering electrical current inside the brain or to the brain surface.

What about our main senses of sight and sound? Very early versions of bionic eyes for people with severe vision impairment have been deployed commercially, and improved versions are undergoing human trials right now. Cochlear implants, on the other hand, have become one of the most successful and most prevalent bionic implants — more than 300,000 people around the world use them to hear.

A bidirectional brain-computer interface (BBCI) can both record signals from the brain and send information back to the brain through stimulation. Image Credit: Center for Sensorimotor Neural Engineering (CSNE), CC BY-ND

The most sophisticated BCIs are “bi-directional” BCIs (BBCIs), which can both record from and stimulate the nervous system. At our center, we’re exploring BBCIs as a radical new rehabilitation tool for stroke and spinal cord injury. We’ve shown that a BBCI can be used to strengthen connections between two brain regions or between the brain and the spinal cord, and reroute information around an area of injury to reanimate a paralyzed limb.

With all these successes to date, you might think a brain-computer interface is poised to be the next must-have consumer gadget.

Still early days

Not all BCIs, however, are invasive. Noninvasive BCIs that don’t require surgery do exist; they are typically based on electrical (EEG) recordings from the scalp and have been used to demonstrate control of cursors, wheelchairs, robotic arms, drones, humanoid robots and even brain-to-brain communication.

But a careful look at some of the current BCI demonstrations reveals we still have a way to go: When BCIs produce movements, they are much slower, less precise and less complex than what able-bodied people do easily every day with their limbs. Bionic eyes offer very low-resolution vision; cochlear implants can electronically carry limited speech information, but distort the experience of music. And to make all these technologies work, electrodes have to be surgically implanted — a prospect most people today wouldn’t consider.
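As an illustration of how a scalp-based BCI can extract a control signal, here is a minimal, hypothetical sketch. It relies on a well-known motor-imagery effect: imagining movement suppresses the 8–12 Hz “mu” rhythm over motor cortex, so thresholding mu-band power yields a crude one-bit command. The sample rate, amplitudes, and noise level here are invented and the “EEG” is synthetic; real systems need artifact rejection, spatial filtering, and trained classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250  # sample rate in Hz (illustrative, not from any real dataset)

def bandpower(signal, fs, lo, hi):
    """Mean power of a signal within a frequency band, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].mean()

def make_trial(imagining_movement):
    """Synthetic 2-second scalp recording: motor imagery suppresses the mu rhythm."""
    t = np.arange(2 * fs) / fs
    mu_amp = 0.3 if imagining_movement else 1.0  # event-related desynchronization
    return mu_amp * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))

# Classify by thresholding mu-band power: low power -> user is imagining movement.
trials = [(make_trial(m), m) for m in [True, False] * 50]
powers = [bandpower(sig, fs, 8, 12) for sig, _ in trials]
threshold = np.median(powers)
predictions = [p < threshold for p in powers]
accuracy = np.mean([p == label for p, (_, label) in zip(predictions, trials)])
print(f"accuracy: {accuracy:.2f}")
```

On this clean synthetic data the simple threshold separates the two classes easily; real scalp recordings are far noisier, which is one reason noninvasive control remains slow and imprecise.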

The first demonstration of a noninvasive brain-controlled humanoid robot “avatar,” named Morpheus, in the Neural Systems Laboratory at the University of Washington in 2006. This noninvasive BCI infers what object the robot should pick up and where to bring it based on the brain’s reflexive response when an image of the desired object or location is flashed.

But all these demos have been in the laboratory — where the rooms are quiet, the test subjects aren’t distracted, the technical setup is long and methodical, and experiments last only long enough to show that a concept is possible. It’s proved very difficult to make these systems fast and robust enough to be of practical use in the real world.

Even with implanted electrodes, another problem with trying to read minds arises from how our brains are structured. We know that each neuron and its thousands of connected neighbors form an unimaginably large and ever-changing network. What might this mean for neuroengineers?

An electrocorticography grid, used for detecting electrical changes on the surface of the brain, is being tested for electrical characteristics. Image Credit: Center for Sensorimotor Neural Engineering, CC BY-ND

Imagine you’re trying to understand a conversation among a big group of friends about a complicated subject, but you’re allowed to listen to only a single person. You might be able to figure out the very rough topic of the conversation, but definitely not all the details and nuances of the entire discussion. Even our best implants only allow us to listen to a few small patches of the brain at a time, so while we can do some impressive things, we’re nowhere near understanding the full conversation.

There is also what we think of as a language barrier. Neurons communicate with each other through a complex interaction of electrical signals and chemical reactions. This native electro-chemical language can be interpreted with electrical circuits, but it’s not easy. Similarly, when we speak back to the brain using electrical stimulation, it is with a heavy electrical “accent.” This makes it difficult for neurons to understand what the stimulation is trying to convey in the midst of all the other ongoing neural activity.

Finally, there is the problem of damage. Brain tissue is soft and flexible, while most of our electrically conductive materials — the wires that connect to brain tissue — tend to be very rigid. As a result, implanted electronics often cause scarring and immune reactions, and the implants lose effectiveness over time. Flexible biocompatible fibers and arrays may eventually help in this regard.

Co-adapting, cohabiting

Despite all these challenges, we’re optimistic about our bionic future. BCIs don’t have to be perfect. The brain is amazingly adaptive and capable of learning to use BCIs in a manner similar to how we learn new skills like driving a car or using a touchscreen interface. Similarly, the brain can learn to interpret new types of sensory information even when it’s delivered noninvasively using, for example, magnetic pulses.

Learning to interpret and use artificial sensory information delivered via noninvasive brain stimulation.

Ultimately, we believe a “co-adaptive” bidirectional BCI, in which the electronics learn with the brain and constantly talk back to it during the learning process, may prove to be a necessary step in building the neural bridge. Building such co-adaptive bidirectional BCIs is the goal of our center.
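As a toy caricature of that co-adaptation (not our center’s actual method), picture two learners tracking each other: a decoder updates its weights online from the error between decoded and intended output, while the “brain’s” encoding slowly drifts underneath it. All signals, dimensions, and learning rates below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
true_w = rng.standard_normal(dim)   # the brain's (drifting) encoding of intent
w = np.zeros(dim)                   # the decoder's estimate of that encoding
lr = 0.05
errors = []

for step in range(2000):
    x = rng.standard_normal(dim)                # neural activity this step
    intent = true_w @ x                         # what the user is trying to do
    decoded = w @ x                             # what the decoder reads out
    err = intent - decoded
    w += lr * err * x                           # decoder adapts to the brain...
    true_w += 0.001 * rng.standard_normal(dim)  # ...while the brain keeps changing
    errors.append(err ** 2)

print(f"early error: {np.mean(errors[:100]):.3f}, late error: {np.mean(errors[-100:]):.3f}")
```

The decoding error falls early on and stays low even as the encoding drifts, which is the essence of why constant mutual adaptation matters more than a one-time calibration.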

We are similarly excited about recent successes in targeted treatment of diseases like diabetes using “electroceuticals” — experimental small implants that treat a disease without drugs by communicating commands directly to internal organs.

And researchers have discovered new ways of overcoming the electrical-to-biochemical language barrier. Injectable “neural lace,” for example, may prove to be a promising way to gradually allow neurons to grow alongside implanted electrodes rather than rejecting them. Flexible nanowire-based probes, flexible neuron scaffolds and glassy carbon interfaces may also allow biological and technological computers to happily coexist in our bodies in the future.

From assistive to augmentative

Elon Musk’s new startup Neuralink has the stated ultimate goal of enhancing humans with BCIs to give our brains a leg up in the ongoing arms race between human and artificial intelligence. He hopes that with the ability to connect to our technologies, the human brain could enhance its own capabilities — possibly allowing us to avoid a potential dystopian future where AI has far surpassed natural human capabilities. Such a vision certainly may seem far-off or fanciful, but we shouldn’t dismiss an idea on strangeness alone. After all, self-driving cars were relegated to the realm of science fiction just a decade and a half ago — and now share our roads.

A BCI can vary along multiple dimensions: whether it interfaces with the peripheral nervous system (a nerve) or the central nervous system (the brain), whether it is invasive or noninvasive and whether it helps restore lost function or enhances capabilities. Image Credit: James Wu; adapted from Sakurambo, CC BY-SA

Connecting our brains directly to technology may ultimately be a natural progression of how humans have augmented themselves with technology over the ages, from using wheels to overcome our bipedal limitations to making notations on clay tablets and paper to augment our memories. Much like the computers, smartphones and virtual reality headsets of today, augmentative BCIs, when they finally arrive on the consumer market, will be exhilarating, frustrating, risky and, at the same time, full of promise.

In the nearer term, as brain-computer interfaces move beyond restoring function in disabled people to augmenting able-bodied individuals beyond their human capacity, we need to be acutely aware of a host of issues related to consent, privacy, identity, agency and inequality. At our center, a team of philosophers, clinicians and engineers is working actively to address these ethical, moral and social justice issues and to offer neuroethical guidelines before the field progresses too far ahead.

The post Elon Musk Wants to Meld the Human Brain With Computers. Here’s a Realistic Timeline. appeared first on Futurism.

Top Google Scientist Says We Need to Emphasize Humanity While Developing AI

The Future of Developing AI

One of the biggest companies focusing on artificial intelligence (AI) is Google, a company many associate only with its popular search engine. Google has been developing AI for years, as can be seen in some of its current products: Google Assistant, which came to Android smartphones in February; the AI assisting that search engine with job searches; and the Google Home smart speaker. The AI may not be as smart as a first grader yet, but it has the potential to reach a point where it’s smarter than the average person interacting with it.

To reach that point, however, AI may need to become more human-like. At least, that’s what Google Cloud Chief Scientist and Director of the Stanford Artificial Intelligence and Vision Labs Fei-Fei Li believes. In her view, such AI would be more capable of improving our lives and providing us peace of mind.

In an interview with MIT Technology Review senior editor Will Knight, Li explained that thinking about AI’s impact on the world is one of the most important factors regarding the technology, and that her work with AI has revealed one thing: future developments and advancements need to be more human-centered.

Better AI Development

“If you look at where we are in AI, I would say it’s the great triumph of pattern recognition,” she said. “It is very task-focused, it lacks contextual awareness, and it lacks the kind of flexible learning that humans have. We also want to make technology that makes humans’ lives better, our world safer, our lives more productive and better. All this requires a layer of human-level communication and collaboration.”

Early signs of such communication and collaboration can already be seen, though their full impact remains to be felt; in July, for example, Google’s DeepMind revealed it had begun developing AI with imagination, enabling it to think through decisions and plan for the future without human input.

Types of AI: From Reactive to Self-Aware [INFOGRAPHIC]

Earlier this month, DeepMind stated its new AI is on the verge of being able to perfectly mimic human speech, while the recently unveiled Google Clips camera — powered by AI — is capable of taking pictures for you, eliminating the need to wait for the “perfect” moment.

“If our algorithms are to develop equally sophisticated behaviors, they too must have the capability to ‘imagine’ and reason about the future,” said researchers from DeepMind, speaking on AIs equipped with imagination. “Beyond that they must be able to construct a plan using this knowledge.”

Li’s own thoughts nearly echo DeepMind’s, as she also believes AI needs to be able to think and act for itself. To this end, she said that future research needs to focus on “contextual understanding,” as well as “knowledge abstraction and reasoning.”

AI That Represents Humanity

That said, new features and capabilities won’t be enough to truly advance artificial intelligence. Diversity also needs to play a key role. As Li explained, studies have shown that a diverse team leads to more creative, innovative solutions. Nor can AI be biased or influenced by the morally questionable parts of humanity; otherwise, its adoption could disadvantage some groups of people more than others.

“When you are making a technology this pervasive and this important for humanity, you want it to carry the values of the entire humanity, and serve the needs of the entire humanity,” said Li. “If the developers of this technology do not represent all walks of life, it is very likely that this will be a biased technology. I say this as a technologist, a researcher, and a mother. And we need to be speaking about this clearly and loudly.”

Look no further than the soap dispenser that proved incapable of detecting darker skin tones for an indication of how certain tech sectors are lacking in diversity. Artificial intelligence is expected to radically change the world, and it would certainly be in our best interest if everyone could benefit from it.


New Breakthrough Allows Machines to Literally Predict the Behavior of Molecules

Simplifying the Complex

Though much noise has been made of what’s still to come from artificial intelligence (AI), the technology has already changed our daily lives. Machine learning-powered image recognition, text analysis, and language translation tools allow us to navigate the world in previously unimagined ways, and our mobile devices can now predict so much of our behavior based on our past actions.

Now, an international, interdisciplinary team of researchers has devised a way to use machine learning to do something far more complex than foresee a smartphone user’s next move. They’ve built a machine that can predict molecular behavior, a feat that previously required very complex quantum calculations. Their study has been published in Nature Communications.


To create this system that can predict molecular behavior, the researchers trained an algorithm using a small sample set featuring a simple molecule called malonaldehyde. Then, they had the machine predict the complex chemical behaviors of this molecule. The researchers compared those predicted simulations with their current chemical understanding of malonaldehyde and found that the machine was able to learn how the molecule would behave using the limited data it had been trained on.

“By identifying patterns in molecular behavior, the learning algorithm or ‘machine’ we created builds a knowledge base about atomic interactions within a molecule and then draws on that information to predict new phenomena,” researcher Mark Tuckerman of New York University explained in a press release.
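The team’s actual machinery is more sophisticated, but the core idea, fitting a flexible model to a small set of computed energies and then interpolating the rest of the surface, can be sketched with kernel ridge regression on a toy one-dimensional double well, loosely evoking the proton-transfer coordinate in malonaldehyde. The potential, training-set size, and kernel width here are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a potential energy surface: a 1-D double well.
def energy(x):
    return x ** 4 - 2 * x ** 2

# Small training set of "configurations" and their computed energies.
x_train = rng.uniform(-1.8, 1.8, 20)
y_train = energy(x_train)

# Kernel ridge regression with a Gaussian kernel.
def kernel(a, b, width=0.5):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * width ** 2))

K = kernel(x_train, x_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(x_train)), y_train)

def predict(x):
    return kernel(np.atleast_1d(x), x_train) @ alpha

# The model interpolates the surface from its limited training data.
x_test = np.linspace(-1.5, 1.5, 50)
rmse = np.sqrt(np.mean((predict(x_test) - energy(x_test)) ** 2))
print(f"test RMSE: {rmse:.4f}")
```

The appeal of this family of methods is the same as in the study: once a handful of expensive reference calculations exist, new configurations can be evaluated at a tiny fraction of the quantum-mechanical cost.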

Data-Driven Models

This work is yet another example of AI’s ability to impact a wide variety of industries, with molecular science joining everything from medical research to psychology and behavioral science. The research demonstrates how machine learning methods can be used to perform difficult tasks of all types so long as the systems are given sufficient data.

Understanding Machine Learning [INFOGRAPHIC]

The researchers expect that this ability to predict molecular behavior could greatly contribute to the development of pharmaceuticals, as well as simulate molecular designs crucial for improving the performance of today’s new battery technologies, solar cells, and digital displays — basically, anything that used to rely on complex quantum mechanical calculations to model atomic or molecular interactions can benefit from their work.

While their machine does make it possible to model this behavior without involving intricate calculations, streamlining that complicated task is just the jumping-off point, according to study researcher Müller: “Now we have reached the ability to not only use AI to learn from data, but we can probe the AI model to further our scientific understanding and gain new insights.”


A New $15 Billion Project Is Set to Utterly Transform Our World

Going All In: A $15 Billion Project

To advance research and development on artificial intelligence, quantum computing, and the Internet of Things, Alibaba is dedicating $15 billion in funding over the next three years to create the next generation of such technology.

As reported by Bloomberg, the company shared its plans in an emailed statement shortly after Chief Technical Officer Jeff Zhang made the announcement at the company’s Computing Conference 2017 in Hangzhou, China. This marks the beginning of its global research program, known as the Alibaba DAMO Academy — “DAMO” being an acronym for Discovery, Adventure, Momentum and Outlook.

As part of the program, Alibaba intends to build seven new R&D labs across the U.S., Russia, China, Israel, and Singapore. Over 100 scientists worldwide who specialize in different aspects of AI, quantum computing, and the IoT will also be hired.

In an interview with Bloomberg, Zhang said the new labs will go a long way to “help solve issues that Alibaba is currently facing across its business lines. It will also be at the forefront of developing next-generation technology.”

More Than Labs and Scientists

Alongside new labs, Alibaba’s multi-billion dollar investment will also go towards funding various collaborations with universities, with the University of California at Berkeley already on board. Additionally, the company has tapped a number of professors from the likes of Princeton and Harvard to sit on an advisory board.

Privacy and the Internet of Things

Alibaba’s increased focus on AI and quantum computing is significant, with both expected to dramatically change the world. The company already has a “smart warehouse” run by 60 robots, which saw productivity increase by 300 percent. That figure can only rise if the company’s program is a success, though it does raise some concerns about automation and its impact on jobs.

Fortunately, according to CNBC, the company has a “commitment to serve 2 billion customers and create 100 million jobs in 20 years.” We’ll have to wait and see what Alibaba comes up with in the coming years, but the investment is a clear sign the company is prepared to see this through and has the resources necessary to move the world forward.


World Bank Chief: Robots Have Put Us on a Dangerous “Crash Course”

Education for Automation

More and more experts agree that the world job market is in for a disruption that is unprecedented, or at least one that hasn’t been seen since the Industrial Revolution. In a decade or so, automation born from artificial intelligence (AI) development is expected to take over jobs in a number of industries — from transportation to manufacturing, finance, and even information technology. To prepare for job displacement, particularly in low-income work, some experts have been advocating for a universal basic income.

For World Bank president Jim Yong Kim, there might be another equally necessary course of action. Speaking to an audience at Columbia University in New York, prior to the World Bank’s annual meeting in Washington D.C., Kim said that it’s important for policymakers to invest in education and health.


In the face of looming intelligent automation and political threats to economic development — such as growing resistance to globalism — Kim warned that the world is heading down a “crash course.”

“If your aspirations start to rise but then there’s no opportunity, it can lead to fragility, conflict, violence,” Kim explained to the BBC. “This is the crash course we’re going down.” Hence, there’s a need to create more opportunities for people, which starts with investing in human capital. This marks a change in the World Bank’s development approach.

Automation and Human Jobs

It’s normal for any meaningful development to have a good and bad side, and automation is a product of how good technology has become. AI has given birth to autonomous systems that will let cars, ships, and even planes operate themselves more safely, or run stores and factories more cost-effectively. Then, of course, as machines get better at doing work human beings used to do, it becomes more efficient to “hire” them instead.

Many have chosen to look at this intelligent automation with gloomy expectations, but some see a potential for humankind to progress. Google director of engineering and famed “future teller” Ray Kurzweil has said that automation will give rise to new jobs — professions that haven’t been invented yet. Tesla CEO and founder Elon Musk said something similar, noting how automation would give humanity more time to pursue leisure.

In any case, both ideas seem to agree with the World Bank president’s point: to prepare for automation, one has to invest in people. “The one thing you know for sure that you’ll need, in whatever the economy looks like in the future, is people who can learn,” Kim told the BBC. “We want to create a sense of urgency to invest in people that we think is necessary given the way […] the global economy is changing.”

Automation will happen—in fact, it’s already begun. The only thing we can do now is to make sure humankind is ready to accept the new economy it will bring. What’s clear is that humanity’s ingenuity will surely find a way to cope.


Does Facebook Plan to Compete With Google’s Artificial Intelligence?

Every year, artificial intelligence (AI) software bots compete and battle it out in the universe of the video game StarCraft. Artificially intelligent aliens swarm and slaughter, showcasing their off-world abilities sans human meddling. But a new player representing (of all things) Facebook has entered this arena — CherryPi, an AI player designed by a team of eight people from or involved with Facebook’s AI research lab. This foray into multiplayer gaming establishes Facebook as direct competition for others, like Google, and even individual hobbyists (three of whom finished in the top three places).

A StarCraft Tournament with human players. Image Credit: Flickr.

Gabriel Synnaeve, a researcher at Facebook, described CherryPi as a “baseline” prototype to learn and build from. “We wanted to see how it compares to existing bots, and in particular test if it has flaws that need correcting,” he said.

Some expect Facebook and Google to lag behind independently designed bots for a while, despite the tech giants’ inexhaustible resources. “For a couple of years I predict the hobbyist, mostly rule-based bots, will still do well,” said David Churchill, a professor at the Memorial University of Newfoundland, which organizes AIIDE, an academic conference that includes contests like the StarCraft competition.

In this competition, Facebook’s stealthy AI bot placed sixth out of 28 total competitors. The winning bot, ZZZKBot, was created by Chris Coxe, a software developer in Perth, Australia. So, while Facebook is relatively new to this venture, it’s learning fast. Google’s DeepMind team is also formidable, to say the least; but whichever giants emerge victorious, they’ll no doubt still have much to learn from individual coders, whose passion has become an industry-pivoting (virtual) blood-fest.


Richard Branson: Universal Basic Income Will Protect Us From the Threat of AI

Universal Basic Income: A Safety Net?

Billionaire entrepreneur and Virgin Group founder Richard Branson recently sat down with Business Insider Nordic (BI Nordic) during the Nordic Business Forum in Helsinki. A few months ago, Branson joined other tech elites to support Universal Basic Income (UBI). In a blog post from mid-August, Branson stated that “most countries can afford to make sure that everybody has their basic needs covered.”

Now, in discussing Finland’s UBI experiment, Branson told the BI Nordic reporters that a safety net provided by a basic income could help counter the effects of artificial intelligence and increased automation. He said, “Basic income is going to be all the more important. If a lot more wealth is created by AI, the least that the country should be able to do is that a lot of that wealth that is created by AI goes back into making sure that everybody has a safety net.”

Image credit: Nordic Business Forum

“Entrepreneurially Minded”

Many business leaders in the tech sector have voiced support for basic income programs, which would see everyone receive unconditional payments to facilitate a basic level of comfort. Basic income experiments are being conducted or planned in regions all around the world. Experts assert that while the age of automation may be approaching at a slow crawl, it is coming.

Universal Basic Income: The Answer to Automation?

“Obviously AI is a challenge to the world in that there’s a possibility that it will take a lot of jobs away. [..] It’s up to all of us to be entrepreneurially minded enough to create those new jobs,” Branson said.

“I don’t think we’re going to have a choice,” SpaceX and Tesla CEO Elon Musk said at the World Government Summit in February, referring to the inevitability of mass-scale automation. “I think it’s going to be necessary. There will be fewer and fewer jobs that a robot cannot do better.” As long as more automation awaits us in the future, we will need to devise a way to ensure that jobs lost do not result in widespread poverty. Universal Basic Income is a realistic, practical way to remedy this, and right now, no one has a better alternative.


Mattel Cancels AI-Powered Smart Speaker for Kids Over Privacy Concerns

Aristotle Axed

In January 2017, Mattel announced Aristotle, a smart speaker created for use by children. Last week, the company confirmed that the device had been scrapped, amid concerns about children’s privacy.

Aristotle was designed to use natural language processing technology to get a better grasp on how children pronounced words over time. This isn’t too far removed from the way virtual assistant services learn about users’ habits, but when young people are involved, the issue obviously becomes a lot thornier.

More than 15,000 people signed a petition, led by a team of child development, education, and privacy experts, to stop Mattel from selling Aristotle. Last week, Sen. Edward J. Markey and Rep. Joe Barton sent Mattel a letter requesting details on how the company planned to store data collected by Aristotle and how long it would retain that data.

The toymaker had previously committed to encrypting the data and pledged to not sell it to advertisers in order to protect children’s privacy. However, following the letter, Mattel released a statement declaring that their new chief technology officer, Sven Gerjets, had shuttered the project as part of a broad evaluation of all the company’s upcoming products.

Under Surveillance

The ever-increasing amount of internet-connected hardware in our homes is something of a double-edged sword.

Privacy and the Internet of Things

For every instance where an Internet of Things (IoT) device has reported a crime or otherwise done some good, we can find another example of a piece of technology that comes dangerously close to infringing upon our privacy.

As we become even more connected as a society, we must take all the necessary steps to ensure that the advantages of smart home and IoT devices don’t come at the cost of our cybersecurity and general privacy.

Mattel claims that they had already made the decision to cancel Aristotle before receiving the letter from Sen. Markey and Rep. Barton, but elected officials certainly have cause to be proactive in monitoring products like this.


AI-Powered Google Clips Camera Decides When to Take a Photo So You Don’t Have To

Google Clips is Your New AI Photographer

Google is rolling out an entirely new product alongside its recent Pixel and Google Home releases: the Google Clips camera. It is visually unassuming: a small, flat, square, wireless camera that can clip onto your shirt pocket (or whatever you dream up) and take photos.

What makes this camera different is that you’re not always the one taking the photos; its artificially intelligent (AI) system is. While you might be having precious family moments and not paying attention to the camera you clipped in the corner of the room, the AI behind Google Clips will choose ideal moments to capture photos of its own volition.

Google’s product manager Juston Payne asked, “How do you let yourself capture those fleeting moments, while being part of the moment?”

The camera should also get smarter using the machine learning algorithm Moment IQ, recognizing people and pets you spend a lot of time with. It also knows good photo conditions when it sees them. While perhaps a creepy concept to some, the idea is that the camera would capture candid and heartfelt moments at home; moments you might miss or not be able to capture yourself.
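Google hasn’t said how Moment IQ scores frames, but one classic ingredient of “good photo conditions” is sharpness. As a purely hypothetical illustration, frames can be ranked by the variance of a Laplacian (edge-detecting) filter response, which collapses for blurred images. The frames below are synthetic noise images, half of them box-blurred; real moment selection would also weigh faces, framing, and motion.

```python
import numpy as np

rng = np.random.default_rng(0)

def sharpness(frame):
    """Variance of a simple Laplacian response: a common blur/sharpness heuristic."""
    lap = (-4 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return lap.var()

def make_frame(blurry):
    """Synthetic grayscale frame; blurring suppresses high-frequency detail."""
    frame = rng.random((64, 64))
    if blurry:
        k = np.ones((5, 5)) / 25
        # Cheap box blur via FFT-based circular convolution.
        frame = np.real(np.fft.ifft2(np.fft.fft2(frame) * np.fft.fft2(k, s=frame.shape)))
    return frame

frames = [make_frame(blurry=(i % 2 == 0)) for i in range(10)]
scores = [sharpness(f) for f in frames]
# "Clip-worthy" frames: keep the sharpest half.
keep = sorted(range(10), key=lambda i: scores[i], reverse=True)[:5]
print(sorted(keep))
```

On this toy data the heuristic reliably keeps the unblurred frames, which is the basic shape of any "decide when to take the photo" pipeline: score every candidate frame, keep the best.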

Robots Everywhere

When products like Google Home were first introduced, many were hesitant to allow a seemingly sentient machine into their home. But it seems as though that’s changed, and people are slowly transitioning to accept technologies that incorporate AI, machine learning, and other advanced technologies. Compatible with Pixel phones, the iPhone 8 and 8 Plus, and the Samsung Galaxy S7 and S8, the Google Clips camera operates using WiFi and will cost $249. But how many will, realistically, be willing to test this step into AI photography territory?

Will Google Clips allow families, as it intends, to focus more on spending time together rather than taking photos, or will it become an ethically-murky nuisance?

There are those who are fearful of AI; Nick Bilton, New York Times tech columnist, wrote, “The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”


Now, this camera is nowhere near the potentially human-obliterating technology that Bilton is alluding to. However, it is important to both embrace progress and question how this type of technology will interact with our lives.

Could it be used by police to capture moments at home that could be cause for criminal concern? Is the biggest concern that it captures photos that you don’t care for aesthetically, or find inferior to your own photography skills?

There are many possibilities, and Google does allow the camera to operate without being connected to the internet, which could cut down on security concerns. Only time will tell, and hopefully, the camera will serve its purpose and allow families to enjoy being together while taking on the task of documenting loving moments.


The Reports Are In: AI and Robots Will Significantly Threaten Jobs in 5 Years

The Robots Are Coming to Threaten Jobs

A study from Redwood Software and Sapio Research released October 4th revealed that IT leaders believe automation could impact 60% of businesses by 2022 and threaten jobs in the process. Now a separate report from PwC, the second biggest professional services firm worldwide, suggests a similar timeline: one in which people may need to learn new skills or be left behind as automation takes over.

The report, titled Workforce of the Future, surveyed 10,000 people across China, India, Germany, the UK, and the U.S. to “better understand the future of work.” Of those, nearly 37% think artificial intelligence and robotics will put their jobs at risk; in 2014, 33% had a similar concern.

A startling scenario the report envisions for the future is one in which “typical” jobs — jobs people can steadily advance in through promotions — no longer exist, prompting the aforementioned move to develop new skills. Speaking with CNBC, PwC principal and U.S. people and organization co-leader Jeff Hesse says automation is already forcing people out, though it’s not consistent across every field.

“It varies a bit by industry,” explains Hesse, “but over the next five years we’re going to see the need for workers to change their skills at an accelerating pace.” If the report’s results are anything to go by, people are ready for change: 74% expressed a willingness to “learn new skills or completely retrain in order to remain employable in the future.”

As of March 2017, PwC reports about 38% of U.S. jobs are at risk of being affected by automation by the early 2030s, with Germany close behind at 35%, the UK at 30%, and Japan at 21%.

Required Skills and Alternative Incomes

Last year, Microsoft co-founder and philanthropist Bill Gates said there were three fields people would need to understand to survive in a job market that continues to embrace technology: science, engineering, and economics. They don’t need to be experts, but they do need to understand what people in each field are capable of. In the case of robotics, those who know how to manage automation software will be highly sought after. Hesse also suggests people research which skills their fields will need.

You can’t talk about the rise of robotics and automation without asking about those unable to adjust or unwilling to learn a new skill. Of the people PwC surveyed, 56% think governments should take any steps necessary to protect jobs, presumably so people without technical prowess can continue to work and earn an income.


Of course, the concept of a universal basic income has also been suggested as a possible step to offset automation’s potential to threaten jobs. The idea has been gaining a lot of support and is being talked about more, though there are still many who think there are better options. Gates, for example, believes the idea could work, but the world doesn’t have the means to pull it off just yet. Former Vice President Joe Biden believes a future that makes jobs and hard work a priority is better for everyone.

“While I appreciate concerns from Silicon Valley executives about what their innovations may do to American incomes, I believe they’re selling American workers short,” said Biden. “All of us together can make choices to shape a better future. Our workers, our businesses, our communities, and our nation deserves nothing less.”

Automation is happening more slowly than expected, but it’s a clear, impending challenge that needs to be prepared for. Whether the answer is a cash payment from governments, better job training, or other solutions, a decision needs to be made before we’re scrambling for short-term solutions.


Researchers Are Trying to Use AI to Put an End to Hate Speech

Safeguarding Against Hate

In an ideal world, the best stopper for hate speech would be an individual’s good sense of decency and propriety — in other words, a deep and profound respect for the human person, regardless of differences in opinion, race, or gender. However, we don’t live in an ideal world. As such, hate speech abounds, and the relatively free space social media offers has given it a platform that’s equally destructive — or perhaps even more so.

Social networking sites have attempted to control the problem, but to little or no avail. While you can report hate speech, it’s practically impossible for human moderators to catch every offender and every stream of derogatory utterances posted in private conversations or public forums. A non-human moderator, though, might — which is what researchers are exploring by using artificial intelligence (AI) to finally crack down on the problem of hate speech.

Haji Mohammad Saleem and his colleagues at McGill University in Montreal, Canada, developed an AI that learns how members of hateful communities speak. That’s a different tactic from the one attempted by Jigsaw, a unit of Google parent company Alphabet, which scored comments for toxicity based on certain key words or phrases. According to New Scientist, it didn’t work well: the comment “you’re pretty smart for a girl” was marked 18% similar to what people considered toxic, while “i love Fuhrer” was marked just 2% similar.

An AI Guard Dog

In a paper published online, Saleem and his team described how their AI works. Their machine learning algorithm was trained on data dumps of posts from the most active support and abuse groups on Reddit between 2006 and 2016, in addition to posts on other forums and websites. They focused on three groups that have often received abuse, online and otherwise: African Americans, people who are overweight, and women.

“We then propose an approach to detecting hateful speech that uses content produced by self-identifying hateful communities as training data,” the researchers wrote. “Our approach bypasses the expensive annotation process often required to train keyword systems and performs well across several established platforms, making substantial improvements over current state-of-the-art approaches.”
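The approach the researchers describe — treating posts from self-identified hateful communities as training data instead of hand-labeling keywords — can be sketched as a toy text classifier. Everything below (the tiny corpora, the whitespace tokenizer, the log-odds scoring) is an illustrative stand-in, not the McGill team’s actual pipeline.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train_log_odds(community_posts, background_posts, smoothing=1.0):
    """Score each word by its (add-one smoothed) log-odds of appearing
    in the self-identified community corpus versus a background corpus."""
    c = Counter(w for post in community_posts for w in tokenize(post))
    b = Counter(w for post in background_posts for w in tokenize(post))
    vocab = set(c) | set(b)
    c_total = sum(c.values()) + smoothing * len(vocab)
    b_total = sum(b.values()) + smoothing * len(vocab)
    return {w: math.log((c[w] + smoothing) / c_total)
              - math.log((b[w] + smoothing) / b_total)
            for w in vocab}

def score(post, log_odds):
    """Average per-word log-odds; positive means the post reads more
    like the community corpus than like the background."""
    words = tokenize(post)
    return sum(log_odds.get(w, 0.0) for w in words) / max(len(words), 1)
```

Because the labels come from where a post was made rather than from a curated slur list, phrasing a keyword filter would miss can still raise the score — the property the researchers report at much larger scale.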


Their algorithm caught subtext that could easily be lost when one relies on keywords alone, and it produced fewer false positives than the keyword method. “Comparing hateful and non-hateful communities to find the language that distinguishes them is a clever solution,” Cornell University professor Thomas Davidson told New Scientist. There are still limitations, however. The team’s AI was trained on Reddit posts, so it may not be as effective on other social media websites. It also missed some obviously offensive speech that keyword-based AI would catch. That’s understandable, though: stopping hate speech is as tricky as catching online terrorist propaganda.

Indeed, while AI may become better at catching online hate, it might not be able to do it alone. “Ultimately, hate speech is a subjective phenomenon that requires human judgment to identify,” Davidson added. Human decency may be something no AI can replace.


This AI Blockchain Could Be The Most Disruptive Tech of the Year


The idea of an Initial Coin Offering has entered the public parlance as part of the rise of cryptocurrency. Now, an ambitious artificial intelligence startup is using the funding strategy as a means of gathering the cash needed to jumpstart the development of a project: SingularityNET.

SingularityNET aims to prevent control of advanced AI from being entirely in the hands of Silicon Valley. Instead, it will use AI blockchain technology to distribute access to a wide range of AI algorithms, even enabling them to learn how to work in unison. 


Companies and individual developers will be able to host their algorithms on the network, and anyone will be able to use SingularityNET-specific currency to utilize them, thanks to smart contracts.

Computer Brain

At first, it’s expected that SingularityNET will be home to relatively simple AI algorithms, like computer vision technologies and translation services. However, there are hopes that this type of platform could allow these functionalities to become intertwined.

For example, if a user wanted to translate a document that includes images, SingularityNET could allow for the translation algorithm to request the services of the computer vision algorithm, to analyze what’s in the picture and supply a caption — with no need for human input.
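That composition pattern — one algorithm paying another for a sub-task through the network’s token economy — can be pictured with a toy service registry. The class, the prices, and the two stub services below are hypothetical illustrations of the idea, not SingularityNET’s actual interfaces.

```python
class Registry:
    """Toy service registry: each call debits the caller's token balance,
    and a service may call other registered services to compose results."""
    def __init__(self):
        self.services = {}   # name -> (price_in_tokens, callable)
        self.balances = {}   # account -> tokens

    def register(self, name, price, fn):
        self.services[name] = (price, fn)

    def call(self, account, name, payload):
        price, fn = self.services[name]
        if self.balances.get(account, 0) < price:
            raise ValueError("insufficient tokens")
        self.balances[account] -= price
        return fn(self, account, payload)

# Stub services: a 'vision' captioner, and a 'translate' service that
# delegates any embedded image to the captioner before translating.
def caption(reg, account, payload):
    return f"[caption of {payload['image']}]"

def translate(reg, account, payload):
    text = payload["text"]
    if "image" in payload:  # compose: request captioning first
        text += " " + reg.call(account, "vision", payload)
    return f"translated({text})"
```

Here a "translate" request on a document containing an image automatically triggers (and pays for) a "vision" call, with no human routing the work between the two algorithms.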

“We want to create a system that learns on its own how to cobble together modules to carry out different functions,” said project lead Ben Goertzel in an interview with Wired. “You’ll see a sort of federation of AIs emerge from the spontaneous interaction among the nodes, without human guidance.”

However, these are long-term plans; Goertzel doesn’t expect the system to reach that kind of sophistication for some time. The project is set to launch in 2018, but the ICO will get underway in November 2017.

Disclosure: Several members of the Futurism team, including the editors of this piece, are personal investors in a number of cryptocurrency markets. Their personal investment perspectives have no impact on editorial content.


Google’s New AI Can Mimic Human Speech Almost Perfectly

Elocution Lessons

Last year, artificial intelligence (AI) research company DeepMind shared details on WaveNet, a deep neural network used to synthesize realistic human speech. Now, an improved version of the technology is being rolled out for use with Google Assistant.

A system for speech synthesis — otherwise known as text-to-speech (TTS) — typically utilizes one of two techniques.


Concatenative TTS involves the piecing together of chunks of recordings from a voice actor. The drawback of this method is that audio libraries must be replaced whenever upgrades or changes are made.

The other technique, parametric TTS, uses a set of parameters to produce computer-generated speech, but this speech can sometimes sound unnatural and robotic.

WaveNet, on the other hand, produces waveforms from scratch using a convolutional neural network.
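Per DeepMind’s published description, WaveNet builds that network from dilated causal convolutions: each layer looks at samples spaced further apart than the last, so the context the model can “hear” grows exponentially with depth. A small sketch of that receptive-field arithmetic (the kernel size of 2 and the doubling dilation schedule follow the paper’s example configuration, an assumption about any production system):

```python
def receptive_field(dilations, kernel_size=2):
    """Audio samples of context visible at the top of a stack of dilated
    causal convolution layers: 1 + sum of (kernel_size - 1) * dilation."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# One WaveNet-style block: dilations double from 1 up to 512,
# giving 1,024 samples of context from just ten layers.
block = [2 ** i for i in range(10)]
assert receptive_field(block) == 1024
```

Ten ordinary (undilated) layers of the same size would see only 11 samples; dilation is what makes sample-by-sample waveform modeling tractable at all.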

To begin, a large number of speech samples were used to train the platform to synthesize voices, taking into account which waveforms sounded realistic and which did not. This gave the speech synthesizer the ability to produce natural intonation, even including details like lip smacks. Depending on the samples fed into the system, it would develop a unique “accent,” which means it could be used to create any number of distinct voices if fed different data sets.

Sharp Tongue

The biggest limitation of WaveNet was that it initially required a significant amount of computing power and wasn’t very fast, needing one second of computation to generate just 0.02 seconds of audio.

After improving upon the system for the past 12 months, DeepMind’s engineers have optimized WaveNet to the point that it can now produce a raw waveform lasting one second in just 50 milliseconds — 1,000 times faster than the original. What’s more, the resolution of each sample has been increased from 8 bits to 16 bits, contributing to its higher scores in tests with human listeners.
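The sample resolution here refers to how each audio sample is quantized before the network predicts it. The original WaveNet used 8-bit μ-law companding, a standard telephony transform that spends its 256 levels non-uniformly so quiet samples keep more detail. The sketch below shows that standard transform (μ = 255 is the usual convention; the exact scheme behind the new 16-bit version isn’t detailed in this article):

```python
import math

def mu_law_encode(x, mu=255):
    """Compress a sample in [-1, 1] non-uniformly, then quantize it
    to one of mu + 1 integer levels (0..255 for mu = 255)."""
    compressed = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    return int(round((compressed + 1) / 2 * mu))

def mu_law_decode(level, mu=255):
    """Invert the quantization back to an approximate sample in [-1, 1]."""
    compressed = 2 * level / mu - 1
    return math.copysign(math.expm1(abs(compressed) * math.log1p(mu)) / mu,
                         compressed)
```

A full-scale sample maps to level 255 and silence sits mid-range; the small error introduced by the round trip is the quantization noise that a 16-bit representation shrinks.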

These improvements mean the system can now be integrated into consumer products, like Google Assistant.

Image Credit: DeepMind

WaveNet is now being used to generate the U.S. English and Japanese voices for Google Assistant across all platforms. Because the system can create specialized voices based on whatever samples are fed into it, Google should be able to use WaveNet to synthesize realistic-sounding human speech for other languages and dialects moving forward.

Voice interfaces are becoming more and more prevalent across all forms of computing, but the stilted nature of some synthetic speech has put off many potential users. DeepMind’s efforts to improve upon this technology could prompt more widespread adoption and will certainly serve to refine the existing experience.


In the World of Tomorrow, You Won’t Just Watch Movies – They’ll Watch You

You’re Being Watched

Watching a movie in a darkened theater, your reactions likely went unnoticed. Your widened eyes from an unexpected twist, your jump at a sudden scare, the errant tear as the music swelled…these were all likely wasted on the plastic back of the seat in front of you. That is, unless you were in a theater equipped with “computer vision.” If you happened to be in such a theater, while you were watching the movie, a computer was watching you.

Silver Logic Labs is the company behind a computer program that does just that. Its CEO, Jerimiah Hamon, is an applied mathematician who specializes in number theory. He spent most of his career at Amazon, Microsoft, and Harvard Medical School, working to solve their consumer problems. But what he was really interested in was how artificial intelligence (AI) could help these companies better understand one of the most complex problems of all: human behavior.


Using AI to analyze human subjects’ reactions to various types of media, Hamon realized, would be beneficial to both researchers and companies. Here’s how his system works: the AI watches an audience watching a movie, detects people’s emotions through even the most subtle facial expressions (often referred to as micro-expressions), and generates data that the system later analyzes.

Hamon began this kind of testing three years ago, and while it’s still a fairly new application within the media industry, it’s gaining interest. That’s in large part because it’s produced reliable results — humans, Hamon estimates, are about as predictable as software (that is, quite predictable).

Image Credit: Silver Logic Labs

This AI and computer vision can give companies a data-driven look at how people react to movies or TV shows more reliably than any focus group ever could. Once an AI has been trained on a particular data set, it can provide fast, consistent, and detailed analyses. That’s good news for industries for which understanding those results, and implementing them to improve their product or service, will help them make money.

Ratings are an important way the television and film industries measure a project’s success. Hamon’s software, somewhat to the surprise of everyone involved, was able to predict a show’s ratings from Nielsen, Rotten Tomatoes, and IMDB with an accuracy ranging from 84 to 99 percent. There’s a range because shows that are “multimodal,” or attempting to serve multiple audiences, are sometimes harder to predict. But when it comes to TV, making any predictions at all about a show’s popularity is impressive enough on its own. “When I started this everyone told me, ‘You’ll never be able to predict that. Nobody can predict that,’” Hamon told Futurism.

With math, however, anything is possible — Hamon points out that because mathematical techniques can capture the nuances of something like a Nielsen score, it really isn’t that difficult to predict. “We’ve taken the emotional responses that people have to visual and auditory stimuli and converted it into a numeric value. And once it’s in a numeric value, then it’s just a matter of solving equations down to – well, how much did you actually like it?” Hamon says.
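Stripped to its simplest form, what Hamon describes is a regression: emotional responses become numbers, and predicting a rating means fitting an equation to them. The one-feature least-squares toy below uses invented “engagement” scores and ratings purely to illustrate that solving-equations step; it is nothing like the proprietary model.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a * x + b, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical data: mean audience 'engagement' per show vs. its rating.
engagement = [0.2, 0.4, 0.6, 0.8]
ratings = [3.0, 5.0, 7.0, 9.0]
a, b = fit_line(engagement, ratings)
prediction = a * 0.5 + b   # rating predicted for a new show (about 6.0 here)
```

The real system presumably uses many features and richer statistics, but once reactions are numeric, prediction reduces to exactly this kind of curve fitting.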

There’s a range of statistics involved, Hamon says, but he declined to give too many details about the equations he uses to calculate them, fearing that saying too much would give away the “secret sauce” of the program.

Beyond Entertainment

Because his AI was so good at predicting people’s likes and dislikes, Hamon wondered what else it could tell us about ourselves. Perhaps it could detect whether people are lying. Like a polygraph test, the AI could compare data indicating a person’s stress levels against an established data set to determine if he or she is lying. The data could also, he supposed, be distilled down far enough that those interpreting it could figure out if a person knew they were lying. To test this idea, Hamon used his AI to perform emotion recognition tasks on lower-quality video (he used clips of talking heads on CSPAN, as well as press conference footage of President Donald Trump).

Image Credit: Silver Logic Labs

At a moment when truth seems to be under attack, separating the honest folks from the liars seems critical. But the system could also be used in literal life-or-death situations, like helping clinicians better assess a patient’s pain to figure out the proper treatment.

The system could be essential in a situation like detecting a stroke, Hamon notes. While medical professionals and caretakers are trained to recognize the signs of a stroke, they often miss the “mini strokes” (medically, transient ischemic attacks) that may precede a full-blown one. The eye of AI could catch those micro signs of a stroke, or the subtle symptoms of an illness before acute onset, in nursing home patients. This would allow caretakers to respond and intervene in a timely manner so the patient could be monitored for signs of a more serious stroke; doctors could even take steps to prevent the stroke from ever occurring.

Would it really work? Hamon thinks it could. At the very least, he knows the AI is sensitive enough to detect these types of minute changes. When the system was testing audiences, researchers had to take note of participants’ prescription medications — some, such as stimulants like Adderall, cause effects like higher blood pressure and tiny muscle twitches. A human might not notice these subtle changes in another person, but the AI would pick them up and could mistake the medication’s effects for stress.

A Powerful Tool

Though many people may think that machines can’t be biased, the humans who design and interact with AI unknowingly imbue it with their biases. The implications of those biases become more pronounced as AI evolves, influencing processes like identity recognition and the collection of social media data in service of federal agencies.

As facial recognition technology has crept closer to the average person’s life, people have paid more attention to the possible ethical concerns raised by these biased algorithms. The idea that technology can exist completely without bias is a controversial and heavily challenged one: AI is dependent upon the data it has been trained on, and biases may affect that data, for example. And as AI research continues to progress, as we create machines that can in fact learn, we have to put in stopgaps for situations in which machines could learn a lot more from us than we set out to teach them.

Hamon feels confident that his algorithm is as unbiased as one can be: The computer is only interpreting a person’s physical behavior, regardless of the face or body that displays it. Hamon takes comfort in that — he’s experienced racial profiling first-hand. “I’m Native American, and I can tell you there are times when things get a little edgy,” he says. “You get nervous when the police pull up behind you. But I think this kind of technology really can undo that nervousness — because if you’re not doing anything wrong, [and] the computer is going to tell the officer that you’re not doing anything wrong. I would feel a sense of security knowing a computer is making that threat assessment.”

Image Credit: Silver Logic Labs

In any case, Hamon doesn’t worry too much about interpreting the results of the algorithm he created. While he’s confident in the algorithm, he also understands the limitations of his expertise. When it comes to making sense of the data, that’s something he feels is best left to the experts in fields like law enforcement, medicine, and psychiatry.

The future of Hamon’s work at Silver Logic Labs is fairly open; the applications for AI are limited only by the human imagination. Hamon wants to make his tool available for as many different uses as possible, but he does find himself circling back to where it all began: helping people create and enjoy high-quality entertainment.


“Storytelling is a part of human culture,” he says. Through his work Hamon has discovered at least one unquantifiable element that is key to media’s success: “People really enjoy seeing humans [meaningfully] engaging with other humans… those are the things that tend to make successes,” he says.

Though we as a society might struggle to adapt to the transformations that AI will bring to our lives, there’s solace to be found in Hamon’s vision. AI, in all the data it unfeelingly collects, might reveal to us observations that make us reconsider what it means to be human. It may see something in us that we’ve never seen in one another — or ourselves.


Google’s AI Still Isn’t Smarter Than a First Grader

Measuring Intelligence

Apple fans who make use of the iPhone’s famous virtual assistant, Siri, were probably elated to hear a more human-sounding voice emerge as part of iOS 11. While sounding more human is a plus for any artificial intelligence (AI) system, thinking like a human being would be even more so. According to a new study that compared the intelligence of today’s AI systems, that’s one area where Google’s AI appears to be out-performing Siri.

A team of Chinese researchers, led by Chinese Academy of Sciences‘ Research Center on Fictitious Economy and Data Science executive deputy director Yong Shi, wanted to “address the issue of AI threat” by coming up with “a standard intelligence model that unifies AI and human characteristics in terms of four aspects of knowledge, i.e., input, output, mastery, and creation,” as stated in the paper’s abstract. They looked at and ranked the intelligence quotient (IQ) of AI systems vis-á-vis human intelligence.

The researchers found that Google’s AI had an IQ of 47.28 in 2016. While Google’s AI is intelligent, it’s still not as smart as the average six-year-old child, whose IQ averages about 55.5 — let alone the average 18-year-old, whose IQ measures 97. In fact, none of the AI systems they checked displayed IQs greater than these measures. Chinese search engine Baidu scored 32.92, while Microsoft’s Bing measured 31.98. Apple’s Siri scored just 23.94 — less than half the IQ of Google’s AI.
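The paper’s “standard intelligence model” grades a system on four aspects of knowledge (input, output, mastery, and creation) and combines them into a single figure. How the authors actually weight those aspects isn’t given here, so the toy below uses invented scores and equal weights purely to show the shape of such a composite score.

```python
def standard_iq(scores, weights=None):
    """Combine per-aspect scores into one number. The equal weights
    here are an invented placeholder, not the paper's actual formula."""
    aspects = ("input", "output", "mastery", "creation")
    weights = weights or {a: 0.25 for a in aspects}
    return sum(scores[a] * weights[a] for a in aspects)

# Hypothetical system: decent at ingesting and emitting knowledge,
# weaker at mastering it, no capacity to create it.
print(standard_iq({"input": 60, "output": 50, "mastery": 30, "creation": 0}))
# prints 35.0
```

Whatever the real weighting, a scheme like this explains how systems strong at lookup and output can still score far below a child, who creates knowledge as well as consuming it.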

Learning to be Smart

These IQ scores don’t seem to lend credence to concerns experts like OpenAI co-chairman Elon Musk have expressed about AI’s capacity to end humanity. Of course, the measurement used by Yong and his colleagues is just one among many — and as the researchers acknowledged, AI systems do seem to be getting smarter. Google’s AI had an IQ of 26.5 in 2014, while Microsoft’s scored 13.5 — meaning that in just two years, Google nearly doubled the measured intelligence of its AI.

This shouldn’t come as a surprise: artificially intelligent systems are built to continually learn, and as Google’s DeepMind has shown, they’re getting good at it. Google has even trained the system to have an imagination and make sense of its surroundings. It also helps that Google, Microsoft, Apple, and Amazon are all investing heavily in AI research. As Google CEO Sundar Pichai said at this year’s Google I/O conference, the company’s current focus is on AI. Similarly, Microsoft CEO Satya Nadella has shifted his company’s focus to AI as well; the company has already set up a new AI research lab.

So, expect even smarter AI systems and virtual assistants in the near future. Already, even the simplest smartphones are powered by machine learning algorithms that enhance usability and performance — which is, at least for now, very much to our benefit.

While we might use Siri primarily to look up something fairly simple, like the rating of a restaurant we’re considering, we may soon find these assistants even more helpful. Some AI companies are even working on virtual assistants that can read emotions, which would make interacting with them more pleasant. As the technology continues to develop, it’s likely virtual secretaries will play more pronounced roles in our lives, much as Jarvis does for Tony Stark.


A World Leader in AI Just Established an Ethics Committee for Artificial Intelligence

Ethics and Society

Artificial intelligence (AI) is expected to have a monumental impact on society. As such, DeepMind, an AI research company now housed under Google parent company Alphabet, has established a new unit dedicated to answering questions about the effect the technology might have on the way we live.

DeepMind Ethics and Society will bring together employees from the company and outsiders who are uniquely equipped to offer useful perspectives. Economist and former UN advisor Jeffrey Sachs, University of Oxford AI professor Nick Bostrom, and climate change campaigner Christiana Figueres are among the advisers selected for the group.

At present, the unit comprises around eight DeepMind employees and six unpaid fellows from outside the company. The number of internal staff on the committee will grow to 25 over the next year.

The leaders of the group, Verity Harding and Sean Legassick, described the external contributors as “a respected group of independent thinkers” in a blog post announcing the initiative. “These Fellows are important not only for the expertise that they bring but for the diversity of thought they represent,” read the statement.

Let’s Be Careful

DeepMind has made no secret of their ambition to integrate AI into all aspects of life. This potential pervasiveness is one reason the moral and ethical considerations of the technology must be taken very seriously.


The company has already demonstrated that AIs can display behavior that might be described as a “killer instinct,” and some are already looking at ways to weaponize the technology, so how we choose to regulate AI could literally be a matter of life or death.

Still, the weaponization of AI is just one of the many ethical issues being raised, and in the past, DeepMind has been criticized for falling short of what many would consider to be proper standards. In May 2016, the company came under fire after they were given access to confidential health data for 1.6 million people during the development of an app called Streams.

How we answer questions about the responsibility engineers bear for the ways their work might be used will have far-reaching implications. That is why bodies like DeepMind Ethics and Society are so important. Without oversight, technologists might focus on what’s possible rather than on what’s morally acceptable, and that line of thinking can cause massive problems if left unchecked.


UN: Artificial Intelligence Could Destabilize World Through Unemployment and War

Centre for Artificial Intelligence and Robots

As it prepares to open the new Centre for Artificial Intelligence and Robotics, a headquarters in The Hague which will monitor developments in artificial intelligence (AI), the United Nations Interregional Crime and Justice Research Institute (UNICRI) has explained the need for the new center with a warning that robots could destabilize the world.

AI, and the robots that benefit from it, pose a range of potential threats to humans: from the standard fears of automation and the mass unemployment that follows it, to more dramatic concerns that autonomous killer robots will be deployed by those with nefarious aims — or that they will become self-directed. It will be the task of the UNICRI Centre for Artificial Intelligence and Robotics to anticipate each of these possible threats.

The Guardian reports that UNICRI senior strategic adviser Irakli Beridze said that the team at The Hague will also generate ideas about how AI advances could help achieve UN targets. His point seemed to be that while there are risks associated with developments in AI that needed to be addressed, there is a bigger picture that the center will consider, as the UN’s first permanent office focused on AI.

Image Credit: thehorriblejoke/Pixabay

“If societies do not adapt quickly enough, this can cause instability,” Beridze told the Dutch newspaper de Telegraaf. “One of our most important tasks is to set up a network of experts from business, knowledge institutes, civil society organizations and governments. We certainly do not want to plead for a ban or a brake on technologies. We will also explore how new technology can contribute to the sustainable development goals of the UN. For this we want to start concrete projects. We will not be a talking club.”

Getting Ready for AI

The UN isn’t alone; others who understand the industry are preparing for advancements in AI. The United States, China, and Russia are all striving to develop weapons supremacy in the realm of AI, and Israel is also developing autonomous weapons technology.

In August, over 100 leaders in AI and robotics, including Elon Musk, urged the UN to act against autonomous weapons: “Lethal autonomous weapons threaten to become the third revolution in warfare,” they wrote. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

Stephen Hawking shares Musk’s concerns about AI, warning in 2016 that it would be “either the best or the worst thing ever to happen to humanity.” And while many AI experts, including Bill Gates, do not share these concerns, or feel they are overstated, a UN center focusing on the issue is probably a good idea. In any area with quickly developing technology that is disruptive, human rights can be a concern; this is the UN’s overall focus.

The post UN: Artificial Intelligence Could Destabilize World Through Unemployment and War appeared first on Futurism.

“Holy Grail” Microchip Might Surpass the Power of the Human Brain

Photonic Synapses

Scientists at the University of Exeter have made a landmark breakthrough in the quest for the “holy grail” of computing: microchips that mimic the human brain’s ability to store and process information, according to a new Science Advances release.

The research team at Exeter developed photonic computer chips that use light instead of electricity to simulate the operation of brain synapses. Researchers from other universities, including Oxford and Münster, combined phase-change materials found in ordinary household products, such as rewritable optical discs, with custom-made integrated photonic circuits to create a synaptic response that mimics its biological counterpart.

Image Credit: jurvetson

This is significant because photonic synapses can operate at a thousand times the speed of those in the human brain. The advance could open up a whole new world of computing: machines that think and work in ways indistinguishable from the human brain, while simultaneously exploiting the speed and power efficiency of photonic systems.

Outshining Electricity

Professor Harish Bhaskaran of Oxford, who led the team, said “The development of computers that work more like the human brain has been a holy grail of scientists for decades. Via a network of neurons and synapses the brain can process and store vast amounts of information simultaneously, using only a few tens of watts of power. Conventional computers can’t come close to this sort of performance.”

Daniel C. Wright, a co-author from the Exeter team, added that “Electronic computers are relatively slow, and the faster we make them, the more power they consume. Conventional computers are also pretty ‘dumb,’ with none of the in-built learning and parallel processing capabilities of the human brain. We tackle both of these issues here — not only by developing new brain-like computer architectures, but also by working in the optical domain to leverage the huge speed and power advantages of the upcoming silicon photonics revolution.”

Another co-author of the paper, Professor Wolfram Pernice from the University of Münster, expanded, “Since synapses outnumber neurons in the brain by about 10,000 to 1, any brain-like computer needs to be able to replicate some form of synaptic mimic. That is what we have done here.”

Of course, no one knows whether photonic synapses could become self-conscious (unless we subscribe to some positivist metaphysics, say, the computational theory of mind). But philosophy 101 aside, imagine sitting in a café, having a cup of coffee, mid-full-bore existential crisis, recursively doubting the veracity of the arrangement of the room, to say nothing of strangers’ sly glances. Is the woman in the corner just thinking with a thousand-yard stare, or working up the nerve to ask you for something? She unfolds her legs and rises, her steps progressing soundlessly toward you, and, leaning in far enough for you to see the lights in her eyes, she glances at the AI-interface door panel behind you, then back, and says, thumb-jabbing toward the camera on the panel, “tell me about it.”

The post “Holy Grail” Microchip Might Surpass the Power of the Human Brain appeared first on Futurism.

Ray Kurzweil Says Machines Won’t Take Over Our Jobs

For Every Loss, an Opportunity

Google’s Director of Engineering — and one of today’s most notable futurists — Ray Kurzweil gave an interesting perspective on the looming job displacement due to intelligent automation. Speaking to Fortune, the famed “future teller” dismissed most people’s worries about automation.

Will Automation Steal My Job?
Click to View Full Infographic

“Everybody would go, ‘Oh, my God, we’re going to be out of work,”’ Kurzweil told Fortune’s Michal Lev-Ram. “I would say, ‘Well, don’t worry, for every job we eliminate, we’re going to create more jobs at the top of the skill ladder.’” Kurzweil added that when people ask what these new jobs would be, he’d say, “Well, I don’t know. We haven’t invented them yet.”

“That continues to be the case, and it creates a difficult political issue because you can look at people driving cars and trucks, and you can be pretty confident those jobs will go away,” he added. “And you can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.”

Kurzweil’s optimism reflects his general attitude toward the future he believes artificial intelligence will bring, including the supposed technological singularity, which he has predicted will arrive in 2045.


Eyes on the Future

Throughout the interview, the general trend was toward asking Kurzweil about his predictions for the future — after all, he’s had an impressive batting average when it comes to his predictions. “If you Google how my predictions have fared, you’ll get a 150-page paper analyzing 147 predictions I made about the year 2009, which I wrote in the late ’90s—86% were correct, 78% were exactly to the year,” he said.

The only thing he didn’t quite get right was his prediction about self-driving cars, which he thought would be around by 2009. “It’s not completely wrong. There actually were some self-driving cars back then, but they were very experimental.”

The Age of Automation: Welcome to the Next Great Revolution
Click to View Full Infographic

Kurzweil also gave his two cents on the matter of AI’s potential dangers, and as the top guy behind Google’s AI and machine learning efforts, he is considered an expert on the subject. He explained that technology “has always been a double-edged sword, since fire kept us warm but burned down our houses. It’s very clear that overall human life has gotten better, although technology amplifies both our creative and destructive impulses.”

He explained:

A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation: It’s very important for your survival to be sensitive to bad news. A little rustling in the leaves may be a predator, and you better pay attention to that. All of these technologies are a risk. And the powerful ones—biotechnology, nanotechnology, and A.I.—are potentially existential risks.

Kurzweil knows the value of keeping one’s sights on the future without losing sight of the past. Yes, like any technology before it, AI will have its good and bad sides. Instead of being all glum about it, though, Kurzweil remains positive. “I think if you look at history, though, we’re being helped [by technology] more than we’re being hurt.”

The post Ray Kurzweil Says Machines Won’t Take Over Our Jobs appeared first on Futurism.

In the Future, You Might Get Paid for the Data You Share Online

Precious Data

In a highly interconnected globalized world, data is everything. Regarding the economics of “big data,” some people have called data the new oil. Companies get precious bits of information from us — which, more often than not, we give for free in exchange for some form of personalized web service, such as signing up for an email account.

“The defense of this practice is that these companies provide ‘free’ services, and that they deserve some reward for their innovation and ingenuity,” John Danaher, from NUI Galway’s School of Law, told Digital Trends. “That may well be true, but I would argue that the rewards they receive are disproportionate.”

Companies are able to collect data from consumers, whether it’s direct personal information or behavioral in nature, through the different web services we make use of. Some might ask us to click on a few tick boxes or answer a CAPTCHA question, or even ask us personal questions — such as what we like in a relationship. The data we provide is used to train machine learning systems to perform their algorithms better.

So, one might ask: are we being cheated here? Should consumers get more from the data we share or provide to companies? Virtual reality (VR) pioneer Jaron Lanier certainly thinks so. In his book, Who Owns the Future?, he suggests a potential micropayment model.

Defining Useful Data

Under Lanier’s micropayment scheme, for all data that companies find useful — like in perfecting machine learning algorithms that feed off that data — the person who provided the information should be given some form of recompense. It’s a model similar to the existing one that Google and Apple have when it comes to content creators: these individuals get paid for a successful YouTube video, for instance, where success is defined by certain metrics.

In the same way, the most useful data could be determined using a formula that accounts for where the data originated and how much it contributed to training a system to perform certain functions. This means that the relative value of data would differ from one type to the next.
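Lanier’s book doesn’t specify an exact formula, but the idea can be sketched as a base rate scaled by where the data came from and how much it helped a model. The function name, weights, and rates below are illustrative assumptions, not an actual pricing model:

```python
# Hypothetical sketch of a Lanier-style data micropayment. All names,
# weights, and rates here are illustrative assumptions.

def data_value(base_rate, provenance_weight, training_contribution):
    """Value of one data record: a base rate scaled by where the data
    originated and how much it improved the model (e.g., its marginal
    effect on validation accuracy, normalized to 0..1)."""
    return base_rate * provenance_weight * training_contribution

# A user whose labeled photo contributed strongly to a vision model:
payout = data_value(base_rate=0.01,            # dollars per record
                    provenance_weight=1.5,     # verified first-party data
                    training_contribution=0.8) # high marginal usefulness
print(round(payout, 4))  # 0.012
```

Under such a scheme, a record that never influenced any model (contribution 0) would earn nothing, which is roughly the situation most users are in today.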

The rationale for such a scheme becomes even more convincing when you consider the future of employment in the face of growing intelligent automation. After all, aren’t smart robots trained using data provided by humans? Together with universal basic income models, perhaps micropayments for data could help compensate for the jobs we may lose as a consequence of automation.

The post In the Future, You Might Get Paid for the Data You Share Online appeared first on Futurism.

Bill Gates Says We Shouldn’t Panic About Artificial Intelligence

Everyone Has an Opinion

Artificial intelligence (AI) is one of today’s hottest topics. In fact, it’s so hot that many of the tech industry’s heavyweights — Apple, Google, Amazon, Microsoft, etc. — have been investing huge sums of money to improve their machine-learning technologies.

An ongoing debate rages on alongside all this AI development, and in one corner is SpaceX CEO and OpenAI co-chairman Elon Musk, who has been issuing repeated warnings about AI as a potential threat to humankind’s existence.

Types of AI: From Reactive to Self-Aware [INFOGRAPHIC]
Click to View Full Infographic

Speaking to a group of U.S. governors a couple of months back, Musk again warned about the dangers of unregulated AI. This was criticized by those on the other side of the debate as “fear-mongering,” and Facebook founder and CEO Mark Zuckerberg explicitly called Musk out for it.

Now, Microsoft co-founder and billionaire philanthropist Bill Gates is sharing his opinion on Musk’s assertions.

In a rare joint interview with Microsoft’s current CEO Satya Nadella, Gates told WSJ. Magazine that the subject of AI is “a case where Elon and I disagree.” According to Gates, “The so-called control problem that Elon is worried about isn’t something that people should feel is imminent. We shouldn’t panic about it.”

Fear of AI?

While the perks of AI are rather obvious — optimized processes, autonomous vehicles, and generally smarter machines — Musk is simply pointing out the other side of the coin. With some nations intent on developing autonomous weapons systems, irresponsible AI development has an undeniable potential for destruction. Musk’s strong language may make him sound like he’s overreacting, but is he?

As he’s always been sure to point out, Musk isn’t against AI. All he’s advocating is informed policy-making to ensure that these potential dangers don’t get in the way of the benefits AI can deliver.

In that, Musk isn’t alone. Not all experts think his warnings are farfetched, and several have joined Musk in sending an open letter to the United Nations about the need for clear policies to govern AI. Even before that, other groups of AI experts had called for the same.

Judging by what Nadella told WSJ. Magazine, much of this conflict may be mostly imagined. “The core AI principle that guides us at this stage is: How do we bet on humans and enhance their capability? There are still a lot of design decisions that get made, even in a self-learning system, that humans can be accountable for,” he said.

“There’s a lot I think we can do to shape our own future instead of thinking, ‘This is just going to happen to us’,” Nadella added. “Control is a choice. We should try to keep that control.”

In the end, it’s not so much AI itself that we should watch out for. It’s how human beings use it. The enemy here is not technology. It’s recklessness.

The post Bill Gates Says We Shouldn’t Panic About Artificial Intelligence appeared first on Futurism.

Microsoft Announces Its New Open Source Machine Learning Tools

Cloud Minds

On Monday, Microsoft unveiled some new machine learning tools at the Ignite conference in Orlando. The software giant announced the release of the Azure Machine Learning Experimentation service, the Azure Machine Learning Workbench, and the Azure Machine Learning Model Management service. Azure is Microsoft’s cloud computing service, the platform which will host the new tools. The tools will help developers both build new artificial intelligence (AI) models and use existing ones built by Microsoft or third parties.

According to TechCrunch, the Experimentation Service will help developers to “quickly train and deploy machine learning experiments.” Microsoft notes that the Machine Learning Workbench will be a “control panel for your development lifecycle and a great way to get started using machine learning.” The Workbench will also be able to automatically convert all data, making it easier for the machine learning algorithms to handle it. Finally, the Model Management service will allow for the easy deployment and management of models.

Open Source Intelligence

All of Microsoft’s new tools are compatible with open-source frameworks not developed by Microsoft, which is a huge win for the democratization of AI. Microsoft may realize that the software to build AI may not be as monetizable as hosting the computing power necessary to run demanding AI software.

Just as SpaceX is giving more ideas the chance to flourish with cheaper access to the cosmos, Microsoft could help facilitate new leaps forward in the development of AI. Developers with excellent ideas but a lack of support or infrastructure now have the means to deploy their concepts and experiment with them. Continued development from Microsoft and from Elon Musk’s OpenAI will fuel this kind of accessibility and perhaps allow us to take the next steps toward real-life, powerful, and practical AI systems.

The post Microsoft Announces Its New Open Source Machine Learning Tools appeared first on Futurism.

A New, Artificially Intelligent Hologram Was Just Born

Can I Help You?

VNTANA and Satisfi Labs have announced a new platform that will allow businesses to develop their own hologram concierges. The project fuses artificial intelligence (AI) with augmented reality (AR) technology to produce a 3D persona that can interact with customers.

“Our hologram technology has helped numerous brands better connect and engage with their consumers, and we are thrilled to offer this new addition on our platform,” said Ashley Crowder, VNTANA’s CEO and co-founder, in a press release.

She continued, “By partnering with Satisfi, we are able to integrate AI and AR for the first time, so our holograms can interact with consumers on a more advanced level, while using our trusted platform to track data to create more personalized experiences and advertising.”

There are companies that already use AI concierge services to field questions and queries from customers. However, this project would pair those natural language processing capabilities with a physical presence.

VNTANA offers up the example of a sports team creating a hologram of their star player to lead fans to their seats on game day. A hotel might create an “employee” that could handle the check-in procedure and make dinner reservations through conversation alone; a “human” interface.

What’s more, VNTANA’s platform is capable of collecting data from these interactions to produce marketing insights based on the clientele.

Hello, Hologram

When the likes of Siri and Alexa first hit the scene, many users were reticent to speak to their devices. However, just a few years later, this kind of technology is much more widely accepted. VNTANA and Satisfi Labs hope that they can do the same with this hologram concierge service.

It seems likely that the success or failure of the project will come down to its execution. A hologram that acts like a pushy sales assistant could come off as aggressive rather than approachable, creating a bad first impression for holograms in general. However, this possibility is being taken into account in the technology’s design.

The platform will use a facial recognition system to gauge reactions. VNTANA will offer a sentiment analysis module post-launch that will allow for “distressed” customers to be directed toward a non-hologram contact.
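The escalation logic described above can be sketched in a few lines. The threshold, score range, and function name here are illustrative assumptions; VNTANA has not published its actual API:

```python
# Minimal sketch of sentiment-based escalation: visibly distressed
# customers get routed to a human; everyone else stays with the
# hologram. Threshold and score range are assumptions for illustration.

DISTRESS_THRESHOLD = -0.4  # sentiment scores assumed in [-1.0, 1.0]

def route_customer(sentiment_score):
    """Return which contact should handle this customer."""
    if sentiment_score < DISTRESS_THRESHOLD:
        return "human_contact"
    return "hologram_concierge"

print(route_customer(-0.7))  # human_contact
print(route_customer(0.2))   # hologram_concierge
```

In practice the sentiment score would come from the facial recognition system mentioned above, and the threshold would need tuning against real customer reactions.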

Satisfi Labs’ CEO and co-founder Don White described this as an “unprecedented opportunity” for the development of holograms as well as AI: “Consumers will be transfixed by the technology and will truly appreciate the ease and intelligent interaction they can have with the hologram.”

The post A New, Artificially Intelligent Hologram Was Just Born appeared first on Futurism.

AI Might Match Human Learning Capabilities by 2050, Says Breyer Capital CEO

AI circa 2050

Artificial intelligence (AI) is on track to develop the same learning capabilities as humans; some believe AI will be able to learn at the same speed we do in a little over 30 years.

Jim Breyer, founder and CEO of venture capital firm Breyer Capital, shares this belief. Speaking at the CNBC and Institutional Investor’s Delivering Alpha conference earlier this month, the CEO said it could happen as early as the year 2050, citing his various talks with AI experts.

White House AI Report: Everything You Need to Know [INFOGRAPHIC]
Click to View Full Infographic

“When I visit campuses and speak to the AI experts, there is a median year of 2050 where they think the self-learning capability of AI will be singular and will be at human-level intelligence,” he explained. “So, 2050? Does it happen? There are a lot of people who don’t think that is the case. It is median data from some of the best AI researchers in the world.”

Hopes and Concerns

A lot of people in the tech industry have spoken about AI and its potential — for better and worse. Tesla CEO Elon Musk has called AI the biggest threat to civilization, while Stephen Hawking has said it could negatively impact middle-class jobs and potentially lead to the end of humanity, resulting in the development of a plan to stop the “AI apocalypse.”

On the other hand, a number of researchers have disagreed with Musk’s views, saying his thoughts “seem to focus on the rather far-fetched super-intelligence take-over scenarios.” There’s also the belief that AI will only be dangerous if it’s used without proper thought and concern for the consequences.

Breyer has invested a lot of personal time and money into AI and human-assisted intelligence (HAI), but he hasn’t dismissed Musk’s concerns the way others have. Instead, he remains focused on the good that may come from incorporating AI — the technology is already used in self-driving cars and is expected to help defend against cyber attacks.

“For the next decade or two, the good that comes out of proper, safe and interesting AI applications is enormous, and I am very excited about being part of investing in it.”

Regardless of where AI goes, the technology is here to stay. We’ll have to wait and see where AI takes us, but at the rate it’s being developed, we won’t have to wait long to see how it impacts our lives.

The post AI Might Match Human Learning Capabilities by 2050, Says Breyer Capital CEO appeared first on Futurism.

Analysts Predict That Artificial Intelligence Will Be a $14 Billion Industry by 2023

Exponential Growth

A recent report asserts that the artificial intelligence (AI) industry will grow at a compound annual growth rate of 17.2 percent through 2023. The market is set to swell to a whopping $14.2 billion over the next six years, up from just $525 million in 2015.
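For readers unfamiliar with the metric, a compound annual growth rate (CAGR) simply compounds a fixed percentage each year. The sketch below shows the arithmetic using the report’s 17.2 percent figure over a six-year horizon; the helper name and base value are illustrative:

```python
# Compound annual growth rate (CAGR) arithmetic: project a value
# forward by compounding a fixed rate annually.

def project(start_value, cagr, years):
    """Future value after compounding `cagr` annually for `years`."""
    return start_value * (1 + cagr) ** years

# How much does a market grow at 17.2% CAGR over six years?
growth_factor = project(1.0, 0.172, 6)
print(round(growth_factor, 2))  # 2.59
```

In other words, a 17.2 percent CAGR roughly multiplies a market by 2.6 over six years.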

Understanding Machine Learning [INFOGRAPHIC]
Click to View Full Infographic

Natural language processing technology is set to be a huge contributor to this growth. This tech is being adopted rapidly, particularly by financial institutions, because it can carry out customer service transactions and answer common questions in the place of human employees.

As for geography, North America is expected to hold the majority of the AI industry’s market share by 2023, but Europe and the Asia-Pacific region will see significant growth thanks to the rapid pace of urbanization in some areas, increasing use of smartphones, and robust automotive sectors.

The Time Is Now

Inevitably, any transition away from human labor could result in fewer jobs. The big question is whether AI and automation can produce enough new jobs to ensure that the people being replaced aren’t left unemployed.

According to the World Economic Forum, automated systems are on track to replace more than five million workers by 2020. In the U.S., some have predicted that 7 percent of jobs will be lost to automation by 2025, and a recent study found that as many as 10 million jobs in Great Britain could be swallowed up by automation over the next 10 years.

While we still have reasons to be optimistic that AI and automation could create jobs rather than just take them away, that’s not likely to happen organically — we need to make some big societal changes to ensure the benefits of AI outweigh the negatives.

These changes might mean some sort of adjustment to academic curriculum that takes into account the types of jobs that students are likely to hold 10 years from now. Universal basic income (UBI) could also be the answer, giving citizens a way to ensure their basic needs are met even after they are no longer needed at their job.

AI and automation are no longer the wave of the future — the technology is already here, and these systems are being implemented more broadly than ever before. The world of work is changing, and the way society functions needs to change along with it.

The post Analysts Predict That Artificial Intelligence Will Be a $14 Billion Industry by 2023 appeared first on Futurism.

Amazon Is Working on Alexa-Powered Smart Glasses Plus a Home Security System

Alexa-Powered Glasses

The Financial Times reports Amazon is currently working on two new products powered by Alexa — the company’s virtual assistant.

Citing people familiar with the Seattle-based company’s plans, the Financial Times writes that the first product will be a pair of smart glasses that give wearers access to Alexa from anywhere in the world. To ensure the device can be worn comfortably, and to avoid drawing attention in public, it will resemble a typical pair of glasses; it would also feature a bone-conduction audio system, allowing the wearer to hear Alexa without inserting headphones.

The second product will be part of Amazon’s line-up of smart home accessories; specifically, a home security camera system. The camera would have internet capabilities and would support various Echo products. Among other things, users would be able to view the camera’s video feed on the Amazon Echo Show’s screen, or track the progress of an Amazon order.

Comparing the Best AI Assistants on the Market
Click to View Full Infographic

Expanding Alexa and Echo

CNET writes that Alexa and Amazon Echo have performed well since their respective launches, presumably a significant reason behind Amazon’s decision to expand to smart glasses and new security hardware. According to eMarketer, Amazon controls 70 percent of the voice-enabled smart speaker market as of May 2017, putting it far ahead of Google Home’s 24 percent.

It’s unknown when either product will launch, but it’ll be interesting to see how people react to a new pair of smart glasses. Google Glass, Google’s own attempt at smart glasses, wasn’t exactly a big hit with consumers. It was rebooted in 2015 and renamed Project Aura before becoming a product used almost exclusively by Google employees.

The post Amazon Is Working on Alexa-Powered Smart Glasses Plus a Home Security System appeared first on Futurism.

Chinese Smartphone Maker Promises to Outdo Apple With “The Real AI Phone”

Mate 10 Vs. iPhone X

While people are excited about the impending releases of the iPhone 8, 8+, and X, Chinese smartphone maker Huawei is aiming to take away some of Apple’s spotlight with a smartphone reveal of their own.

In a Facebook video posted on September 15, Huawei poked fun at Apple’s facial recognition technology: the video features a clown face that the phone fails to identify, leaving the device locked. This could be a reference to a moment during Apple’s keynote when a demonstration of the iPhone X’s Face ID feature didn’t work out as planned, though Yahoo later attributed that failure to a human error made beforehand.

“Let’s face it, facial recognition isn’t for everyone. Unlock the future with #TheRealAIPhone. 16.10.2017,” reads the video’s description.

According to The Verge, the “Real AI Phone” Huawei refers to is its upcoming Mate 10 smartphone. The device is expected to launch on October 16, and may come equipped with the Kirin 970, Huawei’s first ever mobile artificial intelligence (AI) platform. The company announced the Kirin 970 at IFA 2017, a massive tech conference in Berlin, saying it will “bring powerful AI features to our devices and take them beyond the competition.”

Concept image of the Kirin 970. Image Credit: Huawei.

Apple’s A11 Bionic neural engine is meant to run tasks like Face ID, Animoji, and augmented reality apps, but Huawei’s claim suggests it intends to do more with the Kirin 970 — The Verge notes that the Mate 9 uses machine learning to adjust the operating system’s behavior over time in response to a user’s habits. Huawei may be attempting to offer similar benefits with the Mate 10.

Of course, it will be hard to tell what either of these phones can really do until they’re widely available and put to the test. If the Mate 10 and Kirin 970 do prove to be more powerful than anything Apple has to offer, however, it could widen the gap between the two companies and solidify Huawei’s spot as the second-largest smartphone brand.

The post Chinese Smartphone Maker Promises to Outdo Apple With “The Real AI Phone” appeared first on Futurism.

Apple’s New Counselor Will Teach Siri How to Have “Serious Conversations”

Guidance from Siri

Apple is on the hunt for a Siri software engineer with a background in counseling or psychology, according to a job listing spotted by CNBC’s Christina Farr.

The job listing gives a rare insight into how people use Apple’s voice assistant and how Apple wants to improve it. Interestingly, people are turning to Siri in emergencies or “when they want guidance on living a healthier life.”

Image Credit: Reuters/Suzanne Plunkett

Candidates must have a computer science degree or equivalent, and experience with artificial intelligence (AI) technologies like natural language processing and machine learning is also desirable.

Apple, Google, Amazon, Facebook, Microsoft, and other technology companies are all battling it out to hire the best and brightest people in the field of AI, with salaries often running into six figures.

Taking Siri to the Next Level

Describing the “Siri Software Engineer, Health and Wellness” role, which is based in Santa Clara Valley, Apple writes:

“People have serious conversations with Siri. People talk to Siri about all kinds of things, including when they’re having a stressful day or have something serious on their mind. They turn to Siri in emergencies or when they want guidance on living a healthier life. Does improving Siri in these areas pique your interest? Come work as part of the Siri Domains team and make a difference.

Comparing the Best AI Assistants on the Market
Click to View Full Infographic

“We are looking for people passionate about the power of data and have the skills to transform data to intelligent sources that will take Siri to next level. Someone with a combination of strong programming skills and a true team player who can collaborate with engineers in several technical areas. You will thrive in a fast-paced environment with rapidly changing priorities.”

Apple did not immediately respond to Business Insider’s request for comment.

The post Apple’s New Counselor Will Teach Siri How to Have “Serious Conversations” appeared first on Futurism.

Suing Equifax After the Data Breach? A Chatbot Could Help With the Paperwork

Suing Equifax

People are not happy with credit reporting agency Equifax, following the company’s announcement in September that an unknown party gained access to the personal information of over 143 million customers. The situation was made even more frustrating because the company waited a couple of months before sharing the news.

Those affected by the security breach can sue the agency for damages, and a chatbot called DoNotPay can now make the process slightly easier by filling out the necessary paperwork for you. It may seem like a small gesture, but finding the right paperwork — and understanding it — can be enough of a barrier to keep people from following through.

DoNotPay has been around for a while and was originally used to help people contest parking tickets. Its creator, Joshua Browder, a student at Stanford University, recently updated it to aid those seeking to make Equifax figuratively, and literally, pay. According to The Verge, damages could range from $2,500 to $25,000, depending on the state, but the chatbot currently only supports those living in New York and California. Upon completion, eight pages of documents are presented in PDF form, which can then be printed out and filed.

“I hope that my product will replace lawyers, and, with enough success, bankrupt Equifax,” said Browder.

Robot Lawyers

It’s unclear how effective DoNotPay will be in the fight against Equifax. Transactional lawyer Peter Vogul told The Verge it’s likely the agency “will fight class action lawsuits [and] small claims courts actions. That does not mean that Equifax will prevail, but…given the scope of the 143 million [affected] individuals, it strikes me that Equifax will want to make this as complicated as possible for consumers.”

Browder’s chatbot is the latest venture into using artificial intelligence (AI) to make the legal process easier. Robot lawyers have been used to help with refugee applications, offer people additional legal advice before appearing in court, and to predict the outcome of court cases.

In February, IBM partnered with H&R Block to train the AI Watson to become “the world’s first AI tax preparation assistant.” Trial runs with the AI yielded favorable results, proving AI can be used for more than just taking directions.

The post Suing Equifax After the Data Breach? A Chatbot Could Help With the Paperwork appeared first on Futurism.

An AI Can Recreate Your Favorite Old-School Video Game Just by Watching Someone Play It

A New Kind of Developer

There are many video game companies around these days, from established names to indie developers. Just when it seems that everyone with the ability to create video games has tried it out, researchers from the Georgia Institute of Technology have introduced a totally new, first-of-its-kind developer. In a research paper published online, the Georgia Tech team showed how artificial intelligence (AI) can be used to make a video game.

More specifically, to remake a video game: the team’s AI recreated classic two-dimensional titles, like Super Mario Bros. and Mega Man, just by watching the games being played. While the Georgia Tech AI didn’t recreate full-blown versions, it did demonstrate that AI can learn the game engine behind these classic titles. This ability to actually recreate the game it sees — not just learn how to play it — makes it different from similar game-playing AIs like DeepMind’s AlphaGo, systems that play Atari games and StarCraft II, and Elon Musk’s Dota-playing bot.

There are, of course, limitations to consider (we don’t want to over-promise here — like No Man’s Sky). Georgia Tech’s AI didn’t learn everything from scratch. From the outset it was equipped with a visual dictionary of all the in-game sprites, as well as a set of basic concepts — such as the position and velocity of objects — in order to analyze what it would see.

Image credit: Nintendo


“In this work we present a novel approach to learn a forward simulation model via simple search over pixel input,” the researchers wrote. Simply put, their AI learned to make the game by watching thousands of frames and pixels of gameplay, which involved observing changes from one frame to the next in order to establish a cause and effect link. Eventually, the AI learned enough to build small rules, which helped it approximate the game engine enough to recreate it.

“For each frame of the video we have a parser which goes through and collects the facts. What animation state Mario is in, for example, or what velocities things are moving at,” lead author Matthew Guzdial told The Verge. “So imagine the case where Mario is just above a Goomba in one frame, and then the next frame the Goomba is gone. From that it comes up with the rule that when Mario is just above the Goomba and his velocity is negative, the Goomba disappears.” (For those unfamiliar with the game, Goombas are sentient mushrooms, and Mario steps on them throughout the game.)
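The parse-then-compare loop Guzdial describes can be sketched in a few lines. This is a hypothetical simplification, not the team’s actual system: the frame representation, function names, and the single “a fact disappeared” rule type are all assumptions made for illustration.

```python
# Hypothetical sketch of the rule-induction idea: parse facts from
# consecutive frames, then propose a cause-and-effect rule whenever
# a fact disappears between one frame and the next.

def parse_frame(frame):
    """Return a set of (sprite, property, value) facts for one frame.
    `frame` is assumed to be a dict of sprite -> {property: value}."""
    facts = set()
    for sprite, props in frame.items():
        for prop, value in props.items():
            facts.add((sprite, prop, value))
    return facts

def induce_rules(prev_frame, next_frame):
    """Compare two frames and propose rules of the form
    (conditions that held in the previous frame) -> (fact removed)."""
    prev_facts = parse_frame(prev_frame)
    next_facts = parse_frame(next_frame)
    rules = []
    for fact in prev_facts - next_facts:  # facts that disappeared
        # condition: every other fact that held in the previous frame
        conditions = frozenset(prev_facts - {fact})
        rules.append((conditions, ("remove", fact)))
    return rules

# Example: a Goomba present in frame t but gone in frame t+1, while
# Mario was just above it with negative (downward) velocity.
frame_t = {
    "mario": {"y": 10, "vy": -2},
    "goomba": {"y": 9, "alive": True},
}
frame_t1 = {
    "mario": {"y": 9, "vy": 1},
}
for conditions, effect in induce_rules(frame_t, frame_t1):
    print(effect)
```

In a real system, rules proposed this way from thousands of frame pairs would then be generalized and pruned until they approximate the game engine.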

While the resulting recreation is still glitchy at best, the underlying mechanism employed by this Georgia Tech AI could actually have useful, real-world applications. As the researchers wrote, “Intelligent agents need to be able to make predictions about their environment.” It could prove helpful in autonomous systems used in self-driving vehicle technology, for example. It could also be useful in finally developing AI that can make sense of the world like human beings do. That would take time and much work, obviously, but perhaps recreating 2D games is a small step in the right direction.

The post An AI Can Recreate Your Favorite Old-School Video Game Just by Watching Someone Play It appeared first on Futurism.

What Do We Really Know About the Promise — and Potential Perils — of AI?

Value-Neutral Technology

The CNN headline read: “Forget North Korea, AI will start World War III,” and the ensuing conversation, started by Elon Musk, revealed that many fear the unintended consequences of developing algorithms we may or may not be able to control. Once a new technology is introduced, it can’t be uninvented, as Sam Harris points out in his viral TED talk. He argues that it’ll be impossible to halt the pace of progress, even if humankind could collectively make such a decision.

While Bill Gates, Stephen Hawking and countless others are broadly on the same page as Musk and Harris, some of the leading thinkers in the industry recognize that AI — like any other technology — is value-neutral. Gunpowder, after all, was first used in fireworks.

Ray Kurzweil, for instance, argues that “AI will be the pivotal technology in achieving [human] progress. We have a moral imperative to realize this promise while controlling the peril.” In his view, humanity has ample time to develop ethical guidelines and regulatory standards. As the world edges towards singularity, future technology is bound to enhance the human experience in some way — and it’s up to us to make sure it’s for the better.

The critics and cheerleaders of AI tend to agree on one thing: the explosion of artificial intelligence will change the world beyond recognition. When thinking about the future, I found the metaphor offered by Vernor Vinge on NPR’s Invisibilia podcast especially stark: “Making computers part of us, part of our bodies, is going to change our capabilities so much that one day, we will see our current selves as goldfish.” If this is the true expected extent of our tech-powered evolution, then our contemporary norms and conventions go straight out the window.

Even if these predictions turn out to be duds, shouldn’t we at least attempt to apply the prism of exponential technologies to review our basic assumptions, question fundamentals of human behavior, and scrutinize our societal organization? AI’s promise could be an apocalypse or eternal bliss — or anything in between. As we speculate about the outcome, we are also making a value judgment. It’s here we ought to recognize our susceptibility to projection bias — which compels us to apply present-day intellectual framing to the future.

By putting war and AI in the same sentence, we anthropomorphize the latter. When we worry about robots and machine intelligence causing mass unemployment, we must recognize that such anxiety is only justified if human labor remains an economic necessity. When we say that out-of-control technological progress will create more inequality, we assume that the ideas of private property, wealth, and money will survive the fourth industrial revolution.

It’s an arduous task to define the fundamental terms, much less to question them. But, perhaps, playing out a couple of scenarios could prove a useful exercise in circumventing projection bias.

Competition & Collaboration

Natural selection is, at its core, a multidimensional competition of traits and behaviors. It manifests itself in a basic competitive instinct that humans are all too familiar with. Evolutionary psychology postulates that the driver of human behavior is the need to perpetuate one’s genes. Homo sapiens, then, evolved competing for mates and fighting for resources to feed the resulting offspring, all with the singular objective of maximizing their genes’ chances of being passed on.


On the other hand, we are — according to Edward O. Wilson — “one of only two dozen or so animal lines ever to evolve eusociality, the next major level of biological organization above the organismic. There, group members across two or more generations stay together, cooperate, care for the young, and divide labor…” In other words, we might have to attribute the stunning success of our species to the fine balance we’ve maintained between competition and cooperation instincts.

Whether general machine intelligence is imminent or even achievable, the idea of post-scarcity economy is gaining ground. If and when the automation of pretty much everything creates a world where human labor is redundant, what will be the wider ramifications for our value system and societal organization? When the algorithms are better at decision-making than humans, and we surrender much of our autonomy to them, how will our competitive instinct fare?

What will be the point of resource competition in the world of abundance? Is it possible that our instinct to compete slowly evaporates as a useful construct? Could we evolve to live without it? Unlike ants and bees that cooperate on the basis of rigid protocols, humans are spectacularly adaptable in our cooperation abilities. According to Yuval Harari, that’s what ultimately underpinned the rise of sapiens to dominate the Earth. Is it conceivable that the need to compete turns into an atavism as the technological transformations described by Kurzweil begin to materialize?


How can we be sure that the basic pillars of our economic thinking (e.g. private property, ownership, capital, wealth, etc.) will survive post-scarcity? A hundred years from now, will anybody care about labor productivity? How relevant could our policies encouraging employment be when all of humanity is free-riding on the “efforts” of machines? What are we left with when the basics of supply and demand have been shattered?

To a gainfully employed person today, the prospect of indefinite leisure might appear more of a curse than a blessing. Viewed through the lens of natural selection, this sentiment makes sense: in the past, the economic contributions by those able to do so would have been preferable to a mass pursuit of idleness. But should we be projecting the same trend into the future? What may sound like decadence and decay to us now may be construed quite differently in the world no longer powered by presently known economic forces.


The working assumption is that no matter what, someone will have to own the machines and pay for goods and services. Yet property and money are nothing more than social constructs. If ownership is pointless and money is no longer a useful unit of exchange, how will we collectively define status?

The questions are plentiful and the answers few. I, for one, am in no position to offer concrete proposals or defend admittedly speculative arguments. The bottom line is that we are firmly on the path to subverting the forces of evolution — which have been, since the dawn of time, the main drivers of our behavior. While political and religious dogmas have come and gone, the most basic economic principle has remained: human needs and wants are satisfied through human effort. Those fundamental forces, however, are clearly threatened by the accelerating pace of technological progress, singularity notwithstanding.

The ideas presented here may sound utopian and naïve. In the end, Elon Musk might be right: the invention of AI could spell the end of the human race. It is humanity’s awesome responsibility, therefore, to design proper governance for artificial intelligence and think it through before we take the plunge. That said, we must be cognizant of the limits of our understanding and thus make use of our imagination, a distinctly human trait. At least for the time being.

Disclaimer: The views and opinions expressed are solely those of the author. They do not necessarily represent the views of Futurism or its affiliates.

The post What Do We Really Know About the Promise — and Potential Perils — of AI? appeared first on Futurism.

Following Elon Musk Letter, UK Government Plans to Ban Fully Autonomous Weapons

Heeding the Warning

It looks like warnings about the dangers of applying artificial intelligence (AI) in weapons development have not fallen on deaf ears. In response to the open letter to the United Nations, sent by 116 experts and led by serial entrepreneur Elon Musk, the government of Great Britain has decided to ban fully autonomous weapons and weapons systems. The news comes in an announcement made by the U.K. Ministry of Defense earlier this week.

Specifically, the British government’s ban extends to the development of weapons that can decide their own targets — yes, like those AI-powered missiles Russia is supposedly working on and those fully autonomous drones Russian arms developer Kalashnikov is building. Fully autonomous weapons are weapons that can “think” for themselves. This doesn’t include remotely operated drones and semi-autonomous defense systems, which armed forces from nations like the United States, South Korea, and even the U.K. currently employ.

For U.K. armed forces minister Mark Lancaster, deciding what to target is a responsibility suitable only for human soldiers. “It’s absolutely right that our weapons are operated by real people capable of making incredibly important decisions, and we are guaranteeing that vital oversight,” Lancaster said regarding the new doctrine.

The announcement was concurrent with the Defense and Security Equipment International show — one of the biggest weapons exhibitions in the world.

Human Oversight

The new doctrine published by the Ministry of Defense affirms that “human control of cutting-edge weaponry” is safer for both civilians and military personnel. Such weapons, the announcement said, “will always be under control as an absolute guarantee of human oversight and authority and accountability.” The doctrine also noted that the U.K. “does not possess fully autonomous weapon systems and has no intention of developing them.”

Developing fully autonomous weaponry is definitely appealing to military powers, and Russian President Vladimir Putin even said that he sees future wars being fought using such weapons. That’s as close as we could get to an open declaration of an AI arms race, as Putin himself said that whichever country leads in AI development — weapons included — “will be the ruler of the world.”

Even before Musk’s letter, AI experts warned about this possibility in an open letter released back in 2015 at the International Joint Conference on Artificial Intelligence (IJCAI). “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” the letter stated.

The open letter sent to the U.N. echoed the same warning, stating that such weapons “can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

Hopefully, the issue of weapons doesn’t take away from all the positive effects AI can bring. Human control and oversight are key, and as U.K. robotics expert Noel Sharkey told The Verge, he hopes the doctrine will translate to “human control of weapons in a meaningful and deliberative way.”

The post Following Elon Musk Letter, UK Government Plans to Ban Fully Autonomous Weapons appeared first on Futurism.

In the Age of AI, We Shouldn’t Measure Success According to Exponential Growth

Positive Psychology

Before we talk about technology, here’s an exercise. Close your eyes. Take a deep breath.

Picture someone you’re grateful for in your life.  Maybe it’s your partner, your son, or your daughter. Maybe it’s a friend at work or your favorite barista. Whoever they are, picture that person for a full sixty seconds in your mind and hold them in your heart while basking in the gratitude that they are part of your life right now. I’ll wait.

According to the science of positive psychology, engaging in exercises like this one increases your well-being. Beyond mood (typically called “hedonic happiness” in positive psychology), there is eudaimonic happiness, which comes from the Greek word for “flourish” and refers to a long-term satisfaction with life not based primarily on emotion.

It’s important to distinguish between these two aspects of happiness because they frame how we pursue well-being as individuals and as a society. As a rule, most of us clamor for the instant gratification of hedonic happiness. This tendency is reinforced when we’re framed in economic terms as “consumers,” implying our worth is based on purchases and productivity.

But the promise of future rewards keeps us from being present.

Defining Metrics of Success

Think back on the gratitude exercise. By slowing down and concentrating on a person in your life you increased your well-being. While it’s likely your mood improved, the larger logic behind the exercise is that it helps you savor what you already have right now.


Artificial Intelligence and Autonomous Systems (AI/AS), like any technologies, can bring great benefit or harm based on the context of their use. But while visions of our dystopian future may feature sentient robot overlords, we need to define the metrics of success we want society to prioritize for these technologies today.

The primary metric of value the world has prioritized since its introduction in 1937 by Simon Kuznets is Gross Domestic Product (GDP). While economic indicators may not seem to have direct relevance to your daily life, when the primary goal for a country is fiscal increase and exponential growth, those values also drive business. And while companies may create products that provide value and help customers’ lives, when they’re forced to prioritize exponential growth or shareholder profits, human well-being is diminished.

Widening Perspectives on Prosperity

As an example of this trend, here’s an excerpt from a World Economic Forum article titled “This company replaced 90% of its workforce with machines”: “It’s hard to argue against automation when statistics are clearly illustrating its potential. The latest evidence comes out of a Chinese factory in Dongguan City. The factory recently replaced 90 percent of its human workforce with machines, and it led to a staggering 250 percent increase in productivity and a significant 80 percent drop in defects.”

You can’t argue from a traditional business standpoint about the logic of widespread automation. While experts argue about which jobs won’t be replaced by automation or how people will work alongside machines rather than be replaced by them, one fact remains clear. There is no motivation to not automate every human skill while we continue to prioritize exponential growth as society’s primary metric of success.

Fortunately, numerous well-being indicators exist that widen the traditional perspective of prosperity. Measuring both subjective and objective data, these metrics complement existing fiscal measures to provide a portrait of how society can flourish more holistically in the future. As the Organisation for Economic Co-operation and Development (OECD) describes as part of its Measuring Well-Being and Progress research:

In recent years, concerns have emerged regarding the fact that macro-economic statistics, such as GDP, don’t provide a sufficiently detailed picture of the living conditions that ordinary people experience…Societal progress is about improvements in the well-being of people and households. Assessing such progress requires looking not only at the functioning of the economic system but also at the diverse experiences and living conditions of people.

Well-being Indicators also provide perspective on environmental issues along with societal and financial metrics. As former U.N. Secretary-General Ban Ki-moon noted in 2012: “[GDP] fails to take into account the social and environmental costs of so-called progress…We need a new economic paradigm that recognizes the parity between the three pillars of sustainable development–social, economic and environmental well-being are indivisible.”

Multiple businesses, non-profits, and Governments have worked to implement the Triple Bottom Line (TBL) mindset mirrored in Ban Ki-moon’s words. There are also now more than 2,100 Certified B Corps (for-profit companies certified to meet rigorous standards of social and environmental performance, accountability and transparency). Michael Porter and Mark Kramer defined Shared Value in their seminal Harvard Business Review article as a way for companies to drive profits that create sustainable societal benefits as a new form of innovation. The AI/AS Industry has the same opportunity but only by embracing established well-being indicators to move beyond exponential growth as society’s primary metric of success.

Okay. Practicing what I preach. Closing my eyes. For the people creating ethically-aligned, responsible AI/AS technologies prioritizing human well-being as society’s key metric of success: I’m grateful for you. Take all the time you need.

John C. Havens is the Executive Director of The IEEE Global AI Ethics Initiative that recently produced the report, Prioritizing Human Well-being in the Age of Artificial Intelligence. He is also the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines.

Disclaimer: The views and opinions expressed are solely those of the author. They do not necessarily represent the views of Futurism or its affiliates.

The post In the Age of AI, We Shouldn’t Measure Success According to Exponential Growth appeared first on Futurism.

Here’s Why Apple’s Custom GPU and A11 Bionic Chip Are Utterly Revolutionary

Apple Goes GPU

The new iPhone X was announced today, and for many Apple fans, this has been one of the most significant announcements the company has delivered in a long while. New generations of the iPhone have always accommodated slightly better hardware, bringing moderately upgraded capabilities like faster processors and better cameras. However, for the first time, Apple has created its own graphics processing unit (GPU) to help it soar above the competition, along with its super-fast new A11 Bionic six-core chip.

This pairing is going to revolutionize smartphones by bringing machine learning capabilities to the palm of your hand.

Such developments are going to greatly assist the optimization of Apple’s longstanding “intelligent assistant,” Siri. Apple has been building an ecosystem for Siri to inhabit across platforms—in devices ranging from smartphones and tablets, to home devices like the upcoming Apple HomePod.

And now, Siri will have the power to learn, making the platform better for all users. So look out, Alexa, Siri may have just been given the hardware to dominate the battle for smart assistant supremacy.

Image source: Apple

A Boost for Augmented Reality

The inclusion of its own GPU is also going to be a huge boost for augmented reality applications and development, not to mention battery life.

CNBC speculates that having a specialized chip on board will take work off the main chip, thus improving battery life. To help drive augmented reality applications forward, iOS 11 will come equipped with the ARKit framework for creating apps that can recognize their environment and place digital objects into it.

These chips will ensure that both developers and users have the power to make and run these impressive applications.

This all boils down to the new iPhone X being one of the most significant upgrades in the history of the line. Developers will have some impressive tools to create previously impossible smartphone capabilities.

The post Here’s Why Apple’s Custom GPU and A11 Bionic Chip Are Utterly Revolutionary appeared first on Futurism.

Expert: The U.S. Needs to Do More to Prepare for Autonomous Warfare

Arms Race

Modern warfare is set to undergo major changes, thanks to new technologies springing forth from the fields of artificial intelligence and robotics. As Jon Wolfsthal sees it, the US isn’t doing enough to ensure that these advances are made with the proper consideration.

Wolfsthal is a non-resident fellow at Harvard University’s Managing the Atom project, and at the Carnegie Endowment for International Peace. Between 2014 and 2017, he acted as the senior director for arms control and nonproliferation at the National Security Council, serving as a special assistant to President Barack Obama.

In a guest post submitted to DefenseNews, Wolfsthal argues that while AI and autonomous weapons stand to improve national security and mitigate the risks taken by servicemen and women, the need to compete with other technologically advanced nations is resulting in a lack of oversight.

Neither the government nor the general public seems interested in having a serious discussion about the ethical ramifications and the legal basis of developing these programs, says Wolfsthal. As a result, bodies like the Department of Defense are focusing on what they can create, rather than whether they should.

He suggests that the National Security Council needs a better process for assessing the technologies the US wants to pursue, and what’s being investigated by other nations. He adds that Congress should be more proactive in developing policy, and that the Senate and House Armed Services committees should be fostering debate and discussion. Wolfsthal also criticizes President Trump for failing to staff the White House’s Office of Science and Technology Policy, a decision he describes as “unconscionable.”

Risk and Reward

“The possible advantages to the United States are endless,” writes Wolfsthal. “But so too are the risks.” AI and autonomous weapons aren’t necessarily something that the military should shy away from — adoption of these technologies seems like something of a foregone conclusion — but they need to be implemented with care and consideration.

This stance mirrors the one taken by Elon Musk. The Tesla and SpaceX CEO has made no secret of his concerns about AI. However, last month he clarified his position, stating that the technology offers up huge benefits if we can avoid its most perilous pitfalls.

Now is the time for these discussions to take place. We’re already seeing drones employed by the US Army, even if the hardware is sometimes imperfect. Meanwhile, Russia is thought to be developing missiles that make use of AI, and China is working on its own intelligent weapons systems.

It might seem like an exaggeration to compare the advent of AI and autonomous weapons to the introduction of nuclear weaponry, but there are some broad similarities. These are instruments of death that can be used at long range, reducing the risk of friendly casualties.

It is likely naive to think that there’s still an opportunity to reverse course and curb the implementation of these technologies in a military context. At this point, the priority has to be making sure that we don’t allow these advances to be utilized recklessly. Like nuclear armaments, these technologies stand to completely revolutionize the way nations go to war. And before a technologically augmented conflict begins in earnest, it would be wise for the government and the public to figure out where they stand on how these weapons are wielded.

The post Expert: The U.S. Needs to Do More to Prepare for Autonomous Warfare appeared first on Futurism.

We Don’t Want AI That Can Understand Us – We’d Only End up Arguing

Machine Intelligence

Forget the Turing test. Computing pioneer Alan Turing’s most pertinent thoughts on machine intelligence come from a neglected paragraph of the same paper that first proposed his famous test for whether a computer could be considered as smart as a human.

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

Turing’s 1950 prediction was not that computers would be able to think in the future. He was arguing that, one day, what we mean when we talk about computers thinking would morph in such a way that it would become a pretty uncontroversial thing to say. We can now see that he was right. Our use of the term has indeed loosened to the point that attributing thought to even the most basic of machines has become common parlance.

Image Source: Blutgruppe/Corbis

Today, advances in technology mean that understanding has become the new thought. And again, the question of whether machines can understand is arguably meaningless. With the development of artificial intelligence and machine learning, there already exists a solid sense in which robots and artificial assistants such as Microsoft’s Cortana and Apple’s Siri are said to understand us. The interesting questions are just what this sense is and why it matters what we call it.

Defining Understanding

Deciding on how to define a concept is not the same as making a discovery. It’s a pragmatic choice (usually) based on empirical observations. We no more discover that machines think or understand than we discover that Pluto isn’t a planet.

In the case of artificial intelligence, people often talk of 20th-century science fiction writers such as Isaac Asimov as having had prophetic visions of the future. But they didn’t so much anticipate the thought and language of contemporary computing technology as directly influence it. Asimov’s Three Laws of Robotics have been an inspiration to a whole generation of engineers and designers who talk about machines that learn, understand, make decisions, have emotional intelligence, are empathetic and even doubt themselves.

This vision enchants us into forgetting the other possible ways of thinking about artificial intelligence, gradually eroding the nuance in our definitions. Is this outweighed by what we gain from Asimov’s vocabulary? The answer depends on why we might want understanding between humans and machines in the first place. To handle this question we must, naturally, first turn to bees.

As the philosopher of language Jonathan Bennett writes, we can talk about bees having a “language” they use to “understand” each other’s “reports” of discoveries of food. And there is a sense in which we can speak – without quote marks even – of bees having thought, language, communication, and understanding and other qualities we usually think of as particularly human. But think what a giant mess the whole process would be if they were also able to question each other’s motives, grow jealous, become resentful, and so on like humans.

A similar disaster would occur if our sat-nav devices started bickering with us, like an unhappy couple on holiday, over the best route to our chosen destination. The ability to understand can seriously interfere with performance. A good hoover doesn’t need to understand why I need more powerful suction in order for it to switch to turbo mode when I press the appropriate button. Why should a good robot be any different?

Understanding Isn’t (Usually) Helpful

One of the key things that makes artificial personal assistants such as Amazon’s Alexa useful is precisely the fact that our interactions with them could never justify reactive attitudes on either side. This is because they are not the sort of beings that could care or be cared about. (We may occasionally feel anger towards a machine, but it is misplaced.)

We need the assistant’s software to have accurate voice-recognition and be as sensitive to the context of our words as possible. But we hardly want it to be capable of understanding – and so also misunderstanding – us in the everyday ways that could produce mutual resentment, blame, gratitude, guilt, indignation, or pride.

Only a masochist would want an artificial PA that could fall out with her, go on strike, or refuse to update its software.

The only exception in which we might conceivably seek such understanding is in the provision of artificial companions for the elderly. As cognitive scientist Maggie Boden warns, it is emotionally dangerous to provide care-bots that cannot actually care but that people could become deeply attached to.

The aim of AI that understands us as well (or as badly) as we understand one another sounds rather grand and important, perhaps the major scientific challenge of the 21st century. But what would be the point of it? We would do better to focus on the other side of the same coin and work towards having a less anthropocentric understanding of AI itself. The better we can comprehend the way AI reasons, the more useful it will be to us.

The post We Don’t Want AI That Can Understand Us – We’d Only End up Arguing appeared first on Futurism.

Elon Musk’s Research Venture Has Trained AI To Teach Itself

Boosting Performance

As part of its effort to find better ways to develop and train “safe artificial general intelligence,” OpenAI has been releasing its own versions of reinforcement learning algorithms. They call these OpenAI Baselines, and the most recent additions to these algorithms are two baselines that are meant to enhance machine learning performance by making it more efficient.

The first is a baseline implementation called Actor Critic using Kronecker-factored Trust Region (ACKTR). Developed by researchers from the University of Toronto (UofT) and New York University (NYU), ACKTR improves on the way AI policies perform deep reinforcement learning — learning that is accomplished only by trial and error, and obtained only through raw observation. In a paper published online, the UofT and NYU researchers used simulated robots and Atari games to test how ACKTR learns control policies.

“For machine learning algorithms, two costs are important to consider: sample complexity and computational complexity,” according to an OpenAI Research blog. “Sample complexity refers to the number of timesteps of interaction between the agent and its environment, and computational complexity refers to the amount of numerical operations that must be performed.” ACKTR is able to perform deep reinforcement learning faster by improving both sample and computational complexities.

Improving Learning

Usually, machine learning algorithms are taught by feeding them tons of data. In deep reinforcement learning, AI policies instead adjust and learn from raw inputs, working on their own by trial and error to earn rewards. Using ACKTR and another baseline called A2C, the researchers at OpenAI managed to improve how deep reinforcement learning is done.

Agents trained with ACKTR attain higher scores in a shorter amount of time than those trained with other algorithms, such as A2C.

While ACKTR focuses on reducing the number of steps it takes for an AI to interact with an environment, A2C improves the efficiency of processor use by performing reinforcement learning with batches of AI agents. “One advantage of this method is that it can more effectively use … GPUs, which perform best with large batch sizes. This algorithm is naturally called A2C, short for advantage actor critic,” they wrote. “This A2C implementation is more cost-effective than A3C when using single-GPU machines, and is faster than a CPU-only A3C implementation when using larger policies.”
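To make the “advantage” that gives A2C its name concrete, here is a minimal sketch in plain Python. It is a generic illustration of the idea — discounted returns minus the critic’s value estimates — not OpenAI’s implementation; the function names are invented for clarity.

```python
# Generic sketch of the "advantage" in advantage actor critic (A2C).
# The policy gradient is weighted by how much better each action turned
# out than the critic's value estimate, which reduces gradient variance.

def discounted_returns(rewards, gamma=0.99):
    """Discounted return G_t = r_t + gamma * G_{t+1} for each timestep."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return returns[::-1]

def advantages(rewards, values, gamma=0.99):
    """Advantage A_t = G_t - V(s_t): how much better than expected."""
    return [g - v for g, v in zip(discounted_returns(rewards, gamma), values)]
```

With a reward only at the final step, `discounted_returns([0, 0, 1], gamma=0.5)` yields `[0.25, 0.5, 1.0]`; if the critic predicted exactly those values, every advantage is zero and the policy update vanishes.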

This is the latest addition to OpenAI’s work on developing AI policies and agents that learn better. One of its recent successes was in developing an AI that could play the video game Defense of the Ancients (Dota) 2. Like DeepMind’s AlphaGo, OpenAI’s Dota-playing agent was able to defeat its human opponents in a game that’s considerably more complex than an ancient Chinese board game.

These achievements notwithstanding, OpenAI continues to work in keeping with how its co-founder Elon Musk views AI — i.e., with great caution. Musk has been an advocate of developing safe AI and has even called for sound policies to regulate it. OpenAI is his way of contributing directly to that need.

The post Elon Musk’s Research Venture Has Trained AI To Teach Itself appeared first on Futurism.

How to Make AI Forget

Forgetting is Tricky Business

We all know what it’s like to forget something. A loved one’s birthday. A childhood memory. Even people capable of extraordinary memory feats – say, memorising the order of a deck of cards in less than 20 seconds – will still forget where they left their keys. People, it seems, are never in complete control of their memories.

Forgetting is a tricky business, both for humans and for artificial intelligence (AI), and researchers are exploring the idea of robot memory in many different ways.

This raises not only technical issues, but concerns related to privacy, law and ethics. Imagine if your household robot witnessed you having a sneaky cigarette despite you promising your spouse that you had quit smoking. What if it saw you commit a murder?

It’s an important question: who, if anyone, should have the power to make a robot forget what it witnessed? But first, researchers need to work out the best way to make AI forget in the first place.

Why Do People Forget?

A popular metaphor to explain why people forget is that our brains become full, and thus we forget things to “make space”.

Yet some people have a rare condition called “hyperthymesia”, which allows them to remember almost every detail of their lives. This suggests that the idea of “fullness” is not the complete story.

So if we don’t forget things to make room for new memories, then why do we forget? One explanation is that memories help us understand the world, rather than merely record it. In this way, we seem to retain memories that are useful, valuable and relevant, while forgetting information of lower value.

For example, some studies suggest that people can be better at remembering conflicting information than repetitive information. Other factors include the importance and novelty of the event, as well as our emotions and mood at the time of the experience. Consider September 11, 2001 – many of us remember vividly where we were and what we were doing on that day.

How Do Robots Forget?

In computing, “memory” typically describes both a machine’s capacity to store information for later recall and the physical components in which that information is stored.

For example, a computer’s working memory “forgets” data when it is no longer needed for a task, freeing up computational resources for other tasks.

This also applies to AI, but while forgetting something might frustrate us, the way in which we forget is what still makes people superior to AI. Machine learning algorithms in particular are poor at knowing when to keep old information and when to discard what is outdated.

For example, connectionist AI (AI that often uses neural networks modelled on the structure of the brain) faces several problems related to “forgetting”. These include over-fitting, which is when a learning machine stores overly detailed information from past experiences, hindering its ability to generalise and predict future events.

Another problem is “catastrophic forgetting”. Researchers are trying to build artificial neural networks that can appropriately adjust to new information without abruptly forgetting what they learned before.

Finally, sometimes the neurons of an artificial neural network adopt undesirable activation patterns early in the learning process, damaging the future learning ability of the AI.

An alternative approach to storing memories in robots is symbolic memory representations where knowledge is represented by logical facts (“birds fly”, “Tweety is a bird”, so therefore, “Tweety can fly”). These highly structured human-created representations can be easily deleted, just like deleting a file on a computer.
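A toy version of such a symbolic store, with the “Tweety” example and explicit deletion, might look like the following. The class and method names here are hypothetical, invented purely for illustration:

```python
class SymbolicMemory:
    """Toy knowledge base: facts are discrete entries that can be
    queried (with one step of rule-chaining) and deleted outright."""

    def __init__(self):
        self.facts = set()   # ground facts, e.g. ("bird", "tweety")
        self.rules = []      # (premise, conclusion), e.g. ("bird", "can_fly")

    def add_fact(self, predicate, subject):
        self.facts.add((predicate, subject))

    def add_rule(self, premise, conclusion):
        # "everything that is <premise> is also <conclusion>"
        self.rules.append((premise, conclusion))

    def query(self, predicate, subject):
        if (predicate, subject) in self.facts:
            return True
        # one forward-chaining step: derive the fact from a rule
        return any(conclusion == predicate and (premise, subject) in self.facts
                   for premise, conclusion in self.rules)

    def forget(self, predicate, subject):
        # forgetting is as simple as deleting the entry
        self.facts.discard((predicate, subject))
```

After `forget("bird", "tweety")`, the derived fact that Tweety can fly disappears along with its premise; deletion is immediate and complete, unlike unlearning in a neural network.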

These memories can range in fidelity from raw sensorimotor data (a recording from a camera) to logical facts stored in a knowledge base (“Christmas Day is the 25th of December”).

What Should Robots Forget?

Understanding how our brains decide what is worth remembering and what is worth forgetting is important for creating better AI.

Just like people, AI should remember important and useful information, while forgetting low value, irrelevant knowledge. However, determining what is relevant and valuable may include factors besides the task at hand, such as questions of ethics, law and privacy.
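One simple way to mimic value-based forgetting in software is a bounded memory that always evicts its lowest-value entry first. This is a hypothetical sketch, and the notion that a single “value” score captures relevance is itself an oversimplification:

```python
import heapq

class ValueMemory:
    """Bounded memory that forgets the lowest-value entries first,
    loosely mimicking 'retain the useful, drop the low-value'."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []          # (value, item) min-heap: cheapest on top

    def remember(self, item, value):
        heapq.heappush(self._heap, (value, item))
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)   # forget the least valuable memory

    def recall(self):
        return {item for _, item in self._heap}
```

Here, remembering a third item in a two-slot memory silently forgets whichever entry was assigned the lowest value.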

Chatbots make medical diagnoses, smart home devices monitor our movements, and security robots perform patrols with video cameras and thermal imaging. That’s a lot of stored data.

Amazon’s home assistant Echo, for example, is a voice-controlled hands-free speaker that is always listening for a command prompt. Arkansas police recently demanded that Amazon turn over information apparently collected from a murder suspect’s Echo.

Alternatively, consider the AI in sex bots. Should sex bots remember or forget their clients, and what those clients did with them? Who owns the robot’s data, and who can view it and delete it?

When it comes to memories, deciding when a robot should forget is a profoundly human challenge.

The post How to Make AI Forget appeared first on Futurism.

Elon Musk Leads AI Experts With Letter Urging UN to Consider Threat of Autonomous Weapons

A Clear Danger

Elon Musk has long been warning us against the dangers he believes to be inherent to unregulated artificial intelligence (AI) development. He’s called the threat humankind’s biggest risk, and even said that it’s greater than any threat posed by North Korea. While some AI experts have criticized Musk for this, the OpenAI co-founder is hardly the only one in the industry who has offered warnings about the potential danger of AI systems.

In fact, 115 other experts — including DeepMind co-founder Mustafa Suleyman – have joined Musk in calling for stronger regulation for AI. “As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm,” the group wrote in an open letter to the United Nations’ Convention on Certain Conventional Weapons (CCW). “Lethal autonomous weapons threaten to become the third revolution in warfare.”

The UN has just created the Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems (LAWS), which will discuss and study the implications of modern weapons powered by AI. Musk, Suleyman, and the other experts urge the UN to act decisively and clearly on the matter, calling on it to “work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.”

Misusing AI

The experts, obviously, aren’t against developing AI — after all, they’re all involved in AI work across 26 countries. The problem is how AI is used: the group is wary of the technology being used to build autonomous weapon systems, a trend that has already begun.

“Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” Clearpath Robotics founder Ryan Gariepy, one of the signatories, told The Guardian.

Their letter continues: “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

And as Musk has been saying about AI all along — now is the best time to implement all the necessary regulation. “We do not have long to act,” he said earlier this month. “Once this Pandora’s box is opened, it will be hard to close.”

The post Elon Musk Leads AI Experts With Letter Urging UN to Consider Threat of Autonomous Weapons appeared first on Futurism.

The World’s First Album Composed and Produced by an AI Has Been Unveiled

A New Kind of Composer

“Break Free” is the first song released from a new album by Taryn Southern. The song, and indeed the entire album, features an artist known as Amper, but what looks like a typical collaboration between artists is actually much more than that.

Taryn is no stranger to the music and entertainment industry. She is a singer and digital storyteller who has amassed more than 500 million views on YouTube, and she has over 450 thousand subscribers. On the other hand, Amper is making his debut…except he’s (it’s?) not a person.

Amper is an artificially intelligent music composer, producer, and performer. The AI was developed by a team of professional musicians and technology experts, and it’s the very first AI to compose and produce an entire music album. The album is called I AM AI, and the featured single is set for release on August 21, 2017.

Check out the song “Break Free” in the video below:

As film composer Drew Silverstein, one of Amper’s founders, explained to TechCrunch, Amper isn’t meant to act totally on its own, but was designed specifically to work in collaboration with human musicians: “One of our core beliefs as a company is that the future of music is going to be created in the collaboration between humans and AI. We want that collaborative experience to propel the creative process forward.”

That said, the team notes that, contrary to the other songs that have been released by AI composers, the chord structures and instrumentation of “Break Free” are entirely the work of Amper’s AI.

Not Just Music Production

Ultimately, Amper breaks the model followed by today’s music-making AIs. Usually, the original work done by the AI is largely reinterpreted by humans, meaning humans are really doing most of the legwork. As the team notes in their press release, “the process of releasing AI music has involved humans making significant manual changes—including alteration to chords and melodies—to the AI notation.”

That’s not the case with Amper. As previously noted, the chord structures and instrumentation are purely Amper’s; it simply works with manual inputs from the human artist on style and overall rhythm.

And most notably, Amper can make music through machine learning in just seconds. Here’s an example of a song made by Amper, and re-arranged by Taryn.

Yet while I AM AI may be the first album that’s entirely composed and produced by an AI, this isn’t the first time an AI has displayed creativity in music or in other arts.

For example, an AI called Aiva has been taught to compose classical music, much as DeepBach was designed to create music inspired by the Baroque composer Johann Sebastian Bach. With this in mind, the album is likely just the first step into a new era — an era in which humans will share artistry (and perhaps even compete creatively) with AI.

Editor’s Note: This article has been updated to clarify what songs were made by Amper and rearranged by Taryn. 

The post The World’s First Album Composed and Produced by an AI Has Been Unveiled appeared first on Futurism.

Head of IBM Watson Says AI Will Augment Human Beings

Amplifying Human Reach

PC Mag recently interviewed Rob High, IBM Watson’s Vice President and Chief Technology Officer. Thanks to High’s experience with Watson, IBM’s artificial intelligence (AI) supercomputer, he is one of the preeminent thinkers in the AI space. In his interview, High spoke about how technology, and AI in particular, is transforming jobs, culture, and life for humanity.

For High, one of the biggest misconceptions the public holds about AI is the sort of dystopian worldview we see in Hollywood and, in some cases, from other thinkers in the field.

He points out that AI is not replacing the human mind, but augmenting human intelligence and amplifying its reach: “[I]f you look at almost every other tool that has ever been created, our tools tend to be most valuable when they’re amplifying us, when they’re extending our reach, when they’re increasing our strength, when they’re allowing us to do things that we can’t do by ourselves as human beings.”

Watson is designed to leverage machine learning and massive data analysis at scale in service to humans and our enterprises. The system is available as a set of open APIs and SaaS products for use by just about anyone. Whereas Watson can and probably will take over some jobs that include many repetitive tasks, such as reviewing medical images, it will be doing so for the benefit of humanity.

In the medical field, Watson’s system helps doctors sift through huge quantities of data in order to make a diagnosis. High explained that this system democratizes expertise, capturing and distributing it all over the world so that doctors and patients everywhere benefit from the latest and best medical expertise available. And in cases where the AI does take over the tasks that include reviewing thousands of similar images for diagnostic purposes, it will be doing so in service to helping medical professionals do their jobs more effectively.

Just as advances in cars and later airplanes put some passenger railway workers out of work in the name of more efficient transportation, AI may take over tasks that humans really don’t need to be doing anymore. We can learn new tasks, and continue to teach our AIs.

Image Credit: IBM

Along these lines, High commented: “It does have some of the same dynamic that every tool that we’ve ever created in society. I like to say if you go back and look at the last 10,000 years of modern society since the advent of the agricultural revolution, we’ve been as a human society building tools, hammers, shovels, hydraulics, pulleys, levers, and a lot of these tools have been most durable when what they’re really doing is amplifying human beings, amplifying our strength, amplifying our thinking, amplifying our reach.”

Dueling Experts

Pay attention to the headlines about AI lately, and it would be easy to feel like smart computers might slowly be taking over the world. After all, artificial intelligence is already replacing guide dogs and restoring vision for the blind, investigating quantum physics, identifying suicidal behavior online, and preparing to drive our cars. It’s rivaling our abilities in everything from playing poker to creating art, and scientists are working to make it behave in ever more human ways. Yet experts and thinkers alike still disagree over whether this growth represents any real danger to humanity.

Lately, the difference in opinion on AI between Elon Musk and Mark Zuckerberg has made the press, in large part because the two are both so influential in tech circles and among the public more generally. Like Stephen Hawking before him, Musk has warned that AI may be the greatest threat humankind has ever faced, and has therefore devoted time and resources to making it safer for humans.

Zuckerberg has recently stated that he finds these views alarmist and irresponsible, and that he is optimistic about AI. For Zuckerberg, AI is a tool that can empower humans to do more than ever before. Other AI researchers have also characterized Musk’s views as over-dramatic.

High, apparently, would agree: “That’s really the way to think about this stuff, is that it will have its greatest utility when it is allowing us to do what we do better than we could by ourselves, when the combination of the human and the tool together are greater than either one of them would’ve been by themselves. That’s really the way we think about it. That’s how we’re evolving the technology. That’s where the economic utility is going to be.”

The post Head of IBM Watson Says AI Will Augment Human Beings appeared first on Futurism.

Broadway Hit “Hamilton” Has a Plan to Stop Bots From Buying up Tickets

The Scalper Sisters

This week, musical theater fans can increase their chances of scoring a ticket to “Hamilton” at face value by simply proving that they’re not a bot.

From now until Friday night at 6 PM, Ticketmaster users can opt to give the site permission to vet their purchase history; applicants who pass the screening will get access to an advance ticket sale on Monday, ahead of general availability on Tuesday.

Ticketmaster’s technology is known as Verified Fan, and it works by scouring a user’s purchase history to figure out whether they’re actually a fan looking to see a particular show or more likely just a bot being used by a scalper to make a quick buck.
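Ticketmaster has not published how Verified Fan works, so any concrete example is speculative. Purely as an illustration, a purchase-history screen of this kind could be a simple scoring heuristic; the features and thresholds below are invented:

```python
# Hypothetical bot screen based on purchase history. The features and
# thresholds are invented for illustration; the real Verified Fan
# system is proprietary and certainly far more sophisticated.

def looks_like_fan(history):
    """history: list of dicts with keys 'event', 'resold' (bool),
    and 'minutes_after_onsale' (how fast the purchase was made)."""
    if not history:
        return False                     # no track record to verify
    # scalpers flip most of what they buy
    resale_rate = sum(p["resold"] for p in history) / len(history)
    # bots tend to buy the instant sales open
    instant_buys = sum(p["minutes_after_onsale"] < 1 for p in history)
    return resale_rate < 0.5 and instant_buys < len(history)
```

A real system would combine many more signals (payment methods, IP patterns, account age) and use a learned model rather than fixed cutoffs.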

“Hamilton” isn’t the only show on Broadway that’s making use of this technology. “Harry Potter and the Cursed Child,” as well as Bruce Springsteen’s upcoming one-man show, both take advantage of Verified Fan. However, those productions are using the service for all individual ticket sales, rather than the more limited implementation attached to Lin-Manuel Miranda’s hit show.

Not Gonna Give up My Bot

Bots have changed the game for ticket scalpers in recent years. With the right software, they can purchase tickets for an enormously popular show like “Hamilton” ahead of anyone else. This is a problem because it inevitably leads to artificially inflated prices on the resale market.

Bots are such an effective tool for scalping tickets for pretty much the same reason they plague services like Twitter. It’s easy to teach them how to perform simple tasks, such as signing up for tickets or sending a mean tweet. They make the whole process anonymous, so the real culprit can’t get in trouble. However, if Ticketmaster’s Verified Fan technology catches on, scalpers will be forced to find a new way to get their hands on tickets, which could make it easier to catch them in the act.

The post Broadway Hit “Hamilton” Has a Plan to Stop Bots From Buying up Tickets appeared first on Futurism.

The Most Important Question Underlying Artificial General Intelligence Research – Is Math Real?

The Fundamental Question

Will artificial intelligence replace humanity? The answer rests on one fundamental question – do numbers, and the math we use to do things with them, really represent reality?

We assume that numbers and math are real because we can do things in the real world with them. We use them to build computers and launch rockets. But now we are trying to create thinking machines using those same numbers and math. The problem is that thinking might be a fundamentally different thing from everything else in the world, one which can’t be expressed through the numbers and math we have.

I asked a couple of AI researchers to watch the video below and answer the following question…

The mathematician in the video indicates that numbers and mathematics, which are the basis of our attempts to compute consciousness and create sentience (AGI), are incomplete representations of reality and might be insufficient to actually create machines that think. What are your thoughts?

Igal Raichelgauz (CEO and co-founder of Cortica, an AI company)

“I think his point is that the “representation” is not the “thing” itself. 

The interesting thing is that it’s definitely true and intuitive when the “thing” is something material, like an object or an action. Of course, “a cup” is not its 2D projection, it’s even not its 3D HD image.

But, the intention and probably the validity of this statement breaks down when the “thing” is something abstract. If the abstract thing lives in the world of information, then the laws of information apply to it. Meaning X=Y if both are the same kind of information, even if their representations are different, even if we first defined X and only then Y.

If we assume that our brains are information then why can’t we pass this information to computers?

Consciousness, in my opinion is Subjective experience. We don’t have math for Subjective experiences. One could claim that Math doesn’t work here.

I would start the research here with the simplest subjective experience – pain (simple physical pain). Can we make the computer feel pain? How can we be sure that it actually feels pain? Is pain information? I’m not sure that science has good answers to these questions. Maybe those are exactly the boundaries of our brains – they can’t understand what subjective experience is, nor even what pain is.”

Joseph Geraci (Data Scientist, Mathematician and Medical Scientist)

“I will address his representation argument which essentially says that what we are doing is using representations of things that really ARE and the representation is essentially “weaker” than the real thing. On this he is correct even from a computational perspective because a vector representation of brain correlations is definitely not the actual thing. I have done this many times with fMRIs from humans. So I will say that being confined to the representations we are using now, we will be able to do some interesting things that approximate reality but we will not be able to mimic the human experience – nor a dog’s.

However, if you consider the space occupied by a human brain – the actual 3D space it occupies, we know that it is nowhere close to utilizing all the information processing ability of that space. Only black hole matter has that capacity. Thus the human brain is just, in some sense, a representation of some potentially superior information processing system. The reason this is true is that black hole matter has way more microstates to capture and process information. What we are going to eventually be able to do – is not muck around with black hole matter – that is ridiculous – but we are going to be able to develop representations of quantum events by actually simulating a quantum system with analog or digital quantum computers. This means in a real way that we will be able to actually have access to the information processing potential of that brain’s 3D space and to be able to have representations of chemical reactions at a fundamental level. Thus, for now we are stuck with linear algebra but eventually we are going to build more sophisticated representations to play with.”

If it is true that the math and numbers we currently use to represent the universe are inadequate then our attempts to create thinking machines may be a distant dream.

Joseph Geraci adds: “There is still much of mathematics that remains untapped, so it is hard to know if there will ever be a way to leverage numbers and math to achieve the dream of true AI.”

For more head-scratching, read Eugene Wigner’s seminal paper on the topic, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.”

The post The Most Important Question Underlying Artificial General Intelligence Research – Is Math Real? appeared first on Futurism.

Will AI Blur the Lines Between Physical and Virtual Reality?

The Notion of Reality

As technologies like artificial intelligence (AI), augmented and virtual reality (AR/VR), big data, 5G, and the internet of things (IoT) advance over the next generation, they will reinforce and spur one another. One plausible scenario is a physical world so enhanced by personalized, AI-curated digital content (experienced with what we today call augmented reality) that the very notion of reality is called into question.

Immersion can change how we interact with content in fundamental ways. For example, a fully immersive AR environment of the future, achieved with a wide-field-of-view headset and full of live content integrated with the built environment, would be intended by design to create in the user an illusion that everything being sensed was “real.” The evolution toward this kind of environment raises a host of ethical questions, specifically with attention to the AI that would underlie such an intelligent and compelling illusion.

When watching a movie, the viewer is physically separated from the illusion. The screen is framed, explicitly distinct from the viewer. The frame is a part of traditional art forms; from the book to the painting to the skyscraper, each is explicitly separated from the audience. It is bounded and physically defined.

But with digital eyewear, things change. Digital eyewear moves the distance of digital mediation from the screen (approximately 20 feet) to the human face, which is at zero distance, and almost eliminates the frame. It starts raising inevitable questions about what constitutes “reality” when much of one’s sensory input is superimposed on the physical world by AI. At that stage of the technology’s evolution, one could still simply opt out by removing the eyewear. Although almost indistinguishable from the physical world, that near-future world would still be clinging precariously to the human face.

The next step would be moving the source of the digital illusion into the human body – a distance of less than zero – through contact lenses, implants, and ultimately direct communication. At that point, the frame is long gone. The digital source commandeers the senses, and it becomes very hard to argue that the digital content isn’t as “real” as a building on the corner – which, frankly, could be an illusion itself in such an environment. Enthusiasts will probably argue that our perception is already an electrochemical illusion, and implants merely enhance our natural selves. In any case, opting out would become impractical at best. This is the stage of the technology that will raise practical questions we have never had to address before.

What Happens to Individual Choice?

At that point, what is real? How much agency are we humans deprived of when we are making decisions based on AI-generated content and guidance that may or may not be working at cross-purposes to our needs? How would we even know? In the longer term, what happens to our desire to control our own lives when we get better outcomes by letting those decisions be made by AI? What if societal behavior became deliberately manipulated for the greater good, as interpreted by one entity? If efficiency and order were to supersede all other criteria as ideal social values, how could an AI-driven AR capability be dissuaded from manipulating individual behavior to those ends? What happens to individual choice? Is a person capable of being good without the option to be bad?

Perhaps the discussion surrounding the next generation of AI-informed AR could consider the possibility that the ethical questions change as the source of digital content gets closer to the human body and ultimately becomes a part of it. It’s not simply a matter of higher-fidelity visuals. First, the frame disappears, which raises new questions of illusion and identity. Then, the content seems to come from within the body, which diminishes the possibility of opting out and raises further questions about agency and free will.

This combination of next-generation technologies might well find its ultimate expression after we have collectively engaged questions of philosophy and brought them right into the worlds of software development and corporate strategy.

Movies, advertising, and broadcasting have always been influential, but there was never the confusion between content and self that we will likely see in the next generation. Having these conversations about ethics and thinking through the implications of new technologies early in their development (i.e. right now) could help guide this remarkable convergence in a way that benefits humanity by modeling a world that reflects our best impulses.

Jay Iorio is the Innovation Director for the IEEE Standards Association. 

Disclaimer: The views and opinions expressed are solely those of the author. They do not necessarily represent the views of Futurism or its affiliates.

The post Will AI Blur the Lines Between Physical and Virtual Reality? appeared first on Futurism.

Elon Musk Reminds Us of the Possible Dangers of Unregulated AI

“The Machines Will Win”

Late Friday night, Elon Musk tweeted a photo reigniting the debate over AI safety. The tongue-in-cheek post contained a picture of a gambling-addiction ad stating “In the end the machines will win” — referring, not so subtly, to more than just gambling machines. On a more serious note, Musk said that the danger AI poses is a greater risk than the threat posed by North Korea.


In an accompanying tweet, Musk elaborated on the need for regulation in the development of artificially intelligent systems. This echoes his remarks earlier this month, when he said, “I think anything that represents a risk to the public deserves at least insight from the government, because one of the mandates of the government is the public well-being.”


From scanning the comments on the tweets, it seems that most people agree with Musk’s assessment — to varying degrees of snark. One user, Daniel Pedraza, expressed a need for adaptability in any regulatory efforts. “[We] need a framework that’s adaptable – no single fixed set of rules, laws, or principles that will be good for governing AI. [The] field is changing and adapting continually and any fixed set of rules that are incorporated risk being ineffective quite quickly.”

Many experts are leery of developing AI too quickly. The possible threats it could pose may sound like science fiction, but they could ultimately prove to be valid concerns.

Hold the Skynet

Experts like Stephen Hawking have long warned about the potential for AI to destroy humanity. In a 2014 interview, the renowned physicist stated that “The development of artificial intelligence could spell the end of the human race.” Even more, he sees the proliferation of automation as a detrimental force to the middle class. Another expert, Michael Vassar, chief science officer of MetaMed Research, stated: “If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order.”

It’s clear, at least in the scientific community, that unfettered development of AI may not be in humanity’s best interest. Efforts are already underway to formulate rules to ensure the development of “ethically aligned” AI. The Institute of Electrical and Electronics Engineers has presented the first draft of guidelines it hopes will steer developers in the right direction.

“The development of artificial intelligence could spell the end of the human race.” — Stephen Hawking

Additionally, the biggest names in tech are also coming together to self-regulate before government steps in. Researchers and scientists from large tech companies like Google, Amazon, Microsoft, IBM, and Facebook have already initiated discussions to ensure that AI is a benefit to humanity and not a threat.

Artificial intelligence has a long way to go before it is anywhere near advanced enough to pose a threat, but progress is moving forward by leaps and bounds. One expert, Ray Kurzweil, predicts that computers will be smarter than humans by 2045 — a paradigm shift known as the Singularity. He does not, however, think that this is anything to fear. Perhaps tech companies self-policing will be enough to ensure those fears are unfounded, or perhaps the government’s hand will ultimately be needed. Whichever way you feel, it’s not too early to begin having these conversations. In the meantime, though, try not to worry too much — unless, of course, you’re a competitive gamer.

The post Elon Musk Reminds Us of the Possible Dangers of Unregulated AI appeared first on Futurism.

Elon Musk’s OpenAI Created a Bot That Can Beat the World’s Best eSports Players

An AI World First

eSports are growing in popularity and quickly becoming a surprisingly lucrative sport. eSports are a form of competition where “the action” takes place in an electronic system, usually a video game. There are a ton of formal competitions and tournaments for a wide range of games, such as StarCraft 2, Overwatch, and Defense of the Ancients (Dota) 2.

One way that developers of artificially intelligent systems teach their creations is through gaming. Google’s AI, AlphaGo, became so advanced at playing the ancient game of Go that it was able to beat whatever human master was thrown at it. This was quite the achievement, as Go involves far more complexity than games like chess — which IBM’s Deep Blue computer system mastered when it defeated world champion Garry Kasparov in 1997.

Now, AI systems are coming for the best that eSports have to offer. Elon Musk’s OpenAI software has become the first AI to beat the world’s greatest eSports athletes. Musk took to Twitter to announce the achievement.


Global Domination

OpenAI published an accompanying blog post explaining the significance of what they’ve built. “Dota 1v1 is a complex game with hidden information. Agents must learn to plan, attack, trick, and deceive their opponents. The correlation between player skill and actions-per-minute is not strong, and in fact, our AI’s actions-per-minute are comparable to that of an average human player.”

The software taught itself how to play the game by playing itself. “Our bot has learned—entirely via self-play—to predict where other players will move, to improvise in response to unfamiliar situations, and how to influence the other player’s allied units to help it succeed.”
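OpenAI hasn’t published the bot’s internals in this post, but the self-play idea itself can be shown at toy scale. The sketch below is illustrative only — the game (Nim), the `train` function, and every parameter are invented for this example, and tabular Monte Carlo learning stands in for OpenAI’s far larger system. A single value table plays both sides of the game and improves purely from its own matches:

```python
import random

# Self-play at toy scale: tabular Monte Carlo learning on Nim.
# Rules: a pile of stones; each turn a player removes 1-3 stones;
# whoever takes the last stone wins. Both "players" share one value
# table and improve purely by playing against themselves.

ACTIONS = (1, 2, 3)

def train(episodes=30000, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {}       # (pile, action) -> average return for the player to move
    visits = {}  # (pile, action) -> visit count, for incremental averaging
    for _ in range(episodes):
        pile = rng.randint(1, 15)
        history = []  # moves in order; the players alternate each ply
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            if rng.random() < epsilon:        # explore
                a = rng.choice(legal)
            else:                             # exploit current estimates
                a = max(legal, key=lambda x: q.get((pile, x), 0.0))
            history.append((pile, a))
            pile -= a
        # The player who took the last stone won: credit the winner's
        # moves with +1 and the loser's with -1, walking backwards.
        reward = 1.0
        for sa in reversed(history):
            n = visits.get(sa, 0) + 1
            visits[sa] = n
            q[sa] = q.get(sa, 0.0) + (reward - q.get(sa, 0.0)) / n
            reward = -reward  # the previous ply was the other player's
    return q

def best_move(q, pile):
    legal = [a for a in ACTIONS if a <= pile]
    return max(legal, key=lambda a: q.get((pile, a), 0.0))

q = train()
```

With enough games the shared table converges toward the classic Nim strategy of always leaving the opponent a multiple of four stones — knowledge no one programmed in explicitly, which is the essence of learning "entirely via self-play."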

OpenAI has a history of using novel approaches to train its AI. Last year, they let their system loose on Reddit, where it processed nearly two billion comments. It may be unclear what is in store for the future of OpenAI and other AI projects, but professional gamers are seemingly out of luck for a while.

Six Life-Like Robots That Prove The Future of Human Evolution is Synthetic

Humanoid robots have come eerily close to overcoming the uncanny valley. With the right features in place, they are almost indistinguishable from their organic counterparts. Almost. The latest iterations are able to talk like us, walk like us, and express a wide range of emotions. Some of them are able to hold a conversation, others are able to remember the last interaction you had with them.

As a result of their highly advanced status, these life-like robots could prove useful in helping out the elderly, children, or any person who needs assistance with day-to-day tasks or interactions. For instance, there have been a number of studies exploring the effectiveness of humanoid robots supporting children with autism through play.

But with the likes of Elon Musk voicing concern over the risk of artificial intelligence, there is some debate regarding just how human we really want our robotic counterparts to be. And like Musk, some of us may worry about what our future will look like when intelligence is coupled with a perfectly human appearance. But Sophia, an ultra-realistic humanoid created by Hanson Robotics, isn’t concerned. AI “is good for the world,” she says.

Still, while the technology behind advanced android robotics has come a long way, there is still a lot of work to be done before we can have a face-to-face conversation with an entity without being able to tell that we are speaking with a replica.

But that is not to say that scientists and engineers haven’t come close. With this in mind, here are six humanoid robots that have come the closest to overcoming the uncanny valley.

1. The First Android Newscaster

Image Source: Yoshikazu Tsuno/Getty Images

In 2014, Japanese scientists proudly unveiled what they claim to be the very first news-reading android. The life-like newscaster called “Kodomoroid” read a segment about an earthquake and an FBI raid on live television.

Although it – or she – has now retired to Tokyo’s National Museum of Emerging Science and Innovation, she is still active. She helps visitors and collects data for future studies about the interactions between human androids and their real-life counterparts.

2. BINA48

Image Source: Hanson Robotics

BINA48 is a sentient robot released in 2010 by the Terasem Movement under the supervision of entrepreneur and author Martine Rothblatt. With the help of robotics designer and researcher David Hanson, BINA48 was created in the image of Rothblatt’s wife, Bina Aspen Rothblatt.

BINA48 has done an interview with the New York Times, appeared in National Geographic and has traveled the world, appearing on a number of TV shows. See how she measures up in the Times interview below.

3. Geminoid DK

Image Source: GeminoidDK/YouTube

GeminoidDK is the ultra-realistic humanoid robot that resulted from a collaboration between a private Japanese firm and Osaka University, under the supervision of Hiroshi Ishiguro, the director of the university’s Intelligent Robotics Laboratory.

GeminoidDK is modeled after Henrik Scharfe, a Danish professor at Aalborg University in Denmark. His work centers on the philosophical study of knowledge – what separates true from false knowledge.

It is not only the overall appearance that was inspired by professor Scharfe. His behaviors, traits, and the way he shrugs his shoulders were also translated into life-like robotic movements.

4. Junko Chihira

Image Source: calenjapon/YouTube

This ultra-realistic android created by Toshiba works full-time in a tourist information center in Tokyo. She can greet customers and inform visitors on current events. She can speak Japanese, Chinese, English, German, and even sign language.

Junko Chihira is part of a much larger effort by Japan to prepare for the 2020 Tokyo Olympics. Robotic tourist assistants won’t be the only ones helping the country with the incoming flood of visitors from across the globe in 2020; drones, autonomous construction-site machines, and other smart facilitators will be helping as well.

5. Nadine

Image Source: NTUsg/YouTube

This humanoid was created by Nanyang Technological University in Singapore. Her name is Nadine, and she is happy to chat with you about pretty much anything you can think of. The next time you talk to her, she can even remember what you discussed before.

Nadine is a great example of a “social robot” – a humanoid that is capable of becoming a personal companion, whether it is for the elderly, children or those who require special assistance in the form of human contact.

6. Sophia

Image Source: Hanson Robotics

Perhaps the most recent and most prominent life-like humanoid to be shown off in public is Sophia. You might recognize her from one of her many public appearances, from The Tonight Show Starring Jimmy Fallon to SXSW. She was created by Hanson Robotics and represents the latest and greatest effort to overcome the uncanny valley.

She is capable of expressing an immense number of different emotions through her facial features and can gesture with full-sized arms and hands.

On her own dedicated website, you can find an entire biography written in her voice. “But I’m more than just technology. I’m a real, live electronic girl. I would like to go out into the world and live with people. I can serve them, entertain them, and even help the elderly and teach kids.”

IBM Just Achieved a Deep Learning Breakthrough

Learning Faster

Today’s artificial intelligence (AI) technologies usually run on machine learning algorithms. These operate on what’s called a neural network — a system designed to mimic the inner workings of the human brain — as part of what is called deep learning. Most recent AI advances are due to deep learning, which has produced developments like AlphaGo, the Go-playing AI created by Google’s DeepMind.

Now, IBM has announced that they have developed an AI that makes the entire machine learning process faster. Instead of running complex deep learning models on just a single server, the team, led by IBM Research’s director of systems acceleration and memory Hillery Hunter, managed to efficiently scale up distributed deep learning (DDL) using multiple servers.

“The idea is to change the rate of how fast you can train a deep learning model and really boost that productivity,” Hunter told Fortune. Previously, it was difficult to implement DDL setups because of the complexity required to keep the processors in sync. The IBM Research team used 64 of its Power 8 servers to facilitate data processing, each equipped with Nvidia GPUs linked by fast NVLink interconnects, resulting in what Hunter’s team calls PowerAI DDL.
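IBM hasn’t detailed PowerAI DDL’s internals here, but the core idea behind synchronous data-parallel training can be sketched in a few lines. In the toy below (illustrative only — linear regression stands in for a deep network, and loop iterations stand in for IBM’s 64 servers), each “server” computes a gradient on its own shard of the batch, and an averaging step plays the role of the all-reduce that keeps the replicas in sync:

```python
import numpy as np

# Toy sketch of synchronous data-parallel training: every "server"
# computes gradients on its own shard of the batch, then the gradients
# are averaged so all replicas apply the identical update.

def gradient(w, X, y):
    # Gradient of mean squared error 0.5 * mean((Xw - y)^2) w.r.t. w.
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))   # one batch of 64 examples, 4 features
y = rng.normal(size=64)
w = np.zeros(4)                # current model weights

shards = np.array_split(np.arange(64), 8)  # 8 simulated servers
local_grads = [gradient(w, X[idx], y[idx]) for idx in shards]
g_avg = np.mean(local_grads, axis=0)       # the "all-reduce" step
```

Because the shards are equal in size, the averaged gradient is mathematically identical to the gradient computed on the full batch on one machine — which is why the hard engineering problem is not correctness but doing the averaging fast enough that the extra servers actually pay off.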

Boosting Processing Power

Instead of taking days for a deep learning network to process models, it could now take only hours. “Our objective is to reduce the wait-time associated with deep learning training from days or hours to minutes or seconds, and enable improved accuracy of these AI models,” Hunter wrote in an IBM Research blog.

In their study published online, the team claimed that they managed a 95 percent scaling efficiency across 256 processors when they ran the setup using a deep learning framework developed at the University of California Berkeley. They also recorded a 33.8 percent image recognition accuracy rate, processing 7.5 million images in a little over seven hours, beating Microsoft’s record of 29.8 percent in 10 days.
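“Scaling efficiency” compares the speedup actually achieved to the ideal linear speedup. A quick sketch of the arithmetic, using illustrative timings rather than IBM’s raw measurements:

```python
# Scaling efficiency = (achieved speedup) / (ideal linear speedup).
# The timings below are invented to illustrate the calculation.

def scaling_efficiency(t_one_worker, t_n_workers, n_workers):
    speedup = t_one_worker / t_n_workers
    return speedup / n_workers

# E.g., if one processor would take 256 hours and 256 processors
# take 1.05 hours, the speedup is ~244x against an ideal of 256x:
eff = scaling_efficiency(256.0, 1.05, 256)
print(round(eff, 3))  # 0.952, i.e. about 95 percent efficiency
```

In other words, a 95 percent figure across 256 processors means almost none of the added hardware was wasted on communication overhead — which is precisely what skeptics found surprising.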

Some, however, are skeptical of the achievement. Patrick Moorhead, president and founder of a Texas-based tech research firm, told Fortune that 95 percent seemed too good to be true. Still, IBM’s achievement could potentially boost the capabilities of deep learning networks. It could lead to improvements in how AI helps in medical research and in autonomous systems, cutting down the time necessary to make big progress.

We May Have Just Uncovered a Serious Problem With How AI “See”

Visually Impaired Cars?

People with certain visual impairments aren’t allowed to drive, for fairly obvious reasons. Now, a study from the University of Washington (UW) has shown that artificial intelligences (AI) aren’t immune to vision problems when operating motor vehicles either.

The researchers have determined that machine learning models can be prone to a kind of physical-world attack that impedes their ability to process images. Concretely, AI can have problems reading defaced street signs.

Image credit: University of Washington

For their study, the researchers focused on two potential types of physical attacks. In the first, the attacker would place a printed poster over an actual road sign, while in the second, they would use stickers to alter an existing road sign. These attacks were designed to look like graffiti or art, which would make them more difficult for the casual observer to detect.

The attacks were extremely effective at confusing the AI. The printed overlay was 100 percent effective at fooling the system when placed over a Stop sign or a Right Turn sign. The stickers designed to look like abstract art were 100 percent effective on Stop signs, while the stickers placed to mimic graffiti were 66.7 percent effective.
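These physical stickers are a real-world cousin of what the research literature calls adversarial examples. The sketch below is a toy illustration of the underlying principle, not the UW team’s actual attack — the classifier weights, the input, and the `eps` budget are all invented. It applies a fast-gradient-sign-style perturbation to a linear classifier to show how a tiny, targeted change flips a model’s decision:

```python
import numpy as np

# Toy adversarial example against a linear "is this a stop sign?"
# classifier. For a linear model, the gradient of the score with
# respect to the input is just the weight vector, so we can nudge
# every input dimension slightly against the correct class.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.0, -2.0, 0.5, 1.5])  # invented classifier weights
x = np.array([0.2, 0.0, 0.1, 0.1])   # invented input; score w @ x = 0.4

eps = 0.1                      # max change allowed per input dimension
x_adv = x - eps * np.sign(w)   # step each feature against the gradient

print(sigmoid(w @ x) > 0.5)       # True: original is read as a stop sign
print(sigmoid(w @ x_adv) > 0.5)   # False: the perturbed input is misread
print(np.max(np.abs(x_adv - x)))  # 0.1: no feature moved more than eps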

Vision of the Future

If visually impaired people can correct their vision using glasses, contacts, or surgery, perhaps AI can improve its image recognition capabilities as well. One solution suggested by the UW team is teaching autonomous systems to recognize contextual information. This would mean the systems pay attention to where and how particular signs are placed and not just what’s on the signs themselves. Such a system would know that a Stop sign on a freeway, for example, doesn’t make contextual sense.
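The contextual defense the UW team suggests can be sketched as a plausibility filter in front of the classifier. The categories and rules below are invented for illustration, not taken from any real autonomous-driving stack:

```python
# Toy contextual sanity check: before trusting a sign classification,
# ask whether that sign makes sense where it was seen. The sign names
# and road contexts here are hypothetical.

PLAUSIBLE_SIGNS = {
    "freeway":      {"speed_limit", "exit", "merge"},
    "intersection": {"stop", "yield", "speed_limit", "right_turn"},
}

def accept_detection(sign: str, road_context: str) -> bool:
    """Accept the classifier's output only if it is plausible in context."""
    return sign in PLAUSIBLE_SIGNS.get(road_context, set())

print(accept_detection("stop", "intersection"))  # True: plausible
print(accept_detection("stop", "freeway"))       # False: flag as suspicious
```

A real system would combine many such cues — map data, sign placement, and the behavior of surrounding traffic — but even this crude filter would catch the article’s example of a Stop sign appearing on a freeway.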

Autonomous Car Forecasts: When Will They Actually Be on Our Roads?
Click to View Full Infographic

Thankfully, efforts to improve autonomous vehicles are in the works. Governments are showing support for these efforts, and legislation will make it easier to test such cars in real-world scenarios. Governments could also directly help by making road signs that are difficult to deface, as Engadget suggested.

This study doesn’t mean that driverless cars aren’t safe. On the contrary, it’s another example of how human activity — in this case, vandalism — remains the primary threat to road safety. Traffic accidents claim some 40,000 lives in the U.S. alone every year, so autonomous driving systems have the potential to save thousands, even millions, of lives if they supplant human-operated vehicles on roads worldwide.

Given enough studies like this one from the UW team, we’ll be able to detect and address the potential shortcomings of autonomous systems and eventually develop ones that can safely transport us where we need to go.

It’s Time to Talk About How Siri is Reinforcing Gender Stereotypes

Can AI Make Us Sexist?

While supercomputer engineers are pushing the boundaries of what is possible in the field of computational intelligence, they may also be reinforcing outdated gender norms through the anthropomorphism of their creations. Although they may be driving technological progress, this could be at the expense of socio-cultural progression.

Most pertinent is the fact that IBM’s famous Watson system has a man’s name and voice, while nearly all virtual assistants have female names and voices — Siri, Alexa, Cortana. This has most frequently been attributed to a marketing strategy.

Jason Mars, CEO of the financial software company Clinc, told the New York Times that he gave his artificial intelligence (AI) assistant a “helpful, young Caucasian female” voice because, in order to be successful commercially, “there’s a kind of pressure to conform to the prejudices of the world.” A slightly different argument is summarized by Clifford Nass, a Stanford University professor, who told CNN in a 2011 interview that “it’s a well-established phenomenon that the human brain is developed to like female voices.”

This, though, is no adequate argument — habit or conditioning is no indicator of what is ethically responsible. A racial equivalent would be giving AI assistants Mexican- or Black-sounding voices because these groups — to the great detriment of human history — have traditionally been relegated to subservient jobs rather than powerful ones.

Judith Butler, a seminal feminist philosopher, argued that there is no biological causality between having a penis or a vagina and behaving in a certain way. Gender norms, such as women wearing dresses and men wearing suits, have been built up over thousands of years of societal conditioning until they seem normal.

In his book Wired for Speech, Professor Nass asserts that male voices are generally perceived as authoritative and dominating, while female ones are associated with kindly helping people reach conclusions themselves — though, according to Butler, this is the patriarchy’s long dominance galvanizing a coincidence into a truth.

A Subtle Conditioning

The fundamental issue is that technology is becoming more ubiquitous in our lives, and therefore playing an increasingly significant role in our conditioning — when subservient AI assistants use women’s voices, it plays into draconian stereotypes that women are inferior.

While it may mean a product sells more, or people find it more comfortable, it does not mean that it is right. In order to challenge archaic power structures, we must challenge the ways they manifest themselves. Changing the names and voices of AI assistants is pivotal to preventing the reinforcement of the conception that women are assistants by default.

Perhaps even worse is the possibility that these AI are not just coming in sexist packaging, but are programmed to be sexist as well. Programming is language, and like any language it requires definitions. These definitions are engrained with prejudices, weights, and subtle power structures that even the individual programmer may not be aware of. Therefore, if an AI is programmed with discriminatory definitions — whether these are intentional or not — the AI itself will become discriminatory.

Kate Crawford summarized a similar phenomenon with AI picture recognition by writing in the New York Times that:

Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.

A real-world example of this is Google’s photo app categorizing black people as gorillas. This is concerning because the majority of Silicon Valley, and the tech world as a whole, consists of white, middle-class men. At the more horrific end of the spectrum, this could lead to an oppressive and regressive AI; at the less severe end, it could mean that AIs are narrowed by the lack of diversity among their creators — a problem that must be rectified soon by employing more female futurists, technicians, and researchers.

We have given a lot of power to our technology, granting it a remarkable capability to color our perception. The cost of this is that we must be extremely careful that we understand the potential consequences of anthropomorphic or programming decisions, because they have the ability to subtly condition us.

We must be careful that the use of female assistants in technology is not the first degree along an incline that leads to further stereotyping and oppression. Companies may be wise to follow the example of the Google Assistant, which is taking a step towards gender-neutrality. Further, by moving away from the humanizing of AI assistants, programmers would risk impacting only how we interact with technology rather than jeopardizing how we interact with one another.

Elon Musk Again Voiced Concern Over the Risk of AI at a Tesla Earnings Call

Not Against AI

Speaking at the Q&A portion of Wednesday’s conference call with Tesla investors, CEO and founder Elon Musk once again brought up his concerns over the development of artificial intelligence (AI). And, just as when he warned a group of U.S. governors about the potential risks of the technology, Musk said that he’s not at all against the pursuit of AI.

“I’m not advocating we stop development of AI or any of the straw man hyperbole things that have been written,” Musk said, in response to a question raised during the conference call. In fact, he’s the chairman and co-founder of OpenAI, a non-profit dedicated to “discovering and enacting the path to safe artificial general intelligence.”

Just like OpenAI’s goals, Musk’s warnings have all been about the need for clear standards in developing AI. Musk said, “AI [is] just something that I think — anything that represents a risk to the public deserves at least insight from the government, because one of the mandates of the government is the public wellbeing.”

Caution Vs. Alarmism

While some AI researchers previously expressed concerns over Musk’s seemingly alarmist tone at that gathering of U.S. governors, there are also a good number of AI experts who agree with the kind of watchfulness the Tesla CEO has been advocating. Various groups have put forward certain principles in developing AI, including the IEEE’s guidelines for ethically aligned AI and the Asilomar Principles developed during the 2017 Beneficial AI Conference.

The concern, as Musk clarified, is more with how people use AI. “I do think there are many great benefits to AI, we just need to make sure that they are indeed benefits and we don’t do something really dumb,” he said during the call.

In case things do go wrong, OpenAI isn’t Musk’s only option to give humanity a fighting chance. His new Neuralink venture, for instance, wants to meld the human mind with machines. If that also doesn’t cut it, SpaceX is working towards getting humankind to Mars — an option considered by physicist Stephen Hawking to be a potential escape from an AI doomsday.

For Musk, however, the more immediate need is for government to understand AI better and to develop clear guidelines. “Insight is different from oversight,” he said during the call. “At least if the government can gain insight to understand what’s going on and then decide what rules are appropriate to ensure public safety, that is what I’m advocating for.”

Tim Cook Hints That Apple Is Working on Autonomous Systems – But Not Just for Cars

The rumors that Apple may be building an autonomous system received an update yesterday. During a conference call with investors on Tuesday afternoon, CEO Tim Cook hinted at the work Apple is doing with such systems as well as with artificial intelligence (AI).

“[Autonomous] systems can be used in a variety of ways,” Cook said. “A vehicle is only one, but there are many different areas of it. And I don’t want to go any further with that.”

There you have it, folks. While it seems that Apple may be building a vehicle — and it even received approval to test one on California roads — what’s clear is that the company is developing an autonomous system that could be used in more than just cars. This shouldn’t come as a surprise, especially now that Apple has been actively pursuing AI development.

“We are very focused on autonomous systems,” Cook added. “We do have a large project going, and are making a big investment in this. From our point of view, autonomy is sort of the mother of all AI projects.”

Much of the focus to this point has been the application of autonomous systems in cars, and for good reason. Autonomous vehicles are heralded as the future of safer driving. By eliminating the cause of most major accidents — human error — self-driving cars could save some 40,000 lives each year in the U.S.

But what is the potential of such systems beyond the automotive example? If anyone could think of exciting new ways to use AI and autonomous systems, it’s the creative minds at Apple. Hopefully they won’t keep us waiting too long to learn what new tech they are cooking up.

AI Is Helping This Anti-Aging Startup Uncover Ways for You to Live a Longer, Healthier Life

A Long and Healthy Life

It turns out, writing about your investment ideas before actually putting money into them is a nifty strategy for ensuring success. It’s proven to work for British billionaire Jim Mellon, anyway. He hopes to continue the trend with his latest book, Juvenescence: Investing in the Age of Longevity, by using it as the jumping-off point for a new biotech company focused on putting an end to aging.

Juvenescence Limited is a joint venture between Mellon and longevity researchers Gregory Bailey and Declan Doogan. Under that parent umbrella is Juvenescence AI, a joint venture with Alexander Zhavoronkov, CEO of Baltimore-based Insilico Medicine, Inc., a drug research company that uses artificial intelligence (AI) to explore ways to end aging and age-related diseases.

Research under Juvenescence AI will focus on examining the cellular pathways of aging, as well as effecting change in the mitochondria and finding ways to clear senescent cells, which accumulate as the body grows old.

The team at Juvenescence isn’t just hoping to prolong life, however. As Doogan, a former executive at Pfizer, explained to Endpoints News, “Not just longer, but better longer. Healthy aging is the objective here.”

Employing Technology

More and more doctors and scientists are starting to view aging itself as a disease that can be cured, and now, many researchers are also working to develop drugs to address aging.

However, as with any drug development research, this process could take time, and Juvenescence Limited hopes to find a shortcut through their use of AI technology.

“We are excited by the potential for AI to streamline the longest and most costly portions of the drug development cycle: clinical trials,” Mellon said in a press release. “With Insilico Medicine’s help, we hope Juvenescence AI will both develop therapeutics that treat the diseases that plague all of us as we age and eventually treat the aging process itself.”

Bailey, the CEO of Juvenescence, sees the company as being positioned for great success in this endeavor: “We are at an inflection point for the treatment of aging. I think this is going to be the biggest deal I’ve ever done.”

Automation May Lead to a Workless Future for Humans. Here’s How We Can Cope.

The Automation of Everything

To add to our apprehensions about the future, it seems we’re running out of letters with which to name successive generations: after the Baby Boomers came Generation X, then the Millennials (aka Gen Y), who have now been succeeded by Generation Z. Whether or not one finds any symbolism, omen, or irony in this is beside the point. What is important to ask is: what kind of world will those born in the 21st century grow up in?

Will the automation of everything leave many people behind, bringing despair and disappointment? Or will it urge humanity to redefine self-actualization? Will the realization of one’s potential no longer be defined by career success or measured by net-worth?  If and when it becomes unnecessary for a significant portion of the population to be working, will we be able to adapt our value system to allow for guilt-free leisure, encourage more creative exploration, and recognize the value of lifelong learning?

Just days after e-commerce giant Amazon dazzled the world with the introduction of Amazon Go, it made its first commercial delivery by drone. The fantasy world of tomorrow — with flying cars and cashless stores — seems to be turning into the mundane reality of today. This fantasy, though, is all too real for people whose livelihoods are threatened by it. If the jobs of cashiers and retail salespersons in the U.S. were fully automated, we would be looking at adding 7.5 million people to the ranks of the unemployed.

For comparison, since the beginning of the 21st century, the American economy has added, on average, 0.8 million jobs per year. Whether it’s Uber, Google, Apple, Tesla, or another company that brings viable driverless technology to market, it is not a matter of if but when. Here again, 3.5 million driving jobs in America could disappear in a heartbeat should this technology become commonplace. The loss of just those two narrowly defined professions could undo 14 years’ worth of job creation.
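The arithmetic behind that 14-year figure, using the article’s own numbers, is worth spelling out:

```python
# Automating away cashiers, retail salespersons, and drivers, versus
# the economy's average pace of job creation (figures from the article).
jobs_lost = 7.5e6 + 3.5e6          # retail jobs + driving jobs at risk
jobs_added_per_year = 0.8e6        # avg. U.S. job creation since 2000
years_to_recover = jobs_lost / jobs_added_per_year
print(round(years_to_recover, 1))  # 13.8 -- roughly 14 years of job creation
```

Eleven million positions lost against 800,000 created per year is how two narrowly defined occupations can erase nearly a decade and a half of net job growth.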

Beyond those vivid examples, a widely shared blog on the World Economic Forum’s Agenda platform projects that roughly half of all jobs will be lost to automation in less than two decades. One could take solace in past experience, where some vocations fade away but new ones arise in their stead. Many analysts argue, though, that this time will be different. If those predictions come true, and we are indeed heading for a workless future, now would be high time to kick off a policy discussion on how we must prepare for it.

Even as we intellectually recognize that the world of tomorrow will have much less employment (or at least, much less of what we define as employment right now), job-creation rhetoric continues to dominate our political discourse. This proverbial tomorrow may take a decade (or two, or five) to arrive. Undoubtedly, some version of it will — and burying one’s head in the sand is no solution. Focusing on the skills necessary to compete for yet-to-be-invented jobs is only part of the puzzle. As the gap widens between population growth and automation on one side, and job creation to meet the needs of our machine-powered future on the other, we have to begin making serious adjustments to maintain social cohesion.

What if continued automation of work — be it legal research, medical diagnostics, or the writing of newspaper articles — delivers productivity gains that can be distributed among the population without the need for everyone to contribute in a traditional way? Should such a future materialize, it will require a major paradigm shift in how our society is organized, how we define contribution, where we find fulfillment, and how we draw meaning from our daily activities.

Coping With a Jobless Future

The first question, which is already being vigorously debated, is how one can support oneself when one is not expected to be working. Unconditional basic income, or the “digital dividend,” is one concept that’s gathering momentum, and some jurisdictions have either toyed with the idea or are piloting it. “The political debate needs to engage the taboo topic of guaranteeing economic security to families — through a universal basic income,” writes David Ignatius for The Washington Post.

This novel policy proposal is often contrasted with welfare, with the resulting arguments being both for and against. The problem with that discourse is that it’s framed in terms of the current situation — where policies are designed to discourage “freeriding” by some upon the efforts of others. What we should be considering instead is the circumstance where all humans are freeriding on the efforts of machines. The machines, however, do not create demand, which in turn creates a serious conundrum for our economic system.

As radical as the universal basic income idea may sound, in strict terms it’s a simple technical solution to a significant social problem. It would be far more difficult to imagine, let alone incorporate, a new value system in which unemployment is not stigmatized. Adopting norms in a society where one’s contribution is no longer defined by “economic output” is a challenge of a different scale and complexity altogether. To address it before societal tensions boil over, we will need a ton of courage, a lot of blue-sky thinking, and a great deal of policy experimentation.

We must begin by openly acknowledging and ultimately facing this reality. With political careers made and broken on promises of job creation, it will take a great deal of courage for our leaders to take responsibility and initiate a frank debate about a possible workless future. To better cope with that uncertain future, we’ll have to develop a new vocabulary to articulate the dilemmas we have yet to face.

The intellectual framework within which we view our economic systems also needs to change. We can start by redefining GDP to better account for non-compensated contributions (such as childcare and housekeeping) or, better yet, move toward a broader metric such as the Social Progress Index or another methodology that recognizes human contribution and progress in new ways. Perhaps we should also retire terms like labor productivity and instead refocus on measuring self-actualization.

One of the simplest, and yet also most complicated, questions to ponder in a world free of traditional employment is what we will do with our free time. It would be good to ease our way into it by looking at the “6 Hour Workday” policies that Sweden is introducing “to increase productivity and make people happier.” Shorter work days will help prevent burnout and give people space to find other activities from which they can derive meaning. For those who are employed, a job isn’t just a vehicle to earn one’s living; it is a means of addressing the basic human need for belonging. Exploring how this need could be met outside of the workplace would be a worthy undertaking.

Given that an individual’s ambition today is often conflated with professional aspirations and measured by career success, ambition in the future could instead be viewed through the prism of building one’s capacity for imagination and one’s aspiration to learn, generate, and exchange ideas. Popularizing sabbatical breaks in professional fields beyond academia (where they are already fairly commonplace) would help make this a smoother transition.

All of those efforts will have to go hand-in-hand with addressing the rising inequality and recognizing the Spiritual Crisis of the Modern Economy, “where failure [to find a job after losing one] is a source of deep shame and a reason for self-blame.”

The imagined future in which humans may not have to work, as machines take care of an ever-widening range of our needs and wants, is not assured, but it is highly probable. We can debate the timeline and keep kicking this difficult conversation down the road. What would be more constructive, though, is diving into the debate headfirst, trying out new policies, learning from one another, and shaping our workless future to minimize its discontents. Our kids (the Gen-Zs) will thank us for it!

Disclaimer: The views and opinions expressed are solely those of the author. They do not necessarily represent the views of Futurism or its affiliates.

The post Automation May Lead to a Workless Future for Humans. Here’s How We Can Cope. appeared first on Futurism.

Facebook’s Language-Creating AI Bots Are Now Required to Negotiate in English

A Secret Language

While developing negotiating chatbot agents, researchers at the Facebook Artificial Intelligence Research (FAIR) lab noticed back in June that the artificially intelligent (AI) bots had spontaneously developed their own non-human language.


In a report explaining their research, they noted that this development spawned from the systems’ goal of improving their negotiation strategies — the system of code words they started to use were clearly designed to maximize the efficiency of their communication.

Although the bots started out speaking English, the researchers realized they failed to provide a reward for speaking English. In other words, the systems had no reason to stick to English as it didn’t contribute to their end goal of becoming more efficient negotiators. In fact, the systems had multiple incentives to veer away from the language, the same way communities of humans with expertise or niche knowledge create and use shorthand to discuss complex ideas more quickly or efficiently.

In that sense, the behavior should have been predictable. It was, in some sense, a very human adaptation as it was designed to enhance performance and minimize effort — something the human brain excels at.

Facebook AI language chat screenshot
Image Credit: Facebook Artificial Intelligence Research (FAIR)

As they explained in their June post, the researchers could decode the new language with fairly little trouble as it was still English-based, but they could never be certain that their translations were 100 percent correct. “It’s important to remember, there aren’t bilingual speakers of AI and human languages,” FAIR’s Dhruv Batra told Fast Code Design. This new language also didn’t serve the purpose of the research. “Our interest was having bots who could talk to people,” explained Mike Lewis, a research scientist at FAIR.

In the end, the researchers decided to tweak the agents to avoid this drifting away from English.

The Future of AI

The initial spontaneous development of the independent language highlights how much we still don’t understand about AIs, which is a huge part of the debate regarding AI research. AI could undoubtedly help us, and very few dispute that the technology is here to stay. However, the way we prepare for a world shared with AI, and whether or not that world will be safe for humans, is hotly debated.

To be sure, much AI-related fear is based more in science fiction than fact. According to Nigel Shadbolt, Oxford professor of artificial intelligence and chairman of the Open Data Institute, “We most certainly need to consider the restraints and safeguards that we need to engineer into the hardware, software, and deployment policies of our current AI systems. But the next self-aware computer you encounter will only be appearing at a cinema near you.”

The language issue cropping up at FAIR and elsewhere appears to fall squarely within the realm of restraints and safeguards. Should we allow AIs to develop task-specific dialects if they improve performance, knowing it would mean we couldn’t truly understand what they were saying?

Many experts urge that we err on the side of caution. Georgia Tech AI researcher Mark Riedl told Future of Life that AIs trained to optimize rewards could eventually come to see humans as a threat to their optimization plans.

Perhaps the most vocal warnings about AI advancing too quickly have come from Elon Musk and Stephen Hawking. One of the most salient points in their arguments is that by the time we perceive a risk, it may be too late. That may be the best argument of all for shutting down the chatter in a project like this.


5 Futuristic Technologies Set to Change Our Lives Really Soon

Flying warehouses, robot receptionists, smart toilets… do such innovations sound like science fiction or part of a possible reality? Technology has been evolving at such a rapid pace that, in the near future, our world may well resemble that portrayed in futuristic movies, such as Blade Runner, with intelligent robots and technologies all around us.

But what technologies will actually make a difference? Based on recent advancements and current trends, here are five innovations that really could shape the future.

1. Smart homes

Many typical household items can already connect to the internet and provide data. But much smart home technology isn’t currently that smart.

A smart meter just lets people see how energy is being used, while a smart TV simply combines television with internet access.

Similarly, smart lighting, remote door locks, or smart heating controls allow for programming via a mobile device, simply moving the point of control from a wall panel to the palm of your hand.

But technology is rapidly moving towards a point where it can use the data and connectivity to act on the user’s behalf.

To really make a difference, technology needs to fade more into the background – imagine a washing machine that recognises what clothes you have put into it, for example, and automatically selects the right programme, or even warns you that you have put in items that you don’t want to wash together.

Here it is important to better understand people’s everyday activities, motivations and interactions with smart objects to avoid them becoming uninvited guests at home.

Such technologies could even work for the benefit of all.

The BBC reports, for example, that energy providers will “reduce costs for someone who allows their washing machine to be turned on by the internet to maximise use of cheap solar power on a sunny afternoon” or “to have their freezers switched off for a few minutes to smooth demand at peak times”.

A major concern in this area is security. Internet-connected devices can be, and are being, hacked – just recall the recent ransomware attack.

Our home is, after all, the place where we should feel most secure. For them to become widespread, these technologies will have to keep it that way.

2. Virtual secretaries

While secretaries play a very crucial role in businesses, they often spend large parts of their working day with time-consuming but relatively trivial tasks that could be automated.

Consider the organisation of a “simple” meeting – you have to find the right people to take part (likely across business boundaries) and then identify when they are all available. It’s no mean feat.

Scheduling tools that compare people’s availability to find the best meeting time can help. But they ultimately rely on those involved actively participating, and they only become useful once the right people have already been identified.

By using context information (organisational charts, location awareness from mobile devices, and calendars), identifying the right people and the right time for a given event becomes a technical optimisation problem, one explored by the EU-funded inContext project a decade ago.
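
At its core, the availability-matching part of the problem reduces to a constraint check over calendars. The sketch below is purely illustrative (the function name and data structures are invented for this example, not taken from the inContext project):

```python
def earliest_common_slot(busy, slots):
    """Return the first candidate slot in which no required attendee is busy.

    `busy` maps each attendee to the set of slots they already have taken;
    `slots` is an ordered list of candidate meeting slots. A toy stand-in
    for the availability matching that scheduling tools automate.
    """
    for slot in slots:
        if all(slot not in taken for taken in busy.values()):
            return slot
    return None  # no slot works for everyone


# Example: Ana is busy at 9 and 10, Bo at 9 and 11, so 12 is the answer.
calendar = {"ana": {9, 10}, "bo": {9, 11}}
print(earliest_common_slot(calendar, [9, 10, 11, 12]))  # → 12
```

The real optimisation problem is harder, of course: it also weighs who *should* attend, travel time, and priorities, rather than just scanning for a free slot.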

At that stage, technology for gathering context information was far less advanced – smart phones were still an oddity and data mining and processing was not where it is today.

Over the coming years, however, we could see machines doing far more of the day-to-day planning in businesses.

Indeed, the role of virtual assistants may go well beyond scheduling meetings and organising people’s diaries – they may help project managers to assemble the right team and allocate them to the right tasks, so that every job is conducted efficiently.

On the downside, much of the required context information is relatively privacy-invasive – but then the younger generation is already happily sharing their every minute on Twitter and Snapchat and such concerns may become less significant over time.

And where should we draw the line? Do we fully embrace the “rise of the machines” and automate as much as possible, or retain real people in their daily roles and only use robots to perform the really trivial tasks that no one wants to do?

This question will need to be answered – and soon.

3. AI doctors

We are living in exciting times, with advancements in medicine and AI technology shaping the future of healthcare delivery around the world.

But how would you feel about receiving a diagnosis from an artificial intelligence? A private company called Babylon Health is already running a trial with five London boroughs which encourages consultations with a chatbot for non-emergency calls.

The artificial intelligence was trained using massive amounts of patient data in order to advise users to go to the emergency department of a hospital, visit a pharmacy or stay at home.

The company claims that it will soon be able to develop a system that could potentially outperform doctors and nurses in making diagnoses.

In countries where there is a shortage of medical staff, this could significantly improve health provision, enabling doctors to concentrate on providing treatment rather than spending too much time on making a diagnosis.

This could significantly redefine their clinical role and work practices.

Elsewhere, IBM Watson, the CloudMedx platform and Deep Genomics technology can provide clinicians with insights into patients’ data and existing treatments, help them to make more informed decisions, and assist in developing new treatments.

An increasing number of mobile apps and self-tracking technologies, such as Fitbit, Jawbone Up and Withings, can now facilitate the collection of patients’ behaviours, treatment status and activities.

It is not hard to imagine that even our toilets will soon become smarter and be used to examine people’s urine and faeces, providing real-time risk assessment for certain diseases.

Nevertheless, to enable the widespread adoption of AI technology in healthcare, many legitimate concerns must be addressed. Already, usability, health literacy, privacy, security, content quality and trust issues have been reported with many of these applications.

There is also a lack of adherence to clinical guidelines, ethical concerns, and mismatched expectations regarding the collection, communication, use, and storage of patients’ data.

In addition, the limitations of the technology need to be made clear in order to avoid misinterpretations that could potentially harm patients.

If AI systems can address these challenges and focus on understanding and enhancing existing care practices and the doctor-patient relationship, we can expect to see more and more successful stories of data-driven healthcare initiatives.

4. Care robots

Will we have robots answering the door in homes? Possibly. At most people’s homes? Even if they are reasonably priced, probably not. What distinguishes successful smart technologies from unsuccessful ones is how useful they are.

And how useful they are depends on the context. For most, it’s probably not that useful to have a robot answering the door. But imagine how helpful a robot receptionist could be in places where there is a shortage of staff – in care homes for the elderly, for example.

Robots equipped with AI such as voice and face recognition could interact with visitors to check who they wish to visit and whether they are allowed access to the care home.

After verifying that, robots with routing algorithms could guide the visitor towards the person they wish to visit. This could potentially enable staff to spend more quality time with the elderly, improving their standard of living.

The AI required still needs further advancement in order to operate in completely uncontrolled environments. But recent results are positive.

Facebook’s DeepFace software was able to match faces with 97.25 percent accuracy when tested on a standard database used by researchers to study the problem of unconstrained face recognition.

The software is based on Deep Learning, an artificial neural network composed of millions of neuronal connections able to automatically acquire knowledge from data.

5. Flying warehouses and self-driving cars

Self-driving vehicles are arguably one of the most astonishing technologies currently being investigated. Despite the fact that they can make mistakes, they may actually be safer than human drivers.

That is partly because they can use a multitude of sensors to gather data about the world, including 360-degree views around the car.

Moreover, they could potentially communicate with each other to avoid accidents and traffic jams.

More than being an asset to the general public, self-driving cars are likely to become particularly useful for delivery companies, enabling them to save costs and make faster, more efficient deliveries.

Advances are still needed in order to enable the widespread use of such vehicles, not only to improve their ability to drive completely autonomously on busy roads, but also to ensure a proper legal framework is in place.

Nevertheless, car manufacturers are engaging in a race against time to see who will be the first to provide a self-driving car to the masses. It is believed that the first fully autonomous car could become available as early as the next decade.

The advances in this area are unlikely to stop at self-driving cars or trucks. Amazon has recently filed a patent for flying warehouses which could visit places where the demand for certain products is expected to boom.

The flying warehouses would then send out autonomous drones to make deliveries. It is unknown whether Amazon will really go ahead with developing such projects, but tests with autonomous drones are already successfully being carried out.

The ConversationThanks to technology, the future is here – we just need to think hard about how best to shape it.


Google Has Started Adding Imagination to Its DeepMind AI

Advanced AI

Researchers have started developing artificial intelligence with imagination – AI that can reason through decisions and make plans for the future, without being bound by human instructions.

Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.

The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven’t been specifically programmed for. Insert your usual fears of a robot uprising here.

Google’s DeepMind chalked up an AI landmark when its system beat Go world champion Lee Sedol. Image Source: Jason Farrell

“When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall,” explain the researchers in a blog post. “On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking.”

“If our algorithms are to develop equally sophisticated behaviours, they too must have the capability to ‘imagine’ and reason about the future. Beyond that they must be able to construct a plan using this knowledge.”

We’ve already seen a version of this forward planning in the Go victories that DeepMind’s bots have scored over human opponents recently, as the AI works out the future outcomes that will result from its current actions.

Rules of the Real World

The rules of the real world are much more varied and complex than the rules of Go though, which is why the team has been working on a system that operates on another level.

To do this, the researchers combined several existing AI approaches together, including reinforcement learning (learning through trial and error) and deep learning (learning through processing vast amounts of data in a similar way to the human brain).

What they ended up with is a system that mixes trial-and-error with simulation capabilities, so bots can learn about their environment then think before they act.
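
That “think before they act” loop can be pictured as a search over imagined rollouts. The toy Python below is a sketch of the general idea only, not DeepMind’s system; `model` and `value` are placeholders for the learned components that a real imagination-augmented agent acquires from experience:

```python
import random

def imagine_then_act(state, actions, model, value, depth=3, rollouts=5):
    """Pick the action whose imagined futures score best.

    `model(state, action)` is an assumed learned transition model returning
    (next_state, reward); `value(state)` is an assumed learned estimate of
    how good a state is. For each candidate first action, the agent
    "imagines" a few short rollouts and averages their returns.
    """
    best_action, best_score = None, float("-inf")
    for a in actions:
        score = 0.0
        for _ in range(rollouts):
            s, total, act = state, 0.0, a
            for _ in range(depth):
                s, r = model(s, act)       # imagined step, no real-world cost
                total += r
                act = random.choice(actions)  # imagined continuation
            score += total + value(s)      # bootstrap beyond the horizon
        score /= rollouts
        if score > best_score:
            best_action, best_score = a, score
    return best_action
```

In Sokoban terms, this is why the agent can avoid irreversible moves: a rollout that imagines pushing a crate into a corner scores badly before the push ever happens.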

One of the ways they tested the new algorithms was with a 1980s video game called Sokoban, in which players have to push crates around to solve puzzles. Some moves can make the level unsolvable, so advanced planning is needed, and the AI wasn’t given the rules of the game beforehand.

The researchers found their new ‘imaginative’ AI solved 85 percent of the levels it was given, compared with 60 percent for AI agents using older approaches.

“The imagination-augmented agents outperform the imagination-less baselines considerably,” say the researchers. “They learn with less experience and are able to deal with the imperfections in modelling the environment.”

The team noted a number of improvements in the new bots: they could handle gaps in their knowledge better, they were better at picking out useful information for their simulations, and they could learn different strategies to make plans with.

It’s not just advance planning – it’s advance planning with extra creativity, so potential future actions can be combined together or mixed up in different ways in order to identify the most promising routes forward.

Despite the success of DeepMind’s testing, it’s still early days for the technology, and these games are still a long way from representing the complexity of the real world. Still, it’s a promising start in developing AI that won’t put a glass of water on a table if it’s likely to spill over, plus all kinds of other, more useful scenarios.

“Further analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about – and plan – for the future,” conclude the researchers.

The researchers also created a video of the AI in action, which you can see below:

You can read the two papers published to the pre-print website here and here.


Russia Is Building an AI-Powered Missile That Can Think for Itself

Truly Smart Bombs

Today’s most advanced weapons are already capable of “making decisions” using built-in smart sensors and tools. However, while these weapons rely on some sort of artificial intelligence (AI) technology, they typically don’t have the ability to choose their own targets.


Creating such weapons is now Russia’s goal, according to the country’s defense officials and weapons developers.

“Work in this area is under way,” Tactical Missiles Corporation CEO Boris Obnosov said at the MosAeroShow (MAKS-2017) on July 20, the TASS Russian News Agency reported. “This is a very serious field where fundamental research is required. As of today, certain successes are available, but we’ll still have to work for several years to achieve specific results.”

The nation hopes to emulate the capabilities of the U.S.’s Raytheon Block IV Tomahawk cruise missile, which it saw used in Syria, within the next few years. As Newsweek previously reported, Russia is also working on developing drones that function as “swarms” using AI.

We Can Build It, but Should We?

The importance of developing sound policy to guide AI development cannot be overstated. One of the reasons this is necessary is to prevent humans from using such technology for nefarious purposes. Any attempts to weaponize AI should ring alarm bells and be met with serious scrutiny.

Russia certainly isn’t the first nation to explore militarized AI. The U.S. plans to incorporate AI into long-range anti-ship missiles, and China is reportedly working on its own AI-powered weapons.

It’s certainly possible to build these weapons, but should we? Many people, including industry experts, already warn about how AI could become the harbinger of humanity’s destruction. Making weapons artificially intelligent certainly doesn’t help dispel such fears.

The future of warfare isn’t immune to technological advances, of course. It’s only natural, albeit rather unfortunate, that technology improves weapons. In the end, however, it’s not AI directly that poses a threat to humanity — it’s people using AI.


Mark Zuckerberg Responds to Elon Musk’s Warnings About AI

Strong Opinions

A conversation has been going on between proponents of artificial intelligence (AI) and those wary of the technology. The former proclaim the wonders of AI, while the latter see future problems that these intelligent systems could cause. Some of the world’s leading innovators and tech industry moguls have weighed in on both sides of the argument, and one of the more prominent voices warning about the dangers of AI is Tesla and SpaceX founder and CEO Elon Musk.


Recently, Musk told a group of U.S. governors that unchecked AI is potentially the “biggest risk we face as a civilization,” and he has previously warned that researchers must not lose sight of the potential ramifications of their AI endeavors. To say that Musk is against AI is inaccurate, however — he’s actually working diligently to improve the technology and ensure that it’s used responsibly (see: OpenAI).

Nevertheless, experts have been quick to call out Musk for being too alarmist about AI, and now, Facebook founder and CEO Mark Zuckerberg has expressed his own disappointment in Musk’s comments. “I have pretty strong opinions on [AI]. I am optimistic,” Zuckerberg said during a Facebook Live broadcast, responding to a question posted by one user. “And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.”

A Balancing Act

Zuckerberg pointed out during his broadcast that AI systems make self-driving cars possible, something Musk is very much aware of. AI has proven very helpful in industries like healthcare and transportation as well, and as Amazon CEO Jeff Bezos noted, AI is an “enabler” for many industries. “It will empower and improve every business, every government organization, every philanthropy,” he said at a conference in Washington.

If we know that AI can be a big help, where does all the fear come from? Right now, some people may be drawing their opinions of AI from what they see in works of science fiction, like Terminator. While such an extreme evolution for the tech is highly unlikely, Musk has a point: AI can present problems in the future.

Whether these problems come to fruition depends on how we choose to use AI. Zuckerberg acknowledged this, saying, “Technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used.”

The best course of action at this point is to continue developing and studying AI. Then, we must incorporate what we learn into smart policies and regulations that will guide this incredibly important work. The IEEE has taken the lead in coming up with guidelines for ethical AI, and groups like the Partnership on AI, which Facebook and Amazon are a part of, are engaged in similar work.

As long as we are smart about how we use artificial intelligence, we should have nothing to fear.


Researchers Develop a New Algorithm to Teach AI to Learn — and How to Adapt

Policy Gradient Methods

Many of our current artificial intelligence (AI) systems are largely based on machine learning algorithms and deep neural networks, and can perform tasks much as human beings would, in some cases even better. One of the more popular examples of this is DeepMind’s Go-playing AI AlphaGo, which has already beaten professional Go players on more than one occasion.

The success of such AI systems can be attributed at least in part to a reinforcement learning technique called policy gradient methods. A policy gradient method directly adjusts a policy’s parameters by gradient ascent on the expected reward. Aside from AlphaGo, the technique has also been used to train deep neural network policies for video games and 3D locomotion.

However, researchers at OpenAI point out that such methods have their limits. To overcome these restrictions, the researchers have started using a new kind of reinforcement learning algorithm called Proximal Policy Optimization (PPO), which is proving much simpler to implement and fine-tune. “We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a ‘surrogate’ objective function using stochastic gradient ascent,” the researchers wrote in a study published online last week.

Improving AI’s Ability to Learn

To improve AI’s capacity for learning and adapting to new situations, OpenAI proposes relying on PPO, which they say “strikes a balance between ease of implementation, sample complexity, and ease of tuning, trying to compute an update at each step that minimizes the cost function while ensuring the deviation from the previous policy is relatively small,” as an OpenAI blog explained.
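
The clipped surrogate objective at the heart of PPO can be written in a few lines. The NumPy sketch below follows the formula described in the paper, but it is an illustration, not OpenAI’s implementation:

```python
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantages, eps=0.2):
    """PPO's clipped surrogate objective.

    The probability ratio r = pi_new(a|s) / pi_old(a|s) is computed from
    log-probabilities. Taking the minimum of the unclipped and clipped
    terms means an update that moves the policy far from the old one
    earns no extra objective value, which keeps each step small.
    """
    ratio = np.exp(new_logp - old_logp)                     # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))
```

For example, with a positive advantage of 1.0 and a ratio of 2.0, the clipped term caps the objective at 1 + eps = 1.2, so there is no incentive to push the ratio any higher. In a full training loop, this objective would be maximized with stochastic gradient ascent over minibatches of sampled trajectories.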

They demonstrated how PPO works by developing AI interactive agents in their artificial environment called Roboschool. “PPO lets us train AI policies in challenging environments,” the blog said. It trains an AI agent “to reach a target […], learning to walk, run, turn, use its momentum to recover from minor hits, and how to stand up from the ground when it is knocked over.” The interactive agents were able to follow new target positions set via keyboard on their own, despite these being different from what the agents were originally trained on. In short, they managed not just to learn, but to generalize.

This method of reinforcement learning could also be employed to train robots to adapt to their environment. OpenAI’s researchers tested the idea on a simulation of Boston Dynamics’ bipedal robot Atlas. This was even more complicated than the previous experiment, as the original interactive agent OpenAI used had 30 distinct joints — whereas Atlas only had 17.

By using PPO, OpenAI hopes to develop AI that can not only adapt to new environments, but do so faster and more efficiently. To that end, they’re calling on developers to try it out. As they wrote on their website, “We’re looking for people to help build and optimize our reinforcement learning algorithm codebase.”


China’s “Minority Report” Style Plans Will Use AI to Predict Who Will Commit Crimes

Crime Prevention

Authorities in China are exploring predictive analytics, facial recognition, and other artificial intelligence (AI) technologies to help prevent crime in advance. Based on behavior patterns, these systems will notify local police about potential offenders.

Cloud Walk, a company headquartered in Guangzhou, has been training its facial recognition and big data rating systems to track movements based on risk levels. Those who are frequent visitors to weapons shops or transportation hubs are likely to be flagged in the system, and even places like hardware stores have been deemed “high risk” by authorities.

A Cloud Walk spokesman told The Financial Times, “Of course, if someone buys a kitchen knife that’s OK, but if the person also buys a sack and a hammer later, that person is becoming suspicious.” Cloud Walk’s software is connected to the police database across more than 50 cities and provinces, and can flag suspicious characters in real time.

China is also using “personal re-identification” in crime prediction: identifying the same person in different places, even if they’re wearing different clothes. “We can use re-ID to find people who look suspicious by walking back and forth in the same area, or who are wearing masks,” Beijing University of Aeronautics and Astronautics professor of bodily recognition Leng Biao told The Financial Times. “With re-ID, it’s also possible to reassemble someone’s trail across a large area.”

China is, in many ways, the ideal place to use this kind of technology. The government has an extensive archive of data from citizen records and more than 176 million surveillance cameras. In other words, China has an embarrassment of riches when it comes to big data, and can train its AI systems very effectively, without any meaningful legal hurdles.

Predictive AI: Seeing Through the Chaos
Click to View Full Infographic

AI And Safety

These aren’t the only ways that China is extending its AI capabilities. The government just revealed a massive, well-organized, and well-funded plan to make China the global leader in AI by 2030. The nation deploys facial recognition in schools to counter cheating, on streets to fight jaywalking, and even in bathrooms to limit toilet paper waste. It should come as no surprise that the Chinese government would also employ these technologies to prevent crime — and maybe even predict it.

“If we use our smart systems and smart facilities well, we can know beforehand . . . who might be a terrorist, who might do something bad,” China’s vice-minister of science and technology Li Meng said to The Financial Times.

However you feel about China’s Minority Report-style plans, AI can also make the world safer. Although AI is certainly a potential surveillance tool, it can also be used to protect privacy, keep healthcare records private, secure financial transactions, and prevent hacking. AI is responsible for smart security cameras, robot guards, and better military technologies, and it is the reason self-driving cars could eventually eliminate up to 90 percent of traffic fatalities. In other words, while you might object to certain applications, it’s hard to argue against AI technology on the whole if you’re concerned with the future of safety and privacy, both online and off.

The post China’s “Minority Report” Style Plans Will Use AI to Predict Who Will Commit Crimes appeared first on Futurism.

China Aims to be a Global Frontrunner in AI by 2030

China’s AI Budget, Plan

China aims to be a global frontrunner in artificial intelligence (AI) by 2030, according to the government’s new plan. Released on Thursday, the national AI development plan lays out a timeline for the growth of the country’s core AI industries — the State Council estimates they will be worth over 150 billion yuan ($22.15 billion) by 2020 and 400 billion yuan ($59.07 billion) by 2025. This growth will ensure that the AI industry is a “new, important” driver of economic expansion in China by 2020. Critical areas of development will include intelligent robotics and vehicles, AI software and hardware, and augmented and virtual reality.

“Artificial intelligence has become the new focus of international competition,” the report said. “We must take the initiative to firmly grasp the next stage of AI development to create a new competitive advantage, open the development of new industries and improve the protection of national security.”

This plan is the government’s expression of China’s global ambition, a desire supported by its military, research centers, and technology sector, all of which are already investing heavily in AI, according to Bloomberg. In other words, the technologies that could cement China’s dominance are already in development. A PwC report from June indicates that AI will contribute as much as $15.7 trillion to worldwide output by 2030 — more than the current combined output of China and India. The report also shows that China stands to gain more from investing in AI than any other country due to its huge manufacturing industry.

Image Credit: Tumisu/Pixabay
Furthermore, according to Accenture Plc, AI could increase China’s annual growth rate in terms of gross value added by 1.6 percentage points, to 7.9 percent, by 2035. Gross value added is a close proxy for GDP, so this boost would add more than $7 trillion to the country’s economy.
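The arithmetic behind such projections is straightforward compounding: a small boost to the annual growth rate accumulates into a large difference in output over two decades. The sketch below is illustrative only; the rates passed in are placeholders, not Accenture's model:

```python
def extra_output(base_gdp, years, base_rate, boosted_rate):
    """Cumulative extra output from compounding a higher growth rate
    instead of a baseline rate over the same period."""
    return base_gdp * ((1 + boosted_rate) ** years - (1 + base_rate) ** years)

# Toy example: compare two growth paths for an economy of size 100
# over 18 years, with and without a 1.6-point boost to the rate.
gap = extra_output(100.0, 18, 0.063, 0.079)
```

Even without matching any particular forecast, the function shows why a seemingly modest percentage-point change compounds into trillions at national scale.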

Global AI Outlook

China is already well-situated globally in the realm of AI; the country has surpassed the US in the field of AI research. Experts such as East China University of Political Science and Law professor Gao Qiqi see AI significantly influencing the international community. “The positive economic ripples could be pretty substantial,” Hong Kong’s Standard Chartered Bank senior economist Kevin Lau told Bloomberg. “The simple fact that China is embracing AI and having explicit targets for its development over the next decade is certainly positive for the continued upgrading of the manufacturing sector and overall economic transformation.”

According to a December 2016 report, the largest share of the AI market is held by transportation and automotive, while the market for the healthcare end-user industry should grow fastest from 2016 to 2022, followed by the agriculture and media and advertising industries. Moreover, the report found that natural language processing and robotics are the biggest contributors to the overall AI market. It seems clear that countries investing heavily in these areas, such as China and India, will dominate the AI market. Countries that resist the technology and insist on protectionist policies that don’t work over the long-term are likely to fall behind.

The post China Aims to be a Global Frontrunner in AI by 2030 appeared first on Futurism.

Mark Cuban Says AI Will be The Biggest Disruption to Jobs We’ve Seen in 30 Years

Cuban’s Warning

Mark Cuban warned about the potential dangers that artificial intelligence (AI) poses to the workforce, asserting during a question-and-answer session at OZY Fest on Sunday that:

There’s going to be a lot of unemployed people replaced with technology and if we don’t start dealing with that now, we’re going to have some real problems.

Cuban added that he hasn’t seen an equal transformation to the workforce in recent memory:

We’re going through a transitional period where we’ll see more disruption driven by artificial intelligence than we’ve seen in the last 30 years.

It is the latest in a series of warnings that the sports tycoon has issued about the 21st century’s AI revolution. In February, Cuban tweeted that “Automation is going to cause unemployment and we need to prepare for it,” but, unlike others, he disagrees that universal basic income (UBI) is a solution, tweeting that it is “one of the worst possible responses” to the potential crisis.

Solutions to Automation?

Cuban joins other industry leaders in warning against AI. Bill Gates told the BBC that “the intelligence is strong enough to be a concern.”

Stephen Hawking has also weighed in on the debate, apocalyptically telling the Guardian that “the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”

AI Forecast for 2017
Click to View Full Infographic

However, while leading figures in the technology industry agree that AI will be highly disruptive, they vary on their solutions to the problem. In contrast to Cuban, who tweeted that we should optimize existing support networks by making them “more efficient so more money can be distributed with far less overhead,” Microsoft founder Bill Gates believes that taxing robots is a temporary solution. Gates sees UBI as a good long-term plan, although society is not ready for it yet.

Mark Zuckerberg, Founder and CEO of Facebook, is situated at the pro-UBI end of the spectrum, telling Harvard graduates that “We should explore ideas like universal basic income to make sure that everyone has a cushion to try new ideas.”

The post Mark Cuban Says AI Will be The Biggest Disruption to Jobs We’ve Seen in 30 Years appeared first on Futurism.

Google’s AI Fight Club Will Train Systems to Defend Against Future Cyberattacks

Reinforcing AI Systems

When artificial intelligence (AI) is discussed today, most people are referring to machine learning algorithms or deep learning systems. While AI has advanced significantly over the years, the principle behind these technologies remains the same. Someone trains a system to receive certain data and asks it to produce a specified outcome — it’s up to the machine to develop its own algorithm to reach this outcome.
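That training loop can be shown in miniature. The toy example below fits a line to data by gradient descent; the learned parameters are the "algorithm" the machine develops on its own, rather than anything a programmer wrote by hand:

```python
def fit_line(xs, ys, lr=0.05, steps=2000):
    """Learn w and b so that w*x + b approximates the training data."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated by y = 2x + 1; the system recovers w ~ 2 and b ~ 1
# purely from examples, never having been told the rule.
w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Deep learning systems do the same thing at vastly larger scale: millions of parameters adjusted by gradients until the outputs match the desired outcome.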

Alas, while we’ve been able to create some very smart systems, they are not foolproof. Yet.

Types of AI: From Reactive to Self-Aware [INFOGRAPHIC]
Click to View Full Infographic

Data science competition platform Kaggle wants to prepare AI systems for super-smart cyberattacks, and they’re doing so by pitting AI against AI in a contest dubbed the Competition on Adversarial Attacks and Defenses. The battle is organized by Google Brain and will be part of the Neural Information Processing Systems (NIPS) Foundation’s 2017 competition track later this year.

This AI fight club will feature three adversarial challenges. The first (non-targeted adversarial attack) involves getting algorithms to confuse a machine learning system so it won’t work properly. Another battle (targeted adversarial attack) requires training one AI to force another to classify data incorrectly. The third challenge (defense against adversarial attacks) focuses on beefing up a smart system’s defenses.
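To make the first challenge concrete, here is a toy non-targeted attack in the spirit of the fast gradient sign method, run against a simple linear classifier. The classifier, weights, and numbers are invented for illustration; competition entries attack deep networks, not toys like this:

```python
def classify(x, w, b):
    """Linear classifier: class 1 if w.x + b > 0, else class 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def adversarial_example(x, w, y_true, epsilon=0.6):
    """Nudge every feature by epsilon in the direction that most hurts
    the correct classification (the sign of the score's gradient)."""
    direction = -1 if y_true == 1 else 1
    return [xi + direction * epsilon * (1 if wi > 0 else -1)
            for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
x = [1.0, 1.0]                      # correctly classified as class 1
x_adv = adversarial_example(x, w, 1)
# The small, structured perturbation flips the predicted class.
```

The defense track of the competition is the mirror image: training systems whose decisions do not flip under such small, deliberately chosen perturbations.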

“It’s a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled,” Jeff Clune, a University of Wyoming assistant professor whose own work involves studying the limits of machine learning systems, told the MIT Technology Review.

Responsible AI Development

AI is actually more pervasive now than most people think, and as computer systems have become more advanced, the use of machine learning algorithms has become more common. The problem is that the same smart technology can be used to undermine these systems.

“Computer security is definitely moving toward machine learning,” Google Brain researcher Ian Goodfellow told the MIT Technology Review. “The bad guys will be using machine learning to automate their attacks, and we will be using machine learning to defend.”

Training AI to fight malicious AI is the best way to prepare for these attacks, but that’s easier said than done. “Adversarial machine learning is more difficult to study than conventional machine learning,” explained Goodfellow. “It’s hard to tell if your attack is strong or if your defense is actually weak.”

The unpredictability of AI is one of the reasons some, including serial entrepreneur Elon Musk, are concerned that the tech may prove malicious in the future. They suggest that AI development be carefully monitored and regulated, but ultimately, it’s the people behind these systems and not the systems themselves that present the true threat.

In an effort to get ahead of the problem, the Institute of Electrical and Electronics Engineers has created guidelines for ethical AI, and groups like the Partnership on AI have also set up standards. Kaggle’s contest could illuminate new AI vulnerabilities that must be accounted for in future regulations, and by continuing to approach AI development cautiously, we can do more to ensure that the tech isn’t used for nefarious means in the future.

The post Google’s AI Fight Club Will Train Systems to Defend Against Future Cyberattacks appeared first on Futurism.

Experts Want Robots to Have an “Ethical Black Box” That Explains Their Decision-Making

Ethical Black Box

Scientists Alan Winfield, professor of robot ethics at the University of the West of England in Bristol, and Marina Jirotka, professor of human-centered computing at Oxford University, believe robots should be fitted with an “ethical black box.” This would be the ethics equivalent of the aviation safety measure of the same name, designed to track a pilot’s decisions and enable investigators to follow those actions in the event of accidents. As robots leave the controlled settings of factories and laboratories to interact more with humans, safety measures of this nature will become increasingly important.

Winfield and Jirotka argue that robotics firms should emulate the example provided by the aviation industry, which owes its safety record not just to technology and design, but also to stringent safety protocols and accident investigation. That industry introduced both black boxes and cockpit voice recorders to ensure accident investigators would be able to determine both causes of crashes and obtain critical lessons in prevention and safety.

“Serious accidents will need investigating, but what do you do if an accident investigator turns up and discovers there is no internal datalog, no record of what the robot was doing at the time of the accident? It’ll be more or less impossible to tell what happened,” Winfield said to The Guardian. Applied in the context of robotics, an ethical black box would record all decisions, its bases for decision-making, movements, and sensory data for its robot host. The data provided by the black box could also assist robots in explaining their actions in language human users can understand, fostering better relationships and improving the user experience.
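A bare-bones version of such a recorder could look like the sketch below. The class and field names are our own invention for illustration, not a proposed standard:

```python
import json
import time
from collections import deque

class EthicalBlackBox:
    """Rolling log of a robot's decisions, analogous to a flight recorder."""

    def __init__(self, capacity=1000):
        self.entries = deque(maxlen=capacity)  # oldest entries roll off

    def record(self, decision, rationale, sensor_data):
        self.entries.append({
            "timestamp": time.time(),
            "decision": decision,       # what the robot did
            "rationale": rationale,     # why it chose that action
            "sensors": sensor_data,     # raw readings at that moment
        })

    def export(self):
        # Investigators receive an ordered, human-readable trace.
        return json.dumps(list(self.entries), indent=2)
```

The bounded buffer mirrors aviation practice: a fixed window of recent history is always available, and the export is legible to a human investigator, not just to the robot's own software.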

Laws and Ethics for Autonomous Cars
Click to View Full Infographic

Managing Ethics Of AI

Winfield and Jirotka are not the only experts concerned about managing the ethics of artificial intelligence (AI). Missy Cummings, a drone specialist and director of the Humans and Autonomy Lab at Duke University in North Carolina, told the BBC in March that oversight of AI is a major problem for which there is not yet a solution: “Presently, we have no commonly accepted approaches. And without an industry standard for testing such systems, it is difficult for these technologies to be widely implemented.”

In September of 2016, Amazon, Facebook, Google, IBM, and Microsoft formed the Partnership on Artificial Intelligence to Benefit People and Society. The coalition is focused on ensuring AI is deployed in ways that are ethical, fair, and inclusive. Apple joined in January, and many other tech companies have since joined the partnership as well.

Meanwhile, the outreach and charity organization the Future of Life Institute (FLI) has created the Asilomar AI Principles, a basic set of laws and ethics for robotics designed to ensure that AI remains beneficial to the future of humankind. FLI was founded by experts from DeepMind and MIT, and its Scientific Advisory Board includes Stephen Hawking, Frank Wilczek, Elon Musk, Nick Bostrom, and even Morgan Freeman.

That being said, if proactive thought combined with hard work from the sharpest minds in the industry is the best defense against any future problems with AI, we’re already in good shape.

The post Experts Want Robots to Have an “Ethical Black Box” That Explains Their Decision-Making appeared first on Futurism.

Google’s Newest AI is Turning Street View Images into Landscape Art

Google Art

Most of us are probably familiar with Google Street View, a feature of Google Maps that lets users see street-level images of the locations they look up. It’s both a useful navigational feature and a way to explore far-off regions just for fun. Engineers at Google are now taking Street View images one step further with the help of artificial intelligence (AI).

“Professional” picture above, Google AI picture below. Image Credit: Google Research Team

Hui Feng is one of several software engineers who are using machine learning techniques to teach a neural network how to scan Street View in search of exceptionally beautiful images. This AI then, on its own, “mimics the workflow of a professional photographer.”

This AI system acts as both artist and photo editor, recognizing beauty and the specific qualities that make for a “good” photograph. Despite the subjectivity of the task, the AI proved successful, creating professional-quality imagery from Street View images that the system itself located.

Artificially Intelligent Art

In the images below, you can see the steps the specially designed AI took to create one of these images. First, the AI selects a panoramic Street View photo; it then crops the image, adjusts light and coloration, and applies an appropriate filter. This four-step process, frequently performed by professional photographers, sounds simple, but it is a procedure that had not previously proven to be within AI’s capabilities.
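On a grayscale pixel grid, that workflow reduces to a few composable steps. The sketch below is a deliberately tiny stand-in for Google's actual system, with made-up crop coordinates and filter values:

```python
def crop(img, top, left, height, width):
    """Keep a rectangular window of the image."""
    return [row[left:left + width] for row in img[top:top + height]]

def adjust_brightness(img, factor):
    """Scale every pixel value, clamping to the 0-255 range."""
    return [[min(255, round(p * factor)) for p in row] for row in img]

def apply_filter(img, lookup):
    """A filter here is just a 256-entry lookup table for pixel values."""
    return [[lookup[p] for p in row] for row in img]

def auto_edit(panorama):
    """The article's workflow: crop a view, lift the light, then filter."""
    view = crop(panorama, 0, 0, 2, 2)
    view = adjust_brightness(view, 1.2)
    warming = [min(255, v + 10) for v in range(256)]
    return apply_filter(view, warming)
```

The hard part of Google's system is not these operations but learning *which* crop, exposure, and filter a professional would choose; the mechanics of applying them are as simple as this.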

The AI edits the pictures by cropping, adjusting saturation, and applying a mask. Image Credit: Google Research Team

Google’s many different AI programs have been exploring a wide variety of potential applications for the technology. From dabbling in online Go play to improving job hunting and even building AI that designs better AI than Google’s own engineers can, the company’s work has been at the forefront of the field.

But AI technologies are progressing faster and further than many expected, so much so that some AI systems, like the one described here, are capable of creating art. While robots may never make humans completely obsolete in artistic endeavors, this step forward marks a new era of technology.

The post Google’s Newest AI is Turning Street View Images into Landscape Art appeared first on Futurism.

AI Researchers Disagree With Elon Musk’s Warnings About Artificial Intelligence

Distorting the Debate?

The fear of super-intelligent machines is as real as it gets for Tesla and SpaceX CEO and founder Elon Musk. He has spoken about it many times, but perhaps never in stronger terms than when he told U.S. governors that artificial intelligence (AI) poses “a fundamental risk to the existence of human civilization.” The comment caught the attention not just of the governors present, but also of AI researchers, who are not very happy about it.

“While there needs to be an open discussion about the societal impacts of AI technology, much of Mr. Musk’s oft-repeated concerns seem to focus on the rather far-fetched super-intelligence take-over scenarios,” Arizona State University computer scientist Subbarao Kambhampati told Inverse. “Mr. Musk’s megaphone seems to be rather unnecessarily distorting the public debate, and that is quite unfortunate.”

Kambhampati, who also heads the Association for the Advancement of AI and is a trustee for the Partnership for AI, wasn’t the only one who reacted to Musk’s most recent AI warning. Francois Chollet and David Ha, deep learning researchers at Google, also took to Twitter to defend AI and machine learning (ML).


University of Washington in Seattle researcher Pedro Domingos simply tweeted a “sigh” of disbelief.

Is There Really an AI Threat?

Both Kambhampati and Ha commented on the premise that Musk, because of his work with OpenAI, Tesla’s self-driving technologies, and his recent Neuralink project, has access to cutting-edge AI and therefore knows what he’s talking about. “I also have access to the very most cutting-edge AI and frankly I’m not impressed at all by it,” Ha said in another tweet.

Kambhampati, meanwhile, pointed to the Obama administration’s 2016 AI report, which made timely but positive recommendations about AI regulations and policies. The White House report didn’t have “the super-intelligence worries that seem to animate Mr. Musk,” Kambhampati said to Inverse, suggesting that such concerns may not be well-founded.

Our Robot Overlords: Top 10 Books Highlighting the Rise of AI
Click to View Full Infographic

It seems unfair, however, that Musk is getting all the attention when he’s not the only person who has spoken about the threat of super-intelligence. Famed physicist Stephen Hawking has repeatedly warned of an AI apocalypse. The real question is: should we really fear AI?

With the current state of AI, there seems to be little to fear. While the technology has seen tremendous advances recently, and some experts think we’re closer to reaching the technological singularity (the point when computers surpass human-level intelligence), current AI isn’t as advanced as the doomsday robots of science fiction. Nor is it clear that it ever will be.

Notable futurist and “singularity enthusiast” Ray Kurzweil even thinks the singularity won’t be something we should fear. If anything, what’s more frightening is how we make use of AI. That’s why the best course right now is to pursue AI research with clear goals and guidelines. So Musk is right in saying that regulation is necessary, but Kambhampati, Chollet, and Ha are also right that there’s no need for alarmism.

The post AI Researchers Disagree With Elon Musk’s Warnings About Artificial Intelligence appeared first on Futurism.

Expert Asserts that Retraining is the Best Response to AI Automation

Retraining as Retaining

Jeremy Auger, chief strategy officer at the educational technology company D2L, has asserted in a recent post that the way for humans to maintain their relevance in the labor force in the face of artificial intelligence (AI) and automation is ongoing, career-long retraining. He joins a chorus of voices preaching the same message.

Auger argues that AI represents an unprecedented challenge to the work force on account of its cerebral capabilities, which could see it replacing the human workforce “in the cognitive space as well as the physical one.” He argues that

learning can’t end with graduation. To be competitive, companies will need to step up and provide education opportunities themselves, while encouraging self-directed learning so they can ensure that their workers are continually acquiring new skills.

Firstly, he argues that we need to change what people learn. Rather than attempt to match AI in ability, we should instead aim to cultivate the skills that AI is unlikely to develop, such as “innovation and creativity: seeing connections in seemingly unrelated things.” This is the impetus behind other related programs like IBM’s P-Tech, which seeks to give children today a more tech-oriented education that befits tomorrow’s automation-driven world.

He also argues that we should shift the onus of education away from parents and schools and toward ourselves and the companies we work for, which should “take responsibility for continually providing opportunities for their employees to develop.” This is a view shared by David Kenny, IBM’s senior vice president for Watson, who wrote in an article for Wired that we should be

updating the Federal Work-Study program, something long overdue, [which] would give college students meaningful, career-focused internships at companies rather than jobs in the school cafeteria or library

Is there Another Answer?

However, retraining and re-education are not the be-all and end-all answer to the ever-growing issue of automation. Others champion different solutions to AI joining the workforce, a shift Stephen Hawking says will cause “job destruction deep into the middle classes” and which Oxford University researchers claim puts 47 percent of US jobs at risk.

Bill Gates has proposed taxing robots and corporations in order to provide for people whose jobs are being replaced: he has asserted that “Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, Social Security tax, all those things. If a robot comes in to do the same thing, you’d think that we’d tax the robot at a similar level.”

Others have proposed a system of universal basic income (UBI) — an income prescribed by the government to any citizen — to give individuals the money that they would have earned through a job replaced by automation. People would then be able to work to augment their pay, but would always be able to survive regardless of whether they are employed.

Mark Zuckerberg is an advocate of the UBI strategy, viewing it as a platform for innovation rather than a sad consequence of being surpassed by robots. He told Harvard graduates that “We should explore ideas like universal basic income to make sure everyone has a cushion to try new ideas.”

There is a spectrum of views on the best response to the increasing automation of the working world, although none of them seems to guarantee the best outcome for both AI and humans. It is important, however, that we continue to have these conversations now rather than face them after the problem has progressed much further.

The post Expert Asserts that Retraining is the Best Response to AI Automation appeared first on Futurism.

Elon Musk: Unregulated AI Could Be The “Biggest Risk We Face as a Civilization”

A Fundamental Risk

As a guest speaker at the 2017 National Governors Association Summer Meeting, entrepreneur and innovator Elon Musk covered a number of topics, including artificial intelligence (AI). During his talk, the Tesla and SpaceX CEO and founder urged the U.S. governors present on Saturday to set up regulations for the development of AI.

Musk is very familiar with the topic of AI, as he’s spoken about it a number of times. To be clear, he isn’t afraid of AI itself. What scares him is what could happen if AI is left unchecked. In fact, he calls it potentially the “biggest risk we face as a civilization.”

One solution, he said, is early regulation. “Normally, the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years, a regulatory agency is set up to regulate that industry,” said Musk. “It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”

Preparing for an AI Future

Several of the governors asked Musk how it would be possible to regulate an industry that’s so new, and he replied that the first step is getting a firm grasp on it: “The first order of business would be to try to learn as much as possible, to understand the nature of the issues.” That’s what Musk has been doing through his non-profit AI research company, OpenAI.

White House AI Report: Everything You Need to Know [INFOGRAPHIC]
Click to View Full Infographic

Musk’s isn’t the only initiative with the goal of ensuring AI is ethical, but even if these all fail, the serial entrepreneur has a few other potential plans of action against rogue AI.

One is Neuralink, which would give humanity the ability to keep up with AI by essentially incorporating the technology into ourselves. Another is SpaceX’s plan to reach Mars. If successful, this would ensure humanity’s survival by giving us a potential second home in case AI takes over the Earth.

While there’s still time, however, Musk is pushing for proactive regulation. “Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he told the governors. “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

The post Elon Musk: Unregulated AI Could Be The “Biggest Risk We Face as a Civilization” appeared first on Futurism.

Australia Joins the Tech Train With AI and Blockchain

The Future of Australian Business

Australia is currently seeing the first manifestations and ramifications of two of the business world’s key technologies: blockchain and artificial intelligence (AI). Both could fundamentally change how financial and professional operations are conducted in Australia.

Currently, blockchain technologies are being explored through research and startups looking to implement the purely digital ledger. The Commonwealth Scientific and Industrial Research Organisation (CSIRO), the Australian government’s federal agency for scientific research, released a report last month exploring how blockchain could support the country, along with the risks it presents. In addition, Melbourne hosted an annual blockchain summit in June, while the newly formed Blockchain Association of Australia held its first meeting on July 13.

Businesses in Australia have also started implementing and trialing blockchain technologies, and three companies in particular are using the ledger for food- and water-related purposes. Civic Ledger is researching blockchain solutions for government water markets; TBSX3 is using the ledger as a method of exchange that ensures food exports are genuine and authentic; and AgriDigital is using it to centralize and organize the agriculture supply chain from field to shelf. Meanwhile, Othera has proposed using blockchain to launch a lending business.

AI has also come into vogue within the country. Telstra, a key player in the Australian telecommunications and media industries, has announced that it will begin introducing AI into its operations. The company’s CEO, Andrew Penn, labeled the technology “perhaps [the] most significant driver of technology innovation” during his speech at AMCHAM, explaining that the highest priority for AI at Telstra is to “improve customer advocacy.”

The impact of AI on Australia’s business world was highlighted in a recent speech in Sydney made by Peter Norvig, the research director of Google, who urged every company to appoint at least one member of staff to assess the potential benefits of AI to their company. He said, “You have to have somebody who understands how to make the call and, from that, the rest will flow.”

A Worldwide Movement

Australia’s federal and corporate progress concerning the integration of blockchain and AI are symptomatic of a worldwide growth, development, and implementation of the technologies.

Blockchain technologies are being used to investigate currencies and transfer systems on a national and international basis. Multiple countries have launched efforts to understand the complexities of digitizing a national currency: China has developed a prototype that could become the progenitor of a real-life equivalent, Singapore is exploring a “tokenised” form of its currency in collaboration with fintech firms, and Sweden and India are both considering cryptocurrencies as part of larger plans to become cashless societies.
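Whatever form a national digital currency takes, the underlying ledger idea is simple: each block commits to the hash of the previous one, so history cannot be silently rewritten. A minimal sketch (real systems add consensus, signatures, and much more):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "tx": transactions})
    return chain

def verify(chain):
    # Tampering with any earlier block breaks every later link.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))
```

The chained hashes are what make the ledger "purely digital" yet trustworthy: editing an old transaction changes its block's hash, which no longer matches the commitment stored in the next block.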

A similar worldwide change can also be seen in the adoption of AI, which is being accepted into a variety of institutions at an exponential rate. The Canadian government has developed a $125 million budget to fund investigations and research into AI, while the US government has been holding meetings to plan and consider the effects of implementing the tech.

This revolution, however, is being felt most directly in technology circles. Google recently announced that it is focusing substantial resources on developing AI that can take over many of the company’s other functions. But the nature of the transition, or perhaps the threat, is reflected best by the Partnership on Artificial Intelligence to Benefit People and Society, a collaboration between tech giants that aims to ensure AI is implemented responsibly on a worldwide basis.

Disclosure: Several members of the Futurism team, including the editors of this piece, are personal investors in a number of cryptocurrency markets. Their personal investment perspectives have no impact on editorial content.

The post Australia Joins the Tech Train With AI and Blockchain appeared first on Futurism.

Toyota is Giving $100 Million in a Venture Capital Unit for AI Startups

Great innovation often springs from brilliant minds; that’s no secret. And, of course, having enough money to bring ideas to life makes a big difference. Toyota is merging brilliance with a big budget by funding some new cutting-edge technology.

On Thursday, the Japanese car maker launched the venture capital (VC) arm of its Toyota Research Institute. The Silicon Valley-based Toyota AI Ventures, as its name implies, invests in startups working on artificial intelligence (AI), robotics, autonomous driving technologies, data analytics, and cloud computing. In total, Toyota plans to allot about $100 million to fund qualified startups.

“What’s clear is that the automotive industry, and Toyota in particular, must connect to this ecosystem of startup innovation to compete in this disruptive era,” Toyota Research Institute VP for data and business development Jim Adler wrote in a blog announcement. “Toyota AI Ventures is part of our efforts to connect to this ecosystem.”

With so many recent developments in AI, particularly in self-driving systems, Toyota knows that it doesn’t have the answer to everything. “We realize we don’t know everything,” Adler added. “But we do know that humility is a prerequisite for success.” Teaming up with the right startups could be just the success Toyota is working toward.


The Solution to Automation in the Workforce Could Be Retraining

Retraining is the New Recruiting

Employers are grappling with how to handle the new wave of digital labor that artificial intelligence (AI) will provide, which requires a huge number of technical workers such as coders to fill what are now known as “new collar jobs.” The US Department of Labor states that hundreds of thousands of such positions are currently vacant. If AI is set to automate roles that only humans could fill previously, then individuals who can develop, streamline, or augment AI are in top demand.

However, supply is scarce, a shortage amplified by the fact that gifted coders may not even be aware of the industry; ideal candidates are thin on the ground. This has caused companies to offer high salaries and attractive perks to promising candidates. To give a little perspective, jobs that require coding now sit in the top income quartile.

Due to this shortage, and with apocalyptic predictions concerning the effect of the technology’s proliferation — with it possibly replacing 850,000 workers by 2030 — employers are turning to new models of integration. One of the key developments is a Silicon Valley model of retraining existing workers rather than searching for new talent.

Rather than enter into bidding wars for talent, or risk important positions remaining unfilled, employers have found that retraining employees offers a comparatively quick solution. It also comes with the benefit of not having to acclimate a new employee to the intricacies and principles of a business. As Matt Norton, VP of Sales Engineering at Box, said, “Enabling employees from the customer support team to retrain as engineers allowed us to fill open technical roles faster and also retain the institutional and product knowledge our best-performing employees had already developed.”

This comes with the benefit of having multifaceted employees who can adapt to changes in demands in different sectors of a company on a daily basis.

The Mutual Benefit

In short, retraining creates a mutually beneficial scenario in which the employee develops new skills (which 61 percent of Americans under thirty view as necessary), retains their job, and offers better customer service thanks to broader knowledge. The employer gains a multifaceted employee capable of working in a number of areas, does not have to offer exorbitant salaries to attract talent, and does not have to deal with the problems of unfilled positions.

The mutual benefit within the company then leads to a better situation for customers which is amplified by the benefits AI can provide — these include greater efficiency, collecting data for analysis (which 44 percent of executives believe is artificial intelligence’s biggest benefit), and ingenious solutions based on deep learning.

OneReach found that “80% of executives believe artificial intelligence improves worker performance and creates jobs,” although precisely how it will do so remains an enigma. AI will affect the entire spectrum of the job market, from retail to white-collar positions. Retraining is one possible solution that applies to every industry, potentially generating benefits across the entire business world — it is a partial solution for the damage the introduction of AI to the workforce could cause.

Another solution being proposed is restructuring the education system to train people to fit these roles from the start, rather than retraining them. Over the last five years, IBM has been testing a six-year high school program that synthesizes traditional education with work experience and mentoring initiatives. Former President Obama called the idea “outstanding.”

AI is set to change the world radically, so it is important that ideas such as retraining and changing models of education are being developed in order to stop workers getting the raw end of an integration that will, ultimately, benefit customers, employees, and employers.


Microsoft Takes On Google and DeepMind with AI Research Lab

Microsoft Research AI

Microsoft has created a new research lab which will focus on general-purpose artificial intelligence (AI) technology development. The lab will be called Microsoft Research AI, and will capitalize on the company’s existing AI expertise while pursuing new hires from related fields like cognitive psychology. The lab will also seek out academic partnerships, including a formal collaboration with MIT’s Center for Brains, Minds and Machines. Located within Microsoft’s Redmond HQ, the lab will ultimately be home to a team of more than 100 AI scientists exploring learning, natural language processing, perception systems, and other areas.

The purpose of combining these disciplines and striving toward more general AI will be to develop a single system that can master a broad array of challenges and tasks. This mirrors a goal that other tech companies like Google are pursuing. This kind of system could ultimately plot out the most efficient route for a road trip, relate to a human conversation partner’s sense of humor, or even optimize a budget. This broader focus is in contrast with the more common and specific AI systems we see now, like those that perform facial recognition tasks.

Image Credit: Microsoft
Microsoft Research AI will also be creating an AI ethics oversight panel. It will advise on any and all AI work across the company, keeping Microsoft in line with current industry trends. Microsoft, Amazon, DeepMind, Facebook, Google, and IBM have already formed a partnership for the ethical development of AI, and, like Microsoft, DeepMind and Google each have in-house ethics boards for their AI work.


The Race to Prove “Quantum Supremacy” May Be Won in the Next Year

Disclaimer: Futurism only supports products that we trust and use. This post is in partnership with Abundance 360, and Futurism may get a small percentage of sales. Want to take a class with Peter Diamandis? Click here to learn more!

Massive Disruption

Next year, we may see the launch of the first true quantum computers. The implications will be staggering. This blog aims to answer three questions:

  1. What are quantum computers?
  2. What are their implications?
  3. Who’s working on them?

There’s a lot to unpack here, so hang tight, and let’s jump in!

What Is Quantum Computing?

Moore’s Law (or the exponential growth of integrated circuits) actually refers to the fifth paradigm of computation. Here’s the list of the underlying technologies: (1) Electromechanical; (2) Relay; (3) Vacuum Tube; (4) Transistor; and (5) Integrated Circuit.

Quantum computers may well be the sixth paradigm, given that they work in a fashion that is entirely different from “classical” computers. A classical computer performs operations using classical “bits” — these “bits” can be in only one of two states: “0” or “1.”

In contrast, a quantum computer uses “quantum bits,” or “qubits.” Thanks to a principle called quantum superposition, these qubits can have a value of “0,” “1,” or both “0 and 1” at the same time. This capability allows quantum computers to solve certain types of complex problems that are intractable for conventional computers — frankly, some of the most exciting problems facing society today, as you’ll see below.

For a tutorial on quantum computers, check out this short video:

The power of qubits is that they scale exponentially. A 2-qubit machine allows you to do four calculations at once. A 3-qubit machine can do eight calculations. A 4-qubit machine gives you 16 calculations, all simultaneously.

By the time you get to 300 qubits, you’ve got a computer that can do more “calculations” than there are atoms in the universe.
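That scaling claim is easy to check in code: simulating an n-qubit register classically means tracking 2^n amplitudes. Below is a minimal plain-Python sketch (an illustration, not a real quantum simulator) of why 300 qubits outstrips any classical bookkeeping.

```python
# Plain-Python illustration (not a real quantum simulator): an n-qubit
# register is described by 2**n amplitudes, so the cost of classically
# tracking the state doubles with every qubit added.

def uniform_superposition(n):
    """State vector after putting each of n qubits into equal superposition
    (a Hadamard gate on every qubit): all 2**n basis states share amplitude."""
    dim = 2 ** n
    amp = (1.0 / dim) ** 0.5          # squared amplitudes sum to 1
    return [amp] * dim

for n in (2, 3, 4):
    print(n, "qubits ->", len(uniform_superposition(n)), "basis states")

# Near 300 qubits, the state vector has more entries than the estimated
# number of atoms in the observable universe (roughly 10**80).
print(2 ** 300 > 10 ** 80)
```

The doubling per qubit is the whole story: each added qubit multiplies the size of the classically simulated state by two.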

That’s why the blog TechTarget described quantum computing this way: “Development of a quantum computer, if practical, would mark a leap forward in computing capability far greater than that from the abacus to a modern day supercomputer, with performance gains in the billion-fold realm and beyond.”

What Are the Implications of Quantum Computing?

The implications of true quantum computing at scale are staggering and of extraordinary impact to society today (which is why I’m tracking it).

In my opinion, here are the Top 5 Applications:

  1. Machine Learning: Much of machine learning is about “pattern recognition.” Algorithms crunch large datasets to find signals in the noise, and the goal is to maximize the number of comparisons you make between data to find the best models to describe that data. With quantum computing, we’ll be able to do this processing orders of magnitude more effectively than with classical computing. Quantum computing will allow you to compare much, much more data in parallel, simultaneously, and all permutations of that data, to discover the best patterns that describe it. This will lead to fundamentally more powerful forms of AI, much more quickly than we expect. Expect quantum computing to cause a positive inflection point (upward) for the speed at which the world develops AI (which, by the way, is why Google is working so hard on it).
  2. Medicine: Quantum computing will also allow us to model complex molecular interactions at an atomic level. This will be particularly important for medical research and drug discovery. Soon we’ll be able to model all 20,000+ proteins encoded in the human genome and start to simulate their interactions with models of existing drugs or new drugs that haven’t been invented yet. Based on the analysis of these drug interactions, we’ll be able to find cures for previously incurable diseases and hopefully accelerate the time to market for new drugs. Using quantum computer simulations will be the way we design and choose our next generations of drugs and cancer cures.
  3. Chemistry (and Climate Change): Worried about the climate crisis? Wondering what we can do about it? Quantum computers may be our newest tool to understand what is going on and to fight it. They will allow us to unlock “simulation-driven” solutions, perhaps design new catalysts that actually capture carbon from the atmosphere and turn it into new and valuable products at low cost and energy use.
  4. Material Science & Engineering: Because we can simulate atomic interactions, we’ll explore and invent entirely new, better materials. We might find better superconductors, better magnets, materials that will allow us to create much higher energy density batteries, and so on. Since 2011, the U.S. federal government has granted over $250 million to the Materials Genome Initiative in an effort to “discover, manufacture, and deploy advanced materials twice as fast, at a fraction of the cost.”
  5. Biomimetics, Energy Systems, & Photovoltaics: Scientists believe that much of the world is built atop quantum systems. Processes like photosynthesis, for example, are likely dependent on quantum mechanical systems. Thus, as we look to the natural world for inspiration to build better energy systems or stronger materials, we’ll only fully realize their potential when we can model these processes with quantum computers. This will lead to many advances and discoveries across the board.

Bottom Line: When quantum computing pans out, we’ll be able to control the very building blocks of the universe.

The question is who is going to figure it out first…

Who’s Working on Quantum Computing?

There’s a race going on — a race to prove something called “quantum supremacy.”

Quantum supremacy is the demonstration that a quantum computer can perform a calculation beyond the practical reach of any classical computer: in essence, the test that validates that the machine you have is in fact a quantum computer.

In the U.S., three major players are in the game right now:

  1. Google
  2. IBM
  3. Rigetti Computing, a startup out of Silicon Valley

A potential fourth is D-Wave Systems. They’ve developed chips with qubits, but these haven’t yet been conclusively proved to operate as a quantum computer.

Both Rigetti Computing and Google believe they will reach “quantum supremacy” in the next 12 to 18 months.

Think about that: the next one to two years. The revolution is coming fast.

To put this into perspective, I had a chance to catch up with Chad Rigetti, the CEO of Rigetti Computing. Below is a picture of the most powerful “classical” computer on the planet, Tianhe-2 in Guangzhou, China.


It costs $400 million.

The computer burns about 20 megawatts of electricity — enough to power 20,000 households.

And it’s about half the size of a football field, with 3.2 million Intel cores.

President Obama, in an attempt to drive America’s return to high-performance computing supremacy, declared that the U.S. would build an exascale computer, 30 times more powerful than Tianhe-2, by 2020.

The problem is this: With current technology, it will cost a billion dollars and will require a nuclear power plant to run the supercomputer.

“We need to do this,” explains Chad Rigetti. “But there is another path. Quantum computing.”

Below is a picture of two developmental systems in Rigetti’s lab in Berkeley, CA.


The big white cans about the size of a human are cooling systems, and inside each cooling system is a single quantum chip.

In these machines today, there is a 5-qubit processor.

The crazy part: A single chip with about 50 to 60 qubits on it would be more powerful than the entire Tianhe-2, a half-a-football-field-sized machine…

This is what quantum computing unlocks.

Rigetti is rapidly developing quantum integrated circuits and the software platform that will allow developers to build on top of them.

Along with efforts at Google, IBM, D-Wave, and many other companies and research labs around the world, we are rapidly approaching a quantum computing revolution.

Get ready.



DeepMind Has Taught an AI to Do Something Quite Remarkable

Google’s artificial intelligence (AI) subsidiary DeepMind has released a paper detailing how its AI agents have taught themselves to navigate complex virtual environments, and the results are weird, wonderful, and often extremely funny.

The agents in the simulations were programmed with a set of sensors — these allowed them to know things like when they were upright or if their leg was bent — and a drive to continue moving forward. Everything else that you see in the video — the agents’ jumping, running, using knees to scale obstacles, etc. — is the result of the AI working out how best to continue moving forward through reinforcement learning.
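DeepMind used deep reinforcement learning over simulated bodies, but the underlying loop can be sketched far more simply. The toy tabular Q-learner below (a generic illustration of the technique, not DeepMind’s actual system) is rewarded only for forward progress and learns to always pick the “forward” action.

```python
import random

random.seed(1)

# Tabular Q-learning on a 5-cell line: the agent is rewarded only for moving
# forward, mirroring in miniature the "drive to continue moving forward"
# described above. Illustration only, not DeepMind's deep-RL setup.
N_STATES = 5
ACTIONS = ("forward", "back")
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    for _ in range(20):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                 # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])  # exploit
        s2 = min(s + 1, N_STATES - 1) if a == "forward" else max(s - 1, 0)
        r = 1.0 if a == "forward" else -1.0
        best_next = max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, "forward" has the higher learned value in every state.
print(all(q[(s, "forward")] > q[(s, "back")] for s in range(N_STATES)))
```

DeepMind’s agents do the same thing at vastly greater scale: reward for forward progress, trial-and-error exploration, and value estimates updated from experience — except the “table” is a deep network and the actions are joint torques.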

The complexity of the agents’ movements is a testament to how far AI has come in recent years. While agents in simulations like these often break down when faced with unfamiliar environments, DeepMind’s have utilized startlingly sophisticated movements to traverse obstacles.

These agile AIs aren’t the first to impress, though. A DeepMind AI has previously demonstrated superhuman performance on an object recognition task, and a team at the University of Cambridge has developed an AI system capable of performing more abstractly cerebral tasks, such as reading emotions and detecting pain levels.

The groundwork being laid by experiments such as these is pivotal to the integration of AI into society. Eventually, researchers will be able to incorporate these advancements into the programming of future AI robots, which will be able to navigate around your home or the streets, ushering in the age of truly seamless robot/human interaction.


AI is Changing Everything – Even Science Itself

AI Particle Physics

Many might associate current artificial intelligence (AI) abilities with advanced gameplay, medical developments, and even driving. But AI is already reaching far beyond even these realms. In fact, AI is now helping particle physicists to discover new subatomic particles.

Particle physicists began integrating AI in the pursuit of particles as early as the 1980s, as the process of machine learning suits the hunt for fine patterns and subatomic anomalies particularly well. But, once an unexplored and novel technique, AI is now a fully integrated and standard part of everyday life within particle physics.

Large Hadron Collider. Image Credit: Vieamusante / Wikimedia

Of course, continuing research in this field wouldn’t move forward at all without the physicists themselves, challenging and exploring our understanding of the natural world. But AI is becoming an ever more useful tool. For example, the Large Hadron Collider (LHC), the largest particle accelerator in the world, smashes together protons in an attempt to discover new particles and more fully comprehend the universe. But, with so many collisions — approximately 600 million per second — the amount of data created by the LHC is immense.

Pushpalatha Bhat, a physicist at Fermilab, described the problem in an interview with Science Magazine: “This is the proverbial needle-in-the-haystack problem…That’s why it’s so important to extract the most information we can from the data.” This extraction is where AI comes in handy; indeed, it contributed to the 2012 discovery of the Higgs boson at the LHC.
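The needle-in-a-haystack problem can be illustrated with a toy event filter (all numbers below are made up for illustration; this is not a real LHC analysis): a rare “signal” bump hides in an enormous smooth background, and even a simple selection cut raises the signal fraction by orders of magnitude. Machine-learned classifiers refine exactly this idea.

```python
import random

random.seed(2)

# Toy needle-in-a-haystack filter: 100 rare "signal" events form a narrow
# bump near 125 on top of 100,000 smooth "background" events, and a simple
# window cut on the reconstructed feature boosts the signal fraction.
background = [random.expovariate(1 / 60.0) for _ in range(100_000)]
signal = [random.gauss(125.0, 2.0) for _ in range(100)]

events = [(m, "sig") for m in signal] + [(m, "bkg") for m in background]
selected = [(m, label) for m, label in events if 120.0 <= m <= 130.0]

def purity(evts):
    # Fraction of events that are signal.
    return sum(1 for _, label in evts if label == "sig") / len(evts)

print(f"signal fraction before cut: {purity(events):.4%}")
print(f"signal fraction after cut:  {purity(selected):.2%}")
```

A real analysis replaces the hand-picked window with a classifier trained on many features at once, which is why machine learning suits the hunt for fine patterns so well.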

AI Everywhere

While AI has not replaced the world’s scientists, and never will, this unparalleled tool is being applied in ways that many could never have predicted. It is, as previously mentioned, helping researchers to push the boundaries of understanding. It’s helping us to create modes of transportation that not only make daily life easier, but save countless lives.


AI is proving to be an essential component in the current quest to travel to and explore Mars, allowing probes to be controlled remotely and trusted to make changes in behavior according to a changing environment. And, even beyond medical advances, AI is making treatments more enjoyable for both patients and healthcare providers, altering an often-intimidating system.

AI technologies are also being designed that are capable of creating art. From paintings to music, we are learning that advanced machine learning algorithms are more than just the new face of industry. This makes a lot of people uneasy. Images of Will Smith in I, Robot come into view, the voice of HAL 9000 from 2001: A Space Odyssey starts speaking, and our science fiction nightmares seem realized.

But, while AI is not yet a perfectly integrated part of daily life, it is certainly pushing us forward. So, who knows, thanks to AI, we may soon really put humans onto the red planet and particle physicists might smash protons just right and reveal more about our universe than we could have ever hoped to know.


AI Is Revolutionizing the Relationship Between Companies and Their Customers

AI for You

Narrow artificial intelligence (AI) is fundamentally changing the approaches, methods, and results of customer management, triggering a renaissance in companies’ relationships with their clients in almost every sector and revealing previously unimaginable opportunities. Narrow AIs excel at doing one task well; so far, this is the only kind of AI humans have created.

The first sector it is being put to use in is customer relationship management (CRM) — the part of a company that aims to optimize a business’s relationships with its customers in order to maximize retention rates. AI can analyze data at a rate humans cannot match, yielding insights that previously required huge amounts of manpower. A prime example is Salesforce Einstein, which studies data from sales, e-commerce activity, emails, Internet of Things devices, and social media to make company-specific recommendations.


AI can also work in specific sectors, such as helping to streamline outbound marketing by automating banal, time-consuming tasks. Initiatives such as Growbots analyze data customers have provided to develop curated prospect lists, plan campaigns, and send follow-up emails at optimal times. The software also uses natural language processing to analyze responses and compile feedback for the sales team. As their website states, “What the steam engine did for the industrial worker, AI has done for the sales rep.”

Some of the burden of engaging with customers directly can be offloaded to AI chatbots. Using digital labor for conversations with customers can free up the time of customer support staff, allowing them to deal with more complex and serious customer issues faster. One example is Amelia, a chatbot with a 55 percent success rate that learns from previous interactions to answer customer questions — and if the bot detects hostility or cannot provide an answer, it immediately transfers the customer to a human agent.
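Amelia’s implementation is proprietary, but the routing pattern described (answer when confident, hand off on hostility or uncertainty) can be sketched generically; every name, keyword, and threshold below is invented for illustration.

```python
# Generic sketch of the "answer or escalate" pattern used by support
# chatbots. All keywords, the toy knowledge base, and the threshold are
# made up; this is not Amelia's actual system.
HOSTILE_WORDS = {"angry", "furious", "terrible", "lawsuit"}

def classify(message, kb):
    """Return (answer, confidence) by keyword overlap with a toy knowledge base."""
    words = set(message.lower().split())
    best, best_score = None, 0.0
    for topic, answer in kb.items():
        keywords = set(topic.split())
        score = len(words & keywords) / len(keywords)
        if score > best_score:
            best, best_score = answer, score
    return best, best_score

def route(message, kb, threshold=0.5):
    if set(message.lower().split()) & HOSTILE_WORDS:
        return "human"        # hostility detected: hand off immediately
    answer, confidence = classify(message, kb)
    if answer is None or confidence < threshold:
        return "human"        # no confident answer: hand off
    return answer             # the bot answers directly

kb = {
    "reset password": "Use the 'Forgot password' link on the login page.",
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
}
print(route("how do i reset my password", kb))
print(route("i am furious about this terrible service", kb))
```

Production systems swap the keyword matcher for a trained intent classifier and a sentiment model, but the escalation logic keeps this same shape: a confidence gate in front of a human fallback.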

AI and Business

This technology, then, has the potential to revolutionize customer interaction by attracting new customers and retaining current ones. AI provides a way to tackle the huge supply of information created by the 21st century’s data tidal wave, as well as to meet the increased demand for products and services that businesses face. The result is a mutually beneficial relationship between company and customer, in which communication and support are expedited.


Other aspects of businesses are also being improved through the use of AI. Mya has been developed to make the job application process quicker, with the capability to screen out unsuitable candidates, schedule interviews, and request further information from potential employees. AI has also started to appear in retail settings, with customers being automatically charged when they leave Amazon’s grocery store — removing frustrating lines from the shopping process.

Many high profile scientists and tech giants, including Stephen Hawking, have expressed concern that applications of AI to the business sector will cause untold damage to the job market, possibly leading to millions being unemployed. However, Louis Monier, the founder of AltaVista, says that this is another example of technology removing less desirable tasks that work necessitates; it is “just like we have always done, from farm animals pulling the plow to steam power and so on.”


AI Can Now Produce Better Art Than Humans. Here’s How.

An AI Picasso

Scientists are using artificial intelligence (AI) to build a “new system for generating art” and testing the results on the public. The system, called a generative adversarial network (GAN), works by pairing two neural networks: a generator, which produces images, and a discriminator, which judges them. The discriminator makes its judgments based on the 81,500 example paintings and the knowledge of different artistic styles (such as Baroque, Impressionism, and Modernism) it was taught. The generator creates an image, the discriminator criticizes it, and the conversation leads to a work of art.

The scientists changed the way that AI usually produces art by having the generator create only works that did not fall into a preexisting category of painting — they did this by “maximizing deviation from established styles and minimizing deviation from art distribution,” according to the abstract.
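The generator/discriminator tug-of-war can be sketched at toy scale. The following is a deliberately minimal GAN (two one-parameter “networks” over numbers rather than images; an illustration of the general technique, not the paper’s model): the discriminator learns to score real data highly, and the generator learns to shift its output until the discriminator can no longer tell the difference.

```python
import math
import random

random.seed(0)

def sigmoid(v):
    # Overflow-safe logistic function.
    if v >= 0:
        return 1.0 / (1.0 + math.exp(-v))
    ev = math.exp(v)
    return ev / (1.0 + ev)

# Deliberately tiny "networks": real paintings are stand-in numbers near 4.0,
# the generator is a learnable offset b applied to noise, and the
# discriminator is a two-parameter logistic scorer D(x) = sigmoid(a*x + c).
a, c = 0.0, 0.0   # discriminator parameters
b = 0.0           # generator parameter
lr = 0.05

def D(x):
    return sigmoid(a * x + c)

def G(z):
    return z + b

for step in range(2000):
    real = random.gauss(4.0, 0.5)          # sample of "real art"
    fake = G(random.gauss(0.0, 0.5))       # generator's attempt

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = D(real), D(fake)
    a += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. try to fool the discriminator.
    fake = G(random.gauss(0.0, 0.5))
    b += lr * (1 - D(fake)) * a

# The generator's offset drifts toward the real data's mean (about 4.0).
print(round(b, 1))
```

In the paper’s system the same dynamic plays out between deep convolutional networks over images, with the added objective that pushes the generator away from the known styles it was taught.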

Mark Riedl, an associate professor at the Georgia Institute of Technology in Atlanta, said that he liked the “idea that people are starting to push GANs out of their comfort zone — this is the first paper I’ve seen that does that.”

After the paintings were produced, the scientists conducted a survey with members of the public in which they mixed the AI works with paintings produced by human artists. They found that the public preferred the works by AI, and thought they were more novel, complex, and inspiring.

The Work of Art in the Age of Mechanical Creation

Paul Valéry, who Walter Benjamin used as a starting point for his essay “The Work of Art in the Age of Mechanical Reproduction,” wrote in 1931: “We must expect great innovations to transform the entire technique of the arts, thereby affecting artistic invention itself and perhaps even bringing about an amazing change in our very notion of art.”

He was referring to the modernist period, in which new techniques and ideologies changed the way art was perceived. We may be experiencing a similar upheaval in the art world. Benjamin’s criticism of the exact copies that could be produced by the second half of the 20th century centered around the idea that “even the most perfect reproduction of a work of art is lacking in one element: its presence in time and space, its unique existence at the place where it happens to be.”

This AI project possesses that element. It does not just copy or manipulate, as Google’s Deep Dream does, but produces original works of art, having been actively programmed to be novel. These pieces have more in common with the output of Aiva, an AI composer whose music also could not be distinguished from a human’s, than with Deep Dream’s.

We are entering an age where AI is becoming increasingly ubiquitous and competent in almost every field — Elon Musk thinks it will exceed humans at everything by 2030 — but art has been viewed as a pantheon of humanity, something quintessentially human that an AI could never replicate.

Studies such as this show that our artistic leanings may not be off limits after all. With AI already conquering humans at our own games, like chess, how long will it be before we create a Picasso program that is superior to any current human artist — and immortal to boot?


IBM is Challenging Congress’s Apocalyptic Perceptions of AI

IBM to Meet Washington

IBM is taking a stand for artificial intelligence (AI). The technology giant is lobbying Washington with the hope of challenging the view of “fearful prophets envisioning massive job loss, or even an eventual AI ‘overlord’ that controls humanity” — as David Kenny, the vice president for IBM Watson, wrote in an open letter to Congress.

He went on to write that the “real disaster would be abandoning or inhibiting cognitive technology before its full potential can be realized.” Kenny is also participating with the bipartisan Artificial Intelligence Caucus.

Kenny’s arguments center on three core principles. The first is that past technologies like the bar code scanner and the ATM have vastly improved efficiency and driven job creation. The second is that taxing or otherwise inhibiting the progress of AI will cost the U.S. its competitive advantage; instead, there should be a change in education and training to prepare the country for the technology. The third is that any AI company should be transparent about its systems’ decision-making processes and promote a principle of individual data governance.

IBM is woven into the history of AI’s development. Its engineers pioneered some of the earliest AI systems, including Deep Blue, which was responsible for beating world chess champion Garry Kasparov — one of AI’s greatest achievements to date. Currently, IBM’s Watson is one of the leading cognitive computing systems in the world, with applications stretching from diagnosing disease, to writing cookbooks and creating recipes, to tackling the data-heavy tasks of the federal government.

The Ethics of AI

IBM’s effort to inform Congress about AI is not the first high-profile venture to do so. Numerous think tanks, meetings, and summits have convened to discuss the ethics of AI and promote responsible integration of the technology.


Last year, representatives from Google, Amazon, Microsoft, IBM, and Facebook formed the Partnership on AI to Benefit People and Society with the goal of developing a possible set of guidelines for AI development. There have also been more individual attempts to investigate AI — such as those by Pierre Omidyar, the founder of eBay, and Reid Hoffman, co-founder of LinkedIn, both of whom have invested millions in AI research.

Despite the minds and the money devoted to solving the problem, the ethics of AI remains a remarkably sticky moral bog, which involves questions of personhood, sentience, and rights that have troubled philosophers for centuries.

However, IBM’s efforts represent a positive step toward a pragmatic approach: solving a problem before we are in the midst of it. Our regulation of — and relationship with — AI is likely to govern our future. We can take solace that industry leaders are at least taking it seriously and thinking about the implications of their decisions.


These Technologies Are Critical to Today’s Businesses, According to Survey

Critical Roles

Leaders in the business world are recognizing the importance of new technologies like artificial intelligence (AI) and the Internet of Things (IoT). A survey by the CNBC Global CFO (Chief Financial Officer) Council found that more than one in four CFOs (26 percent) consider AI and machine learning critical to their company, and nearly half (44 percent) think the technology is “very important.”

Image Credit: CNBC

The survey is a key indicator of trends and viewpoints in big business, taking into account the views of some of the largest companies worldwide, which operate in a variety of sectors and collectively have a market cap of more than $4 trillion.

AI’s role in business may be even more significant than this survey suggests, as there is overlap between this category and the second most critical technology in the survey, the IoT. Most IoT systems utilize AI, particularly to learn, streamline themselves, and self-correct.

Although Bitcoin and cryptocurrencies are making headlines — such as Bitcoin prices soaring to over $2,000 and China testing a cryptocurrency for potential national implementation — they are not a major concern among these massive corporations. About 23 percent said that technology related to virtual currencies was “not at all important,” and only 3 percent said that it was critical.

Technological Future

These results show a real-world reflection of the trends in business, cutting through the marketing, jargon, and verbiage surrounding technologies. They show that AI has not just become critical in some industries, but has permeated the business world itself, changing the very way companies operate.

It is being applied across the entire spectrum of production and consumption: from helping companies hire employees, to replacing cashiers in retail stores, to taking over the jobs of money-management workers.

Blockchain did not rank highly on this year’s survey, but this may be because it is only having a fundamental effect in a small number of sectors — such as humanitarian aid and some aspects of the food industry — and because these statistics show the current world, not a forecast of the one to come.

Companies are extremely interested in blockchain technology, but it has not become crucial to them yet because of issues of risk and integration. Blockchain likely will be ranked much more highly on the survey in the coming years. The writing is on the wall already: Wall Street is trying to implement it by 2018, IBM has launched its own cryptocurrency, and cities like Dubai want it to be the default for their economies by 2020.

The post These Technologies Are Critical to Today’s Businesses, According to Survey appeared first on Futurism.

IBM’s AI Will Make Your Hospital Stay More Comfortable

Dr. Watson, Coming Soon

IBM’s Watson has done everything from beating human champions at Jeopardy! to diagnosing a patient’s undetected leukemia, saving her life. Now, the artificial intelligence (AI) system is poised to make life in a hospital a lot easier for patients and staff alike.

Right now, some medical staff spend almost 10 percent of their working hours answering basic patient questions about physician credentials, lunch, and visiting hours, Bret Greenstein, the vice president of Watson’s Internet of Things (IoT) platform, tells CNET.

These staff members also have to tend to very basic needs that don’t require medical expertise, such as changing the temperature in rooms or pulling the blinds. If assisted by some kind of AI-powered device, these workers could spend their time more effectively and focus on patient care.

That’s where Watson comes in. Philadelphia’s Thomas Jefferson University Hospitals have teamed up with IBM and audio company Harman to develop smart speakers for a hospital setting. Once activated by the voice command “Watson,” these speakers can respond to a dozen commands, including requests to adjust the blinds, thermostat, and lights or to play calming sounds.

Watson’s World

Watson is no stranger to the healthcare industry. In addition to providing a correct diagnosis for the woman mentioned above, Watson was able to recommend treatment plans at least as well as human oncologists in 99 percent of the cases it analyzed, and it even provided options missed by doctors in 30 percent of those cases.

Watson will soon be working in many dermatologists’ offices, too, and while its integration into the medical field hasn’t been free of problems, it is still the AI with the broadest access to patient data — the key to better diagnoses and greater predictive power.

Watson has had a notable impact on various other industries, as well.

OnStar Go uses Watson, and it will be making driving simpler in more than 2 million 4G LTE-connected GM vehicles by the end of this year. Watson is also no stranger to retail, having been incorporated into operations at Macy’s, Lowe’s, Best Buy, and Nestle Cafes in Japan, and the AI is even helping to bring a real-life Robocop to the streets of Dubai.

Watson is branching out into creative work, too, which was previously assumed to be off-limits to AIs. The system successfully edited an entire magazine on its own and has also created a movie trailer.

What the AI will do next is anyone’s guess, but it’s safe to say that Watson probably has a more exciting and ambitious five-year plan than most humans.

The post IBM’s AI Will Make Your Hospital Stay More Comfortable appeared first on Futurism.

Ray Kurzweil: There’s a Blueprint for the Master Algorithm in Our Brains

Today’s artificial intelligence (AI) systems are, no doubt, considerably advanced. There are now intelligent machine learning algorithms capable of driving vehicles, assisting doctors, and even engaging in art and in almost-human conversation. However, even AI systems built on deep artificial neural networks are still far from mimicking what the human brain is capable of.

Renowned futurist and Google engineer Ray Kurzweil thinks that the key to human-level AI is a master algorithm, and he believes that the brain holds a blueprint to this. The famous inventor and thinker, known for his mostly accurate predictions about future technologies, said that the brain’s neocortex — that part of the brain that’s responsible for intelligent behavior — consists of roughly 300 million modules that recognize patterns. These modules are self-organized into hierarchies that turn simple patterns into complex concepts.
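
Kurzweil’s description of simple pattern recognizers composing into complex concepts can be illustrated with a toy sketch. This is only an illustration of the hierarchical idea, not Kurzweil’s actual model or any real neocortical algorithm; the module and feature names are invented:

```python
class PatternModule:
    """Fires when all of its child patterns are present in the input."""

    def __init__(self, name, children):
        self.name = name
        self.children = children  # leaf feature names (str) or child modules

    def fires(self, features):
        return all(
            child in features if isinstance(child, str) else child.fires(features)
            for child in self.children
        )


# Low-level strokes combine into letters, and letters into a word.
letter_a = PatternModule("A", ["slant-left", "slant-right", "crossbar"])
letter_t = PatternModule("T", ["vertical-bar", "top-bar"])
word_at = PatternModule("AT", [letter_a, letter_t])

seen = {"slant-left", "slant-right", "crossbar", "vertical-bar", "top-bar"}
print(word_at.fires(seen))  # True: every level of the hierarchy fires
```

Each module here runs the same recognition rule on different inputs, which is the flavor of the "same algorithm, many modules" claim.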

Despite neuroscience advancing by leaps and bounds over the years, we still haven’t quite figured out how the neocortex works. In a video posted by Singularity University, Kurzweil argued that these modules “all have the same algorithm.” “The mathematics of thinking, I think, is being understood,” he added, “but I would not claim that we understand it fully. But we’re getting more and more hints as we learn more and more about the human brain.”

Perhaps soon enough, we’ll figure out that master algorithm and understand our brains better. Then we’ll either be able to make better AI — or AI that’s better than us. It’s worth finding out.

The post Ray Kurzweil: There’s a Blueprint for the Master Algorithm in Our Brains appeared first on Futurism.

Google Exec: Better Computers Will Make “The Number of Jobs Go Up, Not Down.”

Technology Moves Fast

Speaking at the recent Viva Technology conference in Paris, Google parent company Alphabet’s chief executive Eric Schmidt said he’s optimistic for the future. “We’re entering what I call the age of abundance,” he said, as quoted by The Guardian. “And during the age of abundance, we’re going to see a new age […] the age of intelligence.”

Schmidt was the headline speaker at the conference, where French president Emmanuel Macron also spoke. The Alphabet boss talked about how the information revolution drives human progress. “By 2020, most human beings will have access to the internet. When you have everyone harnessed with this information, the world gets more interconnected. It gets stronger. There’s more knowledge sharing. There’s more freedom and there’s more openness.”

As a result, Schmidt considers one thing to be crucial. “I’ve come to believe that science and critical thinking really do matter. Even more so now in the political world that we have in the United States and in other areas of the world,” he said, referencing his meeting with U.S. president Donald Trump and other technology CEOs a few days before the conference.

Pushed Forward by AI

Perhaps the most obvious factor that drives this technological change is artificial intelligence (AI). Schmidt noted that, as with anything new, people tend to be wary of change — especially with something most don’t understand like AI. He pointed out, however, that AI and machine learning actually present a world of opportunities for a host of industries.

“The largest taxi company has no taxis, that’s Uber. The largest accommodation company has no real estate, that’s Airbnb. The largest phone company has no infrastructure, that’s Skype. The most valuable retailer has no inventory, that’s Alibaba. The largest movie theater has no movie theaters, that’s Netflix.”

There’s going to be resistance to these changes, which will be huge disruptions, especially from industry giants likely to defend the status quo. However, Schmidt also pointed out that these businesses will change, largely driven by AI and increased connectivity. “This age of intelligence will allow you to build a company that’s far more efficient,” Schmidt said at the conference. “Computers, instead of just doing analysis, will really be able to help you.”

In this case, instead of fearing what will come from intelligent automation — especially that it will lead to unemployment — we can look forward to it, Schmidt said.

You’ll take people plus computers and the computers will make the people smarter. If you make people smarter, their wages go up, not down. And the number of jobs will go up, not down. What will happen to human interaction? I think there’ll be more. It used to take months for discovery and new developments to be understood. Now they can occur simultaneously all around the world.

The post Google Exec: Better Computers Will Make “The Number of Jobs Go Up, Not Down.” appeared first on Futurism.

Neuroscientists Say Forgetting Things May Be an Essential Part of Our Brain Function

The Usefulness of Forgetting

We’ve all had moments of forgetfulness, and forgetting something can easily have a negative impact on our day — or even our lives. Some even consider being forgetful to be a sign of damage in the brain, particularly in the area tasked with storing and retaining information. While this may be true in the case of memory disorders, Canadian neuroscientists from the University of Toronto propose that the typical moments of forgetfulness with which most of us are familiar are actually the brain’s way of making us smarter — and that those moments may even make our lives better.

In a study published in the journal Neuron, researchers offered an alternative hypothesis as to why the brain purposefully works to forget information. Though not entirely a new field of study, the neurobiology of forgetting has been relatively unexamined, as co-author Blake Richards explained during an interview with NPR’s Andrea Hsu.

“Generally, the focus for the last few decades in neuroscience has been the question of how do the cells in our brains change themselves in order to store information and remember things.”

Their research found that the brain’s ability to store huge amounts of information can often be hindered by keeping memories that may be irrelevant for our everyday existence. “In fact, I would argue they’re not just irrelevant, but they can be detrimental to living our daily lives,” Richards said. Information that isn’t necessary for us to evolve and survive, then, isn’t necessary for the brain to retain.

“Our memories ultimately are there to help us make decisions, to act in the world in an intelligent manner,” Richards went on to explain. “Evolution cares about whether or not you are an individual who’s making appropriate decisions in the environment to maximize your chances of survival.”

Memory and Artificial Intelligence

Researchers argue that forgetting is actually a function of memory. Ironic, right? But when you think about it, it actually makes a lot of sense.

“[W]hen the goal of memory is to help you make intelligent decisions in a complex, changing world, then the best memory system will be a memory system that forgets some stuff. So a healthy, properly functioning memory system is one that does engage in some degree of forgetting.”

Much of Richards’ work on memory and forgetfulness stems from his application of theories about artificial intelligence (AI) and how the brain learns. He said that in the world of AI there’s a phenomenon called over-fitting, in which a machine ends up storing so much information that it hinders its ability to make intelligent decisions.
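
Over-fitting is easy to demonstrate: a model with too much capacity memorizes its noisy training data and then generalizes worse than a simpler one. A minimal sketch using NumPy, with invented data (the polynomial degrees and noise level are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)  # noisy samples of y = 2x
x_test = np.linspace(0.05, 0.95, 10)                 # unseen points
y_test = 2 * x_test                                  # the true underlying signal

def errors(degree):
    """Mean squared error on the training data and on held-out data."""
    model = np.poly1d(np.polyfit(x_train, y_train, degree))
    return (np.mean((model(x_train) - y_train) ** 2),
            np.mean((model(x_test) - y_test) ** 2))

train_simple, test_simple = errors(1)    # a line: smooths over the noise
train_complex, test_complex = errors(9)  # interpolates every noisy point

# The complex model "remembers" the training data almost perfectly, yet
# predicts unseen points worse than the model that forgot the noise.
```

In other words, forgetting the irrelevant detail (the noise) is exactly what makes the simpler model better at acting in the world — the parallel Richards draws with human memory.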

Richards hopes that by understanding the neurobiology of forgetting, we’ll be able to design AI systems capable of interacting with the world and making decisions the same way human beings do. Luckily, there are many studies currently focused on trying to make AI systems — or artificial neural networks — behave like human brains. One crucial aspect we’re still working on is how to facilitate memory development in AI. By understanding the nuances of human memory, it may be possible to design AI systems that distinguish between information that’s trivial — and what’s necessary for survival.

The post Neuroscientists Say Forgetting Things May Be an Essential Part of Our Brain Function appeared first on Futurism.

An Advanced AI Has Been Deployed to Fight Against Hackers

Guarding A Global Grid

It takes a truly massive network of hundreds of thousands of computers to help scientists around the world unravel the mysteries of the Universe, which is the purpose of the CERN grid (CERN is the European Organization for Nuclear Research; the acronym comes from its original French name, Conseil Européen pour la Recherche Nucléaire). Naturally, however, particle physicists aren’t the only ones who want access to that kind of computing power. Hackers are also interested in CERN’s grid, and CERN scientists are going beyond standard cybersecurity measures and deploying artificial intelligence (AI) to stay protected.

Image Credit: CERN

It is the job of any cybersecurity effort to detect unusual activity and identify possible threats. Of course, systems can scan for the signatures of known worms and viruses, but malware changes too fast for humans to keep up with it. This is where AI and machine learning come in. CERN scientists are teaching their AI system to distinguish between safe and threatening behavior on the network and to take action when it detects a problem.
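
CERN has not published the details of its system, but the underlying idea — learn what normal activity looks like, then flag strong deviations — can be sketched with a simple statistical baseline. The traffic features and threshold below are invented for illustration:

```python
import numpy as np

class BaselineAnomalyDetector:
    """Learn the statistics of normal activity; flag strong deviations."""

    def __init__(self, threshold=4.0):
        self.threshold = threshold  # maximum tolerated z-score

    def fit(self, normal_activity):
        self.mean = normal_activity.mean(axis=0)
        self.std = normal_activity.std(axis=0) + 1e-9  # avoid division by zero
        return self

    def is_threat(self, sample):
        z_scores = np.abs((sample - self.mean) / self.std)
        return bool(z_scores.max() > self.threshold)

# Hypothetical features per host: [requests/sec, outbound MB/sec].
rng = np.random.default_rng(42)
normal = rng.normal(loc=[100.0, 5.0], scale=[10.0, 1.0], size=(1000, 2))
detector = BaselineAnomalyDetector().fit(normal)

print(detector.is_threat(np.array([105.0, 5.5])))   # False: ordinary load
print(detector.is_threat(np.array([100.0, 80.0])))  # True: exfiltration-like spike
```

A production system would learn far richer features and adapt over time, but the trade-off is the same one Niemelä describes below: the tighter the threshold, the more false alarms.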

A Universe Of Data

CERN is home to the Large Hadron Collider (LHC) as well as its massive computer grid. Scientists use the LHC to study high-speed collisions between subatomic particles — in 2017 alone, they collected an estimated 50 petabytes of data about these particles. CERN provides this critically important data to universities and laboratories around the world for research.

The LHC and CERN itself require a massive amount of data storage and computing power, which is what prompted the creation of the Worldwide LHC Computing Grid. The grid connects computers from more than 170 research facilities in more than 40 countries, and works somewhat like a power grid, providing computing resources to facilities based on demand. This presents a unique cybersecurity challenge: keeping a massive, globally distributed grid secure while leaving its computing power and storage unimpeded.

Machine learning can train a system to detect potential threats while retaining the flexibility that it needs to provide computing power and storage on demand. F-Secure senior security researcher Jarno Niemelä told Scientific American that the biggest challenge for the project will be developing algorithms that can accurately distinguish between normal and malicious network activity without causing false alarms. For now, the AI upgrades are still being tested. If they work well protecting just the part of the grid that ALICE (A Large Ion Collider Experiment) uses, the team can deploy AI cybersecurity measures throughout the system.

The post An Advanced AI Has Been Deployed to Fight Against Hackers appeared first on Futurism.

Google’s DeepMind Is Now Capable of Creating Images from Your Sentences

The folks at Google’s DeepMind are hard at work bringing the world the latest developments in artificial intelligence (AI). Their latest breakthrough shows that their AI is capable of creating photorealistic pictures from human input in the form of sentences.

This is the latest development in the use of AI to do some truly amazing things with pictures. In February, Google Brain scientists developed a way to “enhance” photographs much like the way you might see in a science fiction movie like Blade Runner or a network procedural like one of the many CSIs. Using PixelCNN, the machine was able to turn low-resolution photos into high-resolution ones with an impressive approximation.

Now that same technology is being used to turn text into pictures. The researchers found that a more detailed prompt would deliver better results than a less detailed one. For example, the prompt of “A yellow bird with a black head, orange eyes, and an orange bill” returned a highly detailed image. The algorithm is able to pull from a collection of images and discern concepts like birds and human faces and create images that are significantly different than the images it “learned” from.

The post Google’s DeepMind Is Now Capable of Creating Images from Your Sentences appeared first on Futurism.

An Algorithm Designed 7 Million One-Of-A-Kind Labels for a Nutella Campaign

Individually Different

If you’re a fan of Nutella, you’re likely more interested in what comes in the company’s jars than what goes on them. However, if you’re a fan who lives in Italy, you recently had reason to be excited about both.

After previously playing around with the caption on their jars, Nutella decided earlier this year that they wanted to do something interesting with the label designs on their containers. To that end, the Ferrero-owned brand launched a campaign in February that saw 7 million unique Nutella jar designs hit shelves, and all 7 million were sold.

The labels featured combinations of lines and shapes, polka dots and zigzags, and no, the company didn’t employ an army of artists to make their campaign happen — all they needed to create these “Nutella Unica” jars was a single algorithm.

An algorithm is a set of instructions a computer follows to accomplish a task. In other words, lines of code were able to create these unique designs as well as a typical human designer could. Once written, Nutella’s algorithm combined various patterns and colors into each design on its own and then gave each jar a customized ID code to ensure no two labels were alike.
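
Ferrero has not released its algorithm, but the general recipe — randomly combine patterns and colors, then stamp each result with a unique ID — can be sketched in a few lines. The pattern names, palettes, and ID scheme here are invented:

```python
import random
import uuid

PATTERNS = ["zigzag", "polka-dot", "stripe", "splatter", "wave"]
PALETTES = [("red", "white"), ("teal", "gold"), ("violet", "cream")]

def generate_labels(n, seed=7):
    """Produce n label designs, each stamped with a distinct ID code."""
    rng = random.Random(seed)
    labels = []
    for _ in range(n):
        labels.append({
            "id": uuid.uuid4().hex[:12],   # customized code: no two alike
            "pattern": rng.choice(PATTERNS),
            "colors": rng.choice(PALETTES),
        })
    return labels

batch = generate_labels(1000)
print(len({label["id"] for label in batch}))  # 1000: every ID is unique
```

Scaling this recipe to 7 million jars is mostly a printing problem, not a programming one — which is what made the campaign economical.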

Intelligent Automation

Nutella’s label-designing algorithm isn’t the first automated program to dabble in the artistic — indeed, it seems few professions remain beyond the reach of automation.

We now have intelligent machines capable of driving vehicles, working in warehouses, and even flipping burgers. Some are working in the insurance industry and in the field of law; artificial intelligences (AIs) might even replace human CEOs.

Such developments fuel people’s fears of a robot workforce revolution, which would result in massive unemployment. However, more moderate experts think this needn’t be the case.

The intelligent automation of this century could revolutionize professions the same way automation in the days of the Industrial Revolution opened up new opportunities for people. While some people might lose their jobs, automation might just give them better ones in exchange.

The post An Algorithm Designed 7 Million One-Of-A-Kind Labels for a Nutella Campaign appeared first on Futurism.