Category: machine learning

Machine Learning Identifies Breast Lesions Likely to Become Cancer

High-Risk Breast Lesions

A new study reveals that a machine learning tool can help identify which breast lesions, already classified as “high-risk,” are likely to become cancerous. The researchers behind the study believe the technology could eliminate unnecessary surgeries. Breast lesions are classified as high-risk after a biopsy reveals they have a higher chance of developing into cancer. Surgical removal is typically the recommended treatment for these lesions because of that increased risk, even though many of them do not pose an immediate threat. In “less immediate” cases, surgery may be unnecessary, and follow-up imaging or other treatments may be the preferred course of action — but only if there is a reliable way of differentiating between the lesions.

Machine learning could drastically improve medical treatment for breast lesions. Image Credit: Radiological Society of North America

“There are different types of high-risk lesions,” Dr. Manisha Bahl, study author and radiologist at Harvard Medical School and Massachusetts General Hospital (MGH), said in a press release. “Most institutions recommend surgical excision for high-risk lesions such as atypical ductal hyperplasia, for which the risk of upgrade to cancer is about 20 percent. For other types of high-risk lesions, the risk of upgrade varies quite a bit in the literature, and patient management, including the decision about whether to remove or survey the lesion, varies across practices.”

A Machine Learning Solution

A team of MGH breast imaging experts and researchers from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) collaborated to study how a machine learning tool could be used to discern which high-risk lesions are more or less likely to turn into cancer. Machine learning allows an artificial intelligence (AI) system to learn from past experience and improve its performance as a result, similar to the way humans learn.

The model analyzed traditional risk factors, like lesion histology and patient age, along with more novel features, such as keywords in biopsy pathology reports. The team trained the machine learning model on patients with biopsy-confirmed high-risk lesions who had either undergone surgery or had two years of imaging follow-up after the biopsy and high-risk determination. In that group, a total of 1,006 high-risk lesions were identified, and 115 of them, or 11 percent, later developed into cancer.

The researchers trained the model on two-thirds of the high-risk lesions and then tested it on the 335 lesions that remained. The model correctly predicted 37 of the 38 lesions in the test set that were upgraded to cancer, or 97 percent. The team found that, had the model been used, it would have helped avoid nearly one-third of the surgeries performed on lesions that turned out to be benign. The model also found that the keyword terms “severely” and “severely atypical” in pathology reports were associated with a greater risk of upgrade to cancer.
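The study’s exact pipeline isn’t published in the article, but the general shape of such a model can be sketched: structured risk factors combined with bag-of-words features from the pathology report text, trained on two-thirds of the lesions and tested on the rest. The file name, column names, and choice of classifier below are illustrative assumptions, not details from the MGH/MIT work.

```python
# Illustrative sketch only: the data file, column names, and classifier are
# assumptions for demonstration, not details taken from the MGH/MIT study.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical table: one row per biopsy-confirmed high-risk lesion.
lesions = pd.read_csv("high_risk_lesions.csv")  # columns: age, histology, report_text, upgraded

features = ColumnTransformer([
    ("age", "passthrough", ["age"]),
    ("histology", OneHotEncoder(handle_unknown="ignore"), ["histology"]),
    # Bag-of-words over the free-text pathology report, so terms such as
    # "severely atypical" can carry predictive weight.
    ("report", CountVectorizer(ngram_range=(1, 2), min_df=5), "report_text"),
])

model = Pipeline([
    ("features", features),
    ("clf", RandomForestClassifier(n_estimators=300, random_state=0)),
])

# Mirror the article's setup: train on roughly two-thirds, test on the rest.
X_train, X_test, y_train, y_test = train_test_split(
    lesions.drop(columns="upgraded"), lesions["upgraded"],
    test_size=1 / 3, stratify=lesions["upgraded"], random_state=0)

model.fit(X_train, y_train)
# Fraction of truly upgraded lesions in the held-out third that the model flags.
print("upgrades detected:", model.score(X_test[y_test == 1], y_test[y_test == 1]))
```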

“Our goal is to apply the tool in clinical settings to help make more informed decisions as to which patients will be surveilled and which will go on to surgery,” Dr. Bahl said in the release. “I believe we can capitalize on machine learning to inform clinical decision making and ultimately improve patient care.” This is just one more example of how machine learning is transforming how we treat, manage, and prevent disease.


Google’s Machine Learning Software Has Learned to Replicate Itself

Google’s AutoML

Back in May, Google revealed its AutoML project: artificial intelligence (AI) designed to help it create other AIs. Now, Google has announced that AutoML has beaten its human AI engineers at their own game by building machine-learning software that’s more efficient and powerful than the best human-designed systems.

An AutoML system recently broke a record for categorizing images by their content, scoring 82 percent accuracy. While that’s a relatively simple task, AutoML also beat the human-built system at a more complex task integral to autonomous robots and augmented reality: marking the locations of multiple objects within an image. For that task, AutoML scored 43 percent versus the human-built system’s 39 percent.

These results are meaningful because even at Google, few people have the requisite expertise to build next-generation AI systems. It takes a rarefied skill set to automate this area, but once that is achieved, it will change the industry. “Today these are handcrafted by machine learning scientists and literally only a few thousands of scientists around the world can do this,” Google CEO Sundar Pichai said, as reported by WIRED. “We want to enable hundreds of thousands of developers to be able to do it.”

Image Credit: Google

Much of this metalearning work amounts to imitating human neural networks and feeding more and more data through those networks. This isn’t — to use an old saw — rocket science. Rather, it’s a lot of plug-and-chug work that machines are well suited to once they’ve been trained. The hard part is imitating the structure of the brain in the first place, and at scales appropriate for taking on more complex problems.
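AutoML’s actual controller is a large reinforcement-learning system, but the core loop it automates (propose an architecture, train it, evaluate it, keep the best) can be illustrated with a toy random search. The search space, dataset, and scoring below are stand-ins chosen only to make the sketch self-contained.

```python
# Toy stand-in for architecture search: random search over a tiny space of
# fully connected networks. The real AutoML controller is a reinforcement-
# learning system; only the propose/train/evaluate/keep-best loop is shared.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best_score, best_arch = 0.0, None
for trial in range(20):
    # "Propose" an architecture by sampling a depth and layer widths.
    arch = tuple(random.choice([32, 64, 128]) for _ in range(random.randint(1, 3)))
    net = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    net.fit(X_tr, y_tr)                      # train the candidate
    score = net.score(X_val, y_val)          # evaluate it
    if score > best_score:                   # keep the best seen so far
        best_score, best_arch = score, arch

print(f"best architecture {best_arch}, validation accuracy {best_score:.3f}")
```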

The Future of Machine-Built AI

It’s still easier to adjust an existing system to meet new needs than it is to design a neural network from the ground up, but this research suggests that may be a temporary state of affairs. As it becomes easier for AIs to design new systems of increasing complexity, it will be important for humans to play a gatekeeping role. AI systems can easily make biased connections by accident — such as associating ethnic and gendered identities with negative stereotypes. However, if human engineers spend less time on the grunt work involved in creating these systems, they’ll have more time to devote to oversight and refinement.

Ultimately, Google aims to hone AutoML until it can function well enough for programmers to use it in practical applications. If the company succeeds, AutoML is likely to have an impact far beyond the walls of Google. According to WIRED, Pichai said at the same event that “we want to democratize this,” meaning the company hopes to make AutoML available outside Google.


New Breakthrough Allows Machines to Literally Predict the Behavior of Molecules

Simplifying the Complex

Though much noise has been made of what’s still to come from artificial intelligence (AI), the technology has already changed our daily lives. Machine learning-powered image recognition, text analysis, and language translation tools allow us to navigate the world in previously unimagined ways, and our mobile devices can now predict so much of our behavior based on our past actions.

Now, an international, interdisciplinary team of researchers has devised a way to use machine learning to do something far more complex than foresee a smartphone user’s next move. They’ve built a machine that can predict molecular behavior, a feat that previously required very complex quantum calculations. Their study has been published in Nature Communications.

 

To create this system that can predict molecular behavior, the researchers trained an algorithm using a small sample set featuring a simple molecule called malonaldehyde. Then, they had the machine predict the complex chemical behaviors of this molecule. The researchers compared those predicted simulations with their current chemical understanding of malonaldehyde and found that the machine was able to learn how the molecule would behave using the limited data it had been trained on.
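The article doesn’t reproduce the team’s method, so the sketch below only shows the general idea: learn a mapping from molecular configurations to energies from a small training sample, then predict energies for unseen configurations. The inverse-distance descriptor, the kernel ridge model, and the data files are generic placeholders rather than the study’s actual approach.

```python
# Generic sketch: learn a configuration-to-energy mapping from a small sample
# of simulation data. The descriptor, model, and data files are placeholders,
# not the method used in the study.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def descriptor(positions):
    """Flatten inverse pairwise distances into a fixed-length feature vector."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    iu = np.triu_indices(len(positions), k=1)
    return 1.0 / dists[iu]

# Hypothetical training data: coordinates and energies from a short ab initio
# trajectory of a small molecule such as malonaldehyde (9 atoms).
coords = np.load("malonaldehyde_coords.npy")      # shape (n_frames, 9, 3)
energies = np.load("malonaldehyde_energies.npy")  # shape (n_frames,)

X = np.array([descriptor(frame) for frame in coords])
model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=0.05)
model.fit(X[:1000], energies[:1000])   # train on a small sample...

# ...then predict energies for configurations the model has never seen.
print(model.predict(X[1000:1010]))
```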

“By identifying patterns in molecular behavior, the learning algorithm or ‘machine’ we created builds a knowledge base about atomic interactions within a molecule and then draws on that information to predict new phenomena,” researcher Mark Tuckerman of New York University explained in a press release.

Data-Driven Models

This work is yet another example of AI’s ability to impact a wide variety of industries, with molecular science joining everything from medical research to psychology and behavioral science. The research demonstrates how machine learning methods can be used to perform difficult tasks of all types so long as the systems are given sufficient data.


The researchers expect that this ability to predict molecular behavior could greatly contribute to the development of pharmaceuticals, as well as simulate molecular designs crucial for improving the performance of today’s new battery technologies, solar cells, and digital displays — basically, anything that used to rely on complex quantum mechanical calculations to model atomic or molecular interactions can benefit from their work.

While their machine does make it possible to model this behavior without intricate calculations, streamlining that complicated task is just the jumping-off point, according to study co-author Müller: “Now we have reached the ability to not only use AI to learn from data, but we can probe the AI model to further our scientific understanding and gain new insights.”


AI-Powered Google Clips Camera Decides When to Take a Photo So You Don’t Have To

Google Clips is Your New AI Photographer

Google is rolling out an entirely new product alongside its recent Pixel and Google Home releases: the Google Clips camera. It is visually unassuming: a small, flat, square, wireless camera that can clip onto your shirt pocket (or whatever you dream up) and take photos.

What makes this camera different is that you’re not always the one taking the photos; its artificially intelligent (AI) system is. While you might be having precious family moments and not paying attention to the camera you clipped in the corner of the room, the AI behind Google Clips will choose ideal moments to capture photos of its own volition.

Google’s product manager Juston Payne asked, “How do you let yourself capture those fleeting moments, while being part of the moment?”

The camera should also get smarter over time using the machine learning algorithm Moment IQ, recognizing the people and pets you spend a lot of time with. It also knows good photo conditions when it sees them. While perhaps a creepy concept to some, the idea is that the camera will capture candid and heartfelt moments at home, moments you might miss or not be able to capture yourself.

Robots Everywhere

When products like Google Home were first introduced, many were hesitant to allow a seemingly sentient machine into their homes. But that seems to have changed, and people are slowly coming to accept technologies that incorporate AI, machine learning, and other advanced capabilities. Compatible with Pixel phones, the iPhone 8 and 8 Plus, and the Samsung Galaxy S7 and S8, the Google Clips camera operates using WiFi and will cost $249. But how many people will, realistically, be willing to take this step into AI photography territory?

Will Google Clips allow families, as it intends, to focus more on spending time together rather than taking photos, or will it become an ethically-murky nuisance?

There are those who are fearful of AI; Nick Bilton, New York Times tech columnist, wrote, “The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”


Now, this camera is nowhere near the potentially human-obliterating technology that Bilton is alluding to. However, it is important to both embrace progress and question how this type of technology will interact with our lives.

Could it be used by police to capture moments at home that could be cause for criminal concern? Is the biggest concern that it captures photos that you don’t care for aesthetically, or find inferior to your own photography skills?

There are many possibilities, and Google does allow the camera to operate without being connected to the internet, which could cut down on security concerns. Only time will tell, and hopefully, the camera will serve its purpose and allow families to enjoy being together while taking on the task of documenting loving moments.


In the Future, You Might Get Paid for the Data You Share Online

Precious Data

In a highly interconnected globalized world, data is everything. Regarding the economics of “big data,” some people have called data the new oil. Companies get precious bits of information from us — which, more often than not, we give for free in exchange for some form of personalized web service, such as signing up for an email account.

“The defense of this practice is that these companies provide ‘free’ services, and that they deserve some reward for their innovation and ingenuity,” John Danaher, from NUI Galway’s School of Law, told Digital Trends. “That may well be true, but I would argue that the rewards they receive are disproportionate.”

Companies are able to collect data from consumers, whether it’s direct personal information or behavioral in nature, through the different web services we use. Some might ask us to click a few tick boxes, answer a CAPTCHA question, or even answer personal questions — such as what we like in a relationship. The data we provide is used to train machine learning systems so their algorithms perform better.

So, one might ask: are we being cheated here? Should consumers get more from the data we share or provide to companies? Virtual reality (VR) pioneer Jaron Lanier certainly thinks so. In his book, Who Owns the Future?, he suggests a potential micropayment model.

Defining Useful Data

Under Lanier’s micropayment scheme, for all data that companies find useful — like in perfecting machine learning algorithms that feed off that data — the person who provided the information should be given some form of recompense. It’s a model similar to the existing one that Google and Apple have when it comes to content creators: these individuals get paid for a successful YouTube video, for instance, where success is defined by certain metrics.

In the same way, the most useful data could be identified using a formula that accounts for where the data originated and how important it was in training a system to perform certain functions. This means the relative value of data would differ from one type to the next.
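Lanier doesn’t spell the formula out, but a leave-one-contributor-out valuation gives a feel for how “importance in training” could be priced: retrain without a person’s data and pay them in proportion to the accuracy their data added. Everything below, including the payout rate and the toy dataset, is an assumption for illustration.

```python
# Toy illustration of pricing contributed data by how much it improves a model:
# a leave-one-contributor-out valuation. The $1,000 payout rate and the idea
# that value equals accuracy gained are assumptions, not Lanier's formula.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Pretend each training example was supplied by one of ten contributors.
contributor = np.random.default_rng(0).integers(0, 10, len(X_tr))

def accuracy(mask):
    model = LogisticRegression(max_iter=5000).fit(X_tr[mask], y_tr[mask])
    return model.score(X_te, y_te)

baseline = accuracy(np.ones(len(X_tr), dtype=bool))
for person in range(10):
    without = accuracy(contributor != person)    # retrain without this person's rows
    value = max(baseline - without, 0.0)         # accuracy their data added
    print(f"contributor {person}: micropayment of about ${1000 * value:.2f}")
```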

The rationale for such a scheme becomes even more convincing when you consider the future of employment in the face of growing intelligent automation. After all, aren’t smart robots trained using data provided by humans? Together with universal basic income models, micropayments for data could help compensate for the jobs we may lose as a consequence of automation.


AI Might Match Human Learning Capabilities by 2050, Says Breyer Capital CEO

AI circa 2050

Artificial Intelligence (AI) is on track to develop the same learning capabilities as humans, with some believing AI will be able to learn at the same speed as us in a little over 30 years.

Jim Breyer, founder and CEO of venture capital firm Breyer Capital, shares this belief. Speaking at the CNBC and Institutional Investor’s Delivering Alpha conference earlier this month, the CEO said it could happen as early as the year 2050, citing his various talks with AI experts.


“When I visit campuses and speak to the AI experts, there is a median year of 2050 where they think the self-learning capability of AI will be singular and will be at human-level intelligence,” he explained. “So, 2050? Does it happen? There are a lot of people who don’t think that is the case. It is median data from some of the best AI researchers in the world.”

Hopes and Concerns

A lot of people in the tech industry have spoken about AI and its potential — for better and worse. Tesla CEO Elon Musk has called AI the biggest threat to civilization, while Stephen Hawking has said it could negatively impact middle-class jobs and potentially lead to the end of humanity, resulting in the development of a plan to stop the “AI apocalypse.”

On the other hand, a number of researchers have disagreed with Musk’s views, saying his thoughts “seem to focus on the rather far-fetched super-intelligence take-over scenarios.” There’s also the belief that AI will only be dangerous if it’s used without proper thought and concern for the consequences.

Breyer has invested a lot of personal time and money into AI and human-assisted intelligence (HAI), but he hasn’t dismissed Musk’s concerns the way others have. Instead, he remains focused on the good that may come from AI’s adoption: the technology is already used in self-driving cars and is expected to help defend against cyberattacks.

“For the next decade or two, the good that comes out of proper, safe and interesting AI applications is enormous, and I am very excited about being part of investing in it.”

Regardless of where AI goes, the technology is here to stay. We’ll have to wait and see where AI takes us, but at the rate it’s being developed, it won’t be long before we see how it impacts our lives.


Researchers Have Linked a Human Brain to the Internet for the First Time Ever

Brain Meets IoT

A team of researchers at Wits University in Johannesburg, South Africa, has made a major breakthrough in the field of biomedical engineering. According to a release published on Medical Xpress, for the first time ever, researchers have devised a way of connecting the human brain to the internet in real time. It’s been dubbed the “Brainternet” project, and it essentially turns the brain “…into an Internet of Things (IoT) node on the World Wide Web.”

The project works by taking brainwave EEG signals gathered by an Emotiv EEG device placed on the user’s head. The signals are then transmitted to a low-cost Raspberry Pi computer, which live-streams the data to an application programming interface and displays it on an open website where anyone can view the activity. Adam Pantanowitz, a lecturer in the Wits School of Electrical and Information Engineering and the project’s supervisor, said:

Brainternet is a new frontier in brain-computer interface systems. There is a lack of easily understood data about how a human brain works and processes information. Brainternet seeks to simplify a person’s understanding of their own brain and the brains of others. It does this through continuous monitoring of brain activity as well as enabling some interactivity.
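The project’s code isn’t public in the article, so the following is only a rough sketch of the pipeline described above: read EEG samples on the Raspberry Pi and post them to a web API for display. The endpoint URL and the reader function are made-up stand-ins for the Brainternet site and the Emotiv SDK.

```python
# Illustrative only: the endpoint URL and read_eeg_sample() are stand-ins for
# the Brainternet site and the Emotiv SDK, which the article does not publish.
import random
import time
import requests

API_URL = "https://example.org/brainternet/api/eeg"   # hypothetical endpoint

def read_eeg_sample():
    """Stand-in for the headset SDK call; returns simulated microvolt values."""
    return {ch: random.gauss(4200.0, 5.0) for ch in ("AF3", "AF4", "T7", "T8", "Pz")}

while True:
    sample = read_eeg_sample()
    # Ship each sample to the web API, which renders it on the public page.
    requests.post(API_URL, json={"timestamp": time.time(), "channels": sample})
    time.sleep(1 / 128)   # roughly a consumer headset's sampling rate
```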

Intellectual Muscle

Pantanowitz said this is just the beginning of the project’s possibilities. He adds that the team is now aiming to allow for a more interactive experience between users and their brains. Some of this functionality has already been built into the site, but it is very narrow — limited to stimuli such as arm movement. “Brainternet can be further improved to classify recordings through a smart phone app that will provide data for a machine-learning algorithm. In future, there could be information transferred in both directions – inputs and outputs to the brain,” Pantanowitz said.

Future applications for this project could lead to some very exciting breakthroughs in machine learning and brain-computer interfaces like Elon Musk’s Neural Lace and Bryan Johnson’s Kernel. Data collected from this project could lead to a better understanding of how our minds work and how we can take advantage of that knowledge to augment our brain power.


In the Age of AI, We Shouldn’t Measure Success According to Exponential Growth

Positive Psychology

Before we talk about technology, here’s an exercise. Close your eyes. Take a deep breath.

Picture someone you’re grateful for in your life.  Maybe it’s your partner, your son, or your daughter. Maybe it’s a friend at work or your favorite barista. Whoever they are, picture that person for a full sixty seconds in your mind and hold them in your heart while basking in the gratitude that they are part of your life right now. I’ll wait.

According to the science of positive psychology, engaging in exercises like this one increases your well-being. Beyond mood-based happiness (typically called “hedonic happiness” in positive psychology), there is eudaimonic happiness, which comes from the Greek word for “flourishing” and refers to a long-term satisfaction with life not based primarily on emotion.

It’s important to distinguish between these two aspects of happiness because they frame how we pursue well-being as individuals and as a society. As a rule, most of us clamor for the instant gratification of hedonic happiness. That tendency is reinforced when we’re framed in economic terms as “consumers,” implying our worth is based on purchases and productivity.

But the promise of future rewards keeps us from being present.

Defining Metrics of Success

Think back on the gratitude exercise. By slowing down and concentrating on a person in your life you increased your well-being. While it’s likely your mood improved, the larger logic behind the exercise is that it helps you savor what you already have right now.


Artificial Intelligence and Autonomous Systems (AI/AS), like any technologies, can bring great benefit or harm based on the context of their use. But while visions of our dystopian future may feature sentient robot overlords, we need to define the metrics of success we want society to prioritize for these technologies today.

The primary metric of value the world has prioritized since its introduction in 1937 by Simon Kuznets is Gross Domestic Product (GDP). While economic indicators may not seem to have direct relevance to your daily life, when the primary goal for a country is fiscal increase and exponential growth, those values also drive business. And while companies may create products that provide value and help customers’ lives, when they’re forced to prioritize exponential growth or shareholder profits, human well-being is diminished.

Widening Perspectives on Prosperity

As an example of this trend, here’s an excerpt from an article by the World Economic Forum: “This company replaced 90% of its workforce with machines. It’s hard to argue against automation when statistics are clearly illustrating its potential. The latest evidence comes out of a Chinese factory in Dongguan City. The factory recently replaced 90 percent of its human workforce with machines, and it led to a staggering 250 percent increase in productivity and a significant 80 percent drop in defects.”

From a traditional business standpoint, it’s hard to argue with the logic of widespread automation. While experts argue about which jobs won’t be replaced by automation, or about how people will work alongside machines rather than be replaced by them, one fact remains clear: there is no motivation not to automate every human skill while we continue to prioritize exponential growth as society’s primary metric of success.

Fortunately, numerous well-being indicators exist that widen the traditional perspective on prosperity. Measuring both subjective and objective data, these metrics complement existing fiscal measures to provide a portrait of how society can flourish more holistically in the future. As the Organisation for Economic Co-operation and Development (OECD) describes as part of its Measuring Well-Being and Progress research:

In recent years, concerns have emerged regarding the fact that macro-economic statistics, such as GDP, don’t provide a sufficiently detailed picture of the living conditions that ordinary people experience…Societal progress is about improvements in the well-being of people and households. Assessing such progress requires looking not only at the functioning of the economic system but also at the diverse experiences and living conditions of people.

Well-being Indicators also provide perspective on environmental issues along with societal and financial metrics. As former U.N. Secretary-General Ban Ki-moon noted in 2012: “[GDP] fails to take into account the social and environmental costs of so-called progress…We need a new economic paradigm that recognizes the parity between the three pillars of sustainable development–social, economic and environmental well-being are indivisible.”

Multiple businesses, nonprofits, and governments have worked to implement the Triple Bottom Line (TBL) mindset mirrored in Ban Ki-moon’s words. There are also now more than 2,100 Certified B Corps (for-profit companies certified to meet rigorous standards of social and environmental performance, accountability, and transparency). Michael Porter and Mark Kramer, in their seminal Harvard Business Review article, defined Shared Value as a way for companies to drive profits that create sustainable societal benefits, framing it as a new form of innovation. The AI/AS industry has the same opportunity, but only by embracing established well-being indicators can it move beyond exponential growth as society’s primary metric of success.

Okay. Practicing what I preach. Closing my eyes. For the people creating ethically-aligned, responsible AI/AS technologies prioritizing human well-being as society’s key metric of success: I’m grateful for you. Take all the time you need.

John C. Havens is the Executive Director of The IEEE Global AI Ethics Initiative that recently produced the report, Prioritizing Human Well-being in the Age of Artificial Intelligence. He is also the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines.

Disclaimer: The views and opinions expressed are solely those of the author. They do not necessarily represent the views of Futurism or its affiliates.


Here’s Why Apple’s Custom GPU and A11 Bionic Chip Are Utterly Revolutionary

Apple Goes GPU

The new iPhone X was announced today, and for many Apple fans, this has been one of the most significant announcements the company has delivered in a long while. New generations of the iPhone have always accommodated slightly better hardware, bringing moderately upgraded capabilities like faster processors and better cameras. For the first time, however, Apple has created its own graphics processing unit (GPU) to help it soar above the competition, pairing it with the super-fast new A11 Bionic six-core processing chip.

This pairing is going to revolutionize smartphones by bringing machine learning capabilities to the palm of your hand.

Such developments are going to greatly assist the optimization of Apple’s longstanding “intelligent assistant,” Siri. Apple has been building an ecosystem for Siri to inhabit across platforms—in devices ranging from smartphones and tablets, to home devices like the upcoming Apple HomePod.

And now, Siri will have the power to learn, making the platform better for all users. So look out, Alexa, Siri may have just been given the hardware to dominate the battle for smart assistant supremacy.

Image source: Apple

A Boost for Augmented Reality

The inclusion of their own GPU is also going to be a huge boost for augmented reality applications and development, not to mention battery life.

CNBC speculates that having a specialized chip on board will give the main chip less work to do, thus improving battery life. To help augmented reality applications along, iOS 11 will come equipped with the ARKit framework, which lets developers create apps that can recognize their environment and place digital objects into it.

These chips will ensure that both developers and users have the power to make and run these impressive applications.

This all boils down to the new iPhone X being one of the most significant upgrades in the history of the line. Developers will have some impressive tools to create previously impossible smartphone capabilities.


The Age of Automation Is Coming, Just More Slowly Than Expected

What Are We Waiting For?

For years, we’ve been told that automation is coming, and that it’s going to have a profound effect on the world of work. However, while robots are already taking care of everything from construction work to delivering packages, widespread adoption of this technology isn’t happening quite as quickly as some would have expected.

Automation can make businesses more efficient, and there are plenty of incentives for companies to invest in the idea. However, there’s a tangible lack of trust in this kind of new technology among many executives, at least for the time being. The old ways are being put out to pasture, with a wholly different approach taking their place, and that can be an uncomfortable notion.

“Some of these pretty profound innovations are going to take time to diffuse,” said Andrew McAfee, the co-director of the Massachusetts Institute of Technology’s Initiative on the Digital Economy, and co-author of the book The Second Machine Age.


This standpoint is evidenced by comments made by outgoing General Electric CEO Jeff Immelt earlier this year. “I think this notion that we are all going to be in a room full of robots in five years … and that everything is going to be automated, it’s just BS. It’s not the way the world is going to work,” argued Immelt.

For some companies, reticence to commit to automation is all about a lack of trust in the technology. For others, the biggest obstacle is finding a way to integrate human workers with robots and computers in the most efficient way. However, there’s something to be said for a cautious approach, as automation is going to have a profound, far-reaching effect on the way we live our lives.

Brave New Work

When we think about automation, it’s easy to jump to the problem of human employment. If machines are taking on more tasks, there’s going to be less work to go around for living, breathing employees.

Of course, it’s important to remember that the automated systems will provide jobs in their own right, both in terms of manufacture and design, as well as ongoing maintenance. Many companies will want to pair machine workers with humans to maximize the strengths of both.

It’s also worth mentioning that automation is set to take care of a lot of jobs that human workers don’t want to do. There are lots of roles in factory construction lines and other similar settings that are repetitive and strenuous enough to cause severe physical injury over time. Machines can simply be replaced if and when they succumb to wear and tear.

All this being said, we are going to need to figure out how people who rely on jobs that are going to be automated will make a living. Universal Basic Income is one compelling solution, although it’s not without its problems.

UBI isn’t cheap — conservative estimates put the cost of implementing such an idea in the United States at no less than $1.5 trillion per year. If it’s put forward as a solution to jobs lost to automation, the companies pursuing that technology are likely to be taxed heavily to foot the bill, which may actually end up slowing adoption even further.

The technology that makes heavily automated workplaces possible is only one piece of the puzzle. For a huge shift such as this, there are all kinds of social considerations that have to be taken into account, and those changes tend to take a little longer than technological advances.


We May Have Just Uncovered a Serious Problem With How AI “See”

Visually Impaired Cars?

People with certain visual impairments aren’t allowed to drive, for fairly obvious reasons. Now, a study from the University of Washington (UW) has shown that artificial intelligences (AI) aren’t immune to vision problems when operating motor vehicles either.

The researchers have determined that machine learning models can be prone to a kind of physical-world attack that impedes their ability to process images. Concretely, AI can have problems reading defaced street signs.

Image credit: University of Washington

For their study, the researchers focused on two potential types of physical attacks. In the first, the attacker would place a printed poster over an actual road sign, while in the second, they would use stickers to alter an existing road sign. These attacks were designed to look like graffiti or art, which would make them more difficult for the casual observer to detect.

The attacks were extremely effective at confusing the AI. The printed overlay was 100 percent effective at fooling the system when placed over a Stop sign or a Right Turn sign. The stickers designed to look like abstract art were 100 percent effective on Stop signs, while the stickers placed to mimic graffiti were 66.7 percent effective.
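The UW attacks were physical (posters and stickers), but they exploit the same sensitivity as digital adversarial examples, where an image is nudged in the direction that most increases the classifier’s error. A minimal sketch of that idea, the fast gradient sign method, is below; the pretrained model and the preprocessed input image are placeholders, not the models the researchers attacked.

```python
# Digital analogue of the attack: the fast gradient sign method (FGSM).
# The pretrained model and the input image are placeholders; `image` is
# assumed to be a float32 array of shape (224, 224, 3) scaled to [-1, 1].
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm(image, true_label, epsilon=0.02):
    """Nudge the image in the direction that most increases the model's loss."""
    image = tf.convert_to_tensor(image[None, ...])
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = loss_fn([true_label], model(image))
    grad = tape.gradient(loss, image)
    # A tiny, near-invisible perturbation is often enough to flip the label.
    return tf.clip_by_value(image + epsilon * tf.sign(grad), -1.0, 1.0)[0]
```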

Vision of the Future

If visually impaired people can correct their vision using glasses, contacts, or surgery, perhaps AI can improve its image recognition capabilities as well. One solution suggested by the UW team is teaching autonomous systems to recognize contextual information. This would mean the systems pay attention to where and how particular signs are placed and not just what’s on the signs themselves. Such a system would know that a Stop sign on a freeway, for example, doesn’t make contextual sense.


Thankfully, efforts to improve autonomous vehicles are in the works. Governments are showing support for these efforts, and legislation will make it easier to test such cars in real-world scenarios. Governments could also directly help by making road signs that are difficult to deface, as Engadget suggested.

This study doesn’t mean that driverless cars aren’t safe. On the contrary, it’s another example of how human activity — in this case, vandalism — is the primary cause of danger on the road. In the U.S. alone, some 40,000 people die in traffic accidents every year, so autonomous driving systems have the potential to save thousands, even millions, of lives if they supplant human-operated vehicles on roads worldwide.

Given enough studies like this one from the UW team, we’ll be able to detect and address the potential shortcomings of autonomous systems and eventually develop ones that can safely transport us where we need to go.


Google’s New Algorithm Wants to Help Researchers Stabilize Nuclear Fusion Reactions

Advancing Fusion Research

There are already a number of researchers working toward stable nuclear fusion. The goal may seem simple enough in theory: harnessing the same energy that powers the Sun — but attaining it has proven to be rather difficult. For one, sustaining a stable nuclear fusion reaction is tricky, as it requires playing with variables that aren’t easy to manipulate. That’s why Google Research is working in tandem with nuclear fusion company Tri-Alpha Energy to help simplify the process.

Their solution is a computer algorithm, dubbed the Optometrist algorithm, that can speed up experiments involving plasma, the core ingredient in a fusion reaction. It’s also the most challenging aspect to manipulate. “The whole thing is beyond what we know how to do even with Google-scale computer resources,” Ted Baltz, a senior software engineer from the Google Accelerated Science Team, wrote in a Google Research blog.

“We boiled the problem down to ‘let’s find plasma behaviors that an expert human plasma physicist thinks are interesting, and let’s not break the machine when we’re doing it’,” Baltz added. “This was a classic case of humans and computers doing a better job together than either could have separately.”
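Tri-Alpha’s implementation isn’t detailed in the article, but the “Optometrist” framing suggests a simple human-in-the-loop search: the machine proposes pairs of settings, the physicist picks whichever looks better (like an eye exam’s “better one, or better two?”), and the search continues from the preferred choice. The sketch below is only an illustration of that loop; the setting names and update rule are invented.

```python
# Sketch of an "optometrist"-style search: the machine proposes pairs of
# settings, a human expert picks the one that looks better, and the search
# continues from the preferred setting. All names and ranges are invented.
import random

def propose_neighbor(settings):
    """Perturb one machine setting slightly to get a nearby candidate."""
    candidate = dict(settings)
    knob = random.choice(list(candidate))
    candidate[knob] *= random.uniform(0.9, 1.1)
    return candidate

def expert_prefers_current(current, candidate):
    """Stand-in for the physicist's judgment call: 'better 1, or better 2?'"""
    answer = input(f"1: {current}\n2: {candidate}\nWhich is better [1/2]? ")
    return answer.strip() == "1"

current = {"beam_power_kw": 500.0, "gas_puff_rate": 1.0, "field_coil_amps": 2000.0}
for shot in range(100):                      # each iteration stands for one plasma shot
    candidate = propose_neighbor(current)
    if not expert_prefers_current(current, candidate):
        current = candidate                  # move toward what the expert preferred
```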

The Optometrist algorithm was applied to Tri-Alpha Energy’s C2-U machine, where it was able to perform experiments that usually took a month to finish in just a few hours. The result, which was published in the journal Scientific Reports, was a 50 percent reduction in system-induced energy losses that increased total plasma energy. “It was only for about two milliseconds, but still, it was a first!” Baltz wrote. The next step is reaching that critical threshold necessary for nuclear fusion to occur and to stabilize.

A Truly Renewable Energy

Fusion research has garnered significant attention in recent years as scientists have recognized its potential as a renewable and clean energy source. Nuclear fusion could generate four times the amount of energy nuclear fission produces (one fission event yields about 200 MeV of energy, or about 3.2 × 10⁻¹¹ watt-seconds). It’s no wonder, then, that fusion is considered the “holy grail” of energy research.

Recent questions in fusion research have been concerned with finding ways to stabilize the plasma that powers it — not an easy feat, since it requires temperatures of over 30 million degrees Celsius to sustain. Thus far, some researchers have proposed building better fusion reactors, and others are looking at the possibility of using a different base for plasma. Instead of the usual hydrogen, deuterium, or helium, physicists from the Princeton Plasma Physics Laboratory have been tinkering with argon-based plasma.

Where does Google’s algorithm fit in? Well, it could significantly shorten the amount of time needed for each of these experiments. “Results like this might take years to solve without the power of advanced computation,” Baltz said. By running computational models alongside human experiments, the Optometrist algorithm can breeze through every possible combination for nuclear fusion to work.

Tri-Alpha Energy has already ditched the C2-U machine in favor of the more advanced Norman, which achieved first plasma earlier this month. The company is set to build a demonstration power generator pending more successful experiments with Norman.


Google’s AI Fight Club Will Train Systems to Defend Against Future Cyberattacks

Reinforcing AI Systems

When artificial intelligence (AI) is discussed today, most people are referring to machine learning algorithms or deep learning systems. While AI has advanced significantly over the years, the principle behind these technologies remains the same. Someone trains a system to receive certain data and asks it to produce a specified outcome — it’s up to the machine to develop its own algorithm to reach this outcome.

Alas, while we’ve been able to create some very smart systems, they are not foolproof. Yet.


Data science competition platform Kaggle wants to prepare AI systems for super-smart cyberattacks, and they’re doing so by pitting AI against AI in a contest dubbed the Competition on Adversarial Attacks and Defenses. The battle is organized by Google Brain and will be part of the Neural Information Processing Systems (NIPS) Foundation’s 2017 competition track later this year.

This AI fight club will feature three adversarial challenges. The first (non-targeted adversarial attack) involves getting algorithms to confuse a machine learning system so it won’t work properly. Another battle (targeted adversarial attack) requires training one AI to force another to classify data incorrectly. The third challenge (defense against adversarial attacks) focuses on beefing up a smart system’s defenses.
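For the defense track, one common baseline (not necessarily what contest entrants will use) is adversarial training: generate worst-case perturbed examples against the current model and mix them into every training batch. A compact sketch on MNIST follows; the architecture and hyperparameters are arbitrary choices for illustration.

```python
# One baseline defense (not necessarily what contest entrants will use):
# adversarial training, i.e. mixing worst-case perturbed examples into every
# training batch. Architecture and hyperparameters are arbitrary.
import tensorflow as tf

(x_tr, y_tr), _ = tf.keras.datasets.mnist.load_data()
x_tr = (x_tr / 255.0).astype("float32")[..., None]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
opt = tf.keras.optimizers.Adam()

def perturb(x, y, eps=0.1):
    """Craft FGSM-style adversarial versions of a clean batch."""
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    return tf.clip_by_value(x + eps * tf.sign(tape.gradient(loss, x)), 0.0, 1.0)

for x, y in tf.data.Dataset.from_tensor_slices((x_tr, y_tr)).batch(128).take(200):
    x_adv = perturb(x, y)                             # attack the current model...
    with tf.GradientTape() as tape:
        logits = model(tf.concat([x, x_adv], 0))      # ...then train on clean plus
        loss = loss_fn(tf.concat([y, y], 0), logits)  # adversarial examples
    opt.apply_gradients(zip(tape.gradient(loss, model.trainable_variables),
                            model.trainable_variables))
```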

“It’s a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled,” Jeff Clune, a University of Wyoming assistant professor whose own work involves studying the limits of machine learning systems, told the MIT Technology Review.

Responsible AI Development

AI is actually more pervasive now than most people think, and as computer systems have become more advanced, the use of machine learning algorithms has become more common. The problem is that the same smart technology can be used to undermine these systems.

“Computer security is definitely moving toward machine learning,” Google Brain researcher Ian Goodfellow told the MIT Technology Review. “The bad guys will be using machine learning to automate their attacks, and we will be using machine learning to defend.”

Training AI to fight malicious AI is the best way to prepare for these attacks, but that’s easier said than done. “Adversarial machine learning is more difficult to study than conventional machine learning,” explained Goodfellow. “It’s hard to tell if your attack is strong or if your defense is actually weak.”

The unpredictability of AI is one of the reasons some, including serial entrepreneur Elon Musk, are concerned that the tech may prove malicious in the future. They suggest that AI development be carefully monitored and regulated, but ultimately, it’s the people behind these systems and not the systems themselves that present the true threat.

In an effort to get ahead of the problem, the Institute of Electrical and Electronics Engineers has created guidelines for ethical AI, and groups like the Partnership on AI have also set up standards. Kaggle’s contest could illuminate new AI vulnerabilities that must be accounted for in future regulations, and by continuing to approach AI development cautiously, we can do more to ensure that the tech isn’t used for nefarious means in the future.


Google’s Newest AI is Turning Street View Images into Landscape Art

Google Art

Most of us are probably familiar with Google Street View, a feature of Google Maps that allows users to see actual images of the areas they’re looking up. It’s both a useful navigational feature and one that allows people to explore far-off regions just for fun. Engineers at Google are now taking these Street View images one step further with the help of artificial intelligence (AI).

“Professional” picture above, Google AI picture below. Image Credit: Google Research Team

Hui Feng is one of several software engineers who are using machine learning techniques to teach a neural network how to scan Street View in search of exceptionally beautiful images. This AI then, on its own, “mimics the workflow of a professional photographer.”

This AI system acts as both artist and photo editor, recognizing beauty and the specific aspects that make for a “good” photograph. Despite the subjectivity of the task, the AI proved successful, creating professional-quality imagery from Street View images that the system itself located.

Artificially Intelligent Art

In the images below, you can see the different steps the specially designed AI took to create one of these portraits. First, the AI takes a panoramic Street View photo; it then crops the image, alters light and coloration, and applies an appropriate filter. This four-step process, frequently performed by professional photographers, sounds simple, but it had not previously been shown to be within AI’s capabilities.

The AI edits the pictures by cropping, adjusting saturation, and applying a mask. Image Credit: Google Research Team
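Google hasn’t released the model, so the snippet below merely mimics the four-step workflow described above, with fixed heuristics standing in for the learned crop, lighting, color, and filter decisions; the file names and crop window are assumptions.

```python
# Mimics the described four-step workflow with fixed heuristics standing in
# for the learned model: crop, adjust light, adjust color, apply a filter.
from PIL import Image, ImageEnhance, ImageFilter

def auto_edit(path, out_path):
    panorama = Image.open(path)
    w, h = panorama.size
    crop = panorama.crop((w // 4, 0, 3 * w // 4, h))      # 1. crop a composition
    crop = ImageEnhance.Brightness(crop).enhance(1.1)     # 2. lift the light
    crop = ImageEnhance.Color(crop).enhance(1.2)          # 3. boost saturation
    crop.filter(ImageFilter.DETAIL).save(out_path)        # 4. apply a filter

auto_edit("street_view_panorama.jpg", "edited.jpg")
```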

Google’s many different AI programs have been exploring a wide variety of potential applications for the technology. From recent dabbling in online Go playing to improving job hunting and even creating AI better than Google’s own engineers can, Google’s AI work has been at the forefront of the field.

But AI technologies are progressing faster and further than many expected, so much so that some AIs, like the one described here, are capable of creating art. So, while robots will never make humans completely obsolete in artistic endeavors, this step forward marks a new era of technology.


An Advanced AI Has Been Deployed to Fight Against Hackers

Guarding A Global Grid

It takes a truly massive network of hundreds of thousands of computers to help scientists around the world unravel the mysteries of the Universe, and that is the purpose of the CERN grid (CERN stands for Conseil Européen pour la Recherche Nucléaire; in English, the European Laboratory for Particle Physics). Naturally, however, particle physicists aren’t the only ones who want access to that kind of computing power. Hackers are also interested in CERN’s grid, so CERN scientists are going beyond standard cybersecurity measures and deploying artificial intelligence (AI) to stay protected.

Image Credit: CERN

It is the job of any cybersecurity effort to detect unusual activity and identify possible threats. Of course, systems can look for known malicious code such as worms and viruses, but malware changes too fast for humans to keep up with it. This is where AI and machine learning come in. CERN scientists are teaching their AI system to distinguish between safe and threatening behavior on the network and to take action when it detects a problem.

A Universe Of Data

CERN is home to the Large Hadron Collider (LHC) as well as its massive computer grid. Scientists use the LHC to study high-speed collisions between subatomic particles — in 2017 alone, they collected an estimated 50 petabytes of data about these particles. CERN provides this critically important data to universities and laboratories around the world for research.

The LHC and CERN itself require a massive amount of data storage and computing power, which is what prompted the creation of the Worldwide LHC Computing Grid. The grid connects computers in more than 40 countries from more than 170 research facilities, and works like a power grid to some extent, providing computing resources to facilities based on demand. This presents a unique cybersecurity challenge: keeping the massive globally-distributed grid secure while maintaining the computing power and storage unimpeded.

Machine learning can train a system to detect potential threats while retaining the flexibility that it needs to provide computing power and storage on demand. F-Secure senior security researcher Jarno Niemelä told Scientific American that the biggest challenge for the project will be developing algorithms that can accurately distinguish between normal and malicious network activity without causing false alarms. For now, the AI upgrades are still being tested. If they work well protecting just the part of the grid that ALICE (A Large Ion Collider Experiment) uses, the team can deploy AI cybersecurity measures throughout the system.
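The article doesn’t describe the internals of CERN’s system, but a common starting point for “learn what normal traffic looks like and flag the rest” is an unsupervised anomaly detector. The sketch below uses an isolation forest over invented flow features purely as an illustration of that approach.

```python
# Generic sketch of "learn normal grid traffic, flag the rest" with an
# isolation forest. The flow log and its feature names are invented; CERN's
# actual system is not described in detail in the article.
import pandas as pd
from sklearn.ensemble import IsolationForest

flows = pd.read_csv("grid_network_flows.csv")   # hypothetical flow log
features = flows[["bytes_sent", "bytes_received", "duration_s",
                  "distinct_ports", "failed_logins"]]

detector = IsolationForest(contamination=0.001, random_state=0).fit(features)
flows["suspicious"] = detector.predict(features) == -1   # -1 marks anomalies
print(flows[flows["suspicious"]].head())
```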


Google Announces New API That Can Detect and Identify Objects Using Images

Advancing API

Attention all developers, researchers, and enthusiasts: Google has announced that it will be releasing a new API. An API is, simply put, a set of rules and tools to help build software. Google’s new TensorFlow object detection API is designed to make it easier to identify objects in images. The API includes models that are designed to work even on comparatively simple devices, like smartphones.

Simplifying machine learning models is proving to be essential for advancing machine learning technologies and the APIs built on them. We don’t all have massive desktop setups with our own servers capable of handling just about anything. While it’s possible to run these models through the cloud, that usually proves to be abysmally slow and requires an internet connection. To make these models more accessible to the average consumer, they’ll need to be simplified.

Keeping that in mind, Google intends for this new API to be extremely user-friendly, allowing anyone and everyone with a basic computer or smartphone to explore the world of machine learning.

Applying the API

We know that this new API can be used to identify objects in images, but beyond being amusing, could that actually be useful in our everyday lives? As it turns out — yes, it likely could be. This type of API could lead to advances in facial recognition and landmark detection, as well as the most obvious application, object identification. These seemingly basic tools will only become more essential in many different fields. From information services to law enforcement and even daily digital tasks, these seemingly small strides in the progression and simplification of machine learning will continue to push us forward.
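As a rough sense of what using such a detection model looks like in practice, the sketch below loads an exported model and prints what it finds in one photo. The model directory and image are placeholders, and the output keys follow the pattern used by the API’s exported SavedModels, which may vary between releases.

```python
# Rough usage sketch: run a pre-trained detection model over one image.
# The model directory and image are placeholders; the output keys follow the
# pattern used by the API's exported SavedModels and may vary by release.
import numpy as np
import tensorflow as tf
from PIL import Image

detect = tf.saved_model.load("exported_model/saved_model")   # placeholder path
image = np.array(Image.open("photo.jpg"))                     # H x W x 3, uint8
outputs = detect(tf.convert_to_tensor(image[None, ...]))      # batch of one

n = int(outputs["num_detections"][0])
for box, score, cls in zip(outputs["detection_boxes"][0][:n],
                           outputs["detection_scores"][0][:n],
                           outputs["detection_classes"][0][:n]):
    if float(score) > 0.5:                                     # keep confident hits
        print(f"class {int(cls)} at {box.numpy().round(2)} (score {float(score):.2f})")
```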

Aside from Google’s development of the API and its launch of TensorFlow Lite, a streamlined version of the machine learning framework, other companies have been creating mobile models, too: Facebook has used the tech to build its Caffe2Go framework and, subsequently, its Style Transfer feature, and Apple released Core ML, which aims to help run these models on iOS devices. Piece by piece, machine learning is moving closer to individual accessibility.


Artificial Neural Networks Are Revealing The Quantum World

Modeling a Complex System

Quantum entanglement, which Einstein referred to as “spooky action at a distance,” is one of many quantum phenomena that physicists have been exploring for potential applications in a wide range of fields. Entanglement is at the heart of quantum computing technologies, for example, and is the underlying principle behind quantum teleportation and quantum cryptography.

Researchers have yet to explore these quantum systems fully. Realizing the need for better tools to do so, physicists from the Joint Quantum Institute (JQI) and the University of Maryland’s Condensed Matter Theory Center (CMTC) have turned to artificial neural networks, which are constructed to function and pass information like neurons in the brain.

“If we want to numerically tackle some quantum problem, we first need to find an efficient representation,” JQI researcher Dongling Deng said in a press release. He got the idea after hearing about how DeepMind’s Go-playing artificial intelligence (AI), AlphaGo, famously defeated human professional players in 2016. Machine learning, which is behind the achievements of current AI systems, seemed like a plausible tool.

At the same time, a paper proposed the idea of using neural networks to represent quantum states. “We immediately recognized that this should be a very important paper,” Deng said, “so we put all our energy and time into studying the problem more.” They published their study in the journal Physical Review X.

Understanding Quantum States

In order to use neural networks to represent quantum systems, Deng and his colleagues studied networks that rely on two distinct groups of neurons. One group, the “visible neurons,” was used to represent real quantum particles, like atoms in an optical lattice or ions in a chain. The second group of neurons, the so-called “hidden neurons,” was used to account for the interactions between particles and linked up with the visible neurons. As long as the number of connections was kept relatively small, the description of the neural network stayed simple.
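This construction follows the restricted-Boltzmann-machine ansatz for quantum states: each spin configuration is assigned an amplitude obtained by summing the hidden neurons out analytically. The tiny sketch below evaluates that amplitude for random network parameters; it is an illustration of the representation, not the JQI group’s code.

```python
# Evaluate a restricted-Boltzmann-machine wavefunction: visible neurons are
# the spins, hidden neurons are summed out analytically. Parameters here are
# random, purely to illustrate the representation.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 10, 5                             # 10 spins, 5 hidden neurons
a = rng.normal(scale=0.1, size=n_visible)               # visible biases
b = rng.normal(scale=0.1, size=n_hidden)                # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))   # spin-hidden couplings

def amplitude(spins):
    """Unnormalized psi(s) = exp(a . s) * prod_i 2 cosh(b_i + W_i . s)."""
    return np.exp(a @ spins) * np.prod(2 * np.cosh(b + W @ spins))

spins = rng.choice([-1, 1], size=n_visible)             # one spin configuration
print(amplitude(spins))
```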


“This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” JQI fellow Sankar Das Sarma said in the press release. The models were able to produce compact representations of a number of interesting quantum states, including those with surprising amounts of entanglement. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions,” Das Sarma added.

Of course, the neural networks had their limits: they weren’t able to efficiently represent every possible quantum state, Deng said. Still, the research could significantly improve how we understand quantum entanglement. As the team wrote in their paper: “Our results uncover the unparalleled power of artificial neural networks in representing quantum many-body states regardless of how much entanglement they possess, which paves a novel way to bridge computer-science-based machine-learning techniques to outstanding quantum condensed-matter physics problems.”


A Breakthrough AI Can Now Predict Which Babies Will Develop Autism

Identifying Autism

A study published in Science Translational Medicine has found that doctors can predict which babies will develop autism spectrum disorder (ASD) by the age of two with an astonishing 96 percent success rate. The test uses a combination of functional magnetic resonance imaging (fMRI) and machine-learning algorithms.

The study took brain scans of 59 “sibs” (the younger siblings of children with ASD), whose chances of developing the disorder are 20 times higher than average. The researchers captured the activity in 230 brain regions of the six-month-old infants, amounting to 26,335 neural connections. When the children were two, the scientists conducted a behavioral analysis, which indicated that 11 of the children had developed autistic symptoms.

Using this data, the researchers taught a machine-learning algorithm to search for unique patterns in the six-month-old brains of the autistic children. They then had the algorithm analyze the six-month-old scans again, and it correctly identified nine of the 11 children who went on to develop autism.
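The study’s pipeline isn’t reproduced in the article; the sketch below only shows the general shape of such an analysis, flattening each infant’s 230-by-230 connectivity matrix into 26,335 features and fitting a classifier under leave-one-out cross-validation. The file names and choice of model are placeholder assumptions.

```python
# General shape of the analysis: flatten each infant's 230 x 230 functional
# connectivity matrix into 26,335 features, then fit a classifier. File names
# and the choice of model are placeholders, not the study's code.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import LinearSVC

conn = np.load("infant_connectivity.npy")    # shape (59, 230, 230), 6-month scans
labels = np.load("asd_at_24_months.npy")     # 1 = met ASD criteria at age two

iu = np.triu_indices(230, k=1)               # 230 regions -> 26,335 unique pairs
X = np.array([matrix[iu] for matrix in conn])

# Leave-one-out cross-validation is the honest choice with only 59 subjects.
predictions = cross_val_predict(LinearSVC(C=0.01, max_iter=20000), X, labels,
                                cv=LeaveOneOut())
hits = int(((predictions == 1) & (labels == 1)).sum())
print(f"correctly flagged {hits} of {int((labels == 1).sum())} children")
```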

Prevention as a Cure

The method has been applauded for its non-intrusive nature, ability to identify autism from only one scan, and potential to increase “the feasibility of developing early preventative interventions for ASD.” Kevin Pelphrey, Director of the Autism and Neurodevelopmental Disorders Institute at George Washington University, told Scientific American, “this is a game-changer for the field.”


Machine learning and artificial intelligence (AI) are becoming particularly adept in the diagnostic sector of healthcare, thanks to their ability to analyze large amounts of data more quickly than humans can. Recently, AI has shown promising results in predicting what people over 65 are most likely to die from, and IBM’s Watson has already saved lives by diagnosing diseases early on.

AI and machine learning have the potential to revolutionize healthcare by identifying diseases earlier and with more accuracy, allowing doctors to perform cheaper and less intrusive preventative treatments. This would be especially useful in parts of the world where there are fewer healthcare professionals, or when a doctor’s time is under pressure.


Experts Weigh in on AI and the Singularity

AI Visionaries

Artificial intelligence (AI) is progressing so fast that there are new developments in the field almost every week. The tendrils of AI grow further into human life and continue to rapidly intertwine with our reality, and this process will only accelerate. Some worry about the consequences of a future in which AIs have more capabilities than humans, while some relish this prospect. IEEE Spectrum has just published a special issue for June 2017, which reports on the views of nine visionaries, technologists, and futurists on what’s coming in AI.

Each expert was asked, “When will we have computers as capable as the brain?” Ray Kurzweil thinks this will happen in 2029, while Jürgen Schmidhuber simply agrees that it will be “soon,” and Gary Marcus estimates that it will happen 20 to 50 years from now. Nick Bostrom predicts “within a small number of decades.” Rodney Brooks is a little more conservative, estimating 50 to 100 years, while both Robin Hanson and Martine Rothblatt think that it will happen within the 21st century.

Ruchir Puri’s answer to this question was perhaps the most interesting: “A human brain is fundamentally different than being a champion chess, ‘Jeopardy!,’ or Go player. It is something that entails essential traits like caring, empathy, sharing, ingenuity, and innovation. These human brain traits might prove to be elusive to machines for a long time. … Although AI’s impact on society will accelerate further … it will be a while before we will be able to holistically answer [that] question.”

Image Credit: Geralt/Pixabay

The Singularity, Approaching

So, “How will brainlike computers change the world?” Robin Hanson thinks that humans will get rich from robot labor, while Gary Marcus anticipates major advancements in science and medicine and Martine Rothblatt agrees with Kurzweil that we will essentially eventually become downloadable and therefore immortal. Ray Kurzweil sees AI as a massive brain extender, and therefore a problem solver, making every aspect of our lives better. Rodney Brooks thinks making realistic predictions about this isn’t possible since it’s too far off, and instead posits that in 20 years, baby boomers — including Kurzweil — will be assisted by in-home computers, but won’t be immortal. Jürgen Schmidhuber thinks that AIs will be fascinated by the possibilities of space as they become self-motivated and pursue their own goals.

Finally, “Do you have any qualms about a future in which computers have human-level (or greater) intelligence?” Carver Mead points out that people always fear new technologies, even though history shows that we have continually benefitted from them. Robin Hanson thinks anyone who doesn’t have qualms about a change this momentous isn’t paying attention, but Martine Rothblatt doesn’t have qualms, because she thinks human needs will shape a Darwinian market for robots. Ray Kurzweil thinks we will avoid peril and gain optimally by merging with AI. Nick Bostrom is concerned by the problem of scalable control of AI, while Rodney Brooks says he has “no qualms at all,” and that “qualming” is not useful, even for Nick Bostrom. Gary Marcus doesn’t see clear solutions to potential problems yet, but thinks that future technologies will provide them.

The experts had different ideas about many things, but there was no dispute about the most important point: the singularity is coming, and it’s closer than we think.


Draw a Doodle of a Face, and Watch This AI Image Generator Make It Look More “Human”

Ugly Doodles

Machine learning is perhaps the most common foundation for today’s artificial intelligence (AI) systems. The basic idea is that an AI can be taught to reach its own decisions through exposure to datasets, usually huge ones. It’s similar to how we can learn something by seeing it again and again.

Machine learning algorithms are trained to recognize patterns. For example, a system will be exposed to hundreds, thousands, or even millions of images of cars so it can learn what a car looks like based on characteristics shared by the images. Then, it’ll look for those shared characteristics in a never-before-seen image and determine if it is, in fact, a picture of a car.

While machine learning does an impressively good job of classifying images, it seems to fumble a bit with generating them. The latest example is an image generator shared as part of the pix2pix project. It’s recently been making the rounds on social media, so we tried it out for ourselves.

The end results of the generator are either abstract or hideous, depending on your perspective. But it is undeniably able to turn a simple — and arguably poor — doodle into a far more realistic-looking image.

The Future of Machine Learning

Like so much of the internet, the pix2pix project started with cats. The same mechanics applied: a user drew an image, and the algorithm transformed it into a (relatively) more realistic-looking cat.


For their generators, the developers used a cutting-edge machine learning technique called generative adversarial networks (GANs). Essentially, two networks are trained against each other: a generator produces candidate images (in this case, “realistic” faces), while a discriminator judges whether each one is “real” (looks like one of the images of actual faces from the dataset used to train it) or “fake.” The discriminator’s feedback pushes the generator to produce images that increasingly pass for “real” ones.
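
To make the adversarial setup concrete, here is a minimal sketch assuming PyTorch. It is not the pix2pix model itself (pix2pix is a conditional GAN that maps an input drawing to an output photo); the tiny networks and the random stand-in “dataset” below are purely illustrative.

```python
# Minimal GAN sketch: a generator learns to produce samples the discriminator
# can no longer distinguish from "real" data. Not the pix2pix architecture.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, data_dim)              # stand-in for real training images

for step in range(200):
    real = real_data[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its output "real".
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```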

The pix2pix project’s image generator is able to take the random doodles and pick out the facial features it recognizes using a machine learning model. Granted, the images the system currently generates aren’t perfect, but a person could look at them and recognize an attempt at a human face.

Obviously, the system will require more training to generate picture perfect images, but the transition from cats to human faces reveals an already considerable improvement. Eventually, generative networks could be used to create realistic-looking images or even videos from crude input. They could pave the way for computers that better understand the real world and how to contribute to it.


Researchers Have Created an AI That Could Read and React to Emotions

Reading Sheep

One of today’s more popular artificially intelligent (AI) androids comes from the TV series “MARVEL’s Agents of S.H.I.E.L.D.” Those of you who followed the latest season’s story — no spoilers here! — probably love or hate Aida by now. One of the most interesting things about this fictional AI character is that it can read people’s emotions. Thanks to researchers from the University of Cambridge, this AI ability might soon make the jump from sci-fi to reality.

The first step in creating such a system is training an algorithm on simpler facial expressions and just one specific emotion or feeling. To that end, the Cambridge team focused on using a machine learning algorithm to figure out if a sheep is in pain, and this week, they presented their research at the IEEE International Conference on Automatic Face and Gesture Recognition in Washington, D.C.

Image credit: Robinson, et al./University of Cambridge

The system they developed, the Sheep Pain Facial Expression Scale (SPFES), was trained using a dataset of 500 sheep photographs to learn how to identify five distinct features of a sheep’s face when the animal is in pain. The algorithm then ranks the features on a scale of 1 to 10 to determine the severity of the pain. Early tests showed that the SPFES could estimate pain levels with an 80 percent accuracy.
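
The article does not spell out how the five feature ratings are combined, so the snippet below is only an illustrative placeholder for that final scoring step: it maps five per-feature severity estimates onto the 1-to-10 scale.

```python
# Illustrative placeholder only: the real SPFES weighting is not described here.
def pain_score(feature_severities):
    """feature_severities: five values in [0, 1], one per detected facial feature."""
    assert len(feature_severities) == 5
    mean_severity = sum(feature_severities) / 5.0
    return round(1 + 9 * mean_severity)           # map the average onto a 1-10 scale

print(pain_score([0.2, 0.4, 0.1, 0.3, 0.5]))      # -> a mild-to-moderate pain estimate
```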

Humane and Human

SPFES was a departure for Peter Robinson, the Cambridge professor leading the research, as he typically focuses on systems designed to read human facial expressions. “There’s been much more study over the years with people,” Robinson explained in a press release. “But a lot of the earlier work on the faces of animals was actually done by Darwin, who argued that all humans and many animals show emotion through remarkably similar behaviors, so we thought there would likely be crossover between animals and our work in human faces.”


As co-author Marwa Mahmoud explained, “The interesting part is that you can see a clear analogy between these actions in the sheep’s faces and similar facial actions in humans when they are in pain – there is a similarity in terms of the muscles in their faces and in our faces.”

Next, the team hopes to teach SPFES how to read sheep facial expressions from moving images, as well as train the system to work when a sheep isn’t looking directly at a camera. Even as is, though, the algorithm could improve the quality of life of livestock like sheep by facilitating the early detection of painful conditions that require quick treatment, adding it to the growing list of practical and humane applications for AI.

Additional developments could lead to systems that are able to accurately recognize and react to human emotions, further blurring the line between natural and artificial intelligences.


Google’s New AI Is Better at Creating AI Than the Company’s Engineers

Google’s AutoML

One of the more noteworthy remarks to come out of the Google I/O ’17 conference this week was CEO Sundar Pichai recalling how his team had joked that they had achieved “AI inception” with AutoML. Instead of crafting layers of dreams like in the Christopher Nolan flick, however, the AutoML system layers artificial intelligence (AI), with AI systems creating better AI systems.


The AutoML project focuses on deep learning, a technique that involves passing data through layers of neural networks. Creating these layers is complicated, so Google’s idea was to create AI that could do it for them.

“In our approach (which we call ‘AutoML’), a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” the company explains on the Google Research Blog. “That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from.”
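
Schematically, that loop looks something like the sketch below. This is not Google’s code: the search space, the stand-in controller, and the fake training function are all invented for illustration, whereas Google’s controller is itself a neural network, as the quote describes.

```python
# Schematic architecture-search loop: propose a child model, train and score it,
# and feed the score back so later proposals improve.
import random

SEARCH_SPACE = {"layers": [2, 4, 8], "width": [32, 64, 128], "activation": ["relu", "tanh"]}

def propose(history):
    # Stand-in controller: usually tweak the best architecture seen so far,
    # otherwise sample a fresh one at random.
    if history and random.random() < 0.7:
        best = dict(max(history, key=lambda h: h[1])[0])
        key = random.choice(list(SEARCH_SPACE))
        best[key] = random.choice(SEARCH_SPACE[key])
        return best
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    # Stand-in for actually training the child network; returns a validation score.
    return random.random() + 0.01 * arch["layers"]

history = []
for _ in range(1000):                        # Google repeats this "thousands of times"
    arch = propose(history)
    score = train_and_evaluate(arch)
    history.append((arch, score))            # the feedback for the controller

print("Best architecture found:", max(history, key=lambda h: h[1]))
```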

So far, they have used the AutoML tech to design networks for image and speech recognition tasks. In the former, the system matched Google’s experts. In the latter, it exceeded them, designing better architectures than the humans were able to create.

AI for Everything

AI that can supplement human efforts to develop better machine learning technologies could democratize the field as the relatively few experts wouldn’t be stretched so thin. “If we succeed, we think this can inspire new types of neural nets and make it possible for non-experts to create neural nets tailored to their particular needs, allowing machine learning to have a greater impact to everyone,” according to Google’s blog post.

AutoML has the potential to impact many of the other AI- and machine-learning-driven products that were discussed at the conference. It could lead to improvements in the speech recognition tech required for a voice-controlled Google Home, the facial recognition software powering the Suggested Sharing feature in Google Photos, and the image recognition technology utilized by Google Lens, which allows users to point their phone at an object (such as a flower) in order to identify it.

Truly, AI has the potential to affect far more than just our homes and phones. It’s already leading to dramatic advancements in healthcare, finance, agriculture, and so many other fields. If we can use an already remarkable technology to actually improve that same kind of technology, every advancement made by humans can lead to machine-powered advancements, which lead to better tools for humans, and so on. The potential of AI then draws to mind the title of another sci-fi film: limitless.


AI Won’t Just Replace Workers. It’ll Also Help Them.

AI: The Tool

Many people worry about artificial intelligence (AI) eliminating jobs and displacing workers, or even taking over human society. A February 2016 report from Citibank and the University of Oxford predicted that automation threatens 47 percent of U.S. jobs, 35 percent of U.K. jobs, and 77 percent of jobs in China. An August report from Forrester stated that customer service and transportation jobs will be gone by 2025, and that we’ll feel the impact of this change within five years.

These fears aren’t unfounded, but they may need refocusing. Few of us understand what algorithms are or how they work; to most of us, they are invisible. Like the electricity that flows unseen and is taken for granted throughout our homes, offices, and cities, algorithms already shape our experiences, large and small, in ways we don’t notice.

This is a problem, because our understanding of what algorithms do, how they work, and how we should be shepherding their use has become artificially and unreasonably detached from our ideas about AI. Yes, algorithms control how AI works. However, they also control how we work to a large extent — and we made them that way because it saves us time and effort.

Algorithms run the internet and make all online searching possible. They route our email and guide us when we use our GPS systems. Smartphone apps, social media, software: none of these things would function without algorithms. AI is also dependent on algorithms, and in fact is the next-level extension of our life in the age of algorithms; what we’ve done is teach algorithms to write new algorithms, and to learn and teach themselves.

Just as we once feared that computers would put us all out of work, we now fear that AI will take all of our jobs away. We have seen the next level of our algorithmic age, and we’re not sure what to make of it. Evolution is never totally predictable and is often messy.

However, part of the way we navigate this transition successfully is by learning to see what it is that we’re concerned about, and what’s actually present around us right now. Pew Research Center and the Imagining the Internet Center of Elon University recently polled 1,302 scholars, technology experts, government leaders, and corporate practitioners about what will happen in the next decade. The respondents were asked just one question: will the net overall effect of algorithms be positive or negative for individuals and society?

Net Benefits

The canvassing of these respondents, which was non-scientific from a statistical perspective, found that 38 percent of the respondents predicted that the benefits of algorithms will outweigh the detriments for both individuals and society in general, while 37 percent felt the opposite way, and 25 percent thought it would be a draw. These results are interesting, but what was really significant were the respondents’ written comments elaborating their positions. There were seven general themes that emerged in the answers as a whole.

Almost all respondents agreed that algorithms are essentially invisible to the public, and that their influence will increase exponentially over the next decade. Barry Chudakov of Sertain Research and StreamFuzion Corp. breaks down the significance for Pew:

“Algorithms are the new arbiters of human decision-making in almost any area we can imagine. […] They are also a goad to consider [human] cognition: How are we thinking and what does it mean to think through algorithms to mediate our world? The main positive result of this is better understanding of how to make rational decisions, and in this measure a better understanding of ourselves. […] The main negative changes come down to a simple but now quite difficult question: How can we see, and fully understand the implications of, the algorithms programmed into everyday actions and decisions?”

We need to learn to see the ways we are thinking through algorithms so we can ensure we maintain oversight over our decisions and actions — and so we know their limitations and our own.

Another theme is that great benefits will keep coming, thanks to algorithms and AI: we will be processing and understanding far more data, and achieving more breakthroughs in science, technological conveniences, and access to information. This will mean healthcare decisions made with more of the whole picture in mind and decisions on bank loans considered with more context and detail. It might even mean an end to unfair practices like gerrymandering — which utterly depend on old-school ways of drawing up voting areas and disappear when algorithms draw them up instead.

Theme three is less rosy: advances in algorithms and big data sets will mean corporations and governments hold all of the cards and set all of the parameters. If algorithms are created to optimize and achieve profitability for a particular set of people without regard to the rest, AI and algorithms won’t correct this imbalance, but will make it worse. Clemson University assistant professor in human-centered computing Bart Knijnenburg told Pew: “Algorithms will capitalize on convenience and profit, thereby discriminating [against] certain populations, but also eroding the experience of everyone else. […] My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies and users into zombies who exclusively consume easy-to-consume items.”


The fourth theme has to do with biases that exist even in systems that are organized by algorithms. Even the most well-intentioned, inclusive, neutral algorithm creators build their own perspectives into their code, and there are even deficiencies and limitations within the datasets to which algorithms are applied.

Theme five centers upon the potential of access to algorithmically-aided living to deepen already existing cultural and political divides. Consider the differences that exist even now between groups of people consuming algorithmically-driven political news — more and more distinct ideological classes with less and less in common, and less empathy for each other. Algorithmic living makes it more possible for us to avoid and exclude each other; what will the end result of this separation be?

Or, as another example, consider the potential divide between the many highly-educated people who are learning to “biohack” or use nootropics to enhance their lives, and the numerous people of lower socioeconomic classes who lack education and the means or desire to engage in these activities — and lack access as well, even if they hoped to remain upwardly mobile in the algorithm age. Could this kind of progressively deepening division be enhanced by algorithmic living, and will it result in a kind of socio-biounderclass?

The sixth theme concerns unemployment, and many respondents do see the age of the algorithm as the age of mass unemployment. This unattributed response from one person surveyed reflects this overall theme: “I foresee algorithms replacing almost all workers with no real options for the replaced humans.” Other respondents emphasized the need for a universal basic income (UBI) to ensure that even those who have less access and ability to adapt to the changing economy have a basic means for survival.

The final theme from the report: the growing need for algorithmic oversight, transparency, and literacy.

Many respondents advocated for public algorithmic literacy education — the computer literacy of the 21st century — and for a system of accountability for those who create and evolve algorithms. Altimeter Group industry analyst Susan Etlinger told Pew, “Much like the way we increasingly wish to know the place and under what conditions our food and clothing are made, we should question how our data and decisions are made as well. What is the supply chain for that information? Is there clear stewardship and an audit trail? Were the assumptions based on partial information, flawed sources or irrelevant benchmarks? […] If there were ever a time to bring the smartest minds in industry together with the smartest minds in academia to solve this problem, this is the time.”

Putting Algorithms To Work

One of the most important takeaways to glean from this report — and indeed, all reporting on AI right now — is that there is no way to excise algorithms and the advances that are coming with them, such as AI, from our lives. Even if we wanted to, for example, live without all computer technology, it’s too late. That means that strategic planning for the future isn’t about pointlessly trying to ban things that are already coming. The smarter course is to find ways to make algorithms and AI technology work for us.

If we can collaborate with it, AI has the potential to make our working lives better, giving us higher levels of job satisfaction, relieving us of more dangerous and less interesting work. It can also ensure that the best candidates get jobs, and otherwise work to equalize the playing field — if we can ensure that’s how it learns to operate. We are deeply flawed teachers, considering that workplace discrimination, for example, persists. However, with self-awareness and algorithmic literacy, we can also teach ourselves.


Confirmed: AI Can Predict Heart Attacks and Strokes More Accurately Than Doctors

A Notoriously Challenging Task

University of Nottingham researchers created an AI system that scanned routine medical data to predict which patients would have strokes or heart attacks within 10 years. The AI system beat the standard method of prediction, correctly making calls in 355 more cases than traditional means. Predicting cardiovascular events like strokes and heart attacks is a notoriously challenging task. In fact, the researchers note in their recent paper that around half of all strokes and heart attacks occur in patients who were never identified as being “at risk.”

The patient records used in the study included a decade of health outcomes, lab data, drug information, hospital records, and demographic information. The team identified the distinguishing characteristics of patients who experienced strokes and heart attacks using 75 percent of the records. They then tested their models against the standard guidelines using the remaining 25 percent. The standard guidelines scored 0.728 out of 1.0, where 1.0 signifies perfect prediction. The machine-learning models scored between 0.745 and 0.764, with the neural network making 355 more accurate predictions than the standard guidelines and therefore earning the best score. Had those predictions been made in real time, the patients could have been provided with preventative care.
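
For readers curious how such scores are produced, here is an illustrative sketch of the evaluation setup described above: a 75/25 split and an area-under-the-curve score on the held-out quarter, the kind of 0-to-1 figure quoted in the study. The data and model below are synthetic stand-ins, not the Nottingham records or algorithms.

```python
# Illustrative 75/25 split and held-out scoring on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(4000, 30))                          # stand-in clinical variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=4000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Held-out score: {auc:.3f} (1.0 would mean perfect prediction)")
```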

AI Changing Medicine

According to lead researcher Stephen Weng, within five years the AI medical tools they are testing in labs will be improving the accuracy of clinicians’ diagnoses and the prognoses of patients. In practice, Weng envisions busy doctors making the most of their time with AI tools that are essentially masters of pattern recognition. “[T]he algorithm can look through the entire patient list, flag this up, and bring this to the attention of the doctor,” he said to IEEE Spectrum. “This could be done with the patient sitting in front of them during a routine appointment, or in a systematic screen of the entire list.” Although there is already clinical decision support software available, none of it uses AI pattern recognition — which is at the crux of these more accurate results.

Major regulatory hurdles remain before you’ll be seeing Dr. AI, however: “The key barrier to implementation will be managing privacy and patient confidentiality issues, with computer algorithms trawling through vast amounts of patient data which contain confidential and sensitive medical information,” Weng told IEEE Spectrum. Wariness of decision-making capabilities in medical machines on the part of regulators will also be a hurdle for AI technology to clear. For all of these reasons, when, exactly, the tech will be put into practice remains uncertain.


AI Learns to Read Sentiment Without Being Trained to Do So

The Sentiment Neuron

Much of today’s artificial intelligence (AI) relies on machine learning, in which machines respond or react autonomously after learning from a particular data set. Machine learning algorithms, in a sense, predict outcomes using previously established values. Researchers from OpenAI discovered that a machine learning system they created to predict the next character in the text of Amazon reviews developed into an unsupervised system that could learn representations of sentiment.


“We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment,” OpenAI, a non-profit AI research company whose investors include Elon Musk, Peter Thiel, and Sam Altman, explained on their blog. OpenAI’s neural network was able to train itself to analyze sentiment by classifying reviews as either positive or negative, and was able to generate text with a desired sentiment.

The AI was a multiplicative long short-term memory (mLSTM) network trained for a month with “4,096 units on a corpus of 82 million Amazon reviews to predict the next character in a chunk of text.” After training this mLSTM, the researchers turned the model into a sentiment classifier using a linear combination of these units. When they noticed that the classifier was using only a few of the learned units, they discovered that there was a single “sentiment neuron” with a highly predictive sentiment value.
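
That probing step, fitting a linear classifier on top of the network’s units and then checking which unit carries most of the weight, can be sketched as follows. The features here are random stand-ins rather than OpenAI’s mLSTM states, and the “sentiment” unit index is arbitrary.

```python
# Linear probe sketch: which single unit best predicts the sentiment label?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_reviews, n_units = 1000, 4096                        # 4,096 units, as in the OpenAI model
hidden_states = rng.normal(size=(n_reviews, n_units))  # stand-in for mLSTM features
labels = (hidden_states[:, 123] > 0).astype(int)       # pretend one arbitrary unit tracks sentiment

probe = LogisticRegression(max_iter=1000).fit(hidden_states, labels)
strongest_unit = int(np.argmax(np.abs(probe.coef_)))
print("Most predictive unit:", strongest_unit)         # typically recovers unit 123
```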

The sentiment analysis capabilities of this AI surpassed every other approach on the Stanford Sentiment Treebank, a small — but extensively studied — sentiment analysis dataset. The AI boasts a 91.8 percent accuracy — higher than the previous best of 90.2 percent.

Image credit: OpenAI

Unsupervised Learning

Unsupervised learning algorithms are the dream of machine learning researchers. It’s basically an AI capable of learning on its own, which eliminates the need to feed it labelled or organized data. OpenAI’s mLSTM achieved that, but its developers recognize that it may not be the only machine capable of such unsupervised learning:

“We believe the phenomenon is not specific to our model, but is instead a general property of certain large neural networks that are trained to predict the next step or dimension in their inputs.” 

Being able to learn unsupervised would give AIs a significant boost: decreasing the time required for them to be trained, and at the same time, improving performance. Such an AI could, for instance, provide skilled virtual assistance by analyzing, or even predicting, a user’s needs. These — and any other applications imaginable — still require further study of how this unsupervised algorithm developed.

“Our results are a promising step towards general unsupervised representation learning,” OpenAI’s researchers explained. “We found the results by exploring whether we could learn good quality representations as a side effect of language modeling, and scaled up an existing model on a carefully-chosen dataset. Yet the underlying phenomena remain more mysterious than clear.”


Artificial Intelligence is Completely Transforming Modern Healthcare

A New Age of Healthcare

Artificial intelligence is slowly making its way into the realm of modern healthcare. Google’s DeepMind is revolutionizing eye care in the United Kingdom, and IBM’s Watson is tackling cancer diagnostics on par with human physicians. Both AI systems use deep learning, a concept loosely mirroring how our own brains work by having AI software analyze exorbitant amounts of data and uncover patterns — which is particularly applicable in diagnostics.

As medical imaging technology continues to take advantage of every new deep learning breakthrough, the challenge is that the computing technology on which it relies must evolve just as quickly. A company called Nvidia is leading that charge under the guidance of Kimberley Powell, who is confident that Nvidia’s processors are not only meeting the deep learning standards of medical imaging, but also pushing the industry forward as a whole.

Nvidia’s hardware has established its silent but prominent role in deep learning’s marriage with medicine. Powell believes specialized computers like the DGX-1, a powerful deep-learning product, will become increasingly common in hospitals and medical research centers. Strong computing power, like what the DGX-1 can provide, stands to increase the reliability of the diagnostic process, something that, in turn, would significantly boost the standard of care in developing countries.

Deep Learning, MD

While AI won’t be replacing doctors anytime soon, it will provide physicians with tools to more efficiently — and reliably — assess patients. AI is already involved in mining medical data, diagnosing medical images, studying genomics-based data for personalized medicine, and improving the lives of the disabled.

Thanks to Nvidia’s DGX-1, hospitals can efficiently compare a single patient’s tests and history with data from a vast population of other patients. Some medical research centers and startups are automating the analysis of MRIs, CT scans, and X-rays to assist physicians in making a diagnosis. Others are utilizing deep learning to create genetic interpretation engines to identify cancer-causing mutations in patient genomes, bringing to life the concept of personalized medicine.

However, while AI will no doubt continue to revolutionize medicine for years to come, physicians often find themselves perplexed by how to incorporate the technology into their regular practice. Only once AI is accepted, and fully integrated, into medicine will we see the full potential for the technology in terms of lending itself to more efficient and accurate diagnostics — from routine checkups to more specialized fields.


Google’s DeepMind AI Now Has a Memory

Mimicking The Human Brain

Since 2014, DeepMind has been playing Atari video games. Initially, its machine learning systems could learn to win games and beat human scores, but couldn’t remember how it managed to do it. Therefore, for each Atari game, a new neural network was created. DeepMind never benefitted from its own experience—until now.

A team of researchers from DeepMind and Imperial College London created an algorithm that bestows memory on the system, allowing it to learn, retain knowledge, and reuse it. The system learns tasks in sequence and was evaluated on both supervised learning and reinforcement learning tests.

In the human brain, synaptic consolidation is the basis for continual learning. Saving learned knowledge and transferring it from task to task is critical to the way humans learn. The inability to do that has been a key failure in machine learning. The algorithm, called “elastic weight consolidation” (EWC), chooses the most useful parts of what helped the machine play and win games in the past, then transfers only those parts forward.
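
A minimal sketch of that idea, assuming PyTorch: when training on a new task, add a penalty that discourages changes to parameters that mattered for the old task. The per-parameter importance values below are random placeholders (DeepMind estimates them from old-task data), and this is not DeepMind’s implementation.

```python
# EWC-style penalty sketch: protect parameters that were important for the old task.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
importance = {n: torch.rand_like(p) for n, p in model.named_parameters()}  # placeholder values
ewc_lambda = 100.0

def ewc_penalty(model):
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (importance[n] * (p - old_params[n]) ** 2).sum()
    return ewc_lambda / 2 * penalty

# One training step on the new task: task loss plus the protective penalty.
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x), y) + ewc_penalty(model)
loss.backward()
```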


Higher Level Applications

The system is impressive, but isn’t perfect yet. DeepMind can now retain the most important information from its previous experiences in order to learn, but despite that huge bank of experiences, it still can’t perform as well as a neural network trained on just a single game. Efficiency of learning is the next step if machine learning is to match — or eventually eclipse — real-world learning.

The ability to learn tasks in succession without forgetting is a core component of any intelligence, biological or artificial, and elastic weight consolidation gives DeepMind’s system exactly that. The new algorithm supports continual learning much like the synaptic consolidation of the human brain, which is the next step for AI in terms of mastering more challenging tasks and learning contexts. In other words, it will mean that AI systems are better able to take on creative and intellectual challenges previously thought to be the sole province of humankind.


Google’s New AI Can Tell If You Are Human or Not

Good news if you’re not an internet bot. Google’s CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is now invisible. This signals that the artificial intelligence (AI) behind the security software has reached a new level of sophistication.


For those of us who aren’t sure what a CAPTCHA is, it’s that section on a website that forces you to stop what you’re doing to prove you’re not a robot. You usually have to type in a string of strange words or solve a simple puzzle. Now, thanks to machine learning, the system no longer requires conscious human input to recognize if you are, in fact, a person.

The development is fueled by what Google is calling an “advanced risk analysis engine.” This advancement means most of us will never know when we are being checked, since only the “most suspicious traffic” will be tested.

The ability of AI to learn is allowing for significant boosts in its resourcefulness. AI systems have taught themselves to compose music, identify suicidal behavior, and now, in a clear step toward the Singularity, code themselves.

Once machines are capable of true learning, how long will it be until the Turing test and CAPTCHA become silly puzzles AI parents play with their children?


Facebook Wants to Give Your Computer ‘Common Sense’

Common Sense Through Vision

While “common sense” is a particularly vague idea that may be defined differently from person to person, the general idea will be consistent: It is the ability to judge and assess situations and choices in a way that most people would consider reasonable. This is an ability that software developers have been attempting to instill in more sophisticated artificial intelligences (AIs) to help them learn more easily.

One approach being explored right now is essentially giving AIs vision. Combining artificial neural networks with advanced machine vision could give increased context to the information that the AI deciphers.

Credit: Michael Saechang/Flickr

Computers might be able to learn visually by using images and videos, much as babies do, said Yann LeCun, director of Facebook’s artificial intelligence research group, in an interview with MIT Technology Review.

“You could interact with an intelligent system purely with language. The problem is that language is a very low-bandwidth channel,” LeCun said. “Much information that goes through language is because humans have a lot of background knowledge to interpret this information.”

Unsupervised Learning

There are some artificially intelligent systems that have used games to help themselves learn. Also, researchers have even begun giving AI systems dreams to help them learn unsupervised. LeCun said that, in his opinion, unsupervised learning of AIs will be where the most interesting technological discoveries are likely to occur.

“The applications for this are not necessarily in vision — it’s a big part of our effort in making progress in AI,” LeCun said in the MIT Technology Review interview.

LeCun’s team at Facebook created a system to help its AI gain visual context by asking the system to predict if a stack of precariously placed blocks will fall down.

“It serves to establish a baseline — if we were to train the system unsupervised, it has enough power to figure things out like that,” LeCun said.

It seems that we are increasingly giving AI human abilities (playing, seeing, dreaming), and this is helping them become more efficient intelligences.


AI Will Bring About the Biggest Transformation in Human History

Importance Principle

Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

In the history of human progress, a few events have stood out as especially revolutionary: the intentional use of fire, the invention of agriculture, the industrial revolution, possibly the invention of computers and the Internet. But many anticipate that the creation of advanced artificial intelligence will tower over these achievements.

In a popular post, Tim Urban of Wait But Why wrote that artificial intelligence is “by far THE most important topic for our future.”

Or, as AI professor Roman Yampolskiy told me, “Design of human-level AI will be the most impactful event in the history of humankind. It is impossible to over-prepare for it.”

The Importance Principle encourages us to plan for what could be the greatest “change in the history of life.” But just what are we preparing for? What will more advanced AI mean for society? I turned to some of the top experts in the field of AI to consider these questions.


Societal Benefits?

Guruduth Banavar, the Vice President of IBM Research, is hopeful that as AI advances, it will help humanity advance as well. In favor of the principle, he said, “I strongly believe this. I think this goes back to evolution. From the evolutionary point of view, humans have reached their current level of power and control over the world because of intelligence. … AI is augmented intelligence – it’s a combination of humans and AI working together. And this will produce a more productive and realistic future than autonomous AI, which is too far out. In the foreseeable future, augmented AI – AI working with people – will transform life on the planet. It will help us solve the big problems like those related to the environment, health, and education.”

“I think I also agreed with that one,” said Bart Selman, a professor at Cornell University. “Maybe not every person on earth should be concerned about it, but there should be, among scientists, a discussion about these issues and a plan – can you build safety guidelines to work with value alignment work? What can you actually do to make sure that the developments are beneficial in the end?”

Anca Dragan, an assistant professor at UC Berkeley, explained, “Ultimately, we work on AI because we believe it can have a strong positive impact on the world. But the more capable the technology becomes, the easier it becomes to misuse it – or perhaps, the effects of misusing it become more drastic. That is why it is so important, as we make progress, to start thinking more strongly about what role AI will play.”

Short-Term Concerns

Though the Importance Principle specifically mentions advanced AI, some of the researchers I interviewed pointed out that nearer-term artificial intelligence could also drastically impact humanity.

“I believe that AI will create profound change even before it is ‘advanced’ and thus we need to plan and manage growth of the technology,” explained Kay Firth-Butterfield, Executive Director of AI-Austin.org. “As humans, we are not good at long-term planning because our civil systems don’t encourage it, however, this is an area in which we must develop our abilities to ensure a responsible and beneficial partnership between man and machine.”

Stefano Ermon, an assistant professor at Stanford University, also considered the impacts of less advanced AI, saying, “It’s an incredibly powerful technology. I think it’s even hard to imagine what one could do if we are able to develop a strong AI, but even before that, well before that, the capabilities are really huge. We’ve seen the kind of computers and information technologies we have today, the way they’ve revolutionized our society, our economy, our everyday lives. And my guess is that AI technologies would have the potential to be even more impactful and even more revolutionary on our lives. And so I think it’s going to be a big change and it’s worth thinking very carefully about, although it’s hard to plan for it.”

In a follow up question about planning for AI over the shorter term, Selman added, “I think the effect will be quite dramatic. This is another interesting point – sometimes AI scientists say, well it might not be advanced AI will do us in, but dumb AI. … The example is always the self-driving car has no idea it’s driving you anywhere. It doesn’t even know what driving is. … If you looked at the videos of an accident that’s going to happen, people are so surprised that the car doesn’t hit the brakes at all, and that’s because the car works quite differently than humans. So I think there is some short-term [AI] risk in that … we actually think they’re smarter than they are. And I think that will actually go away when the machines become smarter, but for now…”

Learning From Experience

As revolutionary as advanced AI might be, we can still learn from previous technological revolutions and draw on their lessons to prepare for the changes ahead.

Toby Walsh, a guest professor at Technical University of Berlin, expressed a common criticism of the principles, arguing that the Importance Principle could – and probably should – apply to many “groundbreaking technologies.”

He explained, “This is one of those principles where I think you could put any society-changing technology in place of advanced AI. … It would be true of the steam engine, in some sense it’s true of social media and we’ve failed at that one, it could be true of the Internet but we failed at planning that well. It could be true of fire too, but we failed on that one as well and used it for war. But to get back to the observation that some of them are things that are not particular to AI – once you realize that AI is going to be groundbreaking, then all of the things that should apply to any groundbreaking technology should apply.”

By looking back at these previous revolutionary technologies and understanding their impacts, perhaps we can gain insight into how we can plan ahead for advanced AI.

Dragan was also interested in more explicit solutions to the problem of planning ahead.

“As the AI capabilities advance,” she told me, “we have to take a step back and ask ourselves: are we solving the right problem? Is there a better problem definition that will more likely result in benefits to humanity?

“For instance, we have always defined AI agents as rational. That means they maximize expected utility. Thus far, utility is assumed to be known. But if you think about it, there is no gospel specifying utility. We are assuming that some *person* somewhere will know exactly what utility to specify for their agent. Well, it turns out, we don’t work like that: it is really hard for people, including AI experts, to specify utility functions. We try our best, but when the system goes ahead and optimizes for what we inputted, the result is sometimes surprising, and not in a good way. This suggests that our definition of an AI agent is predicated on a wrong assumption. We’ve already started seeing that in robotics – the definition of how a robot should move didn’t account for people, the definition of how a robot should learn from demonstration assumed that people can provide perfect demonstrations to a robot, etc. – I assume we are going to see this more and more in AI as a whole. We have to stop making implicit assumptions about people and end-users of AI, and rigorously tackle that head-on, putting people into the equation.”

What Do You Think?

What kind of impact will advanced AI have on the development of human progress? How can we prepare for such potentially tremendous changes? Can we prepare? What other questions do we, as a society, need to ask?


An AI Completed 360,000 Hours of Finance Work in Just Seconds

A New COIN

JP Morgan Chase & Co. is the biggest bank in the United States. It is one of the largest employers in the American banking sector, with more than 240,000 employees serving millions of customers. Some of those employees are lawyers and loan officers who spend a total of 360,000 hours each year tackling a slew of rather mundane tasks, such as interpreting commercial-loan agreements. Now, the company has managed to cut the time spent on this work down to a matter of seconds using machine learning.


In June, JP Morgan started implementing a program called COIN, which is short for Contract Intelligence. COIN runs on a machine learning system that’s powered by a new private cloud network that the bank uses. Apart from shortening the time it takes to review documents, COIN has also managed to help JP Morgan decrease its number of loan-servicing mistakes. According to the program’s designers, these mistakes stemmed from human error in interpreting 12,000 new wholesale contracts every year.

COIN is part of the bank’s push to automate filing tasks and create new tools for both its bankers and clients. Automation is now a growing part of JP Morgan’s $9.6 billion technology budget. In fact, over the past two years, technology spending in JP Morgan’s consumer banking sector has totaled about $1 billion. “We have invested heavily in technology and marketing — and we are seeing strong returns,” the bank said in a presentation prior to its annual investor day.

Efficiency Makes a Case for Automation

Over the coming years and decades, artificial intelligence (AI) is expected to usher in a new era of automation. Accordingly, the increase in automated systems will bring with it job displacement in a number of industries, including finance, transportation, manufacturing, information technology, and even law. In total, one study projected that 57 percent of the world’s jobs are at risk of being replaced by automated systems.

This is due, in part, to growing access to technology and cheaper computing systems. Automation can increase efficiency and limit or altogether eliminate human error, as JP Morgan’s COIN program has demonstrated. “We’re starting to see the real fruits of our labor,” said Matt Zames, the bank’s CTO. “This is not pie-in-the-sky stuff.”


The bank, however, doesn’t see their move toward better automated systems as a cause of unemployment. “People always talk about this stuff as displacement. I talk about it as freeing people to work on higher-value things, which is why it’s such a terrific opportunity for the firm,” said Dana Deasy, Chief Information Officer at JP Morgan.

Whatever the case may be, automation will disrupt jobs in the years to come. In some places, like JP Morgan, it’s already begun. Institutions, both private and public, have to be ready to adjust to the economic future automation will bring with it.


“AI Brain Scans” Reveal How Synthetic Intelligences Think

Graphcore has taken some mesmerizing images of its Intelligent Processing Units (IPUs) in action. While the images may look like a psychedelic dream or even similar to a human brain scan, they’re actually graphs of the AI’s computations.


Graphcore used its development chip running various AI software to produce these stunning images of what occurs as processes run. For example, the first image below shows how the Graphcore IPU processed Microsoft Research’s ResNet-34 architecture.

Before executing any processes, machine learning systems go through a phase in which the programming creates a graph of all the computations necessary to complete a process. The system then uses that graph to complete those processes. Essentially, what we are seeing in Graphcore’s images is a top-down view of layers upon layers of computations that are executed in passes running back and forth across the data. These passes and the connections between them are color coded to produce images like the second one below.

Nigel Toon, Graphcore’s CEO, said, “The striking similarity to scans of the brain highlights that what your brain is doing is something very similar. Your brain has neurons, and synapses connecting those neurons together, and you’re effectively modeling something very similar in this machine learning world as well. What you’re seeing is how the graph operates on the processor, so it would be analogous to taking a scan from a brain to see how it works.”

Credit: Graphcore / Matt Fyles


New AI Can Write and Rewrite Its Own Code to Increase Its Intelligence

Learning From Less Data

The old adage that practice makes perfect applies to machines as well, as many of today’s artificially intelligent devices rely on repetition to learn. Deep-learning algorithms are designed to allow AI devices to glean knowledge from datasets and then apply what they’ve learned to concrete situations. For example, an AI system is fed data about how the sky is usually blue, which allows it to later recognize the sky in a series of images.


Complex work can be accomplished using this method, but it certainly leaves something to be desired. For instance, could the same results be obtained by exposing deep-learning AI to fewer examples? Boston-based startup Gamalon developed a new technology to try to answer just that, and this week, it released two products that utilize its new approach.

Gamalon calls the technique it employed Bayesian program synthesis. It is based on a mathematical framework named after the 18th-century mathematician Thomas Bayes. Bayesian probability is used to refine predictions about the world using experience. This form of probabilistic programming — code that uses probabilities instead of specific variable values — requires fewer examples to make a determination, such as, for example, that the sky is blue with patches of white clouds. The program also refines its knowledge as further examples are provided, and its code can be rewritten to tweak the probabilities.
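
As a toy illustration of that Bayesian idea (and not of Gamalon’s actual system), the snippet below starts from a prior belief and sharpens it with each new labelled example:

```python
# Toy Bayesian updating: each observation refines the estimated probability.
from fractions import Fraction

blue, not_blue = 1, 1                              # a neutral prior belief
observations = ["blue", "blue", "grey", "blue"]    # a handful of labelled examples

for obs in observations:
    if obs == "blue":
        blue += 1
    else:
        not_blue += 1
    print(f"After '{obs}': P(sky is blue) = {Fraction(blue, blue + not_blue)}")
```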

Probabilistic Programming

While this new approach to programming still has difficult challenges to overcome, it has significant potential to automate the development of machine-learning algorithms. “Probabilistic programming will make machine learning much easier for researchers and practitioners,” explained Brendan Lake, an NYU research fellow who worked on a probabilistic programming technique in 2015. “It has the potential to take care of the difficult [programming] parts automatically.”

Gamalon CEO and cofounder Ben Vigoda showed MIT Technology Review a demo drawing app that uses their new method. The app is similar to one released by Google last year in that it predicts what a person is trying to sketch. However, unlike Google’s version, which relied on sketches it had previously seen to make predictions, Gamalon’s app relies on probabilistic programming to identify an object’s key features. Therefore, even if you draw a figure that’s different from what the app has previously seen, as long as it recognizes certain features — like how a square with a triangle on top is probably a house — it will make a correct prediction.

Image credits: Gamalon/MIT Technology Review

The two products Gamalon released show that this technique could have near-term commercial use. One product, Gamalon Structure, uses Bayesian program synthesis to recognize concepts from raw text, and it does so more efficiently than is normally possible. For example, after receiving only a manufacturer’s description of a television, it can determine the brand, product name, screen resolution, size, and other features. Another app, called Gamalon Match, categorizes products and prices in a store’s inventory. In both cases, the system can be trained quickly to recognize variations in acronyms or abbreviations.

Vigoda believes there are other possible applications as well. For example, if equipped with a Bayesian model of machine learning, smartphones or laptops wouldn’t need to share personal data with large companies to determine user interests; the calculations could be done effectively within the device. Autonomous cars could also learn to adapt to their environment much faster using this method of learning. The potential impact of smarter machines really can’t be overstated.


We Need a Plan for When AI Becomes Smarter Than Us

When Apple released its software application, Siri, in 2011, iPhone users had high expectations for their intelligent personal assistants. Yet despite its impressive and growing capabilities, Siri often makes mistakes. The software’s imperfections highlight the clear limitations of current AI: today’s machine intelligence can’t understand the varied and changing needs and preferences of human life.


However, as artificial intelligence advances, experts believe that intelligent machines will eventually – and probably soon – understand the world better than humans. While it might be easy to understand how or why Siri makes a mistake, figuring out why a superintelligent AI made the decision it did will be much more challenging.

If humans cannot understand and evaluate these machines, how will they control them?

Paul Christiano, a Ph.D. student in computer science at UC Berkeley, has been working on addressing this problem. He believes that to ensure safe and beneficial AI, researchers and operators must learn to measure how well intelligent machines do what humans want, even as these machines surpass human intelligence.

Semi-Supervised Learning

The most obvious way to supervise the development of an AI system also happens to be the hard way. As Christiano explains: “One way humans can communicate what they want, is by spending a lot of time digging down on some small decision that was made [by an AI], and try to evaluate how good that decision was.”

But while this is theoretically possible, the human researchers would never have the time or resources to evaluate every decision the AI made. “If you want to make a good evaluation, you could spend several hours analyzing a decision that the machine made in one second,” says Christiano.

For example, suppose an amateur chess player wants to understand a better chess player’s previous move. Merely spending a few minutes evaluating this move won’t be enough, but if she spends a few hours she could consider every alternative and develop a meaningful understanding of the better player’s moves.

Fortunately for researchers, they don’t need to evaluate every decision an AI makes in order to be confident in its behavior. Instead, researchers can choose “the machine’s most interesting and informative decisions, where getting feedback would most reduce our uncertainty,” Christiano explains.

“Say your phone pinged you about a calendar event while you were on a phone call,” he elaborates, “That event is not analogous to anything else it has done before, so it’s not sure whether it is good or bad.” Due to this uncertainty, the phone would send the transcript of its decisions to an evaluator at Google, for example. The evaluator would study the transcript, ask the phone owner how he felt about the ping, and determine whether pinging users during phone calls is a desirable or undesirable action. By providing this feedback, Google teaches the phone when it should interrupt users in the future.
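
The selection step Christiano describes, forwarding only the decisions the system is least sure about to a human evaluator, can be sketched as follows. The decision names, probabilities, and cutoff below are illustrative placeholders, not taken from his work.

```python
# Uncertainty-based selection: send only the least certain decisions for review.
import numpy as np

rng = np.random.default_rng(0)
decisions = [f"decision_{i}" for i in range(1000)]
p_good = rng.uniform(size=1000)          # the system's own estimate that each decision was good

uncertainty = 1 - np.abs(p_good - 0.5) * 2        # 1.0 at p=0.5, 0.0 at p=0 or 1
ask_human = np.argsort(uncertainty)[-10:]         # the ten most informative cases

for i in ask_human:
    print(f"Send {decisions[i]} for human review (estimated p_good={p_good[i]:.2f})")
```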

This active learning process is an efficient method for humans to train AIs, but what happens when humans need to evaluate AIs that exceed human intelligence?

Consider a computer that is mastering chess. How could a human give appropriate feedback to the computer if the human has not mastered chess? The human might criticize a move that the computer makes, only to realize later that the machine was correct.

With increasingly intelligent phones and computers, a similar problem is bound to occur. Eventually, Christiano explains, “we need to handle the case where AI systems surpass human performance at basically everything.”

If a phone knows much more about the world than its human evaluators, then the evaluators cannot trust their human judgment. They will need to “enlist the help of more AI systems,” Christiano explains.

Using AIs to Evaluate Smarter AIs

When a phone pings a user while he is on a call, the user’s reaction to this decision is crucial in determining whether the phone will interrupt users during future phone calls. But, as Christiano argues, “if a more advanced machine is much better than human users at understanding the consequences of interruptions, then it might be a bad idea to just ask the human ‘should the phone have interrupted you right then?’” The human might express annoyance at the interruption, but the machine might know better and understand that this annoyance was necessary to keep the user’s life running smoothly.

In these situations, Christiano proposes that human evaluators use other intelligent machines to do the grunt work of evaluating an AI’s decisions. In practice, a less capable System 1 would be in charge of evaluating the more capable System 2. Even though System 2 is smarter, System 1 can process a large amount of information quickly, and can understand how System 2 should revise its behavior. The human trainers would still provide input and oversee the process, but their role would be limited.

This training process would help Google understand how to create a safer and more intelligent AI – System 3 – which the human researchers could then train using System 2.

Christiano explains that these intelligent machines would be like little agents that carry out tasks for humans. Siri already has this limited ability to take human input and figure out what the human wants, but as AI technology advances, machines will learn to carry out complex tasks that humans cannot fully understand.

Can We Ensure that an AI Holds Human Values?

As Google and other tech companies continue to improve their intelligent machines with each evaluation, the human trainers will fulfill a smaller role. Eventually, Christiano explains, “it’s effectively just one machine evaluating another machine’s behavior.”

Ideally, “each time you build a more powerful machine, it effectively models human values and does what humans would like,” says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence. To put this in human terms: a complex intelligent machine would resemble a large organization of humans. If the organization does tasks that are too complex for any individual human to understand, it may pursue goals that humans wouldn’t like.

In order to address these control issues, Christiano is working on an “end-to-end description of this machine learning process, fleshing out key technical problems that seem most relevant.” His research will help bolster the understanding of how humans can use AI systems to evaluate the behavior of more advanced AI systems. If his work succeeds, it will be a significant step in building trustworthy artificial intelligence.

You can learn more about Paul Christiano’s work here.

The post We Need a Plan for When AI Becomes Smarter Than Us appeared first on Futurism.

Artificial Intelligence Will Protect Us in the World of Tomorrow

When it comes to cybersecurity, no nation can afford to slack off. If a nation’s defense systems cannot anticipate how an attacker will try to fool them, then an especially clever attack could expose military secrets or use disguised malware to cause major networks to crash.

A nation’s defense systems must keep up with the constant threat of attack, but this is a difficult and never-ending process. It seems that the defense is always playing catch-up.

Ben Rubinstein, a professor at the University of Melbourne in Australia, asks: “Wouldn’t it be good if we knew what the malware writers are going to do next, and to know what type of malware is likely to get through the filters?”

In other words, what if defense systems could learn to anticipate how attackers will try to fool them?

Adversarial Machine Learning

In order to address this question, Rubinstein studies how to prepare machine-learning systems to catch adversarial attacks. In the game of national cybersecurity, these adversaries are often individual hackers or governments who want to trick machine-learning systems for profit or political gain.

Nations have become increasingly dependent on machine-learning systems to protect against such adversaries. Unaided by humans, machine-learning systems in anti-malware and facial recognition software have the ability to learn and improve their function as they encounter new data. As they learn, they become better at catching adversarial attacks.

Machine-learning systems are generally good at catching adversaries, but they are not completely immune to threats, and adversaries are constantly looking for new ways to fool them. Rubinstein says, “Machine learning works well if you give it data like it’s seen before, but if you give it data that it’s never seen before, there’s no guarantee that it’s going to work.”

With adversarial machine learning, security agencies address this weakness by presenting the system with different types of malicious data to test the system’s filters. The system then digests this new information and learns how to identify and capture malware from clever attackers.

Security Evaluation of Machine-Learning Systems

Rubinstein’s project is called “Security Evaluation of Machine-Learning Systems”, and his ultimate goal is to develop a software tool that companies and government agencies can use to test their defenses. Any company or agency that uses machine-learning systems could run his software against their system. Rubinstein’s tool would attack and try to fool the system in order to expose the system’s vulnerabilities. In doing so, his tool anticipates how an attacker could slip by the system’s defenses.

The software would evaluate existing machine-learning systems and find weak spots that adversaries might try to exploit – similar to how one might defend a castle.

“We’re not giving you a new castle,” Rubinstein says, “we’re just going to walk around the perimeter and look for holes in the walls and weak parts of the castle, or see where the moat is too shallow.”

By analyzing many different machine-learning systems, his software program will pick up on trends and be able to advise security agencies to either use a different system or bolster the security of their existing system. In this sense, his program acts as a consultant for every machine-learning system.

Consider a program that does facial recognition. This program would use machine learning to identify faces and catch adversaries that pretend to look like someone else.

Rubinstein explains: “Our software aims to automate this security evaluation so that it takes an image of a person and a program that does facial recognition, and it will tell you how to change its appearance so that it will evade detection or change the outcome of machine learning in some way.”

This is called a mimicry attack – when an adversary makes one instance (one face) look like another, and thereby fools a system.

To make this example easier to visualize, Rubinstein’s group built a program that demonstrates how to change a face’s appearance to fool a machine-learning system into thinking that it is another face.

In the image below, the two faces don’t look alike, but the left image has been modified so that the machine-learning system thinks it is the same as the image on the right. This example provides insight into how adversaries can fool machine-learning systems by exploiting quirks.

[Image: a face modified so that the machine-learning system matches it to a different face. Credit: Ben Rubinstein]

When Rubinstein’s software fools a system with a mimicry attack, security personnel can then take that information and retrain their program to establish more effective security when the stakes are higher.
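Rubinstein's actual software isn't public, but the flavor of such an evasion attack can be illustrated with a generic gradient-style perturbation against a toy linear classifier, a simplified cousin of the well-known "fast gradient sign" technique. The weights, features, and step size below are made up for the example and are not drawn from his tool.

    # Toy evasion/mimicry idea: nudge an input just enough that a classifier
    # assigns it to another class. Hand-built linear model, illustrative only.
    import numpy as np

    # A linear "face classifier": score > 0 means identity A, otherwise identity B.
    w = np.array([1.0, -2.0, 0.5, 3.0])
    b = -0.2
    def score(x):
        return float(w @ x + b)

    x = np.array([0.4, -0.1, 0.8, 0.3])      # an input classified as identity A
    print("original score:", score(x))        # positive, so identity A

    # Step against the gradient of the score (which for a linear model is just w),
    # in small signed increments, until the decision flips.
    epsilon = 0.05
    x_adv = x.copy()
    while score(x_adv) > 0:
        x_adv -= epsilon * np.sign(w)

    print("perturbed score:", score(x_adv))   # negative, so now identity B
    print("total change per feature:", x_adv - x)

A real attack on a face recognizer works in pixel space against a deep network, but the principle is the same: small, targeted changes to the input exploit quirks in the learned decision boundary.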

Minimizing the Attacker’s Advantage

While Rubinstein’s software will help to secure machine-learning systems against adversarial attacks, he has no illusions about the natural advantages that attackers enjoy. It will always be easier to attack a castle than to defend it, and the same holds true for a machine-learning system. This is called the ‘asymmetry of cyberwarfare.’

“The attacker can come in from any angle. It only needs to succeed at one point, but the defender needs to succeed at all points,” says Rubinstein.

In general, Rubinstein worries that the tools available to test machine-learning systems are theoretical in nature, and put too much responsibility on the security personnel to understand the complex math involved. A researcher might redo the mathematical analysis for every new learning system, but security personnel are unlikely to have the time or resources to keep up.

Rubinstein aims to “bring what’s out there in theory and make it more applied and more practical and easy for anyone who’s using machine learning in a system to evaluate the security of their system.”

With his software, Rubinstein intends to help level the playing field between attackers and defenders. By giving security agencies better tools to test and adapt their machine-learning systems, he hopes to improve the ability of security personnel to anticipate and guard against cyberattacks.

The post Artificial Intelligence Will Protect Us in the World of Tomorrow appeared first on Futurism.

Google’s AI is Learning to Make Other AI

Training Your Replacement

Imagine the conflicted feelings of the machine learning expert who is creating artificial intelligence (AI) that they know will one day, possibly very soon, be able to create better AI than them. It’s the new age’s way of holding on to the time-honored tradition of having to train your own replacement. Machine learning experts are currently being paid a premium wage due to their limited numbers and the high demand for their valuable skills. However, with the dawn of software that is “learning to learn,” those days may be numbered.

The most prolific minds in AI research from groups such as Google Brain, OpenAI, DeepMind, and university research departments at the nation's most prestigious tech schools are developing machine-learning systems that can create machine-learning systems. Researchers from Google Brain were able to design software that created an AI system to take a test that measures how well software can process language. The software did better on the test than software designed by humans. So, in a manner of speaking, much like the recently reported zebra shark, AI can reproduce asexually.

According to MIT Technology Review, Jeff Dean, the leader of the Google Brain group, sees "automated machine learning" as the most promising research avenue for his team. "Currently the way you solve problems is you have expertise and data and computation. Can we eliminate the need for a lot of machine-learning expertise?"
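The Google Brain system is far more sophisticated (it learns to propose new network architectures), but the basic idea of software choosing the model instead of a human expert can be sketched with a simple random search over configurations. The search space and dataset below are arbitrary stand-ins, not Google's setup.

    # A naive "automated machine learning" loop: try random model configurations
    # and keep whichever scores best. Illustrative only.
    import random
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    random.seed(0)

    best_score, best_config = 0.0, None
    for _ in range(10):                      # try ten random "architectures"
        config = {
            "hidden_layer_sizes": random.choice([(32,), (64,), (64, 32), (128, 64)]),
            "alpha": random.choice([1e-4, 1e-3, 1e-2]),
            "learning_rate_init": random.choice([1e-3, 1e-2]),
        }
        model = MLPClassifier(max_iter=200, random_state=0, **config)
        score = cross_val_score(model, X, y, cv=3).mean()
        if score > best_score:
            best_score, best_config = score, config

    print("best configuration found:", best_config, "accuracy:", round(best_score, 3))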

Image credit: Shutterstock/iurii

Ever Expanding Experts

If it turns out that AI can consistently perform at levels comparable to the Google Brain experiment, self-starting AI could lead to faster implementation of the technology. While this prospect is very exciting for tech enthusiasts, growing numbers of people are worried about what increased roll-outs of AI systems could mean for their livelihoods.

Automation is set to disrupt the way the economy, and even capitalism itself, has operated for centuries. In the long run, machines are cheaper than human workers. There's no need to worry about vacation time, unionization, health insurance, or many other aspects of employment that people need or expect from their employers. However, this cheaper, more effective labor force will come at a great cost.

The biggest sector to be hit by the proliferation of automation will be manufacturing, especially in the developing world. President Obama even mentioned automation in his farewell address. “But the next wave of economic dislocations won’t come from overseas. It will come from the relentless pace of automation that makes a lot of good, middle-class jobs obsolete.” Industry experts tend to agree.

What's more, as this development indicates, it's not just low-skill jobs that are on the chopping block. There are currently AI systems being developed that can replace film editors, songwriters, journalists, and much more. And now, with AI that is capable of creating better AI than humans can, we all need to be a little more observant and aware of what's on the horizon.

The post Google’s AI is Learning to Make Other AI appeared first on Futurism.

Predicting 2017: The Rise of Synthetic Intelligence

The transition from one year to the next is always a little uncertain – an uneasy blend of anxiety and optimism, it’s also a time of retrospection, introspection, and even a little tentative prognostication. And since the latter is our stock-in-trade at Futurism, we believe now is the perfect time to look ahead at what 2017 has in store for us.

When it came to artificial intelligence (AI), 2016 was a year of notable firsts: Tesla’s autopilot predicted a car crash, Alexa was subpoenaed, the White House took notice of the subject, and Google’s AlphaGo beat the reigning European Go champion. But 2016 is gone and buried and 2017 is promising even greater advances in AI as the technology further infiltrates our lives and colonizes new frontiers of endeavor.

Here’s some of what we can expect for artificial intelligence in the year ahead.

Personal AI

2016 saw the growth of more consumer-oriented AI. Alexa, Siri, and Cortana—among many others—now come standard in our handy personal devices, and Amazon even open-sourced Alexa for developers. Look for 2017 to be the year when AI begins to be commonplace in some apps, and also expect the other major tech companies to follow Amazon's lead in opening up their AI systems to outside development.

With this more collaborative approach to the evolution of AI, we may finally begin to see the personalization of artificial intelligence, and a great proliferation of new AI programs with idiosyncratic personalities, temperaments, and even intellectual outlooks.

Some of the artificial intelligence (AI) algorithms currently helping machines think. Credit: CIO Journal/Narrative Science

A Better Conversationalist

Trying to hold a conversation with Siri or Alexa is like watching two people in the same canoe paddling in opposite directions—it just ain't goin' anywhere. For one thing, neither has much of a sense of humor; furthermore, these robotic ladies just lack that wonted je ne sais quoi we expect in our intellectual sparring partners—they're dull, frankly, and haven't really got much to say.

But in 2017, that’s all about to change.

“In 2017 there will be a chatbot that passes the Turing test, exhibiting responses so human-like that an average person wouldn’t be able to tell if it’s human or machine,” forecasts Jim McHugh, Vice President and General Manager at NVIDIA. Such extraordinary capabilities will be the natural sequel to the new advances in machine learning, natural language processing, and pattern recognition that will finally beget more empathic and intuitive AI programs.

“In the coming year, we will see advances that are used to endow systems with new human-centered qualities, including more natural, fluid conversation—that can address several topics or needs in one ongoing interaction, and deeper understanding of human values and intentions, such as recognizing the commitments we make to others in our email and text messaging,” observes Eric Horvitz, Technical Fellow and Managing Director at Microsoft Research.

Very soon now, you’ll really be able to have that deep, meaningful conversation with Alexa that you always wanted.

Cognitive Machinery

According to the big brains at Ovum, “machine learning will be the biggest disruptor for big data analytics in 2017.”  They’ve got a point, too—a damn good one. Deep learning and neural networks have shown remarkable promise, even holding out the hope that they might point the way toward achieving some sort of serviceable, human-like machine intelligence—not a terrible surprise, since their mechanism mimics the layered cognitive processing employed by the human brain.

And in 2017, we can expect even greater strides in machine learning, as massive upgrades to parallel processing power enable the networks to crunch ever-larger blocks of data. "Generative adversarial networks" (GANs) are the next big thing in machine learning—essentially paired networks, a generator that learns to produce data resembling its training set and a discriminator that learns to distinguish real data from the generator's fakes.
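As a rough illustration of that pairing, here is a minimal GAN sketch in PyTorch that learns to imitate a simple one-dimensional Gaussian. It is only a toy: real GANs for images use far larger networks, and every architecture and hyperparameter choice below is arbitrary.

    # Minimal GAN: a generator G learns to produce samples resembling the target
    # distribution, while a discriminator D learns to tell real from generated.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0         # "real" data drawn from N(3, 0.5)
        fake = G(torch.randn(64, 8))                   # generated data from random noise

        # Train D to output 1 for real samples and 0 for fakes.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

        # Train G to make D output 1 for its fakes.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_G.zero_grad(); g_loss.backward(); opt_G.step()

    samples = G(torch.randn(1000, 8)).detach()
    print("generated mean/std:", samples.mean().item(), samples.std().item())  # should drift toward ~3.0 / ~0.5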

So look for our machines to become better learners in 2017, as AI approaches its “terrible twos” and begins to cognitively mature.

A Companion Mind

Harry Shum, Executive Vice President of Microsoft's AI and Research Group, is cheerfully optimistic about AI's outlook in the coming year:

“In 2017 we’ll see increased acceleration in the democratization of AI for every person and every organization. With advances in technology, computers will gain even greater ability to see, hear and understand our world—to make us more productive, to have more fun and also enable greater strides towards solving some of society’s most pressing challenges like fighting disease, ignorance, and poverty.”

And that seems to be the general consensus about what the New Year holds for artificial intelligence. Not a mystical singularity; not the sudden “awakening” of an inchoate machine mind—inhuman, alien, perhaps even malevolent or at least antipathetic to all we hold dear. That’s a fantasy—the reality will be far more prosaic.

If AI is taken to mean the evolution of tools that act in concert with their makers, and are instilled with a limited, almost instinctual awareness, then we seem to be well on the way toward building a world in which our machines are active partners in the business of life. So don’t look for the coming of Skynet or HAL 9000 in 2017—nothing so dramatic as all that. For now, baby steps—incremental steps toward a world in which our very tools are endowed with the same sort of sensory and reactive faculties that are found throughout the living world.

Now, as for 2018… that’s a different story altogether.

The post Predicting 2017: The Rise of Synthetic Intelligence appeared first on Futurism.

New Lie-Detecting Robot Security Agent Could Help Secure Borders

AVATAR

Meet the Automated Virtual Agent for Truth Assessments in Real Time (AVATAR), a robotic device created by San Diego State University management information systems professor Aaron Elkins that will essentially function as a lie detector at international borders.

“AVATAR is a kiosk, much like an airport check-in or grocery store self-checkout kiosk,” explains Elkins. “However, this kiosk has a face on the screen that asks questions of travelers and can detect changes in physiology and behavior during the interview. The system can detect changes in the eyes, voice, gestures, and posture to determine potential risk. It can even tell when you’re curling your toes.”

When AVATAR is in place as a border security measure, travelers will simply step up to the kiosk where the device will ask them questions similar to what a human security agent might ask. The responses will then be analyzed by the system through eye-detection software and motion and pressure sensors to identify signs of lying and discomfort.

The virtual agent will ask several baseline questions that will prevent passengers from being singled out unduly. For example, someone might simply have physiological signs of discomfort because they have a fear of flying and not necessarily because they have something nefarious to hide. Should AVATAR detect anything out of the ordinary, the passenger will be flagged and human agents will take over.
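AVATAR's actual models aren't described in that level of detail, but the baseline idea can be sketched in a few lines: record a traveler's own readings on neutral questions, then flag answers whose signals deviate sharply from that personal baseline. The signals, numbers, and threshold below are all hypothetical.

    # Toy baseline check: flag responses whose simulated physiological readings
    # (e.g., pupil dilation, voice pitch) deviate strongly from the traveler's
    # own neutral-question baseline. Values and threshold are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)

    baseline = rng.normal(loc=[3.0, 180.0], scale=[0.2, 5.0], size=(5, 2))   # neutral questions
    responses = np.array([[3.1, 182.0],     # ordinary answer
                          [3.0, 178.0],     # ordinary answer
                          [4.5, 215.0]])    # unusually stressed answer

    mean, std = baseline.mean(axis=0), baseline.std(axis=0)
    z_scores = np.abs((responses - mean) / std)

    # Flag a response for human review if any signal is far outside the baseline.
    flagged = (z_scores > 3.0).any(axis=1)
    for i, flag in enumerate(flagged):
        print(f"question {i + 1}: {'flag for a human agent' if flag else 'no concern'}")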

Image Credit: Aaron Elkins/ San Diego State University

Beyond Border Security

Currently, AVATAR’s creator is seeking government agencies willing to put his technology to the test. “AVATAR has been tested in labs, in airports, and at border crossing stations,” he explained. “The system is fully ready for implementation to help stem the flow of contraband, thwart fleeing criminals, and detect potential terrorists and many other applications in the effort to secure international borders.”

While the system was designed with the above applications in mind, AVATAR has potential for use outside of border security as well. “We’ve come to realize that this can be used not just for border security, but also for law enforcement, job interviews, and other human resources applications as well,” Elkins added. “We continue to make improvements, such as analyzing the collected data using Big Data analysis techniques that make AVATAR a potentially valuable tool across many industries.”

Advancements in big data, along with constantly improving machine learning capabilities, can lead to more opportunities for automated systems like AVATAR in the near future. Right now, we already have numerous shopping websites employing the technology for a more seamless shopping experience. Fraud detection is another field that has the potential to greatly benefit from this technology, as is the medical industry. One day soon, you may find yourself having more face-to-screen interactions with smart machines like AVATAR than face-to-face ones with traditional employees.

The post New Lie-Detecting Robot Security Agent Could Help Secure Borders appeared first on Futurism.

Westworld and AI: Experts Discuss the Future of Intelligent Machines

Sophia’s No Dolores…Yet.

Remember that robot that freaked everyone out on the internet because it said it will destroy humans? That was Sophia—and she feels terrible about the whole thing.

In any case, those who watched HBO’s latest series Westworld might be reminded of her, as the show’s artificially intelligent (AI) robots entertained their guests—or shot at one another. And that season finale? Oh, yes.

The reality is, however, Sophia isn’t quite like that yet. She’s no Dolores Abernathy.

Will robots develop the killer instinct? Credit: HBO

In interviews with Inverse, industry experts at Hanson Robotics—the makers of Sophia and that Albert Einstein HUBO—and the Allen Institute for Artificial Intelligence are confident that the future of AI and robotics will look very much like what Westworld shows us.

As to be expected, and just like in the HBO series, the key lies in the code. There are two systems of code that make Dolores superior to Sophia, explains Hanson’s VP of Creative Stephan Bugaj, who also heads personality design for their robots. These systems are the secret to making Hanson robots more relatable, although both are still in their early stages of development in the real world.

The first is generative code. This is a system that’s capable of writing its own code, in order to adapt to situations—something like neural networks and machine learning, but with the ability not only to assimilate and build upon incoming data, but to create new self-learning programs (for instance) when the need arises. “An A.I. could reason about itself and decide that it needs some new code, and write it itself,” Bugaj explains.

It’s sort of like AI programs having their own ideas.

Such wildly sophisticated software is evident in Westworld; but in the real world, we’re not quite there yet.

Bugaj continues:

They’re definitely adding some sort of semantic network associations. They’re changing things about themselves, whatever that internal structure would look like in this fictional coding universe. They’re learning, they’re formulating new opinions about themselves and about the world around them, and they’re acting on them. And that’s being sentient.

The second key software system is memory. Of course, in order to truly learn, an AI has to have memory. Not just any kind of memory, but memory in the same way human beings have it—short term, long-term, and even episodic. It’s a trait that makes the Westworld robots seem uncannily human.

“We use analogies, intuition, this kind of reminding thing, to do a lot of great work,” explains Oren Etzioni, CEO of the Allen Institute. “And that capability—people call it associative, people call it content-addressable, any of a number of things—we ain’t got any systems that come even close.”

A New Direction for the Future?

Bugaj believes that shows like Westworld can serve as an inspirational template for what AI can do in the real world. “I think they’re doing what a good futurist should do, which is making conjectures about the future, and then exploring how those might play out,” he says.

But what can today’s AI do?

Today’s AIs can see, they can speak, and even understand to a great extent—enough to make decisions that allow these systems to, say, suggest which friend to tag on a photograph or even drive an automobile. Certainly, we have successfully created neural networks capable of learning on their own, through various machine learning algorithms.

“A semantic or neural network is state-evolving over time, because it’s bringing in new data and basically learning new things,” says Bugaj.

But neural networks aren't yet capable of generative code, which is the next evolutionary level in machine intelligence. It's the kind of code that writes its own code. And the key difference is memory. "Everything that we talk about, with a machine being able to learn, comes down to memory management," Etzioni says. So far, we have yet to create code that can achieve a creative, spontaneous memory.

So, to be perfectly honest, Westworld is still a little out of our league. But hey, at the end of the day, it’s science fiction, folks. And however much it may be rooted in reality—fiction is precisely what it is.

And as science has proven time and again, it will always cough up something far better than what the most fevered sci-fi writer’s imagination can produce. It just takes a lot of research and a lot of time.

The post Westworld and AI: Experts Discuss the Future of Intelligent Machines appeared first on Futurism.

Machines That Learn Are Invading, and Utterly Transforming, Our Lives

What is Machine Learning?

The interest in artificial intelligence (AI) has evolved lately into something more akin to an obsession. Every day, it seems, we learn of some new development in AI, or its practical application to some other facet of our lives.  The term “artificial intelligence,” coined in 1956, started out as mere speculation that human thinking could be organized into a symbolic system that was—or would someday be—technologically reproducible.

But once it gained momentum, it seemed there was no stopping the ascent of AI.

Among the most promising avenues of AI research is machine learning, which consists of algorithms that “learn” by finding patterns and generating insights from raw, unassimilated data. Once these heuristic algorithms have assimilated enough data and compiled a sufficient library of rote reactions, they can apply these computations to make decisions and predictions when faced with similar stimuli in the future—consider it an artificial “instinct” routine. The programs now have the ability to learn and execute actions without having to be specifically programmed every time, simply by accessing this instinctual programming.
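As a concrete, minimal example of that learn-then-predict loop, here is a short scikit-learn sketch. The iris flower dataset is just a standard stand-in; the point is that the algorithm is never handed explicit rules, only labeled examples.

    # Learn patterns from labeled examples, then predict on data the model
    # has never seen. The dataset is a standard stand-in.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = RandomForestClassifier(random_state=0)
    model.fit(X_train, y_train)                      # "learning": find patterns in examples

    print("accuracy on unseen flowers:", model.score(X_test, y_test))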

Machine Learning in Our Daily Lives

Over the past few years, there have been enormous strides in the machine learning industry, which have resulted in what we have today: highly adaptable programs capable of applying mathematical calculations to big data repeatedly and at significantly faster speeds, according to the Statistical Analysis System (SAS) Institute.

There are many branches under the machine learning subcategory, with “deep learning” being one of the main research concentrations. This is hardly surprising, seeing how deep learning is the branch of machine learning that brings AI the closest to its lofty goal of enabling machines to learn and think like humans.

Now, as the technology swiftly evolves—practically on its own, for by its very nature it obviates or reduces the need for human guidance or intervention—the application of machine learning is fast becoming a ubiquitous part of our daily lives. Among machine learning applications that we interact with on a daily basis, most are probably familiar with those already embedded in smartphone applications, particularly the facial recognition and sorting technology in photos apps.

An example of how some advanced machine learning systems are used for facial recognition. Credit: Andrew Ng/Nature

Several shopping websites, together with subscription services such as Netflix, employ machine learning to predict what a customer might like and display suggestions based on these assumptions. Fraud detection, though still imperfect, is also becoming far more capable with help from machine learning algorithms. And as autonomous vehicles gain popularity and legality in different areas of the globe, machine learning will eventually take over the driver’s seat.

Meanwhile, machine learning can take automation to a higher level: robotics, for instance, relies heavily on machine learning, and it's making important inroads into the medical field to provide improved diagnostics and assimilate patient data to make informed predictions. According to consulting firm McKinsey & Company, apart from predictable physical work, data sciences (collection and processing) are next in line for machine learning automation.

So even if you've never heard of machine learning, it's a safe bet it will touch upon your life in some way—no matter how remote. Whether it's the changing nature and composition of the workplace and workforce, the astonishing new advances in driverless tech, or AI's infiltration of fraud detection, crime prevention, and medical diagnostics, intelligent algorithms will be influencing our lives whether we like it or not.

Watch this video for a demonstration of some machine learning-based technologies being used and developed today:

The post Machines That Learn Are Invading, and Utterly Transforming, Our Lives appeared first on Futurism.

Artificial Intelligence: What It Is and How It Really Works

Which is Which?

It all started out as science fiction: machines that can talk, machines that can think, machines that can feel. Although that last bit may be impossible without sparking an entire world of debate regarding the existence of consciousness, scientists have certainly been making strides with the first two.

Over the years, we have been hearing a lot about artificial intelligence, machine learning, and deep learning. But how do we differentiate between these three rather abstruse terms, and how are they related to one another?

Artificial intelligence (AI) is the general field that covers everything that has anything to do with imbuing machines with “intelligence,” with the goal of emulating a human being’s unique reasoning faculties. Machine learning is a category within the larger field of artificial intelligence that is concerned with conferring upon machines the ability to “learn.”  This is achieved by using algorithms that discover patterns and generate insights from the data they are exposed to, for application to future decision-making and predictions, a process that sidesteps the need to be programmed specifically for every single possible action.

Deep learning, on the other hand, is a subset of machine learning: it’s the most advanced AI field, one that brings AI the closest to the goal of enabling machines to learn and think as much like humans as possible.

In short, deep learning is a subset of machine learning, and machine learning falls within artificial intelligence. The following image perfectly encapsulates the interrelationship of the three.

Image credit: NVIDIA.

Here’s a little bit of historical background to better illustrate the differences between the three, and how each discovery and advance has paved the way for the next:

Artificial Intelligence

Philosophers attempted to make sense of human thinking in the context of a system, and this idea resulted in the coinage of the term "artificial intelligence" in 1956. Philosophy is still believed to have an important role to play in the advancement of artificial intelligence to this day. Oxford University physicist David Deutsch wrote in an article that he believes philosophy still holds the key to achieving artificial general intelligence (AGI), the level of machine intelligence comparable to that of the human brain, despite the fact that "no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality."

Advancements in AI have given rise to debates about whether it poses a threat to humanity, whether physical or economic (to address the latter, universal basic income has been proposed and is currently being tested in certain countries).

Machine Learning

Machine learning is just one approach to realizing artificial intelligence, and it ultimately eliminates (or greatly reduces) the need to hand-code the software with a list of possibilities and how the machine intelligence ought to react to each of them. From 1949 until the late 1960s, American electrical engineer Arthur Samuel worked hard on evolving artificial intelligence from merely recognizing patterns to learning from experience, making him a pioneer of the field. He used a game of checkers for his research while working at IBM, and this subsequently influenced the programming of early IBM computers.

Current applications are becoming more and more sophisticated, making their way into complex medical applications.

Examples include analyzing large genome sets in an effort to prevent diseases, diagnosing depression based on speech patterns, and identifying people with suicidal tendencies.

Deep Learning

As we delve into higher and even more sophisticated levels of machine learning, deep learning comes into play. Deep learning requires a complex architecture that mimics a human brain’s neural networks in order to make sense of patterns, even with noise, missing details, and other sources of confusion. While the possibilities of deep learning are vast, so are its requirements: you need big data, and tremendous computing power.

It means not having to laboriously program a prospective AI with that elusive quality of “intelligence”—however defined. Instead, all the potential for future intelligence and reasoning powers are latent in the program itself, much like an infant’s inchoate but infinitely flexible mind.
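For a sense of what that layered learning looks like in code, here is a minimal deep network in PyTorch that learns a non-linear decision boundary on a toy dataset. The data, layer sizes, and training length are arbitrary choices for the example, not a real application.

    # A small "deep" network: stacked layers learn a boundary that a single
    # linear model cannot. Two-moons toy data stands in for real-world inputs.
    import torch
    import torch.nn as nn
    from sklearn.datasets import make_moons

    X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
    X = torch.tensor(X, dtype=torch.float32)
    y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

    net = nn.Sequential(                 # input -> hidden -> hidden -> output
        nn.Linear(2, 32), nn.ReLU(),
        nn.Linear(32, 32), nn.ReLU(),
        nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()

    for epoch in range(300):
        opt.zero_grad()
        loss = loss_fn(net(X), y)
        loss.backward()
        opt.step()

    accuracy = ((net(X) > 0).float() == y).float().mean().item()
    print("training accuracy:", round(accuracy, 3))   # should be well above 0.9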

Watch this video for a basic explanation of how it all works:

 

The post Artificial Intelligence: What It Is and How It Really Works appeared first on Futurism.

Artificial Intelligence is Helping Restore Vision

Microsoft Cares for Eyecare

According to the World Health Organization, an estimated 285 million people are visually impaired, with 39 million living with blindness and the other 246 million having low vision.

In a world of modern technological advancements, visual impairment has been the subject of much medical research. Perhaps the most notable among these are those that use artificial intelligence (AI), specifically through machine learning. Google’s DeepMind has been working with the UK’s National Health Service to do ophthalmology research.

Microsoft has also joined in on the fight to give humanity better eyesight, recently launching the Microsoft Intelligent Network for Eyecare (MINE) in India.

“Today, I am happy to take forward the work we have done in healthcare with an esteemed set of partners,” says Anil Bhansali, managing director at Microsoft India (R&D) Pvt. Ltd. The effort is in partnership with L V Prasad Eye Institute [LVPEI]. “In our shared vision to eradicate preventive blindness, MINE will help redefine eyecare by bringing together the power of technology and knowledge of global experts.”

Image source: softzone.es

Computer-Aided Diagnostics

AI applications in medical and healthcare research have been gaining ground. Eyecare is a field that's gotten particular attention as a testing ground for machine learning in healthcare, with institutions such as LVPEI taking part. "At LVPEI, we have been using Microsoft Azure Machine Learning and Power BI to drive clinical interventions and improve patient outcomes," explains G.N. Rao, the institute's founder and chair. The institute focuses heavily on visually impaired children, working towards predicting outcomes for refractive surgery and studying the rate of change in myopia in children. As the tech giant stated in an announcement, "Microsoft will deploy its leading cloud platform technology – Cortana Intelligence Suite, for advanced analytics and to build [AI] models on eyecare."

MINE will use machine learning’s image analysis capabilities, as eye conditions are a good space to apply the technology. The collaboration will work towards developing machine learning predictive models for better diagnosis of issues leading to vision impairment. Specifically, it will come up with optimal surgery parameters, personalizing a surgery and maximizing its chance of success.
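MINE's actual models and input features haven't been published in this kind of detail, but the general shape of such a predictive model can be sketched as a regression from pre-operative measurements to an expected outcome. Every feature name and number below is hypothetical, chosen only to show the workflow.

    # Hypothetical sketch: predict a surgical outcome from pre-op measurements.
    # The features, the synthetic relationship, and the values are all made up.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(42)

    # Hypothetical pre-op features: age, corneal thickness (um), refractive error (D).
    X = np.column_stack([
        rng.integers(18, 60, 200),
        rng.normal(540, 30, 200),
        rng.normal(-4.0, 2.0, 200),
    ])
    # Hypothetical outcome: post-op residual refractive error (synthetic relationship).
    y = 0.05 * np.abs(X[:, 2]) + rng.normal(0, 0.1, 200)

    model = LinearRegression().fit(X, y)
    new_patient = np.array([[35, 520, -6.5]])
    print("predicted residual error (D):", model.predict(new_patient)[0])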

The post Artificial Intelligence is Helping Restore Vision appeared first on Futurism.

A Timeline of Future Space Exploration: Part 2 [INFOGRAPHIC]


The post A Timeline of Future Space Exploration: Part 2 [INFOGRAPHIC] appeared first on Futurism.

Digital Face Off: Tech Startup Can Turn You Into a 3D Avatar

Digital avatars have been around for a while now. Earlier versions seemed to capture the spirit of a user's likeness rather than how they actually looked. A new startup is looking to change that. The San Francisco-based artificial intelligence (AI) startup wants to make your digital avatar three-dimensional and deliver it with stunning likeness.

Loom.ai's tech uses basic digital photos to automatically generate a detailed 3D avatar that could be used in text messages, social media, virtual reality (VR) games, and even e-commerce. It relies on machine learning to capture and recreate the subtleties of a human face, and it gets better at inferring other facial expressions as it learns from more data about facial movements.

Founded by visual effects veterans Mahesh Ramasubramanian (who previously worked as a visual effects supervisor at DreamWorks on projects like Shrek and Madagascar 3) and Kiran Bhat (formerly in facial animation at LucasFilm, with movies like The Avengers and Pirates of the Caribbean under his belt), Loom.ai came out of obscurity this week with $1.3 million in seed funding. "We believe that you need to bring the user into the experience, which means figuring out a way to digitize the everyday user just like we used to digitize the movie stars," said Bhat.

It’s not definite yet where this tech will find itself, but Loom.ai is keen on licensing its technology to third-party platforms. In any case, with massive adoption, Loom.ai’s 3D avatar tech has the potential to reshape our virtual communication experience.

The post Digital Face Off: Tech Startup Can Turn You Into a 3D Avatar appeared first on Futurism.

Watch Arago’s HIRO Beat Out Human Gamers in Freeciv

Deep-Learning Gamer

HIRO is an artificially-intelligent program from IT and AI company Arago. The program, which stands for Human Intelligence Robotically Optimized, is modeled to imitate human memory and skill in problem solving.

HIRO’s latest accomplishment is its expert level playing of the civilization building video game Freeciv. The game, based on Sid Meier’s Civilization series, is hugely unpredictable with as many as 10 to the power of 15,000 possible game solutions.

It’s a feat that’s now in the reach of deep-learning AI. At the TechCrunch Disrupt London 2016, Arago announced that HIRO can beat about 80 percent of human players at Freeciv.

AI bots capable of complex strategy utilize deep machine learning honed by training from developers. HIRO learned by opposing human gamers and by repeatedly beating the game's built-in computer opponent. Google DeepMind's AI, AlphaGo, also made a lot of noise for beating the world champion at the ancient strategy board game Go. For its part, Arago boasts that HIRO needed less training and less data to beat a more complex game.

AI Future

HIRO sharpens its capabilities using these games. Arago offers HIRO to customers as an IT automation tool for business. Perfecting video games is merely a training ground for HIRO and other deep-learning AIs before they move on to functions like facial detection or cancer research. AI already plays a huge role in today's society, optimizing systems in a wide range of industries.

AI has assisted medical procedures like cataract surgery and breast cancer detection. It also makes transportation, from intelligent airplanes to self-driving cars, safer and easier to use.

The development of AI is rapid and is already changing the landscape of how we live. Let’s just hope that these machines don’t get any ideas for world domination from all these civilization-conquering video games.

The post Watch Arago’s HIRO Beat Out Human Gamers in Freeciv appeared first on Futurism.

At Amazon’s New Grocery Store, AI Takes Over Cashier Jobs

Shop and Go

Ever race another grocery shopper to get to the shortest checkout line or put an item back just so you could hit the 15 items or fewer line? Amazon believes you won’t need to do any of that in the grocery stores of the future. In fact, the company thinks those stores won’t have any checkout counters at all.

The company is bringing their vision for the future to life with Amazon Go, an aptly named smart store where you can just take anything from the shelves and leave. The store is powered by sensors, deep learning artificial intelligence (AI), and computer vision, which allows it to detect which items a customer has selected and even when products are returned to shelves.

To start their shopping experience, customers just need to download the Amazon Go app on their smartphones. They swipe the app upon entering the building, and after they walk out of the store, their Amazon account is charged for the items and a receipt is sent to the app.

Currently, Amazon Go is in its beta phase, available only to the company’s employees. The 167 square meter (1,800 square foot) retail space is located in Seattle, and the company expects the grocery store to open to the public early in 2017.

The AI Labor Force

Amazon Go has no checkout lines, no registers, and no cashiers. It’s a sneak peek at what some experts deem the inevitable future in which artificially intelligent devices take over many of our jobs.

A report from tech analyst Forrester claims that AI will have the biggest impact in the fields of transportation, logistics, customer service, and consumer services, and Amazon Go falls into the majority of those categories. When quick-learning, pattern-reading machines can execute repetitive work perfectly each time, how will a human worker compete?

White collar workers shouldn't be so quick to scoff at the labor force takeover. A study from IT researchers suggests that even doctors, lawyers, and accountants could eventually be replaced by AI. Machine learning algorithms could settle disputes without the need for lawyers, and the WebMD network already gets more hits per month than there are visits to all the doctors in the United States, so people are prepped to get their medical advice from machines.

The reliability of advanced technology has truly changed human living forever. Amazon Go is yet another testament to the very near, and very different, future ahead.

The post At Amazon’s New Grocery Store, AI Takes Over Cashier Jobs appeared first on Futurism.

A Whole New Universe: OpenAI Just Opened a School for AI

OpenAI, an artificial intelligence research center based in San Francisco, has released an open source software platform that virtually tests and trains AI.

The platform, called Universe, is a digital playground composed of games, web browsers, and even protein folding software that an AI can interact with. The AI does this by sending simulated mouse and keyboard strokes via what's called Virtual Network Computing, or VNC.

Universe facilitates reinforcement learning, in which the AI learns tasks by trial and error, through risk and reward. Over time, OpenAI researcher and former Googler Ilya Sutskever says, AI can even practice "transfer learning," in which an agent takes what it has learned in one application and applies it to another.
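Universe exposes its environments through the same reset/step interface as OpenAI Gym, so the basic trial-and-error loop looks roughly like the sketch below. CartPole is used as a simple stand-in, since Universe's own remote environments also need VNC containers running; the code follows the classic Gym API of the time (newer Gym versions return extra values).

    # The reinforcement-learning loop: observe, act, receive a reward, repeat.
    # A random policy on CartPole stands in for a Universe environment.
    import gym

    env = gym.make("CartPole-v1")
    obs = env.reset()

    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()          # pure trial and error
        obs, reward, done, info = env.step(action)  # the environment returns the reward signal
        total_reward += reward

    print("episode reward with a random agent:", total_reward)
    env.close()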

Credit: OpenAI

Games that AI can currently access include Portal, Shovel Knight, and SpaceChem. Video games may be a good benchmark and training aid for the AI, but the researchers intend to add more apps to the list and teach AI problem-solving skills in unfamiliar environments.

"An AI should be able to solve any problem you throw at it," Sutskever said to Wired. Michael Bowling, a University of Alberta professor, lauds the platform's wide scope and its role in improving AI: "It crystallizes an important idea: Games are a helpful benchmark, but the goal is AI," he says.

The post A Whole New Universe: OpenAI Just Opened a School for AI appeared first on Futurism.

Cambridge Scientists: We Shouldn’t Define Intelligence According to Humanity

A Limited View

When we try to figure out how some aspect of thinking works, we always start by looking at how the human brain does it. But while humans could be said to be the apex species on this planet, we are far from being the only ones who can think.

Take, for example, chimpanzees, a staple of arguments for intelligent thinking in animals. When you compare them with similar primates like bonobos, you see that chimpanzees are more inclined toward tool use. Not only do bonobos use tools less, but young chimpanzees also manipulate objects more than young bonobos do.

You also start seeing active intelligence in some species when comparing the actions of, say, digger wasps and corvids (crows, magpies, rooks, etc.). Digger wasps hide food for the future, but this is more of a programmed behavior. Corvids, on the other hand, display behavior consistent with actively planning for the future.

“Defining intelligence or culture in a way that is restricted to humans makes no sense in the grander scheme of evolution. Once we widen these definitions to include other animals, we find culture in other primates, tool use, and incredible intelligence in corvids,” says Dr Kathelijne Koops, formerly from the University of Cambridge’s Division of Biological Anthropology.

The Future of AI

In fact, we can extend this to non-biological intelligence, to the intelligence humanity itself is building. If we consider skills or abilities to be the metric of intelligence, AI and robotics systems clearly qualify.

Now we have programs that can process data and even make predictions on a far bigger scale than humans can. In the narrow fields they are programmed for, these machines are the best. This is in contrast to humans, who are generally mediocre or good at a lot of things: humans are generalists, these machines specialists, in other words.

In fact, these machines have their own unique neural architectures, distinct from the human kind. Adapting the way the brain thinks to such narrow competencies is hard, so completely new ways of "learning" had to be developed.

Ultimately, all of these will have to be considered both in developing AI and in a society built around it. We need to consider other kinds of neural architectures and ways of thinking in order to better develop machine learning.

But more importantly, we’ll need to start talking about how we are going to treat these machines. Are they simple tools engineered for a task? Or are they intelligent beings capable of thinking just like human beings?

The post Cambridge Scientists: We Shouldn’t Define Intelligence According to Humanity appeared first on Futurism.

Artificial Intelligence is Helping Scientists “See” the Diversity of Sound

It’s a Bird

In an exciting demonstration of the power of artificial intelligence (AI) and the diversity of species, a team composed of two programmers and an ornithologist (an expert on birds) created a map of visualized bird sounds.

Coders Manny Tan and Kyle McDonald worked with ornithologist Jessie Barry to create this visually euphonious interactive map of bird sounds. Tan and McDonald used machine learning to organize thousands of bird sounds from a collection by Cornell University. They didn’t supply their algorithm with tags or even names of the bird sounds. Instead, they wanted to see how it would learn to organize all the data by listening to the bird sounds.

The results were amazing. Their algorithm was able to group similar sounds together. It generated visualizations of the sound — an image that served as the sound’s fingerprint — using a machine learning technique called t-distributed stochastic neighbor embedding (t-SNE), which allowed it to group together sounds with similar fingerprints.
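For readers curious what that step looks like in practice, here is a minimal t-SNE sketch using scikit-learn. The random vectors below stand in for real audio "fingerprints" (which in the actual project would be derived from the recordings); only the shape of the workflow is meant to carry over.

    # t-SNE projects high-dimensional "fingerprints" to 2-D so that similar
    # sounds land near each other. The clusters here are synthetic stand-ins.
    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(0)

    # Fake fingerprints: three clusters standing in for three kinds of bird call.
    fingerprints = np.vstack([
        rng.normal(0.0, 0.5, size=(50, 64)),
        rng.normal(3.0, 0.5, size=(50, 64)),
        rng.normal(-3.0, 0.5, size=(50, 64)),
    ])

    coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)
    print(coords.shape)   # (150, 2): an x/y position for each sound on the map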

AI Collaborations

What’s fascinating about this project is the work that AI can do for various disciplines, apart from just biology and ornithology. Deep learning algorithms are beginning to transform the fields of medicine and medical research, most recently with a retina-scanning program that can help prevent blindness. AI has also ventured into the realm of psychology, being able to identify patients with depression and suicidal tendencies.

While medical research might be an obvious application, AI isn’t limited to just this field. AI has also ventured into law and governance, and even to defense and security.

AI isn’t just allowing us to understand our world better, it’s also changing how we interact with it. The prevalence of automated vehicles or unmanned transportation technology is proof of this, with AI learning to become better car and truck drivers, pilots, and even sailors (in a manner of speaking). AI might even venture into space ahead of us.

Obviously, we're still far from perfecting AI. Even as deep neural networks continually learn, we're also in the process of developing better systems.

The post Artificial Intelligence is Helping Scientists “See” the Diversity of Sound appeared first on Futurism.