Experts say that we’re far from truly autonomous vehicles — meaning those that fall under the Society of Automotive Engineers’ (SAE) Level 5 category. But many companies, including Ford Motor Company, want to bring us a step closer to this ideal. Ford is aiming to launch a Level 4 autonomous car by 2021.
Ford CEO Mark Fields said on CNBC’s “Squawk on the Street” last Monday that the company hopes to make an autonomous car with “no gas pedal” and “no steering wheel.” It’s a vehicle that excuses the driver-turned-passenger from taking control “in a predefined area.”
“In our industry, the word autonomous is being used very, very liberally. There’s different levels of autonomy,” Fields explained. “The question that should be asked when a company says they’re going to have an autonomous vehicle […] is at what level.”
This refers to SAE’s levels of driving automation. The SAE identifies six levels of driving automation — from Level 0, meaning no automation at all, to Level 5, which is full automation. A Level 4 self-driving vehicle, according to the SAE, employs an automated system that covers “all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene.”
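As a quick reference, the SAE taxonomy can be sketched as a simple lookup table. The one-line summaries below are paraphrases for illustration, not SAE’s official wording:

```python
# SAE J3016 levels of driving automation, paraphrased for illustration.
SAE_LEVELS = {
    0: "No automation: the human driver does everything",
    1: "Driver assistance: one automated function (e.g., cruise control)",
    2: "Partial automation: combined functions, driver stays engaged",
    3: "Conditional automation: system drives, human must take over on request",
    4: "High automation: system handles the full driving task in a predefined "
       "area, even if the human fails to respond to a request to intervene",
    5: "Full automation: system drives anywhere a human could",
}

def describe(level: int) -> str:
    """Return a short description for an SAE automation level."""
    if level not in SAE_LEVELS:
        raise ValueError(f"SAE defines levels 0-5, got {level}")
    return SAE_LEVELS[level]

print(describe(4))
```

Ford’s 2021 target — no gas pedal, no steering wheel, valid only in a predefined area — corresponds to the Level 4 entry above.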
Driving Autonomous Vehicle Leadership
Fields’ pronouncements in the CNBC interview, which actually covered a wide range of topics, aren’t just whimsical thoughts. In fact, “Ford has been developing and testing autonomous vehicles for more than 10 years,” according to Ford EVP for Global Product Development and chief technical officer Raj Nair. “We have a strategic advantage because of our ability to combine the software and sensing technology with the sophisticated engineering necessary to manufacture high-quality vehicles. That is what it takes to make autonomous vehicles a reality for millions of people around the world.”
“The next decade will be defined by automation of the automobile, and we see autonomous vehicles as having as significant an impact on society as Ford’s moving assembly line did 100 years ago,” Fields said in the August 2016 press release. “We’re dedicated to putting on the road an autonomous vehicle that can improve safety and solve social and environmental challenges for millions of people – not just those who can afford luxury vehicles.”
The plan includes investing in technologies by collaborating with companies involved in strong research in the fields of advanced algorithms, 3D mapping, LiDAR, and radar and camera sensors. And now it seems it includes removing the steering wheel and the gas and brake pedals too.
Is Ford’s vision of a pedal-less and steering wheel-free car the future? More importantly, are we ready for such a vehicle? Well, stick around for 2021 to find out.
The warnings are exaggerated, of course, and are rooted in science fiction (SkyNet isn’t coming, guys, c’mon). But it doesn’t hurt to be prepared, or at least to influence the direction AI research can or should take.
A new enterprise rises to the challenge in the form of the Ethics and Governance of Artificial Intelligence Fund. Backed by eBay founder Pierre Omidyar and LinkedIn co-founder Reid Hoffman, together with the Knight Foundation, the fund’s goal is “to support work around the world that advances the development of ethical AI in the public interest, with an emphasis on applied research and education.”
At its launch last January 10, the fund had already received an initial investment of $27 million — with Hoffman and Omidyar each committing $10 million through their respective foundations, and the Knight Foundation contributing $5 million. Other preliminary investors include the William and Flora Hewlett Foundation and Raptor Group founder Jim Pallotta, each adding $1 million to the fund.
The AI fund will be housed at The Miami Foundation, with the MIT Media Lab and Harvard’s Berkman Klein Center as the anchor institutions.
Hence the AI fund. “Because of this pervasive but often concealed impact, it is imperative that AI research and development be shaped by a broad range of voices — not only by engineers and corporations, but also by social scientists, ethicists, philosophers, faith leaders, economists, lawyers, and policymakers,” the group writes.
This impact of AI is seen in those instances when policy fails to effectively inform research, as in the case of Uber’s San Francisco autonomous test run. As such, the initiative seeks to support activities that promote keeping human issues at the forefront of AI research and maximizing the benefits of AI. Specifically, it addresses the following:
Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?
“There’s an urgency to ensure that AI benefits society and minimizes harm,” Hoffman explained. The AI Fund certainly isn’t the first partnership that aims to establish guideposts for AI research. There’s the Partnership on AI, which Google and Microsoft are a part of, and there’s also OpenAI, the research organization co-founded by Elon Musk and Sam Altman.
As Jonathan Zittrain, co-founder of the Berkman Klein Center, said: “A lot of our work in this area will be to identify and cultivate technologies and practices that promote human autonomy and dignity rather than diminish it.”
With autonomous vehicles widely considered one of the breakout innovations of 2016, the debate over whether autonomous vehicles combined with artificial intelligence will replace jobs is revving into high gear. This concern is largely misplaced, as autonomous vehicles will create new jobs, job sectors, and economic models.
The debate over innovations and technology replacing jobs is as old as history itself. During the first industrial revolution in 18th-century England, new manufacturing processes and technologies led to the mechanization of textile production. This technical breakthrough produced the factory system, which would go on to create millions upon millions of jobs despite the worry that automation would diminish employment.
In 2006, the technical breakthrough of cloud computing came from Amazon with the introduction of Elastic Compute Cloud (EC2) as a commercial web service. IT professionals and industry analysts predicted large job losses as companies outsourced their computing needs. The opposite ended up being true, as cloud computing directly and indirectly created millions of jobs across the globe and tens of billions of dollars in wealth.
When Andy Jassy (who was Jeff Bezos’s first official shadow) wrote the AWS mission paper, he said, “we tried to imagine a student in a dorm room who would have at his or her disposal the same infrastructure as the largest companies in the world.” The concept imagined by Mr. Jassy would eventually allow the founders of Airbnb to develop, launch, and scale Airbnb with the same infrastructure as the largest hotel companies in the world.
Since the company was founded in 2008, the hotel industry has cast a wary eye on Airbnb. From 2008 to 2015, Airbnb supported hundreds of thousands of jobs, and hosts in the United States earned more than $3.2 billion in income. In Europe, meanwhile, Airbnb hosts collectively earned more than $3 billion in 2015 alone.
Additionally, over the last five years, global hotel industry revenue has grown by more than $100 billion and supported hundreds of thousands of jobs.
The Hybrid Years
Despite the negative publicity, Airbnb and the global hotel industry have complemented each other rather nicely. The same will be proven true with drivers and autonomous vehicles powered by artificial intelligence during the hybrid years.
The hybrid years is a term I am coining to describe the period when both human-driven vehicles and autonomous vehicles are traveling on public roadways. During the hybrid years, the roles of driving and logistics will merge into the role of the autonomous logistics officer.
Autonomous logistics officers will manage fleets of vehicles from a remote command center in multiple daily shifts. When these roles merge, drivers’ quality of life will improve immensely. This new job category will create thousands of jobs for individuals with a new, unique skill set.
Individuals with this new skill set are already in demand, according to The Wall Street Journal: Amazon is looking to acquire or build an application capable of matching available trucks to shipments.
During the hybrid years – which are starting now – forward-looking entrepreneurs will successfully identify changing market dynamics and create new businesses which, in turn, will create new jobs. This is the very scenario that has played out time and time again throughout history.
This is already happening today with connected cars and software. Smart Car, RideCell and Otonomo are all developing software platforms to enable entrepreneurs and established companies to build applications and services on top of the connected car (autonomous vehicle).
The services currently being developed on these platforms will create jobs and income for hundreds of thousands of individuals. JPMorgan Chase calls this the platform economy. The JPMorgan Chase Institute estimates that between October 2012 and September 2015, 4.2 percent of adults, an estimated 10.3 million people — more than the total population of New York City — earned income through the platform economy.
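Those JPMorgan Chase Institute figures are easy to sanity-check: if 10.3 million earners represent 4.2 percent of adults, the implied U.S. adult population is roughly 245 million, which is in the right range for the mid-2010s.

```python
earners = 10.3e6   # people earning platform-economy income
share = 0.042      # 4.2 percent of adults

# Back out the adult population implied by the two figures.
implied_adults = earners / share
print(f"Implied adult population: {implied_adults / 1e6:.1f} million")
# 10.3e6 / 0.042 ≈ 245.2 million
```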
During the hybrid years, the platform economy will continue to grow and provide jobs and income for millions of individuals. Following the hybrid years, technology will evolve to the point where vehicles will no longer be driven by human drivers and autonomous vehicles will no longer be managed by autonomous logistics officers.
At this point in history, and for the first time, society will rely on fully autonomous vehicles as our main source of transportation. History will once again repeat itself as new jobs and new sectors will be created. Most of these new sectors and jobs have not yet been imagined; however, they are coming. We just need to look back on history as a guide.
Grayson Brulte is the Co-Founder & President of Brulte & Company, an innovation advisory and consulting company that designs innovation and technology strategies for a global marketplace.
As the race to improve autonomous driving technology heats up between the biggest names in the industry, Tesla is already making a big push in 2017 to remain at the forefront.
Last week, Elon Musk confirmed that the first 1,000 vehicles in Tesla’s fleet had been equipped with an active version of Enhanced Autopilot, with the same software appearing in “shadow mode” throughout the rest of the fleet. This version of the Autopilot system integrated a traffic-aware cruise control feature, forward collision warning, and an Autosteer beta enabled only at “low speed.” Yesterday, the company released an updated version of Enhanced Autopilot, again in active mode for 1,000 vehicles and shadow mode for the rest.
Enhanced Autopilot primarily gives Tesla’s HW2 vehicles (newer models running Autopilot 2.0 hardware) some of the functionalities previously found only in earlier, HW1 models. However, Musk assures Tesla owners still running the first-generation Autopilot hardware that the latest update also has improvements in store for them, noting that Ludicrous+ mode will be enabled with the new update, bringing more power to the Model S P100D and enabling it to go from zero to 96 km/h (zero to 60 mph) in 2.4 seconds.
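That headline figure is easy to unpack: 0 to 60 mph in 2.4 seconds works out to an average acceleration of roughly 11.2 m/s², or about 1.14 g — a quick back-of-the-envelope check:

```python
MPH_TO_MS = 0.44704   # meters per second in one mile per hour
G = 9.80665           # standard gravity, m/s^2

top_speed = 60 * MPH_TO_MS   # 60 mph expressed in m/s (~26.8 m/s)
elapsed = 2.4                # seconds, per Tesla's Ludicrous+ figure

# Average acceleration over the run, in m/s^2 and in g.
accel = top_speed / elapsed
print(f"{accel:.2f} m/s^2 = {accel / G:.2f} g")
```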
The company hopes to switch Enhanced Autopilot to active mode for the entire HW2 fleet by the end of this week.
“If all looks good, HW2 Autopilot functionality will switch from shadow to active mode by end of week for cars beyond initial 1000,” Musk tweeted.
This Enhanced Autopilot update by Tesla is just one of the first steps toward the company’s ultimate goal of equipping Tesla vehicles with fully autonomous driving capabilities by the end of 2017. Updates with new improvements are expected every two to three months this year until the company produces a vehicle that could drive itself from Los Angeles to New York with zero input from a human driver.
However, even as Tesla makes significant strides toward achieving its vision, the company is mindfully taking a “measured and cautious” approach to rolling out their latest system updates. Owners who received the Enhanced Autopilot software last week are only able to access new features that come with the update as the system clocks in more miles and Tesla can gather real-time data to analyze — a process that will likely repeat with this latest update and, eventually, the rest of the vehicles within the fleet.
We’ve waited this long for self-driving cars, so what’s a few more months if it means the technology can be perfected before hitting the road?
The transition from one year to the next is always a little uncertain – an uneasy blend of anxiety and optimism. It’s also a time of retrospection, introspection, and even a little tentative prognostication. And since the latter is our stock-in-trade at Futurism, we believe now is the perfect time to look ahead at what 2017 has in store for us.
With this more collaborative approach to the evolution of AI, we may finally begin to see the personalization of artificial intelligence, and a great proliferation of new AI programs with idiosyncratic personalities, temperaments, and even intellectual outlooks.
A Better Conversationalist
Trying to hold a conversation with Siri or Alexa is like watching two people in the same canoe paddling in opposite directions—it just ain’t goin’ anywhere. For one thing, neither has much of a sense of humor; furthermore, these robotic ladies just lack that wonted je ne sais quoi we expect in our intellectual sparring partners—they’re dull, frankly, and haven’t really got much to say.
But in 2017, that’s all about to change.
“In 2017 there will be a chatbot that passes the Turing test, exhibiting responses so human-like that an average person wouldn’t be able to tell if it’s human or machine,” forecasts Jim McHugh, Vice President and General Manager at NVIDIA. Such extraordinary capabilities will be the natural sequel to the new advances in machine learning, natural language processing, and pattern recognition that will finally beget more empathic and intuitive AI programs.
“In the coming year, we will see advances that are used to endow systems with new human-centered qualities, including more natural, fluid conversation—that can address several topics or needs in one ongoing interaction, and deeper understanding of human values and intentions, such as recognizing the commitments we make to others in our email and text messaging,” observes Eric Horvitz, Technical Fellow and Managing Director at Microsoft Research.
Very soon now, you’ll really be able to have that deep, meaningful conversation with Alexa that you always wanted.
According to the big brains at Ovum, “machine learning will be the biggest disruptor for big data analytics in 2017.” They’ve got a point, too—a damn good one. Deep learning and neural networks have shown remarkable promise, even holding out the hope that they might point the way toward achieving some sort of serviceable, human-like machine intelligence—not a terrible surprise, since their mechanism mimics the layered cognitive processing employed by the human brain.
And in 2017, we can expect even greater strides in machine learning, as massive upgrades to parallel processing power enable the networks to crunch ever-larger blocks of data. “Generative adversarial networks” (GANs) are the next big thing in machine learning—essentially dual networks, one that learns from datasets and another that distinguishes between real and fake data.
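The adversarial setup behind GANs can be sketched in a few lines of numpy. Everything below is a toy illustration, not any production implementation: a fixed linear discriminator scores samples from a “real” distribution and from an as-yet-untrained generator, and the standard GAN losses are computed for a single step (no training loop shown).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: "real" samples vs. samples from an untrained generator.
real = rng.normal(4.0, 1.0, size=256)   # real distribution: N(4, 1)
fake = rng.normal(0.0, 1.0, size=256)   # generator output:  N(0, 1)

# A trivial linear discriminator D(x) = sigmoid(w*x + b).
w, b = 1.0, -2.0
d_real = sigmoid(w * real + b)   # D's belief that real samples are real
d_fake = sigmoid(w * fake + b)   # D's belief that fakes are real

# Standard GAN objectives: D maximizes log D(x) + log(1 - D(G(z)));
# G minimizes -log D(G(z)) (the common "non-saturating" form).
d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
g_loss = -np.mean(np.log(d_fake))
print(f"discriminator loss: {d_loss:.3f}, generator loss: {g_loss:.3f}")
```

In full training, the generator’s parameters would be updated to drive `g_loss` down (making fakes look real) while the discriminator’s are updated to drive `d_loss` down (telling them apart) — the two-network tug-of-war the article describes.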
So look for our machines to become better learners in 2017, as AI approaches its “terrible twos” and begins to cognitively mature.
A Companion Mind
Harry Shum, Executive Vice President of Microsoft’s AI and Research Group, is cheerfully optimistic about AI’s outlook in the coming year:
“In 2017 we’ll see increased acceleration in the democratization of AI for every person and every organization. With advances in technology, computers will gain even greater ability to see, hear and understand our world—to make us more productive, to have more fun and also enable greater strides towards solving some of society’s most pressing challenges like fighting disease, ignorance, and poverty.”
And that seems to be the general consensus about what the New Year holds for artificial intelligence. Not a mystical singularity; not the sudden “awakening” of an inchoate machine mind—inhuman, alien, perhaps even malevolent or at least antipathetic to all we hold dear. That’s a fantasy—the reality will be far more prosaic.
If AI is taken to mean the evolution of tools that act in concert with their makers, and are instilled with a limited, almost instinctual awareness, then we seem to be well on the way toward building a world in which our machines are active partners in the business of life. So don’t look for the coming of Skynet or HAL 9000 in 2017—nothing so dramatic as all that. For now, baby steps—incremental steps toward a world in which our very tools are endowed with the same sort of sensory and reactive faculties that are found throughout the living world.
Now, as for 2018… that’s a different story altogether.
As the world continues to achieve unprecedented levels of advancement in AI and robotics, we must, at the same time, come to terms with the fact that our fundamental understanding of technology is also being challenged. Technology was once viewed as a tool that drove human progress forward. Today, technology is threatening the employment and job security of millions.
High-profile personalities such as Stephen Hawking, as well as economists, have begun to shine the spotlight on this issue of technological unemployment—the displacement of human jobs by increasingly sophisticated means of automation.
In a column published in The Guardian, Hawking points out that, “[…]the automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”
Economists are not discounting the fact that globalization is at least partially to blame for unemployment. They cite trade relations with China during the 2000s as an example, which, according to researchers from MIT, led to the loss of over two million jobs. Still, the impact of automation will have a greater, more disruptive effect on the labor force.
The New Economy
Some argue that the situation isn’t nearly as dire as some imagine it to be.
Elon Musk, who believes that rising automation will lead to the implementation of universal basic income, sees it as an opportunity. “People will have time to do other things, more complex things, more interesting things,” says Musk. “Certainly more leisure time.”
It’s also entirely possible that as industries begin to assimilate technology into their business models, that it will create new jobs.
“It’s literally the story of the economic development of the world over the last 200 years…just as most of us today have jobs that weren’t even invented 100 years ago, the same will be true 100 years from now,” argued Marc Andreessen, the venture capitalist who also co-created Mosaic, the first widely used web browser.
Automation can also serve to complement human skills. As Stefan Hajkowicz illustrated in his article in The Conversation: “Spreadsheets didn’t kill off accounting jobs. On the contrary, smart accountants learned how to use spreadsheets to become more productive and more employable.”
True. But experts think this industrial revolution is different.
Machines right now might be capable only of repetitive, formulaic jobs, but even that has already been enough to displace thousands of human workers. What happens when robots taught to mimic the human mind become available? It’s not hard to imagine that knowledge-based, creative, and service-oriented jobs will eventually be overtaken as well.
Our society is evolving—this is the inescapable reality, and change is the watchword of our age. Uncertainty and fear are the inevitable corollaries of the enormous changes stealing upon us; we feel as the cotton picker must have felt at the arrival of the cotton gin, or the coachman beholding the first horseless carriage. Some speak of a melding of our biological minds with the mechanical AI we create, and new phases of human evolution; but these are remote fantasies, of small comfort to the man or woman whose livelihood is rendered obsolete by the march of progress.
But our species’ most remarkable trait is its adaptability—with any luck, we’ll weather this storm as we’ve weathered so many before, and doubtless the people of 2117 will marvel at and even long for our quaint, unsophisticated age and our uncomplicated lives.
Sangbae Kim, an associate professor of mechanical engineering at MIT, has a very distinct vision for the future of emergency response. Amid advances in AI and robotics, he believes the field would be best served by animal-like machines that can serve as first responders in disaster situations.
Born in Seoul, South Korea, Kim attended Yonsei University where he took up mechanical engineering before serving in the Korean military. From there, he returned to university life, where he explored his love for robotics.
One particular class project challenged students to create robots that could perform specific tasks. He cites it as the catalyst that propelled his career in robotics forward, which eventually took him to Stanford University to complete a graduate program for mechanical engineering.
It was there that he began work on his bio-inspired robotic machines, the first of which was a climbing robot that mimicked the gecko. At Harvard University, where he found himself in a postdoc post, he helped engineer the Meshworm, designed to mimic the movement of earthworms.
Despite his success with small robots, he realized that their size prevented them from doing “real, physical work.” It was then that he began to develop a four-legged machine designed to complete human-level physical tasks—a dream that he nurtured when he joined MIT’s Department of Mechanical Engineering and established the Biomimetic Robotics Lab.
MIT’s Biomimetic Robotics Lab’s goal is to design biologically inspired robots that can complete high-level tasks.
At the forefront of this objective is the MIT Cheetah, a four-legged robot inspired by the fastest land animal. Powered by electric motors, it can run, jump over obstacles autonomously, and trot at 14 miles per hour, and the progress the team has achieved with this particular machine is proof of how much more it can accomplish.
Right now, he’s trying to combine two major projects, one being the MIT Cheetah, and the other, the HERMES, a tele-operated robot on hind legs, whose movements and balance are controlled by a human operator, to create a robust machine that could help save lives in disaster and emergency situations.
“Say there are toxic gases leaking in a building, and you need to close a valve inside, but it’s dangerous to send people in,” Kim says in an interview with MIT News. “Now, there is no single robot that can do this kind of job. I want to create a robotic first responder that can potentially do more than a human and help in our lives.”
By combining principles across biomechanics, human decision-making, and mechanical design, such a robot could easily maneuver itself through debris, cross minefields, enter burning buildings, or punch through walls. And Kim believes that it could take less than a decade to bring this machine to reality.
In the next 30 years, humanity is in for a transformation the likes of which we’ve never seen before—and XPRIZE Foundation founder and chairman Peter Diamandis believes that this will give birth to a new species. Diamandis admits that this might sound too far out there for most people. He is convinced, however, that we are evolving towards what he calls “meta-intelligence,” and today’s exponential rate of growth is one clear indication.
In an essay for Singularity Hub, Diamandis outlines the transformative stages in the multi-billion year pageant of evolution, and takes note of what the recent increasing “temperature” of evolution—a consequence of human activity—may mean for the future. The story, in a nutshell, is this—early prokaryotic life appears about 3.5 billion years ago (bya), representing perhaps a symbiosis of separate metabolic and replicative mechanisms of “life;” at 2.5 bya, eukaryotes emerge as composite organisms incorporating biological “technology” (other living things) within themselves; at 1.5 bya, multicellular metazoans appear as eukaryotes are yoked together in cooperative colonies; and at 400 million years ago, vertebrate fish species emerge onto land to begin life’s adventure beyond the seas.
“Today, at a massively accelerated rate—some 100 million times faster than the steps I outlined above—life is undergoing a similar evolution,” Diamandis writes. He thinks we’ve moved from a simple Darwinian evolution via natural selection into evolution by intelligent direction.
“I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a ‘Meta-Intelligence,’ a future in which we are all highly connected—brain to brain via the cloud—sharing thoughts, knowledge and actions,” he writes.
Change is Coming
Diamandis outlines the next stages of humanity’s evolution in four steps, each a parallel to one of his four evolutionary stages of life on Earth. There are four driving forces behind this evolution: our interconnected, wired world; the emergence of brain-computer interfaces (BCI); the emergence of artificial intelligence (AI); and humanity reaching for the final frontier of space.
In the next 30 years, humanity will move from the first stage—where we are today—to the fourth stage. From simple humans dependent on one another, humanity will incorporate technology into our bodies to allow for more efficient use of information and energy. This is already happening today.
The third stage is a crucial point. “Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago,” Diamandis writes. “Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.”
Futurist Ray Kurzweil has long predicted that machine intelligence will match human intelligence and “then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge.” Kurzweil predicts that this will happen by 2045—within Diamandis’ evolutionary timeline. “The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.”
The fourth and final stage marks humanity’s evolution to becoming a multiplanetary species. “Our journey to the moon, Mars, asteroids and beyond represents the modern-day analogy of the journey made by lungfish climbing out of the oceans some 400 million years ago,” Diamandis explains.
Buckle up: we have an exciting future ahead of us.
Remember that giant robotic exosuit—the one Sigourney Weaver used in her epic mêlée with the queen alien at the end of Aliens? Or all those giant robots that show up in just about every Japanese anime? Well, if you liked those…you’d better be sitting down.
Because the Furrion Robotics Division has debuted, at CES 2017 in Las Vegas, a giant mechanized exosuit that just might be the first step toward that sci-fi future. It’s called “Prosthesis” (unimaginative name, true, but the thing largely speaks for itself), and it looks like a fantastic congeries of girders and steelwork that’s come to life in the form of a giant, robotic insect.
The Prosthesis is 14 feet tall and 16 feet wide, and it tips the scales at 7,700 pounds; the real novelty, however, is in how the mechanical monstrosity is operated. Details are still forthcoming, but it apparently involves strapping the user into a “cockpit” of sorts, where haptic technology translates the user’s limb movements into the movements of the robotic exoskeleton.
“Inside is a full-body exoskeletal interface,” explains Jonathan Tippett, founder and CEO of Furrion Robotics. “The pilot straps in, face down, and has their whole upper body kind of cradled by this articulated, carbon-fiber harness; their arms and legs are free to move, and there’s an exoskeletal interface on each of your limbs, which maps the motion of your limbs to the four limbs of the machine.”
The purpose of this suit? The mech (Tippett is careful to draw a distinction between human-controlled “mech” and autonomous robots) is designed with one purpose in mind: racing. Its creators claim that the all-electric Prosthesis can attain speeds of 20 mph, hopefully combined with an agility that cars just cannot match. It’s envisioned as the first element of a new sport—one in which human beings outfitted with gigantic mechanized suits like the Prosthesis achieve new forms of augmented athleticism.
According to Tippett: “The whole point is to create a new human experience for the pilots, who would have to be athletes to operate it.”
“I wanted to create a machine that required human skill and practice to operate,” Tippett explains, a dream that grew out of his experiences as a mountain biker.
So we’re eager to see, in the coming months, what comes of Furrion’s new gadget. Tippett’s dream of a device that engages with human beings, changes the way we interact with machines, and allows us to hone new skills, certainly seems more exciting than the world promised by autonomous tech—a caretaker world, essentially, in which control is taken away from us untrustworthy creatures of flesh and blood.
“It was kind of a counterpoint,” he observes, speaking of the philosophy animating the creation of the Prosthesis, “to all the things that are being automated and ritualized in our lives.”
The rapid advancement of robotics and artificial intelligence (AI) is allowing more companies to seek out automation as a means of boosting production efficiency. Mining company Rio Tinto is one of the companies that’s turned to automation to accomplish that goal.
The company has recently deployed a fleet of 73 self-driving trucks to haul payloads non-stop. In addition to the trucks, they also have robotic, rock-drilling rigs plugging away at the topography. In the near future, Rio Tinto is looking to upgrade the trains that haul the ore to port to not only drive themselves but also have the ability to load and unload automatically.
Back in 2015, Rio Tinto made history by introducing remote-controlled haulers at some mines. While those vehicles didn’t require a driver to be physically present, there was a human component to successful navigation. As a logical progression from that technology, the company has managed to replace those humans with software in the newest equipment. The vehicles now operate completely autonomously using precision GPS and scan for obstacles using radar and lasers.
The company is reporting a 15 percent reduction in the cost of operating the automated trucks compared to those driven by humans. As hauling is among the largest costs to a mining operation, Rob Atkinson, Rio Tinto’s leader of productivity efforts, lauds this reduction. “We’re going to continue as aggressively as possible down this path,” he claims in an interview with MIT Technology Review.
The future impact of this move toward more automation is arguably the biggest concern facing modern society. Robots are quickly becoming better than humans at more and more tasks. Couple this with the declining cost of robotics, and there’s a significant financial incentive for companies to explore all the ways in which they can automate their operations.
At the outset of the proliferation of this new frontier in how the world does business, jobs requiring low-skilled workers (especially in developing nations) are particularly in danger of disruption. However, artificial intelligence is also rapidly advancing, causing even the most skilled workers to start to look over their shoulders for their electronic replacements.
United States President-elect Donald Trump’s promises to bring outsourced jobs back to the country were a major part of his campaign platform. However, experts assert that these plans are likely to be hampered by the likes of automation. Other experts say that even if policies are put in place to bring companies back to the U.S., the jobs they initially outsourced may not come back with them.
Automation is steadily taking over in areas like manufacturing and transportation, and it even has the potential to significantly cut into the number of available information technology (IT) positions as well. The world’s brightest minds are looking for ways to head off the potentially debilitating impact this could have on the world’s employment levels. While the White House is proposing greater access to education, many experts see a universal basic income as the only means to prevent unemployment-caused poverty. Whatever the answer, something must be done before all jobs are hauled away by automated systems.
Meet the Automated Virtual Agent for Truth Assessments in Real Time (AVATAR), a robotic device created by San Diego State University management information systems professor Aaron Elkins that will essentially function as a lie detector at international borders.
“AVATAR is a kiosk, much like an airport check-in or grocery store self-checkout kiosk,” explains Elkins. “However, this kiosk has a face on the screen that asks questions of travelers and can detect changes in physiology and behavior during the interview. The system can detect changes in the eyes, voice, gestures, and posture to determine potential risk. It can even tell when you’re curling your toes.”
When AVATAR is in place as a border security measure, travelers will simply step up to the kiosk where the device will ask them questions similar to what a human security agent might ask. The responses will then be analyzed by the system through eye-detection software and motion and pressure sensors to identify signs of lying and discomfort.
The virtual agent will ask several baseline questions that will prevent passengers from being singled out unduly. For example, someone might simply have physiological signs of discomfort because they have a fear of flying and not necessarily because they have something nefarious to hide. Should AVATAR detect anything out of the ordinary, the passenger will be flagged and human agents will take over.
Beyond Border Security
Currently, AVATAR’s creator is seeking government agencies willing to put his technology to the test. “AVATAR has been tested in labs, in airports, and at border crossing stations,” he explained. “The system is fully ready for implementation to help stem the flow of contraband, thwart fleeing criminals, and detect potential terrorists and many other applications in the effort to secure international borders.”
While the system was designed with the above applications in mind, AVATAR has potential for use outside of border security as well. “We’ve come to realize that this can be used not just for border security, but also for law enforcement, job interviews, and other human resources applications as well,” Elkins added. “We continue to make improvements, such as analyzing the collected data using Big Data analysis techniques that make AVATAR a potentially valuable tool across many industries.”
Advancements in big data, along with constantly improving machine learning capabilities, can lead to more opportunities for automated systems like AVATAR in the near future. Right now, we already have numerous shopping websites employing the technology for a more seamless shopping experience. Fraud detection is another field that has the potential to greatly benefit from this technology, as is the medical industry. One day soon, you may find yourself having more face-to-screen interactions with smart machines like AVATAR than face-to-face ones with traditional employees.
Last July, Tesla Motors CEO Elon Musk tweeted about a new project that, to date, still remains in the works. The Tesla Model Y — not yet its official name — is supposedly the crossover SUV version of the Model 3, the company’s first mass-market car.
Since then, not much has been revealed about this uber secretive car model, although there have been a considerable number of rumors — many fueled by tweets from Musk himself – which remain unconfirmed. As of yet, the product isn’t even on the company’s website.
Still, reports assert the Model Y is already in the advanced stages of development. If all goes well, it will go to production a little bit after the Model 3, perhaps some time in 2018. An official timeline, though, still hasn’t been announced.
But what can we expect from the Model Y? Let’s take a look at the rumors and see which are probable, and which are less than likely.
A Hodgepodge of Features
Of course, like other Tesla vehicles, the Model Y will certainly feature an autonomous driving system. Except that it’s not going to be your regular, semi-autonomous Autopilot found in existing Tesla models – instead, it’s expected to be fully autonomous. The full self-driving system will cost $8,000 at the time of purchase and include all of the Enhanced Autopilot features. Musk isn’t saying when full, Level 5 autonomy will be implemented, though. The company states on its website that this will depend on software validation and regulatory approval.
The vehicle is also going to incorporate Tesla’s solar roof technology, thanks to that successful merger with SolarCity. Seeing how this technology is already being used in other models, it’s highly probable that it’ll find its way onto this SUV. Musk explained that solar glass on a car would be net positive, meaning that it would create more energy than it uses.
Both a favorite and a bother, the falcon-wing doors found on the Model X were initially reported to be coming to the Model Y. This assumption came from a tweet posted by Musk himself in October 2015. The tweet didn’t outright say that the Model Y would have these doors; it merely hinted that either the Model 3 or the Model Y might have them. Musk later deleted the tweet.
With the design for the Model 3 already out, we now know that it doesn’t have falcon-wing doors. So it remains possible that the Model Y will have them. Still, some say that falcon-wing doors — which have caused Tesla and Model X owners their share of headaches — would most likely not be used on the Model Y. Perhaps buyers will be given the option, as they seem pretty divided on the subject.
The Model Y is expected to have a competitive range that will rival Chevrolet’s electric SUV, more cargo space than its sedan cousins, and an optional heads-up dashboard display. The dashboard of the Model 3 still hasn’t been unveiled, so we can’t yet speculate on what the Model Y’s will look like. However, back in April, Musk said via Twitter that the Model 3 will have a steering system that “feels like a spaceship.”
The price for all this? Again, Musk is keeping mum. It can be assumed, though, that the Model Y will be more costly than the Model 3’s base price of $35,000. All these rumors do make for an exciting Tesla SUV, but nothing is set in stone. There is one thing we can be sure of, however – the Model Y will be a Tesla, and looking at the company’s track record, their cars always have some surprises in store.
While Faraday Future is unveiling its FF91 autonomous car, Tesla is busy rolling out an update to its Autopilot system. Last Saturday, CEO Elon Musk confirmed that a select number of cars have been updated with this futuristic, enhanced version of the company’s Autopilot system. It marks the next step on the path to fully autonomous driving capabilities, which Tesla plans to have ready by the end of 2017.
The enhanced autopilot system has been pushed to the first 1,000 cars in Tesla’s fleet, and it includes a traffic aware cruise control feature, forward collision warning, and an autosteer beta version that’s enabled only at “low speed.” The update was designed for Tesla vehicles running on the advanced Hardware 2 platform, a new system of cameras and computers launched last October that is intended to support fully autonomous driving through a future software update.
Experts claim we’re still far from seeing fully autonomous cars on the roads, and Tesla seems aware of this. In the update’s release notes, the company makes it clear that they won’t be rushing to integrate these new features, instead taking a “measured and cautious” approach to their rollout. The company plans to analyze several hundred million miles of real-world usage to improve the system, addressing issues as they arise and improving confidence in the system.
Tesla urges early users of this new system to remain in constant control of their vehicles while taking advantage of the driver assistance features it provides. This caution is appropriate, as most new technologies can and should be approached with a fair amount of skepticism.
It does seem like Tesla is on the right track with this update, though, as initial tests in real-life scenarios have yielded promising results. In fact, just last month, a video of Tesla’s autopilot system accurately predicting a crash went viral on the internet — a testament to the company’s enhanced software capabilities and the public’s interest in them.
It looks like the job automation trend is getting to Japan, bringing the country a step closer to a future of layoffs in favor of intelligent machines. Japanese firm Fukoku Mutual Life Insurance Co. is laying off more than 30 of its employees and is replacing them with artificial intelligence (AI) systems that will perform payout calculations for policyholders.
For Fukoku Mutual, the move is seen as practical and beneficial, with an expected productivity increase of 30 percent and a calculated return on investment in less than two years. “The insurance firm will spend about 200 million yen to install the AI system, and maintenance is expected to cost about 15 million yen annually,” writes the Japanese daily The Mainichi. “Meanwhile, it’s expected that Fukoku Mutual will save about 140 million yen per year by cutting the 34 staff.” Understandably, the 34 employees whose jobs are set to be eliminated by March 2017 aren’t going to be so happy.
The kind of work these AI systems are expected to do isn’t just limited to calculating payouts. “[T]he system will also be able to check customers’ cases against their insurance contracts to find any special coverage clauses — a measure expected to prevent payment oversights,” The Mainichi reports. “The type of payments the AI is expected to oversee at Fukoku Mutual totaled some 132,000 cases in fiscal 2015.”
The Future of Jobs
With that much at stake, Fukoku Mutual is relying on a system based on IBM’s Watson Explorer. According to IBM, this system is equipped with “cognitive technology that can think like a human,” and “can analyze and interpret all of your data, including unstructured text, images, audio and video”.
This isn’t Watson’s first job experience. Last July, Watson started working as an in-store assistant at Macy’s. And, after learning Japanese, Watson started assisting on jobs in Japan as well. The Dai-ichi Life Insurance Co., for instance, has been using Watson to process payment assessments, alongside human auditors and with no staff cuts. By March 2017, the Japan Post Insurance Co. will also start trying out Watson for the same duties.
This also isn’t the first time a large firm has decided on automation in favor of human employees. In the U.S., industry giant Walmart cut 7,000 office jobs, replacing them with automated invoice and accounting systems. Several studies have already predicted the unemployment that could result from the rise of intelligent machines. According to a recent combined study by Oxford University and the Oxford Martin School, an estimated 47 percent of jobs in the US are “at risk” of being automated in the next 20 years.
Without a doubt, the age of automation is already upon us — and it’s just the beginning.
Lego has debuted an exciting new sub-brand at the 2017 CES (Consumer Electronics Show) that is geared toward teaching children how to code. It’s called Boost and is meant for children aged seven and up. Lego has a similar product in its Mindstorms line, but that is aimed at older kids or young adults.
The starter set (priced at $160) comes with three Boost bricks containing tech to allow movement, lights, and sensors. The kit also includes 840 traditional Lego blocks that kids will use to build their interactive creations.
Out of the box, users are equipped to build five distinct models: Vernie the Robot, Frankie the Cat, the Guitar 4000, the Multi-Tool Rover 4, and a machine that can automatically assemble small Lego creations, called an Autobuilder. Each model has its own fun, unique abilities, including a robot that tells fart jokes and a cat that farts when you feed it too much (kids really like farts).
The kit works in conjunction with a companion app where kids are given step-by-step instructions on building and coding their creations. The code is displayed in the form of icons so that even children who do not know how to read can easily learn how to code.
The rapid development and lowering cost of robotics is allowing for the spread of automation. Toys such as Lego Boost introduce children to an ever-expanding field of future opportunity.
Google’s DeepMind is a company working on what could be one of the most significant scientific advances of our era. They focus on the development of AI systems that could have a broad range of applications from cybersecurity to healthcare. The primary goal of DeepMind is to develop artificial intelligence (AI) that can solve the most complex problems without even first having to be taught how.
A major training/proving ground for DeepMind’s software has been gaming. The company has developed AI that can play 49 different Atari games at expert levels. Also, in a world-first development, they created software called AlphaGo which challenged the world champion of the ancient Chinese game of Go, and won.
The latest news out of DeepMind comes back to that historic victory. As a means of testing some upgrades to AlphaGo, the company secretly unleashed the AI on some unwitting Go players. AlphaGo completely dominated the competition. More than 50 games were played and AlphaGo won every single one.
Demis Hassabis, the co-founder and CEO of Google DeepMind, came clean yesterday that AlphaGo was behind the juggernaut. “We’ve been hard at work improving AlphaGo, and over the past few days we’ve played some unofficial online games at fast time controls with our new prototype version, to check that it’s working as well as we hoped.” So, if you happened to get bested while playing Go against players named Magister or Master, don’t feel too bad.
The development of AI that can tackle huge issues with the same ferocity AlphaGo brings to the game of Go may not be too far off. Looking at DeepMind alone, setting aside the other companies making major strides in the field of AI, there is some very impressive and awe-inspiring work being done.
Even the processes by which the systems learn are fascinating. For example, DeepMind is taking a page out of the Westworld playbook by teaching its software “to dream.” Video games are also being deployed in this area to create auxiliary tasks, which boost the rate at which the software learns. DeepMind is also teaching AI to learn the way humans and other living organisms do – software is being made able to learn by doing and having that learning reinforced with a system of rewards and punishments.
The applications of software being created at the company are already causing much excitement. An AI was able to teach itself to identify a unique condition that could lead to blindness, much like a well-trained eye doctor. DeepMind has also partnered with the National Health Service (NHS) in the UK to work together on transforming healthcare.
AI has already come a long way and there is no end in sight to the benefits it will likely have in store for the future of humanity. The rapid pace of discovery could mean that such a future is right around the corner.
As time progresses, we hear of more and more artificial intelligence (AI) systems being developed that can defeat the world’s greatest game players. It’s pretty much a constant one-upping of tall tales for the information age. But instead of John Henry versus the steam-powered hammer at steel driving, we began our new era in 1997 with Garry Kasparov versus IBM’s Deep Blue at chess.
Since then, more complex AI systems have become the best at a variety of games. IBM’s Watson beat Jeopardy! super-champions Ken Jennings and Brad Rutter, and Google’s DeepMind beat the world champion of Go. Now, the Libratus AI system is looking to extend this winning streak for its creators by taking on some of the greatest human players in a game of Heads-Up No-Limit Texas Hold’em poker.
The demands of a game like poker are quite different from the likes of chess or Go. The latter games are called perfect-information games. In other words, each player knows the moves being made and can extrapolate information using a finite set of remaining moves and potential outcomes. Conversely, poker is an imperfect-information game, with more possible hand configurations than there are atoms in the universe.
The development team, out of Carnegie Mellon University, is focusing on information sets of known and unknown variables. The system is built with the goal of achieving a Nash equilibrium. Under this concept, each player is assumed to know the strategies of the other players, and no player has anything to gain by changing only his or her own strategy. As Professor Tuomas Sandholm, one of the system’s creators, puts it, “In these two-player zero-sum games, if the other player doesn’t play a Nash equilibrium strategy, that means they are playing worse, and we are making more money. In such games, playing Nash equilibrium is safe. It has the flavor where it plays rationally and is not exploitable anywhere.”
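The idea of converging on an unexploitable strategy can be illustrated with a toy zero-sum game. The sketch below — purely illustrative, and in no way a description of Libratus’s actual, far more sophisticated algorithms — runs fictitious play on rock-paper-scissors: each player repeatedly best-responds to the other’s observed mix of moves, and the empirical frequencies drift toward the game’s Nash equilibrium of playing each move one-third of the time.

```python
# Illustrative sketch: fictitious play on rock-paper-scissors,
# a two-player zero-sum game whose Nash equilibrium is the
# uniform mix (1/3, 1/3, 1/3). Not Libratus's actual method.

PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's payoff (R, P, S)

def best_response(opp_counts, payoff):
    """Pick the action with the highest expected payoff against
    the opponent's empirical mixed strategy."""
    total = sum(opp_counts)
    ev = [sum(payoff[a][b] * opp_counts[b] for b in range(3)) / total
          for a in range(3)]
    return max(range(3), key=lambda a: ev[a])

# In a zero-sum game, the column player's payoff is the negation
# of the row player's.
PAYOFF_COL = [[-PAYOFF[a][b] for a in range(3)] for b in range(3)]

counts_row, counts_col = [1, 0, 0], [0, 1, 0]  # arbitrary opening moves
for _ in range(50000):
    a = best_response(counts_col, PAYOFF)
    b = best_response(counts_row, PAYOFF_COL)
    counts_row[a] += 1
    counts_col[b] += 1

freqs = [c / sum(counts_row) for c in counts_row]
print(freqs)  # each frequency drifts toward the equilibrium value of 1/3
```

Because the equilibrium strategy is unexploitable, a player using it cannot be beaten in expectation no matter what the opponent does — the property Sandholm describes as “safe.”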
The matches are currently set to take place beginning on January 11 at the Rivers Casino in Pittsburgh, Pennsylvania. Libratus will go up against four of the world’s poker pros to win a cut of the $200,000 prize pot.
Bringing the best game players in the world down a peg is not the final goal for these developments, though. Creating a system of AI that can successfully master imperfect information games has wide-reaching implications across a variety of fields. “Most real-world interactions include multiple parties and incomplete information,” says Sandholm. Having systems that can successfully navigate these interactions will help to develop the technology further for more critical applications.
Sandholm foresees similar automated systems as being able to participate in complex negotiations, go toe to toe with nefarious hackers, and even fight disease. “We’ve been looking at auto-immune diseases and cancer, steering one’s own immune system to better battle disease at hand,” he explained. “The T-cell is not really an opponent, but you can deal with them using these techniques.”
If you’re curious as to how Libratus fares in the coming matches, they will be made available on the gamecasting site Twitch throughout the course of the tournament.
In addition to the vehicle’s obviously uncommon look (you did notice the wings, yes?), it boasts an equally unique user experience.
Powered by an artificial intelligence (AI) agent nicknamed “Yui,” the car learns about the driver, monitoring everything from their schedule and driving patterns to their responsiveness and emotions. Communication between Yui and the driver isn’t strictly voice-based — the interior is designed so that Yui can use light, sound, and touch to convey information.
“At Toyota, we recognize that the important question isn’t whether future vehicles will be equipped with automated or connected technologies,” said Bob Carter, senior vice president of automotive operations for Toyota. “It is the experience of the people who engage with those vehicles. Thanks to Concept-i and the power of artificial intelligence, we think the future is a vehicle that can engage with people in return.”
Unlike most concept cars, Concept-i will actually get to spend some time on the road, as Toyota plans to oversee on-road evaluations in Japan within the next few years.
Perhaps one day in the not-so-distant future, you could have your own one-on-one chat with Yui.
At this year’s Consumer Electronics Show, Mayfield Robotics unveiled what could possibly be the first truly domestic robot. The Bosch-owned company integrated full functionality with a pleasing, approachable design to establish the robot, named Kuri, as a friendly presence in the home.
Kuri is designed to fit right into smart homes. According to Kuri’s Blog, the bot has a 1080p camera behind one of its eyes, an array of four microphones, dual speakers, wifi and Bluetooth connectivity, and spatial sensors for navigation. It can be programmed to do various tasks around the house and is capable of connecting to modern smart homes through IFTTT. Kuri is 50.8 centimeters (20 inches) tall and weighs 6.4 kilograms (14 pounds), and when its hours of battery life run out, the bot simply returns to its charging dock to recharge.
The fact that Kuri reminds us of certain characters from animated films is not coincidental — Mayfield Robotics intentionally made the robot look that way so that it could easily ingratiate itself to its owners. However, Kuri is more than just a pretty face. “While insanely cute on the outside, Kuri contains serious technologies on the inside that represent the latest developments in smartphones, gaming, and robotics,” said Mayfield Robotics CTO and co-founder Kaijen Hsiao in a press release. “We hope Kuri introduces people — especially kids — to the power of technology and can inspire a new world of possibilities for their future.”
Best Friends With a Bot?
According to Kuri’s creators, one particularly great thing about this domestic robot is that it can be fully functional out of the box, not relying too heavily on third-party services. Further, by the time it reaches the market, Kuri will have a companion app available for iOS and Android. This will allow the robot’s owners to contact and control it remotely to do things like check up on the house while away on a business trip.
Kuri is far from the first domestic robot. A similar project from ASUS, called Zenbo, was also just announced in Taiwan, with a whole slew of capabilities of its own. French telecommunications company Yumii also recently unveiled Cutii, a companion robot for the elderly to help them interact with family, doctors, and others. We also have companion robots that can help autistic children learn new skills, ones that can respond to us with familiar facial expressions, and some that can perform physical tasks to help people with disabilities.
Kuri costs $699 and is available for pre-order with a down payment of $100. It should be released around the 2017 holiday season, so we’ll just have to wait until then to see whether the age of domestic robots is truly upon us.
Flying cars have long been a facet of a science fiction vision of the future. From George Jetson making his morning rounds, to Leeloo crash landing on Korben Dallas’s cab, the future of transport has always been among the clouds. Back here in reality, driverless vehicles seem to be taking over as the next step in the evolution of personal transportation. Even so, some just aren’t ready to give up on the dream of taking flight in the family vehicle.
Urban Aeronautics is a company out of Israel that is currently working on a passenger drone able to carry 500 kg (1,102 lbs) at speeds up to 185 km/h (115 mph). The vehicle is called the Cormorant, formerly the “Air Mule,” and is generally being looked at as a military vehicle. It uses internal rotors, as opposed to helicopter-like blades, making it easier to maneuver in an urban setting.
Moving on Up
This “flying car” still has some work to do before it is ready to join the ranks. The prototype does not yet comply with all Federal Aviation Administration (FAA) standards. Also, during a test flight back in November, there were inconsistencies in the data provided by onboard sensors.
“It could revolutionize several aspects of warfare, including medical evacuation of soldiers on the battlefield,” says Tal Inbar, head of the UAV research center at Israel’s Fisher Institute for Air and Space Strategic Studies.
Flying cars are also being explored by other companies. Lilium Aviation has developed a prototype for a vehicle that can take off vertically using a set of large electric fans which then tilt to propel the vehicle forward at speeds from 250 to 300 km/h (160 to 190 mph).
Amazon has filed a patent that sheds light on its future drone-powered distribution plans, and they include an airship full of merchandise. We’ve long known that the company was extremely interested in deliveries carried out by drones, but these plans uncover just how far ahead the online retail giant is looking to see their vision through.
Central to Amazon’s plans is the deployment of “air fulfillment centers (AFCs),” each acting as a sort of flying warehouse. These AFCs would hover over areas where demand is expected to increase for certain products at a certain time. For example, during a sporting event, they might hover above the stadium while stocked with snacks or team merchandise. Upon receiving an online order, smaller drones aboard the AFC would fly down to the stadium and deploy the demanded goods to the customers — a quick, convenient delivery. The AFC might even descend from time to time to flash a video or blare out an audio advertisement.
Though it seems relatively simple, the whole process actually involves a complex web of people and aircraft. Amazon’s patent mentions additional auxiliary aerial shuttles that occasionally restock the AFCs, as well as bring the drones back up, as the company wants to save their power for deliveries. Also brought up in the patent was the idea of the drones and shuttles “talking” to each other, communicating crucial information like atmospheric conditions and quickest routes.
The Future of Retail
Though the idea of receiving a product just minutes after ordering it online, even when you aren’t at home, is thoroughly exciting, we might have to temper our expectations. Given that these airborne fulfillment centers involve a considerable number of things flying around, Amazon has quite a few regulatory obstacles to overcome before it can put this plan into action.
Remember that robot that freaked everyone out on the internet because it said it will destroy humans? That was Sophia—and she feels terrible about the whole thing.
In any case, those who watched HBO’s latest series Westworld might be reminded of her, as the show’s artificially intelligent (AI) robots entertained their guests—or shot at one another. And that season finale? Oh, yes.
The reality is, however, Sophia isn’t quite like that yet. She’s no Dolores Abernathy.
As to be expected, and just like in the HBO series, the key lies in the code. There are two systems of code that make Dolores superior to Sophia, explains Hanson’s VP of Creative Stephan Bugaj, who also heads personality design for their robots. These systems are the secret to making Hanson robots more relatable, although both are still in their early stages of development in the real world.
The first is generative code. This is a system that’s capable of writing its own code, in order to adapt to situations—something like neural networks and machine learning, but with the ability not only to assimilate and build upon incoming data, but to create new self-learning programs (for instance) when the need arises. “An A.I. could reason about itself and decide that it needs some new code, and write it itself,” Bugaj explains.
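As a toy illustration of that idea — and only an illustration; it says nothing about how Hanson Robotics’ systems are actually built — the Python sketch below shows a program that writes the source of a new handler function at runtime, compiles it, and then calls it. The `generate_responder` name and its behavior are invented for this example.

```python
# Toy illustration of "generative code": a program that writes,
# compiles, and loads new code at runtime. A minimal sketch of the
# concept, not how any real robot's software works.

def generate_responder(situation, reply):
    """Write Python source for a new handler, then compile and load it."""
    source = (
        f"def handle_{situation}(message):\n"
        f"    # auto-generated handler for '{situation}'\n"
        f"    return {reply!r} + ' (re: ' + message + ')'\n"
    )
    namespace = {}
    exec(source, namespace)  # compile and run the freshly written source
    return namespace[f"handle_{situation}"]

# The program "decides" it needs a handler it doesn't have yet,
# and writes one for itself:
greet = generate_responder("greeting", "Hello there")
print(greet("hi"))  # Hello there (re: hi)
```

A genuinely generative AI would go much further — deciding for itself *what* code it needs — but the mechanics of code producing more code are this simple at their core.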
It’s sort of like AI programs having their own ideas.
Such wildly sophisticated software is evident in Westworld; but in the real world, we’re not quite there yet.
They’re definitely adding some sort of semantic network associations. They’re changing things about themselves, whatever that internal structure would look like in this fictional coding universe. They’re learning, they’re formulating new opinions about themselves and about the world around them, and they’re acting on them. And that’s being sentient.
The second key software system is memory. Of course, in order to truly learn, an AI has to have memory. Not just any kind of memory, but memory in the same way human beings have it—short term, long-term, and even episodic. It’s a trait that makes the Westworld robots seem uncannily human.
“We use analogies, intuition, this kind of reminding thing, to do a lot of great work,” explains Oren Etzioni, CEO of the Allen Institute. “And that capability—people call it associative, people call it content-addressable, any of a number of things—we ain’t got any systems that come even close.”
A New Direction for the Future?
Bugaj believes that shows like Westworld can serve as an inspirational template for what AI can do in the real world. “I think they’re doing what a good futurist should do, which is making conjectures about the future, and then exploring how those might play out,” he says.
“A semantic or neural network is state-evolving over time, because it’s bringing in new data and basically learning new things,” says Bugaj.
But neural networks aren’t yet capable of generative code, that next evolutionary level in machine intelligence — the kind of code that writes its own code. And the key difference is memory. “Everything that we talk about, with a machine being able to learn, comes down to memory management,” Etzioni says. So far, we have yet to create code that can achieve a creative, spontaneous memory.
So, to be perfectly honest, Westworld is still a little out of our league. But hey, at the end of the day, it’s science fiction, folks. And however much it may be rooted in reality—fiction is precisely what it is.
And as science has proven time and again, it will always cough up something far better than what the most fevered sci-fi writer’s imagination can produce. It just takes a lot of research and a lot of time.
The interest in artificial intelligence (AI) has evolved lately into something more akin to an obsession. Every day, it seems, we learn of some new development in AI, or its practical application to some other facet of our lives. The term “artificial intelligence,” coined in 1956, started out as mere speculation that human thinking could be organized into a symbolic system that was—or would someday be—technologically reproducible.
But once it gained momentum, it seemed there was no stopping the ascent of AI.
Among the most promising avenues of AI research is machine learning, which consists of algorithms that “learn” by finding patterns and generating insights from raw, unassimilated data. Once these heuristic algorithms have assimilated enough data and compiled a sufficient library of rote reactions, they can apply these computations to make decisions and predictions when faced with similar stimuli in the future—consider it an artificial “instinct” routine. The programs now have the ability to learn and execute actions without having to be specifically programmed every time, simply by accessing this instinctual programming.
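The pattern-matching loop described above can be sketched in a few lines of code. The following toy nearest-neighbor classifier uses made-up data and labels; it "learns" simply by memorizing labeled examples, then classifies a new point by analogy to the closest example it has seen:

```python
import math

# A minimal sketch of "learning from data": a 1-nearest-neighbor
# classifier that memorizes labeled examples and labels new points
# by analogy to the closest one it has seen.
# All data here is invented for illustration.

def train(examples):
    """'Training' here is simply storing (features, label) pairs."""
    return list(examples)

def predict(model, point):
    """Label a new point using the closest stored example."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(model, key=lambda ex: distance(ex[0], point))
    return label

# Toy data: two clusters labeled "spam" and "ham".
model = train([((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
               ((5.0, 5.0), "ham"), ((4.8, 5.2), "ham")])

print(predict(model, (1.1, 0.9)))  # → spam
print(predict(model, (5.1, 4.9)))  # → ham
```

Real systems use far richer models and vastly more data, but the core loop is the same: store experience, then generalize from it when a similar stimulus appears.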
Machine Learning in Our Daily Lives
Over the past few years, there have been enormous strides in the machine learning industry, resulting in what we have today: highly adaptable programs capable of applying mathematical calculations to big data repeatedly and at significantly faster speeds, according to the Statistical Analysis System (SAS) Institute.
There are many branches within machine learning, with “deep learning” being one of the main research concentrations. This is hardly surprising, seeing how deep learning is the branch of machine learning that brings AI the closest to its lofty goal of enabling machines to learn and think like humans.
Now, as the technology swiftly evolves—practically on its own, for by its very nature it obviates or reduces the need for human guidance or intervention—the application of machine learning is fast becoming a ubiquitous part of our daily lives. Among the machine learning applications we interact with on a daily basis, the most familiar are probably those already embedded in smartphone applications, particularly the facial recognition and sorting technology in photos apps.
Several shopping websites, together with subscription services such as Netflix, employ machine learning to predict what a customer might like and display suggestions based on these assumptions. Fraud detection, though still imperfect, is also becoming far more capable with help from machine learning algorithms. And as autonomous vehicles gain popularity and legality in different areas of the globe, machine learning will eventually take over the driver’s seat.
So even if you’ve never heard of machine learning, it’s a safe bet it will touch upon your life in some way—no matter how remote. Whether it’s the changing nature and composition of the workplace and workforce, the astonishing new advances in driverless tech, or AI’s infiltration of fraud detection, crime prevention, and medical diagnostics, intelligent algorithms will be influencing our lives whether we like it or not.
Watch this video for a demonstration of some machine learning-based technologies being used and developed today:
It all started out as science fiction: machines that can talk, machines that can think, machines that can feel. Although that last bit may be impossible without sparking an entire world of debate regarding the existence of consciousness, scientists have certainly been making strides with the first two.
Over the years, we have been hearing a lot about artificial intelligence, machine learning, and deep learning. But how do we differentiate between these three rather abstruse terms, and how are they related to one another?
Artificial intelligence (AI) is the general field that covers everything that has anything to do with imbuing machines with “intelligence,” with the goal of emulating a human being’s unique reasoning faculties. Machine learning is a category within the larger field of artificial intelligence that is concerned with conferring upon machines the ability to “learn.” This is achieved by using algorithms that discover patterns and generate insights from the data they are exposed to, for application to future decision-making and predictions, a process that sidesteps the need to be programmed specifically for every single possible action.
Deep learning, on the other hand, is a subset of machine learning: it’s the most advanced AI field, one that brings AI the closest to the goal of enabling machines to learn and think as much like humans as possible.
In short, deep learning is a subset of machine learning, and machine learning falls within artificial intelligence. The following image perfectly encapsulates the interrelationship of the three.
Here’s a little bit of historical background to better illustrate the differences between the three, and how each discovery and advance has paved the way for the next:
Philosophers attempted to make sense of human thinking in the context of a system, and this idea resulted in the coinage of the term “artificial intelligence” in 1956. And philosophy is still believed to have an important role to play in the advancement of artificial intelligence to this day. Oxford University physicist David Deutsch wrote in an article that he believes philosophy still holds the key to achieving artificial general intelligence (AGI), the level of machine intelligence comparable to that of the human brain, despite the fact that “no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality.”
Machine learning is just one approach to realizing artificial intelligence, one that ultimately eliminates (or greatly reduces) the need to hand-code the software with a list of possibilities and the ways the machine intelligence ought to react to each of them. From 1949 until the late 1960s, American electrical engineer Arthur Samuel worked on evolving artificial intelligence from merely recognizing patterns to learning from experience, making him a pioneer of the field. He used a game of checkers for his research while working at IBM, and this subsequently influenced the programming of early IBM computers.
Current applications are becoming more and more sophisticated, making their way into complex medical applications.
As we delve into higher and even more sophisticated levels of machine learning, deep learning comes into play. Deep learning requires a complex architecture that mimics a human brain’s neural networks in order to make sense of patterns, even with noise, missing details, and other sources of confusion. While the possibilities of deep learning are vast, so are its requirements: you need big data, and tremendous computing power.
It means not having to laboriously program a prospective AI with that elusive quality of “intelligence”—however defined. Instead, all the potential for future intelligence and reasoning power is latent in the program itself, much like an infant’s inchoate but infinitely flexible mind.
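The layered "neural network" idea behind deep learning can be made concrete with a deliberately tiny, hand-built sketch. The weights below are set by hand purely for illustration (in practice they are learned from data); the point is that adding a hidden layer lets the network solve XOR, a problem no single-layer model can:

```python
# A toy feed-forward network: a hidden layer re-represents the input
# so a simple output unit can compute XOR. Weights are hand-set for
# illustration; deep learning systems learn them from data instead.

def step(x):
    """A crude activation function: fire (1) if input is positive."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through the activation."""
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(a, b):
    # Hidden layer: one unit detects "a OR b", the other "a AND b".
    h1 = neuron((a, b), (1, 1), -0.5)   # OR
    h2 = neuron((a, b), (1, 1), -1.5)   # AND
    # Output layer: OR but not AND is exactly XOR.
    return neuron((h1, h2), (1, -2), -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Deep learning stacks many such layers, with millions of learned weights and smoother activation functions, but the principle of building intermediate representations layer by layer is the same.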
Watch this video for a basic explanation of how it all works:
Amazon has been serious about developing the future of interactive, speech-activated artificial intelligence (AI); their chatbot Alexa, which resides within the sleek cylindrical exterior of the Amazon Echo device, has already shown the world what the next evolution of our interface with technology will look like.
Last year, in their ongoing quest to better this avant-garde tech, Amazon announced that it’s coming out with the Alexa Fund to invest in companies working on “voice technology innovation.” Now, elaborating on that commitment, the internet commerce giant is revealing an accelerator program for startups dabbling in conversational AI.
The Alexa Accelerator, a partnership with startup accelerator TechStars, will focus on areas connected with Amazon’s Alexa. That means companies working on how voice technology AI can be improved and applied to more devices, and striving to change the way we interact with our technology so that it’s more streamlined and intuitive.
But since Amazon wants to stick Alexa in more and more platforms, that actually leaves a pretty large field. The company hasn’t specified just what types of companies or tech concentrations it will be accepting, so teams working on smart cars, smart homes, medical devices, and everything in between could make the final cut.
While applications are open to anyone, only 10 to 12 companies will be selected for the intensive 13-week program, which will connect them with mentors and experts who can help the startups develop both their products and their organizations.
A Talkative Future
Voice technology has really been shaping up to be the “successor” to touch-based devices. It’s more spontaneous and intuitive, and represents the natural evolution of tech interaction, which is why companies want to create AI that can better understand and respond to voice commands—not as easy a task as one might think. Tech giants like Microsoft and Google have been working on improvements to speech recognition and generation, and accelerators like this are only going to make that voice-powered future arrive even faster.
Somewhat surprisingly, Amazon has cornered the market when it comes to voice-activated AI; sales for the Echo family of devices have steadily increased, and developers have responded positively to the technology, especially since Amazon released the free Alexa Skills Kit for cloud- and web-based development. Over 3,000 new skills have been added to Alexa’s already substantial intellectual heft, which only seems to indicate that the technology will begin to grow exponentially as developers continually broaden the platform’s skill set.
In this paper, we propose Simulated+Unsupervised (S+U) learning, where the goal is to improve the realism of synthetic images from a simulator using unlabeled real data. The improved realism enables the training of better machine learning models on large datasets without any data collection or human annotation effort. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study.
Put simply, computer-generated training images are easy for an algorithm to work with because they come pre-labeled, whereas real-world images aren’t. A synthetic image of an eye arrives already tagged, but a similar image taken from the real world would be unknown to the algorithm because it isn’t labeled.
The paper suggests pitting neural networks against each other to bridge this gap—training the program so that it can better discern synthetic data from real data.
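A heavily simplified, hypothetical sketch of that adversarial idea follows, with invented numbers standing in for images and a toy scoring function standing in for a neural-network discriminator: a "refiner" repeatedly adjusts synthetic samples until the discriminator can no longer easily tell them from real data.

```python
# Toy adversarial refinement: nudge synthetic samples until they
# score as "realistic". All values and the update rule are invented
# for illustration; the actual S+U learning method operates on
# images with competing neural networks.

REAL_MEAN = 5.0              # stand-in for the real-data distribution
synthetic = [1.8, 2.1, 2.0]  # raw simulator output, too unrealistic

def discriminator(sample):
    """Returns how 'fake' a sample looks (0 = indistinguishable)."""
    return (sample - REAL_MEAN) ** 2

def refine(samples, steps=100, lr=0.1):
    """Learn one offset that lowers the average fakeness score,
    via simple numerical gradient descent on the offset."""
    offset = 0.0
    eps = 1e-3
    for _ in range(steps):
        def fakeness(o):
            return sum(discriminator(s + o) for s in samples) / len(samples)
        grad = (fakeness(offset + eps) - fakeness(offset - eps)) / (2 * eps)
        offset -= lr * grad
    return [s + offset for s in samples]

refined = refine(synthetic)
print(sum(refined) / len(refined))  # ≈ 5.0, matching the real-data mean
```

In the real method the refiner is itself a network transforming whole images, and the discriminator learns alongside it; this sketch only shows the feedback loop between the two.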
Apple As An Open Book
Perhaps even more relevant than the technique described in the paper is the fact that it was published at all. Apple is notoriously secretive about its research, and AI is at the heart of some of its biggest innovations, like Siri.
The company’s reluctance, however, may have proven counterproductive to its progress in the field. Its intent to protect proprietary research and its reluctance to share knowledge in AI may have turned off prospective talent who wanted their achievements recognized.
Nevertheless, this new willingness to share information and research is a welcome development in a very competitive industry that is rapidly advancing. A more democratized approach to the field could pave the way for more talent and brains working toward improving the technology.
The popular ride-hailing company Uber has recently launched a new website for its large-scale delivery service called UberFreight. And while very little information about the new service is forthcoming, we can safely assume that this truck delivery service is connected to Uber’s acquisition of the self-driving cargo startup Otto last August.
“We don’t have any new information to share at the moment, but hope to in the new year so please do stay in touch,” an Uber spokesperson told Inverse.
Little else is revealed on the site, except that it’s open for both carriers and shippers. People can sign up to drive delivery trucks across the country, while others can send packages using the service without the usual contracts required by established shipping companies.
It’s clear, however, that UberFreight won’t be using Otto’s trucks just yet.
It’ll certainly be a while before autonomous trucks hit the road. Which is precisely why establishing UberFreight this early in the game is ideal. Right now, it’s about data acquisition and processing: running UberFreight will give Uber access to enormous quantities of real-life data that can help prepare for and improve the eventual autonomous hauling service it plans to implement using Otto’s trucks. The self-driving vehicles can learn from experienced drivers; the delay, meanwhile, also gives government regulators some much-needed time to figure out how to govern autonomous trucks.
In its October test run, Uber showed that the Otto autonomous trucks would still be monitored by a human being who works more as an operator than a driver. This setup can help mitigate the job displacement expected to result from autonomous freight trucks. Long-haul truck drivers could well work as long-haul operators in the near future.
With Uber’s resources and Otto’s technology, this new service enters a previously unexplored market that’s been steadily garnering attention from the likes of the Nikola Motor Company and even Tesla. Right now, it’s baby steps; but each faltering step forward with UberFreight will pave the way for an almost incredible future, one in which that long-haul semi you see beside you on the highway is driven by an algorithm, not a crusty truck driver.
Which raises the next question: what will become of all those seedy truck stops across the country?
The future of autonomous automobiles has been assured, to a great extent, by innovative companies like Tesla. While some are still hesitant to accept self-driving cars on the roads (who wouldn’t be?), and although the technology certainly hasn’t been perfected yet (remember Uber’s San Francisco drive?), new technological improvements daily bring us closer to complete autonomy.
Take Tesla Autopilot’s latest radar enhancements, for instance. Last September, Elon Musk’s futuristic car company rolled out version 8.0 of its self-driving software. This radar-processing software included an improved point cloud, which enabled Tesla Autopilot to track what’s in front of the car.
About 3 months after it was released, this new point cloud system demonstrated just how effective it really can be. It saved Hans Noordsij, a Tesla driver from the Netherlands, a great deal of trouble—and he caught the whole thing on his car’s dashboard cam.
The video shows the Tesla Autopilot’s Forward Collision Warning sending out a seemingly misplaced alert, as everything seemed to be humming along quite smoothly. But a few seconds later, we see the collision happen. Noordsij was warned off by his car’s radar system. Fortunately, despite the seeming seriousness of the crash, no one was badly injured.
Remarkably, the car’s radar and collision avoidance system correctly predicted a crash involving other cars, and could see the movements of the hidden vehicle in front of the leading car. Whatever you think of driverless technology, the above video is pretty cool.
The Better It Sees, the Better It Drives
This isn’t the first time that the improved software has saved Tesla drivers from accidents. Neither is it the last improvement Tesla has implemented. Just earlier this December, version 8.0.2 was released. It’s a relatively minor enhancement that prevents drivers from speeding (thanks, Elon!), but even such a seemingly insignificant improvement adds to how autonomous vehicles can help keep their drivers—and others around them—safe. And it’s a timely upgrade, too, especially after Uber’s self-driving SUVs were caught running red lights on a couple of occasions.
Indeed, Tesla’s Autopilot radar system will further improve the more it is used. According to the blog that accompanied its release:
When the data shows that false braking events would be rare, the car will begin mild braking using radar, even if the camera doesn’t notice the object ahead. As the system confidence level rises, the braking force will gradually increase to full strength when it is approximately 99.99% certain of a collision. This may not always prevent a collision entirely, but the impact speed will be dramatically reduced to the point where there are unlikely to be serious injuries to the vehicle occupants.
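The graduated braking the release describes can be illustrated with a small sketch. The lower threshold and the linear ramp below are assumptions made purely for illustration; only the roughly 99.99-percent figure comes from the quote above, and Tesla's actual control logic is not public.

```python
# Hypothetical sketch of confidence-graduated braking: force scales
# with how certain the system is that a collision is imminent.
# Thresholds and the linear ramp are invented for illustration.

def braking_force(collision_confidence):
    """Map collision confidence (0.0-1.0) to braking force (0.0-1.0)."""
    MILD_THRESHOLD = 0.90    # begin mild braking (assumed value)
    FULL_THRESHOLD = 0.9999  # full-strength braking, per the release
    if collision_confidence < MILD_THRESHOLD:
        return 0.0
    if collision_confidence >= FULL_THRESHOLD:
        return 1.0
    # Ramp force linearly between the two thresholds.
    span = FULL_THRESHOLD - MILD_THRESHOLD
    return (collision_confidence - MILD_THRESHOLD) / span

print(braking_force(0.50))    # no braking
print(braking_force(0.95))    # partial braking
print(braking_force(0.9999))  # full braking
```

The design choice here mirrors the quoted trade-off: braking early on low confidence risks false alarms, so force ramps up only as certainty grows.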
Of course, we’re still far from full autonomy.
The self-driving cars we have today still require their driver’s attention, even just a part of it—especially as the system continues to be improved. As it stands, this new radar system is not a replacement for a driver’s vigilance on the road.
Yes, self-driving cars are the future—but it wouldn’t hurt to keep an eye out for trouble along the road, at least for the time being.
In April, Thomas Frey, the founding executive director of non-profit futurist organization the DaVinci Institute, posted a prediction outlining 128 things that the driverless car era will bring to an end (or substantially reduce) by the year 2030.
“I’ve become enamored with the coming autonomous car era where many of today’s problems get solved,” writes Frey. “However, going through the transition will be anything but smooth.”
He goes on to explain all the areas primed for disruption by autonomous vehicles. Personal drivers as well as drivers and operators of heavy industrial vehicles are an obvious guess. In addition, Frey lists supporting jobs such as driving instructors, traffic analysts, car licensing and registration jobs, parking-related jobs, and many others.
Because autonomous vehicles promise to cut down on the number of accidents and fatalities resulting from human error — such as fatigue, drunkenness, and distractions — vehicle and road maintenance and repair will also be less of a problem. As a result, insurance firms may also see their businesses suffering.
In connection with this, Frey also foresees instances of car theft and road rage dropping. Subsequently, we’ll see fewer related court cases. Legal jobs and personnel that monitor, enforce, and oversee the implementation of driving laws, such as police, traffic lawyers, and judges, will also be downsized.
No More Driver’s Licenses?
Frey agrees with Elon Musk’s forecast that, as fully autonomous AI becomes legal, human driving will likely be prohibited. “The privilege of driving is about to be redefined,” he writes. “Elon Musk has predicted, over time, that lawmakers will decide that driving a vehicle is far too dangerous for humans, and most people will be outlawed from doing the driving themselves.”
So how are humans going to adapt in a world where machines are taking over fast?
Some are recommending a universal basic income (UBI), but Frey posits that unemployment won’t be as big of a problem as it’s being made out to be.
“The part that’s receiving far less attention is the huge number of new jobs that will replace the ones going away,” writes Frey. He says that autonomous vehicles will be opening up industries surrounding driverless vehicles, such as “ride experience” designers, as well as analysts and engineers focusing on automating and coordinating cars with traffic.
Regardless of how the age of autonomous vehicles affects employment, one thing is almost certain: we’ll all be a lot safer.
Tesla confirmed reports that their latest Autopilot update will now follow the exact speed limit on undivided roads. Prior to this update, owners were able to set the speed of the Autopilot’s ‘Traffic-Aware Cruise Control’ feature to up to eight km/h (five mph) over the speed limit on roads and non-divided highways when using Autosteer.
The update is the latest in a series of tweaks the automaker has released to make the Tesla driving experience safer and more convenient. In addition to having the Autopilot system strictly adhere to the speed limit, Tesla has added a more aggressive alert system to remind drivers when they should take the wheel.
“Tesla is not only adding restrictions. The company also recently started pushing a new update 8.0.2 to add new convenience features to its vehicles. Additionally, CEO Elon Musk confirmed yesterday that Tesla is making progress to bring Autopilot 2.0 vehicles to parity with the previous generation and that the Autopilot’s vision neural net is ‘now working well,’” Electrek reports.
To say that the automotive industry will change more in the next two decades than it has in the past century is not an exaggeration. As car makers push to improve autonomous driving capabilities, it won’t be long before self-driving vehicles disrupt the familiar driving experience.
For instance, Tesla’s latest update has already been met with mixed reviews. Though anchored in the intent to make driving safer, these updates strike some car owners as Tesla taking away capabilities they already had. And motorists who want to go a little faster may find it frustrating that they can’t unless they take over and drive the car themselves.
With autonomous driving shifting responsibility from the human behind the wheel to the machine, however, liability will also be transferred to the manufacturers. This makes it even more important for automakers to take the necessary steps to ensure safety.
There has been a lot of discussion regarding the role automation will play in our workforce in the coming years. Much of this talk is focused on how the shift will impact low-skilled workers in areas like manufacturing. However, these jobs are increasingly being joined by other professions: doctors, lawyers, and now even finance managers.
Ray Dalio is the founder of Bridgewater, the world’s largest and most profitable hedge fund, who just happens to also be working toward completely automating the firm’s management. Software engineers are hard at work developing systems to completely take over the day-to-day decision-making process, including hiring and firing. The software team, referred to as the Systematized Intelligence Lab, is led by David Ferrucci, who also led the development of IBM’s Watson.
The project is based on Dalio’s 123-page public manifesto known as the “Principles.” The manifesto culminates in what the Wall Street Journal describes as an “unorthodox management approach” involving “radical openness,” one that encourages employees to rate each other and to openly disagree and criticize, and in which honesty is a top priority.
Dalio believes that people work like machines, so much so that the word “machine” appears 84 times in his manifesto.
Those who are the most successful are capable of “higher level thinking” —i.e., they are able to step back and design a “machine” consisting of the right people doing the right things to get what they want. They are able to assess and improve how their “machine” works by comparing the outcomes that the machine is producing with their goals.
Dalio calls the software under development the Principles Operating System, or PriOS. His ultimate goal is for the system to predict meeting outcomes, autonomously mediate employee disagreements, and dictate every single move of each employee’s day. Dalio wants the system to be able to take over nearly three-quarters of daily management decisions.
As the Wall Street Journal puts it, “The role of many remaining humans at the firm wouldn’t be to make individual choices but to design the criteria by which the system makes decisions, intervening when something isn’t working.”
These goals, coupled with Dalio’s view of humans as machines, make it clear that in his mind not just processes and work can be automated, but humans themselves.
This line of thinking seems to take automation a bit far: it is much more than making humans efficient; it is an attempt to literally automate people. Simply put, humans are amazingly complex; indeed, there is much that we still don’t understand about the human brain. Our AI systems are nowhere near capable of understanding how humans work, taking all factors into consideration, and directing our lives to this degree. And even if they could dictate the perfect day for us job-wise, things (and people) are constantly changing. AI would need to be flexible enough to immediately analyze these changes and redirect us.
But no matter how far we advance, humans are not machines and cannot be fully automated. Bridgewater even reports that one-fifth of its hires cannot handle a year at the company, and those who do survive are often found crying in the bathrooms.
The system is said to potentially be able to “dole out GPS-style directions for how staff members should spend every aspect of their days, down to whether an employee should make a particular phone call.” This robs people of individuality and forces them to fit into a very specific (rather fundamentalist-sounding) philosophy. Real automation seeks to increase human efficiency; this is more like replacing humans with organic cogs.
Experts at the Suzumori Endo Lab of the Tokyo Institute of Technology have just unveiled an engineering marvel. The extremely long robotic arm is getting all the buzz not only for its ability to reach 20 meters (65 feet) but also for how little it weighs.
Also called the Giacometti arm — named after Alberto Giacometti, a Swiss sculptor best known for his thin, lanky works of art — the robot is made up almost entirely of balloons filled with helium. Technabob reports that the developers used lightweight synthetic muscles in place of the usual hydraulic joints and gas-operated actuators, taking still more weight out of the equation. In fact, the whole machine weighs just about 1.2 kilograms (2.5 pounds), folds very easily, and could fit in a regular car trunk.
Although its light weight creates obvious problems, like being easily buffeted by strong winds, experts see promise in the unusual device. They said it could be used for search, inspection, or transport operations in situations where the use of drones isn’t practical. Further, its light weight makes it safer to operate in a factory setting in the event of glitches, in contrast to the heavy machinery typically used today.
The report doesn’t take a hostile stance against AI and automation; in fact, it encourages the technologies. The first strategy it proposes is to invest in AI research and development, echoing Obama and the earlier White House report:
If care is taken to responsibly maximize its development, AI will make important, positive contributions to aggregate productivity growth, and advances in AI technology hold incredible potential to help the United States stay on the cutting edge of innovation. Government has an important role to play in advancing the AI field by investing in research and development.
As Obama said in an interview with WIRED, “[T]he government should add a relatively light touch, investing heavily in research and making sure there’s a conversation between basic research and applied research.” This also brings to mind the recent Senate subcommittee hearing in which industry experts urged the government to step up in making good use of AI.
The second strategy is in the same league as the first: Education is key.
As AI changes the nature of work and the skills demanded by the labor market, American workers will need to be prepared with the education and training that can help them continue to succeed. If the United States fails to improve at educating children and retraining adults with the skills needed in an increasingly AI-driven economy, the country risks leaving millions of Americans behind and losing its position as the global economic leader.
What About UBI?
Those following the discussions surrounding this issue are familiar with one particular solution that’s popped up quite often: universal basic income (UBI), giving every citizen a set amount of money with no strings attached to it. It is a favorite answer to the unemployment problem given by conservative thinkers like Milton Friedman and tech industry executives like Y Combinator’s Sam Altman and Tesla’s Elon Musk.
The new White House report offers a more cautious take on UBI: “Our goal should be first and foremost to foster the skills, training, job search assistance, and other labor market institutions to make sure people can get into jobs, which would much more directly address the employment issues raised by AI than would UBI.”
This sums up the paper’s third strategy, which advocates for more moderate social safety nets and government support as opposed to a UBI system.
This all seems well and good, and it’s overall a pretty progressive approach that accepts the inevitability of AI and automation and works with it. But the most important question might be this: “Will the incoming Trump administration agree with these suggestions?”
“When you look at Trump’s stance on jobs in general, he thinks that immigration and free trade are the things that have robbed us of jobs, not automation,” Rob May, founder of Talla, a human resources automation company, tells WIRED. “It will be interesting to see if, as time goes on, the administration sees automation as something that is inevitable or if they try to put policies in place that try to fight it.”
For a minute, it seemed like there was good news for anyone who has ever dreamt of becoming a Jaeger pilot. Videos and images of a four-meter (13-ft) tall, walking mech suit that could bring Pacific Rim’s robots into reality surfaced on Instagram.
The imposing exoskeleton was supposedly created by a Korean robotics company with the help of Moldovan-American designer, Vitaly Bulgarov. Bulgarov was the creative mind who designed the robots in Terminator, Robocop, and Transformers.
However, according to Live Science, we have some reasons to be skeptical of the bot:
The South Korean company that is supposedly developing the robot has virtually no online presence and was unfamiliar to robotics researchers contacted by Live Science. Furthermore, the only source for the videos or any information about them is the Facebook and Instagram pages of a designer whose website mentions a conceptual art project about a “fictional robotics corporation that develops its products in a not-so-distant future.”
Bulgarov still claims the massive mech suit is real. According to Bulgarov, the video was leaked, and the company will do a “proper press release” when the robot is completed in 2017. “Frankly, right now, we don’t even really care if people think this is legit because we didn’t try to promote it,” Bulgarov said.
According to Ronald Fearing, professor of electrical engineering and computer science at the University of California, Berkeley, the technology is plausible. However, the video itself is leading scientists to further question the technology’s authenticity.
Christian Hubicki, a postdoctoral robotics researcher at Georgia Tech who worked on the DURUS robot, told Live Science that the bot’s leg joints look unusually smooth given the force that the step of a 1.5-ton robot would exert on the motors. Of course, Bulgarov had an answer to this skepticism. He noted the dampers in the robot’s feet make its motion smooth. He also said the robot wasn’t built for use as is, but as a platform to show off various technical feats.
It’s possible that Method-1 may not be real, but other bots created to provide mechanical assistance already exist. After a 90-year quest to build a robot that could move in waves, engineers at Ben-Gurion University of the Negev unveiled the SAW robot. Another machine developed for search and rescue missions is the CRAM. The cockroach-inspired robot is designed to run swiftly through small spaces, making it an essential tool for first responders searching earthquake rubble for survivors.
These advances in robotics are making it possible for engineers to build machines that will prove to be invaluable. When it comes to Method-1 though, Hubicki still urges skepticism:
At some point, you have to ask yourself, ‘Did a secret company with no obvious ties to existing humanoid robotics groups actually develop a giant humanoid robot with likely unparalleled power and torque, and instead of sending out a press release, they unveil the robot by dribbling out low-resolution videos on Instagram via an affiliate who just so happens to be a visual effects artist?’
It’s always fun to go back in time and look at past predictions of what our present would look like. Last year, the internet went gaga comparing reality with the vision of 2015 that Robert Zemeckis and Bob Gale presented in Back to the Future Part II.
Works of fiction aren’t the only media to take a stab at amateur prophecy. Ten years ago, a news segment during NBC Nightly News predicted that technology would have a much bigger role in identification. The story envisioned a scenario in which hospitals could simply scan a Radio-Frequency Identification (RFID) chip implanted beneath your skin to gain access to your medical history.
The technology existed back in 2007, so the prediction was focused on such devices becoming commonplace. Unfortunately for NBC, it seems to have missed the mark. While a handful of individuals have chosen to have the chips implanted, adoption is by no means widespread.
But not every prediction NBC made was off the mark. The video also discussed how facial recognition software would play a larger part in our daily lives, using the example from the film Minority Report in which the software tailors marketing to each identified individual. While facial recognition is not yet part of such systems, proximity marketing software does exist, using real-time location systems (RTLS) to deliver location-specific marketing over Wi-Fi or Bluetooth.
According to the World Health Organization, an estimated 285 million people are visually impaired, with 39 million living with blindness and the other 246 million having low vision.
In a world of modern technological advancements, visual impairment has been the subject of much medical research. Perhaps the most notable efforts are those that use artificial intelligence (AI), specifically machine learning. Google’s DeepMind, for example, has been working with the UK’s National Health Service on ophthalmology research.
“Today, I am happy to take forward the work we have done in healthcare with an esteemed set of partners,” says Anil Bhansali, managing director at Microsoft India (R&D) Pvt. Ltd. The effort is in partnership with L V Prasad Eye Institute [LVPEI]. “In our shared vision to eradicate preventive blindness, MINE will help redefine eyecare by bringing together the power of technology and knowledge of global experts.”
AI applications in medical and healthcare research have been gaining ground, and eyecare is a field that has drawn particular attention as institutions like LVPEI test the capabilities of machine learning in healthcare. “At LVPEI, we have been using Microsoft Azure Machine Learning and Power BI to drive clinical interventions and improve patient outcomes,” explains G.N. Rao, the institute’s founder and chair. The institute focuses heavily on visually impaired children, working to predict outcomes for refractive surgery and studying the rate of change of myopia in children. As the tech giant stated in an announcement, “Microsoft will deploy its leading cloud platform technology – Cortana Intelligence Suite, for advanced analytics and to build [AI] models on eyecare.”
MINE will use machine learning’s image analysis capabilities, as eye conditions are a good space to apply the technology. The collaboration will work towards developing machine learning predictive models for better diagnosis of issues leading to vision impairment. Specifically, it will come up with optimal surgery parameters, personalizing a surgery and maximizing its chance of success.
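To make the idea of “predicting outcomes from past cases” concrete, here is a toy k-nearest-neighbour sketch. Everything in it is hypothetical for illustration: the feature vectors, the outcome labels, and the function names are not LVPEI’s or Microsoft’s actual variables or models.

```python
import math

def predict_outcome(patient, history, k=3):
    """Predict an outcome label for a new patient by majority vote
    among the k most similar past cases. Each record in `history` is
    (feature_vector, outcome_label); all names here are hypothetical,
    for illustration only."""
    nearest = sorted(history, key=lambda rec: math.dist(rec[0], patient))[:k]
    votes = [outcome for _, outcome in nearest]
    return max(set(votes), key=votes.count)
```

A real clinical model would, of course, use far richer features, careful validation, and regulatory oversight; this only shows the basic shape of predicting from similar past cases.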
As the owner of the NBA’s Dallas Mavericks, Landmark Theaters, and Magnolia Pictures, businessman Mark Cuban clearly knows a thing or two about where to invest money. And apparently, he isn’t shy about sharing his tricks with others. In a recent blog post, Cuban made an investment suggestion to the government.
“If it was me spending the money, I would take 100 billion of the proposed $1 trillion dollars in infrastructure investment and invest it in Robotics,” Cuban says. “I would invest it in the companies that do R&D, software, and design for robots and every other facet of the Robotics Industry.”
Cuban thinks the US is getting left behind in terms of robotics research and development. What’s more, he says, “none of the companies that actually make the robotics are based here in the USA. That’s a problem that needs to be solved.”
Winning the Robotics Race
Cuban seems to think that China is the opponent to beat. He notes how China, according to a recent report, is spending more on robotics than the US — about $3 billion a year. “We need to quickly pass them by,” he says. The Mavericks owner also notes how China, the EU, and Korea offer “billions in credits to support their robotics industry. We spend about $100m. That ain’t gonna work.”
According to Cuban, US “infrastructure” spending “should look forward, not backward so that we can be the robotics hub of the world.” He believes the US currently has neither the best robotic technology nor the infrastructure needed to enable it. Cuban insists:
Why is this so important? Because technological change always accelerates. It never stagnates over time. Which means we are going to face the fact that if nothing in the States changes, we will find ourselves dependent on other countries for almost everything that can and will be manufactured in a quickly approaching future.
“We have to win the robotics race,” Cuban writes. “We are not even close right now.”
HAL, the malevolent artificial intelligence (AI) in Stanley Kubrick’s 1968 masterpiece 2001: A Space Odyssey, may be responsible for ruining the idea of AI singing/songwriting for the rest of time. One of the most confusingly terrifying and heartbreaking scenes in film history simply involves the AI singing one final song as it is being permanently shut down.
However, as we humans are typically not wont to learn from our past, we have recently decided to give some real AI the ability to create music. Researchers from the University of Toronto have unleashed a new technology called “neural karaoke” to help a computer learn to write a Christmas song. The results were one part “Star Wars according to a 3 year old” and one part Steve Buscemi’s “Howdy do, fellow kids!” The song has a sprinkling of broad-stroke ideas about the holiday season, but something just isn’t right.
To all of those who desperately wondered what it would sound like if your old family computer borrowed a human suit and tried to fit in at this year’s Christmas party, your prayers have been answered.
In preparation for creating this magnum opus, the AI listened to 100 hours of Christmas music to learn the basics of a melody, to which it later added some light accompaniment. The lyrics were derived from a series of themed photographs (in which there were apparently “lots and lots and lots of flowers”).
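The Toronto system uses neural networks trained on those 100 hours of music. As a much simpler stand-in for the underlying idea of learning melodic statistics from a corpus, a first-order Markov model can be sketched like this (note numbers are MIDI pitches, and the training data is invented; this is not the researchers’ actual method):

```python
import random
from collections import defaultdict

def learn_transitions(melodies):
    """Count, for each note, which notes tend to follow it in the
    training melodies (a toy stand-in for the neural model)."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """Sample a melody by repeatedly picking a learned successor,
    stopping early if a note has no known successor."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = table.get(melody[-1])
        if not options:
            break
        melody.append(rng.choice(options))
    return melody
```

Every consecutive pair the generator emits was seen in training, which is exactly why such simple models sound statistically plausible yet oddly aimless, much like the Christmas song itself.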
All jokes aside, while this may not be the most impressive instance of AI creativity on record, it is the most festive. Previous ventures in musical AI have brought us the Beatles inspired “Daddy’s Car,” as well as a musical composition in the style of Bach.
Perhaps there will be a future for non-human songwriters. Maybe this technology could even make its way onto television in the vein of Watson’s tenure on “Jeopardy!” According to the Guardian, Raquel Urtasun, an associate professor in machine learning and computer vision at the University of Toronto’s computer science lab, said “more advanced versions of the Toronto lab’s program might one day serve as a virtual coach for wannabe stars on X Factor, The Voice, and America’s Got Talent.”
This makes one of the lines in the song strangely prescient, “I’ve always been there for the rest of our lives.” We will see in time, but until then, let’s keep an eye on the pod bay doors.
The Atacama Large Millimeter/submillimeter Array (ALMA) in Chile just got a major upgrade. The space exploration tool has new Band 5 receivers installed that greatly expand the range of wavelengths the array can detect.
The most exciting aspect of the new receivers is that they make it much easier to find water on celestial bodies, offering a clearer treasure map on the long road to finding alien life. According to European ALMA Programme Scientist Leonardo Testi, “The new receivers will make it much easier to detect water, a prerequisite for life as we know it, in our Solar System and in more distant regions of our galaxy and beyond. They will also allow ALMA to search for ionized carbon in the primordial Universe.”
The upgrade was tested by observing objects such as colliding galaxies and a massive red star nearing the end of its life. Astronomers are excited at the new prospects afforded by this increased power. ESO team member Robert Laing discussed these possibilities: “In the future, the high sensitivity and angular resolution of the full ALMA array will allow us to make detailed studies of water in a wide range of objects including forming and evolved stars, the interstellar medium and regions close to supermassive black holes.”
Where There’s Water…
The search for alien life is closely tied to the search for extraterrestrial water. Every living organism on Earth requires water for life, so it is very possible for the same to be true of alien worlds. It may turn out that someday this hard and fast rule isn’t 100 percent accurate. However, at this point, it serves as an understandable way to focus our limited resources.
Planets with water will also play an important role in the space mining industry. Mining missions could limit the supplies they carry by using the resources of space to their advantage, and water is one of the most important. The ability to harvest water from the environment will allow for longer missions and free up storage space during transit.
A small study was recently conducted to get an idea for how Americans feel about a universal basic income (UBI), a no-strings-attached, unconditional income given to individuals. The survey of 500 people was conducted by 50+1 Strategies and David Binder Research on behalf of the Economic Security Project. The results of the survey showed that 46 percent of respondents were in favor of the idea, along with 35 percent opposed and 19 percent undecided.
There has been a strong uptick in conversations about UBI recently, especially in the wake of some high-profile tests of the program being announced in Canada, Kenya, and Finland, among other places. Some experts even see the shift to UBI as an inevitability. “There is a pretty good chance we end up with a universal basic income, or something like that, due to automation,” tech industry titan Elon Musk has said. “Yeah, I am not sure what else one would do. I think that is what would happen.”
As robots and artificial intelligence (AI) get better at performing tasks formerly reserved for humans, the threat of mass job loss grows. Outsourcing jobs overseas was a major talking point in the past presidential election in the United States, and the current president-elect is promising to bring those companies back to the country. However, with increasing automation, those companies may not have as many jobs to offer.
The debate has even gotten to the point where the sitting president of the United States was asked in an interview about the potential for a UBI system. “Whether a universal income is the right model — is it gonna be accepted by a broad base of people? — that’s a debate that we’ll be having over the next 10 or 20 years,” Obama noted.
The Devil in the Details
The beginnings of that debate are already forming, and breaking down the numbers generated by the survey unveils some very interesting thoughts surrounding the issue.
Once respondents were given some specifics on the policy, support eroded. The aspects that were least favorable included not having the money tied to work, full autonomy with regard to how the money is to be spent, and that the funds for the program would come from tax revenue.
Some critics of UBI argue that it would incite laziness. As Misha Chellam of the startup-training company Tradecraft explains to CNBC, “The laziness argument is one that has hamstrung welfare and safety net efforts for decades.” However, empirical evidence that welfare programs promote laziness simply does not exist.
Further support for UBI could hinge on how it is presented. The survey found that referring to the program as “social security for all” was received more favorably than when it was called universal basic income. However it is framed, UBI must be carefully considered as a realistic way to deal with the massive unemployment on the horizon due to the rise of automation.
France’s national postal service, Le Groupe La Poste, has begun a testing program to bring delivery by autonomous drone to the country. This announcement comes on the heels of Amazon successfully completing its first drone delivery in the U.K.
A subsidiary of the postal service, DPDgroup, has been working on making drone delivery a reality in the European nation since 2014. The group partnered with French drone-making company Atechsys to provide the drones, which can fly up to 19 km (12 miles) at speeds of up to nearly 31 km/h (19 mph) while carrying a parcel weighing up to 3 kg (6.6 lbs).
The drone test route covers about 15 km (9.3 miles), which is the distance between two depots in the southeast of the country, Saint-Maximin-La-Sainte-Beaume and Pourrières in the Provence region of France. The packages are dropped off and picked up from these designated depots.
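Working from the published specs, a quick back-of-envelope check confirms the 15 km route fits within the drone’s 19 km range, with a one-way flight time of roughly half an hour at top speed. This is a rough sketch only; real flight time depends on payload, wind, and battery reserves, none of which the article specifies.

```python
def leg_feasible(route_km, range_km=19.0, speed_kmh=31.0):
    """Check a delivery leg against the drone's published maximum
    range, and estimate one-way flight time in minutes at top speed.
    A rough estimate: wind, payload, and reserves are ignored."""
    feasible = route_km <= range_km
    minutes = route_km / speed_kmh * 60
    return feasible, round(minutes, 1)
```

For the Saint-Maximin-La-Sainte-Beaume to Pourrières leg, `leg_feasible(15)` reports the route is within range at about 29 minutes of flying time.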
A Looming Problem
Any advancement in autonomous robotic technology always brings about a renewed debate surrounding jobs and unemployment. Experts say that robots will continue to replace humans, with some estimating that anywhere from 47 to 81 percent of jobs could be eliminated by technology.
Low-skill jobs such as manufacturing are no longer the only kind being discussed as potentially on the chopping block. IBM’s Watson AI was just as effective at recommending cancer treatments as doctors, and not even editors are safe, as Watson was also able to edit an entire magazine on its own. It’s also said that AI can replace 80 percent of IT workers.
Universal basic income is being discussed as a way to ease the blow that automation will take on the workforce, but even if that’s not the ultimate solution, something will need to be done soon to account for the loss of jobs due to technology.
At the beginning of this year, Facebook CEO Mark Zuckerberg challenged himself to build an AI system to run his home and help him with work. He named the project Jarvis, after the fictional AI system built by Marvel’s Iron Man. This Jarvis may not have the wit or the Mind Stone of his namesake, but what Zuckerberg has managed to build is pretty impressive.
Zuckerberg recently explained the process of building Jarvis, while also being quite candid about its strengths and weaknesses. The system helps him and his wife Priscilla around the home by turning lights on and off, letting them know where their dog Beast is, or alerting them if their daughter is stirring in her room.
Not only did Zuckerberg have to come up with ways to connect all of these devices, but he also had to do so safely. As Fast Company explains, the home network is essentially an extension of Facebook’s infrastructure, which “imposed limits on what he could control. Internet-connected fridges, for example, don’t come with Facebook security certificates. That’s not a problem for most people, but most people aren’t Mark Zuckerberg. Keeping his security at home airtight was a primary concern.”
One of the more interesting aspects of his project was how Zuckerberg chose to interact with Jarvis.
One thing that surprised me about my communication with Jarvis is that when I have the choice of either speaking or texting, I text much more than I would have expected. Similarly, when Jarvis communicates with me, I’d much rather receive that over text message than voice. That’s because voice can be disruptive and text gives you more control of when you want to look at it. Even when I speak to Jarvis, if I’m using my phone, I often prefer it to text or display its response.
He says there is a trend toward text communication over voice, and that it will play an important role in the future of AI interaction. However, he doesn’t downplay the significance of voice recognition capabilities: “on a psychologic level, once you can speak to a system, you attribute more emotional depth to it than a computer you might interact with using text or a graphic interface.”
Learning How to Code Learning
Throughout the project, Zuckerberg was curious to see the areas in which artificial intelligence (AI) has advanced beyond what we thought possible, as well as how we’re still short of the mark.
AI is closer to being able to do more powerful things than most people expect — driving cars, curing diseases, discovering planets, understanding media. Those will each have a great impact on the world, but we’re still figuring out what real intelligence is.
Therein lies the crux of Zuckerberg’s year-long AI odyssey. It takes a lot of work to program AI to complete specific tasks, and while impressive, it’s not exactly learning on its own.
We are still far off from understanding how learning works. Everything I did this year – natural language, face recognition, speech recognition and so on – are all variants of the same fundamental pattern recognition techniques. We know how to show a computer many examples of something so it can recognize it accurately, but we still do not know how to take an idea from one domain and apply it to something completely different.
Zuckerberg is rightfully proud of his achievements over the course of this year. He recognizes that what he can create in 150 hours of free time over a year is nothing compared to what his company’s top engineers could create in the time available to them. Since Jarvis is so entwined with his home, he has no plans to release the code as of yet, but someday the tech could serve as a base for future products: “If I ever build a layer that abstracts more home automation functionality, I may release that. Or, of course, that could be a great foundation to build a new product.”
It’s safe to say that Zuckerberg was successful in this challenge, and we look forward to learning what goals he has in store for 2017.
Google has been working on a self-driving car project for the past seven years, and just last week, it set up a separate company under the Alphabet umbrella to move the project forward. That company, now called Waymo, takes its mission of providing safe, fully autonomous vehicles to a new level with the addition of 100 Chrysler Pacifica hybrid minivans to its test fleet.
The minivans were produced by Fiat Chrysler specifically for Waymo, in close tandem with the Alphabet company. This means the self-driving technologies in these Pacifica minivans were not add-ons — Waymo’s onboard computer power, sensors, and telematics are as much a part of the Pacifica as its electrical, powertrain, and structural systems (including the chassis), all modified and optimized for Waymo’s autonomous driving tech.
Everything took only six months to design and build, according to Chrysler, with engineering teams from both companies working hand in hand in the vehicle’s development. The Pacifica will join Waymo’s fleet of self-driving cars by early next year for more testing on public roads.
Although Google’s self-driving car project had its share of bumps along the road, Waymo presents its autonomous vehicle tech as “a safer driver that is always alert and never distracted.” Indeed, with 95 percent of all traffic fatalities the result of human error, self-driving cars are expected to save millions of lives, so the more variety we get in the types of self-driving vehicles available, the more likely we are to eventually have every car on the road be capable of autonomy.
Baroque composer Johann Sebastian Bach is known to have written many chorale cantatas, polyphonic hymns based on Lutheran texts. Each is fairly simple, featuring a single melody accompanied by three harmonies, so Gaetan Hadjeres and Francois Pachet from Sony Computer Science Laboratories in Paris thought it would be interesting to see if a machine could create chorale cantatas indistinguishable from Bach’s.
DeepBach was trained using a data set of 352 Bach cantatas that were transposed to other keys within a predefined vocal range to create a total of 2,503 chorales. Of these, 80 percent were used to train DeepBach and the rest to validate it. After training, DeepBach was able to produce its own harmonies for alto, tenor, and bass in the style of Bach when given a soprano melody.
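The data-preparation step described above, transposing each chorale into every key that keeps it inside a vocal range and then holding out 20 percent for validation, can be sketched roughly as follows. The pitch representation (lists of MIDI note numbers) and the range limits are simplified assumptions for illustration, not the researchers’ actual code:

```python
import random

def transpose(chorale, semitones):
    """Shift every pitch in a chorale (a list of MIDI note numbers)."""
    return [note + semitones for note in chorale]

def augment(chorales, low=55, high=81):
    """Transpose each chorale to every key that keeps all notes inside
    an allowed pitch range (the range bounds here are illustrative)."""
    augmented = []
    for chorale in chorales:
        for shift in range(-11, 12):
            shifted = transpose(chorale, shift)
            if min(shifted) >= low and max(shifted) <= high:
                augmented.append(shifted)
    return augmented

def split(dataset, train_frac=0.8, seed=0):
    """Shuffle and split the augmented corpus 80/20 into training and
    validation sets, mirroring the split described for DeepBach."""
    data = dataset[:]
    random.Random(seed).shuffle(data)
    cut = int(len(data) * train_frac)
    return data[:cut], data[cut:]
```

Transposition is what multiplies 352 source cantatas into a corpus of 2,503 chorales: the music is unchanged up to key, but the model sees each pattern at many absolute pitches.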
Most importantly, DeepBach’s compositions even fooled people. The team asked more than 1,600 people — a fourth of whom were professional musicians or music students — to listen to two distinct harmonies of the same melody. The results showed that more than half the listeners attributed DeepBach-generated harmonies to Bach, while music by Bach was correctly identified by 75 percent of the listeners. “We consider this to be a good score knowing the complexity of Bach’s compositions,” said Hadjeres and Pachet.
What You Can Do, AI Can Do
Artificial intelligence (AI) can get into just about anything, but it’s particularly well suited to anything mathematical — if there’s math in it, chances are you can develop an algorithm for it. Music is mathematical, and composers like Bach often wrote music that follows a defined, step-like flow that is almost algorithmic.
The video below might look like a movie trailer for a film like Her. Well, it’s not. This guy really has a hologram for a friend that calls him “darling” and checks up on him like an overprotective mother.
The hologram is Azuma Hikari, a 20-year-old, 58-cm-tall character created by Japanese tech company Gatebox. The functions of this virtual home robot are a little more varied than those of Amazon’s Alexa. Azuma can send you text messages throughout the day asking you to come home early (who needs a paycheck, right?) and remind you to bring an umbrella with you (thanks, mom).
According to her profile, “She is a comforting character that is great to those living alone. She will always do all she can just for the owner.” How thoughtful and sweet. Except she comes with a $2,600 price tag (and her US version will be sold for $3,000). So, caring for her “owner” is the least she can do, right?
The Future Virtual Home-Buddy
The hologram bot is based on a Japanese anime character, but she isn’t going to be the only character for Gatebox. From the looks of the website, the company is going to make other characters available, presumably also from anime.
Azuma’s hologram appears inside the main tube body of the Gatebox, projected at a 1280 x 720 resolution. The hardware itself weighs 5 kg and has stereo speakers, a microphone, and a camera mounted on top. Azuma is built with a machine learning algorithm that helps her recognize her “master’s” voice, learn his sleeping habits, and send him messages through Gatebox’s native chat app.
Perhaps Azuma will work for some, but she might not cut it for others — going home to a cartoonish AI hologram could take some getting used to. Still, Gatebox is certainly trying to disrupt the virtual assistant space, even if we seem to be far from the holographic projection we’re really looking for.
For now, let’s just put Azuma with Alexa, Siri, Cortana, and Google Home — a strangely almost-all-female list of virtual assistants.
Uber’s San Francisco trial run of its self-driving service last Tuesday is catching people’s attention. Unfortunately, it might not be exactly the kind of hype Uber hoped for. As previously noted, the ride-hailing company’s autonomous vehicle test drive in its hometown was given a red light by California’s Department of Motor Vehicles (DMV).
To make things even more interesting, Uber’s self-driving cars were caught running literal red lights on two separate occasions — and in both instances, the vehicle appeared to be a Volvo XC90, the model launched the same day as Uber’s test run.
Here’s the video of the erring Uber captured by Charles Rotter, operations manager at traditional cab company Luxor.
According to an Uber spokesperson, “These incidents were due to human error. This is why we believe so much in making the roads safer by building self-driving Ubers. The drivers involved have been suspended while we continue to investigate.”
The incident was also witnessed by San Francisco writer and producer Annie Gaus as she was traveling in a (human-piloted) Lyft. She sounded off on Twitter:
Just passed a ‘self-driving’ Uber that lurched into the intersection on Van Ness, on a red, nearly hitting my Lyft.
As there are no current laws governing self-driving cars in the San Francisco area, there is little consensus on what could or would be done. Asked about the incident, police officer Giselle Talkoff stated, “First comes technology, then comes policy. It’s going to be a matter of setting some precedents,” adding, “The companies that are putting these vehicles on the road should have their vehicles operate with due regard to the rules of the road.”
Electric cars help to save the environment. Still, Swiss automobile manufacturer Rinspeed thinks they could do more, say by having an integrated garden plot right behind the windshield.
Introducing Oasis, Rinspeed’s concept car of the future.
Okay, maybe environmentally friendly features aren’t enough to call it the car of the future. Rinspeed agrees, which is why the Oasis offers so much more. It’s also autonomous, thanks to intelligent traffic management solutions from Siemens, and it uses NXP sensor technology to capture a pinpoint-precise 360° view of its surroundings. Vehicle-to-vehicle communication also allows it to look around corners and obstacles, calculating the safest driving maneuvers.
Equipped with Harman LIVS (life-enhancing intelligent vehicle solutions) technologies, the Oasis offers various social media capabilities spanning its entire windshield. Using the slightly curved 5K widescreen display, users can interact with the Harman system via voice or gesture control.
The Oasis also integrates augmented and virtual reality (AR/VR) applications in its navigation system, and is equipped with its own virtual assistant. Plus, it features rental, vehicle-share, and ride-sharing options — turning every ride into an authentic social experience.
This car combines everything technology can offer to improve the driving or riding experience. Indeed, Rinspeed might’ve just given us a peek into the future of automobile technology.
With the considerably successful run of its self-driving taxis in Pittsburgh last September, Uber began preparing to make its autonomous service available in its hometown of San Francisco. After lots of preparation and hype, the ride-hailing company was finally able to launch the service last Tuesday, making San Francisco the second city in the world where Uber offers autonomous cars for public transportation.
Uber wanted to test just how well its autonomous vehicles could handle roads different from those in Pittsburgh. “We drove in the rain and other kinds of weather, and we’ve added lane-changing capabilities since we started in September,” said Anthony Levandowski, Uber’s VP of self-driving technology. “Now we want to see how we operate in this new environment, especially with the giant hills that San Francisco has to offer.” The day also marked the launch of the XC90, Volvo’s lidar-equipped self-driving SUV.
Uber’s Regulatory Hassles
Unfortunately, the success was short lived.
The next day, California government officials at the Department of Motor Vehicles (DMV) put the brakes on the project, saying Uber didn’t have the necessary permits to test its autonomous vehicle service.
“It is illegal for the company to operate its self-driving vehicles on public roads until it receives an autonomous vehicle testing permit,” writes Brian G. Soublet, deputy director of the DMV in California, in a letter to Uber. “Any action by Uber to continue the operation of vehicles equipped with autonomous technology on public streets in California must cease until Uber complies.”
Uber believes it didn’t violate any rules. “[T]he rules apply to cars that can drive without someone controlling or monitoring them,” writes Levandowski. “For us, it’s still early days and our cars are not yet ready to drive without a person monitoring them.”
This all comes after one of Uber’s Volvo XC90s ran a red light shortly after the service was implemented in San Francisco, and was caught on camera. The episode raised questions about the safety of Uber’s self-driving technology.
“This incident was due to human error,” Uber said in a statement. “This vehicle was not part of the pilot and was not carrying customers. The driver involved has been suspended while we continue to investigate.”
In 1991, Microsoft founder Bill Gates set up his company’s specialized research arm. The aptly named Microsoft Research was to be the vehicle to fulfill Gates’ “vision that computers would one day see, hear and understand human beings.” Now, after years of work by the best and brightest minds in the company, that vision is closer to becoming a reality.
Microsoft Research has become one of the most accomplished labs in artificial intelligence (AI) development. The computer vision algorithms it has pioneered have achieved substantial success. “Across several industry benchmarks, our computer vision algorithms have surpassed others in the industry — even humans,” said Harry Shum, Microsoft’s Artificial Intelligence (AI) and Research group EVP.
These developments in computer vision allow for AI that can see, listen, and understand better than ever before. Microsoft’s programs include Microsoft Cognitive Services, which powers Uber; a Skype translator algorithm that works in nine languages; and Microsoft Translator, equipped with intelligent language and speech recognition capabilities that allow for simultaneous translation between groups speaking several languages.
Microsoft’s vision of the future includes making AI available for all. “We have always sought to democratize technology,” said Shum. “With AI, we will do that in two ways — by infusing into products such as Office 365, while creating a platform on which others can build and innovate.” Its collaboration with OpenAI clearly shows this.
Other AI technologies in continuous development include the new Zo chatbot, the Cortana Devices SDK and Skills Kit, and a whole host of other intelligent tools. Indeed, Microsoft is actively exploring the future of computing. “We will push the boundaries and we will learn,” Shum said. “We’ll share our learnings with the industry, with you — so we can democratize AI and hopefully accelerate its benefits for our society.”
A new rule from the US auto-safety regulator, the National Highway Traffic Safety Administration (NHTSA), may accelerate the mass adoption of self-driving consumer vehicles. The rule requires vehicle-to-vehicle (V2V) connectivity, which uses short-range radio signals to let cars share information such as “each other’s speed, location, direction, and acceleration or braking, within a radius of about 300 meters.”
There are also plans in the works to allow vehicles to communicate with stationary objects such as stoplights or even physical barriers. These upgrades would not only benefit autonomous vehicles, but also serve to warn traditional drivers of dangers they may not notice, possibly even taking some control to prevent dangerous conditions.
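A toy sketch can make the broadcast idea concrete: each car periodically sends its state, and a receiver only reacts to messages from cars within the roughly 300-meter radio radius. The field names below are illustrative inventions, not a real V2V wire format, and the range check uses a simple flat-earth approximation that is fine at these distances.

```python
import math

def make_v2v_message(vehicle_id, lat, lon, speed_mps, heading_deg, accel_mps2):
    """Bundle the state fields a V2V broadcast carries: speed,
    location, direction, and acceleration (illustrative field names,
    not an actual V2V message format)."""
    return {
        "id": vehicle_id,
        "lat": lat,
        "lon": lon,
        "speed": speed_mps,
        "heading": heading_deg,
        "accel": accel_mps2,
    }

def in_range(msg_a, msg_b, radius_m=300.0):
    """Rough check that two broadcasters are within the ~300 m V2V
    radio radius, using an equirectangular distance approximation."""
    earth_r = 6371000.0
    dlat = math.radians(msg_b["lat"] - msg_a["lat"])
    dlon = math.radians(msg_b["lon"] - msg_a["lon"])
    mean_lat = math.radians((msg_a["lat"] + msg_b["lat"]) / 2)
    dx = earth_r * dlon * math.cos(mean_lat)
    dy = earth_r * dlat
    return math.hypot(dx, dy) <= radius_m
```

With state like this arriving ten times a second from every nearby car, a vehicle (or a warning system for a human driver) can anticipate hard braking or cross-traffic it cannot yet see.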
U.S. Transportation Secretary Anthony Foxx stated, “Once deployed, V2V will provide 360-degree situational awareness on the road and will help us enhance vehicle safety.” The agency is optimistic that the integration of these technologies can help to reduce the severity of up to 80 percent of collisions not involving an impaired driver.
Should the incoming administration proceed, the rule could take up to a year to implement. Half of all cars manufactured would be required to comply within two years, and then meet full compliance within four.
Digital avatars have been around for a while now, but earlier versions tended to capture the spirit of a user’s likeness rather than how they actually looked. A new San Francisco-based artificial intelligence (AI) startup wants to change that by making your digital avatar three-dimensional and strikingly lifelike.
Loom.ai’s tech uses basic digital photos to automatically generate a detailed 3D avatar that could be used in text messages, social media, virtual reality (VR) games, and even e-commerce. It relies on machine learning to capture and recreate the subtleties of a human face, and it gets better at inferring facial expressions as it trains on more data about facial movements.
Founded by visual effects veterans Mahesh Ramasubramanian (who previously worked as a visual effects supervisor at DreamWorks on projects like Shrek and Madagascar 3) and Kiran Bhat (formerly in facial animation at LucasFilm, with movies like The Avengers and Pirates of the Caribbean under his belt), Loom.ai emerged from stealth this week with $1.3 million in seed funding. “We believe that you need to bring the user into the experience, which means figuring out a way to digitize the everyday user just like we used to digitize the movie stars,” said Bhat.
It’s not yet clear where this tech will end up, but Loom.ai is keen on licensing it to third-party platforms. In any case, with mass adoption, Loom.ai’s 3D avatar tech has the potential to reshape our virtual communication experience.
Films and shows like Terminator and Westworld contribute to an attitude that is, more or less, unfriendly toward the development of artificial intelligence (AI). News about automation upending jobs in the near future also paints a not-so-favorable image of autonomous systems (AS).
Do these fears have legitimate basis? Perhaps, but whether or not they do, the Institute of Electrical and Electronics Engineers (IEEE) is convinced that AI/AS development should be “aligned to humans in terms of our moral values and ethical principles.”
They present their ideas in a 136-page framework document called Ethically Aligned Design, which the institute hopes will guide the AI/AS industry in building benevolent and beneficial AI and AS. The framework is based on the input of more than 100 thought leaders in the fields of AI, law and ethics, philosophy, and policy, with backgrounds spanning academia, science, government, and the corporate sector.
“By providing technologists with peer-driven, practical recommendations for creating ethically aligned autonomous and intelligent products, services, and systems, we can move beyond the fears associated with these technologies and bring valued benefits to humanity today and for the future,” according to a statement by Konstantinos Karachalios, IEEE Standards Association managing director.
Sound Policies, Good Research
IEEE opened the discussion to suggestions from people in the industry, engineers and developers alike. A feedback mechanism is also in place, making this a dialogue-based policy-making platform.
The proposals made in the framework include methodologies to guide ethical research and design, as well as warnings to implement black-box services and components only with utmost caution and ethical care.
Of course, what IEEE wishes to achieve raises several questions – most notably one that we’ve covered previously: can morality be programmed? Is it actually possible for AI/AS technologists to align their creations “with the values of its users and society,” as the IEEE document prescribes?
To this end, the IEEE says standards must be set up to make sure users are not harmed by autonomous outcomes, by providing “oversight of the manufacturing process of intelligent and autonomous technologies.” It warns that, as AI systems become more sophisticated, “unanticipated or unintended behavior becomes increasingly dangerous.”
As such, “[r]esearchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly autonomous and capable AI systems.”
Overall, the IEEE has definitely taken a much needed first step in humanizing AI/AS systems.
The Senate’s Committee on Commerce, Science, and Transportation, chaired by Texas Senator Ted Cruz (R), convened November 30 to discuss the state of AI research and development and its policy implications for commerce. According to experts present at the hearing, the government isn’t doing much to provide guidelines and direction for AI research.
The experts, including Microsoft Research managing director Eric Horvitz, believe the government is in a unique position to shape the future of AI, especially since AI is still in its developmental stages. Horvitz said AI innovation can help in areas such as homelessness and addiction, where industry has invested little so far and where government support could drive progress.
Currently, AI research is largely concentrated among a few industry giants, like Google and Facebook.
Policy is Key
The hearing itself is a good sign. It provided an avenue for the government to figure out just how much AI research is being done, and which areas could be covered by relevant applications. As OpenAI’s cofounder and CTO Greg Brockman said, there is a “real hunger for basic research” in the industry, as technologists recognize that the most important advances are still ahead of them.
Of course, the obvious concern over automation was addressed at the hearing. Both Brockman and Horvitz said that long-term issues, such as job displacement, should be addressed by investing in more focused research on the matter.
But it’s more than just that. In the current state of AI R&D, the government is in a position to push for developing applications that address more immediate issues. Andrew Moore, dean of the school of computer science at Carnegie Mellon University, gave as an example the experience of a veteran seeking information on healthcare options online. It’s a process that the right AI application could make more efficient.
Many articles that talk about automation and its resulting job displacement may sound alarmist. However, studies have shown that automation will disrupt labor, especially industrial work.
A recent study by analysts Michael Parker and Alberto Moel at global investment research and management firm Bernstein argues that automation is doing much more than displacing jobs. According to Parker and Moel, the rise of automation, beginning in the industrial and manufacturing sectors, is changing one of the most basic assumptions in modern economics.
Gone are the days when specialization, as the father of modern economics Adam Smith argued, was the way to optimize a country’s economy. Now, companies will simply turn to automation to resolve the mismatch between the high cost of labor and the lower cost of manufacturing.
This points to a disturbing realization: bringing jobs back to the US might not work. Simply put, there won’t be any jobs to bring back. Companies, instead of choosing to spend on labor, will just opt for automation. All signs point to this, with global spending on robots climbing year after year. Job displacement seems inevitable.
According to Bernstein: “The ability of new emerging markets to grab these jobs and the export activity that comes with them will be eroded [and] … will militate in favor of automation and staying in China.”
You can already see it happening: Walmart is using drones for warehouse delivery, the Department of Defense is using them for surveillance, and Amazon Go is doing away with store personnel. Other sources suggest the trend won’t be unique to manufacturing jobs; as many as 80 percent of jobs in the IT industry could be replaced by artificial intelligence (AI). Then there’s also the recent news that IBM’s Watson can treat cancer patients as well as a human doctor can.
The future of labor has never been more uncertain. Strategies for helping those displaced workers are starting to be more widely debated. Programs for a Universal Basic Income (UBI) are even starting pilot tests in multiple countries around the world.
Last Monday, Amazon announced its new concept store, Amazon Go. The store is powered by sensors, deep-learning artificial intelligence (AI), and computer vision, letting customers browse the store, take what they want off the shelves, and literally go — no need to queue to pay for the items. The only added step to the shopping experience is scanning the free Amazon Go app on their phones once, when they enter the store. Their Amazon account is automatically charged for their purchases when they leave.
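The checkout-free flow just described — scan in, take items, walk out, get charged — can be modeled as a tiny virtual-cart state machine. This is purely an illustrative sketch with made-up class and method names; Amazon’s actual system infers the cart from cameras and shelf sensors rather than from explicit calls:

```python
class JustWalkOutStore:
    """Toy model of the checkout-free flow: sensors add or remove
    items from a virtual cart, and the shopper's account is charged
    only on exit. Illustrative only; not Amazon's actual system."""

    def __init__(self, prices):
        self.prices = prices        # item name -> price in dollars
        self.carts = {}             # shopper_id -> list of items held

    def enter(self, shopper_id):
        """App scanned at the entrance: open an empty virtual cart."""
        self.carts[shopper_id] = []

    def take(self, shopper_id, item):
        """Shelf sensors detect an item being picked up."""
        self.carts[shopper_id].append(item)

    def put_back(self, shopper_id, item):
        """Sensors detect an item being returned to the shelf."""
        self.carts[shopper_id].remove(item)

    def exit(self, shopper_id):
        """No queue, no cashier: charge the linked account and
        return the total."""
        items = self.carts.pop(shopper_id)
        return sum(self.prices[i] for i in items)
```

Note that the charge happens as a side effect of leaving, which is exactly what removes the checkout line from the shopper’s experience.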
The store concept eliminates labor costs incurred by cashiers and is ideal not only for Amazon, but for customers as well. No lines or checkouts? From a shopper’s point of view, it’s the ideal shopping experience, offering a level of convenience that could be game-changing for the retail industry. However, Amazon Go also raises concerns about potential ramifications on employment and the economy.
“With Amazon, it’s not just about reducing labor costs at all — they’ve come up with something disruptive,” says Martin Ford, author of “Rise of the Robots: Technology and the Threat of a Jobless Future,” in an interview with CNBC.
According to Ford, however, the implementation of automation technology is inevitable because it has obvious advantages for both consumers and retailers. “I don’t think we can stop it,” he says. “It’s a part of capitalism, that there’s going to be this continuous drive for more efficiency.”
While it’s true that the negative impact of a shift toward automation in this sector could be devastating, studies of potential solutions, such as the implementation of a universal basic income, are already in the works.
Last month, Michigan passed a bill that would allow trucks to travel in electronically synchronized groups, a practice known as “platooning.” The bill was signed by Michigan Governor Rick Snyder at the Automotive Hall of Fame last Friday. It also legalizes testing self-driving cars without a supervising human in the driver’s seat, and without human driving interfaces such as a steering wheel and brake pedal. Ride-sharing services are also allowed to deploy self-driving cars to their customers once the vehicles are tested and certified.
This makes Michigan the first state to welcome self-driving cars onto its streets with laws that enable their testing, use, and eventual sale upon certification.
Michigan has taken it upon itself to cradle the advancement of the self-driving vehicle industry. The state is currently developing a 335-acre self-driving vehicle research and testing site, and a 32-acre site called Mcity, on the North Campus of the University of Michigan, has already been dedicated as a test facility for running automated vehicles. Unsurprisingly, several automated-car makers have chosen to set up camp in the state.
Tesla is about to close out its best-ever year for delivering cars.
By the time 2016 is in the books, around 80,000 Tesla vehicles should be in the hands of customers.
That’s about 30,000 more deliveries than last year. But it’s unlikely that CEO Elon Musk and his team will be satisfied.
For one thing, that’s 10,000 fewer deliveries than the company guided to early in 2016. And for another, Tesla has to focus and focus hard on its next milestone: the on-time launch of the Model 3 mass-market sedan.
Last week, Morgan Stanley lead auto analyst Adam Jonas — a reliable Tesla bull and a pretty out-there theorist on Wall Street when it comes to the future of transportation — published a research note in which he zeroed in on the Model 3 and its importance to the automaker.
“In our view, the Model 3 is a potential funding strategy for Tesla’s bigger mission to accelerate the development of a highly safe and efficient transport utility,” he wrote. “However, 2017 is all about the later stages of development (and the possible commercial launch) of the Model 3.”
No Model 3 until 2018?
But then Jonas reiterated what could be considered a controversial prediction.
“We do not expect the Model 3 to be launched in 2017,” he wrote. [His emphasis.]
And he added this, which I’m excerpting at length:
While we cannot rule it out, we do not adopt as our base case a scenario in which Model 3 deliveries begin in 2017. We recognize that Tesla management has targeted a [second-half] launch date and that they will make every effort to satisfy high levels of preliminary demand and fill orders for the product as soon as possible. However, our base case is for a launch in late 2018. We have taken this conservative approach to allow for the probability that Tesla will choose to prioritize the quality, cost, performance and lifesaving technology of the vehicle. While Tesla still adopts a high level of vertical integration, we expect the Model 3 to rely even more extensively on 3rd party suppliers than the Model S, potentially increasing the scope of supply-related factors outside of the company’s control.
Jonas has hit on a couple of critical issues here.
First, Tesla won’t want to launch a Model 3 that isn’t, at some level, groundbreaking. It needs to turn heads and blow minds.
But that doesn’t mean it has to be another Model X, a vehicle that was launched in late 2015 after delays of three years and that was so complicated in its design that Musk said the automaker had been hubristic in designing and engineering it. The Model 3 will be a mass-market sedan, at least at first (other types of vehicle will follow, chiefly a compact SUV). A major carmaker could get production rolling on such a car without breaking a sweat.
Second, I’m not sure Jonas is right about the supply chain questions. Yes, Tesla has been trying to deal with an assortment of supplier issues, and one of its solutions is to bring production of some components in-house. But again, the Model 3 doesn’t have to be a difficult build. With a factory in Northern California, Tesla sits a bit outside the normal Midwest and southern US supplier grid. But that really just means adding some extra transport time for parts.
I reached out to Tesla for comment on Jonas’ qualified skepticism about a late-2017 Model 3 launch, but I didn’t hear back immediately.
Nah, the Model 3 will be on time
The launch could easily slip into early 2018, but I don’t think that will happen. Here’s why.
The Model 3 was unveiled in March of 2016 (I was at the unveiling). These were prototype cars, but they were operational — they weren’t auto-show-floor concepts that you could only look at, not touch or drive.
Assuming that the basic design and engineering of the vehicle is set, then building it is simply a matter of adding the manufacturing capacity in Fremont, where the factory is located and where the Model S sedan and Model X SUV are assembled; setting up the suppliers and getting the parts in-bound; squaring away the labor and “training” the industrial robots that will do stuff like body welds; and having the proper stamping and tooling in place at the plant to put the cars together.
None of this is rocket science, and Tesla has hired an expert production executive from Audi to help make this transition from assembling around 100,000 vehicles annually to 500,000 by 2018. All completely doable.
We’ll be looking for some markers to tell us how well Tesla is doing at having Model 3’s ready to roll by the end of 2017 — markers drawn from our experience with the Model X.
By spring 2017, we should start to see photos and videos — created both by Tesla and by amateurs — of the Model 3 being tested in the wild.
By summer, we should begin seeing vehicles that are closer to the production vehicle in appearance on roads around Fremont. Around the time Tesla reports second-quarter earnings, Musk should have some comments about what it’s like to drive the Model 3, as he’s usually the company’s Number One tester. (Look for a lot of color on Autopilot features from Musk, by the way, once he does start talking.)
These will all be signs that the Model 3 is on schedule. Closer to launch, the automaker will have to hand vehicles over to the government for crash-testing, so that will be an additional tell.
What Tesla is getting good at
Obviously, if Tesla misses some of these marks, then Jonas could have a point. But Tesla did manage to hit them with the Model X, once it got down to business on the vehicle, and the pre-launch process for that vehicle was much more fraught (the exotic, upswinging falcon-wing doors, for example, had to be re-engineered, and the rear seats were revamped at the eleventh hour). The car still showed up by the launch date.
The bottom line here is that Tesla is getting much better at passing major tests. It’s doing an effective job of selling, building, and delivering the Model S. After a long wait, the Model X is on track and, according to Tesla, making owners very happy. The Model 3 is, indeed, a Very Big Test. But Musk and his team have done the preparation.
They know how to build cars. They know how to build one on something resembling a traditional-automaker’s schedule. And they’ve made enough mistakes to know how to not make them again.
I don’t think the Model 3 will be early. But if Tesla doesn’t roll a couple off the assembly line in late 2017, I’ll be shocked.
A team of roboticists from UC Berkeley has just unveiled a small robot that is arguably the most agile one ever made.
Salto (short for saltatorial locomotion on terrain obstacles) weighs 100 grams (3.5 ounces), stands 26 centimeters (10.2 inches) tall, and can leap into the air, spring off a wall, or perform several vertical jumps in succession. It has a vertical jumping agility of 1.75 meters per second, higher than that of a bullfrog and just short of the galago, one of the most vertically agile creatures on Earth.
The galago, which can jump five times in four seconds, stores energy in its tendons while crouched, allowing it to jump to heights that muscles alone couldn’t reach. The same principle was applied to Salto, giving it its exceptional vertical agility: a motor drives a spring, loaded via a leg mechanism that mimics the galago’s crouch, which allows the robot to complete multiple quick jumps in a row.
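The “vertical jumping agility” figure quoted above is simply height gained per unit time over a burst of repeated jumps, which makes the comparison easy to reproduce. The 1.7-meter galago jump height below is an assumed illustrative value, not a number from the article:

```python
SALTO_AGILITY = 1.75  # meters per second, per the article

def vertical_jumping_agility(jump_height_m, jumps, seconds):
    """Height gained per unit time over a burst of repeated jumps:
    the metric used to compare Salto with animals."""
    return jump_height_m * jumps / seconds

# Galago: "five jumps in four seconds", with an assumed 1.7 m
# jump height (illustrative value only).
galago = vertical_jumping_agility(1.7, 5, 4.0)  # 2.125 m/s
```

Under that assumption the galago comes out just above Salto’s 1.75 m/s, matching the article’s “a little short of the galago.”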
“By combining biologically inspired design principles with improved engineering technology, matching the agile performance of animals may not be that far off,” Ronald Fearing, a professor of electrical engineering and computer sciences, said.
Uber’s current structure allows the ride-sharing company to be exempt from Americans with Disabilities Act (ADA) strictures, which promote accessibility for the 31 million people with mobility-related disabilities. Arguments have been made that the company is simply a platform promoting ride-sharing and not a transportation service, and therefore not subject to the law.
However, with the coming of its own autonomous fleet, that legal ground is about to get a lot more shaky.
Uber is not alone in skirting ADA requirements; basically the entire taxi industry takes advantage of what Bryan Casey, an independent researcher and student at Stanford Law School, calls “a ‘gaping loophole’ large enough for taxi companies to drive entire fleets through.” Two words in a section of the ADA, meant to allow companies to grow into the law, have let them get around it entirely. The section exempts all cabs except “new van[s] with a seating capacity of less than 8 passengers, including the driver.” Companies stuck to the letter of the law and avoided “new vans” by only purchasing used ones.
Uber cannot argue that its autonomous fleet will not be “new,” so any hopes of continuing to sidestep the ADA will hinge on how the law decides to define “van.”
The motivation to use this loophole likely doesn’t come from a place of malice. Retrofitting vehicles for wheelchair accessibility can cost upwards of $10,000.
Still, Uber and other companies looking to launch autonomous fleets are in a unique position to preemptively incorporate accessibility into their vehicles. “Embracing ADA liability is an opportunity to cement Uber’s position as a nationwide transportation titan,” says Casey. Uber would place itself well above other transportation services, which would have to spend hundreds of thousands, if not millions, of dollars to compete for that demographic’s business.
Uber taking leadership on this issue could set the tone for the future of autonomous ride-sharing. It certainly would be a move welcomed by the millions of people, both in the disabled and elderly communities, who would no longer be forced to struggle with limited public transport options.
HIRO, which stands for Human Intelligence Robotically Optimized, is an artificially intelligent program from IT and AI company Arago, modeled to imitate human memory and skill in problem solving.
HIRO’s latest accomplishment is its expert level playing of the civilization building video game Freeciv. The game, based on Sid Meier’s Civilization series, is hugely unpredictable with as many as 10 to the power of 15,000 possible game solutions.
It’s a feat that’s now within the reach of deep-learning AI. At TechCrunch Disrupt London 2016, Arago announced that HIRO can beat about 80 percent of human players at Freeciv.
AI bots capable of complex strategy rely on deep machine learning honed by training from developers. HIRO learned by playing against human gamers, and by repeatedly beating the game’s built-in computer opponent. Google DeepMind’s AlphaGo also made a lot of noise for beating the world champion at the ancient board game Go. Arago boasts that HIRO needed less training and less data to beat a more complex game.
HIRO sharpens its capabilities using these games. Arago offers HIRO to customers as an IT automation tool for business. Perfecting video games is merely training ground for HIRO and other deep-learning AI before they move on to functions like facial detection or cancer research. AI already plays a huge role in today’s society, optimizing systems in a wide range of industries.
The development of AI is rapid and is already changing the landscape of how we live. Let’s just hope that these machines don’t get any ideas for world domination from all these civilization-conquering video games.
Earlier this week, TIME Magazine announced its 2016 Person of the Year: President-elect Donald Trump. The recognition is given to the person who has affected the news the most—good or bad—embodying what was most important about 2016. Trump was in the running alongside Barack Obama, WikiLeaks founder Julian Assange, UK Independence Party leader Nigel Farage, Hillary Clinton, and Indian Prime Minister Narendra Modi.
Ahead of the announcement, TIME worked with IBM Watson and Opentopic to predict the winner of the award, with Prime Minister Narendra Modi projected to win handily over the competition. According to the poll, Modi was set to receive 18 percent of the votes, with Obama, Trump, and Assange getting just seven percent each.
Bringing these results to UNU, however, led to a different outcome. Based on 75 randomly selected individuals connected online by real-time swarm intelligence algorithms, Modi was actually the least likely to win TIME’s award. And as the system continued to whittle candidates down, it became obvious that the contest would come down to Clinton versus Trump, with the President-elect once again besting his former rival.
UNU is a new form of artificial intelligence (AI) that combines real-time human input with advanced algorithms to create the world’s first hive mind. It’s anchored in the science of swarm intelligence, which works by allowing groups of online users to think as a collective that expresses itself as a single entity.
On the premise that groups, similar to the ones we see in nature, can make better decisions than individuals could make alone, swarm intelligence banks on the shared knowledge, experience, and intuition of groups.
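One way to picture the “whittling down” a swarm performs is iterative elimination over ranked preferences: the option with the least support is dropped, and each participant’s backing flows to their next choice. The toy function below is only a stand-in for that idea; UNU’s real-time swarm algorithm is far more involved:

```python
from collections import Counter

def swarm_pick(rankings):
    """Repeatedly eliminate the option with the least first-choice
    support, letting each participant's backing flow to their next
    remaining choice, until one option is left. A toy stand-in for
    a swarm's whittling-down process, not UNU's actual algorithm."""
    remaining = {opt for r in rankings for opt in r}
    while len(remaining) > 1:
        # Tally each participant's top choice among surviving options.
        tally = Counter(
            next(opt for opt in r if opt in remaining)
            for r in rankings
        )
        # Drop the weakest option (alphabetical tie-break for determinism).
        weakest = min(sorted(remaining), key=lambda o: tally.get(o, 0))
        remaining.discard(weakest)
    return remaining.pop()
```

With rankings mirroring the article — Modi polling weakest, Trump edging out Clinton head-to-head — the function eliminates Modi first and picks Trump.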
This particular prediction was not only made with fewer than a hundred people, but happened in just a matter of minutes. It shows how AI focused on harnessing the power of the crowd can deliver predictions with remarkable accuracy.
Prior to this, swarm intelligence has accurately predicted winners of the Kentucky Derby, NFL playoffs, the Oscars, and the Golden Globes. While such predictions may not have any real world impact, applying the potential of swarm intelligence to business and political races could be a game changer.
As companies like Tesla continue to show how autonomous cars can safely take over our roads, U.K. defense contractor BAE Systems is on track to do the same for the skies.
The company is preparing to conduct a new round of testing that will increase the capability of its autonomous system to pilot aircraft. “Our priority as always is to demonstrate the safe and effective operation of autonomous systems, and together with NATS we are working towards the possibility of flying our own unmanned systems in a highly controlled environment in the U.K.,” Maureen McCue, the head of research and development at BAE, told The Engineer.
BAE’s converted Jetstream 31 aircraft will complete 17 flights, flying 482-km (300-mile) routes between Inverness and Lancashire, according to a BBC report. The flights won’t be completely autonomous, though: the aircraft will only engage its autonomous systems midair, at 4.6 km (15,000 ft), leaving take-off and landing duties in the hands of human pilots.
The converted Jetstream 31 has a range of features designed to replace the human contributions to flying a plane. The aircraft is equipped with an identification system that can log data from transponders of other planes, and cameras mounted in the cockpit allow the plane to note obstacles like clouds. The system can identify any upcoming hazards in the air and adjust its course away from danger if the need arises.
The More The Merrier
Over the years, planes have been given various automation functions that can assist pilots while flying a plane. They have systems that guide the altitude and speed of a plane akin to a cruise control system in cars. Systems are in place that automate a plane’s landing, as well, which make it easier for pilots to perform the crucial maneuver.
These systems combined with the ones that BAE is testing right now are unlikely to completely replace commercial pilots in the near future since the general public still views autonomous systems with caution. However, an autonomous system can be helpful to pilots by acting like another set of eyes in a cockpit, ones that never get sleepy or distracted, which ultimately leads to safer skies and more peace of mind for passengers aboard a plane.
Anybody following developments at Apple knows it’s a very secretive company, one that prefers to let the public glean what it can from the kind of talent it’s hiring and from its patent applications.
But that may not be the case for its foray into artificial intelligence (AI). Twitter posts reveal that Apple’s AI research director, Russ Salakhutdinov, has announced that the company will begin publishing its AI research and engaging more with academia.
Apart from the tweets, we really don’t know much. Apple has yet to make any formal announcement on the matter. We don’t know whether it will publish everything or cherry-pick the findings it wants everyone to know.
Why this departure? According to Engadget, it may be because the company wants more AI researchers to work with them. Apple has been acquiring AI talent, but that kind of talent wants the recognition for the breakthroughs they accomplish. That’s something you can’t have if the parent company is as secretive as Apple.
It remains to be seen how this will affect the company, but it can do nothing but good for the field. AI research benefits from more brains working on research, and many new AI efforts have been aimed at “democratizing” AI and making it available for everyone.
It’s a concept that either sounds strange or fascinating depending on who you ask: a car that can feel human emotions. It’s a claim that Honda has made about their concept car that’s going to be shown in next year’s Consumer Electronics Show (CES).
In a press release on Honda’s website, the automotive company says the theme for their participation in the CES will be “Cooperative Mobility Ecosystem.” Honda says it is exploring more interactive and immersive experiences for passenger vehicles.
The centerpiece of their exhibit, and the culmination of this theme, will be the company’s new concept car, called “NeuV.” It’s a concept automated EV commuter vehicle equipped with artificial intelligence (AI) called the “emotion engine.” Honda explains that the emotion engine is a group of AI technologies that allow the vehicle to generate artificial emotions.
At first glance, an emotional AI sounds unnecessary and even asinine. However, the concept Honda is exploring is a new dimension of driver-assisting AI. Until now, AI has mostly assisted drivers physically, by enabling the vehicle to take over the driving itself.
This is extremely helpful in lowering a driver’s fatigue when operating a vehicle for long periods of time and for long distances, as Tesla’s autopilot feature demonstrates. It has even been used in the trucking industry where drivers are often faced with the exhausting job of hauling products across the country.
Honda hasn’t yet released details on what the “emotion engine” will actually do, or its place in the vehicle, but it’s not hard to imagine. For example, it could pull up weather data from the internet and announce the results to the driver in a cheery voice. How the driver would react depends on the person, but having an AI car companion cheer you on while driving is an amusing thought.
OpenAI, an artificial intelligence research center based in San Francisco, has released an open-source software platform that virtually tests and trains AI.
The platform, called Universe, is a digital playground comprised of games, web browsers, and even protein folding software that an AI can interact with. AI does this by sending simulated mouse and keyboard strokes via what’s called Virtual Network Computing, or VNC.
Universe facilitates reinforcement learning, in which the AI learns tasks by trial and error, through risk and reward. Over time, OpenAI researcher and former Googler Ilya Sutskever says, AI can even practice “transfer learning,” in which an agent takes what it has learned in one application and applies it to another.
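The trial-and-error loop Universe facilitates can be shown in miniature with tabular Q-learning on a one-dimensional corridor. This toy is not OpenAI’s code, but it captures the same cycle — act, observe a reward, update value estimates — that an agent runs against Universe environments at far larger scale:

```python
import random

random.seed(0)

N_STATES = 5            # corridor cells 0..4; reward only at cell 4
ACTIONS = [-1, +1]      # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < 0.3:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward the observed
        # reward plus the discounted best value of the next state.
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += 0.5 * (reward + 0.9 * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy at each non-terminal cell.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
```

Universe replaces the corridor with full games delivered over VNC, and the two-action table with pixels and simulated keyboard and mouse events, but the learning signal is the same reward-driven feedback.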
Games that AI can currently access include Portal, Shovel Knight, and SpaceChem. Video games may be a good benchmark and training aid for the AI, but the researchers intend to add more apps to the list and teach AI problem-solving skills in unfamiliar environments.
“An AI should be able to solve any problem you throw at it,” Sutskever told Wired. Michael Bowling, a University of Alberta professor, lauds the platform’s wide scope and its role in improving AI: “It crystallizes an important idea: Games are a helpful benchmark, but the goal is AI,” he says.
Apple enthusiasts and fans have been speculating for some time that the company Steve Jobs left behind is developing a smart car — aka Project Titan. Well, we’ve finally got some sort of confirmation to the long-standing rumor. Apple is indeed getting into the autonomous car business, but it’s currently unclear whether it will actually build a car or just an autonomous driving system.
In a five-page letter to US transport regulators, written by Product Integrity Director Steve Kenner, Apple expressed that it “is excited about the potential of automated systems in many areas, including transportation.” Apple commends the National Highway Traffic Safety Administration (NHTSA) for its Federal Automated Vehicles Policy, which prompted the writing of the letter.
Apple also acknowledged that it “is investing heavily in the study of machine learning and automation,” which perhaps is a clear indication that the company is already studying the possibilities of developing one, or an autonomous driving system at the very least. Apple has also already registered several car-related internet domains, such as apple.car and apple.auto.
One of the things Kenner proposes to the NHTSA in his letter is the sharing of data by companies in the autonomous vehicle industry. “Apple agrees that companies should share de-identified scenario and dynamics data from crashes and near-misses,” Kenner says. “Data should be sufficient to reconstruct the event, including time-series of vehicle kinematics and characteristics of the roadway and objects.”
While Apple may not be ahead of the game in this case, the company seems to be riding along at the right moment, especially with policies for autonomous cars being crafted and refined. Soon, we might not just be asking Siri for directions to our favorite restaurant; she could drive us there.
What happens when you sleep? Sleeping is more than merely resting our bodies. During sleep, processes vital to the maintenance of our bodies take place, much like a computer processing the memories and information gathered during the day. Dreams are vital to stabilizing memory: in dreaming, the brain integrates new pieces of memory with old ones, helping us solidify them into long-term memories.
If sleep and dreams can help keep the complex human body running, Google believes the same can apply to its AI. Google DeepMind, an advanced neural network, is pioneering new technology that allows the program to dream in order to enhance its rate of learning. In a paper posted to the arXiv preprint server, the research team likened human dreaming to the “unsupervised auxiliary tasks” used to train their programs.
The DeepMind AI’s dreams are of scenes from video games like StarCraft II and Labyrinth. The prompts in the video games help the AI learn to recognize challenges, helping it make better decisions when it is tested on the game controls. Impressively, dream training boosted DeepMind’s performance: the AI scored 880 percent of expert human performance on Atari games, and on Labyrinth it learned ten times faster than previous iterations of the AI.
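One way to picture this kind of "dreaming" is offline replay: the agent stores experiences while playing, then revisits random batches of them to refine its estimates without any new interaction. The sketch below is a minimal, hypothetical illustration of that idea, not DeepMind's actual architecture; the toy task, function names, and learning rate are invented.

```python
import random

# Hypothetical "dream training" sketch via experience replay
# (illustrative only; not DeepMind's actual system).

def collect_experience(rng, n=200):
    """Simulate waking play: (state, reward) pairs from a toy task."""
    return [(s, 1.0 if s == 1 else 0.0)
            for s in (rng.randint(0, 1) for _ in range(n))]

def dream_train(buffer, rng, passes=5, lr=0.3):
    """Replay stored experiences offline to learn each state's value."""
    values = {0: 0.0, 1: 0.0}
    for _ in range(passes):
        batch = rng.sample(buffer, k=32)  # revisit a random batch of memories
        for state, reward in batch:
            values[state] += lr * (reward - values[state])
    return values

rng = random.Random(1)
buffer = collect_experience(rng)   # "awake": gather experiences
values = dream_train(buffer, rng)  # "asleep": replay them to learn
```

Replaying the same stored experiences many times is what lets the agent squeeze extra learning out of limited play, mirroring how the article describes dreams consolidating the day's memories.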
The Future’s AI
Perfecting video games is merely a training ground for DeepMind before it moves on to functions like facial detection or cancer research. AI already plays a huge role in today’s society, optimizing systems in a wide range of industries.
Improving how these bots learn could unlock a better future for us. It does raise some concerns about the uncontrolled training an AI experiences while dreaming. While Google and other companies developing neural networks certainly don’t intend for AI to turn against humans, it’s a possibility that could be fed into an AI’s dreams. The “reveries” in the universe of Westworld allowed that fictional AI to exhibit some unintended (ahem) consequences. But chances are we won’t be seeing that here.
The methods, though racking up impressive results, are still experimental, and Google is still doing numerous tests. For now, we’re all looking forward to what could be the best AI yet.
Bill Doogue, a partner at the defense law firm Doogue O’Brien George, is launching a robot lawyer in Melbourne, Australia to help arm people with legal advice before representing themselves in court. This will be the first application of this technology in criminal cases in Australia.
When hit with a charge, the defendant can simply log in to the system and provide their personal information, as well as details about the offense. The system, however, is currently limited in its scope of cases.
“The robot has a number of hurdles that it places in front of people, but they have to be pleading guilty, it has to be a minor offense, and they can’t have priors,” Doogue said. “Then they just use the online service to prepare their data to hand up to the magistrate.”
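The hurdles Doogue describes amount to a simple rule-based eligibility filter. A hypothetical sketch of that logic follows; the function and field names are invented for illustration and are not from the actual service.

```python
# Hypothetical sketch of the eligibility "hurdles" described above:
# the service only assists defendants who are pleading guilty to a
# minor offense and have no prior convictions. Field names are invented.

def eligible(case: dict) -> bool:
    return (
        case.get("plea") == "guilty"
        and case.get("offense_severity") == "minor"
        and not case.get("prior_convictions", True)  # missing info counts as a prior
    )

print(eligible({"plea": "guilty", "offense_severity": "minor",
                "prior_convictions": False}))  # True
```

Gating on a few hard criteria like this keeps the automated service to the narrow, low-stakes cases it can safely handle, with everything else routed to a human lawyer.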
According to the State Library of New South Wales: “The right to an adviser has not always been observed in Australia, and some courts are against it.” For example, smaller civil disputes are heard by the Queensland Civil and Administrative Tribunal, and the Queensland State Government dictates “you can get legal advice before a hearing, but are normally not allowed a lawyer and are expected to represent yourself.”
Dazed and Confused
Although there are plenty of legal resources online, representing oneself becomes a problem if the defendant suffers from anxiety. “[Some people] are visibly distressed and uncomfortable talking and give monosyllabic answers to the magistrate, when they have a story they should be telling,” Doogue said. As a result, their cases are brushed off and people end up suffering consequences that could have been diminished or withdrawn altogether.
In an interview with ABC, Su Robertson, a lecturer at the College of Law and Justice at Victoria University who has observed self-represented litigants, said that 63 percent of such cases last no more than five minutes, and 26 percent last two minutes or less.
Of course, the robot is just a starting point to combat these challenges, and should not be used as a substitute for full legal aid. Despite this, artificial intelligence is becoming popular in legal professions. Earlier this year, law firm Baker & Hostetler employed IBM’s AI Ross to handle their bankruptcy practice, a department of nearly 50 lawyers.
Lyria Bennett Moses, an associate law professor at the University of New South Wales, does warn, however: “all tools have limitations, they don’t cover everyone’s case and they don’t meet the same need that services like legal aid services are currently meeting.”