Alphabet’s Project Loon has officially launched in Puerto Rico in an effort to bring basic internet connectivity to the island after its infrastructure was ravaged by Hurricane Maria. The project uses helium balloons to relay internet service from working ground connections, and it has brought service to more than 100,000 previously unconnected inhabitants.
Speaking to Engadget, Project Loon head Alastair Westgarth stated, “In times of crisis, being able to communicate with loved ones, emergency services, and critical information is key. We hope that the connectivity Project Loon has provided over the last few weeks has been helpful, and would like to thank AT&T, T-Mobile, and our government partners who made these efforts possible.”
High Tech Solutions
This is the fastest deployment in Project Loon’s history. The balloons set off from Winnemucca, Nevada, and the team used machine learning algorithms to fly them to Puerto Rican airspace. While the project was not able to provide connectivity to the entire island, it is still an improvement over the territory’s decimated infrastructure, and the partial coverage reflects conditions on the ground rather than any shortcoming in Project Loon’s capabilities.
Alphabet isn’t the only company looking to use its technology to help rebuild Puerto Rico. AT&T is also helping to reinstate wireless service with its “Flying COW” (Cell on Wings) drones. The devices have helped deliver cell phone service, including LTE wireless, to up to 8,000 people in San Juan, the territory’s capital.
Elon Musk’s Tesla additionally sent hundreds of batteries, including Powerwalls and higher capacity Powerpacks, to help get power to where it was needed most, including a children’s hospital in the capital.
Musk also spoke with Puerto Rico’s Governor, Ricardo Rosselló, about using the company’s technology to completely overhaul the island’s electricity grid, which was already crumbling long before Maria made landfall. However, critics fear that such a move could further disrupt the Puerto Rican power sector, which is currently run by a government-owned utility.
Still, these high-tech solutions are helping to get Puerto Rico back up and running. It is now up to the people of Puerto Rico to decide what technologies will best benefit them throughout this rebuilding period and in the future.
Google and Waymo want their fleet of autonomous cars to act without human input, forgoing the autopilot-style features found in many other cars from companies like Tesla. On Waymo’s FAQ page, the company notes that a majority of traffic accidents are caused by human error — something that can be addressed by making fully self-driving cars.
If you’ve ever seen one of Waymo’s prototype cars, you may have noticed it doesn’t have a steering wheel — that’s by design, as it plays into the company’s desire for cars that don’t need people operating the wheel. This wasn’t a sudden change that happened recently, either. Google and Waymo decided to shift away from autopilot features in 2013, after observing what people did when they no longer had to keep their eyes on the road.
As reported by Reuters, Waymo CEO John Krafcik revealed during a Waymo event on Monday that the initial 2013 tests showed passengers taking naps, applying makeup, and fiddling with their phones while the car traveled at speeds of up to 56 mph.
“What we found was pretty scary,” said Krafcik. “It’s hard to take over because they have lost contextual awareness.”
From Semi-Autonomous to Fully Self-Driving
Autopilot features typically require the person behind the wheel to take over in tougher situations for which the self-driving tech isn’t prepared. Other systems may also require the person to touch the steering wheel after a certain amount of time to inform the vehicle they’re still conscious or paying attention.
Waymo planned to do the same with its fleet of vehicles but shifted focus away from such features, noting that the presence of autopilot could allow people to ignore the road and leave them unprepared to take the wheel if the situation called for it. The abandoned approach included a system in which the driver would be prompted to take over after an alarm sounded, as well as the ability to pass control back and forth between driver and car.
Now, Waymo’s self-driving cars feature two buttons for driver control: one for starting a ride, and another for pulling the car over at the next possible opportunity. As development progresses, Krafcik envisions a scenario in which an empty car comes to pick you up.
“In level four mode, you can imagine a completely empty car coming to where you are, you open the door, hop in the back seat, and it can take you — relaxed and happy, perhaps it has Wi-Fi — wherever it is you want to go,” The Verge reports Krafcik saying. “That’s what we’re striving to achieve every day.”
Self-driving cars are being tested all over the United States. New York City, Sacramento, and San Francisco are just some of the places you can see autonomous vehicles on the road. Waymo, Google’s self-driving car division, has been a leader in the technology, recently partnering with Intel to further hone what its vehicles can do.
The CEO of Waymo, John Krafcik, recently wrote a Medium post detailing where the company will test its cars next: Michigan. Engineers have been testing the vehicles in cold-weather conditions for five years, and they are confident the vehicles can handle snowy Michigan roads as capably as Michiganders do.
Krafcik wrote in the post, “We’ll be giving our vehicles even more practice driving in snow, sleet, and ice. This type of testing will give us the opportunity to assess the way our sensors perform in wet, cold conditions.” This is an important part of passenger safety. He continues, “It will also build on the advanced driving skills we’ve developed over the last eight years by teaching our cars how to handle things like skidding on icy, unplowed roads.”
The vehicles will be tested by trained safety drivers in and around Detroit, near Waymo’s self-driving technology development center in Novi.
Google is fully aware of artificial intelligence’s (AI) potential — DeepMind’s AlphaGo is one of today’s most well-known examples of its capabilities — and in an earnings call this week, the company made it clear it believes the future of technology lies with AI.
During the call, Sundar Pichai, CEO of Alphabet (Google’s parent company), praised the company’s decision to invest in AI early, highlighting the concept’s trajectory from “a research project to something that can solve new problems for a billion people a day,” according to an Inverse report.
Pichai went on to note how Google’s AI research is already producing products that utilize machine learning, such as the Google Clips camera that debuted earlier this month. “Even though we are in the early days of AI, we are already rethinking how to build products around machine learning,” said Pichai. “It’s a new paradigm compared to mobile-first software, and I’m thrilled how Google is leading the way.”
Pichai went on to note the performance of Google Assistant since its implementation. While the AI powering it may not be smarter than a fifth grader, it continues to learn and help people “get things done in the real world,” said the CEO. Work done by DeepMind this year to add imagination and teach AI to manage real-world tasks has almost certainly been a factor in Google Assistant’s success.
“500 million people now use the machine learning smarts of Google Photos to manage and share their memories,” said Pichai. “The billion-plus people using Google Maps now get thoughtfully contextual information like how to find parking where they are going.”
Consumers aren’t the only ones benefiting from artificial intelligence. According to Pichai, businesses are also starting to learn how to use AI and machine learning to grow and remain relevant in a world that’s embracing more powerful technology.
Artificial intelligence is here to stay, and Pichai’s comments both highlight the technology’s current capabilities while also acknowledging its future. AI’s effect on jobs and the ethics of the tech still need to be addressed, but provided we can control it, AI is poised to lift humanity to new heights and radically change the world for the better.
You don’t need to look any further than the mobile device in your pocket for an example of how technology is shaping the world today, but Sidewalk Labs — the urban innovation arm of Google parent company Alphabet — wants to use technology to literally shape a piece of the world by building a high-tech neighborhood.
At a press conference on Tuesday, Sidewalk Labs announced plans to transform an area along Toronto’s waterfront into a technologically advanced neighborhood they’re calling Quayside. The project will be undertaken in partnership with the city government and Waterfront Toronto and is one part of a larger project called Sidewalk Toronto, which has a straightforward goal: use technology to solve urban Toronto’s biggest challenges.
“Today’s announcement is about creating a new type of neighborhood that puts people first. Sidewalk Toronto will transform Quayside into a thriving hub for innovation and a community for tens of thousands of people to live, work, and play,” Canadian Prime Minister Justin Trudeau said at the press conference, according to Business Insider. “I have no doubt Quayside will become a model for cities around the world and make all of our communities even better places to call home.”
Building Future Cities
Sidewalk Toronto first started to take shape after Toronto requested development proposals for an 800-acre area called Port Lands. Sidewalk Labs responded and has since committed $50 million as an initial investment in the high-tech neighborhood, which is estimated to cost some $1 billion in total. Waterfront Toronto plans to spend an additional $996 million, invested by local and federal governments, on flood-prevention infrastructure along the waterfront.
The details for Quayside aren’t final yet, as the planning process isn’t scheduled to start until a community town hall on November 1. However, Sidewalk Labs CEO and former New York City deputy mayor Dan Doctoroff has previously spoken about how many of today’s biggest technological advancements — machine learning, smart sensors, self-driving cars, high-speed internet, etc. — could be used to build a community that truly belongs to the future, which could hint at what’s to come for Quayside.
“This will not be a place where we deploy technology for its own sake, but rather one where we use emerging digital tools and the latest in urban design to solve big urban challenges in ways that we hope will inspire cities around the world,” said Doctoroff at the conference.
We’ve written before about how Google is one of the most prominent tech companies leading the way when it comes to the development of artificial intelligence. As each month passes, its AI division, DeepMind, continues to reveal increasingly advanced AI capabilities, especially when it comes to AlphaGo.
Since then, DeepMind has started adding imagination to its AI and has used gaming to teach the AI how to better manage tasks. AlphaGo even went on to defeat another top Go player, Ke Jie, once again demonstrating its seemingly unlimited capacity to learn.
Now, a new development has just come to light.
This week, DeepMind unveiled the next iteration of the AlphaGo AI, dubbed AlphaGo Zero, saying it’s “even more powerful and is arguably the strongest Go player in history.” According to the company, Zero taught itself how to play Go on its own by playing games against itself — previous iterations of AlphaGo learned by studying data gathered from human players.
Zero began as a complete novice, but after just three days it beat the version of AlphaGo that bested Lee Sedol, and it did so by a staggering 100 games to 0. After 40 days, it proceeded to beat the superior version of AlphaGo that had defeated Ke Jie.
“AlphaGo Zero also discovered new knowledge, developing unconventional strategies and creative new moves that echoed and surpassed the novel techniques it played in the games against Lee Sedol and Ke Jie,” said DeepMind in their press release.
Beyond the Game of Go
While AlphaGo Zero’s Go capabilities are to be praised, it should be noted that playing a board game is much different than completing other tasks that have more variables. As Eleni Vasilaki, professor of computational neuroscience at Sheffield University, put it while speaking with The Guardian, “AI fails in tasks that are surprisingly easy for humans. Just look at the performance of a humanoid robot in everyday tasks such as walking, running, and kicking a ball.”
But that doesn’t mean that this development isn’t utterly revolutionary.
DeepMind CEO Demis Hassabis is also well aware of this gap between humans and AI, explaining that the development and growth of AlphaGo was about more than mastering an ancient game; it was also “a big step for us towards building these general-purpose algorithms.”
Within the next decade, Hassabis believes AI will be working alongside people to advance development in fields like science and medicine — the latter of which can already be seen today.
For example, AlphaGo Zero is currently trying to work out how proteins fold, which is something that, if realized, could vastly accelerate drug discovery (the process through which new medications are discovered). This, in turn, could save countless lives and lead us into a new era in medicine.
“I hope that these kinds of algorithms and future versions of AlphaGo-inspired things will be routinely working with us as scientific experts and medical experts on advancing the frontier of science and medicine,” Hassabis said.
Companies working with artificial intelligence (AI) and related technologies are expected to benefit dramatically from the big changes automation will bring to the world of work. Workers themselves, on the other hand, could be facing a future of unemployment and poverty. To ensure humans still have a place in the workforce of tomorrow, Google has established the Grow with Google initiative.
As company CEO Sundar Pichai explained during yesterday’s Grow with Google announcement event in Pittsburgh, PA, Google plans to distribute $1 billion over the next five years to nonprofits that specialize in training workers and helping new businesses get off the ground.
Of that fund, $10 million has already been committed to Goodwill, which plans to use the money to implement the Goodwill Digital Career Accelerator, an initiative that aims to prepare members of the American workforce for jobs that require a high level of technological know-how.
Google will also embark upon a tour of the U.S., visiting libraries and community organizations to dish out training sessions and career advice as part of a commitment to deliver one million hours of employee volunteering over the span of the project.
Google isn’t the only tech giant investing a significant amount of money in programs to ensure that people are ready for the jobs that will be available in the coming years.
In May 2017, Apple announced plans to set up a $1 billion fund to help foster manufacturing jobs in the U.S., and the company also plans to fund coding workshops. Microsoft also recently announced its TechSpark program to help prepare workers for coming economic changes by improving their digital skills and computer literacy.
Automation isn’t guaranteed to devastate the workforce, but it is almost certain to disrupt it. The benefits of the technology are simply too great to ignore, so companies are going to want to implement it whenever possible. Thanks to the likes of Google, we have the opportunity to train the people most likely to be affected by this change, thereby ensuring that nobody gets left behind in this new era of automation.
Back in May, Google revealed its AutoML project: an artificial intelligence (AI) system designed to help the company create other AIs. Now, Google has announced that AutoML has beaten its human AI engineers at their own game by building machine-learning software that’s more efficient and powerful than the best human-designed systems.
An AutoML system recently broke a record for categorizing images by their content, scoring 82 percent. While that’s a relatively simple task, AutoML also beat the human-built system at a more complex task integral to autonomous robots and augmented reality: marking the location of multiple objects in an image. For that task, AutoML scored 43 percent versus the human-built system’s 39 percent.
These results are meaningful because, even at Google, few people have the requisite expertise to build next-generation AI systems. It takes a rarefied skill set to automate this area, but once that is achieved, it will change the industry. “Today these are handcrafted by machine learning scientists and literally only a few thousands of scientists around the world can do this,” WIRED reports Google CEO Sundar Pichai said. “We want to enable hundreds of thousands of developers to be able to do it.”
Much of meta-learning is about imitating the brain’s neural networks and feeding more and more data through artificial versions of those networks. This isn’t — to use an old saw — rocket science. Rather, it’s a lot of plug-and-chug work that machines are well suited to do once they’ve been trained. The hard part is imitating the brain’s structure in the first place, at scales appropriate for tackling more complex problems.
The Future of Machine-Built AI
It’s still easier to adjust an existing system to meet new needs than it is to design a neural network from the ground up. However, this research seems to suggest this is a temporary state of affairs. As it becomes easier for AIs to design new systems with increased complexity, it will be important for humans to play a gatekeeping role. AI systems can easily make biased connections accidentally — such as associating ethnic and gendered identities with negative stereotypes. However, if human engineers are spending less time on the grunt work involved in creating the systems, they’ll have more time to devote to oversight and refinement.
Ultimately, Google is aiming to hone AutoML until it can function well enough for programmers to use it for practical applications. If the company succeeds, AutoML is likely to have an impact far beyond the walls of Google. “We want to democratize this,” WIRED reports Pichai said at the same event, meaning the company hopes to make AutoML available outside Google.
To reach that point, however, AI may need to become more human-like. At least, that’s what Google Cloud Chief Scientist and Director of the Stanford Artificial Intelligence and Vision Labs Fei-Fei Li believes. By doing so, AI would become more capable of improving our lives and providing us peace of mind.
In an interview with MIT Technology Review senior editor Will Knight, Li explained that thinking about AI’s impact on the world is one of the most important factors regarding the technology, and that her work with AI has revealed one thing: future developments and advancements need to be more human-centered.
Better AI Development
“If you look at where we are in AI, I would say it’s the great triumph of pattern recognition,” she said. “It is very task-focused, it lacks contextual awareness, and it lacks the kind of flexible learning that humans have. We also want to make technology that makes humans’ lives better, our world safer, our lives more productive and better. All this requires a layer of human-level communication and collaboration.”
Early signs of such communication and collaboration can already be seen, though their full impact has yet to be felt. In July, for example, Google’s DeepMind revealed it had begun developing AI with imagination, enabling systems to think through decisions and plan for the future without human input.
“If our algorithms are to develop equally sophisticated behaviors, they too must have the capability to ‘imagine’ and reason about the future,” said researchers from DeepMind, speaking on AIs equipped with imagination. “Beyond that they must be able to construct a plan using this knowledge.”
Li’s own thoughts nearly echo DeepMind’s, as she also believes AI needs to be able to think and act for itself. To this end, she said that future research needs to focus on “contextual understanding,” as well as “knowledge abstraction and reasoning.”
AI That Represents Humanity
That said, new features and capabilities won’t be enough to truly advance artificial intelligence. Diversity also needs to play a key role. As Li explained, studies have shown that a diverse team leads to more creative, innovative solutions. AI also cannot be biased or influenced by the morally questionable parts of humanity, otherwise its adoption and incorporation could hinder some groups of people over others.
“When you are making a technology this pervasive and this important for humanity, you want it to carry the values of the entire humanity, and serve the needs of the entire humanity,” said Li. “If the developers of this technology do not represent all walks of life, it is very likely that this will be a biased technology. I say this as a technologist, a researcher, and a mother. And we need to be speaking about this clearly and loudly.”
Google has received a license from the FCC to deploy its Project Loon balloons over Puerto Rico and parts of the Virgin Islands. Google will be able to provide coverage there until April 4, 2018. If all goes according to plan, the helium balloons will provide emergency LTE cellular reception to local governments and residents, allowing them to contact family and friends. It will also enable them to reestablish communication with the outside world and manage relief efforts. At the time of writing, it is uncertain how much of Puerto Rico will be covered, which areas of the Virgin Islands will be covered, or how many balloons Google will deploy.
This will not be the first time Google has sent its helium-balloon LTE network to assist in the wake of a disaster; the balloons gave Peruvians coverage after extreme flooding wiped out power, and the project has also tested disaster relief initiatives in France, Brazil, New Zealand, Indonesia, and Sri Lanka. In Peru, however, the problem was simpler: Google was already working with a local telecom provider on disaster relief, so the structure for getting signals to and from the balloons was already in place.
As Google and Project Loon arrive in Puerto Rico, they’ll be starting with nothing in place. At this point, Puerto Rico’s telecom companies may not be able to formally partner with Google or contribute resources to the collaboration, but Google clearly intends to make connectivity happen if it can. In a statement to Engadget, Alphabet’s X lab, home to Project Loon, said partnership with local telecom networks is critical to success: “To deliver signal to people’s devices, Loon needs to be integrated with a telco partner’s network — the balloons can’t do it alone. We’ve been making solid progress on this next step and would like to thank everyone who’s been lending a hand.”
Meeting the Challenge
Project Loon’s balloon network floats at 65,000 feet, receiving signals from telecom partners on the ground and relaying them to cellphone users. According to Mashable, the Peruvian project’s leader said the balloons delivered 160 GB of data as they floated over an area about the size of Sweden — “enough data to send and receive around 30 million WhatsApp messages, or 2 million emails.”
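The per-item sizes implied by that quote can be back-calculated (a rough sketch; the average message and email sizes below are derived from the quoted figures, not stated in the article):

```python
# Figures quoted above: 160 GB of data, described as enough for about
# 30 million WhatsApp messages or 2 million emails.
total_bytes = 160 * 10**9

# Implied average payload sizes (back-calculated, not stated in the article)
per_message = total_bytes / 30_000_000  # bytes per WhatsApp message
per_email = total_bytes / 2_000_000     # bytes per email

print(round(per_message))  # roughly 5 KB per message
print(round(per_email))    # 80 KB per email
```

Both implied sizes are in a plausible range for text messages and emails with light attachments, which suggests the comparison in the quote is internally consistent.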
After Hurricane Maria devastated Puerto Rico’s infrastructure, more than 75 percent of its cellphone towers remain offline, and power has yet to be restored to almost 90 percent of the island. The situation remains fairly desperate for the island and its 3.5 million inhabitants, who are American citizens. Hopefully this initiative will prove successful.
Google is rolling out an entirely new product alongside its recent Pixel and Google Home releases: the Google Clips camera. It is visually unassuming: a small, flat, square, wireless camera that can clip onto your shirt pocket (or whatever you dream up) and take photos.
What makes this camera different is that you’re not always the one taking the photos; its artificially intelligent (AI) system is. While you’re having precious family moments and not paying attention to the camera you clipped in the corner of the room, the AI behind Google Clips will choose ideal moments to capture photos of its own volition.
Google’s product manager Juston Payne asked, “How do you let yourself capture those fleeting moments, while being part of the moment?”
The camera should also get smarter over time thanks to its machine learning algorithm, Moment IQ, which learns to recognize the people and pets you spend a lot of time with and knows good photo conditions when it sees them. While perhaps a creepy concept to some, the idea is that the camera will capture candid and heartfelt moments at home; moments you might miss or be unable to capture yourself.
Will Google Clips allow families, as intended, to focus more on spending time together rather than on taking photos, or will it become an ethically murky nuisance?
There are those who are fearful of AI; Nick Bilton, New York Times tech columnist, wrote, “The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”
Now, this camera is nowhere near the potentially human-obliterating technology that Bilton is alluding to. However, it is important to both embrace progress and question how this type of technology will interact with our lives.
Could it be used by police to capture moments at home that could be cause for criminal concern? Is the biggest concern that it captures photos that you don’t care for aesthetically, or find inferior to your own photography skills?
There are many possibilities, and Google does allow the camera to operate without being connected to the internet, which could cut down on security concerns. Only time will tell, and hopefully, the camera will serve its purpose and allow families to enjoy being together while taking on the task of documenting loving moments.
A system for speech synthesis — otherwise known as text-to-speech (TTS) — typically utilizes one of two techniques.
Concatenative TTS involves the piecing together of chunks of recordings from a voice actor. The drawback of this method is that audio libraries must be replaced whenever upgrades or changes are made.
The other technique, parametric TTS, uses a set of parameters to produce computer-generated speech, but this speech can sometimes sound unnatural and robotic.
WaveNet, on the other hand, produces waveforms from scratch using a system built on a convolutional neural network.
To begin, a large number of speech samples were used to train the platform to synthesize voices, taking into account which waveforms sounded realistic and which did not. This gave the speech synthesizer the ability to produce natural intonation, even including details like lip smacks. Depending on the samples fed into the system, it would develop a unique “accent,” which means it could be used to create any number of distinct voices if fed different data sets.
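The sample-by-sample generation style described above can be illustrated with a toy autoregressive loop (a simplified sketch, not DeepMind’s actual model; `predict_next_sample` is a hypothetical stand-in for the trained convolutional network):

```python
def predict_next_sample(history):
    # Hypothetical stand-in for the trained network: here we just return
    # a decaying average of recent samples. The real model predicts a
    # probability distribution over the next audio sample value.
    recent = history[-3:]
    return sum(recent) / len(recent) * 0.9

def generate(seed, n_samples):
    # Autoregressive generation: each new sample is conditioned on
    # everything generated so far, so samples must be produced one at a
    # time -- tens of thousands of these steps per second of audio.
    samples = list(seed)
    for _ in range(n_samples):
        samples.append(predict_next_sample(samples))
    return samples

audio = generate([0.5, 0.4, 0.3], 5)
print(len(audio))  # 8: the 3 seed samples plus 5 generated ones
```

Because every new sample depends on all the samples before it, generation cannot be trivially parallelized, which is what made the original system slow.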
The biggest limitation of WaveNet was that it initially required a significant amount of computing power and wasn’t very fast, needing one second of computation to generate just 0.02 seconds of audio.
After improving upon the system for the past 12 months, DeepMind’s engineers have optimized WaveNet to the point that it can now produce a raw waveform lasting one second in just 50 milliseconds — 1,000 times faster than the original. What’s more, the resolution of each sample has been increased from 8 bits to 16 bits, contributing to its higher scores in tests with human listeners.
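The quoted speedup can be sanity-checked with quick arithmetic using only the figures above (a sketch; the numbers come straight from the text):

```python
# Original WaveNet: 1 second of compute per 0.02 seconds of audio,
# i.e., 50 seconds of compute for each second of audio.
original_compute_per_audio_second = 1.0 / 0.02

# Improved WaveNet: 50 milliseconds of compute per second of audio.
improved_compute_per_audio_second = 0.050

speedup = original_compute_per_audio_second / improved_compute_per_audio_second
print(round(speedup))  # 1000
```

The ratio works out to exactly the 1,000x figure reported, and it also shows the improved system now runs about 20 times faster than real time.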
These improvements mean the system can now be integrated into consumer products, like Google Assistant.
WaveNet is now being used to generate the U.S. English and Japanese voices for Google Assistant across all platforms. Because the system can create specialized voices based on whatever samples are fed into it, Google should be able to use WaveNet to synthesize realistic-sounding human speech for other languages and dialects moving forward.
Voice interfaces are becoming more and more prevalent across all forms of computing, but the stilted nature of some synthetic speech has put off many potential users. DeepMind’s efforts to improve upon this technology could prompt more widespread adoption and will certainly serve to refine the existing experience.
Google is hot on the heels of Apple and rapidly gaining ground. The internet and tech giant has just announced a new addition to its device family, and it comes with some pretty neat features. The Google Pixel Buds are similar to Apple’s AirPods with a few notable exceptions, including the remarkable ability to live-translate 40 languages at the touch of a button or with a spoken prompt like “help me speak Spanish.”
This functionality is only available when the headphones are paired with a Google Pixel 2 phone, further demonstrating how Google is looking to directly compete with Apple in the mobile field. The earphones are able to utilize Google’s AI-powered, voice-activated assistant, Google Assistant. According to Google CEO Sundar Pichai’s statement to investors, “We have improved our translation ability more in one single year than all our improvements over the last 10 years combined.”
Mobile technologies are increasingly being developed to integrate into new areas of our lives; accessories can even turn a phone into a mobile lab. Pairing machine learning with mobile devices will only continue to expand what those devices can do.
Apple fans who make use of the iPhone’s famous virtual assistant, Siri, were probably elated to hear a more human-sounding voice emerge as part of iOS 11. While sounding more human is a plus for any artificial intelligence (AI) system, thinking like a human being would be even more so. According to a new study that compared the intelligence of today’s AI systems, that’s one area where Google’s AI appears to be out-performing Siri.
A team of Chinese researchers, led by Yong Shi, executive deputy director of the Chinese Academy of Sciences’ Research Center on Fictitious Economy and Data Science, wanted to “address the issue of AI threat” by coming up with “a standard intelligence model that unifies AI and human characteristics in terms of four aspects of knowledge, i.e., input, output, mastery, and creation,” as stated in the paper’s abstract. They then ranked the intelligence quotient (IQ) of AI systems vis-à-vis human intelligence.
The researchers found that Google’s AI had an IQ of 47.28 in 2016. While Google’s AI is intelligent, it’s still not as smart as a regular six-year-old child, whose IQ averages about 55.5, and it falls well short of the average 18-year-old, at 97. In fact, none of the AI systems they checked displayed IQs greater than these human measures. Chinese search engine Baidu scored 32.92, and Microsoft’s Bing measured 31.98. Apple’s Siri scored just 23.94 — roughly half the IQ of Google’s AI.
Learning to be Smart
These IQ scores don’t seem to lend credence to concerns experts like OpenAI co-chairman Elon Musk have expressed about AI’s capacity to end humanity. Of course, the measurement used by Yong and his colleagues is just one among many — and as the researchers acknowledged, AI systems do seem to be getting smarter. Google’s AI had an IQ of 26.5 in 2014, while Microsoft’s scored 13.5 — meaning that in just two years, Google nearly doubled the measured intelligence of its AI.
This shouldn’t come as a surprise: artificially intelligent systems are built to continually learn, and as Google’s DeepMind has shown, it’s getting good at it. Google has even trained the system to have an imagination and make sense of its surroundings. It also helps that Google, Microsoft, Apple, and Amazon are all investing heavily in AI research. As Google CEO Sundar Pichai said at this year’s Google I/O conference, the company’s current focus is AI. Similarly, Microsoft CEO Satya Nadella has shifted his company’s focus to AI as well; the company has already set up a new AI research lab.
So, expect even smarter AI systems and virtual assistants in the near future. Already, the simplest smartphones today are powered by machine learning algorithms that work to enhance usability and performance to improve user experience — which is, at least for now, very much to our benefit.
Artificial intelligence (AI) is expected to have a monumental impact on society. As such, DeepMind, an AI research company now housed under Google parent company Alphabet, has established a new unit dedicated to answering questions about the effect the technology might have on the way we live.
DeepMind Ethics and Society will bring together employees from the company and outsiders who are uniquely equipped to offer useful perspectives. Economist and former UN advisor Jeffrey Sachs, University of Oxford AI professor Nick Bostrom, and climate change campaigner Christiana Figueres are among the advisers selected for the group.
At present, the unit comprises around eight DeepMind employees and six unpaid fellows from outside the company. The number of internal staff on the committee will grow to 25 over the next year.
The leaders of the group, Verity Harding and Sean Legassick, described the external contributors as “a respected group of independent thinkers” in a blog post announcing the initiative. “These Fellows are important not only for the expertise that they bring but for the diversity of thought they represent,” read the statement.
Let’s Be Careful
DeepMind has made no secret of their ambition to integrate AI into all aspects of life. This potential pervasiveness is one reason the moral and ethical considerations of the technology must be taken very seriously.
The weaponization of AI is just one of the many ethical issues being raised, and in the past, DeepMind itself has been criticized for falling short of what many would consider proper standards. In May 2016, the company came under fire after it was given access to confidential health data for 1.6 million people during the development of an app called Streams.
How we choose to answer questions about what sort of responsibility engineers have with regards to how their work might be used will have far-reaching implications. That is why bodies like DeepMind Ethics and Society are so important. Without oversight, technologists might focus on what’s possible, rather than what’s morally acceptable, and that line of thinking can cause massive problems if left unchecked.
Google is making it easier for you to find out what it knows about you.
In a blog post on Friday, Google said it has redesigned its privacy and security dashboard “from the ground up” to better integrate it with other privacy controls and to make it more touchscreen-friendly.
The main dashboard was created in 2009 and contains the “My Activity” and “My Account” tools that let you view all your Google activity, and see things like how many emails and photos you have stored.
The update won’t be rolling out until next week, but from the images, it looks like Google is using graphics to better itemize all the services you use, and make it clear where you spend the most time and what data is there. The current version is a bit more difficult to navigate, and requires you to go through multiple screens in order to change or view your privacy settings.
Along with the update, Google released new numbers on just how popular its privacy features are. More than 150 million people have used the My Activity feature to track down old links and videos, and “tens of millions” have used the Privacy Checkup tool to change their preferences. The “Takeout” feature lets you export your data out of Google, and has been used to export one exabyte of data since its creation in 2011.
The update doesn’t by any means change the way Google follows you around the internet, but it does make it a whole lot easier to see everything the company knows about you.
In a joint statement Wednesday evening, Google and HTC announced their $1.1 billion cooperation agreement. Google will pay $1.1 billion in cash and receive a non-exclusive license for HTC’s intellectual property, along with about 2,000 HTC employees, many of whom are already working with Google to develop the Pixel smartphones. The deal will provide HTC with a much-needed infusion of cash after years of dropping market value.
The deal was widely rumored to happen this week, and the Taiwan stock exchange even suspended trading of HTC’s stock leading up to the announcement. The move signals Google’s growing interest in expanding its consumer hardware strategy — Google’s hire of former Motorola executive Rick Osterloh, who now runs its hardware division, is part of this strategy. Although Google does not acquire any manufacturing assets as part of the agreement, the deal signals the search giant’s ambitious plans to position its Android-powered smartphones as a competitive force against Apple’s iPhone. The announcement comes just before new product launches in early October that should include two Pixel phones.
“HTC has been a longtime partner of Google and has created some of the most beautiful, premium devices on the market,” Google Senior Vice President of Hardware Rick Osterloh said in a statement. “We’re excited and can’t wait to welcome members of the HTC team who will be joining Google to fuel further innovation and future product development in consumer hardware.”
Shared History, Compatible Future
Although Google failed with its acquisition of Motorola’s mobile division, divesting it not long after the purchase when Motorola failed to produce smartphones that could compete with Apple, the collaboration with HTC offers a second chance for Google — one not burdened with manufacturing facilities. HTC has helped to create several Google devices over the years, from the original Nexus One to the Pixel smartphone, which it manufactured.
HTC has placed recent emphasis on its growing virtual reality (VR) business, including the HTC Vive headset. This agreement will allow the company to reduce development costs while continuing to grow the VR division. And, while HTC has not been a market leader in smartphones for some time, this collaboration may give Google the opportunity to control hardware for its other devices, from Chromebooks to Google Home.
Today, pioneering tech brand HTC announced that they will halt trading of their shares tomorrow pending “the release of material information.”
The company has been rumored to be negotiating a takeover with Google, and this announcement may be a sign that such a buyout is happening. HTC’s official response has simply been that it does not “comment on market rumor or speculation.” Why HTC would halt trading without major news pending is unclear, though, as such a move risks hurting stockholders.
HTC has been operating at a loss for more than a year and was apparently unlikely to survive without outside assistance. Google seems a promising source for this assistance as HTC has already worked with the company on the Google Pixel and Pixel XL smartphones.
Whether Google would be taking over merely the smartphone business of HTC or perhaps only their promising virtual reality (VR) division is unclear, but a source told Bloomberg that a full-company takeover is considered less likely at this point.
We won’t have to wait long for confirmation on the reason for this trade halt, though. The company will likely make the details known tomorrow when trading wraps for the day.
Google has launched a new mobile payment app in India called Tez for iOS and Android devices. Named after a Hindi word meaning ‘fast,’ it allows users to make swift, secure cash transfers using technology known as audio QR.
While India is home to 300 million smartphone users, many of these devices are entry level or mid-tier, and don’t feature near-field communication technology. As a result, they don’t support services like Samsung Pay and Apple Pay.
Tez forges a connection between the two devices taking part in the transaction using ultrasonic frequencies, which are inaudible to human ears. As a result, there’s no need to share bank account information or even a telephone number.
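Google hasn’t published the details of audio QR, but the general principle of ultrasonic data transfer can be sketched: data is mapped onto tones in the near-ultrasonic band (roughly 18–22 kHz), which most phone speakers can produce but humans can’t hear. The mapping below is purely illustrative, not Tez’s actual protocol.

```python
# Toy sketch of ultrasonic data encoding (illustrative only -- the real
# "audio QR" protocol used by Tez is proprietary and not public).
BAND_LOW_HZ = 18_000   # bottom of the near-ultrasonic band
BAND_HIGH_HZ = 22_000  # top end most phone speakers can still emit
STEP_HZ = (BAND_HIGH_HZ - BAND_LOW_HZ) / 15  # one frequency slot per 4-bit value

def nibble_to_tone(nibble: int) -> float:
    """Map a 4-bit value (0-15) to a tone frequency inside the band."""
    assert 0 <= nibble <= 15
    return BAND_LOW_HZ + nibble * STEP_HZ

def encode_bytes(data: bytes) -> list[float]:
    """Encode each byte as two tones: high nibble first, then low nibble."""
    tones = []
    for b in data:
        tones.append(nibble_to_tone(b >> 4))
        tones.append(nibble_to_tone(b & 0x0F))
    return tones

tones = encode_bytes(b"OK")
# Every tone stays at or above the ~18 kHz threshold of human hearing.
assert all(18_000 <= t <= 22_000 for t in tones)
```

A real acoustic protocol would also need synchronization and error correction on top of a mapping like this.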
The service is protected by Tez Shield, a platform devised by Google that’s able to detect fraudulent usage, and keeps user identities under wraps.
Quicker Than Cash
In November 2016, the Indian government invalidated its 500-rupee (about $7.77) and 1,000-rupee banknotes, removing more than 85 percent of the country’s currency from circulation. As a result, digital transactions have seen a sharp increase in popularity.
Google isn’t the only company attempting to bring such an app to the Indian market. WhatsApp is apparently in talks with the National Payments Corporation of India — the governing body behind the Unified Payments Interface platform that links Tez to users’ bank accounts — in the hopes of launching its own service.
For the moment, Google is pitching Tez squarely at the Indian marketplace. However, if it proves to be successful, the company hopes to release the app in countries including Vietnam, Indonesia, and Thailand.
There’s a growing concern that tech companies like Google and Amazon are becoming so large, they will soon be in control of every aspect of our lives. It’s easy to understand where the worries come from — Amazon is one of the largest online retailers in the world, and the go-to solution for the majority of modern shoppers; Google is the owner of what is, essentially, the default search engine, and also owns YouTube — arguably the easiest to use, and most popular video streaming service.
Between the two companies, they play a role in how people shop (Amazon.com; Google Express), search for information (Google.com), entertain themselves (Amazon Instant Video; YouTube), and communicate with their services (Alexa, Amazon Echo, and Google Assistant). If left unchecked, the growing consensus is they’ll become too big to stop, and will eventually stifle competition, since current laws and regulations in place aren’t enough to slow them down.
Federal Trade Commission (FTC) Chairman Maureen Ohlhausen acknowledged those fears in a speech at the 2017 Global Antitrust Enforcement Symposium in Washington, D.C. on September 12. During her speech, she described the worry that “we are spiraling towards a dystopian future where a few giant technology companies will ultimately gain sustained control over our economic lives.”
However, Ohlhausen doesn’t actually believe this to be the case, citing the early-2000s merger between AOL and Time Warner, and where AOL is today, as proof that sustained market dominance requires more than a large market share. She argues that as long as companies like Amazon and Google continue to grow through smart decision-making and popularity, their actions are acceptable and not subject to antitrust intervention, which primarily focuses on consumer welfare.
Regulating Tech Companies
Not everyone thinks like this, of course. Groups like the New America Foundation have repeatedly criticized Amazon, arguing that it deflects regulatory scrutiny by drawing attention to its overall convenience and lower prices.
“We cannot cognize the potential harms to competition posed by Amazon’s dominance if we measure competition primarily through price and output,” said Lina Khan, a fellow of the organization’s Open Markets program, back in January.
Speaking to USA Today in June, Khan said it’s as if CEO Jeff Bezos created a map of antitrust laws specifically to determine the best ways to avoid them.
It’s unclear what companies with substantial influence like Google, Amazon, and Facebook would do if they no longer had competition or regulations to worry about, but it’s undeniable they each continue to expand in unexpected ways.
Ohlhausen ended her speech saying she and the FTC have “vigorously supported policy positions that they sometimes love and sometimes hate.” Perhaps it’s time for a conversation she doesn’t necessarily agree with: one discussing the pros and cons of tech companies’ seemingly unhindered growth.
Researchers have started developing artificial intelligence with imagination – AI that can reason through decisions and make plans for the future, without being bound by human instructions.
Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.
The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven’t been specifically programmed for. Insert your usual fears of a robot uprising here.
“When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall,” explain the researchers in a blog post. “On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking.”
“If our algorithms are to develop equally sophisticated behaviours, they too must have the capability to ‘imagine’ and reason about the future. Beyond that they must be able to construct a plan using this knowledge.”
We’ve already seen a version of this forward planning in the Go victories that DeepMind’s bots have scored over human opponents recently, as the AI works out the future outcomes that will result from its current actions.
Rules of the Real World
The rules of the real world are much more varied and complex than the rules of Go though, which is why the team has been working on a system that operates on another level.
To do this, the researchers combined several existing AI approaches together, including reinforcement learning (learning through trial and error) and deep learning (learning through processing vast amounts of data in a similar way to the human brain).
What they ended up with is a system that mixes trial-and-error with simulation capabilities, so bots can learn about their environment then think before they act.
One of the ways they tested the new algorithms was with a 1980s video game called Sokoban, in which players have to push crates around to solve puzzles. Some moves can make the level unsolvable, so advanced planning is needed, and the AI wasn’t given the rules of the game beforehand.
The researchers found their new ‘imaginative’ AI solved 85 percent of the levels it was given, compared with 60 percent for AI agents using older approaches.
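The core behavior, imagining each candidate action in an internal model before committing to one, can be sketched in miniature. The environment model below is hand-written for illustration; DeepMind’s agents learn theirs from experience.

```python
# Minimal sketch of "imagining before acting": the agent rolls each
# candidate action forward in an internal model of the environment and
# discards actions whose imagined outcome is unrecoverable.
# (Illustrative only -- DeepMind's agents learn this model from data.)

def model(state: int, action: str) -> int:
    """Internal world model: predicts the next state. -1 = unrecoverable."""
    if state == 0 and action == "push_crate_into_corner":
        return -1            # imagined consequence: puzzle becomes unsolvable
    if action == "push_crate_toward_goal":
        return state + 1     # imagined consequence: progress
    return state             # "wait" leaves things unchanged

def choose_action(state: int, candidates: list[str]) -> str:
    # Imagine each action's consequence before committing to any of them.
    safe = [a for a in candidates if model(state, a) != -1]
    # Among safe actions, pick the one whose imagined next state is best.
    return max(safe, key=lambda a: model(state, a))

actions = ["push_crate_into_corner", "push_crate_toward_goal", "wait"]
assert choose_action(0, actions) == "push_crate_toward_goal"
```

The real agents combine such imagined rollouts with the trial-and-error learning described above, rather than relying on a fixed, hand-coded model.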
“The imagination-augmented agents outperform the imagination-less baselines considerably,” say the researchers. “They learn with less experience and are able to deal with the imperfections in modelling the environment.”
The team noted a number of improvements in the new bots: they could handle gaps in their knowledge better, they were better at picking out useful information for their simulations, and they could learn different strategies to make plans with.
It’s not just advance planning – it’s advance planning with extra creativity, so potential future actions can be combined together or mixed up in different ways in order to identify the most promising routes forward.
Despite the success of DeepMind’s testing, it’s still early days for the technology, and these games are still a long way from representing the complexity of the real world. Still, it’s a promising start in developing AI that won’t put a glass of water on a table if it’s likely to spill over, plus all kinds of other, more useful scenarios.
“Further analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about – and plan – for the future,” conclude the researchers.
The researchers also created a video of the AI in action, which you can see below:
You can read the two papers published to the pre-print website arXiv.org here and here.
A number of research teams are already working to develop stable nuclear fusion. The goal may seem simple enough in theory — harnessing the same energy that powers the Sun — but attaining it has proven to be rather difficult. For one, sustaining a stable nuclear fusion reaction is tricky, as it requires playing with variables that aren’t easy to manipulate. That’s why Google Research is working in tandem with nuclear fusion company Tri-Alpha Energy to help simplify the process.
Their solution is a computer algorithm, dubbed the Optometrist algorithm, that can speed up experiments involving plasma, the core ingredient in a fusion reaction. It’s also the most challenging aspect to manipulate. “The whole thing is beyond what we know how to do even with Google-scale computer resources,” Ted Baltz, a senior software engineer from the Google Accelerated Science Team, wrote in a Google Research blog.
“We boiled the problem down to ‘let’s find plasma behaviors that an expert human plasma physicist thinks are interesting, and let’s not break the machine when we’re doing it’,” Baltz added. “This was a classic case of humans and computers doing a better job together than either could have separately.”
The Optometrist algorithm was applied to Tri-Alpha Energy’s C2-U machine, where it was able to perform experiments that usually took a month to finish in just a few hours. The result, which was published in the journal Scientific Reports, was a 50 percent reduction in system-induced energy losses that increased total plasma energy. “It was only for about two milliseconds, but still, it was a first!” Baltz wrote. The next step is reaching that critical threshold necessary for nuclear fusion to occur and to stabilize.
A Truly Renewable Energy
Fusion research has garnered significant attention in recent years as scientists have recognized its potential as a renewable and clean energy source. Pound for pound, nuclear fusion could generate about four times the energy that nuclear fission produces (one fission event yields about 200 MeV of energy, or about 3.2 × 10⁻¹¹ watt-seconds). It’s no wonder, then, that fusion is considered the “holy grail” of energy research.
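The figure in parentheses is just a unit conversion: at about 1.602 × 10⁻¹³ joules per MeV, a 200 MeV fission event releases roughly 3.2 × 10⁻¹¹ joules (one joule being one watt-second):

```python
# Converting a fission event's energy yield from MeV to joules.
MEV_TO_JOULES = 1.602176634e-13  # 1 MeV expressed in joules

energy_joules = 200 * MEV_TO_JOULES
# 200 MeV comes to about 3.2e-11 J, matching the figure quoted above.
assert abs(energy_joules - 3.2e-11) < 1e-13
```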
Recent questions in fusion research have been concerned with finding ways to stabilize the plasma that powers it — not an easy feat, since it requires temperatures of over 30 million degrees Celsius to sustain. Thus far, some researchers have proposed building better fusion reactors, and others are looking at the possibility of using a different base for plasma. Instead of the usual hydrogen, deuterium, or helium, physicists from the Princeton Plasma Physics Laboratory have been tinkering with argon-based plasma.
Where does Google’s algorithm fit in? Well, it could significantly shorten the amount of time needed for each of these experiments. “Results like this might take years to solve without the power of advanced computation,” Baltz said. By running computational models alongside human experiments, the Optometrist algorithm can breeze through every possible combination for nuclear fusion to work.
Tri-Alpha Energy has already ditched the C2-U machine in favor of the more advanced Norman, which achieved first plasma earlier this month. The company is set to build a demonstration power generator pending more successful experiments with Norman.
As technology continues to advance at unprecedented speed, it sometimes seems as though the internet can’t keep up. Improving internet speeds would allow emerging technology to flourish and accelerate research that’s already ongoing. Engineers at Google understand the desire for faster internet well, and have taken it upon themselves to ramp it up. The company plans to achieve this with a new congestion control algorithm, BBR (Bottleneck Bandwidth and Round-trip propagation time).
Congestion control, a part of the TCP standard since the 1980s, detects when a network is overwhelmed and responds by slowing down data transfers. It might not seem all that significant, but it plays a huge role in internet speed, and the traditional algorithms rely on packet loss as their only congestion signal. BBR instead builds a model of how fast the network can actually deliver data and paces transfers accordingly. Companies and individuals that use Google’s Cloud Platform can already access BBR and the speed that comes with it, but Google wants to take the algorithm one step further by releasing it publicly and incorporating it into the TCP transmission standard. That move would have a ripple effect across the entire internet.
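The heart of BBR can be sketched in a few lines: rather than backing off only when packets are lost, it tracks two running estimates, the bottleneck bandwidth (the maximum recent delivery rate) and the round-trip propagation time (the minimum recent RTT), and keeps roughly their product, the bandwidth-delay product, in flight. A highly simplified model of that idea, not Google’s implementation:

```python
# Greatly simplified model of BBR's two running estimates.
# Real BBR (shipped in the Linux kernel) also cycles pacing gains,
# probes for more bandwidth, and handles many edge cases omitted here.

def bbr_estimates(samples):
    """samples: list of (delivery_rate_bps, rtt_seconds) measurements."""
    btl_bw = max(rate for rate, _ in samples)   # bottleneck bandwidth estimate
    rt_prop = min(rtt for _, rtt in samples)    # propagation delay estimate
    bdp = btl_bw * rt_prop                      # bandwidth-delay product
    # Pace at the estimated rate, keeping about one BDP of data in flight.
    return {"pacing_rate": btl_bw, "inflight_cap": bdp}

# Example: a 10 Mbit/s bottleneck with a 40 ms propagation delay.
samples = [(9.5e6, 0.042), (10e6, 0.040), (8e6, 0.055)]
est = bbr_estimates(samples)
assert abs(est["inflight_cap"] - 400_000) < 1e-6  # ~400,000 bits in flight
```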
And things will get exciting again, sooner than you know it. This Fall, Apple is expected to reveal a 10th-anniversary iPhone, Google will likely reveal a revamped Pixel smartphone, and Microsoft is expected to hold another one of its regular late-October Surface computer press conferences.
In the meantime, there’s not much to do but reflect on what we’ve learned so far this year about the future of tech. And beyond the hype and the hyperbole, we’re starting to see the very earliest stages of a battle for the next phase of computing.
Because while Apple and Google may dominate the smartphone market today, technologies like augmented reality present whole new platforms with no clear winner. So Amazon, Microsoft, and Facebook, having missed out on owning a mobile platform, are doing their damnedest to hasten the end of the smartphone — and the end of Apple and Google’s duopoly, while they’re at it.
Skin in the Game
Every major technological shift has created big opportunities for the few entrepreneurs who see it coming early — in the seventies, Apple and Microsoft made big bets that the PC would be a much bigger market than gigantic room-sized mainframes, while the mainframe industry decried the PC itself as a fad. We see who won that one.
Similarly, Microsoft didn’t fully realize the potential of smartphones until well after Google and Apple had proven that potential. Now, Google’s Android is the most popular operating system in the world, full stop, and the iPhone has propelled Apple to record profits and to its status as the company to beat in tech.
Well, it seems like time is a flat circle. Right now, we’re seeing the earliest growing pains of augmented reality and virtual reality — tech that overlays the digital world onto our human senses. It means information, projected into your eyes and ears, as you need it. Why carry a phone when Netflix and WhatsApp are floating in front of you?
Some call it a fad, or just something that’s too new and untested to be considered a real threat to the smartphone. And yet, there’s a veritable arms race to build these augmented reality platforms of the future.
The net result is a race to build whatever is going to do to the smartphone what the PC did to the mainframe. What these companies all have in common is that they missed the boat on building smartphone operating systems of their own. Now, it’s on them to build whatever comes next.
Apple and Google are well aware of the threat and are not standing still.
Apple has ARKit, a system for building augmented reality into iPhone apps using the phone’s built-in camera. It’s technologically robust and easy enough to use that developers love it, giving Apple a nice foothold in augmented reality. If and when Apple releases smart glasses, those apps will come right over.
Google has various augmented reality efforts in the works, including Project Tango. And although the first version of the Google Glass headset flopped, if Google figures out how to revamp the device, it will have a vehicle to extend Android into the AR realm.
In the meantime, as we appreciate all the new hardware and software goodies coming out later this year, keep the perspective that everything we’re seeing now is the first salvo in a computing war that will rage for the next decade and beyond.
Whether they actually make it to fruition or never see the light of day, the following projects have one thing in common: They aim high and try to make science fiction a reality today. So let’s have a look at some of the most spectacular attempts at turning the technological landscape on its head, and where these moonshots are now.
One of Google’s most ambitious “moonshot projects” is Project Wing. The concept of delivery drones is nothing new — we have seen Amazon’s attempt to make short-haul deliveries via flying drones a reality, but obstacles in the form of security and logistical concerns have slowed progress down significantly. Project Wing has been testing different vehicle types and sophisticated navigation systems since 2014. Their aim is to build an entirely new commerce system that takes place in the sky, transporting anything from consumer goods to medicines, while lowering its carbon footprint in the process.
Transporting clean energy to remote places can often be inefficient, if not nearly impossible. Makani is developing “energy kites” that fly at altitudes where stronger winds can generate significantly more renewable energy than conventional wind turbines. To keep up with increasing energy demands, conventional turbines are becoming larger, heavier, and more complex, causing their energy output gains to eventually plateau. Makani’s kites, on the other hand, should theoretically be able to scale to meet these increased demands.
Bionic eyes and enhanced holographic vision have always seemed like a far-fetched dream of the future, but Alphabet-owned company Verily is trying to change that. Its Smart Lens Program aims to let wearers monitor their blood sugar levels and correct farsightedness simultaneously. All the electronics – from circuits to sensors – are worn directly on the surface of the eye. The project still faces its fair share of hurdles, such as miniaturizing the power source so that it, too, can be integrated into the lens.
Project Loon is perhaps one of the most ambitious Google X projects. It aims to bring internet connectivity to the two-thirds of the world’s population that don’t have access to it, all thanks to the use of solar-powered balloons. Each balloon is about the size of a tennis court and has to be able to withstand 100 km/h (60 mph) winds and cold temperatures as low as -90C (-130F). It will spend up to 100 days at 20 km (12.4 miles) above the Earth’s surface, using wind currents to position itself for optimal coverage.
Renewable energy has been created before with the help of the sea in the form of tidal energy. But Google X’s latest project called Foghorn is trying something very different. It aims to develop a new kind of fuel using seawater to reduce the world’s over-reliance on fossil fuels. This “sea fuel” would not only be carbon-neutral, it would also reduce the amount of carbon dioxide emissions of conventional transportation methods. The goal is to take advantage of an electrochemical process that pulls the carbon out of seawater, which then can be combined with hydrogen to form hydrocarbon-based fuel.
Storing and analyzing genomic information requires a significant amount of processing power. Google has tackled this major hurdle by launching a cloud-based service that offers an accelerated, scalable, and secure way to analyze DNA.
Groundbreaking artificial intelligence company DeepMind was acquired by Google in 2014 and has been at the forefront of its field. DeepMind’s stated purpose is clear: “to use intelligence to help make the world a better place.” Some of its ongoing projects include DeepMind Health, a collaboration with the National Health Service in the UK that serves to accelerate and improve treatment of patients. Since 2015, as part of its AlphaGo project, DeepMind has also repeatedly beaten the world’s reigning champions at the board game Go.
Capturing human intent from small movements of your hands and translating it into a digital language is Project Soli’s main objective. While the reality of completely “touchless” control of wearables and other electronics is still far off, the project has made big strides by harnessing the power of radar and its positional accuracy. Not only is radar very accurate, it can be miniaturized very easily and features no moving parts whatsoever — perfect for wearable integration.
Having to get a completely new smartphone every two years is not only very expensive, it’s wasteful as well. Project Ara was hoping to revolutionize the way the components of your next smartphone would communicate with each other, offering you an option to upgrade only parts of your phone. Unfortunately, the project never got past the prototyping stage and was shut down in September of last year.
Google is maintaining its edge in the world of quantum computing. Its 20-qubit processor is currently undergoing tests, and the company appears to be on schedule to have its working 49-qubit chip ready by the end of 2017 as promised. Until it began trialing the 20-qubit chip, Google’s most powerful quantum chip was the 9-qubit effort from 2015.
Traditional computer bits are binary, only existing as either 0 or 1; they’re like light switches that are either on or off. Qubits, on the other hand, can be 0 or 1 like regular bits, but can also have quantum properties that allow them to exist in a superposition where they are both 0 and 1 simultaneously. This makes qubits potentially far more powerful, because instead of figuring something out by trying each option one by one, they can simultaneously compute more than one possibility.
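The superposition idea can be made concrete with a tiny state-vector simulation: a qubit is a pair of complex amplitudes, and applying a Hadamard gate to the |0⟩ state yields an equal superposition in which measuring 0 or 1 each has probability 0.5. A minimal sketch:

```python
import math

# A qubit is a pair of complex amplitudes (a, b) for states |0> and |1>;
# measuring it yields 0 with probability |a|^2 and 1 with probability |b|^2.
def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state vector."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

ket0 = (1 + 0j, 0 + 0j)        # qubit prepared in |0>
superposed = hadamard(ket0)    # H|0> = (|0> + |1>) / sqrt(2)

p0 = abs(superposed[0]) ** 2
p1 = abs(superposed[1]) ** 2
assert abs(p0 - 0.5) < 1e-12 and abs(p1 - 0.5) < 1e-12
assert abs(p0 + p1 - 1) < 1e-12   # probabilities always sum to 1
```

Simulating n qubits this way takes 2ⁿ amplitudes, which is exactly why classical machines cannot keep up with a large enough quantum chip.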
Google’s 49-qubit chip will allow it to build a quantum system that can solve problems far beyond the capacity of ordinary computers — Google calls this goal quantum supremacy. The 20-qubit system that the Google quantum computing team is now working on boasts a “two-qubit fidelity” of 99.5 percent; the higher the rating, the fewer errors the system makes. Quantum supremacy demands not only a 49-qubit system, but also sufficient accuracy — a two-qubit fidelity of at least 99.7 percent — which Google is on track to deliver by the end of 2017.
Quantum Computing, Quantum Speed
Google isn’t alone in the quest to advance quantum computing. In 2016, IBM was running a 5-qubit computer, but by May 2017, it was offering the public beta access to its 16-qubit platform for testing purposes. Furthermore, qubit count isn’t the only consideration for achieving working quantum computers; error correction and scaling will also be critical. However, if Google does achieve quantum supremacy, it will be a major step forward.
As promised during its annual developer conference earlier this month, Google launched its search engine for jobs yesterday. Powered by artificial intelligence (AI), this specialized job search uses Google’s Cloud Jobs API, which launched back in 2016 as part of the company’s “AI-first” approach. Instead of requiring users to download a new app, the new AI-powered tool lets job seekers in the U.S. use Google’s existing search function, which is accessible via desktop and mobile.
The new feature, currently available only in English, uses the same simple language Google searches are known for. Simply type “jobs near me,” “teaching jobs,” “writing jobs,” or any other query along these lines. The results page would then show the new job search widget, accompanied by a wide range of jobs — from across several online job boards like LinkedIn, Monster, WayUP, and even Facebook — which you could then refine further depending on your preferences.
Speaking of preferences, once you find a job that interests you, you could turn on notifications that would send you alerts through the Google app or through email when a new job matching your personalized search gets posted. The AI filters out all duplicate listings posted on various job sites. However, Google won’t handle any of the actual application process, and will simply direct you to the existing job application site.
Google hopes to cut the time job seekers spend on searching for jobs that may not be a good fit. As Nick Zakrasek, project manager for this new search function, wrote in a Google blog, “Whether you’re a young adult looking for your first job, a veteran hoping to leverage your leadership experience in civilian life, or a parent looking for a job with better pay to support a growing family, we hope this new experience on Google will help make the job search simpler and more effective.”
Attention all developers, researchers, and enthusiasts: Google has announced that it will be releasing a new API. An API is, simply put, a set of rules and tools to help build software. Google’s new TensorFlow object detection API is designed to make it easier to identify objects in images. The API includes models that are designed to work even on comparatively simple devices, like smartphones.
Simplifying machine learning models is proving to be essential for advancing API and machine learning technologies. We don’t all have massive desktop setups with our own servers capable of handling just about anything. While it’s possible to run them through the cloud, that usually proves to be abysmally slow, and also requires an internet connection. That means that in order to make these models more accessible to the average consumer, they’ll need to be simplified.
Keeping that in mind, Google intends for this new API to be extremely user-friendly, allowing anyone and everyone with a basic computer or smartphone to explore the world of machine learning.
Applying the API
We know that this new API can be used to identify objects by using images, but beyond being amusing, could that actually be useful in our everyday lives? As it turns out — yes, it likely could be. This type of API could lead to advancement in facial recognition, landmark detection, as well as the most obvious — object identification. These seemingly basic tools will continue to become essential in many different fields. From information services to law enforcement and even just daily digital tasks, these seemingly small strides in the progression and simplification of machine learning will only continue to push us forward.
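What “identifying objects by using images” looks like in practice: detection models output candidate boxes with confidence scores, which are then filtered and de-duplicated. This is not the TensorFlow API itself, just a self-contained sketch of that post-processing; the boxes, labels, and thresholds are made up:

```python
# Minimal sketch of object-detection post-processing: score filtering and
# non-max suppression (NMS). Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union: how much two boxes overlap, 0 to 1."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def non_max_suppression(detections, score_thresh=0.5, iou_thresh=0.5):
    """Keep high-scoring boxes, dropping overlapping duplicates."""
    kept = []
    for box, score, label in sorted(
            (d for d in detections if d[1] >= score_thresh),
            key=lambda d: -d[1]):
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score, label))
    return kept

detections = [
    ((10, 10, 50, 50), 0.9, "dog"),
    ((12, 12, 52, 52), 0.8, "dog"),   # near-duplicate of the first box
    ((60, 60, 90, 90), 0.3, "cat"),   # below the score threshold
]
print(non_max_suppression(detections))  # only the 0.9 "dog" box survives
```

Real detection APIs run a neural network to produce the raw boxes and scores; the filtering step above is what turns those raw outputs into a usable answer.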
Aside from Google’s development of the API and launch of TensorFlow Lite, a streamlined version of the machine learning framework, other companies have been creating mobile models, too: Facebook used the tech to build its Caffe2Go framework and subsequently its Style Transfer feature, and Apple released CoreML, which aims to help run these models on iOS devices. Piece by piece, machine learning is moving closer to individual accessibility.
Natural gas may be cleaner burning than other fossil fuels like coal, but leaking methane can cause issues much more serious than those we are mitigating. According to the Environmental Defense Fund (EDF), “…methane leaking during the production, delivery, and use of natural gas has the potential to undo much of the greenhouse gas benefits we think we’re getting when natural gas is substituted for other fuels.”
A partnership between the EDF and Google has uncovered more than 5,500 leaks since trials began in 2012. Equipping Google’s fleet of Street View cars with an array of low-cost sensors has allowed the EDF to collect enough data to make maps of methane leaks for 11 cities.
Methane leaks in Boston. Image source: EDF
Identifying these leaks could have a huge impact on climate change, as the EDF reports that “methane is more than 100 times more potent at trapping energy than carbon dioxide (CO2), the principal contributor to man-made climate change.” Even accounting for its gradual conversion to CO2, methane remains “84 times more potent after 20 years and 28 times more potent after 100 years.”
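To see what those multipliers mean in practice, here is a toy calculation (ours, not the EDF’s) converting a hypothetical methane leak into CO2-equivalent emissions using the figures quoted above:

```python
# Sketch: converting a methane leak into CO2-equivalent emissions using the
# global warming potential (GWP) multipliers cited above.

GWP_20_YEARS = 84   # methane vs. CO2 over a 20-year horizon
GWP_100_YEARS = 28  # over a 100-year horizon

def co2_equivalent(methane_kg, gwp):
    """Kilograms of CO2 with the same warming effect as the given methane."""
    return methane_kg * gwp

leak = 100  # kg of methane from a hypothetical leak
print(co2_equivalent(leak, GWP_20_YEARS))   # 8400 kg CO2e over 20 years
print(co2_equivalent(leak, GWP_100_YEARS))  # 2800 kg CO2e over 100 years
```

Even a modest leak, in other words, carries the near-term warming impact of several tonnes of CO2, which is why finding and fixing leaks quickly matters.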
Google (Clean) Cloud
The maps can help utility companies prioritize the allocation of resources to better address these leaks. The partnership has also expanded the scope of its efforts by measuring overall air quality: two years after the initial program began, the Street View fleet was equipped with an “Environmental Intelligence” mobile platform.
Defending the Earth from climate change has been an uphill battle for decades. The recent move from the Trump administration to withdraw from the Paris climate agreement is only the latest example of this unfortunate reality.
However, efforts like those of Google and the EDF are helping people understand the problem of climate change, ultimately leading to numbers like 70 percent of Americans supporting the Paris accord. These maps can equip environmental activists with hyperlocalized data, enabling them to target specific problem areas in their communities. Such localized efforts can have big impacts despite apathy at the national level.
Tentative Steps Toward Melding Fashion and Technology
The arts and technology are beginning a courtship that may fundamentally affect the way we perceive both of them: in visual art, we have Google Deep Dream, creating hallucinatory vistas on par with the wackier end of Hieronymus Bosch or Alan Aldridge; in music, we have AI composers; virtual reality offers an unprecedented and fully immersive opportunity for filmmakers and writers alike. Fashion, however, is another kettle of fish because it is an art that is lived, occurring at all hours of the day. Additionally, the high fashion industry is a made-to-order one, often based on a limited quantity of materials and intensive crafting — a difficult set of values to integrate with the nature of the technology industry: machines mass-producing goods and materials.
There have been hesitant exchanges at the higher end of fashion — some examples include a Google and Levi’s collaboration called Project Jacquard, which turns fabrics into gesture-controlled surfaces, and Karl Lagerfeld using 3D printing in some of his designs. At the far end of the spectrum is Studio Bitonti; Francis Bitonti, the company’s CEO, said: “Our products should create the next version of the human, not service humanity after it has evolved.” They are responsible for high heels fit to be worn to a dinner party by a Matrix sentinel and bodices that would look natural on an alien queen (although they’re currently worn by Dita Von Teese).
Wearable technology has progressed far beyond the LED t-shirts you used to wear to parties when you were 15, and has now become fully integrated into the lives of those with active lifestyles with the advent of step counters and heart rate recorders. According to an educated guess by Canalys, a research firm, the Apple Watch sold 11.9 million units last year, permeating the market more than any previous smartwatch and setting a precedent for widespread adoption of wearable technology. However, the next development will shift the onus from wearable to worn, stressing integration rather than accessorization. Almost every wearable currently available relies on a cell phone, including that Apple Watch.
Kate Sicchio, an assistant professor of integrated digital media at New York University, predicts the relationship could lead to the elimination of the smartphone itself: “If we look at the history of ubiquitous computing…in the ’90s, all these MIT researchers had backpacks full of laptops, and now we just have this little [rectangle] we keep in our pocket, and soon that’s going to disappear and it just will be a small microcontroller in our garments.”
While this may seem outlandish, we must think that we already consider technology as part of our bodies in many ways: a study published in the Journal of Computer-Mediated Communication states that “cellphone users are capable of perceiving their iPhone as an object of their extended self, which can be negatively impacted (i.e., lessening of self) during separation.” If this is the case, then, it is only natural that we will want to bring our technologies closer and closer to our bodies. Fashion could be the stepping stone that bridges handheld technology and bionics on the way to synthesis between humans and machines.
So what are some of the technologies that companies are dabbling in? Adidas is working on 3D printed shoes, Nike is focusing on shoes that lace themselves (which actually began as a replica of the Mag shoe worn by Marty McFly in Back to the Future Part II), and Google/Levi’s Project Jacquard created a denim jacket that can connect to a smartphone. By touching the jacket sleeve in certain ways, you can control your phone. If, as Diane Kruger famously stated, “what you wear is an expression of who you are,” our growing obsession with iPhones, Instagram, and technology as a whole will be reflected in the direction the clothing and wearable industries take in the coming decades.
Google’s Go-playing artificial intelligence (AI) continues its winning streak at this week’s Future of Go Summit, happening in the historic town of Wuzhen, China. The AI, developed by Google’s DeepMind, beat a team made up of five of the best human Go players in the world, proving that it can go head-to-head with not one, but multiple professional players.
Teams were matched for demo games at Google’s event on Friday. The first match of the day paired AlphaGo with a human player. Two human/AlphaGo teams battled it out, with one pair forcing the other to forfeit the match. Who said AI can’t work together with humans?
Prior to today’s matches, AlphaGo also dueled with the world’s top Go player Ke Jie for a second time. But, just like in Tuesday’s first match, Google’s AI proved to be more than a match for humanity’s best Go player. A third and final match occurred on Saturday.
Rather than showing the stress of a traditional all-human match, commentators noticed, the professional players seemed to enjoy observing and studying AlphaGo’s moves. Without that element of human competition, the players acted differently, and some even adapted their play style to follow AlphaGo’s moves.
Apart from the professional Go players, the summit is also attended by a number of AI experts keen on learning about the problem-solving potential of this AI — and problem-solving is what playing Go is really about. Seeing how AI can work hand-in-hand with human beings is promising and informative, at least as far as playing Go is concerned.
This week, China is hosting one of the most anticipated conferences on artificial intelligence (AI). The historic Chinese town of Wuzhen is the setting Google chose for its Future of Go Conference, which features its Go-playing AI, AlphaGo, as well as China’s best Go players and a number of leading AI experts.
Over the course of five days, the conference will showcase a number of matches between AlphaGo and the top human players of this ancient Chinese board game. However, Google’s AI has already taken the lead, winning the first of three games against 19-year-old Chinese prodigy and 9th-dan player Ke Jie.
“I think everyone recognizes that Ke Jie is the strongest human player,” Michael Redmond, 9th-dan professional and commentator, told reporters before the game started. Nevertheless, AlphaGo won Tuesday’s match, although only by half a point. “Huge respect to Ke Jie for playing such a great game and pushing AlphaGo to its limits,” Demis Hassabis, DeepMind co-founder and CEO, said at a post-match press conference.
During the remaining days of the conference, AlphaGo and Ke Jie are scheduled for a second round on Thursday and a final round on Saturday. Friday will feature a match between two human players pitted against each other, each paired with AlphaGo, and a team match in which five players go up against Google’s AI.
This complex sensor can monitor pretty much all of the activity in your home, by sensing temperature fluctuation, electricity use, sounds, movement, and duration of events. It’s a foundation on which to build truly “Smart” homes, where activities are tracked, remote-controlled, and automated as the user sees fit. For now, it’s a great way to monitor the events in a busy home, workshop, or business. The SuperSensor secures all user data and doesn’t transmit it to the Cloud.
Vive just announced that it has partnered with Google to create a new untethered VR headset. The headset will be unique because it doesn’t require a PC or mobile device in order to work. Users simply pick up the headset and enter immersive virtual reality. Though many details have yet to be revealed, Vive claims the headset should be available later in the year.
Google has unveiled a major new update to its Tango augmented reality platform — the Visual Positioning System. The VPS senses and animates your world so you can see immersive visuals in your surroundings with just your smartphone.
One of the more noteworthy remarks to come out of the Google I/O ’17 conference this week was CEO Sundar Pichai recalling how his team had joked that it had achieved “AI inception” with AutoML. Instead of crafting layers of dreams like in the Christopher Nolan flick, however, the AutoML system layers artificial intelligence (AI), with AI systems creating better AI systems.
The AutoML project focuses on deep learning, a technique that involves passing data through layers of neural networks. Creating these layers is complicated, so Google’s idea was to create AI that could do it for them.
“In our approach (which we call ‘AutoML’), a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” the company explains on the Google Research Blog. “That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from.”
So far, they have used the AutoML tech to design networks for image and speech recognition tasks. In the former, the system matched Google’s experts. In the latter, it exceeded them, designing better architectures than the humans were able to create.
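The propose-train-evaluate-feedback loop Google describes can be sketched in a few lines. The real controller is a neural network; this stand-in uses plain random search over a made-up architecture space with a mock scoring function, so every name and number here is illustrative:

```python
import random

# Toy stand-in for the AutoML loop: propose a child architecture, "train and
# evaluate" it (mocked here), and use the score to keep the best proposal.

SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "units": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def propose(rng):
    """Sample a candidate child architecture from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Mock reward: pretend deeper/wider models score better (deterministic)."""
    return arch["layers"] * 0.05 + arch["units"] * 0.001

def search(rounds=50, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        arch = propose(rng)
        score = evaluate(arch)       # "trained and evaluated for quality"
        if score > best_score:       # feedback informs what we keep
            best, best_score = arch, score
    return best, best_score

best, score = search()
print(best, score)
```

In Google’s system the feedback actually updates the controller so its *next* proposals improve, rather than just keeping the best candidate as this sketch does.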
AI for Everything
AI that can supplement human efforts to develop better machine learning technologies could democratize the field as the relatively few experts wouldn’t be stretched so thin. “If we succeed, we think this can inspire new types of neural nets and make it possible for non-experts to create neural nets tailored to their particular needs, allowing machine learning to have a greater impact to everyone,” according to Google’s blog post.
AutoML has the potential to impact many of the other AI and machine learning-driven software products that were discussed at the conference. It could lead to improvements in the speech recognition tech required for a voice-controlled Google Home, the facial recognition software powering the Suggested Sharing feature in Google Photos, and the image recognition technology utilized by Google Lens, which allows users to point their phone at an object (such as a flower) in order to identify it.
Truly, AI has the potential to affect far more than just our homes and phones. It’s already leading to dramatic advancements in healthcare, finance, agriculture, and so many other fields. If we can use an already remarkable technology to actually improve that same kind of technology, every advancement made by humans can lead to machine-powered advancements, which lead to better tools for humans, and so on. The potential of AI then draws to mind the title of another sci-fi film: limitless.
It’s that time of year again: Google I/O, the company’s annual developer conference. On Wednesday, the showcase started with a keynote from CEO Sundar Pichai, who reiterated the company’s latest approach for all that it does: artificial intelligence (AI). Google does a lot, from its original search engine function to email, video services, and mobile software — and the company wants AI at the helm of it all.
Pichai highlighted this “AI-first” focus, something he first mentioned at last year’s I/O. Now, Google has given the rest of the world a glimpse of how machine learning will work behind every platform it has.
An AI Chip in the Cloud
Of course, no amount of AI and machine learning-based tech can work without specialized processors to run it. That’s why Google launched the second generation of its tensor processing unit chips, called Cloud TPU. These new TPUs will be accessible to anyone for running and training artificial neural networks through Google’s cloud computing platform.
Much of Google’s research on AI is conducted by DeepMind, a company under its Alphabet group. But with all the advances being made in the discipline, Google still needs a platform to put resources together, including research, tools, and applied AI. Google.ai is precisely that, a way to democratize its AI research.
Machine Learning for Images
Image recognition is one of the first beneficiaries of machine learning development. There isn’t a shortage of algorithms designed to perform visual recognition tasks. Now, Google wants to take this to a whole new level by bringing its search engine expertise to your camera.
Pichai introduced Google Lens, which is essentially a way to search the internet using your smartphone’s camera. You take a picture and Lens tells you what it is. “[I]t’s a set of vision-based computing capabilities that can understand what you’re looking at,” Pichai explained, “and help you take action based on that information.”
It’ll be available initially as part of Google Photos — to scrape through your old photos for a bunch of data — and Google Assistant, which will serve as your primary way to interact with Lens.
Google Home Gets an Upgrade
As the future of smart home devices begins to take shape — thanks in large part to devices like the Amazon Echo — Google doesn’t want to be left behind. So, it launched the second iteration of Google Home, no longer just a little smart speaker that can play music. Now, it also offers proactive assistance, hands-free calling, and visual responses, among other features.
A Search Engine for Jobs
Further specializing its search engine prowess, Google is bringing its power to people in the U.S. looking for jobs that will suit them, and helping employers find the employees they need. “46% of U.S. employers say they face talent shortages and have issues filling open job positions,” Pichai explained. “While job seekers may be looking for openings right next door – there’s a big disconnect here. […] We want to better connect employers and job seekers through a new initiative, Google for Jobs.”
Nothing can snap you out of an immersive virtual reality (VR) experience faster than tripping over a cord, which is why VR companies have been scrambling to develop wireless systems. Google is taking that freedom a step further, working to develop a headset that’s not only wireless, but also works anywhere you want to go.
The company has reportedly been working on the stand-alone VR system for more than a year now. The goal has been to create a headset that could handle everything needed to deliver a stunning virtual experience sans PC, phone, or gaming console — not easy when you consider the whole thing needs to also be comfortable to wear.
This week, Variety reported that the world might finally get a glimpse of this standalone headset in action at Google’s I/O developer conference. Sure enough, yesterday afternoon a standalone VR headset was announced at the event, but Google’s name wasn’t the only one attached to it.
According to Backchannel, a “reference model” of the device has already been created with the help of Qualcomm, and the final commercial version should be ready in the next few months. It’s expected to cost in the mid-hundreds range, making it financially comparable to the already released Oculus Rift and HTC Vive.
Google hasn’t been the only company to pursue powerful standalone VR systems — Facebook is working on its own, as is Samsung. While a release date for Google’s system is still forthcoming, clearly we won’t have to wait much longer for the virtual world to reach that next level of freedom.
While Larry Page was busy bringing his “flying car” to reality, another Google co-founder, Sergey Brin, was apparently on a similar path, trying to create a vehicle that would take to the skies as well.
However, unlike Page’s land/air hybrid vehicle, the Kitty Hawk Flyer, which could hit the market as early as this year, Brin’s project is a giant zeppelin-like airship currently housed at the NASA Ames Research Center. According to a report by Bloomberg, the project was spurred after Brin saw old photos of the USS Macon, an airship built by the US Navy.
This particular prototype could have been created as part of a potential business venture — which would be timely, given that we’re at the cusp of airborne vehicles and flying taxi fleets becoming a reality. Or, it could be simply a passion project for Brin, who apparently has a long-standing fascination with airships. We just don’t know yet.
News of this project has piqued the curiosity of many. New airship technology could potentially help cut delivery and transport costs, given that it can accommodate massive amounts of cargo and be more fuel efficient. But whatever his intentions are for this project, Brin certainly has a few experienced friends he can tap for advice.
In 2014, Google acquired London-based DeepMind, which it has turned into the single largest collection of brainpower and resources focused solely on the development of artificial intelligence. DeepMind has over 250 PhDs working with essentially unlimited funding and resources.
DeepMind has two goals. One, try to solve intelligence. Two, use that to do everything else.
In step one, they are trying to figure out how the human brain allowed us to take over the planet. To do that, they have scores of neuroscientists, psychologists, and cognitive scientists hacking away at our best theories of how our brains work and disseminating that knowledge to computer scientists, who try to recreate those abilities in silicon chips.
In step two they plan to apply that to everything else. Very literally, everything else. If they accomplish this, it will make Google the most powerful entity on Earth.
This is not some clandestine plan being hatched underneath a volcano; it is the explicit goal of the company, and they frequently talk about it openly to anybody who cares to listen. For more, I strongly encourage listening to the latest talk from Demis Hassabis, founder of DeepMind, who lays this plan out clear as day.
Kitty Hawk, the “flying car” company that’s backed by Google founder Larry Page, just released the first video footage of its prototype in action.
The Kitty Hawk Flyer, as the company explains, is a fully electric aircraft that can fly over water and won’t require drivers to have a pilot’s license to fly. The company also asserts that anyone who wants to operate the vehicle will be able to learn how to do so in just minutes.
In the video, which you can view below, the Kitty Hawk Flyer is shown lifting off over a lake. And while it certainly looks a lot more like a jetski with wings—and much less like a flying car—the company says that they have already received approval from the Federal Aviation Administration to fly the first iteration of the vehicle in uncongested areas.
“We’ve all had dreams of flying effortlessly. I’m excited that, one day very soon, I’ll be able to climb onto my Kitty Hawk Flyer for a quick and easy personal flight,” Page said to the New York Times.
Best of all, Kitty Hawk is prepping to bring their vehicle into commercial production by the end of this year (2017). Yes, you will be able to buy one this year; however, when it does become available for sale, it will likely face some stiff competition from other manufacturers.
Ultimately, a host of individuals and companies are working on flying cars. For example, there’s Lilium Aviation, AeroMobil, and PAL-V, all of which have been pushing to get their own versions of an airborne vehicle to market in recent months.
Rumors about Google’s Pixel smartphone are swirling. One currently making the rounds in the South Korean press is that a one trillion KRW ($875 million) investment is in the works for LG Display, popular producer of screens for smartphones and tablets.
According to the Yonhap News Agency, the deal isn’t simply a purchase agreement — it’s a strategic move on two separate levels. Aside from potentially giving the Pixel curved edges courtesy of flexible organic light-emitting diode (OLED) panels, the deal would also limit Google’s dependence on Samsung Display, which is nestled under Google’s rival Samsung and is a supplier for Apple’s iPhones. Both Google and LG have yet to comment on the rumors.
The Pixel, which Google introduced in October 2016, has become quite a formidable contender in the world of smartphones. If these rumors are true, they confirm that Google wants to join smartphone giant Samsung in featuring devices with curved screens, which do seem to be the next step in screen technology for smartphones — according to other rumors, even Apple is moving toward them.
Changing Your World View
In a world in which so many devices rely on screen-based interactions, curved screens aren’t simply a gimmick — they’re a way for companies to reach their ultimate goal of transforming how we interact and communicate with our devices. However, they aren’t the only way.
AlphaGo is clearly a force to be reckoned with in the world of Go, and now, Google is partnering with the best human Go players to explore the AI’s capabilities and how human players could learn from it.
“Clearly, there remains much more to learn from this partnership between Go’s best human players and its most creative A.I. competitor,” according to Google’s DeepMind. “That’s why we’re so excited to announce AlphaGo’s next step: a five-day festival of Go and artificial intelligence in the game’s birthplace, China.”
The “Future of Go Summit,” a collaboration between the Chinese Go Association, the Chinese government, and Google, is scheduled to run from May 23 to 27, 2017. Aside from China’s top Go players, leading AI experts from both Google and China will also be in attendance. The idea is to test AlphaGo’s creativity and adaptability, plus its ability to work in tandem with human players.
“The summit will feature a variety of game formats involving AlphaGo and top Chinese players, specifically designed to explore the mysteries of the game together,” Google’s blog post said. Those include “Pair Go,” a format in which Chinese pros will compete against each other while partnered with an AlphaGo teammate. Then, there’s “Team Go,” during which AlphaGo will compete against a five-person team of China’s top pro players. Lastly, the main event will be a match between AlphaGo and the world’s top ranked Go player, Ke Jie.
AI is advancing rapidly and not just in the realm of games like Go. More and more frequently, we’re seeing AI that can do tasks previously exclusive to humans, such as writing stories, creating movies, composing music, and even conducting behavioral analysis. The exponential growth of AI is due, in large part, to rapid improvements in machine learning algorithms and neural network systems.
The Go festival in May will be another opportunity for AI research to grow. “[W]e will explore how AlphaGo has created new knowledge about the oldest of games, and how the technologies behind AlphaGo, machine learning, and artificial intelligence, are bringing solutions to some of the world’s greatest challenges into reach,” Google said.
Most people are familiar with carbon dioxide (CO2) as a leading cause of global warming, but it isn’t the only one. Methane, while it doesn’t linger in the atmosphere as long as CO2 does, contributes its own fair share as a greenhouse gas, and it is far more potent than CO2 while it lasts. Recent estimates put the amount of methane in the atmosphere at 1,834 parts per billion (ppb), while CO2 sits at around 399.5 parts per million (ppm).
A team led by researchers from Colorado State University (CSU), the Environmental Defense Fund, and Google Earth Outreach came up with a creative way of monitoring these leaks. They detail their project in a study published in the journal Environmental Science and Technology.
The team added pollution trackers to Google Earth’s Street View cars. Each tracker uses an infrared laser that acts as a methane analyzer, capable of identifying methane fumes in real time. Previously, this could only be done in labs using gas chromatography.
“The air contains gases that make it look foggy in the infrared spectrum,” explained Joe von Fischer from CSU. “The laser can scan through colors of infrared light and ‘see’ how much methane is present.” The researchers have tested the device in Indianapolis; Boston; Staten Island; Syracuse; and Burlington, Vermont.
“This is a huge challenge that almost nobody had been thinking about. Now we’re finding out just how widespread these leaks are,” said von Fischer. “The faster you fix them, the bigger the environmental benefits are. But utilities and regulators didn’t have the data to focus their efforts. […] Our goal is to make it faster, cheaper and easier to find and measure methane leaks from natural gas lines to help accelerate crucial repairs.”
Deep learning machines have been generating incredible amounts of buzz in recent months. Their extensive abilities can allow them to play video games, recognize faces, and, most importantly, learn. However, these systems learn 10 times more slowly than humans, which has allowed us to keep the creeping fears of a complete artificial intelligence (AI) takeover at bay. Now, Google has developed an AI that is capable of learning almost as quickly as a human being.
Claims of this advancement in speed come from Google’s DeepMind subsidiary in London. They say that not only can their machine assimilate and act on new experiences much more quickly than previous AI models, but that it will soon reach human-level speeds.
For those unfamiliar with how deep learning works: it uses layers of neural networks to locate trends or patterns in data. If one layer identifies a pattern, that information is sent to the next layer. This process continues until all the information is collected.
Systems can be taught to learn differently depending on many different variables, such as the strength of the connection between layers. Significant changes in one layer can dramatically change how information is transmitted in other layers, or how it is “learned.” Deep neural networks have many layers, so when changes are present, the process of learning can take an incredibly long time.
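The layer-to-layer flow described above can be made concrete with a toy forward pass; the weights below are made-up numbers that a real network would learn through the slow training process being discussed:

```python
# A tiny feedforward pass, illustrating data flowing from layer to layer.
# The weights are made-up numbers; a real network learns them from data.

def relu(x):
    """A common nonlinearity: pass positives through, zero out negatives."""
    return max(0.0, x)

def unit(inputs, weights, bias):
    """One unit of a layer: weighted sum of its inputs, then a nonlinearity."""
    return relu(sum(i * w for i, w in zip(inputs, weights)) + bias)

x = [0.5, -1.0, 2.0]                    # input features
h = [unit(x, [0.2, 0.4, 0.1], 0.0),     # first layer detects one pattern...
     unit(x, [-0.3, 0.1, 0.5], 0.1)]    # ...and another
y = unit(h, [0.6, 0.9], -0.1)           # next layer combines what it receives
print(y)
```

Changing any weight in the first layer changes what the second layer receives, which is the coupling that makes training deep, many-layered networks so slow.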
However, Alexander Pritzel, one of the researchers at Google DeepMind, and his colleagues seem to have figured out a way around this issue. They call it “neural episodic control.”
“Neural episodic control demonstrates dramatic improvements on the speed of learning for a wide range of environments,” the team told MIT Technology Review. “Critically, our agent is able to rapidly latch onto highly successful strategies as soon as they are experienced, instead of waiting for many steps of optimization.” Their method mimics the processes of learning that occur in human and animal brains, replicating what happens first in the prefrontal cortex and then, as a backup, in the hippocampus.
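A heavily simplified sketch of that episodic-memory idea (a stand-in for illustration, not DeepMind’s actual implementation): store the best return ever observed for each state-action pair and reuse it immediately, instead of waiting for many steps of optimization:

```python
# Simplified sketch of episodic memory: remember the best return ever seen
# for each (state, action) pair and act on it at once, rather than waiting
# for slow gradient updates. All states, actions, and returns are made up.

class EpisodicMemory:
    def __init__(self):
        self.table = {}  # (state, action) -> best return observed so far

    def write(self, state, action, ret):
        """Record an experience, keeping only the best return per pair."""
        key = (state, action)
        self.table[key] = max(self.table.get(key, float("-inf")), ret)

    def best_action(self, state, actions, default=None):
        """Pick the action with the highest remembered return, if any."""
        scored = [(self.table.get((state, a), float("-inf")), a) for a in actions]
        score, action = max(scored)
        return action if score > float("-inf") else default

memory = EpisodicMemory()
memory.write("room_1", "go_left", ret=0.0)
memory.write("room_1", "go_right", ret=10.0)  # a highly successful strategy...
print(memory.best_action("room_1", ["go_left", "go_right"]))  # prints go_right
```

The real system combines a memory like this with neural networks that generalize to states it has never seen; the table alone only captures the “rapidly latch onto successful strategies” half of the idea.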
Finding the Future
It seems as though every day some new development in AI technology is being revealed to the world. From altruistic robot lawyers to predictions about the singularity, AI technology has been barreling forward. However, we have not yet reached “true AI.” No robots exist whose AI capacity matches the intelligence of the human brain.
And so, while the realities of job loss as a result of automation and AI are becoming clearer every day, a “true AI” is not predicted to arrive any time soon. However, this development from DeepMind could be a huge stepping stone on the path to that future.
While that might seem like a science fiction nightmare on its surface, the possible benefits are enormous. Applications for advanced AI technology range from health and medicine to agriculture and even scientific research. As AI gets better at learning, it can be taught more and more ways to improve our lives.
If you’re familiar with HBO’s comedy series “Silicon Valley,” then you might know a thing or two about the importance of shrinking file sizes. Google clearly understands the value of file compression as they have developed a new algorithm that takes it to the next level.
Guetzli is an open source JPEG encoder developed by Google Research, and the algorithm it employs can produce JPEGs that are 35 percent smaller while retaining the same level of clarity. To accomplish this, Guetzli trades visual quality for a smaller file size at the quantization stage of image compression.
According to Google, the psychovisual modeling of Guetzli “approximates color perception and visual masking in a more thorough and detailed way than what is achievable” in existing methods. Ultimately, that means these smaller images will look just as good to the average person. In fact, in experiments comparing compressed images, human raters preferred the Guetzli-produced images over those created by libjpeg, even when the latter’s images were a larger file size.
Guetzli’s files are smaller, yes, but it takes the encoder a bit longer than other compression methods to shrink images. However, we’re left with smaller-sized images that don’t sacrifice quality, and these compressed images will shrink the time it takes Google to load websites and services, which is a good thing for both internet users and Google.
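To get a feel for what a 35 percent reduction means at scale, here is some back-of-the-envelope arithmetic. The image size and traffic figures are invented for illustration:

```python
# Savings implied by the claimed 35 percent reduction.
original_kb = 200
guetzli_kb = original_kb * 65 // 100          # 35 percent smaller
print(guetzli_kb)  # 130 KB per image

# For a hypothetical service pushing a million images a day:
images_per_day = 1_000_000
saved_gb_per_day = (original_kb - guetzli_kb) * images_per_day // 1_000_000
print(saved_gb_per_day)  # 70 GB of transfer saved daily
```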
We have been trying to update our clothing for quite some time now. From clothing that can help us with our work out to something that can change to a new design on command, the concept of “smart clothing” isn’t entirely new. But when the biggest names in tech and fashion come together to sponsor a product, heads inevitably begin to turn.
Google and Levi have come together to unveil a simpler way to interface with technology through Project Jacquard. With conductive yarn, interactive textiles, and embedded electronics, this revolutionary jacket features a Bluetooth system that controls your device through gestures directed at the cufflinks.
Whether it’s changing songs, adjusting the volume, or figuring out the best route to your destination — the gesture controls provide the wearer with a simpler way to stay connected on the go while no one watching is any the wiser. The inconspicuous design is modeled after Levi’s trucker jacket, the Commuter Trucker, only underscoring the duality of comfort and control for the user. To top it off, the jacket also comes with its own app.
The Future of Smart Style
After 18 months of development, Project Jacquard has gained some traction. The jacket has impressed many, but there’s no guarantee that it will be a success.
Wearable smart technology has had a difficult debut ever since Google Glass. Even today, investors view the Apple Watch with the same anxiety, and it’s easy to understand why — consumers just don’t see the need for these devices yet.
However, unlike other forms of wearable technology where hardware takes heavy precedence, Project Jacquard was made without “blinking” hardware on the jacket — rather, the textiles and intuitive interface are expected to speak for themselves.
The smart jacket will release this fall at a retail price of $350.
Good news if you’re not an internet bot. Google’s CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is now invisible. This signals that the artificial intelligence (AI) behind the security software has reached a new level of sophistication.
For those of us who aren’t sure what a CAPTCHA is, it’s that section on a website that forces you to stop what you’re doing to prove you’re not a robot. You usually have to type in a string of strange words or solve a simple puzzle. Now, thanks to machine learning, the system no longer requires conscious human input to recognize if you are, in fact, a person.
The development is fueled by what Google is calling an “advanced risk analysis engine.” This advancement means most of us will never know when we are being checked, since only the “most suspicious traffic” will be tested.
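Conceptually, the invisible check boils down to a gate like the one below. The score scale and threshold are hypothetical stand-ins, not Google's actual engine:

```python
def challenge_needed(risk_score, threshold=0.7):
    """Toy invisible-CAPTCHA gate: only traffic the risk engine flags as
    suspicious is asked to solve a puzzle (score scale is hypothetical)."""
    return risk_score >= threshold

# Most visitors sail through without ever seeing a test.
print(challenge_needed(0.2))   # False: an ordinary visitor is never interrupted
print(challenge_needed(0.95))  # True: suspicious traffic still gets challenged
```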
The ability of AI to learn is allowing for significant boosts in its resourcefulness. AI systems have taught themselves to compose music, identify suicidal behavior, and now, in a clear step toward the Singularity, code themselves.
Once machines are capable of true learning, how long will it be until the Turing test and CAPTCHA become silly puzzles AI parents play with their children?
We live in a world where data management has become an increasingly crucial part of most industries — even in health care.
Quick access to patient records can lead to better handling of health cases, as well as more precise treatment. This kind of access, however, is a need-to-know type. Patient records are highly confidential. So, how does one combine easy access and data security?
DeepMind believes it has an answer. In a blog post announcing a tool called Verifiable Data Audit, DeepMind co-founder Mustafa Suleyman and DeepMind’s Ben Laurie laid out the thinking. “Given the sensitivity of health data, we’ve always believed that we should aim to be as innovative with governance as we are with the technology itself,” Suleyman and Laurie wrote. “Over the course of this year we’ll be starting to build out Verifiable Data Audit for DeepMind Health, our effort to provide the health service with technology that can help clinicians predict, diagnose, and prevent serious illnesses – a key part of DeepMind’s mission to deploy technology for social benefit.”
Verifiable Data Audit is simple. As a data audit system, it will give hospitals — and even patients, later on — the ability to see who is accessing health records and what they’re using them for. The ability to oversee medical data is especially important because of its sensitive and personal nature, Suleyman and Laurie argue in the announcement. The audit trail could also help enforce rules around patient consent.
“For example, an organization holding health data can’t simply decide to start carrying out research on patient records being used to provide care, or repurpose a research dataset for some other unapproved use,” Suleyman and Laurie wrote. “In other words: it’s not just where the data is stored, it’s what’s being done with it that counts. We want to make that verifiable and auditable, in real-time, for the first time.”
DeepMind’s audit security tool isn’t exactly blockchain. But, it does have everything that makes blockchain an attractive system for industries, especially the financial sector. Verifiable Data Audit will use cryptographic mathematics, just like blockchain. Every time data is accessed, a new code is generated that covers all previous activity. So, whenever someone tries to access and edit a specific record for dubious reasons, every subsequent record would be messed up. This quickly reveals the misused data.
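The chaining trick is easy to demonstrate. The sketch below is a generic hash chain in the spirit of the announcement, not DeepMind's implementation; the entry strings are invented:

```python
import hashlib

def ledger_append(ledger, entry):
    """Append-only audit ledger: each entry's hash covers the previous
    entry, so tampering anywhere invalidates everything after it."""
    prev_hash = ledger[-1][1] if ledger else "genesis"
    digest = hashlib.sha256((prev_hash + entry).encode()).hexdigest()
    ledger.append((entry, digest))

def ledger_verify(ledger):
    prev_hash = "genesis"
    for entry, digest in ledger:
        if hashlib.sha256((prev_hash + entry).encode()).hexdigest() != digest:
            return False
        prev_hash = digest
    return True

log = []
ledger_append(log, "clinician A read record 123 for treatment")
ledger_append(log, "researcher B read record 123 for approved study")
print(ledger_verify(log))  # True

log[0] = ("clinician A read record 123 for marketing", log[0][1])  # tamper
print(ledger_verify(log))  # False: the chain exposes the edit
```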
“Each time there’s any interaction with data, we’ll begin to add an entry to a special digital ledger,” Suleyman and Laurie wrote. “That entry will record the fact that a particular piece of data has been used, and also the reason why.”
This type of transactional security is what blockchain-based systems provide. Blockchain, simply put, is a digital ledger of transactions that is not governed by a central body and uses cryptography for security. Everything on a blockchain is monitored by everything else on it — as transactions check each other all the time.
Blockchain use had initially been limited to cryptocurrency, but has since expanded to a host of other potential applications, such as payment systems, crowdfunding, and gold investing, just to name a few. Of all blockchain-based tools and systems, perhaps the most popular one is Ethereum.
In short, with blockchain and blockchain-like systems, we are moving toward a more secure future.
I was asked on Quora what Google will look like in 2030. Since that is one of the most important issues the world is facing right now, I took some time to answer it in full.
Larry Page, one of Google’s two co-founders, once said off-handedly that Google is not about building a search engine. As he put it: “Oh, we’re really making an AI.” Google right now is all about building the world brain that will take care of every person, all the time and everywhere.
By 2030, Google will have that World Brain in existence, and it will look after all of us. And that’s quite possibly both the best and worst thing that could happen to humanity.
To explain that claim, let me tell you a story of how your day is going to unfold in 2030.
2030 – A Google World
You wake up in the morning, January 1st, 2030. It’s freezing outside, but you’re warm in your room. Why? Because Nest – your AI-based thermostat – knows exactly when you need to wake up, and warms the room you’re in so that you enjoy the perfect temperature for waking up.
You go out to the street and order an autonomous taxi to take you to your workplace. Who programmed that autonomous car? Google did. Who acquired Waze – a crowdsourcing navigation app? That’s right: Google did.
After lunch, you take a stroll around the block, with your Google Glass 2.0 on your eyes. Your smart glasses know it’s a cold day, and they know you like hot cocoa, and they also know that there’s a cocoa store just around the bend which your friends have recommended before. So it offers to take you there – and if you agree, Google earns a few cents out of anything you buy in the store. And who invented Google Glass…? I’m sure you get the picture.
I can go on and on, but the basic idea is that the entire world is going to become connected in the next twenty years. Many items will have sensors in and on them, and will connect to the cloud. And Google is not only going to produce many of these sensors and appliances (such as the Google Assistant, autonomous cars, Nest, etc.) but will also assign a digital assistant to every person, that will understand the user better than that person understands himself.
I probably don’t have to explain why the Google World Brain will make our lives much more pleasant. The perfect coordination and optimization of our day-to-day dealings will ensure that we need to invest less resources (energy, time, concentration) to achieve a high level of life quality. I see that primarily as a good thing.
So what’s the problem?
Here’s the thing: the digital world suffers from what’s called “The One Winner Effect.” Basically, it means there’s only room for one big winner in every sector. So there’s only one Facebook – the second-largest social media network in English is Twitter, with only ~319 million users. That’s nothing compared to Facebook’s 1.86 billion users. Similarly, Google controls ~65% of the online search market. That’s a huge share when you realize that large, established competitors like Yahoo and Bing split most of the remaining ~35%. So again, one big winner.
So what’s the problem, you ask? Well, a one-winner market tends to create soft monopolies, in which one company can provide the best services, and so it’s just too much of a hassle to leave for other services. Google is creating such a soft monopoly. Imagine how difficult it will be for you to wake up tomorrow morning and migrate your e-mail address to one of the competitors, transfer all of your Google Docs there, sell your Android-based (Google’s OS!) smartphone and replace it with an iPhone, wake up cold in the morning because you’ve switched Nest for some other appliance that hasn’t had the time to learn your habits yet, etc.
Can you imagine yourself doing that? I’m sure some ardent souls will, but most of humanity doesn’t care deeply enough, or doesn’t even have the options to stop using Google. How do you stop using Google, when every autonomous car on the street has a Google Camera? How do you stop using Google, when your website depends on Google not banning it? How do you stop using Google when practically every non-iPhone smartphone relies on an Android operating system? This is a Google World.
And Google knows it, too.
Google Flexes Its Muscles
Recently, around 200 people were banned from using Google services because they cheated Google by reselling the Pixel smartphone. Those people woke up one morning and found out they couldn’t log into their Gmail or access their Google Docs – and if they were living in the future, they would’ve probably found out they couldn’t use Google’s autonomous cars and other apps on the street. They were essentially sentenced to a digital death.
Now, public uproar caused Google to back down and revive those people’s accounts, but this episode shows you the power that Google is starting to amass. And what’s more, Google doesn’t have to ban people in such direct fashion. Imagine, for example, that your website is demoted by Google’s search engine (whose inner workings nobody outside the company knows) simply because you’re talking against Google. Google is allowed by law to do that. So who’s going to stand up and talk smack about Google? Not me, that’s for sure. I love Google.
To sum things up, Google is not required by law to serve everyone, or even to be ‘fair’ in its recommendations about services. And as it gathers more power and becomes more prevalent in our daily lives, we will need to find mechanisms to ensure that Google or Google-equivalent services are provided to everyone, to prevent people being left outside the system, and to enable people to keep being able to speak up against Google and other monopolies.
So in conclusion, it’s going to be a Google world, and I love Google. Now please share this answer, since I’m not sure Google will!
Note: all this is not to say that Google is ‘evil’ or similar nonsense. It is not even unique – if Google takes the fall tomorrow, Amazon, Apple, Facebook or even Snapchat will take its place. This is simply the nature of the world at the moment: digital technologies give rise to big winners.
Ever wonder how Google has been able to coordinate its data centers from all over the world? Well, they found a way to take control of time. Yes, time. And they did so by building the world’s first truly global database: Spanner. This system has given Google a way to store information in dozens of data centers, across millions of machines, spanning multiple continents, and it now underpins most of Google’s services, including AdWords (the company’s primary moneymaker), Gmail, Google Photos, and the Google Play store.
Through Spanner, Google’s engineers have taken control of time in a way never done before. Prior to such a system, communication across servers and data centers took time. Having multiple databases meant information could vary from one machine to another. A transaction at one data center wouldn’t show up at one across the globe as soon as it took place — it needed time. For the services that Google provides, this variance wasn’t an option — they needed a way to make sure that actions and information lined up across the globe.
To solve this problem, Google’s engineers developed a unique timekeeping technology called TrueTime. The engineers equipped all of Google’s data centers with GPS receivers and atomic clocks, and each center would shuttle its time readings to a master server. Those servers would constantly trade their readings with other servers to produce a common time for all data centers. “This gives you faster-than-light coordination between two places,” Peter Mattis, a former Google engineer, explained to Wired.
TrueTime is just one aspect of Spanner, the power of which is unprecedented and seems to defy logic. Through this system, Google can implement changes in one part of its database without contradicting information on the other side of the planet. Spanner also replicates data readily and reliably across multiple data centers and multiple parts of the world, as well as allows for the retrieval of data should one data center fail.
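The core TrueTime trick can be caricatured in a few lines. Instead of a single timestamp, the clock hands back an uncertainty interval, and a commit isn't acknowledged until the interval has safely passed the commit timestamp. The 7 ms bound and the function names below are assumptions for illustration; Google's actual uncertainty and interfaces differ:

```python
import time

EPSILON = 0.007  # assumed clock uncertainty (GPS + atomic clocks keep it small)

def tt_now():
    """Toy TrueTime: return [earliest, latest] bounds on the true time."""
    now = time.time()
    return now - EPSILON, now + EPSILON

def commit_wait(commit_ts):
    """Commit-wait rule: don't report a commit until every clock is
    guaranteed to be past the commit timestamp."""
    while tt_now()[0] <= commit_ts:
        time.sleep(EPSILON / 2)

_, commit_ts = tt_now()      # the latest possible current time
commit_wait(commit_ts)       # blocks for roughly twice the uncertainty
earliest, _ = tt_now()
print(earliest > commit_ts)  # True: ordering is now unambiguous everywhere
```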
Now, Google wants to make this amazing technology available to customers.
Highly Available and Reliable
Spanner is truly essential for a company like Google, and no one else in the tech world, not Microsoft, Apple, or Amazon, has anything quite like it. Google wants to leverage its position by offering its Spanner tech to customers in the cloud computing market. The result? Cloud Spanner, a version of their unique database that is on the cloud.
“If you are struggling with the scale of your transactional database — you will go to a sharded database, or NoSQL,” Google’s Deepti Srivastava told TechCrunch. “If you’re at that stage where you have to make those trade-offs, Spanner is the way to go. You are already doing work to use one of those systems. We try to make that trade-off as simple as possible.”
Some worry that very few companies actually have database needs similar to Google’s, so Cloud Spanner might not have that big of a market. Plus, Cloud Spanner won’t be free, with a starting price of $0.90 per node per hour (including replication) and $0.30 per GB of storage per month. But Spanner is powerful, and it is unique. “If they offer it, people will want it, and people will use it,” Peter Bailis, an assistant professor of computer science at Stanford University, told Wired.
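At those rates, a small deployment is easy to price out. The node count and storage figure below are made-up examples:

```python
# Rough monthly bill for a small Cloud Spanner setup at the quoted prices.
node_hour = 0.90          # dollars per node per hour, replication included
storage_gb_month = 0.30   # dollars per GB of storage per month
hours_per_month = 730     # average month

nodes, stored_gb = 3, 500
monthly = nodes * node_hour * hours_per_month + stored_gb * storage_gb_month
print(round(monthly, 2))  # 2121.0 dollars per month
```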
Google is already in talks with large financial institutions for possible Cloud Spanner adoption, and while there are few truly global businesses like Google, the company hopes that Cloud Spanner could help smaller businesses expand later on.
Concerns over artificial intelligence (AI) have been around for some time now, and thanks to a new study by Google’s DeepMind research lab, it seems that this Terminator-esque future of intelligent machines may not be that farfetched.
Using games, a platform that Google’s DeepMind AI is terribly familiar with, researchers have been testing whether neural networks are more likely to cooperate or compete, and if these AI are capable of understanding motivations behind making that choice.
For the research, they used two games with similar scenarios for two AI agents, red and blue.
In the first game, the agents were tasked with trying to gather the most apples (green) in a basic 2D graphical environment. The agents were given the option to tag one another with a laser blast that temporarily removed them from the game. After running the scenario a thousand times, they realized that the agents were willing to cooperate when the apples were abundant, but they turned on each other when the stakes were higher.
The researchers realized that, in a smaller network, the agents were more likely to cooperate. Whereas in a larger, more complex network, the AI were quicker to sabotage one another.
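One way to see why scarcity flips the behavior is a toy payoff comparison. All the numbers below are invented; the real agents learned this trade-off from experience rather than being handed it:

```python
def best_policy(apple_rate, window=4, tag_cost=1):
    """Toy payoff comparison for the gathering game. Sharing splits the
    apple supply; tagging wastes a step but leaves the field to you for
    the rest of the window. Each agent picks at most one apple per step."""
    cooperate = window * min(apple_rate / 2, 1.0)
    defect = (window - tag_cost) * min(apple_rate, 1.0)
    return "cooperate" if cooperate >= defect else "defect"

print(best_policy(2.0))  # apples abundant: sharing costs nothing, so cooperate
print(best_policy(0.4))  # apples scarce: zapping the rival pays, so defect
```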
All is Not Lost
In the second scenario, a game called Wolfpack, the agents played as “wolves” that were tasked with capturing a “prey.” When the wolves are close in proximity during a successful capture, the rewards offered were greater. Instead of going all lone wolf, this incentivized the agents to work together.
In a larger network, the agents were quicker to understand that cooperation was the way to go.
The Google researchers hope that the study can lead to AI being better at working with other AI in situations with imperfect information. As such, the most practical application of this research, in the short term, is to “be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet – all of which depend on our continued cooperation,” the study says.
At the very least, the study shows that AI are capable of working together and that AI can make “selfish” decisions.
Joel Leibo, who was the lead author of the paper, outlines the next steps in an interview with Bloomberg, “Going forward it would be interesting to equip agents with the ability to reason about other agent’s beliefs and goals.”
Artificial Intelligence (AI) is on the attack again. Last night technology staged another trial coup during one of the most-watched television programs in the world, the Super Bowl.
Similar to how a televised story about Amazon’s Alexa ordering a dollhouse caused the device to order dollhouses for viewers, an ad during the big game caused Google Home devices across the country to activate. People turned to Twitter to notify the masses of this phenomenon.
Our Google Home just went a little bonkers with the #GoogleHome commercial #SB51
There seems to be a problem with our AI devices: they just don’t know when to listen. Right now, these incidents range from silly fun to mildly annoying, and this time no one received unwanted toys and cookies (as if such things could ever be unwanted). But devices that are designed to make life easier should, at least, attempt to mitigate annoyances like these.
Unlike Amazon’s Alexa, Google Home does not allow users to change its “wake-up word.” Perhaps advertisers will just have to try to sell this product without annoying their current customers or igniting worries of a robot takeover. Hopefully, future improvements to home AI devices will eliminate these silly annoyances, and perhaps these devices will become a little bit better at listening.
Imagine that it’s so cold outside that your thick, floor-length winter coat isn’t even providing enough warmth. For some people comfortable with embracing freezing temperatures, it might not be so bad. But for the rest of us, there is probably no way that we’d even step outside, not even to get the mail.
That is what January is like in the UK: frigid and unforgiving. But a man named Aaron Puzey decided to slip on his sneakers and ride his bike the entire length of the UK. That’s 900 miles, all while burning 50,000 calories. How did he accomplish this? How did he not freeze to death? Simple: by using virtual reality (VR).
A Bluetooth cadence monitor, an exercise bike, and Samsung’s Gear VR made this virtual experience come to life. For this experience, Puzey even developed his own app called CycleVR, which uses data from Google Street View to show the entire route that he took in 360 degrees.
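A system like this plausibly maps pedal cadence to progress along a Street View route. The constants and function names below are guesses for illustration, not Puzey's actual code:

```python
WHEEL_CIRCUMFERENCE_M = 2.1   # typical road-bike wheel (assumed)
GEAR_RATIO = 2.5              # wheel turns per crank turn (assumed)

def distance_travelled(cadence_rpm, seconds):
    """Convert pedal cadence from the Bluetooth monitor into distance."""
    crank_turns = cadence_rpm / 60.0 * seconds
    return crank_turns * GEAR_RATIO * WHEEL_CIRCUMFERENCE_M

def panorama_index(total_distance_m, spacing_m=10.0):
    """Street View panoramas sit a few metres apart; pick which to show."""
    return round(total_distance_m / spacing_m)

d = distance_travelled(cadence_rpm=80, seconds=60)  # one minute of pedalling
print(round(d, 1))        # 420.0 metres
print(panorama_index(d))  # show the 42nd panorama along the route
```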
The Future of CycleVR
As of now, CycleVR is not perfect, but Puzey is working on improving the app, re-coding the software to make it easier for those with simulation sensitivities. If successful, his invention could be an addition to the emerging VR technology that is transforming the world, up there with easing the pain of patients and providing a realistic experience to advance knowledge of our bodies. VR technology is a new, but quickly blossoming industry that holds incredible potential, now including the capacity to bike for hundreds of miles without ever leaving your living room. While VR technology like this is currently used mostly for games and recreational activities, the potential applications could range from medical advancements to research advantages, and even military uses.
The space race of the mid-20th century was one of the modern era’s greatest periods of concentrated scientific innovation. From putting the first human on the Moon in 1969 to sending the last person to walk on its surface in 1972, the United States was fueled by curiosity and Russian competition to do the seemingly impossible.
It has been more than 44 years since the last human walked on the Moon, and the most recent Moon landing was conducted by China in 2013 with their Chang’e 3 mission. The U.S. has long since turned its attention over to Mars as the next frontier, but there is still much to be learned from studying the Moon — Google’s betting millions on it.
XPRIZE to the Skies
Back in 2007, Google announced its Lunar XPRIZE competition. The goal of the project was to foster healthy competition to get human technology back on the Moon. Of the 16 teams that made it through last year, only five are left with a chance at the $20 million waiting for them at the finish line. The first team to successfully launch a spacecraft, land it on the Moon, travel at least 500 meters (1,640 feet), and transmit pictures and video from the surface before December 31, 2017, takes home the prize.
According to XPRIZE senior director Chanda Gonzales-Mowrer, “Each of these teams has pushed the boundaries to demonstrate that you don’t have to be a government superpower to send a mission to the Moon while inspiring audiences to pursue the fields of science, technology, engineering, and mathematics.” The competition shifts focus back to the Moon and will hopefully inspire new generations to develop an interest in STEM fields. The competition also embodies a spirit of healthy international competition and cooperation to energize the sciences across the globe.
Imagine the conflicted feelings of the machine learning expert who is creating artificial intelligence (AI) that they know will one day, possibly very soon, be able to create better AI than them. It’s the new age’s way of holding on to the time-honored tradition of having to train your own replacement. Machine learning experts are currently being paid a premium wage due to their limited numbers and the high demand for their valuable skills. However, with the dawn of software that is “learning to learn,” those days may be numbered.
The most prolific minds in AI research from groups such as Google Brain, OpenAI, DeepMind, and university research departments at the nation’s most prestigious tech schools are developing machine-learning systems that can create machine-learning systems. Researchers from Google Brain were able to design software that created an AI system to take a test that measures how well software can process language. The software did better on the test than software designed by humans. So, in a manner of speaking, much like the recently reported zebra shark, AI can reproduce asexually.
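In its crudest form, "software that designs software" is just a search loop over model configurations. The sketch below fakes the expensive training step with a scoring function (the hyperparameters and "sweet spot" are invented); real systems like Google Brain's spend enormous compute on that inner step:

```python
import random

def train_and_score(config):
    """Stand-in for actually training a model and measuring accuracy.
    Pretends the sweet spot is 3 layers and 128 units (made up)."""
    return -abs(config["layers"] - 3) - abs(config["units"] - 128) / 64

def auto_design(trials=50, seed=0):
    """Minimal 'learning to learn' loop: randomly propose architectures,
    evaluate each one, and keep the best — the crudest form of
    automated machine learning."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        config = {"layers": rng.randint(1, 8),
                  "units": rng.choice([32, 64, 128, 256, 512])}
        score = train_and_score(config)
        if score > best_score:
            best, best_score = config, score
    return best

print(auto_design())  # a config near the sweet spot, found with no expert
```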
According to MIT Technology Review, Jeff Dean, the leader of the Google Brain group, sees “automated machine learning” as the most promising research avenue for his team. “Currently the way you solve problems is you have expertise and data and computation. Can we eliminate the need for a lot of machine-learning expertise?”
Ever Expanding Experts
If it turns out that AI can consistently perform at levels comparable to the Google Brain experiment, self-starting AI could lead to faster implementation of the technology. While this prospect is very exciting for tech enthusiasts, growing numbers of people are worried about what increased roll-outs of AI systems could mean for their livelihoods.
Automation is set to disrupt the way the economy, and even capitalism itself, has operated for centuries. In the long run, machines are cheaper than human workers. There’s no need to worry about vacation time, unionization, health insurance, or many other aspects of employment that people need or expect from their employers. However, this cheaper, more effective labor force will come at a great cost.
The biggest sector to be hit by the proliferation of automation will be manufacturing, especially in the developing world. President Obama even mentioned automation in his farewell address. “But the next wave of economic dislocations won’t come from overseas. It will come from the relentless pace of automation that makes a lot of good, middle-class jobs obsolete.” Industry experts tend to agree.
Even more, as this development indicates, not just low-skill jobs are on the chopping block. There are currently AI systems being developed that can replace film editors, songwriters, journalists, and much more. And now, with AI that is capable of creating AI better than humans, we all need to be a little more observant and aware of what’s on the horizon.
Faraday Future impressed with their first vehicle, the FF91; and Tesla began rolling out their Enhanced Autopilot last week. Not to be outdone by two of the biggest names in the self-driving automotive industry, Google has announced they will be testing their fleet of autonomous minivans on public roads by the end of this month.
The vehicles were built under Google’s self-driving car division, Waymo. In collaboration with Fiat Chrysler, the vehicles are essentially Chrysler Pacifica hybrids equipped with Waymo’s proprietary self-driving system.
It’s worth noting that Waymo built the entire system itself, developing integrated software with sensors, radars, and mapping technology intended specifically for self-driving vehicles. This move has not only allowed Waymo to ensure that the whole system works together seamlessly, it has also made it possible for the company to lower costs.
The company clarifies that this doesn’t mean they intend to become an auto parts supplier. Instead, Waymo is focusing on creating partnerships with original equipment manufacturers (OEMs), or ride-sharing services to provide their self-driving systems.
Closer Than We Can Imagine
There’s no shortage of experts willing to tout the impending reality of a self-driving future. Elon Musk has repeatedly said 2017 will be the year that Tesla vehicles will become fully autonomous. Another expert asserts that “kids born today will never get to drive a car.”
Besides the ease and convenience that fully autonomous vehicles can provide, the technology is also poised to make our roads safer.
“Right now, too many people die on our roads – 35,200 last year alone – with 94 percent of those the result of human error or choice. Automated vehicles have the potential to save tens of thousands of lives each year,” President Obama notes in an op-ed piece he wrote for the Pittsburgh Post Gazette.
More than that, 41 percent of all human-error fatalities are the result of “recognition errors,” which the Department of Transportation (DOT) classifies as “driver’s inattention, internal and external distractions, and inadequate surveillance.” Barring computational aberration or failure, automated vehicles simply do not encounter these types of problems, suggesting that an overwhelming majority of traffic incidents are correctable through automation.
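Combining the article's own figures shows the scale of the opportunity:

```python
# Figures quoted above: 35,200 road deaths, 94 percent from human error,
# 41 percent of those from recognition errors.
deaths = 35_200
human_error_share = 0.94
recognition_share = 0.41

human_error_deaths = deaths * human_error_share
recognition_deaths = human_error_deaths * recognition_share
print(round(human_error_deaths))  # 33088 deaths tied to human error
print(round(recognition_deaths))  # 13566 of those from recognition errors
```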
With expert opinion and actual advances in technology combined, it’s hard not to buy into the hype. And Waymo’s latest achievement is just one among many that continue to illustrate how full autonomy for vehicles is indeed on the horizon – and it could be more accessible than we imagine.
The transition from one year to the next is always a little uncertain – an uneasy blend of anxiety and optimism, it’s also a time of retrospection, introspection, and even a little tentative prognostication. And since the latter is our stock-in-trade at Futurism, we believe now is the perfect time to look ahead at what 2017 has in store for us.
Here’s a look at some of the ways the coming year promises to revolutionize our energy future.
2017 may well be the year that some of the most promising emerging energy markets…well, emerge. Foremost among these is Africa, where we’ve seen the spread of pay-as-you-go (PAYG) solar startups—such as PEGAfrica—which provide solar arrays to households in West Africa on credit.
The business model obviates the need for a secure energy infrastructure by combining solar photovoltaics (PV) with energy storage and mobile pay technology—a simple, effective plan for supplying electricity where it’s most needed. PAYG solar is spreading through Africa like wildfire and 2017 promises to see more of it, with new startups getting in on the act, and new technologies refining the business model. Africa could be the new energy frontier, with a renewable energy infrastructure that might just become the envy of the world.
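The PAYG mechanics are simple enough to sketch. The class name, prices, and dates below are invented for illustration, not any particular startup's system:

```python
from datetime import date, timedelta

class PaygSolar:
    """Toy pay-as-you-go solar account: each mobile-money top-up unlocks
    the home system for a number of days, and a lapsed balance simply
    locks it until the next payment — no grid, no credit bureau."""
    def __init__(self, daily_rate=0.50):
        self.daily_rate = daily_rate          # assumed dollars per day
        self.paid_until = date(2017, 1, 1)

    def top_up(self, amount, today):
        start = max(self.paid_until, today)
        self.paid_until = start + timedelta(days=int(amount / self.daily_rate))

    def unlocked(self, today):
        return today < self.paid_until

acct = PaygSolar()
acct.top_up(3.50, today=date(2017, 1, 1))  # a week of power for $3.50
print(acct.unlocked(date(2017, 1, 5)))     # True: lights on
print(acct.unlocked(date(2017, 1, 10)))    # False: locked until next top-up
```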
Major Corporations Go Green
If the use of renewable energy is to become economically competitive, then it’s incumbent on the largest energy consumers to commit to its development and consistent use—and this is just what the major Silicon Valley tech corporations are beginning to do. Google has announced that it plans for all of its data centers to be powered by renewables no later than 2017. Facebook’s targets are more modest, but its newest data center—set to be constructed this year in Los Lunas, New Mexico—will receive 100 percent of its power from renewable energy.
All of this translates into a massive new injection of capital investment in renewable energy technology, which could make 2017 the tipping point for innovation and affordability as major energy companies and startups alike scramble to fill this huge unmet need. At the same time, the aviation giants are bankrolling something of a green revolution of their own—this time involving the use of renewable jet fuels.
Meanwhile, the coming year will see a number of new innovations in the evolution of cleaner, more efficient energy systems as scientists and startups leverage massive national investments in research and development to pioneer novel technologies.
And let’s not leave out fusion research: 2017 could be the year of remarkable new breakthroughs in fusion energy, with startups like General Fusion and Tri Alpha attempting to achieve on a (comparative) shoestring what lavishly funded behemoths like the ITER Project have failed to do.
The Fly in the Ointment
Extrapolating 2017 from the developments in 2016 is all very well and good; but when it comes to forecasting the future, it really boils down to the unanticipated. And there are many variables that could change the equation—perhaps none more important than the incoming Trump administration.
President-elect Trump has signaled a desire to shift the country’s energy policy away from the Obama administration’s commitment to renewable energy—in fact, reinvigorating the fossil fuel industry, particularly coal, was a major cornerstone of his campaign. But if his pledge to upgrade the country’s infrastructure is to bear fruit, it will have to include some degree of renewables, since the increasing efficiency and affordability of clean energy is making it more economically attractive. Ironically, 2017 may see tremendous private and public investment in alternative forms of energy, especially if Trump’s promise to wean the country off its OPEC dependency (holding one’s breath is not advised) is to have any chance of success.
And then there’s the Tesla wildcard—by which we mean that Elon Musk could change the rules of the game at any moment. Last year’s introduction of the Tesla “energy ecosystem” opened up the possibility of a future in which every home becomes a power plant; and we can only expect more similar developments in 2017. The cliché “game-changer” was coined for people just like Musk; look to see him further justify that sobriquet in the coming year.
Last year was full of surprises—some anticipated, others decidedly not. 2017 promises to be no different. Market forces and accelerating research and investment mean the avalanche of disruptive new energy technologies will continue in the new year; we clever little apes will persist in finding extraordinary new ways to eke out more energy to power our thirsty civilization.
So stay tuned to Futurism—we’ve got everything hungry minds need to survive 2017.
Google’s DeepMind is a company working on what could be one of the most significant scientific advances of our era. It focuses on the development of artificial intelligence (AI) systems that could have a broad range of applications, from cybersecurity to healthcare. The primary goal of DeepMind is to develop AI that can solve the most complex problems without first having to be taught how.
A major training/proving ground for DeepMind’s software has been gaming. The company has developed AI that can play 49 different Atari games at expert levels. And in a world first, it created software called AlphaGo, which challenged the world champion of the ancient Chinese game of Go and won.
The latest news out of DeepMind comes back to that historic victory. As a means of testing some upgrades to AlphaGo, the company secretly unleashed the AI on some unwitting Go players. AlphaGo completely dominated the competition. More than 50 games were played and AlphaGo won every single one.
Demis Hassabis, the co-founder and CEO of Google DeepMind, came clean yesterday that AlphaGo was behind the juggernaut. “We’ve been hard at work improving AlphaGo, and over the past few days we’ve played some unofficial online games at fast time controls with our new prototype version, to check that it’s working as well as we hoped.” So, if you happened to get bested while playing Go against players named Magister or Master, don’t feel too bad.
The development of AI that can tackle huge issues with the same ferocity with which AlphaGo takes on the game of Go may not be too far off. Even looking only at DeepMind, and not at the other companies making major strides in the field of AI, there is some very impressive and awe-inspiring work being done.
Even the processes by which the systems are built to learn are fascinating. For example, DeepMind is taking a page out of the Westworld playbook by teaching its software “to dream.” Video games are also being deployed in this area to create auxiliary tasks, which boost the rate at which the software learns. DeepMind is also teaching AI to learn the way humans and other living organisms do: the software learns by doing, with its behavior reinforced through a system of rewards and punishments.
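The reward-and-punishment idea described above is the core of reinforcement learning. The toy sketch below is not DeepMind’s system—it’s a minimal tabular Q-learning example on a hypothetical six-state corridor, where an agent learns by doing (moving left or right) and has good moves reinforced by a reward for reaching the goal and a small penalty for every other step.

```python
import random

random.seed(0)  # deterministic toy run

def train_q_learning(n_states=6, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a toy corridor: the agent starts at state 0,
    earns +1 for reaching the last state, and pays -0.01 per step otherwise."""
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit the best-known action, sometimes explore
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else -0.01
            # reinforce: nudge the action's value toward reward + discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

q = train_q_learning()
# after training, the learned policy should head right toward the goal from every state
policy = ["right" if q[s][1] >= q[s][0] else "left" for s in range(5)]
```

After a few hundred episodes of trial, error, and reward, the value table alone encodes the correct behavior—no one ever told the agent which direction the goal was.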
The applications of software being created at the company are already causing much excitement. An AI was able to teach itself to identify a unique condition that could lead to blindness, much like a well-trained eye doctor. DeepMind has also partnered with the National Health Service (NHS) in the UK to work together on transforming healthcare.
AI has already come a long way and there is no end in sight to the benefits it will likely have in store for the future of humanity. The rapid pace of discovery could mean that such a future is right around the corner.
Google has been working on a self-driving car project for the past seven years, and just last week, it set up a separate company under the Alphabet umbrella to move the project forward. That company, now called Waymo, takes its mission of providing safe, fully autonomous vehicles to a new level with the addition of 100 Chrysler Pacifica hybrid minivans to its test fleet.
The minivans were produced by Fiat Chrysler specifically for Waymo in close tandem with the Alphabet company. This means that the self-driving technologies in these Pacifica minivans were not add-ons — Waymo’s onboard computer power, sensors, and telematics are as much a part of the Pacifica as its electrical, powertrain, and structural systems (including the chassis), all modified and optimized for Waymo’s autonomous driving tech.
Everything took only six months to design and build, according to Chrysler, with engineering teams from both companies working hand in hand in the vehicle’s development. The Pacifica will join Waymo’s fleet of self-driving cars by early next year for more testing on public roads.
Although Google’s self-driving car project had its share of bumps along the road, Waymo presents its autonomous vehicle tech as “a safer driver that is always alert and never distracted.” Indeed, with 95 percent of all traffic fatalities the result of human error, self-driving cars are expected to save millions of lives, so the more variety we get in the types of self-driving vehicles available, the more likely we are to eventually have every car on the road be capable of autonomy.
While today’s virtual and augmented reality (VR/AR) systems continue to fall short of expectations, one mysterious startup has been circling the waters, poised to bring to life the mind-blowing VR/AR experience many of us have been anticipating since we were kids.
That company, Magic Leap, released a demo video last year that came as close to meeting those expectations as any had ever done. Since then, millions of dollars have been invested in the Florida-based startup, but we’ve yet to get word on a release date for an actual system, and most of those who’ve tried out what Magic Leap does have ready for use have been required to sign confidentiality agreements.
Writer Reed Albergotti has been one of the few exceptions, and yesterday, The Information published an article that he penned alleging that Magic Leap has “oversold” its system, using different tech for the demo than what will be used in the final product. Magic Leap’s CEO Rony Abovitz then turned to Twitter to respond — sort of.
Over the course of about 20 hours, he tweeted over two dozen messages about everything from the difficulty of working at a startup to the games that will be played at the Magic Leap launch without ever directly calling out Reed Albergotti or the article. He does, however, refer to “grumpy mouse tech blogger writers” and being a “Veruca Salt,” a reference to the greedy, spoiled character in “Charlie and the Chocolate Factory.”
Check out the tweets for yourself below, and get hyped, because if Abovitz is to be believed, “What’s coming next for [Magic Leap] is the best part.”
Tech startups are hard and intense – but it takes a different breed of awesome to strap yourself into a rocket
With the green revolution in full swing, many institutions are pledging to fully rely on renewable energy. Nations like Costa Rica and Spain have already sourced most of their energy from renewables, or are pledging to do so in the near future.
That’s no small promise. While Google hasn’t been particularly forthcoming about its energy use, it has revealed that it bought 5.7 terawatt hours (TWh) of renewable electricity in 2015, and that was just 44 percent of its power needs.
But this new promise doesn’t really mean that all its centers will be directly powered by renewables. They’ll still get power from a power company, whose electricity source still includes fossil fuels. What the company does is match the amount of electricity it buys from renewable sources with the amount of energy it consumes.
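The matching scheme above makes the scale of the pledge concrete. Using the figures from this article—5.7 TWh purchased in 2015, covering 44 percent of consumption—a quick back-of-the-envelope calculation shows what “100 percent matched” would require:

```python
# Figures from the article: Google bought 5.7 TWh of renewable power in 2015,
# and that covered 44 percent of its consumption.
purchased_twh = 5.7
renewable_share = 0.44

# Implied total consumption, and the extra purchases needed to match 100% of it.
total_consumption_twh = purchased_twh / renewable_share   # ~13 TWh
extra_needed_twh = total_consumption_twh - purchased_twh  # ~7.3 TWh
```

In other words, matching every consumed kilowatt-hour means buying more than double the renewable power Google purchased in 2015—even if the electrons actually flowing into its data centers still come from a mixed grid.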
Given that the tech industry accounts for approximately two percent of all greenhouse gas emissions, enticing companies to go green will mean a lot. Google’s pledge will only serve to push other companies further down this path.
Today, if you ask the Google search engine on your desktop a question like “How big is the Milky Way,” you’ll no longer just get a list of links where you could find the answer — you’ll get the answer: “100,000 light years.”
While this question/answer tech may seem simple enough, it’s actually a complex development rooted in Google’s powerful deep neural networks. These networks are a form of artificial intelligence that aims to mimic how human brains work, relating together bits of information to comprehend data and predict patterns.
Google’s new search feature’s deep neural network uses sentence compression algorithms to extract relevant information from large bodies of text. Essentially, the system learned how to answer questions by repeatedly watching humans do it — more specifically, 100 PhD linguists from across the world — a process called supervised learning. After training, the system could take a large amount of data and identify the short snippet within it that answered the question at hand.
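The end result of that training—“pick the short snippet that answers the question”—can be illustrated with a deliberately simplified sketch. This is not Google’s system: where the real feature uses a deep network trained on linguist-labeled examples to score candidate snippets, the toy below just scores each sentence by word overlap with the question, purely to show the extraction step.

```python
def best_snippet(question, passage):
    """Toy extractive QA: split the passage into sentences, score each one
    by how many question words it shares, and return the best scorer.
    (The real system learns these scores with deep neural networks.)"""
    q_words = set(question.lower().split())
    sentences = [s.strip() for s in passage.split('.') if s.strip()]
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

passage = ("The Milky Way is a galaxy. "
           "The Milky Way galaxy is about 100,000 light years wide. "
           "Stars orbit its center.")
answer = best_snippet("How wide is the Milky Way galaxy", passage)
```

Here the sentence containing “100,000 light years” wins because it shares the most words with the question—a crude stand-in for the learned relevance scoring the article describes.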
Training AI like this is both difficult and expensive. Google has to supply both massive amounts of data for its systems and the human experts the neural network can learn from.
Google and other technology companies like Facebook and Elon Musk’s OpenAI are currently working on better, more automated neural networks, the kind capable of unsupervised learning. Those networks wouldn’t need people to label data before they could learn from it; they could figure it out on their own.
If these companies are successful, a multitude of opportunities would be opened up for humankind. Advanced AI systems could quickly and accurately translate between languages, make our internet more secure, develop better medical treatments, and so much more. The data machines like that could process would change our world permanently.
Tech companies are currently still years away from discovering how to create fully autonomous AI. Nevertheless, that digital voice now answering our search engine queries puts us one step closer.
In an exciting demonstration of the power of artificial intelligence (AI) and the diversity of species, a team composed of two programmers and an ornithologist (an expert on birds) created a map of visualized bird sounds.
Coders Manny Tan and Kyle McDonald worked with ornithologist Jessie Barry to create this visually euphonious interactive map of bird sounds. Tan and McDonald used machine learning to organize thousands of bird sounds from a collection by Cornell University. They didn’t supply their algorithm with tags or even names of the bird sounds. Instead, they wanted to see how it would learn to organize all the data by listening to the bird sounds.
The results were amazing. Their algorithm was able to group similar sounds together. It generated visualizations of the sound — an image that served as the sound’s fingerprint — using a machine learning technique called t-distributed stochastic neighbor embedding (t-SNE), which allowed it to group together sounds with similar fingerprints.
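The “fingerprint” idea can be sketched in miniature. The code below is not the project’s pipeline—it skips t-SNE entirely and uses a crude amplitude-profile fingerprint on synthetic waveforms—but it shows the same underlying principle: recordings with similar fingerprints end up close together, with no tags or names supplied.

```python
import math

def fingerprint(samples, n_bins=4):
    """Crude fingerprint: mean absolute amplitude over n_bins equal chunks
    of the waveform. (Real systems fingerprint spectrograms instead.)"""
    chunk = len(samples) // n_bins
    return [sum(abs(x) for x in samples[i * chunk:(i + 1) * chunk]) / chunk
            for i in range(n_bins)]

def distance(a, b):
    """Euclidean distance between two fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(query, library):
    """Name of the library recording whose fingerprint is closest to the query's."""
    fp = fingerprint(query)
    return min(library, key=lambda name: distance(fp, fingerprint(library[name])))

# Two synthetic "calls": a steady tone and one that swells in volume.
steady = [math.sin(0.2 * i) for i in range(400)]
swell = [math.sin(0.2 * i) * (i / 400) for i in range(400)]
library = {"steady": steady, "swell": swell}

# A quieter copy of the steady call should still land next to the original.
noisy_steady = [0.9 * x for x in steady]
```

t-SNE takes this one step further: instead of matching one query at a time, it lays out thousands of fingerprints on a 2D map so that similar sounds cluster visually—which is what produces the interactive map described above.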
AI isn’t just allowing us to understand our world better, it’s also changing how we interact with it. The prevalence of automated vehicles or unmanned transportation technology is proof of this, with AI learning to become better car and truck drivers, pilots, and even sailors (in a manner of speaking). AI might even venture into space ahead of us.
Obviously, we’re still far from perfecting AI. Even as deep neural networks continually learn, we’re still in the process of developing better systems.
Google DeepMind, the artificial intelligence (AI) research subsidiary of Alphabet, has had considerable applications in the field of medicine and medical research through DeepMind Health. One of its more recent achievements is an eye-scanning AI algorithm that can detect one of the most common forms of blindness.
This algorithm uses the same machine learning technique that Google uses to categorize millions of web images. Like a highly trained ophthalmologist, it searches retinal images and detects signs of diabetic retinopathy — a condition that results from damaged blood vessels in the eye and leads to gradual loss of sight.
According to computer scientists at Google, and medical researchers from the U.S. and India, the algorithm was originally developed to analyze retinal images and wasn’t explicitly designed to identify features that might indicate diabetic retinopathy. It learned this on its own, after having been exposed to thousands of healthy and diseased eyes.
The algorithm was trained on a data set of 128,000 retinal images, each classified by at least three ophthalmologists. It was then tested on 12,000 retinal images, where it successfully identified the disease and how severe it was, matching or even exceeding the performance of experts. The results were published in the Journal of the American Medical Association — the first study involving deep learning ever published in the journal, according to editor-in-chief Howard Bauchner.
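Two small pieces of that evaluation setup can be sketched concretely. The functions below are illustrative, not from the study: one builds a consensus label by majority vote (mirroring the “at least three ophthalmologists” grading), and one computes sensitivity and specificity—the standard metrics by which a screening algorithm is compared against expert graders.

```python
def consensus(grades):
    """Majority vote over ophthalmologist grades (1 = disease, 0 = healthy),
    mirroring how each image was labeled by at least three graders."""
    return 1 if sum(grades) * 2 > len(grades) else 0

def sensitivity_specificity(predictions, labels):
    """Sensitivity: fraction of diseased eyes the model catches.
    Specificity: fraction of healthy eyes it correctly clears."""
    tp = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(predictions, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(predictions, labels))
    return tp / (tp + fn), tn / (tn + fp)
```

“Matching or exceeding experts” in studies like this means the algorithm’s sensitivity/specificity pair sits at or above the points achieved by individual ophthalmologists graded against the same consensus labels.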
The AI eye doctor is in
“One of the most intriguing things about this machine-learning approach is that it has potential to improve the objectivity and ultimately the accuracy and quality of medical care,” says Michael Chiang of the Oregon Health & Science University’s Casey Eye Institute.
While automated detection like this could make diagnosis more efficient and reliable, the algorithm still needs real-world validation: Google is currently conducting clinical trials with real patients, in collaboration with the Aravind Medical Research Foundation in India.
There is also a need to be able to explain how an algorithm such as this one arrives at its conclusions, as Google’s Lily Peng acknowledges: “We understand that explaining will be very important.”
A retinal scanning algorithm capable of performing as well as — or even better than — medical experts can certainly be useful. For starters, it can facilitate diagnosis and treatment in areas where doctors are scarce. AI is truly reshaping the future of medical research.