Within the next decade, planes could travel across the country at hypersonic speeds in less than an hour. All it would take is some boron nitride.
A key challenge for any vehicle traveling at extremely high speeds is the intense heat generated during flight; for example, the now-retired supersonic Concorde experienced temperatures of up to 260°F at its cruising speed of about 1,350 miles per hour. As such, the materials used to build these aircraft must be able to withstand very high heat, in addition to being structurally stable and lightweight. A study conducted by researchers from NASA and Binghamton University investigated the properties of nanotubes made from boron nitride, a compound of boron and nitrogen. The study revealed it could potentially be used to make hypersonic travel (speeds above 4,000 miles per hour) possible.
Currently, carbon nanotubes are used in aircraft due to their strength and ability to withstand temperatures of up to 400 degrees Celsius (752 degrees Fahrenheit). Boron nitride nanotubes (BNNTs), however, can withstand up to 900 degrees Celsius (1,652 degrees Fahrenheit). They can also handle high amounts of stress and are much more lightweight than their carbon counterparts.
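The two temperature limits quoted above can be sanity-checked with the standard Celsius-to-Fahrenheit conversion; the snippet below is just an illustrative check, not part of the study.

```python
def c_to_f(celsius):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Temperature limits cited for the two nanotube materials
print(c_to_f(400))  # 752.0  -> carbon nanotubes
print(c_to_f(900))  # 1652.0 -> boron nitride nanotubes
```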
The Price of Air Travel
The problem with using BNNTs is their cost. According to Binghamton University Associate Professor of Mechanical Engineering Changhong Ke, coating an aircraft with BNNTs would run a very high price tag.
“NASA currently owns one of the few facilities in the world able to produce quality BNNTs,” said Ke. “Right now, BNNTs cost about $1,000 per gram. It would be impractical to use a product that expensive.”
Despite the high production cost, it’s possible that prices will decrease and production will increase once more studies detail the material’s usefulness. Carbon nanotubes cost about the same 20 years ago but now run between $10 and $20 per gram. Ke believes something similar will happen with BNNTs.
The seabin is, as its name suggests, a bin made up of a large fiber net and a dock-based pump. The device is aimed at collecting pollution of all sizes, down to floating debris as small as 2mm in diameter. It’s even capable of collecting oil from the water, an invaluable capability in the event of an oil spill.
The first seabin was installed this month in Portsmouth Harbor in the U.K., where it will be able to immediately start cleaning plastic pollution from its waters.
“Sure, we can’t catch everything right now, but it’s a really positive start,” the device’s creators, Pete Ceglinski and Andrew Turton, told the Huffington Post. “It’s a big mission, but it can be done. In fact, we’re doing it right now.”
Cleaning Plastic Pollution
The seabin works by creating a flow of water into the bin, bringing with it any surrounding debris that is then caught in the net. According to the Seabin Project website, the device can catch 1.5 kilograms (about 3.3 pounds) of debris per day, with the ability to hold up to 12 kilograms (26.5 pounds) at full capacity. The creators estimate each seabin can remove about half a ton of debris every year, the equivalent of collecting about 20,000 bottles or 83,000 plastic bags.
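The Seabin Project’s figures are easy to sanity-check. The constants below are just the numbers quoted above, and the arithmetic confirms that 1.5 kilograms per day works out to roughly half a metric ton per year:

```python
DAILY_CATCH_KG = 1.5   # debris caught per day, per the Seabin Project
CAPACITY_KG = 12.0     # maximum the bin holds before it must be emptied

yearly_kg = DAILY_CATCH_KG * 365
days_to_full = CAPACITY_KG / DAILY_CATCH_KG

print(yearly_kg)      # 547.5 kg, roughly half a metric ton per year
print(days_to_full)   # 8.0 days between emptyings at the quoted catch rate
```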
This technology was so promising that its creators were able to raise $260,000 on Indiegogo to fund its creation.
New research has shown that cancer cells that survive treatment may effectively wake themselves up and resume growing by tapping into the body’s immune system as a power source. This work provides more insight into the puzzle of why, and how, cancer relapses, and has provided key insights into why the immune system may fail to fight cancer effectively. The research also revealed that, by restoring the body’s immune response to proper functioning, immunotherapy might prove effective at preventing relapse of the deadly disease.
The research question for this team was how a relatively tiny number of treatment-resistant cancer cells could become a deadly occupying force after remaining dormant for stretches of time. In an attempt to answer this question, the team studied immune responses in mice. They were watching to see how immune system cells operate before treatment for cancer, after treatment was declared a success, and once tumors returned.
Normally, immune cells trigger inflammation by releasing signals when they sense infection or trauma. This can, in some cases, help the immune system kill cancer cells. However, post-treatment cancer cells subvert these signals. In the study, researchers targeted the response to the post-treatment cells using immunotherapies, and were able to delay — or in some cases prevent — relapse.
The key discovery was that a chemical signal called TNF-alpha can switch roles: it can either serve as an anti-tumor agent, supporting the immune response to cancer cells, or as a relapse-promoting signal. TNF-alpha was also able to trigger the resistant cells, prompting them to instruct the body’s natural killer cells not to attack.
Cells Ignoring the Problem
Specifically, the team found that resistant cancer cells carried high levels of a certain molecule, PD-L1, on their surfaces. The PD-L1 on those cells can interact with PD-1 receptors on T cells, instructing them to ignore the cancer. These results suggest that, after surviving treatment, cancer cells wake themselves up and start growing again using the patient’s immune system. Once this mechanism is better understood, immunotherapy could provide a potential treatment for humans, and that is the next area of research the team plans to pursue.
“Our study finds the body’s own immune system seems to play a crucial role when cancer relapses,” co-author Professor Alan Melcher said in a press release. “The immune system goes from keeping cancer cells in check to awakening and feeding residual cells, while turning a blind eye to their growth.”
“Excitingly, many of the methods employed by treatment-resistant tumours to re-grow and hide from the immune system can be blocked using existing immunotherapies.” The researchers also added that, based on recent supporting data from clinical trials, the potential of immunotherapy to reduce cancer relapse is good.
Co-author Kevin Harrington added: “It is becoming increasingly clear that the immune system is at the core of the puzzle of how we can treat cancer more effectively.”
As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.
To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios. For example, if a driverless car was being forced toward pedestrians, should it run over three adults to spare two children? Save a pregnant woman at the expense of an elderly man?
The Moral Machine was able to collect a huge swath of this data from random people, so Ariel Procaccia from Carnegie Mellon University’s computer science department decided to put that data to work.
In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine’s dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios.
Effectively, Procaccia wanted to demonstrate how a voting-based system could provide a solution to the ethical AI question, and he believes his algorithm can effectively infer the collective ethical intuitions present in the Moral Machine’s data. “We are not saying that the system is ready for deployment,” he told The Outline. “But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI.”
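The core idea, taking many individual judgments and letting the majority position stand in for a collective ethic, can be sketched in a few lines. The dilemma names, choices, and ballots below are invented for illustration, and this is far simpler than Procaccia and Rahwan’s actual method, which learns a preference model that generalizes to unseen scenarios:

```python
from collections import Counter

# Hypothetical ballots: each voter records a choice for each dilemma.
# Dilemma names and votes are invented for illustration.
ballots = [
    {"three_adults_vs_two_children": "save_children", "pregnant_vs_elderly": "save_pregnant"},
    {"three_adults_vs_two_children": "save_children", "pregnant_vs_elderly": "save_elderly"},
    {"three_adults_vs_two_children": "save_adults",   "pregnant_vs_elderly": "save_pregnant"},
]

def aggregate(ballots):
    """Return the majority choice for each dilemma across all voters."""
    tallies = {}
    for ballot in ballots:
        for dilemma, choice in ballot.items():
            tallies.setdefault(dilemma, Counter())[choice] += 1
    return {d: counts.most_common(1)[0][0] for d, counts in tallies.items()}

print(aggregate(ballots))
# {'three_adults_vs_two_children': 'save_children', 'pregnant_vs_elderly': 'save_pregnant'}
```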
This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the doctrine of double effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.
Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions. These researchers believe that aggregating the collective moral views of a crowd on various issues — like the Moral Machine does with self-driving cars — to create this framework would result in a system that’s better than one built by an individual.
However, this type of crowdsourced morality isn’t foolproof. One sample group may have biases that wouldn’t be present in another, and different algorithms can be presented the same data but arrive at different conclusions.
For Cornell School of Law professor James Grimmelmann, who specializes in the dynamic between software, wealth, and power, the idea of crowdsourced morality itself is inherently flawed. “[It] doesn’t make the AI ethical,” he told The Outline. “It makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical.”
For Procaccia, these limitations are valid, and he acknowledges that their research is still only a proof of concept. However, he believes a democratic approach to building a moral AI could work. “Democracy has its flaws, but I am a big believer in it,” he said. “Even though people can make decisions we don’t agree with, overall democracy works.”
Artificial intelligence (AI) has become a central focus for today’s biggest tech companies, including Google, Microsoft, and Amazon, as well as social-networking giant Facebook and even Apple. When these companies talk about AI, they’re mostly referring to developing and perfecting the deep learning systems that power a number of their products and services.
Training machine learning algorithms can be tedious, requiring massive data sets and highly specialized tools. That’s why rivals Amazon and Microsoft have recently teamed up to improve cloud services and to make AI available to more developers.
On Thursday, they unveiled a new open-source deep-learning interface called Gluon. Essentially a deep-learning library, Gluon is designed to bring together the two most important components of a deep-learning system: training algorithms and neural network models. With Gluon, even developers who don’t specialize in AI can build and run machine learning models for their apps and services.
AI for Business
Microsoft and Amazon seem to be following the example of Google, which has its own open-source deep-learning library, TensorFlow, underpinning the company’s cloud-based machine learning platform for developers. Like Google, Microsoft and Amazon are building their businesses on AI. The task is daunting, to say the least, which is why they’ve teamed up on Gluon and plan to open it to other interested partners.
“The potential of machine learning can only be realized if it is accessible to all developers. Today’s reality is that building and training machine learning models requires a great deal of heavy lifting and specialized expertise,” Swami Sivasubramanian, VP for Amazon AI, told TechCrunch in a statement. “We created the Gluon interface so building neural networks and training models can be as easy as building an app. We look forward to our collaboration with Microsoft on continuing to evolve the Gluon interface for developers interested in making machine learning easier to use.”
Microsoft’s Eric Boyd, corporate VP for AI and Research, echoed the same sentiment. “We believe it is important for the industry to work together and pool resources to build technology that benefits the broader community,” he said in a statement.
Antibiotic resistance — the phenomenon in which bacteria stop responding to certain antibiotics — is a growing threat around the world. It’s expected to kill 10 million people annually by 2050.
And it hasn’t been easy to develop new drugs in order to stay ahead of the problem. Many major pharmaceutical companies have stopped developing new antibiotics, and the drugs that are still in development have faced numerous stumbling blocks toward approval.
So some drugmakers are starting to turn to other solutions, including one that’s actually had a fairly long history: phage therapy.
The treatments are made of bacteria-killing viruses called bacteriophages, or phages for short. Discovered in the early 1900s, bacteriophages have the potential to treat people with bacterial infections. They’re commonly used in parts of eastern Europe and the former Soviet Union as an alternative way to treat infections that could otherwise be treated by antibiotics. Because they target bacteria specifically, phages don’t pose much of a threat to human safety.
“There’s huge potential there that regular antibiotics don’t have,” NYT columnist Carl Zimmer told Business Insider in 2015. “I think what we’d actually have to work on is how we approve medical treatments to make room for viruses that kill bacteria.”
A conversation about approval pathways is already underway, with a handful of companies starting to get into the space. The trials, while still in early stages, could one day change the way we confront antibiotic resistance.
A Need For New Options
Dr. Paul Grint, CEO of one small company, AmpliPhi Biosciences, is trying to turn phage therapy into a tool that doctors might one day be able to use alongside antibiotics to treat serious infections. The company is working on phage-based treatments for Staphylococcus aureus, a bug implicated in sinus infections, and Pseudomonas aeruginosa, a bug connected to lung infections in people with cystic fibrosis.
There are a number of reasons why these treatments are gaining some momentum now: for one, there’s a big need for antibiotics. In September, the World Health Organization warned that the world is running out of antibiotics.
“There is an urgent need for more investment in research and development for antibiotic-resistant infections including TB, otherwise we will be forced back to a time when people feared common infections and risked their lives from minor surgery,” WHO Director-General Dr. Tedros Adhanom Ghebreyesus said in a news release.
For phages in particular, there have been a number of advancements that help make it more straightforward for phage therapy to go through the FDA approval process. Grint told Business Insider that includes being able to sequence the bugs, which would help determine that you’re absolutely getting the right phages in treatment.
AmpliPhi also has a way to manufacture the therapy that’s up to regulatory standards set up by the FDA.
Using Phage Therapy in the US
While phage therapy has been around for more than a century, Grint said there’s still a lot of education that needs to happen to get doctors and researchers on board, especially in the US. In July, the FDA and National Institutes of Health hosted a workshop regarding bacteriophages, which AmpliPhi and others participated in.
There are also some researchers like a group at the University of California at San Diego that are researching phage therapy. In 2016, for example, researchers at UCSD used AmpliPhi’s therapy to treat a professor at the university who had a drug-resistant infection.
Even so, the US is treading carefully into the world of phage therapy. For now, AmpliPhi is able to recruit patients under the FDA’s “compassionate use” pathway, making it mostly a case-by-case situation for now when other antibiotics have failed.
The hope is to use that information, along with some phase 1 studies happening in Australia, to gear up for a phase 2 trial in the US. The company is aiming to start that trial in the second half of 2018, meaning it still might be a while before we start using viruses to treat our bacterial infections.
Like all living things, cities have lifespans. London started as a small Roman settlement along the Thames River. It initially encompassed just a few families; today, more than 8.6 million people call the place home. So take a moment and take a journey. Here is a series of maps, paintings, and old-time photographs that show the journey of the British capital from the past to today.
Two recent archaeological excavations, in 1999 and 2010, suggest that there were settlements near London’s Thames River as early as 4500 BC. The area saw a widespread adoption of agriculture in the Neolithic and Bronze Age.
The Romans founded Londinium (now called London) in 43 AD. This artist’s illustration of Londinium in 200 AD shows the city’s first bridge over the Thames River.
From the 7th to 11th centuries, Anglo-Saxons moved into Londinium. Their settlement was laid out in a grid pattern and grew to contain between 10,000 and 12,000 people.
Westminster Abbey, built in the 10th century, is a World Heritage Site and one of London’s oldest and most important buildings. Here it is in a 1749 painting.
William, Duke of Normandy, was crowned King of England there on Christmas Day, 1066 — just after it was completed. By the 11th century, London had the largest port in England. In the 12th century, the English royal court began to grow in size and sophistication and settled in Westminster, a neighborhood in central London.
In 1176, King Henry II commissioned a new stone bridge. Finished in 1284, the original London Bridge would stand for over 600 years. It supported homes and shops — which weighed down its arches over time.
The development of the printing press in the mid-15th century made news available to the entire city and improved literacy levels. Coffeehouses also became popular spots for friendly debates.
In the 17th century, London suffered from the Great Plague, which killed about 100,000 people. In 1666, the Great Fire broke out; it took the city a decade to rebuild.
The city became a major hub for trade throughout the 1700s, and the Port of London expanded downstream.
During the Georgian era (from 1714 to 1830), new districts like Mayfair formed, and new bridges over the Thames encouraged development in South London.
In the mid-19th century, London overtook Amsterdam as Europe’s leading financial center…and the Royal Navy became the world’s leading military fleet.
London was the largest city in the world from 1831 until 1925, when New York City superseded it. The growing population and increased traffic led to the creation of the world’s first local, underground urban rail network in the late 1860s. An extensive sewage system was also constructed.
WWII devastated London starting in 1940. As seen below, civilians hid in underground train stations to get away from air raids, which killed approximately 30,000 Londoners by the war’s end. The city then slowly began to rebuild itself.
The city has maintained its place as a center of global power …
… and today, over 8.6 million people reside there.
One of the most poignant displays of Puerto Rico’s continued disarray? Now, three weeks after Hurricane Maria dissipated, more than 80 percent of the island is still without power. The storm damaged all facets of the power grid (how power is generated, how it is transmitted, and how it is distributed), making the process of repair far more challenging than on neighboring islands, New Scientist reports. Officials estimate that it could take months, or even longer, for citizens to get their electricity back.
That is, unless Elon Musk steps in.
Prompted by a Twitter user, on October 5th, Musk noted that Tesla could get involved in restoring the island’s power. Notably, this power would be clean and renewable:
The Tesla team has done this for many smaller islands around the world, but there is no scalability limit, so it can be done for Puerto Rico too. Such a decision would be in the hands of the PR govt, PUC, any commercial stakeholders and, most importantly, the people of PR.
Musk and Puerto Rico Governor Ricardo Rossello recognized the great potential in the wake of Puerto Rico’s destruction. “Although in the short-term the object would be to bring power to the largest number of people, we shouldn’t sacrifice this opportunity to have an energy system that is resilient, modern, and can be at cutting edge on the global level,” Rossello said in a subsequent press conference.
A Shift In The Business
Neither Rossello nor Musk has provided much detail about what the plan would look like. But on smaller islands, Tesla has installed a microgrid, a distributed network of batteries and solar panels that operates independently of the standard electric grid. The solar panels collect energy when they can; the batteries store that energy for later use.
This kind of distributed system can bring electricity to those residents more quickly than repairing the traditional electrical grid. That’s clearly a good thing.
But could the shift away from traditional systems ultimately punish citizens?
Right now, the government-owned corporation Puerto Rico Electric Power Authority (PREPA) supplies power to all of the island. That power is generated primarily through burning petroleum, natural gas, and coal, which is imported. If more individual homes are outfitted with their own sources of power generation, they will become less reliant on PREPA. This trend is happening elsewhere in the United States, too, from New York to Arizona.
This shift could be financially more disruptive than straightforward privatization, in which government-owned utilities (power, waste management, water) are handed to private companies that run each part of the process. Privately owned utilities are heavily regulated so that they don’t jack up prices and take advantage of consumers, and the combination of regulation and competition among companies answering to profit-seeking shareholders often means that consumers don’t see much difference in how much they pay for power; in some cases, individuals even pay less in privatized systems.
“There have been a lot of studies on the cost of electricity generation for public and private utilities. It makes no difference. There’s a wide range of results, with no real pattern as to whether public or private was better [for citizens],” John Donahue, the faculty chair of the Masters of Public Policy at the Harvard Kennedy School of Government, told Futurism.
“As a customer in a privatized system, you can be confident there will be incentives in place,” Frank Wolak, an economics professor at Stanford University, told Futurism. But, he notes, there are downsides, too: “If you’re working in the privatized system, you won’t do as well. You could lose your job.”
To this end, a push towards individualized power generation could come with a similar trade-off for citizens, at least at first: they might pay less for electricity on a monthly basis, but those employed by the power company might lose their jobs.
“Musk’s pitch might be good news for rate payers, bad news for electricity workers. That’s probably the bottom line,” Donahue said.
Shifting to the microgrid also comes with new risks that weren’t present in the traditional power system. The solar panels and battery packs are expensive; users often need years to recoup their investment. Musk isn’t letting on how much he’s going to charge for Tesla’s systems, or who will be paying for it, but given Puerto Rico’s sizable debt, it might be a tough decision if the government uses an outsized portion of its recovery funds to restore power using Tesla’s tech.
It’s particularly knotty because the island will probably have to rebuild its traditional grid anyway. Batteries can’t yet store energy efficiently enough to guarantee people access to electricity 24/7, Wolak said. So when the Sun isn’t shining, everyone will need backup power from the grid at the same time. The supply and demand of electricity will fluctuate dramatically, causing a headache for the power company, which will need to generate that power only intermittently and can’t tap into a larger grid due to Puerto Rico’s isolation.
“You hear people talk about how Denmark’s electricity is 80 percent renewable. But it’s interconnected with the rest of Europe. So they can install a lot of wind, but if the wind isn’t blowing [Denmark] gets electricity from other regions,” Wolak said. The same thing is happening in California, where natural gas powers homes to make up for solar’s down time. “Puerto Rico is an island. If there’s no sun or wind, there’s no transmission line to Miami.”
“[Installing a microgrid] would effectively amount to discarding a lot of capacity that’s already there,” Wolak said. “It’s not going to be the lowest-cost way to get electricity back up.” Moreover, installing a microgrid in a market of this size has never been done, Wolak said. And it’s a gamble to see if it will work — a politically palatable one, but a gamble nonetheless. “Everyone loves renewables. But this is not something that we have a proof of concept anywhere,” Wolak said. “Maybe it’s not the best time to do it for Puerto Rico as it’s trying to recover.”
For General John W. Raymond of the U.S. Air Force Space Command, the future of space rocket technology lies in reusability and autonomy, and toward this goal, Elon Musk’s SpaceX is already paving the way. Speaking to Bloomberg on Monday, Raymond said that the U.S. Air Force is ready to follow suit and is “completely committed” to launching future missions using pre-flown rockets like SpaceX’s.
The general explained that it would be “absolutely foolish” not to do so, as reusable rockets would drive down the cost of launch missions, something SpaceX has already shown to be effective. “What Elon has done is significantly reduce launch costs,” Raymond said. “That’s driving reduced launch costs across the world.” An autonomous system aboard these reusable rockets further reduces costs, requiring less manpower and cutting turnaround time between launches.
“The folks out at Space and Missile Systems Center in Los Angeles that work for me would be in those dialogues,” the general added, referring to reportedly ongoing talks to certify recycled boosters for military use. “I don’t know how far down the road we’ve gotten, but I am completely committed to launching on a reused rocket, a previously flown rocket, and making sure that we have the processes in place to be able to make sure that we can do that safely.”
“I want everybody to go this way and I think the commercial industry that’s developing is going that way because they’re going to have to, to compete. It will completely transform the way we do launch operations,” Raymond said. For the general, the market is certainly moving in this direction, and it wouldn’t be smart for the U.S. Air Force not to follow. “What we have to do is make sure we do it smartly.”
An international team of scientists has come up with a new explanation for why space occupies three dimensions. The theory hinges upon flux tubes, which consist of quarks held together by gluons.
In normal circumstances, a flux tube linking a quark with an antiquark will disappear when the two particles make contact and annihilate one another. However, flux tubes can stabilize and form knots in certain circumstances, like if they become interlinked with one another. If the early universe was filled with such a network of flux tubes, the amount of energy contained within the structure could explain cosmic inflation, or the exponential expansion of the early Universe.
However, it’s thought that three-dimensional knots like the ones formed by flux tubes will unravel in higher dimensional spaces. As a result, cosmic inflation would have to have been limited to three dimensions, with any further dimensions being small beyond perception, if they do exist.
At this point, the theory is just a model for what could have happened during the earliest period of our universe’s formation. However, it could answer some burning questions about how the cosmos evolved into its current form. The researchers will continue to develop the concept, with the goal of making testable predictions about the universe that could verify its accuracy.
Technology has the potential to radically change how elected officials perform their duties, and at least one political candidate thinks now’s the time to take advantage of that potential. If Camilo Casas is elected to the city council in Boulder, Colorado, this November, he promises he will not make any decision by himself. Instead, he plans to put that power in the hands of the people via a liquid democracy app called Parti.Vote.
Casas built the app himself, and as he explained to Motherboard, he designed it to facilitate a liquid democracy. The idea is that instead of simply hoping their elected representatives support their stance on issues, citizens will be empowered to directly affect policy-making by voting through the app.
If Casas were to be elected, citizens of Boulder would sign up for the app online, with Casas’ team vetting each person to prevent fraud. Casas would then vote according to how the community voted through the app. For example, if more than 50 percent voted “yes” on an issue or policy, Casas would vote the same way, regardless of his personal stance on the issue. Only when the citizen votes resulted in a tie would he decide based on his own beliefs.
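The decision rule Casas describes is simple enough to write down directly. This is a bare-bones sketch; the function name, signature, and vote counts are illustrative, not Parti.Vote’s actual API:

```python
def council_vote(yes_count, no_count, personal_stance):
    """Follow the app's majority; fall back to the council member's own
    stance only on an exact tie, per the rule Casas describes."""
    if yes_count > no_count:
        return "yes"
    if no_count > yes_count:
        return "no"
    return personal_stance  # tie: the representative decides

print(council_vote(612, 380, "no"))  # yes: the majority overrides his stance
print(council_vote(250, 250, "no"))  # no: a tie falls back to his own belief
```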
Casas’ hope is that by effectively handing over his power as an elected official to an app, he’ll be able to create a liquid democracy that is more equitable than the representative one we currently have in the U.S. “I personally am convinced that when you have to lobby a constituency rather than an elected office you will on average get more democratic and consensual outcomes,” he told Motherboard.
For the People?
The idea of using technology to empower voters to directly influence policy has its appeal. If he is elected, Casas’ vote would actually represent the will of the people — the ideal outcome for a democratic system but one that was impractical in the pre-internet world. Democracy would get a more transparent tinge, and citizens could become more invested in policy-making knowing they have the ability to directly influence it.
However, Casas’ proposal isn’t without its risks. For one, technologically savvy citizens could be overrepresented as their smartphone-lacking counterparts metaphorically stay home from the polls. As with anything digital, the possibility of hacking or fraud must also be taken into account.
Still, Casas is convinced that Parti.Vote represents a step in the right direction. “We’re not trying to build the perfect system of democratic participation,” he explained. “We’re trying to improve the system we have.”
He isn’t alone in his belief that we should use technology to create a liquid democracy. The makers of the Sovereign app propose using blockchain to give people a direct say on important issues, while a candidate for New York City’s public advocate post wants to rely on an app called NYSpeaks to solicit direct feedback. Meanwhile, another blockchain-based app called Flux supported 13 candidates in Australia’s 2016 federal elections.
While Casas admits that a win in November is “rather unlikely,” he is drawing attention to an important issue simply by running: We have the technology to improve our political system. Isn’t it time we try using it?
Disclosure: Several members of the Futurism team, including the editors of this piece, are personal investors in a number of cryptocurrency markets. Their personal investment perspectives have no impact on editorial content.
Called a “digital institute,” the Arizona-based institution hopes to provide a quick and affordable way for people to learn new skills applicable to the realm of computer science. These skills would be invaluable to workers who wish to stay relevant and desirable in a market that’s embracing computers and robotics.
In the past, Wozniak has expressed his own concerns about the impact of artificial intelligence, saying machines may eventually “think faster than us and they’ll get rid of the slow humans to run companies more efficiently.” Earlier this month, a survey revealed that a number of IT leaders believe 60 percent of jobs could be replaced by automation by 2022.
Making Tech More Approachable
In addition to speed and affordability, Wozniak explains that Woz U is also meant to make the tech industry less intimidating, and he hopes to prove to people that they’re capable of understanding it.
“People often are afraid to choose a technology-based career because they think they can’t do it,” he said in a statement. “I know they can, and I want to show them how.”
Woz U will launch on October 13 with online programs, and there are plans to build a number of campuses in 30 cities around the world. Initially, there will only be programs training computer support specialists and software developers. In time, the programs will be expanded to cover data science, mobile applications and cybersecurity.
Following that, Woz U will collaborate with K-12 schools to incorporate science, tech, engineering, arts, and math (STEAM) programs, as well as an accelerator program “to identify and develop elite tech talent.”
“My entire life I have worked to build, develop and create a better world through technology and I have always respected education,” continued Wozniak. “Now is the time for Woz U, and we are only getting started.”
Dubai residents looking for a more environmentally friendly alternative to the SUVs and other fossil fuel-powered vehicles operating throughout the city can now call upon one of Uber’s 50 Tesla Model S and Model X vehicles.
Dubai residents aren’t the first to have the option to summon Teslas through Uber. The UberOne service first launched in Madrid, Spain, in December.
Embracing Autonomous Technology
The Dubai Taxi Corporation wasn’t drawn to Teslas simply because the vehicles are all-electric — their future self-driving capabilities were also a factor as they align with the Dubai Future Foundation’s ultimate goals.
“By 2030, 25 percent of all transportation trips in Dubai will be smart and driverless. The strategy is projected to generate economic revenues and savings of up to Dh22 billion a year,” His Highness Sheikh Mohammed Bin Rashid Al Maktoum, Vice President and Prime Minister of the UAE and Ruler of Dubai, said last April.
Dubai continues to solidify its place as a world leader in futuristic transportation. Last November, the city partnered with Virgin Hyperloop One to install a new hyperloop network that is expected to cut travel time from Dubai to Abu Dhabi down to 12 minutes. More recently, the government of Dubai announced a successful test of its autonomous flying taxi, meaning it can move forward with its plans for a drone taxi service.
A new study reveals that a machine learning tool can help identify which breast lesions, already classified as “high-risk,” are likely to become cancerous. The researchers behind the study believe the technology could eliminate unnecessary surgeries. Breast lesions are classified as high-risk after a biopsy reveals they have a higher chance of developing into cancer. Surgical removal is typically the recommended treatment for these lesions due to the increased risk, even though many of them do not pose an immediate threat. In these “less immediate” cases, surgery may be unnecessary, and follow-up imaging or other treatments may be the preferred course of action — but only if there is a reliable way of differentiating between the lesions.
“There are different types of high-risk lesions,” Dr. Manisha Bahl, study author and radiologist at Harvard Medical School and Massachusetts General Hospital (MGH), said in a press release. “Most institutions recommend surgical excision for high-risk lesions such as atypical ductal hyperplasia, for which the risk of upgrade to cancer is about 20 percent. For other types of high-risk lesions, the risk of upgrade varies quite a bit in the literature, and patient management, including the decision about whether to remove or survey the lesion, varies across practices.”
A Machine Learning Solution
Breast imaging experts from MGH and researchers from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) collaborated to study how a machine learning tool could be used to discern which high-risk lesions were more or less likely to turn into cancer. Machine learning allows an artificial intelligence (AI) system to learn from past experience and improve its performance as a result, similar to the way humans learn.
The model, in this case, analyzed traditional risk factors, like lesion histology and patient age, along with a few more novel features like keywords in biopsy pathology reports. The team trained the machine learning model on patients with high-risk lesions confirmed via biopsy who had either had surgery or two-year imaging after the biopsy and high-risk determination. In the group, a total of 1,006 high-risk lesions were identified, and 115 of them, or 11 percent, later developed into cancer.
The researchers trained the model on two-thirds of the high-risk lesions, then tested it on the 335 lesions that remained. The model correctly predicted cancer upgrades in 37 of the 38 lesions, or 97 percent. The team found that, had the model been used, it would have helped to prevent nearly one-third of the surgeries conducted on benign lesions. The machine-learning model also found that the keywords “severely” and “severely atypical” in pathology reports were associated with a greater risk of developing cancer.
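The headline figures are easy to sanity-check. Below is a quick back-of-the-envelope sketch in Python using only the numbers reported in the article; the study's actual model and dataset are not reproduced here.

```python
# Sanity check of the study's reported evaluation figures (article numbers only).

def sensitivity(true_positives: int, actual_positives: int) -> float:
    """Fraction of actual cancer upgrades the model correctly flagged."""
    return true_positives / actual_positives

total_lesions = 1006   # biopsy-confirmed high-risk lesions in the cohort
upgraded = 115         # lesions that later developed into cancer (~11 percent)
test_lesions = 335     # held-out third of the lesions used for testing
test_upgrades = 38     # cancer upgrades present in the test set
flagged = 37           # upgrades the model correctly predicted

print(f"{100 * sensitivity(flagged, test_upgrades):.0f}%")  # prints "97%"
```

Note that 97 percent here is the model's sensitivity on the held-out set (the fraction of true cancer upgrades it caught), not its overall accuracy.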
“Our goal is to apply the tool in clinical settings to help make more informed decisions as to which patients will be surveilled and which will go on to surgery,” Dr. Bahl said in the release. “I believe we can capitalize on machine learning to inform clinical decision making and ultimately improve patient care.” This is just one more example of how machine learning is transforming how we treat, manage, and prevent disease.
A team of researchers at Case Western Reserve University School of Medicine has identified a conserved molecular pathway that controls health and lifespan in nematode worms (Caenorhabditis elegans) — a frequently studied model organism in biological research.
“We find that by artificially increasing or decreasing the levels of a family of proteins called Kruppel-like transcription factors (KLFs), we can actually get C. elegans to live for longer or shorter time periods,” study first author Dr. Nelson Hsieh told Sci News. “Since this same family of proteins also exists in mammals, what is really exciting is that our data suggests KLFs also have similar effects on aging in mammals, too.”
Animal models are essential to biological research. To study a human disease or process, researchers can’t perform most preliminary work on humans for ethical reasons, so they must instead develop a model. While some models can stay in vitro (outside a living body), eventually the disease must be produced in a context that allows study — i.e., in a living body. Animal models, such as nematode worms and mice, enable a much closer approximation of the human response to disease and other processes.
Mice share roughly 99 percent of their genes with humans, and have short lifespans compared to humans and other mammals. Studies with “knockout” mice therefore allow researchers to observe the effects of removing single genes. C. elegans is also useful, especially in aging research, for several reasons. For one, the worms have a short three-day life cycle, allowing researchers to observe many generations quickly. They are very simple organisms, which makes the effects of single proteins much easier to see. And C. elegans’ genome was sequenced by the end of 1998, so scientists have had ample opportunity to experiment with this relatively short, but very well-known, genome.
Lessons Applied to Humans
The research also demonstrated that C. elegans with high levels of KLF proteins lived healthier, longer lives than normal worms. Similarly, mice with elevated levels of KLF proteins were found to exhibit a delay in age-related blood vessel dysfunction. The team determined that the KLF proteins’ function is to control autophagy — a quality-control mechanism that allows cells to clear out debris, such as molecular byproducts and misfolded proteins, that builds up over time, reaching critical mass in old age. Loss of this cleaning and recycling process is a classic sign of aging.
Cells are less able to undertake these recycling functions as they age. Once an unsustainable level of toxic debris builds up, cellular survival is threatened. This is why the team observed worms without KLF proteins dying early, as their cells were no longer able to maintain autophagy. From here, the researchers plan to study how autophagy affects blood vessel function, and develop strategies for targeting KLF proteins in humans.
“The observation that KLF levels decrease with age and that sustained levels of KLFs can prevent the age-associated loss of blood vessel function is intriguing given that vascular dysfunction contributes significantly to diverse age-associated conditions such as hypertension, heart disease, and dementia,” senior author Professor Mukesh K. Jain told Sci News.
“As our population ages, we need to understand what happens to our heart and arteries, as we rely on them to function perfectly later and later on in our lives,” Dr. Hsieh added. “Our findings illuminate what can happen during aging, and provide a foundation to designing interventions which slow these processes.”
After unveiling a concept model for a new line of autonomous vehicles, Toyota expects to start testing these driverless cars in 2020. Sure, carmakers now seem to be testing their self-driving cars all the time, and governments are paving the way for these trial runs. Toyota, however, promises that their 2020 autonomous vehicle tests will be different, as these vehicles will be powered by artificial intelligence (AI).
Toyota is supposedly combining their Concept-i cars with an AI called “Yui” — the product of spending billions on a venture capital arm meant for AI development. Yui isn’t your typical autonomous driving system. Toyota wants their AI to be able to chat with drivers and get to know them better by using their preferences, emotions, and habits, which Yui builds through deep learning.
“By using AI technology, we want to expand and enhance the driving experience, making cars an object of affection again,” said Makoto Okabe, general manager of Toyota’s EV business planning division, speaking to Reuters.
Having just wrapped up filming his new series about Earth’s oceans, Blue Planet II, naturalist Sir David Attenborough offered some thoughts about society’s concerning reliance on plastics. To get an idea of just how serious this issue has become, one study estimated that our oceans will contain more plastic than fish by 2050. Currently, our oceans contain over 269,000 tons of plastic.
The Guardian reported Attenborough as saying humanity holds the future of the planet “in the palm of its hands” at the launch of Blue Planet II. He went on to add that plastic pollution is one of the biggest concerns for Earth’s oceans, alongside the impact of global warming. There’s so much plastic residing in ocean waters now that trace amounts of it have been found in tap water. As concerning as that sounds, it’s currently unclear how harmful this is to humans.
“What we’re going to do about 1.5 degrees rise in the temperature of the ocean over the next 10 years, I don’t know, but we could actually do something about plastic right now,” said Attenborough.
As of yet, there’s no formal plan in place to deal with plastic waste — at least nothing on the same scale as the Paris Climate Agreement to address climate change.
“I just wish we would,” Attenborough said. “There are so many sequences that every single one of us have been involved in – even in the most peripheral way – where we have seen tragedies happen because of the plastic in the ocean.”
Speaking about his experiences while filming Blue Planet II, Attenborough explained how the presence of so much plastic has already begun to affect certain wildlife. “We’ve seen albatrosses come back with their belly full of food for their young and nothing in it. The albatross parent has been away for three weeks gathering stuff for her young and what comes out? What does she give her chick? You think it’s going to be squid, but it’s plastic. The chick is going to starve and die. There are more examples of that. But we could do things about plastic internationally tomorrow.”
Attenborough didn’t suggest how we could reduce plastic pollution, but noted that everyone on the planet has a responsibility to protect the world’s oceans — even if you don’t live close to the water.
“We may think we live a long way from the oceans, but we don’t. What we actually do here, and in the middle of Asia and wherever, has a direct effect on the oceans – and what the oceans do then reflects back on us.”
Elon Musk helped create PayPal, built America’s first viable fully electric car company, started the nation’s biggest solar energy supplier, and made commercial space travel a reality in our lifetime.
In the ongoing war against antibiotic resistance, a new study has identified a protein that acts as a “membrane vacuum cleaner” — an attribute that means it could serve as a new target for antibiotics. The research indicates that the process of purging the outer membrane of Gram-negative bacteria of specific lipids (which requires a particular protein) might be a vulnerability drugs could target. More specifically, drugs aimed at the protein the researchers identified could enhance the effectiveness of existing antibiotics, or even decrease the virulence of many common bacteria such as E. coli.
Gram-negative bacteria have two membranes — one inner and one outer. This new research implicates the outer rather than the inner membrane. The outer membrane is an asymmetrical bilayer composed of inner and outer leaflets. The inner leaflet is made up of phospholipids, and the outer leaflet is made up of mostly lipopolysaccharides, which create a sugar-coated surface that efficiently excludes hydrophobic molecules and resists antibiotics — as well as other compounds that might endanger the bacteria.
However, the outer leaflet requires a cleaning system, because phospholipids from the inner leaflet accumulate in it, creating “islands” that render the outer membrane more permeable to toxic compounds. This, in turn, makes the entire bacterium more vulnerable.
The asymmetry and permeability barrier of the outer membrane must be restored in order to keep the bacterium healthy, which means those phospholipid molecules must be removed. This is the job of the maintenance of lipid asymmetry (Mla) system, which most Gram-negative bacteria have. The focus of the recent research is the MlaA protein, a component of the Mla system.
Newcastle University Professor of Membrane Protein Structural Biology and lead author Bert van den Berg explained in a press release: “Our three-dimensional structures and functional data show that MlaA forms a donut in the inner leaflet of the outer membrane. This binds phospholipids from the outer leaflet and removes these via the central channel, somewhat similar to a vacuum cleaner.”
This new research will aid in our ongoing fight against this critical issue. Professor van den Berg commented in the release, “Our study illuminates a fundamental and important process in Gram-negative bacteria and is a starting point to determine whether the Mla system of Gram-negative pathogens could be targeted by drugs to decrease bacterial virulence, and to make various antibiotics more effective.”
Companies working with artificial intelligence (AI) and related technologies are expected to benefit dramatically from the big changes automation will bring to the world of work. Workers themselves, on the other hand, could be facing a future of unemployment and poverty. To ensure humans still have a place in the workforce of tomorrow, Google has established the Grow with Google initiative.
As company CEO Sundar Pichai explained during yesterday’s Grow with Google announcement event in Pittsburgh, PA, Google plans to distribute $1 billion over the next five years to nonprofits that specialize in training workers and helping new businesses get off the ground.
$10 million from the fund has already been committed to Goodwill. The nonprofit plans to use that money to implement the Goodwill Digital Career Accelerator, which aims to prepare members of the American workforce for jobs that require a high level of technological know-how.
Google will also embark upon a tour of the U.S., visiting libraries and community organizations to dish out training sessions and career advice as part of a commitment to deliver one million hours of employee volunteering over the span of the project.
Google isn’t the only tech giant investing a significant amount of money in programs to ensure that people are ready for the jobs that will be available in the coming years.
In May 2017, Apple announced plans to set up a $1 billion fund to help foster manufacturing roles in the U.S., and the company also has plans to fund coding workshops. Microsoft also recently announced its TechSpark program to help prepare workers for coming economic changes by improving their digital skills and computer literacy.
Automation isn’t guaranteed to devastate the workforce, but it is almost certain to disrupt it. The benefits of the technology are simply too great to ignore, so companies are going to want to implement it whenever possible. Thanks to the likes of Google, we have the opportunity to train the people most likely to be affected by this change, thereby ensuring that nobody gets left behind in this new era of automation.
The Netherlands has expressed a desire to end coal power by 2030, marking the beginning of the end for coal power plants in the European country.
The decision came from the new Dutch government earlier this week, which also announced plans to ban all petrol- and diesel-powered cars by the same year. As reported by Megan Darby of Climate Home, the Netherlands will close all coal power plants by 2030, including three plants built in 2015 that are said to be more efficient than others. Despite their better performance, however, they quickly started to decrease in value in 2016.
In addition to phasing out coal, the Netherlands will also set a carbon floor price and seek deeper carbon cuts to make sure coal’s elimination doesn’t make it cheaper for companies to use coal elsewhere.
Making a Statement
In a report from the Institute for Energy Economics and Financial Analysis (IEEFA), IEEFA energy finance consultant Gerard Wynn said the government’s announcement “sent a dramatic signal to electricity markets today that no investment in coal-fired power in Europe is safe.”
Wynn continued, saying, “Today’s announcement highlights the risk of investing in either new or existing coal-fired power, and the lesson is clear: National coal phase-out plans such as this, combined with the rise of renewables and the impact on demand of improved efficiency, put old electricity-production models at risk.”
In September, the Netherlands Environmental Assessment Agency (NEAA) revealed new information that showed how global carbon dioxide (CO2) emissions remained unchanged in 2016. While this was a positive sign that people can prevent additional changes to our climate, the Netherlands wants to do better, hence its new goal to reduce emissions in the country by 49 percent, as well as increase the larger EU’s emissions goals from 40 percent to 55 percent.
“Failing that,” writes Darby, “the coalition said it would seek to agree [to] stronger action with ‘likeminded’ countries in northwestern Europe, to minimize any competitive disadvantage from tougher targets.”
Digital match-making services have done more than just change how we find our perfect squeeze; they’re changing the fundamental nature of our social networks.
According to a pair of researchers investigating online dating, the way we’re looking for love (and lust) is connecting communities in completely novel ways, breaking down boundaries and possibly even making for stronger long-term relationships.
It wasn’t all that long ago that most relationships would begin with a smile and a handshake, rather than a click or a swipe.
That began to change in the mid-1990s, when websites like Match.com merged traditional lonely-heart classifieds with the convenience of the internet.
Today there’s a wide variety of sites and apps to suit your tastes, lifestyle, sexuality, and budget, from Tinder and Bumble for a quick swipe to like, to OKCupid and eHarmony for those who want their wit to show with their words.
Any stigma over online dating has slowly evaporated over the years. Fifteen percent of Americans admit to having used online dating, and 5 percent of those who are married or in committed long-term relationships say they met their spouse online.
Love in the Digital Age
Not only has digital technology made dating easier for romantic hopefuls, the data collected by such sites has been a boon for researchers curious about human mating habits.
But it’s clear that the digital revolution hasn’t only been shaped by the human appetite for sex and companionship; it’s changed the way we form relationships.
Economists Josue Ortega from the University of Essex and Philipp Hergovich from the University of Vienna wanted to know just how the rise of digital match-making has affected the nature of society.
Society can be modelled as a web of interlinked nodes, where individuals are the nodes and the links describe how well they know one another.
Most people are tightly connected with about a hundred nodes, including close friends and family, and loosely connected with others.
Even just a few decades ago most new connections were just a jump or two away inside an existing network. A bar, a sporting team, church, or college would typically provide the perfect environment for those first hot sparks.
Not any more.
For heterosexual couples, online dating has risen to second place – just below ‘met through friends’ – as the context for that first introduction. Among homosexual couples, digital match-making has skyrocketed.
As far as networks go, this is like building new highways between towns, rather than taking the local backroads. Just a few random new paths between different node villages can completely change how a network functions.
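That intuition is easy to demonstrate with a toy model. The sketch below (pure Python, illustrative only; it is not the authors' model) builds a ring network in which each node knows only its four nearest neighbours, then adds a handful of random long-range links and measures how the average separation between people changes:

```python
import random
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS from each node."""
    n = len(adj)
    total, pairs = 0, 0
    for start in range(n):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += n - 1
    return total / pairs

def ring(n, k=2):
    """Ring network: each node linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k + 1):
            adj[i].add((i + step) % n)
            adj[(i + step) % n].add(i)
    return adj

random.seed(0)
g = ring(100)
before = avg_path_length(g)

# Add a handful of random long-range "shortcuts" between distant nodes.
for _ in range(10):
    a, b = random.sample(range(100), 2)
    g[a].add(b)
    g[b].add(a)
after = avg_path_length(g)

print(f"mean distance: {before:.1f} -> {after:.1f}")  # the mean drops sharply
```

Even 10 random shortcuts among 100 nodes cut the mean separation substantially. This is the classic small-world effect that the highway analogy describes.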
Take interracial relationships, for example, long held to be a measure of the general social distances within a population.
Once illegal in many states, and long taboo, marriage between different ethnic groups in the US has slowly been on the rise since the mid-20th century.
The increase steepened at the turn of the 21st century in line with the rise in online dating, and then even further as swipe-to-match apps like Tinder went mainstream around 2014 (it launched in late 2012).
While there are almost certainly a variety of influences, the network changes resulting from online dating fit the observations perfectly.
“Our model predicts nearly complete racial integration upon the emergence of online dating, even if the number of partners that individuals meet from newly formed ties is small,” say Ortega and Hergovich.
The model also predicted that marriages formed online would be more robust and less likely to end in divorce, a hypothesis supported by a study conducted in 2013.
The study is currently available on the preprint server arXiv.org, so it has not yet completed the full peer-review process.
It can often seem as if the online world reinforces our echo chambers and leads us to become more insular, especially when it comes to social media.
It’s nice to have some evidence that the relationships we make online are also breaking down boundaries and making for stronger connections.
The convergence of technology and the city is seen as a possible remedy for the challenging issues of urbanisation. Autonomous vehicles are among the most popular of many smart city solutions. Also known as driverless cars, they could reshape our cities.
One recent prediction is that by 2040 these vehicles will account for up to half of all road travel. A growing number of studies are exploring autonomous-vehicle-induced transport disruptions – “trip generation impacts.” It’s suggested these vehicles could:
decrease private motor vehicle ownership, congestion and air pollution;
increase ride sharing, road safety, access and mobility.
Less research has been done on the effects on urban landscapes and the development patterns of our cities. Yet every change in transport technology – from horse cart to coal-powered train to streetcar to automobile – has had great impacts on our cities.
So, what might autonomous-vehicle-induced changes look like? What are their likely rebound effects on mobility?
In theory, autonomous vehicles can use road networks more efficiently and thus free up some road space if trip generation rate and population growth are held constant. This space can be redesigned for a whole new spectrum of social functions, street trees, walkways or bike lanes.
However, it is likely these vehicles will enable previously suppressed trips to be taken. The resulting increase in traffic volume will reduce the potential to free up road space for other uses.
Turning Parking Lots Into Social Uses
Autonomous vehicles will reduce and potentially eliminate the need for the significant amount of space set aside for parking in high-demand urban areas.
In these areas of high-value property, mandatory parking supply requirements will have to change. A reduction in parking lots has the potential to transform urban cores, as these spaces can be used for other activities—such as parks, more high-value activities, or affordable housing.
Business uplift resulting from higher-density activities is then entirely feasible (akin to agglomeration economies in cities). This can create more mixed-use and transit-oriented development, accelerate a trend towards inner-city living and make these areas more efficient, productive and liveable.
Redesigning Building and Street Interfaces
With an autonomous-vehicle-dominated city, buildings and development will have to adapt to new patterns of traffic flow. They will need to be designed for door-to-door services – mainly accommodating the drop-offs and pick-ups at each and every site.
High-volume sites will need a bespoke interface for multiple autonomous vehicles, while lower-volume sites will no longer need kerbside parking for each development.
This scenario offers much potential to free up kerb space for other uses.
Transforming Fuel Stations Into New Land Uses
Autonomous vehicles are largely envisaged as electric vehicles charged at their overnight parking spaces. The implication is that eventually, once these vehicles dominate road transport, fuel stations will not be needed on the streets.
These locations will require remedial environmental treatment for conversion to other land uses. But once that’s done, this will open the way to alternative uses for the former fuel stations in all neighbourhoods—more convenience stores or online shopping click-and-collect locations?
This raises the question of what would be an optimal productive use for such high-profile, highly accessible sites.
Converting Domestic Garage Spaces in Suburbia
Some visions of pooled/shared ownership of autonomous vehicles suggest we will have no need to own private motor vehicles. So we will no longer need to park and garage vehicles in residential dwellings.
This could transform a substantial share of housing stock, with garages converted to other uses such as studios, rented short-term lodging, or granny flats.
In theory, driveways will no longer be needed either. These could be turned into greened front yards, spaces for children to play and residents to walk and meet their neighbours.
Alternatively, however, if the space once used for garages and access ways becomes available for buildings, this could exacerbate the trend toward larger environmentally inefficient homes.
Increasing Urban Sprawl
Autonomous vehicles have the potential to induce more urban sprawl, as more effortless travel becomes available to more people. This may lead to a rethinking of the convenience of proximity to the city and major employment centres.
Low-cost housing on the urban fringes has been a major driver of sprawl in cities.
By making travel cheaper and more convenient, autonomous vehicles might make the economics and practicality of sprawl more attractive.
Changing Property Values, Planning Controls and Land Supply
While “location, location, location” will remain relevant, autonomous vehicles should act to inflate property values in some neighbourhoods and depress values in others.
Easier commutes in particular will have an impact on residential property prices, and might shift preferences from properties in urban centres to those in suburban areas.
While suburbanisation might speed up, densification of urban cores might also be enhanced. We might see people with very distinctive lifestyles preferring these different locations.
Planning controls and land supply will be key instruments to control the balance between greenfield and infill developments. We need to consider how these controls are applied in this new environment to maximise social and economic benefits.
How Planners Will Manage the Disruption of Land Use
Through the convergence of automation, electrification and ride-sharing technologies, autonomous vehicles could significantly reshape real estate, urban development and city planning—as the automobile did in the last century.
This transformation also creates an opportunity for planners to make our cities more citizen-centred by bringing back the human-scale and walkable city practices that motor vehicle domination removed.
How well prepared are urban planners, however, to mitigate the disruptive impacts on our cities? Do we yet even understand what these disruptions and their implications are?
Urban planning as a profession is largely unprepared for autonomous vehicles. Planners need to be aware, smart and proactive about the potential impacts, particularly in terms of the potential for renewed urban sprawl.
A future involving widespread use of autonomous vehicles presents both land-use opportunities and challenges. Progressive outcomes will require an objective assessment of their complex land-use, economic and community influences on our evolving cities.
Battery technology has remained essentially the same in recent years, albeit with a series of improvements that increase battery capacity and prolong battery life. Lithium-ion batteries remain the popular choice, and they’re found in all of today’s battery-powered mobile devices and in many electric cars. Soon, these batteries might also be powering your home, thanks to the likes of Tesla and other startups that now sell home batteries to utility providers.
According to a report by The Wall Street Journal, utilities in New York, California, Massachusetts, Hawaii, Vermont, Arizona, and other states are working on new ways to make their electric grids battery-backed, an infrastructural shift that Ravi Manghani of GTM Research says answers a “powerful need.” Without home batteries, the ability of utility companies to deliver power reliably is at risk.
Utilities often have difficulty allocating excess power, particularly those in interstate markets where the price of electricity at certain times dips into the negative. Usually, utilities resort to dumping excess electricity or paying others to take it. The rise of solar power compounds the issue: energy generated by solar panels depends on weather and daylight, and generation often doesn’t match the needs of homes.
In California and Arizona, the Journal reports, there’s too much solar electricity during the day at cool times of the year and too little at night, when usage spikes. “This is not a long-term theoretical issue that might happen—this is now,” Marc Romito, Arizona Public Service director of customer technology, told the Journal. Home batteries are sorely needed.
In a Time of Need
There’s wisdom in keeping spare batteries at home, or in this case, keeping your home plugged into one. Home batteries can be especially useful during disasters: when the grid is down, home batteries coupled with solar panels can provide much-needed electricity, as was the case in the aftermath of Hurricane Irma, when customers of Tesla and German battery-maker Sonnen were able to keep their houses powered. Tesla has also, in fact, started shipping batteries to Puerto Rico, which has been largely without power since Hurricane Maria.
It’s this self-sustaining energy ecosystem that Tesla has been working toward with its Powerwall and Powerpack batteries. Both work as electricity storage units; the former is designed for homes, while the latter is meant for utilities. Instead of relying on the grid, home batteries like the Powerwall let households supply their own electricity, following what some have called a “grid defection.” It’s enough to even power a small island.
The likes of Tesla, Sonnen, and even Ikea in the U.K. are making this grid defection a reality, in the U.S. and abroad. Tesla, for example, has partnered with Green Mountain Power in Vermont, which offers 2,000 homeowners the chance to install a Powerwall for just $15 a month. Meanwhile, real-estate developer Mandalay Homes recently announced plans to build some 4,000 energy-efficient homes, each with an 8-kilowatt-hour battery from Sonnen — 2,900 of which would be built in Prescott, Arizona.
In short, as the market for electricity undergoes a radical shift thanks to the availability of renewable energy sources — especially the increasing popularity of cheaper solar home panels — power storage is becoming an important factor. Home batteries are the future.
Back in May, Google revealed its AutoML project: an artificial intelligence (AI) designed to help it create other AIs. Now, Google has announced that AutoML has beaten its human AI engineers at their own game by building machine-learning software that’s more efficient and powerful than the best human-designed systems.
An AutoML system recently broke a record for categorizing images by their content, scoring 82 percent. While that’s a relatively simple task, AutoML also beat the human-built system at a more complex task integral to autonomous robots and augmented reality: marking the location of multiple objects in an image. For that task, AutoML scored 43 percent versus the human-built system’s 39 percent.
These results are meaningful because even at Google, few people have the requisite expertise to build next-generation AI systems. It takes a rarefied skill set to automate this area, but once that is achieved, it will change the industry. “Today these are handcrafted by machine learning scientists and literally only a few thousands of scientists around the world can do this,” Google CEO Sundar Pichai said, as WIRED reports. “We want to enable hundreds of thousands of developers to be able to do it.”
Much of metalearning is about imitating human neural networks and trying to feed more and more data through those networks. This isn’t — to use an old saw — rocket science. Rather, it’s a lot of plug and chug work that machines are actually well-suited to do once they’ve been trained. The hard part is imitating the brain structure in the first place, and at scales appropriate to take on more complex problems.
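The search loop at the heart of this kind of metalearning can be sketched in a few lines. The toy below is not Google’s AutoML, which uses far more sophisticated learned controllers; it just illustrates the plug-and-chug pattern of sampling a candidate architecture, scoring it, and keeping the best. The search space and scoring function are invented stand-ins for real training runs.

```python
import random

# Toy stand-in for automated architecture search: sample candidate
# "architectures" (here just hyperparameter dicts), score each one,
# and keep the best. The search space and mock_score are invented;
# a real system would train each candidate network and measure
# validation accuracy instead.
SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "width": [32, 64, 128, 256],
    "dropout": [0.0, 0.25, 0.5],
}

def sample_architecture(rng: random.Random) -> dict:
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def mock_score(arch: dict) -> float:
    # Stands in for "train this candidate and evaluate it".
    return (arch["layers"] * arch["width"] * (1 - arch["dropout"])) % 97

def random_search(trials: int = 50, seed: int = 0):
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = sample_architecture(rng)
        score = mock_score(candidate)
        if score > best_score:
            best_arch, best_score = candidate, score
    return best_arch, best_score
```

Real neural architecture search replaces `mock_score` with a full training run and the random sampler with a learned controller, but the outer loop looks much the same.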
The Future of Machine-Built AI
It’s still easier to adjust an existing system to meet new needs than it is to design a neural network from the ground up. However, this research seems to suggest this is a temporary state of affairs. As it becomes easier for AIs to design new systems with increased complexity, it will be important for humans to play a gatekeeping role. AI systems can easily make biased connections accidentally — such as associating ethnic and gendered identities with negative stereotypes. However, if human engineers are spending less time on the grunt work involved in creating the systems, they’ll have more time to devote to oversight and refinement.
Ultimately, Google is aiming to hone AutoML until it can function well enough for programmers to use it for practical applications. If it succeeds, AutoML is likely to have an impact far beyond the walls of Google. At the same event last week, WIRED reports, Pichai stated that “We want to democratize this” — meaning the company hopes to make AutoML available outside Google.
Elon Musk has made no secret of his ambition to help Puerto Rico regain power after the devastation caused by Hurricane Maria. Previously, Tesla had sent a few hundred Powerwall battery packs to the island, and now, Musk’s company has officially started shipping Powerpacks to Puerto Rico as part of relief efforts.
With a capacity of 210 kWh, a single Powerpack 2 battery is equivalent to roughly 16 Powerwall units. The priority for this hardware is to supply hospitals and other medical centers with power so that staff can continue their work.
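The arithmetic behind that comparison is easy to check. The 13.5 kWh Powerwall 2 capacity below is Tesla’s published figure, an assumption here since the article doesn’t state it:

```python
# Check the capacity comparison quoted above. 210 kWh is the article's
# Powerpack 2 figure; 13.5 kWh is Tesla's published Powerwall 2
# capacity (an assumption, not stated in the article).
POWERPACK_KWH = 210.0
POWERWALL_KWH = 13.5

ratio = POWERPACK_KWH / POWERWALL_KWH
print(f"1 Powerpack 2 ≈ {ratio:.1f} Powerwalls")  # ≈ 15.6, rounding to 16
```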
Three weeks after the hurricane hit, less than 20 percent of the island has access to electricity, but even before the disaster, Puerto Rico’s power grid was in dire need of modernization. Musk wants to renew rather than just repair, and last week, he met with the island’s governor to discuss what Tesla could do to improve its infrastructure.
By shipping these Powerpacks to Puerto Rico right now, Tesla is providing the island with a lifeline. Any major work on the power grid will take months, but this hardware should ensure that the most critical facilities can continue to operate in the interim.
Intel has announced that it has successfully fabricated a 17-qubit superconducting test chip for quantum computing. The superconducting chip has been submitted to the company’s quantum research partner QuTech for further testing.
Quantum computing has the potential to be a truly revolutionary technology, providing a currently unprecedented amount of computational power. However, the qubits that underlie the hardware are notoriously fragile — Intel’s implementation requires an operating environment that maintains a temperature of 20 millikelvin to function.
The new chip boasts an improved design that provides better reliability, enhanced thermal performance, and reduced radio-frequency interference between qubits. It also introduces a scalable interconnect scheme that allows 10 to 100 times more signals into and out of the chip compared with a wirebonded alternative.
Crucially, Intel has employed processes, materials, and designs that will allow it to scale up its packaging for quantum integrated circuits, which are much larger than standard silicon chips. This could prove to be an important step in moving from the production of components to a fully fledged quantum computer.
Next Generation of Superconducting
“Our quantum research has progressed to the point where our partner QuTech is simulating quantum algorithm workloads, and Intel is fabricating new qubit test chips on a regular basis in our leading-edge manufacturing facilities,” commented Dr. Michael Mayberry, the corporate vice president and managing director of Intel Labs, in a press release.
While the field of quantum computing has made significant advances in recent years, there is still plenty of work to be done before a large-scale universal quantum computer is viable. Intel is keeping its options open, continuing research into spin qubits in silicon even while focusing on superconducting qubits.
Theoretical research has propelled quantum computing forward by leaps and bounds over the past decade, but Intel’s investigations into the practical side of fabrication are essential for the next stage of the process.
Studies have shown that the worse off a person is financially, the more likely they are to have health problems. But what are the concrete factors that contribute to this link between socioeconomic status and health? Is it a lack of access to medicine and healthy foods? Are the smoking rates different?
According to Mel Bartley, a professor emerita of medical sociology at University College London, those factors only tell part of the story. “If you exactly knew somebody’s diet, exercise level, smoking habit, or alcohol consumption, you would be about 30 to 40 percent likely to accurately predict how long they are going to live,” she told The Guardian. “But what’s the rest? That’s the big question.”
“What’s happened historically is that social scientists have tended to measure health in a simple way – just asking people: ‘How do you rate your health right now?’” explained Meena Kumari, an epidemiologist and co-leader of the study. “But we wanted to bring together the biology and the social science.”
Biomarkers allow them to get an objective measure of each participant’s health and not just their perception of it. “These chemicals are like molecular flags: they allow us to see what happens inside people as they’re going through their life course, which they themselves might not be so aware of,” said Kumari.
Inflammation is a common response to a variety of ailments, including infections and stress, and chronic inflammation has been linked to conditions like diabetes, cancer, and depression. Research has even shown that people who have survived stressful natural disasters are at a higher risk of developing serious illnesses later in life.
To better understand the link between socioeconomic status and health, Kumari’s team focused their study on two molecules tied to inflammation: fibrinogen and C-reactive protein (CRP). When analyzing those molecules in 8,000 participants, the team found that lower socioeconomic status directly correlated with higher levels of these inflammation-linked molecules earlier in life.
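The statistic behind such a finding is a correlation, which can be computed from scratch. The income-rank and CRP numbers below are synthetic, illustrative values, not data from the study:

```python
# Pearson correlation computed by hand. The income-rank and CRP values
# below are invented for illustration, not data from the study above.
def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

income_rank = [1, 2, 3, 4, 5, 6, 7, 8]                  # lower = worse off
crp_mg_l = [4.1, 3.8, 3.5, 3.1, 2.6, 2.2, 1.9, 1.5]     # inflammation marker
r = pearson(income_rank, crp_mg_l)  # strongly negative in this toy data
```

A strongly negative `r` in data like this would mean higher socioeconomic standing goes with lower levels of the inflammation marker.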
The main takeaway from this research is something that we’ve all known in a casual sense, but that can now be backed up by science. Stress is intrinsically linked to our biology, and increased exposure to stress — a condition unfortunately familiar to those in lower socioeconomic standing — causes changes in our bodies that result in increased inflammation. That chronic inflammation, in turn, wreaks havoc on our health.
Of course, being able to point to higher levels of fibrinogen and CRP as concrete evidence of the link between socioeconomic status and health is just the first step. Finding ways to alleviate that stress to improve health is the ultimate goal.
“We need to understand what it is about living in a tougher social and economic situation that causes this underlying stress, in order to argue for more effective changes in policy,” asserted Bartley.
“Understanding the underlying biological pathways will help us to target what it is we should be focusing on,” added Kumari. “Our data suggest that it might be stress that we need to be thinking about, particularly for working age people. But this is just the beginning – there’s still a lot to do.”
“I so confidently declare that we will launch a [cryptocurrency] for one simple reason: if we do not, then in 2 months our neighbors in the Eurasian Economic Community will do it,” Russian communications minister Nikolay Nikiforov said, as reported by AIF.
The CryptoRuble won’t be a typical crypto coin, as it won’t be open for miners. Instead, the government would be the only one able to issue, control, and maintain the currency. CryptoRubles could be exchanged for Rubles anytime, with a 13 percent tax levied on owners unable to declare the source of their crypto coins. A similar 13 percent tax would be incurred by transactions involving the buying and selling of CryptoRubles.
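The levies described above are simple to model. This sketch uses hypothetical helper names, since no official API exists, and assumes the trading tax applies to the gain between buying and selling prices:

```python
TAX_RATE = 0.13  # the flat rate described for the CryptoRuble

def tax_on_undeclared(amount_rub: float) -> float:
    """Tax owed when the owner cannot declare the source of the coins."""
    return amount_rub * TAX_RATE

def tax_on_trade_gain(buy_price_rub: float, sell_price_rub: float) -> float:
    """Tax on a buy/sell transaction, assumed here to apply to the gain."""
    gain = max(sell_price_rub - buy_price_rub, 0.0)
    return gain * TAX_RATE
```

For example, under this reading, an owner unable to document the origin of 100,000 CryptoRubles would owe 13,000 rubles.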
“It will, most likely, be a closed model with a certain volume of regulated emissions,” Nikiforov explained. The minister added that further details are still in development, adding that a Russian cryptocurrency should use Russian cryptography.
While this guarantees the same type of security that blockchain-based coins provide, there’s one apparently major difference. It seems that the CryptoRuble doesn’t run on the same decentralized model that makes blockchain and crypto coins unique and popular. Still, it’s a first step to Russian cryptocurrency adoption, which experts predict will be the key for a blockchain revolution.
Disclosure: Several members of the Futurism team, including the editors of this piece, are personal investors in a number of cryptocurrency markets. Their personal investment perspectives have no impact on editorial content.
In 2016, the United States Air Force contracted SpaceX to begin designing and developing the Raptor prototype upper-stage engine, a bipropellant rocket engine. The design is based on SpaceX’s reusable, methane-powered Raptor engine and is intended for potential use in the upper stages of the Falcon 9 and Falcon Heavy. On Saturday, in an ask me anything (AMA) session on Reddit, Elon Musk got candid online and answered questions about the prototype and other Raptor design objectives.
One person asked, “Could you update us on the status of scaling up the Raptor prototype to the final size? The sub-scale Raptor prototype has a (speculated) thrust of about ~100 tons-force currently, and will be scaled up to ~170 tons-force according to your IAC/2017 design. Can you tell us more about the current status and expected (best-case) timeline of this scale-up effort?”
Musk gave a thorough response: “Thrust scaling is the easy part. Very simple to scale the [developed] Raptor to 170 tons. The flight engine design is much lighter and tighter, and is extremely focused on reliability. The objective is to meet or exceed passenger airline levels of safety.”
“If our engine is even close to a jet engine in reliability, has a flak shield to protect against a rapid unscheduled disassembly and we have more engines than the typical two of most airliners, then exceeding airline safety should be possible,” he continued. “That will be especially important for point to point journeys on Earth. The advantage of getting somewhere in 30 mins by rocket instead of 15 hours by plane will be negatively affected if ‘but also, you might die’ is on the ticket.”
SpaceX’s reusable rockets have paved the way for cost-effective, environmentally conscious, and efficient space travel. The Raptor prototype engine is, as Musk explained in the AMA, currently on track to scale up for use in the aerospace company’s rockets. But Musk’s response, alongside his recent presentation at the International Astronautical Congress (IAC), has added to the discussion. These updates not only confirmed that SpaceX will be able to scale up the engine, but asserted that the design will focus on human safety—all the while alluding to the possibility of casual travel by rocket.
So, is there the possibility, in the near future, that SpaceX will be marketing their rockets for casual, commercial use? It is possible. But currently, it seems SpaceX still has work to do in completing the scaling of the engine and improving the safety for riders within these vehicles. If the comments poking fun at Musk’s careful wording of potential failure are to be taken to heart, SpaceX and Musk’s priorities—the dreaded “unscheduled disassembly”—are in the right place.
You made a purchase, and you regretted it. Maybe you didn’t think it through completely, or maybe it was done on impulse. For years, this kind of buyer’s remorse was limited to brick-and-mortar stores, the sites of post-breakup self-care or idle Saturday afternoon splurges. But it takes on a different flavor in the era of online shopping. All that time researching the right blender might have brought you to descriptions of how the product was made or into the manufacturer’s corporate practices. Then you regret buying it not because it was an unnecessary purchase, but because getting it meant the abuse of people or the environment.
Being a more informed consumer is a noble goal, but it’s not easy to achieve. Most people wouldn’t know where to look for information about a company’s manufacturing practices; however, companies are expected to be more transparent than ever before, and consumers have shown growing interest and concern surrounding the origin of the products they buy, particularly those put on or in their bodies.
Over the last couple of decades, this interest seems to have evolved: Now, consumers want to know where their products have come from and how they were made. This knowledge has become part of a broader cultural drive to reassure ourselves that we are giving money to companies we can feel at least somewhat good about supporting.
But shopping with your conscience is often not cheap.
Some economists have suggested that capitalist societies inhibit “conscious consumerism” because capitalism depends on consumption. So for those living in a capitalist society aiming to consume less, or at least be more choosy about what they consume, the capitalist structure does not make constant vigilance an easy task.
Empowering consumers seemed like a natural fit for the founders’ combined talents and expertise. As they batted around ideas, the team realized that information on companies’ manufacturing practices did exist; the problem was that it wasn’t easily and readily accessible to consumers. “Transparency was our solution,” Jonathan Hecht, the company’s co-founder, told Futurism via email.
And so the Impakt Browser Extension was born. It is, as its name suggests, a browser extension for Chrome and Firefox that allows users to glean information about the practices, politics, and ethics of companies in real time as they shop.
In its final form, Impakt will offer users data on a range of metrics, from a company’s hiring diversity to whether it pays workers minimum wage. It will also look at the age of workers in a company’s factories (a possible indicator of child labor), as well as the company’s standards for environmental sustainability. Users will be able to customize the extension to show them the information they care about most. And it will work pretty much anywhere you’re likely to buy something on the internet: e-commerce sites like Amazon, online retailers like Best Buy, and brands like Sephora.
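Impakt hasn’t published its internal data model, but as a purely hypothetical sketch, the core lookup an extension like this performs might resemble the following; all company names, metric fields, and scores here are invented:

```python
# Hypothetical data model for a conscious-shopping extension: map a
# brand (as detected on the shopping page) to a record of ethics
# metrics, returning only the fields the user opted to see.
# All names, fields, and scores below are invented for illustration.
COMPANY_DB = {
    "acme-apparel": {
        "hiring_diversity_score": 0.62,  # 0.0 (poor) to 1.0 (excellent)
        "pays_minimum_wage": True,
        "min_factory_worker_age": 18,    # low values may indicate child labor
        "sustainability_score": 0.48,
    },
}

def lookup_metrics(brand: str, wanted: list) -> dict:
    """Return the requested metrics for a brand, or {} if unknown."""
    record = COMPANY_DB.get(brand.lower(), {})
    return {field: record[field] for field in wanted if field in record}
```

Filtering by `wanted` reflects the customization described above: the user decides which metrics the extension surfaces while shopping.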
To begin building a tool like Impakt, Hecht and his team trawled government databases, industry reports, and news articles. The team wanted the information to be user-friendly, so a browser extension seemed to be “as low-friction as possible,” Hecht said.
Creating the browser extension was the easy part — the real challenge for Hecht’s team is to build up the database the extension would draw on to inform consumers. Even once the database is established, continuously honing its accuracy and usability will be a full-time job. With more sophisticated machine learning, the algorithm could scrape and crawl for data around the internet, ensuring that users receive the most up-to-date and accurate information. An artificial intelligence component will aggregate and store user data, allowing the program to learn about the impact that the extension has as users shop, which will allow for improvements down the road.
The team has created the bare bones of the tool, though it is not yet available for download. To complete the project, they first need to raise some funds.
To do this, the team needs nearly $30,000, which it’s raising via Kickstarter. As of publication, they’ve raised just over $17,500 and need another $12,000 before the campaign ends at the end of the month.
“As we gain the user base to justify it, we’ll hire a team to curate the database, expand the covered universe, and develop more powerful and intelligent algorithms to automate and scale,” Impakt’s tech lead, Chris Shaffer, explained via email.
At first, the developers estimate, the algorithm will make more errors than they would like. It might “[mix] up Miami Dolphins hats with a tuna brand that harms dolphins,” as the team noted on its Kickstarter page. Mistakes like that will probably happen about 15 percent of the time. Impakt hopes that users will submit error reports to help improve the algorithms.
Making Impakt easy to use and accessible could also help the “conscious consumer” movement shed its elitist reputation. It’s no secret that it can be a costly endeavor to consistently buy only products that are ethically-produced and environmentally friendly.
It doesn’t always have to be so expensive to shop with your conscience, however. There’s something called the “Whole Foods Effect” — people think they have to spend a lot of money in order to be ethical, Julie Irwin, a professor at the University of Texas at Austin’s McCombs School of Business, told Futurism via email. People may even think spending a lot is the best way to be ethical, but, in fact, not consuming in the first place is the most ethical tactic of all, she added.
Developing that consciousness in the first place is a separate challenge. Tools that can explain what it means for a product to be “ethically sourced” or produced in an “environmentally-friendly” fashion could help narrow the gap created by those who haven’t been educated on the subject.
“We’re fighting for a world where corporations answer to our highest values, not only their bottom line,” the team wrote on its Kickstarter page.
It may be easy to think that individual actions can’t make a difference. But Irwin cautions that we shouldn’t get hung up on that idea. “I think people only say that when they want to be unethical,” she said. In truth, she said, every ethical action begins with just one person — and any person can be the one who puts change in motion. “It is still important to try,” Irwin said.
“All social movements and changes happen because a lot of individuals make the right decision,” Irwin said. “I know I work against hopelessness all the time, but in the end, individual action can make a difference.”
Most people feel confident that they know a fair amount about their own body, in terms of general health and what they look like from the outside. However, most of us haven’t taken a look inside—literally speaking. Ed Barton and his UK-based startup Curiscope are hoping to change that with a unique blend of virtual reality (VR) and augmented reality (AR). Using an anatomy VR app and the company’s Virtuali-Tee t-shirt, they are allowing people to see inside their own chest cavities.
Barton explained to Wired: “We use a mix of VR and AR to see inside the anatomy…With positionally tracked AR, you can position VR experiences physically within your environment.”
Barton and Curiscope co-founder Ben Kidd have so far raised almost $1 million in seed funding from LocalGlobe, and they’ve already sold almost 3,000 of the Virtuali-Tees.
High Tech T-Shirt
Barton told Wired that, using positional tracking, “we have a blurring of physical and digital items, and an experience more tightly connected to reality.” He continued, “With the Virtuali-Tee, AR is your interface and VR is used to transport you somewhere else. The technologies should be merging.”
This technology works using a highly stylized QR code printed on the front of the t-shirt. When you scan the code with the corresponding app, you can explore the chest cavity, including the heart and lungs.
This t-shirt, specifically, could be a fantastic tool for the curious. It can be used for educational purposes, allowing anatomy and biology to be a fun experience that students can really wrap their minds around. Even outside of a formal educational setting, this device could allow us to better connect with our own biology. Virtuali-Tee could help people to better understand their own inner workings, and how the things we do every day—from what we eat to how we exercise—might affect our health.
At the 37th Gitex Technology Week at the Dubai World Trade Centre, the Dubai police force announced that officers will soon be speeding around the city’s skies on electric police hoverbikes. The Star Wars-style, battery-powered hoverbikes can fly at a height of five meters (16.4 feet) and reach speeds of up to 70 kilometers (43.5 miles) per hour. The bike was co-developed for emergency response teams with Russian drone manufacturer Hoversurf, and is called the Hoversurf Scorpion.
First Sergeant Ali Ahmad Mohammad told Gulf News the Scorpion can take an officer over heavy traffic in emergencies. “The bike can also fly without a passenger and can go up to six kilometers,” he said. “It can fly for 25 minutes and can carry up to 300kg (661 lbs) of weight at a speed of 70kph.”
Alexander Atamanov, CEO of Hoversurf, took to Facebook to confirm that the company and the Dubai police have agreed to mass produce the Scorpion in the Dubai area. He also posted the above video of the police hoverbike in action.
The City of the Future
Police at Gitex also revealed other advanced technology designed specifically for police use. For example, they showed audiences a smart electric motorbike equipped with multiple cameras used to identify reckless drivers, as well as small, self-driving vehicles that patrol the city and scan for persons of interest and criminals using biometric software.
“It can recognize people in any area and identify suspicious objects and can track suspects,” Dubai Police Smart Services Department director Brigadier Khalid Nasser Al Razooqi told Gulf News. “It will be deployed at tourist destinations in Dubai. It has cameras and will be linked to the command room.”
“This discovery is significant because it gives us two completely different ways to view the same thing,” Brian Koberlein, astrophysicist and Senior Lecturer at Rochester Institute of Technology, explained to Futurism. “Imagine trying to follow a detective show if you could just hear the show without video, or just see the show without hearing it. When you combine the two you get the whole picture.”
Gravitational waves are ripples in spacetime that were first theorized by Albert Einstein about a century ago. Last year marked the first time we’ve been able to observe the phenomena, and so far, all the gravitational waves we’ve observed have been caused by particularly intense events, such as the merging of neutron stars. This type of merger has also been considered the most likely cause of short gamma ray bursts, cosmic explosions that emit massive amounts of light.
Scientists have long suspected that the merger of two neutron stars would be followed by a kilonova, an explosive occurrence some 1,000 times brighter than a standard nova. They speculated that the near-simultaneous observations of gravitational waves and a gamma ray burst could indicate that such an event was taking place, and subsequent study by ESO facilities indicates that this was the case: a kilonova had finally been observed.
“I had never seen anything like it,” Stephen Smartt, who led observations with the ESO’s New Technology Telescope, said in an ESO press release. “Our data, along with data from other groups, proved to everyone that this was not a supernova or a foreground variable star, but was something quite remarkable.”
No single facility was responsible for this breakthrough discovery — it was truly a collaborative effort. A large swathe of sky containing millions of stars needed to be monitored after the initial detection of the gravitational waves and light, so a host of different facilities were called into action.
Paranal Observatory’s Visible and Infrared Survey Telescope and VLT Survey Telescope, La Silla Observatory’s Rapid Eye Mount telescope, Las Cumbres Observatory’s LCO 0.4-meter telescope, and Cerro Tololo Inter-American Observatory’s Dark Energy Camera were all used to observe further developments.
The Swope 1-meter telescope at Cerro Las Campanas was the first to spot a new point of light appearing close to the lenticular galaxy designated NGC 4993. This point of light was monitored by the Pan-STARRS telescope and the Subaru telescope in Hawaii overnight. Even more observatories and telescopes would study the event over the following couple of weeks.
“ESO’s great strength is that it has a wide range of telescopes and instruments to tackle big and complex astronomical projects, and at short notice,” said Andrew Levan, lead author of one of the papers. “We have entered a new era of multi-messenger astronomy!”
On Saturday, Elon Musk participated in an Ask Me Anything (AMA) forum on Reddit in which he answered questions about SpaceX’s latest rocket, the “BFR,” as a follow-up to his IAC 2017 talk. One user put forth the idea of internet on Mars, asking, “Does SpaceX have any interest in putting more satellites in orbit around Mars (or even rockets) for internet/communications before we get feet on the ground? Or are the current 5-6 active ones we have there sufficient?”
So, is the idea of internet on Mars truly feasible, or is it just an outlandish fantasy right now? One Reddit user had some pretty keen insight into its potential:
The concept of an internet connection on Mars is kinda awesome. You could theoretically make an internet protocol that would mirror a subset of the internet near Mars. A user would need to queue up the parts of the internet they wanted available and the servers would sync the relevant data. There could be a standard format for pages to be Mars renderable since server-side communication is impractical.
This exploration of the concept prompted Musk to simply reply, “Nerd” — which the user it was directed at and others in the thread took as an ultimate sign of respect.
Within the Realm of Possibility
Of course, Musk also dug a little deeper: “But, yes, it would make sense to strip the headers out and do a UDP-style feed with extreme compression and a CRC check to confirm the packet is good, then do a batch resend of the CRC-failed packets. Something like that. Earth to Mars is over 22 light-minutes at max distance,” he continued. “3 light-minutes at closest distance. So you could Snapchat, I suppose. If that’s a thing in the future.”
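The scheme Musk sketches, checksum each packet and then request all failed packets in one batch, can be illustrated with CRC-32 from Python’s standard library. This toy is an interpretation of his comment, not SpaceX code:

```python
import zlib

def make_packet(seq: int, payload: bytes):
    """Sender side: tag each payload with its sequence number and CRC-32."""
    return (seq, payload, zlib.crc32(payload))

def find_failed(packets) -> list:
    """Receiver side: sequence numbers whose CRC check failed, to be
    requested again in a single batch resend."""
    return [seq for seq, payload, crc in packets
            if zlib.crc32(payload) != crc]
```

A receiver would accumulate `find_failed` results over a window of packets and send one resend request back, amortizing the multi-minute round trip instead of acknowledging each packet individually.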
This is not the first time that the idea of an interplanetary internet has come up. Back in 2015, Musk suggested the idea of placing hundreds of satellites 1,200 km (750 miles) above Earth to help accomplish this task. He even told Bloomberg Businessweek that, “Our focus is on creating a global communications system that would be larger than anything that has been talked about to date.” The plan rests on the fact that light travels faster through the vacuum of space than through fiber-optic cable.
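The light-delay figures Musk quotes in the AMA check out with simple arithmetic, assuming the commonly cited Earth-to-Mars distances of roughly 54.6 million km at closest approach and about 401 million km at maximum separation:

```python
# Sanity-check the light-travel times quoted in the AMA, using commonly
# cited Earth-to-Mars distances (assumptions, not from the article).
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

closest = one_way_delay_minutes(54.6e6)   # about 3 minutes one-way
farthest = one_way_delay_minutes(401e6)   # about 22 minutes one-way
```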
Newsweek also discussed the topic last September with Josh Boehm, a former SpaceX employee. It seems that, currently, the satellites and infrastructure necessary to make this a reality aren’t in place. But since humans haven’t reached Mars yet, or even begun the journey, there are still a few years to build a Martian internet.
North Korea has a weapon capable of crippling the United States’ power grid, and the U.S. Congress has now been warned of the full threat it poses. An EMP attack could lead to the deaths of up to “90 percent of Americans.”
In a U.S. House of Representatives hearing on Thursday, members of the recently defunded EMP Commission informed Congress of the devastation an electromagnetic pulse (EMP) attack could inflict on the country. In a commissioned report, former EMP Commission chairman William Graham and its former chief of staff Peter Vincent Pry referred to North Korea using an EMP as a “doomsday scenario,” citing Kim Jong-Un’s well-documented threats to turn the U.S. to “ashes.”
It could use its demonstrated satellite launcher to carry a nuclear weapon over the South Polar region and detonate it…over the United States to create a high-altitude electromagnetic pulse … The result could be to shut down the U.S. electric power grid for an indefinite period, leading to the death within a year of up to 90 percent of all Americans.
“There have been an enormous increase in our dependency on electronics, computers, and microelectronics,” said Graham during an interview with Forbes. “An attack may never happen. But the more vulnerable the U.S. is to such an attack, the more likely it is to be used against us.”
Failing to Act
According to Graham and Pry, the U.S. has ignored concerns about North Korea’s capabilities for quite some time. In the last six months alone, the former EMP Commission provided information regarding North Korea’s nuclear weapons — including how many nukes it has, its ability to create miniaturized weapons, and how far it was from developing hydrogen bombs. The organization’s critics, however, openly dismissed the warnings.
Now that the EMP Commission’s predictions have been confirmed by North Korea’s recent displays of power, the threats of an EMP attack should be taken seriously. Even the Doomsday Clock has been moved forward as a result of North Korea’s nuclear tests — a relatively small thing to be impacted, but the symbolism is significant.
“It is critical, therefore, that the U.S. national leadership address the EMP threat as a critical and existential issue, and give a high priority to assuring the leadership is engaged and the necessary steps are taken to protect the country from EMP,” the experts reported to a Homeland Security subcommittee.
Congress’ discussion about North Korea’s EMP capabilities comes not long after South Korea revealed its own bombs that could disrupt a nation’s electrical equipment. South Korea’s military said it would use these “blackout bombs” against North Korea if it appeared to be preparing to make a move. However, South Korea said no such preparations have been observed — yet.
As climate change marches on, world leaders and scientists alike have considered the potential of geoengineering solutions to capture and store emissions. In fact, scientists recently concluded that we need to have “carbon-sucking” geoengineering tech in place by as early as 2030.
As reported by Quartz, Iceland is ahead of that deadline, with the help of a 300-megawatt geothermal power plant built in Hellisheiði. The plant captures more carbon dioxide (CO2) than it produces, meaning it achieves negative emissions. To be clear, the plant emits only about one third of the CO2 a traditional coal plant would — and more than it emits is captured and stored underground.
To accomplish this engineering marvel, a wall of fans sucks in air, filters out CO2, and injects the CO2 into water which is then pumped into the ground where it becomes rock. This process is simple and produces usable energy while eliminating emissions from the environment; truly a win-win. So why hasn’t this technology been immediately adopted and replicated in every state in every country in the world? The short answer is cost.
The Cost of Energy
Currently, this process costs about $30 (USD) for every ton of carbon dioxide turned into rock, which is not particularly expensive. Capturing the CO2 from the air in the first place, however, is significantly more cost-intensive. If the cost of pulling carbon dioxide from the air could be whittled down to $100 per ton, as its creators are aiming for, the technology would be far easier to adopt.
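As a back-of-the-envelope illustration (assuming, hypothetically, that the roughly $100 target is per ton of CO2 captured and that capture and storage costs simply add), total removal cost scales like this:

```python
# Back-of-the-envelope cost of negative emissions, using the article's figures.
# Assumptions (hypothetical): the ~$100 target is per ton of CO2 captured,
# and capture and storage costs are additive.
STORAGE_COST_PER_TON = 30     # turning a ton of CO2 into rock, per the article
CAPTURE_TARGET_PER_TON = 100  # the creators' stated capture-cost goal

def removal_cost(tons_co2: float, capture_cost_per_ton: float) -> float:
    """Total cost (USD) to capture and mineralize a given mass of CO2."""
    return tons_co2 * (capture_cost_per_ton + STORAGE_COST_PER_TON)

# Removing 1,000 tons at the target capture cost:
print(removal_cost(1_000, CAPTURE_TARGET_PER_TON))  # 130000
```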
The concept of capturing and storing carbon underground is nothing new: geoengineering solutions to climate change have been brewing and developing for years. The completion of this plant, however, proves not only that the process works as intended, but that the costs of producing energy this way aren’t completely out of reach. As the technology continues to advance and improve, it will hopefully become more affordable and, in turn, more widely adopted.
If we continue to produce energy in the same manner, and at the same rate, as we currently are, climate change will only worsen. Its life-threatening repercussions will continue to become increasingly devastating — not to mention costly. While we shift from fossil fuels to renewable resources, it’s important to note that our emissions aren’t going anywhere.
Even if we were to eliminate our entire carbon footprint right now, we’d still see years and years of energy usage left in our wake. This isn’t to say we shouldn’t do anything simply because we’ve already jeopardized ourselves and the planet. Rather, it serves as a reminder that while we change the types of energy we use, and how we use them, we can also invest in and support the elimination of existing emissions through emerging technology.
Only Tesla employees and company insiders are getting their Model 3s delivered ahead of regular customers, suggesting the car belonged to someone in either group. According to Electrek, the car and, presumably, the listing both belong to a Tesla Monterey employee. However, the Craigslist page has since been taken down. The listing read:
First ever Tesla Model 3 for sale. Car is lightly used with just over 2,000 miles. This is a unique opportunity to own one of the most anticipated cars ever. Skip the line of over 400,000 people and buy the car of the future now! Car is fully loaded with the 310 Mile Long Range Battery, Panoramic Glass Roof, Premium Interior, Sound System and Aero Wheels. Car has been great for the past 2,000 miles and a unique circumstance is forcing me to part with the car. Fit and finish are excellent for an early production model. This car meets all of the hype and I plan on owning another in the future.
Apparently, Tesla foresaw something like this happening, and added a few lines prohibiting employees from selling it to make a profit.
“Because employees are receiving special priority, all Model 3 cars prioritized to employees must be registered to you or your family member and may not be resold for more than the original price. Reservation holders will agree to these terms when their order is placed.”
Tesla CEO Elon Musk said in July that the company should be able to produce 20,000 vehicles a month by December. It was supposed to hit 1,500 in September, but only managed to deliver 260. It’s unclear whether the company still believes 20,000 a month is achievable.
All over the world, the way people get around is changing quickly. However, it’s possible that there’s no greater hub for developing the future of transportation technology than the Netherlands.
The country is uniquely poised to facilitate this kind of work. It’s got great technological infrastructure, with complete 4G coverage that helps vehicles grab routing information and other updates quickly and efficiently. Moreover, it’s relatively small, making it easier to implement big changes to the transport network than it would be in a country like the US.
Coupled with strong governmental backing, these factors are incubating a lot of interesting projects that could have a profound effect on how the Dutch get from A to B.
From Hyperloop to Hire Cars
At the inaugural SpaceX Hyperloop Pod Competition in 2016, a team from the Delft University of Technology in the Netherlands managed to come out on top in terms of overall score. The Netherlands is a promising hotspot for research into Elon Musk’s potentially groundbreaking new form of city-to-city transport.
However, we’re not just seeing projects that aim to completely reimagine transport. There are also ongoing efforts to use technology to improve upon existing methods of getting around.
In a major metropolitan area, it doesn’t always make sense to own a car if it’s only going to be in use every now and again. That’s why car sharing programs are so popular — and Dutch startup Next Urban Mobility wants to go beyond what’s currently available to produce a “Netflix for transportation.”
Such a platform would give users on-demand access to a hub spanning everything from cars to bicycles, public transport to delivery drivers.
Using public transport also makes things safer because there are fewer drivers on the road, and if self-driving cars become the norm, traffic accidents and deaths could go down substantially.
One major focus for the organizations looking into new modes of transport in the Netherlands is sustainability. Various governments are set to enforce a ban on the sale of petrol- and diesel-powered vehicles in the next decade. Alongside that process, there’s a lot of ongoing research into other ways of making travel more environmentally friendly.
Eurocarbon is producing composite materials commonly used in the automotive industry that consume up to 40% less energy, without sacrificing anything in terms of strength and quality. Solliance is producing high grade thin film solar panels that could harvest the energy that powers tomorrow’s vehicles.
Crucially, organizations like Connekt and Amsterdam Smart City are helping to forge partnerships between people from all walks of life and companies with this kind of expertise in order to promote projects that will benefit communities. The future of transportation is going to require collaboration between various different specializations, so bringing high-spec manufacturers together with the designers and engineers that might utilize their materials is hugely valuable.
The Netherlands seems to be taking a very proactive stance on the future of transport, in all its forms. In many ways, this is a win-win — the best prospects look set to make travel easier, faster, and safer, but they should also be a major boon for the environment.
Just as ancient Greeks fantasized about soaring flight, today’s imaginations dream of melding minds and machines as a remedy to the pesky problem of human mortality. Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?
Over the last 50 years, researchers at university labs and companies around the world have made impressive progress toward achieving such a vision. Recently, successful entrepreneurs such as Elon Musk (Neuralink) and Bryan Johnson (Kernel) have announced new startups that seek to enhance human capabilities through brain-computer interfacing.
How close are we really to successfully connecting our brains to our technologies? And what might the implications be when our minds are plugged in?
How do brain-computer interfaces work and what can they do?
Much of the recent work on BCIs aims to improve the quality of life of people who are paralyzed or have severe motor disabilities. You may have seen some recent accomplishments in the news: University of Pittsburgh researchers use signals recorded inside the brain to control a robotic arm. Stanford researchers can extract the movement intentions of paralyzed patients from their brain signals, allowing them to use a tablet wirelessly.
The most sophisticated BCIs are “bi-directional” BCIs (BBCIs), which can both record from and stimulate the nervous system. At our center, we’re exploring BBCIs as a radical new rehabilitation tool for stroke and spinal cord injury. We’ve shown that a BBCI can be used to strengthen connections between two brain regions or between the brain and the spinal cord, and reroute information around an area of injury to reanimate a paralyzed limb.
With all these successes to date, you might think a brain-computer interface is poised to be the next must-have consumer gadget.
Still early days
Not all BCIs, however, are invasive. Noninvasive BCIs that don’t require surgery do exist; they are typically based on electrical (EEG) recordings from the scalp and have been used to demonstrate control of cursors, wheelchairs, robotic arms, drones, humanoid robots and even brain-to-brain communication.

But a careful look at some of the current BCI demonstrations reveals we still have a way to go: When BCIs produce movements, they are much slower, less precise and less complex than what able-bodied people do easily every day with their limbs. Bionic eyes offer very low-resolution vision; cochlear implants can electronically carry limited speech information, but distort the experience of music. And to make all these technologies work, electrodes have to be surgically implanted — a prospect most people today wouldn’t consider.
The first demonstration of a noninvasive brain-controlled humanoid robot “avatar,” named Morpheus, took place in the Neural Systems Laboratory at the University of Washington in 2006. This noninvasive BCI infers what object the robot should pick up, and where to bring it, based on the brain’s reflexive response when an image of the desired object or location is flashed.
But all these demos have been in the laboratory — where the rooms are quiet, the test subjects aren’t distracted, the technical setup is long and methodical, and experiments last only long enough to show that a concept is possible. It’s proved very difficult to make these systems fast and robust enough to be of practical use in the real world.
Even with implanted electrodes, another problem with trying to read minds arises from how our brains are structured. We know that each neuron and its thousands of connected neighbors form an unimaginably large and ever-changing network. What might this mean for neuroengineers?
Imagine you’re trying to understand a conversation between a big group of friends about a complicated subject, but you’re allowed to listen to only a single person. You might be able to figure out the very rough topic of what the conversation is about, but definitely not all the details and nuances of the entire discussion. Because even our best implants only allow us to listen to a few small patches of the brain at a time, we can do some impressive things, but we’re nowhere near understanding the full conversation.
There is also what we think of as a language barrier. Neurons communicate with each other through a complex interaction of electrical signals and chemical reactions. This native electro-chemical language can be interpreted with electrical circuits, but it’s not easy. Similarly, when we speak back to the brain using electrical stimulation, it is with a heavy electrical “accent.” This makes it difficult for neurons to understand what the stimulation is trying to convey in the midst of all the other ongoing neural activity.
Finally, there is the problem of damage. Brain tissue is soft and flexible, while most of our electrically conductive materials — the wires that connect to brain tissue — tend to be very rigid. This means that implanted electronics often cause scarring and immune reactions that mean the implants lose effectiveness over time. Flexible biocompatible fibers and arrays may eventually help in this regard.
Ultimately, we believe a “co-adaptive” bidirectional BCI, where the electronics learns with the brain and talks back to the brain constantly during the process of learning, may prove to be a necessary step to build the neural bridge. Building such co-adaptive bidirectional BCIs is the goal of our center.
Elon Musk’s new startup Neuralink has the stated ultimate goal of enhancing humans with BCIs to give our brains a leg up in the ongoing arms race between human and artificial intelligence. He hopes that with the ability to connect to our technologies, the human brain could enhance its own capabilities — possibly allowing us to avoid a potential dystopian future where AI has far surpassed natural human capabilities. Such a vision certainly may seem far-off or fanciful, but we shouldn’t dismiss an idea on strangeness alone. After all, self-driving cars were relegated to the realm of science fiction even a decade and a half ago — and now share our roads.
Connecting our brains directly to technology may ultimately be a natural progression of how humans have augmented themselves with technology over the ages, from using wheels to overcome our bipedal limitations to making notations on clay tablets and paper to augment our memories. Much like the computers, smartphones and virtual reality headsets of today, augmentative BCIs, when they finally arrive on the consumer market, will be exhilarating, frustrating, risky and, at the same time, full of promise.

In the nearer future, as brain-computer interfaces move beyond restoring function in disabled people to augmenting able-bodied individuals beyond their human capacity, we need to be acutely aware of a host of issues related to consent, privacy, identity, agency and inequality. At our center, a team of philosophers, clinicians and engineers is working actively to address these ethical, moral and social justice issues and to offer neuroethical guidelines before the field progresses too far ahead.
One of the first questions, collected on the r/SpaceX subreddit leading up to the AMA, was about the Raptor’s thrust. It was one of the highest ranked questions at the AMA’s outset, and the first Musk answered.
Another question wondered just how the dry mass and thrust of the rockets would affect their return to Earth — namely, whether they would land by “hover-slam.” Musk responded that “Landing will not be a hoverslam,” explaining that the thrust-to-weight ratio at landing will actually “feel quite gentle,” and that, since it will be about the same as the ratio at launch (around 1.3), the landing will “pretty much look like a launch in reverse. . .”
Musk also provided some insight into the design of the rockets, giving context for their function and purpose. When one user asked how the BFS will manage the temperature of propellants in zero gravity, Musk explained the venting procedure — adding that a cryocooler could be added in the future.
When user CMDR-Owl asked what progress we’ll see in terms of development and testing over the next five or so years before SpaceX’s first planned launch, Musk explained that they’ll be starting “with a full-scale Ship doing short hops of a few hundred kilometers altitude and lateral distance,” adding that:
Next step will be doing orbital velocity Ship flights, which will need all of the above. Worth noting that BFS is capable of reaching orbit by itself with low payload, but having the BF Booster increases payload by more than an order of magnitude. Earth is the wrong planet for single stage to orbit. No problemo on Mars.
3D printing has already benefited many industries, so the question of whether it will benefit rocket production was a natural one to ask. Musk responded that although most of the Raptor’s parts will be machine forged, some could be 3D printed.
Speaking of life on Mars: while the focus was mainly on the rocket technology that will get us there (and potentially elsewhere), users did have questions about what SpaceX needs to do to ensure that when we get to the Red Planet, we’ll be able to survive. Reddit user foxyjim99 asked about considerations such as food (namely, how you would calculate the amount needed for a mission and ensure that the need is met) — vitally important, but admittedly not a primary focus for SpaceX. Musk responded that “Our goal is get you there and ensure the basic infrastructure for propellant production and survival is in place,” comparing the work SpaceX is doing to building “the equivalent of the transcontinental railway.” As for terraforming Mars to make (and keep) it habitable, other companies and millions of people will need to be involved.
To that end, when user adammrxifgnqph asked if there were plans to send up additional satellites before the Mars mission to facilitate communication, another user jumped in to ask about Mars-to-Earth communication. Musk’s response? “If anyone wants to build a high bandwidth comm link to Mars, please do.” Whether meant in jest, as a challenge, or as a call to action, user general-information pointed out that the concept of interplanetary transmission is pretty cool. Musk responded to the user’s thoughts with an eloquent “Nerd,” but then offered a few more thoughts — assuring us that when we go to Mars, we’ll probably be able to brag on social media about it.
The persistent woes of Equifax — the credit reporting agency responsible for the massive data breach affecting about 143 million customers earlier this year — are continuing to incite new cybersecurity fears. Last week, the Equifax website featured an error message as the site went down after what appeared to be yet another potential breach.
Late on Wednesday, Ars Technica reported that independent security analyst Randy Abrams discovered the Equifax page was redirecting him to fraudulent Adobe Flash updates. He found that, when clicked, the download links would infect computers with adware. He ran into the phony Flash links at least three times and shared a video of what he encountered on YouTube.
“We are aware of the situation identified on the equifax.com website in the credit report assistance link,” Equifax spokesman Wyatt Jefferies said in a statement. “Our IT and Security teams are looking into this matter and, out of an abundance of caution, have temporarily taken this page offline. When it becomes available or we have more information to share, we will.”
After the investigation was completed, the company stated the incident was not a breach, as reported by Engadget. The website was back online by the weekend.
Privacy in a Digital Age
Meanwhile, Congress is considering a national credit freeze in the wake of the Equifax fiasco. Representative Patrick McHenry (R, North Carolina), who introduced the bill, explained, “It protects Americans by creating a national credit freeze that actually works. Finally, it prohibits the largest credit reporting agencies from continuing to rely upon the most sensitive of Americans’ personal information: our Social Security numbers.”
While this isn’t necessarily a bad temporary countermeasure, it does ignore the new cybersecurity fears we are beginning to face as a society. Ending reliance on Social Security numbers is a good thing to do. However, in a time when a single weapon could cause an electromagnetic pulse (EMP) that wipes out the power grid of the entire country, destroys our electronics, and creates “an existential crisis like nothing the world has ever witnessed,” loss of information and operations also demand the attention of Congress.
Further into the future, privacy issues will include protecting brain implants from hackers who attempt to access the human mind itself, and deciding how brain scanning should be used for — and against — citizens. Just as important, the neuroethical discussion needs to start now.
A proactive, science-based approach will be essential to mastering cybersecurity, cognitive privacy, and similar challenges, and both technical and legislative solutions will need to be implemented. Meanwhile, quantum encryption and computing are coming soon, and blockchain is already here, all offering new ways to protect information from prying eyes, if not EMPs.
In a statement, experts recommended to Congress that the U.S. military strengthen its ballistic missile defenses — including the deployment of space-based defenses — with the specific aim of protecting the U.S. from potential electromagnetic pulse (EMP) attacks. They also recommended the development and deployment of enhanced-EMP nuclear weapons and other means to deter attacks on the United States by North Korea.
An EMP attack has the potential to “shut down the U.S. electric power grid for an indefinite period,” according to the assessment delivered Thursday at the U.S. House of Representatives Committee on Homeland Security’s hearing, “Empty Threat or Serious Danger: Assessing North Korea’s Risk to the Homeland.” William Graham and Peter Vincent Pry — Chairman and Chief of Staff, respectively, for the Commission to Assess the Threat to the United States from EMP Attack (the EMP Commission) — provided the sobering statement.
The EMP Commission was established in 2001 to advise Congress, the Department of Defense (DoD), the President, and other agencies on nuclear EMP threats. It was re-established with a broader charter in 2015 so its focus would include all manmade EMP threats, including cyber attacks, sabotage, and what they call “Combined-Arms Cyber Warfare.” However, its funding was recently terminated, as reported by Newsmax, in the same month that North Korea detonated a “Super-EMP” weapon.
The Detailed Recommendations
The report pointed out that, six months ago, North Korea’s nuclear capabilities were believed to be primitive, but the U.S. now estimates that the country has 60 nuclear weapons and can reach the United States with them. The EMP Commission urged Congress to recognize the dire threat posed by EMP weapons, which can be devastating even when delivered by “primitive, low-yield” nuclear weapons.
Moreover, while much of the focus regarding North Korea has centered on its ability to reach a city with a missile, the EMP Commission argued that the threat from an EMP attack has been mostly ignored:
EMP attack does not require an accurate guidance system because the area of effect, having a radius of hundreds or thousands of kilometers, is so large … North Korea could make an EMP attack against the United States by launching a short-range missile off a freighter or submarine or by lofting a warhead to 30 kilometers burst height by balloon … even a balloon-lofted warhead detonated at 30 kilometers altitude could blackout the Eastern Electric Power Grid that supports most of the population and generates 75 percent of U.S. electricity.
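The “radius of hundreds or thousands of kilometers” in the quote above follows from simple line-of-sight geometry: a burst at altitude h can illuminate ground out to roughly the horizon distance, sqrt(2Rh) for Earth radius R. This sketch treats the Earth as a sphere and ignores the actual EMP field physics — it only shows why even a modest burst height covers so much area:

```python
import math

EARTH_RADIUS_KM = 6371  # mean Earth radius

def line_of_sight_radius_km(burst_height_km: float) -> float:
    """Approximate ground radius visible from a burst at the given altitude."""
    return math.sqrt(2 * EARTH_RADIUS_KM * burst_height_km)

print(round(line_of_sight_radius_km(30)))   # ~618 km: the balloon-lofted scenario
print(round(line_of_sight_radius_km(400)))  # ~2258 km: a high-altitude burst
```

At just 30 km — the balloon altitude the commission cites — the line-of-sight footprint is already a circle over 600 km in radius, which is why no precise guidance is needed.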
Their bottom line recommendations?
We recommend the development and deployment of enhanced-EMP nuclear weapons and other means to deter adversary attack on the United States. Enhanced-EMP nuclear weapons, called by the Russians Super-EMP weapons, can be developed without nuclear testing … We recommend strengthening U.S. ballistic missile defenses — including deployment of space-based defenses considered by the Strategic Defense Initiative — and that these be designed and postured to also protect the U.S. from EMP attack.
In a nation utterly dependent on being online and having power, a single nuclear warhead detonated above us — even without much precision — could have apocalyptic consequences, creating, as Peter Kelly-Detwiler put it in Forbes, “an existential crisis like nothing the world has ever witnessed.” The challenge for the U.S. — and every developed country — will be to take the necessary preemptive measures while avoiding the unintentional escalation of this precarious situation.
To reach that point, however, AI may need to become more human-like. At least, that’s what Google Cloud Chief Scientist and Director of the Stanford Artificial Intelligence and Vision Labs Fei-Fei Li believes. By doing so, AI would become more capable of improving our lives and providing us peace of mind.
In an interview with MIT Technology Review senior editor Will Knight, Li explained that thinking about AI’s impact on the world is one of the most important factors regarding the technology, and that her work with AI has revealed one thing: future developments and advancements need to be more human-centered.
Better AI Development
“If you look at where we are in AI, I would say it’s the great triumph of pattern recognition,” she said. “It is very task-focused, it lacks contextual awareness, and it lacks the kind of flexible learning that humans have. We also want to make technology that makes humans’ lives better, our world safer, our lives more productive and better. All this requires a layer of human-level communication and collaboration.”
Early signs of such communication and collaboration can already be seen, though their true impact has yet to be felt; in July, for example, Google’s DeepMind revealed it had begun developing AI with imagination, enabling it to think through decisions and plan for the future without human input.
“If our algorithms are to develop equally sophisticated behaviors, they too must have the capability to ‘imagine’ and reason about the future,” said researchers from DeepMind, speaking on AIs equipped with imagination. “Beyond that they must be able to construct a plan using this knowledge.”
Li’s own thoughts nearly echo DeepMind’s, as she also believes AI needs to be able to think and act for itself. To this end, she said that future research needs to focus on “contextual understanding,” as well as “knowledge abstraction and reasoning.”
AI That Represents Humanity
That said, new features and capabilities won’t be enough to truly advance artificial intelligence. Diversity also needs to play a key role. As Li explained, studies have shown that a diverse team leads to more creative, innovative solutions. AI also cannot be biased or influenced by the morally questionable parts of humanity; otherwise, its adoption could disadvantage some groups of people more than others.
“When you are making a technology this pervasive and this important for humanity, you want it to carry the values of the entire humanity, and serve the needs of the entire humanity,” said Li. “If the developers of this technology do not represent all walks of life, it is very likely that this will be a biased technology. I say this as a technologist, a researcher, and a mother. And we need to be speaking about this clearly and loudly.”
Genetically Engineered Eggs are Better than Golden Eggs
People often warn about the amount of cholesterol you get from eating too many eggs. But what if eggs could deliver health benefits as well — like drugs that fight cancer, hepatitis, and other diseases? Japanese researchers from the National Institute of Advanced Industrial Science and Technology (AIST) have made that possible, successfully genetically engineering chickens to lay eggs that contain a special pharmaceutical agent.
According to a report by The Japan News, the researchers at AIST genetically modified precursor cells of chicken sperm to produce a type of protein that’s related to the immune system called interferon beta.
This protein has been found to be effective in treating malignant skin cancer and hepatitis. The modified cells were used to fertilize eggs that produced male chicks. A few rounds of cross-breeding the male chicks resulted in chickens that inherited the genes with interferon beta.
Reagent import and sales firm Cosmo Bio Co. in Tokyo, which developed the method together with the AIST researchers and the National Agriculture and Food Research Organization in Ibaraki Prefecture, now has three hens that lay eggs every one or two days. The egg whites from those eggs contain interferon beta.
Why go through such a tedious process? The project’s goal was to potentially reduce the costs of making drugs. “This is a result that we hope leads to the development of cheap drugs,” Hironobu Hojo, professor at Osaka University, told The Japan News. “In the future, it will be necessary to closely examine the characteristics of the agents contained in the eggs and determine their safety as pharmaceutical products.”
This is just one example of how gene editing methods can reshape industries, especially healthcare. Others have worked on applying gene editing such as CRISPR directly into cancer cells or to a patient. Producing cheap drugs from chicken eggs is another possibility — and a rather creative one, at that.
Moving forward, the researchers plan to work on stabilizing the interferon beta content of the eggs to produce anywhere from a dozen milligrams to 100 milligrams from a single egg.
The level of minimum power demand from South Australia’s grid hit a new record low — and it did so about a week after setting the previous low demand record, thanks to rooftop solar panels. On Sunday, September 17, only 587.8MW of power was drawn from the grid, beating the low mark of 786.42MW from the previous Sunday.
Additionally, whereas past record-low demand periods occurred during nighttime hours, these new records happened in the middle of the day, despite higher overall energy consumption during those hours — just as you’d expect with solar power.
According to Renew Economy, moderate early spring temperatures (therefore, fewer air conditioners running) coupled with a high rooftop solar output of more than 700MW account for the new record.
The new numbers indicate that 47.8% of South Australia’s demand for electricity is currently met by rooftop solar, up more than 10% in a single week. This is a regional best for South Australia, and probably beats any record set by any comparably-sized grid anywhere.
New Sources of Power
The Australian Energy Market Operator (AEMO) estimates a record low demand of 354MW by 2019, and a possible zero grid demand within ten years. Western Australia is on the same timetable.
As prices drop along with demand, AEMO officials are working to shift practices and thinking to match the new reality. For example, South Australia is one of the first areas to recognize the middle of the day as an off-peak time.
While the US government is not supporting these kinds of sweeping initiatives, individual cities and states like California are crushing record after record. The trend isn’t going anywhere, and is the smarter long-term investment.
It is rare to encounter a scientific fact that stirs widespread debate and distrust quite like the matter of climate change.
Despite consensus among climate specialists about a theory that is supported by a mountain of facts from the physical, natural, and cultural sciences, the debate continues to be perpetuated by politicians, industrialists, academics, and armchair scientists.
When governments reject science, the rest of us are put at risk. By refusing to accept the facts and potential ramifications of climate change, as a society, we stand to delay or overlook actions that are urgently needed to reduce our impact on the environment and adapt our cities and farmlands to a different future.
Climategate Gave Wind to the Skeptics
Much of the intense skepticism about climate change science began in 2009, when thousands of emails and data files were stolen from the Climatic Research Unit (CRU) at the University of East Anglia, in the UK, and later released as purported evidence of a conspiracy to alter facts.
The allegations claimed that scientists had publicized only the results that supported their theory that climate change is driven by human activities, while hiding other data that might negate this claim.
A series of inquiries found no evidence that these scientists were in the wrong, though the investigations did generally call for more transparency. Selective reporting is indeed a serious issue in the scientific community, especially when it comes to theory building as theories require consideration of all available facts. Is it possible that the theory of climate change is based on a biased selection of facts?
We decided to find out.
Publication Bias in the Medical Sciences
But what exactly is publication bias? If researchers only publish results that confirm their specific view or previous expectations or hopes, then the bulk of results in this research field will be skewed towards that established belief.
For example, if a researcher is developing a medical drug to treat a disease, then all results of the clinical trial should be made public for the benefit of other researchers seeking the same cure.
We know that, in medicine, positive and statistically significant results are more likely to be published than non-results. This poses a risk to medical sciences as failed experiments that are not reported may lead other researchers to waste precious funds pursuing dead-ends. Moreover, if only positive results are published, people will think the drug may be more effective than it truly is.
Fortunately, there are established methods in numerical ecology and statistics that allow us to detect when non-significant results are missing from a field of research.
One such method is the “fail-safe N,” which addresses what is sometimes called “the file-drawer problem”: the practice of publishing only positive results while filing away studies with negative or non-confirmative results.
Statistically, we can calculate the fail-safe N, which estimates how many unpublished negative studies would be required to render the overall statistical effect insignificant. This means that if publication bias were occurring in climate change science, we could detect it through “missing” negative results.
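To make the idea concrete, here is a minimal Python sketch of Rosenthal’s fail-safe N. The z-scores below are purely illustrative assumptions, not figures from our study:

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: an estimate of how many unpublished
    null-result studies would be needed to pull the combined effect
    below the one-tailed significance threshold z_alpha."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    return max(0.0, (z_sum / z_alpha) ** 2 - k)

# Hypothetical field of five published studies, each reporting z = 2.5:
print(round(fail_safe_n([2.5] * 5)))  # -> 53 hidden null studies
```

A fail-safe N that is small relative to the number of published studies suggests the field’s overall effect is fragile to a file drawer of unpublished nulls; a large one suggests it is robust.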
No Evidence of Publication Bias
In our research, published in the journal Climatic Change, we analyzed more than 1,100 published results from the field of climate change science and found no evidence of under-reporting or missing results — even results that were not statistically significant or showing no positive effects were reported.
Our study revealed some stylistic biases in how articles are written, however. The largest, most prominent effects (as they relate to climate change) were reported in the upfront summary sections (also called the abstract) where they are most readily seen by readers, whereas the lesser effects and those that were not significant tended to be buried within the technical results sections where relatively few readers are likely to see them.
Stylistic biases are less concerning than a systematic tendency to under-report non-significant effects, assuming researchers read entire reports before formulating theories. However, most audiences, especially non-scientists including journalists who report on the findings, are more likely to read abstracts or summary paragraphs only, without perusing technical results.
The onus to effectively communicate science does not fall entirely on the reader; rather, it is the responsibility of scientists and editors to remain vigilant, to understand how biases may pervade their work, and to be proactive about communicating science to non-technical audiences in transparent and unbiased ways.
Climate Science Is Built on a Solid Foundation
It is important to stress that we are not climate scientists. Rather, in this instance, we functioned as scientists holding climate scientists to account and tested to see if their reporting practices were sound.
Although climate scientists tend to highlight their most interesting results in the abstract of their articles, something that is hardly unique to their field, we can be confident that the theory of climate change is built on a solid foundation that gives credence to positive, neutral, and negative experimental results.
In scientific terms, we reject the accusation made by climate change skeptics and can confirm that there is no publication bias in climate change research.
This article is co-published with ScienceNordic and in Danish on ForskerZonen. It was co-written with Christian Harlos, formerly at Lund University and now working in local government marine conservation, and Tim Edgell, an ecologist for consultancy Stantec. Both were co-authors of the research that this article is based on.
Quantum theory predicts entanglement: that huge numbers of atoms can be intertwined by quantum forces, across distances or inside macroscopic structures. However, “predicts” has been the key word until recently, as hard experimental evidence has been lacking. That evidence was just presented by University of Geneva scientists, who demonstrated the entanglement of 16 million atoms in a one-centimeter crystal.
Achieving entanglement hasn’t been the real challenge for physicists looking to generate empirical proof of the concept, though. Researchers can generate entangled photons by splitting a photon. It is the observation and recording of entanglement that has proven next to impossible — until now. With one caveat, as UNIGE applied physics group researcher Florian Fröwis explained in a press release about the team’s research: “But it’s impossible to directly observe the process of entanglement between several million atoms since the mass of data you need to collect and analyze is so huge.”
Therefore, Fröwis and his team took inventory of which measurements they were able to take and, of those, which might generate the evidence they were searching for. They settled on the single direction of light re-emitted by the crystal and analyzed its statistical properties. This is how the team was able to show the entanglement of 16 million atoms, rather than a few thousand.
Quantum networks will be essential to data protection in the future, because they make it possible to send a signal and immediately detect any interception of that signal by a third party. To send and receive these kinds of signals, you need quantum repeaters, which can unify entangled atoms with a strong quantum relationship despite being separated by great distances. These quantum repeaters house crystal blocks supercooled to 270 degrees Celsius below zero and enriched with rare earth atoms. Once these blocks are penetrated by a photon, entanglement is created.
Particle entanglement is at the heart of the coming revolutions in quantum computing and quantum encryption, which will themselves be driving everything from artificial intelligence to personalized medicine. And while this is high-level stuff, it all depends on the entanglement of atoms at the quantum level, which this research has demonstrated on an unprecedented scale.
NASA’s new X3 thruster, which is being developed by researchers at the University of Michigan in collaboration with the agency and the US Air Force, has broken records in recent tests. It’s hoped that the technology could be used to ferry humans to Mars.
A chemical rocket tops out at around five kilometers per second (3.1 miles/sec), while a Hall thruster can reach speeds of up to 40 kilometers per second (25 miles/sec). This kind of increase is particularly relevant to long-distance space travel, like a prospective voyage to Mars. In fact, the project’s team leaders estimate that ion propulsion technology such as this could take humans to the Red Planet within the next 20 years.
Ion engines are also more efficient than their chemical-powered counterparts, requiring much less propellant to transport a similar amount of crew and equipment over large distances. In an interview with Space.com, project lead Alec Gallimore stated that ion propulsion can go around ten times farther on a similar amount of fuel.
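The efficiency gap can be sketched with the Tsiolkovsky rocket equation. Treating the quoted speeds as effective exhaust velocities and assuming an illustrative 6 km/s of in-space delta-v (our assumption, not a mission figure), a short Python check shows why the ion engine needs far less propellant:

```python
from math import exp

def propellant_fraction(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation, rearranged: the fraction of total
    vehicle mass that must be propellant to achieve a given delta-v."""
    return 1 - exp(-delta_v / exhaust_velocity)

# Illustrative 6 km/s delta-v maneuver (assumption, not from the article)
chemical = propellant_fraction(6, 5)   # ~5 km/s chemical exhaust
ion = propellant_fraction(6, 40)       # ~40 km/s Hall thruster exhaust
print(round(chemical, 2), round(ion, 2))  # -> 0.7 0.14
```

Under these assumptions, a chemical stage would need about 70 percent of its mass in propellant, while the ion engine would need roughly 14 percent.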
There are, of course, many other forms of deep-space travel on the table. The flaw of chemical-based designs is the need to carry their chemical fuel into space, which adds mass that requires yet more fuel to lift, and so on. A Bussard ramjet, a type of fusion rocket, avoids this by collecting diffuse hydrogen in space with a huge scoop; since its fuel is picked up en route, it could theoretically approach light speed.
Sci-fi fans will recognize theoretical faster-than-light schemes like the warp drive. General relativity stipulates that nothing can travel through space faster than light. However, if we could contract the fabric of spacetime ahead of us and expand it behind us, we could technically move faster than the speed of light. The scientific consensus so far, however, is that we’re nowhere near this kind of technology.
Red Planet, Green Light
Recent tests demonstrated that the X3 thruster can operate at over 100kW of power, generating 5.4 Newtons of thrust — the highest of any ionic plasma thruster to date. It also broke records for maximum power output and operating current.
The technology is apparently on track to take humans to Mars sometime in the next twenty years. However, it’s not without its limitations.
Compared to chemical rockets, the ionic alternative produces a very small amount of thrust. This means it would have to operate for a very long time to reach the same level of acceleration as a chemical system, and as a result it’s not currently suitable for the launch process.
However, engineers are attempting to mitigate these issues with the X3 design. Multiple channels of plasma are being used rather than just one, but the current challenge is producing an engine that’s sufficiently powerful as well as being relatively compact. While most Hall thrusters can be picked up and carried around a lab with relative ease, the X3 needs to be moved with a crane.
In 2018, the team will continue to put the X3 through its paces with a test that will see it run continuously for 100 hours. A shielding system is also being developed that would prevent plasma from damaging the walls of the thruster, allowing it to operate for even longer, perhaps even several years at a time.
Japanese automaker Toyota is serious about perfecting hydrogen fuel cell technology to power its vehicles, and it has scheduled initial feasibility study operations for its zero-emissions heavy-duty truck a little over a week from today. A concept version of a truck running Toyota’s specialized hydrogen fuel cell system, designed for heavy-hauling use, will move goods from select terminals at the Ports of Los Angeles and Long Beach to nearby warehouses and rail yards beginning on October 23.
“If you see a big-rig driving around the Ports of Los Angeles and Long Beach that seems oddly quiet and quick, do not be alarmed! It’s just the future,” Toyota wrote in a press release. The company expects the daily runs to cover some 322 kilometers (200 miles) to test the fuel cell system’s duty-cycle capabilities. Afterwards, longer trips could be introduced.
According to Toyota, this zero-emissions heavy-duty proof-of-concept truck has already covered roughly 6,437 kilometers (4,000 miles) in development tests, pulling progressively heavier cargo loads — up to 36,287 kilograms (80,000 pounds) — while emitting only water vapor. It packs 670 horsepower and 1,325 pound-feet of torque, generated by two Mirai fuel cell stacks combined with a 12 kWh battery.
Baidu, one of the world’s largest internet and artificial intelligence (AI) technology companies, has announced plans to deliver Level 4 self-driving cars by 2021 and Level 3 vehicles by 2019. Chinese automaker BAIC group will manufacture the vehicles, while Baidu provides the software to enable their self-driving capabilities, which will be developed through their Apollo autonomous driving program.
Right now, Tesla’s Autopilot system is considered a Level 2 since it still requires that a driver monitor the car’s behavior, and experts have expressed doubts on the company’s ability to deliver higher-level autonomy within Elon Musk’s proposed timeframe.
China is a large market for Tesla’s electric vehicles, so if Baidu and BAIC Group’s partnership is able to produce Level 3 or Level 4 self-driving cars before Tesla can, Musk’s company could be facing some stiff competition in one of their strongest markets. Of course, this competition could also provide Tesla with the motivation needed to deliver on their lofty promises.
Although more and more automakers, tech companies, and even government officials across the globe are taking notice of self-driving cars, they still have numerous hurdles to overcome, both in terms of technology and in convincing the public to hand over control of their cars to artificially intelligent tech. Still, whether they hit the roads in two years or 10, self-driving vehicles are looking more and more like the future of transportation.
Psychedelics like lysergic acid diethylamide (LSD) and psilocybin are popular as party drugs, but less so for what researchers claim are their therapeutic effects — a major focus of a number of clinical trials over the last decade. Magic mushrooms, for example, have been the subject of recent work on treating some of the symptoms of clinical depression. For instance, a U.S. study last year showed how a single dose of psilocybin can lift the anxiety and depression felt by cancer patients.
Now, scientists from the Imperial College London have found how psilocybin, which is the active psychedelic compound that occurs naturally in magic mushrooms, can “reset” brain activity in patients suffering from depression. Their study, which was published in the journal Scientific Reports on Friday, highlights how psilocybin gave patients a “kick start” in fighting clinical depression.
The researchers at Imperial gave two doses (10 mg and 25 mg) of psilocybin, a week apart, to 20 patients with a treatment-resistant form of depression. Immediately after receiving the doses, the patients reported a decrease in depressive symptoms, and MRI scans revealed this was associated with a reduction in blood flow to areas of the brain involved in handling emotional responses, stress, and fear.
Rebooting Through Depression with Magic Mushrooms
In short, the patients experienced a sort of reboot. “We have shown for the first time clear changes in brain activity in depressed people treated with psilocybin after failing to respond to conventional treatments,” Robin Carhart-Harris, head of Psychedelic Research — there’s such a thing — at Imperial, said in a press release. “Several of our patients described feeling ‘reset’ after the treatment and often used computer analogies. For example, one said he felt like his brain had been ‘defragged’ like a computer hard drive, and another said he felt ‘rebooted’.”
It would seem that during the drug “trip,” brain networks went through an initial disintegration that was followed by a re-integration afterwards, when the patients “come down” from the psychedelic. “Psilocybin may be giving these individuals the temporary ‘kick start’ they need to break out of their depressive states and these imaging results do tentatively support a ‘reset’ analogy. Similar brain effects to these have been seen with electroconvulsive therapy,” Carhart-Harris added.
The researchers acknowledged, however, that while their study provides a new window into the brains of people who’ve taken psychedelics, the small number of patients tested and the absence of a control/placebo group limits the significance of their study. “Larger studies are needed to see if this positive effect can be reproduced in more patients,” said senior author David Nutt, director of the Neuropsychopharmacology unit of the Brain Sciences division at Imperial. “But these initial findings are exciting and provide another treatment avenue to explore.” The researchers also warned against self-medicating using such psychedelics.
Chemists from The Scripps Research Institute (TSRI) have demonstrated a way to create new materials from DNA. Using recent advances in editing techniques, researchers Floyd Romesberg and Tingjian Chen were able to chemically modify DNA nucleotides to produce substances with qualities that could be useful for medical purposes. Their work has been published in Angewandte Chemie.
Previously, Romesberg’s laboratory had successfully produced an artificial DNA polymerase enzyme capable of making copies of modified DNA in the same way that a standard polymerase makes copies of normal DNA. However, in that case, the researchers were only modifying the DNA by attaching fluorine or methoxy moieties to the nucleotides’ sugar backbone. This new study demonstrates several further modifications, each of which offers up its own potential applications.
One of the modifications carried out by the chemists was the addition of an azido group to the DNA. Using a series of techniques known as click chemistry, the researchers could then attach various other molecules to the DNA via this azido group.
A more advanced implementation affixed multiple strands of DNA to one central strand that had been modified with an azido group. That DNA was then amplified using a method known as polymerase chain reaction, and when the resulting structure was put in contact with water, it formed a hydrogel.
A Happy Surprise
The researchers weren’t expecting this hydrogel to form, but they were pleased to see that it did.
Hydrogels have a host of potential applications, ranging from drug delivery to the cultivation of cells in a three-dimensional culture. However, they’re difficult to produce in a controlled manner. The techniques being developed at TSRI could change all that.
Next, the researchers plan to further investigate how to put substances like these hydrogels to good use. They also plan to develop further DNA modifications that could easily be reproduced using the polymerase.
As TSRI’s Floyd Romesberg, a professor of chemistry at the institution, noted in a TSRI press release, “DNA has some unique properties as a material, and with this new ability to modify it and replicate it like normal DNA, we can really begin to explore some interesting potential applications.”
Researchers from Stanford University have developed an inexpensive alternative to lithium-ion batteries that could better help us prepare for a renewable energy future. The group was able to create a sodium-based battery that can store the same amount of energy as a lithium-based battery at less than 80 percent of the cost. Other researchers have created sodium-based batteries in the past, but this new approach promises to be more cost-effective.
“Nothing may ever surpass lithium in performance,” said chemical engineer Zhenan Bao. “But lithium is so rare and costly that we need to develop high-performance but low-cost batteries based on abundant elements like sodium.”
The sodium in the Stanford battery binds to a compound called myo-inositol, an organic compound found in household products like baby formula. And, just as sodium is much more abundant than lithium, myo-inositol can easily be derived from rice bran or found in the byproducts made during the process of milling corn. This will help to make materials gathering cost-effective.
Batteries are Key
Available access to battery storage is an essential factor in the clean energy revolution. Sources of renewable energy generation like solar and wind are typically reliant on unpredictable environmental factors. Batteries allow excess power to be stored during peak generating conditions and saved for use when conditions are less favorable.
Cheaper sodium-based batteries could help to make renewable power more accessible for regions where the cost of lithium-ion batteries is a significant financial barrier.
The Stanford battery still has a long way to go before it can be adapted into a consumer product. The team’s analysis focused on cost-performance comparisons but it did not consider volumetric energy density, or how big the sodium-ion batteries need to be in order to store the same amount of power as lithium-ion batteries.
However, despite the work still to be done, the team is confident that their design can be improved.
To get from one city to another in just 30 to 60 minutes—who doesn’t want that? SpaceX founder and CEO Elon Musk definitely wants it, and that’s one of the potential uses for his redesigned BFR: earth to earth flights between major cities.
Musk previewed the latest BFR update at the 2017 International Astronautical Congress in September. Designed to be Musk’s new rocket and spacecraft for Mars, the BFR could also serve as a suborbital spacecraft for SpaceX, said former astronaut Leroy Chiao. Essentially, suborbital spacecraft — like Virgin Galactic’s VSS Unity and Blue Origin’s New Shepard — are meant for the budding space tourism industry, and could function something like extremely high-tech, high-flying airplanes. Flying at the BFR’s 4.6 miles per second, you could get from New York to Los Angeles in just 25 minutes.
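The 25-minute figure is easy to sanity-check: at cruise speed alone, the trip would take under nine minutes, so most of the quoted time presumably goes to ascent, deceleration, and landing. The distance below is an approximation on our part:

```python
# Back-of-the-envelope check of the article's figures
ny_la_miles = 2450           # approximate New York-Los Angeles distance (assumption)
cruise_speed_mi_s = 4.6      # miles per second, per the article
cruise_minutes = ny_la_miles / cruise_speed_mi_s / 60
print(round(cruise_minutes, 1))  # -> 8.9 minutes at cruise speed alone
```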
But just how would riding such a spacecraft feel? Chiao, who’s flown aboard three NASA space shuttles and a Russian Soyuz, described it to Business Insider: “[L]aunch, insertion and entry would be similar to a capsule spacecraft [like the Soyuz], with the difference being in the final phase of landing,” he said.
Can You Handle It?
Chiao suggests that flying aboard a BFR won’t exactly be easy. “During launch on a rocket with liquid engines […] the liftoff is very smooth and one really can’t feel it,” he described. “Ignition of the next stage engine(s) causes a momentary bump in g-force. As you get to the last part of ascent, you feel some g’s come on through your chest, but it is not uncomfortable.”
The crucial moment is when the BFR’s rocket engines separate from the spacecraft, when passengers would feel “instantly weightless.” Here’s how he describes it:
You feel like you are tumbling, as your balance system struggles to make sense of what is happening, and you are very dizzy. You feel the fluid shift [in your body], kind of like laying heads-down on an incline, because there is no longer gravity pulling your body fluids down into your legs. All this can cause nausea. As you start to re-enter the atmosphere, you would feel the g’s come on smoothly and start to build.
Then, finally the BFR lands. “[Y]ou would both feel and hear [the engines],” Chiao said. “As the thrust builds, you would feel the g’s come on again and then at touchdown, you would feel a little bump.”
If you think you can handle it, then maybe the BFR’s Earth-to-Earth travel is for you. “[T]his would not be for the faint of heart, and it is difficult to see how this would be inexpensive,” he said. Keep in mind, however, that there’s still a lot SpaceX and Musk have to figure out before this actually works. “But the one thing I’ve learned from observing Elon, is not to count him out,” Chiao added.
Earlier this month, NASA issued a press release stating that it’s likely that our solar system has a ninth planet—even if it’s proving difficult to find.
The planet could have a mass ten times that of Earth’s, and be situated twenty times as far from the sun as Neptune. It’s being referred to as “Planet Nine,” and while it’s very difficult to procure clear evidence of its existence, some scientists are absolutely convinced that it’s out there.
“There are now five different lines of observational evidence pointing to the existence of Planet Nine,” said Konstantin Batygin, a planetary astrophysicist at the California Institute of Technology (Caltech), who is part of a team on the search for the planet.
“If you were to remove this explanation and imagine Planet Nine does not exist, then you generate more problems than you solve. All of a sudden, you have five different puzzles, and you must come up with five different theories to explain them.”
In a 2016 paper, Batygin and co-author Mike Brown detailed six known objects in the Kuiper Belt that behave rather strangely. All of them have elliptical orbits pointing in the same direction, and all of those orbits are tilted the same way. Both of these traits serve as clues to the presence of Planet Nine.
Computer simulations that took the hypothesized planet into account indicated that there should also be several other bodies with more extreme tilts from the solar plane, on the order of 90 degrees. Brown realized that astronomers are already aware of five such objects, meaning the observations fit the simulations.
Planet Nine would also explain why the plane in which the other planets orbit is tilted about six degrees away from the sun’s equator: over time, Planet Nine’s distant gravity has made the entire solar system plane wobble away from center.
Finally, there are the objects in the Kuiper belt that orbit in the opposite direction of everything else in our solar system.
“No other model can explain the weirdness of these high-inclination orbits,” explained Batygin. “It turns out that Planet Nine provides a natural avenue for their generation. These things have been twisted out of the solar system plane with help from Planet Nine and then scattered inward by Neptune.”
Based on the behavior of these distant objects, the astronomers believe the planet to be a Super-Earth, a massive rocky planet that is extremely common in the universe—but which our solar system, oddly, lacks. The planet could have coalesced out in the cold reaches of our system over millions of years, formed close to the sun and then been flung outward, or even been captured by the Sun from another system.
Batygin and Brown are using the Subaru Telescope at Hawaii’s Mauna Kea Observatory to continue their search for Planet Nine. According to Batygin, this telescope is the best tool available to hunt down something dim and distant in the vast expanse of sky.
However, Planet Nine isn’t the only possible explanation for the orbital behaviors observed. A recent survey of the outer solar system found over 800 trans-Neptunian objects. A random distribution of this matter could also potentially produce the orbital tilts and alignments observed — but the jury is still out.
After some emotional testimony from doctors and patients, a Food and Drug Administration (FDA) advisory panel has voted unanimously to approve a gene therapy that improves hereditary blindness. The treatment will now progress to a final decision from the FDA and, if approved, will be the first gene therapy legally available in the United States for an inherited disorder. The FDA is under no obligation to follow the advisory board’s recommendation but usually does.
The treatment, which will be marketed as Luxturna, fixes a mutation in the RPE65 gene. It involves a single treatment to each eye, which introduces genetically engineered virus particles carrying a corrected version of the mutated gene. Spark Therapeutics, the treatment’s developer, estimates that 6,000 people around the world could benefit from this treatment. More than 90 percent of the patients treated in the study showed some improvement in eyesight within just a few days of treatment.
Gene Therapy Breakthrough
This is a huge step forward for the field of gene therapeutics. “[O]n multiple fronts, it’s a first and ushers in a new era of gene therapy,” assistant professor of ophthalmology at the Oregon Health and Science University, Paul Yang, told NPR.
Beyond that, this treatment approach could also be applied to other formerly incurable genetic eye diseases. “There are a lot of retinal diseases like this, and if you added them together it’s a big thing because they are all incurable,” said lead researcher Albert Maguire in an interview with NPR before the hearing.
Other gene therapy clinical trials are currently being held around the world. According to the Washington Post, diseases ranging from hemophilia to Huntington’s Disease, an inherited condition that causes the progressive breakdown of nerve cells in the brain, are being targeted for gene therapy treatments.
As with any medical treatment, successful gene therapy doesn’t come without risk. Still, this is huge progress toward correcting previously incurable conditions. This treatment and others like it have the potential to transform countless lives in the very near future.
Now, CoinDesk reports that the U.S. State Department is also looking into utilizing Blockchain technologies to “advance diplomacy and development objectives.”
U.S. Deputy Secretary of State John Sullivan put forward the idea while speaking at the Blockchain@State forum held in Washington DC earlier this week, claiming that Blockchain technology could play a key role in the restructuring plan initially proposed by Secretary of State Rex Tillerson.
“This forum has implications for our ongoing redesign efforts,” said Sullivan. “We’re interested to learn whether blockchain technology can have direct applications to many of the key features of our proposed redesign.”
Using Blockchain for Aid, Democracy, and Corruption
CoinDesk writes that several ideas were discussed regarding how a blockchain could be implemented to improve various aspects of the State Department, including how it provides foreign aid, promotes democracy, and improves governance and political institutions in U.S.-allied countries.
Beyond that, Sullivan suggested the technology could also help deal with matters of fraud and corruption in a government’s control over land title registries.
The U.S. government has been a bit slower than other countries when it comes to addressing the tech, and potentially adopting it. But despite this, many blockchain companies participated in the forum, with several reportedly supporting Sullivan’s ideas for incorporating blockchain technology.
“We’re particularly excited that the U.S. is waking up, big time, and realizing that this is a transformative technology,” said Joseph Lubin, founder of the blockchain development firm ConsenSys, which co-sponsored the event. “There are other smaller players who are embracing this technology strongly, but we do want to see America get out in front of this and transform society with it.”
Disclosure: Several members of the Futurism team, including the editors of this piece, are personal investors in a number of cryptocurrency markets. Their personal investment perspectives have no impact on editorial content.
Two years ago, a team of U.S. engineers going by the name MegaBots Inc. challenged their Japanese counterparts, Suidobashi Heavy Industry, to a duel involving giant robots. After a lot of work and some back-and-forth between the two teams, the long-awaited duel will finally take place on Tuesday, October 17. It was initially set for sometime in August, but was delayed for unknown reasons. We suspect it’s because robots take a long time to build and test.
The international robot duel will stream on Twitch for all to see, though the fight itself won’t be a live event. You see, the fight between MegaBots’ Eagle Prime and Suidobashi’s Kuratas already happened, held at an abandoned steel mill in Japan. What viewers will be watching will, presumably, be an edited version; the duel consisted of multiple rounds, with repairs between rounds requiring multiple days to complete.
There was no audience to witness the fight when it took place, save for the two teams of engineers and the commentators: MMA announcer Mike Goldberg and robotics expert Saura Naderi. MegaBots’ Gui Cavalcanti explained to The Verge that the duel was fought on a knockdown system, meaning a team could only win by knocking down or disabling the other team’s bot.
Specifications for the robots, including weight, size, and weapons, were all left up to the two teams, though weapons were required “to not cut through metal, but instead to damage it.” It’s a sensible decision when you consider the fact that both robots have pilots inside, who would be susceptible to anything that could penetrate their robot’s body or the cockpit.
Be prepared for a relatively slow fight, though. As you can see in the video above, the U.S. team’s Eagle Prime doesn’t move very fast, weighing in at 12 tons (24,000 lbs.). That said, people are probably more excited to see both robots torn apart than gracefully dodging and parrying attacks.
Following an October 6 blog post that outlines the case against President Trump’s NASA Administrator nominee, Representative Jim Bridenstine (R-OK), the Union of Concerned Scientists (UCS) has formally denounced the choice and started a petition against his appointment.
Among its reasons, the UCS’s petition cited that the NASA Administrator position is typically held by a space professional or scientist; Representative Bridenstine has no formal training in science or engineering. The UCS also pointed out that Bridenstine has a well-documented history of climate change denial and public statements that betray a misunderstanding of basic climate science.
As a new lawmaker, Bridenstine demanded that President Obama apologize for wasting money on climate change research. He has also blamed climate change on the sun, denied that carbon emissions are causing global temperatures to rise, and claimed that global temperatures stopped rising about ten years ago. Each of these assertions directly contradicts well-established climate science — including that produced by NASA, the agency he has been proposed to head.
Beyond that, the UCS says these remarks point to a larger problem: “His public remarks suggest that his current understanding of Earth Science is largely informed by politically charged skeptics of climate change research. We need a NASA administrator who can differentiate science from politics.”
The UCS also argues that Representative Bridenstine’s remarks, and the intentions they could reflect, present a real danger to American public health and safety. Bridenstine has recommended that all Earth science related work be removed from the policy and purpose Congress declares for NASA, which would represent a significant change to the agency’s mission. Currently, NASA’s Earth science research advances our understanding of, and ability to cope with, natural disasters such as Hurricanes Harvey, Irma, and Maria. It also allows Americans — from federal agencies to local officials to average citizens on the ground — to more safely and effectively prepare for and respond to extreme weather events.
A Larger Pattern
Currently, Bridenstine is the only nominee for this key position. While the appointment is not set in stone, and has been criticized by some in Congress, it is highly likely that he will get the position. If he does, his leadership of NASA will be only the latest chapter in the growing influence of climate change denial on U.S. politics.
Elon Musk took to Instagram to post another video of Tesla’s team of KUKA industrial robots at work building the Model 3, to showcase Tesla’s focus on automation in its development of electric vehicles. Earlier this week Musk sent out a video of Tesla’s Model 3 assembly line slowed down to 1/10th speed.
Skeptics of an automated future like World Bank Chief Jim Yong Kim warn that humans are in for a job disruption not seen since the industrial revolution, and that we’d best invest in education and health. Kim argues that intelligent automation and reactionary political elements may threaten economic development (e.g., the resistance to forces of globalization)—putting the world, Kim adds, on a “crash course.”
Musk is likely sending out these videos in response to claims that the Model 3 is largely being built by hand, a claim which Tesla has stated to be “fundamentally wrong and misleading.” The company has been unable to keep pace with the production levels announced by Musk just this past summer.
Another statement from Tesla said, “We are simply working through the S-curve of production that we drew out for the world to see at our launch event in July. There’s a reason it’s called production hell.”
We can expect more videos from Musk showing that the Model 3 is in the hands of an automated, streamlined production line. Hopefully, we will soon see the results of that process translate into speedy production. The company has so far had difficulty meeting the high demand for the new model.
“It looks like you just punched a hole in the ice,” Kent Moore, a professor of physics at the University of Toronto, told Motherboard. “This is hundreds of kilometers from the ice edge,” he added. “If we didn’t have a satellite, we wouldn’t know it was there.”
The researchers were monitoring the area with satellites because a similar hole in Antarctica’s ice opened up in 2016. This year’s hole — a phenomenon also known as a polynya — measures around 78,000 square kilometers (30,000 square miles), making it the largest observed in the Weddell Sea since the 1970s.
This hole in Antarctica’s ice formed because deep water from the Southern Ocean is pushed upwards by currents until it melts the topmost layer of ice. When that warm water is exposed to the atmosphere, it cools, sinks, and is reheated in deeper areas, continuing the cycle and preventing the formation of a new ice blanket.
While scientists know the basics regarding how this hole formed, they are less certain about its root cause and potential future effect on Antarctica and its oceans, Moore told National Geographic. They also aren’t sure whether it’s been prompted by climate change or something else, such as local marine life.
Moore calls the fact that the polynya came back after a forty-year absence “remarkable,” and he and the rest of the researchers are now attempting to find out exactly what prompted this return and what future changes it might bring about.
Today, a panel will advise the US Food and Drug Administration whether Luxturna, a gene therapy treatment developed by Spark Therapeutics, should be approved for general usage. The treatment has already been used to improve the eyesight of more than two dozen people with retinal disorders.
Gene therapy typically uses an engineered virus to administer a patient with a faulty gene with a corrected version. Rather than simply responding to the symptoms of the condition in question, it attempts to make changes to the individual’s genetic make-up in order to solve the problem at its root.
Luxturna fixes a mutation in a gene known as RPE65, which is responsible for telling the body how to produce a protein that’s essential for normal eyesight. It introduces billions of engineered virus particles bearing a corrected version of the gene to the retinal cell, via a quick injection to the eyes.
The company estimates that 6,000 people around the world could benefit from Luxturna, including between 1,000 and 2,000 people in the US who suffer from diminished eyesight. The majority of these people would eventually lose their current level of vision entirely without treatment, and there are currently no drugs designed for people with an RPE65 mutation that are approved by the FDA.
However, Luxturna is not without its drawbacks. It’s not an outright cure, and it doesn’t give recipients full 20/20 vision. There’s currently no data on how long its effects last, so there’s a chance that patients’ sight might begin to recede once again over time.
Cost is also a major factor in how accessible it is. Two of the treatment’s biggest competitors, Strimvelis and Kymriah, cost around $700,000 and $475,000 respectively. Consequently, it seems likely that Luxturna will have to drop in price to be a feasible competitor. Spark may already be anticipating this: the company has announced plans to set up a program to help patients cover out-of-pocket costs like travel to facilities that offer the treatment.
Gene therapy has the potential to make huge improvements to the quality of life of people suffering from various genetic diseases. This blossoming form of treatment could well be the wave of the future.
“This is what I believe medicine is going to be like for the next 20, 30, if not 50 years,” said Spark CEO Jeff Marrazzo, speaking to the MIT Technology Review. “I think this is the beginning of an age that is going to fundamentally change medicine.”
We’ve already seen projects that use gene therapy to tackle everything from brain diseases to broken bones. However, these treatments will all require FDA approval — so scientists working in this sphere will likely be watching today’s decision regarding Luxturna very closely.
Engineers at Cornell University have developed a programmable synthetic skin inspired by the amazing ways octopuses and cuttlefish are able to blend into their environments. The project spawned a stretchable material that’s capable of morphing into a variety of 3D shapes.
The pneumatically-activated material draws inspiration from the papillae that cephalopods use to camouflage. These papillae are muscular hydrostats with no skeletal support—much like the human tongue. The research team looked at these structures to create synthetic tissues capable of similar shape-shifting abilities.
The result is a synthetic material that can extend and retract to form a wide variety of 3D shapes. However, while the team was influenced by camouflage techniques, the project has a much broader range of uses.
Morph and Stretch
“Engineers have developed a lot of sophisticated ways to control the shape of soft, stretchable materials, but we wanted to do it in a simple way that was fast, strong, and easy to control,” said James Pikul, the lead author of the paper, in a press release. “We were drawn by how successful cephalopods are at changing their skin texture, so we studied and drew inspiration from the muscles that allow cephalopods to control their texture, and implemented these ideas into a method for controlling the shape of soft, stretchable materials.”
The engineers behind the project have indicated that the synthetic skin might offer up some important advantages in scenarios where temperature control is important. The material could be programmed such that its 2D configuration reflects light, while its 3D arrangement absorbs it, regulating or manipulating the temperature as needed.
The ability to quickly switch between a flat 2D surface and a bumpy 3D exterior could also be useful in objects that need to pass through water or air. Changing the amount of drag generated by the material might be an effective way of regulating speed. This is one of the main ways that cephalopods use their papillae—forming shapes to serve as camouflage while they remain very still, then quickly transitioning to a smooth surface so they are as hydrodynamic as possible for a quick escape.
Bitcoin is back in a big way. After a few tumultuous weeks caused by some intense government regulation from states like China and South Korea, and fears of impending bans in Russia, Bitcoin has been able to rally and surpass its previous record value established in early September.
Prices soared to $5,314.06 at the point of writing, beating the previous record of $5,013.91. Prices were expected to take a dip after the cryptocurrency hit the $5,000 point as some investors likely had sell orders placed for that milestone. However, China’s announcements about the temporary ban followed by other developments played a major role in the tumult that followed.
Bitcoin prices have achieved 424 percent growth since the beginning of the year, proving that the cryptocurrency is particularly resilient, contrary to some assumptions.
Bitcoin could be back on track with its meteoric rise. Experts seem to be unfazed by the rollercoaster changes of the past few weeks. Speaking during CNBC’s Fast Money segment, former Fortress Investment Group hedge fund manager turned cryptocurrency investor Mike Novogratz predicted that Bitcoin will reach heights above $10,000 in the next six to ten months.
The cryptocurrency market is anything but stable. There is no definitive way to predict how it will perform in the coming days, let alone weeks or months. Still, whether crypto is a bubble or not, there’s no doubt that it is disrupting the financial sector.
Disclosure: Several members of the Futurism team, including the editors of this piece, are personal investors in a number of cryptocurrency markets. Their personal investment perspectives have no impact on editorial content.
Despite the name change, the transportation startup’s goals remain the same. It will continue to develop and test its hyperloop pod and tracks as well as investigate the viability of previously announced routes.
TechCrunch notes that as part of the re-branding, Virgin Hyperloop One will now be associated with Virgin Group’s other projects. Hyperloop now has a powerful billionaire with an interest in the future backing it, which can go a long way when trying to introduce a new concept.
“Ever since our creation, Virgin has been known for disruption and investing in innovative companies,” said Branson in a blog post. “From our airlines to our trains to our spaceline, we have long been passionate about innovation in transport too, especially the development of technology that could transform people’s lives. This is just the latest example.”
Branson went on to explain how he’d recently visited the DevLoop test site outside Las Vegas to get a first-hand look at the technology involved in Hyperloop. At the time, the pod was able to reach a top speed of 310 km/h (192 mph) with a peak acceleration of 1.48 Gs – the equivalent of going from 0 to 60 mph in 1.85 seconds.
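For readers who want to check the arithmetic, the quoted figures are self-consistent; a quick sketch (not drawn from any Virgin materials) converting 1.48 G of constant acceleration into a 0-to-60 mph time:

```python
# Sanity check of the quoted figures: constant acceleration of 1.48 g
# should take a pod from 0 to 60 mph in about 1.85 seconds.
G = 9.80665          # standard gravity, m/s^2
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

accel = 1.48 * G               # peak acceleration, m/s^2
target_speed = 60 * MPH_TO_MS  # 60 mph expressed in m/s
time_to_60 = target_speed / accel

print(round(time_to_60, 2))  # 1.85
```

The same conversion works for the top speed: 310 km/h divided by 1.609 km per mile gives roughly 192 mph, as reported.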
Soon, drivers will only have the option of using electric cars in Paris as authorities in the French capital have announced plans to remove all gas- and diesel-powered cars by 2030.
“This is about planning for the long term with a strategy that will reduce greenhouse gases,” transport official Christophe Najdovski told France Info radio, according to a report from Reuters. “Transport is one of the main greenhouse gas producers…so we are planning an exit from combustion engine vehicles, or fossil-energy vehicles, by 2030.”
Diesel-powered vehicles are actually set to be outlawed in Paris before their gas-powered equivalents. The deadline for those cars is 2024, which is when the city will play host to the Olympic Games.
In a statement announcing the transition to only electric cars in Paris, officials made sure to clarify that the removal of all other types of cars by 2030 should not be interpreted as a ban but a “trajectory.” However, they did not clarify what, if any, penalties would be faced by those who didn’t meet the goal.
While fossil fuel emissions affect all of France and, indeed, the whole planet, Paris is a hotspot for particle pollution. City officials are often forced to issue temporary bans on gas- and diesel-powered cars in response to particularly bad surges.
The U.S. has yet to make any sweeping statements about when similar changes might be made, but the state of California is already well ahead of the curve. Last month, reports surfaced that officials were investigating whether a ban on non-electric cars could help the state reach its lofty goals with regards to climate change.
The days of gas- and diesel-powered cars are seemingly numbered, and many governments appear eager to take a proactive approach to the transition. The real question is which automakers are going to be able to keep up and emerge as leaders in the age of the electric car.
The organizations and scientists working hard to develop the next generation of medical treatments and pharmaceuticals rely on genetic data for their research. That data has to come from somewhere, and now, a startup has formed to give individuals the opportunity to get paid for the information that’s locked away in their genes.
The market value of genetic data hit $5.9 billion in 2010, a figure that’s predicted to grow significantly in the coming years. However, at present, a small number of genomics corporations, pharmaceutical firms, and scientific and medical institutions control all the available information.
Zenome wants to change that. The startup’s argument is that individuals should control the rights to their own genetic data. Using blockchain technology, the company plans to create a network that will offer anyone who needs genetic information the ability to purchase access to it while ensuring the privacy of the genetic donor.
Cash for Data
Because genetic data contains so many details about a person, ensuring that the data is not linked to the donor’s identity is vital if a service like the one proposed by Zenome is going to be successful. Though best known as the technology that allows traders to exchange cryptocurrencies like bitcoin and ether anonymously, blockchain could be the perfect way to ensure that an individual’s identity is decoupled from their genetic information.
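Zenome hasn’t published its implementation details, but the core idea of decoupling identity from data can be sketched with a salted hash: only an unlinkable pseudonym would appear in the public record, while the donor keeps the secret salt needed to prove ownership. A minimal illustration (the function name and identifiers here are hypothetical, not Zenome’s API):

```python
import hashlib
import secrets

def make_pseudonym(donor_identity: str) -> tuple[str, str]:
    """Derive an unlinkable pseudonym suitable for a public record.

    Only the hash would be published; the random salt stays with the
    donor, so the record can't be traced back to a real identity.
    """
    salt = secrets.token_hex(16)
    pseudonym = hashlib.sha256((salt + donor_identity).encode()).hexdigest()
    return pseudonym, salt

pseudonym, salt = make_pseudonym("donor-42")

# The donor can later prove ownership of the record by revealing the salt:
check = hashlib.sha256((salt + "donor-42").encode()).hexdigest()
assert check == pseudonym
```

Because each pseudonym uses a fresh random salt, even the same donor contributing twice produces unlinkable records.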
However, this anonymity is just one advantage of blockchain technology. It can also add an extra layer of security. A large-scale leak of genetic data would be a huge scandal for companies conducting research, so storing this data on a blockchain would lessen the risk of such a scenario.
Of course, Zenome’s biggest selling point is the agency it offers to individuals. The platform would give donors added control over who can access their genetic data and for what purpose while still allowing them to make money off it.
When the Zenome platform goes live, ZNA tokens will be the currency used to buy access to data. The startup will launch an initial coin offering pre-sale on October 17, at which point interested parties will receive a 100 percent bonus on any ZNA tokens they purchase.
Disclosure: Several members of the Futurism team, including the editors of this piece, are personal investors in a number of cryptocurrency markets. Their personal investment perspectives have no impact on editorial content.
China has become the most powerful political force driving the globalization of electric vehicle (EV) technology. And, while natural driving forces such as dwindling fossil fuel resources and the hazards of climate change are also hastening change, these political pressures are, for now, the most effective driving factors in the electric car revolution.
Much of China’s journey toward EVs has been fueled by necessity. The Chinese government has invested vast sums of money in the industry and set aggressive pro-electric regulations, both because its citizens are suffering from worsening air pollution and because of the country’s overall goal of technological dominance, including developments in artificial intelligence (AI).
At least one expert has asserted that all cars in China will be electric by 2030, and the government is expected to ban the production of fossil fuel vehicles soon. This is impressive for any country, but the automobile market in China is so vast that the nation’s policy decisions have global impact. China already makes and sells more EVs than any other country, and Chinese buyers will purchase more than three times as many EVs as American buyers in 2017 — and more than the rest of the world’s buyers combined. And although China taxes imported cars at ten times the rate the U.S. does, Chinese buyers still purchase more General Motors (GM) cars than Americans do.
There is no guarantee that China will be able to dominate the electric car market. Despite notable auto manufacturing skills, the nation hasn’t yet developed a car that has become a trendy import, which is necessary to extend its influence. Most Chinese car buyers still prefer American and European cars, although joint ventures with international automakers may allow China’s car designers to imitate their style in the future.
The market is responding to China’s electric car boom as you might expect. GM, Ford, and Volkswagen are all adding electric models to their offerings and moving R&D operations for EVs to China. The Chinese government has made it a priority to recruit top electrical engineering talent from around the world, including the US.
This isn’t China’s first rodeo in terms of transforming industries. With its knockout combination of cheap, plentiful labor and strong government support, China has previously changed the steel making and clothing industries, and green energy businesses more recently. However, the auto industry is taking this pattern to an entirely new scale.
Meanwhile, for Chinese citizens, the environmental benefits of electric cars can’t come soon enough. Especially in big cities like Beijing, citizens are regularly exposed to highly hazardous air. There will be no turning back for China, and it seems that the country will be leading the rest of the world in the right direction.
Today, people’s bodies are more perfectly melded with technology than we could have imagined mere decades ago. Superhuman strength, dexterity, and senses are no longer science-fiction — they’re already here.
Though cutting-edge technology offers us a glimpse into the capabilities of enhanced humans in the future, it’s most useful these days as support for people who have been affected by a disability. Cyborg technology can replace missing limbs, organs, and bodily senses. Sometimes, it can even enhance the body’s typical function.
Here are six of the most striking examples of this cyborg present. They show us how far we have already come, and how far we could go in the future.
Hearing Colors With An Antenna
Activist and artist Neil Harbisson was born without the ability to see color. In 2004, he decided to change that. He mounted an electronic antenna to the lower back of his skull that turns frequencies of light into vibrations his brain interprets as sound, allowing him to “hear color.” These frequencies are even able to go beyond the visual spectrum, allowing him to “hear” invisible frequencies such as infrared and ultraviolet.
“There is no difference between the software and my brain, or my antenna and any other body part. Being united to cybernetics makes me feel that I am technology,” he said in a National Geographic interview.
His body modification was not always well-accepted: the British government took issue when the antenna showed up in Harbisson’s passport photo. Harbisson fought the government to keep it in. He won, becoming the first “legally recognized” cyborg.
The LUKE Arm
The LUKE Arm (named after Luke Skywalker) is a highly advanced prosthetic that lends the wearer a sense of touch. A specialized motor can provide feedback to mimic the resistance offered by various physical objects — users can feel that a pillow offers less resistance than a brick. With the help of funding from the Defense Advanced Research Projects Agency (DARPA), the finished design received U.S. Food and Drug Administration (FDA) approval in 2014.
Artificial Vision
In his 20s, Jens Naumann was involved in two separate accidents that shot metal shards into his eyes, causing him to lose his vision. In 2002, at the age of 37, Naumann participated in a clinical trial performed at the Lisbon-based Dobelle Institute in which a television camera was connected straight to his brain, bypassing his faulty eyes. Dots of light combined to form shapes and outlines of the world around him, giving him “this kind of dot matrix-type vision.” The system enabled him to see Christmas lights outlining his home in Canada that year.
Unfortunately, the system failed after only a couple of weeks. And when William Dobelle, the original inventor of the technology, passed away in 2004, he left behind almost no documentation, leaving technicians with no instructions for how to repair Naumann’s system. In 2010, Naumann had the system surgically removed, rendering him completely blind once again.
Mind-Controlled Bionic Leg
The mind-controlled bionic leg was first used in 2012 by Zac Vawter, a software engineer from Seattle whose leg was amputated above the knee in 2009. The technology that translates brain signals into physical movement, called Targeted Muscle Reinnervation (TMR), was first created in 2003 for upper-limb prosthetics. But Vawter’s prosthetic was revolutionary because it was the first leg prosthetic to use it.
In 2012, Zac Vawter climbed the 2,100 steps of the Willis Tower in Chicago, with the help of his prosthetic leg. It took him 53 minutes and nine seconds.
The bebionic Hand
Prosthetics company bebionic has created some of the most sophisticated prosthetic hands to date. Individual motors move every joint along every digit independently. To help with everyday use, the bebionic has 14 pre-determined grip patterns. Highly sensitive motors vary the speed and force of the grip in real-time — it’s delicate enough for the user to hold an egg between his or her index finger and thumb, and robust enough to hold up to 45 kilograms (99 pounds).
The bebionic hand has been available commercially since 2010. Models released in the years since have improved its battery life, flexibility, and software.
Russia has a new battle suit that seems to be visually inspired by Star Wars’ Imperial Shadow Stormtroopers. While Russia’s version likely doesn’t come with a cloaking device, the high-tech armor does have a few tricks up its sleeves, including nuclear-blast-resistant tech.
The suit was developed by Rostec and is called the Ratnik-3. The latest upgrade to the armor includes a watch that is reportedly resistant to nuclear blasts. “The watch, which we have included in the Ratnik outfit, retains its properties upon the impact of radiation and electromagnetic impulses, for example, upon a nuclear blast,” said Oleg Faustov, Chief Designer for the Life Support System of the Soldier Combat Outfit at the Central Scientific Research Institute for Precision Machine Engineering, in a press statement.
The watch also features a self-winding mechanism and operates underwater.
Among the 59 items Rostec has included in the suit are a powered exoskeleton, which is said to give soldiers greater strength and stamina; the latest in bulletproof body armor tech; and a full face-covering visor and helmet equipped with a video game-esque heads-up display (HUD). According to Russian state-owned media outlet Tass, the weight of the completed combat gear will be reduced by 30 percent by the time it is released for use in the field.
The Ratnik-3 is expected to be ready for use by 2022.
Weapons are also getting next-gen upgrades, with laser weapons currently being deployed in various forms around the world. The United States Navy has the Laser Weapons System (LaWS) mounted on the USS Ponce, an amphibious naval transport dock, to defend against drone strikes and eventually incoming missiles. China has also previously given its soldiers laser weapons designed to blind opponents.
Autonomous cars are already on the road in California, where 42 companies including General Motors, Google/Waymo, and Zoox are testing 285 self-driving cars in various cities. However, thus far, all of them have had humans inside at all times. That’s about to change.
At the moment, requirements promulgated by the California Department of Motor Vehicles insist on a human driver for safety reasons. However, at some point that flesh and blood security blanket has to go in order to achieve truly reliable, proven autonomous technology. With this line of thinking in mind, the DMV released a proposal for updating the regulations for autonomous vehicles. The changes would allow companies to deploy autonomous vehicles without drivers on public roads, and they should be in place by June of 2018, if not sooner.
Meanwhile, Congress is slowly making progress around the issue of autonomous vehicles. California has always served as a sort of laboratory for testing innovative laws, especially in the realm of tech. If California has driverless cars on the road before Congress acts in a definitive way, and these trials go well, it may well determine what comes next.
California’s new regulations also raise the price of testing permits from $150 a year to $3,600 for two years, and prohibit testing companies from charging riders fees for being passengers in testing vehicles. They also require testing companies to notify local authorities about any tests they undertake without drivers, including providing them with specific details about which roads and vehicles will be involved in the testing.
There is always the chance that the federal government will either refuse to follow California’s example or act before there’s an example to follow. If they set national rules for autonomous vehicles, state laws like the new California regulations will be preempted. However, if we’ve seen anything as we’ve navigated the quickly-changing autonomous driving landscape, it’s that predictions often fail — so we’ll just have to wait and see where the (driverless) road takes us.
When we say an emerging technology represents a “paradigm shift,” it’s often hyperbole. In the case of quantum computers, it’s an understatement.
In traditional computing—everything from PCs to ATMs to smartphones—all data is represented in bits that exist in one of two states: 0 or 1, off or on. In quantum computing, bits can be 0, 1 or both at the same time. That might not seem significant, but it means quantum computers can perform vastly more complex computations. For example, quantum computers can debug millions of lines of software code in seconds, making reliable aircraft, cars, MRI scanners, etc. more efficient to produce.
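The difference between a classical bit and a qubit can be pictured with a minimal state-vector simulation — a sketch in plain NumPy, not how real quantum hardware works. A qubit’s state is a pair of complex amplitudes, and applying a Hadamard gate to the |0⟩ state places it in an equal superposition of 0 and 1:

```python
import numpy as np

# A classical bit is just 0 or 1. A qubit is a length-2 complex vector of
# amplitudes for |0> and |1>; measurement probabilities are |amplitude|^2.
ket0 = np.array([1.0, 0.0], dtype=complex)   # the state |0>

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

superposition = H @ ket0
probs = np.abs(superposition) ** 2
print(probs)  # [0.5 0.5] -- equal chances of reading 0 or 1
```

A measurement of this state yields 0 or 1 with equal probability, which is what “0, 1 or both at the same time” means in practice; the computational power comes from manipulating many such amplitudes at once.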
Scientists are eager to use quantum computers to analyze microbes so they can create new vaccines, which quantum computers could then be used to optimize to reduce unwanted side effects. Some scientists believe that quantum computers are essential for achieving breakthrough prevention and treatment protocols in healthcare. According to Donald Parsons, a New York State Department of Health research physician, “Without quantum computers, new DNA sequencing data, the learning of the specific activities of the folded conformations of proteins, and the search for new drugs by docking algorithms, are being held back from full clinical application.”
Quantum computing enables models that can more accurately predict future demand. Sticking with our smart cities example, brownouts and blackouts are complex phenomena with dozens of variables. In part, they are caused by utilities’ reliance on limited historical data and traditional computer models to allocate tax dollars to grid upgrades. What if cities could predict which neighborhoods will have households with one or two electrical vehicles (drawing 10 kW or more at night, five times the current average) years in advance? That would give utilities a much-needed head start on funding, designing and deploying the additional infrastructure.
If you were to shop for a quantum processor today, one vendor might advertise that its model has 2048 qubits, while another says its equivalent model has 50 qubits. “Confusions exist on what quantum computing or a quantum computer means,” says Hidetoshi Nishimori, a Tokyo Institute of Technology professor and a member of the IEEE P7130 working group. “This partly originates in the existence of a few different models of quantum computing. It is urgently necessary to define each keyword.”
A lack of common definitions for fundamental terms such as “qubit” isn’t just a comparison-shopping headache. Competing to own the lexicon wastes time and resources that would be better-spent refining and applying quantum computing technologies. A set of common definitions puts the world one big step closer to enjoying the innovations that quantum computing can enable.
By Whurley (William Hurley), chair, IEEE Quantum Computing Working Group.
Oxford is considering making its city centre the first Zero Emissions Zone (ZEZ) in the world. The proposed plan would ban non-electric vehicles entirely from the city centre by 2035. The scheme would roll out gradually, with benchmarks to be met every five years starting in 2020, when the innermost ring of the zone would exclude all cars, buses, and taxis that run on fossil fuels. The gradual expansion of the ZEZ would allow the city to budget by backloading most of the cost to the second half of the next 18 years.
The aim of the ZEZ, which is still just a proposal, is to cut levels of harmful nitrogen dioxide (NO2) to near-background levels. The Oxford ZEZ is part of a larger plan throughout the country designed to tackle illegal levels of air pollution; many streets in Oxford already exceed the EU’s legal limit for nitrogen dioxide. The ban could reduce those levels by as much as 74 percent.
Challenges for the ZEZ
While public response is more positive than not, many are concerned about the costs, especially to small businesses. According to the Oxford Mail, adjusting for the ZEZ would cost city councils, bus operators, haulage companies, and taxi firms approximately £14 million ($18.5 million). Anyone violating the ban with a non-electric vehicle in the ZEZ will probably be fined about £60 ($79) automatically.
It’s worth noting that whether people are eager to accept the Oxford ZEZ or not, the national government has already announced that the UK will ban the sale of all fossil fuel cars starting in 2040; the final roll-out of the ZEZ will be only five years before that anyway.
Dental fillings may soon be left in the ash heap of history, thanks to a recent discovery about a drug called Tideglusib. Developed and trialled as a treatment for Alzheimer’s disease, the drug also happens to promote the tooth’s natural regrowth mechanism, allowing it to repair cavities.
Tideglusib works by stimulating stem cells in the pulp of teeth, the source of new dentine. Dentine is the mineralized substance beneath tooth enamel that gets eaten away by tooth decay.
Teeth can naturally regenerate dentine without assistance, but only under certain circumstances. The pulp must be exposed through infection (such as decay) or trauma to prompt the manufacture of dentine. But even then, the tooth can only regrow a very thin layer naturally—not enough to repair cavities caused by decay, which are generally deep. Tideglusib changes this outcome by switching off GSK-3, an enzyme that prevents dentine from forming.
In the research, the team inserted small, biodegradable sponges made of collagen soaked in Tideglusib into cavities. The sponges triggered dentine growth and within six weeks, the damage was repaired. The collagen structure of the sponges melted away, leaving only the intact tooth.
Thus far, the procedure has only been used in mouse teeth. Yet as King’s College London Dental Institute Professor and lead author Paul Sharpe told The Telegraph, “Using a drug that has already been tested in clinical trials for Alzheimer’s disease provides a real opportunity to get this dental treatment quickly into clinics.”
He added, “The simplicity of our approach makes it ideal as a clinical dental product for the natural treatment of large cavities, by providing both pulp protection and restoring dentine.”
Particle physics is in some ways like peeling an onion; you can peel back each layer, going smaller and smaller, until you are studying the tiniest of particles. Interestingly, it’s these minuscule particles that offer physicists the most insight into the secrets of our vast universe. Every atom has a nucleus at its center, and inside nuclei are nucleons: protons and neutrons. Even smaller particles, quarks and gluons, comprise nucleons, each of which has its own intrinsic spin.
To understand how chemical and physical processes work, it is important to know how these elementary particles spin. The fundamental properties of any given material are controlled by spin, so how a material behaves at different temperatures, its conductivity, and other properties all have to do with the spin of these tiny particles.
Since 1987, the physics community has been stumped by the “proton spin crisis.” Although older theoretical models attributed all nucleon spin to the quarks that comprise the nucleon, research from CERN, SLAC, and DESY revealed that only 30 percent of proton spin can be attributed to quarks.
As scientists have worked to identify the source of the rest of the spin, they have long chalked it up to relativistic quantum effects without being able to describe the process in any more detail — until now.
New Spin on an Old Physics Problem
Researchers have, at last, deciphered the nucleon’s spin — including how the various particles that make up the nucleon contribute to it. The team used the CSCS supercomputer Piz Daint to calculate the formerly mysterious quantum effects and to parse out the relative contributions that constituent quarks, gluons, and sea quarks each make to nucleon spin. Sea quarks are an intermediate state of quark-antiquark pairs that exist inside the nucleon, albeit briefly.
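As general background (this is the standard textbook decomposition, not necessarily the paper’s exact notation), the accounting the team performed can be written as a spin sum rule: the nucleon’s total spin of 1/2 is shared among the quarks’ intrinsic spins, the quarks’ orbital angular momentum, and the gluons’ total angular momentum:

```latex
\frac{1}{2}
  = \underbrace{\tfrac{1}{2}\,\Delta\Sigma}_{\text{quark spins}}
  + \underbrace{L_q}_{\text{quark orbital motion}}
  + \underbrace{J_g}_{\text{gluon angular momentum}}
```

The experiments mentioned above found that the first term accounts for only about 30 percent of the total; resolving the remaining terms is what required the lattice calculations on Piz Daint.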
The researchers first assessed the quarks’ true physical mass to accurately calculate their spin. This is easier said than done, since individual quarks and gluons cannot be isolated; they are held together by the strong force — one of the four fundamental forces of physics. The team got around this issue by fixing the masses of the up and down quarks based on the mass of the pion, a meson made up of one up quark and one down antiquark. Additional challenges included reducing statistical errors in calculating the spin contributions, and converting dimensionless values from simulations into measurable physical values. The Piz Daint supercomputer was essential to each of these processes.
“Making powerful supercomputers like Piz Daint open and available across Europe is extremely important for European science,” DESY-Zeuthen lead scientist and co-author Karl Jansen told Phys.org. “Simulations as elaborate as this were possible only thanks to the power of Piz Daint, and because we pre-optimised our algorithms to make best use of the graphics processors of the machine,” University of Cyprus and the Cyprus Institute professor of physics and project leader Constantia Alexandrou added in the same report.
The numbers spell grim news for wildlife: according to the International Union for the Conservation of Nature (IUCN) assessment in 2009, 17,291 species—36 percent of all species evaluated—are at risk of extinction. Our current biodiversity crisis is so bad it’s been dubbed the planet’s “sixth mass extinction.” However, not all is doom and gloom; one group of scientists hopes to turn this tide through an ambitious effort that has mapped all known vertebrate species. The finished product is known as an “atlas of life,” and using it, the team is able to find new areas that require focused conservation actions.
Yet climate change isn’t the only thing altering the environment around wildlife and threatening their existence. We’re also to blame through deforestation and habitat destruction. In 2016, for example, illegal logging depleted the Amazon Rainforest much faster than it had the previous year. In January, it was reported that areas left unprotected in favor of forests with high carbon stocks could be at a higher risk of deforestation.
Enter an international team of scientists, led by those from the University of Oxford and Tel Aviv University, who have completed a new series of detailed maps identifying the locations of every known vertebrate on Earth. The project’s final push was mapping the global distributions of reptiles, which were combined with existing maps made for birds, mammals and amphibians. Meet the atlas of life.
Atlas of Life
Thirty-nine scientists came together to work on the final catalog for the atlas, which accounts for nearly 10,000 species of snakes, lizards, and turtles/tortoises. The work took over a decade to complete, in part because many reptile species were thought to be too poorly known to map.
When it comes to conserving wildlife, however, the University of Oxford notes that the atlas had to be compiled before it was too late.
As described in a press release detailing the scientists’ work, “in order to best protect wildlife, it’s important to know where species live, so the right action can be taken and scarce funding allocated in the right places.”
The map revealed several areas in need of attention, where reptile biodiversity is particularly fragile: the Arabian Peninsula and the Levant, inland arid southern Africa, the Asian steppes, the central Australian deserts, the Brazilian caatinga scrubland, and the high southern Andes.
The 10,000 reptiles added bring the total evaluated species to 31,000, including some 5,000 mammals, 10,000 birds and 6,000 frogs and salamanders.
Improving Conservation Efforts
Conservationists also now have the means to review previous plans and programs to see if they’ve been as effective as possible. “This is not to say that the work done to date has been inaccurate: based on our knowledge at the time, conservationists have often made some really good decisions,” said Dr. Richard Grenyer, Associate Professor in Biodiversity and Biogeography at Oxford University, in the press release.
“But now conservation has the data and tools required to bring planning up to the same level as the businesses and governments who might have an eye on land for other uses. Maybe we’re actually a bit better, and we’re doing it in the open.”
Before the map can be used by businesses and conservation organizations, it has to go through IUCN. The organization is currently classifying the species identified, which includes adding a rating ranging from “critically endangered” to “least concern.” Once that’s done, the atlas of life will become available for public use.
“Mapping the distributions of all reptiles was considered too difficult to tackle,” explains Tel Aviv University Professor Shai Meiri. “But thanks to a team of experts on the lizards and snakes of some of the most poorly known regions of the world we managed to achieve this, and hopefully contribute to the conservation of these often elusive vertebrates that suffer from persecution and prejudice.”
Conventional methods of heart transplantation require that a donated organ reach its recipient within four hours of removal. Now, scientists at Sweden’s Lund University have made a major breakthrough that extends that window to at least 12 hours, and perhaps to several days. This extra time could significantly widen the area a given heart can travel for a transplant, putting us one step closer to a borderless network for organ donation.
The method involves a mini heart-lung machine that provides the donated heart with an oxygenated solution containing vital substances. Testing on animal hearts showed that the device could preserve a heart for up to 24 hours, and last month, the researchers stored a human heart in the machine for three hours before transplanting it into a patient at Skåne University Hospital.
“The heart started at once with a good mobility in the whole heart, including the septum between the ventricles, which we normally do not see,” Johan Nilsson, senior consultant cardiothoracic surgeon, reported in a Lund University press release. If five more three-hour tests are successful, the researchers plan to increase the period gradually to 24 hours.
Global Organ Network
According to the United Network for Organ Sharing, nearly 4,000 patients in the United States are waiting for a heart, but given the short window of viability for the organs, heart donations are limited to a patient’s general region.
Nilsson is looking forward to the new possibilities that a longer transport time affords. “This new method gives increased opportunities for exchanges with countries far away, such as between Europe and eastern United States, and it also makes it possible to use older donors,” he explained.
Should the device be made available in the United States, we could see a nationwide network of hearts become available to patients. Recipients living near major airports, in particular, would have increased opportunities for organ donation, but as the longest domestic flights in the continental U.S. are under seven hours, that leaves some wiggle room for ground transport to patients in more remote areas as well.
In the future, we could see the development of similar devices for other organs, but in the meantime, this system has the potential to save thousands of lives simply by extending the window of opportunity.
Though much noise has been made of what’s still to come from artificial intelligence (AI), the technology has already changed our daily lives. Machine learning-powered image recognition, text analysis, and language translation tools allow us to navigate the world in previously unimagined ways, and our mobile devices can now predict so much of our behavior based on our past actions.
Now, an international, interdisciplinary team of researchers has devised a way to use machine learning to do something far more complex than foresee a smartphone user’s next move. They’ve built a machine that can predict molecular behavior, a feat that previously required very complex quantum calculations. Their study has been published in Nature Communications.
To create this system that can predict molecular behavior, the researchers trained an algorithm using a small sample set featuring a simple molecule called malonaldehyde. Then, they had the machine predict the complex chemical behaviors of this molecule. The researchers compared those predicted simulations with their current chemical understanding of malonaldehyde and found that the machine was able to learn how the molecule would behave using the limited data it had been trained on.
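The workflow described above — train on a small set of expensive reference calculations, then predict behavior at configurations the machine has never seen — can be illustrated with a toy example. This is an illustrative sketch under stated assumptions, not the study’s actual method or data: kernel ridge regression fitted to a handful of samples of a one-dimensional double-well potential, qualitatively like the energy landscape behind malonaldehyde’s internal proton transfer.

```python
import numpy as np

# Toy stand-in for expensive quantum-mechanical calculations: a 1-D
# double-well potential (illustrative only, not malonaldehyde's real surface).
def potential(x):
    return x**4 - 2 * x**2

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.8, 1.8, size=25)   # small "training set"
y_train = potential(x_train)

# Gaussian (RBF) kernel between two sets of 1-D points.
def rbf(a, b, length=0.5):
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * length**2))

# Kernel ridge regression: solve (K + lambda*I) alpha = y.
K = rbf(x_train, x_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(x_train)), y_train)

# Predict the potential at unseen points and compare to ground truth.
x_test = np.linspace(-1.5, 1.5, 7)
y_pred = rbf(x_test, x_train) @ alpha
max_err = np.max(np.abs(y_pred - potential(x_test)))
print(f"max prediction error: {max_err:.4f}")
```

The point of the sketch is the shape of the pipeline, not the numbers: a model trained on a limited sample can interpolate the underlying surface well enough to stand in for the full calculation at new points, which is the economy the researchers exploit at vastly greater scale.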
“By identifying patterns in molecular behavior, the learning algorithm or ‘machine’ we created builds a knowledge base about atomic interactions within a molecule and then draws on that information to predict new phenomena,” researcher Mark Tuckerman of New York University explained in a press release.
The researchers expect that this ability to predict molecular behavior could greatly contribute to the development of pharmaceuticals, as well as simulate molecular designs crucial for improving the performance of today’s new battery technologies, solar cells, and digital displays — basically, anything that used to rely on complex quantum mechanical calculations to model atomic or molecular interactions can benefit from their work.
While their machine does make it possible to model this behavior without involving intricate calculations, streamlining that complicated task is just the jumping-off point, according to study co-author Müller: “Now we have reached the ability to not only use AI to learn from data, but we can probe the AI model to further our scientific understanding and gain new insights.”
In September 2016, the European Space Agency’s (ESA) Rosetta spacecraft crashed into Comet 67P, bringing an end to 12 years of service — or so the ESA thought. While they believed they had already received Rosetta’s final image of the comet, the organization recently discovered one more, revealing the true final moment before impact.
The image previously thought to be Rosetta’s last was taken from a height of about 23.3 to 26.2 meters (76 to 86 feet), but the ESA estimates that this new image was taken from about 18 to 21 meters (59 to 68 feet) above the comet’s surface. They claim it captures an area of about one square meter (10 square feet).
When Rosetta purposefully set itself on a crash course with 67P, it transmitted the last of its images in six separate packets. However, due to an unexpected transmission interruption, only three made it back to Earth.
“Later, we found a few telemetry packets on our server and thought, wow, that could be another image,” said Holger Sierks, principal investigator for the OSIRIS camera at the Max Planck Institute for Solar System Research in Göttingen, Germany, in a statement.
The craft transmits images in layers, with each new layer adding detail to the image, so Sierks and his team had to assemble Rosetta’s final image one layer at a time. The ESA notes that while some of the finer details were lost, the final result is a zoomed-in shot of the spot Rosetta is thought to have impacted.
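The layer-by-layer idea can be pictured with a small sketch — a simplified illustration of progressive transmission, not OSIRIS’s actual compression format. A coarse base layer is sent first, and each further layer adds detail, so an image that arrives only partially (as Rosetta’s last one did) is still viewable:

```python
import numpy as np

# A tiny 4x4 "image" standing in for an OSIRIS frame (illustrative only).
image = np.arange(16, dtype=float).reshape(4, 4)

# Layer 1: a coarse base -- each 2x2 block replaced by its mean.
block_means = image.reshape(2, 2, 2, 2).mean(axis=(1, 3))
base = np.kron(block_means, np.ones((2, 2)))

# Layer 2: the residual detail needed to recover the full image.
detail = image - base

# Receiving only the base layer yields a blurry but usable picture;
# adding the detail layer reconstructs the original exactly.
partial = base
full = base + detail
print(np.allclose(full, image))  # True
```

With a scheme like this, losing the later packets costs fine detail rather than the whole picture — which is why Sierks’s team could still assemble a recognizable, if softened, final image from the telemetry fragments.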
This may be the last we see from Rosetta, but it’s a fittingly unexpected end for a spacecraft that contributed so much to space exploration while traversing our solar system for more than a decade.
To advance research and development in artificial intelligence, quantum computing, and the Internet of Things, Alibaba is dedicating $15 billion in funding over the next three years to create the next generation of these technologies.
As reported by Bloomberg, the company shared its plans in an emailed statement, shortly after Chief Technology Officer Jeff Zhang made the announcement at the company’s Computing Conference 2017 in Hangzhou, China. This marks the beginning of its global research program, known as the Alibaba DAMO Academy — “DAMO” being an acronym for Discovery, Adventure, Momentum, and Outlook.
As part of the program, Alibaba intends to build seven new R&D labs across the U.S., Russia, China, Israel, and Singapore. Over 100 scientists worldwide who specialize in different aspects of AI, quantum computing, and the IoT will also be hired.
In an interview with Bloomberg, Zhang said the new labs will go a long way to “help solve issues that Alibaba is currently facing across its business lines. It will also be at the forefront of developing next-generation technology.”
More Than Labs and Scientists
Alongside new labs, Alibaba’s multi-billion dollar investment will also go towards funding various collaborations with universities, with the University of California at Berkeley already on board. Additionally, the company has tapped a number of professors from the likes of Princeton and Harvard to sit on an advisory board.
The investment also builds on what CNBC describes as the company’s “commitment to serve 2 billion customers and create 100 million jobs in 20 years.” We’ll have to wait and see what Alibaba comes up with in the coming years, but the program is a clear sign that it’s prepared to see this through, and has the resources necessary to move the world forward.
On Monday, October 16, the National Science Foundation (NSF) will host an event at the National Press Club in Washington, DC featuring researchers from the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo collaborations, along with scientists from approximately 70 observatories around the world. Journalists are also invited to attend the event, which is intended to be the global reveal for new findings on gravitational waves.
First, the scientists will discuss the new findings from LIGO, Virgo, and various other observatories around the world. Next, telescope teams studying extreme cosmic events in partnership with the LIGO and Virgo collaborations will discuss their recent findings. The event will begin for the press and public at 10:00 a.m. EDT.
On September 14, 2015, the LIGO team first detected gravitational waves, a discovery it announced in February 2016. Gravitational waves are created (among other things) by the compression and stretching of the fabric of spacetime as two black holes orbit each other in a dance of death. The first observed event confirmed a prediction of Einstein’s general theory of relativity, in which he posited space and time as a single, unified phenomenon, and was a milestone in astronomy and physics that ushered in the new field of gravitational-wave astronomy. Three more detections have been confirmed since then, the most recent of which was the first joint LIGO-Virgo detection.
Solving Time-Old Mysteries
Physicists from the LIGO project were recently awarded the Nobel Prize in Physics for their work with gravitational waves. Their work detecting gravitational waves has permanently changed astronomy and physics, and not simply because it confirms Einstein’s theory of relativity. The detection of the waves will also offer insight into how the universe is expanding — insight that could not otherwise have been accessed. Gravitational wave research is also likely to reveal the nature of dark matter, a term that is ultimately a placeholder for a massive force of we-know-not-what that has long eluded the scientific community.
Event organizers ask journalists who wish to attend to RSVP as soon as possible to email@example.com, and no later than noon EDT on Friday, October 13. The event will be held in the Holeman Lounge of the National Press Club, 529 14th St. NW, 13th Floor, Washington, DC.