
Archives for Hard Science

CRISPR-Powered Viruses

Earlier this month, the annual CRISPR 2017 conference was held at Montana State University. Attendees were the first to hear about the successes companies have had using CRISPR to engineer viruses that kill bacteria. One of the most exciting potential applications for these viruses, called bacteriophages, would be killing bacteria that have become resistant to antibiotics. At least two of the companies aim to start clinical trials of these engineered viruses within 18 to 24 months.

The use of bacteriophages isn’t new. In the past, they have been isolated in the wild and purified for use. Although bacteriophages are regarded as safe and effective for use in humans, research on them has been sluggish precisely because they come from the wild: new discoveries can’t be patented, and those discoveries can be transient, because bacteria can, and often do, rapidly evolve.

However, using CRISPR to engineer them is definitely innovative. It renders viruses uniquely lethal to the most dangerous bacteria in the world, and initial tests saved the lives of mice infected with antibiotic-resistant bacteria that would otherwise have killed them, explained conference speaker Rodolphe Barrangou, chief scientific officer of Locus Biosciences.

This ability has led researchers from at least two companies to use CRISPR in an attempt to turn the tables on antibiotic-resistant bacteria. Both companies cite treating bacterial infections linked to serious diseases as their primary goal. Eventually, they intend to engineer viruses that would allow them to do much more by taking a precision approach to the human microbiome as a whole. The idea would be to selectively remove naturally occurring bacteria that have been associated with various health conditions — anything from autism to obesity, and possibly even some forms of cancer.


Self-Destruct Switches

One company, Locus, is using CRISPR to send in DNA that produces modified guide RNAs designed to find pieces of the antibiotic-resistance gene. After the virus infects the bacterium and the guide RNA locks onto the resistance gene, the bacterium produces a phage-killing enzyme called Cas3. This is the bacterium’s usual defensive response, only in this instance it destroys the bacterium’s own antibiotic-resistance sequence. Over time, Cas3 destroys all of the DNA, and the bacterium dies.

Another company, Eligo Bioscience, is taking a slightly different approach. The team inserts DNA that creates guide RNAs — this time paired with the bacterial enzyme Cas9 — into a phage from which the genetic replication instructions have been removed. Cas9 then severs the DNA of the bacterium at a specific place, and that cut triggers the self-destruct mechanism in the bacterium.

The third approach, by Synthetic Genomics, involves creating “supercharged” phages that contain dozens of enzymes. Each enzyme offers its own unique set of benefits, including the ability to camouflage the phages from the human immune system by breaking down proteins or biofilms.
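
To make the targeting step concrete, here is a toy Python sketch of the idea the first two approaches share: a guide sequence only directs a cut where it matches the resistance gene next to a PAM motif (the NGG form recognized by Cas9 is used for concreteness). The sequences and helper function below are hypothetical, purely for illustration — real guide design involves far more, from off-target scoring to delivery.

```python
# Toy illustration only: CRISPR targeting reduced to a string search.
# The spacer and "gene" below are made-up sequences, not real loci.
SPACER = "GCTGATAAAGATGGCGTTAC"                         # hypothetical 20-nt guide spacer
RESISTANCE_GENE = "ATGCCA" + SPACER + "TGG" + "CGTTAA"  # spacer followed by an NGG PAM

def find_target(gene: str, spacer: str) -> int:
    """Return the index where the spacer matches and is followed by an NGG PAM, else -1."""
    k = len(spacer)
    for i in range(len(gene) - k - 2):
        protospacer_matches = gene[i:i + k] == spacer
        pam_is_ngg = gene[i + k + 1:i + k + 3] == "GG"  # N = any base at position i+k
        if protospacer_matches and pam_is_ngg:
            return i
    return -1

site = find_target(RESISTANCE_GENE, SPACER)
print(f"cut site at index {site}" if site >= 0 else "no target: bacterium is spared")
```

A bacterium without the resistance gene simply offers no match, which is what makes the approach selective.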

Despite promising results thus far, there will be challenges to bringing engineered phages to market. For example, there is a risk that the phages could actually spread antibiotic-resistance genes to non-resistant bacteria. Another potential issue is that it might take a very large number of phages to treat an infection, which in turn could trigger immune reactions that would sabotage the treatment.

Ideally, though, if clinical trials go well, engineered phages could provide humans with a powerful weapon in the fight against superbugs — a fight that has, thus far, included a variety of strategies. Whenever it happens, it won’t come soon enough: this past January, the Centers for Disease Control (CDC) reported that a patient died from a superbug that was resistant to all 26 antibiotics available in the US.

The post Scientists Modify Viruses with CRISPR to Kill Antibiotic-Resistant Bacteria appeared first on Futurism.

For centuries, modern science has been shrinking the gap between humans and the rest of the universe, from Isaac Newton showing that one set of laws applies equally to falling apples and orbiting moons to Carl Sagan intoning that “we are made of star stuff” — that the atoms of our bodies were literally forged in the nuclear furnaces of other stars.

Even in that context, Gregory Matloff’s ideas are shocking. The veteran physicist at New York City College of Technology recently published a paper arguing that humans may be like the rest of the universe in substance and in spirit. A “proto-consciousness field” could extend through all of space, he argues. Stars may be thinking entities that deliberately control their paths. Put more bluntly, the entire cosmos may be self-aware.

The notion of a conscious universe sounds more like the stuff of late night TV than academic journals. Called by its formal academic name, though, “panpsychism” turns out to have prominent supporters in a variety of fields. New York University philosopher and cognitive scientist David Chalmers is a proponent. So too, in different ways, are neuroscientist Christof Koch of the Allen Institute for Brain Science, and British physicist Sir Roger Penrose, renowned for his work on gravity and black holes. The bottom line, Matloff argues, is that panpsychism is too important to ignore.

“It’s all very speculative, but it’s something we can check and either validate or falsify,” he says.

Three decades ago, Penrose introduced a key element of panpsychism with his theory that consciousness is rooted in the statistical rules of quantum physics as they apply in the microscopic spaces between neurons in the brain.


In 2006, German physicist Bernard Haisch, known both for his studies of active stars and his openness to unorthodox science, took Penrose’s idea a big step further. Haisch proposed that the quantum fields that permeate all of empty space (the so-called “quantum vacuum”) produce and transmit consciousness, which then emerges in any sufficiently complex system with energy flowing through it. And not just a brain, but potentially any physical structure. Intrigued, Matloff wondered if there was a way to take these squishy arguments and put them to an observational test.

One of the hallmarks of life is its ability to adjust its behavior in response to stimuli. Matloff began searching for astronomical objects that unexpectedly exhibit this behavior. Recently, he zeroed in on a little-studied anomaly in stellar motion known as Parenago’s Discontinuity. On average, cooler stars orbit our galaxy more quickly than do hotter ones. Most astronomers attribute the effect to interactions between stars and gas clouds throughout the galaxy. Matloff considered a different explanation. He noted that the anomaly appears in stars that are cool enough to have molecules in their atmospheres, which greatly increases their chemical complexity.

Matloff noted further that some stars appear to emit jets that point in only one direction, an unbalanced process that could cause a star to alter its motion. He wondered: could this actually be a willful process? Is there any way to tell?

If Parenago’s Discontinuity is caused by specific conditions within the galaxy, it should vary from location to location. But if it is something intrinsic to the stars — as consciousness would be — it should be the same everywhere. Data from existing stellar catalogs seems to support the latter view, Matloff claims. Detailed results from the Gaia star-mapping space telescope, due in 2018, will provide a more stringent test.
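
The logic of that test can be sketched in a few lines of Python. Everything below is synthetic and illustrative — the B−V ≈ 0.61 color boundary is roughly where the discontinuity is reported to set in, but the velocities, noise level, and jump size are invented for the demonstration, and this is not Matloff’s actual analysis:

```python
# Synthetic sketch: if the velocity jump at the cool-star boundary is intrinsic,
# its measured size should be statistically consistent across sky regions.
import numpy as np

rng = np.random.default_rng(0)
N_REGIONS, N_STARS = 8, 500
TRUE_JUMP = 5.0                                   # assumed jump size, km/s (illustrative)

jumps = []
for _ in range(N_REGIONS):
    color = rng.uniform(0.0, 1.4, N_STARS)        # B-V color index per star
    v = 20.0 + rng.normal(0.0, 3.0, N_STARS)      # baseline velocity + measurement noise
    v[color > 0.61] += TRUE_JUMP                  # cooler (redder) stars move faster
    jumps.append(v[color > 0.61].mean() - v[color <= 0.61].mean())

jumps = np.array(jumps)
print("jump per region (km/s):", np.round(jumps, 1))
print(f"region-to-region scatter: {jumps.std():.2f} km/s")  # small scatter -> "intrinsic" signature
```

A location-dependent (environmental) cause would instead show up as region-to-region variation in the jump, which is exactly what the Gaia data can discriminate.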

Matloff is under no illusion that his colleagues will be convinced, but he remains upbeat: “Shouldn’t we at least be checking? Maybe we can move panpsychism from philosophy to observational astrophysics.”

Mind Out of Matter

While Matloff looks out to the stars to verify panpsychism, Christof Koch looks at humans. In his view, the existence of widespread, ubiquitous consciousness is strongly tied to scientists’ current understanding of the neurological origins of the mind.

“The only dominant theory we have of consciousness says that it is associated with complexity — with a system’s ability to act upon its own state and determine its own fate,” Koch says. “Theory states that it could go down to very simple systems. In principle, some purely physical systems that are not biological or organic may also be conscious.”

Koch is inspired by integrated information theory, a hot topic among modern neuroscientists, which holds that consciousness is defined by the ability of a system to be influenced by its previous state and to influence its next state.

The human brain is just an extreme example of that process, Koch explains: “We are more complex, we have more self-awareness — well, some of us do — but other systems have awareness, too. We may share this property of experience, and that is what consciousness is: the ability to experience anything, from the most mundane to the most refined religious experience.”
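
For a taste of what “a system’s ability to act upon its own state” can mean quantitatively, here is a crude Python toy: the mutual information between the current and next state of a tiny two-state Markov system. This is emphatically not IIT’s actual measure (Φ, which is far more involved); it is just an assumed stand-in showing that “how much a system’s past constrains its future” can be computed for any system, biological or not.

```python
# Crude toy, NOT IIT's phi: past-future mutual information of a 2-state Markov chain.
import numpy as np

T = np.array([[0.9, 0.1],        # T[i, j] = P(next state = j | current state = i)
              [0.2, 0.8]])

# Stationary distribution: eigenvector of T^T with eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

joint = pi[:, None] * T                           # P(current, next)
px, py = joint.sum(axis=1), joint.sum(axis=0)     # marginals
mi = np.sum(joint * np.log2(joint / np.outer(px, py)))
print(f"past-future mutual information ≈ {mi:.3f} bits")
```

A system whose next state ignores its current one scores zero; anything with internal dynamics scores above it — the intuition behind Koch’s “it could go down to very simple systems.”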


Like Matloff, Koch and his colleagues are actively engaged in experimental tests of these ideas. One approach is to study brain-impaired patients to see if their information responses align with biological measures of their consciousness. Another approach, further off, is to wire the brains of two mice together and see how the integrated consciousness of the animals changes as the amount of information flowing between them is increased. At some point, according to integrated information theory, the two should merge into a single, larger information system. Eventually, it should be possible to run such experiments with humans, wiring their brains together to see if a new type of consciousness emerges.

Despite their seeming similarities, Koch is dubious of Matloff’s volitional stars. What is distinctive about living things, according to his theory, is not that they are alive but that they are complex. Although the sun is vastly bigger than a bacterium, from a mathematical perspective it is also vastly simpler. Koch allows that a star may have an internal life that allows it to “feel,” but whatever that feeling is, it is much less than the feeling of being an E. coli.

On the other hand, “even systems that we don’t consider animate could have a little bit of consciousness,” Koch says. “It is part and parcel of the physical.” From this perspective, the universe may not exactly be thinking, but it still has an internal experience intimately tied to our own.

A Participatory Cosmos

Which brings us to Roger Penrose and his theories linking consciousness and quantum mechanics. He does not overtly identify himself as a panpsychist, but his argument that self-awareness and free will begin with quantum events in the brain inevitably links our minds with the cosmos. Penrose sums up this connection beautifully in his opus “The Road to Reality”:

“The laws of physics produce complex systems, and these complex systems lead to consciousness, which then produces mathematics, which can then encode in a succinct and inspiring way the very underlying laws of physics that gave rise to it.”

Despite his towering stature as a physicist, Penrose has encountered resistance to his theory of consciousness. Oddly, his colleagues have been more accepting of the exotic, cosmic-consciousness implications of quantum mechanics. Ever since the 1920s, physicists have puzzled over the strangely privileged role of the observer in quantum theory. A particle exists in a fuzzy state of uncertainty…but only until it is observed. As soon as someone looks at it and takes its measurements, the particle seems to collapse into a definite location.

The late physicist John Wheeler concluded that the apparent oddity of quantum mechanics was built on an even grander and odder truth: that the universe as a whole festers in a state of uncertainty and snaps into clear, actual being when observed by a conscious being — that is, us.


“We are participators in bringing into being not only the near and here but the far away and long ago,” Wheeler said in 2006. He called his interpretation the “participatory anthropic principle.” If he is correct, the universe is conscious, but in almost the opposite of the way that Matloff pictures it: only through the acts of conscious minds does it truly exist at all.

It is hard to imagine how a scientist could put the participatory anthropic principle to an empirical test. There are no stars to monitor, and no brains to measure, to understand whether reality depends on the presence of consciousness. Even if it cannot be proven, the participatory anthropic principle extends the unifying agenda of modern science, powerfully evoking the sense of connectedness that Albert Einstein called the cosmic religious feeling.

“In my view, it is the most important function of art and science to awaken this feeling and keep it alive in those who are receptive to it,” Einstein wrote in a 1930 New York Times editorial. Explorers like Matloff are routinely dismissed as fringe thinkers, but it is hard to think of any greater expression of that feeling than continuing the quest to find out if our human minds are just tiny components of a much greater cosmic brain.

Is the Universe Conscious? was originally published by NBC Universal Media, LLC on June 16, 2017 by Corey S. Powell. Copyright 2017 NBC Universal Media, LLC. All rights reserved.

The post Could the Universe Be Conscious? appeared first on Futurism.

Snails’ Minds Rendered Spotless

In the U.S. alone, post-traumatic stress disorder (PTSD) is estimated to affect about 8 million adults every year. Whether these individuals are struggling with events from a battlefront or a violent past encounter, they are likely to have at least two kinds of memories associated with their trauma — associative memories and non-associative memories.

Associative memories contain important information about the traumatic event, like who hurt you, or perhaps where the event took place. Non-associative memories, on the other hand, involve details that are not directly related to the traumatic event but can still trigger symptoms of PTSD. Now, scientists have found a way to selectively delete non-associative memories while retaining associative ones — at least in snails.

Image Credit: Schacher Lab/Columbia University Medical Center

They were able to do this by blocking various molecules associated with an enzyme called Protein Kinase M (PKM), which is critical for maintaining long-term memories. Researchers recently reported in the journal Current Biology that they were able to erase different kinds of memories by blocking different molecules associated with PKM, and were even able to erase specific memories by blocking certain variants of these molecules.

“By isolating the exact molecules that maintain non-associative memory, we may be able to develop drugs that can treat anxiety without affecting the patient’s normal memory of past events,” Jiangyuan Hu, a co-author of the paper, said in a Columbia University press release.

Mastering Memory

While the team is hopeful that these methods will be as effective in humans as they were in snails, many more studies are needed before we get to that point. And, as yet, the team isn’t sure that the memories they erased are gone forever — only further research can clarify this issue.

This line of research may hold incredible promise for people suffering from PTSD, other mental health issues caused by traumatic events, and even drug addiction. However, this is a delicate area. Memories serve a purpose in most cases, and if we erase bad memories we may find ourselves making the same mistakes. In addition, some theorize that it’s not the memories per se that are the problem — it’s the recall process gone awry that causes the trauma to be re-experienced.

Furthermore, it isn’t clear how successful scientists will be at erasing only specific memories when, by their very nature, memories are interlinked with one another. Even if they can, will all of a person’s other memories make sense in context when one life-changing yet negative experience is suddenly gone?

Still, the team believes that the only way we can know if erasing memories is a viable medical strategy is by studying it. “Our study is a ‘proof of principle’ that presents an opportunity for developing strategies and perhaps therapies to address anxiety,” co-author Samuel Schacher said in the press release.

The post Scientists Selectively Erased Memories In Snails — Are We Next? appeared first on Futurism.

If realized, it would give us an endless source of free energy.

The post Could We Ever Make a Perpetual Motion Machine? appeared first on Futurism.

It all boils down to a single tweak in our evolutionary traits.

The post Survival of the Fittest? How Did Humans Come to be the Dominant Species on Earth? appeared first on Futurism.

When Will We Clone a Human?

Human cloning may endure as one of the go-to science fiction tropes, but in reality we may be much closer to achieving it than our fictional heroes imply — at least in terms of the science required. One of the most prominent hurdles facing us may have less to do with the process and more to do with its potential consequences and our collective struggle to reconcile the ethics involved. That said, while science has come a long way in the last century when it comes to cloning a menagerie of animals, cloning humans and other primates has proven incredibly difficult. And while we might not be on the brink of cloning entire human beings, we’re already capable of cloning human cells — the question is, should we be?


The astoundingly complex concept of cloning boils down to a fairly simple (in theory, at least) practice: you need two cells from the same animal, one of which is an egg cell from which you’ve removed the DNA. You take the DNA from the other, somatic cell and put it inside the DNA-free egg cell. Whatever offspring that egg cell goes on to produce will be genetically identical to the parent cell. While human reproduction is the result of the joining of two cells (one from each parent, each with its own DNA), the cellular photocopy technique does occur in nature. Bacteria reproduce through binary fission: each time a bacterium divides, its DNA is divided too, so that each new bacterium is genetically identical to its predecessor. Sometimes, though, mutations occur in this process — in fact, that can be by design, functioning as a survival mechanism. Such mutations allow bacteria to, for example, become resistant to antibiotics bent on destroying them. On the other hand, some mutations are fatal to an organism or preclude it from coming into existence at all. And while it might seem like the picking-and-choosing that’s inherent to cloning could sidestep these potential genetic hiccups, scientists have found that’s not necessarily the case.

Prediction: When will the first human be cloned? (Image Credit: Pixabay)

What The Experts Say

While Dolly the sheep might be the most famous mammal science has ever cloned, she’s by no means the only one: scientists have cloned mice, cats, and several types of livestock in addition to sheep. The cloning of cows has, in recent years, taught scientists a great deal about why the process fails: everything from implantation failure to the aforementioned mutations that render offspring unable to survive. Harris Lewin, a professor in the UC Davis Department of Evolution and Ecology, and his team published their findings on the impact cloning has on gene expression in the journal Proceedings of the National Academy of Sciences back in 2016. In the study’s press release, Lewin noted that the findings were invaluable for refining cloning techniques in mammals, but that their discoveries “also reinforce the need for a strict ban on human cloning for any purposes.”

The creation of entire mammals via reproductive cloning has proven a difficult process both practically and ethically, as legal scholar and ethicist Hank Greely of Stanford University explained to Business Insider in 2016:

“I think no one realized how hard cloning would be in some species though relatively easy in others. Cats: easy; dogs: hard; mice: easy; rats: hard; humans and other primates: very hard.”

The cloning of human cells, however, may be a far more immediate application for humans. Researchers call it “therapeutic” cloning, and differentiate it from traditional cloning that has reproductive intent. In 2014, researchers created human stem cells through the same cloning technique that generated Dolly the sheep. Because stem cells can differentiate to become any kind of cell in the body, they could be utilized for a wide variety of purposes when it comes to treating diseases — particularly genetic diseases, or diseases where a patient would require a transplant from an often elusive perfect match donor. This potential application is already well underway: earlier this year a woman in Japan suffering from age-related macular degeneration was treated with induced pluripotent stem (iPS) cells created from her own skin cells, which were then implanted into her retinas and stopped her vision from degenerating further.

We asked the Futurism community to predict when they think we’ll be able to successfully clone a full human, and the majority of those who responded agree that it feels like we’re getting close: nearly 30 percent predicted we’ll clone our first human by the 2020s. “We have replaced, and replicated almost every biology on earth,” said reader Alicja Laskowska, “[the] next step is for cures and to do that you need clean DNA, and there’s your start.”

The post How Close Are We to Successfully Cloning the First Human? appeared first on Futurism.

Will the pros of CRISPR outweigh the potential cons of this breakthrough technology?

The post Experts Answer: Will Gene Editing Let Us Live Forever or Destroy Humanity? appeared first on Futurism.

Global security experts noticed a pattern emerge in the aftermath of the worst disasters—and it could be critical to preventing future ones.

The post WATCH: We Can Predict The Future to Prevent Disasters. Here’s How. appeared first on Futurism.

LISA Launch Set

The ESA’s Science Program Committee met on June 20, and at the top of its agenda was planning for the agency’s missions over the next 20 years — one of which is the three-satellite Laser Interferometer Space Antenna (LISA) mission. The space antenna is designed to detect gravitational waves, and the mission has been selected to move forward, with launch set for 2034.

The project uses the satellite trio to create a huge triangle in space. The satellites form the corners and lasers bouncing across the 2.5 million kilometers (1.55 million miles) between them form its sides. The triangle itself will follow Earth as it orbits the Sun. Meanwhile, the satellites will be sorting through an impressive array of cosmic noise to determine which signals are the most promising signs of supermassive black holes, and which are just false leads.
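
For a sense of scale, the light travel time along one arm follows from simple arithmetic (a back-of-the-envelope figure, not a mission specification):

```latex
t = \frac{L}{c} = \frac{2.5 \times 10^{9}\,\text{m}}{3.0 \times 10^{8}\,\text{m/s}} \approx 8.3\,\text{s}
```

Each laser link therefore takes over eight seconds to cross, one reason the mission’s positioning and timing requirements are so demanding.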

Starting Where LIGO Stops

The Laser Interferometer Gravitational-wave Observatory (LIGO) first detected gravitational waves in September 2015, an observation announced in February 2016, and confirmed its initial findings when the waves were detected again in June of 2016. Earlier this year, LIGO detected the waves for a third time; the signal appears to come from a merger that formed a black hole 49 times the mass of our sun.

LISA will take the detection of gravitational waves to the next level by searching for signals from supermassive black holes millions of times more massive than those found by LIGO. LISA should also give scientists enough lead time to point telescopes at a merger before it happens, enabling us to discover more about how these collisions work.

“We’ll be able to see signals for months, so we’ll have time to point all these other telescopes at that point in the sky to see if there’s any other signals coming from that area when the merger happens,” ESA’s senior advisor for science & exploration Mark McCaughrean told New Scientist.

The post Europe’s Space Program Will Launch a Gravitational Wave Hunting Spacecraft in 2034 appeared first on Futurism.

Ten Strong

The Solar System hasn’t been the same since Pluto was downgraded from a planet to a dwarf planet, and all bets have been off since this monumental decision. Case in point, last year, scientists at the California Institute of Technology proposed that there might be a true Planet Nine in our solar system. The team asserted that it appears to be 10 times the mass of Earth and that it is hiding out in the remote recesses of our solar system—well beyond the orbit of Pluto.

At the time of the discovery, Mike Brown, who was behind the work, noted that the existence of a 9th planet is extremely likely: “Hey Planet Nine fans, a new eccentric KBO was discovered. And it is exactly where Planet Nine says it should be,” Brown tweeted. Furthermore, he says, the new object “takes the probability of this being a statistical fluke down to ~.001% or so.”

Similar discoveries have been made in relation to new dwarf planets. Located about 13.7 billion km (8.5 billion miles) from the Sun, 2014 UZ224 measures about 530 km (330 miles) in diameter and takes around 1,100 Earth years to complete its orbit. And so, our little corner of the cosmos has been in quite a state of flux.

But it appears that there may be more surprises lying in wait at the edge of our Solar System.

Kathryn Volk and Renu Malhotra at the University of Arizona have noticed some strange movement out in the Kuiper belt…movement that they believe could suggest the existence of a tenth planet. To break this down a bit: Objects in the Kuiper belt are far enough away from the other major bodies in our solar system that the gravitational influence of the large planets doesn’t impact them (at least, not to a measurable degree); however, their movements can still be predicted, thanks to sky surveys and a host of advanced technologies.

Notably, if these predictions don’t match up, it may mean there’s another object—another large body—past our vision whose gravity is impacting the movements of the Kuiper belt objects.

Closer to Home

The search for Planet Nine has led scientists to believe that it is orbiting around 700 AU from the Sun. However, Volk and Malhotra believe that this tenth planet could be much closer, because the orbits of Kuiper belt bodies appear to shift just beyond 50 AU. They also contend that the planet would be roughly comparable to Mars in terms of size.


Other astronomers, however, are not so quick to hop on this train of thought. Alessandro Morbidelli at the Côte d’Azur Observatory in Nice, France tells New Scientist “I am dubious that a planet so close and so bright would have remained unnoticed.”

Still, Volk is confident in these findings, which are set to be published in The Astronomical Journal. “It would have to be quite a fluke for this to not be a real effect,” she says. “We think there is a real signal there and this implies an additional planet.”

The discovery of more bodies in the Kuiper belt will help to support or debunk these findings. Either way, our little solar family is going to continue to grow with these long-lost distant relations.

The post New Evidence Suggests That Our Solar System Has 10 Planets appeared first on Futurism.

Schrödinger’s Cat

Even if you’re not that into heavy science, you’re probably familiar with Schrödinger’s cat, the thought experiment that lets us consider quantum states in which more than one outcome is possible at once. The cat is in a closed box, and with it are a vial of poison, a hammer that can smash the vial, a Geiger counter, and a trace amount of radioactive material. The amount of radioactive material is so small, however, that the Geiger counter has only a 50 percent chance of detecting it. If it does, it will trigger the hammer to smash the vial, and the cat will die.

We won’t know whether the cat is alive or dead until we open the box. We just know that each possibility — the cat being killed or surviving — is equally likely. So, until the box is opened, the cat exists in a kind of superposition: both alive and dead. Schrödinger’s point was to demonstrate the apparent impossibility — and silliness — of such a state. But thanks to quantum physics, we now know it’s not that silly — and not necessarily impossible.
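
In standard textbook notation, the pre-measurement state and its 50/50 odds look like this (a minimal sketch of the idealized two-outcome case):

```latex
|\psi\rangle = \frac{1}{\sqrt{2}}\left(|\text{alive}\rangle + |\text{dead}\rangle\right),
\qquad
P(\text{alive}) = P(\text{dead}) = \left|\frac{1}{\sqrt{2}}\right|^{2} = \frac{1}{2}
```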

Speaking of thought experiments used to talk about quantum physics that were devised by people who never even considered quantum physics, let’s consider the Zeno effect and the anti-Zeno effect. Zeno of Elea was a philosopher who made it his life mission to prove that everything was B.S., and he did that by devising paradoxes to demonstrate that even things that seem obviously true to us are, in fact, false. One of these is the arrow paradox, from which arises the Zeno effect and its corollary.

The Zeno effect works like this: in order to measure or observe something at a particular moment, it must be motionless. Say you want to see if an atom has decayed or not. In reality, although there are two possible states, most of the time the chances are not 50/50. That’s because it takes time for something to decay — at least a tiny bit of time. Therefore, if you check on the atom quickly and often enough, it won’t decay. The corollary anti-Zeno effect is also true. If you delay measurement until the atom is likely to have decayed, then keep this pattern going, you can force the system to decay more rapidly.
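
The standard short-time argument behind the Zeno effect (textbook quantum mechanics, not specific to the study below) makes this concrete: for very short times, an undisturbed survival probability falls off quadratically, so checking N = T/τ times in a window T gives

```latex
P(\tau) \approx 1 - \left(\frac{\tau}{\tau_Z}\right)^{2}
\quad\Longrightarrow\quad
P(T) \approx \left[1 - \left(\frac{\tau}{\tau_Z}\right)^{2}\right]^{T/\tau}
\xrightarrow{\;\tau \to 0\;} 1
```

where τ_Z is the system’s characteristic “Zeno time.” Measuring ever more frequently (τ → 0) drives the survival probability toward 1 — the freezing described above — while timing measurements to coincide with the later, faster stages of decay yields the anti-Zeno speed-up.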

Disturbing Effects Of Measurement

Scientists at Washington University in St. Louis wanted to know what happens if you disturb the system again and again but don’t relay any data. In other words, they wanted to see whether it is the act of measurement and observation — or simply the disturbing influence — that causes the Zeno effect. To find out, they experimented with qubits and devised a “quasimeasurement,” in which the atom is disturbed but no information about it is measured or relayed.

The team found that even quasimeasurements cause the Zeno effect. The quantum environment doesn’t need to be connected to the outside environment for the disturbance to achieve the effect. These findings are interesting because they open up new areas of research into how we might be able to control quantum systems.

Oh, and by the way: no cats, philosophers, or physicists were hurt in the experiments.

The post How Schrödinger’s Cat Helps Explain the New Findings About the Quantum Zeno Effect appeared first on Futurism.

Sun-Free Tanning

Say goodbye to spray-on tans that look orange, and hello to a chemical that causes any skin type to actually tan. The best part? It boosts natural resistance to skin cancer, too. Scientists have created a compound based on this chemical, and it’s headed for human trials. Used along with sunscreen, it causes a temporary boost in melanin, the natural skin pigment found in most humans to varying degrees. The researchers have already proven that it works in mice carrying the MC1R “redhead” gene; now they just need to put it to the test in humans.

If the trials are successful, even people who always turn pink in the sun will be able to tan — all without the accompanying skin damage from UV radiation. It would also eliminate the need for spray-on tans, which don’t afford any protection from sun damage. The principal advantage of the compound is that it provides an actual melanin boost, which offers that protection.

Image Credit: Mujahid et al./Cell

Preventing Skin Cancer

According to the Skin Cancer Foundation, more than 5.4 million cases of non-melanoma skin cancer are treated in more than 3.3 million people every year in the U.S. In fact, there are more new cases of skin cancer annually than cases of breast, colon, lung, and prostate cancer combined. More people have had skin cancer in the past 30 years than all other cancers put together. One in five people in the U.S. will get skin cancer at least once, and among Americans 65 and older, that rate rises to 40 to 50 percent. The U.S. spends around $8.1 billion a year treating skin cancer.

“It would not actually be a fake tan, it would be the real thing,” research leader David Fisher told The Guardian. “It would just be sunless.” People who tan easily or have darker skin tones are at far lower risk of melanoma, the most deadly type of skin cancer. That’s because melanin, the pigment that creates darker skin tones and suntanned skin, limits radiation damage to cells by dissipating more than 99.9 percent of UV rays the skin absorbs. If this compound works, we may see considerably less skin cancer in our lifetimes.

Still, many experts say caution should still be the watchword when it comes to sun exposure. “I worry these molecules could give people a false sense of security,” dermatologic surgeon Jennifer Herrmann told Science. “If you are just slightly darker, you may not give yourself a huge amount of protection,” she added, pointing out that tan skin is less protective against UV rays than a low-SPF sunscreen. Fisher’s idea, however, is that the compound would not be a substitute for sunscreen, and ideally, sunscreen could be combined with it in a single product.

The post Scientists Create “Sun-Free Tan” With Chemical That Also Keeps Skin Safe From Sun Damage appeared first on Futurism.

A Strange Form of Matter

Thanks to technological advances, scientists now have various ways of manipulating matter. Oftentimes, these result in the discovery of new types of matter that possess unique properties — like the famous metallic hydrogen and the bizarre time crystal. The discovery of such materials leads to a wide range of potential applications in electronics. One of these is the so-called “liquid light,” a strange form of matter that researchers from the CNR NANOTECH Institute of Nanotechnology in Italy and Polytechnique Montréal in Canada recently formed at room temperature for the first time.


Liquid light can be categorized as a superfluid, a status derived from the ability of particles to condense into a state known as a Bose-Einstein condensate. Basically, it’s light that demonstrates liquid behavior, particularly the ability to flow out of its container. Light is ordinarily thought of as a wave, but more recent findings show that it can also take on liquid properties.

Superfluid Bose-Einstein condensates follow the rules of quantum physics instead of classical physics. Usually, they can exist only for fractions of a second at temperatures near absolute zero, but this work, published in the journal Nature Physics, proved that such extreme cold is not always necessary.
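
The textbook reason room temperature becomes possible here is worth a line of math. For an ideal Bose gas, the condensation temperature is

```latex
T_c = \frac{2\pi\hbar^{2}}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3} \;\propto\; \frac{1}{m}
```

so at a given density n, the lighter the particle, the higher the condensation temperature. Polaritons — the light-matter hybrids used in this work — have effective masses orders of magnitude below an electron’s, which (roughly speaking) is what lifts T_c from cryogenic territory toward ambient conditions.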

“The extraordinary observation in our work is that we have demonstrated that superfluidity can also occur at room-temperature, under ambient conditions, using light-matter particles called polaritons,” lead researcher Daniele Sanvitto said in a press release.

Expanding Possibilities

The researchers achieved this by sandwiching a 130-nanometer-thick film of organic molecules between two highly reflective mirrors, which they then exposed to a blast from a 35-femtosecond laser pulse. Because of light’s strong interaction with the organic molecules, a hybrid light-matter superfluid was formed.

The flow of polaritons encounters an obstacle in non-superfluid (top) and superfluid (bottom). Image Credit: Polytechnique Montréal

“Under normal conditions, a fluid ripples and whirls around anything that interferes with its flow,” Stéphane Kéna-Cohen, the Montreal team coordinator, explained in the press release. “In a superfluid, this turbulence is suppressed around obstacles, causing the flow to continue on its way unaltered.”

Naturally, the research team looked to potential applications of this new form of matter. Apart from providing a better foothold in the study of quantum hydrodynamics, the researchers said that it could also expand the study of room-temperature polariton devices — particularly in making super-conductive materials for electronics like LEDs, solar panels, and lasers.

“The fact that such an effect is observed under ambient conditions can spark an enormous amount of future work,” the researchers wrote in the study, “not only to study fundamental phenomena related to Bose-Einstein condensates with table-top experiments, but also to conceive and design future photonic superfluid-based devices where losses are completely suppressed and new unexpected phenomena can be exploited.”

The post Scientists Demonstrate “Liquid Light” at Room Temperature for the First Time appeared first on Futurism.

Famed physicist Neil deGrasse Tyson speaks to Larry King about what he thinks about one of life’s most important questions — what happens to us when we die?

The post WATCH: Neil deGrasse Tyson Explains Death and the Afterlife Using Science appeared first on Futurism.

Having It Both Ways

We cannot see dark matter, but we know it exists simply because it has to — without it, the universe cannot exist under our current understanding of natural law. All of the visible mass in the universe only accounts for about 15 percent of the total mass, and something has to make up the other 85 percent.


To account for this disparity, scientists have hypothesized the existence of dark matter, but they are still in the dark about the exact composition of the elusive substance.

Traditionally, scientists have believed that dark matter consists of weakly interacting particles whose influence, via gravity, can be seen at large scales. This “cold” form of dark matter meshes well with what we know about the relationships between galaxies within the universe, but it does not translate down to the smaller, single-galaxy scale and the rotations of stars within those galaxies.

Now, two scientists think they may know why that is.

Prove It

Justin Khoury from the University of Pennsylvania and Lasha Berezhiani from Princeton University hypothesize that dark matter can change phases: behaving as “cold,” particle-like matter on the largest scales while condensing into a superfluid state within the dense environment of a single galaxy’s halo.

This hypothesis would allow for an additional force outside of gravity to explain galactic behaviors. As particle physicist Tim Tait from the University of California, Irvine, told Quanta Magazine, “It’s a neat idea. You get to have two different kinds of dark matter described by one thing.”

The findings have yet to be peer reviewed, and until the hypothesis can be tested, no one can say for sure whether it’s accurate. However, the research can be looked over on the pre-publication site arXiv.org, and Khoury and Berezhiani are currently developing tools to test their model.

If confirmed, the discovery could have far-reaching implications for the study of our universe and how it operates, but for now, we’ll just have to be content with what appears to be an exciting development in the search to understand dark matter.

The post A Clever New Hypothesis Could Explain the Elusive Nature of Dark Matter appeared first on Futurism.

Spooky Action Gets to Space

When it comes to weird science stuff, quantum entanglement is probably near the top of the list, especially back in the days when Einstein referred to it as that “spooky action at a distance.” Physicists have since demonstrated the “spooky” phenomenon to be possible, but now they want to extend its reach. A new study shows it’s possible for quantum entanglement to span far longer distances than previously demonstrated.


“We have demonstrated the distribution of two entangled photons from a satellite to two ground stations that are 1,203 kilometers [748 miles] apart,” lead author Juan Yin, physicist at the Science and Technology University of China in Shanghai, explained in a research paper published in the journal Science. The previous record for entanglement distribution reached only 100 kilometers (62 miles).

Yin’s team used Micius — the world’s first quantum-enabled satellite, which China launched in 2016 — to transmit entangled photons to several ground stations separated by long distances. They managed to achieve this feat by using laser beams to keep the light particles from getting lost as they traveled.

“The result again confirms the nonlocal feature of entanglement and excludes the models of reality that rest on the notions of locality and realism,” Yin and his colleagues wrote.

Widening Possibilities

Though quantum entanglement is incredibly complex, it’s possible to explain it in simple terms. Two or more particles are entangled or linked when a change in one’s state or properties instantaneously affects the other’s. What makes this stranger is that this link works regardless of distance. This phenomenon becomes particularly useful in storing information — as in the case of using quantum bits (qubits) in quantum computing.
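
The textbook example of such a link is a two-photon Bell state, a minimal sketch of the kind of state distributed in these experiments:

```latex
|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)
```

Measuring either photon yields 0 or 1 at random, but the two outcomes always agree, no matter how far apart the photons are when measured — the correlation verified here across 1,203 kilometers.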

By proving that quantum entanglement can be maintained in space over such a long distance, this work paves the way for long-distance satellite quantum communication and may even open the door to quantum teleportation. “Long-distance entanglement distribution is essential for the testing of quantum physics and quantum networks,” Yin’s team wrote.

Advances in quantum cryptography, which rely heavily on extending entanglement, could change the way information is stored and transferred in the future — opening up applications in improved security in communication and even payment systems.

The post Physicists Demonstrate Record Breaking Long-Distance Quantum Entanglement in Space appeared first on Futurism.

Don’t worry, the Sun won’t stop spinning in the near future—but it’s good to know why its rotation is slowing down in the first place.

The post The Sun’s Rotation is Slowing Down — Here’s Why appeared first on Futurism.

M-type stars, also known as “red dwarfs,” have become a popular target for exoplanet hunters of late. This is understandable given the sheer number of terrestrial (i.e. rocky) planets that have been discovered orbiting around red dwarf stars in recent years. These discoveries include the closest exoplanet to our Solar System (Proxima b) and the seven planets discovered around TRAPPIST-1, three of which orbit within the star’s habitable zone.

The latest find comes from a team of international astronomers who discovered a planet around GJ 625, a red dwarf star located just 21 light years away from Earth. This terrestrial planet is roughly 2.82 times the mass of Earth (aka. a “super-Earth”) and orbits within the star’s habitable zone. Once again, news of this discovery is prompting questions about whether or not this world could indeed be habitable (and also inhabited).

The international team was led by Alejandro Suárez Mascareño of the Canary Islands Institute of Astrophysics (IAC) and includes members from the University of La Laguna and the University of Geneva. Their research was also supported by the Spanish National Research Council (CSIC), the Institute of Space Studies of Catalonia (IEEC), and the National Institute for Astrophysics (INAF).

Diagram showing GJ 625’s habitable zone in comparison to the Sun’s. Image Credit: IAC

The study detailing their findings was recently accepted for publication by the journal Astronomy & Astrophysics and appears online under the title “A super-Earth on the Inner Edge of the Habitable Zone of the Nearby M-dwarf GJ 625.” According to the study, the team used radial-velocity measurements of GJ 625 to determine the presence of a planet with between two and three times the mass of Earth.

This discovery was part of the HARPS-N Red Dwarf Exoplanets Survey (HADES), which studies red dwarf stars to determine the presence of potentially habitable planets orbiting them. This survey relies on the High Accuracy Radial velocity Planet Searcher for the Northern hemisphere (HARPS-N) instrument — which is part of the 3.6-meter Galileo National Telescope (TNG) at the IAC’s Roque de Los Muchachos Observatory on the island of La Palma.

Using this instrument, the team collected high-resolution spectroscopic data of the GJ 625 system over the course of three years. Specifically, they measured small variations in the star’s radial velocity, which are attributed to the gravitational pull of a planet. From a total of 151 spectra obtained, they were able to determine that the planet (GJ 625 b) was likely terrestrial and had a minimum mass of 2.82 ± 0.51 Earth masses.

Moreover, they obtained distance estimates that placed it roughly 0.078 AU from its star, and an orbital period estimate of 14.628 ± 0.013 days. At this distance, the planet’s orbit places it just within GJ 625’s habitable zone. Of course, this does not mean conclusively that the planet has conditions conducive to life on its surface, but it is an encouraging indication.
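
As a sanity check on those numbers, the standard radial-velocity formula shows just how small the stellar wobble being measured is. The stellar mass below is an assumption plugged in for illustration (a value typical of a mid-M dwarf like GJ 625), not a figure quoted from the study:

```python
import math

# Standard RV semi-amplitude for a circular orbit:
#   K = (2*pi*G / P)**(1/3) * m_p*sin(i) / (M_star + m_p)**(2/3)
G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2
M_SUN, M_EARTH = 1.989e30, 5.972e24    # kg

P = 14.628 * 86400                     # orbital period from the study, in seconds
m_p = 2.82 * M_EARTH                   # minimum planet mass from the study (sin i = 1)
M_star = 0.30 * M_SUN                  # ASSUMED stellar mass, typical for a mid-M dwarf

K = (2 * math.pi * G / P) ** (1 / 3) * m_p / (M_star + m_p) ** (2 / 3)
print(f"stellar wobble K ≈ {K:.2f} m/s")   # ≈ 1.6 m/s, roughly a slow walking pace
```

A signal of order one meter per second is why an instrument as stable as HARPS-N, and 151 spectra accumulated over three years, were needed.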

The Observatorio del Roque de los Muchachos, located on the island of La Palma. Image Credit: IAC

As Alejandro Suárez Mascareño explained in an IAC press release:

As GJ 625 is a relatively cool star the planet is situated at the edge of its habitability zone, in which liquid water can exist on its surface. In fact, depending on the cloud cover of its atmosphere and on its rotation, it could potentially be habitable.

This is not the first time that the HADES project detected an exoplanet around a red dwarf star. In fact, back in 2016, a team of international researchers used this project to discover two super-Earths orbiting GJ 3998, a red dwarf located about 58 ± 2.28 light years from Earth. Beyond HADES, this discovery is yet another in a long line of rocky exoplanets that have been discovered in the habitable zone of a nearby red dwarf star.

Such findings are very encouraging since red dwarfs are the most common type of star in the known Universe — accounting for an estimated 70% of stars in our galaxy alone. Combined with the fact that they can exist for up to 10 trillion years, red dwarf systems are considered a prime candidate in the search for habitable exoplanets.

But as with all other planets discovered around red dwarf stars, there are unresolved questions about how the star’s variability and stability could affect the planet. For starters, red dwarf stars are known to vary in brightness and periodically release gigantic flares. In addition, any planet close enough to be within the star’s habitable zone would likely be tidally locked with it, meaning that one side would be exposed to a considerable amount of radiation.

Artist’s impression of the exoplanets orbiting a red dwarf star. Image Credit: ESO/M. Kornmesser/N. Risinger (skysurvey.org).

As such, additional observations will need to be made of this exoplanet candidate using the time-tested transit method. According to Jonay Hernández — a professor from the University of La Laguna, a researcher with the IAC and one of the co-authors on the study — future studies using this method will not only be able to confirm the planet’s existence and characterize it, but also determine if there are any other planets in the system.

“In the future, new observing campaigns of photometric observations will be essential to try to detect the transit of this planet across its star, given its proximity to the Sun,” he said. “There is a possibility that there are more rocky planets around GJ 625 in orbits which are nearer to, or further away from the star, and within the habitability zone, which we will keep on combing.”

According to Rafael Rebolo — one of the study’s co-authors from the University of La Laguna, a researcher with the IAC, and a member of the CSIC — future surveys using the transit method will also allow astronomers to determine with a fair degree of certainty whether or not GJ 625 b has the all-important ingredient for habitability — i.e., an atmosphere:

The detection of a transit will allow us to determine its radius and its density, and will allow us to characterize its atmosphere by the transmitted light observed using high resolution, high stability spectrographs on the GTC or on telescopes of the next generation in the northern hemisphere, such as the Thirty Meter Telescope (TMT).

Artist’s impression of a system of exoplanets orbiting a low mass, red dwarf star. Image Credit: NASA/JPL

But what is perhaps most exciting about this latest find is how it adds to the population of extra-solar planets within our cosmic neighborhood. Given their proximity, each of these planets represents a major opportunity for research. And as Dr. Mascareño told Universe Today via email:

While we have already found more than 3600 extra-solar planets, the exoplanet population in our near neighborhood is still somewhat unknown. At 21 ly from the Sun, GJ 625 is one of the 100 nearest stars, and right now GJ 625 b is one of the 30 nearest exoplanets detected and the 6th nearest potentially habitable exoplanet.

Once again, ongoing surveys of nearby star systems are providing plenty of potential targets in the search for life beyond our Solar System. And with both ground-based and space-based next-generation telescopes joining the search, we can expect to find many, many more candidates in the coming years. In the meantime, be sure to check out this animation of GJ 625 b and its parent star:

The post Earth-Like Planet Discovered in Habitable Zone of Nearby Red Dwarf appeared first on Futurism.

No True Alternatives

Currently, the prevailing theory about the origin of the universe is still the Big Bang. There have been, however, attempts to prove that the universe might have come about in a much simpler manner; most notably the “no-boundary proposal” by James Hartle and Stephen Hawking, as well as Alexander Vilenkin’s “tunneling proposal.” New research from the Max Planck Institute for Gravitational Physics (Albert Einstein Institute/AEI) in Potsdam and the Perimeter Institute in Canada suggests that there’s no possible way to circumvent the Big Bang — not yet, anyway.

Based on data from the Planck satellite mission, we know that the universe was a hot, dense soup of particles some 13.8 billion years ago. From there, it began to expand — and it is still expanding today. This theory of the origin of our universe is known as the hot Big Bang theory. But while we have an idea of what the early universe looked like, cosmologists still struggle to describe the very first stages of the Big Bang. According to Einstein’s theory of relativity, the energy density and the curvature of spacetime at such a bang would become infinite — a state that is mathematically intractable.

As such, the “no-boundary” and “tunneling” proposals were presented as alternatives. They suggest that the universe in its original state was finite, and arose from nothing through quantum tunneling. This is called the smooth universe theory of origin. The AEI team led by Jean-Luc Lehners, however, calculated the implications of these two proposals and realized that these were not true alternatives to the Big Bang.

Image Credit: J.-L. Lehners/Max Planck Institute for Gravitational Physics

Still Mysterious

Lehners and his colleagues applied Heisenberg’s uncertainty principle as a mathematical model to check the “no-boundary” and “tunneling” proposals. They realized that these models showed that both smooth and irregular, more crumpled universes could have theoretically tunneled out of nothing. Furthermore, the more irregular and crumpled the universes are, the more likely such tunneling becomes. A smooth beginning may have actually prevented the growth of the universe as we know it to be.

“Hence the ‘no-boundary proposal’ does not imply a large universe like the one we live in, but rather tiny curved universes that would collapse immediately,” Lehners said in a press release. As such, despite the irregularities attached to it, the Big Bang theory isn’t something that can easily be dismissed.

Still, Lehners’ team hasn’t given up on trying to make sense of it. For now, they will try to figure out just what mechanism allowed for these large quantum fluctuations, and what kept them in check under rather extreme conditions. But extreme or not, it was those very conditions that allowed the universe to unfold into what it is today. It seems that the more we try to figure out the origins of the universe itself, the more baffling and puzzling it becomes — even pushing the limits of what’s currently scientifically possible. For now, though, it seems that the Big Bang theory is here to stay.

The post Researchers Say There’s No Alternative Theory to the Big Bang — Yet appeared first on Futurism.

Collapsing stars are a rare thing to witness. And when astronomers are able to catch a star in the final phase of its evolution, it is a veritable feast for the senses. Ordinarily, this process consists of a star undergoing gravitational collapse after it has exhausted all of its fuel, and shedding its outer layers in a massive explosion (aka. a supernova). However, sometimes, stars can form black holes without the preceding massive explosion.

This process, what might be described as “going out not with a bang, but with a whimper,” is what a team of astronomers witnessed when observing N6946-BH1 — a star located in the Fireworks Galaxy (NGC 6946). Originally, astronomers thought that this star would explode because of its significant mass. But instead, the star simply fizzled out, leaving behind a black hole.

The Fireworks Galaxy, a spiral galaxy located 22 million light-years from Earth, is so named because supernovae are known to be a frequent occurrence there. In fact, earlier this month, an amateur astronomer spotted what is now designated as SN 2017eaw. As such, three astronomers from Ohio State University (who are co-authors on the study) were expecting N6946-BH1 to go supernova when it began to brighten in 2009.

Visible-light and near-infrared photos from NASA’s Hubble Space Telescope showing the giant star N6946-BH1 before and after it vanished out of sight by imploding to form a black hole. Image Credit: NASA/ESA/C. Kochanek (OSU)

However, by 2015, it appeared to have winked out. As such, the team went looking for the remnants of it with the help of colleagues from Ohio State University and the University of Oklahoma. Using the combined power of the Large Binocular Telescope (LBT) and NASA’s Hubble and Spitzer space telescopes, they realized that the star had completely disappeared from sight.

The details of their research appeared in a study titled “The Search for Failed Supernovae with the Large Binocular Telescope: Confirmation of a Disappearing Star,” which recently appeared in the Monthly Notices of the Royal Astronomical Society. Among the many galaxies they were watching for supernovae, they had their sights set on the Fireworks Galaxy to see what had become of N6946-BH1.

After it experienced a weak optical outburst in 2009, they had anticipated that this red supergiant would go supernova – which seemed logical given that it was 25 times as massive as our Sun. After winking out in 2015, they had expected to find that the star had merely dimmed, or that it had cast off a dusty shell of material that was obscuring its light from view.

Their efforts included an LBT survey for failed supernovae, which they combined with infrared spectra obtained by the Spitzer Space Telescope and optical data from Hubble. However, all the surveys turned up negative, which led them to only one possible conclusion: that N6946-BH1 must have failed to go supernova and instead went straight to forming a black hole.

Simulated view of a black hole. Image Credit: Bronzwaer/Davelaar/Moscibrodzka/Falcke, Radboud University

As Scott Adams — a former Ohio State student who is now an astrophysicist at the Cahill Center for Astrophysics (and the lead author of the study) — explained in a NASA press release:

N6946-BH1 is the only likely failed supernova that we found in the first seven years of our survey. During this period, six normal supernovae have occurred within the galaxies we’ve been monitoring, suggesting that 10 to 30 percent of massive stars die as failed supernovae. This is just the fraction that would explain the very problem that motivated us to start the survey, that is, that there are fewer observed supernovae than should be occurring if all massive stars die that way.
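As a back-of-envelope check on that fraction, here is a minimal sketch of the counting argument, assuming one failed event against six normal supernovae over the same seven years (the paper’s quoted 10 to 30 percent range also folds in survey completeness, so treat this as illustration only):

```python
from scipy.stats import beta

# One failed supernova (N6946-BH1) versus six normal ones.
failed, normal = 1, 6
n = failed + normal

best_estimate = failed / n  # ~0.14, i.e. roughly one star in seven

# Central ~68% Clopper-Pearson (exact binomial) interval on the fraction.
lower = beta.ppf(0.16, failed, n - failed + 1)
upper = beta.ppf(0.84, failed + 1, n - failed)
print(f"best estimate {best_estimate:.0%}, rough interval {lower:.0%}-{upper:.0%}")
```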

A major implication of this study is the way it could shed new light on the formation of very massive black holes. For some time now, astronomers have believed that in order to form a black hole at the end of its life cycle, a star would have to be massive enough to cause a supernova. But as the team observed, it doesn’t make sense that a star would blow off its outer layers and still have enough mass left over to form a massive black hole.

As Christopher Kochanek — a professor of astronomy at The Ohio State University, the Ohio Eminent Scholar in Observational Cosmology and a co-author of the team’s study — explained:

The typical view is that a star can form a black hole only after it goes supernova. If a star can fall short of a supernova and still make a black hole, that would help to explain why we don’t see supernovae from the most massive stars.

This information is also important as far as the study of gravitational waves goes. In February of 2016, scientists at the Laser Interferometer Gravitational-wave Observatory (LIGO) announced the first detection of this strange phenomenon, which was apparently generated by merging massive black holes. If massive black holes do in fact form from failed supernovae, it would help astronomers to track down the sources of gravitational waves more easily.

Be sure to check out this video of the observations made of this failed SN and black hole:

The post Stars Don’t Always Go Supernova When They Die appeared first on Futurism.

The chemical building blocks of organic life are abundant throughout the cosmos, which Neil deGrasse Tyson believes could mean we aren’t the only forms of life in the universe.

The post Neil deGrasse Tyson: “We’re Not Likely Alone in the Universe” appeared first on Futurism.

Stellar Doppelganger

We have long known that the dinosaurs were killed by a catastrophic comet impact with the Earth’s surface, but what if there was some foul play afoot? Astronomers have discovered that our sun may have been born with a twin, and an evil one, at that. One hypothesis states that every 27 million years, the evil twin, aptly dubbed Nemesis, returns to wreak havoc on the solar system. They believe that the star lobs a few comets in our direction as it makes its way through the outer limits of the solar system.

Research has led scientists to believe that most stars are born with at least one sibling. According to UC Berkeley astronomer Steven Stahler, “We ran a series of statistical models to see if we could account for the relative populations of young single stars and binaries of all separations in the Perseus molecular cloud, and the only model that could reproduce the data was one in which all stars form initially as wide binaries.”

Image Credit: NASA, ESA and J. Muzerolle, STScI

Making Stars

These findings could have implications for our understanding of how stars are formed. Looking into how they maintain or break those familial relationships will give us a deeper understanding of how our Universe came to be what it is today. Stahler said, “Our work is a step forward in understanding both how binaries form and also the role that binaries play in early stellar evolution.” Stahler also pointed out that this could even lead to a better understanding of how galaxies are formed.

These findings were made possible by the VLA Nascent Disk and Multiplicity (VANDAM) survey, which took a census of a group of baby stars merely a half-million years old. The findings have been accepted for publication in the Monthly Notices of the Royal Astronomical Society and can be read pre-publication at arXiv.org.

The post Astronomers Discover That Our Sun Likely Had an “Evil” Twin That Killed the Dinosaurs appeared first on Futurism.

A Map of the Milky Way

Humanity has no shortage of irons in the fire when it comes to space exploration. Right now, several agencies are fine-tuning plans to send humans to Mars, an almost 20-year-old spacecraft is gearing up to crash land on a distant planet, and we’re remarkably close to ushering in the era of space tourism. Yet, one mission stands above all others in terms of its potential to advance the human race: Gaia.

Launched in 2013, the Gaia spacecraft is currently halfway through a five-year mission to map the Milky Way, and Brian Koberlein, astrophysicist and Senior Lecturer at Rochester Institute of Technology, expects the European Space Agency (ESA)-operated space observatory to lead to huge scientific breakthroughs.

“In the next year, I would say the Gaia spacecraft [will likely give way to humanity’s greatest advancements],” he tells Futurism.

Gaia is positioned roughly 1.5 million kilometers (932,056 miles) beyond the Earth in the direction away from the Sun. The craft measures only 3.5 meters (11.4 feet) across, though its sunshade increases that diameter to 10 meters (32.8 feet). It is equipped with two telescopes that incorporate an array of 10 mirrors of various shapes and sizes, and with a total of nearly 1 billion pixels, its focal plane is the largest ever flown in space.

More so than its equipment, the purpose of Gaia is what makes it special: to gather data that can be sent back to Earth to generate the world’s largest, most detailed three-dimensional map of our galaxy. Over the course of its mission, the craft will survey 1 percent of the Milky Way’s stars. That might not seem like a lot until you consider how many stars are located in the Milky Way. “[Gaia] will map out a billion stars in our galaxy,” says Koberlein, quantifying the ambitious project’s scope.
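Gaia’s map rests on stellar parallax: the tiny apparent shift of a star against the background as the spacecraft orbits the Sun, which converts directly into a distance. A minimal sketch of that conversion (the star and its parallax below are hypothetical, and Gaia’s real pipeline is vastly more elaborate):

```python
# Distance from parallax: d [parsecs] = 1 / p [arcseconds].
LY_PER_PC = 3.26156  # light-years per parsec

def parallax_to_distance_pc(parallax_mas: float) -> float:
    """Distance in parsecs from a parallax angle in milliarcseconds."""
    return 1.0 / (parallax_mas / 1000.0)

# Hypothetical star with a 10-milliarcsecond parallax:
d_pc = parallax_to_distance_pc(10.0)
print(f"{d_pc:.0f} pc, or about {d_pc * LY_PER_PC:.0f} light-years")  # 100 pc, ~326 ly
```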

New Discoveries on the Horizon

The most obvious benefit of Gaia’s observations will be a dramatic increase in our knowledge of the objects being targeted: stars. Not only will we learn more about stars that resemble our own, we’ll also learn new details about brown dwarfs, distant supernovae, and massive quasars.

Gaia won’t just offer a snapshot of these stars, either — it will observe each one 70 times over the course of the mission. By observing each star repeatedly, the craft will be able to record changes, such as increases in brightness or adjustments in position. This will “give us a much better understanding of stellar dynamics in the Milky Way,” explains Koberlein.

A Tour of the Solar Neighborhood [INFOGRAPHIC]

While observing these stars, Gaia is sure to directly discover other bodies in the galaxy as well, such as never-before-seen asteroids or icy bodies. By providing us with a glimpse into the “blind spot” between the Earth and the Sun, Gaia will also improve our understanding of our own celestial backyard.

Gaia has already led to several new discoveries, and research should really thrive once the final catalog is published in the early 2020s. As explained on the ESA website, “Huge databases of information will be compiled from the Gaia data, allowing astronomers to trawl the archives looking for similar celestial objects, or events and other correlations that might just provide the clue necessary to solve their particular, seemingly intractable, scientific puzzle.”

Koberlein points out one of the most exciting potential uses for these databases: “The data can also be used for…finding exoplanets.” The discovery of new exoplanets would mean new leads in the search for alien lifeforms or even Earth 2.0.


This interview has been slightly edited for clarity and brevity.

The post An Ambitious Spacecraft Is Set to Advance Humanity Further Than We Dreamed appeared first on Futurism.

Nanoparticles Under The Knife

A team of Carnegie Mellon University chemists led by Professor Rongchao Jin has conducted the first site-specific surgery on a nanoparticle. This procedure allows researchers to tailor nanoparticles with absolute precision, creating them for almost any purpose — meaning it will significantly advance nanochemistry.

The surgical technique itself will allow scientists to enhance whichever functional properties of nanoparticles are most useful for a given application. For example, nanoparticles may have catalytic properties or exhibit photoluminescence. Using the technique, these qualities could be magnified for use in almost any field from medicine to manufacturing.

“Nanochemistry is a relatively new field, it’s only about 20 years old. We’ve been racing to catch up to fields like organic chemistry that are more than 100 years old,” Jin told Phys.org. “Organic chemists have been able to tailor the functional groups of molecules for quite some time, like tailoring penicillin for better medical functions, for example. We dreamed that we could do something similar in nanoscience. Developing atomically precise nanoparticles has allowed us to make this dream come true.”

Image Credit: Carnegie Mellon University

Precise Control

To make this “nano-surgery” happen, the team first had to create atomically precise nanoparticles that they could reproduce reliably. Then, they set out to identify each particle’s structure and the functionality and properties that went along with it. Ultimately, the team performed the surgery on gold nanoparticles composed of 23 gold atoms with staple-like structures forming a protective surface layer.

Removing two of those staples revealed the connection between the surface layer of gold nanoparticles and photoluminescence, and allowed the particle’s photoluminescence to be enhanced about tenfold. Among its many applications, photoluminescence is critical to cancer diagnosis and biological imaging, but the team is working to generalize the technique for use with other nanoparticles, too.

The post Scientists Complete First Successful “Surgery” on a Nanoparticle appeared first on Futurism.

Super Telescope

The construction of the world’s largest telescope has begun. At a ceremony at the European Southern Observatory’s (ESO) Paranal Observatory in Chile, officials gathered to celebrate the first stone of the European Extremely Large Telescope’s (E-ELT) long-awaited construction. Sophisticated telescope projects like the E-ELT take many years, so we can expect another similar ceremony sometime in 2021, when the E-ELT will see first light.

The E-ELT is the ESO’s flagship observatory. Its primary mirror will be a 39.3-meter (129 ft.) monstrosity that will observe in the visible, near-infrared, and mid-infrared spectra. The construction of the site began in 2014, but this ceremony marks the beginning of the construction of the main telescope and its dome. The ceremony also marks the connection of the telescope to the electricity grid.

The Chilean President, Michelle Bachelet Jeria, attended the ceremony. She was welcomed by the Director General of ESO Tim de Zeeuw, by ELT Programme Manager Roberto Tamai, and by other officials from the ESO. Staff from the La Silla Paranal Observatory, numerous engineers and technicians, and representatives from Chilean government and industry also attended the ceremony.

“With the symbolic start of this construction work, we are building more than a telescope here.” – President of the Republic of Chile, Michelle Bachelet Jeria

In her speech, the President spoke in favor of the E-ELT, and in support of science and cooperation. “With the symbolic start of this construction work, we are building more than a telescope here: it is one of the greatest expressions of scientific and technological capabilities and of the extraordinary potential of international cooperation.”

At the ceremony, a time capsule from ESO was sealed into place. The capsule is a hexagon-shaped, one-fifth-scale model of the E-ELT containing a poster made of photographs of current ESO staff and a copy of the book detailing the E-ELT’s science goals.

E-ELT

The first stone ceremony is definitely an important milestone for this Super Telescope, but it’s just one of the milestones reached by the E-ELT in the past two weeks.

The secondary mirror for the E-ELT has already been cast, and the ESO has announced that the contracts for the primary mirror have now been signed. The primary mirror segment blanks, all 798 of them, will be made by the German company SCHOTT. Once produced, they will be polished by the French company Safran Reosc, which will also mount and test the mirror segments.

“This has been an extraordinary two weeks!” – Tim de Zeeuw, European Southern Observatory’s Director General

Tim de Zeeuw, ESO’s Director General, is clearly excited about the progress being made on the E-ELT. At the contract signing, de Zeeuw said, “This has been an extraordinary two weeks! We saw the casting of the ELT’s secondary mirror and then, last Friday, we were privileged to have the President of Chile, Michelle Bachelet, attend the first stone ceremony of the ELT. And now two world-leading European companies are starting work on the telescope’s enormous main mirror, perhaps the biggest challenge of all.”

It’s taken an enormous amount of work to get to the construction stage of the world’s largest telescope. Scientist’s, engineers, and technicians have been working for years to get this far. But without the contribution of Chile, none of it would happen. Chile is the world’s astronomy capital, and they continue working with the ESO and other nations to drive scientific discovery forward.

The E-ELT has three broad-based science objectives. It will:

  • Probe Earth-like exoplanets for signs of life
  • Study the nature of dark energy and dark matter
  • Observe the Universe’s early stages to understand our origins and the origin of galaxies and solar systems

Along the way, it will no doubt raise new questions that we can’t even imagine yet.

The post The Long-Awaited Construction of the World’s Largest Telescope is Underway appeared first on Futurism.

This is the most distant black hole merger that we have been able to detect.

The post Scientists Detect Two Black Holes Merging 3 Billion Light-Years Away appeared first on Futurism.

When it comes to temperature, we know that there’s an absolute zero. But is there an absolute ‘hot’?

The post Can Something Get So Hot that It Can’t Get Any Hotter? appeared first on Futurism.

Matter and Antimatter

For decades, the Standard Model of particle physics has been the predominant means of explaining what the basic building blocks of matter are and how they interact. First proposed in the 1970s, the model claims that for every particle created, there is an anti-particle. As such, an enduring mystery posed by this model is why the Universe can exist if it is theoretically made up of equal parts of matter and antimatter.

This seeming disparity, and the charge-parity (CP) violation that could explain it, has been the subject of experiments for many years. But so far, there has been no definitive demonstration of this violation, nor of how so much matter can exist in the Universe without its counterpart. Thanks to new findings released by the international Tokai-to-Kamioka (T2K) collaboration, however, we may be one step closer to understanding why this disparity exists.

First observed in 1964, CP violation proposes that under certain conditions, the laws of charge-symmetry and parity-symmetry (aka. CP-symmetry) do not apply. These laws state that the physics governing a particle should be the same if it were interchanged with its antiparticle, while its spatial coordinates would be inverted. From this observation, one of the greatest cosmological mysteries emerged.

If the laws governing matter and antimatter are the same, then why is the Universe so matter-dominated? Alternately, if matter and antimatter are fundamentally different, then how does this accord with our notions of symmetry? Answering these questions is not only important as far as our predominant cosmological theories go; it is also intrinsic to understanding the weak interactions that govern particles.

Answering the Disparity

Established in June of 2011, the international T2K collaboration is the first experiment in the world dedicated to answering this mystery by studying neutrino and anti-neutrino oscillations. The experiment begins with high-intensity beams of muon neutrinos (or muon anti-neutrinos) being generated at the Japan Proton Accelerator Research Complex (J-PARC), which are then fired towards the Super-Kamiokande detector 295 km away.

This detector is currently one of the world’s largest and most sophisticated, dedicated to the detection and study of solar and atmospheric neutrinos. As neutrinos travel between the two facilities, they change “flavor” – going from muon neutrinos or anti-neutrinos to electron neutrinos or anti-neutrinos. In monitoring these neutrino and anti-neutrino beams, the experiment watches for different rates of oscillation.
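In the simplified two-flavor picture, the probability that a neutrino produced with flavor α is detected with flavor β after traveling a distance L at energy E takes the standard textbook form (T2K’s actual analysis is a full three-flavor fit; this is shown only to make “rates of oscillation” concrete):

\[
P_{\nu_\alpha \to \nu_\beta} \;=\; \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^{2}\,[\mathrm{eV}^{2}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)
\]

Here θ is the mixing angle and Δm² the difference of the squared masses. CP violation would appear as a difference between this probability for neutrinos and the corresponding probability for anti-neutrinos.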

A difference in oscillation rates would show that there is an imbalance between particles and antiparticles, thus providing the first definitive evidence of CP violation. It would also indicate that there is physics beyond the Standard Model that scientists have yet to probe. This past April, the first data set produced by T2K was released, and it provided some telling results.

As Mark Hartz, a T2K collaborator and the Kavli IPMU Project Assistant Professor, said in a recent press release:

“While the data sets are still too small to make a conclusive statement, we have seen a weak preference for large CP violation and we are excited to continue to collect data and make a more sensitive search for CP violation.”

These results, which were recently published in Physical Review Letters, include all data runs from January 2010 to May 2016. In total, this data comprised 7.482 × 10²⁰ protons (in neutrino mode), which yielded 32 electron neutrino and 135 muon neutrino events, and 7.471 × 10²⁰ protons (in antineutrino mode), which yielded 4 electron anti-neutrino and 66 muon neutrino events.

In other words, the first batch of data has provided some evidence for CP violation at the 90% confidence level. But this is just the beginning, and the experiment is expected to run for another ten years before wrapping up. “If we are lucky and the CP violation effect is large, we may expect 3 sigma evidence, or about 99.7% confidence level, for CP violation by 2026,” said Hartz.

If the experiment proves successful, physicists may finally be able to answer how it is that the early Universe didn’t annihilate itself. It is also likely to help reveal aspects of the Universe that particle physicists are eager to explore, for it is here that the answers to the deepest secrets of the Universe, like how all of its fundamental forces fit together, are likely to be found.

Further Reading: Kavli IPMU, Physical Review Letters

The post New Findings Lead Us Closer to Answering One of the Biggest Questions About Matter appeared first on Futurism.

The Holographic Universe

For decades now, scientists have been investigating the possibility that our Universe is, or once was, a giant hologram, where the laws of physics require just two dimensions, but everything appears three-dimensional to us.

It sounds far-fetched, but if true, it would actually solve some pretty hefty questions in physics, and recent research has dished up some tantalising evidence that suggests the hologram principle works just as well as the standard Big Bang model in explaining the early Universe.

First proposed in the 1990s, the hologram principle is an attempt to unify the two major arms of thought in modern physics – quantum mechanics and general relativity.

As the Life Noggin video below explains, the idea is that all the information about a volume of space can be thought of as encoded on a two-dimensional boundary to that region.

Think of a black hole – some physicists think that information about all the stuff that falls into a black hole is actually retained (or encoded) on its boundary, the event horizon.
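The standard quantitative hint behind that picture is the Bekenstein–Hawking entropy, a textbook result (included here for context) in which a black hole’s information content grows with the area A of its event horizon rather than with the volume it encloses:

\[
S_{\mathrm{BH}} \;=\; \frac{k_B\,c^{3}\,A}{4\,G\,\hbar}
\]

A maximum information content proportional to area rather than volume is precisely the behavior the holographic principle generalizes to the Universe at large.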

Answering the Biggest Questions of Physics

If we apply this principle to the rest of the Universe, all of the information that makes you, me, and all other matter we encounter, is encoded on a two-dimensional boundary and expressed in three dimensions.

While this hypothesis is obviously incredibly difficult to prove, over the past 20 years, more than 10,000 papers have been published on the idea.

One study earlier this year even claimed to have found what could be the first direct observational evidence that Universe once was a hologram, based on the cosmic microwave background – the ‘afterglow’ of the Big Bang.

So what hints is this ‘baby picture’ of the Universe giving off?

I’ll let the Life Noggin episode below explain that to you, but let’s just say this is one out-there hypothesis that we shouldn’t write off just yet.

The post Here’s What Scientists Mean When They Say the Universe Could Be a Giant Hologram appeared first on Futurism.

Modeling a Complex System

Quantum entanglement, which Einstein referred to as “spooky action at a distance,” is one of the many quantum phenomena that physicists have been exploring for potential applications in a wide range of fields. Entanglement is at the heart of quantum computing technologies, for example, and is the underlying principle behind quantum teleportation and quantum cryptography.

Researchers have yet to explore these quantum systems fully. Realizing the need for better tools to do so, physicists from the Joint Quantum Institute (JQI) and the University of Maryland’s Condensed Matter Theory Center (CMTC) have turned to artificial neural networks, which are constructed to function and pass information like neurons in the brain.

“If we want to numerically tackle some quantum problem, we first need to find an efficient representation,” JQI researcher Dongling Deng said in a press release. He got the idea after hearing how DeepMind’s Go-playing artificial intelligence (AI) AlphaGo famously defeated human professional players in 2016. Machine learning, which is behind the achievements of current AI systems, seemed like a plausible tool.

At the same time, a paper proposed the idea of using neural networks to represent quantum states. “We immediately recognized that this should be a very important paper,” Deng said, “so we put all our energy and time into studying the problem more.” They published their study in the journal Physical Review X.

Understanding Quantum States

In order to use neural networks to represent quantum systems, Deng and his colleagues studied those that rely on two distinct groups of neurons. One group, called “visible neurons,” was used to represent real quantum particles, similar to atoms in an optical lattice or ions in a chain. The second group of neurons was used to account for the interactions between particles. These so-called “hidden neurons” linked up with the visible neurons. As long as the number of connections was kept relatively small, the description of the neural network stayed simple.
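The visible/hidden architecture described above closely resembles a restricted Boltzmann machine, so a minimal sketch along those lines may help; the specific functional form below is an assumption for illustration, not the paper’s actual code:

```python
import numpy as np

# Neural-network quantum state: "visible" units stand for spins/particles,
# "hidden" units encode the correlations between them.
rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 12

a = rng.normal(scale=0.1, size=n_visible)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)               # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))  # couplings

def amplitude(spins: np.ndarray) -> float:
    """Unnormalized wavefunction amplitude psi(s) for spins in {-1, +1}."""
    theta = b + W @ spins
    return np.exp(a @ spins) * np.prod(2.0 * np.cosh(theta))

s = rng.choice([-1.0, 1.0], size=n_visible)  # one spin configuration
print(amplitude(s))
```

Keeping the hidden layer small is what keeps the description compact: the parameter count grows only with the product of the two layer sizes, not with the exponentially large number of spin configurations.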

AI Forecast for 2017

“This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” JQI fellow Sankar Das Sarma said in the press release. The models were able to produce compact representations of a number of interesting quantum states, including those with surprising amounts of entanglement. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions,” Das Sarma added.

Of course, the neural networks had their limits, and they weren’t able to provide an efficient universal representation of every quantum state, Deng said. Still, the research could significantly improve how we understand quantum entanglement. As the team wrote in their paper: “Our results uncover the unparalleled power of artificial neural networks in representing quantum many-body states regardless of how much entanglement they possess, which paves a novel way to bridge computer-science-based machine-learning techniques to outstanding quantum condensed-matter physics problems.”

The post Artificial Neural Networks Are Revealing The Quantum World appeared first on Futurism.


Studying The Cosmic Web

University of Zurich (UZH) researchers have simulated the formation of our Universe, using a large supercomputer to create the largest virtual universe ever made. The team turned 2 trillion digital particles into around 25 billion virtual galaxies that together make up this virtual universe and galaxy catalogue. The catalogue will be used to calibrate the experiments on the Euclid satellite, which is set to launch in 2020 to explore the nature of dark energy and dark matter.
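For a sense of what such a simulation does at its core, here is a toy sketch of a single gravitational N-body time step (the UZH run evolved some 2 trillion particles with far more sophisticated solvers; this direct-summation version, with made-up code units, is purely illustrative):

```python
import numpy as np

G = 1.0           # gravitational constant in arbitrary code units
SOFTENING = 0.01  # softening length to avoid singular forces

def accelerations(pos: np.ndarray, mass: np.ndarray) -> np.ndarray:
    """Pairwise Newtonian accelerations for particle positions of shape (N, 3)."""
    d = pos[None, :, :] - pos[:, None, :]   # separations d[i, j] = r_j - r_i
    r2 = (d ** 2).sum(-1) + SOFTENING ** 2
    np.fill_diagonal(r2, np.inf)            # no self-force
    return G * (d * (mass[None, :, None] / r2[..., None] ** 1.5)).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick step; advances positions and velocities by dt."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    return pos, vel
```

Structure formation is essentially this step repeated over cosmic time, with the expansion of space folded in and with clever approximations so that trillions of particles do not require trillions-squared force calculations.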

“The nature of dark energy remains one of the main unsolved puzzles in modern science,” UZH professor of computational astrophysics Romain Teyssier said in a press release. Euclid will not be able to view dark matter or dark energy directly; the satellite will instead measure the tiny distortions of light of distant galaxies by invisible mass distributed in the foreground—dark matter. “That is comparable to the distortion of light by a somewhat uneven glass pane,” UZH Institute for Computational Science researcher Joachim Stadel said in the release.

Image Credit: Joachim Stadel/UZH

Going Back In Time

The precise calculations that allowed the team to create the virtual universe have also allowed them to simulate small concentrations of matter, called dark matter halos, which may be the loci in which galaxies like ours form. Euclid’s mission is to explore the dark side of the Universe, so part of the challenge of the virtual universe project was to accurately model galaxies only one-tenth the size of the Milky Way within a massive volume the size of the observable Universe. The behavior observed in the virtual model will help Euclid know what to look for on its journey.

“Euclid will perform a tomographic map of our Universe, tracing back in time more than 10 billion years of evolution in the cosmos,” Stadel said in the release. Researchers hope to learn more about dark energy from the Euclid data, but they are also eager to discover new physics beyond the standard model, such as a new type of particle or a modified version of general relativity. Each bit of evidence along this journey may well take us one step closer to understanding the origins of our galaxy, and perhaps the entire Universe.

The post The Largest Virtual Universe Ever Created Was Just Simulated in a Supercomputer appeared first on Futurism.

Comet “Wow”?

St. Petersburg College astronomer Antonio Paris believes that a comet called 266P/Christensen, uncatalogued at the time that the “Wow!” signal was first discovered, may actually have been the signal’s source. However, not all astronomers agree — including Jerry Ehman, the astronomer who discovered the signal in 1977. “We do not believe the two-comets theory can explain the Wow! signal,” Ehman told Live Science on June 12.

Ehman reviewed Paris’s work with Robert Dixon, the director of the radio observatory at The Ohio State University. The team agreed on two major problems with the conclusions that Paris drew. First, the signal didn’t repeat. Second, it appeared for a very short time. Ehman also noted that the configuration of the Big Ear telescope should have allowed for a repeat if the source was a comet; instead, there was only one, as if the source was abruptly cut off. The telescope had two “feed horns,” and each offered a slightly different field of view.

“We should have seen the source come through twice in about 3 minutes: one response lasting 72 seconds and a second response for 72 seconds following within about a minute and a half,” Ehman told Live Science. “We didn’t see the second one.”

The frequency of transmission is also problematic. SETI senior astronomer Seth Shostak argues that comets wouldn’t generate sufficient hydrogen to create the signal. “I don’t think anyone ever found such emission from comets,” Shostak told Live Science.

So, does this lend credence to the theory that the “Wow!” signal was sent by aliens?

Image Credit: Republica/Pixabay

Wait, Aliens?! Probably Not

No. It remains extremely unlikely that the signal was sent by aliens, and Ehman is certainly not making that argument. There are a variety of sources that might cause a signal to disappear suddenly, such as fast radio bursts (FRBs), which last only milliseconds. Their origins are debated, and they seem to produce irregular signals. If the Big Ear caught only a fragment of that sort of strange signal, Ehman speculated, it might look like “Wow!” did. Another possibility is a problem with the Big Ear itself, which Paris acknowledged to Live Science.

In any event, an absence of evidence for a comet, for example, isn’t evidence in and of itself for aliens. We may have to continue our search for alien life as well as evidence that can resolve any remaining elements of the mystery of the “Wow!” signal.

The post Astronomer Who Discovered The “Wow!” Signal Says We Still Don’t Understand Its Origin appeared first on Futurism.

Seven is, in many cultures, a lucky number — and NASA is hoping it could prove to be the same for them. Tonight, June 13, between 9:04 and 9:19 pm EDT, the space agency will attempt for the seventh time to launch a sounding rocket from its Wallops Flight Facility in Virginia.

Five minutes after takeoff, the rocket will jettison 10 canisters, each about the size of a soft drink can, which will spray minute amounts of barium, strontium, and cupric oxide into the atmosphere. This cloud-seeding process will create red and blue-green artificial clouds.

What’s it for? Well for one, to create colorful clouds to adorn to night sky — but also as a way to study how particles move around the ionosphere. Located about 74 km (46 miles) above the surface and extending into space nearly 1,000 km (621 miles), the ionosphere is the part of the Earth’s upper atmosphere where charged particles from the sun turn atoms into ions.

“After releasing this material in space, researchers then visually observe the subsequent movement of the vapor as it traces the motions of background environment,” NASA wrote in a post on their official website. “This technique is analogous to that of injecting a small, harmless dye into a river or stream, to study its currents, eddies, and other motions.”

If you want to observe this perfectly harmless experiment, it will be live-streamed via the link below.

Offline, in real life, the launch may actually be visible along the eastern seaboard of the U.S. from New York to North Carolina, and as far inland as Charlottesville, Virginia — assuming there are clear skies. Otherwise, NASA might just have to cancel — if lucky number seven doesn’t prove lucky after all.

The post NASA is Making Glowing Rainbow Clouds Tonight — Here’s How to Watch appeared first on Futurism.

Theoretical physicist Geoffrey West talks about the remarkable properties of life and explains how everything is actually based on one algorithm.

The post Everything in This World Is Governed By A Single Mathematical Principle appeared first on Futurism.

A Quantum Future

In a rather remarkable demonstration, physicists from the University of Science and Technology of China and the Nanjing University of Posts and Telecommunications have developed a way to use quantum memory for quantum secure direct communication (QSDC). They published the results of their experiment in the journal Physical Review Letters.

Whenever one encounters the word quantum, it’s not uncommon to feel a bit unsure of what it means. Since quantum refers to the smallest scales of matter — usually particles — the concept shouldn’t be too difficult to explain. Quantum physics, in essence, deals with “the physics of the small.” Other theories or applications describe particular behaviors of particles. Quantum communication, the exchange of information using quantum particles, is one such application.

QSDC is a secure form of quantum communication, commonly known as quantum cryptography.

“Quantum communication provides an absolute security advantage, and it has been widely developed over the past 30 years,” the researchers wrote in the study’s abstract, adding that “[QSDC] promotes high security and instantaneousness in communication through directly transmitting messages over a quantum channel.” Usually, QSDC protocols rely on fiber delay lines to transmit information, which have their limitations. The use of quantum memory, however, may be able to break through those limitations.

Long Distance Quantum Communication

As quantum communication is more secure than its classical counterpart, physicists have been working on ways to extend its usual reach. This requires quantum memory, which would allow them to effectively control information transfer in the quantum networks of the future. One method for quantum memory relies on using entangled photons. Entanglement is a quantum state that allows particles to remain linked even when separated by huge distances. In the case of quantum memory, the particles would be stored to establish entanglement between separated memories.
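For context, entanglement-based schemes of this kind are usually described in terms of the four Bell states, the maximally entangled two-qubit states that the memories store and that encoding operations switch between (standard notation; the paper’s exact encoding may differ):

\[
|\Phi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\big(|00\rangle \pm |11\rangle\big),
\qquad
|\Psi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\big(|01\rangle \pm |10\rangle\big)
\]

Because an eavesdropper cannot copy or measure these states without disturbing them, information encoded this way is what gives quantum communication its security advantage.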

Researchers have demonstrated the necessary steps in a QSDC protocol: generating entanglement, having a secure channel, and the ability to distribute, store, and encode the entangled photons. To bypass the usual difficulty in decoding entangled photons, the researchers opted for an alternative method that was easier to implement.

While they were able to demonstrate QSDC effectively using quantum memory, the researchers hope to extend its distance further: perhaps up to 100 km (62 miles) if not more. Quantum teleportation and entanglement have already been shown to be capable of bridging such distances. This would be an important step to realize the future of long-distance, satellite-based and global-scale QSDC for a more secure transfer of information around the world.

The post Physicists Develop Method to Send Secure Messages Using Quantum Memory appeared first on Futurism.

Seeing Like Never Before

Neuroscientists have used a classic branch of maths in a totally new way to peer into the structure of our brains. What they’ve discovered is that the brain is full of multi-dimensional geometrical structures operating in as many as 11 dimensions.

The Evolution of Brain-Computer Interfaces [INFOGRAPHIC]

We’re used to thinking of the world from a 3-D perspective, so this may sound a bit tricky, but the results of this new study could be the next major step in understanding the fabric of the human brain – the most complex structure we know of.

This latest brain model was produced by a team of researchers from the Blue Brain Project, a Swiss research initiative devoted to building a supercomputer-powered reconstruction of the human brain.

The team used algebraic topology, a branch of mathematics used to describe the properties of objects and spaces regardless of how they change shape. They found that groups of neurons connect into ‘cliques’, and that the number of neurons in a clique determines its size as a high-dimensional geometric object.
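To make the clique-to-dimension mapping concrete, here is a toy sketch on a random graph: a clique of n all-to-all connected neurons is read as an (n − 1)-dimensional simplex, so an 8-neuron clique is a 7-dimensional object. (The study worked with directed cliques in a reconstructed cortical microcircuit; the undirected random graph and all parameters below are illustrative stand-ins.)

```python
import networkx as nx
from collections import Counter

g = nx.erdos_renyi_graph(n=30, p=0.3, seed=1)  # stand-in "connectome"

# Count cliques by the dimension of the simplex they form.
dims = Counter(len(clique) - 1 for clique in nx.enumerate_all_cliques(g))
for dim in sorted(dims):
    print(f"dimension {dim}: {dims[dim]} cliques")
```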

“We found a world that we had never imagined,” says lead researcher, neuroscientist Henry Markram from the EPFL institute in Switzerland. “There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to 11 dimensions.”

A Lot of Power to Work With

Human brains are estimated to have a staggering 86 billion neurons, with multiple connections from each cell webbing in every possible direction, forming the vast cellular network that somehow makes us capable of thought and consciousness.

With such a huge number of connections to work with, it’s no wonder we still don’t have a thorough understanding of how the brain’s neural network operates. But the new mathematical framework built by the team takes us one step closer to one day having a digital brain model.

To perform the mathematical tests, the team used a detailed model of the neocortex that the Blue Brain Project team published back in 2015. The neocortex is thought to be the most recently evolved part of our brains, and the one involved in some of our higher-order functions like cognition and sensory perception.

After developing their mathematical framework and testing it on some virtual stimuli, the team also confirmed their results on real brain tissue in rats.

According to the researchers, algebraic topology provides mathematical tools for discerning details of the neural network both in a close-up view at the level of individual neurons, and a grander scale of the brain structure as a whole.

By connecting these two levels, the researchers could discern high-dimensional geometric structures in the brain, formed by collections of tightly connected neurons (cliques) and the empty spaces (cavities) between them.

“We found a remarkably high number and variety of high-dimensional directed cliques and cavities, which had not been seen before in neural networks, either biological or artificial,” the team writes in the study.

“Algebraic topology is like a telescope and microscope at the same time,” says one of the team, mathematician Kathryn Hess from EPFL.

“It can zoom into networks to find hidden structures, the trees in the forest, and see the empty spaces, the clearings, all at the same time.”

Those clearings or cavities seem to be critically important for brain function. When researchers gave their virtual brain tissue a stimulus, they saw that neurons were reacting to it in a highly organised manner.

“It is as if the brain reacts to a stimulus by building [and] then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc,” says one of the team, mathematician Ran Levi from Aberdeen University in Scotland.

“The progression of activity through the brain resembles a multi-dimensional sandcastle that materialises out of the sand and then disintegrates.”

These findings provide a tantalising new picture of how the brain processes information, but the researchers point out that it’s not yet clear what makes the cliques and cavities form in their highly specific ways.

And more work will be needed to determine how the complexity of these multi-dimensional geometric shapes formed by our neurons correlates with the complexity of various cognitive tasks.

But this is definitely not the last we’ll be hearing of insights that algebraic topology can give us on this most mysterious of human organs – the brain.

The study was published in Frontiers in Computational Neuroscience.

The post Scientists Discover That Our Brains Can Process the World in 11 Dimensions appeared first on Futurism.

Ultrabright Galaxies

The Hubble Space Telescope has captured images of ultrabright galaxies that are 10 to 100 trillion times brighter than the sun — 10,000 times more luminous than our own Milky Way. According to NASA, each of these galaxies forms around 10,000 new stars annually, in contrast to our galaxy, which births fewer than 10 each year. The captivating images reveal arcs and rings of light as well as other odd shapes, the artifacts of gravitational lensing, a cosmic effect that is in part responsible for the telescope’s ability to see these galaxies.

The formation of stars (as well as their demise) generates large amounts of dust and gas, which in turn tend to envelop galaxies in layers of light of different wavelengths. This makes them tougher to spot in most cases, but the Hubble can penetrate these layers with infrared light. Its observations were further enhanced and clarified by gravitational lensing; the magnification and warping of the brilliant light of these distant galaxies by closer galaxies in the foreground functioning like lenses would. First identified by the European Space Agency’s Planck satellite, the images of these galaxies produced by the Hubble are the best to date, although we can expect far better images in the future from larger telescopes like the James Webb.

Image Credit: NASA, ESA, and J. Lowenthal (Smith College)

Our Mysterious Universe

The research team indicates that there are only a few dozen of these ultrabright galaxies in the Universe, situated in regions of space that are unusually dense. It seems that their locations somehow triggered this rapid-fire star formation in the first ages of the Universe. For these reasons, these bright spots could offer clues about how these galaxies formed eons ago — and by comparison, how our own, more average galaxy formed as well.

The burning question for researchers to answer remains: what is powering the explosive rate of star birth? The superluminous galaxies could be more distant, brighter relatives of ultra-luminous infrared galaxies (ULIRGs), but computer simulations of galaxy birth and growth indicate that the timing appears wrong for major mergers such as those seen in these galaxies. Another theory is that gas, the fuel of star birth, is flooding these distant galaxies.

“The sky is covered with all kinds of galaxies, including those that shine in far-infrared light,”  lead researcher James Lowenthal of Smith College said in a statement to NASA. “What we’re seeing here is the tip of the iceberg: the very brightest of all.”

The post Hubble Uses Cosmic Magnification to Capture Images of Ultrabright Galaxies appeared first on Futurism.

A New Kind of Carbon

An international team comprising scientists from the Carnegie Institution for Science and Yanshan University has developed an incredible new form of carbon. It is extremely light, ultra-strong, elastic, and even electrically conductive. This unusual combination of qualities makes this material versatile for a number of applications.

Super Materials of Tomorrow [INFOGRAPHIC]

Carbon is unique in the sense that its electron configuration allows it to self-bond in a number of different ways, which means it is capable of taking many different forms.

The scientists created this interesting type of carbon through an equally interesting method: placing glassy carbon — a non-graphitizing and structurally disordered form of carbon — under about 250,000 times normal atmospheric pressure. After heating it to 982.2 degrees Celsius (1,800 degrees Fahrenheit), the researchers found the new form.

Before they finally developed the carbon described here, the researchers experimented with different temperatures and pressures. However, only under these specific conditions did the carbon take this form, retain its structure, and avoid crystallizing.

A Very Desirable Form

So, what can we do with carbon that is rubbery, rock-hard, conductive, and still very light? As Zhisheng Zhao, a professor at Yanshan University, explains in a Carnegie press release, “Light materials with high strength and robust elasticity like this are very desirable for applications where weight savings are of the utmost importance, even more than material cost.”

A material that is both strong and elastic without a lot of weight could be extremely beneficial for technologies like military vehicles and armor. In particular, this type of material could dramatically improve military exoskeletons. It could also be extremely useful in providing resilient structure and electrical conductivity without weighing down a ship bound for Mars or elsewhere in the solar system.

Those are just a few of the many potential applications for this particular type of carbon, and it could even lead to the development of other kinds. According to Zhao, “We believe that this synthesis method could be honed to create other extraordinary forms of carbon and entirely different classes of materials.”

The post This New “Super” Carbon Is Tough, Elastic, and Electrically Conductive appeared first on Futurism.

LHC Pt. 2

The Large Hadron Collider (LHC) still has plenty of good work left to do in the field of particle acceleration, but scientists are already starting work on its replacement, set to be three times as big as the (already huge) original.

Development on the LHC’s replacement is expected to take decades, which is why experts are keen to get started now. The aim is to put together a machine that’s some seven times more powerful than the hardware we have today.

More than 500 scientists from across the world have been putting their heads together in Berlin, Germany this week to figure out what LHC 2.0 – or the Future Circular Collider (FCC) – might look like.

Where the LHC’s looping tunnel stretches for 27 kilometres (nearly 17 miles), the FCC’s circuit would measure between 80 and 100 kilometres (50 to 62 miles), and it’ll be located just down the road from its predecessor on the France-Switzerland border.

With double-strength magnets installed, the plan is to have the next machine able to smash particle beams together with energies of up to 100 tera-electron-volts (TeV) – about the same level of particle acceleration that you would get from 10 million lightning strikes.
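For a sense of scale, a quick back-of-envelope conversion of that collision energy into everyday units (illustrative arithmetic only; the lightning comparison above is the researchers’ own):

```python
EV_TO_JOULES = 1.602176634e-19  # one electron volt in joules

collision_energy_ev = 100e12    # 100 TeV
energy_j = collision_energy_ev * EV_TO_JOULES
print(f"{energy_j:.1e} J per collision")  # ~1.6e-5 J: tiny in absolute terms,
# but concentrated into a subatomic volume, hence the enormous energy density
```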

That extra energy would potentially enable researchers to see hidden particles even heavier than the Higgs boson, a particle underpinning our most fundamental understanding of the laws of nature that the LHC found evidence for in 2012.

Researchers say the next LHC could also help us develop more radiation-resistant materials for use in nuclear reactors and power networks.

Worth the Wait

The team looking into plans for the FCC is hoping to have a proposal on the table by next year, but it could be 20 years after that before it boots up for the first time. The LHC took nearly 30 years to put together, from planning to launch.

Ultimately the aim is to peer even deeper into the laws of the Universe.

“When you look into things like the movement of galaxies, we see that we can only understand and explain about five percent of what we observe,” Michael Benedikt, who is leading the FCC research, told Joe Dodgshun at Horizon.

“But with questions like the so-called problem of dark matter, which is linked to the fact that galaxies and stars are not moving as you would expect them to, the only explanation we have is that there must be matter we do not see which distorts the movement accordingly.”

Let’s not write off the Large Hadron Collider yet, though. The revolutionary particle smasher, which causes particle beam collisions strong enough to break matter into its constituent parts, is expected to keep the scientific discoveries coming until 2035 or so.

Earlier this year the LHC identified five new subatomic particles from just one analysis.

The LHC is also due for a substantial upgrade in the mid-2020s, which will enable it to boost the rate of its particle collisions and improve its accuracy even further. It will get a name upgrade too, to the High Luminosity Large Hadron Collider (HL-LHC).

We’re looking forward to what the LHC and its successor are going to find over the next few decades, and you can keep up to date with the latest developments at the official Future Circular Collider Study website.

Let’s just hope when the FCC does arrive, it’s weasel-proof.

The post The Next Particle Collider Will Be Three Times Bigger Than the LHC appeared first on Futurism.

Alien Plants

The idea of discovering and studying alien life has tantalized scientists for centuries. Astronomers have been studying planets in the “Goldilocks Zone” — a distance from a sun in which water on a given planet is not constantly boiled off or frozen — for any sign of life, whether that be microbial or intelligent. But what would alien plant life look like given their planets’ different sizes, atmospheres, and associated stars?

Other Earths: The Best Exoplanet Candidates for Life [INFOGRAPHIC]

If these planets happen to be smaller than Earth, any plants on them would be far taller and thinner. The plants would also probably have larger leaves than Earth plants, because the lower gravity would permit the stems to support more mass. If the planet were more massive than Earth, the plants might be shorter and broader with smaller leaves, as everything on the planet would be heavier.
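One way to put a rough number on the gravity effect is the classical buckling limit for a self-supporting column, a textbook scaling offered here as a guide rather than anything the researchers themselves invoke: a stem of diameter d, stiffness E, and tissue density ρ under surface gravity g can stand no taller than roughly

\[
h_{\max} \;\propto\; \left(\frac{E}{\rho\,g}\right)^{1/3} d^{\,2/3}
\]

Because the height limit scales as g to the −1/3 power, halving a planet’s surface gravity raises it by only about 26 percent: gravity matters, but the effect is gentler than linear.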

A thin atmosphere would cause any breeze on the planet to be exceptionally weak — this would mean that, evolutionarily, the plants would not grow to be particularly robust and would be tall and thin. A thick atmosphere would produce plants that would have to be much thicker and closer to the ground in order to withstand the powerful winds.

Our plants are green because they capture the blue and red light our Sun emits while reflecting the green light, which they use least. Not all stars operate in this color spectrum, though. A star that emits more green and blue light would likely produce plants that were bright red.

TRAPPIST-1

Given the potentially huge changes different environments could have on plants, a key aspect of planning for space exploration concerns accommodating for these changes. Interplanetary colonists will need to predict both what type of life we could find on these planets and how we would adapt to those environments.

Three potentially habitable planets are housed in the TRAPPIST-1 planetary system, which is about 39 light-years away from Earth and is home to seven Earth-sized planets in total. The TRAPPIST-1 star is called an ‘ultracool dwarf star’ — it is around the same size as Jupiter, and is much cooler and dimmer than our sun.

Plants in the ‘Goldilocks Zone’

This means all seven planets circle their sun at a closer proximity than Mercury circles ours, making orbits, and therefore years, extremely short — one year would be about two Earth weeks. This also means the planets get hit by solar flares frequently, which may hinder life.

The nature of the TRAPPIST-1 star means that it emits mostly infrared heat, capable of warming the air on the surface of the planets, but leaving them much dimmer than we are used to. Plants that evolved in this system would have bigger leaves to capture as much of the light as possible and may have developed a method to convert infrared radiation into energy.

This alien plant life would most likely grow on only one side of their host planets because the planets in the system are ‘tidally locked’ — which means only one side ever faces the sun. But wherever we find alien plant life, and whatever it looks like, the discovery will cause major advances in our understanding of botany, and of life itself.

The post Scientists Imagine What Plants Would Look Like on Alien Worlds appeared first on Futurism.

Riddled With Holes

If you picture what the universe looks like, you can imagine it as Swiss cheese. The sizes of holes, or voids, in the matter of the universe vary greatly. One example would be the cold spot that was found in the Universe’s radiation. Galaxies are connected by filaments of dark matter that cannot be seen.

The Most Amazing Galaxies In The Universe [Infographic]

The Milky Way itself exists in one of these voids, specifically the KBC void, which a 2013 study showed to be not as densely packed with matter as other voids. It’s like our galaxy rests in a cosmological backwater — the Outer Rim, so to speak. A new study from the University of Wisconsin-Madison, presented at the 2017 American Astronomical Society meeting, confirms that the Milky Way does indeed reside in an enormous void.

However, the researchers claim that being in this huge KBC void also helps resolve a tension in measurements of the rate at which the universe expands — the Hubble Constant, the number used to describe this cosmic expansion.
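For reference, the Hubble Constant H₀ is the proportionality in Hubble’s law, which relates how fast a galaxy recedes to how far away it is:

\[
v = H_0\, d
\]

The tension at issue is that local, supernova-based measurements put H₀ near 73 kilometers per second per megaparsec, while CMB-based estimates put it closer to 67 (approximate values, given here for context).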

The Universe is Expanding?

There are two ways to measure the expansion of the universe. One is a local determination using the Hubble Space Telescope, which relies on observations of supernovae that exploded in galaxies nearby. The other is a cosmic determination that relies on Cosmic Microwave Background (CMB) data — the Big Bang’s residual noise — taken by the Planck observatory.

One possible explanation for the differences in the Hubble Constant measurements lies in time: Hubble’s observations rely on cosmic objects that are nearer to us in time, while the CMB was emitted back when the universe was young. Simply put, being in a void with considerably less matter affects the measurement taken by observing supernova explosions, while the CMB technique is not distorted by it at all.

“No matter what technique you use, you should get the same value for the expansion rate of the Universe today,” lead researcher Ben Hoscheit said in a press release. “Fortunately, living in a void helps resolve this tension.”

The post Scientists Assert That Earth May Exist in a Massive Cosmic Void appeared first on Futurism.

Breaking the Lepton Universality

In order to make sense of the physical world, scientists have worked hard to discover theories and principles that govern the physics of matter. This is what’s called the Standard Model of physics, which includes all the laws and principles concerning matter in all its forms and sizes. Basically, the Standard Model applies even to particle physics. Or so it should.

Scientists from the University of California at Santa Barbara (UCSB) and colleagues from various other institutions have recently discovered that there might be a break in the application of the Standard Model, particularly with a fundamental principle called lepton universality. Their discovery comes from reviewing the data from three separate experiments conducted in the United States, Switzerland, and Japan.

But before we jump into the details of the study, published in the journal Nature, a little background is in order. Lepton universality is an assumption concerning elementary particles called leptons, which don’t undergo strong interactions. It asserts that the interactions of these particles are the same regardless of their differences in mass and decay rate. The particles examined in the three experiments are the charged leptons: electrons, muons, and the heavier taus.

Challenging the Norms of Physics

All three experiments revealed that taus actually decay faster than the Standard Model predicts. The surprising thing was that the data, which came from the LHCb experiment at CERN in Switzerland, the BaBar detector at the SLAC National Accelerator Laboratory in California, and the Belle experiment in Japan, challenged lepton universality at four standard deviations. This means that there’s a 99.95 percent certainty that this deviation is real, according to the UCSB team.

“The tau lepton is key, because the electron and the muon have been well measured. Taus are much harder because they decay very quickly,” UCSB’s Franco Sevilla said in a press release. “Now that physicists are able to better study taus, we’re seeing that perhaps lepton universality is not satisfied as the Standard Model claims.”

9 Physics Questions Baffling Scientists [INFOGRAPHIC]

Initial reading into these results would seem to indicate that there is indeed a deviation from the Standard Model of particle physics. This could mean that an entirely different model of physics is needed to explain the peculiar behavior of the tau particle. In other words, new physics is required. That’s not a simple thing, as these principles often correlate with one another. A change in one could affect the others.

Sevilla admitted that they aren’t entirely sure yet how this would play out. “We’re not sure what confirmation of these results will mean in the long term,” he explained. “First, we need to make sure that they’re true, and then we’ll need ancillary experiments to determine the meaning.”

The post Physicists Discover a Possible Break in the Standard Model of Physics appeared first on Futurism.

The Cosmic Chorus

The Universe is full of noises — and Earth now also contributes to the cacophony. The first time we called out to the stars was on November 19, 1962 with The Morse Message. This message was sent in Morse code from the Yevpatoria Planetary Radar to Venus. What did we say? “MIR” — the Russian word for both world and peace. This was followed a few days later on November 24 by “LENIN” and “SSSR” (the founder of the Soviet state and the Russian abbreviation for the Soviet Union, respectively). Later, in 1999, a team headed by Alexander Zaitsev, a rogue Search for Extraterrestrial Intelligence (SETI) member, beamed Cosmic Call 1 to four nearby Sun-like stars from the Yevpatoria RT-70 radio telescope in Crimea. He called his approach Messaging to Extraterrestrial Intelligence (METI).

Alien or Natural: Strangest Sounds & Signals Detected from Space
Click to View Full Infographic

He argued that his decision was due to the SETI Paradox, which he characterized as “This paradoxical disparity of effort, a passionate desire to receive and no corresponding attempt to give.” He also stated that he did “not want to live in a cocoon, in a ‘one-man island.’”

Similar messages were subsequently sent out in 2001 (Teen Age Message), 2003 (Cosmic Call 2), and 2008 (A Message From Earth). These messages caused fierce debate within the scientific community, prompting multiple meetings by the Royal Society in 2010 on the topic of “Towards a Scientific and Societal Agenda on Extraterrestrial Life.”

SETI has sent authorized messages into the cosmos, including the Lone Signal in 2013 and A Simple Response to an Elemental Message in 2016. Other messages not related to or verified by SETI have also been sent, such as the Hello From Earth message in 2009.

In response, we have heard very little back — a quiet some have dubbed “The Great Silence.” David Brin told Phys.org that the most obvious possibilities have now been ruled out, “including gaudy tutorial beacons that advanced ETCs would supposedly erect.”

A particularly exciting narrowband radio signal from space was detected by Ohio State University’s Big Ear radio telescope in 1977 — since dubbed the Wow! Signal — although it has now been argued to have probably been caused by comets. A less notable example is radio source SHGb02+14a, detected in 2003. It broadcast at 1420 MHz and lasted for a minute each time it was observed, although the signal was extremely weak.

How Do We Speak to Aliens?

There are two aspects to our communication with aliens: how we send it, and what we say. There has been vigorous discussion about both facets of interstellar communication.

The main means we currently have of broadcasting ourselves across the universe is radio. Frequency-modulated radio waves were used when we projected a message from the Arecibo radio telescope in Puerto Rico in 1974: it contained, in binary, pictorial representations of humanity, formulas for the elements and compounds that make up DNA, and a representation of the Solar System. Other approaches have been more physical: the Pioneer 10 and 11 probes, for example, each carried a ‘Pioneer Plaque’ bolted to its frame.
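The Arecibo transmission also illustrates a clever convention for sending a picture as a bare stream of bits: the message was 1,679 bits long, and 1,679 is the product of two primes (23 × 73), so a receiver who factors the length finds essentially one sensible rectangular grid in which the image appears. Below is a toy sketch of that decoding idea with a made-up 15-bit pattern (15 = 3 × 5) — not the actual Arecibo content:

```python
# Toy illustration of the Arecibo trick: send a picture as a flat bitstring
# whose length is a product of two primes, so the receiver can infer the grid.
bits = "111101101101111"  # hypothetical 15-bit message (15 = 3 x 5)

def prime_factors(n: int) -> list[int]:
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

p, q = prime_factors(len(bits))  # a semiprime length factors uniquely
for row in range(p):             # arrange the bits as p rows of q columns
    print("".join("#" if b == "1" else "." for b in bits[row * q:(row + 1) * q]))
```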

Future efforts will try to update this system, using either the more sophisticated radio signals we possess today, or turning to lasers to beam ourselves to other planets. METI will begin their search by beaming to Proxima Centauri, the nearest star to our sun.

Douglas Vakoch, the former director of Interstellar Message Composition at the SETI Institute and president of METI, told Forbes, “It’s too late to conceal ourselves in the universe, so we should decide how we want to represent ourselves.” But how can we know that what we choose to represent is what will be received, when we have no comprehension of the technology aliens may be using, or of their specific culture?

The central debate over what we send to aliens stems from what they would think if they received a signal. Opinion is split among scientific heavyweights over whether aliens would be benevolent or malevolent. Carl Sagan believed that any contact would be benign because, as he stated in his 1985 novel Contact, “In the long run, the aggressive civilizations destroy themselves, almost always.” On the other hand, Stephen Hawking believes that “if aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.”

Other specialists such as Seth Shostak, an astronomer at SETI, think that if we truly believed in a threat, we would be more careful about all radio use; he told phys.org, “We cannot pretend that our present level of activity with respect to broadcasting or radar usage is ‘safe.’ If danger exists, we’re already vulnerable.”

An encounter with aliens is a real possibility, and one that would have earth-changing consequences. When we will meet them is anyone’s guess — it may be in ten years, it may be never — but it is important to discuss how to deal with an encounter so that we are prepared for every possible outcome.

The post Humans Have Been Trying to Communicate with Space for Decades appeared first on Futurism.

Magnetism in Flatland

2D materials (materials consisting of a single layer of atoms) are nothing new; scientists have created 2D semiconductors, insulators, and superconductors. However, 2D magnets had eluded creation until now — some even believed they were impossible.

A collaborative piece of research — conducted by Pablo Jarillo-Herrero, a condensed-matter physicist at the Massachusetts Institute of Technology, and Xiaodong Xu, an optoelectronics researcher at the University of Washington in Seattle — has found that chromium triiodide can act as a 2D magnet.

The material is suitable because of its combination of magnetic and crystalline properties. Chromium triiodide is a ferromagnet (a material that has an aligned spin of electrons, making it especially susceptible to magnetism) as well as being anisotropic (meaning that the electrons have a preferred spin direction, and therefore a specific magnetic direction). It is also a crystal composed of thin layers. These can be removed through the ‘Scotch tape method’, which entails using adhesive tape to peel off thinner and thinner layers until the material is a single atom thick.

2D Magnetism in The Real World

The potential uses of 2D magnets are wide-ranging and significant. On the research side, they allow scientists to probe previously inaccessible properties of 2D materials. Jarillo-Herrero, for instance, wants to investigate chromium triiodide’s potential interactions with a superconductor — “Does the superconductor destroy the ferromagnet, or does the ferromagnet destroy the superconductor?”

Nitin Samarth, a physics professor at Penn State, said in a commentary on the study, “The promise of incorporating these 2D ferromagnets into spin-based electronics and information technologies also beckons.” Spin-based electronics (spintronics) is an exciting avenue that could supersede the silicon age due to its reliance on electron spin rather than electricity, which limits transistor size due to the heat it produces. Spintronics, therefore, allows for greater processing speeds and more data storage.

However, perhaps most excitingly, spintronics is a promising platform for quantum computers, as it theoretically offers the processing power needed to handle the huge amount of information quantum machines would produce.

The post Physicists Have Finally Created the World’s First 2D Magnet appeared first on Futurism.

The End of Genetic Disease

CRISPR-Cas9, the world’s best-known gene editing tool, has lent itself to a plethora of research and experiments. Scientists owe a great deal to the person considered its founder — if any one person can really be credited with its advent. Certainly, University of California, Berkeley biologist Jennifer Doudna deserves consideration for the title as one of the world’s leading figures in what’s being called the “CRISPR Revolution.” It was Doudna’s work in 2012 that first suggested the possibility of using CRISPR-Cas9 for genome editing. Since then, it has certainly been put to good use — human clinical trials of the technology’s capabilities are expected to begin soon in the United States.

On Thursday, speaking at WIRED’s 2017 Business Conference in New York, Doudna made a bold claim about the future of CRISPR. “I think it’s really likely that in the not-too-distant future it will cure genetic disease,” she said at the conference. However, Doudna remains aware that the use of such a powerful tool needs to be carefully considered — especially since studies have shown it can have unintended repercussions.

“But globally we need to come up with a consensus on moving forward in a responsible way,” Doudna added. This wasn’t the first time she emphasized the need for ethical responsibility in using CRISPR.

Doudna herself has certainly been careful to practice what she preaches: in 2015, she became part of a broad coalition of leading biologists that put parameters in place for the use of CRISPR. They agreed to a worldwide moratorium on gene editing of what’s called the “germ line” — in other words, a prohibition on edits that would be passed down to subsequent generations. However, because the agreement isn’t legally binding, it hasn’t stopped such experiments from taking place. In China, for instance, there’s already work using CRISPR to edit the genome of human embryos.

A More Immediate Impact

Recognizing the legal and ethical hurdles CRISPR still needs to overcome, Doudna went on to speak about a much more plausible area for this gene editing tool to demonstrate more immediate success: its application for farming. “When I think about where we are likely to see the biggest impacts in the shortest amount of time, I really think it’s going to be in agriculture,” Doudna told the audience in New York.

Indeed, CRISPR has already been successfully used to grow — and eat! — one crop in particular: a cabbage in Sweden. Now, agricultural giant Monsanto has even been licensed by the Broad Institute to use CRISPR-Cas9 in seed development. Doudna also mentioned research by scientists from the Cold Spring Harbor Laboratory in New York that could make harvesting tomatoes easier.

“For me, that really illustrates the potential for this,” said Doudna, who is herself a tomato farmer. “[CRISPR] allows plant breeders to do things that would have been very difficult, sometimes impossible in the past.”

With the science, the ethics, and the legal ramifications of CRISPR still being ironed out, farming seems to be a reasonable arena in which to continue experimenting with the tech, and many experts are already working on its continued refinement. Given that dedication, it might not be long before the end of genetic disease — the outcome Doudna ultimately hopes for — is accomplished with the help of CRISPR.

The post Berkeley Biologist: “CRISPR Gene Editing Will Cure Genetic Disease” appeared first on Futurism.

New State Of Matter

Duke University postdoctoral fellow Sho Yaida has just solved a 30-year mystery in the physics world with 30 pages of calculations, revealing a strange new state of matter in the process. The mystery arose when glass, plastic, and other materials were observed to have “disordered” atomic structures and physical behavior, in contrast to the orderly atomic structures of crystals. These observations led physicists to theorize about some kind of phase transition — one that would be present in theoretical models of disordered materials as well as actual materials like glass.

Yaida used algebra and particle physics to calculate and prove — by hand, no less — that some kinds of glass at low temperatures may exist in a new state of matter, one that changes when and how they break and how they respond to stress, heat, and sound. The breakthrough here is that the phase transition can exist not just in a theoretical universe with an infinite number of dimensions, but also in a real-world setting.

“I knew this could cause trouble,” Yaida’s team member Patrick Charbonneau told Gizmodo.

Image Credit: Unsplash/Pixabay

New Applications

This isn’t the only new state of matter discovered by researchers; new discoveries like this happen fairly regularly, and each new development in this area opens up new possibilities for application. In 2015, scientists came across Jahn-Teller metals, a new state of matter that has advanced the understanding of the physics behind high-temperature superconductors. In March of this year, MIT researchers created a new form of matter, a supersolid, that could be key in improving efficient energy transport. Then in April, researchers discovered another new state of matter: 3D quantum liquid crystals that may advance microchip technology and quantum computing.

In the case of this discovery, the proof that the phase transition can actually exist will mean that research questions might be approached in new ways. “The fact that this transition might actually exist in three dimensions means that we can start looking for it seriously,” Charbonneau said in a press release. “It affects how sound propagates, how much heat can be absorbed, the transport of information through it. And if you start shearing the glass, how it will yield, how it will break.”

“It changes profoundly how we understand amorphous materials in general, whether they be amorphous plastics or piles of sand or window glasses,” he added.

The post A New State of Matter is Discovered – And It’s Strange appeared first on Futurism.

Mollusks “Thinking” Differently

While animals may act similarly, the brain patterns they use to accomplish identical behaviors may differ vastly, even when the animals have closely related brains, researchers from Georgia State University have discovered. This finding challenges basic ideas about the relationship between neurology and behavior. The scientists have not yet unraveled the cause, but the research shows that key animal behaviors can survive evolutionary processes that alter the brain.

The studies focused on two species of nudibranch — the gastropod mollusks commonly known as sea slugs, which exist in many shapes and colors around the world: the lion’s mane nudibranch and the giant nudibranch. Both species swim in the same way — by flattening their bodies and then flexing them from side to side.

Image Credit: Tentaculata/WikiCommons

This shared form of propulsion isn’t all they have in common. The species also both have simple nervous systems populated with similar neurons, which appear to be located in similar positions. Before this research was conducted, biologists assumed that the neural circuits governing movement in these species were the same — but the team from Georgia State has proven that they’re not.

After the researchers observed different brain patterns in the two nudibranchs, they blocked the giant nudibranch’s normal neural pathway for swimming and instead stimulated its brain to simulate the neural connections of the lion’s mane nudibranch. The giant nudibranch was able to swim again, but now used its neurons the way the lion’s mane nudibranch did. This confirmed that the two different brain patterns did indeed produce identical behaviors.

New Ideas About Neurology

These results suggest that the brain can find new ways to preserve important behaviors even as the nervous system evolves. The scientists are not sure how this process occurs, or what led to the peculiar case of the two nudibranch species, but they plan to keep exploring this phenomenon.

Deep Ocean Dwellers [INFOGRAPHIC]
Click to View Full Infographic

“Perhaps some of the neurons in one or the other species have taken on additional functions that provide selective pressure to alter the ancestral connectivity,” the researchers suggested in their paper.

While this may simply be an example of random genetic diversity, it could also provide insight into our studies of the human brain. At the very least this research demonstrates the mystery that is the nervous system, and that even cases that appear simple at first glance can have complexities we would have never imagined just beneath the surface.

The post Colorful Mollusks Have Revealed A Startling Neurological Phenomenon appeared first on Futurism.

Celestial Warping

According to Einstein’s general theory of relativity, spacetime is bent or distorted by the gravity of a massive object. Einstein theorized that light from a distant star would likewise be distorted when it passes another star along our line of sight from Earth — a near-stellar eclipse, so to speak. In this case, gravity should act as a magnifying lens, brightening and bending the light from the distant star and warping its apparent position.

Einstein, however, wasn’t particularly confident about ever seeing this gravitational deflection of starlight. In an article published in Science in 1936, he said that because stars are so distant from one another, “there is no hope of observing this phenomenon directly.” An international team of researchers, however, has just proven Einstein wrong — but also right — by seeing a type of gravitational microlensing from a star other than the Sun.

Image Credit: NASA, ESA, and A. Feild (STScI)

The team, led by Kailash Sahu from the Space Telescope Science Institute, reported their findings in a new study published in the same journal. “Einstein would be proud,” Terry Oswalt said in an interview with Wired. “One of his key predictions has passed a very rigorous observational test.”

Confirming General Relativity

Sahu’s team used the Hubble Space Telescope, with its superior angular resolution, to measure the shifts in a distant background star’s apparent position as its light was deflected around a nearby white dwarf called Stein 2051 B. The observations were made on eight dates over a two-year period, between October 2013 and October 2015.

Seeing Stars: The FAST Radio Telescope
Click to View Full Infographic

In the same way that the 1919 total solar eclipse provided the first observations of starlight curving and the first convincing proof of Einstein’s general theory of relativity, the new study confirms it anew. It also gave Sahu’s team a way to measure the mass of the foreground white dwarf, which was previously unmeasurable.
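The bending itself follows from a compact general-relativity formula: light passing a mass M at impact parameter b is deflected by an angle α = 4GM/(c²b). A minimal sketch with illustrative inputs — a generic white dwarf of about two-thirds of a solar mass and a 1 AU impact parameter, round numbers for illustration rather than values from the paper:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def deflection_mas(mass_kg: float, impact_parameter_m: float) -> float:
    """General-relativistic deflection angle alpha = 4GM / (c^2 b),
    converted from radians to milliarcseconds."""
    alpha_rad = 4 * G * mass_kg / (C**2 * impact_parameter_m)
    return alpha_rad * (180 / math.pi) * 3600 * 1000

# Illustrative: light passing a ~0.67 solar-mass white dwarf at 1 AU.
print(f"{deflection_mas(0.67 * M_SUN, AU):.2f} mas")  # a few milliarcseconds
```

Shifts of only a few milliarcseconds are exactly why Hubble’s angular resolution was essential.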

“The astrometric lensing approach used by Sahu et al. can be used equally well for any other nearby stars that happen to pass in front of background stars,” Oswalt wrote in a perspective article in Science. “In the coming era of truly massive sky surveys such as the Large Synoptic Survey Telescope, astronomers are bound to observe other similar events, despite their rarity.”

The post Scientists Finally Witnessed a Phenomenon That Einstein Thought “Impossible” appeared first on Futurism.

A Bright Giver

Astronomers have taken particular interest in the study of young stars similar to our Sun in its formative years. These solar-type protostars are windows into the past, allowing astronomers to observe cosmic conditions similar to those that resulted in our solar system a little more than 4.5 billion years ago. Now, two teams of astronomers have detected a chemical building block of life during their study of one such multiple star system.

The two teams, whose separate papers are published in the same issue of the Monthly Notices of the Royal Astronomical Society, used the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile to observe IRAS 16293-2422, a multiple star system located in Rho Ophiuchi, a large star-forming region in the constellation of Ophiuchus some 400 light-years from Earth.

In their separate observations using ALMA, the teams detected traces of methyl isocyanate, a prebiotic complex organic molecule, in the gas regions near these stars. “This family of organic molecules is involved in the synthesis of peptides and amino acids, which, in the form of proteins, are the biological basis for life as we know it,” explained Niels Ligterink and Audrey Coutens, two of the astronomers involved in the research, in an ESO news release.

Life From Stardust

In 2012, another group of astronomers found a simple sugar known as glycolaldehyde in the gas surrounding IRAS 16293-2422. That marked the first time sugar was found in space, so this new discovery only adds to the system’s importance. “This star system seems to keep on giving!” explained Ligterink and Coutens.

How Life Evolved on Earth (Infographic)
Click to View Full Infographic

According to existing models, the heavier elements in space were all forged in stars. The planets in our solar system, for example, were formed from material left over, so to speak, from the formation of the Sun. “By finding prebiotic molecules in this study, we may now have another piece of the puzzle in understanding how life came about on our planet,” Rafael Martín-Doménech and Víctor M. Rivilla, lead authors of one of the studies, told the ESO.

Apart from giving us a peek into the origins of life on Earth, the findings also raise hopes about the possibility of life within another star system. It would probably take some billions of years before the molecules around IRAS 16293-2422 would resemble anything we’d consider life, but they may help us narrow our search for a system a little further along in the process.

The post Researchers Discover The “Basis for Life as We Know It” in an Infant Star System appeared first on Futurism.

Rediscovering Ourselves

On Wednesday, scientists reported they had discovered the oldest known remains of Homo sapiens in Morocco. The bones and other remains are approximately 300,000 years old. This revelation provides new insights into the origins of humankind: one consequence of the findings is that human beings evolved earlier than previously believed. The fossils also indicate that despite fundamental differences between the brains of modern humans and early Homo sapiens, our faces strongly resemble those of our early ancestors.

Until this remarkable find at Jebel Irhoud, the oldest human fossils dated back only 195,000 years. The new fossils lead experts to believe that our species evolved not in East Africa (specifically near Ethiopia), where later fossils were found, but across the continent in northwestern Africa, where modern Morocco is situated.

“We did not evolve from a single cradle of mankind somewhere in East Africa,” paleoanthropologist Phillipp Gunz, a co-author of the two new studies on the fossils, told The New York Times.

Image Credit: Mohammed Kamal/Max Planck Institute for Evolutionary Anthropology

Investing In Research

Before now, fossils found in different places made paleoanthropologists believe that Homo sapiens arose in East Africa and then moved across the continent. However, mysterious human fossils from other parts of Africa didn’t seem to fit in with this story, and caused scientists to wonder where they fit into the Homo sapiens puzzle. The remains discovered in Morocco will help solve these mysteries, even as it suggests new questions for further research.

For example, the recent finds at Jebel Irhoud confirm that early Homo sapiens had flatter faces, similar to ours today. Natural History Museum in London paleoanthropologist Christopher Stringer speculates that the flattened faces of early Homo sapiens may be related to the advent of speech. “We really are at very early stages of trying to explain these things,” Dr. Stringer told The New York Times.

The larger, rounder brain of modern humans is a more recent development. Dr. Gunz indicates that the human brain may have evolved into a rounder shape during a later phase of human existence. Two areas of the brain in particular — the cerebellum and the parietal lobe,  both toward the back of the head — seem to have adapted over thousands of years. That being said, scientists don’t yet know how the rounder brain changed how humans think.

Flint blades from around the same time have been found elsewhere across Africa, and the Jebel Irhoud fossils suggest that they may have been made by early humans. Dr. Gunz and his team believe that if this is true, Homo sapiens may have evolved across the continent as a network of groups. The only way we’ll ever know for sure — and resolve the other questions these findings raise — will be through additional research, which will require adequate funding.

The post New Discovery of the Oldest Known Human Remains Will Change the Narrative of Human Evolution appeared first on Futurism.

The Power of Supramolecules

Photosynthesis is one of nature’s most efficient phenomena: aside from providing much of the oxygen human beings need to breathe, this naturally occurring process gives plants the food and energy they need to survive, using visible light — which the Earth has in abundance — as the “fuel.” Researchers have been working on ways to artificially recreate this process in the lab in the hopes of producing fuel, too — specifically, hydrogen.

Now, a team of chemists from the Brookhaven National Laboratory and Virginia Tech have designed two supramolecules, each made up of a number of light-harvesting ruthenium (Ru) metal ions attached to a single catalytic center of rhodium (Rh) metal ions.

“By building supramolecules with multiple light absorbers that may work independently, we are increasing the probability of using each electron productively,” Gerald Manbeck, lead author of the study published in the Journal of the American Chemical Society, said in a press release.


Clean Air & Clean Energy

While both could act as catalysts, the researchers set out to determine which of the two supramolecules they created did the job best. They found that the one with six Ru light absorbers could produce some 280 hydrogen molecules per catalyst over a 10-hour period, while the supramolecule with only three Ru ions produced about 40 hydrogen molecules over four hours — at which point it stopped working.
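Expressed as an average production rate, the gap between the two is stark; a quick sketch of the arithmetic using the figures above:

```python
# Average hydrogen turnover per catalyst, from the reported figures.
six_ru = 280 / 10   # ~280 H2 molecules per catalyst over 10 hours
three_ru = 40 / 4   # ~40 H2 molecules per catalyst over the 4 hours it ran

print(f"Six-absorber supramolecule:   {six_ru:.0f} H2/catalyst/hour")
print(f"Three-absorber supramolecule: {three_ru:.0f} H2/catalyst/hour")
print(f"Rate ratio: {six_ru / three_ru:.1f}x, before the smaller one quit entirely")
```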

“To promote catalysis, the Rh catalyst must be low enough in energy to accept the electrons from the Ru light absorbers when the absorbers are exposed to light,” co-author Etsuko Fujita explained. What that means is that the larger of the supramolecules was slightly more electron-deficient, which made it more receptive to electrons needed for synthetic photosynthesis.

This work is a huge step in realizing the goals of many researchers who seek to use photosynthesis for the purpose of creating clean energy fuel. The great thing about synthetic photosynthesis is that it could be used to remove pollutants in the air, too, because carbon dioxide is a necessary component in the process. In that way, it achieves two goals for the price of one: cleaning the air and providing cleaner energy.

The post Breakthrough Research Has Made Synthetic Photosynthesis Possible appeared first on Futurism.

How Our Brains “See”

It’s common knowledge that our vision is determined by how our eyes take in light and color, sending these stimuli as signals to the brain, where they are processed and rendered into an image. But how exactly does the brain turn what the eyes pick up into composite images? A study by researchers from the California Institute of Technology is the first to provide a full and simple explanation of how this process works, starting with how the brain recognizes human faces.

“We’ve cracked the brain’s code for facial identity,” author Doris Tsao of Caltech told New Scientist. The study, published in the journal Cell, looked at face-recognition function in the brains of macaque monkeys.

They identified individual brain cells that work together to create an infinite range of facial images by encoding 50 different dimensions of a face, such as its shape, the size of and distances between eyes, skin texture, and other features. By inserting electrodes into three patches of these so-called “face cells” in the brains of the macaques, Tsao and colleague Steven Le Chang were able to record the activity of 205 neurons.

Image credit: Doris Tsao/Cell

The Brain’s Imaging Powers

By showing 2,000 images of human faces to the macaques, they discovered that each face cell encoded a different view of the face, but when combined, the cells’ signals specified a clear composite image. To see what the monkeys saw, the researchers developed an algorithm that reconstructed faces from the recorded face-cell activity.
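The underlying picture is that each face cell fires roughly in proportion to the projection of a face’s 50-dimensional feature vector onto that cell’s preferred axis, which makes decoding a linear-algebra problem. Here is a minimal sketch of that idea with simulated data — the axes, noise level, and decoding step are illustrative assumptions, not the study’s actual recordings or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_features = 205, 50  # 205 recorded cells, 50 face dimensions

# Hypothetical model: each neuron's firing rate is a linear projection of
# the face-feature vector along a preferred axis, plus noise.
axes = rng.normal(size=(n_neurons, n_features))
true_face = rng.normal(size=n_features)
responses = axes @ true_face + 0.1 * rng.normal(size=n_neurons)

# Decoding: recover the face features from the population response
# with ordinary least squares.
decoded_face, *_ = np.linalg.lstsq(axes, responses, rcond=None)

corr = np.corrcoef(true_face, decoded_face)[0, 1]
print(f"Correlation between true and decoded face vector: {corr:.3f}")
```

With responses that really are (approximately) linear in the features, the decoded vector correlates almost perfectly with the true one, which is the sense in which the face code has been “cracked.”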

This discovery could extend to research into how the brain retains memories of facial images and associates them with specific individuals. Previous work by researchers from the Allen Institute for Brain Science has identified individual cells in the hippocampus, the brain’s memory center, responsible for remembering the faces of specific people — the so-called “Jennifer Aniston cells.”

“Tsao’s work provides the first specific hypothesis for how the response of face cells in the cortex can be utilized by cells in the hippocampus to form memories of individuals we’ve seen before,” Ueli Rutishauser, from the Cedars-Sinai Medical Center, told New Scientist.

The study could also provide insight into how the brain forms other images, too: “Our work suggests that other objects could be encoded by analogous metric coordinate systems,” the authors wrote.

Another potential application of the research into how the brain processes memories of people’s faces would be in the development of treatments for memory-related diseases, such as Alzheimer’s. The applications could extend beyond humans, too: such work could also help to improve the image recognition abilities of artificial neural networks.

The post A New Study Reveals How the Brain “Sees” Faces appeared first on Futurism.

What is “Open Science”?

Open science is a set of practices designed to make scientific processes and results more transparent and accessible to people outside the research team. It includes making complete research materials, data and lab procedures freely available online to anyone. Many scientists are also proponents of open access, a parallel movement involving making research articles available to read without a subscription or access fee.

Why are researchers interested in open science? What problems does it aim to address?

Recent research finds that many published scientific findings might not be reliable. For example, researchers have reported being able to replicate only 40 percent or less of cancer biology results, and a large-scale attempt to replicate 100 recent psychology studies successfully reproduced fewer than half of the original results.

This has come to be called a “reproducibility crisis.” It’s pushed many scientists to look for ways to improve their research practices and increase study reliability. Practicing open science is one way to do so. When scientists share their underlying materials and data, other scientists can more easily evaluate and attempt to replicate them.

Also, open science can help speed scientific discovery. When scientists share their materials and data, others can use and analyze them in new ways, potentially leading to new discoveries. Some journals are specifically dedicated to publishing data sets for reuse (Scientific Data; Journal of Open Psychology Data). A paper in the latter has already been cited 17 times in under three years – nearly all these citations represent new discoveries, sometimes on topics unrelated to the original research.

Wait – open science sounds just like the way I learned in school that science works. How can this be new?

Under the status quo, science is shared through a single vehicle: researchers publish journal articles summarizing their studies’ methods and results. The key word here is summary; in writing a clear and succinct article, important details may be omitted. Journal articles are vetted via the peer review process, in which an editor and a few experts assess them for quality before publication. But — perhaps surprisingly — the primary data and materials underlying the article are almost never reviewed.

Historically, this made some sense because journal pages were limited, and storing and sharing materials and data were difficult. But with computers and the internet, it’s much easier to practice open science. It’s now feasible to store large quantities of information on personal computers, and online repositories to share study materials and data are becoming more common. Recently, some journals have even begun to require or reward open science practices like publicly posting materials and data.

There are still some difficulties sharing extremely large data sets and physical materials (such as the specific liquid solutions a chemist might use), and some scientists might have good reasons to keep some information private (for instance, trade secrets or study participants’ personal information). But as time passes, more and more scientists will likely practice open science. And, in turn, science will improve.

Some do view the open science movement as a return to science’s core values. Most researchers over time have valued transparency as a key ingredient in evaluating the truth of a claim. Now with technology’s help it is much easier to share everything.

Why isn’t open science the default? What incentives work against open science practices?

Two major forces work against adoption of open science practices: habits and reward structures. First, most established researchers have been practicing closed science for years, even decades, and changing these old habits requires some upfront time and effort. Technology is helping speed this process of adopting open habits, but behavioral change is hard.

Second, scientists, like other humans, tend to repeat behaviors that are rewarded and avoid those that are punished. Journal editors have tended to favor publishing papers that tell a tidy story with perfectly clear results. This has led researchers to craft their papers to be free from blemish, omitting “failed” studies that don’t clearly support their theories. But real data are often messy, so being fully transparent can open up researchers to critique.

Additionally, some researchers are afraid of being “scooped” – they worry someone will steal their idea and publish first. Or they fear that others will unfairly benefit from using shared data or materials without putting in as much effort.

Taken together, some researchers worry they will be punished for their openness and are skeptical that the perceived increase in workload that comes with adopting open science habits is needed and worthwhile. We believe scientists must continue to develop systems to allay fears and reward openness.

I’m Not a Scientist; Why Should I Care?

Open access is the cousin to open science – the idea is that research should be freely available to all, not hidden behind paywalls. h_pampel, CC BY-SA

Science benefits everyone. If you’re reading this article now on a computer, or have ever benefited from an antibiotic, or kicked a bad habit following a psychologist’s advice, then you are a consumer of science. Open science (and its cousin, open access) means that anyone – including teachers, policymakers, journalists and other nonscientists – can access and evaluate study information.

Considering automatic enrollment in a 401k at work or whether to have that elective screening procedure at the doctor? Want to ensure your tax dollars are spent on policies and programs that actually work? Access to high-quality research evidence matters to you. Open materials and open data facilitate reuse of scientific products, increasing the value of every tax dollar invested. Improving science’s reliability and speed benefits us all.


Elizabeth Gilbert, Postdoctoral Research Fellow in Psychiatry and Behavioral Sciences, The Medical University of South Carolina and Katie Corker, Assistant Professor of Psychology, Grand Valley State University

This article was originally published on The Conversation. Read the original article.


The post What Is “Open Science”? (And Why Some Researchers Want It) appeared first on Futurism.

Missing, or Sleeping?

During a lunchtime conversation in 1950, Enrico Fermi, the “architect of the nuclear age” who built the world’s first nuclear reactor, posed a question that has resulted in decades of research and hundreds of published papers: “Where is everybody?”

This is the crystallization of the paradox he perceived (now known as the Fermi Paradox): if the universe is so large that there has been more than sufficient opportunity for life to arise beyond Earth, why haven’t we met that alien life yet? Scientists like Anders Sandberg, an Oxford neuroscientist, and Milan Ćirković, of the Astronomical Observatory of Belgrade, have used this as a jumping-off point in their attempts to debunk — or defend — the possibility of extraterrestrial civilizations.

Sandberg and Ćirković teamed up with Stuart Armstrong, an AI expert at the Future of Humanity Institute, to author a new paper arguing that alien life is hibernating. Their “aestivation hypothesis” rests on two premises: advanced societies have transitioned to machine-based life, and machines compute more efficiently in cooler conditions. Therefore, they argue, aliens are hibernating until the universe cools so they can develop at a faster rate.

Sandberg and Ćirković explain on their blog that “as the Universe expands, this background temperature will decline exponentially” and that “in principle, running processing becomes 10 times more efficient if your computer is 10 times colder (measured in Kelvins).” Hence, the hypothesis goes, aliens are hibernating while they wait for the universe to cool.
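The thermodynamic logic behind “colder is cheaper” is Landauer’s principle, which says erasing one bit of information costs at least kT·ln 2 of energy — a cost that scales linearly with temperature. A minimal sketch of that scaling:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules_per_bit(temp_kelvin: float) -> float:
    """Minimum energy to erase one bit at temperature T (Landauer's principle)."""
    return K_B * temp_kelvin * math.log(2)

today = landauer_joules_per_bit(2.7)    # current cosmic background, ~2.7 K
future = landauer_joules_per_bit(0.27)  # a far-future background 10x colder

print(f"Cost per bit today:       {today:.3e} J")
print(f"Cost per bit when colder: {future:.3e} J")
print(f"Efficiency gain: {today / future:.0f}x")  # linear in T, hence 10x
```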

They came up with the possible solution while discussing how to maximize the lifespan of a civilization. The theory developed as part of “eschatology considerations of what the best possible outcome for a civilization would be, not directly an urge to solve the Fermi question. However, it does seem to provide one new possible answer to the question.”

Other Possible Solutions

Solutions to the Fermi Paradox fall into three categories:

If Our Universe Is So Old and Vast, Then Where Are All the Aliens?
Click to View Full Infographic

First, that we are alone and there is no alien life. This could be either because aliens never developed or because they have since died off. Aditya Chopra, from the Australian National University, offered a causal explanation for extinction called “The Gaian Bottleneck.” He believes that while other life has developed, it was too fragile, and humanity is the only species that has proven persistent enough to survive The Great Filter (the challenges life faces as it develops, a concept codified by Robin Hanson).

Second, that aliens exist but not here, due to limitations in our methods of communication, our technology, or our comprehension. Michio Kaku details this answer in an interview with The Daily Grail using the analogy of an ant and a highway:

Let’s say we have an anthill in the middle of the forest. And right next to the anthill, they’re building a ten-lane super-highway. And the question is, “Would the ants be able to understand what a ten-lane super-highway is?”

Third, that there are aliens here, but they are not interacting in an obvious manner. Rather than blaming us primitive humans, this solution suggests the extraterrestrials themselves are being ambiguous. This category holds the hibernation hypothesis and could also be the explanation behind the baffling light fluctuations of the star KIC 8462852, which some have argued could be due to alien interference.

Whatever the explanation — if there even is one — it is a fascinating question to try to answer. But perhaps the next question, and one also worth thinking about, is “what happens when we do meet aliens?”

The post Oxford Scientists Suggest That Aliens Aren’t Extinct, Just Hibernating appeared first on Futurism.

A new crop of ultra-fast processors has arrived from Intel. The new chips boast some serious power, allowing for unprecedented levels of multitasking. Intel has unveiled the Core X-series of chips with the i9-7980XE leading the pack. This chip is the company’s first 18-core CPU and is capable of more than a teraflop of computing power.
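Headline figures like “a teraflop” come from simple multiplication: peak throughput is cores × clock speed × floating-point operations per core per cycle. A rough sketch with illustrative numbers — the clock and per-cycle throughput below are assumptions for illustration, not specifications quoted in this article:

```python
# Back-of-the-envelope peak throughput: cores x clock x FLOPs per cycle.
cores = 18
clock_ghz = 2.6        # illustrative sustained all-core clock
flops_per_cycle = 32   # illustrative: wide vector units doing fused multiply-adds

peak_gflops = cores * clock_ghz * flops_per_cycle
print(f"Peak: {peak_gflops / 1000:.1f} TFLOPS")  # ~1.5 TFLOPS with these numbers
```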

However, that much power is certainly going to cost you. If the $1,999 price tag is a little steep, you may want to consider some of the other offerings in the series. There are also 10-, 12-, 14- and 16-core offerings costing as little as $999.

Granted, the higher end of the series is not for the casual user, but for those who have the will and resources to invest in that kind of raw power. The chip is ideal for those who demand a lot from their rigs, like gamers who wish to play in 4K while broadcasting, or those who create and edit 4K video.

Meet The Most Powerful Computers in the World
Click to View Full Infographic

An 18-core processor may be a far cry from the 1000-core processor created nearly a year ago at UC Davis, but it still packs more of a punch than most users will need in their lifetime. Still, the real future of computing lies in the quantum realm, which is, even in its earliest stages, already producing technology that’s outperforming conventional computers.

The post Intel’s New Processor is the First Consumer Chip to Offer a Teraflop of Power appeared first on Futurism.

In the era of fake news and alternative facts, age-old conspiracies and junk science are experiencing a resurgence. For every Bill Nye and Neil deGrasse Tyson out there fighting to combat misinformation, there is a Jim Carrey running around giving credence to nonsense.

One of the more pressing pieces of balderdash comes from the small group of easily impressionable folks known as Flat-Earthers. As their name suggests, they believe that we’re all being lied to and the Earth is not a globe, but a disc. One argument they like to use is that in their everyday perspective, the Earth looks and feels flat. As Shaq put it, “I drive from Florida to California all the time, and it’s flat to me.”

How do we battle this kind of pseudoscience?

With actual science, of course!

The video above gives a few examples of how we know and can measure that the Earth is indeed a globe. From sunsets to ships on the horizon, there are plenty of ways anyone can observe the curvature of the Earth.
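One of those observations is easy to put numbers on: for an observer at height h above a sphere of radius R, the horizon lies roughly √(2Rh) away, which is why ships seem to sink hull-first as they recede. A quick sketch:

```python
import math

EARTH_RADIUS_M = 6.371e6  # mean radius of the Earth

def horizon_distance_km(eye_height_m: float) -> float:
    """Approximate distance to the horizon: d = sqrt(2 * R * h)."""
    return math.sqrt(2 * EARTH_RADIUS_M * eye_height_m) / 1000

print(f"Standing on the beach (2 m): {horizon_distance_km(2):.1f} km")
print(f"From a 30 m lighthouse:      {horizon_distance_km(30):.1f} km")
# On a flat Earth, a distant ship's hull would never dip below the
# horizon, no matter the observer's height.
```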

Still, for every scientific piece of evidence, there will be a made-to-order dissent that always has a tenuous grasp of actual science (or reality, for that matter). All we can do is to continue to educate, hoping to mitigate the spread of lies and obfuscation.

The post Here’s a Primer to Help You Educate Flat-Earthers appeared first on Futurism.

A Miracle Material for Spintronics

A team led by the University of Utah has discovered a new group of materials, called organic-inorganic hybrid perovskites, that could take spintronics from concept to reality. Spintronics aims to store and transmit information using the magnetic properties of electrons rather than their ability to conduct electricity. Instead of using the ebb and flow of electrical current (charge, recorded as ones and zeros) through thousands of electrons, spintronics uses the ‘up’ or ‘down’ spin orientation of far fewer electrons.

Changing the spin of electrons. Image Credit: SLAC National Accelerator Laboratory

The problem spintronics has encountered is that no material could be found whose spin could be changed easily while also retaining that change. Sarah Li, assistant professor in the Department of Physics & Astronomy at the University of Utah and lead author of the study, told Newswise that what makes this discovery special is that the material “can be manipulated and, at the same time, have a long spin lifetime.”

The spins of electrons in perovskites can be flipped many times within a nanosecond, meaning lots of information can be imparted to and altered in them. This was surprising, as these perovskites contain heavy elements, which usually confer good spin manipulability but a short spin lifetime.

Smaller, Faster, Stronger

Spintronic materials can process much more information than classical charge-based materials because they can operate reliably at a smaller scale. Although Moore’s Law observes that the number of transistors (which control the current) per square inch in integrated circuits approximately doubles every two years, we are currently approaching the limit. Li says, “The silicon technology, based only on the electron charge, is reaching its size-limit. The size of the wire is already small. If it gets any smaller, it’s not going to work in a classical way that you think of.”
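The doubling arithmetic behind Moore’s Law is simple compounding, and running it forward shows why a hard size limit looms. A sketch with an illustrative starting transistor count (not industry data):

```python
# Moore's Law as compound doubling: count(t) = count_0 * 2^(years / period).
start_count = 2e9       # illustrative: a ~2-billion-transistor chip
doubling_years = 2.0

for years in (2, 6, 10):
    projected = start_count * 2 ** (years / doubling_years)
    print(f"After {years:2d} years: {projected:.1e} transistors")
# The projection eventually collides with physics: features only a few
# atoms wide no longer behave classically, which is the opening
# spintronics aims to exploit.
```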

Spintronics could, therefore, allow data to be processed faster and increase random-access memory (RAM), which lets you access the specific pieces of information you want on computers and phones. This would increase the efficiency, speed, and memory capacity of nearly all computational devices while using less power and extending battery life. Finally, the materials spintronics uses don’t emit an external magnetic field, even though they exploit magnetism on an atomic level; this means they don’t interfere with other devices, and it should make them harder to spy on.

If mass production is possible, spintronics could mark the end of the silicon transistor age.

The post This Could Mark the End of the Silicon Age appeared first on Futurism.

Astronomers are understandably fascinated with the Epsilon Eridani system. For one, this star system is in close proximity to our own, at a distance of about 10.5 light-years from the Solar System. Second, it has been known for some time that it contains two asteroid belts and a large debris disk. And third, astronomers have suspected for many years that this star may also have a system of planets.

On top of all that, a new study by a team of astronomers indicates that Epsilon Eridani may be much like our own Solar System was during its younger days. Relying on NASA’s Stratospheric Observatory for Infrared Astronomy (SOFIA) aircraft, the team conducted a detailed analysis of the system showing that it has an architecture remarkably similar to what astronomers believe the Solar System once looked like.

Led by Kate Su – an Associate Astronomer with the Steward Observatory at the University of Arizona – the team includes researchers and astronomers from the Department of Physics & Astronomy of Iowa State University, the Astrophysical Institute and University Observatory at the University of Jena (Germany), and NASA’s Jet Propulsion Laboratory and Ames Research Center.

Artist’s diagram showing the similar structure of the Epsilon Eridani to the Solar System. Credit: NASA/JPL-Caltech

For the sake of their study – the results of which were published in The Astronomical Journal under the title “The Inner 25 AU Debris Distribution in the Epsilon Eri System” – the team relied on data obtained by a flight of SOFIA in January 2015. Combined with detailed computer modeling and research that went on for years, they were able to make new determinations about the structure of the debris disk.

As already noted, previous studies of Epsilon Eridani indicated that the system is surrounded by rings made up of materials that are basically leftovers from the process of planetary formation. Such rings consist of gas and dust, and are believed to contain many small rocky and icy bodies as well — like the Solar System’s own Kuiper Belt, which orbits our Sun beyond Neptune.

Careful measurements of the disk’s motion have also indicated that a planet with nearly the same mass as Jupiter circles the star at a distance comparable to Jupiter’s distance from the Sun. However, based on prior data obtained by NASA’s Spitzer Space Telescope, scientists were unable to determine the position of warm material within the disk — i.e. the dust and gas — which gave rise to two models.

In one, warm material is concentrated into two narrow rings of debris that orbit the star at distances corresponding respectively to the Main Asteroid Belt and Uranus in our Solar System. According to this model, the largest planet in the system would likely be associated with an adjacent debris belt. In the other, warm material is in a broad disk, is not concentrated into asteroid belt-like rings, and is not associated with any planets in the inner region.

NASA’s SOFIA aircraft before a 2015 flight to observe a nearby star. Credit: Massimo Marengo.

Using the new SOFIA images, Su and her team were able to determine that the warm material around Epsilon Eridani is arranged like the first model suggests. In essence, it is in at least one narrow belt, rather than in a broad continuous disk. As Su explained in a NASA press release:

“The high spatial resolution of SOFIA combined with the unique wavelength coverage and impressive dynamic range of the FORCAST camera allowed us to resolve the warm emission around eps Eri, confirming the model that located the warm material near the Jovian planet’s orbit. Furthermore, a planetary mass object is needed to stop the sheet of dust from the outer zone, similar to Neptune’s role in our solar system. It really is impressive how eps Eri, a much younger version of our solar system, is put together like ours.”

These observations were made possible thanks to SOFIA’s on-board telescope, which has a greater diameter than Spitzer’s – 2.5 meters (100 inches) compared to Spitzer’s 0.85 meters (33.5 inches). This allowed for far greater resolution, which the team used to discern details within the Epsilon Eridani system three times smaller than what had been observed using the Spitzer data.

In addition, the team made use of SOFIA’s powerful mid-infrared camera – the Faint Object infraRed CAmera for the SOFIA Telescope (FORCAST). This instrument allowed the team to study the strongest infrared emissions coming from the warm material around the star – at wavelengths between 25 and 40 microns – which are otherwise undetectable by ground-based observatories.

This artist’s conception shows the Epsilon Eridani system, the closest star system whose structure resembles that of a young Solar System. Credit: NASA/JPL/Caltech

These observations further indicate that the Epsilon Eridani system is much like our own, albeit in younger form. In addition to having asteroid belts and a debris disk that is similar to our Main Belt and Kuiper Belt, it appears that it likely has more planets waiting to be found within the spaces between. As such, the study of this system could help astronomers to learn things about the history of our own Solar System.

Massimo Marengo, one of the co-authors of the study, is an Associate Professor in the Department of Physics & Astronomy at Iowa State University. As he explained in an Iowa State University press release:

“This star hosts a planetary system currently undergoing the same cataclysmic processes that happened to the solar system in its youth, at the time in which the moon gained most of its craters, Earth acquired the water in its oceans, and the conditions favorable for life on our planet were set.”

At the moment, more studies will need to be conducted on this neighboring star system in order to learn more about its structure and confirm the existence of more planets. It is expected that the deployment of next-generation instruments – like the James Webb Space Telescope, scheduled for launch in October of 2018 – will be extremely helpful in that regard.

“The prize at the end of this road is to understand the true structure of Epsilon Eridani’s out-of-this-world disk, and its interactions with the cohort of planets likely inhabiting its system,” Marengo wrote in a newsletter about the project. “SOFIA, by its unique ability of capturing infrared light in the dry stratospheric sky, is the closest we have to a time machine, revealing a glimpse of Earth’s ancient past by observing the present of a nearby young sun.”

The post A Star System Only 10 Light-Years Away is Our Mini-Doppelganger appeared first on Futurism.

During the 1960s, Freeman Dyson and Nikolai Kardashev captured the imaginations of people everywhere by making some radical proposals. Whereas Dyson proposed that intelligent species could eventually create megastructures to harness the energy of their stars, Kardashev offered a three-tiered classification system for intelligent species based on their ability to harness the energy of their planet, solar system, and galaxy, respectively.

With missions that are now capable of locating extrasolar planets (e.g. the Kepler Space Observatory), scientists have been on the lookout for signs of possible alien megastructures. Unfortunately, aside from some very arguable results, no concrete evidence has yet come to light. Luckily for us, in a study from the Free University of Tbilisi, Professor Zaza Osmanov offers some new insight on why megastructures may have eluded us so far.

While fascinating, the idea of alien megastructures invariably suffers from the same problem as all other attempts to find signs of intelligent life in our Universe. Basically, if intelligent life exists, why have we consistently failed to find any evidence of it? This conundrum, which was summed up by Enrico Fermi in the 1950s (thereafter known as the Fermi Paradox), has hung like a shadow over all our efforts.

Artist’s impression of an orbiting swarm of dusty comet fragments around Tabby’s Star. Image Credit: NASA/JPL-Caltech

For example, in the summer of 2015, a team of astronomers announced that they found what might be an indication of an alien megastructure around Tabby’s Star (KIC 8462852). However, they were quick to point out that any number of possibilities could explain the strange dimming pattern coming from the star, and subsequent studies offered even more plausible explanations — such as the star having consumed a planet at some point in its past.

To this, Osmanov has argued that the problem is that we are looking in the wrong places. Last year, he wrote a research paper in which he ventured that an alien super civilization — i.e. one that was consistent with a Level II Kardashev civilization — would likely use ring-like megastructures to harness the power of their stars. This is in contrast to the traditional concept of a “Dyson’s Sphere,” which would consist of a spherical shell.

Furthermore, he argued that these Dyson Rings would likely be built around pulsars rather than stars, and offered estimates of their dimensions, which depend on the rotational speed of the pulsar. In his latest study, titled “Are the Dyson rings around pulsars detectable?”, Osmanov extends the problem of spotting alien megastructures to the observational realm.

Specifically, he addressed how alien megastructures could be spotted by identifying their infrared energy signatures, and at what kinds of distances. By examining how such structures would vary in terms of the amount of IR radiation they would emit, he believes that they could be spotted within our local Universe using existing instruments.

Artist’s impression of the exotic double object that consists of a tiny neutron star orbited every two and a half hours by a white dwarf star. Image Credit: ESO/L. Calçada

Once again, it comes down to the diameter of the structures, which would in turn depend on the type of pulsar they orbit. As he states in the paper:

A couple of years earlier before publishing the paper of Kardashev, the prominent physicist Freeman Dyson has suggested that if such superadvanced (in the terminology of Kardashev, Level-II) extraterrestrials exist, for increasing efficiency of energy consumption they can construct a thin spherical shell with radius ~1 AU surrounding a host star (Dyson 1960). It has been argued that for such distances the sphere will be in the so-called habitable zone (HZ) and therefore the sphere will have the temperature of the order of (200–300 K), making this object visible in the infrared spectrum.

Extending this to pulsars, Osmanov estimates that the habitable zone around a relatively slowly rotating pulsar (with a period of about half a second) would be on the order of 0.1 AU. According to his calculations, a ring-like megastructure orbiting a pulsar at this distance would have a temperature on the order of 390 K (116.85 °C; 242.33 °F), which means that the megastructure would be visible in the IR band.
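To see why a ~390 K ring lands in the infrared, Wien's displacement law gives the wavelength at which a body of a given temperature radiates most strongly. A minimal Python sketch (the temperature is the study's figure; the formula is textbook blackbody physics):

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, meter-kelvins

def peak_wavelength_microns(temp_kelvin: float) -> float:
    """Peak of a blackbody spectrum via Wien's law: lambda_max = b / T."""
    return WIEN_B / temp_kelvin * 1e6  # convert meters to micrometers

# A ring at ~390 K radiates most strongly near 7.4 micrometers --
# squarely in the infrared range covered by instruments like WISE.
print(peak_wavelength_microns(390))  # ~7.4
```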

From this, Osmanov concludes that modern IR telescopes — such as the Very Large Telescope Interferometer (VLTI) and the Wide-field Infrared Survey Explorer (WISE) — would have the necessary capacity to monitor nearby pulsars for signs of alien megastructures. He further concludes that for this purpose, these telescopes would have an effective range of up to 200 parsecs (~652 light years).

Ever since it was first announced in 2015, there has been speculation as to what could account for the dimming of KIC 8462852. Image Credit: Eburacum45/SentientDevelopments.com

In addition, he goes on to state that within this volume of space, multiple candidates could be found and examined using these same existing instruments:

We have considered the sensitivity of VLTI and by taking into account its higher possible angular resolution, 0.001 mas, it has been shown that the maximum distance ~0.2 kpc leads to the IR spectral density of the order of 7.4 mJy, which in turn, can be detected by the VLTI. We have argued that by monitoring the nearby zone of the Solar System approximately 64 pulsars are expected to be located inside it.
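For a rough sense of the small-angle arithmetic behind these figures, a ring's angular size in arcseconds is simply its physical size in AU divided by its distance in parsecs. A minimal sketch, assuming for illustration a ring diameter of ~0.2 AU (twice the 0.1 AU habitable-zone radius mentioned above; the paper's actual dimensions depend on the pulsar):

```python
def angular_size_mas(diameter_au: float, distance_pc: float) -> float:
    """Small-angle formula: theta[arcsec] = size[AU] / distance[pc],
    returned here in milliarcseconds."""
    return diameter_au / distance_pc * 1000

# A ~0.2 AU-diameter ring at the 0.2 kpc range discussed above:
print(angular_size_mas(0.2, 200))  # ~1 mas, well above the quoted 0.001 mas
```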

Beyond these distances, up to the kiloparsec range (about 3260 light years), the angular resolution of these telescopes would not be enough to detect the structure of any rings. As such, finding megastructures at this distance would require telescopes that can conduct surveys in the UV band — which corresponds to the surface temperatures of neutron stars (7000 K). However, this would have to wait upon the development of more sensitive instruments.

“As we see, the search of infrared rings is quite promising for distances up to ~0.2 kpc, where one will be able to monitor potentially 64 ± 21 pulsars by using the IR instruments,” he concluded. “Observation of distant pulsars (up to ~1 kpc), although will significantly increase the total number of potential objects – to 1600 ± 530, but at this moment the UV instruments cannot provide such a level of sensitivity.”

There are Dyson rings and spheres and this, an illustration of a Dyson swarm. Could this or a variation of it be what we’re detecting around KIC 8462852? Not likely, but a fun thought experiment. Image Credit: Falcorian/Wikimedia Commons

So while the range would be limited, the opportunities for testing this hypothesis would not. All told, between 43 and 85 candidates exist within the observable volume of space, according to Osmanov’s estimates. And with existing IR telescopes — and next-generation telescopes like the James Webb Space Telescope — up to the task, surveys could be conducted that would yield valuable information either way.

The concept of alien megastructures remains a controversial one, and for good reason. For one, the potential evidence for such structures – i.e. the periodic dimming of a star – can easily be explained by other means. Second, there is an undeniable degree of wishful thinking when it comes to the search for extra-terrestrial intelligence, which means that any findings could be subject to bias.

Nevertheless, the search for intelligent life remains a very fascinating field of study, and a necessary one at that. Not only would finding other examples of life in our Universe put to rest one of the most burning existential questions of all time — are we alone? — it would also allow us to learn a great deal about what other forms life could take. Is all life carbon-based, or are there other possibilities? We would like to know!

In the end, the Fermi Paradox will only be resolved when we find definitive evidence of intelligent life other than our own. In the meantime, we can expect to keep searching until we find something. And anything that makes the search easier by telling us where to look (and what, specifically, to look for) is sure to help.

The post We May Be Looking for Alien Megastructures in the Wrong Places appeared first on Futurism.

The Physics of Entanglement

Traditionally, measuring quantum states is a tedious affair. The technique used, called quantum tomography, requires measuring multiple copies of the quantum state in various ways, in order to count all possible outcomes and arrive at a full set of probabilities. Although important in testing quantum systems, this is not very practical. That’s why researchers from the Center for Quantum Technologies at the National University of Singapore and the California Institute of Technology devised a much simpler method.
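To see why tomography is so laborious, consider a toy simulation: pinning down even a single qubit's state means consuming many identical copies, measuring batches of them along different axes just to estimate each probability. A minimal sketch (the state and copy counts below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "unknown" qubit: the |+> state, whose true Bloch vector
# is (1, 0, 0). P(+1) when measuring along the X, Y, and Z axes:
p_plus_by_axis = {"X": 1.0, "Y": 0.5, "Z": 0.5}

def estimate_component(p_plus: float, copies: int) -> float:
    """Consume `copies` identical copies measured along one axis and
    estimate <sigma> = P(+1) - P(-1) from the observed counts."""
    plus_count = rng.binomial(copies, p_plus)
    return 2 * plus_count / copies - 1

for axis, p in p_plus_by_axis.items():
    print(axis, estimate_component(p, copies=10_000))
# Prints roughly (1.0, 0.0, 0.0) -- at the cost of 30,000 copies.
```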

In a study published in the journal Nature Communications, the researchers proposed a measurement system that can pinpoint the fingerprint of any two-particle entangled quantum state. This device-independent way of certifying quantum states, among other things, “can bound specific quantities like the amount of randomness, the length of the secret key in quantum cryptography,” according to the study.

But first, a bit of background: Quantum entanglement is a phenomenon in which two particles share a single quantum state, held in a multitude of undecided outcomes or possibilities. Measuring one instantly constrains the outcome for the other, regardless of distance. As such, entanglement is at the heart of quantum technologies like quantum computing and quantum cryptography — as well as the possibility of quantum teleportation.

For instance, quantum computing relies on entangled particles known as quantum bits (qubits). These qubits hold quantum states that can be a 0, a 1, or a superposition of both. Building on existing findings regarding qubits, the team extended their work to higher-dimensional units known as qudits, which can store more information — not just 0s and 1s, but a 0, 1, 2, 3, 4, and so on.

Securing Quantum Technologies

The general problem is determining whether quantum systems actually work and can deliver on the properties expected of them. “I like to see our work as bringing the power of testing quantum devices to the consumers who use them,” NUS researcher Goh Koon Tong explained to Phys.org. “Currently, only those who build the devices or understand the engineering aspect of them can perform the test.”

The team hopes that it would be possible for engineers and consumers of quantum technologies to perform such tests in the future. They encourage other researchers to develop ways to incorporate their device-independent checks that would allow for self-testing quantum technologies. According to researcher Valerio Scarani from NUS, there’s already interest. “Of all my work in the past five years, this has attracted the most attention,” he said.

This would allow engineers to spot errors in quantum technologies and devices that fail to perform as promised. That’s especially crucial, since quantum computing is poised to be the future of information processing, which could improve the way we handle problems and conduct research in various fields. Likewise, quantum cryptography is also being promoted as the future of cybersecurity.

The post New Breakthrough Work Finally Reveals Quantum “Fingerprints” appeared first on Futurism.

Gravitational Waves Revealing The Universe

The Laser Interferometer Gravitational-wave Observatory (LIGO) just detected gravitational waves, ripples in time and space, for the third time. Two black holes collided, forming a huge black hole 49 times more massive than our sun, and this generated the waves. This kind of collision was also the cause of the waves detected previously by LIGO, although the masses of the black holes varied. This repetition of the discovery confirms that a new area of astronomy now exists.

“We have further confirmation of the existence of stellar-mass black holes that are larger than 20 solar masses — these are objects we didn’t know existed before LIGO detected them,” MIT’s David Shoemaker, a LIGO spokesperson, said in a press release. “It is remarkable that humans can put together a story, and test it, for such strange and extreme events that took place billions of years ago and billions of light-years distant from us. The entire LIGO and Virgo scientific collaborations worked to put all these pieces together.”

In September 2015, LIGO first directly observed these gravitational waves during its first run since receiving Advanced LIGO upgrades. The second detection followed in December 2015, and this latest detection, called GW170104, followed in January of this year. In each case, both of LIGO’s twin detectors perceived gravitational waves from the collisions of the black holes, but this latest observation does offer a few new pieces of information.

For example, it suggests which directions the black holes might be spinning in, and indicates that at least one of the black holes in the pair may not be aligned with the overall orbital motion. Scientists are hoping that they can learn more about how binary black holes form by making more LIGO observations.

Image Credit: S. Ossokine, A. Buonanno/Max Planck Institute for Gravitational Physics

LIGO’s Future

This work is testing, and thus far providing support for, the theories proposed by Albert Einstein. For example, general relativity says that dispersion — the effect in which waves of different wavelengths travel at different speeds through a physical medium — cannot happen in gravitational waves. True to that prediction, LIGO has found no evidence of dispersion in gravitational waves.

“It looks like Einstein was right — even for this new event, which is about two times farther away than our first detection,” Georgia Tech’s Laura Cadonati, the Deputy Spokesperson of the LIGO Scientific Collaboration (LSC), said in the press release. “We can see no deviation from the predictions of general relativity, and this greater distance helps us to make that statement with more confidence.”

Moving forward, the LIGO-Virgo team will keep searching LIGO data for any hint of gravitational waves emanating from the far corners of the Universe. The sensitivity of the detector will improve during the next run starting in late 2018 after researchers apply technical upgrades, hoping to see even more. Caltech’s David Reitze, the LIGO Laboratory’s executive director, said in the press release, “While LIGO is uniquely suited to observing these types of events, we hope to see other types of astrophysical events soon, such as the violent collision of two neutron stars.”

The post LIGO Just Detected the Oldest Gravitational Waves Ever Discovered appeared first on Futurism.

Stripping It Bare

Essentially, a black hole is an object whose gravity is so strong that it pulls in everything around it. That’s what became of a small molecule that researchers from the Stanford-operated SLAC National Accelerator Laboratory blasted with the world’s most powerful X-ray laser. The molecule turned into an atom-sucking “molecular black hole.”

Image credit: DESY/Science Communication Lab

Worry not, however, as such an effect requires an X-ray laser that’s “a hundred times more intense than what you would get if you focused all the sunlight that hits the Earth’s surface onto a thumbnail,” Sebastien Boutet said in a press release. That’s how strong the Coherent X-ray Imaging instrument used for this experiment is, said Boutet, who is a co-author of the study the team published in the journal Nature. It is capable of producing hard X-rays at the highest energies the equipment can deliver.

The researchers used special mirrors to focus the X-ray beam into a very small spot, which was a little bit over 100 nanometers in diameter, to check three types of samples with heavy atoms: individual xenon atoms (with 54 electrons each) and two types of molecules with single iodine atoms (containing 53 electrons each). They didn’t expect the extreme effect the X-ray laser would actually have on these samples, which surpassed their calculations based on previous studies.

X-Ray Experiments

When blasted with the X-ray laser beam from the Coherent X-ray Imaging instrument, the molecule’s iodine atom lost more than 50 electrons within just 30 femtoseconds — millionths of a billionth of a second. The void that was left then pulled in electrons from the rest of the molecule, and those were blasted out in turn before the molecule finally blew apart.


“We think the effect was even more important in the larger molecule than in the smaller one, but we don’t know how to quantify it yet,” lead researcher Artem Rudenko explained in the press release. “We estimate that more than 60 electrons were kicked out, but we don’t actually know where it stopped because we could not detect all the fragments that flew off as the molecule fell apart to see how many electrons were missing. This is one of the open questions we need to study.”

That effect, of course, wasn’t something the researchers intended. However, they did learn a very important lesson from it. Using X-rays with ultrahigh intensities is necessary for experiments that try to image individual biological objects — such as bacteria and viruses — at high resolution. At the same time, it’s also useful for studying the charge dynamics of complex molecules and for understanding how matter behaves in extreme conditions.

“For any type of experiment you do that focuses intense X-rays on a sample, you want to understand how it reacts to the X-rays,” Daniel Rolles, who also headed the study, said in the press release. “This paper shows that we can understand and model the radiation damage in small molecules, so now we can predict what damage we will get in other systems.”

The post Scientists Just Produced A “Molecular Black Hole” Using X-Rays appeared first on Futurism.

Teaching Individual Neurons

The Neurophysiology department at the University of Lund has discovered that individual neurons can be taught patterns, rather than just responding to a single, specific signal. This means that individual Purkinje cells (cells that control motor movement) are capable of learning, rather than learning being an emergent property (a property that a collection has but individual members do not).

Purkinje Cells (In Red). Image Credit: ZEISS Microscopy, Flickr

Scientists’ previous understanding was that learning occurred through the interaction of an entire neural network; however, the study states:

Cerebellar control and coordination of motor behaviors may rely more on intracellular mechanisms and less on neuronal network properties than previously thought. It also suggests the capacity for information storage in the individual neuron is vastly greater and of a very different nature than suggested by the dominant paradigm.

The Lund researchers ‘taught’ the cells over a number of hours to associate different signals. Eventually, this meant the cells could learn several reactions in a series. The responses followed the time pattern of the stimuli, for example: They responded to “Signal – brief pause – signal – long pause – signal” with “response – brief pause – response – long pause – response.”

Dan-Anders Jirenhed, a researcher of associative learning at Lund, concluded that, “this means that the brain’s capacity for learning is even greater than previously thought!”

Learning Difficulties and AI

Disruptions in these systems could be responsible for a number of learning difficulties, such as autism, ADHD, and language disorders, meaning that scientists may base future treatment therapies on principles derived from the study.


The most exciting application of the discovery, though, is for deep learning and artificial intelligence (AI), most implementations of which currently use neural networks. In order to build human-like intelligence, researchers need to understand how our minds accrue information.

This study shows that our minds learn on a single-cell basis, rather than through the combined effort of a neural network. Therefore, when building AI, we need to employ a similar model – programming the individual components to learn. If done successfully, this could result in more complex tasks being dealt with in a more efficient way, lessening the divide between human and machine even more.

The post Revolutionary Discovery About the Human Brain Could Lead to Second-Gen AI appeared first on Futurism.

Communication is Key

Just like flying cars and artificial intelligence, nanobots are mainstays in a number of science fiction flicks.  While there are efforts to develop the technology, researchers are stumped by the difficulty of making miniature electronic devices. Shrinking computer circuitry to nano-sized bits isn’t yet possible.

Photo Credit: Idaho National Laboratory / Flickr

Furthermore, nanobots usually work better together — Big Hero 6, yeah? — which makes it necessary for these miniature devices to communicate with each other.

“A nanomachine by itself cannot do much,” Josep Miquel Jornet from the University at Buffalo in New York told New Scientist. “Just like you can do many more things if you connect your computer to the internet, nanomachines will be able to do many, many more things if they are able to interact.”

However, current communication technologies for nanotech still rely on large components. Researchers from the Complutense University of Madrid (UCM) in Spain may have found a way around this problem.

In a study published in the journal Nature Communications, Reynaldo Villalonga and his colleagues took their cue from biology.  “[A]n interesting approach for establishing communication at the nanometric level is to mimic how nature communicates,” the researchers wrote.

The Future of Nanobots

Cells and bacteria communicate using chemical signals, and there has been interest in labs around the world, like those at the Koch Institute and UCM, in getting nanoparticles to do the same thing. Villalonga and his team demonstrated that this is possible. Their research involved two so-called Janus particles — particles with one side made of porous silica carrying a dye and the other side made of gold — which were coated with reactive chemicals.

The communication process took place through chemical transformations that occurred on the particles’ surfaces as they interacted with each other, culminating in the release of dye so the scientists knew the communication was successful. “It seems simple, but this is happening between machines that are tens of nanometers across,” Jornet, who wasn’t involved in the study, told New Scientist.

While it has yet to be tested inside a human body, the method shows potential for disease detection and prevention technology on a single-cell scale. For instance, the silica side of the nanoparticle could be made to carry medication instead of dye, say for cancer treatment.

“This is one of the first steps toward constructing a nanosized robot,” says Villalonga. “Our dream is to construct an autonomous nanomachine that can be used to fight cancers.”

Indeed, nanobots may well be part of our bodies sooner than we think. Ray Kurzweil thinks it could happen by the 2030s. For that to be possible, studies like this one from Villalonga and his colleagues are necessary.

The post Scientists Created Artificial Nanoparticles That Can Communicate With Each Other appeared first on Futurism.

The “Neighborhood” Cell

University of Amsterdam (UvA) researchers have discovered a new type of neuron that plays a key role in larger-scale navigation. The scientists colloquially call the neuron, seated in the perirhinal cortex, the “neighborhood cell” because it deals with the ability to tell the difference between general areas of an environment.

The cells work in conjunction with “hippocampal place cells,” which provide more fine-grained assessments of location. To use an analogy from the study, if an individual is trying to get from her house to a remote location in another region, the  perirhinal cortex neurons deal with navigating through streets and neighborhoods, while the hippocampal place cells are only needed when she needs to situate herself precisely, such as gauging “which houses she will pass in succession.”

The scientists discovered the new cells by monitoring the neural activity in four areas of rats’ brains (the perirhinal cortex, the hippocampus, and two sensory areas) while the animals navigated a figure-eight maze that overlaps in the middle. They found that while the activity of the hippocampal place cells depended on which area of the maze a rat was in, activity in the perirhinal cortex was more sustained.

Helping People With Alzheimer’s

While human brains may not work exactly like the rat brains studied in this research, these findings may help us understand how we navigate our environment on a more general scale. They may also provide insight into some of the conditions in which this process is impaired. Among other conditions, the study states that “the lack of this capacity is one of the hallmarks of Alzheimer’s disease.”

Alzheimer’s specialists could eventually develop treatment strategies for those specifically affected by wandering if they focus therapy on this area of the brain. Alzheimer’s disease affects up to 5.5 million people in the U.S., and the wandering it causes is extremely worrying for family members and caregivers, as it puts these vulnerable individuals at risk.

The treatment strategies could work in conjunction with other possible breakthroughs in Alzheimer’s, such as the drug nilotinib and light therapy, in order to provide a spectrum of approaches to stall the development of this debilitating disease.

The post Scientists Just Discovered a New Type of Neuron appeared first on Futurism.

Commercial Quantum Computing

If you are not already familiar, quantum computing is, simply put, a type of computation that uses qubits to encode data instead of traditional bits (1s and 0s). In short, it allows for the superposition of states, meaning data can be in more than one state at a given time.

So, while traditional computing is limited to information belonging to only one or another state, quantum computing widens those limitations. As a result, more information can be encoded into a much smaller type of bit, allowing for much larger computing capacity. And, while it is still in relatively early development, many believe that quantum computing will be the basis of future technologies, advancing our computational speed beyond what we can currently imagine.
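To make the state-counting concrete, here is a minimal numpy sketch (a classical simulation, not real quantum hardware): an n-qubit register is described by 2^n amplitudes — one for every classical bit pattern at once — and measurement probabilities come from squaring them.

```python
import numpy as np

n_qubits = 3
dim = 2 ** n_qubits  # a classical 3-bit register holds ONE of these 8 values;
                     # a 3-qubit register carries an amplitude for EACH of them

# Equal superposition over all basis states (what Hadamard gates on |000> give):
state = np.full(dim, 1 / np.sqrt(dim))

# Born rule: the probability of reading out each bit pattern is |amplitude|^2.
probs = np.abs(state) ** 2
print(probs)        # eight equal probabilities of 0.125, for 000 through 111
print(probs.sum())  # 1.0
```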

It was extremely exciting then when researchers from MIT, Harvard University, and Sandia National Laboratories unveiled a simpler way of using atomic-scale defects in diamond materials to build quantum computers in a way that could possibly allow them to be mass produced.

For this process, defects are the key. They are precisely and perfectly placed to function as qubits and hold information. Previous processes were difficult, complex, and not precise enough. This new method creates targeted defects in a much simpler manner. Experimentally, the defects created were, on average, within 50 nanometers of their ideal locations.

The significance of this cannot be overstated. “The dream scenario in quantum information processing is to make an optical circuit to shuttle photonic qubits and then position a quantum memory wherever you need it,” says Dirk Englund, an associate professor of electrical engineering and computer science, in an interview with MIT. “We’re almost there with this. These emitters are almost perfect.”

Image Credit: carmule / Pixabay

A Quantum Future

While the reality of quantum computers, let alone mass produced quantum computers, is still a bit of a ways off, this research is promising. One of the main remaining hurdles is how these computers will read the qubits. But these diamond defects aim to solve that problem because they naturally emit light, and since the light particles emitted can retain superposition, they could help to transmit information.

The research goes on to detail how refining these diamond materials allowed the qubits’ light signal to be amplified. By the end, the researchers found that the light emitted was approximately 80 to 90 percent as bright as possible.

If this work eventually leads to the full creation of a quantum computer, life as we know it would change irrevocably. From completely upending modern encryption methods to allowing us to solve previously “unsolvable” problems, our technology and infrastructure would never be the same.  Moreover, the limitations that currently exist in how we store and transmit information would shatter, opening new opportunities for—as yet—unimaginable exploration.

The post MIT Just Unveiled A Technique to Mass Produce Quantum Computers appeared first on Futurism.

Potential Hazards

Gene editing is a relatively new technology, and no other method currently available is as fast, precise, and efficient as CRISPR-Cas9. It has had unprecedented success in a number of fields, particularly in medicine—allowing scientists to edit HIV out of living organisms and engineer an end to malaria through the modification of mosquitoes.

Chinese scientists assert that they’ve already used CRISPR on a human being, and clinical trials are in the works in many locales (including the United States). Given the expected widespread use of CRISPR in the world of tomorrow, researchers from the Columbia University Medical Center (CUMC) are offering a word of caution. A study published in the journal Nature Methods revealed that CRISPR-Cas9 can lead to unintended mutations in a genome.

“We feel it’s critical that the scientific community consider the potential hazards of all off-target mutations caused by CRISPR, including single nucleotide mutations and mutations in non-coding regions of the genome,” co-author Stephen Tsang, from CUMC, said in a press release.

In the study, Tsang’s team sequenced the genome of mice that they had previously used CRISPR on in an attempt to cure their blindness. They looked for all possible mutations, even those that might have changed just a single nucleotide. The researchers discovered a staggering 1,500 single-nucleotide mutations and over 100 larger deletions and insertions in the genomes of two of the recipients.

Refining Methods

The study isn’t suggesting a wholesale ban on CRISPR—not even close. After all, CRISPR was found to be effective in the mice they used it on. Instead, the scientists are offering caution, and calling for a clearer method of checking for mutations, deletions, or insertions into genomes. They propose doing whole-genome sequencing instead of relying on computer algorithms, which didn’t detect any of the mutations they discovered in the study.

“We hope our findings will encourage others to use whole-genome sequencing as a method to determine all the off-target effects of their CRISPR techniques and study different versions for the safest, most accurate editing,” Tsang said.

“[P]redictive algorithms seem to do a good job when CRISPR is performed in cells or tissues in a dish, but whole genome sequencing has not been employed to look for all off-target effects in living animals,” added co-author Alexander Bassuk, from the University of Iowa.

The potential of the CRISPR-Cas9 system as a gene editing technology is undeniable. As previously mentioned, it has already seen success in developing possible cancer treatments and in making animals disease-resistant, and it has shown promise in replacing antibiotics altogether.

It has also been used to program living cells into digital circuitry. Among its more exciting — and perhaps more dangerous — applications is nothing short of a Jurassic Park-esque revival of extinct species. There seems to be no limit to what CRISPR can do.

However, as this research reveals, in order to realize the full potential of gene editing, and to ensure that living organisms aren’t harmed as a result of the edits that we make to DNA—the source code of life itself—we must advance slowly and with careful scrutiny.

The post CRISPR Gene Editing May Cause Hundreds of Unintended Mutations in DNA appeared first on Futurism.

Redefining Physics

For the most part, we seem to have our universe figured out: there are four fundamental forces that govern the interactions of every conceivable object, from atoms to planets. These four — the weak and strong nuclear forces, electromagnetic force, and gravity — explain all there is. Except they don’t, really.

As much as we’ve come to understand these forces, there are still phenomena that the standard model of physics and Einstein’s theory of general relativity don’t quite explain. For instance, there’s more gravity in space than all visible matter can supposedly produce. That’s why some physicists point to undiscovered dark matter as the source, while others suggest that a hidden “fifth force” is out there. One such physicist is Andrea Ghez, director of the University of California, Los Angeles, Galactic Center Group.

The center of the Milky Way galaxy, with the supermassive black hole Sagittarius A* (Sgr A*), located in the middle. Image credit: X-ray: NASA/UMass/D.Wang et al., IR: NASA/STScI

The key to detecting this fifth force, according to Ghez and her team, is studying the supermassive black hole at the center of the Milky Way and the stars around it. “By watching the stars move over 20 years using very precise measurements taken from Keck Observatory data, you can see and put constraints on how gravity works,” she explained in a press release. “If gravitation is driven by something other than Einstein’s theory of General Relativity, you’ll see small variations in the orbital paths of the stars.” They published their method in the journal Physical Review Letters.

Filling the Holes

This research could go a long way toward answering questions that have arisen since Einstein published his theories, Ghez said.


“Einstein’s theory describes [gravity] beautifully well, but there’s lots of evidence showing the theory has holes,” Ghez said in her interview for the press release. “The mere existence of supermassive black holes tells us that our current theories of how the Universe works are inadequate to explain what a black hole is.”

In particular, the team is excited to observe a star called S0-2 as it passes closer than ever to the Milky Way’s supermassive black hole next year. If the orbital paths of these stars show deviations from what general relativity predicts, the researchers might discover clues about the supposed fifth force.
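For a sense of the scale of the relativistic effect in play, general relativity predicts that an orbit’s point of closest approach should advance each revolution by δφ = 6πGM / (c²a(1 − e²)). A back-of-the-envelope Python sketch using commonly cited values for Sgr A* and S0-2 — illustrative assumptions on our part, not the team’s measurements:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

# Commonly cited values, for illustration only:
M = 4.3e6 * M_SUN  # mass of Sagittarius A*
a = 970 * AU       # semi-major axis of S0-2's orbit
e = 0.88           # eccentricity of S0-2's orbit

# GR periapsis advance per orbit: 6*pi*G*M / (c^2 * a * (1 - e^2))
dphi = 6 * math.pi * G * M / (C**2 * a * (1 - e**2))
print(math.degrees(dphi) * 60)  # ~12 arcminutes per ~16-year orbit
```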

If a fifth force does exist and Ghez’s method discovers it, we’d probably need to reexamine the physics of our Universe. “This is really exciting. It’s taken us 20 years to get here, but now our work on studying stars at the center of our galaxy is opening up a new method of looking at how gravity works,” Ghez said in the interview.

The post Our Galaxy’s Supermassive Black Hole May Reveal the Universe’s Hidden Fifth Force appeared first on Futurism.

A Cut-Throat Industry

The inventor Thomas Edison is quoted as saying, “If you want to succeed, get some enemies.”

His observation cuts to the heart of an ongoing debate within the scientific community — whether the field would be better served by fostering collaboration or by fueling competition. While all fields of science are shifting towards increased international cooperation, organizations like NASA and IBM are also looking to initiate breakthroughs by sponsoring contests.

And these competitions have some historical support for their usefulness to humanity. Scientists have long harbored rivalries — some friendly, some not — but many of which spurred scientific and technological advances that have brought us to where we are today. Here are five examples of when scientists competed and we all won.

1. The Shape of the Earth — Wallace vs. Hampden

Image Credit: NASA

The year was 1870, and society’s understanding of the universe — and even Earth itself — was quite limited. So limited that many people, including John Hampden, held fast to the notion that the Earth was flat. He was so convinced of this idea that he was willing to offer a wager on it, and Alfred Russel Wallace stepped up to the challenge. However, after they completed the agreed-upon measurements (which supported the fact that Earth is spherical), Hampden refused to pay up. At least NASA pays Wallace homage.

2. Alternating and Direct Current — Tesla vs. Edison

Edison was certainly one to practice what he preached, and one of his enemies was a former employee, Nikola Tesla. Not long after the flat Earth wager, Edison promised Tesla a reward of $50,000 if he was able to improve Edison’s direct current (DC) power plants. Tesla completed the improvements in just a few months, but when he asked to be paid, Edison said he’d just been joking. Thus was born a rivalry for the history books, with Edison backing DC electricity and Tesla pioneering alternating current (AC) power generation and transmission. We use both today.

3. The Structure of DNA — Watson and Crick vs. Pauling

Image Credit: Caroline Davis2010/Visual Hunt

While we don’t usually think of Nobel Prize winners as beating out the competition, biologist James Watson certainly bragged about his victory in his book The Double Helix. He and Francis Crick teamed up to discern the structure of our genetic code before the famous chemist, Linus Pauling, did. When Pauling’s research led him down a wrong path, Watson wrote in his book that he and Crick celebrated with a toast, “to the Pauling failure … Though the odds still appeared against us, Linus had not yet won his Nobel.” Watson and Crick won the prize for their discovery of the double helix in 1962.

4. Black Holes — Hawking vs. Thorne

Image Credit: M. Weiss/NASA

Black holes have been puzzling scientists ever since they were predicted by Albert Einstein in his general theory of relativity. They have long been a particular interest of Stephen Hawking, but perhaps he decided to play devil’s advocate with himself when he agreed on a wager with astrophysicist Kip Thorne that Cygnus X-1 was not a black hole. It was, and not only did we find the first hard evidence of these collapsed stars, but Thorne earned his prize — a soft porn magazine subscription (according to some sources, anyway).

5. The Cause of Climate Change — Mashnich and Bashkirtsev vs. Annan

In 2005, James Annan, a climate modeler, staked $10,000 that global temperatures would continue to rise until 2017, demonstrating that the temperature was dependent on greenhouse gases, not solar activity. Two Russian solar physicists, Galina Mashnich and Vladimir Bashkirtsev, took him up on that bet. This wager is an example of the controversy that has fueled numerous studies that focus on climate change and how we can stop it. The bet hasn’t been settled yet, but last year Annan told Reuters that, unless 2017 is the coldest year we’ve seen in almost a century, he expects to win. And, if this spring is any indication, Annan has nothing to worry about — at least as far as the bet goes.

The post Wagers, Contests, and Competitions That Inspired Scientific Breakthroughs appeared first on Futurism.

There’s a revolution happening in biology, and its name is CRISPR.

CRISPR (pronounced “crisper”) is a powerful technique for editing DNA. It has received an enormous amount of attention in the scientific and popular press, largely based on the promise of what this powerful gene editing technology will someday do.

CRISPR was Science magazine’s 2015 Breakthrough of the Year; it’s been featured prominently in the New Yorker more than once; and The Hollywood Reporter revealed that Jennifer Lopez will be the executive producer on an upcoming CRISPR-themed NBC bio-crime drama. Not bad for a molecular biology laboratory technique.

Two of the CRISPR co-inventors, Emmanuelle Charpentier (middle-left) and Jennifer Doudna (middle-right), rubbing elbows with celebs after receiving the 2015 Breakthrough Prize in Life Sciences. Image Credit: Breakthrough Prize Foundation, CC BY-ND

CRISPR is not the first molecular tool designed to edit DNA, but it gained its fame because it solves some longstanding problems in the field. First, it is highly specific. When properly set up, the molecular scissors that make up the CRISPR system will snip target DNA only where you want them to. It is also incredibly cheap. Unlike previous gene editing systems which could cost thousands of dollars, a relative novice can purchase a CRISPR toolkit for less than US$50.
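To make “snip only where you want” concrete, here is a toy Python sketch of the targeting logic: Cas9 cuts where a ~20-letter guide sequence matches the DNA immediately upstream of an “NGG” motif (the PAM). The sequences below are made up, and real targeting must also handle the opposite strand and near-matches, which this sketch ignores:

```python
import re

def find_cas9_sites(genome: str, guide: str) -> list[int]:
    """Return start positions where the guide matches the DNA and is
    immediately followed by an NGG PAM (N = any base). Toy model:
    exact matches on one strand only."""
    pattern = "(?=(" + re.escape(guide) + "[ACGT]GG))"
    return [m.start() for m in re.finditer(pattern, genome)]

# Made-up sequences, purely for illustration:
guide = "GATTACAGATTACAGATTAC"            # a 20-letter guide sequence
genome = "TTT" + guide + "TGG" + "AAAA"   # one target site with a TGG PAM
print(find_cas9_sites(genome, guide))     # [3]
```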

Research labs around the world are in the process of turning the hype surrounding the CRISPR technique into real results. Addgene, a nonprofit supplier of scientific reagents, has shipped tens of thousands of CRISPR toolkits to researchers in more than 80 countries, and the scientific literature is now packed with thousands of CRISPR-related publications.

When you give scientists access to powerful tools, they can produce some pretty amazing results.

The CRISPR revolution in medicine

The most promising (and obvious) applications of gene editing are in medicine. As we have learned more about the molecular underpinnings of various diseases, stunning progress has been made in correcting genetic diseases in the laboratory, much of it just over the past few years.

Take, for example, muscular dystrophy — a complex and devastating family of diseases characterized by the breakdown of a molecular component of muscle called dystrophin. For some types of muscular dystrophy, the cause of the breakdown is understood at the DNA level.

In 2014, researchers at the University of Texas showed that CRISPR could correct mutations associated with muscular dystrophy in isolated fertilized mouse eggs which, after being reimplanted, then grew into healthy mice. By February of this year, a team here at the University of Washington published results of a CRISPR-based gene replacement therapy which largely repaired the effects of Duchenne muscular dystrophy in adult mice. These mice showed significantly improved muscle strength — approaching normal levels — four months after receiving treatment.

Using CRISPR to correct disease-causing genetic mutations is certainly not a panacea. For starters, many diseases have causes outside the letters of our DNA. And even for diseases that are genetically encoded, making sense of the six billion DNA letters that comprise the human genome is no small task. But here CRISPR is again advancing science; by adding or removing new mutations — or even turning whole genes on or off — scientists are beginning to probe the basic code of life like never before.

CRISPR is already showing health applications beyond editing the DNA in our cells. A large team out of Harvard and MIT just debuted a CRISPR-based technology that enables precise detection of pathogens like Zika and dengue virus at extremely low cost — an estimated $0.61 per sample.

Using their system, the molecular components of CRISPR are dried up and smeared onto a strip of paper. Samples of bodily fluid (blood serum, urine, or saliva) can be applied to these strips in the field and, because they linked CRISPR components to fluorescent particles, the amount of a specific virus in the sample can be quantified based on a visual readout. A sample that glows bright green could indicate a life-threatening dengue virus infection, for instance. The technology can also distinguish between bacterial species (useful for diagnosing infection) and could even determine mutations specific to an individual patient’s cancer (useful for personalized medicine).

Feng Zhang, another co-inventor of CRISPR technology, discussing its safety and ethical ramifications. Image Credit: AP Photo/Susan Walsh

Almost all of CRISPR’s advances in improving human health remain in an early, experimental phase. We may not have to wait long to see this technology make its way into actual, living people though; the CEO of the biotech company Editas has announced plans to file paperwork with the Food and Drug Administration for an investigational new drug (a necessary legal step before beginning clinical trials) later this year. The company intends to use CRISPR to correct mutations in a gene associated with the most common cause of inherited childhood blindness.

CRISPR will soon affect what we eat

Physicians and medical researchers are not the only ones interested in making precise changes to DNA. In 2013, agricultural biotechnologists demonstrated that genes in rice and other crops could be modified using CRISPR — for instance, to silence a gene associated with susceptibility to bacterial blight. Less than a year later, a different group showed that CRISPR also worked in pigs. In this case, researchers sought to modify a gene related to blood coagulation, as leftover blood can promote bacterial growth in meat.

You won’t find CRISPR-modified food in your local grocery store just yet. As with medical applications, agricultural gene editing breakthroughs achieved in the laboratory take time to mature into commercially viable products, which must then be determined to be safe. Here again, though, CRISPR is changing things.

A common perception of what it means to genetically modify a crop involves swapping genes from one organism to another — putting a fish gene into a tomato, for example. While this type of genetic modification — known as transgenesis — has actually been used, there are other ways to change DNA. CRISPR has the advantage of being much more programmable than previous gene editing technologies, meaning very specific changes can be made in just a few DNA letters.

White Agaricus bisporus mushrooms with no browning are more visually appealing. Image Credit: Olha Afanasieva/Shutterstock.com

This precision led Yinong Yang — a plant biologist at Penn State — to write a letter to the USDA in 2015 seeking clarification on a current research project. He was in the process of modifying an edible white mushroom so it would brown less on the shelf. This could be accomplished, he discovered, by turning down the volume of just one gene.

Yang was doing this work using CRISPR, and because his process did not introduce any foreign DNA into the mushrooms, he wanted to know if the product would be considered a “regulated article” by the Animal and Plant Health Inspection Service, a division of the U.S. Department of Agriculture tasked with regulating GMOs.

“APHIS does not consider CRISPR/Cas9-edited white button mushrooms as described in your October 30, 2015 letter to be regulated,” they replied.

Yang’s mushrooms were not the first genetically modified crop deemed exempt from current USDA regulation, but they were the first made using CRISPR. The heightened attention that CRISPR has brought to the gene editing field is forcing policymakers in the U.S. and abroad to update some of their thinking around what it means to genetically modify food.

New frontiers for CRISPR

One particularly controversial application of this powerful gene editing technology is the possibility of driving certain species to extinction — such as the most lethal animal on Earth, the malaria-causing Anopheles gambiae mosquito. This is, as far as scientists can tell, actually possible, and some serious players like the Bill and Melinda Gates Foundation are already investing in the project. (The BMGF funds The Conversation Africa.)

Most CRISPR applications are not nearly as ethically fraught. Here at the University of Washington, CRISPR is helping researchers understand how embryonic stem cells mature, how DNA can be spatially reorganized inside living cells, and why some frogs can regrow their spinal cords (an ability we humans do not share).

It is safe to say CRISPR is more than just hype. Centuries ago we were writing on clay tablets — in this century we will write the stuff of life.

The post In Just a Few Short Years, CRISPR Has Sparked a Research Revolution appeared first on Futurism.

In this MinutePhysics video, the channel takes on the famous “twins paradox,” which provides a hypothetical situation to explain special relativity. In the twins scenario, one twin stays in the same place as the other flies out into space in a rocket ship, only to turn around and come back. We already know that moving things experience time more slowly, so the twin staying on Earth may assume that they are older than the astronaut twin. However, the twin on the rocket ship “sees” the Earth and the other twin moving.

As the twins watch each other, time seems to be faster to each observer relative to the other, yet we already know that the moving twin experiences less time because of the motion. This is the same reason that atomic clocks on satellites and on Earth register slightly different times. So how is it possible that time can appear to be faster for both of two different twins at the same time? And of the two twins, one Earthbound and the other exploring the galaxy, who is actually older?

MinutePhysics explains that the mystery is solved by Lorentz transformations, and provides a great way to think about the problem visually. Spoiler alert: riding on rocket ships can keep you younger.
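The arithmetic behind the spoiler is the Lorentz factor, γ = 1/√(1 − v²/c²): for every interval t that elapses on Earth, the traveling twin’s clock accumulates only t/γ. A minimal sketch with made-up trip numbers:

```python
import math

def lorentz_gamma(v_fraction_of_c: float) -> float:
    """Time dilation factor: gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1 / math.sqrt(1 - v_fraction_of_c**2)

# Made-up trip: 10 Earth-years out and back at 80 percent of light speed.
earth_years = 10.0
gamma = lorentz_gamma(0.8)           # ~1.667
traveler_years = earth_years / gamma
print(gamma, traveler_years)         # the traveling twin ages only ~6 years
```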

The post Watch: Would a Twin on Earth Really Age Faster Than One Traveling Through Space? appeared first on Futurism.

Since the late 1920s, astronomers have been aware of the fact that the Universe is in a state of expansion. Initially predicted by Einstein’s Theory of General Relativity, this realization has gone on to inform the most widely-accepted cosmological model — the Big Bang Theory. However, things became somewhat confusing during the 1990s, when improved observations showed that the Universe’s rate of expansion has been accelerating for billions of years.

This led to the theory of dark energy, a mysterious invisible force that is driving the expansion of the cosmos. Much as with dark matter, which was invoked to explain the “missing mass,” it then became necessary to find this elusive energy, or at least provide a coherent theoretical framework for it. A new study from the University of British Columbia (UBC) seeks to do just that by postulating that the Universe is expanding due to fluctuations in space and time.

The study — which was recently published in the journal Physical Review D — was led by Qingdi Wang, a PhD student in the Department of Physics and Astronomy at UBC. Under the supervision of UBC Professor William Unruh (the man who proposed the Unruh effect) and with assistance from Zhen Zhu (another PhD student at UBC), they provide a new take on dark energy.

Diagram showing the Lambda-CDM universe, from the Big Bang to the current era. Image Credit: Alex Mittelmann/Coldcreation

The team began by addressing the inconsistencies arising out of the two main theories that together explain all natural phenomena in the Universe. These theories are none other than general relativity and quantum mechanics, which effectively explain how the Universe behaves on the largest of scales (i.e. stars, galaxies, clusters) and the smallest (subatomic particles).

Unfortunately, these two theories are not consistent when it comes to a little matter known as gravity, which scientists are still unable to explain in terms of quantum mechanics. The existence of dark energy and the expansion of the Universe are another point of disagreement. For starters, candidate theories like vacuum energy — which is one of the most popular explanations for dark energy — present serious incongruities.

According to quantum mechanics, vacuum energy would have an incredibly large energy density to it. But if this is true, then general relativity predicts that this energy would have an incredibly strong gravitational effect, one which would be powerful enough to cause the Universe to explode in size. As Prof. Unruh shared with Universe Today via email:

The problem is that any naive calculation of the vacuum energy gives huge values. If one assumes that there is some sort of cutoff so one cannot get energy densities much greater than the Planck energy density (or about 10⁹⁵ Joules/meter³) then one finds that one gets a Hubble constant — the time scale on which the Universe roughly doubles in size — of the order of 10⁻⁴⁴ sec. So, the usual approach is to say that somehow something reduces that down so that one gets the actual expansion rate of about 10 billion years instead. But that ‘somehow’ is pretty mysterious and no one has come up with an even half convincing mechanism.
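To see where a doubling timescale of order 10⁻⁴⁴ seconds comes from, one can feed a Planck-scale density into the Friedmann equation, H = √(8πGρ/3), and invert it. A back-of-the-envelope Python sketch — we use the standard Planck mass density here as an illustrative assumption, since the quote’s cutoff is stated in different units:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34  # reduced Planck constant, J*s
C = 2.998e8       # speed of light, m/s

# Planck (mass) density: rho_P = c^5 / (hbar * G^2), roughly 5e96 kg/m^3.
rho_planck = C**5 / (HBAR * G**2)

# Friedmann equation for a flat universe: H = sqrt(8*pi*G*rho / 3).
H = math.sqrt(8 * math.pi * G * rho_planck / 3)
print(1 / H)  # ~2e-44 seconds -- the absurdly short doubling timescale
```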

Timeline of the Big Bang and the expansion of the Universe. Image Credit: NASA

Whereas other scientists have sought to modify the theories of general relativity and quantum mechanics in order to resolve these inconsistencies, Wang and his colleagues sought a different approach. As Wang explained to Universe Today via email:

Previous studies are either trying to modify quantum mechanics in some way to make vacuum energy small or trying to modify General Relativity in some way to make gravity numb for vacuum energy. However, quantum mechanics and General Relativity are the two most successful theories that explain how our Universe works… Instead of trying to modify quantum mechanics or General Relativity, we believe that we should first understand them better. We take the large vacuum energy density predicted by quantum mechanics seriously and just let it gravitate according to General Relativity without modifying either of them.

For the sake of their study, Wang and his colleagues performed new sets of calculations on vacuum energy that took its predicted high energy density into account. They then considered the possibility that on the tiniest of scales — billions of times smaller than electrons — the fabric of spacetime is subject to wild fluctuations, oscillating at every point between expansion and contraction.

Could fluctuations at the tiniest levels of spacetime explain Dark Energy and the expansion of the cosmos? Image Credit: University of Washington

As spacetime swings back and forth, the net effect of these oscillations is a Universe that expands slowly, but at an accelerating rate. After performing their calculations, the team noted that such an explanation was consistent with both the existence of quantum vacuum energy density and general relativity. On top of that, it is also consistent with what scientists have been observing in our Universe for almost a century. As Unruh described it:

Our calculations showed that one could consistently regard [that] the Universe on the tiniest scales is actually expanding and contracting at an absurdly fast rate; but that on a large scale, because of an averaging over those tiny scales, physics would not notice that ‘quantum foam.’ It has a tiny residual effect in giving an effective cosmological constant (dark energy type effect). In some ways it is like waves on the ocean which travel as if the ocean were perfectly smooth but really we know that there is this incredible dance of the atoms that make up the water, and waves average over those fluctuations, and act as if the surface was smooth.

In contrast to conflicting theories of a Universe where the various forces that govern it cannot be resolved and must cancel each other out, Wang and his colleagues present a picture where the Universe is constantly in motion. In this scenario, the effects of vacuum energy are actually self-cancelling, and they also give rise to the expansion and acceleration we have been observing all this time.

While it may be too soon to tell, this image of a Universe that is highly-dynamic (even on the tiniest scales) could revolutionize our understanding of spacetime. At the very least, these theoretical findings are sure to stimulate debate within the scientific community, as well as experiments designed to offer direct evidence. And that, as we know, is the only way we can advance our understanding of this thing known as the Universe.

The post New Explanation for Dark Energy? Tiny Fluctuations of Time and Space. appeared first on Futurism.

Following the Dots

Situations that require thinking ahead or anticipating what comes next are part of our daily reality. They could be as trivial as trying to catch a ball or as important as avoiding getting hit by a car on the road. Regardless of the specific situation, researchers think there’s a mechanism in the human brain that allows us to visualize what comes next in a series of events.

Experiments that test how this “preplay” of images happens have previously been limited to looking at what goes on in the visual cortexes of animals. Now, researchers from Radboud University in the Netherlands have developed an experiment that looks into what happens in the visual cortex of humans. Their findings have been published in Nature Communications.

The researchers tested 29 university students using a functional magnetic resonance imaging (fMRI) scanner. The students were told to repeatedly watch a white dot move across a screen, an event that took just half a second. After viewing this event 108 times, the students developed a mental expectation of the dot’s movement. Next, the students watched 24 other random dot movies. In some, nothing changed, but in others, the dot didn’t move from the starting or ending position at all, or it moved more slowly.

Image Credit: Ekman et al. / Nature Communications

The process was done twice for all the students, with four students serving as controls. The fMRI mapped the brain activity of the students throughout the experiments, taking multiple scans at ultra-high speeds to observe the blood flow in specific tissues. While watching the movies of the dot in the starting position, the students’ visual cortexes would activate, showing that they were visually picturing the completion of the move, but twice as quickly as it would actually occur.

Fast Forward to the Future

In effect, this research indicates that the human brain not only produces an image of future movement in fast forward but also works in tandem with past experiences to predict future outcomes. Most probably, the visual cortex developed this ability in order to facilitate quicker reactions that could mean the difference between life and death.

The Evolution of Brain-Computer Interfaces [INFOGRAPHIC]
Click to View Full Infographic

“Imagine you are standing at a road, a car is approaching and you need to decide ‘Do I cross, or do I wait for the car to pass first?’,” lead researcher Matthias Ekman told The Daily Mail. “Our study suggests that our visual system can fast-forward the trajectory of the car and thereby help us with our decision whether to wait or not.”

This is just one example of research expanding our understanding of the infinitely complex human brain. This new knowledge will prove invaluable as scientists work to create systems that let us manipulate it to overcome paralysis, upload skills, or just get generally smarter. Even without any cyborg super-tech, though, the brain is pretty remarkable, so the next time you catch that ball or avoid that speeding car, thank it for predicting the future — even just a short bit of it.

The post Our Brains Can Predict Future Events Twice as Fast as They Actually Happen appeared first on Futurism.

Prospects of Bio-Prospecting

When people hear about prospecting, they might imagine old forty-niners (miners) with pickaxes hunting for gold, or maybe an agent for the San Francisco 49ers (football team) scouting for new talent. In my lab, we do another version called bio-prospecting – searching for useful substances from natural sources. Bio-prospecting has produced many valuable products, including anti-cancer drugs derived from plants and extremely strong silks spun by tropical spiders.

Our work focuses on enzymes, which are proteins that speed up chemical reactions. We are looking for new and powerful enzymes that can break apart polysaccharides – common molecules that consist of long chains of sugars. Polysaccharides are extremely abundant in the fruits and vegetables that we eat, the cotton clothes we wear and the lumber we use to build houses.

Enzymes that can break down polysaccharides have many uses – for example, in detergents that dissolve stains on clothes. Similar types of enzymes can also be used to release sugars found in plants, which can then be used for manufacturing biodegradable plastic.

In my lab, we are searching for new enzymes that could improve biotechnology for making renewable fuels and chemicals.

Enzymes jump-start chemical reactions but are not consumed by the reactions.

Learning From Microbes

Plants, which form a central node of the food web, produce billions of tons of polysaccharides every year. The sugars locked away in plants are linked together in long chains that form three major polysaccharides: cellulose, xylan, and pectin. These polysaccharides give plants their structure and help protect them against insect damage.

When plants die, these strong polysaccharides trap a large amount of sugars in the plant leaves and stems. Bacteria and fungi break this leaf litter down to get to the nutrients that it contains. It takes unique microbes to produce the enzymes that will degrade plant polysaccharides, a process called saccharification. These microbes are called saprophytes, and they are found everywhere in nature, including the soil of your backyard.

By understanding how saprophytes degrade polysaccharides, we learn fundamental biological principles about this natural process, such as what happens in compost piles and how microbes aid polysaccharide degradation in your gut. We can also adopt their methods to find solutions to real-world problems, such as creating better nutritional supplements, detergents, and fuels.

Discovering Useful Enzymes

My research group studies how bacteria sense the environment and acquire energy. We work with a saprophytic bacterium called Cellvibrio japonicus, which produces nearly 200 enzymes specifically for degrading polysaccharides. Because this bacterium has such an arsenal of enzymes, C. japonicus is able to completely degrade all of the polysaccharides found in plant biomass.

Atomic Force Microscopy image of individual Cellvibrio japonicus cells. These bacteria produce many enzymes to break down polysaccharides that they use as an energy source. The whip-like structure (flagellum) at the end of the bacterial cells helps them move around. Jeffrey Gardner, Author provided

We are very interested in understanding how this bacterium can detect and then eat different polysaccharides so completely. Three key questions we want to answer are: (1) Why does C. japonicus have hundreds of enzymes to degrade polysaccharides? (2) What specific function does each enzyme perform? and (3) How does the bacterium integrate information about the environment and regulate the production of these enzymes?

To answer these questions, we study the bacterium’s physiology, genetics and biochemistry. Plant biomass is a complex mixture of different polysaccharides, so we routinely focus our research by looking at individual polysaccharides and the specific enzymes that degrade them.

For example, when we analyzed how C. japonicus breaks down cellulose, we found that the degradation of small soluble pieces of cellulose (oligosaccharides) controls the production of many degradation-specific enzymes. We also found that four enzymes thought to play the same role in cellulose degradation are not interchangeable. Rather, they are very specific, and the cell uses each of them in only certain contexts and for specific polysaccharides.

Overall, we have found that for the degradation of cellulose, C. japonicus requires only a very small number of the polysaccharide-degrading enzymes it can produce. These enzymes have unique properties and are potentially very useful in industrial applications.

Biotechnological Applications

While we are very interested in what saprophytic bacteria are doing out in the environment, our work also aims to solve some biotechnologically important problems. For example, one major challenge in understanding interactions between bacteria and plant material is measuring how fast bacteria are growing as they break plant biomass down.

Plant biomass is completely insoluble in water, so when we combine bacteria with plant material in a flask, it quickly becomes clouded with bits of plant material. This makes it hard to measure bacterial growth in the solution.

Dried stems and leaves from a corn plant in a solution of bacterial growth medium. The plant material is insoluble, so it swirls around the flask during the experiment. Jeffrey Gardner, Author provided

To solve this problem, we used 3-D printing to construct a filter device very similar to a tea strainer that you might use at a cafe. This device allows us to separate plant material from the bacteria in the surrounding solution. Using it, we can pack the filter device with some plant leaves and stems and put it into a liquid growth medium. After adding some bacteria to the flask, we can measure bacterial growth rates very quickly and accurately because we do not have to continually remove small bits of digested plants. The filter device keeps all of the plant pieces contained. At the end of the experiment we can easily recover any plant material that is left over to determine what remains after bacterial digestion.

Basic Research is Key to Solving Real-World Problems

I often am asked why my group spends time doing basic research instead of focusing exclusively on applied work for creating improved detergents or chemicals, since applied work might seem “better” in terms of human benefit. I believe that scientists need to be very conscientious in answering this question, because the justifications and benefits of basic research are not instantly recognizable.

One response is that many important discoveries, including the initial study of X-rays, green fluorescent protein and bacterial immunity to phages, started off as basic research. Over time, these fundamental studies developed, respectively, into the power to image broken bones, study cancer cells and edit the genomes of many types of organisms. The real-world benefits were very much worth the early investments in basic research.

We are starting to identify real-world benefits from understanding polysaccharide degradation. As we continue to prospect for new enzymes, I expect that we will find solutions to many technical challenges by studying the fascinating ways microbes go about obtaining their next meal.


Jeffrey Gardner, Assistant Professor of Biological Sciences, University of Maryland, Baltimore County

This article was originally published on The Conversation. Read the original article.


The post “Bio-Prospectors” Are Studying Living Organisms for Signs of Useful Substances appeared first on Futurism.

Space Shield

Human beings have known for quite some time that our behavior has a significant influence on our planet. In fact, during the 20th century, humanity's impact on the natural environment and climate became so profound that some geologists began to refer to the modern era as the "Anthropocene" — an age in which human agency is the dominant force shaping the planet.

But according to a comprehensive new study by an Anglo-American team of researchers, human beings might be shaping the near-space environment as well. According to the study, radio communications, EM radiation from nuclear testing and other human actions have led to the creation of a barrier around Earth that is shielding it against high-energy space radiation.

The study, which was published in the journal Space Science Reviews under the title "Anthropogenic Space Weather," was conducted by a team of scientists from the US and Imperial College, London. Led by Dr. Tamas Gombosi, a professor at the University of Michigan and the director of the Center for Space Modelling, the team reviewed the impact anthropogenic processes have on Earth's near-space environment.

These processes include VLF and radio-frequency (RF) radio communications, which began in earnest during the 19th century and grew considerably during the 20th century. Things became more intense during the 1960s when the United States and the Soviet Union began conducting high-altitude nuclear tests, which resulted in massive electromagnetic pulses (EMP) in Earth’s atmosphere.

To top it off, the creation of large-scale power grids has also had an impact on the near-space environment. As they state in their study:

“The permanent existence, and growth, of power grids and of VLF transmitters around the globe means that it is unlikely that Earth’s present-day space environment is entirely “natural” – that is, that the environment today is the environment that existed at the onset of the 19th century. This can be concluded even though there continue to exist major uncertainties as to the nature of the physical processes that operate under the influence of both the natural environment and the anthropogenically-produced waves.”

The existence of radiation belts (or "toroids") around Earth has been a well-known fact since the late 1950s. These belts were found to be the result of charged particles coming from the Sun (i.e. "solar wind") that were captured and held around Earth by its magnetic field. They were named the Van Allen Radiation Belts after their discoverer, the American space scientist James Van Allen.

The twin Radiation Belt Storm Probes, later renamed the Van Allen Probes. Credit: NASA/JHUAPL

The extent of these belts, their energy distribution, and their particle makeup have been the subject of multiple space missions since then. Similarly, studies began to be mounted around the same time to discover how human-generated charged particles, which would interact with Earth's magnetic fields once they reached near-space, could contribute to artificial radiation belts.

However, it has been with the deployment of orbital missions like the Van Allen Probes (formerly the Radiation Belt Storm Probes) that scientists have been truly able to study these belts. In addition to the aforementioned Van Allen Belts, they have also taken note of the VLF bubble that radio transmissions have surrounded Earth with. As Phil Erickson, the assistant director at the MIT Haystack Observatory, said in a NASA press release:

“A number of experiments and observations have figured out that, under the right conditions, radio communications signals in the VLF frequency range can in fact affect the properties of the high-energy radiation environment around the Earth.”

One thing the probes have revealed is the interesting way the outward extent of the VLF bubble corresponds almost exactly to the inner and outer Van Allen radiation belts. What's more, comparisons of the modern extent of the radiation belts from the Van Allen Probe data show that the inner boundary is much farther away than it appeared to be during the 1960s (when VLF transmissions were lower).

Two giant belts of radiation surround Earth. The inner belt is dominated by protons and the outer one by electrons. Credit: NASA

What this could mean is that the VLF bubble we humans have been creating for over a century and a half has been removing excess radiation from the near-Earth environment. This could be good news for us, since the effects of charged particles on electronics and human health are well-documented. And during periods of intense space weather — a.k.a. solar flares — the effects can be downright devastating.

Given the opportunity for further study, we may find ways to predictably and reliably use VLF transmissions to make the near-Earth environment more human and electronics-friendly. And with companies like SpaceX planning on bringing internet access to the world through broadband internet-providing satellites, and even larger plans for the commercialization of Near-Earth Orbit, anything that can mitigate the risk posed by radiation is welcome.

The post Human Activity May Have Caused a Barrier That Shields Earth From Space Radiation appeared first on Futurism.

The Lightbulb

The lightbulb moment. Its presence in the retelling of scientific discovery is universal. From cartoons to documentaries, we see depictions of scientists’ eyes suddenly sparkling, a lightbulb popping up over their head, and “Eureka!” escaping their lips.

It’s a moment we are all familiar with, but what is it, really? Is there some scientific reason behind this eureka moment or is it just a trope in media? Well, both.

Some argue that it's impossible to exactly pinpoint and record idea creation and discovery, but we can get pretty close. Using the power of modern technology, scientists have been able to look at brain scans of a person in the midst of idea creation. They can see activity taking place in the right hemisphere, so clearly, we do have at least one way to observe creative thinking processes, even if the picture it provides is incomplete.

Also incomplete is our understanding of many of history’s biggest “A-Ha!” moments. While flashes of insight might have sparked some of the world’s greatest discoveries, it’s safe to say that many historical retellings of eureka moments have been embellished.

One moment of questionable historical accuracy is the discovery of saccharin by food scientist Constantin Fahlberg. It is said that he simply forgot to wash his hands before dinner after working in the laboratory. His meal tasted so sweet that he realized he'd gotten a mysterious substance on his hands while at work and, in the midst of a tasty "A-Ha!", he ran back to the lab to taste everything there until he'd found the sweet source. Now, this might seem a little bit dramatic, but, as he was a food scientist, it is not all that far-fetched.

Another famous eureka moment belongs to none other than Albert Einstein. According to a lecture he gave in 1922, Einstein was daydreaming while working in a patent office at the tender age of 28 when he was suddenly struck with an idea: "If a man falls freely, he would not feel his weight. I was taken aback. This simple thought experiment made a deep impression on me. This led me to the theory of gravity."

That theory wasn’t complete for eight more years, so the moment wasn’t so much a eureka moment as the beginning of a long train of thought and experimentation. Einstein didn’t come up with the whole thing in an instant, struck by mathematical equations in the patent office. He was, more believably, struck by a simple notion that was powerful because of how he considered it. That doesn’t stop history from perpetuating the idea that a eureka moment at work led to Einstein’s breakthrough.

The Power of Memory


Whether caused by our faulty human memories, repeated retellings, or even deliberate exaggeration, the legacy of the eureka moment includes stories ranging from the incredible to the truly unbelievable. While it’s true that the human brain does operate in incredible and, sometimes, mysterious ways, more often than not, months, years, or even decades of thought and experimentation are necessary to reach truly revolutionary scientific conclusions.

Now, did Einstein come up with the inklings of the general theory of relativity while at work? Most likely. Did Constantin Fahlberg discover saccharin because he forgot to wash his hands?  Maybe. What’s important isn’t whether or not these things happened exactly as history tells them, but rather that people get interested in science. The story of saccharin’s discovery might not seem that appealing to some audiences, but the tale of a scientist with dirty hands tasting everything in his lab until he found the mysterious source of sweetness might be.

While science is, of course, super exciting to scientists, it is crucial that everyone get involved and interested. From understanding our own health to how the Earth moves within the galaxy, we are all touched by the scientific discoveries and theories made famous by these (often tall) tales.

The post The Truth Behind the Biggest “Eureka” Moments in Science appeared first on Futurism.

Pushing and Pulling

If you follow all the strange news about the physics of the universe, you may have already come across the topic of blackbody force. In 2013, a team of physicists announced they’d discovered the existence of an unusual force that could potentially be stronger than gravity. As the force is exerted by objects known as blackbodies, blackbody force seemed a fitting name for it. Now, researchers from the Ceará State University and the Federal University of Ceará, Brazil, have uncovered new details about the strange phenomenon.


Blackbodies are theoretically perfect opaque objects that absorb all incoming light, reflecting none of it. One close real-world approximation is a neutron star.

A blackbody is said to emit a type of thermal radiation that can both repel and pull nearby objects like atoms and molecules. For objects that aren't very massive but are hot enough, this blackbody radiation could even be stronger than their gravitational pull. Together, the blackbody radiation (push) and the blackbody force (pull) produce an interplay of forces that's often explored in the field of quantum optics.
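To get a rough sense of the push side, here is a back-of-envelope sketch (our own illustration using the textbook Stefan-Boltzmann law, not a calculation from the paper); the radius and temperature below are hypothetical, neutron-star-like values:

```python
import numpy as np

# Total power radiated by an ideal blackbody: P = sigma * A * T^4
# (Stefan-Boltzmann law). A textbook illustration, not from the paper.
SIGMA = 5.670e-8                       # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_power(radius_m, temp_k):
    area = 4 * np.pi * radius_m**2     # surface area of a spherical blackbody
    return SIGMA * area * temp_k**4

# hypothetical neutron-star-like numbers: 10 km radius, one million kelvin
print(f"{blackbody_power(1e4, 1e6):.2e} W")   # ~7e25 W of outgoing radiation
```

The steep T^4 scaling is why hot, ultradense objects can radiate intensely enough for these push-pull effects to matter.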

The new study out of Brazil, which has been published in Europhysics Letters, explores how a blackbody's shape, as well as its effect on the curvature of surrounding spacetime, influences this optical attraction and repulsion. To do this, the researchers calculated the topology, or the warping of space, surrounding both spherical and cylindrical blackbodies, measuring how each object's blackbody radiation forces are affected. They found that the curvature of space around spherical blackbodies amplifies the attractive force. Meanwhile, no such magnification was detected in cylindrical blackbodies.

Of Planets and Stars

So, how does this affect what we know about the interaction between cosmic bodies? While this effect isn’t exactly detectable in a laboratory or even for objects as massive as the Sun, the researchers believe it makes a considerable difference when it comes to massive blackbodies.

“We think that the intensification of the blackbody force due to the ultradense sources can influence in a detectable way the phenomena associated with them, such as the emission of very energetic particles, and the formation of accretion discs around black holes,” lead researcher Celio Muniz told Phys.org.

Image credit: Muniz et al./EPL

The researchers think that this new understanding of blackbody force and radiation can help refine how we model the formation of planets and stars. It could even help us detect Hawking radiation, the blackbody-like emission that would allow black holes to evaporate.

“This work puts the blackbody force discovered in 2013 in a wider context,” explained Muniz.

The post Study Reveals New Insights Into a Strange Force That Can Both Pull and Push Objects appeared first on Futurism.

Data and DNA

Microsoft executives have revealed that they aim to have a "proto-commercial" DNA data storage system available in three years and hope to have an operational model in a decade. The eventual device will be around the size of a 1970s-era Xerox printer.

Microsoft's current system works by first converting data from zeroes and ones into the A, T, C, and G nucleotides that DNA is composed of, with markers to show how the original piece of data was composed. These sequences are then synthesized into actual DNA and pooled with the other sequences created.
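As a toy illustration of that bits-to-bases step (our own sketch; Microsoft's actual encoding layers error correction and the addressing markers mentioned above on top of this), each nucleotide can carry two bits:

```python
# Two bits per nucleotide: a minimal, lossless bits-to-bases codec.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")        # 'h' = 01101000, 'i' = 01101001 -> "CGGACGGC"
assert decode(strand) == b"hi"
```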

To extract and access the files, a polymerase chain reaction is used to select the appropriate sequences. These are then read, and the ATCG molecules are turned back into data. Both Microsoft’s studies and a similar experiment run by Erlich Lab members Dina Zielinski and Yaniv Erlich (who also predicted DNA storage would be usable in about a decade) showed that the extracted content was error-free.

While the process has been refined, the cost and time of the procedure are impeding further development. The chemical process used to manufacture DNA strands is both laborious and expensive: the 13,448,372 unique pieces of DNA used in the Microsoft study would cost $800,000 on the open market. That research — while record breaking in quantity — "did not [show] any progress towards the goal" of increasing speed or decreasing cost, Erlich said in an interview with MIT Technology Review.

Erlich himself has proposed a novel modification to tackle the problem: replace the 40-year-old, time-consuming chemical process currently used to make DNA with one that uses enzymes, as our own bodies do.

Biotech Solution to a Tech Problem

While these obstacles need to be overcome, DNA data storage could be the solution for a world that needs more and more data stored more and more compactly. Victor Zhirnov, Chief Scientist of the Semiconductor Research Corporation, told MIT Technology Review, "efforts to shrink computer memory are hitting physical limits," while Luis Ceze, Associate Professor at the University of Washington, said in a Microsoft video that "we're storing a lot of data, and current storage technologies cannot keep up with it."

DNA offers a solution to this issue, and a possible worldwide data revolution because of three of its properties: density, longevity, and continued relevance.

"DNA is the densest known storage medium in the universe, just based on the laws of physics," Zhirnov said in the interview. Some of the statistics the scientists quote are mind-boggling: every movie ever made could fit inside a volume of DNA smaller than a sugar cube; the whole accessible internet, estimated to be a quintillion bytes, would fill no more than a shoebox; and all of your data could be stored in a drop of DNA.
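The shoebox figure survives a back-of-envelope check (our own arithmetic with rough textbook constants, not a number from the studies):

```python
# Rough assumed constants: ~325 daltons and 2 bits per nucleotide,
# dry DNA density ~1.7 g/cm^3. The "quintillion bytes" is the article's
# estimate of the accessible internet.
DALTON_G = 1.66e-24                 # grams per dalton
internet_bytes = 1e18

bases = internet_bytes * 8 / 2      # two bits per base
mass_g = bases * 325 * DALTON_G
volume_cm3 = mass_g / 1.7

print(f"{mass_g:.1e} g of DNA, {volume_cm3:.1e} cm^3")
# ~2e-03 g in ~1e-03 cm^3: at the physical limit, a droplet rather than a
# shoebox; real encodings add redundancy, hence the roomier estimate.
```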

The longevity claim remains somewhat controversial. While many tape storage experts (whose products Microsoft eventually wants to make obsolete) remain doubtful, the teams undertaking these studies vouch that DNA is thousands of times more durable than a silicon device, and cite the example of DNA still being extracted from ancient remains.

Finally, because the medium is the same as our biological make-up, the Microsoft scientists claim that DNA won't be subject to the transient whims of trends and time. Erlich said in an interview with ResearchGate, "humanity is unlikely to lose its ability to read these molecules. If it does, we will have much bigger problems than data storage."

As the world's population grows and becomes increasingly reliant on ever-advancing technology, it produces more and more data, all of which needs to be stored securely. DNA data storage could be the solution that allows the march of big data (which was recently estimated by some to be more valuable than oil) to continue unimpeded.

The post Tech Leaders Say You Could Be Storing Data in Your DNA in the Next 10 Years appeared first on Futurism.

Spooky Action

Perhaps one of the most intriguing and interesting phenomena in quantum physics is what Einstein referred to as a “spooky action at a distance” — also known as quantum entanglement. This quantum effect is behind what makes quantum computers work, as quantum bits (qubits) generally rely on entanglement to process data and information. It’s also the working theory behind the possibility of quantum teleportation.

The long and short of it is this: entangled particles affect one another regardless of distance, with a measurement of the state of one instantly influencing the state of the other. However, it remains "spooky" because — despite following the laws of quantum physics — entanglement seems to hint at some deeper theory that's yet to be discovered. A number of physicists have been working on determining this deeper theory, but so far nothing definitive has come out.

As for entanglement itself, a very famous test was developed by physicist John Bell in 1964 to determine whether particles do, in fact, influence one another in this way. Simply put, the Bell test involves a pair of entangled particles: one is sent towards location A and the other to location B. At each of these points, a device measures the state of the particles. The settings in the measuring devices are set at random, so that it’s impossible for A to know the setting of B (and vice versa) at the time of measurement. Historically, the Bell test has supported the spooky theory.
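What "supported the spooky theory" means in numbers: for an entangled singlet pair, quantum mechanics predicts the correlation between outcomes at detector angles a and b to be E(a, b) = -cos(a - b), while any local hidden-variable theory caps the combined CHSH statistic S at 2. A short sketch (our own illustration of the standard CHSH form of the test, not the historical experiments):

```python
import numpy as np

# Quantum prediction for a singlet pair measured at angles a and b;
# local hidden-variable theories require |S| <= 2 (the CHSH bound).
def E(a, b):
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2               # the two detector settings at A
b, b2 = np.pi / 4, 3 * np.pi / 4     # the two detector settings at B

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"S = {S:.3f}")   # 2.828 = 2*sqrt(2): quantum mechanics beats the bound
```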

Human Consciousness and Free Will

Now, Lucien Hardy, a theoretical physicist from the Perimeter Institute in Canada, is suggesting that the measurements between A and B could be controlled by something that may potentially be separate from the material world: the human mind. His idea is derived from what French philosopher and mathematician Rene Descartes called the mind-matter duality, “[where] the mind is outside of regular physics and intervenes on the physical world,” as Hardy explained.

To do this, Hardy proposed a version of the Bell test involving 100 humans, each hooked up to EEG headsets that would read their brain activity. These devices would be used to switch the settings on the measuring devices for A and B, set at 100 kilometers apart. “The radical possibility we wish to investigate is that, when humans are used to decide the settings (rather than various types of random number generators), we might then expect to see a violation of Quantum Theory in agreement with the relevant Bell inequality,” Hardy wrote in a paper published online earlier this month.

If the correlations between the measurements don't match those of previous Bell tests, then there could be a violation of quantum theory that suggests A and B are being controlled by factors outside the realm of standard physics. "[If] you only saw a violation of quantum theory when you had systems that might be regarded as conscious, humans or other animals, that would certainly be exciting. I can't imagine a more striking experimental result in physics than that," Hardy said. "We'd want to debate as to what that meant."

What it could mean is this: that the human mind (consciousness) isn’t made up of the same matter governed by physics. Furthermore, it could suggest that the mind is capable of overcoming physics with free will. This could potentially be the first time scientists gain a firm grasp on the problem of consciousness. “It wouldn’t settle the question, but it would certainly have a strong bearing on the issue of free will,” said Hardy.

The post Scientists Have an Experiment to See If the Human Mind Is Bound to the Physical World appeared first on Futurism.

“No One Understands Quantum Mechanics”

Quantum mechanics: a branch of physics that is, to many, confusing and misunderstood. It deals with the actions and interactions of energy, subatomic particles, and atoms. In other words: how nature operates on an extremely small scale. Quantum mechanics helps us understand how life works, on Earth and beyond, and how everything from light to the molecules that make up human beings functions and interacts.

So, can one simple experiment explain quantum mechanics? Perhaps. Veritasium, on YouTube, has created a video that they think can really show people what quantum mechanics is. The video description aptly quotes Richard Feynman, famed theoretical physicist, in saying, “I think I can safely say that no one understands quantum mechanics.” While understanding how it works is perhaps a little more than a single video can take on, the demonstration at least gives viewers a better idea of what quantum mechanics is.

The experiment involves placing silicone oil drops onto a vibrating bath. While the droplets are obviously too big to be considered part of a quantum system, this strange experiment visualizes pilot wave theory, which says that particles oscillate, producing waves, and that this interaction creates the particles' motion. This theory tries to explain the weird — and often contradictory — behaviors of particles. The explanation is easier to grasp when you watch the silicone drops create waves and interact with them in the video above. With something as difficult to grasp as quantum mechanics, it's often helpful when you can actually see what it is you're talking — or theorizing — about.

The post Watch a Simple Experiment Make Sense of Quantum Mechanics appeared first on Futurism.

“Sherlocking” our Galaxy

While scientists have already uncovered answers to some of the universe’s greatest mysteries, there’s still a great deal we don’t understand — the “alien megastructure” star, fast radio bursts (FRBs), and the (non-)existence of dark matter are just a few examples. Thanks to new research by a team of astronomers, we may now be able to exclude from this list the previously unexplained mystery of antimatter in the Milky Way.

Essentially, antimatter is matter with its electrical charge reversed. For example, an electron’s antiparticle is a positron, while a proton’s antiparticle is an antiproton. When a particle of matter meets its antimatter cohort, they release a burst of energy and annihilate each other.

Some 40 years ago, scientists detected gamma rays that suggested that around 10^43 positrons were being annihilated in the Milky Way every second. They believe most were concentrated in the central bulge of the galaxy, which didn't make sense, as that area hosts less than half of the galaxy's total mass. As Roland Crocker, particle astrophysicist at the Australian National University, told Space.com, "The origin of these positrons is a 40-year-old mystery in astrophysics."

That mystery may finally have a resolution, and it’s a relatively simple one. “You don’t need anything exotic like dark matter to explain the positrons,” claims Crocker. Instead, he and his research team suggest that the positrons may have come from just one kind of supernova. Their study has been published in Nature Astronomy.

Dead Stars, Lost Stars


Crocker's team thinks that dim supernovas known as SN 1991bg-like could create all of those unexplained positrons. These supernovas result from the merging of two superdense white dwarf stars, and they generate a huge amount of a radioactive isotope known as titanium-44, which releases the positrons being annihilated.

But why were these positrons concentrated at the interior of the Milky Way? The researchers explained that SN 1991bg-like supernovas — unlike those resulting from the death of young, massive stars — likely occur in galactic neighborhoods with stars roughly 3 billion to 6 billion years old. Our galaxy’s central bulge happens to have a population of stars older than those in its outer disk.

Of course, while the simplest explanation does often prove to be the right one, Crocker isn’t ruling out a more exciting solution to this decades-old mystery. “The most recent data show that there’s a positron source connected to the very center of the galaxy,” he said. “In our model, this is explained as due to the old stars distributed on roughly 200-parsec [650 light years] scales around the galaxy’s supermassive black hole, but the black hole itself is an interesting alternative source.”

The post Scientists May Have Found a Simple Solution to a 40-Year-Old Space Mystery appeared first on Futurism.

Naked Physics

Einstein's general theory of relativity is a cornerstone of our understanding of how the universe works. A great deal of the science we do has roots in this theory. As Phys.org points out, estimating the age of stars, using GPS for navigation, and a host of other possibilities exist thanks to Einstein's calculations. The theory has stood the test of time, even with over a century of challenges.

The theory does break down — as do all standard laws of physics — at a singularity. Singularities are points in the universe where a celestial body's gravitational field becomes infinite. In our universe, general relativity says that this phenomenon exists only at the center of a black hole. A singularity existing outside that condition would be known as a "naked singularity." The cosmic censorship conjecture, introduced in 1969, states that every singularity should be cloaked by an event horizon; a naked singularity would violate this principle.

Image of (1 + 1)-dimensional anti-de Sitter space embedded in flat (1 + 2)-dimensional space. Credit: Wikimedia Commons.

Saddle Up

Using computer simulations, researchers have predicted the formation of a naked singularity in three-dimensional space for the first time. That being said, although the simulations may have shown a naked singularity, it wasn’t a simulation of our universe. Researchers Toby Crisford and Jorge Santos from Cambridge’s Department of Applied Mathematics and Theoretical Physics used a universe shaped quite differently from our (relatively) flat one. They used anti-de Sitter space for their simulation, which curves in the shape of a saddle. Having a universe with curvature allows for some novel possibilities. Given this shape, researchers were able to force the creation of a naked singularity.

The known universe is not curved in this way, so the findings are not directly applicable to our universe. However, that does not make this discovery insignificant: other seemingly unrelated theories of particle physics are connected to gravity in anti-de Sitter space. Equipped with this simulated cosmic censorship violation, there's no telling what the future has in store for the field of theoretical physics.

The post New Research Shows That a “Naked Singularity” Could Exist in Physics appeared first on Futurism.

A Whole New View

If you’ve played any of the recent Batman video games, you’re probably already familiar with his ability to scan through walls using one of the many gadgets he has at his disposal. In real life, German scientists from the Technical University of Munich (TUM) may have developed a technology that could give you a similar Batman-esque skill. What makes it even cooler? Basically all it takes is a Wi-Fi signal.

“It can basically scan a room with someone’s Wi-Fi transmission,” Philipp Holl said, speaking to Business Insider. Holl and TUM professor Friedemann Reinhard developed the concept behind the technology and published their findings in the journal Physical Review Letters.

It's surprisingly simple, exploiting Wi-Fi's ability to pass through walls. Using two antennas, the researchers record a Wi-Fi field around a particular room. The antennas capture the intensity and the phase of the Wi-Fi field both at its source spot and at the places it bounces off of. The result is a holographic image of the room that, while not yet vivid, proves that the concept works in practice rather than just in theory.
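The principle can be sketched in a toy reconstruction (our own simplified illustration with a single made-up scatterer and a line of antenna positions, not the TUM pipeline): because each antenna sample records both intensity and phase, phase-matching every candidate point in space makes the samples add up coherently only at the object's true location.

```python
import numpy as np

c, f = 3e8, 2.4e9                          # assumed Wi-Fi frequency
k = 2 * np.pi * f / c                      # wavenumber

xs = np.linspace(-1.0, 1.0, 64)            # antenna positions along a 2 m line
obj = np.array([0.3, 2.0])                 # hypothetical object behind the wall

r = np.sqrt((xs - obj[0])**2 + obj[1]**2)  # antenna-to-object distances
field = np.exp(1j * k * r) / r             # complex field: intensity and phase

# backprojection: undo the phase each antenna would see from a candidate
# point; contributions align only where the object actually is
gx, gy = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(0.5, 3.0, 200))
image = np.zeros(gx.shape)
for x, E in zip(xs, field):
    rr = np.sqrt((x - gx)**2 + gy**2)
    image += np.real(E * rr * np.exp(-1j * k * rr))

iy, ix = np.unravel_index(np.argmax(image), image.shape)
print(f"brightest spot near x = {gx[iy, ix]:.2f} m, depth = {gy[iy, ix]:.2f} m")
```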

Life-Saving Tech

The ability to see through walls might be a little unsettling at first, as it could open up potential privacy exploitation issues. “Of course, this raises privacy questions. After all, to a certain degree even encrypted signals transmit an image of their surroundings to the outside world,” Reinhard said in a press release from TUM, “However, it is rather unlikely that this process will be used for the view into foreign bedrooms in the near future. For that, you would need to go around the building with a large antenna, which would hardly go unnoticed. There are simpler ways available.”

The tech could also have many beneficial — if not life-saving — applications. Apart from the potential to be used by spy agencies for legitimate operations requiring the scanning of buildings, it could also be an asset to rescue operations after a disaster such as an earthquake or an avalanche. The antennas could be placed in a truck and then driven around the rubble or debris area to survey the scene and look for survivors.

“These antennas don’t need to be big. They can be very small, like the ones in a smartphone,” Holl said, meaning they could be easily wielded even in the smallest, most remote, spaces.

Further research, such as on the transparency of specific materials, is needed to refine the technology. But it’s exciting to think that superhero technology could come to life in such a way, especially with potentially life-saving applications.

The post New Tech Uses WiFi and Holograms to Let You “See” Through Walls appeared first on Futurism.

Wonder Material Meets Supercomputer

Right now, graphene and quantum computers both stand out as symbols of the next steps in human technological innovation. Each represents a paradigm shift both in its respective originating field (materials and computing) and in the fields to which it is applied. But perhaps the most exciting developments for these two technologies will come as they are combined.

Graphene is often referred to as "the wonder material" since it holds so much promise to change the world, in some ways we can't yet imagine. On the other hand, quantum computers are the next step in computing as we rapidly approach the physical limits of current computing technology. Researchers are discovering that graphene may help us take this next step.

Researchers at EPFL's Laboratory of Photonics and Quantum Measurements have been working to build a quantum capacitor that can create stable qubits (the units of information storage in quantum computers) that are also resistant to common electromagnetic interference. Such a capacitor is easier to produce using a two-dimensional material — such as graphene. Their research was published in 2D Materials and Applications.

A Perfect Union

Quantum computers work by taking advantage of special rules reserved for subatomic particles in order to perform the most complex tasks at currently impossible speeds. While they aren't likely to replace our home computers, as their capabilities are well beyond our daily needs, what they are capable of will revolutionize what's possible for high-tech applications, such as running quantum simulations that can unlock previously inaccessible information.


Taking advantage of graphene's special properties in the designs of quantum capacitors will move us closer to figuring out how to create a practical quantum computer. And this is just one example of graphene's many uses. From the understated but important capability to turn sea water into drinking water, to the ability to form zero-resistance superconductors, graphene has the potential to lead us into a new era of science.

We’re likely far from a functioning practical quantum computer, but watching the beginnings of what might be one of the most significant human technological achievements in our age is quite exciting. We’re standing on the precipice of the next step in our tech evolution.

The post Graphene Just Brought Us One Step Closer to Practical Quantum Computers appeared first on Futurism.

Alive and Breathing

If it were up to a team of researchers from the Massachusetts Institute of Technology (MIT), the next trend in sportswear would be clothes made out of living cells. You got that right: living microbial cells. With a design that looks like it came straight out of science fiction, the self-ventilating workout suit developed by the MIT researchers gives a new meaning to breathable, no-sweat clothing — plus, it comes with a pair of running shoes lined with the same living cells on the inside.

“[W]e propose to use genetically tractable microbial cells to create multifunctional, moisture-responsive interfaces,” the researchers wrote in a study published in Science Advances. “We hypothesized that microbial cells can be used as functional building blocks for constructing moisture-responsive materials.”

The cells the researchers used are the most common nonpathogenic strain of E. coli, printed onto latex sheets that were then fashioned into ventilating flaps in the workout suit. These tiny thumbnail- and finger-sized flaps open and close in reaction to the heat and sweat generated by an athlete's body. The microbial cells respond to changes in humidity, expanding when the body begins to sweat and shrinking when it cools down. Furthermore, the microbial cells used are durable and also safe to touch or even consume (but please, don't).

https://player.vimeo.com/video/142208383?portrait=0

Biologic from Tangible Media Group on Vimeo.

Cooler Stuff

“We can combine our cells with genetic tools to introduce other functionalities into these living cells,” explained MIT’s Wen Wang, lead author of the study. “We use fluorescence as an example, and this can let people know you are running in the dark. In the future we can combine odor-releasing functionalities through genetic engineering. So maybe after going to the gym, the shirt can release a nice-smelling odor.”

While this biofabric is pretty cool, both literally and figuratively, we can expect even more cool stuff as researchers gain access to new and innovative materials. For example, biofabrication has already opened doors to 3D-printing human organs and developing biodegradable shoes. Outside the realm of biomaterials, there are the future's would-be super materials — most notably, graphene and its derivatives.

As for the self-ventilating workout suit, it’s definitely at the forefront of biofabrication and the use of biomaterials. “This work is an example of harnessing the power of biology to design new materials and devices and achieve new functions,” said Xuanhe Zhao, a co-author of the study. “We believe this new field of ‘living’ materials and devices will find important applications at the interface between engineering and biological systems.”

The post This “Living” Clothing Keeps You Cool by Breathing appeared first on Futurism.

In the video below, minutephysics explains how teleportation could be theoretically possible using quantum physics. Quantum teleportation uses quantum entanglement — a situation where one set of particles is dependent on the state of another. In principle, if scientists create specific sets of particles that are capable of being rearranged into whatever they wish to teleport, they can send partial information about one end of the entanglement — encoded as a quantum state — and thereby reproduce it at the other end. As an analogy: imagine taking a scan of what you want to transport, sending it to the other entangled particles, and rebuilding it from that.
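The whole protocol fits in a small state-vector simulation (a textbook illustration in code, not the photonic experiments mentioned below; qubit 0 holds the message, qubits 1 and 2 are the entangled pair, and two classical measurement bits tell the receiver which correction rebuilds the state):

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# CNOT with qubit 0 (most significant) as control, qubit 1 as target
CNOT01 = np.kron(np.diag([1.0, 0.0]), np.kron(I, I)) + \
         np.kron(np.diag([0.0, 1.0]), np.kron(X, I))

rng = np.random.default_rng(7)
message = rng.normal(size=2) + 1j * rng.normal(size=2)   # random qubit state
message /= np.linalg.norm(message)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)               # (|00> + |11>)/sqrt(2)
state = np.kron(message, bell)

state = CNOT01 @ state                     # entangle message with the pair
state = np.kron(H, np.eye(4)) @ state      # Hadamard on the message qubit

# measure qubits 0 and 1: two classical bits travel to the receiver
blocks = state.reshape(4, 2)               # rows: (q0 q1) outcome, cols: qubit 2
outcome = rng.choice(4, p=(np.abs(blocks)**2).sum(axis=1))
m0, m1 = outcome >> 1, outcome & 1
received = blocks[outcome] / np.linalg.norm(blocks[outcome])

if m1: received = X @ received             # classical corrections recreate
if m0: received = Z @ received             # the message on the far qubit

print(f"fidelity = {abs(np.vdot(message, received)):.6f}")   # 1.000000
```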

While being able to transport anything large, like a cat — the example the video uses — is a long way off, scientists have managed to teleport a single photon or electron about 100 km. The difficulty lies in creating two entangled sets of particles and subsequently transporting one of them without it becoming disentangled.

This is linked to scientists achieving direct counterfactual quantum communication for the first time recently, which operates using the Zeno effect (freezing the situation by observing it) rather than entanglement. In the experiment, scientists successfully transported information using the phase of light.

The post Teleportation Could Be Possible Using Quantum Physics appeared first on Futurism.

No Cloning in Physics

Cloning used to be a science fiction staple until along came Dolly the sheep in 1996. However, cloning in biology isn’t the same thing as cloning in physics. As difficult as biological cloning is, cloning in physics is much harder. In fact, it has been proven to be impossible.

A video from YouTube’s MinutePhysics goes into quantum detail to explain just why perfect copies are not possible.

As the host explains, it’s been mathematically proven that while we may be able to make copies of objects, we can never make a perfect clone that is identical on the quantum level. The math to explain it is highly complicated — as is anything to do with quantum mechanics.

The video simplifies those complex mathematics (to a degree) by considering Schrodinger's cat. The crux of the math deals with a concept in quantum mechanics called superposition, which is the concept explored in the Schrodinger thought experiment. Simplified, it deals with a cat in a box with exploded or unexploded gunpowder (or poison, depending on the telling). Without observing the inside of the box, and therefore without knowing whether the cat is dead (the gunpowder exploded) or alive (the gunpowder is intact), the cat exists in a superposition where it is both dead and alive.
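With superposition in hand, the core of the no-cloning argument fits in a few lines (the standard textbook proof, not the video's exact presentation): assume some single operation U could copy any state, and a superposition produces a contradiction.

```latex
% Suppose a unitary U cloned every state: U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle.
\begin{align*}
U|0\rangle|0\rangle &= |0\rangle|0\rangle, \qquad
U|1\rangle|0\rangle = |1\rangle|1\rangle \\
U\left(\tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)\right)|0\rangle
  &= \tfrac{1}{\sqrt{2}}\left(|0\rangle|0\rangle + |1\rangle|1\rangle\right)
  && \text{(by linearity)} \\
&\neq \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle) \cdot \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)
  && \text{(what a clone would be)}
\end{align*}
```

Because quantum evolution is linear, copying the two basis states forces the superposition into an entangled output rather than two independent copies, so no universal copier can exist.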

Watch the MinutePhysics video to treat your brain to the superposition of being exploded and enlightened.

The post Physics: A Perfect Clone is Impossible appeared first on Futurism.

Splitting H2O

Physicists at the University of Houston have discovered a low-cost, efficient, and easily available catalyst that can split water into hydrogen and oxygen. The catalyst is far more efficient than other options that have previously been employed, and because it is grown from ferrous metaphosphate on a conductive nickel foam platform, it is both more durable and cheaper to produce.

"Cost-wise, it is much lower and performance-wise, much better," lead author Zhifeng Ren, M.D. Anderson Chair Professor of Physics, told the University of Houston News. "Some catalysts are outstanding but are only stable for one or two hours. That's no use."

Breaking water down into its components — oxygen and hydrogen — is theoretically simple, but practically complex. The process demands two separate reactions, each with its own electrode; one reaction evolves hydrogen, and one evolves oxygen. While hydrogen is the component that is sought after in this process, it can’t be attained without producing oxygen — and that’s the issue. Efficient oxygen catalysts, unlike the readily available hydrogen catalysts, are hard to find, and that’s where this discovery comes into play.
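For reference, the two half-reactions and their sum look like this (standard electrochemistry, not specific to the Houston catalyst); the oxygen-evolving half is the one that demands the scarce catalysts:

```latex
% Water electrolysis (acidic conditions), one half-reaction per electrode:
\begin{align*}
\text{cathode (hydrogen evolution):} \quad & 4\,\mathrm{H^+} + 4\,e^- \longrightarrow 2\,\mathrm{H_2} \\
\text{anode (oxygen evolution):} \quad & 2\,\mathrm{H_2O} \longrightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{overall:} \quad & 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
```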

The evolution of oxygen usually depends upon electrocatalysts that use expensive “noble metals” such as ruthenium, platinum, or iridium. “In this work, we discovered a highly active and stable electrocatalyst based on earth-abundant elements, which even outperforms the noble metal based ones,” Principal investigator Shuo Chen told the UH News. “Our discovery may lead to a more economic approach for hydrogen production from water electrolysis.”


Clean Energy Options

Either electric current or photocatalysis can trigger water splitting. However, since water absorbs only a narrow section of the light spectrum, directly splitting water with sunlight is too inefficient. Instead, using solar power to drive the electrical current that splits water molecules and produces hydrogen for energy is a better plan. Hydrogen, once produced, is one of the best renewable energy sources. It is easily stored, unlike solar or wind power, and using solar energy to produce it eliminates the carbon footprint left by traditional production methods like coal gasification or steam methane reforming.

“Hydrogen is the cleanest primary energy source we have on earth,” Texas Center for Superconductivity chief scientist Paul C. W. Chu told UH News. Now that these researchers have discovered a way to produce hydrogen cheaply with no carbon footprint, this may very well be the green energy source of tomorrow.

The post Scientists Found a Low-Cost Way to Produce the World’s Cleanest Energy Source appeared first on Futurism.

Einstein + Quantum Physics

Scientists at the University of British Columbia have proposed a radical new theory to explain the exponentially increasing size of the universe. Ultimately, it seeks to reconcile two different concepts in physics: quantum mechanics and Einstein's theory of general relativity.

In their work, the researchers argue that, instead of dark energy causing the universe’s growth, it could be explained by constant quantum fluctuations of vacuum energy. The paper claims — if their findings are true — that “the old cosmological constant problem would be resolved.” The press release notes the potentially transformative nature of the work: “Their calculations provide a completely different physical picture of the universe.”

Similarly, Bill Unruh, the physics and astronomy professor who supervised PhD student Qingdi Wang's work, stated that the research offers an entirely new take on old problems: "This is a new idea in a field where there hasn't been a lot of new ideas that try to address this issue." In the end, their calculations provide a fundamentally different picture of the universe: one in which space-time is "constantly moving," fluctuating between contraction and expansion. It's the small net effect towards expansion, though, that drives the expansion of the universe.

Unruh uses the sea as an analogy to explain why we cannot feel the effects: “It’s similar to the waves we see on the ocean […] They are not affected by the intense dance of the individual atoms that make up the water on which those waves ride.”

The Big Why

Previous belief has held that the universe is expanding steadily due to dark energy pushing other matter farther and farther away. When we apply quantum theories to vacuum energy, however, they predict an enormous energy density, and the gravitational effect of that density should blow the universe apart. This mismatch is the puzzle the new work addresses.

The discovery that the universe's expansion is accelerating was made simultaneously by two independent teams in 1998: the Supernova Cosmology Project and the High-Z Supernova Search Team. Three members of the two teams have since won Nobel Prizes for their work, which measured light using 'standard candles.' Since that discovery was made, scientists have tried to work out exactly what this energy is that's driving the cosmos apart.

Despite the fact that it has been a compelling mystery for decades, there haven’t been that many theories posed. So, while the work of Wang and Unruh may not provide the ultimate answer, they present a new, potential solution to one of the most fundamental problems in cosmology.

Editor’s note: This article has been updated. A previous version mistakenly referred to “dark energy” as “dark matter.”

The post New Research May Reconcile General Relativity and Quantum Mechanics appeared first on Futurism.

Bulletproof Science

Often the best parts of James Bond films are the inevitable scenes with his quartermaster Q, where Bond receives a myriad of high-tech spy gear to aid him on his latest world-saving mission. A new invention by a United States Air Force cadet and her professor feels like it's in that vein of miraculous technological wizardry.

The finished prototype is also an example of innovative solutions hiding in plain sight. As the Business Insider profile of the invention notes, the science behind this new bullet-stopping material is not new. Cadet 1st Class Hayley Weir and her professor Ryan Burke were surprised to learn that no defense contractor had previously explored making a material using their proposed methods. The substance was created using principles of non-Newtonian physics and can be applied to existing body armor to strengthen it. The armor Weir and Burke are proposing is lighter than what's traditionally worn, yet can still offer high-level protection.

The material began development in the second half of 2016 and ended up stopping its first bullet in December. Thus far, the material has stopped bullets of a variety of calibers at close range, including a 9mm, a .40 caliber, and a .44 Magnum. The latter would technically qualify the substance as Type III body armor, which is often worn by security personnel. Interestingly enough, the tests seemed to indicate that the larger the bullet, the greater the substance's stopping power.

Bullets flattened during tests of Weir and Burke's prototype. Image credit: NBC/KUSA 9 News

According to Burke, “We now understand more about what the important variables are, so now we’re going to go back and pick all the variables apart, optimize each one, and see if we can get up to a higher level of protection.”

This is just the latest in military technology that is aiming to better protect our soldiers. The Navy is also working on expanding its laser weapons programs to protect from drones and missiles. Other technology, such as these bullets designed to self-destruct if they miss a target, can help reduce collateral damage.

The post New Ultra Lightweight “Fluid Armor” Can Stop a .44 Magnum Bullet appeared first on Futurism.

Counterfactual Communication

Quantum communication is a strange beast, but one of the weirdest proposed forms of it is called counterfactual communication – a type of quantum communication where no particles travel between two recipients.

Theoretical physicists have long proposed that such a form of communication would be possible, but now, for the first time, researchers have been able to experimentally achieve it – transferring a black and white bitmap image from one location to another without sending any physical particles.

If that sounds a little too out-there for you, don’t worry, this is quantum mechanics, after all. It’s meant to be complicated. But once you break it down, counterfactual quantum communication actually isn’t as bizarre as it sounds.

First up, let's talk about how this differs from regular quantum communication, also known as quantum teleportation — because isn't that also a form of particle-less information transfer?

Well, not quite. Regular quantum teleportation is based on the principle of entanglement – two particles that become inextricably linked so that whatever happens to one will automatically affect the other, no matter how far apart they are.

This is what Einstein referred to as “spooky action at a distance“, and scientists have already used it to send messages over vast distances.

But that form of quantum teleportation still relies on particle transmission in some form or another. The two particles usually need to be together when they’re entangled before being sent to the people on either end of the message (so, they start in one place, and need to be transmitted to another before communication can occur between them).

Enter Zeno

Alternatively, particles can be entangled at a distance, but it usually requires another particle, such as photons (particles of light), to travel between the two.

Direct counterfactual quantum communication, on the other hand, relies on something other than quantum entanglement. Instead, it uses a phenomenon called the quantum Zeno effect.

Very simply, the quantum Zeno effect occurs when an unstable quantum system is repeatedly measured.

In the quantum world, whenever you look at a system, or measure it, the system changes. And in this case, unstable particles can never decay while they’re being measured (just like the proverbial watched kettle that will never boil), so the quantum Zeno effect creates a system that’s effectively frozen with a very high probability.
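A few lines of arithmetic make the freezing concrete (our own toy model, not the experiment's apparatus): take a state that, left alone, would rotate completely from "intact" to "decayed" over some fixed time, and interrupt it with n evenly spaced measurements.

```python
import numpy as np

# Unobserved, the state rotates by pi/2 and fully decays. Each of the n
# measurements projects it back onto "intact" with probability
# cos^2(step), so the total survival probability is cos(step)^(2n).
total_angle = np.pi / 2

for n in [1, 2, 10, 100, 1000]:
    step = total_angle / n                 # rotation between measurements
    survival = np.cos(step) ** (2 * n)
    print(f"{n:4d} measurements -> survival {survival:.4f}")

# 1 -> 0.0000, 10 -> 0.7806, 1000 -> 0.9975: measure often enough and the
# state effectively never decays, the watched kettle from above.
```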

If you want to delve a little deeper, the video below gives a great explanation:

Counterfactual quantum communication is based on this quantum Zeno effect, and is defined as the transfer of a quantum state from one site to another without any quantum or classical particle being transmitted between them.

This requires a quantum channel to run between two sites, which means there’s always a small probability that a quantum particle will cross the channel. If that happens, the system is discarded and a new one is set up.

To set up such a complex system, researchers from the University of Science and Technology of China placed two single-photon detectors in the output ports of the last of an array of beam splitters.

Because of the quantum Zeno effect, the system is frozen in a certain state, so it’s possible to predict which of the detectors would ‘click’ whenever photons passed through. A series of nested interferometers measure the state of the system to make sure it doesn’t change.

It works based on the fact that, in the quantum world, light can be fully described by wave functions rather than as particles. So by embedding messages in the light's wave properties, the researchers were able to transmit information without ever directly sending a particle.

The Answer in Light

The team explains that the basic idea for this setup came from holography.

“In the 1940s, a new imaging technique – holography – was developed to record not only light intensity but also the phase of light,” the researchers write in the journal Proceedings of the National Academy of Sciences.

“One may then pose the question: Can the phase of light itself be used for imaging? The answer is yes.”

The basic idea is this – someone wants to send an image to Alice using only light (which acts as a wave, not a particle, in the quantum realm).

Alice transfers a single photon to the nested interferometer, where it can be detected by three single-photon detectors: D0, D1, and Df.

If D0 or D1 ‘click’, Alice can conclude a logic result of one or zero. If Df clicks, the result is considered inconclusive.

As Christopher Packham explains for Phys.org:

“After the communication of all bits, the researchers were able to reassemble the image – a monochrome bitmap of a Chinese knot. Black pixels were defined as logic 0, while white pixels were defined as logic 1 …

In the experiment, the phase of light itself became the carrier of information, and the intensity of the light was irrelevant to the experiment.”
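
To make the bookkeeping concrete, here is a minimal Python sketch of how a stream of detector clicks could be reassembled into a bitmap. The function name, the click labels, and the skip-on-Df convention are our own illustrative choices, not code from the experiment:

```python
# Minimal sketch: reassembling a monochrome bitmap from detector clicks.
# Convention from the experiment: D0 click -> logic 0 (black pixel),
# D1 click -> logic 1 (white pixel), Df click -> inconclusive (bit resent).

def rebuild_bitmap(click_stream, width, height):
    """Consume detector labels ('D0', 'D1', 'Df') until every pixel
    has a conclusive value, then return the image as rows of 0s and 1s."""
    pixels = []
    for click in click_stream:
        if click == "Df":          # inconclusive run: skip, bit is resent
            continue
        pixels.append(0 if click == "D0" else 1)
        if len(pixels) == width * height:
            break
    if len(pixels) < width * height:
        raise ValueError("not enough conclusive clicks for a full image")
    return [pixels[r * width:(r + 1) * width] for r in range(height)]

# Toy usage: a 2x2 image sent as clicks with one inconclusive event.
image = rebuild_bitmap(["D0", "Df", "D1", "D1", "D0"], width=2, height=2)
print(image)  # [[0, 1], [1, 0]]
```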

Not only is this a big step forward for quantum communication, but the team explains that the technology could also be used to image sensitive ancient artefacts that couldn’t survive having direct light shined on them.

The results will now need to be verified by external researchers to make sure what the researchers saw was a true example of counterfactual quantum communication.

Either way, it’s a pretty cool demonstration of just how bizarre and unexplored the quantum world is.

The research has been published in the journal Proceedings of the National Academy of Sciences.

The post Scientists Achieve Direct Counterfactual Quantum Communication For The First Time appeared first on Futurism.

This Is Your Brain on a Chip

Researchers from the Australian National University (ANU) have found a new way to create a brain-on-a-chip, and it could have a remarkable impact on how we study and treat neurological diseases.

The team first designed semiconductor wafers to act as a base for their creation, working with a group in the university’s Research School of Physics and Engineering to fabricate and embed nanowires on the wafer in a specific geometry. After that, they added brain cells to the wafer for observation. As hoped, the cells used the nanowires as a scaffold as they grew into functioning circuits.

Image credit: Stuart Hay, ANU

“We were able to make predictive connections between the neurons and demonstrated them to be functional with neurons firing synchronously,” project group leader Vincent Daria said in an ANU news release. This is the first time researchers have been able to build functional, interconnected neuronal circuits on a nanowire scaffold, and the research has been published in Nano Letters.

The Future of Testing

The ANU team’s chips will provide a useful platform for studying brain cell growth and interconnection. That knowledge could then be used to find innovative new ways to integrate the organic and non-organic for disease treatment. According to Daria, “This work could open up [a] new research model that builds up a stronger connection between materials nanotechnology with neuroscience.”

The Evolution of Brain-Computer Interfaces [INFOGRAPHIC]
Click to View Full Infographic

It may even make it possible for scientists to create neuro-prosthetics for use after degenerative neurological diseases or damage — a process much trickier than creating prosthetics for other body parts because neuro-prosthetics require a fairly complete understanding of how neurons grow, connect, form circuits, and process data. “Unlike other prosthetics like an artificial limb, neurons need to connect synaptically, which form the basis of information processing in the brain during sensory input, cognition, learning, and memory,” explained Daria.

Several research teams have been pursuing “on a chip” technologies with much success. Harvard researchers have been experimenting with their own brain-on-a-chip model in an attempt to study neurological and psychological diseases more effectively. Evatar, a miniature female reproductive tract on a chip, was announced earlier this year — its creators hope the device will advance the study of a variety of reproductive diseases and treatments. Even the FDA has acknowledged that the future of drug and treatment development is not in animal testing but in organs on chips, so breakthroughs like this one out of ANU should continue for the foreseeable future.

The post Researchers Just Created a Revolutionary “Brain on a Chip” appeared first on Futurism.

Back to Basics

Hydrogen is the most common element in the universe — making up about 75 percent of all visible matter by mass and more than 90 percent of all atoms. At the same time, hydrogen bonds are among the weakest chemical bonds known. As such, these bonds are difficult to observe, much less study.

As scientists endeavor to understand the number one element in the periodic table, the ability to see hydrogen bonds is something they’ve only dreamed of. But now it’s become a reality.

Researchers from the University of Basel’s Swiss Nanoscience Institute, in collaboration with colleagues from Japan, published a paper in the journal Science Advances detailing how they managed to capture a clear image of hydrogen bonds using an atomic force microscope (AFM) — a very high-resolution scanning probe microscope capable of visualizing and measuring minuscule forces, like those of a hydrogen bond.

“The hydrogen atom — the smallest and most abundant atom — is of utmost importance in physics and chemistry,” the researchers wrote in their paper’s abstract. “Although many analysis methods have been applied to its study, direct observation of hydrogen atoms in a single molecule remains largely unexplored.”

Physicists Finally Observe Hydrogen Bonds, Taking Physics to the Next Level
Credit: University of Basel, Department of Physics

Using compounds known as propellanes — because, yes, they resemble propellers — the researchers were able to measure the force and distance between an oxygen atom and two hydrogen atoms. These were observed using an AFM made extremely sensitive to hydrogen by terminating its tip with carbon monoxide. The carbon monoxide at the tip then formed a hydrogen bond with the topmost hydrogen atoms of the propellane compounds. It was this bond that the researchers studied and found consistent with established hydrogen bond calculations.

Like You’ve Never Seen Them Before

Scientists haven’t been able to study hydrogen bonds up close because, aside from their weak nature making them easy to break, hydrogen atoms are also about as small as atoms get. Don’t underestimate these bonds, however: they play key roles in many of nature’s wonders. Hydrogen bonding gives water many of its distinctive properties, for example.

If you want to go up close and personal, hydrogen bonds keep the DNA’s double-helix structure together, becoming the building blocks to life’s building blocks. Therefore, this research could shed new light on our genetic structure. “Our … calculations confirm the signature of directional bonding, characteristic of very weak hydrogen bonding,” the researchers wrote in the study. “The direct measurement of the interaction with a hydrogen atom paves the way for the identification of three-dimensional molecules such as DNAs and polymers.”

Understanding the most basic of nature’s atomic bonds opens up new possibilities in understanding the physics of our world. “[H]ydrocarbons are one of the most varied and functionalized products at the heart of engineering, chemistry, and life, and hydrogen is often critical in their function,” the Swiss team added. This research could be the first step for bringing the exploration of the material world to the next level.

The post Physicists Got Their First Look at the Bond That Holds Most of Our Universe Together appeared first on Futurism.

Big Questions in Physics

Across all disciplines of science there exists an abundance of unanswered questions and mysteries that have endured for as long as humans have attempted to solve them. Perhaps one of the most question-filled fields is physics, which continues to confound even the most brilliant minds. While it’s unlikely that all of these quandaries will be answered in our lifetime, some of them may well be.

Why is there more matter than antimatter? 

In theory, the Big Bang should have created as much antimatter as matter in the universe. But it didn’t — at least not as far as physicists can see. Matter is everywhere, all around us — it makes up everything. So far, though, we’ve only found small amounts of antimatter. Why is that? Shouldn’t every particle of matter have an antimatter counterpart? Somehow, the laws of nature must not apply to antimatter in quite the same way they apply to matter, and scientists have yet to figure out why that is and what it means for the universe.
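
For a sense of scale, cosmologists quantify the imbalance with the baryon-to-photon ratio — a standard textbook value quoted here for context, not a result from any study mentioned in this piece:

```latex
% Baryon-to-photon ratio inferred from the cosmic microwave background:
\[
\eta = \frac{n_B - n_{\bar{B}}}{n_\gamma} \approx 6 \times 10^{-10},
\]
% i.e., for roughly every billion matter-antimatter pairs that annihilated
% in the early universe, about one extra particle of matter survived.
```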

What Is Dark Matter?
Click to View Full Infographic

What is dark energy/dark matter/all this dark stuff?

For all we can see of our universe, there’s much more that we have yet to see — in part because it’s made up of invisible stuff called dark energy and dark matter. The thing is, we haven’t actually visualized dark matter at all — only the effect it has on what we can see. Physicists are trying to gain a deeper understanding of these invisible components by studying the behavior of stars and galaxies that appear to react to their presence. It is, however, a challenge to study something we can’t see. We need technology to catch up and let us come face-to-face with these “dark” substances.

The Universe’s Unsolved Mysteries

One of the simplest questions to ask may be one of the hardest to answer:

How big is the universe?

Asking how big, or how old, the universe is depends on what part of the universe you’re talking about: the observable universe, or the entirety of it. By studying light, we know the observable universe is around 13.8 billion years old — yet because space has been expanding that whole time, its radius is now roughly 46 billion light-years, far larger than light alone could have traveled. Determining how big even the observable universe is presents challenges because it isn’t a static entity: the universe is still expanding.
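
The mismatch between those two numbers follows from a standard cosmology formula — a textbook expression, not specific to any one study. The edge of the observable universe sits at the comoving distance light has covered while space itself stretched:

```latex
% Radius of the observable universe: light emitted at t = 0 and received
% at t_0 has covered a distance that the scale factor a(t) stretches
% as it travels:
\[
R_{\mathrm{obs}} = a(t_0) \int_{0}^{t_0} \frac{c\,\mathrm{d}t}{a(t)}
\approx 46\ \text{billion light-years},
\]
% which exceeds the naive estimate c \cdot t_0 \approx 13.8 billion
% light-years because a(t) was smaller in the past.
```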

Are there parallel universes? 

Even though we don’t quite have a handle on our own universe, that hasn’t stopped us from wondering whether there are others out there — particularly so-called parallel universes. The idea is part of the multiverse hypothesis, of which there are at least five proposed versions. Those who don’t think parallel universes are possible point out that after the Big Bang, as matter spread out, inflation would have slowed down — and if there were other universes, they wouldn’t be expanding at the same rate as ours. That would throw a wrench in the idea of a truly parallel universe.

This is merely a fraction of the questions physicists are asking, and they all more or less come down to one final question: what’s the ultimate fate of the universe? How will it end?

Some theories posit that it will end much the way it began: with a big bang — or, more precisely, a big crunch. One of the most prominent theories about the inevitable death of the universe holds that if the universe’s density exceeds a critical value, gravity will eventually halt the expansion and pull everything back in on itself in an event referred to as the Big Crunch. Of course, that’s just one theory. Another posits that the end will be more of an endless void — but in either case, it won’t happen for billions, if not trillions, of years. As far as we know, anyway.
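
The “critical density” in that scenario has a precise definition from the Friedmann equations — standard cosmology, included here only for reference. It depends on the Hubble parameter H and Newton’s constant G:

```latex
% Critical density separating eternal expansion from recollapse:
\[
\rho_c = \frac{3H^2}{8\pi G} \approx 9 \times 10^{-27}\ \mathrm{kg/m^3}
\quad \text{(for } H \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}\text{)}.
\]
% If the mean density \rho exceeds \rho_c (and dark energy does not
% intervene), gravity eventually wins and the universe recollapses.
```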

The post Five Unsolved Mysteries That Keep Scientists up at Night appeared first on Futurism.

Warm and Unusual

The easiest way to make sense of the various unfamiliar objects we find floating around in the cosmos is by thinking about them in the context of celestial bodies we are familiar with. Our Solar System has become the starting point for our analysis of other bodies, especially exoplanets.

An Exoplanetary Bestiary: The Weirdest Planets in the Universe [INFOGRAPHIC]
Click to View Full Infographic

That’s how the term “hot Neptune” came about — it’s our way of describing exoplanets roughly as massive as Uranus or Neptune but that orbit much closer to their stars. We’ve already come across a number of these hot Neptunes, but one of the most recently discovered defies astronomers’ expectations about such exoplanets.

Located about 437 light-years away from Earth, HAT-P-26b is a hot Neptune that has a primordial atmosphere composed mostly of hydrogen and helium — an unusual mixture for planets very close to their stars. “Astronomers have just begun to investigate the atmospheres of these distant Neptune-mass planets, and almost right away, we found an example that goes against the trend in our Solar System,” said researcher Hannah Wakeford from NASA’s Goddard Space Flight Center in a news release.

Different Strokes

Usually, massive planets have lower metallicities — a smaller proportion of elements heavier than hydrogen and helium in their atmospheres. For example, Jupiter’s and Saturn’s metallicities are about five and ten times that of the Sun, respectively. Smaller planets like Uranus and Neptune, on the other hand, have higher metallicities — about a hundred times greater than the Sun’s.

For the hot Neptune HAT-P-26b, however, this pattern breaks down. Its metallicity is only about 4.8 times that of the Sun — much closer to Jupiter’s than to Neptune’s.
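
To make the anomaly concrete, here is a tiny Python sketch built on the metallicity figures quoted above; the planet masses (in Earth masses) are rough reference values we’ve added ourselves, not from the article:

```python
# Metallicity relative to the Sun, as quoted above, alongside approximate
# planet masses in Earth masses (rough reference values, our addition).
planets = {
    "Jupiter":   {"mass_earths": 318, "metallicity_solar": 5},
    "Saturn":    {"mass_earths": 95,  "metallicity_solar": 10},
    "Neptune":   {"mass_earths": 17,  "metallicity_solar": 100},
    "HAT-P-26b": {"mass_earths": 19,  "metallicity_solar": 4.8},
}

# The solar-system trend: the less massive the planet, the higher the
# metallicity. HAT-P-26b is Neptune-mass but Jupiter-like in metallicity.
for name, p in sorted(planets.items(), key=lambda kv: -kv[1]["mass_earths"]):
    print(f"{name:10s} {p['mass_earths']:4d} M_Earth"
          f" {p['metallicity_solar']:6.1f}x solar")
```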

Image credit: NASA/GSFC

Such a discovery just confirms that there’s so much we don’t know about the universe and what’s in it. Assumptions that may be true for Earth or for the planets in our solar system may not necessarily be true for others out there. “This analysis shows that there is a lot more diversity in the atmospheres of these exoplanets than we were expecting, which is providing insight into how planets can form and evolve differently than in our solar system,” said University of Exeter’s David K. Sing, second author of the paper, which is published in Nature.

For Wakeford, it’s discoveries like this that make her work exciting. “This kind of unexpected result is why I really love exploring the atmospheres of alien planets,” she said.

The post Astronomers Just Found a Planet They Didn’t Know Was Possible appeared first on Futurism.

An Inflated Discussion

The players: 36 of the world’s top physicists. The arena: the entire cosmos, sort of. For physicists and cosmologists, the origin of the universe is proving to be a battle worth fighting over — or, more accurately, writing an angry letter about.

Back in February, three physicists sparked a debate within the scientific community by writing a controversial article criticizing the cosmic inflation theory. Now, 33 other physicists, including Stephen Hawking, Lisa Randall, Rainer Weiss, Frank Wilczek, and cosmologist Alan Guth — the author of the theory himself — have written an (angry) letter in defense of the theory. Both the original article and the response were published in Scientific American.

A quick refresher on the cosmic inflation theory. First proposed in 1980 by Guth, the theory describes how the universe quickly expanded after the Big Bang, producing galaxies from quantum fluctuations. Most people are familiar with the theory as it’s what’s taught in schools when discussing the origin of the universe.

In their article, Princeton physicist Anna Ijjas, Harvard astronomer Abraham Loeb, and physicist Paul Steinhardt — a former colleague of Guth — criticized some of the theory’s assumptions, citing recent studies of the cosmic microwave background whose results, they argue, don’t support cosmic inflation. They also pointed out that inflation should have generated primordial gravitational waves, which have yet to be found.

“We should not just accept the assumption that inflation happened, especially because it does not offer a simple explanation of the observed features of the universe,” the trio wrote.

Testing theories over and over is a normal part of science, and no credible scientist would argue that questioning and scrutinizing assumptions is important. It was another part of the trio’s article that turned out to be highly debatable:

[I]nflationary cosmology, as we currently understand it, cannot be evaluated using the scientific method… Some scientists accept that inflation is untestable but refuse to abandon it. They have proposed that, instead, science must change by discarding one of its defining properties: empirical testability.

A Heated Response

That last bit was the kicker. Guth and the other 32 physicists heartily disagreed with the idea that the inflation theory couldn’t be tested with evidence and decided to voice their opinion on the trio’s accusations:

[The authors] close by making the extraordinary claim that inflationary cosmology “cannot be evaluated using the scientific method” and go on to assert that some scientists who accept inflation have proposed “discarding one of [science’s] defining properties: empirical testability,” thereby “promoting the idea of some kind of nonempirical science.”

We have no idea what scientists they are referring to.

Indeed, the ability to test a theory is an essential aspect of science. No real science is done by wildly making assumptions and sticking to them without testing. Guth, Hawking, and the 31 other experts explain in their letter that the inflation theory hasn’t been blindly accepted by the scientific community. On the contrary, more evidence has come to light that tests the theory.

The Evolution of Human Understanding of the Universe [INFOGRAPHIC]
Click to View Full Infographic

That is to be expected, as our current understanding of the universe is limited. However, Guth and the other authors assert that those limitations shouldn’t prevent us from exploring possibilities: “The fact that our knowledge of the universe is still incomplete is absolutely no reason to ignore the impressive empirical success of the standard inflationary models.”

“[A]s time passes, the successful [theories] become better and better established by improved experimental tests and theoretical advances. This has happened with inflation,” they argue. “Progress continues, supported by the enthusiastic efforts of many scientists who have chosen to participate in this vibrant branch of cosmology.”

This debate clearly shows one thing, though. As Guth and his 32 colleagues concluded: “Empirical science is alive and well!”

The post Stephen Hawking Responds to Colleagues’ Critical Article on the Origins of the Universe appeared first on Futurism.

Life on Land

Scientists currently hold two competing assumptions regarding the origins of life on Earth. The first and more popular one holds that life began in Earth’s ancient oceans, at deep-sea hydrothermal vents. The second suggests that life originated on land. Now, scientists from the University of New South Wales (UNSW) have uncovered fossil evidence that supports the second assumption and pushes back — way back — the earliest known evidence of life in hot springs.

“Our exciting findings don’t just extend back the record of life living in hot springs by 3 billion years, they indicate that life was inhabiting the land much earlier than previously thought, by up to about 580 million years,” Tara Djokic, first author of the study, which is published in Nature Communications, explained in a UNSW press release.

The UNSW scientists made their discovery of fossil evidence of early life while studying hot spring deposits from the ancient Dresser Formation within Western Australia’s Pilbara Craton. These well-preserved deposits were approximately 3.5 billion years old and contained evidence of geyserite, a mineral deposit found only in a terrestrial hot spring environment.

Life on Alien Lands

Living Off The Land: A Guide To Settling Mars [Infographic]
Click to View Full Infographic

Though they are noteworthy for what they tell us about early life on Earth, the findings of the UNSW study have implications that extend far beyond our planet. “Our research also has major implications for the search for life on Mars, because the Red Planet has ancient hot spring deposits of a similar age to the Dresser Formation in the Pilbara,” Djokic explained.

Indeed, the presence of water on Mars is already backed by quite a bit of evidence. Some of this water existed in ponds similar to ancient hot springs on Earth, most notably in a region of the Red Planet known as Columbia Hills. “The Pilbara deposits are the same age as much of the crust of Mars, which makes hot spring deposits on the Red Planet an exciting target for our quest to find fossilized life there,” Australian Centre for Astrobiology director Martin Van Kranendonk said of the study.

Of course, this all assumes that whatever life could exist on Mars would be similar to that of Earth’s, at least on a microbial level. “If life can be preserved in hot springs so far back in Earth’s history, then there is a good chance it could be preserved in Martian hot springs, too,” Djokic concluded.

The post New Fossil Evidence Suggests Life May Have Originated on Land and Not in the Oceans appeared first on Futurism.

Developing Human Connectome Project

Scientists at the Developing Human Connectome Project (DHCP) are mapping the brains of babies in the hope that the images will help unravel the mysteries of neurological disorders. Thus far, the brains of 1,000 sleeping newborns have been mapped, as well as the brains of 500 babies that were still in utero. The researchers just released thousands of images of the brains of 40 of the newborns studied during the first phase of the project.

Via DHCP

Because the high-resolution images require two to three hours of scanning, the process of obtaining them is tricky. When a sleeping newborn wakes up, all bets are off, as too much motion makes scanning impossible. To improve their chances of success, the researchers adjusted the scanner to run more quietly and tweaked its software to correct for slight movements computationally. Of course, even a soundly sleeping baby might wiggle or cry out, and even that minor motion can ruin an image.

Via DHCP

Probing the Brain’s Subtle Wiring

The project’s ultimate goal is to create the first 3D map of the developing human brain from 20 weeks’ gestation onward. The researchers will build the map from millions of images, one captured each second, which they will then stitch together into a coherent whole. Some of the babies in the study have been identified as being at high risk of developing autism and other neurological disorders, so the researchers should ultimately be able to create 3D maps of high-risk and average-risk brains for comparison.

Via DHCP

In the longer term, the researchers hope to have both genetic and medical data for the babies, along with their test results as they grow older. This holistic, long-term view of brain development and health outcomes would afford scientists unprecedented insights into how even the most subtle changes in the anatomy and wiring of the brain might influence health and behavior later in life, perhaps leading to new avenues of treatment for various disorders.

The post In-Womb Project Reveals Striking Images of Developing Human Brains appeared first on Futurism.

Synthetic Biology

Synthetic biology, the application of principles of engineering to biology, allows scientists to design, reimagine, and fabricate entirely new biological systems and components that do not exist in nature. The burgeoning field combines genomics and the chemical synthesis of DNA, allowing researchers to manufacture various DNA sequences quickly and assemble them in all new ways, creating unique genomes.

In short, synthetic biology is the realm of creating life from scratch. Scientists are closer to achieving this ability than they’ve ever been before — and investors and technology innovators are watching. Record sums are being invested in synthetic biology, which is promising to deliver novel biofuels, drugs, foods, materials, bio-products like organs for transplant, chemicals, and even perfumes. All of this promise is offsetting safety and ethical concerns for many investors who can’t resist the siren song of the incredible range of applications for synthetic biological products, and their vast market potential. This temptation is enhanced by the fact that the cost of DNA analysis only continues to drop.

Matt Ocko, a venture capitalist whose past Silicon Valley investments include Uber, Facebook, and Zynga, thinks that synthetic biology has reached its “epiphany” moment, having already proven it can deliver economic value.

“Synthetic biology companies are now becoming more like the disruptive, industrial-scale value propositions that define any technology business,” he told Reuters. “The things that sustain and accelerate this industry are today more effective, lower cost, more precise and more repeatable. That makes it easier to extract disruptive value.”

Experts met in London during the first week of April for a global synthetic biology conference. The meeting came only four weeks after researchers revealed they were almost finished synthesizing the entire genome of baker’s yeast. This is significant because yeast is a eukaryote — its cells contain nuclei, just like ours. The yeast project has proven that large-scale DNA manipulation is possible. In practice, the scientists working on the project have treated the genome like code, learning to work with a programming language that has four chemical building blocks (A, C, G, and T) rather than 1s and 0s.
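
To illustrate the analogy — this is our own toy example, not code from the yeast project — each DNA base can carry two bits, so arbitrary binary data maps cleanly onto a base sequence:

```python
# Toy illustration of the "DNA as code" analogy: each of the four bases
# (A, C, G, T) can represent two bits, so any byte string has a direct
# base-sequence equivalent. (Real genome design is vastly more involved.)
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def bytes_to_dna(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(seq: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

encoded = bytes_to_dna(b"hi")
print(encoded)                # CGGACGGC
print(dna_to_bytes(encoded))  # b'hi'
```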

This is a very relatable concept for tech investors looking for the next big breakthrough, and the numbers in synthetic biology prove it. According to synthetic biology network SynBioBeta, a record $1.21 billion was invested in the synthetic biology sector worldwide in 2016 — three times as much as in 2011. Over the same period, the number of synthetic biology firms almost doubled, to 411. A variety of synthetic biology companies are starting up, offering everything from new industrial chemicals to DNA synthesis and software — such as Britain’s Synthace and Twist Bioscience in the U.S.

This is a far cry from the original algal biofuel focus seen in synthetic biology circles, and many commentators think that’s a good thing. Stanford University’s Drew Endy explained to Reuters: “Why would you bank your whole platform on a bulk high-volume, low-price, low-margin product? It’s baffling, not strategic.”

Synbio firms have gotten smarter, too — not just in terms of the tech and science, but in drawing investment. Bolt Threads of California recently debuted yeast-derived spider silk in the form of a limited-edition $314 necktie. Its Japanese competitor, Spiber, created a spider-silk concept parka. Ginkgo Bioworks of Boston is developing a synthetic rose oil for Robertet, a French fragrance house, and Evolva of Switzerland has created a vanilla extract that is free of petrochemicals — unlike most vanilla flavorings.

Synthesizing The Future

As with any new technology — especially one that touches healthcare, food, and the environment — synthetic biology has its critics. In a time of rampant anti-GMO sentiments, it’s no surprise that Friends of the Earth has described the vanillin derived from yeast as “extreme genetic engineering.” Other controversies, such as those surrounding de-extinction of species such as the Tasmanian tiger and the Woolly Mammoth, are making waves in both scientific and lay communities.

However, the crux of the anti-synthetic biology argument is substantially the same as that of the anti-GMO argument: we shouldn’t play God. If we do, there may be unpredictable, Frankenstein-esque consequences that we can’t control. This argument isn’t all that persuasive, primarily because it is neither evidence-based nor specific enough to mean much. Caution and scientifically rigorous process are certainly called for, as is the collaborative application of bioethics at these early stages in the game. These kinds of safety nets will bear far more protective fruit than either fear-mongering or outright prohibitions, especially considering how much we all stand to gain from synthetic biology.

The field is likely to yield tremendous benefits to human health, numerous industries, and the environment. Scientists will use synthetic biology to identify and remove environmental contaminants from air and water. Engineers will apply synthetic biology to create devices to diagnose, monitor, and treat disease, and researchers will use it to develop new, more effective vaccines and drugs. Synthetic organs, organs on chips, and super biomaterials will be produced using this technology. Synthetic biology could also be used to create living cell lines for pharmaceutical testing, as well as to increase efficiency in biomanufacturing and chemical technology.

Intrexon, which has been fighting Zika virus and other dangerous diseases by releasing genetically manipulated mosquitoes by the millions in Brazil, is providing an example of synthetic biology in action. While many critics have spoken up against Woolly Mammoth 2.0 or lab-grown meats — particularly since it’s easier to criticize without a dire need or an essential, articulable problem to solve — it’s harder to argue against a possible solution for mosquito-borne illnesses. Success for Intrexon will hopefully mean more acceptance for synthetic biology across the board.

The post Billions of Dollars Are Being Spent to Perfect the Engineering of Life appeared first on Futurism.

The Nature of Black Holes

A world-renowned theoretical physicist and professor at the Institute for Advanced Study (IAS) has a theory on how the world will end, and while it’s not with the same Big Bang that began it, the event will be no less dramatic.

Mathematical physicist Robbert Dijkgraaf is the director and Leon Levy Professor at the IAS. His research focuses largely on the intersection of mathematics and particle physics, and he has made important contributions to the study of string theory. He has also developed formulas that have contributed to a greater understanding of the nature of black holes. It’s black holes, in fact, that Dijkgraaf believes hold the key to the end of the world as we know it — or, more accurately, the end of the universe as we know it.

Dijkgraaf explained his theory that “black holes will eat up everything else in the universe” in a video for Business Insider.

An Island Universe

Dijkgraaf explained that physicists have discovered in recent years that not only is the universe expanding, but a force within the voids of empty space seems to be pushing it apart and accelerating that expansion. Eventually, galaxies will recede so fast — faster than the speed of light, he theorizes — that the distant galaxies we may not even be aware of yet will fade away, then the galaxies closer to us, until we’re living in what Dijkgraaf calls something of “an island universe.”

The Evolution of Human Understanding of the Universe [INFOGRAPHIC]
Click to View Full Infographic

The implications of such cosmic expansion are not insignificant. “When the galaxy is left alone in this kind of empty universe, the stars will go out one by one because they will burn their fuel. And if all the stars are done burning their fuel, the only force that’s left is the force of gravity,” Dijkgraaf explains. “And it will slowly pull them in, and the black holes will eat up everything else in the universe, and this will be it.”

“It’s a very desolate future for the universe,” he adds. “And I think that’s why many physicists were very hesitant to believe that this is actually the true future history of the universe. But it turns out that the experiments are all pointing in that direction.”

As much like a doomsday prophecy as Dijkgraaf’s assertions may sound, even if the universe does ultimately reach this rather dark (literally and figuratively) end, the event will no doubt be a thing of great beauty, too.

The post World-Renowned Physicist Explains How Our Universe Will End appeared first on Futurism.
