
Hope for Exoplanets

Last year, the Trappist-1 system was discovered, and it reignited humanity’s hope of finding planets that could one day (or currently) sustain life. But there has been much debate over the probability that planets within that system, which is located 39 light-years away, could hold life as we know it. While Trappist planets might fall within what is known as the “habitable zone,” other factors, such as the fact that they orbit a red dwarf, might make them less prone to sustaining life than initially thought.

This uncertainty might get you down until you read that astrophysicists at the University of Texas at Arlington (UTA) have predicted the existence of a much closer Earth-like planet. In a study published as “Dynamics of a probable Earth-Like Planet in the GJ 832 System” in The Astrophysical Journal, they describe a potential planet only 16 light-years away, within the star system Gliese 832, that appears to be both stable and Earth-like.

Image Credit: National Science Foundation / Zina Deretsky

Lead author Suman Satyal, a UTA physics researcher, stated in a UTA press release that, “According to our calculations, this hypothetical alien world would probably have a mass between 1 to 15 Earth masses.”

“This is an important breakthrough demonstrating the possible existence of a potential new planet orbiting a star close to our own,” UTA Physics Chair Alexander Weiss said in the press release. “The fact that Dr. Satyal was able to demonstrate that the planet could maintain a stable orbit in the habitable zone of a red dwarf for more than 1 billion years is extremely impressive and demonstrates the world class capabilities of our department’s astrophysics group.”

Going Forward

According to Satyal, “The existence of this possible planet is supported by long-term orbital stability of the system, orbital dynamics and the synthetic radial velocity signal analysis…At the same time, a significantly large number of radial velocity observations, transit method studies, as well as direct imaging are still needed to confirm the presence of possible new planets in the Gliese 832 system.”

So, it is not yet 100 percent certain that this unicorn of a planet exists. However, these predictions are strongly supported by evidence garnered by the research group. But what could this mean if and when these scientists prove the existence of such a planet?


Technologies that allow us to explore ever deeper into the cosmos continue to develop. From SpaceX’s advances to the Breakthrough Initiatives, it is becoming ever more possible to reach previously unseen corners of the universe. Still, consider that Mars is “only” an average of 12.5 light-minutes from Earth; next to that, 16 light-years might seem like an unconquerable distance.

However, compared to the distances of other promising exoplanets, this could be one of the best shots we have to survey and study an exoplanet that fits many habitable criteria. The exploration of the universe is a never-ending journey that will only continue to further our knowledge and curiosity.

The post Astrophysicists Believe They’ve Found Another Earth-Like Planet in a Nearby Star System appeared first on Futurism.

Brian is bringing the story of science to the world. Uncover the mysteries here.

The Pace of Time

A day on Earth is longer than it used to be. The increase is tiny: over the span of a hundred years, Earth’s day lengthens by only a few milliseconds. It’s only in the past few decades that we’ve been able to measure Earth’s rotation with enough precision to see this effect directly. Using atomic clocks and ultra-precise observations of distant quasars, we can measure the length of a day to within nanoseconds. Our measurements are so precise that we can observe fluctuations in the length of a day due to things like earthquakes. Those fluctuations make it a challenge to answer another question: how has Earth’s rotation changed over longer periods of time?

Variation of the length of a day in recent years. Image Credit: Wikipedia.

Part of the reason Earth’s days are getting longer is the gravitational pull of the Moon on our oceans. The tides slosh against the Earth, gradually slowing its rotation. Over millions of years this adds up: Earth’s day was once hours shorter than it is now, and so there were more days in a year than there are today. We see this effect in the geological record, which tells us an Earth day was about 22 hours long 620 million years ago. Measuring the length of a day in the span between the modern and geological eras, however, is difficult. Hundreds of years ago, clocks weren’t accurate enough to detect this variation, and the length of a day was defined by Earth’s rotation itself, making any direct comparison impossible. But recent work has found a way to study Earth’s changing days.
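The 22-hour figure implies more days per year. A rough sketch, assuming (for this calculation only) that the length of the year in seconds has stayed roughly constant:

```python
# Rough sketch: how many 22-hour days fit in a year, assuming the
# year's duration in seconds is about the same as today's.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # modern year, in seconds

day_length_hours = 22  # approximate day length 620 million years ago
days_per_year_then = SECONDS_PER_YEAR / (day_length_hours * 3600)

print(f"~{days_per_year_then:.0f} days per year 620 million years ago")
```

That works out to nearly 400 days in an ancient year, compared with about 365 today.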

A Series of Observations

Although our ancestors of centuries past didn’t have accurate clocks, they were good astronomers. They observed and documented astronomical events such as the occultation of bright stars by the Moon, as well as solar eclipses. The occurrence of these events depends critically on when and where you are. If, for example, an astronomer in one city sees the Moon pass in front of a star one night, an astronomer in a nearby city will only see the Moon pass close to the star. By comparing the recorded times and places of these events with the times calculated from the orbital motions of the Earth and Moon, we can determine how much Earth’s rotation has drifted. Fitting a history of observations together, we can get an average rate for the lengthening of the day. That turns out to be about 1.8 milliseconds per century.

There are two things that are interesting about this result. The first is that it’s pretty amazing to be able to determine this rate from historical documents. The observations span more than two and a half millennia and are written in various languages from various locations. Gathering them all together and verifying them is a tremendous effort. The other is that this rate is actually less than the rate predicted from the tidal effects of our Moon (about 2.3 ms/century). The difference is likely due to changes in Earth’s overall shape. We know, for example, that the melting of ice since the last ice age (about 10,000 years ago) has released pressure at the Earth’s poles, allowing the planet to return to a more spherical shape. This tends to shorten Earth’s days a bit. The combination of these two effects gives us the historical rate we see.
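These rates sound negligible, but a linearly lengthening day accumulates a quadratically growing clock offset, which is what makes ancient records so sensitive. A quick sketch (the 1.8 and 2.3 ms/century figures are from the text; the rest is straightforward arithmetic):

```python
# If the length of a day grows linearly at `rate` seconds/day per century,
# the total clock offset accumulated after T centuries is the integral:
#   offset = (rate / 2) * T**2 * DAYS_PER_CENTURY
DAYS_PER_CENTURY = 36525  # 365.25 days/year * 100 years

def accumulated_offset_seconds(rate, centuries):
    """Clock offset (seconds) built up by a linearly lengthening day."""
    return 0.5 * rate * centuries**2 * DAYS_PER_CENTURY

# ~2,500 years of historical eclipse and occultation records:
observed = accumulated_offset_seconds(1.8e-3, 25)  # measured rate
tidal = accumulated_offset_seconds(2.3e-3, 25)     # tidal theory alone

print(f"observed rate: ~{observed / 3600:.1f} hours of total offset")
print(f"tidal rate:    ~{tidal / 3600:.1f} hours of total offset")
```

Over 25 centuries the observed rate accumulates to roughly six hours of offset, enough to shift where on Earth an ancient eclipse would have been visible, which is how the historical records reveal the drift.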

Overall this work is a great demonstration of how history can speak to us. If we listen closely, we can even see the changes of time.

The post Solar Eclipses Reveal That the Pace of Time is Changing appeared first on Futurism.

The when, where, and how for watching Monday’s solar eclipse.

The post Are You Ready for the Solar Eclipse? Here’s How to Prepare. appeared first on Futurism.

An Innovation in Artificial Incubation

Around 15 million babies are born preterm or premature every year, according to the World Health Organization (WHO). This number is expected to rise, bringing more infants into the world before completing 37 weeks of gestation. How we are going to care for a growing number of premature infants is a real concern: preterm birth complications were responsible for almost a million deaths in 2015, making it the leading cause of death among children below 5 years of age.

Thankfully, there are a number of interventions that can help, many of which involve developing better incubation chambers — even artificial wombs and placentas — where the premature infants can continue their growth outside the womb. One of these is an artificial womb developed by a combined team of researchers from the Women and Infants Research Foundation, the University of Western Australia, and Tohoku University Hospital, Japan.

“Designing treatment strategies for extremely preterm infants is a challenge,” lead researcher Matt Kemp said in a press release. “At this gestational age the lungs are often too structurally and functionally under-developed for the baby to breathe easily.” Their work, published in The American Journal of Obstetrics & Gynecology, took a different approach. The key was treating the preterm infants not as babies, but as fetuses.

EVE Therapy

Their device and method successfully incubated healthy baby lambs in an ex-vivo uterine environment (EVE) for a one-week period. “At its core, our equipment is essentially a high-tech amniotic fluid bath combined with an artificial placenta. Put those together, and with careful maintenance what you’ve got is an artificial womb,” Kemp explained.

Image credit: Women & Infants Research Foundation

He added in the press release, “By providing an alternative means of gas exchange for the fetus, we hoped to spare the extremely preterm cardiopulmonary system from ventilation-derived injury, and save the lives of those babies whose lungs are too immature to breathe properly. The end goal is to provide preterm babies the chance to better develop their lungs and other important organs before being brought into the world.” It’s this approach that makes it revolutionary.

The scientists hope that this EVE therapy could soon help bring preterm human babies to term. “We now have a much better understanding of what works and what doesn’t, and although significant development is required, a life support system based around EVE therapy may provide an avenue to improve outcomes for extremely preterm infants,” Kemp said.

The post Further Research into Artificial Wombs Brings Us Closer to a Future Where Babies Grow Outside the Body appeared first on Futurism.

Working Behind the Scenes

When I visited the Cerro Tololo Inter-American Observatory a few years ago, the facility lost power. The observatory is in a remote area of northern Chile, so you can’t just call the power company to complain.

Cerro Tololo. Image Source: Tudorica Alexandru/ Flickr Commons

Fortunately, a skilled team of technicians was on the job. Their first priority was to maintain the cooling system of the large Blanco telescope, which contains an array of 62 CCDs known as the Dark Energy Camera that must be kept cold. Discussing the matter in a mix of English and Spanish, the team worked through the challenge of rigging generators to the telescope.

Meanwhile, the CTIO kitchen staff had to figure out how to feed dozens of people without electricity. With portable gas cookers they prepared baked fish and steamed rice, with ample hot water for tea and coffee. At the same time, facility administrators were coordinating with the Chilean electrical grid to restore power to the observatory. By the end of the day, everything was back up and running.

Whenever CTIO or other large science facilities make a breakthrough discovery, we hear about it all over the web. What we don’t hear about is the work done behind the scenes: the technicians who saved a million-dollar camera, the staff who ensure that everyone is safe and fed, and the machinists who build and maintain these facilities. We also don’t hear about how these remote facilities interact with neighboring communities, or how they face the challenge of being good neighbors while pursuing their scientific goals. These are stories worth telling, which is why I’ve been working on a new television project.

The Stories Behind the Headlines

For about a year I’ve been working with journalist Mark Gillespie and Canadian TV producers Steven Mitchell and Al Magee to develop a new kind of science show, one that will tell the stories behind the science headlines. Steven and Al have decades of experience in television storytelling and have won several awards for their outstanding work. They also share my desire to present science honestly and without hype. Mark has worked in some of the most remote areas of the world and knows how to bring out stories that are meaningful and powerful.

We’ve already developed relations with many big science facilities, and we know several stories we want to tell. But in order for the project to succeed we need to film a “sizzle reel” demonstrating the show to the networks. It will be filmed on location at Green Bank Observatory. But it’s going to take some funding, so we’ve launched a Kickstarter campaign.

Science is not just about breakthrough discoveries. It’s about people coming together to do extraordinary things. I hope you’ll help us tell this story.

The post A Different Kind of Science Show appeared first on Futurism.

Pushing the Pace

The CRISPR gene editing tool has already been used to perform some incredible feats of science, from manipulating the social behavior of ants to making superbugs kill themselves. It’s an incredibly powerful asset, but this week at CrisprCon, there was plenty of discussion about where we should draw a line on its usage.

“I’ve never seen science move at the pace it’s moving right now,” said CRISPR co-discoverer Jennifer Doudna, who has spent recent months touring the world campaigning for a ‘global consensus’ on appropriate implementations of gene-editing technologies. “Which means we can’t put off these conversations.”

CRISPR has already been used to edit harmful conditions out of animals and even viable human embryos. From this point, it wouldn’t take a great leap to start using the technology to enhance healthy organisms — which is why now is the time for discussions about the consequences.

Handle with Care

While medical uses of CRISPR are perhaps the most ethically urgent, the conversation about its usage goes beyond medicine. Companies like Monsanto and Cargill have already licensed CRISPR technologies to help with their agricultural efforts. However, early attempts at genetically modified crops struggled to gain mainstream acceptance, and that’s something these firms need to keep in mind as they implement the latest techniques.

“It was a convenience item for farmers,” observed organic farmer Tom Wiley at the convention, according to Wired. “And a profit center for corporations.” To combat genetically modified food’s perception problem, companies using CRISPR will have to make sure that the technology benefits the consumer, not just the production process.

The convention addressed CRISPR usage in many different fields, from the importance of ensuring it is used to address the widest possible range of medical conditions to the potentially damaging effects of gene drives on delicate ecosystems.

Science is moving at a rapid pace, and CRISPR is too — but if we don’t carefully consider which applications are safe and valid, it could quickly cause as many problems as it solves.

“Crispr is not a light on the nation, it’s a mirror,” said CrisprCon keynote speaker Greg Simon, director of the Biden Cancer Initiative; Wired reporter Megan Molteni interpreted those words as, “it’s just another technology that’s only as good as the people using it.”

The post CRISPR Co-Discoverer: “I’ve Never Seen Science Move at the Pace It’s Moving Now” appeared first on Futurism.

Extra Chromosomes

Scientists have produced healthy offspring from male mice that were genetically infertile, a development that could lead to new tools for managing human infertility. The X and Y chromosomes determine sex, with one of each (XY) signifying male and two X chromosomes (XX) signifying female. However, about 1 in 500 boys is born with an extra sex chromosome. Whether it’s an X or a Y, the presence of a third sex chromosome can cause infertility by disrupting the formation of mature sperm.

Scientists at the Francis Crick Institute have now discovered a technique for removing the extra sex chromosome from cultured stem cells and producing fertile offspring. If they can translate this technique for application in humans, those with either Double Y syndrome (XYY) or Klinefelter syndrome (XXY) who experience chromosomal infertility might one day be able to have children via assisted reproduction.

Image Credit: Zappys Technology Solutions/Flickr

“Our approach allowed us to create offspring from sterile XXY and XYY mice,” lead author and Francis Crick Institute scientist Takayuki Hirota said in a press release. “It would be interesting to see whether the same approach could one day be used as a fertility treatment for men with three sex chromosomes.”

A Promising Technique

To accomplish this research, the team removed small sections of ear tissue from mice with XXY and XYY chromosomal anomalies and cultured them. They then collected fibroblasts, which are connective tissue cells, and turned them into stem cells. During this transformation, the researchers noted that some of the cells had lost the extra sex chromosome. Next, they used a previously developed method to guide the stem cells, via specific chemical signals, into becoming potential sperm cells. Once these cells were injected into mouse testes, they developed into mature sperm. The mature sperm proved viable, so the researchers harvested them and used them in assisted reproduction cycles to create healthy, fertile offspring.

The team has also conducted one preliminary experiment using men with Klinefelter syndrome. In that experiment, the researchers followed the first part of the same process and discovered that stem cells produced from the fibroblasts of these men also shed the extraneous sex chromosome. However, this was an early stage experiment, and extensive research remains before the technique will ever be viable for use in humans.

“There is currently no way to make mature sperm outside of the body. In our mouse experiments we have to inject cells that have the potential to become sperm back into the testes to help them finish developing. But we found that this caused tumors in some of the mouse recipients,” senior author and Group Leader at the Francis Crick Institute James Turner explained in the press release. “So reducing the risk of tumor formation or discovering a way to produce mature sperm in a test tube will have to be developed before we can even consider this in humans.”

The post Scientists Have Officially Created Healthy Offspring from Genetically Infertile Male Mice appeared first on Futurism.

Altered States

In a new study published in Nature, researchers led by a team from Los Alamos National Laboratory (LANL) reveal that they may have discovered a new state of matter.


At the core of this research is CeRhIn5, a heavy-fermion superconductor. When the researchers placed CeRhIn5 within a high magnetic field, its electronic symmetry broke and the compound went into an electronic nematic state.

In this state, the electrons of CeRhIn5 aligned in such a way that the crystal’s symmetry was reduced. Based on this research, this change appears to be the norm for unconventional superconductors, a broader category of which heavy-fermion superconductors are a subtype.

“The appearance of the electronic alignment, called nematic behavior, in a prototypical heavy-fermion superconductor highlights the interrelation of nematicity and unconventional superconductivity, suggesting nematicity to be common among correlated superconducting materials,” said Filip Ronning, the paper’s lead author, in a LANL press release.

State Lines

For years, young students have had it drilled into their brains that only three states of matter exist: solid, liquid, and gas. However, we’re learning that that’s not accurate. Though not common on Earth, matter can also exist in a plasma state, typically reached when a gas is heated to searing temperatures. And in just the past couple of decades, researchers have created yet another state in the lab: the Bose-Einstein condensate (BEC).

Even more recently, research has shown that a second liquid state could exist. When water was heated to temperatures between 40-60°C (104-140°F), new properties were observed, such as changes to surface tension and thermal conductivity.

Identifying common traits that pertain to groups of materials with shared attributes helps scientists understand more about how to classify them. In this case, the LANL team’s research can help us learn more about superconductors, which are able to transfer energy with zero electrical resistance.

Superconductors need very low temperatures to function, so it’s possible that this newly discovered trait could, in fact, be evidence of another state beyond the parameters of a solid, akin to how plasma is one step beyond gas.

More research will be needed to find out, but as everything from particle accelerators to MRI machines use superconductors, any new insights into these remarkable materials could have far-reaching consequences.

The post Researchers May Have Just Discovered an Entirely New State of Matter appeared first on Futurism.

MIT’s project Materiable is made up of a grid of white plastic pins that are designed to mimic various haptic properties such as elasticity, viscosity and flexibility.

The post It May Just Trick Your Mind appeared first on Futurism.

Simon Says

A team of researchers at the University at Buffalo has used magneto-thermal stimulation to control the movements of mice.

These results were achieved with mice that had been genetically engineered so that targeted neurons would produce temperature-sensitive ion channels. Nanoparticles consisting of a cobalt-ferrite core coated in manganese ferrite were then injected into the brain, where they attached themselves to these neurons.

Applying an alternating magnetic field causes the magnetization of these nanoparticles to switch back and forth, raising their temperature. As a result, the ion channels open, causing the associated neurons to fire.

Neural cells in a mouse’s striatum. By stimulating cells in the motor cortex, the striatum, and a deep section of the brain, respectively, the mice could be compelled to start running, turn around, and come to a halt. Image credit: Munshi et al., eLife

Inside the Mind

This research isn’t really about bossing mice around — it’s about linking specific parts of the brain to the movements and behaviors they control. Understanding how the brain works in this respect could help develop treatments for conditions that affect particular groups of neurons, including Parkinson’s disease, dystonia, and traumatic brain injuries.

“There is a lot of work being done now to map the neuronal circuits that control behavior and emotions,” said Professor of Physics Arnd Pralle, the lead researcher. “The technique we have developed could aid this effort greatly.”

Techniques like transcranial magnetic stimulation already allow us to tinker with the brain, and research has shown that optogenetics can be used to make major changes to the behavior of mice in a lab setting. However, magneto-thermal stimulation is considered less invasive than these other methods, and no evidence of brain damage was found in the mice used during testing.

The researchers hope to expand their studies by figuring out how to stimulate multiple parts of the brain at the same time.

The post Researchers Use Genetics and Electromagnetism to Control the Movements of Mice appeared first on Futurism.

Snowballs and Algae

This week, in a press release, the Australian National University asserted that a team of their scientists had figured out how animal life came into existence on our planet. The release states, “Research led by The Australian National University (ANU) has solved the mystery of how the first animals appeared on Earth, a pivotal moment for the planet without which humans would not exist.”

The discovery started with ancient sedimentary rock samples from central Australia. The team, led by ANU Research School of Earth Sciences associate professor Jochen Brocks, crushed the rock samples into powder and then “extracted molecules of ancient organisms from them,” Brocks explained in the press release. “These molecules tell us that it really became interesting 650 million years ago. It was a revolution of ecosystems, it was the rise of algae.”

Scientists generally agree that complex animal life on Earth emerged some 600 million years ago. Brocks and his team claim that a significant event preceded this. “Before all of this happened, there was a dramatic event 50 million years earlier called Snowball Earth,” he said, referring to a hypothesis that the planet’s surface was once entirely frozen from pole to pole.

This event is key for the ANU team’s findings, as it was what made the growth and spread of algae possible. “The Earth was frozen over for 50 million years. Huge glaciers ground entire mountain ranges to powder that released nutrients, and when the snow melted during an extreme global heating event rivers washed torrents of nutrients into the ocean,” Brocks explained.

Evolving Insights

At the end of this frozen period, when global temperatures settled at more hospitable levels, the nutrients that had flooded into the oceans sparked the transition from microbial life to more complex life, Brocks said. “These large and nutritious organisms at the base of the food web provided the burst of energy required for the evolution of complex ecosystems, where increasingly large and complex animals, including humans, could thrive on Earth,” he added.

This study is, indeed, promising and could potentially be groundbreaking. The origin of complex life on Earth remains one of the most puzzling events in science, as the evolutionary jump between single-celled microbes and multi-cellular, nucleus-bearing, mitochondria-powered organisms is simply immense.

Additionally, our understanding of the moment this occurred keeps on evolving. While it’s understood that conditions on Earth became ideal at a certain point to make life possible, being able to pinpoint just what these conditions were — and exactly when they occurred — could better help explain complex life’s rise.

Studies like this ANU research help in achieving this. Recently, a couple of other studies have also attempted to shed light on the origin of life on Earth. One suggested that life didn’t begin only in the oceans, but also on land. Another claimed that the so-called Asgard archaea are microbial ancestors of complex life, including humans.

Studying the origins of life on this planet could also help us to figure out how life develops in the cosmos. Right now, the search for extraterrestrial life is a concern that keeps a number of scientists and institutions occupied. Understanding life on Earth might just help us understand life beyond it.

The post Scientists Claim They Have “Solved the Mystery of How the First Animals Appeared on Earth” appeared first on Futurism.

What exactly does the process of the big bang, the formation of stars, and the birth of planets mean to our lives?

The post Watch: Why Cosmic Evolution Matters appeared first on Futurism.

Deep Freeze

A group of scientists from the University of Warwick has taken inspiration from nature for a new research project that could greatly improve the process of human organ transplantation.


The researchers have developed a new synthetic antifreeze that mimics the properties of natural antifreeze proteins (AFPs). These proteins are produced naturally by extremophile species in cold environments, which need to be able to moderate the formation and spread of ice in order to survive.

The Warwick team’s synthetic antifreeze is iron-based, and they attribute its ability to slow the growth of ice crystals to the separation of the iron complex into regions with either water-loving or water-hating characteristics, imitating the composition of AFPs found in nature.

Heart of Ice

This synthetic antifreeze could be useful for a wide range of applications, from making airplane wings less susceptible to the cold to helping produce smoother ice cream. However, it could have a major impact on something far more significant than dessert: organ transplants.

After organs are removed from a donor, they have to be kept on ice to prevent them from deteriorating before they reach their recipient. This process is problematic, as organs typically last only about four hours before being rendered unusable. Sixty percent of hearts and lungs donated for transplants are discarded each year, due in part to this short shelf life.

If organs were frozen, and not simply put in cold storage, they could last longer, but under normal circumstances, organs simply can’t survive the freezing process — cells are liable to shrivel up or even collapse entirely, the matrices that connect cells together might be torn apart, and blood vessels can disintegrate entirely.

Ice crystals are the cause of all of these major problems. If the Warwick researchers’ synthetic antifreeze can thwart the growth of these crystals, it could make it possible for surgeons to freeze organs without the associated negative effects. This would make transplants safer and add a lot more flexibility to the process in terms of timing and transport.

Most significantly, this ability to freeze organs could dramatically decrease the number of donations that are wasted as a result of their short timeframe for use. Data from the U.K. suggests that if half of the wasted organs were actually used, transplant waiting lists could be eliminated within just a couple of years.
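To see the scale of the opportunity, here is a deliberately simple sketch. The 60 percent discard rate comes from the text above, but the annual donation total below is a hypothetical placeholder, not a real statistic:

```python
# Illustrative model only: the discard rate is quoted in the text,
# but the annual donation figure below is hypothetical.
def organs_recovered(donated_per_year, discard_rate, recovered_fraction):
    """Extra organs made usable if some fraction of discards is saved."""
    discarded = donated_per_year * discard_rate
    return discarded * recovered_fraction

# Suppose 3,000 hearts and lungs are donated in a year (made-up number),
# 60% are discarded, and freezing recovers half of those discards:
extra = organs_recovered(3000, 0.60, 0.50)
print(f"~{extra:.0f} additional usable organs per year")
```

Even recovering half of the discards would, under these assumptions, nearly double the number of usable hearts and lungs, which is the arithmetic behind the waiting-list claim above.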

This project hasn’t yet moved to trials using human organs, so real-world use is still far off, but the research has the potential to give people waiting for transplants a much better shot at getting the organ donation they need.

The post A New Technique Could Revolutionize Organ Transplants appeared first on Futurism.

The Universe’s Story

It is often thought that science is about data, but at its heart science is about stories. The universe has a story to tell, and through science we can learn a part of that story. When I write about scientific discoveries, I try to tell that story. If I tell the story well, then it becomes both understandable and relevant to readers. But there is a part of the scientific story we don’t often tell. That’s why I’ve been working on a new project.

Image Source: Liis Trauser/ Flickr Commons

Many of the breakthroughs in modern science come from big facilities such as ALMA, LIGO, and CERN. These are massive projects that require the support of thousands of people. Everything from cooking to engineering comes together to make these facilities possible. Many facilities are also in remote locations. They have a huge impact on the surrounding communities, often entering the sacred spaces of indigenous cultures. The interactions can be filled with tension, but they also enrich this human endeavor we call science.

Communicating Science

Part of communicating science is not just how you tell the story but also which stories you choose to tell. For about a year I’ve been working with journalist Mark Gillespie and Canadian producers Steven Mitchell and Al Magee to develop a new kind of science show, one that will tell the stories behind the science headlines. Steven and Al have decades of experience in television storytelling and have won several awards for their outstanding work. They also share my desire to present science honestly and without hype. Mark has worked in some of the most remote areas of the world and knows how to bring out stories that are meaningful and powerful.

For the past year we’ve been developing stories and building connections to several science facilities and their surrounding communities. The next step to making the project real is to film a “sizzle reel” demonstrating the show to the networks. It will be filmed on location at Green Bank Observatory. But that’s going to take some funding. So today we’ve launched a Kickstarter campaign. With your support we can make this project a reality.

You can find the project on Kickstarter. Obviously I’d love it if you contributed financially, but we also need as much media buzz as we can get. The more you share it on social media, the better chance it has to succeed.

Many of you already support One Universe at a Time, which has made it possible for me to tell the stories of science. I very much hope you’ll support this new project as well.

The post Uncover The Hidden Stories Behind Our Greatest Scientific Discoveries appeared first on Futurism.

For Better (In)Visibility

From cloaking devices that conceal spaceships, to Harry Potter’s hand-me-down disappearing blanket, or even the One Ring and its power to conceal its wearer, invisibility is a staple in science fiction and fiction in general. Scientists have been hard at work, however, to bring such a technology into reality. Joining the research and development of cloaking technology is Japanese car manufacturer Toyota.

The company recently acquired a patent from the United States Patent and Trademark Office (USPTO) for “Apparatuses and methods for making an object appear transparent,” which Toyota filed on June 17 of last year. Seems exciting, especially since it’s a car maker working on invisibility tech. It actually is interesting, but not because it’s meant to turn Toyota’s cars invisible — well, at least not all of them.

The patent describes a cloaking device designed to turn vehicles’ A-pillars to the left and right of the car’s dashboard invisible, improving road visibility for the driver. Seems ironic that a cloaking technology would improve visibility, right? Toyota thinks that it’s the way to go, especially since A-pillars have become rather large following crash-safety standards. The wider they are, the more they end up obscuring a driver’s vision.

Image credit: USPTO

Different Shades of Cloaking

According to Toyota, such a technology is already possible — like the Rochester Cloak — but it would require video cameras and other expensive equipment for it to work in cars. This cloaking device, on the other hand, would be a less expensive solution. It would use mirrors to bend visible light around the A-pillars to allow the driver to “see” through them. This would give drivers a wider view of the road and their surroundings. It also benefits pedestrians, as drivers would see them better.

“Light from an object on an object-side of the cloaking device [i.e., facing the road] is directed around an article [the A-pillars] within the cloaking region and forms an image on an image-side of the cloaking device [i.e., facing the driver’s seat] such that the article appears transparent to an observer looking towards the object,” according to a description of the device in the patent.

Making things “invisible” is all about manipulating light. The other invisibility technologies currently being developed do the same thing, albeit using different methods or materials. For instance, researchers from the Max Planck Institute are working on mimicking the biology of moths’ eyes to turn lenses and glass invisible. Meanwhile, a group from the University of California, San Diego is controlling how light reflects on objects using a thin “carpet cloak” made from Teflon and ceramic particles.

All of these studies bring us closer to a working invisibility cloak, or at least a technology which allows us to see through objects by making them bend light — like Toyota’s device. We will have to wait and see.

The post Toyota Just Patented a “Cloaking Device” appeared first on Futurism.

Welcome to the World of Biohacking

Biologists pay scrupulous attention to DNA synthesis, not just out of a need for precision, but also as a precaution. Scientists don’t want to create or spread a dangerous stretch of genetic code that, with a little bit of criminal ingenuity, could be used to make a toxin or an infectious disease. But recently, a group of biohackers has found a way to cross biology with the digital by resequencing DNA, not to infect humans or animals, but computers.

A group of researchers from the University of Washington has shown that, for the first time, it’s possible to encode malicious software into physical strands of DNA. When a gene sequencer analyzes such a strand, the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer.
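To get a feel for how arbitrary data can live in a strand of DNA, here is a toy Python sketch that packs bytes into bases at two bits per nucleotide. This mapping is a common convention in DNA data storage, not necessarily the encoding the UW team used, and the payload is just a stand-in label.

```python
# Toy sketch of packing arbitrary bytes into a DNA base sequence, two
# bits per nucleotide. A common convention, not the UW team's scheme.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four bases, most significant bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(sequence: str) -> bytes:
    """Invert the encoding: every four bases become one byte."""
    out = bytearray()
    for i in range(0, len(sequence), 4):
        byte = 0
        for base in sequence[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

payload = b"exec"                  # stand-in for an attacker's bytes
strand = bytes_to_dna(payload)
print(strand)                      # 16 bases, four per payload byte
assert dna_to_bytes(strand) == payload
```

Synthesizing that strand and feeding it to a sequencer is, of course, where the hard part begins; the encoding itself is the easy half of the trick.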

This is still a far cry from a practical criminal attack, but the researchers agree that the method could surface in real-world attacks once DNA sequencing grows more powerful and ubiquitous.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” said Tadayoshi Kohno, Computer Science Professor at the University of Washington in an interview with WIRED, noting the similarity to traditional hacker attacks. “That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat,” he added.

Image Credit: Lorna

While, currently, this feels more like an alternate take on Neal Stephenson’s Snow Crash than an imminent cyber threat, the means for disseminating this bio-delivered code are already coming into being. University labs, for one, are employing increasingly centralized services for students and professors working with college-owned (and expensive) gene sequencing equipment. As other, non-academic environs like corporations, medical facilities, and government-run facilities follow suit, this DNA-malware delivery trick will become more feasible. Much of the DNA these centralized services handle also arrives from outside sources, which could provide the perfect window for malware insertion.

Hackers call this kind of attack an “exploit”; this specific one is a “buffer overflow”: the malicious input fills the space in a computer’s memory allocated for the gene sequence, then spills out into adjacent parts of memory, where it can plant its own commands.
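Since Python is memory-safe, a real buffer overflow can't actually happen in it; the toy sketch below instead models memory as a flat list of cells to show the mechanism. Every name and value here is purely illustrative.

```python
# Toy model of a buffer overflow. Real overflows corrupt native memory;
# here, "memory" is simulated as a flat list of cells.
MEMORY_SIZE = 16
BUFFER_START, BUFFER_LEN = 0, 8   # cells reserved for the gene sequence
COMMAND_SLOT = 12                 # adjacent cell holding a "command"

memory = ["."] * MEMORY_SIZE
memory[COMMAND_SLOT] = "report"   # the program's legitimate instruction

def naive_copy(data):
    """Copy input into the buffer with no bounds check -- the bug."""
    for i, cell in enumerate(data):
        memory[BUFFER_START + i] = cell   # silently runs past BUFFER_LEN

# A "sequence" padded so its last element lands on the command slot.
attack = ["A"] * COMMAND_SLOT + ["wipe_disk"]
naive_copy(attack)
print(memory[COMMAND_SLOT])  # -> wipe_disk: the attacker's command runs
```

The fix, in the simulation as in real sequencing software, is a single bounds check before the copy; the researchers' point is that such checks were missing from widely used bioinformatics tools.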

Flaws to the Prototype

So far, challenges to the process — like squeezing the code into a few hundred DNA bases and the tendency of redundant base patterns to cause DNA to fold in on itself — have limited the digital translation to a mere 37% success rate. Clearly, DNA-hacking is still only a reality for readers of biopunk sci-fi, but we’d be naive to dismiss it as the next generation’s problem (or boon). After all, Seth Shipman of a Harvard team recently encoded a video in a DNA sample. Once perfected, DNA storage could compete with media like magnetic tape and flash memory, because DNA maintains its structure for far longer.

DNA coding and bio-exploitation might sound like fiction today, but, like the cell phone’s origins in Star Trek canon, it may be only a matter of time before fact follows fiction.

The post Researchers Hacked Into DNA and Encoded it With Malware appeared first on Futurism.

From Light to Super-Proton

Five years ago, Martin Weitz and his team accomplished what other physicists had thought impossible: they created a photonic Bose-Einstein condensate, a completely new source of light.

A photonic Bose-Einstein condensate forms when many individual photons are collected in a single location and cooled until they merge into what is known as a super-photon. Recently, Weitz — of the Institute of Applied Physics at Germany’s University of Bonn — set out to conduct an experiment with a newly made one.

In this new experiment, Weitz and his team were able to create “wells” that allowed super-photons to flow from one well to the next, an achievement that could one day lead to much-anticipated quantum computing.

The team accomplished this task by bouncing a laser between two mirrors, passing the light through a pigment that cooled it and turned it into a super-photon. Before introducing the laser light, the team mixed a polymer into the pigment. The polymer allowed Weitz to tune the setup’s refractive index with heat: increasing the temperature let longer light wavelengths travel back and forth between the two mirrors.

By applying different temperature patterns, Weitz’s team created a pseudo-warping effect in the polymer, forming “wells” at certain points whose refractive index differed from that of the polymer as a whole. The team then found that the super-photon would flow into the wells, just as a liquid might flow into a hollow space.

“The special thing is that we have built a kind of optical well in various forms, into which the Bose-Einstein condensate was able to flow,” Weitz said in a press release. “With the help of various temperature patterns, we were able to create different optical dents.”

Another Step Towards Quantum Circuits

Following the creation of the photonic Bose-Einstein condensate, Weitz’s team of researchers observed the behavior of two adjacent optical wells. By adjusting the temperatures of the polymer, the team brought the light in both wells to similar energy levels, allowing the super-photon to move from one well to the other.

According to Weitz, this innovation could be the precursor for quantum circuits, which are expected to play a large role in the future of quantum computers and communication.

The work done by Weitz and his group could also lead to better developed lasers, such as ones used for welding or drilling.

Computing applications of this technology aren’t expected for quite a while, but some believe the first true quantum computers may debut as early as next year. It was only in July that two Swedish PhD students broke a quantum computing record, nudging us slightly closer to such a reality.

It’s currently a race to see who gets us to that point first, but it’s only a matter of time before we figure out how to create the right machines capable of handling quantum circuits. When we do, whole new aspects of our universe may become open to us, as our computer systems inevitably become faster and more powerful.

The post Physicists Have Made Exotic Quantum States From Light appeared first on Futurism.

Forget To Learn

Researchers have developed a new computing technology that learns how to forget less important memories and hold on to critical information. The technology is called “organismoids,” and it mimics the way human thought works, making room for new information and lifelong learning by purging information that’s not as important.

Image Credit: macrovector/

“I learn slowly, but I keep forgetting other things along the way, so there is a graceful degradation in my accuracy of detecting things that are old,” explained Kaushik Roy, Purdue University’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. “What we are trying to do is mimic that behavior of the brain to a certain extent, to create computers that not only learn new information but that also learn what to forget.”

Quantum Materials, Neural Learning

Samarium nickelate, a ceramic “quantum material,” was central to creating the organismoid devices for this work. “These devices possess certain characteristics of living beings and enable us to advance new learning algorithms that mimic some aspects of the human brain,” Roy said. “The results have far-reaching implications for the fields of quantum materials as well as brain-inspired computing.”

The samarium nickelate undergoes a massive change in resistance when exposed to hydrogen gas, as hydrogen atoms “dope” its crystal lattice. The material essentially “breathes,” expanding and contracting as hydrogen is added and removed. This resistance change is a quantum mechanical effect that is highly unusual, and its gradual conductance decay is similar to habituation behavior in animals. For this reason, the researchers describe the way that these materials behave as “organismic” and analogous to the ebb and flow of memory and information retention in living brains.
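As a rough illustration of what that habituation-like decay looks like, here is a minimal sketch: each repeated stimulus nudges the response down toward a floor. The decay rate and floor are invented for illustration, not measured properties of samarium nickelate.

```python
# Minimal sketch of habituation-like conductance decay: every repeated
# stimulus shrinks the response toward a floor, mimicking how the
# organismoid's reaction to a familiar input gradually fades.
# The rate and floor are illustrative values, not measurements.
def habituate(conductance, repetitions, rate=0.3, floor=0.1):
    history = []
    for _ in range(repetitions):
        conductance = floor + (conductance - floor) * (1 - rate)
        history.append(round(conductance, 3))
    return history

print(habituate(1.0, 5))  # each repetition provokes a weaker response
```

A "forgetting" rule this simple is the intuition; the organismoids implement something analogous in hardware, with hydrogen doping playing the role of the decaying variable.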

“This could be really important because it’s one of the first examples of using quantum materials directly for solving a major problem in neural learning,” said Shriram Ramanathan, a Purdue professor of materials engineering who collaborated on the work. The result is their “neural learning model,” adaptive synaptic plasticity, which they implement using the organismoids.

Using these tools, the researchers are modeling real problems in neuromorphic computing, such as catastrophic forgetting. Eventual applications may be seen in AIs capable of improved facial recognition, complex decision-making, and reasoning. Organismoids could also be applied in the nascent field of spintronics, which replaces the binary ones and zeros of traditional systems with the “spin state” of electrons. Eventually, using these technologies, researchers might be able to create circuits that mimic the human brain in a far more compact design, with a single spintronic device standing in for a synapse or neuron.

The post A New Computing Technology Mimics Aspects of Human Thought appeared first on Futurism.

New Reactions

The chemical processes which govern combustion, cloud formation, and climate change are extremely complex, and new research has shown that these reactions (among others) are even more complex than we thought. Researchers from Columbia University, led by Assistant Professor of Mechanical Engineering Michael P. Burke, have discovered a fourth class of chemical reaction, known as “chemically termolecular reactions,” in which chemical bonds break and form across three molecules: two molecules collide, and the short-lived product of that collision collides again with a third. Their work was published in the journal Nature Chemistry.

The reaction was actually first hypothesized in the 1920s, but it was thought at the time to be unimportant or nonexistent, so no one managed to properly study it until now. The research used computer simulations to observe chemical reactions in a way that is difficult, if not impossible, in the traditional lab setting. According to Burke, “The power of these state-of-the-art computational methods is that they can provide a unique lens into harsh chemical environments ill-suited for experimental techniques for studying individual reaction dynamics.”

Image credit: Michael P. Burke/Columbia Engineering

This means a new kind of chemical reaction does in fact exist, one that may have a profound impact on our understanding of chemical reactions and marks a fundamental shift in how these processes are viewed. The full extent of its implications is not yet known, but what’s clear is that we may have a small revolution on our hands.

What This Means for Everyday Life

Better understanding of these reactions could lead to safer and more efficient engines, among a host of other potential scientific improvements. According to Burke:

Potentially there could be innumerable reactions from this new class that impact how we model gas phase chemistry, from designing new types of engines to understanding the planetary chemistry responsible for cloud formations, climate change, evolution of pollutants, even perhaps the sequence of reactions that could impact the conditions for extraterrestrial life. Our discovery opens up a whole new world of possibilities.

Advancing our knowledge of the chemistry behind planetary atmospheres would have a profound impact on the search for extraterrestrial life in the universe. Many methods of determining the composition of a planet’s atmosphere — and thus its habitability — are based on chemistry. Having a finer grasp of the most minute and ambiguous details of incoming data could give new insight into extraterrestrial life — and even how life on Earth came to be.


This discovery lends weight to the notion of scientific investigation as an attempt to know the world through falsifiable observation, i.e., hypotheses about the world that are refutable through empirical means. As new technology extends our empirical acumen, our capacity to seek out new life on extraterrestrial planets also extends beyond our five little human senses. When it was first hypothesized in the early 20th century, the theory of chemically termolecular reactions was beyond our capacity for observation, and so was not falsifiable. But with the advent of technology like these quantum simulations, untold realities are just beyond our fingertips, awaiting discovery.

The post Researchers Have Officially Discovered a New Kind of Chemical Reaction appeared first on Futurism.

Graphene At It Again

The discovery of graphene in 2004 began a flurry of studies to isolate other two-dimensional materials. Graphene was found to be a wonder material, possessing a set of unique and remarkable properties. One of these is its ability to conduct electricity ten times better than copper, the most commonly used conductor in electronics. At room temperature, graphene is also capable of conducting electricity 250 times better than silicon, a rate faster than any other known substance.

These properties led a team of researchers from Northwestern University, The University of Texas at Dallas (UT Dallas), University of Illinois at Urbana-Champaign, and University of Central Florida (UCF) to consider developing a graphene-based transistor. In a study published in the journal Nature Communications, the team found that a graphene-based transistor could actually work better than silicon transistors used in today’s computers.

A quick explanation first: Transistors are key to today’s computer circuits, acting as on-and-off switches that let electronic signals and electrical power through. Put together, transistors form logic gates, the core of microprocessors, which take inputs and produce outputs as 0s and 1s (binary bits). These are what allow microprocessors to solve logic and computing problems.
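To see why switches add up to computation, here is a short sketch building familiar gates out of NAND alone (NAND is "universal" in this sense, which is why a faster transistor improves everything downstream). It's a conceptual illustration, not the paper's circuit design.

```python
# Sketch of how switch-like transistors compose into logic. NAND is
# "universal": every other gate can be wired from NAND gates alone.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):  # the heart of a one-bit adder
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Print the full truth table for the composed gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND", and_(a, b), "OR", or_(a, b), "XOR", xor(a, b))
```

In hardware, each of those function calls is a handful of transistors, so the switching speed of the underlying material sets the clock speed of the whole processor.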

“If you want to continue to push technology forward, we need faster computers to be able to run bigger and better simulations for climate science, for space exploration, for Wall Street,” co-author Ryan Gelfand, an assistant professor at UCF, said in a press release. “To get there, we can’t rely on silicon transistors anymore.”

Better Logic Gates

Microprocessors built using silicon transistors have been stuck at processing speeds mostly in the 3 to 4 gigahertz range since 2005. There’s a limit to the rate of signals and power these transistors can handle, largely due to the material’s resistance. The team of researchers, however, found a way through this limitation by using graphene instead of silicon.

The researchers first built a graphene ribbon by unzipping a carbon nanotube (a rolled-up graphene sheet). They then found that applying a magnetic field to the ribbon let them control the resistance to current flowing through it. That magnetic field is supplied by current running through adjacent nanotubes, so one current can be used to increase or decrease another, the makings of a switch.

The team’s graphene transistor-based logic circuits improved the clock speed of microprocessors by a thousand times, and would require a hundredth of the power required by silicon-based computers. Plus, these circuits were also smaller than logic circuits that use silicon transistors. This could allow for smaller electronic devices that squeeze in more functionality, Gelfand explained. A similar study also explored graphene as a potential capacitor for quantum computers.

An all-carbon computing system still exists only on the drawing board, says co-author Joseph S. Friedman of UT Dallas, but Friedman and his collaborators in the NanoSpinCompute research laboratory are currently working on a prototype.

“The exceptional material properties of carbon materials permit Terahertz operation and two orders of magnitude decrease in power-delay product compared to cutting-edge microprocessors,” the researchers wrote. “We hope to inspire the fabrication of these cascaded logic circuits to stimulate a transformative generation of energy-efficient computing.”

The post Graphene Computers Work 1000 Times Faster, Use Far Less Power appeared first on Futurism.

Total Solar Eclipses

If you happen to be in the path of the upcoming total solar eclipse, count yourself lucky. Actually, we humans are fortunate to live on a planet that experiences these celestial events at all. Our solar eclipses are really just a matter of coincidence: good luck and great alignment. Our moon is just large enough, and at just the right distance relative to both Earth and the sun, to blot out the solar light when it’s in the right position — at least for the time being.

Image Credit: NASA

“They appear to be the same size because of their distance away from us,” Clemson University astronomer Amber Porter told NPR. The moon is far smaller than the sun, but “even though the [diameter of the] moon is about 400 times smaller than [that of] the sun, it’s about 400 times closer to us here on Earth, which is how that perfect kind of magic happens.”
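That "perfect kind of magic" is easy to check with rough mean figures (all of the values below are approximate):

```python
import math

# Rough check of the "400 times" coincidence (kilometres, approximate).
SUN_DIAMETER, SUN_DISTANCE = 1_391_400, 149_600_000
MOON_DIAMETER, MOON_DISTANCE = 3_474.8, 384_400

print(round(SUN_DIAMETER / MOON_DIAMETER))    # ~400: size ratio
print(round(SUN_DISTANCE / MOON_DISTANCE))    # ~389: distance ratio

def angular_diameter(diameter, distance):
    """Apparent width of a disk in the sky, in degrees."""
    return math.degrees(2 * math.atan(diameter / (2 * distance)))

# Both bodies span close to half a degree, which is why they overlap.
print(round(angular_diameter(SUN_DIAMETER, SUN_DISTANCE), 2))   # ~0.53
print(round(angular_diameter(MOON_DIAMETER, MOON_DISTANCE), 2)) # ~0.52
```

Because orbits are slightly elliptical, the moon's apparent size actually wobbles around that half-degree mark, which is why some eclipses are total and others are "annular" rings.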

Eclipses Changed Over Time

This perfect match has given us the strangely beautiful image of the sun hiding behind the moon with only its wispy corona visible. However, this is — in both geological and space terms — a relatively recent phenomenon. Like everything else in space, the moon is in motion, so in the past total solar eclipses didn’t look the way they do now.

“The size of the sun hasn’t really changed over the age of Earth, but the moon has been moving away from Earth over eons. So in the past it looked bigger,” SETI Institute planetary scientist Matija Cuk told NPR. In fact, the moon is still in motion, shifting about an inch and a half farther away from us each year. In around 600 million years, the moon will no longer cover the sun completely, and the total solar eclipse will be a piece of Earth history.
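A quick back-of-envelope check of those figures, using a recession rate of about 3.8 centimeters (an inch and a half) per year:

```python
# How much farther does ~3.8 cm/year of recession push the moon
# over 600 million years? (Rate and distance are rough mean values.)
RECESSION_CM_PER_YEAR = 3.8
YEARS = 600_000_000
MOON_DISTANCE_KM = 384_400

extra_km = RECESSION_CM_PER_YEAR * YEARS / 100 / 1000  # cm -> m -> km
print(f"{extra_km:,.0f} km farther away "
      f"({extra_km / MOON_DISTANCE_KM:.0%} of today's distance)")
```

A few percent of extra distance shrinks the moon's apparent disk by the same few percent, just enough for the sun's edge to peek out even at the best alignment.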

The post Catch the Eclipse While You Can — They Won’t Happen in the Future appeared first on Futurism.

Sieving Out Salt

A team of researchers led by Rahul Nair of the University of Manchester has invented a graphene oxide membrane that can sieve salt from seawater, producing potable water.

Graphene has been tipped as a promising material for this kind of purpose for some time, but previous attempts have failed because graphene oxide membranes swell up when submerged. This allows salt particles to flow through, impeding its ability to properly filter the liquid.

However, Nair and his team counteracted this problem by building walls of epoxy resin on both sides of the membrane. This offered them close control over the size of the membrane’s pores, which could be made small enough to sieve all the varieties of salt that tend to be present in seawater.

“Water molecules can go through individually, but sodium chloride cannot. It always needs the help of the water molecules,” said Nair in an interview with the BBC. “The size of the shell of water around the salt is larger than the channel size, so it cannot go through.”
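Nair's size-exclusion argument can be sketched as a toy filter: a species passes only if its effective (hydrated) diameter fits the channel. The channel width and hydrated diameters below are rough illustrative values in angstroms, not figures from the Manchester paper.

```python
# Toy version of the size-exclusion rule: a species passes only if its
# effective (hydrated) diameter fits the channel. Values in angstroms
# are rough illustrative estimates, not the paper's measurements.
CHANNEL_WIDTH = 6.0  # assumed pore size after epoxy confinement

HYDRATED_DIAMETER = {
    "water molecule": 2.8,  # a lone water molecule slips through
    "hydrated Na+":   7.2,  # sodium dragging its shell of water
    "hydrated Cl-":   6.6,
}

for species, diameter in HYDRATED_DIAMETER.items():
    verdict = "passes" if diameter < CHANNEL_WIDTH else "blocked"
    print(f"{species} ({diameter} A): {verdict}")
```

The engineering trick in the paper is precisely controlling that channel width: without the epoxy walls, the swollen membrane's channels grow wider than the hydrated ions and the sieve stops sieving.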

Water, Water Everywhere

Data from a 2017 report by the WHO/UNICEF Joint Monitoring Programme indicates that 844 million people don’t have access to clean water. The graphene oxide sieve could help provide a cheap, quick, and effective way to filter seawater.

Single-layer graphene is notoriously difficult to mass produce, but since this project uses graphene oxide, the end result should be relatively inexpensive and easy to manufacture.

“The ultimate goal is to create a filtration device that will produce potable water from seawater or wastewater with minimal energy input,” said Nair.

However, at the moment the concept has only been demonstrated in a lab setting. Additional research is needed to make it a viable real-world option.

Scientists all over the world are hard at work searching for and developing better ways to produce clean drinking water, ranging from large-scale implementations of conventional desalination techniques to innovative machines that can turn urine into potable liquid.

With sea levels on the rise and millions of people in dire need of drinking water, a cost-effective method for making seawater drinkable could prove invaluable.

The post Researchers Made a Graphene Sieve That Can Make Seawater Safe to Drink appeared first on Futurism.

Test the Limits

A small tweak on a definitive experiment in quantum physics has allowed scientists to observe for the first time exactly how molecules behave as waves.

The results are solidly in line with what theory covering complex quantum phenomena predicts, so don’t expect any radical new physics here. But as with most quantum experiments, the implications of seeing such a counter-intuitive theory in action make our heads spin.

Image Source: Group for Quantum Nanophysics, Faculty of Physics, University of Vienna; Image-Design: Christian Knobloch

Researchers from the Universities of Vienna and Tel Aviv have recently collaborated on turning a two-decade-old idea into a reality, replacing tiny particles with large organic molecules in a variation on the classic double-slit experiment that Clinton Davisson and Lester Germer’s 1927 work first confirmed for matter, in order to test the limits of a law governing their behaviour.

“The idea has been known for more than twenty years,” says researcher Christian Brand from the Vienna Centre for Quantum Science and Technology at the University of Vienna.

“But only now do we have the technological means to bring all the components together and build an experiment capable of testing it with massive molecules.”

To understand the significance, it helps to go back to the beginning.

For the first quarter of the 20th century, scientists were wrestling with what seemed like two completely different Universes of physical laws.

One was the Universe of Newton, where falling apples and shooting stars behaved in similar ways, only differing in terms of scale.

The second was born when Albert Einstein suggested that the mathematics being invented to explain how light was absorbed and emitted wasn’t just a convenient way to crunch the numbers – light really was made up of discrete bits called quanta.

Enter stage left, Prince Louis de Broglie.

Because the idea of light being made of tiny shooting balls wasn’t messed up enough, this intrepid French physicist decided one way to make sense of the latest models of the atom was to describe electrons – those little spheres whizzing around a nucleus – as waves as well.

Famous names such as Werner Heisenberg and Erwin Schrödinger subsequently found different ways to predict how an atom’s structure should behave, but one pictured electrons as continuous waves and the other as discrete bits of stuff.

The mad thing was, both theories were solid. By the same token, a thing couldn’t be a wave and a ball at the same time, could it?

American physicists Clinton Davisson and Lester Germer then took inspiration from an even earlier experiment that had demonstrated light was a wave.

Their version showed that a beam of electrons scattering off a crystal lattice produced a wave-like interference pattern similar to light’s, backing up de Broglie’s hypothesis. Case closed.

Double Slit Experiment

Except ever since then, various versions of this double slit experiment have continued to mess with our minds, showing small objects like electrons and photons can behave as both particles and waves, depending on how we measure them.

Worse still, it’s not just a matter of the very tiny. In 2012, a new record was set in showing a molecule a whopping 800 atoms in size also has wave-like properties.

This latest experiment hasn’t smashed any records, but the researchers still used massive free-floating particles weighing 515 atomic mass units, roughly the mass of 42 carbon atoms. Not exactly tiny, and not easy to manage.

Their goal was to put some limits on the wave-like nature of big things like molecules by passing them through different numbers of slots.

It’s tempting to picture those waves as bunches of spheres jittering up and down like fleas on a hotplate.

Instead, an object such as an electron, a photon, a molecule, or (just to blow your mind) your grandmother, can be thought of as a blend of properties called a superposition that have different states at once.

The probabilities of those states, each describing its position and energy in time and space, are what we call waves. Seriously, stop trying to imagine it in a classical, physical sense; you’ll get a nosebleed.

For tiny particles, this probability can be inferred from measurements plugged into something called Born’s law.

More complex systems, such as molecules (and presumably grandmothers), demand extensions to this formula.

A little over 20 years ago a physicist named Rafael Sorkin determined you only needed the measurements from just two paths – such as those taken through dual slits – for certain extensions to Born’s law to still work. Adding a third, fourth, or hundredth should make no difference.

Thanks to the results of this experiment, we can sleep easier at night knowing Sorkin’s ‘two pathway’ limit stands for molecule-sized particles.
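Sorkin's two-pathway limit is easy to verify numerically: under Born's rule, where probability is the squared magnitude of the summed amplitudes, the genuine three-path interference term vanishes for any choice of complex amplitudes. A quick sketch (the amplitudes below are arbitrary examples):

```python
# Numerical check of Sorkin's point: under Born's rule (probability =
# |amplitude|^2), interference is exhausted by PAIRS of paths, so the
# three-path term below vanishes for any complex amplitudes a, b, c.
def intensity(amplitudes):
    return abs(sum(amplitudes)) ** 2

def third_order_interference(a, b, c):
    """Sorkin's kappa: the three-slit pattern minus everything that
    pairs of slits and single slits already account for."""
    return (intensity([a, b, c])
            - intensity([a, b]) - intensity([a, c]) - intensity([b, c])
            + intensity([a]) + intensity([b]) + intensity([c]))

kappa = third_order_interference(0.3 + 0.4j, -0.5j, 0.2 - 0.1j)
print(abs(kappa))  # ~0: Born's rule allows no genuine three-path term
```

The Vienna experiment is the physical version of this arithmetic: if kappa had come out measurably nonzero for their massive molecules, Born's rule would need amending.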

“This is the first time an explicit test of this kind has been conducted with massive particles,” says researcher Joseph Cotter of the University of Vienna.

“Previous tests have pushed the frontiers with single photons and microwaves. In our experiment, we put bounds on higher-order interference of massive objects.”

While this is all well and good for physics, it’s also one more piece of evidence that shows quantum mechanical weirdness, such as existing as both particles and waves, isn’t just something that happens to unimaginably small things.

No wonder our head feels fuzzy.

This research was published in Science Advances.

The post Quantum Weirdness Has Been Tested Beyond The Particle Scale For The First Time appeared first on Futurism.

In Los Angeles, temperatures that exceed 38 degrees Celsius (100 degrees Fahrenheit) during the summer are not uncommon. Dark pavement can contribute to these higher temperatures, but a new asphalt treatment could reportedly cut street temperatures by as much as 6.6 degrees Celsius (12 degrees Fahrenheit) after just one coat.


Black asphalt absorbs between 80 and 95 percent of sunlight, but the “cool pavement” treatment marketed by the California-based sealcoating firm GuardTop reflects it. This could have a big impact on street temperatures and make life more comfortable for people in urban environments.
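A crude radiative-equilibrium estimate shows why reflecting more sunlight lowers surface temperature. The insolation and albedo values below are illustrative assumptions, not measurements of the GuardTop coating.

```python
# Rough radiative-equilibrium sketch: a surface that reflects more
# sunlight settles cooler, T = (S * (1 - albedo) / sigma) ** 0.25.
# Insolation and albedo values are illustrative assumptions only.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 600.0        # assumed sunlight reaching the pavement, W m^-2

def equilibrium_temp(albedo):
    """Temperature at which radiated power balances absorbed power."""
    return (S * (1 - albedo) / SIGMA) ** 0.25

dark, coated = equilibrium_temp(0.10), equilibrium_temp(0.35)
print(f"dark: {dark:.0f} K, coated: {coated:.0f} K, "
      f"drop: {dark - coated:.0f} K")
```

Real pavement also sheds heat by conduction and convection, so measured drops, like the 12-degree-Fahrenheit figure above, are much smaller than this idealized radiation-only bound.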

After successfully testing the treatment in parking lots, Los Angeles is ready to be the first major city to put it through its paces on a public road. Officials will monitor how residents react to the new pavement, as well as how long it takes for traffic conditions to soil the gray coloration of the coating.

As Alan Barreca, an environmental science professor at the University of California, Los Angeles, told AFP, “Lower temperatures — due to the pavement — mean less reliance on air conditioning. So, that means less greenhouse gases.”

However, despite the potential advantages of cool pavement coatings, research conducted by the Department of Energy indicated that they do have some drawbacks in terms of the energy and emissions associated with their manufacture, installation, use, and disposal.

By approaching the treatment with caution, rather than rushing into a wide rollout, L.A. can determine whether these drawbacks are outweighed by the benefits and make an educated decision on how to proceed.

The post L.A. Is Testing a Pavement Treatment That Can Drop Street Temperatures by 12 Degrees appeared first on Futurism.

Enormous Mergers

In February of 2016, researchers at the Laser Interferometer Gravitational-Wave Observatory (LIGO) made history when they announced the first-ever detection of gravitational waves. Supported by the National Science Foundation (NSF) and operated by Caltech and MIT, LIGO is dedicated to studying the waves predicted by Einstein’s Theory of General Relativity and caused by black hole mergers.

According to a new study by a team of astronomers from the Center for Cosmology at the University of California, Irvine, such mergers are far more common than we thought. After conducting a survey of the cosmos intended to calculate and categorize black holes, the UCI team determined that there could be as many as 100 million black holes in the galaxy, a finding with significant implications for the study of gravitational waves.

The study detailing their findings, titled “Counting Black Holes: The Cosmic Stellar Remnant Population and Implications for LIGO,” recently appeared in the Monthly Notices of the Royal Astronomical Society. Led by Oliver D. Elbert, a postdoctoral researcher in the Department of Physics and Astronomy at UC Irvine, the team conducted an analysis of gravitational wave signals that have been detected by LIGO.

Caltech/MIT/LIGO Lab

More Questions

Their study began roughly a year and a half ago, shortly after LIGO announced the first detection of gravitational waves. These waves were created by the merger of two distant black holes, each of which was equivalent in mass to about 30 Suns. As James Bullock, a professor of physics and astronomy at UC Irvine and a co-author on the paper, explained in a UCI press release:

“Fundamentally, the detection of gravitational waves was a huge deal, as it was a confirmation of a key prediction of Einstein’s general theory of relativity. But then we looked closer at the astrophysics of the actual result, a merger of two 30-solar-mass black holes. That was simply astounding and had us asking, ‘How common are black holes of this size, and how often do they merge?’”

Traditionally, astronomers have been of the opinion that black holes would typically be about the same mass as our Sun. As such, they sought to interpret the multiple gravitational wave detections made by LIGO in terms of what is known about galaxy formation. Beyond this, they also sought to create a framework for predicting future black hole mergers.

From this, they concluded that the Milky Way Galaxy could be home to up to 100 million black holes, 10 million of which would have an estimated mass of about 30 Solar masses – i.e. similar to those that merged and created the first gravitational waves detected by LIGO in 2016. Meanwhile, dwarf galaxies – like the Draco Dwarf, which orbits at a distance of about 250,000 light-years from the center of our galaxy – would host about 100 black holes each.
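The 100-million figure passes a back-of-the-envelope sanity check. The toy estimate below is not the paper's method: it simply counts stars massive enough to leave black holes behind using a Salpeter initial mass function, with assumed inputs (roughly 1e11 stars in the Milky Way, black-hole progenitors above about 20 solar masses, birth masses spanning 0.1 to 100 solar masses).

```python
# Toy order-of-magnitude estimate of the Milky Way's stellar-mass
# black hole count using a Salpeter initial mass function,
# dN/dm ∝ m^(-2.35). All inputs are illustrative assumptions.

def salpeter_fraction(m_lo, m_hi, m_min=0.1, m_max=100.0, alpha=2.35):
    """Fraction of stars born with masses in [m_lo, m_hi] solar masses."""
    integral = lambda a, b: (a ** (1 - alpha) - b ** (1 - alpha)) / (alpha - 1)
    return integral(m_lo, m_hi) / integral(m_min, m_max)

n_stars = 1e11  # assumed stellar count for the Milky Way
n_black_holes = n_stars * salpeter_fraction(20.0, 100.0)
# ≈ 7e7: within a factor of a few of the study's ~100 million figure.
```

That a crude initial-mass-function count lands within a factor of a few of the survey result is reassuring, even though the actual study folds in galaxy formation history and metallicity.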

They further determined that today, most low-mass black holes (~10 Solar masses) reside within galaxies of 1 trillion Solar masses (massive galaxies) while massive black holes (~50 Solar masses) reside within galaxies that have about 10 billion Solar masses (i.e. dwarf galaxies). After considering the relationship between galaxy mass and stellar metallicity, they interpreted a galaxy’s black hole count as a function of its stellar mass.

A Common Occurrence?

In addition, they also sought to determine how often black holes occur in pairs, how often they merge and how long this would take. Their analysis indicated that only a tiny fraction of black holes would need to be involved in mergers to accommodate what LIGO observed. It also offered predictions that showed how even larger black holes could be merging within the next decade.

As Manoj Kaplinghat, also a UCI professor of physics and astronomy and the second co-author on the study, explained:

“We show that only 0.1 to 1 percent of the black holes formed have to merge to explain what LIGO saw. Of course, the black holes have to get close enough to merge in a reasonable time, which is an open problem… If the current ideas about stellar evolution are right, then our calculations indicate that mergers of even 50-solar-mass black holes will be detected in a few years.”

In other words, our galaxy could be teeming with black holes, and mergers could be happening on a regular basis (relative to cosmological timescales). As such, we can expect that many more gravitational wave detections will be possible in the coming years. This should come as no surprise, seeing as LIGO has made two additional detections since the winter of 2016.

With many more expected to come, astronomers will have many opportunities to study black hole mergers, not to mention the physics that drives them!

The post Cosmic Census: There Could Be 100 Million Black Holes in Our Galaxy Alone appeared first on Futurism.

The Importance of 30 Minutes

The International Space Station is nearly twenty years old. During almost two decades in low-Earth orbit, the floating laboratory has offered the opportunity to test many a hypothesis in microgravity.

Often, these experiments have to do with biology and biochemistry. Take for instance studying the effects of space radiation on mammalian reproduction, or flatworm regeneration in microgravity. However, hardware also has a place in the lab.

The current computers on the ISS – the ones that operate the station – run on a microprocessor first introduced in 1985. That may not sound like enough to power the football-field-sized station; however, these computers are supported by 24/7 monitoring from the ground by even more powerful machines.

Image Credit: NASA

The system does the job, for now. It doesn’t take long for information to travel from the ISS to the ground. However, when humans eventually get to the Red Planet, communicating between Mars and Earth will result in a bit of a delay. No, not quite a la The Martian. More like 30 minutes each way.

This may not sound like much, but, as Alain Andreoli, Hewlett-Packard Enterprise’s (HPE) senior vice president of its data center infrastructure group explained in a blog post:

A long communication lag would make any on-the-ground exploration challenging and potentially dangerous if astronauts are met with any mission critical scenarios that they’re not able to solve themselves.

Essentially, half an hour could cost someone their life.

Hardened Software

So why aren’t scientists just sending better computers to space? Well, space travel is pretty rough on technology, and NASA has high demands. Computers aboard the ISS need to withstand space-related problems such as “radiation, solar flares, subatomic particles, micrometeoroids, unstable electrical power, irregular cooling,” explained Andreoli. This “hardening” process results in additional costs and unnecessary bulk.

Image Credit: NASA

What if traditional, off-the-shelf computer components could be made to withstand the rigors of space? NASA and HPE are working together to find out. Monday, a SpaceX rocket will launch a supercomputer called the Spaceborne Computer to the ISS for a year-long experiment (coincidentally, about the amount of time it would take humans to get to Mars).

The computer has not been hardened for the radiation environment on the space station in the traditional sense. Instead, it’s been “software hardened.” The goal is to better understand how space will degrade the performance of an off-the-shelf computer. Meanwhile, back on Earth, an identical model will run in a lab as a control.

The computer is only about the size of two pizza boxes stuck together. It has a special water-cooled enclosure as well as custom software that can automatically adjust for environmentally-induced computer errors. It may not be the most powerful computer on the market, but with its 1 teraflop computing speed, it’ll be the most powerful computer ever sent into space.

“This goes along with the space station’s mission to facilitate exploration beyond low Earth orbit,” Mark Fernandez, HPE’s leading payload engineer for the project, told Ars Technica. “If this experiment works, it opens up a universe of possibility for high performance computing in space.”

Not only will this result in better computers aboard the ISS and other NASA crafts that can send humans farther into space, but it will also help with experiments on the ISS. Fernandez explains that scientists could use an on-board supercomputer for data processing, rather than sending the data back to Earth.

The post SpaceX, NASA, and HP Are Sending a Supercomputer to the ISS appeared first on Futurism.

In the Zone

Earth’s location in space is perfect: not too close to the Sun and not too far from it, our planet enjoys the balmy temperatures that help support life. However, a new study suggests that it might be even more difficult than previously expected to find a celestial body that falls within this ‘Goldilocks zone.’

The habitable zone of any given star is the area where a planet can maintain a temperature that allows liquid water to exist on its surface. Too close to the star, and that water will turn to vapor — too far away, and it’ll turn to ice.
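Because the flux a planet receives falls off with the square of distance, the habitable-zone distance scales roughly as the square root of the star's luminosity. The sketch below uses illustrative inner and outer bounds of 0.95 and 1.37 AU for a Sun-like star; those exact bounds are model-dependent assumptions, not a standard.

```python
import math

# Toy habitable-zone estimate: a planet's equilibrium temperature goes
# as (L / d^2)^(1/4), so the distance giving Earth-like insolation
# scales as d ∝ sqrt(L / L_sun). Bounds below are illustrative.

def habitable_zone_au(luminosity_solar, inner_au=0.95, outer_au=1.37):
    """Scale illustrative solar-system HZ bounds to a star's luminosity."""
    scale = math.sqrt(luminosity_solar)
    return inner_au * scale, outer_au * scale

# A red dwarf at 1% of the Sun's luminosity has a HZ ten times closer in:
inner, outer = habitable_zone_au(0.01)
```

This is why habitable-zone planets around dim red dwarfs orbit so tightly, and why a star growing more luminous over time pushes its habitable zone steadily outward.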

However, stars like our sun gradually get more luminous over time, which changes the parameters of their habitable zone. This means that icy planets can feasibly reach a point where their conditions are warm enough to support life — but according to a recent study in Nature Geoscience, that’s not always the way the situation will pan out.

Too Hot to Handle

A planet’s ability to support life-sustaining temperatures hinges on at least two factors: the amount of ice on the surface, and the amount of greenhouse gases being released into its atmosphere. Yet many icy planets lack the volcanic activity needed to supply their atmospheres with any greenhouse gases besides water vapor.

So this study’s team, led by Jun Yang of Peking University, developed a model to simulate how the climate of an ice-covered planet with only water vapor in its atmosphere would change over time. The results suggested that such planets would need to receive 10 to 40 percent more energy than Earth gets from the Sun before their ice began to melt.

Without ice to reflect incoming heat, this heat-intensive process was often followed by a speedy uptick in temperature that caused the planet’s oceans to boil off. And without water, these worlds wouldn’t be able to support life after all.

This isn’t necessarily bad news. Thanks to increasingly sharp-eyed instruments, the number of known exoplanets has skyrocketed in the past two decades, from a mere handful in the mid-90s to nearly 2000 today. In February 2014 alone, NASA announced a “planet bonanza” discovery of 715 new planets, found by the Kepler satellite. But identifying which of these distant worlds might be friendly to life is still tricky.

Scientists are able to infer the atmospheric content of a planet based on the way light passes through it, a process that’s already been used to detect water on a distant Earth-sized planet. However, this method doesn’t tell scientists what else is happening on the planet — such as whether it’s in the runaway, ocean-boiling cycle Yang’s team identified.

If we’re on the search for a planet that humans can live on, having this information at hand gives us more insight into which worlds are in contention.

The post New Study Explains How Some Planets May Never Become Habitable appeared first on Futurism.

Microbial Internet

Creating a huge global network connecting billions of individuals might be one of humanity’s greatest achievements to date, but microbes beat us to it by more than three billion years. These tiny single-celled organisms aren’t just responsible for all life on Earth. They also have their own versions of the World Wide Web and the Internet of Things. Here’s how they work.

Much like our own cells, microbes treat pieces of DNA as coded messages. These messages contain information for assembling proteins into molecular machines that can solve specific problems, such as repairing the cell. But microbes don’t just get these messages from their own DNA. They also swallow pieces of DNA from their dead relatives or exchange them with living mates.

Image Source: LucindaBurdett/Flickr

These DNA pieces are then incorporated into their genomes, which are like computers overseeing the work of the entire protein machinery. In this way, the tiny microbe is a flexible learning machine that intelligently searches for resources in its environment. If one protein machine doesn’t work, the microbe tries another one. Trial and error solves its problems.

But microbes are too small to act on their own. Instead, they form societies. Microbes have been living as giant colonies, containing trillions of members, from the dawn of life. These colonies have even left behind mineral structures known as stromatolites. These are microbial metropolises, frozen in time like Pompeii, that provide evidence of life from billions of years ago.

Microbial colonies are constantly learning and adapting. They emerged in the oceans and gradually conquered the land – and at the heart of their exploration strategy was information exchange. As we’ve seen, individual members communicate by exchanging chemical messages in a highly coordinated fashion. In this way, microbial society effectively constructs a collective “mind”.

This collective mind directs pieces of software, written in DNA code, back and forth between trillions of microbes with a single aim: to fully explore the local environment for resources using protein machines.

When resources are exhausted in one place, microbial expedition forces advance to find new lands of plenty. They transmit their discoveries back to base using different kinds of chemical signals, calling for microbial society to transform from settlers to colonisers.

In this way, microbes eventually conquered the entire planet, creating a global microbial network that resembles our own World Wide Web but uses biochemical signals instead of electronic digital ones. In theory, a signal emitted in waters around the South Pole could travel quickly to waters around the North Pole.

Internet of Living Things

The similarities with human technology don’t stop there. Scientists and engineers are now working on expanding our own information network into the Internet of Things, integrating all manner of devices by equipping them with microchips to sense and communicate. Your fridge will be able to alert you when it is out of milk. Your house will be able to tell you when it is being burgled.

Microbes built their version of the Internet of Things a long time ago. We can call it the “Internet of Living Things”, although it’s more often known as the biosphere. Every organism on the planet is linked in this complex network that depends on microbes for its survival.

More than a billion years ago, one microbe found its way inside another microbe that became its host. These two microbes became a symbiotic hybrid known as the eukaryotic cell, the basis for most of the lifeforms we are commonly familiar with today. All plants and animals are descended from this microbial merger and so they contain the biological “plug-in” software that connects them to the Internet of Living Things.

For example, humans are designed in a way that means we cannot function without the trillions of microbes inside our bodies (our microbiome) that help us do things like digest food and develop immunity to germs. We are so overwhelmed by microbes that we imprint personal microbial signatures on every surface we touch.

The Internet of Living Things is a neat and beautifully functioning system. Plants and animals live on the ecological waste created by microbes. Meanwhile, to microbes, all plants and animals are, as author Howard Bloom puts it, “mere cattle on whose flesh they dine,” bodies to be digested and recycled one day.

Microbes are even potential cosmic tourists. If humans travel into deep space, our microbes will travel with us. The Internet of Living Things may have a long cosmic reach.

The paradox is that we still perceive microbes as inferior organisms. The reality is that microbes are the invisible and intelligent rulers of the biosphere. Their global biomass exceeds our own. They are the original inventors of the information-based society. Our internet is merely a by-product of the microbial information game initiated three billion years ago.

The post Microbes Have Their Own Version of the Internet appeared first on Futurism.

Seeing Single Photons

Physicists already know that the rods at the back of the human eye are sensitive enough to light that they can be stimulated by a single photon. However, whether or not this sensitivity carries through the visual and cognitive systems of the brain and triggers perception is a different question entirely. Testing this query has demanded technology that was unavailable until recently. Now, physicists can produce pairs of photons using photon guns that work reliably and on demand.

The simple experiment to test the human ability to perceive a lone photon involves firing one into a human eye and determining if the subject observed it. Pairs of photons are important to the setup of this experiment because they ensure that the researchers can monitor when the photon gun was actually triggered each time. However, it’s not easy to get good results despite this simple structure, because human observers are unreliable and a statistically significant result requires a high number of trials.

Last month, researchers from the University of Vienna in Austria published findings from their own experiment, which was similar. Their version made itself unique by asking subjects to record the confidence of their observation. The subjects correctly observed single photons just over half of the time, at 51.6 percent. This led the team to conclude: “Humans can detect a single-photon incident on the cornea with a probability significantly above chance.”

Image Credit: Tinsley, et al./Nature Communications

However, researchers at the University of Illinois at Urbana-Champaign have questioned that conclusion, arguing that the data lacks sufficient statistical significance to support it. The University of Illinois team has conducted its own research on human vision, so they have expertise in the field.

Our Powerful Senses

So, why does this matter? To begin with, the ability to detect such a minuscule amount of energy would define the physical limits of biology expressed in the workings of the human brain — a “machine” functioning in a wet environment plagued by constant noise. If our eyes could pick up on single photons, each carrying around 10⁻¹⁹ joules, it would paint a compelling picture of the power of human biological senses.
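That energy figure follows from the standard photon-energy relation E = hc/λ; the 500 nm wavelength below is just a representative value for visible light, not a parameter from the experiments.

```python
# Energy of a single visible-light photon, E = h * c / wavelength.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 500e-9  # green light, m (representative value)

energy = h * c / wavelength
# ≈ 4e-19 J, i.e. on the order of 10^-19 joules, as stated above.
```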

Engineers and physicists aspire to achieve that kind of precision and sensitivity with their own machines in wet, warm environments, so modeling our machines after our own senses could allow us to reach higher goals. As we reveal our own sensory limits to ourselves, we can expect to see more impressive technological advances.

The post Can the Human Eye Detect Single Photons? appeared first on Futurism.

Ants Marching

Researchers have used the CRISPR technique to manipulate the social activities of ants for a study that will be published in Cell. Two independent teams knocked out the orco (odorant receptor coreceptor) gene in entire colonies of ants, which negated their ability to perceive pheromone signals they use to communicate. Without those cues, they began to exhibit asocial behaviors like leaving the safety of the nest and declining to aid efforts to hunt for food.

Ants possess a whopping 350 genes for odorant receptors, but because the receptors they encode all depend on the orco co-receptor to function, they could all be disabled at once. The two teams chose different ant species based on two distinct strategies for proliferating this edit throughout a colony.

One group chose a species that has no queens, instead procreating using unfertilized eggs that mature as clones, producing ants that are genetically identical. CRISPR was used to edit lone eggs, which produced an entire colony with the desired modification.

Meanwhile, the other team of researchers selected a species known for an unusual trait in which worker ants graduate to the role of egg-laying pseudoqueen if the former queen dies. The chosen worker ant had its genetic makeup modified before being converted into a pseudoqueen and prompted to spawn a new colony.

Superorganism Socialization

The social interactions of ants are fascinating because of the way a colony acts as a single entity. And as such, these amalgamate superorganisms can potentially tell us a lot about the way we humans interface with one another.

How CRISPR Works: The Future of Genetic Engineering and Designer Humans

The researchers observed that disabling the orco gene resulted in certain substructures from the ants’ central processing centers going missing. These parts of the brain are essential to their olfactory communications, and it’s not known exactly why they disappeared. Symptomatically, this is analogous to a number of human mental disorders.

The hope is that further research into the biochemistry of ant colony behavior could reveal more about disorders that affect social communication, like depression or schizophrenia. If we can better understand this process as it occurs in an ant’s brain, and then in the invisible hand moving the colony as a superorganism, we might shine a light on how similar changes affect mammals.

The post Researchers Used CRISPR to Manipulate the Social Activities of Ants appeared first on Futurism.

New Neurons

Mapping exactly how the human brain functions is, perhaps, the most promising step toward transforming how humans operate on a fundamental level. Mapping how the brain works down to the molecular level could help us find new ways to combat neurological disorders and might even allow us to enhance human intelligence. Already, a host of innovators are working to develop technology that intertwines with the brain to enhance its functionality; however, before we can deploy such technologies, we need to fully understand how the brain works.

And we just got a little bit closer. Today, a team of researchers from the Salk Institute and the University of California San Diego announced that they have made a major discovery that could aid us in this effort. Through a relatively new process, the scientists were able to discover new types of brain cells.

The Evolution of Brain-Computer Interfaces [INFOGRAPHIC]

According to the co-senior author, Joseph Ecker, professor and director of Salk’s Genomic Analysis Laboratory and an investigator of the Howard Hughes Medical Institute, “Decades ago, neurons were identified by their shape. Now we are taking a molecular approach by looking at this modification of the methylation profile between cells and that tells us what type of cell it is pretty accurately.”

In short, by sequencing the molecular structure of neurons that look the same under a microscope, we can begin to sort them into subgroups and gain a better understanding of each subgroup’s functionality. “We think it’s pretty striking that we can tease apart a brain into individual cells, sequence their methylomes, and identify many new cell types along with their gene regulatory elements, the genetic switches that make these neurons distinct from each other,” Ecker notes.

Making the List

This research will allow scientists to compile a complete “parts list” of each neuron type and its function. According to Chongyuan Luo, a Salk research associate and co-first author of the research paper, such mapping will open a host of new doors: “There are hundreds, if not thousands, of types of brain cells that have different functions and behaviors and it’s important to know what all these types are to understand how the brain works.”

As previously mentioned, these findings could have a profound impact on how we study and treat neurological disorders. Ecker’s next move is to research molecular differences in the brains of healthy subjects versus those with brain disease. “If there’s a defect in just one percent of cells, we should be able to see it with this method,” he says. “Until now, we would have had no chance of picking something up in that small a percentage of cells.”

Researchers will be able to pinpoint the exact cell types that may be responsible for a particular disease. With that knowledge, future research could focus on correcting that abnormality. “…we can develop, from this information, new tools to be able to study particular cell populations once we know they exist,” says Ecker.

Understanding the brain at so minute a level will certainly open up a wide range of possibilities for the future of treating disease, as well as prepare us for a new level of bionic integration.

The post Making A Better Brain: Scientists Just Discovered a New Kind of Brain Cell appeared first on Futurism.

Dark Energy Survey

Scientists have discovered a mismatch between a map of the early universe and measurements of the universe today. If the disparity remains through future measurements, that difference could rewrite physics.

The recent results, part of the ongoing Dark Energy Survey (DES) of a huge portion of the southern sky, reveal how matter is distributed among 26 million galaxies. They also provide one of the clearest and most powerful images of our universe to date.

The new data is also being compared with images taken in 2013 with the Planck satellite, which show the universe as it once was. The comparison enables scientists to get an “in motion” sense of the universe as an evolving system, and to make predictions about the future. And while many astronomers think that dark energy is a constant force, these preliminary results seem to suggest that it might not be.

The Planck images showed that dark matter comprised 34 percent of the universe in its early days, but these new findings indicate that it currently makes up only 26 percent.

Understanding how matter is distributed helps us to know how dark energy and dark matter oppose each other in our universe. Dark energy pushes galaxies apart as it causes the universe’s expansion to accelerate; dark matter, physicists think, is the opposing force that pulls galaxies together. If this is correct, dark matter (which scientists are still searching for) could be losing its cosmic battle to hold things together, changing physics as we know it.

Image Credit: RedHeadsRule/Pixabay

An Uncertain Outcome

The new results arose from the first observational season of the four-meter Victor M. Blanco Telescope. The observational period for the DES lasted only six months, but has already produced this strange result. The survey with the Blanco telescope will continue for five years, yielding more and better data as time passes.

Astronomers are rightly reluctant to come to overly dramatic conclusions based solely upon these initial data. From a statistical perspective, the variation between the early universe and the current version is slight. The mismatch may also disappear with more data, indicating that one or both of the measurements was incorrect.

However, this isn’t the only disparity; for example, results from the South Pole Telescope also conflict with the Planck data.

Although the scientific community has been assuming that the universe would continue to expand while galaxies would remain glued together, this may be wrong. If dark energy continues to increase, it’s possible that one day galaxies and everything inside them — down to atoms themselves — could expand enough to be torn apart.

As unsettling as that thought is, this may be yet another fascinating anomaly about our universe, with context that changes as we continue scanning the skies.

The post This Dark Energy Discovery Could Rewrite Physics appeared first on Futurism.

Not Far, Far Away

About 12 light years away from the Sun lies tau Ceti, the nearest Sun-like star to our solar system. This star supports a multi-planetary system that has been a point of focus for astronomers for some time, and now an international team of researchers led by the University of Hertfordshire in the U.K. has revealed something new about the system: it may include four Earth-sized planets, two of which could be habitable.

The four Earth-like planets are the smallest orbiting around tau Ceti. One has an estimated mass of just 1.7 times that of the Earth, and two of the planets are super-Earths — exoplanets with masses between 1.5 and roughly 10 times Earth’s mass — in the tau Ceti system’s habitable zone. This could translate to the existence of liquid water on their surfaces.

Image credit: University of Hertfordshire

The researchers discovered the four Earth-like exoplanets by observing wobbles in the movement of tau Ceti. “We’re slowly learning to tell the difference between wobbles caused by planets and those caused by stellar active surface,” researcher Mikko Tuomi explained in a press release. “This enabled us to verify the existence of the two outer, potentially habitable, planets in the system.”
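The wobble signal these surveys chase is tiny. A rough sense of scale comes from the standard radial-velocity semi-amplitude formula for a circular, edge-on orbit; the planet, star, and period values below are illustrative guesses, not the paper's fitted parameters.

```python
# Radial-velocity semi-amplitude for a circular, edge-on orbit:
# K = 28.4329 m/s * (m_p / M_Jup) * (M_star / M_sun)^(-2/3) * (P / yr)^(-1/3)
# 28.4329 m/s is the Sun's reflex speed from Jupiter in a 1-year orbit.
EARTH_IN_JUP = 1 / 317.8  # Earth's mass in Jupiter masses

def rv_semi_amplitude(m_planet_earth, m_star_solar, period_years):
    """Stellar wobble speed (m/s) induced by an orbiting planet."""
    return (28.4329 * m_planet_earth * EARTH_IN_JUP
            * m_star_solar ** (-2.0 / 3.0)
            * period_years ** (-1.0 / 3.0))

# A 1.7-Earth-mass planet in a 20-day orbit around a 0.78-solar-mass
# star (tau Ceti-like, assumed values) tugs its star at ~0.5 m/s:
k = rv_semi_amplitude(1.7, 0.78, 20 / 365.25)
```

Half a meter per second is slower than a casual walk, which is why separating genuine planetary wobbles from the star's own surface activity is the hard part of this work.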

The Hunt for Life

In the search for potentially habitable Earth-like exoplanets, those found orbiting Sun-like stars are good candidates. Tau Ceti closely resembles the Sun in terms of size and brightness, and it also has multiple planets around it, just like our Sun. Unfortunately, the presence of a massive debris disk around tau Ceti could mean the planets are constantly bombarded by asteroids and comets, lessening chances of habitability.

The Science of Searching for Exoplanets [INFOGRAPHIC]

However, the researchers say the improved sensitivity of their observation techniques using the ESO’s HARPS spectrograph and the Keck-HIRES in Hawaii is allowing them to better their chances of finding planets that will be habitable.

“We’re getting tantalizingly close to observing the correct limits required for detecting Earth-like planets,” lead researcher Fabo Feng said in the press release. “Our detection of such weak wobbles is a milestone in the search for [Earth-like exoplanets] and the understanding of the Earth’s habitability through comparison with these.”

As always, the search for habitable exoplanets will continue, and thanks to advancing tech, new discoveries are being made more frequently. In addition to this news from the University of Hertfordshire team, NASA has recently revealed data from the latest Kepler mission that identified 10 more Earth-like planets in other star systems. A team of international researchers has discovered a super-Earth 21 light years away, as well.

Eventually, one of these Earth-like planets could provide proof that we aren’t alone in the universe or even act as our future home, an Earth 2.0 not entirely unlike humanity’s point of origin.

The post Researchers Have Discovered Two Potentially Habitable Planets 12 Light Years Away appeared first on Futurism.

Evolution actually gave us a solution to uncertainty.

The post Watch: Science Isn’t Really a Method—It’s Your Brain Celebrating Danger and Uncertainty appeared first on Futurism.

The Architecture Of Biofilms

While just over half of humans live in urban areas, 99.9 percent of bacteria opt for urban dwelling in the form of biofilms. Biofilms are clusters of billions of bacteria — often joined by protozoa, algae, fungi, products of corrosion, and debris — existing within an extracellular matrix made of protein and sugar. They can cling to almost any surface, from the inside of a catheter to the surface of our teeth.

Biofilms are advantageous to bacteria, who adapt based on where in the biofilm “city” they’re located. Both the physical shape of the biofilm and the matrix structure itself protect the bacteria living toward the center from attack by antibiotics and predators, while those dwelling in the outer neighborhoods are more vulnerable. Those safer central cells might serve as genetic reservoirs or amass food reserves, while the outer dwellers might beat back attacks. And while to some extent the inhabitants are still competitive against each other, there is also cooperative living within the biofilm.

Scientists are now working to decode the rules that govern the ways biofilms form and evolve. The idea is that just as an urban planner can use their understanding of regulations and principles of engineering to make a city more or less inhabitable, a bioengineer or microbiologist can use the rules that govern biofilms to render any given location more or less hospitable for colonies of bacteria. This is a new way of considering bacteria, because bacteria growing in a lab culture behave very differently from bacteria in the complex urban centers of biofilms.

Image Credit: Scott Chimileski and Roberto Kolter, Harvard Medical School

Microbiologist Katharine Coyte’s recent work with microfluidic chips explores how the number of bacteria and the flow of water can shape and limit the growth of biofilms. This work joins a growing body of literature that demonstrates the way physical rather than chemical forces — the stickiness of the extracellular matrix and hydrodynamic shear, for example — push the microbes to form complex collectives. This is a departure from the work of earlier biofilm researchers, who were much more focused on the chemical environments of the colonies.

Coyte and her team observed that the prosperity of different strains of bacteria varied as water flow conditions changed, but that in general no one strain ever gained a permanent advantage. There was a back-and-forth stabilization between strains of bacteria occupying the biofilms under observation; both the strains and the biofilm itself were adapting to survive.

Focusing On Biophysical Rules

According to biophysicist Knut Drescher of the Max Planck Institute for Terrestrial Microbiology, in the past decade, advances in high-resolution microscopy and microscale engineering have enabled researchers to observe and measure the ways physical forces affect individual cells. This has in turn allowed scientists to track the way that biofilms form in realistic replications of a variety of environmental conditions.

This change in focus has been fruitful, revealing phenomena previously dismissed as improbable. For example, researchers had assumed that biofilms could not form where water was too fast-flowing. However, University of Washington researchers discovered that some strains of E. coli actually adhere better in conditions with high levels of hydrodynamic shear, where fast-flowing water begins to have a cutting effect on other surfaces.


Understanding the complex possibilities of biofilms and their inhabitants is critical for real-world applications. Very recent work on the bacterium Pseudomonas aeruginosa, which often settles inside catheters and ventilators, causing deadly infections in immunocompromised people, shows that the bacteria can make different kinds of matrix molecules to form their resilient biofilms. When the researchers changed the environment’s structure by adding obstacles, different strains dominated the biofilms that formed. If this technique could be mastered, the insides of medical equipment could be bioengineered to be physically hostile to this bacterium’s biofilm requirements.

Antibiotics are excellent as far as they go, but biofilms are far less vulnerable to them, and exclusive focus on more and better antibiotics has merely produced stronger bacteria. Focusing instead on the biophysical rules controlling these bacterial urban centers and thus forcing bacteria to die or move according to their own rules could give scientists a better tool for fighting bacterial “urban sprawl,” and patients a better shot at keeping the neighborhood to themselves.

The post Researchers Seek New Ways to Fight Antibiotic-Resistant Bacteria Cities appeared first on Futurism.

Color Science

It’s no secret that animals see the world in a very different light to humans, but an international collaboration between scientists has revealed that we are on the brink of a new era when it comes to figuring out how animals see, use and manipulate colour. 

Image Source: Justin Marshall/Queensland Brain Institute

The review takes a deep dive into the field of colour science, which has experienced explosive growth in the last two decades thanks to technological advances.

The authors say that the new insights on colouration are set to have an enormous impact on various fields, including medical applications.

“Knowledge of production and perception and function of colouration is poised to make contributions to medicine, security, clothing and the military,” says Tim Caro, one of the researchers from the University of California, Davis. 

While it’s easy to take seeing the world in colour for granted, colouration plays a vital role in how we understand and interact with our environment.

Whether it’s deciding on a mate, avoiding predators or preventing parasites, animals use colour as a tool for survival and reproductive success. 

Thanks to rapid improvement of relevant technologies, such as spectrophotometry and digital imaging, researchers are beginning to understand how colouration works at the nanoscale and the mechanisms behind colour perception. 

“We can now pose questions about the evolution of camouflage based on what a prey’s main predator can see,” says Caro.

Color Perception

“We can start to appreciate that gene changes underlying colour production have occurred in parallel in unrelated species.”

Drawing on the work of 27 international scientists, the new research provides a snapshot of colouration in a variety of animals and how they use it to interact with their environment.

Thanks to their unique facial patterns, paper wasps are masters at recognising and remembering faces, not unlike humans.

A study revealed that the Polistes fuscatus wasp can spot the difference between images of another wasp’s face and images featuring other objects or tweaked faces.

In another wasp species, Polistes dominula, facial patterns signal how well an individual can fight, just like a karate belt represents fighting level in martial arts.

In a recent study, researchers found that each wasp has a unique facial pattern which is used to assess whether to pick a fight or not. The number of black spots on a wasp’s face indicates how likely they are to win a fight. This ability to size each other up helps to reduce the costs of conflict.

“Colouration is such a key biological trait,” says one of the researchers, Elizabeth Tibbetts, from the University of Michigan.

“Over the years, I’ve learned a lot about the evolution of deception, cooperation, sensory systems and cognition by studying colouration in paper wasps.”

While imitation is often the highest form of flattery, in the animal kingdom it can mean the difference between life and death.

Another recent study highlighted in the paper reveals that more than 150 snake species mimic the red-and-black banded colouring of deadly coral snakes to ward off predators.

Using geographic, ecological and colouration data, the researchers were able to pinpoint where these non-venomous coral snake lookalikes were found.

The team also found that these coral snake mimics likely appeared after the real coral snakes came on the scene.

And when it comes to seeing the world in colour, animals don’t always play by the rules, especially if they live in the ocean.

Recent research revealed that mantis shrimp have four times as many colour receptors as humans do. While we are limited to red, green and blue receptors, mantis shrimps have 12.

“Some animals use fluorescence or ‘enhanced’ colour,” says one of the researchers, Justin Marshall, from the University of Queensland.

“We humans have reinvented this – and we use fluorescence in everyday objects, such as highlighter pens, or in washing powder to achieve that ‘whiter-than-white’ look.”

And it turns out colour has played a role in human evolution too, according to a study on skin pigmentation.

The research revealed that darker skin colour evolved as a shield against harmful ultraviolet radiation, particularly in areas close to the equator. At the same time, sun exposure helps the skin produce vitamin D, creating an evolutionary trade-off between protection and vitamin production.

“This trade-off can affect behaviour,” Caro told Seeker.

“For example, people in northern European climates like to take advantage of the sun in spring and sunbathe to produce vitamin D after the long winter.”

The brilliant shimmer of hummingbird and peacock feathers is also revealing how colour works at the nanoscale.

The secret behind feather iridescence lies in how tiny structures interfere with light. These structures are fine enough to produce colour through the warping of light rather than pigmentation.

Understanding how colour works at a structural level could be useful for the development of sensors in medical and security applications.

With all of this in mind, we can expect the world to become a lot more colourful in the next few years.

“We are on the threshold of a new era of colour science, and the interdisciplinary nature of this collaborative enterprise holds enormous promise,” the authors conclude.

The research is published in Science.

The post We Are Entering a New Era of Colour Science and Researchers Are Buzzing appeared first on Futurism.

Observing Antihydrogen

Matter and antimatter are opposites. Although both are made up of atoms, the subatomic particles that comprise antimatter carry electrical charges opposite to those of their counterparts in ordinary matter. A regular atom has positively charged protons in its nucleus, orbited by negatively charged electrons. In contrast, an antimatter atom has negatively charged antiprotons in place of protons, and positively charged positrons (which are really just anti-electrons) in place of electrons.

Since matter and antimatter must reflect each other as opposites, the spectral lines of antimatter atoms should reflect those of normal atoms exactly. This has been mere conjecture up until now, since physicists have not been able to observe antimatter spectral lines. However, the ALPHA collaboration at CERN has now replaced the proton and electron of a hydrogen atom with an antiproton and positron, respectively, creating antihydrogen.

The team detected the 1S-2S transition between positron energy levels using UV light in 2016, which reinforced the idea that matter and antimatter are symmetrical. Now, they have flipped the spin of the positron using microwaves. In other words, they have detected two distinct signatures from two transitions, which allowed the team to plot the first antimatter transition line shape and directly measure hyperfine splitting independent of a magnetic field.

Image Credit: Ahmadi et al./Nature

“By studying the properties of anti-atoms we hope to learn more about the Universe in which we live,” co-lead author Professor Michael Hayden told Sci-News. “We can make antimatter in the lab, but it doesn’t seem to exist naturally except in minuscule quantities. Why is this? We simply don’t know. But perhaps antihydrogen can give us some clues.”

Unraveling Mysteries Of Physics

This discovery could help us answer some of the largest conundrums in physics today. Scientists know that matter and antimatter were created in about equal proportions after the Big Bang, as the universe cooled and expanded, but they can’t explain the asymmetry of matter and antimatter, or why antimatter, which annihilates anything it comes into contact with, didn’t just wipe out all matter.

“[W]e have yet to answer a central question of why didn’t matter and antimatter, which it is believed were created in equal amounts when the Big Bang started the Universe, mutually self-annihilate?” co-author Professor Mike Charlton said to Sci-News. “We also have yet to address why there is any matter left in the Universe at all. This conundrum is one of the central open questions in fundamental science, and one way to search for the answer is to bring the power of precision atomic physics to bear upon antimatter.”

The post Scientists Just Made a Major Discovery by Observing Antihydrogen Atoms appeared first on Futurism.

The Wallflower Particle

It sounds like the setup to a nerdy joke: “What did the physicists say when they saw a neutrino bump into a nucleus? Finally!”

In a study published August 3rd in Science, researchers from the Oak Ridge National Laboratory (ORNL) and UChicago announced that they had observed a phenomenon that has eluded physicists for decades: elusive neutrino particles bouncing off of a nucleus.

Neutrinos have long been difficult to spot, due to their famously shy nature. Unlike most elementary particles, they rarely interact with other particles. So, when physicist Daniel Freedman proposed in 1974 that they might bounce off of atomic nuclei, a phenomenon known as coherent elastic neutrino-nucleus scattering (CEvNS), he wrote that “our suggestion may be an act of hubris” because of how difficult experimental confirmation would be.

“Why did it take 43 years to observe this interaction?” asked Juan Collar, co-author of the Science study and a professor of physics at UChicago, in a press release. “What takes place is very subtle.”

Previous neutrino searches have tackled the problem by putting as many target atoms in the particles’ path as possible, using enormous tanks built underground.

In contrast, the ORNL collaboration — known as COHERENT — utilized a new type of detector about the size of a fire extinguisher, which searched specifically for the fingerprint of CEvNS. When a neutrino collides with a nucleus, the nucleus recoils slightly from the impact, and displaces a few electrons from their orbits around the nucleus and its neighboring particles in the process.

Juan Collar, a professor in physics at the University of Chicago, with a prototype of the world’s smallest neutrino detector. Image Credit: Jean Lachat/University of Chicago

Size Matters

To detect that recoil, the COHERENT collaboration sought out the perfect material for neutrinos to interact with. They needed something with a Goldilocks nucleus. The right nucleus would be heavy and neutron dense, presenting a larger target for the neutrino to hit, but it could not be so heavy that the tiny nuclear recoil became impossible to spot. The material also had to be transparent, so that photons produced by the runaway electrons could hit a detector.

That perfect material turned out to be cesium iodide crystal doped with sodium and bombarded with slow-moving neutrinos from ORNL’s Spallation Neutron Source particle accelerator.

The use of the cesium iodide crystal allowed COHERENT to create a detector only 4 inches by 13 inches in size. This detector itself is the other exciting outcome of the experiment: because of its transportable size, it could allow for neutrino detection in a host of other applications.

Scientists have suggested using the small detector to monitor the neutrino fluctuations of nuclear power plants and detect the neutrinos produced by clandestine nuclear weapon construction. It could also be used to learn more about supernovae, which release massive amounts of energy in the form of neutrinos, and adapted to detect the elusive particles that make up dark matter.

For Freedman, these results also showed that his original proposal wasn’t such an act of hubris after all. “It’s a real thrill that something that I predicted 43 years ago has been realized experimentally,” Freedman told Science.

The post This Handheld Detector Spotted A Never-Before-Seen Physics Phenomenon appeared first on Futurism.

Simulating Shadows

On August 21, a swath of Earth’s residents will witness the rare side effect of our planet’s long dance with its oldest partner: a total solar eclipse, caused by the perfect alignment of the Moon in front of the sun.

A new visualization from NASA shows in extraordinary detail where the total eclipse will be visible: primarily along an approximately 112 km (70 mile) stretch, cutting across the United States diagonally from Oregon to South Carolina. This diagonal lies in the path of the umbra, the part of the Moon’s shadow where the sun is entirely blocked by the Moon.

The visualization shows that the umbra is shaped like an irregular, slightly curved polygon, rather than the circle you might expect. The same dips and bumps that shade faces and imaginary seas into the Moon’s surface also affect how light passes around it.

“With this new visualization, we can represent the umbral shadow with more accuracy by accounting for the influence of elevation at different points on Earth, as well as the way light rays stream through lunar valleys along the moon’s ragged edge,” said NASA visualizer Ernie Wright of Goddard Space Flight Center in Maryland.

This new level of detail comes from pairing 3-D maps of the Moon’s surface, created by NASA’s Lunar Reconnaissance Orbiter, with elevation data for the places its shadow will hit.

The Last Dance

Because the umbral shadow is relatively small, a solar eclipse is only visible somewhere on Earth roughly every 18 months.

“So, if you find your area in the path of totality one year, you’ve hit the jackpot,” NASA officials explain in the video above. “Because on average, that same spot on Earth will only get to see a solar eclipse every 375 years.”

However, this already rare phenomenon is becoming rarer still: the Moon’s pull on the Earth creates ocean tides, which gradually slow our planet’s rotation. That tidal interaction also transfers energy into the Moon’s orbit, pushing it away from Earth.

As a result, our Moon recedes by about 1.48 inches every year — roughly the same speed at which your fingernails grow. Someday, it will be too far to block the sun fully.

“Over time, the number and frequency of total solar eclipses will decrease,” explains Richard Vondrak, a lunar scientist at NASA’s Goddard, in the statement.  “About 600 million years from now, Earth will experience the beauty and drama of a total solar eclipse for the last time.”

The post New NASA Visualization Shows Where This Month’s Solar Eclipse Will Be Visible appeared first on Futurism.

Integrals and Infections

What does the subject you likely found most difficult in high school have to do with preventing about 887,000 deaths annually from viral infection? A lot, actually, because math — that frustratingly complex discipline — is the key to understanding the universe.

Numbers and equations help us to understand everything from patterns in plant life to the masses of distant stars. They have also helped us predict the shapes of tiny viruses — which can be too small to see with most microscopes.

Math is especially helpful in the case of viruses because the protein shells of most viruses are essentially geometric puzzles. The short genetic sequences of viruses code for only a few distinctly shaped proteins, but these proteins can self-assemble into a variety of different 3D shapes. These structures are used to make the shell that encapsulates the viral genes, forming a virus that is ready to infect new organisms.

Scientists have been aware of virus self-assembly since the heyday of Watson and Crick, but how it occurs in many specific viruses remains a mystery that is still discussed. In fact, fifteen years ago, Reidun Twarock, a mathematician at the University of York, England, stumbled across a lecture about how viruses assemble a certain spherical shell, and she thought it related well to a project she was already working on. She decided to tackle the problem, and thus embarked on a new phase of her research.

“That snowballed,” Twarock said in an interview with Quanta Magazine. She realized her math could not only reveal how the viruses assemble, but how they infect cells and evolve. She began a collaboration with Peter Stockley, a professor of biological chemistry at the University of Leeds, England, and the marriage of math and biology turned out to be very fruitful.

“We really developed this integrative, interdisciplinary approach,” Twarock said in the interview, “where the math drives the biology, and the biology drives the math.”

Geometry Beats Germs

It was not long before the math drove the researchers to the conclusion that the order they were discovering on the viral surfaces must indicate that the virus’s genetics were playing a role — a physical one — in the protein shell’s assembly. After applying several mathematical theories to the problem, Twarock determined that the virus’s RNA was sticking to the proteins, helping them form the new virus.

The team applied this approach to viruses of special interest, reporting their findings about a common-cold virus this February in a Nature Communications study, and then about the hepatitis B virus (HBV) — the one that causes 887,000 deaths a year — this summer in a Nature Microbiology study.


And their research has yielded more than academic insight into these viruses. “We’d like to vaccinate people against several hundred infections,” Stockley told Quanta Magazine. And the way they are pursuing this goal is by exploring what could be a new class of vaccines.

There are currently two main types of approved vaccines — one that uses “dead” pathogens, which is not always as effective, and another that uses “live” but weakened pathogens, which causes stronger immunity, but also poses greater risks. However, the new kind of vaccine that Stockley proposes would use what math has revealed about viral assembly to engineer virus-like particles. They would be similar enough to the actual virus that they might be able to bring about the same immunity “live” viruses can — but without the risk, because they wouldn’t be able to infect cells.

Such a class of vaccines would not only be able to protect people from the HBV virus, but from a multitude of other viruses as well, adding to the millions of people that vaccines have already saved. And without the advanced math skills of people like Twarock, this kind of medicine would be impossible.

The post Math Can Help Us Treat Diseases and Develop Better Vaccines appeared first on Futurism.

After more than 25 years of searching, neuroscientists in the UK recently announced that they’ve discovered a woman who has an extra type of cone cell — the receptor cells that detect color — in her eyes.

According to estimates, that means she can see an incredible 99 million more colors than the rest of us, and the scientists think she’s just one of a number of people with super-vision, which they call “tetrachromats”, living amongst us.

Most humans are trichromats, which means we have three types of cone cells in our eyes.

Each type of cone cell is thought to be able to distinguish around 100 shades, so when you factor in all the possible combinations of these three cone cells combined, it means we can distinguish around 1 million different colors.

Most people who are color blind only have two functioning types of cone cells, which is why they can only see around 10,000 shades — and almost all other mammals, including dogs and New World monkeys, are also dichromats.

But there’s one doctor in northern England who has four cone cell types, taking the potential number of colors she can distinguish up to 100 million — colors most of us have never even dreamed of.
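The figures above follow from simple combinatorics: if each cone type resolves roughly 100 shades (the article's rough estimate, not a precise measurement), the number of distinguishable colors scales as a power of the number of cone types. A minimal sketch:

```python
# Each cone type is thought to resolve roughly 100 shades, and perceived
# colors arise from combinations across cone types, so the number of
# distinguishable colors scales as 100 ** (number of cone types).
SHADES_PER_CONE = 100

def distinguishable_colors(n_cone_types):
    return SHADES_PER_CONE ** n_cone_types

print(distinguishable_colors(2))  # dichromat (color blind): 10,000
print(distinguishable_colors(3))  # trichromat (most of us): 1,000,000
print(distinguishable_colors(4))  # tetrachromat: 100,000,000
```

The jump from three cones to four is what yields the "99 million more colors" figure: 100 million minus the trichromat's 1 million.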

Identified only as cDa29, the scientists finally found this woman two years ago, but they’ve been searching for more than 25 years — and think there are more tetrachromats like her out there.

So how do you get a fourth type of cone cell?

The idea of tetrachromats was first suggested back in 1948 by Dutch scientist HL de Vries, who discovered something interesting about the eyes of color blind people.

While color blind men only possess two normal cone cells and one mutant cone that’s less sensitive to either green or red light, he showed that the mothers and daughters of color blind men had one mutant cone and three normal cones.

That meant they had four types of cone cells, even though only three were working normally — something that was unheard of before then.

Despite the significance of the finding, no one paid much attention to tetrachromats until the late ’80s, when John Mollon from Cambridge University started searching for women who might have four functioning cone cells.

Assuming that color blind men pass this fourth cone cell onto their daughters, Mollon estimated that around 12 percent of the female population should be tetrachromats.

But all of his tests showed that these women could only perceive the same colours as the rest of us — which meant only three of their cone cell types were working, so they weren’t true tetrachromats.

Then, in 2007, neuroscientist Gabriele Jordan from Newcastle University in the UK, who had formerly worked alongside Mollon, decided to try a slightly different test to look for this super-vision.

She took 25 women who had a fourth type of cone cell and put them in a dark room. Looking into a light device, each woman watched three colored circles of light flash before her eyes.

To a trichromat, they all looked the same, but Jordan hypothesized that a true tetrachromat would be able to tell them apart thanks to the extra subtlety afforded to her by her fourth cone.

Incredibly, one of the women tested, cDa29, was able to differentiate the three different colored circles in every single test.

“I was jumping up and down,” Jordan told Veronique Greenwood from Discover magazine.

So if so many female children of color blind men have four cones, why have we only been able to find one true tetrachromat?

For starters, the team was only looking within the UK. But the bigger issue, Jordan thinks, is that most true tetrachromats would never need to use their fourth cone cell type, and so would never realize they had special vision.

“We now know tetrachromacy exists,” she told Greenwood. “But we don’t know what allows someone to become functionally tetrachromatic, when most four-coned women aren’t.”

Jay Neitz, a vision researcher at the University of Washington, who wasn’t involved in the study, thinks that it might take practice and specially designed hues to truly unlock the power of tetrachromats.

“Most of the things that we see as colored are manufactured by people who are trying to make colors that work for trichromats,” he said. “It could be that our whole world is tuned to the world of the trichromat.”

In other words, the colors we use are so limited that the fourth cone cell never gets a workout.

The research on cDa29 hasn’t been peer-reviewed or published as yet, and Jordan is continuing her research and search for more tetrachromats.

There’s a lot more work to be done and Jordan’s results need to be replicated and verified. But if we can confirm that tetrachromats really do exist, it won’t just teach us about the limitations of human senses, it could help scientists develop better artificial sensing devices, and also help us figure out more about how vision works.

One thing we might never be able to understand, sadly, is exactly what the world looks like through cDa29’s eyes, seeing as it’s our brains that truly perceive color — our cone cells just receive the data to be processed.

“This private perception is what everybody is curious about,” Jordan told Discover. “I would love to see that.”

The post A UK Woman Has an Extra Cone Cell in Her Eyes and Can See More Colors appeared first on Futurism.

Plasma Chamber

Scientists have observed how a magnetic dipole field can create what looks like a glowing, localised fireball inside a plasma chamber.

By placing a regular dipole bar magnet near the surface of the cathode, the researchers were able to generate an intense “glowing, fireball-like structure” inside the plasma chamber, which varied in its brightness depending on how they positioned the magnet.

According to the team from the Saha Institute of Nuclear Physics in India, the localised glowing results from increased ionisation in the plasma chamber, due to the way electrons are confined by the magnetic field near the negatively charged cathode surface.

Scientists have previously conducted a range of research on how magnetism interacts with plasma using chambers just like this, but most of the time those experiments involve a magnet positioned inside the chamber – which means variations in the strength of the magnetic field don’t usually show up.

By placing and repositioning a mobile magnet on the outside of the chamber, however, the researchers were able to highlight a different effect of the magnetic field.

“Though bar magnets have been used in plasma experiments, the focus was mainly on the measurement of plasma equilibrium parameters like density, potential, and other fluctuation measurements,” says lead researcher Pankaj Kumar Shaw.

“In our opinion, this is the first effort to investigate nonlinear dynamical phenomena of the fluctuations under dipolar magnetic field.”

Image Source: Shaw et al/AIP Publishing

When a magnetic field is introduced to plasma, it induces fluctuations in the plasma, which become less ordered and more chaotic as the strength of the magnetic field increases.

While scientists already knew about this effect, the researchers here discovered that the transition from order to chaos in the plasma follows what’s called a period-doubling bifurcation – a mathematical phenomenon in which the period of a system’s oscillations repeatedly doubles in response to changes in a parameter.
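The plasma system itself is far more complicated, but the classic textbook illustration of the period-doubling route to chaos is the logistic map. This sketch (an analogy, not the team's model) counts how many values the map's orbit cycles through as its control parameter grows, much as the plasma fluctuations grow more complex with field strength:

```python
def logistic_attractor(r, n_transient=1000, n_sample=64):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient,
    and return the distinct values the orbit settles onto."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = set()
    for _ in range(n_sample):
        x = r * x * (1 - x)
        orbit.add(round(x, 6))  # round so a converged cycle collapses to its points
    return sorted(orbit)

# As r increases, the stable cycle doubles in length -- 1, 2, 4, ... --
# before the dynamics finally become chaotic.
print(len(logistic_attractor(2.8)))  # 1 (steady state)
print(len(logistic_attractor(3.2)))  # 2 (period-2 oscillation)
print(len(logistic_attractor(3.5)))  # 4 (period-4 oscillation)
```

Push r past roughly 3.57 and the doublings pile up infinitely fast, leaving chaos: the same qualitative sequence the team reports as the magnetic field strengthens.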

Image Source: Shaw et al/AIP Publishing

“Following a particular sequence from order to chaos via [a] period-doubling route was unexpected,” says Shaw.

“Changing the position of the bar magnet varied the strength of the magnetic field over 1–10 G. This observation in such a low range of magnetic field was surprising.”

Theoretical Interest

Aside from the cool fireball effect, the results may largely be of theoretical interest for other plasma researchers right now, but the team thinks that in the future the findings could impact the study of how magnetic anomalies affect solar wind interactions with planetary bodies.

Tbh, that’s a pretty academic area too, but it’s one that could also have massive repercussions on future space exploration and colonisation inside our Solar System – especially since NASA has indicated it wants to study the possibilities of launching a giant magnetic field to make Mars habitable again.

The idea there is that, over the passing of galactic eons, the Red Planet may have lost its once lush atmosphere, which was stripped away by high-energy particles projected from the Sun.

Understanding more about how plasma and magnetic fields play together could be a vital step in learning how to restore the Martian atmosphere, by reinstating the Red Planet’s own magnetic field.

Of course, that’s a pretty giant leap from the discovery we’re telling you about today – but it’s all part of a scientific continuum, folks.

One little fireball could help us heal the hurt created by a much bigger one.

The findings are reported in Physics of Plasmas.

The post Physicists Make a Glowing Fireball From a Magnet and a Plasma Chamber appeared first on Futurism.


It’s been about nine months since a team of CERN researchers succeeded in their goal of measuring the spectrum of light emitted from hydrogen’s mirror particle, antihydrogen.

Image Source: Tod Strohmayer/NASA

They were just getting started. Now the researchers have detailed evidence of the structure of antihydrogen using spectroscopy, setting a landmark in our quest to determine why there is something in the Universe rather than nothing.

Led by Canadian researchers under what’s called the ALPHA Collaboration, the first detailed observation of “home made” antihydrogen’s structure has shown its spectral lines are virtually identical to those of hydrogen.

Had they been even slightly different, this would be quite a different story, one heralding a crack in our models of the Universe that could reveal why it looks the way it does.

One of the current big mysteries facing modern physics is the question of why everything seems to be made of one kind of matter, when there are two kinds.

The Standard Model of physics predicts that all particles have something of a twin; a matching particle that has mirror properties, such as an opposite charge.

For example, the negatively charged electron has a positively charged partner called a positron.

These particles form together as a pair. What’s more, if the opposing kinds of particles meet, they cancel out in a blaze of gamma radiation.

That leaves the question why there is so much of one kind of matter, and not just an empty Universe humming with radiation.

If there was some kind of imbalance in the apparent symmetry of the Universe, it would go a long way to explaining why we ended up with enough matter sticking around after the Big Bang to build a couple trillion galaxies.

Looking for a difference in the two kinds of matter is as good a place to start as any.

Step number one is getting enough antimatter in one place, which is no easy task.

The ALPHA Collaboration managed to do it by cranking up CERN’s Antiproton Decelerator and churning out about 90,000 antiprotons.

To make the element antihydrogen, they needed to couple each antiproton with a positron.

Even after making 1.6 million positrons, the researchers only managed to make about 25,000 antihydrogen atoms.

A relative handful of these were slow enough to be caught in a special magnetic trap that kept them from touching ‘normal’ matter and vanishing in a blink of light.

“We have to keep them apart,” says researcher Justine Munich.

“We can’t just put our anti-atoms into an ordinary container. They have to be trapped or held inside a special magnetic bottle.”

In all, the team managed to trap and detect just 194 atoms over a number of trials, which gives you some idea of the difficulties involved in studying even the simplest forms of antimatter.

Fortunately it was enough to irradiate a sample of antihydrogen with microwaves of varying frequencies and observe their reaction.


When a unit of electromagnetic radiation such as a microwave hits an electron, the electron absorbs it and shifts to a different energy state. Dropping back, it spits out its own wave of light.

Different elements absorb and emit their own spectrum of light at specific wavelengths, producing a pattern that tells physicists a lot about the structure of the atom producing them.

“Spectral lines are like fingerprints. Every element has its own unique pattern,” says researcher Michael Hayden from Simon Fraser University.
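
For ordinary hydrogen, that fingerprint can be computed from the Rydberg formula. As a rough illustrative sketch (using the standard textbook constant, not ALPHA’s measurement), the visible Balmer lines fall out of 1/λ = R(1/2² − 1/n²):

```python
# Hydrogen's visible (Balmer) spectral lines from the Rydberg formula.
# CPT symmetry predicts antihydrogen should show the identical pattern.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in inverse metres

def balmer_wavelength_nm(n):
    """Vacuum wavelength (nm) of the Balmer line from level n down to level 2."""
    inv_lam = R_H * (1 / 2**2 - 1 / n**2)
    return 1e9 / inv_lam

for n in (3, 4, 5, 6):
    print(f"n={n} -> 2: {balmer_wavelength_nm(n):.1f} nm")  # H-alpha near 656 nm, etc.
```

Measuring where antihydrogen’s lines fall, and checking them against numbers like these, is exactly the comparison the ALPHA team is after.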

Theoretically, as mirror versions of the same element, hydrogen and antihydrogen should share this pattern.

Earlier research suggested this was true, but the detail wasn’t clear enough to be conclusive.

For the first time, researchers have found a way to capture fine details of antihydrogen’s spectral lines and show they are in fact identical to hydrogen.

Radiating the antihydrogen atoms with microwaves allowed the physicists to determine this light fingerprint in a rather indirect way: specific changes in the antihydrogen atoms caused them to be ejected from the magnetic bottle, and those ejections were used to fine-tune estimates of the spectral lines.

“Spectroscopy is a very important tool in all areas of physics. We are now entering a new era as we extend spectroscopy to antimatter,” says Jeffrey Hangst, spokesperson for the ALPHA experiment.

“With our unique techniques, we are now able to observe the detailed structure of antimatter atoms in hours rather than weeks, something we could not even imagine a few years ago.”

Right now, the comparison has shown the effectiveness of using spectroscopy rather than resulting in monumental new physics. But new tools like these are going to be important in studying antimatter in the future.

“By studying the properties of anti-atoms we hope to learn more about the Universe in which we live,” says Hayden.

“We can make antimatter in the lab, but it doesn’t seem to exist naturally except in miniscule quantities. Why is this? We simply don’t know. But perhaps antihydrogen can give us some clues.”

This research was published in Nature.

The post Physicists Have Captured the First Spectral Fingerprints of Antimatter appeared first on Futurism.

Out of This World Computing

If you enjoy roaming around open world video games, take a moment to consider a virtual environment comprising the entire universe — and no, we’re not talking about No Man’s Sky.

In June, a team of researchers from the University of Zurich used a supercomputer to create a simulation containing 2 trillion digital particles shaped into roughly 25 billion virtual galaxies. At the time, it was the world’s largest virtual universe, but now, researchers from the Chinese Academy of Sciences in Beijing say they’ve beaten the Swiss team’s work, according to an announcement published on Wednesday by Science and Technology Daily, the official newspaper of the science ministry.

The Chinese team used the world’s most powerful supercomputer, the Sunway TaihuLight based in Wuxi, to create their virtual universe. The team’s simulation was limited to an hour as another group had booked time on the supercomputer, but over the course of those 60 minutes, they were able to simulate the creation of the universe from the Big Bang up to early expansions — a feat that required 10 trillion digital particles.

“We just got to the point of tens of millions years after the Big Bang. It was still a very young stage for the universe. Most galaxies were not even born,” Gao Liang, the computational cosmology group chair of the academy’s National Astronomical Observatories, told the South China Morning Post.

Modeling to Understand

Scientists have been trying to create simulations that model the universe for decades now, but simulating every known object in the universe requires a very sophisticated machine. Back in the 1970s, the available computing power could only process up to a thousand particles. Thanks to ever-more-powerful computers, that number has grown into the trillions in recent years.

Developing a virtual universe isn’t just for fun. Astronomers use such models to better study and understand the universe and the cosmic bodies in it. Simulations break down the universe’s mass into particles that interact with one another through physical forces like gravity. The more particles there are in a simulation, the better its chances of replaying and forecasting the evolution of the universe.
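
To make that concrete, here is a minimal, illustrative direct-summation gravity step in Python (a toy, not the code run on any supercomputer; production codes use faster approximations such as tree or particle-mesh methods). Every particle feels every other, an O(N²) cost per step, which is why trillion-particle runs demand machines like the Sunway TaihuLight:

```python
import random

# Toy direct-summation N-body step in arbitrary simulation units (G = 1).
G, DT = 1.0, 0.01

def step(pos, vel, mass, softening=0.05):
    """Advance all particles one Euler step under mutual gravity."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(c * c for c in d) + softening ** 2  # softened to avoid blow-ups
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += G * mass[j] * d[k] * inv_r3
    for i in range(n):
        for k in range(3):
            vel[i][k] += acc[i][k] * DT
            pos[i][k] += vel[i][k] * DT
    return pos, vel

random.seed(0)
n = 50  # 50 particles here; cosmological runs use trillions
pos = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n)]
vel = [[0.0, 0.0, 0.0] for _ in range(n)]
mass = [1.0 / n] * n
for _ in range(100):
    pos, vel = step(pos, vel, mass)
```

Doubling the particle count quadruples the pairwise-force work, so the leap from thousands of particles in the 1970s to 10 trillion today reflects far more than a trillion-fold increase in raw computation.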

Until we develop and perfect universal quantum computers, supercomputers are our best tool for such computations. For Gao, his team’s work is just the beginning. “This is just a warm-up exercise. We still have a long way ahead to get what we want,” he said. To fully simulate the universe at its current age of about 13.8 billion years, the team will need more than an hour on the Sunway TaihuLight. “Or its successor, which is better,” Gao added.

The post Chinese Researchers Just Unveiled the World’s Largest Virtual Universe Simulation appeared first on Futurism.

A Hitchhiker’s Guide to Antimatter

This week, a paper came out that takes the work of twentieth-century physicists on subatomic interactions and delves into the strange and nascent study of antimatter. Published by Mostafa Ahmadi and his colleagues in the journal Nature, the study shows that antihydrogen — the antimatter counterpart of hydrogen — also produces the standard 21-cm (8.3-inch) emission wavelength. That is half the answer to life, the Universe, and everything as predicted by Douglas Adams’ The Hitchhiker’s Guide to the Galaxy…which is 42.

Antimatter is hard to study since it tends to annihilate any matter it comes into contact with. Using CERN’s ALPHA-2 device, researchers slowed and captured 90,000 antiprotons from the Antiproton Decelerator, ultimately producing 25,000 antihydrogen atoms. That might seem like a big number, but an average grain of sand contains 1–100 million million million atoms – roughly twenty orders of magnitude more than the quantity of antihydrogen produced so far.

Image Credit: Nature Journal

Results Support The Standard Model

Developed in the 1970s, the Standard Model successfully explains nearly all experimental results on how the basic building blocks of the Universe, called fundamental particles, interact through three of the four fundamental forces. The Standard Model has predicted a variety of phenomena, including charge-parity-time (CPT) symmetry, and has become an established, reliable theory.

According to Nature, “Charge-parity-time (CPT) symmetry predicts that energy levels in antimatter and matter should be the same. Even the slightest violation of this rule would require a serious rethink of the standard model of physics.” Scientists test the Standard Model by hunting for inconsistencies: if the behavior of matter and antimatter were found to diverge, that would cast doubt on the model’s rules or even falsify them. But since the opposite has held, here’s to the continued success of twentieth-century subatomic particle physics, if that isn’t too much of a mouthful.

The post New Antimatter Study Lends Strength to Fundamental Theory appeared first on Futurism.

Entirely New Discoveries

In a world of quantum oddities, the phenomenon of indistinguishability, the impossibility of distinguishing between two quantum particles, remains notable. Superposition is one of the underlying causes of indistinguishability because there is no sure way to lock down an exact position of a quantum particle. This, in turn, makes it impossible to know which particle is which when two quantum particles interact in the same place. This leads to exotic particle behaviors, especially at low temperatures. Under those conditions, behavioral qualities of particles can resemble each other closely, causing phenomena such as Bose-Einstein condensates and superfluidity.

Chemistry, though, takes place at relatively high temperatures, which cause most substances to shed their quantum properties. This is why quantum indistinguishability and chemistry have traditionally been approached as if they were completely distinct, allowing chemists to ignore its effects with confidence. However, University of California Santa Barbara researchers Matthew Fisher and Leo Radzihovsky are turning the field of chemistry on its head, arguing that this confidence has been misplaced.

The pair has now demonstrated for the first time that even at ordinary temperatures, quantum indistinguishability plays a significant role in some chemical processes. This means that indistinguishability most likely causes entirely new chemical phenomena such as isotope separation, and may also give better explanations for poorly understood phenomena such as reactive oxygen species and their enhanced chemical activity. The quantum coherence of atomic nuclei is of particular interest to the team.

Image Credit: WikimediaImages
Despite their coherence, nuclear spins easily couple with other physical states. When coupling takes place, indistinguishability and its properties are no longer confined to the nuclei; instead, they influence the entire molecule. According to Fisher and Radzihovsky, this effect is especially strong in small symmetric molecules such as water, because symmetry controls which configurations are possible for the molecules when the spins of nuclei interact, limiting the molecule to interactions with similarly symmetrical spins. The end result is spin-isomers: different arrangements of the same molecule with the spins moving together or in opposite directions.

The Central Science, Upended

Things like spin-isomers and symmetry are important in chemistry because many reactions depend upon molecules being able to fit together precisely. Fisher and Radzihovsky have demonstrated that quantum indistinguishability changes the way molecules fit together: it prevents reactions that don’t achieve symmetry between nuclei. They’ve also shown that para molecules, with their greater range of possible symmetrical matches, are necessarily more reactive than ortho molecules.

This research could have a major impact on the study of enzymatic catalysis. Hydrogen, for example, is subject to the influence of quantum indistinguishability and is also central to the work of many enzymes. This is easier to predict than to test, however, since it is difficult to separate ortho- and para-versions of molecules.

Fisher and Radzihovsky also believe quantum indistinguishability will influence isotope fractionation by providing it with a new mechanism, and offer insight into reactive oxygen species and their enhanced chemical activity, not to mention biochemical molecules in general. Testing these predictions may be an uphill battle, but understanding some of the most critical and subtle phenomena in chemistry will be a worthwhile payoff.

The post A New Quantum Understanding is About to Turn Chemistry on Its Head appeared first on Futurism.

What exactly does the leap second do?

The post Watch: The Leap Second Explained appeared first on Futurism.

A Materials Science Surprise

Some of the tiniest crystals in the world can, together, form superlattices, the basic elements of various novel materials. These crystals are also called “artificial atoms,” because they can organize themselves into structures that look a lot like molecules.

Scientists from the Department of Energy’s (DoE’s) SLAC National Accelerator Laboratory (SLAC) and Stanford University have now serendipitously observed these nanocrystals forming superlattices as they grow. In fact, during the routine synthesis of nanocrystals, superlattices can form incredibly fast — in a matter of seconds, not days. These observations, the first of their kind, will help scientists to adapt and improve the assembly process and use it to make novel materials for applications such as solar cells, optoelectronics, magnetic storage, and catalysts that speed up chemical reactions.

Image Credit: Greg Stewart/SLAC National Accelerator Laboratory
Scientists have been creating nanocrystals in labs for decades. Tiny inhabitants of the realm of quantum mechanics, these crystals often have fascinating and unusual properties that vary based on their composition, shape, and size. Superlattices feature the ordered structure of crystals, but rather than individual atoms, they are composed of nanocrystals that imbue the superlattices with some of their properties. Superlattices also have unusual properties of their own, and part of the purpose of this kind of research is to expand our understanding of how we grow superlattices for certain qualities, control their properties, and keep them uniform.

A Quickened Pace

Until this work, superlattices were grown slowly at low temperatures, often over the course of several days. However, an accidental discovery revealed that heating palladium nanocrystals with an X-ray beam from the Stanford Synchrotron Radiation Lightsource (SSRL), which “fries” the nanocrystals, also prompts them to self-assemble into superlattices.

“The challenge was to understand what brings the particles together and attracts them to each other but not too strongly, so they have room to wiggle around and settle into an ordered position,” Stanford assistant professor of chemical engineering Jian Qin said in a statement. Since that time, the team has focused follow-up experiments on how to change the composition, properties, and size of the superlattices, and on how the superlattices grow. They have also realized that the process is more generalizable.

“Once we understood this system, we realized this process may be more general than we initially thought,” Stanford postdoctoral researcher Liheng Wu, who originally observed the process, said in the statement. “We have demonstrated that it’s not only limited to metals, but it can also be extended to semiconducting materials and very likely to a much larger set of materials.”

The post An Accidental Discovery Shows Artificial Atoms Can Quickly Self-Assemble appeared first on Futurism.

A Simple Method

Scientists at the University of Cambridge have created a new material that mimics the properties of spider silk, which is stronger than steel and tougher than Kevlar. The team’s research has been published in Proceedings of the National Academy of Sciences.

Many researchers have attempted to replicate spider silk with varying degrees of success. The most impressive aspect of the Cambridge team’s effort might be that the synthetic silk they have been able to produce is 98 percent water.

The team first creates a hydrogel out of water, silica, and cellulose. Holding the latter two components together are binding molecules called cucurbiturils. Long fibers of silica and cellulose are then pulled from the hydrogel; within about thirty minutes the water evaporates, leaving the scientists with a tough, stretchy fiber.

From Balloons to Boats

Spider silk needs to absorb a great deal of energy so that the creatures’ webs can tolerate impact and catch prey. For that reason, Darshil Shah, a Research Associate at the Center for Natural Material Innovation at Cambridge, identifies protective clothing, such as bulletproof vests, as a particularly suitable application for the team’s creation.

“Things like shrapnel resistant or other protective military clothing, that would be an exciting application,” he told The Smithsonian. Bike or skateboard helmets, hot air balloon materials, parachute fabric, and sail cloth have been suggested as other potential uses.

Scientists are finding ways to create materials with all sorts of remarkable characteristics, from being able to heal themselves when split in two to charging your phone in minutes, and while the Cambridge team’s spider silk itself is another exciting material discovery, the creation process is the real innovation — it works at room temperature, using cheap and accessible materials, giving it an advantage over other synthetics like nylon.

According to Shah, different materials could be created using the same process by substituting the cellulose with other polymers. “It’s a generic method to make all fibers, to make any form of fiber green,” he told The Smithsonian. “We think that this method of making fibers could be a sustainable alternative to current manufacturing methods,” he added in an interview with AZO Materials.

The post Scientists Have Created a Synthetic Spider Silk That is Stronger Than Steel appeared first on Futurism.

To truly understand dark energy, you have to look at Einstein’s general theory of relativity on a cosmic scale.

The post Watch: Why the Universe Needs Dark Energy appeared first on Futurism.

Etch A Circuit Electronics

Physicists at Washington State University have used a laser to write an electrical circuit into a crystal. This new technique changes the nature of the usually nonconductive crystal, thereby making possible transparent, three-dimensional electronics that can be erased and reconfigured. In other words, the development could lead to invisible, changeable, Etch A Sketch-style electronics.

The phenomenon was discovered by WSU researchers four years ago by accident when a doctoral student left a crystal exposed to light, which led to a 400-fold increase in its electrical conductivity. Now, WSU professor of physics and materials science Matt McCluskey has etched a line into the crystal with a laser and placed electrical contacts at each end, enabling the crystal to carry a current.

“It opens up a new type of electronics where you can define a circuit optically and then erase it and define a new one,” McCluskey said. “It’s exciting that it’s reconfigurable. It’s also transparent. There are certain applications where it would be neat to have a circuit that is on a window or something like that, where it actually is invisible electronics.”

Better Materials, Simpler Processes

Although crystals typically do not conduct electricity, under the right conditions strontium titanate crystals can be heated and altered so that light can render them 1,000 times more conductive, a new attribute that can last for up to one year. This “persistent photoconductivity” phenomenon can also take place at room temperature, making it a major improvement over materials that must be cooled with liquid nitrogen. The circuit can also be erased with heat and then recast using an optical pen.

“It’s an Etch A Sketch,” McCluskey said. “We’ve done it a few cycles. Another engineering challenge would be to do that thousands of times.”

The post Physicists Turned a Crystal into an Electrical Current That Works Like an “Etch A Sketch” appeared first on Futurism.

2012 TC4

Our Solar System is littered with chunks of space rock that whizz around in different orbits and at varying speeds – and it’s no big deal until one of those rocks turns out to be on a nasty, unexpected trajectory and smashes into Earth.

It’s actually a pretty unlikely apocalypse scenario, but even with one in 10,000 odds it’s a good idea to keep an eye out. Now NASA scientists are excited they’ll finally get to test out some of their defence systems with an upcoming asteroid fly-by in October.

Don’t run for the panic room just yet – the inbound asteroid, called 2012 TC4, is estimated to safely pass our planet at a distance of about 6,800 kilometres (4,200 miles). We don’t have a more concrete number because the space rock has been out of telescope range since 2012.

At a width of roughly 10-30 metres (30-100 feet), TC4 is pretty small. So far astronomers have only caught a glimpse of it once – when it hurtled past Earth back in 2012 at a distance much closer than our own Moon.

Back then, they only had a window of seven days to make observations and calculate when the asteroid would show up next.

But based on that data, it looks like TC4 will zoom around again on October 12 this year, and researchers are making preparations to not only update their observations of this particular object, but also to test out some of their planetary defense strategies.

“This time we are adding in another layer of effort, using this asteroid flyby to test the worldwide asteroid detection and tracking network, assessing our capability to work together in response to finding a potential real asteroid threat,” says observation campaign lead Michael Kelley from NASA.


Near-Earth Objects

This is the first time NASA researchers get to use an actual space rock for their planetary defense efforts, which involve astronomers from all over the world.

The Planetary Defense Coordination Office (PDCO) was only established last year, with the goal of surveying the skies for any near-Earth objects that are big enough and close enough to pose a risk to us.

Just last October the office’s new space monitoring systems gave us a heads-up about an unexpected asteroid flyby, calculating its timing, risk, and potential flight paths.

But this time around researchers can actually plan to coordinate their activities.

“This is a team effort that involves more than a dozen observatories, universities and labs across the globe so we can collectively learn the strengths and limitations of our near-Earth object observation capabilities,” says Vishnu Reddy from the University of Arizona who will coordinate this new exercise.

While we know for sure that TC4 is not going to smash into Earth, astronomers know so little about its orbit that it’s actually a great test subject for strategies that can improve our ability to track and predict near-Earth objects.

“It will be incumbent upon the observatories to get a fix on the asteroid as it approaches, and work together to obtain follow-up observations that make more refined asteroid orbit determinations possible,” explains Paul Chodas from NASA’s Center for Near-Earth Object Studies.

Even though a space rock apocalypse is relatively unlikely, NASA’s efforts to detect and catalogue as many asteroids as possible are commendable – it only took one stray chunk of space debris to wipe out the dinosaurs, after all.

Unfortunately, even with the coordinated efforts to find these space threats, for now there’s not a lot we can actually do about them. That’s why some experts warn we really should build an intercepting spacecraft before we need it.

You don’t have to lose any sleep over the October 12 flyby, but let’s hope that by the time PDCO has sharpened its skills and does find a real threat, we actually have some space bomb to blow it up with. Or something.

The post An Asteroid Headed Our Way Is About to Test NASA’s Planetary Defense System appeared first on Futurism.

“Rocks Rolling Downhill”

How did life first originate from nothing? This question has been the focus of biologists, and specifically astrobiologists, and popular theories have included everything from meteorites to seemingly random chemicals to sheer luck. In 1859, Charles Darwin posited in On the Origin of Species that “All organic beings that have lived on Earth could be descended from some primordial form.” His basic idea was that chemical components and energy sources somehow spontaneously generated life in the primordial soup.

However, in 2013, MIT biophysicist Jeremy England proposed a new theory that substituted thermodynamics in place of luck. He derived a mathematical formula to explain how atoms, driven by external energy (such as that found in primordial soup) and heat (like you’d find in an atmosphere), will gradually restructure themselves to dissipate more and more energy. In other words, under the right conditions, matter naturally acquires the basic physical quality — the tendency to capture energy from the environment and dissipate it as heat — associated with life, based on the law of increasing entropy or the second law of thermodynamics, also called the “arrow of time.”

Image Credit: Dieter_G/Pixabay
If this theory is right, England commented in 2013, luck has nothing to do with it, and life should evolve following those laws and “should be as unsurprising as rocks rolling downhill.”

The Arrow Of Time

England has since been testing his formula and his idea more generally using computer simulations. He published two studies in July, and both experiments appear to support his basic theory about adaptation driven by dissipation. However, what these results ultimately mean for the origins of life remains unclear.

In the simulation, a soup of 25 chemicals reacted together in multiple ways as environmental sources of energy “force” certain chemical reactions, just like ATP provides the chemical fuel for cellular metabolism. In some cases, the system reaches an equilibrium state, the most familiar outcome produced by the second law of thermodynamics. However, in other cases, the chemical reaction network evolves as reactions harvest as much energy from the environment as possible, ending at fixed points far away from equilibrium.

These “rare states of extremal thermodynamic forcing” are similar to the extreme forcing that living creatures engage in as we burn up chemical energy. England believes that atoms acquire the very specific form and function suited to optimal chemical energy consumption, and become a bacterium, because thermodynamics dictates this natural outcome in far-from-equilibrium systems. Many biophysicists agree with England, but since there is still disagreement about what the essence of life is, it’s difficult to nail down how much this theory explains about life’s origins.

So, what’s next? Perhaps physically simulating primordial soup outside of a computer environment is the next step, but this does involve some guesswork. And, while England sees his theory as underlying Darwinian evolution, some disagree that this dissipation-driven adaptation theory could distinguish between things that are simply structured in a certain way and things that are alive. For example, the ability to perceive, process, and pass on information in the form of reproduction may not be fully explained by England’s theory. In any case, this work provides fascinating insights into one possible explanation of how our planet made something out of nothing.

The post New Theory for Life Suggests It Was Not an Accident of Biology. It Was Physics. appeared first on Futurism.

Our First Exomoon?

A signal has been detected by the Kepler Space Telescope that may well be the first moon we have ever discovered outside of our solar system. It was picked up by David Kipping and his colleagues at Columbia University, who remain cautious. Kipping told New Scientist, “It is consistent with the signal that we might expect from a moon, but it might be consistent with other things as well.”

Kepler finds cosmic bodies by measuring their disruption of light as they cross the path between us and distant stars. The team detected the dips that indicated the moon over three orbits of a planet in the star system called “Kepler-1625.” They graded the exomoon’s likelihood at four sigma — which means that the probability of seeing the same data if the moon did not exist is roughly one in 16,000.
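
That one-in-16,000 figure is just the two-sided tail probability of a normal distribution at four standard deviations. A quick back-of-the-envelope check in Python:

```python
import math

# Two-sided tail probability of a normal distribution at 4 sigma:
# the chance of seeing data at least this extreme if no moon exists.
sigma = 4.0
p = math.erfc(sigma / math.sqrt(2))
print(f"p = {p:.2e}  (about 1 in {1 / p:,.0f})")
```

For comparison, physicists typically reserve the word “discovery” for five-sigma results, whose odds of arising by chance are closer to one in 1.7 million.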

However, if it is indeed a moon, it would be a cosmic anomaly, as the team states that it is roughly the size of Neptune and orbiting a planet the size of Jupiter. On the other hand, as David Waltham at the University of London, said to New Scientist, “It would be pretty odd that there are hundreds of moons in the solar system but none anywhere else.”

Also, it may be that the first exomoons we detect will have to be relatively large ones, as smaller moons will likely cause blips in the data that are too small for us to detect yet. In this case, we are attempting to distinguish patterns in the faint light from a star that is 4,000 light-years away.

A Place to Call Home?

The discovery of an exomoon is exciting because “habitable exomoons could be even more numerous than Earth-analogue exoplanets themselves,” Helmut Lammer argues in a paper published in Origins of Life and Evolution of the Biosphere. This is for two main reasons.

The first reason is that “the majority of confirmed exoplanets and candidates orbiting in the habitable zone (HZ) of their host stars are either super-Earths or gas giants.” Essentially, because our instruments are not precise enough, the vast majority of exoplanets we have discovered so far are uninhabitable due to their size – Kipping asserts in a paper that a body would have to be twice as massive as Mars for us to even see it. Worlds that large tend to have gassy, toxic atmospheres or a gravitational pull that would be unsuitable for humans.

Second, exomoons are tidally locked to their exoplanets, meaning that their days should be shorter than their stellar years, making them more Earth-like and favorable for humans.

The discovery is also significant because, if we can confirm that a moon of this size exists, it vastly expands the horizons of the field. We could start looking specifically for moons of this size and potentially accelerate the discovery of other exomoons.

We won’t know for certain until Hubble is given a chance to gather data on the possible moon. But, if it is confirmed, this exomoon will be a fabulous discovery that gives us knowledge of the existence of worlds that could be more habitable than exoplanets, as well as enlightening us on another secret of the universe.

The post We May Have Just Found the First Moon Outside Our Solar System appeared first on Futurism.

Using Light to Make Winners

Your level of success may be all in your head — literally. Chinese scientists have turned submissive “loser” mice into dominant “winner” mice in a study published in Science. They triggered the transformation by promoting activity in the part of the brain associated with “effortful behavior” and “social dominance.”

The experiment used a typical “Dominance Tube Test,” which involves putting two mice in opposite ends of a tube, and observing which one pushes the other out of the way. While the “winners” are not necessarily stronger, professor Hailan Hu, who led the study, asserted to The Guardian that the test ascertains which subject holds the higher social position based “not [on] aggressiveness per se,” but “perseverance, motivational drive, grit.”

Neuroscientists at the Chinese Academy of Sciences collaborated with other institutions to use optogenetics — a fiber optic implant which encourages neural activity using light — to stimulate the dorsomedial prefrontal cortex (dmPFC) in mice who had been “losers” in the test case. In order to do this, the scientists also had to genetically engineer the mice to make their dmPFCs light sensitive.

After being exposed to the treatment, the former “losers” won in 90 percent of cases. When the researchers conducted further experiments, they found that the effect also worked vice versa — “activation or inhibition of the dmPFC induces instant winning or losing, respectively” — and that some mice retained “this newly dominant position.”

The Ultimate Makeover

The findings of the study are interesting in two respects. First, it gives us an insight into the way that power is constructed in the animal kingdom, which we may also be able to apply to humans. Ivan de Araujo, a psychiatry researcher at the Yale University School of Medicine, told The Guardian that:

[An animal’s] history of winning is one characteristic of social dominance that is relevant for almost every social species studied, from insects to primates. Because each brain region investigated has its direct primate homologue, the present study opens new opportunities for understanding the involvement of brain regions linked to planning and decision-making in establishing social hierarchies.

Second, it opens up an avenue of research that could allow us to use similar implants as a form of therapy for individuals who are either too competitive or not competitive enough. If the system can be optimized and applied to humans, we could use it to curb the impulses and behaviors of overly aggressive individuals or to empower people with social anxiety or confidence-related disorders.

This is not the first startling behavioral change that can be instigated using optogenetics. Studies over the last few years have found that the technique can be used to make a sterile mouse fertile again, combat addiction by rewriting memories and experiences, and send mice into a rage that causes them to bite and attack anything in their path.

The post Activity in Part of the Brain Could Determine Whether You are a “Winner” appeared first on Futurism.

Hypothalamic Stem Cells

The hypothalamus is the region of the brain that helps to regulate internal conditions like body temperature and blood concentration, but new research shows that it may fail us as we age. The research indicates that as the hypothalamus’s stem cells die off, the region actually starts to promote aging, causing mental and physical faculties to decline at a more rapid pace.

In the past, researchers have observed that the hypothalamus becomes inflamed over time. This led them to posit that the area is connected to aging. Recent research on mice proved that reversing the inflammation in the hypothalamus increases the animal’s life span and slows physical deterioration. In this latest study, scientists focused on the stem cells of the hypothalamus. In younger animals, these stem cells divide and replace damaged and dead cells. However, as this research shows, over time the number of stem cells present in the hypothalamus drops. In old age, they are essentially gone.

The team believed they were on to something, but undertook practical experiments to see if their ideas were borne out by the evidence. First, they genetically altered mice so that their hypothalamic stem cells were depleted earlier than would occur naturally. Reducing the stem cells in the mice by around 70 percent shortened their life spans by about 8 percent. This accelerated loss of stem cells also caused a loss of coordination, endurance, and memory, as well as behavior that was less youthful, curious, and social. When the team injected stem cells into the hypothalami of middle-aged mice, those mice gained about 10 percent more mental and physical capability compared to mice injected with regular brain cells.


Right Area, Different Mechanism

Originally, scientists believed that the stem cell loss could be significant because it meant the host was unable to repair and replace damaged and dead cells. However, when the hypothalami of middle-aged mice were injected with stem cells, they improved too rapidly for this to be the correct mechanism. Instead, the team suspected microRNAs might be at work.

MicroRNAs are small RNA molecules manufactured and released by stem cells to carry messages to other cells; depending on the messages they carry, microRNAs may alter the proteins those cells produce. The researchers discovered that the stem cells in the hypothalamus produce massive amounts of microRNAs packaged in tiny exosomes. In fact, when they injected mice with exosomes packed with microRNA from young hypothalamic stem cells, the injections were almost as effective at slowing signs of mental and physical aging as injections of the stem cells themselves.

Recent research has focused on the role of mitochondria in aging and on the use of induced pluripotent stem (iPS) cells in combating aging in hematopoietic stem cells. Research from this year has also shown that cannabis-based treatment appears to reverse aging in the brains of mice. In light of this latest research, protecting or replacing the stem cells of the hypothalamus — or somehow reinforcing or replicating the microRNA effects — could slow aging in humans. This could mean testing current drugs such as acarbose (presently used to treat diabetes) to see if they can suppress the hypothalamic inflammation that causes the stem cells to die.

The post Hypothalamic Stem Cells Could Provide New Insights Into Aging appeared first on Futurism.

An Underappreciated Idea

In 1915, Germany introduced chlorine gas into the ongoing horror that was World War I. It was the war to end all wars, bringing devastation that would scar a generation. But one man’s mind continued to soar above the poisoned clouds as he submitted the final part of a series of papers that laid out his theory of gravitation — something he called “General Relativity.”

Einstein’s theory was, simply put, a mathematical framework relating gravitation to the geometry of spacetime. It suggests that mass affects the fabric of space and time itself, and it predicts a number of astronomical phenomena.

In the years following Einstein’s publication of his theory, his ideas were celebrated by the public as the battles came to an end. Leopold Infeld, who wrote The Evolution of Physics with Einstein, believed that Einstein’s theory gave survivors a form of intellectual escapism after the war. He concluded that “people were weary of hatred, of killing,” Ronald Clark quoted Infeld as saying. “Here was something which captured the imagination […]. The mystery of the Sun’s eclipse and of the penetrating power of the human mind.”

But the theory’s popularity — at least in the scientific community — quickly burned out. In a recent paper published in The European Physical Journal H, researchers detailed how scientists largely ignored General Relativity from the mid-1920s to the mid-1950s.

The researchers suggest that this was likely because scientists viewed the theory as a lot of extraordinarily complicated math that produced results very similar to those of Isaac Newton’s equations — which are much simpler.

In other words, scientists thought the theory was correct, but just not that important, dismissing its broader implications. So, for about 30 years, scientists actively discouraged research of Einstein’s ideas, which were “viewed merely as providing small corrections to Newtonian gravity,” the researchers wrote.

That was, until the “Renaissance of General Relativity,” as physicist C.M. Will dubbed it.

Relativity Reemerges

Much of the research that fueled this renaissance was conducted within a new branch of theoretical physics — relativistic astrophysics. This “sub-discipline” exploded onto the scene, and, by the 1970s, researchers were mining fruitful experiments from the theory of General Relativity.

Suddenly, scientists were solving previously unsolvable problems, and breakthroughs in astrophysics were radically changing our understanding of the Universe.

This dramatic shift in the status of Einstein’s theory, from an interesting equation to “one of the pillars of modern physics,” was undoubtedly aided by new technologies, increased research funding, and a stronger partnership between the fields of physics and mathematics. However, the researchers suggest that the renaissance primarily represented a growth in scientists’ understanding of the theory and its real-world implications.

Finding a general way to extract the physical content of the theory first became a major concern in the Renaissance years, with many papers opening with remarks concerning the difficulty of interpreting General Relativity. Only after the central issues had been resolved in the Renaissance was General Relativity applicable to any given physical problem, providing an interpretation in its own terms.

Whatever the cause, this renaissance brought with it our discovery of the existence of black holes, applications of gravitational lensing, and, most recently, the detection of gravitational waves — all of which were predicted by Einstein’s theory.

Einstein died in 1955. He certainly experienced success in his time — but not all of it. Not even most of it, some would argue, because the past few decades have represented not only a renaissance in General Relativity, but in our understanding of the universe. Scientists believe, as the researchers wrote, that the future of General Relativity “will be as astonishing and revealing as its past has already been.”

Even beyond these scientific discoveries, Einstein’s ideas will continue, as Infeld stated, to turn “human eyes looking from an Earth covered with graves and blood to the heavens covered with the stars.”

Thank you, Einstein.

The post Einstein’s Theory of General Relativity was Largely Ignored for Three Decades appeared first on Futurism.

It’s getting hot in here… put out the flame with this self-imploding extinguisher.

The post This Fire Extinguishing Ball Will Put out a Small Fire in One Toss appeared first on Futurism.

Emerging Leaders

Quantum computers are, arguably, the next great evolutionary step in the development of computing tech. Their successful creation will be a paradigm-shifting achievement—one that will alter the future of humanity and revolutionize operations across a broad spectrum of applications.

And in case you missed it, we just took a massive leap forward into this new realm.

Last week, in a stunning reveal at the 2017 International Conference on Quantum Technologies, held in Moscow, Russia, the co-founder of the Russian Quantum Center and head of the Lukin Group of the Quantum Optics Laboratory at Harvard University, Mikhail Lukin, announced that his team had successfully built a 51-qubit quantum computer.


A press release was distributed to conference attendees shortly after the announcement stating that Lukin’s team “created and successfully tested a programmable 51-qubit quantum computer, thus becoming the leader among those engaged in the quantum race.”

The announcement was made a few hours before the main event of the conference, a public talk by John Martinis, the man in charge of building Google’s 49-qubit quantum computer (the timing of Lukin’s reveal was, no doubt, very intentional).

A Brewing Race

Futurism was fortunate enough to be present at the conference in Moscow at the moment of this historic announcement and spoke with Professor Lukin about this achievement. Notably, his group was one of two teams that created the first ever time crystals back at the start of this year. Before diving into his exciting announcement, we asked a little about his research in that area and how it applies to quantum computing.

Basically, the unique thing that happens with these time crystals is that they can be a stable state of matter. These states, in principle, can hold quantum coherence for a long time. So basically, it means you can have super-positions of states. That’s kind of the basic ingredient for all this quantum science and technology.

So on one hand, we can think about using it as memory for a quantum computer—in principle it’s true, but as for practical use…it’s not so clear.

To that end, there is still much uncertainty regarding how (and with what materials) quantum computers can and should be made. However, many entities are racing to be the first to create a working quantum computer, so innovation is moving at an increasingly accelerated rate.

This is a very good thing.

As we approach the physical limits of Moore’s Law, the need for increasingly faster and more efficient means of information processing isn’t going to end—or even slow. To break this down a bit, Moore’s Law reaches its physical limit as the size of transistors shrinks into the quantum realm, where the laws of classical physics no longer reliably apply. As such, developing technology that does operate at the quantum scale will not merely allow the linear progression of computing power to continue; it will launch exponential shifts in power and capability.

Understanding Quantum Computers

The capabilities of this technology are ultimately based on the number of qubits in the system. Each added qubit does not simply add one bit of processing capability; it doubles the number of states the system can represent, increasing capacity exponentially.

For example, 4 classical bits can be in one of 2^4 = 16 possible states, but only one at a time. However, four qubits in superposition (being every possible combination of 0 and 1 at the same time) can be in all 16 states at once, and that number grows exponentially with every qubit added.

This means that a 20-qubit system can represent 2^20 (more than a million) values simultaneously.
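The doubling described above can be sketched in a few lines of Python (an illustration of the counting argument, not a quantum simulation): each added qubit doubles the number of basis states the register spans.

```python
def state_count(n_qubits: int) -> int:
    """Number of basis states an n-qubit register spans: 2 ** n.

    A classical n-bit register holds exactly one of these states at a
    time; n qubits in superposition carry amplitudes for all of them.
    """
    return 2 ** n_qubits

for n in (4, 20, 51):
    # 4 -> 16, 20 -> 1,048,576, 51 -> 2,251,799,813,685,248
    print(f"{n:>2} qubits span {state_count(n):,} basis states")
```

At 51 qubits (the size Lukin announced), the state space already exceeds two quadrillion, which is why even small increases in qubit count matter so much.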

It is unclear how many qubits are required to make an effective quantum computer, but Lukin, right now, stands at the forefront of the field. He notes that finding the answer all comes down to crunching the numbers: “Basically, the only way we are able to find out is by building machines big enough that we can actually really run these algorithms.”

To that end, in his presentation, Lukin mentioned that the team is planning to use their technology to run Shor’s famous quantum algorithm. On a sufficiently large operational quantum computer, this algorithm could break modern encryption as we know it. This leaves many experts with the view that quantum computers could act as tools of mass disruption, if not destruction.
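Shor’s algorithm threatens encryption because factoring a number N reduces to finding the period of modular exponentiation; everything around that step is ordinary number theory. A minimal classical sketch (the function names are ours; the brute-force period search below is exactly the step a quantum computer performs exponentially faster):

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r = 1 (mod n), found by brute force.
    This exhaustive search is what Shor's algorithm replaces with a
    fast quantum subroutine."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def factor_via_period(n: int, a: int) -> int:
    """Recover a nontrivial factor of n from the period of a (mod n)."""
    r = find_period(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        raise ValueError("unlucky base a; try another")
    return gcd(pow(a, r // 2) - 1, n)

# Toy example: period of 7 mod 15 is 4, and gcd(7**2 - 1, 15) = 3.
print(factor_via_period(15, 7))  # -> 3
```

For a 2048-bit RSA modulus the period search is hopeless classically, which is why a large, reliable quantum computer running the quantum version would be so disruptive.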

However, even the scientists who are working on developing this technology cannot identify all of the innumerable ways that quantum computing will transform our world.

A New War Brewing

During the Cold War, we saw an intense boost in scientific advancements, especially in the field of rocketry and space-based research. The Space Race launched with the United States and Russia moving the required science and technology forward at breakneck speeds. Of course, the innovators of this era did not fully realize all the benefits that this space race would bring humanity (memory foam, cochlear implants, artificial limbs, fire-resistant reinforcements, and on and on).

Now, a new contest for technological supremacy is brewing, and while the United States and Russia are certainly at play, China and even private enterprise have also joined the fray.

Much like the scientific community’s uncertainty about the complete scope of space exploration’s potential, physicists working on quantum computers are quick to say that, while they have some inkling of what the first quantum computers will be able to do, they are not sure what these immensely powerful machines will ultimately be capable of. But one thing is certain: the future is going to be like nothing we’ve ever seen.

This interview has been slightly edited for clarity and brevity.

The post A New Breakthrough in Quantum Computing is Set to Transform Our World appeared first on Futurism.

Novel DNA Dye

Although the basic double helix structure of the DNA molecule is known, its three-dimensional organization has continued to elude scientists. Every single cell somehow packs DNA that is two meters long into its nucleus, a tiny space only one-thousandth of a millimeter across. The question researchers have been trying to answer, then, is how does the structure manage such extreme compaction while maintaining function?

The three-dimensional organization and structure of DNA influences everything from how our cells function to how we pass on our genes to offspring. Scientists at the Salk Institute in La Jolla, California, have now provided a world-first view of the three-dimensional structure of human chromatin intact and inside the nucleus of live human cells. Prior to these results, chromatin — a mix of DNA and proteins — had never before been seen without the cells it was extracted from being broken down and subjected to harsh DNA processing techniques.

The Salk researchers developed a novel combined technology called ChromEMT, which involves painting the chromatin with a unique metal-casting DNA dye that allows highly detailed visualization when paired with advanced electron microscopy (EM). The researchers were able to see the chromatin structure in both resting and mitotic cells, and the captured EM image data provides a visualization that makes it easy to see where the chromatin’s contour lines vary. However, the researchers did not see anything resembling the chromosomes we have come to imagine when we think of DNA and mitosis.

“The textbook model is a cartoon illustration for a reason,” first author and Salk research associate Horng Ou said. “Chromatin that has been extracted from the nucleus and subjected to processing in vitro — in test tubes — may not look like chromatin in an intact cell, so it is tremendously important to be able to see it in vivo.”

A New View

Rather than the model often depicted in textbooks, in which “beads on a string” fold into progressively larger higher-order structures, the chromatin the researchers saw remained a “beads on a string” arrangement forming a semi-flexible chain in both dividing and resting cells. The chain bent and flexed to achieve different levels of compaction, suggesting that the packing density of the chromatin, rather than any higher-order structure, is what determines the active and suppressed regions of the genome.

They also used 3D microscopy reconstructions to depict how the RNA polymerase transcription process might also be controlled by chromatin’s variable packing density. In terms of what this means for genetics research, these results do more than revise the standard model for DNA and transcription. They also suggest a possible new method for preventing, diagnosing, and treating diseases like cancer: controlling access to chromatin. The team will next explore whether the structure of chromatin is universal from organism to organism and among cell types.

The post Scientists Unravel the Mystery of DNA Organization appeared first on Futurism.

Within less than 10,000 years, we’ll have used up all the particles in the universe.

The post Watch: How Many Particles Exist in the Universe? appeared first on Futurism.

Where is knowledge biologically in the brain?

The post Watch: Is Anything “Real”? appeared first on Futurism.

The Angel Particle

For every fundamental particle in the universe, there is an antiparticle with the same mass and the opposite charge — at least that’s what we thought. When particles and antiparticles meet, they immediately destroy each other in a burst of energy, but physicists have long theorized that somewhere in the universe exists an exception to this rule. Scientists have finally found the first strong evidence for this type of particle, which they’re calling the “angel particle.”

Back in 1928, physicist Paul Dirac predicted that all fundamental particles had their opposites, and a few years later the positron was discovered. This was the first antimatter particle and the opposite of the electron, and it confirmed the prediction. By 1937, physicist Ettore Majorana had calculated that there was a missing member of the fermion family. Electrons, protons, quarks, neutrons, and neutrinos are all fermions with corresponding antiparticles, but Majorana believed that there should be an additional particle which was its own antiparticle.

Initially neutrons and neutrinos were the most likely Majorana fermion suspects, since they have no charge — but antineutrons have since been discovered. This leaves just neutrinos, and physicists are working to find out whether they might in fact be their own antiparticles. That answer is not likely to be forthcoming anytime soon, because those experiments are very difficult to do. In the meantime, scientists are searching for “quasiparticles.”

Quasiparticles And Smoking Guns

Quasiparticles aren’t exactly natural particles. They’re created when the collective behavior of electrons in a solid material gives rise to certain properties typically found in particles that are weakly interacting in free space. Thinking of the ways physical properties often change at the nanoscale, imagine a nanomaterial that behaves as if it’s comprised of bouncing bubbles, exhibiting collective behavior. The quasiparticle is really the behavior, not a physical particle per se.

Similarly, quasiparticles exist only within very specific conditions. If they exhibit all the right properties, however, they might be considered to be Majorana fermions. Researchers from Stanford and the University of California say they found a “smoking gun experimental signature” that indicates these hypothetical fermions are real.

The team stacked quantum materials to create a superconducting topological insulator in hopes of revealing these quasiparticles along the edges. When magnetic material was added, the researchers observed electrons moving along one edge in one direction, and along the other edge in the opposite direction. Then, by sweeping a magnet over the material, they slowed the electrons down and forced them to change directions. Quasiparticles then began emerging from the materials in pairs, turning in half-steps compared to the electrons. This made sense to the researchers, since they were only half particles.

This loss of half of each quasiparticle pair in this process was precisely the evidentiary phenomenon the researchers had been hoping to find. Senior author of the paper Shoucheng Zhang proposed that the quasiparticle be called the “angel particle” based on Angels and Demons by Dan Brown — a novel which describes a matter and antimatter bomb. While this discovery is for now still theoretical, this knowledge could one day be of service to improving the security of quantum computers.

The post Scientists Find Evidence of a Particle That is Its Own Antiparticle appeared first on Futurism.

An Uncertain Bad Experience

Everyone worries sometimes, but about one in four adults in the U.S. suffers from an anxiety disorder, a mental illness characterized by anxiety that interferes with their daily lives. New research from St. Louis’s Washington University School of Medicine provides insight into the workings of the anxious brain, revealing a group of neurons that becomes active when animals are faced with unpleasant events that are possible — but not certain.

“We found a population of neurons that activated specifically when monkeys thought something bad or annoying – like a puff of air to the face – might be coming, but not when they knew for certain it was,” study author and assistant professor of neuroscience and of biomedical engineering Ilya Monosov told MedicalXpress. “These neurons did not activate when the animals thought they might get something good, like a sip of sweet juice. Only an uncertain bad experience activated these cells.”

The team studied the anterior cingulate cortex, a region of the brain’s outer layer where its two hemispheres meet. The area plays a clear role in mental disorders like anxiety, attention deficit disorder, depression, and obsessive-compulsive disorder. Researchers have confirmed differences in the anterior cingulate cortex between healthy people and people with these mental disorders. However, the functions of cells associated with these brain differences, and their consequences, are not well-understood.

Cellular Roots Of Anxiety

For their study, the team trained the monkeys to respond not just to the uncertainty around when they would receive the unpleasant puff of air in their face, but to the certainty and uncertainty of possible rewards as well. They observed that possible rewards engaged an entirely different group of neurons.

This research opens up possibilities for studying the roots of anxiety, and could one day lead to new treatments. The findings from the study also help elucidate the cellular bases of complex mental processes. It appears that our brain’s response to uncertainty is rooted in our neurons.

“Now that we know which cells are active when an animal is faced with the uncertainty of a bad experience, we can try to disrupt the activity of these cells,” Monosov said to MedicalXpress. “It opens up avenues of research, which may one day lead to new ways to treat disorders such as anxiety and depression.”

The post New Study Reveals a Group of Neurons Active in Anxious Brains appeared first on Futurism.

Is it the awareness of your own experience and thoughts?

The post Watch: What is Consciousness? appeared first on Futurism.

New Loophole-Free Bell Test

One of the most interesting (and confusing) phenomena in quantum physics is quantum entanglement. We observe this quantum effect when we see entangled particles affect each other regardless of distance. For example, when we measure the state of one particle at a distance from another and the measurement of the state of the first instantly influences the state of the other, we have quantum entanglement.

Einstein was disturbed by this, and didn’t like the idea that quantum entanglement might violate the speed of light if the particles were somehow sending each other information faster than light could travel. Therefore, he developed the idea of local realism, which assumes a pre-existing value for any possible measurement of a particle — an objective value a particle must have. This theory is based on the idea of locality, the principle that there is a minimum amount of time it takes for distant objects to influence each other, and realism, the idea that objects exist whether or not they are measured.

In the 1960s, physicist John Bell developed a famous test to determine whether particles really do influence each other in the way quantum entanglement suggests. In the Bell test, a pair of entangled particles are sent in different directions toward different locations. A device measures the state of each particle in each location, and the settings of each device are set at random; this way it’s impossible for device one to know the setting of device two at the time of measurement, and vice versa.

If quantum entanglement is real, then local realism shouldn’t work, and the Bell inequality test should be violated. If scientists do observe violations of the Bell inequality test, it means that quantum mechanics violates locality, realism, or both — making local realism incorrect. In recent research, physicists have reported some of the best evidence to date that quantum entanglement exists, and the quantum world is free of the constraints of local realism. Researchers performed a Bell inequality test that was, essentially, loophole-free, and demonstrated that two atoms one-quarter of a mile apart shared correlations probably caused by quantum entanglement. According to local realism, this should be impossible.

Image Credit: Rosenfeld et al. Published by the American Physical Society
The only way the observed correlations could be explained by local realism would be if there were unknown “hidden variables” instead of quantum entanglement. And, according to the researchers, the odds of this being the case are less than one in a billion. The odds drop even further once all seven months of their accumulated data are accounted for, dipping to about one in ten quadrillion. The team concluded that the laws of the quantum world violate locality, realism, or possibly both.
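The arithmetic behind a Bell test of this kind (the CHSH form) can be sketched directly. Quantum mechanics predicts a correlation of -cos(a - b) between measurements of a maximally entangled pair at detector angles a and b; local realism caps the CHSH combination at |S| <= 2, while the quantum prediction reaches 2*sqrt(2). This is the textbook calculation, not the experiment’s actual data:

```python
from math import cos, pi, sqrt

def correlation(angle_a: float, angle_b: float) -> float:
    """Quantum prediction for a singlet pair: E(a, b) = -cos(a - b)."""
    return -cos(angle_a - angle_b)

def chsh(a: float, a2: float, b: float, b2: float) -> float:
    """CHSH combination S; any local-realist theory gives |S| <= 2."""
    return (correlation(a, b) - correlation(a, b2)
            + correlation(a2, b) + correlation(a2, b2))

# Detector angles chosen to maximize the quantum violation.
s = chsh(0, pi / 2, pi / 4, 3 * pi / 4)
print(abs(s), ">", 2, "and equals", 2 * sqrt(2))
```

Real experiments measure S from coincidence counts rather than computing it from theory, but any measured |S| above 2, as in the atom experiment described here, rules out local realism up to the remaining loopholes.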

Closing Loopholes

Although the test in this research was essentially loophole-free, not all loopholes are completely closed. One of the last possible loopholes that remain for most Bell tests has to do with how particle states are measured. It is critical that hidden variables are not somehow allowing particles to synchronize their properties by influencing the choice of measurement. This is called the freedom of choice or free-will loophole. In this research, the team used a high-speed quantum random number generator to eliminate this loophole, but the minuscule possibility of communication between random number generators or with other experimental elements technically remains possible.

Other research has used humans to randomly choose numbers, relying upon the randomness of unique human minds. The physicists in this study felt that an extraterrestrial random number generator is the only way to truly close the loophole since such massive distances would prevent covert communication. There are several such extraterrestrial random number generators under development in physics labs now, intended for this purpose.

Closing the loopholes matters, because scientists hope to use quantum entanglement to safely encode messages. The demands of quantum cryptography would mandate further refinement of the measurement process. Ultimately, the hope is that quantum entanglement will allow us to transmit quantum information over long distances almost instantaneously, making quantum computing and interstellar communication possible.

The post Physicists Take One Large Step Towards Proving Quantum Entanglement appeared first on Futurism.

Axial–Gravitational Anomaly Observed

Physicists have reported observing an exotic particle physics effect, called the axial–gravitational anomaly, in a solid Weyl semimetal crystal. The effect describes the bending of the space-time of the cosmos, and it is theorized to occur in noticeable ways in immense gravitational fields — like those close to a black hole, or those that existed around the time of the Big Bang. The effect is significant: it states that huge gravitational fields should destroy the symmetry of some mirror-image pairs of particles, called Weyl fermions, resulting in an imbalance — and, in the end, more of one type of particle. 

Physicist Johannes Gooth and his team at IBM Research near Zurich, Switzerland, say they observed evidence for the long-predicted axial–gravitational anomaly. For obvious reasons, until now no one had been able to recreate the right conditions for observing the effect in a laboratory. However, to create an in-lab analogue of the anomaly, the researchers exploited a strange parallel between temperature and gravity in niobium phosphide crystals.

The effect inside the crystal is a bit like a high school dance: instead of matched pairs of left and right feet, there's an abundance of left feet and a shortage of right feet. This result supports the idea that quantum crystals can host experiments on physics effects that have never been observed before, except under exotic circumstances, such as near or inside black holes.

Practical Uses Of Quantum Crystals

Physicists note that the anomaly generates an electric current as it causes more electrons to move from one side of the exotic material to the other — a current which they were able to measure. This might have practical applications for electricity generation from waste heat sources, and it could also potentially provide other practical uses in electronics.

Understanding the way this anomaly manifests in exotic materials like quantum crystals could also generate new lines of inquiry in physics. IBM is hoping this research will help them improve the efficiency of a range of materials that use temperature gradients to generate electricity.

“For us, whenever we build transistors, we are always bound to conservation laws,” Gooth said to IEEE Spectrum. “These define and limit everything. And now suddenly we have materials where these high-energy, quantum mechanics equations allow for us to break some of these laws. It opens up a completely new playground for device design. Because it’s simply new physics, which circumvents classical limits.”

The post This Strange Particle Physics Effect Could Help Explain the Physics of Black Holes appeared first on Futurism.

Searching For Tectonic Plates

Tectonic plates — which divide Earth’s crust and reshape our planet in an ongoing, dynamic process — may be the key to supporting life. In fact, because Earth is the only planet known to be home to both life and plate tectonics, researchers are now wondering if there is a relationship between the two. Some researchers believe that life is made possible only through the temperature regulation that is imparted to a planet through shifting plates.

If tectonic plates are essential to life, finding life elsewhere — or worlds that can support it — may depend on finding planets with tectonic activity. Cayman Unterborn, an Arizona State University astronomer, has published research that indicates few exoplanets could sustain plate tectonics for long periods of time.

The results that Unterborn and his team compiled suggest that not only are plate tectonics rare, but they may also be difficult to sustain even when they do occur. Still, if astronomers are correct that up to 40 billion potentially habitable planets exist in our galaxy, then even under the study's findings, about 13 billion of them may yet be habitable worlds.
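As a back-of-the-envelope check of the figures quoted above (the survival fraction is inferred here from the article's two numbers, not taken from the paper itself):

```python
# Figures as quoted in the article.
galaxy_habitable_zone_planets = 40e9  # optimistic estimate of potentially habitable planets
tectonically_viable_planets = 13e9    # figure remaining after the study's tectonics cut

# Share of habitable-zone planets that survive the tectonics constraint.
surviving_fraction = tectonically_viable_planets / galaxy_habitable_zone_planets

print(surviving_fraction)  # 0.325: roughly one planet in three survives the cut
```

In other words, even if long-lived plate tectonics rules out about two-thirds of candidates, the sheer size of the starting population keeps the count of potentially habitable worlds in the billions.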

The Field Of Exogeology

How critical to life are plate tectonics? They seem to have been vital here on Earth, keeping the planet from freezing. And while there is evidence of geological activity on planets without plate tectonics, such as Mars, it is past activity. This suggests that such activity can't be sustained without plate tectonics. In other words, plate tectonics may not be the key to all geological activity — but they could be the key to sustaining geological activity and stabilizing temperature.

This latest research contradicts earlier work, which theorized that plate tectonics would be almost inevitable on larger rocky planets. Unterborn's work takes into account not only a planet's size, but also its composition. There is, of course, no way yet to peer inside the cores of other planets, so Unterborn and his team used computer models to predict what the mantle and crust of the planets studied would look like based on the composition of their host stars.

The results highlight that the habitability of a planet cannot be determined by the “Goldilocks zone,” or its orbital distance from its star, alone. Density alone is also insufficient to determine potential habitability. Unterborn sees his work as the next step in a new field: exogeology — the marriage of geology and astronomy.

Unterborn and his colleagues have proposed studying how different materials react under high temperatures and pressures to the NASA Astrobiology Institute. The goal would be to explore how plate tectonics begin on a planet, as changing materials crack the lithosphere. “I think it’s definitely the future,” Unterborn said to Scientific American. “I’m glad to be at the forefront of it.”

The post The Tectonic Activity of Earth May Be the Key To Supporting Life appeared first on Futurism.

Cold Or Fuzzy?

Dark matter is the unseen, mostly unknown material that comprises the bulk of our Universe. However, exactly what dark matter is has long been debated, in large part because it has yet to be directly detected. New research from an international team of cosmologists narrows down the possibilities of what dark matter might be using data from intergalactic space.

The research findings conflict with the concept of "fuzzy dark matter," and instead support a "cold dark matter" model. These results will change the way scientists search for dark matter, informing what kinds of properties they seek out. "What we have done is place constraints on what dark matter could be—and 'fuzzy dark matter,' if it were to make up all of dark matter, is not consistent with our data," said lead author Vid Iršič, a postdoctoral researcher in the University of Washington Department of Astronomy.

Image Credit: Vid Iršič
Both the "fuzzy" and "cold" dark-matter theories seek to explain the ways in which dark matter seems to affect galaxies and the intergalactic medium (IGM) that separates them. However, the two models fare differently when tested against the latest observations of the IGM. When the research team simulated the IGM and calculated, based on those observations, what type of particle dark matter might be, they found that the fuzzy dark matter theory doesn't seem to work, although that kind of particle may still exist.

Searching For Dark Matter

Regardless of what dark matter is, identifying it remains an elusive goal for many scientists. But why is discovering the nature of dark matter so important? Simply put, each discovery we make about dark matter redefines our understanding of the fundamental forces in our Universe. For example, astronomer Hendrik Hildebrandt commented in press materials describing his work on dark matter in December 2016: “Our findings will help to refine our theoretical models of how the Universe has grown from its inception up to the present day.”

Image Credit: Vid Iršič
The work may well change the way that scientists, moving forward, engage in their search for dark matter, providing them with a new description of the suspect. “Either way, the IGM remains a rich ground for understanding dark matter,” said Iršič.

The post Scientists Are One Step Closer to Finding Dark Matter appeared first on Futurism.

Potato spindle tuber disease wasn’t a life-or-death situation, but it led to the discovery of viroids.

The post Watch: The Smallest Pathogens on Earth appeared first on Futurism.

Is there a way to define science and separate it from other things?

The post Watch: What Counts as Science? appeared first on Futurism.

Academic Publishing

If we ever needed a timely reminder that in the world of academic publishing not all scientific journals are created equal, we now have it.

Image credit: Lucasfilm

To test just how low the quality bar is for exploitative predatory journals, a prominent neuroscientist has tricked four publications into accepting a totally fake paper about midi-chlorians – the entirely fictional life forms in Star Wars that make ‘the force’ possible.

Neuroskeptic, a working neuroscientist who anonymously blogs about science for Discover, set up the sting, submitting the nonsensical study to nine scientific journals – only to have four of them accept it.

The journals approached are among those sometimes described as "predatory" in science circles because they exploit researchers by charging fees to have their papers published.

But in this case, three of the publications just went ahead and published the fake paper straight up – clearly not having read or checked it first – even without requiring payment of a fee.

Another, the American Journal of Medical and Biological Research, also accepted the paper, but demanded a $360 fee before publishing it.


The absurd thing, as Neuroskeptic explains, is the average human being would only need about five minutes (or less) with the paper to see that it’s entirely bogus and riddled with inexplicable Star Wars references.

For a start, it's written by none other than the decidedly fishy-looking Dr Lucas McGeorge and Dr Annette Kin, and while at a very quick scan it might pass for a chemistry discussion, that's only because Neuroskeptic scraped the content of the Wikipedia page on the mitochondrion (real) and reworded it, changing those references to midi-chlorian/midichlorian (not so real).

To further make things obvious – just in case any ‘peer-reviewers’ working for the publications were actually paying attention – Neuroskeptic dropped in entire passages ripped off wholesale from Star Wars, inserting them not-so-subtly into the text.

“Midichlorians-mediated oxidative stress causes cardio-myopathy in Type 2 diabetics. As more fatty acids are delivered to the heart, and into cardiomyocytes, the oxidation of fatty acids in these cells increases,” the paper reads, sounding kind of legit and science-y, but then suddenly:

“Did you ever hear the tragedy of Darth Plagueis the Wise? I thought not. It is not a story the Jedi would tell you. It was a Sith legend. Darth Plagueis was a Dark Lord of the sith, so powerful and so wise he could use the Force to influence the midichloria to create life.”

If that wasn’t enough to twig the peer-reviewers, an admission in the study that the “majority of the text in the current paper” was in fact stolen from Wikipedia surely should have let them know something was up?

But again, nothing – at least not on the part of the journals that accepted the paper.

Some of the other journals that didn't accept the paper did pick up on the Star Wars references, but puzzlingly, Neuroskeptic was simply asked to revise the text and resubmit it – including suggestions to revert the spelling of "midichlorians" back to "mitochondria". Really?

As funny as it is to see this dysfunctional peer review process laid bare by such overt fakery, it also highlights a serious breach that hurts science – because the existence of this kind of publishing scam exploits real researchers who are trying to get their work noticed – and brings the whole notion of peer review into disrepute.

“It’s just a reminder that at some ‘peer reviewed’ journals, there really is no meaningful peer review at all,” Neuroskeptic explains.

“This matters because scientific publishers are companies selling a product, and the product is peer review.”

The ‘findings’ – such as they are – are reported in the International Journal of Molecular Biology: Open Access, the Austin Journal of Pharmacology and Therapeutics (PDF), and the American Research Journal of Biosciences.

The post A Neuroscientist Just Tricked 4 Dodgy Journals Into Accepting a Fake Paper on ‘Midi-Chlorians’ appeared first on Futurism.

A “Wet” Mantle

Scientists have long speculated about the presence of water on the Moon. Previous studies found traces of water in the icy shadowed regions of the Moon’s poles, but a recent study from Brown University suggests that there may be substantial amounts of water hiding inside the lunar rock.

Unlike the water in the lunar poles, which could have been deposited by hydrogen carried on solar winds, the water in the mantle arrived very early in the Moon's formation, Brown University geologist Ralph Milliken said in an interview. In their study, published in the journal Nature Geoscience, Milliken and his team examined pyroclastic deposits — rock layers probably formed by large volcanic eruptions — using satellite data from the Moon Mineralogy Mapper instrument aboard India's Chandrayaan-1 probe.

“Our work shows that nearly all of the large pyroclastic deposits also contain water, so this seems to be a common characteristic of magmas that come from the deep lunar interior,” Milliken said in the interview. “That is, most of the mantle of the moon may be ‘wet.’”

Fueling Future Explorations

How large amounts of water ended up under the Moon's surface remains unclear. Understanding how the water got there, possibly "delivered after the giant impact event," as Milliken's team thinks, could help explain how water came to be on Earth.

Even more interesting is the potential help lunar water could give to future exploration on the Moon and beyond. “These deposits may be much easier to access than potential water ice in shadowed regions at the lunar poles,” Milliken added in the interview. “Water is heavy and expensive to take from Earth to space, so any bit of water that you can get on the moon instead of bringing with you from Earth is a big deal and opens up possibilities for sustained human presence on the moon.”

This would be especially useful since some Mars missions in the works right now — notably NASA's — involve using the Moon as a jumping-off point. A potential lunar base would be easier to maintain given the presence of water on Earth's satellite.

"The better we understand how much water is there, then the better we can estimate the processes responsible and the usefulness of the deposits for future human exploration," Milliken said.

The post Scientists Have Discovered That There is Water Under the Moon’s Surface appeared first on Futurism.

DARPA “Safe Genes” Program

Last week, the US Defense Advanced Research Projects Agency (DARPA) created a program called "Safe Genes" with an award of $65 million in research funding. Seven research teams will share the funding, all dedicated to the broader goal of improving the accuracy and safety of CRISPR gene editing techniques. DARPA's stated goals include improving the understanding of gene editing technologies, harnessing them more predictably for beneficial uses, and addressing intentional or accidental misuse that could cause health and security problems.

Image Credit: DARPA

The "Safe Genes" program sets forth three technical objectives, and each of the seven teams housed under the program will be assigned at least one of them. DARPA describes these objectives as follows: "develop genetic constructs — biomolecular 'instructions' — that provide spatial, temporal, and reversible control of genome editors in living systems; devise new drug-based countermeasures that provide prophylactic and treatment options to limit genome editing in organisms and protect genome integrity in populations of organisms; and create a capability to eliminate unwanted engineered genes from systems and restore them to genetic baseline states."

The seven teams, drawn from different research institutions, will focus on different areas of research, from developing an "on and off" switch for genome editing in bacteria, insects, and mammals in order to take aim at diseases like malaria, to safeguarding genomes by detecting, preventing, and reversing radiation-induced mutations. This kind of work, improving our understanding of CRISPR and how it works, should help settle disputes about whether the technique causes unintended mutations and other scientifically controversial issues. The ethical and safety components of the project may also defuse controversies about using CRISPR to revive extinct species or create new ones.

The post New DARPA Initiatives Aim to Improve Safety and Efficiency of CRISPR Gene Editing appeared first on Futurism.

Surprise, Surprise!

The detection of peculiar radio signals from a dim star roughly 11 light-years away from Earth had the public speculating wildly last week about the possibility that we’d finally detected an alien communication.

After looking further into the signals with a little help from the Search for Extraterrestrial Intelligence (SETI) Institute, astronomers at the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo are ready to confirm the source of the signals from Ross 128, and it's not ET.

“We are now confident about the source of the Weird! Signal,” Abel Mendez, director of the PHL, wrote in a blog post on Friday. “The best explanation is that the signals are transmissions from one or more geostationary satellites.”

The signal — officially named the Weird! Signal — was only spotted near Ross 128, one of several red dwarf stars Mendez and his team have been studying, because Ross 128 is “close to the celestial equator where many geostationary satellites are placed,” Mendez explained.

Weird! and Wow!

The PHL also conducted an informal survey of almost 800 participants, including more than 60 astronomers, to find out what people thought was causing the Weird! Signal. Most said it was likely astronomical in nature. “This is interesting since in the absence of solid information about the signal, most astronomers would think that [radio interference or instrumental failures] would probably be the most likely explanation,” Mendez noted.

Unexplained signals, like the more popular and recently resolved Wow! signal, draw significant attention because many people are hopeful that we’ll find some trace of alien life. Indeed, the unofficial public survey showed that roughly 200 of the 800 participants believed the Weird! Signal was communication from intelligent alien life.

However, as Mendez explained, “Unexplained here does not mean inexplicable; it just means we are not able to tell which is the precise source from many possibilities.”

Just like the Wow! Signal, the Weird! Signal from Ross 128 demonstrates that there’s likely to be some non-alien answer to many of the mysteries we uncover from the universe.

Of course, this doesn’t mean that the search for alien life isn’t worth it. Before we can consider any such explanation, however, we need to explore all stellar and astronomical possibilities. Sometimes, these possibilities may take 40 years to be completely explored, as was the case with the Wow! Signal, but such is the nature of scientific inquiry.

“This was a great experience of open science,” Mendez wrote. “Sometimes projects, observational campaigns, or missions do not necessarily reach their objectives. The lesson here is that we all need to continue exploring and sharing results openly. Some people prefer to only learn about the successes, but others prefer science in real-time, no matter the end result.”

The post Source of Weird! Signal From Nearby Star Finally Confirmed appeared first on Futurism.

IC 342

IC 342, or the "Hidden Galaxy," is a galaxy that lies behind the plane of the Milky Way's disk as seen from Earth, a cloudy, dusty region that has so far been exceedingly difficult for scientists to see through. While it is, cosmically speaking, relatively close to the Milky Way — 10 million light-years away from us — the galaxy's location has presented researchers with many challenges. IC 342 is in fact gloriously bright and fairly large. But between us and this galaxy lie stars, a variety of gases, and billowing clouds of cosmic dust. And so, even though researchers are using technologies as advanced as the Hubble Space Telescope, it remains "hidden" and difficult to image and, therefore, study.

An image of the “Hidden Galaxy.” Image Credit: ESA/Hubble & NASA

It might be surprising to learn that, despite the many advancements in space technology, many cosmic objects remain relatively mysterious to us simply because we cannot see them. One example came just last year, when researchers finally managed to observe a particularly elusive brown dwarf. Because such objects are seen as a link between planets and stars, it is essential that we understand them as well as we can.

It was a long and arduous journey for scientists to capture a picture of Pluto that provided substantial detail or information. The Hubble Space Telescope, while a vital resource, simply was not up to the task in the same way that the New Horizons spacecraft was. Throughout our Solar System and beyond, there exist obstacles as simple as dust that have a surprisingly massive impact on our ability to observe and study the cosmos.

Future Observations

IC 342 remains relatively elusive to current technologies like probes and the Hubble Space Telescope; the conditions surrounding this galaxy are insurmountable by existing means. But this doesn't mean that regions of the cosmos like IC 342 are visually "off-limits" forever. Set to launch in October 2018, the James Webb Space Telescope (JWST) is poised to succeed the Hubble, providing improvements that should expand our view and knowledge of the Universe we inhabit. The telescope is said to be so sensitive that it could detect the heat signature of a bumblebee at the distance of the Moon. In addition, the Hydrogen Epoch of Reionization Array (HERA) radio telescope is being upgraded to allow scientists to observe what the Universe was like 13 billion years ago. Researchers are even exploring ways to use the Sun as a gravitational lens, further expanding our cosmic observational capacity.

Technologies like the JWST and the application of gravitational lensing will only continue to develop. At this moment, galaxies like IC 342 remain mysterious. While scientists can, and do, explore and analyze them through the limited information available, much is still unknown. But, just as cosmic objects like the troublesome brown dwarf and Pluto have moved from relative obscurity to become the subjects of detailed portraits, parts of the Universe that are now elusive might not always be that way. Technological advances show no signs of slowing, and as so many groups push to get humans to Mars and continue to pair machine learning with advancing robotics, there's no telling what innovation and ingenuity might arise. We might learn more about the varied objects that inhabit our Solar System, or further investigate the potential for life beyond Earth.

With the launch of the JWST, other emerging telescopic advancements, and potential future inventions, the shadowy places throughout the Universe will be illuminated. Given that the Universe is expanding and we inhabit such an infinitesimally small piece of it, there might always be parts of the cosmos that elude us. The quest to see what we could not see before may never end, but that just means we have infinite possibilities for exploration.

The post Neighboring “Hidden Galaxy” Remains Elusive to Current Technology appeared first on Futurism.

The Highest Form of Flattery

When it comes to overcoming an array of varied problems with an even more diverse set of solutions, scientists have nothing on natural selection. Countless generations of millions of species have developed numerous ways to defy gravity, stick to objects while underwater, and even lift objects thousands of times more massive than themselves.

Medics, scientists, and even engineers have long been interested in learning how biology has overcome certain obstacles so they can glean principles that we might apply to our own challenges. A historical example of such a strategy can be found in the Wright brothers' study of birds as they worked on their first plane. They observed how the shapes of birds' wings affected airflow and lift, and incorporated this information into what became the first aircraft to complete a sustained, controlled flight.

Today, aerospace engineers continue to use this strategy, now with the focus specifically on owls to make aircraft quieter. And with tools that allow us to study organisms on a molecular level, we can now look at everything from sea creatures to woodland animals for inspiration to improve our tech, our medicine, and — ultimately — our lives.

One company that has successfully adopted this strategy, called "biomimicry," is Gecko Biomedical, founded by Jeffrey Karp. He designed a medical tape to replace sutures and staples that was inspired by — you guessed it — the gecko. By simulating the tiny hairs that allow geckos' feet to stick to walls (and adding some glue for good measure), Karp was able to design a tape that could seal incisions in internal tissues like gastrointestinal tracts.

Karp's team has also gone on to invent a surgical glue after studying the mucus of sandcastle worms. The glue not only remains sticky in wet environments, but is strong enough to hold together a beating heart — an application for which the glue is undergoing clinical trials in Europe.

Solving a Problem

Karp maintains that his goal in biomimicry is never to simply copy certain elements in nature just because we can. Instead, he aims to identify real human needs, and only then look at the natural world to see if it offers any insight.

“When we look to solve problems, it’s not so we can publish papers and get pats on the back from the academic community,” Nick Sherman, a research technician at Karp Lab, said in an interview with the Guardian. “It’s more like, ‘Is this work going to help patients? If not, how do we make it help them?’”

That is why the team has focused on developing medical devices that have the potential to vastly improve patient care, as is the case with the surgical glue. Jean-Marc Alsac, a cardiovascular surgeon who is overseeing the trial, told the Guardian, “This could completely transform how we perform surgery.”

But medicine is not the only field that nature has been transforming. Robotics, too, has been building off of biological models, yielding ‘bots that can do amazing things. For example, engineers recently designed 100-gram (3.5-ounce) robots that work together like ants. When six of these cooperate, they can lift a 1,800-kg (3,900-pound) car.
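The payload figures quoted above work out to a striking per-robot ratio. A quick sketch of the arithmetic (assuming, for illustration, that the load is shared evenly among the six robots):

```python
# Figures from the article: six 100-gram robots cooperating to move a car.
robot_mass_kg = 0.100   # each microrobot weighs 100 g
n_robots = 6
car_mass_kg = 1800.0    # the 1,800-kg (3,900-pound) car

# Assume an even split of the load across the team (an illustrative simplification).
load_per_robot_kg = car_mass_kg / n_robots   # 300 kg each
ratio = load_per_robot_kg / robot_mass_kg    # ~3,000x its own weight

print(load_per_robot_kg)
print(ratio)
```

That roughly 3,000-to-1 load-to-body-weight ratio is what makes the ant comparison apt: real ants are famous for hauling many times their own mass.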

The robo-ants could be applied to any number of today's problems. With continued symptoms of climate change, an aging population at higher risk of cancer, and an increasingly digital society that is more susceptible than ever to cyber attacks, we are in desperate need of fresh solutions. These machines, medical devices, and robots are a testament to the wisdom of the natural world.

Karp believes biomimicry is a winning strategy because “every living creature that exists today is here because it tackled a number of challenges,” he told the Guardian. “In essence, we are surrounded by solutions. Evolution is truly the best problem-solver.”

The post “Bioinspired” Researchers Look to Nature to Solve Humanity’s Biggest Problems appeared first on Futurism.

Modern Physics

Modern physics has accustomed us to strange and counterintuitive notions of reality, especially quantum physics, which is famous for leaving physical objects in odd states of superposition: Schrödinger's cat, for example, finds itself unable to decide whether it is dead or alive. Sometimes, however, quantum mechanics is more decisive, and even destructive.

Scientists Observe Gravitational Anomaly on Earth
Prof. Dr. Karl Landsteiner, a string theorist at the Instituto de Fisica Teorica UAM/CSIC and co-author of the paper made this graphic to explain the gravitational anomaly. Image Source: IBM Research

Symmetries are the holy grail for physicists. Symmetry means that one can transform an object in a certain way that leaves it invariant. For example, a round ball can be rotated by an arbitrary angle, but always looks the same. Physicists say it is symmetric under rotations. Once the symmetry of a physical system is identified it’s often possible to predict its dynamics.

Sometimes, however, the laws of quantum mechanics destroy a symmetry that would happily exist in a world without quantum effects, i.e., in classical systems. Even to physicists, this looks so strange that they named the phenomenon an "anomaly."

Quantum Anomalies

For most of their history, these quantum anomalies were confined to the world of elementary particle physics explored in huge accelerator laboratories such as the Large Hadron Collider at CERN in Switzerland. Now, however, a new type of material, the so-called Weyl semimetals, similar to 3-D graphene, allows us to put the symmetry-destroying quantum anomaly to work in everyday phenomena, such as the creation of electric current.

In these exotic materials, electrons effectively behave in the very same way as the elementary particles studied in high-energy accelerators. These particles have the strange property that they cannot be at rest; they have to move at a constant speed at all times. They also have another property called spin, which acts like a tiny magnet attached to the particle, and they come in two species: the spin can point either in the direction of motion or in the opposite direction.

An international team of scientists have verified a fundamental effect in a crystal that had been previously only thought to be observable in the deep universe. The experiments have verified a quantum anomaly that had been experimentally elusive before. The results are appearing in the journal Nature. Image Source: Robert Strasser, Kees Scherer; collage: Michael Büker

When one speaks of right- and left-handed particles, this property is called chirality. Normally the two different species of particles, identical except for their chirality (handedness), would come with separate symmetries attached to them, and their numbers would be separately conserved. However, a quantum anomaly can destroy their peaceful coexistence, changing a left-handed particle into a right-handed one or vice versa.

Appearing in a paper published today in Nature, an international team of physicists, materials scientists, and string theorists have observed, in such a material, an effect of a most exotic quantum anomaly that hitherto was thought to be triggered only by the curvature of space-time as described by Einstein's theory of relativity. But to the team's surprise, they discovered it also exists on Earth in the properties of solid-state materials, on which much of the computing industry is based, spanning from tiny transistors to cloud data centers.

“For the first time, we have experimentally observed this fundamental quantum anomaly on Earth which is extremely important towards our understanding of the universe,” said Dr. Johannes Gooth, an IBM Research scientist and lead author of the paper. “We can now build novel solid-state devices based on this anomaly that have never been considered before to potentially circumvent some of the problems inherent in classical electronic devices, such as transistors.”

New calculations, using in part the methods of string theory, showed that this gravitational anomaly is also responsible for producing a current if the material is heated up at the same time a magnetic field is applied.

“This is an incredibly exciting discovery. We can clearly conclude that the same breaking of symmetry can be observed in any physical system, whether it occurred at the beginning of the universe or is happening today, right here on Earth,” said Prof. Dr. Karl Landsteiner, a string theorist at the Instituto de Fisica Teorica UAM/CSIC and co-author of the paper.

IBM scientists predict this discovery will open up a rush of new developments in sensors, switches, thermoelectric coolers, and energy-harvesting devices with improved power consumption.

This article was provided by IBM Research. Materials may have been edited for clarity and brevity. 

The post Scientists Observe Gravitational Anomaly on Earth appeared first on Futurism.

Soft But Tough

The old cliché that looks can be deceiving definitely comes to mind when considering a new material developed by researchers from Hokkaido University. It looks soft and squishy — which it is — but it’s as tough as metal. In fact, its developers say it’s five times stronger than carbon steel, while at the same time having the flexibility of rubber.

The reinforced material is a product of combining hydrogels, typically found in products like contact lenses, with glass fibers. The result is a tough but bendable mesh that’s built to maximize resilience. “It’s the strongest soft material ever obtained by human beings,” lead researcher Jian Ping Gong told CNN.

The new fiber-reinforced composite demonstrates “extremely high toughness and tensile properties, far superior to those of the neat components, indicating a synergistic effect,” the team wrote in their research, which was published in the journal Advanced Functional Materials. Based on how much energy is needed to destroy it, this fiber-reinforced hydrogel is about 100 times tougher than regular hydrogels, and 25 times stronger than ordinary glass fiber fabric.

The inspiration for the material, Gong explained, comes from nature. “Sometimes when I see a flower coming out from a very hard floor — concrete crack — I was wondering why the small flower, such a small lovely flower, has such a big large energy to come out,” she told CNN. “This gel itself looks very soft and weak but it can create a huge amount of energies.”

Flexible Range of Applications

With today’s more advanced engineering and design technologies, scientists have been experimenting with a number of materials to come up with so-called metamaterials. These are synthetic composites of existing materials with properties not found in their original components. Research in this field has led to the development of several shape-shifting metamaterials: one can be used to charge devices wirelessly, and another moves using light.

The Japanese fiber-reinforced hydrogel, on the other hand, offers even more potential applications. “The material has multiple potential applications because of its reliability, durability and flexibility. For example, in addition to fashion and manufacturing uses, it could be used as artificial ligaments and tendons, which are subject to strong load-bearing tensions,” Gong said in a press release.

Indeed, such a soft but tough material seems perfect for biological substitutes; artificial joints made with it could be even stronger than the real thing. Gong’s team is already in talks with several companies about potential collaborations to develop artificial cartilage.

Apart from developing artificial organs, Gong also sees applications in sports clothing and protective gear, such as helmets and bulletproof vests. “I think it could have very good performance against a bullet,” she told CNN. Even better, because the composite hydrogel is 40 percent water, it’s also environmentally friendly.

Of course, the technology has not yet been perfected, but it has opened the door for similar work. For Gong and her team, the goal seems much simpler. “If our material can … make people live in better ways, I think we will be very, very happy.”

The post Researchers Create Soft, Flexible Material That’s 5 Times Stronger Than Steel appeared first on Futurism.

Illusions aren’t just fun and games, they also teach us about our brains.

The post Watch: Understand the Human Brain with These Moving Illusions appeared first on Futurism.

Consciousness is the process of understanding our position in space with respect to others.

The post Michio Kaku: Consciousness Can be Quantified appeared first on Futurism.

DNA Methylation And Genomic Imprinting

Genomic imprinting is the biological process that turns genes on and off to help control early mammalian development as the embryo and placenta grow. Errors in genomic imprinting can cause profound developmental defects and severe disorders that lead to lifelong health problems, yet neither the mechanisms behind genomic imprinting nor the errors that disrupt it are well understood.

Boston Children’s Hospital and Harvard Medical School scientists are working to remedy that shortcoming and have identified a mechanism that regulates gene imprinting in mice, including certain genes central to placental growth. Should the mechanism be confirmed in humans by further research, it could point the way to epigenetic therapies for a range of developmental disorders, such as autism.

Image Credit: freepik
“Since its discovery over two decades ago, DNA methylation has been the only known mechanism governing genomic imprinting,” the study’s first author and postdoctoral research fellow Azusa Inoue of the Harvard Medical School Department of Genetics said in a press release. “However, much to our surprise, the imprinted genes we looked at lacked DNA methylation, which told us there must be another mechanism at play.”

Easier Genetic Fixes

Researchers in this study noticed mysterious imprinted regions that were independent of DNA methylation as they were mapping parts of the genome in mouse embryos. In their hunt for patterns across the regions, they found a chemical modification to one of the histones — specifically, the consistent presence of H3K27me3. They then demonstrated that this histone modification was necessary for imprinting certain genes, in which DNA methylation played no role.

In total, the scientists identified 76 genes potentially regulated by H3K27me3 rather than DNA methylation. These imprinted genes are linked to severe eye anomalies, limb abnormalities, and issues with placental development, among other disorders. The study’s results could lead to therapies for genetic imprinting defects. “A gene that is turned off by epigenetic modifications can be turned on much more easily than a gene that is mutated or missing can be fixed,” senior investigator and Boston Children’s/Harvard Medical School professor Yi Zhang said in the press release.

The post New Epigenetic Research Could Lead to Future Therapies for Developmental Disorders appeared first on Futurism.

The Strongest Bear

Tardigrades are virtually microscopic critters that are extremophiles in the most, well, extreme sense. They often go by the name “water bears” (or sometimes “moss piglets”) because, while they are segmented and have eight legs, when magnified, they look like adorable, chubby little gummy bears — scientifically speaking.


But the extreme nature of these organisms isn’t in their cuteness, it’s in their virtual indestructibility. That’s right, what you may have heard before is true. Tardigrades can survive just about anything — even the vacuum of space (for a limited period of time, of course).

Water bears prefer to live in moist environments, like the sediment at the bottom of lakes. But these hardy animals can survive unbelievable temperatures, radiation, extreme pressure, and so much more. This is why, according to a recent study published in Nature by researchers from Oxford and Harvard, tardigrades might be the last species (of anything on Earth) still alive when the Sun dies.

The Last Species

David Sloan was part of the team that explored whether or not tardigrades could survive asteroid impact, gamma ray bursts, and supernovae (the explosion marking the death of a star). “To our surprise we found that although nearby supernovae or large asteroid impacts would be catastrophic for people, tardigrades could be unaffected,” Sloan said in an interview for a Harvard press release. “Therefore it seems that life, once it gets going, is hard to wipe out entirely.”

So, if these researchers are right, why does it matter? Well, this research helps to advance more than just the understanding of tardigrade biology. With the discovery of Trappist-1, recent obstacles found in the search for life on Mars, and the potential to find life on Titan, the study of extremophiles is essential to the quest to find life outside of Earth.

The better that we understand the limits and boundaries of life as we know it to exist on Earth, the better equipped we will be to search for life in the cosmos.

The post Water Bears Could One Day Outlive Everything Except the Sun Itself appeared first on Futurism.

Quantum Mechanical Particles

Mathematicians at the Universities of York, Munich and Cardiff have identified a unique property of quantum mechanical particles: they can move in the direction opposite to the one in which they are being pushed.

New Breakthrough Discovery—Every Quantum Particle Travels Backwards
Image Source: Varsha Y S / Wikimedia Commons

In everyday life, objects travel in the same direction as their momentum – a car in forward motion is going forwards, and certainly not backwards.

However, this is no longer true on microscopic scales: quantum particles can partially go into reverse and travel in the direction opposite to their momentum. This unique property is known as ‘backflow’.

New Discovery

This is the first time backflow has been found in a particle with external forces acting on it. Previously, scientists were only aware of this movement in “free” quantum particles, on which no external force is acting.

Using a combination of analytical and numerical methods, researchers also obtained precise estimates about the strength of this phenomenon. Such results demonstrate that backflow is always there but is a rather small effect, which may explain why it has not been measured yet.
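The effect can be illustrated numerically with a standard textbook-style construction (not the paper's own analysis): superpose two plane waves that both carry positive momentum and compute the probability current, which nonetheless dips negative at some positions.

```python
import numpy as np

# Quantum backflow in miniature: a superposition of two plane waves, BOTH
# with positive momentum, whose probability current is nevertheless negative
# at some positions.  Units with hbar = m = 1; the numbers are illustrative.
k1, k2, a = 1.0, 4.0, 0.5                      # two positive momenta, mixing amplitude
x = np.linspace(0.0, 2.0 * np.pi, 2001)

psi = np.exp(1j * k1 * x) + a * np.exp(1j * k2 * x)
dpsi = 1j * k1 * np.exp(1j * k1 * x) + 1j * a * k2 * np.exp(1j * k2 * x)

# Probability current j(x) = Im[ psi*(x) * d(psi)/dx ]
j = np.imag(np.conj(psi) * dpsi)

print(j.min())  # < 0: probability flows "backwards" despite purely positive momenta
```

Analytically the minimum here is k1 + a²k2 − a(k1 + k2) = −0.5, so the backward flow is genuine but modest, consistent with the researchers' remark that backflow is a rather small effect.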

This discovery paves the way for further research into quantum mechanics, and could be applied to future experiments in quantum technology fields such as computer encryption.

Unique to Quantum Particles

Dr Henning Bostelmann, Researcher in York’s Department of Mathematics, said: “This new theoretical analysis into quantum mechanical particles shows that this ‘backflow’ effect is ubiquitous in quantum physics.

“We have shown that backflow can always occur, even if a force is acting on the quantum particle while it travels. The backflow effect is the result of wave-particle duality and the probabilistic nature of quantum mechanics, and it is already well understood in an idealised case of force-free motion.”

Dr Gandalf Lechner, Researcher in Cardiff University’s School of Mathematics, said: “Forces can of course make a particle go backwards – that is, they can reflect it, and this naturally leads to increased backflow. But we could show that even in a completely reflection-free medium, backflow occurs. In the presence of reflection, on the other hand, we found that backflow remains a small effect, and estimated its magnitude.”

External Forces

Dr Daniela Cadamuro, Researcher at the Technical University of Munich, said: “The backflow effect in quantum mechanics has been known for quite a while, but it has always been discussed in regards to ‘free’ quantum particles, i.e., no external forces are acting on the particle.

“As ‘free’ quantum particles are an idealised, perhaps unrealistic situation, we have shown that backflow still occurs when external forces are present. This means that external forces don’t destroy the backflow effect, which is an exciting result.”

“These new findings allow us to find out the optimal configuration of a quantum particle that exhibits the maximal amount of backflow, which is important for future experimental verification.”

This article was provided by University of York. Materials may have been edited for clarity and brevity.

The post New Breakthrough Discovery—Every Quantum Particle Travels Backwards appeared first on Futurism.

New Limits

University of Houston evolutionary biologist Dan Graur has published new calculations showing that at most one-quarter of the human genome is functional. These findings contrast with the claim, promulgated by Encyclopedia of DNA Elements (ENCODE) scientists, that 80 percent or more of the sequence is functional. Graur’s work estimates that the functional portion of the human genome is more likely between 10 and 15 percent, with 25 percent as the upper limit. The remainder of the genome is essentially “junk DNA”: harmless, but also useless, encoding nothing.

Image Credit: Darryl Leja, NHGRI/Flickr

Graur determined how much of the genome is functional simply by using the replacement fertility rate and the deleterious mutation rate – the rate of occurrence of harmful mutations. The deleterious mutation rate and the size of the genome are already known, and data on human population levels is accessible through historical records. From there, Graur simply needed to create a model for calculating the “mutational load,” the drop in reproductive success caused by harmful mutations relative to the functional portion of the genome.

Junk DNA

Graur found that even if the rate of deleterious mutation was very low, birth rates would need to be unrealistically high to sustain the population if 80 percent of the genome were functional. As reported by The University of Houston: 

For 80 percent of the human genome to be functional, each couple in the world would have to beget on average 15 children and all but two would have to die or fail to reproduce…If we use the upper bound for the deleterious mutation rate (2 × 10⁻⁸ mutations per nucleotide per generation), then … the number of children that each couple would have to have to maintain a constant population size would exceed the number of stars in the visible universe by ten orders of magnitude.

Due to deleterious mutations, couples in every generation must have slightly more than two children, on average, to maintain a constant population size. In fact, for the past 200,000 years, maintenance fertility rates have stayed between 2.1 and 3.0 children per couple. As a result, the global population was nearly static until the 19th century, when infant mortality rates dropped and fertility rates surpassed replacement levels.
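The arithmetic behind this argument can be sketched in a few lines. This is a deliberately crude load model under stated assumptions (every mutation in functional DNA is harmful, and fitness effects multiply independently), not Graur's actual calculation, so the absolute numbers overshoot; the point is the exponential blow-up as the functional fraction grows.

```python
import math

# Toy mutational-load model: a fraction f of a genome of G sites is
# functional, and deleterious mutations arrive at rate mu per site per
# generation.  Each child then carries U = mu * G * f new harmful mutations;
# with independent effects, mean fitness falls as exp(-U), so keeping the
# population constant takes roughly 2 * exp(U) births per couple.
def required_births(mu, genome_size, functional_fraction):
    U = mu * genome_size * functional_fraction
    return 2.0 * math.exp(U)

G = 3.2e9       # approximate human genome size, in base pairs
mu_hi = 2e-8    # the upper-bound mutation rate quoted above

print(required_births(mu_hi, G, 0.10))  # modest functional fraction
print(required_births(mu_hi, G, 0.80))  # 80% functional: astronomically many births
```

Even this toy version makes the contrast vivid: moving the functional fraction from 10 to 80 percent multiplies the required births by many orders of magnitude.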

ENCODE announced in 2012 that 80 percent of the human genome was functional. However, Graur said his work proves these claims aren’t accurate, which should change the general approach to human genomics science. “We need to know the functional fraction of the human genome in order to focus biomedical research on the parts that can be used to prevent and cure disease,” Graur told The University of Houston. “There is no need to sequence everything under the sun. We need only to sequence the sections we know are functional.”

The post New Study Shows That Only About 25% Of Our DNA is Functional appeared first on Futurism.

The basics of string theory in under 5 minutes.

The post Watch: Michio Kaku Explains String Theory appeared first on Futurism.

Super Spinach: Plant Nanobionics

MIT engineers have embedded carbon nanotubes into spinach leaves, transforming the plants into sensors that can communicate with human users. This application is among the first demonstrations of plant nanobionics, an area of research that engineers electronic systems in living plants.

“The goal of plant nanobionics is to introduce nanoparticles into the plant to give it non-native functions,” Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, told MIT News.

In this case, the carbon nanotubes embedded in plants were designed to detect nitroaromatics, chemical compounds often used in explosives, present in the groundwater. Upon detection, the leaves of the plant emit a fluorescent signal that an infrared camera attached to a smartphone-sized computer reads. The computer then wirelessly sends the user an email to warn about the chemicals.

Applied Plant Nanobionics

Plants are the perfect choice for environmental monitoring because they naturally take in extensive information from their surroundings. “Plants are very good analytical chemists,” Strano commented to MIT News. “They have an extensive root network in the soil, are constantly sampling groundwater, and have a way to self-power the transport of that water up into the leaves.”

Strano and his team first demonstrated plant nanobionics two years ago, when they used nanoparticles to enhance plants’ photosynthetic ability and to enable them to sense the combustion pollutant nitric oxide. The lab has also used carbon nanotube implants in leaves to sense more than explosives: they’ve engineered plants that can detect hydrogen peroxide, dopamine, and the nerve gas sarin, which was recently used in deadly attacks in Syria.

The researchers also hope to apply this technology to fight drought and improve crop yields by optimizing soil quality. “These sensors give real-time information from the plant. It is almost like having the plant talk to us about the environment they are in,” graduate student and researcher Min Hao Wong told MIT News. “In the case of precision agriculture, having such information can directly affect yield and margins.”

The post Researchers Engineer Bionic Spinach Plants to Detect Bombs appeared first on Futurism.

Rediscovering the Macrauchenia

Charles Darwin discovered the bones of Macrauchenia, which went extinct toward the end of the Ice Age, in 1838 while digging in Patagonia. To him they seemed to belong to a kind of prehistoric llama. The remains were analyzed later on by the top anatomist in the UK, Richard Owen, who named the mystery mammal Macrauchenia patachonica. Although Darwin was right that there were some similarities with the llama, Macrauchenia didn’t appear to be a good fit with any existing group of mammals.

Counter-intuitively, the discovery of more fossils clouded the picture instead of clearing it up. Finding a nasal opening that signaled a trunk on the animal’s face resembling a tapir’s caused paleontologists to categorize Macrauchenia as a litoptern. This group of South American mammals arrived on Earth soon after the non-avian dinosaurs died out, and stayed until the end of the Pleistocene period. Perhaps the strangest thing about this group was that they superficially resembled animals found elsewhere in the world — such as elephants and horses — but evolved independently in South America.

However, of all of the mysterious factors surrounding Macrauchenia‘s identity, the strangest was that attempting to trace the animal’s origins through the bones — a process that was usually successful for paleontologists — wasn’t working. The relative isolation of South America, much like that of the Galapagos Islands, allowed evolution to create mammals that were confounding to scientists.

That is, until new technology allowed geneticist Michael Westbury and his team to shed more light on the situation. In the past, attempts to harvest genetic details from the bones of Macrauchenia failed. However, a new technique allowed this team to assemble a mitochondrial genome that is very nearly complete. Analysis of this new information allowed scientists to correctly place Macrauchenia within the larger family tree of Earth’s creatures at long last.

Technology Clarifies Past, Future

The mitochondrial genome of any creature reveals matrilineal inheritance — and, therefore, can reveal siblings and other relatives with common female ancestors. The newly sequenced matrilineal genome divulged a sister taxon to the Macrauchenia and its litoptern family: perissodactyls. Also called odd-toed ungulates, this group includes horses, rhinos, and tapirs.

Westbury and his team also found that the two branches of the larger tree diverged around 66 million years ago as the Age of Mammals began. Macrauchenia was one of the last inhabitants of a group that arose just after dinosaurs such as Triceratops and Tyrannosaurus disappeared forever. This transition away from the age of the dinosaur and into the age of mammals is a classic illustration of the way that catastrophic extinction is really only catastrophic from a certain point of view; for survivors, it is simply a new and different era. The solution of the Macrauchenia mystery after almost 200 years is a testament to technology marching on, allowing scientists to understand our planet’s history. The more we know about the Earth’s past, the more mysteries (past and present) will become solvable.

The post A Strange Creature Discovered by Darwin Has Baffled Researchers for Decades appeared first on Futurism.

Physics and science play a huge part in everything around us — and pop culture is a great way to frame new information.

The post Watch: Neil deGrasse Tyson Explains How to Teach Science appeared first on Futurism.

A New Perspective

In 2016, one decade after Pluto was downgraded to the status of dwarf planet, Caltech researchers proposed the existence of a new Planet Nine. Their conclusion was based largely on the distribution of objects in the space known as the Kuiper Belt, and not long after their data was released, it was challenged for possible observational bias.

Now, two astronomers from the Complutense University of Madrid have used a new technique that’s less susceptible to this observational bias to analyze the space where Planet Nine could be lurking. The new study is published in the Monthly Notices of the Royal Astronomical Society (MNRAS).

Meet the Kuiper Belt: One of Our Solar System’s Biggest Mysteries
Click to View Full Infographic

Each celestial object has two nodes, points where the object’s orbit crosses the solar system’s plane. These are the points where a possible interaction with another celestial object — like a Planet Nine — is likely to occur.

For their study, the researchers measured the distance between the Sun and the nodes of the 28 known extreme trans-Neptunian objects (ETNOs). They noticed that the nodes of the ETNOs were clustered at a particular range of distances from the Sun, and they also found a correlation, where there shouldn’t be one, between the nodes’ positions and inclinations.
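A node's heliocentric distance follows directly from standard Keplerian orbital elements. A minimal sketch, using illustrative, roughly Sedna-like elements rather than the study's actual 28-object dataset:

```python
import math

# Heliocentric distances of an orbit's two nodes (where it crosses the
# ecliptic plane).  For semi-major axis a (AU), eccentricity e, and argument
# of perihelion w, the true anomaly is -w at the ascending node and 180 - w
# at the descending node, giving:
#   r_asc  = a(1 - e^2) / (1 + e cos w)
#   r_desc = a(1 - e^2) / (1 - e cos w)
def node_distances(a, e, w_deg):
    w = math.radians(w_deg)
    p = a * (1.0 - e * e)              # semi-latus rectum
    return p / (1.0 + e * math.cos(w)), p / (1.0 - e * math.cos(w))

# Illustrative, roughly Sedna-like elements (not from the MNRAS dataset)
r_asc, r_desc = node_distances(a=506.0, e=0.85, w_deg=311.0)
print(round(r_asc, 1), round(r_desc, 1))   # nodal distances in AU
```

For a highly eccentric orbit like this, the two nodal distances differ sharply, which is why clustering of many objects' nodes in one distance range stands out statistically.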

“Assuming that the ETNOs are dynamically similar to the comets that interact with Jupiter, we interpret these results as signs of the presence of a planet that is actively interacting with them in a range of distances from 300 to 400 AU,” study author Carlos de la Fuente Marcos told the Information and Scientific News Service (SINC). “We believe that what we are seeing here cannot be attributed to the presence of observational bias.”

Free of Bias

Of course, there’s still no certainty that the cause of these perturbations is indeed a planet. To be really sure that Planet Nine — and maybe even Planet Ten — exists, we might have to wait for data from the James Webb Space Telescope, set to launch in October 2018.

If we do uncover additional planets in our solar system, they’ll do more than simply force us to update our science texts. Such a discovery could help us solve a decades-old question about our corner of the universe — why the Sun has an unusual tilt — and it would also force us to rethink our simulations of the future of our solar system and how it might meet its ultimate end.

For now, however, all we can do is wait to see if the elusive Planet Nine actually exists. Future observations similar to those conducted by the team at the Complutense University of Madrid will ensure that we avoid any observational bias as the hunt continues.

The post Scientists Just Presented New Evidence That Supports Planet Nine’s Existence appeared first on Futurism.

Scientists have discovered the smallest star known to science; in fact, it is so tiny that it barely qualifies as a star. Called EBLM J0555–57Ab, it is only slightly larger than Saturn. The star is part of a binary system, orbiting a much bigger star approximately 600 light-years from Earth.

“Our discovery reveals how small stars can be,” astronomer Alexander Boetticher from the University of Cambridge said in a press release. “Had this star formed with only a slightly lower mass, the fusion reaction of hydrogen in its core could not be sustained, and the star would instead have transformed into a brown dwarf.”

Image Credit: Alexander von Boetticher et al. / Astronomy & Astrophysics

The issues that make this star a “borderline” case are the same ones that cause brown dwarfs to be called “failed stars.” EBLM J0555–57Ab is just massive enough for hydrogen fusion to occur in its core, forming helium, as the researchers describe in their study published in Astronomy & Astrophysics. However, it remains very faint and difficult to see: it is approximately 2,000 to 3,000 times fainter than our Sun.

This, along with its proximity to parent star EBLM J0555–57A, made finding the tiny star a real challenge. Initially, EBLM J0555–57Ab was suspected of being an exoplanet as it orbited in front of its parent star. Only closer examination of the measurements revealed its true nature.

Dim, small stars like this one are prime candidates for hosting worlds that could support life because they provide the milder environments in which liquid water is more likely to survive on planetary surfaces. However, these minuscule stars remain mysterious to us, not least because we so rarely spot them. Having learned from this first discovery, scientists will hopefully have more clues for finding them going forward.

The post Researchers Have Just Discovered The Tiniest Star Known to Science appeared first on Futurism.

DNA Storage

Researchers working to test the potential and limits of DNA storage have used CRISPR to insert an animated image or GIF into the genomes of live E. coli bacteria. They converted each individual pixel in the GIF into nucleotides, the components of the DNA molecule.

The ability to convert bits of information into nucleotides would make it possible to save massive amounts of data in microscopic molecules and carry them with you — even embedded in your skin. Harvard University geneticist George Church, who is leading the team behind the GIF insertion experiment, believes all of this is possible within the realm of this exciting area of research.

The five frame GIF of a horse and rider was placed into the live bacteria frame by frame. The researchers then simply needed to sequence the bacteria’s DNA to retrieve the data and reconstruct the animation. Using this method, their reconstruction was 90 percent accurate. Given that it’s possible to extract and sequence DNA from hundreds of thousands of years into Earth’s past, DNA storage is a durable way to store tremendous amounts of information in a compact space.
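The pixel-to-nucleotide idea can be sketched with the simplest possible mapping, two bits per base; the study's actual CRISPR-based encoding is more elaborate, so this mapping is purely illustrative.

```python
# Two bits per nucleotide: one byte of image data becomes four bases.
BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def encode(data: bytes) -> str:
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def decode(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | BASES.index(base)
        out.append(b)
    return bytes(out)

pixels = bytes([0, 85, 170, 255])   # a tiny 4-pixel grayscale "frame"
dna = encode(pixels)
print(dna)                          # AAAACCCCGGGGTTTT
assert decode(dna) == pixels        # lossless round trip
```

At this density, a genome-sized molecule could in principle hold hundreds of megabytes, which is why even a 90-percent-accurate retrieval from living cells is a notable step.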

The other notable achievement from this success with the horse-and-rider GIF is that the researchers were able to store and retrieve the data in the DNA of a living organism, despite the constant dynamism of live cells, which change, divide, move, and die. Previous DNA storage experiments had been confined to synthetic DNA.


Future Directions

Next up, the team will be exploring “living sensors” for DNA storage that are sensitive to their environment. Study researcher Seth Shipman commented to MIT Technology Review, “What we really want to make are cells that encode biological or environmental information about what’s going on within them and around them.”

Although the dream of mass DNA storage within our bodies won’t be realized anytime soon, the technique is already proving valuable to researchers. For example, scientists may be able to use DNA storage devices with sensory capabilities to record the molecular events that occur as neurons form during brain development. According to what Shipman told MIT Technology Review, researchers will be able to place these “hard drives” in bacteria, allowing them to record processes or events of interest, and then sequence their DNA to reap the informational rewards.

The post Scientists Just Encoded a GIF into the DNA of Bacteria Using CRISPR appeared first on Futurism.

A Star is Born

Supernovas are a celestial double-edged sword: they mark the violent death of a star and pose an immediate threat to anything within cosmic striking distance, yet they also create a chemical cocktail capable of giving birth to new stars. That recycling of stellar material is, after all, the prevailing theory for how later generations of stars form.

What we know of supernovas, however, is limited by what our observing technology can tell us, unless, of course, the eruption is unfathomably massive. That is what happened some 30 years ago, when a supernova 168,000 light-years away exploded with such intensity, and such magnitude, that it was visible to the naked eye. Supernova 1987A was the first event in more than 400 years that allowed astronomers to see, with their own two eyes, what a star exploding with the brightness of 100 million suns looked like.

That was several decades ago, however, and technological advancements in the interim have given today’s researchers greater capabilities for analyzing Supernova 1987A. In Chile, the Atacama Large Millimetre/submillimetre Array (ALMA) has helped a group of researchers create a 3D model of how that cosmic dust shapes into new stars. They also discovered several molecules, like silicon monoxide (SiO) and carbon monoxide (CO), that had never before been detected within the heart of the supernova. In a separate study looking for additional chemicals, the researchers found formyl cation (HCO+) and sulfur monoxide (SO) as well, findings that may be even more thrilling than the 3D modeling of a star’s birth. If these chemical formations went undetected until now, and we finally have the means to reveal them, it raises the question: what other secrets may be lurking in a supernova’s core?

“This is the first time that we’ve found these species of molecules within supernovae,” said Mikako Matsuura, one of the researchers from Cardiff University, “which questions our long held assumptions that these explosions destroy all molecules and dust that are present within a star.”

The post The Heart of a Supernova Revealed Unexpected Clues About How Stars Form appeared first on Futurism.

Color is an illusion.

The post Watch: Do People Really See Colors Differently? appeared first on Futurism.

Efimov Molecule

Scientists have discovered that a simple model used to describe the formation of an exotic type of quantum particle, the Efimov molecule, is wrong – flipping ten years of experimental data on its head.

Surprise Find Flips a Decade of Research Into an Exotic Quantum Particle
Image Source: Daniel Kessel/ Flickr

The result of this study came as a complete surprise to the researchers and has implications for our fundamental understanding of how molecules appeared in the early universe.

“I have to say that I am surprised,” said Cheng Chin, research leader from the University of Chicago. “This was an experiment where I did not anticipate the result before we got the data.”

Efimov molecules are quantum objects formed when three particles bind together because the two-particle version of the system is forbidden. These types of molecules are held together by quantum forces and not chemical bonds like we see in everyday molecules such as H2O.

Because the quantum forces are so puny, Efimov molecules have a perilous existence. Normal room temperature provides enough energy to break the weak bonds and so Efimov molecules can only exist at temperatures very close to absolute zero.

Until now, the long-standing theory was that the size of an Efimov molecule is determined solely by the properties of the particles it is made of – physicists call this property ‘universality’.
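Universality in the Efimov scenario has a striking numerical signature: theory predicts that successive three-body bound states form a geometric series, each one about 22.7 times larger (and correspondingly more weakly bound) than the last, regardless of the particles involved. A quick back-of-the-envelope check of that famous scaling factor (a sketch using the standard Efimov parameter s₀ ≈ 1.00624 for three identical bosons, not a calculation from this study):

```python
import math

# Efimov's universal scaling: successive trimer states differ in size
# by a factor of exp(pi / s0), where s0 ~ 1.00624 for identical bosons.
s0 = 1.00624
scaling = math.exp(math.pi / s0)
print(f"size ratio between successive Efimov states ≈ {scaling:.1f}")  # ≈ 22.7
print(f"binding-energy ratio ≈ {scaling ** 2:.0f}")                    # ≈ 515
```

The Chicago result described here concerns a measured deviation from this universal picture, not the scaling factor itself.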

Quantum Debate

“This hypothesis has been checked and rechecked multiple times in the past 10 years, and almost all the experiments suggested that this is indeed the case,” Chin said, “But some theorists say the real world is more complicated than this simple formula. There should be some other factors that will break this universality.”

The discovery was only made possible by perfecting a super sensitive technique developed in Chin’s research lab over many years.

The sensitive experiments needed to be conducted at temperatures 50 billionths of a degree above absolute zero and in a strong magnetic field, used to control the interaction between the particles.

The Efimov molecules only form when the magnetic field strength is in a particular, narrow range. By analysing the precise conditions in which formation occurs, the scientists were able to work out the size of the molecules and test the universality theory.

The reality of controlling the particles so precisely was a massive challenge for the group. Even heat generated by an electric current, used to create the magnetic field, was enough to change the field, making it hard to reproduce experiments.

The technique needed to be adjusted so the instrument didn’t affect the measurement. Finding a new way to stabilise the magnetic field was the key to making reproducible measurements.

In a stroke of genius, one of the research team, Jacob Johansen, developed a new way to probe the field using microwave electronics and the atoms themselves, allowing the field to be controlled with amazing accuracy.

“[Johansen] can control the field with such high accuracy and perform very precise measurements on the size of these Efimov molecules and for the first time the data really confirm that there is a significant deviation of the universality,” Chin said.

The findings of this study have wide-reaching implications.

They will help scientists understand why ordinary materials have such a diverse range of properties — a diversity that could not have arisen if their quantum behaviour were truly universal. Understanding the three-body Efimov system puts scientists in the ‘sweet spot’ where universal material behaviour disappears.

“Any quantum system made with three or more particles is a very, very difficult problem,” Chin said. “Only recently do we really have the capability to test the theory and understand the nature of such molecules… This will be a building block for understanding more complex material.”

The findings were reported in Nature Physics.

The post Surprise Find Flips a Decade of Research Into an Exotic Quantum Particle appeared first on Futurism.

A Quantum Record

Not long ago, in the early 1990s, scientists only speculated that teleportation using quantum physics could be possible. Since then, the process has become a standard operation in quantum optics labs around the world. In fact, just last year, two separate teams conducted the world’s first quantum teleportation outside of a laboratory.

Now, researchers in China have taken the process a few steps further: they successfully teleported a photon from Earth to a satellite orbiting more than 500 km (311 mi) away.


The satellite, called Micius, is a highly sensitive photo receiver capable of detecting the quantum states of single photons fired from the ground. Micius was launched to allow scientists to test various technological building blocks for quantum feats including entanglement, cryptography, and teleportation.

This teleportation feat was announced as one of the first results of these experiments. Not only did the team teleport the first object ever from the ground to orbit, they also created the first satellite-to-ground quantum network, smashing the record for the longest distance for which entanglement has been measured.

“Long-distance teleportation has been recognized as a fundamental element in protocols such as large-scale quantum networks and distributed quantum computation,” the Chinese team told MIT Technology Review. “Previous teleportation experiments between distant locations were limited to a distance on the order of 100 kilometers, due to photon loss in optical fibers or terrestrial free-space channels.”

Defining Teleportation

What comes to mind when you think of teleportation?

Your brain might conjure images of Scotty beaming up the Enterprise crew in Star Trek, but quantum teleportation is actually quite a different process from what sci-fi films present.

Quantum teleportation relies on quantum entanglement — a situation in which a pair of quantum objects (such as photons) forms at the same instant and point in space, so that the two share the same existence. This shared existence continues even when the photons are separated – meaning a measurement on one immediately influences the state of the other, regardless of the distance between them.

This link can be used to transmit quantum information by “downloading” the information associated with one photon over an entangled link to another photon. This second photon takes on the identity of the first.

Voilà. Teleportation.
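The protocol described above can be simulated exactly on a classical computer for a single qubit. The sketch below is an illustration of the standard teleportation circuit — not the Chinese team’s actual optical setup — building the entangled pair, performing the sender’s measurement, and applying the classical corrections that let the receiver’s qubit “take on the identity” of the original:

```python
import numpy as np

# Basis states and gates
ZERO = np.array([1, 0], dtype=complex)
ONE  = np.array([0, 1], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def teleport(psi):
    """Teleport the 1-qubit state `psi` (qubit 0) onto qubit 2, using an
    entangled pair shared between qubits 1 (sender) and 2 (receiver)."""
    # Entangled Bell pair |Phi+> on qubits 1 and 2
    bell = (np.kron(ZERO, ZERO) + np.kron(ONE, ONE)) / np.sqrt(2)
    state = np.kron(psi, bell)

    # Sender's operations: CNOT(0 -> 1), then Hadamard on qubit 0
    P0, P1 = np.outer(ZERO, ZERO), np.outer(ONE, ONE)
    cnot01 = np.kron(P0, np.kron(I2, I2)) + np.kron(P1, np.kron(X, I2))
    state = np.kron(H, np.kron(I2, I2)) @ (cnot01 @ state)

    # Measure qubits 0 and 1 (rows index the four outcomes, columns qubit 2)
    branches = state.reshape(4, 2)
    probs = (np.abs(branches) ** 2).sum(axis=1)
    outcome = np.random.choice(4, p=probs)
    m0, m1 = outcome >> 1, outcome & 1
    received = branches[outcome] / np.linalg.norm(branches[outcome])

    # Receiver applies corrections based on the two classical measurement bits
    if m1:
        received = X @ received
    if m0:
        received = Z @ received
    return received

psi = np.array([0.6, 0.8j])         # an arbitrary 1-qubit state
out = teleport(psi)
fidelity = abs(np.vdot(psi, out)) ** 2
print(f"fidelity = {fidelity:.6f}")  # 1.000000 for every measurement outcome
```

Note that the two classical bits must still travel from sender to receiver by ordinary means before the corrections can be applied — which is why teleportation cannot transmit information faster than light.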

In this particular instance, the Chinese team created entangled pairs of photons on the ground at a rate of about 4,000 per second. They then beamed one of these photons to the satellite, and kept the other photon on the ground. Finally, they measured the photons on the ground and in orbit to confirm that entanglement was taking place.

It’s worth noting that there are some limits to this technology. Transporting anything large, for instance, is a ways off. In theory, there’s also no maximum transportation distance, but entanglement is fragile, and the links can easily be broken.

Despite these limits, this research paves the way for even more ambitious studies of quantum teleportation. “This work establishes the first ground-to-satellite up-link for faithful and ultra-long-distance quantum teleportation, an essential step toward global-scale quantum internet,” says the team.

The post Scientists Just Teleported a Photon from Earth to Orbit for the First Time appeared first on Futurism.

The Memory Loss Chemical

Researchers at St. Jude Children’s Research Hospital have discovered that limiting the quantity or delivery mechanism of the neuromodulator adenosine in the auditory thalamus — the part of the brain responsible for processing sound — can radically increase auditory memory after childhood, the period when a person’s ability to learn languages and music is strongest. Their research has been published in Science.

As part of their study, the scientists limited the amount of adenosine in the brains of adult mice or blocked the receptor the neuromodulator uses to function. They then exposed the mice to specific tones and measured how the animals reacted to those tones weeks or months later. They found that the treated mice responded to the tones more strongly than those that hadn’t had adenosine blocked, and they even had the ability to differentiate between similar tones, an ability rarely observed in adult mice.


“By disrupting adenosine signaling in the auditory thalamus, we have extended the window for auditory learning for the longest period yet reported,” Stanislav Zakharenko, a corresponding author of the study, told Science Daily. “These results offer a promising strategy to extend the same window in humans to acquire language or musical ability by restoring plasticity in critical regions of the brain.”

Remember, Remember

The research is promising for any of us who want to regain the ability to pick up and remember languages or music as easily as children can. It is especially pertinent, though, for people suffering from memory-related neurodegenerative diseases, such as Alzheimer’s, as restoring a person’s ability to remember new information could dramatically increase their quality of life.

This research is not the first to explore how we could combat the problem of our memories fading as we get older. Two drugs, trazodone and dibenzoylmethane, have proven to be capable of preventing memory deficits in brain cells. At the more experimental end of the spectrum, researchers have discovered that transfusions of babies’ blood could also be useful in improving memory in adults.

Science has never been working harder to improve our quality of life as we get older, and as Eric Verdin, CEO of the Buck Institute for Research on Aging, told Nautilus, researchers no longer consider aging and deterioration fundamentally linked. Thanks to this shift in perception, we may soon be able to avoid the downsides of growing old altogether.

The post A New Study Has Uncovered a Way to Potentially Improve Auditory Memory in Adults appeared first on Futurism.

A Bad Model

Last year, 820,812 animals were used for scientific experimentation in the United States alone, and that number climbs to millions when you look at research worldwide. About 60 percent of animal studies are related to human and animal diseases, such as testing for drugs and medical devices. These mice, dogs, and primates have been injected with lethal cancers, exposed to experimental drugs, and subjected to genetic experimentation.

Animal testing is generally viewed as necessary. Proponents claim it’s the only way to predict whether potential pharmaceuticals are nontoxic and will have some positive benefit to humans without putting actual human subjects at risk. However, the results of animal testing simply do not support that claim.


Of the drugs that enter human trials after passing all of their animal studies, about 90 percent are demonstrated to be unsafe or ineffective in humans. In fact, in the realm of cancer, less than 8 percent of findings from animal studies prove applicable to humans. This supports the idea that animals are too different from humans to be good predictors for how our bodies deal with drugs and disease.

These are some of the reasons that Charu Chandrasekera, a researcher at the University of Windsor, has founded the Canadian Centre for Alternatives to Animal Methods (CCAAM). Chandrasekera said in an interview with the Windsor Star that she saw firsthand the ineffectiveness of animal testing during her 17 years engaged in biomedical research.

“Yes, we’ve used animals for decades — it’s not working,” Chandrasekera said in the interview. “New technologies will soon replace the utility of animal models.”

Replacing Pets With Programs

Chandrasekera will be directing researchers at CCAAM to advance such technologies starting in October. She argues that recent advances in virtual modeling could enable computer programs to predict the potential effects of a drug on humans better than animal testing can. Another animal alternative Chandrasekera is particularly interested in is organoid modelling — a technique for growing human organs from adult human stem cells.

CCAAM researchers are not the only ones working on better tools for studying human health. A company called Emulate is partnering with the Food and Drug Administration (FDA) to create chips that function like human organs for use as research tools. Emulate has been working on a liver chip to study how drugs break down in the human body, but the company has also been awarded a grant to make a brain chip for studies on the International Space Station.

Researchers from the University of Leeds and Avacta Life Sciences are developing a potential alternative to studies in which animals are used to discover new antibodies, proteins that have become increasingly important tools in biological research and medicine. These researchers have developed a new type of protein, called an “Affimer,” that can do the same job. Affimer proteins have the benefit of being grown in cells rather than animals, and they are also faster and cheaper to make.

These new methods for studying human health and medicine not only have the potential to provide better data than that attained through animal testing; they are also a more humane way to practice science. But simply developing these new tools will not ensure that researchers actually use them, Chandrasekera argues. That is why CCAAM aims to eventually offer degree programs in animal replacement science.

“Our current paradigm is being ingrained into students,” Chandrasekera said in the interview. “We want to get them to think outside of that cage. We want to teach them to think beyond animal models.”

The post Animal Testing Isn’t Working, but Better Alternatives Are on the Way appeared first on Futurism.

Reviving Extinct Viruses

Canadian researchers revived an extinct horsepox virus last year on a shoestring budget by using mail-order DNA. That may not seem like a big deal, until you consider that this relatively inexpensive technique could be used by anyone — perhaps even to bring back something like smallpox, one of the most feared diseases in humanity’s history. The team’s research — which remains unpublished — was intended to create better vaccines and even cancer treatments, though David Evans of the University of Alberta, the research lead, admitted that he also undertook the project to prove that it could be done, and that it wouldn’t necessarily require a lot of time, money, or even biomedical skill or knowledge. As he told Science, “The world just needs to accept the fact that you can do this and now we have to figure out what is the best strategy for dealing with that.” The work has reignited a powerful debate in the biomedical science community.


The researchers bought overlapping DNA fragments from a commercial synthetic DNA company. Each fragment was about 30,000 base pairs long, and because they overlapped, the team was able to “stitch” them together to complete the genome of the 212,000-base-pair horsepox virus. When they introduced the genome into cells that were already infected with a different kind of poxvirus, the cells began to produce virus particles of the infectious horsepox variety. While horsepox doesn’t infect humans, other pox viruses do: and if the technique works to recreate one kind of pox virus, it could likely work for others as well. This technique was first demonstrated by another group of researchers in a Proceedings of the National Academy of Sciences paper in 2002.
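The “stitching” step is conceptually simple: the end of each fragment overlaps the start of the next, so the full sequence can be rebuilt by merging on those shared regions. A toy sketch of the idea follows, with short made-up sequences and a hypothetical `min_overlap` threshold — real assembly from 30,000-base-pair synthetic fragments must also handle synthesis errors, orientation, and verification:

```python
def stitch(fragments, min_overlap=3):
    """Merge fragments assumed to be in order, each overlapping the next."""
    genome = fragments[0]
    for frag in fragments[1:]:
        # Find the longest suffix of the growing sequence that is a prefix of `frag`
        for k in range(min(len(genome), len(frag)), min_overlap - 1, -1):
            if genome.endswith(frag[:k]):
                genome += frag[k:]  # append only the non-overlapping tail
                break
        else:
            raise ValueError("fragments do not overlap sufficiently")
    return genome

seq = "ATCGGATTACAGGCAT"
fragments = [seq[0:8], seq[5:13], seq[10:16]]  # three overlapping pieces
print(stitch(fragments) == seq)                 # True
```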

Possible Implications

The idea that it would someday be possible to synthesize poxviruses is nothing new. In 2002, virologists assembled the poliovirus from scratch. However, this new work certainly does raise disturbing questions about how modern biotechnology could help terrorists weaponize viruses, which has in turn prompted a discussion about the regulation of science: “There is always an experiment or event that triggers closer scrutiny, and this sounds like it should be one of those events where the authorities start thinking about what should be regulated,” Northern Arizona University anthrax expert Paul Keim told Science.

This work also changes the longstanding debate about what to do with the world’s few remaining smallpox samples. While scientists have argued about whether to destroy them or study them, if the viruses — or viruses very much like them — could be manufactured, it wouldn’t matter what happened to those samples. “You think it’s all tucked away nicely in freezers, but it’s not,” National Institutes of Allergy and Infectious Diseases virologist Peter Jahrling told Science. “The genie is out of the lamp.”

This brings us back to David Evans of the University of Alberta, who led the horsepox research. Pox viruses are common and infect many animals (including humans), but since smallpox was eradicated, what’s left of the dreaded virus is held at the CDC and cannot be freely studied. Evans had initially requested the use of existing horsepox samples from the CDC, but his request was declined because his purposes were commercial. So he synthesized a new virus instead, hoping to gain some insight into creating better vaccines. “This is the most successful vaccine in human history,” Evans said of the smallpox vaccine in Science, “the foundation of modern immunology and microbiology, and yet we don’t know where it came from. There is a huge, interesting academic question here.”

The post Researchers Brought Back a Pox Virus Using Mail-Order DNA and it Only Cost $100,000 appeared first on Futurism.

How to Create the World’s Sharpest Laser

Researchers from the Physikalisch-Technische Bundesanstalt (PTB), which is the national metrology institute of Germany, have broken the record for the sharpness of a laser by building one that has a linewidth (the width of a laser’s optical spectrum) of just 10 millihertz (0.01 hertz).
The laser remains stable for 11 seconds before the light waves desynchronize, but because they oscillate approximately 200 trillion times per second, in that time the “wave train emitted has already attained a length of approx. 3.3 million kilometers. This length corresponds to nearly ten times the distance between the Earth and the moon,” the team said in a press release.
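The press release’s figures are easy to verify: a wave train that stays coherent for 11 seconds, travelling at the speed of light, stretches across roughly 3.3 million kilometers. A quick check of the arithmetic (simple estimates from the numbers quoted above, not PTB’s analysis):

```python
c = 299_792_458          # speed of light, m/s
coherence_time = 11      # seconds the waves stay synchronized (from the press release)
frequency = 2e14         # ~200 trillion oscillations per second

length_km = c * coherence_time / 1_000
print(f"wave train length ≈ {length_km / 1e6:.1f} million km")          # ≈ 3.3
print(f"≈ {length_km / 384_400:.1f}x the Earth-Moon distance")          # ≈ 8.6
print(f"oscillations in one train ≈ {frequency * coherence_time:.1e}")  # 2.2e+15
```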
In order to do this, the scientists used a Fabry-Pérot silicon resonator, which consists of a fixed double cone with two mirrors facing each other inside. To optimize the laser, the researchers rid the system of interference by eliminating movement in the mirrors caused by sound, seismic waves, and temperature fluctuations. To prevent thermal motion of the atoms themselves, they also cooled the silicon resonator until Brownian motion was minimized — which in this case meant -150° Celsius (-238° Fahrenheit).

A Solution to a Problem

A ‘sharper’ laser increases the precision, and thus the accuracy, of whatever system it is built into. Currently, the scientists are using the technology to measure ultracold atoms and to improve timekeeping. According to Thomas Legero, a member of the team, “the smaller the line-width of the laser, the more accurate the measurement of the atom’s frequency in an optical clock. This new laser will enable us to decisively improve the quality of our clocks.”

However, the possible uses of this technology could extend much further. These lasers could be used to test Einstein’s theory of relativity more accurately, as the lasers previously fired into space to gather such evidence were far cruder.

This is one of many advancements that have been made recently in the field of laser technology. NASA has developed their Lunar Orbiter Laser Altimeter (LOLA) in order to make accurate topographical maps of the moon, while the world’s largest laser — the European XFEL — is primed to reveal the secrets of the atomic world.

The post German Researchers Just Created the World’s Sharpest Laser appeared first on Futurism.
